Friday, July 20, 2018

“Huge milestone in advancing artificial intelligence” (Gates) as AI becomes a team – big implications for learning?

An event took place this month that received little publicity but was described by Bill Gates as a “huge milestone in advancing artificial intelligence”. It could have profound implications for advances in AI. An OpenAI team of five neural networks, supported by a range of other techniques, beat human teams at the game Dota 2.
To understand the significance of this, we must understand the complexity of the task. A computer environment like Dota 2 is astoundingly complex. It is played by teams of five, all of whom determine long-term success by exploring the complex environment, making decisions in real time, identifying threats and employing clever team strategies. It is a chaotic, complicated and fast environment, made messier still by the fact that you are playing against some very smart teams of humans.
To win, OpenAI created five separate neural networks, each representing a player, plus an executive layer that weights each player’s possible actions to determine priorities. Using reinforcement learning techniques and playing against itself millions of times (the equivalent of 180 years of playtime per day!), it learned fast. It is not yet at the level of the top professional Dota 2 teams, but it will get there.
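To make the structure concrete, here is a minimal sketch of the idea only – five per-player policies proposing scored actions, with an executive layer re-weighting them to set team priorities. This is NOT OpenAI Five’s actual architecture; the actions, weights and stand-in policies are illustrative.

```python
import random

ACTIONS = ["attack", "defend", "farm", "retreat", "support"]

def player_policy(player_id, state):
    """Stand-in for a trained network: score each action for this player."""
    rng = random.Random(player_id * 1000 + state)
    return {a: rng.random() for a in ACTIONS}

def executive_layer(proposals, team_weights):
    """Re-weight each player's action scores by team-level priorities,
    then pick each player's highest-weighted action."""
    decisions = {}
    for player_id, scores in proposals.items():
        weighted = {a: s * team_weights.get(a, 1.0) for a, s in scores.items()}
        decisions[player_id] = max(weighted, key=weighted.get)
    return decisions

state = 42  # placeholder for a real game state
proposals = {pid: player_policy(pid, state) for pid in range(5)}
team_weights = {"attack": 1.5, "retreat": 0.5}  # long-term strategic bias
print(executive_layer(proposals, team_weights))
```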
What is frightening is the speed of the training and the learned competence in a team context. The system had to optimise team decisions and think about long-term goals, not short-term wins. This is way beyond single-player games like chess and Go.
The implications are huge as AI moves from being good at single, narrow tasks to more general tasks, using a modular approach.
Team AI in the military
Let’s take the most obvious parallel. In a battle of robot soldiers versus humans, the robots and drones will simulate the mission, going through millions of possible scenarios, then, having learned all they need to know, attack in real time, problem-solving as a team as they go. Autonomous warfare will be won by the smartest software, organic or not. This is deeply worrying – so worrying that senior AI luminaries, such as Elon Musk and others at Google, signed a pledge this week saying they will not work on autonomous weapons. So let’s turn to the civilian world.
Team AI in robotics
I’ve written before about shared learning in AI across teams of robots, which gives significant advantages in terms of speed and shared experience. Swarm and cloud robotics are very real. But let’s turn to something closer to home – business simulations.
Team AI in business
Imagine a system that simulates the senior management of a company – its five main Directors. It learns by gathering data about the environment – competitors, financial variables, consumer trends, cost modeling, instantaneous cost-benefit analysis, cashflow projections, resource allocation, recruitment policies, training. It may not be perfect, but it is likely to learn faster than any group of five humans and make decisions faster, beating possible competitors. Would a team AI be useful as an aid to decision making? Do you really need those expensive managers to learn this stuff and make these decisions?
Team AI in learning for individuals
Let’s bring it down to specific strategies for individuals. Suppose I want to maximize my ‘learning’ time. The model in the past has always been one teacher to one or many learners. Imagine pooling the expertise of many teachers to decide what you see and do next in your learning experience or journey. Imagine those teachers having different dimensions – cross-curricular and so on. This breaks the traditional teaching mould. The very idea of a single teacher may just be an artifact of the old model of timetabling and institutions. Multi-teaching, by best-of-breed systems, may be a better model.
Team AI and blended learning
One could see AI determining the optimal blend for a course, or for you as an individual. If we take blended learning (not blended teaching) as our starting point, then, based on the learning task, the learner and the resources available from the AI teacher team, we could see guidance emerge on optimizing the learning journey for that individual – a pedagogic expert that determines what particular type of teaching/learning experience you need at that particular moment.
Conclusion
Rather than seeing AI as a single entity and falsely imagining that it should simulate the behavior of a single ‘teacher’, we should see it as a set of different things that work as a team to help you learn. At the moment we use all sorts of AI to help you learn – search (Google), recommendation engines, engagement bots (Differ), support bots (Jill Watson), content creation (WildFire), adaptive learning (CogBooks), spaced practice (WildFire), even wellbeing. Imagine bringing all of these together for you as an individual learner – coordinated, optimized; not a teacher but a team of teachers.


Wednesday, July 04, 2018

Data is not the new oil, more likely the new snake oil…

Oil is messy and dirty (crude) when it comes out of the ground, but it is largely useful when fractionated. Up to 50% is used for petrol, 20% for distillate fuel (heating oil and diesel fuel) and 8% for jet fuel. The rest has many other useful purposes; the unwanted elements and compounds are a tiny percentage. Data is not the new oil. It is stored in weird ways and places; it is often old, useless, messy, embarrassing, secret or personal; it may be observed, derived or analytic; it may need to be anonymised, have training sets identified and be made subject to GDPR. To quote that old malapropism, ‘data is a minefield of information’!
1. Data dumps
Data, on the other hand, is really messy, with much of it:

  • In odd data structures
  • In odd formats/encrypted
  • In different databases

Just getting hold of the stuff is difficult.

2. Defunct data
Then there’s the problem of relevance and utility, as much of it is:

  • Old
  • Useless
  • Messy

In fact, much of it could be deleted. We have so much of the stuff because we haven’t known what to do with it, don’t clean it and don’t know how to manage it. 
3. Difficult data
There are also problems around:

  • Data that is embarrassing
  • Data that is secret
There may be very good reasons for not opening up historic data, such as emails and internal communications. It may open up sizeable legal and HR risks for organisations. Think of the Wikileaks email dumps. It’s not like a barrel of oil, more like a can of worms. And like oil spills, we also have data leaks.
4. Different data
Once cleaned, one can see that there are many different types of data. Unlike oil, it has not so much fractions as different categories. In learning we have ‘personal’ data, provided by the person or from actions performed by that person with their full knowledge. This may be gender, age, educational background, needs, stated goals and so on. Then there is ‘observed’ data from the actions of the user – their routes, clicks, pauses and choices. You also have ‘derived’ data, inferred from existing data to create new data, and higher-level ‘analytic’ data from statistical and probability techniques related to that individual. Data may be created on the fly or stored.
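To make the four categories concrete, here is a hedged sketch of how a learner record might keep them separate. The field names and values are purely illustrative; the category labels follow this post, not any formal standard.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerRecord:
    # Personal: supplied knowingly by the learner
    personal: dict = field(default_factory=lambda: {
        "age": 34, "stated_goal": "pass final exam"})
    # Observed: captured from behaviour (routes, clicks, pauses)
    observed: dict = field(default_factory=lambda: {
        "avg_dwell_secs": 41.2, "sessions": 17})
    # Derived: inferred from existing data to create new data
    derived: dict = field(default_factory=dict)
    # Analytic: produced by statistical/probabilistic models
    analytic: dict = field(default_factory=dict)

record = LearnerRecord()
# Derived data is created from other data, e.g. an engagement flag
record.derived["low_engagement"] = record.observed["sessions"] < 20
print(record.derived)
```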
5. Anonymised data
Just when you thought it was getting clearer, there is also ‘anonymised’ data, which is a bit like oil of unknown origin. It is cleaned of any attributes that may relate it to specific individuals. This is rather difficult to achieve, as there are often techniques to reverse-engineer attribution back to individuals.
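A small sketch of why this is hard: hashing an identifier merely pseudonymises a record, and the quasi-identifiers that remain (age, postcode, course) can often be combined to re-identify the person. All names and fields below are invented for illustration.

```python
import hashlib

def pseudonymise(record):
    """Naive 'anonymisation': hash the ID and drop direct identifiers.
    The remaining fields may still allow re-identification."""
    out = dict(record)
    out["id"] = hashlib.sha256(record["id"].encode()).hexdigest()[:12]
    del out["name"]
    return out

print(pseudonymise({"id": "stu-0042", "name": "Ada", "age": 34,
                    "postcode": "BN1", "course": "Philosophy"}))
```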
6. Supervised data
In AI there is also ‘training’ data, used for training AI systems, and ‘production’ data, which the system actually uses when it is launched in the real world. This is not trivial. Given the problems stated above, it is not easy to get a suitable data set that is clean and reliable for training. Then, when you launch the service or product, the new data may be subject to all sorts of unforeseen problems not uncovered in training.
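A minimal sketch of the distinction, with invented data: the model only ever learns from the training set, while production data arriving after launch may be distributed quite differently (here, shifted by a constant the training data never saw).

```python
import random

# Synthetic 'historical' data the system is trained on
data = [(x, x * 2 + random.gauss(0, 1)) for x in range(1000)]
random.shuffle(data)
cut = int(len(data) * 0.8)
training, held_out = data[:cut], data[cut:]   # train and validate on these

# Production data arrives later and may have drifted
production = [(x, x * 2 + 5 + random.gauss(0, 1)) for x in range(1000, 1100)]

train_mean = sum(y for _, y in training) / len(training)
prod_mean = sum(y for _, y in production) / len(production)
print(f"train mean {train_mean:.1f} vs production mean {prod_mean:.1f}")
```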
7. Paucity of data
But the problems don’t stop there. In the learning world, the data problem is even worse, as there is another issue – the paucity of data. Institutions are not gushing wells of data. Universities, for example, don’t even know how many students turn up for lectures. Data on students is paltry, and this makes most data analytics projects next to useless. The data could often best be handled in a spreadsheet. It is certainly not as large, clean and relevant as it needs to be to produce genuine insights.
Prep
Before entering these data analytics projects, ask yourself some serious questions about data. Data size by itself is overrated, but size still matters: whether n is in the tens, hundreds, thousands or millions, the Law of Small Numbers still applies. Don’t jump until you are clear about how much relevant and useful data you have, where it is, how clean it is and in what databases it sits.
New types of data may be more fruitful than legacy data. In learning this could be dwell time on questions, open input data, wrong answers to questions and so on.
More often than not, what you have as data is really a set of proxies for phenomena. Be careful here, as your oil may actually be snake oil.
Conclusion
GDPR has made the management and use of data more difficult. All of this adds up to what I’d call the ‘data delusion’ – the idea that data is one thing, easy to manage and generally useful in data analytics projects in institutions and organisations. In general, it is not. That’s not to say you should ignore its uses – just don’t get sucked into data analytics projects in learning that promise lots but deliver little. Far better to focus on the use of data in adaptive learning or small-scale teaching and learning projects, where relatively small amounts of data can be put to good use.



Saturday, June 30, 2018

Clever AI/AR/Teacher hybrid systems for classroom use

Most AI-driven systems deliver content via screens to the student and then dashboards to the teacher. But there is a third way – hybrid AI/AR/teacher systems that give the teacher enhanced powers to see what they can’t see with their own eyes. No teacher has eyes in the back of their head but, like a self-driving car, you can have eyes everywhere that recognise individual students, read their expressions, identify their behaviours and provide personalised learning experiences and feedback. You become a more powerful teacher by seeing more, getting and giving more feedback, and having less admin and paperwork to do. The promise is that such hybrid systems allow you to do what you do best – teach, especially addressing the needs of struggling students.
AI/AR in classroom
I’ve written about the use of 3D video in teacher training before, but this AR (Augmented Reality) idea struck me as clever. Many uses of AI lie outside the classroom; this one augments the strengths of the teacher by integrating dashboards, personal symbols and other AR techniques into the classroom and the practice of teaching.
Ken Holstein, at Carnegie Mellon, is an imaginative and creative researcher who has been looking at hybrid teacher/AR/AI systems that present adaptive software but also highlight each individual student’s progress – whether they are attentive, struggling, in need of help and so on. Symbols appear above the head of each student. The teacher wears glasses that display this information, linked to a back-end system that gathers data about each student’s performance.
It does, of course, seem all very Big Brother – to some even monstrous, especially those comfortable with traditional classroom teaching. However, as results seem to have plateaued in K12 education, we may need to make teachers more effective by enabling them to focus on the students who are having difficulties. These ideas make personalised learning possible not by replacing the teacher (the idea behind most AI/adaptive systems) but by giving the teacher individual feedback over the head of each student, so that personalised learning can be realised.
Face recognition in the classroom
Let’s up the stakes with this face recognition system used in China. It recognises student faces instantly as they arrive for school, so there is no need for registration. In the classroom it scans the students every 30 seconds, recognising seven different expressions, such as neutral, happy, sad, disappointed, angry and surprised, as well as six types of behaviour, such as reading, writing and being distracted. So it helps the teacher manage registration, performance and behaviour.
They also claim that it helps teachers improve by adapting to the feedback and statistical analysis they receive from the system. When I’ve shown people this system, some react with horror, but if we are to reduce teacher workload, should we consider such systems to help with problems around non-teaching paperwork, student feedback and classroom behaviour?
Conclusion
What seems outlandish today often turns out to be normal in the future – the internet, smartphones, VR. Combinations of technology are often more effective than single approaches – witness the smartphone or the self-driving car. These adaptive AR/AI hybrid systems may turn out to be very effective by being sensitive to both teacher and student needs. The aim is not to replace the teacher but to enhance the teacher’s skills, giving them real-time data, personal feedback on every student in their class and data with which to reflect on their own skills. Let’s not throw the advantages out before we’ve had time to consider the possibilities.


Monday, June 25, 2018

AI and assessment

I used my fingerprint to access this Mac to write this piece, my iPhone uses face recognition, and when I travel, face recognition is used to identify me as I leave and enter the country. I am constantly being ‘assessed’ using AI. As the pendulum swings towards online learning, it makes sense to use it in online examinations. Yet, to date, almost the only example of AI being used in assessment in learning is in checking for cheating – plagiarism detection.
AI is not perfect, but neither are humans. Human performance falls when marking large numbers of essays: markers make mistakes, have biases based on names and gender, suffer cognitive biases, and hold biases about what is acceptable in terms of critique and creativity. This is not about replacing teacher assessment; it’s about automating some of that work to allow teachers to teach and provide more targeted, constructive feedback and support. It’s about optimising teachers’ time. It is also about opening up the huge potential of online assessment, on the not inconsiderable grounds of convenience, quality and cost.
1. Identification
Live or recorded monitoring (proctoring) is used to watch the candidate. You can also monitor feeds, use a locked-down browser, freeze the screen, block cut and paste, and limit external access. Video, including 360-degree cameras, and audio are also used to detect possible cheating. Using webcams you can scan for suspicious objects and background noise, and also use face recognition.
Coursera holds a patent on keystroke recognition. They get you to type in a sentence, then measure two things: dwell time on each key and the time between keystrokes, giving you as a candidate a unique signature, so that exam input can be checked as being by you.
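As an illustration of those two measurements – a sketch of the general technique, not Coursera’s patented method – here is how dwell and flight times could be extracted and compared. The events and threshold are invented for illustration.

```python
def keystroke_features(events):
    """events: list of (key, down_ms, up_ms) in typing order.
    Dwell = key down -> key up; flight = key up -> next key down."""
    dwells = [up - down for _, down, up in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwells, flights

def signature_distance(profile, sample):
    """Mean absolute difference between two (dwells, flights) profiles."""
    pairs = list(zip(profile[0] + profile[1], sample[0] + sample[1]))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

enrolled = keystroke_features([("t", 0, 90), ("h", 150, 230), ("e", 300, 370)])
attempt = keystroke_features([("t", 0, 95), ("h", 160, 235), ("e", 310, 385)])
print("same typist?", signature_distance(enrolled, attempt) < 20)  # toy threshold
```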
In addition, they scan your photo ID – a driver’s license or passport. Proctoring companies use machine learning to adapt to student behaviour, improving the analysis with each exam. Facial recognition, eye-movement tracking and auditory analysis identify suspicious behaviour, with incident reports and session activity data generated at the end of each exam. Multi-factor authentication – ID and photo capture, facial recognition and keystroke analysis – is used to verify student identity.
All of these techniques and others are improving rapidly and it is clear from these real examples that AI is already useful in enabling more convenient, cheaper and on-demand identification and assessment. 
2. Interface (voice)
Learners largely use keyboards, whether physical or virtual, to write. This is the norm at home and in the workplace. Yet assessment is still largely done by writing with a pen. This creates a performance problem. On most writing and critical-thinking tasks one needs to be able to ‘rewrite’ (reorder, delete, add, amend) text. Writing with a pen encourages the opposite – the memorisation of blocks of text, even entire essays.
We have already seen how keystroke patterns can be used to identify candidates, but voice is also rapidly becoming a normal form of interaction with computers: around 10% of Google searches are now spoken, Siri and Cortana are common tools, and home devices such as Amazon’s Alexa and Google Home are widespread. The advantages of voice for assessment are clear: a natural, frictionless interface; speaking is a more universal skill than writing; and it eliminates literacy problems where literacy is not the purpose of the assessment. Voice also helps assess within 3D environments such as VR, where you can navigate and interact wholly by voice. We have a system in WildFire that is wholly voice-driven, inside or outside VR. VR is another form of interface in assessment (more on this later in this article).
3. Retrieval as formative assessment
Formative testing has a solid research base. It shows that testing, as a form of retrieval, is one of the most effective methods of study. A meta-study by Adesope et al. (2017) shows the superiority of testing over re-reading and other forms of study.
However, most online learning relies heavily on multiple-choice questions, which have become the staple of much e-learning content. These have been shown to be effective – almost any type of test item is effective to a degree – but they have also been shown to be less effective than open response, as they test recognition from a list, not whether something is actually known. MCQs are a relic of the early days of automated marking, when templates could be used around boxes to visually or machine-read ticks and crosses. There are many problems with multiple-choice questions: the answer is given, so they require recognition rather than retrieval; guessing gives you a 25% or 33% chance of being right; distractors can be remembered; cheating works; and surface structure seriously distorts efficacy.
Kang et al. (2007) showed, with 48 undergraduates reading academic journal-quality material, that open input is superior to multiple-choice (recognition) tasks. Multiple-choice testing had an effect similar to that of re-reading, whereas open input resulted in more effective student learning. McDaniel et al. (2007) repeated this experiment in a real course with 35 students enrolled in a web-based Brain and Behavior course at the University of New Mexico. The open-input quizzes produced more robust benefits than multiple-choice quizzes. ‘Desirable difficulties’ is a concept coined by Elizabeth and Robert Bjork to describe the value of creating learning experiences that trigger effort, deeper processing, encoding and retrieval, and so enhance learning. The Bjorks have researched this phenomenon in detail, showing that effortful retrieval and recall are desirable in learning, as it is the effort taken in retrieval that reinforces and consolidates that learning.
A multiple-choice question is a test of recognition from a list; it does not elicit full recall from memory. Studies comparing multiple-choice with open retrieval show that when more effort is demanded of students, they have better retention. As open response takes cognitive effort, the very act of recalling knowledge also reinforces that knowledge in memory. Active recall – pulling something out of memory – develops and strengthens memory in ways that passive study (reading, listening and watching) does not, and is therefore more effective in terms of future performance.
AI can help move assessment beyond MCQs by opening up the possibilities of open input. Meaning matters, so it makes sense to assess through open response, where meaningful recall is stimulated. Interestingly, even when the answer is not known, the act of trying to answer is itself a powerful form of learning – a stronger reinforcer, indeed, than the original exposure.
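As a hedged sketch of what open-input marking can look like – not WildFire’s actual algorithm; the synonym table and concepts are invented – the core idea is to check a free-text answer for required concepts rather than offering a list to recognise:

```python
import re

# Illustrative synonym table mapping a concept to acceptable surface forms
SYNONYMS = {"heart": {"heart", "cardiac"},
            "pump": {"pump", "pumps", "pumping"}}

def score_open_response(answer, required_concepts):
    """Fraction of required concepts found (via synonyms) in the answer."""
    tokens = set(re.findall(r"[a-z]+", answer.lower()))
    hits = [c for c in required_concepts if tokens & SYNONYMS.get(c, {c})]
    return len(hits) / len(required_concepts)

print(score_open_response("The cardiac muscle pumps blood", ["heart", "pump"]))
```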
4. Automatic creation of assessments
We have developed an AI content-creation service in WildFire that not only creates online learning content but also creates assessments at the same time. AI techniques create content where the assessment is identical in form to the learning experience, both using open text input, as outlined above. In addition, we can detect a great deal of detail about user behaviour while learners take the assessment, and you can vary the difficulty, and some of the input parameters, of the assessment using global variables. This approach is important for the great mass of low-level, low-stakes assessment, whether formative or summative.
5. Algorithmic spaced practice
The timing of formative assessment is also important, as Roediger (2011) has shown, with a logarithmic pattern recommended, i.e. lengthening the period between tests or self-tests as time passes. This is one of the most effective study techniques we know, yet many learners seem trapped in the world of taking notes, reading, underlining and re-reading. The way to enhance this technique is to use an algorithm to determine the pattern of practice and push practice events to individual learners. We do this in WildFire.
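A minimal sketch of such an algorithm, assuming a simple expanding-interval rule. The starting gap and growth factor are illustrative, not WildFire’s actual values.

```python
from datetime import date, timedelta

def practice_schedule(start, reviews=5, first_gap_days=1, factor=2.5):
    """Expanding gaps between retrieval-practice events: 1, 2.5, 6.25... days."""
    gap, when, out = first_gap_days, start, []
    for _ in range(reviews):
        when = when + timedelta(days=round(gap))
        out.append(when)
        gap *= factor
    return out

for d in practice_schedule(date(2018, 7, 1)):
    print(d.isoformat())  # push a practice event to the learner on each date
```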
6. Plagiarism
The most common use of AI in learning is in plagiarism checkers. Oddly, this is by far the most common use of AI in assessment, and the quality assurance surrounding assessment often relies on this one tool to verify authorship. There are lots of tools in this area: grammarly.com (free), academicplagiarism.com (cheap), turnitin.com (expensive) and SafeAssign (Blackboard). Turnitin also has WriteCheck, a service that allows students to submit their own work. What is odd is that the main use of AI in HE is trying to catch cheats. Interestingly, given that plagiarism is a genie that is well and truly out of the bottle, we are still stuck with essays as a rather monolithic form of assessment, especially in Higher Education. The good news is that the AI techniques used in plagiarism checkers are increasingly being used to allow learners to submit drafts of essays for reflection and improvement – it is in the provision of feedback on submitted text, through formative assessment, that learning takes place. Comparisons across the essays submitted by one student may also reveal inconsistencies that need further investigation.
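The core mechanism behind such checkers can be sketched in a few lines: compare overlapping word n-grams (‘shingles’) between a submission and a source. Real tools add web-scale indexes, source databases and paraphrase detection; the texts and threshold here are invented for illustration.

```python
def shingles(text, n=3):
    """Set of overlapping n-word sequences from the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Overlap between two shingle sets, 0.0 (disjoint) to 1.0 (identical)."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

submitted = "the cat sat on the mat and looked around"
source = "the cat sat on the mat then it slept"
print(f"overlap: {jaccard(submitted, source):.2f}")  # flag if above a set threshold
```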
Essays are sometimes appropriate assignments if one wants long-form critical thought. But in many subjects shorter, more targeted assignments and testing are far better. There’s a lot of formative assessment techniques out there and essays are just one of them. Short answer questions, open-response, formative testing, adaptive testing are just some of the alternatives.
7. Essay marking
Essay and short open-answer marking is possible using AI-assisted software. The software takes lots of real essays, along with their human-marked grades, and looks for features that distinguish essays at one grade from those at another. In this sense, the software draws on human judgments and outputs and tries to mimic them when presented with new cases. The features the software picks up on vary but can include the presence or absence of certain words and phrases, and so on. So it is NOT the machine or algorithms doing the work on their own; it is a process of learning from what human experts did when they marked lots of essays.
Machine grading gives you a score, but it also gives you a probability – a confidence rating. This is important, as you can use it to retrain the algorithm on essays with low-confidence scores. Automated essay scoring (AES) also tries to give scores for each dimension in the scoring rubric; it is not just an overall grade.
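A hedged sketch of how that confidence rating might be used operationally: low-confidence essays are routed to a human marker, whose grade can then feed retraining. The grader below is a stub standing in for a trained model, and the threshold is illustrative.

```python
import random

def machine_grade(essay):
    """Stub standing in for a trained model: returns (grade, confidence).
    A real AES model would score rubric dimensions from learned features."""
    rng = random.Random(essay)
    return rng.choice("ABCD"), rng.uniform(0.4, 1.0)

CONFIDENCE_FLOOR = 0.7  # illustrative routing threshold
for essay in ["essay one ...", "essay two ...", "essay three ..."]:
    grade, conf = machine_grade(essay)
    if conf < CONFIDENCE_FLOOR:
        # Human grade can later be added to the training set for retraining
        print(f"{essay!r}: route to human marker (conf {conf:.2f})")
    else:
        print(f"{essay!r}: auto-graded {grade} (conf {conf:.2f})")
```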
8. Adaptive assessment
Delivering assessments that adapt to the learner’s performance is called adaptive assessment (or adaptive testing). The advantage is that you require fewer test items to assess ability. Iterative algorithms select questions from a database and deliver them according to the learner’s ability, starting with a medium-difficulty item. WildFire has used this in chatbot-delivered assessments, where sprints of questions are delivered in a more naturalistic dialogue format.
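A minimal sketch of adaptive item selection, using a simple up/down staircase rather than the item-response-theory models real systems typically use. The question bank and answer function are invented for illustration.

```python
def adaptive_test(bank, answer_fn, items=5):
    """bank: dict mapping difficulty (1-5) -> list of questions.
    Start at medium difficulty; step up after a correct answer,
    down after a wrong one."""
    level, asked = 3, []
    for _ in range(items):
        question = bank[level].pop(0)
        asked.append((level, question))
        correct = answer_fn(question)
        level = min(5, level + 1) if correct else max(1, level - 1)
    return asked

bank = {d: [f"Q{d}.{i}" for i in range(1, 4)] for d in range(1, 6)}
# Toy answer function standing in for a real candidate's responses
print(adaptive_test(bank, answer_fn=lambda q: len(q) % 2 == 0))
```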
9. Context
3D environments, either on 2D screens or in VR, have opened up the possibility of assessment within a simulated context. This is particularly useful for physical and vocational tasks. VR systems also offer multi-learner environments with voice and tutor control. This is rapidly becoming a total simulation environment, where both psychological and physical fidelity can match the assessment goals.
Many competences can only be measured by someone actually doing something, yet most exams come nowhere near measuring competences. For many vocational and practical tasks – real skills – simulation is head and shoulders above traditional paper exams. Your performance can really be measured; your assessment can be your performance – complete the task and you have passed. This is already a reality in many simulations, flight sims and so on, and it can also be true of many other skills.
Recertification for inspections is one practical example. I’ve been involved in a simulation of domestic gas inspection that simulates scenarios so well it is now used as a large part of the assessment, saving huge amounts of money in the US. You are free to move around the house, check for gas leaks and take all the necessary measurements using the right equipment – a completely open training and assessment environment. With Oculus Rift it is far more realistic than a 2D screen showing a 3D simulation.
Of course, VR is not in itself AI, although it opens up possibilities for combining the two.
10. Online proctoring
All of the above enable online assessment and proctoring – especially online identification, but also the many online developments around interface, input, retrieval, creation, marking and context. The MOOC providers have been doing this, and refining their models, over a number of years. It is already a reality for providers such as Udacity and Coursera, where paying for the grading of assignments, online exams and Nanodegrees (with job promises and money back if you don’t get a job) have all been implemented. It is undeniable that most forms of delivery are moving online, whether in retail or finance, but also in learning. This increase in demand for online learning needs to be matched by an increase in online assessment. The knotty problems associated with online assessment benefit greatly from AI.
