Saturday, June 30, 2018

Clever AI/AR/Teacher hybrid systems for classroom use

Most AI-driven systems deliver content via screens to the student and then dashboards to the teacher. But there is a third way – hybrid AI/AR/Teacher systems that give the teacher enhanced powers to see what they can’t see with their own eyes. No teacher has eyes in the back of their head but, like a self-driving car, such systems can have eyes everywhere, recognising individual students, reading their expressions, identifying their behaviours and providing personalised learning experiences and feedback. You become a more powerful teacher by seeing more, getting and giving more feedback and having less admin and paperwork to do. The promise is that such hybrid systems allow you to do what you do best – teach, especially addressing the needs of struggling students.
AI/AR in classroom
I’ve written about the use of 3D video in teacher training before but this AR (Augmented Reality) idea struck me as clever. Many uses of AI lie outside of the classroom. This approach augments the strengths of the teacher by integrating dashboards, personal symbols and other AR techniques into the classroom and the practice of teaching. 
Ken Holstein, at Carnegie Mellon, seems like an imaginative and creative researcher, and has been looking at hybrid Teacher/AR/AI systems that present adaptive software but also highlight each individual student's progress – whether they’re attentive, struggling, need help and so on. Symbols appear above the heads of each student. The teacher wears glasses that display this information, linked to a back-end system that gathers data about each student’s performance.
It does, of course, seem all very Big Brother – to some even monstrous, especially to those comfortable with traditional classroom teaching. However, as results seem to have plateaued in K12 education, we may need to make teachers more effective by enabling them to focus on the students who are having difficulties. These ideas make personalised learning possible not by replacing the teacher (the idea behind most AI/adaptive systems) but by giving the teacher individual feedback above the head of each student, so that personalised learning can be realised. 
Face recognition in the classroom
Let’s up the stakes with this face recognition system used in China. It recognises student faces instantly, as they arrive for school, so no need for registration. In the classroom it scans the students every 30 seconds, recognising seven different expressions like neutral, happy, sad, disappointed, angry and surprised, as well as six types of behaviour, such as reading, writing, distracted and so on. So it helps the teacher manage registration, performance and behaviour.
They also claim that it helps teachers improve by adapting to the feedback and statistical analysis they receive from the system. When I’ve shown people this system, some react with horror but if we are to reduce teacher workload, should we consider such systems to help with problems around non-teaching paperwork, student feedback and classroom behaviour?
Conclusion
What seems outlandish today often turns out to be normal in the future – internet, smartphones, VR. Combinations of technology are often more effective than single approaches – witness the smartphone or self-driving car. These adaptive AR/AI hybrid systems may turn out to be very effective by being sensitive to both teacher and student needs. The aim is not to replace but to enhance the teacher's skills, giving them real-time data, personal feedback on all students in their class and data to reflect on their own skills. Let’s not throw the advantages out before we’ve had time to consider the possibilities.


Monday, June 25, 2018

AI and assessment

I used my fingerprint to access this Mac to write this piece, my iPhone uses face recognition and when I travel, face recognition is used to identify me when I leave and enter the country. I am constantly being ‘assessed’ using AI. As the pendulum swings towards online learning, it makes sense to use it in online examinations. Yet the only example of AI being used in assessment in learning is in checks for cheating – plagiarism checkers.
AI is not perfect but neither are humans. Human performance falls when marking large numbers of essays: markers make mistakes, have biases based on names and gender, cognitive biases, and biases about what is acceptable in terms of critique and creativity. This is not about replacing teacher assessment, it’s about automating some of that work to allow teachers to teach and provide more targeted, constructive feedback and support. It’s about optimising teachers’ time. It is also about opening up the huge potential in online assessment, on the not inconsiderable grounds of convenience, quality and cost.
1. Identification
Live or recorded monitoring (proctoring) is used to watch the candidate. You can also monitor feeds, use a locked down browser, freeze screen, block cut and paste, and limit external access.  Video, including 360 degree cameras, and audio are also used to detect possible cheating. Using webcams you can scan for suspicious objects and background noise, also use face recognition.
Coursera holds a patent on keystroke recognition. They get you to type in a sentence, then measure two things: dwell time on each key and the time between keystrokes, giving you as a candidate a unique signature, so that exam input can be checked to be yours. 
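The idea is easy to sketch in code. This is an illustrative toy, not Coursera's actual patented method; the timing format, threshold and comparison rule are all assumptions for the sake of the example:

```python
# Illustrative sketch of keystroke-dynamics matching (NOT Coursera's patented
# method). Each keystroke is a tuple: (key, press_time_ms, release_time_ms).

def signature(keystrokes):
    """Build a signature from dwell times (how long each key is held down)
    and flight times (gap between consecutive key presses)."""
    dwells = [release - press for _, press, release in keystrokes]
    flights = [keystrokes[i + 1][1] - keystrokes[i][1]
               for i in range(len(keystrokes) - 1)]
    return dwells, flights

def mean_abs_diff(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def same_typist(enrolled, sample, threshold_ms=40):
    """Crude check: is the average timing deviation below a threshold?
    Assumes both recordings are of the same enrolment sentence."""
    d1, f1 = signature(enrolled)
    d2, f2 = signature(sample)
    score = (mean_abs_diff(d1, d2) + mean_abs_diff(f1, f2)) / 2
    return score < threshold_ms
```

A real system would use many more features and a trained model, but the principle – compare a fresh sample against an enrolled timing signature – is the same.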
In addition they scan your photo ID, a driver's license or passport. Proctoring companies use machine learning to adapt to student behaviour, improving the analysis with each exam. Facial recognition, eye-movement tracking and auditory analysis identify suspicious behaviour, with incident reports and session activity data generated at the end of each exam. Multi-factor authentication – ID and photo capture, facial recognition and keystroke analysis – is used to verify student identity.
All of these techniques and others are improving rapidly and it is clear from these real examples that AI is already useful in enabling more convenient, cheaper and on-demand identification and assessment. 
2. Interface (voice)
Learners largely use keyboards, whether physical or virtual to write. This is the norm at home and in the workplace. Yet assessment is still largely by writing with a pen. This creates a performance problem. On most writing and critical thinking tasks one needs to be able to ‘rewrite’ (reorder, delete, add, amend) text. Writing with a pen encourages the opposite – the memorisation of blocks of text, even entire essays.
We have already seen how keystroke patterns can be used to identify candidates, but voice is also rapidly becoming a normal form of interaction with computers – around 10% of Google searches are now by voice, Siri and Cortana are common tools, as are home devices such as Amazon’s Alexa and Google Home. The advantages of voice for assessment are clear: a natural, frictionless interface; speaking is a more universal skill than writing; and it eliminates literacy problems, where literacy is not the purpose of the assessment. Voice also helps assess within 3D environments such as VR, where you can navigate and interact wholly by voice. We have a system in WildFire which is wholly voice-driven, inside or outside VR. VR is another form of interface in assessment (more of this later in this article).
3. Retrieval as formative assessment
Formative testing has a solid research base. It shows that testing as a form of retrieval is one of the most effective methods of study. A metastudy by Adesope et al (2017) shows the superiority of testing over reading and other forms of study. 
However, most online learning relies heavily on multiple-choice questions, which have become the staple of much e-learning content. These have been shown to be effective, as almost any type of test item is effective to a degree, but they have also been shown to be less effective than open response, as they test recognition from a list, not whether something is actually known. MCQs are a relic of the early days of automated marking, when templates could be used around boxes to visually or machine-read ticks/crosses. There are many problems with multiple choice questions: the answer is given; they require recognition rather than retrieval skills; guessing gives you a 25%/33% chance of being right; distractors can be remembered; cheating works; and surface structure seriously distorts efficacy.
Kang et al. (2007) showed that, with 48 undergraduates reading academic journal-quality material, open input is superior to multiple-choice (recognition) tasks. Multiple choice testing had an effect similar to that of re-reading, whereas open input resulted in more effective student learning. McDaniel et al. (2007) repeated this experiment in a real course with 35 students enrolled in a web-based Brain and Behavior course at the University of New Mexico. The open-input quizzes produced more robust benefits than multiple-choice quizzes. ‘Desirable difficulties’ is a concept coined by Elizabeth and Robert Bjork to describe the desirability of creating learning experiences that trigger effort, deeper processing, encoding and retrieval, to enhance learning. The Bjorks have researched this phenomenon in detail to show that effortful retrieval and recall is desirable in learning, as it is the effort taken in retrieval that reinforces and consolidates that learning.
A multiple-choice question is a test of recognition from a list. It does not elicit full recall from memory. Studies comparing multiple-choice with open retrieval show that when more effort is demanded of students, they have better retention. As open response takes cognitive effort, the very act of recalling knowledge also reinforces that knowledge in memory. The act of active recall develops and strengthens memory. It improves the process of recall in ways that passive study – reading, listening and watching – does not. Active recall, pulling something out of memory, is therefore more effective in terms of future performance.
AI can help assess alternatives to MCQs by opening up the possibilities of open input. Meaning matters and so it makes sense to assess through open response, where meaningful recall is stimulated. This act alone, even when you don’t know the answer, is a strong reinforcer, stronger indeed, than the original exposure. Interestingly, even when the answer is not known, the act of trying to answer is also a powerful form of learning. 
4. Automatic creation of assessments
We have developed an AI content creation service in WildFire, that not only creates online learning content but also assessments at the same time. AI techniques create content with the assessment identical to the learning experience, both with open text input, as outlined above. In addition, we can detect a great deal of detail about user behaviour while they do the assessment. You can vary the difficulty, and some of the input parameters, of the assessment using some global variables. This approach is important for the great mass of low level, low stakes assessment, whether formative or summative.
5. Algorithmic spaced practice
The timing of formative assessment is also important, as Roediger (2011) has shown, with an expanding pattern recommended, i.e. lengthening the gap between tests or self-tests as time passes. This is one of the most effective study techniques we know, yet many seem to be trapped in the world of taking notes, reading, underlining and re-reading. The way to enhance this technique is to use an algorithm to determine the pattern of practice and push practice events to individual learners. We do this in WildFire.
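An expanding-interval scheduler of this kind takes only a few lines. The base interval and multiplier below are illustrative assumptions, not WildFire's actual algorithm:

```python
# A minimal expanding-interval scheduler: each review is pushed further out
# than the last (1, 2, 4, 8 ... days). Base interval and multiplier are
# illustrative values only.
from datetime import date, timedelta

def schedule(start, reviews=5, base_days=1, factor=2):
    """Return review dates whose gaps lengthen by `factor` each time."""
    dates, gap, current = [], base_days, start
    for _ in range(reviews):
        current = current + timedelta(days=gap)
        dates.append(current)
        gap *= factor
    return dates
```

In practice the gaps would also adapt to performance – an item answered incorrectly would reset to a shorter interval – but the expanding pattern is the core of the technique.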
6. Plagiarism
The most common use of AI in learning is in plagiarism checkers – by far the most common use of AI in assessment. The quality assurance surrounding assessment often relies on this one tool to verify authorship. There are lots of tools in this area: grammarly.com (free), academicplagiarism.com (cheap), turnitin.com (expensive) and SafeAssign (Blackboard). Turnitin also has WriteCheck, a service that allows students to submit their work. What is odd is that the main use of AI in HE is trying to catch cheats. Interestingly, given that plagiarism is a genie that is well and truly out of the bottle, we are still stuck with essays as a rather monolithic form of assessment, especially in Higher Education. The good news is that the AI techniques used in plagiarism checkers are increasingly being used to allow learners to submit drafts of essays for reflection and improvement. It is in the provision of feedback on submitted text, through formative assessment, that learning takes place. Comparisons across the essays submitted by one student may also reveal inconsistencies that need further investigation.
Essays are sometimes appropriate assignments if one wants long-form critical thought. But in many subjects shorter, more targeted assignments and testing are far better. There are many formative assessment techniques out there and essays are just one of them. Short-answer questions, open response, formative testing and adaptive testing are just some of the alternatives.
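The matching at the heart of plagiarism checkers can be illustrated with a toy example – Jaccard overlap of word trigrams. Real services such as Turnitin match against vast databases with far more sophisticated techniques; this sketch only shows the principle:

```python
# Toy plagiarism-style similarity check: Jaccard overlap of word trigrams.
# Illustrative only - real checkers use much richer matching.

def ngrams(text, n=3):
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard similarity between two texts: 0.0 (no shared n-grams) to 1.0."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)
```

A score near 1.0 flags near-verbatim copying; the same mechanics can also power the draft-and-feedback use described above, by comparing a student's drafts against sources or against their own earlier submissions.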
7. Essay marking
Essay and short open-answer marking is possible using AI-assisted software. The software takes lots of real essays, along with their human-marked grades, and looks for features that distinguish essays of one grade from those of another. In this sense, the software uses human traits and outputs and tries to mimic them when presented with new cases. The features the software needs to pick up on vary, but can include missing words/phrases and so on. So it is NOT the machine or algorithms on their own doing the work, it’s a process of looking at what human experts did when they marked lots of essays. 
Machine grading gives you a score but it also gives you a probability, namely a confidence rating. This is important, as you can use it to retrain the algorithm on essays scored with low confidence. Automated essay scoring (AES) also tries to give scores for each dimension in the scoring rubric, not just an overall grade.
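The train-on-human-grades approach can be sketched as follows. The features here (essay length, vocabulary size) and the nearest-profile scoring are deliberately crude illustrations of the principle, not how any real AES engine works:

```python
# Sketch of feature-based essay scoring: learn average feature profiles from
# human-marked essays, then grade new ones with a confidence value.
# The features and scoring rule are illustrative assumptions only.

def features(essay):
    """Crude illustrative features: word count and vocabulary size."""
    words = essay.lower().split()
    return [len(words), len(set(words))]

def train(marked):
    """marked: list of (essay, grade). Average the features per grade."""
    by_grade = {}
    for essay, g in marked:
        by_grade.setdefault(g, []).append(features(essay))
    return {g: [sum(col) / len(col) for col in zip(*fs)]
            for g, fs in by_grade.items()}

def grade(essay, profiles):
    """Return (best grade, confidence). Low distance to the winning
    profile relative to the others gives high confidence."""
    f = features(essay)
    dists = {g: sum(abs(a - b) for a, b in zip(f, p))
             for g, p in profiles.items()}
    best = min(dists, key=dists.get)
    total = sum(dists.values()) or 1
    return best, 1 - dists[best] / total
```

The confidence value is what makes the human-in-the-loop workflow possible: essays graded with low confidence are routed back to human markers and used to retrain the model.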
8. Adaptive assessment
Delivering assessments that adapt to the learner’s performance is called adaptive assessment. The advantage is that you need fewer test items to assess. Iterative algorithms select questions from a database and deliver them according to the learner’s ability, starting with a medium-difficulty item. WildFire has used this in chatbot-delivered assessments, where sprints of questions are delivered in a more naturalistic dialogue format.
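A minimal version of such an iterative algorithm might look like this. Real systems use Item Response Theory; this step-rule sketch, with assumed difficulty ratings and step sizes, just shows the adapt-to-ability loop:

```python
# Minimal adaptive-testing loop (illustrative, not a real IRT implementation):
# start at medium difficulty, step the ability estimate up after a correct
# answer and down after a wrong one, and always pick the unused item closest
# to the current estimate.

def next_item(items, ability, used):
    """Pick the unused item whose difficulty is closest to the estimate."""
    return min((i for i in items if i not in used),
               key=lambda d: abs(d - ability))

def run_test(items, answer, rounds=4, start=5, step=2):
    """items: difficulty ratings; answer(difficulty) -> bool (answered right?).
    Returns the final ability estimate."""
    ability, used = start, set()
    for _ in range(rounds):
        item = next_item(items, ability, used)
        used.add(item)
        ability += step if answer(item) else -step  # step up or down
        step = max(1, step // 2)                    # shrink to settle on an estimate
    return ability
```

Because each item is chosen near the learner's estimated ability, the test homes in on their level quickly, which is why fewer items are needed than in a fixed-form test.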
9. Context
3D environments, either on 2D screens or in VR have opened up the possibility of assessment within a simulated context. This is particularly useful for physical and vocational tasks. VR systems also offer multi-learner environments with voice and tutor control. This is rapidly becoming a total simulation environment, where both psychological and physical fidelity can match the assessment goals. 
Many competences can only be measured by someone doing something. Yet most exams come nowhere near measuring competences. For many vocational and practical tasks – real skills – this is head and shoulders above traditional paper exams. Your performance can really be measured. Your assessment can be your performance – complete it and you’ve passed. This is already a reality in many simulations, flight sims and so on. It can also be true of many other skills.
Recertification for inspections is one practical example. I’ve been involved in a simulation on domestic house gas inspection that simulates scenarios so well it’s now used as a large part of the assessment, saving huge amounts of money in the US. You’re free to move around the house, check for gas leaks, do all the necessary measurements using the right equipment – a completely open training and assessment environment. With Oculus Rift it is far more realistic than a 2D screen showing a 3D simulation.
Of course, VR is not in itself AI, although it opens up possibilities for AI within these simulated environments.
10. Online proctoring
All of the above enable online assessment, or proctoring – especially online identification, but also the many online developments around interface, input, retrieval, creation, marking and context. The MOOC providers have been doing this, and refining their models, over a number of years. It is already a reality for providers such as Udacity and Coursera, where paid grading of assignments, online exams and Nanodegrees (with job promises and money back if you don’t get a job) have all been implemented. It is undeniable that most forms of delivery are moving online, whether in retail or finance, but also in learning. This increase in demand for online learning needs to be matched by an increase in online assessment. The knotty problems associated with online assessment benefit greatly from AI.


Friday, June 22, 2018

AI and religious zealotry – let’s not fall for anthropomorphism, techno-prophecies, singularity & end-of-days

AI is unique as a species of technology as it induces speculation that falls little short of religious fervour. Elon Musk and Stephen Hawking, no less, have made the case for AI being an existential threat, a beast that needs to be tamed. On the other side, in my view more level-headed thinkers, such as Steven Pinker and many practitioners who work in AI, claim that much of this is hyperbole.
The drivers behind such religiosity are, as Hume said in the 18th century, a mixture of our:
1) fears, hopes and anxieties about future events
2) tendency to magnify
From the Greeks and their Promethean myth, through its resurrection by Mary Shelley in ‘Frankenstein’ in the 19th century, then a century of film from ‘Metropolis’ onwards, the perceived loss of human autonomy has fuelled our fears and anxieties about technology. The movies have tended to rely on existing fears about commies, crime, nuclear war, alien invasions and whatever fear the age throws up. Y2K was a bogus fear; the world suffered no armageddon. So let’s not fall for current fears.
The tendency to magnify shows itself in the exaggeration around exponentialism, the idea that things will proceed exponentially, without interruption, until disaster ensues. Toby Walsh, an AI researcher, warns us not to accept too readily the myth of exponential growth in AI. There are many brakes on progress, from processing power to backpropagation. Progress will be slower than anticipated.
The prophets of doom seem to ignore the fact that it is almost inconceivable that we won’t anticipate the problems associated with autonomy, then regulate and control them, with sensible engineering solutions. 
The airline industry is one of the wonders of our age, where most commercial airplanes are essentially robots that switch to autopilot as low as 200 feet, then fly and land without much human intervention. Security, enhanced by face recognition, allows us to take international flights without speaking to another human being. Soaked in AI and automation, its safety record is astounding. Airplanes have got safer because of AI, not in spite of AI. Similarly, with other applications of AI we will anticipate and engineer solutions that are safe. But there are several specific tendencies that mirror religious fervour that we must be aware of:
Anthropomorphism
AI is not easy - it's a hard slog. I agree with Pinker when he says that being human is a coherent concept but there is no real coherence in AI. Even if we imagine a coherent general intelligence, there is no reason to assume that AI will adopt attitudes that we, as humans, have accumulated over 2 million years of evolution. We tend to attribute human qualities to the religious domain, whether God, Saints or our binary moral constructs: God/Devil, Saint/Sinner, Good/Evil, Heaven/Hell. These moral constructs are then applied to technology, despite the fact that there is no consciousness, no self-awareness and no ‘intelligence’, a word that often misleads us into thinking that AI has thoughts. Blinded by the word ‘intelligence’, we anthropomorphise, transposing our human moral schemas onto indifferent technology. So what if IBM Watson won at Jeopardy, and Google triumphs at Go and poker – the AI didn’t know it had won or triumphed.
Prophecy
Another sign of this religious fervour is ‘prophecy’. There’s no end of forecasts and extrapolations, best described as prophecies, about future progress and fears in AI. The prophecies, as in religion, tend to be about dystopian futures. Pestilence and locusts have been replaced by nano-technology and micro-drones. Kurzweil, that high priest of hyperbole, has taken this to another level, with his diagrammatic equivalent of rapture… the singularity.
Singularity
The pseudo-religious idea of the ‘singularity’ is the clearest example of religious magnification and hyperbole. Just as we invented religious ideas such as omniscience, omnipresence and omnipotence, we draw exponential graphs and imagine that AI moves towards similarly lofty heights. We create a technical Heaven, or for some Hell. There will be no singularity. AI is an idiot savant, smart only in narrow domains but profoundly stupid. It’s only software.
End-of-days
Then there is an ‘end of days’ dimension to this dystopian speculation, the idea that we are near the end of our reign as a species and that, through our own foolishness and blindness to the dangers of AI, will soon face extinction.
There is no God
One fundamental problem with all of this pseudo-religious fervour is the simple fact that AI, unlike our monotheistic God, is not a singular idea. It has no formal and precise definition. AI is not one thing, it is many things. It’s simply a set of wildly different tools. In fact, many things that people assume are AI, such as factory robots, have nothing to do with AI, as are many other software applications which are just statistical analysis, data mining or some other well known technique. Algorithms have been around since Euclid 2300 years ago. It has taken over two millennia of maths to get here. Sure we have data flooding from the web but that’s no reason to jump two by two onto some imaginary Ark to save ourselves and all organic life. Believe me, there are many worse dangers – disease, war, climate change, nuclear weapons…. 
Blinded by bias
The zealotry of the technophobes is akin to that of the fanatics in The Life of Brian. What has AI ever done for us? Search, accelerated medical research, identifying disease outbreaks, identifying melanomas, diagnosing cancer, reading scans and pathology slides, self-driving cars… let’s see. Let’s not see AI as a Weapon of Math Destruction and focus relentlessly on accusations of bias that turn out to be the same few second-hand case studies, endlessly recycled. All humans are biased, and while bias may exist in software or data, that form of mathematical bias can be mathematically defined and dealt with, unlike our many human biases, which Daniel Kahneman, who got the Nobel Prize for his work on bias, described as ‘uneducable’. Machine learning and many, many other AI techniques depend, necessarily, on making mistakes as they optimise solutions. This is how it works, learns and solves problems. Remember - it’s only software.
Conclusion
We need to take the ‘idiot savant’ description seriously. Sure there are dangers. Almost all technology has a calculus of upsides and downsides. Cars mangle, kill and maim millions, yet we still drive. The greatest danger is likely to be the military or bad-actor use of weaponised AI. That we should worry about and regulate. AI is really hard, it takes time, so there's time to solve the safety issues. All of those dozens of ethical groups that are springing up like weeds are largely superfluous, apart from those addressing autonomous weapons. There are plenty of real and present problems to be solved - AI is not one of them. Let’s accept that AI is like the god Shiva: it can create and destroy. Don’t let it be seen solely as a destructive force; let’s use it creatively, in making our lives better, especially in health and education.




Thursday, June 21, 2018

Blockchain – got married on it but fell out of love with it….

Way back, in 2001, I built a decentralised P2P learning system. Long story, but it eventually produced a successful company, Learning Pool (I’m still involved), albeit after a pivot into more mainstream technology. This eventually led to an early interest in Blockchain when it first appeared. I gave talks on Blockchain, even got remarried on Blockchain. But I’ve come round to seeing it not so much as a solution to problems but as a solution looking for a problem. Having read tons on the subject and got far too interested in Satoshi Nakamoto, I saw yet another presentation this week touting it as the next big thing in learning, and gave it a rethink. Nothing wrong with changing your mind on something. Here are my thoughts...
1. Extremes of capitalism
You can’t go all ‘activist’ on me and blame the man, then use Bitcoin (and therefore Blockchain). All you’re doing is playing around in the extremes of capitalism – the really bad bit, where capital is hidden, secret and not subject to tax. I said two years ago that the Wild West world of Bitcoin could do with some sheriffs. I'm now of the opinion that it needs to be closed down.
2. Scams
Serious problems have emerged with the technology. Bitcoin looks increasingly like a money laundering scam, wracked with hacks, fraud, theft, ransoms and Ponzi schemes. The hackless future that was promised turned out to be a bit of a dystopian Westworld. This should worry those who want it used in the public sector.
3. Security
Another problem with many of the proprietary solutions is not Blockchain but the security layer. It has been noted that many of these are just Blockchain distributed databases with all the front-end security vulnerabilities of older systems.
4. Control
Sure Blockchain was created to democratise, decentralise and disintermediate institutions, so why keep it locked up within institutions? Much of the interest I now see is from traditional purveyors trying to lock down the technology for their own ends. A private Blockchain isn’t really a decentralised Blockchain, in that it is 100% owned. It’s basically a transaction ledger for interested parties, not the democratising, decentralising, deintermediating force many imagined. 
5. Environmentally disastrous
Bitcoin, for example, needs an enormous amount of processing power to verify transactions. This is huge – estimated to be the equivalent of the energy needs of 159 countries. To be fair, this is a Bitcoin problem, not a Blockchain problem, but it is a PR issue, as Bitcoin may take the whole thing down, casting a big shadow over the whole Blockchain industry.
6. Credentialism
But the main problem is that Blockchain in learning simply reinforces runaway credentialism. Bryan Caplan’s book shows that Higher Education has expanded on the back of ‘signalling’. This has resulted in credential inflation, where more and more young people stay at college for longer and longer, just to get the inflated paper they need for a job. If Blockchain simply makes credentialism easier, then forget it. Ah, I hear you say, but it’s really about micro-credentialism. That’s fine, but I also think that this has had its day. The badges movement has run out of steam, as badges turned out to be motivationally suspect, lacking objectivity and therefore credibility, not helped by awful branding. It has flopped.
7. Complexity
Lastly, it is just too complex an idea to sell, and education is a notoriously slow learner. Education and training also struggle to cope with innovative technology. Their infrastructure is largely old LMS technology and flat HTML delivery. Anyone investing in Blockchain in education will have to be in for the long haul – and I mean a very long haul.
Conclusion
I'm not saying that Blockchain has no role to play in the world, only that it doesn't, as yet, seem to have a clear role in education and training. If it solves problems in microfinance or in healthcare, fine. Blockchain is basically a transaction ledger and learning is not primarily about transactions. Not only is there no real problem to be solved in learning, it may just exacerbate over-credentialing. Sure, there are lots of projects around, funded by bodies that wouldn’t know a block if it was in their soup. Looks great on a grant application but, if I were to be honest, I haven’t seen a single example in education and training that has legs. I was wrong.


Monday, June 18, 2018

Personalised learning – what the hell is it? 10 things that work...

In one sense all learning is personal, as it requires personal attention, effort and practice. But ‘Personalised Learning’ as a term has come to mean the tailoring of learning to the individual, with a sensitivity to their individual needs. The goal is to increase the efficacy of teaching and learning, lower failure rates and increase access with scalable personalisation. 
Myths
First, we need to dispense with the myth that this is about Learning Styles or some other phantom phenomenon. This is one of the commonest myths in education, deeply embedded but completely wrong-headed. Personal needs are far better defined in terms of a whole raft of data about what you know, what you need to know, and your personal context and circumstances. I like to think of personalised learning as a solution to learning difficulties. We all have learning difficulties, as our brains are inattentive, get easily overloaded, forget most of what we’re taught, sleep 8 hours a day, are hopelessly biased, can’t download and can’t network. Then there are the delivery problems associated with the current model, which is expensive, time-consuming, inefficient and non-scalable. If, as some believe, personalised learning offers a way forward for technology enhanced learning, then let’s give it a chance.
Bloom’s promise
Personalised learning can be delivered offline, but it requires one-to-one tuition. Hence the great interest in Bloom’s famous paper, The 2 Sigma Problem, which compared the straight lecture, the lecture with formative feedback, and one-to-one tuition. Taking the straight lecture as the baseline, he found mastery at the:
84th percentile (one sigma above the mean) for a formative approach to teaching and an astonishing 
98th percentile (two sigma above the mean) for one-to-one tuition. 
Google’s Peter Norvig famously said that if you only have to read one paper to support online learning, this is it. In other words, the increase in efficacy for one-to-one, because of the increase in relevant and on-task learning, is immense. This paper deserves to be read by anyone looking at improving the efficacy of learning, as it shows hugely significant improvements by simply altering the way teachers interact with learners. 
Personalised promise through technology
However, given the difficulty of scaling up human one-to-one tutoring, the promise of Personalised Learning is now seen as being practically realised through technology. A whole range of technologies has emerged over the last few years that has accelerated progress right across the learning journey – interface advances, engagement, support, content, curation, practice, assessment, tutoring, even wellbeing. Rather than focusing on presentation – souped-up graphics and gamification – these rely on smart software, AI, to deliver personalised learning, not as one thing but as many things in the complex business of learning.
1. Personal interface
First up is a general move in interface technology, where AI is the new UI. Google, Facebook, Twitter, Amazon, Netflix and almost everything we do online is mediated by AI. Interaction is shifting towards ‘voice’, which is AI-driven. We can expect this to have some influence on the delivery of learning. It is even possible that AI-driven speech will radically change learning delivery. At WildFire, we use voice-only navigation and input by learners for the entire learning experience. We create podcasts on the fly, using AI. Alexa, Google Home and other consumer devices use voice as both input and output. Interfaces are becoming more frictionless, which reduces cognitive load and so makes learning more frictionless too.
2. Personal engagement
Learners are lazy. There, I’ve said it. Procrastination is the norm. Most do not do the pre-reading, leave assignments and essays until the last minute and miss lectures, and behaviour in schools can be a real problem. Bots, such as Differ, can be used to engage and nudge students. This ‘push’ side of tech, whether by bots, email or message apps, can be personalised to match the individual’s pattern of needs and behaviour. Face recognition tech can also be used for registration, even spotting learning behaviours. In general, we know how engaging tech is for learners – we spend most of our time complaining about its distractive effects. Suppose we take some of those engagement tricks and apply them to learning.
3. Personal support
Direct support through bots has already had success at Georgia Tech, where the students not only mistook the bot for a human teaching assistant but put it up for a teaching award. Bots such as Otto, embedded in the workflow, do a similar job in corporate learning. Support in online learning may come through access to resources, content and people, but it helps if the system knows what your personal needs are. 
4. Personal content (adaptive)
The personalised delivery of appropriate learning experiences – based on data about you, others taking the course, even your (and others’) behaviour on previous courses – can be used to decide what you need at that exact moment. This ‘adaptive’ learning is almost certain to increase the efficacy of online learning. It resequences learning and, with formative feedback, can deliver on the promise of personalised learning: educating everyone uniquely. Having been involved in this area for a number of years, I’ve seen some convincing results from real students in real institutions. This holds great promise.
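To make ‘adaptive’ concrete, here is a minimal, illustrative sketch – my own simplification, not any specific product’s algorithm. It keeps a per-topic mastery estimate for each learner, updates it after every answer, and always serves the weakest unmastered topic. The threshold and learning rate are assumed values.

```python
# Illustrative sketch of adaptive resequencing (not any real product's
# algorithm). Each learner has a mastery estimate per topic; the next
# item is always drawn from the weakest topic still below threshold.

MASTERY_THRESHOLD = 0.8   # assumed cut-off for "mastered"
LEARNING_RATE = 0.3       # assumed step size for estimate updates

def update_mastery(mastery, topic, correct):
    """Nudge the mastery estimate for a topic towards 1 (correct) or 0."""
    target = 1.0 if correct else 0.0
    mastery[topic] += LEARNING_RATE * (target - mastery[topic])
    return mastery

def next_topic(mastery):
    """Serve the weakest unmastered topic, or None when all are mastered."""
    unmastered = {t: m for t, m in mastery.items() if m < MASTERY_THRESHOLD}
    if not unmastered:
        return None
    return min(unmastered, key=unmastered.get)

# Example: a learner strong on fractions but weak on algebra
mastery = {"fractions": 0.9, "algebra": 0.2, "geometry": 0.5}
print(next_topic(mastery))   # serves the weakest topic first
update_mastery(mastery, "algebra", correct=True)
print(round(mastery["algebra"], 2))
```

The point of the sketch is the resequencing loop: the order of content is no longer fixed by the author but recomputed from the learner’s data after every interaction.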
5. Personal text analysis
It is unfortunate that in most institutions, the only place you’ll find AI technology, in a personalised sense, is in plagiarism checkers, checking whether you have cheated. What many are waking up to is the possibility of this same technology being used to help learners complete essays and assignments through formative feedback, using a repeated submit-and-learn model. Personal feedback on specific aspects of your performance, whether writing or performing in a domain-specific assignment, could be one of the big wins. Formative feedback accelerates learning.
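As a toy illustration of the submit-and-learn model – the names and logic here are my own assumptions, not a description of any real checker – a simple formative-feedback routine can report which key concepts from a model answer are still missing from a draft, so the learner revises and resubmits:

```python
# Toy formative-feedback checker (illustrative assumptions only).
# It reports which key concepts are absent from a submission, giving
# the learner something concrete to fix before resubmitting.

import re

def tokens(text):
    """Lower-case word set of a piece of text."""
    return set(re.findall(r"[a-z']+", text.lower()))

def formative_feedback(submission, key_concepts):
    """Return the key concepts not yet covered by the submission."""
    found = tokens(submission)
    return [c for c in key_concepts if c.lower() not in found]

concepts = ["photosynthesis", "chlorophyll", "glucose"]
draft = "Plants use photosynthesis to make food."
print(formative_feedback(draft, concepts))  # concepts still to cover
```

Real systems use far richer text analysis than word matching, but the loop is the same: submit, get targeted feedback, revise, repeat.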
6. Personal curation
Learning experiences, especially online, often feel over-directed and fixed, a little straitjacketed. Imagine a system that uses AI to automatically provide links to useful outside sources, relevant to you at the very moment you need them, because it has spotted that you have a problem. We do this in WildFire, where links to outside resources are created on the fly, making the course more porous and encouraging curiosity.
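A sketch of how such on-the-fly curation might work, under my own assumptions (not how WildFire actually does it): detect known domain terms in the learner’s current text and generate outbound reference links for them.

```python
# Illustrative on-the-fly curation (my own assumptions, not a real
# product's method): spot domain terms in the learner's text and map
# each to an outside reference link.

from urllib.parse import quote

GLOSSARY = {"neuron", "synapse", "cortex"}   # assumed domain term list

def curated_links(text):
    """Map each glossary term found in the text to a reference URL."""
    terms = {w.strip(".,;:").lower() for w in text.split()}
    hits = sorted(terms & GLOSSARY)
    return {t: f"https://en.wikipedia.org/wiki/{quote(t.capitalize())}" for t in hits}

print(curated_links("The neuron fires across the synapse."))
```

Generating the links at the moment of need, rather than hand-curating them in advance, is what makes the course ‘porous’.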
7. Personal practice
An often forgotten dimension in personalised learning is personalised practice. Spaced practice can be delivered in a logarithmic fashion, based precisely on your needs. Learn now and between now and your exam, new job, whatever, you get a personalised pattern of spaced practice. We have delivered this in WildFire, where you get several bites of the cherry after completing the course, delivered to you by email, to shunt learning from working to long-term memory.
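As a hedged sketch – the actual scheduling algorithm behind any given service is not public – a spaced-practice schedule with expanding gaps can be generated from a completion date and a multiplier:

```python
# Hedged sketch of a spaced-practice scheduler (illustrative only).
# Review dates are spaced with exponentially expanding gaps between
# course completion and the exam/new job, giving several "bites of
# the cherry" with growing intervals.

from datetime import date, timedelta

def spaced_schedule(start, first_gap_days=1, multiplier=2, reviews=4):
    """Return review dates at expanding gaps, e.g. +1, +3, +7, +15 days."""
    dates, gap, current = [], first_gap_days, start
    for _ in range(reviews):
        current = current + timedelta(days=gap)
        dates.append(current)
        gap *= multiplier
    return dates

for d in spaced_schedule(date(2018, 6, 30)):
    print(d.isoformat())
```

Each reminder email would then be sent on one of these dates, personalised by shifting the parameters to match the learner’s own exam date and performance.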
8. Personal assessment
Both personal formative and summative assessment are benefitting from AI. Digital identification, through keyboard strokes (Coursera), face recognition and so on, makes sure that the right person is sitting the exam. Formative assessment, often using AI techniques, is being delivered to large numbers of students. Adaptive testing is also available, to provide personalised assessments. We’ve used assessment bots in WildFire to question learners after their more directed learning experiences. Assessment is by definition personal; it’s about time we made it more personal through technology.
9. Personalised tutors
AI is nowhere near being able to replace teachers. But as the technology begins to know you, talk to you, remember what you said and did, and understand the context, we can expect services like Alexa and tutors to bloom in learning. We see this with Duolingo and many other online services. They get to know you and deliver what you want at that very moment, free from the tyranny of time and space – anyone, anytime, anyplace. This is what Amazon does for shopping, it’s what Cleo and other bots do in finance, it’s what fitness bots do in health. We can expect moves in this direction in learning. Note that this has nothing to do with robot teachers. That idea is as stupid as having a robot driving a self-driving car.
10. Personal difficulties
We have evidence that young people are feeling the pressure, and the statistics for stress, mental health problems, even suicide, are rising. We also know that young learners hide this from their parents, teachers and tutors. One solution is to anonymise help. Woebot and Elli have already been used and subjected to clinical trials, and the results are encouraging. Beyond this, we already know that personalised learning is a boon to learners with learning difficulties. Individualised responses to physical and psychological issues are now normal in education.
Conclusion
Personalised learning is not one thing, it is many things – an umbrella term for all sorts of applications, especially those driven by AI. We are on the cusp of bringing smart software to deliver smart learning to make people smarter. It will take time, as AI is, at present, only good at very specific, narrow tasks, not general teaching skills. That’s why I’ve tried to break it down into specific sweet spots. But on one thing I’m clear – this is a worthy path to follow.

Saturday, June 16, 2018

Study shows VR increases learning

I have argued that several conditions for good learning are likely to be enhanced by VR. First there’s increased attention, where the learner is literally held fast within the created environment and cannot be distracted by external stimuli. Second is experiential learning, where one has to ‘do’ something where that active component leads to higher retention. Third is emotion, the affective component in learning, which is better achieved where the power to induce empathy, excitement, calm and so on is easier. Fourth is context, where providing an albeit simulated context aids retention and recall. Fifth is transfer, where all of these conditions lead to greater transfer of knowledge and skills to the real world.
This study from the University of Maryland starts to confirm my thoughts. It points towards possible improvements in efficacy, compared to 2D screens and tablets. The researchers created a 3D Memory Palace – or, in modern psychology, ‘spatial mnemonic encoding’ – an idea that goes back to the Greeks, and a well-known method of improving retention. Try it for yourself: place each of the first five Kings of England – William I, William II, Henry I, Stephen, Henry II – in your house, one at the front door, the next in the hall and so on. Now try to recall them. You’ll soon have all 40-plus monarchs in your mind…
In the Maryland study, 40 people were split into two groups. Both were first shown printed pictures of famous people. One group then saw them in imaginary locations in VR; the second group saw the same on a 2D desktop screen. The VR group performed 8.8% better at recalling faces in the correct locations, and 38 of the 40 participants said they preferred VR for that learning task. Further questioning revealed that learners reported an increased sense of ‘focus’ within VR. The researchers also felt that physical movement within the VR world provided an experiential component that deepened the learning experience.
My guess is that more research of this type will also reveal increased retention and recall of knowledge and skills, as well as increased transfer, once real-world tasks are involved. These are likely to occur in healthcare, where I've outlined 25 potential uses, many of which have already been realised, but there are many other potential uses.
Interestingly, WildFire, the AI content-creation service, works within VR. So you can place your online learning in context to increase retention, as the study suggests. In VR it uses 100% voice input for both navigation and interaction.
Bibliography
Krokos, E. and Plaisant, C. (2018). Virtual memory palaces: immersion aids recall. Virtual Reality, May 2018.


University faculty believe in Learning Styles and promote it to students while their Teacher Training departments say it's a myth

In a 2017 study by Newton and Miah, from the University of Swansea, ‘Evidence-Based Higher Education – Is the Learning Styles ‘Myth’ Important?’, 58% of faculty believed Learning Styles to be beneficial. Only 33% actually used Learning Styles but, remarkably, 32% of faculty continued to believe in their use even after being presented with the evidence that shows they don’t work. In the US, a study by Dandy and Bendersky (2014) showed that 64% of faculty believed in the efficacy of learning styles.
This is far less than the reported figures for schoolteachers, where Dekker (2012) reported that 93% believed that learning styles caused better learning and Simmonds (2014) reported that 76% of teachers used Learning Styles. What is notable is that the very universities whose Education Departments train teachers – training that would eschew such theory – still have it as a basic belief among the majority of their teaching staff.
This is understandable, as university teachers get rather cursory teacher training. What is odd is the complete lack of consistency within Higher Education, where they have the expertise to scotch the myth, and where the teaching is clearly at odds with the research and theory that is taught.
What is even more remarkable is that many universities openly and actively promote Learning Styles on their main websites to their students. Surely a simple search would expose this to the education departments who could insist on it being removed? It's a bit like the physics department teaching the Copernican model while other faculty insist that the Sun goes around the Earth.

Promoted
Here are just a few, found after a cursory search; I could easily have added dozens more:
Open University
Trinity College Dublin
University of Southampton
Birkbeck, University of London
Manchester Metropolitan University
University of Leicester
University of Sheffield
University of Brighton
University of Edinburgh
Cambridge Assessment English (part of Cambridge University) actually teaches it on their FutureLearn MOOC.
Conclusion
It would seem that the Learning Styles myth is sustained in many ways, but its roots are in the education institutions – schools and universities – that continue to peddle the theory to their students. What is odd is the lack of action to eliminate the phenomenon.
Bibliography
Dandy, K., and Bendersky, K. (2014). Student and faculty beliefs about learning in higher education: implications for teaching. Int. J. Teach. Learn. High. Educ. 26, 358–380.
Dekker, S., Lee, N. C., Howard-Jones, P., and Jolles, J. (2012). Neuromyths in education: prevalence and predictors of misconceptions among teachers. Front. Psychol. 3:429. doi: 10.3389/fpsyg.2012.00429
Newton, P. M. and Miah, M. (2017). Evidence-Based Higher Education – Is the Learning Styles ‘Myth’ Important? Front. Psychol., 27 March 2017.
Simmonds, A. (2014). How Neuroscience Is Affecting Education: Report of Teacher and Parent Surveys. 


Thursday, June 14, 2018

To Siri With Love - how chatbots are becoming social companions and teachers...

The New York Times carried a heartwarming story, “To Siri With Love”, by a mother with an autistic son, Gus. Siri was a godsend, as it never tired of answering his questions. Far from isolating Gus, Siri was a companion and lifeline. His mother was grateful, as it helped both her and Gus deal with his condition. Gus started to articulate more clearly in order to be understood, and Siri proved to be a non-judgemental friend and teacher. Chatbots are starting to appeal to more and more of us. They are an integral part of the social landscape.
So there’s a new kid on the block….
Social bots
There is an assumption that ‘social’ in learning, always means one human being talking to another or others. That could be synchronously, face-to-face, telephone, webinar and so on, or asynchronously online through social media, chat and so on. But this is to miss a trick. Increasingly, we will have bots as part of our social networks. They may be bots delivering a specific service or learning experience. We have finance bots delivering personal financial services from your bank. Health bots dealing with your health and no end of customer care bots. The fact that they are not people does not mean they are not part of our social network.
We have ample evidence from Nass and Reeves (The Media Equation) that we anthropomorphise technology, and bots in particular have been shown to successfully ‘pass’ as humans, thereby passing the Turing test. The Georgia Tech tutor bot not only passed this test, the students put it up for a teaching award. Google Duplex has successfully made bot calls to a restaurant and a hairdresser, booking appointments. Bots in general, whether useful, benign or malicious, are now part of our social networks.
Ecosystem
In fact, we have every reason to expect that they will play an increasing role in our social ecosystem, as dialogue and voice play a greater role in human-machine interaction. They are already in our homes through Alexa, Google Home and other devices. Sex robots are essentially bots inside robot bodies. We make calls to bots on the telephone. But mostly, we encounter bots online. A recent Pew study followed Twitter activity and identified surprising levels of bot activity:
1. Two-thirds (66%) of all tweeted links were shared by suspected bots. 
2. Suspected bots also accounted for 66% of tweeted links to sites focused on news and current events.
3. Among news and current events sites, those with political content saw the lowest proportion (57%) of bot shares.
4. About nine-in-ten tweeted links to popular news aggregation sites (89%) were posted by bots, not human users.
5. A small number of highly active bots were responsible for a large share of links to prominent news and media sites.
This raises interesting questions about our awareness of bots and their influence.
Bots and learning
In learning, however, bots are, for the moment, in a controlled environment. At work, bots such as Otto pop up within workflow tools such as Slack, Microsoft Teams and Facebook at Work. In learning, we have bots that increase student engagement, bots that provide learner support, tutorbots, mentorbots, assessment bots and wellbeing bots. These are guided learning bots, with limited capabilities, but this often matches the need to stick to a guided learning path or defined domain. Structure and focus in learning are often useful.
As the technology progresses, they will get smarter with the capability to sustain dialogue, retain memories of all previous conversations, be sensitive to content and become more personal. They will play an increasing role in engagement, support and delivery. This will happen in a piecemeal fashion but who knows, in time they may master the skills necessary to be a good teacher or trainer.
The learning game used to be simple. We had teachers and learners. Sure, teachers learnt from other teachers and learners, and learners learnt from teachers and other learners. Now we have these interlopers, who can both teach and learn. Machine learning allows bots to learn – very quickly. We have seen their success in chess, Go and poker, and increasingly they are mastering other human activities. What’s new in AI is that modern techniques allow them to play themselves millions of times in a very short period, or even set themselves up to be adversarial, leading to rapid improvement and competence. Machine learning, reinforcement learning, Generative Adversarial Networks and many other variants of ‘learning’ methods are driving the success of AI. Social learning networks now have these new entrants – bot learners that learn fast.
Anonymity
This raises several questions. Should anonymous bots be allowed? Bots are not conscious, not even cognitive. They mimic human behaviour and, in this sense, fool the user into thinking they are human or have human qualities. They are faking it. There is an argument for not allowing anonymous bots, as they break the trust one assumes in dialogue – that the other agent is a real person with moral responsibilities, not a piece of software with no moral sense. Alternatively, we could see this in purely utilitarian terms and judge the advantages purely by outcomes – better teaching and learning.
On the other hand, the ‘anonymity’ of bots can be their cardinal advantage. I spent ten days on the wellbeing bot ‘woebot’. Its advantage is its anonymity. Few young people will want to admit to their teacher, lecturer or adult that they are having mental health problems, due to fear and embarrassment. Many will feel more comfortable dealing with a helpful, anonymous bot.
Bias
One could argue they are a conduit for bias. This could be true in news aggregation, but I doubt it is much of a problem in learning. All humans are biased and, while bots can embody intended or unintended bias, this can be eliminated. Kahneman, who got the Nobel Prize for his work in this area, describes human bias as uneducable. In practice, I think that bots can easily eliminate gendered language, confirmation bias, anchoring and many other biases that seriously distort educational and learning goals. This may be our best bet for eliminating the huge amount of bias that exists within the system.
Living with bots
The bots are here. At present, they are child-like, narrow in domain and capabilities but nevertheless useful. We must learn to live with them. In a sense, bots have always been around. When I read a novel, the narrator and characters are essentially agents that have fooled my imagination into thinking they are real people. We have no problem in reading fact or fiction from the past, even from dead authors but still see them as being in the moment, when they address us in their texts. What gives computer bots extra potency is their seeming, living presence and adaptability. They respond, answer back, ask us things and get personal. Increasingly they are the mediators. But that’s essentially what teaching is – mediation. 
Bots and social constructivism
Strangely enough, it is the social constructivists, led by Vygotsky, who should celebrate bots the most. If knowledge is the internalisation of social activity, then bots are a constructivist’s dream. It also fits with Bandura’s Social Learning Theory, where one learns through social observation and modelling. I don’t buy this theory, but it is interesting that the most vociferous anti-technology critics, who rely on a theory of social learning, may be sabotaged by technology that plays their game. If bots are social agents, why not exploit them to the full?
Conclusion
What bots can and will do, is scale social learning. They don’t sleep eight hours a day, get distracted and bored. They can also download, network and learn from both us and themselves. And they never die – they only get better.


Thursday, June 07, 2018

Unconscious bias training a waste of time – 7 reasons why Starbucks training will not work

Racism and sexism are serious problems, but not all training efforts are serious solutions. The latest fad is training courses that purport to tackle ‘unconscious bias’. (Note that I'm not attacking training on conscious racism and sexism, only the idea that training should focus on the unconscious.) Starbucks are the latest (too little, too latte), but it's everywhere. There is something truly creepy about HR's move on the unconscious. Since when did it become acceptable to see an employee’s ‘unconscious’ as an addressable area for ‘retraining’? This is far worse than the Ponzi scheme that is Myers-Briggs. It is flawed and needs to stop. There are serious problems with ‘unconscious bias’ courses.
1. Unconscious is wrong target
Apart from the dedicated racist, few will admit to being racist in surveys. Many may hold mild or even strong views on race without admitting them to anyone, certainly not to researchers, who would almost certainly be seen as judgemental. This has led L&D to turn to the unconscious. Big mistake. Explicit, conscious racism and sexism may actually be the true focus for training, not the diversion of ‘unconscious bias’, all on the basis of seriously flawed psychometric tests. 
2. Not measuring unconscious bias at all
The Banaji and Greenwald IAT (Implicit Association Test), created in 1994, is one of a number of tests being foisted upon millions of employees. Just because people select words from pairs does not mean that this taps into their unconscious. This paper sends several cannonballs over the bow of the supposed ship sailing into the uncharted sea of the unconscious. Just because someone can’t explicitly explain something does not mean that it has its origins in the unconscious. There are plenty of alternative explanations with more plausible causality. You may simply be registering familiarity (not bias) in matching words with images. Alternatively, you may be using conscious but instantaneous recognition, not the unconscious, to link the words and images.
3. Wrong language
In fact, the mutual exclusivity of conscious and unconscious bias is far from proven, and psychologists are wary of even using the word. One can add the prefix ‘un’ to the word ‘conscious’ and assume this names something clear – the ‘unconscious’, a place where hidden biases are stored in little Pandora’s boxes. But the ‘unconscious’ is problematic in psychology. What is the difference between a memory and an unconscious event? If you read the literature in this field you will find the word ‘unconscious’ strangely absent. Psychologists tend to use the terms ‘implicit’ and ‘explicit’, which cut loose from the terminology of psychotherapy and bring in a wider range of phenomena. Psychologists are wary of this binary opposition between unconscious and conscious. Of course, selling a course called ‘Implicit Beliefs’ may not bring in the expected sales.
4. Unreliable
Reliability matters in tests. You don’t want a test that gives very different results when the same person retakes it. Guess what? The IAT is unreliable, so it should NOT be used as a diagnostic test, as there is not enough evidence that it predicts your behaviour. To be precise, the desired retest reliability should be above 0.7. It is, in fact, 0.44 for the race IAT and around 0.5 for IAT tests overall.
5. Not predictive
Even if we assume the unconscious has some status, the causality between beliefs and behaviour can still be studied. Here’s the really bad news: four separate meta-analyses show that such tests are weak predictors of behaviour. This is a real problem because, if unconscious bias has almost no causal effect on behaviour, all the work spent countering it is largely pointless.
6. Doesn’t change behaviour
Even the people who work in this area warn against the inference that reducing unconscious bias reduces racist or sexist behaviour. In fact, a meta-study in 2017, that looked at 494 previous studies, showed no evidence for the reduction of unconscious bias having an effect on biased behaviour. Let’s be clear, if true, then what is claimed by those who sell this training and much of the training is quite simply, a waste of time. 
7. Record of failure
The world is littered with courses on diversity, racism and sexism. The world is NOT littered with evidence that they work. Major studies from Dobbin, Kalev and Kochan show that diversity training does not increase productivity and may, in fact, produce a backlash. Most don’t know if it works, as evaluations are as rare as unicorns. Thomas Kochan, Professor of Management at MIT’s Sloan School of Management, had previously come to the same conclusions in a five-year study: "The diversity industry is built on sand," he concluded. "The business case rhetoric for diversity is simply naive and overdone. There are no strong positive or negative effects of gender or racial diversity on business performance." Harvard’s Frank Dobbin conducted the first major, systematic study of diversity programmes across 708 private sector companies, using employment data and surveys on employment practices. His research concluded that “practices that target managerial bias through… diversity training show virtually no effect”. Dobbin’s research went further: “Research to date suggests that… training often generates a backlash.” Many other studies reach similar conclusions (Kidder et al. 2004, Rynes and Rosen 1995, Sidanius et al. 2001, Naff and Kellough 2003, Benedict et al. 1998, Nelson et al. 1996). Yet we persevere with the idea that ‘training’ is the answer to these serious problems.
A way forward
Going back to the main point of this article, training in ‘unconscious bias’ seems to be yet another Ponzi scheme that fits nicely with the zeitgeist. At best it is a clear example of enormous overreach, at worst falsely accusatory and a waste of time. My conclusion is that, if the identification of unconscious bias is a waste of time, as is training around that concept, that still leaves us with conscious bias. All is not lost. Starbucks need to focus on conscious racism, not psychobabble.
Who better to turn to than the world’s acknowledged expert on ‘bias’, Daniel Kahneman, who won the Nobel Prize for his work in the field. His book ‘Thinking, Fast and Slow’ is essential reading if you are interested in how bias works in the mind. (If you’d prefer a less academic book that explains it in more readable form, ‘The Undoing Project’ by Michael Lewis is excellent.) Coming back to Kahneman, in the last two pages of the book he addresses the issue of combatting bias and starts by saying that… 
“System 1 is not readily educable.” 
So don’t look to changing System 1 – where the supposed ‘unconscious bias’ is said to exist – thinking that you can eliminate unconscious bias. His recommendation is…
“The way to block errors in System 1 is simple in principle: recognise the signs that you are in a cognitive minefield, slow down, and ask for reinforcement from System 2.”
This is good advice, so how do we do this? Kahneman suggests that organisations use process and “orderly procedures”, such as “useful checklists… reference forecasts…premortems”. I agree. Much is to be gained through organisational checks and balances, not falsely accusatory training based on unreliable, supposedly diagnostic tools.
