Saturday, June 30, 2018

Clever AI/AR/Teacher hybrid systems for classroom use

Most AI-driven systems deliver content via screens to the student and then dashboards to the teacher. But there is a third way – hybrid AI/AR/Teacher systems that give the teacher enhanced powers to see what they can’t see with their own eyes. No teacher has eyes in the back of their head but, like a self-driving car, such a system can have eyes everywhere – eyes that recognise individual students, read their expressions, identify their behaviours and provide personalised learning experiences and feedback. You become a more powerful teacher by seeing more, getting and giving more feedback, and having less admin and paperwork to do. The promise is that such hybrid systems allow you to do what you do best – teach, especially addressing the needs of struggling students.
AI/AR in the classroom
I’ve written about the use of 3D video in teacher training before, but this AR (Augmented Reality) idea struck me as clever. Many uses of AI lie outside the classroom; this one augments the strengths of the teacher by integrating dashboards, personal symbols and other AR techniques into the classroom and the practice of teaching.
Ken Holstein, at Carnegie Mellon, seems like an imaginative and creative researcher, and has been looking at hybrid teacher/AR/AI systems that present adaptive software but also highlight each individual student's progress – whether they’re attentive, struggling, need help and so on. Symbols appear above the heads of each student. The teacher wears glasses that display this information, linked to a back-end system that gathers data about each student’s performance.
It does, of course, all seem very Big Brother, to some even monstrous, especially to those comfortable with traditional classroom teaching. However, as results seem to have plateaued in K12 education, we may need to make teachers more effective by enabling them to focus on the students who are having difficulties. These ideas make personalised learning possible not by replacing the teacher (the idea behind most AI/adaptive systems) but by giving the teacher individual feedback, displayed above the head of each student, so that personalised learning can be realised.
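To make the idea concrete, here is a minimal sketch, in Python, of the kind of per-student record such a back-end might push to the teacher’s glasses, with a simple rule for choosing the symbol displayed above each student’s head. To be clear, this is my own illustration, not Holstein’s actual system – all names and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class StudentStatus:
    student_id: str
    attention: float   # 0.0 (distracted) to 1.0 (fully attentive)
    error_rate: float  # proportion of recent exercises answered wrongly
    idle_seconds: int  # time since the student last touched the software

def overlay_symbol(status: StudentStatus) -> str:
    """Map a student's live data to one AR symbol for the teacher's glasses."""
    if status.idle_seconds > 120 or status.attention < 0.3:
        return "DISTRACTED"
    if status.error_rate > 0.5:
        return "STRUGGLING - NEEDS HELP"
    return "ON TASK"

# Hypothetical student: attentive, but getting most answers wrong.
print(overlay_symbol(StudentStatus("s42", attention=0.8, error_rate=0.6, idle_seconds=10)))
# -> STRUGGLING - NEEDS HELP
```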
Face recognition in the classroom
Let’s up the stakes with this face recognition system used in China. It recognises student faces instantly, as they arrive for school, so there is no need for registration. In the classroom it scans the students every 30 seconds, recognising seven different expressions (neutral, happy, sad, disappointed, angry, surprised and so on), as well as six types of behaviour, such as reading, writing and being distracted. So it helps the teacher manage registration, performance and behaviour.
They also claim that it helps teachers improve, by adapting to the feedback and statistical analysis they receive from the system. When I’ve shown people this system, some react with horror, but if we are to reduce teacher workload, should we consider such systems to help with problems around non-teaching paperwork, student feedback and classroom behaviour?
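For the technically minded, the scanning behaviour described implies a loop something like the following sketch, using OpenCV for face detection. The expression classifier here is a hypothetical stand-in – the Chinese system’s actual models and label set are not public.

```python
import time
import cv2  # pip install opencv-python

# Standard OpenCV face detector, shipped with the library.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_expression(face_img) -> str:
    # Placeholder: a real system would run a trained classifier here and
    # return one of its expression labels (neutral, happy, sad, angry...).
    return "neutral"

camera = cv2.VideoCapture(0)
while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        print("expression:", classify_expression(gray[y:y + h, x:x + w]))
    time.sleep(30)  # the system described scans the class every 30 seconds
```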
Conclusion
What seems outlandish today often turns out to be normal in the future – the internet, smartphones, VR. Combinations of technology are often more effective than single approaches – witness the smartphone or the self-driving car. These adaptive AR/AI hybrid systems may turn out to be very effective by being sensitive to both teacher and student needs. The aim is not to replace but to enhance the teacher's skills, giving them real-time data, personal feedback on all students in their class and data to reflect on their own skills. Let’s not throw the advantages out before we’ve had time to consider the possibilities.

Friday, June 22, 2018

AI and religious zealotry – let’s not fall for anthropomorphism, techno-prophecies, singularity & end-of-days

AI is unique as a species of technology in that it induces speculation that falls little short of religious fervour. Elon Musk and Stephen Hawking, no less, have made the case for AI being an existential threat, a beast that needs to be tamed. On the other side, in my view more level-headed, are thinkers such as Steven Pinker and many practitioners who work in AI, who claim that much of this is hyperbole.
The drivers behind such religiosity are, as Hume said in the 18th century, a mixture of our:
1) fears, hopes and anxieties about future events
2) tendency to magnify
From the Greeks onwards – the Promethean myth, its resurrection by Mary Shelley in ‘Frankenstein’ in the 19th century, then a century of film from ‘Metropolis’ onwards – the perceived loss of human autonomy has fuelled our fears and anxieties about technology. The movies have tended to rely on existing fears about commies, crime, nuclear war, alien invasions and whatever fear the age throws up. Y2K was a bogus fear; the world suffered no armageddon. So let’s not fall for current fears.
The tendency to magnify shows itself in the exaggeration around exponentialism, the idea that things will proceed exponentially, without interruption, until disaster ensues. Toby Walsh, an AI researcher, warns us not to readily accept the myth of exponential growth in AI. There are many brakes on progress, from processing power to backpropagation. Progress will be slower than anticipated.
The prophets of doom seem to ignore the fact that it is almost inconceivable that we won’t anticipate the problems associated with autonomy, then regulate and control them, with sensible engineering solutions.
The airline industry is one of the wonders of our age, where most commercial airplanes are essentially robots that switch to autopilot as low as 200 feet, then fly and land without much human intervention. Security, enhanced by face recognition, allows us to take international flights without speaking to another human being. Soaked in AI and automation, its safety record is astounding. Airplanes have got safer because of AI, not in spite of AI. Similarly, with other applications of AI, we will anticipate and engineer solutions that are safe. But there are several specific tendencies that mirror religious fervour that we must be aware of:
Anthropomorphism
AI is not easy – it's a hard slog. I agree with Pinker when he says that being human is a coherent concept, but there is no real coherence in AI. Even if we imagine a coherent general intelligence, there is no reason to assume that AI will adopt attitudes that we, as humans, have accumulated over 2 million years of evolution. We tend to attribute human qualities to the religious domain, whether God, Saints or our binary, moral constructs: God/Devil, Saint/Sinner, Good/Evil, Heaven/Hell. These moral constructs are then applied to technology, despite the fact that there is no consciousness, no self-awareness and no ‘intelligence’, a word that often misleads us into thinking that AI has thoughts. Blinded by the word ‘intelligence’, we anthropomorphise, transposing our human moral schemas onto indifferent technology. So what if IBM Watson won at Jeopardy and AI systems triumph at Go and poker – the AI didn’t know it had won or triumphed.
Prophecy
Another sign of this religious fervour is ‘prophecy’. There’s no end of forecasts and extrapolations, best described as prophecies, about future progress and fears in AI. The prophecies, as in religion, tend to be about dystopian futures. Pestilence and locusts have been replaced by nanotechnology and micro-drones. Kurzweil, that high priest of hyperbole, has taken this to another level, with his diagrammatic equivalent of rapture… the singularity.
Singularity
The pseudo-religious idea of the ‘singularity’ is the clearest example of religious magnification and hyperbole. Just as we invented religious ideas, such as omniscience, omnipresence and omnipotence, we draw exponential curves and imagine that AI moves towards similarly lofty heights. We create a technical Heaven, or for some Hell. There will be no singularity. AI is an idiot savant, smart only in narrow domains but profoundly stupid. It’s only software.
End-of-days
Then there is an ‘end of days’ dimension to this dystopian speculation, the idea that we are near the end of our reign as a species and that, through our own foolishness and blindness to the dangers of AI, we will soon face extinction.
There is no God
One fundamental problem with all of this pseudo-religious fervour is the simple fact that AI, unlike our monotheistic God, is not a singular idea. It has no formal and precise definition. AI is not one thing, it is many things. It’s simply a set of wildly different tools. In fact, many things that people assume are AI, such as factory robots, have nothing to do with AI, and many other software applications are just statistical analysis, data mining or some other well-known technique. Algorithms have been around since Euclid, 2300 years ago. It has taken over two millennia of maths to get here. Sure, we have data flooding from the web, but that’s no reason to jump two by two onto some imaginary Ark to save ourselves and all organic life. Believe me, there are many worse dangers – disease, war, climate change, nuclear weapons…
Blinded by bias
The zealotry of the technophobes is akin to that of the fanatics in The Life of Brian. What has AI ever done for us? Google search… it accelerates medical research, identifies disease outbreaks, spots melanomas, diagnoses cancer, reads scans and pathology slides, drives self-driving cars… let’s see. Let’s not see AI as a Weapon of Math Destruction and focus relentlessly on accusations of bias, which turn out to be the same few second-hand case studies, endlessly recycled. All humans are biased, and while bias may exist in software or data, that form of mathematical bias can be mathematically defined and dealt with, unlike our many human biases, which Daniel Kahneman, who got the Nobel Prize for his work on bias, described as ‘uneducable’. Machine learning, and many, many other AI techniques, depend necessarily on making mistakes as they optimise solutions. This is how the technology works, learns and solves problems. Remember – it’s only software.
Conclusion
We need to take the ‘idiot savant’ description seriously. Sure, there are dangers. Almost all technology has a calculus of upsides and downsides. Cars mangle, kill and maim millions, yet we still drive. The greatest danger is likely to be military or bad-actor use of weaponised AI. That we should worry about and regulate. AI is really hard, it takes time, so there's time to solve the safety issues. All of those dozens of ethical groups that are springing up like weeds are largely superfluous, apart from those addressing autonomous weapons. There are plenty of real and present problems to be solved – AI is not one of them. Let’s accept that AI is like the God Shiva: it can create and destroy. Don’t let it be seen solely as a destructive force; let’s use it creatively, in making our lives better, especially in health and education.



Thursday, June 21, 2018

Blockchain – got married on it but fell out of love with it….

Way back, in 2001, I built a decentralised P2P learning system. Long story, but it eventually produced a successful company, Learning Pool (I’m still involved), albeit after a pivot into more mainstream technology. This led to an early interest in Blockchain when it first appeared. I gave talks on Blockchain, even got remarried on Blockchain. But I’ve come round to seeing it not so much as a solution to problems but as a solution looking for a problem. Having read tons on the subject and got far too interested in Satoshi Nakamoto, I saw yet another presentation this week touting it as the next big thing in learning, and gave it a rethink. Nothing wrong with changing your mind on something. Here are my thoughts...
1. Extremes of capitalism
You can’t go all ‘activist’ on me and blame the man, then use Bitcoin (and therefore Blockchain). All you’re doing is playing around in the extremes of capitalism – the really bad bit, where capital is hidden, secret and not subject to tax. I said two years ago that the Wild West world of Bitcoin could do with some Sheriffs. I'm now of the opinion that it needs to be closed down.
2. Scams
Serious problems have emerged with the technology. Bitcoin looks increasingly like a money laundering scam, wracked with hacks, fraud, theft, ransoms and Ponzi schemes. The hackless future that was promised turned out to be a bit of a dystopian Westworld. This should worry those who want it used in the public sector.
3. Security
Another problem with many of the proprietary solutions is not Blockchain but the security layer. It has been noted that many of these are just Blockchain distributed databases with all the front-end security vulnerabilities of older systems.
4. Control
Sure, Blockchain was created to democratise, decentralise and disintermediate institutions, so why keep it locked up within institutions? Much of the interest I now see is from traditional purveyors trying to lock down the technology for their own ends. A private Blockchain isn’t really a decentralised Blockchain, in that it is 100% owned. It’s basically a transaction ledger for interested parties, not the democratising, decentralising, disintermediating force many imagined.
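It is worth seeing how bare-bones that ledger really is. A minimal sketch: each block carries the hash of its predecessor, so past entries cannot be quietly rewritten – and that, minus the mining, consensus and distribution machinery, is the core data structure.

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Create a block whose hash covers its contents and its predecessor."""
    block = {"transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def recompute_hash(block):
    body = {"transactions": block["transactions"], "prev_hash": block["prev_hash"]}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

chain = [make_block(["genesis"], prev_hash="0" * 64)]
chain.append(make_block(["Alice pays Bob 5"], chain[-1]["hash"]))
chain.append(make_block(["Bob pays Carol 2"], chain[-1]["hash"]))

# Tampering with an earlier block breaks every link after it:
chain[1]["transactions"] = ["Alice pays Mallory 500"]
print(chain[2]["prev_hash"] == recompute_hash(chain[1]))  # -> False
```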
5. Environmentally disastrous
Bitcoin, for example, needs an enormous amount of processing power to verify transactions. This is huge – estimated to exceed the individual energy consumption of 159 countries. To be fair, this is a Bitcoin problem, not a Blockchain problem, but it is a PR issue, as Bitcoin may take the whole thing down, casting a big shadow over the whole Blockchain industry.
6. Credentialism
But the main problem is that Blockchain in learning simply reinforces runaway credentialism. Bryan Caplan’s book, The Case Against Education, shows that Higher Education has expanded on the back of ‘signalling’. This has resulted in credential inflation, where more and more young people stay at college for longer and longer, just to get the inflated paper they need for a job. If Blockchain simply makes credentialism easier, then forget it. Ah, I hear you say, but it’s really about micro-credentialism. That’s fine, but I also think that this has had its day. The badges movement has run out of steam, as badges turned out to be motivationally suspect, lacking in objectivity and therefore credibility, and poorly branded. It has flopped.
7. Complexity
Lastly, it is just too complex an idea to sell, and education is a notoriously slow learner. Education and training also struggle to cope with innovative technology. Their infrastructure is largely old LMS technology and flat HTML delivery. Anyone investing in Blockchain in education will have to be in for the long haul, and I mean a very long haul.
Conclusion
I'm not saying that Blockchain has no role to play in the world, only that it doesn't, as yet, seem to have a clear role in education and training. If it solves problems in microfinance or in healthcare, fine. Blockchain is basically a transaction ledger, and learning is not primarily about transactions. Not only is there no real problem to be solved in learning, it may just exacerbate over-credentialing. Sure, there are lots of projects around, funded by bodies that wouldn’t know a block if it was in their soup. It looks great on a grant application but, if I were to be honest, I haven’t seen a single example in education and training that has legs. I was wrong.

Monday, June 18, 2018

Personalised learning – what the hell is it? 10 things that work...

In one sense all learning is personal, as it requires personal attention, effort and practice. But ‘Personalised Learning’ as a term has come to mean the tailoring of learning to the individual, with a sensitivity to their individual needs. The goal is to increase the efficacy of teaching and learning, lower failure rates and increase access, with scalable personalisation.
Myths
First, we need to dispense with the myth that this is about Learning Styles or some other phantom phenomenon. This is one of the commonest myths in education, deeply embedded but completely wrong-headed. Personal needs are best defined in terms of a whole raft of data about what you know, what you need to know, and your personal context and circumstances. I like to think of personalised learning as a solution to learning difficulties. We all have learning difficulties, as our brains are inattentive, get easily overloaded, forget most of what we’re taught, sleep 8 hours a day, are hopelessly biased, can’t download and can’t network. Then there are the delivery problems associated with the current model, which is expensive, time-consuming, inefficient and non-scalable. If, as some believe, personalised learning offers a way forward for technology enhanced learning, then let’s give it a chance.
Bloom’s promise
Personalised learning can be delivered offline, but it requires one-to-one tuition. Hence the great interest in Bloom’s famous paper, The 2 Sigma Problem, which compared the straight lecture, the lecture with formative feedback (mastery learning) and one-to-one tuition. Taking the straight lecture as the baseline, he found:
a 1 sigma improvement in mastery for the formative approach to teaching, with the average student performing better than 84% of the control group, and an astonishing
2 sigma improvement for one-to-one tuition, with the average tutored student performing better than 98% of the control group.
Google’s Peter Norvig famously said that if you read only one paper to support online learning, this is it. In other words, the increase in efficacy for one-to-one tuition, because of the increase in relevant and on-task learning, is immense. This paper deserves to be read by anyone looking at improving the efficacy of learning, as it shows hugely significant improvements from simply altering the way teachers interact with learners.
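For those wondering where the 84% and 98% figures come from: assuming normally distributed test scores, a gain of 1 or 2 standard deviations (sigma) maps to a percentile via the normal curve’s cumulative distribution function, as this quick check shows.

```python
# A shift of 1 or 2 standard deviations maps to a percentile via the
# cumulative distribution function of the normal curve.
from scipy.stats import norm

print(norm.cdf(1))  # ~0.841: mastery learning, better than 84% of controls
print(norm.cdf(2))  # ~0.977: one-to-one tuition, better than ~98% of controls
```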
Personalised promise through technology
However, given the difficulty of scaling up human one-to-one tutoring, the promise of Personalised Learning is now seen as being practically realised through technology. A whole range of technologies has emerged over the last few years that has accelerated progress right across the learning journey – interface advances, engagement, support, content, curation, practice, assessment, tutoring, even wellbeing. Rather than focusing on presentation – souped-up graphics and gamification – these rely on smart software, AI, to deliver personalised learning, not as one thing but as many things in the complex business of learning.
1. Personal interface
First up is a general move in interface technology, where AI is the new UI. Google, Facebook, Twitter, Amazon, Netflix and almost everything we do online is mediated by AI. Interaction is shifting towards ‘voice’, which is AI-driven. This, we can expect, will have some influence on the delivery of learning. It is even possible that AI-driven speech will radically change learning delivery. At WildFire, we are using voice-only navigation and input by learners for the entire learning experience. We create podcasts on the fly, using AI. Alexa, Google Home and other consumer devices use voice as both input and output. Interfaces are becoming more frictionless, which reduces cognitive load for learners and makes learning itself more frictionless.
2. Personal engagement
Learners are lazy. There, I’ve said it. Procrastination is the norm. Most do not do the pre-reading, leave assignments and essays until the last minute and miss lectures, and behaviour in schools can be a real problem. Bots, such as Differ, can be used to engage and nudge students. This ‘push’ side of tech, whether by bots, email or messaging apps, can be personalised to match the individual’s pattern of needs and behaviour. Face recognition tech can also be used for registration, even for spotting learning behaviours. In general, we know how engaging tech is for learners – we spend most of our time complaining about its distractive effects. Suppose we take some of those engagement tricks and apply them to learning.
3. Personal support
Direct support through bots has already had success at Georgia Tech, where the students not only mistook the bot for an academic teacher but put it up for a teaching award. Bots, such as Otto, embedded in the workflow, do a similar job in corporate learning. Support in online learning may come through access to resources, content and people, but it helps if the system knows what your personal needs are.
4. Personal content (adaptive)
The personalised delivery of appropriate learning experiences – based on data about you, about others taking the course, even your (and others’) behaviour on previous courses – can be used to decide what you need at that exact moment. This ‘adaptive’ learning is almost certain to increase the efficacy of online learning. It resequences learning and, with formative feedback, can deliver on the promise of personalised learning, educating everyone uniquely. Having been involved in this area for a number of years, I’ve seen some convincing results from real students in real institutions. This holds great promise.
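As a flavour of how the core adaptive decision can work, here is a minimal sketch: keep a running mastery estimate per topic and serve the weakest topic whose prerequisites are met. Real adaptive engines are far richer; the topics, threshold and update rule here are purely illustrative.

```python
# Illustrative mastery estimates (0 to 1) and topic prerequisites.
mastery = {"fractions": 0.9, "ratios": 0.4, "percentages": 0.6}
prerequisites = {"fractions": [], "ratios": ["fractions"], "percentages": ["fractions"]}
THRESHOLD = 0.8  # mastery level treated as "learned"

def next_topic():
    """Serve the weakest topic whose prerequisites are already mastered."""
    candidates = [t for t, m in mastery.items()
                  if m < THRESHOLD
                  and all(mastery[p] >= THRESHOLD for p in prerequisites[t])]
    return min(candidates, key=lambda t: mastery[t]) if candidates else None

def record_answer(topic, correct, rate=0.2):
    # Nudge the estimate towards 1 or 0 after each answer - a crude
    # stand-in for Bayesian knowledge tracing.
    target = 1.0 if correct else 0.0
    mastery[topic] += rate * (target - mastery[topic])

print(next_topic())           # -> ratios (the weakest unlocked topic)
record_answer("ratios", True)
print(mastery["ratios"])      # the estimate rises after a correct answer
```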
5. Personal text analysis
It is unfortunate that in most institutions the only place you’ll find AI technology, in a personalised sense, is in plagiarism checkers, checking whether you have cheated. Actually, what many are waking up to is the possibility of this same technology being used to help learners complete essays and assignments through formative feedback, using a repeated submit-and-learn model. Personal feedback on specific aspects of your performance, whether writing or performing in a domain-specific assignment, could be one of the big wins. Formative feedback accelerates learning.
6. Personal curation
Learning experiences, especially online, often feel over-directed and fixed, a little straitjacketed. Imagine a system that uses AI to automatically provide links to useful outside sources, relevant to you at the very moment you need them, because it has spotted that you have a problem. We do this in WildFire, where links to outside resources are created on the fly, making the course more porous and encouraging curiosity.
7. Personal practice
An often forgotten dimension of personalised learning is personalised practice. Spaced practice can be delivered in a logarithmic fashion, based precisely on your needs. Learn now and, between now and your exam, new job or whatever, you get a personalised pattern of spaced practice. We have delivered this in WildFire, where you get several bites of the cherry after completing the course, delivered to you by email, to shunt learning from working to long-term memory.
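A minimal sketch of such a schedule: review dates at expanding intervals between finishing the course and the exam. The starting gap and multiplier here are illustrative, not WildFire’s actual parameters.

```python
from datetime import date, timedelta

def spaced_schedule(start: date, exam: date, first_gap_days=1, factor=2.5):
    """Yield review dates at expanding intervals between start and exam."""
    gap = first_gap_days
    when = start + timedelta(days=gap)
    while when < exam:
        yield when
        gap *= factor
        when = when + timedelta(days=round(gap))

# Finish the course on 18 June, exam on 1 September: reviews arrive at
# roughly 1, 2, 6, 16 and 39 day gaps, each one emailed to the learner.
for review in spaced_schedule(date(2018, 6, 18), date(2018, 9, 1)):
    print(review)
```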
8. Personal assessment
Both personal formative and summative assessment are benefitting from AI. Digital identification, through keyboard strokes (Coursera), face recognition and so on, makes sure that the right person is sitting the exam. Formative assessment, often using AI techniques, is being delivered to large numbers of students. Adaptive testing is also available, to provide personalised assessments. We’ve used assessment bots in WildFire to question learners after their more directed learning experiences. Assessment is by definition personal; it’s about time we made it more personal through technology.
9. Personalised tutors
AI is nowhere near being able to replace teachers. But as the technology begins to know you, talk to you, remember what you said and did, and understand the context, we can expect Alexa-like services and tutors to bloom in learning. We see this with Duolingo and many other online services. They get to know you and deliver what you want at that very moment, free from the tyranny of time and space – anyone, anytime, anyplace. This is what Amazon does for shopping, it’s what Cleo and other bots do in finance, it’s what fitness bots do in health. We can expect moves in this direction in learning. Note that this has nothing to do with robot teachers. That idea is as stupid as having a robot drive a self-driving car.
10. Personal difficulties
We have evidence that young people are feeling the pressure, and the statistics for stress, mental health problems, even suicide, are rising. We also know that young learners hide this from their parents, teachers and tutors. One solution is to anonymise help. Woebot and Ellie have already been used and subjected to clinical trials, and the results are encouraging. Beyond this, we already know that personalised learning is a boon to learners with learning difficulties. Individualised responses to physical and psychological issues are now normal in education.
Conclusion
Personalised learning is not one thing, it is many things – an umbrella term for all sorts of applications, especially those driven by AI. We are on the cusp of bringing smart software to deliver smart learning to make people smarter. It will take time, as AI is, at present, only good at very specific, narrow tasks, not general teaching skills. That’s why I’ve tried to break it down into specific sweet spots. But on one thing I’m clear – this is a worthy path to follow.

Saturday, June 16, 2018

Study shows VR increases learning

I have argued that several conditions for good learning are likely to be enhanced by VR. First, there’s increased attention, where the learner is literally held fast within the created environment and cannot be distracted by external stimuli. Second is experiential learning, where one has to ‘do’ something, an active component that leads to higher retention. Third is emotion, the affective component in learning, where VR’s power to induce empathy, excitement, calm and so on makes this easier to achieve. Fourth is context, where providing an albeit simulated context aids retention and recall. Fifth is transfer, where all of these conditions lead to greater transfer of knowledge and skills to the real world.
This study from the University of Maryland starts to confirm my thoughts, pointing towards possible improvements in efficacy compared to 2D screens and tablets. The researchers created a 3D Memory Palace – in modern psychology, spatial mnemonic encoding – an idea that goes back to the Greeks and a well-known method of improving retention. Try it for yourself – place each of the first five Kings of England – William I, William II, Henry I, Stephen, Henry II – in your house: one at the front door, the next in the hall and so on. Now try to recall them. You’ll soon have all 40-plus monarchs in your mind…
In the Maryland study, 40 people were split into two groups. Both were first shown printed pictures of famous people. One group then saw the faces in imaginary locations through a VR headset; the second group saw the same scene on a 2D desktop screen. The VR group showed 8.8% better performance in recalling faces in the correct locations, and 38 of the 40 participants said they preferred VR for that learning task. Further questioning revealed that learners reported an increased sense of ‘focus’ within VR. The researchers also felt that physical movement within the VR world provided an experiential component that deepened the learning experience.
My guess is that more research of this type will also reveal increased retention and recall of knowledge and skills, as well as increased transfer, once real-world tasks are involved. These gains are likely to occur in healthcare, where I've outlined 25 potential uses, many of which have already been realised.
Interestingly, WildFire, the AI content-creation service, works within VR. So you can place your online learning in context to increase retention, as the study suggests. In VR it uses 100% voice input for both navigation and interaction.
Bibliography
Krokos, E., Plaisant, C. and Varshney, A. (2018). “Virtual memory palaces: immersion aids recall”, Virtual Reality, May 2018.

University faculty believe in Learning Styles and promote it to students while their Teacher Training departments say it's a myth

In a 2017 study by Newton and Miah, from Swansea University – ‘Evidence-Based Higher Education – Is the Learning Styles “Myth” Important?’ – 58% of faculty believed Learning Styles to be beneficial. Only 33% actually used Learning Styles but, remarkably, 32% of faculty continued to believe in their use even after being presented with the evidence that shows they don’t work. In the US, a study by Dandy and Bendersky (2014) showed that 64% of faculty believed in the efficacy of learning styles.
This is far less than the reported figures for schoolteachers, where Dekker (2012) reported that 93% believed that learning styles caused better learning and Simmonds (2014) reported that 76% of teachers used Learning Styles. What is notable is that the very universities whose Education Departments train teachers – and would eschew such theory – still have it as a basic belief among the majority of their teaching staff.
This is understandable, as university teachers get rather cursory teacher training. What is odd is the complete lack of consistency within Higher Education, where they have the expertise to scotch the myth and where the teaching is clearly at odds with the research and theory that is taught.
What is even more remarkable is that many universities openly and actively promote Learning Styles on their main websites to their students. Surely a simple search would expose this to the education departments, who could insist on it being removed? It's a bit like the physics department teaching the Copernican model while other faculty insist that the Sun goes around the Earth.

Promoted
Here are just a few, after a cursory search. I literally could have added dozens more:
Open University
Trinity College Dublin
University of Southampton
Birkbeck, University of London
Manchester Metropolitan University
University of Leicester
University of Sheffield
University of Brighton
University of Edinburgh
Cambridge Assessment English (part of Cambridge University) actually teaches it on its FutureLearn MOOC.
Conclusion
It would seem that the Learning Styles myth is sustained in many ways, but its roots are in education institutions – schools and universities – that continue to peddle the theory to their students. What is odd is the lack of action around eliminating the phenomenon.
Bibliography
Dandy, K., and Bendersky, K. (2014). Student and faculty beliefs about learning in higher education: implications for teaching. Int. J. Teach. Learn. High. Educ. 26, 358–380.
Dekker, S., Lee, N. C., Howard-Jones, P., and Jolles, J. (2012). Neuromyths in education: prevalence and predictors of misconceptions among teachers. Front. Psychol. 3:429. doi: 10.3389/fpsyg.2012.00429
Newton, P. M., and Miah, M. (2017). Evidence-Based Higher Education – Is the Learning Styles ‘Myth’ Important? Front. Psychol., 27 March 2017.
Simmonds, A. (2014). How Neuroscience Is Affecting Education: Report of Teacher and Parent Surveys. Wellcome Trust.

Thursday, June 14, 2018

To Siri With Love - how chatbots are becoming social companions and teachers...

The New York Times carried a heartwarming story, “To Siri, With Love”, by a mother with an autistic son, Gus. Siri was a godsend, as it never tired of answering his questions. Far from isolating Gus, Siri was a companion and lifeline. His mother was grateful, as it helped both her and Gus deal with his condition. Gus started to articulate more clearly in order to be understood, and Siri proved to be a non-judgemental friend and teacher. Chatbots are starting to appeal to more and more of us. They are an integral part of the social landscape.
So there’s a new kid on the block….
Social bots
There is an assumption that ‘social’ in learning always means one human being talking to another or others. That could be synchronous – face-to-face, telephone, webinar and so on – or asynchronous, online through social media, chat and so on. But this is to miss a trick. Increasingly, we will have bots as part of our social networks. They may be bots delivering a specific service or learning experience: finance bots delivering personal financial services from your bank, health bots dealing with your health, and no end of customer care bots. The fact that they are not people does not mean they are not part of our social network.
We have ample evidence from Nass and Reeves (The Media Equation) that we anthropomorphise technology, and bots in particular have been shown to successfully ‘pass’ for humans, thereby passing the Turing test. The Georgia Tech tutor bot not only passed this test, the students put it up for a teaching award. Google Duplex has successfully executed bot calls to a restaurant and a hairdresser, completing appointments. Bots in general, whether useful, benign or malicious, are now part of our social networks.
Ecosystem
In fact, we have every reason to expect that they will play an increasing role in our social ecosystem, as dialogue and voice play a greater role in human-machine interaction. They are already in our homes through Alexa, Google Home and other devices. Sex robots are essentially bots inside robot bodies. We make calls to bots on the telephone. But mostly, we encounter bots online. A recent Pew study followed Twitter activity and identified surprising levels of bot activity:
1. Two-thirds (66%) of all tweeted links were shared by suspected bots. 
2. Suspected bots also accounted for 66% of tweeted links to sites focused on news and current events.
3. Among news and current events sites, those with political content saw the lowest proportion (57%) of bot shares.
4. About nine-in-ten tweeted links to popular news aggregation sites (89%) were posted by bots, not human users.
5. A small number of highly active bots were responsible for a large share of links to prominent news and media sites.
This raises interesting questions about our awareness of bots and their influence.
Bots and learning
In learning, however, bots are, for the moment, in a controlled environment. At work, bots such as Otto pop up within workflow tools, such as Slack, Microsoft Teams and Facebook at Work. In learning, we have bots that increase student engagement, bots that provide learner support, tutorbots, mentorbots, assessment bots and wellbeing bots. These are guided learning bots, with limited capabilities, but that often matches the need to stick to a guided learning path or defined domain. Structure and focus in learning are often useful.
As the technology progresses, bots will get smarter, with the capability to sustain dialogue, retain memories of all previous conversations, be sensitive to context and become more personal. They will play an increasing role in engagement, support and delivery. This will happen in a piecemeal fashion but, who knows, in time they may master the skills necessary to be a good teacher or trainer.
The learning game used to be simple. We had teachers and learners. Sure, teachers learnt from other teachers and learners; learners learnt from teachers and other learners. Now we have these interlopers, which can both teach and learn. Machine learning allows bots to learn – very quickly. We have seen their success in chess, Go and poker. Increasingly, they are mastering other human activities. What’s new in AI is that modern techniques allow systems to play themselves millions of times in a very short period of time, or even to set themselves up as adversaries, leading to rapid improvement and competence. Machine learning, reinforcement learning, Generative Adversarial Networks and many other variants of ‘learning’ methods are driving the success of AI. Social learning networks now have these new entrants – bot learners that learn fast.
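A toy illustration of that adversarial self-play idea: two rock-paper-scissors agents, each best-responding to the other’s observed move frequencies (so-called fictitious play). Over thousands of self-played rounds, both drift towards the unexploitable one-third mix – improvement with no human teacher in sight.

```python
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def best_response(opponent_counts: Counter) -> str:
    likely = opponent_counts.most_common(1)[0][0]  # opponent's favourite move
    return BEATS[likely]                           # play whatever beats it

# Each agent tracks what the other has played (seeded with one of each).
a_played = Counter(rock=1, paper=1, scissors=1)
b_played = Counter(rock=1, paper=1, scissors=1)

for _ in range(30000):
    a_move = best_response(b_played)  # A counters B's habits
    b_move = best_response(a_played)  # B counters A's habits
    a_played[a_move] += 1
    b_played[b_move] += 1

total = sum(a_played.values())
print({move: round(n / total, 2) for move, n in a_played.items()})
# -> each move settles at roughly a third: the unexploitable strategy
```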
Anonymity
This raises several questions. Should anonymous bots be allowed? Bots are not conscious, not even cognitive. They mimic human behaviour and in this sense fool the user into thinking they are human or have human qualities. They are faking it. There is an argument for not allowing anonymous bots, as they break the trust one assumes in dialogue – that the other agent is a real person with moral responsibilities, not a piece of software with no moral sense. Alternatively, we could see this in purely utilitarian terms and judge the advantages purely by outcomes – better teaching and learning.
On the other hand, the ‘anonymity’ of bots can be their cardinal advantage. I spent ten days on the wellbeing bot ‘woebot’. Its advantage is its anonymity. Few young people will want to admit to a teacher, lecturer or other adult that they are having mental health problems, due to fear and embarrassment. Many will feel more comfortable dealing with a helpful, anonymous bot.
Bias
One could argue that bots are a conduit for bias. This could be true in news aggregation, but I doubt that it is much of a problem in learning. All humans are biased, and while bots can embody intended or unintended bias, this can be eliminated. Kahneman, who got the Nobel Prize for his work in this area, describes human bias as uneducable. In practice, I think that bots can easily eliminate gendered language, confirmation bias, anchoring and many other biases that seriously distort educational and learning goals. This may be our best bet for eliminating the huge amount of bias that exists within the system.
Living with bots
The bots are here. At present, they are child-like, narrow in domain and capabilities, but nevertheless useful. We must learn to live with them. In a sense, bots have always been around. When I read a novel, the narrator and characters are essentially agents that have fooled my imagination into thinking they are real people. We have no problem in reading fact or fiction from the past, even from dead authors, and still see them as being in the moment when they address us in their texts. What gives computer bots extra potency is their seeming, living presence and adaptability. They respond, answer back, ask us things and get personal. Increasingly, they are the mediators. But that’s essentially what teaching is – mediation.
Bots and social constructivism
Strangely enough, it is the social constructivists, led by Vygotsky, who should celebrate bots the most. If knowledge is the internalisation of social activity, then bots are a constructivist’s wet dream. It also fits with Bandura’s Social Learning Theory, where one learns through social observation and modelling. I don’t buy this theory, but it is interesting to me that the most vociferous anti-technology critics, who rely on a theory of social learning, may be sabotaged by technology that plays their game. If bots are social agents, why not exploit them to the full?
Conclusion
What bots can and will do is scale social learning. They don’t sleep eight hours a day, or get distracted and bored. They can also download, network and learn from both us and themselves. And they never die – they only get better.

Thursday, June 07, 2018

Unconscious bias training a waste of time – 7 reasons why Starbucks training will not work

Update, 14 December 2020 – ‘Unconscious bias training to be scrapped after review finds it has little effect’: nearly 170,000 staff across the civil service have taken part in sessions in the past five years, at an estimated cost of £370,000.

Racism and sexism are serious problems, but not all training efforts are serious solutions. The latest fad is training courses that purport to tackle ‘unconscious bias’. (Note that I'm not attacking training on conscious racism and sexism, only the idea that training should focus on the unconscious.) Starbucks led the charge, largely as a PR campaign to protect their share price, but it is everywhere. There is something truly creepy about HR’s move on the unconscious. Since when did it become acceptable to see an employee’s ‘unconscious’ as an addressable area for ‘retraining’? This is far worse than the Ponzi scheme that is Myers-Briggs. There are serious problems with ‘unconscious bias’ courses.

1. Unconscious is wrong target
Apart from the dedicated racist, few will admit to being racist in surveys. Many may hold mild or even strong views on race without admitting it to anyone, certainly not to researchers, who would almost certainly be seen as judgemental. This has led L&D to turn to the unconscious. Big mistake. Explicit, conscious racism and sexism may be the true focus for training, not the diversion of ‘unconscious bias’, on the basis of seriously flawed psychometric tests.

2. Not measuring unconscious bias at all
The Banaji and Greenwald IAT (Implicit Association Test), created in 1994, is one of a number of tests being foisted upon millions of employees. Just because people select words from pairs does not mean that this taps into their unconscious. We need to send several cannonballs over the bow of this supposed ship sailing into the uncharted sea of the unconscious. Just because someone can’t explicitly explain something does not mean that it has its origins in the unconscious. There are plenty of alternative explanations with more plausible causality. You may simply be registering familiarity (not bias) in matching words with images. Alternatively, you may be using conscious but instantaneous recognition, not the unconscious, to link the words and images. As Tony Greenwald, one of the creators of the IAT, said:
"I see most implicit bias training as window dressing... After 10 years working on this stuff and nobody reporting data, I think the logical conclusions is that if it was working, we would have heard about it."

3. Wrong language
In fact, the mutual exclusivity of conscious and unconscious bias is far from proven, and psychologists are wary of even using the word. One can add the prefix ‘un’ to the word ‘conscious’ and assume this picks out something clear – the ‘unconscious’, a place where hidden biases are stored in little Pandora’s boxes. But the ‘unconscious’ is problematic in psychology. What is the difference between a memory and an unconscious event? If you read the literature in this field, you will find the word ‘unconscious’ strangely absent. Psychologists tend to use the terms ‘implicit’ and ‘explicit’, which cut loose from the terminology of psychotherapy to bring in a wider range of phenomena. Psychologists really are wary of this binary opposition between unconscious and conscious – but not HR. Of course, selling a course called ‘Implicit Beliefs’ may not bring in the expected sales.

4. Unreliable
Reliability matters in tests. You don’t want a test that gives very different results for the same person when they retake it. Guess what? The IAT is unreliable, so it should NOT be used as a test, quite apart from the lack of evidence that it predicts behaviour. To be precise, the desired retest reliability should be above 0.7. It is, in fact, 0.44 for the race IAT and 0.5 for IAT tests overall.
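In concrete terms, test-retest reliability is just the correlation between the same people’s scores on two administrations of the test. A quick illustration with made-up scores: a reliable test (r above 0.7) should rank people much the same way both times.

```python
import numpy as np

# Made-up scores for six people taking the same test twice.
first_take  = np.array([0.62, 0.15, 0.48, 0.90, 0.33, 0.71])
second_take = np.array([0.20, 0.55, 0.61, 0.35, 0.80, 0.44])

r = np.corrcoef(first_take, second_take)[0, 1]
print(round(r, 2))  # r here is low (negative, in fact): the test disagrees with itself
```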

5. Not predictive
Even if we assume the unconscious has some status, the causality between beliefs and behaviour can still be studied. Here’s the really bad news: four separate meta-analyses show that such tests are weak predictors of behaviour. This is a real problem, because if unconscious bias has almost no causal effect on behaviour, all the work spent countering it is largely pointless.

6. Doesn’t change behaviour
Even the people who work in this area warn against the inference that reducing unconscious bias reduces racist or sexist behaviour. In fact, a meta-study in 2017, which looked at 494 previous studies, showed no evidence that reducing unconscious bias has an effect on biased behaviour. Let’s be clear: if true, then what is claimed by those who sell this training is false, and much of the training is quite simply a waste of time.

7. Record of failure
The world is littered with courses on diversity, racism and sexism. The world is NOT littered with evidence that they work. Major studies from Dobbin, Kalev and Kochan show that diversity training does not increase productivity and may, in fact, produce a backlash. Most don’t know if it works, as evaluations are as rare as unicorns. Thomas Kochan, Professor of Management at MIT’s Sloan School of Management, came to these conclusions in a five-year study: "The diversity industry is built on sand," he concluded. "The business case rhetoric for diversity is simply naive and overdone. There are no strong positive or negative effects of gender or racial diversity on business performance." Harvard’s Frank Dobbin conducted the first major, systematic study of diversity programmes, across 708 private sector companies, using employment data and surveys on employment practices. His research concluded that “Practices that target managerial bias through… diversity training, show virtually no effect.” Dobbin’s research went further: “Research to date suggests that… training often generates a backlash.” Many other studies show similar conclusions (Kidder et al. 2004, Rynes and Rosen 1995, Sidanius et al. 2001, Naff and Kellough 2003, Benedict et al. 1998, Nelson et al. 1996). Yet we persevere with the idea that ‘training’ is the answer to these serious problems.

A way forward
Going back to the main point of this article, training in ‘unconscious bias’ seems to be yet another Ponzi scheme that fits nicely with the zeitgeist. At best it is a clear example of enormous overreach, at worst falsely accusatory and a waste of time. My conclusion is that if the identification of unconscious bias is a waste of time, as is training around that concept, that still leaves us with conscious bias. All is not lost. Starbucks needs to focus on conscious racism, not psychobabble.

Who better to turn to than the world’s acknowledged expert on ‘bias’, Daniel Kahneman, who won the Nobel Prize for his work in the field. His book ‘Thinking, Fast and Slow’ is essential reading if you are interested in how bias works in the mind. (If you’re interested in a less academic book that explains it in a more readable form, ‘The Undoing Project’ by Michael Lewis is excellent.) Coming back to Kahneman, in the last two pages of the book he addresses the issue of combatting bias, and starts by saying that…

“System 1 is not readily educable”.

So don’t look to change System 1, thinking that you can eliminate unconscious bias where the supposed ‘unconscious bias’ is said to exist. His recommendation is…

“The way to block errors in System 1 is simple in principle: recognise the signs that you are in a cognitive minefield, slow down, and ask for reinforcement from System 2.”

This is good advice, so how do we do this? Kahneman suggests that organisations use process and “orderly procedures”, such as “useful checklists… reference forecasts… premortems”. I agree. Much is to be gained through organisational checks and balances, not falsely accusatory training based on unreliable, supposedly diagnostic tools.

Monday, June 04, 2018

Lifelong Learning is a conceit

Yet another Government consultation on Lifelong Learning. It actually makes you lose the will to live. Every few years we get yet another report full of platitudes. 'Lifelong Learning' trips off the tongue (beware of alliteration), but it’s a glib, confused, if not misleading, phrase. No one describes themselves as a ‘Lifelong Learner’ – it would sound pompous, even ridiculous. To be honest, I don't really think that Lifelong Learning is a 'thing', just rhetoric one sees in reports and PowerPoints.
Extended schooling
In truth, most of us, after being put through the wringer of intense schooling, can’t wait to see the back of it. Even those who extend schooling for another three or four degree years are often weary of the endless diet of formal learning and exams. If Lifelong Learning means more and more qualifications, forget it. Lots of people are now being prompted and pushed into being academic, when they’re not, prolonging their schooling, when the evidence suggests that it neither raises their productivity nor enriches their lives. Lifelong learning, so far, has meant extending schooling. Of course, the answer to bad schooling is always more schooling. We may even want less learning. Bryan Caplan has argued that more people are getting ‘schooled’ for longer and longer. But to what end? Signalling. Credential inflation is the wasteful result.
Academe
In my lifetime, I‘ve seen the Lifelong Learning brigade dismantle vocational learning in favour of University for all – well, not really all, as they killed off support for adult learners (which is what Lifelong Learning was supposed to be about) – hence the near bankruptcy of the OU. They talk the talk but, at the end of the day, the focus has been on 18-year-old undergraduates. That’s a shame. For all the rhetoric, they default back to their own little world.
Workplace learning
Even in work, HR has a tendency to become the department that actively defends the organisation against its own employees, through an endless diet of compliance courses. Is this Lifelong Learning? Or is it sitting in a second-rate hotel room full of round tables, having to endure some god-awful PowerPoint presentation or, worse, being asked to form groups to answer ill-framed questions, then feed your results back to the ‘facilitator’? If so, that's the opposite of learning – it’s conformity and compliance that often turns people off learning.
Re-skilling
If you mean keeping open opportunities to reskill, fine. But for many that’s usually too little, too late, after mass redundancies. Janesville, a book about a community in the US hit by factory closures, exposed the dangers of the reskilling promise – hastily improvised projects that arrive too late to help.
‘Lifelong Learning UK Council’
Remember them? Thought not. An organisation so invisible that no one noticed when it disappeared; basically a bunch of University and College administrators, with a couple of librarians thrown in for good measure, who thought that lifelong meant 18-22. I didn’t come across a soul in the learning industry who even knew that it existed, although they thought that ‘employers... will look to this SSC for the standards and qualifications of the people who deliver learning in their own workforce.’ This is what happens when Lifelong Learning is actually seen as lifelong teaching. There was nobody at the wake when it was closed down.
Life is for living, not learning
Lifelong Learning is a shallow phrase, as it assumes that we need something we don’t. For many, the book group or film club is formal enough – a group that encourages you to read something new and different. Life, for most, is for living, not learning. We learn to learn without formal structures, following our interests and curiosity.
Conclusion
Lifelong Learning is a phrase used in lofty reports, grant applications or by organisations that no one has even heard of. It’s a weasel phrase. Nobody has ever called, or wants to call, themselves a Lifelong Learner. It’s a sort of educational conceit – stick with us ‘educators’, you’ll need us – for life. Adults do not want to be infantilised by this sort of jargon. They’re adults, not learners. The older you get, the less inclined you are to want to cram and sit exams, as you know you’ve forgotten most of what you previously learnt. I’m all for recommending that people remain curious throughout their lives, but life is not a course.