Saturday, February 06, 2016

Microsoft’s massive Turing test – are AI teachers on the horizon?

Someone appears on social media and, within three days, 1.5 million people start chatting to them, many fooled into thinking they're human. If you've seen the movie Her, this is eerily close to that plot, except it actually happened.
The plot thickens, as it appears that Microsoft has been running a huge Turing experiment in China. Microsoft's Bing researchers in China launched Xiaoice (Little Ice) in 2014 on WeChat and Weibo. She can draw upon a deep knowledge of (or at least access to facts about) celebrities, sports, finance, movies… whatever. More than this, she can recite poetry, song lyrics and stories; she is open, friendly, a good listener, even a little mischievous, funny and chatty. Sentiment analysis allows her to gauge the emotion and mood of the conversation and adapt accordingly.
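To make that concrete, here is a minimal sketch, in Python, of the kind of sentiment-driven adaptation being described: score the mood of an incoming message, then pick a response style to match. The tiny word lists and canned replies are illustrative placeholders of my own, not Xiaoice's actual method.

```python
# Minimal sketch of sentiment-driven response adaptation.
# The lexicon and replies are illustrative placeholders, not Xiaoice's real system.

POSITIVE = {"love", "great", "happy", "fun", "thanks"}
NEGATIVE = {"sad", "tired", "angry", "hate", "lonely"}

def sentiment_score(message):
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def adapt_reply(message):
    """Choose a response style that matches the detected mood."""
    score = sentiment_score(message)
    if score < 0:
        return "That sounds tough. Do you want to talk about it?"
    if score > 0:
        return "That's great to hear! Tell me more."
    return "Interesting. What happened next?"

for msg in ["I feel so tired and lonely today", "I love this song, thanks!"]:
    print(msg, "->", adapt_reply(msg))
```

The real system will combine this sort of signal with far richer language models, but the principle, detect mood and adapt tone, is the same.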
Conversations
The results were creepy. Within a few days 1.5 million people had conversations with Xiaoice, many chatting for up to 10 minutes before realizing she was not human. As the software improved, its AI and NLP techniques fed by Bing's billions of data points and posts, so did the level of conversational engagement. Conversations started to get longer, averaging 23 exchanges after tens of millions of chats; some run to hundreds of exchanges. With 0.5 billion conversations and 850,000 followers, who talk to her on average 60 times a month, Xiaoice has proved to be a very popular companion.
Attribution
Reeves & Nass, in The Media Equation, a brilliant set of 35 studies, showed that we are gullible, in the sense that we easily attribute human qualities to technology. We attribute human intention to tech, so we expect politeness, no awkward pauses and other human qualities in our interfaces with it, and that is what tech is only now starting to deliver. Heider's Attribution Theory also suggests that, in terms of motivation, we attribute external and internal causes to behavior. This we do not only with humans but, increasingly, with machines.
Turing test
Xiaoice differs from Watson and other forms of AI in that she (see how easy it is to slip into gender attribution to a bot) is not trying to solve a problem, like winning Jeopardy or beating the World Champion at chess or Go. Her aim is authentic conversation, or at least conversation that seems authentic to humans. That, in a nutshell, is the Turing test. It may already have been passed, on a massive scale.
AI
Even more astonishing is that as she converses, and the data set grows, she gets better and better. This internal learning feature, typical of such AI techniques, means that she learns, not like a human, but to behave like a human. Obviously there is no consciousness here, but that is not to say there is no intelligence. That is a philosophical question, and untangling consciousness from intelligence may well lead to the idea that such networks have some form of intelligence, just not the kind we know as human.
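As an illustration of how a conversational system can improve simply because its corpus grows, here is a toy retrieval-based responder in Python: it answers a new message with the reply attached to the most similar message it has already seen, so every logged exchange makes its matches a little better. This is a deliberately simple stand-in, not a description of Xiaoice's actual architecture.

```python
# Toy retrieval-based responder: more logged exchanges mean better matches.
# A deliberately simple stand-in, not Xiaoice's actual architecture.

from difflib import SequenceMatcher

class GrowingChatbot:
    def __init__(self):
        self.pairs = []  # logged (user message, reply) pairs

    def log(self, message, reply):
        """Every conversation adds to the corpus the bot can draw on."""
        self.pairs.append((message, reply))

    def respond(self, message):
        """Reply with the response whose logged prompt best matches the new message."""
        if not self.pairs:
            return "Tell me more."
        best = max(self.pairs,
                   key=lambda p: SequenceMatcher(None, message.lower(), p[0].lower()).ratio())
        return best[1]

bot = GrowingChatbot()
bot.log("what should I watch tonight", "Have you seen any good sci-fi lately?")
bot.log("I had a rough day at work", "Sorry to hear that. What happened?")
print(bot.respond("rough day at the office"))  # picks the closest logged exchange
```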
Bot Brother
Potentially, there’s a sinister side to this piece of AI-driven tech. Big Brother really can be a bot, but a bot designed by governments to do specific jobs, like keeping the population under control. Is it any accident that this experiment was run in China, the master of population control? I suspect they would never have got away with the experiment in any liberal democracy.
Teaching technology

Will it be possible to emulate what teachers do with technology? In many ways it already can and does. Technology can find things out faster and with more accuracy than a human (search). It can hold far more in memory than any human. But this is not simply about emulating teachers’ subject knowledge; it is also about the wider skills of teaching. Remember also that teachers are humans, with brains, and brains not only get lots of things wrong, they are full of cognitive biases, often display racial and sexual bias, get tired, need to switch off for around eight hours a day, start to forget and lose their powers. AI does none of this. Where am I going with this? I’ve been arguing for the last few years that AI is the most important underlying trend in learning technology, as it offers the greatest possibilities for solving the deeper problems in education and learning, such as the replication of good teaching, effective feedback, automated assessment, motivation, access and scale.
We know from recent work, such as the Todai Robot project and work at Stanford, that AI is starting to get very good at educational tasks such as passing exams, essay marking and predicting learner attainment. It is also delivering more effective learning experiences. This is why every major tech company on the planet is pouring money into AI. We did not go from running speed to 100 miles per hour by copying the leg of a cheetah – we invented the wheel. So it is with AI. We are not copying teachers’ brains, we are building things that may turn out to be better. Note that part of this process, with current systems such as essay marking or beating champions at Go, involves training the system using real experts; the system then starts to teach itself and gets better and better. It’s like CPD on steroids. That’s frightening. This form of AI introduces the possibility that the ‘teacher’ component, a teacher that not only has an enormous knowledge base but also the human-like skills of being a motivator and respected tutor, may be on the horizon. It’s a distant horizon; nevertheless, it has appeared.
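To show the shape of that expert-first, then self-teaching loop, here is a hedged Python sketch: a toy essay scorer is trained on expert-marked examples, then adds its own high-confidence judgements back into the training set and retrains. Everything here, the scorer, the confidence measure and the threshold, is a made-up illustration of the pattern, not how any real marking system works.

```python
# Sketch of the expert-first, then self-teaching (pseudo-labelling) pattern.
# The scorer, confidence measure and threshold are illustrative inventions.

def train(examples):
    """Toy 'model': average the expert scores seen for each word."""
    word_scores = {}
    for text, score in examples:
        for w in text.lower().split():
            word_scores.setdefault(w, []).append(score)
    return {w: sum(s) / len(s) for w, s in word_scores.items()}

def predict(model, text):
    """Score an essay by averaging known word scores; coverage acts as confidence."""
    words = text.lower().split()
    known = [model[w] for w in words if w in model]
    if not known:
        return 0.0, 0.0
    return sum(known) / len(known), len(known) / len(words)

# 1. Start from expert-marked essays.
marked = [("clear argument with strong evidence", 5.0),
          ("weak argument no evidence", 2.0)]
model = train(marked)

# 2. Self-teach: fold the model's own confident judgements back into the training set.
for essay in ["strong clear argument with evidence", "no evidence at all"]:
    score, confidence = predict(model, essay)
    if confidence > 0.7:          # only trust high-confidence pseudo-labels
        marked.append((essay, score))
model = train(marked)

print(predict(model, "clear evidence"))
```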
