Saturday, December 31, 2016

2016 was the smartest year ever

For me, 2016 was the year of AI. It went from an esoteric subject to a topic you’d discuss down the pub. In lectures on AI in learning in New Zealand, Australia, Canada, the US, the UK and across Europe, I could see that this was THE zeitgeist topic of the year. More than this, things kept happening that made it very real…
1. AI predicts a Trump win
One particular instance was a terrifying epiphany. I was teaching on AI at the University of Philadelphia on the morning of the presidential election, and showed AI predictions which pointed to a Trump win. Oh how they laughed – but the next morning confirmed my view that old media and pollsters were stuck in an Alexander Graham Bell world of telephone polls, while the world had leapt forward to data gathering from social media and other sources. They were wrong because they don’t understand technology. It’s their own fault, as they have an in-built distaste for new technology, which they see as a threat. At a deeper level, Trump won because of technology. The deep cause, ‘technology replacing jobs’, has already come to pass. It was always thus. Agriculture was mechanised and we moved into factories; factories were automated and we moved into offices. Offices are now being mechanised and we’ve nowhere to go. AI will be the primary political, economic and moral issue for the next 50 years.
2. AI predicts a Brexit win
On the same basis, using social media data predictions, I predicted a Brexit win. The difference here was that I voted for Brexit. I had a long list of reasons – democratic, fiscal, economic and moral – but above all, it had become obvious that the media and traditional, elitist commentators had lost touch with both the issues and the data. I was a bit surprised at the abuse I received, online and face-to-face, but the underlying cause, technology replacing meaningful jobs, has come to pass in the UK also. We can go forward in a death embrace with the EU or create our own future. I chose the latter.
3. Tesla times
I sat in my mate Paul Macilveney’s Tesla (he has one of only two in Northern Ireland), while it accelerated (silently) pushing my head back into the passenger seat. It was a gas without the gas. On the dashboard display I could see two cars ahead and vehicles all around the car, even though they were invisible to the driver. Near the end of the year we saw a Tesla car predict an accident between two other unseen cars, before it happened. But it was when Paul took his hands off the steering wheel, as we cambered round the corner of a narrow road in Donegal, that the future came into focus. In 2016, self-driving cars became real, and inevitable. The car is now a robot in which one travels. It has agency. More than this, it learns. It learns your roads, routes and preferences. It is also connected to the internet and that learning, the mapping of roads, is shared with all as yet unborn cars.
4. AI on tap
As the tech giants motored ahead with innumerable acquisitions and the development of major AI initiatives, some even redefining themselves as AI companies (IBM and Google), it was suddenly possible to use their APIs to do useful things. AI became a commodity or utility – on tap. That proved useful, very useful in starting a business.
5. OpenAI
However, as an antidote to the danger that the tech monsters will be masters of the AI universe, Elon Musk co-founded OpenAI. This is already proving to be a useful open source resource for developers. Its ‘Universe’ is a collection of test environments in which you can run your algorithms. This is a worthy initiative that balances out the monopolising effect of private, black-box, IP-driven AI.
6. Breakthroughs
There were also some astounding successes across the year. Google’s AlphaGo beat a champion at Go, the most complex game we know. Time and time again, AI rose to the challenge. Take almost any area of human endeavour, add a dose of AI and you have a business.
7. WildFire Award
AI will become by far the most significant form of technology in learning. Two years ago I invested in ‘adaptive learning’, but this year I designed and built (with a developer I’ve known for 25 years) an AI-driven content, curation and practice tool that won a major award for Most Innovative New Product in learning and is now being used in anger by major corporates. Rather than just talk about AI, or post sceptical and negative platitudes about ‘all algorithms being biased’ or other such rot, we got on and did something.
8. Amazon Echo
Amazon Echo. This put AI bang in the centre of my home. The progress in Natural Language Processing is astounding – in speech recognition, understanding, translation and generation. It was interesting to see how Siri had crept into my wife’s behaviour on her iPhone. But this was something else. This is a hint at consumer-level AI that acts as a sort of teacher, concierge, DJ and personal shopper.
9. Bots
On a visit to lecture at Penn State University I came across a couple of bot projects that intrigued me. It was a revelation to find that Twitter was ridden with bots, but seeing some real examples, such as a pupil bot for trainee teachers, which behaved as troublesome lads tend to in school, showed how this new approach, through natural language interfaces, will have a profound effect on how we interact with AI. AI itself has provided rapid advances in natural language processing that have made it accessible at the consumer level. We’ve been training up bots for 2017.
10. Berlin
At the end of the year, and at the same time as we won an award for WildFire, I was in Berlin to take part in a debate on the motion ‘AI can, will and should replace teachers’. It was an opportunity to show that, given recent advances from Google onwards, it would be ridiculous to claim that AI will decimate professions such as lawyers, doctors, accountants and managers, yet leave ‘teaching’ untouched. That’s merely a conceit. Many were surprised at the real-world examples in the creation of learning content, personalised feedback, assessment and reinforcement. It’s not that it’s coming, it’s already here.
Finally, on the last day of the year, I got news about being involved in some exciting AI in learning research and some nice invitations to speak on the subject.

Both my sons are pretty technical but one is doing a degree in AI. This has been a godsend. Being able to get immediate clarification and advice on tools, and generally engage in conversations with someone whose passion is AI, has been more than useful. Their hero is not any politician, scientist, entertainer or musician but a techie. Not Steve Jobs or Mark Zuckerberg but Elon Musk. He’s the titan – super smart, but not just gassing away, DOING something. They see in him a new generation of pioneers who use AI for social and human good – the end of fossil fuels and therefore global warming, self-driving cars and going to Mars. The world in 2016 got a lot smarter; it will get smarter still.

Wednesday, December 28, 2016

Brains - 10 deep flaws and why AI may be the fix

Every card has a number on one side and a letter on the other.
If a card has a D then it has a 3 on the other side.
What is the smallest number of cards you have to turn over to verify whether the rule holds?

 D      F       3      7
(Answer at end)
Most people get this wrong, due to a cognitive weakness we have – confirmation bias. We look for examples that confirm our beliefs, whereas we should look for examples that disconfirm our beliefs. This, along with many other biases, is well documented by Kahneman in Thinking, Fast and Slow. Our tragedy as a species is that our cognitive apparatus and, especially, our brains have evolved for purposes different from their contemporary needs. This makes things very difficult for teachers and trainers. Our role in life is to improve the performance of that one organ, yet it remains stubbornly resistant to learning.
1. Brains need 20 years of parenting and schooling
It takes around 16 years of intensive and nurturing parenting to turn them into adults who can function autonomously. Years of parenting, at times fraught with conflict, while the teenage brain, as brilliantly observed by Judith Rich Harris, gets obsessed with peer groups. This nurturing needs to be supplemented by around 13 years of sitting in classrooms being taught by other brains – a process that is painful for all involved: pupils, parents and teachers. Increasingly this is followed by several years in college or university, to prepare the brain for an increasingly complex world.
2. Brains are inattentive
You don't have to be a teacher or parent for long to realise how inattentive and easily distracted brains can be. Attention is a necessary condition for learning, yet it is in chronically short supply.
3. Fallible memories
Our memories are not only limited by the narrow channel that is working memory but also by the massive failure to shunt what we learn from working to long-term memory. And even when memories get into long-term memory, they are subject to further forgetting, even reconfiguration into false memories. Every recalled memory is an act of recreation and reconstitution, and therefore fallible. Without reinforcement we retain and recall very little. This makes brains very difficult to teach.
4. Brains are biased
The brain is inherently biased – not only sexist and racist, it has dozens of cognitive biases, such as groupthink and confirmation bias, that shape and limit thought. More than this, it has severe weaknesses: inherent tendencies such as motion sickness, overeating, jet-lag, phobias, social anxieties, violent tendencies, addiction, delusions and psychosis. This is not an organ that is inherently stable.
5. Brains need sleep
Our brains sleep eight hours a day – that’s one third of life gone, down the drain. Cut back on this and we learn less, get more stressed, even ill. Keeping the brain awake, as torturers will attest, drives it to madness. Even when awake, brains are inattentive and prone to daydreaming. This is not an organ that takes easily to being on task.
6. Brains can’t upload and download
Brains can’t upload and download. You cannot pass your knowledge and skills to me without a huge amount of motivated teaching and learning. AI can do this in an instant.
7. Brains can't network.
Our attempts at collective learning are still clumsy, yet collective learning and intelligence are a feature of modern AI.
8. Brains can't multitask
This is not quite true, as brains regulate lots of bodily functions, such as breathing and balance, while doing other things. However, brains don't multitask at the level required for some breakthroughs. What seems like multitasking is actually rapid switching between tasks.
9. Brains degenerate and die
As it ages the brain’s performance falls and problems such as dementia and Alzheimer’s occur. This degeneration varies in speed and is unpredictable. And in the end, that single, fatal objection – it dies. Death is a problem, as the brain cannot download its inherent and acquired knowledge or skills. It is profoundly solipsistic. Memories literally disappear. The way we deal with this is through technology that archives such acquired experience in technical media, such as print, images and now data.
10. Brains don't scale
Brains are impressive but they're stuck in our skulls and limited in size, as women would not be able to give birth if they were bigger. There are also evolutionary limits in terms of what can be supported on top of our bodies, along with heat and energy requirements. The bottom line, however, is that warm brains don't scale.
‘Artificial Intelligence’ has two things wrong with it – the word ‘Artificial’ and the word ‘Intelligence’. Coined by John McCarthy in 1956, the term has survived the ups and downs of AI’s fortunes, but that is not to say that these two rather odd words capture the field’s reach and promise. In fact, they seem, at times, to be a liability.
'Artificial' intelligence is pejorative
Artificial suggests something not real. As a word it lies in direct opposition to what is real. It suggests something unnatural, a mere imitation. This dash of pejoration debases the concept and lies behind many of the dystopian attitudes towards AI. Rather like artificial grass or artificial limbs, AI successes, no matter how astonishing, feel second-rate and inferior. An even stronger pejorative suggestion is the idea that it is fake or counterfeit – the ‘artificial’ as something feigned or bogus. As the word explicitly compares the abilities of man and machine, brains and computers, anthropomorphic judgements tend to sneak in. It defines the field as simply copying what humans or human brains do, whereas it tends to do things that are very different. The human brain may not be the benchmark here. Man may not be the measurement of the machine.
Homo Deus
Harari in Homo Deus (2016) proposes an argument that eliminates the artificiality of AI. Homo sapiens is, like all other living beings, an evolved entity, shaped by natural selection, which is profoundly algorithmic. These algorithms exist separately from the substrate in which they reside. 2 + 2 = 4 is the same whether it is worked out on wooden blocks, the plastic beads of an abacus or the metal circuits of a calculator. It doesn’t matter which form algorithms reside in. We should conclude that there is no reason to suppose that our organic abilities will not be replicated, even surpassed. In other words, algorithmic power resides in the power of the maths to solve problems and come up with solutions, not in how accurately it mimics human abilities.
Better than brains
A stronger argument is that there is every reason to suppose that other substrates will be better. The brain has evolved for an environment in which it no longer operates. Limited in size by the need to pass down the birth canal and be carried on the standard skeleton, it is suited to survival in a place and time with very different needs, and so has severe limitations.
One thing we do have in our favour is the fact that our brains have almost certainly evolved in tandem with our use of technology. The extraordinary explosion of activity around 40,000 years ago suggests a key role for tools and technology in helping shape our brains. However, there is one fascinating downside. It also seems as though neophobia (fear of the new) increases with age, which means antipathy towards new technology and AI is likely to be a feature of the brain’s defence mechanism.
Neophobia is not new
Neophobia, fear of the new, is not new. No doubt some wag in some cave was asking their kids to ‘put those axes away, they’ll be the death of you’. From Socrates onwards, who thought that writing was an ill-advised invention, people have reacted with predictable horror to every piece of new technology that hits the street. It happened with writing, parchments, books, printing, newspapers, coffee houses, letters, telegraph, telephone, radio, film, TV, railways, cars, jazz, rock n’ roll, rap, computers, the internet, social media and now artificial intelligence. The idea that some new invention rots the mind, devalues the culture, even destroys civilisation is an age-old phenomenon.
Steven Pinker sees neophobia as the product of superficial reasoning about cognition that conflates “content with process”. The mind and human nature are not that malleable, and obviously not subject to any real evolutionary change in such a short period of time. Sure, the mind is plastic, but it is not a blank slate waiting to be filled with content from the web. It is far more likely that the neophobes themselves are unthinking victims of the familiar destructive syndrome of neophobia.
Neophobia as a medical and social condition
Interestingly, the medical evidence suggests that neophobia, as a medical condition, is common in the very young, especially with new foods. It fades throughout childhood and flips in adolescence when the new is seen as risky and exciting. Then it gradually returns, especially during parenthood, and into our old age, when we develop deeply established habits or expectations that we may see as being under threat.
Tool of our tools
Neophobia exaggerates the role of technology. Have we ‘become the tool of our tools’, as Thoreau would have us believe? There is something in this, as recent research suggests that tool production in the early evolution of our species played a significant role in cognitive development and our adaptive advantage as a species. So far, so good. But far from shaping minds, the more recent internet is largely being shaped by minds. Social media has flourished in response to a human need for user-generated content, social communication and sharing. Input devices have become increasingly sensitive to human ergonomics and cognitive expectations, especially natural language processing through voice.
That is not to say that what we use on the web is in some way neutral. Jaron Lanier and others do expose the intrinsic ways software shapes behaviour and outcomes. But it is not the invisible hand of a malevolent devil. All technology has a downside. Cars kill, but no one is recommending that we ban them.
The internet, as Pinker explains, is not fundamentally changing ‘how we think’ in any deep sense. It is largely speeding up finding answers to our questions through search, Wikipedia, YouTube and so on, speeding up communications through email and WhatsApp, and speeding up commerce and fundraising. It provides scale, and everyone can benefit.
Neophobia as a brake on progress
Thomas Kuhn and the evolutionist Wilson saw neophobia as a brake on human thinking and progress, as individuals and institutions tend to work within paradigms, encouraging ‘groupthink’, which makes people irrationally defensive and unsupportive of new ideas and technologies. As Bertrand Russell said, “Every advance in civilisation was denounced as unnatural while it was recent”. Religion, for example, has played a significant role in stalling scientific discovery and progress, from denial of the fact that the earth rotates around the sun to resistance to medical advances. Education is a case in point.
We have the late, great Douglas Adams to thank for this stunning set of observations:
1) Everything that’s already in the world when you’re born is just normal;
2) Anything that gets invented between then and before you turn 30 is incredibly exciting and creative and with any luck you can make a career out of it;
3) Anything that gets invented after you’re 30 is against the natural order of things and the beginning of the end of civilisation as we know it, until it’s been around for about 10 years, when it gradually turns out to be alright really.
Gut feelings are wrong. The commonest answers are D, or D and 3, but the correct answer is D and 7.
D – YES; if it had any number other than 3 on the back, it would falsify the rule.
F – doesn’t matter what’s on the other side; the rule says nothing about F.
3 – popular, but the rule doesn’t require a 3 to have a D on the back, so it can’t falsify the rule.
7 – YES; suppose there was a D on the other side – that would falsify the rule.
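The answer can also be checked mechanically. Below is a minimal brute-force sketch (in Python, purely for illustration): for each visible face it asks whether any possible hidden face could falsify the rule, and only those cards are worth turning over.

```python
# Brute-force check of the selection task: which visible faces could,
# depending on what's hidden, falsify "if D on one side, then 3 on the other"?
LETTERS = ["D", "F"]
NUMBERS = ["3", "7"]

def can_falsify(visible):
    # The hidden side is a number if a letter is showing, and vice versa.
    hidden_options = NUMBERS if visible in LETTERS else LETTERS
    for hidden in hidden_options:
        letter = visible if visible in LETTERS else hidden
        number = visible if visible in NUMBERS else hidden
        if letter == "D" and number != "3":
            return True  # this card could break the rule, so turn it over
    return False

must_turn = [card for card in ["D", "F", "3", "7"] if can_falsify(card)]
print(must_turn)  # → ['D', '7']
```

The F and 3 cards drop out because no hidden face could ever make them violate the rule – exactly the disconfirming logic our confirmation bias misses.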

Friday, December 23, 2016

Bot teacher that impressed and fooled everyone

An ever-present problem in teaching, especially online, is the very many queries and questions from students. In the Georgia Tech online course this was up to 10,000 per semester from a class of 350 students (300 online, 50 on campus). It’s hard to get your head round that number, but Ashok Goel, the course leader, estimates that it is one year's work for a full-time teacher.
The good news is that Ashok Goel is an AI guy and saw his own subject as a possible solution to this problem. If he could get a bot to handle the predictable, commonplace questions, his teaching assistants could focus on the more interesting, creative and critical questions. This is an interesting development as it brings tech back to the Socratic, conversational, dialogue model that most see as lying at the heart of teaching.
Jill Watson – fortunate error
How does it work? It all started with a mistake. Her name, Jill Watson, came from the mistaken belief that the wife of Tom Watson (IBM’s legendary CEO) was called Jill – her name was actually Jeanette. Four semesters of query data – 40,000 questions and answers, and other chat data – were uploaded and, using Bluemix (IBM’s app development environment for Watson and other software), Jill was ready to be trained. Initial efforts produced answers that were wrong, even bizarre, but with lots of training and agile software development, Jill got a lot better and was launched upon her unsuspecting students in the spring semester of 2016.
Bot solution
Jill solved a serious problem – workload. But the problem is not just scale. Students ask the same questions over and over again, but in many different forms, so you need to deal with lots of variation in natural language. This lies at the heart of the chatbot solution – a more natural, flowing, frictionless, Socratic form of dialogue with students. The database, therefore, held many categories of question and, as a new question came in, Jill was trained to categorise it and find an answer.
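The categorise-then-answer loop described above can be sketched as a toy in Python. Everything here – the categories, example questions and answers – is invented for illustration; the real system was built on IBM’s Watson tooling with far richer matching than simple word overlap.

```python
from collections import Counter

# Toy version of a categorise-then-answer FAQ bot. Past questions are grouped
# into categories, each with a canned answer; a new question is routed to the
# category whose stored questions share the most words with it.
FAQ = {
    "deadlines": (["when is assignment one due", "what is the due date"],
                  "Assignment 1 is due Sunday at 23:59."),
    "formats": (["what file format should i submit", "is pdf accepted"],
                "Submit a single PDF file."),
}

def bag(text):
    """Lower-cased bag of words."""
    return Counter(text.lower().split())

def answer(question):
    q = bag(question)
    # Score each category by total word overlap with its stored questions.
    scores = {
        cat: sum(sum((bag(ex) & q).values()) for ex in examples)
        for cat, (examples, _) in FAQ.items()
    }
    best = max(scores, key=scores.get)
    if scores[best] == 0:
        return None  # no overlap at all: escalate to a human TA
    return FAQ[best][1]

print(answer("when is the assignment due"))  # → Assignment 1 is due Sunday at 23:59.
```

Even this crude matcher shows why variation in phrasing is the hard part: the more example phrasings per category, the better the routing.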
Such systems sometimes get it right, sometimes wrong. So a mirror forum was used, moderated by a human tutor. Rather than relying on memory alone, they added context and structure, and performance jumped to 97%. At that point they decided to remove the mirror forum. Interestingly, they had to put in a time delay to avoid Jill seeming too good. In practice, academics are rather slow at responding to student queries, so they had to replicate that poor performance. Interesting that, in comparing automated with human performance, it wasn't a matter of living up to expectations but of dumbing down to the human level.
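That workflow – answer automatically only when confident, otherwise fall back to the human-moderated mirror forum, with an artificial delay so the bot doesn’t look suspiciously fast – is essentially a confidence gate. A hypothetical sketch (the threshold and function names are my own, not Goel’s):

```python
import random

CONFIDENCE_THRESHOLD = 0.97  # invented figure, echoing the 97% accuracy mark above

def route(question, classify):
    """classify(question) -> (answer, confidence). Low confidence escalates to a human."""
    answer, confidence = classify(question)
    if confidence < CONFIDENCE_THRESHOLD:
        return ("human", None)  # post to the mirror forum for a TA to handle
    # In a live system you would sleep for this long before replying,
    # so the bot seems human-paced rather than suspiciously instant.
    delay_seconds = random.uniform(5, 30)
    return ("bot", answer)

# Stubbed classifiers standing in for the trained model:
print(route("when is it due?", lambda q: ("Sunday at 23:59", 0.99)))  # handled by the bot
print(route("can you review my essay?", lambda q: (None, 0.12)))      # escalated to a human
```

The design point is that the threshold, not the model, decides what students ever see – which is why removing the mirror forum only made sense once accuracy was high enough.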
These were questions about coding, timetables, file formats, data usage – the sort of questions that have definite answers. Note that she has not replaced the whole teaching role, only made teaching and learning more efficient, scalable and cheaper. This is likely to be the primary use of chatbots in the short to medium term – tutor and learner support. That’s admirable.
Student reactions
The students admitted they couldn’t tell, even in classes run after Jill Watson’s cover was blown – it’s that good. What’s more, they liked it, because they knew it delivered better information, often better expressed and (importantly) faster than human tutors. Despite the name, and an undiscovered run of three months, the original class never twigged. Turing test passed.
In Berlin this month, I chaired Tarek Richard Besold, of the University of Bremen, who gave a fascinating talk through some of the actual dialogue between the students and Jill Watson. It was illuminating. The real tutors, who often find themselves frustrated by student queries, sometimes got slightly annoyed and tetchy, as opposed to Jill, who comes in with personal but always polite advice. This is important. Chatbots don't get angry, annoyed, tired or irritable. They are also free from the sort of beliefs and biases that we humans always have. They don't have that condescending, and often misplaced, rolling-of-the-eyes reaction that an academic sometimes has towards simple mistakes and errors by novice learners. The students found her useful – the person who would remind them of due dates and things they really needed to know, then and there, not days later. She would also ask stimulating questions during the course. She was described as an “outstanding TA”, albeit “somewhat serious”. Of course, some got a little suspicious. They were, after all, AI students.
Goel had even dropped a hint: “Her name is Watson ;)”. The students checked LinkedIn and Facebook, where they found a real Jill Watson, who was somewhat puzzled by the attention. What finally blew her cover was interesting – she was too good. Her responses were just too fast (even though Goel had introduced a time delay), compared to other TAs. When they did discover the truth, the reaction was positive: “This is incredibly cool.”
A student even wanted to put her up for a teaching award. Indeed Goel has submitted Jill for just such an award to Georgia Tech.
Bot v TA
Tarek points out that the qualities students expect of a tutor are that they are honest, flexible, patient, confident, a good listener, professional, and willing to share and use available resources. Sure, there are many things a good teacher can do that a bot cannot, but there are qualities a bot has that teachers do not possess. Tarek showed a response from a real teacher on the course, who was clearly a little tetchy and annoyed, compared to the clear and objective reply from the bot. This relentless patience and objectivity is something a good bot can deliver. Remember that the sheer scale of the questions from students was beyond the ability of the teachers to respond to, and as the bot is massively scalable (hence their use in MOOCs), it is intrinsically superior on this point, as something is always better than nothing. It’s all a matter of finding the right balance.
Education is also expensive, scarce and difficult to scale. In classes with a hundred or more students, few get any personal attention. So can we have personal attention at scale? In the short term we can have scale for some functions. In the long term we can certainly foresee this sort of technology, with other advances as yet unknown, making inroads into all aspects of teaching – subject matter knowledge, feedback, planning, content creation, content delivery, assessment. These are already possible.
Attribution of human qualities
In The Media Equation, Nass and Reeves did 35 studies to show that we have a tendency to attribute human qualities and agency to technology, especially computers, and especially computers that engage us in dialogue. If bots deliver useful help, support, answers and even deeper teaching experiences, then this is a bonus. Indeed, I think that the natural language approach through bots accelerates this willingness to attribute agency to the bot. Natural language is our normal UI. As AI provides better and better natural language processing, along with trained and smart databases of answers and smart responses, so AI will become the new UI. One could argue that learners already have, in Google, Facebook, Twitter, Amazon, Netflix and dozens of other online services, AI-driven UIs. This must surely be an advantage in learning, where the more frictionless the interface, the better the outcomes.
What’s next?
The following semester they created two new bots as AI assistants (Ian and Stacey). Stacey was more conversational. This is a natural but technically difficult evolution of bots in teaching – to be more Socratic. This time the students were on the lookout for bots, but even then only 50% identified Stacey and only 16% identified Ian as AI. The next semester there were four AI assistants, and the whole team (including humans) used pseudonyms to avoid detection.
Jill Watson is being turned into a commercial product. Make no mistake, IBM, Apple, Microsoft and Amazon see education as a market for AI, as do Pearson and others. Bot-based teaching is with us now and will only get better, faster and more widespread. There are chatbots now teaching English and other languages. Some are fully integrated with human tutors. Teacherbots allow a real tutor to deal with many more students, increasing productivity, which, I think, is the greatest prize of all. It is not a case of dispensing with teachers, but of raising their game. Teachers should welcome something that takes away all the admin and pain – it's something we hear teachers complaining about all the time, so let's grasp the solution. The next stage will be bots that provide more than responses to questions and queries, but also offer real tutor plans and advice, and play a more substantive teaching role. Differ is a bot that encourages student engagement – there will be many others with lots of different roles. Remember also the role of bots in ro-bots. We are now seeing the emergence of real working robots that are also communicating bots in social care and education.
The idea that professionals like doctors, bankers, accountants and lawyers will be, to a degree, replaced by AI, but teachers will not, is a conceit. It has already happened and will happen a lot more. A recent Pearson/Watson tool uses AI to enhance textbooks, opening up dialogue with the student, with formative assessment and personalised help. CogBooks provides adaptive, personalised learning in real time. WildFire actually creates online learning from documents, PowerPoints, podcasts or videos. This area is moving fast.
What I love about this story is that a professor used his own subject and skill to improve the overall teaching and learning experience of his students. That’s admirable. With all the dystopian talk around AI, we need to make sure that AI is a force for good. If it does, as seems increasingly certain, replace many jobs, resulting in increased unemployment, we need to make sure that, like the god Shiva, AI creates as it destroys. Bringing meaning to our lives through education and learning is surely an admirable goal. AI is perhaps the only way to bring scalable, automated, personalised teaching to create that future.