Friday, July 20, 2018

“Huge milestone in advancing artificial intelligence” (Gates) as AI becomes a team – big implications for learning?

An event took place this month that received little publicity but was described by Bill Gates as a “huge milestone in advancing artificial intelligence”. It could have profound implications for advances in AI. An OpenAI team of five neural networks, backed by a lot of other techniques, beat human teams at the game Dota 2.
To understand the significance of this, we need to understand the complexity of the task. A game environment like Dota 2 is astoundingly complex and is played by teams of five, all of whom determine long-term success: exploring the environment, making decisions in real time, identifying threats and employing clever team strategies. It’s a seriously chaotic, complicated and fast environment, made even messier by the fact that you’re playing against some very smart teams of humans.
To win, OpenAI created five separate neural networks, each representing a player, plus an executive layer that weights each player’s actions to set priorities. Using reinforcement learning and playing against itself millions of times (the equivalent of 180 years of playtime per day!), the system learned fast. It is not yet at the level of the professional Dota 2 teams, but it will get there.
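To make the core idea concrete, here is a minimal sketch in Python – a toy cooperative game, not OpenAI’s actual code. It shows five independent policies learning by self-play, with a “team spirit” weighting that blends each player’s own reward with the team’s average reward (the reported mechanism OpenAI used to push individual agents towards team play). The action space, payoffs and hyperparameters are all invented for illustration.

```python
import random

NUM_PLAYERS = 5
NUM_ACTIONS = 3        # toy action space: e.g. farm selfishly, idle, cooperate
TEAM_SPIRIT = 0.8      # 0 = purely selfish reward, 1 = purely team reward
LEARNING_RATE = 0.1
EPISODES = 10_000

# One tiny "policy" per player: a table of action preferences
# (not a real neural network, just the same learning signal in miniature).
policies = [[0.0] * NUM_ACTIONS for _ in range(NUM_PLAYERS)]

def choose_action(prefs, epsilon=0.1):
    """Epsilon-greedy action choice for one player."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    return max(range(NUM_ACTIONS), key=lambda a: prefs[a])

def play_round(actions):
    """Toy environment: action 0 gives a small selfish payoff, while every
    player choosing action 2 adds to a shared team payoff for everyone."""
    cooperators = sum(1 for a in actions if a == 2)
    shared = 2.0 * cooperators / NUM_PLAYERS
    return [(1.0 if a == 0 else 0.0) + shared for a in actions]

for _ in range(EPISODES):
    actions = [choose_action(p) for p in policies]
    rewards = play_round(actions)
    team_reward = sum(rewards) / NUM_PLAYERS
    for player, (action, reward) in enumerate(zip(actions, rewards)):
        # The 'executive' blend: mix selfish and team reward before updating.
        blended = (1 - TEAM_SPIRIT) * reward + TEAM_SPIRIT * team_reward
        prefs = policies[player]
        prefs[action] += LEARNING_RATE * (blended - prefs[action])

print([[round(x, 2) for x in p] for p in policies])
```

With TEAM_SPIRIT set high, the players drift towards the cooperative action even though it pays each of them less individually; set it to zero and they stay selfish – a toy version of optimizing team decisions over short-term wins.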
What’s frightening is the speed of the training and the actual learned competence in a team context. The system had to optimize team decisions and think about long-term goals, not short-term wins. This goes way beyond games like chess and Go, where a single agent faces a single opponent.
The implications are huge as AI moves from being good at single, narrow tasks, to general tasks, using a modular approach.
Team AI in military
Let’s take the most obvious parallel. In a battle of robot soldiers versus humans, the robots and drones will simulate the mission, going through millions of possible scenarios, then, having learned all they need to know, attack in real time, problem-solving as a team as they go. Autonomous warfare will be won by the smartest software, organic or not. This is deeply worrying, so worrying that senior AI luminaries, such as Musk and others within Google, signed a pledge this week saying they will not work on autonomous weapons. So let’s turn to the civilian world.
Team AI in robotics
I’ve written about shared learning in AI across teams of robots, which gives significant advantages in terms of speed and shared experience. Swarm and cloud robotics are very real. But let’s turn to something more familiar – business simulations.
Team AI in business
Imagine a system that simulates the senior management of a company – its five main Directors. It learns by gathering data about the environment – competitors, financial variables, consumer trends, cost modelling, instantaneous cost-benefit analysis, cashflow projections, resource allocation, recruitment policies, training. It may not be perfect, but it is likely to learn faster than any group of five humans and make decisions faster, beating possible competitors. Would a team AI be useful as an aid to decision making? Do you really need those expensive managers to learn this stuff and make these decisions?
Team AI in learning for individuals
Let’s bring it down to specific strategies for individuals. Suppose I want to maximize my ‘learning’ time. The model in the past has always been one teacher to one or many learners. Imagine pooling the expertise of many teachers to decide what you see and do next in your learning journey. Imagine those teachers having different dimensions, being cross-curricular and so on. This breaks the traditional teaching mould. The very idea of a single teacher may just be an artifact of the old model of timetabling and institutions. Multi-teaching, by best-of-breed systems, may be a better model.
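A minimal sketch of what ‘pooling teachers’ could mean in software – three invented teacher modules each score the candidate next activities for a learner, and a weighted vote picks the winner. The teacher names, weights, learner fields and activities are all hypothetical, just to show the shape of the idea.

```python
from typing import Callable, Dict, List

# Hypothetical learner profile and candidate activities.
learner = {"goal": "statistics", "recent_errors": 0.4, "minutes_available": 15}
activities = ["video: sampling", "quiz: standard error",
              "worked example", "revision flashcards"]

def subject_expert(learner, activity) -> float:
    # Scores relevance to the learner's stated goal.
    return 1.0 if "standard error" in activity or "sampling" in activity else 0.3

def pedagogy_expert(learner, activity) -> float:
    # Prefers retrieval practice when the learner has been making errors.
    return 0.9 if learner["recent_errors"] > 0.3 and "quiz" in activity else 0.5

def time_manager(learner, activity) -> float:
    # Short sessions favour quizzes and flashcards over long videos.
    return 0.8 if learner["minutes_available"] < 20 and "video" not in activity else 0.4

teachers: Dict[str, Callable] = {
    "subject_expert": subject_expert,
    "pedagogy_expert": pedagogy_expert,
    "time_manager": time_manager,
}
weights = {"subject_expert": 1.0, "pedagogy_expert": 1.5, "time_manager": 0.5}

def recommend(learner, activities: List[str]) -> str:
    """Pool the teachers: weighted sum of each teacher's score per activity."""
    def pooled(activity):
        return sum(weights[name] * t(learner, activity) for name, t in teachers.items())
    return max(activities, key=pooled)

print(recommend(learner, activities))   # e.g. "quiz: standard error"
```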
Team AI and blended learning
One could see AI determining the optimal blend for a course, or for you as an individual. If we take Blended Learning (not Blended Teaching) as our starting point, then, based on the learning task, the learner and the resources available from the AI teacher team, we could see guidance emerge on optimizing the learning journey for that individual – a pedagogic expert that determines what particular type of teaching/learning experience you need at that particular moment.
Conclusion
Rather than seeing AI as a single entity, and falsely imagining that it should simulate the behaviour of a single ‘teacher’, we should see it as a set of different things that work as a team to help you learn. At the moment we use all sorts of AI to help you learn – search (Google), recommendation engines, engagement bots (Differ), support bots (Jill Watson), content creation (WildFire), adaptive learning (CogBooks), spaced practice (WildFire), even wellbeing. Imagine bringing all of these together for you as an individual learner – coordinated, optimized, not a teacher but a team of teachers.

Wednesday, July 04, 2018

Data is not the new oil, more likely the new snake oil…

Before jumping into data analytics projects, one must think hard about what ‘data’ actually is. The problem with many of these projects is that they can turn into ‘data’ projects and not business projects.
Oil is messy and dirty (crude) when it comes out of the ground, but it is largely useful when fractioned. Up to 50% is used for petrol, 20% for distillate fuel (heating oil and diesel) and 8% for jet fuel. The rest has many other useful purposes. The unwanted elements and compounds are a tiny percentage. Data is not the new oil. It is stored in weird ways and places; it is often old, useless, messy, embarrassing, secret or personal; it may be observed, derived or analytic; it may need to be anonymised, have training sets identified, and is subject to GDPR. To quote that old malapropism, ‘data is a minefield of information’!

1. Data dumps
Data is really messy, with much of it in:
  • odd data structures
  • odd formats/encrypted
  • different databases
Just getting a hold of the stuff is difficult.

2. Defunct data
Then there’s the problem of relevance and utility, as much of it is:
  • old
  • useless
  • messy
In fact, much of it could be deleted. We have so much of the stuff because we haven’t known what to do with it, don’t clean it and don’t know how to manage it.

3. Difficult data
There are also problems around data that is:
  • embarrassing
  • secret
There may be very good reasons for not opening up historic data, such as emails and internal communications. It may open up sizeable legal and HR risks for organisations. Think of the Wikileaks email dumps. It’s not like a barrel of oil, more like a can of worms. Like oil spills, we also have data leaks.

4. Different data
Once cleaned, one can see that there are many different types of data. Unlike oil, it has not so much fractions as different categories. In learning we have ‘Personal’ data, provided by the person, or actions performed by that person with their full knowledge. This may be gender, age, educational background, needs, stated goals and so on. Then there’s ‘Observed’ data from the actions of the user – their routes, clicks, pauses and choices. You also have ‘Derived’ data, inferred from existing data to create new data, and higher-level ‘Analytic’ data from statistical and probability techniques related to that individual. Data may be created on the fly or stored.
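One way to keep these categories straight is to treat them as separate record types rather than one undifferentiated pile. A minimal sketch in Python, with invented field names:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PersonalData:          # provided knowingly by the learner
    learner_id: str
    age: int
    educational_background: str
    stated_goals: List[str]

@dataclass
class ObservedData:          # captured from the learner's actions
    learner_id: str
    clicks: int
    pauses: int
    routes: List[str]

@dataclass
class DerivedData:           # inferred from existing data to create new data
    learner_id: str
    estimated_reading_speed: float   # e.g. words per minute from dwell times

@dataclass
class AnalyticData:          # statistical / probabilistic, about the individual
    learner_id: str
    predicted_completion_probability: float
```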

5. Anonymised data
Just when you thought it was getting clearer, you also have ‘Anonymised’ data, which is a bit like oil of unknown origin. It is cleaned of any attributes that may relate it to specific individuals. This is rather difficult to achieve, as there are often techniques to reverse-engineer attribution back to individuals.
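A minimal sketch of the first, easy step – stripping direct identifiers and replacing the ID with a salted hash. Field names are invented. Note that this is really pseudonymisation: the remaining quasi-identifiers (age, course, postcode and the like) are exactly what makes the re-identification mentioned above possible.

```python
import hashlib
import os

SALT = os.urandom(16)   # keep secret and separate from any released data

DIRECT_IDENTIFIERS = {"name", "email", "learner_id"}

def pseudonymise(record: dict) -> dict:
    """Drop direct identifiers and replace the ID with a salted hash.
    Quasi-identifiers left in the record can still allow re-identification."""
    token = hashlib.sha256(SALT + record["learner_id"].encode()).hexdigest()[:12]
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["pseudonym"] = token
    return cleaned

record = {"learner_id": "u123", "name": "Ada", "email": "ada@example.com",
          "age": 34, "course": "statistics", "score": 0.72}
print(pseudonymise(record))
```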

6. Supervised data
In AI there’s also ‘Training’ data, used for training AI systems, and ‘Production’ data, which the system actually uses when it is launched in the real world. This is not trivial. Given the problems stated above, it is not easy to get a suitable data set that is clean and reliable enough for training. Then, when you launch the service or product, the new data may be subject to all sorts of unforeseen problems not uncovered in the training data.
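A minimal sketch of the kind of sanity check this implies – hold training data aside, then flag when incoming production data looks nothing like it. The data here is randomly generated and deliberately shifted, and the check is a crude z-score comparison of means, not proper production monitoring.

```python
import random
import statistics

random.seed(0)

# Hypothetical 'training' data gathered before launch (e.g. quiz scores).
training_scores = [random.gauss(0.70, 0.10) for _ in range(1000)]

# Hypothetical 'production' data after launch - deliberately shifted here
# to mimic a population the training set never saw.
production_scores = [random.gauss(0.55, 0.15) for _ in range(200)]

def drift_alert(train, live, threshold=3.0):
    """Crude check: is the live mean implausibly far from the training mean,
    measured in standard errors? Not a substitute for real monitoring."""
    mu, sd = statistics.mean(train), statistics.stdev(train)
    z = abs(statistics.mean(live) - mu) / (sd / len(live) ** 0.5)
    return z > threshold

print("Drift detected:", drift_alert(training_scores, production_scores))
```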

7. Paucity of data
But the problems don’t stop there. In the learning world, the data problem is even worse, as there is another problem – the paucity of data. Institutions are not gushing wells of data. Universities, for example, don’t even know how many students turn up for lectures. Data on students is paltry. The main problem with the use of data in learning is that we have so little of the stuff. SCORM, which has been around for 20-plus years, literally stopped the collection of data with its focus on completion. This was the result of a stupid decision by a bunch of folk at ADL. This makes most data analytics projects next to useless. The data can usually be handled in a spreadsheet. It is certainly not as large, clean and relevant as it needs to be to produce genuine insights.

Prep
Before entering these data analytics projects, ask yourself some serious questions about ‘data’. Data size by itself is overrated, but size still matters: whether n is in the tens, hundreds, thousands or millions, the Law of Small Numbers still applies. Don’t jump until you are clear about how much relevant and useful data you have, where it is, how clean it is and in what databases it sits.
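A quick worked illustration of why n matters: the margin of error on a simple proportion shrinks only with the square root of the sample size, so small data sets produce wide, unreliable estimates. The numbers below are just the standard formula applied to invented sample sizes.

```python
import math

def margin_of_error(p=0.5, n=100, z=1.96):
    """Approximate 95% margin of error for an observed proportion p with n samples."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (10, 100, 1_000, 10_000, 1_000_000):
    print(f"n = {n:>9,}: +/- {margin_of_error(n=n):.3f}")
```

With n = 10 the estimate is plus or minus about 0.31 – essentially useless – and you need a million samples to get below 0.001.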
New types of data may be more fruitful than legacy data. In learning this could be dwell time on questions, open input data, wrong answers to questions and so on.
More often than not, what you have as data is really a set of proxies for the phenomena you care about. Be careful here, as your oil may actually be snake oil.
Conclusion
GDPR has made the management and use of data more difficult. All of this adds up to what I’d call the ‘data delusion’ – the idea that data is one thing, easy to manage and generally useful in data analytics projects in institutions and organisations. In general, it is not. That’s not to say you should ignore its uses – just don’t get sucked into data analytics projects in learning that promise lots but deliver little. Far better to focus on the use of data in adaptive learning or small-scale teaching and learning projects, where relatively small amounts of data can be put to good use.


Saturday, June 30, 2018

Clever AI/AR/Teacher hybrid systems for classroom use

Most AI-driven systems deliver content via screens to the student and then dashboards to the teacher. But there is a third way – hybrid AI/AR/Teacher systems that give the teacher enhanced powers to see what they can’t see with their own eyes. No teacher has eyes in the back of their head but, like a self-driving car, you can have eyes everywhere that recognise individual students, read their expressions, identify their behaviours and provide personalised learning experiences and feedback. You become a more powerful teacher by seeing more, getting and giving more feedback, and having less admin and paperwork to do. The promise is that such hybrid systems allow you to do what you do best – teach, especially addressing the needs of struggling students.
AI/AR in classroom
I’ve written about the use of 3D video in teacher training before but this AR (Augmented Reality) idea struck me as clever. Many uses of AI lie outside of the classroom. This augments the strengths of the teacher by integrating dashboards, personal symbols and other AR techniques into the classroom and the practice of teaching. 
Ken Holstein, at Carnegie Mellon, seems like an imaginative and creative researcher, and has been looking at hybrid teacher–AR–AI systems that present adaptive software but also highlight each individual student’s progress – whether they’re attentive, struggling, need help and so on. Symbols appear above the heads of each student. The teacher wears glasses that can display this information, linked to a back-end system that gathers data about each student’s performance.
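The plumbing implied here is simple enough to sketch: the adaptive back end reports per-student state, and a rule maps that state to the symbol floating above each student’s head in the teacher’s glasses. The state fields, thresholds and symbols below are invented for illustration, not taken from Holstein’s system.

```python
from dataclasses import dataclass

@dataclass
class StudentState:            # what a hypothetical adaptive back end might report
    name: str
    correct_streak: int
    errors_last_5: int
    seconds_idle: float

def symbol_for(state: StudentState) -> str:
    """Map back-end state to the AR symbol shown above the student's head."""
    if state.seconds_idle > 120:
        return "zzz"           # inattentive / idle
    if state.errors_last_5 >= 3:
        return "!"             # struggling, needs help now
    if state.correct_streak >= 5:
        return "*"             # racing ahead, could be stretched
    return "ok"

classroom = [
    StudentState("A", correct_streak=6, errors_last_5=0, seconds_idle=10),
    StudentState("B", correct_streak=0, errors_last_5=4, seconds_idle=30),
    StudentState("C", correct_streak=1, errors_last_5=1, seconds_idle=300),
]
for s in classroom:
    print(s.name, symbol_for(s))
```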
It does, of course, seem all very Big Brother, to some even monstrous, especially those comfortable with traditional classroom teaching. However, as results seem to have plateaued in K12 education, we may need to make teachers more effective by being able to focus on the students who are having difficulties. These ideas make personalised learning possible not by replacing the teacher (the idea behind most AI/adaptive systems) but by giving the teacher individual feedback over the heads of each student, so that personalised learning can be realised. 
Face recognition in the classroom
Let’s up the stakes with this face recognition system used in China. It recognises student faces instantly as they arrive at school, so there’s no need for registration. In the classroom it scans the students every 30 seconds, recognising seven different expressions, such as neutral, happy, sad, disappointed, angry and surprised, as well as six types of behaviour, such as reading, writing, being distracted and so on. So it helps the teacher manage registration, performance and behaviour.
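A minimal sketch of the reporting loop this describes, with a stand-in classifier (random labels) where the real system would run face and expression recognition. The label lists, interval and summary are invented for illustration only.

```python
import random
import time
from collections import Counter

EXPRESSIONS = ["neutral", "happy", "sad", "disappointed", "angry", "surprised", "scared"]
BEHAVIOURS = ["reading", "writing", "listening", "answering", "distracted", "sleeping"]

def classify(students):
    """Stand-in for the real recognition model: returns labels per student."""
    return [(random.choice(EXPRESSIONS), random.choice(BEHAVIOURS)) for _ in students]

def scan_classroom(students, interval_seconds=30, scans=3):
    """Every `interval_seconds`, classify each student and summarise for the teacher."""
    for _ in range(scans):
        labels = classify(students)
        behaviours = Counter(b for _, b in labels)
        distracted = [s for s, (_, b) in zip(students, labels) if b == "distracted"]
        print("Behaviour counts:", dict(behaviours))
        print("Flag for teacher:", distracted or "none")
        time.sleep(interval_seconds)

# Short interval here just so the demo runs quickly.
scan_classroom(["s1", "s2", "s3", "s4"], interval_seconds=1, scans=2)
```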
They also claim that it helps teachers improve by adapting to the feedback and statistical analysis they receive from the system. When I’ve shown people this system, some react with horror, but if we are to reduce teacher workload, should we consider such systems to help with problems around non-teaching paperwork, student feedback and classroom behaviour?
Conclusion
What seems outlandish today often turns out to be normal in the future – the internet, smartphones, VR. Combinations of technology are often more effective than single approaches – witness the smartphone or the self-driving car. These adaptive AR/AI hybrid systems may turn out to be very effective by being sensitive to both teacher and student needs. The aim is not to replace the teacher but to enhance the teacher’s skills, giving them real-time data, personal feedback on all students in their class and data to reflect on their own skills. Let’s not throw the advantages out before we’ve had time to consider the possibilities.

Friday, June 22, 2018

AI and religious zealotry – let’s not fall for anthropomorphism, techno-prophecies, singularity & end-of-days

AI is unique as a species of technology in that it induces speculation that falls little short of religious fervour. Elon Musk and Stephen Hawking, no less, have made the case for AI being an existential threat, a beast that needs to be tamed. On the other side, in my view more level-headed thinkers, such as Steven Pinker and many practitioners who work in AI, claim that much of this is hyperbole.
The drivers behind such religiosity are, as Hume said in the 18th century, a mixture of our:
1) fears, hopes and anxieties about future events
2) tendency to magnify
From the Greeks onwards, with their Promethean myth, through its resurrection by Mary Shelley in ‘Frankenstein’ in the 19th century, then a century of film from ‘Metropolis’ onwards, the perceived loss of human autonomy has fuelled our fears and anxieties about technology. The movies have tended to play on existing fears about commies, crime, nuclear war, alien invasions and whatever fear the age throws up. Y2K was a bogus fear; the world suffered no armageddon. So let’s not fall for current fears.
The tendency to magnify shows itself in the exaggeration around exponentialism, the idea that things will proceed exponentially, without interruption, until disaster ensues. Toby Walsh, an AI researcher, warns us not to accept too readily the myth of exponential growth in AI. There are many brakes on progress, from processing power to backpropagation. Progress will be slower than anticipated.
The prophets of doom seem to ignore the fact that it is almost inconceivable that we won’t anticipate the problems associated with autonomy, then regulate and control them with sensible engineering solutions.
The airline industry is one of the wonders of our age, where most commercial airplanes are essentially robots that switch to autopilot as low as 200 feet, then fly and land without much human intervention. Security, enhanced by face recognition, allows us to take international flights without speaking to another human being. Soaked in AI and automation, its safety record is astounding. Airplanes have got safer because of AI, not in spite of it. Similarly, with other applications of AI we will anticipate and engineer solutions that are safe. But there are several specific tendencies that mirror religious fervour and that we must be aware of:
Anthropomorphism
AI is not easy – it’s a hard slog. I agree with Pinker when he says that being human is a coherent concept but that there is no real coherence in AI. Even if we imagine a coherent general intelligence, there is no reason to assume that AI will adopt the attitudes that we, as humans, have accumulated over two million years of evolution. We tend to attribute human qualities to the religious domain, whether God, Saints or our binary moral constructs: God/Devil, Saint/Sinner, Good/Evil, Heaven/Hell. These moral constructs are then applied to technology, despite the fact that there is no consciousness, no self-awareness and no ‘intelligence’ – a word that often misleads us into thinking that AI has thoughts. Blinded by the word ‘intelligence’, we anthropomorphise, transposing our human moral schemas onto indifferent technology. So what if IBM Watson won at Jeopardy and Google triumphed at Go and poker – the AI didn’t know it had won or triumphed.
Prophecy
Another sign of this religious fervour is ‘prophecy’. There’s no end of forecasts and extrapolations, best described as prophecies, about future progress and fears in AI. The prophecies, as in religion, tend to be about dystopian futures. Pestilence and locusts have been replaced by nanotechnology and micro-drones. Kurzweil, that high priest of hyperbole, has taken this to another level, with his diagrammatic equivalent of rapture… the singularity.
Singularity
The pseudo-religious idea of the ‘singularity’ is the clearest example of religious magnification and hyperbole. Just as we invented religious ideas such as omniscience, omnipresence and omnipotence, we draw exponential curves and imagine that AI moves towards similarly lofty heights. We create a technical Heaven, or for some a Hell. There will be no singularity. AI is an idiot savant, smart only in narrow domains but profoundly stupid. It’s only software.
End-of-days
Then there is an ‘end of days’ dimension to this dystopian speculation: the idea that we are near the end of our reign as a species and that, through our own foolishness and blindness to the dangers of AI, we will soon face extinction.
There is no God
One fundamental problem with all of this pseudo-religious fervour is the simple fact that AI, unlike our monotheistic God, is not a singular idea. It has no formal and precise definition. AI is not one thing, it is many things. It’s simply a set of wildly different tools. In fact, many things that people assume are AI, such as factory robots, have nothing to do with AI, and many other applications labelled AI are just statistical analysis, data mining or some other well-known technique. Algorithms have been around since Euclid, 2300 years ago. It has taken over two millennia of maths to get here. Sure, we have data flooding from the web, but that’s no reason to jump two by two onto some imaginary Ark to save ourselves and all organic life. Believe me, there are many worse dangers – disease, war, climate change, nuclear weapons…
Blinded by bias
The zealotry of the technophobes is akin to that of the fanatics in The Life of Brian. What has AI ever done for us? Google search, accelerated medical research, identifying disease outbreaks, spotting melanomas, diagnosing cancer, reading scans and pathology slides, self-driving cars… let’s see. Let’s not see AI as a Weapon of Math Destruction and focus relentlessly on accusations of bias that turn out to be the same few second-hand case studies, endlessly recycled. All humans are biased, and while bias may exist in software or data, that form of mathematical bias can be mathematically defined and dealt with, unlike our many human biases, which Daniel Kahneman, who got the Nobel Prize for his work on bias, described as ‘uneducable’. Machine learning and many, many other AI techniques depend, necessarily, on making mistakes as they optimise solutions. This is how it works, learns and solves problems. Remember – it’s only software.
Conclusion
We need to take the ‘idiot savant’ description seriously. Sure, there are dangers. Almost all technology has a calculus of upsides and downsides. Cars mangle, kill and maim millions, yet we still drive. The greatest danger is likely to be the military or bad-actor use of weaponised AI. That we should worry about and regulate. AI is really hard and takes time, so there is time to solve the safety issues. All of those dozens of ethical groups that are springing up like weeds are largely superfluous, apart from those addressing autonomous weapons. There are plenty of real and present problems to be solved – AI is not one of them. Let’s accept that AI is like the god Shiva: it can both create and destroy. Don’t let it be seen solely as a destructive force; let’s use it creatively, in making our lives better, especially in health and education.