Tuesday, June 28, 2016

AI will have a huge effect on education & training. But what is it?

How does AI work?
AI is not one thing, it is many things, and there are many uses of AI in education and training. Machine learning, in particular, will have a huge impact on the world of learning. It is like a huge whale shark heading towards us, gobbling up all in its wake. It learns about the learner as the learner learns. More than this, in some domains it can learn faster than humans. There are profound implications here for future employment, and even for what it is to be human. It’s an existential threat, or opportunity. That’s why we need to know more about it.
Five species of AI
Beneath all of this lie several species of AI, schools if you like, that have shaped the AI landscape. A useful guide is the classification used by Pedro Domingos in The Master Algorithm.
1. Symbolists
2. Connectionists
3. Evolutionists
4. Bayesians
5. Analogizers
The symbolists use maths and logic, sets of rules and decision trees, to deal with data. The problem with this approach is ‘overfitting’, the tendency to read into the data things that are not actually there. This form of AI is prone to exaggeration, or to being misled by errors in the data. It’s a balancing act between the hypotheses produced and the data; get it wrong and you get it badly wrong. But the symbolists have a trick up their sleeve: they see induction – predicting the future – as the reverse of deduction, working back from the data to the hypotheses that best explain it. Decision trees are one practical tool in the symbolist armoury, and there are many other refinements. Decision trees are used in Microsoft’s Kinect to identify parts of the body from its cameras, and they are often better than humans at predicting diagnoses and legal rulings. The weakness of this approach is that it doesn’t do ‘mess’. It’s bad with many practical and fuzzy problems; it demands exactitude.
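As a toy sketch of the symbolist idea – rules learned from data – here is a one-level decision tree (a ‘decision stump’) in Python. The weather-style data, feature names and error-counting rule are all invented for illustration; real tree learners split recursively and use measures like entropy rather than raw error counts:

```python
from collections import Counter

def majority(labels):
    # the most common label in a branch
    return Counter(labels).most_common(1)[0][0]

def best_stump(rows, labels):
    """Pick the feature whose branches misclassify the fewest examples."""
    best = None
    for f in range(len(rows[0])):
        # group the labels by this feature's value
        groups = {}
        for row, label in zip(rows, labels):
            groups.setdefault(row[f], []).append(label)
        # each branch predicts its majority label; count the leftovers as errors
        errors = sum(len(ls) - ls.count(majority(ls)) for ls in groups.values())
        if best is None or errors < best[0]:
            best = (errors, f, {v: majority(ls) for v, ls in groups.items()})
    _, feature, rules = best
    return feature, rules

# Invented toy data: (outlook, windy) -> play outside?
rows = [("sunny", "yes"), ("sunny", "no"), ("rain", "yes"),
        ("rain", "no"), ("overcast", "no")]
labels = ["no", "yes", "no", "yes", "yes"]

feature, rules = best_stump(rows, labels)
# On this data the stump discovers that 'windy' predicts the label perfectly.
```

The stump is the simplest possible decision tree: a single if-then-else over one feature, chosen because it explains the data best.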
Connectionists are inspired not by logic but by the brain – or at least by neural connections and networks. More specifically, ‘backpropagation’ is their kick-ass tool. Imagine having to climb Ben Nevis blind. You tap your foot around until you find the steepest direction, and step forward. Repeat until you get to the top. Only there’s a problem: you may only have climbed a secondary hillock. Nevertheless, this idea of ‘gradient ascent’ (or descent) is fundamental to connectionism. Backpropagation has been used in text-to-speech, predicting the stock market and driverless cars. But there’s another serious problem. In layered neural networks, the more layers you have, the more diffuse the findings. Autoencoders – sandwiches where the input is the same as the output but the middle is a form of compression – have allowed layer after layer to be trained effectively. There’s big money riding on this approach: the EU has put aside a billion euros, and there’s $100 million in the US BRAIN project. Some, however, are sceptical about copying the brain. We didn’t learn to fly by copying the flapping of birds’ wings, they say. Neither did we go faster by studying the legs of a cheetah; we invented the wheel.
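The Ben Nevis analogy can be sketched in code. Below is a minimal two-input, two-hidden-neuron, one-output network trained on XOR with plain backpropagation. The architecture, learning rate and iteration count are arbitrary illustration choices and, true to the ‘secondary hillock’ problem, a different random seed can leave it stuck in a local minimum:

```python
import math
import random

random.seed(0)

def sig(x):
    return 1.0 / (1.0 + math.exp(-x))

# 2-2-1 network: weights for the hidden layer and the output neuron
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [0.0, 0.0]
w_o = [random.uniform(-1, 1) for _ in range(2)]
b_o = 0.0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR
lr = 0.5  # step size: how far to move in the steepest direction

def forward(x):
    h = [sig(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(2)]
    o = sig(w_o[0] * h[0] + w_o[1] * h[1] + b_o)
    return h, o

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

err_before = total_error()
for _ in range(10000):
    for x, t in data:
        h, o = forward(x)
        # output delta, then propagate it back through the hidden layer
        d_o = (o - t) * o * (1 - o)
        d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
        # step each weight downhill along its gradient
        for j in range(2):
            w_o[j] -= lr * d_o * h[j]
            w_h[j][0] -= lr * d_h[j] * x[0]
            w_h[j][1] -= lr * d_h[j] * x[1]
            b_h[j] -= lr * d_h[j]
        b_o -= lr * d_o
err_after = total_error()
```

Each pass computes the output error, propagates a delta back through the hidden layer, and nudges every weight a small step downhill – gradient descent, one blind footstep at a time.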
Evolution, some argue, is essentially algorithmic. Genetic algorithms produce variability, and this is tested in the real world to produce – us! But evolutionary algorithms are more like selective breeding: we inject purpose and goals into the process, so it’s less random than biological evolution. We let software breed software, producing variations but also mimicking sexual reproduction. Using all the things we’ve learnt from actual evolution, we let the software rip in a virtual world. Unlike real evolution, successful solutions don’t need to die. They are immortal, free to breed with their children and grandchildren. This approach is great at coming up with new, unimagined solutions. However, it is subject to obesity, producing ever more complexity – the ‘survival of the fattest’ problem. It also, rather cleverly, uses ‘learning’ to accelerate its progress: allowing genetic algorithms to learn, inspired by the Baldwin effect, is the trick that speeds up their success. This approach has been successful in designing electronic circuits, factory optimisation, even inventions. For a brilliant look at how evolutionary algorithms work in practice, see how one plays Super Mario.
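A minimal genetic algorithm shows the ingredients described above – variation, crossover mimicking sexual reproduction, mutation, and ‘immortal’ parents that survive into the next generation. The toy task (evolving a bit-string towards all ones, known as OneMax) and every parameter are invented for illustration:

```python
import random

random.seed(1)

LENGTH, POP, GENS, MUT = 20, 30, 60, 0.02  # illustrative settings

def fitness(genome):
    # OneMax: the more ones, the fitter
    return sum(genome)

def crossover(a, b):
    # single-point crossover: splice two parents together
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(genome):
    # occasionally flip a bit
    return [1 - g if random.random() < MUT else g for g in genome]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]   # selective breeding: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children   # parents are 'immortal' and breed with offspring

best = max(pop, key=fitness)
```

After a few dozen generations the fittest genome is at or near all ones – purpose injected into variation and selection, exactly as the selective-breeding comparison suggests.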
Bayes was an 18th-century clergyman. His theorem (actually formalised by Laplace), which takes prior probabilities and updates them in the light of new evidence, is the most famous theorem in AI. Sounds easy – you have the theorem and off you go. It’s not. Complexity is its enemy, so the Naïve Bayes classifier takes some ‘naïve’ shortcuts to accelerate the process. Beyond Bayes we have Markov chains, originally conceived to apply probability to poetry (Pushkin’s Eugene Onegin). Markov chains do pretty well at dealing with probabilities in structured data, such as language, and are used in machine translation systems such as Google Translate. PageRank, Google’s successful search algorithm, was a Markov chain which calculated the probability of landing on a page on the basis of its incoming links. Hidden Markov models (HMMs) go one step further by predicting the next word from the words, even the sounds, that came before. They are what make Siri work; in fact, they’re what make all mobile voice calls work. There’s another trick, Kalman filters, which eliminate much of the noise, like a barman scraping the froth off the top of your beer. Beyond this are Monte Carlo techniques, which introduce chance visits across networks to stabilise the results. Bayesian inference is behind a lot of computer vision programmes, Naïve Bayes lies beneath most spam filters, and Peter Norvig, of Google, states that it is a staple of search at Google.
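Here’s what a Naïve Bayes spam filter looks like in miniature. The tiny corpus is invented, and the ‘naïve’ shortcut is the assumption that each word’s probability is independent of the others, so the evidence for each word can simply be multiplied into the prior:

```python
import math
from collections import Counter

# Invented toy corpus
spam = ["win cash now", "cheap pills win", "win win cash"]
ham = ["meeting at noon", "cash flow report", "see you at noon"]

def word_counts(docs):
    return Counter(w for d in docs for w in d.split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def log_prob(word, counts, total):
    # Laplace smoothing so an unseen word doesn't zero out the whole product
    return math.log((counts[word] + 1) / (total + len(vocab)))

def p_spam(message):
    # equal priors, updated by each word's likelihood (in log space)
    ls = math.log(0.5) + sum(log_prob(w, spam_counts, spam_total)
                             for w in message.split())
    lh = math.log(0.5) + sum(log_prob(w, ham_counts, ham_total)
                             for w in message.split())
    return math.exp(ls) / (math.exp(ls) + math.exp(lh))
```

Feed it “win cash” and the posterior probability of spam comes out high; feed it “meeting at noon” and it comes out low – priors updated in the light of new evidence, word by word.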
Look for something similar – that’s the principle behind the analogizer approach to AI. Here we find the nearest-neighbour algorithm, support vector machines and analogical reasoning. Nearest neighbour is super-fast and is used in face recognition. What makes analogy work is lazy learning: don’t compare a new face to every face in your database; use collaborative filters, as Netflix does, to narrow the possibilities. Recommendation engines such as Netflix and Amazon are keen on nearest-neighbour algorithms and refinements such as k-nearest neighbour. Analogizers work well in narrow domains, where there’s a limit to what they’re looking for – handwriting recognition, book choices, movie choices and so on. To widen their applicability, support vector machines (SVMs) optimise solutions for much more complex problems, such as text classification.
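Nearest neighbour really is this simple. A sketch of k-nearest-neighbour ‘lazy learning’ in Python, with invented points and labels – note there is no training step at all; every bit of work happens when a query arrives:

```python
import math
from collections import Counter

# Invented 2-D examples: (features, label)
points = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"), ((0.9, 1.1), "cat"),
          ((5.0, 5.0), "dog"), ((5.5, 4.8), "dog"), ((4.8, 5.2), "dog")]

def classify(query, k=3):
    # rank every stored example by distance to the query...
    ranked = sorted(points, key=lambda p: math.dist(query, p[0]))
    # ...and let the k nearest vote
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

classify((1.1, 0.9))   # -> "cat"
classify((5.1, 5.1))   # -> "dog"
```

The collaborative filtering the paragraph mentions is the same idea one level up: find users whose past choices sit nearest to yours, and let their choices vote on your recommendation.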
This is a complex field, but it didn’t come from nowhere. We’ve had 2,500 years of maths – Euclid defining the first ever algorithm, the great Arab scholar Al Khwarizmi, who gave us the word ‘algorithm’, centuries of work in probability theory – and now AI. With the flood of data from the web and massive yet cheap computing power, we are now able to harness its power for practical uses. This startup, Smacc, has just raised £3.5 million to automate accountancy. This AI platform beats expert pilots in aerial combat. Are there any human skills that AI can't master?
You needn’t worry too much about the mechanics of it all, but we do need to understand the ways in which it will impact the field of teaching and learning. It already has. Google has been around for nearly two decades, and it is nothing but AI. Almost everything you do online is powered by AI. Many of the things you do and choose online have been subtly determined by AI. It’s not coming – it’s here.
