Demis Hassabis is a British entrepreneur, neuroscientist and artificial intelligence researcher. He was also a chess prodigy and games designer; as the lead programmer and co-designer of Theme Park, he became well known in the games world. After a spell as an academic, publishing a series of papers, he started an AI company based on his understanding of how the brain and memory work. That company, DeepMind, was sold to Google in 2014 for $628 million.
Learning (memory) theory
Hassabis focused on the hippocampus, where episodic memory is consolidated. Through a study of five brain-damaged patients, he found that memory loss caused by damage to the hippocampus was accompanied by a loss of imagination (the ability to plan and think into the future). This was a fascinating insight, and he came to see reinforcement as the real force in learning: practice makes perfect. The link between episodic memory and imagination was backed up by other brain-scanning studies and by experiments with rats. He proposed a 'scene construction' model for recalling memories, which at scale sees the mind as a simulation engine. This focus on the reinforcement and consolidation of learnt practice (deliberate practice, as it's known), when captured and executed algorithmically, generates expertise. This led him to set up a machine learning AI company in 2010 - DeepMind.
Deep Learning algorithms become experts
DeepMind focused on deep learning algorithms that could take on complex tasks and, here's the rub, do so without prior knowledge of or training in those tasks. This is the key point: AI that can 'learn' to do almost anything. They stunned the AI community when their system learnt to play a number of computer games and became an expert gamer. In Breakout their system not only got as good as any human, it devised a technique of tunnelling round the edge and attacking the bricks from above that its developers had not anticipated. The achievement was astonishing, as the software knew nothing about these games when it started. It simply looked at the screen, worked out how the scoring behaved and learnt by trial and error. DeepMind's approach takes some aspects of human learning and combines deep learning with reinforcement learning, called deep reinforcement learning, to solve such problems.
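To make the idea of learning purely from trial and error concrete, here is a minimal sketch in the spirit of deep reinforcement learning. Everything in it is invented for illustration: the ToyEnvironment is a hypothetical stand-in for a real game screen, the network is tiny, and the update is a simplified one-step version, not DeepMind's published architecture.

```python
# A minimal sketch of trial-and-error learning with a neural network:
# a small network estimates the value of each action from raw observations
# and improves using only the reward (the 'score') it receives.

import random
import torch
import torch.nn as nn

class ToyEnvironment:
    """Hypothetical game: the agent sees a 'screen' of 16 numbers and is
    rewarded for picking the action matching the brightest quarter."""
    def reset(self):
        self.screen = torch.rand(16)
        return self.screen
    def step(self, action):
        quarters = self.screen.view(4, 4).sum(dim=1)
        reward = 1.0 if action == int(torch.argmax(quarters)) else 0.0
        return self.reset(), reward

# Value network: maps an observation to an estimated value for each of 4 actions.
q_net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

env = ToyEnvironment()
state = env.reset()
epsilon = 0.1  # fraction of the time we explore with a random action

for step in range(5000):
    # Trial: act mostly greedily, occasionally at random (exploration).
    with torch.no_grad():
        q_values = q_net(state)
    action = random.randrange(4) if random.random() < epsilon else int(torch.argmax(q_values))

    next_state, reward = env.step(action)

    # Error: nudge the predicted value of the chosen action towards the
    # reward actually received (a one-step target for brevity; full deep
    # Q-learning would also bootstrap from the next state's value).
    prediction = q_net(state)[action]
    loss = (prediction - reward) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    state = next_state
```

The essentials are all there: the agent's only feedback is the score, and its value estimates improve simply by being nudged towards the rewards it actually receives.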
AlphaGo beat the Go world champion Lee Sedol 4-1 in Seoul, in the game long seen as a Holy Grail of AI, reckoned to be the most complex game we play, the pinnacle of games. Lee Sedol was playing for humanity. The number of possible board positions is greater than the number of atoms in the universe. AlphaGo was first trained on many games played by strong amateurs, then improved by playing against itself. Deep neural networks loosely inspired by the brain, given enormous computing power and trained to perform a task, can go beyond human capabilities. In game two it made a move that no human would have played and became, in a sense, creative. It learns and goes on learning. Far from seeing this as a defeat, Lee Sedol saw it as a wonderful experience, and Go has never been so popular.
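The published AlphaGo papers describe how those networks are combined with tree search: at each step the search prefers moves that have performed well so far, with a bonus for moves the policy network rates highly but which have been tried only rarely (a PUCT-style rule). The sketch below illustrates only that selection step; the move names, statistics and constant are invented for illustration.

```python
# Illustrative PUCT-style move selection, in the spirit of the AlphaGo papers:
# score each candidate move by its mean value so far (Q) plus an exploration
# bonus driven by the network's prior (P) and its visit count (N).

import math

def select_move(stats, c_puct=1.5):
    """stats: move -> dict with prior P, visit count N and mean value Q."""
    total_visits = sum(s["N"] for s in stats.values())
    def score(s):
        exploration = c_puct * s["P"] * math.sqrt(total_visits) / (1 + s["N"])
        return s["Q"] + exploration
    return max(stats, key=lambda move: score(stats[move]))

# Made-up search statistics for three candidate moves.
example = {
    "D4":  {"P": 0.40, "N": 120, "Q": 0.52},
    "Q16": {"P": 0.35, "N": 90,  "Q": 0.55},
    "K10": {"P": 0.05, "N": 10,  "Q": 0.30},
}
print(select_move(example))  # prefers the well-performing, still-promising move
```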
AlphaFold is a protein folding tool that predicts the 3D structure of a protein from its 1D amino acid sequence, saving huge amounts of research effort, lab time and money. DeepMind's algorithms also cut the energy used for cooling Google's data centres by around 40%.
Influence
One of the leading AI companies in the world, whose people have created some of the smartest software in the world, built that success on the back of learning theory, going back to Hebb and his successors. This should matter to learning professionals, as AI now plays a significant role in learning. Software 'learns', or can be 'trained', using data. In addition to human teachers and learners, we now have software that teaches and software that learns. The point is not just that a machine can beat a human but that it can learn to do so, and then keep getting better. It is a sign of as yet unknown but astounding things to come in learning. The cutting edge of AI is the cutting edge of learning.