When people use the word ‘AI’ these days, they rarely appreciate the breadth of the field. One horse, GenAI, may have won the Derby by a country mile recently, but there’s a ton of other stuff in the race.
In the pre-GenAI days, way back in 2014-2021, I used to talk regularly about AlphaFold as an astonishing, measurable breakthrough for our species. This one tool alone remains a monumental achievement, and by far the most important person in the tripartite award is Demis Hassabis.
AlphaFold, developed by DeepMind, predicts protein structures. It both accelerates and opens up a vast array of research opportunities. At CASP14 in 2020 it thrashed the competition, outperforming around 100 other teams with a gargantuan leap in accuracy. It shocked everyone.
DeepMind has since released a database containing over 200 million protein structures, covering nearly all catalogued proteins known to science. The database is FREE to the global scientific community, democratising access to high-quality protein structures.
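For anyone who wants to try it, predictions can be pulled straight from the AlphaFold Protein Structure Database. The short Python sketch below is illustrative only: it assumes the public REST endpoint at alphafold.ebi.ac.uk and a ‘pdbUrl’ field in its response, both of which may differ or change.

```python
# Illustrative sketch of pulling a predicted structure from the AlphaFold
# Protein Structure Database. The endpoint and the 'pdbUrl' field reflect the
# public API as I understand it and may change; treat this as a sketch,
# not official documentation.
import requests

ACCESSION = "P68871"   # human haemoglobin subunit beta, as an example UniProt ID

# Ask the database which predicted models exist for this accession.
resp = requests.get(f"https://alphafold.ebi.ac.uk/api/prediction/{ACCESSION}", timeout=30)
resp.raise_for_status()
model = resp.json()[0]          # usually a single full-length model per protein

# Download the predicted structure in PDB format and save it locally.
pdb = requests.get(model["pdbUrl"], timeout=30)
pdb.raise_for_status()
with open(f"AF-{ACCESSION}.pdb", "wb") as f:
    f.write(pdb.content)

print(f"Saved AlphaFold prediction for {ACCESSION} to AF-{ACCESSION}.pdb")
```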
The productivity gain is mind-blowing. Traditional methods, relying on incredibly expensive equipment and rare expertise, could take years to determine the structure of a single protein; AlphaFold does it in hours. That lets researchers focus on further experimentation rather than groundwork, and in aggregate it has saved centuries of research time.
For example, during the COVID pandemic, AlphaFold predicted structures of proteins related to the SARS-CoV-2 virus, supporting the rapid development of treatments and vaccines. The same acceleration is playing out across this important and, some feel, neglected field.
Back to Demis Hassabis, the British entrepreneur, neuroscientist and artificial intelligence researcher. A chess prodigy and games designer, he was the lead programmer and co-designer of Theme Park, well known in the games world. After a spell as an academic, publishing a series of papers, he started an AI company based on his understanding of how the brain and memory work. That company, DeepMind, was sold to Google in 2014 for $628 million.
Learning (memory) theory
Hassabis focused on the hippocampus, where episodic memory is consolidated. Through a study of five brain-damaged patients, he found that memory loss caused by damage to the hippocampus was accompanied by a loss of imagination, the ability to plan and think into the future. This was a fascinating insight: he realised that reinforcement was the real force in learning, practice makes perfect. The link between episodic memory and imagination was backed up by other brain-scanning studies and by experiments with rats. He proposed a ‘scene construction’ model for recalling memories which, at scale, sees the mind as a simulation engine. This focus on the reinforcement and consolidation of learnt practice, deliberate practice as it is known, when captured and executed algorithmically, generates expertise. It led him to set up a machine learning AI company in 2010: DeepMind.
Deep Learning algorithms become experts
DeepMind focused on deep learning algorithms that could take on complex tasks and, here’s the rub, do so without prior knowledge of those tasks. This is the key point: AI that can ‘learn’ to do things rather than be programmed to do them. They stunned the AI community when their system taught itself a number of computer games and became an expert gamer. In Breakout it not only got as good as any human, it devised a technique of tunnelling round the edge and attacking the bricks from above that its creators had not anticipated. The achievement was astonishing, because the software knew nothing about these games when it started: it looked only at the screen and the score, learning by trial and error. The approach borrows some aspects of human learning, combining deep learning with reinforcement learning, so-called deep reinforcement learning, to solve problems.
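To make the idea concrete, here is a toy illustration of the reinforcement learning update at the heart of that approach. It is a minimal sketch, not DeepMind’s code: their DQN replaces this lookup table with a deep neural network that reads raw screen pixels, but the ‘learn from reward by trial and error’ loop is the same in spirit.

```python
import random

N_STATES = 5           # a tiny corridor: start at state 0, reward at state 4
ACTIONS = [0, 1]       # 0 = move left, 1 = move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# The "knowledge" is just a table of state-action values; DQN swaps this
# table for a deep neural network so it can handle raw pixels.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move left or right; reward 1 only for reaching the final state."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def greedy(state):
    """Pick the best-known action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit what we know, occasionally explore.
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, action)
        # The Q-learning update: nudge the estimate towards the reward
        # plus the discounted value of the best follow-up action.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

print("Learned preference for 'right' in each state:",
      [round(Q[(s, 1)] - Q[(s, 0)], 2) for s in range(N_STATES)])
```

After a few hundred episodes the table shows a clear preference for moving right in every state, learned purely from the reward signal, with nothing about the task hard-coded in.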
AlphaGo beat the Go world champion Lee Sedol 4-1 in Seoul, at the game long regarded as a Holy Grail in AI, reckoned to be the most complicated game we play, the pinnacle of games. Lee Sedol was playing for humanity. The number of possible board positions is greater than the number of atoms in the observable universe. AlphaGo was initially trained on many games played by strong amateurs, then improved by playing against itself. Deep neural networks that loosely mimic the brain, given enormous computing power and trained to perform a task, can go beyond human capabilities. In game two it made a move that no human would have made; it had become creative. It learns and goes on learning. Far from seeing this as a defeat, Lee Sedol described it as a wonderful experience, and Go has never been so popular.
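For the curious, the snippet below gives a heavily simplified flavour of how such a system chooses a move: a policy network suggests which moves are worth considering, a value network estimates how good the resulting positions are, and a search rule trades the two off against visit counts. The stub functions and numbers are invented placeholders, not DeepMind’s networks, and the real AlphaGo runs this kind of rule inside a full Monte Carlo tree search over millions of positions.

```python
# Heavily simplified, hypothetical sketch of AlphaGo-style move selection:
# a policy prior, a value estimate, and a PUCT-style trade-off between the two.
import math
import random

MOVES = ["A", "B", "C", "D"]   # stand-ins for legal board points
C_PUCT = 1.5                    # exploration constant

def policy_prior(move):
    # Placeholder for the policy network's prior probability over moves.
    return {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}[move]

def value_estimate(move):
    # Placeholder for the value network: a noisy estimate of win probability.
    true_value = {"A": 0.45, "B": 0.60, "C": 0.50, "D": 0.30}[move]
    return true_value + random.uniform(-0.1, 0.1)

visits = {m: 0 for m in MOVES}
total_value = {m: 0.0 for m in MOVES}

def puct_score(move, total_visits):
    """Exploit high average value, but keep exploring moves the prior likes
    and that have rarely been visited."""
    q = total_value[move] / visits[move] if visits[move] else 0.0
    u = C_PUCT * policy_prior(move) * math.sqrt(total_visits) / (1 + visits[move])
    return q + u

for _ in range(1000):
    total_visits = sum(visits.values()) + 1
    move = max(MOVES, key=lambda m: puct_score(m, total_visits))
    visits[move] += 1
    total_value[move] += value_estimate(move)

best = max(MOVES, key=lambda m: visits[m])
print("Visit counts:", visits, "-> chosen move:", best)
```

The search gradually concentrates its visits on the move with the best estimated value, which is the move it finally plays, the same basic logic, scaled up enormously, that produced the famous move 37 in game two.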
Conclusion
One of the leading companies in the world, whose people have created some of the smartest software ever built, built that success on the back of learning theory, going back to Hebb and his successors. This should matter to learning professionals, as AI now plays a significant role in learning. Software ‘learns’, or can be ‘trained’, using data. In addition to human teachers and learners, we now have software teachers and software that learns. The point is not that a machine can beat a human but that it can learn to do better still. It is a sign of things to come, of as yet unknown but astounding advances in learning. The cutting edge of AI is the cutting edge of learning. Hassabis’s Nobel Prize is well deserved, as his work is of such great benefit to the future of our species.