Certain parts of academia really hate AI. It's a provocation they can't handle, undermining a sometimes (not always) lazy attitude towards teaching and assessment. AI is an injection of subversion that is badly needed in education, as it throws light on so many poor practices.
Geoffrey Hinton (1947- ) is most noted for his work on artificial neural networks. He applied to Cambridge, was accepted, tried a few subjects and eventually focused on Experimental Psychology. On graduating he became a carpenter for six years but, inspired by Donald Hebb, he formed his ideas in Islington Library and applied to Edinburgh to do a PhD in AI at a time when it was unfashionable.
He then spent time teaching and researching at various institutions, including the University of Sussex and Carnegie Mellon University, but it was at the University of Toronto that Hinton contributed most significantly to the field of neural networks and deep learning. Hinton's contributions to AI have earned him numerous accolades, including the Turing Award in 2018, which he shared with Yann LeCun and Yoshua Bengio for their work on deep learning.
In 2013, Hinton was hired by Google to work at Google Brain, their deep learning research team. He took part-time status at the University of Toronto to accept this position and is now the chief scientific advisor at the Vector Institute in Toronto, which specializes in research on artificial intelligence and deep learning.
Connections
Geoffrey Hinton claims his interest in the brain arose when he was on a bus going to school, sitting on a sloping furry seat where a penny actually moved uphill! This puzzled him, and Hinton is a man who likes puzzles, especially around how the brain works. What drove him was the simple fact that the brain was, to a large degree, a ‘black box’.
In California he worked with connectionists to build networks of artificial neurons. The brain has a layered structure, and layered artificial networks began to be constructed. NETtalk was an early text-to-speech neural network; these layered networks improved and progress was steady. But computing power and training data were needed for more substantial advances.
Hinton's research has been pivotal in the development of neural networks and machine learning. His work in the 1980s and 1990s on backpropagation, a method for training artificial neural networks, was groundbreaking. Alongside colleagues Yann LeCun and Yoshua Bengio, Hinton is credited with the development of deep learning techniques that have led to significant advances in technology, particularly in fields such as computer vision and speech recognition.
Backpropagation
Backpropagation was set out in the paper by Rumelhart, Hinton and Williams, Learning representations by back-propagating errors (1986). You can climb a hill by feeling around with your foot for the steepest direction, stepping that way, and repeating until you reach the top. Similarly on the descent, you feel around for the steepest step down and on you go. Gradient descent works the same way: it tweaks the network's weights to lower the error rate, layer by layer. If the landscape has lots of little peaks, the task is more complex, but the method still works, and it can be used for sophisticated computer learning. Its technique, the backward propagation of errors, allows multi-layered neural networks to be trained efficiently, so that deep neural networks do well in noisy, error-prone areas like speech or image recognition.
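The idea above can be sketched in a few lines of code: a forward pass computes the network's output, the error is propagated backwards layer by layer, and each weight is nudged downhill. This is a minimal illustration in the spirit of the 1986 paper, not its actual experiments; the XOR task, layer sizes, learning rate and iteration count are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a problem a single-layer perceptron famously cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights for a 4-unit hidden layer and a 1-unit output layer.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # Forward pass: compute the network's output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: propagate the error, layer by layer.
    d_out = (out - y) * out * (1 - out)   # output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)    # hidden-layer error signal

    # Gradient-descent step: tweak each weight downhill.
    lr = 1.0
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Run it and the loss falls steadily: the 'feeling for the steepest step down' is exactly the weight updates in the last two lines.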
Deep learning
Neural networks and backpropagation have had innumerable successes. NETtalk started by babbling, then progressed to almost human-like speech. Stock market prediction was another early application, and self-driving cars benefited in the famous DARPA Grand Challenges of 2004 and 2005. This work has been essential for the progress of deep learning.
With the internet, compute and data became plentiful, and in 2012 the ImageNet competition, which put convolutional neural nets to the test, was easily won by Hinton, Ilya Sutskever and Alex Krizhevsky. Their paper ImageNet classification with deep convolutional neural networks (2012) changed AI forever.
Baidu, Google, DeepMind and Microsoft approached the group, so Hinton set up an auction in a casino at Lake Tahoe. Bids came in over several days and, at $44 million, Hinton chose Google. In retrospect, it was a snip. Other companies then began to build teams, and the networks and data sets got bigger. DeepMind staked $1 million in prize money that its system could beat a named master at Go. AlphaGo, trained first on human matches, then played itself millions of times in a process of self-supervised, reinforcement learning. It got good, very good.
Brains
Hinton, as a psychologist, has remained interested in the inner workings and capabilities of the black box. Since quitting his job at Google in 2023 he has become fascinated again with real brains. Our view of the brain as an inner theatre is, he thinks, wrong.
He denies the existence of qualia, the subjective, individual experiences of sensations and perceptions: the inner, private experiences felt by a person when they encounter sensory stimuli, like the redness of a rose, the taste of honey, or the pain of a headache. Qualia are often used in the philosophy of mind to explore the nature of consciousness and the mind-body problem, and the concept poses questions about how and why certain physical processes in the brain give rise to subjective experiences that are felt in a particular way. For instance, why does light of the wavelength perceived as red feel the way it does? Qualia are inherently private and subjective, making them difficult to fully describe or measure, so they are often cited in arguments against purely physical explanations of consciousness.
Thomas Nagel, for example, in his seminal paper What Is It Like to Be a Bat? (1974), argued that there is something that it is like to experience being a bat, which is inaccessible to humans; these experiences are ‘qualia’. He emphasizes that an organism has a point of view and that the subjective character of experience is a key aspect of mind. David Chalmers is a more contemporary philosopher of mind, well known for discussing the "hard problem" of consciousness, which directly relates to qualia. He argues that physical explanations of brain processes do not fully account for how subjective experiences occur, indicating the mysterious nature of qualia. Daniel Dennett, a critic of the traditional concept of qualia, is also pivotal to the debate: he argues against the notion of qualia as ineffable, intrinsic, private, and directly apprehensible properties of experience, challenging both their philosophical utility and their very existence.
Hinton also has interesting views on AI and creativity. AlphaGo's famous Move 37 was ‘intuitive’; for Hinton, it was creative. We have 100 trillion synapses; an LLM has far fewer, at around 1 trillion connections, yet LLMs are good at seeing similarities, even analogies, across more knowledge than any one person holds, and that, he argues, is creativity.
Hinton has a computational model of the brain, seeing it as driven by internal models that are inaccessible to introspection but predictive and Bayesian in nature. This has led him to speculate on the possibility of a mortal computer, combining brain neurons with technology.
Critique
Hinton's approach, particularly with the development of backpropagation and deep learning, has often been critiqued for lacking biological plausibility. Critics argue that the brain does not seem to learn in the same way that backpropagation algorithms do. For example, the human brain appears to employ local learning rules rather than the global error minimization processes used in backpropagation. Despite these criticisms, Hinton and his colleagues have made efforts to draw more connections between biological processes and artificial neural networks. Concepts such as capsules and attention mechanisms are steps towards more biologically plausible models. Furthermore, the success of deep learning in practical applications suggests that while the methods may not be biologically identical, they capture some essential aspects of intelligent processing.
Influence
Geoffrey Hinton's views on the brain, as reflected in his work on neural networks and AI, have been both groundbreaking and controversial. While there are valid critiques regarding biological plausibility, computational efficiency, interpretability, and societal implications, Hinton's contributions have undeniably advanced the field of AI. His work continues to profoundly influence artificial intelligence, inspiring and challenging researchers to develop more sophisticated, efficient, and ethical AI systems. His research has helped to propel neural networks to the forefront of AI technology, leading to practical applications that are used by millions of people daily.
SEE ALSO PODCAST ON CONNECTIONISTS
https://greatmindsonlearning.libsyn.com/gmols6e34-connectionists-with-donald-clark-0