Wednesday, November 10, 2021

Rumelhart & Hinton - Backpropagation: learning from error

David Rumelhart was a psychologist at the University of California, San Diego, who researched the formal representation of cognition. He invented backpropagation and recognised, with James McClelland, that Parallel Distributed Processing was relevant to understanding cognition, opening up the possibility of models of neural processing.

Geoffrey Everest Hinton is a British-Canadian cognitive psychologist and computer scientist with significant achievements in artificial intelligence. These include backpropagation for training multi-layer neural networks, deep image recognition and deep learning, although Hinton attributed the idea for backpropagation to Rumelhart.

He has some notable ancestors: he is the great-great-grandson of the logician George Boole, and his middle name comes from another relative, George Everest, after whom Mount Everest is named. He works at both Google and the University of Toronto. He has strong views on stopping AI being used by the military and is fearful that intelligent systems hold dangers for our species.

Backpropagation

In their 1986 paper, Learning representations by back-propagating errors, Rumelhart, Hinton and Williams showed how networks with hidden layers can learn. Picture climbing a hill: you feel around with your foot for the steepest direction, take that step, and on you go to the top. Similarly on the descent, you feel around for the steepest step down. Gradient descent treats a network's error as such a landscape: it tweaks the weights to lower the error, and backpropagation supplies the direction of each step by passing the error signal backwards from the output, layer by layer. But suppose you are walking a mountain range with many little peaks: the task is more complex, because the walk can strand you on a minor summit rather than the true one; in error terms, the network can get stuck in a local minimum. Backpropagation can solve the XOR problem identified by Minsky and Papert, which a single-layer perceptron cannot. Backpropagation is sophisticated learning.
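
To make the weight-tweaking concrete, here is a minimal sketch of backpropagation solving XOR: a 2-input, 2-hidden-unit, 1-output sigmoid network trained by gradient descent on squared error, written in Python with NumPy. The layer sizes, learning rate and step count are illustrative assumptions of mine, not taken from the 1986 paper.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR inputs and targets: the problem a single-layer perceptron cannot solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Random initial weights and zero biases for the hidden and output layers
W1 = rng.normal(0, 1, (2, 2)); b1 = np.zeros((1, 2))
W2 = rng.normal(0, 1, (2, 1)); b2 = np.zeros((1, 1))
lr = 0.5  # learning rate: the size of each downhill step

for step in range(20000):
    # Forward pass: compute the network's output
    h = sigmoid(X @ W1 + b1)    # hidden-layer activations
    out = sigmoid(h @ W2 + b2)  # output-layer activations

    # Backward pass: propagate the error derivative back through each layer
    d_out = (out - y) * out * (1 - out)  # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)   # error signal at the hidden layer

    # Gradient descent: step each weight downhill, w <- w - lr * dE/dw
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3))  # approaches [0, 1, 1, 0] on a good run

With an unlucky initialisation a network this small can settle on one of those little peaks and valleys (a local minimum) and fail to learn XOR; re-running with a different random seed usually cures it.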

Critique

Yann LeCun had also discovered backpropagation in France at around the same time. Indeed, a number of other independent sources for the approach were later found, including Paul Werbos's 1974 thesis. As Pedro Domingos remarked in The Master Algorithm, "the history of machine learning itself shows why we need learning algorithms", to find relevant papers in the literature! It was also initially thought that just adding layers to create larger networks would lead to superintelligence. Unfortunately, learning breaks down as the layers stack up: the error signal fades as it is propagated back through many layers, so the early layers barely learn. Then again, more computing power, better algorithms and huge amounts of data have forged a way forward, for example with autoencoders.
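
That breakdown has a simple arithmetic core. The sigmoid's derivative is at most 0.25, so each extra layer multiplies the backpropagated error signal by a factor of at most 0.25 (times the weight), and the gradient shrinks geometrically: the vanishing gradient problem. A purely illustrative sketch, assuming weights of size around 1:

# Upper bound on the gradient surviving n sigmoid layers (weights of size ~1):
# each layer multiplies in a sigmoid derivative, which never exceeds 0.25.
for n in (1, 5, 10, 20):
    print(n, 0.25 ** n)  # at 20 layers: about 1e-12, far too small to learn from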

Influence

Rumelhart's work on semantic cognition and learning across different domains is still a central area of research in artificial intelligence. Neural networks and backpropagation have had innumerable successes. NETtalk, trained with backpropagation to pronounce English text, started by babbling and progressed to almost human-like speech. Stock market prediction was another, and self-driving cars benefited in the famous DARPA Grand Challenges of 2004 and 2005. His work has been essential to the progress of deep learning.

Bibliography

Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J., 1986. Learning representations by back-propagating errors. Nature. 323 (6088): 533–536.

McClelland, J.L.; Rumelhart, D.E.; and the PDP Research Group, 1986. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 2. Cambridge, MA: MIT Press.

