Wednesday, November 10, 2021

Rosenblatt - Perceptrons that learn

Frank Rosenblatt (1928 – 1971) was an American psychologist who, while at Cornell, built the first ‘perceptron’ in 1960: a neural network that could learn by trial and error, simulating aspects of human thought and learning. This work was to develop into the field of neural networks and deep learning. Another of his interests was astronomy; he had his own observatory and was involved in SETI, the search for extraterrestrial intelligence. As a progressive political activist he took part in the Vietnam War protests and political campaigns.

He also had a deep interest in the biology of learning. He experimented with injecting brain extracts from trained rats into the brains of novice rats and observed some transfer of learned behaviour, but he showed that the effect was tiny.

Perceptron

The Perceptron was the first learning machine, as described in his Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms (1962). The name comes from Rosenblatt’s use of the Perceptron to perceive, or identify, objects in vision and speech.

As software was too slow in the late 1950s, he literally built a physical machine, which can now be found in the Smithsonian. The weights were implemented as variable resistors, and weight learning was performed by electric motors that turned the physical dials on those resistors. A camera provided the input through a 20×20 grid of photocells, giving a 400-pixel image.

The aim was to build a learning algorithm. If the perceptron has a task, such as identifying a dog, it learns from its mistakes. If it fires when it shouldn’t, the weights of the active features are reduced; if it fails to fire when it should, they are increased. By ‘learning’ from its errors in this way, it eventually comes to recognise a dog.
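A minimal sketch of this error-driven update rule in Python. The threshold unit, learning rate, and the OR-function training set below are illustrative assumptions for a small, linearly separable problem, not Rosenblatt’s original setup:

```python
# Sketch of the perceptron learning rule on a toy problem.
# Training data and learning rate are illustrative assumptions.

def train_perceptron(samples, labels, epochs=10, lr=0.1):
    n = len(samples[0])
    w = [0.0] * n          # one weight per input feature
    b = 0.0                # bias (acts as an adjustable threshold)
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            fired = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = target - fired   # +1: should have fired; -1: fired wrongly
            # Nudge each active weight toward the correct behaviour.
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Learn the logical OR function (linearly separable, so this converges).
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 1]
w, b = train_perceptron(samples, labels)
print(w, b)
```

After a few passes over the data the weights settle on a line that separates the positive examples from the negative one, which is exactly the class of problems a single perceptron can solve.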

Unfortunately, Marvin Minsky and Seymour Papert wrote a book, Perceptrons (1969), that poured cold water over the idea, with many examples of things a single-layer Perceptron could not learn. One was a specific logical function, XOR, the exclusive-OR function. They turned out to be right about the single layer, but a multi-layered perceptron network could solve the XOR problem. Nevertheless, the book had the effect of stopping funding down the neural network route, and it wasn’t until the 1980s that interest and funding returned. Rosenblatt, unfortunately, didn’t live to see the fruitful research based on his insights return, as he died in a boating accident shortly after Minsky and Papert’s book was published.
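To make the XOR point concrete, here is a hand-wired sketch of a two-layer network of threshold units computing XOR. The weights are chosen by hand rather than learned, since Rosenblatt’s single-layer rule gives no way to train the hidden units:

```python
# XOR with a two-layer network of threshold units, weights set by hand.
# A single perceptron cannot do this: no straight line separates
# {(0,1), (1,0)} from {(0,0), (1,1)} in the input plane.

def step(x):
    return 1 if x > 0 else 0

def xor(a, b):
    h1 = step(a + b - 0.5)      # hidden unit 1 computes OR(a, b)
    h2 = step(a + b - 1.5)      # hidden unit 2 computes AND(a, b)
    return step(h1 - h2 - 0.5)  # output: OR and not AND, i.e. XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))  # prints 0, 1, 1, 0
```

The hidden layer re-maps the inputs into a space where the classes are linearly separable; training such hidden layers automatically had to wait for the backpropagation work of the 1980s.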

Critique

Minsky, Papert and Roger Schank were among many who thought that real problems were unlikely to be solved by strict adherence to propositional logic. They thought that ‘thought’ was more complex and less well defined than the logic models allowed; that was the crux of their argument, that thought was ‘scruffier’. In practice, AI took forward both the neural network approach and Schank’s scripts, each making separate progress.

Influence

Deep learning emerged from this one invention and has now become central to artificial intelligence.

