Wednesday, November 10, 2021

McCulloch & Pitts - Neural nets

Warren Sturgis McCulloch (1898–1969), a neurophysiologist, and Walter Pitts (1923–1969), a logician and mathematician, are best known for their influential paper on neural network computing, A Logical Calculus of the Ideas Immanent in Nervous Activity (1943). They were the first to create a mathematical model of a neuron, inspired by the biological neuron. This led to the development of ever more sophisticated neuron and neural network models and to their astounding success in artificial intelligence.

McCulloch took Pitts in to live with his family when Pitts was homeless. Among Pitts's eccentricities was a refusal to ever use or sign his own name, which cost him honorary and advanced degrees and promotion. He later burned much of his unpublished research and retreated into social isolation.

Neural nets

They started with the known physiology of neural networks (or 'nets' in their terminology). The nervous system is a net of neurons, each with a soma and an axon; synapses always connect the axon of one neuron to the soma of another. At any moment a neuron has a threshold, which excitation must exceed to initiate an impulse. Their breakthrough was to represent this in terms of propositional logic, replicating in mathematics what happens in the brain.

They took as their inspiration the idea, first introduced by Leibniz, that activating neurons can create propositions about the real world. Based on the real, biological neuron, their model collects information (dendrites), determines which inputs get preference (synapses) and processes them (soma) to produce an output (axon). It takes binary inputs of two types, excitatory or inhibitory, and produces a binary output. They introduce the idea of logical neurons and neural networks. Boolean functions such as AND, OR and NOT can be 'represented' by these threshold neurons. This model is known as the Linear Threshold Gate (LTG).
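The behaviour described above can be sketched in a few lines of code. This is a minimal illustration, not the authors' notation: a unit fires if no inhibitory input is active (the 1943 model treats inhibition as absolute) and the count of active excitatory inputs meets the threshold. The function name `mcp_neuron` is my own.

```python
def mcp_neuron(excitatory, inhibitory, threshold):
    """McCulloch-Pitts unit: outputs 1 iff no inhibitory input is
    active and enough excitatory inputs are active to meet the threshold."""
    if any(inhibitory):  # absolute inhibition: any active inhibitor vetoes firing
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Two excitatory inputs with threshold 2 behave like AND:
print(mcp_neuron([1, 1], [], 2))   # -> 1 (both inputs on)
print(mcp_neuron([1, 0], [], 2))   # -> 0 (threshold not reached)
print(mcp_neuron([1, 1], [1], 2))  # -> 0 (inhibited)
```

Changing only the threshold changes the proposition the unit computes, which is the core of the logical reading of the neuron.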

This is similar to the logic gates of a computer: AND gates (all inputs on) and OR gates (at least one input on). The neuron switches on when its threshold is passed, so these artificial neurons can be switched on by different combinations of inputs. They can also be switched off, as NOT gates, which replicates inhibitory synapses in real brains. When networked, these neural nets do what computers do: compute! Note that at this stage neural networks do not learn, but the model provides a structure for later models that can.
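As a sketch of how the three gates fall out of one threshold mechanism, the following uses a weighted-sum formulation of the LTG (the weights and thresholds here are illustrative choices, not taken from the paper):

```python
def ltg(inputs, weights, threshold):
    """Linear threshold gate: fires iff the weighted input sum
    reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def AND(a, b): return ltg([a, b], [1, 1], 2)   # fires only if both inputs are on
def OR(a, b):  return ltg([a, b], [1, 1], 1)   # fires if at least one input is on
def NOT(a):    return ltg([a], [-1], 0)        # a negative (inhibitory) weight inverts

print(AND(1, 1), OR(0, 1), NOT(1))  # -> 1 1 0
```

Since {AND, OR, NOT} is functionally complete, networks of such gates can compute any Boolean function, which is the sense in which these nets "do what computers do".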

They went on to build on this theory to explain how universals can be known, i.e. forms independent of perceived qualities, in How We Know Universals: The Perception of Auditory and Visual Forms (1947).

Their paper had a profound effect on computer science and, in particular, artificial intelligence. It also tackles an important philosophical issue by giving a computational model for the mind/body problem, the mind being a product of the brain working as a neural network.


Their paper was the first modern computational theory of mind and brain. It gives us a propositional logic model that explains how neural structures work and thus a basis for exploring how they learn. It is the precursor of later advances in the computational model of the brain and artificial intelligence. It led to the invention of the perceptron by Rosenblatt in 1957 and on to work on layered neural networks, backpropagation and deep learning.
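To show what the later, learning models added that McCulloch-Pitts units lacked, here is a minimal sketch of Rosenblatt's perceptron rule: the weights and bias are nudged toward each misclassified example until a separating threshold is found. The function name and training data are my own illustration.

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """Rosenblatt's rule: for each misclassified example, move the
    weights and bias in the direction of the error.
    `samples` is a list of (inputs, target) pairs with binary targets."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
            err = target - y  # -1, 0 or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn OR, which is linearly separable, from examples:
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
```

The key difference from the 1943 model is that the weights and threshold are no longer fixed by hand; they are found from data, which is the thread that runs on through backpropagation to deep learning.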


Pitts, W. and McCulloch, W.S., 1947. How we know universals: the perception of auditory and visual forms. The Bulletin of Mathematical Biophysics, 9(3), pp.127-147.

McCulloch, W.S. and Pitts, W., 1943. A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5(4), pp.115-133.
