Friday, January 17, 2020

Algorithms to Live By: The Computer Science of Human Decisions by Brian Christian & Tom Griffiths

Very few books combine cognitive science with AI. This is therefore a rare treat, a book that bridges AI with real life, computers with brains, algorithms with everyday decisions.
An algorithm is just a set of steps used to solve a problem, whether expressed as a piece of software or a bit of maths. Algorithms were around long before computers; humans have used them since the Stone Age and earlier. This book is all about human algorithm design, applied to the problems we encounter in daily life. In that respect it is theoretical but eminently practical.
Thinking algorithmically is what we do; thinking more carefully with better algorithms is the gift that maths and AI can deliver. The melding of mind and machine is well underway, and this book gives us concrete examples of how each dialectically informs the other. There is some great work being done using computers and AI to unravel how the mind works. The idea that our minds are Bayesian engines is a lively area of current research. Rather than adhering to some sort of dualist position, many now see mind and machine as usefully informing each other. Algorithms happen to operate in both.
Every decision is a sort of prediction, and the chapter on ‘overfitting’ is quite simply brilliant. Overfitting is a well-known problem in AI: a model with too many factors fits its data too closely and, almost counterintuitively, produces less accurate, even wildly inaccurate results – and, in human terms, behaviour. For example, our brains evolved algorithms for seeking out scarce salt, sugars and fats, but this leads us to overeat when these are available in abundance. This is our brain overfitting.
The authors explain each type of algorithm, and each problem, in both human and mathematical terms. In the case of overfitting, the wise warning is that data, in life, is almost always noisy and messy, which makes overfitting more likely. We need to stop idolising data and learn to be careful. This is exactly what AI does through a battery of statistical and coding techniques – cross-validation, regularisation and early stopping. The beauty of the book, perfectly illustrated in this chapter, is how it explains why we tend to think ‘more is more’ when ‘less is more’, and why we regularly run with fads – we overfit. The authors recommend a bit of conservatism: learn a little from history, be a little cautious. For example, don’t overwork learning materials, as students want the ‘need to know’ stuff, not the detail. Less is usually more.
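The book makes this point verbally, but it can be sketched in a few lines of code. The example below is my own minimal illustration (the model degrees, noise level and hold-out split are all my assumptions, not the authors’): a simple straight-line model and a wildly flexible high-degree polynomial are both fit to noisy data, then scored on held-out points – exactly the cross-validation idea mentioned above. The flexible model chases the noise and does far worse on data it hasn’t seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy observations of a simple underlying linear trend
x = np.linspace(0, 1, 20)
y = 2 * x + 1 + rng.normal(0, 0.3, size=x.size)

# Cross-validation: hold some points back instead of fitting everything
train, test = slice(0, 15), slice(15, 20)

def holdout_error(degree):
    """Fit a polynomial of the given degree on the training points,
    then measure its mean squared error on the held-out points."""
    coeffs = np.polyfit(x[train], y[train], degree)
    pred = np.polyval(coeffs, x[test])
    return float(np.mean((pred - y[test]) ** 2))

simple = holdout_error(1)    # matches the true linear structure
complex_ = holdout_error(9)  # enough freedom to chase the noise
print(f"degree 1 held-out error: {simple:.3f}")
print(f"degree 9 held-out error: {complex_:.3f}")
```

The ‘more is more’ model always fits the training data more tightly; the held-out score is what exposes it, which is why cross-validation works as a guard against overfitting.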
Other algorithms are given similar human and statistical treatments. Optimal stopping – when to stop exploring options and commit – turns out to obey the 37% rule. Sorting turns out to be more complex than you might think. Memory makes rather clever use of caching algorithms, as we need relevant stuff to be at the forefront of our minds. Scheduling, or time management – that’s a perennial. Bayes’s Rule is a big one – our brains are, essentially, Bayesian engines. Networks have to overcome congestion and latency – it turns out the bigger the network, the more reliable it becomes. Finally there are the complications of game theory in human interaction.
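The 37% rule for optimal stopping is easy to check by simulation. The sketch below is my own illustration, not the authors’ code (the candidate count, trial count and seed are arbitrary assumptions): look at the first 37% of candidates without committing, then take the first one better than everything seen so far. Run many times, this strategy picks the very best candidate in roughly 37% of trials, matching the theory.

```python
import random

def secretary(n, ratio=0.37, trials=20000, seed=1):
    """Simulate the 37% rule: reject the first `ratio` of n candidates,
    then accept the first candidate better than everyone seen so far.
    Returns the fraction of trials in which the overall best was chosen."""
    rng = random.Random(seed)
    cutoff = int(n * ratio)
    wins = 0
    for _ in range(trials):
        ranks = rng.sample(range(n), n)  # random order; higher rank = better
        best_seen = max(ranks[:cutoff]) if cutoff else -1
        # First later candidate beating the benchmark, else stuck with the last
        choice = next((r for r in ranks[cutoff:] if r > best_seen), ranks[-1])
        wins += choice == n - 1  # did we land the overall best?
    return wins / trials

print(round(secretary(100), 3))  # hovers near the theoretical ~0.37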
This book takes a bit of reading, as it deals with both the statistical and the human dimensions, but it is worth the effort for unpacking both how you think and how AI can help us think and make better decisions. I wouldn’t ‘live’ by it, but it’s good enough to read and be influenced by. It may change how you deal with the world.
