The author makes it clear that what he is about to discuss is unlikely to happen any time soon, perhaps within this century, perhaps not. He does, however, think it will happen – the creation of an intelligence greater than ourselves – a superintelligence.
He starts, reasonably, with a potted history of AI and the current state of predictions about progress. Multiple routes to superintelligence are then laid out: biological, technical and organisational. All have trade-offs, whether whole brain emulation, brain-computer interfaces, networks or organisational collective intelligence; some are more incremental, others offer a faster route to genuinely ‘super’ intelligence.
So how will this superintelligence manifest itself, how quickly will it explode into disruption, and is it a winner-takes-all endgame? This is where the book comes into its own. His thought experiments on paths and possible outcomes are outstanding, along with a detailed treatment of the problem of control. Chapters 7, 8 and 9 are the meat in the sandwich, tackling where this could take us and what we can do to control it.
The book is less confident when it turns to economic issues such as employment, welfare and social consequences, but comes back to the boil on values. I like his phrase ‘philosophy with a deadline’. This debate is full of clichés and exponential thinking about technology that doesn’t yet exist. I found it a little wanting on the limits of AI itself, but it has breadth when it comes to considering all the arguments. Whatever you think about the concept of superintelligence, its timeframe, or even whether it will happen at all, this book pushes the boundaries in the sheer detail he amasses on each topic.
Reading the book is like going off on a series of long walks, many of which end in dead ends, but wander you will, as there is no single path but many paths to many outcomes. Unlike most reviewers, I tip my hat to the sheer effort he makes in covering all the possible paths on each topic, not settling for the obvious or easy. I also enjoyed his clear respect for Eliezer Yudkowsky, another AI theorist.
For me the book is too speculative, as AI is neither as good as we think it is nor nearly as bad as we fear. It feels more like an exercise in theory than in practice. You really do need a high tolerance for abstract thinking to get the most from this book.
I did like his afterword. Although dismissive of an impending AI winter, he does fear an ‘AI safety winter’, in which ethical theatre, much of it amateurish, simply prevents beneficial work from being done in AI. That, I fear, is more likely than AI doing much harm.
First published in 2014, an eon ago in AI years, it was updated in 2016 but remains topical, as good ideas are good ideas. Bostrom is an academic at the Oxford Martin School, and the book has an academic style, making it a read that needs ‘effort’, as technical and philosophical issues do if they are to be credible… but it is well worth the effort.