Unfortunate title, as O’Neil’s supposed WMDs are as bad as
Saddam Hussein’s mythical WMDs, the evidence similarly weak, sexed up and cherry picked. This is the go-to book for those who want to stick it to AI by
reading a pot-boiler. But rather than taking an honest look at the subject, O’Neil
takes the ‘Weapons of Math Destruction’ line far too literally, and unwittingly
re-uses a term that has come to mean exaggeration and untruths. The book has
some good arguments and passages but the search for truth is lost as she tries too hard to be a clickbait contrarian.
Bad examples
The first example borders on the bizarre. It concerns a teacher who is supposedly sacked because an algorithm said she should be sacked. Yet the true cause, as O'Neil herself reveals, was other teachers cheating on behalf of their students in tests. Interestingly, they were caught through statistical checking, as too many erasures were found on the test sheets. That's more (wo)man than machine.
The second is even worse. Nobody really thinks that US College
Rankings are algorithmic in any serious sense. The ranking models are quite
simply statistically wrong. The problem is not the existence of fictional WMDs
but schoolboy errors in the basic maths. It's a straw man, as the rankings rely on
subjective surveys and proxies, and everybody knows they are gamed. Malcolm
Gladwell did a much better job in exposing them as self-fulfilling exercises in
marketing. In fact, most of the problems uncovered in the book, if one
does a deeper analysis, are human.
Take PredPol, the predictive policing software. Sure, it has
its glitches, but the advantages vastly outweigh the disadvantages, and the
system, and its use, evolve over time to eliminate the problems. I could go on,
but the main problem with the book is this one-sidedness. Most technology has a
downside. We drive cars, despite the fact that well over a million people die
gruesome and painful deaths every year in car accidents. Rather than tease
out the complexity, even comparing upsides with downsides, we are given over-simplifications. The proposition that all
algorithms are biased is as foolish as the idea that all algorithms are free from bias. This
is a complex area that needs careful thought and the real truth lies, as usual,
somewhere in-between. Technology often has this cost-benefit feature. To focus
on just one side is quite simply a mathematical distortion, which is what
O’Neil does in many of her cases.
The chapter headings are also a dead giveaway - Bomb Parts, Shell
Shocked, Arms Race, Civilian Casualties, Ineligible to serve, Sweating Bullets,
Collateral Damage, No Safe Zone, The Targeted Civilian and Propaganda Machine.
This is not 9/11 and the language of WMDs is hyperbolic - verging
on propaganda itself.
At times O’Neil makes good points on ‘data’ – small data
sets, subjective survey data and proxies – but this is nothing new and features
in any 101 stats course. The mistake is to pin the bad data problem on
algorithms and AI – that’s often a misattribution. Time and time again we get straw
men in online advertising, personality tests, credit scoring, recruitment, insurance and social media. Sure, problems exist, but posing marginal errors as a global threat is a tactic that may sell books but is hardly objective. In this sense, O'Neil plays the very game she professes to despise - bias and exaggeration.
The final chapter is where it all goes a bit weird, with the
laughable Hippocratic Oath. Here’s the first line in her imagined oath: “I will
remember that I didn’t make the world, and it doesn’t satisfy my equations” – a line
worthy of Donald Rumsfeld. There is, however, one interesting idea – that AI be
used to police itself. A number of people are working on this and I think it is
a good example of seeing technology realistically, as being a force for both
good and bad, and that the good will triumph if we use it for human good.
Conclusion
This book relentlessly lays the blame at the door of AI for
all kinds of injustices, but mostly it exaggerates or fails to identify the real
root causes. The book is readable, as it is lightly autobiographical, and does
pose the right questions about the dangers inherent in these technologies.
Unfortunately, it provides exaggerated analyses and rarely the right answers.
Let us remember that Weapons of Mass Destruction turned out to be lies, used to
promote a disastrous war. They were sexed up through dodgy dossiers. So it is
with this populist paperback.