AI is unique among technologies in that it induces speculation falling little short of religious fervour. Elon Musk and Stephen Hawking, no less, have made the case for AI being an existential threat, a beast that needs to be tamed. On the other side, in my view more level-headed thinkers, such as Steven Pinker and many practitioners who work in AI, claim that much of this is hyperbole.
The drivers behind such religiosity are, as Hume observed in the 18th century, a mixture of our:
1) fears, hopes and anxieties about future events
2) tendency to magnify
From the Greeks' Promethean myth, through its resurrection in Mary Shelley's 'Frankenstein' in the 19th century, to a century of film from 'Metropolis' onwards, the perceived loss of human autonomy has fuelled our fears and anxieties about technology. The movies have tended to draw on the existing fears of each age: communists, crime, nuclear war, alien invasions. Y2K was a bogus fear; the world suffered no armageddon. So let's not fall for current fears.
The tendency to magnify shows itself in the exaggeration around exponentialism, the idea that things will proceed exponentially, without interruption, until disaster ensues. Toby Walsh, an AI researcher, warns us not to accept too readily the myth of exponential growth in AI. There are many brakes on progress, from processing power to the limitations of backpropagation. Progress will be slower than anticipated.
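The difference between unchecked exponentialism and growth with brakes is easy to see numerically. The sketch below is purely illustrative (the growth rate and carrying capacity are made-up numbers, not a model of AI progress): an exponential curve and a logistic curve, which starts out identical but saturates as constraints bite.

```python
import math

def exponential(t, r=0.5):
    """Unconstrained growth: keeps doubling forever."""
    return math.exp(r * t)

def logistic(t, r=0.5, k=100.0):
    """Growth with a carrying capacity k -- the 'brakes' on progress."""
    return k / (1 + (k - 1) * math.exp(-r * t))

# Early on the two curves are nearly indistinguishable,
# which is exactly why early trends invite extrapolation.
for t in (0, 5, 10, 20, 30):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

By t = 30 the exponential curve has exceeded a million while the logistic curve has flattened out below 100; the same early data is consistent with both.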
The prophets of doom seem to ignore the fact that it is almost inconceivable that we won't anticipate the problems associated with autonomy, then regulate and control them with sensible engineering solutions.
The airline industry is one of the wonders of our age. Most commercial airplanes are essentially robots that switch to autopilot as low as 200 feet, then fly and land without much human intervention. Security, enhanced by face recognition, allows us to take international flights without speaking to another human being. Soaked in AI and automation, the industry's safety record is astounding. Airplanes have got safer because of AI, not in spite of AI. Similarly, with other applications of AI we will anticipate problems and engineer solutions that are safe. But there are several specific tendencies that mirror religious fervour that we must be aware of:
Anthropomorphism
AI is not easy - it's a hard slog. I agree with Pinker when he says that being human is a coherent concept but there is no real coherence in AI. Even if we imagine a coherent general intelligence, there is no reason to assume that AI will adopt attitudes that we, as humans, have accumulated over 2 million years of evolution. We tend to attribute human qualities to the religious domain, whether God, saints or our binary moral constructs: God/Devil, Saint/Sinner, Good/Evil, Heaven/Hell. These moral constructs are then applied to technology, despite the fact that there is no consciousness, no self-awareness and no 'intelligence', a word that often misleads us into thinking that AI has thoughts. Blinded by the word 'intelligence' we anthropomorphise, transposing our human moral schemas onto indifferent technology. So what if IBM Watson won at Jeopardy and AI systems have triumphed at Go and poker - the AI didn't know it had won or triumphed.
Prophecy
Another sign of this religious fervour is 'prophecy'. There's no end of forecasts and extrapolations, best described as prophecies, about future progress and fears in AI. The prophecies, as they are in religion, tend to be about dystopian futures. Pestilence and locusts have been replaced by nanotechnology and micro-drones. Kurzweil, that high priest of hyperbole, has taken this to another level, with his diagrammatic equivalent of the rapture: the singularity.
Singularity
The pseudo-religious idea of the 'singularity' is the clearest example of religious magnification and hyperbole. Just as we invented religious ideas such as omniscience, omnipresence and omnipotence, we draw exponential curves and imagine that AI climbs towards similarly lofty heights. We create a technical Heaven, or for some Hell. There will be no singularity. AI is an idiot savant, smart only in narrow domains but profoundly stupid. It's only software.
End-of-days
Then there is an 'end of days' dimension to this dystopian speculation: the idea that we are near the end of our reign as a species and that, through our own foolishness and blindness to the dangers of AI, we will soon face extinction.
There is no God
One fundamental problem with all of this pseudo-religious fervour is the simple fact that AI, unlike our monotheistic God, is not a singular idea. It has no formal and precise definition. AI is not one thing, it is many things - simply a set of wildly different tools. In fact, many things that people assume are AI, such as factory robots, have nothing to do with AI, as is the case with many other software applications that are just statistical analysis, data mining or some other well-known technique. Algorithms have been around since Euclid, 2,300 years ago. It has taken over two millennia of maths to get here. Sure, we have data flooding in from the web, but that's no reason to jump two by two onto some imaginary Ark to save ourselves and all organic life. Believe me, there are many worse dangers - disease, war, climate change, nuclear weapons.
Blinded by bias
The zealotry of the technophobes is akin to the fanatics in The Life of Brian. What has AI ever done for us? Google search, accelerated medical research, identifying disease outbreaks, spotting melanomas, diagnosing cancer, reading scans and pathology slides, self-driving cars... let's see. Let's not see AI as a Weapon of Math Destruction and focus relentlessly on accusations of bias that turn out to be the same few second-hand case studies, endlessly recycled. All humans are biased, and while bias may exist in software or data, that form of mathematical bias can be mathematically defined and dealt with, unlike our many human biases, which Daniel Kahneman, who won the Nobel Prize for his work on bias, described as 'uneducable'. Machine learning and many, many other AI techniques necessarily depend on making mistakes as they optimise solutions. This is how the technology works, learns and solves problems. Remember - it's only software.
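The claim that mathematical bias can be defined and measured is easy to make concrete. Below is a minimal sketch of one common fairness metric, the demographic parity difference: the gap in positive-prediction rates between groups. The predictions and group labels are entirely hypothetical, invented for illustration; real fairness auditing involves many such metrics and trade-offs between them.

```python
def demographic_parity_difference(predictions, groups):
    """Gap in positive-prediction rates between groups.

    A value near 0 means the model selects all groups at similar
    rates; once the gap is quantified, it can be monitored and
    corrected - unlike a vague accusation of 'bias'.
    """
    rates = {}
    for g in set(groups):
        selected = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Hypothetical loan-approval predictions (1 = approved) for two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # A: 3/4, B: 1/4 -> 0.5
```

The point is not that this one number settles anything, but that a defined metric turns 'the model is biased' into a measurable, fixable engineering property.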
Conclusion
We need to take the 'idiot savant' description seriously. Sure, there are dangers. Almost all technology has a calculus of upsides and downsides. Cars mangle, kill and maim millions, yet we still drive. The greatest danger is likely to be military or bad-actor use of weaponised AI. That we should worry about and regulate. AI is really hard and takes time, so there's time to solve the safety issues. All of those dozens of ethical groups that are springing up like weeds are largely superfluous, apart from those addressing autonomous weapons. There are plenty of real and present problems to be solved - AI is not one of them. Let's accept that AI is like the god Shiva: it can create and destroy. Don't let it be seen solely as a destructive force; let's use it creatively, in making our lives better, especially in health and education.