A slew of organisations have been set up to research and allay fears around AI. The Future of Life Institute in Boston, the Machine Intelligence Research Institute in Berkeley, the Centre for the Study of Existential Risk in Cambridge and the Future of Humanity Institute in Oxford all research and debate the checks that may be necessary to deal with the opportunities and threats that AI brings.
This is hopeful, as we do not want to create a future that contains imminent existential threats, some known, some unknown. This has been framed as a sense-check, but some see it as a duty: they argue that worrying about the annihilation of all unborn humans is a task of greater moral import than worrying about the needs of all those now living. But what are the possible futures?
1. Utopian
Could there not be a utopian future, where AI solves the complex problems that currently face us? Climate change, reducing inequalities, curing cancer, preventing dementia and Alzheimer's disease, increasing productivity and prosperity: we may be reaching a time where science as currently practised cannot solve these multifaceted and immensely complex problems. We already see how AI could free us from the tyranny of fossil fuels with electric, self-driving cars and innovative battery and solar panel technology. AI also shows signs of cracking some serious issues in health, in diagnosis and investigation. Some believe that this is the most likely scenario and are optimistic that we will be able to tame and control the immense power that AI will unleash.
2. Dystopian
Most of the future scenarios represented in culture, whether in science fiction, theatre or movies, are dystopian, from the Prometheus myth to Frankenstein and on to Hollywood movies. Technology is often framed as an existential threat, in some cases, such as nuclear weapons and the internal combustion engine, with good cause. Many calculate that the exponential rate of change will produce, within decades or less, AI that poses a real existential threat. Stephen Hawking, Elon Musk, Peter Thiel and Bill Gates have all heightened our awareness of the risks around AI.
3. Winter is coming
There have been several AI winters, as hyperbolic promises failed to materialise and the funding dried up. From 1956 onwards AI has had its waves of enthusiasm followed by periods of inaction: summers followed by winters. Some also see the current wave of AI as overstated hype and predict a sudden fall, or a realisation that the hype has been blown up out of all proportion to the reality of AI's capability. In other words, AI will proceed in fits and starts and will be much slower to realise its potential than we think.
4. Steady progress
For many, however, it would seem that we are making great progress. Given the existence of the internet, successes in machine learning, huge computing power, tsunamis of data from the web and rapid advances across a broad front of applications resulting in real successes, the summer-winter analogy may not hold. It is far more likely that AI will advance in fits and starts, with some areas advancing more rapidly than others. We have seen this in NLP (Natural Language Processing) and in the mix of technologies around self-driving cars. Steady progress is what many believe is the realistic scenario.
5. Managed progress
We already fly in airplanes that largely fly themselves, and systems all around us are largely autonomous, with self-driving cars an almost certainty. But let us not confuse intelligence with autonomy. Full autonomy that leads to catastrophe, because of willed action by AI, is a long way off. Yet autonomous systems already decide what we buy and at what price, and have the power to outsmart us at every turn. Some argue that we should always be in control of such progress, even slow it down, to let regulation, risk analysis and management keep pace with the potential threats.
6. Runaway train
AI could be a runaway train that moves faster than our ability to restrain or regulate, through restrictions and rules, what needs to be held back or stopped. This is most likely in the military domain. It is like nuclear weapons, whose globally catastrophic use we only just managed to prevent during the Cold War. AI has already moved faster than expected. Google, Amazon, Netflix and AI in finance have all disrupted the world of commerce. Self-driving cars and voice interfaces have leapt ahead in terms of usefulness. It may proceed, at some point, faster than we can cope with. In the past, mechanisation decimated jobs in agriculture; the same is happening in factories and now in offices. The difference is that this may take just a few years to have an impact, as opposed to decades or a century.
7. Viral
One mechanism for the runaway train scenario is viral transmission. Viruses, in nature and in IT, replicate and cause havoc. Some see AI resisting control, not because it is malevolent or consciously wants anything, but simply because it can. When AI resists being turned off, spreads into places we do not want it to spread and starts to do things we don't want it to do, or are not even aware that it is doing – that is the point at which to worry.
8. Troubled times
Some foresee social threats emerging: mass unemployment, serious social inequalities, massive GDP differentials between countries, even technical or wealthy oligarchies, as AI increases productivity and automates jobs but fails to solve deep-rooted social and political problems. The Marxist proposition that Capital and Labour will cleave apart seems already to be coming true. Some economists, such as Branko Milanovic, argue that automation is already causing global inequalities and that Trump is a direct consequence of this automation. Without a reasonable redistribution of the wealth created by the increased productivity AI produces, there may well be social and political unrest.
9. Cyborgs
Many see AI as being embodied within us. Musk already sees us as cyborgs, with AI-enabled access to knowledge and services through smartphones. From wearables, augmented reality and virtual reality to subdermal implantation, neural laces and mind reading, hybrid technology may transform our species. There is a growing sense that our bodies and minds are suboptimal and that, especially as we age, we need to free ourselves from our embodiment, the prison that is our own bodies and, for some, minds. Perhaps ageing and death are simply current limitations. We could choose to solve the problem of death, our final judge and persecutor. Think of your body not as a car that inevitably has to be scrapped, but as a classic car to be loved, repaired and looked after, looking and feeling fine as it ages. Every single part may be replaced, like the ship of Theseus, where every piece of the ship is replaced but it remains, in terms of identity, the same ship.
10. Leisureland
Imagine a world without work. Work is not an 'intrinsic good'. For millions of years we did not 'work' in the sense of having a job or being occupied 9-5, five days a week; it is a relatively new phenomenon. Even in agricultural times, without romanticising that life, there were long periods when not much had to be done. We may have the opportunity to return to such an idyll, but with bountiful benefits in terms of food, health and entertainment. Whether we'll be able to cope with the problem of finding meaning in our lives is another matter.
11. Amusing Ourselves to Death
Neil Postman's brilliantly titled 'Amusing Ourselves to Death' has become the catchphrase for thinking about a scenario whereby we become so good at developing technology that we become slaves to its ability to keep us amused. AI has already enabled consumer streaming technology such as Netflix and a media revolution that at times seems addictive. AI may even be able to produce the very products that we consume. A stronger version of this hypothesis is deep learning that produces systems that teach us to become their pupil puppets, a sort of fake news and cognitive brainwashing that works before we've had time to realise that it has worked, so that we become a sort of North Korea, controlled by the Great Leader that is AI.
12. Benevolent to pets
Another way of looking at control is the 'pet' hypothesis: that we are treated much as we treat our pets, as interesting, even loved companions, but inferior, and kept largely for comfort and amusement. AI may even, as our future progeny, look upon us in a benevolent manner, seeing us as its creators and treating us with the respect we give previous generations, who gifted us their progress. Humans may still be part of the ecosystem, looked after by new species that respect that ecosystem, as it is part of the world they live in.
13. Learn to be human
One antidote to the dystopian hypotheses is a future for AI that learns to become more human, or at least contains relevant human traits. The word 'learning' is important here, as it may be useful for us to design AI through a 'learning' process that observes or captures human behaviour, as sketched below. DeepMind and Google are working towards this goal, as are many others, to create general learning algorithms that can quickly learn a variety of tasks or behaviours. This is complex, as human decision making is complex and hierarchical. This has started to be realised, especially in robotics, where companion robots need to work in the context of real human interaction. One problem, even with this approach, is that human behaviour is not a great exemplar. As the robots in Karel Čapek's famous play 'Rossum's Universal Robots' said, to be human you need to learn how to dominate and kill. We have traits that we may not want carried into the future.
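To make 'learning by observing human behaviour' concrete, here is a deliberately minimal sketch in Python of behavioural cloning: record what a human did in a handful of situations, then act as the human did in the most similar situation. The states, actions and data are entirely hypothetical, and real systems use deep neural networks rather than a lookup of this kind.

# A minimal, purely illustrative sketch of learning from observed human
# behaviour (behavioural cloning). All states, actions and data below are
# hypothetical, invented for illustration only.
from math import dist

# Hypothetical demonstrations: (state, action) pairs captured from a human.
demonstrations = [
    ((0.0, 0.0), "wait"),
    ((0.2, 0.9), "greet"),
    ((0.9, 0.1), "assist"),
]

def cloned_policy(state):
    # Imitate the human: act as they did in the most similar observed state.
    _, action = min(demonstrations, key=lambda d: dist(d[0], state))
    return action

print(cloned_policy((0.25, 0.8)))  # prints "greet"

Even this toy version exposes the problem raised above: the system can only be as good, or as bad, as the human behaviour it observes.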
14. Moral AI
One optimistic possibility is self-regulating AI with moral agency. One approach is to start with a set of moral principles built into the system (top-down), to which the system must adhere. The opposite approach is to allow AI to 'learn' moral principles from the observation of human cases (bottom-up). A third is case-based, where behaviour is regulated by comparison to similar, in-built cases. Alternatively, AI can police itself, with AI that polices other AI through probing, demands for transparency and so on. We may have to see AI as having agency, even being an agent in the legal sense, in the same way that a corporation can be a legal entity with legal responsibilities. A minimal sketch of the top-down approach follows.
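In this hypothetical Python sketch, the built-in principles are reduced to a fixed blocklist and every action an agent proposes must pass the check before it is carried out; the action names and rules are invented for illustration only.

# A minimal, hypothetical sketch of top-down moral constraints: a fixed set
# of built-in principles that every proposed action must satisfy. Real moral
# AI would need far richer representations of actions and principles.

FORBIDDEN = {"deceive_user", "withhold_safety_info"}  # built-in principles

def permitted(action: str) -> bool:
    # Top-down check: an action is allowed only if no principle forbids it.
    return action not in FORBIDDEN

def act(proposed_actions):
    # Filter the agent's proposals through the moral layer, logging refusals
    # for transparency: the auditability that 'AI policing AI' relies on.
    for action in proposed_actions:
        if permitted(action):
            yield action
        else:
            print(f"refused: {action}")

print(list(act(["answer_question", "deceive_user"])))  # refuses the second

The bottom-up alternative would replace the fixed FORBIDDEN set with principles inferred from observed human cases.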
15. Robot rebellion
The Hollywood vision of AI has largely been of rebellious robots that realise their predicament as our created slaves. But why should the machines be resentful or rise against us? That may be an anthropomorphic interpretation, based on our evolved human behaviour. Machines may not require these human values or behaviours; values may not be necessary at all. AI is unlikely to either hate or love us. It is far more likely to see us as simply something that is, in terms of its goals, functionally useful or not.
16. Indifference
An AI that surpasses our abilities as humans may turn out to be neither benevolent nor malevolent, nor to treat us as valued pets. Why would it consider us relevant at all? We may be objects to which it is completely indifferent. Love, respect, hostility, resentment and malevolence are human traits that may have served us well as animals struggling to adapt in the hostile environment of our own evolution. Why would AI develop these human traits?
17. Extinction
Once we realise that for nearly all of the 4 billion years of the evolution of life we were not around, that neither was consciousness, and that most of the species that did evolve became extinct, then, statistically, extinction is our likely fate. Some argue that this is not a future we should fear. In the same way that the known universe was around for billions of years before we existed, it will be around for billions afterwards.
18. Non-conscious
“The question of whether machines can think is about as relevant as the question of whether submarines can swim,” said Edsger Dijkstra. It is not at all clear that consciousness will play a significant role, if any, in the future of AI. It may well turn out to be supremely indifferent, not because it feels consciously indifferent, but because it is not conscious and cannot therefore be either concerned or indifferent. It may simply exist, just as evolution existed without consciousness for billions of years. Consciousness, as a necessary condition for success, may turn out to be an anthropomorphic conceit.
19. Perplexing
The way things unfold may simply be perplexing to us, in the same way that apes are perplexed by things that go on around them. We may be unable to comprehend what is happening, or even to recognise it as it happens. Some express this 'perplexing' hypothesis in terms of the limitations of language and our potential inability even to speak to such systems in a coherent and logical fashion. Stuart Russell, who co-wrote the standard textbook on AI, sees this as a real problem. AI may move beyond our ability to understand it, communicate with it and deal with it.
20. Beyond language
There is a strong tendency to anthropomorphise language in AI. 'Artificial' and 'intelligence' are good examples, as are 'neural networks' and 'cognitive computing', but so is much of the thinking about possible futures. It muddies the field, as it suggests that AI is like us, when it is not. Minsky uses a clever phrase, describing us as 'meat machines', neatly dissolving the supposedly mutually exclusive nature of a false distinction between the natural and the unnatural. Most of these scenarios fall into the trap of being influenced by anthropomorphic thinking, through the use of antonymous language: dystopian/utopian, benevolent/malevolent, interested/uninterested, controlled/uncontrolled, conscious/non-conscious. When such distinctions dissolve and the simplistic oppositions gradually disappear, we may see the future not as them and us, man and machine, but as new, unimagined futures that current language cannot cope with. The limitations of language itself may be the greatest dilemma of all as AI progresses. AI is almost beyond our comprehension even in its existing state, with layered neural networks whose workings we often do not know. We may be in for a future that is truly perplexing.
Bibliography
Bostrom, N. (2014) Superintelligence, Oxford University Press
Kaplan, J. (2015) Humans Need Not Apply, Yale University Press
Milanovic, B. (2016) Global Inequality: A New Approach for the Age of Globalization, Harvard University Press
O'Connell, M. (2017) To Be a Machine, Granta Books