Vladimir Putin announced, to an online audience of one million, that "Artificial intelligence is the future, not only for Russia, but for all humankind… It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world… If we become leaders in this area, we will share this know-how with the entire world, the same way we share our nuclear technologies today." Elon Musk tweeted a reply: "China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo", and then: "May be initiated not by the country leaders, but one of the AI's, if it decides that a pre-emptive strike is most probable path to victory."
That pretty much sums up the problem. Large and small nations, terrorist groups and even lone wolves may soon have the ability to use 'smart', autonomous, AI-driven technology in warfare. To be honest, it doesn't have to be that smart: a mobile device, a drone and explosives are all one needs to deliver a lethal device from a distance. You may even have left the country by the time it takes off and delivers its deadly payload. Here's the rub: sharing may be the last thing we want to do. The problem with sharing is that anyone can benefit. As soon as machines have the power to decide who lives or dies, we are in uncharted moral territory.
In truth, AI has long been part of the war game. Turing, the father of AI, used it to crack German codes, thankfully helping to end the Second World War, and let's not imagine that it has been dormant for the half-century since. The landmine, essentially a dormant robot that acts autonomously, has been in use since the 17th century. One way to imagine the future is to extend the concept of the landmine: what we now face are smart, active, autonomous landmines, armed with deadly force on every front: land, sea, air, space and cyberspace.
AI on land
Robot soldiers are with us. You can watch Boston Dynamics videos on YouTube and see machines that match humans in some, though not all, aspects of carrying, shooting and fighting. The era of the AI-driven robot soldier is here. Yet we have to be careful, as the cognitive side of soldiering is very far from being achieved. That is part of the problem: dumb, cognitively stupid robot soldiers bring problems of their own.
In the DMZ between South and North Korea, armed robot guards will shoot on sight. Known as Lethal Autonomous Weapons Systems (LAWS), they shoot on sight, where by sight we mean infrared detection and laser identification and tracking of a target. The system has AI-driven voice recognition, asks for identification, and can shoot autonomously. You can see why these sentry or rapid-response systems have become autonomous: humans are far too slow at detecting incoming attacks or at targeting with enough accuracy. Many guns are now targeted automatically, with sensors and systems far beyond the capabilities of any human.
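To make that pipeline concrete, here is a minimal sketch of the detect-challenge-escalate loop such a sentry might run. It is purely illustrative: the class names, fields and decision order are assumptions for this sketch, not the documented behaviour of the Korean system.

```python
# Illustrative sentry decision loop (hypothetical design, not the
# actual DMZ system): detect a target, issue a challenge, then escalate.

from dataclasses import dataclass

@dataclass
class Track:
    distance_m: float          # from infrared detection / laser ranging
    identified_friendly: bool  # e.g. the voice-identification check passed

def sentry_step(track: Track, challenge_issued: bool) -> str:
    """Return the next action for a tracked target."""
    if track.identified_friendly:
        return "stand_down"
    if not challenge_issued:
        return "issue_voice_challenge"  # ask the target to identify itself
    # Unidentified after a challenge: a human-in-the-loop system stops
    # here and alerts an operator; a fully autonomous one may engage.
    return "alert_operator"

print(sentry_step(Track(distance_m=120.0, identified_friendly=False), False))
# -> issue_voice_challenge
```

The whole moral debate sits in that last branch: whether the final action is to alert an operator or to engage is exactly the human-in-the-loop question.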
Uran-9, a Russian unmanned tank, gets to places autonomously and can operate with or without human control. It is hard to believe that autonomous software for such vehicles has not been developed, as it has for self-driving cars. Once such vehicles can set off and complete missions on their own, it is hard to see why risking human crews would still be seen as an option.
AI at sea
Lethal Autonomous Weapons can already operate on or beneath the sea. Naval mines (let's call them autonomous robots) have been in operation for centuries. Unmanned submarines have been around for decades and have been used for purposes good and bad, from the delivery of drugs using autonomous GPS navigation to finding aircraft that have gone down in mid-ocean. In military terms, large submarines capable of travelling thousands of miles, sensor-rich and carrying payloads, are already in play. Russian drone submarines, code-named Kanyon by the Pentagon, have already been detected; they are thought to have a range of up to 6,200 miles and speeds of up to 56 knots, and they can deliver nuclear payloads. Boeing and many others have also been developing such unmanned subs.
AI in the air
When you fly, the pilot of a 757 switches to autopilot at 1,000 feet, and you are then technically flying in a robot for the rest of the flight, albeit supervised by the pilots, who monitor fuel consumption, weather and so on. They could land using autoland, but most pilots still prefer to land the aircraft themselves. The bottom line is that software already does most flying better than humans and will soon outclass them at every task. Flying is safe precisely because it is highly regulated and smart software is used to ensure safety.
Drones are the most obvious example. Largely controlled from the ground, often at huge distances, they are now AI-driven, operate from aircraft carriers, can defend themselves against other aircraft and can deliver deadly missiles to selected targets. The days of the fighter plane may be numbered, as drones, free from the need to seat and sustain a human pilot, are cheaper and can be produced in larger numbers.
Taranis, named after the god of thunder, is an unmanned BAE drone that has been tested in Australia on autonomous missions. The technology developed for self-driving cars can be reused for autonomous vehicles on land and in the air; here, though, it spots enemies rather than friendly pedestrians.
Worryingly, there is evidence that Israel's Harop system, in effect an autonomous bomb that dive-bombs radar installations and self-destructs, has already been used.
Swarm drones have also been tested by the US, designed to surround and overwhelm a target. Each drone acts independently yet also as part of a group, as the sketch below illustrates.
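To see how 'independently but also as a group' can work, here is a minimal flocking sketch in the spirit of the classic boids rules. It is a toy model with assumed parameters, not any military system: each drone steers using only its own position, nearby neighbours and the target, yet the group converges on and surrounds the target with no central controller.

```python
# Toy swarm: attraction to a target plus short-range repulsion between
# drones. Purely local rules produce coordinated group behaviour.

import numpy as np

def step(positions: np.ndarray, target: np.ndarray,
         sep: float = 2.0, speed: float = 0.1) -> np.ndarray:
    """One update: move each drone toward the target, away from close neighbours."""
    new = positions.copy()
    for i, p in enumerate(positions):
        pull = target - p                        # cohesion with the goal
        push = np.zeros(2)
        for j, q in enumerate(positions):
            d = np.linalg.norm(p - q)
            if i != j and d < sep:               # separation rule
                push += (p - q) / (d + 1e-9)
        v = pull / (np.linalg.norm(pull) + 1e-9) + push
        new[i] = p + speed * v / (np.linalg.norm(v) + 1e-9)
    return new

drones = np.random.rand(8, 2) * 10.0             # random start positions
for _ in range(300):
    drones = step(drones, target=np.array([5.0, 5.0]))
# The drones now cluster loosely around the target, each having acted on
# local information alone. Shooting one down removes one agent; the rest
# carry on, which is what makes swarms so hard to defend against.
```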
Nanoweapons
A terrifying range of nanoweapons, from mosquito-like robots to mini-nukes, has entered the vocabulary. Nanoweapons: A Growing Threat to Humanity by Louis A. Del Monte is a chilling account of how nanoweapons may change the whole nature of warfare, making other forms almost redundant. It is the miniaturisation of weaponry that makes the threat so lethal: small payloads on small drones with amazing manoeuvrability are already possible. Stuart Russell has warned against precisely this aspect of AI weaponry in evidence to the UN, which seems, at last, to be moving towards international regulation in this area, as it did with chemical weapons.
AI in cyberspace
War used to be fought on land, sea and air, with the services (army, navy and air force) representing those three theatres of war. It is said that a brand-new front has opened up on the internet, but this is not entirely true, as the information and communications war has always been the fourth front. The Persians waged it, the Romans were masters of it, and it has featured in every modern conflict. Whenever a new form of communications technology is invented, from clay tablets and paper to printing, broadcast media and the internet, it has been used as a weapon of war.
However, the internet offers a much wider, deeper and more difficult arena, as it is global and encrypted. Russia, China and the US are the major players, with autonomous bots and viral campaigns in action. China wages a war against freedom of expression within its own borders through its infamous Great Firewall. Russia has banned LinkedIn, and Putin has been explicit in seeing this as the new battlefield. The US is no different, with explicit lies about the surveillance of its own citizens. But it is the smaller state actors that have had the real wins: ISIS, North Korea and others. With limited resources, they see this theatre as somewhere they can compete with and outwit the big boys.
AI as a weapon of peace
When you land at an advanced airport, you walk through a gate that scans your passport. A chip in your passport holds an image of your face, and face recognition software, along with other checks, identifies you as entitled to enter the country. You needn't speak to a single human on your entire trip, and you will soon be able to cross borders using only a mobile phone.
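The matching step is, in essence, a similarity comparison: the chip photo and the live camera frame are each reduced to a numerical feature vector by a face-recognition model, and the gate opens only if the two agree closely. A minimal sketch, with toy vectors and an assumed 0.6 threshold standing in for a real system's parameters:

```python
# Sketch of an e-gate match: compare face embeddings by cosine similarity.
# The vectors and threshold below are illustrative assumptions only.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gate_decision(chip_embedding: np.ndarray,
                  live_embedding: np.ndarray,
                  threshold: float = 0.6) -> str:
    """Open the gate only if the two face embeddings agree closely."""
    score = cosine_similarity(chip_embedding, live_embedding)
    return "open_gate" if score >= threshold else "refer_to_officer"

# Toy vectors standing in for the output of a face-embedding network.
chip = np.array([0.20, 0.90, 0.40])
live = np.array([0.25, 0.85, 0.38])
print(gate_decision(chip, live))  # -> open_gate
```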
Restricting the movement of criminals and terrorists is being achieved through many types of AI. The war on terror is being fought using AI; it is AI that is identifying and taking down ISIS propaganda. What is required is a determined effort to use AI to police AI. All robots may have to carry black boxes, like aircraft, so that rogue behaviour can be forensically examined. AI may be our best defence against offensive (in both senses of the word) AI.
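One way such a black box could work is a hash-chained, append-only log, in which every entry commits to the entry before it, so any later tampering with the record is detectable in a forensic examination. This is an illustrative design, not a standard or deployed system:

```python
# Tamper-evident "black box" for a robot: each log entry includes the
# hash of the previous entry, so an edit anywhere breaks the chain.

import hashlib
import json
import time

class BlackBox:
    def __init__(self) -> None:
        self.entries: list[tuple[str, str]] = []
        self.last_hash = "0" * 64  # genesis value

    def record(self, event: dict) -> None:
        """Append an event, chaining it to the previous entry's hash."""
        payload = json.dumps({"t": time.time(), "event": event,
                              "prev": self.last_hash}, sort_keys=True)
        self.last_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append((payload, self.last_hash))

    def verify(self) -> bool:
        """Re-walk the chain; any altered entry fails the check."""
        prev = "0" * 64
        for payload, digest in self.entries:
            if json.loads(payload)["prev"] != prev:
                return False
            if hashlib.sha256(payload.encode()).hexdigest() != digest:
                return False
            prev = digest
        return True

box = BlackBox()
box.record({"action": "target_acquired", "mode": "autonomous"})
box.record({"action": "weapons_hold", "operator": "none"})
assert box.verify()  # editing an earlier entry would now fail verification
```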
Conclusion
KAIST, the MIT of South Korea, has just backed down from doing military research with AI after a serious letter from major AI researchers with global reputations threatened a boycott. Google staff lambasted their CEO when they found that Google's TensorFlow was being used to analyse imagery from military drones. It is heartening that this pushback is coming from within the AI and tech community. There are already UN meetings, with multilateral support, to discuss and decide desirable international laws, like those covering chemical weapons, landmines and blinding laser technology. The two main concerns at the moment are 1) autonomous target selection and 2) the application of violent force. But it is hard to see this sticking: the US, Russia and China have been lukewarm on further regulation, seeing existing laws as adequate. Some aspects of AI also remain opaque; the more sophisticated machine learning and neural networks become, the less we know about what is actually happening inside them.
What is worrying, however, is that while many of the above examples are known, you can bet they are merely the tip of a chilling iceberg, as most of these weapons and systems are being developed in deep secrecy. Musk and many others, especially the AI research and development community, are crying out for regulation at an international level on this front. Our politicians seem ill-equipped to deal with these developments, so it is up to the AI community and those in the know to press this home. This arms race is far more dangerous than the nuclear race, where only large nations, with humans in control, took part, and it calls for a declaration of war on AI weaponry. We are facing a future where even small nations, rogue states and actors within states could get hold of this technology. That is a terrifying prospect.