
The AI Arms Race

An Israeli politician is found dead in his hotel room. The cause of death is identified as a lethal poison, injected into his arm while he slept. No sign of breaking and entering.

An American aircraft carrier is attacked by a swarm of drones. Its defences try to parry the attack but are ultimately overwhelmed by the 100,000-strong swarm. A terrorist group claims responsibility.

France becomes the first country to impose a blanket ban on autonomous cars after a string of terror attacks in which the cars were used as bombs, killing scores of civilians.

Israel identifies the micro-robot responsible for the killing of its politician and traces it to a Korean robotics firm. Further investigation leads Israel to accuse an armed militia of perpetrating the attack. The militia denies it; Israel retaliates by sending out its new fleet of automated weapons, which rapidly track down the militia and terminate all 33 members.

No civilians were caught in the crossfire.

The USA claims Russia financed and enabled the swarm attack on its aircraft carrier, and launches an attack, using the same automated weapons as the Israelis, on the Caucasus terrorist group. Russian defence technologies are immediately triggered and hack into the Pentagon’s systems to call off the attack. A long series of automated hacks and counter-hacks ensues in the space of milliseconds, escalating until it ultimately triggers a pre-emptive nuclear strike.

Officials on either side were unable to explain how it happened.

 

The above scenarios are all drawn from a report on Artificial Intelligence (AI) and National Security commissioned by the US government in 2017. The 132-page document outlines the implications of AI for military, information and economic superiority, and makes recommendations for how the USA ought to move forward on these strategic issues. The USA is currently a world leader in both AI and military technology, although in the case of AI the private sector is far ahead of both governments and universities.

That AI will revolutionise the world is a fact all powerful nations have been quick to grasp. And the revolution will play out, most visibly, in two fields: the commercial and the military. Commercially speaking, we are already surrounded by AI, through the likes of Google, Facebook, Amazon, Uber and the soon-to-be fleet of driverless cars taking to the world’s streets. The Cambridge Analytica scandal drew attention to the potential misuses of big data. It is but the first of many to come.

The better commercial AI gets, the more we use it; the more it gets to know and predict us, the more we come to rely upon its helpful suggestions and ease of use. It is, in a sense, a technology like any other: it is amoral. Those who wield it do so according to their own sense of morality, using it as a force for good or evil.

Yet there’s also something a little different about AI. No technology of the past has sought to understand human behaviour and replicate it. The ultimate aim of AI is this: to recreate a brain. To recreate consciousness. Except with connections millions of times faster than our own, and billions of terabytes more storage space. The way things are going, I’m in no doubt that they’ll succeed in bringing this about. As the report quoted earlier puts it, the only question is when it will happen and “how can the United States effectively plan for or try to affect how it happens?”

Russia has already approved an aggressive plan under which 30% of Russian combat power would consist of entirely remote-controlled and autonomous robotic platforms by 2030. Vladimir Putin has said publicly of AI that “whoever becomes the leader in this sphere will become the ruler of the world”. China is following a similar path, aiming to overtake the USA as the world leader in AI.

 

The AI Arms Race is already well underway. We may want to pause and ask how often a technology has ever been developed, then deemed dangerous and banned. Or how well safety measures have kept pace when a new, poorly understood technology was developed at speed. In the case of the atom bomb, the security measures taken were, on the whole, atrocious. Robert McNamara, US Secretary of Defence during the Cold War, laid out the absurdity of nuclear weapons quite succinctly:

“The whole situation seems so bizarre as to be beyond belief. On any given day, as we go about our business, the president is prepared to make a decision within 20 minutes that could launch one of the most devastating weapons in the world. To declare war requires an act of Congress, but to launch a nuclear holocaust requires 20 minutes’ deliberation by the president and his advisors. But that is what we have lived with for 40 years.”

Of course, with automated systems in place, those 20 minutes of deliberation might be reduced to next to nothing. The risk of unlimited escalation has never been higher.

Today, a determined group of activists has succeeded in getting the UN’s Convention on Certain Conventional Weapons (CCW) to look into the dangers surrounding AI’s application to military technologies. Lethal Autonomous Weapons Systems (LAWS) stand to revolutionise modern warfare in ways analysts are still trying to get their heads around. Along with other AI developments, LAWS are the most visible of the present-day technologies whose course we are still in a position to determine. Many at the CCW are calling for an outright ban. Others are complicating matters and stalling the process; these, of course, are the same nations already investing heavily in LAWS.

As always, diplomatic efforts take a long, long time to get anywhere. Technology proceeds at a much faster pace. The CCW is due to meet again in November to present its findings. It may be the last time a meaningful decision can be made at the international level before LAWS become a reality.
