• Pierre Smith Khanna

An Atom Bomb in the Making...

“The huge majority do not understand the historical significance of the moment. I wonder am I mistaken or not?”

— Vladimir Vernadskii, diary entry, July 12, 1940

“They did not see it until the atomic bombs burst in their fumbling hands.”

— H.G. Wells, The World Set Free, 1914

One of the longest-running jokes about AI is that it is always only twenty years away. Its advocates swear that it will be a game-changer, that it will alter the world as we know it in ways we can hardly imagine. Others say that these prognoses are misguided, that the technological difficulties involved in emulating human-like intelligence are far too great to overcome, and that it’s never going to happen.

The history of big, world-changing technological developments is littered with such premature obituaries, from man’s dream of flight to his fantasy of harnessing the power of the atom. For Yuval Noah Harari, author of Homo Deus, human beings are very bad at making accurate forecasts because “when we think about the future, our horizons are usually constrained by present-day ideologies and social systems.” Thus when man first thought of flying, he tried to emulate the bird. This turned out to be the wrong way to think about the problem, and eventually he stumbled on something far vaster than any one bird: aerodynamics.

The same was true for the development of nuclear weapons, albeit with far greater consequences. The atomic bomb was seen, by far too many, as just that: a bomb. Another weapon, undoubtedly powerful, to be added to the state’s arsenal. Yet what nuclear weapons did was to redefine the very notion of security, ushering in a new era in which every state, no matter how militarised, is vulnerable to being eradicated. A new world in which the traditional tradeoff between offensive and defensive weapons was made redundant by the removal of defence as a category altogether.

This needn’t have happened. The nuclear arms race unleashed on August 6th, 1945 could have been avoided, or at least consciously mitigated, had more consideration been given to the transformational capabilities of nuclear weapons. The problem was twofold. First, not many people knew just how much the nuclear bomb would transform the world. As Harari points out, too much of our thinking is conditioned by the past. We may be able to project certain trends into the future and use that extrapolation to imagine the world in twenty years’ time. When the scale of the changes defies logic, however, when it defies even the category of ‘scale’ itself, our minds gaze haplessly into the ether. Paradigm shifts are the stuff of science fiction and visionaries. Most of us, let’s face it, aren’t like that.

The second part of the problem is simply this: once we build something, we just have to use it. There are no two ways about it. Over 22 billion dollars were spent and some 130,000 people employed in building the atomic bomb. The new American President wasn’t about to shelve it just because a handful of Manhattan Project scientists protested that it needn’t be used against Japan. “We built this thing because we thought Germany might build one before us. Now Germany is defeated, so let’s stop and think a minute.” Too little, too late.

The history of technological progress is replete with examples of advances being put to use before the wider implications of deploying them were deeply considered. And it is near impossible to find examples of technology being voluntarily shelved, especially when it can give its possessors an advantage over others. The ban on chemical weapons stood out as one such example, until their recent use in Syria with horrifying consequences. What is easy to find are the countless cautionary words with respect to blind technological development, words which, despite their age, continue to ring true.

“Will man be able to use this power, direct it towards good, and not towards self-destruction? Is he mature enough to be able to use the power that science must inevitably give him? Scientists ought not to close their eyes to the possible consequences of their scientific work, of scientific progress. They ought to feel responsible for all the consequences of their discoveries. They ought to connect their work with the best organisation of all mankind. Thought and attention ought to be directed to these questions.”

The Russian mineralogist Vladimir Vernadskii wrote this in 1922 when pondering the revolutionary character of atomic energy. The scientists working on the atomic bomb voiced their concerns when it was already too late to change the course of history. Today we stand at a crossroads similar to the one Vernadskii stood at back in 1922. Artificial Intelligence has already taken off, but its military application has yet to be fully exploited. No single nation has gained a decisive advantage over the others and, as such, the world is still in a position to prevent an arms race from occurring. Leading voices in the AI and robotics community have already presented an open letter to the UN for this very reason, presciently reminding us that, unlike nuclear weapons, autonomous AI weapons “require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.”

The dangers of AI go beyond its military use, as the Cambridge Analytica scandals of the past weeks have vividly brought to light. Whereas nuclear bombs required huge, powerful states to build, AI can be developed by almost anyone. Currently, it is being pioneered by huge corporations facing few legal constraints on how they use the vast amounts of data they harvest. The fact that their bottom line is profit ought to worry us, given how good their algorithms are getting at addicting us to their products or swinging our vote. Moreover, the algorithms they are developing are becoming so sophisticated and complex that their creators no longer know how they work. The idea that sooner or later we will develop a ‘super-AI’ far surpassing human beings in intelligence and capability is a risk on a par with those posed by autonomous weaponry. Couple that with the fact that so many of these algorithms are designed to understand and emulate human behaviour, and you are left with a super-smart AI equipped with a state-of-the-art toolset for manipulating and deceiving humans. No wonder Putin is so fond of it.

Mitigating these risks cannot happen through an outright ban on AI research. This is not about putting a halt to scientific progress altogether. It is about pausing to ask what that progress is for, and about evaluating the risks inherent in the unmitigated pursuit of technological know-how. Only then can we begin to outline what research makes sense and what research ought to be steered away from. Although a lot of resources have already been spent on developing AI, there is still time before AI’s Hiroshima moment comes about. Let’s not wait around for it.



© 2020 by Pierre Smith Khanna