An AI Oppenheimer Moment?

A nuclear explosion mushroom cloud
Image by Harsh Ghanshyam from Pixabay

All computer scientists should watch Christopher Nolan’s staggeringly good film, Oppenheimer. It charts the life of J. Robert Oppenheimer, “father of the atom bomb”, and the team he put together at Los Alamos as they designed and built the first weapons of mass destruction. The film is about science, politics and war, not computer science, and all the science is quantum physics (portrayed incredibly well). Despite that, Christopher Nolan believes the film does have lessons for all scientists, and especially those in Silicon Valley.

Why? In an interview, he suggested that, given the current state of Artificial Intelligence, the world is at “an Oppenheimer moment”. Computer scientists in the 2020s, just like physicists in the 1940s, are creating technology that could be used for great good but also cause great harm (including, in both cases, the possibility that we use it in a way that destroys civilisation). Should scientists and technologists stay outside the political realm, doing as they wish in the name of science while leaving discussion of what to do with their technology to politicians? That leaves society playing a game of catch-up. Or do scientists and technologists have more responsibility than that?

Artificial Intelligence is not as obviously capable of doing harm as an atomic bomb was, and still clearly is. There is also no clear imperative, such as Oppenheimer had, to get there before the fascist Nazi party, who were clearly evil and already using technology for evil (now the main imperative seems to be just to get there before someone else makes all the money instead of you). It is therefore far easier for those creating AI technology to ignore both the potential and the real effects of their inventions on society. However, it is now clear that AI can do, and already is doing, a lot of bad as well as good. Many scientists understand this and are focussing their work on developing versions that are, for example, built to be transparent and accountable, that are not biased, racist or homophobic, and that do put children’s protection at the heart of what they do. Unfortunately, not all are. And there is one big elephant in the room. AI can be, and is being, put in control of weapons in wars that are actively taking place right now. There is an arms race to get there before the other side does. From mass identification of targets in the Middle East to AI-controlled drone strikes in the Ukraine war, military AI is a reality, and it is in control of killing people with only minimal, if any, real humans in the loop. Do we really want that? Do we want AIs in control of weapons of mass destruction? Or is that total madness that will lead only to our destruction?

Oppenheimer was a complex man, as the film showed. He believed in peace but, a brilliant theoretical physicist himself, he managed a group of the best scientists in the world in the creation of the greatest weapon of destruction ever built to that point, the first atom bomb. He believed it had to be used once so that everyone would understand that all-out nuclear war would end civilisation (it was, of course, used against Japan, not the already defeated Nazis, the original justification). However, he also spent the rest of his life working for peace, arguing that international agreements were vital to prevent such weapons ever being used again. In times of relative peace, people forget about the power we have to destroy everyone. The worries only surface again when there is international tension and wars break out, such as in the Middle East or Ukraine. We need always to remember the possibility is there, though, lest we use these weapons by mistake. Oppenheimer thought the bomb would actually end war, having come up with the idea of “mutually assured destruction” as a means for peace. The phrase aimed to remind people that these weapons could never be used. He worked tirelessly, arguing for international regulation and agreements to prevent their use.

Christopher Nolan was asked, if there were a special screening of the film in Silicon Valley, what message he would hope the computer scientists and technologists would take from it. His answer was that they should take home the message of the need for accountability. Scientists do have to be accountable for their work, especially when it is capable of having massively bad consequences for society. A key part of that is engaging with the public, industry and government; not with vested interests pushing for their own work to be allowed, but to make sure the public and policymakers do understand the science and technology so there can be fully informed debate. Both international law and international policy are now a long way behind the pace of technological development. The willingness of countries to obey international law is also disintegrating, and there is a subtle new difference from the 1940s: technology companies are now as rich and powerful as many countries, so corporate accountability is needed too, not just agreements between countries.

Oppenheimer was vilified over his politics after the war, and his name is now forever linked with weapons of mass destruction. He certainly didn’t get everything right: there have been plenty of wars since, so he didn’t manage to end all war as he had hoped, though so far no nuclear war. However, despite the vilification, he did spend his life making sure everyone understood the consequences of his work. Asked if he believed we had created the means to kill tens of millions of Americans (everyone) at a stroke, his answer was a clear “Yes”. He did ultimately make himself accountable for the things he had done. That is something every scientist should do too. The Doomsday Clock is closer to midnight than ever (89 seconds to midnight, midnight being man-made global catastrophe). Let’s hope the Tech Bros and scientists of Silicon Valley are willing to become accountable too, never mind countries. All scientists and technologists should watch Oppenheimer and reflect.

– Paul Curzon, Queen Mary University of London

More on …

Subscribe to be notified whenever we publish a new post to the CS4FN blog.


This page is funded by EPSRC on research agreement EP/W033615/1.

