AI Regulation: Reflections on the Nuclear Analogy and its Utility

By Emily Worlock

In 2023, the Center for AI Safety issued a statement, signed by leading AI researchers and executives, that placed the threat of AI on par with nuclear war. Following this, influential figures such as António Guterres, the Secretary-General of the UN, and Sam Altman, the CEO of OpenAI, voiced support for AI regulatory frameworks inspired by the International Atomic Energy Agency (IAEA). Yet how useful are these comparisons when the threat posed by AI is not only fundamentally different from that of nuclear technology, but entirely unprecedented? The nuclear analogy can certainly serve an advisory role for policymakers and lawmakers, but it cannot, and should not, be depended upon.

The rapid integration of AI into our social, political, and economic reality makes its regulation essential. AI technology is everywhere, from our phones and home appliances to military equipment and research facilities. AI acts as a catalyst, increasing the speed and efficiency of everyday processes, and those who forgo it, whether individuals, companies, or governments, risk seeing their performance suffer.

Yet AI technology is still highly flawed and poses risks in every sector. Not only can goal-setting algorithms raise ethical concerns, but Large Language Models can be used to undermine the integrity of political elections. AI technology also threatens the healthcare sector along with commerce, privacy, security, and defence (notably through the development of lethal autonomous weapons), while contributing to the spread of misinformation across civil society and posing new and unprecedented dangers.

Some initiatives have already been taken to regulate AI. The EU Artificial Intelligence Act, the first comprehensive piece of AI legislation in history, entered into force in August 2024. In addition, the Council of Europe has established a Committee on Artificial Intelligence, and UNESCO has issued recommendations on the ethics of AI. Meanwhile, the Future of Humanity Institute, the Future of Life Institute, and the Centre for the Study of Existential Risk are a few examples of organisations working on AI regulatory frameworks.

It is generally recognised that any regulatory framework must be international rather than national, given AI's global reach and the limits of what can be achieved at the national level. Yet, as the difficulties in establishing and adhering to global initiatives to combat climate change clearly demonstrate, national interests vary, and this may prove problematic when creating global regulations.

The Nuclear Precedent

Considering the existential threat that nuclear weapons pose to humanity, it seems plausible to use the development of nuclear technology as a precedent and analogy for understanding the requirements of AI regulation. Indeed, the founding of the IAEA, treaties such as the Nuclear Non-Proliferation Treaty, and the fact that there has not yet been a nuclear war all point to the successes of attempts to regulate nuclear technology. Furthermore, as Marc Aidinoff and David Kaiser have argued, the Manhattan Project reveals the problems with opacity and “overzealous secrecy,” governmental interference in scientific research, the suppression of political debate, and complications arising from private-sector involvement. All of these issues are equally pertinent to discussions of AI.

From this, we learn that AI, like nuclear technology, needs to be institutionalised ethically and held accountable through democratic oversight. Regulation requires the collaborative efforts of government officials, policymakers, scientists, and the public if it is to be implemented effectively.

However, it must also be recognised that the nature of the AI beast is fundamentally different from that of nuclear technology. For a start, nuclear weapons demonstrably pose an existential risk to humanity, whereas there is no evidence that the risk of AI is directly comparable. The danger posed by AI lies more in the realms of ethics, legislation, and over-dependency than in physical destruction. AI cannot, so far, destroy the species as nuclear weapons can, meaning the harm each poses is fundamentally different.

In addition, the nature of what must be regulated also varies widely. Nuclear technology requires physical material: radioactive elements such as plutonium, uranium, and tritium, which are readily traceable. By contrast, beyond supercomputers and specialised chips, AI requires little physical hardware and can be used by anyone, anywhere in the world, making it impossible to regulate in the way one would a physical object such as a nuclear bomb. As a study by Yasmin Afina and Dr Patricia Lewis of Chatham House noted, “AI is, in that sense, the very opposite of nuclear weapons.”

Moreover, unlike nuclear weapons, which remain in the possession of states, the role of the private sector in AI development cannot be ignored. OpenAI, after all, is a private company that amassed $4.3 billion in the first months of 2025, and the UK AI sector alone has attracted around £200 million per day in private investment since July 2024. The commercial interests of companies must therefore be taken into account when forming a regulatory framework, which is not a factor in the regulation of nuclear technology.

It is highly likely that AI regulation will draw on a patchwork of frameworks from across sectors, reflecting the technology's own diverse uses and capabilities. Afina and Dr Lewis's study is especially useful in pointing to alternative regulatory models, such as the Intergovernmental Panel on Climate Change as a template for an international agency, and the US Food and Drug Administration or the EU's Reference Laboratory for Genetically Modified Food and Feed as templates for control and regulation alongside commercialisation.

The nuclear precedent therefore serves as a useful example of how states have dealt with threatening technology through international legislation. However, AI's lack of physicality and its industry-driven development render it a different beast altogether, one that requires a type of regulation specific to its multifaceted nature.

About the author

Emily Worlock is pursuing a master’s degree in History at Oxford University and has a keen interest in European security, defence and diplomacy. With a background in History from Durham University and International Relations from her year abroad at Sciences Po Paris, she is always looking for ways to apply a historical lens to the analysis of contemporary international threats and emerging technologies.
