AI’s Generality Poses New Threats to International Security

Jan. 13, 2024, 1:30 a.m.


New world of international security under emerging AI technologies. Created with AI, 2023.

On Nov. 13, 2023, the United States signed on to the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, an international initiative to promote the responsible military development and use of AI and autonomous systems. Forty-eight other countries had signed on as of Nov. 21.

The Declaration is an acknowledgement of how AI systems will change the way warfare is conducted, and even fundamentally change the way countries interact with each other and think about their own security. 

A key reason AI is placing such pressure on international relationships now, rather than at any other time, is the recent explosion in the range of tasks AI systems can perform. In the past year we’ve seen models such as GPT-4 develop the ability to write code, do some mathematics, draft legal documents, and give instructions for creating bioweapons. The fact that AI models could have all these abilities simultaneously, often without being explicitly trained to, caught many people by surprise, including these systems’ own creators.

Along with these increases in AI capabilities came alarming and unexpected phenomena, such as Microsoft Bing Chat’s split-personality behavior, which included threats and attempted manipulation, as well as large language models’ (LLMs’) tendency to generate untrue statements, or ‘hallucinations.’ Concerns have also arisen around the use of AI in political misinformation, surveillance, and autonomous weaponry, including the automated launch of nuclear weapons.

Countries and international organizations have begun tackling in earnest the question of how best to govern this little-understood new technology. A patchwork of different governance strategies and objectives has emerged, with countries like Japan and the U.S. highlighting innovation while the EU has prioritized managing risks in its AI Act. A debate has also arisen over which moral and political frameworks AI systems should endorse: China’s AI law gives power to the state, keeping China’s AI bound to ‘core socialist values,’ while President Biden recently signed an executive order on “safe, secure, and trustworthy” AI in the U.S., which encourages Western ideals and U.S. leadership on AI.

Each country seems to have its own approach to regulating AI domestically, yet many of the biggest concerns about AI require international coordination to manage. These range from fears that AI will spark an international arms race or increase the risk of miscalculations and mistakes under nuclear tension, to concerns that it will exacerbate a legacy of global inequality between historically colonized and colonizing countries. None of these problems can be addressed without extensive international diplomacy, coordination, and buy-in from all countries involved.

Nuclear weapons, and the global governance structures that arose around them, have become a go-to analogy for AI and its governance. In a U.S. Senate Judiciary hearing, witnesses including OpenAI CEO Sam Altman and cognitive scientist Gary Marcus called for a new international organization for AI comparable to the European Organization for Nuclear Research (CERN) or the International Atomic Energy Agency (IAEA). Even AI developers and researchers themselves look to the history of nuclear weapons through books like Richard Rhodes’ The Making of the Atomic Bomb, conscious of this history as they work on a new groundbreaking technology for the next generation.

The two technologies have remarkable similarities. Nuclear weapons quickly arose as a flash point for geopolitical tension, and AI seems to be on the same trajectory. AI is proceeding at a rapid pace of development, just as nuclear weapons once did. AI evokes similarly catastrophic visions: The New York Times even created a quiz challenging readers to distinguish whether quotes warning of “annihilation,” “catastrophe,” and “jeopardy” were about AI or nuclear weapons. From a governance perspective, the technologies are similar in that both have development choke points that countries crave control over: uranium enrichment for nuclear weapons and chip supply chains for AI. Yet we should stay aware of the limits of what we can learn from such an analogy.

The crucial difference between the two technologies is that nuclear weapons have only one main use, as their name suggests: as a weapon of mass destruction. And although nuclear technology is also pivotal for energy, the two uses are clearly separated: nuclear weapons require fuel enriched to over 90% uranium-235, while nuclear power reactors must use fuel enriched to under 20%. AI, on the other hand, does everything from writing sophisticated code to fielding customer service queries to diagnosing illnesses through medical imaging, and nearly every product seems to promise that some AI-enhanced version is on its way.

This difference between nuclear as a single-purpose technology and AI as a general-purpose one has political and international implications we cannot ignore, and it means we should be careful about projecting the lessons and strategies learned from governing nuclear technology onto our AI governance strategy.

One danger that stems from the general-purpose nature of AI is that a single system may be used for both helpful and greatly harmful purposes. One can imagine highly articulate chatbots, which are commercially useful, being used to generate powerful political misinformation to destabilize a rival country. Recognizing AI’s economic and military duality is crucial in evaluating its effects on international relations because it puts countries (especially great powers like the U.S. and China) into a security dilemma that could spark an insecurity spiral. Actions that countries take to increase their own security often decrease other countries’ security or are interpreted as threats. Other countries may then feel the need to take counter-action to maintain their own security, causing both countries to feel forced to escalate despite not wanting to go to war.

A crucial way for countries to escape the security dilemma is being able to distinguish defensive capabilities from offensive ones. Here is where the generality of AI becomes dangerous: the most powerful general AI systems can be used both to strengthen a country’s national security and to destabilize or attack another country. No matter how much a country with powerful AI promises to use it only for its own defense, it may not be able to convince other countries. Countries’ possession of such powerful systems lays a trap that could spring at any moment into international tension and conflict.

We’ve already seen the beginnings of this with the U.S.-China trade war over AI chips. On Oct. 7, 2022, the United States Bureau of Industry and Security put in place new controls on the export of advanced semiconductor chips to China. The move quickly provoked outrage in Beijing, where it was interpreted as an industrial “act of war”: an attempt by the U.S. to keep China down and an attack on its developing commercial AI sector.

Yet the U.S. denied that the controls targeted China’s economy. The Biden administration has said they are simply meant to increase the U.S.’s own security: by keeping its most important competitor from developing cutting-edge military AI systems, the U.S. sought to avoid destabilization from more sophisticated Chinese disinformation, propaganda, and warfare. This tension between the two superpowers shows just how easily AI’s generality can cause disparate understandings, suspicion, and conflict.

The general-purpose nature of AI systems also means there are simply more issues to regulate and debate than with nuclear weapons, creating more potential battlefields for international disagreements to play out. Geopolitical jockeying is already complicated enough with the single-purpose technology of nuclear weapons. It’s hard to predict how complicated countries’ interactions around AI may become given the seemingly endless list of concerns AI raises, from privacy to education to automated warfare. We should not expect the effort to coordinate global AI regulation to be as focused as it was with nuclear weapons.

AI has incredible potential to improve healthcare, education, productivity, and so much more. Being clear-eyed on how nuclear weapons and AI differ can help us build global institutions and cooperation mechanisms more suited to AI’s unique characteristics, bringing us closer to a world where AI is able to help make life better for everyone.

The image used in this article is openly licensed under CC BY 4.0. It was generated by the author using AI and can be copied or redistributed for any purpose.


Tiffany Li
