14 December 2023
We Need to Prevent a Global AI Arms Race Now
Unlike the nuclear arms race, the AI one is not confined to the military arena.
COMMENTARY
In July, United Nations Secretary-General Antonio Guterres suggested the establishment of an international artificial intelligence (AI) agency to govern the use of the technology.
This is similar to the establishment of the International Atomic Energy Agency (IAEA) in 1957 over concerns about nuclear weapons, and the suggestion prompted many to consider the parallels between the ongoing “AI arms race” and the nuclear arms race during the Cold War.
There is one significant difference between AI and nuclear weapons: the former is not confined to the military arena.
There is of course the military AI arms race between major countries vying for supremacy to develop the most powerful AI-guided weapons and systems. Simultaneously, however, there is a commercial AI arms race among tech giants and powerful countries to develop the most advanced AI tools for technological and economic dominance.
Countries have been formulating rules and guidelines to ensure that AI advancements in civilian applications do not cross legal and ethical boundaries. At the recently held Singapore Conference on AI for the Global Good, Deputy Prime Minister Lawrence Wong mentioned Singapore’s very own Model AI Governance Framework, which provides guiding principles for AI development. The Singapore Government also released its second National AI Strategy (NAIS 2.0), which aims to ensure AI is used for good. But even as governments establish their own guidelines, the absence of multilateral rules of engagement is telling.
Left unchecked, the AI arms race could usher in weapons and modes of warfare that are not only more efficient and, in turn, deadlier, but also subject to less human oversight.
Warring AI systems could drive rapid escalation into “hyperwar” or “battlefield singularity”, spiralling beyond what any human can manage. This would be like the “flash crashes” in financial markets caused by automated traders reacting to one another.
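To see why such machine-speed feedback loops can outrun human oversight, consider a toy simulation – a minimal sketch of our own, not drawn from any real system, with the gain and timing figures purely assumed – in which two automated systems each react to the other’s last move:

```python
# Toy illustration: two automated systems, A and B, each raise their
# "posture" in proportion to the other's last move. The multiplicative
# feedback explodes within milliseconds, long before a human operator
# (reaction time ~250 ms, an assumed figure) can even notice.

HUMAN_REACTION_MS = 250   # assumed human reaction time
STEP_MS = 1               # assumed machine decision cycle

def escalate(gain: float = 1.2, steps: int = 12) -> None:
    a = b = 1.0           # initial postures of systems A and B
    for t in range(steps):
        # each system responds to the other's previous posture
        a, b = a + gain * b, b + gain * a
        print(f"{(t + 1) * STEP_MS:3d} ms: A = {a:10.1f}  B = {b:10.1f}")
    print(f"\nAfter {steps * STEP_MS} ms the loop has escalated roughly "
          f"{a:,.0f}-fold; a human needs about {HUMAN_REACTION_MS} ms "
          f"just to begin reacting.")

if __name__ == "__main__":
    escalate()
```

The point of the sketch is not the numbers but the shape of the curve: each side’s reaction becomes the other’s provocation, and escalation compounds on machine timescales.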
The Two Faces of AI
The commercial AI arms race has already seen companies racing to develop and release AI tools without adequate safeguards and controls. All are rushing to be first to market. These AI tools can be harmful if used to enhance cyber-attacks, mass-produce disinformation, and generate abusive images and video footage, among other things.
For instance, an AI-powered face-swopping deepfake cost a man in China 4.3 million yuan (more than S$800,000) by leading him to believe he was making a bank transfer to a friend. It leaves us to ponder the potential criminal applications of AI on its current development trajectory.
Just like nuclear energy, which brings the benefit of clean energy on the one hand and the risk of nuclear annihilation on the other, AI – like the god Janus in Roman mythology – has two faces. The good face will, among other things, mean improving productivity by leaps and bounds, enhancing living standards and speeding up medical research. The menacing face, as mentioned earlier, will lead to the production of even more deadly weapons and unimaginable harm.
No wonder, then, many want the nations of the world to agree to a treaty on the non-proliferation of AI, similar to that which exists for nuclear weapons – the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) – and for a body like the IAEA to conduct inspections and watch for violations.
Yet, this is not to say that the NPT is perfect. It prohibits the development of nuclear weapons only by most signatories: five nuclear-weapon states (the United States, Russia, France, China and the United Kingdom) already possessed such weapons before the treaty was drafted and entered into force, and a handful of non-signatory states continue to possess them as well. There are now more than 12,000 nuclear weapons stockpiled globally, despite international regulation.
This need not be the case for regulating AI, if it is done now, before the technology takes off in a big way. It will not be easy, however, because of a number of obstacles. First, unlike nuclear weapons, the development and distribution of AI technology are often in the hands of private companies, not countries. A treaty will have limited impact on them. The difficulties that governments have encountered in regulating Big Tech companies in the social media industry reflect the challenges they will face in trying to regulate AI.
Also, unlike nuclear weapons, which require huge facilities like reactors and enrichment plants, AI technology can be developed in an ordinary office space and is hard to detect.
Finally, while the testing of nuclear weapons is highly conspicuous, AI technology can be tested far more discreetly, such as by launching huge campaigns of hate speech and images distributed anonymously around the world. Such campaigns can be developed anywhere, including in an ordinary office space.
With all these challenges, it will be daunting for an international governing body to detect or inspect for malicious use of AI.
Since AI software tools that generate dangerous content or trigger dangerous outcomes can be easily multiplied and distributed, they can readily be adopted by parties to the many conflicts around the world, including rogue states and terrorist groups.
What Can Be Done?
A key factor that can help stave off an AI arms race will be cooperation between the two major global powers that are also leaders in the field – the US and China. But this is improbable while policymakers in Washington and Beijing frame the technological competition between the two countries as an AI arms race. Each is trying to achieve global superiority in the nascent technology and is seeking to constrain the other instead of collaborating.
That leaves us with international agencies like the UN, which has taken a pivotal first step towards governing AI with a landmark initiative – the formation of a global AI Advisory Body. The body, consisting of 38 experts from various nations, has embarked on a mission to analyse and propose recommendations for AI governance, aligning it with the UN’s sustainable development goals and human rights principles.
At the AI Safety Summit in Bletchley Park, Britain, in early November, 28 countries signed an international declaration recognising the need to address the risks associated with AI development. The UN also confirmed support for an expert AI panel, and the major tech companies agreed to collaborate with governments in testing their advanced AI models.
The current efforts by various governments and companies around the world are a commendable start, but more needs to be done, and soon. AI technology is advancing so rapidly that harmful use of it is already proliferating.
The major powers need to recognise their interdependence and the value of collaboration in AI, which should include joint research and development and creating international norms and standards for safety.
The major militaries need to recognise the importance of building safeguards and human controls into their AI systems, to avoid miscalculations that can lead to serious conflict. But mutual restraint is unlikely to occur without external pressure or the certainty of mutually assured destruction, as is the case with nuclear weapons.
Global pressure on the major powers to take the proper steps is needed, through diplomacy, trade, and even moral persuasion. It is imperative for international bodies to bring countries together and convene discussions that build the kind of cooperation that benefits all. One success story is the global push for net-zero carbon emissions, which has seen C-level executives pressed to address climate change.
Major tech companies need to ensure that the AI tools they develop and distribute have adequate safeguards and testing to prevent misuse, abuse and accidental harms. Regulators need to hold the companies responsible for this, which will require countries to develop ethical guidelines and rules.
A recent step in this direction was the Guidelines for Secure AI System Development, published on 27 November and led by the UK and the US. The document was supported by several international agencies and organisations from both the public and private sectors, and was signed and endorsed by 18 countries, including Singapore.
The guidelines for providers and users of AI are a fine example of international collaboration to ensure that AI remains a force for good. The document, however, also raises the question of why some technological superpowers – China and Russia, for instance – were not involved. Other similar initiatives are also in place, taken by individual governments and blocs, such as the European Union’s AI Act.
Academics, journalists and civil society need to continue building awareness of these issues among policymakers and the public, and to advocate ethical use, fairness, respect for society and avoidance of harm.
The public needs to hold governments and companies accountable for all of the above. It will take accord and collaboration across all sectors around the world to avoid an AI arms race and to ensure that AI turns its friendly face, not its menacing one, towards humanity, bringing maximum benefit to all.
About the Authors
Karryl Sagun-Trajano is a research fellow for future issues in technology (FIT) at the S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore. Benjamin Ang is a senior fellow and head of the Centre of Excellence for National Security at the same institute and oversees FIT. This article was first published in The Straits Times on 8 December 2023.