    CO23183 | We Need to Prevent a Global AI Arms Race Now
    Karryl Kim Sagun Trajano, Benjamin Ang

    14 December 2023


    Unlike the nuclear arms race, the AI one is not confined to the military arena.


    COMMENTARY

In July, United Nations Secretary-General António Guterres suggested the establishment of an international artificial intelligence (AI) agency to govern the use of the technology.

The suggestion is similar to the establishment of the International Atomic Energy Agency (IAEA) in 1957 over concerns about nuclear weapons, and it prompted many to consider the parallels between the ongoing “AI arms race” and the nuclear arms race during the Cold War.

    There is one significant difference between AI and nuclear weapons: the former is not confined to the military arena.

There is, of course, the military AI arms race between major countries vying for supremacy in developing the most powerful AI-guided weapons and systems. Simultaneously, however, there is a commercial AI arms race among tech giants and powerful countries to develop the most advanced AI tools for technological and economic dominance.

Countries have been formulating rules and guidelines to ensure that AI advancements in civilian applications do not cross legal and ethical boundaries. At the recently held Singapore Conference on AI for the Global Good, Deputy Prime Minister Lawrence Wong cited Singapore’s own Model AI Governance Framework, which provides guiding principles for AI development. The Singapore Government also released its updated National AI Strategy 2.0, which aims to ensure AI is used for good. But even as governments establish their own guidelines, the absence of multilateral rules of engagement is telling.

Left unchecked, the AI arms race could usher in weapons and modes of warfare that are not only more efficient, and in turn deadlier, but also subject to less human oversight.

Warring AI systems could escalate rapidly into “hyperwar” or “battlefield singularity”, spiralling beyond what any human can manage. This would be like the “flash crashes” in financial markets caused by automated traders reacting to one another.

    The Two Faces of AI

    The commercial AI arms race has already seen companies racing to develop and release AI tools without adequate safeguards and controls. All are rushing to be first to market. These AI tools can be harmful if used to enhance cyber-attacks, mass-produce disinformation, and generate abusive images and video footage, among other things.

For instance, an AI-powered face-swopping deepfake cost a man in China 4.3 million yuan (more than S$800,000) by leading him to believe he was making a bank transfer to a friend. This leaves us to ponder the potential criminal applications of AI, given the current trajectory of its development.

Just like nuclear technology, which brings the benefit of clean energy on the one hand and the risk of nuclear annihilation on the other, AI – like the god Janus in Roman mythology – has two faces. The good face will, among other things, improve productivity by leaps and bounds, enhance living standards and speed up medical research. The menacing face, as mentioned earlier, will lead to the production of even deadlier weapons and unimaginable harm.

    No wonder, then, many want the nations of the world to agree to a treaty on the non-proliferation of AI, similar to that which exists for nuclear weapons – the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) – and for a body like the IAEA to conduct inspections and watch for violations.

Yet, this is not to say that the NPT is perfect. It prohibits the development of nuclear weapons only by most signatories: five nuclear-weapon states (the United States, Russia, France, China and the United Kingdom) already possessed them before the treaty was drafted and enforced, and a handful of non-signatory states continue to possess such weapons. There are now more than 12,000 nuclear weapons stockpiled globally, despite international regulation.

This need not be the case for regulating AI, if it is accomplished now, before the technology takes off in a big way. It will not be easy, because of a number of obstacles. First, unlike nuclear weapons, the development and distribution of AI technology is often in the hands of private companies, not countries, so a treaty will have limited impact on them. The difficulties that governments have encountered in regulating Big Tech companies in the social media industry foreshadow the challenges they will face in trying to regulate AI.

    Also, unlike nuclear weapons, which require huge facilities like reactors and enrichment plants, AI technology can be developed in an ordinary office space and is hard to detect.

    Finally, while the testing of nuclear weapons is highly conspicuous, AI technology can be tested more discreetly, such as by launching huge campaigns of hate speech and images to be distributed anonymously around the world. Developing such campaigns can be done anywhere, including an ordinary office space.

    With all these challenges, it will be daunting for an international governing body to detect or inspect for malicious use of AI.

Since AI software tools that generate dangerous content or trigger dangerous outcomes can be easily multiplied and distributed, they can readily be adopted by parties to the many conflicts around the world, including rogue states and terrorist groups.

What Can Be Done?

A key factor that could help stave off an AI arms race is cooperation between the two major global powers that are also leaders in the field – the US and China. But this is improbable while policymakers in Washington and Beijing frame the technological competition between the two countries as an AI arms race. Each is trying to achieve global superiority in the nascent technology and is seeking to constrain the other instead of collaborating.

That leaves us with international agencies like the UN, which has taken a pivotal first step towards governing AI with a landmark initiative – the formation of a global AI Advisory Body. The body, consisting of 38 experts from various nations, has embarked on a mission to analyse AI governance and propose recommendations, aligning them with the UN’s Sustainable Development Goals and human rights principles.

    At the AI Safety Summit in Bletchley Park, Britain, in early November, 25 countries signed an international declaration that recognised the need to address risks associated with AI development. The UN also confirmed support for an expert AI panel, and the major tech companies agreed to collaborate with governments in testing their advanced AI models.

    The current efforts by various governments and companies around the world are a commendable start, but more needs to be done, and soon. AI technology is advancing so rapidly that harmful use of it is already proliferating.

    The major powers need to recognise their interdependence and the value of collaboration in AI, which should include joint research and development and creating international norms and standards for safety.

    The major militaries need to recognise the importance of building safeguards and human controls into their AI systems, to avoid miscalculations that can lead to serious conflict. But mutual restraint is unlikely to occur without external pressure or the certainty of mutually assured destruction, as is the case with nuclear weapons.

Global pressure on the major powers to take the proper steps is needed, through diplomacy, trade, and even moral persuasion. It is imperative for international bodies to bring countries together and convene discussions that build cooperation benefiting all. One such success story is the pressure on C-level executives to address climate change and the push for net-zero carbon emissions.

    Major tech companies need to ensure that the AI tools they develop and distribute have adequate safeguards and testing to prevent misuse, abuse and accidental harms. Regulators need to hold the companies responsible for this, which will require countries to develop ethical guidelines and rules.

A recent step in this direction was the Guidelines for Secure AI System Development, published on 27 November under the leadership of the UK and the US. The document was supported by several international agencies and organisations from both the public and private sectors, and was signed and endorsed by 18 countries, including Singapore.

The guidelines for providers and users of AI are a fine example of international collaboration to ensure that AI remains a force for good. The document, however, also raises the question of why some technological superpowers – China and Russia, for instance – were not involved. A few other similar initiatives are in place, undertaken by individual governments and regional groupings, such as the European Union’s AI Act.

Academics, journalists and civil society need to continue building awareness of these issues among policymakers and the public, and to advocate the ethical use of AI, fairness, respect for society and the avoidance of harm.

The public needs to hold governments and companies accountable for all the above. It will take accord and collaboration across all sectors around the world to avoid an AI arms race and to ensure that AI presents a friendly, not menacing, face, bringing maximum benefit to humanity.

    About the Authors

    Karryl Sagun-Trajano is a research fellow for future issues in technology (FIT) at the S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore. Benjamin Ang is a senior fellow and head of the Centre of Excellence for National Security at the same institute and oversees FIT. This article was first published in The Straits Times on 8 December 2023.

    Categories: RSIS Commentary Series / Country and Region Studies / Technology and Future Issues / East Asia and Asia Pacific / South Asia / Southeast Asia and ASEAN / Global
