    IP25004 | Military AI Governance in 2024: One Step Forward, Two Steps Back
    Manoj Harjani

    10 January 2025


    SYNOPSIS

    Multilateral governance of military AI saw progress in 2024 mainly around efforts to develop norms of behaviour. However, prospects for legally binding agreements remain slim while proliferation concerns are growing as more countries seek to advance their capabilities in military AI.

    COMMENTARY

    On the surface, 2024 could be considered a marquee year for military AI governance. In November 2024, the UN General Assembly’s First Committee adopted its first-ever resolution focused on military AI. The resolution was led by the Netherlands and South Korea, who have also collaborated to organise summits on responsible AI in the military domain since 2023.

    Another important development occurred on the sidelines of the Asia-Pacific Economic Cooperation (APEC) Summit held in November 2024 in Lima, Peru, where presidents Joe Biden and Xi Jinping agreed that human beings rather than AI should make decisions regarding the use of nuclear weapons. Earlier in the year, the United States and China had also held an inaugural round of bilateral talks in Geneva on the risks posed by AI.

    In contrast to the progress seen towards developing norms of behaviour, prospects for legally binding agreements remain slim. An almost decade-long effort on lethal autonomous weapon systems (LAWS) by signatories to the Convention on Certain Conventional Weapons (CCW) failed to make significant progress in 2024, despite UN Secretary-General António Guterres urging the conclusion of a legally binding agreement on LAWS through this platform by 2026.

    While efforts to develop norms should not be discounted, it is difficult to impose enforceable constraints on the development and use of military AI without legally binding agreements. Observations from ongoing conflicts in Gaza and Ukraine have already demonstrated the urgent need for arms control, and we can expect militaries to spend more on AI as part of the larger global trend of rising defence spending. This will only deepen strategic instability, which is no longer underpinned solely by nuclear weapons.

    In the year ahead, there will be three things to watch closely. First, the impact on military AI governance from any shifts in relations between China and the United States as a second administration led by Donald Trump takes office. Next, the steps that established military AI governance platforms will take to move beyond norms of behaviour. Finally, whether the conversation on military AI governance will become more inclusive and involve countries beyond those with the most advanced military capabilities or which are more economically developed.

    2025 will be a critical year for military AI governance to mature beyond norms as proliferation concerns persist. Image by the Dutch Ministry of Foreign Affairs via Flickr.

    The Rise of Platforms for Developing Norms

    In a climate of acrimonious relations between China and the United States, few would have expected any progress on military AI governance. The rise of platforms for developing norms of behaviour regarding military AI has not only highlighted the value of smaller countries coming together but has shaped the superpowers’ behaviour as well.

    The Responsible AI in the Military Domain (REAIM) summits held since 2023 are a good example of this dynamic at play. At the inaugural REAIM summit, hosted by the Netherlands, the United States launched a parallel effort on military AI governance, the Political Declaration on Responsible Military Use of AI and Autonomy. It is telling that so far more countries have endorsed the Blueprint for Action put forward at the second REAIM summit, held in South Korea in September 2024, than the US-led Political Declaration.

    The REAIM platform’s success has in turn nudged the superpowers in a positive direction. Both China and the United States voted in favour of the resolution on military AI led by the Netherlands and South Korea at the UN General Assembly’s First Committee in November 2024. While much has been made of the fact that China did not endorse the REAIM Blueprint for Action or co-sponsor the First Committee resolution on military AI like the United States did, this does not necessarily equate to China being against these initiatives.

    Russia, by contrast, has been a consistent opponent of these norm-building efforts. However, it recently announced an AI Alliance Network under the BRICS grouping that appears primarily aimed at conducting joint R&D in AI. For now, it is unclear whether this initiative will have any impact on military AI governance, but the likelihood is low given the BRICS’ traditional focus on economic issues.

    Obstacles to Legally Binding Agreements

    The picture is far less rosy when we turn to legally binding agreements. Although the Group of Governmental Experts (GGE) on LAWS involving parties to the CCW is aiming for a legally binding agreement under a more focused mandate approved in November 2023, its meetings in 2024 remained dominated by divisions entrenched over nearly a decade.

    These divisions are primarily along two fault lines — the characteristics and definitions of LAWS and approaches to regulating them. Regarding definitions, a major challenge is that much of what is being discussed is prospective and has yet to be deployed. In terms of approaches to regulation, some countries favour banning only LAWS that cannot comply with international humanitarian law, while others desire an all-out ban. With the GGE on LAWS working on the basis of consensus, these divisions are very likely to continue hampering future progress towards a legally binding agreement.

    Meanwhile, countries that have been frustrated with the slow progress of the GGE on LAWS have sought to create new venues for discussion. Austria led a resolution in the UN General Assembly that was passed in December 2023 and organised a conference in April 2024 to move discussion on the regulation of LAWS forward without the GGE’s constraint of requiring consensus. It also successfully led a follow-up resolution in 2024, which has now firmly placed LAWS on the agenda of the UN General Assembly’s First Committee. While this effort has been criticised for seeking to sidestep countries in the GGE on LAWS that want an all-out ban, the reality is that the GGE is not an entirely inclusive platform either, being restricted to CCW signatories.

    Prospects for 2025

    There are three things worth watching closely in the year ahead. First among them is a new US administration led by Donald Trump that will take office later this month, armed with a majority in the Senate and the House of Representatives. While observers generally expect that the Biden administration’s policies on AI will be rolled back, it is unclear how this will affect military AI.

    In August 2024, China and the United States committed to a second round of bilateral talks on AI. It remains to be seen whether the incoming administration will retain this platform. Another uncertainty is the future of the Political Declaration on Responsible Military Use of AI and Autonomy. With only one plenary meeting of endorsing countries held so far, in March 2024, it is possible that this platform may be abandoned. If the first Trump administration’s actions are any guide, future American participation in other multilateral platforms on military AI governance is also in doubt.

    The second thing to watch will be how existing military AI governance platforms aim to push forward beyond norms of behaviour. The REAIM process has already signalled its intentions, having progressed from a Call to Action at the first summit to a Blueprint for Action in 2024. Furthermore, at the upcoming 80th session of the UN General Assembly’s First Committee this year, there will be two agenda items related to military AI arising from the two resolutions that were adopted in 2024 — one on the implications of AI for international peace and security, and the other on LAWS. The GGE on LAWS will also convene and aim to make progress on a rolling text that guided discussions in 2024 towards an agreement.

    Finally, it will be important to track whether the conversation on military AI governance will become more inclusive in 2025. The inclusion of military AI and LAWS on the agenda of the UN General Assembly could already be interpreted as progress on this front, but it remains to be seen whether other platforms will invite greater participation. If we consider Southeast Asia, just three countries from the region (Brunei, the Philippines, and Singapore) endorsed the REAIM Blueprint for Action, and only Singapore endorsed the US Political Declaration.

    ASEAN has yet to substantively address the issue of military AI governance, and think tanks in the region have suggested that the ASEAN Defence Ministers’ Meeting (ADMM) could take the lead on this. Furthermore, with the ADMM-Plus platform that involves ASEAN’s Dialogue Partner countries — most of whom are key players in military AI, including China, the United States, Australia, India, Japan, and South Korea — the region is well positioned to plug into the larger multilateral conversation. The Philippines, which has been active at multilateral platforms on military AI governance in recent years, may well make this a reality if it ends up taking over Myanmar’s 2026 slot as rotating chair of ASEAN.

    The year ahead therefore looks to be a critical one for military AI governance to mature. All eyes will be focused not just on China and the United States, but also on the smaller and less developed countries that are playing crucial roles in taking the multilateral conversation forward.

    Manoj Harjani is Research Fellow and Coordinator of the Military Transformations Programme at the S. Rajaratnam School of International Studies.

    Categories: IDSS Papers / Country and Region Studies / International Politics and Security / East Asia and Asia Pacific / South Asia / Southeast Asia and ASEAN / Global

