    IP23030 | Inaugural Summit on Responsible AI in the Military Domain: Limitations and Proposed Pathways
    Wichuta Teeratanabodee

    23 March 2023


    The first global summit on Responsible AI in the Military Domain was held in The Hague in February 2023 to discuss how the military can employ AI responsibly. It culminated in a joint call to action, which addressed the urgent need for a military AI governance framework. While the document demonstrates a significant international effort to find ways to use military AI responsibly, it faces several limitations. WICHUTA TEERATANABODEE suggests several steps to maximise the potential of the next summit in finding solutions for the responsible use of AI in the military domain.


    COMMENTARY

    Artificial intelligence (AI) uses algorithms to mimic human intelligence and is intended to enhance the speed, precision, and effectiveness of human efforts. AI-enabled technologies have been employed in various sectors, including the military.

While AI offers the advantage of minimising human error and increasing the effectiveness of military operations, it also comes with potential risks and harms. Prompted by the urgent need for norms and governance frameworks for the military use of AI, the Netherlands and South Korea jointly convened the Responsible AI in the Military Domain (REAIM) Summit in February 2023, which concluded with the “REAIM 2023 Call to Action”.

Though artificial intelligence (AI) offers advantages in enhancing speed and effectiveness while minimising human errors, it comes with potential risks. Governing frameworks are necessary for responsible AI in the military domain. Image from Pixabay.

    The REAIM 2023 Call to Action

    The Call to Action was drafted and endorsed by over 50 states, including those boasting the world’s leading AI technologies, such as the United States, China, France, Germany and Japan, as well as Singapore.

    The preamble of the document acknowledges the potential of AI, yet also recognises that the rapid adoption of AI in the military domain brings about visible and invisible challenges. On the one hand, the failure to deploy AI in a timely manner, especially in this time of fierce strategic competition, may result in a military disadvantage. On the other hand, a premature adoption of AI without sufficiently well-informed research, testing, and assurance may lead to unintended and harmful outcomes.

    Furthermore, the document acknowledges that, despite continuous efforts to learn about AI, we do not and cannot fully comprehend and anticipate the implications and challenges resulting from AI applications within the military domain. This is partly due to the distributed nature of military decision-making and the diverse nature of the AI ecosystem, where different sectors are involved throughout the entire life cycle of AI – from design to development and deployment.

The document accordingly puts forward several calls to action. It stresses the significance of a holistic, inclusive, and comprehensive approach to addressing the possible impacts, opportunities, and challenges of the use of AI in the military. This effectively means there is a need for collaboration and information exchange among relevant stakeholders, including governments, the private sector, civil society, and academia.

    Along with stressing the need for an international effort to govern AI, the document invites all states to increase general comprehension of military AI through research, training courses and capacity building activities, as well as to develop national frameworks, strategies, and principles for responsible AI in the military domain.

More Needs to Be Done

The REAIM 2023 Call to Action demonstrates a significant effort to develop further collaboration and reach potential agreement on shared practices for military AI, an attempt worthy of recognition. However, the document has three key limitations: it lacks concrete plans; it does not consider the diversity of military AI capabilities; and its drafting process did not involve multiple stakeholders.

First, it is understandable that reaching agreement, even on non-binding terms, is difficult at any international forum, especially within a limited timeframe. However, merely inviting or welcoming states to cooperate or to develop national strategies on responsible AI in the military domain might not be strong enough, given the urgency of the issue. Furthermore, as preparations for the second REAIM Summit are already under way in South Korea, participants, particularly state representatives, should have taken advantage of this continuity to set more concrete and trackable call-to-action proposals.

    For example, the proposal could have included urging country representatives who participated in or supported the drafting of the Call to Action to come up with a roadmap or plan towards responsible AI in the military domain for their respective countries and/or regions. The second summit could then serve as a platform to follow up with each state’s progress on the development of national or joint strategies.

Second, the content of the Call to Action is broad and ambiguous. It is generally accepted that AI has been applied across a wide range of military fields, from transport and logistics systems to decision support systems, autonomous systems, and even killer robots. The capabilities of these applications vary, and depending on the means and purposes to which military AI is put, their outcomes differ in contentiousness and in their impacts on civilians. Concrete frameworks to govern killer robots, for example, are therefore likely to be, and should be, different from those created for military logistics support systems.

    Third, while the importance of a multi-stakeholder approach, characterised by the involvement of relevant stakeholders, was frequently underlined, both at the summit and specifically in the Call to Action, these actors did not take part in discussing or drafting the document. This constituted a missed opportunity to create the holistic, inclusive, and comprehensive governance framework stressed in the Call to Action.

The private sector, which oversees the development and testing of AI systems, would for instance have been able to offer valuable views on the feasibility of potential plans for making military AI more responsible. Likewise, international and non-governmental organisations could have contributed civil society and human rights perspectives that would otherwise have been missed.

    Paths Forward

Based on the above reflections on the Call to Action, this paper recommends that the next summit consider the following three steps, particularly in the process of discussing and drafting a new call to action.

    First, participants, particularly the government representatives who are at the forefront of shaping standards and norms on AI, should aim to create more concrete, achievable and trackable plans. These could include, for instance, finding a common understanding of good and responsible practices in the use of AI. Drawing up a clear roadmap for the development of potential AI governance in the military, both at the national and international levels, should also be on the agenda.

    Second, the discussions on responsible military AI should account for different ways and potential outcomes of military AI applications. These could include, for example, outlining possible scenarios of military AI applications and determining their level of contentiousness and autonomy, as well as their potential impacts on civilians and the broader society. Spelling out these contexts could help set the foundation for less ambiguous and, consequently, more effective military AI governance.

    Third, as REAIM stresses the importance of an inclusive multi-stakeholder approach to responsible AI, it could be beneficial to involve the different stakeholders present at the summit, including the private sector and NGOs, in drafting the next call to action.

    A global dialogue such as REAIM is essential for actors to meet and discuss the responsible use of AI in the military domain. This paper is not meant to criticise the REAIM effort but, instead, to help ensure that the next summit can reach its full potential and catch up with the frenetic pace of technological development.

    Wichuta TEERATANABODEE is a Senior Analyst in the Military Transformations Programme of the Institute of Defence and Strategic Studies (IDSS), S. Rajaratnam School of International Studies (RSIS).

    Categories: IDSS Papers / Non-Traditional Security / Technology and Future Issues / International Politics and Security / Global

