    IP24054 | Military AI Governance: Moving Beyond Autonomous Weapon Systems
    Mei Ching Liu

    24 June 2024



    Governance of artificial intelligence in the military domain has a dominant focus on autonomous weapon systems, while AI-based decision support systems have received less attention. Given that the latter will likely be used more widely in AI-driven warfare, it is necessary to extend the focus beyond autonomous weapon systems in discussions of military AI governance.

    COMMENTARY

    Across several major global artificial intelligence summits in recent months, discussions regarding military AI governance have tended to focus on autonomous weapon systems (AWS). AWS, commonly known as “killer robots”, have received the most attention due to an effective campaign by Human Rights Watch (HRW) to “Stop Killer Robots”. The evocative image of “killer robots”, which was once a mobiliser for discussions on lethal autonomous weapon systems (LAWS) at the United Nations, is now distorting and narrowing the debate on military AI applications.

    Contrary to media portrayals, the use of AI in the military domain extends far beyond AWS. For example, Israel allegedly used an AI-based decision support system (ADSS) – the Lavender system – in the Gaza Strip. Observers of such military AI applications have typically failed to recognise the distinction between ADSS and AWS, thereby treating the Lavender system as an AWS. However, the Lavender system does not autonomously select and apply force to targets; it simply aids in identifying them.

    Unlike AWS, which are weapon systems that, once activated, can identify, select, and engage targets without further intervention from a human operator, ADSS do not replace human decision-makers; the decisions to select and engage targets are still made by humans. Nevertheless, military applications of ADSS in Gaza and Ukraine raise doubts regarding compliance with international humanitarian law (IHL) and the ability to minimise risks for civilians. Given such doubts, policymakers should take steps to broaden current debates on military AI to encompass ADSS, building awareness, understanding, and norms of behaviour regarding their military application, particularly in decisions on the use of force.

    Unlike autonomous weapon systems (AWS), AI-based decision support systems (ADSS) do not replace human decision-makers. ADSS have allegedly been used on the battlefields of Gaza and Ukraine, from identifying targets for military operations to recommending the most effective targeting options. Image from Pixabay.

    Campaign to Stop Killer Robots

    AWS were popularised by HRW in its 2012 report “Losing Humanity: The Case Against Killer Robots”. The term “killer robots” was used to draw media attention to serious ethical and legal concerns around AWS. In 2013, HRW launched the Stop Killer Robots campaign, which successfully mobilised the international community, and the first informal meeting of experts on LAWS was held at the United Nations in 2014. Since then, AWS have been associated, or even equated, with military AI, notwithstanding that AWS may or may not incorporate AI. The persistent reference to AWS on matters such as the military application of ADSS, however, is distorting the debate on the risks and challenges posed by the military use of ADSS in decisions on the use of force.

    ADSS and Military Decision-making on the Use of Force

    In the military context, ADSS can aid decision-makers by collecting, combining, and analysing relevant data sources, such as surveillance footage from drones and telephone metadata, to identify people or objects, assess patterns of behaviour, and make recommendations for military operations. Regarding military use of force, ADSS can be used to inform decision-makers about who or what a target is and when, where, and how to strike it.
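
    To make this data-fusion-and-recommendation role concrete, the following is a minimal Python sketch of such a pipeline. It is purely illustrative: the data sources, field names, weights, and threshold are all invented for this example and do not describe Lavender, the Gospel, or any other actual system.

```python
from dataclasses import dataclass

@dataclass
class Sighting:
    """A hypothetical fused intelligence record about one entity."""
    entity_id: str
    drone_confidence: float     # 0..1, from drone surveillance footage
    metadata_confidence: float  # 0..1, from telephone metadata
    pattern_score: float        # 0..1, from behaviour-pattern analysis

def relevance(s: Sighting) -> float:
    """Fuse the sources into a single score; the weights are invented."""
    return 0.4 * s.drone_confidence + 0.3 * s.metadata_confidence + 0.3 * s.pattern_score

def recommend(sightings: list[Sighting], threshold: float = 0.7) -> list[tuple[str, float]]:
    """Rank entities and return those above the threshold.

    The output is only a recommendation: in an ADSS, a human
    decision-maker reviews it before anything further happens.
    """
    ranked = sorted(((s.entity_id, relevance(s)) for s in sightings),
                    key=lambda pair: pair[1], reverse=True)
    return [pair for pair in ranked if pair[1] >= threshold]

if __name__ == "__main__":
    candidates = [
        Sighting("A-17", drone_confidence=0.9, metadata_confidence=0.8, pattern_score=0.7),
        Sighting("B-02", drone_confidence=0.2, metadata_confidence=0.3, pattern_score=0.4),
    ]
    print(recommend(candidates))  # only "A-17" clears the threshold
```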

    For instance, the Lavender system allegedly used AI to support the Israel Defense Forces (IDF) in its target selection process. Information on known Hamas and Palestinian Islamic Jihad (PIJ) operatives was used to train the system to identify characteristics associated with such operatives. The system then combined intelligence inputs, such as intercepted chat messages and social media data, to assess the probability of an individual being a member of Hamas or PIJ. The IDF also allegedly used another ADSS – the Gospel – to identify buildings and structures used by militants.

    Apart from target selection, ADSS can also assist the military in the process of target engagement. In the Ukraine/Russia conflict, ADSS were used to analyse large volumes of intelligence information, as well as radar and thermal images. These systems then identified potential enemy positions and recommended the most effective targeting options.

    ADSS vs AWS – Conceptual and Legal Differences

    ADSS represent a more varied category of military AI application than AWS, although some of the technologies used in both systems may be similar. For example, ADSS with facial recognition and tracking software could form part of AWS; but if a weapon system can select and engage a target without human intervention, it would be categorised as an AWS.

    The main concern regarding AWS is that the system itself triggers the entire target selection and engagement process. To put it simply, humans do not choose (or know) the specific target, the precise time or place of attack, or even the means and methods of attack. If an illegal killing is conducted by an AWS, the question arises of who is responsible for such conduct. As reflected in both the Rome Statute and the 2019 Guiding Principles reached in the UN LAWS discussions, individual criminal responsibility applies only to humans, not machines. However, the challenge lies in identifying the responsible individual(s), who could include the manufacturer, the programmer, the military commander, or even the AWS operator. The use of AWS thus creates what is termed an “accountability gap”, where conduct potentially amounting to an IHL violation cannot be satisfactorily attributed to any individual, and no one is held accountable.

    On the other hand, ADSS are intended to support human decision-making; they do not replace human decision-makers. Humans are theoretically “in the loop” in making the decision to select and apply force to targets. Consequently, as far as ADSS are concerned, the accountability gap problem, a thorny issue in UN LAWS discussions, may not arise to the same extent as with AWS, as ADSS are designed to retain human decision-making.
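
    Conceptually, the legal distinction reduces to a single control-flow difference: whether a human approval step sits between the system’s recommendation and the act of engagement. The sketch below, under the same caveat that all names and logic are hypothetical, shows that gate.

```python
from typing import Callable

def engage(target_id: str) -> None:
    """Placeholder for the engagement action itself."""
    print(f"engaging {target_id}")

def aws_flow(recommendations: list[str]) -> None:
    """AWS-style flow: once activated, targets are engaged without
    further human intervention."""
    for target_id in recommendations:
        engage(target_id)

def adss_flow(recommendations: list[str],
              human_approves: Callable[[str], bool]) -> None:
    """ADSS-style flow: a human decision-maker gates every engagement."""
    for target_id in recommendations:
        if human_approves(target_id):  # the legally significant step
            engage(target_id)

# An approver that always returns True is a "rubber stamp": it collapses
# the ADSS flow into the AWS flow in practice, which is the risk the
# next paragraph describes.
```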

    However, ADSS raise the question of what quality and level of human–machine interaction is required to ensure that their use complies with IHL obligations, notably the principles of distinction, proportionality, and precaution. The Lavender system has been criticised for causing a high number of civilian casualties because its human operators allegedly served only as a “rubber stamp”. This instance highlights how decision-makers could end up deferring to conclusions reached by a machine, effectively making the human in the loop redundant.

    Others argue that military applications of ADSS for the use of force can facilitate compliance with IHL. For instance, ADSS can aid human decision-makers in determining the most appropriate means of attack by considering target and environment data as well as weighing the potential collateral damage.
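
    As an illustration of this argument, a decision aid could rank the available means of attack by a collateral-damage estimate, leaving the final choice to a human. The sketch below is again hypothetical: the fields, threshold, and selection rule are invented for this example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AttackOption:
    name: str
    p_success: float               # hypothetical probability of achieving the objective
    expected_civilian_harm: float  # hypothetical collateral-damage estimate

def least_harmful(options: list[AttackOption],
                  min_success: float = 0.8) -> Optional[AttackOption]:
    """Among options likely enough to succeed, suggest the one with the
    lowest expected collateral damage; a human still makes the decision."""
    viable = [o for o in options if o.p_success >= min_success]
    return min(viable, key=lambda o: o.expected_civilian_harm, default=None)
```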

    The Way Forward for Singapore

    Singapore is at the forefront of efforts related to military AI governance. It has actively participated in various military AI governance discussions, including the UN LAWS discussions and the 2023 summit on Responsible Artificial Intelligence in the Military Domain (REAIM). In February 2024, Singapore hosted the inaugural REAIM Regional Consultations (Asia) in partnership with the Netherlands and the Republic of Korea. In 2023, Singapore not only endorsed the REAIM Call to Action and the US-led “Political Declaration on Responsible Military Use of AI and Autonomy” but also acceded to the Convention on Certain Conventional Weapons, under which the UN LAWS discussions are convened.

    First, Singapore can use its unique role as a “trusted and substantive interlocutor” at various AI governance platforms, such as REAIM, to broaden the discussions to include ADSS. Unlike AWS, for which various multilateral platforms exist to facilitate discussions and build consensus, ADSS have not received the attention they need. With Singapore’s influence in these AI governance platforms, more attention and awareness could be raised among relevant stakeholders.

    Second, policymakers should develop the necessary understanding of ADSS and its associated risks and challenges under IHL. They could do so through IHL training programmes and multi-stakeholder discussions involving technology companies and academics to help them better understand the measures that may be required in the design and use of ADSS to ensure compliance with IHL. In undertaking such capacity-building, Singapore could amplify its voice and leverage its influence in international fora to lead efforts in building awareness, understanding, and norms of behaviour regarding the military application of ADSS, particularly in decisions on the use of force.

     

    Mei Ching LIU is Associate Research Fellow with the Military Transformations Programme at the S. Rajaratnam School of International Studies.

    Categories: IDSS Papers / Conflict and Stability / Cybersecurity, Biosecurity and Nuclear Safety / International Politics and Security / Global

