    IP24063 | The Case for AI-Based Decision Support Systems Oversight
    Mei Ching Liu

    07 August 2024

    The alleged use of AI-based decision support systems for targeting in combat operations lays bare the challenge of assessing how these systems operate, as well as the absence of regulatory oversight and ambiguity under international humanitarian law. Stakeholders must therefore work together to demand greater transparency regarding the use of AI-based decision support systems and to clarify and broaden the legal framework to address the complexities posed by these systems.

    COMMENTARY

    The ongoing Israel-Hamas conflict has shone a spotlight on the military use of AI-based decision support systems (AI DSS). The Israel Defence Forces (IDF) is alleged to have used at least eight AI DSS to support its military operation in Gaza: Gospel, Lavender, Where’s Daddy, Fire Factory, Fire Weaver, Alchemist, Depth of Wisdom, and Edge 360.

Recent media portrayals of how the Gospel and Lavender systems were allegedly used in the IDF’s targeting process have sparked fierce debate on the need to regulate military use of AI. Some observers have strenuously defended the use of AI DSS, while others have criticised it, arguing that these systems increase the risk of civilians being mistakenly targeted and strip individuals of their intrinsic dignity. Analysis of Israel’s AI DSS has predominantly focused on the Gospel and Lavender systems, while other AI DSS that were also used for targeting, such as Where’s Daddy, Fire Weaver, and Fire Factory, remain insufficiently discussed. Consequently, there is a lack of understanding of how AI DSS support the targeting process, contributing to the confusion surrounding their legality.

There is currently no legal prohibition against the use of AI DSS, whose status remains shrouded in ambiguity under existing international humanitarian law (IHL). There is therefore a need for greater transparency regarding the use of AI DSS, and for clarifying and broadening the scope of IHL to address the complexities posed by these systems.

    Mapping AI DSS within the Targeting Cycle

A targeting cycle, also known as a “kill chain”, is the process a military employs to find, fix, track, target, and engage people or objects, and then assess the strike results. The time taken to complete this six-step process can range from minutes to days. Where time-sensitive targets are involved, the decision-making in each step may need to be completed rapidly to enable a timely attack on identified targets.
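To make the structure concrete, here is a minimal sketch of the generic six-step cycle as an ordered sequence of decisions. It is purely illustrative: the step names follow the find-fix-track-target-engage-assess description above, and the `decide` callback is a hypothetical stand-in for wherever a human, an AI DSS recommendation, or both sit within a step.

```python
from enum import Enum, auto

class KillChainStep(Enum):
    """The six steps of a generic targeting cycle ("kill chain")."""
    FIND = auto()    # detect a potential target
    FIX = auto()     # establish its location
    TRACK = auto()   # maintain surveillance of it
    TARGET = auto()  # assess restrictions and decide whether/how to attack
    ENGAGE = auto()  # execute the strike
    ASSESS = auto()  # evaluate the strike results

def run_cycle(decide) -> "KillChainStep | None":
    """Walk the cycle in order, deferring each step's decision to
    `decide`, a stand-in for a human, an AI DSS recommendation, or
    both. Returns the step at which the cycle was aborted, or None
    if it ran to completion."""
    for step in KillChainStep:  # Enum iterates in definition order
        if not decide(step):
            return step
    return None
```

An AI DSS accelerates the decision-making inside one or more of these steps; the question, as the rest of this paper discusses, is which steps, and with how much human involvement.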

    Fire Weaver, developed by RAFAEL in collaboration with Israel’s Defence Ministry, aims to expedite the target identification and engagement process. It integrates intelligence from sensors to swiftly classify and distribute information about targets to deployed weapons. It also autonomously selects the most suitable weapon, based on criteria like location and effectiveness, enabling rapid target engagement in “seconds”.

While an IDF statement has implied that Fire Weaver does not operate with full autonomy and that humans are in the loop, the nature of Fire Weaver’s design suggests otherwise, as it can engage targets autonomously. Furthermore, in a targeting cycle, a firewall is supposed to separate the humans involved in the “target” step from those involved in the “engage” step. During targeting, personnel other than the weapons operators assess targeting restrictions, such as IHL compliance and collateral damage estimates. Approval based on this assessment precedes engagement by the weapons operators, ensuring that the latter execute strikes without being involved in determining the targets for elimination. Fire Weaver’s design, however, suggests that this firewall has collapsed, merging the “target” and “engage” steps of the targeting cycle.
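The firewall is, in effect, a separation-of-duties rule, and one way to see its weight is to express it as an enforceable check. The sketch below is a hypothetical illustration of such a rule; none of the names or fields describe any fielded system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TargetApproval:
    """Outcome of the "target" step: an assessment of targeting
    restrictions signed off by someone other than the operator."""
    target_id: str
    assessor_id: str
    ihl_review_passed: bool
    collateral_estimate_accepted: bool

def engage(target_id: str, operator_id: str, approval: TargetApproval) -> None:
    """The "engage" step. The firewall is the set of checks below."""
    if approval.target_id != target_id:
        raise PermissionError("approval does not cover this target")
    if approval.assessor_id == operator_id:
        # A collapsed firewall: whoever selected the target is also
        # executing the strike.
        raise PermissionError("assessor and operator must be distinct")
    if not (approval.ihl_review_passed and approval.collateral_estimate_accepted):
        raise PermissionError("target assessment incomplete")
    print(f"operator {operator_id} cleared to engage {target_id}")
```

A system that autonomously selects and engages targets plays assessor and operator at once, which is precisely the condition such a firewall is meant to refuse.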

    The Gospel and Lavender systems, on the other hand, appear to have been used to find and/or fix targets. Gospel was first deployed by the IDF during the 2021 Operation Guardian of the Walls, an 11-day Israeli operation in the Gaza Strip following an outbreak of violence. It combines large amounts of data across different data sets and identifies buildings and structures that could qualify as military objectives. Lavender is a database characterised by the IDF as a tool that organises and cross-references intelligence sources to identify human targets. It is considered a “smart database” for its ability to connect leads from different sources. Little is known about how the output from the Gospel and Lavender systems is treated as part of the target engagement process.
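What “connecting leads from different sources” could mean in practice can be sketched abstractly as a cross-referencing merge. The toy function below assumes hypothetical inputs (a mapping from source names to lists of (entity id, lead) records) and makes no claim about Lavender’s actual data model, which is not public.

```python
from collections import defaultdict

def cross_reference(sources):
    """Merge the leads that different intelligence sources attach to
    the same entity identifier, and rank entities by how many
    independent sources corroborate them."""
    merged = defaultdict(lambda: {"leads": [], "sources": set()})
    for source_name, records in sources.items():
        for entity_id, lead in records:
            merged[entity_id]["leads"].append(lead)
            merged[entity_id]["sources"].add(source_name)
    return sorted(merged.items(),
                  key=lambda item: len(item[1]["sources"]),
                  reverse=True)
```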

[Image: The Gospel system is one of several AI-based decision support systems (AI DSS) used by the Israel Defence Forces (IDF) to identify buildings and structures that could qualify as military objectives; it was first deployed during the 2021 Operation Guardian of the Walls. Image from Wikimedia Commons.]

Where’s Daddy has been described as a “tracking system”. It was allegedly used to monitor the movements of identified targets to their homes, where strikes were subsequently carried out. Unlike Fire Weaver, this system, along with the Gospel and Lavender systems, appears to have no connection with any weapon or weapon system deployed on the battlefield.

Lastly, Fire Factory has allegedly been used to organise airstrikes, i.e., to support the “engage” step. It analyses data on pre-authorised targets to calculate munition loads, and proposes airstrike schedules and logistic arrangements.
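Calculating munition loads and proposing schedules is, at its core, a resource-allocation problem. The deliberately naive sketch below assumes hypothetical inputs (target priorities and per-target munition requirements); Fire Factory’s actual inputs and logic are not public. The point it illustrates is the decision-support distinction: the output is a proposal for human planners, not an engagement.

```python
from dataclasses import dataclass

@dataclass
class AuthorisedTarget:
    target_id: str
    priority: int            # 1 = highest priority
    munitions_required: int  # hypothetical per-target load

def propose_strike_schedule(targets, munitions_available):
    """Greedy allocation: serve the highest-priority targets first
    within the available stock. The return value is a *proposal*;
    acting on it remains a human decision."""
    schedule, remaining = [], munitions_available
    for t in sorted(targets, key=lambda t: t.priority):
        if t.munitions_required <= remaining:
            schedule.append(t.target_id)
            remaining -= t.munitions_required
    return schedule, remaining
```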

    The Use of AI DSS and IHL

The recent firestorm of criticism against the IDF’s use of AI DSS in targeting has created the impression that such use is illegal, when in reality it is not. AI DSS were ultimately used to support human decision-making, and it is not clear whether they were misused in contravention of IHL targeting rules. Additionally, there is currently no prohibition against AI DSS under international law, unlike specifically regulated weapons such as nuclear weapons. Thus, their use on the battlefield is not inherently unlawful.

AI DSS also do not fall squarely under Article 36 of the First Additional Protocol to the Geneva Conventions (API), which concerns the legal review of new weapons, means, and methods of warfare. Under the API, states are restricted in the weapons, means, and methods of warfare they can employ; for example, they are prohibited from using indiscriminate weapons like cluster munitions. States are also required to conduct a legal review under Article 36 of the API, commonly known as a “weapon review”, to assess whether new weapons, means, or methods of warfare would be prohibited by international law.

    Legal reviews under Article 36 of the API are particularly important for the ongoing assessment of emerging technologies and tactics of warfare. They help to prevent the costly consequences of employing any new weaponry or tactics that are likely to be prohibited by international law. However, the API does not clarify what constitutes a new weapon, means, or method of warfare. This omission led to disagreements and confusion at the UN lethal autonomous weapon systems (LAWS) discussions. The United Nations eventually agreed in 2019 that all weapon systems, including LAWS, fall under the scope of Article 36 of the API.

The lack of a clear definition of “weapon, means or method of warfare” raises the question of whether AI DSS that are neither weapon systems nor part of a weapon system, such as Gospel, Lavender, Where’s Daddy, and Fire Factory, ought to be subject to Article 36 reviews. Some have argued that AI DSS used for offensive actions such as targeting ought to be so reviewed. However, such AI DSS could simply be reprogrammed and used for other purposes, for example, identifying and searching for missing persons or organising logistics for humanitarian aid, both of which are IHL legal obligations. If AI DSS can be used for both offensive and non-offensive actions, the challenge lies in differentiating between these uses, which raises the question of whether legal compliance is possible at all.

As for Fire Weaver, the position is more straightforward: it could be classified as part of a weapon system and therefore fall under the scope of Article 36 of the API. Alternatively, it could be categorised as a LAWS owing to its capability to identify and engage targets autonomously, without human intervention. This characteristic could lead to its use being outlawed if an international agreement governing LAWS is reached, though despite years of discussion around LAWS governance, the fate of such an agreement remains uncertain.

    Furthermore, as Israel is not a party to the API, it is not bound by Article 36. Israel has voluntarily conducted Article 36 reviews, but it is unclear whether any of its AI DSS were subject to such reviews prior to their deployment in the battlefield.

    The Way Forward

Israel’s use of AI DSS for targeting not only exposes the challenge of assessing how AI DSS support targeting processes, but also highlights the ambiguity around the current and future regulation of AI DSS under IHL. Concerned stakeholders should work together to ensure the restrained use of AI DSS in targeting.

A first step could be to mandate greater transparency from the technology companies and belligerents that develop, manufacture, and use AI DSS. States can also use the upcoming summit on Responsible Artificial Intelligence in the Military Domain (REAIM) and other fora related to military AI governance to build awareness of the challenges associated with AI DSS.

    Second, stakeholders involved in developing international law frameworks should work towards clarifying and broadening the scope of Article 36 of the API to include AI DSS that could be used for targeting. In this way, states would be obliged to review these systems before their deployment, ensuring that wars are fought with legal restraints. There is an important role for academics and technical experts here to study the complexities and challenges posed by dual-use AI DSS and make appropriate recommendations to relevant governance platforms.


    Mei Ching LIU is Associate Research Fellow with the Military Transformations Programme at the S. Rajaratnam School of International Studies.

    Categories: IDSS Papers / General / Conflict and Stability / Cybersecurity, Biosecurity and Nuclear Safety / International Politics and Security / Global
