07 August 2024
- IP24063 | The Case for AI-Based Decision Support Systems Oversight
The alleged use of AI-based decision support systems for targeting in combat operations lays bare the challenge of assessing how these systems operate, as well as the absence of regulatory oversight and the ambiguity of their status under international humanitarian law. Stakeholders must therefore work together to demand greater transparency regarding the use of AI-based decision support systems and to clarify and broaden the legal framework to address the complexities posed by these systems.
COMMENTARY
The ongoing Israel-Hamas conflict has shone a spotlight on the military use of AI-based decision support systems (AI DSS). The Israel Defence Forces (IDF) is alleged to have used at least eight AI DSS to support its military operation in Gaza: Gospel, Lavender, Where’s Daddy, Fire Factory, Fire Weaver, Alchemist, Depth of Wisdom, and Edge 360.
Recent media portrayals of how the Gospel and Lavender systems were allegedly used in the IDF’s targeting process have sparked fierce debate on the need to regulate the military use of AI. Some observers have strenuously defended the use of AI DSS, while others have criticised it, arguing that these systems increase the risk of civilians being mistakenly targeted and strip individuals of their intrinsic dignity. Analysis of Israel’s AI DSS has predominantly focused on the Gospel and Lavender systems, while other AI DSS that were also used for targeting, such as Where’s Daddy, Fire Weaver, and Fire Factory, remain insufficiently discussed. Consequently, there is a lack of understanding of how AI DSS support the targeting process, contributing to the confusion surrounding their legality.
Given that there is currently no legal prohibition against the use of AI DSS, and that their status remains ambiguous under existing international humanitarian law (IHL), there is a need for greater transparency regarding their use and for clarifying and broadening the scope of IHL to address the complexities posed by these systems.
Mapping AI DSS within the Targeting Cycle
A targeting cycle, also known as a “kill chain”, is the process employed by militaries to find, fix, track, target, and engage people or objects, and then assess the results of the strike. The time taken to complete this six-step process can range from minutes to days. Where time-sensitive targets are involved, the decision-making in each step may need to be completed rapidly to enable a timely attack against identified targets.
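For readers less familiar with the cycle, the six steps can be represented as a simple ordered sequence. The sketch below is purely illustrative, following the find-fix-track-target-engage-assess model described above rather than the doctrine or software of any particular military.

```python
from enum import Enum
from typing import Optional


class TargetingStep(Enum):
    """The six steps of the targeting cycle ("kill chain") described above."""
    FIND = 1     # detect a potential target
    FIX = 2      # confirm its identity and location
    TRACK = 3    # follow its movements over time
    TARGET = 4   # assess legal restrictions and approve (or reject) the target
    ENGAGE = 5   # execute the strike
    ASSESS = 6   # evaluate the results of the strike


def next_step(current: TargetingStep) -> Optional[TargetingStep]:
    """Return the step that follows, or None once the cycle is complete."""
    return TargetingStep(current.value + 1) if current.value < 6 else None
```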
Fire Weaver, developed by RAFAEL in collaboration with Israel’s Defence Ministry, aims to expedite the target identification and engagement process. It integrates intelligence from sensors to swiftly classify and distribute information about targets to deployed weapons. It also autonomously selects the most suitable weapon, based on criteria like location and effectiveness, enabling rapid target engagement in “seconds”.
While an IDF statement has implied that Fire Weaver does not operate with full autonomy and that humans remain in the loop, the nature of Fire Weaver’s design suggests otherwise, as it can engage targets autonomously. Furthermore, in a targeting cycle, a firewall is supposed to separate the humans involved in the “target” step from those involved in the “engage” step. During targeting, personnel other than the weapons operators assess targeting restrictions, such as compliance with IHL and collateral damage estimates. Approval based on this assessment precedes engagement by the weapons operators, ensuring that the latter execute strikes without being involved in determining which targets are to be attacked. The nature of Fire Weaver’s design, however, suggests that this firewall has collapsed, merging the “target” and “engage” steps of the targeting cycle.
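To make the firewall concept concrete, the sketch below, a hypothetical illustration rather than a description of Fire Weaver or any fielded system, shows an “engage” step that can proceed only once personnel other than the weapons operator have produced an approval at the “target” step. A system that identifies and engages targets autonomously effectively removes this check.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class StrikeApproval:
    """Record produced at the 'target' step by personnel other than the weapons operator."""
    target_id: str
    assessed_by: str              # officer who reviewed IHL restrictions and collateral damage
    collateral_estimate_ok: bool


def may_engage(target_id: str, operator: str, approval: Optional[StrikeApproval]) -> bool:
    """Illustrative 'engage' check: True only if a separate assessor has approved this target."""
    if approval is None or approval.target_id != target_id:
        return False              # no approval on record for this target
    if approval.assessed_by == operator:
        return False              # firewall: assessor and operator must be different people
    return approval.collateral_estimate_ok
```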
The Gospel and Lavender systems, on the other hand, appear to have been used to find and/or fix targets. Gospel was first deployed by the IDF during the 2021 Operation Guardian of the Walls, an 11-day Israeli operation in the Gaza Strip following an outbreak of violence. It combines large amounts of data across different data sets and identifies buildings and structures that could qualify as military objectives. Lavender is a database characterised by the IDF as a tool that organises and cross-references intelligence sources to identify human targets. It is considered a “smart database” for its ability to connect leads from different sources. Little is known about how the output from the Gospel and Lavender systems is treated as part of the target engagement process.
Where’s Daddy has been described as a “tracking system”. It was allegedly used to monitor the movements of identified targets to their homes, where strikes were subsequently carried out. Unlike Fire Weaver, this system, along with the Gospel and Lavender systems, appears to have no connection with any weapon or weapon system deployed on the battlefield.
Lastly, Fire Factory has allegedly been used to organise airstrikes, i.e., to engage targets. It analyses data on pre-authorised targets to calculate munition loads and proposes airstrike schedules and logistic arrangements.
The Use of AI DSS and IHL
The recent firestorm of criticism against the IDF’s use of AI DSS in targeting has created the impression that such use is illegal, when in reality it is not. AI DSS were ultimately used to support human decision-making, but it is not clear whether they were misused in contravention of IHL targeting rules. Additionally, there is currently no prohibition against AI DSS under international law, unlike specifically regulated weapons such as nuclear weapons. Thus, their use on the battlefield is not inherently unlawful.
AI DSS also do not fall squarely under Article 36 of the First Additional Protocol to the Geneva Conventions (API) concerning legal review of new weapons, means, and methods of warfare. Under the API, states are restricted in the weapons, means, and methods of warfare they can employ. For example, states are prohibited from using indiscriminate weapons like cluster munitions. States are also required to conduct legal review under Article 36 of the API, commonly known as “weapon review”, to assess whether new weapons, means or methods of warfare would be prohibited by international law.
Legal reviews under Article 36 of the API are particularly important for the ongoing assessment of emerging technologies and tactics of warfare. They help to prevent the costly consequences of employing new weaponry or tactics that are likely to be prohibited by international law. However, the API does not clarify what constitutes a new weapon, means, or method of warfare. This omission has led to disagreement and confusion at the UN discussions on lethal autonomous weapon systems (LAWS). States participating in those discussions eventually agreed in 2019 that all weapon systems, including LAWS, fall within the scope of Article 36 of the API.
The lack of a clear definition of “weapon, means or method of warfare” raises the question of whether AI DSS that are neither weapon systems nor part of a weapon system, such as Gospel, Lavender, Where’s Daddy, and Fire Factory, ought to be subject to Article 36 reviews. Some have argued that AI DSS used for offensive actions such as targeting ought to be reviewed under Article 36 of the API. However, such AI DSS could simply be reprogrammed and used for other purposes, for example, identifying and searching for missing persons or organising logistics for humanitarian aid, both of which are activities required under IHL. If AI DSS can be used for both offensive and non-offensive actions, the challenge lies in distinguishing between these uses, which raises the concern of whether meaningful legal compliance is possible at all.
As for Fire Weaver, it is more straightforward: it could be classified as being part of a weapon system and therefore fall under the scope of Article 36 of the API. Alternatively, it could be categorised as a LAWS owing to its autonomous capability in identifying and engaging targets without human intervention. This characteristic could lead to its use being outlawed if an international agreement governing LAWS is reached. In spite of years of discussion around LAWS governance, the fate of such an agreement remains uncertain.
Furthermore, as Israel is not a party to the API, it is not bound by Article 36. Israel has voluntarily conducted Article 36 reviews, but it is unclear whether any of its AI DSS were subject to such reviews prior to their deployment on the battlefield.
The Way Forward
Israel’s use of AI DSS for targeting not only exposes the challenge of assessing how AI DSS support targeting processes, but also highlights the ambiguity around AI DSS’ current and future regulation under IHL. Concerned stakeholders should work together to ensure restrained use of AI DSS in targeting.
One starting point could be to mandate greater transparency from technology companies and belligerents that are developing, manufacturing, and using AI DSS. States can also utilise the upcoming summit on Responsible Artificial Intelligence in the Military Domain (REAIM) and other fora related to military AI governance to help build awareness around the challenges associated with AI DSS.
Second, stakeholders involved in developing international law frameworks should work towards clarifying and broadening the scope of Article 36 of the API to include AI DSS that could be used for targeting. In this way, states would be obliged to review these systems before their deployment, ensuring that wars are fought with legal restraints. There is an important role for academics and technical experts here to study the complexities and challenges posed by dual-use AI DSS and make appropriate recommendations to relevant governance platforms.
Mei Ching LIU is Associate Research Fellow with the Military Transformations Programme at the S. Rajaratnam School of International Studies.