23 April 2024
Israeli Forces Display Power of AI, but it is a Double-edged Sword
SYNOPSIS
The Israel Defence Forces, alongside other advanced militaries developing AI, must grapple with the complex technological, legal, and ethical implications of using AI in warfare. The strategic imperative to integrate human oversight and accountability mechanisms into AI systems for responsible and lawful use also holds significant implications for Singapore’s defence and military innovation.
COMMENTARY
On 13 April, the Israel Defence Forces (IDF), in collaboration with the US, British, French and Jordanian air forces, successfully intercepted over 300 incoming drones and missiles launched from Iran. Credit for this should go to Israel’s multilayered air defence systems, which use artificial intelligence (AI) algorithms to track and intercept missiles in real time. These AI algorithms enabled split-second decision-making and target allocation for Israel’s air defences – the Arrow missile defence system, David’s Sling, and Iron Dome systems.
But AI can also be used when a nation goes on the offensive – and this has raised some serious concerns. For example, media reports by Israeli publications +972 Magazine and Local Call have shed light on Project Lavender, an AI-powered database allegedly used by the IDF to identify bombing targets in Gaza with minimal human oversight.
According to these reports, Project Lavender uses AI to process vast amounts of data to generate potential targets for air strikes, including individuals affiliated with Hamas and Islamic Jihad. The reports claim that Lavender’s targets played a significant role in Israel’s military operations in Gaza, particularly during the initial weeks of the conflict, which resulted in massive casualties among Palestinians.
The IDF has refuted the claims made in these reports, stating that Lavender is not a system designed to target civilians but rather a database used to cross-reference intelligence sources and gather information on military operatives of terrorist organisations.
But the controversy underscores the challenges of using AI in military operations, including concerns about transparency, accountability, and ethical decision-making in conflict situations.
The Shift Towards AI
While the specifics of Project Lavender remain shrouded in secrecy, sources suggest that the IDF’s strategies are increasingly being driven by AI.
Reputable Israeli military publications such as The Dado Centre Journal and Ma’arachot have outlined the IDF’s strategic vision. Central to this evolution are key documents like the IDF’s Momentum (Tnufa) Multiyear Plan, unveiled in February 2020, as well as the newly formulated Operational Concept for Victory and Data AI Strategy.
This involves leveraging AI to usher in autonomous and smart transformations within the IDF, fundamentally reshaping the character and conduct of warfare. From the IDF’s perspective, AI technology is not just a valuable intelligence tool but also a crucial force multiplier, especially in response to the evolving technological and strategic capabilities of its adversaries.
This shift in mindset repositions Israel from a paradigm of “asymmetric warfare” against perceived “inferior forces” to confronting “well-organised, well-trained, well-equipped rocket-based terror armies”. In practical terms, AI empowers the IDF to assimilate intelligence from diverse warfare domains and disseminate it efficiently across various combat units. This enables unmanned systems to be deployed for highly precise and potent military strikes.
The AI revolution has seen the IDF adapt its operational methodologies, leading to organisational restructuring as well. The IDF’s Momentum Plan, for instance, established the Digital Transformation Administration – a centralised “operational internet” platform to streamline communication and connectivity across the IDF. Simultaneously, the Intelligence Directorate serves as a pivotal hub for AI integration, spearheading initiatives such as those described in the Lavender reports, which are aimed at target identification and allocation.
At the operational level, entities like the Warfare Methods and Innovation Division (Shiloah) are spearheading the development of “multi-domain combat methods”, synergising various combat units and technologies including infantry, combat engineering, reconnaissance, air force, cyber capabilities, and more.
Experimental units such as the 888 “Refaim” multidomain unit are actively testing AI-driven combat technologies, including unmanned aerial vehicles, to augment the IDF’s combat prowess.
Dangers of Using AI in Military Operations
However, the proliferation of AI-powered capabilities also raises critical concerns. These include ethical dilemmas, potential biases and the intricate challenges of assessing the legality and accountability of AI-driven military operations.
Advanced militaries such as the IDF must grapple with the competing legal and ethical implications of using AI in warfare. For instance, AI algorithms learn from data that may contain biases or inaccuracies, potentially leading to unreliable decisions in military scenarios. Additionally, complex AI models raise questions of explainability, such as “Why did our system provide this recommendation or take that action?”
Achieving the right balance between human oversight and AI-enabled autonomy is key to avoiding unintended consequences and retaining human control over military decision-making. Equally important are dependable algorithms capable of adapting to environmental changes and learning from unforeseen events. Errors made by AI systems can result in severe ramifications on the battlefield.
Furthermore, on a strategic level, the advancement and deployment of sophisticated military AI could spark a race for lethal autonomous weapons technologies, heightening the potential for conflict escalation by favouring machine-driven decisions over human judgment. Even so, the weaponisation of algorithms is poised to advance rapidly, driven by ongoing breakthroughs in science and technology.
Meanwhile, the international community is in the nascent stages of developing viable governance mechanisms for military AI, such as the Responsible AI in the Military Domain (REAIM) process launched in 2023 – a platform for all stakeholders to discuss the key opportunities, challenges and risks associated with military applications of AI.
Israel may also feel that adhering to international AI norms could limit its capabilities. That is why, instead of following international AI governance initiatives, the Israeli military focuses on developing internal ethical guidelines to govern its AI systems.
AI in Singapore’s Defence
The long-term strategic implications of the AI revolution for future conflicts require a re-evaluation of defence policy planning and management, including the direction of weapons development and of research and development efforts.
The implications of AI advancements and challenges in warfare also extend beyond traditional powerhouses like Israel, encompassing countries with strategic interests and technological ambitions, such as Singapore. As a small nation with a strong focus on innovation and technology, Singapore is keenly aware of the transformative potential of AI in defence and security.
In December 2023, Singapore updated its National Artificial Intelligence Strategy (NAIS 2.0), signalling its ambition to become a global leader in the conscientious and innovative utilisation of AI. The strategy serves as a comprehensive roadmap for the entire government.
The core philosophy of AI governance embedded within NAIS 2.0 revolves around several key principles: ethical and responsible AI, transparency, collaboration and inclusivity, and human-centric AI. These principles shape the way AI is integrated into Singapore’s defence and military innovation efforts. Indeed, the Singapore Ministry of Defence’s preliminary guiding principles for AI governance in defence innovation and military use prioritise responsible, safe, reliable and robust AI development.
By harnessing AI systems, cloud technologies and data science, the Singapore Armed Forces (SAF) aims to automate tasks, enhance decision-making processes and optimise capabilities. This approach could make the SAF more effective in an increasingly volatile and uncertain regional security environment.
Going Beyond Technology
Singapore’s integration of AI in its defence strategy extends beyond technological advancement; it also underscores the importance of defence diplomacy as a core anchor of its national security approach, emphasising both deterrence and resilience.
Central to these efforts is a growing emphasis on regulating AI development and use, particularly regarding lethal autonomous weapons systems (LAWS), to establish responsible and ethical norms for AI in warfare and promote regional peace and stability. In this context, Singapore has actively pursued collaborative partnerships with various states, particularly major powers possessing advanced AI technologies such as the US, France and Australia.
In 2023, Singapore also endorsed both the REAIM initiative and the US-led “Political Declaration on Responsible Military Use of AI and Autonomy”, demonstrating its commitment to a multilateral and norms-based approach to AI governance in the military domain on the global stage.
However, within the rapidly evolving defence AI landscape, Singapore faces intricate challenges and dilemmas. One of the foremost concerns is striking a delicate balance between technological advancements and ethical considerations in AI-driven military competition in East Asia.
As countries in the region invest heavily in AI technologies for defence, the risk of an escalating arms race and the potential for AI-driven conflicts will rise. Singapore’s defence establishment must remain agile and proactive in leveraging AI while mitigating the risks associated with AI-driven military competition.
Collaboration with like-minded countries and adherence to international norms and standards will be crucial in shaping a responsible and sustainable future for AI in defence in the region.
About the Author
Michael Raska is Assistant Professor in the Military Transformations Programme at the S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore. This is a slightly edited version of an article first published in The Straits Times on 17 April 2024. It is republished here with permission.