    CO24054 | Israeli Forces Display Power of AI, but it is a Double-edged Sword
    Michael Raska

    23 April 2024


    SYNOPSIS

    The Israel Defence Forces, alongside other advanced militaries developing AI, must grapple with the complex technological, legal, and ethical implications of using AI in warfare. The strategic imperative to integrate human oversight and accountability mechanisms into AI systems for responsible and lawful use also holds significant implications for Singapore’s defence and military innovation.


    COMMENTARY

On 13 April, the Israel Defence Forces (IDF), in collaboration with the US, British, French and Jordanian air forces, successfully intercepted over 300 incoming drones and missiles launched from Iran. Credit for this should go to Israel’s multilayered air defence systems, which use artificial intelligence (AI) algorithms to track and intercept missiles in real time. These AI algorithms enabled split-second decision-making and target allocation for Israel’s air defences – the Arrow, David’s Sling, and Iron Dome systems.

    But AI can also be used when a nation goes on the offensive – and this has raised some serious concerns. For example, media reports by Israeli publications +972 Magazine and Local Call have shed light on Project Lavender, an AI-powered database allegedly used by the IDF to identify bombing targets in Gaza with minimal human oversight.

    According to the report, Project Lavender uses AI to process vast amounts of data to generate potential targets for air strikes, including individuals affiliated with Hamas and Islamic Jihad. The report claims that Lavender’s targets played a significant role in Israel’s military operations in Gaza, particularly during the initial weeks of the conflict that resulted in massive casualties among Palestinians.

The IDF has disputed the claims made in the media report, stating that Lavender is not a system designed to target civilians but rather a database used to cross-reference intelligence sources and gather information on military operatives of terrorist organisations.

    But the controversy underscores the challenges of using AI in military operations, including concerns about transparency, accountability, and ethical decision-making in conflict situations.

    The Shift Towards AI

    While the specifics of Project Lavender remain shrouded in secrecy, sources suggest that the IDF’s strategies are increasingly being driven by AI.

    Reputable Israeli military publications such as The Dado Centre Journal and Ma’arachot have cited key documents to outline the IDF’s strategic vision. Central to this evolution are key documents like the IDF’s Momentum (Tnufa) Multiyear Plan, unveiled in February 2020, as well as the newly formulated Operational Concept for Victory and Data AI Strategy.

    This involves leveraging AI to usher in autonomous and smart transformations within the IDF, fundamentally reshaping the character and conduct of warfare. From the IDF’s perspective, AI technology is not just a valuable intelligence tool but also a crucial force multiplier, especially in response to the evolving technological and strategic capabilities of its adversaries.

    This shift in mindset repositions Israel from a paradigm of “asymmetric warfare” against perceived “inferior forces” to confronting “well-organised, well-trained, well-equipped rocket-based terror armies”. In practical terms, AI empowers the IDF to assimilate intelligence from diverse warfare domains and disseminate it efficiently across various combat units. This enables unmanned systems to be deployed for highly precise and potent military strikes.

    The AI revolution has seen the IDF tweak its operational methodologies, leading to organisational restructuring too. The IDF’s Momentum Plan, for instance, established the Digital Transformation Administration – a centralised “operational internet” platform to streamline communication and connectivity across the IDF. Simultaneously, the Intelligence Directorate serves as a pivotal hub for AI integration, spearheading initiatives described in Project Lavender – aimed at target identification and allocation.

    At the operational level, entities like the Warfare Methods and Innovation Division (Shiloah) are spearheading the development of “multi-domain combat methods”, synergising various combat units and technologies including infantry, combat engineering, reconnaissance, air force, cyber capabilities, and more.

    Experimental units such as the 888 “Refaim” multidomain unit are actively testing AI-driven combat technologies, including unmanned aerial vehicles, to augment the IDF’s combat prowess.

    Dangers of Using AI in Military Operations

    However, the proliferation of AI-powered capabilities also raises critical concerns. These include ethical dilemmas, potential biases and the intricate challenges of assessing the legality and accountability of AI-driven military operations.

Advanced militaries such as the IDF must grapple with the competing legal and ethical implications of using AI in warfare. For instance, AI algorithms learn from data, which may contain biases or inaccuracies, potentially leading to unreliable decisions in military scenarios. Additionally, intricate AI models can raise questions of explainability such as “Why did our system provide this recommendation or take that action?”

    Achieving the right balance between human oversight and AI-enabled autonomy is key to avoiding unintended consequences and retaining human control over military decision-making. Equally important are dependable algorithms capable of adapting to environmental changes and learning from unforeseen events. Errors made by AI systems can result in severe ramifications on the battlefield.

Furthermore, on a strategic level, the advancement and deployment of sophisticated military AI could spark a race for lethal autonomous weapons technologies, heightening the potential for conflict escalation by favouring machine-driven decisions over human judgment. Nevertheless, the weaponisation of algorithmic warfare is poised to advance rapidly due to ongoing breakthroughs in science and technology.

Meanwhile, the international community is in the nascent stages of developing viable governance mechanisms for military AI, such as the Responsible AI in the Military Domain (REAIM) process, launched in 2023 – a platform for all stakeholders to discuss the key opportunities, challenges and risks associated with military applications of AI.

    Israel may also feel that adhering to international AI norms could limit its capabilities. That is why, instead of following international AI governance initiatives, the Israeli military focuses on developing internal ethical guidelines to safeguard AI systems.

    AI in Singapore’s Defence

    The long-term strategic implications of the AI revolution in future conflicts require a re-evaluation of defence policy planning and management, including the direction of weapons development, and research and development efforts.

    The implications of AI advancements and challenges in warfare also extend beyond traditional powerhouses like Israel, encompassing countries with strategic interests and technological ambitions, such as Singapore. As a small nation with a strong focus on innovation and technology, Singapore is keenly aware of the transformative potential of AI in defence and security.

    In December 2023, Singapore updated its National Artificial Intelligence Strategy (NAIS 2.0), signalling its ambition to become a global leader in the conscientious and innovative utilisation of AI. The strategy serves as a comprehensive roadmap for the entire government.

    The core philosophy of AI governance embedded within NAIS 2.0 revolves around several key principles: ethical and responsible AI, transparency, collaboration and inclusivity, and human-centric AI. These principles shape the way AI is integrated into Singapore’s defence and military innovation efforts. Indeed, the Singapore Ministry of Defence’s preliminary guiding principles for AI governance in defence innovation and military use prioritise responsible, safe, reliable and robust AI development.

By harnessing AI systems, cloud technologies and data science, the Singapore Armed Forces (SAF) aims to automate tasks, enhance decision-making processes and optimise capabilities. This approach could see the SAF become more effective in a more volatile and uncertain regional security environment.

    Going Beyond Technology

    Singapore’s integration of AI in its defence strategy extends beyond technological advancement; it also underscores the importance of defence diplomacy as a core anchor of its national security approach, emphasising both deterrence and resilience.

Central to these efforts is a growing emphasis on regulating AI development and use, particularly regarding lethal autonomous weapons systems (LAWS), to establish responsible and ethical norms for AI in warfare and to promote regional peace and stability. In this context, Singapore has actively pursued collaborative partnerships with various states, particularly major powers possessing advanced AI technologies such as the US, France and Australia.

In 2023, Singapore also endorsed both the REAIM initiative and the US-led “Political Declaration on Responsible Military Use of AI and Autonomy”, underscoring its support for a multilateral and norms-based approach to AI governance in the military domain on the global stage.

    However, within the rapidly evolving defence AI landscape, Singapore faces intricate challenges and dilemmas. One of the foremost concerns is striking a delicate balance between technological advancements and ethical considerations in AI-driven military competition in East Asia.

    As countries in the region invest heavily in AI technologies for defence purposes, the risk of an escalating arms race and the potential for AI-driven conflicts will rise. Singapore’s defence establishment must remain agile and proactive in leveraging AI for defence purposes while mitigating the risks associated with AI-driven military competition.

    Collaboration with like-minded countries and adherence to international norms and standards will be crucial in shaping a responsible and sustainable future for AI in defence in the region.

    About the Author

    Michael Raska is Assistant Professor in the Military Transformations Programme at the S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore. This is a slightly edited version of an article first published in The Straits Times on 17 April 2024. It is republished here with permission.

    Categories: RSIS Commentary Series / Country and Region Studies / Technology and Future Issues / East Asia and Asia Pacific / South Asia / Southeast Asia and ASEAN / Middle East and North Africa (MENA) / Global
