    CO24124 | The Use of AI in Terrorism
    Asha Hemrajani

    26 August 2024

    SYNOPSIS

    Artificial Intelligence (AI) is changing the way intelligence is gathered and used, affecting geopolitical tensions and terrorism. Terrorist and violent extremist actors are reportedly leveraging AI to enhance their operations in three ways: boosting information operations, recruitment, and financing. The potential for AI to be weaponised and misused in terrorism is rapidly being realised. Proactive policies and multistakeholder initiatives will be needed to monitor and counter these AI-enhanced terrorist operations.

    Photo: Pixabay

    COMMENTARY

    In a speech to the Australian National Press Club in April, Mike Burgess, Director-General of the Australian Security Intelligence Organisation (ASIO), warned that “AI is likely to make radicalisation easier and faster”. His assessment is shared by other intelligence chiefs, who are similarly concerned about “terrorists exploiting the enormous potential of the technology [AI]”.

    The key driver is that the barriers to leveraging AI tools, such as cost, training, and coding skills, have fallen as the tools become more widely accessible. Terrorist groups will likely incorporate more AI into their operations.

    AI can potentially change terrorist operations in three fundamental ways: (a) boosting information operations (IO), (b) aiding recruitment, and (c) facilitating financing.

    Information Operations

    AI platforms can play a clear role in shaping and enhancing IO. Last year, Tech Against Terrorism, an initiative launched by the United Nations to work on terrorism threat intelligence and policy, reported that terrorist and violent extremist actors (TVEs) have already started using Generative AI tools to enhance their existing methods of producing and spreading propaganda, distorting narratives and influencing public opinion, as part of their IO.

    Deepfakes are one type of synthetic media output from Generative AI tools. They can resemble certain persons, objects or events and appear genuine. Deepfakes are becoming more widespread and harder to tell apart from genuine content, thus facilitating unlawful and dangerous activities. For example, a media organisation associated with Al-Qaeda has disseminated misinformation and propaganda that appeared to have been produced using deepfakes.

    These images were made into posters overlaid with propaganda messages. Analysts assessed that the images were likely generated with free tools, avoiding the identification risks associated with paid ones, and that they were likely generated without the propaganda messages overlaid, so as not to trigger content moderation rules.

    In February 2024, another study examined 286 pieces of AI-generated or AI-enhanced content created or shared by pro-Islamic State (IS) accounts across four major social media platforms. The most common imagery in the AI-generated material was IS flags and guns. AI allowed the supporters to generate content creatively and efficiently, and even to bypass filters on Instagram and Facebook by blurring the IS flag or placing a sticker on top of it.

    The above examples suggest that TVE operatives have become adept at circumventing content creation restrictions in AI-enabled image generation tools. They have also developed a high level of skill in bypassing content moderation controls on social media platforms. This strongly suggests that the current guardrails on both AI-enabled image generation tools and social media platforms need to be strengthened.
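    The blur-and-sticker tactics above exploit a real weakness: moderation pipelines that match known propaganda by exact cryptographic hash break under the slightest edit, which is why platforms turn to perceptual hashing that tolerates small changes. The sketch below is a toy illustration of that distinction (an 8×8 average-hash on a synthetic image; no platform's actual system is implied):

    ```python
    import hashlib

    def average_hash(pixels):
        """Toy perceptual hash: one bit per pixel, set if brighter than the image mean."""
        flat = [p for row in pixels for p in row]
        avg = sum(flat) / len(flat)
        return [1 if p > avg else 0 for p in flat]

    def hamming(h1, h2):
        """Number of differing bits between two hashes."""
        return sum(a != b for a, b in zip(h1, h2))

    def sha256_of(pixels):
        """Exact cryptographic hash of the raw pixel bytes."""
        return hashlib.sha256(bytes(p for row in pixels for p in row)).hexdigest()

    # Synthetic 8x8 greyscale "image": a bright square on a dark background.
    original = [[200 if 2 <= i <= 5 and 2 <= j <= 5 else 30 for j in range(8)]
                for i in range(8)]

    # A subtle edit (one pixel nudged), standing in for a light blur or overlay.
    tampered = [row[:] for row in original]
    tampered[0][0] += 5

    # The exact hash changes completely, so hash-matching moderation misses the edit...
    assert sha256_of(original) != sha256_of(tampered)
    # ...while the perceptual hash is unchanged, so similarity matching still catches it.
    assert hamming(average_hash(original), average_hash(tampered)) == 0
    ```

    Real systems use far more robust perceptual hashes and classifiers, but the asymmetry the toy shows is the same one TVE operatives exploit.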

    Recruitment

    Terrorists could use AI-based tools to profile candidates and identify potential recruits who meet their criteria. Subsequently, generative AI tools could be used to personalise and tailor messaging and media content for potential recruits. A UN study reported that concerns have been raised about OpenAI’s tools in terms of their capabilities in “micro-profiling and micro-targeting, generating automatic text for recruitment purposes”.

    The same study found that AI might be used in the data mining process to identify individuals susceptible to radicalisation, enabling the precise dissemination of terrorist information or messaging. For example, they may send tailored messages to potential recruits, such as those who often seek “violent content online or streamed films portraying alienated and angry antiheroes” via AI-powered chatbots. This was demonstrated in an experiment when the UK’s terrorism legislation reviewer was “recruited” by a chatbot.

    Financing

    Countering Financing of Terrorism (CFT) mechanisms trace how finance flows from the source, legally or illegally, to terrorists. These involve multiple stakeholders and include programmes against money laundering. Know Your Customer (KYC) is one standard technique financial institutions deploy to combat money laundering, especially when a bank account is opened online.

    Some banks require the account holder to be present during a live video call where their “liveness” is verified. It is possible to spoof these KYC processes with deepfake videos, allowing terrorists to open fake accounts to facilitate funding for their activities.

    The use of deepfakes in biometric KYC verification is particularly worrying. A report by sensity.ai, a fraud detection company, studied the most popular biometric verification vendors and found that “the vast majority were severely vulnerable to deepfake attacks”. Terrorists can, therefore, leverage AI-generated deepfakes to bypass security checks inherent in KYC platforms, evade CFT mechanisms and thus illegally transfer funds.

    Just as audio deepfakes have been used for commercial fraud, terrorists can leverage them to transfer funds illegally. Such recordings can deceive people into parting with money or sensitive information. And because certain banks use voice authorisation checks for security, a synthesised voice that closely resembles the authorised user's could be used to bypass KYC checks and illegally transfer funds.
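    Voice-based checks of this kind typically reduce to comparing a voiceprint embedding of the caller against an enrolled template and accepting anything above a similarity threshold. The sketch below (toy three-dimensional "embeddings" and an invented threshold, purely illustrative) shows why a synthetic voice tuned to be close enough to the enrolled one passes just as the genuine user does:

    ```python
    import math

    def cosine(a, b):
        """Cosine similarity between two embedding vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) *
                      math.sqrt(sum(x * x for x in b)))

    def voice_verified(enrolled, sample, threshold=0.85):
        """Accept the caller if the sample is similar enough to the enrolled voiceprint."""
        return cosine(enrolled, sample) >= threshold

    enrolled = [0.90, 0.10, 0.40]   # hypothetical enrolled voiceprint
    genuine  = [0.88, 0.12, 0.41]   # the real user on a later call
    deepfake = [0.85, 0.15, 0.38]   # synthetic voice tuned to resemble the user
    stranger = [0.10, 0.90, 0.20]   # unrelated voice

    assert voice_verified(enrolled, genuine)       # real user passes
    assert voice_verified(enrolled, deepfake)      # so does a close-enough deepfake
    assert not voice_verified(enrolled, stranger)  # only dissimilar voices are rejected
    ```

    The threshold check cannot distinguish a genuine voice from a sufficiently faithful synthesis, which is why liveness and anti-spoofing layers are needed on top of similarity matching.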

    Cryptocurrencies are another source of terrorist financing. In 2022, Bloomberg reported that a UN counter-terrorism legal expert had opined that “more cases of crypto use in terror-financing are being detected amid stepped-up scrutiny of such practices”.

    Cryptocurrencies were suspected to have been used in financing the 2015 Paris and 2019 Sri Lankan bombings. Following the 7 October attack on Israel, the US Treasury Department imposed sanctions on a virtual currency exchange in Gaza suspected of facilitating the Hamas attack. In February 2024, the crypto exchange Binance, whose CEO is a Singaporean, was sued by the families of the victims of the attack for allegedly “facilitating terrorism”.

    AI can potentially enable terrorists to take further advantage of cryptocurrencies in two ways: trading and theft.

    AI-powered cryptocurrency bots can detect patterns and price trends to “offer predictions on target and breakout prices, alongside confidence levels, thus significantly enhancing decision-making processes”, which is especially useful for markets with high volatility. Terrorists can leverage this ability to make more lucrative cryptocurrency trades, increasing their profits.
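    The "patterns and price trends" such bots detect are often as simple as moving-average relationships. A minimal sketch of one classic signal, a fast/slow simple-moving-average crossover (window sizes and the price series here are invented for illustration, not trading advice):

    ```python
    def sma(prices, window):
        """Simple moving average; None until enough data points exist."""
        return [None if i + 1 < window else sum(prices[i + 1 - window:i + 1]) / window
                for i in range(len(prices))]

    def crossover_signals(prices, fast=3, slow=5):
        """Emit 'buy' when the fast SMA crosses above the slow SMA, 'sell' on the reverse."""
        f, s = sma(prices, fast), sma(prices, slow)
        signals = []
        for i in range(1, len(prices)):
            if f[i - 1] is None or s[i - 1] is None:
                continue  # not enough history yet
            if f[i - 1] <= s[i - 1] and f[i] > s[i]:
                signals.append((i, "buy"))
            elif f[i - 1] >= s[i - 1] and f[i] < s[i]:
                signals.append((i, "sell"))
        return signals

    # A steady price that starts rising triggers a single buy signal.
    print(crossover_signals([10, 10, 10, 10, 10, 11, 12, 13, 14, 15]))  # [(5, 'buy')]
    ```

    Commercial bots layer statistical models and sentiment feeds on top, but the core loop of detecting a trend change and acting on it is the capability the commentary describes.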

    The second way is the outright theft of cryptocurrencies. Global cryptocurrency thefts have surged more than 100 per cent in the first half of 2024 as compared to 2023, partly caused by major attacks. This includes a US$308 million loss at a Japanese crypto exchange due to a cybersecurity vulnerability. Given the growing trend of using AI to craft and launch cyberattacks, it is foreseeable that terrorists will use AI to “facilitate the theft of cryptocurrencies from ‘hot wallets’” as reported by the UN.

    Conclusion

    The potential for AI to be weaponised and misused in terrorist activities is fast being realised. AI is becoming another tool in the terrorist toolbox for running IO, recruitment, and financing operations.

    Policymakers, lawmakers, law enforcement agencies, and civil society must collaborate closely to develop robust strategies to counter terrorist entities’ misuse of AI.

    Greater vigilance and an array of mechanisms will be needed to monitor and contain these AI-enhanced terrorist operations: multistakeholder initiatives, legislation, policies such as better guardrails on social media platforms and AI-enabled content generation tools, and automated detection tools.

    About the Author

    Asha Hemrajani is a Senior Fellow at the Centre of Excellence for National Security (CENS) at S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore.

    Categories: RSIS Commentary Series / Country and Region Studies / International Politics and Security / Non-Traditional Security / Technology and Future Issues / East Asia and Asia Pacific / South Asia / Southeast Asia and ASEAN / Global