Back
About RSIS
Introduction
Building the Foundations
Welcome Message
Board of Governors
Staff Profiles
Executive Deputy Chairman’s Office
Dean’s Office
Management
Distinguished Fellows
Faculty and Research
Associate Research Fellows, Senior Analysts and Research Analysts
Visiting Fellows
Adjunct Fellows
Administrative Staff
Honours and Awards for RSIS Staff and Students
RSIS Endowment Fund
Endowed Professorships
Career Opportunities
Getting to RSIS
Research
Research Centres
Centre for Multilateralism Studies (CMS)
Centre for Non-Traditional Security Studies (NTS Centre)
Centre of Excellence for National Security (CENS)
Institute of Defence and Strategic Studies (IDSS)
International Centre for Political Violence and Terrorism Research (ICPVTR)
Research Programmes
National Security Studies Programme (NSSP)
Social Cohesion Research Programme (SCRP)
Studies in Inter-Religious Relations in Plural Societies (SRP) Programme
Other Research
Future Issues and Technology Cluster
Research@RSIS
Science and Technology Studies Programme (STSP) (2017-2020)
Graduate Education
Graduate Programmes Office
Exchange Partners and Programmes
How to Apply
Financial Assistance
Meet the Admissions Team: Information Sessions and other events
RSIS Alumni
Outreach
Global Networks
About Global Networks
RSIS Alumni
International Programmes
About International Programmes
Asia-Pacific Programme for Senior Military Officers (APPSMO)
Asia-Pacific Programme for Senior National Security Officers (APPSNO)
International Conference on Cohesive Societies (ICCS)
International Strategy Forum-Asia (ISF-Asia)
Executive Education
About Executive Education
SRP Executive Programme
Terrorism Analyst Training Course (TATC)
Public Education
About Public Education
Publications
RSIS Publications
Annual Reviews
Books
Bulletins and Newsletters
RSIS Commentary Series
Counter Terrorist Trends and Analyses
Commemorative / Event Reports
Future Issues
IDSS Papers
Interreligious Relations
Monographs
NTS Insight
Policy Reports
Working Papers
External Publications
Authored Books
Journal Articles
Edited Books
Chapters in Edited Books
Policy Reports
Working Papers
Op-Eds
Glossary of Abbreviations
Policy-relevant Articles Given RSIS Award
RSIS Publications for the Year
External Publications for the Year
Media
Video Channel
Podcasts
News Releases
Speeches
Events
Contact Us
S. Rajaratnam School of International Studies Think Tank and Graduate School Ponder The Improbable Since 1966
Nanyang Technological University Nanyang Technological University
  • About RSIS
      IntroductionBuilding the FoundationsWelcome MessageBoard of GovernorsHonours and Awards for RSIS Staff and StudentsRSIS Endowment FundEndowed ProfessorshipsCareer OpportunitiesGetting to RSIS
      Staff ProfilesExecutive Deputy Chairman’s OfficeDean’s OfficeManagementDistinguished FellowsFaculty and ResearchAssociate Research Fellows, Senior Analysts and Research AnalystsVisiting FellowsAdjunct FellowsAdministrative Staff
  • Research
      Research CentresCentre for Multilateralism Studies (CMS)Centre for Non-Traditional Security Studies (NTS Centre)Centre of Excellence for National Security (CENS)Institute of Defence and Strategic Studies (IDSS)International Centre for Political Violence and Terrorism Research (ICPVTR)
      Research ProgrammesNational Security Studies Programme (NSSP)Social Cohesion Research Programme (SCRP)Studies in Inter-Religious Relations in Plural Societies (SRP) Programme
      Other ResearchFuture Issues and Technology ClusterResearch@RSISScience and Technology Studies Programme (STSP) (2017-2020)
  • Graduate Education
      Graduate Programmes OfficeExchange Partners and ProgrammesHow to ApplyFinancial AssistanceMeet the Admissions Team: Information Sessions and other eventsRSIS Alumni
  • Outreach
      Global NetworksAbout Global NetworksRSIS Alumni
      International ProgrammesAbout International ProgrammesAsia-Pacific Programme for Senior Military Officers (APPSMO)Asia-Pacific Programme for Senior National Security Officers (APPSNO)International Conference on Cohesive Societies (ICCS)International Strategy Forum-Asia (ISF-Asia)
      Executive EducationAbout Executive EducationSRP Executive ProgrammeTerrorism Analyst Training Course (TATC)
      Public EducationAbout Public Education
  • Publications
      RSIS PublicationsAnnual ReviewsBooksBulletins and NewslettersRSIS Commentary SeriesCounter Terrorist Trends and AnalysesCommemorative / Event ReportsFuture IssuesIDSS PapersInterreligious RelationsMonographsNTS InsightPolicy ReportsWorking Papers
      External PublicationsAuthored BooksJournal ArticlesEdited BooksChapters in Edited BooksPolicy ReportsWorking PapersOp-Eds
      Glossary of AbbreviationsPolicy-relevant Articles Given RSIS AwardRSIS Publications for the YearExternal Publications for the Year
  • Media
      Video ChannelPodcastsNews ReleasesSpeeches
  • Events
  • Contact Us
    • Connect with Us

      rsis.ntu
      rsis_ntu
      rsisntu
      rsisvideocast
      school/rsis-ntu
      rsis.sg
      rsissg
      RSIS
      RSS
      Subscribe to RSIS Publications
      Subscribe to RSIS Events

      Getting to RSIS

      Nanyang Technological University
      Block S4, Level B3,
      50 Nanyang Avenue,
      Singapore 639798

      Click here for direction to RSIS
Connect
Search
  • RSIS
  • Publication
  • RSIS Publications
  • Who’s Accountable When AI Agents Go Rogue?
  • Annual Reviews
  • Books
  • Bulletins and Newsletters
  • RSIS Commentary Series
  • Counter Terrorist Trends and Analyses
  • Commemorative / Event Reports
  • Future Issues
  • IDSS Papers
  • Interreligious Relations
  • Monographs
  • NTS Insight
  • Policy Reports
  • Working Papers

CO25212 | Who’s Accountable When AI Agents Go Rogue?
Asha Hemrajani

27 October 2025

download pdf

SYNOPSIS

The rise of autonomous AI systems has revealed a new frontier in cybersecurity risk, expanding attack surfaces and blurring accountability. Safe and responsible deployment has hence become a defining cybersecurity challenge. Governance of non-human identities and adaptive, policy-driven controls to detect and contain attacks on AI models, apps, and workflows will be needed to establish trusted autonomy.

COMMENTARY

Earlier this year, security researchers proved that an artificial intelligence (AI) assistant could be hijacked through something as ordinary as a calendar invite. Hidden within the invitation was a set of malicious instructions that, once triggered, caused connected lights to flicker, shutters to open, and files to be accessed without the user’s consent.

What began as a controlled experiment quickly revealed a new frontier in cybersecurity risk, where AI systems are not just tools for attackers but potential targets in their own right. As AI becomes more autonomous, able to plan and act across digital and physical environments, the implications for security will be far-reaching.

The line between human and machine agency is blurring, and the time needed to exploit vulnerabilities is shrinking. For businesses and governments, this signals a fundamental change in how digital risk must be managed.

This shift from passive tools to autonomous agents is ongoing. Agentic systems are already deployed in banking, e-commerce and logistics to streamline operations, detect fraud and make real-time decisions.

As these agents interact with enterprise systems, other agents and humans, the cybersecurity attack surface expands. Malicious agents can exploit the same interfaces as legitimate ones, using new threats such as impersonation attacks, prompt injections and data exfiltration (theft). Safeguarding agentic AI in enterprise systems is therefore emerging as a defining cybersecurity challenge.

Cybersecurity as Strategic Enabler

Governments and enterprises are now seeking ways to capture the benefits of AI innovation while managing the growing spectrum of risk it creates. The discussion is increasingly on how to deploy it securely and responsibly.

Traditional cybersecurity frameworks were designed for systems with predictable behaviours. Agentic AI breaks that predictability. It learns, adapts and operates with varying degrees of autonomy, creating new layers of uncertainty that static defences cannot contain.

For governments and large enterprises operating critical infrastructure, this shift requires a fundamental change in mindset. As agentic AI becomes embedded in decision-making, operations and citizen services, cybersecurity must evolve from a defensive function to a strategic enabler of trusted autonomy.

This demands a shift to adaptive, context-aware security with clear human oversight and escalation management, moving beyond static defences to maintain the trustworthiness of systems that influence decisions at a national scale.

Foundational concepts in cybersecurity, such as identity, data, and attack surfaces, are taking on new and evolving dimensions. Even established frameworks like “zero trust” are being re-examined as the rise of AI exposes contradictions that demand rethinking and adaptation.

Reframing Digital Risk Governance

Indeed, governance frameworks must evolve alongside technology. Two issues are becoming urgent.

First, the spectrum of autonomy must be understood. Agentic behaviour is not a binary state. Treating a basic automation script as equivalent to a self-directing system results in misplaced controls and uneven risk management. Oversight and safeguards should correspond to degrees of autonomy, not broad labels.

Second, accountability must be redefined. If an agentic AI system executes an action that is harmful, who should bear responsibility? Without clear boundaries, legal and ethical gaps will persist, and adversaries may exploit them. Boards, chief information security officers and regulators need shared accountability models that reflect how agentic AI systems work.

These questions are already visible in data governance disputes, algorithmic bias cases, and AI incidents where AI systems have behaved in unexpected ways. Unless accountability frameworks get better defined, accountability gaps will widen.

Securing Agentic AI in Critical Infrastructure

Agentic AI deployment in critical infrastructure entities raises unique risks. These systems promise gains in efficiency and resilience, but their vulnerabilities could cause cascading disruptions if compromised. Protecting them requires new approaches to securing AI apps and agents. It is therefore essential that critical infrastructure entities retain control as they adopt more autonomous AI-driven systems.

The focus must then be on detecting and stopping attacks on AI models, apps, and agentic-AI workflows. Policy controls for AI use, including blocking risky requests, preventing data leaks in apps and detecting unsanctioned AI agents, are also essential.

Equally important is ensuring resilience by governing the non-human identities (NHIs), the digital identities backbone of agentic AI. Enterprises will need to exercise proper oversight of NHIs through access control, guardrails and traceability.

Convening for Resilience in Agentic AI

Trust will not be built by algorithms alone; technology is only as trustworthy as the intent and integrity of the people who create and govern it. The rise of agentic AI exposes the limitations of current frameworks and demands new approaches grounded in foresight, accountability and collaboration. Businesses that recognise this shift will be better protected and positioned to lead in the next chapter of digital transformation.

About the Authors

Asha Hemrajani is a Senior Fellow at the S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU). Ian Monteiro is the Chief Executive Officer and founder of Image Engine, organiser of the GovWare Conference and Exhibition 2025. This commentary was originally published by The Business Times on 21 October 2025. It is republished here with permission.

Categories: RSIS Commentary Series / Country and Region Studies / Non-Traditional Security / Technology and Future Issues / East Asia and Asia Pacific / South Asia / Southeast Asia and ASEAN / Global
comments powered by Disqus

SYNOPSIS

The rise of autonomous AI systems has revealed a new frontier in cybersecurity risk, expanding attack surfaces and blurring accountability. Safe and responsible deployment has hence become a defining cybersecurity challenge. Governance of non-human identities and adaptive, policy-driven controls to detect and contain attacks on AI models, apps, and workflows will be needed to establish trusted autonomy.

COMMENTARY

Earlier this year, security researchers proved that an artificial intelligence (AI) assistant could be hijacked through something as ordinary as a calendar invite. Hidden within the invitation was a set of malicious instructions that, once triggered, caused connected lights to flicker, shutters to open, and files to be accessed without the user’s consent.

What began as a controlled experiment quickly revealed a new frontier in cybersecurity risk, where AI systems are not just tools for attackers but potential targets in their own right. As AI becomes more autonomous, able to plan and act across digital and physical environments, the implications for security will be far-reaching.

The line between human and machine agency is blurring, and the time needed to exploit vulnerabilities is shrinking. For businesses and governments, this signals a fundamental change in how digital risk must be managed.

This shift from passive tools to autonomous agents is ongoing. Agentic systems are already deployed in banking, e-commerce and logistics to streamline operations, detect fraud and make real-time decisions.

As these agents interact with enterprise systems, other agents and humans, the cybersecurity attack surface expands. Malicious agents can exploit the same interfaces as legitimate ones, using new threats such as impersonation attacks, prompt injections and data exfiltration (theft). Safeguarding agentic AI in enterprise systems is therefore emerging as a defining cybersecurity challenge.

Cybersecurity as Strategic Enabler

Governments and enterprises are now seeking ways to capture the benefits of AI innovation while managing the growing spectrum of risk it creates. The discussion is increasingly on how to deploy it securely and responsibly.

Traditional cybersecurity frameworks were designed for systems with predictable behaviours. Agentic AI breaks that predictability. It learns, adapts and operates with varying degrees of autonomy, creating new layers of uncertainty that static defences cannot contain.

For governments and large enterprises operating critical infrastructure, this shift requires a fundamental change in mindset. As agentic AI becomes embedded in decision-making, operations and citizen services, cybersecurity must evolve from a defensive function to a strategic enabler of trusted autonomy.

This demands a shift to adaptive, context-aware security with clear human oversight and escalation management, moving beyond static defences to maintain the trustworthiness of systems that influence decisions at a national scale.

Foundational concepts in cybersecurity, such as identity, data, and attack surfaces, are taking on new and evolving dimensions. Even established frameworks like “zero trust” are being re-examined as the rise of AI exposes contradictions that demand rethinking and adaptation.

Reframing Digital Risk Governance

Indeed, governance frameworks must evolve alongside technology. Two issues are becoming urgent.

First, the spectrum of autonomy must be understood. Agentic behaviour is not a binary state. Treating a basic automation script as equivalent to a self-directing system results in misplaced controls and uneven risk management. Oversight and safeguards should correspond to degrees of autonomy, not broad labels.

Second, accountability must be redefined. If an agentic AI system executes an action that is harmful, who should bear responsibility? Without clear boundaries, legal and ethical gaps will persist, and adversaries may exploit them. Boards, chief information security officers and regulators need shared accountability models that reflect how agentic AI systems work.

These questions are already visible in data governance disputes, algorithmic bias cases, and AI incidents where AI systems have behaved in unexpected ways. Unless accountability frameworks get better defined, accountability gaps will widen.

Securing Agentic AI in Critical Infrastructure

Agentic AI deployment in critical infrastructure entities raises unique risks. These systems promise gains in efficiency and resilience, but their vulnerabilities could cause cascading disruptions if compromised. Protecting them requires new approaches to securing AI apps and agents. It is therefore essential that critical infrastructure entities retain control as they adopt more autonomous AI-driven systems.

The focus must then be on detecting and stopping attacks on AI models, apps, and agentic-AI workflows. Policy controls for AI use, including blocking risky requests, preventing data leaks in apps and detecting unsanctioned AI agents, are also essential.

Equally important is ensuring resilience by governing the non-human identities (NHIs), the digital identities backbone of agentic AI. Enterprises will need to exercise proper oversight of NHIs through access control, guardrails and traceability.

Convening for Resilience in Agentic AI

Trust will not be built by algorithms alone; technology is only as trustworthy as the intent and integrity of the people who create and govern it. The rise of agentic AI exposes the limitations of current frameworks and demands new approaches grounded in foresight, accountability and collaboration. Businesses that recognise this shift will be better protected and positioned to lead in the next chapter of digital transformation.

About the Authors

Asha Hemrajani is a Senior Fellow at the S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU). Ian Monteiro is the Chief Executive Officer and founder of Image Engine, organiser of the GovWare Conference and Exhibition 2025. This commentary was originally published by The Business Times on 21 October 2025. It is republished here with permission.

Categories: RSIS Commentary Series / Country and Region Studies / Non-Traditional Security / Technology and Future Issues

Popular Links

About RSISResearch ProgrammesGraduate EducationPublicationsEventsAdmissionsCareersVideo/Audio ChannelRSIS Intranet

Connect with Us

rsis.ntu
rsis_ntu
rsisntu
rsisvideocast
school/rsis-ntu
rsis.sg
rsissg
RSIS
RSS
Subscribe to RSIS Publications
Subscribe to RSIS Events

Getting to RSIS

Nanyang Technological University
Block S4, Level B3,
50 Nanyang Avenue,
Singapore 639798

Click here for direction to RSIS

Get in Touch

    Copyright © S. Rajaratnam School of International Studies. All rights reserved.
    Privacy Statement / Terms of Use
    Help us improve

      Rate your experience with this website
      123456
      Not satisfiedVery satisfied
      What did you like?
      0/255 characters
      What can be improved?
      0/255 characters
      Your email
      Please enter a valid email.
      Thank you for your feedback.
      This site uses cookies to offer you a better browsing experience. By continuing, you are agreeing to the use of cookies on your device as described in our privacy policy. Learn more
      OK
      Latest Book
      more info