Abstract
Artificial Intelligence (AI) is increasingly deployed in safety-critical and mission-critical environments, making effective incident response essential. Unlike traditional IT systems, AI introduces new risks such as adversarial attacks, data poisoning, model drift, and unpredictable system behaviour. This webinar will bring together experts to discuss how organizations can prepare for, detect, and respond to incidents in AI systems. Speakers will explore both technical safeguards and governance measures, offering practical insights on building resilience, ensuring accountability, and learning from incidents to strengthen future AI deployments.
About the Speakers
Paul Starrett is a legal and technology expert with over 25 years of experience spanning law, data governance, and AI. He has served as General Counsel and CRO of a publicly held AI and data management company, taught graduate-level courses on AI governance and cybersecurity law, and contributed to the IAPP’s AI Governance Certification. With a background in predictive analytics, e-discovery, and software engineering, he brings a cross-disciplinary approach to helping organizations address risks in AI systems.
Sean McGregor, PhD is a machine learning safety researcher whose efforts have included starting up the Digital Safety Research Institute at the UL Research Institutes, launching the AI Incident Database, and training edge neural network models for the neural accelerator startup Syntiant. Outside his day jobs, Sean’s open-source development work has earned media attention in The Atlantic, Der Spiegel, Wired, VentureBeat, Vice, and O’Reilly, while his technical publications have appeared in a variety of proceedings.
Heather Frase, PhD is the CEO of Veraitech and Senior Advisor for Testing & Evaluation of AI at Virginia Tech’s National Security Institute. Her diverse career spans roles in defense, intelligence, policy, and financial crime. Her current work focuses on developing and supporting the evaluation of AI systems, improving reliability, and aligning performance with real-world use. She also serves on the OECD’s Network of Experts on AI and on the board of the Responsible AI Collaborative, which researches and documents AI incidents.
Jey Kumarasamy is a Legal Director in ZwillGen’s AI Division, where he advises technology companies on complex legal and technical issues involving AI. With a background in computer science, he leads in-depth assessments of AI systems for performance, safety, and compliance, including red teaming, benchmarking, and bias audits. He also advises on emerging AI laws and governance standards, blending legal strategy with practical technical insight. Previously, Jey was a Senior Associate at Luminos.Law, where he advised clients on AI governance and model audits.
About the Chairperson
Asha Hemrajani is a Senior Fellow at the Centre of Excellence in National Security (CENS) at RSIS. Her research covers digital infrastructure, emerging technologies, cybersecurity, information and data security, Critical Information Infrastructure (CII), and Artificial Intelligence safety and security, as well as the intersection of geopolitics and technology in the spheres of national security, disinformation, and terrorism.
Ms. Hemrajani’s experience spans network engineering, strategy, and regulatory affairs in the mobile communications, domain name, and cybersecurity sectors. She has also been invited by the editors of Oxford Intersections and the Marine Policy Journal to review academic articles on the topics of AI in terrorism and undersea cables.
She was formerly a member of the Board of Directors at ICANN, the global body that coordinates the Internet’s domain name system. She has a Bachelor’s degree in Electrical Engineering and a ModularMaster in Cybersecurity by Design.