    IP25055 | Disasters and Disinformation: AI and the Myanmar 7.7 Magnitude Earthquake
    Keith Paolo Catibog Landicho, Karryl Kim Sagun Trajano

    01 May 2025


    A devastating 7.7 magnitude earthquake struck Myanmar on 28 March 2025, its tremors reaching as far as Bangkok, Thailand. Beyond the physical devastation, victims were not spared from disinformation: amid the chaos and the critical need for reliable information, misleading AI-generated content spread widely, highlighting the dangerous intersection of technology and humanitarian crises.

    AI-generated disinformation complicates already challenging disaster response operations. Image courtesy of Wikimedia Commons.

    The earthquake that struck Myanmar on 28 March 2025 exposed the nation to compounding harm. Over one million people were affected: thousands were dead or injured, and close to 70,000 were internally displaced. Tremors were felt as far as Bangkok, Thailand, approximately 1,000 km from the epicentre in Sagaing, highlighting the severity and scale of the disaster. The devastation was compounded not only by Myanmar’s existing socio-economic and political challenges but also by the ongoing armed conflict: according to the UN Human Rights Office, armed operations and airstrikes continued in quake-hit areas. Exacerbating these challenges were bureaucratic hurdles that limited aid delivery, as in previous disasters such as Cyclone Nargis (2008) and Cyclone Mocha (2023).

    Amid the chaos, telecommunications shutdowns imposed by the military hindered access to information. Demand for updates soared in the aftermath of the earthquake, creating an environment conducive to the spread of disinformation.

    The spread of disinformation was amplified by emergent technologies such as artificial intelligence (AI). Motivated by advertising revenue, profiteers capitalised on the crisis to farm engagement on social media through clickbait content. For example, one viral video carried fabricated depictions of destruction and temples incorrectly located in Mandalay, misrepresenting both the geographic scope and the severity of the disaster. The opportunistic timing highlights the intersection of crisis and technology, where misuse of AI may not only erode trust in information and institutions but also exacerbate the challenges of humanitarian assistance and disaster relief (HADR).

    AI and Disinformation in Humanitarian Crises

    AI has rapidly reshaped the global information landscape as a dual-use technology, bringing both good and harm. Tools like generative adversarial networks (GANs) can now be used for predicting floods and for remote sensing during disasters. AI can also create hyper-realistic simulations of natural calamities such as floods, fire and smog. While useful for climate modelling, AI is also being misused to produce deepfake disaster videos featuring fabricated destruction. Such content may mislead the average consumer of online content, especially during real crises.
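    To make the prediction side of this dual use concrete, the sketch below is a loose illustration only: a classical elevation-threshold baseline for flood extent, not a GAN (learned models instead infer such extents from satellite and sensor data). The function name and the toy terrain grid are invented for illustration.

```python
def inundated_cells(elevation, sources, water_level):
    """Return the set of (row, col) grid cells flooded at `water_level`:
    cells at or below the level that are 4-connected to a water source.

    A deliberately simple baseline; the AI approaches discussed in the
    text learn flood extents from data rather than from elevation alone.
    """
    rows, cols = len(elevation), len(elevation[0])
    flooded = set()
    stack = [s for s in sources if elevation[s[0]][s[1]] <= water_level]
    while stack:
        r, c = stack.pop()
        if (r, c) in flooded:
            continue
        flooded.add((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in flooded
                    and elevation[nr][nc] <= water_level):
                stack.append((nr, nc))
    return flooded

# Toy 3x3 terrain: the low cell at bottom-right sits behind high
# ground, so it stays dry even though it lies below the water level.
terrain = [
    [1, 1, 5],
    [1, 4, 5],
    [2, 4, 1],
]
flooded = inundated_cells(terrain, sources=[(0, 0)], water_level=2)
```

    Even this toy shows why connectivity matters: low-lying cells cut off by high ground are not flooded, a distinction naive thresholding misses.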

    In humanitarian emergencies, time is critical, and accurate and timely information is scarce. Disinformation can sow confusion and erode trust. It can also potentially delay life-saving action, reduce compliance with emergency and safety instructions, and fragment coordination among humanitarian actors. The growing severity of this challenge is reflected in the 2025 Global Risks Report of the World Economic Forum, which identifies misinformation and disinformation as among the top global risks.

    The Myanmar case illustrates the impact of disinformation on HADR. With social media as the main conduit for its spread, AI adds a layer of complexity by enabling faster and more widespread distribution of disinformation. This is dangerous, as every second of the “critical window” of earthquake search and rescue operations matters. In countries like Myanmar, where the information landscape is already fragile, characterised by limited access to disaster knowledge and early warning systems, and where conflict and internal displacement create additional vulnerabilities, disinformation affects victims and responders alike.

    Even in non-conflict settings, disinformation has presented challenges, from COVID-19 vaccine hesitancy in Southeast Asia and manipulated images of the 2023 Türkiye-Syria earthquake to deceptive photos during Hurricane Helene in the United States in 2024. These instances highlight the urgent need for information integrity safeguards in crisis settings where the consequences of disinformation are amplified.

    Rethinking Crisis Response in the Age of AI

    The Myanmar case is a tragic example. The convergence of disasters, conflict and AI-driven disinformation is a global threat. As disasters grow more frequent and severe and as AI becomes more accessible, the humanitarian sector, with the support of the technology sector, should consider information integrity as a vital aspect of crisis response.

    While binding AI regulations are yet to be implemented by ASEAN, national-level initiatives may offer more immediate solutions. Ukraine offers a model for combating disinformation during a crisis, using AI tools like CommSecure and CIB Guard for early detection of harmful narratives and coordinated disinformation campaigns. It has also established dedicated institutions, such as the Center for Combating Disinformation and the Centre for Strategic Communication, which coordinate efforts against disinformation and formulate swift response strategies.
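    The early-detection approach such tools embody can be illustrated with a deliberately simplified sketch. The snippet below is a toy heuristic, not the actual CommSecure or CIB Guard pipelines (whose internals are not public): it flags message texts posted verbatim by several distinct accounts, one crude signal of coordinated inauthentic behaviour. All names and sample posts are invented.

```python
from collections import defaultdict

def flag_coordinated_posts(posts, min_accounts=3):
    """Flag message texts shared near-verbatim by many distinct accounts.

    `posts` is a list of (account, text) pairs. Production systems use
    text embeddings, posting-time bursts and network features; exact-text
    matching after light normalisation is only a stand-in for illustration.
    """
    accounts_by_text = defaultdict(set)
    for account, text in posts:
        # Normalise lightly so trivial edits don't evade the match.
        accounts_by_text[text.strip().lower()].add(account)
    return {text: sorted(accounts)
            for text, accounts in accounts_by_text.items()
            if len(accounts) >= min_accounts}

posts = [
    ("acct1", "Temple collapse in Mandalay, share now!"),
    ("acct2", "temple collapse in mandalay, share now!"),
    ("acct3", "Temple collapse in Mandalay, share now!  "),
    ("acct4", "Aid convoy arriving at Sagaing hospital."),
]
flagged = flag_coordinated_posts(posts)
```

    A real deployment would pair such detection with human review, since organic viral sharing also produces duplicated text.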

    During disasters, however, affected countries may struggle to counter disinformation while dealing with a crisis on the ground, making space for regional support. Ukraine leaned on partnerships within Europe, including the EU Stratcom Task Force’s EUvsDisinfo project, the European Centre of Excellence for Countering Hybrid Threats, and NATO’s Strategic Communications Centre of Excellence. Similar national measures and regional collaboration must be established in Southeast Asia sooner rather than later, particularly given the region’s frequent and often catastrophic disasters.

    That said, it is important to realise that crises in Eastern Europe differ in many ways from those in Southeast Asia. While Ukraine’s use of AI and international partnerships during wartime offers valuable insights, the dynamics of disasters present a distinct set of challenges. The example of Ukraine serves as a starting point, but further research leading to contextual solutions is still needed.

    Ways Forward

    ASEAN’s existing disaster management mechanisms and operations emphasise “accurate information shared in a timely manner” but lack protocols to counter disinformation during crises. The convergence of disasters, conflict and technological misuse exposes the vulnerability of crisis-hit regions to compounded harm. In Myanmar’s case, the physical devastation from the earthquake was compounded by systemic failures in the information landscape, as well as the exploitation of the crisis by malicious actors. The Myanmar case highlights the urgent need for disaster mechanisms to include safeguards against digital threats like disinformation. Bridging this gap would help alleviate the burden of mitigating disinformation for national agencies and integrate it into the regional collective response. This would not only mitigate the risks from emerging digital threats but also support the ASEAN vision of One ASEAN, One Response.

    Ultimately, it is important to recognise that AI remains a dual-use technology. While it may be exploited for disinformation during crises, robust and purposeful national and regional initiatives can unlock the technology’s potential for positive uses. Important as they are, safeguards on responsible AI use will not completely deter malicious actors. In addition to governance frameworks, capacity building, real-time information sharing among stakeholders, and regional and international collaboration could enhance collective resilience against AI-enabled harms during crises and beyond.

    About the Authors

    Keith Paolo C. Landicho is an Associate Research Fellow of the Humanitarian Assistance and Disaster Relief (HADR) Programme, Institute of Defence and Strategic Studies (IDSS), S. Rajaratnam School of International Studies (RSIS). Dr Karryl Kim Sagun Trajano is a Research Fellow with the Future Issues and Technology (FIT) research cluster at RSIS.

    Categories: IDSS Papers
