S. Rajaratnam School of International Studies, Nanyang Technological University
Think Tank and Graduate School: Ponder The Improbable Since 1966
    IP24105 | AI in Humanitarian Action: Understanding the Digital Divide and Humanitarian Principles
    Andrej Zwitter, Keith Paolo Catibog Landicho

    17 December 2024


    SYNOPSIS

Artificial Intelligence (AI) is poised to revolutionise humanitarian assistance and disaster relief in disaster-prone Southeast Asia with transformative prediction, forecasting, and analytic capabilities. However, the digital divide and risks to the humanitarian principles pose significant challenges. Robust governance and strategies that align AI with the humanitarian principles are therefore essential to empower communities and build resilience.

    COMMENTARY

Southeast Asia is one of the world’s most disaster-prone regions. The Philippines alone faced six tropical cyclones in the span of a month (October to November 2024), causing widespread flooding, significant damage to infrastructure, and affecting millions of people. Mainland Southeast Asia was also severely impacted. NASA characterised the event as unusual, and the Japan Meteorological Agency reported that four simultaneously active cyclones had not been observed in the Pacific basin since 1951. This is a glimpse of the disaster complexity across the region, which includes not only cyclones and their related hazards (flooding, rain-induced landslides, storm surge, strong winds), but also droughts, earthquakes, volcanic eruptions, and tsunamis.

Given such complexity, ASEAN has recognised the need for innovative solutions, such as advancing the integration of Artificial Intelligence (AI) into its humanitarian assistance and disaster relief (HADR) strategies. Current plans, like the ASEAN Agreement on Disaster Management and Emergency Response (AADMER) Work Programme 2021-2025 and the ASEAN Coordinating Centre for Humanitarian Assistance on disaster management (AHA Centre) Work Plan 2025, include developing data-driven decision-making capabilities, fostering innovation hubs, and exploring AI applications. ASEAN should tread this path of AI integration carefully, future-proofing its disaster management as outlined in the ASEAN Disaster Resilience Outlook 2025.

AI offers immense potential to transform HADR in the region, enabling capabilities such as detecting emerging threats, enhancing the forecasting of typhoons and volcanic eruptions, providing early warnings to at-risk populations to facilitate pre-emptive evacuations, optimising resource mobilisation, predicting conflicts, and identifying migration streams. However, this potential is tempered by the region’s significant digital divide: according to a 2021 report by Roland Berger, a third of the Southeast Asian adult population still lacks access to digital technologies. Additionally, the region averages only 56.22 out of 100 across the six pillars of the ASEAN Digital Integration Index. AI risks exacerbating this divide.

    The challenge lies in leveraging AI to empower vulnerable communities while safeguarding the humanitarian principles. Striking this balance is essential to ensure that technological innovation fosters inclusivity, ethical applications, and long-term resilience.

AI offers immense potential to transform and boost humanitarian assistance and disaster relief (HADR) responses. However, the integration of AI presents risks to the humanitarian principles of humanity, impartiality, neutrality, and independence, and may also exacerbate the digital divide. Image from Pexels.

    Making the Case for AI in HADR

Weather prediction has become a competitive race among AI developers, and it strongly substantiates AI’s impact. The Fengwu model of the Shanghai Artificial Intelligence Laboratory, Google DeepMind’s GraphCast, and Microsoft’s Aurora are notable AI models, each with strengths over traditional prediction models in lead time and in predicting the track, intensity, and other aspects of weather systems. This application is pivotal for early warning systems and enables timely HADR to mitigate disaster risks.

The application of AI in conflict prediction and the analysis of migration streams has significantly enhanced humanitarian operations, particularly by forecasting the displacement of internally displaced persons (IDPs) and refugees. AI tools, leveraging vast datasets like satellite imagery, geospatial analytics, and real-time social media activity, enable precise predictions of conflict escalations and migration flows. For instance, the UN Global Pulse project has utilised machine learning to analyse sentiment and conflict indicators derived from social media, improving pre-emptive humanitarian responses. Similarly, NATO’s use of AI in strategic warning systems aims to analyse complex datasets for early detection of potential conflicts, enhancing crisis preparedness and response. Moreover, the US military’s Integrated Crisis Early Warning System (ICEWS) combines political event databases with advanced analytical models to deliver actionable conflict predictions.
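The core idea behind such indicator analysis can be sketched in a few lines. The keywords, weights, and posts below are invented for illustration only; real systems such as Global Pulse’s use trained machine-learning models, not keyword counts.

```python
# Hypothetical, deliberately simplified conflict-indicator scorer.
# It turns a stream of text posts into a numeric early-warning signal
# by summing weights for indicator terms (all values invented).
INDICATORS = {"clashes": 3, "displaced": 2, "roadblock": 1, "protest": 1}

def risk_score(posts):
    """Sum indicator weights over a stream of social media posts."""
    score = 0
    for post in posts:
        text = post.lower()
        for word, weight in INDICATORS.items():
            if word in text:
                score += weight
    return score

posts = ["Clashes reported near the border", "Families displaced overnight"]
print(risk_score(posts))  # 3 + 2 = 5
```

A rising score over time, rather than any single value, is what would prompt analysts to look closer; production systems add sentiment models, geolocation, and event databases on top of this basic signal-extraction step.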

    However, a critical challenge arises with the involvement of non-humanitarian stakeholders, including private tech firms, military entities, and governmental actors, in developing and deploying these systems. Unlike traditional humanitarian organisations, these actors are not bound by the principles of humanity, impartiality, neutrality, and independence, leading to ethical dilemmas and potential misuse of predictive insights. An illustrative example is the collaboration between migration-focused NGOs and private analytics firms like Palantir, where advanced predictive analytics raised concerns about data privacy and alignment with humanitarian goals.

    While these innovations underscore AI’s transformative potential, their ethical deployment necessitates rigorous governance frameworks that ensure adherence to the humanitarian principles, safeguarding the rights and dignity of affected populations.

    AI and the Digital Divide

The AI trend is pushing organisations, including those in HADR, to implement the technology without first ensuring it delivers tangible benefits. While AI holds promise for improving disaster response, its technical complexities can alienate the communities it aims to serve, exacerbating the digital divide or even existing vulnerabilities, since crisis situations are high-stakes environments that are not ideal for testing or deploying AI solutions. In the humanitarian domain, several different kinds of digital divide become pertinent:

    1. Access divide: Beyond digital literacy, this refers to the disparity in access to digital infrastructure, such as the internet, AI systems, and devices, especially in remote or underserved areas. It also includes energy reliability and connectivity, which are crucial for AI deployment.
    2. Knowledge divide: A gap in understanding the ethical, operational, and strategic use of AI. This includes disparities in awareness of AI risks and benefits among stakeholders, including communities affected by disasters.
    3. Governance divide: A lack of robust frameworks to regulate AI use in humanitarian contexts, which can disadvantage organisations unable to navigate complex legal or ethical landscapes.
    4. Data divide: Inequalities in access to high-quality data for training AI models. Organisations with insufficient resources may lack the data diversity necessary for accurate predictions or solutions tailored to local contexts.
    5. Sustainability divide: Differences in the capacity to maintain and scale AI systems over time, considering ongoing costs, software updates, and adapting to evolving needs.

    These forms of the digital divide highlight the multifaceted challenges in aligning AI with the humanitarian principles and emphasise the need for equitable governance and capacity-building initiatives.

    The push for AI should not be obligatory but rather a nudge to understand risks, address fears and uncertainties, and make sense of AI for humanitarian purposes. It must be tailored to the context, focusing on empowering vulnerable populations rather than multiplying risks. For those in remote and inaccessible areas that lack technology or even basic necessities, AI should deliver tangible benefits such as improved food distribution or targeted evacuation plans — ensuring no one is left behind. Striking this balance between technological innovation and inclusivity is vital to achieve impactful HADR outcomes.

    Risk to the Humanitarian Principles

    The humanitarian principles of humanity, impartiality, neutrality, and independence are fundamental to humanitarian action, ensuring that aid is provided solely based on need, without discrimination, and free from political or military influence. These principles are crucial for maintaining trust and access in conflict and crisis situations.

The integration of AI into conflict prediction and migration analysis presents several risks to these principles. Regarding impartiality, AI systems can introduce and perpetuate biases present in their training data, leading to skewed predictions that may favour certain groups over others. This undermines the principle of impartiality, as aid distribution could become unequal rather than based solely on need. For instance, according to a study on AI for humanitarian action, if an AI system is trained on data that underrepresents certain populations, it may fail to predict crises affecting those groups, resulting in inadequate humanitarian response.
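The underrepresentation problem can be made concrete with a deliberately minimal sketch. The “model” below is a toy majority-outcome predictor, not any real humanitarian system, and the regions and figures are invented: a community absent from the training data silently inherits predictions learned from better-represented populations.

```python
from collections import Counter

# Hypothetical toy data for a crisis-prediction model: "coastal"
# communities dominate the training set, while "upland" communities
# (imagine a remote, poorly connected area) do not appear at all.
training = [("coastal", "no_crisis")] * 45 + [("coastal", "crisis")] * 5

def fit(records):
    """Learn the majority outcome per region, plus a global fallback."""
    per_region, overall = {}, Counter()
    for region, outcome in records:
        per_region.setdefault(region, Counter())[outcome] += 1
        overall[outcome] += 1
    model = {r: c.most_common(1)[0][0] for r, c in per_region.items()}
    return model, overall.most_common(1)[0][0]

def predict(model, fallback, region):
    # A region missing from the training data silently receives the
    # global pattern learned from better-represented populations.
    return model.get(region, fallback)

model, fallback = fit(training)
print(predict(model, fallback, "coastal"))  # no_crisis (learned from data)
print(predict(model, fallback, "upland"))   # no_crisis (blind fallback),
# even if upland communities are in fact at high risk
```

The failure is structural, not a bug: the model reports a confident-looking answer for “upland” while having seen no evidence about it at all, which is exactly how biased data quietly translates into inequitable aid allocation.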

    Neutrality is equally at stake when it comes to the misuse of data. Collaborations with non-humanitarian stakeholders, such as private tech firms or military entities, can compromise neutrality. Data collected for humanitarian purposes might be repurposed for military or political objectives, eroding the perceived neutrality of humanitarian organisations and potentially making them targets in conflicts. For example, partnerships between humanitarian agencies and private companies like Palantir have raised concerns about data being used beyond humanitarian aims, potentially for surveillance or military operations.

The principle of independence aims to shield humanitarian action from undue external influence. However, reliance on AI technologies developed by external entities can threaten exactly that independence of humanitarian organisations. Dependence on proprietary AI systems may subject humanitarian actions to the interests and agendas of technology providers, which may not align with humanitarian goals. This could lead to situations where aid delivery is influenced by external political or economic pressures, compromising an organisation’s autonomy.

To mitigate these risks, it is essential to establish robust data governance frameworks that uphold the humanitarian principles, ensuring that AI applications in humanitarian contexts are ethical, unbiased, and solely focused on alleviating human suffering. Furthermore, we may need to adapt the humanitarian principles to the 21st century’s focus on data and AI by creating a set of digital humanitarian principles that protect humanitarians in the fifth domain of warfare (after land, sea, air, and space): cyberspace.

    Towards Inclusive AI in HADR

    The integration of AI into HADR offers transformational opportunities to enhance disaster preparedness, response, and recovery. However, the implementation of AI must navigate significant challenges, including the digital divide, risks to the humanitarian principles, and ethical concerns. In regions like Southeast Asia, where natural hazards are frequent and devastating, the digital divide threatens to exclude populations from the benefits of AI-driven solutions. Therefore, ensuring that these AI solutions are inclusive, applicable, and context-appropriate is essential.

AI solutions must not undermine the humanitarian principles, lest they exacerbate inequalities. Robust governance frameworks, together with respect for data privacy, intellectual property rights, and relevant guidelines, should steer AI integration; they are essential to safeguard data, address biases, and ensure that AI aligns with the core values of humanitarian action.

    AI should serve as a catalyst — bridging gaps rather than widening them and empowering communities and stakeholders rather than alienating them. By fostering collaboration, innovation, and inclusivity, the humanitarian sector can harness AI for enhanced and sustainable resilience.

    Andrej Janko Zwitter is Professor of Political Theory and Governance and Director of the Centre for Innovation, Technology and Ethics at the Rijksuniversiteit Groningen, Netherlands. Keith Paolo C. Landicho is an Associate Research Fellow of the Humanitarian Assistance and Disaster Relief [HADR] Programme, Centre for Non-Traditional Security Studies, S. Rajaratnam School of International Studies, Nanyang Technological University, Singapore.

    Categories: IDSS Papers / Country and Region Studies / International Politics and Security / East Asia and Asia Pacific / South Asia / Southeast Asia and ASEAN / Global


Copyright © S. Rajaratnam School of International Studies. All rights reserved.