26 February 2026
- IP26029 | The Uncertain Future of the REAIM Summit
KEY TAKEAWAYS
• The third Responsible AI in the Military Domain (REAIM) Summit concluded with an outcome document titled “Pathways to Action” (PtA), but it has received less support than documents produced by the first two summits.
• While the PtA reflects the guiding principles developed by the Global Commission on REAIM in its final report, it has not incorporated three of the five core recommendations.
• It is unclear whether there will be another REAIM summit, and even if there is consensus to hold one, its focus will need to move beyond outcome documents.
COMMENTARY
The third Responsible AI in the Military Domain (REAIM) Summit, which was held in A Coruña, Spain, from 4 to 5 February 2026, concluded with an outcome document endorsed by fewer states than similar documents from the first two summits, which were held in The Hague in 2023 and Seoul in 2024.
At the time of writing, 40 states have endorsed the “Pathways to Action” (PtA) document, compared with more than 60 countries endorsing the “Blueprint for Action” from the second REAIM Summit and more than 50 countries endorsing the inaugural summit’s “Call to Action”.
Crucially, neither China nor the United States endorsed the PtA, making it the first REAIM outcome document not to have the support of at least one of the two superpowers. China and the United States both endorsed the “Call to Action” in 2023, while only the United States supported the “Blueprint for Action” in 2024.
Does the absence of superpower support mean that REAIM has lost its relevance as a platform for multilateral governance of military AI? The short answer is: not necessarily. After all, REAIM was an initiative of middle powers – the Netherlands and South Korea – whose leadership can continue to sustain it.
Nevertheless, the fact that fewer states have endorsed the PtA should be a cause for concern. Moreover, the PtA has only incorporated some of the recommendations put forward by the final report of the Global Commission on REAIM (GC REAIM) published in September 2025, suggesting that they may not have resonated with states.
Other questions remain that will bear on REAIM's future impact and importance as a multilateral platform driving military AI governance. First, who will host the next summit, if there is indeed consensus on the need to hold one? Second, what should a fourth REAIM summit focus on, given that three consensus documents have already been produced?

Delay and Divergence
The third REAIM Summit was originally scheduled to take place in the second half of 2025. During this unplanned delay, some of the momentum among states on military AI governance appears to have dissipated.
This is not just about deteriorating relations between China and the United States. Dynamics have also shifted between the United States and Europe, driven by concern in recent weeks over the possibility of Greenland’s annexation, as well as other long-standing issues such as the war in Ukraine and the regulation of Big Tech companies.
While the PtA is not legally binding, its endorsement by states nevertheless serves as a barometer of their willingness to accept limits on the development and use of AI in the military domain. That 85 states attended the third REAIM Summit but only 40 ultimately endorsed the PtA is therefore troubling.
Some observers point to the emergence of a parallel platform on military AI governance at the United Nations General Assembly (UNGA) since 2024 as a reason for the divergence among states at the latest REAIM summit. This is a misleading observation, since the UNGA resolutions on military AI (Resolution 79/239 adopted in 2024 and Resolution 80/58 in 2025) were sponsored by the REAIM co-chairs, the Netherlands and South Korea.
These resolutions should be seen as a complementary effort to bring some of the ideas developed through the REAIM summits to a broader audience of states. Resolution 80/58 has also called for a three-day informal exchange to take place in Geneva, which is scheduled for June 2026. This will be an important opportunity for states to take stock and ideally revive some of the momentum that has been lost.
Unpacking the PtA Document
The PtA has generally incorporated the three guiding principles for military AI governance identified in the GC REAIM’s final report. These cover the applicability of international law; the need for systematic and structured design, development, and testing processes across the entire lifecycle of military AI systems; and the support for individuals involved in the development and use of AI in the military domain to exercise informed human agency.
However, when it comes to the five core recommendations of the GC REAIM report, the PtA includes only two of them. These are: (1) to anchor the responsible development and use of AI in the military domain in relevant and applicable ethical principles and international law (reflected in paragraphs 8, 9, and 10 of the PtA); and (2) to implement national policies that guarantee human responsibility across the AI system lifecycle (reflected in paragraphs 12, 15, 19, 20, and 21 of the PtA).
The remaining three core recommendations of the GC REAIM report that are not reflected in the PtA focus on: (1) having a legally binding agreement on retaining human control over decisions to authorise the use of nuclear weapons; (2) establishing a permanent dialogue mechanism; and (3) developing a centralised expert network to disseminate knowledge for capability and capacity building.
The omission in the PtA of the need to retain human control over decisions to authorise the use of nuclear weapons should not come as a surprise, given that nuclear-armed states would not want any conditions imposed on how they deploy their arsenals. This reality is also consistent with voting patterns for UNGA Resolution 80/23 passed in 2025, which addressed risks arising from the AI–nuclear nexus. All nuclear-armed states either voted against or abstained from voting on the resolution.
Similarly, the absence of any mention in the PtA of a permanent dialogue mechanism and the need to develop a centralised expert network should not raise any eyebrows, since states will be wary of rushing into creating new multilateral platforms due to the resources required to sustain them.
Pathways to Somewhere
The third REAIM Summit concluded without any indication of whether there will be another one held in 2027. After three summits, states will naturally be thinking about the value of continuing REAIM as a multilateral platform for driving military AI governance.
In the event that there is consensus to hold a fourth summit, the choice of host country will be crucial, as it will reflect the political will among middle powers to muddle through military AI governance in the vacuum created by China and the United States.
Additionally, a future REAIM summit may need to move away from outcome documents and focus on concrete ways to implement guardrails on AI’s integration in the military domain. States would otherwise remain unconstrained in their pursuit of strategic advantage through the adoption of AI, which would leave the risks posed to global strategic stability unaddressed.
Manoj Harjani is a Research Fellow and Coordinator of the Military Transformations Programme (MTP) at the S. Rajaratnam School of International Studies (RSIS). Mei Ching Liu is an Associate Research Fellow with the MTP.