19 March 2025
The AI Action Summit Offers Clues for the Future of Multilateralism
SYNOPSIS
The recently held AI Action Summit in Paris offers some positive prospects and critical challenges that will influence the future of multilateralism. Amidst unfavourable geopolitical headwinds, like-minded states should assert their collective agency to develop effective global AI governance.

COMMENTARY
France and India co-convened the AI Action Summit in Paris on 10 and 11 February 2025. The event gathered government officials and industry leaders from over 100 countries to discuss AI safety and innovation against the backdrop of rapid developments within the industry, most recently the mainstream launch of the Chinese AI model DeepSeek and the rattled response it drew from Silicon Valley, Wall Street, and Washington.
The summit proved to be an important launching pad for the participating countries’ AI-related announcements. French President Emmanuel Macron announced a sizeable investment of €109 billion (US$118.9 billion) to facilitate AI development in Europe, a response to the United States’ US$500 billion Project Stargate. Singapore, for its part, introduced several initiatives to advance global AI safety through joint testing and red teaming.
However, the most consequential outcome of the summit was the Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet, a declaration signed by about 60 countries (including four ASEAN members, namely, Cambodia, Indonesia, Singapore, and Thailand) and several international organisations. The declaration highlighted several key issue areas that would guide future dialogues. These priorities include closing digital divides, promoting inclusive and safe AI, encouraging innovation and sustainable growth, and developing international governance.
The absence of the United States and the United Kingdom from the statement was most glaring. The United States’ refusal to sign should come as no surprise, as Vice President JD Vance warned at the summit that excessive regulation would “kill a transformative industry”. Previously, President Donald Trump had revoked Joe Biden’s 2023 Executive Order on AI development and deployment on the basis that his predecessor’s policy constrained innovation.
Positive Prospects for Multilateralism
It was clear from the summit that most countries are still approaching AI regulation cautiously. The summit produced only a joint vision statement, disappointing some policy analysts who had hoped for more robust regulation, but this caution on the part of states should not be mistaken for disinterest in strong global oversight. That most participants – great powers, middle powers, and small states alike – decided to sign the Statement on Inclusive AI, notwithstanding Washington’s anti-regulatory approach, is a glimmer of hope for multilateralism during the most difficult test of its resilience.
Nonetheless, the consensus generated by these global dialogues should be translated into more substantial policy outcomes. The efforts of middle powers and small states will play a decisive role in maintaining this momentum. India has already agreed to host the next edition of the AI Action Summit in 2026. Singapore and Rwanda spearheaded the creation of the AI Playbook for Small States under the Digital Forum of Small States (Digital FOSS), which outlines the issues in AI development, use, and governance that are most pressing for small states.
Attention should also be paid to regional developments that suggest how countries may consolidate divergent domestic priorities into a global governance approach. Earlier this year, ASEAN issued an expanded version of its Guide on AI Governance and Ethics, aimed specifically at Generative AI, during the 2025 ASEAN Digital Ministers Meeting. ASEAN has also recently begun exploring the impact of AI in defence. In a joint statement this February, the ASEAN Defence Ministers Meeting committed to promoting the responsible application of AI for military purposes and exploring defence-industrial collaboration on AI.
Regional dialogues have also taken place in Latin America, such as the Ministerial Summit on the Ethics of AI and the Ibero-American Forum of Digital Parliamentarians. The 2024-2025 Roadmap for Ethical AI in Latin America and the Caribbean includes action items to collaborate on addressing AI-generated disinformation, managing labour disruptions, protecting against harmful biases, and ensuring environmental sustainability.
The Challenges that Remain
The harsh reality countries face in global AI governance is that the United States’ participation in multilateral processes under the Trump presidency will remain elusive. This presents a critical dilemma for advocates of global governance: with many major AI developers domiciled in the US and China, how effective can global oversight be without the buy-in of the United States or China?
While some AI industry leaders acknowledge the difficulty of arriving at a unified global framework, they have underscored the importance of regulation. However, both US and Chinese firms have also tested the limits of the oversight powers of third countries. For example, DeepSeek argued before Italy’s data protection authority that it should not be subject to European privacy obligations because it is domiciled elsewhere, prompting the agency to ban DeepSeek’s chatbot app.
These trends may make it more difficult for multilateral dialogues to gain enough support for the development of an international governance framework. Still, these obstacles should not deter countries from instituting effective governance. Instead, like-minded states should see these trends as an imperative to assert their agency and coalesce behind an effective global policy.
Likewise, it is critical to harmonise the dispersed processes and dialogues simultaneously taking place across different forums, such as the United Nations High-Level Advisory Body on Artificial Intelligence, established in 2023 to develop cross-sectoral governance. There are also business-led consortiums seeking to self-regulate the industry, such as the Frontier Model Forum backed by OpenAI, Microsoft, Google, and Anthropic.
While these avenues can contribute constructively to the governance process, there is also a risk that these contributions will be siloed, resulting in a fragmented regulatory landscape. Hence, each dialogue should take stock of and build on the proposals that emerge in other forums. The development of global AI governance should also be a transparent process supported by inputs from state actors, private industry, academia, civil society, and other relevant stakeholders.
Conclusion
Global AI governance will be a litmus test for how effective multilateralism can be when geopolitical headwinds threaten to pull countries in the opposite direction. With some major powers showing little appetite for collective action, today’s multilateral governance will be buoyed by smaller nations seeking collaboration on sector-specific issues, including artificial intelligence.
As AI is a borderless technology, global mechanisms are necessary for effective oversight. However, the success of AI governance will also depend on how national and global governance complement each other. Policymakers should pay attention to which policy challenges are best addressed by global frameworks (such as clarifying broader goals of safety and sustainable development) and which are better reserved for domestic policy (such as ensuring that language models are culturally sensitive and relevant).
Formulating international policy frameworks inevitably takes time. Gathering consensus around a unified understanding of ethical principles, their implementation, and the mechanisms to operationalise global oversight is not easy, even in a more favourable environment. Settling these issues will only be more contentious in the current climate. Hence, it is important that avenues for global dialogue remain open and that states maintain their resolve to institute effective global AI governance.
About the Author
Jose Miguelito Enriquez is an Associate Research Fellow in the Centre for Multilateralism Studies at S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore.