27 October 2025
IP25101 | AI, NC3 and the Future of Strategic Stability in the Trump 2.0 Era
KEY TAKEAWAYS
• AI’s integration with nuclear command, control and communications (NC3) offers potential benefits to increase efficiency but raises severe risks that could compromise nuclear decision-making.
• Both the United States and China remain opaque on AI’s role in NC3, creating a dangerous oversight gap driven by competitive dynamics that prioritise strategic advantage over strategic stability.
COMMENTARY
Nearly a year ago, on the sidelines of the APEC Summit in Lima, Peru, former US President Joe Biden and Chinese President Xi Jinping agreed that humans rather than artificial intelligence (AI) should be responsible for decisions over the use of nuclear weapons.
While this was a political declaration rather than a legally binding treaty, it nevertheless sent an important signal to all countries, whether or not they possess nuclear weapons, and established AI’s intersection with nuclear weapons as an issue affecting global strategic stability.
In the months since the two presidents’ declaration, the policies of both China and the United States on AI and nuclear weapons have evolved. In the American case, much of this change stems from the new administration that took office in January 2025 following Donald Trump’s election to a second term as president.
Furthermore, there is an important distinction between the scope of the political declaration – which only referred to the need to maintain human control over decisions to use nuclear weapons – and the integration of AI within nuclear command, control and communications (NC3), which both China and the United States have been exploring.
Even if the United States and China continue to uphold the political declaration made in 2024, it would be unrealistic to assume that the two countries would agree to limit the integration of AI with NC3 given the current state of their bilateral relations. This means that risk assessments of how nuclear weapons are affecting global strategic stability must also factor in the uncertainties and problems associated with AI given that it is being embedded within the nuclear weapons decision-making process.
All of this is also happening at a time when the nuclear order is evolving. Nuclear arsenals continue to expand, and China is now estimated to have the world’s third-largest and fastest-growing stockpile at approximately 600 warheads, although it is still dwarfed by the United States (~3,700 warheads) and Russia (~4,309 warheads).
Moreover, domestic politics is playing a bigger role than previously assumed in nuclear weapons decision-making. In China, centralisation of power around Chinese Communist Party (CCP) General Secretary Xi Jinping has been a key trend in recent years. This has raised uncertainty around the extent to which China’s nuclear scientists and engineers, who have advocated for restraint in the past, can influence nuclear weapons decision-making.
In the United States, President Donald Trump’s decision to skip a Nuclear Posture Review (NPR), which has articulated each American administration’s position on using nuclear weapons since 1994, is part of a larger trend in his second term of dismantling traditional policymaking processes. It is unclear to what extent the NPR conducted in 2018 under the first Trump administration, which outlined a key role for nuclear weapons in sustaining America’s national security interests, still applies.
AI and NC3
NC3 refers to a system of systems that manages the use of nuclear weapons, spanning critical functions such as situational awareness, planning, decision-making, force management, and force direction. While there are many ways to define AI, in the context of NC3 it generally refers to systems that carry out predictive or generative tasks by learning from large quantities of data rather than by following pre-programmed rules.
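To make that distinction concrete, the deliberately simplified Python sketch below contrasts a pre-programmed rule, whose behaviour is fixed and inspectable, with a model that learns its decision rule from data. Every feature, threshold and number here is invented for illustration and bears no relation to any real NC3 component.

```python
# Illustrative contrast only: a pre-programmed rule versus a data-trained
# model. Features, thresholds and data are all invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rule_based_alert(speed_km_s: float) -> bool:
    # Pre-programmed: the decision logic is fixed and fully inspectable.
    return speed_km_s > 6.0

# A "learned" equivalent: its behaviour depends entirely on the training
# data it happened to see, which is the source of both its flexibility
# and its unpredictability.
rng = np.random.default_rng(3)
speeds = rng.uniform(0.0, 12.0, size=(300, 1))
labels = (speeds[:, 0] > 6.0).astype(int)  # toy ground truth
learned = LogisticRegression().fit(speeds, labels)

print(rule_based_alert(7.5))              # always True, by construction
print(bool(learned.predict([[7.5]])[0]))  # True only if the data taught it so
```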
AI’s integration with NC3 can occur across several critical functions such as situational awareness and planning, but it can also support functions such as force management, for example, in the area of predictive maintenance. Risk assessments of these functions and the extent of AI’s involvement vary considerably – for some, anything involving nuclear weapons is too sensitive to involve AI, but this caution ignores long-standing efforts to automate various aspects of NC3.
For instance, AI-driven data analysis can automate data collection, processing and sharing, improving overall situational awareness within an NC3 system. Where this analysis is connected to early warning and detection infrastructure, it can improve the speed and accuracy of threat identification. Furthermore, applying AI to the maintenance of NC3 systems can strengthen operational readiness and reliability through more accurate prediction of component failures and better allocation of maintenance resources.
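As a purely hypothetical illustration of the predictive maintenance use case, the sketch below trains a model on invented component histories and ranks a notional fleet by estimated failure risk. All features, figures and thresholds are assumptions made for this sketch, not a description of any real system.

```python
# Hypothetical sketch of AI-assisted predictive maintenance: rank
# components by estimated failure risk so that scarce maintenance
# resources go to the riskiest items first. All data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)

# Invented history: [operating_hours, avg_temperature_C] per component,
# with past failures more likely at high hours and high temperatures.
n = 500
hours = rng.uniform(0, 10_000, n)
temp = rng.uniform(10, 60, n)
p_fail = 1 / (1 + np.exp(-(0.0006 * hours + 0.08 * temp - 6)))
failed = (rng.random(n) < p_fail).astype(int)

model = GradientBoostingClassifier().fit(np.column_stack([hours, temp]), failed)

# Score a notional current fleet and surface the top priorities.
fleet = np.array([[9500, 55], [1200, 20], [7000, 45], [300, 15]])
scores = model.predict_proba(fleet)[:, 1]
for (h, t), s in sorted(zip(fleet.tolist(), scores), key=lambda x: -x[1]):
    print(f"hours={h:>6.0f}  temp={t:>3.0f}C  estimated failure risk={s:.2f}")
```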
The main challenge arising from AI’s integration with NC3 stems from its lack of predictability and the constraints on developing reliable systems. AI systems employing machine learning require vast, high-quality datasets, but NC3 is highly sensitive, limiting opportunities to gather and share data. This constrains the ability of an AI-enabled NC3 system to handle real-world inputs it was never trained on, potentially leading to false positives.
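A deliberately simplified sketch of this failure mode follows: a classifier fitted on a narrow training distribution still produces a confident score for an event unlike anything it was trained on. The two features, the clusters and the “debris” input are all invented; no real sensor feeds or systems are modelled.

```python
# Minimal, hypothetical sketch of an out-of-distribution false positive.
# The two features, clusters and the "debris" event are all invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Narrow training distribution: two tight clusters of past events.
benign = rng.normal([0.2, 0.3], 0.05, size=(200, 2))
threat = rng.normal([0.8, 0.7], 0.05, size=(200, 2))
X = np.vstack([benign, threat])
y = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = threat

model = LogisticRegression().fit(X, y)

# An unanticipated event (say, debris with an unusual signature) lies far
# outside both clusters, yet the fitted boundary simply extrapolates and
# returns a high threat score rather than flagging its own uncertainty.
debris = np.array([[1.2, 0.1]])
print(f"P(threat) for the unseen event: {model.predict_proba(debris)[0, 1]:.2f}")
```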
AI-enabled NC3 systems designed to detect and retaliate against nuclear threats could misread ambiguous data, such as space debris or routine rocket launches, as hostile actions, highlighting the extreme risk of delegating critical decisions to AI without robust safeguards and oversight. The risks are amplified by automation bias and the pressure of crisis decision-making. In such high-stakes situations, human operators may be inclined to defer to machine recommendations that superficially appear more precise and neutral. This overreliance on AI systems could have disastrous consequences.
Finally, cyberattacks on AI-enabled NC3 systems, such as the manipulation of training data, could further compromise their integrity. AI’s vulnerabilities can be exploited to undermine the reliability of an NC3 system, potentially leading to unauthorised and dangerous decisions. The responsible integration of AI with NC3 must therefore be supported by rigorous testing and oversight.
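To show the training-data risk in the simplest possible terms, the hypothetical sketch below poisons a toy detector’s training set with mislabelled samples and compares the clean and poisoned models. Nothing in it models real NC3 data, tooling or attack paths.

```python
# Hypothetical sketch of training-data poisoning against a toy detector.
# Data, labels and the attack itself are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

benign = rng.normal([0.2, 0.3], 0.05, size=(200, 2))
threat = rng.normal([0.8, 0.7], 0.05, size=(200, 2))
X = np.vstack([benign, threat])
y = np.array([0] * 200 + [1] * 200)  # 0 = benign, 1 = threat

clean = LogisticRegression().fit(X, y)

# The attack: slip threat-like samples into the training pipeline that
# are deliberately mislabelled as benign.
poison = rng.normal([0.8, 0.7], 0.05, size=(100, 2))
X_p = np.vstack([X, poison])
y_p = np.concatenate([y, np.zeros(100, dtype=int)])

poisoned = LogisticRegression().fit(X_p, y_p)

probe = np.array([[0.8, 0.7]])  # a textbook threat signature
print("clean    P(threat):", round(float(clean.predict_proba(probe)[0, 1]), 2))
print("poisoned P(threat):", round(float(poisoned.predict_proba(probe)[0, 1]), 2))
# The poisoned model is markedly less willing to flag a genuine threat:
# the integrity of the training data is a single point of failure.
```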
Lack of Attention or Deliberate Ambiguity?
Despite recent updates, the AI policies of both China and the United States remain silent on AI’s role in nuclear weapons decision-making and its integration with NC3. The question is whether this silence reflects a lack of attention to the issue, or a deliberate ambiguity intended to preserve strategic advantage.
The Trump administration’s AI Action Plan focuses on “winning the AI race” through deregulation, infrastructure investment and global technological leadership. It does not address how civilian and conventional military AI policy translates to NC3. Although the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, launched by the Biden administration in 2023, has not been rescinded, its principles appear to receive little attention in current US discourse on military AI.
Similarly, China’s Global AI Governance Action Plan published in July 2025 does not mention military applications of AI. While it shares some common ground with the United States’ AI Action Plan, China’s AI policy emphasises governance frameworks and multilateral norm-setting rather than accelerating technological advancement. Although official documents do not discuss AI’s integration with NC3, views from Chinese experts indicate that it is a subject of debate, for instance, regarding how AI can play a role in improving situational awareness, precision guidance and missile targeting.
Given the current state of bilateral relations between China and the United States, any further actions building on the Biden-Xi political declaration of 2024 appear unlikely, at least in the short term. The Trump administration’s “America First” approach also suggests that it will be less willing to advance multilateral dialogue in this area, while China is likely to maintain a position that does not restrict ongoing military modernisation efforts and the build-up of its nuclear arsenal.
Competing American and Chinese approaches leave a dangerous gap in oversight for AI’s integration with NC3. Both countries’ emphasis on competitive advantage creates a situation where strategic stability considerations can be downplayed or even disregarded. These circumstances are made worse by the range of other factors complicating assessments of appropriate deterrence and escalation measures.
Maÿlis Mennesson is an intern at FACTS Asia and a master’s student in International Affairs at King’s College London; Manoj Harjani is Research Fellow and Coordinator of the Military Transformations Programme at the S. Rajaratnam School of International Studies (RSIS).