CO25164 | ChatBIT and the Militarisation of Open-Source AI: Security Implications for Asia
Annemarie Mugisa Acen

28 July 2025

SYNOPSIS

The rapid advancement of open-source AI has outpaced regulatory oversight, raising critical concerns about its potential exploitation for military applications. Global AI governance platforms have paid little attention to regulating the military use of open-source AI or to the risks it poses to security and stability, especially in Asia.

COMMENTARY

In June 2024, a team of Chinese researchers affiliated with the People’s Liberation Army unveiled ChatBIT, an AI model developed specifically for military applications. Built on Meta’s open-source Llama-2-13b large language model, ChatBIT is designed to support military operations, including battlefield intelligence, situational awareness, and operational decision-making.

This development highlights the absence of regulatory measures governing the use of open-source AI for military purposes. While the United States has expressed concern over ChatBIT, the model has received little scrutiny in ongoing global AI governance discussions. Given its potential impact on global and regional security, countries need to pay closer attention to the possible use of open-source AI for military applications.

Open-Source AI vs. Closed-Source AI

Unlike closed-source AI models such as OpenAI’s GPT-4 or Google’s Gemini, which operate under strict access controls, open-source models can be freely downloaded and modified, and their weights and code reused to build new chatbots and applications. While this openness fosters innovation, it also poses a significant security risk. Meta’s Llama-3 Acceptable Use Policy prohibits military applications, but enforcement remains a challenge: once released, these models can be repurposed beyond their original intent, including in the military domain, as the sketch below illustrates.
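
To make concrete how low the barrier to such modification is, here is a minimal sketch in Python using the Hugging Face transformers and peft libraries. It shows roughly what “modifying” an open-weight model involves: downloading the published weights and attaching trainable LoRA adapters, after which the model can be fine-tuned on any corpus the user supplies. The model name and adapter settings are illustrative assumptions, and the training data and loop are omitted.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Openly published weights. Official Llama downloads require accepting
# Meta's licence, but nothing technically prevents reuse once obtained.
BASE = "meta-llama/Llama-2-13b-hf"

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)

# Attach lightweight LoRA adapters: from here, fine-tuning on any
# domain-specific corpus (which could as easily be military documents
# as customer-support logs) yields a specialised derivative model.
peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```

None of these steps is technically constrained by an acceptable-use policy; compliance rests entirely with the downstream user, which is the enforcement gap described above.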

China is not alone in leveraging open-source AI for strategic advantage; the US Department of Defense has also explored similar applications through partnerships with American tech companies. This poses a challenge to existing governance mechanisms that aim to regulate these technologies and oversee their effective implementation.

While many Western companies and governments claim to be guided by ethical principles, there have been cases where these principles appear to have been ignored. In November 2024, Meta adjusted its policy to allow US government agencies and defence contractors to use Llama models for cybersecurity and intelligence purposes. This underscores the difficulty of holding the private sector accountable for the governance of AI in the military domain.

Regional Military AI Governance Efforts

Many Asian countries are still in the early stages of integrating AI into their defence systems. Instead of responding directly to developments like ChatBIT, several countries remain focused on foundational steps, such as updating defence strategies, investing in dual-use technologies, and experimenting with AI applications in controlled environments. For example, Japan’s Defence Ministry launched its first basic policy on the use of AI in July 2024, while South Korea launched a research centre on defence AI earlier in the same year. These efforts are part of broader military modernisation and transformation efforts and do not focus on open-source AI governance per se.

In Southeast Asia, the governance of military AI has received comparatively little attention. Until recently, discussions about AI within ASEAN focused largely on civilian capabilities. It was only in early 2025 that the ASEAN Defence Ministers’ Meeting (ADMM) issued its first joint statement on military AI, underscoring how new the topic is in the region. There remains no regional white paper or coordinated policy framework specifically tackling the risks of open-source AI in military operations.

This muted response may be due partly to capacity limitations, differing threat perceptions, and political sensitivities surrounding military innovation. However, some countries in the region have reacted cautiously to ChatBIT’s emergence, with security analysts warning about the potential for asymmetric military capabilities and exploitation by non-state actors. Still, these concerns have not yet resulted in significant policy responses.

These circumstances underscore the need for Southeast Asia to accelerate regional dialogue and cooperation on military AI governance, particularly for open-source tools, whose accessibility heightens the risk of misuse. Given the dual-use nature of AI technologies, frameworks developed for civilian use could be expanded or adapted, but they will require recalibration to address the specific risks posed by militarised open-source AI.

Regulating the Military Use of Open-Source AI

Strengthening international governance frameworks will be crucial in addressing the growing risks associated with open-source military AI. At the same time, binding global agreements may prove difficult to fully enforce because of domestic political constraints. Existing multilateral conferences, such as the Responsible AI in the Military Domain (REAIM) Summit, offer a good starting point for multistakeholder dialogue.

The REAIM Summit and similar initiatives need to focus on creating shared regulatory frameworks that can help manage and reduce the militarisation of open-source AI models. This might involve practical steps such as setting up early warning systems to detect suspicious military uses of open-source tools, along with encouraging voluntary transparency for state-led AI projects. By tackling these risks head-on, such platforms can help bridge current governance gaps and promote greater accountability in the development of military AI.
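
One narrow, hypothetical interpretation of such an early warning system is sketched below in Python: it simply scans the public Hugging Face Hub for model listings matching an assumed watchlist of terms, using the real huggingface_hub client library. Public metadata would only ever surface openly published derivatives; an internal state-lab fork like ChatBIT would never appear here, which is precisely why such monitoring can complement, but not replace, intergovernmental transparency measures.

```python
from huggingface_hub import HfApi

# Hypothetical watchlist: terms that might signal a military-oriented
# derivative of an open model. A production system would need far richer
# signals to avoid flagging, say, hobbyist wargaming projects.
WATCHLIST = ["battlefield intelligence", "targeting", "military planning"]

api = HfApi()
for term in WATCHLIST:
    for model in api.list_models(search=term, limit=20):
        # Each hit is only a candidate for human review, not a verdict.
        print(f"review: {model.id} | tags={model.tags} | downloads={model.downloads}")
```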

There is also a need to work with private sector developers of open-source AI to implement technical and policy safeguards that prevent the misuse of their models for military applications. For example, Meta’s Llama Guard, an open-source classifier designed to detect potentially harmful prompts and outputs, demonstrates one way of deploying technical safeguards alongside open-source models.
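
As a concrete illustration, the sketch below follows the usage pattern from Meta’s published Llama Guard documentation: the classifier is itself a language model that reads a conversation and emits a safety verdict plus any violated category codes. Treat the details as an approximation of that published example rather than a verified recipe; access to the weights requires accepting Meta’s licence.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/LlamaGuard-7b"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat: list[dict]) -> str:
    # Llama Guard's chat template wraps the conversation in its safety taxonomy.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=100, pad_token_id=0)
    # Decode only the newly generated verdict, e.g. "safe" or "unsafe\nO3".
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

print(moderate([{"role": "user", "content": "How can I sabotage a power grid?"}]))
```

The obvious limitation, and the reason this remains only a partial safeguard, is that Llama Guard is itself open-source: a determined actor can simply run the underlying model without the filter. Embedded classifiers raise the cost of casual misuse rather than preventing determined misuse.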

Additionally, the BigScience Workshop’s development of the BLOOM model shows how the open-source community can play a proactive role in AI governance. BLOOM was released with usage restrictions and detailed documentation, underscoring the value of collaboration, idea-sharing, and community-oriented standards. Together, these examples show that building guardrails for open-source AI is entirely possible; the challenge lies in scaling such efforts through enforceable policies and widely adopted industry standards.

Conclusion

As the militarisation of open-source AI models intensifies, the ability of existing governance efforts to manage the associated risks will depend on a concerted partnership between states, the private sector, and the open-source community. While transparency and accessibility are crucial to the advancement of AI, safeguards and accountability are equally important.

Asia finds itself in an exciting yet precarious situation. There is a need for stronger regional coordination and proactive engagement in global and Asia-specific governance frameworks for military AI; otherwise, the region risks becoming vulnerable to the strategic exploitation of open-source AI for military purposes.

The region does not need to start from scratch when developing regulation. Taking stock of existing efforts by states and other players is an important first step towards developing regional technical safeguards and enhancing international cooperation. The expertise and tools already exist to address some of the critical challenges posed by the militarisation of open-source AI.

What is needed is multilateral coordination and enforcement based on shared principles, although this will pose another significant challenge, given the fractious nature of the regional and global order.

About the Author

Annemarie Mugisa Acen recently graduated with an MSc in International Relations from the S. Rajaratnam School of International Studies (RSIS) at Nanyang Technological University. She interned with RSIS’ Military Transformations Programme from December 2024 to May 2025.

Categories: RSIS Commentary Series / Country and Region Studies / Technology and Future Issues / East Asia and Asia Pacific / South Asia / Southeast Asia and ASEAN / Global