    CO25173 | Strengthening Transparency at the AI-Biotech Nexus
    Julius Cesar Imperial Trajano, Jeselyn, Mely Caballero-Anthony

    12 August 2025


    SYNOPSIS

    Rapid advances in artificial intelligence are increasingly shaping biosecurity concerns. To prevent the misuse of emerging biotechnologies and AI for hostile purposes, greater transparency in both the life sciences and AI development is essential. At the same time, when applied responsibly, AI-driven tools and biotechnologies have significant potential for detecting and responding to emerging biological threats, thereby reinforcing international peace and security.

    Image source: Pexels

    COMMENTARY

    In June 2025, OpenAI, the developer of ChatGPT, warned that future AI models are likely to reach “High” capability levels in biology under its “Preparedness Framework”. This means that individuals with basic training might obtain AI tools capable of creating biological or chemical threats. OpenAI also noted that such models raise an important dual-use consideration: how to enable scientific advancement while safeguarding against harmful misuse.

    In an era of unprecedented technological convergence, AI is accelerating breakthroughs in biological research, offering hope against diseases, optimising bio-manufacturing, and enabling early detection of pandemics. However, with this potential comes a serious dual-use dilemma: the same research outputs that generate lifesaving innovations can also be misused to develop biological weapons.

    Against the backdrop of rapid advances in biological and AI research, the global security landscape has been transformed significantly, presenting new types of threats and capabilities. Enhancing transparency in the life sciences and in AI development is therefore crucial to mitigate the risk of frontier technologies being weaponised in ways that threaten peace and security. However, when used responsibly, AI-enabled biotechnologies and AI models can also be employed to detect and counter emerging biosecurity threats.

    Transparency Under Threat

    Large language models (LLMs) such as GPT-4 and GPT-4o have achieved transformational growth in dual-use capabilities, including supporting the design and implementation of biological and chemical research and testing protocols. LLMs could also be used to access specialised biological AI models and perform complex scientific tasks. This means that in the near future, AI will likely reduce the cost of biological innovation and help less experienced researchers use increasingly complex and powerful biological tools.

    The same underlying capabilities that drive progress, such as analysing biological data, predicting chemical reactions, or guiding laboratory experiments, could also be misused to enable people with minimal expertise to recreate biological threats or assist highly skilled actors in developing bioweapons.

    Recent evaluations by OpenAI and others indicate that frontier AI systems may soon enable novice actors to replicate known biothreats. This growing “novice uplift” risk highlights a critical gap in current biosecurity governance – a lack of transparency in how AI tools intersect with sensitive biological knowledge and capabilities. Weak national regulatory oversight compounds the problem, undermining trust and transparency, particularly as private AI labs, startups, and universities conduct powerful research outside government purview.

    Without transparency, the bio-AI field becomes opaque, trust declines, and regulatory blind spots expand. A key concern is the lack of clear responsibility and accountability for AI companies regarding the dual-use potential of their technologies, especially in the context of biological research. The question arises: What share of responsibility should AI developers bear relative to the scientists using AI models in their research?

    One source of this growing concern is the proliferation of cloud labs. These are automated laboratories operated remotely through cloud-based platforms, enabling researchers to conduct and oversee experiments using robots without being physically present. They offer benefits such as increased flexibility, lower costs, and scalable operations. However, these facilities also carry potential risks, including the capacity to produce harmful pathogens.

    How AI Can Help Boost Transparency in Biological Research

    Although AI presents certain risks, it also offers significant opportunities to strengthen biological arms control and reduce biological threats. Since the Biological Weapons Convention (BWC) lacks a formal verification mechanism, AI and other emerging capabilities could help develop innovative measures to ensure compliance by state and non-state actors.

    AI’s ability to analyse large volumes of data to identify trends could significantly enhance disease surveillance, enable early warning systems, and accelerate response efforts. When harnessed effectively, these capabilities could improve cross-border assistance in cases of BWC violations and help to reduce the impact of any use of biological weapons.
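    As a purely illustrative sketch of this trend-detection idea (not a description of any actual surveillance system), the short Python example below flags weeks whose reported case counts deviate sharply from a recent baseline. The data, window size, and threshold are hypothetical assumptions; operational biosurveillance relies on far richer spatial, genomic, and multi-source models.

```python
# Illustrative only: a toy early-warning check over weekly syndromic case counts.
# All numbers and thresholds are hypothetical and far simpler than real systems.
from statistics import mean, stdev

def flag_anomalies(weekly_counts, window=8, z_threshold=3.0):
    """Return (week index, z-score) pairs where a count far exceeds the recent baseline."""
    flagged = []
    for i in range(window, len(weekly_counts)):
        baseline = weekly_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline: skip rather than divide by zero
        z = (weekly_counts[i] - mu) / sigma
        if z >= z_threshold:
            flagged.append((i, round(z, 1)))
    return flagged

# Hypothetical counts: a stable baseline followed by an unusual spike in week 10.
counts = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13, 41]
print(flag_anomalies(counts))  # [(10, 17.2)] -> week 10 warrants investigation
```

    A signal like this would of course only be a prompt for human epidemiological follow-up, which is precisely where the transparency and cross-border cooperation discussed above matter.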

    AI tools can be utilised in several ways to identify and react to biological threats:

    • AI applications could include mapping and analysing terrorist networks, behavioural patterns, social media activity, and travel data to predict and disrupt biologically related activities and attacks.

    • AI-enabled capabilities could be utilised to analyse data, images, and overhead satellite imagery to help differentiate between legitimate biological research and suspicious, weapons-related activities.

    • AI can be used to identify and disrupt illicit procurement networks that attempt to bypass export control measures and acquire dual-use biological materials.

    • AI applications can support attribution efforts by identifying unique signatures of biological samples or incidents, helping to trace their origins or the responsible parties.

    AI could, therefore, enable more effective and targeted policy responses and could deter the development or usage of biological weapons.

    ASEAN’s Role in Enhancing AI Transparency for Biological Arms Control

    Without a verification mechanism under the BWC, regional transparency measures are vital for building trust and ensuring compliance. As AI becomes more integrated into biotechnology and disease surveillance, there is a growing need for mechanisms that ensure the transparent and responsible use of AI, especially in its dual-use applications. There are cogent reasons why ASEAN, as a regional organisation with a long-standing focus on confidence-building, dialogue, and non-traditional security cooperation, is well-positioned to support efforts that promote AI transparency in the biological domain.

    Firstly, ASEAN’s institutional platforms, such as the soon-to-be-established ASEAN Biosafety and Biosecurity Network (ABBN) and the ASEAN Health Cluster 2 on “Responding to All Hazards and Emerging Threats”, can serve as venues for dialogue on responsible AI development and application in biotechnology. These platforms have already facilitated discussions on other emerging technologies and regional security concerns. They can be expanded to include structured dialogues on AI-related risks, norms, and transparency practices, particularly in relation to biological threat mitigation.

    Secondly, ASEAN member states could consider establishing a regional voluntary reporting mechanism or transparency initiative focused on the AI-biotech nexus. While formal verification under the BWC remains out of reach, such voluntary measures, including the sharing of national AI-biosecurity policies, oversight frameworks, and ethical guidelines, can foster regional confidence and demonstrate collective commitment to non-proliferation and responsible innovation. These transparency efforts could be integrated into the future ABBN, which is currently under development.

    Thirdly, ASEAN can promote the standardisation of AI governance principles related to biological security, such as accountability, auditability, and traceability. Given that AI systems used in biosurveillance and data analysis may involve opaque or proprietary algorithms, a regional framework that encourages transparency about the sources, methods, and limitations of AI tools can help reduce misperceptions and build mutual trust, especially in sensitive areas such as attribution or threat detection.
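    To give a concrete sense of what transparency about sources, methods, and limitations could look like in practice, the sketch below outlines a hypothetical voluntary reporting record for an AI tool used in biological research or biosurveillance. The field names and example values are assumptions for discussion only, not an agreed ASEAN, ABBN, or BWC reporting format.

```python
# Purely illustrative: a hypothetical voluntary transparency record for an AI tool.
# Field names are assumptions for discussion, not an established reporting standard.
from dataclasses import dataclass, field

@dataclass
class AIToolTransparencyRecord:
    tool_name: str                                           # e.g. a biosurveillance or design model
    developer: str                                           # lab, company, or university responsible
    intended_use: str                                        # declared purpose of the tool
    dual_use_assessment: str                                 # summary of misuse risks considered
    data_sources: list[str] = field(default_factory=list)    # provenance of training/input data
    known_limitations: list[str] = field(default_factory=list)
    oversight_contact: str = ""                              # national authority or review board, if any

record = AIToolTransparencyRecord(
    tool_name="Regional outbreak trend analyser (hypothetical)",
    developer="Example University Biosecurity Lab",
    intended_use="Early warning from aggregated, anonymised case counts",
    dual_use_assessment="Low: no pathogen design capability; outputs are statistical alerts",
    data_sources=["National notifiable-disease reports"],
    known_limitations=["Sensitive to reporting delays", "No genomic analysis"],
)
print(record.tool_name)
```

    Even a lightweight, voluntary record of this kind would make it easier for member states to compare oversight practices and spot gaps, without requiring disclosure of proprietary model details.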

    Fourthly, ASEAN’s emphasis on capacity-building and narrowing of development gaps across member states can be directed toward enhancing AI literacy, technical skills, and regulatory coherence. This includes supporting national governments in establishing safeguards against AI misuse in biological research and enabling them to participate meaningfully in regional transparency initiatives. Tailored technical assistance, knowledge-sharing platforms, and public-private dialogues on AI in biosciences could contribute to a more balanced and inclusive regional approach to transparency.

    Fifthly, ASEAN can advocate for greater involvement of scientists, ethicists, and civil society in regional AI-biosecurity governance, fostering a multi-stakeholder model that enhances transparency and legitimacy. Science diplomacy, long promoted within ASEAN, can be expanded to encompass the AI-biotech frontier, encouraging cooperative research and peer exchanges that promote openness and collaboration in the biological sciences.

    Conclusion

    By emphasising transparency over verification, ASEAN can carve out a pragmatic and constructive role in supporting the BWC’s objectives in the age of AI. In doing so, ASEAN helps to bridge the gap between developed and developing countries, between innovation and regulation, and between security and scientific openness – ensuring that AI is not only protected against misuse but actively harnessed to strengthen the global biological security regime.

    About the Authors

    Julius Cesar Trajano and Jeselyn are respectively Research Fellow and Research Analyst at the Centre for Non-Traditional Security Studies (NTS Centre) at the S. Rajaratnam School of International Studies (RSIS). Mely Caballero-Anthony is Professor and Head of the NTS Centre, as well as the Biosecurity and International Security Lead of the Asia Centre for Health Security.

    Categories: RSIS Commentary Series / Country and Region Studies / International Political Economy / International Politics and Security / Non-Traditional Security / Technology and Future Issues / International Economics and Security / East Asia and Asia Pacific / South Asia / Southeast Asia and ASEAN / Global