    CO24017 | Singapore’s Proposal on Global Generative AI Governance
    Jose Miguelito Enriquez

    30 January 2024


    SYNOPSIS

    Singapore’s proposed Model AI Governance Framework for Generative AI is a step in the right direction for global generative AI governance, but advocates of global regulation face a tough road ahead. Reaching an equitable and viable structure will require engaging all the parties involved.

    Image: Singapore’s Proposal on Global Generative AI Governance. Source: Canva

    COMMENTARY

    On 16 January 2024, the AI Verify Foundation and the Infocomm Media Development Authority (IMDA) published their Proposed Model AI Governance Framework for Generative AI. While it is the third iteration of their Model AI Governance Framework, this version is the first to focus squarely on regulating Generative Artificial Intelligence (Gen AI) models such as Google’s Gemini and OpenAI’s Generative Pre-trained Transformer (GPT).

    The rapid mainstreaming of Gen AI models, spurred by the launch of OpenAI’s ChatGPT chatbot in November 2022, is responsible for the so-called “AI boom”. This swift development has, in turn, made the need to regulate AI and guarantee its secure development and use increasingly urgent.

    Several jurisdictions have already responded to this need. In December 2023, the European Union reached a provisional deal on its AI Act. In October 2023, US President Joe Biden signed an executive order to ensure safe AI development in the United States. Still, other countries, like the United Kingdom, have put their plans to pass legislation on hold out of concern that it could restrict innovation.

    A Blueprint for Global Dialogue

    Singapore enters the conversation on Gen AI governance not with a domestic law but with a guiding framework towards a global AI regulatory system. The framework focuses on nine key elements to build confidence in the AI ecosystem: accountability, data, trusted development, incident reporting, testing and assurance, security, content provenance, safety and alignment, and ensuring AI for the public good.

    In this document, Singapore crafts a proposal that creates not only a trustworthy AI ecosystem for consumers but also an environment conducive to innovation by AI developers and related businesses. By providing a holistic discussion of Gen AI governance, the Model Framework is a useful blueprint for the global conversation on AI governance issues.

    The Model Framework provides concrete policy recommendations by drawing parallels to other industry regulations, such as shared accountability between AI model developers and AI-based application developers, patterned after the shared responsibility models used in the cloud computing industry.

    The document also clearly states which existing legal statutes need to be updated to cater to the novel use cases introduced by Gen AI, such as product liability protections and personal data protection. Amending data protection statutes has become especially salient: the training data used by AI developers, once an overlooked issue, is now subject to close scrutiny.

    Finally, the Model Framework also explores an issue of AI use that is often overlooked – its sustainability. While it is difficult to pin down the exact environmental impact of AI, current estimates suggest that Google’s AI operations alone could produce a carbon footprint similar to that of a small country.

    Developers contend that the current environmental impact of AI is overstated and that servers used for AI operations consume considerably less electricity than traditional data centres. However, a recent study estimated that by 2027, AI servers manufactured by chipmaker Nvidia could consume 134 terawatt-hours (TWh) of electricity, comparable to the consumption of the Bitcoin mining network today.

    It is imperative that the environmental costs of AI are regularly monitored.  In this regard, the Model Framework’s recommendation to build efficient computing centres and incentivise green energy use should be accompanied by strict requirements for AI developers to report their operations’ energy consumption and carbon emissions.
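
    The arithmetic behind such a disclosure requirement is simple, which is part of the argument for mandating it. The sketch below shows one way a reporting schema could look; the field names, figures, and the flat grid emission factor are illustrative assumptions, not anything specified in the Model Framework.

    ```python
    # Illustrative only: a minimal sketch of an energy/emissions disclosure of the
    # kind the Model Framework's reporting recommendation implies. Field names and
    # example figures are assumptions, not part of the Framework.

    from dataclasses import dataclass


    @dataclass
    class EnergyDisclosure:
        period: str                  # reporting period, e.g. "2024-Q1"
        energy_kwh: float            # metered electricity used for AI workloads
        grid_emission_factor: float  # kg CO2e per kWh, from the local grid operator
        renewable_share: float       # fraction of energy matched by renewables (0-1)

        def emissions_kg_co2e(self) -> float:
            # Attribute grid emissions only to the non-renewable share; real
            # accounting standards (e.g. the GHG Protocol) are more involved.
            return self.energy_kwh * (1 - self.renewable_share) * self.grid_emission_factor


    if __name__ == "__main__":
        report = EnergyDisclosure(
            period="2024-Q1",
            energy_kwh=2_500_000,       # hypothetical quarterly figure
            grid_emission_factor=0.4,   # hypothetical kg CO2e/kWh
            renewable_share=0.3,
        )
        print(f"{report.period}: {report.emissions_kg_co2e():,.0f} kg CO2e")
    ```

    Even a simple, standardised schema of this kind would let regulators and researchers compare developers’ footprints over time, which is what regular monitoring requires.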

    Mitigating Harms and Navigating Contentious Issues

    The Model Framework also offers proposals to minimise harm in areas where malicious AI use could damage society, such as deepfakes. It rightly points out the urgency of instituting standardised content provenance labels that make it easier for users to know when an image or video has been edited or wholly generated through Gen AI – a harm that Singapore recently faced when a deepfake video of Prime Minister Lee Hsien Loong surfaced online.

    However, provenance labels, such as the watermarks and cryptographic provenance identified in the framework, will only be effective if all stakeholders agree on a single, interoperable, tamperproof labelling standard. While work on open standards is ongoing, a coordinated and sustained dialogue across the public and private sectors on this key issue is needed to maintain momentum and achieve this goal.
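
    To see why a single tamperproof standard matters, consider how cryptographic provenance binds a claim to a piece of content. The sketch below is a simplified, hypothetical example in the spirit of open efforts such as C2PA, not the specific mechanism the Model Framework prescribes; the manifest fields are assumptions chosen for clarity.

    ```python
    # Illustrative only: a simplified cryptographic provenance label, in the spirit
    # of open standards such as C2PA. Manifest fields are assumptions for clarity.

    import hashlib
    import json

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


    def make_manifest(media_bytes: bytes, claim: str) -> bytes:
        """Bind a provenance claim (e.g. 'AI-generated') to a hash of the media."""
        manifest = {
            "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
            "claim": claim,
        }
        return json.dumps(manifest, sort_keys=True).encode()


    # The issuer (e.g. a Gen AI service) signs the manifest with its private key.
    issuer_key = Ed25519PrivateKey.generate()
    media = b"...image bytes..."
    manifest = make_manifest(media, claim="AI-generated")
    signature = issuer_key.sign(manifest)

    # Anyone holding the issuer's public key can check that neither the media hash
    # nor the claim has been altered; any tampering invalidates the signature.
    try:
        issuer_key.public_key().verify(signature, manifest)
        print("provenance label verified")
    except InvalidSignature:
        print("label has been tampered with")
    ```

    The sketch also shows where interoperability bites: unless platforms agree on the manifest format and on which issuers’ keys to trust, a label that verifies in one ecosystem cannot be checked in another, which is precisely the coordination problem the dialogue above would need to resolve.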

    Moreover, while the Model Framework maps out an ambitious policy roadmap that tackles the entire AI development process, it appears less instructive on managing copyright concerns, a topic that could become the most contentious in Gen AI governance.

    The issue recently came to a head when several lawsuits alleged that AI developers had trained their models on the copyrighted works of authors, journalists, and musicians without obtaining prior permission.

    The Model Framework does not make a concrete proposal to resolve these concerns. It appropriately states that continuous dialogue is required to produce a viable solution that balances copyright concerns with the need for AI developers to access quality training data.

    Elsewhere, countries have also grappled with how to move forward on this issue. In the UK, an early proposal to allow AI developers to freely use copyrighted material as training data was criticised by several members of Parliament. In the US, several lawmakers supported a proposal to require AI companies to pay licensing fees to use copyrighted material, but it was met with criticism from AI industry executives.

    It is still unclear what a viable solution to AI’s copyright dilemma would look like. However, policymakers around the world need to explore possible options now to keep pace with the innovation taking place within the AI industry. Concerns within the industry must also be weighed against the rights of creative individuals whose livelihoods and bodies of work are at risk from the continuing intrusions of Gen AI.

    The Road to Global Regulation

    As the AI boom shows no signs of slowing down, managing Gen AI’s most disruptive effects should be a discussion taking place at the international level. Singapore’s latest Model AI Governance Framework offers a compelling roadmap to advance a global framework and a state-led response to today’s challenges in Generative AI governance.

    However, even with elevated enthusiasm for Gen AI governance, it may take a while to arrive at a global agreement. If the EU’s experience with the AI Act is any indication, these negotiations could become heated and contentious, and at times even break down due to divergent state and stakeholder interests.

    To prevent a repeat of the protracted discussions in the EU, advocates for global AI governance like Singapore could benefit from initially convening informal dialogues with a smaller group of like-minded governments as well as with business leaders, civil society organisations, and AI developers.

    Continuously engaging in dialogues will help generate cross-stakeholder support around the proposals laid down in the Model Framework, which will then provide momentum once the conversation is expanded to a wider global forum.

    About the Author

    Jose Miguelito Enriquez is an Associate Research Fellow in the Centre for Multilateralism Studies at S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore. His research interests include digital economy governance in ASEAN, populist foreign policy, and Philippine politics and foreign policy.

    Categories: RSIS Commentary Series / Country and Region Studies / International Politics and Security / Regionalism and Multilateralism / Singapore and Homeland Security / Technology and Future Issues / East Asia and Asia Pacific / South Asia / Southeast Asia and ASEAN / Global