CO21178 | The Paradox of Scaling AI: New Age or Future Winter?
Manoj Harjani, Teo Yi-Ling

14 December 2021


SYNOPSIS

As AI matures, constraints to its future progress are also emerging. The question is whether this paradox will lead to another “AI winter” or provide the necessary brakes for a potential runaway train. What could this mean for Singapore’s ambition to be a “living laboratory” for global AI solutions?


Source: Pixabay

COMMENTARY

DEPENDING ON whom you speak to, we are either entering an age where artificial intelligence (AI) is propelling humanity forward or inexorably developing a superintelligence that will cause an existential catastrophe. The reality of AI’s progress is far more prosaic. While there has been considerable advancement in research and in commercial applications, AI has consistently disappointed both techno-optimists and pessimists.

This does not mean that AI lacks transformative potential. Indeed, the past five years alone have witnessed significant developments, particularly in applications of machine learning. Nevertheless, as AI matures, its limitations have also come to light, ranging from biased output to ever-increasing computing resources required to train and deploy models. These limitations are not trivial as AI becomes more embedded in daily life.

Paradox: Avoiding Another “AI Winter”

It is this paradox — that as AI scales, we are discovering more obstacles to its future growth — which governments, companies and researchers must reckon with. It is unclear whether these obstacles will lead to another “AI winter”, in which investment and research decline, or will instead focus attention on the shortcomings of existing AI-based systems and their knock-on societal implications.

Despite what some techno-optimists might suggest, we are a considerable distance from achieving AI that can scale itself. Humans are still very much “in the loop” when it comes to AI’s prospects for achieving scale. However, it remains to be seen whether this human factor, rather than data or hardware, will be instrumental in avoiding another “AI winter”.

One challenge is that researchers appear to be prioritising the development of novel techniques rather than making existing applications work better for society. In contrast, when we look at companies, there might be a “winter by stealth”: on the surface, AI innovation continues apace, but brakes are being applied selectively where applications are generating obviously negative consequences.

Recent examples of this include Twitter’s algorithmic bias bounty challenge for its image cropping tool, and Meta shutting down the use of facial recognition on Facebook.

However, many governments have yet to tangibly address the larger issue of how to make AI technology accountable to society. High-minded lists of ethical principles and abstract national strategies do little to ensure that societal harms are mitigated and appropriately penalised, let alone incentivise the creation of safe and trustworthy AI-based systems.

The European Union’s approach is a clear exception in this regard. While far from perfect, its draft AI legislation attempts to introduce a risk-based framework for regulating AI and protect consumers from potential harms through stiff penalties.

What is “Success” for AI?

These developments raise the question of what “success” will look like for AI. Currently, success seems to mean that the output or outcomes of AI deployment function as expected. Whether or not this expectation accounts for the successful implementation of ethical AI principles — “ethics by design” — is less clear. There have been significant examples of correctly functioning AI-based systems producing discriminatory and unfair outcomes.

Globally, the conversation about ethical AI has moved from identifying and defining principles to describing what trustworthy AI is. While this is a welcome and important change, a concern is that this may result in box-ticking exercises that, when completed, bestow upon an AI-based system a false gloss of trustworthiness.

To avoid such “trust-washing”, it is important to interrogate the ethics of actions undertaken throughout the development and deployment process. A continuous and progressive assessment contrasts with current suggestions for ethical AI audits, which have to contend with sunk costs as they typically occur after the fact.

If a claim of observing ethics by design is to mean anything at all, ethical practice must be active, real-time, and integrated into development workflows, not simply a consequential debriefing or reckoning.

The question then becomes whether a chain of accountability for trustworthiness can be established through such exercises, and whether integrity is carried all the way along its links. It will also be important to address the prevailing sentiment in some quarters that taking ethics into account “chills” or stifles AI development.

Governments can play an important role here by setting clear and transparent standards for investment in and procurement of AI-based systems. This will incentivise research and applications to prioritise trust and safety, and can be complemented by safety regulations similar to the EU’s draft AI legislation, thereby ensuring that consumers are protected from harm.

Implications for Singapore: Is ‘Living Lab’ Goal Still Viable?

If AI is intended to become a key driving force of Singapore’s Smart Nation initiative, this is not yet evident in how resources are currently being allocated. Only around 13% of the government’s overall ICT procurement budget for the 2021 financial year (~S$500 million out of an estimated S$3.8 billion) was earmarked for AI-related projects.
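The budget share quoted above can be sanity-checked with simple arithmetic. A minimal sketch, using the approximate figures cited in this commentary (the variable names are illustrative, not official budget line items):

```python
# Approximate FY2021 figures quoted above, in S$ millions.
ai_budget = 500          # ~S$500 million earmarked for AI-related projects
total_ict_budget = 3800  # ~S$3.8 billion overall ICT procurement budget

# Compute the AI-related share of overall ICT procurement.
share = ai_budget / total_ict_budget
print(f"AI-related share: {share:.1%}")  # roughly 13%
```

The estimate is consistent with the “around 13%” figure in the text.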

In addition, it is currently unclear how much additional funding from the Research, Innovation and Enterprise 2025 Plan launched in 2020 has been allocated to AI Singapore, the national research programme for AI, on top of the existing S$150 million committed in 2017 over five years.

Two years have passed since the National AI Strategy was first launched. Questions around the viability of a “hub strategy” remain, and are now joined by new concerns around ensuring trust and safety. Is Singapore’s goal of being a “living laboratory” for global AI solutions still viable, and if so, what should its characteristics be in light of these concerns?

This is an opportunity to re-evaluate Singapore’s notion of success for AI and re-align resource allocation more closely with the relative importance attached rhetorically to AI within the larger Smart Nation initiative. Singapore is still a leader in the region when it comes to AI, but needs to take concerted action in order to sustain its larger ambitions on a global scale.

About the Authors

Manoj Harjani is a Research Fellow with the Future Issues and Technology (FIT) Cluster, and Teo Yi-Ling is a Senior Fellow with the Centre of Excellence for National Security (CENS) at the S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore.

Categories: RSIS Commentary Series / Country and Region Studies / Cybersecurity, Biosecurity and Nuclear Safety / International Politics and Security / Technology and Future Issues / East Asia and Asia Pacific / South Asia / Southeast Asia and ASEAN / Global
