    CO19128 | AI Ethics 2.0: From Principles to Action
    Danit Gal

    02 July 2019


    SYNOPSIS

    Trying to catch up with the fast-moving and increasingly pervasive development and use of AI, nations want to establish ground rules to ensure the technology benefits humanity. This is no easy task, further complicated by the mismatch between abstract AI ethics principles and existing technical capabilities and human practices.

    COMMENTARY

    IN THE past couple of years, discussions on AI ethics have become the norm, with a variety of actors putting forth over 40 sets of largely overlapping principles. These principles include: accountability, controllability, diversity, explainability, fairness, human-centricity, transparency, safety, security, and sustainability.

    While this creates a shared language assisting countries in addressing similar concerns, the local interpretation of these principles can differ widely, often leading to a deep sense of confusion. With the wide proliferation of such AI ethics principles, practitioners are getting closer to agreeing on what they should be in theory, but not on how to make them work in practice.

    Problem of Effective Implementation

    Principles like the ones recently published by the OECD are illustrative. They combine many existing works on AI ethics principles into another fairly generic guideline. This level of abstraction makes it appealing enough to create an important international consensus.

    It is also, however, vague enough to allow local actors to interpret the principles as they see fit within their own social and cultural contexts. This diversity of interpretation is essential in ensuring that benefits brought to humanity by using AI are inclusive. The problem is, therefore, one of effective implementation.

    The problem of effective implementation is the test bed of these principles, which can often be detached from technical capabilities and human practices. Can these AI ethics principles be codified into technical and human practices? In theory, yes. The abovementioned principles benefit us. They are intended to keep us safe and help us all benefit from the use of AI.

    In practice, however, they face real-life conflicts of interest such as corporate profitability, individual and collective biases and inequality, low general levels of technical literacy, the sanctification of progress, and the desire for constant convenience.

    Moving from Principles to Action

    The good news is that this problem is already being partially solved. Individuals and institutions working in the AI ethics field are moving from principles to action. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, run by the world’s largest association of technical professionals, has led to the creation of a series of technical standards on AI ethics.

    The Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) community brings together researchers and practitioners developing tangible solutions. The Machine Intelligence Research Institute (MIRI), the Center for Human-Compatible AI (CHAI), and safety teams at DeepMind and OpenAI work to develop safe and robust AI. Institutions like AI NOW, Data & Society, and various academic centers are getting to the heart of socio-technical problems and how they already impact users.
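    To make “tangible solutions” concrete, consider one of the simplest checks the FAT/ML community formalises: comparing a model’s positive-prediction rates across demographic groups (demographic parity). The sketch below is purely illustrative; the data, group labels, and threshold are invented for the example, not drawn from any specific toolkit.

    ```python
    # Illustrative sketch of a demographic-parity audit.
    # All data and the 10-point threshold below are hypothetical.

    def demographic_parity_difference(predictions, groups):
        """Return the absolute gap in positive-prediction rates between groups A and B.

        predictions: list of 0/1 model outputs
        groups: list of group labels ("A" or "B"), aligned with predictions
        """
        rates = {}
        for g in ("A", "B"):
            outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
            rates[g] = sum(outcomes) / len(outcomes)
        return abs(rates["A"] - rates["B"])

    # Hypothetical audit run: flag the model if the gap exceeds 10 percentage points.
    preds  = [1, 0, 1, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    gap = demographic_parity_difference(preds, groups)
    print(f"parity gap: {gap:.2f}")          # group A rate 0.75, group B rate 0.25
    print("flagged" if gap > 0.10 else "ok")
    ```

    Checks like this do not settle what fairness *should* mean in a given local context; they are the machinery that lets a locally chosen interpretation of a principle be tested rather than merely declared.
    
    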

    The bad news, however, is that while this work contributes immensely to the beneficial development and use of AI, it is still only the beginning. We need wider geographical participation and action to put AI ethics principles into practice and create inclusive benefits. Until we are able to achieve that, any benefits called for in AI ethics principles run the risk of staying as an idealistic vision for a speculative future.

    Coming to Terms with the Present

    While AI might still look futuristic, its early-stage applications are already as widespread as they are pervasive. To most, AI is invisible. Users cannot really see it or interact with it, and they often do not understand how it works or affects them and their actions. And yet, most regulations and AI ethics principles look towards the theoretical future and thus fail to address the implementation problem.

    Due to their soft governance nature, most AI ethics principles do not offer tangible solutions. More alarmingly, many government regulators remain unwilling to offer tangible solutions due to fears that overregulation will ‘stifle innovation’.

    This creates a false dichotomy where ethical and well-regulated developments ‘sacrifice’ speed or innovation to ensure benefit. In reality, not making this ‘sacrifice’ leads to development that is prone to structural errors, stalls in achieving market viability, and mostly just serves its developers.

    Additions to the over 40 existing sets of AI ethics principles are a positive and welcome development if they represent new concerns and population groups. But things will only change when local governments interpret and implement them in local regulations and more institutions develop technical tools and methods to put them into practice. Tangible solutions are within reach.

    The Small Country Advantage

    Smaller countries have an edge in solving this problem. As importers of technology from larger countries, smaller ones often find themselves relying on tools not developed with their social and technical needs in mind. This entails an adjustment and adaptation period for users and the technology itself.

    All governments must, therefore, invest in creating regulatory and technical sandboxes to ensure the adjustment and adaptation period goes as smoothly as possible and comes to positive conclusions. But small governments can do it faster and more efficiently. To that end, they should do two things:

    The first is to institute agile regulatory mechanisms that develop with and support the nation’s beneficial use of AI. The second is to invest in creating well-informed and resourced actors that put local AI ethics principles interpretations into practice.

    Investment in a competitive future should be about beneficial development, not just rapid development. Otherwise, our future will see us spending years trying to identify and fix the mistakes we have made in the name of careless progress, and that is the best-case scenario. In short: put well-considered theory into thoughtful local regulatory and technical practice, because 40+ sets of AI ethics principles will not work unless you do.

    About the Author

    Danit Gal is founder of the TechFlows Group technology geopolitics consultancy, and creator of the Collective Futures Network for young experts. She is a researcher working on AI ethics, safety and security. She contributed this to RSIS Commentary in cooperation with RSIS’ Military Transformations Programme.

    Categories: RSIS Commentary Series / Non-Traditional Security / Country and Region Studies / Cybersecurity, Biosecurity and Nuclear Safety / Global / South Asia / Southeast Asia and ASEAN
