    CO19137 | Debating Artificial Intelligence: The Fox versus the Hedgehog
    Donald K. Emmerson

    09 July 2019

    SYNOPSIS

    Singapore in Southeast Asia and Stanford University in the United States are focal points for discussions of AI and how it can be made to help, not hurt, human beings. A recent panel at Stanford illustrates the difficulty and necessity of bringing both generalist and specialist perspectives to bear on the problem.

    COMMENTARY

    SINGAPORE HAS been described as “a thriving hub for artificial intelligence” (https://www.businesstimes.com.sg/opinion/artificial-intelligence-in-singapore-pervasive-powerful-and-present). In May 2019, Singapore’s Personal Data Protection Commission (PDPC) released the first edition of “A Proposed Model AI Governance Framework” (https://www.pdpc.gov.sg/Resources/Model-AI-Gov).

    That “accountability-based” document would “frame the discussions around harnessing AI in a responsible way” by “translat[ing] ethical principles into practical measures that can be implemented by organisations deploying AI solutions”. The guiding principles it proposes to operationalise are that AI systems should be “human-centric” and that decisions made by using them should be “explainable, transparent, and fair”.

    Ethical Principles in AI

    Ethical principles are crucial in AI, but they are philosophical in character, whereas practical measures are technical. While Singaporeans discuss which principles to adopt and how to put them into practice, variations on that conversation are underway in Silicon Valley. A case in point is a recent discussion of AI at Stanford University, whose Artificial Intelligence Lab was established in 1962.

    This commentary focuses on how differently scholars in the humanities, compared with their colleagues in computer science, may approach the challenge of making AI “human-centric”.

    At Stanford in April 2019, before an audience of nearly 1,700 people, a panel on AI (https://www.youtube.com/watch?v=d4rBh6DBHyw) brought together a fox and a hedgehog. The “fox” was a historian, Hebrew University of Jerusalem professor Yuval Noah Harari. The “hedgehog” was an engineer, Stanford professor Fei-Fei Li.

    A poet in ancient Greece is said to have coined these metaphors by remarking: “The fox knows many things, but the hedgehog knows one big thing.” The contrast is often used in academic discourse to distinguish generalists from specialists. Viewed in that light, Yuval Harari’s latest book, 21 Lessons for the 21st Century (https://www.theguardian.com/books/2018/aug/15/21-lessons-for-the-21st-century-by-yuval-noah-harari-review), is an eclectic read worthy of a fox. The titles of its chapters include “God,” “War,” “Humility,” and “Science Fiction”. The subject of AI crops up as well.

    The Hedgehog

    As an undergraduate at Princeton, Fei-Fei Li co-edited a book, Nanking 1937: Memory and Healing (2002), that delved hedgehog-style into “one big thing” — the Nanking Massacre. Since earning her doctorate in electrical engineering, Li has understandably concentrated on working and publishing in her discipline, computer science. Her specialty is AI, whose importance surely qualifies it as “one big thing,” if only as shown by the huge turnout for the panel.

    The conversation between Harari and Li was intriguing but incomplete. Prof. Li co-directs Stanford’s Human-Centered AI Institute. “Human-Centered AI” activity sounds foxy — interdisciplinary. It was Harari, however, who played the boundary-crossing fox by linking infotech with biotech to suggest that their overlapping could gestate an ability and a proclivity to “hack human beings”.

    Linking AI to psychology, he wondered whether personal decisions could someday be “outsourced to algorithms”. Could neuroscientific AI be used to “hack love” by causing an infatuation that would not otherwise have occurred? Harari brought illness in as well: “In a battle between privacy and health,” he predicted, “health will win.”

    Shifting into political science, he worried that AI could become a “21st century technology of domination”. Others share his anxiety. On biotech, for instance, there is Jamie Metzl’s just-published Hacking Darwin: Genetic Engineering and the Future of Humanity (https://www.npr.org/2019/05/02/718250111/hacking-darwin-explores-genetic-engineering-and-what-it-means-to-be-human).

    Hedgehogs & Foxes: Collaboration Needed

    Harari’s concerns almost made “human-centered AI” sound oxymoronic. But as a fox untrained in computer science, he lacked the knowledge that a hedgehog with digital depth would have brought to bear on the topic. Li had the necessary expertise on AI. But she did not respond to Harari’s worries and speculations beyond assuring him and the audience that interdisciplinarity and ethics were definitely on her institute’s agenda.

    Without hedgehogs to keep them realistic, foxes can get carried away. Without foxes to keep them contextual, hedgehogs can silo themselves. Helpful in this context — forgive the foxy term — is a vigorous recent defence of foxiness as a career choice: David Epstein’s Range: Why Generalists Triumph in a Specialized World.

    Already someone somewhere may be drafting an antithesis to the foxiness of Range. Perhaps its title will be Depth: Why Specialists are Necessary in a Generalist World.

    In any case, to this author’s shallow knowledge, foxes and hedgehogs are not sworn enemies, either on paper or in nature. So here’s to deep range and wide-ranging depth, unlikely in the work of a single scholar, but possible through animalian collaboration.

    About the Author

    Donald K. Emmerson, a confessed fox, heads the Southeast Asia Program in the Shorenstein Asia-Pacific Research Center at Stanford University, where he is also affiliated with the Abbasi Program in Islamic Studies and the Center on Development, Democracy, and the Rule of Law. He contributed this article specially to RSIS Commentary. His edited book, The Deer and the Dragon: Southeast Asia and China in the 21st Century, is forthcoming in 2019.

    Categories: RSIS Commentary Series / Country and Region Studies / Cybersecurity, Biosecurity and Nuclear Safety / Non-Traditional Security / Global