    CO19244 | AI: Could It Be More Ethical than Humans?
    Richard Bitzinger

    04 December 2019

    SYNOPSIS

    Artificial intelligence in autonomous systems such as drones can address the problems of human error and fatigue and, in the future, may also address concerns over ethical behaviour on the battlefield. Installing an algorithmic “moral compass” in AI, however, will be challenging.

    COMMENTARY

    A COMMON theme in many discussions of the military uses of artificial intelligence (AI) is the “Skynet” trope: the fear that AI will become self-aware and decide to turn on its masters. Inherent in this argument is the contention that AI does not share the ethical constraints that humans do.

    While almost certainly an exaggeration, the Skynet scenario does highlight the problem of ensuring that the ethical behaviour we believe is incumbent on humans in combat is not lost as we increasingly devolve battlefield decision-making to autonomous systems. In fact, we may have it the wrong way around: rather than being less ethical than humans, AI might be programmed to be more ethical. And that could have both positive and negative repercussions.

    Ubiquity of Drones on the Battlefield

    AI on the battlefield is generally tied to autonomous systems, in particular drones and other robotic systems. Traditionally, drones have been assigned to replace humans in roles and missions synonymous with the “three Ds”: dull, dirty, or dangerous. Originally, drones – particularly unmanned aerial vehicles (UAVs) – were used for surveillance and reconnaissance over enemy territory.

    Today, aerial drones are also employed as communications relays, or for laser-targeting or even electronic warfare (jamming). Other functions include policing duties, border patrol, and bomb disposal, all of which have expanded the utility and importance of robots.

    The drawback, of course, is that such drones do not currently operate totally autonomously. There is always a human in the loop, usually controlling a drone remotely. This brings its own set of challenges. The command and control (C2) of UAVs, let alone armed drones, is quite demanding. The support network behind drone use is enormous, especially for long-distance, long-endurance operations.

    Drones often require satellites for target acquisition and strike-control, as well as secure datalinks; without satellites, drones need line-of-sight datalinks or relay aircraft to remain in contact with remote operators. Pilots do not come cheap, either: they have to be as skilled as a pilot of a manned aircraft, and the fatigue factor of remotely operating a drone can be high.

    Moreover, drones have an incredibly high loss rate. A US Air Force (USAF) report in March 2009 showed that it had lost 70 Predators in air crashes during its operational history up to that time. Fifty-five were lost to equipment failure, operator error, or weather, while four had been shot down by enemy forces and 11 more were lost to accidents on combat missions. According to another USAF report, this time from 2015, the Predator, Reaper and Global Hawk drones were “the most accident-prone aircraft in the Air Force fleet”.

    AI on the Battlefield

    AI promises to be the solution to these problems of human fatigue and error. AI can take over many of the more monotonous tasks of remote piloting. In fact, the US Army already has such manned-unmanned teaming in place. The intention of such teaming, according to a 2012 report by the US Defence Science Board, is not to replace humans, “but rather to extend and complement human capability”.

    It is only a few steps further, however, before AI takes over the lion’s share of such operations. This could mean AI ultimately replacing the final, still-human-centric decision as to when to launch a strike against a supposed military target. 

    The concern here is self-evident: besides dealing with human fatigue, how do we ensure that the embedded AI is sophisticated enough to differentiate between civilian and military targets, and able to minimise civilian collateral damage sufficiently? In other words, how do we avoid AI-initiated massacres?

    One answer is always to keep a human involved, operating a “kill switch” to prevent such tragedies. However, that negates much of the reason for devolving operations to AI in the first place, that is, to remove the possibility of error due to emotion, fatigue, or other human shortcomings.

    Are Humans Really that Ethical?

    We like to think of ourselves as fundamentally moral people, but in fact humans are capable of engaging in unethical behaviour almost on a daily basis. We run red lights, cheat on our taxes, or call in sick when we just want to stay home. More to the point, we are quite capable of forgiving ourselves for our ethical lapses. This is perhaps acceptable for small offences, but humans are also quite good at rationalising large ethical failures.

    Nowhere is this perhaps more evident than on the battlefield. Looting, the shooting of prisoners of war, and the brutalisation of civilians are sadly all too widespread. Worse, even in societies where one would think that ethics would be held in high esteem, there is an enormous capacity for excusing, validating, and condoning such behaviour. Look at the My Lai massacre in Vietnam, Abu Ghraib in Iraq, and, more recently, President Trump’s pardon of three soldiers convicted of war crimes. In these cases and more, few people are punished, the punishments are slight or even reduced, and the behaviour is rationalised.

    Could AI be More Ethical than Humans?

    AI, of course, is no more ethical than the people who program it. However, that also means that such programs could be more rigidly ethical, because their ethical constraints would be hard-wired into their systems. In other words, some kind of algorithmic “moral compass” could be installed in autonomous systems, intended to look for and avoid ethical dilemmas.

    Not that any of this would be easy. Such AI would require human checks and readjustments to its machine learning. In particular, the algorithms for AI intelligence-gathering and processing would have to be highly reliable, to ensure that autonomous systems can clearly differentiate between military and civilian targets.

    Such AI might even be “too ethical” if it develops “zero tolerance” for error and refuses ever to attack a target for fear that civilians might be hurt. And we have to keep in mind that machine learning can also be susceptible to bias and to adversarial attacks.

    The point is, however, that AI could conceivably deal with the problems of human error and fatigue, as well as ethics, in the battlespace. It is not only a technical fix but a philosophical one as well.

    About the Author

    Richard A. Bitzinger is a Visiting Senior Fellow with the Military Transformations Programme at the S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore. He was formerly with the RAND Corp. and the Center for Strategic and Budgetary Assessments.

    Categories: RSIS Commentary Series / Conflict and Stability / Country and Region Studies / Cybersecurity, Biosecurity and Nuclear Safety / International Politics and Security / East Asia and Asia Pacific / South Asia / Southeast Asia and ASEAN / Global
