S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University
    IP22012 | Putting Principles into Practice: How the U.S. Defense Department is Approaching AI
    Megan Lamberth

    04 March 2022


    SYNOPSIS

    The U.S. Department of Defense is working toward adopting and implementing the concept of Responsible Artificial Intelligence, or RAI. The Defense Department must maintain strong momentum to ensure its RAI principles are actionable and repeatable across the DoD’s myriad offices, mission sets, and priorities.

    COMMENTARY

    The U.S. Defense Department (DoD) is wrestling with how to institutionalize the concept of “Responsible AI” (RAI) – the belief that artificial intelligence (AI) systems should be developed and deployed safely, securely, ethically, and responsibly. The idea of RAI is built upon a years-long effort by the DoD to articulate and implement policies and principles around the appropriate and ethical use of AI capabilities.

    Artificial Intelligence (AI) is widely regarded as the next big technological development. Photo by Mike Mackenzie on Flickr.

    RAI is the next step in this progression. In a May 2021 memo, Deputy Secretary of Defense Kathleen Hicks explained that it was critical for the DoD to create a “trusted ecosystem” in AI that “not only enhances our military capabilities, but also builds confidence with end-users, warfighters, and the American public.” The memo tasked the Joint Artificial Intelligence Center (JAIC) – a central body that seeks to synchronize AI activity across the DoD – with coordinating the development and implementation of RAI policies and guidance.

    While the application of RAI is still at a nascent stage, the DoD’s continued messaging and prioritization of safe and ethical AI is important, and shows that the Pentagon’s interest is not waning. The Defense Department, and the JAIC in particular, will have to keep momentum strong, working to ensure RAI principles and practices are ultimately digestible, actionable, and repeatable across the DoD’s myriad components.

    Defense Department’s Progress on AI

    The concept of RAI is the result of nearly four years of effort by the Defense Department to define its AI strategy and priorities. The DoD first laid the groundwork in June 2018 with the creation of the JAIC, and released its first AI strategy eight months later, which called for the adoption of “human-centered” AI. The strategy also promised U.S. leadership in the “responsible use and development of AI” by articulating a set of guiding principles.

    Those guiding principles were conceived of and established by the Defense Innovation Board (DIB)—a federal advisory committee of technology experts—in October 2019, and were adopted by the Defense Department three months later. The five ethical principles—Responsible, Equitable, Traceable, Reliable, and Governable—were meant to serve as foundational guidance for the Defense Department’s approach toward AI.

    Deputy Secretary Hicks built off these principles in her May 2021 memo, directing the DoD to implement RAI with these six tenets:

    1. RAI Governance. The DoD will create structure and processes for “oversight and accountability” and articulate policies and guidelines to “accelerate adoption of RAI within the DoD.”
    2. Warfighter Trust. The Defense Department will ensure warfighter trust through “education and training,” as well as by establishing a framework for “test and evaluation and verification and validation.”
    3. AI Product and Acquisition Lifecycle. The DoD will develop processes, policies, and guidance to ensure the implementation of RAI throughout the “acquisition lifecycle” of an AI product.
    4. Requirements Validation. The DoD will incorporate RAI into “all applicable AI requirements” to ensure its inclusion in the Defense Department’s AI capabilities.
    5. Responsible AI Ecosystem. The Defense Department will create an RAI ecosystem both nationally and globally to improve collaboration with academia, industry, and allies and partners, as well as “advance global norms grounded in shared values.”
    6. AI Workforce. The DoD will work to build an “RAI-ready workforce” to ensure “robust talent planning, recruitment, and capacity-building measures.”

    Components of the DoD, including the JAIC, the DIB, and the RAI Working Council, have been working to translate the directives and principles from the May 2021 memo into concrete guidance. In November 2021, for example, the Defense Innovation Unit (DIU) released RAI guidance for contractors looking to partner with the Defense Department. The document provides guidelines for each phase of the AI development lifecycle – planning, development, and deployment – and is intended to act as a “starting point for operationalizing” the Defense Department’s AI ethical principles.

    In addition to its work on RAI, DoD leadership has prioritized organizational changes to better streamline the Defense Department’s AI work. In December 2021, the Defense Department announced that it was creating the position of a Chief Digital and AI Officer (CDAO) – a role meant to serve as the DoD’s “senior official responsible for strengthening and integrating data, artificial intelligence, and digital solutions in the Department.” Part of the CDAO’s mission will be to align and sync activities across the JAIC, Chief Data Officer (CDO), and Defense Digital Service (DDS).

    The Defense Department’s AI priorities have also been shaped by actions taken within the broader U.S. government. For example, the National Security Commission on Artificial Intelligence (NSCAI) – a commission created by Congress to evaluate America’s AI competitiveness – released a report a year ago with dozens of recommendations aimed at shaping U.S. strategy on AI, tackling themes such as talent, investments in research and development (R&D), and institutional processes. Many of these themes were mirrored in the 2022 National Defense Authorization Act (NDAA), which authorized more investments in AI, new pathways for “digital career fields,” and a pilot program aimed at the “agile acquisition of technologies for warfighters.” These moves within the U.S. government and Congress show that lawmakers and government officials are eager for AI to remain a technology and defense priority.

    Institutional and bureaucratic barriers within the Pentagon will continue to create headwinds for adopting and widely deploying AI capabilities. As a February 2022 GAO report describes, some of these challenges, such as talent shortages and lengthy acquisition processes, are familiar ones for the DoD, while others, such as securing sufficient usable data, are unique to AI.

    These challenges are long-standing and will almost surely persist as time goes on. The Defense Department, however, is actively working to ensure it has the right policies, investments, infrastructure, and processes in place to successfully adopt responsible AI. The DoD must remain a leader in these efforts – working with the broader U.S. government, as well as with allies and partners, to ensure safe and ethical AI remains a priority.


    Megan Lamberth is an associate fellow with the Technology & National Security Program at the Center for a New American Security (CNAS). She is the author of two previous RSIS commentaries in collaboration with the Military Transformations Programme, “US’ AI Ethics Debate: Overcoming Barriers in Government and Tech Sector” and “AI Ethical Principles: Implementing US Military’s Framework.”

    Categories: IDSS Papers / International Politics and Security / Technology and Future Issues / Global
