23 June 2022
IP22035 | Military AI Governance in East Asia: Advances and Challenges
While the Chinese, Japanese, and South Korean militaries pursue AI in their defence transformation, their AI governance frameworks remain entrenched primarily in the civilian domain. WICHUTA TEERATANABODEE argues that Asian militaries need new AI governance frameworks to ensure the responsible development, deployment, and use of military AI.
COMMENTARY
Artificial intelligence (AI) systems are projected to alter the direction and character of warfare. While military AI can enhance warfighting capabilities in all domains, the diffusion of AI systems raises new problems related to ethics, responsibility, and trustworthiness.
Indeed, the lack of appropriate governance frameworks and rules of engagement could undermine the credibility of military AI. For example, allowing a fully autonomous AI-enabled weapon to fire indiscriminately at targets would risk harming civilians. Consequently, the role of humans, the level of autonomy, and accuracy — some key concerns of military AI governance — need to be clarified. At the strategic level, military AI governance could also help states set standards and leave as few policy gaps as possible for adversaries to exploit.
Despite its importance, the establishment of military AI governance — a system of rules, standards, and practices to guide the creation and application of AI — often lags behind the fast-paced technological development. This seems to be the case for East Asia, where China, Japan, and South Korea have joined the global military AI race, yet their efforts in ensuring the ethical and responsible use of such technologies have been limited.
China
Striving to boost its AI capabilities to a world-leading level by 2030, China has prioritised advances in AI, big data, and cloud computing as a national project. This ambition also drives the next wave of defence transformation for the People’s Liberation Army, referred to as the “intelligentisation” of its forces.
China has increasingly fielded robotic and unmanned systems and advanced precision-guided missiles across military domains beyond the conventional ones, including space and cyberspace. Some of these technologies reportedly possess a degree of autonomy, and their functionality arguably meets common definitions of an “AI weapon”.
China’s AI governance initiative is primarily led by three institutions: the Cyberspace Administration of China (CAC), the China Academy of Information and Communications Technology (CAICT), and the Ministry of Science and Technology (MOST). In October 2021, MOST published the Ethical Norms for New Generation Artificial Intelligence, putting forward six basic ethical requirements. Some of those highlight the protection of privacy and security, the assurance of controllability and trustworthiness, and improvements in the cultivation of ethics.
In December 2021, Beijing submitted a position paper on “regulating the military applications of artificial intelligence” to the United Nations Convention on Certain Conventional Weapons. The paper highlights concerns over the long-term impacts and potential risks of AI technology in the military domain, including ethical issues, governance, and the weaponisation of AI.
As China’s AI governance policy had until then focused on civilian use, this position paper marks its first step towards regulating military AI. Detailing the risks and weaponisation of autonomous systems in the document shows that Beijing has considered the challenges arising from the military applications of AI at the tactical level. Furthermore, raising this initiative at the UN indicates that participation in norm- and standard-setting for military AI at the global level is strategically important for China.
Japan
Japan has also taken an interest in AI technology, especially for civilian purposes, owing to labour shortages arising from its ageing society. On the defence side, the Japan Ministry of Defense (JMOD) regards AI as part of a suite of “game-changing technologies” for future warfare. In 2019, Tokyo planned to acquire and develop several unmanned vehicles and underwater drones. Its Defense of Japan 2021 white paper called for an enhancement of the technology base for defence applications, which was a bold move given that Japan’s defence establishment had been excluded from science and technology development since the end of the Second World War.
The Acquisition, Technology & Logistics Agency (ATLA) is the key actor in Japan’s development of military AI. Administered by JMOD, ATLA works with relevant stakeholders to acquire and develop military technologies. In partnership with Hitachi Global, it is reportedly developing a system to support the Japan Maritime Self-Defense Force in its patrol duties. The system would compare data from Japanese ships and satellites to identify suspicious vessels within Japan’s territorial waters.
Despite technological progress over the past decade, Japan’s efforts at governing its AI technology have been slow, even on the civilian side. The Ministry of Economy, Trade and Industry (METI) released its first comprehensive report on AI governance only in January 2022. The report proposes a human-centred social approach to creating and implementing AI principles but fails to discuss the role of AI in defence and national security.
One promising aspect, however, is that the public and experts from the private sector, academia, and law played significant roles in the report-writing process. This bottom-up approach indicates that Japan values public opinion in creating its AI governance framework, which could result in more comprehensive policies in the future.
South Korea
South Korea has also embarked on the use of military AI. In September 2018, the Ministry of National Defense (MND) said that several projects to enhance military capabilities through AI and big data technology were under way. The priorities are unmanned systems, including drones and robots; cyber defence; and the modernisation of scientific and alert systems. The MND is working with the Ministry of Science and ICT (Information and Communications Technology) to establish a working group between the government and the private sector for technology research and development.
South Korea relies on AI-based surveillance systems for national security, with North Korea being the most pressing threat. Since 2006, it has developed a robotic military sentry equipped with a machine gun and reportedly capable of acting autonomously. Several of these robots have been installed along the demilitarised zone (DMZ). In 2021, Seoul announced a plan to expand its unmanned presence at the DMZ, using rail-mounted robots with high-resolution cameras and sensors to search for suspicious movements. In addition, by 2024, the South Korean army plans to start testing a new combat system with AI-powered drones to assist decision-making on the battlefield, with the programme expected to be completed by 2040.
The progress of South Korea’s AI governance on the civilian side is comparable to that of China. In 2019, government institutions, including the Korea Communications Commission (KCC) and the Korea Information Society Development Institute (KISDI), launched the AI Ethics Principles to govern the use and development of AI. The principles encompass seven aspects: human-centred service, transparency and explainability, responsibility, safety, anti-discrimination, participation, and privacy and data governance.
However, as with the case of Japan, South Korea’s AI governance efforts are limited to the civilian side, with no mention of how these will be applied to the military. This omission is concerning, considering that “killer sentries” have already been deployed to the field.
Strategic Implications
Due to the accelerating development of AI, it is crucial that Asian militaries take responsibility for the utilisation of this technology. While the advantages offered by AI can enhance military capabilities, the potential risks must be adequately addressed to safeguard international security and order. The adoption of military AI governance can also provide strategic advantages for the states that deploy such technologies.
As China strives to reach a world-leading level in military AI — a field where its strategic competitors, such as the United States and Western European countries, are prominent actors — it needs to ensure that its values and concerns are recognised at the international level. The position paper it submitted to the UN in December 2021 is but the foundational step in Beijing’s efforts.
While Japan and South Korea might have less ambitious goals, military AI governance is nevertheless important in their cases as well. The introduction of AI technology to the military has not only changed the character of warfare but also opened doors to new opportunities, such as defence cooperation with like-minded countries. Consequently, both should step up their military AI governance initiatives to ensure that the use of AI systems is ethical and reliable, as a first step in preparing for future AI-based defence collaborations.
About the Author
Wichuta TEERATANABODEE is a Senior Analyst in the Military Transformations Programme of the Institute of Defence and Strategic Studies, RSIS.