28 September 2021
- Singapore’s AI ‘Living Lab’: Safety Rules Essential
SYNOPSIS
The impact of emerging regulations emphasising the safety of artificial intelligence (AI) in the EU and US will reach Singapore sooner rather than later. To sustain its ambition of being a “living laboratory” for AI applications, Singapore should develop its own AI safety regulations.
COMMENTARY
EARLIER THIS year, the European Union (EU) released draft regulations for artificial intelligence (AI), prompting responses ranging from criticism to praise. In the same week, the US Federal Trade Commission (FTC) issued guidance on the use of AI, highlighting the need to ensure “truth, fairness and equity.”
There are currently no federal regulations in the US for AI-driven systems. Nevertheless, the FTC said that companies deploying such systems must adhere to existing laws prohibiting unfair, deceptive or discriminatory practices. The EU’s proposed regulations similarly seek to ensure that AI-driven systems align with laws protecting fundamental rights and social values. These moves signal an emerging regulatory regime to ensure AI does not cause harm to society. Rules made in these influential jurisdictions can and will have global implications.
Addressing AI Safety
Given the government’s desire to position Singapore as a “living laboratory” to test and develop new AI applications for eventual export, there is a clear need to ensure that locally developed systems align with these emerging regulations designed to ensure AI safety.
Singapore appears to be well-placed to do this given the groundwork laid by the Smart Nation initiative, Model AI Governance Framework, and National AI Strategy. These outline a citizen-centric approach to digital transformation, provide guidelines for ethical adoption of AI in the private sector, and highlight some priority areas for public investment in AI development and deployment.
However, the EU’s draft AI legislation and the US FTC’s guidance on AI highlight the need to look beyond AI-related risks caused directly by malicious actors. As AI-driven systems become more widespread and integrated into daily life, it will eventually be necessary to address the inherent risks of AI that can materialise even when such systems function as intended.
Singapore’s current policy initiatives have yet to formally address issues surrounding the safety of AI-driven systems. Although the Model AI Governance Framework sets out a relatively robust set of guidelines for the ethical deployment of AI-driven systems, its adoption remains voluntary.
Fostering Public Trust: The Need for Legislation
Given these circumstances, it is critical for Singapore to introduce more direct oversight of the development and deployment of AI-driven systems, and ensure accountability where there are risks of harm.
Although voluntary initiatives such as the Model AI Governance Framework can be sensible at initial stages of a technology’s development and deployment, we may be fast approaching the point where trust and safety concerns need to be addressed in concrete ways through legislation or regulation.
In the same way that food safety standards and their effective enforcement provide assurance to consumers, robust AI safety regulations will be crucial to fostering public trust. Singapore’s stringent food safety standards have also been essential to the global success of its food manufacturing industry; well-designed AI safety regulations could achieve a similar effect.
However, the Model AI Governance Framework deliberately excludes off-the-shelf software that is updated to incorporate AI-based features. This exclusion could become problematic as increasingly advanced AI-based features are integrated more deeply into commonly used applications.
Framework to Classify: Something Lacking?
Another important dimension relates to the attribution of liability in cases where AI-driven systems cause unintentional harm. In a recently published report, the Law Reform Committee of the Singapore Academy of Law noted the need to legally define acceptable standards of conduct rather than letting the courts establish them over time.
Although the National AI Strategy commits to developing and deploying AI based on a “human-centric” approach, it is unclear what exactly this means in practice. Furthermore, Singapore lacks a framework to classify AI-driven systems according to their potential for causing harm.
In the EU’s draft AI legislation, a risk-based classification is used to identify obligations imposed on system providers and define activities that warrant greater scrutiny. Such an approach is worth considering to provide clarity on the scope of legislation and regulation while allaying concerns about “chilling effects” on innovation.
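To make the idea concrete, the sketch below encodes such a risk-based classification as a simple lookup. It is a minimal illustration only: the tier names mirror the four levels in the EU’s April 2021 draft, but the use cases, their tier assignments, and the function are hypothetical assumptions for illustration, not anything drawn from Singapore’s existing frameworks.

```python
# A minimal, hypothetical sketch of risk-tier classification, loosely
# modelled on the four tiers in the EU's April 2021 draft AI Act.
# The tier names follow the draft; the use cases and their assignments
# below are illustrative examples, not an official taxonomy.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment and ongoing oversight required"
    LIMITED = "transparency obligations, e.g. disclosing AI interaction"
    MINIMAL = "no obligations beyond existing law"


# Illustrative mapping from use case to tier, echoing examples cited in
# the EU draft (social scoring, recruitment tools, chatbots, spam filters).
USE_CASE_TIERS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "cv screening for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}


def obligations_for(use_case: str) -> str:
    """Look up a use case's tier and the obligations attached to it."""
    tier = USE_CASE_TIERS.get(use_case.lower(), RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk ({tier.value})"


if __name__ == "__main__":
    for case in USE_CASE_TIERS:
        print(obligations_for(case))
```

The appeal of such a scheme is that obligations scale with potential harm: low-risk applications face no additional compliance burden, which helps address concerns about regulation stifling innovation.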
Gearing for the Global Market
The development and deployment of AI-driven systems is progressing amidst a geopolitical landscape marked by contestation and rivalry. As such, compliance with emerging AI regulatory regimes could well become a non-tariff barrier deployed by countries at the forefront of major AI research and development.
Given Singapore’s small domestic market, products and services developed here are often tailored to larger export markets. The same economic logic applies to the emerging AI sector: for Singapore-made solutions to succeed in the large EU and US markets, they will need to comply with those jurisdictions’ respective AI safety regulations.
Moreover, public trust needs to be carefully managed in our push to make Singapore a living laboratory for AI applications. Singapore has typically had a sense of optimism towards technology, and this has been a critical factor underpinning Smart Nation efforts.
A Pew Research Center survey conducted between October 2019 and March 2020 found that a clear majority of respondents in Singapore (72%) felt the development of AI was good for society. However, this optimism should not be taken for granted: there could be a severe backlash if the deployment of AI-driven systems in Singapore causes unintended harm or runs contrary to social mores.
For example, Singapore has high hopes for the deployment of facial recognition technologies. This is seen in the launch of SingPass face verification, which offers individuals a further, and perhaps more convenient, option for authenticating access to government digital services.
Avoiding Backlash
However, the application of facial recognition technologies in more intrusive ways without explicit knowledge or consent, such as to analyse emotions, poses significant risks.
For example, the deployment of an AI-driven system could be found to have inadvertently led to discrimination against specific categories of individuals. In such a scenario, the resulting backlash could sour public willingness to embrace this whole class of technologies, dealing a blow to ongoing efforts to develop, test, and refine AI applications in Singapore.
Singapore should therefore move to safeguard the long-term viability of its efforts to become a living laboratory for AI applications with robust AI safety regulations. A well-designed AI regulatory regime in Singapore will likely have an enduring positive impact.
Moreover, as other Southeast Asian countries catch up in their adoption of AI, Singapore’s regulatory frameworks could serve as a tried-and-tested model for broader adoption in the region.
About the Authors
Manoj Harjani is a Research Fellow with the Future Issues and Technology research cluster, S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore. Hawyee Auyong is an independent researcher and formerly a Research Fellow with the Lee Kuan Yew School of Public Policy (LKYSPP), National University of Singapore (NUS).