25 November 2021
AI Governance: Less Regulation – or More?
SYNOPSIS
As governments put in place governance frameworks for technology development, differing models are emerging that reflect the contrasting preferences of the US, Europe, and China. The pivotal issue is reconciling accountability and human rights. Should Singapore re-think its approach to technology innovation governance?
COMMENTARY
IN 2014, the European Court of Justice enshrined the “right to be forgotten” as a human right in ruling against Google in the Costeja case. In doing so, it drew a line in the sand against the permanence and pervasiveness of the recording of personal data on the World Wide Web. In 2018, the EU General Data Protection Regulation (GDPR) came into effect, effectively establishing a bulwark against the unrestricted collection and use of the personal data of EU residents.
In April this year, the European Commission unveiled its proposed Artificial Intelligence Act. At the time of writing, the proposed Act has yet to come into force; once adopted by the European Parliament and the Council of the EU, it will be the first legislative and governance framework to comprehensively address the creation, deployment, and use of artificial intelligence systems in the EU. A fundamental tenet of this framework is data protection and privacy, among other human rights recognised in the EU. Like the GDPR, the proposed legislation will have extra-territorial effect.
The China Model: Regulating Tech Giants
The Chinese government has for some time been reining in its large technology companies and cracking down on cryptocurrency activity. In August, it took two significant steps in the direction of technology regulation.
First, it passed the Personal Information Protection Law (PIPL), the first piece of Chinese legislation dedicated to the protection of personal information. The law came into force on 1 November and is expected to have considerable impact on how local and international businesses handle data compliance with respect to the personal information of persons in China.
Then, the Cyberspace Administration of China (CAC) released draft legislation targeting the use of recommendation algorithms in a number of respects. As stated in Article 1, its purposes centre on “standardising Internet information service algorithm recommendation activities” in the interests of national security, social order, and the protection of citizens’ rights.
Other express purposes are promoting the “healthy development of Internet information services” and China’s declared “core socialist values”.
That the Chinese government should do this is unsurprising, given its current efforts to rein in its homegrown technology sector. It also demonstrates that the CAC understands that curtailing such algorithmic functions is fundamental to controlling the activities of Chinese technology companies.
Opposite Ends of the Regulation Dichotomy
In adopting these positions, the EU and China have cast into sharp relief the hands-off approach that the United States normally takes to regulating technology.
It is as though the EU and China have taken a long, hard look at how US technology companies, entities not elected to public office, have amassed so much power over their customer bases, and have decided that their governments shall not be in thrall to unelected corporations.
Although some efforts are underway in the US to regulate its largest technology companies, the process will be slow, lumbering, and piecemeal. The EU, for its part, has moved against US tech companies for flouting its competition and privacy rules, imposing substantial fines.
China, in choosing to put its house in order and rein in the uninhibited innovation and corporate expansion of its homegrown tech companies, is also sending a signal globally that it has the sovereign will and ability to discipline and regulate its own. In this respect, China and the EU may well be setting an example of “responsible government” for others to follow.
The EU Model: Regulating for Responsibility?
There are fundamental differences between the EU and China in regulating technology: protection of fundamental human rights for the former, as opposed to preservation of “core socialist values” for the latter.
However, the overall effect of two major economic powers standing ready to impose legislative controls on the impacts of technology and the proliferation of AI systems may be to make the rest of the world sit up, pay attention, and think more seriously about following suit.
The message is that the needs of individuals and society are being prioritised over unchecked business competition and innovation. Promoting AI as a means of automating routine decision-making and greatly reducing friction has suddenly become risky and questionable.
The mantra of “move fast and break things”, popularised by Mark Zuckerberg as a credo for disruptive technology, increasingly sounds hollow, irresponsible, and dangerous.
Facebook’s rebranding as Meta and its publicity around the “metaverse” appear, for all intents and purposes, to be a distraction tactic and a regulation dodge, and will only invite more scrutiny from governments outside the US. It has become more difficult to protest that regulation has a chilling effect on innovation and competitiveness, and this double bill of legislative measures may well give other countries greater impetus to do likewise.
Further, could another effect of these EU and Chinese measures be greater trust in technology innovations originating from these jurisdictions, as opposed to those from countries with less regulation?
The Singapore Model: Time for a Rethink?
Singapore is a small state with trade and diplomatic links to both China and the EU. How should it respond to these developments, in terms of how it manages its trade, R&D, and other AI-related interests with these larger economies, and how it views its own approach to technology governance?
What Singapore has at this point is personal data legislation that is effectively pro-business, and no fundamental law on privacy. Its much-vaunted Model AI Governance Framework is merely voluntary.
Given the extra-territoriality of the EU and Chinese legislation, and the importance of their markets to Singapore, Singapore needs to evolve its relatively light-touch approach to AI governance in response to these developments, if it is to remain relevant and to demonstrate that it takes these fundamental matters seriously.
A light-touch, pro-business approach may have been the strategy for incentivising local and international AI research and development, but it may not be viable for much longer.
About the Author
Teo Yi-Ling is a Senior Fellow with the Centre of Excellence for National Security (CENS), S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore.