    CO23170 | Regulating Online Harms: Are Current Efforts Working – Or Even Workable?
    Sean Tan

    23 November 2023

    SYNOPSIS

    Rapid digitalisation has occurred in tandem with the amplification of online harms. The spread of malicious online content and activity poses an even greater danger to vulnerable users. But how should security guidelines for digital platforms – or indeed, the Big Tech companies – be implemented?

    Image source: Pexels

    COMMENTARY

    The expanding reach of digital applications offers new opportunities, but it also heightens the potential for social harms in online spaces. Optimistic long-term visions for emerging technologies sit uneasily alongside real and pressing online safety issues. Many harmful online phenomena – including those that threaten national security, such as violent extremism and disinformation – are in fact abetted by digital growth.

    Some of these threats may appear unprecedented. Early developers of digital communication networks did not anticipate their eventual scale of expansion, much less their exploitation for cybercrimes such as scams, harassment and sexual abuse. Since the advent of the global Internet, authorities around the world have grown steadily more vigilant against these dangers. Even so, the constantly evolving nature of online harms and digital technology poses a challenge for regulators.

    Shifts – and Similarities – in the Global Online Regulation Landscape

    The accompanying evolution of laws dealing with online harms shows that regulation has been far from straightforward. Early efforts, such as Title V of the United States’ 1996 Telecommunications Act and Malaysia’s Communications and Multimedia Act of 1998, sought to regulate the Internet, including by criminalising offences involving inappropriate online content. In more recent years, however, the dramatic advancement and proliferation of digital technology have reshaped perceptions of online risk, prompting far more extensive safety measures.

    A prominent feature of modern legislation is the imposition of new requirements on digital service providers. Recent examples include the United Kingdom’s Online Safety Bill and Singapore’s Online Criminal Harms Act, both of which contain stipulations for digital platforms believed to host illicit online activities. In 2022, Malaysia extended the scope of its Communications and Multimedia Content Code to include online service providers, before recommending additional provisions for online platforms last September.

    These requirements are typically paired with compliance mechanisms. For instance, the proposed US Digital Consumer Protection Commission Act envisages an overarching regulatory structure for Big Tech companies, including an independent regulator whose practices would mirror those of existing consumer protection agencies. A similar independent online safety regulator has operated in Australia since 2015.

    Perhaps the most ambitious regulatory instruments today are the European Union’s Digital Services Act and Digital Markets Act. Both supersede existing legislation in EU member states, target a comprehensive range of online services and harms, and are chiefly aimed at the world’s largest technology companies.

    Though these measures inevitably differ to some extent, common strands are identifiable. There is general consensus on certain forms of unacceptable content and activity, such as the encouragement of terrorist acts, sexual harassment, and incitement to hate and violence. An especially clear red line concerns child exploitation and abuse: many child advocacy organisations, deeply familiar with the dangers of digital spaces, have expressed support for the relevant laws while requesting additional amendments to them. There is also a growing shared perception of online harms as an acute and immediate threat, with many initiatives and proposals emphasising the need for substantial and urgent action.

    Potential Areas of Oversight and Concern

    However, despite their lofty ambitions, it remains to be seen whether these measures can be meaningfully implemented in practice, or whether they are feasible at all. Of notable concern is a seemingly limited understanding of technology among policymakers – in particular, among those who advocate a proactive, hands-on approach to content moderation via software tools that detect harmful content, such as client-side scanning.

    While this approach may, at a glance, seem appropriate given the importance of protecting vulnerable groups, experts have questioned whether harmful online content can be detected without bypassing end-to-end encryption – a security measure integral to user privacy. Although states such as the UK have since indicated that they will not deploy client-side scanning, such proposals nevertheless set a dangerous precedent. Any plans to bypass encryption are likely to be problematic for all users, but especially so for at-risk individuals.

    Technological illiteracy on the part of policymakers also raises questions over how accountability can be enforced on Big Tech companies, especially where safety provisions and obligations are technical in nature. Uncertainty or vagueness in the definition of key terms would almost certainly weaken policy formulation and implementation, since independent regulators may likewise struggle to interpret those definitions.

    Moreover, detailed knowledge of the relevant technologies is increasingly vital, as the continued growth of emerging technological tools threatens to further complicate regulatory efforts. These developments cast further doubt on the suitability of ambitious, overarching regulatory proposals designed to tackle several complex and evolving issues at once.

    What Should Be Done?

    Ultimately, aspirational measures to tackle online harms, no matter how laudable, will only be as strong as their legal enforcement.

    While some authorities appear to be taking comprehensive and decisive action against online harms, this can mask a rudimentary understanding of the underlying issues. Indeed, a failure to engage with these issues thoroughly can create further risks to civic safety. Any attempt to expand the scope of regulation must therefore be paired with sufficient technical knowledge to avoid unintended consequences. Difficult dialogues between governments and industry experts are an inevitable but much-needed step, not only for sharing knowledge but also for building the relationships and transparency needed to combat online harms over the long term.

    Additionally, the proliferation and transformation of online harms and digital technology are almost certain to outpace policy development. In dealing with the most novel threats, policymakers may wish to adopt a more incremental and adaptable strategy, making it easier to adjust course should the nature of these threats change.

    This iterative approach is not without drawbacks, as overall progress on regulation would likely be slower. However, policymakers with the relevant technical knowledge would be better equipped to anticipate new risks. Armed with the requisite expertise, they might even hold Big Tech companies accountable at the product design phase, before new services are released to the general public.

    About the Author

    Sean Tan is a Senior Analyst at the Centre of Excellence for National Security (CENS), a constituent unit of the S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore. Prior to joining CENS, he was based at NTU’s Centre for Information Integrity and the Internet (IN-cube), a research centre that aims to help promote information integrity in online spaces.

    Categories: RSIS Commentary Series / Country and Region Studies / Technology and Future Issues / Southeast Asia and ASEAN / Global / East Asia and Asia Pacific / South Asia