S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University


    CO24158 | How Effective is POFMA in Battling Online Falsehoods?
    Benjamin Ang, Xue Zhang

    18 October 2024


    SYNOPSIS

    A study conducted in 2023 found that the Protection from Online Falsehoods and Manipulation Act (POFMA) has been effective in mitigating the risks arising from the spread of online falsehoods, which can undermine social cohesion. Media companies and the public must also play their part in this effort.

    Source: Unsplash

    COMMENTARY

    In his speech at the recent launch of Smart Nation 2.0, Prime Minister Lawrence Wong alluded to real-world risks from advancements in information and communication technology (ICT), such as the undermining of social cohesion through the dissemination of falsehoods. The Southport riots of July 2024 are a case in point: a flood of online falsehoods falsely describing the killer of three girls as a Muslim immigrant triggered one of the UK’s most severe riots in 13 years. Another example is the deliberate dissemination of online falsehoods in Bangladesh in August 2024, which inflamed tensions between Hindus and Muslims.

    Grappling With Online Falsehoods

    Although many countries have enacted laws against the creation and dissemination of online misinformation and disinformation, challenges remain. For instance, while the UK government relied on the Online Safety Act 2023 to prosecute those who spread falsehoods online during the riots, it faced difficulties in proving that a sender knowingly sent a false message, especially in social media cases, where nuance and context can easily change the meaning of a message.

    POFMA’s Continuing Relevance

    In October 2019, the Singapore government enacted the Protection from Online Falsehoods and Manipulation Act (POFMA), which aims to prevent the electronic communication of falsehoods and to safeguard against the use of online platforms for information manipulation.

    Since POFMA was first proposed, and throughout its enactment and implementation, critics have perceived it as a means for the government to curtail freedom of speech and undermine independent thought. In practice, it was used most frequently during the COVID-19 pandemic to correct online posts that could have caused panic and impeded public health measures. As of 30 June 2024, the POFMA Office had handled 66 cases and issued 114 correction directions.

    In the lead-up to POFMA’s fifth anniversary, a team of researchers from the Centre of Excellence for National Security (CENS) at the S. Rajaratnam School of International Studies (RSIS) carried out a study in 2023 (publication forthcoming) to explore public perceptions of POFMA. A total of 1,004 responses (including those that skipped certain questions) were analysed.

    In a nutshell, POFMA was found to be effective in preventing the spread of online falsehoods, with over half of the respondents agreeing or strongly agreeing on this. As to whether POFMA was effective in stopping falsehood creation, slightly more than one-third of the respondents agreed or strongly agreed.

    The study also sought the respondents’ views on the believability of POFMA clarifications. In each of the ten use cases, more than 80 per cent of respondents said they believed the POFMA clarification rather than the original post.

    These findings suggest that POFMA is “fit for purpose” in combating online falsehoods. They also show that respondents have a high level of trust in the fact checks provided with POFMA correction directions, which indicates that such fact checks help separate fact from falsehood. At the same time, the findings show that a not insignificant segment of the population remains susceptible to falsehoods. The study suggests that more has to be done to prevent a further erosion of the infrastructure of fact that supports our democracy.

    AI-Enabled Disinformation

    The risks of real-world harms resulting from online falsehoods have been exacerbated by AI-enabled content, particularly deepfakes. Deepfakes use AI to create hyper-realistic images, audio, or video clips of prominent figures, which can manipulate public perceptions of critical social and political issues, exacerbate social tensions, inflame social conflict, and even undermine a nation during times of crisis.

    In 2022, for example, a deepfake of Ukrainian President Volodymyr Zelensky telling his soldiers to lay down their weapons was circulated online, presumably to “sow panic and confusion”. In November 2023, a deepfake of London Mayor Sadiq Khan’s voice allegedly making inflammatory comments could have sparked serious public disorder. And, in June this year, then Prime Minister Lee Hsien Loong cautioned that a deepfake video of him purportedly commenting on foreign leaders and international relations was circulating online. He added that the deepfake was a malicious attempt to create the impression that the views were supported by him and/or endorsed by the Singapore government, which is “dangerous and potentially harmful to…[Singapore’s]…national interests”.

    The impact of AI-enabled disinformation has been a concern in elections held this year in the UK, India and Indonesia, and it will also be a concern in the US presidential election this November. While legislative and other measures to address the risks of AI-enabled disinformation have been proposed, such as Singapore’s Elections (Integrity of Online Advertising) (Amendment) Bill, minimising the spread of online falsehoods remains challenging.

    Conclusion

    Addressing this challenge requires a holistic approach. In addition to legislative tools like POFMA, with correction directions as the key means to counter online falsehoods, media companies and the general public also have roles to play.

    Media platforms, for instance, should further enhance their review mechanisms so that they can identify and act against misinformation more quickly.

    Members of the public need to exercise critical thinking and responsibility in sharing information they receive or encounter online. Fact checks by trusted sources can help evaluate posts read online, especially those that evoke emotional responses. Individuals should avoid sharing such posts unless they are sure of the veracity of the content. The effectiveness of POFMA correction directions and fact-checking will be limited if individuals persist in spreading online falsehoods even after they have been flagged and debunked.

    About the Authors

    Benjamin Ang is a Senior Fellow and Head of the Centre of Excellence for National Security (CENS) at S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University, Singapore. He is also the Head of Digital Impact Research at RSIS and leads the Future Issues in Technology programme. Xue Zhang is a Research Fellow at CENS. Both authors wish to acknowledge Dymples Leong and Sean Tan’s contributions to the survey.

    Categories: RSIS Commentary Series / Country and Region Studies / International Politics and Security / Non-Traditional Security / Singapore and Homeland Security / East Asia and Asia Pacific / South Asia / Southeast Asia and ASEAN / Global



    Getting to RSIS

    Nanyang Technological University
    Block S4, Level B3,
    50 Nanyang Avenue,
    Singapore 639798


      Copyright © S. Rajaratnam School of International Studies. All rights reserved.
      Privacy Statement / Terms of Use