    CO25037 | Battling Falsehoods on China’s Short Video Platforms
    Xue ZHANG

    21 February 2025

    SYNOPSIS

    The rapid rise of short video platforms in China has created fertile ground for the creation and spread of falsehoods. This commentary explores the nature of falsehoods on these platforms, examines existing countermeasures, and discusses the challenges, barriers, and potential solutions.

    Source: Unsplash

    COMMENTARY

    Short video platforms, such as Douyin and Kuaishou, have become popular in China as a medium for information consumption. In 2023, short video platforms had the highest user engagement among audiovisual applications, with users spending an average of 151 minutes per day on them. By June 2024, the number of short video platform users had reached about 1.05 billion, representing 95.5 per cent of all internet users in China.

    However, the rise of short video platforms has fostered a conducive environment for creating and spreading falsehoods. With advances in media technology and low barriers to posting, individuals can easily film, produce, and publish videos containing falsehoods. Moreover, people are more likely to be deceived by fake news presented in video form, as they place greater trust in what they see than in what they hear or read.

    Falsehoods on China’s Short Video Platforms

    Falsehoods on China’s short video platforms deceive users through rational or emotional appeals. Those employing rational appeals often centre around fabricated professional identities or knowledge, targeting areas of public interest such as current affairs, medicine, and science.

    For instance, fake news accounts can pose as authoritative news outlets by fabricating news studio settings, imitating professional news anchors or misusing AI-generated virtual hosts. Through editing techniques, others create disinformation with sensational bold headlines, eye-catching visuals, and atmosphere-enhancing background music.

    Some doctors, whose credentials have been verified as genuine by the platforms, appear professional on camera but are profit-driven, using pseudo-scientific scripts to instil fear and draw followers in as patients.

    Emotionally appealing falsehoods typically appear in a broad range of social news, amplifying social and family conflicts or fabricating tragic experiences. In August 2023, an influencer with 40 million followers directed and acted in a self-scripted scene during a live-streaming session in which he was shown being kidnapped by a gang leader. Similarly, the delivery rider persona has frequently been used to stage dramatically tragic scenes, such as one showing a rider “delivering food while carrying a sick child” to evoke sympathy.

    As user engagement continues to rise, e-commerce on short video platforms has steadily developed. One report showed that in 2023, over 70 per cent of users made purchases after watching short videos or live streams, and over 40 per cent felt that these had become their primary shopping channels. More followers and higher traffic can easily be monetised into greater profits, which is the fundamental driver behind the falsehoods on these platforms.

    Through their sensory and emotional stimulation, falsehood videos can thrive by capturing more user attention and gaining higher traffic, crowding out high-quality videos. Over time, this could lead to a decline in the overall quality of these platforms.

    Moreover, falsehoods circulated through these platforms could lead to public misunderstanding of important political and social issues, escalate conflicts, and erode social resilience. For example, fake and staged videos depicting “miserable lives” severely undermine the government’s poverty alleviation efforts. False claims, such as the allegation that “plastic” seaweed was being produced for consumption, have caused significant losses to food producers and unnecessary public anxiety about food safety. Fake videos about social conflicts can trigger public concern about social security, fuelling mistrust and unrest.

    Countering Falsehoods

    China enforces some of the world’s strictest laws against falsehoods, with related articles in its supreme law (the Constitution), general laws (e.g., Criminal Law and Civil Code), and internet laws (e.g., Cybersecurity Law).

    The government enacted the Provisions on the Governance of the Online Information Content Ecosystem in March 2020. These provisions prohibit network information producers from creating, copying, or publishing illegal content, including rumours, and require platforms to govern such content.

    The Notice to Strengthen Management of Self-Media issued by the Cyberspace Administration of China (CAC) in July 2023 explicitly mandates social media platforms to verify accounts, label and debunk rumours, improve the reach of corrections, and address violations by self-media, i.e., accounts managed by individuals who create and share their own content on these platforms. The notice also obligates platforms to ensure that self-media cite their sources, are accountable for the authenticity of their content, and flag fictional, dramatised, or technically generated content.

    The CAC launched the China Internet Joint Rumour Debunking Platform on 29 August 2018. This platform and its reporting centre have established accounts on WeChat and Weibo featuring regular updates on “Today’s Rumour Debunking”, “Today’s Science Popularisation”, and “Today’s Reminder”.

    The CAC also launched a series of initiatives to ensure a “clean and healthy” cyberspace. This included the campaign “Regulating Online Communication Order in Key Traffic Segments”, held from 6 April to 15 May 2023, during which platforms shut down 107,000 fake accounts impersonating news organisations or news anchors and removed 835,000 pieces of false news content.

    A two-month campaign on “Rectifying Self-Media’s Bottomless Pursuit of Traffic” was also launched by the CAC in May 2024 targeting fabricated content, exploitation of “hot” topics, biased narratives, socially questionable personalities, and inappropriate “new yellow journalism”, i.e., reporting that emphasises sensationalism over facts.

    Short video platforms have also adopted countermeasures of their own. For example, Douyin set up its official rumour-debunking account in March 2019, posting content created on its own and jointly with government authorities. In 2023, it updated several regulations to address falsehoods systematically and introduced features such as removing followers to disrupt monetisation chains. On 3 January 2025, Douyin announced plans to enhance the algorithms that recommend debunking content to users, expand fact-checking partnerships, strengthen manual review of hot topics, and engage users in identifying and labelling controversial content.

    Similarly, in August 2022, Kuaishou created an official debunking account and set up a dedicated section to prioritise rumour refutation videos created by authoritative institutions. It also announced a special rumour-combating campaign to improve multi-party collaboration and co-governance mechanisms, and disclosed its six-step work process: monitoring, identifying, verifying, labelling, debunking, and handling. On 7 January 2025, Kuaishou stated its commitment to enhancing algorithm transparency and strengthening the management of rumours and illegal content.

    Challenges, Barriers and Potential Solutions

    Despite the countermeasures implemented by the government and short video platforms, these efforts remain constrained by challenges and barriers that hinder their full effectiveness.

    First, the vast volume of short videos makes it challenging to identify falsehoods. Automated detection faces its own difficulties, including high information heterogeneity, distinguishing non-malicious artistic editing from deception, and keeping up with new propagation patterns on recommendation-based platforms.
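
    To illustrate the heterogeneity problem, the minimal sketch below shows how a hypothetical moderation pipeline might have to reconcile signals drawn from very different modalities (speech transcript, on-screen text, synthetic-voice detection, account branding) before a clip can even be queued for human review. The class, field names, weights, and threshold are illustrative assumptions, not a description of any platform’s actual system.

    # Hypothetical sketch, not any platform's real system: a toy scorer that must
    # reconcile signals from several modalities of one short video before deciding
    # whether to queue it for human review. All names, weights, and the threshold
    # are illustrative assumptions chosen to show why information heterogeneity
    # complicates automated detection.
    from dataclasses import dataclass

    @dataclass
    class VideoSignals:
        transcript_claim_score: float       # 0-1, from a text model on the speech transcript
        onscreen_text_score: float          # 0-1, from OCR'd captions and headlines
        synthetic_voice_score: float        # 0-1, likelihood the narration is AI-generated
        account_impersonation_score: float  # 0-1, similarity to known news-outlet branding

    WEIGHTS = {
        "transcript_claim_score": 0.4,
        "onscreen_text_score": 0.2,
        "synthetic_voice_score": 0.2,
        "account_impersonation_score": 0.2,
    }

    def suspicion_score(sig: VideoSignals) -> float:
        """Collapse heterogeneous per-modality scores into one number in [0, 1]."""
        return sum(getattr(sig, name) * w for name, w in WEIGHTS.items())

    def route(sig: VideoSignals, threshold: float = 0.6) -> str:
        """Coarse decision for a single clip: flag for manual review or let it through."""
        return "queue_for_manual_review" if suspicion_score(sig) >= threshold else "allow"

    # A clip with a benign transcript but a convincing fabricated "news studio" look:
    clip = VideoSignals(0.3, 0.7, 0.8, 0.9)
    print(route(clip))  # -> queue_for_manual_review (score 0.60)

    Even this toy version illustrates the point: the weights embed contestable judgments about which modality to trust, and a clip that looks benign in any single channel can still be deceptive overall, which is why pairing automated detection with manual review is hard to avoid.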

    Second, information literacy education, widely regarded as the fundamental way to prevent the spread of falsehoods, faces challenges in scaling to such a large population, particularly in reaching vulnerable groups such as older adults in rural areas who are engrossed in watching short videos and susceptible to rumours.

    Third, in sharing profits with content creators, short video platforms play a dual role as both “player” and “judge”, leaving room for insufficient or superficial content moderation.

    What Should Be Done

    To address these issues, the Chinese government should improve related laws and regulations by clarifying the responsibilities of platforms, content creators, and those who deliberately circulate falsehood videos. Law enforcement agencies should intensify actions and strengthen routine governance with more frequent random checks on platform content.

    Short video platforms should take full responsibility for content moderation, collaborate with academic and research institutes to integrate AI technologies with manual reviews to swiftly detect and flag falsehoods, improve algorithms to increase the visibility of corrections, and rigorously crack down on fake identities and rumour creator accounts.

    The government, educators, and platforms should jointly enhance netizens’ information literacy skills and knowledge of the relevant laws. The government and platforms could also enlist the public by raising awareness of reporting channels and by offering recognition or incentives to those who report wrongdoing.

    Netizens should heighten their awareness and act responsibly by critically evaluating video content, promptly reporting deceptive videos, properly labelling staged as well as AI-generated falsehoods, and refraining from creating or spreading falsehoods.

    About the Author

    Dr Xue Zhang is a Research Fellow at the Centre of Excellence for National Security at S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore.

    Categories: RSIS Commentary Series / Country and Region Studies / Technology and Future Issues / East Asia and Asia Pacific / South Asia / Southeast Asia and ASEAN / Global