CO24111 | Deciphering the Language of Internet Memes, its Use in Disinformation and Detection via AI
Usman Naseem, Asha Hemrajani, Tan E-Reng

08 August 2024


SYNOPSIS

In today's socio-digital milieu, internet memes, each "a unit of cultural transmission", play a growing role in online discourse and interaction. Though traditionally channels for humour, social commentary, and cultural expression, memes also have a dark side: they can cause real harm through their capacity to spread negative sentiment, misinformation, disinformation, hate and violence. Detecting harmful memes is therefore critical, but it poses significant challenges. This commentary discusses several projects that use AI to detect and classify such memes.

Photo: Unsplash

COMMENTARY

The 2019 Christchurch terrorist attack serves as a stark example of how internet memes have been used to spread hate and inspire violence. Upon the attacker's incitement, certain online communities created and spread hundreds of memes that celebrated the killings and idolised the attacker as a cult-like religious figure; these memes were then used to create "fan" merchandise for sale. Other communities have created and shared memes mocking the impact of the 9/11 attacks every year on the anniversary of those deadly attacks in New York.

Memes have become a reflection of contemporary culture, an easily reproducible "unit of cultural transmission". According to reports, 55 per cent of internet users aged 13 to 35 share memes weekly, while 30 per cent do so daily.

While most memes are meant for cultural expression and humour, they also have a dark side: by incorporating images and videos excised from their original context, they can spread negative sentiment, misinformation, disinformation, hate, fear and violence, causing real harm beyond the confines of the digital world.

Memes in Disinformation

Memes have also come to play a key role in information warfare. The ongoing Russia-Ukraine War has become an arena in which memes are deployed to spread disinformation online. TikTok in particular has become a medium for propagating such memes, with "WarTok" (a portmanteau of "war" and "TikTok") serving as a space where users share fake, AI-generated videos of the war.

Misinformation and disinformation, whether spread through memes or other pathways, can erode trust in institutions, undermine democratic processes, and foster social division. Identifying such harmful content, whether it takes the form of images, videos or memes, is therefore an important endeavour, as it would help to prevent, or at least mitigate, its harmful effects on society.

Challenges in Detecting Harmful Memes

Numerous projects have been launched to automate harmful meme detection at scale, owing to the sheer volume of memes shared every second across social media. These projects aim to develop AI-powered algorithms that analyse, assess, and classify potentially harmful memes more efficiently.

However, a significant challenge facing developers of such algorithms stems from the inherently multi-modal nature of memes. A typical meme comprises an image accompanied by a text caption; it thus has both a textual modality and an image modality. Problems arise when AI algorithms designed to detect harmful content are not equipped to handle multiple modalities at once.
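The multi-modal problem can be illustrated with a minimal "late fusion" sketch: each modality is encoded separately and the resulting feature vectors are combined before scoring. Everything here is invented for illustration, not the detection systems described in this commentary; the toy word list, the image tag, and the equal fusion weights are all assumptions standing in for real text and vision encoders.

```python
import numpy as np

def text_features(caption: str) -> np.ndarray:
    # Hypothetical stand-in for a real text encoder (e.g., a transformer):
    # here we merely score the caption against a tiny toxic-word list.
    toxic = {"invade", "vermin", "purge"}
    words = caption.lower().split()
    score = sum(w in toxic for w in words) / max(len(words), 1)
    return np.array([score])

def image_features(image_tag: str) -> np.ndarray:
    # Hypothetical stand-in for an image encoder: a single label a vision
    # model might emit for the meme's picture.
    hostile_imagery = {"crosshair", "mob", "fire"}
    return np.array([1.0 if image_tag in hostile_imagery else 0.0])

def late_fusion_score(caption: str, image_tag: str) -> float:
    # Late fusion: encode each modality separately, then combine the
    # feature vectors and score them jointly.
    fused = np.concatenate([text_features(caption), image_features(image_tag)])
    weights = np.array([0.5, 0.5])  # assumed equal weighting of modalities
    return float(fused @ weights)

# The same benign-looking caption scores differently once the image is
# taken into account.
print(late_fusion_score("they are coming here", "landscape"))
print(late_fusion_score("they are coming here", "crosshair"))
```

The point of the sketch is structural: a text-only model gives the caption a zero score in both cases, while the fused score rises when hostile imagery is present, which is why single-modality detectors miss harmful memes.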

A meme might include text that would be considered benign or innocuous in isolation, yet this very same text can take on a new, more malicious meaning when read in conjunction with the image it accompanies. For a well-informed, accurate assessment of whether a particular meme is harmful, its textual modality must therefore be considered together with its image modality.

Research Efforts in Harmful Meme Detection

Several projects under way at the Macquarie University School of Computing in Australia exemplify ongoing efforts to develop better harmful meme detection algorithms that account for the multiple modalities inherent in meme content. These projects address different aspects of the problem: (i) identifying misinformation in memes, (ii) detecting misinformation in a user's historical multimodal posts, and (iii) detecting harmful memes, with a fourth project improving cross-modal alignment across the other three.

One challenge is that current analysis tools and models cannot account for contextual information that might heavily define or skew the meaning of a meme.

The first project aims to develop an AI algorithm that captures the context of an entire post, accounting for its textual and image content and synthesising the two to assess whether the post might contain misinformation.

Another project, described in a yet-to-be-published study, focuses on developing an algorithm that analyses the historical posts of a particular user to assess whether they might be an active vector for spreading misinformation. Reinforcement learning-based models are employed to filter out irrelevant posts, with measures in place to flag certain important keywords, e.g., "depopulation", "propaganda" and "Bill Gates", that might suggest misinformation is being spread.
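The filtering step described above can be caricatured in a few lines. The study itself uses reinforcement learning to learn which posts matter; the keyword match below is a deliberately simplistic stand-in for that learned policy, and the post history and watch list are invented examples.

```python
def filter_relevant_posts(posts: list[str], keywords: list[str]) -> list[str]:
    """Keep only posts containing at least one watch-list keyword.

    Illustrative only: the actual project learns relevance with
    reinforcement learning rather than a fixed keyword match.
    """
    kw = {k.lower() for k in keywords}
    relevant = []
    for post in posts:
        tokens = post.lower().split()
        if any(k in tokens for k in kw):
            relevant.append(post)
    return relevant

# Hypothetical post history for a single user.
history = [
    "lovely weather today",
    "Depopulation is the real agenda",
    "new propaganda push this week",
]
watchlist = ["depopulation", "propaganda"]
print(filter_relevant_posts(history, watchlist))
```

Once the irrelevant posts are discarded, a downstream classifier only has to score the remainder, which is what makes per-user analysis tractable at scale.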

A third project aims to develop a prompt-based approach to identifying harmful or hateful memes that might target certain communities and/or individuals.

The last project aims to improve cross-modal (language and image) alignment across the other three projects. The Cross-Modal Aligner algorithm developed for it generates sets of questions and answers as prompts, which in turn produce better, more accurate textual descriptions of the images fed into it. These descriptions can then be used in other harmful meme detection algorithms to improve their reliability and results.

These projects are still under development, and further efforts will be required to design new methods for the robust detection of harmful memes.

Ways Forward

Like deepfakes, memes pose a potentially serious pathway for misinformation, disinformation, hate speech, and other harmful content to be propagated en masse throughout society. It would thus be in the interests of governments and regulatory bodies to work with academic research groups and other entities that are actively developing algorithms and technologies to decipher the language of memes and detect harmful ones.

Funding and support from initiatives like the recently announced Centre for Advanced Technologies in Online Safety (CATOS) could expedite researchers' efforts to improve their AI models to better account for the multi-modal nature of memes, and to collate larger, more comprehensive datasets for training their algorithms.

Furthermore, more open lines of communication and data exchange between research teams and regulatory bodies should be fostered. Research teams could regularly update government regulators on the latest forms of harmful memes in circulation, so that counter-messaging operations can be planned and deployed to stem the negative effects of such memes on their respective societies.

About the Authors

Usman Naseem is Lecturer at the Macquarie University School of Computing, Australia. Asha Hemrajani and Tan E-Reng are Senior Fellow and Research Analyst, respectively, at the Centre of Excellence for National Security (CENS), S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), Singapore.

Categories: RSIS Commentary Series / General / Country and Region Studies / Technology and Future Issues / East Asia and Asia Pacific / South Asia / Southeast Asia and ASEAN / Global