01 October 2025
Far-Right Extremism and Gaming: How Hate Hijacks Play
SYNOPSIS
Far-right extremists are increasingly exploiting online gaming spaces and platforms to spread hate, coordinate violence, and radicalise youth. These digital environments offer anonymity, community, and ideological flexibility, making them powerful tools for extremist actors.
COMMENTARY
Online gaming spaces, from multiplayer games to chat platforms like Discord, have become fertile ground for extremist ideologies since the mid-2010s, particularly with the growth of social gaming. These communities attract large numbers of young, mostly male users, the demographic most at risk of radicalisation. What was once dismissed as harmless trolling in the early 2010s has, over the past decade, evolved into a pathway to radicalisation that, in some cases, leads to violence.
Extremists have not only gamified hate, embedding it in the language, aesthetics, and culture of gaming, but have also operationalised it by using gaming platforms as tools for recruitment, planning, and psychological conditioning. Anders Breivik, the perpetrator of the 2011 Norway terrorist attacks, claimed he used Call of Duty: Modern Warfare 2 as a “training simulator” and cited World of Warcraft as cover for his period of social isolation.
While Breivik framed these games as instrumental in preparing for violence, scholars argue that these claims are overstated and reflect Breivik’s ideological narrative rather than evidence of actual tactical benefit. Participants in the 2017 Charlottesville rally used Discord to plan their violent actions. The 2019 Christchurch attacker, Brenton Tarrant, who killed 51 people in mosques, referenced games like Spyro the Dragon and Fortnite in his manifesto, signalling how deeply gaming culture had affected his mindset.
Digital Camouflage
Extremists leverage gaming’s cultural camouflage, its appearance as harmless entertainment and social interaction, to subtly spread hateful content. By embedding extremist narratives within the norms, humour, and aesthetics of gaming culture, they can mask radical messages as typical player behaviour, making them harder to detect and easier to normalise. Research by Europol and the Radicalisation Awareness Network (RAN) has shown that far-right groups embed extremist propaganda within mainstream gaming environments.
In-game usernames sometimes include hate symbols such as “1488”, a neo-Nazi code, while player-created modifications (mods) and custom multiplayer maps often feature racist or other neo-Nazi imagery, making them common vehicles for spreading extremist ideas within gaming communities. Such content proliferates on platforms like Steam, where extremist forums operate under the guise of gaming communities or “edgy” humour servers.
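To illustrate the point about code-words such as “1488”, a minimal Python sketch of username screening is shown below. The code-word list, the normalisation rules, and the function names are illustrative assumptions made for this commentary, not any platform’s actual system.

```python
import re

# Illustrative, non-exhaustive list of hate codes; a real deployment would
# draw on curated resources such as the ADL's Hate Symbols Database.
HATE_CODES = ["1488", "14/88", "88", "sieg heil"]

def normalise(text: str) -> str:
    """Strip separators and undo common character substitutions so that
    obfuscated variants (e.g. '1_4_8_8' or 'I488') map to one canonical form."""
    substitutions = {"i": "1", "l": "1", "o": "0", "s": "5", "b": "8"}
    cleaned = re.sub(r"[\s._\-/|]", "", text.lower())
    return "".join(substitutions.get(ch, ch) for ch in cleaned)

def flag_username(username: str) -> list[str]:
    """Return the known codes found in a username after normalisation.
    An empty list means no match was found, not that the name is safe."""
    canonical = normalise(username)
    return [code for code in HATE_CODES if normalise(code) in canonical]

print(flag_username("Player1488"))       # catches '1488' and related codes
print(flag_username("pl4yer_1-4-8-8"))   # light obfuscation is still caught
print(flag_username("totally_normal"))   # [] -- nothing flagged
```

Even this toy example shows the limits of pattern matching: novel codes, contextual references, or imagery embedded in mods and custom maps pass straight through, which is why purely automated screening cannot substitute for human review.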
The Anti-Defamation League (ADL) noted that even after major attacks like the Christchurch shootings, extremist usernames and profiles glorifying past attackers continued to surface on platforms such as Steam, largely because of weak moderation policies. This persistence reflects a broader, widely debated problem in the gaming industry: most platforms rely heavily on community-based flagging systems, which are reactive and often inconsistent in identifying and removing hateful content. Critics argue that this approach allows extremist material to linger unchecked, while others contend that automated or proactive moderation risks overreach and censorship, underscoring the complex balance between free expression and safety online.
This digital camouflage makes it difficult for users and moderators to tell when jokes or memes conceal genuine extremist messaging, since such content trades on humour and insider references that can only be decoded with knowledge of specific internet subcultures. Moreover, the gamification of symbols and inside jokes, such as pairing Pepe the Frog with fascist iconography or referencing esoteric fascist ideologues through game avatars, allows hate to thrive in plain sight, often shielded under the banner of “free speech” or satire.
Recruitment and Grooming
Beyond camouflage, gaming provides fertile ground for extremist grooming, particularly via chat and voice platforms like Discord. A RAN report noted that public Discord servers often serve as “middlemen” to private extremist spaces, where accepted members gain access to encrypted discussions and violent content.
Within private channels, extremists use ideological reinforcement and emotional support to exploit personal vulnerabilities like isolation or marginalisation. Members are rewarded with status, exclusive content, and validation, mirroring cult tactics such as love bombing, emotional control, and rigid in-group/out-group dynamics. These methods foster loyalty while discouraging critical thinking and isolating individuals from outside influences.
Studies on voice-based moderation on Discord reveal that real-time voice chat complicates content detection, enabling grooming to occur through trust-based interactions. Text-based detection tools are ineffective here, as conversations unfold fluidly and leave little traceable evidence unless they are recorded.
Far-right groups exploit disaffected or isolated individuals by framing them as heroes in an ideological struggle. They simplify complex issues into conspiratorial “us vs. them” narratives, scapegoat specific groups, and validate anger and resentment. By promising belonging, purpose, and the chance to reclaim lost power, they tailor their messaging to emotional and cognitive vulnerabilities.
Implications for Southeast Asia
Far-right extremism exploiting gaming is not just a Western issue; Southeast Asia is increasingly exposed. In Singapore, the Internal Security Department (ISD) has explicitly warned that extremist groups understand the popularity of gaming among young people, and it is monitoring platforms such as Roblox, Discord, and DLive, which are being exploited “for recruitment and propaganda purposes.”
Notably, two teenagers, aged 15 and 16, were detained under the Internal Security Act after being radicalised via Roblox and Discord; one had created ISIS-themed game videos and considered violent real-world attacks. Another case involved an 18-year-old Singaporean student detained in December 2024 after role-playing a mosque massacre in an online game, emulating the Christchurch attacker, Brenton Tarrant. It remains to be seen whether these incidents mark the start of a trend, but they are deeply concerning.
Beyond Singapore, Malaysia’s Southeast Asia Regional Centre for Counter-Terrorism (SEARCCT) is conducting research to understand the experiences of Malaysian gamers within online gaming environments. The centre is also examining potential strategies to counter the growing global trend of extremist influence in these digital spaces.
While Singapore leads in monitoring and intervention, countries like Indonesia and the Philippines, with large youth populations and rapid internet growth, are likely to face similar threats. ASEAN should establish a joint hub to address online extremism through AI-powered, localised threat detection, gamified digital literacy programmes, and neurodiversity-inclusive training for frontline workers. Such a strategy would strengthen the region’s fight against online radicalisation.
Recommendations
To address the far-right’s exploitation of gaming environments, a multifaceted strategy is necessary. This response must go beyond isolated platform bans or suspensions and aim to reshape the broader digital ecosystem where extremist ideologies thrive.
Gaming platforms should be held to higher standards of moderation. This means investing heavily in proactive AI capable of detecting nuanced extremist content and significantly expanding the pool of expert human moderators trained to identify specific vulnerabilities. Platforms must also be held accountable for acting on the problem, and stricter regulation may be warranted for companies that remain lax. Such an approach requires embedding safety-by-design principles and fostering robust cross-industry collaboration to combat online radicalisation effectively.
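As a rough illustration of how “proactive AI plus expert human moderators” might fit together, the sketch below routes content into automatic removal, a prioritised human-review queue, or normal publication. The thresholds are placeholders and a trivial keyword count stands in for a trained classifier; none of this reflects any specific platform’s implementation.

```python
from dataclasses import dataclass, field
from queue import PriorityQueue

@dataclass(order=True)
class ReviewItem:
    priority: float                      # lower value = reviewed sooner
    message: str = field(compare=False)
    user_id: str = field(compare=False)

def extremism_score(message: str) -> float:
    """Placeholder for a trained classifier; a crude keyword count is used
    here purely so the sketch runs end to end."""
    indicators = ("1488", "day of the rope", "great replacement")
    return min(1.0, sum(term in message.lower() for term in indicators) / 2)

REVIEW_THRESHOLD = 0.3   # assumed cut-off for human review
REMOVE_THRESHOLD = 0.9   # assumed cut-off for automatic removal (then audited)

review_queue: "PriorityQueue[ReviewItem]" = PriorityQueue()

def triage(message: str, user_id: str) -> str:
    """Route a message: auto-remove, queue for expert human review, or allow."""
    score = extremism_score(message)
    if score >= REMOVE_THRESHOLD:
        return "removed"
    if score >= REVIEW_THRESHOLD:
        # Negate the score so higher-risk items come off the min-heap first.
        review_queue.put(ReviewItem(priority=-score, message=message, user_id=user_id))
        return "queued_for_review"
    return "allowed"
```

The design point is the division of labour: automation handles volume and triage, while nuanced judgements about context, satire, and intent remain with trained human moderators.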
Beyond general internet safety, which is thoroughly explored in existing literature, a more nuanced understanding is vital: young people must learn to critically question how gaming communities shape their sense of identity and belonging, especially when hate is disguised as entertainment. Alongside prevention, targeted intervention is critical. Counter-terrorism agencies and community groups should therefore develop sophisticated tools that flag early signs of radicalisation through gaming activity.
These tools would encompass not only AI-driven behavioural analytics to detect sudden shifts in language or extreme viewpoints expressed within game chats and associated platforms, but also comprehensive training programmes for parents, educators, and community leaders on behavioural indicators specific to online radicalisation in gaming contexts. Crucially, they should also include secure, ethical pathways for reporting concerns and immediate access to mental health and counter-radicalisation support services for at-risk individuals.
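To make the behavioural-analytics idea concrete, here is a minimal sketch of per-user “sudden shift” detection. The window size, the threshold, and the risk_score stand-in are all assumptions made for illustration, not a validated methodology.

```python
from collections import defaultdict, deque
from statistics import mean, stdev

WINDOW = 50          # assumed number of recent messages forming a user's baseline
SHIFT_THRESHOLD = 3  # assumed: flag scores over 3 standard deviations above baseline
MIN_HISTORY = 10     # require enough history before the baseline is meaningful

history: dict[str, deque] = defaultdict(lambda: deque(maxlen=WINDOW))

def risk_score(message: str) -> float:
    """Stand-in for a trained toxicity/extremism model returning a 0-1 score;
    a crude keyword ratio is used here only so the sketch runs."""
    terms = ("invaders", "race war", "day of the rope")
    return min(1.0, sum(t in message.lower() for t in terms) / len(terms))

def register_message(user_id: str, message: str) -> bool:
    """Record a message and return True if it marks a sudden shift from the
    user's recent baseline, i.e. possible escalation worth a closer look."""
    score = risk_score(message)
    baseline = history[user_id]
    flagged = False
    if len(baseline) >= MIN_HISTORY:
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (score - mu) / sigma > SHIFT_THRESHOLD:
            flagged = True
    baseline.append(score)
    return flagged
```

A flag in such a system would be a prompt for the human pathways described above, reporting, counselling, and counter-radicalisation support, rather than an automatic sanction.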
Conclusion
Extremist actors rarely stay within national borders or confine themselves to a single platform. Without regional coordination, enforcement efforts risk becoming fragmented and ineffective. Gaming is no longer merely a pastime; it has become a site of ideological contest. Without proactive and coordinated action, it will almost certainly remain fertile ground for far-right radicalisation.
About the Author
Noah Kuttymartin was an intern at the S. Rajaratnam School of International Studies (RSIS), Nanyang Technological University (NTU), in 2025. His research interests include extremism, defence, and security policy. He is currently in his final year at The George Washington University, USA.