Global Far-Right Extremist Exploitation of Artificial Intelligence and Alt-Tech: The Cases of the UK, US, Australia and New Zealand
The recent public attention paid to generative Artificial Intelligence (AI) and its potential exploitation for good and malevolent purposes has not escaped the global extreme far-right. In particular, extreme right groups have explored these technologies as a way of getting ahead of state preventing and countering violent extremism (P/CVE) responses in this space. This article presents the findings of an exploratory content and sentiment analysis, conducted by the author, of 12 violent and non-violent far-right groups in the four countries under survey. Together, these groups were deemed a representative sample based on their alignment with three ideological proclivities that currently preoccupy the global far-right: racial nationalism, ethno-nationalism and cultural nationalism.[i] Essentially, whilst public ‘chatter’ by such groups indicates only very tentative inroads into, and largely negative appraisals of, AI technology, there are aspirations both to foment anxieties within non-aligned constituencies and to weaponise AI for illicit propaganda, attack-planning and attack-execution activities.
Introduction
Western far-right groups[ii] have increasingly been able to mobilise and weaponise technology for their activism and campaigns. Recent research reports suggest that such groups exercise an ‘opportunistic pragmatism’ when using online platforms,[iii] creating new bases of convergence and influence in places as disparate as Germany, Italy and Sweden.[iv] While success in this space has been limited, such instances demonstrate a shift away from parochial concerns towards more transnational ambitions in using technology to disseminate far-right messages and ideology to a wider audience.[v]
Indeed, this dialogic turn is symptomatic of the plethora of social media platforms that characterise the modern internet. No longer are far-right groups content to talk amongst themselves, as was the case on the early internet’s bulletin boards, chat forums and closed online spaces. Increasingly, these actors have taken advantage of ‘likes’, ‘retweets’ and ‘pins’ on such platforms to disseminate (usually sanitised) versions of their messages to a wider audience. What makes this content problematic is its often banal and coded nature, invoking notions of tradition and heritage alongside the vilification of a shadowy globalist elite.[vi] Often, this is done to boost followership and widen exposure to nativist[vii] narratives and messaging.[viii]
A more recent example of how far-right extremists have exploited online technologies for propaganda, recruitment and kinetic attacks is their use of AI-based tools. Recent reports have shown how such groups have exploited existing generative AI tools to explore the possibilities of propaganda production,[ix] image generation[x] and the design of recruitment tools[xi] in service of nativist ends. This article reports the findings of an exploratory study conducted by this author into how far-right groups in four[12] of the Five Eyes intelligence-sharing countries (the United States [US], the United Kingdom [UK], Australia and New Zealand) are talking about the uses of AI, and how P/CVE practitioners, including in Southeast Asia, can scaffold timely interventions in this space to meet such efforts.
How Extremists Within the Global Far-Right Discuss Their Uses of AI
Methods
The recent public attention paid to generative AI and its potential exploitation for good and malevolent purposes has not escaped the global far-right.[xii] In particular, it is important to note how such extremist groups are talking about these technologies, so that P/CVE responses in this space can get ahead of the curve.
Below are the findings of this author’s own exploratory content and sentiment analysis of 12 violent and non-violent far-right groups in the four countries under survey,[xiii] which were deemed a representative sample based on their alignment with three ideological proclivities that currently preoccupy the global far-right: racial nationalism, ethno-nationalism and cultural nationalism.[xiv] Great attention was devoted not just to how they intended to use AI, but to how they discussed it on their Telegram channels. Posts were harvested from October 2023 to February 2024 from 18 public Telegram channels using a keyword search (AI, Artificial Intelligence, ChatGPT, Large Language Models [LLM], Chatbots, Deepfake), qualitative thematic analysis (core themes included xenophobia, racism and exclusionary nationalism, while peripheral themes covered anti-modernity, anti-science and anti-government sentiment) and qualitative sentiment analysis (positive, negative or neutral appraisals) of the posts collected.
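To make the shape of this pipeline concrete, below is a minimal Python sketch of how the keyword-screening step could be automated and paired with a crude lexicon-based sentiment tally. It is illustrative only: the cue-word lists and sample posts are hypothetical, and the study itself relied on qualitative (human) thematic and sentiment coding rather than automated scoring.

```python
import re
from collections import Counter

# Search terms drawn from the study's keyword list.
KEYWORDS = ["ai", "artificial intelligence", "chatgpt",
            "large language model", "llm", "chatbot", "deepfake"]

# Hypothetical cue words for a crude sentiment pass; the study itself
# used qualitative (human) coding, not a lexicon like this.
POSITIVE_CUES = {"useful", "powerful", "opportunity"}
NEGATIVE_CUES = {"woke", "biased", "control", "agenda", "lying"}

def mentions_ai(post: str) -> bool:
    """Return True if a post mentions any AI-related search term."""
    text = post.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", text) for term in KEYWORDS)

def crude_sentiment(post: str) -> str:
    """Label a post positive/negative/neutral by counting cue words."""
    words = set(re.findall(r"[a-z']+", post.lower()))
    score = len(words & POSITIVE_CUES) - len(words & NEGATIVE_CUES)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Hypothetical channel posts standing in for harvested Telegram data.
posts = [
    "ChatGPT is pushing a woke agenda on our kids",
    "AI image tools are a powerful opportunity for community building",
    "Meeting moved to Saturday",
]
hits = [p for p in posts if mentions_ai(p)]
print(Counter(crude_sentiment(p) for p in hits))
# e.g. Counter({'negative': 1, 'positive': 1})
```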
Lastly, it is noteworthy that the earliest mentions of AI within these channels can be traced back to 2017 – well before the current ChatGPT ‘hype’. One limitation to note, however, is that these discussions represent only a small fraction (likely under two percent) of the overall content, compared with the wider range of topics these groups typically engage with. Like mainstream actors, most of these groups are in the preliminary stages of their engagement with AI, focusing on exploring and discussing its potential applications.
Key Findings
The analysis has unveiled three key findings:
Finding 1: The global far-right’s exploitation of AI is preliminary, and the discussion is largely negative.
In general, the appraisal of generative AI among the global far-right in the four countries under examination was negative, and there was no serious or sustained engagement on public-facing channels and platforms with the idea of harnessing AI to achieve their goals, besides a few podcasts, blogs and AI image-generation attempts. In this study, only a handful of posts portrayed AI in a positive light, encouraging members of the various organisations to engage in ‘information warfare’, hack mainstream LLMs to serve more nationalistic ends, and generate their own AI images as part of broader community-building activities. The rest (as described below) involved active derision and conspiratorial critiques of the technology.
Finding 2: Their discussion of AI tends to focus on anti-government and anti-globalist critiques of the technology, rather than core ideological concerns.
Much of these groups’ online discourse surrounding these technologies revolved not around their own use of AI but around the conspiratorial and nefarious intentions of mainstream actors (i.e., governments, law enforcement and intergovernmental organisations) in adopting and deploying it. At a more substantive level, discussions within the surveyed Telegram channels often centred on these technologies being perceived as tools for a “replacement”-type agenda that would see the “elimination of humanity”, institute “global control” and form part of an “anti-human agenda”.
More marginal were concerns that connected more readily with the exclusionary nationalist core of far-right ideology.[xv] Interestingly, only a handful of the posts surveyed actively connected AI with the far-right’s core anti-immigrant and anti-Semitic ideology. For example, these groups falsely alleged that European countries’ post-pandemic recovery funds were being used to develop AI technology rather than to tackle illegal immigration. They also linked the Jewish heritage of Facebook’s founder, Mark Zuckerberg, to supposedly lying AI bots on the platform. This peripherality is perhaps unsurprising given the addition of more populist and conspiratorial narratives to these groups’ ideological appeals in recent years.[xvi]
Finding 3: Discussions of AI tend to focus on allegations of broader ‘liberal’ bias and the hacking of established AI models for anti-progressive ends.
One final common trope among the online postings of the groups surveyed was allegations of bias in the current suite of generative AI tools. For these far-right groups, Google’s Bard (now Gemini) and OpenAI’s ChatGPT, the two companies’ respective conversational AI services, are inherently political, pushing what they see as a broader (and corrosive) ‘liberal’ agenda. As an alternative to using well-known, ostensibly ‘woke’ LLMs, these groups recommended alternative models that represent a more stridently libertarian or conservative value system. These included ChatGPT clones such as RightWingGPT, FreedomGPT and TruthGPT, as well as the open-source, decentralised Hugging Face platform, in order to put forward their nationalistic agendas unimpeded.[xvii]
In particular, the issues discussed here revolved around debates over sexuality and gender identity, layering in moral panics concerning the perversion and ‘grooming’ of young children. In one post, for example, the former leader of the UK’s English Defence League (EDL), Tommy Robinson, told his followers to “get [their] kids off of Snapchat” due to what he claimed was “non-binary AI”. In another, he circulated a screenshot of a user trying to trick ChatGPT into problematic discussions on pregnancy and gender roles, suggesting that heteronormative conversations were in violation of ChatGPT’s content moderation policy. Like the anti-government and anti-globalist tropes, this was used to stir moral panic among his followers and create opportunities for recruitment.
Attempts at hacking, tricking or short-circuiting AI models, either to produce certain answers or to elicit instructions that might serve malicious ends, were not uncommon in the sample surveyed, however. The former leader of the neo-Nazi accelerationist group The Base, Rinaldo Nazzaro, was the most persistent in seeking workarounds and prompting generative AI for instructional guides that might be used in preparation for violence. In particular, his use of OpenAI’s chatbot to elicit information about guerrilla warfare should be viewed with caution and concern regarding future kinetic attacks by global far-right actors.[xviii]
Discussion
The findings presented here align with and extend observations made in other nascent and emerging studies of extremists’ exploitation of AI. In a December 2023 report by the International Centre for Counter-Terrorism (ICCT), for example, Busch and Ware posited that deepfakes could be used to create false, inflammatory information or statements by trusted authority figures, election misinformation, or other distortions of social and political events, all of which could incite violence.[xix] Meanwhile, Siegel, in a June 2023 GNET Insight, found that some far-right users on 4chan, a loosely moderated and user-anonymous imageboard website, had adapted Meta’s LLaMA model, while others used publicly available AI tools to create new and problematic chatbots.
The leaking of the model weights for Meta’s AI language model LLaMA in early 2023 allowed far-right extremists on 4chan to develop chatbots capable of enabling online radicalisation efforts, for example by imitating victims of violence in ways that lean into stereotypes and promote violence.[xx] Moreover, Koblentz-Stenzler and Klempner, in a January 2024 GNET Insight examining more extreme far-right boards and channels (mainly Gab, 4chan and 8chan) as well as online far-right publications (such as The Daily Stormer), found that far-right actors primarily discussed AI through four key themes: (i) belief in bias against the right; (ii) anti-Semitic conspiratorial ideas; (iii) strategies to overcome and bypass AI limitations; and (iv) malicious use of AI.[xxi] Nor is this solely a far-right problem. Similar stories have emerged around jihadist and Islamist extremist groups, with research articles, reports and news stories detailing how, in recent months, jihadists have used AI to amplify apocalyptic propaganda,[xxii] radicalise individuals,[xxiii] and circulate a guide, sourced from mainstream/Western tech sources, on how to use ChatGPT and AI-supported chatbots to enhance jihadist messaging and online activities.[xxiv]
Conclusions/Recommendations
In contrast to the far-right’s adept use of social media, their use of AI in propaganda, recruitment and attacks is still in its infancy. Whilst there have been some experimental efforts, as outlined above, these remain tentative at best and are mainly met with negativity and conspiratorial scepticism. However, it is important to recognise that violent groups may harness AI for offline activities, for example, to support endeavours like the 3D-printing of weapons,[xxv] or drone technologies for kinetic attacks.[xxvi]
Looking forward, it is advisable for practitioners and policymakers, including in Southeast Asia, to get ahead of and proactively address these trends. This could involve blue-teaming potential AI uses for P/CVE interventions, such as the creation of assets for counter-messaging campaigns, as sketched below. Other actions could include safe-by-design regulation and incentives to prevent the harmful use of AI products by terrorist or violent extremist actors, and the use of responsible rhetoric to temper moral panics and fears concerning this new technology.
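By way of illustration, the Python sketch below shows one way such blue-teaming might look in practice: passing drafted counter-messaging assets through an off-the-shelf toxicity classifier before sign-off. The Hugging Face transformers library and the publicly hosted unitary/toxic-bert model are assumptions introduced here for illustration, not tools discussed in this article, and any real P/CVE workflow would pair an automated gate of this kind with human review.

```python
from transformers import pipeline  # Hugging Face transformers library

# A minimal blue-teaming sketch: run drafted counter-messaging copy
# through a publicly available toxicity classifier as a quality gate.
# The model choice is illustrative, not an endorsement or a study tool.
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

# Hypothetical counter-messaging drafts awaiting sign-off.
drafts = [
    "Claims that chatbots are 'grooming' children are scare tactics.",
    "Conspiracy theories about AI 'global control' collapse under scrutiny.",
]

for draft in drafts:
    verdict = toxicity(draft)[0]  # e.g. {'label': 'toxic', 'score': 0.02}
    flagged = verdict["label"] == "toxic" and verdict["score"] >= 0.5
    status = "NEEDS REVIEW" if flagged else "CLEAR"
    print(f"[{status}] {draft}")
```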
Security agencies and law enforcement should also consider engaging in red-teaming exercises to assess possible extremist exploitation of these tools. Moreover, policymakers in the region should approach the release of open-source versions of AI technologies cautiously, compelling technology companies to involve end-users in discussions and to avoid hasty releases without thorough safety testing.
Finally, our understanding of this issue is still in its nascent stages. There is a pressing need for studies that provide a deeper understanding of the scale of, and potential for, real-world threats posed by extremist exploitation of AI. Such studies are essential to better inform law enforcement and security agencies about the potential application of AI in kinetic attacks. These efforts are pivotal in redirecting the trajectory of this emerging technological field towards safety, prevention and the promotion of pro-social uses. By investing in these initiatives, security actors can mitigate the risk of further exploitation by malicious actors, ensuring that AI serves as a force for positive societal change, from the automation of menial tasks to the building of capacity around security threats.
About The Author
Dr William Allchorn is Honorary Senior Research Fellow at Anglia Ruskin University and Visiting Associate Professor in Politics and International Relations at Richmond, the American University, in London. He can be reached at [email protected].
Citations
[i] Posts were harvested from October 2023 to February 2024 from 18 public Telegram channels using a key word search, qualitative thematic analysis and qualitative sentiment analysis of posts collected.
[ii] Here ‘far right’ is used to describe a broad array of cognate paramilitary groups, political parties and protest movements that could be considered as harbouring nativist, authoritarian and populist policy platforms (see Cas Mudde, Populist Radical Right in Europe [Cambridge: Cambridge University Press, 2007]). These include groups whose aims include “a critique of the constitutional order without any anti-democratic behaviour or intention” (see Elisabeth L. Carter, The Extreme Right in Western Europe: Success or Failure? [Manchester: Manchester University Press, 2005], p. 22) and those which actively “espouse violence” and “seek the overthrow of liberal democracy” entirely (see Roger Eatwell, “Ten Theories of the Extreme Right,” in Right-Wing Extremism in the Twenty-First Century, eds. Peter H. Merkl and Leonard Weinberg [London: Routledge, 2003], p. 14). These are often referred to as the radical right and the extreme right, and range from anti-Islamic campaign groups right through to formally constituted neo-fascist and neo-Nazi political parties.
[iii] Jacob Davey and Julia Ebner, The Fringe Insurgency: Connectivity, Convergence and Mainstreaming of the Extreme Right (London: Institute for Strategic Dialogue, 2017).
[iv] Ibid.; Julia Ebner and Jacob Davey, Mainstreaming Mussolini: How the Extreme Right Attempted to ‘Make Italy Great Again’ in the 2018 Italian Election (London: Institute for Strategic Dialogue, 2018); Sasha Havlicek et al., Smearing Sweden: International Influence Campaigns in the 2018 Swedish Election (London: Institute for Strategic Dialogue, 2018).
[v] Manuela Caiani and Patricia Kröll, “The Transnationalization of the Extreme Right and the Use of the Internet,” International Journal of Comparative and Applied Criminal Justice, Vol. 39, No. 4 (2014), pp. 331-351; Caterina Froio and Bharath Ganesh, “The Transnationalisation of Far Right Discourse on Twitter,” European Societies, Vol. 21, No. 4 (2018), https://doi.org/10.1080/14616696.2018.1494295.
[vi] Andrew Brindle and Corrie MacMillan, “Like and Share If You Agree: A Study of Discourses and Cyberactivism of the Far Right British National Party Britain First,” Journal of Language, Aggression, and Conflict, Vol. 5, No. 1 (2017), pp. 108-133.
[vii] Here, ‘Nativism’ refers to an ideology which divides societies between a native ‘in-group’ and non-native ‘out-group’, and prescribes that “states should be inhabited exclusively by members of the native [in-]group” (see Mudde, Populist Radical Right in Europe, p. 22).
[viii] Nigel Copsey, “The Curious Case of Britain First: Wildly Popular on Facebook, But a Flop in Elections,” Democratic Audit UK, July 17, 2017, http://www.democraticaudit.com/2017/07/17/the-curious-case-of-britain-first-wildly-popular-on-facebook-but-a-flop-in-elections/.
[ix] Stephanie Baele, “Artificial Intelligence And Extremism: The Threat Of Language Models For Propaganda Purposes,” CREST Security Review, No. 16 (2022), https://crestresearch.ac.uk/resources/artificial-intelligence-and-extremism-the-threat-of-language-models/.
[x] Renate Mattar, “Germany’s Far Right Extremists Are Using AI Images To Incite Hatred,” Worldcrunch, April 7, 2023, https://worldcrunch.com/tech-science/ai-images-extremists-germany.
[xi] Yannick Veilleux-Lepage, Chelsea Daymon and Emil Archambault, Learning from Foes: How Racially and Ethnically Motivated Violent Extremists Embrace and Mimic Islamic State’s Use of Emerging Technologies (London: Global Network on Extremism and Technology, 2022), https://gnet-research.org/wp-content/uploads/2022/05/GNET-Report-Learning-From-Foes.pdf.
[12] Canada was omitted owing to the lack of online discourse among the Canadian far-right relating to potential uses and appraisals of AI.
[xii] See the following studies for more information about violent extremists’ uses of AI: Ella Busch and Jacob Ware, “The Weaponization of Deepfakes: Digital Deception on the Far-Right,” International Centre for Counter-Terrorism, December 13, 2023, https://www.icct.nl/publication/weaponization-deepfakes-digital-deception-far-right; Liram Koblentz-Stenzler and Uri Klempner, “Navigating Far-Right Extremism in the Era of Artificial Intelligence,” Global Network on Extremism and Technology, January 25, 2024, https://gnet-research.org/2024/01/25/navigating-far-right-extremism-in-the-era-of-artificial-intelligence/; Daniel Siegel, “‘RedPilled AI’: A New Weapon for Online Radicalisation on 4chan,” Global Network on Extremism and Technology, June 6, 2023, https://gnet-research.org/2023/06/07/redpilled-ai-a-new-weapon-for-online-radicalisation-on-4chan/.
[xiii] UK: Patriotic Alternative (Racial), Britain First (Cultural) and Identity England (Ethno), plus one leader, Tommy Robinson (Cultural); US: American Futurist (Racial), National Vanguard (Ethno), Western Chauvinist (Ethno), Patriot Prayer (Cultural), plus one leader, Rinaldo Nazzaro (Racial); Australia: Blair Cottrell (Racial), Proud Boys Australia (Ethno) and Avi Yemini (Cultural); New Zealand: Right-Wing Resistance (Racial), Action Zealandia (Ethno), Yellow Vests New Zealand/Right Minds NZ (Cultural).
[xiv] Bjørgo & Ravndal further define three distinct (but sometimes overlapping) ideological strands of the contemporary far-right: 1) Racial Nationalism (i.e., the old far-right with neo-Nazi and neo-fascist parties and groupuscules) that believes in the superiority of the white race and an end to other races, praises fascist dictators, and is highly anti-Semitic; 2) Ethnic Nationalism (i.e., newer alt-right and identitarian social movements) that believes in the separation of groups based on ethnicity, and the defence against foreign peoples and cultures through forced remigration; 3) Cultural Nationalism (i.e., anti-Islam and counter-jihad street movements) that has a strong anti-Muslim focus in which Western culture should be protected against the ‘fifth column’ of Islam. See Tore Bjørgo and Jacob Aasland Ravndal, “Extreme-Right Violence and Terrorism: Concepts, Patterns, and Responses,” International Centre for Counter-Terrorism, September 1, 2019.
[xv] Elisabeth Carter, “Right-Wing Extremism/Radicalism: Reconstructing the Concept,” Journal of Political Ideologies, Vol. 23, No. 2 (2018), pp. 157-182, https://doi.org/10.1080/13569317.2018.1451227.
[xvi] William Allchorn, Radical Right Counter Narratives Expert Workshop Report (Abu Dhabi: Hedayah, 2021), https://hedayah.com/app/uploads/2021/09/CARR-Hedayah-RRCN-Workshop-Report_Final-1.pdf.
[xvii] Will Knight, “Meet ChatGPT’s Right-Wing Alter Ego,” WIRED, April 27, 2023, https://www.wired.com/story/fast-forward-meet-chatgpts-right-wing-alter-ego/.
[xviii] Koblentz-Stenzler and Klempner, “Navigating Far-Right Extremism in the Era of Artificial Intelligence.”
[xix] Busch and Ware, “The Weaponization of Deepfakes.”
[xx] Siegel, “‘RedPilled AI’.”
[xxi] Koblentz-Stenzler and Klempner, “Navigating Far-Right Extremism in the Era of Artificial Intelligence.”
[xxii] Daniel Siegel and Bilva Chandra, “‘Deepfake Doomsday’: The Role of Artificial Intelligence in Amplifying Apocalyptic Islamist Propaganda,” Global Network on Extremism and Technology, August 29, 2023, https://gnet-research.org/2023/08/29/deepfake-doomsday-the-role-of-artificial-intelligence-in-amplifying-apocalyptic-islamist-propaganda/.
[xxiii] Tom Singleton, Tom Gerken and Liv McMahon, “How a Chatbot Encouraged a Man Who Wanted to Kill the Queen,” BBC News, October 6, 2023, https://www.bbc.co.uk/news/technology-67012224.
[xxiv] Steven Stalinsky, “Terrorists Love New Technologies. What Will They Do With AI?” Newsweek, March 14, 2023, https://www.newsweek.com/terrorists-love-new-technologies-what-will-they-do-ai-opinion-1787482.
[xxv] Rajan Basra, “The Future is Now: The Use of 3D-Printed Guns by Extremists and Terrorists,” Global Network on Extremism and Technology, June 23, 2022, https://gnet-research.org/2022/06/23/the-future-is-now-the-use-of-3d-printed-guns-by-extremists-and-terrorists/.
[xxvi] Ana Aguilera, “Drone Use by Violent Extremist Organisations in Africa: The Case of Al-Shabaab,” Global Network on Extremism and Technology, July 5, 2023, https://gnet-research.org/2023/07/05/drone-use-by-violent-extremist-organisations-in-africa-a-case-study-of-al-shabaab/.