Dossier

Online Radicalisation, Video Games, Gamification

Radicalisation processes that take place entirely online can unfold differently from those that combine online and offline elements, or from those in which the virtual aspect plays no role at all. For this reason, researchers are constantly developing new concepts and models to better explain these processes. This page is a collection of evidence-based sources on various aspects of this topic.


Online Strategies

Extremist actors have recognised the opportunity to use online platforms for their own ends. Doing so, however, requires them to adapt their strategies, i.e. how they recruit, spread their propaganda, and even conduct activism.

See also: John R. Vacca, Ed. (2019): Online Terrorist Propaganda, Recruitment, and Radicalization.
“Online Terrorist Propaganda, Recruitment, and Radicalization is the most complete treatment of the rapidly growing phenomenon of how terrorists’ online presence is utilized for terrorism funding, communication, and recruitment purposes. The book offers in-depth coverage of the history and development of online "footprints" to target new converts, broaden their messaging, and increase their influence. Chapters present the emergence of various groups; the advancement of terrorist groups’ online presences; their utilization of video, chat room, and social media; and the current capability for propaganda, training, and recruitment.”

The internet offers ideal opportunities for extremists: content spreads quickly and anonymously. How do extremist groups use the digital space? What is out there that could radicalise us online?
Sören Musyal (journalist and sociologist) discusses the online strategies of the radical right. Till Baaken (researcher) shares insights from his observations of Salafist YouTube channels.

Lorand Bodo, Inga Kristina Trauthig (2022): Emergent Technologies and Extremists: The DWeb as a New Internet Reality?

“Terms floating around with regard to the decentralised web (DWeb) such as Web3 or bitcoin have become a catchall for anything having to do with blockchains and cryptocurrency. Overall, the major questions related to a decentralised web are coalescing around two themes: (1) Is a decentralised web viable and attractive enough for enough people? and (2) What is the nature of this ‘new internet’ – in other words, will the decentralised web avoid the pitfalls of the current web? The latter is regularly charged with enabling online radicalisation or authoritarian strengthening. Instead, could the DWeb foster positive aspects, such as its potential for activists who could use this technology to organise out of sight of regime censors?”


Online vs Offline?


Radicalisation processes that take place online and offline share numerous similarities but also differ in important ways. It remains disputed whether one process can function without the other, and whether they differ in how dangerous they are.

Related Literature

GNET (2022): Offline Versus Online Radicalisation: Which is the Bigger Threat?
“Governments, social media companies and the general public are becoming increasingly concerned about the threat of those who are radicalised online and turn to violent extremism. However, the evidence base for this concern is not fully formed. For instance, it is not yet clear if those who are being radicalised offline are still the greater threat. […] This report seeks to explore the differences in outcomes for those who have been primarily radicalised offline versus those radicalised online.”

Szmania, Susan and Phelix Fincher. 2017. "Countering Violent Extremism Online and Offline." Criminology and Public Policy (January).
“In the wake of devastating attacks by violent extremists around the world, policy makers have invested considerable effort into understanding terrorists' use of the Internet as they radicalize and mobilize to violence. To that end, the article "Terrorist Use of the Internet by the Numbers: Quantifying Behaviors, Patterns, and Processes" by Paul Gill, Emily Corner, Maura Conway, Amy Thornton, Mia Bloom, and John Horgan (2017, this issue) contributes important data to a timely policy discussion. The authors' central finding, "that there is no easy offline versus online violent radicalization dichotomy to be drawn," highlights a gap in our current conceptualization of the radicalization process and suggests several implications, particularly for countering violent extremism (CVE) policies and programs.”

Daniele Valentini, Anna Maria Lorusso, Achim Stephan (2020): Onlife Extremism: Dynamic Integration of Digital and Physical Spaces in Radicalization. Front. Psychol., 24 March 2020.
“This article argues that one should consider online and offline radicalization in an integrated way. Occasionally, the design of some counter-measure initiatives treats the internet and the “real” world as two separate and independent realms. New information communication technologies (ICTs) allow extremists to fuse digital and physical settings. As a result, our research contends that radicalization takes place in onlife spaces: hybrid environments that incorporate elements from individuals’ online and offline experiences. This study substantiates this claim, and it examines how algorithms structure information on social media by tracking users’ online and offline activities. Then, it analyzes how the Islamic State promoted onlife radicalization. We focus on how the Islamic State used Telegram, specific media techniques, and videos to connect the Web to the territories it controlled in Syria.”


Video Games & Gamification

The European Commission's Radicalisation Awareness Network (RAN) identifies three phenomena at the intersection of video games and extremism. First, extremist groups frequently produce their own video games. Little is currently known about how effective these games are at radicalising players, as they are usually removed from platforms relatively quickly and are, moreover, expensive to produce.

Second, extremists modify existing mainstream video games. This includes altered versions of well-known titles such as Call of Duty (CoD) and Grand Theft Auto (GTA). Here the costs are relatively low, and extremist groups can exploit professional game engines for their own strategic purposes. Modified versions of popular games also make it easier to reach a broader audience. In addition, such modifications create the false impression that the group is technically competent. As in the first case, the precise effects on recruitment, beyond the spread of propaganda, remain unknown.

Third, extremists use the communication features within video games to recruit or radicalise new people. Extremist groups make contact with individuals on gaming platforms (e.g. Twitch, Steam, Discord) in order to build a relationship and later invite them to private chat groups. While some cases of successful radicalisation or recruitment through this channel are known, according to RAN such incidents are still rare. In this context, extremist groups often adapt their language to the expressions and references typical of gamer culture and even integrate them into their own narratives. This makes their propaganda more likely to succeed within these gaming communities. (cf. RAN 2020)

Further Reading

Linda Schlegel, Amarnath Amarasingam (2022): Examining the Intersection Between Gaming and Violent Extremism.
This paper looks into the intersection between gaming and violent extremism and presents insights from focus groups with experts as well as surveys with gamers. It discusses gaming, gaming-related content and gaming-adjacent platforms in the context of violent extremism, and highlights both the risks and positive effects of gaming.

Fisher-Birch, Joshua (2020) The Emerging Threat of Extremist-Made Video Games.
“Three video games released this summer that promote violent extreme-right beliefs are part of a disturbing trend of free to play games specifically designed as extremist propaganda and recruitment tools. Video games created by extremist groups and individuals seeking to spread violent ideologies pose a unique challenge to those working to prevent and combat radicalization, and their sinister potential has yet to be fully appreciated by tech companies and distributors.”

Lakhani, Suraj (2022): Video Gaming and (Violent) Extremism: An exploration of the current landscape, trends, and threats.
“This paper provides an overview of the intersection between (violent) extremism and video gaming, examining the current landscape, trends, and threats. Analysing existing literature and open-source materials, this paper discusses the types of games, platforms, and services that are vulnerable to this type of infiltration and use; particularly focussing on content, platform features, and overlaps. The paper also examines a number of recurrent themes, including: ‘radicalisation, recruitment, and reinforcing views’; ‘community building and strengthening’; and ‘extremist online ecosystems’. Thereafter, the responses to (violent) extremism from various platforms will be explored, before reflecting on current challenges and future considerations.”

Robinson, Nick; Whittaker, Joe (2021): Playing for Hate? Extremism, Terrorism, and Videogames 
“Although the production of videogames by extremist and terrorist groups has markedly declined since a high point in the 2000s, game-based interventions remain highly significant, whether through the adoption of gaming-based iconography in extremist and terrorist social media campaigns or through the activity of modders and groups’ supporters who continue to make games championing extremists and terrorists. Building on Conway’s 2017 call to look anew at the nexus between violent extremism, terrorism, and the internet, we problematize existing work on the use of videogames by extremists and terrorists. First, we argue that research needs to move beyond viewing games as tools for recruitment: seeing videogames as sources of propaganda that work to reinforce the views of those already empathetic to and/or attuned to a group’s messages significantly expands our understanding of the interrelationship between players and extremist and terrorist videogames. Second, we argue that the present literature – whilst impressive – has overly privileged the “reading” of in-game representations, at the expense of attention to the central role of interactive gameplay in promoting the strategic communication and propaganda aims of a group. It is through the undertaking of in-game actions that a player comes to experience a group’s values and aims. Research on videogames, extremism and terrorism is at a nascent stage – this article seeks to provoke further thinking and open up spaces for debate in this crucial, yet under-studied, area.”

Schlegel, Linda (2021): Competing, Connecting, Having Fun: How Gamification Could Make Extremist Content More Appealing
“Do extremists like to play? A few years ago, this question may have sounded ridiculous. But since the livestreamed attack in Christchurch and subsequent attacks across the globe making use of and reference to video gaming and online gaming communities as well as increasing evidence that extremists are using gaming (-adjacent) platforms such as DLive, Discord, Steam and Twitch, it does not sound so absurd after all to ask about the potential interplay between gaming and extremism.”

Schlegel, Linda. 2020. Jumanji Extremism? The potential role of gamification and games in radicalisation processes
“The ‘gamification of terror’ has received increased attention in the last years, especially in the aftermath of the right-wing extremist attacks in Christchurch, El Paso and Halle, which were livestreamed by the perpetrators akin to ‘Let’s Play’ streams found in the gaming scene. Previously, ISIS had made headlines, because it used not only scenes from the video game Call of Duty, but from footage collected through HD-cameras attached to helmets of fighters, recreating the visual imagery and perspective experienced by players in first-person shooter (FPS) games. In addition, concern has been growing about gamified applications such as the educational app Huroof developed by ISIS or the planned (but never launched) app Patriot Peer by the Identitarian Movement.”

Daniel Koehler, Verena Fiebig, Irina Jugl (2022). From Gaming to Hating: Extreme-Right Ideological Indoctrination and Mobilization for Violence of Children on Online Gaming Platforms
As a consequence of numerous extreme-right terror attacks in which the perpetrators posted their manifestos and attack livestreams on online platforms adjacent to the video gaming community, as well as radicalized within that environment to a significant degree (e.g., Christchurch, New Zealand; Halle, Germany), increasing scholarly and policymaker interest is focusing on far-right radicalization and recruitment within online video game environments. Yet little empirical insights exist about the specific engagement between right-wing extremists and their potential recruits on these platforms. This study presents findings from a qualitative exploration of German police-investigation files for two children who radicalized on gaming platforms to become involved in extreme-right criminal behavior, including the plotting of a terrorist attack. The study demonstrates the importance of online and offline factor interaction, especially regarding the role of familiar criminogenic factors, as well as the social–emotional bonding between potential recruits and extremist gamers created through shared gaming experiences that lead to high-intensity extremist radicalization aimed at offline behavioral changes. The study did not find evidence for strategic organizational far-right recruitment campaigns, but rather multidirectional social-networking processes which were also initiated by the potential recruits.

Rachel Kowert, Alexi Martel, William B. Swann (2022). Not just a game: Identity fusion and extremism in gaming cultures
”Extremist ideologies have clearly become increasingly prevalent in the world of video games. What is less clear, however, is the mechanism through which these ideologies make their way into the psyches of gamers. Here we focus on the potential role of identity fusion in the radicalization of video gamers. In three studies, we show that fusion with gaming culture is uniquely predictive of a host of socially pernicious outcomes, including racism, sexism, and endorsement of extreme behaviors. We also show that specific personality attributes (e.g., insecure attachment, loneliness) may interact with fusion with gaming culture to further amplify support for extreme behavior, and that specific gaming communities (e.g., Call of Duty) may serve as catalysts that encourage strongly fused gamers to embrace antisocial attitudes and behaviors. These findings contribute to a theoretical understanding of the psychological processes that foment radicalization and guide the development of strategies for discouraging extremist ideologies in gaming spaces.”


The “Rabbit Hole” Phenomenon,
Echo Chambers and “Filter Bubbles”

Social media algorithms are often accused of leading users down a spiral, with each recommendation steering them towards ever more radical channels and content. Some experts refer to this phenomenon as the "rabbit hole". The algorithm amplifies certain channels and their content, but its main purpose is to get users to spend as much time as possible on the platform (not least for financial gain). To do so, algorithms must sustain the user's attention by increasing stimuli. In other words: a person starts consuming content on a particular topic, and the algorithm recommends further content on that topic. After a while, the stimulus can only be maintained with increasingly provocative and radical content, drawing the viewer ever deeper into the rabbit hole. (cf. Matamoros-Fernández/Gray 2019; Ledwich/Zaitsev 2019) So-called "echo chambers" (or "filter bubbles") arise in a similar way: users are progressively shown content they like or that reinforces their existing opinions. (cf. Reed et al. 2019)

Extremist actors profit from these algorithms. They even understand how the algorithms work and exploit them to continually refine their tactics. In many cases they know the community guidelines well, so the content they produce is "borderline" and often deliberately ambiguous, in order to (still) comply with social media platforms' guidelines and avoid removal. Moderators on social media platforms therefore constantly face the challenge of distinguishing, within these grey areas, between a joke and bullying, religious doctrine and hate speech, or sarcasm and a call to violence. (cf. Matamoros-Fernández/Gray 2019; Ledwich/Zaitsev 2019)

The rabbit hole phenomenon is contested, however, and several researchers have tested the thesis in quantitative studies across different platforms (see the studies below).

Literature on the Rabbit Hole

Ariadna Matamoros-Fernández and Joanne Gray, VOX-POL (2019): Don’t Just Blame YouTube’s Algorithms for ‘Radicalisation’. Humans Also Play a Part.
“People watch more than a billion hours of video on YouTube every day. Over the past few years, the video sharing platform has come under fire for its role in spreading and amplifying extreme views. YouTube’s video recommendation system, in particular, has been criticised for radicalising young people and steering viewers down rabbit holes of disturbing content. The company claims it is trying to avoid amplifying problematic content. But research from YouTube’s parent company, Google, indicates this is far from straightforward, given the commercial pressure to keep users engaged via ever more stimulating content. But how do YouTube’s recommendation algorithms actually work? And how much are they really to blame for the problems of radicalisation?”

Mark Ledwich, Anna Zaitsev (2019): Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization
“The role that YouTube and its behind-the-scenes recommendation algorithm plays in encouraging online radicalization has been suggested by both journalists and academics alike. This study directly quantifies these claims by examining the role that YouTube's algorithm plays in suggesting radicalized content. After categorizing nearly 800 political channels, we were able to differentiate between political schemas in order to analyze the algorithm traffic flows out and between each group. After conducting a detailed analysis of recommendations received by each channel type, we refute the popular radicalization claims. To the contrary, these data suggest that YouTube's recommendation algorithm actively discourages viewers from visiting radicalizing or extremist content. Instead, the algorithm is shown to favor mainstream media and cable news content over independent YouTube channels with slant towards left-leaning or politically neutral channels. Our study thus suggests that YouTube's recommendation algorithm fails to promote inflammatory or radicalized content, as previously claimed by several outlets.“

Reed, Alastair; Whittaker, Joe; Votta; Fabio; Looney, Seán (2019): Radical Filter Bubbles: Social Media Personalisation Algorithms and Extremist Content
“This paper assesses whether social media platforms’ personalisation algorithms promote extremist material, finding evidence that only one platform studied – YouTube – prioritises extremist material by the recommender system.”

Axel Bruns (2019): It’s Not the Technology, Stupid: How the ‘Echo Chamber’ and ‘Filter Bubble’ Metaphors Have Failed Us
“Building on empirical studies that show no significant evidence of filter bubbles or echo chambers in search or social media, this paper argues that echo chambers and filter bubbles principally constitute an unfounded moral panic that presents a convenient technological scapegoat (search and social platforms and their affordances and algorithms) for a much more critical problem: growing social and political polarisation. But this is a problem that has fundamentally social and societal causes, and therefore cannot be solved by technological means alone.”

Ludovic Terren, Rosa Borge-Bravo (2021). Echo Chambers on Social Media: A Systematic Review of the Literature
There have been growing concerns regarding the potential impact of social media on democracy and public debate. While some theorists have claimed that ICTs and social media would bring about a new independent public sphere and increase exposure to political divergence, others have warned that they would lead to polarization through the formation of echo chambers. The issue of social media echo chambers is both crucial and widely debated. This article attempts to provide a comprehensive account of the scientific literature on this issue, shedding light on the different approaches, their similarities, differences, benefits, and drawbacks, and offering a consolidated and critical perspective that can hopefully support future research in this area. Concretely, it presents the results of a systematic review of 55 studies investigating the existence of echo chambers on social media, providing a first classification of the literature and identifying patterns across the studies’ foci, methods and findings. We found that conceptual and methodological choices influence the results of research on this issue. Most importantly, articles that found clear evidence of echo chambers on social media were all based on digital trace data. In contrast, those that found no evidence were all based on self-reported data.

Petter Törnberg (2022). How digital media drive affective polarization through partisan sorting
”Politics has in recent decades entered an era of intense polarization. Explanations have implicated digital media, with the so-called echo chamber remaining a dominant causal hypothesis despite growing challenge by empirical evidence. This paper suggests that this mounting evidence provides not only reason to reject the echo chamber hypothesis but also the foundation for an alternative causal mechanism. To propose such a mechanism, the paper draws on the literatures on affective polarization, digital media, and opinion dynamics. From the affective polarization literature, we follow the move from seeing polarization as diverging issue positions to rooted in sorting: an alignment of differences which is effectively dividing the electorate into two increasingly homogeneous megaparties. To explain the rise in sorting, the paper draws on opinion dynamics and digital media research to present a model which essentially turns the echo chamber on its head: it is not isolation from opposing views that drives polarization but precisely the fact that digital media bring us to interact outside our local bubble. When individuals interact locally, the outcome is a stable plural patchwork of cross-cutting conflicts. By encouraging nonlocal interaction, digital media drive an alignment of conflicts along partisan lines, thus effacing the counterbalancing effects of local heterogeneity. The result is polarization, even if individual interaction leads to convergence. The model thus suggests that digital media polarize through partisan sorting, creating a maelstrom in which more and more identities, beliefs, and cultural preferences become drawn into an all-encompassing societal division.”

O'Hara, K. and Stevens, D. (2015), Echo Chambers and Online Radicalism: Assessing the Internet's Complicity in Violent Extremism. Policy & Internet, 7: 401–422.
“This article considers claims made by various authors that the use of filtering and recommendation technology on the Internet can deprive certain communities of feedback, and instead amplify groups' viewpoints, leading to polarization of opinion across communities, and increases in extremism. The ‘echo chamber’ arguments of Cass Sunstein are taken as representative of this point of view, and examined in detail in the context of a range of research, theoretical and empirical, quantitative and qualitative, in political science and the sociology of religion, from the last quarter century. The conclusion is that the case has not been made either (a) that echo chambers are necessarily harmful, or (b) that the Internet is complicit in their formation.”

Whittaker, J. (2020). Online Echo Chambers and Violent Extremism. In: Munir Khasru, Riasat Noor, Ho Yean Li (Ed.), The Digital Age, Cyber Space, and Social Media: The Challenges of Security & Radicalization (pp. 129-150). Institute for Policy, Advocacy, and Governance.
”There is a discernible pattern in the aftermath of the contemporary terrorist attack. As journalists attempt to gather as much information about the attacker, it is usually found that the actor used social media as part of their activity, often to sharing propaganda and socialising with like-minded individuals. This is quickly diagnosed as being in an “echo chamber” – the effect of cutting oneself off from all dissenting opinions, causing an actor to engage with more and more extreme voices, until these voices are normalised – including the justification of violence. The term is regularly used to imply a degree of causation towards becoming a violent extremist, but evidence is rarely offered for this. This chapter argues that there is a high degree of conceptual confusion surrounding the term, in part because it often refers to one of two different phenomena. It then analyses the small amount of literature within Terrorism Studies that pertains to echo chambers, finding that there is a distinct lack of evidence to support a causative role. The chapter then goes beyond the field of Terrorism Studies to attempt to offer some clarity for some of the questions that are currently unanswered, including the effects of being inside an echo chamber, the susceptibility of different actors, and the role of personalisation technology. It concludes by outlining some of the responses offered by social media platforms in addressing the problems raised by echo chambers, finding that although such responses are in place, there is limited evidence for their effectiveness.”