

Updated: Feb 2, 2023

While the importance of the internet in today’s society is irrefutable, as a driver of communication, a gateway to vast amounts of information and an enabler of socio-political participation for all kinds of social groups, the rapid advancement of digital and communication technologies has posed numerous problems for state agencies, technology companies and researchers by catalysing the diffusion of disinformation, extremist content and hate speech. These issues have become even more intricate with the development of social media platforms’ and search engines’ algorithms, which curate and amplify content based on users’ preferences, magnifying pre-existing beliefs and thus causing group polarization.

This blog post will explore these matters further, first by providing a comprehensive definition of the term ‘fake news’. It will then examine the role of disinformation campaigns in fuelling hate speech and sectarian sentiments online and evaluate the extent to which this radicalizes the views of certain individuals and could subsequently lead to extremist violence.

The ‘Fake News’ Phenomenon

In 2016, ‘post-truth’ was chosen as Word of the Year by Oxford Dictionaries, yet as argued by Al-Rodhan (2017), the term is symptomatic of an era rather than just a year: “an era of boundless virtual communication, where politics thrives on a repudiation of facts and commonsense” (n.p.). As explained by Oxford Dictionaries, post-truth is an adjective, often associated with politics, defined as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief” (2016, n.p.). Post-truth politics thus translates into assertions that appeal to one’s emotions and gut feeling rather than having any basis in empirical evidence and valid information (Al-Rodhan, 2017). A post-truth era creates an ethical twilight zone in which the stigma attached to lying is lost and lies can be told with impunity, without consequences for one’s reputation (Keyes, 2004). This results in rumours, ‘fake news’ and conspiracy theories, which can go viral in a short time, give impetus to false realities and serve propaganda purposes (Al-Rodhan, 2017).

In the case of fake news, the term has acquired a dual meaning: on the one hand, as fabricated or ‘false news’ that circulates online, and on the other, as a polemic weapon used to discredit news media channels (Quandt, Frischlich, Boberg and Schatto-Eckrodt, 2019, pp. 1-6). We will focus first on the former interpretation.

According to Wardle (2017), the definition of fake news needs to be broken down according to the different types of content being created and shared, the motivations of those who create this content, and the ways this content is disseminated. In line with this, she distinguishes seven types of fake news: satire or parody, misleading content, imposter content, fabricated content, false connection, false context and manipulated content.

Similarly, Nielsen and Graves (2017) describe fake news as a landscape consisting of poor journalism, political propaganda, and misleading forms of advertising and sponsored content. Other authors, such as Lazer et al. (2018, p.1094), define fake news as “fabricated information that mimics news media content in form but not in organizational process or intent”. Quandt et al. (2019, p.2) further synthesise the proposed definitions in a more systematic way, arguing that a basic differentiation must first be made between (i) the core content of the information (textual information, imagery, audio elements, etc.); (ii) accompanying meta-information (headlines/titles, author information, tags and keywords); and (iii) contextual aspects (positioning, references to other articles, framing). Each of these elements can then be exposed to different degrees of falsehood, or departures from factuality, such as: (a) misleading (but factually correct) information; (b) additions or deletions of information (e.g., “enrichment” of facts with misleading or wrong information, or a change of meaning through the omission or deletion of relevant information); and (c) complete fabrications without any factual basis. Combinations of these elements can also occur (ibid).

The second meaning of the term ‘fake news’, primarily used by former US President Donald Trump, stands for slandering news coverage that is unsympathetic to, and critical of, one’s argumentation or administration (Holan, 2017). Trump used to label media channels as fake news whenever they gave him unfavourable coverage, yet his delegitimization was never followed by any rebuttal consisting of factual evidence or data (ibid). Labelling someone as fake news thus serves to discredit their story, diminishing trust in the media as a whole and obscuring the interpretation of the concept (Quandt et al., 2019). Historically, the use of such an “Orwellian” technique, appropriating ordinary words and declaring their opposite in a bid to deprive subjects of independent thinking and convince them of lies, has been considered a characteristic of authoritarian regimes (Holan, 2017). A more apt contemporary term for this strategy is ‘gaslighting’, a form of psychological manipulation in which “a person orchestrates deceptions and inaccurately narrates events to the extent that their victim stops trusting their own judgments and perceptions” (Jack, 2017, p.9).

Hate Speech

While until 2016 the concept of ‘hate speech’ existed within its own orbit, that year the term began to appear frequently alongside ‘fake news’ (Gollatz and Jenner, 2018). The two were connected not only by the similar incidents around which they appeared but also by the same online milieu, particularly social media platforms such as Facebook (ibid).

In that sense, fake news stories often include biased and discriminatory content directed at members of certain groups. As argued by Blanco-Herrero and Calderon (2019), the growing incidence of hate speech against refugees and migrants owes considerably to the circulation of fake news about them on social media. The rise of nationalist right-wing parties and their derogatory rhetoric portraying refugees and migrants as a threat have increased instances of hate speech online and have also led to real-life violence, promoting hate crime (ibid).

In the case of refugees and migrants in particular, the intolerance expressed relates not only to xenophobia and racism but, to a large extent, to the fact that the majority of them profess Islam, triggering Islamophobic sentiments. In mainstream and social media, Islam and Muslims tend to be linked to negative images, often related to violence and extremism, implying a danger to national security and amplifying the ‘us’ versus ‘them’ dichotomy (Aguilera-Carnerero and Azeez, 2016). Occasionally, this transcends the boundaries of the mass communications domain and translates into institutional Islamophobia, whereby anti-Muslim prejudices are promulgated within Western societies under the guise of laws and regulations presented as being for the benefit of the general public, such as the bans on burqas and mosques in some countries (Aguilera-Carnerero and Azeez, 2016; Esposito, 2019). Far-right political parties have further aided and abetted the passing of restrictive migrant policies and have stoked Islamophobic attitudes among the population through inaccurate and biased narratives about Muslims and Islam (Esposito, 2019).

The role of Islamophobia in Islamist radicalization is well researched: it lies at the core of brewing outrage among some Muslims, which in turn allows terrorist groups to hijack personal feelings of discrimination, marginalization and victimization and convert them into extremist narratives (Abbas, 2012). This article, however, focuses on the relatively new and less explored phenomenon of far-right domestic terrorism and the radicalization of white men in response to fake news and conspiracy theories online, especially vis-à-vis anti-Muslim and anti-immigrant discourses.

The Online Disinformation-Terrorism Nexus

As argued by Piazza (2021), there is scarce empirical research on the influence of disinformation on actual political violence, and hardly any on its connection to terrorism specifically. His latest study, which uses a sample of 150 countries for the period 2000 to 2017, makes two key findings: on the one hand, countries in which governments, political parties or foreign governments circulate propaganda and disinformation through social media channels are subject to higher levels of domestic terrorism; on the other, the deliberate dissemination of disinformation online by political actors increases the country’s political polarization.

To further illustrate these linkages, an analysis of the existing literature shows how social media platforms, previously considered democracy’s allies, have increasingly become its foe, given how easy they make it to discredit opponents. Instead of limiting opponents’ speech or criticizing them, one responds with a jumble of misleading and false information, leaving readers confused about what is actually going on (Beauchamp, 2019). Members of the general public, including researchers and journalists, often lack the resources and tools to fact-check every piece of information and verify statements (Deibert, 2019): “By the time they do, the falsehoods may have already embedded themselves in the collective consciousness” (p.32). Even worse, attempts to directly repudiate falsehoods can multiply them by granting them attention (ibid). Confronted with such a downpour of information and cacophony of viewpoints and comments, consumers tend to rely on cognitive shortcuts that steer them towards opinions already fitting their beliefs (ibid). Moreover, exposure to such a myriad of information makes users more likely to question the integrity of all media outlets, which often translates into cynicism and indifference (ibid). This in turn increases political apathy and undermines faith in established democratic institutions, strengthening support and tolerance for far-right, anti-establishment or radical actors and providing oxygen to authoritarian factions (Beauchamp, 2019).

Social media platforms disproportionately assist far-right political parties by helping them bolster social divisions (ibid). Such parties tend to demonize and further marginalize out-group communities such as refugees, immigrants and foreigners (ibid). Their main strategy is to portray those individuals as intimidating and dangerous so that the general population accumulates fear and hatred towards them (ibid). Bilewicz and Soral (2020) further explain how exposure to derogatory rhetoric against immigrants and minorities can pave the way to political radicalization and engagement in intergroup violence. They argue that frequent exposure to hate speech causes empathy towards minority groups to be replaced with contempt, which translates into the erosion of existing anti-discriminatory norms (ibid).

Prominent examples include the ‘genocidal’ propaganda against the Rohingya Muslim minority in Myanmar, disseminated not only by the general population but also by Army representatives and the spokesman for Burma’s de facto leader, Aung San Suu Kyi (Gowen and Bearak, 2017); the disinformation and fake news campaigns against Muslims in India on behalf of Hindutva right-wing groups, including false claims of cow slaughtering, child ritual sacrifices in mosques and attacks on Hindus (Vij, 2020); and the conspiracy theories spread by Hungary’s right-wing PM Viktor Orban’s government regarding asylum seekers’ integration in Europe (BBC, 2019).

Thus, as summarized by Piazza (2021), these political agents often use online communities to recruit followers, mould their standpoints and mobilise them to action. Disinformation helps to foment and reinforce group grievances and opinions, deepening their sense of resentment and rage (ibid).

Derogatory statements online have been largely discussed in the context of anti-immigration and xenophobic terrorist attacks such as the 2019 El Paso shooting, the 2019 Christchurch Mosque shooting in New Zealand and the 2018 Pittsburgh synagogue shooting (Bilewicz and Soral, 2020, p.1).

As explained by Bilewicz and Soral (2020), the perpetrators of these attacks had previously been heavily exposed to anti-immigrant hate speech online and used such derogatory language as a justification for their actions. The role of social media in the dissemination of such fake news and disinformation should therefore not be dismissed. The perpetrators of the Christchurch Mosque and El Paso shootings both sent their manifestos or ‘open letters’ to several media outlets or social media platforms prior to the attacks and shared links to them on 8chan (Wong, 2019). The latter is particularly important for this blog post.

8chan [currently 8kun] is an imageboard website composed of user-created message boards, where individuals post anything of interest to them, almost entirely anonymously (ibid). Sometimes deemed a successor or offshoot of the much more popular imageboard 4chan, 8chan has been linked to extremist, bigoted, white supremacist, alt-right, neo-Nazi and anti-Semitic content, and has often been at the centre of inciting hate crimes and mass shootings (ibid).

Although 8chan was removed from the Google search engine in an unprecedented move after being implicated in hosting child pornography (Machkovech, 2015), the website remains available on the web, especially after rebranding itself as 8kun. In recent years in particular, it has become a prominent hub for the establishment and diffusion of the QAnon conspiracy theory.

When it comes to the links between radicalization, domestic terrorism and disinformation, QAnon is at the forefront of examples used by scholars and researchers in the field (Garry, Walther, Mohamed and Mohammed, 2021). QAnon is a collection of miscellaneous conspiracy theories, the central one arguing that a cabal of political elites and prominent public figures are part of a Satan-worshipping paedophile ring and that Donald Trump, often portrayed as the nation’s saviour, is the only person who can defeat them (ibid). The name originates from ‘Q’, as in “Q Clearance”, a top-secret category of federal security clearances in the US, and ‘Anon’, as in “anonymous”: supposedly an individual who, based on his access to highly confidential government information, drops clues to his followers about what is going to happen next (ibid). While QAnon incorporates various conspiracy narratives, its followers have managed to deduce concrete goals, translatable into actions, namely:

“A. A massive information dissemination program meant to:

  1. Expose massive global corruption and conspiracy to the people.

  2. Cause the people to research further to aid further in their “great awakening.”

B. Root out corruption, fraud, and human rights violations worldwide.

C. Return the Republic of the United States to the Constitutional rule of law and also return “the People” worldwide to their own rule” (ibid, p.160).

While conspiracy thinking and violent extremist ideologies fall into different categories, they can nonetheless intersect (ibid). Such overlap heightens security concerns and establishes a dangerous mechanism when the conspiracy asserts that:

“(1) one group is superior to another (superiority versus inferiority);

(2) one group is under attack by the other (imminent threat); and

(3) the threat is apocalyptic in nature (existential threat)” (RAN, 2020, p.3).

Thus, in the case of QAnon, all of the abovementioned factors are present (Garry et al., 2021). Further research shows that when these features are combined with individual characteristics such as low self-control, law-relevant morality and self-efficacy, they can lead directly to violent extremist action (RAN, 2020). That was the case for Edgar Welch, who in 2016 stormed a Washington-based pizza restaurant with an AR-15 rifle and began shooting indiscriminately, believing that the venue was the stage of a Hillary Clinton-run child sex network (LaFrance, 2020). While this particular case became known as the Pizzagate conspiracy theory, it largely gave impetus to QAnon (ibid). The storming of the US Capitol was also initiated by QAnon supporters (Argentino, 2021).

Johnson (2018, pp. 100-115) describes this process of self-radicalization of white men through fake news as the result of a masculinist paranoia that is built into the social processes of human and nonhuman communication, acts as a defence against a perceived threat, and gives conspiracy theories the oxygen to proliferate further, creating a vicious cycle.


This blog post aimed to concisely portray the phenomenon of fake news, its role in fuelling hate speech and extremist messages online, and thus its potential for leading individuals down the path of radicalization. Particularly with the rapid development of technologies such as artificial intelligence and the emergence of ‘deepfakes’, the challenges of combating disinformation have reached a new high.

Yoana Barakova is a Senior Research Analyst at the Amsterdam-based European Foundation for South Asian Studies (EFSAS). She holds a degree in Criminology from the University of Leicester, UK. Her research focuses on issues related to the region of South Asia, particularly prevention of radicalization, counter-terrorism, human rights protection, Indo-Pak relations and the Jammu & Kashmir conflict. She often speaks at supranational platforms such as the United Nations and various universities across Europe.

