Updated: Jun 6
Against the background of the ongoing digitalization of most facets of life, a defining trend of the past decade has been the transformation of entire industries into sharing economies. With the growing role of social media, extremists have also adopted a sharing economy approach, which not only replaced top-down structures with decentralised networks, but also profoundly affected related research by making many extremist activities more accessible. In fact, it is this major shift that makes social media intelligence (SOCMINT) possible in the context of (applied) extremism research. It is an avenue of research that presents both opportunities and challenges.
Why social media intelligence?
Until the last decade, one of the main methodological challenges of extremism research was the severe lack of empirical data: apart from a few attempts at going undercover in these scenes, there were barely any viable methods for collecting primary data on extremists. Court files are usually difficult to access, and scene members rarely agree to interviews. While some neo-Nazi and violent jihadi groups have maintained a presence on online boards since the 1980s, online activities were mostly marginal before social media became sufficiently widespread in the late 2000s. The era of digitalization, which largely took off in the last decade, opened a new avenue for partly circumventing the problem of data collection. Fast-forward to 2022: a large share of extremists' activities has shifted to virtual spaces. Consequently, extremist groups increasingly rely on online supporters worldwide: propaganda is no longer produced exclusively by organized groups for end-users, but also by individuals without any affiliation to a group. This sharing economy of extremism has not only changed scenes fundamentally and shaped new generations of extremists, but also produced a substantial surge in openly available extremist content. The benefit for researchers is straightforward: these circumstances offer a unique opportunity to collect primary data, which, in several cases, can even serve as crucial evidence in criminal proceedings, e.g. against foreign terrorist fighters. The goal of social media intelligence is to collect such data and turn it into actionable insights, for instance by developing a better understanding of strategies, scenes, and trends, and by improving prevention and de-radicalization programs. While SOCMINT in the age of the sharing economy of extremism is certainly not the panacea some may take it for, it can play a major part in advancing expertise on extremism.
Navigating a legal gray zone and finding a balance
When dealing with extremism in general, the single most important step before engaging in any research activity is to explore the legal boundaries. While some countries have added clauses to the criminal offence of consuming or downloading extremist content that exempt academic purposes, such protection is far from universal. Instead, the field remains a gray zone in which missteps can have grave consequences, so staying up to date on domestic and international legislation is crucial. Researchers might get caught up in covert investigations by security agencies, or simply run into trouble at airports and in other situations where devices may be screened. In general, being affiliated with a research institute and/or having credentials in the field should be sufficient to avoid legal trouble, provided the relevant laws are followed, including those on informed consent and data protection.
By the nature of extremism, research in this area raises a number of additional legal and ethical questions, one of them being whether, and in which exact scenarios, to report extremist content and groups to the authorities. Researchers often lack the legal knowledge and authority to decide whether specific content is protected by free speech or is already punishable by law. This is made even harder by extremists' tendency to hide behind memes, sarcasm, and humor. On the one hand, reporting might be the best and safest choice, legally and morally: it prevents violent content from spreading, helps authorities find and crack down on violent cells, and covers one's own legal responsibilities. On the other hand, it may mean losing sight of what is happening in the scene: frequent purges triggered by reports have been known to push groups to new, more exclusive platforms that are harder to monitor (including for the authorities). And while crackdowns usually decrease the number of active members and followers, those who remain usually become more radical due to the pressure and the perception of a shared threat to the in-group. As a rule of thumb, social workers are advised that the red line for reporting should be concrete evidence of impending violence or criminal activity. While interpreting "concrete evidence" has its own challenges, it is better to err on the side of caution.
Establishing and maintaining access
Finding and accessing the relevant platforms, channels, and groups can take minutes or months, especially where language barriers exist. Less extreme sources are easier to find, as they mostly operate on mainstream platforms to broaden their reach and are therefore fairly well known. However, the deeper into the scene, the more extreme the content and the smaller the circle: because extremists' communication and propaganda could result in criminal charges, it is increasingly difficult to find and access these sources, e.g. closed channels or private chat groups. Extremists have meanwhile adapted their modus operandi to contemporary surveillance strategies by turning researchers' own tactics against them: they monitor researchers' social media accounts and research platforms for publications on online extremist activity in order to learn which extremist channels are being observed, and how. As a result, more and more extremists take additional measures and proactively provide operational security advice to their scenes to evade researchers and authorities. Oftentimes, one must pass vetting processes of varying thoroughness to gain access. As a general recommendation, participating outside of office hours (taking time zone differences into account) can help avoid raising suspicion of being a researcher or an authority. Other common vetting methods range from requests for voice and/or video messages, to the provision of other social media accounts for screening, to personal contact with local members.
Documentation and purges: Hurdles and fixes
The amount of information online can be overwhelming, which is why establishing a sustainable system for documenting the data and keeping track of developments is crucial, in particular for platforms that are frequently hit by purges. While there is no one-size-fits-all solution, algorithms and automated scripts that crawl platforms can be immensely helpful. Of course, this requires some technical expertise or purchasing software, and can produce large, undifferentiated datasets that are hard to use. Depending on the researcher's objectives, the alternative is managing the documentation manually (i.e., hand-picking messages, images, and videos to save, and adding keywords for later search queries). This, however, can be time-consuming: in a hypothetical scenario of monitoring 80 channels/users/chat groups across 4 different platforms, with an average of 20-30 posts per unit, monitoring and documentation can take hours every day. Hence, even when using specialized software, one should always develop at least a basic research design, with a clear objective and research question that help tailor the approach.
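The scale of the manual workload described above can be made concrete with a back-of-envelope calculation. The sketch below uses the scenario's figures (80 monitored units, 20-30 posts each); the per-post review time is an assumption for illustration, not a measured value.

```python
# Back-of-envelope estimate of daily manual monitoring workload.
# The 80-unit and 20-30-post figures come from the scenario in the text;
# the ~10 seconds to read, triage, and tag each post is an assumption.

def daily_monitoring_hours(units: int, posts_per_unit: float,
                           seconds_per_post: float) -> float:
    """Hours per day needed to review and document every post."""
    return units * posts_per_unit * seconds_per_post / 3600

low = daily_monitoring_hours(80, 20, 10)   # lower bound of the scenario
high = daily_monitoring_hours(80, 30, 10)  # upper bound of the scenario
print(f"{low:.1f}-{high:.1f} hours of manual review per day")
```

Even at an optimistic ten seconds per post, the scenario works out to roughly four to seven hours of review daily, which illustrates why a clear research question that narrows the sources to monitor matters as much as tooling.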
Mental health considerations and the way to a sustainable approach
As an open source intelligence analyst with over 20 years of experience put it, experts in this field "expose themselves a lot to all the horrors this world has to offer". Monitoring extremist scenes inevitably means spending hours reading disturbing texts and viewing explicit material, which can result in secondary traumatic stress (also called vicarious trauma). Even professionals who have learned to step back from their work cannot remain unaffected: "Being exposed to these amounts of traumatizing footage isn't normal. This is not just work you can leave at the office once your shift is over." The psychological impact may be further exacerbated by the necessity of engaging outside of office hours, which increases exposure. Despite these challenges, there are a number of steps one can take: setting clear, pre-defined boundaries, proactively seeking professional counselling, and following good practices, such as taking regular breaks and only watching explicit footage in black and white, are essential parts of a sustainable approach.
Concerns about one's own security can also add to the detrimental impact of this work on mental health. One of the most important measures, if not the most important, is to protect one's personal identity, for both the security and the psychological well-being of the researcher. There is a very real risk of one's identity being compromised, or of files shared on channels being malicious. The accounts used for monitoring will unavoidably be shaped by recommendation algorithms, too: if one uses private accounts for research purposes, the algorithm will keep feeding extremist content even outside working hours. In cases where the identity of SOCMINT experts has become known, extremists have resorted to online bullying and even death threats. However, in addition to setting up separate accounts, awareness of one's own information and operational security should help avoid most issues. A recommended exercise for practitioners is to establish a personal threat model, identifying and mitigating risks.
The future of SOCMINT in extremism research
With human interactions increasingly shifting to virtual settings (and an emerging metaverse), online spaces will no doubt continue to gain relevance, which in turn poses several challenges beyond those described above. Younger generations will always be more effortlessly tech-savvy than their elders. Combined with the accelerating pace of change in the world, it will become increasingly difficult to keep up with youth trends and subcultures as well as technological innovations. On the positive side, there is now far more data available to researchers than 20 years ago, when most extremist activities took place behind closed doors, and this helps make sense of the subject matter. Thanks to continuous advances in AI and software, several methodological difficulties will likely be resolved, and trends could be revealed on a larger scale with the help of big data. That, however, will require (currently rare) hybrid researcher profiles combining social science with technical expertise. Whether these prospects give us reason to see the SOCMINT glass as half empty or half full remains contested.
Erik Hacker is a research fellow at SCENOR and an experienced researcher on violent extremism, terrorism, and radicalization, especially on the web and its structures, dynamics, and discourses. He holds an LL.M. in law and politics of international security from the Vrije Universiteit Amsterdam and a BA in political science from the University of Vienna.