Eleanor Beevor, Associate Researcher at Quilliam, explores why censorship could end up aiding jihadists in their recruitment efforts.

We should not fear the organised, hierarchical terrorist organisation as much as we should fear those pushed off the mainstream grid, into the underworld, and forced to improvise. This applies both online and offline, because formal organisations are that much easier to find. Anthropologist Scott Atran describes how, in the wake of 9/11, jihadists suddenly formed numerous alliances with criminal enterprises. This was out of necessity rather than choice. When the US State Department’s Counterterrorism Finance Program set out to “follow the money”, mainstream financial institutions began reporting suspicious transactions. Would-be terrorists needed a harder-to-reach financial infrastructure, and they found it in the criminal underworld. This strengthened organised crime, as well as making terrorist activity harder to monitor. A parallel occurs on the Internet: censoring terrorist or extremist material on accessible platforms forces its creators further into the “dark web”, the portion of the Internet unreachable by normal search engines.

The practice of censoring or taking down this kind of material is known as “negative measures”, and it is the favourite online tactic of the Home Office’s Prevent programme, one arm of its CONTEST counter-terrorism strategy. In the short term, negative measures are understandably appealing, but pushing users into the dark web is just one of their unintended consequences. Taken together, those consequences cast serious doubt on negative measures’ effectiveness. While partnerships between governments, Internet Service Providers and social media platforms to monitor extremist and terrorist content are a welcome move, those efforts will not meet their potential so long as censorship and removal remain their dominant strategies.

The biggest social media sites, such as Twitter and YouTube, generate the most headlines when shocking content is posted on them, but they also have the most effective systems for users to report it. Yet this efficacy, and the takedowns that follow, have failed to halt the spread of propaganda and gruesome videos by the likes of ISIS. Moments after a takedown, the material is reposted under an alternative username, and it surfaces again on less monitored platforms, such as Kik or Snapchat. Even when the material breaches the Terrorism Act 2006, policing the Internet is a legal quagmire. No international laws govern it, meaning that the UK has no power to, for instance, take down content hosted on a foreign server.

This is not to advocate a laissez-faire policy towards illegal web content. Taking it down is an imperfect strategy, but leaving it alone is clearly no better. Rather, we need to recognise that the real risks lie in an overly heavy-handed approach, one which fails to distinguish between “extremist” material and material that breaches the Terrorism Act 2006. Prevent aims to cover “…all forms of terrorism, including far right extremism and some aspects of non-violent extremism”, but has yet to define what constitutes “extremism”. Censorship guided by an ambiguous idea of “extremism” risks radicalising those who sympathise with extremist views but have not actually committed a crime. Calls to censor “extremist” content with algorithms similar to those used to block Child Sexual Abuse Imagery are misguided: much extremist material is written, and its meaning depends on the reader’s interpretation. By restricting civil liberties and the freedom of legal expression, the very freedoms by which we distinguish ourselves from repressive movements such as ISIS, we play into existing propaganda narratives of western imperialism and hypocrisy.

As Quilliam’s white paper, “The Role of Prevent in Countering Online Extremism”, notes, the UK must recognise its role as a leader in counter-extremism. Prevent was founded to combat domestic radicalisation in the wake of the 2005 London bombings. By contrast, the US Department of Homeland Security’s Working Group on Countering Violent Extremism (CVE) was only founded in 2010, and many European initiatives are mirroring our own. Though this is commendable, Prevent has also had more than its fair share of controversies. Poorly regulated funding mechanisms led to offline sponsorship of community groups with extremist views, while at the same time heightening a sense of prejudice felt by many British Muslims. In 2010, one Imam described Prevent as “MI5 Islam”, reflecting precisely the kind of grievance Prevent was meant to help combat as part of stemming radicalisation. Trying to censor all material deemed “extremist” would only reinforce the perception of a British government and society that scrutinise British Muslims and politicise their religion by effectively forcing them to “choose a side”.

Regimes with less favourable human rights records than our own, when looking for counter-terror models, do not need to be set an example of blanket censorship of “extremist” material, particularly since “extremism” is so often defined at governments’ convenience. There has to be an alternative tactic. The Quilliam Foundation’s extensive study of the role of the Internet in radicalisation, “Jihad Trending”, found that “positive measures” were a much more promising alternative. These are essentially counter-narratives: alternative viewpoints, for example from religious leaders offering different interpretations of holy texts. Governments can contribute with clarifying, transparent statements that counter extremists’ accusations about, or misrepresentations of, their actions.

As “Jihad Trending” made clear, the Internet is a facilitator of radicalisation, not the spark; the first steps usually happen offline. Quilliam’s latest white paper, released this morning, shows that Prevent may be ahead of the curve, but it needs to get smarter if it is to set an example abroad and effectively reduce radicalisation at home. That means centralising its online and offline strategies and, crucially, defining extremism within the bounds of civil liberty, to avoid exacerbating the very radicalisation it was designed to fight.