How Can We Counter Violent Extremism If We Don't Know How?

Blog Post
April 24, 2018

Over the past year, technology companies such as Google, Facebook, and Twitter have faced growing pressure from governments to significantly augment their efforts to counter violent extremism (CVE) online. These calls for action have come in the wake of terror attacks in countries such as the United States, Spain, Russia, and the United Kingdom, carried out by extremists who had been radicalized online via these platforms.

Although government officials in countries such as the United States have publicly pressed major technology platforms to improve their efforts to counter extremism, some governments have gone a step further by introducing legislation intended to guarantee results. At the beginning of 2018, for example, Germany introduced the Network Enforcement Act, a hate speech law that requires social network platforms with more than two million users to remove “obviously illegal” fake news and hate speech (including extremist content) within 24 hours of the content being flagged to the company. A company that fails to meet this deadline risks fines of up to €50 million ($61.5 million).

Similarly, in the United Kingdom, Prime Minister Theresa May, who has been one of the strongest global advocates for increased policing of online content, has pushed technology companies to remove hate speech and extremist content within two hours. May’s call to action came after Policy Exchange, a British think tank, released a report stating that online jihadist propaganda receives higher engagement in the United Kingdom than in any other European nation. The report, together with a string of terror attacks carried out by extremists across the United Kingdom, has raised concerns about the extent and potential of online radicalization channels. Other members of the British government have also put forth proposals for increased regulation of technology companies in this regard. Security Minister Ben Wallace, for example, proposed that platforms should face tax penalties if they do not remove extremist content in a timely manner, and Sadiq Khan, the Mayor of London, recently suggested that other nations will follow in Germany’s footsteps and clamp down on these companies should they not improve their efforts.

In response, major tech companies have worked to ramp up their efforts to take down extremist content. Following the 2016 EU Internet Forum, Facebook, Microsoft, Twitter, and YouTube came together to create the Global Internet Forum to Counter Terrorism. The Forum aims to strengthen company-led CVE approaches by facilitating resource sharing (which resulted in the creation of a shared digital hash database) and workshops where larger companies can impart knowledge of best practices in the field to smaller platforms.

There is no doubt that the presence of extremist groups and radical content online is problematic and dangerous. However, there has been relatively little research conducted on the efficacy of content and account moderation efforts in countering violent extremism. In addition, the research that has been conducted does not permit us to draw meaningful conclusions about which approaches are effective, because the field lacks clear definitions, standardized approaches, and established metrics for assessing success. Without a clear understanding of what approaches work best and how they can be expanded in scope and strategy, there is a real risk that tech companies are wasting their efforts and resources on unproven methods. Continued governmental attempts to intimidate companies into ramping up their efforts to take down content are also problematic: they force companies to keep implementing approaches that could be having deleterious effects (such as overbroad censorship, increases in vitriolic speech, and the further marginalization of vulnerable communities), rather than taking the time to identify which approaches are truly impactful and how they can be made more strategic and effective.

In my forthcoming policy paper for the Millennials Initiative at New America, I highlight a number of ways in which researchers can broaden and improve the evaluation frameworks that have thus far been applied to assessing the impact of content and account moderation efforts on extremist groups online. The paper also makes recommendations on how companies, individually and collectively, can bolster future research and evaluation of these moderation efforts. In particular, I urge companies to expand the granularity of their transparency reporting on content moderation and to collaborate with one another to establish clear metrics and standards for success.

This blog is part of Caffeinated Commentary - a monthly series where the Millennial Fellows create engaging content around a theme. Because the fellows are hosting a symposium focused on elevating new voices and policy ideas this month, they will each create content around their own policy research topics.