Why White Supremacy Online Is a Growing Problem

Article In The Thread
May 24, 2022

On May 14, an 18-year-old white man traveled hours from his home to a supermarket in a predominantly Black community in Buffalo, N.Y., and shot 13 people, killing 10. In the aftermath, law enforcement agents discovered that the shooter had online ties to racist, white nationalist, and far-right conspiracy theories and personalities.

Coming just over three years after the Christchurch attacks in New Zealand, the shooting in Buffalo is the latest somber reminder of the growing threat posed by white supremacist disinformation and conspiracy theories spreading online. It puts a spotlight on the urgent need for internet platforms to do more to address such harmful content.

According to data from April 2021, right-wing extremists have been involved in more than 267 plots or attacks since 2015, resulting in 91 fatalities. More than a quarter of these incidents, and nearly half of the resulting deaths, were caused by white supremacists. These attacks have targeted a broad range of communities, including Black, Jewish, immigrant, LGBTQ, and Asian communities. Social media platforms can play a role in exposing vulnerable individuals to extremist content, which, in turn, can contribute to their radicalization.

Despite growing concerns around the expanding white supremacy movement and its implications for national security and democracy, internet platforms have been slow to take action. Pushing platforms to take a stronger stance against white supremacist and related conspiracy theory content has been an uphill battle.

Vague Content Policies

For many years, internet platforms’ work on online extremism focused primarily on Islamic extremist groups such as the Islamic State. But after years of sustained civil society advocacy, some platforms began changing their content policies — which determine what content is permissible on their services — to address white supremacist content.

In June 2019, YouTube updated its hate speech policies to ban white supremacist and neo-Nazi content. In October 2020, it announced it would remove conspiracy theory content calling for real-world violence, including content posted by the far-right QAnon movement. But the platform stopped short of banning all QAnon-related content, as other platforms have done.

Additionally, while some platforms have taken action against white supremacist content broadly, many failed to recognize early on how it intersects with parallel movements such as white nationalism and white separatism. Facebook, for example, waited until 2019 to expand its content policies to ban content glorifying white nationalism and separatism.

Although platforms have made important strides toward recognizing the harmful nature of white supremacist content online, many of their policies remain vague and contain loopholes that allow harmful content and conspiracy theories to spread. Additionally, white supremacist content often touches on politically relevant issues, such as immigration, which makes drawing clear lines for moderation purposes difficult.

Lax Enforcement

Another reason disinformation and conspiracy theories continue to proliferate online is that platforms often fail to enforce their content policies consistently. In August 2019, YouTube banned several far-right individuals for promoting conspiracy theories related to “white replacement” and Islamophobia, including one individual affiliated with the shooter who carried out two attacks in Christchurch, New Zealand. However, YouTube restored many of these channels less than two days later, saying they did not violate its Community Guidelines.

Platforms also tend to invest more resources in tackling English-language disinformation, hate speech, and extremism, allowing misleading information in languages such as Spanish, Chinese, and Romanian to spread. This poses a serious threat to large swaths of the American and global population. White supremacist and conspiracy theory content is also multifaceted, encompassing everything from text posts to memes to live-streamed videos, and platforms’ ability to moderate these different formats varies.

Although platforms have invested heavily in tackling specific types of mis- and disinformation, such as content related to COVID-19 and U.S. presidential elections, these approaches are typically temporary. As a result, many platforms, including some of the largest, end up playing “whack-a-mole” with disinformation and conspiracy theories on a daily basis, especially as repeat spreaders become more tech savvy and adopt new techniques to evade detection.

For example, on November 5, 2020, Facebook banned the first Stop the Steal group on its services, which had been casting doubt on the legitimacy of the election and calling on its members to engage in violence. By then, the group had over 360,000 members. Over the next few weeks, several similar groups cropped up; researchers at Facebook noted that they were among the fastest-growing groups on the service at the time and that the company was unable to keep up. These groups incubated numerous conspiracy theories around the outcome of the 2020 presidential election, culminating in the January 6, 2021 insurrection at the U.S. Capitol.

Ad-Driven Business Models

Many platforms shy away from banning white supremacist personalities and figures because these accounts drive engagement. Today, most internet platforms rely on advertising to generate revenue: the more ads a platform can serve to users, the more revenue it generates. This model incentivizes platforms to permit divisive content that drives engagement and retains user attention on their services, and white supremacist content is no different. In other cases, platforms are reluctant to ban accounts belonging to public officials that spread white supremacist conspiracy theories and content, perhaps out of fear of retaliatory regulation.

Additionally, although platforms have begun altering their content policies to address white supremacist content, they have done far less to address the role their automated content curation tools play in amplifying harmful content. These algorithmic tools are often designed to optimize for engagement, in support of the ad-driven business model. Researchers at the Anti-Defamation League have found that, despite changes to its technology, YouTube still algorithmically recommends extremist content to individuals who have previously engaged with, and are therefore susceptible to, such content. This can create a “rabbit hole” effect and radicalize vulnerable internet users.

Lack of Transparency

The examples above offer only a small window into the true scope of the problem posed by white supremacist and conspiracy theory content online. Many platforms claim that their efforts to tackle online misinformation, disinformation, and conspiracy theories are working, but they offer very little transparency around the impact of those efforts.

Some platforms report aggregate figures on the amount of misleading and harmful content they remove in their transparency reports, but these figures are often lumped together with other categories of content, making it difficult to gauge the true impact of their moderation efforts. Additionally, platforms deploy a range of other mechanisms to curtail the spread of misleading information, including placing warning or contextual labels on misleading content, downranking content, altering recommendation systems, and demonetizing accounts. Because companies provide so little comprehensive transparency on how these efforts influence the spread of disinformation online, it is difficult to identify where platforms can concretely improve and to hold them accountable.

Disinformation and conspiracy theories linked to white supremacy are increasingly proliferating online, posing a major threat to people of color and other marginalized communities, as well as to national security and democratic processes in the United States and around the world. Internet platforms need to invest more in addressing the spread of this harmful content.

You May Also Like

What America’s Fear of China Really Says About Us (The Thread, 2021): Americans’ fear and anger towards racialized enemies fed racism against people in the United States who 'looked like' those enemies. So when politicians slam China, all Asian Americans get put in the crosshairs. During the pandemic, even with Trump out of office and off Twitter, anti-Asian racist language in tweets about COVID-19 spiked.

Facebook’s Content Moderation Language Barrier (The Thread, 2021): Facebook’s content moderation has been under a magnifying glass as misinformation has continued to spread. Some have used coded language to avoid triggering the platform’s algorithms, but in other parts of the world there’s no need. Without fixing its language loophole, Facebook risks abetting the persecution of marginalized ethnic groups.

The Transparency Report Tracking Tool (Open Technology Institute): Internet platforms have begun publishing transparency reports that outline how they are enforcing their own content policies and rules. This tool shows how internet platforms’ reporting and enforcement practices compare to one another and how this reporting has expanded to include new metrics and categories of content.

