Why We Might Be Fighting Hate Speech All Wrong

Oct. 12, 2017

Another weekend, another white supremacist march. On Saturday, Richard Spencer led a second torch-lit protest in Charlottesville and gave a speech at the now-infamous Robert E. Lee statue. As before, he spoke defiantly, promising similar pop-up rallies in the future. And, naturally, Spencer took to social media, posting videos from the march to his Twitter account.

Saturday’s march follows the first one in August, also helmed by Spencer, which ended in Heather Heyer’s death. In the aftermath of these sorts of marches, infused with blatant white nationalist language, at least one question comes to mind: How can we clamp down on hate speech?

This spring, I spent hours interviewing leaders of a white nationalist group to find out why they promote hate speech, and why, more specifically, they do so online. The common doctrine, supported by numerous studies and articles, is that hate speech functions as both a recruitment tool and a key way to catapult far-right ideas into mainstream discourse. Yet my conversations pointed to another reason: Hate speech also provides emotional fulfillment. And, if we fail to address that component of hate speech, we’re unlikely to face it down in a meaningful way.

White nationalist groups do use social media hate speech to recruit and spread their ideas. The people I talked to, for instance, said that they use their Twitter accounts to comment on current events, share posters containing racist caricatures, put out calls for new members, and, on at least one occasion, celebrate Adolf Hitler’s birthday. Moreover, many white nationalist pages supplement these topics with discussions of bogus black-on-white crime “statistics,” falsified studies “proving” racial IQ differences, and unsupported theories of human genetics.

But this isn’t the crux of hate speech, online or offline; the chant at Charlottesville wasn’t “scientific” or “reason”-based. Instead of statistics, the chant was a surprisingly vulnerable admission of palpable insecurity: “You will not replace us. Jews will not replace us.”

It may be difficult for many of us to imagine, but hate speech offers a powerful gift to the self-fashioned “alt-right”: emotional gratification. In the current political era, that essentially ensures both a demand for and a supply of online hate speech. And it guarantees that people won’t stop posting hate speech online just because they have to remake a deleted account. Let’s be clear: Racist ideas, of any kind, shouldn’t be accepted, entertained, or ignored. The point here is that, as history has shown, repression fuels the creation of organizations, forges stronger in-group bonds, and glorifies the movement, whether in service of social progress or hate.

After Heyer’s death, you could argue that we saw a new approach to censorship. Never before had tech companies so unilaterally and forcefully taken on responsibility not just for censoring content on their platforms, but also for cutting off resources key to the movement’s online existence. It was noteworthy for an industry that, until recently, had stubbornly stuck to arguments of neutrality and anti-regulation. Beyond Facebook and Twitter’s interventions, a wide range of companies took action: Airbnb took down accounts alt-right members had used to book rooms in Charlottesville, and GoDaddy and Google revoked The Daily Stormer’s domain registration. Squarespace, meanwhile, shut down hate sites it hosted, and Spotify deleted the music of 37 white nationalist bands. PayPal and GoFundMe cut off white nationalist groups’ funding streams, and Cloudflare ended their protection from DDoS attacks.

To some commentators, this purge was a victory: an eradication of the message and, therefore, the problem. Yet by relying only on the standard explanations of hate speech, this approach misses the deeper point: Hate speech is a much larger problem than we often think precisely because it’s fueled by emotional gratification. In other words, the censorship we’ve seen so far addresses the “what” but not the “why.” For white nationalist group members, emotional gratification is hard to find outside of hate speech communities; within them, they find acceptance and purpose.

Why is this the case? White supremacy may be in full, inglorious view right now, but hate speech is still generally unwelcome in physical spaces. The Charlottesville protesters felt comfortable marching without masks, but afterward, many lost their jobs or were even disowned. Online hate speech groups, in contrast, create for their members exactly the kind of safe space they so disdain in the outside world: a place where they feel they can be authentic without fear of attack or suppression.

None of the people I spoke to were “out” to their families and friends. They even observed some intra-group anonymity, to guard against doxxing, public identification, or targeting. But within the group, they described a shared worldview, save a few differences of opinion. What’s more, they greatly valued that community. According to one, “finding anyone else to talk to about this stuff is a huge psychological help.” Built on a foundation of acceptance, this emotional support offers not only solidarity, but also a chance to make offline friends or find a mentor. In some cases, the people I spoke to expanded this “psychological help” to include financial support for when someone’s beliefs cost him his job. More important still, hate speech groups give an embittered opinion minority a sense of validation its members are otherwise missing in their lives.

The Internet, as a medium, only amplifies these emotions. By the late 1990s, researchers had already noted that, although online speech could be viewed by millions, it affects each reader the way one-to-one communication would, and it is active where radio and TV consumption is passive. Online hate speech therefore feels personal and empowering: The posters themselves are writing, clicking, reading, and making decisions about their own browsing. More recent research also suggests that social media use triggers dopamine release, creating a feel-good feedback loop. Beyond these psychological effects, online speech has a unique logistical ability to transcend time and space. The platforms offer a place to turn whenever someone needs a fellow nationalist to talk to, regardless of geographic distance.

The message seems clear: At least on its own, the censorship we’re seeing now doesn’t get to the core of hate speech. No matter the political climate, there will likely always be an emotional need for the “why” of hate speech, the validation and support it offers, regardless of how difficult online groups become to access. As we’re seeing, people have already begun to build their own hate-friendly crowdfunding platforms and apps, in addition to moving to the dark web.

So, looking ahead, what can we take away from these different dimensions of hate speech?

For one, my conversations suggest that, however uncomfortable the work may be, policymakers and tech companies ought to acknowledge that hate group members are still people, not Twitter bots that censorship can simply conjure away. Some efforts take this point to heart. Programs like Life After Hate and the Department of Homeland Security’s Countering Violent Extremism initiative, for instance, are crucial to the pushback against hate speech because they address its emotional component, work at the community level, and acknowledge the full range of U.S. extremist ideology. But under the Trump administration, Life After Hate has lost its federal funding, and CVE has become “Countering Islamic Extremism.”

To do nothing, as we’ve now seen many times over, has serious consequences beyond Heyer’s death. Dylann Roof explained in his manifesto that his massacre was inspired by the “statistics” he found on the website of the Council of Conservative Citizens, a propagandistic hate group. And the Southern Poverty Law Center has found that Stormfront members have collectively murdered some 100 people, making them “disproportionately responsible” for hate crimes and mass killings.

Reinstating funding for support groups like Life After Hate would be a good start, as would including far-right extremism in DHS programming. Standard counter-narratives could also play a weighty role if they adapted to, and learned from, alt-right Internet icons, building similarly consistent, personality- and meme-driven anti-hate accounts rather than the usual YouTube videos and messaging campaigns. Regardless, what’s critical in responding to hate, especially in this political season, is that we try something better, and more visceral, than a cosmetic fix: something that ultimately roots out the “why,” and not just the “what,” of hate speech.