Dec. 13, 2018
For Rabia, it didn’t take long for the abuse to start. Shortly after landing the editorial internship of her dreams and publishing her first major story, she began receiving vitriolic threats and insults across her personal and professional accounts online. The flood of incoming messages was overwhelming and deeply personal, often infused with overt anti-Muslim and sexist language. As soon as Rabia reported one threat to an online platform, another three would pop up to take its place.
As the online attacks against her grew, so did the consequences. First, she was doxxed—her cell phone number, home address, and the names and addresses of her family were shared online. Unsurprisingly, after this, she and her family received calls from people threatening to kill them. When they reported the incidents to law enforcement officers, the remedies were ineffective, mostly because, despite their best intentions, the local law enforcement agents lacked the skills and resources needed to help in Rabia’s situation. With no one to turn to and no way to stop the abuse, Rabia did the only thing she felt she could do: She curtailed her writing and restricted her use of the internet, for fear of being targeted again or, worse, being physically harmed.
Rabia’s story isn’t a unique one. Over the course of eight months, we spoke to dozens of individuals to investigate the digital security—or lack thereof—of marginalized populations. What we found is that online threats, ranging from stalking to sexual exploitation and extortion, are significant—and growing. And at a time when the technology landscape is evolving in increasingly intrusive and unpredictable ways, these threats can often translate into offline harms, too, as the lines between our digital and real lives are continuously blurred.
Importantly, these incidents can affect anyone. Forty-one percent of adult internet users in the United States have personally experienced at least one form of digital abuse, and the true scope of the problem remains largely hidden behind veils of shame and fear. However, these risks are particularly pronounced for populations that are already at risk of marginalization in the physical world. For instance, three out of four women have been subject to some form of online harassment, abuse, or violence on account of their gender. Similarly, one in four black Americans and one in 10 Hispanic Americans have faced discrimination online as a result of their race or ethnicity, compared to only three percent of white Americans.
Online abuse and violence also have significant consequences for young people. Sixty-seven percent of young adults in the United States have been subject to some form of online harassment, abuse, or violence, with LGBTQ youth four times more likely to report sexual harassment online than their non-LGBTQ peers. The Anti-Defamation League and Southern Poverty Law Center have also reported significant increases in anti-Semitic and anti-Muslim content online since the 2016 presidential election.
But what do all these statistics tell us on a more fundamental level?
For one, they shed light on the role that race, ethnicity, gender and sexual identity, age, and religion routinely play in exacerbating cases of online abuse and harassment; individuals whose intersecting identities cover multiple marginalized groups face among the worst forms of online abuse. In addition, disruptions and threats to an individual’s digital security have profound impacts on that individual’s willingness to use technology—a particularly big problem when you consider just how much technology permeates people’s everyday lives. As a result, when thinking about the collective outcome that silencing individual voices has in a democratic society, the effects become particularly worrisome.
Second, they expose the limitations of the prevailing approaches to digital security research and policy work—but also suggest where this work might go in the future. Digital security research has primarily encompassed a narrow range of threats related to securing digital assets, such as breaches, hacking, cyber warfare, financial crimes, and broader internet governance issues. While these issues are certainly important, the larger framing is too narrow, and doesn’t account for how digital tools can be used to prey on already-vulnerable communities. For instance, perpetrators of domestic violence can use certain technologies to track, monitor, and surveil their victims. Or hackers can target communities such as seniors, who tend to be less digitally literate, in order to steal their identities and finances.
While these threats may not always involve physical attacks, they’re just as important as what we typically think of when we talk about digital security, in no small part because these attacks are more likely to involve the intimate details of people’s daily lives.
To fully and meaningfully address the range of digital safety threats that affect marginalized and vulnerable communities, the same level of care and attention currently dedicated to more traditional “hard” security threats ought to be aimed at other, oft-forgotten categories of harm. In our recent report on the digital security threats marginalized populations face, we offer a set of strategic recommendations—including facilitating more cross-community digital safety trainings and engagements, improving reporting mechanisms on online platforms, and investing more in comprehensive training on digital safety issues for law enforcement officers—as first steps toward achieving this goal. These recommendations seek to bolster the capacity of individuals and community-based organizations, private sector groups, and public sector actors to combat threats to digital safety. Above all, they seek to increase digital security for marginalized groups—while at the same time protecting First Amendment free speech rights.
The names and stories contained in this article are based on several dozen interviews the authors conducted over several months; however, they have been altered to protect individual identities. The stories represent an amalgamation of the types of experiences often retold by interviewees.