
Looking Forward

As highlighted during our interviews, a single-pronged approach rarely yields enough momentum and engagement to create lasting change. To effectively support the digital safety of vulnerable communities, a multi-pronged approach needs to be adopted, one that empowers and guides 1) individuals and communities, 2) private sector actors, and 3) public sector actors, including legislative bodies and executive law enforcement agencies. A multi-pronged approach is particularly important since many organizations are currently only able to mobilize in one or two of these verticals. Below is an overview of the activities and next steps each prong of this approach can pursue.

Individual and Community-Based Approaches

Community-based organizations, including individuals and activists representing marginalized groups, are often at the forefront of digital safety issues. These include grassroots digital security organizers and social justice non-profits and groups. While technological development often buttresses organizing and advocacy efforts, its negative impacts fall disproportionately on individuals who speak out on contentious and politically charged topics. Most community-based organizations and political activists we spoke to were, understandably, intensely focused on one or two very specific issues. This narrow focus allows each community to develop highly specific strategies that fit its needs, but often at the expense of sharing best practices with others outside its constituent group. Through our interviews, we identified two areas where individuals and community-based organizations can improve their efforts to strengthen the overall digital safety environment:

1) Cross-Community Training: The classic “Digital Security Training” model will be familiar to community activists and digital security and safety experts alike. The format of these trainings tends to go in one of two directions: either a broad training that covers issues faced by communities in general, or a narrower training that covers community-specific issues. Broad digital security trainings provide basic tools and practices that protect against the most common risks related to online abuse. The practices highlighted in such trainings include generating strong, unique passwords to protect oneself from being hacked, backing up files, and developing an understanding of personal devices, software, and services and their respective privacy policies.
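
To make the first of these practices concrete, here is a minimal sketch in Python of the kind of password hygiene such trainings teach, using the standard library's secrets module. The password length and the example wordlist are illustrative choices of ours, not recommendations drawn from our interviews.

```python
# Illustrative sketch: generating strong, unique credentials, one of the
# basic practices covered in broad digital security trainings.
# The length, character set, and wordlist below are illustrative only.
import secrets
import string


def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


def generate_passphrase(wordlist: list[str], words: int = 6) -> str:
    """Return a diceware-style passphrase drawn from a supplied wordlist."""
    # A real training would point users to a long published wordlist,
    # such as EFF's; this short list is only for demonstration.
    return " ".join(secrets.choice(wordlist) for _ in range(words))


if __name__ == "__main__":
    print(generate_password())  # different on every run
    print(generate_passphrase(["orbit", "lantern", "meadow", "cobalt", "ferry", "quartz"]))
```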

The second digital security training model is narrower and issue-specific: it highlights the idiosyncratic digital security threats facing particular communities and presents strategies tailored to those threats. Many of the practices highlighted in these trainings similarly concern securing oneself against online abuse, including compromised data and technology systems, although, depending on the community, they can also cover protection against surveillance. For example, for female victims of domestic violence, this could include how to secure devices so that personal data, such as location data, cannot be compromised or monitored by abusers. In our conversations, we found that little “cross-community” training takes place, regardless of whether a digital security training is geared toward a broad or a specific audience. Yet there is a great deal of potential in cross-community digital security and safety trainings, as the tools and remedies one community engages with could offer valuable lessons and insights to another.

While the specific vulnerabilities each community faces online are inherently different, given the particular sensitivities and characteristics of each group, the tactics for mitigating threats are ultimately very similar. If women’s rights activists, for example, have developed a tool or best practice for responding to doxxing, it could be incredibly useful for racial and ethnic minority activists to learn from and adopt. The resilience of each community can only be amplified if it appreciates the intersectional identities of its constituents, which is why a more focused cross-community training initiative is needed.

2) Threat and Information Sharing Platforms: The speed at which digital safety threats emerge and grow to scale makes it nearly impossible for individuals and community organizers to keep pace. It is unrealistic to expect an individual facing an attack or abuse for the first time to respond effectively. Even for individuals who repeatedly face digital safety attacks, the sheer volume of threats can be overwhelming. The same is true for larger organizations and professional groups that face particular challenges in the digital space. We heard this over and over again in our interviews with activists, attorneys, and organizations representing vulnerable populations. Thus, some of our interviews focused in particular on the critical need to understand and share rapidly developing information about emerging digital safety threats.

Several organizations we spoke to do a particularly good job of sharing this sort of threat and response information with their constituent members (e.g., the International Network Against Cyber Hate (INACH) and the National Network to End Domestic Violence (NNEDV)). INACH, for instance, maintains an active social media presence and a member-organization mailing list in order to provide timely updates on emerging threats. In addition, INACH hosts an annual conference that allows members to collaborate in person on more difficult, high-level issues. These organizations have created a model that could serve as a guide for a larger, cross-community threat-sharing platform where individuals and organizations alike benefit from timely, targeted responses to rapidly developing threats.
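
As an illustration of what a shared advisory on such a platform might contain, the sketch below defines a simple record type in Python. The schema and field names are our own assumptions for illustration; neither INACH nor NNEDV publishes this format.

```python
# Hypothetical sketch of a structured advisory that a cross-community
# threat-sharing platform could circulate to member organizations.
# The schema is an assumption for illustration, not a published format.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ThreatAdvisory:
    title: str                       # short summary of the threat
    threat_type: str                 # e.g. "doxxing", "phishing", "impersonation"
    affected_communities: list[str]  # communities observed to be targeted so far
    mitigations: list[str]           # responses that worked for the reporting group
    reported_by: str                 # originating member organization
    observed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# Example: an advisory one member organization might push to all others,
# letting communities facing the same tactic reuse a tested response.
advisory = ThreatAdvisory(
    title="Impersonation accounts targeting shelter staff",
    threat_type="impersonation",
    affected_communities=["domestic violence survivors"],
    mitigations=["file impersonation reports", "alert members via mailing list"],
    reported_by="example-member-org",
)
```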

Private Sector Approaches

The private sector, particularly social media companies, plays an important role in combating online abuse. Over the past few years, most, if not all, of the major internet platforms that host user-generated content have received significant criticism and pressure over their approaches to moderating abusive and violent content online. Many individuals belonging to vulnerable communities, as well as civil society and advocacy organizations, have called out platforms such as Google, Facebook, and Twitter for failing to remove harmful content targeting these communities, or for removing too much content belonging to these groups, effectively silencing their speech. These calls for action raise many important questions, especially as platforms endeavor to moderate content while preserving freedom of expression. Through our interviews, we identified three key areas in which private sector actors can improve in order to bolster the digital safety of vulnerable communities and users on their platforms and to combat online abuse while still safeguarding freedom of expression.

1) Reporting Mechanisms and Feature Design: For many of the individuals we spoke to, the feature design of these platforms hindered rather than helped their ability to report harmful and abusive content. Many cited being unable to find or effectively navigate the reporting forms, as well as the frustrating lack of clarity that comes from receiving little to no follow-up communication from platforms on whether a report has been received or actioned. This made it difficult for users to check the status of their flags and forced them into the time-consuming process of repeatedly reaching out to platforms in order to ensure their own safety. In addition, many major social media platforms currently lack appeals processes for flagged content.
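
To make the gap concrete, here is a hypothetical sketch of a report lifecycle that includes the acknowledgments and appeal step interviewees found missing. The states and transitions are our own assumptions, not any platform's actual workflow.

```python
# Hypothetical report lifecycle with explicit status updates and an appeal
# step; the states and transitions are illustrative assumptions.
from enum import Enum


class ReportStatus(Enum):
    RECEIVED = "received"          # acknowledged to the reporter immediately
    UNDER_REVIEW = "under_review"
    ACTIONED = "actioned"          # content removed or account sanctioned
    DECLINED = "declined"          # no violation found
    APPEALED = "appealed"          # the decision is contested


ALLOWED_TRANSITIONS = {
    ReportStatus.RECEIVED: {ReportStatus.UNDER_REVIEW},
    ReportStatus.UNDER_REVIEW: {ReportStatus.ACTIONED, ReportStatus.DECLINED},
    ReportStatus.ACTIONED: {ReportStatus.APPEALED},
    ReportStatus.DECLINED: {ReportStatus.APPEALED},
    ReportStatus.APPEALED: {ReportStatus.UNDER_REVIEW},
}


def advance(current: ReportStatus, new: ReportStatus) -> ReportStatus:
    """Move a report to a new status, notifying the reporter at each step."""
    if new not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"cannot move from {current.value} to {new.value}")
    # This is where the follow-up communication interviewees said they
    # rarely received would be sent.
    print(f"Report status changed: {current.value} -> {new.value}")
    return new
```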

Platforms can also increase the accuracy of reporting mechanisms by being more transparent about the content policies and processes used to flag and moderate content. For example, in May 2018, Facebook released a detailed version of its Community Standards, which is meant to be almost identical to the internal guidelines Facebook’s content moderators use when making moderation decisions.1 This document outlines the policy rationale behind various types of content and the types of images, texts, and references that are acceptable and prohibited on the platform. Disseminating information on these standards is vital, as it educates users on what content is permissible on a platform and therefore offers clear guidelines for what should be flagged and what should not.

Many of the individuals we interviewed also highlighted that, although major internet platforms have taken significant strides toward greater transparency around their content policies,2 their approaches to combating online hate and harassment are often flawed. This is because efforts to regulate content often lack context. For example, some platforms’ moderation processes involve reviewing individual pieces of content in isolation rather than reviewing related pieces of content, whether sent by or directed at a particular user, together. This often prevents moderators from identifying and understanding when targeted online attacks are taking place against a particular user, and thus prevents the effective removal of harmful content.
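
Under our own assumptions about how reports are structured, the sketch below illustrates what such contextual review could look like: grouping reports by the targeted user so that a pile-on from many distinct senders surfaces as one incident rather than as isolated posts. The field names and threshold are invented for illustration.

```python
# Illustrative sketch of contextual review: reports are grouped by target
# so coordinated attacks become visible. Field names and the threshold
# are invented for illustration.
from collections import defaultdict


def find_possible_pile_ons(reports: list[dict], threshold: int = 5) -> dict[str, list[dict]]:
    """Return targets whose reports come from at least `threshold` distinct senders."""
    by_target: dict[str, list[dict]] = defaultdict(list)
    for report in reports:
        by_target[report["target_user"]].append(report)
    # Per-post review would see each of these reports in isolation;
    # grouping them exposes the pattern of a targeted attack.
    return {
        target: items
        for target, items in by_target.items()
        if len({r["sender"] for r in items}) >= threshold
    }


# Example: five distinct senders targeting one user trips the threshold.
reports = [{"target_user": "activist01", "sender": f"acct{i}", "text": "..."} for i in range(5)]
print(find_possible_pile_ons(reports))
```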

Evaluating the context of posts is also important for identifying content that does not violate company policies and should not be removed. For example, human rights groups and activists often repost harmful content published by terror groups or share graphic images of atrocities in an attempt to raise awareness of human rights violations. Given the intent behind these posts, they do not typically violate the content standards of online platforms. However, without background information on the user and their intent, the automated systems and human moderators that remove content often erroneously take these posts down, impinging on the free expression rights of these groups and users.

The experience of different communities and individuals, however, varies from platform to platform, and often depends on the capacities of a company’s Trust and Safety teams.

As new social media products enter the market, the individuals we spoke to also recommended that engineers and entrepreneurs consider implementing “safety by design.” They urged these companies to think about how their platforms could be used for abuse and harassment before such abuse actually occurs, and to implement streamlined features that either make it harder to engage in such behavior or make it easier for users to report it and secure themselves.
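
As one hypothetical example of safety by design, the sketch below throttles unsolicited mentions from brand-new accounts, a friction that raises the cost of pile-on harassment without blocking ordinary use. The limits and account-age cutoff are invented for illustration.

```python
# Hypothetical "safety by design" feature: throttle mentions of strangers
# from very new accounts. All limits here are invented for illustration.
from datetime import datetime, timedelta, timezone

NEW_ACCOUNT_AGE = timedelta(days=7)  # accounts younger than this are throttled
MAX_COLD_MENTIONS_PER_HOUR = 3       # cap on mentions of users never interacted with


def allow_mention(account_created: datetime,
                  cold_mentions_last_hour: int,
                  has_prior_interaction: bool) -> bool:
    """Decide whether to deliver a mention, throttling cold outreach from new accounts."""
    account_age = datetime.now(timezone.utc) - account_created
    if account_age >= NEW_ACCOUNT_AGE or has_prior_interaction:
        return True
    return cold_mentions_last_hour < MAX_COLD_MENTIONS_PER_HOUR
```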

2) Investing in Local Partners: Currently, many platforms fund and liaise with on-the-ground organizations that have expertise on, and work directly with, vulnerable communities. For example, Twitter hosts a Trust and Safety Council, which provides input on the company’s safety products, policies, and programs. The council is composed of safety advocates, academics and researchers, grassroots advocacy organizations, and community groups. During our interviews, it was continuously emphasized that individual users need to know they have a support system and a community or organization they can turn to when facing digital safety threats. This is particularly important for helping individuals understand that they retain agency and control over their devices and over their online and offline presence. Investing in and engaging with local partners also enables strategy and insight sharing, including cross-community collaboration, which is vital considering that these issues manifest differently across city, state, and national lines. These local organizations are also better equipped than companies to mitigate and resolve offline manifestations of these online issues.

3) Digital Safety Awareness: Platforms should work to create awareness of digital safety threats and protections, as well as of acceptable online content and behavioral standards. Many platforms, for example, publish Trust and Safety toolkits that guide users facing digital safety threats on how to protect themselves. Most users, however, do not know these resources exist and often engage with them only once it is too late.

Public Sector Approaches

Perceptions of the public sector, and of the effectiveness of its efforts to safeguard individuals’ digital safety, are mixed. Generally, however, the activists and individuals we interviewed agreed that the public sector needs to be doing more. Many criticized the federal government for failing to pass meaningful legislation that secures vulnerable communities from the digital safety threats highlighted in this report.

In addition, many of those we interviewed criticized state- and local-level officials for failing to enforce existing laws on issues such as cyberbullying, doxxing, and online stalking. Still others expressed mixed views on the role and effectiveness of law enforcement in cases of online abuse. Many of the activists we spoke to highlighted that law enforcement agencies were not adequately trained or equipped to handle such cases, while others believed these agencies were gradually improving, although improvement takes time and resources. That said, there was some recognition that select states have succeeded in passing legislation and tackling major threats associated with online abuse, such as cyberbullying and the sharing of non-consensual intimate images.

Based on our interviews, we identified three key areas in which public sector actors can improve in order to bolster the digital safety of vulnerable communities and users.

1) Enforcement: More progress needs to be made at both the federal and state levels to enforce existing laws on online abuse. In states with statutes that provide strong protections for victims of online abuse, prosecutors must be willing to take on cases that will effectively support the victim and send a deterrent message to others who might break the law. In addition, more resources need to be devoted to training public sector officials, from legal professionals to law enforcement officers, in enforcing these laws.

2) Law Enforcement Training: Many of the activists we spoke to said that their engagement with law enforcement on cases of online abuse varied based on the officer assigned to their case. This is because there is little uniformity in the training officers receive on these issues. Many law enforcement agents still do not consider cases of online abuse and compromised data and technology systems to be serious threats. Currently, the Computer Crime and Intellectual Property Section at the U.S. Department of Justice and the Internet Crime Complaint Center (IC3) at the FBI are considered some of the only functioning models for addressing these issues at the federal level in the United States.

However, these organizations typically only address large, national-level cases of fraud and crime where damages are substantial and the targets are often corporations. Law enforcement agencies at the city, state, and federal levels therefore need to receive formal training on how to handle and respond to cases at the individual and community levels. There has been progress on this in some states. For example, the New Jersey State Police has a High Tech Crime Bureau, which includes a Cyber Crimes Unit. Similarly, the state of Michigan has established the Michigan Cyber Command Center (MC3), which includes a Computer Crimes Unit (CCU) and the Michigan Internet Crimes Against Children (ICAC) Task Force. In addition, organizations such as the National White Collar Crime Center (NW3C) have begun producing resources for law enforcement agents seeking training on cybercrime issues.

Based on our interviews, the six core recommendations for improving law enforcement awareness of, training on, and response to cases involving threats to digital safety, particularly those related to online abuse, are:

  • Before law enforcement agents can begin to address the idiosyncratic digital safety challenges vulnerable communities face, they need to receive comprehensive training in the basics of cybercrime mitigation and cybercrime forensics. Only after this fundamental base of knowledge is established can they begin addressing individual community needs effectively.
  • Law enforcement training on digital safety issues should encompass, but not be limited to, understanding how to effectively engage with victims, how to triage cases, and how to educate victims on mitigation strategies to prevent more incidents going forward.
  • Law enforcement agents should be required to receive continued education around digital safety threat issues, especially given that technology continues to rapidly evolve and change.
  • There should be a centralized reporting system, focused on cases at the individual and community level, that victims facing digital safety threats can turn to. The United Kingdom has adopted such a system: Action Fraud, the country’s national fraud and cybercrime reporting center.
  • Law enforcement agencies that have received training on mitigating and managing digital safety threats should be at the front line of disseminating information on protection and prevention to other organizations, including educational institutions.
  • Once law enforcement agents are well versed in digital safety threats and mitigation strategies, they should receive sensitivity and awareness training that pertains to different communities including youth, women, and ethnic and racial minorities.

3) Fostering a Cultural Change Through Institutions: Enforcing existing laws and training law enforcement are vital avenues for preventing and mitigating digital safety threats to vulnerable communities and for promoting cultural change. But other institutions must also be engaged for this norm change to be sustainable and long-term. Many of our interviewees highlighted the responsibility of educational institutions to impart these insights and norms to individuals earlier, rather than later, in life.

Educational institutions also have a responsibility for education across generations. Our interviewees stressed that discussions about acceptable digital behavior and how to be a good digital citizen need to start earlier, in lower levels of education. This is a particularly important period, as it is when many young users first engage with technology. These efforts matter especially for topics related to online abuse, such as the sharing of non-consensual intimate imagery, cyber sexual harassment, and online and offline consent, as well as for threats, like hacking and phishing, that aim to compromise data and technology systems. This preventive approach was perceived as more impactful in fostering cultural and norm change than the reactive approach of initiating these conversations in higher education, when individuals have likely already experienced or perpetrated such behaviors.

Citations
  1. Facebook, Community Standards Enforcement Report.
  2. For more on how domestic and international technology platforms have been promoting transparency around their content policies, and on best practices for improving such transparency, see New America’s Open Technology Institute’s Transparency Reporting Toolkit on Content Takedown Reporting.
