The Promises—and Pitfalls—of Content Regulation in the Digital Age

Blog Post
Feb. 13, 2018

The rise of platforms driven by user-generated content, such as Facebook, Twitter, YouTube, and Tumblr, has profoundly changed the scope and nature of digital content, allowing users to create, share, tweet, and post at scale. However, these innovations have also enabled the dissemination of illegal content, such as child pornography and copyright-infringing material. In addition, harmful content, such as extremist campaigns and harassing speech, has flourished in certain corners of the web. To ensure that their platforms remain safe and inclusive, technology companies have developed and implemented content regulation policies and practices that rely on both human-led and algorithmic methods. One major issue with these efforts, however, is that the public has limited insight into what drives these policies, how they are applied, and the impacts of their application.

As a result of this information barrier, David Kaye, the United Nations Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, issued a call for comments on content regulation in the digital age. The call aimed to collect information from states, companies, and civil society organizations on content regulation processes, including relevant legislative measures, internal and external policies, and the challenges and successes of these procedures.

New America’s Open Technology Institute (OTI) responded to this call for comments in December by highlighting the different content regulation practices employed by technology companies, as well as the challenges associated with them. These practices generally fall into two categories: human-led content moderation and algorithmic content moderation. Human-led content moderation typically involves companies hiring and training teams of content reviewers who evaluate flagged content and accounts and assess whether they violate the company’s terms of service and content policies. These teams are extremely valuable to a company’s content moderation efforts because they can evaluate content that requires contextual and cultural understanding, a nuance that algorithm-driven and machine-led approaches often miss. However, human content moderation is resource-intensive, so smaller, resource-strapped companies often struggle to build and scale these operations.

Algorithmic content moderation approaches, by contrast, conventionally rely on digital hashes and natural language processing tools. One current and successful example of algorithmic tools being deployed to moderate content before publication is the PhotoDNA program, which uses hash-based technology to identify and remove known illegal child porn images online. These digital hashes are created by converting a known image to grayscale, overlaying it onto a grid, and assigning each square a numerical value. Together, those values form a hash, or digital signature, which remains tied to the image and can be used to identify copies and altered versions of it elsewhere online. The PhotoDNA technology has also been adapted and applied to extremist and terror-related content online. However, because consensus about the legality of such content is less easily reached, this use has raised concerns about chilling freedom of expression, especially for marginalized and minority groups, who are disproportionately impacted by national security initiatives. Major technology companies have already been collaborating to refine and strengthen these algorithmic operations, and OTI strongly urges companies, through collaborative or other avenues, to continue efforts to ensure that speech is not chilled or unnecessarily censored.
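To make the grid-and-signature idea concrete, the sketch below shows, in Python, how a simplified grid-based image hash could be computed and compared. It is illustrative only: it assumes the Pillow imaging library, and the grid size, match threshold, and function names are hypothetical. It is not Microsoft's actual PhotoDNA algorithm, which is far more robust to resizing, cropping, and other alterations.

```python
# Simplified, hypothetical grid-based image hash (not PhotoDNA).
from PIL import Image

GRID_SIZE = 8        # divide the image into an 8x8 grid of squares (hypothetical)
MATCH_THRESHOLD = 8  # hypothetical average per-square difference for a "match"

def compute_hash(path):
    """Convert an image to grayscale, shrink it so each grid square becomes
    one pixel, and record a numerical value per square. The list of values
    serves as the image's digital signature."""
    img = Image.open(path).convert("L")        # grayscale
    img = img.resize((GRID_SIZE, GRID_SIZE))   # one value per grid square
    return list(img.getdata())

def hash_distance(hash_a, hash_b):
    """Average per-square difference; a small distance suggests the two
    images are copies or lightly altered versions of one another."""
    return sum(abs(a - b) for a, b in zip(hash_a, hash_b)) / len(hash_a)

def is_known_image(upload_path, known_hashes):
    """Flag an upload if its hash is close to any hash in the known set."""
    upload_hash = compute_hash(upload_path)
    return any(hash_distance(upload_hash, known) <= MATCH_THRESHOLD
               for known in known_hashes)
```

In practice, matching of this kind runs against a database of signatures of previously identified illegal images at upload time, which is what allows platforms to block recirculated copies before publication.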

OTI’s comments to the United Nations Special Rapporteur also emphasized the need for companies to promote increased transparency around their content regulation practices, especially since the majority of content moderation practices deal with content that is not clearly defined, either in legal terms or in companies’ own terms of service. Because these content moderation operations take place within this grey area, companies effectively become arbiters of truth, deciding what content is permissible in the modern-day public forum. To be clear, as private companies operating private speech platforms, they are entitled to regulate content on their sites as they see fit. However, a lack of transparency around these practices ultimately threatens the promise of democratic discourse and freedom of speech online, because without insight into content policies and their enforcement, users have no way of understanding what speech is permissible and what is taken down, "shadowbanned," or otherwise censored.

Thus far, companies such as Twitter, Google, Microsoft, and Automattic have published partial data on their content regulation processes. Consistent with OTI’s work pushing for increased transparency around government requests for user information, we advocate for a similar level of transparency around content moderation. As outlined in a recent essay by OTI’s Kevin Bankston and Liz Woolery, this could include, but is not limited to, data on the scope and volume of content removals, account removals, and other forms of account or content interference or flagging; information about state-flagged content; and more granular information on why content is removed and which parts of the terms of service were violated.

The benefits to companies of such transparency reporting are apparent in three significant ways. First, companies, especially those under routine pressure to scale their content moderation operations, could use reporting as an opportunity to highlight the lengths to which they have gone to moderate content and to underscore the challenges they have faced in the process. In addition, public disclosure about content moderation and terms of service enforcement will enhance discussion and understanding of content regulation issues and allow companies to begin rebuilding trust with their users. Finally, increased transparency in this field will lead to greater accountability for both companies and governments and will provide valuable insight into how companies apply policies and practices that impact users’ freedom of expression and privacy.
