Legal Frameworks that Govern Online Expression

To effectively assess how content moderation practices and automated tools are shaping online speech, it is important to understand the legal frameworks—both domestic and international—that underpin contemporary notions of freedom of expression online. Internet platforms such as Facebook, YouTube, and Twitter are popular not only in the United States, but across the globe. According to January 2019 statistics, 85 percent of Facebook’s daily active users are based outside the United States and Canada,1 and 80 percent of YouTube users2 and 79 percent of Twitter accounts3 are based outside the United States, with many residing in emerging markets such as India, Brazil, and Indonesia. Despite the fact that the majority of their users reside outside the United States, however, these companies are headquartered in the United States and are therefore primarily bound by U.S. law. Under U.S. law, two principal legal frameworks shape how we view freedom of expression online: the First Amendment to the U.S. Constitution and Section 230 of the Communications Decency Act.

In the United States, the First Amendment establishes the right to free speech for individuals and prevents the government from infringing on this right. Internet platforms, however, are not similarly bound by the First Amendment. As a result, they are able to establish their own content policies and codes of conduct, which often restrict speech that the government could not prohibit under the First Amendment. For example, Facebook, and most recently Tumblr, prohibit the dissemination of adult content and graphic nudity on their platforms; a comparable prohibition imposed by the government would be unconstitutional.

Section 230 of the Communications Decency Act is a statute that establishes intermediary liability protections for user content in the United States.4 Under Section 230, web hosts, social media networks, website operators, and other intermediaries are, for the most part, shielded from liability for the content of their users’ speech. In addition, companies are able to moderate content on their platforms without incurring liability. These protections have enabled platforms built on user-generated content to grow and thrive without fear of being held liable for their users’ posts.

In 2018, however, an amended version of the Allow States and Victims to Fight Online Sex Trafficking Act of 2017 (also known as FOSTA) was passed into law. FOSTA amended Section 230 so that online platforms can be held liable for unlawfully promoting and facilitating “prostitution and websites that facilitate traffickers in advertising the sale of unlawful sex acts with sex trafficking victims.”5 Although intended to address real harms, the law was not well crafted to address sex trafficking, and it has instead undermined one of the foundational frameworks that created the internet as we know it. It also opened new discussions about whether further exceptions to intermediary liability protections should be introduced. In addition, FOSTA has been criticized for silencing user discussions of controversial topics such as sex work, and for making the lives of sex workers more dangerous, as they were forced off of online platforms and back onto the streets to solicit clients.6

Most recently, conservative politicians in the United States have begun claiming that major internet platforms demonstrate political bias against conservatives in their content moderation practices. In response, in June 2019 Senator Josh Hawley (R-Mo.) introduced the “Ending Support for Internet Censorship Act,” which would amend Section 230 so that larger internet platforms receive liability protections only if they can demonstrate to the Federal Trade Commission that they are “politically neutral” platforms. The bill raises First Amendment concerns, as it would put the government in the position of regulating what platforms can and cannot remove from their websites, and it would require platforms to meet a broad, undefined standard of political neutrality.7

On the international level, two primary documents provide protections for freedom of expression: Article 19 of the Universal Declaration of Human Rights (UDHR) and Article 19 of the International Covenant on Civil and Political Rights (ICCPR). Both recognize free speech and free expression as fundamental human rights, and both prohibit unjustified restrictions on them. However, freedom of expression is not an absolute right under human rights law and can be subject to necessary and proportionate limitations.8

To date, internet platforms in the United States have engaged in voluntary content moderation and self-regulation. However, a wave of terror attacks facilitated through online platforms and foreign interference in the 2016 U.S. presidential election have sparked concerns about the use of these platforms to spread terror propaganda and political disinformation.9 As a result, platforms have come under increased pressure to identify and moderate these forms of objectionable content.

This pressure has manifested in legislation around the world. In 2017, Germany adopted the Netzwerkdurchsetzungsgesetz—also known as the Network Enforcement Act, or NetzDG—which requires platforms to delete hate speech, terror propaganda, and other designated forms of illegal content within 24 hours of its being flagged to the platform, or risk substantial fines.10

In addition, in April 2019, the European Parliament approved a proposal for similar regulation that would require internet platforms to remove terrorism-related content within one hour of its being flagged to them, or face fines amounting to billions of dollars.11 A string of similar legislative proposals and laws has emerged in countries around the world, including India, Singapore, and Kenya. These laws target particular categories of objectionable content, such as hate speech or fake news, and attempt to impose criminal penalties on individuals or platforms for posting and sharing such content. Most recently, in April 2019, the government of the United Kingdom released a white paper on combating online harms, which proposes multiple requirements for internet companies to keep their platforms safe and to be held responsible for the content on their platforms, as well as the decisions of the company. The white paper proposes a framework, enforced by a new regulatory body, under which companies and executives who breach the proposed “statutory duty of care” could face hefty fines.12

Many of these forms of regulation place undue pressure on companies to remove content quickly or face liability, thereby creating strong incentives for them to err on the side of broad censorship. Mandating that companies remove content within arbitrary timeframes is particularly concerning because it exacerbates this pressure. To comply, companies have invested heavily in automated tools that can flag and take down a wide range of content quickly, often with little transparency to the public. But the mandatory timelines set forth by many of these regulations establish a content moderation environment that prioritizes speed over accuracy, and the result has been overbroad content takedowns and increased threats to user expression online.
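
To make the speed-versus-accuracy dynamic concrete, consider a minimal, purely illustrative sketch (not any platform's actual system): a toy classifier assigns each post a score, and an operator facing removal deadlines and fines lowers the takedown threshold, sweeping lawful speech into the removals. All posts, scores, and labels below are invented.

```python
# Illustrative sketch only: a toy moderation pipeline showing how deadline
# pressure (a lower removal threshold) trades accuracy for speed.

POSTS = [
    # (summary of post, classifier score, actually violating?)
    ("direct incitement to violence", 0.92, True),
    ("news report quoting violent threats", 0.81, False),
    ("satire caricaturing hate speech", 0.74, False),
    ("heated but lawful political rant", 0.55, False),
    ("cooking blog post", 0.03, False),
]

def takedowns(threshold):
    """Remove every post whose score meets the threshold; report mistakes."""
    removed = [(text, violating) for text, score, violating in POSTS
               if score >= threshold]
    wrongful = [text for text, violating in removed if not violating]
    return removed, wrongful

# A cautious threshold removes only the genuinely violating post.
removed, wrongful = takedowns(0.90)
print(len(removed), wrongful)  # 1 []

# Under a 24-hour deadline with fines for misses, the rational move is to
# lower the threshold, and lawful reporting and satire are removed too.
removed, wrongful = takedowns(0.70)
print(len(removed), wrongful)  # 3, with two lawful posts wrongly removed
```

The point of the sketch is only that every notch the threshold moves down trades fewer missed violations for more wrongful removals, which is exactly the trade that liability-backed deadlines encourage.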

For example, shortly after the NetzDG came into effect in Germany, two senior members of the far-right Alternative for Germany (AfD) party who had tweeted anti-Muslim and anti-immigrant content had their tweets flagged and removed for containing hate speech. However, a series of tweets from the satirical magazine Titanic, which caricatured the initial tweets and were not themselves hate speech, were also removed,13 demonstrating how such regulation pushes companies toward overbroad takedowns in order to avoid fines. This case, one of the first to arise after the NetzDG took effect, also demonstrated how automated tools lack a nuanced, contextual understanding of human speech: they were unable to distinguish between hate speech and satire of it.
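
As a rough illustration of that failure mode, consider a hypothetical context-blind keyword filter (the banned phrase below is an invented placeholder, not any platform's actual rule set): it matches offending phrases wherever they appear, so a satirical post quoting hate speech is flagged just as readily as the hate speech itself.

```python
# Hypothetical sketch: a context-blind keyword filter cannot tell hate
# speech from satire that quotes it. The banned phrase is a placeholder.

BANNED_PHRASES = ["<slur against group x>"]

def is_flagged(post: str) -> bool:
    """Flag any post containing a banned phrase, regardless of context."""
    text = post.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

original = "we must act against <slur against group x>"            # hate speech
satire = 'imagine a politician tweeting "<slur against group x>"'  # caricature

# Both posts are flagged: the filter sees the phrase, not the intent,
# so the satirical quotation is removed along with the original.
print(is_flagged(original), is_flagged(satire))  # True True
```

Real systems are more sophisticated than string matching, but the underlying problem is the same: the signal the tool keys on is present in both the violating post and the commentary on it, and only context separates the two.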

Citations
  1. Salman Aslam, "Facebook by the Numbers: Stats, Demographics & Fun Facts," Omnicore Agency, last modified January 6, 2019, source.
  2. Salman Aslam, "YouTube by the Numbers: Stats, Demographics & Fun Facts," Omnicore Agency, last modified January 6, 2019, source.
  3. Salman Aslam, "Twitter by the Numbers: Stats, Demographics & Fun Facts," Omnicore Agency, last modified January 6, 2019, source.
  4. Mark MacCarthy, "It's Time to Think Seriously About Regulating Platform Content Moderation Practices," CIO, February 14, 2019, source.
  5. Allow States and Victims to Fight Online Sex Trafficking Act of 2017, H.R. 1865, 115th Cong. (2018).
  6. New America's Open Technology Institute, "OTI Disappointed in the House-Passed FOSTA-SESTA Bill," news release, February 27, 2018, source.
  7. New America's Open Technology Institute, "Bill Purporting to End Internet Censorship Would Actually Threaten Free Expression Online," news release, June 20, 2019, source.
  8. Filippo A. Raso et al., Artificial Intelligence & Human Rights: Opportunities & Risks, September 25, 2018, source.
  9. MacCarthy, "It's Time to Think Seriously About Regulating Platform Content Moderation Practices."
  10. Center for Democracy & Technology, "Overview of the NetzDG Network Enforcement Law," Center for Democracy & Technology, last modified July 17, 2017, source.
  11. Zak Doffman, "EU Approves Billions In Fines For Google And Facebook If Terrorist Content Not Removed," Forbes, April 18, 2019, source.
  12. Department of Digital, Culture, Media & Sport, Online Harms White Paper, April 2019, source.
  13. Linda Kinstler, "Germany's Attempt to Fix Facebook Is Backfiring," The Atlantic, May 18, 2018, source.