Regulatory Challenges: A Free Speech Problem—and a Tech Problem

Until now, Congress has largely put its faith in companies’ ability to self-regulate. But this has clearly not worked. We have reached a tipping point—a moment in which protecting tech companies’ ability to build and innovate unfettered may actually be putting our democracy at grave risk. We now realize that we have a serious problem on our hands, and that the government must step in with regulation. But what should that regulation look like?

There is no clear or comprehensive solution to these problems right now. But we know that we need more information—likely through government-mandated transparency and impact-assessment requirements—in order to assess the potential for damage and propose viable solutions. It is also clear that we need a strong federal privacy law. These recommendations will be explored in greater depth in the second part of this report series.

Members of Congress are understandably eager to hold tech platforms accountable for the harms they enable, but should resist the temptation of quick-fix solutions: not all types of regulation will actually solve the problems of disinformation and violent extremism online without also seriously corroding democracy. We urge policymakers to refrain from instituting a broad intermediary liability regime, such as by revoking or dramatically revising Section 230 of the 1996 Communications Decency Act (CDA). Section 230 provides that companies cannot be held liable for the content that their users post on their platforms, within the bounds of U.S. law. It also protects companies’ ability to develop their own methods for identifying and removing content. Without this protection, companies that moderate their users’ content would be held liable for damages caused by any content that they failed to remove, creating strong incentives for companies to censor users’ posts. Instead, Section 230 allows companies to do their best to govern the content on their platforms through their terms of service and community standards.

[Image: Communications Decency Act, 47 U.S.C. § 230(c)]

Experts in media and technology policy are all but unanimous that eliminating CDA 230 would be disastrous for free speech, domestically and globally.1 Perfect enforcement is impossible, and holding companies liable for failing to do the impossible will only lead to over-censorship.

Foreign governments are not constrained by the U.S. First Amendment and can regulate speech on internet platforms more directly. This is already happening in various jurisdictions, including Germany, where the NetzDG law, in effect since 2018, requires social media companies to swiftly remove illegal speech, with a particular focus on hate speech, or pay a fine.2 While this may reduce illegal online speech, it bypasses important due-process safeguards, delegating authority normally reserved for judges to private companies. It also incentivizes them to err on the side of censorship rather than risk paying fines.3

Another example is the anti-terrorist content regulation proposed by the European Commission and currently pending in the EU legislative process,4 which would, among other things, require companies to institute upload filters. These still-hypothetical algorithmic systems would theoretically be able to evaluate not only the content of an image or video but also its context, the user’s intent in posting it, and the competing arguments for leaving the content up versus taking it down. Automated tools may be able to detect an image depicting a terrorist atrocity, but they cannot recognize or judge the context or deeper significance of a piece of content. For that, human expertise and judgment are needed.5

A desire to see rapid and dramatic reduction in disinformation, hate speech, and violent extremism leads to a natural impulse to mandate outcomes. But technology simply cannot achieve these results without inflicting unacceptable levels of collateral damage to human rights and civil liberties.

Citations
  1. Kaye, David. 2019. Speech Police: The Global Struggle to Govern the Internet. New York: Columbia Global Reports.
  2. Germany’s Netzwerkdurchsetzungsgesetz (“Network Enforcement Act”) went into effect in 2018.
  3. Kaye, David. 2019. Speech Police: The Global Struggle to Govern the Internet. New York: Columbia Global Reports.
  4. European Parliament. 2019. “Legislative Train Schedule.”
  5. Kaye, David. 2019. Speech Police: The Global Struggle to Govern the Internet. New York: Columbia Global Reports.