To Protect the Public Interest Internet, Lawmakers Have Better Tools than Section 230 Reform
Blog Post

June 20, 2023
On June 20th, New America’s Open Technology Institute (OTI) and the Wikimedia Foundation convened a hybrid event to discuss what’s at stake for public interest organizations in the Section 230 reform debate. Against the backdrop of the Supreme Court’s decision to punt on defining the scope of Section 230 in Gonzalez v. Google LLC, the event demonstrated the need for policymakers to consider alternative, more tailored approaches, such as greater algorithmic accountability and privacy measures, to create healthier spaces online. Congress should advance algorithmic accountability legislation, while industry should more widely and substantively adopt governance principles that give users greater clarity about content policies and moderation efforts.
Senator Ron Wyden, one of the co-authors of Section 230, delivered the opening keynote address. He emphasized Section 230’s role in empowering users to speak freely and access content online, and in building American leadership in the digital realm. The accompanying panel, featuring experts from libraries, digital archives, open data projects, and Wikipedia, demonstrated how Section 230’s liability protections are critical to the survival of public interest organizations. These organizations rely on Section 230 to publish, organize, and curate content that communities, educators, and public institutions create and share online.
In the face of increased AI integration, opaque platform policies, and growing concern over social media’s negative impact on users, policymakers are seeking stronger safeguards against online harms, particularly those that big tech companies permit or algorithmically amplify. But panelists noted that legislative reform efforts often target large technology companies with little regard for how those proposals would affect overlooked players such as small businesses and nonprofits working in the public interest. They cautioned policymakers that Section 230 reform could do more harm than good to the beneficial features of the internet.
“Stopping online conversations won't solve the problems politicians claim they will, but without 230 and the First Amendment, it will be harder for people without power, without clout, without political action committees—the marginalized voices—to call out wrongdoing by the powerful. And it'll certainly be easier for government to set the terms of public debate.” — Senator Ron Wyden.
Section 230: The “26 words that created the internet”
Section 230 of the Communications Decency Act serves as a liability shield for organizations that make “good-faith decisions” when moderating objectionable third-party content on their platforms. By giving companies the flexibility and freedom to moderate a variety of content, Section 230 has fostered innovation and competition, fueled the rise of platforms built on user-generated content, and ushered in many essential aspects of the internet. Today, the law helps create spaces where vulnerable communities, whistleblowers, dissenters, and activists can openly voice their concerns and speak out against injustices without fear of reprisal. In doing so, Section 230 provides what Senator Wyden calls the “first line of defense” against censorship.
At the same time, the protections afforded by Section 230 have also meant that some algorithms and business models have allowed hate speech, violent content, and radicalism to proliferate in certain online spaces. Some legislators have seized on these harms to propose reforming or completely repealing Section 230’s protections. While often well-intentioned, these proposals would do more harm than good.
Arguments for reforming Section 230 generally fall into two contrasting camps. The first camp wants an even more permissive environment for online speech. They allege that internet platforms are ideologically biased, and that pushing platforms to make far fewer content moderation decisions would counteract this bias. But the predictable effect of such a change would be to let harmful online speech flourish. Conversely, the second camp argues for imposing stricter duties to moderate. Yet even narrow attempts to alter Section 230’s liability shield can inadvertently incentivize censorship. This is not a theoretical concern; recent experience demonstrates precisely how it can happen. SESTA/FOSTA’s carve-out from Section 230, which OTI and other digital rights advocates opposed, failed to crack down on web-based human trafficking as intended. Instead, the fear of lawsuits led companies to impose broad content moderation measures that removed content protected by the First Amendment, all while driving sex workers to darker, more dangerous parts of the internet.
The impact of either approach could be especially devastating for public interest organizations that allow users to report, curate, preserve, share, and archive materials. Wikipedia, through community-generated and community-moderated content, operates one of the top reference sites in the world. Libraries run digital repositories and collections, and provide internet and network access to the public. Public archival programs create a public copy of the internet, helping save news, research, and content that might otherwise be lost. These public interest services take great pains to moderate responsibly and in good faith, but the removal of Section 230 protections could leave them unable to counter hateful or inaccurate speech. Without the Section 230 liability shield, recent state efforts to ban LGBTQIA+ healthcare and education, as well as access to abortion and reproductive health services, could make organizations hesitant to allow or host important, often life-saving information.
Alternative Avenues for Safer Online Spaces
So where should we turn to make online spaces meaningfully safer?
OTI believes that intermediary liability protections like Section 230 are critical to effective content moderation that balances the important objectives of free expression and online safety. Rather than enacting overly broad amendments or repealing Section 230 outright, other legislative and regulatory avenues offer a more tailored approach to addressing the need for more thoughtful content moderation practices.
Creating a healthier online environment requires increased platform and algorithmic accountability implemented through a wide range of stakeholder action. Companies should adopt best practices vetted by civil society organizations, such as the Santa Clara Principles on Transparency and Accountability in Content Moderation, to give users clarity about content policies and moderation efforts. Congress should increase algorithmic accountability by passing legislation like the Algorithmic Accountability Act, which would require companies to be clearer about how they use artificial intelligence and machine learning tools to shape online content.
To address the broader harms that flow from algorithms powered by data-extractive business models, U.S. policymakers should take other foundational legislative and policy actions, including passing a comprehensive federal privacy law and implementing stronger pro-competitive measures.
Protecting the public interest and free speech online and making the internet a safer place are not mutually exclusive goals, so long as we recognize that the problem is not rooted in the existence of Section 230. Government and industry can take significant, important action to address foundational concerns, fight online harms, and prioritize internet openness without weakening or eliminating this crucial law.