Conclusion

With a global pandemic unfolding in the run-up to a U.S. presidential election, it is now undeniable that strengthening platform accountability is critical to the future of democracy.

Despite intensifying efforts to remove potentially deadly misinformation and other dangerous speech, social media companies are still failing to effectively moderate content—paid and user-generated alike—in ways that are consistent with their human rights obligations and the protection of civil liberties. At the same time, policymakers have proposed holding these companies liable for their users' speech rather than for their targeted advertising business models, the fundamental source of the problem.

The threats have been identified. Now the question is: How are social media platforms preparing to counter them?

In the first part of this two-part series, we warned against recent proposals to revoke or dramatically revise Section 230 of the 1996 Communications Decency Act (CDA), which protects companies from liability for content posted by users on their platforms. Such a step would be counterproductive. Policymakers should instead focus on reining in social media companies’ targeted advertising business model and the algorithmic systems that drive it. These upstream processes, which collect and share vast amounts of user data in order to target users without their clear knowledge and consent, are the driving force behind the spread and discriminatory targeting of downstream disinformation, hate speech, and other content that can endanger both public health and democratic discourse.

At the end of the first report, we made specific recommendations for corporate transparency about social media platforms’ advertising content rules and ad-targeting systems, as well as their rules and processes for governing user-generated content. We called for greater transparency about what happens under the hood, to enable an informed public debate about whether to regulate the algorithms themselves, and if so, how.[1] If the companies will not make such disclosures voluntarily, Congress should mandate them.

In this second report, we have described how companies have failed to stop the downstream torrent of COVID-19 misinformation despite throwing extra resources at the problem and strengthening cooperation with fact-checkers and independent security researchers. Infodemics will continue to plague society—and may get worse—unless and until companies change the upstream systems that play a major role in driving them. Had companies made these changes voluntarily, we would have seen stricter and more responsible data use policies and practices, greater transparency, more rigorous due diligence, and much stronger corporate oversight.

Strong privacy law is urgently needed to curb the negative social impact of targeted advertising business models. Regulation of political advertising must be upgraded for the era of online campaigning. Social media companies should be required to show evidence that they conduct tangible and credible due diligence around their social impacts and risks, and they must disclose the key categories of information that external stakeholders need in order to evaluate how they are identifying and addressing those risks. Corporate governance and oversight requirements must be strengthened. Shareholders must be empowered to hold companies accountable for their social impact, and in turn they need to be sufficiently informed and engaged to understand the risks.

Time is running out for Congress to act before social media’s next big test: the November 2020 general election. In addition to harming democratic discourse, misinformation can also undermine the democratic process itself. As states shift to the widespread use of absentee ballots to protect voters during COVID-19, experts warn of disinformation about how and when to vote.[2] They also warn that reports of any mistakes or incompetence by local election officials will be taken out of context, conflated with misinformation, and used in organized attempts by bad actors to destroy the legitimacy of the presidential election result, potentially throwing the country into political uncertainty and conflict.[3]

Commitments to prioritize the removal of election-related disinformation and misinformation, similar to the commitments the companies have made to address the COVID-19 infodemic, would certainly be a start. But such steps will be insufficient—for all the reasons this report series has documented—unless more fundamental changes are made to the platforms’ algorithmic systems and targeted advertising mechanisms.

What if Facebook, Google, and Twitter committed to limiting targeted advertising to the same broad categories—such as geographic area—to which print and broadcast advertisers have access?

What if they stopped allowing any advertisers to target individuals until after the November election?

What if they all agreed to stop all targeted advertising for three months prior to the election, offering contextual advertising only?

Such measures would dramatically reduce the flow and impact of election-related disinformation and misinformation on social media.

If social media companies will not commit to a full moratorium on targeted advertising for a few months, they should nonetheless commit to strong action that prioritizes the health of democracy over their 2020 financial returns. Facebook, Google, and Twitter should commission rapid impact assessments, in collaboration with independent experts on elections and social media, to identify the greatest threats that disinformation and misinformation pose to the 2020 presidential election. The assessments should then recommend concrete changes to the companies’ targeted advertising, data collection and use policies, and algorithmic systems that can be made in time to mitigate those threats.[4] The results and recommendations should be made public, with enough time left to implement remedies.

The companies should then announce a plan for how they will change specific policies, advertising features, mechanisms, and privacy defaults in order to minimize the amplification, targeting, and overall impact of disinformation, misinformation, and other dangerous content in these final critical months leading up to the election.

While that may not solve the problem for the long term, it would be an important start to a broader national discussion about how social media platforms can best protect users’ rights and how our elected representatives and regulators can set effective rules for these companies to help safeguard our democracy.

Citations
  1. See the conclusion of Maréchal, Nathalie, and Ellery Roberts Biddle. 2020. It’s Not Just the Content, It’s the Business Model: Democracy’s Online Speech Challenge – A Report from Ranking Digital Rights. Washington, D.C.: New America.
  2. Feldman, Max. 2020. “Dirty Tricks: Eight Falsehoods That Could Undermine the 2020 Election.” Brennan Center for Justice (accessed May 16, 2020).
  3. Hasen, Richard L. 2020. “What Happens in November If One Side Doesn’t Accept the Election Results?” Slate (accessed May 16, 2020).
  4. For an example of a rapid due diligence tool, see Allison-Hope, Dunstan, and Jenny Vaughan. 2020. “COVID-19: A Rapid Human Rights Due Diligence Tool for Companies.” BSR (accessed May 16, 2020).
