Table of Contents
- Executive Summary
- Introduction
- Targeted Advertising and COVID-19 Misinformation: A Toxic Combination
- Human Rights: Our Best Toolbox for Platform Accountability
- Making All Ads “Honest” Through Transparency, Limited Targeting, and Enforcement
- By Protecting Data, Federal Privacy Law Can Reduce Algorithmic Targeting and the Spread of Disinformation
- Good Content Governance Requires Good Corporate Governance
- Without Civil Society, Platform Accountability Is a Pipe Dream
- Key Recommendations for Policymakers
- Conclusion
Introduction
As the pandemic death toll continued to rise in the spring of 2020, myriad myths and conspiracy theories about COVID-19, including some dangerous falsehoods, circulated on Twitter and other social media: mobile 5G technology helps spread the virus; zinc, colloidal silver, miracle mineral solution, and garlic can cure the virus; hydroxychloroquine has a 100 percent success rate in treating the virus; ingesting bleach may help; and vaccines are being developed with microchip tracking technology funded by Bill Gates.1
Misinformation can be both polarizing and deadly. It is disturbing that social media companies failed to remove posts promoting the falsehoods described above, despite their commitments to remove COVID-19 misinformation. With so little known about the novel coronavirus and how to stop its spread, conspiracy theorists, hucksters, political opportunists, and would-be authoritarians worldwide have adapted tactics honed in recent political campaigns to further exploit social media's algorithmic toolset, using platforms' targeted advertising systems to amplify unfounded theories about virus transmission and remedies.
Policymakers have proposed holding companies liable for speech rather than for their targeted advertising business models, the real source of the problem.
Social media platforms are part of a broader information ecosystem that also includes news organizations and other influential actors, including political leaders and celebrities. The growing impact of social media platforms on our information environment is fueled by advertising in general, and targeted advertising in particular. Fifty-five percent of American adults say they get their news from social media.2 That includes information about political candidates and policy issues from ads on Facebook and Google. In 2018, those two companies combined received almost 60 percent of all digital advertising dollars in the United States.3 They are projected to earn 77 percent (roughly $1 billion) of the more than $1.34 billion expected to be spent on U.S. digital political advertising in the 2019–2020 election cycle.4 Before deciding to ban political advertising last fall, Twitter earned $3 million from political ads during the 2018 midterm elections.5 Social media platforms draw attention and advertising dollars away from news outlets whose journalism provides the critical facts without which citizens cannot make informed choices about how to live their lives, or how to vote.
But digital platforms’ increasing share of news and advertising is only the beginning of the story. Since the 2016 U.S. presidential election, companies like Facebook, Google, and Twitter have struggled to curtail the circulation of both organic and paid false information on their platforms. The content moderation and takedown systems that social media companies have put in place to enforce their content rules are failing society as well as individual users: harmful misinformation continues to flow, while content moderation rules are enforced in inconsistent and often arbitrary ways, resulting in the unintended deletion of content produced by activists and journalists and thereby restricting freedom of expression.6 Company systems intended to block coronavirus misinformation from ad platforms have proven ineffective.
In parallel, policymakers have begun to address the problem of dangerous online content, particularly by formulating proposals to change the extent to which companies are legally liable for user-generated speech on their platforms.7 But such proposals do not address what makes these platforms different, and so potentially dangerous: their targeted advertising business models and the algorithmic systems that drive them.
Targeted advertising business models require extensive data collection and algorithmic content shaping in order to maximize targeted advertising revenue. Because these systems are designed to prioritize sensational, eye-catching, and controversial content, they disproportionately amplify organic and paid speech that can corrupt the quality of information needed to sustain not just healthy democracies and fair elections but also, as we have seen in the COVID-19 pandemic, public health. At the same time, targeted advertising enables paying customers to aim different types of content, including ads, at specific audiences based on people's demographics and declared interests, as well as on algorithmically inferred assumptions about other affinities and traits that may not even be correct.
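To make that last point concrete, the minimal Python sketch below models how an ad buyer's audience specification might be matched against user profiles assembled from declared interests and inferred traits. Everything here, including the `UserProfile` and `AudienceSpec` classes, the field names, and the thresholds, is hypothetical and does not describe any real platform's API; the sketch only illustrates the structural problem the paragraph describes: an inferred trait needs to clear a model's confidence threshold, not be true, for a person to be targeted.

```python
# Hypothetical sketch of audience matching in a targeted-ad system.
# No real platform's API or data model is being described.

from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: str
    declared_interests: set[str]        # interests the user stated directly
    inferred_traits: dict[str, float]   # trait -> model confidence (may be wrong)

@dataclass
class AudienceSpec:
    required_interests: set[str]        # buyer-chosen interest segments
    required_traits: dict[str, float]   # trait -> minimum model confidence

def matches(profile: UserProfile, spec: AudienceSpec) -> bool:
    """Return True if this profile falls inside the buyer's target audience."""
    if not spec.required_interests <= profile.declared_interests:
        return False
    return all(
        profile.inferred_traits.get(trait, 0.0) >= threshold
        for trait, threshold in spec.required_traits.items()
    )

# The buyer never needs the inference to be true, only confident enough.
user = UserProfile("u1", {"parenting"}, {"vaccine_hesitant": 0.72})
spec = AudienceSpec({"parenting"}, {"vaccine_hesitant": 0.60})
print(matches(user, spec))  # True, even if the inferred trait is wrong
```

The design choice worth noticing is that nothing in the matching logic distinguishes a benign ad from a misinformation campaign: any buyer who can pay gets the same inference-driven reach.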
Content moderation is necessary, but far from sufficient.
Against the backdrop of both a global pandemic and a U.S. presidential election campaign, it is now incontrovertible that strengthening platform accountability, and thus the integrity and resilience of our information ecosystem, is critical to the future of democracy. This report, which offers concrete U.S. policy recommendations, is the second in a two-part series examining how targeted advertising business models can drive the spread of misinformation8 and dangerous speech,9 and what U.S. lawmakers and regulators can do to hold companies accountable for these systems without infringing on human rights. We make the case for why and how policymakers and advocates should prioritize holding platforms accountable for the mechanisms, policies, and practices that enable the amplification and targeting of user-generated misinformation and dangerous speech—without which the speech would have less reach and, thus, fewer negative effects—instead of holding the companies liable for the speech itself.
Some policymakers are focused on breaking up these huge companies as the key to limiting the social harms they can cause or contribute to. Antitrust is not a focus of this report, though encouraging competition, ensuring interoperability, and potentially breaking up monopolies are important parts of the policy toolkit that have the potential to alleviate some of the problems exacerbated by the major platforms' enormous scale. But these regulatory interventions alone will not be sufficient. Checking the power of Big Tech will require a two-pronged approach outside the context of antitrust and competition law, one that aims to mitigate the harms posed by the underlying ad- and attention-driven business model that drives platform revenue. That approach involves: 1) a comprehensive and enforceable federal data privacy regulation, and 2) holding companies directly accountable by shining sunlight on the harms created by the business model and promoting policies that increase the fairness, accountability, and overall transparency of those practices.
Having outlined in part one the challenges that targeted advertising and algorithmic systems present to both human rights and civil liberties, this second report focuses on what social media companies and, in particular, policymakers can do to address them. We borrow a metaphor from the oil industry to clarify our contention: We cannot clean up downstream pollutants like misinformation or dangerous speech without tackling the upstream processes—targeted advertising and algorithmic systems—that make this speech so damaging to our information environment in the first place. Focusing on the downstream effects of the infodemic,10 as has been the approach thus far, does nothing to address its upstream structural causes.
We cannot clean up downstream pollutants like misinformation or dangerous speech without tackling upstream processes like targeted advertising and algorithmic systems.
In the first report, It’s Not Just the Content, It’s the Business Model: Democracy’s Online Speech Challenge, we examined two overarching types of algorithms: 1) content-shaping algorithms that determine what individual users see when they use a company’s online services (including those that target ads) and 2) content moderation algorithms that help human reviewers identify (and sometimes remove) content that violates the company’s rules.11 We then gave examples of how these technologies are used both to propagate and to prohibit different forms of online speech (including targeted ads), and showed how they can cause or catalyze social harm, particularly in the context of the 2020 U.S. election. We described how users are profiled, segmented, and targeted in ways that allow advertisers to continually reinforce a specific message to a particular type of audience. We illustrated how, when this capability is combined with mis- or disinformation deployed for political gain, such as incorrect voting information or outright lies about a candidate, the results can be disastrous for democracy.
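The first of those two algorithm types is the one most directly tied to the business model. The toy Python ranker below is a minimal sketch of a content-shaping algorithm that orders posts by predicted engagement; the function names, weights, and probabilities are all invented for illustration, and real systems are vastly more complex, but the optimization target (expected engagement, and hence ad revenue) is the mechanism at issue.

```python
# Hypothetical sketch of an engagement-optimizing content-shaping ranker.
# Weights and predicted probabilities are illustrative, not from any platform.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    p_click: float      # model-predicted probability the user clicks
    p_share: float      # model-predicted probability the user shares
    p_comment: float    # model-predicted probability the user comments

def engagement_score(post: Post) -> float:
    # Weights chosen for illustration; because sensational or controversial
    # posts tend to earn more shares and comments, they rank higher.
    return 1.0 * post.p_click + 3.0 * post.p_share + 2.0 * post.p_comment

def rank_feed(candidates: list[Post]) -> list[Post]:
    """Order a user's candidate posts by expected engagement, descending."""
    return sorted(candidates, key=engagement_score, reverse=True)

posts = [
    Post("measured-news", p_click=0.10, p_share=0.01, p_comment=0.02),
    Post("outrage-bait", p_click=0.15, p_share=0.08, p_comment=0.12),
]
for p in rank_feed(posts):
    print(p.post_id, round(engagement_score(p), 2))
# outrage-bait scores 0.63 vs. measured-news at 0.17: the sensational
# post wins the feed slot without any judgment about its accuracy.
```

Note that nothing in this objective penalizes falsehood; a moderation system bolted on afterward is working against the ranker's own incentives, which is why we argue moderation alone cannot fix the problem.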
We also highlighted what we don’t know about these systems. We called on companies to be much more transparent about how their content-shaping algorithms and content moderation systems work, and to give users more control over how content is being prioritized and promoted to them, or targeted at them. We explained why a regulatory focus on holding companies liable for content shared by users, on its own, will not succeed in stemming the spread of problematic content, and will likely result in the violation of users’ free expression rights. We agreed that content moderation is necessary, but far from sufficient, and we asserted that the first step in addressing the problem is to require much greater transparency and accountability around a business model that relies on algorithmic curation and exploitation of user data.
Identifying and removing misinformation and disinformation, and otherwise working to mitigate their impact by flagging them as false, are essential short-term measures. But as we pointed out in the previous report, it is important not to force companies to censor higher volumes of content across a broad range of topics, languages, and cultural contexts when their moderation systems lack the accuracy, consistency, and nuance to avoid violating users’ right to freedom of expression and information. Infodemics will keep plaguing us—and may get worse—unless Congress acts and also empowers other stakeholders, including institutional investors, to hold companies accountable.
Building on five years of research for the Ranking Digital Rights Corporate Accountability Index (RDR Index), which evaluates how transparent companies are about the policies and practices that affect online freedom of expression and privacy, this report reinforces the case for adopting a human rights framework for platform accountability and proposes two concrete areas where the U.S. Congress needs to act to mitigate the harms of misinformation and other dangerous speech without compromising free expression: federal privacy law and corporate governance reform.
In August 2019, the Business Roundtable published a statement signed by 181 CEOs of major U.S. corporations, announcing their commitment to the idea that the purpose of business is no longer only to serve shareholders, but also to “create value for all our stakeholders,” including employees, customers, and communities.12 It is no longer debatable whether businesses in any sector should be held accountable for their social impact.
The proliferation of misinformation during the COVID-19 pandemic has shown just how high the human cost—and ultimately the economic cost—can be when companies prioritize shareholder returns over all else, and when the government fails to hold companies accountable to the public interest.13 Society is now paying the price for failing to require that companies make credible efforts to understand and track their social impact, and to take responsibility for preventing and mitigating social harms that their business may cause or contribute to. It is time to adjust course and design a resilient and equitable information environment—through increased transparency; responsive, evidence-based regulation; and persistent stakeholder engagement—that protects human rights and civil liberties especially in times of crisis and change.
Citations
- NewsGuard. 2020. “Tracking Twitter’s COVID-19 Misinformation ‘Super-Spreaders.’” source (May 15, 2020).
- Khalid, Amrita. 2019. “Americans Can’t Stop Relying on Social Media for Their News.” Quartz. source (May 15, 2020).
- Sterling, Greg. 2019. “Almost 70% of Digital Ad Spending Going to Google, Facebook, Amazon, Says Analyst Firm.” Marketing Land. source (May 17, 2020).
- Gibson, Kate. 2020. “Spending on U.S. Digital Political Ads to Top $1 Billion for First Time.” CBS News. source (May 15, 2020).
- Conger, Kate. 2019. “Twitter Will Ban All Political Ads, C.E.O. Jack Dorsey Says.” The New York Times. source (May 15, 2020).
- Gary, Jeff, and Ashkan Soltani. 2019. “First Things First: Online Advertising Practices and Their Effects on Platform Speech.” Knight First Amendment Institute. source (February 13, 2020).
- Maréchal, Nathalie, and Ellery Roberts Biddle. 2020. It’s Not Just the Content, It’s the Business Model: Democracy’s Online Speech Challenge – A Report from Ranking Digital Rights. Washington, D.C.: New America. source (May 7, 2020).
- Wardle, Claire, and Hossein Derakhshan. 2017. Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making. Strasbourg: Council of Europe. source
- Dangerous Speech Project. 2016. “What Is Dangerous Speech?” Dangerous Speech Project. source (May 15, 2020).
- The World Health Organization (WHO) defines an infodemic as “an over-abundance of information—some accurate and some not—that makes it hard for people to find trustworthy sources and reliable guidance when they need it.” See World Health Organization. 2020. Novel Coronavirus (2019-nCoV) Situation Report – 13. Geneva: World Health Organization. source
- Maréchal, Nathalie, and Ellery Roberts Biddle. 2020. It’s Not Just the Content, It’s the Business Model: Democracy’s Online Speech Challenge – A Report from Ranking Digital Rights. Washington, D.C.: New America. source (May 7, 2020).
- Business Roundtable. 2019. “Business Roundtable Redefines the Purpose of a Corporation to Promote ‘An Economy That Serves All Americans.’” source (May 15, 2020).
- Goodman, Peter S. 2020. “Big Business Pledged Gentler Capitalism. It’s Not Happening in a Pandemic.” The New York Times. source (May 15, 2020).