Introduction
Since 2016, online platforms have been at the heart of discussions around misinformation and disinformation. Over the past two years, many platforms significantly expanded their efforts to identify and combat misleading information, particularly related to the COVID-19 pandemic and the 2020 U.S. presidential election.1
Before the 2020 presidential election, the Open Technology Institute (OTI) published a report outlining how 11 internet platforms were addressing the spread of election-related misinformation and disinformation on their services.2 The report issued recommendations on how these platforms can improve their efforts to share authoritative information and empower users to make informed decisions, moderate and curate misleading information, tackle misleading advertising, and provide meaningful transparency and accountability around these efforts. It also proposed actions policymakers could take to support these efforts.
But while internet platforms ramped up attempts to combat election misinformation and disinformation in 2020, many have since pulled back on these efforts, noting that they were temporary measures.3 In the meantime, conspiracy theories and misleading information about the 2020 election continue to circulate and contribute to ongoing voter suppression, which particularly impacts communities of color. Ongoing misinformation and disinformation also contributed to the January 6 insurrection at the U.S. Capitol.4 As the 2022 U.S. midterm elections draw near, it is important to examine what steps internet platforms are taking to curtail the spread of election-related misinformation. National security agencies and independent experts have already warned that foreign and domestic actors are likely to continue disseminating misinformation and disinformation ahead of the midterms.5 Social media platforms, in particular, can facilitate the spread of posts and advertisements with false information about candidates, election results, the overall voting process, and more. The continued spread of this content threatens to suppress voting, undermine public trust in elections, and erode the health of our democracy.
This scorecard evaluates how major internet platforms are combating election-related and voter suppression-related misinformation and disinformation ahead of the midterm elections. Using this data, we demonstrate which platforms have made the most progress toward tackling misleading election information, which platforms are falling behind, and where companies need to invest more resources. The scorecard measures platforms against a selection of recommendations included in our 2020 report, which we consider baseline policies and practices that companies should implement. These recommendations are broken into four categories:
Sharing Authoritative Information and Promoting Informed User Decision-Making
- Partner with reputable fact-checking organizations and entities to promote, verify, or refute information circulated through organic content and search results.
- Partner with reputable government or civil society entities to promote, verify, or refute information circulated through organic content and search results.
- Notify users who are or have been engaging with misleading election-related content and/or direct them to authoritative sources of information.
- Conduct regular impact assessments of algorithmic curation tools so they do not direct users to or surface misleading content when they search for election-related topics.
Moderating and Curating Misleading Information
- Create a comprehensive set of content policies to address the spread of election-related misinformation and disinformation.
- Institute a dedicated reporting feature that enables users to flag misinformation and disinformation to the company.
- Remove, reduce the spread of, or label content that has been fact-checked and/or deemed to contain election-related misinformation.
Tackling Misleading Advertising
- Create and implement comprehensive policies for the content and targeting of ads that prohibit election-related misinformation and disinformation in advertisements.
- Establish a comprehensive review process for election-related ads and ad targeting categories that includes fact-checking.
- Create policies that prevent users and entities from being able to monetize and advertise on the platform if they repeatedly spread misinformation and disinformation.
Providing Meaningful Transparency and Accountability
- Publish data related to the moderation, curation, and labeling of election-related misinformation and disinformation in their regular transparency reports.
- Create a publicly available online database of all ads in categories related to elections and social and political issues that a company has run on its platform.
- Publish data on the company’s ad enforcement efforts.
The scorecard includes data on ten of the internet platforms discussed in the 2020 report: Facebook/Instagram, Google, Pinterest, Reddit, Snap, TikTok, Twitter, WhatsApp, and YouTube. While the original report included Amazon, we chose to omit the company from the scorecard due to its unique features as an e-commerce platform, which made comparative analysis challenging. For the purposes of this scorecard, the Facebook and Instagram platforms are grouped together, as parent company Meta typically applies similar content and advertising policies to both services. We evaluate WhatsApp separately as it offers different services and therefore has different policies. We distinguish between Google and YouTube because at times the services have differing policies and practices. Reddit employs a decentralized approach to content moderation, allowing individual users to serve as moderators of subreddits. For the purposes of this scorecard, we focus on policies and practices that Reddit applies across the platform, rather than individual policies and practices deployed by user moderators. The data in the charts is based on publicly available information about platforms’ misinformation and disinformation efforts as of June 2022. We focused our research and analysis on each company’s primary platform. For example, we focused on Google’s search product and Snap Inc.’s Snapchat product.
Editorial disclosure: This report discusses policies by Google (including YouTube), Facebook (including WhatsApp and Instagram), and Twitter, all of which are funders of work at New America but did not contribute funds directly to the research or writing of this report. New America is guided by the principles of full transparency, independence, and accessibility in all its activities and partnerships. New America does not engage in research or educational activities directed or influenced in any way by financial supporters. View our full list of donors at www.newamerica.org/our-funding.
Citations
- Spandana Singh and Koustubh “K.J.” Bagchi, How Internet Platforms Are Combating Disinformation and Misinformation in the Age of COVID-19 (Washington, DC: New America, 2020), source. Spandana Singh and Margerite Blase, Protecting the Vote (Washington, DC: New America, 2020), source.
- Singh and Blase, Protecting the Vote, source.
- Sheera Frenkel and Cecilia Kang, “As Midterms Loom, Elections Are No Longer Top Priority for Meta C.E.O.,” New York Times, June 23, 2022, source.
- Young Mie Kim, “Voter Suppression Has Gone Digital,” Brennan Center for Justice, November 20, 2018, source. Mark Scott and Rebecca Kern, “The Online World Still Can’t Quit the ‘Big Lie’,” Politico, January 6, 2022, source.
- Edward-Isaac Dovere, “US Is Worried about Russia Using New Efforts to Exploit Divisions in 2022 Midterms,” CNN, June 19, 2022, source. Committee on House Administration, A Growing Threat: How Disinformation Damages American Democracy, 117th Cong., 2nd sess., June 22, 2022 (testimony of Yosef Getachew), source.