Introduction

Following the 2016 U.S. presidential election, internet platforms have come under increased scrutiny for how they handle the spread of misinformation and disinformation on their services. Since that election, numerous researchers have found evidence that social media platforms served as hotbeds for the spread of election-related misleading content, including content designed to suppress voting. These efforts fanned existing societal tensions around race and socioeconomic status, and they disproportionately impacted communities of color and other marginalized groups. For example, during the 2016 elections, Russian operatives fraudulently posed as Black Americans to actively dissuade the Black community from voting.1

Social media platforms can be, and have been, used to spread false information and suppress voting in a number of ways. These include posts and advertisements that spread inaccurate information about dates, locations, and voting procedures, as well as content that threatens or intimidates particular communities into not voting. Such content can undermine trust in the electoral process and discourage voters from participating at all.

Since 2016, internet platforms have instituted a range of policies and practices that seek to identify and curb the spread of election-related misinformation and disinformation. However, experts and users have little confidence in the efficacy of these measures.2 According to a 2018 national survey conducted by the Brookings Institution, 57 percent of those surveyed felt that they had seen fake news or misleading information during the 2018 U.S. midterm elections, and 19 percent believed that this information had influenced how they planned to vote.3 In addition, a January 2020 Pew Research Center study found that just 25 percent of U.S. adults felt confident that tech companies would be able to prevent the misuse of their platforms during the upcoming elections. This was a decrease from 33 percent prior to the 2018 midterm elections.4

As the 2020 U.S. presidential election draws near, experts are concerned that both foreign and domestic actors will use social media platforms to suppress votes and spread misleading information.5 In particular, many experts fear that because individuals are relying more on digital resources to learn about voting procedures and policies, they will be especially susceptible to misinformation and disinformation.6 Watchdog organizations have also expressed concerns that these platforms will be used to suppress voting by exploiting users’ fears around COVID-19 to encourage them to avoid polling places, which could particularly affect participation among older voters.7 Further, there are concerns that entities seeking to suppress voting could exploit the ongoing racial justice protests in the United States by pushing out messaging that voters from certain communities should protest racial injustice by not participating in the electoral process.8

Thus far, internet platforms have responded to concerns about election- and voting-related misinformation and disinformation in a number of ways. Major tech companies, including Facebook, Google, Pinterest, Reddit, and Twitter, have announced that they plan to meet regularly with each other and with government agencies to discuss ongoing trends and ways to protect information around the 2020 election.9 Many platforms have created new policies or expanded existing ones to cover these categories of content, as well as related forms of content that could impact elections, such as hate speech, fake accounts, and inauthentic behavior. In addition, many internet platforms have begun examining the role political advertising can play in fostering a false information ecosystem on their services. Some companies, such as Amazon and Twitter, have banned political advertising altogether. Others, such as Google and Snapchat, have instead introduced guidelines for political ads.
However, there is still a significant lack of transparency and accountability around how these platforms create and implement these policies, sparking concerns that the policies are applied inconsistently and are ineffective. This opacity has also raised concerns that platforms may be prioritizing profit over safeguarding user rights and the electoral process.10

Internet platforms, used by millions of people in the United States every day, have assumed a central role as gatekeepers of speech in society. Given that there are no clear laws addressing the spread of election-related misinformation and disinformation online, these platforms are also the de facto “legislative, judicial, and executive branches” when it comes to preventing online voter suppression.11 As a result, these platforms, and the policies and practices they deploy, can strongly influence the strength and nature of democracy and discourse, both in the United States and around the world.

This report will provide an overview of how various internet platforms are addressing the rapid spread of election-related misinformation and disinformation, and particularly content that promotes voter suppression. The report concludes by offering recommendations on how platforms can improve the efficacy of their efforts and provide greater transparency for their users and the public. The report also includes recommendations on how U.S. policymakers can encourage further accountability and support efforts to combat the spread of misinformation and disinformation around voting.

Editorial disclosure: This report discusses policies by Facebook (including Instagram and WhatsApp) and Google (including YouTube), both of which are funders of work at New America, but neither of which contributed funds directly to the research or writing of this report. New America is guided by the principles of full transparency, independence, and accessibility in all its activities and partnerships. New America does not engage in research or educational activities directed or influenced in any way by financial supporters. View our full list of donors at www.newamerica.org/our-funding.

Citations
  1. Select Committee on Intelligence, U.S. Senate, (U) Report of the Select Committee on Intelligence United States Senate on Russian Active Measures Campaigns and Interference in the 2016 U.S. Election, Volume 2: Russia's Use of Social Media With Additional Views, 116th Cong., 1st Sess. (Oct. 8, 2019). source.
  2. Cat Zakrzewski and Tonya Riley, "The Technology 202: Social networks haven't done enough to prevent voter manipulation, tech leaders say," The Washington Post, March 10, 2020, source.
  3. Darrell M. West, "Brookings Survey Finds 57 Percent Say They Have Seen Fake News During 2018 Elections and 19 Percent Believe It Has Influenced Their Vote," Brookings Institution, last modified October 23, 2018, source.
  4. Ted Van Green, "Few Americans Are Confident In Tech Companies To Prevent Misuse Of Their Platforms in the 2020 Election," Pew Research Center, last modified September 9, 2020, source.
  5. Jessica Guynn, "Facebook civil rights audit warns of Trump, voter suppression ahead of presidential election," USA Today, July 8, 2020, source.
  6. Miles Parks, "Social Media Usage Is At An All-Time High. That Could Mean A Nightmare For Democracy," NPR, May 27, 2020, source.
  7. Carrie Levine, "Online Misinformation During the Primaries: A Preview Of What's To Come?," The Center for Public Integrity, last modified March 11, 2020, source.
  8. Naomi Nix and Kurt Wagner, "Social Media Braces for a Deluge of Voter Misinformation," Bloomberg Businessweek, July 24, 2020, source.
  9. Mike Isaac and Kate Conger, "Google, Facebook and Others Broaden Group to Secure U.S. Election," The New York Times, August 12, 2020, source.
  10. Nathalie Maréchal and Ellery Roberts Biddle, It's Not Just the Content, It's the Business Model: Democracy's Online Speech Challenge, March 17, 2020, source.
  11. Nix and Wagner, "Social Media Braces for a Deluge of Voter Misinformation."
