Olive Garden is funding President Trump’s re-election campaign. A damaging asteroid is going to hit Earth the day before the election. A major disease outbreak has occurred during every election year since the mid-2000s. These are all examples of false or misleading stories that went viral over the past several months in the run-up to the 2020 U.S. presidential election.
The United States is no stranger to election misinformation and disinformation. Following the 2016 U.S. presidential election, numerous researchers found evidence that social media platforms served as hotbeds for the spread of election-related misleading content, including content designed to suppress voting. These efforts exploited existing societal tensions around race and socioeconomic status, and specifically targeted communities of color and other marginalized groups.
Since then, internet platforms have come under increased scrutiny over how they handle the spread of misinformation and disinformation on their services. Many companies have responded by creating new policies or expanding existing ones to cover these categories of content, as well as related forms of content that could impact elections, such as hate speech, fake accounts, and inauthentic behavior. In addition, many internet platforms have begun examining the role political advertising can play in fostering a false information ecosystem on their services. Despite all these changes and internal discussions, however, there is still a significant lack of transparency and accountability around how platforms create and implement these policies. This has fueled concerns that the policies are applied inconsistently, that they are ineffective, and that platforms may be prioritizing profit over safeguarding user rights and the electoral process.
These platforms, and the policies and practices they deploy, can strongly shape the strength and nature of democracy and discourse, both in the United States and around the world. If not properly addressed, election-related misinformation and disinformation can undermine trust in the electoral process and discourage voters from participating at all. During this year’s election, these concerns are heightened by the ongoing COVID-19 pandemic.
In our new report, New America’s Open Technology Institute (OTI) explores how ten internet platforms—Amazon, Facebook, Google, Pinterest, Reddit, Snapchat, TikTok, Twitter, WhatsApp, and YouTube—are addressing the spread of election-related misinformation and disinformation on their services. The report includes 34 recommendations for companies in four categories: 1) sharing and lifting up authoritative information and empowering informed user decision-making, 2) moderating and curating misleading information, 3) tackling misleading advertising, and 4) providing meaningful transparency and accountability around these efforts. The report also features six recommendations on how U.S. policymakers can encourage further accountability from these platforms, support efforts to combat the spread of election-related misinformation and disinformation, and help enforce anti-discrimination statutes in the digital space. Some of these recommendations are outlined below:
Recommendations for Internet Platforms
Sharing and Lifting Up Authoritative Information and Empowering Informed User Decision-Making
- Partner with reputable fact-checking organizations and entities, as well as local and state election bodies to verify or refute information circulated through organic content and advertisements.
- Notify users who have engaged with misleading election-related content and direct them to authoritative sources of information.
- Institute a public interest exception policy that permits companies to leave up content posted by world leaders, candidates for political office, and other government officials, even if the content has been fact-checked and found to contain misleading information. In instances where the company determines that content posted by officials could result in imminent harm, the public interest exception should not be applied and the content should be removed.
- Conduct regular impact assessments and audits of algorithmic curation tools (e.g. ranking and recommendation systems), and recalibrate them as necessary so they do not direct users to or surface misleading content when they search for election-related topics and do not algorithmically amplify such content in trending topics and recommendations.
- Label organic content and advertisements that have been produced by state-controlled media outlets to inform users of the content’s origins.
Moderating and Curating Misleading Information
- Create a comprehensive set of content policies to address the spread of election-related misinformation and disinformation with specific considerations for voter-suppressive content. Companies should house these policies in one location, provide public notice if their policies change, and include an archive of past policies.
- Institute a dedicated reporting feature which enables users to flag election-related misinformation and disinformation to the company.
- Remove, reduce the spread of, or label content that has been fact-checked and deemed to contain election-related misinformation.
Tackling Misleading Advertising
- Create and implement comprehensive policies for the content and targeting of ads that prohibit election-related misinformation and disinformation in advertisements. The policies should include specific considerations for voter-suppressive ad content and should clarify to what extent they interface with advertising policies related to hate speech, bots, deepfakes, etc.
- Establish a comprehensive review process for election-related ads and ad targeting categories. Companies should require all election-related ads to be fact-checked and reviewed by a human reviewer before they are permitted to run on a platform. Companies should publicly disclose high-level information on what this review process consists of and to what extent it relies on automated tools and human reviewers.
- Explain to users to what extent advertisements that are flagged for violating election-related ad policies are reviewed, moderated, and curated by human reviewers and by automated tools. Users should be notified of any significant updates to these processes.
- Create a comprehensive vetting process for advertisers which requires them to verify their identity and which country they are based in before running ads.
- Append “paid for” disclosures to all paid political, social, and issue ads and ensure labels are maintained even if ad campaigns end or if ads are organically shared online.
- Create policies that prevent users and entities from being able to monetize and advertise on the platform if they repeatedly spread misinformation and disinformation.
Providing Meaningful Transparency and Accountability
- Preserve data on election-related content and advertising removals. Vetted researchers should have access to this data so they can identify where these content and advertising moderation policies and practices fell short and make recommendations on how they can be improved.
- Publish data related to the moderation, curation, and labeling of election-related misinformation and disinformation in their regular transparency reports.
- Create a publicly available online database of all ads in categories related to elections and social and political issues that a company has run on its platform.
- Publish data on the company’s election-related ad content and targeting policy enforcement efforts.
Recommendations for Policymakers
- Policymakers should enact rules to require greater transparency from online platforms, including regular reporting regarding their content moderation, curation, labeling, and ad targeting and delivery efforts.
- Government agencies and representatives should ensure that when they post online they are only disseminating verified information related to the elections and are not spreading unproven or debunked information.
- Authoritative election bodies such as the Federal Election Commission (FEC), state election boards, and other state and local authorities should partner with internet platforms to provide and promote verified and legitimate information related to the election on their platforms. These entities should also help debunk misleading claims and information using their own online accounts.
- Policymakers should clarify that the Voting Rights Act, which prohibits suppressing voting through intimidation, applies in the digital environment. Further, Congress should amend the Act or pass new legislation to prohibit suppression of voting through deception, which is the primary means of vote suppression online.¹
- Policymakers should fund vetted fact-checking organizations around the world to ensure that fact-checking efforts can adequately tackle the growing volume of election-related misinformation and disinformation.
- Policymakers should update campaign finance laws to address gaps and ensure that federal laws and regulations comprehensively cover digital political advertising.
¹ Gaurav Laroia and David Brody, "Privacy Rights Are Civil Rights. We Need to Protect Them.," Free Press, last modified March 14, 2019, https://www.freepress.net/our-response/expert-analysis/insights-opinions/privacy-rights-are-civil-rights-we-need-protect-them.