Recommendations

The 2016 U.S. presidential election illustrated the alarming levels of misinformation and disinformation that could spread and potentially influence an electorate. These misinformation and disinformation campaigns, many of which were designed to suppress voting on a large scale, particularly impacted communities of color. As the 2020 U.S. presidential election draws near, internet platforms can play an important role in promoting civic engagement. However, these platforms can also be manipulated and can cause serious harm to the electoral process. It is therefore critical that internet companies institute comprehensive policies and practices to respond to the continuous and rapid spread of election-related misinformation and disinformation, while distributing accurate voting and election information.

The section below includes short-term recommendations that internet platforms and policymakers should implement prior to the 2020 U.S. presidential election as well as long-term recommendations that should be used to help address future elections. The recommendations outline how companies can improve their efforts to connect users to, and lift up, authoritative information; address the spread of misleading information through content moderation and curation; tackle misleading advertisements; and provide meaningful transparency and accountability around these efforts. This section also includes recommendations for how U.S. policymakers can encourage greater accountability and integrity from internet platforms.

Recommendations for Internet Platforms

Sharing and Lifting Up Authoritative Information and Empowering Informed User Decision-Making

  • Partner with reputable fact-checking organizations and entities, as well as local and state election bodies, to verify or refute information circulated through organic content and advertisements.
  • Partner with reputable organizations to launch media literacy efforts that educate users on how to identify and evaluate misleading election-related content they may encounter online. These campaigns should also explain how users can report this content.
  • Fund and partner with vetted fact-checking organizations to ensure that fact-checking efforts can adequately tackle the growing volume of election-related misinformation and disinformation.
  • Educate users about potential attacks and scams related to elections that may appear on the platform and on methods to avoid becoming a victim of such efforts.
  • Notify users who have engaged with misleading election-related content and direct them to authoritative sources of information.
  • Institute a public interest exception policy that permits companies to leave up content posted by world leaders, candidates for political office, and other government officials, even if the content has been fact-checked and found to contain misleading information. In these cases, the company should label the content and provide additional context explaining that the content has been debunked but remains available because there is public interest value in making users aware that political and government officials posted it. These labels should also include links to authoritative information sources. This public interest exception should not apply in instances where the company determines that the content could result in imminent harm; in those cases, companies should remove the content as they would for any other user.
  • Conduct regular impact assessments and audits of algorithmic curation tools (e.g. ranking and recommendation systems), and recalibrate them as necessary so they do not direct users to or surface misleading content when they search for election-related topics and do not algorithmically amplify such content in trending topics and recommendations.
  • Label organic content and advertisements that have been produced by state-controlled media outlets to inform users of the content’s origins.
  • Educate users on how their personal data is being collected and to what extent this data is being used to curate the content and ads that users are seeing online. Companies should also provide users with controls which allow them to determine how their data is collected, shared, and used to shape their content and ad experiences, especially as it relates to political advertising and election-related content.
  • Provide vetted researchers with access to tools and datasets that could enable them to better evaluate company efforts to combat election-related misinformation and disinformation.

Moderating and Curating Misleading Information

  • Create a comprehensive set of content policies to address the spread of election-related misinformation and disinformation with specific considerations for voter-suppressive content. Guidelines should include examples of how these policies are enforced and what kinds of content the policies do not apply to. Companies should house these policies in one location, provide public notice if their policies change, and include an archive of past policies.
  • Clarify to what extent election-related policies interface with content policies related to hate speech, deepfakes, bots, coordinated inauthentic behavior, and similar issues. While manipulated media may be part of user expression on social media and therefore permissible in user-generated content, platforms should consider banning the use of such manipulation technologies in political advertising.
  • Institute a dedicated reporting feature which enables users to flag election-related misinformation and disinformation to the company.
  • Remove, reduce the spread of, or label content that has been fact-checked and deemed to contain election-related misinformation.
  • Label content that has been fact-checked and deemed to contain misinformation but does not qualify for removal. Labels should direct users viewing such content to authoritative sources of information. Companies should also provide adequate notice to users explaining what specific policies the user has violated and include information on how the user can appeal this decision.
  • Establish a Trusted Flaggers program which allows vetted and reputable civil rights organizations, civil society groups, and individuals to flag election-related misinformation and disinformation at scale and receive priority review for these flags. Companies should publicly disclose how this program works, how entities and individuals can apply, and other relevant information.
  • Collaborate with other internet platforms to share information on and strategies for addressing trending misinformation and disinformation campaigns, fraudulent accounts, coordinated inauthentic behavior, and debunked content. Any collaborations should be publicly disclosed, should respect users’ privacy, and should comply with antitrust laws.

Tackling Misleading Advertising

  • Create and implement comprehensive policies for the content and targeting of ads that prohibit election-related misinformation and disinformation in advertisements. The policies should include specific considerations for addressing voter-suppressive ad content and should clarify that advertisers must adhere to all applicable laws and regulations. Companies should include examples of how these ad policies are enforced and what kinds of content do not fall under these policies. If these policies change, companies should provide public notice of these changes and share an archive of past policies. Companies should also clarify to what extent these policies interface with ad content and targeting policies related to hate speech, bots, deepfakes, coordinated inauthentic behavior, etc.
  • Establish a comprehensive review process for election-related ads and ad targeting categories. Companies should require all election-related ads to be fact-checked and reviewed by a human reviewer before they are permitted to run on a platform. Companies should publicly disclose high-level information on what this review process consists of and to what extent it relies on automated tools and human reviewers.
  • Explain to users to what extent advertisements that are flagged for violating election-related ad policies are reviewed, moderated, and curated by human reviewers and by automated tools. Users should be notified of any significant updates to these processes.
  • Create a comprehensive vetting process that requires advertisers to verify their identity and the country in which they are based before running ads.
  • Provide adequate notice to advertisers who have had their ads removed, algorithmically curated (e.g. downranked), or labeled. This notice should explain what specific policies the advertiser violated and include information on how the advertiser can appeal this decision.
  • Give political advertisers the opportunity to appeal ad moderation decisions. This appeals process should be timely and enable advertisers to provide additional information on the case and have their case reviewed by a new reviewer or group of reviewers.
  • Append “paid for” disclosures to all paid political, social, and issue ads and ensure labels are maintained even if ad campaigns end or if ads are organically shared online.
  • Create policies that prevent users and entities from being able to monetize and advertise on the platform if they repeatedly spread misinformation and disinformation.

Providing Meaningful Transparency and Accountability

  • Explain to users how and to what extent content that is flagged for violating election-related misinformation and disinformation policies is reviewed, moderated, and curated by human reviewers and by automated tools. Users should be notified of any significant updates to these processes.
  • Provide adequate notice to users who have had their content removed, algorithmically curated (e.g. downranked), or labeled. This notice should explain what specific policies the user has violated and include information on how the user can appeal this decision.
  • Give users the opportunity to appeal moderation decisions. This appeals process should be timely and enable users to provide additional information on the case and have their case reviewed by a new reviewer or group of reviewers. Users who flag content and accounts should also have access to an appeals process.
  • Preserve data on election-related content and advertising removals. Vetted researchers should have access to this data so they can identify where these content and advertising moderation policies and practices fell short and make recommendations on how they can be improved.
  • Publish data related to the moderation, curation, and labeling of election-related misinformation and disinformation in their regular transparency reports. At a minimum, this data should include:
    • The number of accounts flagged, the number of accounts suspended, and the number of accounts removed for violating these policies
    • The number of pieces of content that were flagged, removed, downranked, and labeled as a result of policy violations
    • How much of the content and accounts that were removed, suspended, downranked, and labeled were identified proactively using automated tools and how much of the content and accounts were identified through human flags (e.g. from users, Trusted Flaggers, etc.)
    • A breakdown of content and accounts that were removed, suspended, downranked, or labeled by product (e.g. Facebook, Instagram, or WhatsApp)
    • A breakdown of content and accounts that were removed, suspended, downranked, or labeled by format (e.g. video, text, image)
    • A breakdown of content and accounts that were removed, suspended, downranked, or labeled by category of misinformation/disinformation (e.g. voter suppression, impersonation, etc.)
    • The number of appeals received for action taken against content and accounts in this category
    • The number of pieces of content restored and the number of accounts restored as a result of appeals in this category
    • The number of pieces of content restored and the number of accounts restored as a result of proactive recognition of errors by the company
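The minimum reporting fields listed above amount to a small, fixed data schema. As a purely illustrative sketch (the record and field names below are assumptions, not any platform's actual reporting format), they could be modeled like this:

```python
from dataclasses import dataclass, field

@dataclass
class ElectionMisinfoReport:
    """Hypothetical per-reporting-period summary of election-related
    misinformation/disinformation enforcement (illustrative only)."""
    accounts_flagged: int = 0
    accounts_suspended: int = 0
    accounts_removed: int = 0
    content_flagged: int = 0
    content_removed: int = 0
    content_downranked: int = 0
    content_labeled: int = 0
    # Breakdowns keyed by dimension value, e.g. {"automated": 120, "user_flag": 45}
    actions_by_detection_method: dict = field(default_factory=dict)
    actions_by_product: dict = field(default_factory=dict)   # e.g. {"Facebook": 90}
    actions_by_format: dict = field(default_factory=dict)    # e.g. {"video": 30}
    actions_by_category: dict = field(default_factory=dict)  # e.g. {"voter_suppression": 12}
    appeals_received: int = 0
    content_restored_on_appeal: int = 0
    accounts_restored_on_appeal: int = 0
    content_restored_proactively: int = 0
    accounts_restored_proactively: int = 0
```

Publishing against a fixed schema like this, period after period, is what makes enforcement figures comparable over time and across platforms.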
  • Create a publicly available online database of all ads in categories related to elections and social and political issues that a company has run on its platform. This database should include search functionality. In order to protect privacy, the information in this database should not permit the identification of specific users who received the ads. At a minimum, this database should disclose the following information about each of the ads:
    • The format of the ad (e.g. text, video, etc.)
    • The name of the advertiser
    • The state(s) in which the ad ran
    • The amount spent on the ad
    • The time period during which an ad was active
    • Granular engagement and interaction information such as how many users saw the ad and the number of likes, shares, and views an ad received
    • What targeting parameters the advertiser selected
    • What categories of users the ad was ultimately delivered to (i.e. which targeting parameters the ad delivery system selected and optimized for)
    • Whether the ad was delivered to a custom audience supplied by the advertiser or to an audience generated by an automated system
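Taken together, the disclosure fields above describe one record in the proposed ad database. The sketch below is a hypothetical illustration of such a record (all names and types are assumptions); note that none of the fields identify the individual users who received the ad:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AdArchiveEntry:
    """Hypothetical entry in a public, searchable political-ad database.
    Deliberately contains no data identifying individual ad recipients."""
    ad_format: str            # e.g. "text", "video"
    advertiser_name: str
    states: tuple             # state(s) in which the ad ran
    spend_usd: str            # exact figure or disclosed range, e.g. "$1,000-$5,000"
    active_from: str          # date the ad started running
    active_until: str         # date the ad stopped running
    impressions: int          # how many users saw the ad
    likes: int
    shares: int
    views: int
    targeting_selected: tuple # parameters the advertiser chose
    delivered_to: tuple       # categories the delivery system optimized for
    custom_audience: bool     # True if delivered to an advertiser-supplied user list
```

A search interface over such records (by advertiser name, state, or date range) is what turns the archive from a data dump into a usable accountability tool.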
  • Publish data on the company’s election-related ad content and targeting policy enforcement efforts. This should include:
    • The total number of ads and advertiser accounts removed for violating the platform’s election-related ad content and targeting policies
    • A breakdown of ads and advertiser accounts removed based on which policy they violated
    • A breakdown of ads and advertiser accounts removed based on the format of the ad (e.g. text, audio, image, etc.)
    • A breakdown of ads and advertiser accounts removed based on the country of the advertiser
    • A breakdown of ads and advertiser accounts removed based on the product or service on which the ad was run
    • The detection method used (e.g. user flag, automated tools, etc.). This data should not reveal the identity of individual flaggers
  • Provide periodic updates on content and advertising moderation, curation, and labeling efforts in the run-up to the 2020 U.S. presidential election.
  • Following major elections, publish an election-specific transparency report that summarizes the scope and scale of content and advertising moderation, curation, and labeling efforts surrounding the elections.

Recommendations for Policymakers

Although the U.S. government is limited in the extent to which it can direct how platforms decide what content to permit on their sites, policymakers can do more to encourage greater transparency and accountability from internet platforms around how they are addressing the rapid spread of election-related misinformation and disinformation on their services.

  • Policymakers should enact rules to require greater transparency from online platforms, including regular reporting regarding their content moderation, curation, labeling, and ad targeting and delivery efforts.
  • Government agencies and representatives should ensure that when they post online they are only disseminating verified information related to the elections and are not spreading unproven or debunked information.
  • Authoritative election authorities such as the Federal Election Commission (FEC), state election boards, and other state and local authorities should partner with internet platforms to provide and promote verified and legitimate information related to the election on their platforms. These entities should also help debunk misleading claims and information using their own online accounts.
  • Policymakers should clarify that the Voting Rights Act, which prohibits suppressing voting through intimidation, applies in the digital environment. Further, Congress should amend the Act or pass new legislation to prohibit suppression of voting through deception, which is the primary means of vote suppression online.
  • Policymakers should fund vetted fact-checking organizations around the world to ensure that fact-checking efforts can adequately tackle the growing volume of election-related misinformation and disinformation.
  • Policymakers should update campaign finance laws to address gaps and ensure that federal laws and regulations comprehensively cover digital political advertising.
