Executive Summary
The 2016 U.S. presidential election demonstrated how internet platforms can be used to spread false and misleading information and to suppress voting. This can occur through posts and advertisements that spread inaccurate information about dates, locations, and voting procedures, as well as through content that threatens or intimidates communities, particularly communities of color, into not voting. Our research examines what several popular internet platforms—Facebook and Instagram, Google, Pinterest, Reddit, Snapchat, TikTok, Twitter, WhatsApp, and YouTube—are doing to combat false and misleading election information, including potential voter suppression content, around the 2020 U.S. presidential election and beyond. It also lays out recommendations for how platforms and policymakers can better protect the public from such content.
These platforms, and the policies and practices they deploy, can significantly shape the health and nature of democracy and discourse, both in the United States and around the world. As the 2020 U.S. presidential election approaches, and as the ongoing COVID-19 pandemic shapes how and where people vote, strong practices to combat election misinformation and disinformation, including voter suppression material, are critical.
Key Findings
- Many internet platforms are establishing online hubs to house information related to voter registration, voting processes, and more ahead of the 2020 U.S. election. However, not all of these hubs are easily accessible to users.
- A number of platforms we researched do not house content and advertising policies in one central location and do not have comprehensive policies that outline their approach. This makes it difficult to understand the parameters of these policies, and puts the onus on users and researchers to dig through numerous web pages and documents to figure out how and when these policies apply.
- Platforms are addressing misleading information in political ads in myriad ways, from banning political ads entirely to restricting political ad targeting and delivery. However, there is no consensus on which approach is optimal, as each has flaws and limitations.
- There is a discrepancy between the election-related policies that companies are creating and how those policies are being enforced. Companies fail to provide adequate transparency and accountability around the scope and scale of these enforcement and moderation efforts, and where they fall short. This raises serious concerns about whether these policies and practices are effectively addressing the spread of election-related misinformation and disinformation online.
- Companies are relying on middle-ground moderation and curation efforts, such as labels and downranking, to handle election-related content. However, these policies and practices are applied inconsistently, and there is little transparency and accountability around the policies that guide their use.
- There is a fundamental lack of transparency and accountability around what platforms are doing to handle election misinformation and disinformation online, and whether those efforts are effective. Most internet platforms covered in this report publish transparency reports that include some data on the scope and scale of their content moderation efforts. However, only one company publishes data directly related to the moderation of election-related content, and very few publish data related to the moderation of misleading information. This information is vital to understanding where companies are taking action, what influence these actions are having, and where these efforts are falling short.
- There is contention over whether internet platforms should fact-check content and ads on their services. While some internet platforms fact-check user-generated content, fewer fact-check ads. This is concerning because entities and politicians could precisely target specific audiences with potentially false information.
- Some platforms are adopting labels and identity verification standards to provide transparency around which businesses—such as foreign media outlets and advertisers—are sharing information online. These efforts are largely seen as a response to the 2016 U.S. presidential election, where foreign actors used U.S. internet platforms to spread misleading information among U.S. voters without having to disclose their identity or location.
- Publishing an ad transparency library provides insight into the kinds of political ads that are run on a platform, but this practice is not widely adopted (and some platforms have banned political ads completely). In addition, there is a serious lack of quantitative and qualitative transparency around how platforms moderate ads based on their advertising content policies, and what impact these efforts have on the ads available on their platforms.
- Algorithmic curation and amplification processes can significantly boost or undermine the reach of a piece of content or an ad. Some companies have recalibrated their algorithmic ranking and recommendation systems to prevent the amplification of election-related misinformation and disinformation. However, there is still a significant lack of transparency around how these tools are trained and used, what impact these disclosed changes have had on the spread of misleading and false election information, and how, or whether, humans are kept in the loop.
Recommendations
Going forward, internet platforms and policymakers should consider the following recommendations both prior to the 2020 U.S. presidential election and over the long term, to address future elections. The section below includes an excerpt of our recommendations on how companies can improve their efforts to connect users to, and lift up, authoritative information; address the spread of misleading information through content moderation and curation; tackle misleading ads; and provide meaningful transparency and accountability around these efforts. It also includes recommendations for how U.S. policymakers can encourage greater accountability and integrity from internet platforms, although policymakers are limited in the extent to which they can direct how internet platforms decide what content to permit on their sites.
Recommendations for Internet Platforms
Sharing and Lifting Up Authoritative Information and Empowering Informed User Decision-Making
- Partner with reputable fact-checking organizations and entities, as well as local and state election bodies to verify or refute information circulated through organic content and ads.
- Notify users who have engaged with misleading election-related content and direct them to authoritative sources of information.
- Institute a public interest exception policy that permits companies to leave content posted by world leaders, candidates for political office, and other government officials on their services, even if the content has been fact-checked and contains misleading information. In instances where the company determines that the content posted by officials could result in imminent harm, this public interest exception policy should not be applied and the content should be removed.
- Conduct regular impact assessments and audits of algorithmic curation tools (e.g. ranking and recommendation systems), and recalibrate them as necessary so they do not direct users to or surface misleading content when they search for election-related topics and do not algorithmically amplify such content in trending topics and recommendations.
- Label organic content and ads that have been produced by state-controlled media outlets to inform users of the content’s origins.
Moderating and Curating Misleading Information
- Create a comprehensive set of content policies to address the spread of election-related misinformation and disinformation with specific considerations for voter-suppressive content. Companies should house these policies in one location, provide public notice if their policies change, and include an archive of past policies.
- Clarify to what extent election-related policies interface with content policies related to hate speech, deepfakes, bots, coordinated inauthentic behavior, etc. While manipulated media may be a part of user expression on social media and therefore permissible in user-generated content, platforms should consider banning the use of such manipulation technologies in political advertising.
- Institute a dedicated reporting feature that enables users to flag election-related misinformation and disinformation to the company.
- Remove, reduce the spread of, or label content that has been fact-checked and deemed to contain election-related misinformation.
Tackling Misleading Advertising
- Create and implement comprehensive policies for the content and targeting of ads that prohibit election-related misinformation and disinformation in ads. The policies should include specific considerations for voter-suppressive ad content and should clarify to what extent these policies interface with advertising policies related to hate speech, bots, deepfakes, etc.
- Establish a comprehensive review process for election-related ads and ad targeting categories. Companies should require all election-related ads to be fact-checked and reviewed by a human reviewer before they are permitted to run on a platform. Companies should publicly disclose high-level information on what this review process consists of and to what extent it relies on automated tools and human reviewers.
- Create a comprehensive vetting process for advertisers that requires them to verify their identity and the country in which they are based before running ads.
- Append “paid for” disclosures to all paid political, social, and issue ads and ensure labels are maintained even if ad campaigns end or if ads are organically shared online.
- Create policies that prevent users and entities from being able to monetize and advertise on the platform if they repeatedly spread misinformation and disinformation.
Provide Meaningful Transparency and Accountability
- Explain to users how and to what extent content that is flagged for violating election-related misinformation and disinformation policies is reviewed, moderated, and curated by human reviewers and by automated tools. Users should be notified of any significant updates to these processes.
- Preserve data on election-related content and advertising removals. Vetted researchers should have access to this data so they can identify where these content and advertising moderation policies and practices fell short and make recommendations on how they can be improved.
- Publish data related to the moderation, curation, and labeling of election-related misinformation and disinformation in their regular transparency reports.
- Create a publicly available online database of all ads in categories related to elections and social and political issues that a company has run on its platform.
- Publish data on the company’s election-related ad content and targeting policy enforcement efforts.
Recommendations for Policymakers
- Policymakers should enact rules to require greater transparency from online platforms, including regular reporting regarding their content moderation, curation, labeling, and ad targeting and delivery efforts.
- Election authorities such as the Federal Election Commission (FEC), state election boards, and other state and local bodies should partner with internet platforms to provide and promote verified and legitimate election-related information on those platforms. These authorities should also help debunk misleading claims and information using their own online accounts.
- Policymakers should clarify that the Voting Rights Act, which prohibits suppressing voting through intimidation, applies in the digital environment. Further, Congress should amend the Act or pass new legislation to prohibit suppression of voting through deception, which is the primary means of vote suppression online.