Moderating and Curating Misleading Information
Employing content moderation and algorithmic content curation to tackle misleading election content can yield promising results. All of the platforms we evaluated have a comprehensive set of content policies to address the spread of election-related misinformation and disinformation. There is variation in how platforms outline these policies, however.
For example, Reddit has an impersonation policy prohibiting content that “impersonates individuals or entities in a misleading or deceptive manner” and “deepfakes or other manipulated content presented to mislead, or falsely attributed to an individual or entity.”1 Twitter, on the other hand, has a Civic Integrity Policy that prohibits the use of Twitter’s services “for the purpose of manipulating or interfering in elections or other civic processes,” including “posting or sharing content that may suppress participation or mislead people about when, where, or how to participate in a civic process.”2 WhatsApp’s policies also prohibit “publishing falsehoods, misrepresentations, or misleading statements.”3 Other platforms have broader policies on false information, misinformation, or misleading information that encompass election-related content. If a platform had a content policy category that could be applied to election misinformation and disinformation, we gave it full credit.
One limitation of existing efforts in this area is that not all platforms specifically address voter suppression in their policies. Given that online services are often used to misinform users about when, where, and how they can vote, and to deter voting altogether, platforms must provide more clarity about their policies ahead of the midterms. Additionally, companies that do not have a clear stance on voter suppression should pursue broad, multi-stakeholder engagement to develop robust policies on the issue.
A major improvement that we have seen across platforms over the past several years is that most now allow users to flag content as misinformation or disinformation. The major exception is Google, which allows users to flag search results but not to flag them as misleading. In discussions with certain companies, we learned there was initially resistance to providing this flagging feature, as some companies observed that users were often flagging content they disagreed with rather than content that was truly misleading. However, most platforms now offer users the ability to flag misleading content, which is an important way of giving users greater agency to surface the harmful content they encounter and of ensuring the content moderation process is not one-sided.
We provided companies with full credit if they had a reporting feature for misinformation or misleading information broadly, rather than a dedicated feature for election misinformation and disinformation. It is important to note that some platforms combine misinformation reporting with other content categories; for example, YouTube users can flag content under the combined category “spam and misleading.” This may be a response to the high volume of false misinformation reports companies receive. The limitations of this approach are discussed in the last section, on transparency.
Our evaluation of WhatsApp was slightly different for this category because the platform does not engage in traditional content moderation and curation due to the end-to-end encrypted nature of its messaging services. End-to-end encryption is a critical feature for privacy and security. While WhatsApp cannot scan the content of user messages, the company has created several features designed to limit the spread of misinformation, including forwarding limits for viral messages, spam detection to ban mass messages, and enabling users to block and report messages containing misinformation. WhatsApp also allows users to directly interact with fact-checkers and to independently verify information on the web.4
Before the 2020 election, there was significant variation in how platforms approached misleading election information. This remains true. Facebook and Instagram, for example, rely on a “Remove, Reduce, Inform” approach, in which the services remove content that violates their Community Standards, algorithmically reduce the spread in News Feed of misleading or harmful content that does not violate those standards, and inform users with additional context, including through labels.5 Twitter, by contrast, uses a range of approaches, including tweet deletion, labeling, temporary and permanent account locks, and suspensions. The company determines which enforcement action to use based on the severity of a tweet’s content, the account’s history, and a predetermined strike system.6
Platforms should be able to draw on a broad range of enforcement mechanisms to tackle misleading content and accounts. However, few platforms have released granular data on the effectiveness of their efforts, which makes it difficult to identify which mechanisms work best and to hold platforms accountable. This is discussed in greater detail in the last section.
Citations
- “Do Not Impersonate an Individual or Entity,” Reddit Help, source.
- “Civic Integrity Policy,” Twitter Help Center, October 2021, source.
- “WhatsApp Terms of Service,” WhatsApp Help Center, January 4, 2021, source.
- “About WhatsApp and Elections,” WhatsApp Help Center.
- Tessa Lyons, “The Three-Part Recipe for Cleaning up Your News Feed,” Meta Newsroom, May 22, 2018, source.
- “Civic Integrity Policy,” Twitter Help Center.