Twitter is a microblogging and social media platform with approximately 152 million daily active users.1 Following the 2016 U.S. presidential election, the platform became a focal point of conversations about misleading information and the electoral process.
According to Twitter’s civic integrity policy, individuals “may not use Twitter’s services for the purpose of manipulating or interfering in elections or other civic processes. This includes posting or sharing content that may suppress participation or mislead people about when, where, or how to participate in a civic process.”2 The company defines civic processes as “events or procedures mandated, organized, and conducted by the governing and/or electoral body of a country, state, region, district, or municipality to address a matter of common concern through public participation.”3 For example, in September 2020, the company took action against a Tweet posted by Democratic House candidate Elizabeth Hernandez, which suggested that Republicans should vote on November 4, for violating its rules against voter suppression. Twitter required the campaign account to remove the Tweet, which Hernandez’s team said was posted as a joke, before it could regain access to its account.4 Under its civic integrity policy, Twitter prohibits three categories of manipulative content and actions:
- Misleading information about how an individual can participate in an election or civic process: This category includes the posting or promotion of misleading information about when a civic process such as an election is taking place as well as misleading information on how to participate, such as false claims that individuals can vote by Tweet or text messages.5
- Content that aims to suppress, intimidate, or discourage individuals from participating in an election or other civic process: This category includes false claims that polling places are closed or experiencing equipment problems, misleading claims about law enforcement activity related to voting in an election, misleading statements related to process procedures which could discourage voting, and threats related to voting locations.6 Twitter’s violent threats policy is also applicable to instances of violent threats that are not covered by the civic integrity policy.7
- False or misleading affiliations: This category prohibits the creation of fake accounts which misrepresent their affiliation or share and promote content that misrepresents an affiliation with a candidate, elected official, government entity, and so on.8
In September 2020, Twitter announced an update to its civic integrity policy that allows the company to label or remove false or misleading information that aims to undermine public confidence in an election or other civic process. This policy change will apply to content that creates confusion around the laws related to a civic process, shares unverified claims related to election rigging and other procedures that could undermine faith in the civic process, or promotes misleading claims that call for interference in, or relate to the results of, a civic process.9 The company clarified that the civic integrity policy does not apply to inaccurate statements about an elected or appointed official; election-related or political content that is polarizing, controversial, or hyperpartisan in nature; or high-level statements about the integrity of civic processes.10 In addition, if an elected or appointed official shares content that violates this policy but has public interest value, the company may leave the content up under its public interest notice policy (discussed further below).11

Twitter’s civic integrity policy also includes information on when and how users can report content they believe violates the policy. Reporting is available to users in relevant jurisdictions prior to the first officially sanctioned event associated with major civic processes, and users can report content via both the Twitter app and the desktop website.12 In addition, Twitter works with several government and civil society partners around the world who flag policy-violating content and receive expedited review of their flags.13 Further, the civic integrity policy details that consequences for violations vary depending on the nature of the violation and the user’s history with the platform.
When a user violates the civic integrity policy for the first time, the platform blocks the user’s ability to publish new Tweets and requires the user to delete the violating content from their profile before they can regain full access. If a user violates the policy again after receiving this initial warning, their account will be permanently suspended. The company offers users whose accounts have been flagged for violating the policy access to an appeals process, although it is unclear whether this also applies to individual Tweets.14 In August 2020, the company announced that it is expanding its misinformation policies related to mail-in ballots and early voting.15
As the 2020 U.S. presidential election draws near, Twitter is under significant pressure to improve its efforts to detect and curb the spread of election-related misinformation and disinformation, particularly voter suppression-related content, and to connect users with reliable information on voting. In September 2020, Twitter debuted its election hub by adding a “US Elections” tab to the Explore menu on the platform. The hub will feature Twitter-selected election-related news in English and Spanish, debate live streams, state-specific voting information and resources, and candidate information. Twitter has stated that the hub will also include public service announcements that aim to inform voters about important election-related topics such as voter registration, how to obtain a mail-in ballot, and guidance for safe voting during the pandemic.16 The company has also banned deepfakes,17 and in January 2020, a few days before the Iowa caucuses, it began permitting users in the United States to report misleading election-related content as well as instances of voter intimidation or suppression.18
According to Twitter, it designed the reporting tool to empower users in the United States to flag content that could harm the electoral process. However, experts have raised concerns that trolls could abuse the tool to attack candidates or undermine individuals they disagree with. In addition, there is little transparency around how effective the tool is. It was originally rolled out in India19 during the general elections in April 2019, and in the European Union ahead of its May 2019 elections, but Twitter has not released information about whether the tool curbed the spread of voter suppression-related misinformation and disinformation.20 Although Twitter publishes a transparency report that outlines how it enforces its content policies, the company only recently began including data on how it enforces its civic integrity policy.21 Currently, the report includes data on the number of unique accounts reported and actioned, and the amount of content actioned, for hateful conduct, impersonation, and violent threats.22 However, the metrics and data offered in the report do not include any election-specific data, and the data provided is not granular enough to convey the scope and scale of voter suppression-related misinformation and disinformation on the service, or how Twitter aims to combat such content.23 In addition, Twitter has repeatedly been criticized for inconsistently enforcing its policies.24 This raises further questions about whether Twitter’s civic integrity policy and the new reporting tool will have any positive impact on preventing voter suppression on the platform.
Twitter has also tried to address the spread of election and voter suppression misinformation and disinformation in advertising. In August 2019, the company banned all advertising from state-backed media.25 The company defines state-controlled media as entities that are financially or editorially controlled by a state; it does not include entities that receive some taxpayer funding but are otherwise independent, such as independent public broadcasters. The company worked with academic and civil society leaders to curate its list of state-controlled media organizations.26 In addition, in October 2019, the company banned political ads on the platform.27 Further, Twitter introduced ad targeting limitations that prevent the targeting of cause-based ads based on an individual’s age, race, or location.28 However, there is little transparency around how this ban and the limitations on cause-based ads are being enforced, and how effective these enforcement mechanisms have been to date.

In June 2019, Twitter introduced a public interest notice policy. Under this policy, in certain instances in which a government official, a candidate for public office, an individual being considered for a government position, or a user who is verified or has over 100,000 followers violates the company’s policies, the company may leave the content up because it believes the content has public interest value, but will append a notice over the Tweet. The notice, which appears over these Tweets in both the news feed and search results, informs users that the content has been found to violate the platform’s rules, details which content policy it violated, and explains that the Tweet has been left up because of its public interest value. However, Twitter still removes content that features direct threats of violence or calls to commit violence against an individual.29
The company has deployed these public interest notices in numerous instances. For example, in May 2020, President Donald Trump and his campaign shared unsubstantiated content that claimed vote-by-mail programs are efforts to commit voter fraud. The company responded by fact-checking the tweets and appending a warning label to two of President Trump’s tweets that featured false claims related to mail-in voting. The notices also included a link where users could learn more about mail-in ballots.30 However, Twitter has not taken action on similar content, sparking concerns among civil society and civil rights groups that these policies are not applied consistently or transparently,31 which can undermine their effectiveness.32
In August 2020, the company also announced that it will label Twitter accounts belonging to senior government officials and entities (e.g., foreign ministers, institutional entities, and diplomatic leaders), as well as accounts belonging to state-affiliated media entities and their editors-in-chief and senior staff, from the five permanent members of the UN Security Council (China, France, Russia, the United Kingdom, and the United States).33 The labels will contain information such as “Russia state-affiliated media” to provide greater transparency around who is sharing content on the platform. Twitter will not apply these labels to the personal accounts of heads of state. Twitter also announced that state-affiliated media accounts and their Tweets will no longer be amplified through the platform’s recommendation systems, which will affect their visibility on the home timeline, in notifications, and in search.34 Users who click on these new labels on account pages will be directed to an article explaining the new policy35 as well as to the Twitter Transparency Report for further information.36
Going forward, the company should provide greater transparency and accountability around how it uses methods such as labels to address voter suppression-related misinformation and disinformation on the platform. It could do this by publishing data in its transparency report on how many times it has applied such labels, broken down by category of content and type of official or entity. This is particularly important given that advocates have expressed concerns that labeling could be ineffective if it is implemented inconsistently or incompletely.37 Although the platform can still improve its efforts to prevent the spread of voter suppression-related content on its service, its efforts are notable compared to similar platforms such as Facebook, which has taken a more hands-off approach to these issues.
On July 21, 2020, Twitter announced a range of enforcement actions against accounts related to the far-right conspiracy movement QAnon, one of the entities identified as responsible for promoting election-related misinformation and voter suppression.38 These enforcement actions include preventing QAnon-related content and accounts from appearing in the algorithmically curated trending topics and recommendations, preventing QAnon-related links from being shared on the platform, and attempting to prevent the algorithmic amplification of QAnon-related content in search and conversation threads. These are important efforts to prevent the spread of voter suppression-related misinformation and disinformation on the platform. However, as mentioned above, in order for the impact of these efforts to be quantified, the company should provide greater transparency around the effectiveness and results of these enforcement actions.
Twitter should also expand its transparency reporting to include data related to misinformation, disinformation, and voter suppression, as well as data outlining how effective its enforcement of its political ads policies has been. In addition, the company should provide greater transparency around how users’ news feeds and recommendations are algorithmically curated, and how this curation could result in the promotion of related misleading information. Users should also have access to controls that allow them to determine whether and how their data is used to enable these algorithmic curation processes, and to set preferences around the types of content they see online.39 Finally, the company should preserve data related to election-related content removals and provide researchers with access to this data following elections so they can assess where the company’s moderation policies fell short.
Citations
- Omnicore Agency, "Twitter by the Numbers: Stats, Demographics & Fun Facts," Omnicore Agency, last modified February 10, 2020, source.
- Twitter, "Civic Integrity Policy," Twitter, last modified September 2020, source.
- Twitter, "Civic Integrity," Twitter.
- Cristiano Lima, "Twitter Forces Democratic Candidate To Delete Post Flouting Voter Suppression Rules," Politico, September 1, 2020, source.
- Twitter, "Civic Integrity," Twitter.
- Twitter, "Civic Integrity," Twitter.
- Twitter, "Civic Integrity," Twitter. Twitter, "Violent Threats Policy," Twitter, last modified March 2019, source.
- Twitter, "Civic Integrity," Twitter.
- Twitter Safety, "Expanding Our Policies to Further Protect the Civic Conversation," Twitter Blog, entry posted September 10, 2020, source.
- Twitter, "Civic Integrity," Twitter.
- Twitter, "About Public-Interest Exceptions on Twitter," Twitter, source.
- Twitter, "Civic Integrity," Twitter.
- Twitter, "Civic Integrity," Twitter.
- Twitter, "Civic Integrity," Twitter.
- Kanishka Singh, "Twitter to Expand Rules Against Misinformation on Mail-In Ballots, Early Voting," Reuters, August 13, 2020, source.
- Bridget Coyne and Sam Toizer, "Helping You Find Accurate US Election News and Information," Twitter Blog, entry posted September 15, 2020, source.
- Newman, "Tech Platforms".
- Steven Overly, "Twitter Users Can Now Report Voter Suppression, Misinformation," Politico, January 29, 2020, source.
- Pranav Dixit, "Twitter Will Let Users Report Tweets That Mislead Voters," Buzzfeed News, April 24, 2019, source.
- Amnesty International, Toxic Twitter, March 2018, source.
- Twitter, Rules Enforcement Report, source.
- Twitter, Rules Enforcement.
- Twitter, Rules Enforcement.
- Amnesty International, Toxic Twitter.
- Twitter, "Updating Our Advertising Policies on State Media," Twitter Blog, entry posted August 19, 2019, source.
- Twitter, "Updating Our Advertising," Twitter Blog.
- Jack Dorsey (@jack), "We've made the decision to stop all political advertising on Twitter globally. We believe political message reach should be earned, not bought. Why? A few reasons…Thread," Twitter, October 30, 2019, 4:05 PM, source.
- Kim Lyons, "You Can Now Report Voter Suppression on Twitter Ahead of the 2020 Election," The Verge, January 30, 2020, source.
- Twitter Safety, "Defining Public Interest on Twitter," Twitter Blog, entry posted June 27, 2019, source.
- Dan Mangan and Kevin Breuninger, "Twitter Fact-Checks Trump, Slaps Warning Labels On His Tweets About Mail-In Ballots," CNBC, May 26, 2020, source.
- Holmes, "Trump Made".
- Emily Saltz et al., "It Matters How Platforms Label Manipulated Media. Here are 12 Principles Designers Should Follow.," The Startup (blog), entry posted June 9, 2020, source.
- Twitter Support, "New Labels for Government and State-Affiliated Media Accounts," Twitter Blog, entry posted August 6, 2020, source.
- Twitter Support, "New Labels," Twitter Blog.
- Twitter, "About Government and State-Affiliated Media Account Labels on Twitter," Twitter, source.
- Twitter, Twitter Transparency Report, source.
- Radsch, "Tech Platforms," Committee to Protect Journalists.
- Nix and Wagner, "Social Media".
- Spandana Singh, Charting a Path Forward: Promoting Fairness, Accountability, and Transparency in Algorithmic Content Shaping, September 9, 2020, source.