Facebook/Instagram
Facebook is the largest social media platform in the world, with over 2.4 billion active users.1
In the wake of the 2016 U.S. presidential election, numerous researchers concluded that Facebook was a prominent site for election-related misinformation and disinformation, including voter suppression content. Since then, Facebook has taken steps to improve its policies and practices related to elections and misleading information. Facebook has also adopted many similar policies and practices for Instagram, a photo and video sharing social media platform that it owns. As a result, this section also outlines some efforts implemented by Instagram to combat election and voter suppression misinformation and disinformation.
Prior to the 2018 U.S. midterm elections, the company expanded its policies addressing voter suppression and intimidation to explicitly ban misrepresentation of the dates, locations, times, and ways that voting or voter registration can take place; misleading information about who is qualified to vote, whether a vote will be counted, and other parts of the voting process; and threats of violence related to voting, voter registration, or the outcome of an election.2 According to Facebook, the company removes content in these categories regardless of who posted it.3 The company also said that prior to the midterm elections its Elections Operations Center removed over 45,000 pieces of content that violated these policies, 90 percent of which was proactively detected by Facebook’s automated systems.4 Further, in a June 2020 blog post, Zuckerberg stated that Facebook will remove any misleading claims that aim to discourage voting, and noted that politicians will be subject to these policies as well.5
Facebook also introduced a reporting feature that enables users to flag potentially incorrect voting information. Additionally, the company established dedicated reporting channels that state election authorities can use to flag potentially false voting information.6 Further, Facebook has established partnerships with over 30 voting rights and election protection groups, enabling these groups to monitor and flag election-related content that potentially violates the platform’s content policies for review.7 In September 2020, the company announced that it would expand its partnership with state election authorities to address misleading information about polling conditions.
Facebook’s Election Operations Center, which is responsible for enforcing voter and election-related policies, will be specifically tasked with addressing false claims about polling conditions. Facebook initially aimed to focus the Center’s work on addressing misleading information about polling conditions in the 72 hours prior to election day,8 when content volumes and flags are typically higher.9 However, Facebook began instituting these efforts in September in response to the large number of early voters expected due to COVID-19.10
Facebook partners with independent third-party fact-checking organizations such as the Associated Press, Reuters Fact Check, and PolitiFact to review content on the platform that is suspected to be misleading.11 Once these fact-checking partners have debunked a piece of content, Facebook reduces the distribution of, or downranks, this content in the Facebook News Feed. Facebook may also apply a warning label to the debunked content.12 Facebook appends labels to photos and videos, as well as Instagram Stories,13 with the intent of allowing users to decide whether or not they’d like to view the content. Each label contains a link to the fact-checkers’ evaluation of the content at hand.14 Facebook may also feature “Related Articles” written by fact-checkers alongside debunked content in order to add context to the debunked post.15 When a user tries to share this debunked content on either Facebook or Instagram, they see a pop-up warning them that the content has been proven to be inaccurate.16 Further, if Pages, domains, and Groups repeatedly post misleading content on the service, Facebook downranks them and restricts Page owners from advertising and monetizing.17 On Instagram, this content, as well as content posted by accounts that continuously share misleading information, is omitted from the Explore and hashtag pages.18

Coordinated inauthentic behaviors and campaigns were a hallmark of the disinformation campaigns that sought to suppress voting and sow discord during the 2016 U.S. presidential election. As a result, the company has also updated its policies and practices in this regard.
In particular, Facebook updated its policy regarding inauthentic behavior to better explain how the company responds to foreign, domestic, state-led, and non-state-led deceptive efforts.19 Further, in order to prevent Page owners from masking their identity, Facebook requires that all Pages, including those that are election-focused, have a confirmed Page owner and provide verified information such as the organization’s legal name and its website, among other things.20

Advertising also played a prominent role in spreading misleading information during the 2016 presidential election. As a result, Facebook introduced a series of metrics and features to provide greater transparency around its advertising operations. First, the company introduced a tracker that enables users to see how much money U.S. presidential candidates have spent on ads. This ad spending information can be broken down at the state or regional level to show which specific geographies candidates are focusing their ad spend on. The company is also making efforts to clarify whether an ad ran on Facebook, Instagram, Messenger, or on Facebook’s Audience Network.21 Facebook Audience Network allows advertisers to extend their Facebook and Instagram campaigns across the internet.22 The company also introduced new features, including API filters that allow journalists, researchers, and others to access and download ad creatives, as well as a collection of frequently used API scripts.23 Further, the company instituted new rules that require advertisers to assign a verified Page Owner to their pages in order for them to run issue, electoral, or political ads in the United States.24
Now, closer to the 2020 presidential election, Facebook has instituted additional policies and procedures that aim to tackle the spread of election and voter-suppression related misinformation and disinformation. This includes the launch of the Voting Information Center in August 2020, which has been dubbed “the largest voting information effort in U.S. history.” Through the Center, Facebook aims to increase participation in the election by helping 4 million Americans register to vote across the Facebook, Instagram, and Messenger products25 and also seeks to promote accurate and authoritative information about elections in order to counter misinformation and disinformation.26 The Center includes guidance on how to register to vote and how to vote (including information on both mail and in-person voting), as well as election results.27 The Center is also a hub for updates from local election authorities regarding any changes to the voting process. The information in the Center is drawn from state election officials and other nonpartisan civic organizations.28 However, critics have expressed concerns that the Center is difficult to locate on the platform, as it requires users to navigate multiple drop-down menus.29 In September 2020, Facebook CEO Mark Zuckerberg announced that the company will place information from the Center at the top of Facebook and Instagram in the days running up to the election.30
In addition, in order to provide greater transparency around which governments and other entities are behind news posts on Facebook and Instagram, Facebook has instituted a labeling policy for media outlets that are partially or entirely state-controlled.31 This is particularly important given the role foreign governments played in pushing content labeled as news during the 2016 presidential election. Facebook also said it would begin labeling ads from such publishers later in 2020, although a concrete launch date has not been announced.32 Advocates and experts have raised concerns, however, about whether the effectiveness of these labeling efforts will be undermined by a failure to implement them consistently.33

In response to growing concerns that Facebook’s advertising platform can and likely will be used to promote election-related disinformation, including voter suppression content, the platform is permitting users to opt out of all social issue, electoral, and political ads across all Facebook products.34 Users who choose to view political ads will be able to see who paid for these ads even after they have been shared by other users.35 However, this approach falls short in several ways. First, this new policy puts the onus on users to explicitly opt out of viewing political ads. Further, the policy does not address pre-existing concerns around Facebook’s flawed advertising policies and policy enforcement process,36 especially concerns that the company does not fact-check political ads. Facebook CEO Mark Zuckerberg has long argued that no social media company, including Facebook, should be the arbiter of truth.37 As a result, he has pushed back on calls for the company to remove false claims, particularly those made by politicians, on the platform.38 While Facebook should not be the arbiter of truth, it can and should do more to ensure it is not amplifying and enabling the spread of harmful misinformation and disinformation on its platform.
One way of doing this could be by fact-checking political ads and subsequently notifying users when content in political ads has been debunked.39 Finally, this new policy does not address broader concerns related to access to microtargeting tools, which enable advertisers to precisely target users based on a range of personal data points, and can be used to target specific groups of users with misleading information.40
In September 2020, Facebook also announced that it would ban new political advertisements on the platform during the week preceding the November 3 election.41 Campaigns can, however, continue to promote ads that they placed on or before October 27, as long as the ads were viewed by at least one Facebook user.42 Although this policy change prevents new advertisements from being introduced immediately prior to the election, it does not address growing concerns that the company does not fact-check content in its advertisements. As a result, false information can still circulate through ads as long as the ads were posted on or before October 27.43
In addition, Facebook said that since 2016, the company has tripled its workforce focused on “security and safety issues,” which is responsible in part for content moderation on the platform.44 The company has also stated that it uses machine learning to rapidly identify and remove inaccurate voting information, and that these efforts have become more effective over time.45 Due to the unprecedented COVID-19 pandemic, Facebook, like many other internet platforms, had to adjust its content moderation operations because its content moderator workforce could not initially work remotely.46 As a result, Facebook has relied more heavily on automated tools to detect and flag certain categories of content. Although Facebook has since been able to readjust its content moderation operations to an extent, the company warned that users should expect more mistakes.47 Given that a significant amount of election-related content moderation is occurring during the pandemic, Facebook should preserve data on election-related content removals during this period so that researchers can evaluate these efforts later on.48 This is a best practice in general, as it allows researchers to assess where moderation efforts fell short and can be improved.
There is currently little transparency around how the company enforces its voter suppression and election-related content policies. In Facebook’s Community Standards Enforcement Report (CSER), the company outlines how it enforces some of its content policies, and how often it receives and takes actions based on appeals. However, the CSER does not include any data related to the enforcement of misinformation or voter suppression-related policies and related appeals. The report includes data on categories of content that could intersect with the company’s election-related misinformation policies, such as fake accounts and hate speech. However, this data is not sufficient to fully understand the nature of election and voter suppression-related misinformation and disinformation on the platform.49 In addition, there is little transparency around how Facebook's machine-learning and automated tools are trained, refined, and used, and how effective they are.50
Further, content that users report to the platform as voter interference is not currently sent directly for review by human review teams. Rather, these flags are considered “user feedback” and are used to evaluate aggregate trends. If Facebook receives a large volume of user reports for a piece of content, then that content will be reviewed by its policy and operational teams. According to Facebook, the company relies on this process because during the 2018 midterm elections, a low number of user reports of voter interference involved content that actually violated the platform’s policies. The majority of flagged content instead consisted of posts that expressed political opinions that differed from the flagger’s. Facebook said that during the midterm elections, over 90 percent of the content it removed for violating its voter suppression policy was detected proactively using its automated tools.
However, both online efforts to suppress voting and Facebook’s voter suppression policies have changed since 2018, and civil rights experts have raised concerns around whether Facebook should subsequently change its moderation practices to route user-flagged content for human review. In addition, if Facebook does not review content, users cannot appeal moderation decisions. The decision to not review user flags for voter interference therefore denies users a right to appeal and redress.51
In July 2020, Facebook introduced a new election-related labeling policy in response to pushback from a range of organizations, including those who led the #StopHateForProfit campaign,52 which argued that Facebook does little to address misinformation and hate speech in its content moderation and advertising practices.53 The new policy allows Facebook to append “Get Voting Information” labels54 to content that mentions voting to provide users with relevant information about the voting process.55 However, although Zuckerberg stated this policy will also apply to politicians, researchers have expressed concerns that the labels will not be applied consistently and that they will fail to have a meaningful impact.56 These concerns have been underscored by the fact that Facebook appears to be appending these labels to posts that refer in any way to voting, rather than only to posts that are inaccurate or misleading. For example, the company not only attached a “Get Voting Information” label to a July 21 post by President Trump claiming that mail-in ballots would result in the “most CORRUPT ELECTION” in the history of the United States,57 but also to a straightforward post by Kimberly Klacik, a Congressional candidate for Maryland’s District 7, which states “Please vote KIM KLACIK on November 3rd. We are getting all of our ducks in a row. On Day 1 you will see you made a great choice.”58 Civil rights groups are concerned that Facebook’s decision to apply a label to any voting-related post, regardless of the content, creates no distinction between accurate and misleading content.
Additionally, these groups have stated that the use of such broad labeling procedures could reduce the company’s sense of urgency around removing false election-related information, since the content will have a label directing users to the Voting Information Center.59 In addition, some watchdog groups have pressed the company to go one step further than labeling content and notify users who have viewed or engaged with misleading election-related content while on the platform.60
The platform also shared that it would broaden its existing prohibition61 on posting content that misleads individuals about how they can vote. As a result, claims that Immigration and Customs Enforcement (ICE) agents will be reviewing immigration papers at polling stations, as well as individualized threats of voter interference, are banned on the platform.62 These updated policies also ban threats of coordinated interference that could intimidate or discourage individuals from voting.63 Facebook has also said that when posts aim to delegitimize the outcome of the election or undermine the legitimacy of voting methods, the company will add information labels to these posts that include links to authoritative information.64
Further, Zuckerberg announced that politicians will be subject to all of these new policies, although content deemed newsworthy or in the public interest will be left up and labeled.65 This sparked concern given recent events in which political figures spread misleading content about the election process66 that some experts say amounts to voter suppression.67 Going forward, when companies deem there is public interest value in leaving such content up, any labels they use should provide sufficient contextual information explaining that the content is misleading and is being left up for awareness purposes. The company should also create a central policy that guides such cases, rather than use a disparate series of ad hoc statements and policies to make these determinations. In September 2020, the company also updated its policies to prohibit content and advertising that uses the COVID-19 pandemic to discourage voting. The company also stated it would include a link to authoritative information about COVID-19 in such posts.68
In 2018, Facebook committed to participating in an independent civil rights audit of the impact of its policies and practices on communities of color and other underrepresented groups.69 The final civil rights audit report, released in July 2020, outlined how Facebook has broadened its voter suppression and intimidation policies over the past two years to cover a more expansive set of threats and scenarios. However, the report also stated that in order for these policies to be effective, the company needs to interpret them in a more comprehensive and consistent manner.70 The overall audit found that the company’s lack of a strong civil rights foundation has resulted in numerous concerning outcomes, including the creation of opaque policies and practices related to elections and voting-related content, as well as the inconsistent and incomplete enforcement of these policies. By contrast, the platform responded more proactively and aggressively to the rapid spread of COVID-19 misinformation and disinformation. Some have suggested that this shows that when Facebook is committed to addressing a category of harmful misinformation, it has greater capabilities than it has demonstrated in the context of election-related misinformation.71 In furtherance of a commitment made by Facebook in the civil rights audit report released in June 2019,72 the company implemented a new policy banning paid ads that state that voting is meaningless or that discourage people from voting.
Citations
- J. Clement, "Most Popular Social Networks Worldwide as of July 2020, Ranked by Number of Active Users (In Millions)," Statista, last modified August 21, 2020, source.
- Guy Rosen et al., "Helping to Protect the 2020 US Elections," Facebook Newsroom, last modified October 21, 2019, source.
- Rosen et al., "Helping to Protect," Facebook Newsroom.
- Rosen et al., "Helping to Protect," Facebook Newsroom.
- Harrison Mantas and Susan Benkelman, "How Users See Facebook's Labels Will Determine Their Effectiveness," Poynter, last modified July 23, 2020, source. Mark Zuckerberg, "Three weeks ago, I committed to reviewing our policies ahead of the 2020 elections. That work is ongoing, but today I want to share some new policies to connect people with authoritative information about voting, crack down on voter suppression, and fight hate speech.," Facebook, June 26, 2020, 2:25 PM, source.
- Jessica Leinwand, "Expanding Our Policies on Voter Suppression," Facebook Newsroom, last modified October 15, 2018, source.
- Facebook's Civil Rights Audit – Final Report, July 8, 2020, source.
- Andrew Whalen, "Facebook to Ban Posts Meant to Suppress Votes, Including ICE Agent Threats," Newsweek, June 26, 2020, source.
- Facebook's Civil Rights Audit – Final Report.
- Facebook, "New Steps to Protect the US Elections," Facebook Newsroom, last modified September 3, 2020, source.
- Jared Newman, "Tech Platforms Screwed Up The Last Election. Here's How They're Prepping for 2020," Fast Company, March 4, 2020, source.
- Newman, "Tech Platforms".
- Rosen et al., "Helping to Protect," Facebook Newsroom.
- Rosen et al., "Helping to Protect," Facebook Newsroom.
- Leinwand, "Expanding Our Policies," Facebook Newsroom.
- Rosen et al., "Helping to Protect," Facebook Newsroom.
- Rosen et al., "Helping to Protect," Facebook Newsroom.
- Rosen et al., "Helping to Protect," Facebook Newsroom.
- Rosen et al., "Helping to Protect," Facebook Newsroom.
- Rosen et al., "Helping to Protect," Facebook Newsroom.
- Rosen et al., "Helping to Protect," Facebook Newsroom.
- "About Audience Network," Facebook for Business, source.
- Rosen et al., "Helping to Protect," Facebook Newsroom.
- Rosen et al., "Helping to Protect," Facebook Newsroom.
- Alex Hern and Julia Carrie Wong, "Facebook Plans Voter Turnout Push – But Will Not Bar False Claims From Trump," The Guardian, June 17, 2020, source.
- Hern and Wong, "Facebook Plans".
- Facebook, "New Steps," Facebook Newsroom.
- Hern and Wong, "Facebook Plans".
- Alexandra S. Levine, "Could Uber and Lyft Drive Business Off the Road?," Politico Morning Tech, August 20, 2020, source.
- Facebook, "New Steps," Facebook Newsroom.
- Newman, "Tech Platforms".
- Nathaniel Gleicher, "Labeling State-Controlled Media On Facebook," Facebook Newsroom, last modified June 4, 2020, source.
- Courtney C. Radsch, "Tech Platforms Struggle to Label State-Controlled Media," Committee to Protect Journalists, last modified August 12, 2020, source.
- Hern and Wong, "Facebook Plans".
- Hern and Wong, "Facebook Plans".
- Cecilia Kang and Mike Isaac, "Defiant Zuckerberg Says Facebook Won't Police Political Speech," The New York Times, October 21, 2019, source.
- Kang and Isaac, "Defiant Zuckerberg".
- Kang and Isaac, "Defiant Zuckerberg".
- Craig Timberg and Andrew Ba Tran, "Facebook's Fact-Checkers Have Ruled Claims in Trump Ads Are False — But No One Is Telling Facebook's Users," The Washington Post, August 5, 2020, source.
- Spandana Singh, Special Delivery: How Internet Platforms Use Artificial Intelligence to Target and Deliver Ads, February 18, 2020, source.
- Facebook, "New Steps," Facebook Newsroom.
- Thomas Germain, "Facebook to Halt New Political Ads in Week Before November Election," Consumer Reports, last modified September 3, 2020, source.
- Donie O'Sullivan and Brian Fung, "Facebook Will Limit Some Advertising in the Week Before the US Election — But It Will Let Politicians Run Ads with Lies," CNN Business, September 3, 2020, source.
- Nick Clegg, "Facebook Is Preparing For an Election Like No Other," The Telegraph, June 17, 2020, source.
- Rosen et al., "Helping to Protect," Facebook Newsroom.
- Spandana Singh, "AI Proves It's a Poor Substitute for Human Content Checkers During Lockdown," VentureBeat, May 23, 2020, source.
- Kang-Xing Jin, "Keeping People Safe and Informed About the Coronavirus," Facebook Newsroom, last modified August 19, 2020, source.
- Ian Vandewalker, Digital Disinformation and Vote Suppression, September 2, 2020, source. Letter by Emma Llansó, "COVID-19 Content Moderation Research Letter – in English, Spanish, & Arabic," April 22, 2020, source.
- Facebook, Community Standards Enforcement Report, source.
- Spandana Singh, Everything in Moderation: An Analysis of How Internet Platforms Are Using Artificial Intelligence to Moderate User-Generated Content, July 22, 2019, source.
- Facebook's Civil Rights Audit – Final Report.
- Whalen, "Facebook to Ban Posts".
- Stop Hate for Profit, last modified 2020, source.
- Mantas and Benkelman, "How Users," Poynter.
- Mantas and Benkelman, "How Users," Poynter.
- Mantas and Benkelman, "How Users," Poynter.
- Donald J. Trump, "Mail-In Voting, unless changed by the courts, will lead to the most CORRUPT ELECTION in our Nation's History! #RIGGEDELECTION," Facebook, July 21, 2020, 8:39, source.
- Kimberly Klacik, "Please vote KIM KLACIK on November 3rd. We are getting all of our ducks in a row. On Day 1 you will see you made a great choice. Maryland's District 7 KimKForCongress.com," Facebook, August 28, 2020, 12:42, source.
- Facebook's Civil Rights Audit – Final Report.
- Accountable Tech, Election Integrity Roadmap for Social Media Platforms, September 2020, source.
- Whalen, "Facebook to Ban Posts".
- Jonathan Shieber, "As Advertisers Revolt, Facebook Commits to Flagging 'Newsworthy' Political Speech That Violates Policy," TechCrunch, June 26, 2020, source.
- Whalen, "Facebook to Ban Posts".
- Facebook, "New Steps," Facebook Newsroom.
- Whalen, "Facebook to Ban Posts".
- Aaron Holmes, "Trump Made False Claims About Vote-By-Mail On Facebook and Twitter. Here's Why the Tech Companies Won't Ban Him or Take Down the Posts.," Business Insider, May 22, 2020, source.
- Sam Levine, "'It Could Have a Chilling Effect': Why Trump is Ramping Up Attacks on Mail-In Voting," The Guardian, June 1, 2020, source.
- Facebook, "New Steps," Facebook Newsroom.
- Sara Fischer, "Exclusive: Facebook Commits to Civil Rights Audit, Political Bias Review," Axios, May 2, 2018, source.
- Facebook's Civil Rights Audit – Final Report.
- Spandana Singh and Koustubh "K.J." Bagchi, How Internet Platforms Are Combating Disinformation and Misinformation in the Age of COVID-19, June 1, 2020, source.
- Sheryl Sandberg, "A Second Update on Our Civil Rights Audit," Facebook Newsroom, last modified June 30, 2019, source.