YouTube
YouTube, a Google subsidiary, is the most popular video platform on the market, with approximately 2 billion users worldwide.1 YouTube is a major source of online information and advertising, and is poised to play a substantial role in the 2020 presidential election. Like other online platforms, YouTube has received scrutiny for potential election misinformation and disinformation, including voter suppression content, on its site.2 YouTube follows the same policies for ads and political content as Google, which broadly prohibit misleading information. YouTube also maintains its own Community Guidelines, which include policies that prohibit false or misleading content.3
Under its Community Guidelines, YouTube prohibits content that contains “spam, scams, or other deceptive practices.”4 The policies specifically call out voter suppression content by prohibiting content “aiming to mislead voters about the time, place, means or eligibility requirements for voting.”5 The policies also address issues like deepfake videos and fake content by prohibiting maliciously manipulated media and stating that the company will terminate channels that attempt to impersonate others. The platform says that it is able to remove policy-violating content by investing in new technologies and tactics for identifying malicious actors. In 2018, for example, YouTube formed an Intelligence Desk to help detect new trends in inappropriate content and behavior. YouTube also partners closely with TAG, Google’s Threat Analysis Group, to combat foreign and domestic entities trying to interfere with the electoral process. General users and Trusted Flaggers, a program that includes individual users, government agencies, and non-governmental organizations, may also flag content for violating YouTube’s policies.6 Flagged content is then reviewed by the platform’s content moderators and either removed or kept online.7 In order to demonstrate accountability around the spread of misinformation on its service, YouTube should notify users who see or engage with content that has been flagged as misinformation and provide them with additional contextual information explaining why the content was misleading or false.8
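To make the flagging workflow concrete, the sketch below models a flag-and-review queue in which general users and Trusted Flaggers submit reports and moderators resolve each one as removed or kept online. All class, function, and field names are hypothetical, as is the assumption that Trusted Flagger reports are prioritized; YouTube's internal tooling is not public.

```python
# Minimal sketch of a flag-and-review pipeline. Names and the prioritization
# of Trusted Flagger reports are illustrative assumptions, not YouTube's API.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class FlaggerType(Enum):
    GENERAL_USER = "general_user"
    TRUSTED_FLAGGER = "trusted_flagger"  # individuals, agencies, and NGOs

class Resolution(Enum):
    REMOVED = "removed"
    KEPT_ONLINE = "kept_online"

@dataclass
class Flag:
    video_id: str
    flagger: FlaggerType
    reason: str

@dataclass
class ReviewQueue:
    pending: List[Flag] = field(default_factory=list)

    def submit(self, flag: Flag) -> None:
        # Assumption: reports from Trusted Flaggers jump the queue.
        if flag.flagger is FlaggerType.TRUSTED_FLAGGER:
            self.pending.insert(0, flag)
        else:
            self.pending.append(flag)

    def review_next(self, violates_policy: bool) -> Resolution:
        # A human moderator decides the outcome of the oldest pending flag.
        self.pending.pop(0)
        return Resolution.REMOVED if violates_policy else Resolution.KEPT_ONLINE
```

Either outcome could also be logged and surfaced to affected users, which is the kind of notification the recommendation above calls for.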
YouTube has also announced that it is working to raise authoritative voices on its platform to help reduce misinformation.9 In 2017, it started prioritizing known sources it deems authoritative (such as CNN, Fox News, and the Guardian) for news and information in search results and “watch next” panels. However, YouTube does not provide information on how it determines whether a source is “authoritative.” The platform continues to expand its Top News and Breaking News sections to highlight videos from news sources and display breaking news events directly on its homepage. YouTube also announced on April 28, 2020, that it was expanding its work on fact-check information panels, which connect users to authoritative information based on their search queries.10 One way YouTube uses information panels is to show whether a channel is owned by a news publisher that is publicly funded or funded by a government.11 Increasing the visibility of reputable news sources and keeping users informed about the sources of the content they view on YouTube can be helpful tools for combating voter misinformation and voter suppression tactics.
Over the past several years, researchers have outlined the ways YouTube’s algorithmic recommendation system contributes to the spread of misinformation.12 In response to these criticisms, the company has instituted a number of changes.13 YouTube’s recommendation system typically ranks and recommends videos to users based on a range of signals, including likes, dislikes, watch history, and data from user feedback surveys.14 Since January 2019, the platform has worked to reduce recommendations of borderline content that comes close to violating its Community Guidelines but does not merit removal.15 As a result, YouTube limits recommendations for videos that, for example, promote a miracle cure for a serious illness or claim the Earth is flat. Users may also turn off recommendations to have more control over the content they see.16 Although these tactics could be helpful for preventing the rapid spread of election misinformation and voter suppression content, the platform has not published information about how it addresses this type of content in its recommendations. In addition, YouTube does not provide users with a comprehensive set of controls for determining why YouTube recommends certain content to them. Going forward, the company should provide greater transparency around how it addresses misleading election content in its recommendation system. Further, it should empower users to decide how their data is used to shape the recommendations they receive.
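As an illustration of this kind of signal-based ranking, the sketch below combines the signals named above into a single score and demotes, rather than removes, borderline content. The weights, the demotion factor, and all names are assumptions made for illustration; YouTube's actual ranking model is not public.

```python
# Hypothetical signal-weighted ranking over candidate videos. The weights and
# the borderline-content penalty are illustrative, not YouTube's real formula.
from dataclasses import dataclass

@dataclass
class VideoSignals:
    likes: int
    dislikes: int
    watch_minutes: float   # aggregate watch time drawn from watch history
    survey_score: float    # 0.0-1.0, from user feedback surveys
    is_borderline: bool    # close to violating Community Guidelines

def recommendation_score(v: VideoSignals) -> float:
    """Fold engagement signals into one ranking score."""
    engagement = (1.0 * v.likes
                  - 1.5 * v.dislikes
                  + 0.1 * v.watch_minutes
                  + 50.0 * v.survey_score)
    # Demote borderline content instead of removing it, mirroring the
    # reduced-recommendation policy described above.
    return 0.1 * engagement if v.is_borderline else engagement

candidates = [
    VideoSignals(900, 40, 5_000.0, 0.8, is_borderline=False),
    VideoSignals(2_000, 100, 9_000.0, 0.6, is_borderline=True),
]
ranked = sorted(candidates, key=recommendation_score, reverse=True)
```

Even this toy version shows why transparency matters: a small change to the weights or to the demotion factor silently reshapes what users see.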
YouTube uses both automated tools and human reviewers to moderate content and enforce its Community Guidelines.17 However, there is little transparency around how the platform’s automated tools are trained, updated, or used, and how effective they are at combating misinformation. In addition, as previously discussed, Google and YouTube are increasingly relying on automated tools to review potentially violating content during the COVID-19 pandemic. The company has not provided adequate transparency around which categories of content this new process applies to, or what the consequences of this shift are.
YouTube also warns that this increased reliance may result in a higher number of removals of videos, some of which “may not violate policies.”18 The company states that it will not issue strikes on content removed by automated systems without human review, unless it has high confidence that the content actually violates its policies. Strikes are typically issued when a user’s content is removed for violating YouTube’s Community Guidelines; if a user receives three strikes within a 90-day period, their channel may be permanently removed from YouTube.19 If an account holder believes that their content was improperly removed, they can appeal the decision. Appeals are an important mechanism for remedy and redress; however, the company has stated that appeals processes may take longer than usual due to the pandemic. This raises concerns that a potentially higher number of mistaken removals, combined with a slower appeals process, could negatively affect election and voting content prior to the election. While it is important to remove and block misleading content, accurate voting and election-related content, which can be beneficial for potential voters, could also be taken down accidentally. The platform should therefore invest more in ensuring that election content receives priority review in the run-up to the election. In addition, because a significant amount of election-related content moderation is occurring during the pandemic, YouTube should preserve data related to election content removals during this period so that researchers can evaluate these efforts later on.
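The strike mechanics described here reduce to a simple rule that can be sketched in a few lines. The 90-day window and three-strike limit come from YouTube's published policy; the confidence threshold for automated removals is an assumption, since the company does not disclose one.

```python
# Sketch of the three-strikes rule: a strike requires human review or a
# high-confidence automated removal, and three strikes within 90 days may
# lead to channel termination. The 0.95 threshold is an assumption.
from datetime import datetime, timedelta
from typing import List

STRIKE_WINDOW = timedelta(days=90)
STRIKE_LIMIT = 3
AUTOMATION_CONFIDENCE_THRESHOLD = 0.95  # hypothetical value

def should_issue_strike(human_reviewed: bool, automated_confidence: float) -> bool:
    # Automated removals without human review only earn a strike when the
    # system is highly confident the content violates policy.
    return human_reviewed or automated_confidence >= AUTOMATION_CONFIDENCE_THRESHOLD

def channel_terminated(strike_dates: List[datetime], now: datetime) -> bool:
    # Count only strikes that fall inside the rolling 90-day window.
    recent = [d for d in strike_dates if now - d <= STRIKE_WINDOW]
    return len(recent) >= STRIKE_LIMIT
```

Under this rule, a backlog of slow appeals is costly: a mistaken strike stays on the record while the appeal is pending, pushing a channel closer to the termination threshold.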
Google publishes a political advertising transparency report, which features data on Google, YouTube, and partner properties.20 However, the consolidated report does not break out reporting for each platform, and it does not provide granular information for YouTube specifically. The report also does not include the number of political ads that were flagged or removed for violating Google or YouTube’s advertising policies, making it difficult to understand how effective Google’s ad enforcement practices are. In addition, Google publishes a Community Guidelines Enforcement Report for YouTube, which outlines how the platform enforces its content policies.21 While the report shows the volume of videos, channels, and comments removed for being spam, misleading, or scams, it does not specifically break out the volume of videos, channels, or comments removed for attempting to mislead users about elections or voting information.
Citations
- Maryam Mohsin, “10 Youtube Stats Every Marketer Should Know in 2020 [Infographic],” Oberlo, May 11, 2020, source
- Lata Nott, “Political Advertising on Social Media Platforms,” American Bar Association, June 26, 2020, source
- Leslie Miller, “How YouTube supports elections,” YouTube Official Blog, February 3, 2020, source
- “Spam, deceptive practices & scams policies,” YouTube Help, source
- “Spam, deceptive practices,” YouTube Help
- “YouTube Trusted Flagger program,” YouTube Help, source
- “YouTube Trusted Flagger,” YouTube Help
- Accountable Tech, Election Integrity.
- “The Four Rs of Responsibility, Part 2: Raising authoritative content and reducing borderline content and harmful misinformation,” YouTube Official Blog, December 3, 2019, source
- “Expanding fact checks on YouTube to the United States,” YouTube Official Blog, April 28, 2020, source
- “Information panel providing publisher context,” YouTube Help, source
- Paul Lewis, “'Fiction is outperforming reality': how YouTube's algorithm distorts truth,” The Guardian, February 2, 2018, source; Spandana Singh, Why Am I Seeing This? How Video and E-Commerce Platforms Use Recommendation Systems to Shape User Experiences, March 25, 2020, source
- Andrew Hutchinson, “YouTube Updates Recommendations Algorithm to Lessen the Spread of 'Borderline Content',” Social Media Today, January 26, 2019, source
- Paige Cooper, “How Does the YouTube Algorithm Work? A Guide to Getting More Views,” Hootsuite, August 18, 2020, source
- “Continuing our work to improve recommendations on YouTube,” YouTube Official Blog, January 25, 2019, source
- Common Sense Media, 2020 Social.
- “Protecting our extended workforce and the community,” YouTube Official Blog, March 16, 2020, source
- “YouTube Community Guidelines enforcement,” Google Transparency Report, source
- “Community Guidelines strike basics,” Google, source
- “Political advertising in the United States,” Google Transparency Report, source
- “YouTube Community Guidelines enforcement,” Google Transparency Report