Case Study: YouTube
YouTube is an American video-sharing company founded in 2005 by Chad Hurley, Steve Chen, and Jawed Karim.1 In 2006, YouTube was acquired by Google for $1.65 billion, and has since operated as a subsidiary of the company.2 YouTube is the world’s largest online video source,3 with approximately 2 billion users worldwide.4 The company currently ranks second for global internet engagement on Alexa rankings.5 According to a Pew Research Center study, 94 percent of Americans between the ages of 18 and 24 use YouTube, a higher percentage than for any other online platform.6
Today, individuals turn to YouTube to access a range of content, including music videos, instructional videos, and the news. The company operates a vast database of videos, and has been referred to as a library of content.7 YouTube utilizes an algorithmic recommendation system to generate personalized video recommendations to its users.8 According to YouTube, although many users visit the platform to search for something specific, the company has expanded its recommendation system in order to also engage those who did not come to the platform with a specific idea of what they wanted to watch.9 YouTube’s videos also often appear in Google search results.10 YouTube seeks to maximize the time that users spend on the platform as it enables the company to deliver more ads to users. Given that the recommendation system is designed to infer user interests and behaviors, and subsequently suggest content that may be of interest to a user, the system is part and parcel of the company's revenue generation model.
YouTube’s recommendation system determines what content should appear on a user’s home page and in the user’s “Up Next” sidebar, which appears next to videos that a user is currently watching. The Up Next feature autoplays recommended content unless a user turns the autoplay off.11 Today, YouTube’s recommendation system is responsible for generating over 70 percent of viewing time on the platform.12 This has a significant impact on its users. According to a Pew Research Center study, 81 percent of YouTube users say that they at least occasionally watch recommended videos, including 15 percent who say they watch recommended videos regularly.13
YouTube is one of the largest video repositories on the internet, and many users incorrectly equate the site’s popularity with the credibility of its recommendation system. However, despite the fact that YouTube’s recommendation system is responsible for shaping how billions of individuals engage with content on the service, and influencing how they see the world, the company has provided relatively little transparency around how this system works.14 According to YouTube, user recommendations and search results are influenced by factors such as the videos a user has liked, the playlists a user has created,15 and a user’s watch history and activity on YouTube, Google, and Chrome. Some researchers have suggested that the system also considers data points such as a user’s account preferences16 and the keywords they search for.17 The company has not, however, offered comprehensive disclosures outlining the key factors its recommendation system considers.18 This lack of transparency is concerning, as the company’s recommendation system has been found to suggest controversial and harmful videos, including those that promote extremist propaganda, conspiracy theories, and misinformation. Further, YouTube provides users with only a limited set of controls over how they would like their platform experience to be shaped by such algorithmic decision-making. Without insight into how YouTube’s recommendation systems work, it is difficult to understand why these suggestions are made, and how to develop targeted interventions to prevent them.
A Technical Overview of YouTube’s Recommendation System
According to a 2016 paper authored by three researchers at Google, in order for YouTube’s recommendation system to deliver personalized recommendations, it has to be able to process YouTube’s expansive user base and collection of videos.19 In addition, given that over 500 hours of new content are uploaded to YouTube every minute,20 the recommendation system also needs to be responsive enough that it can rapidly integrate these new videos as well as any new user behaviors and patterns into its suggestions.21
According to YouTube, it makes minor changes to its recommendation system every year.22 But the company has provided little transparency around how the system is structured, how it makes decisions, and how it has changed over time. Numerous researchers and journalists have nonetheless attempted to document the system’s various iterations and evolutions.
Prior to March 2012, the recommendation algorithm was designed to maximize user views by recommending videos that the system calculated users were likely to click on. However, many creators figured out how to influence this recommendation system and gain more views on their videos. In addition, the prioritization of user views by the algorithm meant that creators had a greater incentive to produce clickbait content23 that garnered a large number of clicks, such as content with sensational titles, compared to content that a user would actually want to fully watch.24
In response to these concerns, YouTube altered the recommendation algorithm so that it placed more weight on a user’s watch time rather than a video’s views.25 The platform defines watch time as how much time a user spends viewing content on the platform. YouTube asserts that this change encouraged creators to produce “higher quality content” that users would watch fully, rather than content users would click on and then abandon.26 This would in turn increase the likelihood that users would be satisfied with the service, view more videos and advertisements, and generate more revenue.27
The introduction of the watch time metric also influenced how the company displays videos in search results, runs ads, and pays video creators on the platform.28 After introducing this new model, the company also changed its rules so that all creators—rather than a vetted group—could run ads on their content and accrue revenue from them.29 A few weeks after the company introduced these changes to its recommendation system, YouTube reported that the number of views on the platform was decreasing.30 Overall watch time, however, was increasing; it grew 50 percent a year for three consecutive years.31 However, critics both inside and outside the company argued that this metric also rewarded offensive, harmful, and often fringe content that garnered high watch times.32
In 2015, Google’s artificial intelligence division, Google Brain, began reconstructing YouTube’s recommendation system around neural networks. A neural network is a machine-learning model, loosely inspired by how animal brains operate, that identifies relationships by finding patterns in a dataset.33 Prior to Google Brain, YouTube had implemented machine learning tools in its recommendation system through a Google-produced system known as Sibyl.34 However, the new algorithm introduced by Google Brain brought in a range of new functionalities. For example, Google Brain used supervised learning systems, which are trained on input examples that humans have pre-labeled with the desired outputs.35 These supervised learning techniques enabled the system to identify adjacent relationships between videos, and to generalize these findings in ways that humans could not. Before the use of such techniques, if a user watched a video from a particular beauty vlogger, the recommendation system suggested videos with high degrees of similarity. By identifying adjacent relationships, however, the Google Brain model was able to suggest other vloggers who were comparable, but not identical.
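The “adjacent relationship” idea can be pictured as a nearest-neighbor lookup over learned channel embeddings: channels whose vectors point in similar directions are treated as related even if they are not identical. Everything in this sketch is invented for illustration — the channel names, vectors, and choice of cosine similarity are assumptions, not YouTube’s actual representation:

```python
import math

def cosine(a, b):
    # Cosine similarity: 1.0 for identical directions, near 0 for unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "learned" embeddings: nearby vectors mean adjacent channels.
embeddings = {
    "beauty_vlogger_a": [0.9, 0.1, 0.0],
    "beauty_vlogger_b": [0.8, 0.2, 0.1],   # similar, but not identical
    "gaming_channel":   [0.0, 0.1, 0.9],
}

def nearest(channel):
    # Return the most similar other channel to the given one.
    others = [c for c in embeddings if c != channel]
    return max(others, key=lambda c: cosine(embeddings[channel], embeddings[c]))

# nearest("beauty_vlogger_a") → "beauty_vlogger_b"
```

A real system learns such embeddings from co-watch behavior at enormous scale, but the retrieval step reduces to the same idea: similar vectors, similar recommendations.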
In addition, the Google Brain algorithm was able to identify important patterns in consumption. For example, the algorithm noted and optimized a relationship between a user’s device and their watch times, recommending shorter videos for mobile users and longer videos for YouTube TV users.36 The Brain algorithm also enabled YouTube to incorporate insights on a user’s behavior into its recommendations at a faster rate, making it easier for the company to identify trending topics and offer updated recommendations.37 Since the introduction of the Google Brain model, the recommendation system has driven 70 percent of the time users spend consuming content on the site.38
This recommendation system was created by combining two deep-learning neural networks: one for candidate generation and another for ranking. Candidate generation is the first stage of the recommendation process. During this stage, the system is given a query and, drawing on information about a user’s behaviors and history on the platform, narrows YouTube’s larger corpus down to a small group of a couple hundred videos considered broadly of interest to the user. The candidate generation network relies on collaborative filtering to produce these personalized results. The ranking network is then tasked with delivering a select number of best recommendations to the user. It does this by assigning each candidate a score based on information about the video (e.g. length) and information about the user (e.g. whether they watch long videos or short videos). The videos that are assigned the highest scores are then ranked and displayed to the user. During the ranking stage, the model can draw on richer information about a specific video and a user’s relationship to that video than it can during candidate generation, because it only needs to consider the small group of candidates. In simplified terms, the ranking function can be thought of as expected watch time per impression. Researchers from Google have asserted that this formula promotes more “relevant” videos to users than formulas that emphasize click-through rate, as click-through-rate functions often result in the promotion of clickbait.39
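The two-stage flow described above can be sketched in miniature. This is a simplified stand-in, not the production networks: a dot-product similarity over random vectors replaces the learned candidate-generation model, and a precomputed expected-watch-time table replaces the ranking network. All names and numbers are illustrative:

```python
import random

def generate_candidates(user_vec, video_vecs, k=200):
    # Stage 1 (candidate generation): score every video against the user
    # vector and keep the k highest-scoring videos from the full corpus.
    scores = {vid: sum(u * v for u, v in zip(user_vec, vec))
              for vid, vec in video_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def rank_candidates(candidates, expected_watch_time):
    # Stage 2 (ranking): re-order the shortlist by a richer per-video
    # score — here, expected watch time per impression.
    return sorted(candidates, key=lambda vid: expected_watch_time[vid],
                  reverse=True)

random.seed(0)
DIM = 16
videos = {f"video_{i}": [random.gauss(0, 1) for _ in range(DIM)]
          for i in range(1000)}                       # toy 1,000-video corpus
user = [random.gauss(0, 1) for _ in range(DIM)]       # toy user vector
watch_time = {vid: random.expovariate(1 / 300) for vid in videos}  # seconds

shortlist = generate_candidates(user, videos, k=200)
recommendations = rank_candidates(shortlist, watch_time)[:10]
```

The key design point survives even in this toy: the cheap first stage makes the expensive second stage tractable, because the ranker only ever scores a couple hundred videos rather than the whole corpus.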
This two-stage model enables YouTube to make personalized video recommendations from a large database of content.40 To make recommendations, these deep neural networks average learned representations of a user’s search history and watch history. The system also considers additional data points, including a user’s geographic region, device, gender, logged-in status, and age.41 YouTube’s recommendation system has undergone a number of changes over the past few years, and it is unclear how much of Google Brain’s work remains in use.
YouTube evaluates and refines this recommendation system using a range of offline metrics, such as precision, ranking loss, and recall.42 The company also runs A/B tests43 during live experiments, in which researchers can measure small changes in click-through rate, watch time, and other user engagement metrics.44 This method of testing is considered the gold standard for evaluating the effectiveness of a recommendation system, as offline testing cannot fully capture how users respond to live recommendations.45 However, the company has not provided any insight into the results of such tests or how they contribute to its assessment of the recommendation system’s effectiveness. The examples the model is trained on consist of more than just videos that the system recommended: they include videos from all YouTube watches, including videos embedded on other sites, so as to ensure the recommendation system can surface new content (especially content that may not be widely viewed yet, but is still considered of interest to a user).46 The company noted that it has been able to identify other mechanisms by which users discover new content and integrate them into the model, although it did not specify what these mechanisms are.47
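Offline metrics like precision and recall can be computed by holding out part of a user’s actual watch history and checking how much of it the system’s top-k list recovers. A minimal sketch — the video IDs and the held-out set are made up for illustration:

```python
def precision_recall_at_k(recommended, relevant, k):
    # Precision@k: fraction of the top-k recommendations the user actually
    # watched. Recall@k: fraction of the user's watched videos that the
    # top-k list managed to surface.
    top_k = recommended[:k]
    hits = sum(1 for vid in top_k if vid in relevant)
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

recommended = ["v1", "v2", "v3", "v4", "v5"]   # system's ranked output
relevant = {"v2", "v5", "v9"}                  # held-out videos the user watched
p, r = precision_recall_at_k(recommended, relevant, k=5)
# p = 2/5 = 0.4, r = 2/3 ≈ 0.667
```

Offline numbers like these are cheap to compute over logged data, which is why they are used for iteration — but they only measure agreement with past behavior, which is exactly the gap that live A/B tests are meant to close.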
This new two-stage algorithm also created new problems. It often forced users into specific content niches by consistently recommending content that was similar in nature to videos the user had previously watched. As a result, users got bored. This prompted researchers at Google Brain to explore whether they could maintain user engagement by guiding them to content in other sectors of the platform, rather than just in existing interest buckets. These questions led the company to test a new algorithm, which incorporated a type of artificial intelligence known as reinforcement learning.
Reinforcement learning is used to train machine-learning models to make a certain sequence of decisions using a trial and error process that features rewards and penalties.48 The team called this new algorithm Reinforce. Its primary goal was to predict which video recommendations would broaden the range of subjects that a user would watch content on, pushing users to consume more content and maximizing user engagement over time. A YouTube spokesperson also said that Reinforce was intended to improve the accuracy of recommendations on the platform by mitigating the recommendation system’s bias toward popular content.49 Google considered the introduction of Reinforce a massive success. Sitewide views across YouTube increased by almost 1 percent, a staggering amount given the platform’s size. This gain translated into millions more hours of watch time and a significant bump in the company's ad revenue.50 As YouTube’s algorithm has evolved, the company has shared that the fundamental components of this model remain intact today.51
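At its core, REINFORCE is a policy-gradient method: sample an action from a softmax policy, observe a reward, and nudge the policy toward actions that paid off. The sketch below applies it to a tiny three-armed bandit standing in for candidate “topics”; the reward values, learning rate, and step count are invented for illustration and bear no relation to YouTube’s actual system:

```python
import math
import random

def softmax(logits):
    # Convert unnormalized scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def reinforce_step(logits, action, reward, lr=0.1):
    # REINFORCE update for a softmax policy: raise the log-probability of
    # the chosen action in proportion to its reward. The gradient of
    # log-softmax is (indicator - probability) for each logit.
    probs = softmax(logits)
    return [logit + lr * reward * ((1.0 if i == action else 0.0) - p)
            for i, (logit, p) in enumerate(zip(logits, probs))]

random.seed(1)
logits = [0.0, 0.0, 0.0]        # three candidate "topics", initially uniform
true_reward = [0.2, 1.0, 0.4]   # hypothetical long-run engagement per topic

for _ in range(2000):
    probs = softmax(logits)
    action = random.choices(range(3), weights=probs)[0]  # sample from policy
    logits = reinforce_step(logits, action, true_reward[action])

# The policy should concentrate on topic 1, which yields the highest reward.
```

The trial-and-error character is visible here: nothing tells the learner in advance which topic pays best; the policy discovers it by sampling and reinforcement, which is what lets such systems optimize long-term engagement rather than immediate clicks.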
According to a YouTube spokeswoman, in late 2016, the company adopted social responsibility as a core value for the company.52 During this time, the recommendation system was altered so it considered inputs such as how many times a video was shared, liked, and disliked.53 These changes were introduced amidst growing pressure on internet platforms to be more proactive in their efforts to combat harmful content, such as extremist propaganda, disinformation and misinformation, and content unsafe for children.54 Recently, the company has provided more detail around this concept of responsibility, outlining that it consists of the four Rs of responsibility:55
- Removing content that violates the platform’s Community Guidelines as quickly as possible
- Raising up authoritative information sources, especially during breaking news moments
- Reducing the spread of content that comes close to violating, but does not violate the platform’s Community Guidelines (known as borderline content)
- Rewarding trusted creators
Also in 2016, a YouTube spokesperson stated that the recommendation system had changed significantly, and was no longer geared to optimize for watch time. Rather, the system began to emphasize satisfaction to ensure users were happy with the content they were viewing,56 and so that the recommendation system would suggest clickbait videos less often.57 This new metric aimed to balance watch time with factors such as likes, dislikes, shares, and satisfaction surveys that the company prompts users to fill out after they finish certain videos on the platform.58 According to the company, it receives millions of survey responses every week.59
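One way to picture this “satisfaction” blend is as a weighted score over the signals YouTube names — watch time, likes, dislikes, and survey responses. The weights, the linear form, and the function itself are purely hypothetical; YouTube has not published its formula:

```python
def satisfaction_score(watch_seconds, liked, disliked, survey_rating,
                       w_time=0.001, w_like=0.5, w_survey=0.3):
    # Hypothetical blend of engagement and satisfaction signals.
    # All weights are invented for illustration.
    score = w_time * watch_seconds          # raw watch time still matters
    if liked:
        score += w_like
    if disliked:
        score -= w_like
    if survey_rating is not None:           # 1-5 survey response, when available
        score += w_survey * (survey_rating - 3)
    return score

# A fully-watched video the user liked and rated highly outscores one the
# user watched just as long but disliked and rated poorly.
happy = satisfaction_score(600, liked=True, disliked=False, survey_rating=5)
unhappy = satisfaction_score(600, liked=False, disliked=True, survey_rating=1)
```

The point of such a blend is that two videos with identical watch time can receive very different scores once explicit satisfaction signals are factored in, which is precisely the rebalancing the 2016 change describes.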
Further, between 2017 and 2019, the company introduced two new internal metrics for evaluating how videos are performing on the site. The first metric monitors the total amount of time users spend on YouTube, including time spent posting and reading comments. The second metric, known as “quality watch time,” aims to identify content that goes beyond just retaining a user’s attention; the company has not explained what this involves. In calculating these two new metrics, YouTube aimed to reward content considered generally acceptable to YouTube users and advertisers, and to push back on criticisms that it uses its algorithmic recommendation system to capture user attention and make the platform more addictive.60 A recent Pew Research Center study indicates, however, that the algorithmic system still seeks to reel users in and get them to consume more content. The study also found that the longer a user spends watching videos on the platform, the longer and more popular the content the system recommends. In the initial stages of the Pew study, the recommendation engine suggested videos that were on average nine minutes and 31 seconds in length; during the final stages, the recommended videos were on average 15 minutes long.61
Since 2017, YouTube has introduced changes to its recommendation algorithm designed to promote videos from sources that the company considers to be authoritative,62 such as top and local news channels.63 YouTube determines which sources it considers to be authoritative based on Google News’s assessments of publishers and whether they abide by Google News’s content policies and are producing reliable content. These sources can also include organizations such as public health institutions.64 A publisher’s status as an authoritative source is not dependent on how many subscribers its YouTube channel has.65 If a publisher is considered authoritative, and it has a YouTube channel, then YouTube will promote this publisher’s related content when a user searches for information or news related queries.66 This is intended to deter the spread of misinformation and conspiracy theories on the platform.67
In June 2019, the company also began promoting authoritative sources in cases in which a user has consumed multiple videos that are close to violating the platform’s Community Guidelines,68 such as conspiracy theory videos,69 as well as queries related to election news.70 YouTube also promotes authoritative sources in its top news and breaking news shelves for more recent events.71 The company shared that it could expand this to other categories of content, such as entertainment. However, this comes with trade-offs: It is challenging to define authoritative sources across more subjective verticals, as these determinations are based on personal preference and taste.72
In January 2019, YouTube also altered its recommendation algorithm to reduce suggestions of “borderline” videos, such as harmful content and misinformation.73 According to YouTube, this change resulted in a 50 percent drop in watch time for this type of content.74 However, this data has not been verified by independent researchers.75 The company implemented these changes using machine learning, as well as human evaluators and experts across the United States. These evaluators are responsible for providing input on the quality of the videos they review.76 This data is then used to train the machine-learning recommendation systems.77 According to YouTube, the human evaluators themselves are trained using precise guidelines.78
The company stated that it would roll out these efforts to other countries to minimize recommendations of harmful content as its systems become more accurate in implementing this rule in the United States.79 According to a company blog post, this change was anticipated to impact less than 1 percent of the videos on YouTube and would only affect recommendations of these borderline videos, not the availability of this content on the platform as a whole. This means that users who search for or subscribe to channels that post such content would still be able to view these videos.80 According to YouTube, the company took this approach in order to adequately safeguard free expression on the platform.81
YouTube collects both explicit and implicit feedback from its users. Explicit feedback is collected through the thumbs up and thumbs down features in the product, as well as product surveys82 that the company runs to find out if a user enjoyed a video that was recommended to them.83 Some of the implicit data points that the company collects and uses include user activity on YouTube, Google, and Chrome,84 and user watch history.85 YouTube relies on this implicit feedback to inform recommendations, as well as to train the recommendation models.86 In such training processes, the fact that a user has finished a video is a positive signal.87
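The training signal described here — completed watches as positives — can be pictured as a simple labeling rule applied to logged events. The field names, the 90 percent completion threshold, and the handling of ambiguous clicks are all illustrative assumptions, not YouTube’s documented scheme:

```python
def training_label(event):
    # Turn raw implicit feedback into a training label: a (nearly)
    # completed watch is a positive example; an impression the user
    # scrolled past without clicking is a negative. A video the user
    # clicked but abandoned is ambiguous, so it is excluded here.
    if event["watch_fraction"] >= 0.9:   # finished the video → positive
        return 1
    if not event["clicked"]:             # shown but ignored → negative
        return 0
    return None                          # clicked then abandoned → excluded

events = [
    {"clicked": True,  "watch_fraction": 0.95},  # finished
    {"clicked": False, "watch_fraction": 0.0},   # ignored impression
    {"clicked": True,  "watch_fraction": 0.2},   # abandoned early
]
labels = [training_label(e) for e in events]     # [1, 0, None]
```

Treating completion rather than clicks as the positive signal is what distinguishes this kind of implicit-feedback training from the click-optimizing setup the company moved away from in 2012.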
As described above, YouTube provides very little transparency and accountability around how its recommendation system is structured, how it operates, and how it makes decisions.88 Research has suggested that promoting awareness of the use of algorithmic tools and enabling users to control their own experiences on a platform are fundamental steps in building trust with users. This lack of transparency from YouTube therefore limits the agency users have over their own experiences.89
Controversies Related to YouTube’s Recommendation System
In addition, the lack of transparency and accountability from YouTube is also concerning considering the level of controversy and backlash around this system’s recommendations. It has also made evaluating these criticisms difficult, as most studies are operating with insufficient data.90 This makes it hard to draw reliable conclusions on how YouTube’s recommendation system shapes user perceptions and behaviors. In addition, a lack of transparency around the company’s training datasets also makes it challenging to run assessments to ensure the system is providing the same level of utility to its diversity of users (e.g. users of different genders and ethnicities),91 a common problem associated with defining fairness in recommender systems.92 This is important as such assessments can identify potential instances of bias within recommender systems.
Over the past decade, the company has come under particular criticism for enabling its recommendation engine to suggest content to users containing misleading or false information and conspiracy theories. For example, after a fire broke out in Paris’s Notre-Dame cathedral in April 2019, a number of conspiracy theories began circulating on the platform, claiming that the fire was an act of terrorism, and promoting Islamophobic rhetoric. One vlogger on YouTube also claimed that the French government had started the fire as a covert operation, and that French President Emmanuel Macron could not be trusted. The video was viewed almost 50,000 times overnight,93 and this expanded to over 100,000 views soon after.94 Shortly after the video was posted, it was also monetized through the use of advertisements.95 YouTube’s recommendation system continued to recommend the video despite changes implemented by the company in January 2019, which aimed to limit algorithmic promotion of conspiracy theory content and instead promote content from authoritative sources.96 Although YouTube has stated that conspiracy theory videos make up less than 1 percent of all content on the platform, this is still a staggering amount of content, and the problem is compounded whenever the recommendation algorithm promotes this content.
YouTube has also introduced algorithmically-recommended information panels with links to third-party sources that offer to fact-check video content. These panels appear next to content featuring common conspiracies, as well as content posted by some foreign state-run media outlets.97 However, journalists found that the panels the company appended to livestreams of the Notre-Dame fire rerouted users to information about the 9/11 terrorist attacks in the United States instead.98
Similarly, in 2017, academics and other commentators spotlighted the role of the company’s search and recommendation algorithms in promoting disinformation during the 2016 U.S. presidential election. Based on these findings, prominent sociologist and technology critic Zeynep Tufekci termed these algorithms “misinformation engines.”99
In addition, YouTube has faced particular criticism for creating a “rabbit hole” effect, in which the algorithm delivers personalized recommendations that prompt users to consume harmful or radical content100 that they did not originally seek out.101 In 2019, Mozilla began publishing anecdotes of how anonymous users encountered such rabbit holes on the platform under a project known as YouTube Regrets. The project aimed to push YouTube to let independent researchers study its algorithmic decision-making systems.102 In September 2019, YouTube representatives met with the Mozilla Foundation to discuss the issues raised in the campaign.103 In addition, a YouTube spokesman told CNET that while the company welcomed research on these issues, it had not seen the videos, screenshots, or data that Mozilla was using and was therefore unable to review Mozilla’s claims.104
The Media Manipulation Initiative at Data & Society Research Institute spearheaded a research project exploring the far-right in the United States and Germany, through which they examined YouTube’s recommendation system. Their research uncovered that the system concerningly combined communities associated with Fox News and GOP accounts with communities associated with conspiracy theory channels, such as those belonging to far-right commentator Alex Jones. Similarly, the researchers found that the recommendation system categorized communities associated with the religious right together with communities associated with the international right-wing. The researchers outlined that these categorizations could create a rabbit hole effect, because if a user is consuming content produced by conservative groups on the platform, they are only a few clicks away from receiving recommendations for content produced by far-right extremist groups.105 In addition, the researchers raised concerns that this grouping also generates a filter bubble on the platform, in which users are prevented from accessing information that may challenge their perspectives or broaden their horizons because of a system’s predictions on what they will like.106
Several researchers, including former YouTube employee-turned-critic Guillaume Chaslot, have argued that it is in the company's business interest to promote such polarizing and fringe videos and channels, as they drive engagement and greater watch times.107 Further, critics such as Chaslot have suggested that the recommendation system is biased toward promoting divisive, sensational, and conspiratorial content,108 perhaps because the system has learned that such content is engaging.109 Given the vast number of users who consume recommended content, this raises significant concerns about the platform serving as a radicalization pipeline.110
YouTube executives have contested these notions, claiming that the company considers more than just watch time when making content recommendations, and that because advertisers do not want their content to appear alongside such harmful content, there is no financial interest in promoting these videos.111 The company also stated that after a user consumes a video, the recommendation system does not account for whether the content of the video was less or more extreme, and it therefore would not seek to necessarily recommend similar videos. Rather, recommendations in such instances rely on the context and user behavior associated with the consumption of the initial video.112 YouTube also stated that after reviewing internal testing data, it found that on average, users who watch one extreme video are subsequently recommended more moderate content,113 suggesting that the rabbit hole effect toward more radical content is not inevitable.114 However, because the company provides little transparency around the factors that its recommendation system actually does consider, and because the company has not shared any of the data it collected or evaluated during these tests, it is difficult to corroborate these statements.115
It is important to note that some researchers, such as Penn State Political Scientists Kevin Munger and Joseph Phillips, have also pushed back against the notion that the company's recommendation system is a central component of online radicalization.116 They contend that prior studies have not been able to determine that the algorithm has had a noticeable effect on radicalization, and that instead this narrative has been highlighted by policymakers and the media because it offers simple policy prescriptions.117 These researchers instead suggest that radicalization online is similar to radicalization offline, in that it relies on providing an individual with new, radicalizing information at scale, and that the supply of such content (and the ease with which producers can create content on YouTube) caters to this demand.118 Other researchers, such as Data & Society’s Becca Lewis, also point to the role of other algorithms, like the company’s search algorithm, as well as less technical factors, such as online social-networking interactions between creators and audiences, as more significant factors in promoting radicalization. As a result, these researchers suggest that focusing solely on the algorithmic recommendation component of the radicalization process provides a limited view of the overall problem and hinders potential solutions.119
YouTube has also faced significant backlash over how its recommendation system—which makes content recommendations both on YouTube’s main platform as well as on YouTube Kids—interfaces with children. YouTube Kids is a separate video app, curated by humans and algorithms, that features age-tailored content.120 However, children’s videos are some of the most watched categories of content on YouTube’s main platform as well. As a result, producing and reproducing popular children’s content has emerged as a lucrative business on the service, as it enables creators to reap the benefits of advertising dollars.121 According to a Pew Research Center study,122 children’s videos constituted the majority of the 10 most-recommended posts on YouTube,123 and 80 percent of parents said they occasionally let their children watch content on YouTube.124
In 2017, the company was hit with the “ElsaGate scandal,” in which its recommendation system recommended seemingly child-friendly content featuring characters such as Elsa from Disney’s Frozen, but that actually contained inappropriate themes related to topics like violence, sex, drugs, and alcohol.125 In addition, researchers, journalists, and YouTube creators have found that the recommendation engine was suggesting ordinary videos of children that were rampant with sexualized comments as well as comments suggesting timestamps in which children were in sexualized positions.126 These videos were often recommended by YouTube’s system after a user searched for videos of adult women, such as using the term “bikini haul,” raising concerns about the links the system was drawing between searches for adult women and videos of children.127 In response, the company said it would implement changes such as closing down the comments sections of such posts and more rigorously removing posts found to violate its Community Guidelines.128 However, YouTube has said little publicly about how the company’s recommendation system would be altered to prevent the promotion of such content going forward.129
Although YouTube has introduced some technical and policy changes to combat the spread of misinformation, conspiracy theories, and egregious content, numerous reports have circulated, often with employee input,130 claiming that YouTube executives repeatedly ignored warnings and suggestions to alter the company’s recommendation system in a more significant manner.131 This raises concerns that the company is placing profits over ensuring the company’s use of automated tools is responsible, transparent, and accountable.
User Controls Related to YouTube’s Recommendation System
As highlighted, YouTube does not provide significant transparency around how its recommendation system operates, thus limiting the agency users have over their personal YouTube experience. The company does, however, offer its users a limited set of controls over how this system shapes their platform experience.
As previously mentioned, YouTube users can turn off the Autoplay feature.132 In June 2019, the company announced that it was expanding the controls users have over homepage and Up Next recommendations.133 These changes made it easier for signed-in users to manage the recommendations that appear in both of these areas of the platform.134 The changes also let users mark channels so that they no longer appear in their recommendations. However, a user who subscribes to a channel, searches for it, visits its page, or encounters it in the Trending tab will still see its content.135
YouTube’s expanded controls also let users learn why a video may have been suggested to them, particularly on the homepage.136 Users can remove specific videos from their watch history and specific queries from their search history to prevent these data points from informing their recommendations, and they can pause their watch and search histories or clear them altogether.137 Further, users can remove videos, channels, sections, and playlists from their homepage, and indicate that they are not interested in this content or do not want recommendations based on it.138 They can also remove liked videos from their playlists, or edit and delete playlists, to further shape their recommendations.139
If a user wants all of the information YouTube has collected on their behaviors and interests to once again be used to personalize their recommendations, they can clear their “not interested” and “don’t recommend channel” feedback through a tool in their My Activity tab.140
Citations
- Christopher McFadden, "YouTube: Its History and Impact on the Internet," Interesting Engineering, October 4, 2019, source
- Michael Arrington, "Google Has Acquired YouTube," TechCrunch, October 9, 2006, source
- Joan E. Solsman, "Mozilla Is Sharing YouTube Horror Stories To Prod Google For More Transparency," CNET, October 15, 2019, source
- Maryam Mohsin, "10 Youtube Stats Every Marketer Should Know in 2020 [Infographic]," Oberlo, last modified November 11, 2019, source
- "Youtube.com Competitive Analysis, Marketing Mix and Traffic," Alexa Internet, source
- Kevin Roose, "The Making of a YouTube Radical," New York Times, June 8, 2019, source
- Ben Popken, "As Algorithms Take Over, YouTube's Recommendations Highlight A Human Problem," NBC News, April 19, 2018, source
- Adrienne LaFrance, "The Algorithm That Makes Preschoolers Obsessed With YouTube," The Atlantic, July 25, 2017, source
- Casey Newton, "How YouTube Perfected The Feed," The Verge, August 30, 2017, source
- Popken, "As Algorithms".
- Matt Elliott, "How To Turn Off YouTube's New Autoplay Feature," CNET, March 20, 2015, source
- Jack Nicas, "How YouTube Drives People to the Internet's Darkest Corners," Wall Street Journal, February 7, 2018, source
- Aaron Smith, Skye Toor, and Patrick Van Kessel, "Many Turn to YouTube for Children's Content, News, How-To Lessons," Pew Research Center, last modified November 7, 2018, source
- “The RDR Corporate Accountability Index: Transparency and Accountability Standards for Targeted Advertising and Algorithmic Systems — Pilot Study and Lessons Learned,” Ranking Digital Rights, March 2020, rankingdigitalrights/pilot-report-2020
- "Manage Your Recommendations and Search Results," YouTube Help, source
- Jonas Kaiser and Adrian Rauchfleisch, "Unite the Right? How YouTube's Recommendation Algorithm Connects The U.S. Far-Right," D&S Media Manipulation: Dispatches from the Field (blog), entry posted April 11, 2018, source
- Caroline O'Donovan et al., "We Followed YouTube's Recommendation Algorithm Down The Rabbit Hole," BuzzFeed News, January 24, 2019, source
- "Manage Your," YouTube Help, “The RDR Corporate Accountability Index: Transparency and Accountability Standards for Targeted Advertising and Algorithmic Systems — Pilot Study and Lessons Learned,” Ranking Digital Rights, March 2020, rankingdigitalrights/pilot-report-2020
- Paul Covington, Jay Adams, and Emre Sargin, "Deep Neural Networks for YouTube Recommendations," Proceedings of the 10th ACM Conference on Recommender Systems, ACM, New York, NY, USA, 2016, source
- Roose, "The Making".
- Covington, Adams, and Sargin, "Deep Neural".
- Roose, "The Making".
- Mark Bergen and Lucas Shaw, "To Answer Critics, YouTube Tries A New Metric: Responsibility," The Star, April 15, 2019, source
- Roose, "The Making".
- Roose, "The Making".
- Bergen and Shaw, "To Answer."
- Roose, "The Making".
- Bergen and Shaw, "To Answer."
- Roose, "The Making".
- Michael Learmonth, "YouTube's Video Views Are Falling — By Design," AdAge, May 14, 2012, source
- Newton, "How YouTube".
- Bergen and Shaw, "To Answer."
- "Neural Network," DeepAI, source
- Alex Woodie, "Inside Sibyl, Google's Massively Parallel Machine Learning Platform," Datanami, last modified July 17, 2014, source
- Margaret Rouse and Matthew Haughn, "Supervised Learning," Search Enterprise AI, source
- Newton, "How YouTube".
- Newton, "How YouTube".
- Newton, "How YouTube".
- Covington, Adams, and Sargin, "Deep Neural".
- Covington, Adams, and Sargin, "Deep Neural".
- Covington, Adams, and Sargin, "Deep Neural".
- In this context, precision can be understood as how useful search results are. Recall can be understood as how complete search results are. Ranking loss functions winnow down the list of potential recommendations.
- A/B testing compares two versions of a variable by testing a user’s response to variable A against variable B, and establishing which of the two variables is more effective.
- Covington, Adams, and Sargin, "Deep Neural".
- Ekstrand et al., "All The Cool".
- Alexis C. Madrigal, "How YouTube's Algorithm Really Works," The Atlantic, November 8, 2018, source
- Covington, Adams, and Sargin, "Deep Neural".
- Błażej Osiński and Konrad Budek, "What Is Reinforcement Learning? The Complete Guide," Deep Sense AI, last modified July 5, 2018, source
- Roose, "The Making".
- Roose, "The Making".
- "[YouTube Recommendations] Ask us anything! YouTube Team will be here Friday February 8th.," YouTube Help, last modified February 8, 2019, source
- Mark Bergen, "YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant," Bloomberg, April 2, 2019, source
- Bergen, "YouTube Executives".
- YouTube, "The Four Rs of Responsibility, Part 1: Removing Harmful Content," Official YouTube Blog, entry posted September 3, 2019, source
- YouTube, "Susan Wojcicki: Preserving Openness Through Responsibility," Official YouTube Blog, entry posted August 27, 2019, source
- Paul Lewis, "'Fiction is Outperforming Reality': How YouTube's Algorithm Distorts Truth," The Guardian, February 2, 2018, source
- YouTube, "Continuing Our Work To Improve Recommendations On YouTube," Official YouTube Blog, entry posted January 25, 2019, source
- Popken, "As Algorithms".
- Bergen, "YouTube Executives".
- Bergen and Shaw, "To Answer."
- Smith, Toor, and Kessel, "Many Turn," Pew Research Center.
- YouTube, "The Four Rs of Responsibility, Part 2: Raising Authoritative Content and Reducing Borderline Content and Harmful Misinformation," Official YouTube Blog, entry posted December 3, 2019, source
- YouTube, "Building a Better News Experience On YouTube, Together," Official YouTube Blog, entry posted July 9, 2018, source
- Conversation with representatives from YouTube on March 2, 2020.
- Conversation with representatives from YouTube on March 2, 2020.
- Conversation with representatives from YouTube on March 2, 2020.
- Roose, "The Making".
- Roose, "The Making".
- YouTube, "Our Ongoing Work To Tackle Hate," Official YouTube Blog, entry posted June 5, 2019, source
- YouTube, "How YouTube Supports Elections," Official YouTube Blog, entry posted February 3, 2020, source
- Conversation with representatives from YouTube on March 2, 2020.
- Kevin Roose, "YouTube's Product Chief On Online Radicalization, Algorithmic Rabbit Holes," SF Gate, April 6, 2019, source
- YouTube, "Continuing Our Work," Official YouTube Blog.
- YouTube, "Our Ongoing," Official YouTube Blog.
- Solsman, "Mozilla Is Sharing".
- YouTube, "Continuing Our Work," Official YouTube Blog.
- YouTube, "Continuing Our Work," Official YouTube Blog.
- "External Evaluators and Recommendations," YouTube Help, source
- YouTube, "Continuing Our Work," Official YouTube Blog.
- YouTube, "Continuing Our Work," Official YouTube Blog.
- YouTube, "Continuing Our Work," Official YouTube Blog.
- Covington, Adams, and Sargin, "Deep Neural".
- Newton, "How YouTube".
- "Manage Your," YouTube Help.
- Kaiser and Rauchfleisch, "Unite the Right?," D&S Media Manipulation: Dispatches from the Field (blog).
- Covington, Adams, and Sargin, "Deep Neural".
- Covington, Adams, and Sargin, "Deep Neural".
- “The RDR Corporate Accountability Index,” Ranking Digital Rights.
- Jaron Harambam, Natali Helberger, and Joris van Hoboken, "Democratizing Algorithmic News Recommenders: How To Materialize Voice In A Technologically Saturated Media Ecosystem," Philosophical Transactions of The Royal Society A Mathematical Physical and Engineering Sciences, October 2018, source
- Chris Stokel-Walker, "YouTube's Deradicalization Argument Is Really a Fight About Transparency," FFWD (blog), entry posted December 29, 2019, source
- Ekstrand et al., "All The Cool".
- Ekstrand et al., "All The Cool".
- Jesselyn Cook, "YouTube And Google Algorithms Promoted Notre Dame Conspiracy Theories," The Huffington Post, April 17, 2019, source
- Cook, "YouTube And Google".
- Cook, "YouTube And Google".
- Cook, "YouTube And Google".
- O'Donovan et al., "We Followed".
- Cook, "YouTube And Google".
- Lewis, "'Fiction is Outperforming".
- Roose, "The Making".
- Roose, "YouTube's Product"
- "YouTube Regrets," Mozilla Foundation, source
- Email conversation with representative from the Mozilla Foundation.
- Solsman, "Mozilla Is Sharing".
- Kaiser and Rauchfleisch, "Unite the Right?," D&S Media Manipulation: Dispatches from the Field (blog).
- Pariser, The Filter Bubble: What the Internet is Hiding From You.
- Cook, "YouTube And Google".
- Lewis, "'Fiction is Outperforming".
- Lewis, "'Fiction is Outperforming".
- O'Donovan et al., "We Followed".
- Roose, "YouTube's Product"
- Roose, "YouTube's Product"Kevin Roose, "YouTube's Product Chief on Online Radicalization and Algorithmic Rabbit Holes," New York Times, March 29, 2019, source
- Roose, "The Making".
- Roose, "YouTube's Product"
- Roose, "The Making".
- Charlie Warzel, "Big Tech Was Designed to Be Toxic," New York Times, April 3, 2019, source
- Paris Martineau, "Maybe It's Not YouTube's Algorithm That Radicalizes People," WIRED, October 23, 2019, source
- Martineau, "Maybe It's",
- Becca Lewis, "All of YouTube, Not Just the Algorithm, is a Far-Right Propaganda Machine," FFWD (blog), entry posted January 8, 2020, source
- LaFrance, "The Algorithm".
- LaFrance, "The Algorithm".
- Smith, Toor, and Kessel, "Many Turn," Pew Research Center.
- Madrigal, "How YouTube's".
- Madrigal, "How YouTube's".
- Russell Brandom, "Inside Elsagate, The Conspiracy-Fueled War on Creepy YouTube Kids Videos," The Verge, December 8, 2017, source
- Natasha Lomas, "YouTube Under Fire For Recommending Videos Of Kids With Inappropriate Comments," TechCrunch, February 18, 2019, source
- Julia Alexander, "YouTube Still Can't Stop Child Predators In Its Comments," The Verge, February 19, 2019, source
- YouTube, "5 Ways We're Toughening Our Approach To Protect Families On YouTube and YouTube Kids," Official YouTube Blog, entry posted November 22, 2017, source
- Alexander, "YouTube Still".
- Bergen, "YouTube Executives".
- Bergen, "YouTube Executives".
- Elliott, "How To Turn".
- YouTube, "Giving You More Control Over Your Homepage And Up Next Videos," Official YouTube Blog, entry posted June 26, 2019, source
- YouTube, "Giving You More," Official YouTube Blog.
- YouTube, "Giving You More," Official YouTube Blog.
- YouTube, "Giving You More," Official YouTube Blog.
- " Manage Your," YouTube Help.
- "Manage Your," YouTube Help.
- "Manage Your," YouTube Help.
- "Manage Your," YouTube Help.