Introduction
Today, personalized recommendation systems, particularly those based on machine learning (and the choice architecture decisions associated with them),1 have come to govern many internet platforms, including social media, e-commerce, and media streaming services. Recommendation systems are algorithmic tools that internet platforms use to identify and recommend content, products, and services that may be of interest to their users. These systems are responsible for recommending a range of content, including friends, posts, ads, news articles, trending topics, items to purchase, jobs, and more. In doing so, they can influence user interests, opinions, and behaviors, as well as social group formation.2
Many internet platforms assert that these systems enhance users’ experiences through personalized and relevant recommendations. However, in deploying these systems, internet platforms also seek to retain user attention on their services. This translates to significant financial benefits for the companies, as they can then target these users with advertisements and recommend further content to consume or items to purchase.3 In addition, definitions of relevance vary across platforms and are largely based on what a platform believes a user is interested in, as determined by its data collection and inference practices.
Widely used by internet platforms today, recommendation systems exert significant influence over how users engage with—and are influenced by—the online sphere.4 For example, recommender systems have the power to influence product purchases. They can also determine what content—such as which news articles—a user sees. This power has raised concerns that algorithmic recommendation systems may, intentionally or unintentionally, create echo chambers in which users have a homogenized experience and engage only with certain viewpoints, or only with popular or trending topics.5
In addition, researchers have found that these recommender systems create a number of concerning outcomes. Notably, these include reinforcing societal biases and augmenting harmful perspectives, such as those of extremists and conspiracy theorists. Internet platforms that deploy these recommendation systems do not currently provide meaningful transparency and accountability around how these systems are created, how they operate, and how they make decisions.6 This makes it very difficult to analyze and combat the problematic recommendations that come from these systems.7 Because of this, critics have called recommendation systems “the biggest threat to societal cohesion on the internet” and a major contributor to offline threats.8
Further, recommender systems now also influence the operations of internet platforms themselves. For example, platforms such as Amazon and Netflix produce films and television shows based on behavioral data collected on their users through these systems. As a result, these recommender systems are not only influencing what existing content users see and engage with online, but they are also shaping the database of options that users have to choose from.9 To the extent that these productions are based on popularity signals, this could create a feedback loop that narrows the choices available to users.
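The feedback-loop dynamic described above can be illustrated with a toy simulation. This is a deliberately simplified sketch—no actual platform works this way, and all item counts and parameters here are hypothetical—but it shows how a system that recommends items in proportion to their popularity, where each recommendation in turn boosts popularity, tends to concentrate attention on a few early leaders:

```python
import random

# Toy "rich get richer" model of a popularity-driven recommender.
# Each round, an item is recommended with probability proportional
# to its current view count, and the recommendation itself adds a view.
# Hypothetical parameters chosen purely for illustration.

random.seed(42)  # fixed seed so the simulation is repeatable

NUM_ITEMS = 10
counts = [1] * NUM_ITEMS  # every item starts with one view

for _ in range(10_000):
    # Sample an item to recommend, weighted by current popularity.
    item = random.choices(range(NUM_ITEMS), weights=counts)[0]
    counts[item] += 1  # the recommendation boosts that item's popularity

top_share = max(counts) / sum(counts)
print(f"Share of all views captured by the most popular item: {top_share:.0%}")
```

Even though all ten items start out identical, the loop typically ends with a handful of items absorbing most of the views—an illustration of how popularity signals, fed back into production and recommendation decisions, can narrow the effective set of options users encounter.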
This report, the final in a series of four, explores how major technology companies rely on automated tools to shape the content we see and engage with online, and how internet platforms, policymakers, and researchers can promote greater fairness, accountability, and transparency around these algorithmic decision-making practices. This report focuses on the use of automated tools to provide recommendations to users. It relies on case studies of three internet platforms—YouTube, Amazon, and Netflix—to highlight the different ways technology companies can deploy algorithmic tools to enable recommendations. These case studies also highlight the challenges associated with these practices.
Editorial disclosure: This report discusses policies by Google (YouTube), which is a funder of work at New America but did not contribute funds directly to the research or writing of this report. New America is guided by the principles of full transparency, independence, and accessibility in all its activities and partnerships. New America does not engage in research or educational activities directed or influenced in any way by financial supporters. View our full list of donors at www.newamerica.org/our-funding.
Citations
- Renee DiResta, "Up Next: A Better Recommendation System," WIRED, April 11, 2018, source
- Renee DiResta, "How Amazon's Algorithms Curated a Dystopian Bookstore," WIRED, March 5, 2019, source
- Spandana Singh, Special Delivery: How Internet Platforms Use Artificial Intelligence to Target and Deliver Ads, February 18, 2020, source
- Zeynep Tufekci, "How Recommendation Algorithms Run the World," WIRED, April 22, 2019, source
- Azadeh Nematzadeh et al., How Algorithmic Popularity Bias Hinders Or Promotes Quality, July 14, 2017, source
- Sinha and Swearingen, The Role.
- Ryan Bigge, "Better Personalized Recommendations Through Transparency and Content Design," Medium (blog), entry posted February 6, 2019, source
- DiResta, "Up Next."
- Nematzadeh et al., How Algorithmic.