Introduction

Since the early 2000s, the amount of information on the internet has grown tremendously. Whether it be news outlets, social media, or e-commerce platforms, the online ecosystem has become a go-to destination for users seeking a variety of information and experiences. At the same time, users have faced a fundamental challenge in identifying credible sources and understanding which of them to use. In order to help users access high-quality, relevant, and accurate information and content, a number of internet platforms rely on proprietary algorithmic tools to curate and rank content for users.

This report focuses on search engines and platforms that offer news feeds, both of which deploy algorithms to identify and curate content for users. Many of these platforms use hundreds of signals to inform these ranking algorithms and deliver users personalized search and news feed experiences.

These algorithms control both the inputs and outputs of the information environment. They evaluate and process incoming information to identify what content is most relevant for users. They then determine which of these outputs a user should see and rank the outputs in a hierarchical manner. In this way, these platforms act as gatekeepers of online speech by exercising significant editorial judgment over information flows.1

Most internet platforms have heralded the introduction of personalized search results and news feeds as a positive—and now integral—feature of their services. Many platforms assert that personalization enables users to access and engage with content that is more relevant and meaningful to them. Personalization features also enable platforms to achieve significant growth and boost revenue through avenues such as advertising.

However, there is a fundamental lack of transparency around how algorithmic curation and ranking decisions are made. This is concerning because these practices can have a variety of negative consequences. In fact, many users are unaware that platforms use such algorithms to shape their online experiences.2 Instead, these users believe that the subjective frame presented by curation and ranking algorithms is representative of reality. As a result, users have grown accustomed to outsourcing judgment, autonomy,3 and decision-making to internet platforms and their opaque algorithms, which decide, based on the platforms' perceptions of user interests and values, what users' online experience should be.4 This disparity in algorithmic awareness and understanding is fostering a new digital divide between individuals who are aware of and understand the impacts of algorithmic decision-making and those who are not.5

As algorithmic tools are increasingly used to curate and rank content on internet platforms, concerns around fairness, accountability, and transparency have grown. In particular, an increasing number of researchers have noted that users lack awareness of algorithmic decision-making practices. In addition, there is a significant lack of transparency from internet platforms regarding how these tools are developed and deployed, and how they shape the user experience. Further, researchers have outlined that users and content creators often lack meaningful controls over, and agency related to, algorithmic decision-making practices.

In addition, although these algorithmic curation and ranking tools remove the need for humans to make individual, manual decisions about millions of pieces of content, they do not remove the need for human editorial judgment in this process, nor do they eliminate bias. The term "bias" in this context does not refer solely to inappropriate preferences based on protected categories such as race or political affiliation. Rather, these tools compile insights from a broad range of weighted signals, and ranking algorithms then analyze this data to prioritize certain forms of content and certain voices over others. These algorithms also incorporate the judgments of the engineers who developed them, particularly regarding what information users are likely to find interesting and meaningful. Further, algorithms can infer correlations in data that reflect societal biases. Often, these algorithms are the product of machine learning "black box" systems: even though developers may know what an algorithm's inputs and outputs are, they may not know exactly how it operates internally. Concerns regarding algorithmic bias and accountability have therefore grown as these decision-making practices have become more prevalent.

This report is the second in a series of four reports that will explore how automated tools are being used by major technology companies to shape the content we see and engage with online, and how internet platforms, policymakers, and researchers can promote greater fairness, accountability, and transparency around these algorithmic decision-making practices. This report focuses on the algorithmic curation and ranking of content in search engine results and in news feeds on internet platforms. It uses case studies on three search engines—Google, Bing, and DuckDuckGo—and on three internet platforms that feature news feeds—Facebook, Twitter, and Reddit—to highlight the different ways algorithmic tools can be deployed by technology companies to curate and rank content, and the challenges associated with these practices.

Editorial disclosure: This report discusses policies by Google, Microsoft, and Facebook, all of which are funders of work at New America but did not contribute funds directly to the research or writing of this report. New America is guided by the principles of full transparency, independence, and accessibility in all its activities and partnerships. New America does not engage in research or educational activities directed or influenced in any way by financial supporters. View our full list of donors at www.newamerica.org/our-funding.

Citations
  1. Herman Tavani, "Search Engines and Ethics," ed. Edward N. Zalta, Stanford Encyclopedia of Philosophy, last modified 2016.
  2. Motahhare Eslami et al., FeedVis: A Path for Exploring News Feed Curation Algorithms, 2015.
  3. Engin Bozdag, "Bias in Algorithmic Filtering and Personalization," Ethics and Information Technology 15, no. 3 (September 2013).
  4. Tavani, "Search Engines," Stanford Encyclopedia of Philosophy.
  5. Anjana Susarla, "The New Digital Divide is Between People Who Opt Out of Algorithms and People Who Don't," The Conversation, last modified April 17, 2019.
