Introduction

Over the past decade, internet platforms have increasingly adopted artificial intelligence and machine-learning tools to shape the content we see and engage with online. These include automated tools for content moderation, ranking of content in news feeds and search results, targeting and delivery of digital advertisements, and recommendation systems.1 These systems are largely invisible to the public. But they are pervasive in our online interactions, and they hold significant influence over how we view and interact with the world, determining everything from what news we encounter and what items we purchase to whose voices we see and engage with the most.

Many technology companies assert that algorithmic content shaping systems are valuable because they provide users with a personalized experience on a platform and enable users to access content the platform deems “relevant” or “useful.” However, by delivering these personalized, algorithmically curated experiences, companies also aim to retain user attention on their services. This translates into significant financial benefits for these companies, as they can target users with advertisements and provide further recommendations for content and purchases. In this way, algorithmically tailored platform experiences are an important revenue generator and a critical component of platforms’ business models.

Further, as outlined in our report series exploring algorithmic content shaping practices in detail, there is a significant lack of transparency and accountability around how these automated tools are created, trained, refined, and deployed. This raises concerns about how internet platforms are safeguarding user rights to freedom of expression and privacy online when using these systems. Our report series includes case studies demonstrating that algorithmic systems can yield harmful, biased, and discriminatory results, which disproportionately impact marginalized and already vulnerable users and communities. For example, researchers have found that digital advertising algorithms can optimize ad delivery in a manner that prevents certain groups of individuals, such as African Americans and women, from receiving employment and housing ads. Researchers have also found that engagement-driven recommendation algorithms can promote troubling content, including hate speech, conspiracy theories, and extremist content.2

This is the final report in our series Holding Platforms Accountable: Online Speech in the Age of Algorithms. It summarizes key themes from the four in-depth reports in the series and from a series of events that OTI hosted on these topics that merit consideration as internet platforms, civil society, researchers, and policymakers continue to explore how to promote greater fairness, accountability, and transparency around these algorithmic decision-making practices.

Citations
  1. Spandana Singh, "Holding Platforms Accountable: Online Speech in the Age of Algorithms," New America's Open Technology Institute, last modified July 22, 2019, source
  2. Spandana Singh, "Why Am I Seeing This?: How Video and E-Commerce Platforms Use Recommendation Systems to Shape User Experiences," March 25, 2020, source
