Spandana Singh
Policy Analyst, Open Technology Institute
Over the past two decades, the amount of information available on the internet has grown exponentially. As the sources of this information have likewise grown in number and reach, however, users have struggled to identify which ones are reliable and relevant.
A number of internet platforms, including search engines and social media networks, assert that they can use artificial intelligence to surface relevant content for users. Using proprietary algorithmic tools, these platforms have established complex processes for curating and ranking the content they show users, with the aim of providing personalized search results and news feeds. They assert that this promotes content users will find relevant and meaningful. However, these practices also drive these companies’ bottom lines, and they can increase the risk that individuals are confined to “filter bubbles” or prevented from encountering new content that matches their evolving interests.
Although internet platforms have heralded the introduction of these curation and ranking algorithms as a positive shift, their proliferation has raised a number of concerns regarding fairness, accountability, and transparency in algorithmic decision-making. These algorithms have become integral to the operations of many internet platforms, enabling companies to act as gatekeepers of online speech who exercise significant editorial judgment over information flows. Despite the ubiquity of these curation and ranking algorithms, many users remain unaware that platforms use algorithms to shape their online experiences. As a result, many individuals willingly, and unwittingly, accept the role that internet platforms and black-box algorithms play in deciding what their online experience will be.
Further, internet platforms have failed to provide adequate transparency around how these tools are developed and deployed, and how they shape the user experience. Platforms also generally offer users and content creators only a limited set of controls over how they engage with these algorithmic tools.
In addition, the widespread deployment of algorithmic curation and ranking tools has raised a number of concerns regarding algorithmic bias and accountability. By design, these tools prioritize certain factors and characteristics over others when curating and ranking content. The developers who select these factors do so based on their own priorities and their assumptions about what users value in their online experience.
In our new report, New America’s Open Technology Institute (OTI) explores how three search engines (Google, Bing, and DuckDuckGo) and three internet platforms with news feeds (Facebook, Twitter, and Reddit) use algorithmic curation and ranking practices, and what challenges these practices raise. The report also offers recommendations for how internet platforms, policymakers, and researchers can promote greater fairness, accountability, and transparency around these algorithmic decision-making practices.
This is the second in a series of four reports that will explore how internet platforms are using automated tools to shape the content we see and influence how this content is delivered to us.