Spandana Singh
Policy Analyst, Open Technology Institute
Being an internet user today means encountering a constant stream of digital advertisements. Often, these are directly tailored to you based on your digital footprint.
The digital advertising industry as we know it has grown from an online version of the traditional advertising system, featuring ad agencies and publishers, to a data-driven environment in which internet platforms such as Google and Facebook take center-stage.
Many companies assert that advertising can enhance users’ lives, and that by showing users relevant and helpful ads they are improving users’ platform experience. However, platforms also use targeted content such as ads to maximize the amount of time individuals spend on their services, increasing the amount of content users consume, the number of ads they view, and the number of items they may purchase. This in turn increases the platforms’ revenue. In some instances, however, the platforms’ use of algorithmic tools can lead to discriminatory and other harmful outcomes.
The online advertising industry has seen significant success for a range of reasons. First, platforms have increasingly deployed mechanisms such as web tracking, location tracking, cross-device tracking, and browser fingerprinting to collect and monetize internet users’ personal and behavioral data. In addition, the introduction of new targeting tools has enabled advertisers to segment and select their audiences along very specific lines. These targeting tools categorize consumers using a range of data points, which can include demographic characteristics, behavioral information, and personally identifiable information (PII).
Over the past decade, the digital advertising industry has increasingly adopted automated tools to streamline the targeting and delivery of advertisements. Although this has enabled advertisers to reach customers who are more likely to be interested in their products and services, it has also exacerbated discriminatory and harmful outcomes and reinforced pre-existing societal biases. Given that the digital advertising ecosystem features an array of ads for employment, financial services, and housing, these online discriminatory outcomes often have real offline impacts. For example, if an algorithm relies on historical data to determine which users should receive an employment ad for a traditionally male-dominated field, the ad may not be delivered to women and minorities who have historically been underrepresented in that field. These practices have therefore raised concerns around fairness, accountability, and transparency in algorithmic decision-making.
In our new report, New America’s Open Technology Institute (OTI) explores how three internet platforms—Google, Facebook, and LinkedIn—use algorithmic tools to enable ad targeting and delivery, and the challenges associated with these practices. The report offers recommendations on how internet platforms, civil society, and researchers can promote greater fairness, accountability, and transparency around these algorithmic decision-making practices. It also provides recommendations for U.S. policymakers, though because the First Amendment limits the extent to which the U.S. government can direct how internet platforms decide what content to permit on their sites, the recommendations for policymaker action are more limited in scope. The recommendations presented in this report include:
Internet platforms that offer digital advertising services should:
Civil society organizations and researchers should:
U.S. policymakers should:
This is the third in a series of four reports that will explore how internet platforms are using automated tools to shape the content we see and influence how this content is delivered to us.