Feb. 19, 2020
Being an internet user today means encountering a constant stream of digital advertisements. Often, these are directly tailored to you based on your digital footprint.
The digital advertising industry as we know it has grown from an online version of the traditional advertising system, featuring ad agencies and publishers, to a data-driven environment in which internet platforms such as Google and Facebook take center-stage.
Many companies assert that advertising can enhance users’ lives, and that by showing users relevant and helpful ads they are improving users’ platform experience. However, platforms also seek to use targeted content such as ads to maximize the amount of time individuals spend on their services, increasing the amount of content users consume, the number of ads they view, and the number of items they may purchase. This in turn increases the amount of revenue the platforms earn. Yet in some instances, the platforms’ use of algorithmic tools can lead to discriminatory and other harmful outcomes.
The online advertising industry has seen significant success for a range of reasons. First, platforms have increasingly deployed mechanisms such as web tracking, location tracking, cross-device tracking, and browser fingerprinting to collect and monetize internet users’ personal and behavioral data. In addition, the introduction of new targeting tools has enabled advertisers to segment and select their audiences along very specific lines. These targeting tools categorize consumers using a range of data points, which can include demographic characteristics, behavioral information, and personally identifiable information (PII).
Over the past decade, the digital advertising industry has increasingly adopted automated tools to streamline the targeting and delivery of advertisements. Although this has enabled advertisers to reach customers who are more likely to be interested in their products and services, it has exacerbated discriminatory and harmful outcomes and reinforced pre-existing societal biases. Given that the digital advertising ecosystem features an array of ads for employment, financial services, and housing, these online discriminatory outcomes often have real offline impacts. For example, if an algorithm relies on historical data to determine which users should receive an employment ad for a traditionally male-dominated field, the ad may not be delivered to women and minorities who have historically been underrepresented in that field. These practices have therefore raised concerns around fairness, accountability, and transparency in algorithmic decision-making.
In our new report, New America’s Open Technology Institute (OTI) explores how three internet platforms—Google, Facebook, and LinkedIn—utilize algorithmic tools to enable ad targeting and delivery, and the challenges associated with these practices. The report offers recommendations on how internet platforms, civil society, and researchers can promote greater fairness, accountability, and transparency around these algorithmic decision-making practices. Because the First Amendment limits the extent to which the U.S. government can direct how internet platforms decide what content to permit on their sites, the report offers only a limited set of recommendations for action by policymakers. The recommendations presented in this report include:
Internet platforms that offer digital advertising services should:
- Publish comprehensive and comprehensible descriptions of advertising content policies.
- Publish comprehensive and comprehensible descriptions of advertising targeting policies.
- Prohibit targeting based on protected classes and sensitive characteristics that could result in discriminatory outcomes, including characteristics that have been shown to be proxies for protected characteristics.
- Establish and disclose a comprehensive review process for categories of ads that could have significant real-life consequences, such as political, housing, education, employment, and financial services-related ads, before they are permitted to run on a platform.
- Hire independent auditors to conduct regular periodic audits of ad targeting algorithms in order to identify potentially harmful outcomes related to privacy, freedom of expression, freedom of information, and discrimination, and take steps to eliminate or mitigate any harms identified through the audits.
- Hire independent auditors to conduct regular periodic audits of ad delivery and optimization algorithms in order to identify potentially harmful outcomes related to privacy, freedom of expression, freedom of information, and discrimination, and take steps to eliminate or mitigate any harms identified through the audits.
- Empower users with comprehensive tools that help them understand how and why ads are targeted and delivered to them.
- Explain to users why the platform collects, infers, and shares user data.
- Improve user controls so that users can easily manage whether and how data is collected, inferred, and shared, how this data is used, and how it influences the content that they see. This should include the option to delete this data entirely.
- Provide clear labels for sponsored and paid content across all of the platform's products, services, and ad networks.
- Create a publicly available online database of all of the ads that a company has run on its platform.
- Publish a transparency report that provides a granular overview of the platform’s advertising operations across all regions that it operates in.
- Provide meaningful notice to advertisers who have had their ads or accounts flagged or removed, as well as to users who have flagged ads or accounts.
- Offer advertisers who have had their ads or accounts flagged or removed, as well as users who have flagged ads or accounts, a robust appeals process. This appeals process should be offered for categories of ads that could have significant real-life consequences, such as political ads, housing ads, employment ads, and credit ads.
- Fund further research and investigations regarding how the digital advertising ecosystem can be used to reinforce societal biases and discriminatory outcomes, and how to redress these problems.
Civil society organizations and researchers should:
- Conduct further research on how the digital advertising ecosystem can be used to reinforce societal biases and discriminatory outcomes through advertising, and how to redress these problems.
- Collaborate to develop a set of industry-wide best practices for transparency and accountability around algorithmic ad targeting and delivery practices. These best practices should explicitly prioritize the public interest above corporate business models and concerns about trade secrets.
U.S. policymakers should:
- Clarify that all offline anti-discrimination statutes apply in the digital environment.
- Enact rules to require greater transparency from online platforms regarding their ad targeting and delivery practices.
This is the third in a series of four reports that will explore how internet platforms are using automated tools to shape the content we see and influence how this content is delivered to us.