Table of Contents
- Introduction
- The Growth of Today’s Digital Advertising Ecosystem
- The Role of Data in the Targeted Advertising Industry
- The Role of Automated Tools in Digital Advertising
- Concerns Regarding Digital Advertising Policies and Practices
- Case Study: Google
- Case Study: Facebook
- Case Study: LinkedIn
- Promoting Fairness, Accountability, and Transparency Around Ad Targeting and Delivery Practices
Promoting Fairness, Accountability, and Transparency Around Ad Targeting and Delivery Practices
As described in this report, the process of targeting and delivering ads on internet platforms often involves the use of automated tools and algorithmic decision-making. Although internet platforms assert that these practices enable them to deliver more relevant, personalized, and interesting content to users, the use of automated tools in these instances can also result in inequitable and discriminatory outcomes and practices. The use of automated tools in these situations also relies on massive amounts of user data, which requires numerous highly invasive data collection methods. In addition, there is a significant lack of transparency and accountability around how these ad targeting and delivery practices are implemented. Because digital advertising currently serves as the primary source of revenue for many large internet companies, these practices, however concerning, continue.
Going forward, internet platforms, civil society, and researchers should consider the following set of recommendations in order to promote greater fairness, accountability, and transparency around algorithmic decision-making in this space. This section also includes recommendations for U.S. policymakers in this regard. However, because the First Amendment limits the extent to which the U.S. government can direct how internet platforms decide what content to permit on their sites, this report provides limited recommendations for action by policymakers.
Recommendations for Internet Platforms
Internet platforms need to provide greater transparency and accountability around their advertising operations, both at the user level and in the larger advertising ecosystem. In order to achieve this, internet platforms should:
1) Publish comprehensive and comprehensible descriptions of advertising content policies. These advertising content policies should clearly delineate what categories of ads, types of ad content, and accounts are prohibited on the platform. They should also explain what tools and processes (e.g. automated tools) the platform uses to identify ads and accounts that violate its advertising content policies. These advertising content policies should be easy to access and understand. Further, the platform should provide a change log or archive of past advertising content policies so users can see how these policies have been altered over time. Whenever an internet platform changes its advertising content policies, users should be notified.
2) Publish comprehensive and comprehensible descriptions of advertising targeting policies. These ad targeting policies should clearly outline what information the platform and advertisers can use to target ads to users (e.g. location information), which targeting parameters are prohibited on the platform, and what tools and processes (e.g. automated tools) the platform uses to identify ads and accounts that violate its ad targeting policies. Furthermore, these policies should explain what review process, if any, algorithmically generated ad targeting categories undergo. We recommend that algorithmically generated ad targeting categories be reviewed by humans in order to prevent discriminatory and harmful outcomes. All of an internet platform’s ad targeting policies should be easy to access and understand. The platform should provide a change log or archive of past ad targeting policies so users can see how these policies have been altered over time. Whenever an internet platform changes its ad targeting policies, users should be notified.
3) Prohibit targeting based on protected classes and sensitive characteristics that could result in discriminatory outcomes, including characteristics that have been shown to be proxies for protected characteristics. These include categories such as race, ethnicity, religion, disability, and socioeconomic background, and known proxies such as neighborhood of residence. Companies should also establish comprehensive enforcement mechanisms to implement their anti-discrimination policies, as outlined in recommendations 4 through 6 below.
4) Establish and disclose a comprehensive process to review ads in categories that could have significant real-life consequences, such as political, housing, education, employment, and financial services-related ads, before they are permitted to run on a platform. This review process should involve pre-review and approval of ads by the internet platform. During this review process, the internet platform should ensure that all ads and accounts advertising in these categories are in compliance with the platform’s advertising content policies and its ad targeting policies. Because processes in which an advertiser simply “self-certifies” that they are not engaging in prohibited discriminatory practices can easily be abused, they are not sufficient. In addition, although some companies are investing in developing automated tools that can identify certain categories of ads (such as political advertising), these tools are unable to fully understand subjective definitions and uses of human speech. As a result, this review process should always involve human review prior to a final decision. Internet platforms should publicly disclose the scope and methods of this review process, including to what extent automated and human review are involved, and should disclose the guidelines that human reviewers use to make determinations on whether or not an ad, account, or targeting parameter should be permitted on the platform. Such comprehensive review and authentication procedures can help reduce instances of bias and discriminatory outcomes in the ad targeting and delivery process.
5) Hire independent auditors to conduct regular periodic audits of ad targeting algorithms in order to identify potentially harmful outcomes related to privacy, freedom of expression, freedom of information, and discrimination, and take steps to eliminate or mitigate any harms identified through the audits. In particular, these audits should evaluate the algorithms that are used to generate ad targeting categories. Internet platforms should conduct these audits proactively, as well as in response to credible allegations of violations of user privacy, freedom of expression, freedom of information, or cases of discrimination surfaced by community partners, civil society organizations, activists, etc. Companies should use the results of these audits to refine and improve ad targeting algorithms and make them more fair, accountable, and transparent. These audits should be conducted by an external third party, and companies should make summaries publicly available.
6) Hire independent auditors to conduct regular periodic audits of ad delivery and optimization algorithms in order to identify potentially harmful outcomes related to privacy, freedom of expression, freedom of information, and discrimination, and take steps to eliminate or mitigate any harms identified through the audits. These audits should particularly evaluate how automated ad delivery systems can inappropriately prevent certain categories of users from viewing ads due to inferences made on the basis of collected data. Internet platforms should conduct these audits proactively, as well as in response to cases of violations of user privacy, freedom of expression, freedom of information, or cases of discrimination surfaced by community partners, civil society organizations, activists, etc. Companies should use the results of these audits to refine and improve ad delivery and optimization algorithms and make them more fair, accountable, and transparent. These audits should be conducted by an external third party, and companies should make summaries publicly available.
7) Empower users with comprehensive tools that help them understand how and why ads are targeted and delivered to them. Internet platforms should enable users to understand how and why they are seeing certain ads. This information should be easy to access and understand. In particular, when an individual user clicks on a specific ad, they should be able to see the following:
- What factors (e.g. demographic information, interests, browsing history, etc.) about them were considered when targeting and delivering the ad to them.
- Whether the user appears on that advertiser’s list of target users.
When a user clicks on a certain ad to learn more, they should also be able to access further information on:
- Which advertiser(s)’ lists a user appears on. This information should include whether an advertiser provided specific information on the user to the advertising platform, or whether it was acquired through a marketing partner or data broker.
- Whether an ad was delivered to a user because the advertising platform identified them as potentially interested in the content of the ad (e.g. whether they were part of a Lookalike Audience).
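For illustration only, the kind of per-ad explanation described above could be represented as a structured record like the following sketch. Every field name here is hypothetical; no platform's actual API is being described:

```python
# Hypothetical sketch of a per-ad explanation record, illustrating the
# transparency data a platform could expose when a user clicks on an ad
# to learn why it was shown. All field names are invented for illustration.

AD_EXPLANATION_EXAMPLE = {
    "ad_id": "example-123",
    "advertiser": "Example Retailer",
    # Factors considered when targeting and delivering the ad to this user
    "targeting_factors": ["age_range:25-34", "interest:hiking", "location:Portland"],
    # Whether the user appears on the advertiser's list of target users
    "on_advertiser_list": True,
    # How the advertiser obtained the user's information, if applicable
    "list_source": "data_broker",  # or "advertiser_upload", "marketing_partner"
    # Whether delivery was driven by platform inference (e.g. a Lookalike Audience)
    "delivered_via_inference": False,
}

def summarize_explanation(record: dict) -> str:
    """Render a user-readable summary of why an ad was shown."""
    factors = ", ".join(record["targeting_factors"])
    lines = [f"Advertiser: {record['advertiser']}", f"Targeted using: {factors}"]
    if record["on_advertiser_list"]:
        lines.append(
            f"You are on this advertiser's list (source: {record['list_source']})"
        )
    if record["delivered_via_inference"]:
        lines.append("Delivered because the platform inferred you may be interested")
    return "\n".join(lines)
```

A record like this could back both the short "why am I seeing this ad?" view and the more detailed disclosures recommended above, since it captures the targeting factors, the list membership, the list's source, and whether platform inference drove delivery.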
8) Explain to users why the platform collects, infers, and shares user data. This information should outline the purposes and scope of each of these practices. It should also include an explanation of the risks associated with such data collection, inference, and sharing practices. This information should be easy to access and understand, and help inform users’ decisions regarding the user controls recommended in recommendation 9 below.
9) Improve user controls so that users can easily manage whether and how data is collected, inferred, and shared, how this data is used, and how it influences the content that they see. This should include the option to delete this data entirely. These controls should be easy to access and understand. They should also be available to all users of the service, whether logged in, logged out, or not associated with a particular account. At a minimum, users should be able to:
- Adjust their advertising preferences to select and change the factors (e.g. demographic information, interests, browsing history, etc.) that advertisers may consider when targeting ads to them, and that the advertising platform may consider when delivering ads to them. These preference settings should include the ability to completely opt out from having any of these factors considered.
- Change their preferences related to whether they are shown ads that are targeted and delivered based on data from partners or their activity on related products or websites.
- Manage how all advertisers contribute to their online ad experience, including having the ability to hide and opt out of receiving ads from specific advertisers.
- Control whether and how internet platforms collect data on them. This should include the ability to automatically delete their browsing, app, and location history. Ideally, users should have to opt in to such data collection practices. At a minimum, companies should provide users with controls that enable them to pause and/or opt out entirely from practices that involve collecting data beyond the basic information needed to establish an account.
- Determine whether or not they would like an internet platform to be able to make inferences about them based on data they have collected or provided.
- Decide whether they want to receive targeted ads. Ideally, users should have to opt in to having ads targeted and delivered to them on any platform. At a minimum, users should have access to controls that enable them to fully opt out of the ad targeting and delivery process. Users should have easy-to-use controls that let them opt out of all practices at once.
10) Provide clear labels for sponsored and paid content across all of the platform's products, services, and ad networks. Companies should label all forms of paid advertising as sponsored content. Companies should ensure that these labels are appended to ads across all of a company’s products and services and should apply to all categories and mediums of advertising, whether political, employment-related, text-based, video-based, etc.
11) Create a publicly available online database of all of the ads that a company has run on its platform. This can help explain a platform’s ad operations in a comprehensive manner and can also enable meaningful trend analysis and research. This database should include ads from all categories on the platform, including categories of ads that could have significant real-life consequences such as political ads, housing ads, employment ads, and credit ads. It should also be user-friendly. In particular, this database should include search functionality. In order to protect user privacy, the information in this database should not enable the identification of users that received the ad. At a minimum, this database should disclose the following information about each of the ads in the database:
- The format of the ad (e.g. text, video, etc.)
- The name of the advertiser
- What region the ad was run in
- The total amount spent on the ad
- The time period during which an ad was active
- Granular engagement and interaction information, such as how many users saw the ad, and the number of likes, shares, and views that an ad received
- What targeting parameters the advertiser specified to the advertising platform
- What categories of users the ad was eventually delivered to (i.e. what targeting parameters did the ad delivery system eventually select and optimize for)
- Whether the ad was delivered to a custom set of users or one generated by an automated system (e.g. Lookalike users)
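As a minimal sketch, the fields listed above could map onto a record schema like the following. The field names, types, and example values are all hypothetical; an actual archive would define its own schema:

```python
from dataclasses import dataclass

# Hypothetical sketch of one record in the public ad archive described above.
# All field names and example values are invented for illustration.

@dataclass
class AdArchiveEntry:
    ad_format: str              # e.g. "text", "video"
    advertiser_name: str
    region: str                 # where the ad was run
    spend_usd: float            # total amount spent on the ad
    active_from: str            # dates the ad was active (ISO 8601)
    active_to: str
    impressions: int            # engagement: how many users saw the ad
    likes: int
    shares: int
    targeting_requested: list   # parameters the advertiser specified
    targeting_delivered: list   # categories the delivery system optimized for
    audience_type: str          # "custom_list" or "lookalike"

# An illustrative entry for a hypothetical political ad
entry = AdArchiveEntry(
    ad_format="video",
    advertiser_name="Example Candidate Committee",
    region="US-OH",
    spend_usd=12500.0,
    active_from="2019-09-01",
    active_to="2019-09-30",
    impressions=480_000,
    likes=3200,
    shares=410,
    targeting_requested=["age:18+", "region:Ohio"],
    targeting_delivered=["age:18-34", "interest:politics"],
    audience_type="lookalike",
)
```

Note that the schema reports aggregate engagement counts and targeting categories rather than any per-user data, consistent with the recommendation that the archive not enable identification of the users who received an ad.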
12) Publish a transparency report that provides a granular overview of the platform’s advertising operations across all regions that it operates in. This transparency report should be published along a consistent timeline (e.g. annually, quarterly, etc.). All of the data in the transparency report should be available in a structured data format (e.g. comma-separated values), rather than or in addition to a flat PDF file. This is helpful to researchers who want to make use of the report data, as it simplifies the data extraction process and makes reports more accessible.
At a minimum, this transparency report should disclose the following general information for every reporting period:
- The total number of ads that a platform ran
- The total number of ads that a platform ran in each country in which it operates
- The total amount of ad spend across the platform
- The total amount of ad spend in each country in which it operates
- The top advertisers in each country
- The top keywords in each country
In addition, at a minimum, this transparency report should separately disclose the following information for ads that have been flagged or removed from the platform for every reporting period:
- The total number of ads flagged for violating the platform’s advertising content policies
- The total number of ads removed for violating the platform’s advertising content policies
- The total number of ads flagged for violating the platform’s ad targeting policies
- The total number of ads removed for violating the platform’s ad targeting policies
- A separate breakdown of the ads and accounts flagged and removed for violating the platform’s advertising content policies by:
- The advertising content policy they violated
- The format of the ad’s content (e.g. text, audio, image, video, live stream)
- The country of the advertiser
- For companies that operate more than one platform, the product or service on which the ad was run
- The detection method used (e.g. user flag, automated tool). Note that the identity of individual flaggers should not be revealed.
- A separate breakdown of the ads and accounts flagged and removed for violating the platform’s ad targeting policies by:
- The ad targeting policy they violated
- The format of the ad’s content (e.g. text, audio, image, video, live stream)
- The country of the advertiser
- For companies that operate more than one platform, the product or service on which the ad was run
- The detection method used (e.g. user flag, automated tool). Note that the identity of individual flaggers should not be revealed.
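The structured-data recommendation above can be illustrated with a minimal sketch: per-country totals of the kind a flat PDF report might present, emitted instead as comma-separated values that researchers can parse directly. The column names and all figures below are invented for illustration:

```python
import csv
import io

# Minimal sketch of a machine-readable transparency report in CSV form.
# All column names and figures are invented examples, not real data.

ROWS = [
    {"country": "US", "ads_run": 1_200_000, "total_spend_usd": 54_000_000,
     "ads_flagged_content": 8_400, "ads_removed_content": 6_100,
     "ads_flagged_targeting": 2_300, "ads_removed_targeting": 1_900},
    {"country": "DE", "ads_run": 310_000, "total_spend_usd": 9_800_000,
     "ads_flagged_content": 2_100, "ads_removed_content": 1_700,
     "ads_flagged_targeting": 600, "ads_removed_targeting": 450},
]

def to_csv(rows: list) -> str:
    """Serialize the report rows as comma-separated values with a header."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

report_csv = to_csv(ROWS)
```

Publishing data in this form lets researchers load a reporting period directly into standard analysis tools, rather than manually extracting tables from a PDF.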
13) Provide meaningful notice to advertisers who have had their ads or accounts flagged or removed, as well as to users who have flagged ads or accounts. These notice procedures are particularly important where ads are run by individuals or civil society organizations, as erroneous removal or moderation of their ads could particularly infringe on freedom of expression. In addition, given the lack of clear definitions around categories of ads such as political and issue ads, such notice processes are important to protect freedom of expression. All notices should be available in a durable form that is accessible even if an advertiser’s account is suspended or terminated. In addition, users who flag ads should have a log of ads they have reported and the outcomes of the review process. At a minimum, the notice provided to advertisers who have had their ads or accounts flagged should include:
- The URL, content excerpt, and/or any other relevant information which would allow the party to identify the ad or account flagged or removed
- The specific portion of the advertising content policy and/or ad targeting policy that the ad or account was found to violate
- How the ad or account was detected or removed (e.g. user flag, government request, automated tool)
- An explanation of the process through which the relevant party can appeal the decision
14) With regard to categories of ads that could have significant real-life consequences, such as political ads, housing ads, employment ads, and credit ads, offer advertisers who have had their ads or accounts flagged or removed as well as users who have flagged ads or accounts a robust appeals process. These appeal procedures are particularly important where ads are run by individuals or civil society organizations, as erroneous removal or moderation of their ads could particularly infringe on freedom of expression. In addition, given the lack of clear definitions around categories of ads such as political and issue ads, such appeals processes are important to protect freedom of expression. At a minimum, the appeals process available to advertisers who have had their ads or accounts flagged and users who have flagged ads or accounts for these consequential categories of ads should include:
- Human review by a person or panel of people that were not directly involved in the initial decision-making process.
- The opportunity to provide additional context or information that should be considered in the appeals process.
- Meaningful notice which details the results of the appeals process. This notice should provide a comprehensive explanation of why the final decision was made.
15) Fund further research and investigations regarding how the digital advertising ecosystem can be used to reinforce societal biases and discriminatory outcomes and how to redress these problems. In particular, this research should include platforms which have been less studied to date, such as Twitter, Reddit, Snapchat, etc. Additionally, new research should explore potentially consequential categories of ads beyond those this report discusses as having significant real-life consequences, such as political and employment ads. Other similarly consequential categories, such as education and healthcare ads, would benefit from further research, as they can also yield discriminatory and harmful outcomes.
Recommendations for Civil Society and Researchers
In order to promote greater understanding and safeguards around how automated tools are used during the ad targeting and delivery processes, members of civil society and researchers should:
- Conduct further research on how the digital advertising ecosystem can be used to reinforce societal biases and discriminatory outcomes through advertising and how to redress these problems. In particular, this research should focus on platforms that have been less studied to date, such as Twitter, Reddit, Snapchat, etc. As discussed above, this report has highlighted some categories of ads that could have significant real-life consequences, but there are other similarly consequential categories of ads, such as education and healthcare, that need to be further researched, as they can also yield discriminatory and harmful outcomes.
- Collaborate to develop a set of industry-wide best practices for transparency and accountability around algorithmic ad targeting and delivery practices. These best practices should explicitly prioritize the public interest above corporate business models and concerns about trade secrets. This will help ensure that users are adequately educated and aware of company practices, have a range of meaningful controls at their disposal, and know how to use them. It will also promote greater accountability around these algorithmic decision-making practices.
Recommendations for Policymakers
The recommendations for policymakers in this report are focused on U.S. policymakers. This is both because the platforms discussed are U.S. companies and also because the First Amendment of the U.S. Constitution imposes unique constraints on the extent to which U.S. policymakers can regulate how companies decide which content to permit on their platforms.
In order to help prevent discrimination and harmful outcomes as a result of algorithmic decision-making in ad targeting and delivery online, U.S. policymakers should:
- Clarify that all offline anti-discrimination statutes apply in the digital environment. Policymakers should ensure that anti-discrimination statutes apply online to the same extent as offline. Statutes such as the Civil Rights Act of 1964, the Fair Housing Act (part of the Civil Rights Act of 1968), the Age Discrimination in Employment Act, and California’s Unruh Civil Rights Act should be applied online as they are applied offline. Where necessary to fill gaps or clarify the applicability of such laws, Congress and state legislatures should enact appropriate legislation.
- Enact rules to require greater transparency from online platforms regarding their ad targeting and delivery practices. The U.S. government is limited in the extent to which it can direct how platforms decide what content to permit on their sites. However, Congress could improve accountability mechanisms by requiring greater transparency around these content policies and practices.