Promoting Fairness, Accountability, and Transparency Around Algorithmic Recommendation Practices

As explained in this report, the use of algorithmic systems to curate and recommend content gives internet platforms significant power to shape the perspectives and behaviors of their users. Recommendation systems have also significantly transformed, and contributed to the success of, companies’ business models: these systems help platforms retain user attention, enabling them to target users with advertisements and further recommendations. However, the use of these algorithmic recommendation systems has also sparked a number of controversies. Despite this, internet platforms offer little transparency and accountability around how these systems are structured, how they operate, and how they make decisions. This makes it difficult to understand how these systems affect users, their worldviews, and their behaviors. Going forward, internet platforms and policymakers should consider the following recommendations in order to promote greater fairness, accountability, and transparency around algorithmic decision-making.1 This report does not offer recommendations for researchers, as far too little corporate data related to algorithmic recommendation systems is currently available to them. Once internet platforms begin implementing the recommendations outlined below, more tangible recommendations for researchers will become possible.

Recommendations for Internet Platforms

  1. Disclose to users the situations in which the platform uses an algorithmically curated recommendation system, and provide comprehensive, meaningful explanations of how these systems work. These explanations should cover the explicit and implicit data points the system considers (especially sensitive data points such as demographic information), as well as the signals (e.g., when a user stopped watching a video, how popular an item is) and factors (e.g., a user’s location, the device on which a user is visiting the platform) the system weighs in order to generate its recommendations. If these signals or factors change, the company should publicly disclose and explain the changes. These disclosures should also outline the instances in which the company will manually intervene in algorithmic decision-making processes to change outcomes (e.g., to downrank or hide recommendations that include misinformation). The disclosures and explanations should be publicly available, easily accessible on the company’s website, and written so that the average user can easily understand them.
  2. Explain to users why a recommendation was made to them. Users should be able to access information on why a particular video, item, etc. was recommended to them. At a minimum, this explanation should include the signals and user characteristics the recommendation system considered in making the recommendation. It should also include a direct link to the relevant user controls (per recommendation seven below) that let the user change their recommendation preferences.
  3. Disclose granular data around how the company trains its algorithmic recommendation systems. At a minimum, this should include information on the categories of users represented in the data sets on which these systems are trained (e.g., which demographic groups).
  4. Enable independent researchers to conduct audits that review and verify relevant internal models and data. In particular, companies should permit pre-vetted researchers to review and verify their training models, the results of tests the company runs to evaluate the effectiveness of its recommendation system, any statistics the company has publicly released on the impact of algorithmic changes on the operation of the recommendation system, and data related to controversial categories of recommendations such as extremist propaganda, conspiracy theories, and misinformation.
  5. Hire independent auditors to conduct regular audits of recommendation algorithms in order to identify potentially harmful outcomes, and take steps to address the audits’ findings, including mitigating discrimination and bias. These audits should specifically evaluate how algorithmic recommendation systems can inappropriately influence or manipulate user perspectives and behaviors, promote concerning categories of information, and cause discrimination. Internet platforms should conduct these audits proactively, as well as in response to concerns raised by community partners, civil society organizations, researchers, activists, and others. Companies should take affirmative steps to address any problematic findings, using the results to refine, retrain, and improve their recommendation systems and make them more fair, accountable, and transparent, and they should work to reduce the discrimination and bias that result from the use of these systems. The audits should be conducted by an external third party, and companies should make summaries of them publicly available.
  6. Share granular data on how the company tests its recommendation systems and measures their effectiveness. At a minimum, this should include information on how well these systems predict the preferences of different demographic groups. This data should also be continuously updated to indicate how algorithmic changes have affected the company’s metrics and conclusions about the overall effectiveness of its recommendation system.
  7. Improve user controls so that users can easily manage whether and how their data is collected and inferred, how this data is used, and how it influences the recommendations they see. These user controls should be easy to access and understand, and they should be available to all logged-in users of a service. In addition, these controls should be accompanied by an explanation of how using them will affect a user’s overall platform experience. At a minimum, these user controls should include the ability to:
    1. Select and change the factors (e.g., demographic information, browsing history, purchase history, ratings, interests) that a recommendation system may consider when generating recommendations for them. These settings should include the ability to opt out completely from having any of these factors considered, as well as the ability to fully clear a user’s watch, browsing, and purchase history. These controls are integral to protecting user privacy.
    2. Exclude certain videos, titles, channels, sellers, or items from factoring into their recommendations.
    3. Choose whether recommendations are influenced by a user’s activity on partner or related products and websites. This should include the option to opt out entirely from having such data considered.
    4. Opt out of the autoplay feature on video and streaming-based services. Ideally, users should have to opt into receiving autoplay recommendations on any platform.
    5. Decide whether they want to receive algorithmically curated recommendations at all. Ideally, users should have to opt into receiving such recommendations on any platform. At a minimum, users should have easy-to-use controls that enable them to fully opt out of the recommendation process, including the ability to opt out of all practices at once.
  8. Share the platform’s Terms of Service and Community Guidelines related to topics such as content and purchases, and explain how they are enforced. These guidelines should be easily accessible and comprehensible to the average user. They should clearly explain what kinds of content and behavior are and are not permitted on the platform, how the company enforces these policies, and what the consequences for violating them are. If the company changes its Terms of Service or enforcement policies, it should announce the changes and explain why they were made.
  9. Publish a transparency report outlining the scope and scale of Terms of Service enforcement actions in all of the regions in which the platform operates. These reports should provide granular and meaningful data on how the company has enforced its Terms of Service, and they should be published at regular intervals (e.g., quarterly or annually). All of the data in a transparency report should be available in a structured data format (e.g., comma-separated values), rather than, or in addition to, a flat PDF file; structured data simplifies extraction and makes the reports more accessible to researchers who want to make use of them.
  10. Explain how the company uses human evaluators to review and train its algorithmic and machine-learning models. These explanations should include an overview of the evaluators’ role and a publicly available copy of the guidelines the evaluators use.
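Recommendations one and two describe systems that weigh multiple signals and factors to generate recommendations, and ask platforms to surface per-signal explanations to users. The toy sketch below illustrates what such a weighted scoring function and its accompanying explanation might look like. It is purely illustrative: the signal names and weights are hypothetical and do not reflect any actual platform’s system, which in practice would involve far more signals and learned, rather than hand-set, weights.

```python
# Illustrative sketch only: a recommendation score computed as a weighted
# sum of behavioral signals. All signal names and weights are hypothetical.

# Hypothetical weights a platform might assign to each signal.
SIGNAL_WEIGHTS = {
    "watch_completion": 0.5,  # fraction of a video the user watched
    "item_popularity": 0.3,   # global popularity of the item
    "topic_affinity": 0.2,    # inferred match with the user's interests
}

def score(item_signals: dict) -> float:
    """Combine an item's signals into a single ranking score."""
    return sum(SIGNAL_WEIGHTS[name] * item_signals.get(name, 0.0)
               for name in SIGNAL_WEIGHTS)

def explain(item_signals: dict) -> dict:
    """Per-signal contributions to the score -- the kind of
    per-recommendation explanation recommendation two calls for."""
    return {name: SIGNAL_WEIGHTS[name] * item_signals.get(name, 0.0)
            for name in SIGNAL_WEIGHTS}

# Two hypothetical candidate items with their observed signal values.
candidates = {
    "video_a": {"watch_completion": 0.9, "item_popularity": 0.4,
                "topic_affinity": 0.7},
    "video_b": {"watch_completion": 0.2, "item_popularity": 0.9,
                "topic_affinity": 0.1},
}

# Rank candidates by score, highest first.
ranked = sorted(candidates, key=lambda k: score(candidates[k]), reverse=True)
```

Even a sketch this simple shows why the disclosures above matter: changing a single weight reorders what users see, and the `explain` breakdown is exactly the information a user would need to understand, and contest, why an item was recommended.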

Recommendations for Policymakers

The recommendations for policymakers in this report are focused on U.S. policymakers, both because the platforms discussed are U.S. companies and because the First Amendment of the U.S. Constitution imposes unique constraints on the extent to which U.S. policymakers can regulate how companies decide which content to permit on their platforms.

In order to help protect privacy and prevent harmful outcomes as a result of algorithmic decision-making in recommendation systems, U.S. policymakers should:

Enact rules to require greater transparency from online platforms regarding their use of algorithmic recommendation systems. The U.S. government is limited in the extent to which it can direct how platforms decide what content to permit on their sites. However, Congress could improve accountability mechanisms by requiring greater transparency around the use of algorithmic recommendation systems.

Citations
  1. Ranking Digital Rights, an affiliate program at New America, has released a set of draft indicators that seek to measure corporate disclosures related to algorithmic systems. Our research yielded recommendations in line with many of their research-based indicators. "RDR Corporate Accountability Index: Draft Indicators," Ranking Digital Rights, last modified October 2019.