Exploring Meaningful Fairness, Accountability, and Transparency

Approaches to promoting fairness, accountability, and transparency in the context of machine learning and algorithmic decision-making need to be dynamic and responsive to the fact that today’s internet platforms offer a variety of services, and their algorithmic systems are built to cater to specific goals and objectives.1 As a result, stakeholders should consider the audience, intent, and scope of each platform.2 It is also imperative to note that the goals of algorithmic systems, such as ranking and recommendation systems, vary from platform to platform based on a diverse set of technical and economic objectives. According to Daphne Keller, director of platform regulation at Stanford University’s Cyber Policy Center, the technical goal of these systems is to translate human values—such as quality or authoritativeness in content or results—into mathematical and technical formulas. The economic goals of these systems, on the other hand, include maximizing ad revenue, although, as Keller notes, this is only one foundational piece of the puzzle.3 It is therefore vital that, as experts work to define what meaningful fairness, accountability, and transparency around the use of these algorithmic systems means, they account for differences in how platforms create and calibrate their systems at different points in the product life cycle.
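
To make Keller’s point concrete, the sketch below shows one way a human value like authoritativeness might be translated into a ranking formula, as a weighted combination of numeric signals. The signal names, weights, and scoring function are illustrative assumptions, not any platform’s actual implementation.

```python
# A minimal sketch of translating human values into a ranking formula.
# Every signal name and weight here is invented for illustration; real
# platform ranking systems are far more complex and are not public.

def rank_score(item: dict[str, float], weights: dict[str, float]) -> float:
    """Combine an item's per-signal values into one score used to order results."""
    return sum(weight * item.get(signal, 0.0) for signal, weight in weights.items())

# The choice of weights is where values get encoded: weighting predicted
# engagement heavily serves the ad-revenue goal, while weighting
# authoritativeness heavily serves a content-quality goal.
weights = {"relevance": 0.4, "authoritativeness": 0.3, "predicted_engagement": 0.3}
item = {"relevance": 0.8, "authoritativeness": 0.6, "predicted_engagement": 0.9}
print(rank_score(item, weights))  # 0.77
```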

In addition, stakeholders in this space should examine how meaningful transparency and accountability can be delivered to different audiences, such as users, researchers and journalists, policymakers, and watchdogs. Each of these audiences has a different level of knowledge about how algorithmic systems work, a different set of goals for transparency, and a different appetite for granularity. Consequently, any transparency effort must account for these differences in goals and understanding in order to appropriately frame its insights.4

For example, for transparency efforts geared toward users to be meaningful, they must be accessible to the average user, who may not have a strong technical background or a strong interest in the granular components of how these systems work. User-focused transparency efforts should therefore explain, in a straightforward way, how these systems affect users and their experiences, and what level of control users have over how they interact with them.

Researchers and journalists, on the other hand, often have specific questions or concerns about particular components of these systems, and typically seek more granular and technical information through processes such as audits. As a result, experts such as Daphne Keller have recommended that these groups have access to technical tools such as application programming interfaces (APIs) that enable them to submit queries and observe whether an algorithm responds differently to different users.5 Some companies have expressed concern that offering researchers broad access to such tools (and to their systems in general) could threaten their competitiveness,6 and could have significant privacy consequences, as the Cambridge Analytica scandal demonstrated.7 However, companies can help mitigate these risks by establishing programs that give a limited group of vetted researchers access to more granular and sensitive data about how these systems work.
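
The sketch below illustrates the kind of query-based access described above: submitting the same query under different simulated user profiles and comparing the rankings that come back. The endpoint URL, request format, and response fields are hypothetical assumptions made for illustration, not a real platform API.

```python
# A hypothetical researcher-facing transparency API. The URL, payload
# fields, and response schema below are assumptions for illustration only.
import json
from urllib.request import Request, urlopen

API_URL = "https://platform.example/api/v1/ranked-results"  # hypothetical endpoint

def fetch_ranking(query: str, profile: dict) -> list[str]:
    """Ask the platform which result IDs it would rank, in order, for this profile."""
    payload = json.dumps({"query": query, "simulated_profile": profile}).encode()
    request = Request(API_URL, data=payload,
                      headers={"Content-Type": "application/json"})
    with urlopen(request) as response:
        return json.load(response)["result_ids"]

# Submit the same query for two different simulated users and compare.
ranking_a = fetch_ranking("housing loans", {"age": 30, "region": "US-NY"})
ranking_b = fetch_ranking("housing loans", {"age": 60, "region": "US-MS"})
differing = [pair for pair in zip(ranking_a, ranking_b) if pair[0] != pair[1]]
print(f"{len(differing)} ranking positions differ between the two profiles")
```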

Finally, as policymakers, watchdogs, and regulators around the world consider how to regulate and oversee internet platforms’ use of these algorithmic systems, they have begun pressing companies to provide greater transparency and accountability. The type and granularity of these disclosures will similarly vary based on the needs of different regulations and legal obligations. In the European Union, for example, the conversation around the forthcoming Digital Services Act has featured numerous granular transparency proposals covering a range of algorithmic systems, including those used for content moderation and digital advertising.8 Similarly, legislative proposals such as the Platform Accountability and Consumer Transparency Act (PACT Act) in the United States have included transparency reporting requirements for internet platforms’ content moderation practices.9

Stakeholders have made some progress toward consensus around definitions of meaningful fairness, accountability, and transparency, but the space is continuously evolving. Because these definitions vary across stakeholder groups, and because there is currently no set of standards to guide these frameworks, consensus has not yet been reached.10

Some civil society organizations and researchers, for example, have stated that in order for companies to promote greater fairness and demonstrate accountability around their algorithmic systems, they need to de-prioritize signals such as engagement and click-worthiness, which have been found to amplify and exacerbate many of the harms associated with these systems. These experts recommend that companies design automated tools to emphasize truthful and authentic information. Other experts, however, have argued that these systems need to prioritize interests such as preventing racial discrimination, maximizing diversity of opinion and sources, and promoting competition.11 It is challenging for one system to achieve all of these goals. Therefore, as stakeholders continue exploring what meaningful fairness, accountability, and transparency look like, experts have noted that it would be helpful to reconcile these differing demands and determine which values must be prioritized in these content-shaping algorithms.12 In addition, it is important to recognize that not all human values can easily be codified technically. As a result, how an algorithmic system operationalizes values such as factual accuracy or societal benefit will always be limited to some extent.13
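
The tension described above can be made concrete with a small sketch: the same candidate items ranked under two different value weightings. Every item, signal value, and weight below is invented for demonstration; the point is only that no single weighting satisfies both sets of priorities.

```python
# Illustrative only: the same items ranked under two value weightings.
items = [
    {"id": "viral_post",  "engagement": 0.9, "accuracy": 0.2, "source_diversity": 0.3},
    {"id": "wire_report", "engagement": 0.4, "accuracy": 0.9, "source_diversity": 0.7},
    {"id": "local_story", "engagement": 0.3, "accuracy": 0.8, "source_diversity": 0.9},
]

def ranked(items: list[dict], weights: dict[str, float]) -> list[str]:
    """Return item IDs sorted by a weighted sum of their signals, best first."""
    def score(item: dict) -> float:
        return sum(weight * item[signal] for signal, weight in weights.items())
    return [item["id"] for item in sorted(items, key=score, reverse=True)]

# An engagement-first weighting surfaces the viral post; an accuracy- and
# diversity-first weighting pushes it to the bottom instead.
print(ranked(items, {"engagement": 0.8, "accuracy": 0.1, "source_diversity": 0.1}))
# -> ['viral_post', 'wire_report', 'local_story']
print(ranked(items, {"engagement": 0.1, "accuracy": 0.5, "source_diversity": 0.4}))
# -> ['local_story', 'wire_report', 'viral_post']
```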

Potential new mechanisms for promoting greater fairness, accountability, and transparency around the use of these algorithmic content-shaping systems include human rights impact assessments, algorithmic audits, enhanced transparency reports, and political ad libraries. None of these mechanisms is a blanket approach; each must be contextualized in its application. Further, these approaches are not mutually exclusive. They target different aspects and effects of these algorithmic systems, and could therefore be applied in a complementary manner.14

Citations
  1. "How Ranking and Recommendation Algorithms Influence How We See the World," video, posted by New America, July 14, 2020, source
  2. "How Ranking," video.
  3. "How Ranking," video.
  4. Roundtable discussion by New America’s Open Technology Institute, Washington, DC, June 16, 2020
  5. "How Ranking," video.
  6. "How Ranking," video.
  7. Roundtable discussion by Open Technology Institute, June 16, 2020
  8. Spandi Singh, "Thinking Through Transparency and Accountability Commitments Under The Digital Services Act," The GNI Blog, entry posted July 20, 2020, source
  9. Platform Accountability and Consumer Transparency Act, 116th, 1st. (as introduced, June 24, 2020). source
  10. Roundtable discussion by Open Technology Institute, June 16, 2020
  11. Roundtable discussion by Open Technology Institute, June 16, 2020
  12. Roundtable discussion by Open Technology Institute, June 16, 2020
  13. "How Advertising Algorithms Drive the Internet's Favorite Business Model," video, posted by New America, July 7, 2020, source
  14. Roundtable discussion by Open Technology Institute, June 16, 2020