Introduction
This report provides a landscape overview of the most prominent and promising proposals for how internet platforms and governments can create and institute mechanisms that promote fairness, accountability, and transparency (FAT) around high-risk algorithmic systems. It discusses the strengths and limitations of these approaches and demonstrates how each mechanism fits into the overall FAT landscape.
Over the past decade, private companies and government agencies have radically expanded their development and use of machine learning (ML) and artificial intelligence (AI). While these algorithmic systems can allow entities to operate with greater efficiency and scale, they can also generate discriminatory, biased, and otherwise harmful outcomes. These harms can have wide-ranging consequences and have occurred across industries, including the education,1 labor,2 medicine, and criminal justice sectors,3 and throughout online platforms.4 Federal privacy legislation—which is direly needed for a variety of reasons—would likely help address many of the existing harms stemming from private sector use of algorithmic systems today, but it may still need to be supplemented with additional policy measures focused on algorithms and algorithmic accountability.5
In response, civil society organizations and civil rights groups, researchers, and policymakers have begun to think about how to promote greater FAT around the use of algorithmic systems, especially systems that pose high risks to citizens and society. These conversations have thus far resulted in the development of numerous high-level principles and guidelines around ethical uses of AI. While helpful for identifying critical values to consider, such outputs are difficult to translate into practice.6 In addition, many discussions around how to deploy particular FAT mechanisms occur in silos, overlooking the fact that multiple mechanisms must often be deployed in concert to meaningfully promote FAT throughout different parts of a system's life cycle.
Many existing considerations of FAT also fail to account for whom a FAT mechanism is designed. To be effective, a FAT measure must be meaningful and explainable: if the information being disclosed is not comprehensible to the intended end user, the mechanism does little to promote FAT.7 Further, there has thus far been little consensus on how to define a high-risk algorithmic system; researchers, civil society organizations, and other key stakeholders should work toward an agreed-upon definition that companies and government entities can apply. A shared definition would also ensure that any assignment of responsibility or liability is proportionate.
As this report will outline, there are a wide variety of mechanisms for promoting FAT around high-risk algorithmic systems that seek to counteract the harmful effects of opacity and address concerns around bias and discrimination. This report will unpack nine different categories of approaches, including their strengths and weaknesses, and outline best practices for using some of these mechanisms. These nine categories were selected based on their prominence in ongoing conversations around promoting FAT around high-risk algorithmic systems.
This report contains four sections, each outlining approaches to promoting FAT around high-risk algorithmic systems that different entities would be best suited to implement or pursue. Section one discusses internet platforms, section two discusses government entities and regulators, section three discusses internet platforms and government entities together, and section four discusses other stakeholders. The report concludes with recommendations on next steps that internet platforms and governments deploying and using algorithmic systems should prioritize, as well as recommendations for future multi-stakeholder engagement. The recommendations are similarly broken down by which actors are best suited to implement them. Throughout the report, we reference research and examples from the European Union (EU) and other regions to inform our analysis. However, our recommendations are primarily focused on the U.S. context. This report builds on OTI’s report and event series—Holding Platforms Accountable: Online Speech in the Age of Algorithms—which explores how internet platforms use algorithmic decision-making to shape and curate the content we see and makes recommendations on how platforms can promote greater FAT.8
Editorial disclosure: This brief discusses policies by Facebook, Google (including YouTube), Microsoft, and Twitter, all of which are funders of work at New America but did not contribute funds directly to the research or writing of this piece. View our full list of donors at www.newamerica.org/our-funding.
Citations
- Tom Simonite, "Meet the Secret Algorithm That's Keeping Students Out of College," WIRED, July 2020, source.
- Drew Harwell, "A Face-Scanning Algorithm Increasingly Decides Whether You Deserve the Job," Washington Post, November 6, 2019, source.
- Alex Chohlas-Wood, Understanding Risk Assessment Instruments in Criminal Justice, June 19, 2020, source.
- Muhammad Ali et al., "Discrimination through Optimization: How Facebook's Ad Delivery Can Lead to Skewed Outcomes," Proceedings of the ACM on Human-Computer Interaction, September 12, 2019, source.
- Christine Bannan and Margerite Blase, Automated Intrusion, Systemic Discrimination: How Untethered Algorithms Harm Privacy and Civil Rights, October 7, 2020, source.
- Brent Mittelstadt, "AI Ethics: Too Principled to Fail?," SSRN, 2019.
- David Freeman Engstrom et al., Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies, February 2020, source.
- Spandana Singh, Holding Platforms Accountable: Online Speech in the Age of Algorithms, source.