Introduction
Machine learning and other artificial intelligence (AI) tools are increasingly used by organizations and online platforms to help make critical, life-altering decisions. These include deciding whether, or for how long, someone should go to jail; whether someone should be considered for an open job; whether someone is likely to succeed at a university; and more. Because these systems rely on large datasets and statistical analyses, their outputs are often perceived as neutral and free from the biases that shape human decision-making. However, outputs from algorithms and other automated tools can and do reinforce biases and produce disparate results, because the datasets they rely on reflect historical and ongoing discrimination. In addition, AI often requires a significant volume of personal data to function, and collecting these data can involve privacy-intrusive practices that also violate users' civil rights. The government and other stakeholders must acknowledge the risks posed by algorithmic decision-making and other automated tools in order to protect those most likely to be harmed by them and to ensure these tools are used in non-discriminatory and beneficial ways. Last year, to address these issues, Rep. Yvette D. Clarke (D-N.Y.), along with Sens. Cory Booker (D-N.J.) and Ron Wyden (D-Ore.), introduced the Algorithmic Accountability Act of 2019.1 The bill would require companies to test and fix flawed computer algorithms that result in biased or discriminatory outcomes.
This report builds on an event hosted on June 3, 2020, by New America's Open Technology Institute (OTI) that explored how machine learning and other algorithmic tools can lead to privacy and civil rights harms.2 Rep. Clarke, the vice chair of the House Energy and Commerce Committee, delivered opening remarks. A subsequent panel, moderated by Koustubh "K.J." Bagchi, OTI senior policy counsel, brought together Daniel Kahn Gillmor, senior staff technologist of the ACLU Project on Speech, Privacy, and Technology; Iris Palmer, senior advisor for higher education and workforce at New America's Education Policy program; and A. Prince Albert III, then-technology and telecommunications fellow at the Leadership Conference on Civil and Human Rights. The panel discussed key questions regarding algorithms and their potential impacts on privacy and civil rights, including: How do we design and audit algorithms to avoid disparate outcomes? What are the real-world consequences of algorithmic practices? Who should be held accountable for protecting individuals from the discriminatory impacts of automated systems? What should legislative protections look like? And to what extent can, or should, such protections be incorporated into comprehensive consumer privacy legislation?
Editorial disclosure: This report discusses policies of Facebook and Google, both of which are funders of work at New America, but neither contributed funds directly to the research or writing of this report. New America is guided by the principles of full transparency, independence, and accessibility in all its activities and partnerships. New America does not engage in research or educational activities directed or influenced in any way by financial supporters. View our full list of donors at www.newamerica.org/our-funding.