New America’s Open Technology Institute Joins Coalition Condemning Use of Algorithmic Risk Assessments for Pretrial Detention
Press Release

July 30, 2018
Today, New America’s Open Technology Institute (OTI) joined a coalition of over 100 civil rights, digital justice, and community-based organizations in a statement opposing the use of pretrial “risk assessment” tools and calling for critical safeguards where such tools are already in use. The statement urges jurisdictions around the country to reconsider their use of artificial intelligence-based tools to decide whether individuals should be incarcerated before trial, and the coalition advocates that pretrial detention be used only as a last resort. Among other concerns, the groups explain that algorithmic risk assessments typically perpetuate the criminal justice system’s existing bias against people who are poor and people of color, and that these tools therefore should not be seen as a substitute for meaningful reform of unjust bail systems.
Noting that many jurisdictions already rely on algorithmic risk assessment tools when deciding whether to incarcerate an individual before trial, the statement recommends a series of measures to reduce the harms such tools may cause. These include designing the tools to reduce unwarranted racial disparities and ensuring that they are transparent, independently validated, and open to challenge by an accused person’s counsel.
The statement notes that:
Algorithmic decision-making tools are only as smart as the inputs to the system. Many algorithms effectively report only correlations found in the data used to train them. The algorithms being applied nationwide vary widely in design, complexity, and inputs, and some incorporate cutting-edge techniques like machine learning, the process by which rules are developed from observed patterns in training data. As a result, biases in a data set will not only be replicated in the results but may actually be exacerbated. For example, because police officers disproportionately arrest people of color, criminal justice data used to train these tools will perpetuate that correlation.
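To make the mechanism the statement describes concrete, the following is a minimal, hypothetical sketch in Python of how a model trained on biased arrest records can reproduce that bias in its predictions. The data, group labels, feature choices, and model are all invented for illustration; this does not depict any real risk assessment tool or data set.

```python
# Minimal, hypothetical sketch: a classifier trained on biased arrest
# data reproduces that bias in its "risk" predictions. All data here
# is synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: group membership (a stand-in for a protected
# attribute) and an underlying propensity that is, by construction,
# identically distributed across both groups.
group = rng.integers(0, 2, size=n)        # group 0 or group 1
propensity = rng.normal(0.0, 1.0, size=n) # same distribution for both groups

# Biased label: group 1 is policed more heavily, so identical behavior
# is more likely to result in a recorded arrest.
policing_boost = 0.8 * group
arrested = (propensity + policing_boost + rng.normal(0, 1, n)) > 1.0

# Real tools often see proxies for group membership (e.g., zip code);
# here the model sees the group label directly to keep the sketch short.
X = np.column_stack([propensity, group])
model = LogisticRegression().fit(X, arrested)

# The trained tool scores group 1 as "higher risk" even though the
# underlying propensity distributions are identical by construction.
for g in (0, 1):
    scores = model.predict_proba(X[group == g])[:, 1]
    print(f"group {g}: mean predicted risk = {scores.mean():.2f}")
```

Because the arrest labels already encode unequal policing, the model assigns a higher average “risk” score to one group even though the two groups behave identically in this synthetic example, which is exactly the replication of training-data bias the statement warns about.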
The following statement can be attributed to Sharon Bradford Franklin, Director of Surveillance and Cybersecurity Policy at New America’s Open Technology Institute:
“Algorithmic risk assessment tools are likely to perpetuate and aggravate the racial and economic biases that already pervade our criminal justice system. Machine learning systems can only be as good as their training data, and the racial bias found in statistics like arrest rates has been well-documented. Technology cannot provide a quick fix to cure our unjust bail systems.”