Conclusion

Algorithmic tools used to make consequential decisions about individuals' lives are becoming increasingly common in criminal justice, higher education, employment, and other sectors. Too often, these AI systems are trained on data sets that reflect historical biases and were collected through privacy-invasive means. A multi-level approach is required to prevent discriminatory, privacy-invasive, and other harmful outcomes: such systems must be transparent, undergo a disparate impact analysis before implementation, and be audited regularly afterward. Most fundamentally, equitable use of AI systems requires institutions to examine their own biases and honestly assess whether the benefits of a tool outweigh the risks to privacy and equity.
