[ONLINE] - Cracking Open the Black Box

Promoting Fairness, Accountability, and Transparency Around High-Risk AI

Over the past decade, private companies and government agencies have radically expanded their development and use of machine learning (ML) and artificial intelligence (AI). While these algorithmic systems can allow entities to operate with greater efficiency and scale, they can also generate discriminatory, biased, and otherwise harmful outcomes. In response, civil society organizations and civil rights groups, researchers, and policymakers have begun to think about how to promote greater fairness, accountability, and transparency (FAT) around the use of algorithmic systems, especially systems that pose “high risks” to citizens and society.

Join New America’s Open Technology Institute (OTI) for a discussion on the landscape of mechanisms for promoting FAT around high-risk AI systems.

This event will explore OTI's latest report on promoting FAT around high-risk algorithmic systems. It also builds on work OTI has conducted over the past three years examining how internet platforms use algorithmic decision-making to shape and influence content and user experiences across four key areas: content moderation, newsfeed and search result rankings, ad targeting and delivery, and recommendation systems.

Follow the conversation using #AIRisk and by following @OTI.


Catherine M. Sharkey

Segal Family Professor of Regulatory Law and Policy, NYU School of Law

Christine Custis, @ChristineCustis
Head of ABOUT ML and Fairness, Transparency, and Accountability, Partnership on AI

Spandi Singh, @spandi_s
Policy Analyst, New America's Open Technology Institute


Lauren Sarkesian

Senior Policy Counsel, New America's Open Technology Institute