Introduction

Tay’s first words were “hellooooooo world!!!” It was a friendly start for the Twitter bot designed by Microsoft to engage with people aged 18 to 24. But in a mere 12 hours, Tay went from friendly Twitter persona to foul-mouthed, racist Holocaust denier who said feminists “should all die and burn in hell” and that the actor “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.”

Tay, which Microsoft quickly shut down after just 24 hours, was programmed to learn from the behaviors of other Twitter users, and in that regard, was a success. The bot’s embrace of humanity’s worst attributes is an example of algorithmic bias—when seemingly innocuous programming takes on the prejudices either of its creators or the data it is fed.

[Image: Tay tweet]

The side effects of unintentionally discriminatory algorithms can be dramatic and harmful. Companies and government institutions that use data need to pay attention to the unconscious and institutional biases that seep into their results. It doesn’t take active prejudice to produce skewed results in web searches, data-driven home loan decisions, or facial recognition software. It just takes distorted data that no one notices and corrects for.
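To make the point concrete, here is a minimal sketch (with entirely invented data and names) of how a naive system that learns loan-approval rules from historical decisions can reproduce a disparity that no one wrote down on purpose:

```python
# Toy illustration with hypothetical data: a naive "model" that extracts
# approval thresholds from past loan decisions inherits whatever bias
# those decisions already contained.
historical = [
    # (credit_score, neighborhood, approved) -- invented values
    (700, "A", True), (650, "A", True), (620, "A", True),
    (700, "B", True), (650, "B", False), (620, "B", False),
]

def learned_threshold(records, neighborhood):
    """The lowest score ever approved in this neighborhood -- the 'rule'
    a naive model would extract from the skewed history."""
    approved = [score for score, hood, ok in records if hood == neighborhood and ok]
    return min(approved)

# The learned rule demands a higher score from neighborhood B than from A,
# purely because past decisions did -- no active prejudice required.
print(learned_threshold(historical, "A"))  # 620
print(learned_threshold(historical, "B"))  # 700
```

The code contains no explicit rule about neighborhoods; the disparity comes entirely from the distorted data it was fed, which is exactly the failure mode described above.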

As we begin to create artificial intelligence (AI), we risk inserting racism and other prejudices into the code that will make decisions for years to come.

At Slant: Understanding Algorithmic Bias, in San Francisco, New America CA brought together a curated group of experts in AI, bias, technology, and future thinking to outline the state of AI and ethics from four different perspectives. Our goals were to understand what specific actions companies are taking to address bias in AI and machine learning, and what help they can use from civil society. We’re bringing you our most actionable insights.

To learn more about these recommendations and this work at New America, please contact Megan Garcia (garcia@newamerica.org).

Big ideas to make everyone better at heading off bias in algorithms and machine learning

  • An understanding of the need for thoughtful design of algorithms and AI would go a long way toward tackling many of the ethics problems we see with both.
  • Inside companies and civil society there is the appetite to make algorithms and AI more ethical, but no consensus about how to do that. Efforts are also disconnected and we are not always learning from each other. FATML is an exception.
  • There will have to be a change in the way companies think about bias to ensure that bias in algorithms and AI is minimized. Eventually there will have to be an inclusion mindset (or some company-wide or industry-wide focus on ethics and inclusion) that pushes individuals and processes to anticipate bias and correct for it as it emerges.
  • Journalism and other fields with regulated or unregulated codes of ethics offer lessons that the technology sector might apply.
  • There is a dire need for organizations outside of the technology sector to provide ideas about how to address algorithmic bias. Some organizations have made modest progress in training people to use data ethically, but much remains to be done.
  • Who to talk to for more: New America and New America CA, Center for Democracy and Technology (CDT), Open Data Institute, Data & Society Research Institute, DataKind, ACLU, FATML (Fairness, Accountability and Transparency in Machine Learning) community, Open AI, Partnership on AI

What's Your Role?

Select your role from the ones below and see what you can do to address bias in AI and machine learning.
