Ethics Alone Can't Fix Big Tech

Weekly Article
April 25, 2019

This article originally appeared in Future Tense, a collaboration among Arizona State University, New America, and Slate.

The New York Times has confirmed what some have long suspected: The Chinese government is using a “vast, secret system” of artificial intelligence and facial recognition technology to identify and track Uighurs—a Muslim minority, 1 million of whom are being held in detention camps in China’s northwestern region of Xinjiang. This technology allows the government to extend its control of the Uighur population across the country.

It may seem difficult to imagine a similar scenario in the U.S., but related technologies, built by Amazon, are already being used by U.S. law enforcement agencies to identify suspects in photos and video. And echoes of China’s system can be heard in plans to deploy these technologies at the U.S.-Mexico border.

A.I. systems also decide what information is presented to you on social media, which ads you see, and what prices you’re offered for goods and services. They monitor your bank account for fraud, determine your credit score, and set your insurance premiums. A.I.-driven recommendations help determine where police patrol and how judges make bail and sentencing decisions.

As our lives intertwine with A.I., researchers, policymakers, and activists are trying to figure out how to ensure that these systems reflect and respect important human values, like privacy, autonomy, and fairness. Such questions are at the heart of what is often called “A.I. ethics” (or sometimes “data ethics” or “tech ethics”). Experts have been discussing these issues for years, but recently—following high-profile scandals, such as deadly self-driving car crashes and the Cambridge Analytica affair—they have burst into the public sphere. The European Commission released draft “Ethics Guidelines for Trustworthy AI.” Technology companies are rushing to prove their ethics bona fides: Microsoft announced “AI Principles” to guide internal research and development, Salesforce hired a “chief ethical and humane use officer,” and Google rolled out—and then, facing intense criticism, dissolved—an ethics advisory board. In academia, computer and information science departments are starting to require that their majors take ethics courses, and research centers like Stanford’s new Institute for Human-Centered Artificial Intelligence and public-private initiatives like the Partnership on AI are sprouting up to coordinate and fund research into the social and ethical implications of emerging A.I. technologies.

Experts have been trying to draw attention to these issues for a long time, so it’s good to see the message begin to resonate. But many experts also worry that these efforts are largely designed to fail. Lists of “ethical principles” are intentionally too vague to be effective, critics argue. Ethics education is being substituted for hard, enforceable rules. Company ethics boards offer “advice” rather than meaningful oversight. The result is “ethics theater”—or worse, “ethics washing”—a veneer of concern for the greater good, engineered to pacify critics and divert public attention away from what’s really going on inside the A.I. sausage factories.

As someone working in A.I. ethics, I share these worries. And I agree with many of the suggestions others have put forward for how to address them. Kate Crawford, co-founder of NYU’s AI Now Institute, argues that the fundamental problem with these approaches is their reliance on corporate self-policing and suggests moving toward external oversight instead. University of Washington professor Anna Lauren Hoffmann agrees but points out that there are plenty of people inside the big tech companies organizing to pressure their employers to build technology for good. She argues we ought to work to empower them. Others have drawn attention to the importance of transparency and diversity in ethics-related initiatives, and to the promise of more intersectional approaches to technology design.

At a deeper level, these issues highlight problems with the way we’ve been thinking about how to create technology for good. Desperate for anything to rein in otherwise indiscriminate technological development, we have ignored the different roles our theoretical and practical tools are designed to play. With no coherent strategy for coordinating them, none succeed.

Consider ethics. In discussions about emerging technologies, there is a tendency to treat ethics as though it offers the tools to answer all values questions. I suspect this is largely ethicists’ own fault: Historically, philosophy (the larger discipline of which ethics is a part) has mostly neglected technology as an object of investigation, leaving that work for others to do. (Which is not to say there aren’t brilliant philosophers working on these issues; there are. But they are a minority.) The result, as researchers from Delft University of Technology and Leiden University in the Netherlands have shown, is that the vast majority of scholarly work addressing issues related to technology ethics is being conducted by academics trained and working in other fields.

This makes it easy to forget that ethics is a specific area of inquiry with a specific purview. And like every other discipline, it offers tools designed to address specific problems. To create a world in which A.I. helps people flourish (rather than just generate profit), we need to understand what flourishing requires, how A.I. can help and hinder it, and what responsibilities individuals and institutions have for creating technologies that improve our lives. These are the kinds of questions ethics is designed to address, and critically important work in A.I. ethics has begun to shed light on them.

At the same time, we also need to understand why attempts at building “good technologies” have failed in the past, what incentives drive individuals and organizations not to build them even when they know they should, and what kinds of collective action can change those dynamics. To answer these questions, we need more than ethics. We need history, sociology, psychology, political science, economics, law, and the lessons of political activism. In other words, to tackle the vast and complex problems emerging technologies are creating, we need to integrate research and teaching around technology with all of the humanities and social sciences.

Moreover, in failing to recognize the proper scope of ethical theory, we lose our grasp of ethical practice. It should come as no surprise that ethics alone hasn’t transformed technology for the good. Ethicists will be the first to tell you that knowing the difference between good and bad is rarely enough, in itself, to incline us to the former. (We learn this whenever we teach ethics courses.) Acting ethically is hard. We face constant countervailing pressures, and there is always the risk we’ll get it wrong. Unless we acknowledge that, we leave room for the tech industry to turn ethics into “ethics theater”—the vague checklists and principles, powerless ethics officers, and toothless advisory boards, designed to save face, avoid change, and evade liability.

Ethics requires more than rote compliance. And it’s important to remember that industry can reduce any strategy to theater. Simply focusing on law and policy won’t solve these problems, since they are equally (if not more) susceptible to watering down. Many are rightly excited about new proposals for state and federal privacy legislation, and for laws constraining facial recognition technology, but we’re already seeing industry lobbying to strip them of their most meaningful provisions. More importantly, law and policy evolve too slowly to keep up with the latest challenges technology throws at us, as is evident from the fact that most existing federal privacy legislation is older than the internet.

The way forward is to see these strategies as complementary, each offering distinctive and necessary tools for steering new and emerging technologies toward shared ends. The task is fitting them together.

By its very nature, ethics is idealistic. The purpose of ethical reflection is to understand how we ought to live—which principles should drive us and which rules should constrain us. But ethical reflection is more or less indifferent to the vagaries of market forces and political winds. To oversimplify: Ethics can provide blueprints for good tech, but it can’t implement them. In contrast, law and policy are creatures of the here and now. They aim to shape the future, but they are subject to the brute realities—social, political, economic, historical—from which they emerge. What they lack in idealism, though, they make up for in effectiveness. Unlike ethics, law and policy are backed by the coercive force of the state.

Taken together, this means we need new laws to place hard constraints on how A.I. is used and policy to drive more flexible external oversight. Ethics research should be a lodestar for these efforts, articulating clear goals to strive for and rigorous standards against which to judge our progress. Simultaneously, ethics education should work from the inside, guiding technologists as they imagine future tools and bring them into the world.

And what of ethics boards? The purpose of ethics boards—as well as chief ethics officers, internal “AI principles,” and so on—should be to raise awareness and drive self-criticism. They don’t need power; that’s the law’s instrument. What they need is respect and influence. So far they’ve lacked that, but they can earn it if their own organizations follow their advice, and if they’re staffed with qualified people the wider community can trust. If that happens, ethics boards can be more than moral cover. They can serve as a conscience for the tech industry, steering it toward the good (or at least, away from evil) from within.