The Ethical Character of Algorithms—and What It Means for Fairness, the Character of Decision-Making, and the Future of News

Blog Post
April 12, 2019

This is part of The Ethical Machine: Big ideas for designing fairer AI and algorithms, an ongoing series about AI and ethics, curated by Dipayan Ghosh, a former Public Interest Technology fellow. You can see the full series on the Harvard Shorenstein Center website.

MARK MACCARTHY
Adjunct Professor, Communication, Culture & Technology Program, Georgetown University; Senior Fellow, Institute for Technology Law and Policy, Georgetown Law

In the 1960s, following the maxim that we shape our tools and then they shape us, Marshall McLuhan argued that the means by which visual communications media delivered messages would be a stronger influence on our culture and politics than any content they served [1]. Twenty years later, Langdon Winner claimed that once we deploy certain kinds of technology, we are constrained in the types of political structures open to us (such as democratic versus authoritarian), and that technology is often deployed precisely to have this political effect [2]. Twenty years after that, Lawrence Lessig pointed out that software code operates alongside traditional norms and the law as methods of social control [3].

Now the algorithm is king. Algorithms are increasingly used for consequential decision-making in all areas of life. The same questions that troubled these earlier scholars arise again with renewed urgency. Are these mathematical formulas expressed in computer programs value-free tools that can give us an accurate picture of social reality upon which to base our decisions? Or are they intrinsically ethical in character, unavoidably embodying political and normative considerations? Do algorithms have politics, or does it all depend on how they are used?

Part of the answer is easy. Algorithms are often designed to accomplish very specific purposes—for instance, to assess creditworthiness, the likelihood of failure at school, or the need for attention from child protective services. In addition, algorithms with broader potential applications are sometimes used in only one or two specific ways. The ethical character of algorithms then depends on the evaluation of the purposes for which they are designed and used.

But algorithms are intrinsically ethical in character in a deeper sense. It is often impossible to choose between competing algorithms without making ethical judgments [4]. They implicate basic notions of fairness, they change the character of decision-making, and they have political implications for the future of news.

Algorithms and Fairness

Fairness in algorithms, as it turns out, is a nuanced issue. As an illustrative example, take recidivism scores, which estimate the probability that someone will reoffend if released from prison. For scores produced by the COMPAS algorithm, an investigation found that African-Americans who did not reoffend were incorrectly labeled as high risk at nearly twice the rate of whites [5]. Defenders of the scores say they are not biased because their predictive accuracy—that is, the rate at which their predictions of recidivism are correct—is the same for both African-Americans and whites, at about 60 percent. The real issue, they say, is that recidivism rates differ between the groups [6].

When recidivism rates do differ, decision rules that provide predictive parity between groups inevitably produce unequal group error rates [7]. It is possible to build a decision rule on a recidivism score that makes group error rates more equal, but doing so reduces predictive accuracy and, with it, public safety [8]. On the other hand, a decision rule that maintains equal group accuracy rates will perpetuate and aggravate the disparate treatment of African-Americans in an already unfair criminal justice system.
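
To make the trade-off concrete, here is a minimal numerical sketch of the point Chouldechova makes [7]. All of the numbers are illustrative assumptions rather than figures from the COMPAS data; the code simply shows that when two groups reoffend at different rates, a score with the same predictive value and the same false negative rate for both groups is forced to have different false positive rates.

```python
# Illustrative sketch of the fairness trade-off described above.
# All numbers are hypothetical; none are drawn from the COMPAS data.
# The identity follows from the definitions of prevalence (base rate) p,
# positive predictive value (PPV), and false negative rate (FNR):
#   FPR = p / (1 - p) * (1 - PPV) / PPV * (1 - FNR)

def implied_false_positive_rate(base_rate: float, ppv: float, fnr: float) -> float:
    """False positive rate forced by a given base rate, PPV, and FNR."""
    return (base_rate / (1 - base_rate)) * ((1 - ppv) / ppv) * (1 - fnr)

ppv, fnr = 0.60, 0.30  # identical predictive performance for both groups
for group, base_rate in [("Group A", 0.50), ("Group B", 0.30)]:
    fpr = implied_false_positive_rate(base_rate, ppv, fnr)
    print(f"{group}: reoffense rate {base_rate:.0%} -> false positive rate {fpr:.0%}")

# Group A: reoffense rate 50% -> false positive rate 47%
# Group B: reoffense rate 30% -> false positive rate 20%
# With equal PPV and FNR but unequal base rates, members of Group A who will
# not reoffend are wrongly flagged as high risk more than twice as often.
```

Reading the identity the other way around, forcing the false positive rates to be equal while the base rates differ requires letting the predictive value differ between the groups, which is exactly the choice a decision rule has to make.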

This conflict among statistical concepts of fairness reflects a dispute about the aim of antidiscrimination law. One side holds that such laws target group-disadvantaging practices and so should seek to improve outcomes for disadvantaged groups; the other holds that they target the arbitrary misclassification of individuals and so should merely ensure that decisions accurately reflect the current distribution of skills and talents, regardless of the effect on disadvantaged groups [9]. Those who believe in improving outcomes for disadvantaged groups want recidivism algorithms that equalize errors between African-Americans and whites. Those who want to treat people the same regardless of their group membership want recidivism algorithms that accurately capture the real risk of recidivism. When recidivism rates differ, the same recidivism tool cannot achieve both ethical goals.

It is impossible to choose which decision rule to adopt without taking a stand on this controversial political issue. The algorithm inevitably has an ethical character.

How Predictive Algorithms Change the Character of Decision-Making

In an insightful article on how machine learning functions in business, Ajay Agrawal and others note that in order to use machine learning algorithms, businesses often reframe their tasks as prediction tasks. For instance, autonomous cars are not programmed to drive; they are programmed to anticipate what a human driver would do [10].
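
The contrast can be illustrated with a toy sketch (hypothetical data and function names, not any real vehicle system): the rule-based version encodes how to drive, while the prediction version contains no driving rules at all and simply looks up what a human driver did in the most similar logged situation.

```python
# Toy contrast between a task written as explicit rules and the same task
# recast as a prediction of human behavior. All data are hypothetical.

# Rule-based version: the programmer encodes how to drive.
def rule_based_speed(obstacle_distance_m: float) -> float:
    return 0.0 if obstacle_distance_m < 10 else 50.0

# Prediction version: no driving rules, only a log of (situation, human action)
# pairs. The "decision" is a prediction of what a human driver did in the most
# similar recorded situation (a one-nearest-neighbor lookup).
human_driving_log = [(5.0, 0.0), (20.0, 25.0), (40.0, 45.0), (80.0, 60.0)]

def predicted_speed(obstacle_distance_m: float) -> float:
    _, speed = min(human_driving_log,
                   key=lambda record: abs(record[0] - obstacle_distance_m))
    return speed

print(rule_based_speed(30.0), predicted_speed(30.0))  # 50.0 25.0
```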

Reframing decisions as prediction tasks, however, changes the way decisions are made. Predictive tools inevitably foster consequentialist thinking, even in contexts where such thinking was not previously the norm. Some ethical theories, such as deontology, which deems certain actions to be intrinsically right or wrong, explicitly reject consequentialism as the right standard for making ethical decisions [11].

Because algorithms inevitably privilege decision-making that relies on prediction, they seem completely inappropriate in certain contexts. For instance, people generally do not think that the guilt or innocence of defendants should be based on a prediction of whether they are likely to commit crimes in the future—and the law backs this general moral intuition. Defendants are tried for what they are alleged to have done in the past, and predictions about their future behavior are simply irrelevant to that question. A recidivism score would be entirely out of place in a determination of guilt or innocence.

To return to our previous example, the widespread use of recidivism scores promotes the idea that sentencing and parole should be based on the consequences of the decision rather than on traditional notions of what is just punishment. Using a predictive tool like a recidivism score “renders more appealing theories of punishment that function with prediction” [12].

The use of recidivism scores replaces the question of whether people deserve a lengthy sentence with the separate question of whether it is safe to release them. In using a risk score, we have changed our notion of justice in sentencing.

Privileging consequences is intrinsic to algorithms. An algorithm cannot be modified to avoid its focus on consequences. As the use of algorithms spreads from domain to domain, it will inevitably bring with it this predictive mode of decision-making.

The Effect of Personalization on the Future of News

Traditional newspaper editors and broadcast news directors select their stories based, at least in part, on their assessment of what the public needs to know about the political events and controversies of the day.

But when platforms decide which news stories to present through search results and news feeds, they do not engage in the same exercise of editorial judgment. Instead, they replace judgment with algorithmic predictions. They attempt to predict what news each individual person is looking for, and then they adopt decision rules designed to maximize user engagement.
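
A minimal sketch of that shift, with hypothetical stories, scores, and field names: the editorial feed orders stories by a judgment of civic importance, while the personalized feed orders them by a model's predicted engagement for a particular reader.

```python
# Hypothetical illustration of editorial selection versus engagement-maximizing
# personalization. The stories, scores, and field names are made up.
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    predicted_click_probability: float  # output of a personalization model
    civic_importance: float             # an editor's judgment, 0 to 1

stories = [
    Story("City council passes budget", 0.05, 0.9),
    Story("Celebrity feud erupts online", 0.40, 0.1),
]

# Editorial selection: rank by judged importance to the public.
editorial_feed = sorted(stories, key=lambda s: s.civic_importance, reverse=True)

# Personalized selection: rank by predicted clicks for this particular reader.
personalized_feed = sorted(stories, key=lambda s: s.predicted_click_probability,
                           reverse=True)

print([s.headline for s in editorial_feed])      # budget story first
print([s.headline for s in personalized_feed])   # celebrity feud first
```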

This personalized design of search and news feed algorithms is not explicitly political in character. But the change in the standard for news selection from judgment to prediction inevitably changes the character of the news that is presented to the public. As many have pointed out, using personalization algorithms to feed people the news they want fosters filter bubbles and echo chambers, increases polarization, and provides incentives for clickbait [13].

Technology platforms are not the only mechanisms driving us to our corners. A decade ago, Bill Bishop noted that Americans are increasingly segregating themselves into communities where they are surrounded by people who think the way they do [14]. Almost 20 years ago, researchers showed that discussion with like-minded people tended to move people toward extremes [15].

Still, information cocoons on online platforms exacerbate these polarization effects. They also contribute to the success of stealthy campaigns that aim to influence political opinion with false or misleading messages and advertisements. A recent study pointed out that the techniques used in these campaigns are the same as those used in mainstream political advertising and commentary, and indeed in all forms of digital marketing [16].

Most people still garner their political information from local television and newspapers. But about 60 percent of millennials get their political news from social media [17]. As this trend continues, the character of the algorithms that sequence their news stories will dictate the future of news.

Technology companies themselves recognize these effects, perhaps belatedly. They say that they take seriously their responsibilities to keep their systems free of hate speech, terrorist material, fake news, and disinformation campaigns—and are stepping back from the philosophy that “anything goes if it works with their algorithms to drive up engagement” [18].

What can be done? Cass Sunstein recommends improving algorithms that personalize news by exposing citizens to “materials that they would not have chosen in advance,” providing “a wide range of common experiences” and developing processes to enable citizens to “distinguish between truth and falsehood—and to know when democratic processes are being manipulated” [19].

These recommendations reflect the understanding that personalization algorithms applied to news selection have intrinsic political effects. Search engines and social media platforms are not acting as neutral administrators of technological systems. The decision criteria embodied in their systems have profound political consequences for our democracy.

Where Do We Go from Here?

Decades ago, McLuhan, Winner, and Lessig pointed us in the right direction in seeking to understand that algorithms have politics. These mathematical formulas embody choices regarding what Winner describes as “arrangements of power and authority in human associations as well as the activities that take place within those arrangements” [20]. What we do in the face of this reality—for instance, how much transparency algorithm developers and those who deploy them provide so that they can be held accountable for their politics—is a different and very complex question. But the start of practical wisdom and effective action is to recognize the ethical character of algorithms.

The views expressed in this article are those of the author and not necessarily those of SIIA or any of its member companies.

References

  1. Marshall McLuhan, Understanding Media: The Extensions of Man (Cambridge: MIT Press, 1994).
  2. Langdon Winner, “Do Artifacts Have Politics?” Daedalus 109, no. 1 (Winter 1980).
  3. Lawrence Lessig, Code and Other Laws of Cyberspace (New York: Basic Books, 1999).
  4. Felicitas Kraemer, Kees van Overveld, and Martin Peterson, “Is There an Ethics of Algorithms?” Ethics and Information Technology 13, no. 3 (September 2011): 251–260, available at https://link.springer.com/article/10.1007/s10676-010-9233-7.
  5. Julia Angwin and Jeff Larson, “Bias in Criminal Risk Scores Is Mathematically Inevitable, Researchers Say,” ProPublica, December 30, 2016, https://www.propublica.org/article/bias-in-criminal-risk-scores-is-mathematically-inevitable-researchers-say.
  6. William Dieterich, Christina Mendoza, and Tim Brennan, “COMPAS Risk Scales: Demonstrating Accuracy, Equity, and Predictive Parity,” Northpointe, July 8, 2016, http://go.volarisgroup.com/rs/430-MBX-989/images/ProPublica_Commentary_Final_070616.pdf.
  7. Alexandra Chouldechova, “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments,” FATML 2016 conference paper, October 2016, available at https://arxiv.org/abs/1610.07524/.
  8. Sam Corbett-Davies et al., “Algorithmic Decision Making and the Cost of Fairness,” in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, June 10, 2017, available online at https://arxiv.org/abs/1701.08230v4.
  9. Mark MacCarthy, “Standards of Fairness for Disparate Impact Assessment of Big Data Analytics,” Cumberland Law Review 48, no. 102 (April 2018), http://dx.doi.org/10.2139/ssrn.3154788.
  10. Ajay Agrawal, Joshua Gans, and Avi Goldfarb, “The Simple Economics of Machine Intelligence,” Harvard Business Review, November 17, 2016, https://hbr.org/2016/11/the-simple-economics-of-machine-intelligence.
  11. “When a plain man fulfills a promise because he thinks he ought to do so, it seems clear that he does so with no thought of its total consequences, still less with any opinion that these are likely to be the best possible. He thinks in fact much more of the past than of the future. What makes him think it right to act in a certain way is the fact that he has promised to do so—that and, usually, nothing more.” W.D. Ross, “What Makes Right Actions Right,” reprinted in Moral Philosophy (Fourth Edition), eds. Louis P. Pojman and Peter Tramel (Indianapolis: Hackett Publishing, 2009).
  12. Bernard E. Harcourt, “Against Prediction: Sentencing, Policing, and Punishing in an Actuarial Age,” Chicago Public Law Working Paper No. 94, May 2005, available at https://ssrn.com/abstract=756945.
  13. Eli Pariser, The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think (New York: Penguin Press, 2011).
  14. Bill Bishop, The Big Sort: Why the Clustering of Like-Minded America Is Tearing Us Apart (New York: Houghton Mifflin, 2008).
  15. Cass R. Sunstein, “The Law of Group Polarization,” University of Chicago Law School, John M. Olin Law & Economics Working Paper No. 91, December 1999, available at https://ssrn.com/abstract=199668.
  16. Dipayan Ghosh and Ben Scott, “Digital Deceit: The Technologies Behind Precision Propaganda on the Internet,” New America Foundation, January 23, 2018, https://www.newamerica.org/public-interest-technology/policy-papers/digitaldeceit/.
  17. Jeffrey Gottfried and Michael Barthel, “How Millennials’ Political News Habits Differ from Those of Gen Xers and Baby Boomers,” Pew Research Center, June 5, 2015, http://www.pewresearch.org/fact-tank/2015/06/01/political-news-habits-by-generation/.
  18. Nicholas Thompson and Fred Vogelstein, “Inside the Two Years That Shook Facebook—and the World,” Wired, February 2, 2018, https://www.wired.com/story/inside-facebook-mark-zuckerberg-2-years-of-hell/.
  19. Cass Sunstein, “Guest Post: Is Social Media Good or Bad for Democracy?” Facebook Newsroom, January 22, 2018, https://newsroom.fb.com/news/2018/01/sunstein-democracy/.
  20. Winner, “Do Artifacts Have Politics?”