Problem 2: "Arms Race" Framing Treats AI as a Single Technology

Relatedly, the arms race framing often prompts discussion of artificial intelligence as a single technology—but this, too, is inaccurate and could lead to bad policymaking. There is no true consensus definition of artificial intelligence, even among experts, but it’s clear that AI is not one technology. Instead, artificial intelligence is “a catch-all concept alluding to a range of techniques with varied applications in enabling new capabilities,”1 from image recognition to disease prediction. Thinking of AI as a single thing, however, threatens to lead policymakers to mishandle AI risks and miss out on upsides.

For context, much of what today’s commentators refer to as “AI” is just machine learning. While the term is often used without definition, the premise is relatively simple: Computers can identify patterns in data and use that pattern analysis to make decisions on their own.2 For instance, a computer that receives labeled images of cats and dogs can, with the right algorithm, learn to distinguish between the two classes of images. Researchers would feed labeled images of the two animals, perhaps hundreds or thousands at a time, to the model. As this occurs, the model begins to pick up on what characterizes each type of image, as well as what distinguishes one type from the other (through statistical techniques whose inner workings are often opaque even to the programmer). Eventually, this so-called training process is complete, at which point the human programmers feed the machine unlabeled photos of cats and dogs, hoping it can now identify which is which. The computer’s performance on these tests provides benchmarks against which further improvements can be measured.3
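To make that training loop concrete, here is a minimal sketch in Python using scikit-learn. The two numeric “features” standing in for each image, and the values they take, are invented purely for illustration; real image classifiers learn from raw pixels, and nothing here describes any particular deployed system.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1. Assemble a labeled dataset. Each row holds two invented numeric
#    "features" standing in for an image of a cat or a dog.
cats = rng.normal(loc=[0.8, 0.3], scale=0.1, size=(500, 2))
dogs = rng.normal(loc=[0.4, 0.7], scale=0.1, size=(500, 2))
X = np.vstack([cats, dogs])
y = np.array(["cat"] * 500 + ["dog"] * 500)

# 2. Hold some labeled examples back so the trained model can later be
#    tested on data it has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# 3. "Training": the algorithm adjusts its internal parameters to separate
#    the two classes in the training data.
model = LogisticRegression().fit(X_train, y_train)

# 4. "Testing": the model labels the held-out examples, and its accuracy
#    against the true labels serves as a benchmark for further improvement.
print(f"Held-out accuracy: {model.score(X_test, y_test):.2%}")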

This is precisely why “AI” should not be used to refer to one technology: The process for teaching a computer to identify pictures of household pets is different from the process for teaching it to comprehend human language. Similarly, machine learning models for facial recognition are designed differently from machine learning models used to generate risk scores on convicts or estimate someone’s likelihood of defaulting on a loan. Depending on the task, the code itself—the machine learning model and/or its specific properties—will differ between AI implementations. The same goes for the dataset, which is often tailored to a single use case. Even within a single application area of AI like image recognition, detecting enemy combatants’ faces would require a notably different dataset than the one used for a cat-versus-dog image classifier.
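As a rough illustration of how both the code and the data are task-specific, the hypothetical sketch below sets up two small scikit-learn pipelines: an image-style classifier that expects rows of numeric pixel values, and a text classifier that must first convert raw strings into word-frequency features. The pipeline choices and the toy data are invented for this example and do not describe any real system.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Task A: image-style classification. The model expects each example to be
# a row of numbers (here, 20 made-up 8x8 "images" flattened to 64 values).
image_classifier = make_pipeline(StandardScaler(), LinearSVC())
image_classifier.fit(
    np.random.rand(20, 64),
    np.random.randint(0, 2, size=20),  # labels: 0 = "cat", 1 = "dog"
)

# Task B: text classification. The model expects raw strings, which the
# vectorizer first turns into word-frequency features.
text_classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
text_classifier.fit(
    ["the drone crossed the border at night", "the cat sat on the mat"],
    ["security", "pets"],
)

# Neither pipeline can be trained on the other's dataset: the inputs, the
# preprocessing, and the model itself are all tailored to the task.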

Discussing a single arms race, though, makes the development of AI sound as if it’s focused on one technology.4 Commentators, in turn, talk of China “beating” the United States in AI5 without any clear sense of what that means—what “winning” would look like, or what it would take for China to achieve it. Will Chinese tech giant Tencent develop a more accurate facial recognition system than the FBI? Will Chinese company Baidu do more to minimize the bias6 in its AI systems than Amazon does in the systems it builds? Will China’s military drones fly faster than ours, or its intelligence service’s natural language processors spy on phone calls better than those of an American private security firm?

In reality, it’s difficult to answer these questions, because the underlying logic of a single AI arms race, with a clear winner and a clear loser, is flawed.7 Further, referring to China as a single entity—even granting the government’s relatively tight control over industry compared to the United States—loses much important nuance. This could have many damaging effects. Because the U.S. government has yet to develop a cohesive national AI strategy, putting many forms of AI in a single bucket may yield disastrous risk management. Skin cancer predictors carry different economic, legal, social, political, and ethical risks than facial recognition systems deployed in poor urban centers. Relatedly, some forms of artificial intelligence—like a system that’s world-class at playing games such as Dota 2 or Go—may make for grabby news headlines but likely have much less strategic value in U.S.-China great power competition than, say, an AI system in a lethal autonomous weapon.

Artificial intelligence is not a single technology, and policymakers must recognize that AI is instead a catch-all term for many distinct technologies, many of which have their own development methods and timelines. Different applications of AI will advance at different speeds and with different levels of accuracy and effectiveness. Other factors, such as the computing power needed to run particular functionalities or the data used to test a given system, will also vary. For policymakers to better invest government resources in AI development—and to better coordinate non-governmental efforts—it’s essential that artificial intelligence’s many forms are not thrown into a single bucket and treated in the same fashion. As argued above, doing so will almost certainly result in U.S. policymakers mishandling AI risks while missing out on critical AI upsides. Treating AI as one thing greatly oversimplifies AI development, at the peril of the United States’ economic, technological, and strategic leadership.

Citations
  1. Elsa B. Kania, “The Pursuit of AI Is More Than an Arms Race,” Defense One, April 19, 2018, source.
  2. For more on this, see: Robert D. Hof, “Deep Learning,” MIT Technology Review, n.d., source; and Karen Hao, “What is machine learning? We drew you another flowchart,” MIT Technology Review, November 17, 2018, source.
  3. This is a deliberate oversimplification. Once again, see the previous endnote for a more detailed primer on some machine learning basics.
  4. Political scientist Michael Horowitz argues that “[t]here will not be one exclusively military AI arms race” but instead “many AI arms races, as countries (and, sometimes, violent nonstate actors) develop new algorithms or apply private sector algorithms to help them accomplish particular tasks.” Again, the concept of an “arms race” is perhaps too constraining and too winner-takes-all, yet his point about multiple, overlapping, intersecting AI development tracks stands true. See: Michael C. Horowitz, “The Algorithms of August,” Foreign Policy, September 12, 2018, source.
  5. See previous references on the permeation of the winner-takes-all “arms race” rhetoric in commentary from journalists, policymakers, and analysts. Also, it’s worth noting that even in some interviews where artificial intelligence experts bring out more nuance in the conversation, headlines reframe the conversation in the context of a winner-takes-all AI “arms race.” See, for instance: Phred Dvorak, “Which Country Is Winning the AI Race—the U.S. or China?” The Wall Street Journal, November 12, 2018, source.
  6. While trying to remain relatively high-level, it’s still worth noting that understandings of “bias” take many forms. For just some discussion of different fairness definitions, see: Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel, “Fairness Through Awareness,” arxiv.org, November 29, 2011, source; Moritz Hardt, Eric Price, and Nathan Srebro, “Equality of Opportunity in Supervised Learning,” arxiv.org, October 7, 2016, source; Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan, “Inherent Trade-Offs in the Fair Determination of Risk Scores,” arxiv.org, November 17, 2016, source; Matt Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva, “Counterfactual Fairness,” 31st Conference on Neural Information Processing Systems, 2017, source; and Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Schölkopf, “Avoiding Discrimination through Causal Reasoning,” arxiv.org, January 21, 2018, source. These are pulled from a syllabus developed by Duke University’s Ashwin Machanavajjhala.
  7. This framing is flawed for the aforementioned reasons, but also because technological “superiority is not synonymous with security.” When it comes to technologies like artificial intelligence, “the most reasonable expectation is that the introduction of complex, opaque, novel, and interactive technologies will produce accidents, emergent effects, and sabotage” that will cause “the American national security establishment [to] lose control of what it creates.” See: Richard Danzig, “Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority,” Center for a New American Security, May 30, 2018, source. Page 2.