Introduction

Since OpenAI’s ChatGPT, a chatbot built on a large language model, burst onto the scene in November 2022, society has seen generative artificial intelligence (AI) rapidly shape conceptions of work and life online. Large language models are an example of generative AI built on recent technical advances in large, multi-purpose “foundation models.” But AI encompasses not only the generation of content but also a variety of analytical and predictive tools. AI is a broad umbrella term that people have used for decades “to refer to both a field of study and the machine-based systems that use mathematical models to analyze inputs to complete specific tasks, such as making predictions, recommendations, content, and decisions.”1

Generative AI’s rapid ascendance in the current zeitgeist has spurred policymakers to focus on governing AI more broadly. In the United States, the Biden administration responded swiftly by issuing a Blueprint for an AI Bill of Rights2 and requirements for federal agencies in Executive Order 14110 on “the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”3 As a new presidential administration and Congress prepare to take power, it is critical that—in this time of experimentation—the United States determine how it wants the AI model ecosystem to shape its democracy.

The history of the internet’s evolution contains an important example of the consequences of failing to prioritize openness. The internet’s early years were deeply “generative,” not in the currently popular sense of “generative AI,” but rather in this sense: an environment that fosters wide-ranging creativity and innovation.4 But due to a confluence of factors5—including a failure to prioritize openness—that relatively open era has given way to dominance by a few large companies. This era of consolidation has eroded the power of internet users, reduced space for new competitors, and constrained the potential for unexpected innovation to emerge from diverse corners. Many of the companies that have dominated this phase of consolidation are now at the center of rapid advances in AI, and society stands at another important juncture in the internet’s evolution. How broadly accessible and competitive does the United States want the AI landscape to be? What choices will help ensure that AI best serves democratic institutions and norms globally?

Three broad categories of intervention receive persistent attention in the discourse about how to accomplish this goal: (1) governmental regulation and oversight of AI,6 (2) developing “public AI” models controlled by non-corporate actors,7 and (3) ensuring that the AI model ecosystem is sufficiently open in terms of code and other transparency measures. While we point out some of the intersections between these categories, this report focuses on the ways in which openness can better align AI with serving the public interest.

Many of the policy debates around openness in AI models have focused narrowly on the risks posed by unpredictability in the downstream uses of open-source models. It is indeed important to study the marginal risk posed by open models,8 but most of the current discourse around risk does not fully account for the benefits that openness can provide. Which lessons about the many benefits of openness in AI models should the United States draw from the long history of open-source software? Which aspects of openness beyond code or model weights should be encouraged? Examples from open-source software can help clarify some of the benefits that openness can bring to AI development. While AI models and open-source software are not perfectly analogous and differ in important ways, many of the key benefits found in open-source software will transfer to AI.

We do not argue for a reductive one-to-one equation of open-source software development and open societies, but the principles of open software and open models do reflect the ethos of open societies foundational to democracy.9 The long history of open-source software has demonstrated the importance of openness to several societal benefits that reinforce democratic principles. These benefits include promoting transparency and public accountability, fostering unexpected and iterative innovation, promoting educational and research uses of technology, and bolstering security. All of these benefits are key elements of open societies.

Importantly, the concept of openness in AI models should extend beyond publicly available code or model weights to also encompass transparency about how technical decisions for models are made and who makes them. This broader conceptualization underlies how we use the terms “openness” and “open models.”

If policymakers continue to focus disproportionately on the risks of open models, they will help keep the bulk of AI innovation in the hands of a few powerful companies that already dominate social media, cloud, and search capabilities. But this trend is not inevitable. If U.S. policymakers want the benefits of AI to be broadly and equitably distributed and to serve democratic values, then they must consider what kind of AI ecosystem to incentivize and build. An AI ecosystem characterized by open models able to thrive alongside proprietary ones can promote public transparency and accountability, innovation from unexpected corners, new avenues for education and research, and security. To imagine how such an AI ecosystem might look, we first must define the key attributes of an open model.

Citations
  1. Sarah Forland, “Demystifying AI: A Primer,” New America’s Open Technology Institute, October 7, 2024, source.
  2. White House Office of Science and Technology Policy, “Blueprint for an AI Bill of Rights,” White House, October 2022, source.
  3. Office of Science and Technology Policy, Executive Order 14110, the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (White House, October 30, 2023), source.
  4. Jonathan L. Zittrain, The Future of the Internet—and How to Stop It (Yale University Press and Penguin UK, 2008), source.
  5. A non-exhaustive list of these factors includes delays in passing federal privacy legislation, in updating pro-competitive legal and regulatory tools, in swiftly developing standards for data portability, and in purposefully aligning urgent public-interest objectives with needed financial and technical investments.
  6. See, e.g., Ami Fields-Meyer and Janet Haven, “Artificial Intelligence, Illiberalism, and the Threat to Democracy,” Foreign Policy, October 31, 2024, source.
  7. See, e.g., Public AI Network, Public AI: Infrastructure for the Common Good (Public AI Network, August 10, 2024), source; Ganesh Sitaraman and Alex Pascal, “The National Security Case for Public AI,” Vanderbilt Policy Accelerator, September 27, 2024, source; Nathan Sanders, Bruce Schneier, and Norman Eisen, “How Public AI Can Strengthen Democracy,” Brookings, March 4, 2024, source.
  8. See, e.g., Dual-Use Foundation Models With Widely Available Model Weights (National Telecommunications and Information Administration, 2024), source.
  9. Lawrence Lessig, “Open Code and Open Societies: Values of Internet Governance,” Chicago-Kent Law Review 74 (February 1999): 1405–1420, source.
