Executive Summary
The rapid ascendance of generative artificial intelligence (AI) in today’s zeitgeist has spurred policymakers to prioritize governing AI more broadly. In the United States, lawmakers and other stakeholders, including developers and civil society, are considering how AI can better serve democratic institutions, the economy, and consumers. The Biden administration has acted swiftly in issuing the Blueprint for an AI Bill of Rights and Executive Order 14110, which imposes a detailed array of requirements on federal agencies. In addition to executive action, states have also begun to issue broad and specific laws governing AI, but Congress has yet to take significant action.
Key Attributes of Openness
Society is in the early years of AI’s development and even earlier in developing approaches to governing it. This nascent phase of AI governance presents an opportunity to better understand the concept of “openness” in the context of AI. In this report, we argue that encouraging greater openness in the AI model ecosystem is essential to shaping AI’s development in ways that serve democratic values and the public interest.
“Open” and “closed” do not form a rigid binary into which AI can be neatly sorted. It is more helpful to envision a spectrum of openness. We identify five key attributes of openness for AI models:
- Open code that can be downloaded, modified, shared, and used by people other than the model’s creators;
- Open licenses that allow third parties to use the model;
- Transparency about model inputs (data sources, model weights);
- Transparency about envisioned threats from models and mitigations against undesirable downstream effects (e.g., malicious actors fine-tuning the model to cause clear harms); and
- Open standards for interconnection and communication among AI models that allow people and companies to switch between models (portability) and for models to interoperate with one another.
Because these attributes of openness encapsulate both technical and non-technical aspects of transparency, they acknowledge that AI models are not merely software but broader human projects shaped by deliberate governance choices.
Key Benefits of Openness
Many policy discussions about open models focus narrowly on the security risks posed by such models or their open weights. A nuanced, empirically grounded discussion of risks is important, but a narrow focus on risk fails to account for the other benefits of openness. Specifically, in the case of AI, promoting the five attributes of openness identified above can create the kind of AI ecosystem that better serves public transparency and democratic accountability, innovation and competition, education and research, and even security. The report discusses the ways in which promoting greater openness in the AI model ecosystem furthers each of these societal benefits.
Recommendations for Promoting Openness
A variety of stakeholders in the United States can play vital roles in creating a more open AI model ecosystem.
- Policymakers should continue to build governmental capacity to monitor and mitigate the marginal risks posed by open models. They should also craft legislative and policy requirements that promote transparency about a model’s technical elements, as well as its design and governance, and encourage and incentivize developers and companies to build model interoperability. Lastly, they should avoid placing broad restrictions on open models, including through means like export controls, licensing requirements, or broad imposition of liability on developers for downstream harms.
- Researchers should comparatively study the organizational structures and practices of teams developing open-source models to identify best practices in the development and governance of AI models. Doing so will enable them to identify resource gaps, prioritize areas of research in the public interest, and articulate use cases private companies or AI labs are unlikely to address because of a lack of commercial interest.
- AI companies should embrace openness along multiple axes (code, model weights, model training data, and model interoperability) when developing models. They also should participate in and invest in the maintenance of open-source AI projects to help ensure that popular model projects have adequate resources to find and quickly address security vulnerabilities.
- Developers should use best practices in software development that promote both secure code and better insight into a model’s structure and training, as well as the decision-making behind those components. In addition, they should explore the design of open protocols and standards for promoting model interoperability.
- Civil society organizations should continue to creatively explore the ways in which openness in AI models can further democratic accountability and public-interest objectives, and they should widely communicate these benefits. They should also invest in in-house AI expertise so they can gain critical insight into how the technology functions and can interact with and evaluate open model projects.