Recommendations and Areas for Further Study

There is already a trend toward consolidation of artificial intelligence (AI) in the hands of today's largest technology companies. The more concentrated the AI industry becomes, the fewer opportunities will exist to learn how the technology works and the fewer avenues there will be for insight into why a model behaves the way it does. More open models can help counter the negative effects of an AI market dominated by closed-source models.

Open-source models can build on the open-source principles that have driven so much innovation in technology. Thankfully, many influential players in tech recognize this. Encouragement for open AI models has come from sources as varied as the Cybersecurity and Infrastructure Security Agency, which urges Americans to apply the lessons of open-source software to open models,1 and Mark Zuckerberg, who noted that “the bottom line is that open source AI represents the world’s best shot at harnessing this technology to create the greatest economic opportunity and security for everyone.”2

Diverse stakeholders can play vital roles in ensuring that open models thrive in ways that further transparency and accountability, spark innovation, democratize technical education and research, and bolster the security of AI models and systems. All of them should appreciate how much a healthy open AI model ecosystem contributes to transparency and democratic accountability, innovation, competition, and community-driven applications. What follows are specific steps that U.S. policymakers, researchers, AI companies, developers, and civil society organizations can take to promote those objectives.

Policymakers

Legislators and policymakers should incentivize the development of models that maximize openness along its multiple axes and establish rules that require greater transparency and accountability from all models, wherever they fall on the spectrum of openness. Taken together with serious study of marginal risk, the implementation of laws, policies, technical standards, and meaningful transparency norms can produce public accountability and a race to the top in good governance.

  • Continue to build governmental capacity to monitor and mitigate the marginal risks posed by open models.3
  • Craft legislative and policy requirements that promote transparency about model design and governance. These should include access to training data and other inputs that shape model decision-making, as well as information about how developers and deployers address the risks of model misuse.
  • Encourage and incentivize developers and companies to build model interoperability based on standard communication protocols among models. Doing so promotes collaborative research and the free flow of data and enables people to port data among models.
  • Avoid placing broad restrictions on open models, including through means like export controls, licensing requirements, or broad imposition of liability on developers for downstream harms.4

Researchers

Research communities are essential to examining and monitoring the openness and health of model systems, as well as to identifying AI applications that serve the public interest. Technical researchers also play a role in evaluating the relative efficacy of AI technologies.

  • Engage in comparative studies of the organizational structures and practices of teams developing open-source models. Which approaches to governance represent best practices? Are they modeled on the structures of other open-source software projects? Are they sustainably resourced?
  • Identify areas of research that the private sector is unlikely to undertake, and articulate use cases that private companies or AI labs are unlikely to develop because of a lack of commercial interest. Such research could include improving the efficiency of model training or lowering the hardware thresholds required to run models.

AI Companies

The choices that AI companies make will have profound impacts on the nature of the AI ecosystem. Companies that wish to promote greater openness in AI should do the following.

  • Embrace openness along multiple axes when developing models. These axes include technical elements such as access to code and model weights, transparency about training data, and interoperability among models.
  • Participate and invest in the maintenance of open-source AI projects to ensure that popular model projects have the resources they need to find and address vulnerabilities in a timely fashion.5 Addressing resource gaps—perhaps most urgently, compensation for maintainers—builds a foundation on which open-source AI projects can grow.

Developers

Developers have always been critical to the existence and flourishing of open source, and they are essential to building and maintaining a vibrant ecosystem of open models in the age of widespread AI. AI developers can choose how much they share about what goes into the models they produce and how those decisions are made. They will also be central in developing the protocols and standards that allow for model interoperability, a key feature of promoting competition and consumer choice. Furthermore, developers must understand both the use cases for their AI and the concerns around its overuse and abuse.

  • Use best practices for software development, particularly security practices, that promote both secure code and better transparency and insight into a model’s structure and training.
  • Study and experiment with designing open protocols and standards for moving data between models, primarily so that models can interoperate more easily. Other uses, such as moving specialized knowledge between models, should also be explored. An appropriate venue for such standards could be an existing standards development organization (SDO), or a new SDO may need to be created along the lines of existing bodies like the Internet Engineering Task Force. Open standards and protocols can help limit anticompetitive lock-in effects and promote research and education.

Civil Society

Civil society plays an indispensable role in monitoring and shaping AI’s many uses. Civil society organizations can focus both on identifying the risks that AI systems pose and on highlighting the benefits of responsible AI use. They can additionally use open models as powerful tools to further their own research, as well as to identify concerns with the technology or its uses.

  • Creatively explore the ways in which openness can further democratic accountability and other public-interest objectives. Continue to view AI through a broad lens, taking into account both risks and possible benefits. Civil society should continue its effective advocacy for the aspects of openness that translate to better accountability and democratic governance, while also emphasizing the wider range of societal benefits, including innovation, education and research, and security.
  • Invest in in-house AI expertise to enable critical oversight of models, open or closed, that is based on a better hands-on understanding of how these technologies work.

Citations
  1. Jack Cable and Aeva Black, “With Open Source Artificial Intelligence, Don’t Forget the Lessons of Open Source Software,” source.
  2. Mark Zuckerberg, “Open Source AI Is the Path Forward,” Meta, July 23, 2024, source.
  3. Dual-Use Foundation Models, source.
  4. For further discussion on how different policy proposals might impact open models, see, e.g., Bommasani, Kapoor, et al., “Considerations for Governing Open Foundation Models,” source.
  5. Tidelift’s 2024 survey of open-source maintainers explains the importance of maintenance to open-source projects and presents responses from over 400 developers. See, e.g., 2024 Tidelift State of the Open Source Maintainer Report (Tidelift, September 2024), source.