Power and Governance in the Age of AI

Experts Reflect on Artificial Intelligence and the Public Good
Brief
March 14, 2024

The influence of artificial intelligence (AI) on our world is growing. In early 2024, New America brought together experts in international relations, computer science, and technology policy to share their thinking on how governments and institutions should navigate AI to harness its strengths and mitigate its risks.

Introduction: How to Think about AI

By Gordon LaForge, senior policy analyst at New America

The first two months of 2024 showed how generative artificial intelligence (AI) is already reshaping economies, institutions, and societies worldwide. Capital flooded to AI companies, and the value of chipmaker Nvidia reached $2 trillion, driving U.S. stock indexes to record highs. Fears of AI-generated disinformation surged as the year of elections got underway and deepfakes plagued public figures from Joe Biden to Taylor Swift. The New York Times' lawsuit against OpenAI and its partner Microsoft—alleging copyright infringement in training AI models—underscored how legal precepts would have to be reconceived. AI dominated the agenda of international gatherings such as the World Economic Forum, while nations from Saudi Arabia to France raced to bolster national AI models and strategies.

The AI age is here, it seems. And yet, it is far from clear what AI will mean for society—or even what it is and how to think about it now. The metaphors out there are polarizing and at times extreme. Depending on who one talks to, generative AI might be a solicitous personal assistant or a cutthroat management consultant; a form of social collaboration or a sentient being; “Moore’s Law for everything” or a nuclear weapon.

In one sense, despite the hype, AI is just the next phase of the decades-long disruptions of the digital revolution. The internet has remade the information environment. Social media has scrambled interpersonal relationships and psychological well-being. Big data, autonomous weapons, and cyber capabilities are changing military conflict. By dominating the global digital economy, tech companies have become the most powerful private entities on the planet, sovereign players in the global order with the might to influence free speech, national security, and other areas of public policy.

But AI is also different. The pace of change is not just rapid but exponential. To give one indicator: Since 2010, training computation, one of three factors that determine the capability of an AI system, has roughly doubled every six months. And unlike social media or other types of software, powerful AI models have emergent properties, developing capabilities and behaviors that are surprising and unpredictable even to the engineers who build those models. One system can have multiple uses, some beneficial and others dangerous.
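To put that doubling rate in perspective, here is a rough, illustrative calculation (assuming, for simplicity, that the six-month doubling held steadily from 2010 through 2024; actual growth has been uneven):

\[
14 \ \text{years} \times 2 \ \tfrac{\text{doublings}}{\text{year}} = 28 \ \text{doublings}
\quad\Longrightarrow\quad
2^{28} \approx 2.7 \times 10^{8}
\]

On that simplified assumption, training compute would have grown by a factor of a few hundred million over the period, compared with roughly a hundredfold (2^7 ≈ 128) had it merely tracked Moore's law of doubling about every two years.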

Though policymakers have been thinking about how to regulate AI since at least 2016, the release of ChatGPT in November 2022 sparked a flurry of governance activity. In April of this year, the European Parliament is expected to pass the AI Act, the world’s first comprehensive legislation to regulate artificial intelligence systems. The United States has issued voluntary rules, industrial policies, and export controls aimed at strengthening domestic AI development and undercutting China, which has a national research and development strategy to become the world’s preeminent power in AI technology. Multilateral organizations, civil society groups, and companies are all proposing and developing standards, principles, and bodies for governing AI.

While the capability of AI is exponentially increasing, the rate of policy development is not. Most regulatory efforts are in their infancy as officials and technologists debate and struggle to understand how AI might affect society and what strategies, laws, and institutions could ensure safety, innovation, and global stability. In this moment, scholars and researchers have a critical role to play in answering these questions and informing sound AI policy.

In February 2024, New America hosted a workshop on AI with leading scholars and researchers in international relations, computer science, complex adaptive systems, and technology policy. Some of these experts were invited to share their perspectives: How should governments, corporations, and nonprofits think about AI? What are the likeliest paths and impacts of the technology? And what can be done to manage the risks AI poses for geopolitics, institutions, and society? Here are some of their thoughts.

Sustaining Democracy in the Age of Generative AI

By Allison Stanger, Russell Leng ‘60 Professor of International Politics and Economics at Middlebury College, co-director of the GETTING-Plurality Research Network at Harvard University, and external professor at the Santa Fe Institute

The best way to think about ChatGPT is as the functional equivalent of expensive private education and tutoring. Yes, there is a free version, but there is also a paid subscription that gets you access to the latest breakthroughs and a more powerful version of the model. More money gets you more power and privileged access. As a result, in my courses at Middlebury College this spring, I was obliged to include the following statement in my syllabus:

“Policy on the use of ChatGPT: You may all use the free version however you like and are encouraged to do so. For purposes of equity, use of the subscription version is forbidden and will be considered a violation of the Honor Code. Your professor has both versions and knows the difference. To ensure you are learning as much as possible from the course readings, careful citation will be mandatory in both your informal and formal writing.”

The United States fails to live up to its founding values when it supports a luxury brand-driven approach to educating its future leaders that is accessible to the privileged and a few select lottery winners. One such “winning ticket” student in my class this spring argued that the quality-education-for-all issue was of such importance for the future of freedom that he would trade his individual good fortune at winning an education at Middlebury College for the elimination of ALL elite education in the United States so that quality education could be a right rather than a privilege.

A democracy cannot function if the entire game seems to be rigged and bought by elites. This is true for the United States and for democracies in the making or under challenge around the world. Consequently, in partnership with other liberal democracies, the U.S. government must do whatever it can to render both public and private governance more transparent and accountable. We should not expect authoritarian states to help us uphold liberal democratic values, nor should we expect corporations to do so voluntarily.

Arguing for the importance of the free world alliance does not require an ideologically fueled approach to AI governance that excludes authoritarian regimes from the table. Rather, a dual-track strategy that builds new global governance regimes around points of common agreement while strengthening existing cooperation and coordination among open societies can reduce the likelihood of catastrophe by creating multiple future intervention possibilities for containing the unexpected. In the end, ironically, both America’s comparative economic advantage and democratic sustainability require understanding that Marx and Engels were right: “The free development of each is the condition for the free development of all.”

Concentrated Industry Power Is Shaping AI

By Sarah Myers West, managing director of the AI Now Institute and formerly a senior advisor on AI at the U.S. Federal Trade Commission

AI as we know it today is a creation of concentrated industry power. A small handful of firms not only control the resources needed to build AI systems at scale—cloud infrastructure, data, and labor—but have also set the trajectory of AI development by influencing AI research for over a decade, increasingly defining career incentives in the field, the metrics of prestige at leading conferences, and what counts as the leading edge of AI innovation. Today’s AI boom is driven at its core by the legacy of the surveillance business model, and its incentive structures are shaped by the existing infrastructural dominance of the small handful of firms that pioneered it. This is what drives the push to build AI at ever larger scale, increasing the demand for resources that only Big Tech firms can provide and further cementing these companies’ considerable advantage.

Understanding these dynamics is particularly important for conversations about global governance: The economic power amassed by these firms exceeds that of many nations, and they’ve demonstrated a willingness to flex that muscle when needed to ensure that policy interventions do not perturb their business objectives. This leads to challenging questions for regulators: Can any single nation amass sufficient regulatory friction to curb unaccountable behavior by large tech firms? If so, how? What is the appropriate role for global governance bodies to play? In the absence of a globally coordinated effort to regulate AI, companies have largely been able to set a self-regulatory tone, leveraging fragmentation to create their own forums for standard setting that become the de facto center for industry governance.

There is nothing inevitable about this technology’s trajectory: It remains open to change. AI has meant different things over its almost 70-year history, from expert systems to robotics to neural networks and now large-scale AI. Effective national regulatory enforcement, combined with coordinated global governance processes, could play a particularly important role in redirecting AI away from the current status quo and toward more beneficial public goals and interests.

Public AI as an Alternative to Corporate AI

By Bruce Schneier, internationally renowned security technologist and lecturer at Harvard Kennedy School

The increasingly centralized control of AI is an ominous sign. When tech billionaires and corporations steer AI, we get AI that tends to reflect the interests of tech billionaires and corporations, instead of the public. Given how transformative this technology will be for the world, this is a problem.

To benefit society as a whole we need an AI public option—not to replace corporate AI but to serve as a counterbalance—as well as stronger democratic institutions to govern all of AI. Like public roads and the federal postal system, a public AI option could guarantee universal access to this transformative technology and set an implicit standard that private services must surpass to compete.

Widely available public models and computing infrastructure would yield numerous benefits to the United States and to broader society. They would provide a mechanism for public input and oversight on the critical ethical questions facing AI development, such as whether and how to incorporate copyrighted works in model training, how to distribute access to private users when demand could outstrip cloud computing capacity, and how to license access for sensitive applications ranging from policing to medical use. This would serve as an open platform for innovation, on top of which researchers and small businesses—as well as mega-corporations—could build applications and experiment. Administered by a transparent and accountable agency, a public AI would offer greater guarantees about the availability, equitability, and sustainability of AI technology for all of society than would exclusively private AI development.

Federally funded foundation AI models would be provided as a public service, similar to a health care public option. They would not eliminate opportunities for private foundation models, but they could offer a baseline of price, quality, and ethical development practices that corporate players would have to match or exceed to compete.

In creating an AI public option, the key piece of the ecosystem the government would control is the set of design decisions involved in training and deploying AI foundation models. This is the area where transparency, political oversight, and public participation can, in principle, produce outcomes more democratically aligned than those of an unregulated private market.

The need for such competent and faithful administration is not unique to AI, and it is not a problem we can look to AI to solve. Serious policymakers from both sides of the aisle should recognize the imperative for public-interested leaders to wrest control of the future of AI from unaccountable corporate titans. We do not need to reinvent our democracy for AI, but we do need to renovate and reinvigorate it to offer an effective alternative to corporate control that could erode our democracy.

The Challenges of Regulating AI Are Not New

By Stephanie Forrest, professor of computer science in the School of Computing and Augmented Intelligence at Arizona State University and an external faculty member of the Santa Fe Institute

Many of the most pressing concerns related to AI highlight familiar digital disruptions that governance has failed to address over the past 25 years. We live in a world of mass personal data collection, intrusive targeted advertising, pervasive surveillance and workplace monitoring, censorship and content moderation, online networks of illicit activity, addictive applications, rapid lifestyle changes (think dating apps), and the upending of longstanding business models (journalism, retail, taxis, and many others). Advances in digital technology have improved many people’s lives, but they have come at the price of increasing inequality, psychological distress (for teenagers especially), weakening of privacy and civil liberties, concentration of power in monopolistic tech companies, rampant misinformation, potentially destabilizing international competition, and loss of human agency.

These longstanding problems are the same ones that tend to worry people about new generative AI systems. Thus, the first step for thinking about governance of AI should be to take a hard look at why earlier attempts at digital governance have failed and what new approaches might be tried.

To the extent that AI creates new threats, they are likely to be subtle and dispersed, involving nearly invisible social and individual engineering, so-called “nudges,” both deliberate and unintentional. These might only become apparent in the long run, just as it took many years before the threats from tobacco and environmental pollutants became clear. However, it is safe to say that relying on AI to do more of our writing, speaking, and thinking is likely to train humans to be even more passive and trusting of computation. It may rewire our brains (especially for youth) in ways we don’t yet understand. And it may make us even more dependent on addictive apps that substitute for “in real life” interaction, education, and moral judgment. These are plausible and chilling threats from widespread use of AI, and they will be challenging to regulate or manage.

The internet emerged as a borderless, resilient, and distributed system without geography-based regulation, but governments are increasingly changing that. Many look to the European Union as a beacon of hope, with its data protections and new AI regulations. It remains to be seen how these will play out, how large tech companies will respond, and whether or not there will be unanticipated ripple effects.

There is no equivalent of the EU AI Act in the U.S. federal government, but many states are acting as laboratories for new regulations. In 2020, Washington State enacted a law regulating government use of facial recognition software; New Mexico recently passed a deepfake disclosure law; and several AI-related bills are being debated in California. These and other state-level experiments may lead the way for the country and even international bodies, just as California’s vehicle emissions standards became de facto rules for the auto industry.

Voluntary measures are also emerging from civil society, companies, and ordinary citizens focused on reputation management, coordination among adversaries, and evolving social norms. Examples include promoting a return to flip phones, growing public awareness about the impact of social media on mental health, and organizations worldwide proposing voluntary AI standards. Such measures are not a replacement for binding regulation and will do little to diminish concentrated corporate power or substantially restore user privacy and agency, but they can help incubate new solutions, build advocacy coalitions, and change social behavior.

Developing AI governance will almost certainly be a messy process with contradictions, failures, and individual injustices. But a pragmatic, multi-pronged approach could be more effective than soaring statements, high-minded guiding principles, grand international bargains, and good intent. Policymaking that tackles specific problems in narrow domains—such as criminal sentencing, worker protection, vehicle safety, or insurance settlements—is likely to make a greater difference, and to make it sooner.

AI and the Co-Evolution Dilemma

By Nazli Choucri, professor of political science at MIT and senior faculty at the Center for International Studies

Artificial intelligence is interconnected with the internet, the core of cyberspace. To understand how AI will shape international relations, and vice versa, it helps to first understand the power of the co-evolution dilemma. Both cyberspace and international relations are complex systems, each constantly changing and evolving. But these two systems are not independent of one another; they are interlocked and mutually dependent. Communication, information, and data flows are as important to global politics and power as physical goods and services.

The two systems are not static, but they co-evolve over time and space, creating joint and often unexpected effects with new realities, uncertainties, and emergent complexities. The dynamics of co-evolution are shaped by:

  • Situationally shared information (reflected in the creation of global internet standards bodies, for instance);
  • Overlapping networks (think government–business cooperation in global cybersecurity);
  • Emergent path dependency (which refers to how decisions in one domain constrain and shape possibilities in the other); and
  • Self-organizing and self-generating properties (such as the geography of internet infrastructure).

The dilemma of co-evolution emerges because all parts of the two systems change and evolve at different rates. Specifically, cyberspace develops more quickly and “out-evolves” the instruments of the state-based international system, which is not designed to manage persistent and rapid rates of change. In the face of rapid digital technological development, states have pushed back by establishing cyber-focused rules and adapting existing law to emerging situations, among other measures.

None of this has been enough, however, and as a result, the very foundations of authority are challenged. In social science parlance, what keeps a system together is authority supported by the legitimacy of the state and its institutions. But there is no central authority in cyberspace, with the internet at its core, and none in the international system, with its constituent sovereign states.

In practice, private authority, primarily in the form of large technology companies, fills the vacuum. Because they are responding to gaps in capacity and functionality and performing operations that states cannot, these companies accrue an outsized amount of power, agency, and authority over the co-evolving systems of cyberspace and international relations. Thus, the core of the dilemma: Private, unaccountable authority is accorded a particular form of legitimacy, one that enables non-state actors to be powerful enough to establish rules and to pursue private interests—not the interests of the state or even of the market.

The advance of AI is likely to make this dilemma far more pronounced and consequential. AI innovations, applications, and permeations taking place in all parts of the world are close to making AI more akin to household appliances than to critical infrastructure. As open models proliferate, lone inventors, as well as organized entities, have near-unrestricted license to “innovate.” Existing efforts to address the dilemma, such as calling for a pause in AI development or regulation meant to constrain particular applications or operations, are far from adequate.

At the very minimum, there needs to be accountability and transparency. The companies and powerful actors at the core of the dilemma should be called upon to put forth, for public review and assessment, foundational protocols for accountability and transparency in the management of all future AI development and deployment. AI is now a global issue. It must be addressed as such.