Regulate or Innovate? Governing AI amid the Race for AI Sovereignty
Blog Post

May 1, 2025
At a Glance
- Global AI governance has rapidly shifted from collaborative oversight to competitive development.
- Linking AI with national sovereignty creates powerful resistance to meaningful regulation, while widening gaps in technical expertise leave policymakers unable to engage effectively.
- Corporate influence in governance processes threatens to replace public accountability with private rule-making.
- AI benefits remain concentrated in the Global North, while disruptions disproportionately affect the South, undermining inclusive governance efforts.
- The path forward requires balancing innovation with responsibility through democratic coalitions, market incentives, risk-based frameworks, and cross-regional solidarity.
The New Sovereignty Battleground
In just fifteen months—from November 2023 to February 2025—the global approach to AI governance underwent a dramatic reversal. The Bletchley Declaration, signed by 28 nations, including France, warned of “serious, even catastrophic harm” from advanced AI systems. Yet at the Paris AI Action Summit in February 2025, French President Emmanuel Macron declared: “If we regulate before we innovate, we won’t have any innovation of our own.”
This whiplash pivot from collaborative oversight to competitive development reflects a fundamental reframing of AI as a sovereign imperative. Governments now treat AI capabilities as essential to national power, with safety concerns increasingly dismissed as impediments to technological competitiveness.
The transformation began with ChatGPT’s consumer release in late 2022, which sparked a global AI race. Since then, governments have poured resources into what might best be called “AI industrial policy”—channeling public funds and regulatory muscle toward accelerating AI development rather than restraining it. The message from capitals worldwide has become clear: Innovate first. Regulate later, if at all.
As governments pursue innovations in AI, it is important to understand how states, institutions, and industry will combine to govern the technology, and how failure to do so could have unintended consequences. Policymakers designing responsible governance frameworks for AI face three major challenges: the linkage of AI with sovereign ambitions; an expertise gap in understanding the technical complexities of AI; and, relatedly, the outsize role private industry plays in regulating AI. This triple challenge forms the central tension in current AI policy debates.
Stakeholders in AI development and deployment will need to work together to ensure that the competition for sovereignty does not drive a race to the bottom, where ethics are jettisoned in the pursuit of power. This analysis maps the challenges of AI governance in this new landscape. Rather than advocating a single approach, it identifies critical leverage points where targeted interventions can create more democratic, equitable outcomes while preserving innovation.
The Governance Deficit
Of the more than 200 AI laws, regulations, and strategies adopted at the national or supranational level, many focus on developing AI, but far fewer focus on governing it. Most jurisdictions begin with national strategies or ethics policies rather than binding legislation; China’s New Generation AI Development Plan is a prominent example. In the United States, the Trump administration’s January 2025 executive order “Removing Barriers to American Leadership in Artificial Intelligence” replaced former President Joe Biden’s focus on safe and trustworthy AI development. Both the Chinese plan and Trump’s order emphasize investment and innovation over safeguards. Only the EU’s AI Act takes a comprehensive governance approach, imposing transparency and due diligence obligations on developers.
Challenge 1: Technology as National Identity
At the domestic level, the first challenge of designing governance frameworks is the equivalence—often implicit and increasingly explicit—that governments make between sovereignty and technological advancement. In a 2024 report, for instance, the French government explicitly linked AI to national sovereignty: “Our lag in artificial intelligence undermines our sovereignty. Weak control of technology implies a one-way dependence on other countries.” India’s strategy similarly declares: “We are determined that we must have our own sovereign AI,” though researchers question India’s ability to achieve this. This framing transforms AI from a technology into a national imperative. When capabilities become tied to sovereignty, regulation becomes secondary to innovation. To gain traction, governance frameworks must be positioned as enhancing competitiveness rather than constraining it.
Challenge 2: Knowledge Asymmetry
The second challenge is that regulating AI requires demystifying AI systems. Policymakers often lack the technical understanding needed for effective regulation of new technologies, and the constant churn of political cycles, industry leadership shuffles, and disruptive developments leaves them little time to learn the problem set. This expertise gap, always present between regulators and industry, is particularly acute with AI. The field encompasses everything from simple algorithms to complex neural networks, creating confusion about what actually constitutes “AI.”
Indeed, the term “AI” connotes everything from routine ATM calculations to autocorrect to automated image generation to chatbots, prompting some computer scientists to dismiss the hype over AI products as little more than “AI snake oil.” Moreover, even the experts disagree over how to conceptualize AI harms: some warn of existential threats, while others focus on immediate harms like algorithmic discrimination. While some regulators may seek to close the expertise gap and set guardrails for responsible AI development and use, these divisions within the research community make it difficult for policymakers to discern the subtlety and substance of differing expert views, leaving them without clear guidance on appropriate guardrails.
Challenge 3: Corporate Foxes in the Technological Hen House
Tech giants possess computational resources that dwarf those available to academics and many governments. Microsoft, Google, OpenAI, and Meta increasingly dominate governance discussions through their technological advantage. The EU’s Digital Services Act exemplifies this dynamic, adopting a “co-regulation” framework that delegates compliance to industry leaders. Meanwhile, Google, Microsoft, OpenAI, and Anthropic have formed a self-regulating body to oversee their version of safe and responsible AI development. As regulators hesitate and defer to these arrangements, companies fill the vacuum with private governance regimes that they present as sufficient for public protection. When tech giants draft their own regulatory playbook, it’s not just the fox guarding the hen house—it’s the fox designing the coop, training the guard dogs, and writing the farmer’s manual on poultry security.
The Fraught Path to Global Rules
Principles Without Enforcement
International organizations have engaged with AI governance since 2019, when the OECD released its AI principles, which 47 countries endorsed. The G20 followed suit soon after, and UNESCO’s 2021 Recommendation on the Ethics of AI and the UN’s 2024 advisory report came next. These frameworks acknowledge AI’s global supply chain and the risk of regulatory arbitrage across jurisdictions. From critical minerals to coders, the AI stack strains the current boundaries of laws governing everything from mining to capital and labor, and the principles adopted by international organizations attempt to address those complexities. Most also emphasize the need for multilateral cooperation, given the global nature of the AI supply chain.
Yet principles face practical obstacles. The biggest challenge is conflicting national interests. Economic competition creates resistance to binding global cooperation. The United States diverges significantly from EU and Chinese regulatory approaches. The Biden administration relied primarily on voluntary commitments from tech companies. In early 2025, the Trump administration went further, revoking Biden’s executive orders while encouraging massive private investment, including OpenAI and Oracle’s $500 billion “Stargate” AI infrastructure project. This regulatory reversal exemplifies how sovereignty narratives can accelerate a global race to the bottom.
The New Digital Divide
AI’s benefits and governance capacity are unevenly distributed. The United States, China, and the EU command the resources to shape AI development and reap the rewards. The consultancy PwC estimates that a staggering 84 percent of AI’s projected $15.7 trillion in economic value will flow to China, North America, and Europe.
A 2024 UN advisory report indicates the Global South faces compounding disadvantages: limited access to computing resources, data infrastructure shortages, and insufficient AI expertise. Worse, automation threatens to eliminate traditional development pathways. There is a real risk that the concentration of AI capabilities could worsen the divide between the Global North and South by devaluing labor prone to automation, such as telemarketing, and industries least likely to grow, such as agriculture. As one expert notes: “For poorer countries, this is engendering a new race to the bottom where machines are cheaper than humans and the cheap labor that was once offshored to their lands is now being onshored back to wealthy nations.”
These asymmetries create a governance paradox. The Global South nations that stand to benefit most from AI innovation are also the most vulnerable to its disruptions, yet they have the least influence over its governance. Meanwhile, the countries with the most to gain from AI dominance have the weakest incentives for strict regulation. The result is a fragmented international landscape with diminishing prospects for inclusive global frameworks.
Finding a Way Forward
The governance challenges described above demand pragmatic responses that acknowledge technological realities while preserving democratic oversight. Four promising pathways emerge:
Democratic Counterweights
Effective domestic governance requires counterbalancing corporate influence through broad-based coalitions. Universities, civil society organizations, and public interest technologists can demystify AI systems and empower policymakers. These coalitions can provide technical expertise independent from commercial interests, develop alternative governance frameworks, and advocate for public values in AI design and deployment. Distributing knowledge more widely creates the foundation for informed governance.
Market Incentives for Responsible AI
Public-interest AI systems, like philanthropically-funded language models, can create market pressure for higher standards. When ethical alternatives exist, companies face competitive pressure to improve their own practices. The private sector can also provide checks against government overreach, particularly in authoritarian contexts. Consumer and investor pressure in tech companies’ home markets remains a powerful lever for global ethical practices. The goal is not to halt innovation but to channel market forces toward responsible development.
Risk-Based Multilateral Frameworks
History shows that nations cooperate despite competing interests when the risks of noncooperation are sufficiently severe. The nuclear nonproliferation regime and Outer Space Treaty demonstrate this principle. Similar approaches could work for AI by focusing on concrete, bounded risks that threaten shared interests. Following the ICANN model for internet governance, technical standards bodies could provide neutral ground for cooperation on specific AI safety issues, sidestepping broader geopolitical conflicts while building trust incrementally.
Digital Solidarity Across Regions
A vision of “digital solidarity” could facilitate regional cooperation and more equitable AI development. Nations should acknowledge the limits of purely domestic solutions and leverage global AI supply chains strategically. Developing shared technology stacks, promoting procurement best practices that prevent vendor lock-in, and creating sustainable financing mechanisms similar to trade adjustment assistance could help smaller countries participate meaningfully in the AI economy while building domestic capacity.
Beyond the False Choice
The artificial choice between innovation and regulation threatens both. As countries reframe AI as a sovereignty issue, we face a governance inflection point with long-term consequences for global technology and power distribution.
The three central challenges—sovereignty claims, expertise gaps, and corporate co-regulation—demand sophisticated responses. Nations must balance legitimate technological ambitions against the need for meaningful oversight. Regulators need technical capacity independent of corporate influence. And governance frameworks must ensure the Global South participates meaningfully rather than merely bearing AI’s disruptive effects.
Without action, we risk entrenching a world where a handful of companies and countries monopolize AI’s benefits while distributing its risks globally. The central insight is that regulation need not impede innovation—it can channel it productively. The task is not to choose between technological advancement and public protection, but to craft governance that enables responsible progress.
The governance paths outlined here offer pragmatic first steps. They acknowledge geopolitical realities while preserving space for democratic values. The complex interplay between sovereignty, expertise, and corporate power makes AI governance uniquely challenging. But the same interconnectedness that creates these challenges also opens opportunities for creative governance solutions that serve broader interests than those of a privileged few.
This article is part of our Who Controls AI?: Global Voices on Digital Sovereignty in an Unequal World collection.