Gordon LaForge
Senior Policy Analyst, New America
The inaugural international AI Summit in 2023 was an exclusive affair. No more than 150 government, industry, and academic leaders met in the splendid seclusion of Bletchley Park, an English country estate that, as the nerve center of British codebreaking during World War II, is synonymous with secrecy. The focus of the gathering was similarly rarefied: how to manage the risks of the frontier AI systems being built by a handful of companies.
The AI Summit’s fourth iteration, held two weeks ago in New Delhi, could not have been more different. As many as 250,000 people thronged the halls of Bharat Mandapam, a colossal new convention center built by Narendra Modi’s government as a monument to its ambitions as an international convener. In more than 400 sessions, attendees discussed AI applications, local adoption, democratizing access, development use cases, open source, and edge devices. The word “safety” was scarcely heard; “impact” was the summit’s watchword.
The path of the world’s preeminent AI gathering from the cloistered English countryside to the teeming heart of Delhi reflects larger shifts in the global AI landscape. In just three years, the governance discourse has gone from urging containment to managing the reality of proliferation. Adoption now rivals capability as the metric of success for many AI labs. And a growing number of countries are bent on attaining “AI sovereignty,” a term often taken to mean ownership and control over data, compute, and other layers of the AI stack.
Sovereign AI was a major focus of the summit and a clear priority for the Indian hosts. The government announced major sovereign AI projects, including a deal with the UAE to build a national AI supercomputer and the release of internationally competitive open source models built by Sarvam, a local startup. More grandly, Delhi declared its ambition to become an AI superpower on the level of the US and China.
That goal is, if not completely fanciful, a very distant prospect. China and the US account for some 70 percent of top machine-learning researchers and 90 percent of compute capacity, and attract more than double the AI investment of all other countries combined.
It was notable, however, that the two powers were bit players at the summit. China was all but absent, and though there was a strong presence from US industry and civil society, the government delegation was thin; the highest-ranking US official there was Science Advisor Michael Kratsios. Instead, middle powers and regional leaders, especially from the Global South, were prominent. Aside from Modi, who was omnipresent, the two most notable heads of state in attendance were Emmanuel Macron of France and Lula of Brazil.
That hinted at a defining question for the global AI landscape. The question is not whether India (or any other country) can become an AI power on its own, but whether middle powers could band together to create a “third way” for AI beyond the US-China binary. It would be a distributed or collaborative sovereignty in which AI infrastructure, capabilities, and governance decisionmaking are more dispersed and in which communities have meaningful choice: what Akash Kapur calls “digital agency.” That means understanding AI sovereignty not as isolation or self-sufficiency but, as Dang Nguyen writes, as the power of “authorship,” the ability to decide “which data, models and rules shape, and will shape, how machine intelligence is built and deployed.”
Right now, the path to a third way in AI is obscure and overgrown. Much work is needed to build the kinds of agreements and coalitions that could constitute a viable alternative to the US-China duopoly. That might include governance arrangements, like standards harmonization or procurement frameworks, that could ease market entry for middle-power companies and accelerate public sector adoption. Or even shared technological investments, such as regional compute infrastructure that could provide local entrepreneurs and model developers with a cost-competitive alternative to American and Chinese hyperscalers.
Whatever the exact arrangement, for middle powers, the logic of banding together is undeniably compelling at a time of escalating great power coercion. As Canadian Prime Minister Mark Carney quipped, “If we’re not at the table, we’re on the menu.”
The stakes are similarly high for the global AI landscape writ large: Do we want the path of the most consequential technology of our time to follow a logic of zero-sum competition, technological vassalage, and weaponized interdependence? Where a handful of large companies and two countries dictate terms to all the world’s peoples and cultures?
Or can coalitions of the willing come together to help make AI power, ownership, and choice more distributed? And in so doing, help bring about a world in which technology might better serve the disparate needs, cultures, and aspirations of countries, communities, and individuals?