In February, India’s AI Impact Summit made a bold promise: Middle powers can reshape the global artificial intelligence (AI) order. With 600,000 recorded participants, the first official AI summit hosted in a global majority nation staked its claim, compared with over 1,000 participants in total in Paris and 100 at Bletchley. Building on Canadian Prime Minister Mark Carney’s speech at Davos earlier this year, the summit and the Indian government resolutely championed the concept of “middle powers” as a third path of influence in response to a “rupture in the world order.”

This framing stood in sharp contrast to the 2025 Paris AI Action Summit, which was defined by the turbulence of President Donald Trump’s first 100 days: deregulatory fever, anti-safety narratives, and the geopolitical noise of Greenland and NATO threats in the background. The AI world has since absorbed the shock and shifted its center of gravity toward a less Western-centric agenda. While the summit’s defining image remains the missed hand-holding between the CEOs of OpenAI and Anthropic, the rhetoric was anchored in middle powers and “AI sovereignty,” breaking with the path dependency of the old world order.

And yet, the question remains: How do these buzzwords and impressive metrics translate meaningfully, and who is sidelined in the process? As Alondra Nelson, former director of the White House Office of Science and Technology Policy, noted, the summit coincided with the Chinese Lunar New Year and the start of Ramadan, physically excluding important stakeholders from the conversation. While applauding the fact that this year’s summit was, for the first time, not invite-only and welcomed a wider community, Nelson pointed to the paradox of civil society organizations being excluded from the main summit discussions even as major tech CEOs were in attendance on the fourth day. The panel where Nelson raised these points encapsulated this tension: the summit’s sole women-only panel, which also featured “Empire of AI” author Karen Hao, was relegated to the last session of the last day in a far-off room.

Amber Sinha highlighted this dynamic in a piece for Tech Policy Press, showing how the Internet Freedom Foundation’s mapping of the summit agenda confirmed what the room design already suggested: The organizers’ stated commitment to “democratization” left little room for the questions it actually demands, namely the redistribution of power and value and meaningful accountability. Despite industry actors setting much of the tone, the “democratic AI” framing nonetheless positioned the summit as an alternative model. The optics of inclusion masked a more selective reality.

Without civil society in the room, words lose their meaning. At the conference, Hao identified corporate control over AI narratives as one of the summit’s biggest risks. Industry capture of shared terminology such as “sovereignty,” “regulation,” and “energy consumption” narrows the range of actions and social imaginaries that enable communities to resist AI domination. She also noted that while OpenAI CEO Sam Altman may nominally support “regulation,” he does so only in its most cosmetic, washed-out form, favoring voluntary standards over regulatory compliance. At the summit, Altman also attempted a misleading equivalence to downplay AI’s energy costs, arguing “it also takes a lot of energy to train a human.”

When civil society is not actively engaged, misleading equivalences like these go unchallenged, and concepts like AI sovereignty get watered down. Today, a small cluster of private corporations controls the AI industry through closed, proprietary systems in which a single provider designs and operates the model, the platform, the pricing, and the safeguards. When governments build critical infrastructure on proprietary systems they cannot audit, sovereignty becomes a marketing opportunity rather than an autonomous strategy for greater inclusion.

This also raises a longer-term nomenclature question. From the inaugural U.K. AI Safety Summit in 2023 to India’s AI Impact Summit, sessions on “solidarity” and “societal impact” have never been central to the agendas and now appear fully sidelined. From this standpoint, the India AI Impact Summit, despite being hosted in a global majority nation, looked like a continuation of the trajectory set at the Paris AI Action Summit: a clear move away from governance and safety toward innovation and the projection of national AI champions.

An interesting phenomenon has emerged in parallel. As the official summits narrow their focus, side events outside the main conference programming have quietly become more formalized spaces for the AI ethics and safety communities, especially for voices sidelined from the main agenda. Examples include the Participatory AI Research and Practice Symposium, the Multistakeholder Convening on AI Governance, the Global South AI Research Colloquium, and AI Safety Connect.

Next year, the AI summit moves to Geneva, Switzerland, a city accustomed to hosting the International Telecommunication Union’s (ITU) annual AI for Good Summit. That summit, however, has also been marked by corporate capture: Almost half of last year’s speakers hailed from tech companies, and notable speakers such as Abeba Birhane were censored on critical topics like AI’s societal impact.

Next year offers an opportunity to correct course. We must ensure that this time, real talks include all relevant stakeholders and happen in the open; that summit dates allow participants from different geographies to join; that civil society organizations help shape the agenda rather than being relegated to last-day slots in off-center rooms; and that, given the visa and cost barriers posed by a summit in one of the world’s most expensive cities, meaningful support is extended to the participants who need it most. As we move from “safety” to “action” to “impact,” solidarity should make a comeback.