AI’s Role in Shaping Global Governance

AI and Multilateralism: Navigating the Future

As India prepares to host the India AI Impact Summit next month, the country’s rising geopolitical importance has come into sharper focus. As the Global South tries to find its place in the race to build the most capable AI systems, it is worth examining what global decision-making will look like in the age of AI.

AI is no longer a distant promise; it has become a defining force of our times, reshaping geopolitics, economics, security, and even the moral boundaries of human decision-making in unprecedented ways. Yet as AI systems advance, the frameworks meant to govern them lag dangerously behind. The problem is not merely technological; it is political. We stand at the precipice of a transformative technology reshaping our systems without a referee, without borders, and without a shared playbook.

The Digital Divide and Power Dynamics

One consequence is a widening digital divide. A handful of countries and corporations control the leading edge, while much of the world risks being reduced to spectators of the very revolution that will shape their futures. In that sense, AI is not just a question of invention; it is a question of power. Humanity stands at a crossroads: AI could become our greatest collective tool, or it could deepen divisions and disrupt the global order. Which it becomes will depend on the kind of multilateralism we can build before the technology outruns our ability to govern it.

UN Initiatives for AI Governance

Amid this commotion, the United Nations is discreetly attempting to put together a global framework for AI governance, one that seeks to combine scientific grounding with political inclusivity. This emerging architecture is indicative of a rare effort to rebuild multilateral trust in an era of fragmentation. At its core are three interlinked ambitions:

  • Basing AI policy in science rather than fear or hyperbole.
  • Bringing states and innovators into a shared dialogue.
  • Safeguarding the inclusion of the Global South.

The first of these goals is achieved through the newly established Independent International Scientific Panel on Artificial Intelligence, a 40-member body tasked with providing an evidence-based analysis of the challenges and the prospects that AI offers. In a policy landscape dominated by extremes — some are predicting utopia, others are prophesying apocalypse — the UN’s goal is to create a space where science, not sensationalism, controls regulation. The panel’s first report, expected this year, will likely serve as the most reliable global benchmark yet for AI safety and governance.

Global Dialogue and Capacity Building

The second pillar is the Global Dialogue on AI Governance, an annual forum set to debut in Geneva in 2026. It aims to build trust by bringing governments, corporations, civil society, and academia to the same table to negotiate norms of transparency, ethics, and safety. The rationale is that before states can legislate collectively, they must first learn to listen collectively.

The third and most transformative pillar is the UN’s proposal to build AI capacity in the Global South through a $3 billion Global AI Fund aimed at closing the gap. Without access to computing infrastructure, data ecosystems, and skilled human capital, developing countries cannot meaningfully shape global AI systems. The scale of inequality is staggering: Africa collectively possesses fewer GPUs than other regions despite its size and demographic dividend, while a single American company, Meta, reportedly operates more than 250,000.

Challenges and Fragmentation

Pursued together, these three goals could form the world’s first coherent global layer of AI governance: a system designed not to replace national laws or private-sector codes but to link them. Such a framework could establish common principles, define shared responsibilities, and recognize the collective nature of the challenge.

But progress remains uneven. At the moment, AI regulation resembles a crowded marketplace of competing philosophies. The European Union’s AI Act is the most ambitious legal framework to date, while the United States relies on executive orders and voluntary industry commitments. The OECD, ISO, and ITU are drafting their respective technical standards, even as China advances its own model of ‘algorithmic governance’. The result is a spaghetti bowl of overlapping rules and rival philosophies that threatens to further deepen global divisions.

The Future: Cooperation vs. Rivalry

This fragmentation creates a real risk of a race to the bottom, in which countries lower safety standards to remain competitive. The UN is attempting to address this through an International Standards Exchange, a mechanism to link diverse standard-setting bodies and promote interoperability. The deeper problem is that AI does not exist in a political vacuum: every model, dataset, and algorithm is entangled in geopolitical rivalry. Hence the contradiction: the technology demands cooperation to ensure safety and stability, yet cooperation runs against the grain of current global politics.

Despite this, there are flickers of optimism. The UN’s AI initiatives are being shaped not by superpowers but by middle powers such as Sweden, Spain, Zambia, and Costa Rica, which are championing dialogue over division. AI requires a framework of preventive diplomacy, one that encourages collaboration before accidents, misuse, or weaponization occur.

Open-Source AI and Future Collaborations

The rise of open-source AI reveals both the promise and the peril of democratization. Open-source models can empower small states, researchers, and innovators; at the same time, they can make it easier for malicious actors to misuse the technology. Rather than blanket bans, the world needs a more nuanced understanding of openness, one that differentiates among model weights, datasets, and APIs, and develops norms for safe sharing.

The ability to access and process vast amounts of data has become the new currency of power. The UN’s proposed Global AI Capacity Development Network envisions a ‘compute commons’ that could link idle resources across borders, allowing smaller states to access shared computing power. This could establish a minimum irreducible national computing capacity that every country needs in order to participate in the AI economy.

Conclusion: The Path Forward

To organize this growing web of initiatives, the UN has established the Office for Digital and Emerging Technologies (ODET). ODET functions as both a policy lab and a coordination hub, connecting organizations such as UNESCO, the ITU, OHCHR, and the WMO under one digital umbrella. It also represents a philosophical shift: the UN is positioning itself not as a bystander but as an active steward of technology.

Nevertheless, barriers remain. Geopolitical mistrust between the US and China continues to limit cooperation; bureaucratic inertia and private-sector dominance skew incentives away from public accountability; and capacity gaps in developing regions continue to fester.

Despite these challenges, the case for multilateralism has never been more important. AI’s impact is exponential. If left ungoverned, it could deepen inequalities, destabilize economies, and corrode democratic institutions. But if governed wisely, it could become a shared instrument of progress and a new chapter in cooperative global governance.
