Inclusive AI Governance: Bridging Global Divides

Global Goals, Local Realities: Aligning AI Governance with Inclusion

The adoption of the Pact for the Future by the UN General Assembly on September 22, 2024, marks a significant step towards a digital future envisioned as inclusive, fair, safe, and sustainable. However, implementing the Global Digital Compact (GDC) embedded within the pact poses real challenges, particularly in the realm of AI governance.

Challenges in Inclusivity

While the GDC aims for inclusivity, it lacks a concrete framework to ensure that the initiatives related to AI governance are genuinely inclusive. The processes through which UN resolutions are negotiated often limit input from civil society, marginalized communities, and independent experts. Important drafting occurs behind closed doors, driven by a small circle of diplomats, which can lead to outcomes that do not reflect diverse perspectives.

This procedural architecture is inadequate for addressing the multifaceted challenges posed by AI. When inclusivity is missing from the process, it is unlikely to be present in the outcomes. This raises concerns about the potential for entrenching inequities into future AI systems.

The Language Barrier

Language serves as a significant barrier to inclusivity. Nearly three billion people globally cannot speak, read, or write in any of the UN’s six official languages. This exclusion raises the question: how can their realities inform policies being developed on their behalf? The lack of access to discussions, whether at national or global levels, hinders the design of equitable AI systems.

Even within the UN, the Secretary-General has acknowledged that the organization must become more agile, transparent, and accessible. Yet informal consultations frequently lack translation services, sidelining the many Member States and civil society groups that cannot work in the official languages.

To bridge these gaps, the UN could leverage AI-powered translation tools for real-time translation, especially during informal consultations where resources are scarce. By training these tools on national linguistic datasets, countries can enhance their diplomatic capacity and ensure that their languages and cultural contexts are integrated into AI algorithm development.
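
To make this concrete, the sketch below shows what such tooling might look like in practice, using an open multilingual translation model (here Meta's NLLB-200 via the Hugging Face transformers library; the model choice, language codes, and sample remark are illustrative assumptions, not a prescribed UN solution). A real deployment would add human review, domain glossaries, and fine-tuning on national linguistic datasets.

    # Sketch: translating a delegate's remark from a non-official language into
    # English during an informal consultation. Model name and language codes are
    # assumptions for illustration (NLLB-200 covers roughly 200 languages).
    from transformers import pipeline

    translator = pipeline(
        "translation",
        model="facebook/nllb-200-distilled-600M",
        src_lang="swh_Latn",   # Swahili, not one of the six official UN languages
        tgt_lang="eng_Latn",   # English
    )

    remark = "Tunahitaji sera za akili bandia zinazozingatia lugha zetu."  # illustrative remark
    result = translator(remark, max_length=200)
    print(result[0]["translation_text"])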

Data Representation and AI Development

Another layer of exclusion in AI governance stems from the data utilized in AI development. A small number of multinational technology companies dominate this field, primarily based in the Global North, while much of the data originates from the Global South. This stark imbalance highlights that while the Global South provides essential data, it remains excluded from the governance of how AI technologies are built and deployed.

For instance, consider an AI system intended to monitor health trends among African-descendant women in the United States. If its training data ignores the broader health contexts of African and Caribbean populations, the system could produce biased and harmful outcomes.

These issues are not merely hypothetical; there are numerous instances where AI has reinforced systemic inequalities, such as discriminatory hiring algorithms and flawed healthcare tools. These systems reflect the biases and blind spots of their creators, underscoring the need for a more inclusive approach to AI governance.
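
A simple diagnostic illustrates how such bias can be surfaced. The sketch below computes selection rates by group and the disparate-impact ratio (the "four-fifths" rule of thumb) for a set of binary decisions; the groups and decisions are invented for illustration, and a genuine audit would go well beyond a single ratio.

    # Sketch: a basic disparate-impact check on binary decisions, e.g. from a
    # hiring or screening model. All data below is invented for illustration.
    from collections import defaultdict

    def selection_rates(records):
        """records: iterable of (group, selected) pairs -> {group: selection rate}."""
        totals, selected = defaultdict(int), defaultdict(int)
        for group, was_selected in records:
            totals[group] += 1
            selected[group] += int(was_selected)
        return {g: selected[g] / totals[g] for g in totals}

    def disparate_impact(rates):
        """Ratio of the lowest to the highest group selection rate."""
        return min(rates.values()) / max(rates.values())

    decisions = [("group_a", True), ("group_a", True), ("group_a", False),
                 ("group_b", True), ("group_b", False), ("group_b", False)]
    rates = selection_rates(decisions)
    ratio = disparate_impact(rates)
    print(rates, round(ratio, 2))
    if ratio < 0.8:  # common "four-fifths" threshold for further investigation
        print("Warning: possible disparate impact; review the model and its data.")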

A Call for Inclusive Governance Structures

As the global community outlines the architecture of AI governance, it is crucial to avoid repeating patterns of exclusion. Inclusion should not be an afterthought but rather a foundational element in the dialogue surrounding AI governance.

In an ideal scenario, the Global Dialogue on AI Governance would incorporate a multi-tiered, hybrid structure designed to ensure representation, accountability, and transparency:

  • Global Advisory Group: An inclusive group constituted by representatives from marginalized communities, Indigenous peoples, linguistic minorities, youth, and experts from the Global South, selected through transparent processes.
  • Multilingual Consultations: All consultations, both virtual and in-person, should be conducted in multiple languages, utilizing AI for simultaneous translation to promote meaningful participation.
  • Iterative Approach: The Dialogue should adopt an open process, publishing draft positions for public comment and requiring Member States to transparently report on how inputs have been addressed.

Inclusion must shape every aspect of AI governance—from the composition of the Scientific Panel to the design of the Global Dialogue, as well as the recognition of diverse knowledge systems. It is essential to treat translation, accessibility, and digital literacy as critical elements of legitimacy rather than optional enhancements.

Conclusion

We are at a pivotal juncture. The governance structures established today will influence the ethical boundaries and social outcomes of AI technologies for years to come. If we neglect to prioritize inclusivity now, we risk perpetuating systems that reinforce existing inequalities and serve the interests of a select few at the expense of the broader population.

The time for superficial gestures has passed. What is needed is bold, principled leadership that prioritizes justice, accessibility, and representation in the governance of global AI.
