Global Goals, Local Realities: Aligning AI Governance with Inclusion
The recent adoption of the Pact for the Future by the UN General Assembly on September 22, 2024, marks a significant step towards a digital future that is envisioned to be inclusive, fair, safe, and sustainable. However, the Global Digital Compact (GDC) embedded within this pact presents challenges regarding its implementation, particularly in the realm of AI governance.
Challenges to Inclusivity
While the GDC aims for inclusivity, it lacks a concrete framework to ensure that its AI governance initiatives are genuinely inclusive. The processes through which UN resolutions are negotiated often limit input from civil society, marginalized communities, and independent experts. Much of the drafting happens behind closed doors, driven by a small circle of diplomats, which can lead to outcomes that do not reflect diverse perspectives.
This procedural architecture is inadequate for the multifaceted challenges posed by AI. When inclusivity is missing from the process, it is unlikely to appear in the outcomes, raising the risk that inequities will be entrenched in future AI systems.
The Language Barrier
Language serves as a significant barrier to inclusivity. Nearly three billion people globally cannot speak, read, or write in any of the UN’s six official languages. This exclusion raises the question: how can their realities inform policies being developed on their behalf? The lack of access to discussions, whether at national or global levels, hinders the design of equitable AI systems.
Even within the UN, the Secretary-General has pointed out the need for the organization to become more agile, transparent, and accessible. Yet informal consultations frequently lack translation services, sidelining Member States and civil society groups that cannot work in the official languages.
To bridge these gaps, the UN could deploy AI-powered tools for real-time translation, especially during informal consultations where interpretation resources are scarce. By training these tools on national linguistic datasets, countries could strengthen their diplomatic capacity and ensure that their languages and cultural contexts inform AI development.
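To illustrate what such a tool could look like in practice, the following is a minimal sketch using an open multilingual model (here, the NLLB-200 checkpoint served through the Hugging Face transformers library). The checkpoint name, language codes, and helper function are illustrative assumptions rather than a specification; a real deployment would also need speech-to-text, latency handling, and human review of official records.

```python
# Minimal sketch: translating a consultation remark with an open multilingual
# model. The checkpoint and language codes (e.g. Wolof -> English) are
# illustrative; production use would add speech-to-text and human review.
from transformers import pipeline

# NLLB-200 covers roughly 200 languages, many of them outside the UN's six
# official languages.
translator = pipeline("translation", model="facebook/nllb-200-distilled-600M")

def translate_remark(text: str, src_lang: str, tgt_lang: str) -> str:
    """Translate a single intervention between NLLB language codes."""
    result = translator(text, src_lang=src_lang, tgt_lang=tgt_lang, max_length=512)
    return result[0]["translation_text"]

if __name__ == "__main__":
    # Wolof ("wol_Latn") to English ("eng_Latn").
    print(translate_remark("Jàmm nga fanaane?", src_lang="wol_Latn", tgt_lang="eng_Latn"))
```

The same pattern could be fine-tuned on national linguistic datasets, so that under-represented languages are handled on their own terms rather than through a pivot language.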
Data Representation and AI Development
Another layer of exclusion in AI governance stems from the data utilized in AI development. A small number of multinational technology companies dominate this field, primarily based in the Global North, while much of the data originates from the Global South. This imbalance means that the Global South supplies essential data yet remains largely excluded from decisions about how AI technologies are built, governed, and deployed.
For instance, consider an AI system intended to monitor health trends among women of African descent in the United States. If its training data ignore the broader health contexts of African and Caribbean populations, the system could produce biased and harmful results.
These issues are not merely hypothetical; there are numerous instances where AI has reinforced systemic inequalities, such as discriminatory hiring algorithms and flawed healthcare tools. These systems reflect the biases and blind spots of their creators, underscoring the need for a more inclusive approach to AI governance.
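To make this concrete, here is a minimal sketch of the kind of disaggregated audit that can surface such gaps, reporting a screening model's performance per subgroup rather than only in aggregate. The column names, groups, and numbers are hypothetical, not drawn from any real system or dataset.

```python
# Minimal sketch of a disaggregated evaluation: an aggregate metric can look
# acceptable while a specific subgroup is poorly served. All column names,
# groups, and numbers below are synthetic and purely illustrative.
import pandas as pd
from sklearn.metrics import recall_score

def audit_by_group(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Report recall (sensitivity) of a screening model per subgroup."""
    rows = []
    for group, sub in df.groupby(group_col):
        rows.append({
            "group": group,
            "n": len(sub),
            "recall": recall_score(sub["y_true"], sub["y_pred"]),
        })
    return pd.DataFrame(rows).sort_values("recall")

if __name__ == "__main__":
    # Synthetic predictions from a hypothetical health-screening model.
    predictions = pd.DataFrame({
        "population_group": ["A", "A", "A", "B", "B", "B"],
        "y_true":           [1,   1,   0,   1,   1,   0],
        "y_pred":           [1,   1,   0,   0,   1,   0],
    })
    print(audit_by_group(predictions, group_col="population_group"))
```

In the synthetic example, overall recall looks acceptable while one group is served markedly worse, which is exactly the pattern that aggregate-only reporting hides.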
A Call for Inclusive Governance Structures
As the global community outlines the architecture of AI governance, it is crucial to avoid repeating patterns of exclusion. Inclusion should not be an afterthought but rather a foundational element in the dialogue surrounding AI governance.
In an ideal scenario, the Global Dialogue on AI Governance would incorporate a multi-tiered, hybrid structure designed to ensure representation, accountability, and transparency:
- Global Advisory Group: An inclusive body composed of representatives of marginalized communities, Indigenous peoples, linguistic minorities, youth, and experts from the Global South, selected through transparent processes.
- Multilingual Consultations: All consultations, both virtual and in-person, should be conducted in multiple languages, utilizing AI for simultaneous translation to promote meaningful participation.
- Iterative Approach: The Dialogue should adopt an open process, publishing draft positions for public comment and requiring Member States to transparently report on how inputs have been addressed.
Inclusion must shape every aspect of AI governance: the composition of the Scientific Panel, the design of the Global Dialogue, and the recognition of diverse knowledge systems. Translation, accessibility, and digital literacy must be treated as critical elements of legitimacy rather than optional enhancements.
Conclusion
We are at a pivotal juncture. The governance structures established today will influence the ethical boundaries and social outcomes of AI technologies for years to come. If we neglect to prioritize inclusivity now, we risk perpetuating systems that reinforce existing inequalities and serve the interests of a select few at the expense of the broader population.
The time for superficial gestures has passed. What is needed is bold, principled leadership that prioritizes justice, accessibility, and representation in the governance of global AI.