Rethinking AI Governance: Emphasizing Solidarity Over Speed

The call for a more deliberate approach to artificial intelligence (AI) governance has never been more pressing. As the technology advances at breakneck speed, recent global initiatives highlight the need for a governance model that prioritizes democracy over haste.

The Launch of AI AGE

On June 12, 2025, a significant milestone was reached with the launch of the Artificial Intelligence Advisory Group on Elections (AI AGE). This initiative, spearheaded by the International Foundation for Electoral Systems (IFES), aims to unite electoral authorities and leading AI experts to tackle the challenges AI poses to democracy.

Founding members of AI AGE include electoral management bodies from countries such as Argentina, Indonesia, Kenya, Taiwan, and Ukraine. The initiative embodies a collaborative effort to ensure that AI strengthens democratic processes rather than undermines them.

Strategic Slowness Over Speed

AI AGE advocates a governance approach that emphasizes dialogue, deliberation, and debate. This contrasts sharply with the prevailing tech ethos of “move fast and break things,” which has repeatedly led to accountability gaps and the consolidation of power.

As speakers at the AI AGE launch emphasized, treating speed as essential can produce detrimental outcomes. Strategic slowness, by contrast, makes room for consultation and inclusive decision-making, both key features of a robust democratic process.

The Global Power Shift

The landscape of global governance is also shifting. Recent events, such as President Donald Trump’s diplomatic tour of the Gulf in May 2025, showcased how states are increasingly partnering with tech companies. Landmark deals included a massive AI data center in the UAE backed by major firms, as well as significant investments in AI initiatives made in collaboration with U.S. tech companies.

In this new model, tech companies act not merely as contractors but as partners in governance, raising hard questions about the implications for democracy and public policy.

Case Study: Brazil’s dWallet

Another compelling example of the intersection between AI, data, and governance is Brazil’s dWallet, launched in April 2025. The digital wallet allows citizens to monetize their personal data, effectively turning a fundamental right into a commodity. While the initiative aims to empower individuals, it raises serious concerns about privacy and the commercial exploitation of personal information.

Brazil’s existing strong data protection laws, which recognize data as a fundamental right, are now being challenged by proposals that shift the focus from protection to monetization.

The Lessons from the Internet’s Early Days

The early internet serves as a model for how democratic principles can guide technological development. The collaboration among companies, governments, and technologists led to the formation of the World Wide Web Consortium (W3C), which established open web standards that prioritize freedom of expression and multi-stakeholder engagement.

While not without flaws, the W3C exemplifies how inclusive governance can foster a global, open internet—a goal that remains relevant today.

Conclusion: A Call for Democratic AI Governance

As AI continues to integrate into various aspects of public life, the risk of repeating past governance failures looms large. However, by embracing a governance model rooted in solidarity rather than speed, stakeholders can work together to shape a future where technology serves to enhance democracy.

In a world marked by rapid technological change and rising autocracy, the future of AI governance requires a commitment to democratic values, ensuring that the benefits of AI are equitably distributed across society.
