Hiroshima AI Process: Bridging Global Paths to Responsible AI Governance

Diverging Paths to AI Governance: How the Hiroshima AI Process Offers Common Ground

Approaches to AI governance currently differ widely across the world, each reflecting unique balances of innovation, trust, and authority.

Governance models shaped largely outside the Global South could steer technological trajectories in ways that reinforce existing digital divides.

Japan’s Hiroshima AI Process fosters international alignment by offering a flexible framework that can connect diverse national systems and promote interoperability.

The Importance of AI Governance

Whether you are scrolling through social media, applying for a mortgage, or getting a diagnosis from your doctor, artificial intelligence (AI) is already shaping the choices you make every day. Beyond everyday convenience, AI is transforming economies, shifting global influence, and challenging existing rules of governance. The real question now is not whether to regulate it, but how.

Governments worldwide are racing to shape the rules of AI, but they’re not all taking the same approach. Some are building comprehensive, risk-based regimes; others rely on principle-driven oversight or state-led coordination to align innovation with strategic priorities. Each model reflects a distinct balance between innovation and accountability, flexibility and protection – and understanding these differences is essential to finding areas of cooperation.

Diverse Paths to Governance

Governments increasingly agree that AI must be transparent, accountable, and safe – but their paths to achieving those goals diverge.

  • The European Union’s AI Act prioritizes risk management, classifying systems by potential harm and imposing stricter rules on those affecting rights, health, or safety.
  • The United Kingdom applies a principle-based model, embedding fairness and accountability across regulators with flexibility for experimentation.
  • The United States follows a market-driven, security-oriented approach, combining voluntary frameworks such as NIST’s AI Risk Management Framework with state and federal initiatives.
  • China, by contrast, adopts a directive model emphasizing registration, security reviews, and content oversight to align innovation with national goals and social stability.

Each of these approaches reflects a distinct balance between innovation, oversight, and public trust. Yet as these models evolve, they expose deeper challenges: how to ensure that governance keeps pace with technology without constraining it, and how to translate national priorities into globally compatible rules.

The Global South and the New Digital Divide

This question of convergence is especially urgent for the Global South. As AI governance frameworks mature across advanced economies, many developing nations risk being shaped by external standards rather than defining their own. While digital transformation offers vast potential for inclusive growth, persistent gaps in data access, infrastructure, and skills threaten to widen inequalities.

Influence in the digital economy remains concentrated among a few actors who set technical standards, govern data flows, and shape norms for AI development. Their advantages in computing power, proprietary data, and research capacity have driven global efficiency – but often at the cost of local adaptability and agency.

Bridging this divide requires more than investment; it demands agency and coordination. Strengthening infrastructure, skills, and research ecosystems, alongside regional cooperation, can help countries move from technology adopters to active contributors.

Cooperation in Action: The Hiroshima AI Process

As global approaches to AI governance diverge and capacities remain uneven, new initiatives are emerging to bridge these divides. Japan’s Hiroshima AI Process, launched during its 2023 G7 presidency, introduces a comprehensive framework composed of:

  • The Hiroshima Process International Guiding Principles
  • The Hiroshima Process International Code of Conduct
  • A voluntary Reporting Framework for companies and governments

These instruments enable organizations to demonstrate accountability and a commitment to responsible AI practices, even in the absence of binding regulation. By promoting transparency, the Reporting Framework gives large companies a practical mechanism to communicate how they manage AI risks and align with global expectations.

Building on this foundation, the World Economic Forum’s Advancing Responsible AI Innovation: A Playbook shows how such voluntary reporting can turn transparency into a driver of trust, competitiveness, and reputational value. The Hiroshima AI Process exemplifies a “third way” approach built on soft law, emphasizing stewardship and openness rather than strict enforcement.

Moreover, the process offers a flexible pathway for countries still developing AI institutions to align with shared principles before formal regulation. Its reach continues to grow through the Hiroshima AI Process Friends Group, spanning over 50 countries and regions.

In ASEAN, discussions at the World Economic Forum’s 2025 AI Stakeholder Dialogue in Kuala Lumpur highlighted how the process supports the ASEAN Responsible AI Roadmap – helping harmonize governance, enable trusted data flows, and safely test innovation through regulatory sandboxes.

Conclusion

Policy-makers increasingly agree that AI must be transparent, accountable, and aligned with human well-being. The routes differ, but the goal is shared: to build a trustworthy ecosystem where innovation and responsibility reinforce each other.

Japan’s Hiroshima AI Process offers something new: a collaborative fabric that connects these systems through shared transparency and cooperation. It does not replace national strategies or legal regimes, but it helps create the conditions for them to coexist and evolve together.

If policy-makers, businesses, and standards bodies continue refining this living interoperability layer, Hiroshima’s soft-law experiment could evolve into the backbone of practical, trusted, and globally inclusive AI governance.
