Diverging Paths in Global AI Regulation

AI Regulation Developments: A Global Perspective

Major nations are diverging in their approaches to artificial intelligence (AI) regulation. Australia is pushing for stringent oversight, the European Union (EU) is racing to implement its AI Act, and OpenAI is preparing to unveil a national AI strategy in the United States. This regulatory divide emerges as financial institutions accelerate their integration of AI technology, highlighting the contrast between countries advocating tight controls and those favoring a more laissez-faire approach.

EU Opens Consultation on AI

The European AI Office has launched a targeted consultation aimed at developing official guidelines for the EU’s new AI law. This consultation, running from November 13 to December 11, focuses on two pivotal areas of the AI Act: defining what constitutes an AI system and identifying prohibited AI practices under the new regulations.

According to the European Commission, the objective of these guidelines is to provide consistent interpretation and practical guidance to assist competent authorities in enforcement actions, as well as to help providers and deployers comply with the AI Act. The commission is particularly seeking real-world examples from stakeholders, including businesses, academic institutions, and civil society organizations.

The timing of this consultation is critical, as specific provisions of the AI Act take effect on February 2, 2025, six months after the law entered into force. Feedback from the consultation will shape the comprehensive guidelines expected in early 2025.

Australia Stands Firm on AI

Australia’s industry minister has reaffirmed the country’s commitment to advancing its AI and social media regulations, despite potential opposition from the incoming Trump administration. Minister Ed Husic emphasized Australia’s resolve to establish protective “guardrails” for high-risk AI applications, mirroring the EU’s regulatory framework.

This position contrasts starkly with Donald Trump’s campaign promises to roll back existing AI regulations. Husic stated, “We have a job we’ve said we’ll do for the public, and there’s an expectation. … We will continue to do that, and we will.” Upcoming policy measures may include new legislation giving the Australian Communications and Media Authority greater power to compel social media companies to improve their handling of misinformation.

The regulatory landscape could become more complex due to the influence of key figures such as Elon Musk, who opposes social media regulations, and Vice President-elect JD Vance, who has cautioned NATO allies against restricting free speech on Musk’s platforms. Nevertheless, Husic remains steadfast in Australia’s regulatory commitment, focusing on public safety and the country’s sovereign interests.

OpenAI to Present AI Plan

OpenAI is set to unveil an ambitious national AI infrastructure plan in Washington, aimed at maintaining America’s competitive edge in the AI sector. The proposed strategy includes the establishment of specialized economic zones where states can expedite permits for AI facilities in return for providing computational resources to public universities.

A significant focus of the plan is expanding energy capacity, especially in the Midwest and Southwest. OpenAI’s head of global policy, Chris Lehane, indicated that the U.S. AI industry will require approximately 50 gigawatts of power by 2030, equivalent to the output of 50 nuclear reactors. The proposal advocates leveraging U.S. Navy nuclear expertise for civilian reactor projects and enacting a National Transmission Highway Act to modernize power infrastructure.
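To put that figure in context, a rough back-of-the-envelope check reproduces the 50-reactor comparison, assuming a typical large nuclear reactor supplies about 1 gigawatt (an assumed figure, not one stated in OpenAI’s plan):

    # Rough check of the 50-reactor comparison; the ~1 GW per-reactor
    # output is an assumed typical value, not a figure from OpenAI's plan.
    projected_demand_gw = 50.0        # projected U.S. AI power need by 2030
    assumed_reactor_output_gw = 1.0   # assumed output of one large reactor
    equivalent_reactors = projected_demand_gw / assumed_reactor_output_gw
    print(f"~{equivalent_reactors:.0f} reactors' worth of capacity")  # prints ~50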

Additionally, the plan envisions forming a North American AI alliance that may eventually encompass other Western nations and Gulf states. This initiative arrives at a critical juncture, as the incoming Trump administration plans to repeal President Biden’s AI executive order, instead promoting policies that underscore “Free Speech and Human Flourishing.”

OpenAI’s blueprint frames AI as foundational to modern society, likening its importance to that of electricity and calling for widespread access and equitable benefits for all.
