EU AI Act vs. US AI Action Plan: A Risk Perspective

In the evolving landscape of artificial intelligence (AI) regulation, the EU AI Act has emerged as a notable counterpart to the US AI Action Plan. This article draws on insights from a recent podcast episode examining the complexities of AI governance.

Overview of the US AI Action Plan

The US AI Action Plan outlines a strategic approach to AI governance, focusing on three primary pillars: innovation, infrastructure, and international diplomacy. The plan emphasizes the importance of establishing the US as a global leader in AI through collaborative efforts and secure distribution of technology.

Dr. Cari Miller, a prominent figure in AI governance, stressed that innovation must not come at the expense of adequate regulatory safeguards. She pointed out that while the plan encourages the development of foundation models guided by principles of free speech, it raises concerns about potential risks when measured against the stricter requirements of the EU AI Act.

The EU AI Act: A Risk-Aware Framework

Adopted in 2024, the EU AI Act introduces binding obligations for high-risk AI applications, such as biometric surveillance and the use of AI in employment and education. Noncompliance can result in substantial penalties, including fines of up to 7% of a company’s global turnover.

Dr. Miller’s analysis underscores that the US legislative environment is still developing and lacks mandatory regulations that would hold companies to a consistent standard. That absence of consistent rules complicates the legal landscape, particularly where new directives could undercut existing anti-discrimination laws.

The Tension Between Innovation and Regulation

A recurring theme in the discussion is the inherent tension between fostering innovation and implementing necessary regulations. Dr. Miller cautioned against proposals that could hinder state-level legislation, which may be essential in addressing unique local challenges.

She emphasized that the severity of potential harm should dictate the level of regulation required: irreversible harms warrant stricter governance, while areas where outcomes are reversible may call for lighter regulatory oversight.
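To make the principle concrete, the sketch below is a hypothetical risk-tiering helper in Python that maps an AI use case to an oversight tier based on the severity and reversibility of potential harm. The scale, thresholds, and example use cases are assumptions for illustration only and are not drawn from the EU AI Act or the US AI Action Plan.

```python
# Illustrative only: hypothetical tiers and thresholds, not an official scheme.
from dataclasses import dataclass
from enum import Enum


class Oversight(Enum):
    MINIMAL = "minimal"    # low-severity, reversible outcomes
    STANDARD = "standard"  # significant but reversible outcomes
    STRICT = "strict"      # severe or irreversible outcomes


@dataclass
class UseCase:
    name: str
    severity: int      # assumed 1 (minor) to 5 (severe) scale
    reversible: bool   # can the harm be undone after the fact?


def required_oversight(case: UseCase) -> Oversight:
    """Map severity and reversibility of harm to an oversight tier."""
    if not case.reversible or case.severity >= 4:
        return Oversight.STRICT
    if case.severity >= 2:
        return Oversight.STANDARD
    return Oversight.MINIMAL


if __name__ == "__main__":
    for case in [
        UseCase("movie recommendations", severity=1, reversible=True),
        UseCase("resume screening", severity=3, reversible=True),
        UseCase("biometric surveillance", severity=5, reversible=False),
    ]:
        print(f"{case.name}: {required_oversight(case).value}")
```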

The Role of Procurement in AI Governance

Procurement practices are positioned as vital tools in achieving a balance between innovation and regulation. Dr. Miller noted that effective procurement can help define acceptable error levels in AI systems, ensure data ownership clarity, and embed governance from the ground up.

However, she criticized current procurement frameworks as inadequate: they should not only list critical questions but also explain why each question matters and provide benchmarks for evaluating vendor responses. Including diverse voices on procurement teams is also crucial to ensure that decisions are culturally sensitive and legally sound.
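As one way to picture what such a framework might look like, the sketch below models a hypothetical procurement rubric in Python, pairing each critical question with the reason it matters and a benchmark for judging vendor responses. The questions, rationales, and benchmarks are illustrative assumptions, not an existing procurement standard.

```python
# Illustrative only: a hypothetical rubric structure, not an official framework.
from dataclasses import dataclass


@dataclass
class RubricItem:
    question: str   # the critical question to put to the vendor
    rationale: str  # why the answer matters to the buyer
    benchmark: str  # what a strong vendor response looks like


PROCUREMENT_RUBRIC = [
    RubricItem(
        question="What error rates does the system show on data like ours?",
        rationale="Defines the acceptable error level before deployment.",
        benchmark="Disaggregated error rates reported with the test methodology.",
    ),
    RubricItem(
        question="Who owns the data we submit and the outputs the system generates?",
        rationale="Establishes clarity over data ownership up front.",
        benchmark="Contract language confirming the buyer retains ownership.",
    ),
    RubricItem(
        question="How is the system monitored for bias after deployment?",
        rationale="Embeds governance from the ground up rather than bolting it on.",
        benchmark="A commitment to periodic bias audits with results shared.",
    ),
]

if __name__ == "__main__":
    for item in PROCUREMENT_RUBRIC:
        print(f"Q: {item.question}\n  Why: {item.rationale}\n  Benchmark: {item.benchmark}\n")
```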

Future Challenges and Opportunities

Looking ahead, Dr. Miller identified both opportunities and risks in emerging AI technologies such as AI agents and synthetic data. Many organizations are experimenting with these technologies without formal evaluations, and the governance and liability questions surrounding them remain unclear. Synthetic data in particular, especially in sensitive fields like healthcare, requires rigorous cleansing and bias checks to prevent adverse outcomes.
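As a minimal sketch of what one such bias check could involve, the Python below compares how a sensitive attribute is distributed in a source dataset versus a synthetic one and reports the largest drift in category share. The attribute, data, and acceptable threshold are hypothetical; real pipelines in healthcare would require far more rigorous validation.

```python
# Illustrative only: a toy distribution check, not a full bias audit.
from collections import Counter


def category_shares(records: list[dict], attribute: str) -> dict[str, float]:
    """Share of each category of `attribute` among the records."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {category: n / total for category, n in counts.items()}


def max_share_drift(real: list[dict], synthetic: list[dict], attribute: str) -> float:
    """Largest absolute difference in category share between real and synthetic data."""
    real_shares = category_shares(real, attribute)
    synth_shares = category_shares(synthetic, attribute)
    categories = set(real_shares) | set(synth_shares)
    return max(abs(real_shares.get(c, 0.0) - synth_shares.get(c, 0.0)) for c in categories)


if __name__ == "__main__":
    real = [{"sex": "F"}, {"sex": "F"}, {"sex": "M"}, {"sex": "M"}]
    synthetic = [{"sex": "F"}, {"sex": "M"}, {"sex": "M"}, {"sex": "M"}]
    drift = max_share_drift(real, synthetic, "sex")
    # Flag for review if drift exceeds an agreed threshold (e.g. 0.10, assumed here).
    print(f"max share drift: {drift:.2f}")
```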

In conclusion, the effectiveness of AI governance processes depends heavily on the involvement of knowledgeable teams. Organizations must ensure that their members are well versed in data management and governance practices to promote responsible AI development and deployment.

More Insights

AI Regulations: Comparing the EU’s AI Act with Australia’s Approach

Global companies need to navigate the differing AI regulations in the European Union and Australia, with the EU's AI Act setting stringent requirements based on risk levels, while Australia adopts a...

Quebec’s New AI Guidelines for Higher Education

Quebec has released its AI policy for universities and Cégeps, outlining guidelines for the responsible use of generative AI in higher education. The policy aims to address ethical considerations and...

AI Literacy: The Compliance Imperative for Businesses

As AI adoption accelerates, regulatory expectations are rising, particularly with the EU's AI Act, which mandates that all staff be AI literate. This article emphasizes the importance of...

Germany’s Approach to Implementing the AI Act

Germany is moving forward with the implementation of the EU AI Act, designating the Federal Network Agency (BNetzA) as the central authority for monitoring compliance and promoting innovation. The...

Global Call for AI Safety Standards by 2026

World leaders and AI pioneers are calling on the United Nations to implement binding global safeguards for artificial intelligence by 2026. This initiative aims to address the growing concerns...

Governance in the Era of AI and Zero Trust

In 2025, AI has transitioned from mere buzz to practical application across various industries, highlighting the urgent need for a robust governance framework aligned with the zero trust economy...

AI Governance Shift: From Regulation to Technical Secretariat

The upcoming governance framework on artificial intelligence in India may introduce a "technical secretariat" to coordinate AI policies across government departments, moving away from the previous...

AI Safety as a Catalyst for Innovation in Global Majority Nations

The commentary discusses the tension between regulating AI for safety and promoting innovation, emphasizing that investments in AI safety and security can foster sustainable development in Global...

ASEAN’s AI Governance: Charting a Distinct Path

ASEAN's approach to AI governance is characterized by a consensus-driven, voluntary, and principles-based framework that allows member states to navigate their unique challenges and capacities...