The Rise of AI Regulation: How U.S. States Mirror the EU AI Act

The EU AI Act is Coming to America

Discussion of the EU AI Act and its implications for the United States is becoming increasingly relevant as a growing number of states advance legislation resembling the European regulatory framework. This article surveys the current landscape of AI regulation in the U.S., focusing on algorithmic discrimination laws and their potential impact on businesses and consumers alike.

Introduction

At the recent AI Action Summit in Paris, the Vice President of the United States delivered an optimistic message about AI development, criticizing the European Union for what he characterized as a hasty regulatory approach and asserting that the U.S. would not follow suit. Yet as new bills take shape across more than a dozen states, the reality appears more complex.

About the Bills

Legislation under consideration in a number of U.S. states aims to combat algorithmic discrimination. These bills regulate the use of AI in high-risk areas such as employment, education, and insurance, and they impose upfront compliance requirements on businesses that use AI as a substantial factor in consequential decisions affecting consumers.

For a covered business, this means drafting a risk management plan and conducting an algorithmic impact assessment. Developers of AI products are also subject to transparency and monitoring requirements. How key terms such as “substantial factor” and “consequential decision” are defined will determine how broadly these laws apply, and those definitions remain a subject of ongoing debate among legislators and stakeholders.

Challenges in Implementation

The complexities inherent in these laws are evident in Colorado, the only state so far to have passed a version of this legislation. The Governor expressed concerns over its compliance regime, leading to the establishment of the Colorado AI Impact Task Force to simplify the law.

As other states consider similar bills, a pattern of rapid regulatory diffusion is emerging, raising questions about coordination among states and the influence of organizations such as the Future of Privacy Forum.

The Algorithmic Discrimination Bills and the AI Act

Both the U.S. algorithmic discrimination bills and the EU AI Act take a risk-based approach, mandating compliance steps in regulated industries. High-risk AI systems face comparable requirements, such as impact assessments and risk management plans. Notably, many state bills use language and mechanisms similar to those found in the AI Act.

Economic Implications

The potential compliance costs associated with these laws are considerable. Estimates suggest that the AI Act could add up to 17% to corporate spending on AI technologies; as an illustration, a company spending $10 million a year on AI would face as much as $1.7 million in additional costs. This raises questions about the economic viability of such regulations in a rapidly evolving technological landscape.

Conclusion

The prospect of the U.S. adopting a regulatory framework similar to the EU AI Act looks increasingly likely. The growing momentum behind algorithmic discrimination bills signals a shift toward more stringent oversight of AI technologies. As these discussions progress, stakeholders must weigh the need for regulation against the risk of stifling innovation in the American AI ecosystem.
