The Rise of AI Regulation: How U.S. States Mirror the EU AI Act

The EU AI Act is Coming to America

The discussion surrounding the EU AI Act and its implications for the United States is becoming increasingly relevant as a growing number of states introduce legislation resembling the European regulatory framework. This article explores the current landscape of AI regulation in the U.S., focusing on algorithmic discrimination and its potential impacts on businesses and consumers alike.

Introduction

At the recent AI Action Summit in Paris, the Vice President of the United States delivered an optimistic message about AI development, criticizing the European Union for its hasty regulatory approach and asserting that the U.S. would not follow suit. However, as similar bills take shape in more than a dozen states, the reality appears more complex.

About the Bills

Legislation pending or enacted in various U.S. states aims to combat algorithmic discrimination. These bills regulate the use of AI in high-risk areas such as employment, education, and insurance, and impose requirements up front on businesses that use AI as a substantial factor in consequential decisions affecting consumers.

For a covered business, this means drafting a risk management plan and conducting an algorithmic impact assessment. Developers of AI products also face transparency and monitoring requirements. How key terms such as “substantial factor” and “consequential decision” are defined will determine how broadly these laws apply, and those definitions remain a subject of debate among legislators and stakeholders.

Challenges in Implementation

The complexities inherent in these laws are evident in Colorado, the only state so far to have enacted such a law. The Governor expressed concerns about its compliance regime, leading to the creation of the Colorado AI Impact Task Force to simplify the legislation.

As more states consider similar legislation, a pattern of rapid regulatory diffusion is emerging, raising questions about coordination among states and about the influence of organizations such as the Future of Privacy Forum.

The Algorithmic Discrimination Bills and the AI Act

Both the U.S. algorithmic discrimination bills and the EU AI Act take a risk-based approach, mandating compliance steps in regulated industries. High-risk AI systems face similar obligations, including impact assessments and risk management plans, and many state bills use language and mechanisms closely resembling those in the AI Act.

Economic Implications

The potential compliance costs associated with these laws are considerable. Estimates suggest that the AI Act could add up to 17% to corporate spending on AI technologies; on that estimate, a company spending $10 million a year on AI could face up to $1.7 million in additional compliance costs. This raises questions about the economic viability of such regulations in a rapidly evolving technological landscape.

Conclusion

It is increasingly likely that the U.S. will end up with a regulatory framework similar to the EU AI Act. The growing momentum behind algorithmic discrimination laws signals a shift toward more stringent oversight of AI technologies. As these discussions progress, stakeholders must weigh necessary regulation against the risk of stifling innovation in the American AI ecosystem.
