The Rising Tide of AI Regulation in America

The EU AI Act is Coming to America

The AI Action Summit in Paris brought the debate over AI regulation into sharp relief. Vice President J.D. Vance struck an optimistic tone on AI and criticized the European Union for hastily imposing preemptive regulations. The irony is that over a dozen U.S. states are currently considering laws that closely resemble the EU AI Act, particularly in their focus on algorithmic discrimination by “automated decision systems.”

This article delves into the emergence of these laws, their intended purposes, and the potential impacts they may have on the landscape of artificial intelligence in the United States.

Understanding the Bills

These state bills aim to prevent algorithmic discrimination by regulating the use of AI in high-risk scenarios. They require businesses that use AI as a substantial factor in making consequential decisions—such as those affecting employment, education, or financial services—to implement risk management plans and conduct algorithmic impact assessments.

For instance, if a business employs an AI system to assist in hiring decisions, it must comply with these laws. The definitions of terms like “substantial factor” and “consequential decision” will significantly influence the application of these laws.

Challenges in Implementation

Despite these laws’ intentions, there are significant uncertainties surrounding their implementation. For example, if an AI tool is used to filter resumes, does it count as a substantial factor in a hiring decision? The ambiguity in definitions poses challenges for compliance and enforcement.

Colorado is the only state to have passed a version of this law, and it has already run into difficulties implementing it. The compliance regime proved complex enough that the state formed the Colorado AI Impact Task Force, charged with reviewing and improving the law.

The Proliferation of Similar Laws

Despite the difficulties Colorado has encountered, other states are rapidly following suit: California, Illinois, and Texas are all considering similar laws. The rapid spread of these regulations raises questions about coordination among states and the influence of external organizations, such as the Future of Privacy Forum, which aims to align U.S. policies with European standards.

The Role of the Future of Privacy Forum

The Future of Privacy Forum (FPF) has played a central role in facilitating discussions around AI regulation. Although FPF claims neutrality, many state legislators involved with its AI policy working group have introduced similar bills, suggesting a concerted effort to shape AI legislation across the U.S.

Comparisons with the EU AI Act

Both the algorithmic discrimination bills and the EU AI Act take a risk-based approach to regulation. They emphasize the need for preemptive compliance measures in high-risk industries, including financial services, education, and law enforcement. The commonalities suggest that the U.S. may be importing regulatory frameworks from the EU, which could lead to significant compliance costs for American businesses.

Evaluating the Costs and Benefits

Estimates suggest that compliance with the EU AI Act could add as much as 17% to corporate spending on AI. Given the broader use of AI systems, these costs could escalate even further. Critics argue that the issues of algorithmic discrimination may not warrant such significant economic burdens, especially when current consumer protection laws already address many of these concerns.

Conclusion

The U.S. is on a trajectory toward imposing stringent regulations on artificial intelligence, mirroring the EU’s approach. Without significant intervention, the landscape of AI regulation in America may soon resemble that of Europe, potentially stifling innovation and complicating the development of AI technologies.

As these laws evolve, stakeholders must navigate the complexities of compliance while addressing the underlying issues of algorithmic discrimination effectively. The future of AI regulation in the U.S. is uncertain, but the implications of these developments are profound and far-reaching.
