AI Regulation in the United States: Federal Framework and State Laws
The legal framework governing artificial intelligence in the United States does not rely on a single federal law comparable to an “AI code”. Instead, it is organized around federal steering through Executive Orders and a set of state (and sometimes local) laws that create directly applicable obligations.
This results in a fragmented landscape, where compliance strongly depends on the state, the use case (recruitment, essential services, generative content, etc.), and the role of the operator (developer, deployer, provider).
I. The American Framework: Two Levels, No “U.S. AI Act”
The regulation of artificial intelligence in the United States is embedded in the country’s institutional architecture, characterized by:
- A two-level governance structure combining federal strategy and legislation adopted by states (and sometimes local authorities).
- The absence of a single, comprehensive federal law comparable to the European AI Act.
- Federal steering largely structured by Executive Orders, which set national priorities and guide the actions of federal agencies.
- State laws that may impose legally binding obligations on companies, enforced by Attorneys General or local authorities.
Priority areas include combating discrimination, transparency, consumer protection, and regulation of generated content. Generally, the American approach emphasizes maintaining technological leadership and innovation.
Executive Orders: The Federal Engine
An Executive Order is a directive issued by the President of the United States that is legally binding on federal agencies. It sets their priorities and may influence the private sector through public procurement and regulatory guidance.
In the absence of a comprehensive federal AI law, individual states develop their own rules applicable to AI systems, resulting in a fragmented regulatory landscape.
Among the most notable examples are:
- Colorado: Risk-based approach targeting algorithmic discrimination (effective 2026).
- California: Transparency obligations for generative AI and regulation of deepfakes (effective 2026).
- Texas: Prohibition of certain uses and targeted obligations (effective 2026).
- New York City: Regulation of automated decision tools in recruitment (in effect since 2023).
- Utah: Protection of consumers and minors in interactions with AI systems (in effect since 2024).
II. Federal Level: Executive Strategy and “AI Action Plan”
1. Federal Governance Structured by the Executive
At the federal level, AI policy is mainly implemented through:
- Executive Orders
- Strategic frameworks (action plans, national priorities)
- Execution by agencies (implementation, public procurement, infrastructure, international positioning)
Following the Executive Order “Removing Barriers to American Leadership in Artificial Intelligence” (EO 14179, January 2025), the White House published the strategic plan “Winning the Race: America’s AI Action Plan” (July 2025). This document serves as a blueprint guiding the administration’s action.
2. Key Executive Orders Linked to the Federal AI Strategy
Notable Executive Orders include:
- Maintaining American Leadership in Artificial Intelligence (EO 13859, Feb. 2019): launches the American AI Initiative.
- Removing Barriers to American Leadership in Artificial Intelligence (EO 14179, Jan. 2025): anchors a federal “pro-innovation” orientation.
- Advancing Artificial Intelligence Education for American Youth (Apr. 2025): promotes AI literacy and education across the U.S. education system.
- Preventing Woke AI in the Federal Government (EO 14319, July 2025): establishes requirements for AI systems used by the federal government.
3. Ensuring a National Policy Framework for Artificial Intelligence
The Executive Order “Ensuring a National Policy Framework for Artificial Intelligence” (December 2025) aims to strengthen the coherence of the national regulatory framework and limit fragmentation.
To that end, it creates a task force within the Department of Justice (DOJ) to review state laws and initiatives related to AI.
III. Pioneer States That Have Adopted Binding AI Regulations
Several jurisdictions have adopted texts with concrete obligations, including:
1. Texas – TRAIGA (Texas Responsible Artificial Intelligence Governance Act)
Establishes a statewide AI governance framework, prohibiting certain high-risk practices and creating an enforcement and sanctions regime.
2. Colorado – SB 24-205 (Consumer Protections in Interactions with AI Systems)
Aims to prevent algorithmic discrimination in high-risk AI systems affecting consequential decisions.
3. California – Generative AI Transparency (AB-2013 and SB-942)
Focuses on transparency in generative AI, specifically regarding training data and AI-generated media.
4. New York City – Local Law 144 (Automated Employment Decision Tools)
Targets discrimination in hiring and promotion decisions made by automated tools, requiring independent audits and transparency.
IV. Conclusion: A “Federal and Patchwork” Framework
U.S. AI regulation rests on a balance between federal strategy and state legislation, leaving organizations to navigate a patchwork of overlapping obligations.
To keep pace with this evolving landscape, organizations should map their AI uses, identify the jurisdictions and operator roles that apply to them, and implement verifiable governance processes.