AI Regulation in the U.S.: Navigating Federal and State Frameworks

The legal framework governing artificial intelligence (AI) in the United States does not rest on a single federal law akin to an “AI Code.” Instead, it is structured around federal guidance through Executive Orders and a series of state (and sometimes local) laws that create directly applicable obligations.

This creates a fragmented landscape where compliance depends heavily on the state, the use case (recruitment, essential services, generative content, etc.), and the operator’s role (developer, deployer, provider).

I. The American Framework: Two Levels, No “U.S. AI Act”

The regulation of AI in the United States is framed within the country’s unique institutional architecture, characterized by a distribution of competencies between the federal level and the states.

Key characteristics of the American framework include:

  • A two-tier governance system that combines federal strategy with laws adopted by states (and sometimes local authorities).
  • The absence of a single, comprehensive federal law comparable to the European AI Act.
  • A federal strategy driven largely by Executive Orders that set national priorities and guide federal agency action.
  • State laws that can impose binding legal obligations on businesses, enforced by state Attorneys General or local authorities.
  • Priority areas such as combating discrimination, ensuring transparency, consumer protection, and regulating generated content.

In this context, the American approach generally focuses on maintaining technological leadership and fostering innovation.

An Executive Order is a normative act issued by the President of the United States in the exercise of constitutional and/or statutory powers. It is legally binding on federal agencies and administrations, guides their priorities, and can influence the private sector through public procurement, regulatory guidance, or federal standards. However, it is not equivalent to a law passed by Congress and generally does not create a compliance code applicable to all businesses.

In the absence of a comprehensive federal law on AI, individual states develop and adopt their own rules applicable to AI systems. This results in a regulatory environment where obligations vary by jurisdiction and use case.

II. The Federal Level: Executive Strategy and “AI Action Plan”

1. Federal Governance Structured by the Executive

At the federal level, AI policy is primarily implemented through:

  • Executive Orders
  • Strategic frameworks (action plans, national priorities)
  • Execution by agencies (implementation, public procurement, infrastructure, international positioning)

Following the Executive Order “Removing Barriers to American Leadership in Artificial Intelligence” (EO 14179, January 23, 2025), the White House published the strategic plan “Winning the Race: America’s AI Action Plan” in July 2025. This document does not itself create federal law but serves as a blueprint guiding the administration’s actions (priorities, funding, public procurement, infrastructure, diplomacy).

The plan revolves around three pillars:

  • Accelerating AI innovation
  • Building American AI infrastructure
  • Leading international diplomacy and security in AI

2. Key Executive Orders Related to Federal AI Strategy

Structured texts include:

  • Maintaining American Leadership in Artificial Intelligence (EO 13859, Feb. 2019): Launches the American AI Initiative and sets federal priorities (research, talent, regulatory framework).
  • Removing Barriers to American Leadership in Artificial Intelligence (EO 14179, Jan. 2025): Anchors a pro-innovation federal orientation, calling for the removal of perceived barriers to AI competitiveness.
  • President’s Council of Advisors on Science and Technology (EO 14177, Jan. 2025): Strengthens the scientific and technological advisory structure at the presidential level.
  • Advancing Artificial Intelligence Education for American Youth (Apr. 2025): Aims to develop AI training and skills in education.
  • Accelerating Federal Permitting of Data Center Infrastructure (EO 14318, Jul. 2025): Accelerates certain federal permitting aspects for AI-related data center infrastructure.
  • Promoting the Export of the American AI Technology Stack (EO 14320, Jul. 2025): Structures a coordinated federal effort to support the export of American AI full-stack technology.
  • Preventing Woke AI in the Federal Government (EO 14319, Jul. 2025): Sets requirements for AI systems (including LLMs) used by the federal government and potentially influences public procurement.
  • Ensuring a National Policy Framework for Artificial Intelligence (EO 14365, Dec. 2025): Calls for a minimally burdensome national framework and seeks to reduce the fragmentation created by divergent state approaches.

3. Ensuring a National Policy Framework for Artificial Intelligence: Towards Enhanced Federal Coordination

The recent Executive Order “Ensuring a National Policy Framework for Artificial Intelligence” (December 2025) marks a significant step in the evolution of federal AI governance in the U.S. This text affirms the intention to enhance the coherence of the national regulatory framework and limit the fragmentation effects resulting from multiple state legislative initiatives.

To this end, the Executive Order includes:

  • Creation of a task force within the Department of Justice (DOJ)
  • Analysis of state laws and initiatives related to AI
  • Identification of potential conflicts with federal priorities regarding innovation and technological competitiveness

The Colorado AI law is notably mentioned as an example of regulation that may raise such concerns.

4. The TRUMP AMERICA AI Act: An Attempt at Federal Harmonization

Within this context, a federal bill titled TRUMP AMERICA AI Act (The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act) has been introduced to establish a minimum baseline of federal requirements applicable to AI systems.

This text includes:

  • Establishment of a duty of care for AI system developers in the design and operation of their platforms
  • Risk management protocols for advanced AI models (frontier models)
  • Transparency and reporting obligations regarding high-impact models
  • Creation of a Federal AI Safety Institute (FAISI) within the National Institute of Standards and Technology (NIST)
  • Mechanisms to regulate the liability of AI system developers and operators

The bill also addresses issues such as the use of data for model training, the security of advanced systems, the impact of AI on employment, and the protection of minors in digital environments. Public visibility into the bill’s legislative progress remains limited, and its adoption timeline is uncertain.

III. Pioneering States Adopting Binding AI Regulations

In the absence of a single federal law, several jurisdictions have adopted, or are preparing to adopt, texts that impose concrete obligations (audits, transparency, duty of care, prohibitions, sanctions).

The American regulatory ecosystem is also marked by the presence of numerous microbills, which are very short and targeted legislative texts aimed at specific actors or uses of AI. These initiatives focus on governance of AI in the public sector, electoral contexts, transparency of AI-generated content, protection of minors, and regulation of deepfakes.

1. Texas – TRAIGA (Texas Responsible Artificial Intelligence Governance Act)

Objective: To establish a governance framework for AI in Texas, notably through the prohibition of certain high-risk practices and the establishment of a control and sanction regime.

Scope: Applicable to organizations developing, deploying, or operating AI systems with a connection to Texas (in particular where individuals located in Texas are affected).

Main obligations and prohibitions include:

  • Prohibition of behavioral manipulation that incites self-harm, violence, or criminal activity
  • Prohibition of certain forms of unlawful discrimination
  • Prohibition of social scoring by government entities
  • Restrictions on certain biometric uses by the government without consent (with exceptions)
  • Prohibition of systems developed or deployed with the intent of infringing constitutional rights
  • Prohibition of systems intended to produce or disseminate certain illegal content (including illicit sexual deepfakes)

Regulatory sandbox: Mechanism for supervised testing (while maintaining substantive prohibitions).

Safe harbor: Substantial compliance with recognized frameworks (e.g., NIST AI RMF) can support a defense/mitigation in certain application contexts.

Sanctions and control:

  • Authority: Texas Attorney General (investigative powers and online reporting mechanism)
  • Compliance timeline: The statute follows a notice-and-cure logic, providing a correction period before penalties apply
  • Civil penalties: Provided for uncured and “incurable” violations, with daily penalties for continuing violations and possible injunctions

2. Colorado – SB 24-205 (Consumer Protections in Interactions with AI Systems)

Objective: To prevent algorithmic discrimination in high-risk AI systems used for “consequential” decisions (employment, housing, credit, health, public services, etc.).

Scope:

  • Applicable to developers of high-risk AI systems; deployers using such systems in Colorado; AI systems affecting Colorado residents.
  • The law clearly distinguishes between the developer (entity designing or providing the system) and deployer (entity using it operationally).
  • Primarily targets predictive AI systems used to make or substantially contribute to decisions, excluding general-purpose generative AI tools.

Main obligations include:

  • General duty of reasonable care: Developers and deployers must exercise due diligence to prevent algorithmic discrimination related to high-risk system usage.
  • Developers: Documentation and information enabling risk management by deployers; transparency on limits/uses; notification in case of detected discrimination.
  • Deployers: Risk management policy, impact assessments, monitoring, informing individuals when AI is used in consequential decisions (as applicable), and mechanisms for recourse/human oversight where required (an illustrative impact-assessment sketch follows this list).
  • Interaction transparency: Obligation to inform consumers when they are interacting with an AI system, unless the interaction is obvious.
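
To make the deployer obligations above more concrete, here is a minimal, purely illustrative sketch of how an impact-assessment record for a high-risk system might be kept as structured data. The field names and values are assumptions for illustration, not statutory language from SB 24-205.

    # Illustrative only: field names and values are assumptions, not SB 24-205 text.
    # One way a deployer might track an impact assessment for a high-risk system.
    import json

    impact_assessment = {
        "system": "resume-screening-model",            # hypothetical high-risk system
        "consequential_decision": "employment",
        "intended_use": "rank applications for recruiter review",
        "known_limitations": ["lower accuracy on non-standard CV formats"],
        "discrimination_risks_and_mitigations": [
            "periodic disparity testing",
            "human review of adverse decisions",
        ],
        "data_categories_processed": ["work history", "education"],
        "consumer_notice_in_place": True,
        "last_reviewed": "2026-06-30",
    }

    print(json.dumps(impact_assessment, indent=2))     # render for documentation and audit purposes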

Sanctions and control:

  • Enforcement authority rests exclusively with the Colorado Attorney General.
  • No private right of action for individuals.
  • Violations are deemed unfair or deceptive trade practices under the Colorado Consumer Protection Act.
  • Civil penalties can reach $20,000 per violation under the Colorado Consumer Protection Act regime.

3. California – Generative AI Transparency (AB-2013 and SB-942)

California has adopted two major texts focused on transparency in generative AI, both effective January 1, 2026.

A) AB-2013 – Training Data Transparency (TDTA)

Objective: To enhance transparency regarding the training data of generative AI systems accessible in California.

Scope:

  • The law applies to developers of generative AI systems and providers offering generative AI systems or services accessible in California.
  • It targets systems capable of generating synthetic content, including text, images, audio, or video.
  • Systems used exclusively for internal purposes and not accessible to the public are excluded from the scope.

Main obligations include:

  • Publication of a “high-level” summary of the datasets used for training (broadly covering their sources and origin, the type and volume of data, and substantial fine-tuning or updates), to be refreshed when the system undergoes substantial modifications (a structured sketch of such a summary follows this list).
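
As a purely illustrative sketch, the “high-level” summary could be maintained as structured data and rendered to a public documentation page. The field set below is an assumption based on the categories mentioned above, not the statutory disclosure list.

    # Illustrative only: the field set is an assumption, not AB-2013's statutory list.
    import json

    training_data_summary = {
        "system": "example-genai-model",               # hypothetical system name
        "data_sources": ["licensed text corpora", "publicly available web pages"],
        "data_types": ["text"],
        "approximate_volume": "about 1 billion documents",
        "fine_tuning": "instruction tuning on a curated internal dataset",
        "last_substantial_update": "2026-01-01",       # refresh on substantial modifications
    }

    print(json.dumps(training_data_summary, indent=2))  # publish alongside the system documentation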

Sanctions and control:

  • Enforcement through state authorities (mechanisms and sanctions under the applicable California framework; analyses highlight possible civil consequences).

B) SB-942 – California AI Transparency Act

Objective: To increase transparency regarding AI-generated media (audio/image/video) and reduce the proliferation of deepfakes through technical and contractual requirements.

Scope:

  • The law applies to “Covered Providers,” meaning entities that create or produce a generative AI system, have over 1,000,000 monthly users, and make that system accessible in California.
  • The text covers only AI-generated image, audio, and video content; textual content is not targeted by this provision.

Main obligations include:

  • Provision of a free, publicly available AI detection tool
  • Latent disclosures (machine-readable metadata embedded in the content), plus an option for manifest (visible) disclosures (see the sketch after this list)
  • Contractual obligations when the system is licensed to third parties (maintaining disclosure capabilities, with revocation mechanisms if they are altered).
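
As an illustration of what a latent disclosure could carry, here is a minimal sketch that builds a machine-readable provenance record for a generated file. The field names are assumptions, not the statutory wording, and production systems would typically rely on an established provenance standard (such as C2PA) rather than ad-hoc JSON.

    # Minimal sketch of a latent-disclosure payload for AI-generated media.
    # Field names are illustrative assumptions, not SB-942's exact requirements.
    import hashlib
    import json
    from datetime import datetime, timezone

    def latent_disclosure(provider: str, system: str, version: str, content: bytes) -> str:
        """Build a machine-readable provenance record to embed with or alongside the content."""
        record = {
            "provider": provider,
            "system": system,
            "version": version,
            "created_at": datetime.now(timezone.utc).isoformat(),
            "content_sha256": hashlib.sha256(content).hexdigest(),  # ties the record to the file
        }
        return json.dumps(record)

    print(latent_disclosure("ExampleAI", "image-gen", "1.2", b"...image bytes..."))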

Sanctions and control:

  • Civil penalty: $5,000 per violation (each day can constitute an additional violation), potential actions by the Attorney General and certain local authorities.

4. New York City – Local Law 144 (Automated Employment Decision Tools)

Unlike state laws, Local Law 144 is a municipal regulation adopted by New York City, targeting a specific use case: employment.

Objective: To reduce the risk of discrimination in hiring and promotion decisions made with automated tools, through independent audits and transparency toward candidates.

Definition: An automated employment decision tool (AEDT) refers to:

  • A computational tool based on machine learning, statistical modeling, data analysis, or artificial intelligence
  • That provides a simplified outcome (such as a score, classification, or recommendation)
  • Used as the sole or determining factor in hiring or promotion decisions
  • Or used to substantially replace discretionary human decision-making.

Not considered AEDTs: spam filters, firewalls, antivirus software; calculators, spreadsheets, databases; datasets or other compilations of data lacking automated decision-making functionality.

Main obligations include:

  • Independent bias audits before use and periodically thereafter (annual in practice); a sketch of the underlying impact-ratio calculation follows this list.
  • Candidate notification (pre-notice) of the use of an AEDT and of the key elements of the evaluation, at least 10 business days before use.
  • Public posting of information about the most recent audit (date and a summary of results) and associated disclosures.
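
As a minimal sketch of the kind of calculation a bias audit reports, the snippet below computes selection rates and impact ratios per category (each category’s rate divided by the highest rate). The category names and counts are hypothetical; the binding methodology is the one set out in the DCWP rules.

    # Minimal sketch: selection rates and impact ratios of the kind a bias audit reports.
    # Category names and counts are hypothetical; the DCWP rules define the methodology.
    def impact_ratios(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, float]:
        """Return each category's selection rate divided by the highest selection rate."""
        rates = {cat: selected[cat] / applicants[cat] for cat in applicants}
        best = max(rates.values())
        return {cat: rate / best for cat, rate in rates.items()}

    applicants = {"category_a": 400, "category_b": 250}   # hypothetical applicant counts
    selected = {"category_a": 120, "category_b": 45}      # hypothetical positive outcomes
    print(impact_ratios(selected, applicants))
    # {'category_a': 1.0, 'category_b': 0.6}; ratios well below 1.0 flag potential disparate impact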

Sanctions and control:

  • Authority: NYC Department of Consumer and Worker Protection (DCWP).
  • Fines: Up to $500 for a first violation and up to $1,500 for each subsequent violation, according to widely used compliance summaries.
  • No private right of action: enforcement rests with the local authority.

IV. Conclusion: A “Federal and Patchwork” Framework That Imposes a Structured Compliance Approach

The American regulation of AI relies on a balance: on one hand, a federal strategy largely driven by the executive that sets national priorities for innovation, infrastructure, and competitiveness; on the other, state and local legislation that establishes concrete legal obligations regarding audits, transparency, risk management, targeted prohibitions, and sanctions.

For organizations, the challenge lies not only in “knowing the rule” but also in mapping AI uses, identifying relevant jurisdictions, and establishing verifiable governance (documentation, processes, controls, monitoring).

Master your AI compliance in the United States. Are you deploying (or considering deploying) AI systems in the U.S.? Naaia assists you in structuring an operational governance framework: mapping systems, risk management, compliance documentation, and ongoing oversight from an AI management platform designed to industrialize these requirements over time.
