Reassessing the AI Moratorium: A Call for Federal Leadership in Regulation

AI Moratorium: A Misunderstood Measure

Representative Jay Obernolte, R-Calif., has called the 2025 artificial intelligence moratorium, which failed to make it into last year's budget reconciliation bill, "very misunderstood." He emphasized that the federal government must take the lead in establishing an AI regulatory framework for the country.

Sector-Specific Regulation

During his speech at the Incompas Policy Summit, Obernolte advocated for a sector-specific approach to AI regulation, focusing on the risks associated with the technology. He explained that the moratorium was initially intended as “a messaging amendment” rather than a long-term solution.

“We never expected to even get it out of [the] Energy and Commerce [Committee],” he stated. “We thought that the conversation needed to be had.” He was surprised by the moratorium’s progress through Congress but noted that the Senate ultimately stripped it from the bill. “I think people took the wrong message away from it,” he added.

Clarifying Regulatory Roles

Obernolte elaborated that the purpose of the moratorium provision was to define where states have “lanes” in regulating AI and to highlight the necessity for an overarching national law. He asserted that the federal government must first establish the parameters of regulation, outlining what constitutes interstate commerce and where states can innovate.

“What we were saying was not that states shouldn’t have a lane in the regulation of AI,” Obernolte clarified. “What we were saying is that the federal government needs to go first.”

Party Line Fractures

The AI moratorium's introduction and movement through Congress scrambled typical party lines. While it gained support from some Republicans, such as Senator Ted Cruz, R-Texas, and Representative Rich McCormick, R-Ga., other conservatives, including Senators Marsha Blackburn, R-Tenn., and Josh Hawley, R-Mo., opposed the measure, citing the prerogatives of state legislatures.

Opponents criticized the moratorium for its blanket approach to delaying state regulations, especially as Congress has yet to pass comprehensive nationwide regulations for AI technology.

The Call for a Comprehensive Framework

In discussions with Nextgov/FCW, Obernolte reiterated that “the moratorium was never intended to be a long-term solution.” Its aim was to emphasize the need for the federal government to take the initiative in averting a patchwork of conflicting state laws.

He argued for a robust framework with preemptive guardrails that clarify which areas states may legislate. "I hope that we don't have to do a moratorium. I hope that we can go straight to passing that framework," he said, stressing the importance of enacting preemption and federal regulation simultaneously.

Presidential Executive Order

The question of a decade-long moratorium on state AI legislation reached the White House when President Donald Trump signed an executive order in December mandating evaluations of state laws to identify those that could be overly burdensome for AI developers. The order included exceptions for state and local laws related to child safety protections, AI infrastructure, and other specified areas.

Obernolte praised the executive order for its clear delineation of regulatory responsibilities, noting, “The president went through in his executive order and actually said, ‘these are things I think the states should be regulating, not the federal government.’” This action was seen as a positive step towards creating comfortable regulatory lanes for states.
