White House Struggles to Define National AI Regulatory Framework

A top official leading the White House's artificial intelligence strategy recently addressed House lawmakers but offered scant detail on the administration's forthcoming legislative recommendations for a national standard that would preempt state AI laws.

In December, President Donald Trump signed an executive order instructing federal agencies to challenge state AI laws deemed “onerous” and to restrict state access to certain federal funds—including funding related to broadband deployment—based on those regulations. This executive order was issued after earlier attempts by pro-AI lawmakers to legislate a national preemption of state AI laws encountered bipartisan resistance in support of state authority.

Tasking White House Officials

The executive order also designated White House Science and Technology Adviser Michael Kratsios and Special Adviser for AI and Crypto David Sacks to formulate legislative recommendations for a national AI standard that would supersede existing state laws. During Kratsios’s first testimony on Capitol Hill since the issuance of the order, he sidestepped specific details while addressing lawmakers’ concerns about the distribution of responsibility for AI regulation among states, Congress, and the Trump administration.

Kratsios highlighted the need for “regulatory clarity and certainty” to ensure that American innovators maintain their global leadership in the AI sector. He stated, “If American innovators are to continue to lead the world, they will need regulatory clarity and certainty, which the legislative and executive branches must work together to provide.”

Support for Federal Framework

Subcommittee Chair Jay Obernolte, a Republican from California, expressed general support for Congress to establish what he termed an “appropriate federal framework” that would reinforce the United States’ position as a leader in AI development and deployment. However, he emphasized the importance of state involvement in regulating AI. California has enacted laws requiring AI developers to disclose information about potential catastrophic risks associated with their models and the data used for training.

Obernolte stated, “I think what everyone believes is that there should be a federal lane, and that there should be a state lane,” advocating for a clear distinction in regulatory responsibilities. He pressed Kratsios on the potential “guardrails” and the administration’s vision for congressional action.

Concerns Over Executive Power

During the discussion, Rep. Zoe Lofgren, a Democrat from California, raised concerns that the executive order would shift power over AI from the states and Congress to the executive branch. She argued that preempting state authority could strip away protections for citizens, particularly while Congress has yet to enact legislation of its own.

Lofgren acknowledged the administration’s AI Action Plan’s goals, particularly in terms of “innovation, infrastructure, international diplomacy, and security,” but critiqued it for only minimally addressing the risks associated with AI technologies.

Discussion on AI Misuse and Accountability

Concerns were also raised about the federal government’s relationship with Elon Musk’s X platform, formerly known as Twitter, particularly following incidents where the platform allowed the Grok AI chatbot to create inappropriate images of real individuals, including minors. In response, Kratsios asserted that misuse of technology demands accountability rather than “blanket restrictions on the use and development of that technology.”

Future Plans for AI Standards

Lawmakers from both parties questioned Kratsios about the administration's plans for the National Institute of Standards and Technology (NIST) and its Center for AI Standards and Innovation (CAISI). Obernolte indicated intentions to introduce a bill titled the Great American AI Act to codify the center, while also praising the administration's support for the continuation of the National Artificial Intelligence Research Resource (NAIRR).

Closing his testimony, Kratsios praised the transition from the former AI Safety Institute to CAISI and said NIST would revise its AI Risk Management Framework to remove references to politically charged topics such as misinformation and climate change, with the stated aim of keeping the framework focused on scientific integrity.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...