Congress’s Silent Strike Against AI Regulation

Buried in Congress’s Budget Bill: A Push to Halt AI Oversight

Recent media coverage of Congress has largely centered on proposed cuts to Medicaid, which would strip health care coverage from millions. But an equally consequential provision, buried deep in the budget language, stands to affect all Americans.

The US House Energy and Commerce Committee, voting along party lines, has supported a measure that would preempt all state and local regulation of AI for the next ten years. This provision, found in Section 43201(c) of the committee’s budget reconciliation proposal, would effectively strip the public of any meaningful recourse in the face of AI-related harm.

Rather than a mere pause in regulation, this provision is a decade-long permission slip for corporate impunity. It sends a clear message to the states: “Sit down and shut up” while Big Tech writes its own rules or lobbies Congress to keep regulation off the books entirely. That is recklessly irresponsible.

The Inaction of Federal Lawmakers

For years, federal lawmakers have dragged their feet on AI oversight, while state leaders, both Republican and Democratic, have taken the initiative. In the absence of federal action, states have led efforts to build safeguards as AI rapidly reshapes daily life. These safeguards include protections against deepfake election material, deepfake pornography, AI-generated child sexual abuse material, algorithmic discrimination, and harms from autonomous vehicle systems.

State lawmakers have responded to public outcry, enacting laws where Congress failed to act, particularly after constituents were harmed by autonomous vehicles and other AI products.

Potential Impact of the Moratorium

The House Energy and Commerce Committee’s 10-year moratorium on state AI regulation could block numerous state laws designed to protect citizens:

  • Approximately two-thirds of US states have laws against AI-generated deepfake pornography.
  • Half of US states have laws against AI-generated deceptive election materials.
  • Colorado’s comprehensive state AI Act establishes baseline consumer protections.
  • Kentucky’s AI laws protect citizens from AI discrimination.
  • Tennessee’s ELVIS Act protects against AI voice cloning.
  • A North Dakota law prohibits health insurance companies from using AI for treatment authorization decisions.
  • New York’s AI Bill of Rights provides civil and consumer rights protections.
  • California’s leading AI laws include content disclosures and guidelines for using consumer data in AI training.

The Call for Accountability

These laws and policies reflect the urgent work of state lawmakers and policymakers to address risks and harms in light of Congress’s inaction. Many state lawmakers have been motivated by constituents who have experienced harm from AI products and services.

Despite the hyperpartisan political climate, state lawmakers have collaborated to enact policy solutions to tackle significant harms caused by AI. Initial efforts in state AI regulation have led to the establishment of numerous AI studies and task forces aimed at assessing the best approaches to protect constituents while fostering innovation.

Champions of AI immunity argue they are protecting innovation from a “patchwork” of state laws. Yet US AI companies continue to thrive under these laws, as a recent valuation of $14 billion attests. The US remains a leader in AI, and businesses across industry sectors routinely manage variations in state law without claiming national competitiveness is at risk.

The Fallacy of the “AI Arms Race”

The argument that AI regulation will cause the US to “fall behind China” has become a common refrain among anti-regulatory advocates. They assert that the “AI arms race” must be won against China at any cost, including risks to public safety and rights. Yet, there is no evidence to suggest that consumer protection and global leadership are mutually exclusive.

During the markup, the committee chairman suggested that Congress would eventually address a national AI standard. But years of examination have yielded little concrete action. The recent passage of the TAKE IT DOWN Act aside, Congress has failed to enact significant tech legislation, leaving the public vulnerable.

The Consequences of Inaction

If Section 43201(c) becomes law, it will not merely delay regulation; it will signal to companies that the race to scale takes precedence over public safety. The provision creates a vacuum: a ten-year window in which AI companies could operate without accountability, with no lawsuits, no local investigations, no transparency mandates, no new rights, and no democratic debate.

History has shown that deferring action on emerging technologies, such as social media, can lead to dire consequences, including rampant disinformation, privacy abuses, and election interference. The same mistakes must not be repeated with AI, which could have an even more profound impact on society.

The future of AI governance should not be dictated solely by trade associations and Big Tech lobbyists. It should reflect public values such as accountability, fairness, and the right to seek redress when harmed. States have stepped up in the absence of federal leadership, demonstrating a model of federalism that empowers states to make decisions for their constituents.

Congress now faces a critical choice: it can support a robust federal-state partnership in AI governance, or it can silence the only lawmakers who have taken proactive steps to protect the public. The stakes could not be higher.
