Buried in Congress’s Budget Bill: A Push to Halt AI Oversight
Media coverage of Congress's recent budget fight has largely centered on proposed cuts to Medicaid, which would strip health care coverage from millions. But an equally significant provision, buried deep in the budgetary language, stands to affect every American.
The US House Energy and Commerce Committee has advanced, on a party-line vote, a measure that would preempt all state and local regulation of AI for the next ten years. The provision, found in Section 43201(c) of the committee's budget reconciliation proposal, would effectively strip the public of any meaningful recourse for AI-related harm.
This is not a mere pause in regulation; it is a decade-long permission slip for corporate impunity. It sends a clear message to the states: "Sit down and shut up" while Big Tech writes its own rules or lobbies Congress against any regulation at all. That is recklessly irresponsible.
The Inaction of Federal Lawmakers
For years, federal lawmakers have dragged their feet on AI oversight while state leaders, Republican and Democratic alike, have taken the initiative. In the absence of federal action, states have built safeguards as AI rapidly reshapes daily life: protections against deepfake election material, deepfake pornography, AI-generated child sexual abuse material, and algorithmic discrimination, along with liability rules for autonomous vehicle systems.
Responding to public outcry, state lawmakers have stepped in where Congress has failed to act, particularly after constituents were harmed by autonomous vehicles and other AI-enabled products.
Potential Impact of the Moratorium
The House Energy and Commerce Committee's 10-year moratorium on state AI regulation could block numerous state laws designed to protect citizens:
- Approximately two-thirds of US states have laws against AI-generated deepfake pornography.
- Half of US states have laws against AI-generated deceptive election materials.
- Colorado's comprehensive AI Act establishes baseline consumer protections.
- Kentucky's AI laws protect citizens from algorithmic discrimination.
- Tennessee’s ELVIS Act protects against AI voice cloning.
- A North Dakota law prohibits health insurance companies from using AI for treatment authorization decisions.
- New York’s AI Bill of Rights provides civil and consumer rights protections.
- California’s leading AI laws include content disclosures and guidelines for using consumer data in AI training.
The Call for Accountability
These laws and policies reflect the urgent work of state lawmakers and policymakers to address risks and harms in light of Congress’s inaction. Many state lawmakers have been motivated by constituents who have experienced harm from AI products and services.
Despite the hyperpartisan political climate, state lawmakers have collaborated to enact policy solutions to tackle significant harms caused by AI. Initial efforts in state AI regulation have led to the establishment of numerous AI studies and task forces aimed at assessing the best approaches to protect constituents while fostering innovation.
Champions of AI immunity argue they are protecting innovation from a "patchwork" of state laws. Yet US AI companies continue to thrive under these laws; one recent valuation alone reached $14 billion. The US remains a leader in AI, and businesses across industry sectors routinely manage variation in state law without claiming national competitiveness is at risk.
The Fallacy of the “AI Arms Race”
The argument that AI regulation will cause the US to “fall behind China” has become a common refrain among anti-regulatory advocates. They assert that the “AI arms race” must be won against China at any cost, including risks to public safety and rights. Yet, there is no evidence to suggest that consumer protection and global leadership are mutually exclusive.
During the markup, the committee chairman suggested that Congress would eventually adopt a national AI standard. Yet years of hearings and examination have yielded little concrete action. Apart from the recent TAKE IT DOWN Act, Congress has failed to enact significant tech legislation, leaving the public vulnerable.
The Consequences of Inaction
If Section 43201(c) becomes law, it will not merely delay regulation; it will signal to companies that the race to scale takes precedence over public safety. The provision creates a vacuum: a ten-year window in which AI companies could operate without accountability, with no lawsuits, no local investigations, no transparency mandates, no new rights, and no democratic debate.
History has shown that deferring action on emerging technologies, such as social media, can lead to dire consequences, including rampant disinformation, privacy abuses, and election interference. The same mistakes must not be repeated with AI, which could have an even more profound impact on society.
The future of AI governance should not be dictated solely by trade associations and Big Tech lobbyists. It should reflect public values such as accountability, fairness, and the right to seek redress when harmed. States have stepped up in the absence of federal leadership, demonstrating a model of federalism that empowers states to make decisions for their constituents.
Congress now faces a critical choice: it can support a robust federal-state partnership in AI governance, or it can silence the only lawmakers who have taken proactive steps to protect the public. The stakes could not be higher.