Big Tech’s Unchecked Power: The Risks of Trump’s AI Bill

How Trump’s ‘Big Beautiful Bill’ May Harm AI Development in the US

A new U.S. bill would bar state-level AI regulation for 10 years, granting Big Tech unchecked power. Critics warn it endangers innovation, transparency, and public trust, while isolating the U.S. from global AI norms and reinforcing monopolies in the industry.

Like many proposals from the current U.S. administration, the signature Trump bill is branded “big” and “beautiful.” What hides behind the flamboyant name? A farrago of fiscal, immigration, and defense spending policies, the bill also contains a provision on artificial intelligence that could have catastrophic consequences for global AI development.

The bill states: “No State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.”

In essence, the Republican Party is offering Big Tech a lavish gift: a decade-long immunity from state-level AI regulations. The consequences could be dire for innovation and public trust in technology. Without transparent, ethical oversight that ensures public accountability, control over AI systems will rest solely in corporate boardrooms.

How Will the Big Beautiful Bill Impact AI?

  • Limited oversight will mean limited accountability.
  • Big Tech firms will become more entrenched in the space, crowding out smaller players and startups.
  • Public trust in AI will evaporate.
  • The US position as a global leader in AI will erode.

No Oversight Means No Accountability

So far, AI regulation in the US has been largely light touch. Deployed models have gone mostly unchecked, and in many ways this is the natural order of things: technology always moves faster than regulation. The US is also feeling the heat of the global AI race, especially from Chinese competitors. Citing national security, anxious lawmakers and party officials are eager not to get in the way of Big Tech.

Prioritizing “national security” over the safety and rights of actual citizens is dangerous, however. More than 140 organizations recognized this in an open letter, urging lawmakers to reject the proposal. Any technology, especially one as powerful as AI, can cause harm. State-level regulation could be the first line of defense, ready to mitigate and respond before damage is done.

Big Tech Will Get Bigger

By blocking state-level regulation, the bill all but guarantees Big Tech’s continued entrenchment in the artificial intelligence industry. OpenAI, Anthropic, Microsoft, Amazon, and Google each made well over $1 billion in AI revenue in 2024; no other AI developer surpassed $100 million. Without fair standards or open ecosystems, smaller players and startups are left to fend for themselves in a rigged game. The absence of oversight doesn’t create a level playing field; rather, it cements the advantages of those already at the top.

It is no surprise that Big Tech leaders have pushed back against efforts to impose guardrails in the US. Senator Ted Cruz and others at the tip of the deregulatory spear insist that AI should be governed only by federal standards. In practice, this means no standards at all, at least for now. And without them, innovation risks becoming the exclusive domain of the few who already control the infrastructure, the data, and the narrative.

Public Trust in AI Will Evaporate Further

If AI harms go unanswered and remain opaque, trust in the entire system begins to unravel. Transparency is not a luxury, but a prerequisite for legitimacy in a world already anxious about AI. According to the Pew Research Center, more than half of U.S. adults are more concerned than excited about recent developments, especially AI’s use in hiring decisions and healthcare. In California, a state AI regulation bill won broad support and passed the legislature, only to be vetoed by Governor Gavin Newsom after intense lobbying.

Even some federal lawmakers, like Senator Josh Hawley, have voiced concern over the proposed moratorium. “I would think that, just as a matter of federalism, we’d want states to be able to try out different regimes,” he said, advocating for some form of sensible oversight to protect civil liberties. But the Big Beautiful Bill simply leaves the public with no recourse, no transparency, and no reason to trust the technologies shaping their lives.

DOGE Was a Warning Sign

We have seen this playbook before. The Trump-era DOGE initiative slashed teams working on AI policy and research. External oversight was sidelined, and federal agencies were gutted. It ended in predictable failure: privacy violations, biased outputs, a hollowed-out pool of institutional expertise, and Elon Musk returning to his businesses.

Rather than a misstep, DOGE was a case study in what happens when transparency is traded for control and due process is treated as a nuisance. Repeating that mistake, this time under the banner of the Big Beautiful Bill, would risk even greater damage with far fewer guardrails to stop it.

An Open Invitation to Challenge U.S. Global AI Leadership

While other regions like the EU are pushing forward with ethical, human-centered AI frameworks, the US is veering in the opposite direction, toward a regulatory vacuum. That contrast risks more than reputational damage; it could isolate the US in international AI cooperation and invite backlash from allies and emerging AI powers alike. Failure to live up to international standards on data governance, algorithmic transparency, and AI safety might lead to the exclusion of US-based companies from markets and joint research efforts.

Although American Big Tech leads in the AI race for now, emerging alternatives around the world are working toward just, ethical models. Countries in the MENA region, such as Qatar, are also increasingly investing in AI with an eye toward global competitiveness, accountability, and leadership in decentralized AI. As the world moves toward responsible innovation, the US seems poised to sacrifice global leadership for corporate interests by allowing Big Tech to develop models with no regard for the public good.

The bill would be a gift to tech giants like Meta, which reportedly lobbied the White House to oppose state-level regulations on the grounds that they “could impede innovation and investment.” But deregulation is not a vision; it is a retreat, one that leaves the US looking less like a leader and more like an outlier.
