Microsoft’s Science Chief Opposes Trump’s AI Regulation Ban

Microsoft Science Chief Criticizes Proposal to Prohibit State-Level AI Governance

Microsoft’s chief scientist has voiced strong opposition to Donald Trump’s proposal to ban state-level artificial intelligence (AI) regulation, arguing that the measure would hinder rather than accelerate the technology’s development.

Concerns Over Regulation and AI Development

Dr. Eric Horvitz, who has previously served as a technology adviser to Joe Biden, argues that prohibiting regulation may ultimately “hold us back.” He emphasizes that effective guidance and regulation are crucial for fostering innovation and ensuring the responsible use of AI technologies.

Horvitz’s comments come amid the Trump administration’s proposal for a sweeping 10-year ban that would prevent U.S. states from enacting any laws limiting or regulating AI models, systems, or decision-making tools. The proposal is driven by White House concerns that China could outpace the U.S. in AI development.

Industry Perspectives on AI Regulation

Tech investors, including Marc Andreessen, are advocating for the measure, arguing that regulation should focus on consumer applications rather than on research. Andreessen describes the current situation as a “two-horse race for AI supremacy” between the United States and China.

Vice President JD Vance echoes these concerns, warning that if the U.S. halts AI development while China forges ahead, it risks becoming “enslaved to China-mediated AI.” Horvitz, by contrast, worries about AI’s potential misuse to spread misinformation and enable malicious activity, particularly in relation to biological hazards.

Call for Communication and Collaboration

Speaking at a recent meeting of the Association for the Advancement of Artificial Intelligence, Horvitz highlighted the need for scientists to engage with government agencies to advocate for balanced regulation. He stated, “Guidance, regulation… reliability controls are part of advancing the field, making the field go faster in many ways.”

This position creates an apparent contradiction, as Microsoft is reportedly involved in a lobbying effort alongside other tech giants, including Google, Meta, and Amazon, to support Trump’s proposed ban on state regulation.

Lobbying Efforts and Legislative Implications

According to the Financial Times, Microsoft has joined a lobbying campaign urging the U.S. Senate to approve the decade-long moratorium on state-level AI legislation, which is folded into Trump’s budget bill and expected to be passed by July 4th.

This ongoing debate reflects growing concern about unregulated AI development and its potentially catastrophic risks to humanity. Critics caution that companies might prioritize short-term profits over safety and ethical considerations.

Future of AI Regulation: A Critical Debate

Stuart Russell, a computer science professor at UC Berkeley, raised alarming questions at the same meeting. He asked, “Why would we deliberately allow the release of a technology which even its creators say has a 10% to 30% chance of causing human extinction?” This underscores the immense stakes involved in regulating AI technologies.

Microsoft’s substantial $14 billion investment in OpenAI, the company behind ChatGPT, further raises the stakes of this debate. OpenAI’s chief executive, Sam Altman, has made bold predictions about the future, suggesting that within five to ten years we may see “great humanoid robots” performing various tasks in society.

Predictions regarding the arrival of artificial general intelligence (AGI)—defined as AI that matches human-level intelligence—vary widely. While Meta’s chief scientist Yann LeCun suggests AGI could be decades away, Mark Zuckerberg has announced a significant investment aimed at achieving “superintelligence.”

Legislative Outcomes and Regulatory Authority

The timing of this debate is critical as Congress deliberates on Trump’s budget bill. The outcome could determine whether individual states maintain the authority to regulate AI development or whether such decisions will be relegated solely to the federal level for the next decade.

Microsoft has refrained from commenting on the apparent contradiction between their chief scientist’s public stance and their lobbying efforts, leaving the company’s true position on AI regulation ambiguous.
