AI Regulation Clash: Schmidt vs. Sweeney on Safety and Accountability

EXCLUSIVE: Former Google CEO Eric Schmidt Butts Heads With Former FTC CTO Over AI Regulation

In a heated discussion with leading AI scholars, Eric Schmidt, the former CEO of Google, argued that AI systems can develop unexpected behaviors that complicate the implementation of safety and governance mechanisms in products offered by tech giants like Google. This conversation took place at the annual Isaac Asimov Memorial Debate, moderated by physicist Neil deGrasse Tyson.

Schmidt highlighted that a central challenge in regulating frontier AI models is the emergence of new features that are often untested and unpredictable. He stated, “We can stop [the emergence of new features or behaviors], and therefore stop all progress, by law, by banning larger models, but as long as you have this new emergent power, you have deep reasoning, deep capabilities, and they will make mistakes. You have to be tolerant.”

Schmidt, who supported Google’s 2014 acquisition of DeepMind, emphasized that AI developers should be held accountable for any legal violations. He noted, however, that developers often need to release AI products quickly and then address unforeseen behaviors retroactively as the models evolve.

Schmidt reflected on his tenure at Google, saying, “I went through this… where the system would actually do something that was wrong, and we fixed it. And we fixed it as fast as we could, because we had to, because it was the right thing to do.”

Challenges of Compliance

Schmidt was challenged by Latanya Sweeney, a professor of government and technology at Harvard and former CTO at the Federal Trade Commission. She expressed skepticism about the willingness of tech companies to comply with regulations, citing historical instances where companies have ignored or manipulated laws for their commercial interests.

“Technology just ignores [laws] and rewrites them,” Sweeney stated, pointing to Google’s recent legal battles. Last year, U.S. federal judges ruled that Google had operated illegal monopolies in both ad tech and online search. She also cited Meta’s (formerly Facebook) settlement over the mishandling of user data in the Cambridge Analytica scandal.

Sweeney argued for foundational changes in mitigating AI risks, stating, “There are questions about existential harms in the future, but there are a lot of harms happening right now.” She emphasized that the design of AI technology significantly influences what values it embodies.

The Complexity of AI Systems

Schmidt countered the implication that better design could preempt all harms, arguing that leading AI programs are complex, non-linear systems that often exhibit unforeseen capabilities. He admitted that Silicon Valley leaders sometimes rush products to market, leading to numerous issues that require correction.

Despite this, Schmidt noted that AI developers rely on safeguards such as model evaluations and dedicated safety-testing teams to mitigate risks in advance. However, Nate Soares, president of the Machine Intelligence Research Institute, critiqued these measures as inadequate for ensuring humanity’s safety.

Soares likened current AI safety efforts to insufficient measures for preventing nuclear disasters, stressing that the nature of frontier AI models often leads to unpredictable behaviors that their creators did not design or anticipate.

The Future of AI

Ultimately, Schmidt argued that the benefits of AI will outweigh its risks, asserting that companies engaged in AI development are acutely aware of the dangers. He stated, “The companies that are doing this work… spend an awful lot of time talking about [the dangers].”

Joining Schmidt on stage were prominent scholars including Kate Crawford, a professor of AI at USC, and Chris Callison-Burch, a computer science professor at the University of Pennsylvania, further enriching the discourse surrounding the future of AI regulation and safety.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...