EXCLUSIVE: Former Google CEO Eric Schmidt Butts Heads With Former FTC CTO Over AI Regulation
In a heated discussion with leading AI scholars, Eric Schmidt, the former CEO of Google, argued that AI systems can develop unexpected behaviors that complicate the implementation of safety and governance mechanisms in products offered by tech giants like Google. This conversation took place at the annual Isaac Asimov Memorial Debate, moderated by physicist Neil deGrasse Tyson.
Schmidt highlighted that a central challenge in regulating frontier AI models is the emergence of new features that are often untested and unpredictable. He stated, “We can stop [the emergence of new features or behaviors], and therefore stop all progress, by law, by banning larger models, but as long as you have this new emergent power, you have deep reasoning, deep capabilities, and they will make mistakes. You have to be tolerant.”
Schmidt, who supported Google’s 2014 acquisition of DeepMind, emphasized that AI developers should be held accountable for any legal violations. He noted, however, that developers often need to release AI products quickly and then address unforeseen behaviors after the fact as the models evolve.
Schmidt reflected on his tenure at Google, saying, “I went through this… where the system would actually do something that was wrong, and we fixed it. And we fixed it as fast as we could, because we had to, because it was the right thing to do.”
Challenges of Compliance
Schmidt was challenged by Latanya Sweeney, a professor of government and technology at Harvard and former CTO at the Federal Trade Commission. She expressed skepticism about the willingness of tech companies to comply with regulations, citing historical instances where companies have ignored or manipulated laws for their commercial interests.
“Technology just ignores [laws] and rewrites them,” Sweeney stated, pointing to Google’s recent legal battles. Last year, U.S. federal judges found that Google operated illegal monopolies in both ad tech and online search. Meta (formerly Facebook) likewise reached a significant settlement over its mishandling of user data in the Cambridge Analytica scandal.
Sweeney argued for foundational changes in mitigating AI risks, stating, “There are questions about existential harms in the future, but there are a lot of harms happening right now.” She emphasized that the design of AI technology significantly influences what values it embodies.
The Complexity of AI Systems
Schmidt countered the implication that better design could preempt all harms, arguing that leading AI programs are complex, non-linear systems that often exhibit unforeseen capabilities. He admitted that Silicon Valley leaders sometimes rush products to market, leading to numerous issues that require correction.
Despite this, Schmidt noted that AI developers use evaluation cards and safety-testing teams to mitigate risks in advance. However, Nate Soares, president of the Machine Intelligence Research Institute, criticized these measures as inadequate for ensuring humanity’s safety.
Soares compared current AI safety efforts to inadequate safeguards against nuclear disaster, stressing that frontier AI models often exhibit behaviors their creators neither designed nor anticipated.
The Future of AI
Ultimately, Schmidt argued that the benefits of AI will outweigh its risks, asserting that companies engaged in AI development are acutely aware of the dangers. He stated, “The companies that are doing this work… spend an awful lot of time talking about [the dangers].”
Joining Schmidt on stage were prominent scholars including Kate Crawford, a professor of AI at USC, and Chris Callison-Burch, a computer science professor at the University of Pennsylvania, who rounded out the discussion of the future of AI regulation and safety.