AI Rivalry: Anthropic and OpenAI Clash at Super Bowl
When Anthropic and OpenAI took their rivalry to Super Bowl LX last week, it marked a turning point in the saga of artificial intelligence in American life—not just as a technology, but as a cultural, political, and regulatory flashpoint. For years, tech companies have quietly entered the political arena, but Sunday night on national television signaled something more: the battle for public perception had become fully politicized.
Advertising and Philosophical Differences
Anthropic, best known for its Claude family of large language models and its stated mission to “study their safety properties at the technological frontier,” aired a set of Super Bowl commercials that were hard to miss. In one widely discussed spot, an AI assistant abruptly shifts mid-conversation into selling products—a playful parody aimed directly at OpenAI’s controversial decision to introduce advertising within ChatGPT. As Business Insider reported, the commercial’s message was clear: “There is a time and place for ads. Your conversations with AI should not be one of them.”
OpenAI’s leadership pushed back. Greg Brockman, OpenAI’s president, called Anthropic’s ads a reflection of a “fundamental difference in our respective outlooks on AI,” framing the dispute less as marketing and more as a clash between philosophical visions of the technology.
Contrast in Messaging
OpenAI, for its part, chose a different tone in its own Super Bowl advertisement. Rather than mocking a rival, its commercial centered on Codex—its AI coding tool—and the idea that “anyone can build things.” That messaging was earnest, focused on builders, creativity, and economic agency.
What unfolded on a stage watched by more than 100 million Americans was not mere branding. It was a strategic framing of the social contract of AI: one side warned against intrusive monetization in sensitive conversational spaces; the other celebrated broad utility and innovation. At this point, it is fair to say Anthropic is ahead on points, if not in market and mind share.
Political Implications
That corporate tussle bled into politics almost immediately. Days after the Super Bowl, Anthropic announced a $20 million donation to Public First Action, a political group backing state-level AI regulation ahead of the 2026 midterms. According to Reuters, the group is pitched as a counterweight to Leading the Future, a rival super-PAC backed by OpenAI executives and venture-capital heavyweights that has raised around $125 million to advocate for looser regulation. Public First Action is already backing candidates such as Republican Marsha Blackburn, illustrating that the contest over AI policy is crossing traditional partisan lines.
Shifts in AI Discourse
In a matter of weeks, a technology dispute that once existed only in academic papers and narrow policy circles has expanded to billboards, television screens, and political capital. It has placed Silicon Valley players at the forefront of a culture war over what sort of future AI will shape, and on whose terms.
Internal tensions within AI labs reflect broader unease about the pace and direction of technological change, and Anthropic is not above reproach. In early February, Mrinank Sharma—formerly Anthropic’s head of its safeguards research team—resigned with a public warning that “the world is in peril.” In his letter, which circulated broadly on social platforms, Sharma wrote that humanity is approaching “a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.”
The Race for AI Development
That resignation, aimed as much at a company’s internal culture as at the public conversation around AI risk, underscored a disconnect. It highlighted the tension between competitive pressures and the explicit safety values that AI labs claim to uphold—precisely the issue regulators are now publicly wrestling with.
The problem is that every company building AI has an incentive to move as quickly as possible: in part to reach the mythical goal of AGI, but more importantly to secure the capital, energy, and chips needed to sustain meteoric growth. These companies are built to do exactly that. They can't, won't, and don't stop.
The Future of AI
Winning a battle in the court of public opinion might move some market share, but it won’t change the course of the future. For that, we would need a public that is organized, empowered, and capable of working in the best interest of its members.
Or as we used to call it, a functional government. Or you could just build something and see what happens.