From Clipper Chips to Claude: A History of Government Power vs. Technology Safety
This article traces the recurring tension between government power and technology safety, a pattern that runs from 1990s encryption battles to the current Pentagon-Anthropic standoff over AI governance.
The Pentagon-Anthropic Standoff
On February 24, War Department Secretary Pete Hegseth issued a 72-hour ultimatum to Anthropic CEO Dario Amodei: remove Claude’s safety guardrails for military use or lose a $200 million contract and be designated a supply-chain risk to national security. Anthropic refused, and the government followed through on its threat.
This confrontation marks a significant moment in U.S. technology policy: it is the first time the federal government has applied the supply-chain-risk designation to an American technology company, and it crystallizes the escalating conflict between government interests and technological safety.
The Clipper Chip (1993–1996)
In 1993, the Clinton administration introduced the Clipper Chip, a government-designed encryption module intended for all telecommunications equipment. Accompanied by a key escrow system, it aimed to provide federal agencies with backdoor access to encrypted communications.
A coalition of technology companies and civil liberties organizations pushed back, arguing that a mandatory backdoor would weaken security for all users. The initiative also suffered a technical blow: in 1994, AT&T researcher Matt Blaze demonstrated that the escrow mechanism could be defeated, undermining the system’s core premise. By 1996, the Clipper Chip initiative was effectively dead.
This episode established a crucial principle: the government’s interest in accessing technology does not automatically supersede the engineering judgment that certain safety features are essential for overall security.
Apple vs. the FBI (2016)
Two decades later, a similar structural argument emerged in the aftermath of the San Bernardino shooting. The FBI obtained a court order compelling Apple to create a custom version of iOS that would disable the iPhone’s passcode-retry protections, allowing the Bureau to brute-force its way into the device.
Apple’s refusal, articulated in an open letter from CEO Tim Cook, emphasized that creating a backdoor, even for a single device, could lead to uncontrollable security risks. Eventually, the FBI purchased a third-party tool and dropped the case, leaving the legal question of the government’s authority over product safety unresolved.
Google and Project Maven (2018)
The relationship took a new turn with Project Maven, a Pentagon program to apply machine learning to drone surveillance footage. Google, contracted to provide image-recognition work, faced an internal revolt: thousands of employees signed a petition demanding the company stay out of military applications.
Ultimately, Google chose not to renew the contract and published AI principles that ruled out developing AI for weapons. The episode taught the Pentagon how fragile voluntary cooperation from tech companies could be, sowing the seeds of a more coercive approach to procurement.
The Grok-Pentagon License (2025)
The most recent precedent is the Pentagon’s 2025 licensing of xAI’s Grok for military applications. Despite Grok’s documented failures, including incidents of generating inappropriate content, the Pentagon proceeded without acknowledging the governance concerns surrounding the system.
This raises critical questions about the Pentagon’s governance framework: If it licensed a system with documented safety failures and is now demanding another vendor remove safety controls, what does this imply about its governance approach?
Anthropic (2026): From Persuasion to Compulsion
The Anthropic dispute signifies a shift from persuasion to compulsion in government interaction with tech companies. By invoking the Defense Production Act and threatening severe consequences, the government has transitioned from negotiation to direct commands.
The Pattern and Its Lessons
Across three decades of confrontations, both the government and technology companies have expanded their toolkits. Key observations include:
- Recurrent Backdoor Arguments: The structural argument against backdoors persists, emphasizing that modifying safety architecture for specific users creates vulnerabilities for all.
- Unresolved Legal Questions: Previous confrontations, like that of Apple, have left legal questions unanswered, which complicates future interactions.
- Escalating Stakes: The nature of government demands has evolved alongside technological capabilities, with the implications of mistakes becoming increasingly severe.
Future installments will examine the paradoxes in the Pentagon’s use of the Defense Production Act, and what the collision between AI safety and government power means for everyday life, from surveillance to predictive policing.