Expert Comment: The Pentagon-Anthropic Dispute Reflects Governance Failures
On March 4, the Pentagon formally notified Anthropic that it had been deemed a supply chain risk to national security, marking an unprecedented move against an American company.
The designation followed Anthropic’s refusal to accept contract language permitting the use of its technology for “all lawful purposes.” CEO Dario Amodei insisted on retaining two red lines: prohibitions on mass domestic surveillance and on fully autonomous weapons systems. After intensive negotiations, US Secretary of Defense Pete Hegseth announced that the Department of Defense (DoD) would transition away from Anthropic products within six months, despite reports that the Pentagon had been relying extensively on Anthropic’s model, Claude, in its ongoing war with Iran.
A Clash Between Ethics and National Security
The dispute has been widely characterized as a conflict between ethics and national security. However, it points to deeper structural challenges. The Pentagon-Anthropic dispute reveals longstanding governance gaps in the integration of AI into military and intelligence operations—gaps that predate this administration and will outlast the present controversy.
In the absence of clear institutional frameworks, private companies like Anthropic have attempted to impose limits through usage policies that define how their models may be deployed. The dispute underscores the shortcomings of that approach. Contractual mechanisms are not a substitute for governance frameworks capable of keeping pace with the operational realities of AI-enabled warfare.
Legal Framework and Changes
The mechanism Secretary Hegseth invoked, 10 USC §3252, is a supply chain security statute designed to address foreign threats to the integrity of defense systems. Historically, it has been applied to adversary-linked vendors, like China’s Huawei. Its application to a domestic American company represents a marked departure from past practice. The evidentiary basis for treating a contractual disagreement over usage terms as equivalent to foreign compromise or sabotage has not yet been publicly established.
The Trump administration originally accepted Anthropic’s usage restrictions when the $200 million contract was awarded in July 2025. However, the Pentagon’s January 2026 Artificial Intelligence Strategy memorandum changed how the DoD works with contractors, directing the Department to incorporate a standard “any lawful use” clause into all contracts within 180 days. The memorandum signals a broader push within the Department to prioritize “accelerating America’s military AI dominance” to outpace China, even where safeguards are not fully established.
Alternative Policy Options
Other policy options were available to the administration in its dispute with Anthropic, including contract termination or competitive re-solicitation. Instead, the Pentagon invoked a national security supply chain designation while finalizing an agreement with Anthropic’s competitor, OpenAI. This designation suggests an attempt to rewrite the terms under which frontier AI companies may do business with the US government, potentially chilling public-private partnerships across the defense sector.
Governance Gaps and Implications
The fundamental issue highlighted by this dispute is structural. Existing law leaves significant gaps in the governance of AI-enabled domestic surveillance and autonomous weapons systems—gaps that are often open to contested interpretation. The January 2023 update to DoD Directive 3000.09 requires lethal autonomous systems to undergo rigorous testing prior to deployment; however, the directive is internal policy, not statute.
Updating such directives typically involves a lengthy policy process that is not designed to keep pace with rapidly advancing technological capabilities. Meanwhile, the use of AI in systems that fall below the threshold of lethal autonomy but still contribute to kinetic effects is already well underway in warfare contexts, including in Gaza, Ukraine, and Iran. Neither policy nor law has adequately grappled with the civilian harm implications of this operational reality.
OpenAI Agreement and Its Limitations
The agreement with OpenAI is unlikely to bridge these gaps. OpenAI accepted the “any lawful use” clause while negotiating safeguards that reportedly include restrictions on mass domestic surveillance, prohibitions on directing fully autonomous weapons systems, and security-cleared engineers embedded within the Pentagon. However, the full scope of these provisions remains uncertain, as the contract has not been released publicly.
Critically, neither the Anthropic nor the OpenAI agreement prohibits mass surveillance of foreign nationals, a longstanding concern among allied partners.
International Repercussions
Allied governments are now confronting the implications of the Pentagon-Anthropic dispute. The designation of Anthropic as a supply chain risk may create legal, operational, and financial challenges for NATO and Five Eyes partners that have integrated Anthropic models into shared platforms and joint programs.
Beyond these immediate questions, the episode bears on broader debates over defense interoperability and the conditions under which US technology partnerships can be relied upon, an issue likely to take on renewed urgency in allied capitals.
This dispute is also being closely observed by strategic competitors. Chinese state-affiliated commentary has framed the episode as evidence of structural instability in the American AI ecosystem, implying that China’s military-civil fusion model confers an institutional advantage that the United States lacks.
Conclusion
The United States is deploying frontier AI into consequential military and intelligence environments without the statutory frameworks or structured oversight processes that the scale and stakes of that deployment demand. The Pentagon-Anthropic dispute has made the governance gap surrounding military AI impossible to ignore. Policymakers in the United States and allied countries must now determine how it will be addressed.