Pentagon’s AI Governance Crisis: Implications Beyond Borders

Expert Comment: The Pentagon-Anthropic Dispute Reflects Governance Failures

On March 4, the Pentagon formally notified Anthropic that the company had been designated a supply chain risk to national security, an unprecedented move against an American firm.

The designation followed Anthropic’s refusal to accept contract language permitting the use of its technology for “all lawful purposes”. CEO Dario Amodei insisted on retaining two red lines: prohibitions on mass domestic surveillance and on fully autonomous weapons systems. After intensive negotiations, US Secretary of Defense Pete Hegseth announced that the Department of Defense (DoD) would transition away from Anthropic products within six months, despite reports that the Pentagon had been relying extensively on Anthropic’s model, Claude, in its ongoing war with Iran.

A Clash Between Ethics and National Security

The dispute has been widely characterized as a conflict between ethics and national security. However, it points to deeper structural challenges. The Pentagon-Anthropic dispute reveals longstanding governance gaps in the integration of AI into military and intelligence operations—gaps that predate this administration and will outlast the present controversy.

In the absence of clear institutional frameworks, private companies like Anthropic have attempted to impose limits through usage policies that define how their models may be deployed. The dispute underscores the shortcomings of that approach. Contractual mechanisms are not a substitute for governance frameworks capable of keeping pace with the operational realities of AI-enabled warfare.

Legal Framework and Changes

The mechanism Secretary Hegseth invoked, 10 USC §3252, is a supply chain security statute designed to address foreign threats to the integrity of defense systems. Historically, it has been applied to adversary-linked vendors, like China’s Huawei. Its application to a domestic American company represents a marked departure from past practice. The evidentiary basis for treating a contractual disagreement over usage terms as equivalent to foreign compromise or sabotage has not yet been publicly established.

The Trump administration originally accepted Anthropic’s usage restrictions when the $200 million contract was awarded in July 2025. However, the Pentagon’s January 2026 Artificial Intelligence Strategy memorandum changed the way that the DoD works with contractors, directing the Department to incorporate a standard “any lawful use” clause into all contracts within 180 days. This memorandum signifies a broader push within the Department to focus on “accelerating America’s military AI dominance” to outpace China, even if safeguards are not fully established.

Alternative Policy Options

Other policy options were available to the administration in its dispute with Anthropic, including contract termination or competitive re-solicitation. Instead, the Pentagon invoked a national security supply chain designation while finalizing an agreement with Anthropic’s competitor, OpenAI. The choice of designation suggests an attempt to rewrite the terms under which frontier AI companies may do business with the US government, potentially chilling public-private partnerships across the defense sector.

Governance Gaps and Implications

The fundamental issue highlighted by this dispute is structural. Existing law leaves significant gaps in the governance of AI-enabled domestic surveillance and autonomous weapons systems—gaps that are often open to contested interpretation. The January 2023 DoD Directive 3000.09 requires lethal autonomous systems to undergo rigorous testing prior to deployment; however, this exists as internal policy rather than statute.

Updating such directives typically involves a lengthy policy process that is not designed to keep pace with rapidly advancing technological capabilities. Meanwhile, the use of AI in systems that fall below the threshold of lethal autonomy but still contribute to kinetic effects is already well underway in warfare contexts, including in Gaza, Ukraine, and Iran. Neither policy nor law has adequately grappled with the civilian harm implications of this operational reality.

OpenAI Agreement and Its Limitations

The agreement with OpenAI is unlikely to bridge these gaps. OpenAI accepted the “any lawful use” clause while negotiating safeguards that reportedly include restrictions on mass domestic surveillance, prohibitions on directing fully autonomous weapons systems, and security-cleared engineers embedded within the Pentagon. However, the full scope of these provisions remains uncertain, as the contract has not been released publicly.

Critically, neither the Anthropic nor the OpenAI agreement prohibits mass surveillance of foreign nationals, a longstanding concern among allied partners.

International Repercussions

Allied governments are now confronting the implications of the Pentagon-Anthropic dispute. The designation of Anthropic as a supply chain risk may create legal, operational, and financial challenges for NATO and Five Eyes partners that have integrated Anthropic models into shared platforms and joint programs.

Beyond immediate questions, this episode bears on broader debates concerning defense interoperability and the conditions under which US technology partnerships can be relied upon, an issue that may take on renewed urgency in allied capitals.

This dispute is also being closely observed by strategic competitors. Chinese state-affiliated commentary has framed the episode as evidence of structural instability in the American AI ecosystem, implying that China’s military-civil fusion model confers an institutional advantage that the United States lacks.

Conclusion

The United States is deploying frontier AI into consequential military and intelligence environments without the statutory frameworks or structured oversight processes that the scale and stakes of that deployment demand. The Pentagon-Anthropic dispute has made the governance gap surrounding military AI impossible to ignore. Policymakers in the United States and allied countries must now determine how it will be addressed.
