Pentagon’s Strategic Shift in AI Procurement

AI Policy’s New Power Center

The Pentagon is turning procurement into policy, asserting itself as Washington’s most powerful player in AI with its recent decision to sever ties with Anthropic.

Why It Matters

While lawmakers debate regulatory frameworks, the Pentagon has shown it can reshape the AI industry with a single contract decision.

The Defense Department stands as the federal government’s largest tech buyer, and the criteria it sets for companies to secure contracts often become de facto rules, influencing sectors beyond military applications.

In the absence of clear regulations, defense contracts speak the loudest.

What They’re Saying

Jessica Tillipman, associate dean for government procurement law studies at George Washington University, remarked, “The biggest question is: What kind of business partner does the government want to be?” She emphasized the government’s dependence on AI companies, stating, “The government’s a superpower … but here it’s trying to jam a lot of policy.”

The Big Picture

The Trump administration has promoted an anti-regulation and pro-innovation stance in AI. However, it has still implemented regulations, albeit in a different manner, as noted by former Office of Science and Technology Policy chief Alondra Nelson.

Nelson points out that this includes “intensive state intervention operating through industrial policy, trade restrictions, immigration controls, equity stakes in private firms (selected by the state), the redirection of research funding, and the strategic preemption of state authority.”

Zoom In

The Pentagon’s move is unusual, rests on questionable legal grounds, and carries regulatory implications that extend well beyond a single company.

By classifying Anthropic as a “supply chain risk”—a label typically reserved for foreign adversaries—the department requires affiliated companies to stop using Claude in any work tied to it. Anthropic has sued, arguing that the Pentagon is infringing on its free speech rights and acting without congressional authority.

Moreover, it remains uncertain how this regulation-by-contract aligns with the administration’s AI action plan, which emphasizes rapid development and an industry-friendly approach. The OSTP did not respond to requests for comment.

Impact on Anthropic

The Pentagon’s actions are hurting Anthropic’s business well beyond government work. According to Anthropic lawyer Michael Mongan, at least 100 customers, in sectors from pharma to fintech, have asked to pause or cancel their contracts.

Anthropic is seeking a temporary restraining order from the court, arguing that without one, tech companies will be forced to scramble to modify their products and contracts, potentially hampering military operations.

A hearing regarding whether to grant Anthropic temporary relief is scheduled for March 24.

What We’re Watching

The trend of regulation-by-contract is expected to persist as AI companies pursue opportunities with the government, particularly if new draft guidance from the General Services Administration, which includes “all lawful uses” in procurement guidelines, is adopted.

The Bottom Line

The Pentagon’s recent actions risk undermining the White House’s proclaimed hands-off, pro-industry strategy aimed at accelerating AI growth.

It could also entrench a contract-by-contract framework for governing AI, leaving companies uncertain about how to work with the government.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...


Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...