Anthropic’s Break With the Pentagon Ignites AI Ethics Debate
After rejecting Pentagon demands tied to autonomous weapons and surveillance, Anthropic’s stand has intensified debate over AI ethics—echoing recent Vatican warnings.
Amid the explosion of artificial intelligence (AI) in recent years, Catholics have consistently called for socially responsible safeguards, limits, and ethical principles to be built into the technology.
Now, a leading AI developer that is trying to do that has found itself in a major dispute with the U.S. government—stoking a heated debate over the ethical and moral dimensions of AI development.
About Anthropic
Anthropic, a San Francisco startup, is the creator of Claude, a large language model (LLM)-based AI assistant that has already enjoyed wide adoption across many sectors of U.S. society, including thousands of businesses and schools. Founded in part by defectors from industry juggernaut OpenAI, Anthropic has positioned itself as the safe, responsible option in the AI ecosystem; its CEO, Dario Amodei, often gives interviews advocating for the development of “guardrails” to protect humanity from unchecked AI.
The Pentagon’s Interest in AI
The U.S. government has been exploring the use of AI in national defense. Four major U.S. AI companies — Google, xAI, OpenAI, and Anthropic — have been working with the Pentagon to varying degrees, with highly lucrative contracts on the table.
Thanks to Anthropic’s cautious approach, Claude was the first AI product allowed onto the Pentagon’s classified networks. As a tool for the U.S. military, Claude’s analytic capabilities have reportedly supported numerous recent high-profile military operations.
The Fallout
Anthropic had been in talks with the Pentagon as part of a $200 million contract negotiation that would have expanded Anthropic’s products throughout the U.S. defense apparatus. However, the talks fell apart dramatically.
Pete Hegseth, the defense secretary, pushed for Anthropic to allow the government to use its AI technology for “any lawful use”—including fully autonomous weapons systems and mass surveillance of U.S. citizens. When Anthropic refused to allow these uses, the government took the unprecedented step of designating Anthropic a “supply chain risk”—a first for an American company—and directed all government agencies to halt the use of Anthropic’s products within six months.
As a result, Anthropic’s rival OpenAI reportedly inked its own deal with the government, without the safeguards Anthropic had sought. This led to an online outpouring of goodwill for Anthropic’s principled stand, with many users proclaiming they would delete ChatGPT in favor of Claude.
Ethical Considerations
Experts in AI ethics expressed appreciation for the path Anthropic has chosen, aligning with the moral entreaties of the Vatican under Popes Francis and Leo.
Pope Leo XIV has consistently called for AI to be used in ways that prioritize human flourishing and the common good. The Vatican has expressed opposition to weapons systems that can operate independently of human control, known as lethal autonomous weapons systems (LAWS).
Furthermore, the Vatican document Antiqua et Nova declares LAWS a “cause for grave ethical concern” due to their lack of the unique human capacity for moral judgment and decision-making.
The Dilemma of Mass Surveillance
On the issue of mass surveillance, the Vatican states that AI surveillance used to exploit others, restrict their freedom, or benefit a few at the expense of the many is unjustifiable. Such systems reduce people’s lives to a kind of spectacle to be examined and inspected.
A Grand Gesture?
While Anthropic’s ethical stand may be noble, it could lead to the company’s financial destruction. The government’s “blacklist” designation precludes any contractor working with the Pentagon from doing business with Anthropic.
Despite the challenges, experts believe Anthropic is taking ethics seriously, potentially at great cost. Its refusal to compromise on core principles may alter the conversation around AI ethics at the highest levels of government.
While the future remains uncertain, Anthropic’s stand against unethical uses of AI brings crucial ethical considerations to the forefront of the ongoing discourse.