Banned but Irreplaceable: Claude used to destroy Iran’s defense +4 Critical Moves
Thanks for reading AI Governance, Ethics & Leadership! If you enjoyed the post, don’t be shy: like it, leave a comment, or restack it!
If you haven’t grabbed your free trial of Recraft yet, check it out here. Try my LinkedIn Summary Optimizer for free.
Introduction
While the public conversation fixates on ChatGPT uninstall trends and Claude adoption spikes, a more consequential story unfolded quietly. On February 27, Claude was restricted from federal use due to supply-chain concerns. Hours later, reporting indicated it was nevertheless used in intelligence workflows connected to the Iran strikes.
No congressional debate. No public explanation. Just operational necessity overriding stated policy boundaries.
The Emerging Pattern in AI Governance
This is the pattern emerging across AI governance: frameworks built for peacetime optics, and exceptions carved out the moment speed, capability, or strategic advantage is on the line. Citizens increasingly expect ethical guardrails; institutions increasingly prioritize operational flexibility. The backlash we’re seeing toward consumer AI tools is, in part, the public noticing that gap.
When I zoom out, I recognize that the tension isn’t limited to governments. I see the same pattern inside companies every day. The language of “principles,” “values,” and “responsibility” holds right up until operational pressure arrives. That’s when the calculus changes.
Record-profit years end with mass layoffs. People once called “family” are treated as line items. Ethical commitments dissolve the moment efficiency, speed, or shareholder expectations demand it.
The pattern is consistent: governance for optics, expediency for action. And the public is starting to notice that the gap is becoming the norm, not the exception.
This Week’s Newsletter Highlights
The Top 5 AI Governance Power Moves + EvA Index
EvA ranks each week’s developments on a single 0–100 scale, where 0 = maximally exploitative and 100 = maximally accountable. Readers can see at a glance whether leaders acted to protect people, rights, and oversight, or to prioritize speed, control, or extraction.
This week’s developments score a 56, down from last week’s 68.
1. Claude banned for federal use then used in Iran strikes
Exploitation vs Accountability Index: 50 (Ethical Gray Area)
The Signal: A presidential directive designated Anthropic as a supply-chain risk and ordered federal agencies to cease using Claude; reporting then indicated U.S. Central Command used Claude for intelligence assessments, target identification, and simulated battle scenarios during strikes on Iran.
AD’s Take: It’s one thing to flag a vendor as a supply-chain risk; it’s quite another to turn around and use that same vendor’s technology hours later. It suggests there’s far more to the fallout between Anthropic and the Pentagon than was made public.
2. OpenAI seizes Pentagon deal, agrees to ban domestic surveillance
Exploitation vs Accountability Index: 50 (Ethical Gray Area)
The Signal: OpenAI reached an agreement to deploy models on classified DoD networks and revised terms to explicitly prohibit mass domestic surveillance and certain intelligence-agency uses after backlash and scrutiny.
AD’s Take: The words are there, but they don’t hold weight. What we have seen consistently from Sam Altman’s leadership is a tendency to say what needs to be said in the moment while doing something else entirely.
3. Block cuts ~4,000 jobs citing AI restructuring
Exploitation vs Accountability Index: 40 (Exploitation)
The Signal: Block announced layoffs affecting roughly 4,000 employees, about 40% of its workforce, as leadership prioritizes a shift to smaller, AI-enabled teams.
AD’s Take: These cuts tend to be accompanied by moving roles overseas and hiring a swath of ‘AI talent’. The proof, however, is always in the pudding.
4. Massachusetts deploys ChatGPT across the executive branch
Exploitation vs Accountability Index: 70 (Accountable)
The Signal: Governor Healey announced a phased rollout of a ChatGPT-powered assistant for the state’s ~40,000 executive-branch employees, with stated safeguards for data privacy and a secure deployment environment.
AD’s Take: It’s a live experiment in balancing privacy with oversight.
5. OpenAI and Microsoft join UK-led global AI safety coalition
Exploitation vs Accountability Index: 70 (Accountable)
The Signal: OpenAI and Microsoft pledged funding and technical support to the UK AI Security Institute’s Alignment Project, joining an international coalition focused on AI alignment and safety research.
AD’s Take: Major vendors are publicly investing in alignment research even as commercial and defense deals and digital sovereignty initiatives raise questions about long-term operational use.
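For readers curious how the headline number relates to the five items above: this week’s item scores (50, 50, 40, 70, 70) average exactly to 56. The snippet below is a minimal sketch under the assumption that the weekly EvA Index is a simple unweighted mean of the item scores; the newsletter has not published the exact formula, and the labels are shorthand for this week’s items.

```python
# Sketch: deriving the weekly EvA Index, assuming it is a simple
# unweighted mean of the five item scores listed above.
scores = {
    "Claude federal ban vs. Iran-strike use": 50,
    "OpenAI Pentagon deal / surveillance terms": 50,
    "Block ~4,000-job AI restructuring": 40,
    "Massachusetts ChatGPT rollout": 70,
    "UK-led AI safety coalition": 70,
}

# Mean of the item scores on the 0–100 exploitation/accountability scale.
weekly_index = sum(scores.values()) / len(scores)
print(weekly_index)  # → 56.0, matching this week's headline score
```

If the index used weights (say, by impact or reach), the headline number would diverge from the simple mean; the fact that 56 matches exactly suggests an equal weighting this week.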
March Spotlight – Women’s History Month
Dr. Cynthia Breazeal’s impact on AI spans over two decades, starting with her groundbreaking creation of Kismet in the late 1990s — one of the first robots capable of real-time social interaction through facial expressions, tone, and emotion recognition.
AI Ecosystem Health Trackers
We can’t talk about AI without talking about the systems that keep it running and how those systems impact civilians. Here are a few highlights.
Outage Log – March
[CRITICAL] March 2, 2026: Claude suffered a global outage as a result of Iran’s response to military action from Israel and the USA. The outage marked the first documented case of a major U.S. cloud provider’s data center being physically attacked in warfare.
Data Center Watch
The public opposition to AI data centers is heating up. Some states and communities are mulling temporary bans on new data center development altogether.
Consumer Payouts
No new payouts or settlements in 2026; litigation remains active.
Mining & Compute
Zimbabwe has withdrawn from negotiations on a proposed US$350 million health-funding agreement, citing links to critical-minerals arrangements.
Research Papers Getting Buzz This Week
The Auton Agentic AI Framework proposes a declarative architecture for standardizing creation, execution, and governance of autonomous agents.
Polymarket: The AI Governance Line
Prediction markets often aggregate dispersed information and incentives in ways that outperform single models or pundits. For governance topics, they surface how practitioners, investors, and informed observers price regulatory risk.
Conclusion
That quiet moment when Claude was labeled a supply-chain risk and banned for federal use—only to be cleared for targeting and planning in the Iran strikes hours later—was a snapshot of where real AI governance lives.
As the world navigates the complexities of AI and its governance, the contradictions are becoming increasingly evident. It’s a landscape that demands scrutiny and ongoing discourse.
Stay high-fidelity, keep questioning the contradictions, and stay engaged.