Two significant and related stories are breaking simultaneously, and together they illuminate a genuine fault line in the AI industry.
Anthropic vs. the Pentagon
The dispute stems from the Trump administration’s decision to label Anthropic a “supply chain risk,” a designation normally reserved for companies associated with foreign adversaries. The trigger was CEO Dario Amodei’s refusal to grant the DoD unrestricted access to Claude. Anthropic took two hard positions: it would not allow Claude to be used for mass surveillance of Americans, and it did not believe Claude was ready to power fully autonomous weapons with no human in the targeting and firing loop.
Anthropic filed two separate lawsuits, one in California federal court and another in the DC Circuit, challenging different aspects of the Pentagon’s actions. The company called the moves “unprecedented and unlawful,” arguing that the Constitution does not permit the government to punish a company for its protected speech.
The statutory argument is also substantive: the law generally requires agencies to conduct a risk assessment, notify the targeted company, allow it to respond, make a written national-security determination, and notify Congress before excluding a vendor from federal supply chains. None of that appears to have happened.
The financial stakes are serious. Most of Anthropic’s projected $14 billion in revenue this year comes from businesses and government agencies using Claude for coding and other tasks, with over 500 customers paying at least $1 million annually. Taken across the entire business, the government’s actions could cut Anthropic’s 2026 revenue by multiple billions of dollars.
A notable wrinkle: dozens of scientists and researchers at OpenAI and Google DeepMind filed an amicus brief, in their personal capacities, supporting Anthropic and arguing that the designation could harm US competitiveness and hamper public discussion of AI risks.
OpenAI’s internal fractures
The OpenAI situation is a separate but thematically parallel story. A senior member of OpenAI’s robotics team, Caitlin Kalinowski, resigned on principle after the company announced plans to make its AI systems available inside secure DoD computing systems. It is the mirror image of the Anthropic dispute, except that OpenAI initially moved toward the Pentagon rather than resisting. Sam Altman subsequently had to walk back some of the terms, calling it a “good learning experience.”
The broader staff exodus predates this. Several senior figures have left OpenAI as it shifted resources from experimental research toward ChatGPT improvements, including VP of research Jerry Tworek, model policy researcher Andrea Vallone (who joined Anthropic), and economist Tom Cunningham. In December, Altman declared a “code red” internally after Google’s Gemini 3 release outperformed OpenAI’s models on several benchmarks, and instructed staff to prioritize rapid ChatGPT improvements over longer-term research.
The underlying dynamic
What’s happening here is a forced clarification of something the industry has been able to avoid until now: what are the actual non-negotiable limits on military AI deployment, and who gets to set them, governments or developers? Anthropic has drawn a clear line and is now paying commercially for it. OpenAI tried to straddle the line and is losing staff for the attempt. And the Pentagon is using procurement power as a policy lever through a mechanism that legal experts say was designed for Huawei, not for San Francisco AI labs.
The outcome of Anthropic’s lawsuits could set a significant precedent on whether the executive branch can informally blacklist domestic technology companies through procurement without following statutory process, a question that goes well beyond AI.
