Anthropic Wins Injunction in Court Battle With Trump Administration (WSJ)
Judge Rita F. Lin of the Northern District of California ordered the Trump administration to stop applying the president’s directive that federal agencies cease using Anthropic’s technology, and to halt implementation of its designation of the company as a national security supply chain risk. She also required the government to file a report by April 6 detailing how it has complied with her ruling.
The ruling just dropped today. Here’s the picture:
The ruling: Judge Rita Lin granted Anthropic’s motion for a preliminary injunction, issued Thursday, two days after the courtroom hearing. The First Amendment retaliation argument carried the day.
What triggered it: The Department of War (DoW) wanted a blanket “all lawful use” clause in its contracts. Anthropic balked at autonomous lethal warfare and mass surveillance applications, citing safety concerns. That breakdown led Trump and Hegseth to publicly declare Anthropic a supply chain risk via social media on Feb. 27, a designation historically reserved for foreign adversaries.
The core legal issue: Lin drew a sharp distinction. The Pentagon is free to stop using Claude itself; the question was whether the government violated the law by extending the ban to all contractors and agencies as a punitive measure against a company for publicly criticizing government contracting positions.
What the injunction does: Contractors including Amazon, Microsoft, and Palantir had been required to certify they don’t use Claude in military work. The injunction halts that requirement while the broader lawsuit proceeds.
Government’s weakest argument: Lin’s summary of the government’s position was withering: “it’s enough if an IT vendor is stubborn and insists on certain terms and it asks annoying questions, then it can be designated a supply chain risk because they might not be trustworthy. That seems a pretty low bar.”
What remains open: The injunction is preliminary; the substantive claims of First Amendment retaliation and APA violations are still to be litigated. The Trump administration relied on two separate statutory authorities to justify the action, forcing Anthropic to fight simultaneously on two fronts: the California district court and the D.C. Court of Appeals.
The precedent value is what matters here beyond Anthropic. The case spotlights who gets to set the limits of AI use, the developers or the government, and that tension will shape the next phase of AI regulation in Washington. This is your sovereign AI governance frame playing out in open court.
