“Anthropic brings this suit because the federal government has retaliated against it for expressing that principle.”
The battle has reverberated through Silicon Valley, raising questions about what limits AI developers should be able to impose on their technology when they do business with the government. Administration officials and the Defense Department have demanded the freedom to use AI systems for any lawful purpose, arguing that the government must have the final say.
After Anthropic CEO Dario Amodei refused to agree, Trump said last month that he was ordering federal agencies to stop using Claude. Defense Secretary Pete Hegseth went further, saying he was imposing a far-reaching ban on the company doing any work with military contractors.
But behind the scenes, the two sides continued to talk last week. Technology and defence figures lobbied the two sides to de-escalate, warning of the ripple effects that would come with branding a leading American company a security risk in an industry where AI labs, tech giants and hardware makers are intertwined with both one another and the Pentagon.
The discussions finally came to an end on Thursday, according to a defence official – a day after tech news site the Information published a caustic internal staff memo in which Amodei said the administration was opposed to the company “because we haven’t given dictator-style praise to Trump”. The leak of the note contributed to the ultimate breakdown of the talks, according to the defence official and a second person familiar with the discussions.
“It blew up negotiations,” said the second person, speaking on the condition of anonymity to describe private talks.
For now, though, the military is continuing to rely on Claude to help carry out the assault on Iran. The AI tool is embedded in the military’s Maven Smart System, which helps commanders analyse intelligence and identify targets to strike. Before the military campaign, the system suggested hundreds of targets, with precise coordinates, and ranked them in order of importance, people familiar with the system previously told The Washington Post. It also speeds up planning dramatically and helps evaluate the aftermath of strikes, one of the people said.
Defence officials have said they are aware of their dependence on the system, and Trump said he was providing a six-month phaseout of Anthropic’s tools.
In the long term, competitors are positioned to supplant Anthropic, even if the company is victorious in court. As officials were labelling the company a pariah, its chief rival, OpenAI, was finalising an agreement to work on the Pentagon’s secret networks. OpenAI said it had been able to secure protections related to surveillance and autonomous weapons, while agreeing to the “all lawful uses” standard that officials wanted.
Elizabeth Dwoskin and Aaron Schaffer contributed to this report.