A public confrontation between the Trump administration and Anthropic is coming to a head, as military officials demand that the artificial intelligence startup change its ethical rules by Friday or risk harm to its business.
Twenty-four hours before the deadline, Dario Amodei, the CEO of Anthropic, drew a clear red line, stating that his company “cannot in good conscience accede” to the Pentagon’s final demand to permit unfettered use of its technology.
Anthropic, the company behind the chatbot Claude, can afford to forfeit a defence contract.
But Defence Secretary Pete Hegseth’s ultimatum this week raised more serious concerns, arriving at the height of the company’s explosive growth from a little-known computer science research lab in San Francisco to one of the most valuable startups in the world.
Beyond cancelling Anthropic’s contract if Amodei doesn’t comply, military officials have threatened to “deem them a supply chain risk,” a label usually applied to foreign adversaries, which could jeopardize the company’s vital partnerships with other companies.
Yet if Amodei gave in, he could lose the trust of the rapidly expanding AI sector, especially the elite talent drawn to the company by its pledge to safely develop AI more capable than humans, technology that, absent controls, could pose catastrophic hazards.
Anthropic said it had requested specific guarantees from the Pentagon that Claude would not be used in fully autonomous weaponry or for mass surveillance of Americans.
However, it added in a statement on Thursday that new contract language “framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will,” following months of secret negotiations that erupted into public discussion.
Following a social media post by Pentagon senior spokesman Sean Parnell stating that “we will not let ANY company dictate the terms regarding how we make operational decisions,” the company was given “until 5:01 p.m. ET on Friday to decide” whether to comply with the demands or risk repercussions.
Later, Amodei came under fire from Emil Michael, the defence undersecretary for research and engineering, who said on X that he “has a God-complex” and “wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.”
That message hasn’t struck a chord in Silicon Valley, where a growing number of tech professionals from Anthropic’s main competitors, OpenAI and Google, expressed support for Amodei’s stance in an open letter late Thursday.
The military has also contracted with OpenAI, Google, and Elon Musk’s xAI to provide their AI models.
The open letter claims the Pentagon is negotiating to persuade Google and OpenAI to accept what Anthropic has rejected. “They’re attempting to split each business out of concern that the other will compromise.”
A former head of the Defence Department’s AI programmes, as well as Republican and Democratic senators, expressed reservations about the Pentagon’s strategy.
On social media, former Air Force Gen. Jack Shanahan commented, “Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end.”
Shanahan faced a different wave of criticism from tech workers under the first Trump administration, when he oversaw Project Maven, an effort to use AI to analyze drone footage and sharpen missile targeting.
So many Google employees opposed the company’s involvement in Project Maven at the time that the tech giant declined to renew the contract and later pledged not to use AI in weapons.
“It’s fair to assume that I would support the Pentagon in this case, given that I was right in the middle of Project Maven & Google,” Shanahan posted on social media on Thursday. However, he said, he understands Anthropic’s viewpoint better than he did Google’s in 2018.
He claimed that Anthropic’s red lines are “reasonable” and that Claude is already being used extensively throughout the government, including in classified contexts.
Large language models, the AI systems that power Claude and other chatbots, are “not ready for prime time in national security settings,” he said, and especially not for fully autonomous weaponry.
He wrote, “They’re not trying to be cute here.”
Parnell stated Thursday that the Pentagon wants to “use Anthropic’s model for all lawful purposes” and that the company’s restrictions would risk “jeopardizing critical military operations,” although he and other officials have not provided specifics on how they intend to use the technology.
The military “does not want to use AI to develop autonomous weapons that operate without human involvement, nor does it have any interest in using AI to conduct mass surveillance of Americans (which is illegal),” Parnell wrote.
During Amodei’s meeting with Hegseth on Tuesday, military officials warned that they could declare Anthropic a supply chain risk, terminate its contract, or invoke the Defence Production Act, a Cold War-era law that gives the government broad authority to compel use of the company’s products even over its objections.
“Those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security,” Amodei stated on Thursday.
Because Claude is valuable to the military, Amodei said he hopes the Pentagon will reconsider. If not, Anthropic “will work to enable a smooth transition to another provider.”
