The Trump administration escalated a highly public dispute with Anthropic over AI safety on Friday, ordering all U.S. agencies to stop using the company's AI technology and imposing significant additional fines.
President Donald Trump, Defense Secretary Pete Hegseth, and other officials accused Anthropic on social media of endangering national security after the company declined to give the military unrestricted use of its AI technology by a Friday deadline, with CEO Dario Amodei refusing to back down over concerns that the company's products could be used in ways that would violate its safeguards.
“We will never do business with them again because we don’t need or want it!” Trump posted on social media.
Hegseth also classified the company as a "supply chain risk," a label usually applied to foreign adversaries, which could jeopardize Anthropic's vital partnerships with other companies.
In a statement released Friday evening, Anthropic said it would contest what it described as an unprecedented and legally unsound action that had "never before publicly applied to an American company."
According to Anthropic, it had asked the Pentagon for specific guarantees that its AI chatbot, Claude, would not be used in fully autonomous weapons or for mass surveillance of Americans.
The Pentagon said it was not interested in such uses and would employ the technology only in lawful ways, but it insisted on unrestricted access.
"No amount of intimidation or punishment from the Department of War will change our position on fully autonomous weapons or mass domestic surveillance," the company said, adding that "any designation of supply chain risk will be contested in court."
The government's attempt to dictate the company's internal decision-making comes amid a broader dispute over AI's role in national security and worries about how increasingly powerful systems might be employed in high-stakes scenarios involving deadly force, private data, or government surveillance.
Hours after Anthropic's punishment, OpenAI strikes a deal with the Pentagon.
OpenAI CEO Sam Altman announced Friday night that his company had reached an agreement with the Pentagon to supply its AI to classified military networks, potentially filling the void left by Anthropic's ouster.
But according to Altman, OpenAI's new partnership rests on the same red lines that were at the crux of Anthropic's conflict with the Pentagon.
The Defense Department "agrees with these principles, reflects them in law and policy, and we put them into our agreement," Altman wrote.
"Prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems, are two of our most important safety principles."
Altman added that he hopes the Pentagon will "offer these same terms to all AI companies" in order to "de-escalate away from legal and governmental actions and towards reasonable agreements."
Trump and others attack Anthropic.
Trump said Anthropic erred in attempting to coerce the Pentagon. He stated on Truth Social that most agencies must immediately stop using Anthropic's AI, while the Pentagon has six months to phase out the technology currently integrated into military platforms.
"A radical left, woke company will never be allowed to dictate how our great military fights and wins wars!" he wrote in all caps.
Months of private negotiations broke down and erupted into public controversy this week when Amodei said his company "cannot in good conscience accede" to the demands.
Anthropic can afford to lose the deal.
The government's actions, however, posed broader threats, arriving at the height of the company's explosive growth from a little-known computer science research lab in San Francisco into one of the world's most valuable companies.
Ahead of the president's decision, senior Trump appointees at the State Department and the Pentagon spent hours criticizing Anthropic on social media, though their grievances were contradictory.
Top Pentagon spokesman Sean Parnell said Anthropic's refusal to comply with military orders is "putting our warfighters at risk and jeopardizing critical military operations."
Hegseth said the Pentagon "must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic."
In his social media post, Trump said the company had "better get their act together, and be helpful" during the phase-out period or face "major civil and criminal consequences to follow."
Hegseth's decision to designate Anthropic a supply chain risk, however, relies on an administrative mechanism intended to prevent businesses owned by U.S. adversaries from selling goods detrimental to American interests.
Sen. Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, said the move, "combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations."
Dispute shakes Silicon Valley.
The conflict stunned AI developers in Silicon Valley, and many employees of Anthropic's chief rivals, OpenAI and Google, as well as venture capitalists and prominent AI experts, voiced support for Amodei's stance in open letters and elsewhere.
The actions may benefit OpenAI's ChatGPT and Elon Musk's rival chatbot Grok, which the Pentagon also intends to grant access to classified military networks.
For Google, which has an ongoing deal to provide its AI capabilities to the military, the episode might serve as a warning.
Musk sided with the Trump administration, declaring on his social media site X that "Anthropic hates Western Civilization."
Altman took a different tack, criticizing the government's "threatening" approach and voicing support for Anthropic's safeguards even as he negotiated OpenAI's agreement with the Pentagon.
It was the latest development in the long-running and occasionally contentious rivalry between OpenAI and Anthropic, which was founded in 2021 by a group of former OpenAI executives.
Retired Air Force Gen. Jack Shanahan, a former head of the Pentagon's AI efforts, wrote in a social media post this week that the government "painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end."
Shanahan said Claude is already used extensively throughout the government, including in sensitive contexts, and called Anthropic's red lines "reasonable."
He said chatbots such as Claude, Grok, and ChatGPT, which are powered by AI large language models, are "not ready for prime time in national security settings," particularly when it comes to fully autonomous weapons.
Anthropic is "not trying to play cute here," he posted on LinkedIn. "A system with a deeper and wider reach throughout the military is unmatched."
