The Pentagon announced on Friday that it had reached agreements with seven tech firms to integrate their AI into its classified computer networks, giving the military access to AI-powered capabilities to aid in warfare.
According to the Defense Department, Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection, and SpaceX will contribute their resources to “augment warfighter decision-making in complex operational environments.”
Notably absent from the list is Anthropic, an AI company locked in a public dispute and legal battle with the Trump administration over the ethics and security of using AI in combat.
In recent years, the Defense Department has sharply accelerated its adoption of AI.
According to a March analysis from the Brennan Center for Justice, the technology can help the military shorten the time it takes to locate and strike targets on the battlefield, while also helping to organize supply lines and weapons maintenance.
But AI has already raised worries that its use could violate Americans’ privacy or give machines the power to select targets in combat.
One of the Pentagon’s contractors said human oversight was required in some circumstances.
Concerns about the military’s use of AI surfaced during Israel’s conflict with militants in Gaza and Lebanon, as American tech companies quietly helped Israel track targets.
At the same time, civilian casualties skyrocketed, raising concerns that these weapons may have played a role in the deaths of innocent people.
Questions about how AI should be used in the military remain unresolved.
According to Helen Toner, interim executive director of Georgetown University’s Center for Security and Emerging Technology, the Pentagon’s latest contracts come amid concerns about overreliance on technology on the battlefield.
Toner, a former OpenAI board member, said that “a lot of modern warfare is based on people sitting in command centers behind monitors, making complicated decisions about confusing, fast-moving situations.”
“AI systems can be useful when analyzing surveillance feeds and attempting to identify possible targets or when summarizing information.”
However, she said, questions about the appropriate levels of risk, human involvement, and training are still being worked out.
“How do you quickly implement these tools so they can be useful and give you a strategic edge, while also realizing that you need to train the operators and make sure they know how to use them and don’t overtrust them?” Toner asked.
Anthropic raised exactly those issues.
The AI company said it needed guarantees in its deal that the military would not use its technology for surveillance of Americans or for fully autonomous weapons.
Defense Secretary Pete Hegseth said the company must permit any use that the Pentagon deems lawful.
Anthropic filed a lawsuit after President Donald Trump, a Republican, tried to bar all federal agencies from using the company’s chatbot, Claude, and Hegseth moved to designate the company as a supply chain risk, a classification intended to guard against foreign adversaries sabotaging national security systems.
In March, OpenAI announced an agreement with the Pentagon for ChatGPT to replace Anthropic’s technology in classified settings.
In a statement released on Friday, OpenAI confirmed that it was the same deal it had announced in early March.
“We think the people defending the United States should have the best tools in the world, as we said when we first announced our agreement several months ago,” the company said.
Anthropic’s contract with the Pentagon contained language stating that any missions in which its AI systems operate autonomously or semi-autonomously must be subject to human oversight.
The contract also said the AI tools must be used in a manner that respects civil liberties and constitutional rights.
Those appear to be Anthropic’s sticking points, although OpenAI has said it obtained comparable guarantees when it struck its own agreement with the Pentagon.
The Pentagon’s perspective
The Pentagon’s chief technology officer, Emil Michael, acknowledged the conflict with Anthropic on Friday, telling CNBC that depending solely on one company would have been reckless.
Michael added, “And we went out and made sure that we had multiple different providers when we learned that one partner didn’t really want to work with us in the way we wanted to work with them.”
It was not immediately clear whether the new agreements materially changed the government ties of some of the companies, such as Microsoft and Amazon, which have long worked with the military in classified settings.
Others, like the startup Reflection and the chipmaker Nvidia, are new to this kind of work.
Both companies offer open-source AI models, which Michael has said are a top priority in order to provide an “American alternative” to China’s rapid development of AI systems, some of which are freely available for others to build upon.
The Pentagon said on Friday that military personnel are already using AI capabilities through GenAI.mil, its official platform.
The Pentagon stated that the military’s expanding AI capabilities will “give warfighters the tools they need to act with confidence and safeguard the nation against any threat.”
“Warfighters, civilians, and contractors are putting these capabilities to practical use right now, cutting many tasks from months to days.”
According to Toner of Georgetown University, the military often uses artificial intelligence much as the general public does: to perform repetitive tasks that would take humans hours or days to finish.
AI can be used to more accurately forecast when a helicopter needs maintenance, or to determine the most efficient way to move large numbers of soldiers and equipment.
Additionally, it can assist in identifying whether the vehicles on a drone’s surveillance feeds are military or civilian.
However, people shouldn’t rely too much on it.
According to Toner, “automation bias is a phenomenon where people can be prone to assume that machines work better than they actually do.”
