US warns Anthropic to allow unrestricted use of AI by military
The Pentagon threatened to invoke a Cold War-era law to compel Anthropic PBC to allow the U.S. military to use the artificial intelligence startup’s technology if the company failed to comply with the government’s terms by Friday, according to people familiar with the matter.
During a meeting Tuesday between Chief Executive Officer Dario Amodei and Defense Secretary Pete Hegseth, U.S. officials laid out a series of consequences, including threats to declare Anthropic a supply-chain risk and invoke the Defense Production Act to use the AI software anyway, even if the company didn’t agree, the people said.
The ultimatum marks an escalation in a growing dispute between the Defense Department and the AI startup over the company’s insistence on guardrails for use of its Claude AI tool that the military sees as unnecessary. If carried out, the Pentagon’s threat would put at risk up to $200 million in work that Anthropic had agreed to do for the military.
In the meeting, according to one of the people, Amodei laid out Anthropic’s conditions: that the U.S. military refrain from using its products to autonomously target enemy combatants or conduct mass surveillance of U.S. citizens. The person said Amodei emphasized that these scenarios have yet to arise during operations in the field.
“We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do,” Anthropic said in a statement following the meeting.
The people who described the discussions did so on condition of anonymity owing to their confidential nature. Axios reported earlier on the meeting’s outcome.
Anthropic’s Claude chatbot has stirred interest outside government across a wide range of industries, including financial services, health care and insurance. Newly released, customized plug-ins promise to change the way businesses operate and professional tasks are executed, rattling Wall Street in recent weeks.
Now valued at roughly $380 billion based on its latest funding round, Anthropic was the first AI company granted clearance to handle classified material within the U.S. government, and its Claude Gov tool quickly became a preferred option among officials at the Pentagon who appreciate its ease of use. It faces growing competition in the national security space from Elon Musk’s xAI, which just won approval for classified work, as well as rivals OpenAI and Google’s Gemini.
The feud erupted just weeks after the Pentagon published a new strategy on artificial intelligence that called for making the military an “AI-first” force by increasing experimentation with frontier models and reducing bureaucratic barriers to use. The approach specifically urged the Defense Department to choose models that are “free from usage policy constraints that may limit lawful military applications.”
The Pentagon had grown concerned that Anthropic didn’t support U.S. goals after hearing that the company had questions about how its AI was used during the special forces operation in early January that captured Venezuelan President Nicolas Maduro, a U.S. official said. Anthropic offered a different interpretation of the Pentagon’s claim that the company had raised questions about the Maduro raid.
“Anthropic has not discussed the use of Claude for specific operations with the Department of War,” the company said on Monday, via a spokesperson, referring to the Trump administration’s preferred name for the Defense Department. “We have also not discussed this with, or expressed concerns to, any industry partners outside of routine discussions on strictly technical matters.”
Anthropic positions itself as a company focused on the responsible use of AI with a goal of avoiding catastrophic harms from the technology. It built Claude Gov specifically for U.S. national security purposes and aims to serve government customers within its own ethical bounds.
In response to Anthropic’s concerns over the possible use of its technology in mass surveillance and autonomous targeting, Pentagon officials have insisted that the Defense Department follows the law and that a human is always involved.
If carried out, the Pentagon’s threat to declare Anthropic a supply-chain risk would put the company’s products off limits to other military vendors. Those companies would then have to verify that they don’t use Anthropic products.
Under the 1950 Defense Production Act, the government can seek to compel U.S. companies to deliver products or services needed on national security grounds. Presidents have used the law in the past to unlock energy supplies, including moves to compel the refurbishment of oil tankers in the 1960s and divert contracted oil to the military in the 1970s.
(With assistance from Nick Wadhams.)
©2026 Bloomberg L.P. Visit bloomberg.com. Distributed by Tribune Content Agency, LLC.