Pentagon Weighs Cutting Ties with Anthropic in Dispute Over Autonomous Weapons
War Department seeks “all lawful purposes” access for weapons and intelligence applications, while Anthropic maintains bans on domestic surveillance and fully autonomous weapons.
ERBIL (Kurdistan24) — The Pentagon is considering severing its relationship with AI firm Anthropic amid a dispute over how the U.S. military can use the company's advanced models, according to a senior administration official cited by Axios.
The report states that the Department of Defense is pressing four leading AI laboratories to allow their tools to be used for “all lawful purposes,” including highly sensitive areas such as weapons development, intelligence gathering and battlefield operations.
Anthropic, however, has declined to fully accept those terms, insisting that certain applications remain off-limits — specifically mass surveillance of Americans and fully autonomous weapon systems.
According to Axios, tensions have mounted over months of difficult negotiations. A senior administration official said “everything’s on the table,” including scaling back or ending the partnership altogether, though any move away from Anthropic would require an orderly transition to a replacement.
The official reportedly argued that ambiguity around use-case restrictions is unworkable for the Pentagon: it cannot negotiate every scenario individually, nor can it risk the company’s Claude model unexpectedly blocking an operational application.
Anthropic maintains it remains committed to supporting U.S. national security. A company spokesperson told Axios that discussions with the Pentagon have focused narrowly on questions related to its usage policy — particularly its “hard limits” on fully autonomous weapons and mass domestic surveillance — and not on specific operations.
The spokesperson also denied allegations that the company objected to Claude’s potential use in a military operation targeting Venezuelan leader Nicolás Maduro, stating that its contacts with the Pentagon had involved only routine technical matters, not operational discussions.
The reported dispute escalated following claims by the senior official that an Anthropic executive contacted software partner Palantir Technologies to ask whether Claude had been used during an operation involving kinetic force. Anthropic denied that the inquiry implied any disapproval of the operation.
Anthropic signed a contract valued at up to $200 million with the Pentagon last year, becoming the first frontier AI company to deploy its models within classified U.S. government networks, with Claude the first AI model integrated into those secure systems.
By contrast, OpenAI’s ChatGPT, Google’s Gemini, and xAI’s Grok are currently used in unclassified settings. According to Axios, all three have agreed to lift standard consumer guardrails for Pentagon use, and negotiations are ongoing to bring those models into classified environments under the same “all lawful purposes” framework.
A senior official reportedly claimed that one of the three companies has already agreed to the Pentagon’s terms, while the others have shown greater flexibility than Anthropic. Nonetheless, the official acknowledged that replacing Claude would not be straightforward, as competitors are seen as lagging behind in certain specialized government applications.
The standoff reflects a broader cultural divide between War Department officials seeking expansive operational flexibility and AI developers concerned about ethical guardrails and unintended consequences.
Anthropic CEO Dario Amodei has previously warned publicly about the risks of advanced AI systems, and Axios reported that internal unease among some engineers about working with the military may also factor into the company’s posture.
Despite the friction, Anthropic reiterated its commitment to the national security space, emphasizing that it was the first frontier AI company to provide customized models for classified government use. Even so, its dispute with the Pentagon underscores the growing tension between technological safeguards and military ambitions in the rapidly evolving AI landscape.