Anthropic Refuses Pentagon Demand To Relax AI Safeguards Over Military Use Of Claude

Anthropic CEO Dario Amodei (Image Courtesy: X)

Anthropic has publicly rejected a request from the United States Department of Defense to loosen safeguards on how its artificial intelligence systems can be used, with chief executive Dario Amodei stating the company “cannot in good conscience accede” to the Pentagon’s demands.

In an official statement published on Anthropic’s website, Amodei confirmed that the dispute centres on the Defense Department’s insistence that the company accept “any lawful use” of its AI tools, including Claude. Anthropic argues that such wording could permit applications it considers incompatible with democratic principles.

“These threats do not change our position: we cannot in good conscience accede to their request,” Amodei wrote, referring to warnings that Anthropic could be removed from the Pentagon’s supply chain or designated a “supply chain risk.”

At issue are two specific use cases: mass domestic surveillance and fully autonomous weapons. Amodei stated that “such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now.” The “Department of War” is a secondary designation for the Department of Defense under an executive order signed by US President Donald Trump.

Anthropic’s statement emphasised that while the company supports lawful foreign intelligence and counterintelligence missions, it draws a firm line at domestic mass surveillance. Amodei warned that modern AI systems can “assemble scattered, individually innocuous data into a comprehensive picture of any person’s life — automatically and at massive scale,” raising civil liberties concerns if deployed without strict guardrails.

On autonomous weapons, the company argued that current AI systems are “simply not reliable enough to power fully autonomous weapons.” Amodei added, “We will not knowingly provide a product that puts America’s warfighters and civilians at risk,” stressing that such systems require oversight mechanisms that “don’t exist today.”

The Pentagon has pushed back against Anthropic’s position. Defense Secretary Pete Hegseth reportedly warned that failure to comply could lead to Anthropic being offboarded from defense systems. Pentagon official Emil Michael criticised Amodei publicly, arguing that the military must be trusted to operate within legal and policy constraints and citing strategic competition with China as a key factor.

Anthropic disclosed that updated contract wording provided by the Defense Department represented “virtually no progress” in preventing Claude’s use in the disputed areas. According to the company, language presented as a compromise was paired with legal provisions that would allow safeguards to be disregarded.

The standoff underscores growing friction between AI developers and governments over the integration of advanced artificial intelligence into national security operations. As AI becomes central to intelligence analysis, logistics, and battlefield systems, technology firms are increasingly being asked to clarify ethical boundaries while navigating high-stakes defense partnerships.

Anthropic has indicated it remains open to working with the Defense Department under its existing safeguards and has even offered to collaborate on research to improve AI reliability. However, Amodei’s statement makes clear that the company is prepared to walk away rather than dilute restrictions it views as fundamental.

The outcome of this dispute could set an important precedent for how AI companies negotiate defense contracts in the future, particularly as artificial intelligence moves deeper into areas that intersect with civil liberties, military autonomy, and geopolitical competition.