Pentagon Labels Anthropic Supply-Chain Risk in Historic First for US AI Company
The Pentagon has officially designated artificial intelligence company Anthropic as a supply-chain risk, marking the first time a US-based AI firm has received such a label from the Department of Defense. The unprecedented move comes even as the military continues to actively use Anthropic's AI technology in classified operations, including recent airstrikes on Iran.
First American Company to Receive Risk Designation
The supply-chain risk designation represents a significant escalation in tensions between the AI company and the Pentagon. According to multiple reports, this marks the first time an American company has been given such a classification by the Defense Department, highlighting the unique nature of the dispute between Anthropic and military officials.
Despite the formal risk designation, Anthropic CEO Dario Amodei has downplayed the potential business impact, stating that the Pentagon's move will not affect the "vast majority" of the company's customers. Anthropic has also indicated it is prepared to challenge the designation in court, with reports suggesting it plans to sue the Pentagon over the label.
Ongoing Military Use Despite Risk Label
In a striking contradiction, the Department of Defense continues to use Anthropic's AI technology even after applying the supply-chain risk designation. Sources confirm that Anthropic's AI has been employed in classified military operations, including support for the recent US airstrikes on Iran. The company had previously secured a $200 million Pentagon contract, demonstrating the military's reliance on its technology.
This paradoxical situation—simultaneously labeling a company as risky while continuing to use its services—underscores the complex relationship between AI companies and military applications of their technology.
Context: OpenAI's Different Approach
The Anthropic situation has been contrasted with competitor OpenAI's recent approach to Pentagon cooperation. On February 28, OpenAI announced it had reached an agreement allowing the US military to use its technologies in classified settings. OpenAI CEO Sam Altman described the negotiations as "definitely rushed," acknowledging they began only after the Pentagon's public dispute with Anthropic.
The key difference between the companies' approaches appears to center on contractual specificity. According to Altman, "Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with." OpenAI's agreement relies on existing laws and policies, including Pentagon directives on autonomous weapons and Fourth Amendment protections, rather than seeking explicit contractual prohibitions.
However, legal experts note that OpenAI's approach may offer less protection than Anthropic's preferred method. Jessica Tillipman, associate dean for government procurement law studies at George Washington University, observed that OpenAI's published contract excerpt "does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use."
Industry and Policy Implications
The Pentagon's actions have sparked broader discussions about AI readiness in military applications and the relationship between tech companies and defense contractors. Reports indicate that tech industry groups are now pushing the Trump administration to reconsider the risk designation, suggesting the dispute has implications beyond Anthropic alone.
Sources suggest that Anthropic has reopened talks with the Pentagon, indicating potential for resolution despite the formal risk designation and threatened legal action. The company's willingness to continue negotiations suggests both parties may be seeking a path forward that addresses security concerns while maintaining technological cooperation.
The Moral vs. Pragmatic Divide
The dispute highlights a fundamental tension in the AI industry between companies taking principled stances on military applications and those adopting more pragmatic approaches. Anthropic has earned support from various quarters, including some OpenAI employees, for taking what many view as the more ethical stance on military AI.
However, the practical outcome suggests that Anthropic's "moral approach," while winning supporters, may have failed to achieve its intended goals, whereas OpenAI's more flexible stance has allowed continued military collaboration under negotiated terms.
Looking Forward
As the first American company to receive a Pentagon supply-chain risk designation, Anthropic sets a precedent that could influence how other AI companies approach military contracts and negotiations. The continued use of its technology despite the risk label also raises questions about the practical meaning and enforcement of such designations.
The situation remains fluid, with legal challenges threatened, negotiations reportedly ongoing, and broader industry pressure mounting on the administration to reconsider its approach to AI company risk assessments. The resolution of this dispute may well shape the future relationship between the AI industry and military applications of artificial intelligence technology.