Pentagon Got Help From Claude in Iran Strikes
Pages from the Anthropic website and the company's logo are displayed on a computer screen. (AP Photo/Patrick Sison)
The U.S. military still relied on Claude, the artificial-intelligence tool developed by Anthropic, in its recent strikes on Iran, despite an order from President Trump directing federal agencies to begin phasing out the company's AI systems. Multiple news outlets have reported that the Pentagon used Claude to help analyze intelligence, identify potential targets, and run simulation scenarios as part of planning and executing the operation.
President Trump's directive ending federal use of Anthropic's technology came amid a broader standoff with the company over how its AI can be used, including in military applications and under safeguards against mass surveillance. The White House labeled Anthropic a risk to national security and gave federal agencies six months to transition off the technology.
The reports underscore how deeply Claude has been integrated into U.S. defense systems, which makes an immediate cutoff difficult. Claude has also been reported to have played a role in previous operations, including the Pentagon’s use of AI in the mission that captured Venezuela’s Nicolás Maduro.
At the same time, the Pentagon is moving to bring in AI tools from other companies, including OpenAI and Elon Musk's xAI, for classified work, amid concerns about future risks and surveillance capabilities. OpenAI CEO Sam Altman has said the company's agreement with the Defense Department includes additional guardrails for secure use of AI in sensitive contexts.