Claude's model was used for intelligence analysis and targeting within hours of the ban. And now Anthropic's AI is number one in the US Apple App Store.
The Pentagon used Anthropic's AI in attacks on Iran despite Trump's ban
The Pentagon used Anthropic's Claude artificial intelligence model during military operations in the Middle East just hours after the Trump administration officially banned Anthropic, branding the company a security risk. According to sources cited by the Wall Street Journal, Claude is used by US Central Command in the Middle East for intelligence analysis, targeting processes and tactical simulations. The reason is both paradoxical and simple: Claude was the only AI model fully approved for use in classified US military systems, and removing it cannot happen overnight.
A few days ago, the Pentagon asked Anthropic to drop the limits the company had imposed on itself: no mass surveillance, no autonomous weapons without human oversight. The Californian company refused, and CEO Dario Amodei made its position clear: "No current AI system is reliable enough to control fully autonomous weapons. We will not provide products that endanger American soldiers and civilians."
The Trump administration's response was immediate and brutal. The Pentagon deemed Anthropic a "supply chain risk," and the president subsequently announced that the entire federal government would stop doing business with the company. A few hours later, OpenAI seized the opportunity, quickly signing a contract with the Department of Defense.
The paradox is that the company is being punished for saying "no" to the unrestricted military use of its AI, even as its model continues to power exactly that type of operation. Fully removing Claude could take up to six months (the transition period that begins when the contract is terminated) because of how widely it is used, including by partners such as Palantir.
Claude's model had already been used in the military operation to capture former Venezuelan president Nicolás Maduro and in strikes on targets in Caracas, fueling debate about the dangers of autonomous weapons and the role of artificial intelligence in decision-making.
Two visions of the future of artificial intelligence
What happened before the attacks on Iran is not just a contract dispute. It is a clash between two visions of the future of artificial intelligence. On one side is the Pentagon, which sees AI as a vital strategic tool: from defending against drone swarms to managing military operations. "You want to be able to shoot them down faster than a person can do it themselves," Emil Michael, the Pentagon's under secretary for research and engineering, told the Wall Street Journal.
On the other side is Anthropic, with a completely different perspective. For the company, AI is not just a tool: it is an emerging technology with potentially profound implications for humanity, one that requires careful ethical guardrails while its limits and capabilities remain largely unexplored. An AI error in identifying a military target cannot be "fixed" after the fact; the consequences are irreversible.
"Anthropic does not make the rules"
Emil Michael summed up the American government's position: "Anthropic does not make the laws." Laws are made by Congress, signed by the president, and implemented by the Pentagon. In this view, a private company imposing ethical restrictions on government contracts amounts to interference with government authority, and the accusations leveled at the "cautious" company recast Anthropic's ethical concerns as ideology, turning a technical safety issue into a culture war.
But Amodei's answer deserves attention: "No one in the industry, to our knowledge, has matched the restrictions that we have put in place. I don't know what their plans are - we don't know them - but we have no evidence that these restrictions have created any real problems."
Claude, the number one app in the US App Store
What we do know is that Anthropic's Claude appears to have benefited from the attention generated by the company's difficult negotiations with the Pentagon. Claude has climbed the rankings of free apps in Apple's US App Store: overnight between Saturday and Sunday, it overtook OpenAI's ChatGPT to claim the top spot. Quite a breakthrough, considering that Claude was outside the top 100 at the end of January.
