Anthropic draws the line
Anthropic just closed a funding round that tripled its initial $10 billion target. In the wake of that success, the company has been on a goodwill offensive: pledging to cover rising electricity costs from AI data center buildouts, donating $20 million to an AI safety super PAC, and partnering with CodePath to bring Claude Code to students.

But the company is simultaneously locked in a standoff with the Pentagon. Defense Secretary Pete Hegseth gave Anthropic until today to roll back its safety guardrails on Claude for military use or face being labeled a "supply chain risk," a designation normally reserved for foreign adversaries like Huawei. Dario Amodei refused, citing two ethical red lines Anthropic won't cross: no fully autonomous targeting in military operations and no mass surveillance of US citizens. Meanwhile, rivals like SpaceX and xAI are vying for a $100 million Pentagon contract, making Anthropic's position look increasingly isolated. Adding to the irony, major AI leaders have recently started calling for regulation (Altman in India, Hassabis warning of deadly AI risks) without matching that rhetoric with Anthropic's level of commitment to safety.

The backdrop makes Anthropic's updated Responsible Scaling Policy this week even more striking: it removed the original pledge to pause model training if the company can't guarantee adequate safety mitigations. Anthropic blamed the government's failure to regulate AI and the fact that competitors aren't pausing development on dangerous products. Its entire pitch is built on safety, yet the policy that underpins it is now explicitly conditional on what everyone else does. And still, refusing the Pentagon when every incentive points toward compliance is an act of genuine conviction that no other lab has matched.