Anthropic, QuitGPT, and Monitoring the Situation
A lot has happened in the last two weeks. The sudden escalation of the Iran conflict produced a wave of vibe-coded "situation monitors," reflecting people's desire to track geopolitical storms that turned out to be closer to home than they expected.

After the Pentagon designated Anthropic a supply-chain risk, Trump ordered all federal agencies to stop using the company's technology. Within hours, OpenAI stepped in and signed the Pentagon's deal. What happened next could be described as an inadvertent marketing campaign for Anthropic: ChatGPT uninstalls surged 295% overnight; a boycott movement called QuitGPT claimed over 2.5 million participants; Claude shot to No. 1 on the App Store for the first time; people chalked "GOD LOVES ANTHROPIC" on the sidewalk outside its San Francisco headquarters; an open letter titled "We Will Not Be Divided" gathered nearly a thousand signatures from OpenAI and Google employees; and OpenAI's robotics lead resigned in protest.

Then, on February 28, the US and Israel struck Iran, and the stakes became viscerally physical. Iranian retaliatory strikes hit three AWS data centers in the UAE and Bahrain, the first time commercial cloud infrastructure has been deliberately targeted in a war. If you noticed Claude going down for a few hours last week, now you know why.

There is something vertiginous about all of this. The mundane and the geopolitical are folding into each other at a speed that's hard to process. The same tools we use daily to think, write, and speak for us now depend on defense decisions unfolding elsewhere. Your morning chat with Claude and a missile strike on an AWS data center are now, in a very literal sense, part of the same system. And perhaps the most recursive detail of all: the news coverage of this entire saga will inevitably make its way into the training data for future versions of these models.