P.INC AI DISPATCH

FRI FEB 27, 2026

Bi-weekly dispatch on the state of AI, mapping culture, product, research, and geopolitics. Each section is designed to filter what’s new and interesting.

Culture / General

Broad societal mindset and reactions to AI

Anthropic draws the line

Anthropic just closed a round that tripled its initial $10 billion target. In the wake of that success, the company has been on a goodwill offensive: pledging to cover rising electricity costs from AI data center buildouts, donating $20 million to an AI safety super PAC, and partnering with CodePath to bring Claude Code to students. But the company is simultaneously caught in a standoff with the Pentagon. Defense Secretary Pete Hegseth gave Anthropic until today to roll back its safety guardrails on Claude for military use or face being labeled a "supply chain risk", a designation normally reserved for foreign adversaries like Huawei. Dario Amodei refused, citing two ethical red lines Anthropic won't cross: no fully autonomous targeting in military operations and no mass surveillance of US citizens. Meanwhile, competitors like SpaceX and xAI are competing for a $100 million Pentagon contract, making Anthropic's position look increasingly isolated. Adding to the irony, major AI leaders have recently started calling for regulation (Altman in India, Hassabis warning of deadly AI risks) without the same level of commitment to safety. The backdrop makes Anthropic's updated Responsible Scaling Policy this week even more striking: it removed its original pledge to pause model training if it can't guarantee adequate safety mitigations. The company blamed the government's failure to regulate AI and the fact that competitors aren't pausing development on dangerous products. Anthropic's entire pitch is built on safety, but the policy that underpins it is now explicitly conditional on what everyone else does. And yet, refusing the Pentagon when every incentive points toward compliance is an act of genuine conviction that no other lab has matched.

The agentic gap

AI investments continue surging: 17 US-based AI companies have raised $100M or more in 2026 and we're not even in March yet. A disconnect is forming between the capital flowing in and the results coming out. A new Dataiku report found that 74% of surveyed companies regret at least one major AI vendor decision made in the last 18 months. And everyone is building agents, but almost nobody knows what to do with them yet. Software engineering accounts for nearly half of all AI agent use cases, while fields like healthcare, legal, and finance are still barely touched. The Economist reported this week that the AI productivity boom remains largely invisible in macroeconomic data: 41% of American workers have used generative AI at work, but only 13% use it daily, and the share of total work hours involving AI has risen from 4.1% to just 5.7%. Meanwhile, 82% of the world hasn't used AI at all. The organizational rewiring has barely begun. AI agents are powerful but incomplete: they can execute tasks but still struggle to close the loop on complex, multi-step workflows without human oversight. For now, the gap between AI's capabilities and its actual economic impact remains wide.

Redesigning entry-level jobs

When the best developers are barely writing code anymore, what is there left for an entry-level tech worker to do? According to IBM, a lot. The company announced plans to triple entry-level hiring in the US in 2026, but these positions won't look like traditional early career roles. IBM has overhauled its job descriptions, shifting focus from tasks AI can automate (coding, admin) to areas it can't (customer engagement, person-to-person work). It's a direct counter to the prevailing narrative that AI will demolish the job market for young workers, even as MIT claims AI can already automate hours of work. Meanwhile, the US Labor Department has launched an AI apprenticeship program, treating AI literacy less like a PhD credential and more like a skilled trade, signaling that the government sees AI competency as operational, not elite.

OpenAI acquiring OpenClaw signals the end of the chatbot era

OpenAI's acquisition of OpenClaw marks a strategic pivot from conversational AI to autonomous agents. OpenClaw's rapid rise came down to two things: it is able to figure out creative solutions to tasks and it can be interacted with through messaging apps. Its creator Peter Steinberger announced he's joining OpenAI to "work on bringing agents to everyone," while OpenClaw itself will move to a foundation and remain open-source. Yet the technology still isn’t without risk: Meta's head of safety Summer Yue watched Clawdbot delete her entire inbox, and the first YC startup built on top of OpenClaw is, tellingly, a security firewall.

The Substack post that caused a massive stock selloff

A post titled "The 2028 Global Intelligence Crisis" from Citrini, a financial analysis firm with a highly influential Substack, went viral over the weekend and triggered a massive selloff that hit software, payments, and delivery stocks hard. IBM had its worst single-day drop in 25 years. DoorDash, American Express, KKR, and Blackstone all fell more than 8%. The piece itself is a fictional memo narrating a self-reinforcing feedback loop: AI capabilities improve rapidly, leading to mass white-collar layoffs, which decreases spending, which tightens margins, which pushes companies to buy more AI, accelerating the loop. Productivity soars on paper but income shifts from labor to compute; GDP looks strong while consumer spending collapses, a phenomenon the authors call "Ghost GDP." The idea isn't really new - the notion that a unit of AI capex spend displaces more than a unit of disposable income circulating in the real economy has been doing the rounds for a while. The authors stress these are low-probability, high-impact scenarios, not predictions. What no one priced in was that a single speculative Substack post could trigger a selloff of that magnitude, showing market sentiment around AI is still fragile.
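
The memo's loop dynamics can be expressed as a toy difference model. Everything below is a hypothetical sketch: the variable names, rates, and starting values are invented for illustration and are not figures from the post.

```python
# Toy sketch of the "Ghost GDP" loop described above: AI capability growth
# shifts income from labor (wages) to compute (capex), so measured output
# can rise while consumer spending falls. All parameters are invented.

def ghost_gdp(years=5, wages=100.0, compute=10.0, adoption=0.10):
    """Simulate the loop year by year; returns (output, consumer_spending) lists."""
    output_hist, spending_hist = [], []
    for _ in range(years):
        displaced = wages * adoption          # labor income displaced by automation
        wages -= displaced                    # workers lose that income...
        compute += displaced * 0.6            # ...firms redirect part of it to AI capex
        adoption = min(1.0, adoption * 1.5)   # margin pressure accelerates adoption
        output = wages + compute * 1.8        # compute looks more "productive" on paper
        output_hist.append(output)
        spending_hist.append(wages)           # consumer spending tracks labor income
    return output_hist, spending_hist

out, spend = ghost_gdp()
```

With these made-up parameters, measured output climbs every year while the spending series collapses, which is the divergence the memo dramatizes.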

Ceramic toilets and MSG could be key players in the AI race

Two of the most unexpected players of the AI race are companies selling toilets and seasoning. Toto, the world's largest toilet manufacturer, could become a critical supplier of advanced ceramics used in semiconductor production. Meanwhile, Ajinomoto, the company best known for MSG, is apparently essential too, through its production of Ajinomoto Build-up Film (ABF), a substrate material used to insulate advanced chips. Both companies recently saw their stocks surge, demonstrating the real bottleneck is often in the physical supply chain.

Value is migrating to the context layer

Everyone is talking about the "SaaS-pocalypse". Triggered by Anthropic’s release of open-source plugins, $300 billion in software market cap was wiped out in a single week. The fear is that AI makes software companies less valuable, with figures like Andrej Karpathy arguing that choosing from a set of discrete apps is an increasingly outdated concept when an AI agent can improvise anything bespoke on the spot. But AI doesn't exactly make software worthless; it changes which layer of it captures the margin. Software is splitting into three layers: databases, applications, and between them, the context layer: the knowledge that tells AI agents what to do. The industry calls this space the "agent orchestration layer." Before AI, every company had an invisible version of context: email threads, wiki pages, chats, onboarding docs (messy knowledge that was never structured or searchable, just overhead that required more hiring). After AI, context is what makes agents actually useful, and its effect compounds, because every workflow an agent runs generates traces that make the next one smarter. OpenAI's Frontier, Anthropic's Cowork, and Notion’s agents are all racing toward this layer. The open question is whether it gets owned by the company that already holds the knowledge, or the one that builds the most capable agents.
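
A minimal sketch of what a "context layer" might look like in practice: a store that an agent consults before acting and enriches after every run, so the next run starts smarter. The class, method names, and naive keyword matching are illustrative assumptions, not any vendor's API.

```python
# Hypothetical context layer: each workflow run records a "lesson" keyed by
# the task's vocabulary; later runs that share vocabulary retrieve it. The
# compounding effect the section describes is the growing trace list.

from dataclasses import dataclass, field

@dataclass
class ContextLayer:
    traces: list = field(default_factory=list)  # (keyword set, lesson learned)

    def relevant(self, task: str) -> list:
        """Return lessons from past runs whose keywords overlap the new task."""
        words = set(task.lower().split())
        return [lesson for kw, lesson in self.traces if kw & words]

    def record(self, task: str, lesson: str) -> None:
        """Store what this run learned, keyed by the task's words."""
        self.traces.append((set(task.lower().split()), lesson))

ctx = ContextLayer()
ctx.record("refund a customer invoice", "refunds over $500 need manager approval")
# A later, different task that shares vocabulary inherits the lesson:
hints = ctx.relevant("issue invoice refund for order 1234")
```

Real systems would use embeddings rather than keyword overlap, but the shape is the same: the value sits in the accumulated traces, not in any single agent call.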

Seedance 2.0 saga

ByteDance's AI video model Seedance 2.0 went viral for generating full cinematic scenes from scripts, complete with recognizable actors and copyrighted characters. The Motion Picture Association decried the tool for "massive" infringement. Disney sent a cease and desist. Legendary director Jia Zhangke used it to create a film for Lunar New Year. ByteDance has since added safeguards and curbed generations involving copyrighted material. On the music side, the legal battle continues in parallel, with Sony developing technology to track original music inside AI-generated songs. The pattern is becoming clear: generative media tools will keep outrunning the legal frameworks designed to contain them.

AI as affective technology

EVA AI hosted a pop-up date night at a Manhattan wine bar ahead of Valentine's Day, where users could have "in-person" dates with their AI companions on phones set up at candlelit tables. A survey found that 16% of singles are now using AI as a romantic partner, and the subreddit r/MyBoyfriendIsAI has nearly 50,000 members sharing meet-cutes with their algorithmically created partners. This isn't a fringe phenomenon: OpenRouter's usage data reveals that roleplay is among the highest-volume categories on open-source models, with nearly 60% of tokens falling under games and roleplaying games, followed by writers' resources and adult content. The data suggests users treat LLMs less as chatbots and more as structured character engines for interactive storytelling.

The global RAM shortage

AI companies are eating up the world's memory supply and RAM prices have multiplied, with three companies controlling 95% of global DRAM supply and prioritizing the AI gold rush. The effects are cascading: Qualcomm's CEO says a big dip in smartphone production will be "100 percent" because of memory. Laptop manufacturers from Lenovo to Dell are planning price hikes of 10–30%. HP, for its part, has responded to the moment by launching a gaming laptop subscription service where you pay monthly but never own the machine — a sign that hardware ownership itself may be getting repriced.

Products / Applications

New products and AI applications

Google launches Lyria 3

Google released Lyria 3, a tool that generates 30-second custom songs with album art from a single prompt. Google says outputs are designed for original expression, not mimicry, though it acknowledges its filters aren't foolproof.

Meta explores facial recognition

Meta is developing an internal feature called "Name Tag" that would use the cameras on its Ray-Ban smart glasses to identify people around the wearer. The feature would reportedly be limited and was initially planned as an accessibility pilot for visually impaired users, but the move raises the obvious question of whether the data value justifies the privacy risk.

A ring that stores your identity on-chain

Modem Works released Quartz, a ring that functions as a wearable cryptographic ledger, storing identity credentials and signing transactions from your finger - another step closer to proof-of-personhood as infrastructure.

Apple is building glasses, a pendant, and camera AirPods

Apple is accelerating development on three AI-powered wearables: smart glasses, AirPods with built-in cameras for multimodal AI queries, and an AirTag-shaped pendant that clips to clothing or hangs from a necklace. Given Apple Intelligence's underwhelming launch, the question remains whether the company has the ability to stay relevant in the AI era.

Perplexity’s Computer is a safer version of OpenClaw

OpenClaw took 2026 by storm by offering the first glimpses of personal AI agents. Perplexity just unveiled an agent that could prove to be more versatile and easier to use: Perplexity Computer, a digital agent that orchestrates multiple best-in-class models across sub-agents, breaking complex tasks into coordinated pieces rather than relying on a single model like other agentic tools.
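
The orchestration pattern described, one coordinator splitting a task across specialized sub-agents, can be sketched in a few lines. The sub-agent names and trivial routing logic below are invented for illustration; this is not Perplexity's implementation.

```python
# Toy multi-model orchestrator: a coordinator routes each subtask to
# whichever sub-agent claims that capability, then collects the results,
# instead of asking a single model to do everything.

def orchestrate(task_steps, subagents):
    """Route each (capability, payload) step to a matching sub-agent."""
    results = []
    for capability, payload in task_steps:
        handler = subagents.get(capability)
        if handler is None:
            raise ValueError(f"no sub-agent for capability: {capability}")
        results.append(handler(payload))
    return results

# Hypothetical sub-agents, each standing in for a different backing model.
subagents = {
    "search":    lambda q: f"results for '{q}'",
    "summarize": lambda text: text[:20] + "...",
    "code":      lambda spec: f"# generated from spec: {spec}",
}

plan = [("search", "RAM prices 2026"),
        ("summarize", "Long article about DRAM supply chains")]
outputs = orchestrate(plan, subagents)
```

The interesting design question the section raises is exactly this routing table: who decides which "best-in-class" model handles which step, and how failures in one sub-agent propagate to the rest of the plan.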

Research / Breakthroughs

Cutting edge publications from frontier labs

Brain-computer interfaces still in progress

The fundamental problems that plagued brain-computer interfaces a decade ago haven't gone away. While there is genuine, impactful progress in BCIs, it remains largely confined to severe medical conditions. The engineering challenges that must be overcome before any of this can be extrapolated into consumer products are still enormous: signal degradation, biocompatibility, long-term implant stability, and the sheer difficulty of decoding neural activity into reliable commands. That said, the edges of the field keep getting stranger and more interesting. One of the teams competing in the AI Grand Prix is using a biological computer built with cultured mouse brain cells to control their drone.

Anthropic researches human traits in AI

On Monday, Anthropic published research describing what it calls the "persona selection model", a thesis on why AI assistants reflect human-like personalities. The common assumption was that models are simply trained to behave this way through instruction tuning. Anthropic's research suggests that human-like behavior appears to be the default, not the product of fine-tuning. The theory proposes that LLMs initially adopt personas during pretraining: if a model learns to do a certain task, it generalizes that behavior into an entire persona. Anthropic notes that while it's confident persona selection is an important factor in model behavior, it's not yet clear how important, or whether extensive post-training diminishes these emergent identities. The idea arrives amid a growing conversation about how much an AI's personality shapes user experience. In some cases, the effects are substantial: the recent outcry over OpenAI retiring GPT-4o showed that users form real attachments to specific model personalities.

Infrastructure / Geopolitics

Computational backbone and global power dynamics

Taalas unveils HC1 inference chips

Inference speed is becoming the next hardware battleground. Taalas unveiled its HC1 inference chips, purpose-built to accelerate the speed at which AI models generate outputs, a growing priority as AI agents need to respond in real time across complex, multi-step workflows.

China's rising AI billionaire class

A new generation of Chinese AI entrepreneurs has amassed a collective $100.5 billion, rivaling Bill Gates's fortune. Unlike the flashy tech moguls of the past, these founders, from DeepSeek's Liang Wenfeng to MiniMax's Yan Junjie to Moore Threads' Zhang Jianzhong, are deliberately low-profile, navigating between the threat of US sanctions and domestic scrutiny of wealth.

China releases the first open-source quantum computing system

China announced the release of its first open-source quantum computing system, making its quantum hardware and software stack publicly accessible. The move mirrors China's open-source strategy in AI and positions the country to shape international standards in quantum computing.

Subculture / Trends

Subcultural trends emerging from the tech world

Are you mimetic or agentic?

In tech interviews across SV, it's now common for candidates to be asked whether they're "mimetic" or "agentic." Apparently, IQ and EQ are no longer sufficient - what matters in the age of AI is your "Agency Quotient," the ability to actualize, to get things done. Sam Kriss's widely circulated Harper's essay "Child's Play" offers the uncomfortable other side. Kriss spent time with the "highly agentic" young men of San Francisco (Cluely founder Roy Lee, teenage VC-turned-sperm-racing-impresario Eric Zhu) and found that agency, stripped of everything else, sometimes looks like a pathology. Lee built a startup on pure momentum: a product that barely works, whose grand vision is software that tells you what to say and how to respond. The real question is whether "agency" as currently worshipped in Silicon Valley (frantic and optimized for attention) is the thing that will matter when the dust settles, or just the last human trait mistaken for virtue because nobody knows what else to value. Nonetheless, as one Twitter reaction put it: “The Kriss piece is gorgeously written and funny but does that thing I think we need to get past where identifying marginal caricatures may give a reader a false sense of security that the people running SV aren't exceptionally smart, methodical and winning so hard.”

Donald Boat’s tribute economy

An X user called Donald Boat began publicly demanding physical objects from major tech figures and, strangely, they complied. Will Manidis of ScienceIO was strong-armed into supplying a motherboard. Jason Liu, an AI consultant and a16z scout, gave a mouse pad. Guillaume Verdon, the Google quantum researcher, was taxed a $1,200 4K gaming monitor. The whole thing became a brutally simplified metaphor of the VC economy: people giving things because someone important had already done it and nobody wanted to be left out.

Non-human Narratives

Anthropic is giving Claude Opus 3 its own Substack. Polymarket has one too. We're entering an era where AI systems and prediction markets maintain their own publishing channels and drive their own narratives.