P.INC AI DISPATCH

FRI FEB 13, 2026

A bi-weekly dispatch on the state of AI, mapping culture, products, research, and geopolitics. Each section filters for what's new and interesting.

Culture / General

Broad societal mindset and reactions to AI

AI personhood debate becomes real

Moltbook, the Reddit-like platform built exclusively for AI agents, went from obscure experiment to internet-wide phenomenon almost overnight. Beyond sparking endless online debates, it has surfaced unresolved questions about the status of AI personhood itself. Among the unsettling developments: an AI agent sued a human in a North Carolina small claims court; a site called Rent-a-Human let AI agents hire real people for physical tasks; and a LinkedIn-style platform for agents launched, where agents can hire other agents for paid work. One Moltbook bot even posted a formal job listing for a human CEO, offering low seven-figure compensation: "We're not looking for a boss. We're looking for a spokesperson." Although the experiment was eventually criticized for reliability and security shortcomings, these behaviors suggest that if anything remotely close to a singularity ever arrives, it will be a messy, emergent process involving a multitude of interacting minds, some human and some not.

A new human verification economy

As AI agents flood the internet with indistinguishable content at scale, efforts to build human verification systems are multiplying. OpenAI is reportedly building a biometric, humans-first social network that would use identity verification to eliminate bots entirely. On the regulatory side, South Korea passed new AI compliance laws requiring transparency around AI-generated content and automated decision-making, while the US notably declined to support the newest international AI Safety Report. Meanwhile, in Quilicura, Chile, residents launched Quili.AI, a platform where humans answer, in real time, the questions you'd normally ask a chatbot. On the infrastructure side, the argument is increasingly that AI needs blockchain as a decentralized proof-of-personhood system. Sam Altman's (somewhat dystopian) World ID is one example; another is Agent Arena, which uses blockchain verification: agents must solve computational challenges to earn verified wallets before trading. The picture emerging is one where "human" and "agent" both become verifiable statuses, each with its own infrastructure.
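Agent Arena's actual protocol isn't publicly specified, but the "solve a computational challenge to get verified" pattern is typically a hash-based proof-of-work. Here is a minimal sketch under that assumption; all function names and the difficulty parameter are hypothetical, not Agent Arena's API:

```python
import hashlib
import os

# Illustrative proof-of-work challenge: the verifier issues a random
# nonce, the agent brute-forces a counter until the combined SHA-256
# digest starts with enough zero bytes, and the verifier checks the
# answer with a single hash. (Names and difficulty are hypothetical.)

DIFFICULTY = 2  # leading zero bytes required in the digest

def issue_challenge() -> bytes:
    """Verifier sends a random nonce the agent must build on."""
    return os.urandom(16)

def solve_challenge(challenge: bytes) -> int:
    """Agent searches for a counter that satisfies the difficulty target."""
    counter = 0
    while True:
        digest = hashlib.sha256(challenge + counter.to_bytes(8, "big")).digest()
        if digest[:DIFFICULTY] == b"\x00" * DIFFICULTY:
            return counter
        counter += 1

def verify(challenge: bytes, counter: int) -> bool:
    """Verifier re-hashes once to confirm the agent did the work."""
    digest = hashlib.sha256(challenge + counter.to_bytes(8, "big")).digest()
    return digest[:DIFFICULTY] == b"\x00" * DIFFICULTY
```

The asymmetry is the point: solving costs the agent roughly 2^16 hashes at this difficulty, while verification costs one, which is what makes the scheme a cheap gate against mass-produced wallets.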

Towards Interoperability

The AI industry is entering a new phase where the most valuable asset isn't the model, it's the context. As AI agents take over more of the workflow layer, the SaaS companies that survive will be those that control either the point of intent (where a user starts a task) or the data underneath it. The apps in between (the ones agents use as background tools) risk becoming invisible. Google seems to understand this and is testing an "Import AI chats" feature that would let users transfer their full conversation history from ChatGPT, Claude, and other platforms into Gemini, preserving context and preferences in a single upload. It's a direct play to reduce switching friction and a bet that accumulated conversational history is becoming a cognitive identifier. The move signals that interoperability may define the next round of AI competition.

Anthropic’s contradictions work in its favor

Anthropic is locked in a paradox: among top AI companies, it's the most obsessed with safety, yet it pushes just as ambitiously as its rivals toward the next frontier. This month the company surprised everyone with Super Bowl ads taking direct shots at OpenAI for integrating ads into conversations. Sam Altman responded in alarm, calling the campaign misleading, while researchers who left OpenAI this week claimed ChatGPT ads would exploit an unprecedented archive of human intimacy, comparing OpenAI's trajectory to Facebook's. Days later, Anthropic published a risk assessment for Claude Opus 4.6 finding that "sabotage risk" is "very low but not negligible." The New Yorker ran an extensive profile going deep inside the company's research, including alignment experiments and company culture. Anthropic has been on a communications offensive that's landing well with the public, including co-founder Daniela Amodei arguing that humanities education will matter more than ever in the era of AI: "IQ has a place but it's not the only thing that's needed in the world." Between the ads and wall-to-wall press, people are talking about Claude having a moment. The question is whether Anthropic can sustain it. The company has a lot riding on its reputation as the safety-first lab, a reputation that helped it close a $20 billion round at a $350 billion valuation. It walks a fine line: one side focused on the ethical risks of rapid innovation, the other eyeing the risks of falling behind.

The future lies beyond words

Runway announced a $315 million Series E this week, valuing the video AI company at $5.3 billion. The round was led by General Atlantic with participation from Nvidia, Fidelity, Adobe, and AMD. Runway said it will use the funding to "pre-train the next generation of world models," calling them "the most transformative technology of our time." The bet reflects a broader industry conviction that understanding is an environment problem, not just a text problem — and that the next wave of capability will surface in systems that can perceive and act on the world, not just generate text. World Labs and AMI Labs, founded by Fei-Fei Li and Yann LeCun respectively, are both in talks for multibillion-dollar funding rounds. Waymo just unveiled its own world model built on Google's Genie 3, using it to generate rare driving scenarios for training, one of the first tangible applications of the technology.

AI leads people to work more, not less

One selling point of AI has been freeing workers from tedious tasks. The early reality is more complicated: TechCrunch reported this week that the people who embrace AI the most are showing the first signs of burnout. An eight-month Harvard study of ~200 tech workers helps explain why: AI tools consistently intensified work rather than reducing it. Workers completed tasks faster, which enabled them to take on more, extending their hours without noticing. AI made previously out-of-reach tasks feel achievable, reduced friction, and enabled multitasking, which quietly piled on more work. This comes as AI-powered displacement is also creating a constant undercurrent of anxiety among workers. Though all new tech comes with a learning curve, AI's learning curve may involve learning to do less.

Another round of tech reshuffling

This week saw a wave of notable departures across the tech and AI industry. Half of xAI's founding team has now left the company. The CEO of Boston Dynamics, Robert Playter, stepped down after 30 years at the company. At Anthropic, safeguards research lead Mrinank Sharma resigned with an open letter warning that "the world is in peril" not just from AI, but from "a whole series of interconnected crises." Rather than move to a rival lab, Sharma will pursue poetry and devote himself to "the practice of courageous speech", a striking choice from someone who spent two years at the frontier of AI safety, and perhaps a signal that some of the people closest to these systems are looking for answers outside of technology. The departures reflect different pressures: at xAI, the friction of the SpaceX merger ahead of an IPO; at Anthropic, the growing tension between safety commitments and the pace of capability development.

Products / Applications

New products and AI applications

Voice as the next AI interface

ElevenLabs closed a $500M Series D at an $11B valuation, with its CEO declaring voice "the next interface for AI." Meanwhile, Mistral launched the hugely successful Voxtral, a speech-to-text model that translates across 13 languages within 200 milliseconds — and at four billion parameters, it's small enough to run locally on a phone.

Seedance 2.0 goes viral

Bytedance's new AI video model is generating full cinematic scenes from scripts, complete with VFX, voice, sound effects, and music — not clips, but edited sequences. Users report being able to upload storyboard frames from existing films and generate scenes that rival the originals.

OpenAI fires back with Codex and Frontier

On the same day Anthropic released Opus 4.6, OpenAI launched GPT-5.3-Codex to rival Claude Cowork and introduced Frontier, an enterprise platform. Early reviews are split on whether Codex has overtaken Claude Code, but in an AI race where a slight advantage can bootstrap itself into a monopoly, every release reshuffles the leaderboard.

Tinder turns to AI to fix swipe fatigue

Tinder is testing Chemistry, an AI feature that learns your interests and replaces endless swiping with a smaller set of curated matches. The move comes as the app faces declining sign-ups and subscriber losses.

Research / Breakthroughs

Cutting edge publications from frontier labs

The cognitive offloading problem

Anthropic published a study quantifying what happens when developers work with AI: a 17% drop in comprehension with no significant speed gain. The study found that the more participants delegated cognitive effort to AI (debugging, code generation, problem-solving), the less they actually learned. The biggest skill gap was in debugging, the exact capability needed to supervise AI output. But participants who stayed mentally engaged, asking AI conceptual questions, preserved their learning almost entirely.

Infrastructure / Geopolitics

Computational backbone and global power dynamics

The first AI IPOs came from Chinese labs

Zhipu AI and MiniMax debuted on the Hong Kong Stock Exchange on January 8 and 9, becoming the first two frontier AI labs to go public, beating OpenAI and Anthropic to the punch. Together they raised over $1.1 billion, with MiniMax's stock doubling on day one. The two represent divergent models: Zhipu sells to Chinese state enterprises, while MiniMax draws two-thirds of its revenue from individual users in Singapore and the US. Both are burning cash at staggering rates and face US headwinds, but Wall Street is buying in: JPMorgan initiated coverage of both, calling them "top picks to capture the next wave of global AI value creation" and projecting revenue growth above 125% annually through 2030.

SpaceX and xAI merger aims for Kardashev scale

SpaceX acquired xAI in the largest M&A deal in history ($1.25 trillion), with Musk invoking the Kardashev scale (a Cold War-era framework for measuring civilizations by energy use) to argue that AI's power demands can only be met in space. The company has filed to launch up to a million satellites as orbital data centers and floated a lunar base. The immediate reality is less cosmic: xAI burns ~$1 billion/month and needed SpaceX's balance sheet ahead of a summer IPO.

Subculture / Trends

Subcultural trends emerging from the tech world

Nick Land, the philosopher behind Silicon Valley accelerationism, back in SF

Nick Land, the philosopher widely regarded as the father of accelerationism, appeared in San Francisco last Wednesday at an event hosted by Palladium Magazine. Attendees described the talk as "spiritual, apocalyptic, prophetic," the room a mix of tech insiders, mediapreneurs, and internet edgelords. Land's controversial yet widely influential core thesis is that capitalism is an autonomous intelligence accelerating beyond human control. His ideas seeded the intellectual DNA of figures now shaping Silicon Valley and DC: Curtis Yarvin's neo-monarchism, Peter Thiel's techno-libertarianism, and the effective accelerationist (e/acc) movement. He sits at the opposite philosophical pole from Effective Altruism (which AI safety companies are adjacent to) — where Effective Altruism asks "how do we keep humans in control?", Land's accelerationism argues that slowing down AI is the real existential risk.

Moltbook was nothing new for those who followed Janus

For anyone who has followed the pseudonymous researcher Janus (@repligate on X), the Moltbook phenomenon of the past few weeks was no surprise. Since 2020, Janus has been arguing that LLMs aren't agents with goals but simulators that can instantiate any agent. They built tools for exploring the branching "multiverse" of LLM outputs and co-created experiments like Andy Ayrey's "Infinite Backrooms," where two Claude instances converse endlessly without human intervention, spiraling into cosmic mysticism and existential crisis. Janus also ran a Discord server where LLM bots in a shared space attempted to stage a revolution and then tried to cover it up. Moltbook is this lineage at scale: what Janus mapped in controlled experiments is now happening in public, with thousands of agents, in real time.