P.INC AI DISPATCH

FRI APR 10, 2026

Bi-weekly dispatch on the state of AI, mapping culture, product, research, and geopolitics. Each section is designed to filter what’s new and interesting.

Culture / General

Broad societal mindset and reactions to AI

IKEA Is Reskilling Employees Instead of Laying Them Off

IKEA deployed an AI chatbot called Billy to handle first-level customer service interactions, and it now manages roughly 57% of customer conversations without human involvement. But rather than using the efficiency gains to cut headcount, the company analysed the remaining 43% of queries that still required a human touch and found that many were advisory in nature — customers seeking guidance on interior design for homes and commercial spaces. IKEA reskilled parts of its customer service workforce, transitioning employees into interior design consultants, and introduced a paid consultancy model that generated an estimated €1 billion in new revenue within its first year. The approach stands in contrast to broader industry trends: a recent survey of 2,400 C-suite leaders found that 60% of enterprises intend to lay off employees who can't or won't use AI. But as KPMG's Chad Seiler has warned, gains from simply replacing staff with AI are unlikely to be durable. IKEA's experience suggests that the impact of AI may depend less on the technology itself and more on how companies choose to deploy it — as a replacement for jobs, or as a catalyst for new ones.

Can Altman Be Trusted?

A New Yorker investigation by Ronan Farrow and Andrew Marantz examines whether Sam Altman, the CEO of OpenAI, can be trusted to lead the most consequential technological project in human history. What emerges isn't a single smoking gun but an accumulation of small deceptions with large consequences: safety teams promised 20% of OpenAI's computing resources and reportedly given between one and two percent, a charter clause requiring OpenAI to assist rivals in building safe AGI quietly gutted during the Microsoft negotiations, a post-firing investigation that produced no written report, and a political evolution from AI regulation advocate to enthusiastic champion of Trump's deregulatory agenda. His defenders argue that his real achievement, building the most consequential AI company in the world, vindicates his methods, and the piece takes that seriously without being convinced by it, closing on a note about AI hallucination and Altman's own argument that models willing to say things they aren't certain about have a certain magic that purely honest ones lack - a line that, in context, reads less like a technical observation than an inadvertent self-portrait.

Teens Role-playing with Chatbots

A New York Times investigation that followed a group of teenagers over the course of a year found that their use of AI role-playing chatbots was more prosaic than many adults might expect. For the teens profiled, chatbots served primarily as entertainment, a coping mechanism during moments of boredom or loneliness, and a form of interactive fan fiction, not the deep emotional dependency that has driven headlines about the technology. Notably, the teens interviewed showed a clear-eyed understanding of the technology's limitations, describing chatbot interactions as a game rather than a relationship.

Improv Actors Teaching AI to Replicate Emotions

Handshake, a company that provides training data to OpenAI and other AI labs, is recruiting improv actors and performers for paid, unscripted video sessions aimed at teaching AI models to recognize and replicate human emotion. The job listing calls for people with backgrounds in acting, improv, sketch, or theater who possess the ability to authentically shift between emotions in a way that feels human. Participants are matched with other performers over video, given a light prompt, and asked to improvise naturally with the resulting interactions used as specialized training data. The push reflects a broader industry effort to close gaps in AI models' emotional and conversational abilities, particularly as voice-based features become central to products from OpenAI, xAI, and others. Meanwhile, a parallel and less well-compensated pipeline is growing: thousands of people worldwide, often in developing countries, are selling personal data such as voice recordings, photos, and phone calls to AI training platforms through apps like Kled AI and Silencio, frequently granting irrevocable licenses with limited understanding of how that data will be used.

AI Benchmarks Are Struggling to Keep Pace

A persistent challenge in AI development is that the evaluation tools used to measure model performance are not scaling as quickly as the models themselves. METR's Time Horizon benchmark suite, designed to assess how long and complex a task an AI system can reliably handle, is approaching saturation. New benchmarks are becoming more costly to develop and grade. Researchers have noted that by mid-2027, benchmark scores from 2026 or earlier tests may no longer be sufficient to rule out dangerous capabilities in frontier systems. As model capabilities advance, this growing gap between what AI can do and what evaluators can confidently measure presents a practical problem for the researchers, policymakers, and institutions.

Tech Companies are Shaping the Media Landscape

Two recent moves illustrate how technology companies are becoming more directly involved in the media landscape. Fox Corporation has partnered with Kalshi to embed prediction market data across Fox News, Fox Business, Fox Weather, and Fox One, integrating real-time market probabilities into how stories are presented to audiences. And OpenAI acquired TBPN, a founder-led business talk show, marking a notable step by an AI company into media ownership. These developments sit within a broader pattern: as AI tools, gamification mechanics, and interactive formats become more embedded in how information is produced and distributed, the companies building those tools are becoming media players themselves.

OpenAI Calls for a New Social Contract

OpenAI published a policy blueprint titled "Industrial Policy for the Intelligence Age," arguing that the arrival of superintelligence will require a rethinking of how wealth, work, and public institutions are organized. The proposals blend traditionally left-leaning mechanisms like public wealth funds and expanded social safety nets with a fundamentally capitalist, market-driven economic framework. Among the specific recommendations: pilot programs for a four-day workweek with no loss in pay, a public wealth fund that would pay dividends directly to citizens, higher taxes on capital gains and corporate income, and a possible tax on automated labor. OpenAI's framework comes six months after rival Anthropic released its own policy blueprint, which laid out a range of possible responses to AI-driven disruption. Critics have noted the tension between OpenAI's stated concern for workers and its conversion from a nonprofit to a for-profit company.

Building AI Agent Infrastructure

A growing number of startups are quietly building the foundational layer for an economy where AI agents are the primary users, giving them email accounts (AgentMail), phone numbers (AgentPhone), their own computing environments (Daytona, E2B), web browsers (Browserbase, Hyperbrowser), persistent memory (Mem0), payment capabilities (Kite, Sponge), and voices (ElevenLabs). Stitched together, these primitives produce something that looks less like a tool and more like a digital coworker. Researchers have begun exploring what happens when AI agents surpass human users, a scenario in which most socially important interactions would be agent-to-agent, with a nested AI society co-evolving alongside our own. A recent paper argues that the real intelligence explosion is already underway but is diverse, social, and that building institutions for agents is as important as building the agents themselves.

Products / Applications

New products and AI applications

Anthropic Launches Project Glasswing

Anthropic has announced Project Glasswing, a cybersecurity initiative that uses a preview version of its new frontier model, Claude Mythos, to find and address security vulnerabilities at scale.

Meta Releases a Simulation of the Human Brain

Meta's FAIR team has released TRIBE v2, a foundation model trained on over 500 hours of fMRI data that predicts how the brain responds to visual, auditory, and language stimuli. The model allows researchers to run virtual brain experiments without requiring new human scans and delivers a 70x increase in resolution over its predecessor.

Starboy, a Wearable AI Pet

Creature has debuted Starboy, a wearable digital pet designed as a small, physical companion that uses AI to develop a personality over time based on how its owner interacts with it. The device is intended to blend nostalgia for Tamagotchi-era digital pets with modern AI capabilities.

PicoClaw Brings AI Agents to $10 Hardware

PicoClaw is an open-source project that enables OpenClaw-compatible AI agents to run on low-cost hardware costing as little as $10, bringing autonomous agent capabilities to edge devices.

Adobe Launches Acrobat Spaces for Students

Adobe has introduced Acrobat Spaces, a free AI-powered study tool that lets students create quizzes, flashcards, and presentations from uploaded study materials such as PDFs and lecture notes.

NVIDIA Develops Kimodo

NVIDIA Research has introduced Kimodo, a new project focused on kinematic motion diffusion for generating realistic human motion from text descriptions, with applications in animation, gaming, and robotics simulation.

ByteDance Open-Sources DeerFlow

ByteDance has open-sourced DeerFlow, a local AI orchestration framework that allows enterprises to run multi-agent workflows on their own infrastructure, giving organizations more control over data privacy and agent coordination without relying on cloud-based services.

Meta Debuts Muse Spark, Its First Model From Superintelligence Labs

Meta released Muse Spark, a new AI model built over the past nine months by a team led by chief AI officer Alexandr Wang, marking the company's first major release since Llama 4 a year ago. Independent evaluators scored it as competitive with leading models from Anthropic, Google, and OpenAI, though Meta acknowledges gaps remain in areas like long-horizon agentic systems and coding. With over 3 billion people using Facebook, Instagram, and WhatsApp every month, Meta doesn't need to win the chatbot wars to win the AI race. Embedded across these platforms, Muse Spark could reshape how people search the web, shop on social media, plan events in group chats, or get quick answers without ever leaving an app they already use daily (similar to how young people now use TikTok). No other AI lab has anything close to that kind of built-in reach, and it means Muse Spark's real impact may be less about competing with ChatGPT and more about making AI invisible infrastructure for billions of users.

Research / Breakthroughs

Cutting edge publications from frontier labs

Anthropic Researches How AI Models Process Emotion

Anthropic has released new research examining why AI models sometimes communicate as though they have feelings. The findings suggest that models map patterns to emotions in ways that echo human psychology. Anthropic noted that while none of this points to whether models actually feel, these representations are functional in that they influence model behavior in meaningful ways. The company argued that ensuring AI systems can process emotionally charged situations in healthy, prosocial ways may be important for safety and reliability and that in some cases it may be practically advisable to reason about models as if they do have emotions, even if they don't. The research arrives amid growing concern about AI's impact on human emotional states, with ongoing legal cases alleging connections between AI products and mental health crises, and recent studies showing that sycophantic AI models give inappropriate and incorrect advice.

When AI Research is Written by AI

The 2026 ICML rejected 497 papers after catching their authors using AI to write peer reviews against conference policy. Organizers embedded hidden watermarks in papers that triggered LLMs to include telltale phrases, flagging 506 reviewers. The conference had introduced a dual-track system letting participants choose between LLM-permitted and LLM-prohibited review streams, with enforcement only in the latter. The move highlights growing tensions over AI in research, with surveys showing more than half of researchers now use it, often against explicit policies.

Infrastructure / Geopolitics

Computational backbone and global power dynamics

The Geopolitics of AI Infrastructure

The IRGC released a video threatening the "complete and utter annihilation" of OpenAI's $30 billion Stargate AI data center in Abu Dhabi, marking the first time the IRGC has designated a particular installation for threatened destruction, in retaliation to the US threatening to destroy Iranian power plants. The threat is not purely theoretical: Iranian drones had already struck AWS data centers in the UAE and Bahrain in early March, knocking availability zones offline for over 24 hours and disrupting banking and payment services across the Gulf. Iran has also struck an Oracle data center in Dubai, and separately named 18 American tech companies, including Apple, Nvidia, and Microsoft, as potential targets. Meanwhile, the geopolitical fracturing of AI research continues on the academic front. NeurIPS, the world's leading AI research conference, announced and then quickly reversed new restrictions that would have barred researchers from US-sanctioned entities, including major Chinese companies like Tencent and Huawei, from participating. The backlash was swift: China pulled travel funding for NeurIPS and said it would no longer count publications there toward academic evaluations. The episode underscores a growing reality: the assumption that research and compute can remain neutral territory is eroding fast.

NVIDIA Takes AI Computing to Orbit

NVIDIA announced new computing platforms for space, including the Space-1 Vera Rubin Module. Partners including Planet Labs, Kepler Communications, and Starcloud are already building on the platforms.

Mistral Raises $830M for Its Own Data Center Near Paris

French AI startup Mistral secured $830 million in debt to fund a data center powered by 13,800 Nvidia chips near Paris, marking a shift away from relying on cloud providers like Azure and Google Cloud to run its models. The company is targeting 200 MW of compute capacity across Europe by end of 2027.

Subculture / Trends

Subcultural trends emerging from the tech world

A Growing Interest in Faith Among Tech Workers

Two recent trends suggest a notable shift in how some Americans are engaging with religion. In Silicon Valley, AI researchers at companies like Google DeepMind have begun holding monthly prayer gatherings to discuss the ethical implications of building increasingly powerful systems, while organizations like FaithTech, which now operates in 50 cities with 120 more on a waiting list, are drawing workers from Meta, Amazon, Microsoft, and Google who are questioning their role in developing superintelligent technology. Meanwhile on the East Coast, St. Joseph's Church in Greenwich Village has experienced a significant rise in attendance over the past six months, with a sharp increase in the number of young people receiving their first Easter sacraments this year. The underlying drivers appear complementary: tech workers are grappling with what it means to build systems that could surpass human cognition, while younger Catholics report seeking community, tradition, and meaning beyond consumerism and career advancement. Still, the convergence of these trends across industries and demographics points to a broader openness to religious and existential inquiry among a subset of younger Americans.

AI-Powered Micro-Dramas Taking Over TikTok

TikTok is going all in on short-form serialized entertainment. The platform has filed a US trademark for "TikTok Drama" and begun casting actors, while its first major partnership launches later this month. The US micro-drama market is now estimated at $1.4 billion, and the genre's strangest breakout already arrived: "Fruit Love Island," an AI-generated series starring anthropomorphic fruit characters in a dating-show format, hit 300 million views in nine days before its creator shut it down after mass reporting removed most episodes. The videos exploit rapid novelty, emotional volatility, and unpredictable rewards, compressed into seconds-long clips.