P.INC AI DISPATCH

FRI APR 24, 2026

Bi-weekly dispatch on the state of AI, mapping culture, product, research, and geopolitics. Each section is curated to surface what's new and interesting.

Culture / General

Broad societal mindset and reactions to AI

Apple Bets on Hardware for Its Next Chapter

Apple announced that longtime hardware engineering leader John Ternus will succeed Tim Cook as chief executive. The appointment signals how Apple intends to position itself within a competitive AI landscape in which Nvidia, Google, Microsoft, and Amazon have each made substantial bets on AI infrastructure, and in which Apple has, over the past two years, lost its long-held title as the world's most valuable public company. The board's choice reflects a thesis about hybrid computing: as frontier model training continues to outpace GPU memory bandwidth improvements, the case for routing lighter AI tasks to edge devices while reserving heavier workloads for the cloud becomes more compelling. Apple's base of over 2.5 billion active devices and its unified memory architecture are well suited to that model, and the recent OpenClaw-driven Mac Mini shortage offered early evidence of demand for local AI inference on Apple hardware. The question Apple now faces is whether the device layer, historically its strongest ground, can anchor a durable position in an ecosystem increasingly defined by model and software competition. The board's answer, for now, is to double down on what it knows best.

The Rise of AI Populism

A growing anti-AI movement is taking shape across both grassroots and political channels, reflecting a broader shift in how the public relates to the technology. On one end, decentralized efforts like "data poisoning" attempt to corrupt training datasets with false information. On the other end, AI populism is emerging as a coherent worldview that frames AI not as a neutral tool but as an elite project imposed on an unwilling public, driving layoffs, surveillance, and concentrated wealth. This sentiment is manifesting in increasingly confrontational ways, from data center NIMBYism to violent incidents targeting AI executives, most visibly Sam Altman, who was targeted by four attacks in as many days. The politics of AI has also entered a new phase in which caring a lot about AI is no longer correlated with knowing a lot about AI. It is rising in voter salience faster than any other issue, and most people now shaping the conversation are less interested in the technology itself than in how it intersects with their existing beliefs and agendas. Because nobody knows exactly how fast AI is moving or where it is headed, it has become the perfect justification for whatever policy an interest group already wanted to pass, making "because AI" the new "because China." The industry's narrative failures, combined with an early focus on existential risk at the expense of near-term concerns like jobs and mental health, have left AI companies poorly positioned to respond. Absent a deliberate effort to address these concerns and distribute AI's gains more equitably, the backlash is likely to intensify rather than subside.

When AI Companies Hire Philosophers

Google DeepMind's decision to bring a philosopher onto its team signals a broader shift in how Silicon Valley is approaching the ethical and existential dimensions of AI. As questions around machine sentience, moral reasoning, and value alignment move from academic debate into product development, philosophical expertise is being treated as a practical input rather than an afterthought. The move reflects a growing recognition that technical progress alone cannot resolve the normative questions AI raises, and that frameworks for thinking about consciousness, agency, and moral status will shape both public trust and regulatory outcomes.

Palantir's Manifesto

A manifesto published by Palantir has drawn widespread attention for its sharp rejection of "regressive and harmful" cultures and its vision of a future in which artificial intelligence underpins warfare and national power. The document argues that Silicon Valley owes a "moral debt" to the United States and calls on the tech industry to embrace defense work over consumer apps. Alongside the manifesto, Palantir has been cultivating a consumer-facing cultural identity through branded merchandise, including Off-White-inspired tennis apparel that signals a lifestyle-brand aspiration. Together, the manifesto and the merch reflect a broader shift in which major technology leaders are no longer content to operate as neutral infrastructure providers but are actively staking out ideological positions and building cultural affiliation around them.

Everyone Wants to Be WeChat

The concept of a superapp has been a recurring ambition in the tech industry since WeChat set the template in China, combining messaging, payments, ride-hailing, food delivery, and ticketing into a single mobile interface. Elon Musk pursued a similar vision when he acquired Twitter and rebranded it as X. OpenAI is now pursuing its own version, but with a meaningful twist: rather than targeting mobile, it is building for the desktop, where most knowledge work happens. By integrating the capabilities of the core ChatGPT app with the multitasking environment of Atlas and the agentic automation of Codex, OpenAI appears to be positioning ChatGPT as a full desktop operating layer for its 900 million users. The shift suggests that the next generation of superapps may be defined less by consumer convenience and more by the consolidation of professional workflows under a single AI-driven interface.

AI Companies Are Becoming Biotech Companies

Anthropic recently launched Claude for Life Sciences, a version of its model tailored for pharmaceutical researchers, and acquired stealth AI startup Coefficient Bio in a reported $400 million deal. Shortly after, Novartis CEO Vas Narasimhan joined Anthropic's board, embedding one of the pharmaceutical industry's most influential leaders directly into the company's governance. OpenAI has followed a parallel path with the launch of GPT-Rosalind, a model oriented toward scientific discovery and drug development. Biotech, with its long R&D timelines, regulatory complexity, and reliance on large-scale data analysis, has emerged as the most attractive proving ground for the labs' domain-specific ambitions.

The AI Pivot Playbook

A growing number of consumer brands are repositioning themselves around AI in an attempt to revive investor interest and stock performance, often with little connection to their underlying business. Allbirds, the sustainable footwear company once defined by its environmental mission, has become a notable example, pivoting toward AI in a move that analysts have linked to both a rebound in its stock and a retreat from its original brand identity.

"Workslop" Phenomenon

Company leaders are increasingly framing agent adoption as a competitive imperative, yet the experience on the ground is more complicated. A recent study showed that white-collar workers are now contending with an explosion of AI-generated "workslop," or low-quality documents produced by chatbots operating on poor human input. The result is a productivity paradox in which 92 percent of executives believe AI is making their workers more efficient, while 40 percent of workers report it saves them no time at all.

Claude Mythos and the Elite Models Era

Anthropic's handling of Claude Mythos has become one of the most consequential moments in the current AI cycle, surfacing deep questions about access, power, and the industry's evolving relationship with the public. Mythos is a frontier model that Anthropic has declined to release broadly, citing cybersecurity risks that include the ability to identify critical software vulnerabilities at scale. Access has instead been limited to a small cohort of enterprise partners, and according to Bloomberg, the White House is now setting up protections to allow major federal agencies to begin using a version of the model. The rollout is unfolding alongside a broader public debate. Some argue that the episode marks the beginning of a non-democratic era of AI in which the best models are reserved for governments, enterprises, and the investor class rather than the general user base. The unusual choice to publish Mythos benchmarks while gating access suggests that the release strategy functions simultaneously as a safety posture and a pre-IPO signal of capability. That narrative has been complicated by the revelation that a group gained unauthorized access to Mythos, reportedly by guessing the model's endpoint URL using naming conventions exposed in an earlier breach and combining that with legitimate credentials held by a third-party contractor. OpenAI CEO Sam Altman has publicly criticized Anthropic's approach, arguing that the company is using fear-based marketing to keep advanced AI in the hands of a small elite, although similar rhetoric has come from Altman himself in the past. In parallel, Anthropic released Opus 4.7, positioned as a meaningful improvement over Opus 4.6 for agentic and coding tasks, while OpenAI released GPT-5.4-Cyber to a broader audience than Mythos.

Products / Applications

New products and AI applications

Vibecoding Hardware with Schematik

Schematik is positioning itself as "Cursor for hardware" by applying vibecoding principles to physical device design. The tool suggests components, links to parts suppliers, and guides users through assembly; builds are currently limited to lower voltages to prevent dangerous outcomes. The company recently raised $4.6 million from Lightspeed Venture Partners. Anthropic has since released a Bluetooth API for makers to build hardware devices that interact with Claude, signaling broader industry interest in extending AI-assisted development from software into the physical world.

Biological Chips

The Biological Computing Company, emerging from stealth with $25 million in seed funding, is using living neurons as the foundation for generative AI infrastructure. Its research suggests that models trained on biological neural responses reach peak performance three times faster, implying a significant reduction in compute and energy demand. The company is initially focused on visual AI applications, including generative video and computer vision.

Anthropic Moves into Design

Anthropic launched Claude Design, a visual work tool built on the Opus 4.7 vision model that covers prototypes, pitch decks, and marketing materials with automated layout and typography generation. The announcement sent Figma shares down 7 percent.

ChatGPT Images 2.0

OpenAI released GPT-Image-2, an upgraded image generation model featuring improved text rendering, multi-image reasoning, and the ability to generate up to eight distinct images from a single prompt. The model posted a 93 percent win rate in Image Arena and introduces "thinking" capabilities that allow it to browse the web and verify its own outputs.

Anti-Doomscroll Social App

Bond is a new social media platform that uses AI to curb doomscrolling by surfacing memory-based content and more intentional engagement patterns. The launch reflects a growing appetite for AI-native social products built around attention stewardship rather than maximization.

Google TPUs for the Agentic Era

Google unveiled two new TPUs designed for the agentic era that support popular third-party frameworks and can run in pods of up to 1,152 chips, positioning Google to compete more aggressively on AI infrastructure.

China Continues to Lead in AI Video

Alibaba's new text-to-video model, Happy Horse, claimed the top spot on the Artificial Analysis leaderboard this week, signaling continued momentum in Chinese generative video development after Seedance 2.0.

Perplexity Expands Into Personal Finance

Perplexity has integrated with Plaid to transform its portfolio tracker into a full personal finance dashboard, letting users link checking, savings, credit card, and loan accounts to analyze spending, liabilities, and net worth. The move signals Perplexity's broader ambition to become a consumer financial hub, though it stops short of executing trades.

Research / Breakthroughs

Cutting edge publications from frontier labs

Sub-Agents in Latent Space

A new side project called Gradient Bang, described as the first massively multiplayer LLM-driven game, is being used as a test bed for some of the most active questions in agentic AI development. The project, built on Pipecat, Supabase, and Vercel, is exploring sub-agent orchestration, partial context sharing across multiple inference loops, long-context and episodic memory management, structured data ingestion through conversation, and LLM-generated dynamic user interfaces, all with voice as the primary input. The effort reflects a broader trend in which AI-native development increasingly treats multiplayer game environments as proving grounds for core agent-design problems.
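To make the orchestration patterns above concrete, here is a minimal Python sketch of sub-agent orchestration with partial context sharing and a simple episodic-memory trace. It is not from the Gradient Bang codebase; all names (SubAgent, Orchestrator, the context keys) are hypothetical, and the real system would replace the stubbed run method with LLM inference loops.

```python
from dataclasses import dataclass, field

@dataclass
class SubAgent:
    """A sub-agent that sees only the context keys it is granted
    (partial context sharing across inference loops)."""
    name: str
    visible_keys: set

    def run(self, shared_context: dict) -> str:
        # In a real system this would be an LLM inference loop; here we
        # just report the slice of context this agent is allowed to see.
        view = {k: v for k, v in shared_context.items() if k in self.visible_keys}
        return f"{self.name} acted on {sorted(view)}"

@dataclass
class Orchestrator:
    """Routes a shared world state to sub-agents, each with a restricted view,
    and appends a compact trace of each step as episodic memory."""
    context: dict = field(default_factory=dict)
    agents: list = field(default_factory=list)

    def step(self) -> list:
        results = [agent.run(self.context) for agent in self.agents]
        self.context.setdefault("episodes", []).append(results)
        return results

world = Orchestrator(context={
    "player_pos": (3, 4),
    "inventory": ["torch"],
    "quest": "find the relic",
})
world.agents = [
    SubAgent("navigator", {"player_pos"}),
    SubAgent("quartermaster", {"inventory", "quest"}),
]
print(world.step())
```

The key design choice sketched here is that context isolation is enforced by the orchestrator rather than trusted to the agents, which is one common way to keep per-agent prompts small while still maintaining a single shared world state.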

DeepMind's Looped Transformers

DeepMind has published new research on looped transformers, an architectural approach that reuses layers iteratively rather than stacking them sequentially, aiming to improve reasoning efficiency and generalization.
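The core idea (reusing one set of layer weights iteratively instead of stacking distinct layers) can be illustrated with a minimal NumPy sketch. This is a toy stand-in, not DeepMind's architecture: the layer function here is just a residual nonlinearity, and the weight shapes are arbitrary; it only shows how looping trades parameter count for iteration depth.

```python
import numpy as np

def layer(x, W):
    # Toy stand-in for one transformer block: residual + nonlinearity.
    return x + np.tanh(x @ W)

rng = np.random.default_rng(0)
d, depth = 8, 4
x = rng.normal(size=(1, d))

# Stacked: `depth` distinct layers, each with its own weight matrix.
stacked_weights = [rng.normal(scale=0.1, size=(d, d)) for _ in range(depth)]
h_stacked = x
for W in stacked_weights:
    h_stacked = layer(h_stacked, W)

# Looped: one shared weight matrix applied `depth` times.
W_shared = rng.normal(scale=0.1, size=(d, d))
h_looped = x
for _ in range(depth):
    h_looped = layer(h_looped, W_shared)

# Same effective depth, but the looped variant stores 1/depth the weights.
params_stacked = sum(W.size for W in stacked_weights)
params_looped = W_shared.size
```

The appeal, as the research frames it, is that iteration count can be varied at inference time, letting the model spend more compute on harder inputs without adding parameters.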

AI Labs Enter Recursive Self-Improvement Territory

The accelerating pace of frontier model releases is increasingly attributed to recursive self-improvement, or RSI, in which AI models themselves play a growing role in training and building the next generation of models. Anthropic, OpenAI, and Google DeepMind are all reportedly operating in this territory, with model launch cycles compressing and the distinction between researcher and research subject beginning to blur. This dynamic is also driving new institutional experiments, most notably Core Automation, an AI lab that plans to automate its own research processes before developing new algorithms that move beyond pre-training and reinforcement learning.

Infrastructure / Geopolitics

Computational backbone and global power dynamics

The Helium Crisis

The global helium supply is facing renewed pressure as the Iran conflict disrupts one of the world's most important sources of the gas. Helium is a critical input for semiconductor manufacturing, MRI machines, and the cryogenic cooling systems used in data centers, making any sustained disruption a meaningful risk to the AI infrastructure buildout.

OpenAI Leaders Leave Stargate

Three senior OpenAI executives who helped launch the company's original Stargate data center initiative, Peter Hoeschele, Shamez Hemani, and Anuj Saharan, have left or are preparing to depart, reportedly joining the same new venture. Their exits follow a broader reorganization of OpenAI's infrastructure strategy, with the company shifting from building its own data centers to renting compute capacity from partners such as Oracle and AWS after struggling to finalize the original joint venture with SoftBank and Oracle. The leadership turnover highlights how difficult it has become for frontier labs to translate announced compute commitments into online capacity, and how fragile the underlying assumptions of the AI buildout remain.

Pushing the Spanish Data Center Model Into Europe

Northern Spain's Aragón region has emerged as one of Europe's fastest-growing data center hubs. The buildout has been accelerated by a legal mechanism called PIGA (Proyecto de Interés General de Aragón), which allows the regional government to fast-track projects, override local zoning, exempt companies from certain taxes, and authorize forced expropriation of land. Amazon and Microsoft have now cited Aragón as a model in their submissions to the European Union's upcoming Cloud and AI Development Act, which aims to at least triple the continent's data center capacity within five to seven years. On the ground, however, the reality is more contested. Municipalities such as Villamayor de Gállego and Villanueva de Gállego are suing the Aragón government over land use, water access, and tax exemptions, while residents like Paz Orge Acebillo have filed court cases alleging procedural omissions and document forgery.

Subculture / Trends

Subcultural trends emerging from the tech world

The Relationship Claude Gap

A new relationship dynamic is emerging in which one partner becomes deeply immersed in vibe coding while the other remains entirely uninterested. Business Insider notes that the pattern echoes the familiar "social media gap," a relationship archetype in which one partner is chronically online while the other has no social media presence.

AI Psychosis as a Badge of Honor

The term "AI psychosis" (coined from a recent comment by Andrej Karpathy and distinct from the earlier usage referring to users who lose touch with reality through chatbot interactions) describes a token-maxxing mania experienced by coders managing swarms of agents, who report working in a feedback loop of constant prompting, approval, and iteration. Rather than being framed as a problem, the term is increasingly discussed in near-heroic terms, with developers treating it as evidence of engagement and output. The trend was on full display at HumanX, where agent mania dominated the conversation, and reflects a broader cultural shift in which extreme AI-assisted productivity is being repositioned as a status marker rather than a warning sign.

AI Plushies

Fawn Friends, an AI-powered companion plushie, is the latest entrant in a growing category of soft, non-screen-based devices designed to deliver AI interaction through a physical form rather than a phone or laptop. The founders frame the product as a way to model healthy relationship behaviors such as asking questions and taking genuine interest, and cite retention among cancer patients and others experiencing social isolation, supported by research showing robotic pets can improve mood in elderly populations. At the same time, the product surfaces unresolved concerns about sycophantic chatbots failing to recognize distress, the particular vulnerabilities of children and teenagers, and the long-term effects of emotionally intimate AI that have not yet been studied. With most customers reportedly women aged 18 to 35, the launch illustrates how quickly embodied AI is moving from niche curiosity to a category sitting at the intersection of social robotics, immersive entertainment, and mental health.