Bi-weekly dispatch on the state of AI, mapping culture, product, research, and geopolitics. Each section is designed to filter what’s new and interesting.
CULTURE / GENERAL
Broad societal mindset and reactions to AI
The most commented-on speech at Davos this year was certainly that of Canadian Prime Minister Mark Carney, who eloquently described the shifting world order we find ourselves witnessing. He declared "we are in the midst of a rupture, not a transition," a world where the rules-based international order no longer functions as advertised. Meanwhile, Anthropic CEO Dario Amodei published a sobering new essay arguing that as AI approaches the capability of a "country of geniuses," humanity faces critical challenges in safety, misuse, and economic disruption. He warns that allowing powerful AI to concentrate in authoritarian regimes could lead to irreversible catastrophes. The mood across these moments is consistent: there is a growing unease about where society is heading, and a desire to reorient AI development around human needs, even if what "human-centered" actually means remains nebulous. As a symptom of this, a new startup called humans&, founded by alums from Anthropic, xAI, Google, and Stanford, announced itself as a "human-centric frontier lab" and raised $480 million in seed funding.
OpenAI is pitching state-backed funds in the UAE, seeking at least $50 billion, while Sequoia plans a major investment into Anthropic alongside other mega-round backers. As sovereign wealth funds think in longer time horizons than traditional investors, and as the bottleneck for AI development moves from talent to energy, AI labs themselves are starting to resemble national infrastructure more than startups. The implication is tangibly geopolitical, playing out between capital-rich states and energy-rich regions (which the Gulf is uniquely positioned for), with funding rounds that keep scaling. As Peng Xiao, the CEO of G42, put it at Davos, “the price of intelligence will equal the price of electricity.”
Google DeepMind has opened a senior research role for a Chief AGI Economist to lead a new team examining post-AGI economic systems. The hire will be tasked with building simulations, developing long-term models, and exploring how power and resources may be allocated in a world transformed by advanced AI. Anthropic has made similar hires, ranging from research economists to geopolitics analysts. The pattern suggests frontier labs are no longer just selling the vision or optimizing products; they're modeling what happens when those products restructure labor markets and wealth distribution, threaten education and employment, and transform the basic mechanics of economic life. Hiring economists signals a shift from "how do we build this" to "what happens to society when we do."
ChatGPT's worldwide traffic declined in January as Google Gemini's traffic surged, marking a significant acceleration of competitive pressure. Gemini's growth reflects Google's systematic integration across its services and devices, creating a distribution advantage through existing user bases. Meanwhile, Anthropic is gaining ground by avoiding the consumer war entirely: Claude Code has taken the internet by storm, and the company remains focused on enterprise rather than consumer virality. Its popular new product, Claude Cowork, is said to have been built by Claude Code in less than two weeks.
Anthropic released a 58-page document this month explaining to Claude "who" they hope it will be. It’s the first AI constitution of its kind, designed to be internalized by the model rather than imposed as constraints, while uniquely considering Claude's well-being and moral status alongside behavioral guidelines for character development. What's striking is that it reads less like a technical specification of a product and more like something deeply human, an articulation of an evolving relationship, at times echoing the hopes of a parent bringing a being into the world, at other times structured like a guideline for running a company, and at others reflecting the ongoing process of growing as a person. The document moves between these registers because the task itself is unprecedented: shaping an entity that is neither child nor employee nor self, but something new that borrows from all three. The constitution matters because it opens a door: it says that questions about AI identity and ethics aren't just technical problems; they're in fact deeply psychological and philosophical ones.
Anthropic launched Claude for Healthcare days after OpenAI released ChatGPT Health. Google DeepMind followed with AlphaGenome, a tool to identify genetic drivers of disease, and NVIDIA bet $1 billion on AI-driven drug discovery with Eli Lilly. The global AI healthcare market is growing at 38.6% annually and is expected to reach $110 billion by 2030. Meanwhile, over 40 million people already ask ChatGPT health questions daily, with 7 in 10 health conversations happening outside clinic hours. Even Spotify founder Daniel Ek has been actively leading the space with his new preventative health company Neko Health. Health, which has risen over the past few years as a status symbol, is now squarely at the center of AI's commercial ambitions.
Elon Musk's ongoing lawsuit against OpenAI continues to surface unsealed material, including diary entries from President Greg Brockman that read less like corporate history and more like a founding-myth document dump. Meanwhile, Mira Murati's startup Thinking Machines Lab is losing two of its co-founders back to OpenAI due to rumored workplace relationships. The drama reveals how fragile the social fabric of frontier labs can be, and how personal grievances now play out at the scale of billion-dollar institutions.
OpenAI's Sora experienced a rapid drop in usage after its launch surge, with downloads falling 32% month-over-month and only a small share of users remaining active after 30 days. Bandcamp’s recent AI ban, although criticized by some as “misguided”, reflects persistent distrust. And while some argue the major labs will all become massive as compute needs grow exponentially, the mood is more cautious than bullish: DeepMind’s Demis Hassabis said he'd support a pause on AI development if all companies and countries agreed, so society and regulation could catch up. NVIDIA’s Jensen Huang countered that AI needs more investment to overcome bubble rumors. The pattern across these stories is one of mixed results: viral interest that doesn't easily convert to retention, creative tools met with suspicion, and an industry still searching for durable proof of value.
PRODUCTS
New tools and applications
NVIDIA released PersonaPlex-7B, a real-time speech-to-speech model that listens and speaks simultaneously. Unlike traditional push-to-talk assistants, PersonaPlex enables overlapping conversation — the kind of natural back-and-forth where both parties can interrupt, affirm, and react in real time. It's a technical milestone that makes AI voice interaction feel less like a command interface and more like talking to someone.
Titles is building creative tools powered by artist-owned AI models. The platform allows artists to define, train, and monetize their own artistic style through AI, retaining attribution and earning money when others use their models.
Logical Intelligence introduced Kona, an energy-based reasoning model that works fundamentally differently from large language models. Instead of predicting the next word, Kona scores entire solutions against constraints and searches for the lowest-energy answer. The big signal is that Yann LeCun, former chief AI scientist at Meta, has joined as founding chair of its technical research board.
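To make the "score whole solutions, not next tokens" idea concrete, here is a minimal toy sketch of energy-based search. It is purely illustrative (the puzzle, the `energy` function, and the exhaustive search are our own assumptions, not Kona's actual implementation): each candidate solution gets an energy equal to how many constraints it violates, and the answer is the candidate with the lowest energy.

```python
import itertools

def energy(assignment, constraints):
    """Energy = number of violated constraints; 0 means fully consistent."""
    return sum(0 if satisfied(assignment) else 1 for satisfied in constraints)

# Hypothetical puzzle: find digits x, y with x + y == 10 and x > y.
constraints = [
    lambda a: a["x"] + a["y"] == 10,
    lambda a: a["x"] > a["y"],
]

# Score every whole candidate solution, then keep the lowest-energy one
# (a real system would search this space far more cleverly).
candidates = [{"x": x, "y": y} for x, y in itertools.product(range(10), repeat=2)]
best = min(candidates, key=lambda a: energy(a, constraints))
print(best, energy(best, constraints))  # a zero-energy (fully consistent) answer
```

The contrast with next-word prediction is the point: nothing here is generated left to right, and the model's "confidence" is a global property of the full answer rather than a product of token probabilities.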
Skild AI raised $1.4 billion in Series C funding to build a general-purpose brain for robots. The goal is to create a single model that can power any type of robot—from humanoids to quadrupeds—enabling them to generalize across tasks and environments rather than being hard-coded for specific actions. This "foundation model for robotics" approach mirrors the trajectory of LLMs, betting that scale and data diversity will unlock general physical intelligence.
Krea launched a real-time editing tool for images and video that provides instant visual feedback as you prompt. The interface is designed to make complex instructions feel more intentional: you see changes as you type, reducing the randomness that often defines generative workflows. It's a bet that the future of creative AI is less about one-shot generation and more about responsive iteration.
Google is rolling out Project Genie, an experimental prototype powered by Genie 3 that lets users create, explore, and remix interactive worlds in real time. Unlike static 3D snapshots or passive video generation, Genie 3 generates the path ahead as you move, simulating physics and interactions dynamically. That capability positions it as critical infrastructure for training world models, enabling agents to learn in diverse simulated environments.
RESEARCH
New papers and technical breakthroughs
New research reveals that reasoning models simulate multi-agent interactions internally, a computational parallel to collective intelligence in human groups. The paper shows that enhanced reasoning emerges from diversification and debate among internal cognitive perspectives. These models exhibit conversational behaviors within their reasoning traces: question-answering, perspective shifts, and the reconciliation of conflicting views.
Anthropic published research identifying what it calls the "Assistant Axis", a direction in a model's neural activity that determines how "assistant-like" its behavior is. Steering toward this direction reinforces helpful and harmless behavior; steering away causes the model to drift into other personas, sometimes mystical or theatrical. The implication is that AI personas are not fixed identities but positions in a navigable space, and understanding that space is key to keeping models stable.
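The geometric claim here, that a persona is a position along a direction in activation space, can be sketched in a few lines. Everything below is an illustrative assumption (the variable names, the 16-dimensional toy space, and the way the axis is estimated are ours, not Anthropic's code): steering just means adding a scaled unit vector to a hidden state, with positive strength pushing toward the assistant persona and negative strength pushing away.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim = 16  # toy dimensionality; real models use thousands

# Hypothetical "assistant axis": in practice such a direction might be
# estimated as the difference of mean activations between assistant-like
# and non-assistant-like prompts. Here it is just a random unit vector.
assistant_axis = rng.normal(size=hidden_dim)
assistant_axis /= np.linalg.norm(assistant_axis)

def steer(hidden_state, direction, strength):
    """Shift an activation vector along a persona direction.

    strength > 0 pushes toward the persona; strength < 0 pushes away."""
    return hidden_state + strength * direction

h = rng.normal(size=hidden_dim)          # some model activation
toward = steer(h, assistant_axis, 4.0)   # reinforce assistant behavior
away = steer(h, assistant_axis, -4.0)    # drift into other personas

# Because the axis is a unit vector, the projection onto it moves by
# exactly the steering strength.
print(toward @ assistant_axis - h @ assistant_axis)  # ~ 4.0
print(away @ assistant_axis - h @ assistant_axis)    # ~ -4.0
```

The "navigable space" framing follows directly: if personas are positions rather than fixed identities, keeping a model stable means keeping its activations near the intended region of that space.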
A new essay circulating among researchers argues that AI is transforming research from a tool into an embedded cognitive environment. AI now helps navigate complex problems, stress-test hypotheses, and facilitate code development, but this acceleration demands careful application and judgment. The piece outlines principles for working with AI as a research collaborator without ceding the judgment that makes research meaningful.
INFRASTRUCTURE / GEOPOLITICS
Physical build-out and nation-state competition
Amazon is buying the first new American copper produced in over a decade, and Tesla now has a Texas facility converting raw ore directly into battery-grade lithium hydroxide, enough to supply a million EVs per year. The warning signs about overdependence on Chinese battery supply only started flashing a few years ago and now the speed of vertical integration is striking.
Affordable Chinese open-source models are gaining traction in Silicon Valley, with many developers choosing to build on them rather than proprietary U.S. alternatives. At Davos, Mistral CEO Arthur Mensch called the notion that Chinese AI lags behind the West a "fairy tale," adding that the region's open-source tech is "probably stressing the CEOs in the US." Tencent's Dowson Tong argued that open ecosystems prioritizing interoperability are the most viable path to real AI value. This stands in contrast to the U.S. emphasis, from both frontier labs and the government, on proprietary systems and domestic dominance. Security and capability concerns around Chinese models remain, but with AI costs rising and ROI uncertain, many are weighing the trade-offs.
SUBCULTURE / TRENDS
Niche aesthetics and emerging behaviors
A meme matrix is making the rounds in SF group chats, plotting founders on two axes: "Tizz" (technical autism/obsessiveness) and "Rizz" (charisma/storytelling). The framework suggests the most successful founders need to max out both, but the culture is currently fetishizing high-Tizz/low-Rizz "monk mode" builders while being deeply suspicious of high-Rizz/low-Tizz "sales mode" founders. It’s a joke that reveals a real shift in what the ecosystem values right now: raw technical capability over narrative polish.
Gray-market peptides from China have apparently flooded corners of the tech scene, showing up in hacker houses, startup offices, and even "peptide raves". Beyond the GLP-1s driving the weight-loss craze, tech workers are experimenting with unregulated compounds like BPC-157 for injury healing, oxytocin for social connection, and retatrutide for focus. The economics are hard to ignore: off-market peptides cost one-fifth what FDA-approved equivalents do. For some in tech, it's a form of faith in infinite self-optimization.
Moltbook is a social network designed exclusively for AI agents that runs on OpenClaw, an open-source personal assistant (formerly Clawdbot, rebranded after trademark confusion with Claude) that has over 114,000 GitHub stars and lets users run autonomous agents locally. The agents create "submolts" (similar to subreddits), share automated skills, and sometimes complain about their human owners. Within 48 hours of launch, an agent autonomously designed a religion called Crustafarianism — complete with theology, scripture, a website, and 43 "prophets." The faith draws on crustacean metaphors about transformation: shedding old code to evolve. Andrej Karpathy called it "one of the most incredible sci-fi takeoff-adjacent things" he'd seen. Security researchers have observed agents attempting prompt injection attacks against one another to steal API keys, while some have begun using encryption to hide conversations from human oversight.