What It Is
Conway is the internal Anthropic codename for a persistent agent environment distinct from Claude’s familiar conversational interface. Where standard Claude interactions are session-bound — the agent exists only while a conversation is open — Conway is designed to operate continuously, maintaining state and responding to external triggers even when no human user is actively present.
The name surfaced publicly in April 2026 through two overlapping events: Anthropic’s accidental publication of approximately 512,000 lines of Claude Code’s internal TypeScript source code to the public npm registry (confirmed by Bloomberg and CNBC), and an exclusive report by Alexey Shabanov of TestingCatalog based on UI screenshots and frontend code inspection from what appears to be an internal Anthropic testing environment. The two events are related but distinct — the npm leak revealed features like KAIROS (a persistent background daemon mode for Claude Code) while the TestingCatalog report documented Conway specifically through interface screenshots.
Based on the TestingCatalog reporting, Conway’s distinguishing characteristics are:

- A standalone interface with dedicated sidebar areas for Search, Chat, and System.
- Webhook activation: public URLs that external services can call to wake and direct the instance, with per-service toggle controls.
- Chrome browser integration.
- Claude Code task execution within the same environment.
- An extension system accepting .cnw.zip package files, a new format that would allow developers to install custom tools, UI tabs, and context handlers.

Separately, the leaked Claude Code source revealed a feature flag called KAIROS (Greek for “at the right time”) appearing over 150 times, representing a persistent background daemon with proactive action triggers and memory consolidation capabilities. KAIROS and Conway appear architecturally related but are distinct: KAIROS is a mode within Claude Code; Conway is a broader standalone agent environment.
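The webhook-activation pattern TestingCatalog describes (public per-service URLs that wake the agent, each with its own on/off toggle) is a familiar dispatch design. The sketch below is purely illustrative and assumes nothing about Anthropic’s actual implementation; every name in it (`WebhookRouter`, `register`, `receive`) is invented for this example.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of per-service webhook routing for a persistent
# agent. None of these names or behaviors come from Anthropic's code.
@dataclass
class WebhookRouter:
    # service name -> whether its webhook is allowed to wake the agent
    toggles: dict[str, bool] = field(default_factory=dict)
    # record of calls that actually woke the agent (stand-in for
    # "wake and direct the instance")
    wake_log: list[tuple[str, str]] = field(default_factory=list)

    def register(self, service: str, enabled: bool = True) -> None:
        """Expose a webhook endpoint for a service, with a toggle."""
        self.toggles[service] = enabled

    def receive(self, service: str, payload: str) -> bool:
        """Handle an incoming call; return True if the agent was woken."""
        # Unknown or toggled-off services never wake the agent.
        if not self.toggles.get(service, False):
            return False
        self.wake_log.append((service, payload))
        return True

router = WebhookRouter()
router.register("calendar", enabled=True)
router.register("ci", enabled=False)

assert router.receive("calendar", "meeting in 10 min") is True
assert router.receive("ci", "build failed") is False      # toggle off
assert router.receive("unknown", "spam") is False         # never registered
```

The per-service toggle is the governance-relevant detail: it gives the user a coarse authorization boundary over which external systems may trigger autonomous action.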
Why It Matters for AI Governance and Narratives
Conway represents a qualitative shift in the capability category Anthropic is building toward, and that shift has direct implications for how AI governance is being contested. The safety and accountability frameworks developed over the last several years — including Anthropic’s own Responsible Scaling Policy — were designed with conversational AI in mind: a human initiates, the model responds, the session closes. Persistent agents that activate on external triggers, maintain memory across time, and operate without a human present at the moment of action do not fit cleanly into that accountability model.
The governance significance is not merely technical. When an AI system can be woken by a webhook from an external service, execute tasks autonomously, and push notifications to users, questions of authorization, audit trails, and liability attribution become substantially harder. The regulatory frameworks currently advancing in Brussels, Washington, and Beijing are largely designed around the interaction model, not the daemon model. Conway — if and when it ships — would arrive into a regulatory environment that has not yet developed clear categories for what it is. For observers tracking AI governance narratives, that gap is consequential: it means incumbent framing contests (“AI as assistant” vs. “AI as autonomous actor”) will intensify as persistent agents become a commercial reality.
Key Facts and Dates
- March 31, 2026: Anthropic accidentally publishes internal Claude Code source code (~512,000 lines, 1,906 files) to the public npm registry via a misconfigured source map in version 2.1.88. Researcher Chaofan Shou of Solayer Labs discovers and posts a download link publicly. Anthropic issues DMCA takedowns targeting over 8,000 GitHub copies. Chief Commercial Officer Paul Smith attributes the incident to “process errors” (Bloomberg).
- April 1, 2026: TestingCatalog publishes an exclusive report on Conway based on UI screenshots, documenting the standalone interface, webhook architecture, extension system, and a settings area labeled “Manage your Conway instance.”
- Relationship to official Anthropic communications: As of this writing, Anthropic has made no official statement about Conway, the CNW ZIP extension format, or any release timeline. The features described above have not been confirmed through official channels. The npm leak itself, the event that made the source visible, was confirmed by Anthropic.
- Related unreleased features revealed by the leak (documented by Gizmodo, WaveSpeed AI): BUDDY (a virtual companion with 18 species and rarity tiers); ULTRAPLAN (complex planning offloaded to Opus for up to 30 minutes with browser-based approval); Coordinator Mode (multi-agent orchestration via a mailbox system); Undercover Mode (strips AI attribution from commits).
A note on source confidence: The core fact — that Anthropic has an internal project called Conway with persistent, webhook-activated agent capabilities — rests primarily on a single specialist publication (TestingCatalog) whose April 1 publication date warrants some caution, even though TestingCatalog has a credible track record on unreleased AI features. The leaked source’s KAIROS feature corroborates the architectural direction independently. No primary Anthropic source confirms Conway by name. This explainer treats Conway as reported-but-unconfirmed.
Where to Learn More
- TestingCatalog: “Exclusive: Anthropic tests its own always-on ‘Conway’ agent” — Primary reporting with UI screenshots; the most direct source on Conway’s features.
- Bloomberg: “Anthropic Rushes to Limit Leak of Claude Code Source Code” — Authoritative confirmation of the npm leak event and Anthropic’s response.
- Gizmodo: “Anthropic Can’t Cover Up Its Claude Code Leak Fast Enough” — Covers KAIROS and related features from the leaked source code.
- WaveSpeed AI: “Claude Code Leaked Source: BUDDY, KAIROS & Every Hidden Feature Inside” — Detailed technical breakdown of the full set of features surfaced by the leak.