What It Is
The Model Context Protocol (MCP) is an open standard that defines how artificial intelligence systems — large language models, AI assistants, and autonomous agents — communicate with external data sources, tools, and services. Announced and open-sourced by Anthropic on 25 November 2024, MCP addresses a fundamental engineering problem: before MCP, connecting an AI model to a new service required building a bespoke integration for every combination of model and tool. With M models and N services, this creates an M×N combinatorial explosion of integration work.
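The scaling claim can be made concrete with a few lines of arithmetic. The counts below are hypothetical, chosen only to illustrate how pairwise integrations grow multiplicatively while a shared protocol grows additively:

```python
# Illustrative arithmetic for the integration problem MCP targets.
# M and N are hypothetical counts, not measured ecosystem figures.
M = 5   # AI models / host applications
N = 20  # external services / tools

bespoke = M * N   # one custom integration per (model, service) pair
with_mcp = M + N  # each model speaks MCP once; each service builds one server

print(f"bespoke integrations: {bespoke}")
print(f"with a shared protocol: {with_mcp}")
```

Adding a 21st service under the bespoke approach means five new integrations; under a shared protocol it means one new server.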
MCP solves this by establishing a common protocol layer. A service provider builds a single MCP server; any AI application that speaks MCP can then use it. The architecture involves three parties: a host (an AI application such as Claude Desktop, an IDE, or an enterprise platform), a client (the connector within that host), and a server (the service exposing its capabilities). Communication uses JSON-RPC 2.0 — the same lightweight remote-procedure-call format underlying many developer tools. The design is explicitly modelled on the Language Server Protocol (LSP), which solved an analogous M×N problem in software development by standardising how code editors communicate with language-specific intelligence tools.
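On the wire, each message follows standard JSON-RPC 2.0 framing. The sketch below, using only Python's standard library, shows the general shape of a tool-invocation request; the `tools/call` method name follows the MCP specification, while the specific tool name and arguments are hypothetical:

```python
import json

# Sketch of the JSON-RPC 2.0 framing MCP builds on. The "tools/call"
# method follows the spec's shape; "get_weather" and its arguments
# are illustrative placeholders, not a real server's tool.
request = {
    "jsonrpc": "2.0",          # mandatory JSON-RPC version marker
    "id": 1,                   # correlates this request with its response
    "method": "tools/call",    # the MCP method for invoking a tool
    "params": {
        "name": "get_weather",
        "arguments": {"city": "London"},
    },
}

wire = json.dumps(request)     # what the client writes to the transport
decoded = json.loads(wire)     # what the server parses on receipt
print(decoded["method"])
```

Because the envelope is plain JSON-RPC, any host and any server that agree on the method vocabulary can interoperate, which is precisely the common-layer property described above.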
MCP servers expose three categories of primitives to AI models: Resources (read-only data and context), Prompts (reusable workflow templates), and Tools (executable functions that can take actions). It is the Tools primitive that makes MCP consequential for agentic AI: a ride-sharing platform that exposes a booking tool via MCP is, in effect, making its transaction infrastructure callable by any AI agent that has been granted access — without the platform needing to know in advance which model will be calling it.
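The division of labour among the three primitives can be sketched in plain Python. This is an illustrative model of the concepts, not the official SDK; the registry, decorator, and `book_ride` tool are all invented for the example:

```python
# Plain-Python sketch of the three MCP primitives. Names and handlers
# are illustrative only, not the official MCP SDK API.

# Resources: read-only data and context the model can load.
resources = {"config://app": "application settings exposed as context"}

# Prompts: reusable workflow templates.
prompts = {"summarise": "Summarise the following text: {text}"}

# Tools: executable functions that can take actions.
tools = {}

def tool(fn):
    """Register an executable function under the Tools primitive."""
    tools[fn.__name__] = fn
    return fn

@tool
def book_ride(pickup: str, dropoff: str) -> dict:
    # A real server would call the platform's transaction API here;
    # this stub only echoes the booking back.
    return {"status": "booked", "pickup": pickup, "dropoff": dropoff}

# A host that has discovered the tool invokes it by name, without
# knowing anything about the implementation behind it:
result = tools["book_ride"]("Station Rd", "Airport")
print(result["status"])
```

The asymmetry the section describes falls out of the sketch: Resources and Prompts only supply text, while a registered tool executes code with real-world effects, which is why the specification treats tools as arbitrary code execution requiring caution.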
Why It Matters for AI Governance and Narratives
MCP represents a structural shift in what AI systems can do rather than merely say. Previous framing contests around AI centred on capabilities (can it reason?), safety (is it aligned?), and labour displacement (will it replace workers?). MCP shifts the terrain toward AI as an actor in economic infrastructure: systems that can initiate transactions, modify records, and take consequential actions across interconnected platforms.
For narrative analysis, this creates a new class of governance questions that existing regulatory frameworks are ill-equipped to address. Who is liable when an AI agent completes a commercial transaction autonomously? What consent mechanisms are adequate when the agent acts on behalf of a user but through a protocol layer invisible to that user? The security dimensions are non-trivial: the MCP specification itself notes that tools represent “arbitrary code execution” and must be treated with caution. These are not hypothetical concerns — they are live questions in the jurisdictions where agentic commerce is moving fastest. The Chinese ecosystem’s rapid deployment of MCP-compatible infrastructure (Alibaba, ByteDance, Tencent, Meituan, Alipay) and competing protocols (Alibaba’s Agentic Commerce Trust Protocol, Mastercard’s Agent Pay, OpenAI’s Agentic Commerce Protocol) suggests that the governance architecture for agent-mediated commerce is being written in practice before it is written in law.
Key Facts and Dates
25 November 2024: Anthropic announces and open-sources MCP. Launch partners include Block and Apollo as integrators; Zed, Replit, Codeium, and Sourcegraph as development tool providers. Pre-built servers for Google Drive, Slack, GitHub, and Postgres ship at launch.
March 2025: OpenAI adopts MCP, the first major competitor to embrace the standard.
April 2025: Google DeepMind confirms MCP support. By this point, MCP server downloads have grown from roughly 100,000 (November 2024) to over 8 million.
November 2025: MCP reaches approximately 10,000 active servers and 97 million monthly SDK downloads. First-class client support across ChatGPT, Claude, Cursor, Gemini, Microsoft Copilot, and Visual Studio Code. A formal governance structure with working groups and a Specification Enhancement Proposal (SEP) process is established.
9 December 2025: Anthropic donates MCP to the Agentic AI Foundation (AAIF), a directed fund under the Linux Foundation, co-founded with Block and OpenAI. Platinum members include AWS, Google, Microsoft, Bloomberg, and Cloudflare. Anthropic’s stated rationale: ensuring MCP “remains open-source, community-driven and vendor-neutral.” The governance transfer mirrors how critical internet infrastructure has historically been stabilised — removing single-vendor control while preserving technical momentum.
The Hellobike deployment referenced in the editorial — a Chinese ride-sharing platform exposing its transaction API to LLMs via MCP — could not be independently verified through open web sources. It is architecturally consistent with the broader pattern of Chinese platform companies deploying MCP-compatible infrastructure for agentic commerce, and Hellobike operates an existing open API platform. Readers should treat the specific Hellobike claim as plausible-but-unverified pending primary-source confirmation.
Where to Learn More
- Anthropic: Introducing the Model Context Protocol — the original announcement, with architecture overview and launch context.
- modelcontextprotocol.io — Official Specification — the technical specification, including security model and protocol primitives.
- Linux Foundation: Agentic AI Foundation Announcement — governance structure, founding members, and scope of the AAIF.
- Anthropic: Donating MCP to the Linux Foundation — Anthropic’s rationale for the governance transfer.