MCP Core Concepts
1.1 What Problem Does MCP Solve
Every AI application needs to connect to external systems. Before MCP, each AI application had to write custom integration code for every external system: N AI applications x M external systems = N x M integrations. MCP reduces this to N + M: each AI application implements one MCP Client, and each external system implements one MCP Server. They communicate through a standard protocol, enabling arbitrary combinations.
This design is directly inspired by Microsoft's Language Server Protocol (LSP). LSP solved the "every editor x every language" integration explosion; MCP applies the same approach to the "every AI application x every external system" problem.
Protocol Version: 2025-11-25
Specification: modelcontextprotocol.io
Specification Repository: github.com/modelcontextprotocol/specification
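The N x M versus N + M arithmetic above can be made concrete. The numbers below are illustrative, not taken from any real deployment:

```python
# Integration-count arithmetic for the "N x M problem" described above.
# The counts are hypothetical examples.
n_apps = 5      # AI applications (Hosts)
m_systems = 8   # external systems

without_mcp = n_apps * m_systems   # one custom integration per (app, system) pair
with_mcp = n_apps + m_systems      # one MCP Client per app + one MCP Server per system

print(without_mcp)  # 40
print(with_mcp)     # 13
```

Adding a ninth external system then costs one new Server instead of five new integrations.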
1.2 Three Participants
MCP Host
The AI application itself. Responsible for managing one or more MCP Clients. Examples: Claude Desktop, VS Code (Copilot), ChatGPT, Cursor.
Host responsibilities:
- Starting and managing MCP Client connections
- Exposing MCP Server capabilities (tools, resources) to the AI model
- Handling user interactions
- Enforcing security policies (e.g., requiring user confirmation for tool calls)
MCP Client
A component created internally by the Host for each Server connection. Each Client maintains a 1:1 dedicated connection with one MCP Server. You typically do not need to develop a Client directly; the Host creates and manages them automatically. A Client can also expose its own capabilities to the Server (Sampling, Roots, Elicitation), enabling the Server to make reverse requests for LLM inference or user input.
MCP Server
A program that provides tools, data, and prompts. This is the component most developers need to build. Examples:
- Filesystem Server: lets AI read and write local files
- Database Server: lets AI query databases
- Sentry Server: lets AI view error logs
- Your commerce Server: lets AI search products, check orders
Architecture Diagram
1.3 Protocol Lifecycle
MCP is a stateful protocol. A connection goes through three phases: initialization, operation, and shutdown.
Phase 1: Initialization
The Client sends an initialize request. Both sides negotiate the protocol version and supported capabilities.
- The Client declares what it supports (e.g., sampling for LLM inference, elicitation for user interaction, roots for filesystem boundaries)
- The Server declares what it provides (e.g., tools, resources, prompts)
- Both sides only use capabilities declared by the other party
- Both sides must agree on a supported protocol version; if they cannot, initialization fails and the connection is closed
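The handshake can be sketched as plain Python dicts. The method name and top-level fields follow the MCP specification; the capability payloads, client name, and server name are illustrative:

```python
import json

# Illustrative initialize request a Client might send.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-11-25",
        "capabilities": {
            "sampling": {},                  # Client can serve LLM-inference requests
            "roots": {"listChanged": True},  # Client will notify on roots changes
        },
        "clientInfo": {"name": "example-client", "version": "1.0.0"},
    },
}

# Illustrative Server response declaring what it provides.
initialize_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "protocolVersion": "2025-11-25",
        "capabilities": {
            "tools": {"listChanged": True},  # Server will notify when its tool list changes
            "resources": {},
        },
        "serverInfo": {"name": "example-server", "version": "1.0.0"},
    },
}

print(json.dumps(initialize_request, indent=2))
```

After this exchange, each side knows exactly which methods the other will accept; anything not declared here is off-limits for the rest of the session.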
Phase 2: Operation
After initialization completes, the normal operation phase begins. The Client can discover and invoke Server capabilities (tools/list, tools/call, resources/read, and so on). If the Server declared listChanged: true for a capability, it can notify the Client when that list changes, and the Client then calls tools/list to get the latest list.
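A minimal sketch of the operation-phase messages. The method names (tools/list, tools/call, notifications/tools/list_changed) are from the MCP specification; the tool name and arguments are hypothetical:

```python
# Client asks the Server what tools it offers.
list_request = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# Client invokes one of the returned tools ("search_products" is hypothetical).
call_request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "search_products",
        "arguments": {"query": "red shoes"},
    },
}

# Server-initiated notification: no "id" field, because notifications
# expect no response. Sent only if the Server declared listChanged: true.
list_changed = {"jsonrpc": "2.0", "method": "notifications/tools/list_changed"}
```

On receiving list_changed, the Client simply re-sends a tools/list request rather than trying to diff the change.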
Phase 3: Shutdown
Either side can initiate connection closure. After shutdown, all pending requests should be cancelled and resources should be cleaned up.
1.4 Complete Primitive Overview
MCP Primitives are divided into Server-side and Client-side categories:
Server-Side Primitives (Server provides to Client)
| Primitive | Controlled By | Discovery Method | Execution Method | Purpose |
|---|---|---|---|---|
| Tools | Model | tools/list | tools/call | Executable functions (query databases, call APIs, etc.) |
| Resources | Application | resources/list | resources/read | Data sources (files, records, API responses) |
| Prompts | User | prompts/list | prompts/get | Reusable interaction templates |
- Model-controlled (Tools): The AI model autonomously decides when to call which tool
- Application-controlled (Resources): The Host application decides when to read which resources to provide to the model
- User-controlled (Prompts): The user actively selects which prompt template to use
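What a Server returns from tools/list can be sketched as follows. The field names (name, description, inputSchema) follow the MCP specification; the check_order tool itself is a hypothetical commerce example:

```python
# Sketch of a single tool definition as it might appear in a tools/list result.
# inputSchema is a standard JSON Schema describing the tool's arguments.
tool = {
    "name": "check_order",
    "description": "Look up the status of an order by its ID.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "order_id": {
                "type": "string",
                "description": "Order identifier",
            },
        },
        "required": ["order_id"],
    },
}
```

The model reads the description and inputSchema to decide when and how to call the tool, which is what "model-controlled" means in the table above.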
Client-Side Primitives (Client provides to Server)
| Primitive | Method | Purpose |
|---|---|---|
| Sampling | sampling/createMessage | Server requests LLM inference from the Host. Supports model preference settings: hints (model suggestions), costPriority / speedPriority / intelligencePriority (priority trade-offs) |
| Roots | roots/list | Server queries the Client’s filesystem boundaries. Returns a list of file:// URIs indicating which directories the Server can access |
| Elicitation | elicitation/create | Server requests the Client to ask the user for information. Suitable for scenarios requiring user confirmation or supplementary input |
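A sampling request flows in the reverse direction, from Server to Client. The sketch below uses field names from the MCP specification; the message text, model hint, and priority values are illustrative:

```python
# Illustrative sampling/createMessage request a Server might send to the Client.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 10,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {"type": "text", "text": "Summarize this error log."},
            }
        ],
        "modelPreferences": {
            "hints": [{"name": "claude-3"}],  # a suggestion, not a guarantee
            "costPriority": 0.3,              # priorities are 0.0-1.0 trade-offs
            "speedPriority": 0.2,
            "intelligencePriority": 0.8,
        },
        "maxTokens": 500,
    },
}
```

The Host remains in control: it may rewrite the prompt, pick a different model than hinted, or ask the user before forwarding anything to an LLM.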
Content Types
Tool results, resource data, and other outputs use a unified content type system:
| Type | Description | Example |
|---|---|---|
| text | Plain text | JSON data, descriptive text |
| image | Image | Base64-encoded or URI reference |
| audio | Audio | Voice data (Base64-encoded) |
| resource_link | Resource link | Reference to an MCP resource URI |
| embedded_resource | Embedded resource | Inline resource data |
All content types support an optional annotations metadata field for marking audience, priority, and other information.
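Putting the two ideas together, a tool result might mix content types and annotate them. The type and annotation field names follow the MCP specification; the payload values and the file URI are hypothetical:

```python
# Sketch of a tool-result content list combining a text item and a
# resource link, with annotations marking audience and priority.
result_content = [
    {
        "type": "text",
        "text": '{"status": "ok", "items": 3}',
        "annotations": {
            "audience": ["assistant"],  # intended reader of this item
            "priority": 0.8,            # relative importance, 0.0-1.0
        },
    },
    {
        "type": "resource_link",
        "uri": "file:///logs/latest.txt",  # hypothetical resource URI
        "name": "latest-log",
    },
]
```

The Host can use the annotations to decide which items to show the user directly and which to pass only to the model.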
1.5 Experimental Features
Tasks
Tasks is an experimental feature that provides a wrapper for long-running operations. When tool call results cannot be returned immediately (e.g., asynchronous processing, long-running computations), the Task mechanism enables deferred delivery.
1.6 What MCP Does Not Do
MCP focuses on context exchange and does not cover:
- How AI models process context (that is the AI application's concern)
- AI model selection and configuration
- User interface design
- Business logic and workflow orchestration
Next Chapter: Data Layer Protocol — JSON-RPC 2.0 message format and capability negotiation in detail