What is MCP (Model Context Protocol), the protocol everyone is using in Cursor? A long read worth bookmarking!

Explore MCP, a new hot topic in the AI field, and learn how it changes AI application development.
Core content:
1. The origin and basic concepts of MCP
2. The architecture and standardization benefits of MCP
3. The impact of MCP on the field of AI development and future prospects
Hello everyone! Today, let’s take a deep dive into the Model Context Protocol (MCP), which has set off a craze in the field of AI. Anthropic released MCP in November 2024, and the developer community initially responded positively, but few people realized its full potential at that time. In the blink of an eye, it is March 2025, and MCP has suddenly become the hottest topic in the field of AI.
This shift became especially evident when popular AI coding tools such as Cursor, Cline, and Goose added official MCP support. As more client applications adopted it, server-side integrations became more valuable and its influence expanded.
Yet, despite all the attention, questions remain: “What exactly is MCP? Should I care? Will it really be the next big breakthrough, or just another AI hype?”
In this article, I'll demystify MCP, explain its purpose, and show why it matters.
What is MCP?
To make it clearer: MCP is neither a framework like LangChain nor a tool; it is a protocol, similar to HTTP for networking or SMTP for messaging.
A more relevant example is LSP (Language Server Protocol), which standardizes adding programming language support in the development tool ecosystem. Similarly, MCP standardizes integrating additional context and tools into the AI application ecosystem.
It provides common rules that allow any client to communicate with any server, regardless of who built those components, laying the foundation for a diverse and interoperable AI ecosystem. Anthropic defines it as being similar to a USB-C port for an intelligent system. It standardizes the connection between AI applications, large language models (LLMs), and external data sources (databases, Gmail, Slack, etc.).
In this analogy, the AI application is the device, the external service is the peripheral, and MCP is the USB-C port. No matter who makes the device or the peripheral, they work together seamlessly.
MCP defines how clients should communicate with servers, and how servers should handle tools (APIs, functions, etc.) and resources (read-only files such as logs and database records). We will cover these in detail later.
Why should we care about MCP?
Benefits of Standardization
Unified Integration : Connect any LLM to any tool using a single protocol.
Reduced Development Time : Standardized patterns for resource access and tool execution.
Clear Separation of Responsibilities : Data access (resources) and computation (tools) are kept distinct.
Consistent Discovery Mechanism : A unified way to find available functionality (tools, resources, prompts, roots, sampling).
Cross-Platform Compatibility : Tools built for one system can be used on other systems.
Is MCP revolutionary?
The answer is: No.
You can work without MCP. It's not revolutionary, but it brings standardization to the otherwise chaotic world of agent development. If your application conforms to the MCP client standard, you can connect to any server that conforms to the MCP server standard. Without the protocol, as a client developer you would have to build a custom integration for every server, and others could not develop for your platform. The same is true for server developers.
For example, in Cursor, you can connect to any MCP server if it follows the relevant protocol.
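In practice, Cursor reads MCP server definitions from an `mcp.json` file with an `mcpServers` map. The sketch below is illustrative: the server name and command line are hypothetical placeholders, and the exact file location depends on your setup.

```json
{
  "mcpServers": {
    "my-local-server": {
      "command": "node",
      "args": ["path/to/server.js"]
    }
  }
}
```

Any server that speaks the protocol can be listed this way, regardless of who built it.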
At this point, you should have a rough idea of what MCP is for. Now, let's understand it more thoroughly.
MCP Architecture
The Model Context Protocol consists of several key components working together. Here is an architecture diagram posted by Matt Pocock on Twitter.
The complete MCP architecture consists of four parts:
Host : Coordinates the entire system and manages LLM interactions.
Clients : Connect hosts to servers in a 1:1 relationship.
Servers : Provide specific functionality through tools, resources, and prompts.
Base Protocol : Defines how all these components communicate.
In the diagram above, the client and host are combined; for more clarity, we will separate them.
1. Host
Hosts are LLM applications that expect to get data from servers. Hosts can be IDEs, chatbots, or any LLM application. They are responsible for:
Initializing and managing multiple clients.
Managing the client-server connection lifecycle.
Handling user authorization decisions.
Aggregating context across clients.
For example, Claude Desktop, Cursor IDE, Windsurf IDE, etc.
2. Client
Each client has the following key responsibilities:
Dedicated Connections : Each client maintains a one-to-one, stateful connection with a single server. This dedicated relationship ensures clear communication boundaries and security isolation.
Message Routing : The client handles all bidirectional communication, routing requests, responses, and notifications between the host and the server it is connected to. We will see a small example combining Linear and Slack in the Cursor IDE.
Capability Management : The client keeps track of what the server it is connected to can do, maintaining information about available tools, resources (context data), and prompt templates.
Protocol Negotiation : During initialization, the client negotiates the protocol version and capabilities, ensuring compatibility between the host and the server.
Subscription Management : The client maintains subscriptions to server resources and handles notification events when those resources change.
3. Server
The server is the basic building block for enriching LLM with external data and context. The key primitives of the server include:
Tools : Executable functions that allow the LLM to interact with external applications. Tools work much like function calls in traditional LLM tool use. A tool can be a POST request to an API endpoint; for example, a tool defined as LIST_FILES with a directory name as an argument would fetch the files in that directory and send them back to the client. Tools can also be API calls to external services such as Gmail, Slack, or Notion.
Resources : Any text files, log files, database schemas, file contents, or Git history. They provide additional context for the LLM.
Prompt Templates : Predefined templates or instructions used to guide interactions with the language model.
Tools are controlled by the model, while resources and prompts are controlled by the user. The model can automatically discover and invoke tools based on a given context.
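As a rough illustration of the tool primitive, a server essentially keeps a registry of named, described functions that a client can list and call. The following is a plain-Python sketch under that assumption, not the official MCP SDK; the `list_files` tool and its fake return value are purely illustrative.

```python
# Minimal tool-registry sketch (illustrative, not the official MCP SDK).
TOOLS = {}

def tool(name, description):
    """Register a function as a callable tool with a name and description."""
    def decorator(fn):
        TOOLS[name] = {"description": description, "handler": fn}
        return fn
    return decorator

@tool("list_files", "List files in a directory")
def list_files(directory):
    # A real server would read the filesystem; here we fake it for clarity.
    return ["a.txt", "b.txt"]

def list_tools():
    """What a server conceptually returns for tools/list: names + descriptions."""
    return [{"name": n, "description": t["description"]} for n, t in TOOLS.items()]

def call_tool(name, arguments):
    """What a server conceptually does for tools/call."""
    return TOOLS[name]["handler"](**arguments)
```

Calling `call_tool("list_files", {"directory": "/tmp"})` returns the fake file list; the real protocol wraps this discovery and invocation in JSON-RPC messages, as described below.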
4. Base Protocol
This protocol forms the basis of the Model Context Protocol (MCP) architecture. It defines how the different components (host, client and server) communicate. For more in-depth information, please refer to the official MCP specification.
Protocol Layer
The protocol consists of several key layers:
Protocol Messages : The core JSON-RPC message types.
Lifecycle Management : Client-server connection initialization, capability negotiation, and session control.
Transport Mechanisms : How the client and server exchange messages. There are usually two: local servers use stdio, and hosted servers use SSE (Server-Sent Events).
Server Features : Resources, prompts, and tools exposed by the server.
Client Features : Sampling and the roots list provided by the client.
Among the five parts mentioned above, the basic protocol, namely JSON-RPC message types and lifecycle management, is crucial for every MCP implementation. Other components can be implemented according to the needs of specific applications.
Key parts of the protocol
1. Messages
The core of MCP uses JSON-RPC 2.0 as its message format, providing a standardized way for communication between clients and servers. The base protocol defines three basic message types:
Requests : Messages sent from a client to a server or from a server to a client to initiate an operation. Example:
```typescript
{
  jsonrpc: "2.0";
  id: string | number;
  method: string;
  params?: {
    [key: string]: unknown;
  };
}
```
Responses : Reply messages to requests.
```typescript
{
  jsonrpc: "2.0";
  id: string | number;
  result?: {
    [key: string]: unknown;
  };
  error?: {
    code: number;
    message: string;
    data?: unknown;
  };
}
```
Notifications : One-way messages that do not require a response.
```typescript
{
  jsonrpc: "2.0";
  method: string;
  params?: {
    [key: string]: unknown;
  };
}
```
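These three shapes can be exercised with nothing but the standard library. Here is a hedged sketch of building MCP-style JSON-RPC 2.0 messages in Python; the method names used are illustrative.

```python
import json

def request(id, method, params=None):
    """Build a JSON-RPC 2.0 request: has an id, expects a response."""
    msg = {"jsonrpc": "2.0", "id": id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

def response(id, result=None, error=None):
    """Build a reply to a request: carries either a result or an error."""
    msg = {"jsonrpc": "2.0", "id": id}
    if error is not None:
        msg["error"] = error  # e.g. {"code": -32601, "message": "Method not found"}
    else:
        msg["result"] = result or {}
    return msg

def notification(method, params=None):
    """Build a one-way notification: no id, so no response is expected."""
    msg = {"jsonrpc": "2.0", "method": method}
    if params is not None:
        msg["params"] = params
    return msg

print(json.dumps(request(1, "tools/list")))
# → {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
```

The only structural difference between a request and a notification is the `id` field, which is what lets the receiver know whether a reply is owed.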
2. Transport Mechanisms
Depending on deployment requirements, the protocol can be implemented on different transport layers:
stdio : Communicate via standard input/output streams.
The client and server exchange JSON messages through stdin and stdout.
Simplifies local process integration and debugging.
Perfect for local servers, such as file or Git servers.
HTTP using Server-Sent Events (SSE) :
Two-way communication is established over HTTP.
The server maintains an SSE connection to push messages to the client.
The client sends commands via standard HTTP POST requests.
Better suited for hosted servers.
Custom transports : Implementations can create additional transport mechanisms as needed.
3. Lifecycle Management
The base protocol implements a structured lifecycle for a connection between a client and a server:
Initialization Phase :
The client and server negotiate the protocol version.
They exchange capability information (the client advertises features such as roots and sampling; the server advertises its tools, resources, and prompts).
They share implementation details (name and version).
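Concretely, the client opens the handshake with an `initialize` request along these lines. This is a hedged sketch based on the MCP specification; the version string, capability fields, and client name are illustrative placeholders.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": { "roots": {}, "sampling": {} },
    "clientInfo": { "name": "ExampleClient", "version": "1.0.0" }
  }
}
```

The server responds with its own protocol version, capabilities, and server info, after which the client signals readiness with a `notifications/initialized` notification and normal operation begins.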
Operation Phase : Normal protocol communication takes place, and both parties stick to the capabilities they negotiated.
Shutdown Phase : The connection is terminated gracefully.
MCP Interaction Lifecycle: Cursor IDE and Linear/Slack Examples
Let's walk through what we have learned so far with a simple example where:
The host is Cursor IDE. The servers are Linear and Slack.
The following is a detailed workflow of the MCP interaction lifecycle:
Initialization Phase
Connection Establishment : When a user activates the Linear integration in Cursor IDE, the IDE initiates a connection to the Linear MCP server, usually over stdio or SSE.
Capability Negotiation : Cursor sends an initialization request containing its supported capabilities. The Linear server replies with its own (available resources, tools, protocol version). Cursor checks compatibility to ensure that both parties support the necessary protocol features.
Feature Discovery : Cursor requests the available tools (tools/list). Linear replies with tools such as create_ticket, assign_ticket, and add_comment.
Ready Notification : Cursor sends an initialized notification indicating that it is ready to begin normal operation.
Operation Phase
Tool Execution : The user tells Cursor, "Create a bug ticket for the login page crash." The LLM in Cursor determines that a tool is needed and sends a tools/call request for create_ticket with the appropriate parameters. The Linear server creates the ticket and returns the result, which Cursor displays to the user.
Cross-Service Integration : The user says, "Notify the team on Slack about this new ticket." Cursor connects to the Slack MCP server (a separate connection with its own lifecycle) and sends a tools/call request. The Slack server posts the message and returns a success result, and Cursor confirms that the notification was sent.
Maintenance Phase
Health Checks : Cursor periodically sends ping requests to make sure the connection is still alive; the Linear server replies to confirm availability.
Error Recovery : If the connection fails unexpectedly, Cursor retries with exponential backoff, and after a successful reconnect the initialization lifecycle begins again.
Termination Phase
Graceful Shutdown : When the user closes the workspace or disables the integration, Cursor sends a close request to Linear, Linear acknowledges it, Cursor sends an exit notification, and the Linear server releases the resources associated with the session.
This standardized lifecycle ensures reliable, predictable interactions between any MCP host and server, regardless of their specific implementation. Whether Cursor is connecting to Linear for ticket management or Slack for messaging, the same protocol patterns apply, making integrations consistent and interoperable.
How is MCP different from LangChain or other frameworks?
LangChain is a framework, while MCP is a protocol. When you use a framework, you risk vendor lock-in; with a protocol, as long as you follow its guidelines, you are free to swap components. Even LangChain could adopt MCP as a standard for building stateful agents.
How is an MCP server different from ordinary tool calling? Isn't it simpler to call the tool directly?
Yes, but the protocol ensures that developers define and call tools in a uniform way, which makes it easier to develop both clients (host applications) and servers (integrations).
Managing LLM context isn't that hard, so why do we need a protocol?
Again, the goal is to reduce development overhead as much as possible. For example, Cursor developers only need to worry about the client implementation, and Linear developers only need to worry about the server implementation. As long as both sides conform to the base protocol, everything works.
Is it necessary? Not necessarily, but as the number of MCP clients grows, demand for MCP servers will increase dramatically.
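The host-to-server exchange in the ticket example boils down to a handful of JSON-RPC messages. Here is a self-contained Python sketch of that flow; the "server" is simulated in-process, and the create_ticket tool and its fields are illustrative, not Linear's real API.

```python
def fake_linear_server(msg):
    """Simulate a Linear-like MCP server answering two methods."""
    if msg["method"] == "tools/list":
        result = {"tools": [{"name": "create_ticket",
                             "description": "Create a ticket in the tracker"}]}
    elif msg["method"] == "tools/call" and msg["params"]["name"] == "create_ticket":
        args = msg["params"]["arguments"]
        result = {"content": [{"type": "text",
                               "text": f"Created ticket: {args['title']}"}]}
    else:
        return {"jsonrpc": "2.0", "id": msg["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": msg["id"], "result": result}

# Client side: discover the tools, then call one.
listing = fake_linear_server({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
call = fake_linear_server({
    "jsonrpc": "2.0", "id": 2, "method": "tools/call",
    "params": {"name": "create_ticket",
               "arguments": {"title": "Login page crash"}},
})
print(call["result"]["content"][0]["text"])  # → Created ticket: Login page crash
```

Swapping the fake function for a stdio or SSE transport changes how the messages travel, but not their shape, which is exactly the point of the protocol.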
Limitations of MCP
1. Authentication
One of the current limitations of MCP is the lack of a standardized authentication mechanism. The protocol itself does not specify how authentication should be handled, leaving implementers to create their own solutions. This can lead to inconsistent security practices between different MCP servers and clients.
2. Lack of reliable servers
As a relatively new protocol, the MCP ecosystem is still developing. There are relatively few servers, and many applications do not have official MCP servers.
Composio MCP : Managed MCP server with built-in authentication
Composio MCP solves the core challenges of authentication and ecosystem maturity. Our managed server provides built-in authentication support for over 100 applications, automatically handling OAuth, API keys, and basic authentication flows. We have created pre-built MCP servers for services such as Linear, Slack, GitHub, and Google Workspace, so you can focus on building AI experiences without worrying about integration details.
This eliminates the challenges of maintaining your own MCP server and handling complex authentication flows. It is suitable for integrating applications with access-restricted resources.