MCP Protocol Explained: Understanding the Era-Defining Model Context Protocol in One Article

Learn about Anthropic's open-source MCP protocol and master the new standard in the AI field.
Core content:
1. Basic concepts and goals of the MCP protocol
2. Detailed analysis of the client-server architecture
3. Application examples of resources, prompts, and tools in MCP
1 Basic Concepts
MCP (Model Context Protocol) is an open-source protocol launched by Anthropic. It aims to integrate large language models (LLMs) seamlessly with external data sources and tools, and to establish a secure, two-way link between large models and data sources.
The goal is to become the "HTTP protocol" in the AI field and promote the standardization and decentralization of LLM applications.
Think of it as a USB-C port for AI applications: just as USB-C provides a standardized way to connect devices to a variety of peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
1.1 Architecture
MCP follows a client-server architecture where:
The host is the LLM application (Claude for Desktop or other AI tool) that initiates the connection.
The client maintains a 1:1 connection with the server inside the host application and is responsible for protocol communication.
The server is accessed by the client and provides context, tools, and prompts to it. At the same time, because the MCP server controls its own resources, it never has to hand its API keys to the MCP host, which makes the design more secure.
1.2 Resources
A resource represents any type of data that an MCP server wants to make available to clients. This can include: file contents, database records, API responses, live system data, screenshots and images, log files, and much more. Each resource is identified by a unique URI and can contain text or binary data.
{
  uri: string;          // Unique identifier for the resource
  name: string;         // Human-readable name
  description?: string; // Optional description
  mimeType?: string;    // Optional MIME type
}
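To make this concrete, here is a minimal sketch of exposing resources with the Python SDK's FastMCP class (the URIs, server name, and returned data are made up for illustration):

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-resources")

# A static resource identified by a fixed (hypothetical) URI
@mcp.resource("config://app")
def get_app_config() -> str:
    """Return application configuration as text."""
    return "log_level=INFO"

# URI templates allow parameterized resources
@mcp.resource("users://{user_id}/profile")
def get_user_profile(user_id: str) -> str:
    """Return profile data for the given user (stubbed here)."""
    return f"profile for user {user_id}"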
1.3 Prompts
Prompts in MCP are predefined templates that can: accept dynamic arguments, include context from resources, chain multiple interactions, guide specific workflows, and surface as UI elements (like slash commands).
{
  name: string;             // Unique identifier for the prompt
  description?: string;     // Human-readable description
  arguments?: [             // Optional list of arguments
    {
      name: string;         // Argument identifier
      description?: string; // Argument description
      required?: boolean;   // Whether argument is required
    }
  ]
}
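A minimal sketch of registering a prompt with FastMCP (the prompt name and template text are illustrative):

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-prompts")

# A prompt template with one dynamic argument; hosts can surface it
# (e.g. as a slash command) after discovering it via prompts/list
@mcp.prompt()
def review_code(code: str) -> str:
    """Ask the model to review a piece of code."""
    return f"Please review this code:\n\n{code}"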
1.4 Tools
Tools in MCP allow the server to expose executable functions that clients can call and that the LLM can use to perform operations. Key aspects of tools include:
Discovery: clients list the available tools through the tools/list endpoint.
Invocation: clients call a tool through the tools/call endpoint; the server performs the requested operation and returns the result.
Flexibility: tools range from simple calculations to complex API interactions.
Like resources, tools are identified by a unique name and can contain instructions to guide their use. However, unlike resources, tools represent dynamic operations that can modify state or interact with external systems.
{
  name: string;         // Unique identifier for the tool
  description?: string; // Human-readable description
  inputSchema: {        // JSON Schema for the tool's parameters
    type: "object",
    properties: { ... } // Tool-specific parameters
  }
}
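On the client side of the Python SDK, these two endpoints surface as ClientSession methods. A minimal sketch, borrowing the PoE2 tool from section 3.1 as the example name:

from mcp import ClientSession

async def demo(session: ClientSession) -> None:
    # tools/list: discover what the server offers
    tools = await session.list_tools()
    for tool in tools.tools:
        print(tool.name, "-", tool.description)

    # tools/call: invoke a tool by name with JSON arguments
    result = await session.call_tool("find_poe2_hotfix", arguments={})
    print(result.content)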
1.5 Sampling
Sampling is a powerful MCP feature that allows the server to request LLM completions through the client, enabling complex agentic behaviors while maintaining security and privacy. This human-in-the-loop design ensures that users control what the LLM sees and generates. The sampling process follows these steps:
1. The server sends a sampling/createMessage request to the client.
2. The client reviews the request and can modify it.
3. The client samples a completion from an LLM.
4. The client reviews the completion.
5. The client returns the result to the server.
{
  messages: [
    {
      role: "user" | "assistant",
      content: {
        type: "text" | "image",
        // For text:
        text?: string,
        // For images:
        data?: string,      // base64 encoded
        mimeType?: string
      }
    }
  ],
  modelPreferences?: {
    hints?: [{
      name?: string         // Suggested model name/family
    }],
    costPriority?: number,  // 0-1, importance of minimizing cost
    speedPriority?: number, // 0-1, importance of low latency
    intelligencePriority?: number // 0-1, importance of capabilities
  },
  systemPrompt?: string,
  includeContext?: "none" | "thisServer" | "allServers",
  temperature?: number,
  maxTokens: number,
  stopSequences?: string[],
  metadata?: Record<string, unknown>
}
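For concreteness, here is a hypothetical request body that fits this schema, written as a Python dict (the model hint, prompt text, and numeric values are all illustrative):

sampling_request = {
    "messages": [
        {
            "role": "user",
            "content": {
                "type": "text",
                "text": "Summarize the latest PoE2 hotfix notes in two sentences.",
            },
        }
    ],
    "modelPreferences": {
        "hints": [{"name": "claude-3-5-sonnet"}],  # a suggestion; the client picks the model
        "costPriority": 0.3,
        "speedPriority": 0.2,
        "intelligencePriority": 0.8,
    },
    "systemPrompt": "You are a concise patch-notes summarizer.",
    "includeContext": "thisServer",
    "temperature": 0.7,
    "maxTokens": 200,
}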
2 Design Goals
Since LLMs cannot directly access real-time data sources (such as internal enterprise databases, live documents, online services, etc.), developers have traditionally had to build a dedicated adapter or plug-in for each application scenario, which is time-consuming, labor-intensive, and hard to scale. MCP addresses this in four ways:
Standardization: MCP hopes to define a standardized protocol so that developers can quickly connect models and data sources without repeated development, thereby improving the versatility and implementation efficiency of the model and reducing the complexity of connecting the model with diverse data sources.
Flexibility: because integrations are written against the protocol rather than a specific model, MCP makes it easy to swap out the underlying LLM without rebuilding the connections to data and tools.
Openness: As an open protocol, it allows any developer to create an MCP server for their product. It helps to quickly expand the ecosystem, form a network effect similar to HTTP and REST API, and promote the integration of models and application scenarios.
Security: The protocol has a strict permission control mechanism built in, and the owner of the data source always has access rights. The model needs to be explicitly authorized when obtaining data to avoid data leakage and abuse.
3 Tool Calling Process and Examples
The MCP tool calling process is as follows:
User sends a question -> LLM analyzes the available tools -> client executes the selected tool through the MCP server -> the result is sent back to the LLM -> LLM answers based on the tool result and the user's question.
(The process resembles retrieval-augmented generation, with the retrieval step replaced by a tool call.)
Two examples follow: a tool we built as a first pass against our own business needs, and the example given in the official GitHub repository.
3.1 Build a tool that fetches the latest Path of Exile 2 patch via MCP
We found that RAG alone often cannot obtain real-time information, so, following the official workflow, we built a first-pass MCP tool around a simple crawler.
Server:
1. First, import the FastMCP class (its decorators auto-generate tool definitions from type hints and docstrings, which makes MCP tools easy to create and maintain; the effect is shown later) and define the site we need real-time information from (here, the official PoE2 forum).
from typing import Any
from mcp.server.fastmcp import FastMCP

# Initialize FastMCP server
mcp = FastMCP("Path of Exile 2 hotfix")
target_url = "https://www.pathofexile.com/forum/view-forum/2212"
2. The core of the tool is the helper function (this is where the functionality you actually want lives; below is just a simple crawler as an example).
import httpx
from bs4 import BeautifulSoup

async def poe2_hotfix(url: str):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                      '(KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'
    }
    async with httpx.AsyncClient() as client:
        try:
            response = await client.get(url, headers=headers, timeout=30.0)
            soup = BeautifulSoup(response.text, 'html.parser')
            # Find the table containing the post content
            table = soup.find('table')
            result_text = ""
            if table:
                for row in table.find_all('tr'):
                    cells = row.find_all('td')
                    if cells:
                        for cell in cells:
                            result_text += cell.get_text(strip=True) + '\n'
                        result_text += '-' * 50 + '\n'  # separator
            else:
                print('No table element found')
            return result_text
        except Exception:
            return None
3. Register the tool functions on the mcp instance (these are the execution handlers responsible for actually running each tool's logic); each tool function corresponds to one specific tool.
@mcp.tool()
async def find_poe2_hotfix() -> str:
    hotfix_data = await poe2_hotfix(target_url)
    if not hotfix_data:
        return "Unable to find any hotfix on the official forum"
    return hotfix_data
4. Finally, initialize and run the server. At this point, the server work is complete.
if __name__ == "__main__":
    # Initialize and run the server
    mcp.run(transport='stdio')
5. Test through an MCP host. Here we build the LLM client described in the official documentation (you could also use Claude for Desktop; you would only need to add the server above to its configuration, which amounts to telling the host that an MCP server for PoE2 patch-version queries exists).
Quickstart - For Client Developers(https://modelcontextprotocol.io/quickstart/client)
It implements client initialization, server connection, query processing, an interactive interface, and resource management.
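A condensed sketch of what that quickstart client does, using the Python SDK's stdio transport (the chat loop and error handling are omitted; the server script path is whatever you pass on the command line):

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def connect(server_script: str) -> None:
    # Launch the MCP server as a subprocess and talk to it over stdio
    params = StdioServerParameters(command="python", args=[server_script])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the server's tools so they can be offered to the LLM
            tools = await session.list_tools()
            print("Connected; tools:", [t.name for t in tools.tools])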
6. Testing
After installing the MCP dependencies, you can test it using the MCP Inspector from the command line:
pip install mcp
mcp dev server.py
You can see the newly created tool on the Tools page of the MCP Inspector, and after running it you can see the results the tool returns.
7. Effect display
Connect the client to the corresponding tool from the command line:
uv run client.py poe2hotfix.py
When asked "Who are you?", the LLM answers based on the server name passed to the FastMCP constructor at the top of the tool.
When asked about the latest patch, it accurately reports PoE2's latest patch version and release time.
And when asked irrelevant questions, the LLM declines to answer.
8. Further thoughts
Optimize the prompt and add more tools to support more complex functionality, for example using a better crawler and deep-crawling the posts behind each patch, so that the answer includes a patch's specific contents as well as the latest version number.
3.2 Build a simple chatbot - Official example
For the complete code, see: https://github.com/modelcontextprotocol/python-sdk/tree/main/examples/clients/simple-chatbot
Logical Flow
1. Tool Integration:
a. Tools are dynamically discovered from the MCP server
b. Tool descriptions are automatically included in the system prompt
c. Tool execution is handled via the standardized MCP protocol
2. Runtime process:
a. Receive user input
b. Send the input to the LLM along with descriptions of the available tools
c. Parse the LLM response: if it is a tool call → execute the tool and return the result; if it is a direct response → return it to the user
d. Send the tool result back to the LLM for interpretation
e. Present the final response to the user
Tool discovery: the client calls list_tools on each connected server.
all_tools = []
for server in self.servers:
    tools = await server.list_tools()
    all_tools.extend(tools)
The Tool class formats its metadata into a description the LLM can read:
def format_for_llm(self) -> str:
    """Format tool information for LLM.

    Returns:
        A formatted string describing the tool.
    """
    args_desc = []
    if "properties" in self.input_schema:
        for param_name, param_info in self.input_schema["properties"].items():
            arg_desc = (
                f"- {param_name}: {param_info.get('description', 'No description')}"
            )
            if param_name in self.input_schema.get("required", []):
                arg_desc += " (required)"
            args_desc.append(arg_desc)

    return f"""
Tool: {self.name}
Description: {self.description}
Arguments:
{chr(10).join(args_desc)}
"""
Chatbot system prompt:
system_message = (
    "You are a helpful assistant with access to these tools:\n\n"
    f"{tools_description}\n"
    "Choose the appropriate tool based on the user's question. "
    "If no tool is needed, reply directly.\n\n"
    "IMPORTANT: When you need to use a tool, you must ONLY respond with "
    "the exact JSON object format below, nothing else:\n"
    "{\n"
    '"tool": "tool-name",\n'
    '"arguments": {\n'
    '"argument-name": "value"\n'
    "}\n"
    "}\n\n"
    "After receiving a tool's response:\n"
    "1. Transform the raw data into a natural, conversational response\n"
    "2. Keep responses concise but informative\n"
    "3. Focus on the most relevant information\n"
    "4. Use appropriate context from the user's question\n"
    "5. Avoid simply repeating the raw data\n\n"
    "Please use only the tools that are explicitly defined above."
)
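The piece the excerpts above leave implicit is step c of the runtime process: parsing the LLM's reply and dispatching the tool call. A simplified sketch, assuming each server wrapper exposes an execute_tool coroutine (the official example's method names may differ in detail):

import json

async def process_llm_response(llm_response: str, servers) -> str:
    """If the LLM replied in the JSON tool-call format, run the tool;
    otherwise pass the reply through unchanged."""
    try:
        call = json.loads(llm_response)
    except json.JSONDecodeError:
        return llm_response  # direct answer, no tool needed
    if "tool" not in call:
        return llm_response
    for server in servers:
        tools = await server.list_tools()
        if any(tool.name == call["tool"] for tool in tools):
            # execute_tool is an assumed helper wrapping the tools/call endpoint
            result = await server.execute_tool(call["tool"], call.get("arguments", {}))
            return f"Tool execution result: {result}"
    return f"No server found with tool: {call['tool']}"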