My understanding of MCP

Written by
Silas Grey
Updated on: July 11, 2025
Recommendation

Explore the mystery of the MCP plug-in protocol and get a glimpse into the new world of Agent application expansion.

Core content:
1. Preliminary understanding and application scenarios of the MCP protocol
2. Analysis of the relationship between the MCP and LLM tool usage protocols
3. Actual code examples to reveal the combination of MCP calls and tool calls

Yang Fangxian
Founder of 53AI/Most Valuable Expert of Tencent Cloud (TVP)

I didn't pay much attention to MCP before, but I recently discussed it with someone and did some research on it. I haven't gone very deep or read much of the source code, so the views in this article may be overturned in the future. But as long as this article is still up unchanged, it means my views have not been overturned yet.


MCP is a plug-in protocol for Agent applications, and the plug-ins are primarily tool-oriented. They do not replace the Agent's core functions and settings; rather, they add new tools to it.


It is not essentially a replacement for the LLM tool-use protocol. In the official demo, MCP tool definitions are converted into LLM tool calls and passed to the model. The following code comes from https://modelcontextprotocol.io/quickstart/client

```python
async def process_query(self, query: str) -> str:
    """Process a query using Claude and available tools"""
    messages = [
        {
            "role": "user",
            "content": query
        }
    ]

    response = await self.session.list_tools()
    available_tools = [{
        "name": tool.name,
        "description": tool.description,
        "input_schema": tool.inputSchema
    } for tool in response.tools]

    # Initial Claude API call
    response = self.anthropic.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1000,
        messages=messages,
        tools=available_tools
    )

    # Process response and handle tool calls
    final_text = []
    assistant_message_content = []
    for content in response.content:
        if content.type == 'text':
            final_text.append(content.text)
            assistant_message_content.append(content)
        elif content.type == 'tool_use':
            tool_name = content.name
            tool_args = content.input

            # Execute tool call
            result = await self.session.call_tool(tool_name, tool_args)
            final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")

            assistant_message_content.append(content)
            messages.append({
                "role": "assistant",
                "content": assistant_message_content
            })
            messages.append({
                "role": "user",
                "content": [
                    {
                        "type": "tool_result",
                        "tool_use_id": content.id,
                        "content": result.content
                    }
                ]
            })

            # Get next response from Claude
            response = self.anthropic.messages.create(
                model="claude-3-5-sonnet-20241022",
                max_tokens=1000,
                messages=messages,
                tools=available_tools
            )

            final_text.append(response.content[0].text)

    return "\n".join(final_text)
```

This code is very clear: combined with the source at the link above, it reads like a typical plugin pattern.


The naming in the MCP ecosystem is quite confusing: the MCP Server is the tool provider, while the MCP Client is the Agent application.
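To make the role reversal concrete, here is a minimal sketch with plain Python stand-ins (these classes are illustrative mocks, not the real MCP SDK): the "server" merely hosts and executes tools, while the "client" is the Agent application that discovers and drives them.

```python
class MCPServer:
    """The tool provider: hosts tools and executes them on request."""

    def __init__(self):
        self._tools = {}

    def register(self, name, description, fn):
        self._tools[name] = {"description": description, "fn": fn}

    def list_tools(self):
        # Advertise available tools (name + description) to the client.
        return [{"name": n, "description": t["description"]}
                for n, t in self._tools.items()]

    def call_tool(self, name, args):
        return self._tools[name]["fn"](**args)


class MCPClient:
    """The Agent application: discovers tools and decides when to call them."""

    def __init__(self, server):
        self.server = server

    def available_tools(self):
        return self.server.list_tools()

    def use(self, name, **args):
        return self.server.call_tool(name, args)


server = MCPServer()
server.register("get_weather", "Return the weather for a city",
                lambda city: f"Sunny in {city}")

client = MCPClient(server)
tool_names = [t["name"] for t in client.available_tools()]  # ['get_weather']
answer = client.use("get_weather", city="Paris")            # 'Sunny in Paris'
```

Note that the component named "Server" never initiates anything; all decisions live in the "Client", which is exactly why the naming feels backwards.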


MCP was originally designed so that Claude's app could interact with resources and context on the user's local PC. The protocol was later gradually extended to cross-machine communication.


The MCP protocol is not actually bound to an LLM. After obtaining the tool list, a client could call a tool by hard-coding its function name, with no LLM involved at all. In practice, however, you want to avoid hard-coding specific plug-ins, which means selecting tools from their descriptions; that in turn requires the semantic understanding, reasoning ability, and knowledge of an LLM.
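The two modes can be contrasted in a toy sketch. Here `select_tool` is a naive keyword matcher standing in for the LLM's reasoning; the tool names, descriptions, and return values are all made up for illustration.

```python
# A toy tool registry, as a client might build from list_tools().
TOOLS = {
    "get_weather": {
        "description": "Look up the current weather for a city",
        "fn": lambda city: f"22C in {city}",
    },
    "get_time": {
        "description": "Return the current time in a time zone",
        "fn": lambda tz: f"12:00 in {tz}",
    },
}

# Mode 1 -- hard-coded: the client knows the tool name in advance.
# No LLM is involved anywhere.
hard_coded = TOOLS["get_weather"]["fn"]("Berlin")

# Mode 2 -- description-driven: pick the tool whose description best
# fits the query. A real system hands the descriptions to an LLM;
# here we fake the reasoning with naive keyword overlap.
def select_tool(query: str) -> str:
    def overlap(name):
        desc_words = set(TOOLS[name]["description"].lower().split())
        return len(desc_words & set(query.lower().split()))
    return max(TOOLS, key=overlap)

chosen = select_tool("what is the weather like today")  # 'get_weather'
dynamic = TOOLS[chosen]["fn"]("Berlin")
```

Mode 1 works but couples the client to one specific plug-in; Mode 2 is what makes new tools usable without code changes, and it is precisely the step where an LLM is needed.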


This can be seen as dependency injection from traditional software development, except that automatically adapting to the injected tools requires the capabilities of an LLM to work well.
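The parallel can be sketched as follows (a hypothetical `Agent` class, not any real framework): the tools are injected from outside, DI-style, but without an LLM the caller must still know which tool to invoke. The injection is automatic; the adaptation is not.

```python
from typing import Callable

class Agent:
    def __init__(self, tools: dict[str, Callable]):
        # Dependency injection: the agent receives its tools from
        # outside instead of constructing them itself.
        self.tools = tools

    def run(self, tool_name: str, *args):
        # Without an LLM, the caller still has to supply the tool
        # name, so the wiring is injected but not auto-adapted.
        return self.tools[tool_name](*args)

agent = Agent({"shout": lambda s: s.upper()})
result = agent.run("shout", "hello")  # 'HELLO'
```

Closing the remaining gap, choosing `tool_name` from the task description alone, is exactly the part MCP delegates to the LLM.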