Understand MCP in one article! How can large models use it to connect to the world and build smarter AI agents?

Written by
Silas Grey
Updated on: July 11, 2025

Explore a new trend in AI: how MCP will change the way large models call tools.

Core content:
1. What the MCP protocol is and its impact on the AI field
2. How MCP differs from traditional tool calling
3. How to develop your own MCP Server, with a practical example


Recently, MCP [1] (the Model Context Protocol) has become popular in the AI community. However, many people are still confused about what it is, as I was when I first encountered it.

Some people may ask: "Isn't this just tool calling for a large model?" Yes, MCP is essentially tool calling. The difference is that tool calls used to require developers to hand-write code to wire each tool into their application, whereas MCP introduces a standardized calling protocol.

This article provides a simple, accessible introduction to the MCP protocol, digs deeper through a series of short questions, and also explains how to develop your own MCP Server.

Unlike traditional methods, MCP defines a standardized protocol for calling applications. Anthropic's official metaphor is that it is the USB Type-C of the AI world, letting every tool that supports MCP connect to a large model in a plug-and-play manner. As long as an application exposes the corresponding interface, a large model can call it directly, with no manual adaptation by developers.

LangChain released the Agent Protocol [2] earlier, but it is mainly an interoperability protocol between agents and is relatively complex. MCP, by contrast, is becoming a de facto standard thanks to the popularity of programming tools such as Cursor and Cline, and of the multi-agent application Manus. So what exactly is it?

The MCP protocol uses a client-server architecture: the large-model application that calls the tools acts as the client, and the tool provider acts as the server. Previously, calling each vendor's tools meant writing our own glue code. Now that MCP has become the de facto standard, tool providers are motivated to integrate with it and ship their own MCP Servers, which saves developers a great deal of time. At this point, you should have a general sense of what MCP is.

1. Is it only available for Claude?

The short answer is no: most large models can use it. MCP is a general protocol that works with many models. Using the MCP SDK, we can fetch the list of available tools and hand it to the LLM, for example by wrapping it into an OpenAI API call, or by passing it to the model directly in the prompt. Here is an example:

response = await self.session.list_tools()
available_tools = [{
    "name": tool.name,
    "description": tool.description,
    "input_schema": tool.inputSchema
} for tool in response.tools]

# Initial Claude API call
response = self.anthropic.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1000,
    messages=messages,
    tools=available_tools
)
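
If you are not using Claude, the same tool list can be reshaped for other APIs. Below is a minimal sketch assuming the openai Python SDK; the query_with_openai helper and the model name are illustrative, not part of the MCP SDK:

from openai import OpenAI
from mcp import ClientSession

async def query_with_openai(session: ClientSession, query: str):
    """Illustrative: feed MCP tools to an OpenAI-style chat call."""
    response = await session.list_tools()
    # MCP inputSchema is plain JSON Schema, which is what OpenAI's
    # function-calling format expects under "parameters"
    openai_tools = [{
        "type": "function",
        "function": {
            "name": tool.name,
            "description": tool.description,
            "parameters": tool.inputSchema,
        },
    } for tool in response.tools]

    client = OpenAI()
    return client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": query}],
        tools=openai_tools,
    )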

2. What MCP Server tools are available?

At present, Anthropic has officially released a number of common MCP tools, covering data storage, development tools, web and browser access, Slack communication, AI tools, and more.

There are also MCP servers released by application vendors or maintained by the community, such as Obsidian Markdown Notes [3], Qdrant [4], Cloudflare [5], Docker [6], Kubernetes [7], Todoist [8], and Spotify [9]. One aggregation website [10] has collected 3,251 MCP servers and 98 clients.

3. How to develop your own MCP Server

Here is an example, excerpted from the official tutorial, of a server that provides weather queries.

  • Setting up your development environment
# Create a new directory for our project
uv init weather
cd weather
# Create a virtual environment and activate it
uv venv
source .venv/bin/activate
# Install dependencies
uv add "mcp[cli]" httpx
# Create our server file
touch weather.py
  • Initialization
from typing import Any
import httpx
from mcp.server.fastmcp import FastMCP

# Initialize the FastMCP server
mcp = FastMCP("weather")

# Constants
NWS_API_BASE = "https://api.weather.gov"
USER_AGENT = "weather-app/1.0"
  • Writing Tools

Just as when writing a tool for a large model directly, you need to provide a function description and parameter descriptions, and finally return a string that the large model can understand.

@mcp.tool()
async def get_alerts(state: str) -> str:
    """Get weather alerts for a US state.

    Args:
        state: Two-letter US state code (e.g. CA, NY)
    """
    url = f"{NWS_API_BASE}/alerts/active/area/{state}"
    data = await make_nws_request(url)

    if not data or "features" not in data:
        return "Unable to fetch alerts or no alerts found."

    if not data["features"]:
        return "No active alerts for this state."

    alerts = [format_alert(feature) for feature in data["features"]]
    return "\n---\n".join(alerts)
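
The tool above relies on two helpers, make_nws_request and format_alert, which the official tutorial defines alongside it. Here is a sketch of both, closely following the official quickstart and reusing the constants from the initialization step:

async def make_nws_request(url: str) -> dict[str, Any] | None:
    """Fetch JSON from the NWS API, returning None on any failure."""
    headers = {"User-Agent": USER_AGENT, "Accept": "application/geo+json"}
    async with httpx.AsyncClient() as client:
        try:
            response = await client.get(url, headers=headers, timeout=30.0)
            response.raise_for_status()
            return response.json()
        except Exception:
            return None

def format_alert(feature: dict) -> str:
    """Format a single alert feature into a readable string for the model."""
    props = feature["properties"]
    return (
        f"Event: {props.get('event', 'Unknown')}\n"
        f"Area: {props.get('areaDesc', 'Unknown')}\n"
        f"Severity: {props.get('severity', 'Unknown')}\n"
        f"Description: {props.get('description', 'No description available')}"
    )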
  • Run

Once running, the MCP Server can serve requests. In practice, the tool service usually does not run all the time; instead, the MCP client starts it automatically when the large model needs to call it.

if __name__ == "__main__":
    # Initialize and run the server
    mcp.run(transport="stdio")
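
That automatic startup is driven by the client's configuration. For example, Claude Desktop launches local servers from its claude_desktop_config.json; the snippet below follows the official quickstart, with a placeholder path you would replace with your own project directory:

{
  "mcpServers": {
    "weather": {
      "command": "uv",
      "args": ["--directory", "/ABSOLUTE/PATH/TO/weather", "run", "weather.py"]
    }
  }
}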
  • Client

The client is the large-model application side. How does it access this tool service? First, use the MCP client SDK to initialize a client and connect to the tool server, passing in the script path from above; the client will then automatically start the weather query service.

from typing import Optional
from contextlib import AsyncExitStack

from anthropic import Anthropic
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

class MCPClient:
    def __init__(self):
        # Initialize session and client objects
        self.session: Optional[ClientSession] = None
        self.exit_stack = AsyncExitStack()
        self.anthropic = Anthropic()

    async def connect_to_server(self, server_script_path: str):
        """Connect to an MCP server

        Args:
            server_script_path: Path to the server script (.py or .js)
        """
        # Choose the launcher based on the script extension
        is_python = server_script_path.endswith(".py")
        command = "python" if is_python else "node"
        server_params = StdioServerParameters(
            command=command,
            args=[server_script_path],
            env=None
        )
        stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
        self.stdio, self.write = stdio_transport
        self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))

        await self.session.initialize()

        # List available tools
        response = await self.session.list_tools()
        tools = response.tools
        print("\nConnected to server with tools:", [tool.name for tool in tools])
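
With connect_to_server in place, a minimal entry point could look like the sketch below; this is an illustrative assumption, since the official tutorial wraps it in an interactive chat loop instead. The process_query method it calls is shown in the next snippet.

import asyncio
import sys

async def main():
    # Usage: python client.py path/to/weather.py
    client = MCPClient()
    try:
        await client.connect_to_server(sys.argv[1])
        # process_query is defined in the next snippet
        print(await client.process_query("Are there any weather alerts in CA?"))
    finally:
        await client.exit_stack.aclose()

if __name__ == "__main__":
    asyncio.run(main())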

The next step is the calling logic: use the protocol SDK to obtain the tool list and then pass it to the large model. There is no secret sauce here. What MCP does is let the large-model application call external tools automatically through the MCP communication mechanism. As you can see, the essence is still tool calling; MCP simply standardizes and automates the communication with external tools.

async def process_query(self, query: str) -> str:
    """Process a query using Claude and available tools"""
    messages = [
        {
            "role": "user",
            "content": query
        }
    ]

    response = await self.session.list_tools()
    available_tools = [{
        "name": tool.name,
        "description": tool.description,
        "input_schema": tool.inputSchema
    } for tool in response.tools]

    # Initial Claude API call
    response = self.anthropic.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1000,
        messages=messages,
        tools=available_tools
    )

    # Process response and handle tool calls
    final_text = []

    assistant_message_content = []
    for content in response.content:
        if content.type == 'text':
            ...
        elif content.type == 'tool_use':
            tool_name = content.name
            tool_args = content.input
            # Execute the tool call via the MCP session
            result = await self.session.call_tool(tool_name, tool_args)
            final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")
            ...

    return "\n".join(final_text)
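
The second elision in the tool_use branch is where the tool result goes back to Claude so it can turn the raw data into a final answer. Here is a sketch of that follow-up call, using Anthropic's tool_result message format (the exact handling may differ from the official tutorial):

# Inside the tool_use branch, after executing the tool:
assistant_message_content.append(content)
messages.append({"role": "assistant", "content": assistant_message_content})
messages.append({
    "role": "user",
    "content": [{
        "type": "tool_result",
        "tool_use_id": content.id,
        "content": result.content,
    }],
})

# Follow-up call: Claude now answers using the tool result
follow_up = self.anthropic.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1000,
    messages=messages,
    tools=available_tools,
)
final_text.append(follow_up.content[0].text)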

That is the whole flow; there is no need to go into more detail here.

4. Is it only supported in Python?

In addition to the Python SDK, official SDKs are also available for:

  • Typescript-sdk [11]
  • Java-sdk [12]
  • kotlin-sdk [13]

SDKs for other languages can also be written by following the standard specification [14].

5. Low-level communication protocol

The underlying communication uses JSON-RPC 2.0 [15]. For more details, see the official Transports documentation [16].
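
For a concrete feel, fetching the tool list is a single request/response exchange over the transport. The tools/list method comes from the MCP specification; the messages below are simplified by hand. The client sends:

{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

And the server replies with its tool descriptions:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [{
      "name": "get_alerts",
      "description": "Get weather alerts for a US state.",
      "inputSchema": {"type": "object", "properties": {"state": {"type": "string"}}, "required": ["state"]}
    }]
  }
}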

6. Conclusion

In the future, as large models are adopted more deeply, MCP Servers will become richer and more capable, and more applications will adapt to MCP, lowering the barrier to AI access. As OpenAI CEO Sam Altman said, 2025 is "the year of the AI agent." Perhaps in the near future, a single sentence will be enough to have AI remotely operate our computers and manage our phones, completely changing how people interact with machines.