MCP Principle Analysis and Effect Testing | with Practical MCP Recommendations

Written by
Silas Grey
Updated on: July 11, 2025

How the MCP protocol simplifies the integration of large language models with external tools.

Core content:
1. Five core components of the MCP architecture
2. How MCP improves the intelligence level of large models
3. Practical demonstration and effect test cases


A Brief Introduction to the MCP Architecture

MCP is an open-source protocol launched by Anthropic. Its goal is to let large language models (LLMs) integrate seamlessly with external data sources and tools through one unified connection method, reducing the need to reinvent the wheel.

The MCP architecture involves five main parts:

● Host: an application that contains the MCP Client; it can be a web app, mobile app, or any other type of program.

● MCP Client: establishes a one-to-one connection with a Server using the MCP protocol.

● MCP Server: connects internal, external, and network resources and exposes them as services via the MCP protocol.

● Local: internal resources.

● Remote: external/network resources.

OpenAI's function calling enhanced the capabilities of large models: by calling external tools, a model's effective intelligence improves. However, each model defines and invokes tools differently, and even for common needs such as web search or weather lookup, you still have to implement the corresponding code in your own program. With MCP, you simply pull these already-implemented capabilities into your project, much like importing a package.
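
For comparison, a hand-rolled tool definition under OpenAI-style function calling looks roughly like the sketch below. The get_weather tool and its schema are made-up examples; the point is that every application must both declare the schema and implement the function body itself:

from openai import OpenAI

client = OpenAI()

# Hypothetical get_weather tool: the schema is declared by hand,
# and the function body must be implemented in your own program.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"}
            },
            "required": ["city"]
        }
    }
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools
)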

Therefore, from a practical point of view, MCP can be understood as standardizing the tool calls, prompts, and related plumbing of an Agent. It simplifies development, reduces wheel-reinvention, and makes it easy for everyone to seamlessly reuse excellent tools from the open source community.

Cursor MCP Practice

I found an MCP Server on the Internet that retrieves arXiv articles (link: https://github.com/blazickjp/arxiv-mcp-server). Here I use Cursor as the MCP Client; note, however, that Cursor supports only the Tools part of MCP.

1. Install the corresponding Server

uv tool install arxiv-mcp-server

2. Cursor Configuration

Click "MCP" in the "Cursor Settings" menu bar, and then click the "+ Add new global MCP server" button. A mcp.json file will pop up. Add the configuration information in arxiv-mcp-server to the json file.

{
  "mcpServers": {
    "arxiv-mcp-server": {
      "command": "uv",
      "args": [
        "tool",
        "run",
        "arxiv-mcp-server",
        "--storage-path", "/path/to/paper/storage"
      ]
    }
  }
}
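
A note on this config: the command and args fields tell Cursor (the Host) how to launch the server as a local subprocess; the two sides then speak the MCP protocol over the process's stdio. The --storage-path argument appears to be specific to arxiv-mcp-server and sets where downloaded papers are kept.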

When the indicator in front of the MCP Server turns green, the server is working and its tools can be used.

3. Effect test

Here I entered the query "ai agent 2025". The model first judged that the web-search tool was most suitable; since I clicked "Reject", it then chose the tool "search_papers". The displayed tool name is exactly the tool name from the MCP Server configured earlier, so I let it proceed with this tool. Below are the parameters and result of the search_papers call:

Parameters:

{
  "query": "AI agents autonomous systems 2025",
  "categories": [
    "cs.AI",
    "cs.LG"
  ],
  "max_results": 5
}

Result (truncated; only the first two of the five returned papers are shown):

{
  "total_results": 5,
  "papers": [
    {
      "id": "2503.10638v1",
      "title": "Studying Classifier(-Free) Guidance From a Classifier-Centric Perspective",
      "authors": [
        "Xiaoming Zhao",
        "Alexander G. Schwing"
      ],
      "abstract": "Classifier-free guidance has become a staple for conditional generation with\ndenoising diffusion models. However, a comprehensive understanding of\nclassifier-free guidance is still missing. In this work, we carry out an\nempirical study to provide a fresh perspective on classifier-free guidance.\nConcretely, instead of solely focusing on classifier-free guidance, we trace\nback to the root, i.e., classifier guidance, pinpoint the key assumption for\nthe derivation, and conduct a systematic study to understand the role of the\nclassifier. We find that both classifier guidance and classifier-free guidance\nachieve conditional generation by pushing the denoising diffusion trajectories\naway from decision boundaries, i.e., areas where conditional information is\nusually entangled and is hard to learn. Based on this classifier-centric\nunderstanding, we propose a generic postprocessing step built upon\nflow-matching to shrink the gap between the learned distribution for a\npre-trained denoising diffusion model and the real data distribution, majorly\naround the decision boundaries. Experiments on various datasets verify the\neffectiveness of the proposed approach.",
      "categories": [
        "cs.CV",
        "cs.AI",
        "cs.LG"
      ],
      "published": "2025-03-13T17:59:59+00:00",
      "url": "http://arxiv.org/pdf/2503.10638v1",
      "resource_uri": "arxiv://2503.10638v1"
    },
    {
      "id": "2503.10636v1",
      "title": "The Curse of Conditions: Analyzing and Improving Optimal Transport for Conditional Flow-Based Generation",
      "authors": [
        "Ho Kei Cheng",
        "Alexander Schwing"
      ],
      "abstract": "Minibatch optimal transport coupling straightens paths in unconditional flow\nmatching. This leads to computationally less demanding inference as fewer\nintegration steps and less complex numerical solvers can be employed when\nnumerically solving an ordinary differential equation at test time. However, in\nthe conditional setting, minibatch optimal transport falls short. This is\nbecause the default optimal transport mapping disregards conditions, resulting\nin a conditionally skewed prior distribution during training. In contrast, at\ntest time, we have no access to the skewed prior, and instead sample from the\nfull, unbiased prior distribution. This gap between training and testing leads\nto a subpar performance. To bridge this gap, we propose conditional optimal\ntransport C^2OT that adds a conditional weighting term in the cost matrix when\ncomputing the optimal transport assignment. Experiments demonstrate that this\nsimple fix works with both discrete and continuous conditions in\n8gaussians-to-moons, CIFAR-10, ImageNet-32x32, and ImageNet-256x256. Our method\nperforms better overall compared to the existing baselines across different\nfunction evaluation budgets. Code is available at\nhttps://hkchengrex.github.io/C2OT",
      "categories": [
        "cs.LG",
        "cs.CV"
      ],
      "published": "2025-03-13T17:59:56+00:00",
      "url": "http://arxiv.org/pdf/2503.10636v1",
      "resource_uri": "arxiv://2503.10636v1"
    }

At this point, the model determined that the user's question had been answered, so it took no further action. In the next interaction, I explicitly asked the model to download one of the papers. The tool "download_papers" was called, and the PDF was saved in .md format in the local storage path. It did not stop there: the model then called the tool "read_papers" to interpret the paper's content, and only terminated after that step.

Customizing the MCP Client and MCP Server

Cursor above plays the role of the Client in the MCP architecture (strictly speaking, it is the Host that contains the Client). Ordinary users therefore only need to decide what functionality they need and install the corresponding Server. Developers who want their own programs to use existing MCP Servers must adapt them to comply with the MCP specification. Official SDKs for Python and JavaScript are available, which makes this straightforward.

1. MCP Client

According to the official tutorial:

https://github.com/modelcontextprotocol/quickstart-resources/tree/main/mcp-client-python

First, create a function that connects to the server. It checks that the script type is supported (only Python and JavaScript are allowed) and lists the tools the server provides.

async def connect_to_server(self, server_script_path: str):
    """Connect to an MCP server

    Args:
        server_script_path: Path to the server script (.py or .js)
    """
    is_python = server_script_path.endswith('.py')
    is_js = server_script_path.endswith('.js')
    if not (is_python or is_js):
        raise ValueError("Server script must be a .py or .js file")

    # Launch the server as a subprocess with the right interpreter
    command = "python" if is_python else "node"
    server_params = StdioServerParameters(
        command=command,
        args=[server_script_path],
        env=None
    )

    # Open the stdio transport and create a client session over it
    stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
    self.stdio, self.write = stdio_transport
    self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))

    await self.session.initialize()

    # List available tools
    response = await self.session.list_tools()
    tools = response.tools
    print("\nConnected to server with tools:", [tool.name for tool in tools])

Next, define the query-processing function, which implements the large model's tool-calling flow.

async def process_query(self, query: str) -> str:
    """Process a query using Claude and available tools"""
    messages = [
        {
            "role": "user",
            "content": query
        }
    ]

    response = await self.session.list_tools()
    available_tools = [{
        "name": tool.name,
        "description": tool.description,
        "input_schema": tool.inputSchema
    } for tool in response.tools]

    # Initial Claude API call
    response = self.anthropic.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1000,
        messages=messages,
        tools=available_tools
    )

    # Process response and handle tool calls
    final_text = []

    for content in response.content:
        if content.type == 'text':
            final_text.append(content.text)
        elif content.type == 'tool_use':
            tool_name = content.name
            tool_args = content.input

            # Execute tool call
            result = await self.session.call_tool(tool_name, tool_args)
            final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")

            # Continue conversation with tool results
            if hasattr(content, 'text') and content.text:
                messages.append({
                    "role": "assistant",
                    "content": content.text
                })
            messages.append({
                "role": "user",
                "content": result.content
            })

            # Get next response from Claude
            response = self.anthropic.messages.create(
                model="claude-3-5-sonnet-20241022",
                max_tokens=1000,
                messages=messages,
            )

            final_text.append(response.content[0].text)

    return "\n".join(final_text)

 


Finally, the conversation loop. This demo keeps no history; each query is an independent, single-turn conversation.


async def chat_loop(self):
    """Run an interactive chat loop"""
    print("\nMCP Client Started!")
    print("Type your queries or 'quit' to exit.")

    while True:
        try:
            query = input("\nQuery: ").strip()

            if query.lower() == 'quit':
                break

            response = await self.process_query(query)
            print("\n" + response)

        except Exception as e:
            print(f"\nError: {str(e)}")

The official example uses the Claude model, but the same flow works with the OpenAI-compatible API as well; you only need to feed the tool definitions to its tools parameter.
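
As a minimal sketch of that adaptation (assuming the openai Python SDK; the helper name list_openai_tools is my own), the only real change is reshaping each MCP tool's inputSchema into OpenAI's function-calling format:

from openai import OpenAI

client = OpenAI()

async def list_openai_tools(session):
    """Convert MCP tool definitions into OpenAI's `tools` parameter format."""
    response = await session.list_tools()
    return [{
        "type": "function",
        "function": {
            "name": tool.name,
            "description": tool.description,
            "parameters": tool.inputSchema  # MCP's JSON Schema maps directly
        }
    } for tool in response.tools]

# Usage inside process_query (async context):
#   tools = await list_openai_tools(self.session)
#   completion = client.chat.completions.create(
#       model="gpt-4o",
#       messages=messages,
#       tools=tools,
#   )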

2. MCP Server

An MCP Server mainly includes the following three parts (a minimal sketch follows the list):

● Resources: static resources the Server allows access to, such as files and images.

● Tools: the functions the Server exposes for the model to call.

● Prompts: pre-written prompt templates that help complete specific tasks.
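
A minimal sketch of the three primitives using the Python SDK's FastMCP (the URI, tool, and prompt here are made-up examples, not part of the weather demo):

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo")

@mcp.resource("demo://readme")
def readme() -> str:
    """A static resource the client may read."""
    return "Hello from an MCP resource."

@mcp.tool()
def add(a: int, b: int) -> int:
    """A callable tool."""
    return a + b

@mcp.prompt()
def summarize(text: str) -> str:
    """A reusable prompt template."""
    return f"Please summarize the following text:\n\n{text}"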

The official docs also provide a simple Server development demo:

https://github.com/modelcontextprotocol/quickstart-resources/tree/main/weather-server-python

First, two helper functions are defined: one issues the actual API request, the other formats the response content.

from typing import Any

import httpx

NWS_API_BASE = "https://api.weather.gov"
USER_AGENT = "weather-app/1.0"

async def make_nws_request(url: str) -> dict[str, Any] | None:
    """Make a request to the NWS API with proper error handling."""
    headers = {
        "User-Agent": USER_AGENT,
        "Accept": "application/geo+json"
    }
    async with httpx.AsyncClient() as client:
        try:
            response = await client.get(url, headers=headers, timeout=30.0)
            response.raise_for_status()
            return response.json()
        except Exception:
            return None

def format_alert(feature: dict) -> str:
    """Format an alert feature into a readable string."""
    props = feature["properties"]
    return f"""
Event: {props.get('event', 'Unknown')}
Area: {props.get('areaDesc', 'Unknown')}
Severity: {props.get('severity', 'Unknown')}
Description: {props.get('description', 'No description available')}
Instructions: {props.get('instruction', 'No specific instructions provided')}
"""

Next, two tool functions are defined. The @mcp.tool() decorator registers each one so that the Client knows it is a callable tool.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
async def get_alerts(state: str) -> str:
    """Get weather alerts for a US state.

    Args:
        state: Two-letter US state code (e.g. CA, NY)
    """
    url = f"{NWS_API_BASE}/alerts/active/area/{state}"
    data = await make_nws_request(url)

    if not data or "features" not in data:
        return "Unable to fetch alerts or no alerts found."

    if not data["features"]:
        return "No active alerts for this state."

    alerts = [format_alert(feature) for feature in data["features"]]
    return "\n---\n".join(alerts)

@mcp.tool()
async def get_forecast(latitude: float, longitude: float) -> str:
    """Get weather forecast for a location.

    Args:
        latitude: Latitude of the location
        longitude: Longitude of the location
    """
    # First get the forecast grid endpoint
    points_url = f"{NWS_API_BASE}/points/{latitude},{longitude}"
    points_data = await make_nws_request(points_url)

    if not points_data:
        return "Unable to fetch forecast data for this location."

    # Get the forecast URL from the points response
    forecast_url = points_data["properties"]["forecast"]
    forecast_data = await make_nws_request(forecast_url)

    if not forecast_data:
        return "Unable to fetch detailed forecast."

    # Format the periods into a readable forecast
    periods = forecast_data["properties"]["periods"]
    forecasts = []
    for period in periods[:5]:  # Only show next 5 periods
        forecast = f"""
{period['name']}:
Temperature: {period['temperature']}°{period['temperatureUnit']}
Wind: {period['windSpeed']} {period['windDirection']}
Forecast: {period['detailedForecast']}
"""
        forecasts.append(forecast)

    return "\n---\n".join(forecasts)


Useful MCP Recommendations

MCP Server:

https://github.com/punkpeye/awesome-mcp-servers/

MCP Client:

https://github.com/punkpeye/awesome-mcp-clients/

You can also filter by various criteria on Glama:

https://glama.ai/mcp/servers

Summary

The emergence of MCP benefits the entire AI ecosystem:

Ordinary users only need to decide which plug-ins they need, then find and install them in AI applications such as Cursor. Developers can cut development work by pulling in excellent MCP Servers as their projects require, or package recurring functionality as MCP Servers for reuse across projects.

However, the Claude model and its desktop client cannot be used directly in China: registration requires a foreign phone number, and accounts often risk being banned. Cursor supports MCP Servers only in Agent mode; the ordinary Chat mode lacks this feature, and on the free plan Agent mode has only a limited monthly quota. We can only hope that more excellent projects emerge from the open source community to extend the influence of the MCP ecosystem.