How Model Context Provider (MCP) Empowers AI Agents

Written by
Iris Vance
Updated on: June 25, 2025

Model Context Provider (MCP) is the intelligent scheduling layer of AI agents: it dynamically selects relevant tools according to user needs, greatly improving the efficiency and accuracy of AI assistants.

Core content:
1. How MCP, as an intelligent scheduling layer between AI and tools, improves the efficiency and accuracy of AI assistants
2. The MCP workflow: from receiving a user request to returning the final result
3. Constructing dynamic prompts: system instructions, tool instructions, and dynamically selected tool definitions

Yang Fangxian
Founder of 53A, Most Valuable Expert of Tencent Cloud (TVP)

Summary

When an AI assistant uses tools, how can we make the model focus only on the tools relevant to the current task, without being distracted by many irrelevant ones? This article explores the working principle of the "Model Context Provider" (MCP), which serves as an intelligent scheduling layer between AI and tools. It dynamically selects relevant tools according to user needs, greatly improving the efficiency and accuracy of AI assistants.

Introduction

Imagine asking an AI assistant "What's the weather like in Paris today?" while the system hands the model descriptions of dozens of tools: searching for restaurants, booking flights, checking stocks, and so on. This not only wastes computing resources but can also lead the AI to choose the wrong tool.

That's why we need an intelligent "Model Context Provider" (MCP). It acts like a personal assistant for the AI: after the user makes a request, it first analyzes the need, selects only the tools that might be used, and then sends these selected tools to the Large Language Model (LLM) together with the user request.

MCP workflow

The core workflow of MCP can be divided into the following steps:

1. Receive the user request: the user asks a question, such as "What's the weather in Paris?"
2. Analyze user intent: MCP analyzes the request to understand what the user wants to know
3. Select relevant tools: based on the analysis, select a relevant subset of all available tools (such as a weather query tool)
4. Build the dynamic prompt: combine the system instruction, tool instruction, selected tool descriptions, and the user request into a single prompt
5. Send to the LLM: send the constructed prompt to the large language model
6. Process the LLM output:
   • If the LLM decides to use a tool, MCP executes the tool and obtains the result
   • If no tool is needed, the LLM generates a natural-language response directly
7. Return the final result: return the tool execution result or the direct reply to the user
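The steps above can be sketched as a minimal orchestration loop. This is an illustrative Python sketch under stated assumptions, not part of any real MCP implementation; `select_tools`, `call_llm`, and `run_tool` are hypothetical callables injected by the caller:

```python
import json

def handle_request(user_request, all_tools, select_tools, call_llm, run_tool):
    """Minimal sketch of the MCP loop; the three callables are injected stubs."""
    relevant = select_tools(user_request, all_tools)   # steps 2-3: intent -> tool subset
    prompt = (                                         # step 4: dynamic prompt
        "System: You are a helpful assistant.\n"
        "Available tools:\n" + json.dumps(relevant, indent=2) +
        f"\n\nUser: {user_request}"
    )
    output = call_llm(prompt)                          # step 5: send to the LLM
    try:                                               # step 6: tool call or plain text?
        call = json.loads(output)
        result = run_tool(call["tool_name"], call["parameters"])
        followup = prompt + f"\n\nTool result: {result}\nAnswer the user."
        return call_llm(followup)                      # tool result fed back to the LLM
    except (json.JSONDecodeError, TypeError, KeyError):
        return output                                  # step 7: no tool needed, direct reply
```

If the LLM's output parses as a tool-call JSON, the tool runs and its result is fed back for a final natural-language answer; otherwise the output is returned to the user as-is.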
Construction of dynamic prompts

The dynamic prompt constructed by MCP usually includes the following parts:

    1. System instruction: defines the role and overall goals of the LLM

      You are a helpful assistant. Please answer the user's questions using the tools available to you. Always prioritize providing accurate information.

    2. Tool instruction: specifies the format the LLM must use when calling a tool

      When calling a tool, you must use the following JSON format: {"tool_name": "Tool Name", "parameters": {"parameter1": "Value1", "parameter2": "Value2", ...}}
      Do not add any other text before or after the JSON. If you do not need to use a tool, reply directly in natural language.

    3. Dynamically selected tool definitions: the core part of MCP, containing only tools relevant to the current request
      Available tools:
      {
        "name": "get_weather",
        "description": "Get the current weather for a specific location.",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "City and state/country, e.g. Beijing, China"
            },
            "unit": {
              "type": "string",
              "enum": ["celsius", "fahrenheit"],
              "description": "Temperature unit. Default is Celsius."
            }
          },
          "required": ["location"]
        }
      }
    4. Conversation history (if applicable): content of previous turns
    5. The user's current request:

      User: What's the weather like in Paris?
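A hedged sketch of how these five parts might be assembled in code. The instruction strings are abbreviated, and `build_dynamic_prompt` is a hypothetical helper for illustration, not an established API:

```python
import json

# Abbreviated instruction strings (illustrative, not canonical wording)
SYSTEM_INSTRUCTION = (
    "You are a helpful assistant. Please answer the user's questions "
    "using the tools available to you. Always prioritize accurate information."
)
TOOL_INSTRUCTION = (
    'When calling a tool, you must use the following JSON format: '
    '{"tool_name": "...", "parameters": {...}}. Do not add any other text '
    "before or after the JSON. If you do not need a tool, reply in natural language."
)

def build_dynamic_prompt(selected_tools, user_request, history=None):
    """Combine the five parts into a single prompt string."""
    parts = [SYSTEM_INSTRUCTION, TOOL_INSTRUCTION]
    # Part 3: only the dynamically selected tool definitions
    parts.append("Available tools:\n" +
                 "\n".join(json.dumps(t, indent=2) for t in selected_tools))
    if history:                                # Part 4: prior turns, if any
        parts.extend(history)
    parts.append(f"User: {user_request}")      # Part 5: current request
    return "\n\n".join(parts)
```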

A complete example

User request

What's the weather like in Paris?

The prompt constructed by MCP

System: You are a helpful assistant. Please answer the user's questions using available tools. Always give priority to providing accurate information. When calling a tool, you must use the following JSON format: {"tool_name": "tool name", "parameters": {"parameter1": "value1", "parameter2": "value2", ...}}. Do not add any other text before or after the JSON. If you do not need to use a tool, please reply directly in natural language.

Available tools:
{
  "name": "get_weather",
  "description": "Get the current weather for a specific location.",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "City and state/country, e.g. Beijing, China"
      },
      "unit": {
        "type": "string",
        "enum": ["celsius", "fahrenheit"],
        "description": "Temperature unit. Default is Celsius."
      }
    },
    "required": ["location"]
  }
}

User: What's the weather like in Paris?

Possible outputs of LLM

{
  "tool_name": "get_weather",
  "parameters": {
    "location": "Paris, France",
    "unit": "celsius"
  }
}

MCP post-processing

After receiving this tool-call request, MCP executes the get_weather tool to fetch the weather information for Paris, then sends the result back to the LLM, which generates the final natural-language response.
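This post-processing step might be sketched as follows. `TOOL_REGISTRY` and the `get_weather` stub are hypothetical stand-ins for real tool implementations:

```python
import json

def get_weather(location, unit="celsius"):
    """Hypothetical stand-in for a real weather API call."""
    return f"Weather in {location}: 18 degrees {unit}"

TOOL_REGISTRY = {"get_weather": get_weather}

def postprocess(llm_output):
    """If the LLM emitted a tool-call JSON, run the tool; otherwise pass the text through."""
    try:
        call = json.loads(llm_output)
        tool = TOOL_REGISTRY[call["tool_name"]]
    except (json.JSONDecodeError, TypeError, KeyError):
        return llm_output  # direct natural-language reply, no tool needed
    return tool(**call["parameters"])
```

A registry keyed by tool name keeps dispatch trivial and makes it easy for MCP to expose only the selected subset for any given request.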




E2E Workflow Diagram

[Diagram not reproduced here.]

How MCP chooses relevant tools

There are several ways MCP can determine which tools are relevant to a user request:

1. Keyword matching: simple regular-expression or string matching checks whether the user request contains keywords from a tool's name or description
2. Semantic search / embeddings: convert the user request and the descriptions of all available tools into vectors, then pick the tools with the highest semantic similarity to the request
3. Rule system: explicitly defined rules (e.g., "if the query contains 'weather' and a city name, include the weather search tool")
4. Classification model: use a specialized model or a small LLM to classify user intent and map it to relevant tools
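Method 1 (keyword matching) is the simplest to illustrate. The sketch below is a rough heuristic with an ad-hoc stopword list, purely for demonstration; a production system would likely prefer the embedding-based approach:

```python
import re

# Ad-hoc stopwords so trivial words ("the", "get") don't trigger a match
STOPWORDS = {"the", "a", "for", "to", "get", "current", "s", "in", "like", "what"}

def select_by_keywords(request, tools):
    """Keep tools that share at least one non-trivial word with the request."""
    req_words = set(re.findall(r"[a-z]+", request.lower())) - STOPWORDS
    selected = []
    for tool in tools:
        text = (tool["name"] + " " + tool["description"]).lower()
        tool_words = set(re.findall(r"[a-z]+", text)) - STOPWORDS
        if req_words & tool_words:
            selected.append(tool)
    return selected
```

For "What's the weather like in Paris?", only a weather tool shares the word "weather" with the request, so unrelated tools such as flight booking are filtered out before the prompt is built.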




Advantages of MCP

1. Smaller prompts: including only relevant tools saves tokens, reducing latency and cost
2. Improved reliability: the LLM is less likely to be confused by many irrelevant tools, or to try tools unsuited to the current request
3. Faster decision making: with fewer options, the LLM may make tool-call decisions more quickly
4. Finer control: MCP can fine-tune which functions are exposed to the LLM for any given request or user state

Practical application scenarios

MCP plays an important role in many scenarios:

1. Multifunctional AI assistants: assistants with dozens or even hundreds of tools can select them intelligently through MCP, avoiding confusion
2. Personalized services: dynamically adjust the available tools based on user preferences or permissions
3. Resource-constrained environments: when computing resources are limited, removing unnecessary tool descriptions significantly improves efficiency
4. Professional domains: in fields such as medicine and law, specialized tools can be selected according to the specific problem type

Technical considerations for implementing MCP

To build an efficient MCP system, the following aspects need to be considered:

1. Tool taxonomy: a well-organized classification system for tools enables quick filtering
2. Embedding vector database: create and store embedding vectors for tool descriptions to support semantic search
3. Caching: cache tool-selection results for common request types to avoid repeated analysis
4. Monitoring and feedback: track tool-selection accuracy and continuously refine the algorithm based on real-world usage
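The caching idea in point 3 can be sketched with Python's `functools.lru_cache`. Here `request_type` is a hypothetical normalizer mapping raw requests to coarse type keys, and the selection function is a stub; a real system would use something more robust than substring checks:

```python
from functools import lru_cache

ANALYSIS_RUNS = {"count": 0}  # tracks how often the expensive analysis actually runs

def request_type(request):
    """Hypothetical normalizer: map a raw request to a coarse request-type key."""
    return "weather" if "weather" in request.lower() else "general"

@lru_cache(maxsize=1024)
def select_tools_cached(req_type):
    """Stand-in for expensive selection (embedding search, classification)."""
    ANALYSIS_RUNS["count"] += 1
    return ("get_weather",) if req_type == "weather" else ()
```

Two different weather questions normalize to the same key, so the costly analysis runs once and later requests hit the cache.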

Personal opinion

MCP represents an important trend in AI system design: rather than simply increasing model size or adding more tools, we should optimize the interaction between models and tools. This "less is more" philosophy is critical to improving the efficiency of AI systems.

As AI assistants continue to expand their capabilities, the number of tools is likely to grow exponentially, making intelligent scheduling layers like MCP increasingly important. In the future, we may see more sophisticated MCP systems that not only select tools based on user requests, but also predict users’ subsequent needs and prepare relevant tools in advance.

In addition, the idea behind MCP applies beyond tool selection: it can be extended to other areas, such as dynamically adjusting system instructions or selecting the appropriate model. This represents an important direction for the evolution of AI system architecture.

Conclusion

As the intelligent scheduling layer in AI systems, the Model Context Provider (MCP) significantly improves the efficiency and accuracy of AI assistants by dynamically selecting the tools relevant to each user request. It not only reduces prompt size but also improves response speed and decision quality. As the AI tool ecosystem continues to expand, MCP will become increasingly important as a key component of efficient AI systems.