A Detailed Explanation of MCP, the Model Context Protocol (with references)

Written by
Clara Bennett
Updated on: July 8, 2025

The Model Context Protocol (MCP) provides a standardized way for large language model (LLM) applications to interact with external data and tools.

Core content:
1. An overview of MCP and why standardized interaction methods matter
2. How MCP addresses data silos, inefficient development, and ecosystem fragmentation
3. MCP's technical architecture, components, and three standard capabilities


1. MCP Overview and Background

Launched by Anthropic in November 2024, MCP is an open protocol designed to standardize how large language model (LLM) applications interact with external data sources and tools.

The core of MCP is a standardized communication layer: when handling user requests or performing tasks, the LLM sends requests to an MCP server through an MCP client.

The MCP server interacts with the corresponding external data source or tool, retrieves the data, formats it according to the MCP protocol specification, and returns the formatted data to the LLM. This mechanism lets the LLM obtain more accurate and relevant information, and thereby generate higher-quality responses or perform the operations the user requested.


2. Core issues solved by MCP

The emergence of MCP addresses three key problems of the large-model era: 1️⃣ data silos, 2️⃣ inefficient development, and 3️⃣ ecosystem fragmentation. Through a unified protocol standard, MCP lets AI models call external tools and data sources as easily as "plugging in a USB device", greatly simplifying integration and improving development efficiency.

  • Breaking down data silos: traditional large models cannot directly access real-time data or local resources, but MCP lets AI "connect to everything", for example automatically calling a weather API when asked about the weather, or connecting directly to an internal database when analyzing corporate data.


  • Reducing development costs: before MCP, a separate interface had to be developed for each combination of model and tool, duplicating work. With MCP, developers write a server once and every MCP-compatible model can call it.


  • Improving security and interoperability: MCP's built-in permission control and encryption mechanisms are safer than exposing a database directly; at the same time, USB-like standardization lets tools from different vendors "plug and play", avoiding ecosystem fragmentation.


3. MCP Technical Architecture and Implementation

✅ MCP follows a client-server architecture, which consists mainly of the following parts:

  • MCP Hosts: AI applications that want to access data through MCP, such as chat clients, integrated development environments (IDEs), Claude Desktop, and Cline.


  • MCP Clients: protocol clients that each maintain a one-to-one connection with an MCP server. They handle the communication and present the server's responses to the AI model.


  • MCP Servers: expose access to local or remote resources, such as database queries and API calls, to MCP Clients through the standardized protocol, enabling two-way interaction. A server can provide multiple functions, similar to function calling or tool use, and the LLM understands and invokes the required capabilities on its own.


  • Local Data Sources: files, databases, and other resources on the local machine, which MCP Servers can access securely.


  • Remote Services: external systems reached over the Internet (such as APIs), to which MCP Servers can connect.
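To make these roles concrete, here is a toy, stdlib-only sketch (not the official SDK) of how a server might register tools and dispatch incoming requests. The `get_weather` tool and its handler are invented for illustration; the method names `tools/list` and `tools/call` follow the MCP specification.

```python
import json

# A toy, in-memory dispatcher in the spirit of an MCP server (NOT the official
# SDK): tools live in a dict, and JSON-RPC 2.0 requests are routed by method
# name. "get_weather" is a hypothetical tool for illustration.

TOOLS = {
    "get_weather": {
        "description": "Return a (fake) weather report for a city.",
        "handler": lambda args: f"Sunny in {args['city']}",
    },
}

def handle_request(raw: str) -> str:
    """Dispatch one serialized JSON-RPC 2.0 request and return the response."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": name, "description": tool["description"]}
                            for name, tool in TOOLS.items()]}
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = {"content": tool["handler"](req["params"]["arguments"])}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                           "error": {"code": -32601, "message": "Method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"), "result": result})
```

A real server would read such messages from a transport (stdin or HTTP) and write responses back; this sketch only shows the registration-and-dispatch shape that the architecture above describes.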


✅ MCP Servers can provide three types of standard capabilities:

  • Resources: data objects that can be referenced and retrieved, including documents, images, database schemas, and other structured data, which clients can read. For example, an MCP server can expose a database schema containing product information as a resource.


  • Prompts: predefined templates that help users complete specific tasks and ensure consistent, high-quality AI output for common tasks. They can contain placeholders for dynamic content and can be chained together to create complex workflows. For example, an MCP server can provide a prompt template for generating product descriptions.


  • Tools: functions or third-party services that the LLM can call, such as querying a database, calling an API, or processing data. Each tool has a name, a description, and an input/output schema. The LLM decides which tool to use based on the tool descriptions. For example, an MCP server can provide a tool for querying weather forecasts.
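The three capability types can be pictured as plain data that a server advertises. The field names below are simplified stand-ins for the spec's actual schemas, and the example resource, prompt, and tool are all hypothetical.

```python
# Simplified, hypothetical capability declarations an MCP server might
# advertise; the three keys mirror the capability types described above.
CAPABILITIES = {
    "resources": [
        {"uri": "db://products/schema", "description": "Product table schema"},
    ],
    "prompts": [
        {"name": "product_description",
         "template": "Write a product description for {product_name}."},
    ],
    "tools": [
        {"name": "get_forecast",
         "description": "Query a weather forecast",
         "inputSchema": {"type": "object",
                         "properties": {"city": {"type": "string"}}}},
    ],
}

def fill_prompt(name: str, **kwargs) -> str:
    """Instantiate a prompt template, filling its dynamic placeholders."""
    prompt = next(p for p in CAPABILITIES["prompts"] if p["name"] == name)
    return prompt["template"].format(**kwargs)
```

Note how a prompt is just a template with placeholders, a resource is addressed by a URI, and a tool carries a machine-readable input schema the LLM can reason about.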


✅ The communication protocol is JSON-RPC 2.0, which supports two transport mechanisms:

  • Local communication via stdio (standard input/output): suitable for inter-process communication on the same machine.

  • Remote communication via HTTP with Server-Sent Events (SSE): supports streaming and interaction with remote services.
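A minimal sketch of the message layer, assuming newline-delimited JSON-RPC 2.0 framing as used by the stdio transport (the helper names are our own, not part of any SDK):

```python
import json

def encode_message(method: str, params: dict, msg_id: int) -> str:
    """Serialize one JSON-RPC 2.0 request as a single newline-terminated line,
    matching the newline-delimited framing of the stdio transport."""
    return json.dumps({"jsonrpc": "2.0", "id": msg_id,
                       "method": method, "params": params}) + "\n"

def decode_message(line: str) -> dict:
    """Parse one newline-delimited JSON-RPC 2.0 message and sanity-check it."""
    msg = json.loads(line)
    assert msg.get("jsonrpc") == "2.0", "not a JSON-RPC 2.0 message"
    return msg
```

Over stdio, each such line is written to the server process's stdin and the response read from its stdout; over SSE, the same JSON payloads travel as HTTP events instead.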


4. MCP workflow

The workflow of MCP (Model Context Protocol) centers on the client-server architecture and aims to standardize the interaction between AI models and external tools and data sources. Its main steps are:

Step 1: Capability discovery: when an AI application (as an MCP client) needs to extend its capabilities, it first connects to one or more MCP servers and asks each server what it provides, namely a list and descriptions of its tools, resources, and prompts. This is like the client getting a "menu" from the server, telling it what the server can do and how to use it.

  • Tools are operations or functions that an AI model can invoke, such as querying a database, sending an email, or generating an image.

  • Resources are data objects that AI models can access and retrieve, such as documents, API responses, or database schemas.

  • Prompts are predefined templates that guide AI interactions, ensuring consistency and efficiency.


Step 2: Prompt augmentation: when a user asks a question that requires external data or an action, the query is sent to the AI model (via its host application, the MCP host) together with descriptions of the tools, resources, and prompts the servers provide. The model now "knows" which server capabilities it can leverage to accomplish the task. For example, if a user asks "What will the weather be like tomorrow?", the prompt sent to the model will include a description of the "weather API tool" provided by one of the MCP servers.

Step 3: Tool usage: based on the user's query and the descriptions of the available tools, resources, and prompts, the AI model (via the MCP client) decides whether a tool is needed to complete the task. If so, the client sends a toolUse message specifying the tool to use and its required parameters.

Step 4: Tool execution: after the MCP server receives the toolUse request, it dispatches the request to the corresponding underlying tool or service. For example, if the request is to use the "weather API tool", the server calls the actual weather API and obtains the result.

Step 5: Result return: after executing the tool, the MCP server returns the result to the MCP client in a structured format, as a toolResult message containing the output of the tool execution.

Step 6: Model response: after the MCP client receives the toolResult, it adds the tool's output to the conversation history and forwards it back to the AI model (usually through its host application). The model can now use this information to generate the final response to the user. This process may involve multiple tool calls and data retrievals; an AI agent can autonomously decide which tools to use, in what order, and how to chain them together to complete complex tasks.

Step 7: Human-in-the-loop collaboration: MCP also introduces the ability for humans to provide additional data and approve certain execution steps.
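The steps above can be condensed into a small simulation. The server, the model's decision rule, and the weather tool below are all stand-ins invented for illustration, not real MCP components:

```python
# A condensed simulation of the workflow: discovery -> tool selection ->
# execution -> final response. Everything here is a hypothetical stand-in.

def discover(server):
    # Step 1: the client asks the server for its "menu" of tools.
    return server["tools"]

def model_decide(query, tools):
    # Steps 2-3: a stand-in for the LLM choosing a tool from the descriptions.
    if "weather" in query and "get_forecast" in tools:
        return {"tool": "get_forecast", "arguments": {"city": "Tokyo"}}
    return None

def run_query(query, server):
    tools = discover(server)
    call = model_decide(query, tools)
    if call is None:
        return "No tool needed."
    # Steps 4-5: the server executes the tool and returns a structured result.
    result = server["handlers"][call["tool"]](call["arguments"])
    # Step 6: the result joins the conversation to produce the final answer.
    return f"The forecast for {call['arguments']['city']} is: {result}"

WEATHER_SERVER = {
    "tools": {"get_forecast": "Query a weather forecast for a city"},
    "handlers": {"get_forecast": lambda args: "sunny"},
}
```

Step 7 (human approval) would slot in between the model's decision and the tool's execution, gating the call on user consent.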

In summary, the workflow of MCP can be summarized as follows:

1️⃣ The AI client discovers the server's capabilities

2️⃣ The AI model decides which tools/resources/prompts to use based on the user query and the available capabilities

3️⃣ The client sends a request to the server

4️⃣ The server performs the operation and returns the result

5️⃣ The client feeds the result to the AI model to generate the final response

The entire process runs over the standardized protocol, allowing different AI models and a wide variety of external tools and data sources to be integrated in a universal, flexible way. This standardization reduces integration complexity, improves interoperability, and lays the foundation for building more powerful AI applications.

5. Key advantages of MCP

MCP brings several significant advantages to AI system development:

  • Reduced integration complexity: MCP provides a unified interface, so each LLM-driven application no longer needs its own approach to integrating external functionality. Through the standardized protocol, developers can more easily connect AI models to a variety of tools and data sources. It transforms an M×N problem (integrating M AI models with N tools/data sources) into an M+N solution, greatly reducing the custom code and adapters integration requires.


  • Faster development: developers can leverage pre-built MCP clients and servers, saving the time and effort of building custom integrations. Tool and API developers only need to build one MCP server, which all compatible MCP clients can then use.


  • Improved scalability and reliability: standardizing the architecture helps build more robust and scalable AI systems. Maintaining context across tools also becomes easier, because all interactions share a common framework.


  • Vendor-agnostic development: as an open standard, MCP does not restrict developers to a specific AI provider's ecosystem or toolchain. Any AI client that supports MCP (such as Claude or open-source LLMs) can use any compatible MCP server.


  • Enhanced contextual awareness for AI agents: the context layer usually has to be defined by developers themselves; as an open protocol, MCP can draw on the power of the community to accelerate this work and help the context layer achieve the best results.


  • Realizing the potential of "do-everything apps": with the appropriate MCP servers, users can turn any MCP client into a "do-everything app".
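The M×N-to-M+N claim above is easy to quantify; the numbers below are illustrative, not taken from the article:

```python
# Illustrative counts: point-to-point integration needs one custom connector
# per (model, tool) pair, while MCP needs one client per model plus one
# server per tool.
models, tools = 5, 20

point_to_point = models * tools  # 5 x 20 = 100 custom integrations
with_mcp = models + tools        # 5 + 20 = 25 standardized components

print(point_to_point, with_mcp)  # 100 25
```

The gap widens as either side of the ecosystem grows, which is the core economic argument for a shared protocol.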


6. Comparison between MCP and API

  • Definition


    • MCP : MCP is a standardized protocol designed to provide contextual information to large language models (LLMs) so that they can interact more effectively with external tools and data sources. It emphasizes the context and semantic background of the data, enabling the model to understand and process complex structured data.

    • API : API is a set of rules and tools that allow communication between different software applications. It defines the format of requests and responses, enabling developers to integrate the functionality of different services without having to understand their internal implementation details.




Application scenarios of MCP:

  • Suitable for complex AI applications that require multiple tools and data sources to work together, such as smart assistants, data analysis platforms, etc.

  • Supports real-time data interaction and dynamic tool calls, enabling AI models to flexibly adjust their behavior based on context changes.


Application scenarios of APIs:

  • Commonly used for simple service integration, such as obtaining weather information, payment processing, etc.

  • It is widely used in traditional enterprise systems to implement function calls between modules.


7. MCP Ecosystem

An ecosystem is rapidly forming around MCP:

  • MCP Clients : Currently, most high-quality MCP clients are centered around programming. For example, Cline is an open source VSCode plugin that enables developers to easily extend the functionality of AI and even create completely customized agents by combining it with the MCP protocol. Claude Desktop itself is also an MCP client that allows users to connect to and use various MCP servers. More business-oriented MCP clients are expected to appear in the future.


  • MCP Servers : The number of MCP servers is growing rapidly, covering a variety of use cases such as accessing the local file system, querying databases, sending emails, generating images, interacting with design tools such as Figma and Blender, controlling music software such as Ableton Live, and conducting web searches. Many leading database companies, coding companies, and startups are developing their own servers. There are already more than 2,000 MCP servers on GitHub, with the most common use cases being search and data retrieval.


  • MCP Marketplace : As the number of MCP servers increases, a unified MCP marketplace may emerge, allowing AI agents to select the appropriate server based on factors such as speed, cost, and relevance.


  • MCP Infra : The infrastructure around MCP is developing, including server generation tools (such as Mintlify, Stainless, Speakeasy) to reduce the friction of creating MCP-compatible services, and managed solutions (such as Cloudflare, Smithery) to solve deployment and scaling challenges. Connection management platforms (such as Toolbase) are also beginning to simplify local-first MCP key management and proxying. The goal is to make MCP more reliable, easier to deploy in production environments, and more scalable.


8. Challenges and Limitations of MCP

Although MCP has made significant progress, there are still some unresolved issues in building and using MCP:

  • Lack of built-in workflow concept : Most AI workflows require multiple tool calls in sequence, but MCP lacks a built-in workflow concept to manage these steps.


  • Standardizing the client experience: how clients should choose among tools is a common question when building MCP clients. It is not yet clear whether every client needs to implement its own tool RAG, or whether this is a layer waiting to be standardized.


  • Security and Authentication : Standardized security mechanisms are one of the most pressing needs in the MCP specification. In the future, it is expected that an authentication layer will be defined, such as an OAuth-like flow or an API key standard, so that clients can connect to remote servers securely. A permission model may also be introduced.


  • Remote MCP support : Most current MCP servers are local-first, limiting their scalability. As the ecosystem grows, making remote MCP a first-class citizen and adopting a streamable HTTP transport is expected to promote wider adoption of MCP servers.


  • Server Discovery and Management : As the number of MCP servers increases, it becomes increasingly important to simplify server discovery, sharing, and contribution. A two-sided platform (MCP Marketplace) similar to an app store may emerge.


  • Multi-tenancy support : In the future, there may be an MCP gateway or orchestration layer that acts as a unified endpoint to aggregate multiple MCP services, handle routing and even high-level decisions about which tool to use, and manage multi-tenancy and enforce policies.


  • Optimized AI Agents : In the future, AI models fine-tuned specifically for tool use and MCP may emerge. These models will have a deeper understanding of the protocol, know how to accurately format requests, and may be trained on logs of successful MCP-based operations, thereby improving efficiency and reliability.


9. Comparison of MCP with existing solutions

  • Comparison with APIs : Compared to traditional APIs, which typically provide fine-grained control and highly specific functionality, MCP provides broader, more dynamic capabilities that are more suitable for scenarios that require flexibility and context-awareness. For applications that require fine-grained control, performance optimization, and maximum predictability, you may still prefer to use fine-grained APIs.


  • Relationship with RAG and Agents: AI models already benefit from RAG and agent patterns; MCP aims to standardize how such information is exposed to language models. Agents can be seen, to some extent, as an extension of RAG, and the emergence of MCP maximizes the value of context.


  • Comparison with OpenAI function calling, GPTs, and the Agents SDK: MCP can be seen as a culmination of existing middleware. It draws on the ideas behind OpenAI function calling and GPTs, but is more lightweight and open. Compared with the still-developing OpenAI Agents SDK, MCP is more open and flexible, though it may not yet be as polished or efficient.


  • Comparison with LangChain and LlamaIndex: LangChain and LlamaIndex attempt to build agent frameworks, but because of their heavy abstraction and framework complexity, many LLM developers turn to building their own after initial use. The emergence of MCP has had a significant impact on them.


10. Conclusion

The Model Context Protocol (MCP) represents a major advance in AI tool integration. By providing a standardized open protocol, it greatly simplifies the connection between LLM applications and the outside world, accelerating the development of AI applications and improving their scalability and reliability. MCP's success lies not only in its technical design, but also in its strong backers, proven track record, and actively developing ecosystem. Although some challenges remain, MCP is expected to become a key middle layer for agentic AI and to bring new opportunities to developers and startups. Just as USB-C unified the connectors of electronic devices, MCP has the potential to become the common language of the AI-native era, promoting smoother and safer collaboration between people and AI through software.

References:


1. MCP official documentation: https://modelcontextprotocol.io/introduction

2. MCP servers: https://github.com/modelcontextprotocol/servers.git

3. MCP python-sdk: https://github.com/modelcontextprotocol/python-sdk.git

4. awesome-mcp-servers: https://github.com/punkpeye/awesome-mcp-servers.git

5. awesome-mcp-clients: https://github.com/punkpeye/awesome-mcp-clients.git

6. Smithery service hosting platform: https://smithery.ai/

7. Claude Desktop download: https://claude.ai/download

8. Cherry Studio download: https://cherry-ai.com/

9. Cline plugin: https://github.com/cline/cline

10. Cloudflare: https://www.cloudflare.com/products/registrar/

11. OpenRouter: https://openrouter.ai/

