AI Novice Village: MCP

Exploring how the MCP protocol helps large language models extend their capabilities in the new era of AI technology.
Core content:
1. The limitations of large language models and the need to integrate external tools
2. The components of the MCP protocol and its role in standardizing tool access
3. The responsibilities of the MCP Client and Server, with a practical walkthrough
Background of MCP
Because of factors such as the cutoff date of training data and privacy constraints, the capabilities of large language models (LLMs) are not all-encompassing. To extend their functional boundaries, external tools are usually needed. These tools can be exposed as APIs or as private vector databases. A large language model selects the most appropriate tool from the many registered ones, either by matching against the prompt or by relying on each vendor's own structured function-calling mechanism.
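The selection step above can be sketched as follows. The registry format mimics the function-calling style, and the keyword-overlap scoring is a simplified stand-in: a real model scores tools with its own reasoning, and each vendor's schema differs (which is exactly the inconsistency MCP targets).

```python
import json

# Hypothetical tool registry in a function-calling style; the tool names,
# descriptions, and parameter schemas are invented for this illustration.
TOOLS = [
    {"name": "web_search",
     "description": "Search the web for current information",
     "parameters": {"query": {"type": "string"}}},
    {"name": "vector_lookup",
     "description": "Query a private vector database",
     "parameters": {"text": {"type": "string"}}},
]

def select_tool(prompt: str) -> dict:
    """Crude stand-in for the model's tool-selection step:
    score each tool by word overlap between its description and the prompt."""
    words = set(prompt.lower().split())
    return max(TOOLS,
               key=lambda t: len(words & set(t["description"].lower().split())))

choice = select_tool("search the web for today's news")
print(json.dumps({"tool": choice["name"]}))  # → {"tool": "web_search"}
```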
Furthermore, most tasks are hard to solve with a single tool; they often require coordinating multiple tools in sequence based on an understanding of the context. This gave rise to the concept of the Agent, a module responsible for coordination and management. The code editor Cursor is an Agent in the coding domain: with the assistance of a large model, it writes code to match the user's requirements, and if it finds that the runtime environment is missing dependencies, it can call the relevant tools to install and test them with the user's authorization.
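Such an agent loop can be sketched minimally as below, with a hard-coded planner standing in for the LLM's decision step; the two tools (install a dependency, then run tests) are hypothetical and mirror the Cursor scenario described above.

```python
# Hypothetical tools the agent can invoke; each reads and updates shared context.
def install_dependency(ctx):
    ctx["installed"] = True
    return "installed"

def run_tests(ctx):
    return "tests passed" if ctx.get("installed") else "environment missing"

TOOLS = {"install_dependency": install_dependency, "run_tests": run_tests}

def plan_next_step(context):
    """Stand-in for the LLM planner: pick the next tool from current state."""
    if not context.get("installed"):
        return "install_dependency"
    if not context.get("tested"):
        return "run_tests"
    return None  # task complete

def agent(task):
    context = {"task": task, "history": []}
    # The agent loop: plan, execute, record the result, repeat until done.
    while (step := plan_next_step(context)) is not None:
        result = TOOLS[step](context)
        context["history"].append((step, result))
        if step == "run_tests":
            context["tested"] = True
    return context["history"]

print(agent("run the new script"))
# → [('install_dependency', 'installed'), ('run_tests', 'tests passed')]
```

The point of the sketch is the shape of the loop, not the planner: in a real agent, the `plan_next_step` role is played by the model reasoning over the accumulated history.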
Although the approaches above can extend the capabilities of large models, the problem is that each model implements tool calling differently; even the same model may invoke different kinds of tools in different ways. MCP (Model Context Protocol) emerged to resolve this inconsistency. By following a single standard, every tool that connects to a large model can be integrated in a uniform way, which greatly reduces the complexity and effort of integrating tools with large models. For example, for a widely used web search service, you only need to write one server that complies with the MCP standard, and it can then be used by every large model that supports the protocol.
MCP structure
Specifically, MCP consists of two core parts: the MCP Client and the MCP Server. The MCP Client lives in the application the user interacts with (such as Cursor or Claude), and its responsibilities include:
Providing the large language model with an overview of the MCP protocol to keep interactions consistent
Sending requests to the MCP Server and returning the results to the large model
The MCP Server is an intermediary service that connects the LLM to external capabilities. Its responsibilities include:
Providing a standardized JSON-RPC interface through which tools and resources are accessed
Converting external APIs into an MCP-compatible format
Handling authentication
Declaring the capabilities it exposes to the outside
Resources, prompts, and tools are the three main types of capabilities an MCP Server can expose.
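To make the standardized interface concrete, the JSON-RPC 2.0 exchange for discovering and invoking a tool might look like the following. The method names follow the MCP specification's `tools/list` and `tools/call`; the tool name and arguments are invented for the example.

```python
import json

# Client asks the server which tools it exposes.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Client invokes one of them; "web_search" and its arguments are hypothetical.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "web_search", "arguments": {"query": "MCP protocol"}},
}

# The server replies with a result keyed to the same request id.
call_response = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {"content": [{"type": "text", "text": "...search results..."}]},
}

print(json.dumps(call_request, indent=2))
```

Because every server speaks this same message shape, a client such as Cursor can talk to any MCP server without per-tool integration code.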
The core idea of MCP is to give the LLM structured context: by combining past memory, available tools, and the current state, the model can step by step accomplish the objectives described in that context.
Example
Install Node.js and make sure the npx command is available.
Visit https://mcp.composio.dev/, find the service you want to integrate (Gmail, in this example), click the Generate button, and copy the generated command. Open a local terminal, paste the command, and press Enter to run it.
Open Cursor's MCP settings page, and you will see the newly added Gmail service.
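Behind that settings page, Cursor stores MCP servers in a JSON configuration. An entry for the Gmail service might look roughly like this; the server name and arguments below are illustrative placeholders, so use whatever the generated Composio command actually contains:

```json
{
  "mcpServers": {
    "gmail": {
      "command": "npx",
      "args": ["<arguments from the generated Composio command>"]
    }
  }
}
```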
In the chat box, switch to Agent mode and ask, "What emails did my Gmail receive yesterday?" The reply is shown in the figure below.
One More Thing
Besides the Composio platform, Smithery (https://smithery.ai/) is another excellent MCP platform worth exploring.