MCP is a bridge

Written by
Iris Vance
Updated on: July 12, 2025

Preface

A few days ago, Mr. Liu was grumbling about something. I remember one question in particular: can you guess why KCD is called KCD? It struck me as a good question, so today, as a Kubernetes veteran, I'd like to talk about AI.

Since ChatGPT came out I have kept half an eye on it, and by now I must have paid a few hundred dollars in "AI tax." Beyond chatting, I have also tried using the popular large models to solve practical problems. Over this period, my effective uses of AI have settled into a few patterns:

  1. Copy polishing: not plagiarism, of course. I often hand a draft to something like GPT to check for typos and polish the wording.
  2. Translation: traditional translation tools, DeepL included, cannot cope with messy input (random line breaks in PDFs, code and tags in HTML, and so on), but a large model handles these cases easily.
  3. Coding assistance: function-level code, unit tests, reading and explaining code, even tracing configuration parameters or locating and debugging specific functions. Both Windsurf and Cursor have far exceeded my expectations.
  4. Data query and integration: both Search and Research features fall into this category.

Tools aside, though, have I actually integrated large-model capabilities into my real work?

An operations veteran

In operations, one of a veteran's greatest assets is being well-informed. But to today's mainstream large models, the log output of well-known software is no secret at all. So I wrote a little gadget called Pipe2GPT that lives on my Mac and my home server: whenever I hit puzzling STDOUT/STDERR, I just pipe it over. In most cases it gives answers no worse than Stack Overflow's, and most importantly, it answers in coherent, well-organized prose, which matters a great deal.
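
The Pipe2GPT tool itself isn't public, so the sketch below is a guess at its shape: the prompt wording and the OpenAI-style chat payload are assumptions; only the "pipe STDOUT/STDERR into a model" idea comes from the text.

```python
# Hypothetical Pipe2GPT-style sketch: wrap piped program output in a chat
# request. The system prompt and payload shape are assumptions, not the
# author's actual code.
SYSTEM_PROMPT = (
    "You are an experienced operations engineer. Explain the following "
    "program output, point out any errors, and suggest next debugging steps."
)

def build_payload(piped_text: str, model: str = "gpt-4o") -> dict:
    """Wrap whatever arrived on the pipe into a chat-completion request."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": piped_text},
        ],
    }

# Intended shell usage (payload then POSTed to your model endpoint):
#   some_failing_command 2>&1 | pipe2gpt
# i.e. build_payload(sys.stdin.read()) -> HTTP request -> print the answer.
```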

Baby-soothing gadgets

I have a workflow that generates Cantonese fairy tales from a few keywords, then uses a dialect TTS to read them aloud to my child. A small contribution to preserving Cantonese, perhaps?
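
The author's actual workflow isn't shown, so here is a minimal sketch of its shape only: the prompt wording is made up, and the model and TTS calls are injected as stand-in functions so any vendor could plug in.

```python
# Hypothetical sketch of the keywords -> Cantonese story -> TTS pipeline.
def story_prompt(keywords: list[str]) -> str:
    """Turn a few keywords into a generation prompt for a Cantonese story."""
    return (
        "請用廣東話寫一個適合小朋友嘅短篇童話故事，"
        f"故事需要包含以下元素：{'、'.join(keywords)}"
    )

def pipeline(keywords: list[str], generate, synthesize) -> bytes:
    """generate: prompt -> story text (an LLM call in practice);
    synthesize: story text -> audio bytes (a dialect TTS in practice)."""
    story = generate(story_prompt(keywords))
    return synthesize(story)
```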

Modulation and Demodulation

This is essentially translation again: using the model's ability to translate and reorganize information to generate new forms of it, including but not limited to:

  1. Extracting standardized information from natural-language web pages, such as announcements and notifications, and handing it to other systems for further processing. This pattern is widespread and well suited to small-scale collection work.
  2. Cloud SDK to IaC: take virtual machines, for example. For the same 4-core/8 GB spec, every vendor offers multiple instance types, and each Terraform provider expresses them differently. With a large model's help, it is easy to convert between vendors' SDK formats and between different pieces of IaC code.
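
Pattern 1 can be sketched roughly as follows. The field schema and the stubbed model reply here are invented for illustration; in practice the prompt goes to whatever LLM you use, and the parser has to tolerate prose or code fences around the JSON.

```python
import json
import re

# Illustrative schema for an announcement page; the fields are assumptions.
SCHEMA = {"title": "str", "date": "YYYY-MM-DD", "affected_service": "str"}

def extraction_prompt(page_text: str) -> str:
    """Ask the model for JSON-only output matching the schema."""
    return (
        "Extract the following fields from the announcement below and reply "
        f"with JSON only, matching this schema: {json.dumps(SCHEMA)}\n\n"
        + page_text
    )

def parse_reply(reply: str) -> dict:
    """Models often wrap JSON in prose or fences; pull out the object."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model reply")
    return json.loads(match.group(0))
```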

Still, riding this wave of popularity always felt a bit rough. The application side and the model side stayed cleanly separated, each working on its own. Training a model was beyond my means, and as for integration, my limited architecture skills meant every slightly different requirement forced substantial code changes. When integrating with commercial data systems in particular, the lack of best-practice guidance left everything feeling improvised, which was very frustrating.

MCP

I came across Claude's MCP not long ago, and this high-end model family is starting to show some real appeal: at last there is a relatively disciplined way to connect "traditional" services and systems to the various large models.

MCP stands for Model Context Protocol. The official introduction says:

The Model Context Protocol (MCP) is an open protocol that enables seamless integration between LLM applications and external data sources and tools. Whether you are building an AI-driven IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to connect LLMs with the context they need.

Currently, SDKs are provided for TypeScript, Python, Java, and Kotlin.

The official architecture diagram is as follows:


At its core, MCP follows a client-server architecture where a host application can connect to multiple servers:

```mermaid
flowchart LR
    subgraph "Your Computer"
        Host["MCP Client (Claude, IDEs, Tools)"]
        S1["MCP Server A"]
        S2["MCP Server B"]
        S3["MCP Server C"]
        Host <-->|"MCP Protocol"| S1
        Host <-->|"MCP Protocol"| S2
        Host <-->|"MCP Protocol"| S3
        S1 <--> D1[("Local Data Source A")]
        S2 <--> D2[("Local Data Source B")]
    end
    subgraph "Internet"
        S3 <-->|"Web APIs"| D3[("Remote Service C")]
    end
```
  • MCP Hosts: programs like Claude Desktop, IDEs, or AI tools that want to access data through MCP
  • MCP Clients: protocol clients that maintain one-to-one connections with servers
  • MCP Servers: lightweight programs that expose specific capabilities through the standardized Model Context Protocol
  • Local data sources: files, databases, and services on your computer that MCP servers can securely access
  • Remote services: external systems reachable over the network (e.g., via APIs) that MCP servers can connect to

As the architecture diagram shows, MCP defines a behavior specification together with the communication methods and objects it depends on. The LLM client application, acting as an MCP Host (running MCP Clients), reaches local resources and external services through MCP Servers, forming a complete data path that lets the data and capabilities those servers provide be used directly in the LLM application.

MCP's core concepts include Resources and Tools for describing atomic capabilities, Prompts for reusing prompt templates, and Sampling for controlled text generation. Beyond these, it also offers fairly complete recommendations and best practices for transport, security, sensitive data, and so on. So although it has shortcomings, such as supporting only local invocation for now, MCP is still a very promising direction for building LLM applications (and if it's not good enough yet, well, "if you can do better, go ahead").
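
On the wire, MCP messages are JSON-RPC 2.0; for example, a client invokes a server Tool with a `tools/call` request. A minimal sketch of just the request framing (the stdio transport and the response handling are omitted):

```python
import json
from itertools import count

# JSON-RPC 2.0 request ids must be unique per session.
_ids = count(1)

def tools_call(name: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request; field names follow the MCP spec."""
    request = {
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }
    return json.dumps(request)
```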

Example

The official documentation provides a weather-forecast sample, a typical case of fetching real-time information from an external service to use as context in the LLM. The example has three parts:

  1. Server: implementations in several languages, defining two Tools, get_forecast and get_alerts
  2. Client: how to build a bot that uses the server developed above
  3. How to use the MCP server in the Claude App

The main "business" the example expresses is fetching (US) weather information into the LLM and combining it with the LLM's own capabilities to answer the user's request.

What happens when a question is asked?

  1. The client sends the question to Claude
  2. Claude analyzes the available tools and decides which to use
  3. The client executes the chosen tool through the MCP server
  4. The results are sent back to Claude
  5. Claude answers the question based on the tool's response
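
The five steps above can be sketched as a client-side loop. `ask_model` and `call_mcp_tool` below are stand-ins for the real Anthropic API call and the MCP client session; only the control flow is meant to be accurate.

```python
# Hypothetical tool-use loop: ask the model, execute any tool it requests
# via MCP, feed the result back, and stop once a plain answer arrives.
def answer(question: str, ask_model, call_mcp_tool) -> str:
    messages = [{"role": "user", "content": question}]
    while True:
        reply = ask_model(messages)                           # steps 1-2
        if reply.get("tool_call") is None:
            return reply["text"]                              # step 5
        result = call_mcp_tool(**reply["tool_call"])          # step 3
        messages.append({"role": "tool", "content": result})  # step 4
```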

Enabling MCP in the Claude App

In the app's settings window, go to the Developer tab and edit the settings file directly, adding the following definition:

```json
{
  "mcpServers": {
    "weather": {
      "command": "uv",
      "args": [
        "--directory",
        "/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather",
        "run",
        "weather.py"
      ]
    }
  }
}
```

After the server is enabled, a hammer icon appears at the bottom right of the Claude chat input box; click it to see the Tools provided by the currently enabled MCP servers.

Ecosystem

Quite a few tools already support MCP. The official client list: https://modelcontextprotocol.io/clients

Officially listed example servers: https://modelcontextprotocol.io/examples

Over 2,000 MCP Servers are listed on mcp.so.

Outlook

MCP's overall design is relatively simple, which makes it easy to join in, but that is also a precursor to fragmentation. It currently supports only local operation, which greatly reduces potential performance and security problems; for automation and real-time scenarios, however, MCP's current capabilities remain unclear.

In summary, and contrary to the community's general thinking, I personally believe that MCP, cheap as it is (in the best sense of the word), is an attractive solution in a proprietary large-model environment.