How to use MCP servers


MCP has suddenly become very popular. Here I'll share my experience using it (a write-up on development is planned for the next article). In the face of all this change, what stays the same? At the end of the article I'll imagine the future a little. Perhaps that's the romance of us technical people. :)

Preface

Technology around large models changes by the day. I remember writing an article about Cursor two months ago, in which I said that Cline's MCP support left a lot of room for imagination. So I did some research and drafted several related articles before the New Year. Family matters kept me from polishing and publishing them, and I worried that if I waited too long they would go stale. But sometimes we shouldn't look only at the technology itself; we should think about the value behind it, for example, what ecological niche does it occupy?

Ever since function calling appeared, it has given the model's capabilities far more room for imagination. Regrettably, the feature did not catch on quickly, and there are probably many reasons. For one, supporting function calls well requires certain capabilities from the model itself, and many popular large models support them poorly or not at all, DeepSeek among them. (Note: Google's newly open-sourced Gemma 3 claims function-call support even at very small sizes. If you have time, test it; if it works, we can save a lot of cost. Watch for follow-up articles.) At the same time, the barrier to using function calls is fairly high, and every vendor's standard is different: OpenAI and Google define function calls differently, so related work is hard to reuse. Against this backdrop, Anthropic, the company behind the Claude models, proposed a universal interaction protocol for large models: MCP.
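To see the fragmentation concretely, here is a minimal OpenAI-style tool declaration (a sketch; the get_current_time tool here is hypothetical). Gemini expects a similar but incompatible function-declaration structure, so a definition written for one vendor cannot be dropped into another unchanged:

{
  "type": "function",
  "function": {
    "name": "get_current_time",
    "description": "Get the current time in a given IANA timezone",
    "parameters": {
      "type": "object",
      "properties": {
        "timezone": { "type": "string", "description": "e.g. Asia/Shanghai" }
      },
      "required": ["timezone"]
    }
  }
}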

There have been plenty of articles about MCP already, so let's focus on what it can actually do. That brings us to today's protagonist: the mcp-server. New mcp-servers are developed every day. This article discusses how to make good use of MCP's capabilities from a user's perspective; later articles may cover the implementation principles of mcp-server and how to develop practical MCP-related tools (not just mcp-servers) from a developer's perspective.

MCP Client Selection

Starting with Anthropic's own Claude Desktop, there are already quite a few applications that use MCP, such as:

  • Cursor
  • Windsurf
  • Cline (a VS Code extension)

However, most of them are integrated into code editors; programmers, it seems, are the most receptive to automation and intelligence. :) Different apps support the MCP protocol to different degrees, and Claude's support is naturally the best. For details, see the MCP Feature Support Matrix [1].

Because of network problems in China, I don't recommend Claude Desktop. Although Cursor 0.46/0.47 recently fixed its MCP support bug, it still has plenty of problems; if you edit and debug an mcp-server inside Cursor, be prepared to fall into a pit at any moment. Cline, built on VS Code, is much simpler and more convenient: configure a model and you can start using MCP happily. Especially now that it has a Marketplace, you can easily install and use all kinds of mcp-servers. In terms of stability of MCP support, Cline is solid and generally behaves as expected; for example, I used Cline with the Perplexity API to retrieve some information.

Windsurf also offers a good MCP experience; screenshots of its process UI appear later in the article, and it is prettier and easier to follow.

But sometimes we don't want to write code, we just want to play with the tools. For that I found a purer, more compact tool: mcphost (https://github.com/mark3labs/mcphost). Its README describes it as:

A CLI host application that enables Large Language Models (LLMs) to interact with external tools through the Model Context Protocol (MCP). Currently supports both Claude 3.5 Sonnet and Ollama models.

Using mcphost

I first used mcphost to debug mcp-servers, but then realized it is really a pure MCP client that can easily call all kinds of mcp-servers. Its features, briefly:

  • Have an interactive conversation with the Claude 3.5 Sonnet or Ollama model
  • Support multiple concurrent MCP servers
  • Dynamic tool discovery and integration
  • Tool call capabilities for both model types
  • Configurable MCP server location and parameters
  • Consistent command interface across model types
  • Configurable message history window for context management

We can install mcphost with one command (assuming Go is already installed):

go install github.com/mark3labs/mcphost@latest
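Note that go install drops the binary into Go's bin directory, which may not be on your PATH yet. A quick check (a sketch, assuming a default Go setup):

export PATH="$PATH:$(go env GOPATH)/bin"   # usually ~/go/bin
mcphost --help                             # verify the binary is reachable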

Then let's configure a simple mcp-server to play with. Create an mcp.json file with the following content:

{
  "mcpServers": {
    "time": {
      "command": "uvx",
      "args": ["mcp-server-time", "--local-timezone=Asia/Shanghai"]
    }
  }
}

This mcp-server is run via uvx, so if you don't have uvx installed locally, it's best to install it in advance.

uvx is a lightweight tool for running Python tools and scripts in a throwaway environment. Its ready-to-run behavior and use of temporary virtual environments remove the complexity of environment and tool-version management, which makes it ideal for quickly trying out a tool or running one-off tasks (e.g. in CI/CD).
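If you don't have uvx yet, the usual route is to install uv, which ships it. A sketch based on uv's published install methods:

curl -LsSf https://astral.sh/uv/install.sh | sh      # official installer
# or: pip install uv
uvx mcp-server-time --local-timezone=Asia/Shanghai   # optional smoke test; it waits for an MCP client on stdin, Ctrl+C to exit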

Then run mcphost:

export OPENAI_API_KEY=sk-xxx
mcphost --openai-api-key=$OPENAI_API_KEY -m openai:gpt-4o --config mcp.json
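If you prefer a local model, mcphost also supports Ollama, presumably via the same -m provider:model convention. A sketch (the model tag is only an example; use a tool-capable model you have pulled):

mcphost -m ollama:qwen2.5 --config mcp.json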

After successfully loading mcp-server, it enters interactive mode:

2025/03/15 17:33:04 INFO Model loaded provider=openai model=gpt-4o
2025/03/15 17:33:04 INFO Initializing server... name=time
2025/03/15 17:33:11 INFO Server connected name=time
2025/03/15 17:33:11 INFO Tools loaded server=time count=2



  • time
    • get_current_time
      • Get current time in a specific timezones
    • convert_time
      • Convert time between timezones



  You: What time is it now?

  Assistant:

2025/03/15 17:33:22 INFO ? Using tool name=time__get_current_time
2025/03/15 17:33:22 INFO Usage statistics input_tokens=221 output_tokens=20 total_tokens=241

  Assistant:


  It is now 17:33 Beijing time (5:33 pm).


┃ Enter your prompt (Type /help for commands, Ctrl+C to quit)

The large model, which originally had no concept of time, now has a firm grasp of the current time. Then I asked one more question:

  You: What is the current lunar date?

  Assistant:


  The current date is March 15, 2025. I cannot calculate the corresponding lunar date directly. But you can use the lunar calendar conversion tool or query the relevant calendar to get the corresponding lunar date. If you need my help to find other relevant information, please let me know!


┃ Enter your prompt (Type /help for commands, Ctrl+C to quit)

The tools at hand are no longer enough; time to take the next step.

PS: You can use /help to see the commands mcphost supports, such as /tools and /servers, and try them out.
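For instance, assuming the command names match the hint above, you can inspect a session like this:

  You: /servers   (lists the servers configured in mcp.json)
  You: /tools     (lists the tools each connected server exposes)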

Introduction to some useful mcp-servers

Tavily Search

As shown above, I wanted to know the lunar date. There may well be an mcp-server that answers this directly, but I figured that as long as the large model can reach the Internet, it should be able to answer too. So let's add another mcp-server to the mcp.json above: Tavily, a search engine. You need to apply for a Tavily API key [2].

{
  "mcpServers": {
    "time": {
      "command": "uvx",
      "args": ["mcp-server-time", "--local-timezone=Asia/Shanghai"]
    },
    "tavily": {
      "command": "npx",
      "args": ["-y", "@mcptools/mcp-tavily"],
      "env": {
        "TAVILY_API_KEY": "{{You need to apply for Tavily's API key; the free tier is enough}}"
      }
    }
  }
}

Next, ask again:

  You: What is the current lunar date?

  Assistant:

2025/03/15 17:47:03 INFO ? Using tool name=time__get_current_time
2025/03/15 17:47:03 INFO Usage statistics input_tokens=945 output_tokens=20 total_tokens=965

  Assistant:

2025/03/15 17:47:04 INFO ? Using tool name=tavily__searchQNA
2025/03/15 17:47:04 INFO Usage statistics input_tokens=1002 output_tokens=30 total_tokens=1032

  Assistant:


  The current lunar date is February 16, 2025, which belongs to the year of Yi Si, the month of Jimao, and the day of Gui Wei.

  If you need more detailed information, you can check the relevant almanac and lunar calendar query websites, such as: Mr. Shui Mo https://m.smxs.com/hl/nongli_2025-3-15.html.


┃ Enter your prompt (Type /help for commands, Ctrl+C to quit)

It even lists its source, and the link really is valid. With Internet access, the hallucination problem improves a lot.

Perplexity Search

If, like me, you're used to searching with Perplexity, you can use Perplexity's mcp-server to search at will. Note that it goes through the Sonar API for online search, which may differ slightly from the modes on the official site; deep research, for instance, is probably not available. I look forward to it maturing, but it already covers daily use. Again we need to register an API key, and here there is no free quota, so you may need to add a credit card or Google Pay before you can call it. Installing this mcp-server is also a little more troublesome: it hasn't been published to the npm registry, so we have to clone and build it ourselves.

git clone https://github.com/ppl-ai/modelcontextprotocol.git
cd modelcontextprotocol/perplexity-ask
npm install
npm run build

After that, there will be a dist/index.js file; we can modify mcp.json to have node run it.

{
  "mcpServers": {
    "time": {
      "command": "uvx",
      "args": ["mcp-server-time", "--local-timezone=Asia/Shanghai"]
    },
    "perplexity-ask": {
      "command": "/opt/homebrew/bin/node",
      "args": ["{{your local code path}}/modelcontextprotocol/perplexity-ask/dist/index.js"],
      "env": {
        "PERPLEXITY_API_KEY": "{{Your Perplexity API KEY}}"
      }
    }
  }
}

Here it is in action:

  • time
    • get_current_time
      • Get current time in a specific timezones
    • convert_time
      • Convert time between timezones
  • perplexity-ask
    • perplexity_ask
      • Engages in a conversation using the Sonar API. Accepts an array of messages (each with a role and content) and returns a ask completion response from the Perplexity model.



  You: Briefly summarize DeepSeek's recent open-source projects and their uses, in 500 words

  Assistant:

2025/03/15 19:27:54 INFO ? Using tool name=perplexity-ask__perplexity_ask
2025/03/15 19:27:54 INFO Usage statistics input_tokens=348 output_tokens=49 total_tokens=397

  Assistant:


  Recently, DeepSeek has open-sourced several important projects aimed at improving the training and reasoning efficiency of artificial intelligence (AI) models, including:

  1. FlashMLA: This is an efficient multi-head attention decoding kernel optimized for Hopper GPUs. It can significantly improve video memory utilization and inference speed, giving old graphics cards a new lease of life, thereby improving the operating efficiency of the entire model.
  2. DeepEP: An efficient communication library designed for Mixture of Experts (MoE) models. By optimizing inter-node communication, it can accelerate the training and inference of large models, reduce communication latency, and improve overall performance.
  3. DeepGEMM: This is a lightweight, high-performance matrix multiplication library that supports FP8 precision and is designed to improve the efficiency of large-scale matrix operations. This library provides basic support for the calculation of deep learning models, especially when processing large-scale data.
  4. DualPipe and EPLB: These two projects are optimization solutions for parallel training, including bidirectional pipeline parallel algorithms and expert parallel load balancing, which can greatly improve the efficiency of distributed training and help researchers train models faster.
  5. 3FS: This is a high-performance distributed file system designed for AI training and reasoning, aiming to solve the problem of high-speed access and management of massive data. It makes data management more efficient and provides fast data access capabilities for AI models.

  These projects cover the entire chain of AI development, from underlying hardware optimization to model training and reasoning acceleration, to data storage and communication efficiency improvement. Their goal is to lower the threshold for AI development, improve resource utilization, accelerate model training and reasoning, and promote the popularization and innovation of AI technology. This series of open source initiatives is regarded as an important breakthrough in the field of AI infrastructure and is expected to reshape the industry landscape.


┃ Enter your prompt (Type /help for commands, Ctrl+C to quit)

I can't vouch for every detail, but it looks very capable. Some other search tools I recommend:

  • Brave Search: very convenient to use [3] (much easier than the one above).
  • Firecrawl: see firecrawl-mcp-server [4], which supports deep research and other search modes. It is said to have a very high success rate for web crawling, and registration comes with a small free quota.

Of course, all of these require an API key. Register for whichever you prefer; as adults, we don't have to choose just one.

server-filesystem file system operations

In the past, we either hunted through files one by one by hand or fell back on command lines and string matching. With an mcp-server, we can interact with the file system conveniently and intelligently. As before, add a configuration block (inside "mcpServers") and play:

  "filesystem" : {
      "command""npx" ,
      "args" : [
        "-y" ,
        "@modelcontextprotocol/server-filesystem" ,
        "{{your local file system path 1}}" ,
        "{{your local file system path 2}}"
      ]
    }

For example, we can talk to it like this:

  You: What projects are in my Source directory?

  Assistant:

2025/03/16 06:47:42 INFO ? Using tool name=filesystem__list_allowed_directories
2025/03/16 06:47:42 INFO Usage statistics input_tokens=2433 output_tokens=16 total_tokens=2449

  Assistant:

2025/03/16 06:47:44 INFO ? Using tool name=filesystem__list_directory
2025/03/16 06:47:44 INFO Usage statistics input_tokens=2469 output_tokens=25 total_tokens=2494

  Assistant:


  In your Source directory, there are the following projects:

  1. blog (directory)
  2. mcp-test (directory)
  3. modelcontextprotocol (directory)

...

  You: What articles were published in the blog directory in 2024?

  Assistant:

2025/03/16 06:48:13 INFO ? Using tool name=filesystem__list_directory
2025/03/16 06:48:13 INFO Usage statistics input_tokens=2595 output_tokens=26 total_tokens=2621

  Assistant:

2025/03/16 06:48:15 INFO ? Using tool name=filesystem__list_directory
2025/03/16 06:48:15 INFO Usage statistics input_tokens=2734 output_tokens=27 total_tokens=2761

  Assistant:

2025/03/16 06:48:16 INFO ? Using tool name=filesystem__list_directory
2025/03/16 06:48:16 INFO Usage statistics input_tokens=2789 output_tokens=28 total_tokens=2817

  Assistant:

2025/03/16 06:48:18 INFO ? Using tool name=filesystem__list_directory
2025/03/16 06:48:18 INFO Usage statistics input_tokens=2886 output_tokens=31 total_tokens=2917

  Assistant:


  In your blog directory, the articles published in 2024 are:

  1. 2024-01-08-summarize
  2. 2024-01-21-using-vscode
  3. 2024-02-19-tl-r479gp-proxy
  4. 2024-02-24-speed-up-your-network
  5. 2024-03-05-using-webhook
  6. 2024-03-10-monitor-and-alert-traffic
  7. 2024-03-31-oneapi-for-all-models
  8. 2024-06-10-oneapi-and-models-tips
  9. 2024-06-15-ai-with-alfred
  10. 2024-07-07-secret-watcher
  11. 2024-07-21-remember-me
  12. 2024-07-29-talk-with-ai
  13. 2024-08-25-usg-to-mikrotik
  14. 2024-09-30-steamdeck-to-ps5
  15. 2024-10-16-create-union-k8s-cluster
  16. 2024-11-16-mac-mini-m4
  17. 2024-11-25-mac-mini-play-games

  These are the catalogues of articles published in 2024.

Uh, I've accidentally exposed my roughly once-a-month publishing habit.

Obsidian Operation Tool

Smithery.ai is a beginner-friendly mcp-server platform. We can use it to configure an mcp-server through an interactive interface that prompts for the necessary settings. As an example, look at https://smithery.ai/server/obsidian-mcp; on the right, you can choose the installation method matching your client application.

For example, I chose Windsurf, copied the command, and ran it locally:

❯ npx -y @smithery/cli@latest install obsidian-mcp --client windsurf
✔ Successfully resolved obsidian-mcp
? The absolute path to your Obsidian vault. (required) {{The absolute path to your local Obsidian Vault}}
obsidian-mcp successfully installed for windsurf

Then you can use it directly in Windsurf. Let's take a look at the generated configuration:

    "obsidian-mcp" : {
      "command""npx" ,
      "args" : [
        "-y" ,
        "@smithery/cli@latest" ,
        "run" ,
        "obsidian-mcp" ,
        "--config" ,
        "\"{\\\"vaultPath\\\":\\\"{{absolute path of your local Obsidian Vault}}\\\"}\""
      ]
    }
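The heavily escaped --config value is just a JSON document serialized into a shell-quoted string; decoded, it is simply:

{ "vaultPath": "{{absolute path of your local Obsidian Vault}}" }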

Now just ask whatever you like and interact with your Obsidian knowledge base.

From here you can talk to AI about your personal knowledge base. I'll share more detailed practices later; I'm considering a separate article on combining Obsidian with large models.

Current issues and future prospects

In actual use, both the applications that support MCP and the maturity of mcp-servers themselves still have a long way to go; right now it feels like a period of wild growth. For example:

  • Related applications are not rich enough and hardly seem aimed at ordinary users; asking them to install a code editor is absurd, and Claude Desktop isn't available in China.
  • The same function often has many mcp-server implementations. Bugs aside, dependency management alone, plus the versions that require manual building, keeps the barrier high for ordinary users.
  • As this article shows, using mcp-servers means applying for one API key after another, which adds cost and complexity.
  • MCP needs a large model that can call tools reasonably reliably per the protocol, and the models themselves need time to get there. At present I use gpt-4o-mini, and DeepSeek fails from time to time. Give them time.

But none of this is a reason to dismiss MCP. On the contrary, I'd rather believe MCP will be a trend and, as others have described, become a USB-like standard: connect, connect, connect.

Imagine that, to win users, AI apps like Yuanbao and Doubao also come to support MCP, letting users apply intelligence to more scenarios. Big companies can't implement every function themselves, so they will still call on other platforms' capabilities, right? How it all gets integrated is interesting to think about.

Imagine further that our second brain is a large model: we give it requirements and it decides which tools to call. Tools being called should see a wave of traffic early on, because as everyone builds their own agents, they will ultimately need the terminal's "hands" to do the work; in the long run, though, each tool's fate will depend on its quality and on how well it understands and satisfies the requirements.

After learning about MCP and having some practical experience, you will find that it can be applied in many scenarios in work and daily life. For example:

  • When a chat has too much content, I want AI to summarize it; for follow-ups and notes, I just mention them or @ a tool, and everything gets categorized and filed in the right place automatically.
  • Demand tickets and bug reports keep pouring in; the ones that need my attention should be screened by AI. The screening rules are too complicated for me to remember, and I only speak natural language.
  • My go-to launcher, Alfred, should also become an MCP-enabled application that fills in interactions automatically, so I never mistype a command again.

Kai-Fu Lee's observation that "all industries will be touched, changed, transformed and made more efficient by artificial intelligence" is insightful: once you have seen an efficient, intelligent workflow, you can never go back to the old one.

This article is just a start. The next one may cover the experience of developing an mcp-server, including some debugging techniques. And if I come across more AI practices that strike me as interesting and valuable, I'll share those too. See you next time.