MCP Practical Guide

Written by
Caleb Hayes
Updated on: July 9, 2025
Recommendation

Explore new heights of AI productivity: this MCP practical guide shows you how.

Core content:
1. How MCP boosts AI model performance and productivity
2. A detailed introduction to two major MCPs, Sequential Thinking and Tavily, with application examples
3. How to find and use MCPs, plus an analysis of the core strengths of the MCP directory site MCP.so

Yang Fangxian
Founder of 53AI/Most Valuable Expert of Tencent Cloud (TVP)


MCP is like a super plug-in for a model. Once you install one, you will discover just how productive AI can be.

For example, I paired Claude 3.7 Sonnet with two MCPs, and it became a budget version of the Deep Research app.

One MCP is Sequential Thinking. It gives the model a structured thinking pattern that keeps multi-step reasoning logical and coherent: it breaks complex tasks into clear steps, and when new information emerges, it can flexibly adjust its thinking path.

The other MCP is Tavily, which I have introduced before: a search engine optimized for AI models.

With these two installed, Claude searches and thinks at the same time. Based on what it finds, it adjusts its reasoning path and launches the next round of searches; when it judges that the information is sufficient and the logic is complete, it outputs the final report.
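The search-and-think loop I just described can be sketched in a few lines of Python. This is a hedged illustration of the control flow, not Claude's actual internals: `think` and `search` here are hypothetical stand-ins for the Sequential Thinking and Tavily MCPs.

```python
# Sketch of the research loop: alternate thinking steps with searches
# until the model decides the evidence is sufficient. The `think` and
# `search` callables are hypothetical stand-ins for the two MCPs.

def deep_research(question, think, search, max_rounds=5):
    """think(question, notes) -> (next_query | None, done);
    search(query) -> list of findings. Both are supplied by the caller."""
    notes = []
    for _ in range(max_rounds):
        next_query, done = think(question, notes)
        if done or next_query is None:
            break  # the model judges the information sufficient
        notes.extend(search(next_query))  # gather evidence for the next step
    return notes
```

The `max_rounds` cap is a design choice: it bounds token spend, which matters given how quickly this kind of loop burns through tokens.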

After this long process, I spent $1 and got a higher-quality answer. This demonstrates two points:

First, there is a reason OpenAI's Deep Research is so expensive. Just watch the thinking and gathering process and you will see it burns through a lot of tokens, and OpenAI's pipeline must be even more elaborate.

Second, MCP is genuinely useful. Here is a comparison: I removed Sequential Thinking and kept only web search, and the model gave a much shallower answer to the same question.

This is why I have been promoting MCP recently. So, where do we find MCPs? And once we find one, how do we use it? In this video, I will answer both questions in detail.

Hello everyone, welcome to my channel. Not to be modest, I am one of the few bloggers in China who can clearly explain both the Why and the How of AI. What I offer is more valuable than tutorials, so remember to follow me. If you want to connect with me, come to the newtype community; more than 1,000 people have already paid to join!

Back to today’s topic: MCP Practical Guide.

Let’s talk about the first question first: where to find MCP?

If you want a ready-made MCP, an MCP directory site is your first stop. The current leader in this space is MCP.so.

MCP.so is a project by idoubi, a well-known developer in China who has built many projects before, such as the AI search engine ThinkAny. In my last video, when I said someone had started building MCP infrastructure, I was referring to him.

MCP.so has indexed more than 3,000 servers. In fact, its core strength is not navigation; anyone can build a directory, and it takes little technical skill. Its core strength is MCP server hosting.

As users, how do we choose from so many servers? I suggest paying attention to the following three categories:

First, search-related. For example, Perplexity and Tavily are both search engines, while Fetch and Firecrawl are both crawlers.

Second, data-related. For example, Filesystem lets the model work with local files, and GitHub lets the model access code repositories.

Third, tool-related. For example, Blender, Figma, Slack, and so on; you can tell which application each one connects to from its name.

OK, now you know where to find an MCP and how to choose one. So how do you connect to it and use it?

This is actually easy to understand. Since it is called a "server," where that server runs determines how you communicate with it.

If it runs locally on your own machine, use stdio; if it runs in the cloud, such as on MCP.so, use SSE.

stdio means the standard input/output streams, and it is typically used for local communication. MCP clients such as Cursor, Claude, and ChatWise talk to MCP servers running on the same machine through standard input (stdin) and standard output (stdout).

SSE (Server-Sent Events) is a remote communication method built on HTTP. The MCP server is hosted remotely, and your local client uses SSE for cross-machine communication.

It's okay if you don't quite understand. I'll show you what it actually looks like.

Take ChatWise as an example. On the "Tools" page of the settings, click the plus button in the lower-left corner to add an MCP server. Under "Type," you can choose between the stdio and SSE communication methods.

For example, for Sequential Thinking I use stdio. The string in the command field is simply the launch command listed on the project's GitHub page. Because it requires no API key or anything like that, the environment variables below are left empty.

For MCPs that do require environment variables, such as Tavily, fill in the API key. Click "View Tools" and ChatWise will try to connect and then list all the tools this MCP provides.
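For reference, here is what this configuration typically looks like in JSON form. This mirrors the common config format used by MCP clients such as Claude Desktop; ChatWise's UI fields (command, arguments, environment variables) map onto the same pieces. The package names here follow the projects' GitHub READMEs, but double-check them there before use:

```json
{
  "mcpServers": {
    "sequential-thinking": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"]
    },
    "tavily": {
      "command": "npx",
      "args": ["-y", "tavily-mcp"],
      "env": { "TAVILY_API_KEY": "<your-api-key>" }
    }
  }
}
```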

So, what is SSE like?

For example, for Firecrawl I use SSE. This is much simpler: just fill in a link. So where does the link come from?

Remember what I said earlier? If the MCP server runs in the cloud, you connect via SSE. MCP.so provides exactly this kind of cloud service.

Go to the Firecrawl page on MCP.so, enter your API key on the right, and click "Connect"; it will generate a unique link. Copy that link and paste it into ChatWise.

So, you can understand it roughly as follows:

If the MCP server runs on your own machine, you fill in the parameters and environment variables yourself, making a few adjustments according to the project's GitHub instructions.

If the MCP server runs on a cloud host like MCP.so, you provide the API key, it gives you a link, and that completes the configuration.

One point I need to emphasize again: just because an MCP server runs locally does not mean it cannot access the Internet.

Take Tavily, which I choose to run locally. My MCP client, ChatWise, talks to the Tavily MCP server locally via stdio; the Tavily MCP server then calls the remote Tavily API over HTTP to perform the search.
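This "local server, remote API" pattern can be sketched as follows. This is a hedged illustration, not Tavily's real API: the endpoint URL and payload shape are hypothetical, and a real MCP server would implement the full protocol handshake rather than this bare request/response loop.

```python
import json
import urllib.request

# Sketch of the pattern: a local MCP-style server reads JSON-RPC requests
# from stdin, calls a remote HTTP API to do the real work, and writes
# JSON-RPC responses to stdout. Endpoint and payloads are hypothetical.

def remote_search(query, api_key):
    """Call a hypothetical remote search endpoint over HTTPS."""
    req = urllib.request.Request(
        "https://api.example.com/search",  # placeholder, not Tavily's URL
        data=json.dumps({"query": query, "api_key": api_key}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def handle_request(line, search=remote_search):
    """Turn one JSON-RPC request line into a JSON-RPC response line."""
    request = json.loads(line)
    params = request.get("params", {})
    result = search(params.get("query", ""), params.get("api_key", ""))
    return json.dumps({"jsonrpc": "2.0", "id": request.get("id"),
                       "result": result})

# The stdio serving loop (one message per line) would look like:
#   for line in sys.stdin:
#       print(handle_request(line), flush=True)
```

So the stdio link only covers the client-to-server hop on your machine; the server is free to make outbound HTTP calls on top of that.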

So, let me summarize:

To find an MCP server, go to MCP.so. Then decide whether to run it locally. If you do, click the button on the right to open the project's GitHub page and see which parameters and environment variables you need to fill in. If not, paste the link from MCP.so into your client settings.