Understanding MCP in one article: A complete guide from entry to mastery

Get a deep understanding of MCP and grasp the new opportunities of AI application integration.
Core content:
1. What MCP (Model Context Protocol) is and why it matters
2. The current state of AI model development and why MCP is a breakthrough
3. How MCP works and how to start using it
This is an in-depth guide to MCP (Model Context Protocol). In this article, we will comprehensively analyze MCP's core concepts, why it appeared, and how it works. Whether you are a newcomer to the field or a senior practitioner, understanding MCP can open up broader possibilities for building AI applications. We will start with the definition and importance of MCP to reveal the full picture of this protocol.
Why MCP is a breakthrough
We know that AI models have developed rapidly over the past year, from GPT-4 to Claude 3.5 Sonnet to DeepSeek R1, with significant progress in reasoning ability and reduced hallucination.
There are also many new AI applications, but one thing we can all feel is that the AI applications currently on the market are basically brand-new, standalone services that are not integrated with the services and systems we use every day. In other words, the integration of AI models with our existing systems is progressing very slowly.
For example, we currently cannot use a single AI application to search the Internet, send emails, and publish blog posts all at once. These functions are not difficult to implement individually, but integrating them all into one system remains out of reach.
If this doesn't feel concrete yet, think about daily development. Imagine that, in the IDE, we could use the IDE's AI to complete the following tasks:
• Ask the AI to query existing data in the local database to assist development
• Ask the AI to search GitHub Issues to determine whether a problem is a known bug
• Have the AI send comments on a PR to colleagues' instant messaging software (such as Slack) for code review
• Have the AI query or even modify current AWS and Azure configurations to complete a deployment
The features mentioned above are now becoming a reality through MCP. You can follow **Cursor MCP** and **Windsurf MCP** for more information, or try Cursor MCP with the **browsertools** plugin to experience automatically fetching the Chrome DevTools console log from within Cursor.
Why has the integration of AI with existing services progressed so slowly? There are many reasons. On the one hand, enterprise data is very sensitive, and most companies need a long time and careful process to move. On the other hand, on the technical side, we have lacked an open, universal, consensus-based protocol standard.
MCP is an open, universal, consensus-based protocol standard released by Anthropic (the company behind Claude). If you are a developer familiar with AI models, you certainly know Anthropic: they released Claude 3.5 Sonnet, which at the time of writing was still arguably the strongest coding model (Claude 3.7 Sonnet was released just after this article was written).
One more aside: OpenAI arguably had the best chance to release such a protocol. If OpenAI had promoted a protocol like this when it first released GPT, I believe no one would have refused. But OpenAI became "CloseAI" and released only the closed GPTs. A standard protocol like this, which requires both dominance and consensus, is generally hard for a community to form spontaneously; it is usually driven by an industry giant.
After Anthropic released MCP, the official Claude Desktop app enabled MCP support and promoted the open source organization **Model Context Protocol**, with different companies and communities participating. Below are some examples of MCP servers released by different organizations.
Official MCP integration examples:
• **Git** - Git reading, manipulation, and searching.
• **GitHub** - Repo management, file manipulation, and GitHub API integration.
• **Google Maps** - Integrate Google Maps to get location information.
• **PostgreSQL** - Read-only database queries.
• **Slack** - Slack message sending and querying.
Examples of third-party platforms officially supporting MCP
MCP servers built by third-party platforms:
• **Grafana** - Search and query data in Grafana.
• **JetBrains** - JetBrains IDEs.
• **Stripe** - Interact with the Stripe API.
Community MCP Servers
Below are some MCP servers developed and maintained by the open source community.
• **AWS** - Operate AWS resources with an LLM.
• **Atlassian** - Interact with Confluence and Jira, including searching/querying Confluence spaces/pages and accessing Jira Issues and projects.
• **Google Calendar** - Integrate with Google Calendar to schedule, find times, and add/delete events.
• **Kubernetes** - Connect to Kubernetes clusters and manage pods, deployments, and services.
• **X (Twitter)** - Interact with the Twitter API: post tweets and search for tweets via queries.
• **YouTube** - Integrate with the YouTube API: video management, short video creation, etc.
Why MCP?
You may be wondering: when OpenAI released GPT function calling in 2023, couldn't it already achieve similar functionality? Weren't the AI Agents introduced in our previous blog posts already used to integrate different services? Why does MCP appear now?
What are the differences between function calling, AI Agent, and MCP?
Function Calling
• Function Calling refers to the mechanism by which AI models automatically execute functions based on context.
• Function Calling acts as a bridge between AI models and external systems. Different models implement Function Calling differently, with different ways of integrating code; it is defined and implemented by each AI model platform.
Model Context Protocol (MCP)
• MCP is a standard protocol, like the USB-C standard for electronic devices (which can be used for both charging and data transmission), that enables AI models to interact seamlessly with different APIs and data sources.
• MCP aims to replace fragmented Agent code integrations, making AI systems more reliable and efficient. By establishing a common standard, service providers can expose the AI capabilities of their services based on the protocol, enabling developers to build more powerful AI applications faster. Developers no longer need to reinvent the wheel and can build a powerful AI Agent ecosystem on top of open source projects.
• MCP can maintain context across different applications/services, enhancing the overall ability to perform tasks autonomously.
AI Agent
• An AI Agent is an intelligent system that can operate autonomously to achieve specific goals. While traditional AI chat only provides suggestions or requires manual action, an AI Agent can analyze the specific situation, make decisions, and take actions on its own.
• An AI Agent can use the capability descriptions provided by MCP to understand more context and automatically perform tasks across various platforms/services.
The difference
A simple way to understand it: MCP gives the AI Agent a list of capabilities offered by different services and platforms. Based on the context and the model's reasoning, the AI Agent decides whether a service needs to be called, and then uses Function Calling to execute the function. The function itself is described to Function Calling through MCP, and the actual call is completed through the concrete implementation the MCP server provides.
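The loop described above can be sketched in a few lines. This is a simplified, hypothetical illustration: `listTools`, `modelChooseTool`, and `callTool` are made-up names standing in for the MCP tool listing, the model's function-calling decision, and the MCP tool invocation; none of them are real SDK APIs.

```typescript
// Illustrative sketch of the MCP + Function Calling division of labor.
type Tool = { name: string; description: string };

// The MCP server advertises its capability list (this is what MCP standardizes).
function listTools(): Tool[] {
  return [
    { name: "search_issues", description: "Search GitHub issues" },
    { name: "send_slack_message", description: "Post a message to Slack" },
  ];
}

// Stand-in for the model's function-calling decision: given the user's
// intent and the tool list, pick a tool (a real agent asks the LLM).
function modelChooseTool(intent: string, tools: Tool[]): Tool | undefined {
  return tools.find((t) => intent.includes(t.name.split("_")[1]));
}

// The MCP server executes the chosen capability (mocked here).
function callTool(tool: Tool): string {
  return `executed ${tool.name}`;
}

const tools = listTools();
const chosen = modelChooseTool("find related issues", tools);
console.log(chosen ? callTool(chosen) : "no tool needed");
```

In a real host, `modelChooseTool` would be an LLM call that receives the tool list (with descriptions and input schemas) in its context and returns a structured function call.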
The benefits of MCP to the community ecology are mainly the following two points:
• An open standard for service providers, who can expose their own APIs and capabilities through MCP.
• No need to reinvent the wheel: developers can use existing open source MCP servers to enhance their own Agents.
Thoughts
Why was Anthropic's MCP so widely accepted? I have personally participated in several small AI projects over the past year, and during development it is genuinely troublesome to integrate AI models with existing or third-party systems.
There are frameworks on the market that support Agent development, such as **LangChain Tools**, **LlamaIndex**, and **Vercel AI SDK**.
Although LangChain and LlamaIndex are both open source projects, their overall development is still quite chaotic. First, their level of code abstraction is too high: they promise to let developers complete certain AI functions with a few lines of code, which is very useful at the demo stage, but in real development, as soon as the business gets complicated, the poor code design leads to a very bad programming experience. In addition, these projects have been too eager to commercialize and have neglected building an overall ecosystem.
Then there is Vercel AI SDK. Although I personally think its code abstraction is better, it is only good for front-end UI integration and wrapping some AI functions. Its biggest problem is that it is too deeply bound to Next.js, with insufficient support for other frameworks and languages.
So it can be said that Anthropic picked a good time to promote MCP. Claude 3.5 Sonnet has high standing among developers, and MCP is an open standard, so many companies and communities are willing to participate. I hope Anthropic can keep maintaining a good open ecosystem.
How MCP works
Let's introduce how MCP works. First, let's take a look at the **official MCP architecture diagram** .
It is divided into the following five parts:
• MCP Hosts: applications such as Cursor, Claude Desktop, and **Cline** that want to access data and tools through MCP.
• MCP Clients: maintain 1:1 connections with servers, inside the host application.
• MCP Servers: provide context, tools, and prompts to clients through the standardized protocol.
• Local Data Sources: local files, databases, and APIs.
• Remote Services: external files, databases, and APIs.
The core of the entire MCP protocol lies in the Server. Host and Client will be familiar to anyone who knows computer networks and are easy to understand, but how should we understand the Server?
Looking at how Cursor's AI has developed, we can see the entire AI automation process evolving from Chat to Composer and then to a complete AI Agent.
AI Chat only provides suggestions; turning the AI's response into actions and final results is entirely up to the human, e.g. manually copying and pasting, or making further modifications.
AI Composer can modify code automatically, but it still requires human participation and confirmation, and it cannot perform operations other than modifying code.
An AI Agent is a fully automated program that in the future will be able to automatically read Figma designs, generate code, read logs, debug code, and push code to GitHub.
The MCP Server exists to enable AI Agent automation. It is an intermediate layer that tells the AI Agent which services, APIs, and data sources currently exist. The AI Agent decides whether to call a service based on the information the Server provides, then executes the function through Function Calling.
How the MCP Server works
Let's take a look at a simple example. Suppose we want the AI Agent to automatically search a GitHub repository, then search its Issues, determine whether a problem is a known bug, and finally decide whether a new Issue needs to be filed.
Then we need to build a GitHub MCP Server that provides three capabilities: searching repositories, searching Issues, and creating Issues.
Let's take a look at the code directly:
```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import { zodToJsonSchema } from "zod-to-json-schema";
// "repository", "issues", and "search" are local modules of this server;
// parts of their implementation are shown further below.

const server = new Server(
  {
    name: "github-mcp-server",
    version: VERSION,
  },
  {
    capabilities: {
      tools: {},
    },
  }
);

server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: "search_repositories",
        description: "Search for GitHub repositories",
        inputSchema: zodToJsonSchema(repository.SearchRepositoriesSchema),
      },
      {
        name: "create_issue",
        description: "Create a new issue in a GitHub repository",
        inputSchema: zodToJsonSchema(issues.CreateIssueSchema),
      },
      {
        name: "search_issues",
        description: "Search for issues and pull requests across GitHub repositories",
        inputSchema: zodToJsonSchema(search.SearchIssuesSchema),
      },
    ],
  };
});

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  try {
    if (!request.params.arguments) {
      throw new Error("Arguments are required");
    }

    switch (request.params.name) {
      case "search_repositories": {
        const args = repository.SearchRepositoriesSchema.parse(request.params.arguments);
        const results = await repository.searchRepositories(
          args.query,
          args.page,
          args.perPage
        );
        return {
          content: [{ type: "text", text: JSON.stringify(results, null, 2) }],
        };
      }

      case "create_issue": {
        const args = issues.CreateIssueSchema.parse(request.params.arguments);
        const { owner, repo, ...options } = args;
        const issue = await issues.createIssue(owner, repo, options);
        return {
          content: [{ type: "text", text: JSON.stringify(issue, null, 2) }],
        };
      }

      case "search_issues": {
        const args = search.SearchIssuesSchema.parse(request.params.arguments);
        const results = await search.searchIssues(args);
        return {
          content: [{ type: "text", text: JSON.stringify(results, null, 2) }],
        };
      }

      default:
        throw new Error(`Unknown tool: ${request.params.name}`);
    }
  } catch (error) {
    // Error formatting is elided in this excerpt; rethrow instead of
    // silently swallowing the error.
    throw error;
  }
});

async function runServer() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("GitHub MCP Server running on stdio");
}

runServer().catch((error) => {
  console.error("Fatal error in main():", error);
  process.exit(1);
});
```
In the code above, `server.setRequestHandler` tells the client which capabilities we provide, the `description` field describes what each capability does, and `inputSchema` describes the input parameters the capability requires.
Let's take a look at the specific implementation code:
```typescript
export const SearchOptions = z.object({
  q: z.string(),
  order: z.enum(["asc", "desc"]).optional(),
  page: z.number().min(1).optional(),
  per_page: z.number().min(1).max(100).optional(),
});

export const SearchIssuesOptions = SearchOptions.extend({
  sort: z.enum([
    "comments",
    // ... (other sort fields elided in this excerpt)
  ]).optional(),
});

export async function searchUsers(params: z.infer<typeof SearchUsersSchema>) {
  return githubRequest(buildUrl("https://api.github.com/search/users", params));
}

export const SearchRepositoriesSchema = z.object({
  query: z.string().describe("Search query (see GitHub search syntax)"),
  page: z.number().optional().describe("Page number for pagination (default: 1)"),
  perPage: z.number().optional().describe("Number of results per page (default: 30, max: 100)"),
});

export async function searchRepositories(
  query: string,
  page: number = 1,
  perPage: number = 30
) {
  const url = new URL("https://api.github.com/search/repositories");
  url.searchParams.append("q", query);
  url.searchParams.append("page", page.toString());
  url.searchParams.append("per_page", perPage.toString());

  const response = await githubRequest(url.toString());
  return GitHubSearchResponseSchema.parse(response);
}
```
It is clear that the final implementation interacts with GitHub through the `https://api.github.com` API: the `githubRequest` function calls GitHub's API and returns the result.
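The `githubRequest` helper itself is not shown in the excerpt. A minimal sketch of what such a helper might look like (hypothetical; assumes a personal access token in the `GITHUB_TOKEN` environment variable):

```typescript
// Hypothetical sketch of a githubRequest helper (not shown in the excerpt).
// Assumes a personal access token in the GITHUB_TOKEN environment variable.
async function githubRequest(url: string): Promise<unknown> {
  const response = await fetch(url, {
    headers: {
      Accept: "application/vnd.github+json",
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
      "User-Agent": "github-mcp-server",
    },
  });
  if (!response.ok) {
    throw new Error(`GitHub API error: ${response.status}`);
  }
  return response.json();
}
```

The `Accept` header and Bearer token scheme follow GitHub's REST API conventions.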
Before GitHub's official API is even called, MCP's main job is to describe to the LLM which capabilities the server provides, which parameters they require (and what those parameters mean), and what the final result looks like.
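Under the hood, this capability description travels as JSON-RPC 2.0 messages; MCP defines methods such as `tools/list` and `tools/call`. A `tools/list` exchange looks roughly like this (the field values are illustrative). Request:

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }
```

Response:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "search_repositories",
        "description": "Search for GitHub repositories",
        "inputSchema": {
          "type": "object",
          "properties": { "query": { "type": "string" } },
          "required": ["query"]
        }
      }
    ]
  }
}
```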
So the MCP Server is nothing new or advanced; it is just a consensus protocol.
Suppose we want to implement a more powerful AI Agent, one that automatically searches related GitHub repositories based on local error logs, then searches the Issues, and finally sends the results to Slack.
Then we may need three different MCP Servers: a Local Log Server to query local logs, a GitHub Server to search Issues, and a Slack Server to send messages.
After receiving a user instruction such as "I need to query the local error log and send the relevant Issue to Slack", the AI Agent determines which MCP Servers need to be called and in what order, then decides, based on each server's results, whether to call the next one, until the entire task is complete.
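The chain above can be sketched with mocked servers. Everything here is illustrative (server names, tool behavior, return values); it is not real MCP SDK code, only the shape of the orchestration:

```typescript
// Hypothetical sketch: an agent chaining three MCP servers (all mocked).
// Each "server" maps a request to a canned result.
const servers: Record<string, (input: string) => string> = {
  "local-log": (_q) => "TypeError: foo is not a function",
  "github": (q) => `issue #123 matches "${q}"`,
  "slack": (msg) => `posted: ${msg}`,
};

// The agent plans a call order; a real agent would ask the LLM after each
// step, feeding the previous result into the next decision.
function runTask(): string[] {
  const results: string[] = [];
  const error = servers["local-log"]("tail error.log"); // step 1: read logs
  results.push(error);
  const issue = servers["github"](error);               // step 2: find issue
  results.push(issue);
  results.push(servers["slack"](issue));                // step 3: notify Slack
  return results;
}

console.log(runTask());
```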
How to use MCP
If you haven't tried MCP yet, you can experience it with Cursor (the only one I have tried myself), Claude Desktop, or Cline.
Of course, we do not need to develop MCP Servers ourselves. The advantage of MCP is precisely that it is universal and standardized, so developers don't need to reinvent the wheel (though reinventing the wheel is fine for learning).
The first thing I recommend is the official organization's servers: the official MCP Server list.
At present, community MCP Servers are still quite chaotic: many lack tutorials and documentation, and many have broken functionality. You can try some examples from Cursor Directory yourself. I won't go into the specific configuration and practice here; refer to the official documentation.
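As one concrete configuration example: Claude Desktop reads MCP servers from its `claude_desktop_config.json` file, and an entry for the official GitHub server looks roughly like this (the token value is a placeholder you must supply yourself):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    }
  }
}
```

Cursor and Cline use a similar `mcpServers` JSON shape; check each tool's documentation for the exact file location.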
Some MCP resources
Below are some MCP resources that I personally recommend for your reference.
MCP Official Resources
• Official open source organization: Model Context Protocol
• Official documentation: modelcontextprotocol
• Official MCP Server list
• Claude Blog
List of community MCP Servers
• Cursor Directory
• Pulsemcp
• Glama MCP Servers