MCP (Model Context Protocol) Quick Start

Written by
Jasper Cole
Updated on: June 13, 2025
Recommendation

Master MCP, efficiently integrate LLMs with data and tools, and build intelligent workflows.

Core content:
1. The importance of MCP as a standardized protocol between LLM and data sources
2. MCP architecture diagram, underlying principles and version specifications
3. MCP life cycle stages and collaboration with LLM


This article assumes that you have mastered the following basic knowledge and will not go into details:

  • #LLM 
  • #PromptEngineering 
  • #FunctionCalling
  • #Python

What is MCP

#MCP is an open protocol that standardizes how applications provide context to LLMs. MCP is like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your device to a variety of peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.

MCP helps you build agents and complex workflows on top of LLMs. LLMs often require integration with data and tools, and MCP provides the following capabilities:

  • A growing number of pre-built integrations that plug directly into your LLM
  • Flexibility to switch between LLM providers and suppliers
  • Best practices for protecting data within your infrastructure

The above content is excerpted from the official MCP documentation. I think it explains things very clearly and intuitively, yet many people still mistakenly believe that MCP is some magical tool that can itself execute commands, read files, query data, and so on.

No, no, no: MCP is just a protocol, and LLMs use this protocol to call various tools. Whether it is executing commands, reading files, or querying data, if you already have such tools, you can use them without MCP; conversely, if you don't have these tools, MCP alone can't do anything.

The significance of MCP is that an LLM can use "remote tools": tools written in different languages, deployed on different devices, with different functions, all integrated through MCP for the LLM to choose from. Likewise, if you already have a set of tools, you can use MCP to offer them to any LLM.

In summary, MCP, like USB, provides plug-and-play connections between all kinds of LLMs and all kinds of tools.

MCP Underlying Principles

Specification version: 2025-03-26

Transport protocols:

  • stdio (preferred, local only)
  • SSE (deprecated)
  • Streamable HTTP

Authentication method:

  • OAuth 2.1 (HTTP only)

If you are using MCP on your own computer and don't need authentication, stdio is enough; it is simple and fast.

If you plan to share your MCP Server over the Internet, HTTP is more convenient. Previously, MCP used SSE for remote transport, but the new specification encourages Streamable HTTP, which also supports authentication for greater security.
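A minimal sketch of switching transports with the official Python SDK's FastMCP (the exact transport names depend on your SDK version, so treat them as assumptions to verify):

# a minimal sketch — transport names depend on your mcp SDK version
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("MCP Demo")

mcp.run(transport="stdio")              # local use; this is the default
# mcp.run(transport="streamable-http")  # remote use; older SDKs expose "sse" instead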


Message content:

  • JSON-RPC (UTF-8)

This is a classic solution, and there is not much to say about it. Unless you plan to re-implement the MCP Client yourself, you don't need to worry about the message format for now.
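For the curious, here is a rough sketch of what a tool call looks like on the wire, built in Python (the envelope is standard JSON-RPC 2.0; the tools/call method and its name/arguments shape follow the MCP specification):

# a rough sketch of an MCP tools/call request as it travels over the transport
import json

request = {
    "jsonrpc": "2.0",        # JSON-RPC 2.0 envelope
    "id": 1,                 # request id, echoed back in the matching response
    "method": "tools/call",  # MCP method for invoking a server-side tool
    "params": {"name": "add", "arguments": {"a": 1, "b": 10}},
}
print(json.dumps(request))   # serialized as UTF-8 JSON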


Life cycle (key points):

  1. Declare capabilities: the server defines its tools, resources, and other capabilities
  2. Initialization: the client connects to the server and negotiates version and capabilities
  3. Operation (bidirectional, multi-round):
     • the client uses the server's capabilities (such as tools)
     • the server uses the client's LLM
  4. Close: the client disconnects and releases resources

There is a "handshake" phase early on so that the Client can learn what tools the Server has and then offer them to the LLM; the client code below walks through exactly these phases.

What's interesting is that MCP specifies a sampling capability for the Server: using the LLM for content generation, which means the LLM is also provided by the Client to the Server for use.

However, sampling is not yet implemented in the current version of the SDK, so its concrete usage will have to wait for updates.

MCP Application Practice

Install Dependencies

pip install mcp openai-agents

mcp is the SDK officially provided by MCP, with built-in MCP Server and MCP Client implementations, ready to use out of the box

openai-agents is OpenAI's open-source #Agent framework, which hides the operational details of working with LLMs

MCP Server Declares Capabilities

# server.py

import os
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("MCP Demo")


@mcp.tool()
def add(a: int, b: int) -> int:
    """Return the sum of a and b"""
    return a + b


@mcp.tool()
def ls() -> list[str]:
    """List the names of files in the current directory"""
    return os.listdir(".")


if __name__ == '__main__':
    mcp.run()  # uses stdio transport by default
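As an aside, and as an assumption to verify against your SDK version: if you install the CLI extras (pip install "mcp[cli]"), the SDK also ships an interactive inspector, launched with mcp dev server.py, which is handy for poking at these tools before wiring up a client.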

MCP Client Initialization

# client.py
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# stdio startup parameters: how the client launches the server process
server_params = StdioServerParameters(
    command="python",
    args=["server.py"],
)


async def run():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialization: negotiate version and capabilities
            await session.initialize()

            # Query supported tools
            result = await session.list_tools()

            print("Supported tools are:")
            for tool in result.tools:
                print(f"    {tool.name} {tool.description}")

            # Calling the tool
            result = await session.call_tool("add", arguments={"a": 1, "b": 10})
            print('Add call result:', result.content[0].text)


if __name__ == "__main__":
    import asyncio

    asyncio.run(run())

Execution results (tools can be discovered and used):

Processing request of type ListToolsRequest
Processing request of type CallToolRequest
Supported tools are:
    add Return the sum of a and b
    ls List the names of files in the current directory
Add call result: 11

LLM using remote tools

Before connecting the MCP Server

# llm.py
from agents import Agent, Runner

agent = Agent(
    name="AI Assistant",
    instructions="Use the appropriate tools to generate the appropriate responses",  # system prompt
)

result = Runner.run_sync(agent, 'How many files are there in the current directory?')

print(result.final_output)

Execution result (the task cannot be completed; the model just starts chatting):

Please provide more specific information, for example:

* **Operating System:** Are you using Windows, macOS, or Linux?
* **Where do you want to see the number of files?** In the command line (Terminal/CMD) or in a file manager?

Because of different operating systems and methods, the commands and operations for viewing the number of files will be different.

For example:

* **Use `ls` and `wc` commands in Linux/macOS command line:**

   ```bash
   ls -l | grep -v ^d | wc -l
   ```

   This command will list all files in the current directory (excluding directories) and count their number.

* **Using the `dir` and `find` commands in the Windows command line:**

   ```cmd
   dir /ad | find /c "<DIR>"
   ```

   This command will list all the files in the current directory and count their number.

Once you provide more information I can give a more precise answer.

After connecting the MCP Server

# llm.py
from agents import Agent, Runner
from agents.mcp import MCPServerStdio


async def run():
    async with MCPServerStdio(
            name="mcp server based on stdio",
            params={
                "command": "python",
                "args": ["server.py"],
            },
    ) as mcp_server:
        agent = Agent(
            name="AI Assistant",
            instructions="Use the appropriate tools to generate the appropriate responses",
            mcp_servers=[mcp_server, ],
        )

        result = await Runner.run(agent, 'How many files are there in the current directory?')

        print(result.final_output)


if __name__ == '__main__':
    import asyncio

    asyncio.run(run())

To use MCP, we have to switch to asynchronous code here; as for the Agent, the only change is one extra mcp_servers parameter:

    agent = Agent(
        name="AI Assistant",
        instructions="Use the appropriate tools to generate the appropriate responses",
++      mcp_servers=[mcp_server, ],
    )

Execution results (calling the tool to complete the task and outputting the results):

Processing request of type ListToolsRequest
Processing request of type CallToolRequest
The current directory has 4 files.

In fact, I haven't explored the resources and prompts among the Server's capabilities, and I haven't tried authentication yet. There are still many details, such as error handling, progress tracking, and interface testing, that I haven't delved into.
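Since resources and prompts come up here, a rough, untested sketch of what declaring them looks like with FastMCP, mirroring the @mcp.tool() decorators above (the decorator names come from the SDK; the URI and the function bodies are made-up examples, so verify against your SDK version):

# a rough sketch mirroring server.py — verify against your SDK version
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("MCP Demo")


@mcp.resource("demo://readme")  # hypothetical resource URI
def readme() -> str:
    """Expose a file's contents as a read-only resource"""
    with open("README.md") as f:
        return f.read()


@mcp.prompt()
def review(code: str) -> str:
    """A reusable prompt template the client can fill in"""
    return f"Please review the following code:\n{code}"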