AI's Trump Card: A Complete Implementation of an MCP Server and Client

Master a new approach to integrating AI with your applications: a complete implementation guide for an MCP server and client.
Core content:
1. How the MCP protocol standardizes the connection between AI models and applications
2. A survey of common MCP servers and what they do
3. Step-by-step instructions and code examples for rapidly developing an MCP Server
Overview
Imagine we want to build on an existing application and let AI read and use our functions and data. For example, when we ask about the weather in a certain city, we want the AI to call a weather function and return the corresponding result. This is where MCP (Model Context Protocol) comes in handy. Like the USB-C port on a computer, it provides a standard way for AI models to connect to different applications and tools. We can build an MCP Server to handle this kind of business, and many MCP Servers are already available, for example:
- Maps: the well-known Amap MCP
- Travel and lodging: AirbnbMCPServer, which provides listing queries
- Version control: gitlab-mr-mcp
- Tools: mcp-openai
- Development: mcp-server-and-gw
For more tools, please see: https://github.com/punkpeye/awesome-mcp-servers/blob/main/README-zh.md#%E6%9C%8D%E5%8A%A1%E5%99%A8%E5%AE%9E%E7%8E%B0
The function of an MCP server is then easy to understand: following the MCP protocol, it exposes the resources, tools, or prompts it can provide (see the sketch after this list):
- Resources: structured data (such as files or API responses)
- Tools: executable functions (such as querying a database or sending emails)
- Prompts: predefined interaction templates
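To make the three primitives concrete, here is a minimal sketch using the FastMCP class from the official Python SDK; the URI, function names, and return values are illustrative only, not part of the demo built later in this article:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Primitives Demo")

@mcp.resource("config://app")  # Resource: structured, read-only data
def app_config() -> str:
    return '{"env": "dev", "version": "1.0"}'

@mcp.tool()  # Tool: an executable function the model can call
def send_email(to: str, subject: str) -> str:
    return f"Pretending to send an email to {to}: {subject}"

@mcp.prompt()  # Prompt: a predefined interaction template
def review_code(code: str) -> str:
    return f"Please review this code:\n\n{code}"
```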
The difference between MCP and Function Calling: with Function Calling, each application hand-writes tool schemas in one model vendor's format and wires up the calls itself; MCP standardizes this as an open protocol, so tools live in reusable servers that any MCP-capable client can discover and invoke, regardless of the underlying model.
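To see the contrast concretely, the sketch below shows the hand-written, OpenAI-style schema that plain Function Calling requires (get_weather is a hypothetical example); with MCP, the same tool is declared once on the server, as in the next section, and clients fetch its schema at runtime via list_tools():

```python
# Plain Function Calling: each application hand-maintains a schema
# in one vendor's format (OpenAI-style shown here).
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Query the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# MCP: the tool is declared once in a server process; any MCP client
# can list and call it, no matter which LLM sits on top.
```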
Developing an MCP Server
Let’s use Python as an example.
- Install uv
```
# Windows
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

# Linux/macOS
curl -LsSf https://astral.sh/uv/install.sh | sh
```
- Install Dependencies
```
pip install mcp
pip install "mcp[cli]"
pip install httpx==0.27
```
- Write the code. It's very simple: just add a decorator to an ordinary Python function.
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Bond Service Demo")

@mcp.tool()
def filterByRate(a: str) -> list:
    """Filter bonds according to bond rating conditions"""
    print('Bond rating filtering conditions:', a)
    return ["Bond A", "Bond B", "Bond C"]

@mcp.tool()
def filterByType(t: str) -> list:
    """Filter bonds according to bond type conditions"""
    print('Bond type filtering conditions:', t)
    return ["Bond A", "Bond C", "Bond D"]

@mcp.tool()
def filterByBidRange(a1: float, a2: float) -> list:
    """Filter bonds according to bond bid yield range"""
    print('Bond bid yield range:', a1, a2)
    return ["Bond A2", "Bond C2", "Bond D2"]

@mcp.tool()
def filterResult(**kwargs) -> list:
    """Get all bond results that meet the conditions"""
    print('filterResult:', kwargs)
    return ["Bond A111", "Bond C222", "Bond D3333"]

if __name__ == "__main__":
    # mcp.run(transport='stdio')
    mcp.run(transport='sse')
```
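Note that FastMCP derives each tool's name, description, and input schema from the function name, docstring, and type hints, so the docstrings above are exactly what the LLM sees when deciding which tool to call.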
- Start the service:
```
mcp dev .\mcp\hello.py
```
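Note: `mcp dev` launches the server together with the MCP Inspector for local debugging. Because the sample above runs with `transport='sse'`, you can also start it directly with `python .\mcp\hello.py`; assuming the SDK's default host and port, the SSE endpoint is then `http://localhost:8000/sse`, which matches the URL the client below connects to.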
Client Integration
Tool Integration
Take Cherry Studio as an example: add an MCP Server entry in its settings, then simply select that MCP Server when asking questions. Other MCP-capable tools work in much the same way.
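Whichever tool you use, the integration generally boils down to two settings: the server's transport type (SSE for this demo) and its address (here, `http://localhost:8000/sse`); the tool then discovers the server's tools automatically.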
Code Integration
Program integration: just write a client program. Let's take Qwen deployed locally via Ollama as an example. The code is as follows:
client_mcp.py:

```python
import json
import time
from typing import Optional
from contextlib import AsyncExitStack

from mcp import ClientSession, StdioServerParameters
from mcp.client.sse import sse_client
from mcp.client.stdio import stdio_client
from openai import AsyncOpenAI


class MCPClient:
    def __init__(self):
        # Initialize session and client objects
        self.session: Optional[ClientSession] = None
        self.exit_stack = AsyncExitStack()
        self.client = AsyncOpenAI(
            api_key="test",
            base_url="http://<ollama-address>:11434/v1"  # replace with your Ollama host
        )

    async def connect_to_server(self, server_script_path: str):
        """Connect to an MCP server over stdio

        Args:
            server_script_path: Path to the server script (.py or .js)
        """
        is_python = server_script_path.endswith(".py")
        is_js = server_script_path.endswith(".js")
        if not (is_python or is_js):
            raise ValueError("Server script must be a .py or .js file")

        command = "python" if is_python else "node"
        server_params = StdioServerParameters(
            command=command, args=[server_script_path], env=None
        )

        stdio_transport = await self.exit_stack.enter_async_context(
            stdio_client(server_params)
        )
        self.stdio, self.write = stdio_transport
        self.session = await self.exit_stack.enter_async_context(
            ClientSession(self.stdio, self.write)
        )

        await self.session.initialize()

        # List available tools
        response = await self.session.list_tools()
        tools = response.tools
        print("\nConnected to server with tools:", [tool.name for tool in tools])

    async def connect_to_sse_server(self, server_url: str):
        """Connect to an MCP server over SSE

        Args:
            server_url: URL of the server's SSE endpoint
        """
        streams = await self.exit_stack.enter_async_context(
            sse_client(url=server_url)
        )
        self.session = await self.exit_stack.enter_async_context(
            ClientSession(*streams)
        )

        await self.session.initialize()

        # List available tools
        response = await self.session.list_tools()
        tools = response.tools
        print("\nConnected to server with tools:", [tool.name for tool in tools])

    async def process_query(self, query: str) -> str:
        """Process a query using the LLM and the tools exposed by the MCP server"""
        messages = [
            {"role": "user", "content": query}
        ]

        response = await self.session.list_tools()
        available_tools = [{
            "type": "function",
            "function": {
                "name": tool.name,
                "description": tool.description,
                "parameters": tool.inputSchema
            }
        } for tool in response.tools]

        # Initial LLM API call
        response = await self.client.chat.completions.create(
            model="qwen2.5:14b",
            messages=messages,
            tools=available_tools  # pass the tool list to the LLM
        )

        final_text = []
        message = response.choices[0].message
        print(response.choices[0])
        final_text.append(message.content or "")

        # Process response and handle tool calls
        if message.tool_calls:
            # Record the assistant turn (with its tool calls) so the
            # follow-up LLM call can see what was requested
            messages.append({
                "role": "assistant",
                "content": message.content or "",
                "tool_calls": [{
                    "id": tc.id,
                    "type": "function",
                    "function": {
                        "name": tc.function.name,
                        "arguments": tc.function.arguments,
                    },
                } for tc in message.tool_calls],
            })

            # Handle each tool call
            for tool_call in message.tool_calls:
                tool_name = tool_call.function.name
                tool_args = json.loads(tool_call.function.arguments)

                # Execute tool call
                start_time = time.time()
                result = await self.session.call_tool(tool_name, tool_args)
                end_time = time.time()
                print(f"Tool {tool_name} called with {tool_args}, "
                      f"took {end_time - start_time:.2f}s")

                # Feed the tool's result back as a 'tool' message
                messages.append({
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    # take the first text block of the tool result
                    "content": result.content[0].text if result.content else "",
                })

            # Send the results of the tool calls back to the LLM
            response = await self.client.chat.completions.create(
                model="qwen2.5:14b",
                messages=messages,
                tools=available_tools
            )
            message = response.choices[0].message
            if message.content:
                final_text.append(message.content)

        return "\n".join(final_text)

    async def chat_loop(self):
        """Run an interactive chat loop"""
        print("\nMCP Client Started!")
        print("Type your queries or 'quit' to exit.")

        while True:
            try:
                query = input("\nQuery: ").strip()
                if query.lower() == 'quit':
                    break
                response = await self.process_query(query)
                print("\n" + response)
            except Exception as e:
                print(f"\nError: {str(e)}")

    async def cleanup(self):
        """Clean up resources"""
        await self.exit_stack.aclose()
```

main.py:

```python
import asyncio

from client_mcp import MCPClient


async def main():
    url_server_mcp = 'http://localhost:8000/sse'
    client = MCPClient()
    try:
        # Choose the connect method that matches the MCP Server's transport
        await client.connect_to_sse_server(url_server_mcp)
        await client.chat_loop()
    finally:
        await client.cleanup()


if __name__ == '__main__':
    asyncio.run(main())
```
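To try it out, start the server first, then run `python main.py`. The client prints the tool list it discovered on connection, and a query such as "Find all AAA-rated bonds" should lead the model to call filterByRate and answer from its result; the exact behavior depends on the model you run in Ollama.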