Breaking down the barriers between AI and data: A practical guide to MCP server development

Written by
Audrey Miles
Updated on: June 22, 2025
Recommendation

Explore the new era of AI-data interaction: this practical guide to MCP server development walks you through the technology step by step.

Core content:
1. An introduction to Anthropic's MCP protocol and its impact on the AI field
2. How MCP solves the M×N problem of traditional AI integration
3. MCP's core concepts and architecture for seamlessly connecting AI applications to data sources

Yang Fangxian
Founder of 53A/Most Valuable Expert of Tencent Cloud (TVP)

1. MCP: Breaking down the barriers between AI and data

Last November, Anthropic released an open standard called Model Context Protocol (MCP), which caused quite a stir in the AI community. This seemingly ordinary protocol may completely change the way AI interacts with data systems. As an AI developer with many years of experience, I immediately studied the design ideas and implementation details of MCP and found that it is indeed a technological innovation worthy of attention.

Why MCP Changes the Interaction between AI and Data

Simply put, MCP is an open protocol standard for connecting AI assistants with data sources. Before this, if you wanted an AI assistant like Claude to access your database, code repository, or file system, you often had to develop specific integration solutions, which were usually customized for specific AI models and specific data sources. The emergence of MCP makes this connection standardized and simplified.

In fact, the real value of MCP is that it allows AI to finally "step out of its cocoon" and no longer be limited to its own training data and knowledge base. For example, an ordinary AI chatbot, without external connections, is like a "genius" locked in a small dark room, and can only answer questions based on the knowledge in its head. With MCP, this "genius" can finally open the door, consult external information, and use various tools, greatly expanding its capabilities.

M×N Problem: Pain Points and Challenges of Traditional AI Integration

Before the emergence of MCP, the integration of AI and data systems faced the typical M×N problem. Suppose you have M different AI applications (such as intelligent customer service, document analysis tools, code assistants, etc.) and need to connect to N different data sources or tools (such as MySQL database, MongoDB, Elasticsearch, GitHub, Slack, etc.), then you may need to develop M×N different integration solutions.

For example, in a previous project, I needed to allow the AI assistant to access the company's PostgreSQL database, internal knowledge base, and GitHub code repository at the same time. As a result, every time a data source was added, a set of adaptation logic had to be redeveloped; every time an AI model was changed, most of the code had to be rewritten. This repetitive work not only wastes time, but is also prone to errors and, more importantly, difficult to maintain and expand.

In addition, each integration needs to deal with complex authentication, data format conversion, error handling, etc., which are additional development burdens. As the number of AI applications and data sources increases, this approach becomes increasingly unsustainable.

MCP core concept: unified protocol

The core concept of MCP is to transform an M×N complexity problem into an M+N one. Specifically, MCP defines a set of standard interfaces and protocols. As long as the AI application implements the MCP client interface and the data source implements the MCP server interface, they can be connected seamlessly. In this way, M AI applications only need to implement M MCP clients, and N data sources only need to implement N MCP servers, so a total of only M+N adapters are needed instead of M×N.
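To make the arithmetic concrete, here is a tiny sketch; the counts M=5 and N=10 are arbitrary illustrations, not figures from any real deployment:

```python
# Compare integration counts: point-to-point custom adapters (M*N)
# versus a shared protocol like MCP (M+N).

def adapters_point_to_point(m: int, n: int) -> int:
    """Each AI app needs a custom adapter for each data source."""
    return m * n

def adapters_with_mcp(m: int, n: int) -> int:
    """Each app implements one MCP client; each source one MCP server."""
    return m + n

if __name__ == "__main__":
    m, n = 5, 10  # 5 AI applications, 10 data sources (illustrative)
    print(adapters_point_to_point(m, n))  # 50 custom integrations
    print(adapters_with_mcp(m, n))        # 15 standardized adapters
```

The gap widens multiplicatively: doubling both M and N quadruples the point-to-point count but only doubles the MCP count.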

I particularly like the MCP design because it uses the classic "middle layer" idea in computer science to solve complexity problems. This approach has been successful in many fields, such as operating system driver interfaces and database JDBC/ODBC interfaces. Through standardized interfaces, MCP achieves decoupling between AI applications and data sources, making the system more flexible and scalable.

MCP architecture: collaboration between client and server

MCP adopts the classic client-server architecture, which mainly includes the following parts:

  • MCP client : embedded in the AI application, responsible for sending requests and receiving responses. Currently known MCP clients include the Claude Desktop application and the Claude Code command line tool.
  • MCP server : connects to the actual data source or tool, is responsible for receiving client requests, performing corresponding operations, and returning results.
  • Transport layer : supports two main communication methods: local communication uses standard input/output (stdio), and remote communication uses HTTP+Server-Sent Events (SSE).

This architecture design is very flexible, supporting both local communication (such as accessing the local file system and database) and remote communication (such as accessing remote APIs and cloud services). For local communication, using stdio can avoid network overhead and improve efficiency; for remote communication, using HTTP+SSE takes into account both security and real-time performance.

It is worth mentioning that MCP uses JSON-RPC format for communication, which is a lightweight remote procedure call protocol that is easy to understand and implement. Each message contains the method name, parameters, and a unique identifier to facilitate tracking the correspondence between requests and responses.
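As a sketch, a tool call and its reply in this JSON-RPC 2.0 shape might look like the following; the tool name `get_weather` and its arguments are illustrative, not part of any real server:

```python
import json

# A JSON-RPC 2.0 request/response pair in the shape MCP uses for tool calls.
request = {
    "jsonrpc": "2.0",
    "id": 1,                      # unique id correlates request and response
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Beijing"}},
}
response = {
    "jsonrpc": "2.0",
    "id": 1,                      # same id as the request it answers
    "result": {"content": [{"type": "text", "text": "Sunny, 25°C"}]},
}

wire = json.dumps(request)        # what actually travels over stdio or HTTP
assert json.loads(wire)["id"] == response["id"]
```

The `id` field is what lets a client fire several requests concurrently and still match each response to the call that produced it.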

Five core components

MCP defines five basic "primitives" that are the basic components for building MCP applications:

  1. Prompts : Instructions or templates provided by the server to guide the AI to generate output in a specific form. For example, a SQL database MCP server might provide a prompt template for "query table structure".

  2. Resources : Structured data provided by a server that can be included in an AI's context window. For example, a file system MCP server can provide file contents as resources.

  3. Tools : Executable functions provided by the server that an AI can call to get information or perform actions. For example, a GitHub MCP server might provide a tool to "Create a Pull Request".

  4. Roots : File system entry points provided by the client that tell the server which files and directories on the client side it may work with. This is particularly useful for working with local files.

  5. Sampling : Allows the server to request the client-side AI model to generate text. This can be used to implement nested AI calls, but needs to be used with caution to ensure security.

In actual applications, Prompts, Resources, and Tools are capabilities provided by the server to extend the context and functionality of AI, while Roots and Sampling are capabilities provided by the client, allowing the server to access the client's resources and AI capabilities.

These five components may seem simple, but they actually cover most of the scenarios where AI applications interact with external systems. By combining these basic components, we can achieve various complex AI integration requirements.

2. Build the first MCP server

Now that we understand the basic concepts and architecture of MCP, we can start building our own MCP server. Having built several in practice, I would like to share some hands-on experience to help you avoid common pitfalls.

The essence of MCP server

First of all, it should be clear that an MCP server is not just a thin wrapper around an existing API; it is more like a "translator" that converts MCP protocol requests into operations on a specific system, and converts the results of those operations back into MCP protocol responses.

A good MCP server should have the following characteristics:

  1. Functional completeness : fully expose the core functions of the underlying system so that AI can effectively utilize these functions.
  2. Appropriateness of abstraction : Provide an appropriate level of abstraction, neither oversimplifying to limit functionality nor overcomplicating to make it difficult to use.
  3. Security and controllability : Perform appropriate permission control and verification on sensitive operations to prevent security risks.
  4. Error handling : Provide clear error messages and recovery mechanisms to enhance the robustness of the system.
  5. Documentation and Examples : Provide detailed documentation and usage examples to help AI understand how to use these features.

When designing an MCP server, we need to think about: What tasks does AI need to complete through this server? What data does it need to access? What operations does it need to perform? These requirements should be the starting point of our design.

Technology selection and environment construction

There are many technical options for building an MCP server. Anthropic officially provides SDKs for Python and TypeScript, and the community has also contributed implementations in languages such as Java and C#. For most developers, Python may be the easiest choice to get started with, thanks to its concise syntax, rich ecosystem, and the fully functional Python SDK that Anthropic provides.

Taking Python as an example, the following components are required to build the basic environment of the MCP server:

  1. Python 3.9+ : It is recommended to use a newer version of Python for better performance and compatibility.
  2. mcp library : The official MCP Python SDK provided by Anthropic (published on PyPI as mcp), or the lighter-weight community fastmcp library.
  3. Dependent libraries : Depending on the system you want to connect to, you may need to install specific client libraries, such as pymongo, psycopg2, requests, etc.
  4. Development tools : A good code editor (such as VSCode or PyCharm) and version control tools (such as Git).
  5. Claude Desktop : Used to test your MCP server; it has built-in MCP client functionality.

Installing dependencies can be done with pip. It is worth mentioning that in order to avoid environment conflicts, it is best to create an independent virtual environment for each MCP project. I usually use venv or conda to manage the environments of different projects.

Anatomy of an MCP Server: Workflow

A typical MCP server mainly consists of the following parts:

  1. Server registration : Initialize the MCP server instance and set basic information.
  2. Function registration : various functions provided by the registration server, including Resources, Tools, and Prompts.
  3. Request processing : Receive and process requests from clients and perform corresponding operations.
  4. Response generation : Format the operation result into a response message of the MCP protocol and return it to the client.
  5. Error handling : Capture and handle various exceptions and return appropriate error information.
  6. Start service : Start the server and wait for client connections.

The workflow of an MCP server is usually as follows: client sends a request → server receives the request → parses the request → performs an operation → generates a response → returns a response. The entire process is asynchronous, allowing multiple requests to be processed simultaneously.
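That loop can be sketched in a few lines of plain Python. This is a deliberately stripped-down, synchronous illustration of "function registration" plus "request processing"; real SDKs such as the official mcp package handle this plumbing for you, asynchronously:

```python
import json
from typing import Any, Callable

# "Function registration": map method names to handler callables.
HANDLERS: dict[str, Callable[..., Any]] = {}

def register(name: str):
    def wrap(fn):
        HANDLERS[name] = fn
        return fn
    return wrap

@register("ping")
def ping() -> str:
    return "pong"

def handle_request(raw: str) -> str:
    """Parse a JSON-RPC request, dispatch it, and format the response."""
    req = json.loads(raw)
    try:
        result = HANDLERS[req["method"]](**req.get("params", {}))
        resp = {"jsonrpc": "2.0", "id": req["id"], "result": result}
    except Exception as exc:  # error handling: report instead of crashing
        resp = {"jsonrpc": "2.0", "id": req.get("id"),
                "error": {"code": -32603, "message": str(exc)}}
    return json.dumps(resp)
```

Feeding it `{"jsonrpc": "2.0", "id": 1, "method": "ping"}` yields a response whose `result` is `"pong"`; an unknown method yields an `error` object rather than a crash.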

In actual development, most of the processing logic revolves around "function registration" and "request processing", which is also the part that developers need to focus on.

Basic configuration and dependency management

Before starting actual development, it is very important to establish a good project structure and configuration management. Here are some best practices I recommend:

  1. Project structure : Use a clear directory structure, for example:

  • src/: Source code directory
  • tests/: Test code directory
  • config/: Configuration file directory
  • docs/: Documentation directory
  • examples/: Sample code directory

  2. Configuration management : Separate configuration information (such as connection strings and API keys) from the code. You can use environment variables, configuration files, or a key management service. Avoid hard-coding sensitive information in the code.

  3. Dependency management : Use requirements.txt or pyproject.toml to clearly record project dependencies for easy environment replication and deployment.

  4. Version control : Use version control tools such as Git to manage code and establish a reasonable branching strategy.

  5. Logging : Configure an appropriate logging mechanism to facilitate debugging and troubleshooting.

I have found in actual projects that a good basic configuration can greatly reduce the cost of subsequent maintenance, especially when the project becomes complex or requires collaboration among multiple people.

3. Make your MCP server more powerful

Theoretical preparation is complete; now let's enter the actual coding stage. Building a powerful MCP server is not as complicated as it seems. As long as you master the core concepts and key technical points, you can get started quickly.

Quickly implement an MCP server

To get started quickly, we can first implement a minimum viable MCP server that provides only the most basic functionality. Suppose we want to create a simple weather information server that allows the AI to query the weather conditions of a specific city.

First, initialize the MCP server and set its basic information. Initializing a server usually requires providing the server name, description, version, and so on. This information is displayed when the client connects, to help users understand the server's purpose and functionality.

Then, register a tool for querying the weather. This tool receives a city name as a parameter, calls a weather API to obtain weather information, and returns the result. When registering a tool, you need to provide a name, description, parameter definitions, and a handler function.

Finally, start the server and wait for clients to connect. After the server starts, it listens for client connection requests: local communication uses standard input/output, while remote communication listens on the specified HTTP port.

In this way, a minimum viable MCP server is complete. Although its functionality is simple, it contains the basic elements of an MCP server: initialization, function registration, and service startup. Through this example, you should be able to understand the basic structure and workflow of an MCP server.
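The three steps above can be sketched as follows. This assumes the official MCP Python SDK (PyPI package mcp); the wttr.in weather service and its one-line URL format are illustrative assumptions, not requirements:

```python
import sys
import urllib.parse
import urllib.request

# Minimal weather MCP server sketch. Assumes the official SDK ("pip
# install mcp") and the free wttr.in weather service.
try:
    from mcp.server.fastmcp import FastMCP
except ImportError:            # SDK not installed: the helpers still work
    FastMCP = None

def weather_url(city: str) -> str:
    """Build the query URL (wttr.in's ?format=3 one-liner is an assumption)."""
    return f"https://wttr.in/{urllib.parse.quote(city)}?format=3"

def get_weather(city: str) -> str:
    """Tool body: fetch a one-line weather summary for `city`."""
    with urllib.request.urlopen(weather_url(city), timeout=10) as resp:
        return resp.read().decode()

if FastMCP is not None:
    mcp = FastMCP("weather")   # 1. server registration (name shown to clients)
    mcp.tool()(get_weather)    # 2. function registration (same as @mcp.tool())
    # 3. start the server; gated on --serve so importing stays side-effect free
    if __name__ == "__main__" and "--serve" in sys.argv:
        mcp.run()              # stdio transport by default
```

Run it with `python weather_server.py --serve` and point an MCP client at that command.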

How to provide data access capabilities elegantly

The Resource primitive of MCP allows the server to provide data in various forms, including text, structured data, and images. Proper use of Resources can greatly enhance the AI's contextual understanding.

When designing your resources, there are several key points to consider:

1. Resource granularity : Resources should be neither too large (they may exceed the AI's context window) nor too small (they may lack sufficient context). Properly dividing resource granularity is an art.

2. Resource structure : For complex data, a clear structure helps the AI better understand and use the data. For example, present tabular data as a table rather than plain text.

3. Resource metadata : Providing appropriate metadata (such as creation time, author, and type) increases the information content of resources and helps the AI make better use of them.

4. Resource caching : For resources that are frequently accessed but rarely change, consider implementing a caching mechanism to improve performance.

5. Resource paging : For large resources, implement a paged access mechanism to avoid loading too much data at once.

For example, if you want to expose table data from a database, you can register a resource getter that allows the AI to query data from a specific table. This resource getter receives the table name and query conditions as parameters, executes the SQL query, and formats the results into an easy-to-understand table format.
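A sketch of such a getter's body, using only the standard library's sqlite3 module. The table-name validation and the 50-row page size are illustrative design choices; how you wire this into a resource registration depends on your SDK:

```python
import sqlite3

def rows_to_table(headers, rows) -> str:
    """Format query results as a Markdown table, which AIs parse reliably."""
    lines = ["| " + " | ".join(headers) + " |",
             "| " + " | ".join("---" for _ in headers) + " |"]
    lines += ["| " + " | ".join(str(v) for v in row) + " |" for row in rows]
    return "\n".join(lines)

def read_table(conn: sqlite3.Connection, table: str, limit: int = 50) -> str:
    """Resource getter body: return one page of rows from `table`.

    `table` is validated against sqlite_master (identifiers cannot be bound
    as SQL parameters), and `limit` keeps the resource context-window sized.
    """
    ok = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' AND name=?",
        (table,),
    ).fetchone()
    if not ok:
        raise ValueError(f"unknown table: {table}")
    cur = conn.execute(f"SELECT * FROM {table} LIMIT ?", (limit,))
    headers = [d[0] for d in cur.description]
    return rows_to_table(headers, cur.fetchall())
```

Validating the identifier before interpolating it is the key safety step here, since placeholders only work for values, not table names.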

In actual applications, I have found that reasonable resource design is the key to building an effective MCP server. Good resource design can greatly reduce the AI's comprehension burden and improve interaction efficiency.

Enable AI to perform actions you define

The Tool primitive of MCP is the main way for the server to provide functionality. It allows the AI to call specific functions to perform operations or obtain information. A well-designed tool should have the following characteristics:

1. Clear functionality : Each tool should have a clearly defined function, avoiding overlapping or confusing responsibilities.

2. Reasonable parameters : A tool's parameters should be designed sensibly, with neither so many that the tool is hard to use nor so few that its functionality is limited.

3. Error handling : The tool should properly handle various exceptions and return clear error messages.

4. Complete documentation : The tool should have detailed documentation, including a functional description, parameter explanations, return value description, and usage examples.

5. Performance optimization : For complex or time-consuming operations, consider performance optimization to avoid blocking the entire server.

When implementing tool functions, we need to consider the AI's usage scenarios and habits. For example, the AI may not be familiar with the terminology or operational logic of a specific domain, so tool design should be as simple and intuitive as possible, avoiding overly specialized or abstract concepts.

In addition, tool composability is an important consideration. Good tool design should support combined use to form more complex workflows. For example, a file-operations MCP server might provide basic tools such as "list files", "read file", and "write file"; the AI can combine these tools to complete complex file-processing tasks.
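Those three file tools might look like the sketch below, written as plain functions so they can be registered with whichever SDK you use. Confining every path to a single root directory is an illustrative safety policy, not an MCP requirement:

```python
from pathlib import Path

# Three small, composable "tool" bodies for a file-operations server sketch.
ROOT = Path.cwd()  # illustrative: restrict all access to the working directory

def _safe(path: str) -> Path:
    """Resolve `path` under ROOT and refuse anything that escapes it."""
    p = (ROOT / path).resolve()
    if not p.is_relative_to(ROOT.resolve()):
        raise PermissionError(f"path escapes root: {path}")
    return p

def list_files(path: str = ".") -> list[str]:
    """Tool: list directory entries, sorted for stable output."""
    return sorted(child.name for child in _safe(path).iterdir())

def read_file(path: str) -> str:
    """Tool: return a file's text content."""
    return _safe(path).read_text()

def write_file(path: str, content: str) -> int:
    """Tool: write text to a file, returning the character count."""
    return _safe(path).write_text(content)
```

An AI can chain these naturally: list a directory, read the files it finds, then write a summary back, with the `_safe` check applied at every step.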

4. Connection between MCP client and server

With a powerful MCP server in hand, we need to focus on how to connect it to a client (such as Claude Desktop). This step matters: no matter how good the server is, it delivers no value if the connection doesn't work smoothly.

Different strategies for local and remote communication

MCP supports two main communication modes, local and remote. Each has its own characteristics and applicable scenarios:

Local communication (stdio) :

• Communication happens via standard input/output, suitable for clients and servers running on the same machine.
• Advantages: low latency, high security (no network transmission required), simple setup.
• Disadvantages: limited to local use; does not support cross-machine communication.
• Applicable scenarios: personal development environments, single-user applications, access to local resources (such as the file system or a local database).

Remote communication (HTTP+SSE) :

• Communicates via the HTTP protocol and Server-Sent Events, suitable for clients and servers distributed across different machines.
• Advantages: supports cross-machine communication, can be integrated into existing web services, suitable for multi-user scenarios.
• Disadvantages: must deal with network security issues, relatively complex configuration, higher latency.
• Applicable scenarios: team collaboration environments, multi-user systems, cloud service integration.

In actual applications, the choice of communication method mainly depends on your usage scenario and needs. For individual developers or small teams, local communication is usually sufficient; for enterprise-level applications or scenarios that need to serve multiple users, remote communication is the better choice.

It is worth mentioning that MCP's design allows the same server to support both communication methods, requiring only a different configuration at startup. This flexibility lets an MCP server adapt to a variety of usage environments.

How Claude finds your server

Before using MCP, clients (such as Claude Desktop) need to know which MCP servers are available and what functionality they provide. This requires a service discovery mechanism.

MCP's service discovery mechanism is relatively simple:

1. Local services : For local services, the client typically discovers the service by starting a server process and connecting to its standard input/output.
2. Remote services : For remote services, the client needs to know the server URL and authentication information.

In Claude Desktop, users can add MCP servers through the app's configuration. When adding a local server, you provide the server's startup command; when adding a remote server, you provide the server URL and authentication information.
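For reference, a local server entry in Claude Desktop's claude_desktop_config.json file looks roughly like this; the server name and script path are placeholders you would replace with your own:

```json
{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["/absolute/path/to/weather_server.py"]
    }
  }
}
```

On restart, Claude Desktop launches each configured command as a child process and talks to it over stdio.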

For more complex enterprise environments, more advanced service discovery mechanisms may be required, such as a central registry or service catalog. These may be supported in future versions of MCP or in community extensions.

A key link to ensure secure connection

Security is an important part of MCP connections, especially for remote communication. MCP provides a variety of security mechanisms to ensure connection security:

1. Authentication : Confirm the identity of the client and server to prevent unauthorized access.

• Local communication: usually relies on the operating system's process security mechanisms.
• Remote communication: supports HTTP basic authentication, API keys, OAuth, and other authentication methods.

2. Authorization : Control which resources authenticated users can access and which operations they can perform.

• Permission control: restrict access rights based on user roles or identities.
• Operation auditing: record all user operations for easy traceability.

3. Transmission security : Protect data in transit.

• Local communication: data does not cross the network and is relatively safe.
• Remote communication: use HTTPS-encrypted transmission to prevent data from being eavesdropped on or tampered with.

4. Data isolation : Ensure that different users' data is isolated.

• Session isolation: each client connection is independent; data is not shared.
• Resource isolation: users can only access resources they have permission for.

When implementing an MCP server, security should be considered from the beginning of the design, rather than added as an afterthought. Reasonable security design can ensure system safety without excessively restricting functionality and usability.
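As one small, concrete piece of that puzzle, here is a sketch of an API-key check for a remotely exposed server. The `Authorization: Bearer <key>` convention is standard HTTP; wiring this into a particular web framework is left out:

```python
import hmac

def is_authorized(headers: dict, api_key: str) -> bool:
    """Return True only for a valid bearer token.

    hmac.compare_digest performs a constant-time comparison, which avoids
    leaking key prefixes through response-timing side channels.
    """
    auth = headers.get("Authorization", "")
    prefix = "Bearer "
    if not auth.startswith(prefix):
        return False
    return hmac.compare_digest(auth[len(prefix):], api_key)
```

A request gate like this belongs in front of every remote endpoint; authorization (what the authenticated caller may do) is then a separate layer on top.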

5. MCP practical cases: from theory to application

The theoretical knowledge and basic techniques have been introduced. Now let us use two practical cases to demonstrate MCP's capabilities in real applications.

Case 1: Building an MCP server connected to a local database

Databases are a core component of almost all applications, and enabling AI to interact directly with databases is a very valuable capability. Below we describe how to build an MCP server that connects to a local SQLite database.

Scenario description : We have a SQLite database that stores product information, including product name, price, and inventory. We want the AI, through the MCP server, to be able to query product information, add new products, update inventory, and so on.

Design ideas :

1. Resource design : Provide resources for querying table structures and data, allowing the AI to understand the structure and content of the database.
2. Tool design : Provide tools for executing SQL queries, adding products, updating inventory, and so on, so the AI can operate on the database.
3. Security considerations : Limit SQL execution permissions to avoid dangerous operations (such as DROP TABLE).

Implementation steps :

First, initialize the MCP server and set its basic information, such as the server name and description, which are displayed when the client connects.

Then, connect to the SQLite database. Connections can be established when the server starts or on demand; for performance reasons, they are usually established at server startup and released at shutdown.

Next, register resource getters for querying the database structure and data. These respond to the client's resource requests, returning the table structures, table data, and so on.

Then, register tool functions for performing SQL operations. These respond to the client's tool-call requests, execute SQL query or update operations, and return the results.

Finally, start the server and wait for clients to connect. Once the server is running, a client (such as Claude Desktop) can connect to it to query the database structure, perform SQL operations, and so on.
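The steps above can be sketched as follows. This assumes the official MCP Python SDK (PyPI package mcp) and a products.db file; the keyword blocklist is a deliberately simple illustration of "limit SQL execution permissions" — production code should prefer a read-only connection or an allowlist:

```python
import sqlite3
import sys

try:
    from mcp.server.fastmcp import FastMCP
except ImportError:           # SDK not installed: the query logic still works
    FastMCP = None

# Crude guard: reject statement types we never want the AI to run.
FORBIDDEN = {"drop", "delete", "alter", "attach", "pragma"}

def run_query(conn: sqlite3.Connection, sql: str) -> list:
    """Execute SQL after rejecting obviously dangerous statement types."""
    first = sql.strip().split(None, 1)[0].lower() if sql.strip() else ""
    if first in FORBIDDEN:
        raise PermissionError(f"statement not allowed: {first}")
    return conn.execute(sql).fetchall()

if FastMCP is not None:
    mcp = FastMCP("products-db")            # 1. server registration
    db = sqlite3.connect("products.db")     # 2. connect at startup

    @mcp.resource("schema://tables")        # 3. resource: expose the schema
    def schema() -> str:
        rows = db.execute(
            "SELECT sql FROM sqlite_master WHERE type='table'").fetchall()
        return "\n".join(r[0] for r in rows)

    @mcp.tool()                             # 4. tool: guarded SQL execution
    def query(sql: str) -> list:
        """Run a guarded SQL statement against the product database."""
        return run_query(db, sql)

    # 5. start; gated on --serve so importing the module has no side effects
    if __name__ == "__main__" and "--serve" in sys.argv:
        mcp.run()                           # stdio transport by default
```

With the schema exposed as a resource, the AI can read the table definitions first and then write sensible `SELECT` statements against them.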

Effect : Through this MCP server, the AI can perform the following operations:

• Query the table structures in the database and understand the data model.
• Query product information, for example "find products priced below 100 yuan".
• Add new products and update product information.
• Analyze sales data, generate reports, and more.

This case demonstrates how MCP enables AI to interact directly with a database, greatly expanding the AI's data-processing capabilities.

Case 2: Building a development assistant that integrates seamlessly with GitHub

Software development is an important area of AI application, and allowing AI to interact directly with code repositories can greatly improve development efficiency. Below we introduce how to build an MCP server connected to GitHub.

Scenario description : We want the MCP server to let the AI view code in a GitHub repository, create Issues, submit Pull Requests, and so on, making it a powerful development assistant.

Design ideas :

1. Resource design : Provide resources such as repository information, file contents, and Issue lists to help the AI understand the repository's structure and status.
2. Tool design : Provide tools for creating Issues, submitting PRs, and adding comments, so the AI can participate in the development process.
3. Security considerations : Use appropriately scoped permissions to avoid dangerous operations (such as deleting repositories).

Implementation steps :

First, initialize the MCP server and set its basic information, such as the server name and description, which are displayed when the client connects.

Then, configure the GitHub API client. This requires providing a GitHub access token for authentication and authorization.

Next, register resource getters for viewing repository information and code. These respond to the client's resource requests, returning repository information, file contents, Issue lists, and so on.

Then, register tool functions for performing GitHub operations. These respond to the client's tool-call requests: creating Issues, submitting PRs, adding comments, and so on.

Finally, start the server and wait for clients to connect. Once the server is running, clients (such as Claude Desktop) can connect to it to view repository information, perform GitHub operations, and so on.
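The tool layer of such a server ultimately boils down to authenticated REST calls. Here is a sketch of a "create Issue" tool body using only the standard library; the endpoint and header shapes follow GitHub's public REST API, while in practice you might instead use a client library or an existing community GitHub MCP server:

```python
import json
import urllib.request

API = "https://api.github.com"

def build_issue_request(owner: str, repo: str, token: str,
                        title: str, body: str = "") -> urllib.request.Request:
    """Build the REST call behind a 'create Issue' tool (no network yet)."""
    return urllib.request.Request(
        url=f"{API}/repos/{owner}/{repo}/issues",
        data=json.dumps({"title": title, "body": body}).encode(),
        headers={
            "Authorization": f"Bearer {token}",   # personal access token
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )

def create_issue(owner: str, repo: str, token: str,
                 title: str, body: str = "") -> dict:
    """Tool body: create the Issue and return GitHub's JSON response."""
    req = build_issue_request(owner, repo, token, title, body)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

Separating request construction from transmission keeps the tool testable without network access, and the same pattern extends to PRs and comments by changing the endpoint and payload.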

Effect : Through this MCP server, the AI can perform the following operations:

• View the repository's code to understand the project structure and implementation details.
• Create an Issue to report a bug or suggest a feature.
• Submit a Pull Request to contribute code directly.
• Add comments and participate in code reviews and discussions.
• Analyze project history, generate statistical reports, and so on.

This case shows how MCP allows AI to interact directly with GitHub and become a powerful assistant for development teams.

Summary

As an open standard for connecting AI assistants to data systems, MCP solves the M×N problem of traditional AI integration through a unified protocol, greatly simplifying the integration of AI applications with data sources. It adopts a client-server architecture and defines five basic primitives: Prompts, Resources, Tools, Roots, and Sampling, covering most scenarios in which AI applications interact with external systems.

In actual applications, an MCP server can connect to a wide variety of data sources and tools, from local file systems to databases, from GitHub to Slack, greatly expanding AI's capabilities. Through the SQLite database and GitHub integration cases introduced in this article, we can see MCP's value in real applications.