MCP and Function Calling: Concepts

Written by
Jasper Cole
Updated on: July 8, 2025

Explore the boundaries of artificial intelligence and learn how to make AI models interact with external systems more intelligently.

Core content:
1. The limitations of large language models (LLMs) and the application scenarios of RAG
2. The basic concepts and importance of Function Calling and MCP
3. The execution process of Function Calling and the plug-and-play feature of MCP

Yang Fangxian
Founder of 53AI/Most Valuable Expert of Tencent Cloud (TVP)

With the rapid development of artificial intelligence, large language models (LLMs) have gradually penetrated every aspect of our lives and work. Powerful as they are, these models still have clear limitations, such as weak access to real-time information and an inability to execute complex tasks on their own.

RAG (Retrieval-Augmented Generation) is now widely used in enterprise AI applications to compensate for the model's outdated information and lack of vertical-domain knowledge.

Although RAG is powerful, it is mainly used for intelligent knowledge question answering. For large models to gain stronger capabilities, they need to interact with the outside world effectively and securely, and that need gave rise to Function Calling and MCP.

Function Calling relies on the capabilities of a specific large model, while MCP is a general-purpose protocol.

This article discusses Function Calling and MCP at the conceptual level; a follow-up article will walk through code examples.

Introduction to Function Calling

Concept

Function Calling is a mechanism provided by certain large models (such as OpenAI's GPT-4 and Alibaba's Qwen2) that lets the model generate structured output to call predefined functions or APIs in external systems.
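To make "structured output" concrete, here is a minimal sketch of how a tool is typically described to an OpenAI-style model and what the model's call looks like. The schema shape follows the JSON Schema convention used by such APIs; the function name `get_weather` and its fields are hypothetical examples, not part of any real API.

```python
# A hypothetical tool description in the JSON Schema style used by
# OpenAI-compatible "tools" parameters.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical function name
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

# When the model decides a call is needed, it emits a structured payload
# like this instead of free-form text; the Agent program then executes it.
model_output = {
    "name": "get_weather",
    "arguments": '{"city": "Beijing", "unit": "celsius"}',
}
```

The key point is that the model never runs `get_weather` itself; it only produces the name and arguments in a machine-parseable form.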

Execution Process

We usually say that the large model "calls the API", but this is not entirely accurate: the model only generates the call instruction, while the Agent program executes it. The execution process is as follows:

0. The Agent program is the AI application we develop. It pre-registers the external function interfaces with the large model (registering no more than about 20 interfaces is recommended).

1. The user initiates a request through natural language, and the Agent receives the request.

2. The Agent program submits the user's request to the large model, which first parses its semantics and decides whether an external tool needs to be called.

3. If the model determines that a function needs to be called, it generates a call instruction containing the function name and input parameters and returns it to the Agent program.

4. After the Agent program receives the call instruction returned by the model, it executes the call to the tool function.

5. After the tool function is executed, the result is returned to the Agent program.

6. The Agent program feeds the function's return value, together with the prompt, back to the large model.

7. The large model combines the tool's returned data with the original context to generate the final result and returns it to the Agent.

8. The Agent program presents the output results to the end user.
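The steps above can be sketched in a few dozen lines. This is an illustrative toy, not a vendor SDK: `call_model` is a stand-in that fakes the LLM's decision, and `get_weather` is a hypothetical tool; the comments map each line back to the numbered steps.

```python
import json

def get_weather(city: str) -> str:
    """Steps 4-5: the tool function executed by the Agent, not the model."""
    return f"Sunny, 25°C in {city}"  # stub result

TOOLS = {"get_weather": get_weather}  # step 0: pre-registered interfaces

def call_model(messages, tools=None):
    """Stand-in for a real LLM API. It fakes the decision of steps 2-3:
    first pass returns a tool call, second pass returns a final answer."""
    last = messages[-1]["content"]
    if last.startswith("TOOL_RESULT:"):
        return {"content": f"Final answer based on: {last}"}  # step 7
    return {"tool_call": {"name": "get_weather",
                          "arguments": json.dumps({"city": "Beijing"})}}

def agent(user_request: str) -> str:
    messages = [{"role": "user", "content": user_request}]    # step 1
    reply = call_model(messages, tools=TOOLS)                 # step 2
    if "tool_call" in reply:                                  # step 3
        call = reply["tool_call"]
        args = json.loads(call["arguments"])
        result = TOOLS[call["name"]](**args)                  # steps 4-5
        messages.append({"role": "user",
                         "content": f"TOOL_RESULT: {result}"})  # step 6
        reply = call_model(messages)                          # step 7
    return reply["content"]                                   # step 8
```

Note that the loop shape, not the fake model, is the point: the Agent owns the dispatch table and the execution, and the model only decides and summarizes.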

Features

1. Active invocation: The model recognizes the user's intent from natural language, decides whether a tool call is needed, and generates the call instruction.

2. Real-time feedback: The call instructions generated by the model are executed by the Agent program, and the results are fed back to the model, allowing it to generate timely, accurate responses.

3. Flexible implementation: There is no strict standard communication protocol; the calling format depends on the model vendor. This flexibility is a feature, but also a drawback.

Introduction to MCP (Model Context Protocol)

Concept

MCP (Model Context Protocol) is a standard protocol launched by Anthropic in November 2024. Its purpose is to establish a unified communication interface between AI models (such as large language models) and external data sources or tools.

It can be compared to a USB port for AI: whether it is an AI model or an external tool, anything that complies with the MCP standard can be connected "plug and play", with no need to write a separate integration for each tool. Nor is there any language restriction: just as a front end and back end separated by a Web API specification can each be written in any language, MCP clients and servers can be implemented in any language.

OpenAI has since added support for MCP as well.

Features

1. Openness: MCP is an open standard. Any developer or service provider can build on the protocol, which avoids reinventing the wheel and promotes a shared ecosystem. There are already many MCP directory sites (such as https://mcp.so).

2. Standardization: Communication uses the JSON-RPC 2.0 standard, ensuring unified and efficient interaction.

3. AI enhancement: MCP upgrades AI applications from simple question answering to tools that can perform complex tasks, such as managing code, processing files, and connecting to external systems. For example, after Claude connects to GitHub through MCP, it can complete complex operations such as creating a project and submitting a pull request.

4. Security: MCP data interaction follows a standard protocol, which makes data flows easier to control and helps prevent leakage. The server keeps API keys on the server side so they are never exposed to the large-model provider, and the host authorizes each client connection, keeping connections secure and controllable.

5. Compatibility: MCP supports many kinds of data, such as file contents, database records, API responses, real-time data, screenshots, and log files, making it suitable for a wide variety of data-interaction scenarios.

6. Extensibility: MCP provides primitives such as prompt templates, tools, and sampling, which flexibly extend how AI applications interact with data sources. Developers can customize prompt templates or add tools to extend data-processing capabilities.
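The standardization point is easiest to see on the wire. Below is a sketch of the JSON-RPC 2.0 message shape MCP uses: every request carries `"jsonrpc": "2.0"`, an `id`, a `method`, and optional `params`. The method name `tools/call` is defined by the MCP specification; the tool name `search_notes` and its arguments are hypothetical.

```python
import json

# A JSON-RPC 2.0 request as MCP frames it. "tools/call" is a real MCP
# method; the tool "search_notes" is a made-up example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_notes",
        "arguments": {"query": "MCP"},
    },
}

# A matching success response echoes the same id and carries a "result".
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "2 notes found"}]},
}

wire = json.dumps(request)  # what actually travels between client and server
```

Because every MCP server speaks this same framing, a client written once can talk to any compliant server, which is exactly the "USB port" property described above.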

Core Architecture

MCP adopts a client-server architecture and includes the following components:

1. MCP Host: The AI application or tool that initiates the request, such as Claude Desktop, Cursor, or Windsurf.

2. MCP Client: Located inside the Host, it maintains a one-to-one connection with the MCP Server and is responsible for message routing, capability management, and protocol negotiation.

3. MCP Server: A server-side component that provides context data, tools, and prompt templates; it responds to client requests and provides access to external resources.

4. Resources and tools: Local or remote data resources (such as files and databases) and callable functions (tools) that let AI models obtain external information and perform tasks in real time.

Calling Steps

1. Configure the relevant MCP Server in the host program (client) and establish a connection with the MCP Server.

2. The user asks a question in natural language, and the host program sends the prompt (incorporating the user's question) together with the list of tools provided by the MCP Server to the large model.

3. Once the large model understands the request, it generates a call instruction, and the host program sends the instruction to the MCP Server through the Client.

4. After receiving the request, the MCP Server parses it, performs the corresponding operation (such as searching the web or taking a note), encapsulates the result in a response message, and sends it back to the client.

5. The client feeds the result back to the large model, which combines it with the original context to generate the final answer for the user.
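The calling steps can be sketched with the MCP Server faked as an in-process function. Real MCP traffic is JSON-RPC 2.0 over stdio or HTTP, and `tools/list` / `tools/call` are methods defined by the MCP specification; the server's single tool `take_note` here is hypothetical, and the model's decision is elided.

```python
import json

def fake_mcp_server(raw: str) -> str:
    """Stand-in for an MCP Server reachable over stdio/HTTP."""
    req = json.loads(raw)
    if req["method"] == "tools/list":        # step 2: advertise tools
        result = {"tools": [{"name": "take_note",
                             "description": "Save a short note."}]}
    elif req["method"] == "tools/call":      # step 4: execute the tool
        note = req["params"]["arguments"]["text"]
        result = {"content": [{"type": "text", "text": f"saved: {note}"}]}
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

def rpc(method, params, _id=[0]):
    """Client side: frame a JSON-RPC 2.0 request and unwrap the result."""
    _id[0] += 1
    raw = json.dumps({"jsonrpc": "2.0", "id": _id[0],
                      "method": method, "params": params})
    return json.loads(fake_mcp_server(raw))["result"]

tools = rpc("tools/list", {})                # step 2: fetch the tool list
# ... the host hands `tools` plus the user's question to the large model,
# which decides to call take_note (step 3) ...
result = rpc("tools/call", {"name": "take_note",
                            "arguments": {"text": "MCP is a protocol"}})
```

The division of labor mirrors Function Calling: the model only chooses the method and arguments, while the host's Client handles framing, transport, and delivery of the result back to the model.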

The relationship and difference between MCP and Function Calling

Relationship

The large model is like a brain, and both MCP and Function Calling exist to let that brain speak and act on the outside world. Function Calling can be seen as one concrete implementation path within the MCP ecosystem; in particular, in the concept of tool calling (Tools), the two are highly similar.

Differences

1. Interaction mode

  • MCP: Supports interactive and continuous context management, and AI can interact with external resources in multiple rounds.
  • Function Calling: A simple request-response model where a single call performs a specific task without interactive continuity.

2. Positioning:

  • MCP: An open standard protocol that defines a common communication architecture and data format (similar to the USB standard).
  • Function Calling: Extension capabilities provided by specific model vendors.

3. Communication protocol standardization:

  • MCP: Strictly complies with JSON-RPC 2.0, with high standardization and interoperability.
  • Function Calling: There is no unified standard, and the protocol depends on the implementation of specific model manufacturers.

4. Ecological openness:

  • MCP: The ecosystem is open and community-driven; any developer or service provider can join freely.
  • Function Calling: The ecosystem is relatively closed and relies on support from specific model manufacturers.

I hope this article was helpful to you!