The relationship and difference between MCP and Function Calling

Explore the differences and connections between MCP and Function Calling in large models, and understand how MCP optimizes intelligent systems.
Core content:
1. The context-explosion risk of Function Calling and how to mitigate it
2. How MCP is applied to Function Calling in large models, and its advantages
3. MCP's initialization flow and on-demand loading mechanism
Question: Will defining too many functions for Function Calling blow up the model's context?
Function definitions themselves count toward the model's context. OpenAI states this explicitly in its official documentation: "Functions are injected into the system message, so they occupy context and are billed as input tokens; if you run into the context limit, reduce the number of functions or simplify the parameter descriptions."
As long as the total input tokens (system prompt + function list + conversation history + user question) stay within the selected model's maximum context window, the context_length_exceeded error is not triggered; once they exceed it, the request fails with that error.
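To make the accounting concrete, here is a minimal sketch assuming the OpenAI Python SDK; the model name and the get_weather tool are illustrative. The serialized tool definitions become part of the prompt, which shows up in usage.prompt_tokens:

```python
from openai import OpenAI

client = OpenAI()

# One illustrative function definition; every entry in this list is
# serialized into the prompt and billed as input tokens on every request.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a given city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string", "description": "City name"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)

# Compare this value with and without `tools` to see the per-turn overhead
# the function list adds.
print(resp.usage.prompt_tokens)
```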
What really explodes in practice is not a hard error but the per-turn overhead: because the full function list is re-sent with every request, a large set of verbose definitions inflates input tokens, latency, and cost on every single turn of the conversation, long before the context window itself is exceeded.
The significance of MCP for large model function calling
In the first round of interaction with the LLM, the client passes in only the user query plus the system message (default prompt, history context, and a summary of available capabilities); the LLM then decides whether it needs to call a tool or access a resource. Only when it does will the client pass the specific tool's or resource's JSON Schema to the LLM, so the LLM can generate the tool or resource call parameters.
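The sketch below illustrates this two-phase flow under stated assumptions: it uses the OpenAI Python SDK, the USE_TOOL signalling convention and the TOOL_SCHEMAS registry are hypothetical, and a real MCP client would fetch the schemas from the server rather than a local dict:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical registry of full JSON Schemas, kept outside the model context.
TOOL_SCHEMAS = {
    "get_weather": {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    },
}

SYSTEM = (
    "You can use these capabilities: get_weather. "
    "If one is needed, reply with exactly: USE_TOOL:<name>. Otherwise answer directly."
)

messages = [
    {"role": "system", "content": SYSTEM},
    {"role": "user", "content": "What's the weather in Paris?"},
]

# Phase 1: no schemas in context; the model only decides WHETHER a tool is needed.
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
reply = first.choices[0].message.content or ""

if reply.startswith("USE_TOOL:"):
    name = reply.split(":", 1)[1].strip()
    # Phase 2: inject only the one schema that is actually needed,
    # and force the model to produce its call arguments.
    second = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        tools=[TOOL_SCHEMAS[name]],
        tool_choice={"type": "function", "function": {"name": name}},
    )
    print(second.choices[0].message.tool_calls)
else:
    print(reply)
```

The design point: only the schema of the tool the model actually chose ever enters the context, instead of the entire catalog on every turn.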
Hierarchical calls
MCP separates responsibilities between client and server and provides a unified JSON-RPC specification for the "execution layer", supporting dynamic enumeration and invocation of tools (tools/list → tools/call). Tool metadata can thus be maintained "outside the model" rather than being injected into the context on every turn of the conversation.
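A minimal client-side sketch using the official MCP Python SDK (the mcp package); the server launch command and the get_weather tool are assumptions for illustration. initialize() performs the handshake, list_tools() issues the tools/list request, and call_tool() issues tools/call:

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Hypothetical MCP server launched over stdio.
    params = StdioServerParameters(command="python", args=["my_mcp_server.py"])

    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()  # capability handshake

            # JSON-RPC: tools/list -- enumerate tool metadata held on the
            # server, outside the model's context.
            listed = await session.list_tools()
            for tool in listed.tools:
                print(tool.name, "-", tool.description)

            # JSON-RPC: tools/call -- invoke one tool with concrete arguments
            # (the tool name and arguments here are illustrative).
            result = await session.call_tool("get_weather", arguments={"city": "Paris"})
            print(result.content)


asyncio.run(main())
```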