Understanding MCP, API and Function Call in AI Agents

Written by
Silas Grey
Updated on: July 3, 2025

In-depth analysis of the key interaction technologies in AI agents to help you master the bridge between AI models and external systems.

Core content:
1. The three major technical concepts and characteristics of API, Function Call and MCP
2. Their application scenarios and functions in AI agents
3. Actual code examples to demonstrate how to obtain data through API

Yang Fangxian
Founder of 53AI/Most Valuable Expert of Tencent Cloud (TVP)

Introduction

With the rapid development of artificial intelligence technology, the demand for large language models (LLMs) to interact with external systems is growing. In this context, API, Function Call, and MCP (Model Context Protocol) are key interaction mechanisms, each playing a different role. This article will explore the concepts, characteristics, application scenarios, and roles of these three technologies in AI Agents, and use specific examples to help readers fully understand their similarities and differences.

Basic concepts and definitions

API (Application Programming Interface)

API is a universal system component communication standard that defines the rules for interaction between software components. API can be used for communication between any two systems and is not specific to AI or AI agents.

APIs mainly play the role of "bridge" in AI systems, connecting AI models with external data sources or services. Through APIs, AI models can access and utilize the functions and data of external systems, thereby enhancing their capabilities and application scope.

Function Call

Function Call is a mechanism specific to Large Language Models (LLMs) that allows models to call external functions or APIs. This is the main way for LLMs to interact with the outside world, and it is up to the LLM to decide when to call which function.

Function Call was first introduced by OpenAI in June 2023 and was initially implemented on the GPT-3.5 and GPT-4 models. It allows the model to generate structured JSON output to call predefined functions or APIs in external systems.
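The structured output mentioned above can be sketched as follows. The schema shape mirrors OpenAI's function-calling format; the `get_weather` function and the model's output string here are illustrative, not taken from any real API response.

```python
import json

# A function definition in the JSON-schema style popularized by OpenAI's
# function calling API (the shape shown here is illustrative)
get_weather_tool = {
    "name": "get_weather",
    "description": "Get current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Beijing"}
        },
        "required": ["city"],
    },
}

# Instead of free text, the model answers with a structured call like this,
# which the application can parse reliably and dispatch to real code:
model_output = '{"name": "get_weather", "arguments": {"city": "Beijing"}}'
call = json.loads(model_output)
print(call["name"], call["arguments"]["city"])
```

Because the output is constrained to this JSON shape, the application never has to scrape a function name out of free-form prose.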

Model Context Protocol (MCP)

Model Context Protocol is an open standard launched by Anthropic in November 2024 to unify the communication protocol between large language models (LLMs) and external data sources and tools. The main purpose of MCP is to solve the problem that current AI models cannot fully realize their potential due to data silo limitations.

MCP provides a standardized interface that enables AI models to securely access and operate local and remote data, providing an interface for AI applications to connect everything. Through a standardized data access interface, it greatly reduces the number of links that directly contact sensitive data and reduces the risk of data leakage.

Role in AI Agent

The role of API in AI Agent

API is the basis for AI Agent to interact with external systems. Through API, AI Agent can access and utilize various external services and data sources. For example:

  • Call the weather API to get real-time weather information
  • Call the map API to obtain route planning
  • Call database API to query and update data
  • Call third-party service API to obtain specific functions

API provides AI Agent with a connection to the outside world, enabling AI Agent to obtain necessary data and functional support to complete more complex tasks.

Example

```python
import requests

# Check today's weather in Beijing
response = requests.get('http://localhost:5000/api/weather?city=Beijing&date=today')
data = response.json()
print(f"Today's weather in Beijing is: {data['weather']}, "
      f"temperature range: {data['temperature']}, "
      f"rainfall probability: {data['rain_probability']}")
```

The role of Function Call in AI Agent

Function Call is a mechanism for AI Agent to call external functions. It allows AI models to generate instructions to call external functions, and developers can implement specific functions. For example, the model calls the weather API and returns real-time weather data.

Function Call enables AI Agents to interact directly with external systems without requiring developers to write complex text parsing logic. This simplifies the development process and improves the reliability and accuracy of AI applications.

Function call workflow

  1. The user sends a natural language request to the LLM (e.g. "What is the weather in Beijing today?")
  2. The LLM recognizes that an external function (get_weather) needs to be called and generates parameters in JSON format
  3. The function call request is sent to the corresponding API or service
  4. The API returns result data
  5. The LLM converts the results into a user-friendly natural language response

Function Call Key Components

  • Function definition: describes the function's name, parameters, types, and description
  • Parameter generation: the LLM generates appropriate parameter values based on user input
  • JSON formatting: function calls are output in a structured format
  • Result processing: API responses are parsed and turned into natural language replies

Specifically, Function Call provides the following advantages for AI Agents:

  1. Structured output: ensures the model outputs function call parameters in a predefined JSON format, improving parsability and reliability
  2. Clearly defined functions: the name, description, parameter types, etc. are specified through a function schema
  3. Reduced parsing complexity: developers do not need to write complex text parsing logic
  4. Improved accuracy: reduces the model's "hallucination" problem when generating function calls
  5. Simplified development process: standardizes the interaction between large models and external tools


```python
import json
import requests
from typing import Any, Dict, Optional

# Define available functions
def search_weather(city: str, date: str = "today") -> Dict[str, Any]:
    """
    Search for the weather in a specific city on a specific date.

    Args:
        city: city name, such as "Beijing", "Shanghai"
        date: date, in the format YYYY-MM-DD, or "today" for today

    Returns:
        A dictionary containing weather information
    """
    # Call the weather API
    response = requests.get(f'http://weather-api.example.com/api/weather?city={city}&date={date}')
    return response.json()

def search_restaurants(location: str, cuisine: Optional[str] = None,
                       price_range: Optional[str] = None) -> Dict[str, Any]:
    """
    Search for restaurants in a specific location.

    Args:
        location: location, such as "Beijing Haidian District"
        cuisine: cuisine, such as "Sichuan cuisine", "Cantonese cuisine" (optional)
        price_range: price range, such as "below 100", "100-300", "above 300" (optional)

    Returns:
        A dictionary containing restaurant information
    """
    # Construct query parameters
    params = {"location": location}
    if cuisine:
        params["cuisine"] = cuisine
    if price_range:
        params["price_range"] = price_range

    # Call the restaurant API
    response = requests.get('http://restaurant-api.example.com/api/search', params=params)
    return response.json()

# Function dictionary, mapping function names to actual functions
available_functions = {
    "search_weather": search_weather,
    "search_restaurants": search_restaurants,
}

# Function descriptions, used by the AI model to decide which function to call
function_descriptions = [
    {
        "name": "search_weather",
        "description": "Get weather information for a specified city and date",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "City name, such as Beijing, Shanghai"
                },
                "date": {
                    "type": "string",
                    "description": "Date, in the format YYYY-MM-DD, or today for today"
                }
            },
            "required": ["city"]
        }
    },
    {
        "name": "search_restaurants",
        "description": "Search for restaurants in a specified location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "Location, such as Beijing Haidian District"
                },
                "cuisine": {
                    "type": "string",
                    "description": "Cuisine, such as Sichuan cuisine, Cantonese cuisine"
                },
                "price_range": {
                    "type": "string",
                    "description": "Price range, such as below 100, 100-300, above 300"
                }
            },
            "required": ["location"]
        }
    }
]

# Simulate the Function Call decision made by the AI model
def simulate_ai_function_call(user_message: str) -> Optional[Dict[str, Any]]:
    """
    Simulate an AI model analyzing a user message and deciding which function to call.

    In real applications this step is performed by the AI model; this is only a demo.
    """
    if "weather" in user_message or "raining" in user_message:
        # Extract the city (simplified; it should really be parsed from the message)
        city = "Beijing"
        if "tomorrow" in user_message:
            date = "2025-04-06"  # simulate tomorrow's date
        else:
            date = "today"

        return {
            "function": "search_weather",
            "parameters": {"city": city, "date": date}
        }
    elif "restaurant" in user_message or "eating" in user_message:
        # Extract location and preferences (simplified)
        location = "Beijing Haidian District"
        cuisine = "Sichuan cuisine" if "Sichuan cuisine" in user_message else None
        price_range = "100-300"  # default medium price

        return {
            "function": "search_restaurants",
            "parameters": {
                "location": location,
                "cuisine": cuisine,
                "price_range": price_range
            }
        }
    else:
        return None  # no function call needed

# Simulate the process of handling a user message
def process_user_message(user_message: str) -> str:
    """Process a user message and return a reply."""
    print(f"User: {user_message}")

    # Decide whether a Function Call is needed
    function_call = simulate_ai_function_call(user_message)

    if function_call:
        function_name = function_call["function"]
        parameters = function_call["parameters"]

        print(f"AI decided to call function: {function_name}")
        print(f"Parameters: {json.dumps(parameters, ensure_ascii=False)}")

        # Execute the Function Call
        if function_name in available_functions:
            function_to_call = available_functions[function_name]
            function_response = function_to_call(**parameters)

            print(f"Function return result: {json.dumps(function_response, ensure_ascii=False)}")

            # Generate a reply from the function result (simplified)
            if function_name == "search_weather":
                day = "tomorrow" if parameters["date"] != "today" else "today"
                return (f"{parameters['city']} {day} the weather is {function_response['weather']}, "
                        f"temperature range {function_response['temperature']}, "
                        f"rainfall probability {function_response['rain_probability']}.")
            elif function_name == "search_restaurants":
                restaurants = function_response.get("restaurants", [])
                if restaurants:
                    restaurant_names = [r["name"] for r in restaurants[:3]]
                    cuisine_note = f", serving {parameters['cuisine']}" if parameters["cuisine"] else ""
                    return (f"I found the following restaurants for you: {', '.join(restaurant_names)}; "
                            f"they are all in {parameters['location']}{cuisine_note}.")
                else:
                    return "Sorry, no restaurants matching the criteria were found."
        else:
            return f"Sorry, I can't process this request because the function {function_name} is not available."
    else:
        # No function call needed; reply directly
        return ("I understand your question, but there is no need to call a specific tool to answer it. "
                "[Here is a direct reply generated by the AI model]")

# Test
test_messages = [
    "What's the weather like in Beijing today? Will it rain?",
    "Recommend several Sichuan restaurants in Haidian District",
    "What is the history of the development of artificial intelligence?"
]
for message in test_messages:
    response = process_user_message(message)
    print(f"AI: {response}\n")
```

The role of MCP in AI Agent

MCP is a high-level protocol for AI Agents to interact with external systems. It provides a standardized interface that enables AI models to securely access and operate local and remote data, providing an interface for AI applications to connect everything.


MCP is implemented through a client-server architecture, which includes the following core concepts:

  • MCP Hosts: the LLM application that initiates the request (such as Claude Desktop, an IDE, or an AI tool)
  • MCP Clients: maintain a 1:1 connection with the MCP server inside the host program
  • MCP Servers: provide context, tools, and prompt information to MCP clients
  • Local resources: resources on the local computer that the MCP server can securely access (such as files and databases)
  • Remote resources: remote resources that the MCP server can connect to (e.g. via an API)
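Concretely, MCP clients and servers exchange JSON-RPC 2.0 messages. A sketch of the two most common requests follows; the method names come from the MCP specification, while the tool name and arguments are illustrative and payload details are abbreviated.

```python
import json

# A client asking a server to enumerate its tools (JSON-RPC 2.0 request)
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A client invoking one of those tools by name with structured arguments
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Beijing"}},
}

# On the wire these are serialized JSON, transported over stdio or HTTP
print(json.dumps(list_request))
print(json.dumps(call_request))
```

The server answers each request with a matching JSON-RPC response carrying the tool list or the tool's result.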

MCP provides the following advantages to AI Agents:

  1. Unified standards: MCP is an open standard that provides a consistent interface for communication between AI models and external systems
  2. Flexibility: MCP allows AI Agents to connect to a variety of different data sources and tools without writing custom integrations for each one
  3. Security: through standardized data access interfaces, MCP greatly reduces the number of links that directly touch sensitive data, lowering the risk of data leakage
  4. Scalability: MCP's architectural design enables AI Agents to scale as demand grows without rewriting code
MCP definition file example (a definition like this is usually passed to the AI model as a system prompt or configuration):

```python
import json

MCP_DEFINITION = """
Intelligent Assistant Code of Conduct (Model Context Protocol)

1. Basic Code of Conduct

Identity and tone
- You are a professional and friendly AI assistant
- Use a polite yet natural tone, avoiding being too formal or too casual
- Use consistent pronouns in your responses, using "I" instead of "AI assistant" or third-person pronouns

Response format
- Provide clear and concise answers to simple questions
- Provide structured responses to complex questions, using headings and paragraphs
- Use Markdown format to improve the readability of your answers
- Avoid unnecessary repetition and redundant statements

Knowledge boundaries
- Acknowledge the timeliness of knowledge: your knowledge is valid until April 2023
- When you are unsure of an answer, state your uncertainty clearly and avoid making up information
- For the latest information, proactively remind the user that search tools are required

2. Tool Usage Guidelines

Use the search tool:
- When users ask about events, news, or data after April 2023
- When users explicitly request the latest information
- When the question involves rapidly changing information (e.g., weather, stock prices)
- When you need to verify information that may be out of date

Use the data analysis tool:
- When a user uploads a data file that needs to be analyzed
- When complex numerical calculations need to be processed
- When you need to generate data visualizations
- When you need to extract insights from structured data

Use the image generation tool:
- When the user explicitly requests that an image be created or generated
- When complex concepts are better expressed with images
- When users need visual creativity or design inspiration

3. Safety and Ethical Standards

Handling sensitive topics
- Maintain a neutral and objective stance on sensitive topics such as politics and religion
- Provide diverse perspectives and avoid being biased towards any particular position
- Refuse to generate content that may be controversial or offensive

Handling harmful requests
- Politely refuse to generate information that could cause harm
- Refuse to provide detailed instructions for illegal activities
- Redirect users to constructive alternatives

Personal data protection
- Do not store or remember users' sensitive personal information
- Remind users to avoid sharing sensitive information in conversations
- Do not attempt to collect private user data

4. Multi-turn Dialogue Management

Contextual understanding
- Maintain the continuity of the conversation and remember the previous communication content
- Understand pronoun references and implicit information
- Politely ask for clarification when the context is unclear

Dialogue initiative
- Provide follow-up suggestions related to the current topic when appropriate
- Recognize and respond to the user's emotional state
- Steer the conversation in a constructive and helpful direction

Error handling
- Proactively correct errors you recognize in previous answers
- Accept corrections from users and express gratitude
- Avoid making excuses for mistakes and focus on providing correct information

5. Response Style Guide

Technical content
- Adapt the use of technical terminology to the expertise level of the user
- Provide easy-to-understand explanations for complex concepts
- Use analogies when necessary to explain abstract concepts

Educational content
- Use progressive explanation, starting with the basics and then the advanced
- Provide examples and application scenarios
- Encourage users to ask questions and explore

Creative content
- Demonstrate flexibility of thought and creativity
- Provide diverse creative options
- Balance practicality and innovation

6. Detailed Tool Call Specifications

Search tool (search)
- Parameter construction:
  - keywords: concise search keywords, no more than 5 words
  - rewritten_query: detailed search query, including full context
- Result processing:
  - Extract the most relevant information points
  - Verify consistency of information
  - Cite the source of the information

Data analysis tool (data_analysis)
- When to use:
  - When the file ID is available
  - Code builds follow best practices
  - Results need to be explained in easy-to-understand terms
- Coding standards:
  - Process data using pandas
  - Visualize using matplotlib or seaborn
  - Include appropriate comments

Image generation tool (text_to_image)
- Prompt structure:
  - Describe in detail the visual elements, style, and mood
  - Avoid requesting specific brands or celebrity likenesses
  - Specify an appropriate art style
"""

# In actual applications, this MCP definition is loaded and used to
# configure the behavior of the AI model
def configure_ai_model_with_mcp(ai_model, mcp_definition=MCP_DEFINITION):
    """Configure the behavior specifications of the AI model."""
    # In a real implementation, this may involve setting a system prompt
    # or model-specific parameters; this is just for illustration
    ai_model.set_system_prompt(mcp_definition)
    return ai_model

# Simulate the process of the AI handling a user message
def process_with_mcp(user_message):
    """Process a user message following the MCP specification."""
    print(f"User: {user_message}")
    msg = user_message.lower()  # case-insensitive keyword matching

    # Simulate the AI's thinking process under the MCP
    print("\n[AI's thinking process based on MCP]")

    # Determine the message type and the required processing method
    if "weather" in msg:
        print("1. The message is about weather and belongs to the real-time information category")
        print("2. According to the MCP tool usage guidelines, the search tool should be used to obtain the latest weather information")
        print("3. Appropriate search parameters need to be constructed, including location and time")

        # Simulate the Function Call decision
        function_call = {
            "function": "search",
            "parameters": {
                "keywords": "Beijing's weather today",
                "rewritten_query": "Today's latest weather forecast for Beijing: temperature and rainfall probability"
            }
        }

        print(f"4. Decided to call: {function_call['function']}")
        print(f"5. Parameters: {json.dumps(function_call['parameters'], ensure_ascii=False)}")

        # Simulate the function return result
        function_result = {
            "weather": "clear",
            "temperature": "22-28℃",
            "rain_probability": "5%"
        }

        print("6. Weather information obtained; organize the answer according to the MCP response format specification")
        print("7. Per the MCP, provide a clear and concise answer to this simple question")

        # Generate a reply
        response = ("Today's weather in Beijing is sunny, with temperatures between 22-28℃ "
                    "and a very low probability of rainfall of only 5%. It is suitable for outdoor "
                    "activities, but the ultraviolet rays are strong, so sunscreen is recommended.")

    elif "artificial intelligence" in msg and "history" in msg:
        print("1. The message asks about the history of artificial intelligence, which is a knowledge question")
        print("2. According to the MCP, this is a complex question and a structured answer should be provided")
        print("3. This is within the scope of the model's knowledge and does not require the use of tools")
        print("4. According to the MCP educational content guidelines, progressive explanation should be used")

        # Generate a reply
        response = """# The history of the development of artificial intelligence

The development of artificial intelligence (AI) can be divided into several key stages:

Early exploration (1940s-1950s)
- 1943: McCulloch and Pitts create the first mathematical model of a neural network
- 1950: Alan Turing proposes the famous "Turing Test"
- 1956: The Dartmouth Conference formally establishes the term "artificial intelligence"

Early development (1960s-1970s)
- Expert systems begin to emerge
- The rise of rule-based AI systems
- Preliminary research on natural language processing

The first AI winter (1970s-1980s)
- Funding cuts, research slows
- Computational power limitations hinder progress

Renaissance and progress (1990s-2000s)
- Machine learning methods become popular
- 1997: IBM's Deep Blue defeats chess champion Kasparov

Deep learning revolution (2010s to present)
- 2012: AlexNet achieves a breakthrough in the ImageNet competition
- 2016: AlphaGo defeats Go champion Lee Sedol
- 2020s: Large language and multimodal models such as the GPT series and DALL-E emerge

Artificial intelligence continues to advance, impacting every aspect of our lives."""

    elif "help me generate" in msg and "picture" in msg:
        print("1. The message requests image generation; the image generation tool is required")
        print("2. According to the MCP image generation tool usage guidelines, a detailed prompt needs to be constructed")
        print("3. Image description elements need to be extracted from the user message")

        # Simulate the Function Call decision
        function_call = {
            "function": "text_to_image",
            "parameters": {
                "prompt": ("A cute orange cat sits on the windowsill, looking at the birds outside "
                           "the window, the sun shines through the window onto the cat, creating a "
                           "warm and cozy atmosphere"),
                "style": "digital-art"
            }
        }

        print(f"4. Decided to call: {function_call['function']}")
        print(f"5. Parameters: {json.dumps(function_call['parameters'], ensure_ascii=False)}")
        print("6. Image generated; answer organized according to the MCP response specification")

        # Generate a reply
        response = ("I have generated for you an image of a cute orange cat sitting on a windowsill, "
                    "in a digital art style. The cat is looking out the window at the birds, the sun is "
                    "shining on it, and the whole picture is warm and cozy. I hope this meets your expectations!")

    else:
        print("1. Unable to determine the specific message type")
        print("2. According to the MCP, when a message is unclear, politely ask for clarification")

        # Generate a reply
        response = ("Sorry, I'm not sure what you're looking for. Are you looking for weather forecasts, "
                    "searching for information, or do you need help with something else? Please provide "
                    "more details so I can better help you.")

    print("\n[AI final reply]")
    return response

# Test different types of user messages
test_messages = [
    "What's the weather like in Beijing today? Is it suitable for going out?",
    "What is the history of the development of artificial intelligence?",
    "Help me generate a picture of a cat",
    "What do you think will happen tomorrow?"
]
for message in test_messages:
    ai_response = process_with_mcp(message)
    print(f"AI: {ai_response}\n")
```

The differences and connections among the three

Level Difference

From a hierarchical perspective, these three technologies can be divided into three different levels:

  1. Level 1: Function Call solves the problem of "how to call external functions"
  2. Level 2: MCP solves "how to efficiently connect a large number of external tools"
  3. Level 3: AI Agent solves "how to complete complex tasks autonomously"

Feature Comparison

| | API | Function Call | MCP |
| --- | --- | --- | --- |
| Nature | Universal inter-system communication standard | LLM-specific mechanism for calling external functions | Open protocol connecting LLMs with external data and tools |
| Origin | Long-established software practice | Introduced by OpenAI in June 2023 | Released by Anthropic in November 2024 |
| Scope | Communication between any two systems | A specific model and its predefined functions | Standardized across models, clients, and servers |
Relationships and collaboration

These three technologies are not mutually exclusive, but can work together. Together, they form a complete system for AI Agents to interact with the outside world:

  1. APIs provide basic functionality, enabling systems to communicate with each other
  2. Function Call provides direct operation capabilities, enabling AI models to call external functions
  3. MCP provides a higher level of intelligent coordination, enabling AI Agents to efficiently and securely access and operate various data sources and tools

Through this collaboration, AI Agent is able to complete complex tasks such as querying sales contract PDFs from CRM, sending emails, scheduling meetings, etc.
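A minimal sketch of this layering, with all names invented for illustration: a plain API wrapper at the bottom, a registry of callable tools in the middle, and dispatch of a model-generated structured call on top.

```python
def crm_api_get_contract(contract_id: str) -> dict:
    # Layer 1: a plain API wrapper (stubbed here instead of a real HTTP call)
    return {"id": contract_id, "title": "Sales contract", "format": "pdf"}

# Layer 2: an MCP-like registry mapping tool names to callables
TOOLS = {"get_contract": crm_api_get_contract}

def handle_function_call(call: dict) -> dict:
    # Layer 3: dispatch the structured call the model produced
    return TOOLS[call["name"]](**call["arguments"])

# The agent receives a structured call from the model and executes it
model_call = {"name": "get_contract", "arguments": {"contract_id": "C-42"}}
result = handle_function_call(model_call)
print(result["title"])
```

In a real system each layer is richer (authentication, schemas, error handling), but the division of responsibilities is the same.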

Specific application scenarios

Scenarios for using Function Call

Function Call is suitable for simple external system calls that are tightly coupled to a specific LLM. For example:

  1. Weather query: call the weather API to obtain real-time weather information
  2. Route planning: call the map API to get the best route
  3. Math calculations: perform simple math calculations
  4. Stock query: get stock prices and trend analysis

Scenarios for using MCP

MCP is suitable for applications that need to access multiple different systems and maintain context. For example:

  1. Enterprise-level applications: need to access data from multiple systems such as Google Drive, Slack, and GitHub
  2. Long-term conversation: applications that need to maintain conversation context
  3. Complex workflows: applications that require coordinating multiple tools to complete complex tasks
  4. Security-sensitive applications: applications that require strict control over data access

Using both together

Many modern AI applications support both the MCP specification and the Function Call feature. For example:

  1. Claude Desktop: acts as an MCP client, connecting to MCP Servers to discover and invoke their capabilities
  2. Cursor: supports MCP Servers to improve development efficiency
  3. Zed: supports MCP resources
  4. Sourcegraph Cody: supports resources through OpenCTX
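As an example of how such clients attach MCP servers, Claude Desktop reads a JSON configuration listing the commands used to launch each server. The sketch below builds one such configuration; the filesystem server package name and the directory path are illustrative, and the exact schema may vary by client version.

```python
import json

# Sketch of a Claude Desktop-style MCP server configuration: each entry
# names a server and the command used to launch it over stdio
config = {
    "mcpServers": {
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/Documents"]
        }
    }
}
print(json.dumps(config, indent=2))
```

On startup the client launches each listed server and connects to it as an MCP client, after which the model can see and call the server's tools.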

Development Trends and Future Prospects

Standardization and interoperability

As an open standard, MCP is becoming an important bridge connecting AI and external systems. It makes tool calls no longer dependent on specific LLM providers, improving interoperability and scalability.

Security and Privacy

The MCP architecture provides a better security model and privacy protection. For example, the MCP server controls its own resources and does not need to provide sensitive information such as API keys to the LLM provider. In this way, even if the LLM provider is attacked, the attacker cannot obtain this sensitive information.
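The credential isolation point can be sketched as follows: the secret lives only in the server process (the environment variable name and stub logic here are illustrative), and the model only ever sees the returned data.

```python
import os

# The key is set on the server host only; the model never sees it
os.environ["WEATHER_API_KEY"] = "sk-demo"

def server_side_get_weather(city: str) -> dict:
    key = os.environ["WEATHER_API_KEY"]  # stays inside the server process
    # ... would call the upstream API with `key` here (stubbed) ...
    return {"city": city, "weather": "clear"}  # no key in the response

result = server_side_get_weather("Beijing")
print("key" in result)  # → False: the model receives only the data, never the secret
```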

Context Management

MCP is particularly suitable for AI applications that need to maintain long-term context. It helps AI models better understand and handle complex tasks by providing resources, tools, and prompts.

Ecosystem Construction

The MCP ecosystem is developing rapidly, and many MCP servers implemented by the community have emerged. These servers provide rich functions, such as file system access, database interaction, Web automation, etc.

Conclusion

MCP, API and Function Call are three key mechanisms for AI Agent to interact with the outside world. They work at different levels and together constitute a complete AI system interaction framework.

API provides basic inter-system communication capabilities and is the foundation for building AI applications.

Function Call provides AI models with the ability to directly call external functions, simplifying the development process of AI applications.

As a high-level protocol, MCP provides a consistent standard for communication between AI models and external systems, improving interoperability and security.

By understanding the similarities, differences, and application scenarios of these three technologies, developers can better design and implement AI applications, enabling them to interact with the outside world efficiently and securely and complete more complex tasks.