Using DeepSeek Function Calling to build an intelligent testing assistant

Written by
Jasper Cole
Updated on: July 11, 2025
Recommendation

How is AI technology reshaping software testing? This article takes a close look at how the function calling capability of large language models can be applied to build an intelligent testing assistant.

Core content:
1. The application prospects of large language model technology in test automation
2. An analysis of function calling, with parameter definition examples
3. A comparison of traditional testing and the function calling approach, including implementation details


Introduction: When AI meets test automation 

With the breakthrough progress of large language model (LLM) technology, and especially the emergence of function calling, software testing, a traditional and critical link in software development, is facing AI-driven change. This article explores how to use the function calling capability of large language models to build an intelligent and efficient testing assistant.

Function calling: a technical analysis

What is function calling?

Function calling is the ability of a large language model to intelligently select and invoke predefined functions based on user requests. By understanding natural-language instructions, the model automatically matches the corresponding function template and generates structured parameters.

# Traditional test parameter definition example
test_case = {
    "api": "/user/login",
    "method": "POST",
    "params": {"username": "test", "password": "123456"}
}

# Function call parameter definition example
tools = [
    {
        "type": "function",
        "function": {
            "name": "execute_api_job",
            "description": "Execute interface test tasks",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "Task name, for example: User Center Test Task",
                    }
                },
                "required": ["location"],
            },
        }
    }
]
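The model never executes the function itself: it returns the chosen function name plus its arguments as a JSON string, which your code parses and dispatches. A minimal sketch of that parsing step (the tool-call payload here is a hypothetical example of what the model might emit):

```python
import json

# Hypothetical tool call as the model might return it: the "arguments"
# field is a JSON string that must be parsed before dispatching.
raw_tool_call = {
    "name": "execute_api_job",
    "arguments": '{"location": "User Center Test Task"}',
}

args = json.loads(raw_tool_call["arguments"])
print(raw_tool_call["name"], "->", args["location"])
```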

Comparison of technical advantages

Traditional method                   Function calling method
Manual maintenance of test cases     Automatic generation of test logic
Fixed parameter combinations         Dynamic parameter generation
Single assertion strategy            Smart assertion mechanism
Linear execution flow                Adaptive test paths

Implementation 

Defining functions

First, we define the functions that we want the large model to call:

tools = [
    {
        "type": "function",
        "function": {
            "name": "execute_api_job",
            "description": "Execute interface test tasks",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "Task name, for example: User Center Test Task",
                    }
                },
                "required": ["location"],
            },
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_report",
            "description": "Query test report",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "Task name, for example: Message center test task",
                    }
                },
                "required": ["location"],
            },
        }
    },
    {
        "type": "function",
        "function": {
            "name": "analysis_report",
            "description": "Analyze test report data",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "Task name, for example: Order Center Task",
                    }
                },
                "required": ["location"],
            },
        }
    }
]

Implementing the function

The concrete implementation depends on your own project; here is a simple demo:

def execute_api_job(job):
    """Simulate execution of a test task"""
    return f"Test task has started executing: {job}"


def get_report(job):
    """Simulate fetching a test report"""
    return f"To view the test report of {job}, please click https://www.baidu.com"


def analysis_report(job):
    """Simulate analysis of test data"""
    # Fetch the raw report data (get_data is project-specific and not shown here)
    report = get_data(job)
    # Send the test data to the large model for analysis
    response = send_messages(f"Analyze test task data: {report}")
    return f"Test result analysis:\n{response}"
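The if/elif dispatch shown below in `send_messages` can also be written as a name-to-function table, which keeps the lookup to one line as more tools are added. A sketch using the same function signatures (`analysis_report` is omitted here because it needs the model client):

```python
def execute_api_job(job):
    """Simulate execution of a test task"""
    return f"Test task has started executing: {job}"

def get_report(job):
    """Simulate fetching a test report"""
    return f"To view the test report of {job}, please click https://www.baidu.com"

# One dict lookup replaces the if/elif chain; unknown names yield None.
FUNCTION_MAP = {
    "execute_api_job": execute_api_job,
    "get_report": get_report,
}

def dispatch(name, arguments):
    func = FUNCTION_MAP.get(name)
    return func(arguments.get("location")) if func else None

print(dispatch("execute_api_job", {"location": "Order Center Task"}))
```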

Calling the large model

import json
from openai import OpenAI


def send_messages(messages):
    msg = [{"role": "user", "content": messages}]
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=msg,
        tools=tools,
    )
    mess = response.choices[0].message
    if mess.tool_calls:
        tool = mess.tool_calls[0]
        function_name = tool.function.name
        arguments = json.loads(tool.function.arguments)

        # Dispatch to the matching local function
        if function_name == 'execute_api_job':
            return execute_api_job(arguments.get("location"))
        elif function_name == 'get_report':
            return get_report(arguments.get("location"))
        elif function_name == 'analysis_report':
            return analysis_report(arguments.get("location"))
        # Unknown tool name: fall through to the model's text reply
    # No tool call: return the model's text reply
    return mess.content


client = OpenAI(
    api_key="your_api_key",
    base_url="https://api.deepseek.com",
)
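The code above returns the tool's result directly to the user. In the OpenAI-compatible chat format that DeepSeek exposes, you can instead feed the result back to the model as a role="tool" message (echoing the tool call's id) so the model phrases the final answer itself. A network-free sketch of that message construction (`build_followup_messages` is a hypothetical helper, not part of the SDK):

```python
import json

def build_followup_messages(user_text, tool_call_id, function_name,
                            arguments, function_response):
    # Conversation to send on the second round: the original user turn,
    # the assistant turn containing the tool call, and the tool result.
    # tool_call_id must echo the id from the model's tool_call.
    return [
        {"role": "user", "content": user_text},
        {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": tool_call_id,
                "type": "function",
                "function": {"name": function_name,
                             "arguments": json.dumps(arguments)},
            }],
        },
        {"role": "tool", "tool_call_id": tool_call_id,
         "content": function_response},
    ]

msgs = build_followup_messages(
    "Run the user center test task", "call_0", "execute_api_job",
    {"location": "User Center Test Task"},
    "Test task has started executing: User Center Test Task",
)
print(len(msgs), msgs[-1]["role"])
```

Passing `msgs` to a second `client.chat.completions.create` call would let the model summarize the tool output in natural language.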

Execution

# Test example
if __name__ == "__main__":
    while True:
        user_input = input("User input: ")
        if user_input.lower() == 'exit':
            break
        response = send_messages(user_input)
        print("Assistant reply:", response)
        print("\n" + "=" * 50 + "\n")

Execution results display: