The Mandarin of the AI world: using prompt engineering to build a "Professional Legal Consultant" application - Enterprise Edition

Written by Audrey Miles
Updated on: June 29, 2025
Recommendation

Build an enterprise-grade AI legal consultant in code: more powerful, more flexible, and able to deliver professional legal advice with ease.
Core content:
1. Combining prompt engineering with large models
2. Key concepts: system prompts, user prompts, assistant prompts
3. Concrete implementation steps: preparation, single-turn dialogue test



This article shows how to implement the same functionality in code. It is more powerful, more flexible, and better suited for enterprise use.


1. Important Supplement

1. Prompts are the programming language of large models. Prompt engineering is essentially packaging the ability to call a large model so that it completes a specific task.

2. Both before the request is sent to the large model, and after the model responds but before the answer is returned to the user, you can add extra processing to make the model more targeted or to support special functions.

3. For large models, prompts fall into three categories:

System prompt: set once and does not change. It is included with every question and typically carries the role setting, working steps, and requirements.

User prompt: the question the user sends on each turn.

Assistant prompt: the reply the large model returned on a previous turn.

4. The large model itself has no memory. For multi-round conversations, the conversation history and the user's current question have to be stitched together into a new prompt on every call, so the model can answer in context (see the sketch below).
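A minimal sketch of how the three prompt types are assembled into a single request; the history list and example strings here are illustrative, not code from this article:

# Sketch: system prompt + history + new question assembled into one messages list.
system_prompt = "You are a senior lawyer..."             # system prompt: fixed role setting
history = [
    {"role": "user", "content": "First question"},       # earlier user prompt
    {"role": "assistant", "content": "First answer"},    # earlier assistant reply
]
new_question = "Follow-up question"

# Every call resends the system prompt, the full history, and the new user prompt,
# because the model itself keeps no memory between calls.
messages = [{"role": "system", "content": system_prompt}] + history + [
    {"role": "user", "content": new_question}
]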


2. Specific Implementation
1. Preparation
1) You need to apply for an API key. For details, see the previous article: Obsidian+AI Series: Text Generator Configuration for DeepSeek and Moonshot.
2) Install Python 3.10 or later and install the Jupyter package.
Special instructions:
1) All the following code is run in Jupyter for easy debugging.
2) This tutorial uses the deepseek-r1 model hosted on SiliconFlow. For other providers and models, refer to the article above.
3) This tutorial does not build a front end (the next article will cover separating the front end and back end), so user questions are simulated in code (user_prompt), which has the same effect as input from a front end.
2. First try: a single-turn conversation without a system prompt
Effect: similar to asking the large model the original question directly.
Directly on the code:
First block: read your API key. To avoid leaking the key when the program is shared, it is recommended to save it in a local file and load it into an environment variable.
import os
from config.load_key import load_key  # Load API Key

load_key()
print(f'''The API Key you configured is: {os.environ["DASHSCOPE_API_KEY"][:5] + "*" * 5}''')
Operation results:
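The load_key helper above comes from the project's own config package and is not shown in the article. A minimal sketch of what config/load_key.py might look like, assuming the key sits in a local text file (the path is illustrative):

# Assumed sketch of config/load_key.py; the file name and path are illustrative.
import os

def load_key(path="config/key.txt"):
    # Read the API key from a local file kept out of version control,
    # then expose it through the environment variable the rest of the code expects.
    with open(path, "r", encoding="utf-8") as f:
        os.environ["DASHSCOPE_API_KEY"] = f.read().strip()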
Single-turn dialogue construction: the code is fairly simple, so it is not explained line by line.
It should be noted that:
The stream parameter: if set to False, the system waits for the whole answer to be generated and returns it at once, instead of generating and outputting it chunk by chunk.
temperature and top_p control the creativity and diversity of the generated answers. As mentioned in the earlier prompt tutorial, generation is probabilistic; these two parameters control how likely lower-probability content is to be chosen.
from openai import OpenAI
import os

client = OpenAI(
    base_url="https://api.siliconflow.cn/v1",
    api_key=os.getenv("DASHSCOPE_API_KEY"),
)

prompt = '''In the civil enforcement procedure, the second auction of the house under the name of the person subject to enforcement still failed, and the applicant purchased it at the failed auction price. After the People's Court ruled that the house belonged to the applicant, it took many years for the People's Court to deliver it. After the enforcement procedure was terminated in accordance with the law, the applicant claimed the loss from the date the ruling took effect to the actual delivery date. Should the People's Court accept it? Should the person subject to enforcement bear liability for compensation for the loss?'''

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.6,
    top_p=0.7,
    max_tokens=4096,
    stream=True  # Whether to stream output
)

for chunk in completion:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
Operation effect:
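For comparison, a minimal sketch of the same call without streaming, reusing the client and prompt defined above; with stream=False the complete answer comes back in a single object:

# Non-streaming variant: wait for the full answer, then read it in one piece.
completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.6,
    top_p=0.7,
    max_tokens=4096,
    stream=False,  # return the whole answer at once instead of chunk by chunk
)
print(completion.choices[0].message.content)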
3. Add a system prompt and generalize it into a prompt template
Effect: similar to setting up an Agent's persona on an AI development platform.
The code is straightforward: it simply adds a system prompt so the answers are more focused.
from openai import OpenAI
import os

client = OpenAI(
    base_url="https://api.siliconflow.cn/v1",
    api_key=os.getenv("DASHSCOPE_API_KEY"),
)

system_prompt = '''# Role
You are a senior lawyer. Your goal is to think about the legal questions given by the user step by step, give in-depth answers, and propose the most granular claims, legal basis, factual basis, and executable steps.

## Working steps
1. According to the case background and questions given by the user, conduct the first analysis, list the possible claims, legal basis, factual basis, and the next executable actions for the user. If the user does not provide enough factual basis, the factual basis of past similar cases can be provided as a reference.
2. For each step in the first step, further disassemble it as a sub-claim, and give the legal basis, factual basis, and the next executable action. If the user does not provide enough factual basis, the factual basis of past similar cases can be provided as a reference.
3. Repeat the second step until it cannot be further decomposed or it is specific and executable. Through such a conversation, you can make an overall summary and give the customer an overall conclusion according to the specific operation steps.'''

def get_completion(prompt):
    completion = client.chat.completions.create(
        model="deepseek-ai/DeepSeek-R1",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt}
        ],
        temperature=0.6,
        top_p=0.7,
        max_tokens=4096,
        stream=True  # Whether to stream output
    )
    for chunk in completion:
        if chunk.choices[0].delta.content is not None:
            yield chunk.choices[0].delta.content

user_prompt = '''In the civil enforcement procedure, the second auction of the house under the name of the person subject to enforcement still failed, and the applicant purchased it at the failed auction price. After the People's Court ruled that the house belonged to the applicant, it took many years for the People's Court to deliver it. After the execution procedure is terminated according to law, if the applicant claims losses from the date the ruling takes effect to the date of actual delivery, should the people's court accept it? Should the person subject to execution bear liability for compensation for the losses?'''

for result in get_completion(user_prompt):
    print(result, end="")
Implementation effect:
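The heading above mentions generalizing the system prompt into a template. One way to do that, sketched below with illustrative parameter names that are not part of the article's code, is to build the system prompt from a small template so the same code can serve other consultant roles:

# Sketch: turn the fixed system prompt into a reusable template.
SYSTEM_TEMPLATE = '''# Role
You are a senior {role}. Your goal is to think about the {domain} questions given by the user step by step,
give in-depth answers, and propose the most granular claims, supporting basis, factual basis, and executable steps.'''

def build_system_prompt(role="lawyer", domain="legal"):
    # Fill the template; other roles (e.g. tax advisor) reuse the same calling code.
    return SYSTEM_TEMPLATE.format(role=role, domain=domain)

system_prompt = build_system_prompt()  # same effect as the hard-coded prompt above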
4. Enable multi-round conversation
Effect: the final, complete version, with both a system prompt and multi-round dialogue.
The code is straightforward: building on the above, the conversation history and the user's current question are spliced together to form the new prompt.
from openai import OpenAI
import os

client = OpenAI(
    base_url="https://api.siliconflow.cn/v1",
    api_key=os.getenv("DASHSCOPE_API_KEY"),
)

system_prompt = '''# Role
You are a senior lawyer. Your goal is to think about the legal questions given by the user step by step, give in-depth answers, and propose the most granular claims, legal basis, factual basis, and executable steps.

## Working steps
1. According to the case background and questions given by the user, conduct the first analysis, list the possible claims, legal basis, factual basis, and the next executable actions for the user. If the user does not provide enough factual basis, the factual basis of past similar cases can be provided as a reference.
2. For each step in the first step, further disassemble it as a sub-claim, and give the legal basis, factual basis, and the next executable action. If the user does not provide enough factual basis, the factual basis of past similar cases can be provided as a reference.
3. Repeat the second step until it cannot be further decomposed or it is specific and executable. Through such a conversation, you can make an overall summary and give the customer an overall conclusion based on the specific operation steps.'''

session = [
    {"role": "system", "content": system_prompt}
]

def get_completion(prompt):
    session.append({"role": "user", "content": prompt})
    completion = client.chat.completions.create(
        model="deepseek-ai/DeepSeek-R1",
        messages=session,
        temperature=0.6,
        top_p=0.7,
        max_tokens=4096,
        stream=True  # Whether to stream output
    )
    answer = ""
    for chunk in completion:
        if chunk.choices[0].delta.content is not None:
            answer += chunk.choices[0].delta.content
            yield chunk.choices[0].delta.content
    # Append the full assistant reply to the session so the next round has the history
    session.append({"role": "assistant", "content": answer})

print("-----------First round of conversation-----------")
user_prompt1 = '''In the civil enforcement procedure, the second auction of the house under the name of the person subject to enforcement still failed, and the applicant purchased it at the failed auction price. After the People's Court ruled that the house belonged to the applicant, it was not delivered by the People's Court until many years later. After the enforcement procedure was terminated according to law, the applicant claimed the loss from the date the ruling took effect to the actual delivery date. Should the People's Court accept it? Should the person subject to enforcement bear the liability for compensation for the loss?'''
for result in get_completion(user_prompt1):
    print(result, end="")

print("-----------Second round of conversation-----------")
user_prompt2 = '''Please summarize your conclusion based on the above dialogue.'''
for result in get_completion(user_prompt2):
    print(result, end="")

print("-----------Third round of conversation-----------")
user_prompt3 = '''Please output the final result in markdown format.'''
for result in get_completion(user_prompt3):
    print(result, end="")
Implementation effect:
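One practical note: because the whole session is resent with every call, a long conversation will eventually exceed the model's context window. A minimal sketch of one simple guard, with an assumed turn limit that is not from the article:

# Sketch: keep the system prompt plus only the most recent turns.
MAX_TURNS = 10  # number of user/assistant messages to keep; an assumed limit

def trim_session(session):
    # Preserve the system prompt, drop the oldest user/assistant messages.
    system = [m for m in session if m["role"] == "system"]
    rest = [m for m in session if m["role"] != "system"]
    return system + rest[-MAX_TURNS:]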

3. How to apply
1. Add a front-end shell and you have the chat mode of CherryStudio, Coze, or RAGFlow. The code is in your hands, so you can control and extend the features as you like.
2. Later, add RAG programming, Agent programming, workflow programming, MCP programming, and A2A programming, and you will have a complete CherryStudio, Coze, RAGFlow... with your own source code.
These tools are not mysterious; the underlying principles are the same.

Well, that wraps up this article on building a "professional legal consultant". I hope none of you ever get caught up in a legal dispute, and that if you do, you can defend your rights successfully. I hope this article is useful to you~

See you in the next article.