How to write prompts for reasoning models like DeepSeek

Written by
Jasper Cole
Updated on: July 16, 2025
Recommendation

Master efficient techniques for interacting with the DeepSeek reasoning model and improve your problem-solving results.

Core content:
1. An analysis of the characteristics of the DeepSeek R1 reasoning model
2. Techniques for writing efficient prompts
3. The advantages of reasoning models in practical applications


Large language models (LLMs) have risen to prominence on the strength of their language processing capabilities. Among them, reasoning models represented by DeepSeek R1 (Paper Review: DeepSeek-R1 — Improved Reasoning Capabilities of Large Language Models Driven by Reinforcement Learning) perform particularly well on tasks such as logical reasoning and problem solving. However, to fully tap the potential of such models, it is crucial to write effective prompts. This article explores in depth how to write high-quality prompts for reasoning models such as DeepSeek, helping users achieve more accurate and efficient interactions.

1. Understanding the characteristics of reasoning models

Reasoning models such as DeepSeek R1 have a distinctive capability profile. Unlike traditional language models, they do not merely predict the next word in a text sequence; they are built to handle logical deduction, problem solving, and multi-step reasoning. Thanks to advanced training techniques such as reinforcement learning and chain-of-thought supervision, they show clear advantages in deductive, inductive, abductive, and analogical reasoning.

In deductive reasoning, the model draws conclusions from established rules and premises: from "all mammals have lungs" and "whales are mammals", it can accurately deduce that "whales have lungs". In inductive reasoning, it generalizes a rule from specific examples: after repeatedly observing individual metals expanding when heated, it can conclude that "metals expand when heated". Abductive reasoning asks the model to infer the most plausible explanation for an observation: seeing that the road is wet, it may infer that rain is the likely cause. Analogical reasoning carries an inference from one situation to a similar one: knowing that the Earth revolves around the Sun, it can infer by analogy that other planets also revolve around their stars.


These reasoning capabilities enable DeepSeek R1 to excel at tasks such as mathematical problem solving, common-sense reasoning, symbol manipulation, and logical deduction. Understanding these characteristics is the foundation for writing effective prompts: only by playing to the model's strengths can we guide it to perform at its best.

2. General and efficient prompt writing skills

1. Simplicity and directness are key

Reasoning models prefer concise, clear instructions. Complex, lengthy prompts can confuse the model, interfere with how it captures and processes key information, and thereby degrade performance. For example, when asking the model to summarize an article, a concise instruction such as "Please summarize the core content of the climate change article in three main points" lets the model quickly locate the key information and give a focused answer. A convoluted prompt such as "Please break down the article in detail, step by step, and then condense it into a summary with a clear structure, coherent logic, and precise reasoning" is more likely to push the model off course in both understanding and execution, making a good result harder to obtain.
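
In practice, a concise prompt of this kind can be sent to DeepSeek R1 through its OpenAI-compatible chat API. The sketch below is only a minimal illustration; the base URL, the model name deepseek-reasoner, and the DEEPSEEK_API_KEY environment variable are assumptions to verify against the official DeepSeek documentation.

import os
from openai import OpenAI  # pip install openai

# Assumed OpenAI-compatible endpoint and model name; check the DeepSeek docs for your account.
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

article_text = "..."  # the full text of the climate change article

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[
        # One concise, direct instruction -- no step-by-step scaffolding.
        {
            "role": "user",
            "content": "Please summarize the core content of the following climate "
                       f"change article in three main points:\n\n{article_text}",
        },
    ],
)
print(response.choices[0].message.content)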

2. Avoid overusing chain-of-thought prompts

Although chain-of-thought (CoT) prompting is effective at improving the reasoning ability of general-purpose language models, a reasoning model like DeepSeek R1 already decomposes problems internally. Insisting that it "think step by step" not only fails to improve accuracy but can actually hinder performance. Take a probability question as an example: asked directly "What is the probability of getting two sixes when rolling two dice?", the model can quickly arrive at the answer (1/36) using its own reasoning. A chain-of-thought prompt such as "first explain the probability of rolling a six, then consider the probability of rolling another six, and finally multiply them together" may disrupt the model's natural reasoning rhythm and reduce efficiency. That said, when prompting non-reasoning models, or when the model makes a reasoning error, chain-of-thought prompting can still serve as an auxiliary tool to help it sort out its steps.

3. Using delimiters to improve clarity

When dealing with structured output tasks, such as generating JSON, tables, or code snippets, using delimiters such as Markdown, XML tags, or section titles can help the model clearly distinguish different parts of the prompt and accurately understand the output requirements. For example, when extracting key information from a contract, clearly give the structured format:


{ "Parties": "Name of parties involved", "Effective Date": "Start date of the contract", "Obligations": "Main contractual duties", "Termination Clause": "Conditions for contract termination" }

Compared with a vague instruction such as "Please summarize the contract in a structured manner and include all important details", a delimited prompt guides the model toward output that is standardized and aligned with the requirements, reducing missing fields and inconsistent formatting.
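
As one possible way to assemble such a delimited prompt, the sketch below wraps the output schema and the contract text in clearly labelled sections so the model can tell instruction, schema, and input apart; the section headers and field names are illustrative only.

contract_text = "..."  # the contract to analyze

# Section headers act as delimiters between the instruction, the output schema, and the input.
prompt = f"""Extract the key information from the contract below.

### Output format (return valid JSON only)
{{
  "Parties": "Name of parties involved",
  "Effective Date": "Start date of the contract",
  "Obligations": "Main contractual duties",
  "Termination Clause": "Conditions for contract termination"
}}

### Contract
{contract_text}
"""

The same chat call from the earlier sketch can then send this prompt as the user message.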

4. Proper use of zero-shot and few-shot learning

Reasoning models usually perform well in zero-shot settings, that is, they can complete a task without examples. When converting the voice of a sentence, directly asking "Convert 'The committee approved the new policy' to the passive voice" is usually enough to get the correct answer. Consider introducing few-shot examples only when the output needs further refinement. For instance, if the model's first attempt is poor, provide a pair such as "Active: 'She baked a cake.' Passive: 'A cake was baked by her.'" to help it understand the task. Note that the examples should closely match the target task, so that irrelevant information does not distract the model.
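
If the zero-shot result still needs refinement, the example pair can be prepended as earlier conversation turns. A minimal sketch of the message layout (any chat-style API would accept a similar list):

# Start zero-shot; add the worked example pair only if the first output is unsatisfactory.
few_shot_messages = [
    {"role": "user", "content": "Convert to the passive voice: 'She baked a cake.'"},
    {"role": "assistant", "content": "A cake was baked by her."},
    {
        "role": "user",
        "content": "Convert to the passive voice: 'The committee approved the new policy.'",
    },
]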

5. Clarify the criteria and constraints

Setting clear guidelines and constraints lets the model generate results closer to what you want. Constraints can cover answer length, format, content scope, or tone. When planning a trip, a prompt such as "Create an affordable 3-day travel itinerary for New York City with a budget of no more than $500, including sightseeing and food recommendations, and only vegetarian options" keeps the model planning within defined limits, avoiding budget overruns or suggestions that ignore dietary preferences. Likewise, instructions such as "explain in no more than 100 words" or "use only everyday examples and avoid excessive detail" further standardize the answers, making them more precise and concise.
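
A compact sketch of such a constrained prompt, with the limits stated up front (the figures simply mirror the example above):

prompt = (
    "Create an affordable 3-day travel itinerary for New York City. "
    "Constraints: total budget no more than $500; include sightseeing and food "
    "recommendations; vegetarian options only; keep each day's plan under 100 words."
)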

6. Precisely describe the ultimate goal

Clearly defining the success criteria helps the model match user needs. When explaining an economic concept, a prompt such as "Explain supply and demand in fewer than 50 words, concisely and without technical jargon" uses an explicit word limit and style requirement to steer the model toward a short, easy-to-understand answer. By contrast, a broad request such as "describe the relationship between supply and demand" may produce long, complicated output that misses user expectations. With precise goals in place, users can also iterate on the model's answers and continuously improve the interaction.

7. Request Markdown formatting explicitly when needed

Starting with certain model snapshots (for example, o1-2024-12-17), some reasoning models no longer produce Markdown output by default. If Markdown content is required, say so explicitly in the prompt, for example "Formatting re-enabled. Generate a Markdown summary of quantum mechanics." Simply asking "Give me a Markdown answer about quantum mechanics" may yield plain text that does not meet the need for structured presentation. For content that must be presented in a structured way, adding a formatting instruction to the prompt is the key step in getting output that matches expectations.
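
A minimal sketch of such a request, assuming a chat-style API in which the signal phrase is placed at the start of the system (or developer) message, as documented for OpenAI's o1 snapshots; other reasoning models may use a different convention:

messages = [
    # "Formatting re-enabled" signals that Markdown output is allowed again
    # (documented for o1-2024-12-17 and later; verify for the model you are using).
    {"role": "system", "content": "Formatting re-enabled"},
    {"role": "user", "content": "Generate a Markdown summary of quantum mechanics."},
]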

3. Prompt writing strategies for different tasks

1. Solving mathematical problems

Mathematical problems call for logical deduction and precise calculation. When writing the prompt, state the conditions and requirements clearly. For complex problems you can guide the model toward a stepwise analysis, but avoid intervening too heavily in its reasoning. For an algebra problem, "Solve the equation 2x + 5 = 13 and show the key calculation steps" both defines the task and leaves room for autonomous reasoning: the model can apply its mathematical knowledge to solve the equation (x = 4) and present the intermediate steps, making the solution easier for users to follow.

2. Common sense reasoning

Common-sense reasoning relies on understanding real-world knowledge and general regularities. The prompt (see: Why doesn't your AI prompt work? Master these points to make AI listen to you) should stay as close to the real-world scenario as possible, helping the model draw on its relevant knowledge. For example: "Judge whether this sentence is reasonable: He forgot his keys on the moon. Please explain why." Posing such grounded questions guides the model to apply common-sense judgment and explanation, strengthens its grasp of real-world logic, and avoids answers that contradict common sense.

3. Symbolic manipulation and logical deduction

For symbolic manipulation and logical deduction tasks, the prompt needs to spell out the symbolic rules and logical relationships. For logic circuit analysis, "Given that input A of an AND gate is 1 and input B is 0, compute the output and explain the reasoning according to the AND-gate rules." Clear rules and conditions let the model deduce precisely according to the logic, output the correct result (here, 0), and provide a sound justification, which keeps accuracy high on complex logical tasks.

4. Evaluate and optimize prompt effects

1. Evaluation based on multi-dimensional indicators

A model's responses to a prompt can be evaluated along dimensions such as accuracy, consistency, explanation quality, solution novelty, and error analysis. Accuracy measures how often the answer is correct; consistency examines logical coherence across related tasks; explanation quality reflects how clearly the model explains its reasoning; solution novelty asks whether the model can propose ideas that are new yet sound; and error analysis helps locate the model's weak spots. In a series of math problem tests, for example, accuracy can be computed as the proportion of correct answers, consistency can be checked by seeing whether the solution approaches to different problems contradict one another, and the model's explanations of wrong answers can be analyzed to find the logical gaps. Together, these dimensions give a full picture of how well a prompt is working.
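
To make the accuracy dimension concrete, a small harness can score model answers against known solutions over a test set. This is a minimal sketch with a hypothetical ask_model helper and two hand-written test cases, not a full evaluation framework:

# Hypothetical helper: send a prompt to the reasoning model and return its reply as text.
def ask_model(prompt: str) -> str:
    raise NotImplementedError  # wire this up to your API client

test_cases = [
    ("Solve 2x + 5 = 13. Reply with the value of x only.", "4"),
    ("What is the probability of rolling two sixes with two dice? Reply as a fraction.", "1/36"),
]

correct = 0
for prompt, expected in test_cases:
    answer = ask_model(prompt).strip()
    if expected in answer:  # crude containment check; real scoring should normalize answers
        correct += 1

print(f"accuracy: {correct / len(test_cases):.0%}")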

2. Iteratively optimizing the prompt

Optimize the prompt based on the evaluation results (see: Prompt writing framework: unlock efficient and accurate AI interaction). If the model omits key information, emphasize it more strongly in the prompt; if the answer runs too long, tighten the constraints. When asking the model to summarize an article, if the first summary misses important points, the revised prompt can add "be sure to cover all important points about influencing factors in the article"; if the summary is too long, add a restriction such as "the summary must not exceed 200 words". By continuously adjusting the prompt in this way, the quality and fit of the model's output improve step by step.
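
One way to picture this loop is to start from a base prompt and append a corrective constraint whenever a check on the output fails. A minimal sketch, reusing the hypothetical ask_model helper from the evaluation example:

prompt = "Summarize the article's key points."
summary = ask_model(prompt)

# If a required point is missing, emphasize it explicitly and regenerate.
if "influencing factors" not in summary.lower():
    prompt += " Be sure to cover all important points about influencing factors in the article."
    summary = ask_model(prompt)

# If the output runs too long, tighten the length constraint and regenerate.
if len(summary.split()) > 200:
    prompt += " The summary must not exceed 200 words."
    summary = ask_model(prompt)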

Writing prompts for reasoning models such as DeepSeek is both an art and a craft. By understanding the model's characteristics in depth, using concise and direct wording, sensible prompting strategies, and clear constraints, adapting the prompt flexibly to the task type, and continuously evaluating and optimizing, we can fully tap the potential of reasoning models and achieve more efficient, more intelligent human-computer interaction.