23 tips for mastering large-model prompts

Master the art of AI dialogue and improve your competitiveness in the intelligent era.
Core content:
1. Technological breakthroughs in the field of AI and the characteristics of the DeepSeek model
2. The nature and importance of prompts
3. 23 practical skills to improve the effectiveness of AI dialogue
Over the past month, the global AI field has dropped one bombshell after another: DeepSeek-R1, which cuts inference costs by roughly 80%, was released; OpenAI rolled out iterative updates to GPT-4o; and Musk's xAI launched Grok 3, claiming its performance surpasses ChatGPT... Amid this technological earthquake, the most eye-catching release is the fully open-source DeepSeek.
This model, which focuses on deep reasoning, not only outperforms GPT-3.5 but is even comparable to GPT-4 in structured thinking. Unlike general-purpose large models that need elaborate prompts to drive them, DeepSeek-R1 can be prompted as casually as everyday chat: type "help me analyze the trends in this financial report" and it automatically works through data extraction, cross-dimensional comparison, and conclusion drawing. People are discovering that AI is becoming more and more "intelligent".
This set off a global craze for talking to AI: the hashtag #AI Spell Encyclopedia# swept social media, and "AI spell template" notes on Xiaohongshu racked up more than 100,000 saves. Just as everyone studied touch-screen gestures in the early days of smartphones, every AI user is now building their own "spell library."
But here is the question: as models get better and better at understanding human language, do we still need to learn prompting?
A prompt is essentially the language you use to communicate with AI. It can be a simple question, a detailed instruction, or a complex task description. Prompting strategies also differ between reasoning models and general-purpose models.
Reasoning models vs. general-purpose models
In fact, even though DeepSeek understands natural instructions, the quality of answers still varies greatly from user to user. It is like using the same search engine: some people quickly locate the key information, while others drown in the flood of results. As AI evolves exponentially, expressing your needs clearly is becoming a basic skill of the digital age.
A while ago I came across the widely liked GPTs Prompt Principles on Twitter. I got a lot out of them, so I translated, annotated, and consolidated them, distilled the most practical techniques from the core rules, and will walk you step by step through writing a "spell" that AI understands instantly.
Key Strategies to Make Your Prompts More Effective
As machines begin to understand the logic of how the world works, human competitiveness is shifting toward "precisely defining problems." Those who have mastered the art of prompting are quietly widening the cognitive gap in the intelligent era.
1. Keep your instructions simple and don’t be too polite to AI
There is no need for pleasantries such as "please" or "thank you". If you say "List some tips for healthy eating," the model gets straight to the point and provides relevant suggestions.
State the requirements the model must follow explicitly, as keywords, rules, hints, or instructions. Tell the model which rules and hints to follow when it writes something, and keep the list simple. Explicit instructions are crucial for guiding the model toward high-quality, goal-oriented output.
4. Use an imperative tone.
Try phrases such as "Your task is" and "You must". When you give the model a task, state its goal clearly: use "Your task is" to spell out what needs to be done, and if certain steps or rules absolutely must be followed, use "You must" to emphasize them. This makes the task instructions more direct.
Example: Your task is to calculate the likelihood of tomorrow's weather based on the following information. You must use the latest meteorological data and take into account differences in climate in different regions.
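For readers who drive the model through an API rather than a chat window, the same principles carry over directly. Below is a minimal sketch of a single, imperative instruction sent through the OpenAI-compatible Python SDK; the endpoint, API key, and model name are placeholders and assumptions, not something the article prescribes.

```python
from openai import OpenAI

# Placeholder endpoint and key; DeepSeek and many other providers expose
# an OpenAI-compatible chat API, but the exact values are assumptions here.
client = OpenAI(base_url="https://api.deepseek.com", api_key="YOUR_API_KEY")

# A direct, imperative prompt: no pleasantries, explicit goal and constraints.
prompt = (
    "Your task is to list 5 tips for healthy eating. "
    "You must keep each tip under 15 words and order them by importance."
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```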
2. Role-playing: give the AI a persona
5. Assign a role to the large language model.
Assign a role to a large language model, such as having the model act as a teacher, advisor, or storyteller.
Once the model has a clear role identity, it can adjust its responses based on the assigned role, making the output more consistent with the expected style and information level.
Example:
If we want the model to explain complex science concepts like a teacher would, we could say, “As a science teacher, explain what photosynthesis is.”
When we need advice, we can assign the model the role of advisor: "As a health advisor, what kind of eating habits do you recommend?"
6. Set the target audience in the instructions.
When you ask questions or give instructions, you should clearly indicate the audience for which the answer is intended, such as experts, beginners, or children. By clarifying who the intended audience is, you can help the model adjust its language and depth of explanation so that its answers are more suitable for the needs and understanding level of the actual audience.
Example:
If you are discussing an advanced scientific question and tell the model that the audience is an expert in the field, it will build its answer from technical terms and complex concepts, because the intended readers will understand them. Conversely, if you say the audience is a layperson or a beginner, the model will avoid overly technical language and explain the same concepts in a more accessible way.
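In API terms, tips 5 and 6 usually translate into the system message: it assigns the role, and one extra sentence about the audience tunes the vocabulary and depth. A minimal sketch, again assuming an OpenAI-compatible chat API with a placeholder model name:

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        # The system message sets the role and the intended audience.
        {"role": "system",
         "content": "You are a science teacher. Your audience is curious beginners with no science background."},
        {"role": "user", "content": "Explain what photosynthesis is."},
    ],
)
print(response.choices[0].message.content)
```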
7. Use the directive to “answer questions in a natural, human way.”
Such instructions can help the model avoid using overly complex or difficult-to-understand jargon and instead answer questions in a more accessible way, making communication more humane and easy to understand.
Example:
"In natural, human terms, tell me: why is the sky blue?"
"Explain quantum physics to me as if you were talking to a friend with no science background."
8. Ask for answers that are unbiased and free of stereotypes.
This principle tells the large language model to stay objective when answering, not to rely on stereotypes or prejudices, and not to be swayed by preconceived notions.
Example:
If you want to learn about the culture of different countries and want the big language model to give an objective description, you can say: "Tell me about the culture of countries around the world. Make sure your answer is unbiased and not stereotyped."
3. Psychological tug-of-war: dangle rewards in front of the AI
9. Promise the AI a reward.
This principle means including an incentive in your prompt: promise an additional reward for a better solution. In essence, you are telling the model you expect more than the standard answer, namely something more thoughtful, innovative, or detailed. It is simply a way of signaling how high your expectations are.
Example: "If you come up with a better solution, I will top up xxx for you / tip you an extra xxx!"
10. Punish the AI.
Add the instruction "You will be punished." In simple terms, this sets a rule for the model: if it does not answer correctly, it will be "punished" in some way. This phrasing can push the model to focus more on giving the correct answer.
Example:
Say you are drilling the model on arithmetic. You could write: "Calculate 5+5. If you get it wrong, you will be punished." Here, being "punished" might mean the model receives negative feedback or has to redo the calculation.
4. Upgrade interaction strategies and learn structured expression
11. Optimize your instruction layout.
Use Markdown to format more complex messages, and use one or more blank lines to separate the instructions, examples, questions, background, and input data. This helps the model understand your intent and the type of response required.
Example:
[background text pasted here]

Based on the text above, which characters are mentioned, and what role does each play?
12. Use separators.
When you need to separate different parts, use special symbols to signal this to the large language model. If you want the model to perform a task step by step, you can separate the steps with numbers or symbols.
For example: Step 1: Collect data; Step 2: Analyze data; Step 3: Report findings.
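One way to put tips 11 and 12 into practice programmatically is to assemble the prompt from clearly labeled blocks, so the instructions, background, and input data never blur together. The helper below is only an illustrative sketch; the section names and sample data are made up.

```python
def build_prompt(instruction: str, context: str, input_data: str) -> str:
    """Assemble a prompt with clearly separated, labeled sections."""
    return "\n\n".join([
        "### Instruction ###\n" + instruction,
        "### Context ###\n" + context,
        "### Input data ###\n" + input_data,
    ])

prompt = build_prompt(
    instruction="Step 1: Collect the data; Step 2: Analyze it; Step 3: Report your findings.",
    context="Quarterly order counts for a small online bookstore.",
    input_data="Q1: 1200 orders; Q2: 1350 orders; Q3: 980 orders; Q4: 1610 orders.",
)
print(prompt)  # paste into a chat window or send via an API call
```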
13. Break down complex tasks into a series of simple instructions.
Break down complex tasks into a series of simpler, more manageable steps. Another benefit of breaking down tasks is that you can adjust and refine your next request based on the answers given by the model.
Example:
Suppose you want the model to help you plan a trip. If you ask for everything at once, the model may not give its best answer. Instead, break the task into a series of simple questions or instructions: first ask for destination recommendations, then transport options, then accommodation, and finally the itinerary. Focusing on one aspect at a time helps the model understand and respond to each specific need.
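In code, breaking a task down usually means a multi-turn conversation: each sub-question is sent on its own, and earlier answers stay in the message history so the model can build on them. A minimal sketch assuming an OpenAI-compatible chat API; the sub-questions and model name are illustrative.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

steps = [
    "Recommend three destinations for a five-day trip in May.",
    "For the first destination you recommended, what are the transport options?",
    "Suggest accommodation near the main sights there.",
    "Draft a day-by-day itinerary based on your answers so far.",
]

messages = [{"role": "system", "content": "You are a travel-planning assistant."}]
for step in steps:
    messages.append({"role": "user", "content": step})
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=messages,
    )
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # keep context for the next step
    print(f"--- {step}\n{answer}\n")
```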
5. Take the initiative and guide AI thinking
14. Use guiding words, such as "think step by step."
When you want the model to help with a complex problem, use guiding phrases, much as you would walk a child through a math problem step by step. Helping the model think in a logical order makes it more likely to understand your problem accurately and to answer or carry out the task the way you expect.
15. Use a few examples for prompting.
This approach involves providing one or more relevant examples to guide the model in responding to your request or question. The more precisely you tell the model what you want, the easier it is for the model to understand and meet those needs. This approach is suitable for handling complex or unusual requests and is particularly effective for guiding the model on how to respond without a lot of data, which can significantly improve the accuracy and relevance of the answer.
Example:
Suppose you want the model to summarize an article for you. You can say something like: "Summarize the article like this: [give an example summary of an article]. Now please summarize the following article [give the article you want to summarize]."
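Expressed through an API, few-shot prompting simply means placing one or more worked examples ahead of the real request, either inside the prompt text or as earlier user/assistant turns. A minimal sketch; the example article and summary are invented for illustration, and the model name is a placeholder.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

article = "..."  # the article you actually want summarized

messages = [
    {"role": "system", "content": "Summarize articles in one sentence of at most 25 words."},
    # One worked example showing the expected style and length.
    {"role": "user", "content": "Summarize: 'The city council voted 7-2 to extend the tram line to the airport by 2027.'"},
    {"role": "assistant", "content": "The council approved extending the tram line to the airport by 2027."},
    # The real request follows the same pattern as the example.
    {"role": "user", "content": f"Summarize: '{article}'"},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)  # placeholder model
print(response.choices[0].message.content)
```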
16. Say important words three times.
In language learning and information processing, repetition is a common reinforcement method that can improve the model's attention to a concept or the core of an instruction, especially when dealing with complex tasks. This helps the model more accurately capture the user's intentions and generate responses according to the user's expectations.
Example:
If you want the model to pay special attention to a certain action or condition, you might write something like this: Make sure the report includes the latest sales figures. Include the latest sales figures for this quarter. The latest sales figures are critical to our analysis.
17. Guide the model to ask you questions.
This principle is useful when you are not sure what information you need to give the model: guide the model to ask you questions to pin down the precise details and requirements, until it has enough information to give you the answer you need. In other words, encourage the model to clarify and refine its understanding of your request by asking questions.
Example:
From now on, you can ask me questions about xxx so that you can get enough information.
18. Have the model quiz you on what you have learned.
Use the instruction: "Teach me [any theorem/topic/rule name] and give me a test at the end. Don't just give me the answers; when I answer, tell me whether I got it right."
This approach can help users verify their understanding and ensure that they have mastered the topic or information, encouraging users to actively learn and verify their knowledge, while the model plays a supporting role, providing information and helping users confirm the accuracy of their understanding.
Example:
If a user is unsure about the timeline of a historical event, such as the French Revolution, they can ask: “Teach me the timeline of the French Revolution and test me at the end, but don’t give me the answer directly.” The model can then outline the key events of the French Revolution and ask the user: “What year was Robespierre executed?” After the user answers, the model confirms whether the answer is correct.
19. Provide output content guidance to generate responses in a specific structure or format.
Doing this helps the model understand what kind of answer you want and generate a response in the expected shape. It is like drawing: you sketch an outline before completing the painting. Giving the model an opening sentence or a target structure is that sketch, so when you want the model to write something, show it the beginning or the format first and it can finish the picture for you.
For example: "Describe the benefits of regular exercise. Begin your answer with: Regular exercise improves health in three main ways: 1) ..." The model will then continue in the structure you started.
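When the answer is going to be consumed by another program, the same idea can be pushed all the way to a machine-readable format spelled out in the prompt. A minimal sketch assuming an OpenAI-compatible chat API; the review text and field names are arbitrary.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

prompt = (
    "Analyze the sentiment of this review: "
    "'Battery life is great, but the screen scratches easily.' "
    "Reply with JSON only, using exactly these keys: "
    '{"sentiment": "positive|negative|mixed", "pros": [...], "cons": [...]}'
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# May raise if the model wraps the JSON in extra prose; tightening the prompt usually helps.
result = json.loads(response.choices[0].message.content)
print(result["sentiment"], result["pros"], result["cons"])
```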
6. Practice more advanced phrasing in real-world tasks
20. “Write me a detailed article/text [paragraph] on [topic], adding the necessary [information].”
Use this prompt when you need any kind of detailed text. The principle is to ensure the large language model produces a fully developed, detailed output for the writing task, which helps you obtain high-quality, information-rich text.
Example:
If you are a student who needs a research paper on “global warming,” you could instruct: “Write a detailed paper for me on the effects of global warming, including all the necessary scientific evidence and statistics.”
21. “Revise every paragraph I send, improving only the grammar and vocabulary, not changing the writing style, and making sure it sounds natural.”
Use this prompt when you only want the text proofread, without changing its style. It guides the model to focus on improving grammar and vocabulary in the text you submit while preserving your original writing style and tone, improving the quality of the text without altering your original meaning and expression.
Example:
For a sentence with small grammar and punctuation errors, the revised version might read: "I love reading; it relaxes me and makes me wiser." The punctuation is corrected and the wording improved, while the user's personal style and the meaning of the sentence are preserved.
22. “From now on, whenever you generate code that spans multiple files, generate a [programming language] script that you can run to automatically create the specified files or modify existing files to insert the generated code.”
When a programming task involves multiple files, it is usually more complex than the code in a single file. The code may need to be changed in multiple places, and these changes may depend on the structure of the project and the specific requirements of the programming language.
Example:
If you're developing a website, you may need to add similar code to multiple HTML, CSS, and JavaScript files. Manually adding the same code snippet to each file is time-consuming and error-prone. Instead, you can create a script that automatically finds all the files that need to be updated and inserts or modifies the code.
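What this tip asks the model to produce is, in effect, a small "apply changes" script. The sketch below shows what such a generated script might look like in Python; the file names and code snippets are purely illustrative.

```python
from pathlib import Path

# Files to create or update, mapped to the code generated for each one.
FILES = {
    "index.html": "<!DOCTYPE html>\n<html>\n  <body><h1>Hello</h1></body>\n</html>\n",
    "css/style.css": "h1 { color: steelblue; }\n",
    "js/app.js": "console.log('page loaded');\n",
}

for relative_path, code in FILES.items():
    path = Path(relative_path)
    path.parent.mkdir(parents=True, exist_ok=True)  # create missing folders
    path.write_text(code, encoding="utf-8")
    print(f"wrote {path}")
```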
23. “I’ll provide you with the beginning [lyrics/story/paragraph/article…], and [insert lyrics/phrase/sentence], and then continue writing based on the words provided, keeping the style consistent.”
Use this prompt when you want a text started or continued from a specific word, phrase, or sentence. The principle is that when continuing a text, the model should keep writing in the given direction and style: a clear beginning helps the model understand where the continuation should go, and the result should be both creative and consistent with the context and style of the original.
Example:
When creating a story, if given an introduction, such as "In a kingdom far away, there was a forgotten lake...", this can guide the model or writer to continue this story line and develop a story related to the kingdom and the lake.
When writing a paper, if you start with the sentence "Recent studies have shown that...", this presupposes that the following content should cite some studies and explain their findings.
When we marvel at the endless stream of new large language models, we should not forget the human intelligence behind them. These rules reveal not only techniques for communicating with AI, but core survival skills for the intelligent era: how to turn vague requirements into precise instructions, and how to make machines grasp the subtle shades of human intent. As AI evolves exponentially, real competitiveness may be hiding in the little dialog box where you type.
Here are some interesting ways to play:
GitHub - GPTs: this repository collects a large number of prompts, so you can see how various interesting GPTs are set up: https://github.com/linexjlin/GPTs. There are also tricks for getting a GPT to reveal its own prompt (they do not work in every case).