Prompt Engineering

Written by
Caleb Hayes
Updated on: June 19, 2025
Recommendation

Master prompt engineering to improve the quality and efficiency of interaction with AI models.

Core content:
1. The definition of a prompt and its role in GPT models
2. A detailed introduction and application of CRISPE prompt framework
3. Best practices and optimization techniques for prompt engineering

Yang Fangxian, Founder of 53A and Tencent Cloud Most Valuable Expert (TVP)

What is a Prompt?

A prompt is a human-constructed input sequence that guides a GPT model to generate output relevant to the preceding input. In simple terms, it is the "prompt text" you provide to the model.
How do you write a prompt?

Prompts are clearly important, so how should we write a good one? Matt Nigh, a well-known GitHub contributor, proposed the CRISPE prompt framework (https://github.com/mattnigh/ChatGPT3-Free-Prompt-List). CRISPE is an acronym that stands for the following:

CR: Capacity and Role. What role do you want ChatGPT to play?

I: Insight, background information and context.

S: Statement, what do you want ChatGPT to do?

P: Personality. In what style or way do you want ChatGPT to answer you?

E: Experiment, which requires ChatGPT to provide you with multiple answers.

The prompt role collections on GitHub are essentially built on the CRISPE framework: first set the role, then the background, then the requirements, and finally the style. Whether to ask for multiple answers depends on your preference. A minimal sketch of assembling such a prompt appears below.
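As an illustration, here is a minimal Python sketch of how the five CRISPE components might be assembled into a single prompt string. The `build_crispe_prompt` helper and its parameter names are assumptions made for this example, not part of the framework itself.

```python
def build_crispe_prompt(capacity_role, insight, statement, personality, num_answers=None):
    """Assemble a CRISPE-style prompt from its five components:
    CR (capacity and role), I (insight), S (statement),
    P (personality), and optionally E (experiment)."""
    parts = [
        f"Act as {capacity_role}.",
        f"Context: {insight}",
        f"Task: {statement}",
        f"Style: {personality}",
    ]
    if num_answers:
        parts.append(f"Provide {num_answers} different answers.")
    return "\n".join(parts)


# Example usage
print(build_crispe_prompt(
    capacity_role="a machine learning blogger",
    insight="the audience consists of software engineers who are new to AI",
    statement="explain what prompt engineering is and why it matters",
    personality="a friendly, conversational tone with concrete examples",
    num_answers=3,
))
```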

Importance of prompts

Proper use of prompts can bring many benefits. Here are some examples:

Improved generation accuracy: with the right prompt, the model can better understand the user's intent and generate more accurate text.

Greater flexibility: by varying prompts, we can have the model generate many different kinds of text, which increases its expressiveness and flexibility.

Improved efficiency: if we already know roughly what text we want, the right prompt lets the model produce the desired result more quickly.

Prompt Engineering Definition

Definition: Prompt Engineering is the process of designing and optimizing input prompts to obtain the expected output. When interacting with large language models, how you construct prompts can significantly affect the quality of the model's answers.

Simple prompt: "Tell me about cats."

Optimized prompt: "Please describe in detail the biological characteristics, behavioral habits, and symbolic meanings of cats in different cultures."

By refining prompts, users can guide the model to generate more detailed and useful responses.

Prompt Engineering is the process of designing and optimizing input prompts to achieve the expected output. To achieve the best results when working with large language models, here are some best practices:

1. Define your goals: Be clear about the tasks you want the model to accomplish or the questions you want it to answer.

Unclear goal: "Tell me about climate change."

Clear goal: "Please briefly describe the main causes of climate change and their impacts on agriculture."

2. Provide context: Provide the model with the necessary background information or context to help it understand the task.

No context: "Explain calculus." 

With context: "As a high school student, I am studying calculus. Please explain the basic concepts of calculus in simple language."

3. Use specific instructions: Use clear instructions and requests and avoid vague prompts.

Vague instructions: "Write an article about technology."

Specific instructions: "Please write an article about the application of artificial intelligence in the medical field, including the following points: application scenarios, advantages and challenges."

4. Provide examples: Provide examples that show the expected output format or content.

No example: "Generate a report about products." 

Example: "Generate a report about products in the following format:\n\n- Product Name:\n- Price:\n- Features:\n- Advantages:\n- Disadvantages:"

5. Use step-by-step instructions: For complex tasks, break them down into multiple steps and guide the model through the process.

One step: "Explain and solve this math problem: 2x + 3 = 7." 

Step-by-step instructions: "First, explain how to solve the equation. Then, solve the equation 2x + 3 = 7."

6. Control output length: Control the length of output through prompts to ensure that the content is concise or detailed.

No length control: "Explain quantum mechanics." 

With length control: "Explain the basic concepts of quantum mechanics in no more than 100 words."

7. Use placeholders and templates: Use placeholders and templates to indicate the content or format that needs to be filled.

No template: "Generate a user registration form."

With a template: "Generate a user registration form with the following fields: username, password, email, phone number."

8. Iterate and tweak: Keep experimenting and tweaking prompts, observing the model’s output, and optimizing as needed.

Initial prompt: "Describe the Python programming language." 

Adjusted prompt: "Describe the main features and common application scenarios of the Python programming language."

9. Specify output format: Clearly specify the output format to ensure that the generated content meets expectations.

No format specification: "Generate a report on the company's financial status."

With a format specification: "Generate a report on the company's financial status in the following format:\n\n1. Revenue:\n2. Expenses:\n3. Net Profit:\n4. Financial Analysis:"

10. Use multiple rounds of dialogue: When necessary, use several rounds of dialogue to gradually guide the model toward the required content.

11. Use reflection and iteration: After the model generates an initial answer, ask it to reflect on and, if needed, revise its response to improve accuracy and quality.

By following these best practices, large language models can be guided more effectively to produce high-quality outputs for a variety of tasks. A minimal sketch that combines several of these practices appears below.
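The `compose_prompt` helper below is an illustrative assumption, not a standard API; it simply shows how a clear role, context, a specific instruction, examples, an output-format specification, and length control can be stitched into one prompt.

```python
def compose_prompt(role, context, instruction, examples=None,
                   output_format=None, max_words=None):
    """Compose a prompt that applies several best practices at once."""
    lines = [
        f"You are {role}.",          # role
        f"Context: {context}",       # background information
        f"Task: {instruction}",      # specific instruction
    ]
    if examples:                     # show the expected output
        lines.append("Examples:")
        lines.extend(f"- {example}" for example in examples)
    if output_format:                # specify the output format
        lines.append(f"Answer strictly in the following format:\n{output_format}")
    if max_words:                    # control the output length
        lines.append(f"Keep the answer under {max_words} words.")
    return "\n".join(lines)


print(compose_prompt(
    role="an experienced math teacher",
    context="the reader is a high school student studying calculus",
    instruction="explain the basic concept of a derivative in simple language",
    examples=["speed is the derivative of distance with respect to time"],
    output_format="1. Definition:\n2. Intuition:\n3. Example:",
    max_words=100,
))
```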
Prompting Techniques
1. Zero-shot: the model completes the task without being given any examples, relying only on its pre-trained knowledge and the prompt to generate an answer.

Hint: "Translate this sentence: 'The cat is on the roof.'"

Answer: "The cat is on the roof."

The model has not seen specific translation examples, but is still able to translate the sentence correctly.

2. Few-shot: a few examples are provided before the model completes the task, to help it understand what is expected.

Prompt: "Translate the following sentences into Chinese: 'The dog is in the garden.' -> '狗在花园里。' 'The bird is in the tree.' -> '鸟在树上。' 'The cat is on the roof.' ->"

Answer: "猫在屋顶上。"

By providing several translation examples, the model can complete the translation task more accurately.
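As an illustration, a few-shot prompt like the one above can be assembled programmatically from example pairs. The `build_few_shot_prompt` helper below is a hypothetical sketch, not a library function.

```python
def build_few_shot_prompt(instruction, examples, query):
    """Build a few-shot prompt from (input, output) example pairs,
    ending with the new input the model should complete."""
    lines = [instruction]
    for source, target in examples:
        lines.append(f"'{source}' -> '{target}'")
    lines.append(f"'{query}' ->")
    return "\n".join(lines)


print(build_few_shot_prompt(
    instruction="Translate the following sentences into Chinese:",
    examples=[
        ("The dog is in the garden.", "狗在花园里。"),
        ("The bird is in the tree.", "鸟在树上。"),
    ],
    query="The cat is on the roof.",
))
```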

3. Chain-of-Thought (CoT): a prompting technique that tackles complex problems by having the model show the steps of its thinking process. This helps the model reason more reliably and generate better answers.

Prompt: "If the total cost of an apple and a banana is $3, and the price of an apple is $2, how much does a banana cost? Show your thought process."

Answer: "First, the price of apples is $2. The total price is $3, so the price of bananas is $3 minus $2, which equals $1."

By showing the thought process, the model can more clearly reason about the correct answer.
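A minimal sketch of using chain-of-thought programmatically, assuming a hypothetical `call_model` stub in place of a real LLM call: the prompt asks for step-by-step reasoning, and a simple regular expression pulls the final dollar amount out of the reply.

```python
import re


def call_model(prompt):
    """Hypothetical stand-in for a real LLM call; returns a canned
    chain-of-thought reply so the sketch runs end to end."""
    return ("First, the price of the apple is $2. The total is $3, "
            "so the banana costs $3 - $2 = $1.")


def chain_of_thought(question):
    """Append a 'show your reasoning' instruction, then extract the
    last dollar amount mentioned as the final answer."""
    reply = call_model(f"{question}\nShow your reasoning step by step.")
    amounts = re.findall(r"\$(\d+(?:\.\d+)?)", reply)
    return reply, (amounts[-1] if amounts else None)


reasoning, answer = chain_of_thought(
    "If an apple and a banana cost $3 in total and the apple costs $2, "
    "how much does the banana cost?"
)
print(reasoning)
print("Final answer:", answer)
```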

4. ReAct: a prompting technique that combines reasoning and acting, typically used for interactive tasks or complex decision-making.

Prompt: "You are a virtual assistant. The user asks: 'What should I wear today?' You need to give suggestions based on the weather. Step 1: Check the weather. Step 2: Give suggestions based on the weather."

Answer:

Step 1: "Check the weather: Today's weather is sunny and the temperature is between 25-30 degrees."

Step 2: "Suggestion: Today is a good day to wear light summer clothing, such as a T-shirt and shorts."

By alternating reasoning and action steps, models can complete complex tasks more effectively.
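A minimal sketch of a ReAct-style loop, assuming a hypothetical `call_model` stub and a single hand-written `check_weather` tool; a real implementation would parse the model's chosen action and its arguments instead of hard-coding them.

```python
def check_weather(_location):
    """Hypothetical tool; a real system would call a weather API here."""
    return "Sunny, 25-30 degrees Celsius."


def call_model(context):
    """Hypothetical LLM stub returning canned replies so the loop runs."""
    if "Observation:" in context:
        return ("Final answer: Light summer clothing, such as a T-shirt "
                "and shorts, would be comfortable today.")
    return "Action: check_weather(today)"


def react(question, max_steps=3):
    """Alternate reasoning (model replies) and acting (tool calls)."""
    context = f"Question: {question}"
    for _ in range(max_steps):
        reply = call_model(context)
        if reply.startswith("Final answer:"):
            return reply
        # The model requested an action: run the tool and feed back the observation.
        observation = check_weather("today")
        context += f"\n{reply}\nObservation: {observation}"
    return "No answer within the step limit."


print(react("What should I wear today?"))
```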

5. Reflexion: It is a prompting technique where the model reflects on and possibly modifies its response after generating a preliminary answer. This process can improve the accuracy and quality of the answer.

Prompt: "Explain why the sky is blue."

Initial answer: "Because oxygen and nitrogen in the atmosphere scatter the blue light in sunlight."

Reflection: "This explanation is not accurate enough. In fact, blue light is scattered more because of the Rayleigh scattering effect."

Modified answer: "The sky is blue because when sunlight passes through the atmosphere, the short-wavelength blue light is scattered more by air molecules than other colors of light. This phenomenon is called Rayleigh scattering."

Through reflection and revision, the model can provide more accurate and detailed responses.
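A minimal two-pass sketch of this pattern, again assuming a hypothetical `call_model` stub: the first call drafts an answer, and the second call asks the model to critique and revise its own draft.

```python
def call_model(prompt):
    """Hypothetical LLM stub returning canned replies so the sketch runs."""
    if "Critique the draft" in prompt:
        return ("The draft omits the mechanism. Revised answer: the sky looks blue "
                "because short-wavelength blue light is scattered by air molecules "
                "much more than longer wavelengths, a phenomenon called Rayleigh scattering.")
    return "Because gases in the atmosphere scatter the blue light in sunlight."


def answer_with_reflection(question):
    """Two-pass prompting: draft an answer, then ask the model to
    critique and improve that draft."""
    draft = call_model(question)
    revision_prompt = (
        f"Question: {question}\n"
        f"Draft answer: {draft}\n"
        "Critique the draft for accuracy and completeness, then give an improved answer."
    )
    return call_model(revision_prompt)


print(answer_with_reflection("Explain why the sky is blue."))
```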

6. Prompt Chaining: stringing multiple prompts together to solve complex problems or complete multi-step tasks step by step.

Assignment: Write an article about climate change.

Prompt chain:

a. “First, let’s briefly introduce what climate change is.”

b. “Next, describe the main causes of climate change.”

c. “Then, discuss the impacts of climate change.”

d. “Finally, make recommendations to address climate change.”

By breaking down tasks into multiple steps, the model can complete complex tasks in a more systematic and organized manner.
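A minimal sketch of prompt chaining, assuming a hypothetical `call_model` stub: each step's answer is appended to the running context so that later prompts build on earlier answers.

```python
def call_model(prompt):
    """Hypothetical LLM stub; echoes a placeholder so the sketch runs."""
    return f"[model's answer to: {prompt.splitlines()[-1]}]"


def run_prompt_chain(steps):
    """Run prompts in sequence, feeding each answer back in as context."""
    sections = []
    for step in steps:
        context = "\n".join(sections)          # the article written so far
        answer = call_model(f"{context}\n{step}".strip())
        sections.append(answer)
    return "\n\n".join(sections)


print(run_prompt_chain([
    "First, briefly introduce what climate change is.",
    "Next, describe the main causes of climate change.",
    "Then, discuss the impacts of climate change.",
    "Finally, make recommendations to address climate change.",
]))
```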

These techniques and methods help users interact with large language models more effectively and obtain higher quality output.

Structured output
In Prompt Engineering, structured output means guiding the language model, through specifically designed prompts, to generate output with a clear format or structure. This is especially important for tasks such as processing data tables, generating code, and creating reports. Structured output ensures that the generated content conforms to the expected format, making it easier to process and use downstream.
Common forms of structured output

1. JSON format: suitable for tasks that require generating or processing data objects.

2. Markdown format: used to generate documents or reports for easy reading and presentation.

3. Table format: suitable for data display and analysis.

4. Code format: used to generate code snippets in a specific programming language.

Tips for designing structured output prompts

1. Clarify format requirements: Clearly state the output format in the prompt.

2. Provide examples: Show the expected output format through examples.

3. Use placeholders: Use placeholders in prompts to indicate content that needs to be filled in.

JSON format example:

Prompt: "Please generate a JSON object containing the following information: Name, Age, Occupation, Hobbies."

Example format:

{"name": "张三", "age": 30, "occupation": "Software Engineer", "hobbies": ["Reading", "Travel", "Programming"]}

Markdown format example:

Prompt: "Please generate a report in Markdown format based on the following information: Title: Impact of Climate Change on Agriculture; Introduction: briefly introduce the background of climate change; Impact: describe the specific impact of climate change on agriculture in detail; Conclusion: summarize and propose countermeasures."

Example format:

# Title
## Introduction
Introduction content
## Impact
Impact content
## Conclusion
Conclusion content

Table format example:

Prompt: "Please generate a table with the following information: Product Name, Price, Stock Quantity, Supplier."

Example format:

| Product Name | Price | Stock Quantity | Supplier   |
|--------------|-------|----------------|------------|
| Product A    | 100   | 50             | Supplier 1 |
| Product B    | 200   | 30             | Supplier 2 |

Code format example:

Prompt: "Please generate Python code according to the following requirements: define a function `add_numbers` that accepts two parameters `a` and `b` and returns their sum; print the result of calling the function with the arguments 5 and 10."

Example format:

```python
def function_name(parameters):
    # function body

print(function_name(arguments))
```
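When structured output is requested, it is good practice to validate the reply before using it. Below is a minimal sketch, assuming a hypothetical `call_model` stub that returns the JSON shown in the example above; the required-key check is illustrative.

```python
import json


def call_model(prompt):
    """Hypothetical LLM stub returning a JSON string so the sketch runs."""
    return ('{"name": "张三", "age": 30, "occupation": "Software Engineer", '
            '"hobbies": ["Reading", "Travel", "Programming"]}')


def get_structured_profile():
    """Ask for JSON only, then parse and validate the reply before using it."""
    prompt = ("Please generate a JSON object with the keys name, age, occupation "
              "and hobbies. Reply with the JSON object only, no extra text.")
    reply = call_model(prompt)
    data = json.loads(reply)  # raises ValueError if the reply is not valid JSON
    missing = {"name", "age", "occupation", "hobbies"} - data.keys()
    if missing:
        raise ValueError(f"Model reply is missing keys: {missing}")
    return data


print(get_structured_profile())
```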
Prompt Applications
Using Prompt Engineering in different application scenarios can significantly improve the performance of large language models. Here are some specific application scenarios and their best practices, including data processing, code generation, function definition, classification tasks, idea generation, content generation, information aggregation, and risk and security management.
1. Data processing: data cleaning, transformation and analysis
Data cleaning:

Prompt: "Please convert the following data to standard date format: '12/31/2023', '01-01-2024', '2024.02.28'."

Output: "2023-12-31, 2024-01-01, 2024-02-28"

Data conversion:

Prompt: "Please convert the following CSV data to JSON format:\nName, Age, City\nAlice, 30, New York\nBob, 25, Los Angeles"

Output: [{"Name": "Alice", "Age": 30, "City": "New York"}, {"Name": "Bob", "Age": 25, "City": "Los Angeles"}]
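For comparison, the CSV-to-JSON conversion above can also be done deterministically with Python's standard library; a small sketch:

```python
import csv
import io
import json

csv_text = "Name,Age,City\nAlice,30,New York\nBob,25,Los Angeles"

# csv.DictReader turns each data row into a dict keyed by the header line.
rows = list(csv.DictReader(io.StringIO(csv_text)))
for row in rows:
    row["Age"] = int(row["Age"])  # cast numeric fields explicitly

print(json.dumps(rows, ensure_ascii=False))
# [{"Name": "Alice", "Age": 30, "City": "New York"}, {"Name": "Bob", "Age": 25, "City": "Los Angeles"}]
```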

2. Code generation: Generate and optimize code snippets.

Code generation:

Prompt: "Write a function in Python that calculates the average of all numbers in a list."

Output:

```python
def calculate_average(numbers):
    return sum(numbers) / len(numbers)
```

Code optimization:

Prompt: "Optimize the following Python code to improve performance:\n\n```\nresult = []\nfor i in range(1000000):\n    result.append(i * 2)\n```"

Output:

```python
# List comprehension: a list comprehension completes in one line what would
# otherwise take several, and it is usually more efficient as well.
# Performance: list comprehensions are optimized internally, reducing
# function-call and interpreter overhead, which noticeably speeds up
# large-scale operations like this one.
result = [i * 2 for i in range(1000000)]
```
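The performance claim can be checked with the standard `timeit` module; a small sketch (absolute timings will vary by machine):

```python
import timeit


def with_loop():
    result = []
    for i in range(1_000_000):
        result.append(i * 2)
    return result


def with_comprehension():
    return [i * 2 for i in range(1_000_000)]


# Run each variant 10 times; the comprehension is usually noticeably faster
# because it avoids repeated attribute lookups and append calls.
print("loop:         ", timeit.timeit(with_loop, number=10))
print("comprehension:", timeit.timeit(with_comprehension, number=10))
```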

3. Function definition: define a function with specific functionality.

Hint: "Please define a Python function that accepts two string parameters and returns their concatenation result." Output: def concatenate_strings(str1, str2): return str1 + str2

4. Classification tasks: text classification, sentiment analysis, etc.

Text classification:

Prompt: "Please classify the following sentences as 'positive' or 'negative': 'I am very happy today.' 'This product is terrible.'"

Output: "Positive: 'I am very happy today.' Negative: 'This product is terrible.'"

Sentiment analysis:

Prompt: "Please analyze the sentiment of the following reviews: 'This movie is great!' 'I am very disappointed with the service.'"

Output: "Positive: 'This movie is great!' Negative: 'I am very disappointed with the service.'"
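A minimal sketch of wrapping such a classification task in code, assuming a hypothetical `call_model` stub: the prompt constrains the model to a fixed label set, and the reply is normalized before use.

```python
def call_model(prompt):
    """Hypothetical LLM stub; always answers 'positive' so the sketch runs."""
    return "positive"


def classify_sentiment(text, labels=("positive", "negative")):
    """Ask for one label from a fixed set and normalize the reply."""
    prompt = (f"Classify the sentiment of the following review as one of {list(labels)}. "
              f"Reply with the label only.\nReview: {text}")
    reply = call_model(prompt).strip().lower()
    return reply if reply in labels else "unknown"


print(classify_sentiment("This movie is great!"))
```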

5. Creative generation: Generate creative content, such as stories, advertising copy, etc.

6. Content generation: generate articles, reports, etc.

7. Aggregate Information: Aggregate and summarize information.

8. Risk and Safety Management: Identify and manage potential risks and safety issues.
Risk assessment:

Prompt: "Please evaluate the potential risks of the following project: 'Project description: a medical project using AI technology.'"

Output: "Potential risks: 1. Project delays. 2. Budget overruns. 3. Technical implementation difficulties."

Safety tips:

Prompt: "Please provide security suggestions for data privacy protection."

Output: "1. Use strong passwords and two-factor authentication. 2. Update and patch systems regularly. 3. Encrypt sensitive data. 4. Perform regular security audits."