A prompting tip that improves large models' ability to handle complex tasks by 3%-20%

Master Step-Back Prompting to make your AI model handle complex tasks more accurately!
Core content:
1. The definition and working mechanism of Step-Back Prompting
2. Its advantages over traditional prompting techniques
3. Application examples and reported performance gains
Step-Back Prompting is a prompting technique that improves large models' ability to handle complex tasks.
It splits a single prompt into two steps: first, ask the model to extract the abstract concepts and principles underlying the problem; then, have the model solve the specific problem with those principles in hand.
This approach helps prevent errors in intermediate steps, thereby improving the accuracy of the generated content. It has been tested on models such as PaLM-2L, GPT-4, and Llama2-70B, and significantly outperforms traditional methods such as chain-of-thought (CoT) prompting.
The method consists of two steps:
Abstraction: ask the model about the concepts or principles involved in solving the problem;
Reasoning: provide the concepts and principles obtained in the previous step, together with the original problem, to the model for processing.
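The two steps above can be sketched as a small Python helper. This is a minimal illustration, not an official implementation: `ask_model` is a hypothetical placeholder for whatever LLM API you actually use, and the prompt wording is just one reasonable phrasing.

```python
def ask_model(prompt: str) -> str:
    # Placeholder: replace with a real LLM API call (e.g. to GPT-4 or PaLM-2).
    # Here it simply echoes the first line of the prompt so the flow is visible.
    return f"[model response to: {prompt.splitlines()[0]}]"


def step_back(question: str, abstraction_question: str) -> str:
    # Step 1 (Abstraction): ask for the underlying concepts/principles first.
    principles = ask_model(abstraction_question)
    # Step 2 (Reasoning): feed the principles plus the original question
    # back to the model to produce the final answer.
    reasoning_prompt = (
        f"Principles:\n{principles}\n\n"
        f"Using the principles above, solve the problem:\n{question}"
    )
    return ask_model(reasoning_prompt)


# Usage, using the article's storyline example:
answer = step_back(
    question="Write a storyline for a first-person shooter game level.",
    abstraction_question=(
        "Referring to popular first-person shooter games, what are the 5 "
        "key points for writing a storyline for the first-person shooter "
        "game level?"
    ),
)
```

The key design point is that the model's first answer (the principles) is injected into the second prompt, so the final reasoning is grounded in the abstraction rather than attempted in one shot.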
For example:
Suppose we want to write a storyline for a first-person shooter game level. If we ask the model this question directly, it answers in a single shot.
How to use Step-Back Prompting
We first ask the model to distill the concepts and key points: "Referring to popular first-person shooter games, what are the 5 key points for writing a storyline for the first-person shooter game level?" The model returns the core elements of writing the level plot.
Across many datasets, step-back prompting has brought significant improvements in results.