The essence of prompt engineering: How to communicate effectively with AI from the perspective of Anthropic experts

Written by
Clara Bennett
Updated on: July 1, 2025
Recommendation

An in-depth look at prompt engineering and the art of communicating with AI.

Core content:
1. What prompt engineering is, and how prompts are optimized through iteration
2. The six traits of excellent prompt engineers and why they matter
3. Practical advice for improving prompting skills, and the future of prompt engineering

Yang Fangxian
Founder of 53AI/Most Valuable Expert of Tencent Cloud (TVP)

Six months ago, Anthropic released a 1-hour-16-minute video called "AI prompt engineering: A deep dive". It discusses prompt engineering from three perspectives: research, consumers (i.e., users of large models), and enterprise. The video features four guests:

Alex: Head of Developer Relations at Anthropic, former Prompt Engineer [ 00:30 ].

David: works on customer technical support at Anthropic, focusing on fine-tuning and language-model applications [ 00:52 ].

Amanda: leads Anthropic's fine-tuning team, working on making Claude more honest and kind [ 01:10 ].

Zach: Anthropic prompt engineer, involved in the development of prompt generators and educational materials [ 01:22 ].

I tried using Gemini to summarize the video's key points, but some very valuable details were lost that way, so I watched the whole video to the end myself.

There are several relatively clear and important topics in this video:

  1. The nature and definition of prompt engineering
  2. Qualities of an excellent prompt engineer
  3. Interaction and trust with the model
  4. Is role-playing necessary?
  5. Tips for improving your prompting skills
  6. The evolution and future of prompt engineering

The nature and definition of prompt engineering

The video begins by discussing why prompt engineering is called engineering. The four guests use examples to show that the "engineering" lies in trial and error and iteration: prompts are optimized through experiment and design. Several of them noted that prompts can be regarded as natural-language code, but that excessive abstraction should be avoided in favor of clear task descriptions.

This explanation of "engineering" suggests that communicating with large models does not come naturally to ordinary users, because most people lack the "engineering" habit of trial, error, and iteration. Most ordinary users operate in a try-and-judge mode: they try a product once and conclude that it either works or doesn't. In reality, communicating with a large model is an engineering activity that requires repeated attempts to bring the responses gradually in line with expectations.

This was vividly illustrated when DeepSeek first surged in popularity and then cooled off. At first, the "deep thinking" mode made many ordinary people feel "I can do this too", but as more and more people used it, many began to complain about DeepSeek's "nonsense". That actually has little to do with the model itself. By its working principle, a large model's generation is a sophisticated "algorithm", not judgment, analysis, and response grounded in human cognition. Moreover, its responses are a black box, which pushes the judgment of right and wrong back onto people. So when people ask me whether AI can do this or that, my first reaction is: make sure you know how to do it yourself first.
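The trial-and-error loop described above can be sketched in code. Everything here is hypothetical: `call_model` stands in for any LLM API, and `meets_expectations` and `refine` are placeholders for what is, in practice, a human reading the output and rewriting the prompt by hand.

```python
def call_model(prompt: str) -> str:
    """Stand-in for a real LLM API call (hypothetical)."""
    return f"response to: {prompt}"

def meets_expectations(response: str) -> bool:
    """Stand-in check; in practice a person reads the output."""
    return "step by step" in response

def refine(prompt: str) -> str:
    """Stand-in refinement; in practice you rewrite by hand."""
    return prompt + " Think step by step."

def iterate_prompt(prompt: str, max_rounds: int = 5) -> str:
    """Prompting as engineering: try, inspect, refine, repeat."""
    for _ in range(max_rounds):
        if meets_expectations(call_model(prompt)):
            break
        prompt = refine(prompt)
    return prompt
```

The point is the shape of the loop, not the placeholders: each round produces an observation about the output, and the prompt is revised in response, exactly the experiment-and-design cycle the guests describe.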

Qualities of an excellent prompt engineer

Together, the guests listed the characteristics of an excellent prompt engineer:

  1. Clear communication skills, able to understand and express tasks accurately
  2. Ability to iterate, willingness to experiment and improve prompts
  3. Risk awareness, the ability to anticipate situations that could go wrong
  4. Read the model output in depth to identify problems and areas for improvement
  5. Possess "theory of mind" and be able to think about problems from the model's perspective
  6. Ability to strip away assumptions and clearly communicate all the information required for the task

These six points echo the closing sentence of the previous section: the main factor determining whether AI can do something is whether the person knows how to do it. They also set a very high bar, which supports my earlier argument that ordinary people struggle to use large models well. But at least we can move in that direction and learn to communicate better with large models.

Interaction and trust with the model

In this section, Amanda offered a very useful tip: when you hit an error, ask the AI this question: "I don't want you to follow these instructions. I just want you to tell me the ways in which they're unclear or any ambiguities, or anything you don't understand." In other words, before running a prompt, you can have the AI judge whether the prompt has problems and what to add to make it more effective. Amanda also said she never blindly trusts large models: even as an Anthropic engineer, she carefully reads Claude's output every time to check whether it is accurate. She also admitted that AI is not omnipotent; it is wise to avoid chasing the "perfect prompt" and to give up on unsolvable problems at the right time.
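Amanda's tip can be turned into a small reusable wrapper. This is only a sketch of the idea: `make_critique_prompt` is a hypothetical helper that wraps a draft prompt in her meta-question instead of executing the draft directly. The `<instructions>` tags are my own choice of delimiter, not something prescribed in the video.

```python
def make_critique_prompt(draft_prompt: str) -> str:
    """Wrap a draft prompt in a meta-request: don't follow it,
    point out what is unclear or ambiguous instead."""
    return (
        "I don't want you to follow these instructions. "
        "I just want you to tell me the ways in which they're unclear "
        "or any ambiguities, or anything you don't understand.\n\n"
        "<instructions>\n"
        f"{draft_prompt}\n"
        "</instructions>"
    )
```

You would send the wrapped text to the model, fix the issues it reports, and only then run the original prompt.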

Is role-playing necessary?

The conclusion here surprised me. Amanda argued that as models become more "intelligent", role-playing becomes less and less necessary. She sees role-playing as a kind of deception: having the AI play a part is like lying to it. With more capable models there is no need to "lie"; just express your intent directly and clearly. What this suggested to me is that conversations should instead emphasize the user's role, so the AI can understand who the user is and answer from that perspective. For example, it is enough to state that I am a primary school student and describe my knowledge background; the AI can then organize its output around that information, without being asked to play a primary school teacher. In short, imagine the AI as a newly hired employee and provide a clear task description and background information.
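The contrast between assigning the model a role and stating the user's background can be shown side by side. Both prompt strings are invented examples for illustration, not quotes from the video.

```python
# Role-play style: asks the model to pretend to be someone else.
roleplay_prompt = (
    "You are a primary school teacher. Explain photosynthesis."
)

# Direct style: states who the reader is and what they already know,
# and lets the model organize its output around that background.
direct_prompt = (
    "I am a primary school student. I know that plants need sunlight "
    "and water, but I know nothing about chemistry. "
    "Explain photosynthesis at a level I can follow."
)
```

The second prompt carries strictly more information: instead of hoping the model infers the audience from a persona, it spells the audience out.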

Tips for improving your prompting skills

The engineers offered the following suggestions for improving your skills:

  1. Read examples of great prompts and learn their structure and techniques [ 51:14 ].

  2. Practice repeatedly, stay curious and have fun [ 51:55 ].

  3. Give the prompt to someone else to read and get feedback [ 51:55 ].

  4. Try pushing the model to complete tasks beyond its capabilities [ 52:49 ].

The evolution and future of prompt engineering

One important point stands out in this section: the ability to think philosophically matters greatly in prompt engineering, because it trains you to express complex concepts clearly [ 01:14:26 ].

This view is similar to what the prompt expert "Li Jigang" has said: the ideas of philosophy are the ultimate compression of human thought.

  1. Philosophical thinking trains people to express complex ideas and concepts clearly and accurately [ 01:14:26 ]. This is crucial for prompt engineering because you need to accurately communicate your needs and expectations to the model.

  2. Philosophical thinking emphasizes logic and critical thinking, which helps to analyze and understand problems and thus design more effective prompts.

  3. Philosophical thinking encourages abstraction, which is very useful for understanding a model's behavior and predicting its output.

  4. Philosophical thinking can help you express your ideas more accurately and avoid ambiguity, thereby improving the effectiveness of prompts [ 01:15:32 ].

  5. Philosophical thinking can help you define new concepts and communicate them to the model [ 01:13:39 ].

Amanda believes that as model capabilities improve, current prompting techniques may become obsolete, but philosophical thinking will remain an important ability in prompt engineering.

Final thoughts

The above covers the main content of the video. At the end, Amanda mentioned "brain externalization", a concept I had to listen to several times before I understood it. "Brain externalization" means expressing the thinking process, knowledge, and strategies in your head about how to solve a problem and complete a task in a clear, structured way, so that an external system (i.e., a large model) can understand and execute them.

You can understand it this way:

Articulate your thought process:  This means breaking down your thought process, which might have been vague and intuitive, into specific steps, rules, and constraints. You need to explain the reasons and purpose of each step in detail, as if you were teaching it to someone who didn’t understand the situation.

Make knowledge explicit:  You need to make explicit the relevant knowledge you have, even if you take it for granted. The model does not have all your background knowledge, so you need to provide it with the necessary information.

Communicate your strategy:  You need to tell the model what strategy you want it to adopt to solve the problem: for example, whether it should reason step by step, brainstorm first, or follow a specific framework.

Convert internal intentions into external instructions:  The goals and intentions in your brain need to be accurately translated into natural language instructions (i.e. prompts) that the model can understand and execute.
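The four components above can be made concrete as a prompt skeleton. The section names, the `build_prompt` helper, and the example contents are my own illustration, not a template from the video.

```python
def build_prompt(intent: str, background: str,
                 steps: list[str], strategy: str) -> str:
    """Externalize intent, tacit knowledge, thought process,
    and strategy into one structured prompt (illustrative only)."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        f"Goal: {intent}\n\n"                       # internal intention, made explicit
        f"Background you need:\n{background}\n\n"   # knowledge made explicit
        f"Steps to follow:\n{numbered}\n\n"         # articulated thought process
        f"Approach: {strategy}"                     # communicated strategy
    )

prompt = build_prompt(
    intent="Draft a project status email",
    background="The project is two weeks behind schedule.",
    steps=["List completed tasks", "Flag blockers", "Propose next steps"],
    strategy="Reason step by step before writing the final email.",
)
```

Each argument corresponds to one of the four components, which makes it easy to notice when a prompt is silently missing one of them.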

To use an analogy:

"""

Imagine you are an experienced chef with a perfect recipe and cooking techniques in mind. The large language model is like an apprentice who has just started.

  • “Analyzing the thoughts in your brain”  is like you carefully recalling and sorting out your cooking steps and techniques.
  • “Pass it on to an educated, average person”  is like writing down recipes and techniques in clear, understandable language, making sure that someone without your experience can understand it.
  • “Externalizing your brain”  means that you turn this set of recipes and techniques into detailed written instructions, and the apprentice (model) can follow these instructions exactly to make dishes as delicious as you did.

In the context of prompt engineering, “brain externalization” means that you need to clearly reflect your problem-solving logic, required knowledge, expected output format, etc. in your prompts, so as to guide the model to work according to your intentions.

"""