How to use large models more effectively: prompt engineering

Written by
Silas Grey
Updated on: June 18, 2025
Recommendation

Master the skills of using large models to improve work efficiency and output quality.

Core content:
1. The importance and advantages of prompt engineering
2. Prompt optimization and example techniques to improve output accuracy
3. Use CoT and XML tags to build efficient structured prompts

Yang Fangxian
Founder of 53A / Tencent Cloud Most Valuable Expert (TVP)
Large models in 2025 are very smart, but don't overestimate your ability to use them. Instead, stay humble, keep up with the times, and learn how to use them better. Even elementary school students can learn this, so don't make excuses for yourself.
A summary of Anthropic's prompt engineering tutorial
Point 1: Why do you need prompt engineering?
Prompt engineering is faster than other ways of controlling model behavior (such as fine-tuning) and can often deliver performance leaps in less time. In other words: it saves GPUs and money, keeps pace with rapid model updates, saves time and data, produces quick results without retraining, handles out-of-domain content better, preserves the model's general knowledge, and stays transparent.
Point 2: Make prompts clear and direct. The prompt optimizer our elementary school students use, yujianaier.xyz, does this very well, with a verified improvement of 50%+.
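As a rough illustration of "clear and direct" (the prompt texts below are my own, not from the tutorial), compare a vague request with one that states the audience, length, and format:

```python
# Vague: the model has to guess the audience, length, and format.
vague_prompt = "Write something about our new product."

# Clear and direct: states the task, audience, constraints, and output format.
clear_prompt = (
    "Write a 3-paragraph launch announcement for our new noise-cancelling "
    "headphones, aimed at commuters. Paragraph 1: the problem it solves. "
    "Paragraph 2: the three key features. Paragraph 3: a call to action with "
    "the July 1 launch date. Friendly tone, under 150 words."
)
```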
Point 3: Examples are the secret weapon for getting Claude to generate exactly what you want. By providing a few well-crafted examples in your prompt, you can significantly improve the accuracy, consistency, and quality of Claude's output. This technique is called "few-shot" or "multishot" prompting, and it is particularly effective for tasks that require structured output or must follow a specific format.
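A minimal few-shot sketch using the Anthropic Python SDK; the ticket-classification task, prompt text, and model ID are illustrative assumptions, not taken from the tutorial:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Two hand-crafted examples pin down both the label set and the output format.
few_shot_prompt = """Classify each support ticket as Billing, Bug, or Feature Request.

<examples>
<example>
Ticket: "I was charged twice this month."
Category: Billing
</example>
<example>
Ticket: "The export button crashes the app on Android."
Category: Bug
</example>
</examples>

Ticket: "Please add dark mode to the dashboard."
Category:"""

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model ID; check current docs
    max_tokens=10,
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.content[0].text)  # expected: "Feature Request"
```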

Point 4: Use CoT (chain of thought) for tasks that would require a human to think, such as complex math, multi-step analysis, writing complex documents, or decisions involving many factors.
Why make Claude think? Accuracy: solving problems step by step reduces errors, especially in math, logic, analysis, and other complex tasks. Debugging: seeing Claude's thought process helps you figure out where a prompt may be unclear.
Why not let Claude think? Longer outputs can increase latency, and not every task needs deep reasoning. Use CoT judiciously to balance performance against latency.

Practice:
Basic prompt: include "Think step-by-step" in the prompt.
Guided prompt: outline the specific steps Claude should follow in its thinking process.
Structured prompt: use XML tags such as <thinking> and <answer> to separate the reasoning from the final answer.
Example:
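A sketch of the structured variant; the investment scenario, tag parsing, and model ID below are my own illustrative assumptions:

```python
import re

import anthropic

client = anthropic.Anthropic()

cot_prompt = """A client has $10,000 to split between a stock fund (about 8%/year,
volatile) and a bond fund (about 3%/year, stable). They need the money in 2 years
for a house down payment.

Think step-by-step about risk, time horizon, and expected value inside <thinking>
tags, then give your recommendation inside <answer> tags."""

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model ID; check current docs
    max_tokens=1024,
    messages=[{"role": "user", "content": cot_prompt}],
)

text = response.content[0].text
# Keep the <thinking> block around for debugging, but surface only the answer.
match = re.search(r"<answer>(.*?)</answer>", text, re.DOTALL)
print(match.group(1).strip() if match else text)
```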

Point 5: XML tags
XML tip: use tags such as <instructions>, <example>, and <formatting> to clearly separate the parts of the prompt. This prevents Claude from confusing instructions with examples or context.
Power-user tip: combine XML tags with other techniques, such as multishot prompting (<examples>) or chain of thought (<thinking>, <answer>). This lets you build highly structured, high-performance prompts.

Here is a simple example I put together with Claude Sonnet 4 (reasoning):
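A minimal sketch of what such a combined prompt can look like; the contract-review scenario and exact tag layout are illustrative, not the literal output of that session:

```python
# Template combining <instructions>, <formatting>, and <document> tags.
# {contract_text} is a placeholder to be filled with the real document.
xml_prompt = """<instructions>
Summarize the contract below for a non-lawyer and flag any clauses that look
unusual or risky. Reply exactly in the layout shown in <formatting>.
</instructions>

<formatting>
Summary: one paragraph
Risky clauses: bulleted list, or "None found"
</formatting>

<document>
{contract_text}
</document>"""

prompt = xml_prompt.format(contract_text=open("contract.txt").read())
```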


Point 6: Role prompting. Improve accuracy: in complex scenarios such as legal analysis or financial modeling, role prompting can significantly improve Claude's performance. Experiment with roles: a data scientist and a marketing strategist may draw different insights from the same data, and a data scientist specializing in customer-insight analysis for Fortune 500 companies may produce different results again!
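Role prompting is usually done through the system parameter; below is a sketch comparing two roles on the same question (the roles, question, and model ID are illustrative):

```python
import anthropic

client = anthropic.Anthropic()

question = "Q3 churn rose from 2.1% to 3.4%. What should we investigate first?"

roles = [
    "You are a data scientist specializing in customer-insight analysis "
    "for Fortune 500 companies.",
    "You are a marketing strategist focused on retention campaigns.",
]

# The same question routed through two different expert personas.
for role in roles:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model ID
        max_tokens=300,
        system=role,  # the role lives in the system prompt
        messages=[{"role": "user", "content": question}],
    )
    print(f"--- {role}\n{response.content[0].text}\n")
```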
Point 7: Prefill Claude's response for better output control.
When using Claude, you can guide its responses by prefilling the Assistant message. This powerful technique lets you steer Claude's behavior, skip preambles, enforce specific formats such as JSON or XML, and even help Claude stay in character during role-play. When Claude isn't performing as well as you'd like, a few prefilled sentences can dramatically improve the output. A little prefilling goes a long way!
In other words, you preset the start of the answer: ask "What is your favorite color?" and start the Assistant turn with "Green".
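Prefilling works by making the last message in the list an assistant turn; Claude continues from where it stops. A sketch that forces raw JSON by prefilling an opening brace (the task and model ID are illustrative):

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model ID
    max_tokens=200,
    messages=[
        {
            "role": "user",
            "content": "Extract name, size, and price from: "
                       "'SmartHome Mini, 5 inches, $49.99'. Return JSON only.",
        },
        # Prefilled assistant turn: Claude continues after the "{",
        # which skips preambles like "Here is the JSON you asked for:".
        {"role": "assistant", "content": "{"},
    ],
)

print("{" + response.content[0].text)  # reattach the prefilled brace
```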

Point 8: Chain complex prompts for stronger performance
Enter prompt chaining: break a complex task down into smaller, manageable subtasks.
For example, data processing: extraction → transformation → analysis → visualization.
Advanced: Self-correcting chains
You can chain prompts and have Claude review its own work! This catches errors and refines the output, which is especially valuable for high-stakes tasks.
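A minimal two-step chain plus a self-review pass; the helper function, file name, prompts, and model ID are my own sketch:

```python
import anthropic

client = anthropic.Anthropic()

def ask(prompt: str) -> str:
    """One link in the chain: a single, focused subtask."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model ID
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

report = open("q3_report.txt").read()  # illustrative input file

# Step 1: extract. Step 2: analyze. Step 3: self-correcting review.
facts = ask(f"Extract every numeric metric in this report as a bullet list:\n\n{report}")
analysis = ask(f"Based only on these metrics, identify the three biggest risks:\n\n{facts}")
reviewed = ask(
    "Review the analysis below against the metrics it was based on and fix any "
    f"claim the metrics do not support.\n\nMetrics:\n{facts}\n\nAnalysis:\n{analysis}"
)
print(reviewed)
```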

Point 9: Long-context prompts
Put long-form data at the top: place long documents and inputs (~20k+ tokens) near the top of the prompt, above the query, instructions, and examples. This can significantly improve Claude's performance across all models.
In testing, placing the query at the end improved response quality by up to 30%, especially with complex multi-document inputs.
Use XML tags to structure document content and metadata: when working with multiple documents, wrap each one in <document> tags for clarity, with <document_content> and <source> (and other metadata) subtags.
Quote first: for long-document tasks, ask Claude to quote the relevant parts of the document before performing the task. This helps Claude cut through the "noise" of the rest of the content.
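A sketch of that layout: documents first, wrapped with metadata, then the quote-then-answer instruction, with the query last. The file names, placeholders, and question are illustrative:

```python
# Documents go at the top; the question goes at the very end.
long_context_prompt = """<documents>
<document index="1">
<source>annual_report_2024.pdf</source>
<document_content>
{annual_report_text}
</document_content>
</document>
<document index="2">
<source>competitor_analysis.xlsx</source>
<document_content>
{competitor_text}
</document_content>
</document>
</documents>

First, quote the passages most relevant to revenue growth inside <quotes> tags.
Then, using only those quotes, answer inside <answer> tags:

Which product line should we expand next year, and why?"""
```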