Ten Common Misconceptions About Prompt Engineering

Written by Iris Vance
Updated on: July 17, 2025
Recommendation

Clear up common misconceptions about prompt engineering and improve your ability to communicate with large language models.

Core content:
1. Common misconceptions about prompt engineering and their causes
2. The complexity and challenges of prompt engineering, and how to overcome them
3. Case analysis: how to effectively build and tune prompts

Yang Fangxian, founder of 53AI and Tencent Cloud Most Valuable Expert (TVP)


1. Background

After systematically studying a large number of prompt tutorials and practicing continuously, I found that many people hold misconceptions about prompt engineering.

This article lists some of these misconceptions about how prompt engineering is understood and practiced, and shares my insights, in the hope of offering readers some inspiration.

2. Ten common misconceptions


Myth 1: Prompt engineering is simple and easily learned

Many people assume prompt engineering is trivial and that a little exposure makes them competent. This perception is like believing software engineering amounts to "high cohesion, low coupling" or CRUD operations. These concepts are easy to state, yet real projects routinely expose problems of scalability and maintainability, and the results often disappoint. Overcoming these problems takes more than the basics: you also need a deep understanding of design patterns and familiarity with various frameworks to turn theory into high-quality practice.

Talking is easy, but explaining complex things in simple language is not. Prompt engineering resembles the art of asking questions: its core is conveying task requirements to a large language model clearly and effectively. Yet just as clear communication demands clear thinking, not merely clear expression, prompt engineering is "easy to know, hard to do." Many people still have not mastered efficient communication after years of work, and prompt engineering poses a very similar challenge.
Although the basic techniques of prompt engineering look simple, in practice we must choose the most appropriate phrasing for the specific scenario and apply effective tuning methods to handle complex situations. Especially while large-model capabilities are not yet fully mature, even a clearly stated requirement often needs further guidance before the model completes the task well.
For example, some people ask a large language model to write a fairy tale by simply stating the request, only to find the generated story hollow and far from their expectations. Someone proficient in prompt engineering would instead use the CO-STAR framework to spell out the story's context, objective, style, tone, audience, and response format in detail, and so obtain output that is concrete and on spec. Many people write a prompt, see a poor result, have no idea how to tune it, and give up. Others can calmly write prompts for all kinds of scenarios and, when an abnormal case appears, diagnose and fix it quickly and specifically. That is the value of prompt engineering.
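The CO-STAR structure described above can be expressed as a small template helper. The following is a minimal sketch in Python; the section names follow the framework, while the fairy-tale values are purely illustrative:

```python
def co_star_prompt(context, objective, style, tone, audience, response):
    """Assemble a CO-STAR prompt: Context, Objective, Style, Tone,
    Audience, Response format."""
    sections = [
        ("# CONTEXT", context),
        ("# OBJECTIVE", objective),
        ("# STYLE", style),
        ("# TONE", tone),
        ("# AUDIENCE", audience),
        ("# RESPONSE", response),
    ]
    return "\n\n".join(f"{header}\n{body}" for header, body in sections)

prompt = co_star_prompt(
    context="You are a children's author writing an original fairy tale.",
    objective="Write a 300-word story about a fox who learns to share.",
    style="Vivid, concrete imagery; short sentences.",
    tone="Warm and gently humorous.",
    audience="Children aged 5 to 8.",
    response="Plain text, one paragraph per scene, ending with a one-line moral.",
)
print(prompt)
```

Compared with a bare "write me a fairy tale," the assembled prompt pins down exactly the details the model would otherwise have to guess.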


Myth 2: Prompt engineering can solve all problems

Prompt engineering is not a panacea.
A car analogy makes this easy to see: how fast we drive is determined not only by our driving skill but also by the car's performance and by traffic rules.

The upper limit of a prompt's effect is set by both the model's capability and the prompt writer's skill. If the model is not capable enough, even a well-written prompt yields unsatisfactory results; conversely, a strong model paired with a poorly written prompt also underperforms.
In addition, not every problem yields to prompt engineering. Some tasks need model fine-tuning to reach good results, and some produce no ideal result no matter how the prompt is optimized; in that case, consider decomposing the task further.

Myth 3: One set of prompts fits all scenarios and models

A single prompt rarely suits every scenario. We need to master prompt-engineering techniques and adjust prompts flexibly to meet individual needs. In business applications, using different prompts for different scenarios is crucial.

In addition, models differ in instruction-following and reasoning capability. A prompt that works well on one model may perform poorly on another, so it needs to be adapted for each model.
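One way to keep per-model variants manageable is a small template registry. This is a sketch under assumed conditions; the model names and wording are illustrative, not real model identifiers:

```python
# Per-model prompt variants: a weaker model may need explicit step-by-step
# instructions, while a stronger model does fine with a short request.
TEMPLATES = {
    "small-model": (
        "Task: {task}\n"
        "Think step by step, then give only the final answer on the last line."
    ),
    "large-model": "{task}\nAnswer concisely.",
}

def render(model: str, task: str) -> str:
    """Pick the variant registered for this model, falling back to the terse one."""
    template = TEMPLATES.get(model, TEMPLATES["large-model"])
    return template.format(task=task)

print(render("small-model", "Sum the integers from 1 to 100."))
```

Keeping the variants side by side makes it obvious which wording each model receives and keeps per-model tweaks in one place.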

Myth 4: The more complex the prompt, the better

"Simplicity is the best way to achieve success." Complex prompts do not necessarily mean better results. The core task of prompts is to clearly convey requirements. If they are too complex, the model may not be able to grasp the key points and may even lead to misunderstandings.

If the prompt words are too complex or too long, the following problems may occur:

1. Context confusion: when the prompt is very long, the model may struggle to keep the context clear and drift from the original topic or meaning, producing inaccurate or irrelevant results.

2. Performance degradation: an overly long prompt increases the model's computational load and may slow its responses, an effect that is more pronounced in resource-constrained environments.

3. Information redundancy: an overly long prompt may carry too much redundant information, making it hard for the model to identify and extract the most relevant parts, which hurts output quality.

4. Reduced output length: a model's generation length is usually limited; a very long prompt can force shorter output that fails to cover everything required.

5. Misunderstanding: a long, structurally complex prompt can lead the model astray when interpreting it, yielding results that do not match expectations.

In addition, some people favor a particular prompt framework and apply it regardless of scenario, which can backfire. Each framework has its own applicable scenarios, and we should pick the one that fits the situation at hand.
For simple tasks, concise and clear prompts are often more effective; for complex tasks, a structured prompt helps the model understand and execute more reliably.

Myth 5: The more examples in your prompt, the better

More examples are not always better.
For tasks the model has already mastered, extra examples are unnecessary. Even when examples help, there should not be too many, and inaccurate or erroneous examples will degrade the model's performance. For similar examples, one is enough: multiple homogeneous examples bring no additional improvement.
Therefore, start with few examples and add more only as needed, and focus on correctness, representativeness, and diversity rather than quantity.
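The "few, correct, diverse" rule above can be enforced when assembling a few-shot prompt. In this sketch, duplicate kinds of example are dropped and the total is capped; the cap of 3 and the sentiment examples are arbitrary illustrations:

```python
def build_few_shot(instruction, examples, max_examples=3):
    """Keep at most one example per label and cap the total count,
    favoring diversity over sheer quantity."""
    seen_labels = set()
    selected = []
    for text, label in examples:
        if label in seen_labels:
            continue  # a second example of the same kind adds little
        seen_labels.add(label)
        selected.append((text, label))
        if len(selected) >= max_examples:
            break
    shots = "\n".join(f"Input: {t}\nOutput: {l}" for t, l in selected)
    return f"{instruction}\n\n{shots}\n\nInput:"

examples = [
    ("The food was great", "positive"),
    ("Loved the service", "positive"),   # homogeneous with the first: dropped
    ("Never coming back", "negative"),
]
prompt = build_few_shot("Classify the sentiment of each input.", examples)
```

A real selection step might also deduplicate by similarity rather than by label, but the principle is the same: each retained example should earn its place.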

Myth 6: If you add a requirement to the prompt, the model will obey it

Models differ in how well they follow instructions, and the requirements in a prompt are not always fully executed. To improve the response, you may need to combine other strategies, such as switching to a more capable model or adding specific examples to the prompt.

Myth 7: Once the prompt is designed, it never needs to change

Just as a programmer's code needs maintenance after it is written, to fix bugs or accommodate new requirements, writing a prompt is not a one-off task.
In real applications, prompts frequently need tuning for individual needs or for bad cases encountered along the way. Prompt engineering is essentially a process of continuously gathering feedback and continuously optimizing.
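The feedback loop above can be sketched as a small regression harness: keep every bad case as a test and rerun all of them after each prompt revision. Here `call_model` is a hypothetical placeholder, stubbed so the example runs without a real model, and the two prompt versions are illustrative:

```python
def call_model(prompt: str, user_input: str) -> str:
    # Placeholder for a real LLM call; stubbed so the sketch is runnable.
    return "42" if "only the number" in prompt else "The answer is 42."

def passes(output: str) -> bool:
    # The spec for this task: the answer must be a bare number.
    return output.strip().isdigit()

bad_cases = ["What is 6 * 7?"]  # cases collected from earlier failures

prompt_v1 = "Answer the user's math question."
prompt_v2 = "Answer the user's math question with only the number, no words."

for prompt in (prompt_v1, prompt_v2):
    results = [passes(call_model(prompt, case)) for case in bad_cases]
    print(prompt, "->", all(results))
```

The point is not the stub but the habit: every failure becomes a permanent test, so a revision that fixes one case cannot silently break another.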

Myth 8: Prompts must be written by hand

Many platforms now support automatic prompt generation: the user describes the need, and the platform writes the prompt. Plenty of prompt templates are also available online to copy and use. So not every prompt must be written by hand.
This does not make prompt engineering unimportant, however. Only when the requirement is expressed clearly can the model generate a high-quality prompt, and the skill remains crucial because it lets us tune an auto-generated prompt to better fit actual needs.
Automated prompt writing certainly improves efficiency, but we still need the ability to tune prompts to guarantee the accuracy and applicability of the final result.

Myth 9: If the prompt works well in my own tests, it will work well online

Test results are not equivalent to online performance. In self-testing, the cases may be simple, few in number, or unrepresentative, while real online usage is more diverse and complex, so results may fall short of expectations.
For a more objective evaluation, build representative and diverse test cases covering different levels of complexity, and avoid judgments skewed by excessive optimism or pessimism.
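A lightweight way to make a test suite more representative is to tag cases by complexity and report the pass rate per tier, so an easy-cases-only suite cannot hide failures. The checker, the `fake_model` stand-in, and the cases below are all illustrative:

```python
from collections import defaultdict

test_cases = [
    {"input": "2 + 2",         "expected": "4",    "tier": "easy"},
    {"input": "17 * 23",       "expected": "391",  "tier": "medium"},
    {"input": "sum of 1..100", "expected": "5050", "tier": "hard"},
]

def fake_model(text: str) -> str:
    # Stand-in for the real model under test; fails the hard tier on purpose.
    answers = {"2 + 2": "4", "17 * 23": "391"}
    return answers.get(text, "I am not sure.")

def pass_rate_by_tier(cases, model):
    """Compare model output to the expected answer, grouped by complexity tier."""
    totals, passed = defaultdict(int), defaultdict(int)
    for case in cases:
        totals[case["tier"]] += 1
        if model(case["input"]).strip() == case["expected"]:
            passed[case["tier"]] += 1
    return {tier: passed[tier] / totals[tier] for tier in totals}

# Easy and medium pass; the hard tier surfaces the gap that an
# easy-cases-only self-test would have missed.
print(pass_rate_by_tier(test_cases, fake_model))
```

Reporting per tier rather than one aggregate number keeps a large pile of easy wins from averaging away the failures that matter.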


Myth 10: Only the prompt matters; user input does not

The quality of the prompt matters, but the content of the user's input is just as critical. When a doctor diagnoses a disease, inaccurate symptom descriptions make a correct diagnosis and prescription difficult; likewise, if the user's input is ambiguous or incomplete, even a well-written prompt struggles to produce an ideal result.
Therefore, verify the accuracy and completeness of user input. High-quality prompts and high-quality user input complement each other to bring out the model's best performance.
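Checking user input before it reaches the prompt template is cheap insurance. A minimal sketch, assuming a symptom-description use case like the doctor analogy above; the length threshold and messages are arbitrary illustrations:

```python
def validate_symptoms(user_input: str) -> list:
    """Return a list of problems with the input; an empty list means it is usable."""
    problems = []
    text = user_input.strip()
    if not text:
        problems.append("input is empty")
    elif len(text) < 10:
        problems.append("description too short to be diagnostic")
    return problems

def build_prompt(user_input: str) -> str:
    problems = validate_symptoms(user_input)
    if problems:
        # Ask the user to clarify instead of sending a doomed request.
        raise ValueError("; ".join(problems))
    return (
        f"Patient description:\n{user_input}\n\n"
        "List likely causes and sensible next steps."
    )
```

Rejecting vague input with a concrete reason gives the user a chance to supply what the model actually needs, rather than letting the model guess.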

Conclusion

Prompt engineering is the bridge for communicating with large language models, an art of asking questions. It looks simple, yet it is full of challenges in practice.
We need to deeply understand the model's capabilities and limitations and flexibly adjust prompt design to each scenario to achieve the best results. The core of prompt engineering is not a complex framework or a pile of examples, but accurately conveying the task requirements and improving model performance through continuous optimization.

Avoiding the common misconceptions and mastering the core skills of prompt engineering help us better tap the potential of large models. Attention to the quality of user input and the ability to keep tuning prompts are also keys to success. Prompt engineering demands continuous practice and reflection; only through ongoing learning and adjustment can we truly master it and get the most out of large models.

I hope this article offers you some inspiration and helps you go further on the road of prompt engineering.

