Using Prompts with LangChain

Written by Iris Vance
Updated on: June 22, 2025

Clever use of prompts makes your AI conversations more accurate and efficient.
Core content:
1. What prompts are and how they are applied in large AI models
2. The complexity of prompt design and techniques for optimizing it
3. Practical cases of prompt engineering and analysis of their effects


1. Overview

Prompts are the input we provide to an artificial intelligence model, typically keywords, questions, or instructions that guide the model to generate responses matching the user's expectations. The questions we type into Doubao, DeepSeek, and other large models can be considered simple prompts, but to really get the results we need, prompts can become quite elaborate. For example, if I want to extract a summary from a text, I can use the following prompt:



You are an assistant designed to perform text summarization tasks. Your job is to extract the key information from the original text and generate a brief, clear summary that retains its main ideas. Below I will provide the "text that needs to be summarized", and you need to return the summary to me. From the summary alone, I should be able to quickly understand the main content of the text. Text that needs to be summarized:

###

      In a televised address to the nation late Monday, Indian Prime Minister Narendra Modi said India had only "paused" its military operations against Pakistan and would retaliate for any attack "in its own way."

  Modi said he was monitoring Pakistan's every move. He also hinted at the nuclear threat that loomed over last week's escalating tensions, adding that India would not tolerate "nuclear blackmail" in any future conflict with Pakistan.

  A fragile ceasefire between the nuclear-armed nations to end the worst round of violence in recent years held over the weekend, with no overnight exchanges of fire along their border. "Throughout the night, Jammu and Kashmir and other areas along the border remained largely peaceful," the Indian military said.

  In his speech, Modi mentioned possible future negotiations, but he noted that "if we talk to Pakistan, it will only be about fighting terrorism...discussing Pakistan-occupied Kashmir."

  He claimed that the Indian government would not hesitate to use force to eliminate terrorist camps in Pakistan and called it the "new normal" in relations with neighboring Pakistan.

  The ceasefire ended days of fighting in Indian-controlled Kashmir that followed a militant attack India blamed on Pakistan, which denies involvement.

  In his televised address, Modi did not mention the United States or credit Trump for the ceasefire. Instead, he said it was Pakistan that first called for a de-escalation after Indian troops attacked the "heart" of Pakistan. U.S. President Donald Trump on Saturday claimed that the U.S. initiated diplomatic efforts to broker the ceasefire.

  Speaking at the White House on Monday, Trump said that the U.S. intervention between India and Pakistan "prevented a nuclear conflict." He said: "I think it could have been a bad nuclear war, millions of people could have died. So I'm very proud of that."

  "So when Pakistan called and said it would not indulge in any kind of terror or military adventure again, India thought about giving and gave its response," Modi pointed out.

  However, Chaudhry, Pakistan's Director General of Armed Forces Public Relations, said at a press conference that Pakistan had never taken the initiative to request a ceasefire; only after responding to India's attack did Pakistan agree to India's ceasefire request, with the mediation of the international community.

  Before the Indian prime minister's speech, Pakistan's military issued a statement saying: "No one should be in any doubt that whenever Pakistan's sovereignty is threatened and its territorial integrity is violated, we will take comprehensive and decisive retaliatory measures."

###

The results returned by DeepSeek are as follows:


Prompt engineering is the practice of guiding AI to generate high-quality, accurate, and targeted responses by carefully designing and optimizing the input (the prompt). It is a largely empirical discipline, involving careful adjustment of question formulation, keyword selection, context, and constraints to improve the effectiveness, usability, and overall quality of AI responses.

Through prompt engineering, we can make a large model generate the results we need more precisely. As programmers, however, we can go a step further: use LangChain to access the large model and combine it with prompt templates to get more accurate answers, and even achieve a degree of automation.

2. General Chat

1. Local large model
from langchain_ollama.chat_models import ChatOllama
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

output_parser = StrOutputParser()  # Parse the model output as a plain string

# Call the local deepseek-r1:7b model served by Ollama
llm = ChatOllama(model="deepseek-r1:7b", temperature=0.1)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a top artificial intelligence technology expert with a high level of writing"),
    ("user", "{input}")
])

# Chain the prompt, the model, and the output parser together
chain = prompt | llm | output_parser
print(chain.invoke({"input": "Introduce the technical principles of the large language model"}))

The returned results are as follows:
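Incidentally, the chain does not have to wait for the complete reply; LangChain runnables also support streaming. The following is a minimal sketch (my own addition, not from the original article) that streams the answer token by token, assuming the chain built above:

# Stream the answer chunk by chunk instead of waiting for the full response
# (assumes the `chain` built above: prompt | llm | output_parser)
for chunk in chain.stream({"input": "Introduce the technical principles of the large language model"}):
    print(chunk, end="", flush=True)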

2. Calling the remote large model API
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough
from langchain.prompts import PromptTemplate

model = ChatOpenAI(
    model='deepseek-chat',
    openai_api_key="<your API KEY>",
    openai_api_base='https://api.deepseek.com',
    max_tokens=1024
)

prompt_template2 = (
    "You are a document processing expert, proficient in reading comprehension and text writing, "
    "with a high level of aesthetics and text summarization ability. "
    "There is the following article {content}, please help me summarize it."
)

PROMPT = PromptTemplate(
    template=prompt_template2,
    input_variables=["content"]
)

chain = (
    {"content": RunnablePassthrough()}
    | PROMPT
    | model
    | StrOutputParser()
)

# Read the article content from a file
content = ""
try:
    # Open the file using a with statement
    with open('example.txt', 'r', encoding='utf-8') as file:
        # Read the entire contents of the file
        content = file.read()
except FileNotFoundError:
    print("File not found, please check the file path and file name.")

# Invoke the chain; RunnablePassthrough maps the raw input to {"content": ...}
result = chain.invoke(content)
print(result)

The file stores the second chapter, "The Seven Star Palace of King Lu", of The Tomb Robbers' Chronicles. As you can see, after calling the DeepSeek API, a fairly accurate text summary is returned:
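One caveat worth adding here: if the article is longer than the model's context window, it cannot be summarized in a single call. A common workaround is to split the text into chunks and summarize each chunk separately. The sketch below uses LangChain's RecursiveCharacterTextSplitter; the chunk sizes are illustrative assumptions, not values from the original article:

from langchain_text_splitters import RecursiveCharacterTextSplitter

# Split an over-long article into overlapping chunks before summarizing
splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)
chunks = splitter.split_text(content)

# Summarize each chunk with the chain defined above,
# then join the partial summaries into one text
partial_summaries = [chain.invoke(chunk) for chunk in chunks]
summary = "\n".join(partial_summaries)

For very long texts, the joined partial summaries can themselves be summarized again in a second pass.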

3. Answer based on examples

In addition to asking the large model questions directly, we can also teach it the answer style we need: give it a few concrete examples and let it answer in the same way.

from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough
from langchain.prompts import PromptTemplate, FewShotPromptTemplate

examples = [
    {
        "question": "Which is bigger, an adult blue whale or an adult elephant?",
        "answer":
            """
            Is a follow-up question needed here: Yes.
            Follow-up: How big is an adult blue whale?
            Intermediate answer: An adult blue whale is 30 meters long and weighs 130 tons.
            Follow-up: How big is an adult elephant?
            Intermediate answer: An adult elephant is 4 meters long and weighs 5 tons.
            So the final answer is: the blue whale.
            """
    },
    {
        "question": "In which year was the founder of Alibaba born?",
        "answer":
            """
            Is a follow-up question needed here: Yes.
            Follow-up: Who is the founder of Alibaba?
            Intermediate answer: The founder of Alibaba is Jack Ma.
            Follow-up: In which year was Jack Ma born?
            Intermediate answer: Jack Ma was born on September 10, 1964.
            So the final answer is: 1964.
            """
    }
]

example_prompt = PromptTemplate(
    input_variables=["question", "answer"],
    template="Question: {question}\n{answer}",
)

prompt = FewShotPromptTemplate(
    # The examples to show the model
    examples=examples,
    # The prompt template used to format each example
    example_prompt=example_prompt,
    # The suffix appended after the examples
    suffix="Question: {question}",
    # Input variables
    input_variables=["question"]
)

model = ChatOpenAI(
    model='deepseek-chat',
    openai_api_key="<your API KEY>",
    openai_api_base='https://api.deepseek.com',
    max_tokens=1024
)

chain = (
    {"question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)

question = "In which year was the founder of Tencent born?"
result = chain.invoke(question)
print(result)

# Output:
# Question: In which year was the founder of Tencent born?
# Is a follow-up question needed here: Yes.
# Follow-up: Who is the founder of Tencent?
# Intermediate answer: The founder of Tencent is Ma Huateng.
# Follow-up: In which year was Ma Huateng born?
# Intermediate answer: Ma Huateng was born on October 29, 1971.
# So the final answer is: 1971.
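A quick way to check exactly what the assembled few-shot prompt looks like before it is sent to the model is to render it yourself (a small debugging tip, assuming the prompt object built above):

# Render the final few-shot prompt to see what the model will actually receive
print(prompt.format(question="In which year was the founder of Tencent born?"))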

As you can see, the large model gives its answer in the format required by our examples. This is somewhat similar in spirit to RAG (retrieval-augmented generation), which I will cover in a dedicated article later. If there are many examples, it is not practical to pass all of them to the model; instead, we can use an example selector to find the example that best matches our question. The code is as follows:

# Using the example selector
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain_community.vectorstores import FAISS
from langchain_ollama import OllamaEmbeddings
# A Chroma vector store could be used instead of FAISS:
# from langchain_chroma import Chroma

# Use the semantic similarity example selector
example_selector = SemanticSimilarityExampleSelector.from_examples(
    # The candidate examples
    examples,
    # The embedding model
    OllamaEmbeddings(model="deepseek-r1:32b"),
    # The vector store class
    FAISS,
    # The maximum number of examples to select
    k=1,
)

# Select the example most similar to the input question
question = "In which year was the founder of Tencent born?"
selected_examples = example_selector.select_examples({"question": question})
print(f"Most similar examples to: {question}")
for example in selected_examples:
    print("\n")
    for k, v in example.items():
        print(f"{k}: {v}")

# Output:
# Most similar examples to: In which year was the founder of Tencent born?
# question: In which year was the founder of Alibaba born?
# answer: Is a follow-up question needed here: Yes.
# Follow-up: Who is the founder of Alibaba?
# Intermediate answer: The founder of Alibaba is Jack Ma.
# Follow-up: In which year was Jack Ma born?
# Intermediate answer: Jack Ma was born on September 10, 1964.
# So the final answer is: 1964.

As you can see, the example closest to the question "In which year was the founder of Tencent born?" is "In which year was the founder of Alibaba born?".
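The selected example can also be wired back into the few-shot flow automatically: FewShotPromptTemplate accepts an example_selector in place of a fixed examples list, so every question is paired only with its most similar example. A minimal sketch, reusing the objects defined above:

# Build a few-shot prompt that picks its examples dynamically:
# pass the selector instead of a fixed examples list
dynamic_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    suffix="Question: {question}",
    input_variables=["question"]
)

# Each invocation now includes only the most similar example(s)
chain = (
    {"question": RunnablePassthrough()}
    | dynamic_prompt
    | model
    | StrOutputParser()
)
print(chain.invoke("In which year was the founder of Tencent born?"))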

4. Automatically executing the model's answer

Another interesting use of prompts is to have the large model return a piece of code, which the system then executes automatically to obtain the result we need. This breaks through the limitation that ordinary large language models only return text. Here I use a prompt that asks the model to return a piece of code, then execute that code directly to get the desired result.

def generate_chart(df):
    # Draw a statistical chart from the DataFrame
    df = df.fillna(value="None")
    prompt_template2 = (
        "You are a top data visualization expert, proficient in Python data visualization, "
        "with a high level of aesthetics. Use matplotlib for data visualization. "
        "There is the following data {df}. Please choose a suitable chart for visualization. "
        "The color scheme should be attractive. Save the chart as chart.png; it does not need "
        "to be displayed. Give me the code only, with no other content. "
        "The chart must be able to display Chinese."
    )
    PROMPT = PromptTemplate(
        template=prompt_template2,
        input_variables=["df"]
    )
    chain = (
        {"df": RunnablePassthrough()}
        | PROMPT
        | model
        | StrOutputParser()
    )
    # Invoke the chain; RunnablePassthrough maps the raw input to {"df": ...}
    result = chain.invoke(df)
    # Strip the markdown code fences that wrap the model's reply
    result = result.replace("```python", "")
    result = result.replace("```", "")
    print(result)
    exec(result)
    return 'chart.png'

The data df here is as follows:

data = {'Development Zone Code': ['S327012', 'S327011'], 'Development Zone Name': ['Economic Development Zone 1', 'Economic Development Zone 2'], 'Number of Plots': [320, 27], 'Total Area (hectares)': [438.332284, 66.609016]}

The code returned by the large model is as follows:

import matplotlib.pyplot as plt
import pandas as pd

data = {'Development Zone Code': ['S327012', 'S327011'],
        'Development Zone Name': ['Economic Development Zone 1', 'Economic Development Zone 2'],
        'Number of Plots': [320, 27],
        'Total Area (hectares)': [438.332284, 66.609016]}
df = pd.DataFrame(data)

plt.rcParams['font.sans-serif'] = ['SimHei']  # Enable Chinese character display
plt.rcParams['axes.unicode_minus'] = False

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6))
colors = ['#4C72B0', '#DD8452']

# Bar chart of the number of plots
ax1.bar(df['Development Zone Name'], df['Number of Plots'], color=colors, width=0.6)
ax1.set_title('Comparison of the number of plots in the development zones', pad=20, fontsize=14)
ax1.set_ylabel('Number of plots', fontsize=12)
ax1.grid(axis='y', linestyle='--', alpha=0.7)

# Bar chart of the total area
ax2.bar(df['Development Zone Name'], df['Total Area (hectares)'], color=colors, width=0.6)
ax2.set_title('Comparison of the total area of the development zones (hectares)', pad=20, fontsize=14)
ax2.set_ylabel('Area (hectares)', fontsize=12)
ax2.grid(axis='y', linestyle='--', alpha=0.7)

plt.tight_layout()
plt.savefig('chart.png', dpi=300, bbox_inches='tight')
plt.close()

This code is then executed automatically via Python's exec() function to produce the visualization:

As you can see, the code returned by the large model can be executed directly and automatically, with good display results. This approach is useful in everyday development well beyond chart drawing: many intermediate steps can be generated by the large model automatically, and we only need to consume the intermediate result.
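One word of caution that the code above glosses over: exec() runs whatever the model returns with the full privileges of the current interpreter. A slightly safer pattern is to write the generated code to a file and run it in a separate process with a timeout. The sketch below is my own illustration (the file name generated_chart.py is hypothetical), not part of the original workflow:

import subprocess
import sys

def run_generated_code(code: str, timeout: int = 30) -> bool:
    # Write the model-generated code to a script and run it in a separate
    # interpreter, so a crash or hang cannot take down the main process;
    # the timeout guards against infinite loops.
    with open("generated_chart.py", "w", encoding="utf-8") as f:
        f.write(code)
    try:
        completed = subprocess.run(
            [sys.executable, "generated_chart.py"],
            capture_output=True, text=True, timeout=timeout
        )
        if completed.returncode != 0:
            print("Generated code failed:", completed.stderr)
            return False
        return True
    except subprocess.TimeoutExpired:
        print("Generated code timed out.")
        return False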

There are endless ways to use prompts with large models, many of them interesting and creative. If we pay attention and "play" with them more, we will certainly discover more interesting uses.