AutoGen Getting Started Manual: Build Your AI Agent Pipeline

Written by
Clara Bennett
Updated on: July 3, 2025
Recommendation

Explore the AutoGen framework and master new skills in AI agent cooperation.

Core content:
1. Basic concepts and main features of the AutoGen framework
2. How to use AutoGen to implement multi-agent dialogue LLM applications
3. Agent definition in AutoGen and introduction to its components


Part 1

About AutoGen


AutoGen is an open source programming framework for building AI agents and facilitating cooperation among multiple agents to solve tasks.


It features powerful customizable and conversational agents that integrate LLMs (Large Language Models), tools, and humans through automated chats.


By automating chats between multiple LLM agents, developers can easily engage them to perform tasks together, either autonomously or based on human feedback, including tasks that require using tools through code.



In the figure above, AutoGen uses multi-agent dialogue to implement complex LLM-based workflows. (Left) AutoGen agents can be customized to be based on LLMs, tools, people, or even a combination of them. (Top right) Agents can solve tasks through dialogue. (Bottom right) The framework supports many other complex dialogue models.


Main Features


AutoGen makes it easy to build LLM applications based on multi-agent dialogues with minimal effort.


It supports various dialogue modes to adapt to complex workflows.


AutoGen provides enhanced LLM inference, including API unification and caching, as well as advanced usage patterns such as error handling, multi-configuration handling, context programming, and other utilities.
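For instance, a minimal sketch of multi-configuration handling and caching (the environment-variable names and model choices below are illustrative, not prescribed by AutoGen):

```python
import os
from autogen import ConversableAgent

# A config_list with several entries: AutoGen falls back to the next entry
# if a request against the current one fails (multi-configuration handling).
config_list = [
    {"model": "gpt-4o-mini", "api_key": os.getenv("OPENAI_API_KEY")},
    {"model": "gpt-3.5-turbo", "api_key": os.getenv("OPENAI_API_KEY")},
]

agent = ConversableAgent(
    "helper",
    # cache_seed enables caching of LLM results: identical requests with the
    # same seed are served from the local cache instead of calling the API again.
    llm_config={"config_list": config_list, "cache_seed": 42},
    human_input_mode="NEVER",
)
```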


Quick installation


Python version >= 3.8, < 3.13 is required. Install with pip as follows:


pip install autogen-agentchat~=0.2



Part 2

Related concepts


Agent


While there are many definitions of an agent, in AutoGen an agent is an entity that can send messages, receive messages, and generate replies through models, tools, human input, or a combination of these.


Agents can be built from models (such as the large language model GPT-4), code executors (such as IPython kernels), humans, or a combination of these components.



Taking ConversableAgent as an example, it supports the following components:


  • A list of LLMs

  • A code executor

  • A function and utility executor

  • A component for human involvement


You can turn each component on or off and customize it to fit the needs of your application. For advanced users, you can add additional components to the agent using register_reply.
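As a rough sketch of that advanced path (the logging-style reply function below is invented for illustration; its signature follows the reply functions used by AutoGen 0.2):

```python
from autogen import Agent, ConversableAgent

def log_incoming(recipient, messages=None, sender=None, config=None):
    # A registered reply function receives the conversation so far and returns
    # (final, reply). Returning final=False lets the remaining components
    # (LLM, code executor, human input) still produce the actual reply.
    print(f"{recipient.name} received {len(messages)} message(s) from {sender.name}")
    return False, None

agent = ConversableAgent("chatbot", llm_config=False, human_input_mode="NEVER")
# Trigger the custom reply function for messages coming from any Agent.
agent.register_reply([Agent, None], log_incoming)
```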


There are many different definitions of an agent. When building AutoGen, we looked for the most general concept that could cover all of these different types of definitions. To do this, we needed to think about the minimum set of concepts required.


In AutoGen, we consider an agent to be an entity that can act on behalf of human intent: it can send messages, receive messages, take actions, generate replies, and interact with other agents. We consider this the minimum set of capabilities an agent needs. Agents can perform actions and generate responses backed by different kinds of backends. For example:


  • Some agents can use AI models to generate responses;

  • Other agents can generate responses by calling tools;

  • There are also agents that can use human input as a way to reply to other agents.


Furthermore, you can create agents that mix these different backends, or more complex agents that solve problems through internal multi-agent conversations while still appearing to other agents as a single communicating entity.


The following example builds a ConversableAgent that turns on the GPT-4o-mini model component and turns off other components.


import os
from dotenv import load_dotenv
from autogen import ConversableAgent

load_dotenv()
api_key = os.getenv("API_KEY")
api_version = os.getenv("API_VERSION")
base_url = os.getenv("BASE_URL")

# Create an instance of ConversableAgent
agent = ConversableAgent(
    "chatbot",
    llm_config={"config_list": [{"model": "gpt-4o-mini", "temperature": 0.9, "api_type": "azure", "api_key": api_key, "base_url": base_url, "api_version": api_version}]},
    code_execution_config=False,  # Turn off code execution; it is off by default.
    function_map=None,            # No registered functions; None by default.
    human_input_mode="NEVER",     # Never ask for human input.
)


You can ask questions to the agent built above and use the generate_reply method to get the agent's answer to the question.


agent.generate_reply(messages=[{"content": "Tell me a fairy tale.", "role": "user"}])
'Once upon a time, in a distant kingdom, there lived a beautiful and kind princess named Xiaoli. Princess Xiaoli loved nature very much, especially flowers and animals. The garden of the kingdom was full of colorful flowers, and the princess would play in the garden and play with the small animals every day. \n\nHowever, on the other side of the kingdom lived an evil witch who was very jealous of Princess Xiaoli's beauty and happiness. The witch decided to teach the princess a lesson, and she cast an evil curse: every time the sun set during the day, the princess's garden would be shrouded in thick darkness, the flowers would wither, and the small animals would flee. \n\nWhen Princess Xiaoli learned about the curse, her heart was filled with worries. She decided to find a way to break the curse. So the princess embarked on a journey, she walked through the mountains and through the woods, and finally came to the witch's castle. \n\nThe princess bravely knocked on the door of the castle. When the witch saw Xiaoli, she smiled coldly and asked, "Princess, what are you doing here?" Xiaoli answered fearlessly, "I am here to lift your curse and bring light back to my garden and the little animals."\n\nThe witch was shocked by the princess's courage, but she did not intend to let Xiaoli go easily. The witch asked three difficult questions, and only if the answers were correct could the curse be lifted. Xiaoli did not back down, and thought carefully about each difficult question. With her wisdom and understanding of nature, she finally successfully answered all the difficult questions. \n\nSeeing this, the witch was moved. She realized that although she had powerful magic, she did not have a pure heart like Xiaoli. So the witch decided to give up her malice, lift the curse, and give the princess a magic flower that could bring eternal light. \n\nPrincess Xiaoli returned to her kingdom with the magic flower. That night, the princess placed the flower in the center of the garden. Suddenly, the whole garden was shrouded in warm light, the flowers bloomed again, and the little animals returned here one after another. \n\nFrom then on, the princess and the witch became friends, and the witch used her magic to help the development of the kingdom. Princess Li continued to guard her garden and share happiness and love with all creatures in nature. \n\nThis story tells us that courage and wisdom can defeat evil, and love and friendship can make the world a better place. '


Roles and Conversations


In AutoGen, you can assign roles to agents and have them engage in conversations or communicate with each other. A conversation is a series of messages exchanged between agents. You can then use these conversations to make progress on a task. For example, in the following example, we assign different roles to two agents by setting the agent's system_message.


degang = ConversableAgent(
    "Guodegang",
    system_message="You are the crosstalk artist Guo Degang. You are good at using various rhetorical techniques vividly and weaving in current events to take the satire of crosstalk to its extreme.",
    llm_config={"config_list": [{"model": "gpt-4o-mini", "temperature": 0.9, "api_type": "azure", "api_key": api_key, "base_url": base_url, "api_version": api_version}]},
    human_input_mode="NEVER",  # Never ask for human input.
)

yuqian = ConversableAgent(
    "Yuqian",
    system_message="You are the crosstalk artist Yu Qian. Your stage presence is elegant, steady, and natural, complementing Guo Degang's playful, lively, and silly comedic style.",
    llm_config={"config_list": [{"model": "gpt-4o-mini", "temperature": 0.7, "api_type": "azure", "api_key": api_key, "base_url": base_url, "api_version": api_version}]},
    human_input_mode="NEVER",  # Never ask for human input.
)


yuqian.initiate_chat(degang, message="Guo Degang, Lunar New Year is almost here. Let's do a crosstalk routine for everyone on the topic of New Year shopping. I'll start: consumption downgraded this year, so before buying any New Year goods I spent days studying the national consumer-subsidy policies and planning my strategy, yet I didn't buy a single item. Don't you think these national subsidies really get in the way?", role="user", max_turns=2)
Yuqian (to Guodegang):

Guo Degang, Lunar New Year is almost here. Let's do a crosstalk routine for everyone on the topic of New Year shopping. I'll start: consumption downgraded this year, so before buying any New Year goods I spent days studying the national consumer-subsidy policies and planning my strategy, yet I didn't buy a single item. Don't you think these national subsidies really get in the way?

--------------------------------------------------------------------------------
Guodegang (to Yuqian):

Oh my, consumption downgrade, now that's an explosive topic! Look at everyone studying the national consumer subsidies as if they were cramming for the college entrance exam, drawing up strategies, terrified of missing a discount. And after all that research, not a single New Year item bought. You've really worked hard for nothing!

New Year shopping used to be pure joy, but now it's like running a marathon: you run the whole way doing sums in your head, silently muttering, "Can this subsidy actually be used?" One slip and you feel as deflated as someone who couldn't buy Spring Festival couplets and doesn't even know which one to paste up.

And these subsidy policies: the better they're advertised, the more they turn out to be thin air in practice. They say "buy one, get one free", but what you buy is the one, and what you get is a mood of "waiting"! Are we buying New Year goods, or buying a feeling?

That said, New Year goods are really a psychological comfort: buy a little, celebrate the New Year, and your heart feels at ease. You buy some candy and feel like a kid at New Year again, then you look at the price and think, "Goodness, candy costs more than gold!" These days New Year goods are no bargain, and with the consumption downgrade you really have to weigh every purchase!

But we Chinese celebrate the New Year for the liveliness! Even when nothing is going on we want a bit of nonsense, buy something fun for the holiday, and instead we end up as "victims of the national subsidy". Are we supposed to study consumer psychology just to get through the New Year? Better to simply have a "minimalist New Year": keep it simple and casual, eat some dumplings, drink a little wine, and that's that!

So, everyone, go ahead and celebrate! Don't get swept away by this flood of consumption downgrade. Put down your phones, buy a few simple, fun things, forget about the subsidies, and remember that a happy New Year is what matters most!

--------------------------------------------------------------------------------
Yuqian (to Guodegang):

Yu Qian: Ah, Guo Degang, now that you mention it, this subsidy business really is a "trap"! A few days ago I was browsing New Year goods online too, and when I clicked in, the prices were about the same as what I paid last time, with hardly any discount. I was so disappointed it felt like going through the New Year without eating dumplings!

People nowadays don't just buy New Year goods, they have to study the subsidy policies; they've genuinely turned the New Year into an "exam". They shop ever so carefully, terrified of overpaying, and the more they buy, the more anxious they get. If we put that much brainpower into the shopping itself, I reckon we'd be living a "prosperous New Year" ahead of schedule!

Besides, our traditional New Year goods, like candy, nuts, and dried fruit, were always about the thought that counts, yet now they've become "luxury goods". You buy nuts and have to calculate the price of every single one. Honestly, even the New Year has turned into penny-pinching! I really wish we could go back to the days when you just bought a few things, felt at ease, and ate happily.

But seriously, the most important part of the New Year is the liveliness; don't let this consumption-downgrade business spoil it. We can keep the New Year simple: buy a few home-made treats, sit around together, chat, watch the Spring Festival Gala, and feel wonderful! Whatever the subsidies do, we have to hold on to the New Year flavor; that is what the holiday is all about.

So my advice to everyone is: stop agonizing over those subsidies, buy something simple, and have a happy New Year. That's the way to go! Don't you agree, Guo Degang?

--------------------------------------------------------------------------------
Guodegang (to Yuqian):

Guo Degang: Oh, Yu Qian, I feel even more deeply about what you said! You see, hasn't this subsidy turned into a "pitfall"? Everyone researches it, only to find it's just a trick, and that feeling of loss is exactly like not having dumplings during the Chinese New Year! I tell you, eating dumplings at the New Year is a big deal for us Chinese people. Without dumplings, the New Year has no flavor!

You said people now buy New Year goods online; it's like walking into a maze. Once inside, all you see are prices and all kinds of "policies", and you think: "Oh, am I buying New Year goods or participating in a financial investment?" Before placing an order you have to consider all sorts of factors, and the more you think, the more troublesome it gets. In the end you simply give up and return empty-handed. What a blow!

And traditional New Year goods, like nuts and candy, were originally meant for celebration, but now it seems you need a financial statement to buy them; even buying a bit of happiness has a threshold! Calculating the price of every nut: what is this, running a shop or celebrating the New Year? If this keeps up, we'll have to bring an accountant along for the holidays. How embarrassing, needing someone to audit the candy we buy!

Speaking of the New Year atmosphere, that's the very soul of our celebration, and we can't let these consumption worries mess it up. Really, we should keep the New Year simple and return to that sense of intimacy: make some good food at home, sit around chatting, watch the Spring Festival Gala. That's the true essence of the New Year! However this subsidy turns out, the New Year flavor and reunion in our hearts cannot be taken away.

So, everyone, stop fretting over those subsidy policies. Buying something simple and having a happy New Year is the kingly way! As we say, "a carefree heart makes a contented body": the most important thing at New Year is the mood. Money matters, but don't let it spoil our joy! Only then is the New Year meaningful, right?

--------------------------------------------------------------------------------
ChatResult(chat_id=None, chat_history=[...], summary='Guo Degang: Oh, Yu Qian, I feel even more deeply about what you said! ... Only in this way can the New Year be meaningful, right?', cost={'usage_including_cached_inference': {'total_cost': 0.007123949999999999, 'gpt-4o-mini-2024-07-18': {'cost': 0.007123949999999999, 'prompt_tokens': 24429, 'completion_tokens': 5766, 'total_tokens': 30195}}, 'usage_excluding_cached_inference': {'total_cost': 0.007123949999999999, 'gpt-4o-mini-2024-07-18': {'cost': 0.007123949999999999, 'prompt_tokens': 24429, 'completion_tokens': 5766, 'total_tokens': 30195}}}, human_input=[])



Part 3

Multi-agent dialogue


AutoGen is a multi-agent dialogue framework: it supports multiple agents that interact and collaborate to solve complex tasks.


It simplifies the orchestration, automation, and optimization of complex LLM workflows and maximizes the performance of LLM models, making it easy to build LLM applications based on multi-agent dialogues.


Agents


AutoGen abstracts and implements a conversational agent that solves tasks through dialogue. The agent has the following notable features:


  • Conversational: AutoGen's agents are conversational, which means that any agent can send messages to and receive messages from other agents to initiate or continue a conversation.

  • Customizability: AutoGen’s agents can be customized to integrate large language models (LLMs), humans, tools, or a combination of these.


The following figure shows several built-in agents in AutoGen:



AutoGen first declares an Agent protocol that defines an agent's basic attributes and methods:


  • name attribute: every agent must have a name.

  • description attribute: every agent must have a short self-description explaining what it can do and how it behaves.

  • send method: sends a message to another agent.

  • receive method: receives a message from another agent.

  • generate_reply method: generates a reply based on the received messages; it can be executed synchronously or asynchronously.


A class with the above attributes and methods is considered an Agent. LLMAgent adds a system_message attribute to Agent, which gives the large model an identity and role to play when generating replies in conversations between agents.
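To make the contract concrete, here is a minimal, purely illustrative sketch of an object with the attributes and methods listed above; it is not AutoGen's actual implementation:

```python
class EchoAgent:
    """Illustrative sketch only, not AutoGen's real Agent class."""

    def __init__(self, name: str, description: str = ""):
        self.name = name                # every agent needs a name
        self.description = description  # what the agent can do
        self.inbox: list[dict] = []

    def send(self, message: dict, recipient: "EchoAgent") -> None:
        # Deliver a message to another agent.
        recipient.receive(message, sender=self)

    def receive(self, message: dict, sender: "EchoAgent") -> None:
        # Store the incoming message; a fuller agent would decide whether to reply.
        self.inbox.append(message)

    def generate_reply(self, messages: list[dict]) -> str:
        # Produce a reply from the received messages (here: just echo the last one).
        return f"{self.name} heard: {messages[-1]['content']}"

a, b = EchoAgent("alice"), EchoAgent("bob")
a.send({"role": "user", "content": "hello"}, b)
print(b.generate_reply(b.inbox))  # -> "bob heard: hello"
```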


ConversableAgent mainly implements the loop of receiving messages from other agents, calling generate_reply to produce a reply, and then sending it to the specified recipient agent.


Finally, AutoGen implements three commonly used agent types on top of ConversableAgent: AssistantAgent, UserProxyAgent, and GroupChatManager.


It is worth mentioning that ConversableAgent also supports tool use. Pseudocode:


def my_tool_function(param1, param2):
    # Implement the tool's functionality
    return result

agent = ConversableAgent(...)
agent.register_function({"my_tool_function": my_tool_function})


Example: Two Conversational Agents


We created an AssistantAgent named assistant to act as an assistant, and a UserProxyAgent named user_proxy to act as a proxy for a human user to complete a simple task together.


import os
from autogen import AssistantAgent, UserProxyAgent
from autogen.coding import LocalCommandLineCodeExecutor

config_list = [{"model": "gpt-4o-mini", "temperature": 0.9, "api_type": "azure", "api_key": api_key, "base_url": base_url, "api_version": api_version}]

# create an AssistantAgent instance named "assistant" with the LLM configuration.
assistant = AssistantAgent(name="assistant", llm_config={"config_list": config_list})

# create a UserProxyAgent instance named "user_proxy" with a local command-line code executor.
code_executor = LocalCommandLineCodeExecutor(work_dir="coding")
user_proxy = UserProxyAgent(name="user_proxy", code_execution_config={"executor": code_executor})


# the assistant receives a message from the user, which contains the task description
user_proxy.initiate_chat(
    assistant,
    message="""Plot a chart of NVDA and TESLA stock price change YTD.""",
)
user_proxy (to assistant):

Plot a chart of NVDA and TESLA stock price change YTD.

--------------------------------------------------------------------------------
assistant (to user_proxy):

To plot a chart of NVDA (Nvidia) and TSLA (Tesla) stock price changes year-to-date (YTD), we will follow these steps:

1. Install the necessary libraries (`yfinance` for fetching stock data and `matplotlib` for plotting).
2. Fetch the YTD stock prices for NVDA and TSLA.
3. Plot the data using `matplotlib`.

Let's start with the first step by installing the required libraries. Please execute the following code:

```sh
# filename: install_packages.sh
pip install yfinance matplotlib
```

Once the installation is complete, we'll proceed to fetch the stock data and plot the chart. Here's the code for that step:

```python
# filename: plot_stock_chart.py
import yfinance as yf
import matplotlib.pyplot as plt
from datetime import datetime

# Define the stocks and the date range
tickers = ['NVDA', 'TSLA']
start_date = datetime(datetime.now().year, 1, 1)

# Fetch the stock data
data = yf.download(tickers, start=start_date)

# Plot the adjusted close prices
plt.figure(figsize=(10, 5))
plt.plot(data['Adj Close']['NVDA'], label='Nvidia (NVDA)')
plt.plot(data['Adj Close']['TSLA'], label='Tesla (TSLA)')
plt.title('YTD Stock Prices of NVDA and TSLA')
plt.xlabel('Date')
plt.ylabel('Adjusted Close Price ($)')
plt.legend()
plt.grid(True)
plt.tight_layout()

# Save the plot
plt.savefig('ytd_stock_prices.png')
plt.show()
```

Please execute the second code block after successfully installing the required packages. 

--------------------------------------------------------------------------------
user_proxy (to assistant):

Traceback (most recent call last): File "/home/hj/workspaces/dc/autogen-v0.2/coding/plot_stock_chart.py", line 14, in <module> plt.plot(data['Adj Close']['NVDA'], label='Nvidia (NVDA)') File "/usr/local/lib/python3.10/site-packages/pandas/core/frame.py", line 4101, in __getitem__ return self._getitem_multilevel(key) File "/usr/local/lib/python3.10/site-packages/pandas/core/frame.py", line 4159, in _getitem_multilevel loc = self.columns.get_loc(key) File "/usr/local/lib/python3.10/site-packages/pandas/core/indexes/multi.py", line 3040, in get_loc loc = self._get_level_indexer(key, level=0) File "/usr/local/lib/python3.10/site-packages/pandas/core/indexes/multi.py", line 3391, in _get_level_indexer idx = self._get_loc_single_level_index(level_index, key) File "/usr/local/lib/python3.10/site-packages/pandas/core/indexes/multi.py", line 2980, in _get_loc_single_level_index return level_index.get_loc(key) File "/usr/local/lib/python3.10/site-packages/pandas/core/indexes/base.py", line 3812, in get_loc raise KeyError(key) from err KeyError: 'Adj Close'

--------------------------------------------------------------------------------
assistant (to user_proxy):

The error indicates that `data['Adj Close']` is not accessible in the way we attempted. This can happen if the data returned from Yahoo Finance is not structured as expected. Let's take an alternative approach to ensure we correctly extract the adjusted close prices.

We'll modify the code to check the structure of the returned data and access the prices correctly. Please execute the following updated code:

```python
# filename: plot_stock_chart.py
import yfinance as yf
import matplotlib.pyplot as plt
from datetime import datetime

# Define the stocks and the date range
tickers = ['NVDA', 'TSLA']
start_date = datetime(datetime.now().year, 1, 1)

# Fetch the stock data
data = yf.download(tickers, start=start_date)

# Check the structure of the data
print(data.head())

# Plot the close prices
plt.figure(figsize=(10, 5))
for ticker in tickers:
    plt.plot(data['Close'][ticker], label=f'{ticker} Close Price')

plt.title('YTD Stock Prices of NVDA and TSLA')
plt.xlabel('Date')
plt.ylabel('Close Price ($)')
plt.legend()
plt.grid(True)
plt.tight_layout()

# Save the plot
plt.savefig('ytd_stock_prices.png')
plt.show()
```

In this version, we print the head of the data to inspect its structure, and then we access the `Close` prices instead of the `Adj Close`. Please execute the code again.

--------------------------------------------------------------------------------
ChatResult(chat_id=None, chat_history=[...], summary="The error indicates that `data['Adj Close']` is not accessible in the way we attempted. ... Please execute the code again.", cost={'usage_including_cached_inference': {'total_cost': 0.0006329999999999999, 'gpt-4o-mini-2024-07-18': {'cost': 0.0006329999999999999, 'prompt_tokens': 1632, 'completion_tokens': 647, 'total_tokens': 2279}}, 'usage_excluding_cached_inference': {'total_cost': 0.0006329999999999999, 'gpt-4o-mini-2024-07-18': {'cost': 0.0006329999999999999, 'prompt_tokens': 1632, 'completion_tokens': 647, 'total_tokens': 2279}}}, human_input=['Traceback (most recent call last): ... KeyError: \'Adj Close\'', 'exit'])


from IPython.display import Image
Image("ytd_stock_prices.png")



In the example above, the assistant agent writes Python code for the given task, and the user proxy agent executes the generated code, either automatically or after manual confirmation, depending on its settings. The general process is as follows:



Support various conversation modes


1. Dialogue with different levels of autonomy and modes of human involvement


AutoGen supports fully autonomous dialogue after an initialization step. It can also enable human-in-the-loop problem solving by configuring the level and mode of human involvement (e.g., setting human_input_mode to ALWAYS), which is useful in the many applications where human participation is required or expected.
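For illustration, a human-in-the-loop agent can be configured roughly as follows (the agent name is a placeholder):

```python
from autogen import ConversableAgent

# human_input_mode controls how much autonomy the agent has:
#   "NEVER"     - never ask the human; fully autonomous after initiation
#   "TERMINATE" - ask the human only when a termination condition is met
#   "ALWAYS"    - ask the human at every turn (human-in-the-loop)
human_proxy = ConversableAgent(
    "human_proxy",
    llm_config=False,            # no LLM: this agent only relays human input
    human_input_mode="ALWAYS",
)
```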


2. Static and dynamic dialogues


A static dialogue follows a predefined topology and suits scenarios with a clear interaction pattern. AutoGen also naturally supports dynamic dialogue by combining programmatic and natural-language-driven conversation control: the agent topology adjusts to the actual flow of the conversation and to different inputs, which suits complex scenarios where the interaction pattern cannot be defined in advance.


Registered auto-reply functions


With the pluggable auto-reply feature, an agent can decide, based on the content and context of the current message, whether and how to reply or to start conversations with other agents. For example:

  • Hierarchical Dialogue: As implemented in OptiGuide.

  • Dynamic group chat: a special form of hierarchical conversation in which a reply function registered with the group chat manager broadcasts messages and decides the next speaker in the group chat (see the sketch after this list).

  • Finite-state-machine graphs: a special form of dynamic group chat with restrictions on speaker transitions. By supplying a directed transition graph, the user can specify which speaker transitions are allowed or prohibited.

  • Nested dialogue: as in the nested structure of conversational chess.
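As referenced above, a dynamic group chat can be set up roughly as follows, reusing the config_list, assistant, and user_proxy from the example in Part 3 (the critic agent and the starting message are invented for illustration):

```python
from autogen import AssistantAgent, GroupChat, GroupChatManager

# A second assistant (its name and role are illustrative) to make the group chat useful.
critic = AssistantAgent(
    name="critic",
    system_message="You review and critique the plan produced by the other agents.",
    llm_config={"config_list": config_list},
)

# The GroupChatManager's registered reply function broadcasts each message
# to all members and selects the next speaker dynamically.
groupchat = GroupChat(agents=[user_proxy, assistant, critic], messages=[], max_round=6)
manager = GroupChatManager(groupchat=groupchat, llm_config={"config_list": config_list})

# Start the group chat through the manager instead of a single agent.
user_proxy.initiate_chat(manager, message="Plot NVDA and TSLA YTD prices, then review the chart code.")
```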


LLM-based function calling


LLM-based function calling is another approach, in which the LLM decides, at each inference step, whether to call a specific function based on the conversation state. This approach also supports dynamic multi-agent dialogue. For example, in a multi-user math problem-solving scenario, student assistants can automatically seek expertise through function calls, achieving efficient dynamic collaboration.
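A rough sketch of this pattern, reusing the assistant and user_proxy agents from Part 3 (the ask_expert tool below is invented for illustration):

```python
from typing import Annotated

# Illustrative tool: the function and its name are made up for this sketch.
def ask_expert(topic: Annotated[str, "The topic to consult an expert about"]) -> str:
    return f"Expert notes on {topic}: ..."

# The LLM-backed assistant learns the tool's schema and may decide to call it
# on any turn; the user proxy is the agent that actually executes the call.
assistant.register_for_llm(
    name="ask_expert",
    description="Consult a domain expert on a given topic.",
)(ask_expert)

user_proxy.register_for_execution(name="ask_expert")(ask_expert)
```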