Running OpenManus on a local Ollama: tutorial plus some common pitfalls

Written by Audrey Miles
Updated on: July 9, 2025

This article teaches you how to deploy Ollama and OpenManus locally and how to avoid common pitfalls.

Core content:
1. How to correctly install the conda environment and pull the Ollama models
2. How to point OpenManus at the Ollama model's API
3. FAQs and solutions

Yang Fangxian, founder of 53AI and Tencent Cloud Most Valuable Expert (TVP):
I see many people fall into the same traps when running OpenManus on a local Ollama, so I made a short video, plus this written version, to sort a few things out.

When installing the conda environment, follow the official README; don't mess it up by skipping the documentation.
Official Chinese README: https://github.com/mannaandpoem/OpenManus/blob/main/README_zh.md
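For reference, the conda route in the README looked like this at the time of writing (defer to the README if it has changed):
conda create -n open_manus python=3.12
conda activate open_manus
git clone https://github.com/mannaandpoem/OpenManus.git
cd OpenManus
pip install -r requirements.txt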
After installing the conda environment, we need to pull the Ollama models. Because of earlier settings on my machine, I configure Ollama for parallel processing and multi-model loading. My commands (on Windows) are:
set OLLAMA_NUM_PARALLEL=
set OLLAMA_MAX_LOADED_MODELS=
You can set these or skip them, according to your needs.
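As a minimal sketch of what those settings look like with values filled in (the numbers below are my assumption, not from the original; size them to your hardware), set them in Windows cmd before starting the server:
set OLLAMA_NUM_PARALLEL=2
set OLLAMA_MAX_LOADED_MODELS=2
ollama serve
On Linux/macOS the equivalents are export OLLAMA_NUM_PARALLEL=2 and export OLLAMA_MAX_LOADED_MODELS=2. Either way, the variables must be visible to the Ollama server process, not just the client.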
Then you can pull the models. I chose qwen2.5-7b as the language model and llava as the vision model.
When choosing a language model, you need one that supports tool calling (function calls). Ollama's official site clearly lists which models support this:
https://ollama.com/search?c=tools
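If your Ollama build supports it, you can also check tool support locally; recent versions print a capabilities section in the model details (the command is standard Ollama, but the exact output format varies by version):
ollama show qwen2.5:7b
Look for "tools" among the listed capabilities; if it is missing, pick a different model from the link above.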
Commands to pull the models:
ollama run qwen2.5:7b
ollama run llava
If this is your first time pulling a model, it may feel a little slow, because the model has to be downloaded from the Internet to your machine.
Seeing the "Send a message" prompt indicates that the local model has started successfully. If you have not changed anything, the default port is 11434 on localhost.
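You can also confirm that the OpenAI-compatible endpoint OpenManus will talk to is reachable; this is standard Ollama behavior, independent of OpenManus:
curl http://127.0.0.1:11434/v1/models
The JSON reply should list qwen2.5:7b and llava among the models.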
After starting the Ollama models, we have to modify the OpenManus configuration file, filling in the model name and base_url. If nothing else has been changed, the default is http://127.0.0.1:11434/v1. The api_key can be anything, but it must not be empty.
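As a sketch based on the config/config.example.toml shipped in the repo at the time of writing (copy it to config/config.toml first; key names may differ in your version, and "ollama" as the api_key is just an arbitrary non-empty placeholder):
[llm]
model = "qwen2.5:7b"
base_url = "http://127.0.0.1:11434/v1"
api_key = "ollama"

[llm.vision]
model = "llava"
base_url = "http://127.0.0.1:11434/v1"
api_key = "ollama"
The sample file also carries tuning keys such as max_tokens and temperature; keep whatever defaults your version ships with.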
Remember to save the file after filling it out, and then you can start OpenManus. I started it from the OpenManus project folder:
python main.py
If you start it from another path, remember to include the path, for example:
python xxx/xxx/main.py
And with that, it runs successfully.

Here I will add a few common pitfalls:
1. Why does my model configuration work elsewhere but not in OpenManus?
Answer: It depends on whether the other place calls an OpenAI-compatible interface. If it doesn't, it is normal for the same configuration not to carry over: OpenManus calls models through the OpenAI-compatible interface (and newer versions also support the Azure interface). A quick test is shown below.
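To verify that a model answers on the same OpenAI-compatible interface OpenManus uses, you can hit Ollama's standard endpoint directly (quoting shown for bash; adjust on Windows):
curl http://127.0.0.1:11434/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "qwen2.5:7b", "messages": [{"role": "user", "content": "hello"}]}'
If this returns a normal chat completion, the model side is fine and the problem is in your OpenManus config.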

2. What should I do if, after startup, the dialog shows an error saying the model does not support tools?
Answer: Whether you call models from SiliconFlow, DeepSeek, OpenAI, Ollama, or any other provider, each provider's official documentation lists which models support tools. It differs from provider to provider, so check for yourself.

3. Why can't files be saved?
Answer: The project is not stable yet; it is recommended to wait for further optimization.

4. What if the task is completed but it cannot stop and keeps looping?
Answer: Press Ctrl+C to stop it directly. This happens because the model does not call the tool that terminates the run. It is still unstable; you need to wait for official optimization.

5. Does Ollama really have no API?
Answer: Of course it has one; nothing here could run without an API. It serves on port 11434 by default, as noted above. Some people are just stubborn about this.

6. What should I do if the Google search tool has no network access or cannot connect?
Answer: 1. Switch to Bing (see the sketch below). 2. Wait for official changes.
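Depending on your OpenManus version, the sample config exposes a search-engine setting; the section and key below are my assumption from the sample config and may differ in your copy, so check config.example.toml before editing:
[search]
engine = "Bing"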

7. What should I do if my technical skills aren't up to it, or I can't get through the documentation?
Answer: It is recommended to give up and wait for a ready-made one-click package; don't torture yourself.