Two ways to deploy with one click: the free, open-source OpenManus (28.7K stars), a complete replacement for Manus

**Explore the free, open-source OpenManus: one-click deployment and a perfect alternative to Manus AI.**
Core content:
1. A comparison of OpenManus and Manus AI and an analysis of its advantages
2. Two quick deployment methods: a conda environment or the uv tool
3. Local deployment of an Ollama large model, completely free to use
Preface
A few days ago, Manus, hyped up by its invitation codes, was still riding a wave of popularity. Like everyone else, Xiaozhi joined the ranks of applicants, but to no avail. OpenManus, at 28.7K stars, is genuinely hot. But can it really replace Manus? Let's take a look at what this open-source tool can do.
In a previous article, Xiaozhi shared this tool with you; the link there pointed to Manus AI, which is wildly popular online and hard to get access to. Today I will walk through the setup steps for the free alternative in detail. Its biggest advantage is that it can connect to our local Ollama open-source models: because everything is deployed locally, no API key is needed, and it is completely free to use.
OpenManus: https://github.com/mannaandpoem/OpenManus
Procedure
There are two quick deployment methods, both available on GitHub.
Method 1: Using conda (for Windows users)
1. Create a new conda environment:
conda create -n open_manus python=3.12
conda activate open_manus
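As an optional sanity check (not part of the original steps), confirm the new environment is active and using the expected interpreter:
python --version  # should report Python 3.12.x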
2. Clone the repository:
git clone https://github.com/mannaandpoem/OpenManus.git
cd OpenManus
3. Install dependencies:
pip install -r requirements.txt
4. Install Ollama to run large AI models locally
Ollama installation and deployment are not covered here; see Xiaozhi's previous article for the detailed steps.
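For quick reference, on macOS/Linux Ollama ships an official one-line install script (shown here as a convenience; Windows users should download the installer from ollama.com instead):
curl -fsSL https://ollama.com/install.sh | sh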
Method 2: Using uv (recommended)
1. Install uv (a fast Python package installer and resolver):
curl -LsSf https://astral.sh/uv/install.sh | sh
If the command fails due to network issues, run a terminal proxy command first (adjust the ports to match your own proxy):
export https_proxy=http://127.0.0.1:7890 http_proxy=http://127.0.0.1:7890 all_proxy=socks5://127.0.0.1:7890
2. Clone the repository:
git clone https://github.com/mannaandpoem/OpenManus.git
cd OpenManus
If you cannot connect to GitHub, the solution is the same as above: run the terminal proxy command first.
3. Create a new virtual environment and activate it:
uv venv
source .venv/bin/activate # On Unix/macOS
# Or on Windows:
# .venv\Scripts\activate
If running uv produces this error:
zsh: command not found: uv
execute the following command to add it to your PATH environment variable:
source $HOME/.local/bin/env
Then verify that uv is available:
uv --version
4. Install dependencies:
uv pip install -r requirements.txt
5. Install the local large model
Because OpenManus drives the model through function calls, you must use a model that supports function calling (tool use), such as qwen2.5-coder:14b, qwen2.5-coder:14b-instruct-q5_K_S, or qwen2.5-coder:32b. For vision, minicpm-v works.
Install the local model:
ollama run qwen2.5-coder:14b
Install the vision model:
ollama run minicpm-v
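To confirm both models pulled successfully, you can list Ollama's installed models (an optional check):
ollama list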
6. Modify the configuration file
In the installation directory, find OpenManus\config\config.example.toml and rename config.example.toml to config.toml.
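On macOS/Linux you can do this from the project root with a copy command like the following (a minimal example; on Windows, just rename the file in Explorer):
cp config/config.example.toml config/config.toml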
Then change its contents to the following:
# Global LLM configuration
[llm]
model = "qwen2.5-coder:14b"
base_url = "http://localhost:11434/v1"
api_key = "sk-..."
max_tokens = 4096
temperature = 0.0
# [llm]  # AZURE OPENAI:
# api_type = 'azure'
# model = "YOUR_MODEL_NAME"  # e.g. "gpt-4o-mini"
# base_url = "{YOUR_AZURE_ENDPOINT.rstrip('/')}/openai/deployments/{AZURE_DEPLOYMENT_ID}"
# api_key = "AZURE API KEY"
# max_tokens = 8096
# temperature = 0.0
# api_version = "AZURE API VERSION"  # e.g. "2024-08-01-preview"
# Optional configuration for specific LLM models
[llm.vision]
model = "qwen2.5-coder:14b"
base_url = "http://localhost:11434/v1"
api_key = "sk-..."
7. Run the program
python3 main.py
Then watch it run. The first time OpenManus opens a web page, Playwright is installed automatically.
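If the automatic browser setup fails, Playwright's browsers can also be installed manually with its standard CLI command (my suggestion, not a step from the original article):
playwright install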
8. Conclusion
Those are the two deployment methods I wanted to share, along with the steps for calling a local Ollama model from the open-source OpenManus and the problems I ran into along the way. I hope it helps.