OpenManus+QwQ32B local deployment

Written by
Clara Bennett
Updated on: July 10, 2025

A detailed guide to local deployment of OpenManus and QwQ-32B to help you quickly build your personal AI environment.

Core content:
1. Running QwQ-32B locally and the Ollama deployment steps
2. OpenManus environment construction and dependency installation
3. OpenManus configuration file settings and API key management


1: Run QwQ-32B locally

ollama run qwq

2: Install and deploy OpenManus

Download the installation package

git clone https://github.com/mannaandpoem/OpenManus

Environment preparation and installation

conda create -n open-manus python=3.12
conda activate open-manus

My base environment is already Python 3.12.9, so I use that version directly.

cd OpenManus
# Set a domestic pip mirror (optional; useful in mainland China)
pip config set global.index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
# Install dependencies
pip install -r requirements.txt

Configuration Instructions

OpenManus needs to be configured to use the LLM API. Please follow the steps below to set it up:

Create a config.toml file in the config directory (can be copied from the example):
cp config/config.example.toml config/config.toml
Edit config/config.toml to add your API key and custom settings:
  1. DeepSeek official API key

# Global LLM configuration
[llm]
model = "deepseek-reasoner"
base_url = "https://api.deepseek.com/v1"
api_key = "sk-..."  # replace with your DeepSeek API key
max_tokens = 8192
temperature = 0.0

# Note: Multimodality has not been integrated yet, so this section can be left alone for now
# Optional configuration for specific LLM models
[llm.vision]
model = "claude-3-5-sonnet"
base_url = "https://api.openai.com/v1"
api_key = "sk-..."

  2. QwQ-32B official (DashScope) API key


# Global LLM configuration
[llm]
model = "qwq-32b"
base_url = "https://dashscope.aliyuncs.com/compatible-mode/v1"
api_key = "sk-..."  # replace with your DashScope API key
max_tokens = 8192
temperature = 0.0

# Note: Multimodality has not been integrated yet, so this section can be left alone for now
# Optional configuration for specific LLM models
[llm.vision]
model = "claude-3-5-sonnet"
base_url = "https://api.openai.com/v1"
api_key = "sk-..."

Model name notes:

  • Official DashScope API: qwq-32b
  • SiliconFlow: Qwen/QwQ-32B
  • PPIO compute cloud: qwen/qwq-32b

Start a task

python main.py

Enter your prompt. If no error is reported, everything is working.
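You can also smoke-test the endpoint directly before starting a task. A minimal sketch using only the standard library; the `BASE_URL`, `MODEL`, and the `ask` helper are illustrative, not part of OpenManus — point them at whichever OpenAI-compatible endpoint you configured above:

```python
import json
import urllib.request

# Illustrative values -- adjust to your configured endpoint and model
BASE_URL = "http://localhost:11434/v1"  # or https://dashscope.aliyuncs.com/compatible-mode/v1
MODEL = "qwq:latest"

def build_chat_request(model: str, prompt: str) -> bytes:
    """JSON payload for a single-turn chat completion (OpenAI-compatible schema)."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()

def ask(prompt: str, timeout: float = 600.0) -> str:
    """Send one prompt to the configured endpoint and return the reply text.

    Requires the server (e.g. Ollama) to be running.
    """
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=build_chat_request(MODEL, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

If `ask("Say hello")` returns text without raising, OpenManus should be able to reach the same endpoint.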

3: Configure a local model in OpenManus

QwQ-32B

Note: when connecting QwQ-32B, its reasoning ("think") phase is slow, so raise the timeout in the ask_tool method to 600 s (the default is 60 s).

vi config/config.toml

```toml
# Global LLM configuration
[llm]
model = "qwq:latest"
base_url = "http://localhost:11434/v1"
api_key = "EMPTY"
max_tokens = 4096
temperature = 0.0

# Optional configuration for specific LLM models
[llm.vision]
model = "llava:7b"
base_url = "http://localhost:11434/v1"
api_key = "EMPTY"
```

The model name must match a model actually present in your local Ollama instance, otherwise an error will be reported.

Check the available names with the ollama command (`ollama list`).

The correct entry here is: qwq:latest

Note: api_key must be set to EMPTY, otherwise it will report:

API error: Connection error
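That check can be scripted. A small sketch that parses the tabular output of `ollama list`; the sample rows (IDs, sizes, dates) are illustrative:

```python
def installed_models(list_output: str) -> list[str]:
    """Parse the output of `ollama list`; the first column of each row is the model tag."""
    rows = list_output.strip().splitlines()[1:]  # skip the header row
    return [row.split()[0] for row in rows if row.split()]

# Illustrative sample; capture the real output with:
#   subprocess.run(["ollama", "list"], capture_output=True, text=True).stdout
sample = """NAME          ID              SIZE     MODIFIED
qwq:latest    0123456789ab    19 GB    2 days ago
llava:7b      ba9876543210    4.7 GB   5 weeks ago
"""
```

`"qwq:latest" in installed_models(sample)` tells you whether the tag in config.toml will resolve.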

Start OpenManus

python main.py

Qwen2.5-32B

vi config/config.toml

```toml
# Global LLM configuration
[llm]
model = "qwen2.5:latest"
base_url = "http://localhost:11434/v1"
api_key = "EMPTY"
max_tokens = 4096
temperature = 0.0

# Optional configuration for specific LLM models
[llm.vision]
model = "llava:7b"
base_url = "http://localhost:11434/v1"
api_key = "EMPTY"
```

DeepSeek-R1

vi config/config.toml

```toml
# Global LLM configuration
[llm]
model = "deepseek-r1:32b"
base_url = "http://localhost:11434/v1"
api_key = "EMPTY"
max_tokens = 4096
temperature = 0.0

# Optional configuration for specific LLM models
[llm.vision]
model = "llava:7b"
base_url = "http://localhost:11434/v1"
api_key = "EMPTY"
```

4: Install the browser components required by OpenManus. After installation completes, select and configure the model as above.

playwright install

This part has not been explored further yet.