AI tool wars: OpenManus arrives just one day after Manus's release

Written by
Audrey Miles
Updated on: July 13, 2025
Recommendation

OpenManus, a new competitor in the AI field, removes the invitation-code restriction and brings a new open-source option for AI agents.

Core content:
1. OpenManus's AI-agent features and its advantages over Manus AI
2. The MetaGPT team's rapid-prototyping background and an introduction to the team
3. OpenManus's core components and a practical deployment guide

Yang Fangxian
Founder of 53AI/Most Valuable Expert of Tencent Cloud (TVP)

Introduction

The core goal of OpenManus is to provide an AI agent that does not require an invitation code, letting users perform a wide range of tasks through large language models (LLMs) such as GPT-4o. The project description claims it can "implement any idea," implying a general-purpose AI agent that can autonomously plan and execute tasks. In contrast, the original Manus AI appears to require an invitation code, and OpenManus's open-source nature makes it far more accessible.
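The "plan and execute" behavior described above can be sketched as a minimal agent loop. This is a hypothetical illustration, not OpenManus's actual implementation: in a real agent, `plan` would query an LLM such as GPT-4o and `execute` would dispatch to tools.

```python
# Minimal sketch of a plan-then-execute agent loop (hypothetical;
# not OpenManus's actual code).
def plan(goal: str) -> list[str]:
    # A real agent would ask an LLM to break the goal into ordered
    # steps; here we fake a static three-step plan.
    return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

def execute(step: str) -> str:
    # A real agent would dispatch to tools (browser, shell, code runner).
    return f"done: {step}"

def run_agent(goal: str) -> list[str]:
    # Plan once, then execute each step in order.
    return [execute(step) for step in plan(goal)]

print(run_agent("summarize a paper"))
```

The loop is deliberately sequential; real agents typically re-plan between steps based on tool output.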

One particularly noteworthy detail: the project was mentioned on Hacker News on March 6, 2025, described as an open-source alternative to Manus AI, showing the early attention it drew in the AI community.

Development Background

OpenManus was developed by the MetaGPT team in an impressive three hours, demonstrating the team's ability to rapidly prototype, especially in the field of AI agents. The development team includes @mannaandpoem, @XiangJinyu, @MoshiQAQ, and @didiforgithub, and they welcome suggestions, contributions, and feedback from the community.

OpenManus Demo


Core Components

  • LLM integration layer: modular design supporting multi-model access
# Model configuration example (config.toml)
[llm]
provider = "openai"  # also supports azure, anthropic, etc.
model = "gpt-4o"  # default model version
api_key = "sk-****"  # API key placeholder
base_url = "https://api.openai.com/v1"
  • Task execution engine:
    • DAG-based workflow scheduling system
    • Asynchronous task-queue processing
    • Error-retry mechanism (up to 3 attempts)
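The retry behavior in the task execution engine can be sketched with asyncio. The names here (`run_with_retry`, the backoff schedule) are illustrative assumptions, not OpenManus's actual API; only the "up to 3 attempts" limit comes from the description above.

```python
import asyncio

# Sketch of the error-retry idea: retry a failing async task up to
# 3 times with a small backoff (illustrative, not OpenManus's API).
async def run_with_retry(task, max_attempts: int = 3):
    for attempt in range(1, max_attempts + 1):
        try:
            return await task()
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            await asyncio.sleep(0.1 * attempt)  # simple linear backoff

async def main():
    calls = {"n": 0}

    async def flaky():
        # Fails twice, then succeeds on the third call.
        calls["n"] += 1
        if calls["n"] < 3:
            raise RuntimeError("transient failure")
        return "ok"

    return await run_with_retry(flaky)

print(asyncio.run(main()))  # succeeds on the third attempt: "ok"
```

A production queue would layer this retry wrapper under the DAG scheduler, so a node only marks its dependents ready after all retries are exhausted or one succeeds.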

Deployment Practice

Environment requirements

| Component | Minimum version | Recommended version |
| --- | --- | --- |
| Python | 3.10 | 3.12 |
| CUDA | 11.7 | 12.2 |
| PyTorch | 2.0.1 | 2.2.0 |

Dependency management has also been tightened: requirements.txt now locks versions. Install with:
  pip install -r requirements.txt --no-cache-dir

Configuration Guide

Clone the repository
git clone https://github.com/mannaandpoem/openmanus && cd openmanus
Initial configuration
# config.toml key parameters
[logging]
level = "INFO"  # DEBUG/INFO/WARNING
rotate_size = "100MB"  # sizes with a unit suffix must be quoted strings in TOML

[cache]
enable = true
ttl = 3600  # seconds
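The `ttl` setting above caps how long a cached entry stays valid. A minimal in-memory sketch of that behavior, assuming a simple key-value cache (hypothetical helper, not OpenManus's actual cache layer):

```python
import time

# Illustrative TTL cache: entries expire `ttl` seconds after being set.
class TTLCache:
    def __init__(self, ttl: float = 3600):
        self.ttl = ttl
        self._store = {}  # key -> (expires_at, value)

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        expires_at, value = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # evict lazily on read
            return default
        return value

cache = TTLCache(ttl=0.05)  # short TTL so expiry is visible
cache.set("answer", 42)
print(cache.get("answer"))  # 42 while fresh
time.sleep(0.1)
print(cache.get("answer"))  # None after expiry
```

Lazy eviction on read keeps the sketch simple; a production cache would also sweep expired entries in the background.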

Performance Benchmarks

Task processing capability

| Task type | QPS | Average latency | Success rate |
| --- | --- | --- | --- |
| Text generation | 15.2 | 650 ms | 98.7% |
| Data analysis | 8.4 | 1.2 s | 95.1% |
| Code execution | 6.8 | 1.8 s | 93.6% |

(Test environment: AWS EC2 g5.xlarge, June 2024 benchmark data)
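For readers reproducing figures like those above, QPS and average latency can be measured with a simple timing harness. The workload below is a stand-in; the measurement shape is what matters.

```python
import time

# Sketch of how throughput (QPS) and average latency can be measured
# for a repeated task; the lambda workload is a placeholder.
def benchmark(task, runs: int = 50):
    latencies = []
    start = time.perf_counter()
    for _ in range(runs):
        t0 = time.perf_counter()
        task()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "qps": runs / elapsed,  # completed tasks per second
        "avg_latency_ms": 1000 * sum(latencies) / len(latencies),
    }

print(benchmark(lambda: sum(range(1000))))
```

Real benchmarks would add warm-up runs and report percentiles alongside the mean, since LLM latency distributions are heavy-tailed.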

Security measures

  • API keys are stored encrypted with AES-256-GCM
  • Input and output content is sanitized
  • Follows OWASP API security guidelines
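The input/output sanitization bullet can be illustrated with a small scrubber. The rules here (redacting `sk-`-prefixed keys, stripping control characters) are hypothetical examples, not OpenManus's actual filter set.

```python
import re

# Illustrative sanitizer: redact leaked API keys and strip control
# characters from text passing through the agent (example rules only).
API_KEY_RE = re.compile(r"sk-[A-Za-z0-9]{8,}")
CONTROL_RE = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")

def sanitize(text: str) -> str:
    text = API_KEY_RE.sub("sk-****", text)  # mask anything key-shaped
    text = CONTROL_RE.sub("", text)         # drop non-printable bytes
    return text

print(sanitize("key=sk-abcdef123456 and\x00 noise"))
# -> key=sk-**** and noise
```

Running sanitization on both inputs and outputs, as the bullet above describes, prevents secrets from round-tripping through model prompts or logs.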

Extension Development

Plugin interface

from abc import ABC, abstractmethod

class BasePlugin(ABC):
    @abstractmethod
    def preprocess(self, input: dict) -> dict:
        """Input preprocessing."""

    @abstractmethod
    def postprocess(self, output: dict) -> dict:
        """Output post-processing."""