Manus's closed-door meeting: 5 judgments about AI products

Manus, a rising star in the field of AI products, brings a revolutionary intelligent experience.
Core content:
1. Manus surpasses OpenAI and becomes the AI assistant with the highest GAIA rating
2. Unique product concept of "Less structure, more intelligence"
3. Four core views shared by the Manus team at the closed-door meeting
After DeepSeek became popular, another Chinese AI product became popular!
Early yesterday morning, Monica.im released the world's first universal agent product, Manus.
If DeepSeek marked a counterattack by Chinese companies in the field of large models, then Manus marks Chinese companies pulling ahead in the field of agents.
Manus has surpassed OpenAI's Deep Research on GAIA (a benchmark for general AI assistant capabilities) and taken the top spot on the leaderboard.
Unlike traditional AI assistants, Manus is characterized by its generalizability in common tasks and its ability to autonomously execute tasks and ultimately deliver results.
Simply put, Manus can solve various complex and varied tasks, including writing research reports, travel planning, financial report analysis, etc., and directly deliver complete task results.
There is a lot of discussion about Manus now. We will not debate whether it is truly a universal agent, but the many unique product insights behind it are worth learning from.
For example, Manus proposed “Less structure, more intelligence”, advocating reducing structural restrictions on AI and relying on the model's autonomous evolution capabilities rather than manually preset processes.
Previously, Wu Yajun compiled an article titled "The Painful Lessons of AI Entrepreneurs: Betting on Model Accuracy is a Product Trap, and Using Model Flexibility is the Answer". In this article, the author put forward a point of view after investigating more than 100 AI startup projects:
AI products should not focus too much on the limitations of today's models; they should instead make more use of the autonomy and flexibility of large models, because the models' continued evolution will eventually erode the added value of software while greatly expanding the boundaries of applications.
Manus undoubtedly confirms this: through engineering capability, the team integrated functions already on the market into a smooth experience with better results.
In addition to "Less structure, more intelligence", according to the public account "Automatic China.AI", at today's closed-door meeting, the Manus team also shared the following four views:
/ 01 /
Two details of Manus
First of all, although Manus is positioned as a "general agent", it still mainly focuses on information collection and research.
Judging from the officially shared cases, these span obvious work scenarios (writing research reports, data analysis, finding potential customers), life scenarios such as travel planning, and education.
This is completely different from the approach taken by Zhipu's AutoGLM. Zhipu AutoGLM is more like a personal life assistant, with AI helping users send red envelopes, order takeout, hail taxis, and check routes.
This reflects a difference in how the Manus team and Zhipu understand agents: Manus focuses on squeezing the model's capabilities to complete more complex tasks, while Zhipu hopes to start with simple daily tasks and build a product more people can use.
Compared to AI sending red envelopes, copywriting-style work scenarios are obviously a better fit for today's AI. This kind of work often takes a lot of time, carries the most rigid demand, and is also where large models excel: as long as sufficient context and environment are provided, their performance far exceeds that of humans.
Second, Manus will fully display all the steps it is performing and simulate human usage habits.
In the demonstration case, Manus breaks the user's task down into small subtasks it can perform, then gathers the information needed to complete them; the user can pause the workflow midway, provide feedback, and let the agent continue.
At the same time, Manus saves all the information obtained along the way and finally delivers the comprehensive report the user asked for. This is somewhat like DeepSeek's chain-of-thought display: by showing users how it arrived at the result, it naturally earns more trust in what it delivers.
Not only that, Crow also noticed that in the demonstration case, Manus simulates human behavior, such as turning pages in a PDF or opening web pages one by one, even though in theory a large model could read a large amount of information in an instant and gather it far more efficiently.
One important reason is that the current Internet is designed around human usage habits, so for compatibility and versatility, Manus simulates them for now. As AI capabilities improve, agents should eventually form a more efficient model of communication and collaboration.
/ 02 /
Betting on model flexibility
The real strength of Manus lies in two points: its generalization on common tasks and its ability to autonomously execute tasks and ultimately deliver results.
From a functional point of view, each of Manus's functions has a precedent, such as Deep Research, Artifact, and Web Search. Manus's strength is that it integrates these functions by leveraging the powerful reasoning and generalization capabilities of large models, so users no longer jump between multiple tools and instead get a smooth experience and useful results.
This versatility comes not only from the Manus team's deep engineering accumulation, but also from its unique understanding of AI products.
Among them, the most important thing is that the Manus team proposed "Less structure, more intelligence", advocating reducing the structural restrictions on AI and relying on the model's autonomous evolution capabilities rather than manually preset processes.
Simply put, it is to reduce the restrictions on large models and make full use of the evolution of large model capabilities to complete more tasks more efficiently.
This reminds me of a point made not long ago by Lukas Petersson, co-founder of Andon Labs (a YC W24 company).
After researching more than 100 YC alumni projects, he came up with a view:
Many AI products now focus too much on the limitations of current models, but in the long run, startups should bet on opportunities that can fully utilize the autonomy and flexibility of large models.
In his view, optimizations such as prompt engineering can certainly improve AI's results, but their ceiling is obvious. A better strategy is to bet on more powerful models to come; in that process, products with stronger autonomy will achieve better results.
This is already being demonstrated. For example, the Manus team revealed at a communication meeting that its performance has already beaten almost three-quarters of the agent startups in YC's W25 batch.
Manus is a practitioner of this concept. They believe that AI can be smarter than humans and give large models enough space to do autonomous planning and execution.
Of course, judging by actual results, Manus's usage scenarios are not a major breakthrough yet, but as model capabilities improve, the boundaries of those scenarios may expand over time.
Based on this, Manus also proposed a new metric for AI value: Agentic Hours per User (AHPU), which measures how much task time users delegate to AI, with the goal of raising productivity through parallel tasks.
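Manus has not published a formula for AHPU, but a straightforward reading of "agentic hours per user" can be sketched as delegated task-hours averaged over users. The function and the numbers below are assumptions for illustration only; they show why parallel tasks raise the metric.

```python
def ahpu(delegated_hours: list[float], num_users: int) -> float:
    """Assumed AHPU formula: AI-delegated task-hours averaged over users.
    This is an illustrative interpretation, not Manus's published definition."""
    return sum(delegated_hours) / num_users

# One user delegating a single 2-hour task vs. three 2-hour tasks in parallel:
sequential = ahpu([2.0], num_users=1)          # 2.0 agentic hours per user
parallel = ahpu([2.0, 2.0, 2.0], num_users=1)  # 6.0: parallelism raises AHPU
```

Under this reading, a user who keeps three agents busy at once banks three times the agentic hours in the same wall-clock time, which is exactly the productivity-through-parallelism goal stated above.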
The Manus team revealed at a communication meeting that the current cost of a single Manus task is $2, which is far lower than the industry average, and there is still room for further optimization.
In addition to the above two points, according to the public account "Automatic China.AI", Manus also shared the following insights at a small-scale closed-door sharing session:
1) Three ways the user experience of Manus products will improve in the future
2) The core of AI's future is “Labor Scaling”
That is, users can act as bosses efficiently managing multiple AI agents, breaking through the friction limits of human organizations. On this route, Manus builds its technical moat with a code-first strategy (using LLMs' native programming capabilities), multimodal web-page interaction (better than traditional Markdown parsing), and a dynamic learning mechanism (without parameter fine-tuning).
3) Why us, why Manus?
The Manus team has formed its core competitiveness by virtue of its rapid iteration capability (3-month strategic window), flexible architecture (avoiding the constraints of large corporate hierarchy) and firm belief (adhering to non-mainstream cognition). In the process of developing browser products, the team has accumulated a number of exclusive innovative features and experiences.
The decision-making cycle of large companies struggles to keep up with changes in the AI field; even a single OKR cycle is longer than the technology change cycle (this is true of some large companies, some departments, and even some startups).
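The "Labor Scaling" idea in point 2, one user acting as a boss who dispatches several AI agents concurrently rather than working through tasks one at a time, can be sketched with Python's standard `asyncio`. This is a toy illustration under assumed names; the agent body is a placeholder where a real system would make model and tool calls.

```python
import asyncio

async def agent(task: str) -> str:
    """Hypothetical worker agent; the sleep stands in for real model/tool calls."""
    await asyncio.sleep(0.01)
    return f"done: {task}"

async def boss(tasks: list[str]) -> list[str]:
    # All agents run concurrently; the "boss" only reviews finished results,
    # so total wall-clock time is bounded by the slowest agent, not the sum.
    return await asyncio.gather(*(agent(t) for t in tasks))

results = asyncio.run(boss(["write report", "find leads", "plan trip"]))
```

`asyncio.gather` preserves input order, so the boss gets results back in the order the tasks were assigned, regardless of which agent finished first.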