Trae AI 1.4 is officially launched: a tool that is more cost-effective than cursor is here!

Domestic AI programming tool Trae AI 1.4 is coming, with technological breakthroughs and ultra-low prices, redefining industry standards!
Core content:
1. Trae AI 1.4 version solves the problem of developers queuing, and the breakthrough pricing strategy reshapes the industry
2. Technology upgrade: Distributed reasoning framework Dragonfly 2.0 reduces costs and improves performance
3. Three major breakthroughs in intelligent evolution: dynamic decision engine, memory palace construction, tool ecological fission expansion
In 2025, when competition in the AI programming tool market is fierce, a domestic tool called Trae is creating a technological storm. The v1.4 version released on May 27 not only solved the queuing problem that has troubled developers for months, but also redefined the industry standard with a breakthrough pricing strategy. According to the latest statistics from TECHWEEKLY, the annual growth rate of the global AI programming tool market has reached 37%, but user satisfaction continues to decline due to high prices (data source: "2025 Global Developer Tool Survey Report").
? The counterattack of the price butcher
When Cursor insisted on a monthly subscription fee of $20, Trae came out with a price of $3 for the first month and $7.5 for renewal. This price is not only half the price of similar products, but also breaks the industry unwritten rule that "AI tools must be expensive." Actual tests found that under the same Claude Sonnet 4 model, Trae's 600/month fast request volume is 20% more than Cursor (data from public information on both official websites).
What is even more surprising is the technical support behind the price. The Trae team revealed in a Reddit AMA event that its self-developed distributed reasoning framework Dragonfly 2.0 reduces the model reasoning cost by 58%. This explains why it can maintain a low price while providing a stable service that processes 300+ requests per second (Technical White Paper v3.2).
?️ Three breakthroughs in intelligent evolution
The new version brings more than just price advantages. In 72 hours of continuous testing, we observed three revolutionary improvements:
1) After the awakening of the dynamic decision engine
canceled the fixed pre-proposal of the Builder agent, the autonomous decision-making ability of AI was significantly improved. In the test simulating the real development scenario, Trae's first response accuracy to complex requirements jumped from 78% to 92%. Especially when dealing with professional requirements such as "implementing distributed transaction locks", the system can autonomously call the RedisToolkit and CircuitBreaker modules, reducing manual intervention by 63% compared with the older version.
2) Construction of the Memory Palace
In order to solve the "goldfish memory" problem criticized by developers, the new version introduces MemoryCache Pro technology. In the multi-round dialogue stress test, Trae can still accurately call the API parameters mentioned in the third round in the 15th round of dialogue, and the context retention ability is 40% higher than Cursor. This is especially critical for complex projects that require continuous debugging.
3) The expanded toolset
now covers more than 230 common development scenarios. From automatically generating Swagger documents to optimizing K8s deployment scripts, the number of solutions that AI can actively recommend has doubled. In the containerized deployment test, the pass rate of Helm Charts generated by Trae reached 100% for the first time, a qualitative leap from 83% six months ago.
? Actual combat test: Competition in real scenarios
We reproduced the two typical cases mentioned by the official and found more details worth paying attention to:
In the "Student Mental Health Analysis" project, the interactive report generated by Trae includes dynamic heat maps and mental health index radar maps, and has built-in data outlier detection. In contrast, the version output by Cursor has the same visual effect, but lacks an automated data cleaning module. This confirms the "full-link intelligence" concept emphasized in Trae's technical documentation.
In the "React Todo List" generation test, the new version of Trae shows advantages in the following three areas:
Automatic integration with Zustand state management Default PWA offline functionality The introduction of the Accessibility plug-in to meet the WCAG 2.1 standard
has greatly improved the enterprise-level usability of the generated code, verifying its "out-of-the-box" product positioning.
The innovation of the environment configuration wizard
is worth a separate chapter. When configuring the Python data analysis environment, Trae can automatically identify the Anaconda version installed on the machine and intelligently recommend a matching JupyterLab plug-in combination. For the classic problem of Maven dependency conflicts, the accuracy of its conflict resolution suggestions reached 89%, far exceeding similar products.
In terms of model support, Gemini 2.5 Pro Preview performed amazingly. In the LeetCode puzzle analysis test, the model showed a unique perspective on problem solving, and provided a space optimization solution with a time complexity of O (n) for classic problems such as "Catching Rainwater". This type of innovative solution is rare in previous tools.