Recommendation
The Qwen3 hybrid reasoning model has launched, opening a new chapter in intelligent programming!
Core content:
1. Qwen3 performance comparisons and open-source advantages
2. The four core capabilities of the Tongyi Lingma programming agent
3. Autonomous decision-making and tool use in agent mode
Yang Fangxian
Founder of 53A/Most Valuable Expert of Tencent Cloud (TVP)
Recently, Qwen3 was officially released, open-sourcing eight "hybrid reasoning models". In benchmarks covering code, mathematics, and general ability, the flagship Qwen3-235B-A22B delivered highly competitive results against top models such as DeepSeek-R1, o1, o3-mini, Grok-3, and Gemini-2.5-Pro.
Tongyi Lingma now fully supports Qwen3 and adds an agent mode with autonomous decision-making, environment perception, and tool-use capabilities. Given a developer's coding request, it can use tools such as project retrieval, file editing, and the terminal to complete coding tasks end to end. Developers can also configure MCP tools so that coding fits their existing workflow.
In addition, intelligent Q&A, file editing, and agent modes can be switched within the same conversation flow, so developers can change modes on demand without starting a new conversation.
Tongyi Lingma supports Qwen3, China's first hybrid reasoning model
Recently, Alibaba's Qwen3 was officially open-sourced, ranking among the strongest open-source models in the world. Qwen3 uses a mixture-of-experts (MoE) architecture with 235B total parameters, of which only 22B are activated per inference. Qwen3 is also China's first "hybrid reasoning model", integrating "fast thinking" and "slow thinking" in a single model: it answers simple requests in seconds at low compute cost, and "thinks deeply" through multiple steps for complex problems, greatly reducing compute consumption.
- The Qwen3 model focuses on fast responses and suits scenarios that require quick coding;
- The Qwen3-thinking model focuses on deep thinking and produces higher-quality code after reasoning, suiting scenarios such as complex Q&A and unit-test generation (a minimal usage sketch follows).
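Both behaviors live in the same weights; the mode is chosen per request. Below is a minimal sketch of that switch, assuming the `enable_thinking` flag documented in the Qwen3 model card for Hugging Face Transformers; the model name and generation settings are illustrative.

```python
# Sketch: toggling Qwen3 between fast answers and step-by-step "thinking",
# assuming the enable_thinking flag from the Qwen3 chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-235B-A22B"  # any Qwen3 checkpoint works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write unit tests for a function that reverses a linked list."}]

# Fast mode: skip the reasoning trace for quick, low-cost answers.
fast_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=False
)

# Thinking mode: reason step by step first, better for complex Q&A and test generation.
thinking_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)

inputs = tokenizer(thinking_prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```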
Tongyi Lingma now fully supports Qwen3, and users can experience it through the Lingma plugin in mainstream IDEs such as VS Code, Visual Studio, and the JetBrains IDEs.
The Tongyi Lingma programming agent is now live
Built on Qwen3's strong coding capability and native support for the MCP protocol, the newly launched Tongyi Lingma programming agent offers four capabilities:
Project-level changes: based on the developer's task description, it can decompose the task on its own and modify multiple code files across the project. Through multi-turn conversation it supports step-by-step iteration and snapshot rollback, so developers can collaborate with Lingma to complete coding tasks.
Automatic project perception: based on the developer's task description, it automatically perceives project information such as the framework, technology stack, relevant code files, and error messages, so there is no need to add project context manually and task descriptions stay simple.
Tool use: it can independently use more than ten built-in programming tools, such as reading and writing files, code search, and error troubleshooting, and it can automatically discover and use configured MCP tools.
Terminal command execution: while carrying out a coding task, it can decide which commands to run, write them automatically, and execute them in the terminal, greatly improving task execution efficiency (a simplified sketch of this loop appears after the note below).
(To try agent mode, upgrade the Tongyi Lingma plugin to version 2.5.0 or later in VS Code or the JetBrains IDEs.)
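Together, these four capabilities amount to a plan-act-observe loop. The sketch below shows the general shape of such a loop; the tool set and the call_model() helper are hypothetical placeholders, not Tongyi Lingma's internal API.

```python
# Illustrative plan-act-observe loop for an agent mode like the one described above.
# Tool names and call_model() are hypothetical; a real agent would send the task plus
# all prior tool results to Qwen3 and receive the next action to take.
import pathlib
import subprocess

TOOLS = {
    "read_file": lambda path: pathlib.Path(path).read_text(encoding="utf-8"),
    "write_file": lambda path, content: pathlib.Path(path).write_text(content, encoding="utf-8"),
    "list_dir": lambda path=".": [p.name for p in pathlib.Path(path).iterdir()],
    "run_command": lambda cmd: subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout,
}

def call_model(task: str, observations: list[dict]) -> dict:
    """Placeholder for the LLM call: should return the next action, e.g.
    {"tool": "read_file", "args": {"path": "app.py"}} or {"done": True, "summary": "..."}."""
    raise NotImplementedError

def run_agent(task: str, max_steps: int = 20) -> str:
    observations: list[dict] = []
    for _ in range(max_steps):
        action = call_model(task, observations)           # plan the next step
        if action.get("done"):
            return action["summary"]
        result = TOOLS[action["tool"]](**action["args"])  # act with a built-in tool
        observations.append({"action": action, "result": result})  # observe the outcome
    return "stopped: step limit reached"
```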
Tongyi Lingma provides more than ten programming tools for the agent to use at its own discretion, including file search, file reading, directory reading, in-project semantic symbol retrieval, file modification, error retrieval, and terminal execution. The agent decides when to invoke these tools and executes them without developer confirmation or intervention, then plans its next step from the results they return.
Terminal command execution
In agent mode, the agent can decide on its own to write and run terminal commands as the task requires. To keep command execution predictable, by default the developer must confirm each command before it runs:
- Click the Run button, and the agent sends the command to the IDE's Terminal window for execution;
- Click the Cancel button to skip the command; the agent then plans its next step based on the developer's feedback.
Commands that need to run in the background are marked "run in background"; the agent continues with subsequent tasks and actively checks or fetches the terminal output when needed.
Developers can also configure a list of commands to execute automatically in the plugin settings; these run without confirmation. To add multiple commands, separate them with commas.
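A minimal sketch of how such an auto-run whitelist check could work, assuming the comma-separated settings value described above; the matching rule here is illustrative, not Lingma's actual implementation.

```python
# Hypothetical auto-run whitelist check for terminal commands.
def parse_auto_run_list(setting: str) -> set[str]:
    """Turn a comma-separated settings value into a set of allowed commands."""
    return {cmd.strip() for cmd in setting.split(",") if cmd.strip()}

def needs_confirmation(command: str, auto_run: set[str]) -> bool:
    """Run without confirmation only if the command's first token is whitelisted."""
    tokens = command.split()
    return not tokens or tokens[0] not in auto_run

allowed = parse_auto_run_list("npm, pnpm, git")
print(needs_confirmation("npm install", allowed))   # False: runs automatically
print(needs_confirmation("rm -rf build", allowed))  # True: asks the developer first
```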
MCP Tool Execution
Once MCP tools are configured, the agent decides on its own whether to call an MCP tool to complete the task. It asks before each execution; click the Execute button to confirm, and the agent calls the MCP tool and feeds the result back as context for subsequent steps.
Tongyi Lingma supports 2,400+ MCP services from the Moda community
The Tongyi Lingma programming agent supports the use of MCP tools: from the user's description of a requirement, the model plans autonomously and calls the appropriate MCP tools. It is deeply integrated with Moda MCP Plaza, the largest Chinese-language MCP community, covering 2,400+ MCP services in ten popular domains such as developer tools, file systems, search, and maps, broadening what the AI coding assistant can do and fitting it more closely to developer workflows.
- AI autonomously calls MCP tools: describe the task in natural language and Tongyi Lingma plans and calls the corresponding MCP tools on its own, automating the whole flow from task description to code generation and environment setup, simplifying complex development work and further improving efficiency (a minimal client-side sketch follows this list).
- DevOps expansion: the Alibaba Cloud Yunxiao MCP service makes its debut, supporting fetching requirements, defects, and code-review issues from Yunxiao and submitting merge requests as soon as code is generated, so the entire DevOps flow can be completed without leaving the IDE, greatly improving delivery efficiency.
- Moda MCP Plaza integration: deep integration with the Moda community's MCP Plaza lets developers install services with one click without leaving the IDE and easily call more than 2,400 professional MCP services.
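For reference, the sketch below shows what a single MCP tool call looks like on the client side, using the official MCP Python SDK (pip install mcp). The filesystem server and tool name are illustrative stand-ins for the services on Moda MCP Plaza; inside the IDE, Tongyi Lingma performs the equivalent planning and invocation for you.

```python
# Minimal MCP client sketch: start a server over stdio, list its tools, call one.
# The server command and tool name below are examples, not a Lingma configuration.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", "."],  # example MCP server
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()                 # what an agent can plan over
            print([tool.name for tool in tools.tools])
            result = await session.call_tool(                  # one tool invocation
                "read_file", arguments={"path": "README.md"}
            )
            print(result.content)

asyncio.run(main())
```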
// Demo 1: Rebuilding the Tongyi Lingma official website homepage
In the past, building a web page required at least project initialization, component development, state management, interaction implementation, responsive layout, and performance optimization. Now, with a single sentence such as "Help me develop the page based on the design draft", Tongyi Lingma can call an MCP tool to read the design draft, choose a technology stack that matches the user's coding habits, create the project files automatically, define development conventions, show the generated result in real time, and even produce R&D documentation. The user needs only one sentence and a few confirmation clicks to complete the whole flow.
// Demo 2: Developing an accounting app
Developing an app normally involves at least requirements analysis, design, business-logic development, front-end page development, and testing, and the full process takes a development engineer at least one to two days. Now, with the Tongyi Lingma programming agent, you only need to enter a few sentences: based on your requirements and the chosen technology stack, Tongyi Lingma automatically creates the project files, defines development conventions, shows the generated result in real time, and even produces R&D documentation. Development can be finished in about ten minutes, greatly improving R&D efficiency.