AI Agents are on the rise: is enterprise software dying?

This article explores the evolution and symbiosis of enterprise software in the AI era, and how AI Agents and traditional software are converging.
Core content:
1. Extinction or symbiosis: enterprise software under the AI wave
2. The integration path between traditional enterprise software and AI Agents
3. How enterprise software should evolve in the AI era
Under the wave of AI, will traditional application software die out, or coexist with AI?
In "Controversy of AI," the second installment of the "DeepTalk" column planned by Cui Niuhui, host Cui Qiang, founder and CEO of Cui Niuhui, invited Ren Xianghui, founder and CEO of Mingdao Cloud, and enterprise AI application entrepreneur Zhang Haoran for a lively discussion on the theme "The demise of application software: will AI Agents replace traditional enterprise service software?"
Ren Xianghui believes the Agent will be an important category of enterprise software in the future, and that it will be integrated with existing enterprise software rather than replace it outright. Enterprise software vendors should first consolidate their advantages in their own application domains, and only then consider the ability to connect to AI Agents.
Zhang Haoran believes the new wave of AI-Native application companies should focus on making good use of the ecosystem built in the previous era to shape their own value delivery.
The following is an abridged transcript of the conversation, edited by Niutoushe:
Tip: the original video of the conversation is available via the Niutoushe video account (Live Replay, May 7).
Replacement Anxiety
Cui Qiang: The topic of tonight's discussion is "Will AI Agents replace enterprise service software?" We will discuss it from three angles: architecture, knowledge barriers, and the new versus old landscape. I would like to ask both guests to share their views on tonight's topic first.
Ren Xianghui: My view is that AI Agents will not replace traditional enterprise software, although that does not mean traditional enterprise software can sit back and relax. In the short and medium term, it is unrealistic for the new AI Agent category to replace every enterprise software segment, for the following main reasons.
First, enterprise software may be traditional, but it is far from useless. It is not as intelligent as today's AI, yet it offers accuracy, manageability, and the distinct strengths of the GUI itself, and those are unlikely to be completely replaced by AI.
Second, the AI ecosystem is showing a clear trend of actively integrating with tool software and application software, or has actively embraced that technical architecture.
For example, almost all AI workflow products now provide Function Calling, including the latest MCP protocol, as well as architectures such as vector embedding and knowledge graph embedding. Their very existence shows that AI capabilities have to be combined with the capabilities of application software to solve customer problems end to end.
It is as if someone has already extended a hand to shake yours, and you reply, "I will stop doing what I do; I want to do what you do instead." That is obviously irrational. Those are the first two basic reasons.
Third, from a market perspective, the mutual penetration between AI companies and traditional enterprise software companies is asymmetric.
As things stand, it is relatively easy for an enterprise application to add AI capabilities, but comparatively harder, or at least slower, for AI to acquire the domain capabilities of the market segments where enterprise software operates. So enterprise software companies hold a certain advantage, at least in terms of how difficult the two sides find it to integrate with each other.
Fourth, there is the reality of the time cycle. Within the realistic planning cycle of enterprise software products, while AI has not yet fully developed the capability to replace them, enterprise software still has to evolve along its existing technical architecture.
So although AI technology is leaping forward, the application industry cannot leap; it can only evolve. Over the next five years, enterprise software should still build advantages in its own application field and then consider integrating AI Agent capabilities, rather than switching wholesale to AI Agents out of fear of being replaced by them. That would be putting the cart before the horse.
Zhang Haoran: This question depends on how you see the essence of SaaS, or of the previous software era. My summary is that all of an organization's workflows and SOPs moved from being entirely offline to being online, and some workflows became better automated precisely because they went online.
Essentially, when we talk about SaaS software, we are talking about an application carrier composed of countless SOPs or pieces of business know-how. This carrier has already been made online and automated, and it will certainly evolve toward intelligence next.
Online, automated processes provide abundant data and huge data volumes, and the SOPs and workflows built on them become the experience that AI learns from.
The essence of the move toward intelligence is figuring out how to use that data and experience to change the company's business model in the next wave of technology.
This trend toward intelligence had already begun before large models appeared. Today's LLMs (large language models) coupled with RL (reinforcement learning) have pushed deep learning and machine learning to a new level.
So, it’s more of an evolution than a replacement.
In this process, the requirements themselves also change a great deal. In the past, many requirements were strictly black or white; now there are grayscale requirements somewhere in between. These requirements can no longer be met by the previous generation of architecture, and that is when the two sides merge to produce a new AI-Native application.
So, in the end, it looks like a replacement relationship.
The Road to Integration
Cui Qiang: Haoran just raised a question: is there an essential difference between the original software architecture and today's AI-Native architecture? What specific adjustments should enterprise software make to its architecture and features to integrate with today's AI?
Ren Xianghui: Two years ago this may not have been clear, but now I think the technical path of agents and of LLMs themselves is fairly clear. The combination comes down to the following aspects:
The first category is the necessary fine-tuning or retraining for specific domains. In fields such as medicine and law, for example, there have already been attempts to build vertical domain models, or to offer services with higher accuracy and quality in professional fields.
The second category is RAG, which is already widely used for knowledge bases and customer service; its application direction is also fairly clear.
The third category, probably the most important and the main way to combine enterprise software with AI capabilities, is Function Calling, which requires the enterprise software's own interfaces to be presented in an AI-friendly way (a minimal sketch follows below).
If a product's interface design was imperfect in the past, that homework has to be made up quickly now. Chinese enterprise software companies used to be relatively weak on openness, with few open APIs and even fewer high-quality open interfaces.
I think this homework should be made up as soon as possible; fortunately, it is not very difficult.
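To make that concrete, here is a minimal sketch of what presenting an interface in an AI-friendly way can look like, assuming an OpenAI-compatible chat completions endpoint. The create_order function, its parameters, and the model name are hypothetical stand-ins for an enterprise system's own API, not any product mentioned in this conversation.

```python
import json
from openai import OpenAI  # assumes an OpenAI-compatible chat completions API

client = OpenAI()

# A hypothetical enterprise-software capability, exposed as a plain function.
def create_order(customer_id: str, sku: str, quantity: int) -> dict:
    """Stand-in for the order module of an enterprise system."""
    return {"order_id": "SO-0001", "customer_id": customer_id,
            "sku": sku, "quantity": quantity}

# The same capability described in an AI-friendly way: a typed, documented tool schema.
tools = [{
    "type": "function",
    "function": {
        "name": "create_order",
        "description": "Create a sales order for a customer in the ERP system.",
        "parameters": {
            "type": "object",
            "properties": {
                "customer_id": {"type": "string", "description": "Internal customer ID"},
                "sku": {"type": "string", "description": "Product SKU"},
                "quantity": {"type": "integer", "minimum": 1},
            },
            "required": ["customer_id", "sku", "quantity"],
        },
    },
}]

messages = [{"role": "user", "content": "Order 3 units of SKU A-100 for customer C-42."}]
resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)

# If the model decides to call the tool, execute it inside the enterprise software.
for call in resp.choices[0].message.tool_calls or []:
    if call.function.name == "create_order":
        print(create_order(**json.loads(call.function.arguments)))
```

The point is not the specific SDK but that the interface carries enough description and typing for a model to call it reliably.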
Cui Qiang: Haoran, what do you think about the integration and handshake between enterprise software and AI?
Zhang Haoran: I think the handshake is a must. Today, if the LLM-driven AI base does not have a good container, it will certainly be incomplete. So the key question is how AI can make good use of the software advantages established by the previous generation to shape its own value delivery.
A simple LLM-driven agent application really has no moat. How to make the agent actually work, and how to ensure it generalizes and produces stable output across different working scenarios, is the direction we should focus on.
Cui Qiang: Agents make the application experience conversational. Does that mean some designs or processes will no longer be necessary in the future?
Ren Xianghui: My current judgment is that it cannot go that far. Enterprise software today is not done once a single task is completed, and it genuinely contains many things that have nothing to do with intelligence.
For example, some practical work needs and certain industry requirements all depend on a GUI and cannot be fully handled through dialogue.
For these reasons, it will definitely be a gradual process of integration, and traditional enterprise software and AI are both likely to persist.
Cui Qiang: Haoran, what do you have to add to this question?
Zhang Haoran: People will always be necessary, but their roles may change. SOPs and know-how in vertical fields will not disappear; they will be presented in another form.
For example, process engines such as Zapier, HubSpot, and Salesforce have historically converted the know-how humans built up in the world into processes mapped inside machines. In essence, that is still human insight, not machine insight.
The greatest value of this wave of AI is that machines now have the ability to gain insight. So can processes be completed by machines, and how will the human role change?
In this process, the role of humans should be to build an embankment that keeps the workflow from overflowing it. And AI will not expand indefinitely; it will inevitably be constrained by the boundaries of specific fields. The more important human role is to watch where that boundary lies.
In the past, SaaS and software ran on a set of "0 and 1" rules. AI-Native applications are different: they have grayscale, and when that grayscale is combined with intelligence, AI can to some extent replace humans in defining workflows. That is also the essential reason AI-Native applications will eventually replace SaaS.
But CUI (Conversational User Interface) and GUI (Graphical User Interface) are not in conflict; they complement each other.
The biggest advantage of CUI is its understanding of intent, while the advantage of GUI is its efficiency. A new mode of interaction will surely emerge from the two.
The dynamic workflow changes brought about by this new generation of grayscale-driven intelligent systems are where AI Agents hold the greatest potential in the enterprise market.
What do we need to do in the meantime? Work out how to correct in the planning stage, how to control during operation, and how to observe and audit after operation.
This may be where AI-Native applications differ from SaaS.
Cui Qiang: Why does grayscale drive this kind of progress? Can you elaborate?
Zhang Haoran: In my view, grayscale is, to some extent, generalization. My experience with LLMs is that whatever I am studying, the model breaks through the framework I give it and generates angles I had never thought of.
That is close to the meaning of grayscale I wanted to express: not absolutely accurate, but not wrong either. In the enterprise market, this grayscale is quite different from the old, strictly defined rules.
Ren Xianghui: Grayscale certainly exists today, and there is no way to control it precisely. The AI application segments moving fastest right now are those with low accuracy requirements, such as general customer service scenarios.
However, most enterprise software categories, such as financial software and workflow software, may not be able to accept grayscale. In our experience serving customers, no customer likes grayscale.
Zhang Haoran: My view here is that today's SaaS delivers a process, not a result.
If, five years from now, an AI Agent delivers a certain result, it will likely reach milestone nodes along the way that require human interaction. But compared with the past, the granularity of human intervention will be much coarser; in a sense, the grayscale is digested internally by the AI.
Although there is some grayscale in between, the milestones themselves are controllable.
Ren Xianghui: I believe everyone using AI to solve enterprise software problems today wants to get to end-to-end. As you said, it cannot be done in one step; sometimes you can only jump to a certain milestone, and that requires human participation.
The problem is that you do not know at which point a milestone will fail. Failure can happen anywhere along the chain, which means you still need observation windows at every link and an entry point for human operation. That is no different from today.
This also shows, from another angle, that the enterprise software we build today may be a prerequisite on the way to end-to-end. And end-to-end within a single enterprise alone is not that meaningful, because in many cases you have to cross enterprise boundaries, and enterprises on either side of a boundary do not necessarily evolve in sync.
It will be very difficult for enterprises to achieve true end-to-end in the next five or even ten years. Until then, holding on to the observation window is still what guarantees a leading advantage.
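As an illustration of the "observation windows at every link" that Ren describes, here is a minimal sketch of a chain with human checkpoints. It is a generic pattern only; the step names, the needs_review rules, and the console prompt are hypothetical and not taken from any product mentioned here.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]            # the agent's work for this link in the chain
    needs_review: Callable[[dict], bool]   # when True, open an observation window

def human_checkpoint(step_name: str, state: dict) -> bool:
    """A deliberately simple entrance for human operation: approve or stop."""
    answer = input(f"[{step_name}] intermediate result {state} -- approve? (y/n) ")
    return answer.strip().lower() == "y"

def run_chain(steps: list[Step], state: dict) -> dict:
    for step in steps:
        state = step.run(state)
        # A milestone can fail anywhere along the chain, so every link gets a potential window.
        if step.needs_review(state) and not human_checkpoint(step.name, state):
            raise RuntimeError(f"Stopped by reviewer at step '{step.name}'")
    return state

# A hypothetical three-link chain: draft a quote, apply a discount, send it out.
steps = [
    Step("draft_quote", lambda s: {**s, "quote": s["amount"]}, lambda s: False),
    Step("apply_discount", lambda s: {**s, "quote": s["quote"] * 0.9},
         lambda s: s["quote"] > 10_000),                            # large quotes still need a human
    Step("send_quote", lambda s: {**s, "sent": True}, lambda s: True),  # always review before sending
]

if __name__ == "__main__":
    print(run_chain(steps, {"amount": 12_000}))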
Zhang Haoran: The observation window will certainly not disappear, but its definition may change. If we treat the GUI as an observation window, it used to be operated by humans, but today it may be operated by AI.
In the past we built software for people. Will the next step be building it for AI? Browsers used to be used by people, and now everyone is building browsers for AI. That is a singular point of change.
If we transform the interoperable APIs, workflows, and processes established in the previous generation into AI-oriented ones, new changes may emerge, and that is also an opportunity for new entrepreneurs. From another angle, I also agree that LLMs will plateau after a certain stage of development and cannot reach so-called end-to-end on their own; newer frameworks may be needed to solve that.
Today, the various agentic systems layered on top of LLMs are a possible substitute for SaaS. End-to-end driven by a large model alone is difficult, but a new direction of using agentic systems to complete end-to-end application construction is coming.
Ren Xianghui: In the future, the Agent will certainly be a very important form of software, and probably an important category within enterprise software. The existing categories will combine with it; by now everyone should be able to see that vision.
In today's agent market, many young entrepreneurs are working on this new category, which I find encouraging. Some existing enterprise software product companies are also building agent orchestration tools, which I think is unnecessary.
I do not deny that the Agent is a very important form, but it will combine with enterprise software rather than replace the existing software in each category.
Cui Qiang: When we use AI, how do we ensure security? Will a company's own data be exposed in the process? How can we iterate while keeping data private and secure?
Zhang Haoran: This is a hard problem, but not a new one. Take the interaction protocols between agents as an example: the permissions obtainable through those communication protocols also have to be managed.
In the past, people needed identity authentication in the system; today, agents need identity authentication across the whole system as well. That means the previous security assurance systems are still needed in the AI era. The question is how to integrate them into the new AI system to form the first security barrier.
As the second security barrier, I think technologies such as homomorphic encryption will also become popular.
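To illustrate the "first barrier" of agent identity and permissions, here is a minimal sketch. The agent IDs, scope names, and tools are hypothetical; the point is only that an agent's calls are checked against an identity-bound scope before they reach the enterprise system.

```python
# Hypothetical scope registry: which agent identity may call which tools.
AGENT_SCOPES = {
    "agent:crm-assistant": {"crm.read_contact", "crm.update_contact"},
    "agent:finance-bot": {"finance.read_invoice"},
}

def authorize(agent_id: str, tool_name: str) -> None:
    """First barrier: agent identity plus scoped permissions, the same idea as RBAC for people."""
    if tool_name not in AGENT_SCOPES.get(agent_id, set()):
        raise PermissionError(f"{agent_id} is not allowed to call {tool_name}")

def call_tool(agent_id: str, tool_name: str, payload: dict) -> dict:
    authorize(agent_id, tool_name)
    # ...dispatch to the real tool here, and write an audit log entry for later review...
    return {"tool": tool_name, "status": "ok", "payload": payload}

if __name__ == "__main__":
    print(call_tool("agent:crm-assistant", "crm.read_contact", {"contact_id": "C-42"}))   # allowed
    print(call_tool("agent:finance-bot", "crm.update_contact", {"contact_id": "C-42"}))   # raises PermissionError
```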
Ren Xianghui: This is similar to public-cloud versus private-cloud SaaS. With Function Calling, private data is, in essence, not stored inside the large model.
If people believe that any communication involving their internal data is unsafe, that is going too far. It is also why, once DeepSeek appeared, everyone wanted a private deployment. I think that is unsustainable because the cost is unreasonable.
Large enterprises may still have economies of scale, but for most small and medium-sized enterprises it is unrealistic. Moreover, the models themselves keep improving, and there is no reason future costs will not be even lower than today's, so in the end most small and medium-sized enterprises will still use the public service model.
Only large enterprises are likely to choose private deployment. And security, in essence, is not determined by the deployment mode; it is a comprehensive matter.
So I don't think this has anything to do with AI specifically.
Changes in the AI Ecosystem
Cui Qiang: At present the domestic ecosystem is not very open, and it is hard to solve the interconnection problem with the MCP approach. Given the domestic environment, what will the AI ecosystem look like in five years?
Zhang Haoran: It is hard to say. The infrastructure is still being built out, and the accuracy with which agents call tools will keep improving, so basic capabilities such as MCP will become a consensus; whether MCP itself becomes part of the AI infra layer is harder to say.
But in five years the accuracy of agents calling tools will be very high, which means the boundaries of an Agent unit will be defined very clearly and concretely.
People may well use the definitions and imagination of the SaaS era to sketch out the definition and imagination of future Agents; that may be the next change we see.
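As a concrete illustration of MCP as a "basic capability," here is a minimal server sketch assuming the official MCP Python SDK's FastMCP interface (installed via "pip install mcp"); the lookup_customer tool and its return value are hypothetical.

```python
# A minimal MCP server sketch, assuming the official MCP Python SDK.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("enterprise-demo")

@mcp.tool()
def lookup_customer(customer_id: str) -> dict:
    """Look up a customer record in a hypothetical enterprise system."""
    return {"customer_id": customer_id, "name": "ACME Ltd.", "tier": "gold"}

if __name__ == "__main__":
    # Exposes the tool over MCP so that any compliant agent client can discover and call it.
    mcp.run()
```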
Ren Xianghui: That may depend on effort from both sides. On one side, model capabilities will certainly keep improving and may offer better paths than the three combinations I mentioned earlier; but one thing is certain: enterprise software itself also has to do the work.
That takes patience and avoiding meaningless work. For example, what we are doing today is using agentic capabilities to orchestrate our internal interfaces, generating user-defined Agents from user prompts and supplied materials. This is Mingdao Cloud's own zero-code application model.
The process sounds simple, but it actually breaks down into many steps, and each detail may take months of polishing to reach acceptable accuracy. That is where application software developers should put their effort. If we do not do the alignment and verification on our side, then even as the model on the other side grows more and more powerful, we still will not solve the customer's problem.
So five years can actually take us a long way, but we need to reach a consensus as soon as possible: do not do meaningless things. If you do not understand the current technical limitations and just want to ship gimmicky features, it is a waste of time.
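To illustrate the kind of alignment-and-verification step Ren describes (this is a generic sketch, not Mingdao Cloud's implementation), the model can propose a structured application definition from a user prompt, and the application software validates it against its own schema before anything is created. The model name, prompt, and allowed field types are hypothetical, and the same OpenAI-compatible endpoint as in the earlier sketch is assumed.

```python
import json
from openai import OpenAI  # same OpenAI-compatible endpoint assumption as in the earlier sketch

client = OpenAI()

# The application software's own contract: what a generated app definition must contain.
ALLOWED_FIELD_TYPES = {"text", "number", "date", "select"}

def validate_app_definition(app: dict) -> list[str]:
    """Verification on the application side: the model's output is never trusted blindly."""
    errors = []
    if not app.get("name"):
        errors.append("missing app name")
    for field in app.get("fields", []):
        if field.get("type") not in ALLOWED_FIELD_TYPES:
            errors.append(f"unsupported field type: {field.get('type')}")
    return errors

prompt = "Build a simple leave-request app with applicant, start date, end date, and leave type."
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system",
         "content": 'Return only JSON shaped as {"name": str, "fields": [{"label": str, "type": str}]}.'},
        {"role": "user", "content": prompt},
    ],
)

app_definition = json.loads(resp.choices[0].message.content)
problems = validate_app_definition(app_definition)
print(app_definition if not problems else f"rejected: {problems}")
```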
Cui Qiang: We are now seeing large model vendors also pushing new capabilities. For example, Kimi has launched general capabilities such as PPT generation and mind mapping, which may create problems for some single-point application products.
Likewise, some tool software vendors are moving up the stack to train their own private small models. How do you see the future competition between AI-Native and traditional software, between large and small companies, and among model platforms?
Ren Xianghui: I think small companies have huge advantages, and that has nothing to do with AI. For example, we recently bought a screen recording tool whose entire appeal is its design. It was built by a small team abroad, and its functional design has nothing to do with AI.
This shows that competition is definitely not about the technology path, and has little to do with whether AI is adopted.
In most cases, winning customers requires something beyond mainstream technology, because mainstream technology is what everyone is already watching. Today's AI is nearly a public good, and its threshold keeps dropping.
So for small companies, the key is whether you can find a unique opportunity among the many market segments and stay focused on it.
Cui Qiang: So your point is that small vendors still have great advantages. They should compete not on technology or brand strength, but rely on innovation to find their own market segment.
Zhang Haoran: I agree completely. Technology is not the essence. The new wave of AI-Native application companies should focus on making good use of the ecosystem built in the previous era. They do not need to rebuild foundations that have been built countless times; instead, they should think about how to use them to create new value through AI.
The key is to define value clearly and keep thinking about how to use this new generation of AI-Native applications to deliver it.
Cui Qiang: SAP has made acquisitions during each round of its transformation. Will China's enterprise service ecosystem reach a similar state, with large companies acquiring new, smaller companies during their transformation to fill out their own technology map?
Ren Xianghui: That day will come; it is the law of the industry. But the big players must first solve their current problems, which means restoring their own ability to generate cash. I think the next one or two years will be difficult; it will take three to five years to reach that kind of consolidation, and that would also be healthier.
Zhang Haoran: That would be a good outcome. Today an AI-Native application can be delivered quickly in a vertical scenario, but the underlying application base carries huge competitive barriers and cannot be built in just a few years.
At that point, if small, innovative AI application companies can pass on the value we have defined in time, finding a base that can amplify it 10 or 20 times and handing it over, that is actually a good result.
Ren Xianghui: In fact, most SaaS company founders are also focused on their AI products, or on how their products use AI capabilities. At this stage, I think the risk of getting it wrong is greater than the risk of missing out.
Cui Qiang: Finally, please summarize your views from today as briefly as you can.
Ren Xianghui: Everyone knows the classic comparison: cars replacing horse-drawn carriages more than a hundred years ago, with SaaS companies cast as the carriages and AI-native applications as the cars. But people tend to overlook a historical fact: many car companies at the time were transformed from carriage companies.
So when we look back at history we tend to oversimplify, but at the level of micro facts it was not like that. Traditional players have been watching this market all along. The more likely outcome is therefore technological integration, so there is no need to be too anxious.
Zhang Haoran: Overall, I think it is definitely not a relationship of replacement but one of fusion and growth, and we have no reason to claim that the giants will necessarily fall behind.
The biggest difference between AI-Native applications and earlier application software is their divergence and generative nature. The grayscale these two traits bring will grow new kinds of needs, and those needs will steadily pull away from traditional software. That is also our most important entry point today.