"The era of AI experiments is over": IBM reconstructs the future blueprint of enterprise-level AI

Written by
Caleb Hayes
Updated on: June 20, 2025
Recommendation

IBM's CEO declares the era of AI experiments over, and enterprise AI reaches a new starting point: business reconstruction.

Core content:
1. AI technology moves from experimentation to a new stage of core business reconstruction
2. Four key elements for enterprise AI implementation: data, models, security governance, and intelligent agents
3. Software-defined infrastructure to achieve flexible expansion of infrastructure and reduce costs and increase efficiency


"The era of AI experiments is over." In his opening speech at the Think 2025 conference, IBM Chairman and CEO Arvind Krishna made this powerful statement. This is not only a judgment on the development stage of enterprise AI, but also a new starting point for AI technology to move from experimentation to core business reconstruction.

This turning point is driven by two major factors: first, the continuous advancement of technology has made AI easier and more efficient to implement; second, the actual needs of enterprises - reducing costs and increasing efficiency - have driven AI to truly serve core business processes. From "experimenting with AI" to "reshaping business", enterprises need to achieve deep-level innovation rather than shallow tool replacement.

"When talking about AI on the Internet, it is sometimes deified or demonized, but the implementation of AI at the enterprise level must be a step-by-step process." Zhai Feng, general manager of technical sales and chief technology officer of IBM Greater China, said in an interview with the author that the development of enterprise-level AI is at a critical turning point. In the past, companies focused more on building knowledge bases, developing smart assistants or customer service, but with the maturity of technological capabilities, AI is gradually penetrating from the edge to the core business processes of enterprises. The real challenge facing enterprises is how to deeply embed AI into the core areas of production and manufacturing, thereby achieving business automation and even process reshaping.

Zhai Feng, General Manager of Technical Sales and Chief Technology Officer, IBM Greater China

In this process, the key elements of enterprise-level AI implementation have gradually become clear. IBM has always believed that the most important factor for enterprise-level AI implementation is data, the core productive force. Without data, everything is empty talk; without high-quality data, no AI, however powerful, can produce meaningful output. Enterprises need to keep asking themselves three fundamental questions: Do they have high-quality data? Are they really using it? Is it generating the expected value?

The second is the model. The model is not about size, but about fitting the scenario. "Small and beautiful" lightweight models and domain expert models will become the mainstream. Within an enterprise, different business areas require multiple small models to run in coordination, and the coexistence of multiple modalities and multiple models will be the norm.

Security and governance capabilities are also crucial. Data security, model security, application security, and how to effectively supervise the entire process will become the basis for the trustworthy implementation of AI within the enterprise.

At the same time, the role of agents is becoming increasingly prominent. A good agent must not only understand human intentions and formulate action plans, but also have the ability to execute. Especially in enterprise scenarios, this means that it must be able to connect systems, issue instructions, and link business processes.

"In order for intelligent agents to truly play a role, the system integration capabilities behind them are indispensable. The information systems of enterprises are highly fragmented. If AI is to complete the closed loop from understanding to execution, it must first solve the interconnection of underlying systems. As AI gradually penetrates into corporate processes, hybrid IT environments become the norm, with data and applications distributed on the cloud, off the cloud, at the edge, and even on device ends." Zhai Feng believes that it is necessary to use a hybrid cloud architecture to open up the underlying capabilities through a simple and unified API interface so that intelligent agents can truly "issue orders" and drive business execution.

As more and more applications and agents are developed, enterprises increasingly need to deliver, deploy, and launch them quickly, and the flexibility, agility, and automation of enterprise infrastructure become increasingly important. IBM therefore emphasizes a new concept, software-defined infrastructure: managing complex and changing compute, storage, and other resources in a more automated way, so that the infrastructure can flexibly expand or contract as the business changes, thereby achieving real cost reduction and efficiency gains.

Focusing on the four cores of data, models, security governance and intelligent agents, IBM is not only committed to building a technical foundation, but also accelerating the development of the intelligent agent ecosystem. By providing ready-to-use and customizable intelligent agent templates, enterprises can quickly get started and iterate quickly, truly transforming AI capabilities into business productivity. The value of AI in enterprise-level scenarios is realized step by step through this system engineering from "foundation to application".

It is not difficult to "build" an intelligent agent. Scenarios and data are the key entry points.

Looking back at the development path of artificial intelligence, the current technological focus is evolving from traditional AI assistants to more autonomous "AI agents". However, there are essential differences between AI assistants and AI agents: assistants rely on pre-trained data models, and the reasoning process is relatively simple; while true agents have the ability to make decisions autonomously based on the actual environment during operation, and the reasoning is more complex and closer to the essence of "intelligence".

"To build an efficient AI agent, we need to integrate three core capabilities: enterprise-level automation capabilities on the back end, whether it is API or MCP; front-end dialogue that allows the agent to interact naturally with users; and autonomous thinking capabilities." Wu Minda, senior technical expert in data and artificial intelligence at IBM Greater China Technology Division, said that only by integrating these three elements can AI agents be truly implemented and demonstrate their actual value.

Wu Minda, senior technical expert of data and artificial intelligence, IBM Greater China Technology Division

Against the backdrop of the rapid development of generative AI, it is not difficult for enterprises to build an agent, but they face many challenges in truly deploying agents at scale and releasing their value.

On the one hand, the agent technology ecosystem is fragmented. There are a large number of open-source frameworks and third-party tools on the market, and enterprises often need to combine multiple solutions in actual deployment, which makes system integration complex, especially when connecting to open systems or external interfaces.

On the other hand, although intelligent agents can be applied to multiple business scenarios in theory, enterprises often find it difficult to quickly identify high ROI entry points. Once entering the actual deployment stage, from development, launch to operation and maintenance optimization, the continuous evolution of intelligent agents puts higher requirements on management, and the "AgentOps" mechanism similar to AIOps is still being explored.

In response to these pain points, IBM officially released its agent strategy at Think 2025, proposing four core initiatives:
1. Agent orchestration: orchestrators plus multi-agent, multi-tool monitors that help execute complex tasks.
2. Open ecosystem integration: in response to accelerating technological evolution, IBM is committed to building a sustainable, scalable agent ecosystem, supporting rapid integration of open-source capabilities and providing pre-built agent templates to improve implementation efficiency.
3. Multiple construction modes: the IBM platform supports building agents with no-code, low-code, and professional-code approaches, so that users from different backgrounds can participate, and supports the transition from no-code to code mode for greater flexibility and scalability.
4. AI agent operations: the discovery, management, monitoring, and optimization of AI agents.

In addition, IBM also released three types of pre-built agents for enterprise business scenarios for the first time, covering the fields of human resources, sales and procurement. Among them, the human resources agent has been officially launched, and the sales and procurement agents will be available in June.

"When domestic customers use it, they can use these intelligent entities as templates, adjust them according to their actual needs, and quickly build their own intelligent entities," said Wu Minda.

As AI agents develop, enterprises' demand for data will become increasingly urgent. According to Gartner, through 2026, organizations that do not enable and support their AI use cases through AI-ready data practices will see more than 60% of their AI projects abandoned for failing to meet business SLAs.

Another very important point is that 90% of the data within an enterprise is unstructured. For this reason, IBM has made unstructured-data processing the focus of the watsonx.data platform upgrade, to promote comprehensive utilization of data assets.

From the perspective of the product logical architecture of IBM watsonx.data, the overall design is divided into five layers:
1. Deployment layer: consistent with all watsonx-series products, supporting flexible deployment across multiple clouds and on premises, adapting to the IT environments of different enterprises.
2. Storage layer: primarily object storage, supporting the object storage of mainstream cloud vendors and compatible with on-premises deployment.
3. Open data format layer: whether table data or documents, logs, and images, the platform can uniformly access, manage, and govern them through a unified metadata system. This is one of the most challenging areas in the industry today and a technical focus for IBM.
4. Data engine layer: diversified processing engines, from the early-supported Presto, Spark, and Db2 Warehouse to the Milvus vector database and engines from the newly acquired DataStax, so that one copy of data supports multiple query methods for flexible access.
5. Unified metadata governance layer: newly integrated semantic understanding capabilities generated through unified metadata governance, allowing users to ask questions in natural language.

Wu Minda believes that watsonx.data enables enterprises to unlock key business information in unstructured enterprise data. For example, it supports multiple document sources: user-uploaded documents, as well as connections to common enterprise document libraries such as Box, SharePoint, and FileNet.

It is worth mentioning that IBM watsonx.data's unique optimization of the RAG (retrieval-augmented generation) pipeline increases accuracy by 40% compared with the traditional RAG process. "Before document vectorization, the watsonx.data integration module first extracts entities and values, and then performs vectorization. When querying a large model, in addition to returning similar vector fragments, it can also provide the related entity and value information; with the assistance of these entities and values, accuracy improves."
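The entity-first indexing idea can be illustrated with a toy pipeline. Everything below is a stand-in, not watsonx.data's implementation: a regex key-value "extraction" replaces real entity recognition, and bag-of-words overlap replaces vector similarity. The point is only that retrieval returns exact entities and values alongside the matched chunk.

```python
# Toy RAG index that extracts entities/values from each chunk *before*
# "vectorization", then returns them with the retrieved text so the LLM
# prompt carries exact figures, not just fuzzy chunk text.

import re
from collections import Counter

def extract_entities(text: str) -> dict:
    """Toy extraction: pull 'word: number' pairs, e.g. 'revenue: 120'."""
    return {k.lower(): float(v)
            for k, v in re.findall(r"(\w+):\s*([\d.]+)", text)}

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' standing in for a vector model."""
    return Counter(re.findall(r"\w+", text.lower()))

def similarity(a: Counter, b: Counter) -> int:
    return sum((a & b).values())

class TinyRAGIndex:
    def __init__(self):
        self.chunks = []  # (embedding, text, entities)

    def add(self, text: str):
        self.chunks.append((embed(text), text, extract_entities(text)))

    def query(self, question: str) -> dict:
        q = embed(question)
        emb, text, entities = max(self.chunks,
                                  key=lambda c: similarity(q, c[0]))
        # Return the chunk *and* its pre-extracted entities/values
        return {"chunk": text, "entities": entities}

index = TinyRAGIndex()
index.add("Q3 report: revenue: 120 and margin: 0.35 improved year on year.")
index.add("The new plant opened in Suzhou with capacity: 5000 units.")
hit = index.query("what was the Q3 revenue and margin?")
# hit carries both the matched chunk and {'revenue': 120.0, 'margin': 0.35}
```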

watsonx.data integration is a comprehensive data integration tool that is not only compatible with traditional structured data processing tools (such as DataStage, CDC, Data Replication, etc.), but also integrates capabilities such as StreamSets (stream processing) and Databand (observability). More importantly, it formally incorporates the processing of unstructured data into a unified platform, supports the integration of multi-source heterogeneous data, and helps enterprises complete the first step of transformation from "raw materials" to "usable data."

After data is extracted into watsonx.data, watsonx.data intelligence takes responsibility for unified governance of the landed data. By building a semantic layer, data lineage analysis, and permission management, it provides accurate, controllable data support for upper-level applications such as AI question answering and BI reports. Whether a user queries a RAG knowledge base in natural language or queries reports through traditional SQL, watsonx.data intelligence ensures both rest on the same data logic and permission model, guaranteeing consistency and compliance in data use.

Overall, these three products each play their part and together build a data foundation for the AI era: watsonx.data integration is responsible for integration; watsonx.data serves as the data warehouse; watsonx.data intelligence generates the associated semantic layer that connects AI and BI (business intelligence). With this complete, closed-loop data governance, enterprises can not only use unstructured data more efficiently, but also truly unleash the maximum value of data assets in AI and BI scenarios.

AI + automation enables efficient execution and operation of agents

The reality enterprises face today is that they run thousands of application systems on average, of varying ages and technical architectures. Enterprise IT environments are becoming ever more complex, and the technology stack is no longer monolithic but an increasingly intricate ecosystem. When enterprises try to build AI agents and truly make them "move", that is, go from understanding and analysis to execution, the problems emerge: how do they connect to existing systems, and how do they put agents into operation? This step is key to whether AI can generate actual business value in the enterprise.

"If AI is required to learn these interactions and become familiar with these interfaces one by one, the training and deployment costs will be enormous." Zhang Cheng, senior technical expert in automation at IBM Greater China Technology Division, said that IBM does not deny the potential of Agent, but in order to truly achieve efficient execution of Agent, the first thing to solve is the problem of deep integration with the company's existing systems - this is not "AI solving its own problems", but AI must rely on a complete set of integration capabilities to deal with the system.

Zhang Cheng, senior automation technical expert of IBM Greater China Technology Division

To this end, IBM launched the "IBM Hybrid Integration" platform at this conference. The underlying logic of this platform is, "AI is the brain, but to really work, it needs tentacles, and these tentacles are integration capabilities." This set of capabilities covers both cloud SaaS services, such as SAP, ServiceNow, and Workday, and can also penetrate into traditional local systems and even connect to the industrial control layer, such as SCADA, OPC, Modbus and other industrial protocol systems.

IBM has accumulated a lot of experience in local integration for many years. This time, through the acquisition of webMethods, which has rich experience in cloud integration, the ability to "connect on-cloud and off-cloud" has been further enhanced. Ultimately, the goal of this platform is to help enterprises build an intelligent hub with business agility and flexibility - through AI-enabled integration, AI can not only think, but also act.
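The "tentacles" idea, one uniform call surface over very different back ends, can be sketched as an adapter layer. Both connectors below are mocks invented for illustration (real SAP or Modbus access would need the respective client libraries); only the shape of the interface is the point.

```python
# Illustrative adapter layer: one uniform interface over a SaaS-style API
# and an industrial Modbus-style register map, so an agent can "issue
# orders" without knowing each protocol. All classes here are mocks.

from abc import ABC, abstractmethod

class Connector(ABC):
    """Uniform interface an agent calls without knowing the protocol."""
    @abstractmethod
    def read(self, resource: str): ...

class SaaSConnector(Connector):
    """Stands in for a cloud SaaS API (e.g. a ticketing system)."""
    def __init__(self, records: dict):
        self.records = records
    def read(self, resource: str):
        return self.records.get(resource, "not found")

class ModbusLikeConnector(Connector):
    """Stands in for an industrial device exposing numbered registers."""
    def __init__(self, registers: list):
        self.registers = registers
    def read(self, resource: str):
        return self.registers[int(resource)]  # resource = register index

def agent_read(connectors: dict, address: str):
    """Agent-side call: 'system.resource' is routed to the right connector."""
    system, resource = address.split(".", 1)
    return connectors[system].read(resource)

hub = {
    "crm": SaaSConnector({"ticket42": "open"}),
    "plc": ModbusLikeConnector([0, 17, 230]),  # e.g. a sensor in register 2
}
agent_read(hub, "crm.ticket42")  # 'open'
agent_read(hub, "plc.2")         # 230
```

The design choice worth noting: the agent sees one addressing scheme, so adding a new back end means writing one connector, not retraining the agent on a new interface.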

At the same time, as enterprises deploy AI agents at scale, making multi-agent chains visible and operable has become a practical problem that must be solved. An AI agent is essentially an application: when it performs tasks, it often calls operations in other systems, forming a complex chain. With potentially hundreds of agents running simultaneously within an enterprise, an error in any link can affect the stability and accuracy of the entire task.

In practice, agents may encounter a variety of faults: network fluctuations causing disconnections, Java memory overflows bringing applications down, database log errors, and exhaustion of CPU or disk resources on the underlying servers. These problems span the application, data, network, and hardware layers. To keep the agent chain stable, the key is full-link, full-stack monitoring and intelligent diagnosis.

IBM has built an intelligent operation and maintenance system from the bottom layer to the application through its series of IT automation products: SevOne focuses on underlying network monitoring, can collect and analyze network performance data, and quickly identify network-related problems; Instana provides full data analysis of applications, which is different from traditional sampling methods. It ensures that every call and every link is fully recorded to facilitate problem tracing; Concert, as a unified AI operation and maintenance platform, aggregates various error information such as network, application, database, and security, and uses AI capabilities to perform root cause analysis, fault location and automatic repair suggestions.

This entire system can not only detect problems in the Agent link in real time, but also provide repair suggestions through AI assistance, such as system patch upgrades, security vulnerability repairs, certificate updates and other operation and maintenance actions.
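The cross-layer aggregation and root-cause step can be caricatured in a few lines. This is a toy heuristic, not how Concert works: alerts from different layers are grouped into incidents by time proximity, and the earliest alert in a group is flagged as the suspected root cause.

```python
# Toy cross-layer alert correlation: group alerts within a time window
# into incidents; treat the earliest alert in each group as the suspected
# root cause. Real AIOps platforms use far richer signals than timestamps.

from dataclasses import dataclass

@dataclass
class Alert:
    ts: float        # seconds since start
    layer: str       # "network" | "app" | "db"
    message: str

def correlate(alerts, window=30.0):
    """Group alerts within `window` seconds; earliest alert = suspected root."""
    incidents = []
    for alert in sorted(alerts, key=lambda a: a.ts):
        if incidents and alert.ts - incidents[-1][-1].ts <= window:
            incidents[-1].append(alert)
        else:
            incidents.append([alert])
    return [{"root_cause": inc[0], "impacted": inc[1:]} for inc in incidents]

alerts = [
    Alert(105.0, "app", "service latency above SLA"),
    Alert(100.0, "network", "packet loss on switch-3"),
    Alert(112.0, "db", "query timeouts"),
    Alert(500.0, "app", "OutOfMemoryError"),
]
report = correlate(alerts)
# First incident's suspected root cause is the network alert at t=100
```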

In addition, when companies deploy AI customer service and encounter "slow response" issues, on the surface this is a customer-experience problem, but in essence it is often caused by insufficient or poorly managed underlying resources.

At this point, the key response strategy is automated elastic resource management. In other words, when the AI system detects a resource shortage, it can automatically request more resources to scale out elastically and keep AI customer service performance stable. Likewise, once the system load drops, it can automatically release the excess resources to avoid unnecessary resource occupation and cost waste.
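The scale-out/scale-in policy described here can be sketched as a single control-loop function. The thresholds, bounds, and one-replica-per-tick step are illustrative assumptions, not IBM defaults.

```python
# Minimal sketch of an elastic-scaling policy: add capacity above a high
# watermark, release it below a low watermark, always within fixed bounds.

def next_replicas(current: int, utilization: float,
                  lo: float = 0.3, hi: float = 0.8,
                  min_r: int = 1, max_r: int = 10) -> int:
    """Return the replica count after one control-loop tick."""
    if utilization > hi:
        return min(current + 1, max_r)   # request more resources
    if utilization < lo:
        return max(current - 1, min_r)   # release excess resources
    return current                        # steady state

# Simulated load spike and recovery
replicas = 2
for util in [0.9, 0.95, 0.85, 0.5, 0.2, 0.1]:
    replicas = next_replicas(replicas, util)
# replicas grows to 5 during the spike, then shrinks back to 3
```

As the article notes, a real loop must also make each scaling action compliant and auditable; this sketch covers only the sizing decision itself.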

"This process not only requires the infrastructure to have flexible scheduling capabilities, but also requires the scheduling actions to be compliant, secure, and controllable. This includes automatic deployment of new applications, permission management, operation auditing, network configuration, etc. All require a complete automated governance process," said Zhang Cheng.

In this scenario, IBM emphasizes IT automation capabilities with AI at the core. For example, the recently acquired HashiCorp, now integrated as an IBM sub-brand, is an important part of IBM's layout in this field. HashiCorp's open-source products such as Terraform, Vault, and Consul are widely used by companies worldwide and in China, especially for automated management in cloud-native and elastic computing, and have formed real industry influence.

In summary, IBM has built a top-to-bottom AI-driven automation capability through its own automation software system and HashiCorp, the underlying infrastructure automation platform. IBM also emphasizes the low-code and no-code features of the platform, so that resource adjustment and automation operations no longer rely on professional operation and maintenance teams, further reducing the technical threshold and labor costs of enterprises.

At a time when an average of 27% of cloud resource spending is wasted, IBM hopes to use AI + automation to help companies find and eliminate resource waste at the root, thereby achieving a more efficient and flexible support system for AI applications.

Bringing AI into practical application scenarios

In the process of digital transformation of enterprises, Zhang Xun, manager of the Garage Innovation Team of IBM Greater China Technology Division, used a vivid metaphor to describe the current challenges: it is like changing tires on a high-speed car - the enterprise cannot stop, but must ensure safety and pursue a faster and more stable development rhythm. Faced with such multiple pressures, the transformation path proposed by IBM is not achieved overnight, but emphasizes the strategy of "small steps and fast running" - accumulating experience in local pilots, gradually building a "digital foundation" for intelligent operations, and ultimately achieving innovation-driven transformation and upgrading.

Taking manufacturing as an example, digital transformation there is regarded as a "top-priority project" and must cover the entire life cycle from product development to customer service. Relying on the watsonx platform and a strong IT and operations foundation, IBM is helping manufacturers achieve intelligent upgrades across R&D, production, supply, marketing, and service.

Zhang Xun, Garage Innovation Team Manager, IBM Greater China Technology Division

In the R&D stage, AI is becoming an intelligent engine for product prototyping and process design. By training on a large amount of CAD manufacturing data, IBM watsonx can automatically generate design drafts that meet industry standards and continuously optimize the model until the optimal solution for performance and specifications is achieved.

At the same time, with the integration of industrial databases and AI algorithms, companies can quickly evaluate the quality impact of materials and processes and reduce trial and error costs. In addition, combined with IBM's high-performance computing platform LSF, companies can conduct intelligent simulation tests in a virtual environment to verify the reliability of products in multiple scenarios. The construction of the R&D knowledge base also realizes large-scale knowledge sharing within the company.

In the field of intelligent manufacturing, the production process focuses on AI quality inspection and predictive maintenance. Through IBM Maximo, equipment is monitored in real time, and IBM watsonx analyzes historical and real-time data to detect potential failures in advance, helping companies intervene early and reduce downtime losses.

In terms of supply chain management, IBM emphasizes "decision intelligence". With the help of operations optimization technology, enterprises can achieve the optimal allocation of human, material and financial resources and improve the overall efficiency of the system. The ability to dynamically respond to raw material fluctuations and market uncertainties is becoming the key to supply chain resilience.

In terms of sales and market forecasting, IBM watsonx supports in-depth analysis and trend forecasting of sales behavior, and can dynamically adjust procurement strategies according to market fluctuations to achieve more refined sales and procurement plans. In the field of customer service, IBM responds to customer inquiries by building a large-scale intelligent customer service system to improve service efficiency and customer satisfaction.

Behind this series of intelligent scenarios is the powerful "digital operations foundation" built by IBM: the control tower enables intelligent decision management and, embedded with IBM or third-party agents, can simulate complex business scenarios and generate response plans; the Instana observability platform provides enterprises with full-stack, AI-driven monitoring and intelligent root cause analysis; and the security operations platform ensures security compliance through the linkage of multi-system data.

IBM also summarized four high-ROI scenarios. First, the ELM + watsonx integrated solution for R&D management: ELM handles lifecycle management while watsonx provides intelligent analysis to identify cross-departmental conflicts and optimize requirement allocation; it has helped a car company improve review efficiency by 35%. In virtual verification, the solution also helped a semiconductor company shorten its prototype verification cycle by 60%. Meanwhile, combining defect prediction from the ELM bug library with a time-series model, high-risk modules can be identified three weeks in advance.

Second, intelligent equipment management. This covers real-time monitoring, an AI maintenance assistant, root cause analysis, spare-parts demand forecasting, and automatic triggering of procurement processes. Each link is driven by an independent agent and executed in series.

Third, improved production-sales coordination and supply chain resilience. IBM Planning Analytics combines agent-based analysis to predict uncertainty, optimize procurement, inventory, and production rhythm, and improve response speed and resource utilization.

Finally, comprehensive budget management. Through pre-built finance, procurement, and other agents combined with the enterprise's ERP/CRM systems, budget control and cost forecasting can be achieved, helping enterprises operate steadily amid market fluctuations.

"The original spare parts inventory management of a certain car company was highly dependent on manual operations. Faced with as many as 60,000 SKUs, the cumbersome process of manual price inquiry and PDF file comparison, the operation efficiency was low and the cost was high." Zhang Xun revealed that IBM built an AI-driven spare parts inventory optimization system, from consumption prediction, threshold analysis to automatic quotation and procurement process integration, to achieve full process intelligence. With the continuous iteration and optimization of the model, it not only reduced inventory backlogs, but also avoided shortages of key components, helping the company achieve a cost savings target of more than 8 million yuan.

Final thoughts

IBM has shown an increasingly pragmatic attitude, focusing on hybrid cloud and AI capability building and committed to accelerating the real implementation of enterprise-level AI through improvements to its technology platforms and ecosystem. In contrast to the "mythologized" AI applications on social media, IBM emphasizes that deploying enterprise-level AI is a systematic project that must confront challenges such as data governance, system integration, business process reconstruction, and infrastructure adaptation: more a marathon than an overnight leap.

In this long-term construction, IBM has built a complete set of enterprise AI implementation frameworks through its full-stack capabilities. From the underlying hybrid cloud architecture and host system, to the middle-level software platform and intelligent tools, to the upper-level business development and integration capabilities, IBM is trying to help enterprises achieve the integration of AI and business in a complex IT environment.

In addition, IBM has always regarded open source as an important bridge between technological innovation and enterprise-level implementation. From its early deep involvement in the Java and Eclipse era to its continued investment in AI and hybrid cloud today, IBM is not only an active contributor to the open source community, but also a long-term practitioner. In terms of AI, IBM has open-sourced some of its self-developed Granite models, and in the hybrid cloud field, it continues to expand its influence through Red Hat, the world's largest open source contributor.

"The future of AI belongs to an open ecosystem." Zhai Feng emphasized that open source not only represents flexibility and innovation, but also brings more possibilities to enterprises. When IBM integrates open source technology into its products, it will perform enterprise-level security reinforcement and scalability optimization to ensure that customers can enjoy the fruits of innovation while meeting security and controllable business requirements.

In pushing AI through the "last mile" of business implementation, IBM Consulting's industry depth and customer understanding have become key advantages. Relying on the consulting team's deep insight into industries and scenarios, combined with IBM's own technical capabilities, IBM can help customers truly transform AI from tool to value and create measurable business results. This is the core of IBM's continued investment in integrating open source, platforms, and service capabilities.