Behind the popularity of MCP: Has the productivity era of AI Agents arrived?

Written by
Audrey Miles
Updated on: June 26, 2025
Recommendation

The MCP protocol is leading a new revolution in AI productivity, breaking the integration difficulties of traditional AI models and tools.

Core content:
1. The birth background and core value of the MCP protocol
2. How MCP simplifies the integration process of AI models and tools
3. MCP's practical performance and future prospects in ecological integration capabilities

Yang Fangxian
Founder of 53A / Most Valuable Expert of Tencent Cloud (TVP)



In July 2024, in an office in San Francisco, California, Anthropic engineer David Soria Parra stared at his screen in frustration.


While thinking about how to get more employees to work deeply with the company's existing models, he found that Claude Desktop had limited, non-extensible functionality, while his IDE lacked Claude Desktop's practical features. As a result, he had to copy content back and forth between the two, which was tedious.


“I realized that this was an 'MxN' problem, meaning multiple applications and multiple integrations, and that it was best solved with one protocol.”


After a few weeks of thinking, David came up with an idea: build something similar to LSP (the Language Server Protocol) to standardize communication between AI applications and their extensions. He approached Justin, another engineer at Anthropic; the two hit it off and started building.


Four months later, the idea took concrete shape and was presented to the public. That is MCP.


Nowadays, MCP is widely discussed across major platforms, and many large-model service providers have announced support for it. In the secondary market, MCP concept stocks are also being hotly speculated on.


Amid the enthusiasm, many questions continue to emerge: Why is MCP so popular? Can it become a truly universal standard? What is the business logic behind the large model manufacturers joining in? In addition, does the popularity of MCP mean that the productivity era of AI Agents has truly arrived?


One

MCP, the USB-C interface for AI applications


For a long time, integrating AI models with external tools has faced dual challenges: the high cost of custom development and the difficulty of guaranteeing system stability. In the traditional model, developers had to build a dedicated interface for every newly connected tool or data source. This "one-to-one" adaptation approach not only wastes resources but also makes the system architecture fragile.


The MCP protocol was created to solve these pain points.


Its core value lies in standardized interaction rules. Through MCP, developers only need to make models and tools follow the protocol standard to achieve plug-and-play, reducing the original "M×N" integration complexity to "M+N". In this way, AI models can directly call databases, cloud services and even local applications through MCP, without a separate adaptation layer being developed for each tool.
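To make the "M+N" idea concrete, here is a minimal sketch of a single tool exposed once as an MCP server, using the FastMCP helper from the official Python SDK (the server name, tool, and data are hypothetical; SDK details may differ between versions). Any MCP-compatible client can discover and call this tool without a bespoke adapter being written for it.

# notes_server.py -- a minimal MCP server sketch (hypothetical tool and data).
# Assumes the official Python SDK is installed: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

# The tool is implemented once here; any number of MCP hosts can reuse it unchanged.
mcp = FastMCP("local-notes")

@mcp.tool()
def search_notes(keyword: str) -> str:
    """Return note titles containing the keyword (a stand-in for a real data source)."""
    fake_index = ["MCP launch plan", "Q3 budget", "MCP vs A2A notes"]
    hits = [title for title in fake_index if keyword.lower() in title.lower()]
    return "\n".join(hits) or "no matches"

if __name__ == "__main__":
    # The default transport is stdio, which desktop hosts can spawn as a subprocess.
    mcp.run()

With N such servers and M applications that speak MCP, each side is written only once, which is exactly the "M+N" economics described above.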



At the practical level, MCP has already demonstrated strong ecosystem-integration capabilities.


For example, Anthropic's Claude desktop application connects to the local file system through an MCP server, enabling the AI assistant to read document content directly and generate context-aware answers; the development tool Cursor installs multiple MCP servers (such as Slack and Postgres) to switch seamlessly between tasks within the IDE.
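For illustration, the host side (the role Claude Desktop or Cursor plays here) might attach to such a server roughly as follows. This is a sketch built on the official Python SDK's stdio client; the server file and tool name come from the hypothetical sketch above, and real hosts handle this wiring through their own configuration rather than hand-written code.

# host_sketch.py -- how an MCP host might attach to one server over stdio (sketch).
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Spawn the hypothetical server from the previous sketch as a subprocess.
    params = StdioServerParameters(command="python", args=["notes_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()  # discover what the server offers
            print([tool.name for tool in tools.tools])
            result = await session.call_tool("search_notes", {"keyword": "MCP"})
            print(result.content)

if __name__ == "__main__":
    asyncio.run(main())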


MCP seems to be slowly becoming what Justin described: "We agree that MCP is like the USB-C interface for AI applications. It is a universal interface that connects the entire ecosystem."


However, there is a long and important story between MCP's release and its breakout.


In November 2024, MCP was released and quickly attracted the attention of developers and companies in the industry. However, it was not as popular as it is now, because people were not yet clear about the value of agents; even if the "M×N" integration complexity of Agents were solved, no one knew whether AI productivity would take off.


This uncertainty mainly stemmed from the fact that applications were slow to absorb large-model technology even as the models themselves kept upgrading and iterating. In addition, social platforms were full of conflicting voices about agents, leaving people with little confidence that AI technology would land in real industries. Even where good directions and applications existed in the market, it was hard to tell whether AI had truly been converted into productivity or was merely integrated on the surface and unable to make decisions. Verifying that takes a great deal of time.


The turning point came when Manus released its framework and OpenAI officially announced its support for MCP.


The multi-agent collaboration capabilities demonstrated by Manus perfectly illustrated users' ultimate expectations for AI productivity. And when MCP used the chat interface to deliver the novel experience of "dialogue is operation", where users only need to issue instructions in an input box to directly trigger system-level operations such as file management and data retrieval, a cognitive shift began: AI can really help complete actual work.


This disruptive user experience, in turn, further increased MCP's popularity. It is fair to say that the release of Manus was an important factor in pushing MCP into the spotlight.


Beyond a "product salesman" like Manus, OpenAI's official endorsement also helped elevate MCP to the lofty position of "universal interface".


On March 27, 2025, OpenAI announced a major update to its core development tool, the Agents SDK, officially supporting the MCP protocol. When this giant, which holds some 40% of the global model market, announced support for the protocol, it meant that MCP began to take on infrastructure-level properties similar to HTTP. MCP formally entered the public eye, and its popularity kept climbing.


This made everyone see the possibility of an "HTTP of AI" becoming reality. Subsequently, platforms such as Cursor, Windsurf, and Cline successively connected to the MCP protocol, and the Agent ecosystem built around MCP gradually grew stronger.


Two

MCP is here.

Is the Agent ecosystem far behind?


Can MCP really become the de facto standard for AI interaction in the future?


On March 11, LangChain co-founder Harrison Chase and LangGraph lead Nuno Campos held a heated debate on whether MCP will become the de facto standard for future AI interactions. Although it ended without a conclusion, it greatly stimulated people's imagination about MCP.


It is worth noting that during this debate, LangChain also ran an online poll. The result was unexpected: only 40% of participants backed MCP as the future standard.



The remaining 60% who did not back MCP suggest that its path to becoming the de facto standard for future AI interactions will not be smooth.


What are their concerns?


The most noteworthy issue is the separation between technical standards and commercial interests. This can be seen from the actions of players in China and abroad after MCP's release.


Shortly after Anthropic released MCP, Google launched A2A (Agent2Agent), a protocol of its own.


If MCP paves the way for individual intelligent agents, allowing them to easily reach various "resource points", then the goal of A2A is to build a huge communication network connecting these intelligent agents, allowing them to "talk" to each other and work together.


In fact, at bottom, whether it is MCP or A2A, the essence is a contest for the Agent ecosystem.


So what trends are emerging in the Chinese market right now?


Specifically, most of the action is concentrated among the large-model vendors. Since April, Alibaba, Tencent, and Baidu have successively announced support for the MCP protocol.


Among them, Alibaba Cloud's Bailian platform launched the industry's first full-lifecycle MCP service on April 9, integrating more than 50 tools such as Amap and Wuying Cloud Desktop, and able to generate a dedicated Agent in 5 minutes. Alipay and the ModelScope community took the lead in launching a "Payment MCP Server" service in China, allowing AI agents to access payment capabilities with one click.


On April 14, Tencent Cloud upgraded its large-model knowledge engine to support calling MCP plug-ins and accessing ecosystem tools such as Tencent location services and WeChat Reading. On April 16, Alipay launched the "Payment MCP Server", allowing developers to quickly add payment functions through natural-language commands and close the commercialization loop for AI services. On April 25, Baidu announced full compatibility with the MCP protocol and launched the world's first e-commerce transaction MCP and search MCP services; its AI Cloud Qianfan platform has connected to third-party MCP servers, and its search platform indexes resources across the web to reduce development costs.


It is clear that the MCP playbook of China's large-model vendors is a "full closed loop". From the Alibaba Cloud Bailian platform integrating Amap into its MCP service, to Tencent Cloud supporting MCP plug-in calls and access to WeChat Reading and other parts of its ecosystem, to Baidu launching its search MCP service, all of them are using MCP to play to their own strengths and reinforce their own ecosystem barriers.


There is profound business logic behind this strategic choice.


Imagine if Alibaba Cloud's platform allowed Baidu's map services to be called, or if the Tencent ecosystem opened its core data interfaces to external models: the differentiated advantages built on the data and ecosystem moats each vendor has painstakingly constructed might collapse. It is precisely this demand for absolute control over "connectivity" that lets MCP quietly redistribute control over AI-era infrastructure under the guise of technical standardization.


A contradictory tension is emerging: on the surface, MCP is promoting the standardization of technical protocols through unified interface specifications; in essence, each platform is defining its own connection rules through private protocols.


This divergence between an open protocol and closed ecosystems will inevitably become a deep obstacle keeping MCP from becoming a truly universal standard.


Three

In the wave of AI's industrial adoption,

let's look at the true value of MCP


Perhaps there will never be an absolute "unified protocol" in the future, but the standards revolution triggered by MCP has opened the floodgates for an explosion of AI productivity.


At present, each large-model vendor is building its own "ecological enclave" through the MCP protocol. This "full closed loop" strategy will expose the deep contradiction of a fragmented Agent ecosystem. At the same time, however, it can release the capabilities these ecosystem builders have accumulated, quickly form an application matrix, and drive the adoption of AI.


For example, the advantages large companies built up in the past (such as Alipay's payment technology, user scale, and risk-control capabilities) were originally confined to their own businesses; once opened up through a standardized interface such as MCP, these capabilities can be called by many more external developers. Other companies' AI Agents, for instance, no longer need to build their own payment systems and can call Alipay's interfaces directly. This also attracts more participants onto the large companies' infrastructure, creating dependence and network effects and expanding their ecosystem influence.


This kind of "enclosure-style innovation" has, to a certain extent, accelerated the industrial penetration of AI technology.


From this perspective, the future Agent ecosystem may well be pushed toward a pattern of "limited openness".


Specifically, core data interfaces will remain firmly controlled by the large companies, but in non-core areas, cross-platform "micro-standards" may gradually take shape, driven by the technical community and by regulators. This "limited openness" can both protect vendors' ecosystem interests and avoid a completely fragmented technical ecosystem.


In this process, the value of MCP will also shift from "universal interface" to "ecosystem connector".


It no longer seeks to become the only standardized protocol, but serves as a bridge for dialogue between different ecosystems. When developers can easily achieve cross-ecosystem agent collaboration through MCP, and when users can seamlessly switch intelligent agent services between different platforms, the Agent ecosystem will truly usher in its golden age.


All of this hinges on whether the industry can strike a delicate balance between commercial interests and technical ideals. That is the change MCP brings beyond its value as a tool.


In fact, building the Agent ecosystem does not depend on the emergence of any single standard protocol, and the real-world adoption of AI does not hinge on opening up any single link; it hinges on consensus.


Just as Anthropic engineer David originally envisioned: what we need is not only a "universal socket", but also a "grid" that makes sockets compatible with one another. And that grid requires not only technical consensus, but also a global dialogue about the rules of infrastructure in the AI era.


At a time when AI technology is iterating rapidly, and under the "catalysis" of MCP, vendors are accelerating toward that technical consensus.