What monetization opportunities does MCP bring?

Written by
Audrey Miles
Updated on: July 9, 2025

An analysis of the monetization prospects of the Model Context Protocol (MCP).

Core content:
1. The reasons behind the rapid rise of MCP and the market popularity
2. How MCP promotes the big model ecosystem and monetization opportunities
3. The impact of MCP on programmers' workflow and future trends

Yang Fangxian
Founder of 53AI/Most Valuable Expert of Tencent Cloud (TVP)

If you have opened this article, you have likely noticed that MCP has been surging in popularity since mid-February. Let's look at two key indicators of an open source project's popularity: GitHub Stars and search indexes.

GitHub Stars have grown at an accelerating pace since February:

The WeChat Index likewise shows a sudden spike in traffic starting in February:

Judging from community discussions, we expect a group of MCP middleware providers (Server, Client, Server hosting, Registry, Marketplace, etc.) to emerge in China in April, each expanding from its existing areas of strength. This article aims to clarify some confusing concepts, share the monetization opportunities we see, and outline our plans and progress on MCP.

1. Why is MCP so popular?

MCP replaces today's fragmented integration methods with a single standard protocol for interaction between large models and third-party data, APIs, and systems[1]. It is an evolution from N x N point-to-point integrations to "one for all", enabling AI systems to obtain the data they need in a simpler and more reliable way.
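The "N x N versus one-for-all" point is easy to make concrete with a toy count (our illustration, not from the protocol docs):

```python
# Toy illustration: integrations needed to connect M AI clients
# to N external tools/data sources.
def pairwise_integrations(m_clients: int, n_tools: int) -> int:
    # Without a shared protocol, every client-tool pair needs its own adapter.
    return m_clients * n_tools

def protocol_integrations(m_clients: int, n_tools: int) -> int:
    # With one standard protocol (such as MCP), each side implements it once.
    return m_clients + n_tools

print(pairwise_integrations(10, 50))   # 500 bespoke adapters
print(protocol_integrations(10, 50))   # 60 protocol implementations
```

As both sides of the market grow, the gap between the two numbers widens rapidly, which is the economic core of the "one for all" argument.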

MCP was released in November last year and quickly drew a first wave of market attention. This February, Cursor, Windsurf, and Cline all began to support MCP. Unlike the thousands of callees integrated piecemeal in the early days, bringing MCP into AI programming can be seen as the clarion call of the large model ecosystem effect: it will channel a large number of developers from the AI programming tool side toward the callees, awakening the huge installed base of existing applications and systems.

From an industry chain perspective, this not only breaks the isolation between AI applications and the mass of classic online applications, but also deepens usage of AI programming tools, expands their user base, and opens up enough monetization room for AI applications. It will also drive more traffic to classic online applications, and may even spawn a market for operating professional software through natural language. For example, Blender MCP connects AI to Blender, so 3D models can be created, modified, and enhanced through simple text prompts.

In this ecosystem, MCP, AI applications, AI programming tools, and classic online applications all stand to benefit; whoever connects first benefits first. OpenAI has announced support for MCP, which will accelerate MCP's path to becoming core infrastructure for AI-native applications. (P.S. Since domestic large model vendors have not yet made a move on a model context protocol, it remains uncertain whether MCP will become a de facto standard in China.)

From the perspective of programmers, a key productivity group: a programmer no longer needs to switch to Supabase to check database status. Instead, they can use the Postgres MCP server to execute read-only SQL commands and the Redis MCP server to interact with the Redis key-value store directly from the IDE. When iterating on code, Browsertools MCP lets coding agents access the live environment for feedback and debugging. This is not new: when programmers use cloud products, they likewise prefer calling cloud product capabilities via APIs rather than jumping between the consoles of multiple cloud products.
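As an aside, the "read-only SQL from the IDE" pattern depends on the server refusing mutating statements. A minimal sketch of such a guard (a hypothetical helper, not the actual Postgres MCP server code):

```python
import re

# Statements a read-only, MCP-style database server would refuse.
# (Hypothetical allow/deny lists for illustration only.)
_WRITE_KEYWORDS = re.compile(
    r"^\s*(INSERT|UPDATE|DELETE|DROP|ALTER|CREATE|TRUNCATE|GRANT)\b",
    re.IGNORECASE,
)
_READ_PREFIXES = ("SELECT", "EXPLAIN", "SHOW", "WITH")

def is_read_only(sql: str) -> bool:
    """Return True if the statement looks like a read-only query."""
    if _WRITE_KEYWORDS.match(sql):
        return False
    return sql.strip().upper().startswith(_READ_PREFIXES)

print(is_read_only("SELECT * FROM users"))  # True
print(is_read_only("DROP TABLE users"))     # False
```

A production server would parse the SQL properly (or open the connection in read-only mode), but the division of responsibility is the same: safety lives in the MCP server, not in the IDE.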

Programmers are often early adopters of new technologies. As MCP matures, ordinary consumers can also use natural language to boost the prosperity of the MCP industry chain.

2. As MCP matures, does Function Calling become less useful?

First of all, MCP and Function Calling are both techniques that let large models call external data, applications, and systems. MCP was launched by Anthropic at the end of November 2024; Function Calling was first proposed by OpenAI in June 2023 (it creates an external function as an intermediary that receives the large model's request on one side and calls external tools on the other; most other large model vendors adopted similar schemes). However, they differ markedly in positioning, development cost, and more.

Different positioning:

  • MCP is a standard at the general protocol layer, a kind of "USB-C port for the AI field". It defines the communication format but is not bound to any specific model or vendor, and it abstracts complex function calls into a client-server architecture.
  • Function Calling is a proprietary capability of individual large model vendors. Each vendor defines it separately, so interface definitions and developer documentation differ between vendors. It lets the model directly generate a function call and trigger an external API, relying on the model's own contextual understanding and structured output capabilities.
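To make the protocol-layer point concrete: MCP messages ride on JSON-RPC 2.0, so a tool invocation looks roughly like the sketch below regardless of which model vendor sits on the client side (the method name follows the public MCP spec; the tool name and arguments are invented):

```python
import json

# An MCP tool invocation as a JSON-RPC 2.0 request. The "tools/call"
# method is from the public MCP spec; "query_database" is a made-up tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {"sql": "SELECT count(*) FROM orders"},
    },
}

# The same wire format works against any MCP server, no matter which
# model is on the client side -- that is the "USB-C" analogy.
wire = json.dumps(request)
print(json.loads(wire)["method"])  # tools/call
```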

Development costs vary:

  • Taking OpenAI as an example, implementing Function Calling requires writing a functional description in JSON Schema format for each external function and carefully designing a prompt template to improve response accuracy. If a requirement involves dozens of external systems, the design cost is huge and productization becomes extremely expensive.

  • MCP names the model-side runtime the MCP Client and the external-function runtime the MCP Server. It unifies the operating specifications of client and server and requires them to communicate according to an established message format. In this way, the MCP Server becomes a vehicle for collaboration among developers worldwide, and development results can be reused globally.
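For contrast, the per-function JSON Schema description mentioned above looks roughly like this (modeled on OpenAI's public tools format; the weather function is invented):

```python
# One external function described for OpenAI-style Function Calling.
# With dozens of systems, a schema like this must be written and
# prompt-tuned for every function -- the design cost the text refers to.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

print(weather_tool["function"]["name"])  # get_weather
```

Under MCP, an equivalent description is written once against the protocol and reused by any MCP client, instead of being re-specified per model vendor.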

Different interaction methods:

  • MCP achieves two-way communication through a standardized client-server architecture, requiring developers to pre-configure the server and define the interface.
  • Function Calling is actively triggered by the model: the model inserts a call request (e.g., in JSON Schema format) while generating text, and the host application parses it, executes it, and returns the result.
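The Function Calling loop just described can be sketched in a few lines (the function registry and the model output here are invented for illustration):

```python
import json

# Sketch of the host-application side of Function Calling: the model
# emits a structured call request, the host parses it, executes the
# matching function, and returns the result to the model.
def get_time(zone: str) -> str:
    return f"12:00 in {zone}"

REGISTRY = {"get_time": get_time}  # functions the host exposes

# What a model's structured output might look like:
model_output = '{"name": "get_time", "arguments": {"zone": "UTC"}}'

call = json.loads(model_output)
result = REGISTRY[call["name"]](**call["arguments"])
print(result)  # 12:00 in UTC
```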

Deep coupling with model capabilities:

  • Function calling is often deeply tied to the model’s contextual understanding. For example, GPT-4’s function calling can use the model’s reasoning ability to optimize calling parameters or adjust subsequent generated content based on the returned results.

  • As a universal protocol, MCP needs to transmit information through standardized interfaces, which may sacrifice some collaborative optimization space with specific models.

Real-time and low-latency requirements:

  • The calling logic of Function Calling is directly embedded in the model response process, which is suitable for scenarios with high real-time requirements (such as online payment and real-time data analysis).
  • MCP needs to be transferred through the MCP server, which may increase latency, especially when calling across networks.

In general, full adaptation of MCP will reduce reliance on Function Calling, especially in cross-platform, standardized tool-integration scenarios. However, Function Calling will remain irreplaceable in specific scenarios such as model-driven dynamic decision-making, real-time task execution, and proprietary ecosystem integration, and in some lightweight calling scenarios it has the advantage in efficiency. Going forward, the two can complement each other, with MCP as the base protocol layer and Function Calling as the model enhancement layer, jointly enabling seamless interaction between AI and the outside world.

3. MCP changes the supply side, but what it transforms is the consumer side

Different people have different understandings of the supply side and the consumer side. In this article, we define the supply side and the consumer side as follows:

  • Supply side: The industry chain that provides AI Agent services, including cloud vendors, large models, AI applications (including AI Agents), classic online applications, and various AI middleware service providers.

  • Consumer side: end users who use AI Agent.

First of all, we have to mention Devin and Manus.

The emergence of Devin marks a qualitative change in AI programming, from programming assistant to programmer agent. It no longer just completes and generates code; it can cover the entire process from requirements analysis → coding → testing → deployment → bug fixing, and handle complete tasks independently. Devin changed the programmer community (for domestic users of programmer agents, Lingma is recommended); Manus changed the vast population of ordinary Internet users. Interaction between users and AI is no longer the question-and-answer chatbot model, but a general AI agent that can mobilize online services beyond AI applications and independently realize the user's ideas end to end, a qualitative change from "passive response" to "active co-creation".

The more intelligent the result, the more complicated the process behind it. The view that "cognitive load is the core obstacle to engineering efficiency" is even more evident with AI Agents. AI Agents therefore have an even stronger need for efficient development and engineering paradigms.

Unlike classic Internet products, productizing and engineering an AI Agent is more complicated. E-commerce apps let users shop without leaving home, and chat apps let users socialize without leaving home; they are substitutes for physical activity, while an AI Agent is a substitute for mental labor, helping users complete the whole chain of activities from basic survival to higher-level creation. Relying only on Function Calling to reach external applications is clearly not an efficient development paradigm. MCP lets developers whip up the next Manus far more conveniently. It is like the HTTP protocol of the Internet world: it lets all clients and websites communicate under the same specification, encouraging developers worldwide to collaborate and accelerating the arrival of AGI.

4. Does MCP accelerate the monetization of large models?

From our observations, this is indeed the case.

Take Firecrawl as an example. This open source project provides:

  • Comprehensive website crawling: automatically crawls all accessible subpages of an entire website, without relying on sitemaps.

  • Data cleaning and formatting: automatically converts crawled page content into clean Markdown or structured data, removing ads, navigation bars, and other irrelevant page noise.

  • LLM-ready output: connects seamlessly to models, directly emitting LLM-ready formats, and integrates with various AI programming frameworks to accelerate data preprocessing.
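The "data cleaning" step can be illustrated with a toy extractor that keeps visible text and drops tags and common noise containers (a simplification; Firecrawl's real pipeline is far richer):

```python
from html.parser import HTMLParser

# Toy version of the cleaning step: keep visible text, drop markup and
# typical noise containers such as navigation and scripts.
class TextExtractor(HTMLParser):
    SKIP = {"script", "style", "nav", "aside"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

html = "<nav>menu</nav><h1>Title</h1><p>Body text.</p><script>x()</script>"
p = TextExtractor()
p.feed(html)
print(" ".join(p.parts))  # Title Body text.
```

The noise ("menu", the script body) is gone and only the content a model should see remains; an LLM-ready pipeline then renders this as Markdown.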

Before supporting MCP, Firecrawl could already crawl web pages automatically, but via traditional channels: users had to call the Firecrawl service manually through a REST API or SDK, with no direct integration with large models. In January this year, Firecrawl officially adopted the MCP protocol through integration with Cline. Developers can now invoke Firecrawl's crawling capabilities through an MCP server, automating the flow of "the AI model directly driving the web crawl." More importantly, users need not worry that protocol lock-in will limit scalability; richer large model capabilities will, after all, depend on multiple middleware suppliers like Firecrawl. MCP has thus unlocked the network effect among large model middleware suppliers and accelerated their ability to monetize.

The a16z Infra team has drawn an MCP Market Map[2]. It covers the most active areas of today's MCP ecosystem. Although many gaps remain, it offers plenty of inspiration for domestic innovation.

As MCP adoption continues to grow, infrastructure and tooling will play a key role in the scalability, reliability, and accessibility of the MCP ecosystem. This may produce an outcome quite different from the classic Internet industry chain: opportunities on the B side will be richer than on the C side.

  • MCP Client: as the calling side, the interactive entry point between users and the MCP ecosystem, focused on end-user functionality. For example, chat applications (such as Claude) provide natural language interaction, letting users invoke AI capabilities through conversation; coding tools (such as Cline and Cursor) bring the ability to call external applications and systems into the IDE for AI programming scenarios; and task automation tools help users automate repetitive work such as data processing and process scheduling to improve efficiency. Manus is a typical MCP Client.

  • MCP Server: As the called party, it provides backend service support, including various core functional modules. For example, databases (such as ClickHouse and Supabase) are responsible for data storage, query and management; design tools (such as Figma and Blender) support design creation, file collaboration and other functions; productivity tools (such as Notion and Obsidian) provide office collaboration services such as note management and knowledge organization; payment tools (such as Stripe) process online payment transactions and support capital flow in business scenarios.

  • MCP Marketplace: plays the role of an ecological hub, aggregating and distributing MCP-related tools, similar to an "app store". On the one hand, developers can publish MCP client and server tools here; on the other hand, users can easily discover and use various MCP tools (such as MCP.so, Glama), promoting the circulation and sharing of resources within the ecosystem.

  • Server Generation & Curation: Focus on the development and maintenance of MCP servers. Provide tools or frameworks (such as Mintlify, Stainless) to assist server development and simplify the construction process; optimize server configuration and function iteration to ensure stable server performance and adapt to different business scenario requirements.

  • Connection Management: Coordinates the interaction of various components in the MCP ecosystem. Manages connections between clients and servers, and between servers, to ensure efficient data transmission; optimizes connection stability, handles network protocol adaptation, request routing, etc., to ensure smooth interaction within the ecosystem.

  • Server Hosting: Provides operating environment support for MCP servers. With the help of cloud computing and other infrastructure (such as Cloudflare, Smithery), it hosts server code and data; is responsible for server operation, maintenance, expansion, and security protection to ensure continuous and stable operation of the server.

Recently, Higress, as an AI-native API gateway, has opened up its Remote MCP Server hosting solution to achieve seamless conversion from existing APIs to MCP. This solution has been officially adopted by Anthropic and published on the MCP GitHub introduction page.

In addition, Nacos released an MCP Registry to upgrade existing application interfaces to the MCP protocol with "zero changes". As the MCP Registry, Nacos acts as the control plane, providing management of existing services and dynamic service information definition, helping businesses bring the MCP Server protocol generated by the Higress gateway into effect dynamically through Nacos service governance, without changing existing interfaces.

The combination of Nacos + Higress, together with open source solutions such as Apache RocketMQ and OpenTelemetry, maximizes reuse of existing cloud-native technology components, greatly reducing the cost for classic Internet applications to build AI Agents.

Drawn by Li Yanlin (Yanlin), a senior technical expert at Alibaba Cloud

5. The more prosperous the MCP ecosystem is, the more it relies on gateways and observability?

An MCP Server is an encapsulation of a functional service; in essence it is a server that exposes a standardized interface via the MCP protocol. Whenever cross-network access is involved, identity authentication, authorization, data encryption and decryption, attack protection, and so on are required. That calls for an MCP gateway to manage and control MCP Servers.

Similar to the API Gateway, the MCP Gateway will enforce access control, route requests to the correct MCP server, handle load balancing, and cache responses for efficiency. This is especially important for multi-tenant environments, where different users and agents require different permissions. A standardized gateway will make MCP deployments more scalable and manageable by simplifying interactions between clients and servers, improving security, and providing better observability.

  • Authentication: Verify the identity of users, devices, or services to prevent unauthorized entities from accessing the ecosystem. For example, when a user logs in to an MCP client (such as Claude), the user verifies the identity through an account password, token, etc. to avoid malicious attacks or illegal access.

  • Authorization: Provides fine-grained control over permissions, and determines the scope of operations that users or services can perform after identity verification. For example, ordinary users can only use basic MCP server functions, while advanced users or specific services can obtain higher permissions such as database reading and writing, sensitive tool calls, etc.

  • Traffic control: Implement functions such as request filtering, rate limiting, and protocol conversion. For example, limit the flow of high-concurrency requests, intercept illegal requests, and uniformly handle encrypted transmission to improve the overall security and stability of the ecosystem.

The above capabilities have been implemented in the Higress Remote MCP Server hosting solution.
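What such a gateway does can be sketched in miniature: token authentication, per-client rate limiting, and routing to a named MCP server (a hypothetical toy, not the Higress implementation; all tokens and addresses are made up):

```python
import time
from collections import defaultdict, deque

# Toy MCP gateway: authentication, traffic control, and request routing.
TOKENS = {"secret-token": "alice"}                      # token -> user
ROUTES = {"db": "http://10.0.0.5:8080",                 # server name -> upstream
          "design": "http://10.0.0.6:8080"}
RATE_LIMIT, WINDOW = 5, 60.0                            # 5 requests / 60 s / user
_history = defaultdict(deque)                           # user -> request times

def handle(token: str, server: str) -> str:
    user = TOKENS.get(token)
    if user is None:
        return "401 Unauthorized"          # authentication
    now = time.monotonic()
    q = _history[user]
    while q and now - q[0] > WINDOW:       # drop requests outside the window
        q.popleft()
    if len(q) >= RATE_LIMIT:
        return "429 Too Many Requests"     # traffic control
    q.append(now)
    upstream = ROUTES.get(server)
    if upstream is None:
        return "404 Unknown MCP server"
    return f"routed to {upstream}"         # request routing

print(handle("secret-token", "db"))   # routed to http://10.0.0.5:8080
print(handle("bad-token", "db"))      # 401 Unauthorized
```

A real gateway adds TLS termination, load balancing across server replicas, and response caching, but the control points are the same three shown here.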

In the MCP ecosystem, observability is also an important infrastructure that cannot be ignored because the calling relationships are more complex and diverse:

  • Troubleshooting and problem diagnosis: logs record discrete events during the operation of each component in the ecosystem (such as MCP clients and servers). When problems occur, developers and operations staff can trace system behavior from these records and quickly locate the fault. Distributed tracing analyzes a request's call path across components, showing where errors or blockages occur and whether inputs and outputs meet expectations, which helps troubleshoot problems caused by cross-component interactions. The call chain can also be analyzed to understand overall calling patterns.

  • Performance optimization: aggregated metrics provide statistical analysis of key indicators such as resource usage (e.g., CPU and memory), response time, and throughput, identifying performance bottlenecks and informing configuration and architecture tuning. For example, if an MCP server responds slowly under high concurrency, its code can be optimized or hardware resources added in a targeted way.

  • Service quality monitoring : Real-time monitoring of the operating status and availability of services within the MCP ecosystem, timely detection of problems that affect user experience, such as service interruptions and excessive latency, and triggering of corresponding early warning mechanisms so that operation and maintenance personnel can respond quickly to ensure stable and reliable services.
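The metrics side can be illustrated with a toy aggregation over per-request latencies (the numbers and the SLO threshold are made up):

```python
import statistics

# Toy metrics aggregation for MCP server observability: compute average
# and tail latency from per-request samples and check them against an SLO.
latencies_ms = [12, 15, 14, 240, 13, 16, 11, 18, 250, 14]

avg = statistics.mean(latencies_ms)
p95 = sorted(latencies_ms)[int(0.95 * len(latencies_ms)) - 1]

print(f"avg={avg:.1f}ms p95={p95}ms")  # avg=60.3ms p95=240ms
if p95 > 200:
    print("alert: MCP server latency SLO breached")
```

Note how the tail (p95) exposes the two slow calls that the average largely hides; this is why service quality monitoring watches percentiles rather than means alone.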

As a standardized map service platform, AutoNavi has taken the lead in launching an MCP Server that provides 12 core functions for building enterprise-grade intelligent applications. We expect a large number of MCP Servers and MCP middleware to emerge quickly in China, accelerating the productization and engineering of AI Agents.