OpenAI embraces MCP. What new opportunities will there be under the “MCP economy”?

OpenAI and Anthropic join hands: the MCP protocol is reshaping a new era of AI tool integration.
Core content:
1. OpenAI announced its support for Anthropic's MCP protocol, which is of great significance to the agent field
2. How the MCP protocol simplifies AI tool integration and data silos
3. Analysis of agent startup opportunities and industry development trends based on the MCP protocol
The Agent field has seen new progress.
Early this morning, OpenAI CEO Sam Altman unexpectedly announced that OpenAI would support the Model Context Protocol (MCP) proposed by its competitor Anthropic across its products: "MCP is currently available in OpenAI's Agents SDK, and support for the ChatGPT desktop app and the Responses API will follow soon."
MCP is widely regarded as the "new standard for AI tool integration". The cooperation between the two giants means that a unified standard can now address the data silos and tool-integration problems that have long plagued AI applications.
With OpenAI's faster-than-expected support, MCP has effectively become the industry's de facto standard for how models call tools and resources. Once the Agent industry has a more unified protocol standard, it should develop considerably faster.
At this juncture, Jinqiu Fund has drawn on Anthropic's official documentation, along with research and reporting from overseas VCs and media, to take stock of MCP's current status and prospects, as well as possible opportunities and trends for Agent entrepreneurship.
If you are interested in these topics, please feel free to contact us (gzh@jinqiucapital.com)
01
What is MCP? A Standardization Revolution for AI Context
The Model Context Protocol (MCP) is an open standard first proposed and open-sourced by Anthropic in November 2024. In plain terms, MCP defines a common interface through which AI assistants obtain external "background information" and perform tool operations.
As the official analogy puts it, MCP can be thought of as the "USB-C port" for AI applications, providing a unified, plug-and-play way to connect all kinds of tools and data sources.
In the past, each AI application often had to develop its own plug-ins or integration code for every database, API, or knowledge base; MCP's vision is to solve this problem once and for all, letting any LLM client and any data-source server communicate over the same protocol.
This greatly simplifies how AI systems obtain contextual information and act on the user's behalf. In other words, MCP turns the unwieldy "M×N" integration problem of the past into a standardized "M+N" architecture, so developers no longer have to maintain a separate adapter for every pair of components.
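The arithmetic behind that claim can be made concrete with a toy calculation (illustrative only; the function names are ours, not part of MCP):

```python
# With M AI clients and N tools/data sources, point-to-point integration
# requires one bespoke adapter per (client, tool) pair, while a shared
# protocol needs one implementation per client plus one per tool.
def point_to_point_adapters(m_clients: int, n_tools: int) -> int:
    """Every client integrates every tool itself: M x N."""
    return m_clients * n_tools

def shared_protocol_adapters(m_clients: int, n_tools: int) -> int:
    """Each client and each tool implements the protocol once: M + N."""
    return m_clients + n_tools

# e.g. 10 AI apps connecting to 50 data sources
print(point_to_point_adapters(10, 50))   # 500 bespoke integrations
print(shared_protocol_adapters(10, 50))  # 60 protocol implementations
```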
According to Anthropic's official announcement ("Introducing the Model Context Protocol," anthropic.com, November 25, 2024), MCP follows a lightweight client-server model at the architectural level. Developers can wrap any source of information or functionality as an "MCP server" and let the AI assistant, acting as an "MCP client", call these servers.
Anthropic’s philosophy is that anything that can provide additional information or functionality to an AI assistant can be turned into an API tool and called autonomously by the LLM .
By calling these standardized tools, AI assistants (also called AI Agents) can uniformly implement various extended functions without relying on cumbersome intermediate frameworks. MCP specifically includes three types of participants:
MCP Server: An independently running service; each MCP server is responsible for one specific capability or data source and is accessible to the AI through the standard protocol. For example, an MCP server can front an enterprise database, a Slack message archive, a Git repository, a file system, or even browser control. A server can be local (accessing files, databases, etc. on the user's machine) or remote (reaching online services through APIs).
MCP Client: The protocol client embedded in an AI application, responsible for maintaining a one-to-one connection with a server and calling the tools or data it provides as the AI requires. Each MCP client instance connects to exactly one server, acting like a "plug-in driver" for the AI assistant.
Host application (Host): The program that runs the AI assistant or agent, such as ChatGPT, the Claude desktop app, an IDE plug-in, or any AI application that needs external capabilities. A host can start multiple MCP clients to connect to different servers, so several tools can be attached at once.
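Under the hood, client and server speak JSON-RPC 2.0 over a transport such as stdio or HTTP. A minimal sketch of the two most common requests might look like the following; the method names follow the published spec, but the tool name is hypothetical and the real exchange also includes an initialize handshake omitted here:

```python
import json

# Client -> server: ask the server what tools it offers.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Client -> server: invoke one tool with structured arguments.
call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_database",          # hypothetical tool name
        "arguments": {"sql": "SELECT 1"},  # tool-specific arguments
    },
}

print(json.dumps(call_tool_request, indent=2))
```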
With MCP, the LLM gains a standardized "bridge" to the outside world: standardized "tentacles" that reach into databases, applications, and toolboxes to pull in context and deliver smarter services. It amounts to a standardization revolution in AI contextual interaction, laying the foundation of open integration for the next generation of AI applications.
02
Main use cases of MCP: from intelligent agents to real-time data connectivity
MCP gives AI two key expansion capabilities: perceiving the environment and influencing the environment.
On the one hand, through the standardized protocol, AI can securely access external data sources in real time, whether an enterprise's internal business database (inventory, transaction records), live sensor readings, or public APIs on the internet, freeing it from the staleness of training data and giving it the latest, most dynamic information.
On the other hand, MCP makes AI no longer just a provider of information, but also an executor of action, capable of scheduling and manipulating external software tools to complete specific tasks, such as calling APIs to send messages, modifying database entries, operating calendar applications, or executing automated scripts.
In addition, MCP also provides a convenient user context injection mechanism, allowing AI to access users' personal data (such as historical conversations, preferences, local files) or internal enterprise knowledge bases and documents with authorization, thereby providing highly personalized and context-aware responses.
These three basic capabilities together constitute the core value of MCP, allowing AI to be more closely integrated into actual workflows and provide smarter and more practical services.
Supported by these basic capabilities, a series of concrete practice paradigms and innovative application models have emerged in the industry. In a16z's recent article "A Deep Dive Into MCP and the Future of AI Tooling", author Yoko Li lays out several observations:
Empowering developers with centralized workflows:
For developers, frequent context switching is a major pain point that reduces efficiency. MCP provides an ideal solution for this, allowing developers to seamlessly integrate and call external tools and services in their familiar integrated development environment (IDE).
For example, developers can directly execute database queries (such as Postgres MCP server), manage caches (such as Upstash MCP server), or use browser tools (such as Browsertools MCP) for real-time debugging and access console logs and other information in the IDE without leaving the current coding environment. This "local first" workflow significantly reduces interruptions and improves development efficiency.
Furthermore, MCP has also spawned a new paradigm for automatically generating servers: by crawling documents or API specifications, MCP servers can be quickly generated, allowing coding agents to instantly obtain high-fidelity contextual information or directly call tool capabilities, greatly reducing the burden of manual integration and writing boilerplate code.
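As a rough illustration of that generation paradigm, here is a toy converter from a simplified OpenAPI-style spec to MCP-style tool definitions. Real generators from companies like Mintlify, Stainless, or Speakeasy are far more sophisticated; the spec shape and field names below are simplifying assumptions:

```python
# Walk a (simplified) OpenAPI-style spec and emit one tool definition
# per operation, so a coding agent could call the API as MCP tools.
def spec_to_tools(spec: dict) -> list:
    tools = []
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            tools.append({
                "name": op.get("operationId", f"{method}_{path}"),
                "description": op.get("summary", ""),
                # Fall back to a permissive schema when none is declared.
                "inputSchema": op.get("requestBody", {"type": "object"}),
            })
    return tools

demo_spec = {
    "paths": {
        "/invoices": {
            "post": {"operationId": "create_invoice",
                     "summary": "Create an invoice"},
        }
    }
}
print(spec_to_tools(demo_spec))
```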
Generating multifunctional applications and innovative experiences:
The potential of MCP goes far beyond connecting existing tools. It is driving the evolution of client applications themselves into "universal applications" and creating a brand new user experience.
Taking the code editor Cursor as an example, as an MCP client, users can transform it into an integrated workstation with multiple capabilities such as Slack message sending and receiving, email sending (Resend MCP), and even image generation (Replicate MCP) by loading different MCP servers.
Even more imaginatively, combining multiple MCP servers in a single client can unlock complex new processes, such as allowing an AI agent to generate front-end interface code while calling an image generation server to create visual elements for the homepage. This model goes beyond the single function limitations of traditional applications.
At the same time, MCP is also lowering the threshold for using professional tools and benefiting a wider user group. Although the initial use cases are mostly focused on developers, clients like Claude Desktop are becoming the entry point for non-technical users to access the powerful capabilities of MCP.
It can be foreseen that more dedicated MCP clients for specific business scenarios (such as customer support, marketing content creation, and design assistance) will emerge in the future.
The design of the client itself and the interaction methods it supports play a decisive role in the final user experience. For example, "@" commands let users call any MCP server from within the client and pass the generated content seamlessly to downstream applications (such as Notion), forming a novel UX pattern.
Another notable example is the Blender MCP Server, which enables users with little to no knowledge of 3D modeling to create models through natural language descriptions.
As the community builds servers for Unity, Unreal Engine, and more, we are witnessing the birth and popularity of innovative workflows from text to 3D content generation in real time, which indicates that MCP will profoundly reshape the way people interact with digital tools.
Bridging existing integration platforms to activate a massive application ecosystem:
The standardized nature of MCP enables it to efficiently connect to existing mature integration platforms, and Zapier's practice is a typical example.
According to the Zapier official website (https://zapier.com/mcp), Zapier MCP acts as a bridge, allowing any MCP-compatible AI assistant (such as Cursor or Claude Desktop) to connect to the huge ecosystem of more than 7,000 applications and more than 30,000 preset actions on the Zapier platform.
This approach cleverly avoids the enormous complexity of developing massive application interfaces separately for AI, and directly reuses Zapier's existing integration capabilities.
Users pre-configure and authorize the specific actions that AI can perform in Zapier (for example, "send a message to a specific Slack channel", "create a task in a specified Jira project"), including selecting accounts, limiting parameters, etc., and then add the generated secure MCP server URL to the AI client.
When a user gives instructions to AI, AI communicates with Zapier through MCP, and Zapier securely performs the authorized actions.
This means that AI is no longer limited to conversations, but can directly perform actual tasks in various business software under user instructions, such as project management (Jira, Asana), customer relationship management (Salesforce, HubSpot), communication collaboration (Slack, Teams, Gmail), data processing (Google Sheets) and even version control (GitHub).
The practice of Zapier MCP has greatly lowered the technical threshold for integrating AI into daily business processes, transforming AI assistants into powerful workflow engines that can perform tasks across applications.
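The final wiring step described above can be sketched as a client configuration entry. The "mcpServers" key follows the convention popularized by Claude Desktop's JSON config file, but the URL below is a placeholder, not a real endpoint:

```python
import json

# Hedged sketch: registering a remote MCP endpoint in a client's config.
# The URL would be the secret, user-specific address issued by Zapier
# after the user authorizes specific actions on the Zapier side.
client_config = {
    "mcpServers": {
        "zapier": {
            "url": "https://example.invalid/mcp/your-secret-token",
        }
    }
}
print(json.dumps(client_config, indent=2))
```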
03
Current Ecosystem Landscape: Infrastructure and Early Practitioners
Although the MCP ecosystem is still in its early stages, a concrete shape built around the protocol has begun to emerge, providing initial landing scenarios and a support system for the business opportunities of the various participants described above. This emerging ecosystem includes not only the protocol's core, clients and servers, but also an increasingly critical infrastructure layer and early adopters willing to explore.
This is also explained in detail in a16z's article "A Deep Dive Into MCP and the Future of AI Tooling".
Current status of clients and servers: Today, high-quality MCP client implementations are mostly concentrated in coding-centric scenarios (the IDE integrations mentioned above, such as Cursor), reflecting the usual pattern of the developer community acting as technology pioneer. The server side shows early "local-first", single-user characteristics, tied to the protocol's initial reliance on transports such as SSE. As the protocol matures and more flexible mechanisms such as Streamable HTTP are adopted, clients are expected to expand into broader business scenarios (such as customer service and marketing), server deployment modes will be greatly enriched, and more service providers will wrap their capabilities in standard MCP interfaces.
Accelerated infrastructure construction : In order to support the prosperity of the MCP ecosystem, a series of key infrastructures are taking shape rapidly:
Discovery and marketplaces: Similar to npm for JavaScript or RapidAPI in the API economy, MCP marketplaces and directory services are emerging (platforms such as Mintlify's mcpt, Smithery, and OpenTools) that aim to make it easier for developers to discover, share, and reuse MCP servers (connectors), which is critical for standardized access and for letting AI agents call tools on demand.
Development and generation tools: To lower the barrier to building, companies such as Mintlify, Stainless, and Speakeasy provide server-generation tools that help developers quickly wrap existing data sources or APIs as MCP-compatible services.
Hosting and scaling: As applications multiply, server deployment, scaling, and stability become the focus; hosting solutions from Cloudflare, Smithery, and others are working to meet these challenges.
Connection and management: Especially in local-first scenarios, the complexity of connection and key management needs to be tamed; platforms such as Toolbase are beginning to offer such connection-management solutions. Together, these infrastructure improvements will greatly enhance the scalability, reliability, and ease of use of the MCP ecosystem.
Early adopters' practical exploration: Although MCP is still new, many forward-looking enterprises and developer tools have begun integrating it, demonstrating its value:
Block is using MCP to build automated agent systems designed to eliminate repetitive, mechanical tasks.
Zed, Replit, Codeium, Sourcegraph, and others in the developer-tools space are actively integrating MCP, enhancing their products by enabling AI agents to retrieve code context more accurately.
Confluent built an MCP server that connects directly to its data streaming platform, enabling AI agents to interact with real-time data using natural language.
In addition, companies such as Apollo are also actively exploring the integrated application of MCP. The practices of these early adopters (usually in cooperation with major contributors to the protocol such as Anthropic) not only demonstrate the practical advantages and wide applicability of MCP, but their use cases and experiences also provide valuable reference and motivation for subsequent wider adoption.
MCP has activated a flourishing ecosystem: developers, platform providers, data providers, and integrators each get what they want, and together they grow the AI application market. This confirms an insight: in the AI era, competitive advantage will come more from ecosystem position than from single-point technology.
05
Challenges and future of MCP: Building the foundation for next-generation AI interaction
Controversies facing MCP
With Anthropic's promotion and rapid support from giants such as OpenAI (domestic players such as Baidu Maps and Amap have also followed), MCP is quickly becoming the de facto standard for AI calling external tools and data, and it shows strong momentum. Yet controversy, doubt, and cautious views within the industry have never stopped.
When Andrej Karpathy was asked about it, his terse response, "please make it stop," captured the mixed feelings and reservations some industry insiders hold about its prospects.
These disputes first concern MCP's core positioning and the boundaries of its capabilities. MCP performs well at letting models call tools and resources and is widely recognized as the "de facto standard" in that domain, but a key question is whether (and how) its current design can effectively support the increasingly important and complex scenarios of communication and collaboration between agents.
Critics believe that the original design and existing architecture of MCP are not optimized for this purpose and may have inherent deficiencies, especially when compared with protocols specifically designed for the communication needs of intelligent agents (such as ANP, Agora, etc.), its limitations are more prominent.
This is directly related to whether MCP should stick to its core positioning of tool calling in the future and accept the situation of coexistence and collaboration with other protocols, or whether it needs to undergo major evolution that may be full of compatibility challenges to expand its capabilities.
Following closely behind is a deep concern about the ecological niche occupied by MCP and its impact on the innovation space. The rapid development of MCP and its near-dominant position brought about by the support of giants have greatly promoted the standardization process at the tool calling level, but the "other side of the coin" is the possibility of premature convergence or even monopoly of technology routes.
The industry is concerned that this "winner takes all" situation will squeeze the survival and development space of other innovative protocols with different design philosophies that may be better in specific scenarios (for example, ANP projects that adopt different worldviews and technology selections), thereby potentially inhibiting the diversified exploration and long-term vitality that the entire AI interaction protocol ecosystem should have.
The strategic hesitation of some large platforms when considering opening their core capabilities to access MCP is also related to their deep concerns about standard dominance, ecological control, and future technology path selection.
The main challenges we face
For example, in the view of the digital marketing company Digidop (cited from "Model Context Protocol (MCP): How This Revolutionary Technology Is Transforming Artificial Intelligence"), MCP faces a series of concrete technical and ecosystem challenges, now and ahead:
Standard unification and fragmentation risks: As an open standard, MCP must guard against ecosystem fragmentation from divergent implementations, private extensions, or competing protocols, balancing open innovation against core interoperability.
Immature security, authentication, and authorization mechanisms: The current protocol still falls short on standardized identity authentication and fine-grained permission control, which challenges the goal of AI performing tasks safely and controllably.
Developer experience and ecosystem maturity: MCP introduces a new development paradigm with its own learning curve, and the toolchain and infrastructure for developing, deploying, managing, discovering, and debugging MCP servers are still maturing.
Uneven tool quality and lack of standardized guidance: Early MCP tools vary widely in quality, and there are no unified quality benchmarks or interface-design best practices, which hurts interoperability and developer confidence.
06
“MCP Economy”: Where might the entrepreneurial opportunities be?
Based on MCP's current trajectory, many believe it may represent a new paradigm for general AI interaction and intelligence. Yoko Li's article "A Deep Dive Into MCP and the Future of AI Tooling" makes this case and, on that basis, proposes several possible entrepreneurial opportunities.
Opportunities at MCP Infra
Hosting and Multi-Tenancy
MCP supports a one-to-many relationship between AI agents and their tools, but multi-tenant architectures (such as SaaS products) need to support multiple users accessing a shared MCP server simultaneously.
Defaulting to remote servers may be a near-term solution to make MCP servers more accessible, but many enterprises also want to host their own MCP servers and separate the data and control planes. A simplified toolchain to support deployment and maintenance of MCP servers at scale is the next key piece to unlocking broader adoption.
Authentication
MCP currently does not define a standard authentication mechanism for how clients authenticate to servers, nor does it provide a framework for how MCP servers should securely manage and delegate authentication when interacting with third-party APIs. Authentication is currently left to the discretion of individual implementations and deployment scenarios.
In practice, MCP adoption to date seems to be focused on local integrations, which do not always require explicit authentication. A better authentication paradigm could be a breakthrough in driving remote MCP adoption. From a developer perspective, a unified approach should cover:
Client authentication: Standard methods for client-server interaction, such as OAuth or API tokens
Tool authentication: Helper functions or wrappers for authenticating to third-party APIs
Multi-user authentication: tenant-aware authentication for enterprise deployments
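The three layers listed above can be sketched with a plain bearer-token scheme. Nothing here is mandated by the MCP spec; all names and the header convention are illustrative assumptions:

```python
# Layer 1 -- client authentication: how a client proves itself to a server.
def client_auth_header(api_token: str) -> dict:
    """Client -> MCP server: a standard bearer-token header."""
    return {"Authorization": f"Bearer {api_token}"}

# Layer 2 -- tool authentication: the server injects a delegated credential
# when it calls out to a third-party API on the agent's behalf.
def tool_auth_wrapper(call_api, third_party_token: str):
    def wrapped(**kwargs):
        return call_api(headers=client_auth_header(third_party_token), **kwargs)
    return wrapped

# Layer 3 -- multi-user authentication: tenant-aware credential lookup
# for enterprise deployments of a shared MCP server.
def resolve_tenant_token(tenant_tokens: dict, tenant_id: str) -> str:
    return tenant_tokens[tenant_id]

print(client_auth_header("abc123"))
```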
Authorization
Even if a tool is authenticated, who should be allowed to use it? How granular should their permissions be? MCP lacks a built-in permissions model, so access control is done at the session level — meaning a tool either has access or is completely restricted.
While future authorization mechanisms may enable more fine-grained control, the current approach relies on an OAuth 2.1-based authorization flow that grants session-scoped access once authentication is successful.
This adds complexity as more agents and tools are introduced: each agent typically requires its own session and unique authorization credentials, producing an ever-growing web of session-based access management.
Gateway
As MCP adoption scales, a gateway can act as a centralized layer for authentication, authorization, traffic management, and tool selection. Similar to an API gateway, it will enforce access controls, route requests to the correct MCP servers, handle load balancing, and cache responses for efficiency.
This is especially important for multi-tenant environments, where different users and agents require different permissions. A standardized gateway will simplify client-server interactions, improve security, and provide better observability, making MCP deployments more scalable and manageable.
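In code, the gateway idea reduces to a routing table plus an access check in front of the backend servers. A deliberately simplified sketch (the class name and policy shape are our assumptions, not any existing product's API):

```python
# A toy MCP gateway: registers tools, enforces per-tenant access control,
# and routes each call to the server that owns the tool.
class McpGateway:
    def __init__(self):
        self.routes = {}       # tool name -> backend server callable
        self.permissions = {}  # (tenant, tool) -> allowed?

    def register(self, tool, server_fn, allowed_tenants):
        self.routes[tool] = server_fn
        for tenant in allowed_tenants:
            self.permissions[(tenant, tool)] = True

    def call(self, tenant, tool, **args):
        if not self.permissions.get((tenant, tool)):
            raise PermissionError(f"{tenant} may not call {tool}")
        return self.routes[tool](**args)

gw = McpGateway()
gw.register("search", lambda q: f"results for {q}", {"acme"})
print(gw.call("acme", "search", q="mcp"))  # routed and authorized
```

A production gateway would add load balancing, caching, and observability on top of this routing core, as the text describes.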
Discoverability and availability of MCP servers
Currently, finding and setting up an MCP server is a manual process that requires developers to locate endpoints or scripts, configure authentication, and ensure compatibility between servers and clients.
Integrating new servers is time-consuming, and AI agents cannot dynamically discover or adapt to available servers. However, according to Anthropic’s presentation at the AI Engineer Conference last month , it seems that an MCP server registry and discovery protocol are coming soon. This may unlock the next phase of adoption for MCP servers.
Execution Environment
Most AI workflows require multiple tool calls to be made in sequence - but MCP lacks a built-in workflow concept to manage these steps. Requiring every client to implement recoverability and retryability is not ideal.
While developers today explore solutions like Inngest to achieve this, promoting stateful execution to a first-class concept would clarify the execution model for most developers.
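The kind of safeguard in question can be sketched as a retry wrapper with exponential backoff; real workflow systems additionally persist state durably so a multi-step run can resume after a crash, which this sketch does not attempt:

```python
import time

def call_with_retries(tool_fn, *args, max_attempts=3, backoff=0.01):
    """Retry a flaky tool call with exponential backoff between attempts."""
    last_err = None
    for attempt in range(max_attempts):
        try:
            return tool_fn(*args)
        except Exception as err:
            last_err = err
            time.sleep(backoff * (2 ** attempt))
    raise RuntimeError(f"tool failed after {max_attempts} attempts") from last_err

# A tool that fails twice before succeeding, to exercise the wrapper.
attempts = {"n": 0}
def flaky_tool():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_retries(flaky_tool))  # succeeds on the third attempt
```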
Standard client experience
A common question we hear from the developer community is how to account for tool choice when building MCP clients: does everyone need to implement their own RAG for their tool, or is there a layer waiting to be standardized?
Beyond tool selection, there is no unified UI/UX pattern for invoking tools (we’ve seen everything from slash commands to pure natural language). A standard client layer for tool discovery, sorting, and execution would help create a more predictable developer and user experience.
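The tool-selection problem above can be made concrete with a toy ranker. Real clients might use embeddings or RAG as the text suggests; plain keyword overlap stands in for that here, and the tool names are invented:

```python
# Rank available tools against a user request by word overlap between
# the request and each tool's description (a stand-in for semantic search).
def rank_tools(query: str, tools: dict) -> list:
    q_words = set(query.lower().split())
    scored = [
        (len(q_words & set(desc.lower().split())), name)
        for name, desc in tools.items()
    ]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

tools = {
    "send_email": "send an email message to a recipient",
    "create_invoice": "create a new invoice for a customer",
}
print(rank_tools("email the customer an invoice", tools))
```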
Debugging
Developers of MCP servers often struggle to make the same server work smoothly across different clients. Each MCP client typically has its own quirks, and client traces are either missing or hard to find, making debugging an MCP server extremely difficult.
As the world starts building more remote-first MCP servers, a new set of tools is needed to make the development experience smoother across local and remote environments.
If MCP becomes the de facto standard for AI-driven workflows
The development experience with MCP reminds us of API development in the 2010s. The paradigm is new and exciting, but the toolchain is in its early stages. If we fast forward a few years, what will happen if MCP becomes the de facto standard for AI-driven workflows? Some predictions:
The competitive advantage of developer-first companies will evolve from delivering the best API design to also delivering the best set of tools for agents. If agents can discover tools autonomously through MCP, API and SDK providers will need to make their tools easily discoverable through search and differentiated enough for agents to pick them for specific tasks; the selection criteria are likely to be far more granular and specific than what a human developer looks for.
If every application becomes an MCP client and every API becomes an MCP server, new pricing models may emerge : agents may choose tools more dynamically based on a combination of speed, cost, and relevance. This may lead to a more market-driven tool adoption process, selecting the best performing and most modular tools rather than the most widely adopted tools.
Documentation will become a key part of the MCP infrastructure, as companies will need to design tools and APIs in clear, machine-readable formats (such as llms.txt) and make MCP servers a de facto artifact derived from existing documentation.
APIs alone are no longer sufficient, though they are a good starting point. Developers will find that the mapping from APIs to tools is rarely 1:1. Tools are higher-level abstractions that make the most sense to an agent at task-execution time: rather than calling send_email() directly, an agent may choose a draft_email_and_send() function that bundles multiple API calls to minimize latency. MCP server design will be scenario- and use-case-centric rather than API-centric.
If every piece of software becomes an MCP client by default, a new hosting model will emerge, because the workload characteristics differ from traditional website hosting. Each client is multi-step in nature and needs safeguards such as recoverability, retries, and long-running task management. Hosting providers will also need to perform real-time load balancing across MCP servers to optimize cost, latency, and performance, allowing AI agents to choose the most efficient tool at any given moment.
In fact, there may be many more such ideas. For example, MCP may in the future be combined with industry-specific data and particular SaaS platforms to deliver more personalized and precise products and services.