AI Product Managers Think About MCP (3): The Possible Future of MCP

An interpretation of where the MCP protocol is heading, with a look into the cutting-edge thinking of AI product managers.
Core content:
1. The industry status and development prospects of the MCP protocol
2. The implementation path and challenges of MCP Server intelligence
3. The necessity and construction direction of high-level programming languages for large models
The evolution of MCP and its future possibilities
With OpenAI and others announcing their support, MCP is rapidly evolving from an open protocol concept into a de facto industry standard, and manufacturers at home and abroad are making plans around it. But this is just the beginning. Most current MCP implementations are still at a relatively basic stage: they provide standardized interfaces for calling simple tools, such as web content extraction and file reading and writing.
So, where will MCP go next? It will evolve in two directions:
3.1 Making the MCP Server intelligent
The current MCP tool/server is more like a faithful executor: it does what the model tells it to do (for example, "extract the text of this webpage"). That works for simple tasks, but it often cannot cope with complex business scenarios.
Take the common enterprise NL2SQL scenario (natural language to SQL query, also called text2sql or chat2DB) as an example. The common way of calling MCP today is straightforward: the LLM generates a SQL statement directly from the user's question, and the MCP Server executes that SQL and returns the results.
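To make this pattern concrete, here is a minimal sketch of such a "faithful executor" tool, written with the FastMCP helper from the MCP Python SDK. The tool name, database file, and schema are hypothetical; this illustrates the pattern rather than any particular product's implementation.

```python
# Minimal sketch of the "LLM writes SQL, server just executes it" pattern.
# Assumes the MCP Python SDK; the database file and schema are hypothetical.
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("naive-sql-server")

@mcp.tool()
def run_sql(sql: str):
    """Execute whatever SQL the model generated and return the rows."""
    conn = sqlite3.connect("sales.db")  # hypothetical database
    conn.row_factory = sqlite3.Row
    try:
        # No validation, no semantic help, no permission checks.
        rows = conn.execute(sql).fetchall()
        return [dict(row) for row in rows]
    finally:
        conn.close()

if __name__ == "__main__":
    mcp.run()
```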
However, this approach has several key problems in the NL2SQL scenario:
1. Most raw database tables carry little semantic information, so business relationships cannot be inferred directly from the column names alone.
2. As table complexity increases, the model's performance at writing SQL drops rapidly.
3. There is no way to do permission management on the server side; restrictions can only be applied in the front end.
What might an intelligent MCP Server look like in the future? It takes on the complex logic itself, for example:
1. Simplifying the model's task. The LLM no longer needs to write complete SQL; it only provides the key query intent and parameters according to the API convention (for example: query 'sales' for the 'Beijing region' over the 'last quarter'). The MCP Server is responsible for safely and accurately converting that intent into the final query (through solidified templates, DSL conversion, or other internal mechanisms), executing it, and returning the results (see the sketch after this list).
2. Permission and data isolation. Strict permission control is implemented inside the server, which automatically filters and limits the scope of data and semantic information exposed, based on the call source and parameters.
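Continuing the sales example from point 1, the sketch below shows one possible shape of such an intent-based tool. The metric templates, region permissions, and table names are all hypothetical, and in a real deployment the caller's identity would come from the session or transport rather than from a model-supplied parameter.

```python
# Sketch of an "intelligent" MCP tool: the model supplies a query intent
# (metric, region, quarter); the server turns it into SQL via fixed templates
# and enforces permissions itself. All names and tables are hypothetical.
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("sales-analytics-server")

# Whitelisted metrics mapped to solidified SQL templates (no free-form SQL).
METRIC_TEMPLATES = {
    "sales": "SELECT SUM(amount) AS value FROM orders "
             "WHERE region = ? AND quarter = ?",
    "order_count": "SELECT COUNT(*) AS value FROM orders "
                   "WHERE region = ? AND quarter = ?",
}

# Which regions each caller may see (in practice, derive the caller identity
# from the session instead of trusting a model-supplied argument).
CALLER_REGIONS = {
    "north_team": {"Beijing"},
    "hq": {"Beijing", "Shanghai", "Guangzhou"},
}

@mcp.tool()
def query_metric(caller: str, metric: str, region: str, quarter: str):
    """Answer a query intent such as: sales, Beijing, 2024Q4."""
    if region not in CALLER_REGIONS.get(caller, set()):
        return {"error": "caller is not allowed to query this region"}
    template = METRIC_TEMPLATES.get(metric)
    if template is None:
        return {"error": f"unknown metric: {metric}"}
    conn = sqlite3.connect("sales.db")  # hypothetical database
    try:
        row = conn.execute(template, (region, quarter)).fetchone()
        return {"metric": metric, "region": region,
                "quarter": quarter, "value": row[0]}
    finally:
        conn.close()

if __name__ == "__main__":
    mcp.run()
```

The point is not this particular template mechanism; it is that SQL generation and access control live inside the server, so the model only has to express 'sales, Beijing region, last quarter'.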
In this mode, the MCP Server becomes an "expert assistant" in a specific domain. It encapsulates complexity and ensures security, allowing the LLM to focus on understanding user intent rather than getting bogged down in low-level technical details such as SQL generation. This is a more "AI Native" way of using tools.
3.2 High-level programming languages for large models
The current MCP protocol is, to some extent, like "assembly language" in computing. It provides a set of low-level, standardized instructions (such as calling specific tools and passing structured data) that give the model the basic ability to interact with the outside world. This is undoubtedly an important cornerstone.
But just as we would not build complex modern software directly in assembly language, relying solely on the model to complete multi-step, logically complex tasks through scattered MCP instructions is inefficient and fragile: different models, and even repeated attempts by the same model, may produce different call sequences.
Therefore, an obvious and crucial direction is to build higher-level abstraction and encapsulation on top of these MCP "assembly instructions", just as high-level programming languages such as C++, Java, and Python were built on top of assembly. This can be seen as a "programming paradigm for MCP calls". I am currently exploring this direction: how to design a mechanism that lets a large model not only execute a single MCP instruction, but also understand and orchestrate the order of those instructions, so as to achieve more complex autonomous task planning and execution.
The screenshot below shows one of my attempts. It is equivalent to defining the MCP call order for two functions named Alexi and Bob, and then telling the model my identity so that it chooses a different call chain to execute.
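The screenshot itself is not reproduced here, so the sketch below is purely illustrative of what such a definition could look like. Only the names Alexi and Bob come from the text above; the chain format, the tool names, and the identity mapping are my own assumptions about one possible shape of this "programming paradigm".

```python
# Illustrative sketch of a "high-level" layer above raw MCP calls: named call
# chains (Alexi and Bob, per the text) are declared once, and the caller's
# identity decides which chain runs. Chain format and tool names are hypothetical.
CALL_CHAINS = {
    "Alexi": [  # e.g. a chain for an analyst identity
        {"tool": "fetch_report", "args": {"period": "last_quarter"}},
        {"tool": "summarize", "args": {"style": "detailed"}},
    ],
    "Bob": [    # e.g. a chain for an executive identity
        {"tool": "query_metric", "args": {"metric": "sales"}},
        {"tool": "summarize", "args": {"style": "one_paragraph"}},
    ],
}

IDENTITY_TO_CHAIN = {"analyst": "Alexi", "executive": "Bob"}

def run_chain(identity: str, call_tool) -> list:
    """Run the chain mapped to this identity, feeding each result forward.

    `call_tool(name, args)` stands in for an actual MCP client call.
    """
    chain = CALL_CHAINS[IDENTITY_TO_CHAIN[identity]]
    results, previous = [], None
    for step in chain:
        args = dict(step["args"])
        if previous is not None:
            args["input"] = previous  # pass the previous step's output on
        previous = call_tool(step["tool"], args)
        results.append(previous)
    return results

# Example with a stub tool caller, just to show the control flow.
print(run_chain("executive", lambda name, args: f"{name}({args})"))
```

The value of such a layer is that the call order is declared once and executed deterministically, instead of being re-derived by the model on every attempt.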
I believe this is the key path to a truly powerful agent that can accomplish complex tasks. (Friends who are interested in this direction are welcome to discuss with us!)
The future of MCP is not just about continuously expanding the library of standardized tool interfaces. Its more far-reaching value lies in two things: promoting the intelligence of the MCP Server itself, and building a mature, efficient high-level "programming paradigm" for AI applications on top of it. This may well be the core factor that determines whether Agents can truly move from ideal to reality and be successfully implemented.