OpenAI makes a big move: its core API now supports MCP, changing intelligent agent development overnight

Written by
Iris Vance
Updated on: June 19, 2025
Recommendation

The OpenAI Responses API now supports MCP, a major breakthrough for intelligent agent development.

Core content:
1. The Responses API supports MCP, simplifying the connection between intelligent agents and external services
2. Developers can integrate intelligent agents with external tools in just a few lines of code
3. MCP centralizes tool management, improving agent performance and strengthening security

Yang Fangxian
Founder of 53A/Most Valuable Expert of Tencent Cloud (TVP)

This morning, the news that OpenAI acquired io dominated the headlines. At the same time, OpenAI also quietly dropped another blockbuster: the Responses API, its core API for building intelligent agents, now supports MCP servers.


In the traditional approach to building intelligent agents, every interaction with an external service goes through a function call. Each operation involves a network round trip from the large model to your backend and then on to the external service, resulting in multiple hops, high latency, and growing complexity in scaling and management. With MCP support, the model can instead connect to a remote MCP server and call its tools directly.
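As a rough sketch of what this looks like in practice, the request body below attaches a remote MCP server as a tool. The `mcp` tool type and its `server_label`/`server_url`/`require_approval` fields follow OpenAI's announcement; the DeepWiki server URL and the prompt are illustrative, not part of this article.

```python
# Sketch of a Responses API request body that attaches a remote MCP server
# as a tool. Instead of routing every tool call through your own backend,
# the model calls the MCP server's tools directly.
mcp_request = {
    "model": "gpt-4.1",
    "tools": [
        {
            "type": "mcp",                             # remote MCP server tool
            "server_label": "deepwiki",                # name the model uses for this server
            "server_url": "https://mcp.deepwiki.com/mcp",
            "require_approval": "never",               # skip per-call approval prompts
        }
    ],
    "input": "What transport protocols does MCP support?",
}

# With the official Python client this would be sent as:
# response = client.responses.create(**mcp_request)
```

Note that the approval setting is a real trade-off: leaving approvals on keeps a human in the loop for each tool call, while `"never"` trades that safety check for latency.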



Other new features of the Responses API


In addition to MCP support, OpenAI has also made major updates to the image generation, Code Interpreter, and file search tools in the Responses API, further enhancing the capabilities of agents.


Image Generation: Developers can now directly access OpenAI's latest image generation models (such as gpt-image-1) in the Responses API and use them as tools. The tool supports streaming, allowing developers to see previews as the image is generated, and supports multi-turn editing, allowing developers to fine-tune an image step by step.
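A minimal sketch of a request that enables the image generation tool. The `image_generation` tool type and the `partial_images` option for streaming previews follow OpenAI's announcement, but the exact field names here should be treated as assumptions:

```python
# Sketch: enable the image generation tool (backed by gpt-image-1) with
# streaming previews during generation.
image_request = {
    "model": "gpt-4.1",
    "input": "Draw a watercolor fox in a snowy forest",
    "stream": True,                       # stream events, including image previews
    "tools": [
        {
            "type": "image_generation",   # image tool backed by gpt-image-1
            "partial_images": 2,          # number of in-progress previews to emit
        }
    ],
}
```

For multi-turn editing, the same conversation can be continued with follow-up instructions ("make the fox smaller") rather than regenerating from scratch.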


Code Interpreter: The Code Interpreter tool is now available in the Responses API, supporting data analysis, solving complex math and coding problems, and even helping models deeply understand and manipulate images. For example, when working on math problems, a model can use Code Interpreter to run code and get the answer, significantly improving performance.
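A sketch of what such a request might look like, assuming the `code_interpreter` tool type with an auto-managed container, alongside the kind of code the tool would run server-side on the model's behalf:

```python
# Sketch: ask the model to solve a math problem by running code. The
# "container": {"type": "auto"} setting lets the API manage the sandbox.
ci_request = {
    "model": "gpt-4.1",
    "input": "What is the 20th Fibonacci number? Compute it with code.",
    "tools": [
        {"type": "code_interpreter", "container": {"type": "auto"}}
    ],
}

# The kind of code the tool would execute in its sandbox:
def fib(n: int) -> int:
    """Return the n-th Fibonacci number (F(0) = 0, F(1) = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# fib(20) evaluates to 6765
```

Running arithmetic through code instead of token-by-token generation is exactly where the article's claimed performance gains come from: the model delegates the computation rather than approximating it.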


File Search: The file search tool has been enhanced so that developers can pull relevant content chunks from documents into the model's context based on user queries. The tool now also supports searching across multiple vector stores and allows attribute filters with array values.
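The payload below sketches what a filtered file search request might look like. The vector store ID is hypothetical, and the filter shape (a comparison with `key`/`value`, here using an array value per the update described above) is a best-effort reading of the announcement rather than a confirmed schema:

```python
# Sketch: file search restricted by an attribute filter, so only chunks
# whose "region" attribute matches one of the listed values are retrieved.
fs_request = {
    "model": "gpt-4.1",
    "input": "Summarize our refund policy for European customers.",
    "tools": [
        {
            "type": "file_search",
            "vector_store_ids": ["vs_policies_eu"],    # hypothetical store ID
            "filters": {
                "type": "eq",
                "key": "region",
                "value": ["EU", "UK"],                 # array value in the filter
            },
        }
    ],
}
```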


At the same time, OpenAI also introduced new features in the Responses API.


Background mode: For tasks that take a long time to process, developers can use background mode to start these tasks asynchronously without worrying about timeouts or other connection issues. Developers can poll these tasks to check if they are completed, or start streaming events when needed.
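The start-then-poll pattern described above can be sketched as follows. The `background` flag and the `queued`/`in_progress`/`completed` statuses follow OpenAI's announcement; the poll here is simulated with a local callable rather than a real network call:

```python
# Sketch of background mode: start a long-running task, then poll by ID
# until it leaves a pending state.
start_request = {
    "model": "o3",
    "input": "Write a thorough competitive analysis of the smartwatch market.",
    "background": True,               # return immediately; run server-side
}

def poll_until_done(fetch):
    """Poll a fetch() callable until the response leaves a pending state."""
    response = fetch()
    while response["status"] in ("queued", "in_progress"):
        response = fetch()            # in practice: client.responses.retrieve(id)
    return response

# Simulated sequence of statuses a real poll might observe:
states = iter([
    {"status": "queued"},
    {"status": "in_progress"},
    {"status": "completed"},
])
final = poll_until_done(lambda: next(states))
```

A real poller would also sleep between retrievals and handle terminal failure states, which are omitted here for brevity.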


Reasoning summaries: The Responses API can now generate concise natural-language summaries of the model's internal chain of thought. This makes it easier for developers to debug, audit, and build better end-user experiences.
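Requesting such a summary is a sketch away; the `reasoning.summary` parameter follows OpenAI's announcement, though the exact field names should be treated as assumptions:

```python
# Sketch: ask a reasoning model to return a natural-language summary of
# its chain of thought alongside the answer.
summary_request = {
    "model": "o4-mini",
    "input": "Plan a three-step rollout for a new feature-flag system.",
    "reasoning": {"summary": "auto"},   # request a summary of the reasoning
}
```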


Encrypted reasoning items: Customers eligible for Zero Data Retention (ZDR) can reuse reasoning items across API requests without any reasoning items being stored on OpenAI's servers. This improves intelligence while reducing token usage, lowering costs and latency.
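A ZDR-friendly request might be sketched as below: nothing is persisted server-side, and the encrypted reasoning items are returned in the response so they can be passed into the next request. The `store` and `include` field names follow OpenAI's announcement; treat them as assumptions:

```python
# Sketch: Zero Data Retention request that asks for encrypted reasoning
# items back, so they can be threaded into the next request by the caller.
zdr_request = {
    "model": "o4-mini",
    "input": "Continue the analysis from the previous step.",
    "store": False,                                 # ZDR: no server-side storage
    "include": ["reasoning.encrypted_content"],     # return encrypted reasoning items
}
```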