The multi-agent protocol is finally here! A strong complement to MCP! Google has officially released the multi-agent protocol Agent2Agent (A2A)!

Google has released Agent2Agent (A2A), a multi-agent collaboration protocol that opens a new era of agent interoperability.
Core content:
1. The A2A protocol allows AI agents to communicate and collaborate across platforms to improve efficiency
2. Supported by 50+ technology partners, it complements Anthropic's Model Context Protocol (MCP)
3. A2A follows five design principles to achieve standardized collaboration between agents
Hello friends, Google has just released a new open protocol, Agent2Agent (A2A), which aims to solve the problem of multi-agent collaboration. The protocol allows AI agents to communicate with each other, exchange information securely, and coordinate actions across various enterprise platforms and applications.
Below is a translation of the original announcement.
A new era of agent interoperability
AI agents offer a unique opportunity to help people be more productive by autonomously handling many daily, repetitive or complex tasks. Today, businesses are increasingly building and deploying autonomous agents to help extend, automate, and augment processes throughout the workplace—from ordering a new laptop to assisting a customer service representative to aiding supply chain planning.
To maximize the benefits of agent AI, it is critical that these agents can collaborate in a dynamic, multi-agent ecosystem that spans siloed data systems and applications. Enabling agents to interoperate with each other, even if they are built by different vendors or in different frameworks, will increase autonomy and multiply productivity gains while reducing long-term costs.
Today, we are launching a new open protocol called Agent2Agent (A2A), with support and contributions from over 50 technology partners such as Atlassian, Box, Cohere, Intuit, Langchain, MongoDB, PayPal, Salesforce, SAP, ServiceNow, UKG, and Workday; and leading service providers including Accenture, BCG, Capgemini, Cognizant, Deloitte, HCLTech, Infosys, KPMG, McKinsey, PwC, TCS, and Wipro. The A2A protocol will allow AI agents to communicate with each other, securely exchange information, and coordinate actions across a variety of enterprise platforms or applications. We believe the A2A framework will bring significant value to customers, whose AI agents will now be able to work together across their entire enterprise application estate.
This collaborative effort signals a shared vision of a future in which AI agents work seamlessly together, regardless of the underlying technology, to automate complex enterprise workflows and drive unprecedented levels of efficiency and innovation.
A2A is an open protocol that complements Anthropic’s Model Context Protocol (MCP), which provides useful tools and context to agents. Drawing on Google’s internal expertise in scaling systems of agents, we designed the A2A protocol to address the challenges we see when deploying large-scale, multi-agent systems for customers. A2A enables developers to build agents that can connect to any other agent built using the protocol, and provides users with the flexibility to combine agents from different providers. Crucially, enterprises benefit from a standardized way to manage their agents across different platforms and cloud environments. We believe this pervasive interoperability is critical to fully realizing the potential of collaborative AI agents.
A2A Design Principles
A2A is an open protocol that provides a standard way for agents to collaborate, regardless of their underlying framework or vendor. We followed five key principles when designing the protocol with our partners:
1. Embracing agent capabilities: A2A is focused on enabling agents to collaborate in their natural, unstructured ways, even if they do not share memory, tools, and context. We are enabling true multi-agent scenarios without limiting agents to "tools."
2. Built on existing standards: The protocol is built on existing, popular standards, including HTTP, SSE, and JSON-RPC, which means it is easier to integrate with the existing IT stack that enterprises use every day (see the sketch after this list).
3. Secure by default: A2A is designed to support enterprise-grade authentication and authorization, on par with OpenAPI's authentication schemes at the time of release.
4. Support for long-running tasks: We designed A2A to be flexible and support a wide range of scenarios, from quick tasks to in-depth studies that take hours or even days (when humans are involved). During this process, A2A can provide real-time feedback, notifications, and status updates to its users.
5. Modality-agnostic: The world of an agent is not limited to text, which is why we designed A2A to support various modalities, including audio and video streams.
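Because the protocol builds on HTTP and JSON-RPC, a single agent-to-agent request can be pictured as an ordinary JSON-RPC 2.0 call over HTTPS. The Python sketch below illustrates that idea only; the endpoint URL, the `tasks/send` method name, and the payload fields are assumptions modeled loosely on the draft specification, not a verified client implementation.

```python
# Minimal sketch: an A2A-style task request as JSON-RPC 2.0 over HTTP.
# The URL, method name, and payload fields are illustrative assumptions;
# consult the draft A2A specification for the authoritative schema.
import json
import uuid

import requests

AGENT_URL = "https://remote-agent.example.com/a2a"  # hypothetical endpoint


def send_task(text: str) -> dict:
    """Send a single text task to a remote agent and return the JSON-RPC response."""
    payload = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "tasks/send",            # assumed method name
        "params": {
            "id": str(uuid.uuid4()),       # task id, managed by the client agent
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": text}],
            },
        },
    }
    response = requests.post(AGENT_URL, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    print(json.dumps(send_task("Summarize today's open support tickets"), indent=2))
```

Because the transport is plain HTTP, the same request could be authenticated with whatever enterprise-grade scheme the deployment already uses, which is the point of the "built on existing standards" and "secure by default" principles.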
How A2A works
(Figure: a flow chart showing the data flow between remote and client agents that enables secure collaboration, task and state management, user experience negotiation, and capability discovery.)
A2A facilitates communication between a “client” agent and a “remote” agent. The client agent is responsible for formulating and communicating tasks, while the remote agent is responsible for executing those tasks in an attempt to provide the correct information or take the right action. This interaction involves several key functions:
1. Capability discovery: Agents can advertise their capabilities using an Agent Card in JSON format, allowing the client agent to identify the best agent for a task and communicate with the remote agent using A2A (a sketch of this flow follows this list).
2. Task management: Communication between the client and the remote agent is oriented around task completion, in which the agents work to fulfill the end user's request. This "task" object is defined by the protocol and has a lifecycle. It can be completed immediately, or, for long-running tasks, the agents can communicate with each other to stay in sync on the latest status of completing the task. The output of a task is called an "artifact."
3. Collaboration: Agents can send messages to each other to convey context, replies, artifacts, or user instructions.
4. User experience negotiation: Each message consists of "parts," which are fully formed pieces of content, such as a generated image. Each part has a specified content type, allowing the client and remote agents to negotiate the correct format required, and this explicitly includes negotiation of the user's UI capabilities, for example iframes, videos, web forms, and so on.
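To make capability discovery and task management more concrete, here is a rough Python sketch of a client agent fetching a remote agent's Agent Card and then polling a long-running task until it reaches a terminal state. The `/.well-known/agent.json` path, the `tasks/get` method, the field names, and the state values are assumptions based on the draft specification and may differ from the authoritative schema.

```python
# Sketch of capability discovery and task-status polling against an
# A2A-style remote agent. Paths, field names, and task states are
# assumptions modeled on the draft specification.
import time
import uuid

import requests

REMOTE_AGENT = "https://remote-agent.example.com"  # hypothetical agent base URL


def discover_agent_card(base_url: str) -> dict:
    """Fetch the remote agent's Agent Card (assumed to live at a well-known path)."""
    card = requests.get(f"{base_url}/.well-known/agent.json", timeout=10).json()
    print(f"Discovered agent: {card.get('name')} with skills {card.get('skills')}")
    return card


def wait_for_task(base_url: str, task_id: str, poll_seconds: float = 2.0) -> dict:
    """Poll a long-running task until it reaches an assumed terminal state."""
    while True:
        reply = requests.post(
            f"{base_url}/a2a",
            json={
                "jsonrpc": "2.0",
                "id": str(uuid.uuid4()),
                "method": "tasks/get",          # assumed method name
                "params": {"id": task_id},
            },
            timeout=10,
        ).json()
        task = reply.get("result", {})
        state = task.get("status", {}).get("state")
        if state in ("completed", "failed", "canceled"):  # assumed terminal states
            return task  # a completed task carries its output as "artifacts"
        time.sleep(poll_seconds)
```

In a streaming deployment, the polling loop above would typically be replaced by SSE push updates, which is how the protocol's support for long-running tasks and real-time status updates fits on top of ordinary web infrastructure.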
Please see our draft specification for full details on how the protocol works.
A Real-World Example: Candidate Sourcing
The process of hiring software engineers can be significantly streamlined through A2A (agent-to-agent) collaboration. In a unified interface like Agentspace, a user (e.g., a hiring manager) can instruct their agent to find candidates that match the job listing, location, and skill requirements. The agent then interacts with other specialized agents to source potential candidates. Once the user receives these suggestions, they can instruct their agent to schedule further interviews, streamlining the candidate sourcing process. Once the interview process is complete, another agent can be called upon to assist with background checks. This is just one example of how AI agents need to collaborate across systems to source qualified job candidates.
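As a purely hypothetical illustration of this workflow, the sketch below shows a hiring manager's client agent fanning the same sourcing brief out to several specialized remote agents in parallel. The agent URLs, the JSON-RPC method name, and the payload shape are invented for illustration and are not part of any published A2A implementation.

```python
# Hypothetical orchestration of the candidate-sourcing example: a hiring
# manager's client agent delegates the same brief to several specialized
# remote agents. URLs, method names, and payload fields are assumptions.
import uuid
from concurrent.futures import ThreadPoolExecutor

import requests

SOURCING_AGENTS = {                      # hypothetical specialized remote agents
    "external_sourcer": "https://sourcer-a.example.com/a2a",
    "internal_talent_pool": "https://sourcer-b.example.com/a2a",
}

BRIEF = "Find staff software engineers in Berlin with Go and Kubernetes experience."


def send_task(url: str, text: str) -> dict:
    """Send one text task to a remote agent via an assumed JSON-RPC method."""
    payload = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "tasks/send",                                  # assumed
        "params": {
            "id": str(uuid.uuid4()),
            "message": {"role": "user",
                        "parts": [{"type": "text", "text": text}]},
        },
    }
    return requests.post(url, json=payload, timeout=30).json()


def source_candidates(brief: str) -> dict:
    """Fan the brief out to every sourcing agent in parallel and collect replies."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(send_task, url, brief)
                   for name, url in SOURCING_AGENTS.items()}
        return {name: future.result() for name, future in futures.items()}


if __name__ == "__main__":
    for agent, reply in source_candidates(BRIEF).items():
        print(agent, "->", reply.get("result", {}).get("status"))
```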
The future of interoperability
A2A has the potential to usher in a new era of agent interoperability, foster innovation, and create more powerful and versatile agent systems. We believe this protocol will pave the way for a future where agents can seamlessly collaborate to solve complex problems and improve our lives.