Latest: Google takes the lead with A2A, and AI agents from different companies can now "add friends"

Written by Clara Bennett
Updated on: July 3, 2025
Recommendation

Google leads a new era of cross-platform collaboration in AI, opening a new chapter of "adding friends" for different AI agents.

Core content:
1. Google launches the Agent2Agent (A2A) open protocol, enabling secure collaboration between AI agents across ecosystems
2. The A2A protocol follows five design principles: it supports long-running tasks, is modality-agnostic, and is built on existing standards
3. A2A complements Anthropic's MCP protocol: MCP connects agents to tools and resources, while A2A enables communication between agents


Google has just officially launched Agent2Agent (A2A), a new open protocol that allows AI agents to collaborate securely across ecosystems, regardless of framework or vendor.

The launch of this protocol is supported by contributions from more than 50 technology partners (including Atlassian, Box, Cohere, Intuit, LangChain, MongoDB, PayPal, Salesforce, SAP, ServiceNow, UKG, and Workday) and leading service providers (including Accenture, BCG, Capgemini, Cognizant, Deloitte, HCLTech, Infosys, KPMG, McKinsey, PwC, TCS, and Wipro).

The core goal of the A2A protocol is to enable AI agents from different sources and technology stacks to communicate with each other, exchange information securely, and collaborate on complex tasks across enterprise platforms and applications. This means an enterprise's AI agents will be able to work together across its entire application landscape, unlocking unprecedented efficiency and innovation potential.

It is worth noting that the A2A protocol complements Anthropic's Model Context Protocol (MCP), which provides agents with useful tools and context. A2A focuses on the interaction and collaboration between agents, drawing on Google's internal experience building large-scale agent systems. I think the figure below illustrates nicely how MCP and A2A work together: MCP connects agents to tools and resources, and A2A lets agents communicate with each other, regardless of provider.

A2A's core design principles

The A2A protocol was designed around five key principles:

  1.  Embrace agentic capabilities:  A2A aims to let agents collaborate in their natural, unstructured ways, even when they do not share memory, tools, or context. The goal is to enable true multi-agent scenarios rather than limiting agents to acting as simple "tools."
  2.  Build on existing standards:  The protocol is built on widely used existing standards, including HTTP, SSE, and JSON-RPC, so enterprises can more easily integrate it into their existing IT stacks (see the sketch after this list).
  3.  Secure by default:  A2A is designed to support enterprise-grade authentication and authorization, with parity to OpenAPI's authentication schemes at launch.
  4.  Support for long-running tasks:  The protocol is flexible enough to handle both quick tasks and deep research that may take hours or even days, especially when humans are in the loop. Throughout, A2A can provide real-time feedback, notifications, and status updates.
  5.  Modality agnostic:  The world of agents is about more than text. A2A supports multiple modalities, including audio and video streaming.
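
To make the second principle concrete, here is a minimal sketch of what an A2A task request could look like as a JSON-RPC 2.0 call over plain HTTP. The endpoint URL is invented, and the tasks/send method name follows the draft spec published at launch; treat both as illustrative rather than normative.

```python
# A minimal sketch of an A2A task request as a JSON-RPC 2.0 call over HTTP.
# The endpoint URL is hypothetical and the "tasks/send" method name follows
# the draft A2A spec at launch; treat both as illustrative, not normative.
import uuid
import requests

A2A_ENDPOINT = "https://remote-agent.example.com/a2a"  # hypothetical remote agent

payload = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),
    "method": "tasks/send",
    "params": {
        "id": str(uuid.uuid4()),  # task id chosen by the client agent
        "message": {
            "role": "user",
            "parts": [
                {"type": "text", "text": "Find software engineer candidates in Berlin."}
            ],
        },
    },
}

response = requests.post(A2A_ENDPOINT, json=payload, timeout=30)
response.raise_for_status()
print(response.json())  # the remote agent returns the task with its current status
```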

How does A2A work?

A2A facilitates communication between a “client” agent and a “remote” agent. The client agent is responsible for formulating and communicating tasks, while the remote agent is responsible for executing these tasks to provide information or take actions. This interactive process includes several key capabilities:

Capability discovery:  Agents can advertise their capabilities through an "Agent Card" in JSON format. This allows client agents to identify the remote agent best suited to a task and initiate A2A communication with it.
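
As a rough illustration of capability discovery, the sketch below fetches a hypothetical remote agent's Agent Card and checks whether it advertises a needed skill. The well-known path and the field names (name, url, skills) mirror the draft A2A spec but should be read as assumptions.

```python
# A sketch of capability discovery: fetch a remote agent's Agent Card and check
# whether it advertises a skill we need. The well-known path and field names
# are modeled on the draft A2A spec and are assumptions here.
import requests

def fetch_agent_card(base_url: str) -> dict:
    """Retrieve the JSON Agent Card describing a remote agent's capabilities."""
    resp = requests.get(f"{base_url}/.well-known/agent.json", timeout=10)
    resp.raise_for_status()
    return resp.json()

def supports_skill(card: dict, skill_id: str) -> bool:
    """Return True if the Agent Card declares the given skill."""
    return any(skill.get("id") == skill_id for skill in card.get("skills", []))

card = fetch_agent_card("https://sourcing-agent.example.com")  # hypothetical agent
if supports_skill(card, "candidate-search"):
    print(f"{card.get('name')} can handle candidate search via {card.get('url')}")
```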

Task management:  Communication revolves around completing tasks. The protocol defines a "task" object with a defined life cycle; tasks can complete immediately or run for a long time. The output of a task is called an "artifact".
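
Here is a hedged sketch of that lifecycle from the client agent's side: poll the remote agent until the task reaches a terminal state, then read its artifacts. The state names and the tasks/get method come from the draft spec, and the endpoint is hypothetical.

```python
# A sketch of the task lifecycle as seen by the client agent: poll the remote
# agent until the task reaches a terminal state, then read its artifacts.
# State names, the "tasks/get" method, and the response shape are assumptions
# based on the draft spec; the endpoint is hypothetical.
import time
import requests

A2A_ENDPOINT = "https://remote-agent.example.com/a2a"  # hypothetical remote agent
TERMINAL_STATES = {"completed", "failed", "canceled"}

def get_task(task_id: str) -> dict:
    payload = {"jsonrpc": "2.0", "id": "1", "method": "tasks/get", "params": {"id": task_id}}
    resp = requests.post(A2A_ENDPOINT, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["result"]

def wait_for_task(task_id: str, interval: float = 2.0) -> dict:
    while True:
        task = get_task(task_id)
        state = task["status"]["state"]
        if state in TERMINAL_STATES:
            return task
        if state == "input-required":
            # Long-running tasks may pause for human input; surface that to the user.
            print("Remote agent is waiting for additional input.")
        time.sleep(interval)

task = wait_for_task("task-123")
for artifact in task.get("artifacts", []):  # artifacts hold the task's output
    print(artifact)
```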

Collaboration:  Agents can send messages to each other to communicate context, responses, artifacts, or user instructions.

User experience negotiation:  Each message contains "parts", which are complete pieces of content (such as a generated image). Each part has a specified content type, allowing the client and remote agents to negotiate the required format and explicitly negotiate the user's UI capabilities (for example, whether iframes, video, or web forms are supported).
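
The sketch below shows one way a client might filter message parts against the content types its UI can actually render. The part shapes are loosely modeled on the draft spec; the ACCEPTED_TYPES set and the URLs are invented for illustration.

```python
# A sketch of how typed "parts" let the two sides negotiate what the client UI
# can render. Part shapes are loosely modeled on the draft spec; the accepted
# types and URLs are invented for illustration.
ACCEPTED_TYPES = {"text/plain", "image/png"}  # what this client UI can display

message = {
    "role": "agent",
    "parts": [
        {"type": "text", "text": "Here are the top three candidates."},
        {"type": "file", "file": {"mimeType": "image/png", "uri": "https://example.com/chart.png"}},
        {"type": "file", "file": {"mimeType": "video/mp4", "uri": "https://example.com/intro.mp4"}},
    ],
}

def renderable(part: dict) -> bool:
    """Keep only parts whose content type the client UI has declared support for."""
    if part["type"] == "text":
        return "text/plain" in ACCEPTED_TYPES
    return part.get("file", {}).get("mimeType") in ACCEPTED_TYPES

for part in filter(renderable, message["parts"]):
    print(part)  # the video part is dropped because this UI cannot play it
```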

For example: Recruitment

Imagine you are hiring a software engineer. In a unified interface like Agentspace, a hiring manager can ask their primary agent to find candidates matching a job description, location, and skill requirements.

This main agent can then interact, via the A2A protocol, with other agents dedicated to candidate search to obtain a list of potential candidates. After reviewing the suggestions, the user can instruct the main agent to arrange interviews. Once the interview process is complete, another agent responsible for background checks can be called, as sketched below.
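
Putting the pieces together, here is a hypothetical sketch of how the primary agent might delegate these recruiting steps to specialized remote agents over A2A. The agent URLs, skill descriptions, and the send_task helper are stand-ins for the JSON-RPC calls sketched earlier, not real services.

```python
# A sketch of the recruiting flow: the hiring manager's primary agent delegates
# each step to a specialized remote agent over A2A. The agent URLs and the
# send_task helper are hypothetical stand-ins for the JSON-RPC calls shown above.
from typing import Any

def send_task(agent_url: str, instruction: str) -> dict[str, Any]:
    """Placeholder for an A2A tasks/send call to a remote agent (see earlier sketch)."""
    print(f"-> {agent_url}: {instruction}")
    return {"status": {"state": "completed"}, "artifacts": [{"text": "stub result"}]}

# 1. Ask a sourcing agent for candidates matching the job description.
sourcing = send_task(
    "https://sourcing-agent.example.com/a2a",
    "Find senior software engineers in Berlin with Go experience.",
)

# 2. After the hiring manager reviews the list, schedule interviews.
scheduling = send_task(
    "https://scheduling-agent.example.com/a2a",
    "Arrange interviews next week for the shortlisted candidates.",
)

# 3. Once interviews finish, hand off to a background-check agent.
background = send_task(
    "https://background-check-agent.example.com/a2a",
    "Run background checks on the selected candidate.",
)
```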

This is just one example of how A2A enables AI agents from different systems to collaborate to accomplish complex tasks.