A2A, MCP and ACP protocols that every AI engineer should know

Written by
Caleb Hayes
Updated on: June 19, 2025

In-depth exploration of MCP and ACP protocols in AI engineering, mastering advanced context management and agent communication technologies for large language models.

Core content:
1. Definition, core functions and implementation characteristics of the MCP protocol
2. The specific role of MCP in engineering application scenarios
3. Design, architecture and communication mechanism of the ACP protocol

Yang Fangxian
Founder of 53A/Most Valuable Expert of Tencent Cloud (TVP)
What is MCP (Model Context Protocol)

MCP [1] (Model Context Protocol) is a standardized interface proposed by Anthropic for providing structured real-time context information to large language models (LLMs).

Core Features

Contextual Data Injection

MCP allows you to inject external resources (such as files, database rows, or API responses) directly into the prompt or working memory. All data is transferred through standardized interfaces, keeping your large language model (LLM) lightweight and clean.

Function Routing & Invocation

MCP also supports dynamic tool invocation by the model. You can register functions such as `searchCustomerData` or `generateReport`, and the LLM can call them on demand, like equipping the AI with a toolbox without hard-coding the tools into the model.
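The registration-and-routing idea above can be sketched in a few lines. This is a minimal, hand-rolled illustration, not the real MCP SDK; the tool names mirror the hypothetical `searchCustomerData` / `generateReport` examples from the text, and the returned data is invented.

```python
# Minimal sketch of MCP-style function routing: tools live in a registry
# and are invoked by name, so the model never hard-codes them.
from typing import Any, Callable, Dict

TOOL_REGISTRY: Dict[str, Callable[..., Any]] = {}

def tool(fn: Callable[..., Any]) -> Callable[..., Any]:
    """Register a function so the runtime can route model calls to it."""
    TOOL_REGISTRY[fn.__name__] = fn
    return fn

@tool
def search_customer_data(customer_id: str) -> dict:
    # A real server would query a database or CRM API here.
    return {"id": customer_id, "name": "Acme Corp", "tier": "gold"}

@tool
def generate_report(customer: dict) -> str:
    return f"Report for {customer['name']} (tier: {customer['tier']})"

def invoke(name: str, **kwargs: Any) -> Any:
    """Dispatch a model-requested call to the matching registered tool."""
    if name not in TOOL_REGISTRY:
        raise KeyError(f"unknown tool: {name}")
    return TOOL_REGISTRY[name](**kwargs)

customer = invoke("search_customer_data", customer_id="c-42")
print(invoke("generate_report", customer=customer))
```

The key design point is the indirection: the model emits only a tool name and arguments, and the runtime resolves the actual call.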

Prompt Orchestration

Instead of piling every detail into the prompt, MCP can dynamically assemble the key context. It supports modular, real-time prompt construction: smarter context, fewer tokens, and better output.
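A toy sketch of that modular assembly, under the assumption that context lives in named blocks and each task requests only the blocks it needs (the block names and content here are invented for illustration):

```python
# Sketch of dynamic prompt orchestration: only the context blocks relevant
# to the current task are stitched into the final prompt, saving tokens.
from typing import Dict, List

CONTEXT_BLOCKS: Dict[str, str] = {
    "system": "You are a support assistant for Acme Corp.",
    "crm": "Customer: Jane Doe, plan: Pro, open tickets: 2.",
    "docs": "Refund policy: refunds are granted within 30 days.",
}

def assemble_prompt(task: str, needed: List[str]) -> str:
    """Build a prompt from just the blocks the task actually needs."""
    parts = [CONTEXT_BLOCKS[name] for name in needed if name in CONTEXT_BLOCKS]
    parts.append(f"Task: {task}")
    return "\n\n".join(parts)

prompt = assemble_prompt("Summarize the customer's refund options.",
                         needed=["system", "docs"])
print(prompt)
```

Note that the CRM block is simply never serialized for this task, which is where the token savings come from.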

Implementation Characteristics

- Based on HTTP(S); capabilities are described in JSON format
- Model-agnostic by design: any LLM with a compatible runtime can use an MCP-compatible server
- Compatible with API gateways and enterprise authentication standards (e.g. OAuth2, mTLS)

Engineering Use Cases

➀ LLM integration with internal APIs: supports secure, read-only or interactive access to structured business data without exposing raw interfaces.

➁ Enterprise agents: provide autonomous agents with runtime context from tools such as Salesforce, SAP, or internal knowledge bases.

➂ Dynamic prompt construction: generate prompts dynamically based on user sessions, system status, or task-flow logic.

What is ACP (Agent Communication Protocol)

ACP (Agent Communication Protocol) is an open standard originally proposed by BeeAI and IBM, designed to support structured communication, discovery, and coordination between multiple AI agents in the same local or edge environment.

Unlike cloud-oriented protocols (such as A2A) or contextual routing protocols (such as MCP), ACP is designed for local-first and real-time agent orchestration, emphasizing minimizing network overhead and achieving tight integration between multiple agents in the runtime.

Protocol Design & Architecture

ACP defines a decentralized agent environment whose core features include:

- Each agent publishes its identity, capabilities, and status via a local broadcast/discovery mechanism
- Agents communicate through event-driven messaging, commonly over a local bus or inter-process communication (IPC)
- An optional runtime controller can coordinate agent behavior, aggregate telemetry, and enforce operational policies
- ACP agents typically run as lightweight, stateless services or containers sharing a common communication infrastructure
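The discovery-plus-messaging pattern described above can be illustrated with a toy in-process event bus. This is a sketch of the idea only; a real ACP deployment would use IPC, ZeroMQ, or gRPC rather than a single-process dictionary, and the agent names are invented.

```python
# Toy in-process event bus illustrating ACP-style local agent coordination:
# agents announce themselves into a discovery registry, subscribe to topics,
# and exchange events without any network hop.
from collections import defaultdict
from typing import Callable, Dict, List

class LocalBus:
    def __init__(self) -> None:
        self.subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)
        self.agents: Dict[str, dict] = {}  # local discovery registry

    def announce(self, name: str, capabilities: list) -> None:
        """Local broadcast/discovery: publish identity, capabilities, status."""
        self.agents[name] = {"capabilities": capabilities, "status": "ready"}

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Event-driven messaging: deliver to every subscriber of the topic.
        for handler in self.subscribers[topic]:
            handler(event)

bus = LocalBus()
bus.announce("camera-agent", ["capture"])
bus.announce("planner-agent", ["plan"])

received = []
bus.subscribe("frames", lambda e: received.append(e))
bus.publish("frames", {"from": "camera-agent", "frame_id": 1})
print(bus.agents["camera-agent"]["status"], len(received))
```

An optional runtime controller would sit on the same bus, subscribing to telemetry topics and publishing policy events.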

Implementation Characteristics

- Designed for low-latency scenarios (e.g. local orchestration, robotic systems, offline edge AI)
- Can be implemented via gRPC, ZeroMQ, or a custom runtime bus
- Emphasizes local sovereignty: no need to rely on the cloud or register with external services
- Supports capability typing and semantic descriptors for automated task routing

Engineering Use Cases

➀ Multi-agent orchestration on edge devices: suited to real-time collaboration among agents in scenarios such as drones, IoT clusters, or robot fleets.

➁ Local-first LLM systems: coordinate model calls, sensor inputs, and action execution locally to achieve low-latency responses.

➂ Autonomous runtime environments: agents can coordinate and operate autonomously without relying on centralized cloud infrastructure.

In short, ACP provides a local runtime protocol layer for modular AI systems: it prioritizes low-latency coordination, system resilience, and composability. For deployments that value privacy, autonomous operation, or an edge-first (rather than cloud-first) posture, ACP is a natural fit.

What is A2A (Agent-to-Agent Protocol)?

The A2A protocol [2] was proposed by Google and is a cross-platform specification designed to enable AI agents to communicate, collaborate, and delegate tasks in heterogeneous systems.

Official link: google.github.io [3]

Unlike ACP, which emphasizes locality first, or MCP, which focuses on tool integration, A2A focuses on horizontal interoperability - it standardizes how agents from different vendors or operating environments can exchange capabilities and coordinate workflows over open networks.

Protocol Overview

A2A defines an HTTP-based communication model that treats agents as interoperable services. Each agent exposes an "Agent Card" - a machine-readable JSON description file that contains the agent's identity, capabilities, interface endpoints, and authentication requirements.

The agent uses this information to:

- Programmatically discover each other
- Negotiate tasks and roles
- Exchange messages, data, and streaming updates

Although A2A places no restriction on the transport layer in principle, the current standard specifies JSON-RPC 2.0 over HTTPS as the core interaction mechanism.
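The JSON-RPC 2.0 envelope itself is simple to construct. The sketch below builds a spec-conformant request; the method name `tasks/send` and the params shape are loose assumptions modeled on A2A examples and may differ from the current spec revision.

```python
# Build a JSON-RPC 2.0 request envelope (fields required by the spec:
# "jsonrpc", "method", plus "params" and a client-chosen "id").
import json
import uuid

def jsonrpc_request(method: str, params: dict) -> str:
    """Serialize a JSON-RPC 2.0 request ready to POST over HTTPS."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": method,
        "params": params,
    })

payload = jsonrpc_request("tasks/send", {
    "message": {"role": "user", "parts": [{"text": "Summarize Q3 sales."}]},
})
envelope = json.loads(payload)
print(envelope["jsonrpc"], envelope["method"])
```

The `id` lets the client match the eventual response to its request, which matters once multiple tasks are in flight between agents.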

Core Components

Agent Cards

Describes an agent's capabilities, interface endpoints, supported message types, authentication methods, and runtime metadata in the form of a JSON document.
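A hypothetical Agent Card might look like the following. The field names here are illustrative guesses at the schema described above, not the canonical A2A schema; consult the spec for the authoritative field list.

```python
# A machine-readable Agent Card: identity, capabilities, endpoint, skills,
# and authentication requirements, all as a plain JSON document.
import json

agent_card = {
    "name": "report-agent",
    "description": "Generates sales reports from CRM data.",
    "url": "https://agents.example.com/report-agent",  # interface endpoint
    "capabilities": {"streaming": True, "pushNotifications": False},
    "skills": [{"id": "quarterly-report",
                "description": "Build a quarterly sales report"}],
    "authentication": {"schemes": ["oauth2"]},
}

def supports_streaming(card: dict) -> bool:
    """A client inspects the card before negotiating a streaming task."""
    return bool(card.get("capabilities", {}).get("streaming"))

serialized = json.dumps(agent_card)  # what the agent actually serves
print(supports_streaming(json.loads(serialized)))
```

Because the card is plain JSON served at a known location, discovery reduces to an HTTP GET plus a schema check.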

A2A Client/Server Interface

Each agent can act as a client (task initiator), a server (task executor), or both, thus supporting dynamic routing and negotiation of tasks.

Message & Artifact Exchange

Supports multi-part tasks that carry context, streaming output across multi-round interactions (implemented via Server-Sent Events, SSE), and the transfer of persistent artifacts such as files and knowledge fragments.

User Experience Negotiation

The agent can adapt the message format, content granularity and visualization method according to the capabilities of downstream agents.

Security Architecture

- Authorization based on OAuth 2.0 and API keys
- Capability-scoped endpoints: agents expose only the capabilities needed for declared interactions
- Support for an "opaque mode": internal logic is hidden and only callable service interfaces are exposed

Implementation Characteristics

- Web-native: built on HTTP, JSON-RPC, and standard web security mechanisms
- Model-independent: applicable to any agent system that implements the protocol, whether LLM-based or not
- Supports task streaming and multi-round collaboration: light payloads, real-time interaction, and efficient cooperation

Engineering Use Cases

➀ Cross-platform agent ecosystems: scenarios where agents from different teams or vendors need to interoperate securely.

➁ Distributed agent orchestration in cloud-native AI environments, such as collaborative agent management on platforms like Vertex AI, LangChain, and HuggingFace Agents.

➂ Multi-agent collaboration frameworks: coordination across multiple systems (such as CRM, HR, and IT agents) in enterprise AI workflows.


Protocols Compared Side-by-Side

| Feature | A2A (Agent-to-Agent) | MCP (Model Context Protocol) | ACP (Agent Communication Protocol) |
| --- | --- | --- | --- |
| Initiator | Google | Anthropic | BeeAI & IBM |
| Core purpose | Interoperability and collaboration among heterogeneous agents | Inject structured, live context into LLMs and call function tools | Real-time communication and orchestration of multiple agents in local environments |
| Communication architecture | HTTP + JSON-RPC 2.0 | HTTP(S) + JSON | Local bus / IPC (inter-process communication) |
| Message mechanism | Agent Card + multi-turn task negotiation | Real-time context assembly + tool registration and invocation | Event-driven messaging |
| Deployment environment | Open network / web-native / cloud platforms | Runs alongside the LLM to support context processing and function calls | Edge devices / local systems (e.g. drones, robots) |
| Model dependence | Model-independent, adaptable to any agent system | Model-independent, but primarily serves LLMs | Model-independent, emphasizing lightweight, stateless operation |
| Adaptability | Highly adaptable to cloud platforms, API gateways, OAuth2, etc. | Supports API gateways and enterprise authentication; easy to integrate with business systems | Decentralized and locally autonomous; suited to offline or low-latency environments |
| Security mechanism | OAuth 2.0 + API keys, with capability scoping and opaque mode | Enterprise authentication standards (e.g. OAuth2, mTLS) | No external service registration; local broadcast discovery + optional controller |
| Typical scenarios | Enterprise AI workflow collaboration, LLM service integration, vendor integration platforms | LLM prompt management, function routing, embedding in enterprise systems | Edge AI, unmanned systems, robot swarm collaboration |
| Representative platforms/projects | Vertex AI, LangChain, HuggingFace Agents | Claude, in-house agent invocation services | Locally deployed agents, edge inference, robot bus systems |


Complementarity or competition?

A2A + MCP

A2A and MCP are not competing with each other: they solve different parts of the agentic AI puzzle, and in fact they complement each other well.

Think of MCP as a protocol that lets AI agents connect to the world. It gives agents access to files, APIs, databases — basically, all the structured context they need to do useful work. Whether it’s getting real-time sales data or generating custom reports, MCP handles the connection to the tools and data.

Now add the additional layer of A2A. This is where agents start to collaborate. A2A provides a shared language and set of rules that allow agents to discover each other, delegate tasks, and negotiate how to work together — even if they are built by different vendors or run on different platforms.

So it can be understood in a simple way:

⟢ MCP connects AI to tools
⟢ A2A connects AI to other AI

Together, they form a powerful, modular foundation for building smart, collaborative systems.

What about ACP?

Next up is ACP, which takes a completely different approach. ACP focuses on local-first agent collaboration, with no reliance on the cloud at all. Instead of HTTP and web-based discovery, agents discover and communicate with each other within a shared runtime.

This is ideal for the following scenarios:

- Environments with limited bandwidth or strict latency requirements (such as robots or on-device assistants)
- High-privacy deployments where all operations must remain offline
- Locations without Internet access (such as factory floors or edge nodes)

ACP is not meant to compete with A2A; it simply fills a different niche. But in some deployment environments, especially tightly controlled ones, ACP could replace A2A entirely, because it skips the overhead of web-based protocols and coordinates directly on the local machine.

Integration or fragmentation?

As more teams begin to adopt these protocols, the future may take several different paths.

✅ Best case: we see convergence. Imagine a unified agent platform where A2A handles communication between agents, MCP manages access to tools and data, and ACP-style runtimes cover edge or offline scenarios. Everything works together smoothly, and developers don't need to worry about which protocol is used behind the scenes.


❌ Worst case: things fragment. Different vendors promote their own variants of A2A or MCP, and it ends in a mess, much like early web services, where systems could not talk to each other without a lot of "glue code" for bridging.

⚖️ Middle road: open-source tools and middleware may save the day. Such projects sit between the agent and the protocol, abstracting away the differences and giving developers a clean, unified API, with the underlying layer performing protocol conversion automatically based on the agent's environment and mode of operation.

In short: we are still in the early stages. But how we build and adopt these standards today will determine whether the future of AI agents becomes a coordinated ecosystem or a patchwork of isolated islands.