Google A2A Protocol: Agent-to-Agent Communication Guide
Artificial intelligence agents are becoming more capable, but they are also becoming more specialized. Rather than a single AI system handling every task, modern AI architectures decompose complex workflows into multiple agents — each optimized for a specific function — that collaborate to produce outcomes no single agent could achieve alone. The challenge this creates is interoperability: how do agents built by different teams, running on different platforms, and produced by different vendors communicate with each other?
Google's Agent-to-Agent (A2A) protocol, announced in April 2025 and developed with over 50 launch partners, is the most significant attempt to answer this question at an industry scale. A2A defines a standard communication layer for AI agents — a common language that allows any compliant agent to delegate work to, collaborate with, and receive results from any other compliant agent, regardless of their underlying implementation. This guide covers the protocol architecture, its relationship to MCP, the Agent Card discovery system, the task lifecycle model, and what it means for teams building multi-agent systems. The rapid growth of MCP itself provides useful context: as covered in our analysis of MCP reaching 97 million downloads, the AI agent ecosystem is standardizing rapidly, and A2A represents the next layer of that standardization effort.
What Is the A2A Protocol?
A2A is an open protocol that defines how AI agents communicate with each other as peers. It specifies message formats, task lifecycle management, capability discovery, authentication, and streaming mechanisms — everything needed for one agent to reliably delegate work to another and receive results. The protocol is built on top of HTTP, uses JSON for message serialization, and employs Server-Sent Events for real-time streaming.
The protocol distinguishes between two roles: the client agent, which initiates a task and consumes results, and the remote agent (or server agent), which receives task assignments and performs the requested work. In a multi-agent system, the same agent can play both roles simultaneously — acting as a client when delegating subtasks and as a server when receiving assignments from an orchestrator.
Built on HTTP/HTTPS as the transport layer, A2A integrates with existing enterprise network infrastructure, security tooling, and API gateways without requiring new communication protocols or custom networking.
Standardized JSON capability declarations published at a well-known URL enable client agents to discover what a remote agent can do, what authentication it requires, and what modalities it supports before initiating any task.
A defined state machine with six states handles everything from instant API-style responses to long-running autonomous tasks spanning hours or days, including intermediate states for when agents need human input.
The design philosophy of A2A emphasizes existing standards wherever possible. Rather than inventing new authentication mechanisms, A2A supports OAuth 2.0, API keys, and service account tokens — the same mechanisms enterprises already use for API authentication. Rather than inventing a new serialization format, A2A uses JSON. Rather than inventing a new streaming mechanism, A2A uses Server-Sent Events, which every HTTP library supports. This conservatism in design choices lowers the barrier to implementation and leverages existing enterprise security infrastructure.
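These design choices can be seen in a small sketch. The endpoint URL, token, and message fields below are illustrative placeholders, not the normative A2A wire format; the point is that a task submission is nothing more exotic than an authenticated HTTP POST carrying JSON:

```python
import json
import urllib.request

# Hypothetical endpoint and token -- placeholders for illustration only.
AGENT_URL = "https://research-agent.example.com/tasks"
API_TOKEN = "example-token"

# A minimal JSON task message: plain JSON over HTTP, authenticated with a
# bearer token -- exactly the existing standards A2A builds on.
task_message = {
    "message": {
        "role": "user",
        "parts": [{"type": "text", "text": "Summarize recent news about our competitors"}],
    }
}

body = json.dumps(task_message).encode("utf-8")

# Construct (but do not send) the request, so the example stays self-contained.
request = urllib.request.Request(
    AGENT_URL,
    data=body,
    method="POST",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_TOKEN}",
    },
)
```

Because the transport is ordinary HTTPS with standard auth headers, this request would pass through existing API gateways, proxies, and logging infrastructure unchanged.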
A2A vs. MCP: Complementary, Not Competing
The most common question about A2A is how it relates to the Model Context Protocol (MCP), which has become the de facto standard for connecting AI agents to tools and data sources. The answer is that they solve different problems at different layers of the same stack, and most sophisticated multi-agent systems will use both.
MCP defines how an AI agent connects to external tools, APIs, databases, and data sources. It answers: “How does this agent call a web search API, query a database, or read a file?” MCP servers expose capabilities as callable tools that agents invoke during task execution. The agent is always the client; the tools are always the servers.
A2A defines how AI agents communicate with each other as peers. It answers: “How does an orchestrator agent delegate a research task to a specialized research agent, and how does the research agent return results?” Both parties are agents with their own models, contexts, and capabilities. Either can be the client or server depending on the workflow.
The distinction matters for system design. When you want an agent to call a tool — a search engine, a database, a file system — you use MCP. When you want an agent to delegate a complete task to another agent that has its own reasoning, context, and tool access, you use A2A. The remote agent in an A2A interaction is not a passive tool; it is an autonomous system that can reason, plan, and use its own MCP-connected tools to complete the assigned task.
Stack example: An orchestrator agent receives a user request to “research competitors and draft a report.” Using A2A, it delegates the research subtask to a specialized research agent. The research agent uses MCP to call web search, news APIs, and document retrieval tools. It returns structured results via A2A. The orchestrator uses those results with a writing agent (also via A2A) to produce the final report. Each agent uses MCP for its tools and A2A for peer communication.
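The layering in the stack example can be sketched as a toy program. Every function here is a stand-in, not the real A2A or MCP SDK; the names and artifact fields are illustrative:

```python
# Toy sketch of the A2A/MCP layering: stand-in functions, illustrative names.

def mcp_tool_call(tool: str, query: str) -> str:
    """Stand-in for an MCP tool invocation (e.g. web search, news API)."""
    return f"results from {tool} for '{query}'"

def research_agent(task: str) -> dict:
    """Remote agent: uses MCP-style tool calls internally,
    returns an A2A-style artifact."""
    findings = [mcp_tool_call(t, task) for t in ("web_search", "news_api")]
    return {"artifactId": "research-1", "mimeType": "application/json", "data": findings}

def writing_agent(task: str, source: dict) -> dict:
    """Second remote agent: turns the research artifact into a report artifact."""
    report = f"Report on '{task}' using {len(source['data'])} sources"
    return {"artifactId": "report-1", "mimeType": "text/plain", "data": report}

def orchestrator(request: str) -> dict:
    """Client agent: delegates via A2A-style calls and chains the artifacts."""
    research = research_agent(request)       # A2A delegation #1
    return writing_agent(request, research)  # A2A delegation #2

result = orchestrator("competitor landscape")
```

The orchestrator never touches a tool directly: it delegates whole tasks (A2A's role), while each subagent uses its own tool access (MCP's role) to fulfill them.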
Agent Cards and Capability Discovery
Agent Cards are the A2A equivalent of OpenAPI specifications for REST APIs or WSDL documents for SOAP services. Every A2A-compliant agent publishes a JSON Agent Card at the well-known path /.well-known/agent.json on its host. Client agents fetch this document before initiating any interaction to understand the agent's capabilities and requirements.
A complete Agent Card includes several key sections:
- Identity and metadata: Name, description, version, and documentation URL, plus a natural-language description of the agent's purpose and capabilities that client agents and their underlying models can use to decide whether to delegate a particular task.
- Supported modalities: Declares which input and output types the agent supports: plain text, structured JSON, file uploads, audio, video, or streaming data. Client agents use this to format requests correctly and to know what form results will take.
- Authentication requirements: Specifies which authentication schemes the agent accepts: API key, OAuth 2.0 with specific scopes, service account tokens, or no authentication for public agents. Client agents present the appropriate credentials with each task request.
- Skills and pricing: An optional structured list of specific skills the agent offers, each with an ID, description, and example inputs. Pricing information enables client agents to make cost-aware routing decisions when multiple agents can handle the same task.
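A concrete card makes these sections tangible. The JSON below is an illustrative sketch whose field names follow the sections described above, not the normative Agent Card schema:

```python
import json

# Illustrative Agent Card -- field names are a sketch, not the normative schema.
agent_card_json = """
{
  "name": "translation-agent",
  "description": "Translates documents between 40 languages",
  "version": "1.2.0",
  "url": "https://translate-agent.example.com",
  "capabilities": {"streaming": true},
  "defaultInputModes": ["text/plain", "application/json"],
  "defaultOutputModes": ["text/plain"],
  "authentication": {"schemes": ["bearer"]},
  "skills": [
    {"id": "translate", "description": "Translate a document",
     "examples": ["Translate this contract into German"]}
  ]
}
"""

card = json.loads(agent_card_json)

def supports_text_input(card: dict) -> bool:
    """The kind of pre-flight check a client agent might run before
    submitting a task to this agent."""
    return "text/plain" in card.get("defaultInputModes", [])
```

A client agent fetches this document, runs checks like `supports_text_input`, inspects the declared auth schemes, and only then submits a task.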
The Agent Card discovery mechanism is central to A2A's vision of dynamic multi-agent composition. Rather than requiring developers to hard-code which agents an orchestrator can work with, A2A enables agents to discover and evaluate each other at runtime. An orchestrator agent that needs to translate a document can query a registry of A2A agents, fetch their Agent Cards, compare their capabilities and pricing, and select the most appropriate agent for the task — all without prior configuration of that specific delegation relationship.
Task Lifecycle, Streaming, and Artifacts
A2A's task lifecycle model is designed to handle both instantaneous responses and long-running autonomous work. The six-state machine provides enough granularity to monitor progress, handle interruptions, and recover from failures without requiring custom state management in each agent implementation.
- Submitted: The task has been received and accepted by the remote agent. The client has a task ID it can use for status queries and cancellation. The agent has not yet begun processing.
- Working: The remote agent is actively processing the task. During this state, the agent may stream incremental results to the client via Server-Sent Events, providing real-time visibility into progress for long-running tasks.
- Input-required: The remote agent needs additional input to proceed. This state enables human-in-the-loop workflows where agents pause to request clarification, additional data, or authorization before continuing. The client responds with the requested input.
- Completed: The task has finished successfully. The response includes the task output as one or more artifacts — structured data objects containing the results in whatever format the agent produces (text, JSON, file references, etc.).
- Failed: The task ended with an error. The response includes error details that the client agent can use to decide whether to retry with different parameters, escalate to a different agent, or report the failure upstream.
- Canceled: The task was terminated before completion, either by a client cancellation request or by the server agent due to resource constraints or policy violations. Partial results may be included in the response.
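The lifecycle above can be modeled as a small state machine. The transition table is inferred from the state descriptions (terminal states have no outgoing edges); it is a sketch, not the normative transition rules:

```python
from enum import Enum

class TaskState(str, Enum):
    SUBMITTED = "submitted"
    WORKING = "working"
    INPUT_REQUIRED = "input-required"
    COMPLETED = "completed"
    FAILED = "failed"
    CANCELED = "canceled"

# Transitions implied by the lifecycle descriptions; the three terminal
# states (completed, failed, canceled) have no outgoing edges.
TRANSITIONS = {
    TaskState.SUBMITTED: {TaskState.WORKING, TaskState.CANCELED},
    TaskState.WORKING: {TaskState.INPUT_REQUIRED, TaskState.COMPLETED,
                        TaskState.FAILED, TaskState.CANCELED},
    TaskState.INPUT_REQUIRED: {TaskState.WORKING, TaskState.CANCELED},
    TaskState.COMPLETED: set(),
    TaskState.FAILED: set(),
    TaskState.CANCELED: set(),
}

def can_transition(current: TaskState, target: TaskState) -> bool:
    """Validate a proposed state change against the lifecycle."""
    return target in TRANSITIONS[current]
```

Centralizing transition validation like this is what lets agents monitor progress and recover from failures without each implementation inventing its own state management.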
The artifact model in A2A deserves special attention. Task results are not returned as raw text but as typed artifacts with metadata. An artifact has an ID, a MIME type, and either inline data or a reference to external storage. This structure enables agents to produce large outputs — documents, images, datasets — without embedding them in the response payload. It also enables artifact reuse: a client agent can pass an artifact ID from one task to another agent without retransmitting the data.
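The inline-versus-reference distinction can be captured in a few lines. The field names here are illustrative, not the normative artifact schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Artifact:
    """Sketch of the artifact shape described above: a typed output carrying
    either inline data or a reference to external storage (names illustrative)."""
    artifact_id: str
    mime_type: str
    inline_data: Optional[bytes] = None
    storage_uri: Optional[str] = None

    def is_inline(self) -> bool:
        return self.inline_data is not None

# A small result travels inline; a large dataset is passed by reference, so a
# client can hand the artifact ID to another agent without retransmitting it.
summary = Artifact("art-1", "text/plain", inline_data=b"Q3 revenue grew 12%")
dataset = Artifact("art-2", "text/csv",
                   storage_uri="https://storage.example.com/art-2")
```

The MIME type tells downstream agents how to interpret the payload, and the storage reference keeps multi-megabyte outputs out of the response body.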
Security, Authentication, and Enterprise Trust
Enterprise adoption of multi-agent systems depends critically on security — not just in the sense of access control, but in the broader sense of trust, auditability, and containment. A2A addresses these concerns at the protocol level rather than leaving them to individual implementation choices.
A2A uses OAuth 2.0, API keys, and service account tokens — the same mechanisms enterprises already use. There is no new identity infrastructure to deploy. Existing SSO, PAM, and secrets management systems work with A2A out of the box.
All A2A communication is over HTTPS, providing transport-layer encryption. Agent Cards declare whether TLS is required, and enterprise deployments can enforce TLS through existing network policies and API gateway configurations.
A2A enables agents from different organizations to collaborate without sharing internal systems access. Each organization controls what capabilities its agents expose and what credentials are required. Cross-org workflows are mediated through well-defined API boundaries.
Every A2A task has a unique ID and structured lifecycle events that can be logged to existing SIEM systems. The HTTP-based transport means standard API logging infrastructure captures complete interaction records for compliance and forensic purposes.
Security consideration for multi-agent systems: When agent A delegates a task to agent B, agent B operates with its own credentials and access permissions — not the credentials of the original user who initiated the workflow. This “confused deputy” problem requires careful permission scoping: each agent should have only the minimum permissions needed for its specific role. A2A's explicit authentication declarations in Agent Cards support implementing least-privilege agent permissions.
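Least-privilege scoping can be enforced mechanically. The sketch below assumes OAuth-style scope strings; the scope names are hypothetical, not drawn from the A2A spec:

```python
# Hypothetical OAuth-style scopes -- names are illustrative.
REQUIRED_SCOPES = {"reports:read"}   # what the remote agent's card demands
AGENT_B_SCOPES = {"reports:read"}    # what agent B's own credential grants
OVERPROVISIONED = {"reports:read", "reports:write", "admin:all"}

def satisfies_least_privilege(granted: set, required: set) -> bool:
    """True only when the granted scopes cover the requirement exactly:
    nothing missing, and nothing extra that widens the blast radius."""
    return required <= granted and granted <= required
```

A deployment pipeline could run this check against each agent's credential before registering it, flagging overprovisioned agents like the third set above.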
Ecosystem Adoption and Partner Integrations
Google announced A2A in April 2025 with more than 50 launch partners — a level of industry coordination that signals serious intent to establish A2A as a lasting standard rather than a Google-proprietary protocol. The partner list spans enterprise software, cloud infrastructure, developer tools, and consulting.
- Salesforce (Agentforce)
- SAP (Business AI)
- ServiceNow (Now Assist)
- Workday (AI Assistant)
- Atlassian (Rovo)
- Deloitte
- Accenture
- Capgemini
- KPMG
- McKinsey QuantumBlack
- MongoDB
- LangChain
- CrewAI
- Vertex AI (native)
- Google Agentspace
The enterprise software commitments are particularly significant. Salesforce, SAP, ServiceNow, and Workday collectively represent the core of enterprise application infrastructure. When these systems expose their AI agents through A2A, it becomes possible to build cross-system workflows that were previously impossible without custom integration work — for example, a Salesforce opportunity agent collaborating with a Workday resource planning agent and a ServiceNow project setup agent to onboard a new client.
The consulting firm commitments signal that A2A is becoming part of the enterprise AI implementation conversation. When Deloitte, Accenture, and Capgemini commit to building A2A-compatible solutions for their clients, they are also committing to training their consultants, developing implementation frameworks, and creating reference architectures — all of which accelerate ecosystem adoption. For businesses evaluating agentic commerce and automation investments, the related developments in agentic commerce protocols show how A2A-style standardization is enabling entirely new categories of AI-driven business transactions.
Building A2A-Compatible Agents
Implementing A2A support in a new or existing AI agent requires addressing three areas: Agent Card publication, task endpoint implementation, and client-side task management. Google has published open-source SDK libraries for Python and JavaScript that handle much of the boilerplate.
Serve a JSON document at /.well-known/agent.json that describes your agent's capabilities, supported input and output modalities, authentication requirements, and optional skill list. Keep this document up to date as your agent's capabilities evolve. Use semantic versioning in the card to help client agents handle capability changes gracefully.
GET /.well-known/agent.json → AgentCard JSON

Implement the POST /tasks endpoint to receive task assignments. Accept the task message, validate authentication, create a task record with a unique ID, return an initial response with the task ID and status, and then process the task asynchronously. Implement GET /tasks/{id} for status polling and GET /tasks/{id}/stream for SSE streaming.
POST /tasks → { taskId, status: "submitted" }

When acting as a client agent, fetch the target agent's Agent Card to verify it supports the task type, submit the task with appropriate credentials, and either poll for completion or subscribe to the SSE stream. Handle the input-required state by providing requested additional information. Parse artifacts from completed tasks into whatever format your orchestration logic expects.
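The client-side poll loop can be sketched with a stub in place of real HTTP calls to GET /tasks/{id}; the stub function and its payloads are illustrative, not SDK code:

```python
# Client-side task management sketch; fetch_task_status stands in for a real
# HTTP GET /tasks/{id} call, and its payload shape is illustrative.
_status_sequence = iter(["submitted", "working", "working", "completed"])

def fetch_task_status(task_id: str) -> dict:
    """Stand-in for GET /tasks/{id}, replaying a canned status sequence."""
    state = next(_status_sequence)
    payload = {"taskId": task_id, "status": state}
    if state == "completed":
        payload["artifacts"] = [{"artifactId": "a1", "mimeType": "text/plain"}]
    return payload

def poll_until_done(task_id: str, max_polls: int = 10) -> dict:
    """Poll until the task reaches a terminal state or polls run out."""
    for _ in range(max_polls):
        result = fetch_task_status(task_id)
        if result["status"] in ("completed", "failed", "canceled"):
            return result
        # A real client would sleep or back off here before re-polling.
    raise TimeoutError(f"task {task_id} did not finish within {max_polls} polls")

final = poll_until_done("task-42")
```

For long-running tasks, subscribing to the SSE stream replaces this polling loop, but the terminal-state handling stays the same.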
GET /tasks/{id}/stream → SSE events

Real-World Multi-Agent Workflow Examples
Abstract protocol specifications become clearer through concrete workflow examples. Here are three enterprise scenarios that illustrate how A2A enables multi-agent coordination that would otherwise require custom integration work:
Client onboarding: A sales orchestrator agent in Salesforce receives a closed-won opportunity. Via A2A, it delegates to a contract agent (DocuSign) to generate and send the service agreement, a resource planning agent (Workday) to allocate delivery team members, a project setup agent (Jira) to create the project structure, and an onboarding agent (internal) to provision client portal access. The orchestrator monitors task statuses via A2A and notifies the account team when all setup tasks complete.
Without A2A: Custom API integrations between 4 systems, brittle webhook chains, no unified error handling.
Competitive research: A strategy orchestrator receives a request for a competitive landscape report. It delegates to a web research agent (via A2A, with MCP for search) to gather recent news and product updates, a financial analysis agent to pull and analyze competitor financials, a sentiment analysis agent to process social media, and finally a report writing agent to synthesize all artifacts into a formatted document. Each subagent returns structured artifacts that the orchestrator passes to the next stage.
Without A2A: Single monolithic agent prompt engineering problem, context window limitations, no specialization benefits.
Incident response: A monitoring orchestrator detects an anomaly and initiates a structured response workflow. Via A2A, it engages a diagnostics agent to analyze logs and identify the root cause, a runbook agent to determine the appropriate remediation procedure, and a notification agent to alert the on-call team with context. If the runbook agent enters the input-required state requesting human approval for a destructive remediation step, the orchestrator routes the approval request to the on-call engineer before proceeding.
Without A2A: Manual coordination, delayed response, inconsistent runbook execution.
These examples share a common pattern: an orchestrator that decomposes a complex task into specialized subtasks, delegates to agents with deep domain expertise, and synthesizes results. The orchestrator does not need to know how each subagent works internally — it only needs the Agent Card to know what each can do. This separation of concerns is what makes large-scale multi-agent systems maintainable. For organizations looking to implement AI and digital transformation at enterprise scale, A2A provides the interoperability foundation that makes complex multi-agent deployments practical.
Limitations and Current State
A2A is a promising and well-designed protocol with strong ecosystem backing, but it is early-stage. Teams considering A2A for production deployments should understand several current limitations:
Schema evolution: A2A does not yet define a standard approach for schema versioning or backward compatibility. When an agent's capabilities change, client agents may break if they were relying on specific input or output structures. Best practice is to implement explicit versioning in Agent Cards and maintain backward compatibility within major versions.
Discovery infrastructure: A2A defines how agents declare their capabilities but does not specify a centralized registry or directory service for discovering available agents. Teams need to manage their own agent registries or rely on organizational knowledge of which A2A agents are available. Third-party registry services are emerging but not yet standardized.
Billing and metering: While Agent Cards support declaring pricing information, A2A does not define a standard for billing, usage tracking, or cost allocation across multi-agent workflows. For commercial deployments where agent services are metered, custom billing integration is required.
Observability tooling: Distributed tracing across multi-agent A2A workflows is not yet as mature as distributed tracing for microservices. Understanding why a complex multi-agent workflow produced an unexpected result requires instrumenting each agent individually and correlating logs by task ID — doable, but not yet supported by purpose-built tooling.
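The discovery gap in particular is often bridged today with a simple in-house registry. The sketch below shows the minimal shape such a registry takes; the skill names and URLs are placeholders:

```python
# Minimal in-house agent registry of the kind teams currently maintain
# themselves: a mapping from skill to known agent endpoints (placeholder URLs).
REGISTRY = {
    "translation": ["https://translate-a.example.com",
                    "https://translate-b.example.com"],
    "research": ["https://research.example.com"],
}

def find_agents(skill: str) -> list:
    """Return candidate agent base URLs for a skill; a client would then fetch
    each /.well-known/agent.json card to compare capabilities and pricing."""
    return REGISTRY.get(skill, [])
```

A standardized registry service would replace this lookup table, but the two-step pattern — find candidates, then fetch and compare their Agent Cards — would stay the same.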
Despite these limitations, A2A is the most credible candidate for becoming the interoperability standard for enterprise multi-agent systems. The combination of strong enterprise software partner commitments, an HTTP-based design that works with existing infrastructure, and a thoughtful task lifecycle model positions it well. The current limitations are gaps in ecosystem maturity rather than fundamental architectural flaws, and they are likely to close as adoption increases.
Conclusion
The Agent-to-Agent protocol addresses a genuine bottleneck in the development of multi-agent AI systems: the absence of a standard language for agents to communicate with each other. By building on familiar HTTP conventions, defining a clear task lifecycle, and enabling machine-discoverable capability declarations through Agent Cards, A2A makes it practical to compose multi-agent workflows from specialized agents without custom integration code for each agent pair.
The pattern emerging from MCP, A2A, and related protocol work is that the AI agent ecosystem is deliberately adopting a layered architecture: standardized tool access at one layer, standardized agent-to-agent communication at another. This architecture will enable the kind of modular, composable, and interoperable AI systems that can tackle enterprise-scale problems — systems where specialized agents collaborate across organizational boundaries with the same ease that HTTP enabled web services to collaborate across network boundaries thirty years ago. Teams that understand these protocols now will be best positioned to build and deploy those systems.