AI Development

Google A2A Protocol: Agent-to-Agent Communication Guide

Google's Agent-to-Agent (A2A) protocol enables AI agents to communicate and collaborate. Now hosted by the Linux Foundation and backed by 50+ enterprise partners, it is emerging as the standard for multi-agent interoperability. A complete guide.

Digital Applied Team
March 11, 2026
12 min read
  • 50+ launch partners
  • 2 core protocols (A2A + MCP)
  • 6 task state transitions
  • HTTP as the transport layer standard

Key Takeaways

A2A solves the interoperability problem that MCP leaves open: The Model Context Protocol (MCP) standardizes how AI agents access tools and data sources. A2A standardizes how AI agents communicate with each other as peers. Together they form a complete stack: MCP handles agent-to-resource connections, A2A handles agent-to-agent delegation and collaboration across organizational and vendor boundaries.
Agent Cards make capabilities machine-discoverable: Every A2A-compliant agent publishes a JSON Agent Card at a well-known URL that describes its capabilities, supported modalities, authentication requirements, and pricing. Client agents query these cards to discover what a remote agent can do before initiating a task — enabling dynamic composition of multi-agent workflows without hard-coded integrations.
The task lifecycle model handles long-running and asynchronous work: A2A defines a standard task state machine (submitted, working, input-required, completed, failed, cancelled) with Server-Sent Events streaming for real-time progress updates. This architecture supports both synchronous API-style exchanges and long-running autonomous tasks that may take hours or days to complete.
Over 50 technology partners have committed to A2A support: Google launched A2A with commitments from Salesforce, SAP, ServiceNow, Workday, Atlassian, Deloitte, Accenture, and more than 40 other technology and consulting firms. This breadth of early adoption suggests A2A is on a path to becoming the industry standard for enterprise multi-agent interoperability.

Artificial intelligence agents are becoming more capable, but they are also becoming more specialized. Rather than a single AI system handling every task, modern AI architectures decompose complex workflows into multiple agents — each optimized for a specific function — that collaborate to produce outcomes no single agent could achieve alone. The challenge this creates is interoperability: how do agents built by different teams, running on different platforms, and produced by different vendors communicate with each other?

Google's Agent-to-Agent (A2A) protocol, announced in April 2025 and developed with over 50 launch partners, is the most significant attempt to answer this question at industry scale. A2A defines a standard communication layer for AI agents — a common language that allows any compliant agent to delegate work to, collaborate with, and receive results from any other compliant agent, regardless of their underlying implementation. This guide covers the protocol architecture, its relationship to MCP, the Agent Card discovery system, the task lifecycle model, and what it means for teams building multi-agent systems. The rapid growth of MCP itself provides useful context: as covered in our analysis of MCP reaching 97 million downloads, the AI agent ecosystem is standardizing rapidly, and A2A represents the next layer of that standardization effort.

What Is the A2A Protocol

A2A is an open protocol that defines how AI agents communicate with each other as peers. It specifies message formats, task lifecycle management, capability discovery, authentication, and streaming mechanisms — everything needed for one agent to reliably delegate work to another and receive results. The protocol is built on top of HTTP, uses JSON for message serialization, and employs Server-Sent Events for real-time streaming.

The protocol distinguishes between two roles: the client agent, which initiates a task and consumes results, and the remote agent (or server agent), which receives task assignments and performs the requested work. In a multi-agent system, the same agent can play both roles simultaneously — acting as a client when delegating subtasks and as a server when receiving assignments from an orchestrator.

HTTP Transport

Built on HTTP/HTTPS as the transport layer, A2A integrates with existing enterprise network infrastructure, security tooling, and API gateways without requiring new communication protocols or custom networking.

Agent Cards

Standardized JSON capability declarations published at a well-known URL enable client agents to discover what a remote agent can do, what authentication it requires, and what modalities it supports before initiating any task.

Task Lifecycle

A defined state machine with six states handles everything from instant API-style responses to long-running autonomous tasks spanning hours or days, including an intermediate state for cases where an agent needs human input.

The design philosophy of A2A emphasizes existing standards wherever possible. Rather than inventing new authentication mechanisms, A2A supports OAuth 2.0, API keys, and service account tokens — the same mechanisms enterprises already use for API authentication. Rather than inventing a new serialization format, A2A uses JSON. Rather than inventing a new streaming mechanism, A2A uses Server-Sent Events, which every HTTP library supports. This conservatism in design choices lowers the barrier to implementation and leverages existing enterprise security infrastructure.

A2A vs. MCP: Complementary, Not Competing

The most common question about A2A is how it relates to the Model Context Protocol (MCP), which has become the de facto standard for connecting AI agents to tools and data sources. The answer is that they solve different problems at different layers of the same stack, and most sophisticated multi-agent systems will use both.

MCP: Agent to Resources

MCP defines how an AI agent connects to external tools, APIs, databases, and data sources. It answers: “How does this agent call a web search API, query a database, or read a file?” MCP servers expose capabilities as callable tools that agents invoke during task execution. The agent is always the client; the tools are always the servers.

A2A: Agent to Agent

A2A defines how AI agents communicate with each other as peers. It answers: “How does an orchestrator agent delegate a research task to a specialized research agent, and how does the research agent return results?” Both parties are agents with their own models, contexts, and capabilities. Either can be the client or server depending on the workflow.

The distinction matters for system design. When you want an agent to call a tool — a search engine, a database, a file system — you use MCP. When you want an agent to delegate a complete task to another agent that has its own reasoning, context, and tool access, you use A2A. The remote agent in an A2A interaction is not a passive tool; it is an autonomous system that can reason, plan, and use its own MCP-connected tools to complete the assigned task.

Agent Cards and Capability Discovery

Agent Cards are the A2A equivalent of OpenAPI specifications for REST APIs or WSDL documents for SOAP services. Every A2A-compliant agent publishes a JSON Agent Card at the well-known path /.well-known/agent.json on its host. Client agents fetch this document before initiating any interaction to understand the agent's capabilities and requirements.

A complete Agent Card includes several key sections:

Identity and Description

Name, version, documentation URL, and a natural-language description of the agent's purpose and capabilities that client agents and their underlying models can use to decide whether to delegate a particular task.

Supported Modalities

Declares which input and output types the agent supports: plain text, structured JSON, file uploads, audio, video, or streaming data. Client agents use this to format requests correctly and know what form results will take.

Authentication Requirements

Specifies which authentication schemes the agent accepts: API key, OAuth 2.0 with specific scopes, service account tokens, or no authentication for public agents. Client agents present the appropriate credentials with each task request.

Skills and Pricing

Optional structured list of specific skills the agent offers, each with an ID, description, and example inputs. Pricing information enables client agents to make cost-aware routing decisions when multiple agents can handle the same task.
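As a concrete illustration, here is what such a card might look like for a hypothetical translation agent, built as a Python dict and serialized to the JSON a server would return. The field names and values below are illustrative assumptions, not the normative A2A schema:

```python
import json

# Illustrative Agent Card for a hypothetical translation agent.
# Field names follow the sections described above; consult the A2A
# specification for the normative schema.
agent_card = {
    "name": "translation-agent",
    "description": "Translates documents between 40 languages.",
    "version": "1.2.0",
    "url": "https://agents.example.com",
    "capabilities": {"streaming": True},
    "defaultInputModes": ["text/plain", "application/json"],
    "defaultOutputModes": ["text/plain"],
    "authentication": {"schemes": ["oauth2"]},
    "skills": [
        {
            "id": "translate-document",
            "description": "Translate a document, preserving formatting.",
            "examples": ["Translate this contract to German."],
        }
    ],
}

# A server would return this document from /.well-known/agent.json:
print(json.dumps(agent_card, indent=2))
```

A client agent fetches this document, inspects the modalities and authentication schemes, and only then decides whether and how to delegate.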

The Agent Card discovery mechanism is central to A2A's vision of dynamic multi-agent composition. Rather than requiring developers to hard-code which agents an orchestrator can work with, A2A enables agents to discover and evaluate each other at runtime. An orchestrator agent that needs to translate a document can query a registry of A2A agents, fetch their Agent Cards, compare their capabilities and pricing, and select the most appropriate agent for the task — all without prior configuration of that specific delegation relationship.
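The cost-aware selection step just described could be sketched as follows, where the card layout and the `pricePerTask` field are illustrative assumptions rather than the normative schema:

```python
def pick_agent(cards: list[dict], required_skill: str) -> dict:
    """Choose the cheapest agent whose Agent Card advertises the
    required skill. Field names ('skills', 'pricePerTask') are
    illustrative, not the normative A2A schema."""
    candidates = [
        card for card in cards
        if any(s.get("id") == required_skill for s in card.get("skills", []))
    ]
    if not candidates:
        raise LookupError(f"no agent offers skill {required_skill!r}")
    # Cards without pricing sort last rather than being excluded.
    return min(candidates, key=lambda c: c.get("pricePerTask", float("inf")))
```

In a real deployment the `cards` list would come from fetching /.well-known/agent.json for each agent in a registry.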

Task Lifecycle, Streaming, and Artifacts

A2A's task lifecycle model is designed to handle both instantaneous responses and long-running autonomous work. The six-state machine provides enough granularity to monitor progress, handle interruptions, and recover from failures without requiring custom state management in each agent implementation.

submitted

The task has been received and accepted by the remote agent. The client has a task ID it can use for status queries and cancellation. The agent has not yet begun processing.

working

The remote agent is actively processing the task. During this state, the agent may stream incremental results to the client via Server-Sent Events, providing real-time visibility into progress for long-running tasks.

input-required

The remote agent needs additional input to proceed. This state enables human-in-the-loop workflows where agents pause to request clarification, additional data, or authorization before continuing. The client responds with the requested input.

completed

The task has finished successfully. The response includes the task output as one or more artifacts — structured data objects containing the results in whatever format the agent produces (text, JSON, file references, etc.).

failed

The task ended with an error. The response includes error details that the client agent can use to decide whether to retry with different parameters, escalate to a different agent, or report the failure upstream.

cancelled

The task was terminated before completion, either by a client cancellation request or by the server agent due to resource constraints or policy violations. Partial results may be included in the response.
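Assuming the transitions implied by the six descriptions above (the normative set is defined by the A2A specification), the state machine can be sketched as:

```python
from enum import Enum

class TaskState(Enum):
    SUBMITTED = "submitted"
    WORKING = "working"
    INPUT_REQUIRED = "input-required"
    COMPLETED = "completed"
    FAILED = "failed"
    CANCELLED = "cancelled"

# Plausible transitions implied by the descriptions above; consult
# the specification for the normative set.
TRANSITIONS = {
    TaskState.SUBMITTED: {TaskState.WORKING, TaskState.CANCELLED},
    TaskState.WORKING: {
        TaskState.INPUT_REQUIRED, TaskState.COMPLETED,
        TaskState.FAILED, TaskState.CANCELLED,
    },
    TaskState.INPUT_REQUIRED: {TaskState.WORKING, TaskState.CANCELLED},
    TaskState.COMPLETED: set(),   # terminal
    TaskState.FAILED: set(),      # terminal
    TaskState.CANCELLED: set(),   # terminal
}

def advance(current: TaskState, target: TaskState) -> TaskState:
    """Validate and apply a state transition."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.value} -> {target.value}")
    return target
```

Centralizing the legal transitions like this is what lets every A2A agent share the same monitoring and recovery logic instead of reinventing state management.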

The artifact model in A2A deserves special attention. Task results are not returned as raw text but as typed artifacts with metadata. An artifact has an ID, a MIME type, and either inline data or a reference to external storage. This structure enables agents to produce large outputs — documents, images, datasets — without embedding them in the response payload. It also enables artifact reuse: a client agent can pass an artifact ID from one task to another agent without retransmitting the data.
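A minimal sketch of that artifact shape, with field names as our assumptions rather than the normative schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Artifact:
    """An ID, a MIME type, and either inline data or a reference to
    external storage, as described above. Field names are illustrative."""
    artifact_id: str
    mime_type: str
    inline_data: Optional[bytes] = None
    storage_uri: Optional[str] = None

    def is_inline(self) -> bool:
        return self.inline_data is not None

# Small results travel inline; large ones go by reference, so a client
# can hand the artifact ID to another agent without retransmitting data.
summary = Artifact("art-1", "text/plain", inline_data=b"Q3 revenue grew 12%.")
dataset = Artifact("art-2", "text/csv",
                   storage_uri="https://store.example.com/art-2")
```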

Security, Authentication, and Enterprise Trust

Enterprise adoption of multi-agent systems depends critically on security — not just in the sense of access control, but in the broader sense of trust, auditability, and containment. A2A addresses these concerns at the protocol level rather than leaving them to individual implementation choices.

Standard Auth Protocols

A2A uses OAuth 2.0, API keys, and service account tokens — the same mechanisms enterprises already use. There is no new identity infrastructure to deploy. Existing SSO, PAM, and secrets management systems work with A2A out of the box.

TLS Encryption

All A2A communication is over HTTPS, providing transport-layer encryption. Agent Cards declare whether TLS is required, and enterprise deployments can enforce TLS through existing network policies and API gateway configurations.

Cross-Organization Trust

A2A enables agents from different organizations to collaborate without sharing internal systems access. Each organization controls what capabilities its agents expose and what credentials are required. Cross-org workflows are mediated through well-defined API boundaries.

Audit Trail

Every A2A task has a unique ID and structured lifecycle events that can be logged to existing SIEM systems. The HTTP-based transport means standard API logging infrastructure captures complete interaction records for compliance and forensic purposes.

Ecosystem Adoption and Partner Integrations

Google announced A2A in April 2025 with more than 50 launch partners — a level of industry coordination that signals serious intent to establish A2A as a lasting standard rather than a Google-proprietary protocol. The partner list spans enterprise software, cloud infrastructure, developer tools, and consulting.

Enterprise Software
  • Salesforce (Agentforce)
  • SAP (Business AI)
  • ServiceNow (Now Assist)
  • Workday (AI Assistant)
  • Atlassian (Rovo)
Consulting and SI
  • Deloitte
  • Accenture
  • Capgemini
  • KPMG
  • McKinsey QuantumBlack
Developer Platforms
  • MongoDB
  • LangChain
  • CrewAI
  • Vertex AI (native)
  • Google Agentspace

The enterprise software commitments are particularly significant. Salesforce, SAP, ServiceNow, and Workday collectively represent the core of enterprise application infrastructure. When these systems expose their AI agents through A2A, it becomes possible to build cross-system workflows that were previously impossible without custom integration work — for example, a Salesforce opportunity agent collaborating with a Workday resource planning agent and a ServiceNow project setup agent to onboard a new client.

The consulting firm commitments signal that A2A is becoming part of the enterprise AI implementation conversation. When Deloitte, Accenture, and Capgemini commit to building A2A-compatible solutions for their clients, they are also committing to training their consultants, developing implementation frameworks, and creating reference architectures — all of which accelerate ecosystem adoption. For businesses evaluating agentic commerce and automation investments, the related developments in agentic commerce protocols show how A2A-style standardization is enabling entirely new categories of AI-driven business transactions.

Building A2A-Compatible Agents

Implementing A2A support in a new or existing AI agent requires addressing three areas: Agent Card publication, task endpoint implementation, and client-side task management. Google has published open-source SDK libraries for Python and JavaScript that handle much of the boilerplate.

Agent Card Publication (Server Side)

Serve a JSON document at /.well-known/agent.json that describes your agent's capabilities, supported input and output modalities, authentication requirements, and optional skill list. Keep this document up to date as your agent's capabilities evolve. Use semantic versioning in the card to help client agents handle capability changes gracefully.

GET /.well-known/agent.json → AgentCard JSON
Task Endpoint Implementation (Server Side)

Implement the POST /tasks endpoint to receive task assignments. Accept the task message, validate authentication, create a task record with a unique ID, return an initial response with the task ID and status, and then process the task asynchronously. Implement GET /tasks/{id} for status polling and GET /tasks/{id}/stream for SSE streaming.

POST /tasks → { taskId, status: "submitted" }
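The accept-acknowledge-process flow above can be sketched with an in-memory task store; the HTTP and async layers are omitted, and `TaskStore` is our name for the pattern, not an SDK class:

```python
import uuid

class TaskStore:
    """In-memory sketch of the server-side flow: accept a task, hand
    back an ID in the 'submitted' state, and let the caller poll.
    A real server would persist tasks, process them asynchronously,
    and stream progress over SSE."""

    def __init__(self):
        self.tasks = {}

    def submit(self, message: dict) -> dict:
        task_id = str(uuid.uuid4())
        self.tasks[task_id] = {
            "id": task_id,
            "status": "submitted",
            "message": message,
            "artifacts": [],
        }
        # Initial response mirrors: POST /tasks -> { taskId, status }
        return {"taskId": task_id, "status": "submitted"}

    def status(self, task_id: str) -> dict:
        # Backs GET /tasks/{id} polling.
        return self.tasks[task_id]
```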
Task Management (Client Side)

When acting as a client agent, fetch the target agent's Agent Card to verify it supports the task type, submit the task with appropriate credentials, and either poll for completion or subscribe to the SSE stream. Handle the input-required state by providing requested additional information. Parse artifacts from completed tasks into whatever format your orchestration logic expects.

GET /tasks/{id}/stream → SSE events
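The submit-then-poll flow can be sketched as below, with the two HTTP calls abstracted as callables so the control flow is visible; a real client would issue POST /tasks and GET /tasks/{id} over HTTPS, or subscribe to the SSE stream instead of polling:

```python
import time
from typing import Callable

def run_task(
    post_task: Callable[[dict], dict],
    get_status: Callable[[str], dict],
    message: dict,
    poll_interval: float = 1.0,
    max_polls: int = 10,
) -> dict:
    """Submit a task, then poll until it reaches a terminal state.
    The callables stand in for HTTP calls to POST /tasks and
    GET /tasks/{id}; this sketch raises on input-required, where a
    real client would supply the requested input and resume."""
    task = post_task(message)
    task_id = task["taskId"]
    for _ in range(max_polls):
        status = get_status(task_id)
        if status["status"] in ("completed", "failed", "cancelled"):
            return status
        if status["status"] == "input-required":
            raise RuntimeError("agent requested input; supply it and resume")
        time.sleep(poll_interval)
    raise TimeoutError(f"task {task_id} did not finish in time")
```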

Real-World Multi-Agent Workflow Examples

Abstract protocol specifications become clearer through concrete workflow examples. Here are three enterprise scenarios that illustrate how A2A enables multi-agent coordination that would otherwise require custom integration work:

Customer Onboarding Automation

A sales orchestrator agent in Salesforce receives a closed-won opportunity. Via A2A, it delegates to a contract agent (DocuSign) to generate and send the service agreement, a resource planning agent (Workday) to allocate delivery team members, a project setup agent (Jira) to create the project structure, and an onboarding agent (internal) to provision client portal access. The orchestrator monitors task statuses via A2A and notifies the account team when all setup tasks complete.

Without A2A: Custom API integrations between 4 systems, brittle webhook chains, no unified error handling.

Competitive Intelligence Report

A strategy orchestrator receives a request for a competitive landscape report. It delegates to a web research agent (via A2A with MCP for search) to gather recent news and product updates, a financial analysis agent to pull and analyze competitor financials, a sentiment analysis agent to process social media, and finally a report writing agent to synthesize all artifacts into a formatted document. Each subagent returns structured artifacts that the orchestrator passes to the next stage.

Without A2A: Single monolithic agent prompt engineering problem, context window limitations, no specialization benefits.

IT Incident Response

A monitoring orchestrator detects an anomaly and initiates a structured response workflow. Via A2A, it engages a diagnostics agent to analyze logs and identify root cause, a runbook agent to determine the appropriate remediation procedure, and a notification agent to alert the on-call team with context. If the runbook agent enters the input-required state requesting human approval for a destructive remediation step, the orchestrator routes the approval request to the on-call engineer before proceeding.

Without A2A: Manual coordination, delayed response, inconsistent runbook execution.

These examples share a common pattern: an orchestrator that decomposes a complex task into specialized subtasks, delegates to agents with deep domain expertise, and synthesizes results. The orchestrator does not need to know how each subagent works internally — it only needs the Agent Card to know what each can do. This separation of concerns is what makes large-scale multi-agent systems maintainable. For organizations looking to implement AI and digital transformation at enterprise scale, A2A provides the interoperability foundation that makes complex multi-agent deployments practical.
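The decompose/delegate/synthesize pattern reduces to a small sketch, where `delegate` stands in for an A2A client call to a remote agent and the artifact-passing convention is our assumption:

```python
def orchestrate(plan, delegate):
    """Run an ordered list of (skill, message) subtasks, handing each
    stage the artifact IDs produced so far. `delegate` stands in for
    an A2A client call that returns the remote agent's artifacts."""
    artifacts = []
    for skill, message in plan:
        payload = {**message,
                   "inputArtifactIds": [a["id"] for a in artifacts]}
        artifacts = artifacts + delegate(skill, payload)
    return artifacts
```

Note that artifacts flow between stages by ID, consistent with the artifact model described earlier: the orchestrator never needs to hold or retransmit the underlying data.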

Limitations and Current State

A2A is a promising and well-designed protocol with strong ecosystem backing, but it is early-stage. Teams considering A2A for production deployments should understand its current limitations: the specification is still evolving, official SDK coverage is thin beyond Python and JavaScript, public registries for agent discovery are nascent, and there are as yet few documented large-scale production deployments to learn from.

Despite these limitations, A2A is the most credible candidate for becoming the interoperability standard for enterprise multi-agent systems. The combination of strong enterprise software partner commitments, an HTTP-based design that works with existing infrastructure, and a thoughtful task lifecycle model positions it well. The current limitations are gaps in ecosystem maturity rather than fundamental architectural flaws, and they are likely to close as adoption increases.

Conclusion

The Agent-to-Agent protocol addresses a genuine bottleneck in the development of multi-agent AI systems: the absence of a standard language for agents to communicate with each other. By building on familiar HTTP conventions, defining a clear task lifecycle, and enabling machine-discoverable capability declarations through Agent Cards, A2A makes it practical to compose multi-agent workflows from specialized agents without custom integration code for each agent pair.

The pattern emerging from MCP, A2A, and related protocol work is that the AI agent ecosystem is deliberately adopting a layered architecture: standardized tool access at one layer, standardized agent-to-agent communication at another. This architecture will enable the kind of modular, composable, and interoperable AI systems that can tackle enterprise-scale problems — systems where specialized agents collaborate across organizational boundaries with the same ease that HTTP enabled web services to collaborate across network boundaries thirty years ago. Teams that understand these protocols now will be best positioned to build and deploy those systems.

Ready to Build Multi-Agent Systems?

A2A and MCP are enabling a new generation of AI automation. Our team helps businesses design and implement multi-agent architectures that deliver real enterprise value.

  • Free consultation
  • Expert guidance
  • Tailored solutions
