Agent-to-Agent (A2A) Protocol

Google’s open protocol for agent-to-agent communication — enabling AI agents built on different frameworks and by different vendors to discover each other, negotiate capabilities, and collaborate on tasks without sharing internal memory, tools, or execution logic.


What is A2A?

A2A is a JSON-RPC 2.0 based protocol that standardizes how autonomous AI agents communicate with each other. Announced by Google in April 2025 with backing from over 50 technology partners (Salesforce, SAP, Atlassian, MongoDB, Deloitte, and others), it addresses a gap that MCP doesn’t cover: agent-to-agent interoperability.

The Problem A2A Solves

Enterprise AI is converging on multi-agent architectures — a planning agent delegates subtasks to specialist agents (research, coding, compliance, scheduling). Without a standard protocol, every agent-to-agent integration is bespoke:

  • Agent A built on LangGraph can’t talk to Agent B built on Google ADK
  • Agent B built on CrewAI can’t delegate to Agent C on Amazon Bedrock Agents
  • Each vendor’s agent framework is a walled garden
  • No standard way to discover what an agent can do

A2A provides the interoperability layer: any agent can discover, communicate with, and delegate work to any other agent, regardless of framework, vendor, or underlying model.

A2A vs. MCP — Complementary, Not Competing

This distinction matters and is frequently confused:

Aspect          MCP                                              A2A
──────────────  ───────────────────────────────────────────────  ────────────────────────────────────────────
What talks      Agent to Tool/Data                               Agent to Agent
Analogy         USB cable connecting peripherals                 HTTP connecting web services
Scope           Give an agent access to APIs, databases, files   Let agents collaborate on tasks
Intelligence    Server is dumb (executes commands)               Both sides are intelligent (negotiate, plan)
State           Stateless tool calls                             Stateful task lifecycle
Who initiates   Agent calls tool                                 Either agent can initiate

In practice, you use both. An orchestrator agent uses A2A to delegate a research task to a specialist agent. That specialist agent uses MCP to access a database, call an API, or read files. A2A handles agent collaboration; MCP handles tool integration.


Architecture

Agent Cards — Discovery Mechanism

Every A2A-compliant agent publishes an Agent Card at a well-known URL (/.well-known/agent.json). This is the agent’s machine-readable resume:

{
  "name": "ComplianceChecker",
  "description": "Checks proposed changes against EU regulatory policies including GDPR, AI Act, and DORA",
  "url": "https://compliance-agent.internal.mms.de/a2a",
  "version": "1.0.0",
  "capabilities": {
    "streaming": true,
    "pushNotifications": true,
    "stateTransitionHistory": true
  },
  "authentication": {
    "schemes": ["OAuth2"],
    "credentials": "https://auth.internal.mms.de/oauth2/token"
  },
  "defaultInputModes": ["text/plain", "application/json"],
  "defaultOutputModes": ["text/plain", "application/json"],
  "skills": [
    {
      "id": "gdpr-check",
      "name": "GDPR Compliance Check",
      "description": "Validates data processing activities against GDPR requirements",
      "tags": ["compliance", "gdpr", "privacy", "eu-regulation"],
      "examples": ["Check if this data pipeline is GDPR compliant"]
    },
    {
      "id": "ai-act-risk",
      "name": "AI Act Risk Classification",
      "description": "Classifies AI system risk level per EU AI Act",
      "tags": ["compliance", "ai-act", "risk-assessment"],
      "examples": ["What risk category is this recommendation engine?"]
    }
  ]
}

Agent Cards enable dynamic discovery — a client agent can crawl a registry, find agents with specific skills (tagged compliance + gdpr), and delegate tasks without any hardcoded integration.
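That discovery step can be sketched in a few lines. Here, `cards` stands for Agent Card JSON already fetched and parsed (the fetch itself — a GET of each agent's /.well-known/agent.json or a registry query — is assumed, and the helper name is ours, not from any official SDK):

```python
def find_agents_with_skill(cards: list[dict], required_tags: set[str]) -> list[tuple[str, str]]:
    """Return (agent name, skill id) pairs whose skill tags cover required_tags."""
    matches = []
    for card in cards:
        for skill in card.get("skills", []):
            # A skill qualifies only if it carries every required tag.
            if required_tags.issubset(skill.get("tags", [])):
                matches.append((card["name"], skill["id"]))
    return matches
```

Given the ComplianceChecker card above, `find_agents_with_skill(cards, {"compliance", "gdpr"})` would match the `gdpr-check` skill but not `ai-act-risk`.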

Task Lifecycle

A2A communication revolves around Tasks. A task goes through a defined state machine:

                    ┌──────────────┐
                    │   submitted  │
                    └──────┬───────┘
                           │
                    ┌──────▼───────┐
               ┌────│   working    │────┐
               │    └──────┬───────┘    │
               │           │            │
        ┌──────▼───┐ ┌─────▼────┐ ┌─────▼─────────┐
        │  failed  │ │completed │ │input-required │
        └──────────┘ └──────────┘ └───────────────┘

States:

  • submitted — Task received, queued for processing
  • working — Agent is actively processing
  • input-required — Agent needs clarification or additional input from the client
  • completed — Task finished, results available
  • failed — Task failed with error details
  • canceled — Task canceled via tasks/cancel before completion

The input-required state is what makes A2A more than simple request-response. Agents can negotiate, ask follow-up questions, and collaborate iteratively — much like a human conversation between two specialists.
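As a sketch, the lifecycle above can be enforced with a small transition table. The state names follow this post's diagram; real SDKs may model this differently:

```python
# Legal transitions for the task state machine described above.
ALLOWED_TRANSITIONS = {
    "submitted": {"working"},
    "working": {"completed", "failed", "input-required"},
    "input-required": {"working"},  # client supplied the missing input, work resumes
    "completed": set(),             # terminal
    "failed": set(),                # terminal
}

def advance(state: str, new_state: str) -> str:
    """Move a task to new_state, rejecting illegal jumps (e.g. completed -> working)."""
    if new_state not in ALLOWED_TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state
```

Note the `input-required -> working` edge: that loop is exactly what lets agents ask follow-up questions and resume.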

Core JSON-RPC Methods

Method                      Purpose
──────────────────────────  ────────────────────────────────────────────────────
tasks/send                  Send a new task or provide input to an existing task
tasks/get                   Poll for task status and results
tasks/cancel                Cancel a running task
tasks/sendSubscribe         Send task and subscribe to SSE updates (streaming)
tasks/pushNotification/set  Register a webhook for push notifications
tasks/pushNotification/get  Get current push notification config
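Every one of these methods travels in the same JSON-RPC 2.0 envelope, so a client needs only one builder. A minimal sketch (the helper name is ours, not from an official SDK):

```python
import uuid

def make_rpc_request(method: str, params: dict) -> dict:
    """Wrap an A2A method call in a JSON-RPC 2.0 request envelope."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),  # JSON-RPC request id, distinct from the task id in params
        "method": method,
        "params": params,
    }
```

For example, `make_rpc_request("tasks/get", {"id": "task-001"})` produces a poll request for the task shown in the next section.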

Communication Patterns

1. Synchronous (Simple Request-Response)

// Client sends task
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tasks/send",
  "params": {
    "id": "task-001",
    "message": {
      "role": "user",
      "parts": [{"type": "text", "text": "Classify the risk level of our product recommendation AI under EU AI Act"}]
    }
  }
}

// Agent responds with completed task
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "id": "task-001",
    "status": {"state": "completed"},
    "artifacts": [{
      "name": "risk-assessment",
      "parts": [{"type": "text", "text": "Classification: LIMITED RISK. Rationale: ..."}]
    }]
  }
}

2. Streaming (Server-Sent Events)

For long-running tasks, use tasks/sendSubscribe to get real-time updates as the agent works. The client receives SSE events with partial results, status changes, and intermediate artifacts.
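Consuming that stream can be sketched as below — the raw lines are assumed to come from an open HTTP response to tasks/sendSubscribe, and a production client would use a proper SSE library rather than this hand-rolled parser:

```python
import json

def parse_sse_events(lines):
    """Group raw SSE lines into parsed JSON events (a blank line terminates an event)."""
    data_lines = []
    for line in lines:
        if line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:
            yield json.loads("\n".join(data_lines))
            data_lines = []
```

Each yielded event would carry a status change, partial result, or intermediate artifact.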

3. Push Notifications (Webhooks)

For fire-and-forget patterns — submit a task, register a webhook, and get notified when it completes. Essential for async multi-agent workflows where tasks may take minutes or hours.


Multi-Agent Orchestration Pattern

Here’s how A2A enables a real enterprise workflow — an AI platform orchestrator delegating across specialist agents:

┌──────────────────────────────────────────────────────┐
│            Orchestrator Agent (ADK)                  │
│  "Process this customer service desk ticket"         │
└───────┬──────────────┬──────────────┬────────────────┘
        │ A2A          │ A2A          │ A2A
┌───────▼──────┐ ┌─────▼──────┐ ┌─────▼──────────┐
│ Triage Agent │ │ Knowledge  │ │ Compliance     │
│ (LangGraph)  │ │ Agent      │ │ Agent          │
│              │ │ (Bedrock)  │ │ (Custom/ADK)   │
│ MCP: Jira,   │ │ MCP: Vector│ │ MCP: Policy DB │
│ ServiceNow   │ │ DB, Docs   │ │ Regulatory API │
└──────────────┘ └────────────┘ └────────────────┘

Each agent:

  1. Publishes its own Agent Card
  2. Accepts tasks via A2A
  3. Uses MCP internally to access its own tools/data
  4. Returns results via A2A artifacts
  5. Can request input from the orchestrator if it needs clarification
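The loop the orchestrator runs can be sketched framework-agnostically. Here `transport` stands in for an HTTP POST of the JSON-RPC envelope to the URL from each agent's card — everything below is illustrative, not an official SDK API:

```python
def delegate(transport, agent_url: str, task_id: str, text: str) -> dict:
    """Send one task to a specialist agent over A2A and return its status and artifacts."""
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tasks/send",
        "params": {
            "id": task_id,
            "message": {"role": "user", "parts": [{"type": "text", "text": text}]},
        },
    }
    # transport would be e.g. lambda url, body: requests.post(url, json=body).json()
    response = transport(agent_url, request)
    return response["result"]
```

Injecting the transport keeps the orchestrator testable and independent of any one HTTP client — which mirrors the protocol's own framework neutrality.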

Enterprise Considerations

Authentication and Security

A2A supports standard enterprise auth via Agent Card declarations:

  • OAuth 2.0 — recommended for production, integrates with existing identity providers
  • API Keys — simpler but less secure, acceptable for internal services
  • mTLS — mutual TLS for zero-trust network architectures

The protocol itself doesn’t handle auth — it declares what auth the agent requires, and clients must comply. This means your existing IAM infrastructure (Azure AD, Google Workspace, Okta) works with A2A.
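Resolving a card's declared auth can be sketched as: read the scheme and token URL from the `authentication` block, then run a standard OAuth2 client-credentials exchange. The form below is the usual RFC 6749 shape; the actual HTTP call is omitted, and the helper name is our own:

```python
def oauth2_token_request(card_auth: dict, client_id: str, client_secret: str):
    """Build (token_url, form_body) for a client-credentials grant per the card's auth block."""
    if "OAuth2" not in card_auth.get("schemes", []):
        raise ValueError("agent does not accept OAuth2")
    form = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }
    # The card's "credentials" field points at the token endpoint.
    return card_auth["credentials"], form
```

The resulting access token then goes in the Authorization header of every A2A request to that agent.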

Opaque Execution

A critical design principle: agents don’t share their internal state, prompts, tools, or reasoning with each other. An agent is a black box that accepts tasks and returns results. This matters for:

  • IP protection — vendor agents don’t expose proprietary logic
  • Security — no prompt injection via shared context
  • Compliance — agents can enforce their own guardrails independently
  • Modularity — swap agent implementations without breaking integrations

EU AI Act Alignment

For MMS’s enterprise AI platform, A2A’s architecture aligns well with EU AI Act requirements:

  • Transparency — Agent Cards document capabilities and limitations
  • Human oversight — the input-required state enables human-in-the-loop review at any point
  • Auditability — Task lifecycle with state history provides audit trail
  • Risk management — Agents can independently enforce compliance rules

Current Ecosystem (as of early 2025)

Framework Support

Framework            A2A Support    Notes
───────────────────  ─────────────  ─────────────────────────────────────────────
Google ADK           Native         First-class support, reference implementation
LangGraph            SDK available  Python client/server
CrewAI               Planned        Community interest, early integration
Amazon Bedrock       TBD            No official announcement yet
Anthropic Agent SDK  TBD            Could integrate via A2A client wrapper

Partner Commitments

Google announced 50+ launch partners. The notable ones for enterprise:

  • Salesforce — Agentforce agents exposing A2A endpoints
  • SAP — Joule AI agents with A2A interop
  • Atlassian — Rovo agents discoverable via A2A
  • ServiceNow — Now Assist agents with A2A support planned
  • MongoDB — Database agents accessible via A2A

What’s Not There Yet

Be realistic about maturity:

  • Spec is early (v0.1-v0.2 range) — breaking changes expected
  • Production deployments are limited; mostly demos and proofs of concept
  • No established agent registry/marketplace yet
  • Security model defers to existing infra (good for flexibility, bad for out-of-box guarantees)
  • Error handling and retry semantics are underspecified

When to Use A2A

Use A2A when:

  • You’re building a multi-agent system where agents are developed by different teams or vendors
  • You need agents on different frameworks (ADK + LangGraph + custom) to collaborate
  • You want to expose internal AI agents as services other teams can consume
  • You’re building an agent marketplace or registry within the enterprise

Don’t use A2A when:

  • All your agents are in the same framework (use that framework’s native orchestration)
  • You just need an agent to call an API (use MCP or direct tool calls)
  • You’re in early prototyping (the overhead isn’t worth it yet)
  • You need sub-100ms latency between agents (A2A adds network overhead)

Practical Recommendation for MMS

Start with MCP for tool integration (already underway). Monitor A2A but don’t adopt yet — the spec is too early for production enterprise use. When A2A stabilizes (likely late 2025 or 2026), it becomes the right choice for exposing your platform’s AI agents as discoverable services that other MMS teams can consume via standardized Agent Cards.

The key architectural decision: design your agents as self-contained services now (clear input/output contracts, no shared state, own auth). This makes A2A adoption straightforward later without requiring a rewrite.
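What "self-contained with clear contracts" can look like in code — a minimal, hypothetical sketch of an agent whose entire input and output travel through explicit types, with no shared state or caller internals (all names here are ours):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentTask:
    """Input contract: everything the agent needs arrives in the task itself."""
    task_id: str
    text: str

@dataclass(frozen=True)
class AgentResult:
    """Output contract: state plus artifacts, no references back into the caller."""
    task_id: str
    state: str
    artifacts: tuple = ()

def handle(task: AgentTask) -> AgentResult:
    """Placeholder handler; swap in real logic without changing the contract."""
    return AgentResult(task.task_id, "completed", (f"processed: {task.text}",))
```

An agent shaped like this needs only a thin adapter (Agent Card plus JSON-RPC endpoint) to become A2A-discoverable later.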


This post is licensed under CC BY 4.0 by the author.