High-Level Documentation of BR-ADK
Introduction
The BR-ADK (BubbleRAN Agent Development Kit) is a Python SDK built on top of LangGraph that helps you build intelligent AI agents with minimal effort. It natively supports two standard protocols:
- A2A (Agent-to-Agent): for communication between agents
- MCP (Model Context Protocol): for accessing external tools, prompts, and contextual data
Protocols at a Glance
- A2A is the default and recommended protocol for agent-to-agent communication.
- MCP is used by agents to securely interact with external resources such as tools and prompts.
- While MCP can be used for agent-to-agent communication, this is mainly intended for compatibility with other agent frameworks. Whenever possible, prefer A2A for agent interactions.
Design Philosophy
BR-ADK is designed to let you focus on agent behavior and workflows, not protocol details.
You define your agent's logic using graph-based workflows, while the SDK handles communication, protocol compliance, and integration under the hood.
Preliminary Concepts
Before using BR-ADK, you should be familiar with a few core ideas:
- LangGraph basics: understand how to define a graph, manage graph state, use streaming, and handle interrupts for Human-in-the-Loop (HITL) interactions.
- A2A fundamentals: know the basics of the A2A protocol, especially what an Agent Card is and its role in agent communication.
- MCP fundamentals: have a basic understanding of the Model Context Protocol (MCP) and how it enables agents to access tools, prompts, and context.
Agent Application
An Agent Application is the main object for building agents in BR-ADK. It is an instance of the AgentApplication class and requires two parameters:
- Graph type: a class extending AgentGraph
- State type: a class extending AgentState
What Happens on Instantiation
When you create an AgentApplication, it automatically:
- Loads the Agent Card from ./agent.json and the Agent Configuration from ./config.yaml
- Instantiates the AgentGraph
- Sets up the AgentExecutor, request handler, and a Starlette web application
Running the Agent Application
After creation, call the run() method to start the Starlette app and expose the A2A Server.
- Use run(expose_mcp=True) to also start an MCP Server, which provides two tools:
  - get_agent_card(): returns the Agent Card as a JSON string
  - call_agent(query, context_id, message_id): sends a request to the A2A Server and returns the response
Note: All MCP requests are internally forwarded to the A2A endpoint.
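The snippet below sketches this lifecycle. The import paths and the positional argument order are assumptions; only AgentApplication, run(), and expose_mcp come from the description above.

```python
# Minimal sketch, assuming the SDK is importable as `br_adk` and that
# MyAgentGraph / MyAgentState are your own subclasses defined elsewhere.
from br_adk import AgentApplication  # import path assumed

from my_agent.graph import MyAgentGraph  # hypothetical module
from my_agent.state import MyAgentState  # hypothetical module

# Instantiation loads ./agent.json and ./config.yaml automatically.
app = AgentApplication(MyAgentGraph, MyAgentState)

# Start the A2A Server; expose_mcp=True additionally starts the MCP Server.
app.run(expose_mcp=True)
```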
Ports
- A2A: 9900 (default)
- MCP: 9800 (default)
You can override these using the PORT and MCP_PORT environment variables.
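These variables are normally set in the deployment environment; purely for illustration, here is a sketch of overriding them from Python before the application starts:

```python
# Override the default ports via environment variables before app.run()
# is called (normally you would set these in your deployment manifest).
import os

os.environ["PORT"] = "9901"      # A2A server port (default 9900)
os.environ["MCP_PORT"] = "9801"  # MCP server port (default 9800)
```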
Agent Configuration
The AgentConfig defines how an agent behaves and what external resources it can access, including:
- Whether to perform checkpoints
- Which MCP Servers and other Agents it needs to communicate with
Loading and Validation
- Automatically loaded from config.yaml or the path set in the CONFIG environment variable when the AgentApplication starts
- Validated by checking connectivity to all required MCP Servers and Agents
  - If any required resource is unreachable, the application crashes and must be restarted (handled automatically in Kubernetes)
Features
The AgentConfig class provides methods to:
- List available MCP Servers and Agents
- Retrieve the AgentCard of specific Agents
- Retrieve the Tools provided by specific MCP Servers
These methods help you integrate external resources directly into your agentβs Graph logic.
Naming Guidelines
- In the configuration, you assign a name and a URL for each MCP Server or Agent.
- Always use the Official name of the server or agent in your agent logic, not the name you assigned in the config.
- The SDK automatically maps your assigned names to the Official names during validation.
- Using Official names ensures your agent will work correctly when deployed with tools like the AIFabric controller of the Odin Operator.
Example:
```yaml
remote-agents:
  - name: MyHelperAgent # your assigned name
    url: http://helper:9900
```
In your agent code, reference this agent by its Official name, e.g., HelperAgent, not MyHelperAgent.
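As an illustration, the lookup below uses the official name of the agent configured above. The method name get_agent_card is a placeholder for the retrieval methods listed under Features; check the API reference for the exact names and signatures.

```python
# Hypothetical sketch: get_agent_card is a placeholder for the AgentConfig
# lookup methods described under "Features".
from br_adk import AgentConfig  # import path assumed

def inspect_helper(config: AgentConfig) -> None:
    # Use the official name ("HelperAgent"), not the assigned one ("MyHelperAgent").
    card = config.get_agent_card("HelperAgent")  # placeholder method name
    print(card.name, card.url)                   # AgentCard fields per the A2A spec
```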
Agent Executor
The Agent Executor handles task execution and event publishing for an agent.
Key Functions
- Execute a request
  - Calls the astream method of the AgentGraph.
  - Processes each chunk individually, converting AgentTaskResult objects into A2A events for the event queue.
  - Collects usage metadata from the graph at each step and includes it in the stream chunks.
- Cancel a request
  - Currently not implemented
Agent Graph
An AgentGraph defines the core logic of an agent using the LangGraph library. Agents built with BR-ADK stream their responses as AgentTaskResult objects, produced after executing each node in the graph.
Creating a Custom Graph
You cannot instantiate AgentGraph directly. Instead:
- Extend the AgentGraph class
- Implement the setup(config: AgentConfig) method. Inside this method:
  - Instantiate all ChatModelClient instances your agent will use as properties of the extended class (e.g. self.<client-name>).
  - Instantiate any prebuilt workflows.
  - Define the graph's nodes and edges via the graph_builder property.
After setup completes, the AgentGraph is compiled by BR-ADK (you don't need to do this manually) and is ready to use.
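The outline below sketches a custom graph following these steps. The import paths, the ChatModelClient constructor call, and the graph_builder methods (borrowed from LangGraph's StateGraph API) are assumptions; only the overall structure follows the description above.

```python
# Illustrative outline only: import paths, the ChatModelClient constructor,
# and the exact graph_builder methods are assumptions.
from br_adk import AgentConfig, AgentGraph, ChatModelClient, ChatModelClientConfig

from my_agent.state import MyAgentState  # your AgentState subclass


class MyAgentGraph(AgentGraph):
    def setup(self, config: AgentConfig) -> None:
        # Chat model clients become properties of the graph so their usage
        # metadata can be collected automatically after each node execution.
        self.llm = ChatModelClient(ChatModelClientConfig.from_env())

        # Define nodes and edges via the graph builder (LangGraph-style API assumed).
        self.graph_builder.add_node("answer", self.answer)
        self.graph_builder.set_entry_point("answer")
        self.graph_builder.set_finish_point("answer")

    def answer(self, state: MyAgentState) -> MyAgentState:
        # A single node: call the LLM and store its reply in the state.
        reply = self.llm.invoke(state.query)  # assumes a LangChain-style message is returned
        return state.model_copy(update={"answer": reply.content})
```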
Streaming Responses
The AgentGraph class provides an astream method, which the Agent Executor uses to submit requests and receive streamed responses.
Agent State
Each AgentGraph has a State that updates after each node executes. The state is defined by extending the AgentState class, which is a Pydantic model ensuring type safety.
Implementing a Custom State
When you extend AgentState, you must:
- Override from_query(str) to initialize the state from a query
- Override to_task_result() to convert the state into an AgentTaskResult
Optional overrides for advanced features:
- update_after_checkpoint_restore(str): for multi-turn conversations
- is_waiting_for_human_input(): for Human-in-the-Loop (HITL) interactions
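A minimal custom state could look like the sketch below. Showing from_query as a classmethod and the AgentTaskResult constructor arguments are assumptions; the two required overrides come from the list above.

```python
# Minimal sketch: from_query is shown as a classmethod and the
# AgentTaskResult field name is a placeholder; both are assumptions.
from br_adk import AgentState, AgentTaskResult  # import path assumed


class MyAgentState(AgentState):
    query: str = ""
    answer: str = ""

    @classmethod
    def from_query(cls, query: str) -> "MyAgentState":
        # Required: initialize the state from the incoming query.
        return cls(query=query)

    def to_task_result(self) -> AgentTaskResult:
        # Required: convert the current state into a streamed task result.
        return AgentTaskResult(content=self.answer)  # field name assumed
```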
How State Works
- Each node in the Agent Graph receives the current AgentState and returns an updated AgentState
- The Agent Graph converts the updated state into an AgentTaskResult using to_task_result() in the astream method
- The Agent Executor then processes the AgentTaskResult and generates A2A events for the event queue
Chat Model Client
A Chat Model Client is a wrapper around an LLM that combines:
- A LangChain BaseChatModel
- System instructions
- Optional tools
What It Does
- Provides invoke for single requests and batch for parallel requests
- Tracks usage metadata:
  - Input, output, and total tokens
  - LLM inference time
When a ChatModelClient is created as part of an AgentGraph, the graph automatically collects this usage data after each node execution and includes it in the streamed results.
Configuration
A ChatModelClient is configured using a ChatModelClientConfig, typically loaded from environment variables via from_env.
The configuration includes:
- Model provider and model name
- Optional LLM endpoint URL
- Optional client name (developer-defined identifier)
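For example, a client could be created and used roughly as follows; the constructor call is an assumption, while from_env, invoke, and batch come from the description above.

```python
# Sketch only: the ChatModelClient constructor is an assumption; from_env,
# invoke, and batch follow the description above.
from br_adk import ChatModelClient, ChatModelClientConfig  # import path assumed

# Provider, model name, optional endpoint URL, and client name are read
# from environment variables.
config = ChatModelClientConfig.from_env()
client = ChatModelClient(config)

single_reply = client.invoke("Summarize the latest KPI report.")
many_replies = client.batch(["Question one", "Question two"])  # parallel requests

# Token counts and inference time are tracked by the client; inside an
# AgentGraph they are collected automatically after each node execution.
```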
Prebuilt Workflows
Prebuilt Workflows are reusable Runnable components that can be used as nodes inside an AgentGraph.
Purpose
They help you encapsulate common or complex logic into reusable building blocks, making agent graphs easier to design and maintain.
Creating a Prebuilt Workflow
BR-ADK provides the abstract PrebuiltWorkflow class. To create one, you must extend it and implement:
- _setup()
  - Similar to AgentGraph.setup
  - Used to initialize models, tools, and internal state
- _astream()
  - Implements the workflow's custom streaming logic
  - This is more advanced and requires careful handling
Using a Prebuilt Workflow
Once defined, call as_runnable() to obtain a Runnable that can be directly used as a node in an Agent Graph.
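The skeleton below shows the overall shape of a custom workflow; the _astream signature and the constructor are assumptions, while _setup, _astream, and as_runnable come from the description above.

```python
# Skeleton only: the _astream signature and the constructor are assumptions.
from br_adk import PrebuiltWorkflow  # import path assumed


class SummarizeWorkflow(PrebuiltWorkflow):
    def _setup(self) -> None:
        # Initialize models, tools, and internal state (like AgentGraph.setup).
        ...

    async def _astream(self, state):
        # Custom streaming logic: yield intermediate state updates as they
        # are produced; this needs careful handling of partial results.
        yield state


# Inside AgentGraph.setup, attach the workflow as a single node:
#   self.graph_builder.add_node("summarize", SummarizeWorkflow().as_runnable())
```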
ReAct Loop
The ReAct Loop is a prebuilt workflow that implements the classic ReAct pattern (Reason + Act) using:
- An LLM node
- A Tool node
It lets you add a full ReAct workflow to an Agent Graph as a single node.
How to Use It
To instantiate a ReAct Loop, you must provide:
- The Agent State schema used by your graph (the AgentState subclass you defined)
- A ChatModelClient with tools enabled (created inside the AgentGraph.setup method)
- The names of specific state fields:
  - Where the loop reads its input from
  - Where the loop writes its output to
Check the documentation for the full list of available parameters.
This design keeps the ReAct logic reusable while cleanly integrating with your agentβs state and graph.
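For illustration, such a loop might be wired inside your AgentGraph.setup method as sketched below; the class name ReActLoop and every keyword argument name are placeholders for the parameters listed above.

```python
# Fragment of AgentGraph.setup, hypothetical throughout: ReActLoop and its
# keyword arguments are placeholders for the parameters listed above.
def setup(self, config: AgentConfig) -> None:
    # A ChatModelClient with tools enabled (how tools are attached is not shown here).
    self.tool_llm = ChatModelClient(ChatModelClientConfig.from_env())

    react = ReActLoop(
        state_schema=MyAgentState,   # the AgentState subclass used by your graph
        model_client=self.tool_llm,  # the tool-enabled ChatModelClient
        input_field="query",         # state field the loop reads its input from
        output_field="answer",       # state field the loop writes its output to
    )
    self.graph_builder.add_node("react", react.as_runnable())
```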
Call Agent Node
The Call Agent Node is a prebuilt workflow that abstracts away Agent-to-Agent (A2A) streaming.
It lets you integrate a remote agent call into your graph as a single node, handling the stream consumption and state updates automatically. This feature allows for real-time streaming of what the called agent is doing.
How to Use It
To instantiate a Call Agent Node, you must provide:
- The Agent Config and Agent State schema
- The target agent's name (the agent must be specified in the Agent Config)
- A Message Builder function to construct the request from the current state
- The names of some specific state fields:
  - Where to store the remote agent's status and content
  - Where to flag if the remote agent needs human input
This node also automatically captures token usage from the remote agent and merges it into your local metrics.
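For illustration, a call to the HelperAgent from the earlier configuration example might be wired as sketched below; CallAgentNode, its keyword arguments, and the message-builder signature are placeholders based on the inputs listed above.

```python
# Hypothetical throughout: CallAgentNode, its keyword arguments, and the
# message-builder signature are placeholders based on the description above.
def build_message(state: MyAgentState) -> str:
    # Construct the request to the remote agent from the current state.
    return f"Please double-check this answer: {state.answer}"


# Fragment of AgentGraph.setup:
def setup(self, config: AgentConfig) -> None:
    call_helper = CallAgentNode(
        config=config,                    # the Agent Config
        state_schema=MyAgentState,        # the Agent State schema
        agent_name="HelperAgent",         # official name, declared in the Agent Config
        message_builder=build_message,
        output_field="helper_answer",     # where to store the remote agent's status/content
        hitl_field="helper_needs_input",  # where to flag that the remote agent needs human input
    )
    self.graph_builder.add_node("call_helper", call_helper.as_runnable())
```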