
Lab 2: Building a Task-Specific Agent

In this lab, you will create a simple RNG Agent that generates random numbers based on user input and formats the response in a user-friendly way, adding interesting insights about the generated numbers.

Prerequisites

This lab assumes:

  • Basic Python and OOP knowledge
  • Familiarity with the LangGraph framework
  • Basic understanding of Pydantic models (i.e. what a Pydantic model is)

Some explanations are included to help you follow the code structure and logic. Keep the BR rApp SDK documentation handy, as it provides details on the classes and methods used in this lab. Spending some time with the documentation and experimenting with the labs will make agent development faster and easier.

Project setup

Follow the general instructions to set up your codebase. If done correctly, your project structure should look like this:

.
├── .env
├── .venv/
├── README.md
├── __init__.py
├── __main__.py
├── agent.json
├── config.yaml
├── pyproject.toml
├── src/
│   ├── __init__.py
│   ├── graph.py
│   └── llm_clients/
│       ├── __init__.py
│       └── rng_client.py
└── uv.lock

You may need to create any missing Python files manually.

Project Structure Overview

  • .env: Environment variables such as model provider and API keys
  • .venv/: Virtual environment created by uv
  • README.md: Agent documentation
  • __init__.py: Marks the directory as a Python package
  • __main__.py: Agent entry point
  • agent.json: The AgentCard
  • config.yaml: The AgentConfig file used by the BR-ADK
  • pyproject.toml: Configuration for uv, listing dependencies and project metadata
  • src/: Contains source code including Agent logic and LLM clients
  • uv.lock: Locks dependency versions (for uv)

You usually don't need to modify pyproject.toml, as uv manages it automatically. For this lab, paste the following to ensure your environment matches the lab setup:

[project]
name = "rng_agent"
version = "0.1.0"
description = "RNG Agent"
readme = "README.md"
requires-python = ">=3.13"
dependencies = [
    "br-rapp-sdk==2025.12",
]

Installing Dependencies

Install dependencies in the project's virtual environment:

uv sync

Important note: the command does not install dependencies into the environment where uv itself is installed, but only into the environment created for this project (available in the .venv/ directory). You should not modify the uv.lock file or the contents of the .venv/ directory.

You can run the agent using the installed dependencies with:

uv run .

If you encounter issues with cached packages, use:

uv sync --no-cache

or

uv sync --reinstall

Implementing the Agent

After setting up the codebase, as an Agent Developer you need to:

  1. Define the AgentCard in agent.json
  2. Define the AgentConfig in config.yaml
  3. Implement the agent logic in src/graph.py
  4. Implement any LLM clients in src/llm_clients/<llm_client>.py
  5. Implement the agent entry point in __main__.py
  6. Provide the necessary environment variables in .env

The following subsections will guide you through these steps for the RNG Agent.

Agent Card

The AgentCard is a JSON file that describes the agent's capabilities, inputs, outputs, and other metadata. It was introduced by the A2A protocol and is used to register the agent in the MX-AI ecosystem and to provide information to other agents.

agent.json

{
  "version": "1.0.0",
  "name": "RNG Agent",
  "description": "Expert in generating random numbers and presenting them in a nice format with useful insights.",
  "capabilities": {
    "pushNotifications": false,
    "streaming": true
  },
  "defaultInputModes": ["text", "text/plain"],
  "defaultOutputModes": ["text", "text/plain"],
  "skills": [
    {
      "description": "Generate random integer numbers and present them to the user in a well-formatted way.",
      "examples": [
        "Can you generate a random number?",
        "Generate some random numbers for me.",
        "I need a random number between 1 and 100."
      ],
      "id": "random-number-generation-skill",
      "name": "Random Number Generation Skill",
      "tags": ["rng", "random numbers", "number generation"]
    }
  ]
}

Note: The agent URL is not specified in the AgentCard because it varies at runtime in Kubernetes, depending on the associated service and port. The URL is read from environment variables written by the Odin Operator and set in the AgentCard before starting the A2A server.
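
You can quickly sanity-check the file with a few lines of Python (a convenience snippet, not part of the lab code):

import json

# Load the AgentCard and print a short summary of its skills.
with open("agent.json") as f:
    card = json.load(f)

print(card["name"], "-", card["description"])
for skill in card["skills"]:
    print(f"  {skill['id']}: {skill['name']}")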

Agent Config

The AgentConfig is a YAML file interpreted by the BR-ADK. It defines minimal agent configuration, including reachable agents and MCP servers. It also allows you to enable memory by setting the checkpoints flag.

For this simple agent, only the following configuration is needed.

config.yaml

checkpoints: false
remote-agents: []
mcp-servers: []

Agent Logic - State

Implementing the agent logic requires knowledge of LangGraph concepts, including:

  • State
  • Graph (nodes, edges, conditional edges)
  • Tools
  • Checkpoints (optional, more advanced)

The first step is defining the graph State, a data structure that stores user inputs, LLM outputs, and any intermediate data needed to handle a user query.

For agents built with the BR-ADK, the state must extend the AgentState class, which provides:

  • All features of pydantic.BaseModel (runtime type checking, validation, serialization). See the Pydantic documentation for details.
  • Abstract methods that you override to align with the MX-AI development workflow, saving you from implementing the async streaming functionality.

src/graph.py

from br_rapp_sdk.agents import AgentState, AgentTaskResult
from typing import Optional, Self
from typing_extensions import override

class RNGAgentState(AgentState):
    query: str
    response: Optional[str] = None

    @classmethod
    @override
    def from_query(cls, query: str) -> Self:
        return cls(query=query)

    @override
    def to_task_result(self) -> AgentTaskResult:
        return AgentTaskResult(
            task_status="completed" if self.response else "working",
            content=self.response or "Generating response...",
        )

Fields:

  • query: User query string
  • response: Agent response (string or None)

Methods to override:

  • from_query: Initializes the state when the agent receives a new query
  • to_task_result: Converts the state to an AgentTaskResult object, the expected output format for the SDK
  • Optional methods (not needed for this agent):
    • update_after_checkpoint_restore – used for multi-turn conversations
    • is_waiting_for_human_input – used for human-in-the-loop actions

Note: since this agent uses a ReActLoop node, the chat history and tool results are handled internally, allowing the state to remain simple.
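
To see how the two overridden methods fit together, here is a rough, illustrative sketch of the state life cycle (the actual orchestration is performed by the BR-ADK, not by your code):

# Illustrative only: how from_query and to_task_result are conceptually used.
state = RNGAgentState.from_query("Give me three numbers between 1 and 10")
print(state.to_task_result())  # task_status is "working" while response is still None

state.response = "Your numbers: 3, 7, 9"
print(state.to_task_result())  # task_status is "completed" once a response is set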

Agent Logic - Tool

The RNG Agent uses a tool to generate random numbers. A Python function can be registered as a tool by adding the @tool decorator from LangChain. This makes the function available for the LLM to call.

src/graph.py

from langchain_core.tools import tool

@tool
def random_number_generator(
    how_many: int = 1,
    min_value: int = 0,
    max_value: int = 100
) -> str:
    """
    Generates a list of random numbers.

    Args:
        how_many (int): Number of random numbers to generate.
        min_value (int): Minimum value for the random numbers.
        max_value (int): Maximum value for the random numbers.

    Returns:
        str: List of generated random numbers as a string.
    """
    import random
    return f"{[random.randint(min_value, max_value) for _ in range(how_many)]}"

  • The LLM receives the function signature and docstring automatically thanks to the @tool decorator.
  • The return type of the tool is str because the result is sent back to the LLM.
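
Because the @tool decorator wraps the function in a standard LangChain tool object, you can also try it out directly in a Python shell before wiring it into the graph:

# Tools are invoked with a dict of arguments; output varies because it is random.
print(random_number_generator.invoke({"how_many": 3, "min_value": 10, "max_value": 20}))
# e.g. [14, 19, 11]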

Agent Logic - Graph

Next, define the graph that handles user queries.

  • Extend AgentGraph from the BR-ADK.
    • Provides built-in methods to manage nodes, edges, and execution.
  • Use ReActLoop to implement a ReAct-style agent.
    • Handles reasoning, tool calling, and metadata tracking.
    • Reduces boilerplate compared to implementing a ReAct agent manually.

src/__init__.py

from .graph import RNGAgentGraph, RNGAgentState

src/graph.py

from br_rapp_sdk.agents import AgentGraph
from br_rapp_sdk.agents.prebuilt import ReActLoop
from langgraph.graph import StateGraph, START, END
from .llm_clients import RNGClient

class RNGAgentGraph(AgentGraph):
    # Node names
    GENERATION = "generation"

    @override
    def setup(
        self,
        config,
    ) -> None:
        # Create an LLM client providing the tool
        self.rng_client = RNGClient(tools=[random_number_generator])

        # Create the ReAct loop providing the RNGClient
        # Note: 'query' is a key in RNGAgentState
        self.rng_loop = ReActLoop(
            config=config,
            StateType=RNGAgentState,
            chat_model_client=self.rng_client,
            loop_name="generation_loop",
            input_key="query",
            output_key="response",
        )

        # Add nodes to the graph
        self.graph_builder.add_node(RNGAgentGraph.GENERATION, self.rng_loop.as_runnable())

        # Add edges to connect the nodes
        self.graph_builder.add_edge(START, RNGAgentGraph.GENERATION)
        self.graph_builder.add_edge(RNGAgentGraph.GENERATION, END)

Step-by-step explanation: override the setup method of the AgentGraph class to define the graph structure and the necessary LLM Clients.

  1. Instantiate the LLM Client (RNGClient) and provide the random_number_generator tool.
    • The client must be assigned to a self.<client_name> attribute to enable automatic usage metadata collection by the BR-ADK.
  2. Create a ReActLoop node:
    • Provide the RNGClient to it
    • Specify the keys of the State to read input from and write output to
  3. Add the node to the graph using the add_node method of the graph_builder property exposed by the base class
  4. Connect the nodes: START → GENERATION → END
    • No conditional edges are needed for this simple agent
  5. Do not compile the graph builder; compilation is handled internally by the AgentGraph class.

Note: in this case the agent is a simple ReAct agent that is implemented entirely as a prebuilt module in the BR-ADK. When developing more complex logic, you might need to define functions to run as nodes of the graph. These functions can be defined as methods of the graph class and passed as parameters to the add_node method, as sketched below.
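
As a rough illustration of that pattern (the extra node name and its logic below are hypothetical, not part of this lab), a custom node method and a conditional edge could look like this:

class MyAgentGraph(AgentGraph):
    GENERATION = "generation"
    VALIDATION = "validation"  # hypothetical extra node

    @override
    def setup(
        self,
        config,
    ) -> None:
        ...
        # A plain method can act as a node: it receives the state and returns the updated state.
        self.graph_builder.add_node(MyAgentGraph.VALIDATION, self.validate)

        # A conditional edge chooses the next node based on the current state.
        self.graph_builder.add_conditional_edges(
            MyAgentGraph.VALIDATION,
            lambda state: "done" if state.response else "retry",
            {"done": END, "retry": MyAgentGraph.GENERATION},
        )

    def validate(self, state: RNGAgentState) -> RNGAgentState:
        # Inspect or adjust the state before deciding where to go next.
        return state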

Chat Model Client

The Chat Model Client is responsible for invoking the LLM with the user's query and previous history (if available). In the BR-ADK, an LLM Client is a specialized caller that uses predefined system instructions and user instructions to accomplish a specific task.

For the RNG Agent, the RNGClient calls the LLM with a system prompt instructing it to generate random numbers using the random_number_generator tool.

Important note: you can override the invoke method of a ChatModelClient to customize the way the input is built and how the output is returned. However, when using the client with a ReActLoop, it is recommended to keep at least the default function signature in terms of input parameters and return type. In the example below, the invoke method is not overridden at all, and the default implementation provided by the ChatModelClient class is used.

src/llm_clients/__init__.py

from .rng_client import RNGClient

src/llm_clients/rng_client.py

from br_rapp_sdk.agents import ChatModelClient, ChatModelClientConfig
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage, ToolMessage
from typing import List

class RNGClient(ChatModelClient):

    SYSTEM_INSTRUCTIONS = (
        "You are a specialized assistant which uses a tool to generate a list of random numbers."
        "\n\n"
        "Based on the user query, your task is to call the tool with the appropriate parameters. "
        "You can NOT ask the user for more information, so invent reasonable parameters if needed. "
        "After you call the tool, you will receive the generated numbers, "
        "which you will format in a nice way and return as the answer. "
        "Feel free to add decorations, curious facts about some of the numbers, or any other "
        "interesting information."
        "\n\n"
        "If the user query is not related to generating random numbers, "
        "respond with a polite message indicating that you can only generate random numbers."
        "\n\n"
    )

    def __init__(
        self,
        tools,
    ):
        super().__init__(
            system_instructions=self.SYSTEM_INSTRUCTIONS,
            chat_model_config=ChatModelClientConfig.from_env(client_name="RNGClient"),
            tools=tools,
        )

Explanation:

  • RNGClient extends ChatModelClient and defines the system instructions for the LLM.
  • The __init__ method initializes the client with the system instructions, environment-based configuration, and registers the tools.
  • If needed, you can also override the invoke method of the ChatModelClient to customize input/output processing. When using a ReActLoop, as in this case, it is recommended to stick to the default implementation.

Agent Entry Point

The agent entry point is __main__.py. It initializes and runs the agent.

__main__.py

from br_rapp_sdk.agents import AgentApplication
from src import RNGAgentGraph, RNGAgentState

if __name__ == '__main__':
    agent = AgentApplication(
        RNGAgentGraph,
        RNGAgentState,
    )
    agent.run()

  • This automatically loads the AgentCard (agent.json), AgentConfig (config.yaml) and environment variables (.env).
  • The AgentApplication instantiates an AgentGraph of the specified type with the specified AgentState type.

Environment Variables

The .env file configures the LLM client, logging, and other settings.

.env

MODEL_PROVIDER=openai
MODEL=gpt-4.1-mini
OPENAI_API_KEY=<your-api-key>
LOG_LEVEL=debug
URL=http://localhost

Running the Agent (Bare-Metal)

Run the agent locally:

uv run .

Test the agent with curl:

curl -X POST http://localhost:9900 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "message/stream",
    "params": {
      "configuration": {
        "acceptedOutputModes": [
          "text"
        ]
      },
      "message": {
        "contextId": "8f01f3d172cd4396a0e535ae8aec6681",
        "messageId": "1",
        "role": "user",
        "parts": [
          {
            "type": "text",
            "text": "Generate a few numbers greater than 40"
          }
        ]
      }
    }
  }'

  • The response is streamed as JSON objects representing graph execution steps.
  • The final response (handled by MX-AI) may look like:
Here are a few random numbers greater than 40 for you:
53, 72, 42, 83, 52

Did you know?
- 53 is a prime number.
- 83 is also a prime number and is the 23rd prime, which is itself a prime number (making 83 a super-prime)!

If you'd like more numbers or numbers within a different range, just let me know!
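
If you prefer Python over curl, a small streaming test client can be sketched with httpx (an assumption here, not a project dependency; any HTTP client with streaming support works):

import httpx

# Same JSON-RPC payload as the curl example above.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "message/stream",
    "params": {
        "configuration": {"acceptedOutputModes": ["text"]},
        "message": {
            "contextId": "8f01f3d172cd4396a0e535ae8aec6681",
            "messageId": "1",
            "role": "user",
            "parts": [{"type": "text", "text": "Generate a few numbers greater than 40"}],
        },
    },
}

# Assuming the agent listens on localhost:9900, print each streamed graph step.
with httpx.stream("POST", "http://localhost:9900", json=payload, timeout=60) as response:
    for line in response.iter_lines():
        if line:
            print(line)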

Optional (Adding more nodes)

The graph used by the RNG Agent is intentionally simple, but it can be easily extended by adding more nodes. As an example, you can add a translation node that translates the generated response into another language.

This requires:

  • Adding a new node to the graph
  • Connecting it with the existing nodes
  • Implementing a small function to execute the translation

Below is a simplified example showing how the RNGAgentGraph can be extended.

src/graph.py

class RNGAgentGraph(AgentGraph):
    # Node names
    GENERATION = "generation"
    TRANSLATION = "translation"

    @override
    def setup(
        self,
        config,
    ) -> None:
        ...
        # Create a translation client
        self.translate_client = TranslateClient()

        ...
        # Add a node for translation to the graph
        self.graph_builder.add_node(RNGAgentGraph.TRANSLATION, self.translate)

        # Add edges to connect the nodes
        self.graph_builder.add_edge(START, RNGAgentGraph.GENERATION)
        self.graph_builder.add_edge(RNGAgentGraph.GENERATION, RNGAgentGraph.TRANSLATION)
        self.graph_builder.add_edge(RNGAgentGraph.TRANSLATION, END)

        ...
        self._log("RNGAgentGraph initialized", "info")

    def translate(
        self,
        state: RNGAgentState
    ) -> RNGAgentState:
        # Sample translation logic
        state.response = self.translate_client.invoke(state.response)
        return state

In this example:

  • A new node named TRANSLATION is introduced.
  • The execution flow is updated so that the translation step runs after number generation.
  • The translate method receives the current state, modifies the response, and returns the updated state.

This solution is intentionally incomplete and left as an exercise. You are encouraged to:

  • Implement a TranslateClient by extending ChatModelClient, with a system prompt instructing the LLM to translate text into a specific language (e.g. French).
  • Decide whether to override the invoke method to directly accept and return strings, or to adapt the input/output inside the translate method.
  • As a further improvement, infer the target language from the user query instead of using a fixed one. This may require adding new fields to RNGAgentState or introducing an additional node in the graph.

This exercise illustrates how easily the agent graph can be extended with new capabilities while keeping the overall structure clean and declarative.

What You Learned

  • Design and implement a task-specific agent using the BR-ADK
  • Define an Agent Card and Agent Config for agent registration
  • Build agent logic using LangGraph concepts (State, Tools, Graphs)
  • Implement a ReAct-style agent with tool calling
  • Create and configure a custom Chat Model Client
  • Run and test an agent in a bare-metal environment
  • Extend an agent graph with additional nodes and capabilities