Lab 2: Building a Task-Specific Agent
In this lab, you will create a simple RNG Agent that generates random numbers based on user input and formats the response in a user-friendly way, providing interesting insights on the generated numbers.
Prerequisites
This lab assumes:
- Basic Python and OOP knowledge
- Familiarity with the LangGraph framework
- Basic understanding of Pydantic models (i.e. what a Pydantic model is)
Some explanations are included to help you follow the code structure and logic. Keep the BR rApp SDK documentation handy, as it provides details on the classes and methods used in this lab. Spending some time with the documentation and experimenting with the labs will make agent development faster and easier.
Project setup
Follow the general instructions to set up your codebase. If done correctly, your project structure should look like this:
.
├── .env
├── .venv/
├── README.md
├── __init__.py
├── __main__.py
├── agent.json
├── config.yaml
├── pyproject.toml
├── src/
│   ├── __init__.py
│   ├── graph.py
│   └── llm_clients/
│       ├── __init__.py
│       └── rng_client.py
└── uv.lock
You may need to create any missing Python files manually.
Project Structure Overview
- .env: Environment variables such as model provider and API keys
- .venv/: Virtual environment created by uv
- README.md: Agent documentation
- __init__.py: Marks the directory as a Python package
- __main__.py: Agent entry point
- agent.json: The AgentCard
- config.yaml: The AgentConfig file used by the BR-ADK
- pyproject.toml: Configuration for uv, listing dependencies and project metadata
- src/: Contains source code including Agent logic and LLM clients
- uv.lock: Locks dependency versions (for uv)
You usually don't need to modify pyproject.toml, as uv manages it automatically. For this lab, paste the following to ensure your environment matches the lab setup:
[project]
name = "rng_agent"
version = "0.1.0"
description = "RNG Agent"
readme = "README.md"
requires-python = ">=3.13"
dependencies = [
"br-rapp-sdk==2025.12",
]
Installing Dependencies
Install dependencies in the project's virtual environment:
uv sync
Important note: the command won't install dependencies in the environment where uv is installed, but only in the environment created for this project (available in the .venv/ directory). You should not modify the uv.lock file or the contents of the .venv/ directory.
You can run the agent using the installed dependencies with:
uv run .
If you encounter issues with cached packages, use:
uv sync --no-cache
or
uv sync --reinstall
Implementing the Agent
After setting up the codebase, as an Agent Developer you need to:
- Define the AgentCard in agent.json
- Define the AgentConfig in config.yaml
- Implement the agent logic in src/graph.py
- Implement any LLM clients in src/llm_clients/<llm_client>.py
- Implement the agent entry point in __main__.py
- Provide the necessary environment variables in .env
The following subsections will guide you through these steps for the RNG Agent.
Agent Card
The AgentCard is a JSON file that describes the agent's capabilities, inputs, outputs, and other metadata. It was introduced by the A2A protocol and is used to register the agent in the MX-AI ecosystem and to provide information to other agents.
agent.json
{
  "version": "1.0.0",
  "name": "RNG Agent",
  "description": "Expert in generating random numbers and presenting them in a nice format with useful insights.",
  "capabilities": {
    "pushNotifications": false,
    "streaming": true
  },
  "defaultInputModes": ["text", "text/plain"],
  "defaultOutputModes": ["text", "text/plain"],
  "skills": [
    {
      "description": "Generate random integer numbers and present them to the user in a well-formatted way.",
      "examples": [
        "Can you generate a random number?",
        "Generate some random numbers for me.",
        "I need a random number between 1 and 100."
      ],
      "id": "random-number-generation-skill",
      "name": "Random Number Generation Skill",
      "tags": ["rng", "random numbers", "number generation"]
    }
  ]
}
Note: The agent URL is not specified in the AgentCard because it varies at runtime in Kubernetes based on the associated service and port. The URL is read from environment variables written by the Odin Operator and set in the Agent Card before starting the A2A server.
Agent Config
The AgentConfig is a YAML file interpreted by the BR-ADK.
It defines minimal agent configuration, including reachable agents and MCP servers. It also allows you to enable memory by setting the checkpoints flag.
For this simple agent, only the following configuration is needed.
config.yaml
checkpoints: false
remote-agents: []
mcp-servers: []
Agent Logic - State
Implementing the agent logic requires knowledge of LangGraph concepts, including:
- State
- Graph (nodes, edges, conditional edges)
- Tools
- Checkpoints (optional, more advanced)
The first step is defining the graph State, a data structure that stores user inputs, LLM outputs, and any intermediate data needed to handle a user query.
For agents built with the BR-ADK, the state must extend the AgentState class, which provides:
- All features of pydantic.BaseModel (runtime type checking, validation, serialization). See the Pydantic documentation for details.
- Abstract methods that you override to align with the MX-AI development workflow, saving you from implementing the async streaming functionality.
src/graph.py
from br_rapp_sdk.agents import AgentState, AgentTaskResult
from typing import Optional, Self
from typing_extensions import override
class RNGAgentState(AgentState):
    query: str
    response: Optional[str] = None

    @classmethod
    @override
    def from_query(cls, query: str) -> Self:
        return cls(query=query)

    @override
    def to_task_result(self) -> AgentTaskResult:
        return AgentTaskResult(
            task_status="completed" if self.response else "working",
            content=self.response or "Generating response...",
        )
Fields:
- query: User query string
- response: Agent response (string or None)
Methods to override:
- from_query: Initializes the state when the agent receives a new query
- to_task_result: Converts the state to an AgentTaskResult object, the expected output format for the SDK
- Optional methods (not needed for this agent):
  - update_after_checkpoint_restore: used for multi-turn conversations
  - is_waiting_for_human_input: used for human-in-the-loop actions
Note: since this agent uses a ReActLoop node, the chat history and tool results are handled internally, allowing the state to remain simple.
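To see how these overrides behave, you can do a quick check in a Python shell with the definitions above imported; the query string and response text here are just illustrative examples:

# Build the state as the SDK would when a new query arrives
state = RNGAgentState.from_query("Generate three random numbers below 10")

# No response yet, so the task is still reported as "working"
print(state.to_task_result())  # task_status="working", content="Generating response..."

# Once a node fills in the response, the task is reported as completed
state.response = "Your numbers: 3, 7, 1"
print(state.to_task_result())  # task_status="completed", content="Your numbers: 3, 7, 1"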
Agent Logic - Tool
The RNG Agent uses a tool to generate random numbers. A Python function can be registered as a tool by adding the @tool decorator from LangChain. This makes the function available for the LLM to call.
src/graph.py
from langchain_core.tools import tool
@tool
def random_number_generator(
    how_many: int = 1,
    min_value: int = 0,
    max_value: int = 100
) -> str:
    """
    Generates a list of random numbers.

    Args:
        how_many (int): Number of random numbers to generate.
        min_value (int): Minimum value for the random numbers.
        max_value (int): Maximum value for the random numbers.

    Returns:
        str: List of generated random numbers as a string.
    """
    import random
    return f"{[random.randint(min_value, max_value) for _ in range(how_many)]}"
- The LLM receives the function signature and docstring automatically thanks to the @tool decorator.
- The return type of the tool is str because the result is sent back to the LLM.
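Tools created with the @tool decorator can also be invoked directly, which is a handy way to sanity-check the function before wiring it into the graph; the argument values here are just an example:

# Call the tool directly with an explicit argument dictionary
print(random_number_generator.invoke({"how_many": 3, "min_value": 10, "max_value": 20}))
# Example output: "[14, 19, 11]" (your numbers will differ)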
Agent Logic - Graph
Next, define the graph that handles user queries.
- Extend AgentGraph from the BR-ADK.
  - Provides built-in methods to manage nodes, edges, and execution.
- Use ReActLoop to implement a ReAct-style agent.
  - Handles reasoning, tool calling, and metadata tracking.
  - Reduces boilerplate compared to implementing a ReAct agent manually.
src/__init__.py
from .graph import RNGAgentGraph, RNGAgentState
src/graph.py
from br_rapp_sdk.agents import AgentGraph
from br_rapp_sdk.agents.prebuilt import ReActLoop
from langgraph.graph import StateGraph, START, END
from .llm_clients import RNGClient
class RNGAgentGraph(AgentGraph):
    # Node names
    GENERATION = "generation"

    @override
    def setup(
        self,
        config,
    ) -> None:
        # Create an LLM client providing the tool
        self.rng_client = RNGClient(tools=[random_number_generator])

        # Create the ReAct loop providing the RNGClient
        # Note: 'query' is a key in RNGAgentState
        self.rng_loop = ReActLoop(
            config=config,
            StateType=RNGAgentState,
            chat_model_client=self.rng_client,
            loop_name="generation_loop",
            input_key="query",
            output_key="response",
        )

        # Add nodes to the graph
        self.graph_builder.add_node(RNGAgentGraph.GENERATION, self.rng_loop.as_runnable())

        # Add edges to connect the nodes
        self.graph_builder.add_edge(START, RNGAgentGraph.GENERATION)
        self.graph_builder.add_edge(RNGAgentGraph.GENERATION, END)
Step-by-step explanation: override the setup method of the AgentGraph class to define the graph structure and the necessary LLM Clients.
- Instantiate the LLM Client (RNGClient) and provide the random_number_generator tool.
  - The client must be assigned to a self.<client_name> attribute to enable automatic usage metadata collection by the BR-ADK.
- Create a ReActLoop node:
  - Provide the RNGClient to it
  - Specify the keys of the State to read input from and write output to
- Add the node to the Graph using the add_node method of the graph_builder property exposed by the base class
- Connect the nodes: START → GENERATION → END
  - No conditional edges are needed for this simple agent
- Do not compile the graph builder; it is handled internally by the AgentGraph class.
Note: in this case the agent is just a simple ReAct agent, which is completely implemented as a prebuilt module in the BR-ADK. When developing more complex logic, you might need to define functions to run as nodes of the graph. These functions can be defined as methods of the graph class and passed as parameters to the add_node method.
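A fuller example appears in the optional section at the end of this lab; as a minimal sketch (the node name, edge wiring, and footer text are purely illustrative, not part of the RNG Agent), such a method receives the current state, updates it, and is registered inside setup like any other node:

class RNGAgentGraph(AgentGraph):
    ...

    def add_footer(self, state: RNGAgentState) -> RNGAgentState:
        # Hypothetical extra node: decorate the generated response
        if state.response:
            state.response = state.response + "\n\n-- generated by the RNG Agent --"
        return state

    # Inside setup, after creating the ReAct loop:
    #   self.graph_builder.add_node("footer", self.add_footer)
    #   self.graph_builder.add_edge(RNGAgentGraph.GENERATION, "footer")
    #   self.graph_builder.add_edge("footer", END)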
Chat Model Client
The Chat Model Client is responsible for invoking the LLM with the user's query and previous history (if available). In the BR-ADK, an LLM Client is a specialized caller that uses predefined system instructions and user instructions to accomplish a specific task.
For the RNG Agent, the RNGClient calls the LLM with a system prompt instructing it to generate random numbers using the random_number_generator tool.
Important note: you can override the invoke method of a ChatModelClient to customize how the input is built and how the output is returned. However, when using the client with a ReActLoop, it is recommended to keep at least the default function signature in terms of input parameters and return type. In the example below, the invoke method is not overridden at all, and the default implementation provided by the ChatModelClient class is used.
src/llm_clients/__init__.py
from .rng_client import RNGClient
src/llm_clients/rng_client.py
from br_rapp_sdk.agents import ChatModelClient, ChatModelClientConfig
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage, ToolMessage
from typing import List
class RNGClient(ChatModelClient):
    SYSTEM_INSTRUCTIONS = (
        "You are a specialized assistant which uses a tool to generate a list of random numbers."
        "\n\n"
        "Based on the user query, your task is to call the tool with the appropriate parameters. "
        "You can NOT ask the user for more information, so invent reasonable parameters if needed. "
        "After you call the tool, you will receive the generated numbers, "
        "which you will format in a nice way and return as the answer. "
        "Feel free to add decorations, curious facts about some of the numbers, or any other "
        "interesting information."
        "\n\n"
        "If the user query is not related to generating random numbers, "
        "respond with a polite message indicating that you can only generate random numbers."
        "\n\n"
    )

    def __init__(
        self,
        tools,
    ):
        super().__init__(
            system_instructions=self.SYSTEM_INSTRUCTIONS,
            chat_model_config=ChatModelClientConfig.from_env(client_name="RNGClient"),
            tools=tools,
        )
Explanation:
- RNGClient extends ChatModelClient and defines the system instructions for the LLM.
- The __init__ method initializes the client with the system instructions and environment-based configuration, and registers the tools.
- If needed, you can also override the invoke method of the ChatModelClient to customize input/output processing. When using a ReActLoop, as in this case, it is recommended to stick to the default implementation.
Agent Entry Point
The agent entry point is __main__.py. It initializes and runs the agent.
__main__.py
from br_rapp_sdk.agents import AgentApplication
from src import RNGAgentGraph, RNGAgentState
if __name__ == '__main__':
    agent = AgentApplication(
        RNGAgentGraph,
        RNGAgentState,
    )
    agent.run()
- This automatically loads the AgentCard (agent.json), AgentConfig (config.yaml) and environment variables (.env).
- The AgentApplication instantiates an AgentGraph of the specified type with the specified AgentState type.
Environment Variables
The .env file configures the LLM client, logging, and other settings.
.env
MODEL_PROVIDER=openai
MODEL=gpt-4.1-mini
OPENAI_API_KEY=<your-api-key>
LOG_LEVEL=debug
URL=http://localhost
Running the Agent (Bare-Metal)
Run the agent locally:
uv run .
Test the agent with curl:
curl -X POST http://localhost:9900 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "message/stream",
    "params": {
      "configuration": {
        "acceptedOutputModes": [
          "text"
        ]
      },
      "message": {
        "contextId": "8f01f3d172cd4396a0e535ae8aec6681",
        "messageId": "1",
        "role": "user",
        "parts": [
          {
            "type": "text",
            "text": "Generate a few numbers greater than 40"
          }
        ]
      }
    }
  }'
- The response is streamed as JSON objects representing graph execution steps.
- The final response (handled by MX-AI) may look like:
Here are a few random numbers greater than 40 for you:
53, 72, 42, 83, 52
Did you know?
- 53 is a prime number.
- 83 is also a prime number and is the 23rd prime, which is itself a prime number (making 83 a super-prime)!
If you'd like more numbers or numbers within a different range, just let me know!
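If you prefer testing from Python instead of curl, a minimal equivalent using the requests library (assumed to be installed separately; the payload mirrors the curl example above) might look like this:

import requests

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "message/stream",
    "params": {
        "configuration": {"acceptedOutputModes": ["text"]},
        "message": {
            "contextId": "8f01f3d172cd4396a0e535ae8aec6681",
            "messageId": "1",
            "role": "user",
            "parts": [{"type": "text", "text": "Generate a few numbers greater than 40"}],
        },
    },
}

# Stream the response and print each line as it arrives
with requests.post("http://localhost:9900", json=payload, stream=True) as resp:
    for line in resp.iter_lines():
        if line:
            print(line.decode())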
Optional (Adding more nodes)
The graph used by the RNG Agent is intentionally simple, but it can be easily extended by adding more nodes. As an example, you can add a translation node that translates the generated response into another language.
This requires:
- Adding a new node to the graph
- Connecting it with the existing nodes
- Implementing a small function to execute the translation
Below is a simplified example showing how the RNGAgentGraph can be extended.
src/graph.py
class RNGAgentGraph(AgentGraph):
    # Node names
    GENERATION = "generation"
    TRANSLATION = "translation"

    @override
    def setup(
        self,
        config,
    ) -> None:
        ...
        # Create a translation client
        self.translate_client = TranslateClient()
        ...
        # Add a node for translation to the graph
        self.graph_builder.add_node(RNGAgentGraph.TRANSLATION, self.translate)

        # Add edges to connect the nodes
        self.graph_builder.add_edge(START, RNGAgentGraph.GENERATION)
        self.graph_builder.add_edge(RNGAgentGraph.GENERATION, RNGAgentGraph.TRANSLATION)
        self.graph_builder.add_edge(RNGAgentGraph.TRANSLATION, END)
        ...
        self._log("RNGAgentGraph initialized", "info")

    def translate(
        self,
        state: RNGAgentState
    ) -> RNGAgentState:
        # Sample translation logic
        state.response = self.translate_client.invoke(state.response)
        return state
In this example:
- A new node named TRANSLATION is introduced.
- The execution flow is updated so that the translation step runs after number generation.
- The translate method receives the current state, modifies the response, and returns the updated state.
This solution is intentionally incomplete and left as an exercise. You are encouraged to:
- Implement a TranslateClient by extending ChatModelClient, with a system prompt instructing the LLM to translate text into a specific language (e.g. French).
- Decide whether to override the invoke method to directly accept and return strings, or to adapt the input/output inside the translate method.
- As a further improvement, infer the target language from the user query instead of using a fixed one. This may require adding new fields to RNGAgentState or introducing an additional node in the graph.
This exercise illustrates how easily the agent graph can be extended with new capabilities while keeping the overall structure clean and declarative.
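If you want a starting point for the first bullet, a skeleton mirroring the RNGClient pattern might look like the following; the client_name, prompt wording, and fixed target language are illustrative assumptions, and the remaining decisions (overriding invoke, inferring the language from the query) are still left to you:

from br_rapp_sdk.agents import ChatModelClient, ChatModelClientConfig

class TranslateClient(ChatModelClient):
    SYSTEM_INSTRUCTIONS = (
        "You are a translator. Translate the text you receive into French, "
        "keeping numbers and formatting unchanged. Return only the translated text."
    )

    def __init__(self):
        super().__init__(
            system_instructions=self.SYSTEM_INSTRUCTIONS,
            chat_model_config=ChatModelClientConfig.from_env(client_name="TranslateClient"),
            tools=[],  # no tools needed for translation (assumes the SDK accepts an empty list)
        )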
What You Learned
- Design and implement a task-specific agent using the BR-ADK
- Define an Agent Card and Agent Config for agent registration
- Build agent logic using LangGraph concepts (State, Tools, Graphs)
- Implement a ReAct-style agent with tool calling
- Create and configure a custom Chat Model Client
- Run and test an agent in a bare-metal environment
- Extend an agent graph with additional nodes and capabilities