Lab 1: Tutorial Agent
In this Lab, you will build a simple TutorialAgent that generates random numbers based on a user query and formats the response in a user-friendly way. Then, you will learn how to integrate this new agent into the MX-AI ecosystem.
Prerequisites
This Lab assumes you can understand and write simple Python code, and that you have basic knowledge of the LangGraph framework and of what Pydantic models are. Some explanations will still be provided to help you understand the code structure and logic.
Follow the instructions in [Link to previous README here] to set up your codebase. If you did everything correctly, this is how the project structure should look (ignoring the tests/ directory):
.
├── agent.json
├── .env
├── __init__.py
├── __main__.py
├── pyproject.toml
├── README.md
├── src/
│   ├── graph.py
│   ├── __init__.py
│   └── llm_clients/
│       ├── __init__.py
│       └── tutorial_client.py
├── uv.lock
└── .venv/
You can create the missing Python files by hand.
The project structure includes:
- agent.json: The AgentCard for your agent.
- .env: Environment variables for your agent, such as model provider and API keys.
- __init__.py: Marks the directory as a Python package.
- __main__.py: The entry point for your agent.
- pyproject.toml: Configuration file for the uv package manager, listing dependencies and project metadata.
- README.md: Documentation for your agent.
- src/: Contains your source code, including the main agent logic and LLM client.
- uv.lock: Lock file for the uv package manager.
- .venv/: Virtual environment created by uv.
Usually, you don't need to touch the pyproject.toml file, as uv will manage it for you. However, for this tutorial, we recommend that you paste the following content into it to ensure your environment looks exactly like the one used in this Lab:
[project]
name = "tutorial"
version = "0.1.0"
description = "Tutorial Agent"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
    "br-rapp-sdk==2025.06",
    "httpx>=0.28.1",
    "python-dotenv>=1.1.0",
    "uvicorn>=0.35.0",
]
After updating the pyproject.toml file, run the following command to install the dependencies:
uv sync
Never touch the uv.lock file or the .venv/ directory, as they are managed by uv. The uv.lock file is used to lock the dependencies to specific versions, while the .venv/ directory contains the virtual environment created by uv. This environment will be used when you run your agent with
uv run .
Note: if you play a bit with removing/installing dependencies, keep in mind that uv might cache some packages. To fix any issues arising from this, use the --no-cache option when syncing your environment.
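For example, a clean re-sync of the environment would be:
uv sync --no-cache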
Implementing the Agent
After setting up the codebase, the next steps you need to take as an Agent Developer are:
- Write down the AgentCard in agent.json.
- Implement the agent logic in src/graph.py.
- Implement the LLM clients in src/llm_clients/<llm_client>.py.
- Implement the agent entry point in __main__.py.
- Provide the necessary environment variables in the .env file.
The following subsections will guide you through these steps for the TutorialAgent.
AgentCard
The AgentCard is a JSON file that describes the agent's capabilities, inputs, outputs, and other metadata. It is used to register the agent in the MX-AI ecosystem and to provide information about the agent to other components.
agent.json
{
  "url": "http://0.0.0.0:9999/",
  "version": "1.0.0",
  "name": "Tutorial Agent",
  "description": "Expert in generating random numbers and presenting them in a nice format.",
  "capabilities": {
    "pushNotifications": false,
    "streaming": true
  },
  "defaultInputModes": [
    "text",
    "text/plain"
  ],
  "defaultOutputModes": [
    "text",
    "text/plain"
  ],
  "skills": [
    {
      "description": "Generate random integer numbers and present them to the user in a well-formatted way.",
      "examples": [
        "Can you generate a random number?",
        "Generate some random numbers for me.",
        "I need a random number between 1 and 100."
      ],
      "id": "random-number-generation-skill",
      "name": "Random Number Generation Skill",
      "tags": [
        "RNG",
        "numbers"
      ]
    }
  ]
}
Agent Logic - State
Implementing the logic is where your knowledge of LangGraph will come into play. Make sure to check out the basic concepts of LangGraph before proceeding (a minimal example is sketched right after the list below), including:
- State
- Graph - nodes, edges, conditional edges
- Tools
- Checkpoints (more advanced, not required for this tutorial)
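If these concepts are still new to you, here is a minimal, self-contained LangGraph sketch (unrelated to the TutorialAgent) showing a state, two nodes, and the edges connecting them:
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class CounterState(TypedDict):
    value: int

def increment(state: CounterState) -> CounterState:
    # Each node receives the current state and returns its updates
    return {"value": state["value"] + 1}

def double(state: CounterState) -> CounterState:
    return {"value": state["value"] * 2}

builder = StateGraph(CounterState)
builder.add_node("increment", increment)
builder.add_node("double", double)
builder.add_edge(START, "increment")
builder.add_edge("increment", "double")
builder.add_edge("double", END)

graph = builder.compile()
print(graph.invoke({"value": 1}))  # {'value': 4}
The TutorialAgent built in this Lab follows the same pattern, just with a richer state, a tool, and a conditional edge.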
The first step here is to define the graph State. This is a data structure where you can store the user inputs, LLMs outputs, and any intermediate data that you want to keep track of while handling the user query.
To develop an agent using the BubbleRAN rApp SDK, the state must extend the br_rapp_sdk.agents.agent.AgentState class. This class provides:
- All pydantic.BaseModel features, such as runtime type checking, validation, and serialization, as it is derived from it. Make sure to check the Pydantic documentation for more information about all the features it provides.
- Some abstract methods that must be implemented to facilitate integration with the SDK.
graph.py
from br_rapp_sdk.agents.agent import AgentState, AgentTaskResult
from br_rapp_sdk.agents.tools import ToolCall
from langchain_core.messages import BaseMessage, ToolMessage
from typing import List, Optional, Self, override


class TutorialAgentState(AgentState):
    query: str
    llm_input: str | List[ToolMessage] = ""
    history: List[BaseMessage] = []
    llm_response: Optional[str | List[ToolCall]] = None

    @classmethod
    @override
    def from_query(cls, query: str) -> Self:
        return cls(
            query=query,
            llm_input=query,
        )

    @override
    def update_after_checkpoint_restore(self, query: str) -> None:
        pass

    @override
    def to_task_result(self) -> AgentTaskResult:
        if self.llm_response is None:
            return AgentTaskResult(
                task_status="working",
                content="Generating response...",
            )
        elif isinstance(self.llm_response, str):
            return AgentTaskResult(
                task_status="completed",
                content=self.llm_response.strip(),
            )
        else:
            return AgentTaskResult(
                task_status="working",
                content="Processing tool call...",
            )
Notice how we defined fields for the TutorialAgentState class as we would do for any Pydantic model. The fields required by this TutorialAgent are:
- query: The user query that the agent will process.
- llm_input: The input for the LLM, which can be either the user query or the results of a tool call.
- history: The conversation history, which is a list of messages exchanged between the user and the agent. This is useful for maintaining context in the conversation.
- llm_response: The response from the LLM, which can be either a string or a list of tool calls.
The methods to override are:
- from_query: This method is called by the SDK when the agent receives a new query. This is where you initialize the state using the user query.
- update_after_checkpoint_restore: This method is called when the agent state is restored from a checkpoint in a multi-turn conversation. This is where you should update the state with the new user query. In this case, the agent will not perform checkpointing, as we do not want to manage multi-turn conversations in this tutorial. For this reason, the method is left empty.
- to_task_result: This method converts the state to an AgentTaskResult object, which is the format expected by the BubbleRAN rApp SDK as the output of the agent.
- is_waiting_for_human_input (optional): This method must be overridden in case the agent supports human-in-the-loop for action validation, feedback, etc. It returns False by default. In your implementation, it should return True if, based on the state, you can determine that the agent execution has been paused and is waiting for human feedback to continue. We don't need this in our TutorialAgent, so we can leave it out (a purely illustrative sketch follows this list).
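For completeness, here is what such an override could look like. This is purely illustrative, is not part of this Lab, and assumes a hypothetical waiting_for_feedback field that a node would set right before the graph pauses for human input:
# Illustrative only, not used by the TutorialAgent.
# Assumes a hypothetical `waiting_for_feedback` field set by a node
# right before the graph pauses to wait for human input.
class HumanFeedbackState(TutorialAgentState):
    waiting_for_feedback: bool = False

    @override
    def is_waiting_for_human_input(self) -> bool:
        return self.waiting_for_feedback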
Two notes about the state:
- In case your agent doesn't use tools, you could simplify the llm_input and llm_response fields to just be strings (see the sketch right after this list).
- In case you need multiple LLM clients in your agent, you can handle the history for each LLM client separately (recommended) or use a single history for all LLM clients. The choice depends on your agent's logic and requirements, and on how you structure the prompts.
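As an example of the first note, a tool-less variant of the state could look like the following sketch. It is not used in this Lab and assumes the same imports as TutorialAgentState above:
# Sketch only: a simplified state for an agent that never calls tools.
# Both the LLM input and the LLM response are plain strings.
class SimpleAgentState(AgentState):
    query: str
    llm_input: str = ""
    history: List[BaseMessage] = []
    llm_response: Optional[str] = None

    @classmethod
    @override
    def from_query(cls, query: str) -> Self:
        return cls(query=query, llm_input=query)

    @override
    def update_after_checkpoint_restore(self, query: str) -> None:
        pass

    @override
    def to_task_result(self) -> AgentTaskResult:
        if self.llm_response is None:
            return AgentTaskResult(task_status="working", content="Generating response...")
        return AgentTaskResult(task_status="completed", content=self.llm_response.strip())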
Agent Logic - Tool
Our TutorialAgent will use a tool to generate random numbers based on the user query. Tools, in the AI Agent context, are functions that can be called by the LLM to perform specific tasks. In this case, we will define a tool that generates a certain amount of random integer values between a minimum and maximum value.
graph.py
from langchain_core.tools import tool


@tool
def random_number_generator(
    how_many: int = 1,
    min_value: int = 0,
    max_value: int = 100
) -> str:
    """
    Generates a list of random numbers.

    Args:
        how_many (int): Number of random numbers to generate.
        min_value (int): Minimum value for the random numbers.
        max_value (int): Maximum value for the random numbers.

    Returns:
        str: List of generated random numbers as a string.
    """
    import random
    return f"{[random.randint(min_value, max_value) for _ in range(how_many)]}"
The function is decorated with the @tool decorator in order to automatically register it as a tool for our LLM clients. Never forget to write a descriptive docstring for your tool, as it will be used by the LLM to understand what the tool does, what parameters it accepts, and what it returns. Since this function will be called by an LLM, we return the result as a string to feed it back to the LLM more easily.
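Because the @tool decorator wraps the function in a LangChain tool object, you can also invoke it directly with a dictionary of arguments, which is a handy sanity check that involves no LLM at all:
# Quick manual check of the tool, independent of any LLM.
print(random_number_generator.invoke(
    {"how_many": 3, "min_value": 1, "max_value": 10}
))
# Example output: "[4, 9, 2]"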
Agent Logic - Graph
The next step is to define the Graph that will handle the user query. For the graph, it is recommended to extend the AgentGraph class from the BR rApp SDK, which already provides the functionality to run the graph under the hood.
graph.py
from br_rapp_sdk.agents.agent import AgentGraph, AgentTaskResult
from br_rapp_sdk.agents.tools import ToolCall
from langchain_core.messages import BaseMessage, ToolMessage
from langchain_core.runnables import RunnableConfig
from langgraph.graph import StateGraph, START, END
from .llm_clients import TutorialClient
from typing import List, Literal, Optional


class TutorialAgentGraph(AgentGraph):
    # Node names
    LLM_NODE = "llm"
    TOOL_CALL_NODE = "tool_call"

    # Decide if the next node is a tool call or the end of the graph
    @classmethod
    def _tool_or_end(
        cls,
        state: TutorialAgentState
    ) -> Literal["tool_call", "__end__"]:
        return END if isinstance(state.llm_response, str) else cls.TOOL_CALL_NODE

    def __init__(
        self,
    ):
        # Define the graph builder
        graph_builder = StateGraph(TutorialAgentState)
        # Add nodes to the graph
        graph_builder.add_node(TutorialAgentGraph.LLM_NODE, self._llm_node)
        graph_builder.add_node(TutorialAgentGraph.TOOL_CALL_NODE, self._tool_call_node)
        # Add edges to connect the nodes
        graph_builder.add_edge(START, TutorialAgentGraph.LLM_NODE)
        graph_builder.add_conditional_edges(TutorialAgentGraph.LLM_NODE, self._tool_or_end)
        graph_builder.add_edge(TutorialAgentGraph.TOOL_CALL_NODE, TutorialAgentGraph.LLM_NODE)
        # Call AgentGraph constructor to initialize and compile the graph
        super().__init__(graph_builder=graph_builder, use_checkpoint=False, logger_name="TutorialAgent")
        # Create LLM clients providing the tool
        self.llm_client = TutorialClient(tools=[random_number_generator])

    def _llm_node(
        self,
        state: TutorialAgentState,
    ) -> TutorialAgentState:
        self._log("Node `llm`: invoked", "debug")
        # Invoke the LLM client with the user's query and previous history
        response = self.llm_client.invoke(
            question=state.llm_input,
            history=state.history,
        )
        # Update the state with the response and return it
        state.llm_response = response
        return state

    def _tool_call_node(
        self,
        state: TutorialAgentState,
    ) -> TutorialAgentState:
        self._log("Node `tool_call`: invoked", "debug")
        state.llm_input = []
        for tool_call in state.llm_response:
            if tool_call.name == "random_number_generator":
                tool_response: str = self.llm_client.tools[0].invoke(
                    input=tool_call.arguments
                )
                tool_msg = ToolMessage(
                    tool_call_id=tool_call.id,
                    content=tool_response,
                )
                state.llm_input.append(tool_msg)
            else:
                self._log(f"Error: Unknown tool call {tool_call.name}", "error")
        state.llm_response = None
        # Return the updated state
        return state
Let's break down the code:
- The __init__ method defines the graph structure in terms of nodes and edges. Then, it calls the AgentGraph class constructor, which is responsible for compiling the defined graph and initializing a checkpointer (optional) and a logger (optional). The graph includes a conditional edge to route the execution flow to the tool call node or the end node, based on what the LLM client generated. Check the LangGraph documentation for more details about conditional edges. Additionally, this method initializes all the LLM clients used by the agent. In this case we only use the TutorialClient. Notice how we provide the random_number_generator tool to the client constructor.
- The _tool_or_end method is a class method used by the conditional edge that determines whether the next node should be the tool call node or the end node, based on the type of the llm_response field in the state. An important note: the return type must be a Literal with the possible node names in order for LangGraph to correctly compile the graph.
- The _llm_node method is a node in the graph that invokes the TutorialClient LLM client with the user's query and previous history. It updates the state with the LLM response.
- The _tool_call_node method is a node in the graph that processes the tool calls generated by the LLM client. It invokes the random_number_generator tool with the arguments from the tool call and updates the llm_input field in the state with the tool response. This is done because after running a tool, we provide the result back to the LLM for further processing. The llm_response field is set to None after being consumed to keep the state clean for the next iteration.
Note how the AgentGraph class provides a _log method that can be used to log messages during the graph execution. This is useful for debugging and monitoring the agent's behavior.
LLM Client
The LLM Client is responsible for invoking the LLM with the user's query and previous history. In the BR rApp SDK, an LLM Client can be seen as a specialized LLM caller, meaning that it will call the LLM with a specific prompt to accomplish a specific task. In our case, the TutorialClient will call the LLM with a prompt that instructs it to generate random numbers based on the user query, using the random_number_generator tool.
tutorial_client.py
from br_rapp_sdk.agents.chat_model_config import ChatModelClientConfig
from br_rapp_sdk.agents.chat_model_client import ChatModelClient
from br_rapp_sdk.agents.tools import ToolCall
from langchain_core.messages import HumanMessage, BaseMessage, ToolMessage
from typing import List


class TutorialClient(ChatModelClient):
    SYSTEM_INSTRUCTIONS = (
        "You are a specialized assistant which uses a tool to generate a list of random numbers."
        "\n\n"
        "Based on the user query, your task is to call the tool with the appropriate parameters. "
        "You can NOT ask the user for more information, so invent reasonable parameters if needed. "
        "After you call the tool, you will receive the generated numbers, "
        "which you will format in a nice way and return as the answer. "
        "Feel free to add decorations, curious facts about some of the numbers, or any other "
        "interesting information."
        "\n\n"
        "If the user query is not related to generating random numbers, "
        "respond with a polite message indicating that you can only generate random numbers."
        "\n\n"
    )
    USER_INSTRUCTIONS = (
        "USER QUERY:\n{question}\n\n"
    )

    def __init__(
        self,
        tools,
    ):
        super().__init__(
            system_instructions=self.SYSTEM_INSTRUCTIONS,
            chat_model_config=ChatModelClientConfig.from_env(client_name="TutorialClient"),
            tools=tools,
        )

    def invoke(
        self,
        question: str | List[ToolMessage],
        history: List[BaseMessage],
    ) -> str | List[ToolCall]:
        if isinstance(question, str):
            input = HumanMessage(
                content=TutorialClient.USER_INSTRUCTIONS.format(
                    question=question,
                )
            )
        else:
            input = question
        self._log(f"Invoking LLM with input: {question if isinstance(question, str) else 'tool results'}", "debug")
        response = super().invoke(input, history=history)
        if response.tool_calls:
            self._log("LLM response: tool calls", "debug")
            return [
                ToolCall.from_langchain_tool_call(tool_call) for tool_call in response.tool_calls
            ]
        else:
            response_content = response.content.strip()
            self._log(f"LLM response: {response_content}", "debug")
            return response_content
Defining a custom client requires some code but is quite straightforward:
- The TutorialClient class extends the ChatModelClient class and defines the system and user instructions for the LLM. If you are not familiar with the concept of system and user instructions, you can check the LangChain documentation or any other resource about prompt engineering.
- The __init__ method initializes the client with the system instructions and a configuration object that is loaded from the environment variables. The tools parameter is passed to the parent class constructor, which registers the tools for the client.
- Although not strictly necessary, we also override the invoke method to handle the input and output of the LLM client. In particular, we do this to logically separate the prompt construction from the LLM invocation with just a query or the result of a tool call. This method builds the input for the ChatModelClient.invoke method and proceeds to call it. The response is then checked for tool calls, and if any are present, they are returned as a list of ToolCall objects. The BR rApp SDK provides a simpler ToolCall abstraction, so the tool calls are converted to this format thanks to the ToolCall.from_langchain_tool_call class method. If no tool calls are present, the response content is returned as a string. (A quick standalone usage sketch follows this list.)
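If you want to try the client on its own before wiring up the full graph, a rough sketch could look like the following. It assumes you run it from the project root and that the environment variables read by ChatModelClientConfig.from_env are set in your .env file:
# Sketch only: exercise TutorialClient outside of the graph.
from dotenv import load_dotenv
load_dotenv()  # make the model provider / API key available to from_env

from src.graph import random_number_generator  # the tool defined earlier
from src.llm_clients import TutorialClient

client = TutorialClient(tools=[random_number_generator])
result = client.invoke("Generate three random numbers between 1 and 10", history=[])
print(result)  # either a formatted string or a list of ToolCall objects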
Agent Entry Point
The entry point for the agent is the __main__.py file. This file is responsible for initializing the agent and running it. It is also where the environment variables for the agent, such as the model provider and model name, are loaded.
__main__.py
import httpx
import json
import logging
import uvicorn
from a2a.types import AgentCard
from br_rapp_sdk.agents import AgentApplication
from src.graph import TutorialAgentGraph
from dotenv import load_dotenv

load_dotenv()

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def main():
    """Starts the Tutorial Agent."""
    try:
        with open('./agent.json', 'r') as file:
            agent_data = json.load(file)
        agent_card = AgentCard.model_validate(agent_data)
        logger.info(f'Agent Card loaded: {agent_card}')
        url = httpx.URL(agent_card.url)
        graph = TutorialAgentGraph()
        agent = AgentApplication(
            agent_card=agent_card,
            agent_graph=graph,
        )
        uvicorn.run(agent.build(), host=url.host, port=url.port)
    except Exception as e:
        logger.error(f'An error occurred during server startup: {e}')
        exit(1)


if __name__ == '__main__':
    main()
The above code simply loads the AgentCard from the agent.json file and the environment variables from the .env file (using the dotenv package). Then, it initializes the TutorialAgentGraph and creates an AgentApplication instance with the agent card and graph. Finally, it starts a server using uvicorn.
Environment Variables
The environment variables for the agent are defined in the .env file. These variables are used to configure the LLM client, such as the model provider, model name, and API key. They are also used to configure the logging level for the agent or any other settings that you might need.
Here follows a sample .env file for the TutorialAgent, using OpenAI as the model provider.
.env
MODEL_PROVIDER=openai
MODEL=gpt-4.1-mini
OPENAI_API_KEY=<your-api-key>
LOG_LEVEL=debug
Running the Agent (Bare-Metal)
The agent can be run in a bare-metal environment using the following command:
uv run .
To test your agent, open a terminal and run the following command:
curl -X POST http://localhost:9999 \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "message/stream",
    "params": {
      "configuration": {
        "acceptedOutputModes": [
          "text"
        ]
      },
      "message": {
        "contextId": "8f01f3d172cd4396a0e535ae8aec6681",
        "messageId": "1",
        "role": "user",
        "parts": [
          {
            "type": "text",
            "text": "Generate a few numbers greater than 40"
          }
        ]
      }
    }
  }'
The output will look like a series of JSON objects, each representing a step of the agent's graph execution. Here is an example object containing the response to the question sent with the command above:
data: {
  "id":1,
  "jsonrpc":"2.0",
  "result": {
    "artifact": {
      "artifactId":"0cef4e96-6434-47c6-8708-64f0dffcecf1",
      "parts": [{
        "kind":"text",
        "text":"Here are a few random numbers greater than 40 for you:\n53, 72, 42, 83, 52\n\nDid you know? \n- 53 is a prime number.\n- 83 is also a prime number and is the 23rd prime, which is itself a prime number (making 83 a super-prime)!\n\nIf you'd like more numbers or numbers within a different range, just let me know!"
      }]
    },
    "contextId":"8f01f3d172cd4396a0e535ae8aec6681",
    "kind":"artifact-update",
    "taskId":"0d4860b9-fe3a-4178-9f03-8899dfccdf01"
  }
}
The MX-AI ecosystem already handles the streaming of the response, so if you plug your agent into the system, you will only see the final response:
Here are a few random numbers greater than 40 for you:
53, 72, 42, 83, 52
Did you know?
- 53 is a prime number.
- 83 is also a prime number and is the 23rd prime, which is itself a prime number (making 83 a super-prime)!
If you'd like more numbers or numbers within a different range, just let me know!