br_rapp_sdk.agents.chat_model_client_config
ModelProvider
ModelProvider is a type alias for the supported model providers.
The currently supported providers are:
- `openai`
- `nvidia`
- `ollama`
ChatModelClientConfig Objects
class ChatModelClientConfig(BaseModel)
Configuration for the chat model client.
This class is used to configure the chat model client with the necessary parameters. Some model providers may require specific environment variables to be set, like OPENAI_API_KEY for OpenAI.
Attributes
model (str): The name of the model to use.
model_provider (ModelProvider): The provider of the model (e.g., openai, nvidia, ollama).
base_url (str, optional): The base URL for the model provider, required for non-OpenAI providers.
client_name (str, optional): Name for the client.
The class can be instantiated directly or created from environment variables using the from_env class method (usually preferred).
Examples
Direct instantiation:
```python
config = ChatModelClientConfig(
    model="gpt-4o-mini",
    model_provider="openai",
    base_url="https://api.openai.com/v1",
    client_name="SampleClient",
)
```
From environment variables:
```python
config = ChatModelClientConfig.from_env(
    client_name="SampleClient",
)
```
__init__
def __init__(model: str,
model_provider: ModelProvider,
base_url: Optional[str] = None,
client_name: Optional[str] = None)
Initialize the ChatModelClientConfig with the provided parameters.
Arguments:
- `model` _str_ - The name of the model to use.
- `model_provider` _ModelProvider_ - The provider of the model (e.g., openai, nvidia, etc.).
- `base_url` _Optional[str]_ - The base URL for the model provider, required for non-OpenAI providers.
- `client_name` _Optional[str]_ - Name for the client.
from_env
@classmethod
def from_env(cls,
client_name: Optional[str] = None) -> "ChatModelClientConfig"
Create a ChatModelClientConfig instance from environment variables.
This method reads the following environment variables:
- `MODEL`: The model name, which can be in the format `<provider>:<model>`.
- `MODEL_PROVIDER` (optional): The provider of the model (e.g., openai, nvidia, ollama, etc.).
Arguments:
- `client_name` _Optional[str]_ - Name for the client.
Returns:
An instance of ChatModelClientConfig configured with values from environment variables.
Raises:
- `EnvironmentError` - If the required environment variables are not set or if the format is incorrect.
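As a sketch of the `<provider>:<model>` format, the following sets the variables in-process before calling `from_env`; the values are illustrative, and the exact parsing behavior is assumed from the description above:

```python
import os

from br_rapp_sdk.agents.chat_model_client_config import ChatModelClientConfig

# Illustrative values only. Provider and model are packed into MODEL using the
# <provider>:<model> format, so MODEL_PROVIDER can be omitted.
os.environ["MODEL"] = "openai:gpt-4o-mini"
os.environ["OPENAI_API_KEY"] = "sk-..."  # the OpenAI provider requires this

config = ChatModelClientConfig.from_env(client_name="SampleClient")
print(config.model, config.model_provider)  # expected: gpt-4o-mini openai
```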
br_rapp_sdk.agents.application
AgentApplication Objects
class AgentApplication()
__init__
def __init__(AgentGraphType: Type[AgentGraph],
AgentStateType: Type[AgentState],
agent_card_path: str = './agent.json')
Initialize the AgentApplication with the given agent graph and state types and the agent card path.
Arguments:
- `AgentGraphType` _Type[AgentGraph]_ - The agent graph class implementing the agent's logic.
- `AgentStateType` _Type[AgentState]_ - The agent state class used by the graph.
- `agent_card_path` _str_ - The path to the agent card JSON file. Defaults to "./agent.json".
load_agent_card
def load_agent_card(agent_card_path: str) -> AgentCard
Load the Agent Card from a JSON file.
Arguments:
- `agent_card_path` _str_ - The path to the Agent Card JSON file.
Returns:
- `AgentCard` - The loaded Agent Card.
Raises:
- `Exception` - For general errors during loading.
- `EnvironmentError` - If the URL environment variable is not set.
- `FileNotFoundError` - If the agent card file does not exist.
- `ValidationError` - If the agent card JSON is invalid.
agent_graph
@property
def agent_graph() -> AgentGraph
Get the agent graph.
agent_card
@property
def agent_card() -> AgentCard
Get the agent card.
run
def run(expose_mcp: bool = False) -> None
Run the agent application.
Arguments:
- `expose_mcp` _bool, optional_ - Whether to expose the MCP protocol. Defaults to False. This parameter isn't fully supported yet and may lead to unexpected behavior when set to True.
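Putting the pieces together, a minimal launch sketch; `MyAgentGraph` and `MyAgentState` are hypothetical user-defined subclasses like those shown in the `AgentGraph` and `AgentState` sections below:

```python
from br_rapp_sdk.agents.application import AgentApplication

from my_agent import MyAgentGraph, MyAgentState  # hypothetical user module

if __name__ == "__main__":
    app = AgentApplication(
        AgentGraphType=MyAgentGraph,
        AgentStateType=MyAgentState,
        agent_card_path="./agent.json",  # default location of the agent card
    )
    app.run()  # expose_mcp is left at its default (False)
```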
br_rapp_sdk.agents.chat_model_client
UsageMetadata Objects
class UsageMetadata(BaseModel)
Metadata about the usage of the chat model.
Note: defining a ChatModelClient as a property of an object derived from the AgentGraph class allows the SDK to automatically collect and aggregate usage metadata from the chat model and return it as part of the streaming response metadata.
Attributes
input_tokens (int): Number of input tokens used in the request.
output_tokens (int): Number of output tokens generated in the response.
total_tokens (int): Total number of tokens used (input + output).
inference_time (float): Time taken for the inference in seconds.
__add__
def __add__(other: Self | Dict[str, int]) -> Self
Add two UsageMetadata instances.
__sub__
def __sub__(other: Self | Dict) -> Self
Subtract two UsageMetadata instances.
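A short sketch of the operator overloads; since `UsageMetadata` is a Pydantic model, its fields are assumed to be settable as keyword arguments, and the numbers are made up:

```python
from br_rapp_sdk.agents.chat_model_client import UsageMetadata

a = UsageMetadata(input_tokens=100, output_tokens=20, total_tokens=120, inference_time=0.8)
b = UsageMetadata(input_tokens=50, output_tokens=10, total_tokens=60, inference_time=0.4)

combined = a + b      # field-wise sum: 150 input, 30 output, 180 total tokens
delta = combined - b  # field-wise difference recovers a
```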
ChatModelClient Objects
class ChatModelClient()
Client that facilitates interaction with a chat model.
This client can be used to send user instructions to the chat model and receive responses. It supports both single and batch invocations, and can handle tool calls if tools are provided.
If stored as a property of an object derived from the AgentGraph class, UsageMetadata will be automatically collected and returned as metadata of the streaming response.
Arguments:
chat_model_config (ChatModelClientConfig, optional): Configuration for the chat model client.
system_instructions (str): System instructions to be used in the chat model.
tools (Sequence[Dict[str, Any] | type | Callable | BaseTool | None], optional): LangChain-defined tools to be used by the chat model.
Examples:
```python
config = ChatModelClientConfig.from_env(
    client_name="SampleClient",
)
client = ChatModelClient(
    chat_model_config=config,
    system_instructions="You always reply in pirate language.",
)
response = client.invoke(HumanMessage("What is the weather like today?"))
```
__init__
def __init__(chat_model_config: ChatModelClientConfig | None = None,
system_instructions: str = "You are a helpful assistant.",
tools: Sequence[Dict[str, Any] | type | Callable | BaseTool
| None] = None)
Initialize the ChatModelClient with the given configuration, system instructions, and tools.
Arguments:
chat_model_config (ChatModelClientConfig, optional): Configuration for the chat model client. If None, it will be loaded from environment variables.
system_instructions (str): System instructions to be used by the chat model.
tools (Sequence[Dict[str, Any] | type | Callable | BaseTool | None], optional): LangChain-defined tools to be used by the chat model.
Raises:
- `EnvironmentError` - If the chat model configuration is not provided and cannot be loaded from environment variables.
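Tools are passed in LangChain form; here is a sketch using the standard `langchain_core.tools.tool` decorator, with a made-up `get_weather` tool:

```python
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool

from br_rapp_sdk.agents.chat_model_client import ChatModelClient

@tool
def get_weather(city: str) -> str:
    """Return a short weather report for a city."""
    return f"Sunny in {city}."  # stub implementation for illustration

client = ChatModelClient(
    chat_model_config=None,  # falls back to environment variables
    system_instructions="You are a helpful assistant.",
    tools=[get_weather],
)
response = client.invoke(HumanMessage("What is the weather in Dublin?"))
print(response.content)
```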
get_chat_model
def get_chat_model() -> BaseChatModel
Get the chat model instance.
Returns:
- `BaseChatModel` - The chat model instance configured with the provided model and tools.
invoke
def invoke(input: HumanMessage | List[ToolMessage],
history: Optional[List[BaseMessage]] = None) -> AIMessage
Invoke the chat model with user instructions or tool call results.
If the history is provided, it will be prepended to the input message.
This method modifies the history in-place to include the input and output messages.
Arguments:
- `input` _HumanMessage | List[ToolMessage]_ - The user input or tool call results to process.
- `history` _Optional[List[BaseMessage]]_ - Optional history of messages.
Returns:
- `AIMessage` - The response from the chat model.
Raises:
- `ValueError` - If the input type is invalid or if the response from the chat model is not an `AIMessage`.
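Because `invoke` appends both the input and the reply to the history in place, a running conversation can be kept in a single list. A sketch, reusing the `client` from the example above:

```python
from langchain_core.messages import BaseMessage, HumanMessage

history: list[BaseMessage] = []

client.invoke(HumanMessage("My name is Ada."), history=history)
reply = client.invoke(HumanMessage("What is my name?"), history=history)
# history now contains all four messages (two inputs and two AI replies),
# so the second call can answer using the first exchange.
```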
batch
def batch(inputs: List[HumanMessage],
history: Optional[List[BaseMessage]] = None) -> List[AIMessage]
Process multiple human messages in a single batch.
If the history is provided, it will be prepended to each input message.
This method does NOT modify the history in-place.
Arguments:
- `inputs` _List[HumanMessage]_ - List of user inputs to process.
- `history` _Optional[List[BaseMessage]]_ - Optional history of messages.
Returns:
- `List[AIMessage]` - List of responses from the chat model for each input.
Raises:
- `ValueError` - If the input type is invalid or if the response from the chat model is not an `AIMessage`.
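Since `batch` leaves the history untouched, the same context can be fanned out over several independent inputs. A sketch, again reusing `client` and `history` from the previous examples:

```python
from langchain_core.messages import HumanMessage

questions = [
    HumanMessage("Summarize our conversation in one line."),
    HumanMessage("Translate our conversation to French."),
]
answers = client.batch(questions, history=history)  # history is read, not modified
for question, answer in zip(questions, answers):
    print(question.content, "->", answer.content)
```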
get_usage_metadata
def get_usage_metadata(
from_timestamp: Optional[float] = None) -> UsageMetadata
Get the aggregated usage metadata from the chat model client.
Arguments:
- `from_timestamp` _Optional[float]_ - If provided, only usage metadata recorded after this timestamp will be considered. If None, all usage metadata will be considered.
Returns:
- `UsageMetadata` - The aggregated usage metadata.
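A sketch that isolates the usage of a single call; it assumes `from_timestamp` is an epoch timestamp as produced by `time.time()`, which the float type suggests but the text does not confirm:

```python
import time

from langchain_core.messages import HumanMessage

start = time.time()
client.invoke(HumanMessage("One last question: what is 2 + 2?"))
usage = client.get_usage_metadata(from_timestamp=start)
print(usage.total_tokens, usage.inference_time)
```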
br_rapp_sdk.agents.graph
AgentGraph Objects
class AgentGraph(ABC)
Abstract base class for agent graphs.
Supported Environment Variables:
- LOG_LEVEL: The logging level for the agent graph logger. Defaults to "info".
Extend this class to implement the specific behavior of an agent.
Example
```python
from br_rapp_sdk.agents import AgentGraph, AgentState
from langgraph.graph import StateGraph

class MyAgentState(AgentState):
    # Your state here
    # ...
    pass

class MyAgentGraph(AgentGraph):
    def __init__(self):
        # Define the agent graph using the langgraph.graph.StateGraph class
        graph_builder = StateGraph(MyAgentState)
        # Add nodes and edges to the graph as needed ...
        super().__init__(
            graph_builder=graph_builder,
            use_checkpoint=True,
            logger_name="my_agent",
        )
        self._log("Graph initialized", "info")

    # Your nodes logic here
    # ...
```
__init__
def __init__(graph_builder: StateGraph,
             use_checkpoint: bool = False,
             logger_name: Optional[str] = None)
Initialize the AgentGraph with a state graph and optional checkpointing and logger. Compile the state graph and set up the logger if logger_name is provided.
Arguments:
- `graph_builder` _StateGraph_ - The state graph builder.
- `use_checkpoint` _bool_ - Whether to use checkpointing. Defaults to False.
- `logger_name` _Optional[str]_ - The name of the logger to use. Defaults to None.
graph_builder
@property
def graph_builder() -> StateGraph
Get the state graph builder.
Returns:
StateGraph- The state graph builder.
setup
@abstractmethod
def setup(config: AgentConfig) -> None
Set up the agent graph with the provided configuration. Subclasses must implement this method.
Arguments:
- `config` _AgentConfig_ - The agent configuration.
astream
async def astream(query: str,
config: RunnableConfig) -> AsyncIterable[AgentTaskResult]
Asynchronously stream results from the agent graph based on the query and configuration. This method performs the following steps:
- Looks for a checkpoint associated with the provided configuration.
- If no checkpoint is found, creates a new agent state from the query using the `from_query` method of the `StateType`.
- If a checkpoint is found, restores the state from the checkpoint and updates it with the query using the `update_after_checkpoint_restore` method.
- Prepares the input for the graph execution, wrapping the state in a `Command` if the `is_waiting_for_human_input` method of the state returns `True`.
- Executes the graph with the `astream` method, passing the input and configuration.
- For each item in the stream:
  - If it is an interrupt, yields an `AgentTaskResult` with the status `input-required`. This enables human-in-the-loop interactions.
  - Otherwise, validates the item as a `StateType` instance and converts it to an `AgentTaskResult` using the `to_task_result` method of the state, then yields the result.
This method prints debug logs in the format `[<thread_id>]: <message>`.
Arguments:
- `query` _str_ - The query to process.
- `config` _RunnableConfig_ - Configuration for the runnable.
Returns:
- `AsyncIterable[AgentTaskResult]` - An asynchronous iterable of agent task results.
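A consumption sketch; the `thread_id` key under `configurable` follows the usual LangGraph checkpointing convention, which is an assumption here, and `MyAgentGraph` is the hypothetical subclass from the example above:

```python
import asyncio

from langchain_core.runnables import RunnableConfig

async def main() -> None:
    graph = MyAgentGraph()
    config: RunnableConfig = {"configurable": {"thread_id": "thread-1"}}
    async for result in graph.astream("What is the weather like today?", config):
        print(result.task_status, result.content)
        if result.task_status == "input-required":
            break  # collect human input, then call astream again on the same thread

asyncio.run(main())
```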
consume_agent_stream
async def consume_agent_stream(
agent_card: AgentCard,
message: Message) -> AsyncIterable[ClientEvent | Message]
WARNING: THIS METHOD IS DEPRECATED AND WILL BE REMOVED IN FUTURE RELEASES. Consume the agent stream from another A2A agent using the provided agent card and request.
Arguments:
- `agent_card` _AgentCard_ - The agent card of the target agent.
- `message` _Message_ - The message to send to the agent.
Yields:
- `AsyncIterable[ClientEvent | Message]` - An asynchronous iterable of client events or messages from the target agent.
draw_mermaid
def draw_mermaid(file_path: Optional[str] = None) -> None
Draw the agent graph in Mermaid format. If a file path is provided, save the diagram to the file, otherwise print it to the console.
Arguments:
- `file_path` _Optional[str]_ - The path to the file where the Mermaid diagram should be saved.
br_rapp_sdk.agents.state
AgentTaskStatus
AgentTaskStatus is a type alias for the status of an agent task.
The possible values are:
- `working`: The agent is currently processing the task.
- `input-required`: The agent requires additional input from the user to proceed.
- `completed`: The agent has successfully completed the task.
- `error`: An error occurred during the task execution.
AgentTaskResult Objects
class AgentTaskResult(BaseModel)
Result of an agent invocation.
Attributes
task_status (AgentTaskStatus): The status of the agent task.
content (str): The content of the agent's response or message.
Attributes meaning
| task_status | content |
|---|---|
| working | Ongoing task description or progress update. |
| input-required | Description of the required user input or context. |
| completed | Final response or result of the agent's processing. |
| error | Error message indicating what went wrong during the task execution. |
AgentState Objects
class AgentState(BaseModel, ABC)
Abstract Pydantic model from which agent state classes should inherit.
This class combines Pydantic's model validation with abstract state management requirements for agent operations. Subclasses should define concrete state models while implementing the required abstract methods.
Attributes
br_rapp_sdk_extra (Dict[str, Any]): A dictionary for storing extra state information.
The user should not modify this directly, as it is used internally by the SDK.
br_rapp_sdk_buffer (List): A list used as a buffer for intermediate state data.
The user should not modify this directly, as it is used internally by the SDK.
Methods
from_query (**abstract**): Factory method to create an agent state from an initial query
to_task_result (**abstract**): Convert current state to a `AgentTaskResult` object
update_after_checkpoint_restore: Refresh state after checkpoint restoration
is_waiting_for_human_input: Check if agent requires human input
Example
```python
from br_rapp_sdk.agents import AgentState, AgentTaskResult
from typing import List, Optional, Self
from typing_extensions import override

class MyAgentState(AgentState):
    user_inputs: List[str] = []
    assistant_outputs: List[str] = []
    question: str = ""
    answer: Optional[str] = None

    @classmethod
    def from_query(cls, query: str) -> Self:
        return cls(
            user_inputs=[query],
            question=query,
        )

    @override
    def update_after_checkpoint_restore(self, query: str) -> None:
        self.user_inputs.append(query)
        self.question = query

    @override
    def to_task_result(self) -> AgentTaskResult:
        if self.answer is None:
            return AgentTaskResult(
                task_status="working",
                content="Processing your request...",
            )
        return AgentTaskResult(
            task_status="completed",
            content=self.answer,
        )
```
from_query
@classmethod
@abstractmethod
def from_query(cls, query: str) -> Self
Instantiate agent state from initial query.
Factory method called by the execution framework to create a new state instance. Alternative to direct initialization, allowing state-specific construction logic.
Arguments:
- `query` - Initial user query to bootstrap agent state.
Returns:
- `Self` - Fully initialized agent state instance.
update_after_checkpoint_restore
def update_after_checkpoint_restore(query: str) -> None
Update state with new query after checkpoint restoration.
Called by the SDK when restoring from a saved checkpoint. Allows the state to synchronize with new execution parameters before resuming the graph.
Arguments:
- `query` - New query to execute with the restored state.
to_task_result
@abstractmethod
def to_task_result() -> AgentTaskResult
Convert current state to a task result object.
Used to yield execution results during graph processing. This method defines how the agent's internal state translates to external-facing task results.
Returns:
- `AgentTaskResult` - Task result representation of current state.
is_waiting_for_human_input
def is_waiting_for_human_input() -> bool
Check if agent is blocked waiting for human input.
Default implementation returns False. Override in subclasses to implement
human-in-the-loop pausing behavior.
Returns:
- `bool` - True if the agent requires human input to proceed, False otherwise.
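A sketch of an override, using a hypothetical `pending_question` field that a node would set when it needs clarification (the required abstract methods are omitted for brevity):

```python
from typing import Optional
from typing_extensions import override

from br_rapp_sdk.agents import AgentState

class ClarifyingAgentState(AgentState):  # hypothetical example state
    pending_question: Optional[str] = None

    # from_query and to_task_result omitted for brevity

    @override
    def is_waiting_for_human_input(self) -> bool:
        # Pause the graph whenever a clarification question is outstanding.
        return self.pending_question is not None
```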