br_rapp_sdk.agents.chat_model_config

ModelProvider

ModelProvider is a type alias for the supported model providers. The currently supported providers are:

  • openai
  • nvidia
  • ollama
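
In practice the alias presumably reduces to a `Literal` over these strings; a minimal sketch (the exact definition in the SDK may differ):

```python
from typing import Literal

# Assumed definition, mirroring the providers listed above.
ModelProvider = Literal["openai", "nvidia", "ollama"]
```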

ChatModelClientConfig Objects

```python
class ChatModelClientConfig(BaseModel)
```

Configuration for the chat model client.

This class is used to configure the chat model client with the necessary parameters. Some model providers may require specific environment variables to be set, like OPENAI_API_KEY for OpenAI.

Arguments:

  • model str - The name of the model to use.

  • model_provider ModelProvider - The provider of the model (e.g., openai, nvidia, etc.).

  • base_url str, optional - The base URL for the model provider, required for non-OpenAI providers.

  • client_name str, optional - Name for the client logger.

  • logging_level Literal["debug", "info", "warning", "error", "critical"], optional - Logging level for the client logger.

When a client name and logging level are provided, a logger is created for use by the client.

The class can be instantiated directly or created from environment variables using the from_env class method (usually preferred).

Examples:

Direct instantiation:

```python
config = ChatModelClientConfig(
    model="gpt-4o-mini",
    model_provider="openai",
    base_url="https://api.openai.com/v1",
    client_name="SampleClient",
    logging_level="debug",
)
```

From environment variables:

```python
config = ChatModelClientConfig.from_env(
    client_name="SampleClient",
    logging_level="debug",
)
```

__init__

```python
def __init__(model: str,
             model_provider: ModelProvider,
             base_url: Optional[str] = None,
             client_name: Optional[str] = None,
             logging_level: Optional[Literal["debug", "info", "warning",
                                             "error", "critical"]] = None)
```

Initialize the ChatModelClientConfig with the provided parameters.

Arguments:

  • model str - The name of the model to use.
  • model_provider ModelProvider - The provider of the model (e.g., openai, nvidia, etc.).
  • base_url Optional[str] - The base URL for the model provider, required for non-OpenAI providers.
  • client_name Optional[str] - Name for the client logger.
  • logging_level Optional[Literal["debug", "info", "warning", "error", "critical"]] - Logging level for the client logger.

from_env

```python
@classmethod
def from_env(
    cls,
    client_name: Optional[str] = None,
    logging_level: Optional[Literal["debug", "info", "warning", "error",
                                    "critical"]] = None
) -> "ChatModelClientConfig"
```

Create a ChatModelClientConfig instance from environment variables.

This method reads the following environment variables:

  • MODEL: The model name, which can be in the format <provider>:<model>.
  • MODEL_PROVIDER (optional): The provider of the model (e.g., openai, nvidia, ollama, etc.).
  • LOG_LEVEL (optional): The logging level for the client logger.

Arguments:

  • client_name Optional[str] - Name for the client logger.
  • logging_level Optional[Literal["debug", "info", "warning", "error", "critical"]] - Logging level for the client logger.

Returns:

An instance of ChatModelClientConfig configured with values from environment variables.

Raises:

  • EnvironmentError - If the required environment variables are not set or if the format is incorrect.
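
For illustration, a minimal sketch of the environment-variable flow (the variable values are hypothetical):

```python
import os

# MODEL may embed the provider as "<provider>:<model>", in which case
# MODEL_PROVIDER can be omitted.
os.environ["MODEL"] = "openai:gpt-4o-mini"
os.environ["LOG_LEVEL"] = "debug"

config = ChatModelClientConfig.from_env(client_name="SampleClient")
```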

br_rapp_sdk.agents.chat_model_client

ChatModelClient Objects

```python
class ChatModelClient()
```

Client that facilitates interaction with a chat model.

This client can be used to send user instructions to the chat model and receive responses. It supports both single and batch invocations, and can handle tool calls if tools are provided.

Arguments:

  • chat_model_config ChatModelClientConfig, optional - Configuration for the chat model client.
  • system_instructions str - System instructions to be used by the chat model.
  • tools Sequence[Dict[str, Any] | type | Callable | BaseTool | None], optional - LangChain-defined tools to be used by the chat model.

Examples:

```python
config = ChatModelClientConfig.from_env(
    client_name="SampleClient",
    logging_level="debug",
)
client = ChatModelClient(
    chat_model_config=config,
    system_instructions="You always reply in pirate language.",
)
response = client.invoke(HumanMessage("What is the weather like today?"))
```

__init__

```python
def __init__(chat_model_config: ChatModelClientConfig | None = None,
             system_instructions: str = "You are a helpful assistant.",
             tools: Sequence[Dict[str, Any] | type | Callable | BaseTool
                             | None] = None)
```

Initialize the ChatModelClient with the given configuration, system instructions, and tools.

Arguments:

  • chat_model_config ChatModelClientConfig, optional - Configuration for the chat model client. If None, it will be loaded from environment variables.
  • system_instructions str - System instructions to be used by the chat model.
  • tools Sequence[Dict[str, Any] | type | Callable | BaseTool | None], optional - LangChain-defined tools to be used by the chat model.

Raises:

  • EnvironmentError - If the chat model configuration is not provided and cannot be loaded from environment variables.

get_chat_model

```python
def get_chat_model() -> BaseChatModel
```

Get the chat model instance.

Returns:

  • BaseChatModel - The chat model instance configured with the provided model and tools.

invoke

```python
def invoke(input: HumanMessage | List[ToolMessage],
           history: Optional[List[BaseMessage]] = None) -> AIMessage
```

Invoke the chat model with user instructions or tool call results.

If the history is provided, it will be prepended to the input message. This method modifies the history in-place to include the input and output messages.

Arguments:

  • input HumanMessage | List[ToolMessage] - The user input or tool call results to process.
  • history Optional[List[BaseMessage]] - Optional history of messages.

Returns:

  • AIMessage - The response from the chat model.

Raises:

  • ValueError - If the input type is invalid or if the response from the chat model is not an AIMessage.
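
Continuing the `client` from the example above, a minimal multi-turn sketch (the questions are placeholders; the commented tool branch assumes tools were registered at construction):

```python
from langchain_core.messages import BaseMessage, HumanMessage

history: list[BaseMessage] = []

# invoke() appends both the input and the response to `history` in place,
# so the second call sees the full conversation so far.
first = client.invoke(HumanMessage("What is the capital of Norway?"), history=history)
followup = client.invoke(HumanMessage("And how large is it?"), history=history)

# If the model requested tools, execute them and feed the results back:
#     tool_results: list[ToolMessage] = ...  # run each requested tool
#     final = client.invoke(tool_results, history=history)
```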

batch

```python
def batch(inputs: List[HumanMessage],
          history: Optional[List[BaseMessage]] = None) -> List[AIMessage]
```

Process multiple human messages in a single batch.

If the history is provided, it will be prepended to each input message. This method does NOT modify the history in-place.

Arguments:

  • inputs List[HumanMessage] - List of user inputs to process.
  • history Optional[List[BaseMessage]] - Optional history of messages.

Returns:

  • List[AIMessage] - List of responses from the chat model for each input.

Raises:

  • ValueError - If the input type is invalid or if the response from the chat model is not an AIMessage.
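
A minimal sketch, reusing the same `client` (the prompts are placeholders):

```python
from langchain_core.messages import HumanMessage

responses = client.batch([
    HumanMessage("Summarize the weather in Oslo."),
    HumanMessage("Summarize the weather in Bergen."),
])
for response in responses:
    print(response.content)
```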

br_rapp_sdk.agents.agent

AgentTaskStatus

AgentTaskStatus is a type alias for the status of an agent task.

The possible values are:

  • working: The agent is currently processing the task.
  • input_required: The agent requires additional input from the user to proceed.
  • completed: The agent has successfully completed the task.
  • error: An error occurred during the task execution.

AgentTaskResult Objects

```python
class AgentTaskResult(BaseModel)
```

Result of an agent invocation.

Attributes:

  • task_status AgentTaskStatus - The status of the agent task.

  • content str - The content of the agent's response or message.

    Attributes meaning:

    | task_status    | content                                                              |
    | -------------- | -------------------------------------------------------------------- |
    | working        | Ongoing task description or progress update.                         |
    | input_required | Description of the required user input or context.                   |
    | completed      | Final response or result of the agent's processing.                  |
    | error          | Error message indicating what went wrong during the task execution.  |
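
For instance, a node reporting progress might build a result like this (a minimal sketch):

```python
result = AgentTaskResult(
    task_status="working",
    content="Fetching network data...",
)
```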

AgentGraph Objects

```python
class AgentGraph(ABC)
```

Abstract base class for agent graphs.

Extend this class to implement the specific behavior of an agent.

Example:

```python
from br_rapp_sdk.agents import AgentGraph, AgentTaskResult
from langchain_core.runnables import RunnableConfig
from langgraph.graph import StateGraph
from pydantic import BaseModel
from typing import AsyncIterable
from typing_extensions import override

class MyGraphState(BaseModel):
    property1: str
    property2: int

    def to_task_result(self) -> AgentTaskResult:
        return AgentTaskResult(
            task_status="completed",
            content=f"Processed {self.property1} with value {self.property2}"
        )

class MyAgentGraph(AgentGraph):
    def __init__(self):
        # Define the agent graph using langgraph
        graph_builder = StateGraph(MyGraphState)
        # Add nodes and edges to the graph as needed ...

        self.graph = graph_builder.compile()

    @override
    async def astream(
        self,
        query: str,
        config: RunnableConfig
    ) -> AsyncIterable[AgentTaskResult]:
        state = ...  # Create or retrieve the initial state for the agent graph
        graph_stream = self.graph.astream(
            state,
            config,
            stream_mode="values"
        )
        async for item in graph_stream:
            state_item: MyGraphState = MyGraphState.model_validate(item)
            yield state_item.to_task_result()
        return
```

astream

```python
@abstractmethod
async def astream(query: str,
                  config: RunnableConfig) -> AsyncIterable[AgentTaskResult]
```
Stream results from the agent graph based on the query and configuration.

Arguments:

  • query str - The query to process.
  • config RunnableConfig - Configuration for the runnable.

Returns:

  • AsyncIterable[AgentTaskResult] - An asynchronous iterable of agent task results.
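
On the caller side, the stream is consumed with `async for`; a minimal sketch using the `MyAgentGraph` from the example above (the query string and empty config are placeholders):

```python
import asyncio
from langchain_core.runnables import RunnableConfig

async def main() -> None:
    graph = MyAgentGraph()
    config = RunnableConfig()  # populate with whatever your graph requires
    async for result in graph.astream("Process my data", config):
        print(result.task_status, result.content)

asyncio.run(main())
```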

MinimalAgentExecutor Objects

```python
class MinimalAgentExecutor(AgentExecutor)
```

Minimal Agent Executor.

Minimal implementation of the AgentExecutor interface used by the AgentApplication class to execute agent tasks.

AgentApplication Objects

```python
class AgentApplication()
```

Agent Application based on Starlette.

Attributes:

  • agent_card AgentCard - The agent card containing metadata about the agent.
  • agent_graph AgentGraph - The agent graph that defines the agent's behavior and capabilities.

Example:

```python
import httpx
import json
import logging
import uvicorn
from a2a.types import AgentCard
from br_rapp_sdk.agents import AgentApplication

logger = logging.getLogger(__name__)

with open('./agent.json', 'r') as file:
    agent_data = json.load(file)
agent_card = AgentCard.model_validate(agent_data)
logger.info(f'Agent Card loaded: {agent_card}')

url = httpx.URL(agent_card.url)
graph = MyAgentGraph()  # the AgentGraph subclass defined earlier
agent = AgentApplication(
    agent_card=agent_card,
    agent_graph=graph,
)

uvicorn.run(agent.build(), host=url.host, port=url.port)
```

__init__

```python
def __init__(agent_card: AgentCard, agent_graph: AgentGraph)
```

Initialize the AgentApplication with an agent card and agent graph.

Arguments:

  • agent_card AgentCard - The agent card.
  • agent_graph AgentGraph - The agent graph implementing the agent's logic.

agent_graph

```python
@property
def agent_graph() -> AgentGraph
```

Get the agent graph.

build

```python
def build() -> Starlette
```

Build the A2A Starlette application.

Returns:

  • Starlette - The built Starlette application.

br_rapp_sdk.agents.tools

ToolCall Objects

```python
class ToolCall(BaseModel)
```

Tool call model.

This model represents a tool call with its name, arguments, and an optional unique identifier. The class provides useful methods to convert from a Langchain ToolCall and to dump the model to a dictionary or JSON string:

  • from_langchain_tool_call: Converts a Langchain ToolCall to this model.
  • model_dump: Dumps the model to a dictionary.
  • model_dump_json: Dumps the model to a JSON string.

from_langchain_tool_call

```python
@classmethod
def from_langchain_tool_call(cls, tool_call: LangchainToolCall) -> Self
```

Convert a Langchain ToolCall to the custom ToolCall model.

Arguments:

  • tool_call LangchainToolCall - The Langchain ToolCall instance to convert.

Returns:

  • ToolCall - An instance of the custom ToolCall model.

model_dump

```python
def model_dump() -> Dict[str, Any]
```

Dump the model to a dictionary.

Returns:

  • Dict[str, Any] - A dictionary representation of the model.

model_dump_json

```python
def model_dump_json() -> str
```

Dump the model to a JSON string.

Returns:

  • str - A JSON string representation of the model.
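
A minimal end-to-end sketch (the LangChain tool call is constructed by hand here; in practice it would come from an AIMessage's tool_calls, and the import path for the SDK model is assumed from the module name above):

```python
from langchain_core.messages.tool import ToolCall as LangchainToolCall
from br_rapp_sdk.agents.tools import ToolCall

lc_call = LangchainToolCall(name="get_weather", args={"city": "Oslo"}, id="call-1")

call = ToolCall.from_langchain_tool_call(lc_call)
print(call.model_dump())       # dictionary form
print(call.model_dump_json())  # JSON string form
```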