bat.chat_model_client.config
ModelProvider
ModelProvider is a type alias for the supported model providers.
The currently supported providers are:
- openai
- nvidia
- ollama
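As a sketch, the alias is presumably a Literal over those provider ids (the actual definition in the source may differ):

```python
from typing import Literal

# Hypothetical reconstruction of the alias; the real definition may differ.
ModelProvider = Literal["openai", "nvidia", "ollama"]
```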
ChatModelClientConfig Objects
class ChatModelClientConfig(BaseModel)
Configuration for the chat model client.
This class is used to configure the chat model client with the necessary parameters. Some model providers may require specific environment variables to be set, like OPENAI_API_KEY for OpenAI.
Attributes
model (str): The name of the model to use.
model_provider (ModelProvider): The provider of the model (e.g., openai, nvidia).
base_url (str, optional): The base URL for the model provider, required for non-OpenAI providers.
client_name (str, optional): Name for the client.
The class can be instantiated directly or created from environment variables using the from_env class method (usually preferred).
Examples
Direct instantiation:
```python
config = ChatModelClientConfig(
    model="gpt-4o-mini",
    model_provider="openai",
    base_url="https://api.openai.com/v1",
    client_name="SampleClient",
)
```
From environment variables:
```python
config = ChatModelClientConfig.from_env(
    client_name="SampleClient",
)
```
__init__
```python
def __init__(model: str,
             model_provider: ModelProvider,
             base_url: Optional[str] = None,
             client_name: Optional[str] = None)
```
Initialize the ChatModelClientConfig with the provided parameters.
Arguments:
- model (str): The name of the model to use.
- model_provider (ModelProvider): The provider of the model (e.g., openai, nvidia).
- base_url (Optional[str]): The base URL for the model provider, required for non-OpenAI providers.
- client_name (Optional[str]): Name for the client.
from_env
```python
@classmethod
def from_env(cls,
             client_name: Optional[str] = None) -> "ChatModelClientConfig"
```
Create a ChatModelClientConfig instance from environment variables.
This method reads the following environment variables:
- MODEL: The model name, which can be in the format <provider>:<model>.
- MODEL_PROVIDER (optional): The provider of the model (e.g., openai, nvidia, ollama).
Arguments:
- client_name (Optional[str]): Name for the client.
Returns:
An instance of ChatModelClientConfig configured with values from environment variables.
Raises:
EnvironmentError: If the required environment variables are not set or if the format is incorrect.
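For illustration, a minimal sketch of driving from_env through the environment (the model id is a placeholder):

```python
import os

# Placeholder value for illustration only.
os.environ["MODEL"] = "openai:gpt-4o-mini"

# The <provider>:<model> format presumably lets MODEL_PROVIDER be omitted.
config = ChatModelClientConfig.from_env(client_name="SampleClient")
```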
bat.chat_model_client.client
UsageMetadata Objects
class UsageMetadata(BaseModel)
Metadata about the usage of the chat model.
Note: defining a ChatModelClient as a property of an object deriving from the AgentGraph class allows usage metadata from the chat model to be automatically collected, aggregated, and returned as part of the streaming response metadata.
Attributes
- input_tokens (int): Number of input tokens used in the request.
- output_tokens (int): Number of output tokens generated in the response.
- total_tokens (int): Total number of tokens used (input + output).
- inference_time (float): Time taken for the inference in seconds.
__add__
def __add__(other: Self | Dict[str, int]) -> Self
Add two UsageMetadata instances.
__sub__
def __sub__(other: Self | Dict) -> Self
Subtract two UsageMetadata instances.
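A small sketch of the arithmetic (field values are illustrative; field-wise addition and subtraction are assumed from the operator docstrings, and keyword construction follows from UsageMetadata being a Pydantic model):

```python
a = UsageMetadata(input_tokens=10, output_tokens=5, total_tokens=15, inference_time=0.4)
b = UsageMetadata(input_tokens=20, output_tokens=8, total_tokens=28, inference_time=0.6)

combined = a + b      # expected: 30 input, 13 output, 43 total, 1.0 s
delta = combined - a  # expected to recover b's values
```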
ChatModelClient Objects
class ChatModelClient()
Client that facilitates interaction with a chat model.
This client can be used to send user instructions to the chat model and receive responses. It supports both single and batch invocations, and can handle tool calls if tools are provided.
If stored as a property of an object deriving from the AgentGraph class, UsageMetadata will be automatically collected and returned as metadata of the streaming response.
Arguments:
- chat_model_config (ChatModelClientConfig, optional): Configuration for the chat model client.
- system_instructions (str): System instructions to be used by the chat model.
- tools (Sequence[Dict[str, Any] | type | Callable | BaseTool | None], optional): LangChain-defined tools to be used by the chat model.
Examples:
```python
config = ChatModelClientConfig.from_env(
    client_name="SampleClient",
)
client = ChatModelClient(
    chat_model_config=config,
    system_instructions="You always reply in pirate language.",
)
response = client.invoke(HumanMessage("What is the weather like today?"))
```
__init__
```python
def __init__(chat_model_config: ChatModelClientConfig | None = None,
             system_instructions: str = "You are a helpful assistant.",
             tools: Sequence[Dict[str, Any] | type | Callable | BaseTool
                             | None] = None)
```
Initialize the ChatModelClient with the given configuration, system instructions, and tools.
Arguments:
- chat_model_config (ChatModelClientConfig, optional): Configuration for the chat model client. If None, it will be loaded from environment variables.
- system_instructions (str): System instructions to be used by the chat model.
- tools (Sequence[Dict[str, Any] | type | Callable | BaseTool | None], optional): LangChain-defined tools to be used by the chat model.
Raises:
EnvironmentError: If the chat model configuration is not provided and cannot be loaded from environment variables.
get_chat_model
def get_chat_model() -> BaseChatModel
Get the chat model instance.
Returns:
BaseChatModel: The chat model instance configured with the provided model and tools.
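As a usage sketch (whether the system instructions are already bound to the returned model is not specified here):

```python
model = client.get_chat_model()
# The returned BaseChatModel is a regular LangChain chat model,
# so it can be used directly, e.g. in a chain or via .invoke().
ai_msg = model.invoke([HumanMessage("Say hello.")])
```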
invoke
```python
def invoke(input: HumanMessage | List[ToolMessage],
           history: Optional[List[BaseMessage]] = None) -> AIMessage
```
Invoke the chat model with user instructions or tool call results.
If the history is provided, it will be prepended to the input message.
This method modifies the history in-place to include the input and output messages.
Arguments:
- input (HumanMessage | List[ToolMessage]): The user input or tool call results to process.
- history (Optional[List[BaseMessage]]): Optional history of messages.
Returns:
AIMessage: The response from the chat model.
Raises:
ValueError: If the input type is invalid or if the response from the chat model is not an AIMessage.
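Since invoke also accepts tool call results, a tool round trip can be sketched as follows (the add tool and the message flow are illustrative assumptions, not part of this API):

```python
from langchain_core.messages import HumanMessage, ToolMessage
from langchain_core.tools import tool

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

client = ChatModelClient(tools=[add])  # config loaded from env vars

history: list = []
response = client.invoke(HumanMessage("What is 2 + 3?"), history=history)

if response.tool_calls:
    # Run each requested tool and feed the results back as ToolMessages.
    results = [
        ToolMessage(content=str(add.invoke(call["args"])),
                    tool_call_id=call["id"])
        for call in response.tool_calls
    ]
    # history was updated in-place, so the model sees the full exchange.
    response = client.invoke(results, history=history)
```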
batch
```python
def batch(inputs: List[HumanMessage],
          history: Optional[List[BaseMessage]] = None) -> List[AIMessage]
```
Process multiple human messages in a single batch.
If the history is provided, it will be prepended to each input message.
This method does NOT modify the history in-place.
Arguments:
- inputs (List[HumanMessage]): List of user inputs to process.
- history (Optional[List[BaseMessage]]): Optional history of messages.
Returns:
List[AIMessage]: List of responses from the chat model, one for each input.
Raises:
ValueError: If the input type is invalid or if the response from the chat model is not an AIMessage.
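A brief sketch (the question strings are placeholders); note that, unlike invoke, the shared history is only read, never mutated:

```python
shared_history: list = []
questions = [
    HumanMessage("Summarize topic A."),
    HumanMessage("Summarize topic B."),
]
responses = client.batch(questions, history=shared_history)
# shared_history is unchanged; each response pairs with its input by index.
```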
get_usage_metadata
```python
def get_usage_metadata(
        from_timestamp: Optional[float] = None) -> UsageMetadata
```
Get the aggregated usage metadata from the chat model client.
Arguments:
- from_timestamp (Optional[float]): If provided, only usage metadata after this timestamp will be considered. If None, all usage metadata will be considered.
Returns:
UsageMetadata: The aggregated usage metadata.
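A closing sketch, assuming from_timestamp is a Unix epoch float such as time.time() returns:

```python
import time

start = time.time()
client.invoke(HumanMessage("Draft a short status update."))

# Aggregate only the usage incurred since `start`.
usage = client.get_usage_metadata(from_timestamp=start)
print(usage.total_tokens, usage.inference_time)
```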