Lab 3: Integrating a Task-Specific Agent in MX-AI
In this lab, you will integrate a task-specific agent into the MX-AI ecosystem. Once deployed, the agent will collaborate with the supervisor agent and handle user requests through the UI.
Containerizing the Agent
To deploy your RNG Agent in MX-AI, you must containerize it so it can run inside Kubernetes and be managed by the platform.
Below is a reference Dockerfile suitable for packaging a BR-ADK-based agent.
Dockerfile
# Base image with Python and uv
FROM ghcr.io/astral-sh/uv:python3.12-bookworm AS base
# Stage for building the application
FROM base AS builder
# Copy the latest uv binaries into the build stage
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
# Set environment variables for uv
ENV UV_COMPILE_BYTECODE=1 UV_LINK_MODE=copy
WORKDIR /app
# Create a directory for the application
RUN mkdir -p agent
# Copy the application code
COPY . agent/
WORKDIR /app/agent
# Create the lockfile
RUN --mount=type=cache,target=/root/.cache/uv \
    uv lock
# Install the application dependencies using uv
RUN --mount=type=cache,target=/root/.cache/uv \
    uv sync --frozen --no-install-project --no-dev
# Stage for the final image
FROM base
# Copy the built application from the builder stage
COPY --from=builder /app /app
# Put the project's virtual environment on PATH in the final image
ENV PATH="/app/agent/.venv/bin:$PATH"
# Set the working directory to the application directory
WORKDIR /app/agent
# Run the application using uv
CMD ["uv", "run", "."]
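The `uv lock`, `uv sync`, and `uv run .` steps above assume the copied directory is a uv-managed project, i.e. it carries a `pyproject.toml` and an entry point (such as a `__main__.py`). A minimal sketch follows; the project name, version, and dependency list are illustrative assumptions, not the real BR-ADK package coordinates:

```toml
# Hypothetical pyproject.toml for the RNG Agent; replace the dependency
# specifiers with the actual BR-ADK package coordinates from your project.
[project]
name = "rng-agent"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    # e.g. the BR-ADK package and anything else your agent imports
]
```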
Build and Push the Image
You can simplify image management with the following Makefile.
Makefile
# Set your Docker registry here
DOCKER_REGISTRY ?= hub.example.com
VERSION ?= $(shell git describe --tags --always --dirty)
PORT ?= 9900
# Set the repository and image name here
REPO ?= example/rng-agent
TAG ?= latest
IMAGE := $(DOCKER_REGISTRY)/$(REPO):$(TAG)
.PHONY: build run push clean
build:
	@echo "Building Agent's Docker image with tag: $(IMAGE) - Version: $(VERSION)"
	docker build $(if $(NO_CACHE),--no-cache) \
		--build-arg VERSION=$(VERSION) \
		--tag $(IMAGE) .
	@echo "Agent's Docker image built successfully."

run:
	docker run --rm \
		-it \
		--env-file .env \
		-p $(PORT):$(PORT) \
		$(IMAGE)

push:
	@echo "Pushing Agent's Docker image with tag: $(IMAGE)"
	docker push $(IMAGE)
	@echo "Agent's Docker image pushed successfully."

clean:
	docker rmi $(IMAGE) || true
Build the image:
make build
Run it locally:
make run
Push it to your registry:
make push
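The image reference the Makefile builds and pushes is composed from `DOCKER_REGISTRY`, `REPO`, and `TAG`, each of which can be overridden on the command line (e.g. `make push TAG=v0.1.0`). The same composition, sketched in plain shell with the Makefile's default values:

```shell
# Compose the image reference exactly as the Makefile does,
# using the default values shown above.
DOCKER_REGISTRY=hub.example.com
REPO=example/rng-agent
TAG=latest
IMAGE="$DOCKER_REGISTRY/$REPO:$TAG"
echo "$IMAGE"   # prints hub.example.com/example/rng-agent:latest
```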
Integration with Other Agents
After pushing the image, you can deploy the agent using an AIFabric, as introduced in Lab 1.
The example below deploys:
- A UI component
- A Supervisor Agent
- The custom RNG Agent
myfabric.yaml
apiVersion: odin.trirematics.io/v1
kind: AIFabric
metadata:
  name: myfabric
  namespace: trirematics
spec:
  ui: hub.bubbleran.com/orama/ui/iris
  llms:
    - name: openai-model
      provider: openai
      model: gpt-4.1-mini
      apiKey: <YOUR-API-KEY>
  topology: supervised
  agents:
    - name: supervisor-agent
      role: supervisor
      image: hub.bubbleran.com/orama/agents/supervisor
      llm: openai-model
      icp: a2a
    - name: rng-agent
      role: worker
      image: hub.example.com/example/rng-agent:latest # the image pushed in the previous step
      llm: openai-model
      icp: a2a
  imagePullSecrets:
    - name: bubbleran-hub
Deploy the AIFabric:
brc install aifabric myfabric.yaml
Note: with future updates, you will be able to push your agents directly to the BubbleRAN registry to share them with the community.
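The supervised topology above pairs one supervisor with any number of workers. A hypothetical sanity check of that shape, modeling the manifest as plain dicts (field names mirror myfabric.yaml; the "exactly one supervisor" rule is an assumption about the supervised topology, not a documented API):

```python
# Model the relevant fields of myfabric.yaml as plain dicts and check
# the assumed shape of a supervised topology.
fabric = {
    "apiVersion": "odin.trirematics.io/v1",
    "kind": "AIFabric",
    "spec": {
        "topology": "supervised",
        "agents": [
            {"name": "supervisor-agent", "role": "supervisor"},
            {"name": "rng-agent", "role": "worker"},
        ],
    },
}

supervisors = [a for a in fabric["spec"]["agents"] if a["role"] == "supervisor"]
workers = [a for a in fabric["spec"]["agents"] if a["role"] == "worker"]
assert len(supervisors) == 1, "a supervised topology expects one supervisor"
assert workers, "declare at least one worker agent"
```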
What You Learned
- How to containerize a task-specific agent
- How to publish it to a container registry
- How to integrate it into an AIFabric
- How custom agents collaborate with the supervisor agent in MX-AI