Lab 1: Deploying an AIFabric
This lab guides you through deploying an AIFabric resource in the MX-AI ecosystem. An AIFabric enables multiple AI agents to collaborate with each other and use tools from MCP servers.
By the end of this lab, you will be able to:
- Deploy an AIFabric
- Define agents and their roles
- Connect agents to MCP servers and LLMs
- Interact with the system through a UI component
AIFabric Custom Resource
The following YAML defines an AIFabric where agents use OpenAI APIs to generate responses and interact with MCP servers.
myfabric.yaml
```yaml
apiVersion: odin.trirematics.io/v1
kind: AIFabric
metadata:
  name: myfabric
  namespace: trirematics
spec:
  ui: hub.bubbleran.com/orama/ui/iris
  mcp:
    - name: observability-db
      image: hub.bubbleran.com/orama/mcp/observability-db
      smoAccess: read-only
  llms:
    - name: openai-model
      provider: openai
      model: gpt-4.1-mini
      apiKey: <YOUR-API-KEY>
  topology: supervised
  agents:
    - name: supervisor-agent
      role: supervisor
      image: hub.bubbleran.com/orama/agents/supervisor
      llm: openai-model
      icp: a2a
    - name: smo-agent
      role: worker
      image: hub.bubbleran.com/orama/agents/smo-agent
      llm: openai-model
      smoAccess: read-write
      mcpServers:
        - observability-db
      icp: a2a
  imagePullSecrets:
    - name: bubbleran-hub
```
Notes
- You may rename the AIFabric, agents, LLMs, and MCP servers by changing the corresponding `name` field.
- Do not modify the `image` fields; they are required by the Odin Operator.
- Replace `<YOUR-API-KEY>` with a valid OpenAI API key. Alternatively, to use a local model served by Ollama, replace the `openai-model` entry with the following configuration:

```yaml
llms:
  - name: local-model
    provider: ollama
    model: <YOUR-MODEL-NAME>
    baseUrl: <YOUR-OLLAMA-URL>
```
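If you go the Ollama route, the model must already be present on the Ollama server. Pulling it is a standard Ollama CLI command; the model name below is the same placeholder as in the configuration:

```bash
# Run on the machine hosting Ollama; substitute a real model name:
ollama pull <YOUR-MODEL-NAME>
```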
Deploy the AIFabric
Create the resource with:
```bash
brc install aifabric myfabric.yaml
```
The Odin Operator will deploy all components defined in the AIFabric specification.
Wait until all pods in the `trirematics` namespace reach the `Running` state. This typically takes from a few seconds up to one minute, depending on the number of components and dependencies.
During startup, some pods may be restarted while waiting for required dependencies to become available. This behavior is expected.
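To monitor progress, plain `kubectl` is sufficient; nothing below is specific to MX-AI:

```bash
# Watch pod status in the trirematics namespace (Ctrl+C to stop):
kubectl get pods -n trirematics --watch

# Or block until every pod reports Ready (the timeout value is illustrative):
kubectl wait --for=condition=Ready pod --all -n trirematics --timeout=120s
```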
Access the UI
The UI is always deployed under the name `<name-of-aifabric>-ui`; in this case, `myfabric-ui`. To use it, follow these three steps:
- SSH into the BubbleRAN cluster with local port forwarding. This step is not needed if you are working directly on the cluster.
```bash
ssh -L 9900:localhost:9900 <your-user>@<your-cluster-ip>
```
- Forward the UI port:
```bash
kubectl port-forward -n trirematics svc/myfabric-ui 9900:9900
```
- Open your browser at http://localhost:9900/
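If the browser cannot reach the UI, a quick sanity check (again plain `kubectl`) is to confirm that the Service exists and exposes the expected port:

```bash
# The Service follows the <name-of-aifabric>-ui naming convention described above:
kubectl get svc -n trirematics myfabric-ui
```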
Interacting with the Agents
The UI lets you choose which agent to interact with.
In this lab, the AIFabric contains two agents: a Supervisor and an SMO Agent. The Supervisor is designed for more complex scenarios involving multiple worker agents; here, it is included for demonstration purposes.
When you submit a request to the Supervisor, it delegates the task to the SMO Agent, which generates the response step by step. The UI displays the intermediate steps, such as tool usage and text generation, so you can follow the agent's reasoning process.
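To observe this delegation outside the UI, you can tail the agents' pod logs. The placeholder below is hypothetical: the Odin Operator derives pod names from the agent names, so list the pods first and substitute the real one:

```bash
# Find the pods backing supervisor-agent and smo-agent:
kubectl get pods -n trirematics

# Follow a pod's log stream (replace the placeholder with an actual pod name):
kubectl logs -n trirematics <supervisor-agent-pod> --follow
```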
Example Questions
After deploying a Network and one or more Terminals, try:
- "What is the TDD configuration of the network?"
- "How many UEs are available?"
- "What is the IMSI of the UE named
ue1?"
Since the worker agent relies on a RAG-based approach, try to include keywords present in the Network or Terminal configuration YAML files.
You can also start without any deployment and use the agents to deploy something:
- "Deploy a new network called
testnetwith one access network namedan. - "Deploy a UE called
userand attach it to the access networkan.testnet.
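Once the agents report success, you can check at the Kubernetes level that the resources were actually created. The resource kinds below are assumptions based on the Network and Terminal resources this lab mentions; confirm the exact names in your installation first:

```bash
# List the CRDs registered by the operator to find the real resource kinds:
kubectl api-resources | grep trirematics

# Then, assuming kinds named network/terminal exist, list them:
kubectl get networks -n trirematics
kubectl get terminals -n trirematics
```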
What You Learned
- Deploy an AIFabric resource in the MX-AI ecosystem
- Configure agents, their roles, and supervision topology
- Attach MCP servers to agents and control their access levels
- Configure and use an LLM within an AIFabric
- Use the UI component to interact with agents and observe their execution flow