🏷️ €9900 | Yearly Subscription
includes MX-AI License, 1-year Updates, Support, and Marketplace Access.
📄 Download Data Sheet | 📰 MX-AI Walkthrough
MX-AI is the BubbleRAN closed-loop automation and intelligence platform tailored to 5G/6G end-to-end (E2E) networks. It offers an open ecosystem and development kits to accelerate the development, deployment, and sharing of network applications (xApps, rApps, data processing, and AI agents) across multi-vendor, multi-model, and multi-cloud (multi-x) networks.
💡 Getting started is simple: choose or customize a network blueprint from our portfolio, and deploy it with a single command:
```bash
$ brc install aifabric smo-agent.yaml
```
Why MX-AI?
- Tame 5G/6G complexity — Autonomously orchestrate multi-vendor RAN/Core with closed loops.
- Proactive operations — Agents detect anomalies, predict faults, and cut MTTR from hours to minutes.
- Sovereign by design — Run SLMs (5–20 GB) fully on-prem/edge; burst to large GPUs only when needed.
- Digital-twin safeguard — Test policies in MX-DT and push only if better (“apply-if-better” gates); a minimal sketch follows this list.
- One hub for reuse — Share xApps/rApps, agents, and tuned models via MX-HUB — no lock-in.
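To make the “apply-if-better” gate concrete, here is a minimal illustrative sketch in Python: a candidate policy is scored in the MX-DT twin and promoted to the live network only if its KPIs beat the current baseline. The KPI fields, the margin, and the `evaluate_in_twin`/`deploy_live` hooks are assumptions for illustration, not the MX-AI API.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Illustrative "apply-if-better" gate. KPI names, the margin, and the
# evaluate/deploy callables are assumptions, not the MX-AI API.

@dataclass
class KpiReport:
    throughput_mbps: float   # higher is better
    p95_latency_ms: float    # lower is better
    energy_watts: float      # lower is better

def is_better(candidate: KpiReport, baseline: KpiReport, margin: float = 0.02) -> bool:
    """Candidate must improve throughput and not regress latency/energy beyond the margin."""
    return (
        candidate.throughput_mbps >= baseline.throughput_mbps * (1 + margin)
        and candidate.p95_latency_ms <= baseline.p95_latency_ms * (1 + margin)
        and candidate.energy_watts <= baseline.energy_watts * (1 + margin)
    )

def apply_if_better(
    policy: Dict,
    evaluate_in_twin: Callable[[Dict], KpiReport],  # runs the policy in MX-DT (assumed hook)
    baseline: KpiReport,                            # KPIs of the currently deployed policy
    deploy_live: Callable[[Dict], None],            # pushes the policy to the live network (assumed hook)
) -> bool:
    candidate = evaluate_in_twin(policy)
    if is_better(candidate, baseline):
        deploy_live(policy)
        return True
    return False  # gate stays closed: keep the current policy
```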
What’s Included?
- MX-AI Core: Multi-agent runtime, Orchestrator, A2A protocol, tool connectors
- Base Agents: O-RAN SMO, O-RAN RIC, Observability and Actions
- Dev Kits (SDK/CDK/ADK): templates, ready-made samples, one-command container builds
- MX-HUB Access: Pull/push agents, datasets, models with versioning
- Web GUI, CLI & Chat: turn intents into actions through the UI
- Integrated Data & APIs: SMO observability connectors
🚀 Add-on Specialized Agents [BETA]
+€9900 / All Agents / Year OR +€2900 / Agent / Year
🟢 NOW 🟣 BETA 🔵 PLANNED
Agent Category | Status & Agent | Description |
---|---|---|
Infra / Lifecycle | 🟢 Network Blueprint | Create/update blueprints (included) |
Infra / Lifecycle | 🟢 SMO | Orchestrate configurations (included) |
Infra / Lifecycle | 🟣 RIC | Enforce policies (included) |
Infra / Lifecycle | 🔵 xApp/rApp | Deploy/monitor apps |
Infra / Lifecycle | 🔵 Kubernetes | Cluster health |
Observability | 🟢 Watcher | Real-time metrics and RAG (included) |
Observability | 🔵 Digital-Twin | What-if replicas |
Observability | 🔵 Data Collection | Dynamic KPIs |
Orchestration | 🟢 Orchestrator | Intent → agents (included) |
Compliance | 🔵 Spec/Regulatory | 3GPP/O-RAN Q&A |
Compliance | 🔵 Judicial | Detect misbehaviour |
Negotiation | 🟣 SLA Agent | Symbiotic mediator |
Predictive & Ops | 🔵 PM Agent | Predict faults |
Predictive & Ops | 🔵 AD Agent | Detect anomalies |
Predictive & Ops | 🔵 Resolution Agent | Resolve anomalies |
ℹ️ Note: Watcher keeps MX-AI’s vector DB fresh; Orchestrator coordinates via A2A; SMO deploys and updates network blueprints and configurations.
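As a mental model for the note above, the following toy sketch shows a Watcher-style loop that turns fresh KPI snapshots into documents and keeps a small vector index up to date for retrieval. The hashing “embedding” and in-memory index are deliberate placeholders; MX-AI’s actual vector DB, embedding model, and A2A coordination are not depicted.

```python
import math
import time
from typing import Dict, List, Tuple

# Toy stand-ins for the "Watcher keeps the vector DB fresh" flow above.
# The hashing embedding and in-memory index are placeholders for a real
# embedding model and vector database; only the loop itself is the point.

DIM = 256

def embed(text: str) -> List[float]:
    """Hashing bag-of-words vector (placeholder for a real embedding model)."""
    vec = [0.0] * DIM
    for token in text.lower().split():
        vec[hash(token) % DIM] += 1.0  # hash() is stable within one process run
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class TinyVectorIndex:
    """Minimal in-memory index with cosine-style similarity search."""

    def __init__(self) -> None:
        self.items: List[Tuple[List[float], str]] = []

    def upsert(self, text: str) -> None:
        self.items.append((embed(text), text))

    def search(self, query: str, k: int = 3) -> List[str]:
        q = embed(query)
        scored = sorted(self.items, key=lambda it: -sum(a * b for a, b in zip(q, it[0])))
        return [text for _, text in scored[:k]]

def watcher_tick(index: TinyVectorIndex, metrics: Dict[str, float]) -> None:
    """One Watcher cycle: turn fresh KPIs into a document and index it."""
    doc = time.strftime("%H:%M:%S") + " " + " ".join(f"{k} {v:.2f}" for k, v in metrics.items())
    index.upsert(doc)

index = TinyVectorIndex()
watcher_tick(index, {"prb_util": 0.81, "drop_rate": 0.02})
watcher_tick(index, {"prb_util": 0.35, "drop_rate": 0.01})
print(index.search("prb_util drop_rate", k=1))
```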
Multi-Model Support
- Local: SLMs (5–20 GB) on 16 GB GPUs or CPU-only; burst to A100/H100 when needed (see the sketch below).
ℹ️ Logos are trademarks of their respective owners and are used here for identification only.
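The local-vs-burst split above boils down to a routing decision at request time. A minimal sketch, assuming a known amount of free GPU memory and two hypothetical model endpoints (the names, thresholds, and URLs are placeholders, not MX-AI configuration):

```python
from dataclasses import dataclass

# Illustrative routing between a local SLM and a remote "burst" model.
# Model names, size thresholds, and endpoints are placeholder assumptions.

@dataclass
class ModelTarget:
    name: str
    endpoint: str

LOCAL_SLM = ModelTarget("local-slm-8b", "http://edge-node:8000/v1")       # fits in ~16 GB VRAM
BURST_LLM = ModelTarget("hosted-llm-70b", "https://gpu-pool.example/v1")  # A100/H100 class

def pick_model(free_vram_gb: float, prompt_tokens: int, needs_long_context: bool) -> ModelTarget:
    """Prefer the sovereign on-prem SLM; burst only when the task clearly exceeds it."""
    if needs_long_context or prompt_tokens > 8_000:
        return BURST_LLM
    if free_vram_gb >= 16:
        return LOCAL_SLM
    # CPU-only fallback still uses the SLM, just more slowly.
    return LOCAL_SLM

print(pick_model(free_vram_gb=24, prompt_tokens=1_200, needs_long_context=False).name)
```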
MX-AI Core Software Stack
MX-AI Benefits at a Glance
# | Benefit | Description |
---|---|---|
1 | O-RAN-friendly & Open APIs | R1/A1, SMO/RIC, REST/gRPC; OSS/BSS, CRM, billing connectors. |
2 | Sovereign SLM-first | Run 5–20 GB SLMs on-prem/edge; data stays local. |
3 | Predictive optimisation | Anticipate surges/failures; reduce congestion & MTTR (sketched below).
4 | NOC/Field-Ops automation | Guided workflows, smart ticketing, fewer escalations. |
5 | Twin-driven rollouts | Test fixes in MX-DT; “apply-if-better” to live network. |
6 | Analytics & billing | Reconcile invoices, detect anomalies, predict churn. |
7 | Privacy & compliance | XAI dashboards, EU-AI-Act-ready patterns, federated learning. |
8 | Reuse via MX-HUB | Share agents, datasets, tuned models—no lock-in. |
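For benefit 3 (and the planned PM/AD agents listed earlier), the underlying mechanic is spotting KPI values that drift away from recent behaviour before they turn into outages. The sketch below uses a plain rolling z-score; the window size, warm-up length, and threshold are illustrative choices only, not MX-AI defaults.

```python
from collections import deque
from statistics import mean, pstdev

# Rolling z-score anomaly flag over a KPI stream; window, warm-up, and
# threshold values are illustrative choices, not MX-AI defaults.

class KpiAnomalyDetector:
    def __init__(self, window: int = 30, threshold: float = 3.0) -> None:
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if the new sample looks anomalous versus the recent window."""
        is_anomaly = False
        if len(self.history) >= 10:  # wait for a small warm-up window
            mu, sigma = mean(self.history), pstdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                is_anomaly = True
        self.history.append(value)
        return is_anomaly

detector = KpiAnomalyDetector()
for sample in [0.41, 0.43, 0.40, 0.42, 0.44, 0.41, 0.43, 0.42, 0.40, 0.43, 0.95]:
    if detector.observe(sample):
        print(f"anomalous PRB utilisation: {sample}")
```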
Practical Use-Cases
# | Use-Case | Description |
---|---|---|
1 | Build–Benchmark–Publish | Ship agents with SDK wizards; replay traffic; publish to MX-HUB. |
2 | Multi-Agent Experiments | Run variants in parallel; compare latency/throughput/energy. |
3 | Plug-and-Play Integration | O-RAN R1/SMO north-bound; unify OSS/BSS/CRM/billing data. |
4 | Twin-Driven Closed Loops | Validate policies in MX-DT; promote only improvements. |
5 | NOC & Field-Ops | Knowledge + telemetry + decision support; fewer truck rolls. |
6 | Planning & Optimisation | Simulate demand; optimise spectrum/capacity; site planning. |
Ready to test?
```bash
$ brc install aifabric tutorial-agent.yaml
```
- MX-AI: Tutorial Agent
- BubbleRAN Command Line (`brc`) Tutorial
- Prefer a walkthrough? Book a live demo
Frequently Asked Questions
1️⃣ Can MX-AI run fully on-prem?
Yes — SLM-first deployments run on modest edge GPUs or CPU-only. You can burst to larger GPUs when needed.
2️⃣ Do you integrate with our SMO/RIC?
MX-AI is O-RAN-friendly (R1/A1) and offers REST/gRPC adapters for common SMO/RIC NBI.
3️⃣ How do we publish and reuse agents?
Via MX-HUB — pull/push agents, datasets, and tuned models with versioning and fork/merge workflows.
4️⃣ What hardware do we need?
MX-AI runs great on a single 16–24 GB GPU (or CPU-only with SLMs).
For training or large-scale inference, use A100/H100 class GPUs.
5️⃣ How is data privacy handled?
Data stays local in sovereign mode.
We support audit logging, role-based access, and optional federated learning.
No telemetry leaves your site unless you opt-in.
6️⃣ Ask your questions
Need more information?
We recognize that each deployment has unique needs. We can plan a live demo, help with a requirements questionnaire, and connect you with our partner ecosystem (universities, system integrators, cloud providers).