🏷️ €9900 | Base Yearly Subscription
MX-AI Software License + 1Y Software Update & Technical Support & Access to MX-HUB
Current Release Requirement: MX-PDK Software License.
BubbleRAN Multi-X Automation and Intelligence Platform, MX-AI, built on top of MX-PDK, is an open ecosystem and set of development kits that accelerates the development, deployment, and sharing of network applications – xApps, rApps, data processing, and AI Agents – across multi-vendor, multi-model, and multi-cloud (multi-x) networks. Powered by cutting-edge R&D advancements, it integrates Generative AI and Large Language Models (LLMs) at its core, building advanced pipelines and enabling an agentic approach to network operations, optimization, and observability.
Designed for human network operators, MX-AI interprets natural language queries and actionable intents, seamlessly interacting with heterogeneous data sources. These include standards, configurations and specifications, real-time network states and KPIs, and enforced policies. This enables the agent to provide context-aware responses, assisting with network diagnostics, troubleshooting, and real-time decision-making.
What is included in the MX-AI product?
- MX-AI Core provides an extendable and customizable Multi-Agent environment with task-specialized agents to facilitate network operations. Agents can connect to different Large Language Models (LLMs), either in the cloud or on-premises. It includes a built-in set of pre-defined agents, each tailored for specific tasks and domains (see the software stack below).
- Reusable and extendable xApp/rApp/Agent samples for automatic network management and optimization.
- A development toolchain comprising a Software Development Kit (SDK), Container Development Kit (CDK), and Agent Development Kit (ADK), which allows users to develop and deploy custom xApps/rApps/Agents.
- Pull/Push Access to the BubbleRAN Artifact Registry, MX-HUB.
- A simple and intuitive GUI for easy interaction with the system.
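To make the development-kit idea concrete, here is a minimal sketch of what building a task-specialized agent might look like. All names (`Agent`, `KpiAgent`, `register`) are hypothetical illustrations, not the actual MX-AI ADK API.

```python
# Hypothetical sketch of a custom agent in the spirit of an Agent Development
# Kit. None of these names come from the real MX-AI ADK; they only illustrate
# the pattern of task-specialized agents behind a common interface.

class Agent:
    """Minimal base class: every agent declares a domain and handles queries."""
    domain = "generic"

    def handle(self, query: str) -> str:
        raise NotImplementedError


class KpiAgent(Agent):
    """Example task-specialized agent answering KPI questions."""
    domain = "kpi-monitoring"

    def __init__(self, kpi_source):
        self.kpi_source = kpi_source  # e.g. a callable returning live KPIs

    def handle(self, query: str) -> str:
        kpis = self.kpi_source()
        return f"Current KPIs for '{query}': {kpis}"


# Register the custom agent so the platform can dispatch queries to it.
registry = {}

def register(agent: Agent):
    registry[agent.domain] = agent

register(KpiAgent(lambda: {"throughput_mbps": 94.2, "latency_ms": 8}))
print(registry["kpi-monitoring"].handle("cell-1 downlink"))
```

In a real deployment the registered agent would be packaged with the CDK and published to MX-HUB rather than held in an in-memory dictionary.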
Extra Features
- A cutting-edge Multi-X Network Digital Twin (MX-NDT) framework designed for real-time parallel emulation, intelligent recommendation, and adaptive optimization, enabling predictive insights and accelerated decision-making across complex network environments. Check out the video here.
MX-AI Compatible Hardware
- MX-AI runs perfectly on the hardware provided with our MX-PDK solution; thanks to LLM support through APIs, a GPU is not strictly required.
- If you want to support local LLMs for inference, or run simple training workloads, a solution like the NVIDIA DGX Spark (or similar) is recommended.
- If you plan to extend the Flexible MX-AI Core by training/fine-tuning your own LLMs, we recommend having at least one A100 or H100 GPU, depending on your needs.
MX-AI realizes the vision of an AI-for-RAN solution by redefining network automation and intelligence.
MX-AI Unique Features
Multi-Artifacts Registry (MX-HUB)
With a shared registry, MX-AI hosts a directory of diverse artifacts, empowering customers, partners, and developers with tools and development kits to build, validate, and share network automation applications and datasets.
Artifacts | Type | Examples |
---|---|---|
Blueprints | Deployment Models, Composition Models | 5G SA, Multi-vendor 5G O-RAN, Digital Twin |
xApps | Monitoring, Optimization, Coordination | RAN Slicing, MLB, BWP, Spectrum Management |
rApps | A1 Policies, OAM, Deployment Automation, Reconfiguration, AI pipeline | Throughput enforcement, SLA provisioning, Network Assistant |
Dataset | Sensing, UE Stats, RAN Stats, App States | Power-PRB-Performance, RSRP-RSRQ-RSSI-SNR-Power |
AI Agent | See the Multi-agent table below | - |
The MX-HUB feature of MX-AI offers users an open ecosystem of reusable and extendable artifacts, effectively unlocking innovation faster than would otherwise occur.
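The pull/push workflow can be pictured with a small sketch. The `ArtifactRegistry` class and its methods are invented for illustration and do not reflect the actual MX-HUB interface.

```python
# Hypothetical sketch of pull/push access to an artifact registry in the
# spirit of MX-HUB. The interface below is illustrative only.

class ArtifactRegistry:
    def __init__(self):
        self._store = {}

    def push(self, kind: str, name: str, payload: dict):
        """Publish an artifact (blueprint, xApp, rApp, dataset, agent)."""
        self._store[(kind, name)] = payload

    def pull(self, kind: str, name: str) -> dict:
        """Fetch a shared artifact for reuse or extension."""
        return self._store[(kind, name)]


hub = ArtifactRegistry()
# A team publishes a deployment blueprint; another team pulls and reuses it.
hub.push("blueprint", "5g-sa", {"core": "open5gs", "ran": "oai"})
print(hub.pull("blueprint", "5g-sa"))
```

The point of the pattern is that blueprints, xApps, rApps, datasets, and agents all travel through one shared namespace, so validated artifacts can be reused rather than rebuilt.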
Multi-Models
With its Flexible AI Core, MX-AI hosts different Large Language Models (LLMs), which can be customized based on user preferences and infrastructure. It seamlessly integrates with various providers, including:
Deployment Model | LLM Providers |
---|---|
Remote API | OpenAI, DeepSeek, NVIDIA NIM |
Locally hosted | Llama, MISTRAL, Custom Models |
The Multi-Models feature of MX-AI offers users full control over AI deployment based on their operational needs, privacy requirements, and compute capabilities, and also makes it possible to customize the model with specialized fine-tuning.
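The deployment-model table above can be sketched as a single configuration switch: the same agent code targets either a remote LLM API or a locally hosted model. The `make_llm` dispatcher and the string-returning stubs are assumptions for illustration, not the real MX-AI configuration mechanism.

```python
# Hypothetical sketch of the multi-model idea: provider choice is pure
# configuration, so agents stay unchanged when the backend changes.

def make_llm(config: dict):
    """Return a callable `prompt -> completion` for the configured provider."""
    if config["deployment"] == "remote-api":
        # In practice this would wrap an API client (e.g. OpenAI, NVIDIA NIM).
        return lambda prompt: f"[remote:{config['provider']}] answer to: {prompt}"
    elif config["deployment"] == "local":
        # In practice this would load a locally hosted model such as Llama.
        return lambda prompt: f"[local:{config['provider']}] answer to: {prompt}"
    raise ValueError(f"unknown deployment {config['deployment']!r}")


remote = make_llm({"deployment": "remote-api", "provider": "openai"})
local = make_llm({"deployment": "local", "provider": "llama"})
print(remote("why is cell-3 throughput low?"))
print(local("why is cell-3 throughput low?"))
```

Keeping the provider behind one factory is what lets privacy-sensitive deployments run fully on-premises while others use cloud APIs.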
Multi-Agent
With its multi-agent architecture, MX-AI is designed to be modular and flexible, allowing for seamless agent selection, customization, and integration. The product comes with a set of pre-defined agents, each tailored for specific tasks and domains.
Agent | Capabilities |
---|---|
Network Blueprint Agent | creates and updates the network blueprint |
RIC Agent | enforces simple policies or deploys xApps and rApps for more advanced control loops |
Network Digital Twin Agent | deploys the Network Digital Twin enabling automated simulation and optimization |
Kubernetes Agent | monitors and troubleshoots the Kubernetes cluster |
Specification Agent | provides context-aware insights into 3GPP and ORAN specifications |
The Multi-Agent feature of MX-AI allows users to develop their own custom agents, extend the functionality of existing agents, or integrate third-party agents.
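A minimal sketch of how a coordinator might dispatch a natural-language query to one of the task-specialized agents in the table above. The keyword matcher stands in for the LLM-based routing a real agentic system would use, and all names here are hypothetical.

```python
# Hypothetical sketch of multi-agent routing: a coordinator picks the
# task-specialized agent whose keywords match the user's query.

AGENTS = {
    "ric": {"keywords": {"policy", "xapp", "rapp"},
            "handler": lambda q: f"RIC agent enforcing: {q}"},
    "kubernetes": {"keywords": {"pod", "cluster", "node"},
                   "handler": lambda q: f"K8s agent inspecting: {q}"},
    "spec": {"keywords": {"3gpp", "oran", "specification"},
             "handler": lambda q: f"Spec agent citing: {q}"},
}

def route(query: str) -> str:
    """Send the query to the first agent whose keywords overlap it."""
    words = set(query.lower().split())
    for name, agent in AGENTS.items():
        if agent["keywords"] & words:
            return agent["handler"](query)
    return f"default agent: {query}"


print(route("deploy an xapp policy for slicing"))
print(route("which 3gpp release defines slicing"))
```

Because routing is data-driven, adding a custom or third-party agent amounts to adding one more entry to the dispatch table.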
Multi-Twins
[Experimental Feature] A key innovation of MX-AI is its ability to operate multiple Network Digital Twins, unlocking Zero-Touch Automation and Optimization/Recommendation:
- Run multiple replicas of the network blueprint in parallel
- Simulate different network configurations and test policies before applying them to the live network
- Synchronize with the physical network in real-time ensuring up-to-date information and accurate simulations
With the Multi-Twin feature of MX-AI, users can perform advanced planning, predictive action-testing (what-if analysis), and autonomous network optimization, enabling smarter, faster network operations and redefining AI-driven network intelligence.
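The what-if workflow described above can be sketched in a few lines: clone the live blueprint into replicas, try one candidate configuration per twin, and apply only the best-scoring one. The toy `emulate` score and all parameter names are invented for illustration; a real twin would run a full network emulation.

```python
# Hypothetical sketch of what-if analysis over multiple digital twins.
import copy

live_network = {"tx_power_dbm": 20, "prb_allocation": 50}
candidates = [{"tx_power_dbm": 18}, {"tx_power_dbm": 23}, {"prb_allocation": 70}]

def emulate(twin: dict) -> float:
    """Toy score standing in for a real emulation run (higher is better)."""
    return twin["prb_allocation"] - abs(twin["tx_power_dbm"] - 21)

# Run each candidate on its own twin replica (in parallel in a real system).
twins = []
for change in candidates:
    twin = copy.deepcopy(live_network)  # replica starts in sync with live state
    twin.update(change)                 # test the change on the twin only
    twins.append((emulate(twin), twin))

best_score, best_config = max(twins, key=lambda t: t[0])
live_network.update(best_config)  # apply only the validated configuration
print(best_config)
```

Testing on replicas first is what makes the loop zero-touch: the live network only ever receives configurations that already passed emulation.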
Read our blog post here.
MX-AI Core Stack
Benefits of MX-AI
- Industry aligned – MX-AI brings together customers, partners, developers, and AI agents in a unified, cloud-native platform that enables seamless integration of xApps, rApps, AI agents, and CI/CD pipelines — all aligned with 3GPP and O-RAN standards.
- Enhanced Network Accessibility – By translating complex network data into natural language, MX-AI enables even non-technical users to gain deeper insights into network performance, configurations, and issues.
- Proactive Network Troubleshooting – The agent streamlines fault detection, root cause analysis, and configuration management, reducing downtime and enhancing network resilience.
- Automated Network Operations – MX-AI assists in daily network management by automating tasks such as deploying, testing, and updating network blueprints, reducing manual effort and operational complexity.
- End-to-End Optimization – Through automated policy enforcement and AI-driven network adjustments, MX-AI helps operators continuously optimize network performance.
- R&D Acceleration – Developers can leverage MX-AI to benchmark different LLM models for network monitoring and optimization, evaluating their effectiveness across various downstream tasks and domains.
- Seamless Integration & Customization – Developers and vendors can integrate MX-AI with their own LLMs, APIs (e.g., OpenAI, DeepSeek, Nvidia NIMs), or local models, ensuring adaptability to different environments and compliance requirements.
Applications of MX-AI in Network Intelligence
- Real-Time Network State Monitoring – Provides live insights into network KPIs, performance metrics, and infrastructure health, helping operators maintain optimal conditions.
- AI-Driven Troubleshooting – Assists operators in diagnosing and resolving network issues, reducing downtime and operational costs.
- Actionable Network Management – Simplifies and automates network operations by bridging the gap between human operators and complex network systems, enabling intuitive natural language-driven configurations and optimizations.
- Knowledge Repository & Querying – Acts as an AI-powered assistant capable of answering questions about 3GPP, ORAN, and network observability, providing instant, context-aware insights.
- Benchmarking & AI Integration – Enables vendors and researchers to compare and integrate different AI models, including SLMs, LLM APIs (e.g., OpenAI), and Nvidia NIMs, ensuring the most effective deployment strategy for network intelligence.
Need more information ?
We recognize that each deployment scenario has unique needs and that the solution must be tailored, adjusted, and tested in its intended environment. To assist you, we can schedule a live demo and help you fill out a questionnaire to assess your requirements and find the best solution.