Set Up Access to the Cluster
This guide explains how to access the MX-PDK remotely and how to use BubbleRAN CLI from a workstation outside the cluster nodes. In most deployments, the MX-PDK runs in a lab or server room, so remote access is the standard operating mode.
1. Before You Start
The IP addresses provided with the MX-PDK describe the control-plane LAN of the delivered system. They do not automatically mean that your office network or your normal lab network can already reach that LAN.
If you are unsure, ask your lab or office IT team the following:
- whether your workstation can already reach the MX-PDK control-plane network directly
- whether you must connect first through a VPN
- whether you must connect first through a jump host or bastion host
- which hostname, IP address, and username you should use to reach the control-plane
brc is intended for Ubuntu/Linux systems, including WSL Ubuntu on Windows. It is not intended for native macOS installation. If you only need SSH access from Windows, PowerShell/OpenSSH or MobaXterm are both valid options.
Choose the section that matches your setup:
2. Ubuntu/Linux (Preferred Setup)
If you have access to an Ubuntu or Linux workstation, this is the preferred way to interact with the MX-PDK. It matches the tooling used on the cluster nodes, supports the BubbleRAN CLI natively, and provides the most consistent operational workflow.
2.1. Access the control-plane directly with SSH
This is the simplest option when you only need shell access and are comfortable working directly on the cluster.
When you are connecting from outside the lab network, you usually need your IT or infrastructure team to provide one of the following:
- VPN access to the lab network where the MX-PDK is installed
- a jump host or bastion host that can reach the lab network
- the SSH username and login method approved for your deployment
The USB stick delivered with the system normally contains the MX-PDK lab-side IP information. Once remote access into that network is available, use the control-plane IP from the USB documentation to open the SSH session.
- Optionally add a saved SSH host entry:

  ```shell
  cat >> ~/.ssh/config <<EOF
  Host mx-pdk-control-plane
      HostName control-plane.example.com
      User ubuntu
  EOF
  ```

- Connect to the node:

  ```shell
  ssh mx-pdk-control-plane
  # or
  ssh ubuntu@control-plane.example.com
  ```
If you must connect through a jump host first, the SSH config can be extended as follows:
```shell
cat >> ~/.ssh/config <<EOF
Host mx-pdk-jumphost
    HostName jumphost.example.com
    User ubuntu

Host mx-pdk-control-plane
    HostName <control-plane-IP-from-USB>
    User ubuntu
    ProxyJump mx-pdk-jumphost
EOF
```
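To confirm how OpenSSH will resolve such an entry without actually opening a connection, you can ask the client to print its effective configuration. `ssh -G` is standard OpenSSH behavior; the host alias below is the one defined in this guide:

```shell
# Print the options OpenSSH would apply for this host, without connecting.
# Confirms that HostName, User, and ProxyJump resolve as intended.
ssh -G mx-pdk-control-plane | grep -iE '^(hostname|user|proxyjump) '
```

This is a quick way to catch typos in `~/.ssh/config` before debugging network problems.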
brc is already installed by default on control-plane and devops, so no additional CLI installation is required when you work directly there.
2.2. BubbleRAN CLI Setup for Linux Machines
brc is distributed as a Snap package, and you receive automatic updates for it once installed. The Snap is installed by default on your control-plane and devops machines. To use it on another machine, such as your personal laptop, you need to install it manually.
Before using brc from your own Linux machine, install the CLI once, copy the kubeconfig locally, and initialize the Snap configuration:
- Install the BubbleRAN CLI:

  ```shell
  sudo snap install br-t9s --channel=beta
  ```

- Initialize the Snap user configuration:

  ```shell
  br-t9s.brc help
  ```

- Copy the kubeconfig from a machine that already has access:

  ```shell
  scp user@control-plane:/home/user/.kube/config ~/config-mx-pdk
  ```
- A kubeconfig copied from the control-plane is expected to contain entries similar to the following:

  ```yaml
  apiVersion: v1
  clusters:
  - cluster:
      certificate-authority-data: DATA+OMITTED
      server: https://<control-plane-IP-from-USB>:6443
    name: kubernetes
  contexts:
  - context:
      cluster: kubernetes
      user: kubernetes-admin
    name: kubernetes-admin@kubernetes
  current-context: kubernetes-admin@kubernetes
  kind: Config
  users:
  - name: kubernetes-admin
    user:
      client-certificate-data: REDACTED
      client-key-data: REDACTED
  ```

  The most important field is `server`. Use `https://<control-plane-IP-from-USB>:6443` when your machine is on the same MX-PDK network. Use `https://127.0.0.1:6443` only when you are using the SSH tunnel described in Section 2.4.

- Copy the kubeconfig to the Snap directory used by `brc`:

  ```shell
  mkdir -p ~/snap/br-t9s/current/.kube
  cp ~/config-mx-pdk ~/snap/br-t9s/current/.kube/config
  ```
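After copying, you can confirm which API endpoint the installed kubeconfig points at. This is a minimal check using the Snap path from this guide; the guard keeps it harmless if the file is not in place yet:

```shell
# Show the API endpoint brc will use. It should match your access mode:
# the control-plane IP on the LAN, or 127.0.0.1 when tunneled (Section 2.4).
grep 'server:' ~/snap/br-t9s/current/.kube/config 2>/dev/null \
  || echo "kubeconfig not installed yet"
```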
2.3. Use brc from a Linux Machine on the same LAN
This is the preferred setup for day-to-day work when your workstation can reach the MX-PDK network directly.
- Use the kubeconfig locally and validate access:

  ```shell
  cp ~/config-mx-pdk ~/snap/br-t9s/current/.kube/config
  brc extract infra
  ```

  If this command succeeds, your Linux workstation can reach the MX-PDK directly.
2.4. Use brc Remotely Through an SSH Tunnel
Use this workflow when your Linux workstation is outside the MX-PDK LAN and the Kubernetes API is only reachable through a jump host or the control-plane.
- Create a copy of the kubeconfig so you modify only the tunneled version:

  ```shell
  cp ~/config-mx-pdk ~/config-remote
  ```

- Start an SSH session to the control-plane and forward the Kubernetes API port locally:

  ```shell
  ssh -L 6443:127.0.0.1:6443 mx-pdk-control-plane
  ```

  If your environment requires a jump host, configure `mx-pdk-control-plane` with `ProxyJump` as shown in Section 2.1 and keep this SSH session open while you use `brc`.

- Edit `~/config-remote` so the cluster endpoint points to the local tunnel:

  ```yaml
  server: https://127.0.0.1:6443
  tls-server-name: <control-plane-IP>
  ```

- Update the kubeconfig used by `brc`:

  ```shell
  cp ~/config-remote ~/snap/br-t9s/current/.kube/config
  ```

- Keep the SSH tunnel running while you use `brc`:

  ```shell
  brc extract infra
  ```
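The endpoint edit in the steps above can also be scripted. This is a minimal sketch assuming your tunneled copy lives at `~/config-remote`; it only rewrites the `server` line, so add the `tls-server-name` line by hand if it is not present:

```shell
# Rewrite the server line to point at the local SSH tunnel.
# No-op if ~/config-remote does not exist yet.
if [ -f ~/config-remote ]; then
  sed -i 's|server: https://.*:6443|server: https://127.0.0.1:6443|' ~/config-remote
  grep 'server:' ~/config-remote
fi
```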
2.5. Optional Aliases
Aliases are useful if you work with the MX-PDK frequently. They give you shorter command names, reduce typing mistakes, and make common brc and cdk commands faster to run.
To simplify daily usage, add the following aliases to `~/.aliases`:

```shell
# bubbleran brc aliases
alias cli=br-t9s.brc
alias brc=br-t9s.brc
alias t9s-cli=br-t9s.brc
alias br-cli=br-t9s.brc
alias br-ctl=br-t9s.brc
alias t9s-ctl=br-t9s.brc

# bubbleran cdk aliases
alias cdk=br-t9s.cdk
alias t9s-cdk=br-t9s.cdk
alias br-cdk=br-t9s.cdk
```
Open a new shell session to make the aliases available, or source `~/.aliases` in your current shell.
Debugging aliases:

- In bash, check that `~/.bashrc` contains the line `source ~/.aliases`.
- In zsh, check that `~/.zshrc` contains the line `source ~/.aliases`.
- In fish, check that `~/.config/fish/config.fish` contains the line `source ~/.aliases`.
- In other shells, check that `~/.profile` contains the line `source ~/.aliases`.
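For bash, that check can be automated with an idempotent append. This is a small sketch; adapt the rc-file path for your shell:

```shell
# Add the source line to ~/.bashrc only if it is not already there.
grep -qxF 'source ~/.aliases' ~/.bashrc 2>/dev/null \
  || echo 'source ~/.aliases' >> ~/.bashrc
```

Running it more than once is safe: the `grep` guard prevents duplicate lines.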
2.6. Optional Autocompletion
Autocompletion is useful when you do not remember the full command syntax, resource name, or option flags. It speeds up command entry and helps avoid simple typing errors.
To enable command completion for the CLI tools, follow the instructions for your shell.
Bash
Run the following commands:
```shell
brc completion bash >/tmp/brc.bash
cdk completion bash >/tmp/cdk.bash
helm completion bash >/tmp/helm.bash
sudo cp /tmp/brc.bash /usr/share/bash-completion/completions/brc
sudo cp /tmp/cdk.bash /usr/share/bash-completion/completions/cdk
sudo cp /tmp/helm.bash /usr/share/bash-completion/completions/helm
```
All users on the machine will then have access to the autocompletion.
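If you prefer not to use `sudo`, a per-user alternative is to load the completion from your own shell startup file. This sketch is for bash and calls the Snap binary directly, so it works without the aliases from Section 2.5:

```shell
# Load brc completion for the current user only, no sudo required.
echo 'source <(br-t9s.brc completion bash)' >> ~/.bashrc
```

Run it once; the completion is then loaded in every new bash session.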
2.7. Important Notes for Linux Workstations
- Never use `sudo` with the BubbleRAN CLI tools. Doing so can break permissions inside the Snap directories.
- The kubeconfig `server` field must always point to an address reachable from the machine where `brc` runs.
- For direct LAN access, the `server` field usually remains the control-plane IP or hostname.
- For remote access through a jump host, the `server` field is typically `https://127.0.0.1:6443` while the SSH tunnel is active.
- If you switch between local and remote access, keep separate kubeconfig files such as `~/config-mx-pdk` and `~/config-remote`.
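If you switch often, a small helper function can swap the active kubeconfig. The name `brc-use` and the two file paths are conventions of this guide, not part of `brc` itself:

```shell
# Hypothetical helper: switch the kubeconfig used by brc between the
# local (LAN) copy and the remote (tunneled) copy kept by this guide.
brc-use() {
  case "$1" in
    local)  cp ~/config-mx-pdk ~/snap/br-t9s/current/.kube/config ;;
    remote) cp ~/config-remote ~/snap/br-t9s/current/.kube/config ;;
    *)      echo "usage: brc-use local|remote" >&2; return 1 ;;
  esac
}
```

Place it in `~/.bashrc` (or your shell's rc file), then run `brc-use remote` before opening the SSH tunnel and `brc-use local` when back on the LAN.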
3. macOS Access
macOS is a good option for SSH access to the MX-PDK. brc should not be installed natively on macOS. If you need the BubbleRAN CLI, use an Ubuntu/Linux workstation instead.
3.1. Install the Required Tools
- The built-in macOS Terminal is enough for SSH access in most deployments.
- If needed, see the official Apple guide: Terminal User Guide for Mac.
3.2. Connect from macOS
Use the built-in macOS Terminal to connect to the control-plane.
- If needed, add a saved SSH host entry:

  ```shell
  cat >> ~/.ssh/config <<EOF
  Host mx-pdk-control-plane
      HostName <control-plane-IP-from-USB>
      User ubuntu
  EOF
  ```

- Connect to the control-plane:

  ```shell
  ssh mx-pdk-control-plane
  ```
Once you are connected to the control-plane, you can use the BubbleRAN CLI directly from that shell session. The CLI is already installed there and is already configured for the local MX-PDK environment.
macOS access is therefore suitable for SSH-based interaction and for using the CLI from inside the control-plane itself. If you want to run brc locally on your own machine, use Section 2 from an Ubuntu/Linux workstation.
4. Windows Access
Windows is supported, but it is not the preferred environment for daily brc usage. If you need the BubbleRAN CLI on Windows, use WSL Ubuntu and follow Section 2.
4.1. PowerShell + OpenSSH
Use this option when you need native Windows SSH access.
- Windows 10 and Windows 11 can use the built-in OpenSSH client.
- If OpenSSH is not enabled on your machine, see the official Microsoft guide: Get started with OpenSSH for Windows.
Example connection:
```shell
ssh ubuntu@<control-plane-IP-from-USB>
```
4.2. WSL Ubuntu
This is the recommended Windows option when you need to install and run brc.
- Install WSL by following the official Microsoft guide: Install WSL.
- Install Ubuntu from the Microsoft Store or from the WSL setup flow.
- Inside Ubuntu, install Snap support by following the official Snapcraft guide: Installing snapd.
- Open the Ubuntu shell and follow Section 2 of this guide.
4.3. MobaXterm
MobaXterm is a practical option when users prefer a graphical SSH client with saved sessions and a built-in tunneling UI.
- Download MobaXterm from the official website: MobaXterm Home Edition.
- Install or extract MobaXterm on your Windows machine.
- Start MobaXterm and open `Session > SSH`.
- Enter the remote host name of the jump host or cluster entry point.
- Enable `Specify username` and enter the SSH user provided for your environment.
- If your IT team gave you a private key, enable `Advanced SSH settings` and select that key file.
- Save the session so the connection details can be reused.
MobaXterm is intended here for SSH-based access to the control-plane. If you need the BubbleRAN CLI, use WSL Ubuntu and follow the Linux workflow in Section 2.
5. Troubleshooting
- `x509: certificate signed by unknown authority`: confirm that the kubeconfig still contains the expected certificate authority data used by `brc`.
- `Unable to connect to the server`: verify VPN access, jump host reachability, and whether the SSH tunnel is still running.
- TLS hostname errors after tunneling: verify `tls-server-name` against the API server certificate.
- `brc` cannot read the configuration: confirm that the kubeconfig was copied to `~/snap/br-t9s/current/.kube/config` and that file ownership belongs to your user.
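When triaging connection errors, it can help to check raw TCP reachability of the API port before inspecting kubeconfig contents. This is a bash-only sketch using `/dev/tcp`; substitute your control-plane IP, or keep `127.0.0.1` while the tunnel is up:

```shell
# Probe the Kubernetes API port without any kubeconfig involved.
api_host=127.0.0.1   # or <control-plane-IP-from-USB>
if timeout 5 bash -c ">/dev/tcp/${api_host}/6443" 2>/dev/null; then
  echo "API port reachable"
else
  echo "API port NOT reachable"
fi
```

If the port is unreachable, the problem is VPN, routing, or the tunnel itself rather than certificates or kubeconfig contents.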
6. First Steps with BRC
6.1. Extract Infrastructure Information
The following command generates a tabular view of the infrastructure:
```shell
brc extract infra
```
It contains two tables:
- Cluster information where the cluster nodes are listed.
- Non-cluster information where the NiBs are listed.
The cluster information contains the following columns:
- CLUSTER - with REGION, ROLE, NAME, and IP address of each node.
- SYSTEM - with the OS, KERNEL, and RUNTIME of each node.
- COMPUTE - with the ARCH, VENDOR, CPU, and MEM of each node.
- RADIO - with DEVICE, TERMINALS, and SYNC status of each node.
The REGION is a virtual grouping of the nodes, useful for simulating multi-region deployments.
6.2. Update Operators
You might not have the latest operator versions installed, because newer releases may have become available after your system was delivered and commissioned. As of the Bronze release, the following operators are available:
| Operator Name | Provider | Latest Available Version |
|---|---|---|
| Athena Base Operator | BubbleRAN | v1.12.5 |
| ODIN | BubbleRAN | v1.0.3 |
| Athena Slice Operator | BubbleRAN | v1.10.0 |
To check the installed operator versions, run:

```shell
brc list operators
```

This runs the diagnostics and prints the versions of the installed operators.
If your ODIN version is < v1.0.0, contact BubbleRAN support before updating. These older deployments require a catalog migration step that should not be performed manually by customers through low-level cluster tooling.
We strongly recommend removing any older operator version and installing the latest available release:
```shell
brc remove operator <operator-name>   # operator-name as reported by brc list operators
brc install operator <operator-name>
```
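If several operators are installed, the remove-and-install cycle can be sketched as a loop. The `awk` parsing below is an assumption about the table layout printed by `brc list operators` (a header row, then one operator name per line in the first column); verify it against your actual output before running:

```shell
# Hypothetical: refresh every installed operator in one pass.
# ASSUMPTION: operator names appear in column 1 after a header row.
for op in $(brc list operators | awk 'NR>1 {print $1}'); do
  brc remove operator "$op"
  brc install operator "$op"
done
```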
6.3. Update Models
It is good practice to update the models to the latest version. First, list the models currently installed on the cluster:
```shell
brc list models
```
Then upgrade each model as needed:
```shell
brc upgrade model <model-name>
```
6.4. Update Device Plugins
In rare cases, device plugin updates may be required as part of a supported maintenance workflow.
Check the release notes to see if you need to update the device plugins.
If your installation includes CU-DU nodes for O-RAN 7.2 FHI, the baseline pontus configuration may also need to be adjusted.
For pre-installed systems, this modification is already applied before shipping, so you can ignore the rest of this subsection.
The baseline configuration looks similar to the following:
```yaml
apiVersion: v1
data:
  config.yaml: |
    devices:
    - resource-name: net-tun
      groups:
      - count: 10
        paths:
        - host-path: "/dev/net/tun"
    - resource-name: vfio
      groups:
      - count: 1
        paths:
        - host-path: "/dev/vfio"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"config.yaml":"devices:\n - resource-name: net-tun\n groups:\n - count: 10\n paths:\n - host-path: \"/dev/net/tun\"\n - resource-name: vfio\n groups:\n - count: 1\n paths:\n - host-path: \"/dev/vfio\"\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"pontus-config","namespace":"gaia"}}
  creationTimestamp: ""
  name: pontus-config
  namespace: gaia
  resourceVersion: -
  uid: -
```
To apply the required CPU isolation for CU-DU nodes, the configuration should be updated as follows:
```yaml
apiVersion: v1
data:
  config.yaml: |
    cpu-set:
    - node-name: bubble2 # HERE YOU SHOULD CHANGE THE NAME FOR YOUR OWN CU-DU NODE NAME
      default-set: 11-31
      system-default-set: "0"
    - node-name: bubble3 # HERE YOU SHOULD CHANGE THE NAME FOR YOUR OWN CU-DU NODE NAME
      default-set: 11-31
      system-default-set: "0"
    devices:
    - resource-name: net-tun
      groups:
      - count: 10
        paths:
        - host-path: "/dev/net/tun"
    - resource-name: vfio
      groups:
      - count: 1
        paths:
        - host-path: "/dev/vfio"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"config.yaml":"devices:\n - resource-name: net-tun\n groups:\n - count: 10\n paths:\n - host-path: \"/dev/net/tun\"\n - resource-name: vfio\n groups:\n - count: 1\n paths:\n - host-path: \"/dev/vfio\"\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"pontus-config","namespace":"gaia"}}
  creationTimestamp: ""
  name: pontus-config
  namespace: gaia
  resourceVersion: -
  uid: -
```
If this change is required on a manual deployment, contact BubbleRAN support or your platform operator to apply it through the supported maintenance process.