
Set Up Access to the Cluster

This guide eases user interaction with the MX-PDK. The machines are normally set up in a server room or lab, so remote usage of the MX-PDK is the most common. In addition to accessing the machines and running commands via SSH, you can interact with the cluster using the BubbleRAN CLI (brc) from any other Ubuntu system within the MX-PDK LAN.

BubbleRAN Command Line Interface (BRC) Setup

brc is distributed as a Snap package, and you receive automatic updates once it is installed. The Snap is installed by default on your control-plane and devops machines. To use it on another machine (e.g., your personal laptop), you need to install it manually:

sudo snap install br-t9s --channel=beta

You can upgrade it to the latest version by running the following command:

sudo snap refresh br-t9s
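
To check which version of the Snap is currently installed, you can run:

snap list br-t9s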
danger

Due to bugs in Canonical's snapd, the user configuration is not created automatically unless you run the Snap once as the user. Hence, after installation, BEFORE DOING ANYTHING ELSE, run the following command as your user:

br-t9s.brc help

Credentials

Most of the BRC applications require one or more credentials to connect to the Kubernetes cluster or to container registries. To stay consistent with Kubernetes, we follow the same credential management approach.

Copy the Kubernetes configuration file from the USB stick provided to you to ~/snap/br-t9s/current/.kube/config. You may first need to create the directory ~/snap/br-t9s/current/.kube/ if it does not exist.
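
For example, assuming the USB stick is mounted at /media/usb and the file on it is named config (both are assumptions; adjust to your setup):

# Create the target directory if it does not exist
mkdir -p ~/snap/br-t9s/current/.kube
# /media/usb/config is a placeholder path; use your actual mount point and file name
cp /media/usb/config ~/snap/br-t9s/current/.kube/config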

danger

NEVER use sudo to interact with the BRC applications. This will mess up the permissions, and you will not be able to use the applications properly.

caution

If you open the configuration file, you will find the following structure:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.0.2:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

The server field contains the IP address and port of the Kubernetes API server. You should be able to reach it from the machine where you have CTL installed. Make sure the firewalls between the CTL machine and the Kubernetes cluster are configured properly. You could use port forwarding or similar techniques to make it work, as sketched below. Contact BubbleRAN support if you have any issues.
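
For example, a minimal SSH port-forwarding sketch through an intermediate host (user@jump-host is a placeholder for a machine that can reach the API server):

# Forward local port 6443 to the API server via a reachable jump host
# 'user@jump-host' is a placeholder; substitute your own user and host
ssh -N -L 6443:192.168.0.2:6443 user@jump-host

With the tunnel up, the server field of your kubeconfig would point at the forwarded local address instead (certificate settings permitting).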

Aliases

To make the usage of the CTL applications easier, we have created a set of aliases. Add the following lines to the end of the file ~/.aliases:

# bubbleran brc aliases
alias cli=br-t9s.brc
alias brc=br-t9s.brc
alias t9s-cli=br-t9s.brc
alias br-cli=br-t9s.brc
alias br-ctl=br-t9s.brc
alias t9s-ctl=br-t9s.brc

# bubbleran cdk aliases
alias cdk=br-t9s.cdk
alias t9s-cdk=br-t9s.cdk
alias br-cdk=br-t9s.cdk
tip

You need to start a new shell session or source the file ~/.aliases to make the aliases available.
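
For example, to load the aliases into the current session and verify that they work:

source ~/.aliases
brc help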

tip

Debugging aliases:

  • In bash, check whether in the file ~/.bashrc there is a line source ~/.aliases.
  • In zsh, check whether in the file ~/.zshrc there is a line source ~/.aliases.
  • In fish, check whether in the file ~/.config/fish/config.fish there is a line source ~/.aliases.
  • In other shells, check whether in the file ~/.profile there is a line source ~/.aliases.
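
If the source line is missing, you can append it; a minimal sketch for bash (adjust the file name for your shell):

# Add the source line to ~/.bashrc only if it is not already there
grep -qxF 'source ~/.aliases' ~/.bashrc || echo 'source ~/.aliases' >> ~/.bashrc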

Autocompletion

To get the most out of the CTL applications, enable autocompletion for the commands. Follow the instructions below for your shell.

Bash

Run the following commands:

brc completion bash >/tmp/brc.bash
cdk completion bash >/tmp/cdk.bash
kubectl completion bash >/tmp/kubectl.bash
helm completion bash >/tmp/helm.bash

sudo cp /tmp/brc.bash /usr/share/bash-completion/completions/brc
sudo cp /tmp/cdk.bash /usr/share/bash-completion/completions/cdk
sudo cp /tmp/kubectl.bash /usr/share/bash-completion/completions/kubectl
sudo cp /tmp/helm.bash /usr/share/bash-completion/completions/helm
caution

All the users on the machine will have access to the autocompletion.
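
To enable completion in your current session without opening a new shell, you can source the generated scripts directly:

source /usr/share/bash-completion/completions/brc
source /usr/share/bash-completion/completions/cdk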

First Steps with BRC

Extracting infrastructure information

The command:

brc extract infra

generates a tabular output of the infrastructure information. It contains two tables:

  1. Cluster information where the cluster nodes are listed.
  2. Non-cluster information where the NiBs are listed.

The cluster information contains the following columns:

  • CLUSTER - with REGION, ROLE, NAME, and IP address of each node.
  • SYSTEM - with the OS, KERNEL, and RUNTIME of each node.
  • COMPUTE - with the ARCH, VENDOR, CPU, and MEM of each node.
  • RADIO - with DEVICE, TERMINALS, and SYNC status of each node.
tip

The REGION is a virtual grouping of the nodes, useful for simulating multi-region deployments.

Update Operators

You might not have the latest version of the operators installed on your machine, as a newer version might have been released between the delivery and setup of your product. As of the Bronze release, the following operators are available:

Operator Name         | Provider  | Latest Available Version
--------------------- | --------- | ------------------------
Athena Base Operator  | BubbleRAN | v1.12.5
ODIN                  | BubbleRAN | v1.0.2
Athena Slice Operator | BubbleRAN | v1.10.0

To check the version of the installed operator, you could use either of the following commands:

brc list operators

which runs the diagnostics and prints the version of the installed operator, or

kubectl operator list -A

which lists all the installed operators.

We STRONGLY recommend removing the older version of the operator and installing the latest version by running the following commands:

brc remove operator athena:{version}
brc install operator athena:v1.12.5

where {version} is the version of the operator you have installed.
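
For example, if brc list operators reports Athena Base Operator v1.12.0 (the installed version here is illustrative; use the one reported on your cluster):

brc remove operator athena:v1.12.0
brc install operator athena:v1.12.5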

Update Models

It is always a good idea to update the models to the latest version. Before that, you need to know which models are installed on your cluster.

brc list models

For each of the models listed, upgrade it by running:

brc upgrade model {model-name}
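
For example, with placeholder model names (substitute the names reported by brc list models):

# 'oai-gnb' and 'oai-cn' are placeholder model names
brc upgrade model oai-gnb
brc upgrade model oai-cn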

Update Device Plugins

In rare cases, you might need to update the device plugins. To do so, simply restart the device plugin pods by rolling out the DaemonSet:

kubectl rollout restart daemonset --namespace gaia pontus
kubectl rollout status daemonset --namespace gaia pontus
tip

Check the release notes to see if you need to update the device plugins.

If your installation includes CU-DU nodes running O-RAN 7.2 FHI, you should edit the baseline ConfigMap for pontus. For pre-installed cases, this modification is already done before shipping, so you can skip this section.

To do this, you may open the pontus configmap with:

kubectl edit configmap --namespace gaia pontus-config

This will open a vim-like editor in which you will see content like the following:

pontus config
apiVersion: v1
data:
  config.yaml: |
    devices:
    - resource-name: net-tun
      groups:
      - count: 10
        paths:
        - host-path: "/dev/net/tun"
    - resource-name: vfio
      groups:
      - count: 1
        paths:
        - host-path: "/dev/vfio"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"config.yaml":"devices:\n - resource-name: net-tun\n groups:\n - count: 10\n paths:\n - host-path: \"/dev/net/tun\"\n - resource-name: vfio\n groups:\n - count: 1\n paths:\n - host-path: \"/dev/vfio\"\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"pontus-config","namespace":"gaia"}}
  creationTimestamp: ""
  name: pontus-config
  namespace: gaia
  resourceVersion: -
  uid: -

To apply the necessary CPU isolation for CU-DU nodes, modify it as follows and save:

pontus config for 7.2
apiVersion: v1
data:
  config.yaml: |
    cpu-set:
    - node-name: bubble2 # HERE YOU SHOULD CHANGE THE NAME FOR YOUR OWN CU-DU NODE NAME
      default-set: 11-31
      system-default-set: "0"
    - node-name: bubble3 # HERE YOU SHOULD CHANGE THE NAME FOR YOUR OWN CU-DU NODE NAME
      default-set: 11-31
      system-default-set: "0"
    devices:
    - resource-name: net-tun
      groups:
      - count: 10
        paths:
        - host-path: "/dev/net/tun"
    - resource-name: vfio
      groups:
      - count: 1
        paths:
        - host-path: "/dev/vfio"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"config.yaml":"devices:\n - resource-name: net-tun\n groups:\n - count: 10\n paths:\n - host-path: \"/dev/net/tun\"\n - resource-name: vfio\n groups:\n - count: 1\n paths:\n - host-path: \"/dev/vfio\"\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"pontus-config","namespace":"gaia"}}
  creationTimestamp: ""
  name: pontus-config
  namespace: gaia
  resourceVersion: -
  uid: -

Then, to apply the changes, run the rollout commands again:

kubectl rollout restart daemonset --namespace gaia pontus
kubectl rollout status daemonset --namespace gaia pontus
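
Once the rollout completes, you can verify that the pontus DaemonSet and its pods are running:

kubectl get daemonset --namespace gaia pontus
kubectl get pods --namespace gaia | grep pontus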