Deployment with 7.2 Split and Benetel RU
This deployment consists of a 5G Standalone (SA) network with an OpenAirInterface (OAI) gNB using a Benetel RU, and an Open5GS core, allowing analysis of both the control plane and the data plane.
apiVersion: athena.trirematics.io/v1
kind: Network
metadata:
  name: benetel-demo
  namespace: trirematics
spec:
  slices:
  - plmn: "00101"
    dnn: internet
    network-mode: IPv4
    service-type: eMBB
    differentiator: 0x000000
    ipv4-range: 12.1.1.0/24
    ipv6-range: 2001:db8:1::/64
  access:
  - name: oai-gnb
    stack: 5g-sa
    model: oai-ran/monolithic-gnb-ru-benetel
    identity:
      an-id: 30
      tracking-area: 1
    radio:
      device: oran-7.2
      antenna:
        formation: 4x4
    scheduling:
      nodeName: bubble2 # gNB/CU-DU node name
    annotations:
      # PCI IDs obtained by running `dpdk-devbind.py --status` on the CU-DU node
      extras.t9s.io/pci-ids: '["0000:01:11.1", "0000:01:11.0"]'
      # VF MACs of the dpdk0 interface, obtained by running `ip link show` on the CU-DU node
      extras.t9s.io/du-macs: '["02:11:22:33:44:67", "02:11:22:33:44:66"]'
      # RU MAC (repeated), obtained by logging into the RU and running `show eth-info`
      extras.t9s.io/ru-macs: '["8c:1f:64:d1:12:02", "8c:1f:64:d1:12:02"]'
      extras.t9s.io/mtu: '9600'
      # RU IP used for logging into it
      extras.t9s.io/o1-remote-ipv4: 192.168.1.50
      # Reboot flag; after the first use it can be set to 'false' if the values in 'cells' are unchanged
      extras.t9s.io/benetel-ru-reboot: 'true'
    cells:
    - band: n77
      arfcn: 650016 # center frequency at 3750 MHz
      bandwidth: 100MHz
      subcarrier-spacing: 30kHz
      tdd-config:
        period: 5ms
        dl-slots: 7
        dl-symbols: 6
        ul-slots: 2
        ul-symbols: 4
    core-networks:
    - open5gs.benetel-demo
  core:
  - name: open5gs
    stack: 5g-sa
    model: open5gs/5gc
    profiles:
    - debug
    identity:
      region: 128
      cn-group: 4
      cn-id: 5
    dns:
      ipv4:
        default: 8.8.8.8
        secondary: 8.8.4.4
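As a quick sanity check on the cells section, the arfcn value can be converted to a center frequency with the NR-ARFCN formula for the 3000-24250 MHz range from 3GPP TS 38.104 (F_REF = 3000 MHz + 0.015 MHz x (N_REF - 600000)). A minimal shell sketch:

```shell
# Convert the NR-ARFCN from the sample manifest to a center frequency.
# Integer arithmetic in kHz: 3000000 kHz + 15 kHz * (arfcn - 600000)
arfcn=650016
freq_khz=$(( 3000000 + 15 * (arfcn - 600000) ))
echo "center frequency: ${freq_khz} kHz"  # 3750240 kHz, i.e. ~3750 MHz in band n77
```

This confirms the comment next to arfcn: 650016 maps to roughly 3750 MHz, well inside band n77.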
This deployment does not include a terminal (UE), so a UE must be installed separately. Please refer to this guide for sample UE deployments.
Deployment pre-checks
Before deploying a network using the 7.2 split, certain pre-checks are recommended, especially after a reboot.
As can be seen at line 27 of the sample benetel.yaml, the nodeName field specifies the node where the OAI gNB/CU-DU will run.
It is important to make sure that the specified nodeName corresponds to a node with an SFP NIC (for a pre-installed MX-PDK, normally 1 or 2 worker nodes have an Intel E810-XXV NIC).
The gNB/CU-DU node should be connected to the PTP grandmaster fronthaul switch using both SFP interfaces of the node, which are generally named sync0 and dpdk0.
Once the gNB/CU-DU node is selected, check that DPDK is correctly set up and review the information specified in the annotations section of the network .yaml.
Check the NIC's PCI IDs for DPDK on gNB/CU-DU node
Check the PCI IDs of the NIC with the following command:
dpdk-devbind.py --status
which should provide an output like the following:
Network devices using DPDK-compatible driver
============================================
0000:01:11.0 'Ethernet Adaptive Virtual Function 1889' drv=vfio-pci unused=iavf
0000:01:11.1 'Ethernet Adaptive Virtual Function 1889' drv=vfio-pci unused=iavf
...
Check that these two PCI IDs (in this case 0000:01:11.0 and 0000:01:11.1) are specified in extras.t9s.io/pci-ids. If the command fails, you may need to reconfigure DPDK on that node:
sudo dpdk-setup
This command gives the same output as dpdk-devbind.py --status.
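The comparison can also be scripted. The sketch below parses a captured copy of the sample output; on a real node you would instead populate the variable with status="$(dpdk-devbind.py --status)". The declared IDs are copied from the extras.t9s.io/pci-ids annotation in the sample benetel.yaml:

```shell
# Sample dpdk-devbind.py --status output (stand-in for the real command)
status="Network devices using DPDK-compatible driver
============================================
0000:01:11.0 'Ethernet Adaptive Virtual Function 1889' drv=vfio-pci unused=iavf
0000:01:11.1 'Ethernet Adaptive Virtual Function 1889' drv=vfio-pci unused=iavf"

# PCI IDs of functions bound to the vfio-pci driver
bound_ids=$(printf '%s\n' "$status" | awk '/drv=vfio-pci/ { print $1 }' | sort)

# IDs declared in extras.t9s.io/pci-ids (order does not matter, hence the sort)
declared_ids=$(printf '%s\n' "0000:01:11.1" "0000:01:11.0" | sort)

if [ "$bound_ids" = "$declared_ids" ]; then
  echo "pci-ids annotation matches the DPDK-bound devices"
else
  echo "MISMATCH between bound and declared PCI IDs" >&2
fi
```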
Check VF MACs obtained for dpdk0 on gNB/CU-DU node
Check the VF MACs of the dpdk0 interface with the following command:
ip link show
which should provide an output like the following:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: bubbleran0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether d8:43:ae:ca:bf:50 brd ff:ff:ff:ff:ff:ff
3: sync0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 50:7c:6f:66:ec:6e brd ff:ff:ff:ff:ff:ff
4: dpdk0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 50:7c:6f:66:ec:6f brd ff:ff:ff:ff:ff:ff
alias dpdk0
vf 0 link/ether 02:11:22:33:44:66 brd ff:ff:ff:ff:ff:ff, vlan 564, spoof checking off, link-state auto, trust off
vf 1 link/ether 02:11:22:33:44:67 brd ff:ff:ff:ff:ff:ff, vlan 564, spoof checking off, link-state auto, trust off
altname enp1s0f1
...
Check that the MACs reported for vf 0 and vf 1 (in this case 02:11:22:33:44:66 and 02:11:22:33:44:67) are specified in extras.t9s.io/du-macs.
If no VFs are reported in the output, you may have skipped the previous step; if so, run sudo dpdk-setup.
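As with the PCI IDs, this check can be scripted. The sketch below parses a captured copy of the sample output; on a real node you would populate the variable with link_out="$(ip link show dpdk0)". The declared MACs are copied from the extras.t9s.io/du-macs annotation:

```shell
# Sample ip link show output for dpdk0 (stand-in for the real command)
link_out='4: dpdk0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 50:7c:6f:66:ec:6f brd ff:ff:ff:ff:ff:ff
    vf 0 link/ether 02:11:22:33:44:66 brd ff:ff:ff:ff:ff:ff, vlan 564, spoof checking off, link-state auto, trust off
    vf 1 link/ether 02:11:22:33:44:67 brd ff:ff:ff:ff:ff:ff, vlan 564, spoof checking off, link-state auto, trust off'

# MACs reported on the "vf N link/ether <mac>" lines
found_macs=$(printf '%s\n' "$link_out" | awk '$1 == "vf" { print $4 }' | sort)

# MACs declared in extras.t9s.io/du-macs (sorted, since the order differs)
declared_macs=$(printf '%s\n' "02:11:22:33:44:67" "02:11:22:33:44:66" | sort)

if [ "$found_macs" = "$declared_macs" ]; then
  echo "du-macs annotation matches the reported VF MACs"
else
  echo "MISMATCH between reported and declared VF MACs" >&2
fi
```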
Check connectivity with Benetel RU on gNB/CU-DU node
Try logging in to the RU via SSH. For a pre-installed MX-PDK, the IP of the RU is preconfigured as specified in the configuration spreadsheet (see here). For a remote installation, the Benetel RU needs to be configured first (in this case, contact BubbleRAN support).
ssh root@<benetel-ru-ip>
- Username: root
Once logged in, you can check the status of the RU using the following command:
cat /tmp/logs/radio_status
You can also check the MAC of the RU using the following command:
ifconfig
which should provide an output like the following:
root@benetelru:~# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 metric 1
inet 192.168.1.50 netmask 255.255.255.0 broadcast 192.168.1.255
inet6 fe80::8e1f:64ff:fed1:1202 prefixlen 64 scopeid 0x20<link>
ether 8c:1f:64:d1:12:02 txqueuelen 1000 (Ethernet)
RX packets 21819032440 bytes 150092895720236 (136.5 TiB)
RX errors 0 dropped 21760 overruns 0 frame 0
TX packets 2467612690 bytes 14558104214036 (13.2 TiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xff200000-ff2002ff
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 metric 1
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 2906529 bytes 301349892 (287.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2906529 bytes 301349892 (287.3 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Check that the MAC address (in this case 8c:1f:64:d1:12:02) and IP address (in this case 192.168.1.50) of eth0 correspond to the ones specified in extras.t9s.io/ru-macs and extras.t9s.io/o1-remote-ipv4, respectively.
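The eth0 fields can be extracted mechanically as well. The sketch below parses a captured copy of the sample ifconfig output; on the RU you would populate the variable with if_out="$(ifconfig eth0)":

```shell
# Sample ifconfig output for eth0 on the RU (stand-in for the real command)
if_out='eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 metric 1
        inet 192.168.1.50 netmask 255.255.255.0 broadcast 192.168.1.255
        inet6 fe80::8e1f:64ff:fed1:1202 prefixlen 64 scopeid 0x20<link>
        ether 8c:1f:64:d1:12:02 txqueuelen 1000 (Ethernet)'

# First IPv4 address ("inet" line, not "inet6") and the MAC ("ether" line)
ru_ip=$(printf '%s\n' "$if_out" | awk '$1 == "inet" { print $2; exit }')
ru_mac=$(printf '%s\n' "$if_out" | awk '$1 == "ether" { print $2; exit }')

echo "RU eth0: mac=$ru_mac ip=$ru_ip"
```

Compare the two printed values against extras.t9s.io/ru-macs and extras.t9s.io/o1-remote-ipv4 in the manifest.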
Check PTP status on gNB/CU-DU node
Check the status of the PTP services on the gNB/CU-DU node with the following commands:
sudo systemctl status ptp4l.service
sudo systemctl status phc2sys.service
which should provide the following outputs:
$ sudo systemctl status ptp4l.service
● ptp4l.service - Precision Time Protocol (PTP) service
Loaded: loaded (/etc/systemd/system/ptp4l.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2025-07-23 14:22:15 CEST; 1 week 2 days ago
Docs: man:ptp4l
Main PID: 210606 (ptp4l)
Tasks: 1 (limit: 49542)
Memory: 604.0K
CPU: 3h 23min 33.257s
CGroup: /system.slice/ptp4l.service
└─210606 /usr/sbin/ptp4l -f /etc/ptp4l.conf
Aug 01 18:49:27 moto ptp4l[210606]: [810004.243] rms 4 max 7 freq -7677 +/- 4 delay 94 +/- 1
Aug 01 18:49:28 moto ptp4l[210606]: [810005.368] rms 3 max 7 freq -7679 +/- 4 delay 94 +/- 1
Aug 01 18:49:29 moto ptp4l[210606]: [810006.493] rms 4 max 8 freq -7671 +/- 5 delay 95 +/- 1
Aug 01 18:49:30 moto ptp4l[210606]: [810007.618] rms 3 max 8 freq -7668 +/- 5 delay 94 +/- 1
Aug 01 18:49:32 moto ptp4l[210606]: [810008.743] rms 3 max 7 freq -7677 +/- 4 delay 94 +/- 1
Aug 01 18:49:33 moto ptp4l[210606]: [810009.872] rms 3 max 6 freq -7678 +/- 5 delay 94 +/- 1
Aug 01 18:49:34 moto ptp4l[210606]: [810010.993] rms 2 max 4 freq -7677 +/- 3 delay 94 +/- 0
Aug 01 18:49:35 moto ptp4l[210606]: [810012.118] rms 3 max 6 freq -7673 +/- 4 delay 94 +/- 1
Aug 01 18:49:36 moto ptp4l[210606]: [810013.243] rms 2 max 4 freq -7677 +/- 3 delay 95 +/- 1
Aug 01 18:49:37 moto ptp4l[210606]: [810014.368] rms 3 max 6 freq -7671 +/- 5 delay 95 +/- 0
Check that the service is running (i.e. the status is running) and that the values reported after max do not exceed +/- 10.
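Instead of eyeballing the journal, the max values can be extracted and bounded with a short script. The ptp4l_lines variable below holds two sample journal lines from the output above; on the node you might populate it with journalctl -u ptp4l -n 10 --no-pager (assuming the unit logs to the journal):

```shell
# Sample ptp4l journal lines (stand-in for the real journalctl output)
ptp4l_lines="Aug 01 18:49:27 moto ptp4l[210606]: [810004.243] rms 4 max 7 freq -7677 +/- 4 delay 94 +/- 1
Aug 01 18:49:28 moto ptp4l[210606]: [810005.368] rms 3 max 7 freq -7679 +/- 4 delay 94 +/- 1"

# Largest absolute value following the "max" token across all lines
worst_max=$(printf '%s\n' "$ptp4l_lines" | awk '
  { for (i = 1; i < NF; i++) if ($i == "max") { v = $(i+1) + 0; if (v < 0) v = -v; if (v > w) w = v } }
  END { print w + 0 }')

if [ "$worst_max" -le 10 ]; then
  echo "ptp4l max offsets within +/- 10 (worst: $worst_max)"
else
  echo "ptp4l max offset too high: $worst_max" >&2
fi
```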
$ sudo systemctl status phc2sys.service
● phc2sys.service - Synchronize system clock or PTP hardware clock (PHC)
Loaded: loaded (/etc/systemd/system/phc2sys.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2025-08-01 18:49:53 CEST; 13s ago
Docs: man:phc2sys
Main PID: 3287995 (phc2sys)
Tasks: 1 (limit: 49542)
Memory: 440.0K
CPU: 1ms
CGroup: /system.slice/phc2sys.service
└─3287995 /usr/sbin/phc2sys -a -r -r -n 24
Aug 01 18:49:57 moto phc2sys[3287995]: [810033.903] CLOCK_REALTIME phc offset 11 s2 freq -9386 delay 541
Aug 01 18:49:58 moto phc2sys[3287995]: [810034.923] CLOCK_REALTIME phc offset 15 s2 freq -9379 delay 531
Aug 01 18:49:59 moto phc2sys[3287995]: [810035.923] CLOCK_REALTIME phc offset -12 s2 freq -9401 delay 521
Aug 01 18:50:00 moto phc2sys[3287995]: [810036.923] CLOCK_REALTIME phc offset -12 s2 freq -9405 delay 531
Aug 01 18:50:01 moto phc2sys[3287995]: [810037.923] CLOCK_REALTIME phc offset 17 s2 freq -9379 delay 541
Aug 01 18:50:02 moto phc2sys[3287995]: [810038.923] CLOCK_REALTIME phc offset 13 s2 freq -9378 delay 541
Aug 01 18:50:03 moto phc2sys[3287995]: [810039.923] CLOCK_REALTIME phc offset -9 s2 freq -9396 delay 531
Aug 01 18:50:04 moto phc2sys[3287995]: [810040.923] CLOCK_REALTIME phc offset -5 s2 freq -9395 delay 531
Aug 01 18:50:05 moto phc2sys[3287995]: [810041.923] CLOCK_REALTIME phc offset 2 s2 freq -9389 delay 531
Aug 01 18:50:06 moto phc2sys[3287995]: [810042.923] CLOCK_REALTIME phc offset -9 s2 freq -9400 delay 531
Check that the service is running (i.e. the status is running) and that the values reported after offset do not exceed +/- 30.
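The same extraction works for the phc2sys offset values. The phc2sys_lines variable below holds two sample journal lines from the output above; on the node you might populate it with journalctl -u phc2sys -n 10 --no-pager (assuming the unit logs to the journal):

```shell
# Sample phc2sys journal lines (stand-in for the real journalctl output)
phc2sys_lines="Aug 01 18:49:57 moto phc2sys[3287995]: [810033.903] CLOCK_REALTIME phc offset 11 s2 freq -9386 delay 541
Aug 01 18:49:59 moto phc2sys[3287995]: [810035.923] CLOCK_REALTIME phc offset -12 s2 freq -9401 delay 521"

# Largest absolute value following the "offset" token across all lines
worst_offset=$(printf '%s\n' "$phc2sys_lines" | awk '
  { for (i = 1; i < NF; i++) if ($i == "offset") { v = $(i+1) + 0; if (v < 0) v = -v; if (v > w) w = v } }
  END { print w + 0 }')

if [ "$worst_offset" -le 30 ]; then
  echo "phc2sys offsets within +/- 30 (worst: $worst_offset)"
else
  echo "phc2sys offset too high: $worst_offset" >&2
fi
```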
Sample output of a failed service:
$ sudo systemctl status phc2sys.service
× phc2sys.service - Synchronize system clock or PTP hardware clock (PHC)
Loaded: loaded (/etc/systemd/system/phc2sys.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2025-07-26 05:38:23 CEST; 6 days ago
Docs: man:phc2sys
Main PID: 3622303 (code=exited, status=255/EXCEPTION)
CPU: 2.302s
Jul 26 05:38:16 moto phc2sys[3622303]: [244133.013] CLOCK_REALTIME phc offset 9 s2 freq -9085 delay 551
Jul 26 05:38:17 moto phc2sys[3622303]: [244134.013] CLOCK_REALTIME phc offset -1 s2 freq -9093 delay 511
Jul 26 05:38:18 moto phc2sys[3622303]: [244135.013] CLOCK_REALTIME phc offset 1 s2 freq -9091 delay 551
Jul 26 05:38:19 moto phc2sys[3622303]: [244136.013] CLOCK_REALTIME phc offset 12 s2 freq -9080 delay 521
Jul 26 05:38:20 moto phc2sys[3622303]: [244137.013] CLOCK_REALTIME phc offset -1 s2 freq -9089 delay 551
Jul 26 05:38:21 moto phc2sys[3622303]: [244138.013] CLOCK_REALTIME phc offset 7 s2 freq -9081 delay 490
Jul 26 05:38:22 moto phc2sys[3622303]: [244139.014] CLOCK_REALTIME phc offset 6 s2 freq -9080 delay 541
Jul 26 05:38:23 moto systemd[1]: phc2sys.service: Main process exited, code=exited, status=255/EXCEPTION
Jul 26 05:38:23 moto systemd[1]: phc2sys.service: Failed with result 'exit-code'.
Jul 26 05:38:23 moto systemd[1]: phc2sys.service: Consumed 2.302s CPU time.
If either of the services has failed, or the values are exceptionally high, you may try restarting the services:
sudo systemctl restart ptp4l.service
sudo systemctl restart phc2sys.service
Deployment
After all checks have been done on the gNB/CU-DU node, back on the control-plane node run the command brc install network benetel.yaml to deploy the network.
It should finish without errors and print the names of the Kubernetes resources that were created.
Check the status of the deployment using the command brc observe.
Wait until all the elements other than the UE show 1/1 Y in the STATUS column.
For the first deployment, and every time any field in the cells section of the .yaml is changed, the option extras.t9s.io/benetel-ru-reboot has to be set to true.
Otherwise, for a faster deployment, you may leave it as false.
Once the network is running, you may install either an in-cluster or external UE as explained in the previous section.