Lab 2: Network Slicing using Service Profile
In this lab you will learn how to leverage BubbleRAN's Slice Operator (SO) to deploy end-to-end network slices through the definition of ServiceProfiles.
Slice Creation
In this sample, a ServiceProfile with a single eMBB traffic slice is defined, with a maximum downlink throughput of 90Mbps and an average downlink throughput of 80Mbps.
apiVersion: athena.trirematics.io/v1
kind: ServiceProfile
metadata:
  name: embb
  namespace: trirematics
spec:
  coverage:
  - region: region1
    zones:
    - name: zone-one
      user-density: 1
      user-equipment-type: rfsim
  reporting-period: 1h
  data-flows:
  - name: embb
    description: |
      Data flows designated for embb-traffic.
    data-network: internet
    traffic-class: burstable
    content-rate:
      downlink:
        maximum: 90Mbps
        average: 80Mbps
      uplink:
        maximum: 10Mbps
        average: 2Mbps
    latency-class: unrestricted
    payload-types:
    - use-case: data-transfer
      application-protocol: http/3
      transport-protocol: tcp
      content-type:
      - application/octet-stream
To deploy this Service Profile, run `brc install service eMBB_service.yaml`. It should finish without errors and print out the names of the three Kubernetes resources that were created.
Check the status of the deployment with `brc observe` and wait until every Element's STATUS shows `1/1 Y`.
Within the `brc observe` dashboard you can also press the Tab key until you reach the Services tab, where you should see the following slice information.
To list the details of the deployed service, run the `brc list service` command, which produces the following output:
$ brc list service
╭──────────────┬─────────┬──────────┬──────────────┬───────────────────────────┬──────────────────────────┬───────────────┬───────────────┬────────────────────╮
│ SERVICE NAME │ REGIONS │ ZONES │ DATA NETWORK │ DOWNLINK RATE │ UPLINK RATE │ LATENCY CLASS │ TRAFFIC CLASS │ TRANSPORT PROTOCOL │
├──────────────┼─────────┼──────────┼──────────────┼───────────────────────────┼──────────────────────────┼───────────────┼───────────────┼────────────────────┤
│ embb │ region1 │ zone-one │ internet │ avg: 80Mbps (max: 90Mbps) │ avg: 2Mbps (max: 10Mbps) │ unrestricted │ burstable │ tcp │
╰──────────────┴─────────┴──────────┴──────────────┴───────────────────────────┴──────────────────────────┴───────────────┴───────────────┴────────────────────╯
╭──────────┬─────────────────┬───────┬──────────────────┬──────────┬─────┬──────┬─────┬───────────────┬─────────────╮
│ SLICE ID │ SERVICE PROFILE │ PLMN │ REGION-ZONE │ DNN │ SD │ SST │ QFI │ AMBR DOWNLINK │ AMBR UPLINK │
├──────────┼─────────────────┼───────┼──────────────────┼──────────┼─────┼──────┼─────┼───────────────┼─────────────┤
│ 16777217 │ embb │ 00102 │ region1-zone-one │ internet │ 0x1 │ eMBB │ 15 │ 94371 Kbps │ 10485 Kbps │
╰──────────┴─────────────────┴───────┴──────────────────┴──────────┴─────┴──────┴─────┴───────────────┴─────────────╯
Networks marked with 🔒 are protected by a finalizer
╭──────────────┬─────────┬───────┬───────┬─────────┬───────────┬───────╮
│ NETWORK │ #ACCESS │ #CORE │ #EDGE │ #SLICES │ #ELEMENTS │ READY │
├──────────────┼─────────┼───────┼───────┼─────────┼───────────┼───────┤
│ 🔒 networkso │ 1 │ 13 │ 2 │ 1 │ 16 │ ✅ │
╰──────────────┴─────────┴───────┴───────┴─────────┴───────────┴───────╯
╭────────────────────────────────┬───────────┬────────────────────────┬───────┬────────┬──────────┬──────────╮
│ ACCESS SECTION │ NETWORK │ MODEL │ CELLS │ RADIO │ SLICES │ ISOLATED │
├────────────────────────────────┼───────────┼────────────────────────┼───────┼────────┼──────────┼──────────┤
│ oai-monolithic-shared-16777217 │ networkso │ oai-ran/monolithic-gnb │ n78 │ rf-sim │ 16777217 │ false │
╰────────────────────────────────┴───────────┴────────────────────────┴───────┴────────┴──────────┴──────────╯
╭──────────────────────────┬────────────────┬──────────────┬──────────┬──────────┬─────╮
│ CORE SECTION │ PARENT NETWORK │ MODEL │ SLICES │ ISOLATED │ SST │
├──────────────────────────┼────────────────┼──────────────┼──────────┼──────────┼─────┤
│ ogs-mdb-shared │ networkso │ open5gs/mdb │ 16777217 │ false │ 0x1 │
│ ogs-amf-shared-16777217 │ networkso │ open5gs/amf │ 16777217 │ false │ 0x1 │
│ ogs-ausf-shared-16777217 │ networkso │ open5gs/ausf │ 16777217 │ false │ 0x1 │
│ ogs-bsf-shared-16777217 │ networkso │ open5gs/bsf │ 16777217 │ false │ 0x1 │
│ ogs-nrf-shared-16777217 │ networkso │ open5gs/nrf │ 16777217 │ false │ 0x1 │
│ ogs-nssf-shared-16777217 │ networkso │ open5gs/nssf │ 16777217 │ false │ 0x1 │
│ ogs-pcf-shared-16777217 │ networkso │ open5gs/pcf │ 16777217 │ false │ 0x1 │
│ ogs-scp-shared-16777217 │ networkso │ open5gs/scp │ 16777217 │ false │ 0x1 │
│ ogs-udm-shared-16777217 │ networkso │ open5gs/udm │ 16777217 │ false │ 0x1 │
│ ogs-udr-shared-16777217 │ networkso │ open5gs/udr │ 16777217 │ false │ 0x1 │
│ ogs-upf-shared-16777217 │ networkso │ open5gs/upf │ 16777217 │ false │ 0x1 │
│ ogs-smf-shared-16777217 │ networkso │ open5gs/smf │ 16777217 │ false │ 0x1 │
│ ogs-pcrf-shared-16777217 │ networkso │ open5gs/pcrf │ 16777217 │ false │ 0x1 │
╰──────────────────────────┴────────────────┴──────────────┴──────────┴──────────┴─────╯
╭─────────────────────┬───────────┬────────────────────┬──────────┬──────────┬────────╮
│ EDGE SECTION │ NETWORK │ MODEL │ SLICES │ ISOLATED │ RIC TA │
├─────────────────────┼───────────┼────────────────────┼──────────┼──────────┼────────┤
│ ric-shared-16777217 │ networkso │ mosaic5g/flexric │ 16777217 │ false │ 1281 │
│ sdl-shared-16777217 │ networkso │ mosaic5g/xapps-sdl │ 16777217 │ false │ 1281 │
╰─────────────────────┴───────────┴────────────────────┴──────────┴──────────┴────────╯
Terminals marked with 🔒 are protected by a finalizer
╭──────────┬───────┬────────┬──────────────────┬──────┬─────┬──────────────┬───────╮
│ TERMINAL │ MODEL │ VENDOR │ PREFERRED ACCESS │ IMSI │ DNN │ TARGET CORES │ READY │
├──────────┼───────┼────────┼──────────────────┼──────┼─────┼──────────────┼───────┤
╰──────────┴───────┴────────┴──────────────────┴──────┴─────┴──────────────┴───────╯
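The AMBR columns appear to be derived from the maximum content-rate of each slice: the numbers in the listing match a conversion of Kbps = Mbps × 2^20 / 1000, truncated to an integer (e.g. the 10Mbps uplink maximum becomes 10485 Kbps). This is an inference from the output shown above, not a documented formula; a minimal sketch:

```shell
# Hypothetical reconstruction of the Mbps -> AMBR Kbps conversion
# (assumption inferred from the listing: each Mbps is treated as
# 2^20 bit/s and reported in units of 1000 bit/s, truncated).
mbps_to_ambr_kbps() {
  echo $(( $1 * 1048576 / 1000 ))
}

mbps_to_ambr_kbps 10   # prints 10485
mbps_to_ambr_kbps 90   # prints 94371
```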
The Slice ID is a unique numeric identifier generated by encoding:
- SST (Slice/Service Type, 8 bits)
- SD (Slice Differentiator, 24 bits)
The formula used is:
SliceID = (SST << 24) | SD
`SST << 24` places the SST value in the most significant byte (the top 8 bits), while SD occupies the lower 24 bits. The result is a single 32-bit integer representing both values.
Example: eMBB Slice
- SST = 1 (eMBB)
- SD = 0x1
Calculation:
1 << 24 = 16,777,216
SD = 1
Slice ID = 16,777,216 + 1 = 16,777,217
Thus, the Slice ID for the eMBB service defined in this lab is 16777217.
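The encoding can be reproduced with plain shell arithmetic; the values below are the SST and SD of this lab's eMBB slice:

```shell
# SliceID = (SST << 24) | SD
sst=1   # eMBB
sd=1
slice_id=$(( (sst << 24) | sd ))
echo "$slice_id"                 # prints 16777217

# Decoding works in reverse:
echo $(( slice_id >> 24 ))       # SST: prints 1
echo $(( slice_id & 0xFFFFFF ))  # SD:  prints 1
```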
Connecting a UE to the network slice
Now that we have created our network slice we may connect a UE using the following Terminal descriptor.
apiVersion: athena.trirematics.io/v1
kind: Terminal
metadata:
  name: ue1
  namespace: trirematics
spec:
  identity:
    imsi: "001020000000001"
    key: 0xfec86ba6eb707ed08905757b1bb44b8f
    opc: 0xc42449363bbad02b66d16bc975d77cc1
    pin: "1234"
  model: terminal/nr-rfsim
  preferred-access: oai-monolithic-shared-16777217.networkso
  radio:
    bands:
    - n78
  slice:
    differentiator: 1
    dnn: internet
    network-mode: IPV4
    service-type: eMBB
  stack: 5g-sa
  target-cores:
  - ogs-mdb-shared.networkso
  vendor: oai
  readiness-check:
    method: ping
    target: google-ip
    interface-name: oaitun_ue0
To define the preferred-access and target-cores fields you can rely on the output of `brc list service`.
Deploy the terminal with the command `brc install network eMBB_terminal.yaml`.
Check the status of the terminal with `brc observe` and wait until every Element's STATUS shows `1/1 Y`.
Once the terminal is connected you can perform a downlink throughput test with the command `brc test throughput ue1 dl gateway`, obtaining output like the following:
$ brc test throughput ue1 dl gateway -- -t 10
Connecting to host 12.231.203.1, port 5201
Reverse mode, remote host 12.231.203.1 is sending
[ 5] local 12.231.203.2 port 50875 connected to 12.231.203.1 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 15.3 MBytes 128 Mbits/sec
[ 5] 1.00-2.00 sec 17.6 MBytes 148 Mbits/sec
[ 5] 2.00-3.00 sec 15.4 MBytes 129 Mbits/sec
[ 5] 3.00-4.00 sec 12.2 MBytes 103 Mbits/sec
[ 5] 4.00-5.00 sec 9.77 MBytes 82.0 Mbits/sec
[ 5] 5.00-6.00 sec 9.02 MBytes 75.6 Mbits/sec
[ 5] 6.00-7.00 sec 8.94 MBytes 75.0 Mbits/sec
[ 5] 7.00-8.00 sec 9.86 MBytes 82.7 Mbits/sec
[ 5] 8.00-9.00 sec 10.4 MBytes 87.6 Mbits/sec
[ 5] 9.00-10.00 sec 10.5 MBytes 88.2 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 130 MBytes 109 Mbits/sec 0 sender
[ 5] 0.00-10.00 sec 119 MBytes 99.9 Mbits/sec receiver
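Note how the bitrate settles once the control loop takes effect: averaging the last five one-second samples of the run above shows convergence close to the profile's 80Mbps downlink average (a quick check with awk):

```shell
# Mean of the last five per-second bitrates (Mbits/sec) copied from
# the iperf run above; the loop has settled near the 80 Mbps average.
printf '75.6 75.0 82.7 87.6 88.2' \
  | awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; printf "%.1f\n", s / NF }'
# prints 81.8
```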
As can be seen from the throughput test output, as soon as the UE starts sending iperf traffic, the SLA xApp of the PolicyJob (as described here) starts its control loop to enforce the SLA defined in the ServiceProfile.
With the command `brc extract logs sla.dynxapp-ric-shared-16777217.networkso` the control loop can be observed in action, adjusting the PRBs to achieve the desired throughput.
Slice Update
Now that we have successfully created our network slice, we can update it by changing the throughput requirements as in the example below. This time we have decreased the maximum and average downlink throughput to 70Mbps and 60Mbps respectively.
apiVersion: athena.trirematics.io/v1
kind: ServiceProfile
metadata:
  name: embb
  namespace: trirematics
spec:
  coverage:
  - region: region1
    zones:
    - name: zone-one
      user-density: 1
      user-equipment-type: rfsim
  reporting-period: 1h
  data-flows:
  - name: embb
    description: |
      Data flows designated for embb-traffic.
    data-network: internet
    traffic-class: burstable
    content-rate:
      downlink:
        maximum: 70Mbps
        average: 60Mbps
      uplink:
        maximum: 10Mbps
        average: 2Mbps
    latency-class: unrestricted
    payload-types:
    - use-case: data-transfer
      application-protocol: http/3
      transport-protocol: tcp
      content-type:
      - application/octet-stream
To deploy this Service Profile, run `brc install service eMBB_service.yaml`. It should finish without errors and print out the names of the three Kubernetes resources that were created.
Check the status of the deployment with `brc observe` and wait until every Element's STATUS shows `1/1 Y`.
When performing a Slice Update operation, only the SLA xApp is re-deployed (the previous PolicyJob is deleted and a new one is created). This way, the rest of the NFs are untouched and UE connectivity is maintained.
Once the slice is updated you can perform a new downlink throughput test with the command `brc test throughput ue1 dl gateway` to observe the new SLA throughput:
$ brc test throughput ue1 dl gateway
Connecting to host 12.231.203.1, port 5201
Reverse mode, remote host 12.231.203.1 is sending
[ 5] local 12.231.203.3 port 40649 connected to 12.231.203.1 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 13.6 MBytes 114 Mbits/sec
[ 5] 1.00-2.00 sec 13.4 MBytes 112 Mbits/sec
[ 5] 2.00-3.00 sec 10.9 MBytes 91.8 Mbits/sec
[ 5] 3.00-4.00 sec 7.75 MBytes 65.0 Mbits/sec
[ 5] 4.00-5.00 sec 6.25 MBytes 52.4 Mbits/sec
[ 5] 5.00-6.00 sec 6.92 MBytes 58.1 Mbits/sec
[ 5] 6.00-7.00 sec 7.27 MBytes 61.0 Mbits/sec
[ 5] 7.00-8.00 sec 7.27 MBytes 61.0 Mbits/sec
[ 5] 8.00-9.00 sec 7.28 MBytes 61.1 Mbits/sec
[ 5] 9.00-10.00 sec 7.27 MBytes 60.9 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-9.00 sec 98.9 MBytes 92.1 Mbits/sec 0 sender
[ 5] 0.00-10.00 sec 88.0 MBytes 73.8 Mbits/sec receiver
iperf Done.
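The same check against the updated profile: the last five one-second samples of the run above now average close to the new 60Mbps downlink target:

```shell
# Mean of the last five per-second bitrates (Mbits/sec) copied from
# the iperf run above, after the slice update to a 60 Mbps average.
printf '58.1 61.0 61.0 61.1 60.9' \
  | awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; printf "%.1f\n", s / NF }'
# prints 60.4
```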
Slice Addition
In this sample, a ServiceProfile with a single p2p traffic slice is defined, with a maximum downlink throughput of 20Mbps and an average downlink throughput of 20Mbps.
apiVersion: athena.trirematics.io/v1
kind: ServiceProfile
metadata:
  name: p2p
  namespace: trirematics
spec:
  coverage:
  - region: region1
    zones:
    - name: zone-one
      user-density: 1
      user-equipment-type: rfsim
  reporting-period: 20m
  data-flows:
  - name: videochat
    description: |
      Data flows designated for video chat.
    data-network: p2p
    traffic-class: burstable
    content-rate:
      downlink:
        maximum: 20Mbps
        average: 20Mbps
      uplink:
        maximum: 10Mbps
        average: 10Mbps
    latency-class: p2p
    payload-types:
    - use-case: video-chat
      application-protocol: rtp
      transport-protocol: udp
      content-type:
      - video/h264
      - audio/opus
To deploy this Service Profile, run `brc install service p2p_service.yaml`. It should finish without errors and print out the names of the three Kubernetes resources that were created.
Check the status of the deployment with `brc observe` and wait until every Element's STATUS shows `1/1 Y`.
The result can be seen in the output of `brc list service`:
$ brc list service
╭──────────────┬─────────┬──────────┬──────────────┬───────────────────────────┬───────────────────────────┬───────────────┬───────────────┬────────────────────╮
│ SERVICE NAME │ REGIONS │ ZONES │ DATA NETWORK │ DOWNLINK RATE │ UPLINK RATE │ LATENCY CLASS │ TRAFFIC CLASS │ TRANSPORT PROTOCOL │
├──────────────┼─────────┼──────────┼──────────────┼───────────────────────────┼───────────────────────────┼───────────────┼───────────────┼────────────────────┤
│ embb │ region1 │ zone-one │ internet │ avg: 60Mbps (max: 70Mbps) │ avg: 2Mbps (max: 10Mbps) │ unrestricted │ burstable │ tcp │
│ p2p │ region1 │ zone-one │ p2p │ avg: 20Mbps (max: 20Mbps) │ avg: 10Mbps (max: 10Mbps) │ p2p │ burstable │ udp │
╰──────────────┴─────────┴──────────┴──────────────┴───────────────────────────┴───────────────────────────┴───────────────┴───────────────┴────────────────────╯
╭──────────┬─────────────────┬───────┬──────────────────┬──────────┬─────┬───────┬─────┬───────────────┬─────────────╮
│ SLICE ID │ SERVICE PROFILE │ PLMN │ REGION-ZONE │ DNN │ SD │ SST │ QFI │ AMBR DOWNLINK │ AMBR UPLINK │
├──────────┼─────────────────┼───────┼──────────────────┼──────────┼─────┼───────┼─────┼───────────────┼─────────────┤
│ 16777217 │ embb │ 00102 │ region1-zone-one │ internet │ 0x1 │ eMBB │ 15 │ 73400 Kbps │ 10485 Kbps │
│ 33554433 │ p2p │ 00102 │ region1-zone-one │ p2p │ 0x1 │ URLLC │ 5 │ 20971 Kbps │ 10485 Kbps │
╰──────────┴─────────────────┴───────┴──────────────────┴──────────┴─────┴───────┴─────┴───────────────┴─────────────╯
Networks marked with 🔒 are protected by a finalizer
╭──────────────┬─────────┬───────┬───────┬─────────┬───────────┬───────╮
│ NETWORK │ #ACCESS │ #CORE │ #EDGE │ #SLICES │ #ELEMENTS │ READY │
├──────────────┼─────────┼───────┼───────┼─────────┼───────────┼───────┤
│ 🔒 networkso │ 1 │ 16 │ 2 │ 2 │ 19 │ ✅ │
╰──────────────┴─────────┴───────┴───────┴─────────┴───────────┴───────╯
╭──────────────────────────────────┬───────────┬────────────────────────┬───────┬────────┬───────────────────┬──────────╮
│ ACCESS SECTION │ NETWORK │ MODEL │ CELLS │ RADIO │ SLICES │ ISOLATED │
├──────────────────────────────────┼───────────┼────────────────────────┼───────┼────────┼───────────────────┼──────────┤
│ oai-monolithic-shared-3119573096 │ networkso │ oai-ran/monolithic-gnb │ n78 │ rf-sim │ 16777217-33554433 │ false │
╰──────────────────────────────────┴───────────┴────────────────────────┴───────┴────────┴───────────────────┴──────────╯
╭────────────────────────────┬────────────────┬──────────────┬───────────────────┬──────────┬──────────╮
│ CORE SECTION │ PARENT NETWORK │ MODEL │ SLICES │ ISOLATED │ SST │
├────────────────────────────┼────────────────┼──────────────┼───────────────────┼──────────┼──────────┤
│ ogs-mdb-shared │ networkso │ open5gs/mdb │ 16777217-33554433 │ false │ 0x1, 0x1 │
│ ogs-amf-shared-3119573096 │ networkso │ open5gs/amf │ 16777217-33554433 │ false │ 0x1, 0x1 │
│ ogs-ausf-shared-3119573096 │ networkso │ open5gs/ausf │ 16777217-33554433 │ false │ 0x1, 0x1 │
│ ogs-bsf-shared-3119573096 │ networkso │ open5gs/bsf │ 16777217-33554433 │ false │ 0x1, 0x1 │
│ ogs-nrf-shared-3119573096 │ networkso │ open5gs/nrf │ 16777217-33554433 │ false │ 0x1, 0x1 │
│ ogs-nssf-shared-3119573096 │ networkso │ open5gs/nssf │ 16777217-33554433 │ false │ 0x1, 0x1 │
│ ogs-pcf-shared-3119573096 │ networkso │ open5gs/pcf │ 16777217-33554433 │ false │ 0x1, 0x1 │
│ ogs-scp-shared-3119573096 │ networkso │ open5gs/scp │ 16777217-33554433 │ false │ 0x1, 0x1 │
│ ogs-udm-shared-3119573096 │ networkso │ open5gs/udm │ 16777217-33554433 │ false │ 0x1, 0x1 │
│ ogs-udr-shared-3119573096 │ networkso │ open5gs/udr │ 16777217-33554433 │ false │ 0x1, 0x1 │
│ ogs-upf-shared-16777217 │ networkso │ open5gs/upf │ 16777217 │ false │ 0x1 │
│ ogs-smf-shared-16777217 │ networkso │ open5gs/smf │ 16777217 │ false │ 0x1 │
│ ogs-pcrf-shared-16777217 │ networkso │ open5gs/pcrf │ 16777217 │ false │ 0x1 │
│ ogs-upf-shared-33554433 │ networkso │ open5gs/upf │ 33554433 │ false │ 0x1 │
│ ogs-smf-shared-33554433 │ networkso │ open5gs/smf │ 33554433 │ false │ 0x1 │
│ ogs-pcrf-shared-33554433 │ networkso │ open5gs/pcrf │ 33554433 │ false │ 0x1 │
╰────────────────────────────┴────────────────┴──────────────┴───────────────────┴──────────┴──────────╯
╭───────────────────────┬───────────┬────────────────────┬───────────────────┬──────────┬────────╮
│ EDGE SECTION │ NETWORK │ MODEL │ SLICES │ ISOLATED │ RIC TA │
├───────────────────────┼───────────┼────────────────────┼───────────────────┼──────────┼────────┤
│ ric-shared-3119573096 │ networkso │ mosaic5g/flexric │ 16777217-33554433 │ false │ 1281 │
│ sdl-shared-3119573096 │ networkso │ mosaic5g/xapps-sdl │ 16777217-33554433 │ false │ 1281 │
╰───────────────────────┴───────────┴────────────────────┴───────────────────┴──────────┴────────╯
Terminals marked with 🔒 are protected by a finalizer
╭──────────┬───────┬────────┬──────────────────┬──────┬─────┬──────────────┬───────╮
│ TERMINAL │ MODEL │ VENDOR │ PREFERRED ACCESS │ IMSI │ DNN │ TARGET CORES │ READY │
├──────────┼───────┼────────┼──────────────────┼──────┼─────┼──────────────┼───────┤
╰──────────┴───────┴────────┴──────────────────┴──────┴─────┴──────────────┴───────╯
Now two slices are deployed in the network, with IDs 16777217 and 33554433. Some NFs are shared between the slices (those with the identifier 3119573096), while others belong only to a specific slice (e.g. the UPFs, SMFs and PCRFs). Since the gNB now has a new identifier, the preferred-access field of the terminal YAML needs to be updated accordingly. In the following sample we define one UE for each slice.
apiVersion: athena.trirematics.io/v1
kind: Terminal
metadata:
  name: ue1
  namespace: trirematics
spec:
  identity:
    imsi: "001020000000001"
    key: 0xfec86ba6eb707ed08905757b1bb44b8f
    opc: 0xc42449363bbad02b66d16bc975d77cc1
    pin: "1234"
  model: terminal/nr-rfsim
  preferred-access: oai-monolithic-shared-3119573096.networkso
  radio:
    bands:
    - n78
  slice:
    differentiator: 1
    dnn: internet
    network-mode: IPV4
    service-type: eMBB
  stack: 5g-sa
  target-cores:
  - ogs-mdb-shared.networkso
  vendor: oai
  readiness-check:
    method: ping
    target: google-ip
    interface-name: oaitun_ue0
---
apiVersion: athena.trirematics.io/v1
kind: Terminal
metadata:
  name: ue2
  namespace: trirematics
spec:
  identity:
    imsi: "001020000000002"
    key: 0xfec86ba6eb707ed08905757b1bb44b8f
    opc: 0xc42449363bbad02b66d16bc975d77cc1
    pin: "1234"
  model: terminal/nr-rfsim
  preferred-access: oai-monolithic-shared-3119573096.networkso
  radio:
    bands:
    - n78
  slice:
    differentiator: 1
    dnn: p2p
    network-mode: IPV4
    service-type: 2
  stack: 5g-sa
  target-cores:
  - ogs-mdb-shared.networkso
  vendor: oai
  readiness-check:
    method: ping
    target: google-ip
    interface-name: oaitun_ue0
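The slice stanza of ue2 maps directly onto the second Slice ID in the listing: service-type 2 is the SST (shown as URLLC in the slice table) and differentiator 1 is the SD, which encode to 33554433 with the same formula as before:

```shell
# ue2: SST (service-type) = 2, SD (differentiator) = 1
echo $(( (2 << 24) | 1 ))   # prints 33554433
```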
Deploy the terminals with the command `brc install network terminals.yaml`.
Check the status of the terminals with `brc observe` and wait until every Element's STATUS shows `1/1 Y`.
After installing the two terminals you can test the throughput of each one from two separate Linux terminals:
# TERMINAL 1
$ brc test throughput ue1 dl gateway
Connecting to host 12.231.203.1, port 5201
Reverse mode, remote host 12.231.203.1 is sending
[ 5] local 12.231.203.3 port 40649 connected to 12.231.203.1 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 13.6 MBytes 114 Mbits/sec
[ 5] 1.00-2.00 sec 13.4 MBytes 112 Mbits/sec
[ 5] 2.00-3.00 sec 10.9 MBytes 91.8 Mbits/sec
[ 5] 3.00-4.00 sec 7.75 MBytes 65.0 Mbits/sec
[ 5] 4.00-5.00 sec 6.25 MBytes 52.4 Mbits/sec
[ 5] 5.00-6.00 sec 6.92 MBytes 58.1 Mbits/sec
[ 5] 6.00-7.00 sec 7.27 MBytes 61.0 Mbits/sec
[ 5] 7.00-8.00 sec 7.27 MBytes 61.0 Mbits/sec
[ 5] 8.00-9.00 sec 7.28 MBytes 61.1 Mbits/sec
[ 5] 9.00-10.00 sec 7.27 MBytes 60.9 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-9.00 sec 98.9 MBytes 92.1 Mbits/sec 0 sender
[ 5] 0.00-10.00 sec 88.0 MBytes 73.8 Mbits/sec receiver
iperf Done.
# TERMINAL 2
$ brc test throughput ue2 dl gateway -- -t 10
Connecting to host 12.5.205.1, port 5201
Reverse mode, remote host 12.5.205.1 is sending
[ 5] local 12.5.205.2 port 42491 connected to 12.5.205.1 port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 19.3 MBytes 162 Mbits/sec
[ 5] 1.00-2.00 sec 5.87 MBytes 49.3 Mbits/sec
[ 5] 2.00-3.00 sec 4.60 MBytes 38.6 Mbits/sec
[ 5] 3.00-4.00 sec 3.44 MBytes 28.9 Mbits/sec
[ 5] 4.00-5.00 sec 2.82 MBytes 23.7 Mbits/sec
[ 5] 5.00-6.00 sec 1.97 MBytes 16.6 Mbits/sec
[ 5] 6.00-7.00 sec 1.91 MBytes 16.0 Mbits/sec
[ 5] 7.00-8.00 sec 2.69 MBytes 22.6 Mbits/sec
[ 5] 8.00-9.00 sec 2.75 MBytes 23.1 Mbits/sec
[ 5] 9.00-10.00 sec 2.33 MBytes 19.6 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.02 sec 57.2 MBytes 47.9 Mbits/sec 1 sender
[ 5] 0.00-10.00 sec 47.7 MBytes 40.0 Mbits/sec receiver
iperf Done.
Slice Upgrade
In addition to updating a slice, the SO supports slice upgrade operations, such as changing the region and zone, the user density, or the use case.
In the following sample we upgrade the p2p slice to a different use-case, from video-chat to push-to-talk.
apiVersion: athena.trirematics.io/v1
kind: ServiceProfile
metadata:
  name: p2p
  namespace: trirematics
spec:
  coverage:
  - region: region1
    zones:
    - name: zone-one
      user-density: 1
      user-equipment-type: rfsim
  reporting-period: 20m
  data-flows:
  - name: p2t
    description: |
      Data flows designated for push-to-talk application.
    data-network: p2p
    traffic-class: burstable
    content-rate:
      downlink:
        maximum: 10Mbps
        average: 10Mbps
      uplink:
        maximum: 10Mbps
        average: 10Mbps
    latency-class: p2p
    payload-types:
    - use-case: push-to-talk
      application-protocol: http/3
      transport-protocol: rtp
      content-type:
      - audio/opus
To deploy this Service Profile, run `brc install service p2p_service.yaml`. It should finish without errors and print out the names of the three Kubernetes resources that were created.
Check the status of the deployment with `brc observe` and wait until every Element's STATUS shows `1/1 Y`.