Version: v4.0.0 [Denim]

Data Sources and Available Metrics

The monitoring stack combines various data sources into a unified view. Every metric includes a job label, which allows you to easily filter and group metrics based on their originating source.
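For example, a quick way to see which sources are currently being scraped is to group the standard up metric by its job label; the filtered query below is only an illustration, since the exact job values depend on your deployment:

    # Count active scrape targets per data source via the job label
    count by (job) (up)

    # Restrict a query to a single source (job value shown is illustrative)
    up{job="node-exporter"}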

This document serves as a catalog of the different data sources available.

Infrastructure Metrics

Kube State Metrics

These metrics are generated by the kube-state-metrics exporter and provide a snapshot of the state of Kubernetes objects.
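For instance, a query along the following lines summarizes pod states from these metrics (kube_pod_status_phase is a standard kube-state-metrics metric; the job value shown is an assumption and may differ in this stack):

    # Assumption: kube-state-metrics is scraped under job="kube-state-metrics"
    sum by (phase) (kube_pod_status_phase{job="kube-state-metrics"})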

API Server

These metrics provide information about the Kubernetes API server's performance.
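A sketch of a typical query over these metrics (apiserver_request_total is a standard API server metric; the job value is an assumption):

    # Assumption: the API server is scraped under job="apiserver"
    sum by (verb, code) (rate(apiserver_request_total{job="apiserver"}[5m]))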

CoreDNS

These metrics provide information about the CoreDNS server, which handles DNS resolution within the cluster.
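For example, the distribution of DNS response codes can be derived from these metrics (coredns_dns_responses_total is a standard CoreDNS metric; the job value is an assumption):

    # Assumption: CoreDNS is scraped under job="coredns"
    sum by (rcode) (rate(coredns_dns_responses_total{job="coredns"}[5m]))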

Node Exporter

These metrics provide low-level information about the nodes (hosts) in the cluster, such as CPU, memory, and disk usage.
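For example, available memory as a fraction of total memory per node (standard node exporter metrics; the job value is an assumption):

    # Assumption: the node exporter is scraped under job="node-exporter"
    node_memory_MemAvailable_bytes{job="node-exporter"}
      / node_memory_MemTotal_bytes{job="node-exporter"}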

VictoriaMetrics Internal Metrics

These metrics provide insights into the health and performance of the VictoriaMetrics components themselves, so the monitoring system itself can be monitored.

  • Job: "vmsingle-vmks-victoria-metrics-k8s-stack". Metrics from the vmsingle instance, covering its internal operations, resource usage, and query performance.
    • Examples: vm_rows_inserted_total, vm_requests_total.
  • Job: "vmagent-vmks-victoria-metrics-k8s-stack". Metrics from the vmagent instances, indicating scraping efficiency, remote write operations, and agent health.
    • Examples: vmagent_remotewrite_succeeded_samples_total, vmagent_targets_active.
  • Job: "vmks-victoria-metrics-operator". Metrics exposing the state and operations of the VictoriaMetrics Operator, which manages the lifecycle of VictoriaMetrics components in Kubernetes.
    • Examples: vm_operator_reconciliations_total, vm_operator_managed_resources.
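For example, the remote-write throughput of the vmagent instances can be tracked with a query like the following, using the job and metric names listed above:

    # Samples successfully forwarded by vmagent per second
    rate(vmagent_remotewrite_succeeded_samples_total{job="vmagent-vmks-victoria-metrics-k8s-stack"}[5m])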

Full List: Refer to the VictoriaMetrics documentation for detailed metric lists for each component.

Energy Consumption Metrics

Kepler

Kepler provides estimated energy consumption metrics for pods and containers. It uses eBPF to probe CPU performance counters and Linux kernel tracepoints to generate these estimates.
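As a sketch, summing the rate of the container energy counter yields an estimated power draw in watts (kepler_container_joules_total is a standard Kepler metric; the job value is an assumption):

    # Assumption: Kepler is scraped under job="kepler"; rate() over a joules counter gives watts
    sum(rate(kepler_container_joules_total{job="kepler"}[5m]))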

Raritan PDU

These metrics are scraped from a proprietary Prometheus endpoint for Raritan PDUs, providing direct power consumption measurements.

  • Job: "pdu"
  • Examples: raritan_pdu_activepower_watt, raritan_pdu_voltage_volt.
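For example, the total active power reported across all PDUs can be obtained with the job and metric names listed above:

    # Total measured active power across all Raritan PDUs
    sum(raritan_pdu_activepower_watt{job="pdu"})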

RAN Metrics

xApps Generated Metrics

Metrics related to the Radio Access Network (RAN) that are sent directly from xApps to VictoriaMetrics.

  • Job: "raw_ran"

  • Data Organization: In addition to the job label, these metrics include other important labels to identify the source and context:

    • sm=["kpm","mac","pdcp","rlc","slice","gtp","llc"]: groups together metrics for the same service model.
    • scenario=[scenario]: groups together metrics from the same test run or xApp. If not specified, the scenario is default.
    • e2node_id=[id]: if applicable, groups together metrics for the same E2-Node (NodeB).
    • ran_ue_id=[id]: if applicable, groups together metrics for the same UE.
  • Examples: kpm_drb_ue_thp_dl, mac_bsr, pdcp_rxpdu_bytes, rlc_rxpdu_pkts, slice_ue_associated, gtp_qfi.
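For example, these labels can be combined to isolate a single service model and test run (the scenario value here is only an illustration):

    # Downlink UE throughput from the KPM service model for one scenario
    kpm_drb_ue_thp_dl{job="raw_ran", sm="kpm", scenario="handover"}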

tip

The scenario label is crucial for separating data collections. You should use it to distinguish between metrics generated by different xApps, or to isolate different test runs from one another. It can be specified in the network blueprint as follows:

    - name: monitoring-xapp
      stack: 5g-sa
      model: mosaic5g/monitoring-c
      profiles:
        ...
        - database
      annotations:
        extras.t9s.io/scenario: 'handover'

Therefore, any metrics written by the deployed xApp will be tagged with the specified scenario label (handover in this example).