usrp-users@lists.ettus.com

Discussion and technical support related to USRP, UHD, RFNoC

Drop packets and sequence errors during X410 DPDK benchmark test

dhpanchaai@gmail.com
Tue, Oct 29, 2024 11:38 PM

Hi,

I’m trying to run the UHD benchmark_rate test using DPDK on an X410 radio. I’m using the NI dual 100 Gigabit Ethernet PCIe NIC with the Mellanox drivers, and I have the UC_200 FPGA image loaded on the radio. However, I keep experiencing packet drops and sequence errors. Any idea why that’s happening?

/usr/local/lib/uhd/examples$ sudo ./benchmark_rate --args "type=x4xx,product=x410,addr=192.168.20.3,mgmt_addr=192.168.1.3,use_dpdk=1" --priority "high" --multi_streamer --rx_rate 245.76e6 --rx_subdev "B:1" --tx_rate 245.76e6 --tx_subdev "B:0"

[INFO] [UHD] linux; GNU C++ version 11.4.0; Boost_107400; DPDK_21.11; UHD_4.7.0.HEAD-0-ga5ed1872

EAL: Detected CPU lcores: 32

EAL: Detected NUMA nodes: 1

EAL: Detected shared linkage of DPDK

EAL: Multi-process socket /var/run/dpdk/rte/mp_socket

EAL: Selected IOVA mode 'VA'

EAL: No available 1048576 kB hugepages reported

EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:01:00.0 (socket 0)

EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:01:00.1 (socket 0)

TELEMETRY: No legacy callbacks, legacy socket not created

[00:00:00.000109] Creating the usrp device with: type=x4xx,product=x410,addr=192.168.20.3,mgmt_addr=192.168.1.3,use_dpdk=1...

[INFO] [MPMD] Initializing 1 device(s) in parallel with args: mgmt_addr=192.168.1.3,type=x4xx,product=x410,serial=328AFD7,name=ni-x4xx-328AFD7,fpga=UC_200,claimed=False,addr=192.168.20.3,use_dpdk=1

[INFO] [MPM.PeriphManager] init() called with device args `fpga=UC_200,mgmt_addr=192.168.1.3,name=ni-x4xx-328AFD7,product=x410,use_dpdk=1,clock_source=internal,time_source=internal,initializing=True'.

Using Device: Single USRP:

Device: X400-Series Device

Mboard 0: x410

RX Channel: 0

RX DSP: 0

RX Dboard: B

RX Subdev: 1

TX Channel: 0

TX DSP: 0

TX Dboard: B

TX Subdev: 0

[00:00:01.970153754] Setting device timestamp to 0...

[00:00:01.971248509] Testing receive rate 245.760000 Msps on 1 channels

Setting TX spb to 1992

[00:00:01.972147276] Testing transmit rate 245.760000 Msps on 1 channels

U[D00:00:02.502074084] Detected Rx sequence error.

U[D00:00:03.501866063] Detected Rx sequence error.

U[D00:00:04.501965973] Detected Rx sequence error.

U[D00:00:05.501905705] Detected Rx sequence error.

U[D00:00:06.501533956] Detected Rx sequence error.

U[D00:00:07.501567020] Detected Rx sequence error.

U[D00:00:08.501554331] Detected Rx sequence error.

U[D00:00:09.501610267] Detected Rx sequence error.

U[D00:00:10.501971471] Detected Rx sequence error.

U[D00:00:11.501931301] Detected Rx sequence error.

[00:00:11.973155250] Benchmark complete.

Benchmark rate summary:

Num received samples:    2344330478

Num dropped samples:      113209128

Num overruns detected:    0

Num transmitted samples:  2335492512

Num sequence errors (Tx): 0

Num sequence errors (Rx): 10

Num underruns detected:  10

Num late commands:        0

Num timeouts (Tx):        0

Num timeouts (Rx):        0

Done!
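For scale, the drop count in that summary works out to just under 5% of the expected sample stream (a quick back-of-the-envelope check using only the numbers printed above):

```python
# Loss accounting for the first benchmark run's summary above.
received = 2_344_330_478   # "Num received samples"
dropped = 113_209_128      # "Num dropped samples"

expected = received + dropped        # samples the host should have seen
loss = dropped / expected
print(f"dropped fraction: {loss:.1%}")   # → dropped fraction: 4.6%
```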

Marcus D. Leech
Wed, Oct 30, 2024 12:08 AM

On 29/10/2024 19:38, dhpanchaai@gmail.com wrote:

Hi,

I’m trying to conduct the UHD benchmark test using DPDK on X410 radio.
I’m using the NI Dual 100 Gigabit Ethernet PCIe NIC card, using the
Mellanox drivers, and have the UC_200 fpga image loaded on the radio.
However, I keep experiencing packet drops and sequence errors when I
do that. Any idea why that’s happening?


I don't think "multi_streamer" is going to do anything for you here, since you're only configuring a single channel in each direction. I *think* multi_streamer will have zero effect, but you could try again without it.

Doing the math, your system is trying to move about 2 Gbyte/second into/out of that NIC, and it may be running out of bus bandwidth and/or CPU.
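That estimate can be reproduced directly (a sketch assuming the sc16 over-the-wire sample format, i.e. 4 bytes per complex sample, which is benchmark_rate's usual default):

```python
# Back-of-the-envelope link load for 245.76 Msps full duplex.
rate_sps = 245.76e6        # requested RX and TX rate, samples/s
bytes_per_sample = 4       # sc16 wire format: 16-bit I + 16-bit Q
one_way = rate_sps * bytes_per_sample   # bytes/s in each direction
total = 2 * one_way                     # RX + TX combined
print(f"{one_way/1e9:.2f} GB/s each way, {total/1e9:.2f} GB/s combined")
# → 0.98 GB/s each way, 1.97 GB/s combined
```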

I assume that you've configured your CPU for "Performance" mode?

If you cut the sample rate in half, do you still see this problem?

dhpanchaai@gmail.com
Wed, Oct 30, 2024 5:38 PM

I had to change my 100G IP address to 192.168.120.2 and the channels on the X410 to “A”.

I set the CPU to Performance Mode and lowered the sample rate to 122.88e6.

However, I’m still experiencing dropped packets and sequence errors.

/usr/local/lib/uhd/examples$ sudo ./benchmark_rate --args "type=x4xx,product=x410,addr=192.168.120.2,mgmt_addr=192.168.1.3,use_dpdk=1" --priority "high" --rx_rate 122.88e6 --rx_subdev "A:1" --tx_rate 122.88e6 --tx_subdev "A:0"

[INFO] [UHD] linux; GNU C++ version 11.4.0; Boost_107400; DPDK_21.11; UHD_4.7.0.HEAD-0-ga5ed1872

EAL: Detected CPU lcores: 32

EAL: Detected NUMA nodes: 1

EAL: Detected shared linkage of DPDK

EAL: Multi-process socket /var/run/dpdk/rte/mp_socket

EAL: Selected IOVA mode 'VA'

EAL: No available 1048576 kB hugepages reported

EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:01:00.0 (socket 0)

EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:01:00.1 (socket 0)

TELEMETRY: No legacy callbacks, legacy socket not created

[00:00:00.000094] Creating the usrp device with: type=x4xx,product=x410,addr=192.168.120.2,mgmt_addr=192.168.1.3,use_dpdk=1...

[INFO] [MPMD] Initializing 1 device(s) in parallel with args: mgmt_addr=192.168.1.3,type=x4xx,product=x410,serial=328AFD7,name=ni-x4xx-328AFD7,fpga=UC_200,claimed=False,addr=192.168.120.2,use_dpdk=1

[INFO] [MPM.PeriphManager] init() called with device args `fpga=UC_200,mgmt_addr=192.168.1.3,name=ni-x4xx-328AFD7,product=x410,use_dpdk=1,clock_source=internal,time_source=internal,initializing=True'.

Using Device: Single USRP:

Device: X400-Series Device

Mboard 0: x410

RX Channel: 0

RX DSP: 0

RX Dboard: A

RX Subdev: 1

TX Channel: 0

TX DSP: 0

TX Dboard: A

TX Subdev: 0

[00:00:01.954717000] Setting device timestamp to 0...

[00:00:01.955999062] Testing receive rate 122.880000 Msps on 1 channels

Setting TX spp to 1992

[00:00:01.956816655] Testing transmit rate 122.880000 Msps on 1 channels

U[00:00:02.575486749] Detected Rx sequence error.

DU[00:00:03.575529623] Detected Rx sequence error.

DU[00:00:04.575537036] Detected Rx sequence error.

DU[00:00:05.575477062] Detected Rx sequence error.

DU[00:00:06.575465296] Detected Rx sequence error.

DU[00:00:07.575549183] Detected Rx sequence error.

DU[00:00:08.575539569] Detected Rx sequence error.

DU[00:00:09.575532702] Detected Rx sequence error.

DU[00:00:10.575479853] Detected Rx sequence error.

DU[00:00:11.575464597] Detected Rx sequence error.

D[00:00:11.958336752] Benchmark complete.

Benchmark rate summary:

Num received samples:    1176736199

Num dropped samples:      52167456

Num overruns detected:    0

Num transmitted samples:  1168152624

Num sequence errors (Tx): 0

Num sequence errors (Rx): 10

Num underruns detected:  10

Num late commands:        0

Num timeouts (Tx):        0

Num timeouts (Rx):        0

Done!

My /root/.config/uhd.conf file:

[use_dpdk=1]

dpdk_mtu=9000

dpdk_driver=/usr/lib/x86_64-linux-gnu/dpdk/pmds-22.0/

dpdk_corelist=10,11

dpdk_num_mbufs=4095

dpdk_mbuf_cache_size=315

[dpdk_mac=b8:3f:d2:b0:d7:58]

dpdk_lcore = 11

dpdk_ipv4 = 192.168.120.33/24

#dpdk_num_desc = 4096
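One thing worth noting from the EAL output in these runs: "No available 1048576 kB hugepages reported" means no 1 GB hugepages are reserved, so DPDK falls back to whatever 2 MB pages are configured. A common way to reserve 1 GB pages at boot is via the kernel command line (a sketch only; the page count here is illustrative, not taken from this system):

```shell
# /etc/default/grub -- example kernel parameters reserving four 1 GB
# hugepages at boot (adjust the count for your memory budget):
GRUB_CMDLINE_LINUX="default_hugepagesz=1G hugepagesz=1G hugepages=4"
# then regenerate the grub config and reboot, e.g.:
#   sudo update-grub && sudo reboot
```

Whether this alone fixes the drops is uncertain, but it removes one variable.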

I have attached a screenshot of the performance GUI and the system monitor showing CPU usage.

dhpanchaai@gmail.com
Wed, Oct 30, 2024 5:49 PM

Here is the cpufreq info from the terminal:

$ sudo cpufreq-set -c 11 -g performance

$ cpufreq-info

cpufrequtils 008: cpufreq-info (C) Dominik Brodowski 2004-2009

Report errors and bugs to cpufreq@vger.kernel.org, please.

analyzing CPUs 0-27: every entry uses the intel_pstate driver, lists "available cpufreq governors: performance, powersave", reports "maximum transition latency: 4294.55 ms.", and coordinates frequency only with itself.

CPUs 0-7:   hardware limits 800 MHz - 5.70 GHz; current policy pinned to 5.70 GHz by the "performance" governor; current CPU frequency 5.70 GHz

CPUs 8-11:  hardware limits 800 MHz - 6.00 GHz; current policy pinned to 5.70 GHz by the "performance" governor; current CPU frequency 5.70 GHz

CPUs 12-15: hardware limits 800 MHz - 5.70 GHz; current policy pinned to 5.70 GHz by the "performance" governor; current CPU frequency 5.70 GHz

CPUs 16-27: hardware limits 800 MHz - 4.40 GHz; CPUs 16-26 show a current policy pinned to 4.40 GHz by the "performance" governor and a current CPU frequency of 4.40 GHz (the listing breaks off partway through the CPU 27 entry)

available cpufreq governors: performance, powersave

current policy: frequency should be within 4.40 GHz and 4.40 GHz.

              The governor "performance" may decide which speed to use

              within this range.

current CPU frequency is 4.40 GHz.

analyzing CPU 28:

driver: intel_pstate

CPUs which run at the same hardware frequency: 28

CPUs which need to have their frequency coordinated by software: 28

maximum transition latency: 4294.55 ms.

hardware limits: 800 MHz - 4.40 GHz

available cpufreq governors: performance, powersave

current policy: frequency should be within 4.40 GHz and 4.40 GHz.

              The governor "performance" may decide which speed to use

              within this range.

current CPU frequency is 4.40 GHz.

analyzing CPU 29:

driver: intel_pstate

CPUs which run at the same hardware frequency: 29

CPUs which need to have their frequency coordinated by software: 29

maximum transition latency: 4294.55 ms.

hardware limits: 800 MHz - 4.40 GHz

available cpufreq governors: performance, powersave

current policy: frequency should be within 4.40 GHz and 4.40 GHz.

              The governor "performance" may decide which speed to use

              within this range.

current CPU frequency is 4.40 GHz.

analyzing CPU 30:

driver: intel_pstate

CPUs which run at the same hardware frequency: 30

CPUs which need to have their frequency coordinated by software: 30

maximum transition latency: 4294.55 ms.

hardware limits: 800 MHz - 4.40 GHz

available cpufreq governors: performance, powersave

current policy: frequency should be within 4.40 GHz and 4.40 GHz.

              The governor "performance" may decide which speed to use

              within this range.

current CPU frequency is 4.40 GHz.

analyzing CPU 31:

driver: intel_pstate

CPUs which run at the same hardware frequency: 31

CPUs which need to have their frequency coordinated by software: 31

maximum transition latency: 4294.55 ms.

hardware limits: 800 MHz - 4.40 GHz

available cpufreq governors: performance, powersave

current policy: frequency should be within 4.40 GHz and 4.40 GHz.

              The governor "performance" may decide which speed to use

              within this range.

current CPU frequency is 4.40 GHz.

Marcus D. Leech
Wed, Oct 30, 2024 6:07 PM

On 30/10/2024 13:38, dhpanchaai@gmail.com wrote:

I had to change my 100G IP address to 192.168.120.2 and channels on
the X410 to “A”.

I set the CPU to Performance Mode and lowered the sample rate to
122.88e6.

However, I’m still experiencing dropped packets and sequence errors.

/usr/local/lib/uhd/examples$ sudo ./benchmark_rate --args
"type=x4xx,product=x410,addr=192.168.120.2,mgmt_addr=192.168.1.3,use_dpdk=1"
--priority "high" --rx_rate 122.88e6 --rx_subdev "A:1" --tx_rate
122.88e6 --tx_subdev "A:0"

[INFO] [UHD] linux; GNU C++ version 11.4.0; Boost_107400; DPDK_21.11;
UHD_4.7.0.HEAD-0-ga5ed1872

EAL: Detected CPU lcores: 32

EAL: Detected NUMA nodes: 1

EAL: Detected shared linkage of DPDK

EAL: Multi-process socket /var/run/dpdk/rte/mp_socket

EAL: Selected IOVA mode 'VA'

EAL: No available 1048576 kB hugepages reported

EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:01:00.0
(socket 0)

EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:01:00.1
(socket 0)

TELEMETRY: No legacy callbacks, legacy socket not created

[00:00:00.000094] Creating the usrp device with:
type=x4xx,product=x410,addr=192.168.120.2,mgmt_addr=192.168.1.3,use_dpdk=1...

[INFO] [MPMD] Initializing 1 device(s) in parallel with args:
mgmt_addr=192.168.1.3,type=x4xx,product=x410,serial=328AFD7,name=ni-x4xx-328AFD7,fpga=UC_200,claimed=False,addr=192.168.120.2,use_dpdk=1

[INFO] [MPM.PeriphManager] init() called with device args
`fpga=UC_200,mgmt_addr=192.168.1.3,name=ni-x4xx-328AFD7,product=x410,use_dpdk=1,clock_source=internal,time_source=internal,initializing=True'.

Using Device: Single USRP:

Device: X400-Series Device

Mboard 0: x410

RX Channel: 0

RX DSP: 0

RX Dboard: A

RX Subdev: 1

TX Channel: 0

TX DSP: 0

TX Dboard: A

TX Subdev: 0

[00:00:01.954717000] Setting device timestamp to 0...

[00:00:01.955999062] Testing receive rate 122.880000 Msps on 1 channels

Setting TX spp to 1992

[00:00:01.956816655] Testing transmit rate 122.880000 Msps on 1 channels

U[00:00:02.575486749] Detected Rx sequence error.

DU[00:00:03.575529623] Detected Rx sequence error.

DU[00:00:04.575537036] Detected Rx sequence error.

DU[00:00:05.575477062] Detected Rx sequence error.

DU[00:00:06.575465296] Detected Rx sequence error.

DU[00:00:07.575549183] Detected Rx sequence error.

DU[00:00:08.575539569] Detected Rx sequence error.

DU[00:00:09.575532702] Detected Rx sequence error.

DU[00:00:10.575479853] Detected Rx sequence error.

DU[00:00:11.575464597] Detected Rx sequence error.

D[00:00:11.958336752] Benchmark complete.

Benchmark rate summary:

Num received samples: 1176736199

Num dropped samples: 52167456

Num overruns detected: 0

Num transmitted samples: 1168152624

Num sequence errors (Tx): 0

Num sequence errors (Rx): 10

Num underruns detected: 10

Num late commands: 0

Num timeouts (Tx): 0

Num timeouts (Rx): 0

Done!

My /root/.config/uhd.conf file:

[use_dpdk=1]

dpdk_mtu=9000

dpdk_driver=/usr/lib/x86_64-linux-gnu/dpdk/pmds-22.0/

dpdk_corelist=10,11

dpdk_num_mbufs=4095

dpdk_mbuf_cache_size=315

[dpdk_mac=b8:3f:d2:b0:d7:58]

dpdk_lcore = 11

dpdk_ipv4 = 192.168.120.33/24

#dpdk_num_desc = 4096

I have attached screenshots of the performance GUI and the system monitor
showing CPU usage.


USRP-users mailing list -- usrp-users@lists.ettus.com
To unsubscribe send an email to usrp-users-leave@lists.ettus.com

This is on a "bare metal" system, and NOT on a VM, I assume?

I just ran a test (using a different USRP) doing 125 Msps of receive into
my system, over a cheap 10 GigE card. It worked without ANY dropped
samples, just using the "benchmark_rate" application as you have. My
system is an 8-year-old dual-chip, 6-core Xeon system with 32 GB of
memory, running on Ubuntu 22.04. Your
system SHOULD be capable of MUCH more.

I assume you've followed the various bits of advice here:

https://kb.ettus.com/USRP_Host_Performance_Tuning_Tips_and_Tricks#Increasing_Ring_Buffers

I wonder if you have a PHY-layer issue with your cabling?
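As a concrete starting point, the host-tuning steps from that article can be sketched as shell commands. This is only a sketch: the interface name `enp1s0f0np0` is taken from later in this thread, and the hugepage count is an assumption to adjust for your system. (The EAL log above reported no 1048576 kB hugepages available.)

```shell
# Reserve 1 GiB hugepages for DPDK (count of 4 is an assumption; size it to your RAM).
echo 4 | sudo tee /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

# Raise the NIC ring buffers to their pre-set maximums (check them with `ethtool -g`).
sudo ethtool -G enp1s0f0np0 rx 8192 tx 8192

# Pin the cpufreq governor to "performance" on all cores.
for cpu in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
    echo performance | sudo tee "$cpu" > /dev/null
done
```

These changes take effect immediately but do not persist across reboots.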

dhpanchaai@gmail.com
Wed, Oct 30, 2024 7:54 PM

Yes, it's a bare-metal system (no VM involved). I have the RF cable connected from A:0 to A:1, and I'm using the Mellanox cable connected from QSFP28-1 (the right port on the X410) to the right port on the 100G NIC card.

I’m using Xubuntu 22.04.
$ lsb_release -a

No LSB modules are available.

Distributor ID: Ubuntu

Description: Ubuntu 22.04.5 LTS

Release: 22.04

Codename: jammy

$ ethtool -g enp1s0f0np0

Ring parameters for enp1s0f0np0:

Pre-set maximums:

RX: 8192

RX Mini: n/a

RX Jumbo: n/a

TX: 8192

Current hardware settings:

RX: 4096

RX Mini: n/a

RX Jumbo: n/a

TX: 4096

$ sudo sysctl -w net.core.rmem_max=250000000

net.core.rmem_max = 250000000

$ sudo sysctl -w net.core.wmem_max=250000000

net.core.wmem_max = 250000000
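For reference, changes made with `sysctl -w` are lost on reboot; one way to make the enlarged socket buffers persistent (a sketch, assuming a standard `/etc/sysctl.d` layout; the drop-in file name is arbitrary) is:

```shell
# Persist the enlarged socket buffers across reboots.
sudo tee /etc/sysctl.d/99-uhd-net.conf > /dev/null <<'EOF'
net.core.rmem_max=250000000
net.core.wmem_max=250000000
EOF

# Reload all sysctl configuration, including the new drop-in.
sudo sysctl --system
```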

$ lscpu

Architecture:            x86_64

CPU op-mode(s):        32-bit, 64-bit

Address sizes:          46 bits physical, 48 bits virtual

Byte Order:            Little Endian

CPU(s):                  32

On-line CPU(s) list:    0-31

Vendor ID:                GenuineIntel

Model name:            Intel(R) Core(TM) i9-14900K

CPU family:           6

Model:                183

Thread(s) per core:   2

Core(s) per socket:   24

Socket(s):            1

Stepping:             1

CPU max MHz:          6000.0000

CPU min MHz:          800.0000

BogoMIPS:             6374.40

Flags:                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge m

                      ca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 s

                      s ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc 

                      art arch_perfmon pebs bts rep_good nopl xtopology nons

                      top_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq 

                      dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma c

                      x16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt t

                      sc_deadline_timer aes xsave avx f16c rdrand lahf_lm ab

                      m 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp i

                      brs_enhanced tpr_shadow flexpriority ept vpid ept_ad f

                      sgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rd

                      seed adx smap clflushopt clwb intel_pt sha_ni xsaveopt

                       xsavec xgetbv1 xsaves split_lock_detect user_shstk av

                      x_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_

                      window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke wai

                      tpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b 

                      fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d

                       arch_capabilities

Virtualization features:

Virtualization:        VT-x

Caches (sum of all):

L1d:                    896 KiB (24 instances)

L1i:                    1.3 MiB (24 instances)

L2:                    32 MiB (12 instances)

L3:                    36 MiB (1 instance)

NUMA:

NUMA node(s):          1

NUMA node0 CPU(s):      0-31

Vulnerabilities:

Gather data sampling:  Not affected

Itlb multihit:          Not affected

L1tf:                  Not affected

Mds:                    Not affected

Meltdown:              Not affected

Mmio stale data:        Not affected

Reg file data sampling: Mitigation; Clear Register File

Retbleed:              Not affected

Spec rstack overflow:  Not affected

Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prct

                      l

Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointe

                      r sanitization

Spectre v2:            Mitigation; Enhanced / Automatic IBRS; IBPB conditiona

                      l; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S

Srbds:                  Not affected

Tsx async abort:        Not affected

D
dhpanchaai@gmail.com
Tue, Nov 5, 2024 1:12 AM

I got it to work. It seems RT_RUNTIME_SHARE being disabled was the culprit. I re-enabled it following these instructions, and the benchmark ran without packet drops or underruns:

Underruns Every Second with DPDK + Ubuntu

With Linux kernels 5.10 and beyond, we have observed periodic underruns on systems that otherwise have no issues. These Linux kernel versions are the default for Ubuntu 20.04.3 LTS and later. The underrun issue is due to the RT_RUNTIME_SHARE feature being disabled by default in these versions of the Linux kernel (shown as NO_RT_RUNTIME_SHARE). The following procedure can be used to enable this feature. This process was tested on Linux kernel version 5.13; the procedure may be slightly different on other kernel versions. To determine the Linux kernel version of your system, in a terminal issue the command uname -r.

$ sudo -s
$ cd /sys/kernel/debug/sched
$ cat features | tr ' ' '\n' | grep RUNTIME_SHARE
NO_RT_RUNTIME_SHARE
$ echo RT_RUNTIME_SHARE > features
$ cat features | tr ' ' '\n' | grep RUNTIME_SHARE
RT_RUNTIME_SHARE
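As a side note, the `tr ' ' '\n' | grep` pipeline above simply splits the space-separated scheduler feature list onto one flag per line and filters for the flag of interest. A minimal, self-contained illustration (the feature string below is a made-up excerpt, not read from a live system):

```shell
# Illustration of the filter used in the procedure above: split the
# space-separated scheduler feature list one flag per line, then grep
# for the flag. The feature string here is a made-up excerpt, not a
# reading from a live /sys/kernel/debug/sched/features file.
features="GENTLE_FAIR_SLEEPERS START_DEBIT NO_RT_RUNTIME_SHARE NO_LB_MIN"
echo "$features" | tr ' ' '\n' | grep RUNTIME_SHARE
# Prints NO_RT_RUNTIME_SHARE, i.e. the feature is currently disabled.
```

Also note that writes to debugfs do not persist across reboots, so the `echo RT_RUNTIME_SHARE > features` step has to be repeated after each boot.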
MD
Marcus D. Leech
Tue, Nov 5, 2024 1:20 AM

On 04/11/2024 20:12, dhpanchaai@gmail.com wrote:

I got it work. It seems RT_RUNTIME_SHARE disabled was the culprit. I
re-enabled it using these instructions and the benchmark worked
without packet drops or underruns:

Underruns Every Second with DPDK + Ubuntu

With Linux kernels 5.10 and beyond, we have observed periodic
underruns on systems that otherwise have no issues. These Linux kernel
versions are the default for Ubuntu 20.04.3 LTS and later. The
underrun issue is due to the |RT_RUNTIME_SHARE| feature being disabled
by default in these versions of the Linux kernel (shown as
|NO_RT_RUNTIME_SHARE|). The following procedure can be used to enable
this feature. This process was tested on Linux kernel version 5.13;
the procedure may be slightly different on other kernel versions. To
determine the Linux kernel version of your system, in a terminal issue
the command |uname -r|.

|$ sudo -s $ cd /sys/kernel/debug/sched $ cat features | tr ' ' '\n' |
grep RUNTIME_SHARE NO_RT_RUNTIME_SHARE $ echo RT_RUNTIME_SHARE >
features $ cat features | tr ' ' '\n' | grep RUNTIME_SHARE
RT_RUNTIME_SHARE|


USRP-users mailing list -- usrp-users@lists.ettus.com
To unsubscribe send an email to usrp-users-leave@lists.ettus.com

I never would have suspected a kernel scheduler subtlety.  But, there it is:

https://lore.kernel.org/lkml/c596a06773658d976fb839e02843a459ed4c2edf.1479204252.git.bristot@redhat.com/

It's fantastic that you found this!

MD
Marcus D. Leech
Wed, Nov 6, 2024 1:44 PM

On 04/11/2024 20:12, dhpanchaai@gmail.com wrote:

I got it work. It seems RT_RUNTIME_SHARE disabled was the culprit. I
re-enabled it using these instructions and the benchmark worked
without packet drops or underruns:

Underruns Every Second with DPDK + Ubuntu

With Linux kernels 5.10 and beyond, we have observed periodic
underruns on systems that otherwise have no issues. These Linux kernel
versions are the default for Ubuntu 20.04.3 LTS and later. The
underrun issue is due to the |RT_RUNTIME_SHARE| feature being disabled
by default in these versions of the Linux kernel (shown as
|NO_RT_RUNTIME_SHARE|). The following procedure can be used to enable
this feature. This process was tested on Linux kernel version 5.13;
the procedure may be slightly different on other kernel versions. To
determine the Linux kernel version of your system, in a terminal issue
the command |uname -r|.

|$ sudo -s $ cd /sys/kernel/debug/sched $ cat features | tr ' ' '\n' |
grep RUNTIME_SHARE NO_RT_RUNTIME_SHARE $ echo RT_RUNTIME_SHARE >
features $ cat features | tr ' ' '\n' | grep RUNTIME_SHARE
RT_RUNTIME_SHARE|


USRP-users mailing list -- usrp-users@lists.ettus.com
To unsubscribe send an email to usrp-users-leave@lists.ettus.com

Just to fill people in, this appears in the DPDK document:

https://kb.ettus.com/Getting_Started_with_DPDK_and_UHD

But NOT in the general "Performance Tuning Tips and Tricks", which is
the one I'm most familiar with:

https://kb.ettus.com/USRP_Host_Performance_Tuning_Tips_and_Tricks

We will probably move that note on RT_RUNTIME_SHARE to the "Performance
Tuning" document, or at least replicate it.
