[USRP-users] about 10 Gig ethernet configuration

Michael West michael.west at ettus.com
Mon Oct 26 04:05:40 EDT 2015


Hi Sanjoy,

I agree with Marcus.  It looks like the governor is not working.  The
performance governor should be setting the CPU frequency to its highest
value.

Try setting the frequency by running:
> cpufreq-set -r -f 3.2GHz
or setting the min frequency by running:
> cpufreq-set -r -d 3.2GHz
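
If cpufreq-set has no effect with the intel_pstate driver (shown in the
cpufreq-info output below), a sysfs-based equivalent may work; this is only a
sketch, assuming root access:

> echo 100 | sudo tee /sys/devices/system/cpu/intel_pstate/min_perf_pct
> grep "cpu MHz" /proc/cpuinfo

The first line raises the P-state floor to 100% of the hardware maximum; the
second verifies that all cores now report the higher clock.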

Regards,
Michael

On Mon, Oct 26, 2015 at 12:43 AM, Sanjoy Basak <sanjoybasak14 at gmail.com>
wrote:

> Hi Michael,
> The OS is native: Ubuntu 14.04.3.
>
>
> On Mon, Oct 26, 2015 at 9:37 AM, Michael West <michael.west at ettus.com>
> wrote:
>
>> Is the OS native or are you using a virtual machine?
>>
>> On Sun, Oct 25, 2015 at 1:30 PM, Marcus Müller <marcus.mueller at ettus.com>
>> wrote:
>>
>>> Hm, that looks as if the CPU governor doesn't actually do its job...
>>>
>>> On 25.10.2015 17:44, Sanjoy Basak wrote:
>>>
>>> Hi Marcus,
>>> Thanks for the reply.
>>>
>>> I tried this
>>> sudo sysctl -w net.core.wmem_max=536870912
>>> sudo sysctl -w net.core.rmem_max=536870912
>>>
>>> Now
>>> net.core.rmem_max = 536870912
>>> net.core.wmem_max = 536870912
>>>
>>> Even now it shows drops at 25 MSps, as before.
>>>
>>> I tried cpufreq-info again. It always shows performance. However, the CPU
>>> frequency differs from core to core. I am also putting the output here. Is
>>> that causing an issue? Should the frequency of each core be at its maximum?
>>>
>>> analyzing CPU 0:
>>>   driver: intel_pstate
>>>   CPUs which run at the same hardware frequency: 0
>>>   CPUs which need to have their frequency coordinated by software: 0
>>>   maximum transition latency: 0.97 ms.
>>>   hardware limits: 1.20 GHz - 3.20 GHz
>>>   available cpufreq governors: performance, powersave
>>>   current policy: frequency should be within 1.20 GHz and 3.20 GHz.
>>>                   The governor "performance" may decide which speed to
>>> use
>>>                   within this range.
>>>   current CPU frequency is 1.85 GHz.
>>> analyzing CPU 1:
>>>   driver: intel_pstate
>>>   CPUs which run at the same hardware frequency: 1
>>>   CPUs which need to have their frequency coordinated by software: 1
>>>   maximum transition latency: 0.97 ms.
>>>   hardware limits: 1.20 GHz - 3.20 GHz
>>>   available cpufreq governors: performance, powersave
>>>   current policy: frequency should be within 1.20 GHz and 3.20 GHz.
>>>                   The governor "performance" may decide which speed to
>>> use
>>>                   within this range.
>>>   current CPU frequency is 2.24 GHz.
>>> analyzing CPU 2:
>>>   driver: intel_pstate
>>>   CPUs which run at the same hardware frequency: 2
>>>   CPUs which need to have their frequency coordinated by software: 2
>>>   maximum transition latency: 0.97 ms.
>>>   hardware limits: 1.20 GHz - 3.20 GHz
>>>   available cpufreq governors: performance, powersave
>>>   current policy: frequency should be within 1.20 GHz and 3.20 GHz.
>>>                   The governor "performance" may decide which speed to
>>> use
>>>                   within this range.
>>>   current CPU frequency is 2.25 GHz.
>>> analyzing CPU 3:
>>>   driver: intel_pstate
>>>   CPUs which run at the same hardware frequency: 3
>>>   CPUs which need to have their frequency coordinated by software: 3
>>>   maximum transition latency: 0.97 ms.
>>>   hardware limits: 1.20 GHz - 3.20 GHz
>>>   available cpufreq governors: performance, powersave
>>>   current policy: frequency should be within 1.20 GHz and 3.20 GHz.
>>>                   The governor "performance" may decide which speed to
>>> use
>>>                   within this range.
>>>   current CPU frequency is 2.77 GHz.
>>> analyzing CPU 4:
>>>   driver: intel_pstate
>>>   CPUs which run at the same hardware frequency: 4
>>>   CPUs which need to have their frequency coordinated by software: 4
>>>   maximum transition latency: 0.97 ms.
>>>   hardware limits: 1.20 GHz - 3.20 GHz
>>>   available cpufreq governors: performance, powersave
>>>   current policy: frequency should be within 1.20 GHz and 3.20 GHz.
>>>                   The governor "performance" may decide which speed to
>>> use
>>>                   within this range.
>>>   current CPU frequency is 2.18 GHz.
>>> analyzing CPU 5:
>>>   driver: intel_pstate
>>>   CPUs which run at the same hardware frequency: 5
>>>   CPUs which need to have their frequency coordinated by software: 5
>>>   maximum transition latency: 0.97 ms.
>>>   hardware limits: 1.20 GHz - 3.20 GHz
>>>   available cpufreq governors: performance, powersave
>>>   current policy: frequency should be within 1.20 GHz and 3.20 GHz.
>>>                   The governor "performance" may decide which speed to
>>> use
>>>                   within this range.
>>>   current CPU frequency is 1.95 GHz.
>>> analyzing CPU 6:
>>>   driver: intel_pstate
>>>   CPUs which run at the same hardware frequency: 6
>>>   CPUs which need to have their frequency coordinated by software: 6
>>>   maximum transition latency: 0.97 ms.
>>>   hardware limits: 1.20 GHz - 3.20 GHz
>>>   available cpufreq governors: performance, powersave
>>>   current policy: frequency should be within 1.20 GHz and 3.20 GHz.
>>>                   The governor "performance" may decide which speed to
>>> use
>>>                   within this range.
>>>   current CPU frequency is 2.12 GHz.
>>> analyzing CPU 7:
>>>   driver: intel_pstate
>>>   CPUs which run at the same hardware frequency: 7
>>>   CPUs which need to have their frequency coordinated by software: 7
>>>   maximum transition latency: 0.97 ms.
>>>   hardware limits: 1.20 GHz - 3.20 GHz
>>>   available cpufreq governors: performance, powersave
>>>   current policy: frequency should be within 1.20 GHz and 3.20 GHz.
>>>                   The governor "performance" may decide which speed to
>>> use
>>>                   within this range.
>>>   current CPU frequency is 2.39 GHz.
>>> analyzing CPU 8:
>>>   driver: intel_pstate
>>>   CPUs which run at the same hardware frequency: 8
>>>   CPUs which need to have their frequency coordinated by software: 8
>>>   maximum transition latency: 0.97 ms.
>>>   hardware limits: 1.20 GHz - 3.20 GHz
>>>   available cpufreq governors: performance, powersave
>>>   current policy: frequency should be within 1.20 GHz and 3.20 GHz.
>>>                   The governor "performance" may decide which speed to
>>> use
>>>                   within this range.
>>>   current CPU frequency is 2.53 GHz.
>>> analyzing CPU 9:
>>>   driver: intel_pstate
>>>   CPUs which run at the same hardware frequency: 9
>>>   CPUs which need to have their frequency coordinated by software: 9
>>>   maximum transition latency: 0.97 ms.
>>>   hardware limits: 1.20 GHz - 3.20 GHz
>>>   available cpufreq governors: performance, powersave
>>>   current policy: frequency should be within 1.20 GHz and 3.20 GHz.
>>>                   The governor "performance" may decide which speed to
>>> use
>>>                   within this range.
>>>   current CPU frequency is 2.79 GHz.
>>> analyzing CPU 10:
>>>   driver: intel_pstate
>>>   CPUs which run at the same hardware frequency: 10
>>>   CPUs which need to have their frequency coordinated by software: 10
>>>   maximum transition latency: 0.97 ms.
>>>   hardware limits: 1.20 GHz - 3.20 GHz
>>>   available cpufreq governors: performance, powersave
>>>   current policy: frequency should be within 1.20 GHz and 3.20 GHz.
>>>                   The governor "performance" may decide which speed to
>>> use
>>>                   within this range.
>>>   current CPU frequency is 2.24 GHz.
>>> analyzing CPU 11:
>>>   driver: intel_pstate
>>>   CPUs which run at the same hardware frequency: 11
>>>   CPUs which need to have their frequency coordinated by software: 11
>>>   maximum transition latency: 0.97 ms.
>>>   hardware limits: 1.20 GHz - 3.20 GHz
>>>   available cpufreq governors: performance, powersave
>>>   current policy: frequency should be within 1.20 GHz and 3.20 GHz.
>>>                   The governor "performance" may decide which speed to
>>> use
>>>                   within this range.
>>>   current CPU frequency is 2.21 GHz.
>>>
>>>
>>> Best regards
>>> Sanjoy
>>>
>>> On Sun, Oct 25, 2015 at 10:22 AM, Marcus Müller <
>>> marcus.mueller at ettus.com> wrote:
>>>
>>>> Hi Sanjoy,
>>>>
>>>> Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
>>>> eth2 9000 0 1084 0 0 0 1153 0 0 0 BMRU
>>>>
>>>>
>>>>
>>>> You get 1153 packets that were dropped due to not being picked up on RX
>>>> side.
>>>> At first I was not quite sure who actually defines this value, whether it's
>>>> the network hardware or the Linux kernel itself, but after reading the netstat
>>>> code and the kernel code, it seems that netstat somewhat mislabels what the
>>>> kernel calls "rx_fifo_errors" as "RX-OVR". That didn't make researching this
>>>> any more intuitive (because of course there's also an overrun field in the
>>>> netdev stats struct in the kernel...).
>>>> The good news: rx_fifo_errors is not something the Intel ixgbe driver
>>>> modifies -- which means it doesn't seem to be a hardware counter on packets
>>>> that weren't fetched from the card.
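>>>>
>>>> The ixgbe driver does keep its own per-device counters, though, and those are
>>>> exposed through the ethtool statistics interface; a quick way to look at them
>>>> (just a sketch, assuming the interface is eth2 as in this thread):
>>>>
>>>> ethtool -S eth2 | grep -iE 'drop|miss|fifo'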
>>>>
>>>> So, could you share what
>>>>
>>>> sysctl net.core.rmem_max net.core.wmem_max
>>>>
>>>>
>>>> says? These set the maximum amount of memory that Linux may allocate for the
>>>> receive and transmit socket buffers used with the card.
>>>>
>>>> sudo sysctl -w net.core.rmem_max=$(( 512 * 2**20 ))
>>>> sudo sysctl -w net.core.wmem_max=$(( 512 * 2**20 ))
>>>>
>>>>
>>>> should set the limits to 512 MB (caveat: only until the next reboot). Could
>>>> you try with that?
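>>>>
>>>> To keep these settings across reboots, one option (just a sketch, assuming a
>>>> stock Debian/Ubuntu layout) is to persist them in /etc/sysctl.conf:
>>>>
>>>> # append to /etc/sysctl.conf (or a file under /etc/sysctl.d/)
>>>> net.core.rmem_max=536870912
>>>> net.core.wmem_max=536870912
>>>> # then apply without rebooting
>>>> sudo sysctl -p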
>>>>
>>>> Best regards,
>>>> Marcus
>>>>
>>>>
>>>> On 24.10.2015 18:28, Sanjoy Basak wrote:
>>>>
>>>> Hi Marcus,
>>>> Thanks a lot for the quick reply. The kernel version is
>>>> 3.19.0-31-generic
>>>>
>>>> I just checked again at 30 s, 25 MSps. I got 4 Ds and 5 Ds in 2
>>>> consecutive runs.
>>>>
>>>> *@30sec*
>>>>
>>>> Setting RX Rate: 25.000000 Msps...
>>>>
>>>> Actual RX Rate: 25.000000 Msps...
>>>>
>>>> Setting RX Freq: 2000.000000 MHz...
>>>>
>>>> Actual RX Freq: 2000.000000 MHz...
>>>>
>>>> Waiting for "lo_locked": ++++++++++ locked.
>>>>
>>>> Press Ctrl + C to stop streaming...
>>>>
>>>> DGot an overflow indication. Please consider the following:
>>>>
>>>> Your write medium must sustain a rate of 100.000000MB/s.
>>>>
>>>> Dropped samples will not be written to the file.
>>>>
>>>> Please modify this example for your purposes.
>>>>
>>>> This message will not appear again.
>>>>
>>>> DDDD
>>>>
>>>> Done!
>>>>
>>>>
>>>> *@240 sec*
>>>>
>>>> Setting RX Rate: 25.000000 Msps...
>>>>
>>>> Actual RX Rate: 25.000000 Msps...
>>>>
>>>> Setting RX Freq: 2000.000000 MHz...
>>>>
>>>> Actual RX Freq: 2000.000000 MHz...
>>>>
>>>> Waiting for "lo_locked": ++++++++++ locked.
>>>>
>>>> Press Ctrl + C to stop streaming...
>>>>
>>>> DGot an overflow indication. Please consider the following:
>>>>
>>>> Your write medium must sustain a rate of 100.000000MB/s.
>>>>
>>>> Dropped samples will not be written to the file.
>>>>
>>>> Please modify this example for your purposes.
>>>>
>>>> This message will not appear again.
>>>>
>>>> DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD
>>>>
>>>> Done!
>>>>
>>>>
>>>> This is the netstat right after restarting the computer
>>>>
>>>> netstat -i eth2
>>>>
>>>> Kernel Interface table
>>>>
>>>> Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
>>>>
>>>> eth0 1500 0 9 0 0 0 26 0 0 0 BMRU
>>>>
>>>> eth1 1500 0 0 0 0 0 0 0 0 0 BMU
>>>>
>>>> eth2 9000 0 1084 0 0 0 1153 0 0 0 BMRU
>>>>
>>>> lo 65536 0 177 0 0 0 177 0 0 0 L
>>>>
>>>>
>>>> And this is netstat after
>>>>
>>>> ./rx_samples_to_file --duration 240 --rate 200e6 --freq=2e9 --file
>>>> /dev/null
>>>>
>>>> which gave a lot of Ds (I did not count how many, but a lot)
>>>>
>>>> Kernel Interface table
>>>>
>>>> Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
>>>>
>>>> eth0 1500 0 94 0 0 0 30 0 0 0 BMRU
>>>>
>>>> eth1 1500 0 0 0 0 0 0 0 0 0 BMU
>>>>
>>>> eth2 9000 0 24190524 936 0 0 355025 0 0 0 BMRU
>>>>
>>>> lo 65536 0 183 0 0 0 183 0 0 0 LRU
>>>>
>>>>
>>>> Best regards
>>>>
>>>> Sanjoy
>>>>
>>>>
>>>>
>>>>
>>>> On Sat, Oct 24, 2015 at 5:31 PM, Marcus Müller <
>>>> marcus.mueller at ettus.com> wrote:
>>>>
>>>>> Hi Sanjoy,
>>>>>
>>>>> Two observations:
>>>>> Your routing table looks fine.
>>>>>
>>>>> Then, these rx_samples... results are interesting. Basically, 30 s at
>>>>> 200 MS/s is 8 times the number of packets of 30 s at 25 MS/s; however, the
>>>>> former produces 53 sequence errors and the latter only 3, about twice what
>>>>> the packet ratio alone would predict. To verify, could you try 240 s at 25 MS/s?
>>>>> If this holds, the sequence errors would seem to be load-related.
>>>>> Kernel version currently used (uname -r)?
>>>>> Let's analyze where things go missing. Could you compare the output of
>>>>> "netstat -i eth2" before and after a rx_samples_to_file run that produced
>>>>> lots of D?
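>>>>>
>>>>> One way to capture that comparison (a sketch, assuming the interface is eth2
>>>>> and a run long enough to reproduce the Ds):
>>>>>
>>>>> netstat -i | grep eth2 > netstat_before.txt
>>>>> ./rx_samples_to_file --duration 240 --rate 25e6 --freq=2e9 --file /dev/null
>>>>> netstat -i | grep eth2 > netstat_after.txt
>>>>> diff netstat_before.txt netstat_after.txt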
>>>>>
>>>>> Best regards,
>>>>> Marcus
>>>>>
>>>>>
>>>>> On 24 October 2015, 15:58:28 CEST, Sanjoy Basak <
>>>>> sanjoybasak14 at gmail.com> wrote:
>>>>>>
>>>>>> Hi Michael,
>>>>>> I set the CPU governor to performance. It is always at performance now
>>>>>> and does not change. We did not buy the cable from Ettus.
>>>>>> We bought a DeLOCK Twinaxial-Kabel SFP+ M SFP+ M 3,0m SFF-8431 86222
>>>>>> from a store here.
>>>>>>
>>>>>> Hi Rob,
>>>>>> Thanks for the command. I am not really sure which cable you used, but I
>>>>>> am still getting dropped packets. I don't have any other cable to test with
>>>>>> right now.
>>>>>>
>>>>>>
>>>>>> Hi Marcus,
>>>>>> I tried your suggestions. I put the output here.
>>>>>>
>>>>>> *ip route *
>>>>>>
>>>>>> 192.168.40.0/24 dev eth2 proto kernel scope link src 192.168.40.1
>>>>>> metric 1
>>>>>>
>>>>>>
>>>>>> I don't get any drops with this:
>>>>>>
>>>>>> rx_samples_to_file --rate 200e6 --file /dev/null --nsamps $(( 2 *
>>>>>> 1000  * 1000 ))
>>>>>>
>>>>>> Setting RX Rate: 200.000000 Msps...
>>>>>>
>>>>>> Actual RX Rate: 200.000000 Msps...
>>>>>>
>>>>>> Setting RX Freq: 0.000000 MHz...
>>>>>>
>>>>>> Actual RX Freq: 340.000000 MHz...
>>>>>>
>>>>>> Waiting for "lo_locked": ++++++++++ locked.
>>>>>>
>>>>>> Done!
>>>>>>
>>>>>>
>>>>>> However with
>>>>>>
>>>>>> ./rx_samples_to_file --duration 30 --rate 200e6 --freq=2e9 --file
>>>>>> /dev/null
>>>>>>
>>>>>> Setting RX Rate: 200.000000 Msps...
>>>>>>
>>>>>> Actual RX Rate: 200.000000 Msps...
>>>>>>
>>>>>> Setting RX Freq: 2000.000000 MHz...
>>>>>>
>>>>>> Actual RX Freq: 2000.000000 MHz...
>>>>>>
>>>>>> Waiting for "lo_locked": ++++++++++ locked.
>>>>>>
>>>>>> Press Ctrl + C to stop streaming...
>>>>>>
>>>>>> DGot an overflow indication. Please consider the following:
>>>>>>
>>>>>> Your write medium must sustain a rate of 800.000000MB/s.
>>>>>>
>>>>>> Dropped samples will not be written to the file.
>>>>>>
>>>>>> Please modify this example for your purposes.
>>>>>>
>>>>>> This message will not appear again.
>>>>>>
>>>>>> DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD
>>>>>>
>>>>>> Done!
>>>>>>
>>>>>>
>>>>>> Even at 25 MSps I got dropped packets:
>>>>>>
>>>>>> ./rx_samples_to_file --duration 30 --rate 25e6 --freq=2e9 --file
>>>>>> /dev/null
>>>>>>
>>>>>> Setting RX Rate: 25.000000 Msps...
>>>>>>
>>>>>> Actual RX Rate: 25.000000 Msps...
>>>>>>
>>>>>> Setting RX Freq: 2000.000000 MHz...
>>>>>>
>>>>>> Actual RX Freq: 2000.000000 MHz...
>>>>>>
>>>>>> Waiting for "lo_locked": ++++++++++ locked.
>>>>>>
>>>>>> Press Ctrl + C to stop streaming...
>>>>>>
>>>>>> DGot an overflow indication. Please consider the following:
>>>>>>
>>>>>>   Your write medium must sustain a rate of 100.000000MB/s.
>>>>>>
>>>>>>   Dropped samples will not be written to the file.
>>>>>>
>>>>>>   Please modify this example for your purposes.
>>>>>>
>>>>>>   This message will not appear again.
>>>>>>
>>>>>> DD
>>>>>>
>>>>>> Done!
>>>>>>
>>>>>>
>>>>>> But with the 1 Gig interface at 25 MSps I am not getting any drops:
>>>>>>
>>>>>> ./rx_samples_to_file --duration 30 --rate 25e6 --freq=2e9 --file
>>>>>> /dev/null
>>>>>>
>>>>>> Setting RX Rate: 25.000000 Msps...
>>>>>>
>>>>>> Actual RX Rate: 25.000000 Msps...
>>>>>>
>>>>>> Setting RX Freq: 2000.000000 MHz...
>>>>>>
>>>>>> Actual RX Freq: 2000.000000 MHz...
>>>>>>
>>>>>> Waiting for "lo_locked": ++++++++++ locked.
>>>>>>
>>>>>> Press Ctrl + C to stop streaming...
>>>>>>
>>>>>> Done!
>>>>>>
>>>>>> I hope the RAID is not causing an issue here, as I don't see any
>>>>>> indication of that when I try the 1 Gig interface.
>>>>>>
>>>>>> From the output above, please let me know what you think is causing the
>>>>>> trouble.
>>>>>>
>>>>>>
>>>>>> Best regards
>>>>>> Sanjoy
>>>>>>
>>>>>>
>>>>>> On Sat, Oct 24, 2015 at 1:48 AM, Michael West <
>>>>>> michael.west at ettus.com> wrote:
>>>>>>
>>>>>>> I was corrected regarding the 10 GbE copper cables.  The long copper
>>>>>>> cables were failing our tests due to a bug in the IP used for the 10 GbE in
>>>>>>> the FPGA image that has since been resolved.  We are currently using cables
>>>>>>> up to 3m with no problems.  We have found some 5m cables work and some
>>>>>>> don't depending on the quality of the cable.  The 10 GbE cables and NICs
>>>>>>> sold on the Ettus website all work well.
>>>>>>>
>>>>>>> Regards,
>>>>>>> Michael
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Oct 23, 2015 at 3:47 PM, Marcus Müller <
>>>>>>> marcus.mueller at ettus.com> wrote:
>>>>>>>
>>>>>>>> So, to summarize this:
>>>>>>>>
>>>>>>>> TX is much more stable than RX, and RX shows not a single "O" (a
>>>>>>>> proper overflow) but a lot of "D" packet errors (which are, in fact, either
>>>>>>>> very serious packet losses or packet reordering).
>>>>>>>> It's probably the cable, or your network stack might be up to no
>>>>>>>> good.
>>>>>>>>
>>>>>>>> Could you share the output of
>>>>>>>>
>>>>>>>> ip route
>>>>>>>>
>>>>>>>> Also, it's possible that some default firewall rule makes your CPU
>>>>>>>> break a sweat; you could try to bypass the firewall for the eth2 device:
>>>>>>>>
>>>>>>>> sudo iptables -A INPUT -i eth2 -j ACCEPT
>>>>>>>>
>>>>>>>> (this is *not* permanent).
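>>>>>>>>
>>>>>>>> To check whether that rule actually sees traffic, and to put it ahead of
>>>>>>>> any earlier rule that might already be dropping packets (a sketch; the
>>>>>>>> packet/byte counters should grow while streaming):
>>>>>>>>
>>>>>>>> sudo iptables -L INPUT -v -n | grep eth2
>>>>>>>> sudo iptables -I INPUT 1 -i eth2 -j ACCEPT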
>>>>>>>> If that doesn't help, inspecting the packets captured with "wireshark"
>>>>>>>> might. For example, set up wireshark to listen on eth2 and power up the
>>>>>>>> USRP; you should see a few packets being exchanged on the
>>>>>>>> interface.
>>>>>>>> Then run
>>>>>>>>
>>>>>>>> rx_samples_to_file --rate 200e6 --file /dev/null --nsamps $(( 2 *
>>>>>>>> 1000  * 1000 ))
>>>>>>>>
>>>>>>>> then stop the capture. Did you see "D"s? If yes, I'd like to have a
>>>>>>>> look at the captured data; please save it. We'll figure out a way to
>>>>>>>> exchange it.
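>>>>>>>>
>>>>>>>> If a GUI capture is inconvenient, a command-line capture works as well (a
>>>>>>>> sketch, assuming eth2 and a writable /tmp; open the resulting file in
>>>>>>>> wireshark afterwards):
>>>>>>>>
>>>>>>>> sudo tcpdump -i eth2 -w /tmp/usrp_rx.pcap &
>>>>>>>> rx_samples_to_file --rate 200e6 --file /dev/null --nsamps $(( 2 * 1000 * 1000 ))
>>>>>>>> sudo pkill -f 'tcpdump -i eth2'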
>>>>>>>>
>>>>>>>> Best regards,
>>>>>>>> Marcus
>>>>>>>> On 23.10.2015 21:21, Sanjoy Basak wrote:
>>>>>>>>
>>>>>>>> Hi Michael, Neel and Marcus,
>>>>>>>>
>>>>>>>> Thanks a lot for the suggestions. I tried each of those.
>>>>>>>> I updated the kernel to 3.19.0-31-generic.
>>>>>>>>
>>>>>>>> I set the CPU governors to performance following the instructions at this
>>>>>>>> link:
>>>>>>>>
>>>>>>>> http://files.ettus.com/manual/page_usrp_x3x0_config.html#x3x0cfg_hostpc_pwr
>>>>>>>> Now, after setting them to performance and restarting, if I check with
>>>>>>>> cpufreq-info, I find all governors are set to performance. But if I check
>>>>>>>> cpufreq-info again a while later (5-10 minutes), I see all governors are
>>>>>>>> set to powersave.
>>>>>>>> Could you please tell me why, and how to configure it properly?
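>>>>>>>>
>>>>>>>> (A likely culprit for this behaviour on Ubuntu 14.04 is the
>>>>>>>> /etc/init.d/ondemand init script, which switches the governor away from
>>>>>>>> performance about a minute after boot. A sketch of disabling it, to be
>>>>>>>> verified on the specific system:
>>>>>>>>
>>>>>>>> sudo update-rc.d ondemand disable
>>>>>>>> cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
>>>>>>>>
>>>>>>>> The second command re-checks the governors after the next reboot.)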
>>>>>>>>
>>>>>>>> I tried benchmark_rate to check the performance. With tx-rate up to
>>>>>>>> 200 MSps I do not get any overflows/underflows. However, with rx-rate at
>>>>>>>> 25/50/100 MSps I get dropped packets. I made a direct connection from the
>>>>>>>> computer to the X310. No switch is used.
>>>>>>>>
>>>>>>>> On the contrary, with the shorter (copper) cable on the 1 Gig interface I
>>>>>>>> do not get any dropped packets at 25 MSps (for both tx_rate and rx_rate).
>>>>>>>> Is a 3 m copper cable really that bad?
>>>>>>>>
>>>>>>>> We don't have a fibre cable right now. We have already ordered one; I hope
>>>>>>>> we will get it next week and can check how it performs. We don't have any
>>>>>>>> SFP+ 10 Gig interface, so I can't really test the 10 Gig interface with a
>>>>>>>> short 10 Gig copper cable.
>>>>>>>>
>>>>>>>> These are the benchmark_rate results for 10 Gig interface
>>>>>>>>
>>>>>>>>
>>>>>>>> Testing receive rate 25.000000 Msps on 1 channels
>>>>>>>> DDDDDDDD
>>>>>>>> Benchmark rate summary:
>>>>>>>>   Num received samples:    249845308
>>>>>>>>   Num dropped samples:     15968
>>>>>>>>   Num overflows detected:  0
>>>>>>>>   Num transmitted samples: 0
>>>>>>>>   Num sequence errors:     0
>>>>>>>>   Num underflows detected: 0
>>>>>>>>
>>>>>>>>
>>>>>>>> Testing receive rate 100.000000 Msps on 1 channels
>>>>>>>> DDDDDDDDDDDDDDDDDDDDD
>>>>>>>> Benchmark rate summary:
>>>>>>>>   Num received samples:    999934124
>>>>>>>>   Num dropped samples:     41916
>>>>>>>>   Num overflows detected:  0
>>>>>>>>   Num transmitted samples: 0
>>>>>>>>   Num sequence errors:     0
>>>>>>>>   Num underflows detected: 0
>>>>>>>>
>>>>>>>>  I also used this command
>>>>>>>> sudo ethtool -C eth2 rx-usecs 16 rx-frames 20
>>>>>>>>
>>>>>>>> Please let me know what you think is causing the problem, and
>>>>>>>> whether replacing the copper cable with a short fibre cable would solve it.
>>>>>>>>
>>>>>>>> Best regards
>>>>>>>> Sanjoy
>>>>>>>>
>>>>>>>>
>>>>>>>> On Wed, Oct 21, 2015 at 12:27 PM, Marcus Müller <
>>>>>>>> usrp-users at lists.ettus.com> wrote:
>>>>>>>>
>>>>>>>>> Hi Sanjoy,
>>>>>>>>>
>>>>>>>>> furthermore, and I found that to be crucial on my system, you
>>>>>>>>> should make sure that your 10GE card doesn't generate an interrupt for
>>>>>>>>> every packet it receives (your application will pull these packets as fast
>>>>>>>>> as possible, anyway).
>>>>>>>>> On Linux, you'd do
>>>>>>>>>
>>>>>>>>> sudo ethtool -c {name of ethernet interface, e.g. eth1}
>>>>>>>>>
>>>>>>>>> to see the coalescing options available with your device and
>>>>>>>>> kernel version,
>>>>>>>>> and modify a setting using
>>>>>>>>>
>>>>>>>>> sudo ethtool -C {name of ethernet interface, e.g. eth1} {name of
>>>>>>>>> setting} {new value of setting}
>>>>>>>>>
>>>>>>>>> For example, I set "rx-usecs" (microseconds to wait before
>>>>>>>>> triggering an interrupt) to 16, and "rx-frames" to 20 or so -- your mileage
>>>>>>>>> may vary, depending on your application, OS and also a few hardware
>>>>>>>>> factors.
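>>>>>>>>>
>>>>>>>>> A concrete sequence (a sketch, assuming the interface is eth2 as in this
>>>>>>>>> thread) would be:
>>>>>>>>>
>>>>>>>>> sudo ethtool -c eth2
>>>>>>>>> sudo ethtool -C eth2 rx-usecs 16 rx-frames 20
>>>>>>>>> sudo ethtool -c eth2 | grep -E 'rx-usecs|rx-frames'
>>>>>>>>>
>>>>>>>>> The first command shows the current coalescing settings, the second adjusts
>>>>>>>>> them, and the third verifies the new values.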
>>>>>>>>>
>>>>>>>>> Best regards,
>>>>>>>>> Marcus
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 19.10.2015 21:56, Michael West via USRP-users wrote:
>>>>>>>>>
>>>>>>>>> Also, check the CPU governor and make sure it is set to
>>>>>>>>> "performance".
>>>>>>>>>
>>>>>>>>> Your output indicates frame errors on the interface, which could
>>>>>>>>> be due to the cable.  If possible, try using a shorter copper cable or
>>>>>>>>> switch to a fibre cable.  With 10 GbE over copper, the shorter the better.
>>>>>>>>> If you need the length, you should be using fibre.
>>>>>>>>>
>>>>>>>>> If you still have issues, please provide a little more
>>>>>>>>> information:  Are you connected directly with the X310 or through a
>>>>>>>>> switch?  How are you testing the performance?  Are you using your own
>>>>>>>>> application or benchmark_rate?  If your own application, what is it doing
>>>>>>>>> with the received data (processing it with the same thread or different
>>>>>>>>> thread, saving to disk, etc...)?
>>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Michael
>>>>>>>>>
>>>>>>>>> On Mon, Oct 19, 2015 at 12:51 PM, Neel Pandeya via USRP-users <
>>>>>>>>> usrp-users at lists.ettus.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hello Sanjoy:
>>>>>>>>>>
>>>>>>>>>> That Intel X520 10GbE card with Ubuntu 14.04.3 should be working
>>>>>>>>>> well for you.
>>>>>>>>>>
>>>>>>>>>> Could you try upgrading to kernel 3.16 or 3.19, and let us know
>>>>>>>>>> your results?
>>>>>>>>>>
>>>>>>>>>> To upgrade to kernel 3.16, run "sudo apt-get install
>>>>>>>>>> linux-generic-lts-utopic".
>>>>>>>>>>
>>>>>>>>>> To upgrade to kernel 3.19, run "sudo apt-get install
>>>>>>>>>> linux-generic-lts-vivid".
>>>>>>>>>>
>>>>>>>>>> After the upgrade, be sure to reboot the system.
>>>>>>>>>>
>>>>>>>>>> Also, check that you have the CPU governors set to "performance",
>>>>>>>>>> and that you have the highest clock rate set.
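>>>>>>>>>>
>>>>>>>>>> A quick way to check both (a sketch, assuming a cpufreq-capable kernel):
>>>>>>>>>>
>>>>>>>>>> cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
>>>>>>>>>> grep "cpu MHz" /proc/cpuinfo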
>>>>>>>>>>
>>>>>>>>>> --Neel
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 19 October 2015 at 12:03, Sanjoy Basak via USRP-users <
>>>>>>>>>> usrp-users at lists.ettus.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Experts,
>>>>>>>>>>> We recently bought a 10 Gig ethernet card (X520-DA2) and a Twinaxial-Kabel
>>>>>>>>>>> SFP+ M SFP+ M 3,0m. I installed it in our PC (OS: Ubuntu Mate 14.04.3 LTS)
>>>>>>>>>>> and can communicate with the X310/SBX-120 properly. However, the performance
>>>>>>>>>>> we are getting is quite poor. I am getting dropped packets (D, O) mostly at
>>>>>>>>>>> 25 and 50 MSps, and it gets quite bad at 100 MSps and sometimes even at lower
>>>>>>>>>>> sample rates.
>>>>>>>>>>> Previously I tested with 1 Gig ethernet; at 25 MSps I could
>>>>>>>>>>> stream without any dropped/late packets or underruns.
>>>>>>>>>>> The CPU usage and network usage are also not reaching 100%.
>>>>>>>>>>>
>>>>>>>>>>> Could you please tell me what to check/correct so I can stream
>>>>>>>>>>> at 100 MSps without any errors?
>>>>>>>>>>>
>>>>>>>>>>> I am putting the ifconfig and ethtool results and the kernel
>>>>>>>>>>> version (3.16.0-30-generic) below.
>>>>>>>>>>>
>>>>>>>>>>> eth2      Link encap:Ethernet  HWaddr 90:e2:ba:a6:a9:dd
>>>>>>>>>>>           inet6 addr: fe80::92e2:baff:fea6:a9dd/64 Scope:Link
>>>>>>>>>>>           UP BROADCAST MULTICAST  MTU:9000  Metric:1
>>>>>>>>>>>           RX packets:4530726 errors:146 dropped:0 overruns:0
>>>>>>>>>>> frame:146
>>>>>>>>>>>           TX packets:6811057 errors:0 dropped:0 overruns:0
>>>>>>>>>>> carrier:0
>>>>>>>>>>>           collisions:0 txqueuelen:1000
>>>>>>>>>>>           RX bytes:26331306938 (26.3 GB)  TX bytes:35856233935
>>>>>>>>>>> (35.8 GB)
>>>>>>>>>>>
>>>>>>>>>>> Settings for eth2:
>>>>>>>>>>> Supported ports: [ FIBRE ]
>>>>>>>>>>> Supported link modes:   10000baseT/Full
>>>>>>>>>>> Supported pause frame use: No
>>>>>>>>>>> Supports auto-negotiation: No
>>>>>>>>>>> Advertised link modes:  10000baseT/Full
>>>>>>>>>>> Advertised pause frame use: No
>>>>>>>>>>> Advertised auto-negotiation: No
>>>>>>>>>>> Speed: 10000Mb/s
>>>>>>>>>>> Duplex: Full
>>>>>>>>>>> Port: Other
>>>>>>>>>>> PHYAD: 0
>>>>>>>>>>> Transceiver: external
>>>>>>>>>>> Auto-negotiation: off
>>>>>>>>>>> Cannot get wake-on-lan settings: Operation not permitted
>>>>>>>>>>> Current message level: 0x00000007 (7)
>>>>>>>>>>>       drv probe link
>>>>>>>>>>> Link detected: yes
>>>>>>>>>>>
>>>>>>>>>>> Our computer configuration is:
>>>>>>>>>>> Processor: 12x Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40 GHz
>>>>>>>>>>> RAM: 65871 MB
>>>>>>>>>>> Storage: SSD RAID 5
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Best regards
>>>>>>>>>>> Sanjoy
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>> --
>>>>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>

