[USRP-users] Bound in streaming X310 through PCIE

LEMENAGER Claude claude.lemenager at thalesgroup.com
Thu Dec 4 09:23:58 EST 2014


Hello,

This mail is just to share, and hopefully gather, information about the practical receive streaming capacity of the X310.
My system:

WBX ==>
          X310 ==> PCIe (NI-USRPRIO) ==>
WBX ==>
    .
    .                                    ==> UHD 003.007.003 ==> PC Xeon / Ubuntu 14.04
    .
WBX ==>
          X310 ==> PCIe (NI-USRPRIO) ==>
WBX ==>

The PC has 32 cores and a very fast memory bus (>1333 MHz).
From a software point of view, I simply configure up to 3 X310s (3 at the moment), synchronized through a daisy chain (10 MHz and PPS), and then test the highest data rate I can sustain into memory (shared memory via Boost programming).
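In UHD terms the setup is essentially the following (a minimal sketch, not my actual code; the NI-RIO resource names and the indexed resource0/1/2 device arguments are assumptions on my side, since they depend on how niusrprio enumerates the boards):

    #include <uhd/usrp/multi_usrp.hpp>

    // Open the three PCIe-attached X310s as one multi_usrp object.
    // The resource names below are placeholders.
    uhd::usrp::multi_usrp::sptr usrp = uhd::usrp::multi_usrp::make(
        std::string("resource0=RIO0,resource1=RIO1,resource2=RIO2"));

    // Every motherboard takes the daisy-chained 10 MHz and PPS.
    usrp->set_clock_source("external");
    usrp->set_time_source("external");

    // Zero the time registers of all devices on the next PPS edge
    // so the streams can later be started at a common time.
    usrp->set_time_unknown_pps(uhd::time_spec_t(0.0));

    // Example rate: 6 channels at 25 Msps.
    usrp->set_rx_rate(25e6);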

The result on my system is that I cannot transfer data faster than about 150 Msamples/s in total without overflows.
For example, I can configure:

-          6 channels at 25 MHz (3 X310s): the consumer thread occupies 95% of one core, plus 4 cores at 35% (UHD?)

-          2 channels at 50 MHz (A:0, B:0): the same thread occupies 60% of one core

-          1 channel at 100 MHz (A:0): the same thread occupies 60% of one core

-          1 channel at 100 MHz plus 1 channel at 50 MHz on two X310s (A:0), with two instances of the program (2 threads)
I have never obtained better results than these (there seems to be a bound around 150 Msamples/s).
For every combination tested above, I can record the data from shared memory to disk in real time without overflow problems.
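The receive side is just the standard recv() loop, counting the overflows reported in the metadata before the samples go into the Boost shared-memory segment (again only a sketch; the two-channel buffer handling below is illustrative, not my production code):

    uhd::stream_args_t stream_args("sc16", "sc16"); // keep 16-bit samples end to end
    stream_args.channels.push_back(0);              // channel indices as mapped by multi_usrp
    stream_args.channels.push_back(1);
    uhd::rx_streamer::sptr rx = usrp->get_rx_stream(stream_args);

    // Start all channels on the same tick so the devices stay aligned.
    uhd::stream_cmd_t cmd(uhd::stream_cmd_t::STREAM_MODE_START_CONTINUOUS);
    cmd.stream_now = false;
    cmd.time_spec  = usrp->get_time_now() + uhd::time_spec_t(1.0);
    rx->issue_stream_cmd(cmd);

    const size_t spb = rx->get_max_num_samps();
    std::vector<std::complex<short> > buf0(spb), buf1(spb);
    std::vector<void *> buffs;
    buffs.push_back(&buf0.front());
    buffs.push_back(&buf1.front());

    uhd::rx_metadata_t md;
    while (true) {
        const size_t n = rx->recv(buffs, spb, md, 1.0);
        if (md.error_code == uhd::rx_metadata_t::ERROR_CODE_OVERFLOW) {
            continue; // the overflow ('O') I see above ~150 Msamples/s total
        }
        // ... copy the n samples per channel into the shared-memory ring here ...
    }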

So my questions are:

-          Has anybody obtained better streaming results? (It seems that 200 Msamples/s full duplex, i.e. 400 Msamples/s in total, should be achievable.)

-          Where could the bottleneck be?

-          Could the synchronization scheme impact the result?

Thank you in advance for your answers.

Claude Leménager