[USRP-users] USRP latency problem

YENDstudio . yend19 at gmail.com
Wed Nov 22 04:34:30 EST 2017


Hi,

I have an application (written in C++) that periodically transmits and
receives fixed-size bursts (N=2000 samples long) in full-duplex mode. To
account for TX-RX latency, I introduced an offset of 4N ticks on the TX
timestamp. Although the device is configured to stream in continuous mode,
the time_spec in the TX metadata is incremented by N for every burst, and
the 'start_of_burst' flag is set only on the first burst. On the RX side, I
call recv() repeatedly until I have N samples. If the time_spec in the RX
metadata is later than expected, the lost samples are accounted for by
padding the received burst.
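
For context, here is a simplified sketch of the timing logic described
above (illustrative only; the sample rate, stream arguments, buffer
contents and variable names are placeholders, not my actual application
code):

#include <uhd/usrp/multi_usrp.hpp>
#include <complex>
#include <vector>

int main()
{
    const size_t N    = 2000;    // samples per burst
    const double rate = 1e6;     // placeholder sample rate

    auto usrp = uhd::usrp::multi_usrp::make("");
    usrp->set_time_now(uhd::time_spec_t(0.0));

    uhd::stream_args_t args("fc32", "sc16");
    auto tx = usrp->get_tx_stream(args);
    auto rx = usrp->get_rx_stream(args);

    // start continuous RX at t = 0
    uhd::stream_cmd_t cmd(uhd::stream_cmd_t::STREAM_MODE_START_CONTINUOUS);
    cmd.stream_now = false;
    cmd.time_spec  = uhd::time_spec_t(0.0);
    rx->issue_stream_cmd(cmd);

    std::vector<std::complex<float>> tx_buf(N), rx_buf(N);

    uhd::tx_metadata_t tx_md;
    tx_md.has_time_spec  = true;
    tx_md.start_of_burst = true;   // only on the very first burst
    tx_md.end_of_burst   = false;

    // 4N-tick offset to cover TX-RX latency
    uhd::time_spec_t tx_time = uhd::time_spec_t::from_ticks(4 * N, rate);

    for (size_t burst = 0; burst < 1000; burst++) {
        tx_md.time_spec = tx_time;
        tx->send(&tx_buf.front(), N, tx_md);
        tx_md.start_of_burst = false;                    // off afterwards
        tx_time += uhd::time_spec_t::from_ticks(N, rate); // advance by N

        // RX side: loop on recv() until a full burst of N samples is in
        size_t got = 0;
        uhd::rx_metadata_t rx_md;
        while (got < N) {
            got += rx->recv(&rx_buf[got], N - got, rx_md, 1.0);
            if (rx_md.error_code != uhd::rx_metadata_t::ERROR_CODE_NONE)
                break;             // overflow/timeout handling omitted
        }
        // if rx_md.time_spec is later than expected, the gap is filled
        // with padding samples (not shown) so the bursts stay aligned
    }
    return 0;
}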

With the above setup, the application runs normally for a while, and then
I start getting an endless stream of 'L's (late packets reported by the
device).

Before each transmission I compared the timestamp of the burst about to be
sent with the device time returned by get_time_now(). The gap between these
two times stays nearly the same (around 4N ticks) for the whole run.
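
The check looks roughly like this (illustrative only; 'tx_time', 'rate' and
'N' are the placeholders from the sketch above):

#include <uhd/usrp/multi_usrp.hpp>
#include <cstdio>

// 'tx_time' is the timestamp of the burst about to be sent,
// 'rate' is the sample rate
static void log_tx_margin(uhd::usrp::multi_usrp::sptr usrp,
                          const uhd::time_spec_t& tx_time,
                          double rate, size_t N)
{
    const uhd::time_spec_t now = usrp->get_time_now();
    const long long margin = (tx_time - now).to_ticks(rate);
    // the margin stays near 4*N ticks for the whole run, yet the host
    // console eventually fills up with 'L' (late) indicators
    std::printf("TX-ahead margin: %lld ticks (expected ~%zu)\n",
                margin, 4 * N);
}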

I was wondering whether it is possible to keep the streaming running stably
while keeping the TX and RX bursts time-synchronized.

Regards