[USRP-users] Strange behavior of N210 with timed commands

LOUF Laurent laurent.louf at thalesgroup.com
Mon Apr 13 08:34:58 EDT 2015


Hello,

I can't find the answer I got to the following message, but for anyone interested, I seem to have solved this problem. Doing some latency measurements with lab equipment, I realized that the latency of my whole system was fine at the beginning, but then increased over time. Since I was adding 625.5µs for each frame (each frame corresponding to 625µs of data), that was quite normal. So I tried a few things and found my answer.

The problem was that I was using:
start_of_burst = true
end_of_burst = true
and when I switched to
start_of_burst = true
end_of_burst = false
everything worked perfectly. So my guess is that you can't send separate bursts back to back: you need to add a small offset between them (I didn't try to find the exact value, but adding 0.05µs was still OK) for the USRP to be able to handle them. So if you want to send a continuous stream of data, you can't use separate bursts and have to stick with a single one.

I have one question though: how are multiple packets handled with these settings?
start_of_burst = true
end_of_burst = false
Does the USRP understand that, as long as I don't send a packet with end_of_burst = true, it should treat everything as a single burst? Or should I change my program to first send a frame with start_of_burst = true, end_of_burst = false and then frames with start_of_burst = false, end_of_burst = false?
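In code, the alternative pattern I'm asking about would look something like this (just a sketch; names like tx_stream and frames are placeholders):

    uhd::tx_metadata_t md;
    md.start_of_burst = true;          // first frame opens the burst
    md.end_of_burst   = false;
    md.has_time_spec  = true;
    md.time_spec      = first_tx_time;

    for (size_t n = 0; n < num_frames; n++) {
        tx_stream->send(&frames[n].front(), frames[n].size(), md);
        md.start_of_burst = false;     // subsequent frames continue the burst
        md.has_time_spec  = false;     // samples then follow back-to-back
    }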

Anyway, thanks for the help.
Laurent Louf.


From: USRP-users [mailto:usrp-users-bounces at lists.ettus.com] On Behalf Of LOUF Laurent via USRP-users
Sent: Friday, December 12, 2014 14:58
To: usrp-users at lists.ettus.com
Subject: [USRP-users] Strange behavior of N210 with timed commands

Hello,

I'm not quite sure how to use this mailing list, so do not hesitate to redirect me to a better place or to someone who may know the answer to my question.

That being said, I'm currently working with a USRP N210 at work and observed a strange behavior on which I would like to have your input. I use the USRP to receive a signal, apply a channel simulation to it and transmit the resulting signal. I have absolutely no problem on the receive part, but a small one on the transmit part. The signal I use is sampled at 2 Msamples/second, and I work on frames of 1250 samples, i.e. frames of 625µs. So the whole process goes like this: I receive a frame, apply the channel simulation and then transmit it, at a fixed interval.

Here is where the problem occurs: I do some calculations based on the time when I receive the first frame, add the latency of my system and a multiple of 625µs corresponding to the number of frames sent, which gives the time at which to transmit the frame; this time is sent to the USRP along with the current frame. If I use an interval of exactly 625µs, I get a fastpath message "L" every two frames, and I can see that the data of the second frame is not transmitted. If I use an interval of 625.1µs, everything seems to work fine and I don't get any error. The metadata I send along with each frame is: start_of_burst = true, end_of_burst = true, has_time_spec = true and the time spec I got from my calculations. That feels strange to me, since I don't even have to add the duration of one sample (0.5µs) to make it work. Could it be something related to the burst mode that I'm using? I did not fully understand that part when reading the documentation, so feel free to enlighten me on that subject.
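To make it concrete, my transmit side does roughly the following (a simplified sketch using the UHD C++ API; first_rx_time, system_latency and the other names are placeholders for my own variables):

    // time at which the current frame should leave the USRP
    uhd::time_spec_t tx_time = first_rx_time
        + uhd::time_spec_t(system_latency)        // measured latency of my processing chain
        + uhd::time_spec_t(frame_count * 625e-6); // 625µs per 1250-sample frame at 2 Msps

    uhd::tx_metadata_t md;
    md.start_of_burst = true;
    md.end_of_burst   = true;   // one self-contained burst per frame
    md.has_time_spec  = true;
    md.time_spec      = tx_time;

    tx_stream->send(&frame.front(), frame.size(), md);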

Thanks in advance for any answer,
Laurent Louf.



