[USRP-users] recv_frame_size and num_recv_frames for B210

Michael West michael.west at ettus.com
Thu Jul 9 14:23:25 EDT 2015


Hi Dario,

Answers inline....

Regards,
Michael

On Thu, Jul 9, 2015 at 10:40 AM, Dario Fertonani <dario.fertonani at gmail.com>
wrote:

> Thank you Michael.
> Can you please confirm/correct the following?
>
>    - The parameter num_recv_frames sets the maximum number of frames that
>    can be buffered. If this buffer fills up an ERROR_CODE_OVERFLOW is thrown.
>
There is buffering in the FPGA on the USRP and in UHD on the host side.
The ERROR_CODE_OVERFLOW is returned when the buffer in the FPGA runs out of
space, which indicates either a transport issue or back pressure from the
buffers in UHD filling up.
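
For reference, a minimal sketch of a recv() loop that surfaces the
condition (assumes an existing multi_usrp handle named usrp; error
handling trimmed):

#include <uhd/usrp/multi_usrp.hpp>
#include <complex>
#include <iostream>
#include <vector>

// Minimal sketch: watch for overflows in the receive loop.
// Assumes `usrp` is an existing uhd::usrp::multi_usrp::sptr.
void rx_loop(uhd::usrp::multi_usrp::sptr usrp)
{
    uhd::stream_args_t stream_args("fc32", "sc16"); // CPU format, wire format
    uhd::rx_streamer::sptr rx_stream = usrp->get_rx_stream(stream_args);

    std::vector<std::complex<float>> buff(rx_stream->get_max_num_samps());
    uhd::rx_metadata_t md;

    uhd::stream_cmd_t stream_cmd(uhd::stream_cmd_t::STREAM_MODE_START_CONTINUOUS);
    stream_cmd.stream_now = true;
    rx_stream->issue_stream_cmd(stream_cmd);

    while (true) {
        const size_t num_rx = rx_stream->recv(&buff.front(), buff.size(), md, 1.0);
        if (md.error_code == uhd::rx_metadata_t::ERROR_CODE_OVERFLOW) {
            // The FPGA buffer ran out of space: a transport issue, or
            // back pressure from the host-side frames all being full.
            std::cerr << "O" << std::flush;
            continue;
        }
        // ... hand off num_rx samples for processing ...
    }
}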

>
>    - Each of the num_recv_frames buffers is allocated a size of
>    recv_frame_size bytes.
>
Yes.

>
>    - The "frame" concept for num_recv_frames and recv_frame_size is the
>    same as for the "spp" option used for rx_stream args, as described at
>    https://github.com/EttusResearch/uhd/wiki/Latency. For example, if I
>    set spp to 1000 [and 1000 <= rxStream->get_max_num_samps()], then the
>    "frame" has 1000 samples even when recv_frame_size is "oversized" to 65536.
>
Correct.
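
For concreteness, a sketch of passing "spp" through the stream args
(1000 is the value from your example; the rest is boilerplate):

uhd::stream_args_t stream_args("fc32", "sc16");
stream_args.args["spp"] = "1000"; // each frame carries 1000 samples
uhd::rx_streamer::sptr rx_stream = usrp->get_rx_stream(stream_args);
// get_max_num_samps() reports the effective per-frame sample capacity:
std::cout << rx_stream->get_max_num_samps() << std::endl;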


> The above points are more critical than the following ones, which are just
> about avoiding memory waste.
>
>    - If I set num_recv_frames to 32 in a 2 rx system (we use B210), does
>    that mean 32 frames for each antenna or 32 frames overall (16 frames for
>    each antenna)?
>
The value is for the transport, so 32 frames overall.
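
For concreteness, these knobs are passed as device args at device
creation, e.g. (values illustrative):

// Transport tuning happens at device creation; the frames are shared
// across both B210 RX channels, not allocated per channel.
const std::string args = "type=b200,num_recv_frames=32,recv_frame_size=16384";
uhd::usrp::multi_usrp::sptr usrp = uhd::usrp::multi_usrp::make(args);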

>
>    - If I use float32 as CPU format (8 bytes per complex sample) and set
>    spp to 1000, can I assume that each frame will effectively use 8000 bytes,
>    or is there some major overhead to account for?
>
No.  All buffering is done in wire format, not CPU format.  The data is
only converted into CPU format upon the recv() call.  Assuming a wire
format of sc16, each frame would contain 4000 bytes of sample data.  There
are also 24-32 bytes of header information that use space in the frame.
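
To make the arithmetic concrete (using the worst-case 32-byte header
figure above):

// Back-of-the-envelope frame sizing for spp = 1000 over an sc16 wire:
const size_t spp            = 1000;
const size_t bytes_per_samp = 4;    // sc16: 2 bytes I + 2 bytes Q
const size_t header_bytes   = 32;   // worst-case packet header
const size_t frame_bytes    = spp * bytes_per_samp + header_bytes; // 4032
// recv_frame_size only needs to be >= frame_bytes; oversizing to 65536
// leaves roughly 60 KB per frame unused.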


> Thanks,
> Dario
>
> On Thu, Jul 9, 2015 at 9:27 AM, Michael West <michael.west at ettus.com>
> wrote:
>
>> Hi Dario,
>>
>> Increasing the buffering on the receive side will not change the average
>> latency, but will allow for a higher peak latency.  The fact that
>> increasing the buffering resolved the overflow issue indicates that the
>> buffer was not being drained fast enough, which suggests that the
>> application has excess latency between recv() calls.  This is commonly
>> caused by the thread performing some sort of IO operation (such as writing
>> data to disk).
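
A common remedy is a producer/consumer split so the recv() thread never
blocks on disk. A minimal sketch (names and sizes are illustrative, not
from this thread):

#include <complex>
#include <condition_variable>
#include <deque>
#include <fstream>
#include <mutex>
#include <vector>

std::mutex mtx;
std::condition_variable cv;
std::deque<std::vector<std::complex<float>>> buf_queue;
bool done = false;

// Writer thread: drains the queue to disk so the recv() thread never
// touches the filesystem.
void writer(const std::string& path)
{
    std::ofstream out(path, std::ios::binary);
    std::unique_lock<std::mutex> lock(mtx);
    while (!done || !buf_queue.empty()) {
        cv.wait(lock, [] { return done || !buf_queue.empty(); });
        while (!buf_queue.empty()) {
            std::vector<std::complex<float>> buf = std::move(buf_queue.front());
            buf_queue.pop_front();
            lock.unlock(); // write without holding the lock
            out.write(reinterpret_cast<const char*>(buf.data()),
                      buf.size() * sizeof(buf[0]));
            lock.lock();
        }
    }
}

// In the recv() loop, enqueue instead of writing:
//   { std::lock_guard<std::mutex> g(mtx); buf_queue.push_back(std::move(buff)); }
//   cv.notify_one();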
>>
>> Regards,
>> Michael
>>
>> On Wed, Jul 8, 2015 at 10:18 AM, Dario Fertonani via USRP-users <
>> usrp-users at lists.ettus.com> wrote:
>>
>>> I was able to eliminate the rare overflow errors just by using higher
>>> values of recv_frame_size and num_recv_frames for the rx stream. Our stack
>>> easily meets the timeline, so we're looking at eliminating rare overflow
>>> errors due to the non-deterministic nature of the non-real-time OS (in our
>>> case, Ubuntu 14.04 low latency), rather than to slow stack processing.
>>>
>>> Any drawback in using high values of recv_frame_size
>>> and num_recv_frames, besides memory usage/waste? For example, I don't fully
>>> understand what happens when num_recv_frames is bumped up to 256 from the
>>> default value of 16. Does the worst-case latency increase by 16 times?
>>>
>>> Thanks,
>>> Dario
>>>
>