[USRP-users] E310: a circular buffer to save rx samples to SD Card

Neel Pandeya neel.pandeya at ettus.com
Thu Jul 20 00:15:58 EDT 2017


Hello Maurizio:

I have not reviewed your code in detail, but a circular buffer using two
separate threads, one for writing into the buffer, and one for reading from
the buffer and writing to the SD card, seems like a reasonable approach.
You will be limited by how fast samples can be DMA'd from the FPGA to the
ARM CPU, which is probably around 12 Msps, as well as how fast you can
write to the SD card. You might want to benchmark the write speed to the SD
card with a separate stand-alone program. What speed of SD card
are you using? Have you been able to make any further progress on this
issue since initially making this post?

--Neel Pandeya



On 11 May 2017 at 09:22, Crozzoli Maurizio via USRP-users <
usrp-users at lists.ettus.com> wrote:

> Hi!
> We are working on an E310.
> We are trying to implement a receiver operating at 1.92 Msps which is able
> to store data directly to the SD card of the E310 (a partitioned space with
> optimized ext4 performance).
> Our implementation is based on two threads:
> 1. the main one receives samples from the E310 RX stages and stores them
> in a sort of "circular buffer" in order to try and extend the total
> acquisition time frame (if we do not use the "circular buffer" approach we
> can save up to 84 million samples from two channels, but we would like to
> get more if feasible);
> 2. another one is in charge of getting data out of the "circular buffer"
> and saving it to a file on the SD card.
>
> A skeleton of our source code, based on the "rx_multi_samples.cpp" example
> provided by Ettus, is given below.
>
> As a matter of fact, the code runs and can save about 17 million samples
> (8800*(1016*2)) from two channels with a really small buffer
> (samps_per_buff = 1016 elements; vect_size = samps_per_buff * 2), which
> leads us to conclude that writing data to the SD card is not as slow as we
> had expected.
>
> Since we can get such a result with such a small buffer, we expected to do
> even better with a much larger buffer (1016*N, N>>2) but, in practice, we
> found that this is not the case. We tried a buffer large enough to store
> 84 million samples (N=82000), as we could do before implementing the
> current "circular buffer" approach, but we could not save to the SD card
> any more than the 17 million samples we got with N=2.
>
> Is there anyone on the list who could help us understand what is going on
> and, if possible, find a solution?
>
> TIA!
>
> BR,
> Maurizio.
>
>
> /// GLOBAL VARIABLES //////////////////////////////////////////
> const size_t samps_per_buff = 1016;
> size_t vect_size = samps_per_buff * 2; // Vector dimension
> size_t iRead  = 0;                     // Index of read item
> size_t iWrite = 0;                     // Index of written item
>
> vector<vector<complex<short> > > buffs(2, vector<complex<short> >(vect_size));
> ///////////////////////////////////////////////////////////////
>
>
> /// THREAD ////////////////////////////////////////////////////
> void* writeData(void* param)
> {
>         [...]
>
>     do
>     {
>        if(iRead > iWrite)
>        {
>           buf_ptr[0] = &buffs[0].at(iWrite%vect_size);
>           buf_ptr[1] = &buffs[1].at(iWrite%vect_size);
>           if(iRead%vect_size == iWrite%vect_size) pthread_mutex_lock(&mtx);
>           fwrite(buf_ptr[0],4,samps_per_buff,OutRXA);
>           fwrite(buf_ptr[1],4,samps_per_buff,OutRXB);
>           if(iRead%vect_size == iWrite%vect_size) pthread_mutex_unlock(&mtx);
>           iWrite += samps_per_buff;
>        }
>     }
>     while (!done);
>
>     return NULL;
> }
> ///////////////////////////////////////////////////////////////
>
>
>
> /// START MAIN ////////////////////////////////////////////////
>     //create a usrp device
>     //always select the subdevice first, the channel mapping affects the other settings
>     //set the rx sample rate (sets across all channels)
>     //detect which channels to use
>
>     //create a receive streamer
>     //linearly map channels (index0 = channel0, index1 = channel1, ...)
>     //meta-data will be filled in by recv()
>
>     //create a vector of pointers to point to each of the channel buffers
>     //setup streaming
>     //issue stream command
>
>     ////////////////////////////////////////////////////////////////////
>     // Start Thread to enable "parallel" writing activity to SD (see above)
>     ////////////////////////////////////////////////////////////////////
>
>
>     int iLoopMax = total_num_samps/samps_per_buff;
>     for(int iLoop = 0; iLoop < iLoopMax; iLoop++)
>     {
>        if ( (iRead-iWrite) >= vect_size )
>          break;
>
>        // E310 reads bursts of size_t samps_per_buff = 1016 elements
>        size_t num_rx_samps = rx_stream->recv(buff_ptrs, samps_per_buff, md, timeout);
>
>        iRead += num_rx_samps;
>
>        //use a small timeout for subsequent packets (large enough to receive all samples)
>        timeout = ((double)samps_per_buff / rate) + 0.1;
>
>        if(iRead < total_num_samps)
>        {
>           //advance pointers
>        }
>     }
> ///////////////////////////////////////////////////////////////
>
>
> This e-mail and any attachments are confidential and may contain privileged
> information intended for the addressee(s) only. Dissemination, copying,
> printing or use by anybody else is unauthorised. If you are not the
> intended recipient, please delete this message and any attachments and
> advise the sender by return e-mail, Thanks.
>
> Please respect the environment. Do not print this e-mail unless necessary.
>
> _______________________________________________
> USRP-users mailing list
> USRP-users at lists.ettus.com
> http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com
>