time-nuts@lists.febo.com

Discussion of precise time and frequency measurement

View all threads

Zero dead time and average frequency estimation

MD
Magnus Danielson
Mon, Feb 1, 2010 2:53 AM

Fellow time-nuts,

I keep poking around various processing algorithms trying to figure out
what they do. One aspect which may be interesting to know about is the
use of zero dead time phase or frequency data and the frequency
estimation from that data. One may be compelled to differentiate the
time data into frequency data using nearby samples, according to
y(i) = (x(i+1) - x(i))/tau0, and then simply form the average of those.
The interesting thing about that calculation is that the x(i+1) and
x(i) terms cancel except for x(1) and x(N), so effectively only two
samples of phase data are being used. This is a simple illustration of
how an algorithm may provide fewer degrees of freedom than one might
initially assume it to have (N-1 in this case).
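
The cancellation is easy to check numerically. A minimal Python sketch
(variable names are mine, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(42)
tau0 = 1.0                      # sample interval in seconds
x = rng.normal(size=1000)       # simulated phase samples x(1)..x(N)

# Average the N-1 adjacent frequency estimates y(i) = (x(i+1) - x(i))/tau0.
y = np.diff(x) / tau0
mean_y = y.mean()

# The inner terms telescope away; only x(1) and x(N) survive.
two_point = (x[-1] - x[0]) / ((len(x) - 1) * tau0)

print(np.isclose(mean_y, two_point))  # True
```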

A similar type of cancellation occurs in linear drift estimation.

Maybe this could spark some interest in the way one estimates the
various parameters and what different estimators may do to cancel noise
of individual samples.

Cheers,
Magnus

TV
Tom Van Baak
Mon, Feb 1, 2010 3:50 AM

Magnus,

Correct, all the terms cancel between the end points. Note
that this is exactly equivalent to the way a traditional gated
frequency counter works -- you open the gate, wait some
sample period (maybe 1, 10, or 100 seconds) and then
close the gate. In this scenario it's clear that all the phase
information during the interval is ignored; the only points
that matter are the start and the stop.

Modern high-resolution frequency counters don't do this; instead
they use a form of "continuous counting": they take a massive number
of short phase samples and create a more precise average frequency
out of that.

There are some excellent papers on the subject; start with
the one by Rubiola:

< http://www.femto-st.fr/~rubiola/pdf-articles/journal/2005rsi-hi-res-freq-counters.pdf >

There are additional papers (perhaps Bruce can locate them).

I wonder if fully overlapped frequency calculations would be
one solution to your query; similar to the advantage that the
overlapping ADEV sometimes has over back-to-back ADEV.

Related to that, I recently looked into the side-effects of using
running averages on phase or frequency data, specifically
what it does to a frequency stability plot (ADEV). See:

http://www.leapsecond.com/pages/adev-avg/

Not surprisingly, you get artificially low ADEV numbers when
you average in this way; the reason is that running averages,
by design, tend to smooth out (low-pass filter) the raw data.
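
The effect is easy to reproduce. A minimal sketch, assuming white FM
noise and a simple non-overlapping ADEV at tau = tau0 (the function and
names are mine, not from the page):

```python
import numpy as np

def adev1(y):
    """Non-overlapping Allan deviation of frequency data at the basic tau0."""
    return np.sqrt(0.5 * np.mean(np.diff(y) ** 2))

rng = np.random.default_rng(1)
y = rng.normal(size=100_000)    # unit-variance white FM frequency data

# 10-point running average (boxcar low-pass) applied before the ADEV.
k = 10
y_avg = np.convolve(y, np.ones(k) / k, mode="valid")

print(adev1(y))      # roughly 1.0
print(adev1(y_avg))  # roughly 0.1 -- artificially low; the filter ate the noise
```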

One thing you can play with is computing average frequency
using the technique that MDEV uses.

/tvb

----- Original Message -----
From: "Magnus Danielson" <magnus@rubidium.dyndns.org>
To: "Discussion of precise time and frequency measurement" <time-nuts@febo.com>
Sent: Sunday, January 31, 2010 6:53 PM
Subject: [time-nuts] Zero dead time and average frequency estimation

Fellow time-nuts,

I keep poking around various processing algorithms trying to figure out
what they do and perform. One aspect which may be interesting to know
about is the use of zero dead time phase or frequency data and the
frequency estimation from that data. One may be compelled to
differentiate the time data into frequency data by using nearby data
samples, according to y(i) = (x(i+1)-x(i))/tau0 and then just form the
average of those. The interesting thing about that calculations is that
the x(i+1) and x(i) terms cancels except for x(1) and x(N) so
effectively only two samples of phase data is being used. This is a
simple illustration of how algorithms may provide less degrees of
freedom than one may initially assume it to have (N-1 in this case).

Similar type of cancellation occurs in linear drift estimation.

Maybe this could spark some interest in the way one estimates the
various parameters and what different estimators may do to cancel noise
of individual samples.

Cheers,
Magnus

PR
Pete Rawson
Mon, Feb 1, 2010 4:40 AM

Gentlemen,

You've hit a topic I've become more confused about after
researching some of the original papers on this subject.
Here are a few questions which I would like to become
educated about.

  1. Will the calculated results of ADEV, ODEV, MDEV & TOTDEV
    suggest which result applies best to the data being analyzed?

  2. What attributes of the data to be analyzed suggest which
    computation is most appropriate?

  3. Will some computed results indicate that the analysis is NOT
    appropriate? (Are false results obvious?)

I'm sure there are more aspects worth learning than these, but
they might serve to get a conversation underway.

Any enlightenment would be greatly appreciated.

Pete Rawson

On Jan 31, 2010, at 8:50 PM, Tom Van Baak wrote:

Magnus,

Correct, all the terms cancel between the end points. Note
that this is exactly equivalent to the way a traditional gated
frequency counter works -- you open the gate, wait some
sample period (maybe 1, 10, or 100 seconds) and then
close the gate. In this scenario it's clear that all the phase
information during the interval is ignored; the only points
that matter are the start and the stop.

Modern high-resolution frequency counters don't do this;
and instead they use a form of "continuous counting" and
take a massive number of short phase samples and create
a more precise average frequency out of that.

There are some excellent papers on the subject; start with
the one by Rubiola:

< http://www.femto-st.fr/~rubiola/pdf-articles/journal/2005rsi-hi-res-freq-counters.pdf >

There are additional papers (perhaps Bruce can locate them).

I wonder if fully overlapped frequency calculations would be
one solution to your query; similar to the advantage that the
overlapping ADEV sometimes has over back-to-back ADEV.

Related to that, I recently looked into the side-effects of using
running averages on phase or frequency data, specifically
what it does to a frequency stability plot (ADEV). See:

http://www.leapsecond.com/pages/adev-avg/

Not surprising, you get artificially low ADEV numbers when
you average in this way; the reason is that running averages,
by design, tend to smooth out (low pass filter) the raw data.

One thing you can play with is computing average frequency
using the technique that MDEV uses.

/tvb

----- Original Message -----
From: "Magnus Danielson" <magnus@rubidium.dyndns.org>
To: "Discussion of precise time and frequency measurement" <time-nuts@febo.com>
Sent: Sunday, January 31, 2010 6:53 PM
Subject: [time-nuts] Zero dead time and average frequency estimation

Fellow time-nuts,
I keep poking around various processing algorithms trying to figure out what they do and perform. One aspect which may be interesting to know about is the use of zero dead time phase or frequency data and the frequency estimation from that data. One may be compelled to differentiate the time data into frequency data by using nearby data samples, according to y(i) = (x(i+1)-x(i))/tau0 and then just form the average of those. The interesting thing about that calculations is that the x(i+1) and x(i) terms cancels except for x(1) and x(N) so effectively only two samples of phase data is being used. This is a simple illustration of how algorithms may provide less degrees of freedom than one may initially assume it to have (N-1 in this case).
Similar type of cancellation occurs in linear drift estimation.
Maybe this could spark some interest in the way one estimates the various parameters and what different estimators may do to cancel noise of individual samples.
Cheers,
Magnus


time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.

MD
Magnus Danielson
Mon, Feb 1, 2010 9:42 AM

Tom,

Tom Van Baak wrote:

Magnus,

Correct, all the terms cancel between the end points. Note
that this is exactly equivalent to the way a traditional gated
frequency counter works -- you open the gate, wait some
sample period (maybe 1, 10, or 100 seconds) and then
close the gate. In this scenario it's clear that all the phase
information during the interval is ignored; the only points
that matter are the start and the stop.

There is technical merit in taking samples in between even if they
cancel... you avoid counter overflow. But you can do better, much better.

Modern high-resolution frequency counters don't do this;
and instead they use a form of "continuous counting" and
take a massive number of short phase samples and create
a more precise average frequency out of that.

Yes, and the main point of creating this little thread is to make
people aware that how you process your data does make a difference. In
fact, it can make a huge difference. The effective two-point frequency
calculation only uses two sample points to estimate the frequency, and
thus uses the systematic value and noise of one sample to cancel the
noise of the first sample. For 1/f power-law noises this effect can
even become larger with time (for 1/f^3 noise), as it is
non-convergent, so by looking at it briefly you don't realize it is a
noisy number.

There are some excellent papers on the subject; start with
the one by Rubiola:

<http://www.femto-st.fr/~rubiola/pdf-articles/journal/2005rsi-hi-res-freq-counters.pdf>

There are additional papers (perhaps Bruce can locate them).

In particular, there is one paper that corrects some mistakes of
Rubiola's; it is Australian, if I remember correctly.

I wonder if fully overlapped frequency calculations would be
one solution to your query; similar to the advantage that the
overlapping ADEV sometimes has over back-to-back ADEV.

A very simple extension leads to that. Consider the frequency estimate

            x(i+m) - x(i)
  y(i,m) = ---------------
               m*tau0

and the average over those:

               N-m
               ---
           1   \
  y(m) =  ---   >   y(i,m)
          N-m  /
               ---
               i=1

Only 2m samples survive the cancellation, m at each end, and the
maximum m is of course N/2. This can be written in another form as N/2
overlapping frequency estimates with m = N/2. It is equivalent to
averaging the first and second halves, subtracting the first half
average from the second and dividing by m*tau0.
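
That m = N/2 equivalence can be verified numerically; a small Python
sketch (names are mine):

```python
import numpy as np

rng = np.random.default_rng(7)
tau0 = 1.0
N = 1000                        # even, so m = N/2 works out exactly
x = rng.normal(size=N)          # phase samples x(1)..x(N)
m = N // 2

# Average of the N-m overlapping estimates y(i,m) = (x(i+m) - x(i)) / (m*tau0).
y_bar = np.mean((x[m:] - x[:-m]) / (m * tau0))

# Difference of the two half-record means, divided by m*tau0.
halves = (x[m:].mean() - x[:m].mean()) / (m * tau0)

print(np.isclose(y_bar, halves))  # True
```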

Related to that, I recently looked into the side-effects of using
running averages on phase or frequency data, specifically
what it does to a frequency stability plot (ADEV). See:

http://www.leapsecond.com/pages/adev-avg/

Not surprising, you get artificially low ADEV numbers when
you average in this way; the reason is that running averages,
by design, tend to smooth out (low pass filter) the raw data.

Indeed. It creates a bias in the measurement that needs to be taken out
for a valid result.

One thing you can play with is computing average frequency
using the technique that MDEV uses.

Which is inspired by an article by Snyder that details how to perform
one such overlapping estimation in the counter core for better
convergence on noisy data. MDEV is a different basic measurement from
ADEV, so one needs to be careful not to mix results freely.
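
To illustrate why that averaged estimation converges better on noisy
data, here is a sketch (my own illustration, not code from the Snyder
paper) comparing the plain two-point estimate against one that averages
m phase samples on each side before differencing. For white phase noise
the averaged estimate scatters roughly sqrt(m) less:

```python
import numpy as np

rng = np.random.default_rng(11)
tau0 = 1.0
m = 100          # phase samples averaged on each side
trials = 2000

two_point = np.empty(trials)
averaged = np.empty(trials)
for t in range(trials):
    x = rng.normal(size=2 * m)                        # white-PM phase record
    two_point[t] = (x[m] - x[0]) / (m * tau0)         # endpoints only
    averaged[t] = (x[m:].mean() - x[:m].mean()) / (m * tau0)

print(two_point.std())  # about sqrt(2)/m
print(averaged.std())   # about sqrt(2)/(m*sqrt(m)), ten times smaller here
```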

Cheers,
Magnus

MD
Magnus Danielson
Mon, Feb 1, 2010 9:54 AM

Pete Rawson wrote:

Gentlemen,

You've hit a topic I've become more confused about after
researching some of the original papers on this subject.
Here are a few questions which I would like to become
educated about.

  1. Will the calculated results of ADEV, ODEV, MDEV & TOTDEV
    suggest which result applies best to the data being analyzed?

  2. What attributes of the data to be analyzed suggest which
    computation is most appropriate?

  3. Will some computed results indicate that the analysis is NOT
    appropriate? (Are false results obvious?)

There are two things to keep in mind: the bias and the error bars.

Some of these estimators produce biased values depending on the
dominant noise source. You need to identify the dominant noise source
(use the lag-1 autocorrelation noise identification, which is almost
trivial to perform). With the dominant noise source identified, the
bias can be determined.

Error bars will highlight the area of the graph where a particular
estimator has problems. Comparing the spread of the error bars between
various estimators allows you to identify which is best for the task.
Look at the TOTAL and Theo variants.

Error bars are essentially a reformulation of the Equivalent Degrees of
Freedom (EDF), and EDF changes quite drastically with m. Comparison
between different measurements can be done on EDF for a given m, and
the highest EDF wins. It's a measure of how well the data in the
sequence is used by the estimator.
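
The lag-1 autocorrelation check really is almost trivial. A minimal
sketch (the function name is mine; the full identification procedure,
with repeated differencing for divergent noises, is in the standard
handbooks):

```python
import numpy as np

def lag1_autocorr(y):
    """Lag-1 autocorrelation r1 of a data series."""
    d = y - y.mean()
    return np.dot(d[:-1], d[1:]) / np.dot(d, d)

rng = np.random.default_rng(3)
white_fm = rng.normal(size=100_000)            # white FM: r1 near 0
rw_fm = np.cumsum(rng.normal(size=100_000))    # random-walk FM: r1 near 1

print(lag1_autocorr(white_fm))  # close to 0
print(lag1_autocorr(rw_fm))     # close to 1
```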

I'm sure there are more aspects worth learning than these, but
they might serve to get a conversation underway.

Any enlightenment would be greatly appreciated.

This has been the point of the exercise... spreading the knowledge.

I am digging too... but the little stuff I have picked up could probably
be good knowledge to others, so I stirred the pot a little.

Cheers,
Magnus

BG
Bruce Griffiths
Mon, Feb 1, 2010 12:16 PM

Magnus

Magnus Danielson wrote:

Tom,

Tom Van Baak wrote:

Magnus,

Correct, all the terms cancel between the end points. Note
that this is exactly equivalent to the way a traditional gated
frequency counter works -- you open the gate, wait some
sample period (maybe 1, 10, or 100 seconds) and then
close the gate. In this scenario it's clear that all the phase
information during the interval is ignored; the only points
that matter are the start and the stop.

There is a technical merit to take samples in between even if they
cancel... you avoid counter overflow, but you can do better, much better.

Modern high-resolution frequency counters don't do this;
and instead they use a form of "continuous counting" and
take a massive number of short phase samples and create
a more precise average frequency out of that.

Yes, and the main point for creating this little thread is to make
people aware that how you process your data do make a difference. It
can make a huge difference in fact. The effective two-point frequency
calculation only use two sample point to estimate the frequency and
thus also use the systematic value and noise of one sample to cancel
the noise of the first sample. For 1/f power noises this is an effect
that can even becomes larger (for 1/f^3 noise) with time as it is
non-convergent, so by looking at it briefly you don't realize it is a
noisy number.

There are some excellent papers on the subject; start with
the one by Rubiola:

<http://www.femto-st.fr/~rubiola/pdf-articles/journal/2005rsi-hi-res-freq-counters.pdf>

There are additional papers (perhaps Bruce can locate them).

In particular, there is one paper that corrects some mistakes of
Rubiola, Australian if I remember correctly.

Yes, the paper by Dawkins, McFerran and Luiten:
http://ieeexplore.ieee.org/Xplore/login.jsp?url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F4318993%2F4318994%2F04319178.pdf%3Farnumber%3D4319178&authDecision=-203

Bruce
MD
Magnus Danielson
Mon, Feb 1, 2010 11:06 PM

Bruce,

Bruce Griffiths wrote:

There are some excellent papers on the subject; start with
the one by Rubiola:

<http://www.femto-st.fr/~rubiola/pdf-articles/journal/2005rsi-hi-res-freq-counters.pdf>

There are additional papers (perhaps Bruce can locate them).

In particular, there is one paper that corrects some mistakes of
Rubiola, Australian if I remember correctly.

Yes, many thanks.

This article by J.J. Snyder, "An Ultra-High Resolution Frequency Meter",
FCS #35, is very useful:
http://www.ieee-uffc.org/main/publications/fcs/proceed/1981/s8110464.pdf

It describes a (crude!) hardware implementation of a zero dead-time
counter implementing the algorithm. It performs the averaging in the
way I described earlier. The Snyder paper is the inspiration for the
original MDEV paper, published at the same conference and directly
following it:

David W. Allan and James A. Barnes, "A modified Allan Variance with
increased oscillator characterization ability", FCS #35.
http://www.ieee-uffc.org/main/publications/fcs/proceed/1981/s8110470.pdf

From the same conference there is also a summary paper by Howe, Allan
and Barnes on measurements, which spends time comparing overlapping and
non-overlapping estimators, effective use of data, DF/EDF, chi-square,
etc.

Cheers,
Magnus

PS
paul swed
Tue, Feb 2, 2010 12:08 AM

Unfortunately can't download these

On Mon, Feb 1, 2010 at 6:06 PM, Magnus Danielson
<magnus@rubidium.dyndns.org> wrote:

Bruce,

Bruce Griffiths wrote:

There are some excellent papers on the subject; start with

the one by Rubiola:

<http://www.femto-st.fr/~rubiola/pdf-articles/journal/2005rsi-hi-res-freq-counters.pdf>

There are additional papers (perhaps Bruce can locate them).

In particular, there is one paper that corrects some of Rubiola's
mistakes; Australian, if I remember correctly.

Yes, many thanks.

This article by J.J. Snyder, "An Ultra-High Resolution Frequency Meter",
FCS #35, is very useful:
http://www.ieee-uffc.org/main/publications/fcs/proceed/1981/s8110464.pdf

It describes a (crude!) hardware implementation of a zero dead-time
counter implementing the algorithm. It performs the averaging in the way I
described earlier. The above paper by Snyder was the inspiration for the
original MDEV paper, published in the same conference proceedings and
directly following it:

David W. Allan and James A. Barnes, "A modified Allan Variance with
increased oscillator characterization ability", FCS #35.
http://www.ieee-uffc.org/main/publications/fcs/proceed/1981/s8110470.pdf

From the same conference there is a summary paper by Howe, Allan and Barnes
on measurements, which spends time comparing overlapping and non-overlapping
estimators, effective use of data, DF/EDF, chi-square, etc.

Cheers,
Magnus


time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to
https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.

MD
Magnus Danielson
Tue, Feb 2, 2010 12:27 AM

paul swed wrote:

Unfortunately can't download these

You need an IEEE UFFC account. Now you know why I have mine. Those two
articles are fairly short, though, so getting a UFFC account for them
alone is kind of pointless.

Cheers,
Magnus

EP
Ed Palmer
Tue, Feb 2, 2010 2:20 AM

Go to your public library and request the articles via interlibrary
loan. I recently got Oliver Collins' paper on low-jitter hard limiters
that way. Depending on your library's policies it might be free or cost
a few dollars. It cost me $2.50 for photocopying. I'm not sure if you

Ed

paul swed wrote:

Unfortunately can't download these

On Mon, Feb 1, 2010 at 6:06 PM, Magnus Danielson
<magnus@rubidium.dyndns.org> wrote:

Bruce,

Bruce Griffiths wrote:

There are some excellent papers on the subject; start with

Yes, many thanks.

This article by J.J. Snyder, "An Ultra-High Resolution Frequency Meter",
FCS #35, is very useful:
http://www.ieee-uffc.org/main/publications/fcs/proceed/1981/s8110464.pdf

It describes a (crude!) hardware implementation of a zero dead-time
counter implementing the algorithm. It performs the averaging in the way I
described earlier. The above paper by Snyder was the inspiration for the
original MDEV paper, published in the same conference proceedings and
directly following it:

David W. Allan and James A. Barnes, "A modified Allan Variance with
increased oscillator characterization ability", FCS #35.
http://www.ieee-uffc.org/main/publications/fcs/proceed/1981/s8110470.pdf

From the same conference there is a summary paper by Howe, Allan and Barnes
on measurements, which spends time comparing overlapping and non-overlapping
estimators, effective use of data, DF/EDF, chi-square, etc.

Cheers,
Magnus
