time-nuts@lists.febo.com

Discussion of precise time and frequency measurement

View all threads

frequency stability question

PC
Paul Cianciolo
Mon, Aug 15, 2011 3:10 AM

Folks,
 
I am trying to understand some of the terms used here quite often.
I quoted this from Wikipedia
 
An Allan deviation of 1.3×10⁻⁹ at observation time 1 s (i.e. τ = 1 s) should be interpreted as there being an instability in frequency between two observations a second apart with a relative root mean square (RMS) value of 1.3×10⁻⁹.
 
Does this mean the observations made were at the very beginning and the very end of the 1 second time?
If so, what about all the values in between?   What happens if the oscillator deviated far worse than this during the interim?
 
 
Or does the measurement consist of making measurements every cycle during that 1 second and then entering all those values into a formula that accounts for them all?
 
Maybe a very basic tutorial on this topic would help, but I can't find one.

 
Signed very confused,
Thank You
 
PauLC
W1VLF

TV
Tom Van Baak
Mon, Aug 15, 2011 6:19 AM

Does this mean the observations made were at the very beginning
and the very end of the 1 second time?

Correct.

If so, what about all the values in between? What happens
if the oscillator deviated far worse than this during the interim?

Oscillator behavior in between 1 second intervals is unknown. As
you say it could be much worse, or it could be much better. If you
need to know for sure, then you must re-run the measurement
and collect data at shorter intervals this time.

Specifically, if you make only one measurement per second you
can compute ADEV(1 second), or actually ADEV(N seconds).

By contrast, if you collect 100 measurements per second you
can compute ADEV(0.01 x N seconds). This allows you to see
how well the oscillator performs for intervals less than a second.

Or does the measurement consist of making measurements
every cycle during that 1 second and then entering all those
values into a formula that accounts for them all??

If you had measurements every cycle it would be great. But not
all measurement systems can measure every cycle. In the real
world there are limitations on the rate of measurement. There
are also limitations on the resolution of each measurement.
With some instruments you can increase the rate but you then
lose resolution, or vice versa.

These two limits dictate how short a tau you can plot and how low
an ADEV you can observe. I can provide examples if you wish.
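For illustration, the simple (non-overlapping) ADEV computation described above can be sketched in a few lines of Python; this is an editorial sketch, and the frequency readings are invented for the example:

```python
import math

def adev(y, m=1):
    """Non-overlapping Allan deviation from evenly spaced fractional-frequency
    samples y; the result is the ADEV at tau = m * (sample spacing)."""
    # Average groups of m adjacent samples to form estimates at the longer tau
    yb = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
    # Allan variance is half the mean squared difference of adjacent averages
    diffs = [(yb[k + 1] - yb[k]) ** 2 for k in range(len(yb) - 1)]
    return math.sqrt(sum(diffs) / (2 * len(diffs)))

# Ten invented one-second frequency readings (fractional offsets):
y = [1.2e-9, 0.8e-9, 1.5e-9, 0.9e-9, 1.1e-9,
     1.3e-9, 0.7e-9, 1.4e-9, 1.0e-9, 1.2e-9]
print(adev(y))        # ADEV at tau = 1 s
print(adev(y, m=2))   # ADEV at tau = 2 s
```

With 100 readings per second instead, the same function applied with m = 1 would give ADEV at tau = 0.01 s, matching the point above.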

/tvb

W
WB6BNQ
Mon, Aug 15, 2011 7:42 AM

Tom Van Baak wrote:

Does this mean the observations made were at the very beginning
and the very end of the 1 second time?

Correct.

If so, what about all the values in between? What happens
if the oscillator deviated far worse than this during the interim?

Oscillator behavior in between 1 second intervals is unknown. As
you say it could be much worse, or it could be much better. If you
need to know for sure, then you must re-run the measurement
and collect data at shorter intervals this time.

Specifically, if you make only one measurement per second you
can compute ADEV(1 second), or actually ADEV(N seconds).

By contrast, if you collect 100 measurements per second you
can compute ADEV(0.01 x N seconds). This allows you to see
how well the oscillator performs for intervals less than a second.

Or does the measurement consist of making measurements
every cycle during that 1 second and then entering all those
values into a formula that accounts for them all??

If you had measurements every cycle it would be great. But not
all measurement systems can measure every cycle. In the real
world there are limitations on the rate of measurement. There
are also limitations on the resolution of each measurement.
With some instruments you can increase the rate but you then
lose resolution, or vice versa.

These two limits dictate how short a tau you can plot and how low
an ADEV you can observe. I can provide examples if you wish.

/tvb

Hi Tom,

I was under the impression that the ADEV curve predicts a confidence level
between measurement points, based on the averaging of the noise over the time
between measurement points?

For example, a quality oscillator would show more noise at times shorter
than 1 second, but the curve trends toward less noise as the time frame
lengthens out.  The point being that the 1 second ADEV value would indicate
that, over a 1 second time frame, the reliability of a measurement would be
the ADEV of that time frame.

If the above is true, then for a frequency counter using a 1 second gating
period, what is going on during the gating period is of little concern or
consequence if the time base is behaving as described by the ADEV curve.
Is that a correct observation?

If Paul was going to make shorter measurements, then the shorter time frame
ADEV values would, obviously, be important.  I was not sure his question came
from the perspective you saw, which is why I was bringing up the above
questions.

Bill....WB6BNQ

MD
Magnus Danielson
Mon, Aug 15, 2011 8:05 AM

Hi Paul,

On 15/08/11 05:10, Paul Cianciolo wrote:

Folks,

I am trying to understand some of the terms used here quite often.
I quoted this from Wikipedia

An Allan deviation of 1.3×10⁻⁹ at observation time 1 s (i.e. τ = 1 s) should be interpreted as there being an instability in frequency between two observations a second apart with a relative root mean square (RMS) value of 1.3×10⁻⁹.

OK, I take the blame for that one, as I wrote it.

Does this mean the observations made were at the very beginning and the very end of the 1 second time?

Yes, as the observation interval is one second in this case, but you can
make it arbitrary to fit your needs, your application, such as 314,159 s
or whatever.

If so, what about all the values in between? What happens if the oscillator deviated far worse than this during the interim?

Well, you should not interpret it as a particular interval, but rather as a
typical interval. ADEV is there to handle noise. If you use a shorter
interval, the collection of noise processes will be different, so your ADEV
will be different. If you only have one ADEV value, you need to get the
right interval. If your interval falls in between known values, you can make
a fair guess, since for pure noise types the slopes are smooth.

But the important aspect is that you need to measure at the interval of
interest; a single measure (such as RMS) will not satisfy your needs. The
oscillator can be better or worse than the single point you have.

However, if you measure with a basic interval (tau0), you can
algorithmically build integer multiples of that interval. Modern algorithms
interleave these measures in an overlapping fashion, so parts of an
interval are shared by several of the intervals used as samples in the
estimate. Hence, no particular interval can be singled out.

Or does the measurement consist of making measurements every cycle during that 1 second and then entering all those values into a formula that accounts for them all?

Assuming that we are after ADEV(1 s), you can make, say, 100 measurements
1 s apart. This takes 100 s. We process these with tau0 = 1 and
tau-multiplier 1, which is the classic simple ADEV formula.

Another approach is to take samples more often, say every 100 ms, and
then take 100 such measurements. This takes 10 s. We process these with
tau0 = 0.1 and tau-multiplier 10. The benefit is a significantly shorter
measurement time at little extra effort; using a 100 ms measurement
interval you get 10 times the samples for the same measurement time,
so you gain improved predictor precision.

Thus, I toss in the confounding factor of improving the quality of the
measure by increasing the number of samples.

However, I want to show you that you can achieve these measures by
different approaches, essentially showing that you do not have to make
unique measurement series for 1 s, 2 s, 4 s, 10 s, etc., but can get
these out of the same measurement series. In fact, I highly encourage
you to do so and to plot them regularly.
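As an illustration of getting several taus out of one measurement series, an overlapping ADEV estimate from a single phase record might be sketched as follows (an editorial sketch, not code from the thread; the white-phase-noise test data is invented):

```python
import math
import random

def oadev(x, tau0, m):
    """Overlapping Allan deviation from phase samples x (seconds),
    spaced tau0 apart, evaluated at tau = m * tau0."""
    tau = m * tau0
    n = len(x) - 2 * m          # number of overlapping second differences
    s = sum((x[i + 2 * m] - 2 * x[i + m] + x[i]) ** 2 for i in range(n))
    return math.sqrt(s / (2 * n * tau ** 2))

# One toy phase record: a constant frequency offset plus white phase noise
random.seed(1)
x = [i * 1e-9 + random.gauss(0, 0.3e-9) for i in range(1000)]

# Several taus from the same series -- no separate runs for 1 s, 2 s, 4 s...
for m in (1, 2, 4, 10):
    print(m, oadev(x, tau0=1.0, m=m))
```

Because the second differences overlap, parts of the record are reused across many intervals, which is the interleaving described above.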

Maybe a very basic tutorial on this topic would help, but I can't find one.

You only prove that there is more work to be done on the Wikipedia
article. It also lacks some illustrative plots.

Cheers,
Magnus

MD
Magnus Danielson
Mon, Aug 15, 2011 8:13 AM

On 15/08/11 09:42, WB6BNQ wrote:

Hi Tom,

I was under the impression that the ADEV curve predicts a confidence level
between measurement points, based on the averaging of the noise over the time
between measurement points?

For example, a quality oscillator would show more noise at times shorter
than 1 second, but the curve trends toward less noise as the time frame
lengthens out.  The point being that the 1 second ADEV value would indicate
that, over a 1 second time frame, the reliability of a measurement would be
the ADEV of that time frame.

If the above is true, then for a frequency counter using a 1 second gating
period, what is going on during the gating period is of little concern or
consequence if the time base is behaving as described by the ADEV curve.
Is that a correct observation?

If Paul was going to make shorter measurements, then the shorter time frame
ADEV values would, obviously, be important.  I was not sure his question came
from the perspective you saw, which is why I was bringing up the above
questions.

You can extend the graph downwards if you have identified the noise sources
and their levels properly in the graph. This should always be done with
due care, and often it is possible to measure instead, which is preferred.

ADEV in itself doesn't give you a promise; rather, you need it as a
tool to separate noise sources (together with MADEV for some noises) in
order to identify the levels of those noise sources, and only then can
you go back to the graph and predict it.

So, you should not give a general recommendation like "extend freely
below 1 s", as it can fool you. Also, for shorter taus your measurement
rig may be setting the limit, not the oscillator, if you have a
quality oscillator. So be careful there.

Cheers,
Magnus

JL
Jim Lux
Mon, Aug 15, 2011 1:31 PM

On 8/14/11 8:10 PM, Paul Cianciolo wrote:

Folks,

I am trying to understand some of the terms used here quite often.
I quoted this from Wikipedia

An Allan deviation of 1.3×10⁻⁹ at observation time 1 s (i.e. τ = 1 s) should be interpreted as there being an instability in frequency between two observations a second apart with a relative root mean square (RMS) value of 1.3×10⁻⁹.

Does this mean the observations made were at the very beginning and the very end of the 1 second time?
If so, what about all the values in between? What happens if the oscillator deviated far worse than this during the interim?

A measurement at tau=1 second doesn't say anything about what happened
at shorter intervals.

Or does the measurement consist of making measurements every cycle during that 1 second and then entering all those values into a formula that accounts for them all?

Nope.
It's as if you measured the frequency (instantaneously) at one second
intervals and calculated the standard deviation; that would be the ADEV
for tau = 1 second.

measure at 10 second intervals and you get ADEV(Tau=10sec), etc.

That's why you typically see an ADEV as a series of performances at
different taus.

You can also fill in the gaps in the curve, to a certain extent, because
physical oscillators have constraints on what the ADEV can do (i.e.
you're not likely to see 1E-9 at tau=1 second, 1E-5 at tau=2 seconds,
1E-10 at tau=3 seconds)

In fact, if you do the ADEV measurement and it's NOT a nice curve but
has spikes and weirdnesses, that starts to tell you that you have either a
measurement system problem or a problem with your frequency standard.

(sort of like spurs in a phase noise plot from 120Hz line interference,
or reference clock leakage)

As an example of measurement system problems, it's pretty common to see
a "hump" in ADEV around 500-1500 seconds (roughly 8-25 minutes) because
of temperature effects on the test equipment or unit under test as the
air conditioning/heating cycles on and off.

Maybe a very basic tutorial on this topic would help, but I can't find one.

Signed very confused,
Thank You

PauLC
W1VLF


time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.

S
shalimr9@gmail.com
Mon, Aug 15, 2011 2:44 PM

Is it correct to assume that if you collect data every 10 ms for 10 seconds (1000 data points in total), you could actually split that data set into 100 interleaved sets of 10 points each, spaced at 1 second intervals over the 10 seconds, and that this would have the same statistical quality (I am sure caveats would apply) as a single data set of 1000 points collected at one second intervals over a total period of 1,000 seconds?
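The split described here looks like this in Python (an editorial sketch with illustrative ramp data; whether the two data sets are statistically equivalent is exactly the open question, since the 100 subsets overlap in time and share the same 10 seconds of oscillator behavior):

```python
# 1000 hypothetical readings taken every 10 ms over 10 s (values are a toy ramp)
fast = [i * 1e-11 for i in range(1000)]

# Interleaved decimation: subset k keeps samples k, k+100, k+200, ...
# giving 100 subsets, each with 10 points spaced 1 s apart.
subsets = [fast[k::100] for k in range(100)]

print(len(subsets), len(subsets[0]))   # 100 subsets of 10 points each
```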

Didier KO4BB

Sent from my BlackBerry Wireless thingy while I do other things...

-----Original Message-----
From: Jim Lux jimlux@earthlink.net
Sender: time-nuts-bounces@febo.com
Date: Mon, 15 Aug 2011 06:31:21
To: time-nuts@febo.com
Reply-To: Discussion of precise time and frequency measurement
time-nuts@febo.com
Subject: Re: [time-nuts] frequency stability question

On 8/14/11 8:10 PM, Paul Cianciolo wrote:

Folks,

I am trying to understand some of the terms used here quite often.
I quoted this from Wikipedia

An Allan deviation of 1.3×10⁻⁹ at observation time 1 s (i.e. τ = 1 s) should be interpreted as there being an instability in frequency between two observations a second apart with a relative root mean square (RMS) value of 1.3×10⁻⁹.

Does this mean the observations made were at the very beginning and the very end of the 1 second time?
If so, what about all the values in between? What happens if the oscillator deviated far worse than this during the interim?

A measurement at tau=1 second doesn't say anything about what happened
at shorter intervals.

Or does the measurement consist of making measurements every cycle during that 1 second and then entering all those values into a formula that accounts for them all?

Nope.
It's as if you measured the frequency (instantaneously) at one second
intervals and calculated the standard deviation; that would be the ADEV
for tau = 1 second.

measure at 10 second intervals and you get ADEV(Tau=10sec), etc.

That's why you typically see an ADEV as a series of performances at
different taus.

You can also fill in the gaps in the curve, to a certain extent, because
physical oscillators have constraints on what the ADEV can do (i.e.
you're not likely to see 1E-9 at tau=1 second, 1E-5 at tau=2 seconds,
1E-10 at tau=3 seconds)

In fact, if you do the ADEV measurement and it's NOT a nice curve but
has spikes and weirdnesses, that starts to tell you that you have either a
measurement system problem or a problem with your frequency standard.

(sort of like spurs in a phase noise plot from 120Hz line interference,
or reference clock leakage)

As an example of measurement system problems, it's pretty common to see
a "hump" in ADEV around 500-1500 seconds (roughly 8-25 minutes) because
of temperature effects on the test equipment or unit under test as the
air conditioning/heating cycles on and off.

Maybe a very basic tutorial on this topic would help, but I can't find one.

Signed very confused,
Thank You

PauLC
W1VLF



UB
Ulrich Bangert
Mon, Aug 15, 2011 3:07 PM

Jim,

pardon me for correcting you, but

It's as if you measured the frequency (instantaneously) at one second
intervals and calculated the standard deviation; that would
be the ADEV for tau = 1 second.

is simply wrong in at least two aspects.

First: the measurements need to be the AVERAGE frequency measured over a time interval of one second, which is most easily done by setting the counter's gate time to one second.

Note that the misunderstanding about the difference between an instantaneous measurement of frequency (which is possible too, but not with counter-based methods) and an averaging measurement of frequency over a given time interval was one of the sources of the upsetting discussion between Bruce and WarrenS in this group about the tight PLL method. For that reason I think we need to be very careful not to use terms such as "instantaneous" in the wrong sense.

Second: It may be that you wanted to explain things as simply as possible, but I hope it is clear to you that Allan's fame rests on his finding that the standard deviation is exactly the WRONG tool to use in this case, and that he needed to formulate a new kind of deviation, which today is named after him.

Regards
Ulrich Bangert
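Both of Ulrich's points can be illustrated numerically. The sketch below (Python with NumPy; the 1e-10 noise level and 1e-11/s drift rate are made-up illustration values, not from any message in this thread) feeds a series of simulated one-second average fractional-frequency readings from a drifting oscillator to both estimators: the ordinary standard deviation is inflated by the drift and keeps growing with record length, while the Allan deviation, built from successive differences, stays at the noise level.

```python
import numpy as np

# Simulated one-second AVERAGE fractional-frequency readings (as from a
# counter with a 1 s gate): white frequency noise plus linear drift.
rng = np.random.default_rng(0)
n = 1000
y = 1e-11 * np.arange(n) + 1e-10 * rng.standard_normal(n)

# Ordinary standard deviation: dominated by the drift -- the wrong tool here.
std_dev = np.std(y)

# Allan deviation at tau = 1 s: RMS of successive differences over sqrt(2).
adev_1s = np.sqrt(0.5 * np.mean(np.diff(y) ** 2))

print(f"std dev  = {std_dev:.1e}")   # inflated by the drift (around 3e-9)
print(f"ADEV(1s) = {adev_1s:.1e}")   # near the 1e-10 noise level
```

The differencing cancels the slowly varying drift term, which is precisely why Allan's statistic converges where the standard deviation does not.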

-----Ursprüngliche Nachricht-----
Von: time-nuts-bounces@febo.com
[mailto:time-nuts-bounces@febo.com] Im Auftrag von Jim Lux
Gesendet: Montag, 15. August 2011 15:31
An: time-nuts@febo.com
Betreff: [?? Probable Spam] Re: [time-nuts] frequency
stabilty question

On 8/14/11 8:10 PM, Paul Cianciolo wrote:

Folks,

I am trying to understand some of the terms used here quite often. I
quoted this from Wikipedia

An Allan deviation of 1.3×10−9 at observation time 1 s

(i.e. τ = 1 s)

should be interpreted as there being an instability in frequency
between two observations a second apart with a relative root mean
square(RMS) value of 1.3×10−9.

Does this mean the observations made were at the very

beginning and the very end of the 1 second interval?

If so, what about all the values in between?  What

happens if the oscillator deviated far worse than this during
the interim?

A measurement at tau=1 second doesn't say anything about what
happened
at shorter intervals.

Or does the measurement consist of making  measurements every cycle
during that 1 second and then entering all those values

into a formula

that accounts for them all??

Nope..
It's if you measured the frequency (instantaneously) at one second
intervals, and calculated the standard deviation, that would
be the ADEV
for tau=1 second.

measure at 10 second intervals and you get ADEV(Tau=10sec), etc.

That's why you typically see an ADEV as a series of performances at
different taus.

You can also fill in the gaps in the curve, to a certain
extent, because
physical oscillators have constraints on what the ADEV can do (i.e.
you're not likely to see 1E-9 at tau=1 second, 1E-5 at tau=2 seconds,
1E-10 at tau=3 seconds)

In fact, if you do the ADEV measurement and it's NOT a nice curve and
has spikes and weirdnesses, that starts to tell you that you have either a
measurement system problem or a problem with your frequency standard.

(sort of like spurs in a phase noise plot from 120Hz line
interference,
or reference clock leakage)

As an example of measurement system problems, it's pretty
common to see
a "hump" in ADEV around 500-1500 seconds (around 15-20
minutes) because
of temperature effects on the test equipment or unit under
test as the
airconditioning/heating cycles on and off.

Maybe a very basic tutorial on this topic would help but I

cant find

one

Signed very confused,
Thank You

PauLC
W1VLF



TV
Tom Van Baak
Mon, Aug 15, 2011 3:42 PM

Is it correct to assume that if you collect data every 10 ms for
10 seconds (1000 data points in total), you could actually split
that data set into 100 sets of data at 1 seconds for 10 seconds
(10 data points in each set), which would have the same statistical
quality (I am sure caveats would apply) as a single data set of
1000 data points collected at one second interval over a total
period of 1,000 seconds?

Didier KO4BB

If you look every 10 ms and collect data for 10 seconds you will
be able to make a log-log ADEV plot for points from 0.01 s to 2
seconds, no more, no less.

In order to plot ADEV points less than tau 0.01 s you need to
re-measure using an instrument that gives data faster than one
every 10 ms.

In order to plot ADEV points more than tau 2 seconds you simply
need to let the measurement run much longer than 10 seconds.
For example, if you measure for 4 or 5 hours then you can plot
ADEV out to 1 hour.

In other words, the leftmost point on any log-log ADEV plot is
determined by how fast you can collect data and the rightmost
point is determined by how long you collect data.

/tvb
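Tom's rule of thumb can be captured in a couple of lines. This is a sketch, not a standard formula; the `min_intervals` divisor of 5 is an assumption chosen so the 10 ms / 10 s example above comes out at 2 s (for decent confidence you would want far more intervals than that at any tau).

```python
def adev_tau_range(tau0, record_seconds, min_intervals=5):
    """Usable tau span on a log-log ADEV plot.

    The leftmost point is set by how fast you collect data (tau0, the
    sample interval); the rightmost by how long you collect data
    (record_seconds), here taken loosely as record_seconds/min_intervals.
    """
    return tau0, record_seconds / min_intervals

# 10 ms samples for 10 seconds: taus from 0.01 s up to about 2 s.
print(adev_tau_range(0.01, 10))
```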

BC
Bob Camp
Mon, Aug 15, 2011 4:01 PM

Hi

One more nit to pick...

ADEV looks at frequency differences. You would take the RMS of the delta
frequencies (F1−F0, F2−F1, ...) rather than the standard deviation of the
frequency readings (F0, F1, ...) themselves.

Bob
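In code, Bob's nit looks like this (a minimal sketch; `adev1` is a made-up name). A constant frequency offset drops out of the differences entirely, which is exactly why ADEV measures stability rather than accuracy:

```python
import numpy as np

def adev1(freqs):
    """Simple Allan deviation at tau equal to the sample spacing:
    RMS of the DELTA frequencies (F1-F0, F2-F1, ...) over sqrt(2),
    not the standard deviation of the readings F0, F1, ... themselves."""
    d = np.diff(freqs)
    return np.sqrt(0.5 * np.mean(d ** 2))

# A fixed 5e-9 offset on every reading contributes nothing:
print(adev1(np.full(10, 5e-9)))   # 0.0
```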

-----Original Message-----
From: time-nuts-bounces@febo.com [mailto:time-nuts-bounces@febo.com] On
Behalf Of Jim Lux
Sent: Monday, August 15, 2011 9:31 AM
To: time-nuts@febo.com
Subject: Re: [time-nuts] frequency stabilty question

On 8/14/11 8:10 PM, Paul Cianciolo wrote:

Folks,

I am trying to understand some of the terms used here quite often.
I quoted this from Wikipedia

An Allan deviation of 1.3×10−9 at observation time 1 s (i.e. τ = 1 s)

should be interpreted as there being an instability in frequency between two
observations a second apart with a relative root mean square (RMS) value of
1.3×10−9.

Does this mean the observations made were at the very beginning and the

very end of the 1 second interval?

If so, what about all the values in between?  What happens if the

oscillator deviated far worse than this during the interim?

A measurement at tau=1 second doesn't say anything about what happened
at shorter intervals.

Or does the measurement consist of making  measurements every cycle during

that 1 second and then entering all those values into a formula that
accounts for them all??

Nope..
It's if you measured the frequency (instantaneously) at one second
intervals, and calculated the standard deviation, that would be the ADEV
for tau=1 second.

measure at 10 second intervals and you get ADEV(Tau=10sec), etc.

That's why you typically see an ADEV as a series of performances at
different taus.

You can also fill in the gaps in the curve, to a certain extent, because
physical oscillators have constraints on what the ADEV can do (i.e.
you're not likely to see 1E-9 at tau=1 second, 1E-5 at tau=2 seconds,
1E-10 at tau=3 seconds)

In fact, if you do the ADEV measurement and it's NOT a nice curve and
has spikes and weirdnesses, that starts to tell you that you have either a
measurement system problem or a problem with your frequency standard.

(sort of like spurs in a phase noise plot from 120Hz line interference,
or reference clock leakage)

As an example of measurement system problems, it's pretty common to see
a "hump" in ADEV around 500-1500 seconds (around 15-20 minutes) because
of temperature effects on the test equipment or unit under test as the
airconditioning/heating cycles on and off.

Maybe a very basic tutorial on this topic would help but I cant find one

Signed very confused,
Thank You

PauLC
W1VLF



BC
Bob Camp
Mon, Aug 15, 2011 4:08 PM

Hi

The "how long can I go?" part of the question goes immediately to "how good
do I want?". For a good confidence level, you might want 100 samples at your
longest Tau rather than 5...
For long Tau, that can indeed get pretty nasty. 100 samples at 10,000
seconds is a very long time. It is worth keeping in mind when you look at
most plots. The confidence drops off significantly at the longer Tau.

Bob
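To put a number on how nasty it gets (simple arithmetic, not a formal confidence calculation): non-overlapping samples at a given tau each take tau seconds to acquire, so

```python
# Rough cost of confidence at long tau: 100 non-overlapping samples
# at tau = 10,000 seconds take samples * tau seconds of wall-clock time.
samples = 100
tau = 10_000                       # seconds
days = samples * tau / 86_400
print(f"{days:.1f} days")          # about 11.6 days of continuous data
```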

-----Original Message-----
From: time-nuts-bounces@febo.com [mailto:time-nuts-bounces@febo.com] On
Behalf Of Tom Van Baak
Sent: Monday, August 15, 2011 11:42 AM
To: Discussion of precise time and frequency measurement
Subject: Re: [time-nuts] frequency stabilty question

Is it correct to assume that if you collect data every 10 ms for
10 seconds (1000 data points in total), you could actually split
that data set into 100 sets of data at 1 seconds for 10 seconds
(10 data points in each set), which would have the same statistical
quality (I am sure caveats would apply) as a single data set of
1000 data points collected at one second interval over a total
period of 1,000 seconds?

Didier KO4BB

If you look every 10 ms and collect data for 10 seconds you will
be able to make a log-log ADEV plot for points from 0.01 s to 2
seconds, no more, no less.

In order to plot ADEV points less than tau 0.01 s you need to
re-measure using an instrument that gives data faster than one
every 10 ms.

In order to plot ADEV points more than tau 2 seconds you simply
need to let the measurement run much longer than 10 seconds.
For example, if you measure for 4 or 5 hours then you can plot
ADEV out to 1 hour.

In other words, the leftmost point on any log-log ADEV plot is
determined by how fast you can collect data and the rightmost
point is determined by how long you collect data.

/tvb



TV
Tom Van Baak
Mon, Aug 15, 2011 4:43 PM

Correct. Some ADEV plots conveniently include error bars so
the effect of sample count on confidence is in your face. The
TSC 5110 does this. See for example:
http://leapsecond.com/pages/gpsdo/log35824v.gif
http://leapsecond.com/pages/gpsdo/log35825v.gif

There are also a number of modern variants on regular old Allan
deviation that improve confidence even given the same sample
count. In some cases the trade-off here is computation time.

/tvb
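One such variant is the overlapping Allan deviation, sketched below under its usual definition: it uses every possible pair of tau-long averages rather than only adjacent non-overlapping ones, which raises the effective sample count at large tau and so tightens the error bars for the same record length.

```python
import numpy as np

def oadev(y, m):
    """Overlapping Allan deviation at tau = m * tau0, from fractional-
    frequency samples y taken every tau0 seconds (tau0 itself cancels)."""
    # Average frequency over every overlapping window of m samples...
    avg = np.convolve(y, np.ones(m) / m, mode="valid")
    # ...then difference averages that start exactly m samples apart.
    d = avg[m:] - avg[:-m]
    return np.sqrt(0.5 * np.mean(d ** 2))

# White FM noise with unit deviation: oadev at tau0 should come out near 1,
# and fall roughly as 1/sqrt(m) at larger tau.
rng = np.random.default_rng(0)
y = rng.standard_normal(20000)
print(f"{oadev(y, 1):.3f}")
```

The computation cost grows with the overlap, which is the trade-off Tom mentions; libraries such as Stable32 and allantools implement this and further variants.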

Hi

The "how long can I go?" part of the question goes immediately to "how good
do I want?". For a good confidence level, you might want 100 samples at your
longest Tau rather than 5...
For long Tau, that can indeed get pretty nasty. 100 samples at 10,000
seconds is a very long time. It is worth keeping in mind when you look at
most plots. The confidence drops off significantly at the longer Tau.

Bob

PA
Paul A. Cianciolo
Mon, Aug 15, 2011 11:57 PM

Hi Tom,

Ok .. what you say makes good sense.  If one is interested in the
performance over a specific period of time, then do a test with that
specific period of time in mind.
I notice in a later email you posted a chart showing the ADEV beginning at
0.01 seconds.  As expected it is worse at that short interval and gets better
as the averaging time increases.

This seems to me to be a little like an aliasing problem in the video
world.  If you are recording a rotating wheel (say at 1 rev per second) with
an index mark to indicate when the wheel is at zero degrees, the sampling
rate had better be more than 2× the rotation rate or the wheel may appear not
to rotate. And who knows what happened between the two samples.  If you need
to find out what happened in the period between rotations, one will need to
increase the sampling rate.

So use the proper Tau for the information you wish to acquire. Or use a
group of readings as in your Chart.

Thank you Tom,

Paul A. Cianciolo
W1VLF
http://www.rescueelectronics.com/
Our business computer network is  powered exclusively by solar and wind
power.
Converting Photons to Electrons for over 20 years

-----Original Message-----
From: time-nuts-bounces@febo.com [mailto:time-nuts-bounces@febo.com] On
Behalf Of Tom Van Baak
Sent: Monday, August 15, 2011 2:20 AM
To: Discussion of precise time and frequency measurement
Subject: Re: [time-nuts] frequency stabilty question

Does this mean the observations made were at the very beginning and the
very end of the 1 second interval?

Correct.

If so, what about all the values in between? What happens if the
oscillator deviated far worse than this during the interim?

Oscillator behavior in between 1 second intervals is unknown. As you say it
could be much worse, or it could be much better. If you need to know for
sure, then you must re-run the measurement and collect data at shorter
intervals this time.

Specifically, if you make only one measurement per second you can compute
ADEV(1 second), or actually ADEV(N seconds).

By contrast, if you collect 100 measurements per second you can compute
ADEV(0.01 x N seconds). This allows you to see how well the oscillator
performs for intervals less than a second.

Or does the measurement consist of making measurements every cycle
during that 1 second and then entering all those values into a formula
that accounts for them all??

If you had measurements every cycle it would be great. But not all
measurement systems can measure every cycle. In the real world there are
limitations on the rate of measurement. There are also limitations on the
resolution of each measurement.
With some instruments you can increase the rate but you then lose
resolution, or vice versa.

These two limits dictate how short a tau you can plot and how low an ADEV
you can observe. I can provide examples if you wish.

/tvb



PA
Paul A. Cianciolo
Tue, Aug 16, 2011 12:07 AM

Magnus,

Now that I have a basic understanding, the way you explained it seems just right.
My lack of even a basic understanding is what led to the confusion.
Sorry if I caused you any trouble.
I am going to go back and read the article again.

Thank you Magnus

Paul A. Cianciolo
W1VLF
http://www.rescueelectronics.com/
Our business computer network is  powered exclusively by solar and wind power.
Converting Photons to Electrons for over 20 years

-----Original Message-----
From: time-nuts-bounces@febo.com [mailto:time-nuts-bounces@febo.com] On Behalf Of Magnus Danielson
Sent: Monday, August 15, 2011 4:06 AM
To: time-nuts@febo.com
Subject: Re: [time-nuts] frequency stabilty question

Hi Paul,

On 15/08/11 05:10, Paul Cianciolo wrote:

Folks,

I am trying to understand some of the terms used here quite often.
I quoted this from Wikipedia

An Allan deviation of 1.3×10−9 at observation time 1 s (i.e. τ = 1 s) should be interpreted as there being an instability in frequency between two observations a second apart with a relative root mean square(RMS) value of 1.3×10−9.

OK, I take the blame for that one, as I wrote it.

Does this mean the observations made were at the very beginning and the very end of the 1 second interval?

Yes, as the observation interval is one second in this case, but you can make it arbitrary to fit your needs and your application, such as 314.159 s or whatever.

If so, what about all the values in between?  What happens if the oscillator deviated far worse than this during the interim?

Well, you should not interpret it as a particular interval, but rather as a typical interval. ADEV is there to handle noise. If you use a shorter interval, the collection of noises will be different, so your ADEV will be different. If you only have one ADEV value, you need to get the right interval. If your interval is in between known values, you can make a reasonable guess, since for pure noises the slopes are smooth.

But the important aspect is that you need to measure for the interval of your interest; a single measure (such as RMS) will not satisfy your needs. The oscillator can be better or worse than the single point you have.

However, if you measure with a basic interval (tau0) you can algorithmically obtain integer-multiple intervals. Modern algorithms interleave these measures in an overlapping fashion, so part of one interval is shared by several intervals used as samples in the estimate. Hence no particular interval can be singled out.

Or does the measurement consist of making  measurements every cycle during that 1 second and then entering all those values into a formula that accounts for them all??

Assuming that we are after ADEV(1 s), you can make, say, 100 measurements 1 s apart. This takes 100 s. We process these with tau0 = 1 s and tau-multiplier 1... which is the classic simple ADEV formula.

Another approach is to take samples more often, say every 100 ms, and then take 100 such measurements. This takes 10 s. We process these with
tau0 = 0.1 s and tau-multiplier 10. The benefit is a significantly shorter measurement at no great effort... and using a 100 ms measurement interval you can get 10 times the samples in the same measurement time, so you gain in the precision of the estimate.

Thus, I toss in the confounding factor that you can improve the quality of a measure by increasing the number of samples.

However, I want to show you that you can arrive at these measures by different approaches; essentially, you do not have to make separate measurement series for 1 s, 2 s, 4 s, 10 s, etc., but can get them all out of the same measurement series, and in fact I highly encourage you to do so and to plot them regularly.
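A sketch of this idea in Python (simulated data; the 1e-10 white-frequency-noise level is an arbitrary illustration value): one series taken at tau0 = 0.1 s is decimated by averaging to give the classic simple ADEV at several integer multiples of tau0, with no re-measurement needed.

```python
import numpy as np

# One measurement series at the basic interval tau0 = 0.1 s (100 ms):
# 1000 points = 100 s of simulated white frequency noise at 1e-10.
rng = np.random.default_rng(1)
tau0 = 0.1
y = 1e-10 * rng.standard_normal(1000)

results = {}
for m in (1, 2, 4, 10):                 # tau multipliers
    # Average adjacent groups of m samples -> data at tau = m * tau0 ...
    g = y[: len(y) // m * m].reshape(-1, m).mean(axis=1)
    # ... then the classic simple ADEV formula on those averages.
    results[m] = np.sqrt(0.5 * np.mean(np.diff(g) ** 2))
    print(f"tau = {m * tau0:3.1f} s   ADEV ~ {results[m]:.1e}")
```

This is exactly the trade described above: the 100 ms series delivers ten times the samples of a 1 s series over the same wall-clock time, at the cost of processing the data at several tau multipliers.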

Maybe a very basic tutorial on this topic would help but I cant find
one

You have only proven that there is more work to be done on the Wikipedia article. It also lacks some illustrative plots.

Cheers,
Magnus



Magnus,

Now that I have a basic understanding, the way you explained it seems just right. My lack of even a basic understanding is what led to the confusion. Sorry if I caused you any trouble. I am going to go back and read the article again.

Thank you Magnus

Paul A. Cianciolo
W1VLF
http://www.rescueelectronics.com/
Our business computer network is powered exclusively by solar and wind power. Converting Photons to Electrons for over 20 years

-----Original Message-----
From: time-nuts-bounces@febo.com [mailto:time-nuts-bounces@febo.com] On Behalf Of Magnus Danielson
Sent: Monday, August 15, 2011 4:06 AM
To: time-nuts@febo.com
Subject: Re: [time-nuts] frequency stabilty question

Hi Paul,

On 15/08/11 05:10, Paul Cianciolo wrote:

I am trying to understand some of the terms used here quite often. I quoted this from Wikipedia:

"An Allan deviation of 1.3×10−9 at observation time 1 s (i.e. τ = 1 s) should be interpreted as there being an instability in frequency between two observations a second apart with a relative root mean square (RMS) value of 1.3×10−9."

OK, I take the blame for that one, as I wrote it.

Does this mean the observations made were at the very beginning and the very end of the 1 second time?

Yes, as the observation interval is one second in this case, but you can make it arbitrary to fit your needs and your application, such as 314.159 s or whatever.

If so, what about all the values in between? What happens if the oscillator deviated far worse than this during the interim?

Well, you should not interpret it as a particular interval, but rather as a typical interval. ADEV is there to handle noise. If you use a shorter interval, the collection of noises will be different, so your ADEV will be different. If you only have one ADEV value, you need to get the right interval. If your interval is in between known values, you can make an educated guess, since for pure noises the slopes are smooth.

But the important aspect is that you need to measure for the interval of your interest; a single measure (such as RMS) will not satisfy your needs. It can be better or worse than the single point you have. However, if you measure with a basic interval (tau0) you can algorithmically achieve integer multiples of that interval. Modern algorithms interlace these measures in an overlapping fashion, so parts of an interval are used by several intervals serving as samples in the estimate. Hence, no particular interval can be expected.

Or does the measurement consist of making measurements every cycle during that 1 second and then entering all those values into a formula that accounts for them all?

Assuming that we are after ADEV(1 s), you can make, say, 100 measurements 1 s apart. This takes 100 s. We process these for tau0 = 1 and tau-multiplier 1, which is the classic simple ADEV formula. Another approach is to take samples more often, say every 100 ms, and then take 100 such measurements. This takes 10 s. We process these for tau0 = 0.1 and tau-multiplier 10. The benefit is a significantly shorter measurement time for little extra effort... and using a 100 ms measurement interval you can get 10 times the samples for the same measurement time, so you gain in predictor precision. Thus, I toss in the confusing factor of improving the quality of the measure by increasing the number of samples. However, I want to show you that you can achieve the measures by different approaches, essentially showing that you do not have to make a unique measurement series for 1 s, 2 s, 4 s, 10 s, etc., but that you can get all of these out of the same measurement series. In fact, I highly encourage you to do so and to plot them regularly.

Maybe a very basic tutorial on this topic would help but I can't find one.

You only prove that there is more work to be done on the Wikipedia article. It also lacks some illustrative plots.

Cheers,
Magnus
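[Editor's note: Magnus's recipe above — phase samples taken at a basic interval tau0, combined with a tau-multiplier m — can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the thread; the function name and the test values are made up, and it implements the standard overlapping ADEV estimator from phase data.]

```python
import math

def adev(x, tau0, m):
    """Overlapping Allan deviation from phase samples x (seconds),
    taken every tau0 seconds, evaluated at tau = m * tau0."""
    # Second differences of phase at stride m, started at every sample
    # (the overlapping estimation Magnus mentions).
    d = [x[i + 2*m] - 2*x[i + m] + x[i] for i in range(len(x) - 2*m)]
    return math.sqrt(sum(v*v for v in d) / len(d)) / (math.sqrt(2) * m * tau0)

# A perfectly linear phase ramp (constant frequency) has zero instability:
ramp = [1e-9 * i for i in range(100)]
print(adev(ramp, 1.0, 1))  # essentially 0
```

With samples every 100 ms, `adev(x, 0.1, 10)` gives the same tau = 1 s point from ten times as many samples — exactly the trade-off described above.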
JL
Jim Lux
Tue, Aug 16, 2011 3:39 AM

On 8/15/11 8:07 AM, Ulrich Bangert wrote:

Jim,

pardon to correct you but

no apologies needed..

It's if you measured the frequency (instantaneously) at one second
intervals, and calculated the standard deviation, that would be the
ADEV for tau=1 second.

is simply wrong in at least two aspects.

First: The measurements need to be the AVERAGE frequency measured
over a time interval of one second, which most easily is done by
setting the counter's gate time to one second.

Note that the misunderstanding about the difference between an
instantaneous measurement of frequency (which is possible too, but
not with counter based methods) and an averaging measurement of
frequency over a given time interval was one of the sources of the
upsetting discussion between Bruce and WarrenS in this group about
the tight PLL method. For that reason I think we need to be very
careful not to use terms such as "instantaneous" in the wrong sense.

Second: It may be because you wanted to explain something as simply as
possible, but I hope that it is clear to you that Allan's fame
relates to the fact that he found out that the standard deviation is
exactly the WRONG tool to use in this case and that he needed to
formulate a new species of deviation that is today named after him.

Corrections gratefully acknowledged..

(and, as well, the correction that it's actually first differences that
are being used, not difference against the nominal)
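[Editor's note: the two corrections Jim acknowledges — gate-averaged frequency readings, and first differences rather than deviations from nominal — can be made concrete with a small sketch. This is hypothetical code, not from the thread; the drift level is arbitrary.]

```python
import math
import statistics

def adev_from_freq(y):
    """ADEV at tau = gate time, from consecutive gate-averaged
    fractional-frequency readings y: first differences, not
    deviations from the mean or from nominal."""
    d = [y[i+1] - y[i] for i in range(len(y) - 1)]
    return math.sqrt(sum(v*v for v in d) / len(d) / 2)

# Under a linear frequency drift the two statistics disagree:
y = [1e-9 * i for i in range(10)]   # drifting oscillator, one reading per gate
print(adev_from_freq(y))            # ~7.1e-10, set only by the drift per gate
print(statistics.pstdev(y))         # ~2.9e-9, and it grows with record length
```

The standard deviation keeps growing as more of the drifting record is included, while the first-difference statistic depends only on the drift per gate interval — one reason the standard deviation is the wrong tool here.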

MD
Magnus Danielson
Tue, Aug 16, 2011 4:02 AM

Paul,

On 16/08/11 02:07, Paul A. Cianciolo wrote:

Magnus,

Now that I have a basic understanding, the way you explained it seems just right.
My lack of even a basic understanding is what led to the confusion.
Sorry if I caused you any  trouble.
I am going to go back and read the article again.

Thank you Magnus

You are welcome. However, I wish that you and anyone else could get the
basic understanding from that article, and then be able to pick up more
and more of the quirks. I still consider the article a work in progress,
even if it covers quite some ground by now. It lacks some theoretical
analysis aspects, and could be better at depicting some key aspects.

So, for me it is important to find out what was missing for you to "get it".

Cheers,
Magnus

MD
Magnus Danielson
Tue, Aug 16, 2011 4:22 AM

On 15/08/11 18:43, Tom Van Baak wrote:

Correct. Some ADEV plots conveniently include error bars so
the effect of sample count on confidence is in your face. The
TSC 5110 does this. See for example:
http://leapsecond.com/pages/gpsdo/log35824v.gif
http://leapsecond.com/pages/gpsdo/log35825v.gif

There are also a number of modern variants on regular old Allan
deviation that improve confidence even given the same sample
count. In some cases the trade-off here is computation time.

I've spent some time in the Allan deviation article on degrees of freedom
and confidence intervals. Also, today Allan deviation can be considered
more of a statistical scale than a particular algorithm. Already
within the Allan deviation there are some algorithmic differences, but
the Hadamard, Total and Theo algorithms bring different approaches to the
same scale, with the same statistical bias properties but with improved
confidence intervals.

It is highly educational to watch the real-time updates of TimeLab, for
instance, as it gathers more data. You can see how the upper end swings
wildly as data comes in, but for a particular tau the amplitude of the
swing lowers and becomes more and more subtle. This is the statistical
effect of degrees of freedom on confidence intervals in action.

The algorithmic advances aim to give as tight a confidence interval
as possible for as short a measurement time as possible, and the basic
trick is to use overlapping estimation in combination with "over the
edge" analysis. The Total analysis mirrors the data sequence around the
edges to create a three-times-longer sequence, but to avoid biases the
sequence is frequency-corrected first, or else the unwrapping would
introduce false systematic noise. This could be due to low-frequency
noise or systematic effects of lower frequency than the sequence allows
for analysis, since it is a finite sequence of data... a key limitation
which is easily forgotten. The noise isn't white for longer times...
which is what drives us into the statistical predictor efforts of Allan
deviation and friends.

No, this isn't an easy topic; it took the professional researchers
decades to learn and develop... and the work keeps progressing. It is
also a recurring discussion here.

Hopefully the Allan deviation article can get you up to speed...

Cheers,
Magnus
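[Editor's note: Magnus's point that one measurement series yields ADEV at many taus — with overlapping estimation keeping the confidence tight — can be sketched as follows. This is an illustrative simulation of white FM noise at an arbitrary level, not data from the thread.]

```python
import math
import random

random.seed(0)
s0 = 1e-9                                  # white-FM level at tau0 = 1 s (arbitrary)
y = [random.gauss(0.0, s0) for _ in range(100_000)]

# Integrate frequency to phase once; every tau is derived from this one series.
x = [0.0]
for v in y:
    x.append(x[-1] + v)

def oadev(x, tau0, m):
    """Overlapping ADEV at tau = m * tau0 from phase samples."""
    d = [x[i + 2*m] - 2*x[i + m] + x[i] for i in range(len(x) - 2*m)]
    return math.sqrt(sum(v*v for v in d) / len(d)) / (math.sqrt(2) * m * tau0)

# For white FM the expected law is ADEV(tau) = s0 / sqrt(tau); the estimates
# from the same record track it at tau = 1 s, 4 s and 16 s.
for m in (1, 4, 16):
    print(m, oadev(x, 1.0, m), s0 / math.sqrt(m))
```

Because every starting sample contributes a difference, the overlapping estimator wrings many more degrees of freedom out of the same record than the classic non-overlapping one — which is why the plotted curves settle down as data accumulates.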

MD
Magnus Danielson
Tue, Aug 16, 2011 5:03 AM

On 15/08/11 17:07, Ulrich Bangert wrote:

Jim,

pardon to correct you but

It's if you measured the frequency (instantaneously) at one second
intervals, and calculated the standard deviation, that would
be the ADEV for tau=1 second.

is simply wrong in at least two aspects.

First: The measurements need to be the AVERAGE frequency measured over a time interval of one second, which most easily is done by setting the counter's gate time to one second.

Note that the misunderstanding about the difference between an instantaneous measurement of frequency (which is possible too, but not with counter based methods) and an averaging measurement of frequency over a given time interval was one of the sources of the upsetting discussion between Bruce and WarrenS in this group about the tight PLL method. For that reason I think we need to be very careful not to use terms such as "instantaneous" in the wrong sense.

Second: It may be because you wanted to explain something as simply as possible, but I hope that it is clear to you that Allan's fame relates to the fact that he found out that the standard deviation is exactly the WRONG tool to use in this case and that he needed to formulate a new species of deviation that is today named after him.

He provided both an analysis method which handled the non-white noise
and the analysis to show that the M-sample variance blows up.

Actually, they had been beating around the same bush for a couple of
years, and several researchers were considering the same concept.
However, through the theoretical analysis of bias functions, equivalence
between different numbers of samples was established, and it proved easy
to compare a 10-sample variance with a 7-sample variance by converting
through the 2-sample variance. Thus, it was easier to measure according
to the 2-sample variance directly, now that both the 7-sample and
10-sample variances were just bias functions away from it. The bias
function was all of a sudden known, and for some noises the bias grew
as you went to 1000 samples, 10000 samples, etc. towards infinity...
so the 2-sample variance was indeed more useful. That's how it was
coined the Allan variance by fellow researchers.

He also looked at the bias function for measurements with dead-time,
another obstacle of its day. It created another bias function which
would allow for comparable measures. By including that he forged
together one ring to rule them all... it was indeed a breakthrough
in comparable measures and in the understanding of how they interacted.

The statistical bias functions allow for conversion between different
sample variants and different taus by producing multiplying values to
convert from one measure to the other. Bias functions can also be
applied to the measurement period T for measurement time tau, where
the difference T - tau is the dead-time. However, using bias corrections
often requires that the dominant noise source (at that tau) is known, so
it needs to be identified. While there are algorithms to assist in that,
using the autocorrelation properties, it becomes a fair bit of processing
to do. It has become easy to avoid dead-time and we have all settled on
the 2-sample variance, so it is mostly the tau bias that we care about
today. In this sense, bias functions have become less important, but they
aid in understanding the behaviours, so they are still useful to
learn about.

Let's just say that spending time reading up on this topic has been quite
rewarding, in understanding the topic itself, the efforts put into it and
just how far we have come.

Cheers,
Magnus
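[Editor's note: the "blowing up" of the M-sample standard deviation can be seen even without noise, using the simplest non-stationary signal, a linear frequency drift. Hypothetical numeric example, not from the thread.]

```python
import math

def pstd(v):
    """Population standard deviation (the M-sample statistic)."""
    mu = sum(v) / len(v)
    return math.sqrt(sum((a - mu) ** 2 for a in v) / len(v))

def two_sample_dev(y):
    """2-sample (Allan-style) deviation from adjacent frequency readings."""
    d = [y[i+1] - y[i] for i in range(len(y) - 1)]
    return math.sqrt(sum(v*v for v in d) / len(d) / 2)

y = [1e-11 * i for i in range(10_000)]     # linearly drifting fractional frequency

# The M-sample standard deviation grows without bound with the record length...
print(pstd(y[:100]), pstd(y))              # ratio ~100 for a 100x longer record
# ...while the 2-sample statistic converges to (drift per step) / sqrt(2).
print(two_sample_dev(y[:100]), two_sample_dev(y))
```

This is the divergence that made the 2-sample variance the useful reference point: its value does not depend on how many samples you happened to take.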

BC
Bob Camp
Tue, Aug 16, 2011 4:20 PM

Hi

This brings in another subtle but significant issue.

We talk about the ADEV being done as the standard deviation of the frequency
differences, but often that's not what's done. Even with zero dead time,
there's another bit of magic in there. Drift is removed before the samples
are used.

Oddly there are multiple approaches to drift removal. It comes as no
surprise that the more aggressive the drift removal, the better looking the
result. If you are looking at ADEV, it's always worth asking if (and how)
the drift was removed.

Of course there's also pre-filtering as a function of Tau, but that's even
more exotic.

Bob
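[Editor's note: Bob's point — always ask whether and how drift was removed — can be illustrated with a sketch. Hypothetical code with arbitrary drift and noise levels, not from the thread: a least-squares straight line is subtracted from the frequency record before the ADEV is computed, and the long-tau result changes dramatically.]

```python
import math
import random

random.seed(1)
N, drift, noise = 10_000, 1e-12, 1e-11      # per-sample drift, white-FM level

y = [drift * i + random.gauss(0.0, noise) for i in range(N)]

def remove_linear_drift(y):
    """Subtract a least-squares straight line from the frequency record."""
    n = len(y)
    mx = (n - 1) / 2
    my = sum(y) / n
    sxx = sum((i - mx) ** 2 for i in range(n))
    sxy = sum((i - mx) * (v - my) for i, v in enumerate(y))
    slope = sxy / sxx
    return [v - (my + slope * (i - mx)) for i, v in enumerate(y)]

def oadev_freq(y, m):
    """Overlapping ADEV at tau = m samples, from frequency readings."""
    c = [0.0]
    for v in y:                              # running sum for block averages
        c.append(c[-1] + v)
    avg = [(c[i + m] - c[i]) / m for i in range(len(y) - m + 1)]
    d = [avg[i + m] - avg[i] for i in range(len(avg) - m)]
    return math.sqrt(sum(v*v for v in d) / len(d) / 2)

raw = oadev_freq(y, 100)
detrended = oadev_freq(remove_linear_drift(y), 100)
print(raw, detrended)   # the drift dominates raw; detrended shows only the noise
```

How aggressively one detrends (linear, quadratic, log-aging...) changes the long-tau numbers, which is exactly why "how was the drift removed" is worth asking of any ADEV plot.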

-----Original Message-----
From: time-nuts-bounces@febo.com [mailto:time-nuts-bounces@febo.com] On
Behalf Of Magnus Danielson
Sent: Tuesday, August 16, 2011 12:23 AM
To: time-nuts@febo.com
Subject: Re: [time-nuts] frequency stabilty question

On 15/08/11 18:43, Tom Van Baak wrote:

Correct. Some ADEV plots conveniently include error bars so
the effect of sample count on confidence is in your face. The
TSC 5110 does this. See for example:
http://leapsecond.com/pages/gpsdo/log35824v.gif
http://leapsecond.com/pages/gpsdo/log35825v.gif

There are also a number of modern variants on regular old Allan
deviation that improve confidence even given the same sample
count. In some cases the trade-off here is computation time.

I've spent some time in the Allan deviation article on degrees of freedom
and confidence intervals. Also, today Allan deviation can be considered
more of a statistical scale than a particular algorithm. Already
within the Allan deviation there are some algorithmic differences, but
the Hadamard, Total and Theo algorithms bring different approaches to the
same scale, with the same statistical bias properties but with improved
confidence intervals.

It is highly educational to watch the real-time updates of TimeLab, for
instance, as it gathers more data. You can see how the upper end swings
wildly as data comes in, but for a particular tau the amplitude of the
swing lowers and becomes more and more subtle. This is the statistical
effect of degrees of freedom on confidence intervals in action.

The algorithmic advances aim to give as tight a confidence interval
as possible for as short a measurement time as possible, and the basic
trick is to use overlapping estimation in combination with "over the
edge" analysis. The Total analysis mirrors the data sequence around the
edges to create a three-times-longer sequence, but to avoid biases the
sequence is frequency-corrected first, or else the unwrapping would
introduce false systematic noise. This could be due to low-frequency
noise or systematic effects of lower frequency than the sequence allows
for analysis, since it is a finite sequence of data... a key limitation
which is easily forgotten. The noise isn't white for longer times...
which is what drives us into the statistical predictor efforts of Allan
deviation and friends.

No, this isn't an easy topic; it took the professional researchers
decades to learn and develop... and the work keeps progressing. It is
also a recurring discussion here.

Hopefully the Allan deviation article can get you up to speed...

Cheers,
Magnus



MD
Magnus Danielson
Tue, Aug 16, 2011 5:00 PM

Hi Bob,

On 16/08/11 18:20, Bob Camp wrote:

Hi

This brings in another subtle but significant issue.

We talk about the ADEV being done as the standard deviation of the frequency
differences, but often that's not what's done. Even with zero dead time,
there's another bit of magic in there. Drift is removed before the samples
are used.

Oddly there are multiple approaches to drift removal. It comes as no
surprise that the more aggressive the drift removal, the better looking the
result. If you are looking at ADEV, it's always worth asking if (and how)
the drift was removed.

Indeed. Some (incorrectly) believe that the Hadamard deviation does this.
Well, Hadamard will cancel linear frequency drift, i.e. f = f_0 + D*t,
but any other drift function will not be fully cancelled. Crystals, for
instance, can be found to better match

f = f_0 + A ln (B*t)

Now, toss Hadamard on that: it will take its third difference on the
phase and consume the first-degree drift, but the higher-order parts of
the logarithm expression will still be there. This can be seen as a
rising slope at higher taus. Sure, Hadamard will remove much of the
first-degree effect, but for best results use a proper higher-degree
matching and remove the systematic effects such that only noise effects
remain. Hadamard is, however, powerful for providing a preview while
collecting data. For very low-rate aging it can also be useful.

In general, ADEV is nice for noise, but fails to give you a proper
feeling for systematic effects, which are best treated in their own
problem domain. A linear drift can be illustrated with ADEV, but ADEV
will display it with a bias of 1/sqrt(2), so the composite is actually
not trustworthy.

Also, you need to check the frequency and phase plots to learn what is
causing unexpected results. When using TimeLab, the wrapped and
unwrapped phase and the frequency plots all need to be checked, as they
highlight different errors. Thermal variations and their systematic
effects, for instance, are not best treated by ADEV.

Of course there's also pre-filtering as a function of Tau, but that's even
more exotic.

... and complex.

There are many aspects to correct measurements.

I think I covered some of them in the Allan Deviation article.

Cheers,
Magnus
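[Editor's note: Magnus's Hadamard point can be checked numerically with a sketch (hypothetical code, arbitrary coefficients): the second difference of a linearly drifting frequency record is identically zero, so Hadamard cancels it exactly, while a logarithmic aging curve leaves a residual.]

```python
import math

def adev_ff(y):
    """Allan deviation from frequency readings: first differences."""
    d = [y[i+1] - y[i] for i in range(len(y) - 1)]
    return math.sqrt(sum(v*v for v in d) / len(d) / 2)

def hdev_ff(y):
    """Hadamard deviation: one more difference than Allan — second
    differences of frequency (third differences of phase), over 6."""
    d = [y[i+2] - 2*y[i+1] + y[i] for i in range(len(y) - 2)]
    return math.sqrt(sum(v*v for v in d) / len(d) / 6)

linear = [1e-12 * i for i in range(1000)]                      # f = f0 + D*t
aging  = [1e-9 * math.log(1 + i / 100) for i in range(1000)]   # f = f0 + A ln(B*t)

print(adev_ff(linear), hdev_ff(linear))   # ADEV sees the drift, HDEV does not
print(hdev_ff(aging))                     # log aging still leaves a residual
```

This is why Hadamard is handy as a drift-tolerant preview but is not a substitute for fitting and removing the actual systematic model.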
