Bob Camp posted (Wed Oct 22 20:38:47 EDT 2014)
Re: [time-nuts] Phase, One edge or two?
ADEV most certainly does change with time, even for short tau's.
Can you elaborate?
Such as when, why, what kind of change, how much change,
at how short of tau's, over how long of time,
and using what type Oscillators?
Do you know what in the freq or Phase plot is causing the ADEV to change?
There are all kinds of oscillator characteristics that change over time
and/or need a lot of time to measure properly,
but I was under the impression that this is not the case for short-tau ADEV.
Of the many OCXO type Oscillators that I've tested (HP10811 & MV89),
seldom have I seen any significant change (say greater than 10%),
in the short tau (0.01 sec to 1 sec) ADEV values, after the systematic
type errors are removed. (even when starting soon after turn on)
ADEV is used to measure random types of noise so there are of
course the statistical uncertainty variations that are a function of
the number of valid data points. I find that using a minimum of
a thousand points at each tau gives good consistent results.
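As a rough illustration of this point, here is a minimal Python sketch (assuming evenly spaced phase data; this is not ws's tester code) of the overlapping ADEV at one tau. Re-running it with different seeds or lengths shows the statistical scatter ws is describing:

    import numpy as np

    def oadev(x, tau0, m):
        """Overlapping Allan deviation at tau = m*tau0 from phase data x (seconds)."""
        x = np.asarray(x)
        d = x[2*m:] - 2*x[m:-m] + x[:-2*m]       # second differences at lag m
        return np.sqrt(np.mean(d**2) / (2.0 * (m * tau0)**2))

    # Example: ~1200 phase points at 120 sps with simulated white FM noise.
    rng = np.random.default_rng(1)
    tau0 = 1.0 / 120.0
    y = 1e-12 * rng.standard_normal(1200)        # fractional frequency samples
    x = np.cumsum(y) * tau0                      # integrate to phase
    print(oadev(x, tau0, 1))                     # repeat with new seeds to see the scatter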
What I have seen when measuring the ADEV of time-nut type oscillators
is that it generally takes only a couple of minutes to get enough
samples to reliably predict what the short-tau ADEV is when
using a fast (120 sps), high-resolution (0.1 ps) tester.
Troubleshooting hint:
If the 0.1 second to 10 second ADEV tau plot is not flat on a good
time-nut type undisciplined OCXO, then the first places to look
for problems are the tester, the setup, or the procedure,
not the oscillator.
ws
ADEV most certainly does change with time, even for short tau's.
Can you elaborate?
Such as when, why, what kind of change, how much change,
at how short of tau's, over how long of time,
and using what type Oscillators?
Do you know what in the freq or Phase plot is causing the ADEV to change?
I'm happy to let Bob answer his own claim here. I'm curious as well. Unless he's talking about thermal noise, in which case I now believe him 100%.
OTOH, for time intervals of minutes to hours or days, the plotted ADEV can often vary. When in doubt, enable error bars in your ADEV calculations, or use DAVAR in Stable32, or use "Trace History" in TimeLab to expose how little or much the computed ADEV depends on tau and N.
In general, never do an ADEV calculation without visually checking the phase or frequency time series first.
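A minimal sketch of that first step (the file name and the 1 s spacing are assumptions for illustration):

    import numpy as np
    import matplotlib.pyplot as plt

    tau0 = 1.0                      # seconds between phase samples (assumed)
    x = np.loadtxt("phase.dat")     # hypothetical file: one phase reading (s) per line
    y = np.diff(x) / tau0           # fractional frequency time series

    fig, (a, b) = plt.subplots(2, 1, sharex=True)
    a.plot(x); a.set_ylabel("phase (s)")
    b.plot(y); b.set_ylabel("frac. frequency")
    plt.show()                      # look for jumps, drift, outliers before any ADEV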
Of the many OCXO type Oscillators that I've tested (HP10811 & MV89),
seldom have I seen any significant change (say greater than 10%),
in the short tau (0.01 sec to 1 sec) ADEV values, after the systematic
type errors are removed. (even when starting soon after turn on)
This is not my experience at all. Let's figure out what's happening to you.
If all your standards look sort of the same from tau 0.01 to tau 0.1 to tau 1, then either you need more oscillators to play with or maybe you have a measurement problem. This is especially true if you are doing post-comparator averaging. Averaging, by definition, tends to remove noise, to smooth things out. If your goal is to measure noise, the last thing you want to do is create any electronics or use any analog or digital or numerical filtering that removes or reduces the very thing you're trying to measure.
I remind you of this page http://leapsecond.com/pages/adev-avg/ about the perils of averaging data.
For most of the world, there's signal and noise. Signal good. Noise bad. But for us, measuring precision clocks, the noise is the signal. So don't do anything that removes or reduces noise.
ADEV is used to measure random types of noise so there are of
course the statistical uncertainty variations that are a function of
the number of valid data points. I find that using a minimum of
a thousand points at each tau gives good consistent results.
Are you crazy? The minimum is just 3 or 4 or 5 data points. Not 1000! You should not see much difference at 10 or 100 or 1000 points. If you do, something is wrong with your measurement model. If ADEV(tau) is that dependent on N, check the frequency time series. Consider removing drift or using HDEV instead of ADEV. We need to talk. If your logic were true, we'd all have to wait 3 years before we could compute the ADEV of a GPSDO at tau 1 day.
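For reference, a minimal sketch of the Hadamard deviation mentioned above. Its third difference cancels linear frequency drift, which ADEV's second difference does not:

    import numpy as np

    def ohdev(x, tau0, m):
        """Overlapping Hadamard deviation at tau = m*tau0 from phase data x (seconds)."""
        x = np.asarray(x)
        d = x[3*m:] - 3*x[2*m:-m] + 3*x[m:-2*m] - x[:-3*m]   # third differences at lag m
        return np.sqrt(np.mean(d**2) / (6.0 * (m * tau0)**2))

    # Pure linear frequency drift is a quadratic in phase; HDEV cancels it exactly.
    t = np.arange(10000.0)
    x_drift = 0.5 * 1e-14 * t**2
    print(ohdev(x_drift, 1.0, 10))   # ~0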
/tvb
Tom,
On 10/24/2014 11:31 PM, Tom Van Baak wrote:
ADEV most certainly does change with time, even for short tau's.
Can you elaborate?
Such as when, why, what kind of change, how much change,
at how short of tau's, over how long of time,
and using what type Oscillators?
Do you know what in the freq or Phase plot is causing the ADEV to change?
I'm happy to let Bob answer his own claim here. I'm curious as well. Unless he's talking about thermal noise, in which case I now believe him 100%.
OTOH, for time intervals of minutes to hours or days, the plotted ADEV can often vary. When in doubt, enable error bars in your ADEV calculations, or use DAVAR in Stable32, or use "Trace History" in TimeLab to expose how little or much the computed ADEV depends on tau and N.
In general, never do an ADEV calculation without visually checking the phase or frequency time series first.
You should make sure that you remove all forms of systematic effects
before turning the residual random noise over to ADEV.
If you have random noise being modulated in amplitude, you need to
measure long enough for the averaging not to have a great impact on
the result.
Of the many OCXO type Oscillators that I've tested (HP10811 & MV89),
seldom have I seen any significant change (say greater than 10%),
in the short tau (0.01 sec to 1 sec) ADEV values, after the systematic
type errors are removed. (even when starting soon after turn on)
This is not my experience at all. Let's figure out what's happening to you.
If all your standards look sort of the same from tau 0.01 to tau 0.1 to tau 1, then either you need more oscillators to play with or maybe you have a measurement problem. This is especially true if you are doing post-comparator averaging. Averaging, by definition, tends to remove noise, to smooth things out. If your goal is to measure noise, the last thing you want to do is create any electronics or use any analog or digital or numerical filtering that removes or reduces the very thing you're trying to measure.
I remind you of this page http://leapsecond.com/pages/adev-avg/ about the perils of averaging data.
For most of the world, there's signal and noise. Signal good. Noise bad. But for us, measuring precision clocks, the noise is the signal. So don't do anything that removes or reduces noise.
Systematic signals are, however, disturbances for ADEV.
ADEV is used to measure random types of noise so there are of
course the statistical uncertainty variations that are a function of
the number of valid data points. I find that using a minimum of
a thousand points at each tau gives good consistent results.
Are you crazy? The minimum is just 3 or 4 or 5 data points. Not 1000! You should not see much difference at 10 or 100 or 1000 points. If you do, something is wrong with your measurement model. If ADEV(tau) is that dependent on N, check the frequency time series. Consider removing drift or using HDEV instead of ADEV. We need to talk. If your logic were true, we'd all have to wait 3 years before we could compute the ADEV of a GPSDO at tau 1 day.
No, he is not crazy on this point. While the algorithm only needs 3
points to produce a value, that value will have so few degrees of
freedom that the confidence interval is WAAAY out there.
The reason that we don't need to wait 3 years for a tau of 1 day is that
we learned to use overlapping spans of time. We have since had much
more development in the algorithms to improve the degrees of freedom for
the same N samples, all in an effort to achieve as small a confidence
interval as possible. Also, the degrees of freedom achieved vary with
the tau0 multiple m, and with the dominant noise type.
A great way to illustrate the point of degrees of freedom and the number
of sample points needed to get tight confidence intervals is to watch how
the high-tau end of a curve updates in TimeLab: it behaves like the
jiggling end of a long rope. As more samples come in, the jiggling end
moves towards higher taus, but for a particular tau the amplitude of the
jiggling decreases until it almost stops. This is the effect of the
confidence intervals becoming tighter; the range within which the real
value lies becomes smaller and eventually is very tight.
The modern algorithms like TOTAL and Theo can cram out really impressive
degrees of freedom for a particular set of N and m compared to older
algorithms, which effectively translates into tighter confidence
intervals for the same N and m.
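To make the degrees-of-freedom point concrete, here is a sketch using the standard chi-squared construction. The edf expression is the NIST SP 1065 approximation for white FM with the overlapping estimator; other noise types need other formulas:

    from scipy.stats import chi2

    def adev_ci(adev, N, m, conf=0.683):
        """Rough confidence interval for an overlapping ADEV estimate,
        assuming white FM noise (edf per NIST SP 1065)."""
        edf = (3.0*(N - 1)/(2.0*m) - 2.0*(N - 2)/N) * 4.0*m*m/(4.0*m*m + 5.0)
        avar = adev**2
        lo = avar * edf / chi2.ppf(0.5 + conf/2.0, edf)
        hi = avar * edf / chi2.ppf(0.5 - conf/2.0, edf)
        return lo**0.5, hi**0.5

    # Few samples -> wide interval; many samples -> tight (the rope stops jiggling).
    print(adev_ci(1e-12, N=10, m=1))
    print(adev_ci(1e-12, N=1000, m=1))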
Cheers,
Magnus
Hi
Grab an OCXO that has been powered off for a long time.
Turn it on and start plotting ADEV. Do it from about 10 minutes after turn on. Run 15 to 30 minute tests every so often for the first few hours.
Come back the next day and run the same series for a few hours. Repeat a week later, and a month later.
Curve fit out the drift and run the ADEV numbers out to < 100 seconds tau.
Do that on enough oscillators and you will indeed find many that do get better (like 2X better for some, 10 or 20% for others) on ADEV after they have been on a while. That's true even if you compare the best of each batch. It really is getting better.
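A minimal sketch of the "curve fit out the drift" step (assuming evenly spaced phase data; a quadratic fit to phase removes time offset, frequency offset, and linear frequency drift):

    import numpy as np

    def remove_drift(x, tau0):
        """Subtract a least-squares quadratic from phase data before computing ADEV."""
        t = np.arange(len(x)) * tau0
        return x - np.polyval(np.polyfit(t, x, 2), t)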
——————————
Run an OCXO and watch the ADEV on the TimePod. Look at enough of them and you will find some that drop ADEV for a while (say 10 minutes or so) and then climb by a bit (say 1.5:1). Hmmm, what's going on? Look at the phase plot and there's an abrupt shift in phase over some period (which depends on the cause; there's more than one possibility). Let's say it's 10 seconds. The whole ADEV plot climbs, not just the part for > 10 second tau. Why? There's energy there at both short and long tau.
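This effect is easy to reproduce synthetically. A sketch (simulated data, not Bob's measurements) that injects a phase jump into white FM noise and compares ADEV with and without it:

    import numpy as np

    def oadev(x, tau0, m):
        d = x[2*m:] - 2*x[m:-m] + x[:-2*m]
        return np.sqrt(np.mean(d**2) / (2.0 * (m * tau0)**2))

    rng = np.random.default_rng(7)
    tau0 = 1.0
    x = np.cumsum(1e-12 * rng.standard_normal(100000)) * tau0
    x_step = x.copy()
    x_step[50000:] += 1e-9                      # 1 ns phase jump mid-run

    for m in (1, 10, 100, 1000):
        print(m, oadev(x, tau0, m), oadev(x_step, tau0, m))
    # The jump raises ADEV at short tau too, not just above the step duration.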
——————————————
Look at a GPSDO / disciplined oscillator / temperature compensated Rb. Let it run for a good long time. If it’s got a loop that steps out to long time constants, it may only bump the frequency once every 15 minutes or longer. Plot the ADEV over the time segment when it steps and compare it to the time period it does not step. Short tau ADEV is worse at the step.
————————————————
Look at a very normal OCXO on your TimePod. After 100 seconds, the 1 second ADEV should be pretty well determined. After 1000 seconds it should be very well determined. Flip on the error bars if you want an idea of how good it should be.
Watch for a while. Does it move outside the error bars? Hmmmm ….. It's not the error bars that are the problem. The math is correct. The statistics are the issue. The ADEV has changed for the worse as the run has gone on. It's a very common thing.
———————————
Those are just the first few off the top of my head.
Bob
In message 544AD1D1.4040608@rubidium.dyndns.org, Magnus Danielson writes:
A great way to illustrate the point of degrees of freedom and the number
of sample points needed to get tight confidence intervals is to watch how
the high-tau end of a curve updates in TimeLab: it behaves like the
jiggling end of a long rope. As more samples come in, the jiggling end
moves towards higher taus, but for a particular tau the amplitude of the
jiggling decreases until it almost stops. This is the effect of the
confidence intervals becoming tighter; the range within which the real
value lies becomes smaller and eventually is very tight.
ADEV snakes about at the far right end primarily because of the
phase jitter which dominates at your minimum tau.
If you have a phase measurement white noise of 1 ns = ADEV(tau=1)
and you expect your ADEV curve to bottom out around 1e-12 at some
larger tau, then you have a factor of a thousand of noise to average
out before a valid ADEV comes out of the noise.
This is not any different from any other "average out the noise"
situation in any significant way: sqrt(N) rules, and there is
nothing you can do about it,
except to decrease your phase measurement white noise, which is
why tuning your 5370 for peak performance is worth days of measurements
at the other end of the ADEV.
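A quick numeric check of this point (a sketch with an assumed flat 1 ns RMS measurement noise): white phase noise makes ADEV fall as 1/tau, so getting from ~1e-9 down to ~1e-12 does indeed take about a factor of a thousand in tau:

    import numpy as np

    def oadev(x, tau0, m):
        d = x[2*m:] - 2*x[m:-m] + x[:-2*m]
        return np.sqrt(np.mean(d**2) / (2.0 * (m * tau0)**2))

    rng = np.random.default_rng(3)
    x = 1e-9 * rng.standard_normal(1_000_000)   # 1 ns RMS white phase noise, tau0 = 1 s
    for m in (1, 10, 100, 1000):
        print(m, oadev(x, 1.0, m))              # slope ~1/tau; ~1e-12 near tau = 1000 s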
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
On Oct 24, 2014, at 6:25 PM, Magnus Danielson magnus@rubidium.dyndns.org wrote:
You should make sure that you remove all forms of systematic effects before turning the residual random noise over to ADEV.
If you have random noise being modulated in amplitude, you need to measure long enough for the averaging not to have a great impact on the result.
Is days long enough for a 1 second tau? If you define 1,000 x tau as “long enough” you are being way more conservative than just about anybody out there. My claim is that rather than telling everybody to run for 10,000 or 100,000 x tau, simply accept that ADEV does / may change.
Removing this or that before you do ADEV can get you on a very slippery slope indeed. Removing drift (either time or frequency) - fine. Removing this or that couple of minutes of data because it makes the result look better - that's likely to get you in trouble. Your customer (or system, or test setup) isn't likely to accept an "ADEV is ok most of the time" specification.
Is ADEV a good way to measure temperature stability? No, of course not. We do indeed have rooms that vary in temperature. They do impact ADEV on a Rb. Removing the delta-temperature related data from your ADEV input is not at all easy (1,000 to 10,000 second data …) and I believe it would misrepresent the part. Running in the real world, it's going to have that ADEV hump.
Yes indeed you can find FCS papers with all sorts of interesting “adjustments” or no processing at all. The consensus seems to be that if you go past drift correction, you really should have a footnote.
Bob
Hi
On Oct 25, 2014, at 3:39 AM, Poul-Henning Kamp phk@phk.freebsd.dk wrote:
ADEV snakes about at the far right end primarily because of the
phase jitter which dominates at your minimum tau.
If you have a phase measurement white noise of 1 ns = ADEV(tau=1)
and you expect your ADEV curve to bottom out around 1e-12 at some
larger tau, then you have a factor of a thousand of noise to average
out before a valid ADEV comes out of the noise.
This is not any different from any other "average out the noise"
situation in any significant way: sqrt(N) rules, and there is
nothing you can do about it.
In the case of the TimePod, the data can be presented when you have very few samples to work with. That said, it is interesting to watch it bring up error bars (which are indeed correctly calculated) and then see the trace walk outside those error bars as the run progresses. There are other measurements that are a bit less susceptible to this. None of them have any magic to get around sqrt(N).
Bob
Bob,
On 10/25/2014 02:02 PM, Bob Camp wrote:
On Oct 24, 2014, at 6:25 PM, Magnus Danielson magnus@rubidium.dyndns.org wrote:
You should make sure that you remove all forms of systematic effects before turning the residual random noise over to ADEV.
If you have random noise being modulated in amplitude, you need to measure long enough for the averaging not to have a great impact on the result.
Is days long enough for a 1 second tau? If you define 1,000 x tau as “long enough” you are being way more
conservative than just about anybody out there. My claim is that rather than telling everybody to run for 10,000 or
100,000 x tau, simply accept that ADEV does / may change.
I did not say that you need to do 1000 x tau; that was what someone else
said. If you paid attention, I said that the number of samples N and the
tau0 multiple m for a particular dominant noise (at that tau) create a
certain number of degrees of freedom for a particular ADEV estimator
algorithm. Discussing the length of the measurement without discussing
which estimator algorithm you're using and what confidence interval you
aim to reach is just taking a single value and running with it without
thinking about it.
For ITU-T telecom standards, the measurement length is 12 times the
maximum tau, using the overlapping estimator (see O.172 §10.5.1 for the
limit and G.810 §II.3 for the TDEV algorithm). That was chosen to ensure
comparability between different implementations for the same type of
measurement. See O.172 for other relevant implementation limits: tau0
has an upper limit, and so does bandwidth. Naturally, these limits are
for this specific purpose and these algorithms, and may not fit other
needs or choices.
It's only when you do old-style non-overlapping estimation that you need
to go towards 1000 x tau for reasonably good results.
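In code form, the rule is simply (nothing here beyond what O.172 says above):

    def max_reportable_tau(record_seconds):
        """ITU-T O.172-style rule: record length >= 12 x the largest tau."""
        return record_seconds / 12.0

    print(max_reportable_tau(86400))   # a one-day record supports tau out to 7200 s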
Removing this or that before you do ADEV can get you on a very slippery slope indeed. Removing drift (either time
or frequency) - fine. Removing this or that couple of minutes of data because it makes the
result look better, that’s
likely to get you in trouble. Your customer (or system, or test setup) isn’t likely to accept a “ADEV is ok most of
the time” specification.
Agreed. But I've never advocated cutting away samples like that, but
rather cancelling out the systematics, which are not part of the random noise.
Is ADEV a good way to measure temperature stability - no of course not. We do indeed have rooms that vary in
temperature. They do impact ADEV on a Rb. Removing the delta temperature related data from your ADEV input is not at
all easy (1,000 to 10,000 second data …) and I believe it would mis-represent the part. Running in the real world,
it’s going to have that ADEV hump.
An ADEV hump for such a systematic is maybe an indication, but not the
best way to represent it. It's also not part of ADEV's intended purpose,
namely to estimate the random noise effects.
Yes indeed you can find FCS papers with all sorts of interesting “adjustments” or no processing at all. The consensus
seems to be that if you go past drift correction, you really should have a footnote.
When you do not make a drift compensation and that line shows up, you'd
better explain that too.
In the end, ADEV is overused to represent things for which it is not a
good tool. You will need other tools in the tool-box to build a good
estimation of how that oscillator will behave at some tau.
Cheers,
Magnus
Let's take a real example.
Use your own phase data, or grab any of my large data sets (http://leapsecond.com/pages/gpsdo-sim) like ocxo.dat.gz which is a good example of real-life OCXO performance (400,000 seconds of data).
Attached are Stable32 plots of frequency, ADEV/MDEV, and dynamic ADEV. As a 3D plot, the latter shows how ADEV(tau) varies during the run. In this case the full data set is broken into about 90 pieces and ADEV is computed for each segment of data ("window"). If you study the frequency plot you may be able to convince yourself why the DADEV plot looks like it does; ADEV(small tau) is quite constant, while ADEV(larger tau) varies quite a bit. To me, this is as it should be, given how the raw data looks.
To explore dynamic ADEV without Stable32 or to go deep with the effects of sample size, see adev6.c / adev6.exe in my tools directory (www.leapsecond.com/tools).
Most programs compute ADEV based on the entire data set. But adev6 will compute ADEV(tau) on user-defined subsets of data. So, for example, instead of computing ADEV(tau 1) from 400,000 points, you can compute ADEV(tau 1) 400 times in blocks of 1000 points each, or 4000 times in blocks of 100 points each, etc. The default is back-to-back segments but you can specify overlapping segments. Using various combinations of parameters, it's pretty instructive to see the "noise" in the computed value of ADEV.
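The windowed computation is easy to sketch outside adev6 (a re-implementation of the idea, not Tom's code; the block size matches the runs quoted later in this thread):

    import numpy as np

    def oadev(x, tau0, m):
        d = x[2*m:] - 2*x[m:-m] + x[:-2*m]
        return np.sqrt(np.mean(d**2) / (2.0 * (m * tau0)**2))

    def windowed_adev(x, tau0, m, block):
        """ADEV(m*tau0) computed independently over back-to-back blocks of phase data."""
        return [oadev(x[i:i+block], tau0, m)
                for i in range(0, len(x) - block + 1, block)]

    # x = np.loadtxt("ocxo.dat")                 # 400,000 phase points, tau0 = 1 s
    # print(windowed_adev(x, 1.0, 1000, 40000))  # ten ADEV(1000 s) values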
The 4th attachment is a TimeLab plot of the data set with trace = 1 (default), 10, and 100. In some cases I prefer this sort of display. One can get complacent with simple error bars and forget that 1 - 68% = 32% of the points are expected to lie outside 1-sigma error bars, by definition.
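A two-line check of that 68% point (a sketch under a pure Gaussian assumption):

    import numpy as np
    z = np.random.default_rng(0).standard_normal(100000)
    print(np.mean(np.abs(z) > 1.0))   # ~0.317: about 1/3 of points fall outside 1-sigma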
/tvb
In the case of the TimePod, the data can be presented when you have very few samples to work with.
That said, it is interesting to watch it bring up error bars (which are indeed correctly calculated)
and then see the trace walk outside those error bars as the run progresses.
There are other measurements that are a bit less susceptible to this.
None of them have any magic to get around sqrt(N).
Bob
Maybe I don't understand error bars. For a dynamic display like TimeLab, walking outside (1 sigma) error bars is expected about 1/3 of the time, no?
/tvb
Hi
If you plot the data versus the six sigma bars (should have been more precise), you will see it walk outside them as well.
Bob
Hi
On Oct 25, 2014, at 12:19 PM, Magnus Danielson magnus@rubidium.dyndns.org wrote:
I did not say that you need to do 1000 x tau; that was what someone else said. If you paid attention, I said that the number of samples N and the tau0 multiple m for a particular dominant noise (at that tau) create a certain number of degrees of freedom for a particular ADEV estimator algorithm. Discussing the length of the measurement without discussing which estimator algorithm you're using and what confidence interval you aim to reach is just taking a single value and running with it without thinking about it.
For ITU-T telecom standards, the measurement length is 12 times the maximum tau, using the overlapping estimator (see O.172 §10.5.1 for the limit and G.810 §II.3 for the TDEV algorithm). That was chosen to ensure comparability between different implementations for the same type of measurement. See O.172 for other relevant implementation limits: tau0 has an upper limit, and so does bandwidth. Naturally, these limits are for this specific purpose and these algorithms, and may not fit other needs or choices.
If you are using under 100 samples for the test (overlapping or not), your confidence is not as high as it might be. You can see ADEV “drift in” over a period of days, even with a lot more than 10 samples.
It's only when you do old-style non-overlapping estimation that you need to go towards 1000 x tau for reasonably good results.
In the end, ADEV is overused to represent things for which it is not a good tool. You will need other tools in the tool-box to build a good estimation of how that oscillator will behave at some tau.
Except that ADEV is used by many as an acceptance test on systems and oscillators. Saying it’s OK to pull data out of a test run makes for a very interesting test design. We certainly use ADEV (without subtractions) here on the list to compare things like GPSDO’s at the system level.
Bob
On 10/25/2014 07:06 PM, Tom Van Baak wrote:
Maybe I don't understand error bars. For a dynamic display like TimeLab, walking outside (1 sigma) error bars is expected about 1/3 of the time, no?
Error bars work a little differently: they indicate, with some
probability (say 1-sigma), the range within which the real value lies.
By the way, sqrt(N) is not a very accurate estimator.
Cheers,
Magnus
Hi
How many hours / days/ months / years had the OCXO been off power before the run was started?
How soon after turn on did you start taking data?
Bob
Hi
On Oct 25, 2014, at 2:18 PM, Magnus Danielson magnus@rubidium.dyndns.org wrote:
Error bars work a little differently: they indicate, with some probability (say 1-sigma), the range within which the real value lies.
By the way, sqrt(N) is not a very accurate estimator.
But it is a common way to express the fact that your data is unlikely to converge any faster than sqrt(N)…
Bob
How many hours / days/ months / years had the OCXO been off power before the run was started?
How soon after turn on did you start taking data?
Hi Bob,
On the ocxo.dat data set, the frequency drift rate was down to just 5e-11 a day, so it's likely the OCXO had been powered for many days, even weeks. I don't know for sure w/o finding an old log book. The web page says the data came from "run3004/log50187.txt", which was a free-running TBolt in November 2008 measured with a TSC 5120 against a locked HP 58503B. I can re-run the measurement if you wish.
Why do you ask? I have many data sets here, with lower drift (e.g., rubidium or masers), with higher drift, and from a variety of phase measurement instruments. Lots of samples is usually better than few samples, but it doesn't take a lot to pin the stability of an oscillator down to a couple of dB.
In some cases, computed ADEV is not a number that gets more precise or more accurate the more data you collect. You can hit a floor and it starts to diverge if you collect for too many weeks or months. This is expected. HDEV would be better.
An analogy: no one is interested in the mean of 1,000,000 days of earth temperature data. Yes, it will be a "very precise" number, if you apply the mindless sqrt(N) rule-of-thumb. But once you get enough data, looking at periodicity, jumps, outliers, and trends over time is usually far more important than blindly calculating a simple mean or standard deviation from an entire data set.
You can argue all day if the ADEV(tau 1000s) should be 3.7e-12 or 3.75e-12 or 4e-12. Regardless, it's clearly about halfway between 1e-12 and 1e-11. Below 1 dB, the rest is what day it is, what hour you started the run, how long you collected data, or how your lab feels that day. Here's an example of ADEV(tau 1000) from adev6:
C:\Tmp>adev6 /a < ocxo.dat 1000
1000 0/400000 a 3.706451e-012 398000
C:\Tmp>adev6 /a < ocxo.dat 1000 40000
1000 0/400000 a 4.100519e-012 38000
1000 40000/400000 a 3.912714e-012 38000
1000 80000/400000 a 3.736134e-012 38000
1000 120000/400000 a 4.413685e-012 38000
1000 160000/400000 a 3.050424e-012 38000
1000 200000/400000 a 3.692079e-012 38000
1000 240000/400000 a 3.367214e-012 38000
1000 280000/400000 a 3.223972e-012 38000
1000 320000/400000 a 3.742055e-012 38000
1000 360000/400000 a 3.897041e-012 38000
C:\Tmp>adev6 /a < ocxo.dat 1000 4000
1000 0/400000 a 7.804138e-012 2000
1000 4000/400000 a 4.085721e-012 2000
1000 8000/400000 a 3.368610e-012 2000
1000 12000/400000 a 2.890283e-012 2000
1000 16000/400000 a 2.408464e-012 2000
1000 20000/400000 a 5.823737e-012 2000
1000 24000/400000 a 4.127749e-012 2000
1000 28000/400000 a 4.310555e-012 2000
1000 32000/400000 a 3.375545e-012 2000
1000 36000/400000 a 4.166632e-012 2000
1000 40000/400000 a 3.052641e-012 2000
1000 44000/400000 a 4.718652e-012 2000
1000 48000/400000 a 4.238576e-012 2000
1000 52000/400000 a 5.275587e-012 2000
1000 56000/400000 a 5.695453e-012 2000
1000 60000/400000 a 3.669497e-012 2000
1000 64000/400000 a 3.107038e-012 2000
1000 68000/400000 a 4.863025e-012 2000
1000 72000/400000 a 1.882393e-012 2000
1000 76000/400000 a 2.395768e-012 2000
1000 80000/400000 a 1.606562e-012 2000
1000 84000/400000 a 6.180515e-012 2000
1000 88000/400000 a 3.201972e-012 2000
1000 92000/400000 a 2.023414e-012 2000
1000 96000/400000 a 1.515005e-012 2000
1000 100000/400000 a 2.343072e-012 2000
1000 104000/400000 a 4.249873e-012 2000
1000 108000/400000 a 2.676816e-012 2000
1000 112000/400000 a 1.656133e-012 2000
1000 116000/400000 a 2.411179e-012 2000
1000 120000/400000 a 4.081474e-012 2000
1000 124000/400000 a 2.997803e-012 2000
1000 128000/400000 a 2.095393e-012 2000
1000 132000/400000 a 5.760947e-012 2000
1000 136000/400000 a 7.075811e-012 2000
1000 140000/400000 a 1.769521e-012 2000
1000 144000/400000 a 3.358276e-012 2000
1000 148000/400000 a 4.893182e-012 2000
1000 152000/400000 a 1.936321e-012 2000
1000 156000/400000 a 1.578596e-012 2000
1000 160000/400000 a 3.601683e-012 2000
1000 164000/400000 a 2.287769e-012 2000
1000 168000/400000 a 3.073412e-012 2000
1000 172000/400000 a 2.291148e-012 2000
1000 176000/400000 a 5.813071e-012 2000
1000 180000/400000 a 3.669111e-012 2000
1000 184000/400000 a 1.766833e-012 2000
1000 188000/400000 a 2.527836e-012 2000
1000 192000/400000 a 1.982012e-012 2000
1000 196000/400000 a 2.387086e-012 2000
1000 200000/400000 a 4.483388e-012 2000
1000 204000/400000 a 1.825970e-012 2000
1000 208000/400000 a 1.405565e-012 2000
1000 212000/400000 a 5.431766e-012 2000
1000 216000/400000 a 1.325479e-012 2000
1000 220000/400000 a 1.874571e-012 2000
1000 224000/400000 a 8.372485e-012 2000
1000 228000/400000 a 4.277784e-012 2000
1000 232000/400000 a 2.616340e-012 2000
1000 236000/400000 a 3.765100e-012 2000
1000 240000/400000 a 1.840977e-012 2000
1000 244000/400000 a 2.921888e-012 2000
1000 248000/400000 a 1.532576e-012 2000
1000 252000/400000 a 2.774957e-012 2000
1000 256000/400000 a 5.713711e-012 2000
1000 260000/400000 a 4.725035e-012 2000
1000 264000/400000 a 3.428511e-012 2000
1000 268000/400000 a 2.549448e-012 2000
1000 272000/400000 a 8.913688e-013 2000
1000 276000/400000 a 4.408449e-012 2000
1000 280000/400000 a 2.503479e-012 2000
1000 284000/400000 a 1.883790e-012 2000
1000 288000/400000 a 3.782682e-012 2000
1000 292000/400000 a 3.132628e-012 2000
1000 296000/400000 a 2.913452e-012 2000
1000 300000/400000 a 2.021695e-012 2000
1000 304000/400000 a 3.370930e-012 2000
1000 308000/400000 a 2.043129e-012 2000
1000 312000/400000 a 5.285278e-012 2000
1000 316000/400000 a 3.020556e-012 2000
1000 320000/400000 a 3.454389e-012 2000
1000 324000/400000 a 5.324399e-012 2000
1000 328000/400000 a 4.481485e-012 2000
1000 332000/400000 a 1.773463e-012 2000
1000 336000/400000 a 4.372101e-012 2000
1000 340000/400000 a 5.504813e-012 2000
1000 344000/400000 a 3.117502e-012 2000
1000 348000/400000 a 5.678955e-012 2000
1000 352000/400000 a 2.934761e-012 2000
1000 356000/400000 a 3.119271e-012 2000
1000 360000/400000 a 2.867365e-012 2000
1000 364000/400000 a 5.534214e-012 2000
1000 368000/400000 a 1.696574e-012 2000
1000 372000/400000 a 1.954016e-012 2000
1000 376000/400000 a 3.051420e-012 2000
1000 380000/400000 a 5.290654e-012 2000
1000 384000/400000 a 3.908393e-012 2000
1000 388000/400000 a 2.984008e-012 2000
1000 392000/400000 a 2.348755e-012 2000
1000 396000/400000 a 4.209189e-012 2000
/tvb
Bob,
On 10/25/2014 08:15 PM, Bob Camp wrote:
Hi
If you are using under 100 samples for the test (overlapping or not), your confidence is not as high as it might be.
You can see ADEV “drift in” over a period of days, even with a lot more than 10 samples.
Yes. One needs to look at what happens to judge when you can trust the
values. In the standard case, there would be a lot of samples, with a
minimum of 360 for the extreme case.
In the end, ADEV is overused to represent things for which it is not a good tool. You will need other tools in the tool-box to build a good estimation of how that oscillator will behave at some tau.
Except that ADEV is used by many as an acceptance test on systems and oscillators. Saying it’s OK to pull data out
of a test run makes for a very interesting test design. We certainly use ADEV (without subtractions) here on the list
to compare things like GPSDO’s at the system level.
I use ADEV, TDEV, phase plot and frequency plot to best illustrate and
understand what is happening. I would be using FFT for long-term if only
TimeLab would support it for normal counter measurements. I would be using
phase noise more if I had a TimePod at work.
Cheers,
Magnus
Bob,
On 10/25/2014 08:54 PM, Bob Camp wrote:
Hi
But it is a common way to express the fact that your data is unlikely to converge any faster than sqrt(N)…
Yes, but it gives you false hope of how quickly it really converges, as
the cross-correlations make you converge even more slowly than sqrt(N).
Cheers,
Magnus
Hi
On Oct 25, 2014, at 3:34 PM, Tom Van Baak tvb@LeapSecond.com wrote:
How many hours / days/ months / years had the OCXO been off power before the run was started?
How soon after turn on did you start taking data?
Hi Bob,
On the ocxo.dat data set, the frequency drift rate was down to just 5e-11 a day, so it's likely the OCXO had been powered for many days, even weeks. I don't know for sure w/o finding an old log book. The web page says the data came from "run3004/log50187.txt", which was a free-running TBolt in November 2008 measured with a TSC 5120 against a locked HP 58503B. I can re-run the measurement if you wish.
Why do you ask?
I ask because the most common case where I see ADEV changing over time without systematic issues is oscillators that have been off power for a long time. If you warm them up and run them for a while, the ADEV tends to become much more predictable.
Bob
Hi
On Oct 25, 2014, at 4:04 PM, Magnus Danielson magnus@rubidium.dyndns.org wrote:
I use ADEV, TDEV, phase plot and frequency plot to best illustrate and understand what is happening. I would be using FFT for long-term if only TimeLab would support it for normal counter measurements. I would be using phase noise more if I had a TimePod at work.
I would suggest adding the Hadamard deviation to that list. It highlights some things that the others do not.
Bob