time-nuts@lists.febo.com

Discussion of precise time and frequency measurement

Re: [time-nuts] GPSDO control loops and correcting quantization error

Hal Murray
Fri, Sep 14, 2012 10:08 PM

djl@montana.com said:
Michael: Actually implementing a 16 bit DAC to its 1-bit minimum resolution
will be headache enough. You will gain a real education in good grounding
practice, shielding, power supply stability and noise, and other Murphy
intrusion. A 32 bit DAC IMHO, is impossible, and that's the name of that
tune.

The trick for this application is that you don't need full accuracy over the
full range of the DAC.  All you need is roughly linear and stable around the
operating point.  The PLL will take care of any offset.  Any gain error is
just a minor change to the overall gain.

This thread started with "16 bit PWM DAC".  I think that matches the
requirements.

I would expect a problem area would be filtering the PWM output.  Anything
you don't filter out will turn into close in spikes.  It might be fun to try
to measure them.

64K/72 MHz is about 1 ms.  32 bits at 72 MHz is about 60 seconds.

Has anybody compared DDS style DACs with PWM?  The idea is to spread the
pulses out to make the filtering easier.  Instead of 1111100000, you would
get to work with 1010101010
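The two patterns are easy to compare in a quick sketch (an illustration, not code from the thread; the functions and numbers below are invented). At the same duty cycle, the DDS/phase-accumulator pattern pushes its ripple energy up in frequency where it is much easier to filter:

```python
# Sketch (illustration only): compare a plain PWM bit pattern with a
# DDS/phase-accumulator pattern at the same duty cycle. For scale, a
# 16-bit PWM at 72 MHz repeats every 2**16/72e6 ~ 0.91 ms; a 32-bit
# one every 2**32/72e6 ~ 59.7 s.

def pwm_pattern(duty, period):
    """Plain PWM: all the ones bunched at the start of each period."""
    ones = round(duty * period)
    return [1] * ones + [0] * (period - ones)

def dds_pattern(duty, period):
    """Phase-accumulator style: emit a 1 each time the accumulator wraps."""
    acc = 0.0
    out = []
    for _ in range(period):
        acc += duty
        if acc >= 1.0:
            acc -= 1.0
            out.append(1)
        else:
            out.append(0)
    return out

print(pwm_pattern(0.5, 10))  # [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(dds_pattern(0.5, 10))  # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```

Both streams average to the same DC value; only the placement of the pulses, and hence the ripple spectrum, differs.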

--
These are my opinions.  I hate spam.

David
Fri, Sep 14, 2012 11:18 PM

On Fri, 14 Sep 2012 15:08:53 -0700, Hal Murray
hmurray@megapathdsl.net wrote:

Michael: Actually implementing a 16 bit DAC to its 1-bit minimum resolution
will be headache enough. You will gain a real education in good grounding
practice, shielding, power supply stability and noise, and other Murphy
intrusion. A 32 bit DAC IMHO, is impossible, and that's the name of that
tune.

The trick for this application is that you don't need full accuracy over the
full range of the DAC.  All you need is roughly linear and stable around the
operating point.  The PLL will take care of any offset.  Any gain error is
just a minor change to the overall gain.

One thing you sure want is a DAC that is monotonic.  Differential
nonlinearity larger than the least significant bit can cause some
rather peculiar servo loop problems.
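What differential nonlinearity means here is easy to show with a made-up 3-bit example (invented values, not David's data): once DNL goes below -1 LSB the transfer curve is non-monotonic, so increasing the code can decrease the output, and a servo hunting around that code sees the wrong sign of loop gain.

```python
# Hypothetical 3-bit DAC transfer curve with one bad step (values
# invented for illustration). DNL is each actual step minus the ideal
# 1 LSB step; DNL below -1 LSB means the output falls as the code
# rises, i.e. the DAC is non-monotonic at that transition.

def dnl(levels):
    """Per-transition DNL in LSB for a list of measured output levels."""
    return [(levels[i + 1] - levels[i]) - 1.0 for i in range(len(levels) - 1)]

def is_monotonic(levels):
    return all(b >= a for a, b in zip(levels, levels[1:]))

levels = [0.0, 1.0, 2.0, 3.0, 2.4, 4.0, 5.0, 6.0]  # code 4 outputs less than code 3
print(dnl(levels))           # transition 3 -> 4 has DNL of about -1.6 LSB
print(is_monotonic(levels))  # False
```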

Azelio Boriani
Fri, Sep 14, 2012 11:19 PM

My 2.5 ns TIC? Very simple: a 400 MHz counter, started and stopped by the
two signals to compare. I published the VHDL code for it here a few months
ago. Really nothing new, but simple and useful for a GPSDO with a 35-40 ns
PPS output from an OCXO. The Rb PPS wander is currently under evaluation
against the Z3815A.

On Sat, Sep 15, 2012 at 12:08 AM, Hal Murray <hmurray@megapathdsl.net> wrote:

Michael: Actually implementing a 16 bit DAC to its 1-bit minimum resolution
will be headache enough. You will gain a real education in good grounding
practice, shielding, power supply stability and noise, and other Murphy
intrusion. A 32 bit DAC IMHO, is impossible, and that's the name of that
tune.

The trick for this application is that you don't need full accuracy over the
full range of the DAC.  All you need is roughly linear and stable around the
operating point.  The PLL will take care of any offset.  Any gain error is
just a minor change to the overall gain.

This thread started with "16 bit PWM DAC".  I think that matches the
requirements.

I would expect a problem area would be filtering the PWM output.  Anything
you don't filter out will turn into close in spikes.  It might be fun to try
to measure them.

64K/72 MHz is about 1 ms.  32bits at 72 MHz is 60 seconds.

Has anybody compared DDS style DACs with PWM?  The idea is to spread the
pulses out to make the filtering easier.  Instead of 1111100000, you would
get to work with 1010101010

--
These are my opinions.  I hate spam.


time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to
https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.

Tom Van Baak
Sat, Sep 15, 2012 2:10 AM

I would expect a problem area would be filtering the PWM output.  Anything
you don't filter out will turn into close in spikes.  It might be fun to try
to measure them.

64K/72 MHz is about 1 ms.  32bits at 72 MHz is 60 seconds.

Has anybody compared DDS style DACs with PWM?  The idea is to spread the
pulses out to make the filtering easier.  Instead of 1111100000, you would
get to work with 1010101010

Hal,

The Z3801A 16-bit DAC is dithered at 102.4 Hz and LP filtered. Details and plots:
http://leapsecond.com/pages/z3801a-efc/
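The underlying idea can be sketched numerically (assumed mechanism with invented numbers, not the Z3801A firmware): toggle between two adjacent 16-bit codes and let the low-pass filter average the two levels, which lands the output a fraction of an LSB between them.

```python
# Dither sketch (assumed mechanism, not the Z3801A implementation):
# send code+1 for a fraction `frac` of the dither cycles and `code`
# otherwise; the low-pass filter sees the average, i.e. sub-LSB
# resolution from a 16-bit part.

def dithered_mean(code, frac, cycles=1000):
    """Average DAC output when code+1 is sent for `frac` of the cycles."""
    acc = 0.0
    total = 0
    for _ in range(cycles):
        acc += frac
        if acc >= 1.0:          # phase accumulator decides which code to send
            acc -= 1.0
            total += code + 1
        else:
            total += code
    return total / cycles

print(dithered_mean(30000, 0.25))  # 30000.25: a quarter LSB above code 30000
```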

/tvb

Poul-Henning Kamp
Sat, Sep 15, 2012 7:36 AM

I did some experiments with a charge-transfer D/A and at least as far
as I can see, that has the potential to go beyond 30 bits.

The key observation is, as others have pointed out, that we only really
care about relative local linearity, the PLL loop will take care of
everything else.

What I did was take a large-ish low-leakage capacitor and put a voltage
follower after it, to drive the EFC pin.

The PLL loop, implemented in software, controlled two fet-switches
which would deliver positive or negative pulses to the capacitor
through matched resistors.

The software involvement allows for trivial calibration of any
imbalance between the switches and resistors, and even, if you manage
to measure and model it (I didn't), the capacitor's leakage current.

By controlling the length of the pulses, you can get incredibly small
"steps" in your output voltage while retaining a wide dynamic range.

The neat thing is that you do not need a particularly precise or stable
reference voltage for this design.

Ideally you would want to drive the switches with constant current
sources, but if you put a 10-bit ADC on the voltage follower's output,
so you know the approximate voltage across the capacitor, it is trivial
to model the charge delivered per unit time in software.

At the time I had not read the HPJ article about the 3458A, and if
I do it again, there are several tricks from there I would employ:
Balanced pulses to linearize switch-noise and multiple drivers of
different strengths for faster capture.
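A rough numeric model of the idea (the component values below are assumptions for illustration, not Poul-Henning's actual parts): a pulse of width t through resistor R charges the capacitor toward the drive voltage, and the step shrinks as the capacitor voltage approaches it, which is why a coarse ADC reading is enough for software to predict the charge delivered per unit of pulse time.

```python
# Charge-transfer step model (component values are assumptions).
# One pulse of width t_pulse through R moves the capacitor voltage by
# (Vdrive - Vc) * (1 - exp(-t_pulse / (R*C))), so very short pulses
# give extremely fine EFC steps from ordinary components.

import math

C = 10e-6       # integration capacitor, farads (assumed)
R = 100e3       # pulse resistor, ohms (assumed)
VDRIVE = 5.0    # pulse amplitude, volts (assumed)

def step_per_pulse(vc, t_pulse):
    """Voltage step on the capacitor from one pulse (simple RC charging)."""
    return (VDRIVE - vc) * (1.0 - math.exp(-t_pulse / (R * C)))

# A single 40 ns pulse with the capacitor sitting at 2.5 V moves the
# EFC voltage by only ~0.1 microvolt with these values:
print(step_per_pulse(2.5, 40e-9))
```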

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

Bruce Griffiths
Sat, Sep 15, 2012 9:06 AM

Hal Murray wrote:

Michael: Actually implementing a 16 bit DAC to its 1-bit minimum resolution
will be headache enough. You will gain a real education in good grounding
practice, shielding, power supply stability and noise, and other Murphy
intrusion. A 32 bit DAC IMHO, is impossible, and that's the name of that
tune.

The trick for this application is that you don't need full accuracy over the
full range of the DAC.  All you need is roughly linear and stable around the
operating point.  The PLL will take care of any offset.  Any gain error is
just a minor change to the overall gain.

This thread started with "16 bit PWM DAC".  I think that matches the
requirements.

I would expect a problem area would be filtering the PWM output.  Anything
you don't filter out will turn into close in spikes.  It might be fun to try
to measure them.

64K/72 MHz is about 1 ms.  32bits at 72 MHz is 60 seconds.

Has anybody compared DDS style DACs with PWM?  The idea is to spread the
pulses out to make the filtering easier.  Instead of 1111100000, you would
get to work with 1010101010

Using a synchronous filter for the PWM DAC eases the additional
filtering required considerably.
24 bit resolution is readily achieved by combining the outputs of a pair
of PWM sources sharing a single synchronous filter.
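One common way to realize the pair (a sketch of the general technique; Bruce's actual circuit may differ, and the values below are assumed): sum a coarse and a fine PWM channel into one node with the fine channel attenuated by 2^-12, e.g. via a roughly 4096:1 resistor ratio, so two 12-bit PWMs give a 24-bit effective step.

```python
# Two-channel PWM combining sketch (generic technique, assumed values).
# The fine channel is weighted by 2**-BITS relative to the coarse one,
# so two 12-bit PWM sources yield a 24-bit effective resolution.

BITS = 12           # resolution of each PWM channel (assumed)
FULL_SCALE = 5.0    # volts (assumed)

def combined_output(coarse_code, fine_code):
    """Weighted sum of the two duty cycles; fine is scaled by 2**-BITS."""
    coarse = coarse_code / 2**BITS
    fine = fine_code / 2**BITS
    return FULL_SCALE * (coarse + fine / 2**BITS)

lsb = FULL_SCALE / 2**(2 * BITS)
print(lsb)                        # effective 24-bit step, about 0.3 microvolts
print(combined_output(2048, 1))   # half scale plus one effective LSB
```

In practice the resistor ratio only needs to be stable, not exact: a small gain error on the fine channel just changes its span slightly, which the loop absorbs.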

Bruce

Magnus Danielson
Sat, Sep 15, 2012 10:13 AM

On 09/15/2012 12:08 AM, Hal Murray wrote:

Michael: Actually implementing a 16 bit DAC to its 1-bit minimum resolution
will be headache enough. You will gain a real education in good grounding
practice, shielding, power supply stability and noise, and other Murphy
intrusion. A 32 bit DAC IMHO, is impossible, and that's the name of that
tune.

The trick for this application is that you don't need full accuracy over the
full range of the DAC.  All you need is roughly linear and stable around the
operating point.  The PLL will take care of any offset.  Any gain error is
just a minor change to the overall gain.

This thread started with "16 bit PWM DAC".  I think that matches the
requirements.

I would expect a problem area would be filtering the PWM output.  Anything
you don't filter out will turn into close in spikes.  It might be fun to try
to measure them.

64K/72 MHz is about 1 ms.  32bits at 72 MHz is 60 seconds.

Has anybody compared DDS style DACs with PWM?  The idea is to spread the
pulses out to make the filtering easier.  Instead of 1111100000, you would
get to work with 1010101010

PWM has the fantastic power of putting most of its energy at the lowest
frequency, which makes analog filtering extra hard: you need to move the
filter bandwidth down or use a higher-order filter for a good filter slope.
The filter bandwidth puts an upper limit on the PLL bandwidth.

I've done an inverse-PWM spectrum modulation, which isn't all that hard,
and it gives significant improvements over PWM.

Another approach is to use the sigma-delta approach which smooths the
frequency spikes out to noise.
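A minimal first-order sigma-delta modulator shows that idea (an illustration, not Magnus's implementation): feeding the quantization error back through an integrator makes the 1-bit stream's average track the input while pushing the error energy toward high frequencies.

```python
# First-order sigma-delta sketch (illustration only). The integrator
# accumulates input minus the last output bit, and the comparator
# emits a 1 whenever the accumulated error grows large enough.

def sigma_delta(x, n):
    """Produce n output bits whose average approximates x in [0, 1]."""
    integ = 0.0
    out = 0
    bits = []
    for _ in range(n):
        integ += x - out          # accumulate the quantization error
        out = 1 if integ >= 0.5 else 0
        bits.append(out)
    return bits

bits = sigma_delta(0.25, 8)
print(bits)                   # [0, 1, 0, 0, 0, 1, 0, 0]
print(sum(bits) / len(bits))  # 0.25: the average tracks the input
```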

Cheers,
Magnus

Bob Camp
Sat, Sep 15, 2012 4:46 PM

Hi

If the objective is to build a GPSDO that needs a 32 bit D/A as opposed to a 16 to 20 bit part, there are some things you have to consider.

The output of your GPS has jitter on it. How much jitter is a "that depends" sort of thing, but there's always more jitter than on the output of a good OCXO or Rb. The idea is to get the short term stability of the OCXO or Rb and the long term stability of the GPS. To do that, you are going to set the cross over between the GPS and OCXO at some magic point. Exactly where depends on the actual noise plots of your OCXO and your GPS. With a good DOCXO you can easily have a cross over out in the 1,000 to 5,000 second range. With a Rb the cross over is likely to be in the 100,000 to 200,000 second range. If it's closer in you degrade the short term stability of the OCXO or Rb.

If the cross over is at 100,000 seconds, everything that happens quicker than 100,000 seconds is ignored by the PLL. Stuff that happens more slowly than 100,000 seconds is corrected by the PLL. No, it's not exactly a brick wall, but it does fundamentally work that way.

Whatever happens with the DAC quicker than the cross over passes straight through to the OCXO or Rb. In the case of a 100,000 second cross over, daily temperature cycling in the lab winds up as short term instability and is not corrected by the PLL. Hourly cycles (think HVAC cycles) very much will show up as short term issues that are not corrected. If 32 bits matters at all, then instability at the 32 bit level will show up. Temperature is not the only issue; noise on the DAC output is also a concern. Johnson noise is one source, and there are others.
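This passthrough can be checked numerically with an idealized first-order lock (my simplifying assumption; real GPSDO loops are usually second order, but the trend is the same): the fraction of a sinusoidal DAC disturbance that survives to the output is the loop's sensitivity function.

```python
# Sensitivity sketch for an idealized first-order lock (assumption).
# |S(jw)| = w / sqrt(w^2 + wc^2): disturbances faster than the
# crossover reach the output almost untouched, slower ones are
# servoed out by the loop.

import math

def passthrough(period_s, crossover_s):
    """Fraction of a sinusoidal DAC disturbance reaching the output."""
    w = 2 * math.pi / period_s
    wc = 2 * math.pi / crossover_s
    return w / math.sqrt(w * w + wc * wc)

tau = 100_000  # the crossover from the example above, in seconds
for period in (3600, 86400, 1_000_000):  # hourly HVAC, daily cycle, slow drift
    print(period, round(passthrough(period, tau), 3))
```

With a 100,000 s crossover, the hourly HVAC cycle passes at better than 99%, the daily cycle at roughly 76%, and only disturbances much slower than the crossover are strongly corrected.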

No free lunch…

Bob

On Sep 15, 2012, at 3:36 AM, Poul-Henning Kamp phk@phk.freebsd.dk wrote:

I did some experiments with a charge-transfer D/A and at least as far
as I can see, that has the potential to go beyond 30 bits.

The key observation is, as others have pointed out, that we only really
care about relative local linearity, the PLL loop will take care of
everything else.

What I did was take a large-ish low-leakage capacitor and put a voltage
follower after it, to drive the EFC pin.

The PLL loop, implemented in software, controlled two fet-switches
which would deliver positive or negative pulses to the capacitor
through matched resistors.

The software involvement allows for trivial calibration of any
imbalance between the switches and resistors, and even, if you manage
to measure and model it (I didn't), the capacitor's leakage current.

By controlling the length of the pulses, you can get incredibly small
"steps" in your output voltage while retaining a wide dynamic range.

The neat thing is that you do not need a particularly precise or stable
reference voltage for this design.

Ideally you would want to drive the switches with constant current
sources, but if you put a 10-bit ADC on the voltage follower's output,
so you know the approximate voltage across the capacitor, it is trivial
to model the charge delivered per unit time in software.

At the time I had not read the HPJ article about the 3458A, and if
I do it again, there are several tricks from there I would employ:
Balanced pulses to linearize switch-noise and multiple drivers of
different strengths for faster capture.

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.



Poul-Henning Kamp
Sat, Sep 15, 2012 7:12 PM

In message <437C6604-20A5-4E05-BE99-9E26B01D80B7@rtty.us>, Bob Camp writes:

If indeed 32 bits matters, then instability at the 32 bit level will show up.
[...]
No free lunch

Absolutely agree there.

My point was merely that the requirements for the DAC after a
software PLL are very narrow and specific compared to the quality
metrics of DACs in general, and that low-cost methods are available
to meet those needs, so there is no need to spend a fortune on a
general X-bit DAC if a cheaper special-purpose one will do.

One of the advantages of the charge-transfer DAC concept is that
there is only thermal noise for a stable output, there is no reference
voltage which can suffer from tempco or noise bleed-through.

You'll have the tempco of your integration capacitor to deal with,
but already around 16-20 bits you will be ovenizing everything anyway.

The noise on EFC voltage changes can be given an almost arbitrarily high
frequency spectrum, which will be filtered out long before it reaches the
EFC input of the OCXO or Rb.

In steady state, my experiment fired a single 40 ns pulse every few
seconds, and I never managed to detect these pulses on the output of
the voltage follower, because the integration capacitor is a glorified
low-pass filter.

Even at high update rates, it would be possible to use a PRNG to
spread out the spectrum of the update noise.

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

Bob Camp
Sat, Sep 15, 2012 8:41 PM

Hi

The real answer is that you don't need anything near 32 bits. Anything with a <1.0x10^-13 AVAR isn't going to have much of a tuning range on the EFC. 22 bits will do just fine for a 1 ppm EFC range part with 1.0 x10^-12 at 1 second. With that sort of sensitivity you will have a very hard time getting the voltage into the OCXO without a gotcha right at the terminals, unless the unit has a fully isolated EFC input. That rules out roughly 99.99% of all OCXO's.
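The arithmetic behind that is quick to check (the 10 V EFC span below is my assumed number; the 1 ppm range and 22 bits come from the paragraph above):

```python
# Resolution arithmetic for the example above. The EFC full-scale
# voltage (10 V) is an assumption; the 1 ppm tuning range and 22-bit
# DAC come from the text.

EFC_RANGE = 1e-6     # fractional-frequency tuning range (1 ppm)
BITS = 22
EFC_VOLTS = 10.0     # assumed full-scale EFC voltage

freq_step = EFC_RANGE / 2**BITS   # fractional frequency per LSB
volt_step = EFC_VOLTS / 2**BITS   # volts per LSB

print(f"{freq_step:.2e} per LSB")           # about 2.4e-13, below 1e-12
print(f"{volt_step * 1e6:.2f} uV per LSB")  # about 2.4 uV: board-level noise
                                            # and thermal EMFs start to matter
```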

Bob

On Sep 15, 2012, at 3:12 PM, Poul-Henning Kamp phk@phk.freebsd.dk wrote:

In message <437C6604-20A5-4E05-BE99-9E26B01D80B7@rtty.us>, Bob Camp writes:

If indeed 32 bits matters, then instability at the 32 bit level will show up.
[...]
No free lunch

Absolutely agree there.

My point was merely that the requirements for the DAC after a
software PLL are very narrow and specific compared to the quality
metrics of DACs in general, and that low-cost methods are available
to meet those needs, so there is no need to spend a fortune on a
general X-bit DAC if a cheaper special-purpose one will do.

One of the advantages of the charge-transfer DAC concept is that
there is only thermal noise for a stable output, there is no reference
voltage which can suffer from tempco or noise bleed-through.

You'll have the tempco of your integration capacitor to deal with,
but already around 16-20 bits you will be ovenizing everything anyway.

The noise on EFC voltage changes can be given an almost arbitrarily high
frequency spectrum, which will be filtered out long before it reaches the
EFC input of the OCXO or Rb.

In steady state, my experiment fired a single 40 ns pulse every few
seconds, and I never managed to detect these pulses on the output of
the voltage follower, because the integration capacitor is a glorified
low-pass filter.

Even at high update rates, it would be possible to use a PRNG to
spread out the spectrum of the update noise.

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.



Tom Van Baak
Sun, Sep 16, 2012 4:40 AM

I worry in your example about the long cross-over time. This may be ideal for frequency stability, but probably is not good for time accuracy. If one is using the GPSDO as a timing reference, I would think a shorter time constant will keep the rms time error down. Has anyone on the list done work optimizing the timing accuracy rather than the frequency stability?

/tvb

----- Original Message -----
From: "Bob Camp" lists@rtty.us
To: "Discussion of precise time and frequency measurement" time-nuts@febo.com
Sent: Saturday, September 15, 2012 9:46 AM
Subject: Re: [time-nuts] GPSDO control loops and correcting quantization error

Hi

If the objective is to build a GPSDO that needs a 32 bit D/A as opposed to a 16 to 20 bit part, there are some things you have to consider.

The output of your GPS has jitter on it. How much jitter is a "that depends" sort of thing, but there's always more jitter than on the output of a good OCXO or Rb. The idea is to get the short term stability of the OCXO or Rb and the long term stability of the GPS. To do that, you are going to set the cross over between the GPS and OCXO at some magic point. Exactly where depends on the actual noise plots of your OCXO and your GPS. With a good DOCXO you can easily have a cross over out in the 1,000 to 5,000 second range. With a Rb the cross over is likely to be in the 100,000 to 200,000 second range. If it's closer in you degrade the short term stability of the OCXO or Rb.

If the cross over is at 100,000 seconds, everything that happens quicker than 100,000 seconds is ignored by the PLL. Stuff that happens more slowly than 100,000 seconds is corrected by the PLL. No, it's not exactly a brick wall, but it does fundamentally work that way.

Whatever happens with the DAC quicker than the cross over passes straight through to the OCXO or Rb. In the case of a 100,000 second cross over, daily temperature cycling in the lab winds up as short term instability and is not corrected by the PLL. Hourly cycles (think HVAC cycles) very much will show up as short term issues that are not corrected. If indeed 32 bits matters, then instability at the 32 bit level will show up. Temperature is not the only issue; noise on the DAC output is also a concern. Johnson noise is one source; there are others.
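As a back-of-envelope illustration of this brick-wall picture (my own toy model, using the example numbers above):

```python
# Crude brick-wall model of the PLL crossover: variations slower than
# the crossover time constant are corrected by the loop; faster ones
# pass straight through to the OCXO or Rb.

def corrected_by_pll(period_s: float, crossover_s: float) -> bool:
    """True if a disturbance of the given period is removed by the loop."""
    return period_s > crossover_s

crossover = 100_000    # 100,000 s crossover, as in the Rb example
daily     = 86_400     # daily lab temperature cycle
hvac      = 3_600      # hourly HVAC cycle
seasonal  = 7_884_000  # ~3-month seasonal drift

print(corrected_by_pll(daily, crossover))     # False: shows up as instability
print(corrected_by_pll(hvac, crossover))      # False: also uncorrected
print(corrected_by_pll(seasonal, crossover))  # True: the PLL takes it out
```

A real loop rolls off gradually around the crossover, of course; the point is only that a daily cycle sits inside the uncorrected band of a 100,000 s loop.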

No free lunch…

Bob

TV
Tom Van Baak
Sun, Sep 16, 2012 4:47 AM

Agreed. The DAC resolution only needs to be a little better than the short-term noise of the OCXO. For example, there's no point at all stepping the DAC by 1e-13 if the OCXO's own noise is on the order of 1e-11 or 1e-12. On the other hand, what you want to avoid is having the DAC/EFC increase the short-term noise of the OCXO. Sadly this seems to happen a lot. Just measure the stability of your favorite GPSDO with and without the DAC/EFC operating.

I've not seen graphs but I assume there's also a cross-over where DAC resolution/noise as a function of update rate meets OCXO stability as a function of tau.
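A quick sketch of the resolution side of that comparison, assuming a hypothetical 1 ppm EFC tuning range (the function and numbers are mine, for illustration):

```python
# Fractional-frequency step of one DAC LSB for a given EFC tuning
# range and bit count -- the quantity to compare against the OCXO's
# own short-term noise floor.

def dac_lsb_step(efc_range_ppm: float, bits: int) -> float:
    """Fractional frequency change per DAC LSB."""
    return efc_range_ppm * 1e-6 / (2 ** bits)

step_22 = dac_lsb_step(1.0, 22)   # 1 ppm range, 22-bit DAC
step_16 = dac_lsb_step(1.0, 16)   # same range, 16-bit DAC

print(f"22-bit LSB: {step_22:.2e}")  # ~2.4e-13, below a 1e-12 OCXO floor
print(f"16-bit LSB: {step_16:.2e}")  # ~1.5e-11, comparable to OCXO noise
```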

/tvb

----- Original Message -----
From: "Bob Camp" lists@rtty.us
To: "Discussion of precise time and frequency measurement" time-nuts@febo.com
Sent: Saturday, September 15, 2012 1:41 PM
Subject: Re: [time-nuts] GPSDO control loops and correcting quantization error

Hi

The real answer is that you don't need anything near 32 bits. Anything with a <1.0x10^-13 AVAR isn't going to have much of a tuning range on the EFC. 22 bits will do just fine for a 1 ppm EFC range part with 1.0 x10^-12 at 1 second. With that sort of sensitivity you will have a very hard time getting the voltage into the OCXO without a gotcha right at the terminals, unless the unit has a fully isolated EFC input. That rules out roughly 99.99% of all OCXO's.

Bob

PK
Poul-Henning Kamp
Sun, Sep 16, 2012 6:24 AM

In message 5C1BBD844F6145969FE8A4785FEE490A@pc52, "Tom Van Baak" writes:

Has anyone on the list done work optimizing the timing accuracy rather than
the frequency stability?

Yes, timing accuracy has been my main focus and in general I have been
using integration times on the low side of 10000 seconds for that,
but it depends a lot on the OCXO/Rb and environment.

The PLL in NTPns is a (by now) old attempt to make a self-tuning PLL
for optimal time stability, and it does a surprisingly good job at it.

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

TV
Tom Van Baak
Sun, Sep 16, 2012 8:16 AM

Yes, timing accuracy has been my main focus and in general I have been
using integration times on the low side of 10000 seconds for that,
but it depends a lot on the OCXO/Rb and environment.

The PLL in NTPns is a (by now) old attempt to make a self-tuning PLL
for optimal time stability, and it does a surprisingly good job at it.

Are there papers that talk about how to optimize for best timing or best frequency or (no free lunch) some compromise combination of the two?

/tvb

BC
Bob Camp
Sun, Sep 16, 2012 12:12 PM

Hi

If your definition of timing accuracy is "within 100 ns of GPS time ten minutes after lock" then a faster crossover is a better idea. A faster loop will track GPS better. If your GPS noise is on the order of 10 ns, your time error will be pretty low. An example: 100 s loop and 10 ns GPS => 1x10^-10 noise.

If your definition of timing accuracy is "best short term stability plot" then picking a long crossover is the way to do it. You want the PLL to kick in only once it's going to do no harm to the noise signature. An example: 10,000 s loop and 10 ns GPS => 1x10^-12 noise.

If your GPS noise is lower / higher, the numbers obviously will change accordingly. If your GPS noise is dimensioned in one set of units and your OCXO noise in another, that's going to require a units conversion (pk-pk != rms != 3 sigma, etc.).
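The two examples above follow from a simple ratio, timing jitter divided by the loop time constant; a minimal sketch:

```python
# Rough model behind Bob's two examples: the fractional frequency
# offset the loop imposes to track out GPS timing noise over one
# loop time constant.

def tracking_freq_noise(gps_jitter_s: float, tau_s: float) -> float:
    """Fractional frequency noise ~ timing jitter / loop time constant."""
    return gps_jitter_s / tau_s

fast = tracking_freq_noise(10e-9, 100)      # 100 s loop, 10 ns GPS
slow = tracking_freq_noise(10e-9, 10_000)   # 10,000 s loop, 10 ns GPS

print(f"100 s loop:    {fast:.0e}")   # 1e-10
print(f"10,000 s loop: {slow:.0e}")   # 1e-12
```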

Bob

On Sep 16, 2012, at 12:40 AM, Tom Van Baak tvb@LeapSecond.com wrote:

I worry in your example about the long cross-over time. This may be ideal for frequency stability, but probably is not good for time accuracy. If one is using the GPSDO as a timing reference, I would think a shorter time constant will keep the rms time error down. Has anyone on the list done work optimizing the timing accuracy rather than the frequency stability?

/tvb

----- Original Message -----
From: "Bob Camp" lists@rtty.us
To: "Discussion of precise time and frequency measurement" time-nuts@febo.com
Sent: Saturday, September 15, 2012 9:46 AM
Subject: Re: [time-nuts] GPSDO control loops and correcting quantization error

Hi

If the objective is to build a GPSDO that needs a 32 bit D/A as opposed to a 16 to 20 bit part, there are some things you have to consider.

The output of your GPS has jitter on it. How much jitter is a "that depends" sort of thing, but there's always more jitter than on the output of a good OCXO or Rb. The idea is to get the short term stability of the OCXO or Rb and the long term stability of the GPS. To do that, you are going to set the cross over between the GPS and OCXO at some magic point. Exactly where depends on the actual noise plots of your OCXO and your GPS. With a good DOCXO you can easily have a cross over out in the 1,000 to 5,000 second range. With a Rb the cross over is likely to be in the 100,000 to 200,000 second range. If it's closer in you degrade the short term stability of the OCXO or Rb.

If the cross over is at 100,000 seconds, everything that happens quicker than 100,000 seconds is ignored by the PLL. Stuff that happens more slowly than 100,000 seconds is corrected by the PLL. No, it's not exactly a brick wall, but it does fundamentally work that way.

Whatever happens with the DAC quicker than the cross over passes straight through to the OCXO or Rb. In the case of a 100,000 second cross over, daily temperature cycling in the lab winds up as short term instability and is not corrected by the PLL. Hourly cycles (think HVAC cycles) very much will show up as short term issues that are not corrected. If indeed 32 bits matters, then instability at the 32 bit level will show up. Temperature is not the only issue; noise on the DAC output is also a concern. Johnson noise is one source; there are others.

No free lunch…

Bob


time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.

PK
Poul-Henning Kamp
Sun, Sep 16, 2012 1:49 PM

In message CE93652A-1DA6-48E3-9883-D7616AC24A78@rtty.us, Bob Camp writes:

Bob,

There's one thing that makes me scratch my head here: why do you keep
arguing as if the timeconstant cannot be changed dynamically?

I use very aggressive timeconstants initially, to quickly get the
phase offset under control, and then I ramp up the timeconstant in
order to reduce the phase noise of the GPS, until I hit something which
looks like the "Allan intercept" (as Dave Mills calls it).

It won't take long for us to agree that the timeconstant
is a tradeoff between phase and frequency error, but just because
it is called a "timeconstant" doesn't mean we cannot change it.
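A minimal sketch of the ramp-up idea, with made-up numbers (this is not the NTPns code):

```python
# Hypothetical dynamic-timeconstant ramp: start aggressive (short) to
# pull in the phase offset quickly, then grow the timeconstant
# geometrically toward a cap near the Allan intercept.

def ramped_tau(step: int, tau_start: float = 16.0,
               tau_max: float = 10_000.0, growth: float = 1.3) -> float:
    """Loop time constant at update number `step` (0-based)."""
    return min(tau_start * growth ** step, tau_max)

for s in (0, 5, 10, 20, 40):
    print(s, round(ramped_tau(s), 1))
```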

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

PK
Poul-Henning Kamp
Sun, Sep 16, 2012 3:47 PM

In message 5C52FBDBA5084AD4A36300FBA73BEF5E@pc52, "Tom Van Baak" writes:

Yes, timing accuracy has been my main focus and in general I have been
using integration times on the low side of 10000 seconds for that,
but it depends a lot on the OCXO/Rb and environment.

The PLL in NTPns is a (by now) old attempt to make a self-tuning PLL
for optimal time stability, and it does a surprisingly good job at it.

Are there papers that talk about how to optimize for best timing or best
frequency or (no free lunch) some compromise combination of the two?

The only writings I am aware of are what Dave Mills has written and
the PLL code in NTPns, but I haven't followed this closely in the last
10 years, so do check for newer writings.

Dave Mills coined the term "Allan intercept" for the cross-over of
the two sources' Allan variances, and it's a good Google search term
for finding his relevant papers.

I'm not entirely sure his rule of thumb for regulating to that point
is mathematically sound & precise, but the concept itself is certainly
valid, even if you have to compensate for the timeconstant of the
PLL you use to regulate to that point.
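One hypothetical way to locate such an Allan intercept from two measured ADEV curves is log-log interpolation; the curves below are made-up illustrative values, not real data:

```python
import math

def allan_intercept(taus, adev_a, adev_b):
    """Return the tau where log(adev_a) - log(adev_b) changes sign,
    linearly interpolated in log-log space; None if no crossing."""
    d = [math.log(a) - math.log(b) for a, b in zip(adev_a, adev_b)]
    for i in range(len(d) - 1):
        if d[i] == 0:
            return taus[i]
        if d[i] * d[i + 1] < 0:
            f = d[i] / (d[i] - d[i + 1])
            lt = math.log(taus[i]) + f * (math.log(taus[i + 1]) - math.log(taus[i]))
            return math.exp(lt)
    return None

# Illustrative shapes: GPS ADEV falls roughly as 1/tau (white PM);
# OCXO sits on a flicker floor and then drifts up.
taus = [1, 10, 100, 1000, 10_000, 100_000]
gps  = [1e-8, 1e-9, 1e-10, 1e-11, 1e-12, 1e-13]
ocxo = [2e-12, 1e-12, 1e-12, 1e-12, 2e-12, 5e-12]

print(allan_intercept(taus, gps, ocxo))  # crossing near a few thousand seconds
```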

I spent a lot of time with the code in NTPns, to try to get that PLL
to converge on the optimum, and while generally good, it's not perfect.

The basic problem is that the data you have available for autotuning
is the Allan variance between your input and your steered source.

If you also have the Allan variance between the steered source and
a third, better source, the task is pretty trivial: minimize the
area below that curve.

But if you do that on the curve you have, you don't optimize, you
pessimize, since the lowest area is with a timeconstant of zero.

Going the other direction and maximizing the area is no good either,
and trying to balance the area around some pivot related to the
present PLL timeconstant does not converge, in my experience.

What I did instead was to (badly) reinvent Shewhart's ideas for testing
whether the phase residual is under "statistical process control":

I increase the timeconstant if the phase residual has too frequent
zero-crossings and shorten it if they happen too seldom.
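A hypothetical sketch of a zero-crossing rule like this one (thresholds and step factor are my own illustrative choices, not from the NTPns code):

```python
# Count zero-crossings of the phase residual over a window: many
# crossings suggest the loop is tight enough to be chasing reference
# noise (lengthen the timeconstant); few crossings suggest the loop
# is too slow to correct the residual (shorten it).

def zero_crossings(residuals):
    return sum(1 for a, b in zip(residuals, residuals[1:]) if a * b < 0)

def adapt_timeconstant(tau, residuals, lo=2, hi=8, step=1.5):
    n = zero_crossings(residuals)
    if n > hi:            # residual hugs zero: tracking reference noise
        return tau * step
    if n < lo:            # residual wanders: loop not correcting enough
        return tau / step
    return tau            # in statistical control: leave it alone

noisy = [1, -1, 1, -1, 1, -1, 1, -1, 1, -1]   # many crossings
drift = [1, 2, 3, 4, 5, 4, 3, 2, 1, 1]        # no crossings

print(adapt_timeconstant(1000, noisy))  # 1500.0: lengthen
print(adapt_timeconstant(1000, drift))  # ~666.7: shorten
```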

Having read a lot more about statistical process control since I
built those NTP servers for Air Traffic Control 10 years ago, I
would now leverage more of the theory and heuristics developed in
process control (3-sigma violations, length of monotonic runs,
etc.).

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

MD
Magnus Danielson
Sun, Sep 16, 2012 5:20 PM

On 09/16/2012 05:47 PM, Poul-Henning Kamp wrote:

In message 5C52FBDBA5084AD4A36300FBA73BEF5E@pc52, "Tom Van Baak" writes:

Yes, timing accuracy has been my main focus and in general I have been
using integration times on the low side of 10000 seconds for that,
but it depends a lot on the OCXO/Rb and environment.

The PLL in NTPns is a (by now) old attempt to make a self-tuning PLL
for optimal time stability, and it does a surprisingly good job at it.

Are there papers that talk about how to optimize for best timing or best
frequency or (no free lunch) some compromise combination of the two?

The only writings I am aware of, is what Dave Mills has written and
the PLL code in NTPns, but I havn't followed this closely in the last
10 years, so do check for newer writings.

Dave Mills coined the term "allan intercept" as the cross over of
the two sources allan variances and it's a good google search for
his relevant papers.

I'm not entirely sure his rule of thumb for regulating to that point
is mathematically sound & precise, but the concept itself is certainly
valid, even if you have to compensate for the timeconstant of the
PLL you use to regulate to that point.

Well, what is being used is the phase-noise intercept. Conceptually a
similar intercept point is available in the Allan variance. However, as
you shift between noise types, the Allan (and Modified Allan) variance
has a different scaling factor to the underlying phase-noise
amplitudes. The danger of using the Allan-variance variant is that you
get a bias in position compared to the phase-noise plot cross-overs.
However, the concept is essentially the same, and the relative slopes
are the same. You get into the right neighbourhood, though.

The concept has been in use in the phase-noise world, so you would
need to search the phase-noise articles to find the original source.
It's been used to generate stable high-frequency signals.

The analysis of PLL-based splicing of ADEV curves is tricky, and I have
not seen any good comprehensive analysis, even if the general concept is
roughly understood. The equivalent for phase noise is, however, well
understood and leaves no magic to it.

I spent a lot of time with the code in NTPns, to try to get that PLL
to converge on the optimum, and while generally good, it's not perfect.

The basic problem is that the data you have available for autotuning,
is the allan variance between your input and your steered source.

You need to treat the data as loose- and tight-PLL measurements,
depending on what you look for. There are loads of calibration issues,
covered in the literature.

If you also have the allan variance between the steered source and
a 3rd, better, source, the task is pretty trivial:  Minimize the
area below that curve.

But if you do that on the curve you have, you don't optimize, you
pessimize, since the lowest area, is with a timeconstant of zero.

Going the other direction and maximizing the area is no good either
and trying to balance the area around some pivot related to the
present PLL timeconstant does not converge in my experience.

What I did instead was to (badly) reinvent Shewhart's ideas for testing
whether the phase residual is under "statistical process control":

I increase the timeconstant if the phase residual has too frequent
zero-crossings and shorten it if they happen too seldom.

Having read a lot more about statistical process control since I
built those NTP servers for Air Traffic Control 10 years ago, I
would now leverage more of the theory and heuristics developed in
process control (3-sigma violations, length of monotonic runs,
etc.).

It's a complex field, and things like temperature dependencies help to
confuse you.

Cheers,
Magnus

BC
Bob Camp
Sun, Sep 16, 2012 5:34 PM

Hi

The time constant can indeed be changed dynamically, that's what is often done.

The purpose of my examples was to keep things simple and look at the "running condition" of the loop rather than its performance while it settles down.

Bob

On Sep 16, 2012, at 9:49 AM, Poul-Henning Kamp phk@phk.freebsd.dk wrote:

In message CE93652A-1DA6-48E3-9883-D7616AC24A78@rtty.us, Bob Camp writes:

Bob,

There's one thing that makes me scratch my head here:  Why do you keep
arguing as if the timeconstant cannot be changed dynamically ?

I use a very aggressive timeconstant initially, to quickly get the
phase offset under control, and then I ramp up the timeconstant in
order to reduce phase noise of the GPS, until I hit something which
looks like the "Allan-intercept" (as Dave Mills calls it).

It won't take long for us to agree that the timeconstant
is a tradeoff between phase and frequency error, but just because
it is called a "timeconstant" doesn't mean we cannot change it.

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.

PK
Poul-Henning Kamp
Sun, Sep 16, 2012 5:48 PM

In message 50560A58.5010607@rubidium.dyndns.org, Magnus Danielson writes:

What I did instead was to (badly) reinvent Shewhart's ideas for testing
if the phase residual is under "statistical process control":

I increase the timeconstant if the phase residual has too frequent
zero-crossings and loosen it if they happen too seldom.

Having read a lot more about statistical process control, since I
built those NTP servers for the Air Traffic Control 10 years ago,
I would leverage more of the theory and heuristics developed in
process control. (3sigma violations, length of monotonic direction
etc. etc.)

It's a complex field, and things like temperature dependencies help to
confuse you.

No, not really.

The reason I went with the "statistical control" approach was exactly
to not be confused or misled by environmental or other factors: I
wanted a PLL which would adapt to circumstances on its
own, while still maximizing the hold-over time in case of GPS loss,
and all in all it has worked very well.

So far I have seen it cope admirably with an OCXO which went from
indoors to outdoors environment in the middle of winter, a PRS10
which gradually ran out of steam and only locked 40% of the time and
various other odd-ball events, so I think I'm justified in saying
that it does a pretty good job for autonomous and even unattended
operation.

It's certainly not perfect, but it is painfully obvious that the
adaptive PLL based on statistical control heuristics is much more
resilient than a fixed or hand-tuned PLL.
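The zero-crossing heuristic quoted above can be sketched in a few lines. All the numbers here (the target crossing rate, the dead band, the 1.2x step) are illustrative placeholders, not the actual NTPns tuning:

```python
def zero_crossings(residuals):
    """Count sign changes in a sequence of phase residuals."""
    return sum(1 for a, b in zip(residuals, residuals[1:]) if a * b < 0)

def adapt_time_constant(tau, residuals, target_rate=0.25, band=0.5):
    """Nudge a PLL time constant from the zero-crossing rate of the
    phase residual over an observation window.

    Too many crossings -> the loop is chasing reference noise, so
    lengthen tau; too few -> the residual wanders in long runs and
    the loop is too slow, so shorten tau.  The thresholds and step
    size are guesses for illustration only.
    """
    n = len(residuals) - 1
    rate = zero_crossings(residuals) / n if n > 0 else 0.0
    if rate > target_rate * (1 + band):
        tau *= 1.2   # residual looks like white noise: slow the loop down
    elif rate < target_rate * (1 - band):
        tau /= 1.2   # residual drifts monotonically: tighten the loop
    return tau
```

Run once per observation window; the multiplicative step makes the time constant ramp geometrically, which matches the "aggressive initially, then ramp up" behaviour described earlier in the thread.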

I've been trying to find an excuse for giving NTPns an overhaul
(PTP is a leading candidate for this) to get a chance to iron out
the kinks I have spotted over the last 10 years, but so far
life keeps me busy with other interesting stuff.

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

BC
Bob Camp
Sun, Sep 16, 2012 5:48 PM

Hi

By far the most common approach to optimizing these is the "measure it and see" approach.

  1. measure the noise out of the GPS ( must be done no matter what)
  2. measure the noise of the specific OCXO (again must be done)
  3. guess at a cross over
  4. try it and measure the result.
  5. step and repeat 3 and 4 until exhaustion sets in

Indeed converting the data to phase noise rather than ADEV helps the guess process. You can go a bit crazy with math to get a better first guess. Unless you measure what you get, you won't find all the silly little things you forgot to put into your math model.
If you simply try a dynamic tune approach, you never really get to an optimum point. You need a "better than" reference to let you know where you are. You can keep pushing out the time constant and watching with just a GPS and OCXO, but you never really know when to stop.
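The guess in step 3 can be mechanized once the two ADEV curves are measured. A minimal sketch (the log-log interpolation and the single-crossing assumption are mine, not part of Bob's procedure; real measured curves are noisier and may need smoothing first):

```python
import math

def allan_intercept(taus, adev_gps, adev_ocxo):
    """Estimate the crossover ("Allan intercept") of two measured
    ADEV curves sampled at the same taus.

    adev_gps is expected to fall with tau while adev_ocxo is flat or
    rising, so the curves cross once.  Returns a log-log interpolated
    crossover tau, or None if the curves never cross.
    """
    for i in range(1, len(taus)):
        if adev_ocxo[i] >= adev_gps[i] and adev_ocxo[i - 1] < adev_gps[i - 1]:
            lg = [math.log(v) for v in adev_gps[i - 1:i + 1]]
            lo = [math.log(v) for v in adev_ocxo[i - 1:i + 1]]
            lt = [math.log(v) for v in taus[i - 1:i + 1]]
            # solve lg0 + t*(lg1-lg0) == lo0 + t*(lo1-lo0) for t in [0, 1]
            t = (lg[0] - lo[0]) / ((lo[1] - lo[0]) - (lg[1] - lg[0]))
            return math.exp(lt[0] + t * (lt[1] - lt[0]))
    return None
```

The returned tau is only a first guess for the loop time constant; steps 4 and 5 (try it and measure the result) still apply.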

Bob

On Sep 16, 2012, at 11:47 AM, Poul-Henning Kamp phk@phk.freebsd.dk wrote:

In message 5C52FBDBA5084AD4A36300FBA73BEF5E@pc52, "Tom Van Baak" writes:

Yes, timing accuracy has been my main focus and in general I have been
using integration times on the low side of 10000 seconds for that,
but it depends a lot on the OCXO/Rb and environment.

The PLL in NTPns is a (by now) old attempt to make a self-tuning PLL
for optimal time stability, and it does a surprisingly good job at it.

Are there papers that talk about how to optimize for best timing or best
frequency or (no free lunch) some compromise combination of the two?

The only writings I am aware of, is what Dave Mills has written and
the PLL code in NTPns, but I haven't followed this closely in the last
10 years, so do check for newer writings.

Dave Mills coined the term "allan intercept" as the cross over of
the two sources' Allan variances and it's a good google search for
his relevant papers.

I'm not entirely sure his rule of thumb for regulating to that point
is mathematically sound & precise, but the concept itself is certainly
valid, even if you have to compensate for the timeconstant of the
PLL you use to regulate to that point.

I spent a lot of time with the code in NTPns, to try to get that PLL
to converge on the optimum, and while generally good, it's not perfect.

The basic problem is that the data you have available for autotuning,
is the allan variance between your input and your steered source.

If you also have the allan variance between the steered source and
a 3rd, better, source, the task is pretty trivial:  Minimize the
area below that curve.

But if you do that on the curve you have, you don't optimize, you
pessimize, since the lowest area, is with a timeconstant of zero.

Going the other direction and maximizing the area is no good either
and trying to balance the area around some pivot related to the
present PLL timeconstant does not converge in my experience.

What I did instead was to (badly) reinvent Shewhart's ideas for testing
if the phase residual is under "statistical process control":

I increase the timeconstant if the phase residual has too frequent
zero-crossings and loosen it if they happen too seldom.

Having read a lot more about statistical process control, since I
built those NTP servers for the Air Traffic Control 10 years ago,
I would leverage more of the theory and heuristics developed in
process control. (3sigma violations, length of monotonic direction
etc. etc.)

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.



PK
Poul-Henning Kamp
Sun, Sep 16, 2012 5:56 PM

In message ACD158CA-D76C-4A8B-B77D-4FA691D0B50B@rtty.us, Bob Camp writes:

The purpose of my examples was to keep things simple and look at
the "running condition" of the loop rather than its performance
while it settles down.

But what is "running condition" ?

I see my PLL adjust to thermal conditions during summer (A/C) and
winter (heating) and even to GPS constellation changes...

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

PK
Poul-Henning Kamp
Sun, Sep 16, 2012 6:01 PM

In message 3A001267-52DE-4EE9-B6EE-6638FB270C06@rtty.us, Bob Camp writes:

Hi

By far the most common approach to optimizing these is the "measure
it and see" approach.

  1. measure the noise out of the GPS ( must be done no matter what)
  2. measure the noise of the specific OCXO (again must be done)
  3. guess at a cross over
  4. try it and measure the result.
  5. step and repeat 3 and 4 until exhaustion sets in

Indeed.

What I've done is to automate that, using the zero-crossing
frequency of the residual as input.

If you simply try a dynamic tune approach, you never really get
to an optimum point.

For the stuff I did, hitting the optimum point exactly from the
beginning, was not nearly as important as getting close to the
optimum point when circumstances changed.

But with that being said:  Even in the "ideal scientific" setting,
I think my approach is not only valid, I think it is one of the
most efficient ones, because you don't need a 3rd reference to
measure against.

If you have a 3rd (& better) reference, by all means use it, but
if all you have is a GPSDO, my method delivers better results than
I have seen from anything else.

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

TK
Tom Knox
Sun, Sep 16, 2012 6:58 PM

Great dialog. One thing I have seen is that the Allan intercept almost always has a "knee". If you wanted the best possible GPS quartz reference, developing a variable Allan intercept would allow this knee to be moved and then mathematically removed during a gated measurement,
allowing one to effectively see behind the knee and offering lower uncertainty in this important area.

Thomas Knox

Date: Sun, 16 Sep 2012 19:20:24 +0200
From: magnus@rubidium.dyndns.org
To: time-nuts@febo.com
Subject: Re: [time-nuts] GPSDO control loops and correcting quantizationerror

On 09/16/2012 05:47 PM, Poul-Henning Kamp wrote:

In message <5C52FBDBA5084AD4A36300FBA73BEF5E@pc52>, "Tom Van Baak" writes:

Yes, timing accuracy has been my main focus and in general I have been
using integration times on the low side of 10000 seconds for that,
but it depends a lot on the OCXO/Rb and environment.

The PLL in NTPns is a (by now) old attempt to make a self-tuning PLL
for optimal time stability, and it does a surprisingly good job at it.

Are there papers that talk about how to optimize for best timing or best
frequency or (no free lunch) some compromise combination of the two?

The only writings I am aware of, is what Dave Mills has written and
the PLL code in NTPns, but I haven't followed this closely in the last
10 years, so do check for newer writings.

Dave Mills coined the term "allan intercept" as the cross over of
the two sources' Allan variances and it's a good google search for
his relevant papers.

I'm not entirely sure his rule of thumb for regulating to that point
is mathematically sound & precise, but the concept itself is certainly
valid, even if you have to compensate for the timeconstant of the
PLL you use to regulate to that point.

Well, what is being used is the phase-noise intercept. Conceptually a
similar intercept point will be available in Allan variance. However, as
you shift between noise variants, the Allan (and Modified Allan)
variance has different scaling factors relative to the underlying phase noise
amplitudes. The danger of using the Allan variance variant is that you
get a bias in position compared to the phase-noise plots' cross-overs.
However, the concept is essentially the same, and the relative slopes are
the same. You get in the right neighbourhood, though.

The concept has been in use in the phasenoise world of things, so you
would need to search the phase-noise articles to find the real source.
It's been used to generate stable high-frequency signals.

The analysis of PLL based splicing of ADEV curves is tricky, and I have
not seen any good comprehensive analysis even if the general concept is
roughly understood. The equivalent on phase-noise is however well
understood and leaves no magic to it.

I spent a lot of time with the code in NTPns, to try to get that PLL
to converge on the optimum, and while generally good, it's not perfect.

The basic problem is that the data you have available for autotuning,
is the allan variance between your input and your steered source.

You need to treat the data as loose and tight PLL measure, depending on
what you look for. There is loads of calibration issues, covered in
literature.

If you also have the allan variance between the steered source and
a 3rd, better, source, the task is pretty trivial:  Minimize the
area below that curve.

But if you do that on the curve you have, you don't optimize, you
pessimize, since the lowest area, is with a timeconstant of zero.

Going the other direction and maximizing the area is no good either
and trying to balance the area around some pivot related to the
present PLL timeconstant does not converge in my experience.

What I did instead was to (badly) reinvent Shewhart's ideas for testing
if the phase residual is under "statistical process control":

I increase the timeconstant if the phase residual has too frequent
zero-crossings and loosen it if they happen too seldom.

Having read a lot more about statistical process control, since I
built those NTP servers for the Air Traffic Control 10 years ago,
I would leverage more of the theory and heuristics developed in
process control. (3sigma violations, length of monotonic direction
etc. etc.)

It's a complex field, and things like temperature dependencies help to
confuse you.

Cheers,
Magnus



BC
Bob Camp
Sun, Sep 16, 2012 7:05 PM

Hi

The basic assumption is that this is a lab gizmo and that there is indeed a static ADEV (or very low frequency phase noise) plot for the OCXO (or Rb).  The other assumption is that this plot is quite good (say decreasing or flat to >10,000 seconds).

IF that's all true, then the "running condition" is the PLL loop frequency / time constant / crossover that does not degrade that ADEV (or phase noise) plot with noise from the GPS.

Bob

On Sep 16, 2012, at 1:56 PM, Poul-Henning Kamp phk@phk.freebsd.dk wrote:

In message ACD158CA-D76C-4A8B-B77D-4FA691D0B50B@rtty.us, Bob Camp writes:

The purpose of my examples was to keep things simple and look at
the "running condition" of the loop rather than its performance
while it settles down.

But what is "running condition" ?

I see my PLL adjust to thermal conditions during summer (A/C) and
winter (heating) and even to GPS constellation changes...

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.



BC
Bob Camp
Sun, Sep 16, 2012 7:09 PM

Hi

The "knee" is a basic artifact of the cross over in the noise of one (say the OCXO) to the noise of the other (say the GPS). It's one of those things you can reduce, but never eliminate completely.  The noise of the combined pair will always be slightly worse than the best of the two when they are in the cross over region.
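A rough illustration of why the knee can never be eliminated: if the two noise contributions are independent, they add in quadrature, so near the crossover the combined ADEV sits up to a factor of sqrt(2) above either curve. This RSS toy model is my assumption for illustration; a real locked loop weights each contribution with its transfer function:

```python
import math

def combined_adev(adev_a, adev_b):
    """Root-sum-square of two independent noise contributions at the
    same tau -- a crude model of the "knee" where two sources cross.

    Far from the crossover one term dominates and the result is
    essentially the larger of the two; at the crossover (a == b) it
    is sqrt(2) ~ 1.41x worse than either source alone.
    """
    return math.sqrt(adev_a ** 2 + adev_b ** 2)
```

Narrowing the crossover region (a sharper loop rolloff) shrinks the span of taus where both terms matter, but at the crossover point itself the penalty remains.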

Bob

On Sep 16, 2012, at 2:58 PM, Tom Knox actast@hotmail.com wrote:

Great dialog. One thing I have seen is that the Allan intercept almost always has a "knee". If you wanted the best possible GPS quartz reference, developing a variable Allan intercept would allow this knee to be moved and then mathematically removed during a gated measurement,
allowing one to effectively see behind the knee and offering lower uncertainty in this important area.

Thomas Knox

Date: Sun, 16 Sep 2012 19:20:24 +0200
From: magnus@rubidium.dyndns.org
To: time-nuts@febo.com
Subject: Re: [time-nuts] GPSDO control loops and correcting quantizationerror

On 09/16/2012 05:47 PM, Poul-Henning Kamp wrote:

In message <5C52FBDBA5084AD4A36300FBA73BEF5E@pc52>, "Tom Van Baak" writes:

Yes, timing accuracy has been my main focus and in general I have been
using integration times on the low side of 10000 seconds for that,
but it depends a lot on the OCXO/Rb and environment.

The PLL in NTPns is a (by now) old attempt to make a self-tuning PLL
for optimal time stability, and it does a surprisingly good job at it.

Are there papers that talk about how to optimize for best timing or best
frequency or (no free lunch) some compromise combination of the two?

The only writings I am aware of are what Dave Mills has written and
the PLL code in NTPns, but I haven't followed this closely in the last
10 years, so do check for newer writings.

Dave Mills coined the term "Allan intercept" for the crossover of
the two sources' Allan variances, and it's a good Google search for
his relevant papers.

I'm not entirely sure his rule of thumb for regulating to that point
is mathematically sound & precise, but the concept itself is certainly
valid, even if you have to compensate for the time constant of the
PLL you use to regulate to that point.
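For what it's worth, finding that crossover from measured phase data can be sketched roughly like this (synthetic noise with made-up levels; adev() is a standard overlapping Allan-deviation estimator, not code from NTPns):

```python
import numpy as np

def adev(phase, tau0, m):
    """Overlapping Allan deviation at tau = m*tau0 from phase samples (seconds)."""
    x = np.asarray(phase)
    d = x[2*m:] - 2.0*x[m:-m] + x[:-2*m]        # second differences at lag m
    return np.sqrt(np.mean(d**2) / (2.0 * (m * tau0)**2))

# Synthetic phase records against a common reference (assumed levels):
# white PM (GPS-like, ADEV ~ 1/tau) vs white FM (OCXO-like, ~ 1/sqrt(tau)).
rng = np.random.default_rng(0)
n, tau0 = 20000, 1.0
x_gps = 1e-9 * rng.normal(size=n)
x_ocxo = np.cumsum(1e-10 * rng.normal(size=n))

# The "Allan intercept": first tau where the GPS curve drops below the OCXO's.
taus = [2**k for k in range(12)]
intercept = next((m * tau0 for m in taus
                  if adev(x_gps, tau0, m) < adev(x_ocxo, tau0, m)), None)
print("crossover near tau =", intercept, "s")
```

A loop time constant somewhere in that neighbourhood is the usual starting point, subject to the caveats discussed in this thread.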

Well, what is being used is the phase-noise intercept. Conceptually a
similar intercept point is available in Allan variance. However, as
you shift between noise variants, the Allan (and Modified Allan)
variance has different scaling factors relative to the underlying
phase-noise amplitudes. The danger of using the Allan-variance variant
is that you get a bias in position compared to the phase-noise plots'
cross-overs. However, the concept is essentially the same, and the
relative slopes are the same. You get into the right neighbourhood, though.

The concept has been in use in the phase-noise world of things, so you
would need to search the phase-noise articles to find the original source.
It's been used to generate stable high-frequency signals.

The analysis of PLL-based splicing of ADEV curves is tricky, and I have
not seen any good comprehensive analysis, even if the general concept is
roughly understood. The equivalent in phase noise is, however, well
understood and leaves no magic to it.

I spent a lot of time with the code in NTPns, to try to get that PLL
to converge on the optimum, and while generally good, it's not perfect.

The basic problem is that the data you have available for autotuning
is the Allan variance between your input and your steered source.

You need to treat the data as loose- and tight-PLL measurements, depending
on what you look for. There are loads of calibration issues, covered in
the literature.

If you also have the allan variance between the steered source and
a 3rd, better, source, the task is pretty trivial:  Minimize the
area below that curve.

But if you do that on the curve you have, you don't optimize, you
pessimize, since the lowest area is with a time constant of zero.

Going the other direction and maximizing the area is no good either,
and trying to balance the area around some pivot related to the
present PLL time constant does not converge, in my experience.

What I did instead was to (badly) reinvent Shewhart's ideas for testing
whether the phase residual is under "statistical process control":

I increase the time constant if the phase residual has too frequent
zero-crossings and shorten it if they happen too seldom.

Having read a lot more about statistical process control since I
built those NTP servers for Air Traffic Control 10 years ago,
I would leverage more of the theory and heuristics developed in
process control (3-sigma violations, length of monotonic runs,
etc.).
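A toy reconstruction of that zero-crossing rule (my own sketch, not the NTPns code; the thresholds and bounds are made up):

```python
import numpy as np

def tune_time_constant(residuals, tc, lo=0.35, hi=0.65,
                       tc_min=16.0, tc_max=65536.0):
    """Adjust a loop time constant from the zero-crossing rate of recent
    phase residuals: frequent crossings mean the loop is chasing
    measurement noise (lengthen tc); rare crossings mean it is tracking
    too loosely (shorten tc)."""
    signs = np.sign(np.asarray(residuals))
    signs = signs[signs != 0]
    crossings = np.count_nonzero(signs[1:] != signs[:-1])
    rate = crossings / max(len(signs) - 1, 1)
    if rate > hi:
        return min(tc * 2.0, tc_max)
    if rate < lo:
        return max(tc / 2.0, tc_min)
    return tc

alternating = np.array([1.0, -1.0] * 64)        # crosses zero every sample
print(tune_time_constant(alternating, 1024.0))  # -> 2048.0 (lengthen)
ramp = np.linspace(0.1, 5.0, 128)               # never crosses zero
print(tune_time_constant(ramp, 1024.0))         # -> 512.0 (shorten)
```

Real process-control heuristics (run lengths, 3-sigma rules) would replace the bare crossing-rate test, but the adjust-by-octave skeleton is the same.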

It's a complex field, and things like temperature dependencies help to
confuse you.

Cheers,
Magnus


time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.



PK
Poul-Henning Kamp
Sun, Sep 16, 2012 7:30 PM

In message <BAY162-W9FFF4214C4DE8FD1752F3DF960@phx.gbl>, Tom Knox writes:

Great dialog, One thing I have seen is the Allan intercept almost
always has a "knee". If you wanted the best possible GPS quartz
reference developing a variable Allan intercept would allow this
knee to be moved and then mathematically removed during a gated
measurement.
Allowing to effectively see behind he knee offering lower uncertainty
in this important area.

I did try a spectral approach before I settled on the current approach,
because I foresaw the presence of 12- and 24-hour periodicities, but
while good on paper and post-factum, I never managed to get it to
auto-estimate reliably in real time.

If you can find the paper about the algorithm timing.com was founded
on, you will find much interesting fodder therein, but my
reimplementation of their algorithm only worked for Rb's; I could
never get it to do anything usable for OCXOs.

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

PK
Poul-Henning Kamp
Sun, Sep 16, 2012 7:39 PM

In message <2DEA9396-95EB-4092-A443-A72350CC1D19@rtty.us>, Bob Camp writes:

The basic assumption is that this is a lab gizmo and that there
is indeed a static adev (or very low frequency phase noise) plot
for the OCXO (or Rb).

Bob,

I think this is where the premier league differs from the
amateur leagues in the time-nuts competition :-)

I suspect that the majority of GPSDOs on this mailing list do
not have access to an independent frequency standard good enough to
make that measurement, much less a temperature-controlled environment.

Yes, in a lab environment, you can measure and adjust it once and
for all, or at least once for every few years.

The rest of us may find it easier to have a PLL that auto-optimizes
so that we don't have to waste our limited time-nut time on
recalibrating our house-standard.

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

BC
Bob Camp
Sun, Sep 16, 2012 7:51 PM

Hi

…but endless testing for minimal return is what being a Time Nut is all about ….

Bob

On Sep 16, 2012, at 3:39 PM, "Poul-Henning Kamp" <phk@phk.freebsd.dk> wrote:

In message <2DEA9396-95EB-4092-A443-A72350CC1D19@rtty.us>, Bob Camp writes:

The basic assumption is that this is a lab gizmo and that there
is indeed a static adev (or very low frequency phase noise) plot
for the OCXO (or Rb).

Bob,

I think this is where the premier league differs from the
amateur leagues in the time-nuts competition :-)

I suspect that the majority of GPSDOs on this mailing list do
not have access to an independent frequency standard good enough to
make that measurement, much less a temperature-controlled environment.

Yes, in a lab environment, you can measure and adjust it once and
for all, or at least once for every few years.

The rest of us may find it easier to have a PLL that auto-optimizes
so that we don't have to waste our limited time-nut time on
recalibrating our house-standard.

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.



DF
Dennis Ferguson
Sun, Sep 16, 2012 8:12 PM

On 16 Sep, 2012, at 00:40 , Tom Van Baak wrote:

I worry in your example about the long cross-over time. This may be ideal for frequency stability, but probably is not good for time accuracy. If one is using the GPSDO as a timing reference, I would think a shorter time constant will keep the rms time error down. Has anyone on the list done work optimizing the timing accuracy rather than the frequency stability?

I'm not sure there could be a difference between the goals of
frequency accuracy and time accuracy that would affect that time
constant. The time error is the time integral of the frequency
error, so anything which manages to minimize the frequency error
of the oscillator (both the magnitude of the error and its
duration) will also minimize the time error.  The time constant
is selected to be the minimum value which makes it probable that
the frequency or time error you have measured (for a PLL the data
are time errors) is in fact an error that the oscillator has
made rather than an artifact of the noise in the measurement
system.

There might be a difference in the best control action to take to
optimally achieve each of those goals.  In particular if your goal
is frequency accuracy the best control action in response to the
measurement of a frequency error might be to correct that error,
i.e. to minimize the frequency error once you know you have one.
If your goal is time accuracy, however, then the response to a
measured frequency error is going to be to intentionally make a
frequency error in the other direction for a while to correct the
accumulated time error.  In this case, though, it seems to me
that by selecting a PLL as the control discipline (rather than, say,
a FLL) you've already made the decision to take control actions
which ensure time accuracy.
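The integral relationship is worth making concrete (the numbers below are arbitrary, for illustration only):

```python
# Time error is the integral of fractional frequency error.  Suppose a
# 1e-9 frequency error goes uncorrected for 100 s:
freq_error = 1e-9
time_error = freq_error * 100.0
print(f"{time_error * 1e9:.0f} ns accumulated")    # -> 100 ns

# An FLL-style correction zeroes the frequency error but freezes that
# 100 ns offset.  A PLL-style correction deliberately runs -1e-9 for
# another 100 s to walk the time error back out:
time_error += -1e-9 * 100.0
print(f"{time_error * 1e9:.0f} ns remaining")      # -> 0 ns
```

That deliberate opposite-sign frequency error is exactly the "intentional error" described above; a pure frequency corrector never removes the frozen offset.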

Dennis Ferguson

JL
Jim Lux
Sun, Sep 16, 2012 8:28 PM

On 9/16/12 10:20 AM, Magnus Danielson wrote:

On 09/16/2012 05:47 PM, Poul-Henning Kamp wrote:

Dave Mills coined the term "Allan intercept" for the crossover of
the two sources' Allan variances, and it's a good Google search for
his relevant papers.

I'm not entirely sure his rule of thumb for regulating to that point
is mathematically sound & precise, but the concept itself is certainly
valid, even if you have to compensate for the time constant of the
PLL you use to regulate to that point.

Well, what is being used is the phase-noise intercept. Conceptually a
similar intercept point is available in Allan variance. However, as
you shift between noise variants, the Allan (and Modified Allan)
variance has different scaling factors relative to the underlying
phase-noise amplitudes. The danger of using the Allan-variance variant
is that you get a bias in position compared to the phase-noise plots'
cross-overs. However, the concept is essentially the same, and the
relative slopes are the same. You get into the right neighbourhood, though.

The concept has been in use in the phase-noise world of things, so you
would need to search the phase-noise articles to find the original source.
It's been used to generate stable high-frequency signals.

The analysis of PLL-based splicing of ADEV curves is tricky, and I have
not seen any good comprehensive analysis, even if the general concept is
roughly understood. The equivalent in phase noise is, however, well
understood and leaves no magic to it.

I'm not sure that the theory of phase noise intercepts, in practical
systems, is actually used.  It seems that everyone I've talked to uses
the theory to "get in the ballpark" and then does simulations at the
design review, and ultimately, builds it and tests, and then tweaks the
implementation to optimize (especially if the loop closure is
implemented digitally in software/FPGA)

When talking real high performance, there are so many confounding error
factors that it's not like you can build what theory says and hit the
mark.  The actual noise distributions follow the Leeson model in
general, but have lumps and bumps, and there's always narrow band
oddities (power supply filtering, noise from switching power converters,
etc.)

Let's face it, real high performance source design has a lot of art and
craft in it.  You can't get to that point without sound engineering, but
that last order of magnitude is all about suck it and see.

I spent a lot of time with the code in NTPns, to try to get that PLL
to converge on the optimum, and while generally good, it's not perfect.

The basic problem is that the data you have available for autotuning
is the Allan variance between your input and your steered source.

It's a complex field, and things like temperature dependencies help to
confuse you.

Ain't that the truth..

And then, there's proving that what you built is actually doing what you
claim.  State of the art sources require beyond state of the art
verification methods...

It's easy to write a spec for, say, incremental Allan Dev of 1E-16 at
some tau.  A bit harder to test at a constant frequency.  Now throw in a
varying frequency (say, because of temperature variation or Doppler)..

PK
Poul-Henning Kamp
Sun, Sep 16, 2012 8:30 PM

In message <34D5C3CE-6B3D-4944-996A-7637373B2857@gmail.com>, Dennis Ferguson writes:

I'm not sure there could be a difference between the goals of
frequency accuracy and time accuracy that would effect that time
constant.

It does.

A PLL more or less corresponds to a "PI" regulator, where an FLL
only needs the "I" term.

Because you don't have the interaction between the P and I terms,
the I time constant can be longer.
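A minimal discrete-time sketch of that distinction (my own toy model with made-up gains, not the NTPns implementation):

```python
# A PLL is a PI regulator acting on *phase* error; an FLL is an I-only
# regulator acting on *frequency* error.

def run_pll(offset, kp=0.2, ki=0.02, steps=400):
    """PI loop: phase error drives a proportional + integral correction."""
    phase, integ, corr = 0.0, 0.0, 0.0
    for _ in range(steps):
        phase += offset - corr          # phase integrates the frequency error
        integ += ki * phase             # I term
        corr = kp * phase + integ       # P + I -> frequency correction
    return phase, corr

def run_fll(offset, ki=0.2, steps=400):
    """I-only loop: frequency error drives an integral correction."""
    corr = 0.0
    for _ in range(steps):
        corr += ki * (offset - corr)
    return corr

offset = 1e-9                           # fixed fractional frequency offset
phase, pll_corr = run_pll(offset)
fll_corr = run_fll(offset)
print(abs(phase) < 1e-15, abs(pll_corr - offset) < 1e-15)   # True True
print(abs(fll_corr - offset) < 1e-15)                       # True
```

Both loops converge on the 1e-9 offset, but the PLL also drives the accumulated phase error to zero, while the FLL never looks at phase at all; with only the I term there is no P/I interaction to destabilize a long time constant.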

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

MD
Magnus Danielson
Sun, Sep 16, 2012 9:12 PM

On 09/16/2012 10:28 PM, Jim Lux wrote:

On 9/16/12 10:20 AM, Magnus Danielson wrote:

On 09/16/2012 05:47 PM, Poul-Henning Kamp wrote:

Dave Mills coined the term "Allan intercept" for the crossover of
the two sources' Allan variances, and it's a good Google search for
his relevant papers.

I'm not entirely sure his rule of thumb for regulating to that point
is mathematically sound & precise, but the concept itself is certainly
valid, even if you have to compensate for the time constant of the
PLL you use to regulate to that point.

Well, what is being used is the phase-noise intercept. Conceptually a
similar intercept point is available in Allan variance. However, as
you shift between noise variants, the Allan (and Modified Allan)
variance has different scaling factors relative to the underlying
phase-noise amplitudes. The danger of using the Allan-variance variant
is that you get a bias in position compared to the phase-noise plots'
cross-overs. However, the concept is essentially the same, and the
relative slopes are the same. You get into the right neighbourhood, though.

The concept has been in use in the phase-noise world of things, so you
would need to search the phase-noise articles to find the original source.
It's been used to generate stable high-frequency signals.

The analysis of PLL-based splicing of ADEV curves is tricky, and I have
not seen any good comprehensive analysis, even if the general concept is
roughly understood. The equivalent in phase noise is, however, well
understood and leaves no magic to it.

I'm not sure that the theory of phase noise intercepts, in practical
systems, is actually used. It seems that everyone I've talked to uses
the theory to "get in the ballpark" and then does simulations at the
design review, and ultimately, builds it and tests, and then tweaks the
implementation to optimize (especially if the loop closure is
implemented digitally in software/FPGA)

When talking real high performance, there are so many confounding error
factors that it's not like you can build what theory says and hit the
mark. The actual noise distributions follow the Leeson model in
general, but have lumps and bumps, and there's always narrow band
oddities (power supply filtering, noise from switching power converters,
etc.)

Let's face it, real high performance source design has a lot of art and
craft in it. You can't get to that point without sound engineering, but
that last order of magnitude is all about suck it and see.

I agree, but my point was that the "Allan intercept" might be an attempt
at the "phase-noise intercept", which is better understood. Then again,
as always, there are other things to consider.

Looking single-mindedly at Allan-deviation or phase-noise plots will
make you lose other details, like systematic features and their
tracking, the systematic errors of the loop, the hold-over properties of
the loop, the track-in properties, etc.

I am also amazed when comparing the resolution to the ADEV noise. They
have different properties as you change tau, and if you want it to work
very well, lowering the added noise should be important, no? Only in
economically "balanced" designs would roughly equal noises be used.

I spent a lot of time with the code in NTPns, to try to get that PLL
to converge on the optimum, and while generally good, it's not perfect.

The basic problem is that the data you have available for autotuning
is the Allan variance between your input and your steered source.

It's a complex field, and things like temperature dependencies help to
confuse you.

Ain't that the truth..

And then, there's proving that what you built is actually doing what you
claim. State of the art sources require beyond state of the art
verification methods...

True that.

It's easy to write a spec for, say, incremental Allan Dev of 1E-16 at
some tau. A bit harder to test at a constant frequency. Now throw in a
varying frequency (say, because of temperature variation or Doppler)..

... or varying phase...

It seems like much effort goes into the noise aspect, but not enough on
the systematics... and how those interact for varying degrees of tau.

Cheers,
Magnus

MD
Magnus Danielson
Sun, Sep 16, 2012 9:21 PM

On 09/16/2012 10:30 PM, Poul-Henning Kamp wrote:

In message <34D5C3CE-6B3D-4944-996A-7637373B2857@gmail.com>, Dennis Ferguson writes:

I'm not sure there could be a difference between the goals of
frequency accuracy and time accuracy that would affect that time
constant.

It does.

A PLL more or less corresponds to a "PI" regulator, where an FLL
only needs the "I" term.

Because you don't have the interaction between the P and I terms,
the I time constant can be longer.

The balance between P and I is important to establish the damping of
your PI regulator.

A good PI-based PLL actually combines the FLL and PLL domains by
differentiating the phase-detector output and feeding it, through a
scaling factor, into the integrator, adding to the phase-detector
output scaled by the I factor. That way you get very good pull-in
properties, which then gently give way to PLL properties. When PLL lock
is achieved the FLL scale factor can be removed, as it only contributes
noise.

A strict FLL would have the differentiated phase scaled and added into
the frequency steering, after the PI regulator's integrator. This D term
would set the frequency right, but the integrator would not learn the
frequency as quickly, and there would be tracking errors until it does.

This differentiated-phase-aided integrator solves the bad pull-in
behaviour for the case where the reference signal and the oscillator's
signal are far apart.
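The combined topology described above can be sketched in a few lines of Python (a minimal illustration, not any poster's actual implementation; the class name and the gains kp, ki, kf are hypothetical):

```python
class AidedPiPll:
    """PI PLL whose integrator is also fed by the differentiated
    phase error (a crude frequency-error estimate), per the
    topology described above.  Gains are illustrative, not tuned."""

    def __init__(self, kp, ki, kf):
        self.kp, self.ki, self.kf = kp, ki, kf
        self.integ = 0.0
        self.prev_phase = None

    def update(self, phase_err, dt):
        # Differentiated phase ~ frequency error (the FLL-aiding term)
        if self.prev_phase is None:
            freq_err = 0.0
        else:
            freq_err = (phase_err - self.prev_phase) / dt
        self.prev_phase = phase_err
        # The integrator learns frequency both from the I term and
        # from the scaled frequency estimate; kf can be set to zero
        # once phase lock is achieved, since it then only adds noise.
        self.integ += (self.ki * phase_err + self.kf * freq_err) * dt
        return self.kp * phase_err + self.integ
```

With a constant phase error the integrator keeps accumulating, so successive steering outputs grow until the loop pulls the phase in.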

Cheers,
Magnus

DF
Dennis Ferguson
Sun, Sep 16, 2012 9:46 PM

On 16 Sep, 2012, at 16:30 , Poul-Henning Kamp wrote:

In message <34D5C3CE-6B3D-4944-996A-7637373B2857@gmail.com>, Dennis Ferguson writes:

I'm not sure there could be a difference between the goals of
frequency accuracy and time accuracy that would affect that time
constant.

Note that the "that time constant" referred to here, the topic of
the message I was responding to, was explicitly a PLL time constant.
If you have decided to use a PLL as your control discipline I think
you end up with the same time constant whether your goal is accurate
frequency or accurate time since, with a PLL, these end up being
the same problem.

It does.

A PLL more or less corresponds to a "PI" regulator, where an FLL
only needs the "I" term.

Because you don't have the interaction between the P and I terms,
the I time constant can be longer.

This sounds right.  As I said, if you pick a control discipline other
than a PLL, as might be advantageous to do if your concern is solely
with accurate frequency, then the optimum might be different.  If you
are using a PLL in both cases, however, then the problems are
essentially the same.

Dennis Ferguson

PK
Poul-Henning Kamp
Sun, Sep 16, 2012 9:47 PM

In message <505642F5.1000101@rubidium.dyndns.org>, Magnus Danielson writes:

On 09/16/2012 10:30 PM, Poul-Henning Kamp wrote:

A good PI-based PLL actually combines the FLL and PLL domains [...]

But it is the phase correction that doubles the (absolute) magnitude
of the frequency noise, by "overcompensating" frequency errors in
order to catch up with the integrated phase error they have caused.

Optimal frequency stability will always be at the expense of
phase offset.

I believe this is the main rationale behind the EAL timescale.

A strict FLL would have the differentiated [...]

Uhm, sorry, that sounds like rubbish to me.

An FLL corrects by the average change of phase per unit of time,
and that's that:

If your phase changes by one microsecond in 1000 seconds, you tweak
the frequency 1e-9 in the appropriate direction.

There is no (D)ifferential and no (P)roportional term in an FLL,
only the (I)ntegral term used to calculate that average.
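Poul-Henning's example can be written out directly (a trivial sketch; the function name is hypothetical):

```python
def fll_correction(phase_change_s, interval_s):
    """Fractional frequency error as the average phase change per
    second over the measurement interval."""
    return phase_change_s / interval_s

# One microsecond of phase drift over 1000 seconds is a fractional
# frequency error of 1e-9; steer the oscillator by its negative.
ffe = fll_correction(1e-6, 1000.0)
steer = -ffe
```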

With all that said, when you are doing things in software, you can
have both:  Steer the local osc by FLL to get optimal frequency
(and thus hold-over), and estimate and compensate for the phase
error in software.

I tried that with a PRS10:  I disabled its internal PLL and instead
used the serial port to apply FLL corrections, and let software
deal with the random-walk phase offset.

In theory that should have roughly doubled the hold-over performance,
but in practice I could not demonstrate statistical significance
because of larger effects such as drift.

A second order FLL might be able to solve that, but the swing-in
of that was far longer than the relevant "tune in" spec.

And that is essentially what timing.com's algorithm for clock
discipline does for Cs standards:  steer the individual units to
optimal frequency and keep track of the phase error by other means.

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

PK
Poul-Henning Kamp
Sun, Sep 16, 2012 9:51 PM

In message <AD054298-F656-477F-9FB1-5D48C1B07C31@gmail.com>, Dennis Ferguson writes:

If you
are using a PLL in both cases, however, then the problems are
essentially the same.

Well, not quite:  Depending on the stiffness of your PLL, you can
minimize phase error at the cost of frequency error or frequency
error at the cost of phase error, and either is a valid engineering
decision depending which of the two are more important to you.

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

BC
Bob Camp
Sun, Sep 16, 2012 10:29 PM

Hi

In some cases, the difference can be your definition of "time accuracy". If short-term GPS time is what you are worried about, then indeed that's a different beast from a 30-day plot against your direct line to USNO.

Bob

On Sep 16, 2012, at 4:12 PM, Dennis Ferguson dennis.c.ferguson@gmail.com wrote:

On 16 Sep, 2012, at 00:40 , Tom Van Baak wrote:

I worry in your example about the long cross-over time. This may be ideal for frequency stability, but probably is not good for time accuracy. If one is using the GPSDO as a timing reference, I would think a shorter time constant will keep the rms time error down. Has anyone on the list done work optimizing the timing accuracy rather than the frequency stability?

I'm not sure there could be a difference between the goals of
frequency accuracy and time accuracy that would affect that time
constant. The time error is the time integral of the frequency
error, so anything which manages to minimize the frequency error
of the oscillator (both the magnitude of the error and its
duration) will also minimize the time error.  The time constant
is selected to be the minimum value which makes it probable that
the frequency or time error you have measured (for a PLL the data
are time errors) is in fact an error that the oscillator has
made rather than an artifact of the noise in the measurement
system.
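The integral relationship Dennis describes is easy to put in numbers (an illustrative sketch; the names are hypothetical):

```python
def time_error(freq_errors, dt):
    """Time error is the integral (here: a simple sum) of
    fractional frequency error over time."""
    return sum(freq_errors) * dt

# A 1e-9 fractional frequency error held for 1000 one-second
# intervals accumulates roughly one microsecond of time error.
accumulated = time_error([1e-9] * 1000, 1.0)
```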

There might be a difference in the best control action to take to
optimally achieve each of those goals.  In particular if your goal
is frequency accuracy the best control action in response to the
measurement of a frequency error might be to correct that error,
i.e. to minimize the frequency error once you know you have one.
If your goal is time accuracy, however, then the response to a
measured frequency error is going to be to intentionally make a
frequency error in the other direction for a while to correct the
accumulated time error.  In this case, though, it seems to me
that by selecting a PLL as the control discipline (rather than, say,
a FLL) you've already made the decision to take control actions
which ensure time accuracy.

Dennis Ferguson


time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.

MD
Magnus Danielson
Sun, Sep 16, 2012 10:58 PM

On 09/16/2012 11:47 PM, Poul-Henning Kamp wrote:

In message <505642F5.1000101@rubidium.dyndns.org>, Magnus Danielson writes:

On 09/16/2012 10:30 PM, Poul-Henning Kamp wrote:

A good PI-based PLL actually combines the FLL and PLL domains [...]

But it is the phase correction that doubles the (absolute) magnitude
of the frequency noise, by "overcompensating" frequency errors in
order to catch up with the integrated phase error they have caused.

Optimal frequency stability will always be at the expense of
phase offset.

I belive this is the main rationale behind the EAL timescale.

EAL is a paper scale and not a locked loop thing. You get different
properties as well as different abilities.

A strict FLL would have the differentiated [...]

Uhm, sorry, that sounds like rubbish to me.

I think you misunderstood my wording in that case.

A FLL corrects by the average of the change of phase per unit of
time, and that's that:

The FLL uses a frequency detector rather than a phase detector; if you
have a phase-detector response, you can get your frequency error by
differentiation. Does that make you disagree wildly?

If your phase changes by one microsecond in 1000 seconds, you tweak
the frequency 1e-9 in the appropriate direction.

There is no (D)ifferential and no (P)roportional term in an FLL,
only the (I)ntegral term used to calculate that average.

Strange, as I have seen many FLLs with properties like this in both
books and papers.

It is not uncommon in GPS receivers to produce both a phase and a
frequency detection, and then use a combined FLL/PLL topology with PI
properties for tracking, letting software trim the various gains.

Chapters 5.3, 5.4 and 5.5 of Elliott Kaplan's "Understanding GPS:
Principles and Applications" (second edition; I can look up the
chapters for the first edition if needed) illustrate what I mean.

You can naturally do FLLs of first degree (proportional scaling),
second degree (PI or smoothing filter), or higher.

With all that said, when you are doing things in software, you can
have both:  Steer the local osc by FLL to get optimal frequency
(and thus hold-over), and estimate and compensate for the phase
error in software.

Many users of the (in)famous 4046 have been using both since its
inception, and the concept was not new at the time. Not saying it is
anywhere close to optimum performance, but PLL and FLL in analogue
hardware have been done with slide-rule levels of design.

I tried that with a PRS10:  I disabled its internal PLL and instead
used the serial port to apply FLL corrections, and let software
deal with the random-walk phase offset.

In theory that should have roughly doubled the hold-over performance
but in practice I could not get a statistical significance due
to more significant effects such as drift.

A second order FLL might be able to solve that, but the swing-in
of that was far longer than the relevant "tune in" spec.

And that is essentially what timing.com's algorithm for clock
discipline does for Cs's:  Steer the individual Cs' to optimal
frequency and keep track of the phase error by other means.

There are many ways to steer things. BIPM does EAL and then TAI for
many reasons, among them that many clocks are just stable but not very
accurate. Only a handful of clocks contribute to the phase of TAI, but
around 350 contribute to the stability of EAL.

Cheers,
Magnus

MD
Magnus Danielson
Sun, Sep 16, 2012 11:15 PM

On 09/16/2012 11:51 PM, Poul-Henning Kamp wrote:

In message <AD054298-F656-477F-9FB1-5D48C1B07C31@gmail.com>, Dennis Ferguson writes:

If you
are using a PLL in both cases, however, then the problems are
essentially the same.

Well, not quite:  Depending on the stiffness of your PLL, you can
minimize phase error at the cost of frequency error or frequency
error at the cost of phase error, and either is a valid engineering
decision depending which of the two are more important to you.

Sometimes such compromises are the only way to go, but sometimes you
may consider raising your system complexity. One such option is to
increase the PLL degree. There are many tools in the toolbox.

Another example is OCXO oven control. A typical OCXO oven tries to
steer the temperature back quickly. During the little temperature trip
the oscillator will have the wrong frequency, but as the oven settles
again it will be more or less back where it started. Trouble is, you
have often been only above or only below the nominal frequency, so the
integral of that frequency error is a phase shift. Oops. Hope your
application wasn't phase-stability sensitive... I have seen only one
vendor address this issue, complete with graphs showing the phase creep
over several temperature cycles, and yes... a typical oven shifts phase
with a residual error after a full ambient-temperature cycle, since the
errors don't cancel completely.

While this example may not be spot-on to the point Poul-Henning is
making, it is a good illustration that frequency-stability goals and
phase-stability goals aren't necessarily the same.

Going back to the PLL: with a tight PLL you track in errors quickly.
This looks good, since phase errors are tracked in before the
accumulated time error becomes large. On the other hand, to track that
phase error in quickly you need to steer your frequency more widely. A
looser PLL tracks in errors more sluggishly and hence uses smaller
frequency deviations for track-in, but with the downside that frequency
errors remain longer and the time error becomes larger. These are the
systematic reactions to phase and frequency steps and ramps. The degree
of the system will also change these parameters.
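The tight-versus-loose trade-off can be demonstrated with a toy first-order PI loop responding to a phase step (a hedged sketch, not anyone's actual GPSDO loop; the gains are arbitrary illustrative values):

```python
def pll_step_response(kp, ki, steps=200, dt=1.0, phase_step=1.0):
    """Toy discrete PI PLL responding to a phase step.  Returns
    (peak frequency excursion, accumulated |time error|)."""
    phase = phase_step
    integ = 0.0
    peak_freq = 0.0
    abs_time_err = 0.0
    for _ in range(steps):
        steer = kp * phase + integ     # PI steering output
        integ += ki * phase * dt
        phase -= steer * dt            # loop removes phase error
        peak_freq = max(peak_freq, abs(steer))
        abs_time_err += abs(phase) * dt
    return peak_freq, abs_time_err

# "Tight" loop vs. "loose" loop (arbitrary illustrative gains):
tight = pll_step_response(kp=0.5, ki=0.05)
loose = pll_step_response(kp=0.05, ki=0.0005)
```

The tight loop uses a larger frequency excursion to pull the phase in, but accumulates less time error; the loose loop shows the opposite behaviour, which is the systematic trade-off described above.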

It is also important to remember that changes in the reference and
changes within the loop get low-passed and high-passed (respectively)
by the loop bandwidth. A temperature shift on the locked oscillator is
a typical in-loop effect, which gets high-passed.

Then there are the background noise processes to consider, but we have
spent so much time on them already.

Cheers,
Magnus
