time-nuts@lists.febo.com

Discussion of precise time and frequency measurement


Re: The SI second and the ease of realization (was: leap seconds finally being retired?)

Hal Murray
Fri, Nov 25, 2022 6:31 AM

Thanks to Magnus and Attila for a wonderful discussion.

Attila Kinali said:

> I would like to add here, that we already have this problem. If you look at
> the current list of primary standards contributing to TAI https://
> webtai.bipm.org/database/show_psfs.html you see that it's only a few labs.
> And it was just SYRTE, PTB, NIST and INRIM 20 years ago. Also note the huge
> gaps most of the primary standards have. I.e. very few are run once a month,
> much less continuous. And this is a technology that's quite mature and well
> understood.[1]

I'm missing a key step.  If the primary standards are only run once a month,
how can they contribute to TAI?

I'm guessing that they are used to calibrate non-primary standards AND that
the non-primary standards are known to drift slowly relative to how often the
primary standards are run.

Does that mean that back in the early days of primary standards, they were run
for long enough to get good data on the non-primary standards?

Crystals are known to have jumps.  Do boxes based on atomic properties also
jump?

> Yes, this means that any time-nut with a GPS disciplined Rb gets to within
> 1-2 orders of magnitude of an average NMI. And yes, I find this incredible!

+1  :)

> Sure, there is no legal traceability for a time-nuts lab, but who needs that
> anyways?

Is there legal traceability to GPS?  I thought somebody official (in a legal
sense) published the offset between GPS and UTC.

I guess my real question is what does "legal traceability" mean to the US
court system?

In the stock markets there are rules I don't understand involving time.  I
think the typical computers involved get their time via NTP from a GPS box.
These days, they probably use PTP to shave a bit on the error bars.

--
These are my opinions.  I hate spam.

Attila Kinali
Fri, Nov 25, 2022 8:22 AM

Good morning Hal,

On Thu, 24 Nov 2022 22:31:05 -0800
Hal Murray <halmurray@sonic.net> wrote:

> Attila Kinali said:
>
> > I would like to add here, that we already have this problem. If you look at
> > the current list of primary standards contributing to TAI https://
> > webtai.bipm.org/database/show_psfs.html you see that it's only a few labs.
> > And it was just SYRTE, PTB, NIST and INRIM 20 years ago. Also note the huge
> > gaps most of the primary standards have. I.e. very few are run once a month,
> > much less continuous. And this is a technology that's quite mature and well
> > understood.[1]

> I'm missing a key step.  If the primary standards are only run once a month,
> how can they contribute to TAI?
>
> I'm guessing that they are used to calibrate non-primary standards AND that
> the non-primary standards are known to drift slowly relative to how often the
> primary standards are run.

This is exactly how it works.

If you look at the ADEV plot of the iMaser3000 (see attachment) you see
that it goes down to a few parts in 10^16 at about a day. The medium-term
drift (a day to a few weeks) of hydrogen masers is very predictable
and usually follows a linear or low-order polynomial (see [1]). Hence it
is easy to measure and correct for, even when measurements are done
only every few days or weeks. A 5071's medium-term behaviour is a bit more
chaotic, but still quite stable.
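[Editor's aside: for readers who want to reproduce such an ADEV plot from their own phase data, the overlapping Allan deviation is only a few lines. This is a generic sketch; dedicated tools such as Stable32 or allantools additionally handle gaps, drift removal and confidence intervals.]

```python
def adev(x, tau0, m):
    """Overlapping Allan deviation at averaging time m*tau0,
    computed from phase data x (in seconds), sampled every tau0 seconds."""
    tau = m * tau0
    # second differences of phase over the averaging interval
    d = [x[i + 2*m] - 2*x[i + m] + x[i] for i in range(len(x) - 2*m)]
    return (sum(v * v for v in d) / (2.0 * tau * tau * len(d))) ** 0.5

# sanity check: a constant frequency offset is a pure phase ramp,
# whose second differences vanish, so its ADEV is (numerically) ~0
ramp = [1e-12 * i for i in range(100)]
```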

The reason this works is that, for many if not most frequency standards,
once they hit the flicker frequency floor, their behaviour is not dominated
by a noise process but by various drift processes. Hence their behaviour
is correlated in time and thus becomes, to some extent, predictable. But
in our normal way of analysis this gets hidden in the ADEV plot, or rather,
we usually do not really care what the source of random walk frequency is.
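[Editor's aside: to make the "easy to measure and correct for" point concrete, here is a minimal sketch of the bookkeeping: fit a linear drift model to a few sparse calibration points and extrapolate the flywheel's frequency between primary-standard runs. All numbers are invented for illustration, not from any real maser.]

```python
def fit_drift(t, y):
    """Least-squares fit of y ~ y0 + d*t; returns (y0, d)."""
    n = len(t)
    tbar, ybar = sum(t) / n, sum(y) / n
    d = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) \
        / sum((ti - tbar) ** 2 for ti in t)
    return ybar - d * tbar, d

# sparse calibrations of a flywheel against a primary standard:
# fractional frequency offset 2e-13 plus a made-up drift of 1e-16/day
t = [0, 7, 14, 30, 45]                  # measurement epochs in days
y = [2e-13 + 1e-16 * ti for ti in t]
y0, d = fit_drift(t, y)
predicted = y0 + d * 60                 # usable estimate at day 60
```

Because the residual process is drift-like rather than white, the fit stays valid between runs, which is exactly why monthly primary-standard data can steer a continuously running flywheel.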

> Does that mean that back in the early days of primary standards, they were run
> for long-enough to get good data on the non-primary standards?

As far as I am aware, they used a similar technique: flywheel
standards (either crystal oscillators or commercial caesium beam
standards) would run continuously to fill the gaps when the
primary standard wasn't running. I do not know what the uptime percentage
for those old standards was. I know that beam standards are less finicky
than fountains, but I do not know how the uptime of the old beam standards
compares to our modern fountains.

> Crystals are known to have jumps.  Do boxes based on atomic properties also
> jump?

Yes, but the crystals that end up in atomic clocks go through stringent
screening to ensure that they either do not exhibit jumps or have a very
low jump rate. Besides, the loop bandwidth of atomic clocks is usually relatively
high, somewhere between 10 Hz and 1 kHz. So these jumps, while they do degrade
the performance, do not have a significant effect on the medium- and long-term
performance.

> > Sure, there is no legal traceability for a time-nuts lab, but who needs that
> > anyways?
>
> Is there legal traceability to GPS?  I thought somebody official (in a legal
> sense) published the offset between GPS and UTC.

In most jurisdictions GPS alone is not considered legally traceable.
Depending on where you live, you also have to have your receiver certified
and regularly calibrated.

> In the stock markets there are rules I don't understand involving time.  I
> think the typical computers involved get their time via NTP from a GPS box.
> These days, they probably use PTP to shave a bit on the error bars.

There it's mostly an issue of time-stamping. High-frequency trading depends
on who gets their request processed first, and that depends on whose
request arrived first. Thus time-stamping is used to ensure a proper ordering.
And because there is a lot of money involved, regulators demand that the
time-stamping devices are all synchronized to UTC with low uncertainty and
traceability.

			Attila Kinali

[1] "Medium-Term Frequency Stability of Hydrogen Masers as Measured
by a Cesium Fountain", by T.E. Parker, S.R. Jefferts and T.P. Heavner, 2010
https://doi.org/10.1109/CCA.1995.555631
https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=906021

Science is made up of so many things that appear obvious
after they are explained. -- Pardot Kynes

Magnus Danielson
Fri, Nov 25, 2022 9:55 PM

Hi Hal,

On 2022-11-25 07:31, Hal Murray via time-nuts wrote:

> Thanks to Magnus and Attila for a wonderful discussion.

Thanks. I saw the opportunity to go into aspects that we rarely touch
on, and there are plenty of aspects in it worth talking about; a
chance to raise the overall knowledge. In this thread,
we now use "primary reference" roughly in accordance with its defined
meaning in VIM, and its practical operation. This alone is different
from the commercialized use of "primary reference" to mean a
cesium clock. Wonderful marketing trick. I think we can do better, and we
do that by learning more about how things actually work. Also, as things
are being redefined, it is worthwhile to consider what goes into a
definition. It turns out to be a fairly complex issue, and there are
many facets that come into such a decision. All we can do is
contribute by doing the exercise of sketching up some of it.
Others will contribute their insights, and in the end a more robust line
of argument will make clear whether the decision is mature enough and wise
enough.

> Attila Kinali said:
>> I would like to add here, that we already have this problem. If you look at
>> the current list of primary standards contributing to TAI https://
>> webtai.bipm.org/database/show_psfs.html you see that it's only a few labs.
>> And it was just SYRTE, PTB, NIST and INRIM 20 years ago. Also note the huge
>> gaps most of the primary standards have. I.e. very few are run once a month,
>> much less continuous. And this is a technology that's quite mature and well
>> understood.[1]
>
> I'm missing a key step.  If the primary standards are only run once a month,
> how can they contribute to TAI?
>
> I'm guessing that they are used to calibrate non-primary standards AND that
> the non-primary standards are known to drift slowly relative to how often the
> primary standards are run.

Yes, more or less that. I wrote a separate reply to Attila that
sketches the EAL-TAI-UTC process.

> Does that mean that back in the early days of primary standards, they were run
> for long-enough to get good data on the non-primary standards?

Recall that what is used as a primary standard is the cutting edge of
its time, and that shifts. The primary standards tend to be hand-built
devices. They are tested against the other clocks and also against TAI/UTC
through the traceability work, and their performance develops that way. If
they show the needed performance and long-term behaviour, clocks can be
considered to contribute as primary standards rather than just as
secondary standards. Even contributing as a secondary
standard to EAL/TAI means evaluation over time to ensure the lab, the
test link and the clock stability.

> Crystals are known to have jumps.  Do boxes based on atomic properties also
> jump?

Well, not if well designed. However, crystal jumps do cause the output to
deviate until tracked in. As you lock a crystal oscillator to an atomic
reference, the phase response of the crystal will be high-pass filtered.
A frequency jump will cause a phase ramp to start from the previous
locked state, but as the integrator updates the correction, the frequency
error hunts in, with a remaining phase shift often being the end result,
looking like a phase step. A higher-degree compensation may track in the
phase step. Do look at Gardner's book.

>> Yes, this means that any time-nut with a GPS disciplined Rb gets to within
>> 1-2 orders of magnitude of an average NMI. And yes, I find this incredible!
>
> +1  :)
>
>> Sure, there is no legal traceability for a time-nuts lab, but who needs that
>> anyways?
>
> Is there legal traceability to GPS?  I thought somebody official (in a legal
> sense) published the offset between GPS and UTC.

You can achieve it, but it requires an "unbroken chain of calibrations".
Just steering a clock to track GPS will not suffice. That will not be
a traceable replica of GPS(MC) or UTC(USNO). Traceability is defined in
the International Vocabulary of Metrology (VIM) as an unbroken chain of
calibrations, and a calibration is then the measurement of a device against a
traceable reference to establish its deviation with known, documented
uncertainty. For further reading I recommend both the VIM and the Guide to the
Expression of Uncertainty in Measurement (GUM). Both VIM and GUM are available
for free download from BIPM.

> I guess my real question is what does "legal traceability" mean to the US
> court system?

There is a paper trail that points out the legal support for
traceability and the realization of national measures in most countries
through their NMI. "Legal traceability" ties to VIM and GUM, points to the
NMI and connects to the international agreements for the SI
system. For the US this points to NIST, so NIST traceability has that
context. For the military, USNO fills the role for time and frequency.
Once you have shown traceability to any lab k, you can use its traceability
record to show the relationship to the international agreements, but also to
another lab l. This is the strength of the traceability system: the
agreed system establishes traceability so that measures are comparable.

> In the stock markets there are rules I don't understand involving time.  I
> think the typical computers involved get their time via NTP from a GPS box.
> These days, they probably use PTP to shave a bit on the error bars.

The requirement for legal traceability is there in SOX for the US and
MiFID II for the EU. The traceability is needed to be able
to review the logs; essentially, the traceable time-stamps become
measures for evidence. I recall MiFID II requires traceability to
within +/- 100 us of UTC.

It's fairly well understood and realized on a regular basis.

Cheers,
Magnus

Richard (Rick) Karlquist
Sat, Nov 26, 2022 2:54 AM

On 11/25/2022 1:55 PM, Magnus Danielson via time-nuts wrote:

> error hunts in and a remaining phase-shift often being the end result,
> looking like a phase-step. A higher degree compensation may track in the
> phase-step. Do look at Gardner book.

Gardner was the book that taught me how to do phase locked loops.

The 5061 had an analog dual integrator using a then $100 op amp,
specifically to address the problem you are describing.  Len
Cutler went to a lot of trouble to get this right.  The 5071
of course also has a dual integrator, but it's just DSP code.
Len was also responsible for the dual integrator in the E1938A
oven control loop, which was amazing.

Rick N6RK

Magnus Danielson
Sat, Nov 26, 2022 3:35 PM

Hi Rick,

On 2022-11-26 03:54, Richard (Rick) Karlquist wrote:

> On 11/25/2022 1:55 PM, Magnus Danielson via time-nuts wrote:
>
>> error hunts in and a remaining phase-shift often being the end
>> result, looking like a phase-step. A higher degree compensation may
>> track in the phase-step. Do look at Gardner book.
>
> Gardner was the book that taught me how to do phase locked loops.

Yes. I think it is extremely good at teaching all the key aspects in
such a way that one can get the needed understanding. I strongly advise
people to get it and read it. Other books provide additional insights
and details that Gardner does not explicitly cover. If I only got to keep
one book on PLLs, Gardner would be my keeper. Best's and Wolaver's books have
their benefits, and I prefer Wolaver over Best. I find
that I dip my nose into all three depending on the particular problem I
want to check, which I rarely do since, once learned, many things
can be very quickly derived directly. Wolaver has an elegance to it, with a
rich set of illustrations of many techniques while providing the necessary
support in math. It has jitter peaking well described, and also how
PLL lock behaves with parallel injection locking. I then have additional
books for more specific contexts, such as clock recovery of signals
and the effect on bit error rates and such. Jitter peaking becomes
extremely important there.

> The 5061 had an analog dual integrator using a then $100 op amp,
> specifically to address the problem you are describing.  Len Cutler
> went to a lot of trouble to get this right.  The 5071 of course also
> has a dual integrator, but it's just DSP code.

Yes, once you have DSP code, doing it becomes more an issue of setting
up the coefficients properly. A DSP integrator can be made effectively
non-lossy, while actual analog integrators have a shelving of their gain
due to both the finite open-loop gain of op-amps and the leakage of the
integrating capacitor circuit (both internal and in surrounding circuits).
So, in the 5061's time, the op-amp needed was for sure expensive. Modern
technology has surely simplified this, but long-term memory is best kept
digital rather than analog, and for those time constants this is where the
DSP side shines over analog, while for much shorter time constants it
should be kept in the analog domain.
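[Editor's aside: the difference can be sketched numerically (the leak value is illustrative only): an ideal digital accumulator holds its state exactly, while finite op-amp gain or capacitor leakage makes an analog integrator behave like a lossy accumulator whose stored correction decays.]

```python
def remaining(leak, steps=1000):
    """Charge an integrator to 1.0, remove the input, and see what is
    left after `steps` updates; `leak` is the fractional loss per step."""
    y = 1.0
    for _ in range(steps):
        y -= leak * y       # leak = 0.0 models the ideal DSP accumulator
    return y

ideal = remaining(0.0)      # holds 1.0 exactly, indefinitely
leaky = remaining(1e-3)     # has decayed toward exp(-1) of its value
```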

The issue at hand is often analyzed in terms of the loop's error
function, that is, how the output of the loop is in error with respect to
the input. For a PLL this ends up being a high-pass function. As one then
exposes the loop to a disruption and looks at the error, depending on the
degree of the loop, it will zero the error, settle to a constant error, or
have an unbounded error for various types of input signals. So the degree
of loop needed to handle a frequency jump of a locked oscillator follows
fully from the fact that a frequency jump forms a second-degree function,
and a third-degree function is needed to combat that and zero it, while a
second-degree function can only keep it within some limit. It's all
classical control theory, which Len surely knew well. One just needs to
learn it and realize its applicability to the problem at hand.
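[Editor's aside: a toy illustration of the loop-degree argument, as a generic discrete-time sketch with invented gains, not any particular instrument's servo: after the oscillator's frequency jumps, a proportional-only loop settles to a standing phase error of f_step/kp, while adding an integrator drives the phase error back to zero.]

```python
def settle(kp, ki, f_step=0.01, n=5000):
    """Discrete-time model of an oscillator locked to a reference.
    The oscillator's free-running frequency jumps by f_step at t=0;
    return the phase error (rad) after the loop settles."""
    phase_err = 0.0     # oscillator phase minus reference phase
    integ = 0.0         # integrator state (stays 0 when ki == 0)
    for _ in range(n):
        integ += ki * phase_err
        correction = kp * phase_err + integ
        phase_err += f_step - correction    # jump keeps pushing the phase
    return phase_err

prop_only = settle(kp=0.1, ki=0.0)     # standing error ~ f_step/kp = 0.1 rad
with_integ = settle(kp=0.1, ki=0.01)   # integrator zeroes the phase error
```

The integrator "remembers" the frequency correction so no steady phase error is needed to hold it, which is exactly the role of the dual integrator discussed above.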

Got a 5060, 5061, 5062 and 5071 among other units in my collection.

> Len was also responsible for the dual integrator in the E1938A oven
> control loop, which was amazing.

Oven control is quite similar control-wise, but many treat it with less
care than they should.

Cheers,
Magnus
