time-nuts@lists.febo.com

Discussion of precise time and frequency measurement


NCOCXO anyone?

Nick Sayer
Thu, Jul 21, 2016 5:05 PM

Would anyone see any value in a board that had an OH300 with a serial interface for tuning?

I had a thought perhaps to make one starting with my GPSDO and just ditching the GPS part and possibly adding an RS-232 level converter.  I could conceivably bring it all out to a DB9 and emulate an FE-5680 (obviously its long-term stability would be poorer without some discipline) or just make my own protocol up.

Sent from my iPhone

Tom Van Baak
Thu, Jul 21, 2016 10:39 PM

Hi Nick,

There may be several threads in the time-nuts archives related to your question. The greater concept is a phase microstepper (aka microphase stepper). Imagine a small board that takes ~10 MHz in and puts ~10 MHz out. Using RS232 (or SPI or I2C) you control the phase, or even the phase over time, which is to say, the frequency offset. Maybe do it with analog (EFC). Maybe do it with digital (DDS).

There are highly-prized commercial instruments that do this. But no amateur has tried yet. You should be the first. If you think about what we do with steering oscillators -- for GPSDO, or for dual-mixers, or for home time scales, or even sidereal or Mars time -- having a device that cleanly steers phase and frequency with simple RS232 would be very useful. For example, it would allow anyone to steer a Rb or Cs standard, even though many of these lab instruments do not have analog EFC or digital tuning options.

The possibility of this at the amateur level occurred to me when I played with Bert's 9913:

http://leapsecond.com/pages/ad9913/

Read especially about the "programmable modulus mode" which can be used to avoid truncation errors and achieve perfect long-term phase; kind of like the difference between PLL and FLL in a GPSDO's 1PPS.
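To make the truncation point concrete, here is a small Python sketch (illustrative numbers: a 250 MHz system clock is assumed, along with the AD9913's 32-bit accumulator) showing how a plain binary tuning word misses 10 MHz slightly, while the programmable-modulus form hits it exactly:

```python
from fractions import Fraction

F_SYSCLK = 250_000_000          # assumed DDS system clock, Hz
F_TARGET = 10_000_000           # desired output, Hz
ACC_BITS = 32                   # AD9913 uses a 32-bit accumulator

# Plain binary tuning word: nearest integer, so the output is slightly off.
ratio = Fraction(F_TARGET, F_SYSCLK)          # 1/25
ftw_binary = round(ratio * 2**ACC_BITS)       # rounded tuning word
f_binary = F_SYSCLK * Fraction(ftw_binary, 2**ACC_BITS)
error_hz = float(f_binary - F_TARGET)         # residual offset, ~9 mHz

# Programmable modulus: integer word plus an exact A/B fractional part.
scaled = ratio * 2**ACC_BITS                  # 2^32 / 25
ftw = scaled.numerator // scaled.denominator  # integer part
frac = scaled - ftw                           # exact remainder
A, B = frac.numerator, frac.denominator       # 21 / 25
f_exact = F_SYSCLK * (ftw + Fraction(A, B)) / 2**ACC_BITS

print(ftw_binary, error_hz)
print(ftw, A, B, f_exact == F_TARGET)         # exact, no truncation error
```

The residual ~9 mHz offset of the binary-only word is exactly the kind of long-term phase ramp the programmable modulus removes.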

Look also at how the amazing FE-405 oscillator works:

http://leapsecond.com/pages/fe405/

And the idea of [mis]using a DDS as a high-resolution phase measurement technique was confirmed with the PicoPak project:

http://www.wriley.com/PicoPak%20App%20Notes%20Links.htm

So, yes, please take the bait and play with all aspects of your NCOCXO idea.

/tvb


Poul-Henning Kamp
Thu, Jul 21, 2016 11:17 PM

In message <4763643485B04450A76F7C04BA8CFB63@pc52>, "Tom Van Baak" writes:

There are highly-prized commercial instruments that do this. But
no amateur has tried yet.

It would be more precise to say that no amateur has been willing to
talk about their results yet.

I personally know several who have tried and failed at various
levels of performance.

My own personal experience, both analog and VHDL, is that there is
a particularly long and noisy way from theory to practice in this
space.

The one thing I have not tried, and the only one I think has any
realistic chance, is to use a DDS chip which has a phase modulation
register.

That should get you to a nanosecond without too much trouble.
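For scale: assuming a DDS with a 14-bit phase-offset register (a common width, but hypothetical here; check the datasheet of whatever part you pick), the smallest phase step at 10 MHz works out far below that nanosecond:

```python
PHASE_BITS = 14                    # assumed register width, not from a specific part
F_OUT = 10e6                       # 10 MHz output
period_s = 1.0 / F_OUT             # one carrier period = 100 ns
step_s = period_s / 2**PHASE_BITS  # smallest commandable phase offset
print(step_s)                      # ~6.1 ps per LSB
```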

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

Nick Sayer
Thu, Jul 21, 2016 11:56 PM

Oh my. That’s a bit more than I was originally considering… What I had in mind was adding a DAC front end to an OCXO so that you could tune the EFC with serial commands rather than analog and calling that a product.

A simple version of what you seem to be describing, however, sounds to me something like this:

The microcontroller has the same phase discriminator system the GPSDO has, except that instead of the reference signal coming from a PPS, it comes from an input reference. The microcontroller can get a phase difference reading between the oscillator output and the reference and in software can tune the oscillator DAC output to arrange for a certain rate-of-change, adjustable via serial commands.

Does that sound about right?

Perhaps a more traditional PLL approach - using the 4046's PC2 output with an RC filter and simply letting the controller sample that - makes some sense (calibrating it may be painful). I’ll have to do some more thinking about it. :)
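A toy version of that loop, with the hardware replaced by a trivial simulation (the EFC sensitivity, loop gains, and update rate are all made-up numbers, not OH300 values), might look like:

```python
# Toy model of the loop described above: a phase-discriminator reading
# steers an EFC DAC so the oscillator tracks a commanded phase ramp
# (i.e. a frequency offset). All numbers are illustrative.

class SimOscillator:
    """Oscillator whose fractional frequency is proportional to the DAC code."""
    def __init__(self, efc_gain=1e-10):   # assumed frac-freq per DAC LSB
        self.efc_gain = efc_gain
        self.dac = 0
        self.phase_s = 0.0                # phase vs. the reference, in seconds
    def tick(self, dt=1.0):
        self.phase_s += self.dac * self.efc_gain * dt

def steer(osc, rate_s_per_s, kp=0.8, ki=0.2, steps=500):
    """PI steering toward a commanded phase rate-of-change (set via serial)."""
    target = 0.0
    integ = 0.0
    for _ in range(steps):
        target += rate_s_per_s            # commanded phase ramp
        err = osc.phase_s - target        # discriminator reading
        integ += err
        osc.dac = int(round(-(kp * err + ki * integ) / osc.efc_gain))
        osc.tick()
    return osc.phase_s - target           # residual tracking error

osc = SimOscillator()
residual = steer(osc, rate_s_per_s=1e-9)  # ask for +1 ns/s of phase slew
print(abs(residual))                      # settles to well under a nanosecond
```

The integral term is what lets the loop hold a steady frequency offset; a pure proportional loop on phase would ring indefinitely here.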


time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.

Bob Camp
Fri, Jul 22, 2016 12:22 AM

Hi

On Jul 21, 2016, at 7:17 PM, Poul-Henning Kamp <phk@phk.freebsd.dk> wrote:


In message <4763643485B04450A76F7C04BA8CFB63@pc52>, "Tom Van Baak" writes:

There are highly-prized commercial instruments that do this. But
no amateur has tried yet.

It would be more precise to say that no amateur has been willing to
talk about their results yet.

I personally know several have tried and failed at various levels
of performance.

My own personal experience, both analog and VHDL, is that there is
a particularly long and noisy way from theory to practice in this
space.

The one thing I have not tried, and the only one I think has any
realistic chances, is to use a DDS chip which has a phase modulation
register.

If you go the DDS route, it really needs a post filter to make it “fly right”. The narrower the
filter, the better it gets. Pretty quickly you are into an ovenized filter.  Is that better or worse
than an ovenized phase modulator? Not at all clear.

Bob


Bob Camp
Fri, Jul 22, 2016 1:13 AM

Hi

Ideally a phase microstepper would have an ADEV floor that is lower than anything you would run through it.
That way the ADEV in would be the same as the ADEV out. Since there are things out there with lower
ADEV than an OCXO, an OCXO is not a good thing to put in the middle of the beast.

Bob
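That floor requirement is easy to check against real phase data. Here is a minimal overlapping Allan deviation routine (the standard textbook estimator, not code from this thread), exercised on simulated white phase noise at a made-up 1 ps RMS level:

```python
import math
import random

def oadev(x, tau0, m=1):
    """Overlapping Allan deviation from phase samples x[i] (seconds), spacing tau0."""
    tau = m * tau0
    # Second differences of the phase record at lag m.
    d = [x[i + 2*m] - 2*x[i + m] + x[i] for i in range(len(x) - 2*m)]
    avar = sum(v * v for v in d) / (2.0 * tau**2 * len(d))
    return math.sqrt(avar)

# A stepper adding 1 ps RMS of white phase noise floors out in the low
# 1e-12s at tau = 1 s -- already marginal next to a good OCXO, which is
# exactly the point about what belongs in the middle of the chain.
random.seed(1)
x = [random.gauss(0.0, 1e-12) for _ in range(10_000)]
print(oadev(x, tau0=1.0))   # ~1.7e-12 at tau = 1 s
```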


Richard (Rick) Karlquist
Fri, Jul 22, 2016 1:47 AM

On 7/21/2016 4:56 PM, Nick Sayer via time-nuts wrote:

Oh my. That’s a bit more than I was originally considering… What I had in mind was adding a DAC front end to an OCXO so that you could tune the EFC with serial commands rather than analog and calling that a product.

20 years ago when HP was working on a so-called "smart clock"
GPS box, they decided to do what you said: use a DAC to
tune the EFC on the E1938A oscillator.  I
recommended to them NOT to try to do that, but they
didn't listen to me.  At that time, it
was nearly impossible to come up with a DAC, buffer
amplifier, and voltage reference that didn't degrade
the stability of the E1938A, which isn't even as stable
as a 10811.  What you need to ask yourself is:  in
2016, can I finally get analog components that are
an order of magnitude or two better than what was
available in 1996?  I don't know, without researching
it.  Certainly, we can't depend on Moore's law coming
to the rescue.  If anything, that works against analog
ICs by obsoleting older analog processes.
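A quick noise budget shows why this is hard (all numbers assumed for illustration, not E1938A or 10811 datasheet values): with a modest EFC tuning range, not degrading a 1e-12 oscillator means holding the entire DAC-plus-reference chain to tens of microvolts, indefinitely:

```python
import math

EFC_SPAN_V = 5.0           # assumed DAC output span
TUNE_RANGE = 2e-7          # assumed fractional-frequency pull over that span
TARGET = 1e-12             # stability floor we'd rather not degrade

gain_per_volt = TUNE_RANGE / EFC_SPAN_V          # 4e-8 per volt
max_efc_noise_v = TARGET / gain_per_volt         # allowed EFC-node error
bits_equivalent = math.log2(EFC_SPAN_V / max_efc_noise_v)

print(max_efc_noise_v)     # 2.5e-05 V: 25 microvolts, including ref drift
print(bits_equivalent)     # ~17.6 effective bits just to reach the floor
```

Note that 17.6 *effective* bits, held over temperature and time, is a much harder spec than a 20-bit DAC on a datasheet.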

Also in 1996, phase microsteppers were already legacy
technology and didn't have a good reputation for spectral
purity.  Another non-panacea.

Rick

David
Fri, Jul 22, 2016 3:22 AM


Increased integration has only helped insofar as you can attempt
designs which would have been prohibitive before.

I keep trying to come up with a charge balancing design but what about
Linear Technology's solution from back in 2001?

A Standards Lab Grade 20-Bit DAC with 0.1ppm/°C Drift
http://www.linear.com/docs/4177

Attila Kinali
Fri, Jul 22, 2016 9:22 AM

Hoi Rick,

On Thu, 21 Jul 2016 18:47:24 -0700
"Richard (Rick) Karlquist" <richard@karlquist.com> wrote:

Also in 1996, phase microsteppers were already legacy
technology and didn't have a good reputation for spectral
purity.  Another non-panacea.

If they are legacy, what is current state of the art?
And is that how your DDS approach came to be?

		Attila Kinali

--
It is upon moral qualities that a society is ultimately founded. All
the prosperity and technological sophistication in the world is of no
use without that foundation.
-- Miss Matheson, The Diamond Age, Neal Stephenson

AK
Attila Kinali
Fri, Jul 22, 2016 9:33 AM

On Thu, 21 Jul 2016 22:22:14 -0500
David davidwhess@gmail.com wrote:

Increased integration has only helped insofar as you can attempt
designs which would have been prohibitive before.

I keep trying to come up with a charge balancing design but what about
Linear Technology's solution from back in 2001?

A Standards Lab Grade 20-Bit DAC with 0.1ppm/°C Drift
http://www.linear.com/docs/4177

You can already get 24-bit DACs off the shelf from TI (DAC1282).
I do not know how stable they are in reality. I looked into high
precision DACs a year or two ago and figured out that once you
cross the 20-bit line, all kinds of weird stuff happens that is
hard or almost impossible to compensate for. The trick of using
an ADC (which are available up to 32 bit) doesn't really work either,
as offset drifts, thermal voltages etc. are hard or impossible to
compensate completely. If you go through the volt-nuts mailing list,
you'll see how much effort it is to even keep 1ppm (~20bit) stability
of a voltage reference... and then to get that performance out of a DAC.

If anything, I think the better approach is to use a high resolution DAC
like the DAC1282 or maybe the DAC1280 with a custom modulator and put
it inside the control loop, such that the real measurement happens in
the frequency/time domain. The results from Sherman & Jördens [1]
seem to indicate that sub-100fs stability should be possible, though
there are a couple of open questions there.

		Attila Kinali

[1] "Oscillator metrology with software defined radio",
by Jeff Sherman and Robert Jördens, 2016
http://dx.doi.org/10.1063/1.4950898
(it's available from NIST as well)

--
It is upon moral qualities that a society is ultimately founded. All
the prosperity and technological sophistication in the world is of no
use without that foundation.
-- Miss Matheson, The Diamond Age, Neil Stephenson

R(
Richard (Rick) Karlquist
Fri, Jul 22, 2016 4:22 PM

On 7/22/2016 2:22 AM, Attila Kinali wrote:

Hoi Rick,

On Thu, 21 Jul 2016 18:47:24 -0700
"Richard (Rick) Karlquist" richard@karlquist.com wrote:

Also in 1996, phase microsteppers were already legacy
technology and didn't have a good reputation for spectral
purity.  Another non-panacea.

If they are legacy, what is current state of the art?
And is that how your DDS approach came to be?

		Attila Kinali

Yes the DDS papers of 1995/1996 were in response to
the limitations of phase microsteppers.  I have been
out of the precision frequency field since 1998,
so I can't speak for the current state of the art.

Rick

SS
Scott Stobbe
Fri, Jul 22, 2016 4:51 PM

Capacitor matching (Moore's law) has improved for switch-cap designs.

It also depends on the tuning gain: 10 ppm/V would be very demanding versus
10 ppb/V.
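To put rough numbers on that (a back-of-the-envelope sketch; the 20-bit DAC and 5 V span are assumed for illustration, not taken from any particular part):

```python
# Fractional-frequency step per DAC LSB for two assumed EFC tuning gains.
DAC_BITS = 20
V_SPAN = 5.0                        # assumed DAC output span in volts
LSB_V = V_SPAN / 2**DAC_BITS        # about 4.8 uV per code step

for gain_per_v in (10e-6, 10e-9):   # 10 ppm/V versus 10 ppb/V EFC gain
    step = gain_per_v * LSB_V       # fractional frequency change per LSB
    print(f"{gain_per_v * 1e6:g} ppm/V -> {step:.1e} per LSB")
```

At 10 ppm/V one LSB is already ~5e-11 in fractional frequency, and DAC and reference noise scale the same way; at 10 ppb/V everything relaxes by three orders of magnitude.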

On Thu, Jul 21, 2016 at 9:47 PM, Richard (Rick) Karlquist <
richard@karlquist.com> wrote:

On 7/21/2016 4:56 PM, Nick Sayer via time-nuts wrote:

Oh my. That’s a bit more than I was originally considering… What I had in
mind was adding a DAC front end to an OCXO so that you could tune the EFC
with serial commands rather than analog and calling that a product.

20 years ago when HP was working on a so called "smart clock"
GPS box, they decided to do what you said, use a DAC to
tune the EFC on the E1938A oscillator.  I
recommended to them NOT to try to do that, but they
didn't listen to me.  At that time, it
was nearly impossible to come up with a DAC, buffer
amplifier, and voltage reference that didn't degrade
the stability of the E1938A, which isn't even as stable
as a 10811.  What you need to ask yourself is:  in
2016, can I finally get analog components that are
an order of magnitude or two better than what was
available in 1996?  I don't know, without researching
it.  Certainly, we can't depend on Moore's law coming
to the rescue.  If anything, that works against analog
IC's by obsoleting older analog processes.

Also in 1996, phase microsteppers were already legacy
technology and didn't have a good reputation for spectral
purity.  Another non-panacea.

Rick


time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to
https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.

D
David
Fri, Jul 22, 2016 5:15 PM

On Fri, 22 Jul 2016 11:33:12 +0200, you wrote:

On Thu, 21 Jul 2016 22:22:14 -0500
David davidwhess@gmail.com wrote:

Increased integration has only helped insofar as you can attempt
designs which would have been prohibitive before.

I keep trying to come up with a charge balancing design but what about
Linear Technology's solution from back in 2001?

A Standards Lab Grade 20-Bit DAC with 0.1ppm/°C Drift
http://www.linear.com/docs/4177

You can already get 24bit DAC's off the shelf from TI (DAC1282).
I do not know how stable they are in reality.

There are a lot of DACs like the TI DAC1282 which are not primarily
intended for DC applications.  At least according to its
specifications, its gain error drift is 100 times greater and its
offset error drift is 150 times greater than those of the LTC2400 ADC
used for discipline in the Linear Technology application note.  The best
DC DACs I could quickly find are only 4 times better than that, so still
more than an order of magnitude below the performance of ADCs.

I looked into high
precision DACs a year or two ago and figured out that once you
cross the 20-bit line, all kinds of weird stuff happens that is
hard or almost impossible to compensate for. The trick of using
an ADC (which are available up to 32 bit) doesn't really work either,
as offset drifts, thermal voltages etc. are hard or impossible to
compensate completely. If you go through the volt-nuts mailing list,
you'll see how much effort it is to even keep 1ppm (~20bit) stability
of a voltage reference... and then to get that performance out of a DAC.

If you expect analog specifications in line with the claimed digital
resolution of ADCs and DACs, you will be disappointed.  20 bits is
about where they top out no matter how many bits are available; the
best you can hope for is that they are monotonic but how meaningful is
that when it is buried in noise?

The LTC2400 is considered suitable for 6 digit designs before software
calibration is used which the application note and datasheet mention.
In this case, it is its repeatable INL which can be corrected for and
its low gain and offset drift which matter.
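The app note's software-calibration idea can be sketched as a slow digital servo (a minimal sketch; `read_adc` and `write_dac` are hypothetical hardware accessors, not the note's actual code):

```python
# A stable ADC reads the DAC output back and trims the code, so the DAC's
# INL and drift are servoed out; only the ADC's own drift remains.
def trim_dac(target_v, code, lsb_v, read_adc, write_dac, gain=0.5):
    """One servo iteration; run at the ADC's (slow) conversion rate."""
    write_dac(code)
    error_v = target_v - read_adc()
    return code + int(round(gain * error_v / lsb_v))
```

With a gain below 1 the loop converges geometrically and settles to within about one LSB, provided the ADC really is quieter and more stable than the DAC it is correcting.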

It is too bad voltage control of an oscillator cannot be made
ratiometric.  Or can it?  I have never heard of such a thing.  That
would remove some of the demands on a low drift reference.

If anything, I think the better approach is to use a high resolution DAC
like the DAC1282 or maybe the DAC1280 with a custom modulator and put
it inside the control loop, such that the real measurement happens in
the frequency/time domain. The results from Sherman & Jördens [1]
seem to indicate that sub-100fs stability should be possible, though
there are a couple of open questions there.

		Attila Kinali

[1] "Oscillator metrology with software defined radio",
by Jeff Sherman and Robert Jördens, 2016
http://dx.doi.org/10.1063/1.4950898
(it's available from NIST as well)

Based on Rick's description of the problem, it seemed like that was
what HP tried and it did not work because the DAC had too much drift
to compensate for in the frequency/time domain:

20 years ago when HP was working on a so called "smart clock"
GPS box, they decided to do what you said, use a DAC to
tune the EFC on the E1938A oscillator.  I
recommended to them NOT to try to do that, but they
didn't listen to me.  At that time, it
was nearly impossible to come up with a DAC, buffer
amplifier, and voltage reference that didn't degrade
the stability of the E1938A, which isn't even as stable
as a 10811.  What you need to ask yourself is:  in
2016, can I finally get analog components that are
an order of magnitude or two better than what was
available in 1996?  I don't know, without researching
it.  Certainly, we can't depend on Moore's law coming
to the rescue.  If anything, that works against analog
IC's by obsoleting older analog processes.

R(
Richard (Rick) Karlquist
Fri, Jul 22, 2016 6:23 PM

On 7/22/2016 10:15 AM, David wrote:

It is too bad voltage control of an oscillator cannot be made
ratiometric.  Or can it?  I have never heard of such a thing.  That
would remove some of the demands on a low drift reference.

That's what we tried to do with the E1938A.  A multiplying DAC
is used, based on a reference that is ovenized inside the
crystal oven.  That certainly eliminated the tempco issue
with the reference, but then we discovered 1/f noise on the
reference.  We had to redesign with a different reference.

Rick

PK
Poul-Henning Kamp
Fri, Jul 22, 2016 7:01 PM

In message <20160722113312.f1f292c42ba086aafd6d46e1@kinali.ch>, Attila Kinali writes:

You can already get 24bit DAC's off the shelf from TI (DAC1282).

Precisely(!) as stable as the voltage reference you feed them.

These are oversampling designs which by definition cannot attenuate
Vref noise.
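That pass-through is easy to see from the ratiometric transfer function (illustrative numbers only):

```python
# For a ratiometric DAC, Vout = code / 2**N * Vref, so a fractional Vref
# error appears one-for-one (scaled by the code) at the output.
N, code = 24, 2**23                  # mid-scale on a 24-bit DAC
vref = 5.0
vout_nom = code / 2**N * vref                 # nominal 2.5 V
vout_err = code / 2**N * (vref * (1 + 1e-6))  # Vref high by 1 ppm
print((vout_err - vout_nom) / vout_nom)       # ~1e-6: the same 1 ppm
```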

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

AK
Attila Kinali
Sat, Jul 23, 2016 6:36 PM

On Fri, 22 Jul 2016 12:15:25 -0500
David davidwhess@gmail.com wrote:

If you expect analog specifications in line with the claimed digital
resolution of ADCs and DACs, you will be disappointed.  20 bits is
about where they top out no matter how many bits are available; the
best you can hope for is that they are monotonic but how meaningful is
that when it is buried in noise?

Depends on your application. If the circuitry that follows the DAC has
some integrative/low-pass characteristics, then bits buried in noise
might not be that bad. E.g. when controlling a VCXO, any noise beyond
10-50kHz will be filtered out by the crystal and its high Q.

Similarly, low frequency noise might be eaten up by the surrounding
control loop. This is especially beneficial when dealing with a
circuit that has high 1/f noise. The drawback is that a high loop bandwidth
(something around 10-100Hz is the minimum required to filter 1/f noise)
is not that easy to achieve. It requires careful design and makes things
generally a lot more expensive.
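As a rough feel for that trade-off, here is a first-order-loop approximation (a sketch under the stated assumption of a simple integrating loop, not a model of any specific design): noise injected at the DAC/EFC inside the loop is shaped by the loop's error function, so at Fourier frequencies well below the unity-gain bandwidth f_c it is attenuated by roughly f/f_c.

```python
import math

def in_loop_noise_suppression(f, f_c):
    """|s / (s + w_c)| for a first-order loop: attenuation of DAC/EFC
    noise at Fourier frequency f when the loop bandwidth is f_c."""
    r = f / f_c
    return r / math.sqrt(1.0 + r * r)

# 1/f noise at 1 Hz: a 100 Hz loop knocks it down ~100x, while a 0.1 Hz
# loop passes it almost unchanged -- hence the push for 10-100 Hz loops.
print(in_loop_noise_suppression(1.0, 100.0))  # ~0.01
print(in_loop_noise_suppression(1.0, 0.1))    # ~1.0
```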

The LTC2400 is considered suitable for 6 digit designs before software
calibration is used which the application note and datasheet mention.
In this case, it is its repeatable INL which can be corrected for and
its low gain and offset drift which matter.

Yes, but the LTC2400, like all delta-sigma converters, has the big problem
that it only reaches its full performance at a very low sampling rate.
In the case of the LTC2400 it's a whopping 7 sps. I.e., that would limit a
DAC built with an LTC2400 in its feedback path to at most 3 sps, probably
even lower.

On the other hand, a modern DAC like the AD5791 reaches the full 20 bits at
1Msps (resp. 1us settling time to 0.02% @10V step, or 1us to 1LSB @500 code
step). But using the AD5791 in a design isn't easy either. The dual voltage
reference that is required to reach full spec is kind of inconvenient. And as
phk already wrote, these DACs deliver you the reference accuracy and noise
very precisely.

A nice write-up on the issues in this area can be found in [1].

		Attila Kinali

[1] "The 20-Bit DAC Is the Easiest Part of a 1-ppm-Accurate Precision
Voltage Source", by Maurice Egan, 2010
http://www.analog.com/library/analogdialogue/archives/44-04/ad5791.html

--
Malek's Law:
Any simple idea will be worded in the most complicated way.

BG
Bruce Griffiths
Sun, Jul 24, 2016 12:18 AM

The AD5791 evaluation board has an unpopulated area for what appears to be an LTZ1000 reference circuit.

Bruce

On Sunday, 24 July 2016 7:00 AM, Attila Kinali <attila@kinali.ch> wrote:

On Fri, 22 Jul 2016 12:15:25 -0500
David davidwhess@gmail.com wrote:

If you expect analog specifications in line with the claimed digital
resolution of ADCs and DACs, you will be disappointed.  20 bits is
about where they top out no matter how many bits are available; the
best you can hope for is that they are monotonic but how meaningful is
that when it is buried in noise?

Depends on your application. If the circuitry that follows the DAC has
some integrative/low-pass characteristics, then bits buried in noise
might not be that bad. E.g. when controlling a VCXO, any noise beyond
10-50kHz will be filtered out by the crystal and its high Q.

Similarly, low frequency noise might be eaten up by the surrounding
control loop. This is especially beneficial when dealing with a
circuit that has high 1/f noise. The drawback is that a high loop bandwidth
(something around 10-100Hz is the minimum required to filter 1/f noise)
is not that easy to achieve. It requires careful design and makes things
generally a lot more expensive.

The LTC2400 is considered suitable for 6 digit designs before software
calibration is used which the application note and datasheet mention.
In this case, it is its repeatable INL which can be corrected for and
its low gain and offset drift which matter.

Yes, but the LTC2400, like all delta-sigma converters, has the big problem
that it only reaches its full performance at a very low sampling rate.
In the case of the LTC2400 it's a whopping 7 sps. I.e., that would limit a
DAC built with an LTC2400 in its feedback path to at most 3 sps, probably
even lower.

On the other hand, a modern DAC like the AD5791 reaches the full 20 bits at 1Msps
(resp 1us settling time to 0.02% @10V step, or 1us to 1LSB @500 code step).
But using the AD5791 in a design isn't easy either. The dual voltage reference
that is required to reach full spec is kind of inconvenient. And as phk already
wrote, these DACs deliver you the reference accuracy and noise very precisely.

A nice write-up on issues in this area can be found at[1]

            Attila Kinali

[1] "The 20-Bit DAC Is the Easiest Part of a 1-ppm-Accurate Precision
Voltage Source", by Maurice Egan, 2010
http://www.analog.com/library/analogdialogue/archives/44-04/ad5791.html

--
Malek's Law:
        Any simple idea will be worded in the most complicated way.



D
David
Mon, Jul 25, 2016 12:17 AM

On Sat, 23 Jul 2016 20:36:28 +0200, you wrote:

On the other hand, a modern DACs like the AD5791 reaches full 20bit at 1Msps
(resp 1us settling time to 0.02% @10V step, or 1us to 1LSB @500 code step).
But using the AD5791 in a design isn't easy either. The dual voltage reference
that is required to reach full spec is kind of inconvenient. And as phk already
wrote, these DACs deliver you the reference accuracy and noise very precisely.

A nice write-up on issues in this area can be found at[1]

		Attila Kinali

[1] "The 20-Bit DAC Is the Easiest Part of a 1-ppm-Accurate Precision
Voltage Source", by Maurice Egan, 2010
http://www.analog.com/library/analogdialogue/archives/44-04/ad5791.html

The AD5791 is pretty impressive although it comes with an equally
impressive price.  That right there looks like 10 years of improvement
given the age of the LTC2400.

I see what you mean about the positive and negative voltage reference
requirement.  Analog Devices might as well have written "magic
positive and negative reference source here" in their typical
operating circuit schematic.

There has to be a better way to do this.  Maybe we could build a
wooden badger ...

SS
Scott Stobbe
Mon, Jul 25, 2016 3:48 AM

I doubt the AD5791 does much better than 16 bits operating at 1 Msps, when
you include glitch energy, noise, and distortion.

On Saturday, 23 July 2016, Attila Kinali attila@kinali.ch wrote:

On Fri, 22 Jul 2016 12:15:25 -0500
David <davidwhess@gmail.com> wrote:

If you expect analog specifications in line with the claimed digital
resolution of ADCs and DACs, you will be disappointed.  20 bits is
about where they top out no matter how many bits are available; the
best you can hope for is that they are monotonic but how meaningful is
that when it is buried in noise?

Depends on your application. If the circuitry that follows the DAC has
some integrative/low-pass characteristics, then bits buried in noise
might not be that bad. E.g. when controlling a VCXO, any noise beyond
10-50 kHz will be filtered out by the crystal and its high Q.

Similarly, low frequency noise might be eaten up by the surrounding
control loop. This is especially beneficial when dealing with a
circuit that has high 1/f noise. The drawback is that a high loop frequency
(something around 10-100 Hz is the minimum required to filter 1/f noise)
is not that easy to achieve. It requires careful design and makes things
generally a lot more expensive.

The LTC2400 is considered suitable for 6 digit designs before software
calibration is used which the application note and datasheet mention.
In this case, it is its repeatable INL which can be corrected for and
its low gain and offset drift which matter.

Yes, but the LTC2400, like all delta-sigma converters, has the big problem
that it only reaches its full performance at a very low sampling rate.
In the case of the LTC2400 it's a whopping 7 sps. I.e., that would limit a
DAC built with an LTC2400 in its feedback path to at most 3 sps, probably
even lower.

On the other hand, a modern DAC like the AD5791 reaches full 20 bit at
1 Msps (resp. 1 us settling time to 0.02% @10 V step, or 1 us to 1 LSB
@500 code step). But using the AD5791 in a design isn't easy either.
The dual voltage reference that is required to reach full spec is kind
of inconvenient. And as phk already wrote, these DACs deliver you the
reference accuracy and noise very precisely.

A nice write-up on issues in this area can be found at[1]

                     Attila Kinali

[1] "The 20-Bit DAC Is the Easiest Part of a 1-ppm-Accurate Precision
Voltage Source", by Maurice Egan, 2010
http://www.analog.com/library/analogdialogue/archives/44-04/ad5791.html

--
Malek's Law:
Any simple idea will be worded in the most complicated way.


time-nuts mailing list -- time-nuts@febo.com javascript:;
To unsubscribe, go to
https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.

D
David
Mon, Jul 25, 2016 10:20 AM

The AD5791 specifications under various conditions are all roughly
consistent; 20 bits at DC, 16 bits at 10 ksps based on SFDR, and 12
bits at 1 Msps for large code changes.

Its intended application is DC where its 1 Msps update rate applies
for code steps of 500 or smaller and settling time will be within 1
LSB.

On Sun, 24 Jul 2016 23:48:05 -0400, you wrote:

I doubt the AD5791 does much better than 16 bits operating at 1 Msps, when
you include glitch energy, noise, and distortion.

AK
Attila Kinali
Mon, Jul 25, 2016 11:13 PM

On Sun, 24 Jul 2016 19:17:29 -0500
David davidwhess@gmail.com wrote:

There has to be a better way to do this.  Maybe we could build a
wooden badger ...

What? Has the wooden rabbit failed?

As I said, I looked into this some time ago and I couldn't come up
with any "easy" way to build a DAC with more than 20 (real) bits that
has a sampling rate higher than a couple of Hz. Even what the audio
people do, if they do not use off-the-shelf snake-oil DACs, is pretty
hard-core tech (and smells even more like snake oil)... nothing you'd
do on your kitchen table. Somehow building your own atomic frequency
standard looked easier than doing your own high precision DAC.

If anyone has a good idea how to build a DAC with more than 20 bits
that is somewhat DC stable (better than ~1 ppm within an hour should
be enough), I'd like to hear about that.

		Attila Kinali

--
Malek's Law:
Any simple idea will be worded in the most complicated way.

AK
Attila Kinali
Mon, Jul 25, 2016 11:40 PM

On Sun, 24 Jul 2016 23:48:05 -0400
Scott Stobbe scott.j.stobbe@gmail.com wrote:

I doubt the AD5791 does much better than 16 bits operating at 1 Msps, when
you include glitch energy, noise, and distortion.

What makes you think so?

Yes, if you are using the full 500 kHz bandwidth then the rms noise voltage
will be 5 uV... or 35 uV p-p. But even just going down to a 1 kHz bandwidth
gives 235 nV rms / 1.5 uV p-p (plus 1/f noise of 1.1 uV p-p). So we are within
1 ppm for any output larger than ~2 V.
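A quick numerical check of these bandwidth figures. This is only a sketch: the 7.5 nV/rtHz white-noise density is the value quoted later in this thread for the AD5791, and the ~6.6 rms-to-peak-to-peak crest factor is an assumed rule of thumb for Gaussian noise:

```python
import math

def rms_noise(density_nv_rthz, bw_hz):
    """White-noise rms voltage (in volts) integrated over a flat bandwidth."""
    return density_nv_rthz * 1e-9 * math.sqrt(bw_hz)

DENSITY = 7.5   # nV/rtHz, AD5791-class output noise density
CREST = 6.6     # assumed rms -> p-p factor for Gaussian noise

v_rms_500k = rms_noise(DENSITY, 500e3)  # full 500 kHz bandwidth
v_rms_1k = rms_noise(DENSITY, 1e3)      # restricted to 1 kHz

print(f"500 kHz: {v_rms_500k * 1e6:.1f} uV rms, "
      f"~{CREST * v_rms_500k * 1e6:.0f} uV p-p")
print(f"  1 kHz: {v_rms_1k * 1e9:.0f} nV rms, "
      f"~{CREST * v_rms_1k * 1e6:.1f} uV p-p")
print(f"1 ppm of a 2 V output: {2 * 1e-6 * 1e6:.1f} uV")
```

Integrating the same density over 500 kHz versus 1 kHz reproduces the ~5 uV / ~235 nV rms figures above.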

The DNL and INL are low enough that I don't think they cause any more
trouble than you'd expect from a DAC normally.

I don't know how to make use of the glitch specs and turn them into
precision values.

		Attila Kinali

--
Malek's Law:
Any simple idea will be worded in the most complicated way.

SS
Scott Stobbe
Tue, Jul 26, 2016 5:42 AM

As a clarification, the AD5791 is the minimum implementation of a DAC;
it's merely a resistor array with SPI-controllable switches. (But an
impressive set of resistors, no doubt, maybe with a dash of secret sauce
in digital calibration.) The only guaranteed specs for the AD5791 are at
DC; everything else is up to the designer.

The thermal noise of the AD5791 is 4.07 nV/rtHz * sqrt(3.4) = 7.5 nV/rtHz
(the same as spec'd), where 4.07 nV/rtHz is the Johnson noise of 1 kOhm
and 3.4 kOhm is the nominal output resistance. Flicker noise comes from
the voltage reference, reference buffers, and post-DAC buffer.

The settling time after a code transition is complete is based on the
load capacitance as seen by the DAC and the buffer amplifier's transient
response. The settling time of a single pole to +/-0.5 ppm is about 15
time constants.
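Both numbers above can be checked in a couple of lines; room temperature (300 K) is assumed for the Johnson-noise figure:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed room temperature, K

# Johnson noise density of the DAC's nominal 3.4 kOhm output resistance
R = 3.4e3
e_n = math.sqrt(4 * K_B * T * R)                   # V/rtHz
print(f"thermal noise: {e_n * 1e9:.2f} nV/rtHz")   # ~7.5 nV/rtHz

# Single-pole (RC) settling to within +/-0.5 ppm of the final value:
# the error decays as exp(-t/tau), so t = -ln(0.5e-6) time constants
n_tau = -math.log(0.5e-6)
print(f"settling to 0.5 ppm: {n_tau:.1f} time constants")   # ~14.5
```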

Even the sample application circuit only achieved a THD of 97 dB for a 1
kHz tone, which is equivalent to 16 bits.

The INL/DNL are measured at DC. If you were to measure the INL/DNL at
1 Msps on a half-bit code across the span (dither 1 LSB), the results
would be dramatically different due to glitching on code transitions.
That being said, they are kept separate so as not to confuse sources of
error.

20 effective bits is 122 dB signal-to-everything-else.
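The dB and bit figures in this thread are tied together by the standard full-scale-sine relation SINAD = 6.02*N + 1.76 dB; a minimal converter:

```python
def bits_to_db(n_bits):
    """Ideal SINAD (dB) of an n-bit converter driven by a full-scale sine."""
    return 6.02 * n_bits + 1.76

def db_to_bits(sinad_db):
    """Effective number of bits implied by a SINAD/THD figure in dB."""
    return (sinad_db - 1.76) / 6.02

print(bits_to_db(20))   # ~122 dB, "signal to everything else" for 20 bits
print(db_to_bits(97))   # ~15.8, i.e. the 97 dB THD figure is ~16 bits
```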

On Mon, Jul 25, 2016 at 7:40 PM, Attila Kinali attila@kinali.ch wrote:

On Sun, 24 Jul 2016 23:48:05 -0400
Scott Stobbe scott.j.stobbe@gmail.com wrote:

I doubt the AD5791 does much better than 16 bits operating at 1 Msps,

when

you include glitch energy, noise, and distortion.

What makes you think so?

Yes, if you are using the full 500kHz bandwidth then the rms noise voltage
will be 5uV.. or 35uVp-p. But even just going down to a 1kHz bandwith
gives 235nVrms/1.5uVp-p (plus 1/f noise of 1.1uVp-p). So we are within
the 1ppm for any output larger than ~2V.

The DNL and INL are low enough that I don't think they cause any more
trouble then you'd expect from a DAC normally.

I don't know how to make use of the glitch specs and turn them
precision values.

                     Attila Kinali

--
Malek's Law:
Any simple idea will be worded in the most complicated way.


time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to
https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.

R(
Richard (Rick) Karlquist
Tue, Jul 26, 2016 6:46 PM

On 7/25/2016 10:42 PM, Scott Stobbe wrote:

dramatically different due to glitching on code transition. That being
said, they are kept separate not to confuse sources of error.

FWIW:

The 5071A has a "home brew" DDS that was designed by the late
(and great) Robin Giffard.  He used what he called a "blanking"
circuit that disconnected the DAC during the time period when it
is at risk for glitching on code transitions.  He described it
in terms of some new innovation he had invented.  It seemed to
me an "obvious" (that loaded word) idea that surely must have
been used before.  In any event, he was clearly doing something
right (as usual).

Rick

PK
Poul-Henning Kamp
Wed, Jul 27, 2016 5:47 AM

In message <2ea1326f-0690-6884-8fc4-29f45c57fd7b@karlquist.com>,
"Richard (Rick) Karlquist" writes:

The 5071A has a "home brew" DDS that was designed by the late
(and great) Robin Giffard.  He used what he called a "blanking"
circuit that disconnected the DAC during the time period when it
is at risk for glitching on code transitions.  He described it
in terms of some new innovation he had invented.  It seemed to
me an "obvious" (that loaded word) idea that surely must have
been used before.

For instance on page 66 of the HP Journal, February 1989?

--
Poul-Henning Kamp      | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG        | TCP/IP since RFC 956
FreeBSD committer      | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

SS
Scott Stobbe
Wed, Jul 27, 2016 6:15 PM

The first reference at hand I checked was the ADI Data Converter Handbook,
1986. Page 60 discusses track & hold, with a reference to the HDD-1206 as
including track/hold on die.

On Tue, Jul 26, 2016 at 2:46 PM, Richard (Rick) Karlquist <
richard@karlquist.com> wrote:

On 7/25/2016 10:42 PM, Scott Stobbe wrote:

dramatically different due to glitching on code transition. That being
said, they are kept separate not to confuse sources of error.

FWIW:

The 5071A has a "home brew" DDS that was designed by the late
(and great) Robin Giffard.  He used what he called a "blanking"
circuit that disconnected the DAC during the time period when it
is at risk for glitching on code transitions.  He described it
in terms of some new innovation he had invented.  It seemed to
me an "obvious" (that loaded word) idea that surely must have
been used before.  In any event, he was clearly doing something
right (as usual).

Rick


time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to
https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.

RL
Robert LaJeunesse
Wed, Jul 27, 2016 6:55 PM

The concept of track & hold was well understood in 1966 and documented in the classic Philbrick Researches amplifier application book, now available at http://www.analog.com/library/analogdialogue/archives/philbrick/computing_amplifiers.html

Bob LaJeunesse

Sent: Wednesday, July 27, 2016 at 2:15 PM
From: "Scott Stobbe" scott.j.stobbe@gmail.com
To: "Discussion of precise time and frequency measurement" time-nuts@febo.com
Subject: Re: [time-nuts] Precision DACs

The first reference at hand I checked was the ADI Data Converter Handbook,
1986. Pg 60 Discusses track & hold, with a reference to the HDD-1206 as
including track/hold on die.

On Tue, Jul 26, 2016 at 2:46 PM, Richard (Rick) Karlquist <
richard@karlquist.com> wrote:

On 7/25/2016 10:42 PM, Scott Stobbe wrote:

dramatically different due to glitching on code transition. That being
said, they are kept separate not to confuse sources of error.

FWIW:

The 5071A has a "home brew" DDS that was designed by the late
(and great) Robin Giffard.  He used what he called a "blanking"
circuit that disconnected the DAC during the time period when it
is at risk for glitching on code transitions.  He described it
in terms of some new innovation he had invented.  It seemed to
me an "obvious" (that loaded word) idea that surely must have
been used before.  In any event, he was clearly doing something
right (as usual).

Rick

BG
Bruce Griffiths
Wed, Jul 27, 2016 9:47 PM

On Wednesday, July 27, 2016 08:55:20 PM Robert LaJeunesse wrote:

The concept of track & hold was well understood in 1966 and documented in
the classic Philbrick Researches amplifier application book, now available
at
http://www.analog.com/library/analogdialogue/archives/philbrick/computing_a
mplifiers.html

Bob LaJeunesse

Sent: Wednesday, July 27, 2016 at 2:15 PM
From: "Scott Stobbe" scott.j.stobbe@gmail.com
To: "Discussion of precise time and frequency measurement"
time-nuts@febo.com Subject: Re: [time-nuts] Precision DACs

The first reference at hand I checked was the ADI Data Converter Handbook,
1986. Pg 60 Discusses track & hold, with a reference to the HDD-1206 as
including track/hold on die.

On Tue, Jul 26, 2016 at 2:46 PM, Richard (Rick) Karlquist <

richard@karlquist.com> wrote:

On 7/25/2016 10:42 PM, Scott Stobbe wrote:

dramatically different due to glitching on code transition. That being
said, they are kept separate not to confuse sources of error.

FWIW:

The 5071A has a "home brew" DDS that was designed by the late
(and great) Robin Giffard.  He used what he called a "blanking"
circuit that disconnected the DAC during the time period when it
is at risk for glitching on code transitions.  He described it
in terms of some new innovation he had invented.  It seemed to
me an "obvious" (that loaded word) idea that surely must have
been used before.  In any event, he was clearly doing something
right (as usual).

Rick


time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to
https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts and follow the
instructions there.

For this particular application a track-and-hold circuit is not necessary;
a much simpler circuit that merely disconnects the DAC from the filter input
and connects the filter input to ground will suffice.
Done properly, this results in a set of constant-amplitude glitches at
the sample rate which can easily be rejected by the reconstruction filter.
The DAC output is in effect modulated by a square wave, with a duty cycle
and phase chosen with respect to the DAC transitions so as to remove the
DAC glitches from the filter input. The only effect of this modulation on
the filtered output signal is a small reduction in amplitude relative to
that achievable with a sample-and-hold style deglitcher.
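The amplitude-scaling effect of such a blanking gate can be illustrated with a toy simulation. This sketch models only the duty-cycle scaling seen by a low-pass reconstruction filter (the glitch itself is not simulated, and all values are illustrative):

```python
OVERSAMPLE = 100   # simulation points per DAC update period
DUTY = 0.8         # fraction of each period the DAC stays connected
codes = [0.20, 0.21, 0.22, 0.23]   # slow staircase of DAC outputs, volts

samples = []
for code in codes:
    for i in range(OVERSAMPLE):
        # Disconnect (force 0 V) during the first 20% of each period,
        # the window in which the code-transition glitch would occur
        connected = i >= (1 - DUTY) * OVERSAMPLE
        samples.append(code if connected else 0.0)

# A reconstruction filter passing only the low-frequency content sees
# the average; it is scaled by the duty cycle versus the unchopped DAC
mean_chopped = sum(samples) / len(samples)
mean_ideal = sum(codes) / len(codes)
print(mean_chopped / mean_ideal)   # ~0.8, i.e. the duty cycle
```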

Bruce
