time-nuts@lists.febo.com

Discussion of precise time and frequency measurement


Re: [time-nuts] Linear voltage regulator hints...

dan@irtelemetrics.com
Fri, Dec 12, 2014 1:20 AM
 Hi Bob,

Some numbers. Maybe you can double check my math, just to be sure I'm
not getting something completely wrong. That is very possible, since
I'm new here... ;) 
The DAC is moving up and down about 7.5 counts with room temp swings, at 20-bit resolution and 6.6 volts full-scale output.
6.6 volts * 7.5 counts / (2^20) = +/-47 uV. (This is verified as reasonable with a 24-bit data logger, which sees about +/-50 uV temp swings on EFC, with a resolution of about 1 uV.)
Tuning sensitivity of the oscillator is 1 Hz/10 volts. So 47 uV * 1 Hz/10 V = +/-4.7 uHz.
The temp swing is +/-2 deg F with a ~45 minute period, so in the ballpark of your +/-1 deg C guess.
+/-4.7 uHz on a 10 MHz oscillator = +/-4.7e-13, or close to a 1e-12 full swing over 2.2 deg C. (Or am I completely out to lunch here???)
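
Here's the same arithmetic as a short Python sketch, in case anyone wants to re-run it (all numbers are just the ones above):

# EFC swing from DAC counts, and the resulting fractional frequency swing
full_scale_v = 6.6            # DAC full-scale output, volts
counts = 7.5                  # observed swing, DAC counts
bits = 20                     # DAC resolution
tuning_hz_per_v = 1.0 / 10    # EFC tuning sensitivity: 1 Hz per 10 V
f_nominal_hz = 10e6           # 10 MHz oscillator

dv = full_scale_v * counts / 2**bits   # ~47 uV swing on the EFC line
df = dv * tuning_hz_per_v              # ~4.7 uHz frequency swing
frac = df / f_nominal_hz               # ~4.7e-13 fractional swing

print(f"EFC swing:        +/-{dv * 1e6:.1f} uV")
print(f"Frequency swing:  +/-{df * 1e6:.1f} uHz")
print(f"Fractional swing: +/-{frac:.1e}  (~1e-12 peak to peak)")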
I should qualify that there is aging/retrace here. It's in the range of 3e-11 per day right now, and I took the +/-7.5 counts off what was left after removing the slope of the aging drift. The aging looks huge over a day compared to the thermal cycling.
Currently the system has a ~2 ppm/C reference and 0.04 nV/C op-amps. So yeah, a little overkill. But these things are getting cheap nowadays, so why not? Before the 'good' reference and op-amps, there was about 10 times as much swing in the PWM DAC over temp cycles. As you have suggested, there is probably some room to 'relax the spec' and still be fine...
 
Thanks,
Dan

Hi

If your OCXO has a stability of +/-3x10^-10 over 0 to 60 C (numbers picked to make the math easy):

6x10^-10 / 60 = 1x10^-11 per C

If the OCXO is 10X better than that (unlikely), you are at 1x10^-12/C.

If the room temperature swings 1 C, you get a 1x10^-11 swing in the OCXO.

Keep in mind that the OCXO also responds to things like gradients as much as it does to absolute temperature.

If your EFC range is 1x10^-7 for 5 V (pretty common):

Each 1 mV change is 2x10^-11.

A 78L05 will hold 1 mV over 1 C. Roughly 90% of them will hold that over 20 C. That's a cheap regulator for 2x10^-12.

A 10 ppm/C reference will get you to 1x10^-13/C.

You don't need an EFC at 1x10^-7. Something 1/10 that size is probably good enough. Knocking it down to that level is just a couple of resistors. Way less money than fancy references.

Bob
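
And the budget from the quoted message above, written out the same way (a sketch only; the last line assumes the EFC range is also knocked down 10x with the couple of resistors mentioned there, which is one way the 2x10^-12 figure works out):

# Rough regulator/reference budget using the numbers in the quoted message
ocxo_stability = 6e-10              # +/-3e-10 over 0 to 60 C, total span
ocxo_tempco = ocxo_stability / 60   # 1e-11 per deg C from the OCXO itself

efc_range = 1e-7            # fractional pull over 5 V (a common EFC spec)
per_volt = efc_range / 5    # 2e-8 per volt
per_mv = per_volt * 1e-3    # 2e-11 per mV on the EFC line

reg_drift_v_per_c = 1e-3    # 78L05-class regulator, roughly 1 mV per deg C

print(f"OCXO tempco:              {ocxo_tempco:.0e} per C")
print(f"EFC sensitivity:          {per_mv:.0e} per mV")
print(f"Regulator contribution:   {reg_drift_v_per_c * per_volt:.0e} per C")
print(f"...with EFC reduced 10x:  {reg_drift_v_per_c * per_volt / 10:.0e} per C")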
 

Bob Camp
Fri, Dec 12, 2014 1:47 AM

Hi

On Dec 11, 2014, at 8:20 PM, dan@irtelemetrics.com wrote:

Hi Bob,

Some numbers. Maybe you can double check my math, just to be sure I'm not getting something completely wrong. That is very possible, since I'm new here... ;)
The DAC is moving up and down about 7.5 counts with room temp swings. 20 bit resolution at 6.6volts full scale output.  6.6 volts * 7.5counts / (2^20) =  +/- 47uV.  (This is verified as reasonable with a 24bit data logger, as it's seeing about +/-50 uV temp swings on EFC. Resolution of about 1uV.)
Tuning sensitivity of the oscillator is 1Hz/10Volts. Or 47uV * 1Hz/10V = +/- 4.7uHz.

or 1x10^-8 per volt, if it’s a 10 MHz OCXO.
That would be 1x10^-14 per uV, or 4.7x10^-13 for 47 uV.

The temp swing is +/- 2 degF with ~45 minute period. So, in the ballpark of your +/- 1 Deg C guess.

That’s pretty normal for a modern HVAC system.

+/-4.7uHz / 10e6 Hz oscillator = +/-4.7e-13, or near a 1e-12 full swing over 2.2 Deg C.

1x10^-12 for full swing is about right.

(Or, am I completely out to lunch here???)
I should qualify, there is aging/retrace here. It's in the range of 3e-11 per day right now, and I took the +/- 7.5counts off of what was left after removing the slope of the aging drift.

That’s a very common (and legit) thing to do.

The aging looks huge over a day compared to the thermal cycling.
Currently the system has ~2ppm/C reference, and .04nV/C opamps. So, Yeah a little overkill.

What makes you believe that the OCXO’s temperature performance is not the issue?

1x10^-12 per 2 C -> 1x10^-10 over 100 C (say -30 to +70C). That’s a very good spec on an OCXO. Also consider that gradients could easily amplify the impact by 2X or more.
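
As a quick arithmetic sketch (the 2x at the end is the gradient allowance above, which is one way the 1x10^-10 figure lines up):

# Extrapolating the observed room-temperature swing to a full OCXO temp range
observed_swing = 1e-12      # full swing seen over ~2 C of room temperature
per_c = observed_swing / 2  # 5e-13 per deg C
span_c = 100                # say -30 C to +70 C
gradient_factor = 2         # allowance for gradients amplifying the effect

print(f"Straight extrapolation:  {per_c * span_c:.0e} over {span_c} C")
print(f"With gradient allowance: {per_c * span_c * gradient_factor:.0e}")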

But these things are getting cheap nowadays, so why not? Before the 'good' reference and opamps, there was about 10 times as much swing in the PWM DAC over temp cycles. As you have suggested there is probably some room to 'relax the spec', and still be fine…

I’d guess that the analog stuff is much better than it needs to be.

Bob

Thanks,
Dan

Hi

If your OCXO has a stability of +/-3x10^-10 over 0 to 60 C (numbers picked to make the math easy):

6x10^-10 / 60 = 1x10^-11 per C

If the OCXO is 10X better than that (unlikely), you are at 1x10^-12/C.

If the room temperature swings 1 C, you get a 1x10^-11 swing in the OCXO.

Keep in mind that the OCXO also responds to things like gradients as much as it does to absolute temperature.

If your EFC range is 1x10^-7 for 5 V (pretty common):

Each 1 mV change is 2x10^-11.

A 78L05 will hold 1 mV over 1 C. Roughly 90% of them will hold that over 20 C. That's a cheap regulator for 2x10^-12.

A 10 ppm/C reference will get you to 1x10^-13/C.

You don't need an EFC at 1x10^-7. Something 1/10 that size is probably good enough. Knocking it down to that level is just a couple of resistors. Way less money than fancy references.

Bob



Bob Camp
Fri, Dec 12, 2014 2:15 AM

Hi

Separate from the analysis of the voltage on the OCXO, there is another part to this:

Ok, so why am I harping on the “need” for all this from a system standpoint?

  1. Adding stuff to a design that does not make it measurably better is simply a waste of money. That’s ok if it’s your money.

  2. Others read these threads and decide “maybe I need to do that..”. Now it’s a waste of somebody else’s money.

  3. Still others look at this and decide “If I need to do that, I’m not even going to start”. That’s not good either.

  4. Analysis is part of the design process. It should very much be part of this.

  5. Focusing on a design aspect “because I can” is a very common thing. I do it all the time :) Because I fall into the trap often, I recognize just how much time gets soaked up on dead ends this way.

  6. There are very real issues when doing this. Sorting out what’s real and what isn’t is far from easy. The more “noise” in with the signal, the less likely others are to figure out a good approach.

I’m quite sure this thread will keep going for quite a while on references. At some point we will be talking about 1 pV/C, 1 kV designs with 0.1 nV/Hz noise levels. It worries me that people may believe that this somehow applies to a time or frequency standard…

Bob

On Dec 11, 2014, at 8:47 PM, Bob Camp kb8tq@n1k.org wrote:

Hi

On Dec 11, 2014, at 8:20 PM, dan@irtelemetrics.com wrote:

Hi Bob,
Some numbers. Maybe you can double check my math, just to be sure I'm not getting something completely wrong. That is very possible, since I'm new here... ;)
The DAC is moving up and down about 7.5 counts with room temp swings. 20 bit resolution at 6.6volts full scale output.  6.6 volts * 7.5counts / (2^20) =  +/- 47uV.  (This is verified as reasonable with a 24bit data logger, as it's seeing about +/-50 uV temp swings on EFC. Resolution of about 1uV.)
Tuning sensitivity of the oscillator is 1Hz/10Volts. Or 47uV * 1Hz/10V = +/- 4.7uHz.

or 1x10^-8 per volt. If it’s a 10 MHz OCXO.
That would be 1x10^-14 per uV
4.7 x 10^-13 for 47 uV

The temp swing is +/- 2 degF with ~45 minute period. So, in the ballpark of your +/- 1 Deg C guess.

That’s pretty normal for a modern HVAC system.

+/-4.7uHz / 10e6 Hz oscillator = +/-4.7e-13, or near a 1e-12 full swing over 2.2 Deg C.

1x10^-12 for full swing is about right.

(Or, am I completely out to lunch here???)
I should qualify, there is aging/retrace here. It's in the range of 3e-11 per day right now, and I took the +/- 7.5counts off of what was left after removing the slope of the aging drift.

That’s a very common (and legit) thing to do.

The aging looks huge over a day compared to the thermal cycling.
Currently the system has ~2ppm/C reference, and .04nV/C opamps. So, Yeah a little overkill.

What makes you believe that the OCXO’s temperature performance is not the issue?

1x10^-12 per 2 C -> 1x10^-10 over 100 C (say -30 to +70C). That’s a very good spec on an OCXO. Also consider that gradients could easily amplify the impact by 2X or more.

But these things are getting cheap nowadays, so why not? Before the 'good' reference and opamps, there was about 10 times as much swing in the PWM DAC over temp cycles. As you have suggested there is probably some room to 'relax the spec', and still be fine…

I’d guess that the analog stuff is much better than it needs to be.

Bob

Thanks,
Dan

Hi

If your OCXO has a stability of +/-3x10^-10 over 0 to 60 C (numbers picked to make the math easy):

6x10^-10 / 60 = 1x10^-11 per C

If the OCXO is 10X better than that (unlikely), you are at 1x10^-12/C.

If the room temperature swings 1 C, you get a 1x10^-11 swing in the OCXO.

Keep in mind that the OCXO also responds to things like gradients as much as it does to absolute temperature.

If your EFC range is 1x10^-7 for 5 V (pretty common):

Each 1 mV change is 2x10^-11.

A 78L05 will hold 1 mV over 1 C. Roughly 90% of them will hold that over 20 C. That's a cheap regulator for 2x10^-12.

A 10 ppm/C reference will get you to 1x10^-13/C.

You don't need an EFC at 1x10^-7. Something 1/10 that size is probably good enough. Knocking it down to that level is just a couple of resistors. Way less money than fancy references.

Bob



Charles Steinmetz
Fri, Dec 12, 2014 5:28 AM

Bob wrote:

Separate from the analysis of the voltage on the OCXO, there is
another part to this:
Ok, so why am I harping on the "need" for all this from a system standpoint ?

We've been around this track a time or two before, me frustrated with
your "make it just good enough" philosophy and you with my "always do
the best you can" philosophy.  We're not likely to persuade each
other, or even influence anybody else, but I think it is worth going
around at least one more time.

  1. Adding stuff to a design that does not make it measurably better
    is simply a waste of money.

Preliminary nit:  I agree that any "improvement" that does not make
something measurably better is of no value.  Indeed, it is no
improvement at all.  But you didn't mean literally "not measurably
better" -- you meant "not better for the task at hand."  A digital
caliper reading to 0.0001" is "measurably better" than a ruler
graduated in 1/32 inch, although the difference is not important if
one is measuring the thickness of a 2x4 for framing a house.  But
some day you may want to measure something besides a 2x4....

On to the substance:

"Do the best you can" isn't necessarily about adding anything to a
design.  It's about carefully determining an error budget and
developing a design that meets the budget.  Of course, you can set
the design goals for each subsystem so that the overall system should
juuuust work if everything else is perfect, or so that the system
should work under most conditions, or so you'll never have to
consider whether that subsystem might be contributing anything
significant to the system errors.  If the latter is no more difficult
and no more expensive than either of the former, why WOULDN'T you
design it that way?  I was taught many years ago that "good thinking
doesn't cost any more than bad thinking," and I have generally found
that to be true.  Meaning, it is frequently the case that "the best
you can do" is no more difficult and no more expensive than doing
something less, it just takes better thinking and a more accurate
analysis.  Whenever that is the case, which IME is very often, doing
less is, IMO, a design fault.

Most often, it's a matter of, "Why ground that capacitor there?  Over
here would be better," or "Why use a noninverting amplifier?  If you
use an inverting amplifier, the HF rolloff can continue beyond unity
gain," or something similar.

Note, also, that many of the people asking questions on the list do
not seem to have a thorough design specification for their project,
and may not even know what all they will use a gizmo for.  Settling
for what a list pundit might think is "good enough" for the person's
needs (e.g., residual phase noise floor ~ -150dB and reverse
isolation of ~ 40dB for a buffer amplifier) may turn out to be
inadequate when the person acquires some better oscillators and a
DMTD setup and needs -175dB and 90dB.  If they do the best they can
the first time, they may not have to re-do it later.

  2. Others read these threads and decide "maybe I need to do that..".
  3. Still others look at this and decide "If I need to do that, I'm
    not even going to start". That's not good either.

Again, neither one is a problem if doing the best one can is no more
difficult and no more expensive than doing something less.  If
someone has already done the good thinking and suggests a workable
approach, and all you have to do is a sanity check to implement the
idea (perhaps even improving on the design), again -- why WOULDN'T
you?  There is always someone handy who is quick to point out all of
the other ways to do things, so the person contemplating the project
can evaluate the different approaches for himself.

Sometimes, of course, going the next step up the "best you can"
ladder involves an expensive part (e.g., silicon-on-sapphire
semiconductors), or a much more complex design, or some use
restriction (must be submerged in liquid nitrogen).  In that case,
one must think very carefully about the error budget and determine if
that step is really necessary.  But the vast majority of the time, we
do not face that situation IME.

The bottom line is:  There is no virtue in doing "just enough,"
certainly not in the case of amateur projects that will not be
manufactured in large numbers for slim profit (where every millipence
must be saved, if the accountants are to be believed -- often, they
shouldn't be, but that's another topic entirely).  Never apologize
for doing better than "just enough," as long as doing so does not
cause collateral problems.

To me, that is the art of design -- knowing that the finished gizmo
is the best I could make at the time and with the resources available.

In philosophy-of-design circles, one sometimes hears that "a race car
should be designed so that everything is totally spent as it crosses
the finish line -- the engine should explode, the transmission should
break, and all four tires should blow out simultaneously.  Anything
that is still working was, by definition, overdesigned."  Aside from
the obvious hyperbole, I think the underlying point is plain
wrong.  I know I admire the designers, whoever they were, when
someone pulls a submarine off the ocean floor after 70 years and the
batteries still have a charge and the gauges and radios still work.

Finally, one not-so-obvious point about amateur designs.  Sometimes,
something is a true one-off -- there will never be another made to
that design.  In that case, some design requirements can be
relaxed.  You can use trimmer caps or resistors where you would
prefer not to in a commercial design, for example, and you may use
disfavored logic kludges to work around timing problems.  But then
there are designs that you will publish or otherwise share -- and
these, I suggest, need to be even more bulletproof than commercial
designs, since you are not in control of the construction, parts
choices, etc. that others who follow your lead will make.  Yes, you
can make disclaimers and suggest where the sensitive bits are, but
for the design to be truly useful to others, you need to pay
attention to all that and design as many of the traps out as you
possibly can -- which can be much harder than designing something to
work properly when it is made in a factory under your supervision.

Best regards,

Charles

Bob Stewart
Fri, Dec 12, 2014 5:54 AM

Hi Charles,
I hope you don't mind if I throw my two cents in, as this began as a question about my GPSDO project.  We had a thermal drift problem that Dan traced to the PWM to EFC interface and resolved.  The question to the list was whether there was a regulator package that had a built-in reference with good thermal performance.  Somehow the thread went off on all the tangents, which can be good.  In the process it became clear that there weren't any regulators that would fit our needs, so we would have to go with a reference and op-amp etc.  So, now we just need to decide whether to use a pass transistor or a controllable regulator.  The budget will probably result in a 25 or 50 cent pass transistor, a good(ish) op-amp and a reference that's multi-purposed for the board's other needs (ADC, RC integrator, etc).  But it's certainly good to see what the other options are just in case.  And if the project stays a two-off, then there's plenty of leeway to use better parts here and there if the pinouts are the same.

Bob - AE6RV

  From: Charles Steinmetz <csteinmetz@yandex.com>

To: Discussion of precise time and frequency measurement time-nuts@febo.com
Sent: Thursday, December 11, 2014 11:28 PM
Subject: Re: [time-nuts] Linear voltage regulator hints... --> WHY?

Bob wrote:

Separate from the analysis of the voltage on the OCXO, there is
another part to this:
Ok, so why am I harping on the "need" for all this from a system standpoint ?

We've been around this track a time or two before, me frustrated with
your "make it just good enough" philosophy and you with my "always do
the best you can" philosophy.  We're not likely to persuade each
other, or even influence anybody else, but I think it is worth going
around at least one more time.

  1. Adding stuff to a design that does not make it measurably better
    is simply a waste of money.

Preliminary nit:  I agree that any "improvement" that does not make
something measurably better is of no value.  Indeed, it is no
improvement at all.  But you didn't mean literally "not measurably
better" -- you meant "not better for the task at hand."  A digital
caliper reading to 0.0001" is "measurably better" than a ruler
graduated in 1/32 inch, although the difference is not important if
one is measuring the thickness of a 2x4 for framing a house.  But
some day you may want to measure something besides a 2x4....

On to the substance:

"Do the best you can" isn't necessarily about adding anything to a
design.  It's about carefully determining an error budget and
developing a design that meets the budget.  Of course, you can set
the design goals for each subsystem so that the overall system should
juuuust work if everything else is perfect, or so that the system
should work under most conditions, or so you'll never have to
consider whether that subsystem might be contributing anything
significant to the system errors.  If the latter is no more difficult
and no more expensive than either of the former, why WOULDN'T you
design it that way?  I was taught many years ago that "good thinking
doesn't cost any more than bad thinking," and I have generally found
that to be true.  Meaning, it is frequently the case that "the best
you can do" is no more difficult and no more expensive than doing
something less, it just takes better thinking and a more accurate
analysis.  Whenever that is the case, which IME is very often, doing
less is, IMO, a design fault.

Most often, it's a matter of, "Why ground that capacitor there?  Over
here would be better," or "Why use a noninverting amplifier?  If you
use an inverting amplifier, the HF rolloff can continue beyond unity
gain," or something similar.

Note, also, that many of the people asking questions on the list do
not seem to have a thorough design specification for their project,
and may not even know what all they will use a gizmo for.  Settling
for what a list pundit might think is "good enough" for the person's
needs (e.g., residual phase noise floor ~ -150dB and reverse
isolation of ~ 40dB for a buffer amplifier) may turn out to be
inadequate when the person acquires some better oscillators and a
DMTD setup and needs -175dB and 90dB.  If they do the best they can
the first time, they may not have to re-do it later.

  2. Others read these threads and decide "maybe I need to do that..".
  3. Still others look at this and decide "If I need to do that, I'm
    not even going to start". That's not good either.

Again, neither one is a problem if doing the best one can is no more
difficult and no more expensive than doing something less.  If
someone has already done the good thinking and suggests a workable
approach, and all you have to do is a sanity check to implement the
idea (perhaps even improving on the design), again -- why WOULDN'T
you?  There is always someone handy who is quick to point out all of
the other ways to do things, so the person contemplating the project
can evaluate the different approaches for himself.

Sometimes, of course, going the next step up the "best you can"
ladder involves an expensive part (e.g., silicon-on-sapphire
semiconductors), or a much more complex design, or some use
restriction (must be submerged in liquid nitrogen).  In that case,
one must think very carefully about the error budget and determine if
that step is really necessary.  But the vast majority of the time, we
do not face that situation IME.

The bottom line is:  There is no virtue in doing "just enough,"
certainly not in the case of amateur projects that will not be
manufactured in large numbers for slim profit (where every millipence
must be saved, if the accountants are to be believed -- often, they
shouldn't be, but that's another topic entirely).  Never apologize
for doing better than "just enough," as long as doing so does not
cause collateral problems.

To me, that is the art of design -- knowing that the finished gizmo
is the best I could make at the time and with the resources available.

In philosophy-of-design circles, one sometimes hears that "a race car
should be designed so that everything is totally spent as it crosses
the finish line -- the engine should explode, the transmission should
break, and all four tires should blow out simultaneously.  Anything
that is still working was, by definition, overdesigned."  Aside from
the obvious hyperbole, I think the underlying point is plain
wrong.  I know I admire the designers, whoever they were, when
someone pulls a submarine off the ocean floor after 70 years and the
batteries still have a charge and the gauges and radios still work.

Finally, one not-so-obvious point about amateur designs.  Sometimes,
something is a true one-off -- there will never be another made to
that design.  In that case, some design requirements can be
relaxed.  You can use trimmer caps or resistors where you would
prefer not to in a commercial design, for example, and you may use
disfavored logic kludges to work around timing problems.  But then
there are designs that you will publish or otherwise share -- and
these, I suggest, need to be even more bulletproof than commercial
designs, since you are not in control of the construction, parts
choices, etc. that others who follow your lead will make.  Yes, you
can make disclaimers and suggest where the sensitive bits are, but
for the design to be truly useful to others, you need to pay
attention to all that and design as many of the traps out as you
possibly can -- which can be much harder than designing something to
work properly when it is made in a factory under your supervision.

Best regards,

Charles



Bob Camp
Fri, Dec 12, 2014 1:09 PM

On Dec 12, 2014, at 12:28 AM, Charles Steinmetz csteinmetz@yandex.com wrote:

Bob wrote:

Separate from the analysis of the voltage on the OCXO, there is another part to this:
Ok, so why am I harping on the "need" for all this from a system standpoint ?

We've been around this track a time or two before, me frustrated with your "make it just good enough" philosophy and you with my "always do the best you can" philosophy.  We're not likely to persuade each other, or even influence anybody else, but I think it is worth going around at least one more time.

  1. Adding stuff to a design that does not make it measurably better is simply a waste of money.

Preliminary nit:  I agree that any "improvement" that does not make something measurably better is of no value.  Indeed, it is no improvement at all.  But you didn't mean literally "not measurably better" -- you meant "not better for the task at hand.”

In the case at hand, the task is a GPSDO with a frequency vs temperature issue. The issue is coming from the OCXO and not the reference. Improving the reference will (in this case) have no impact on the problem.

A digital caliper reading to 0.0001" is "measurably better" than a ruler graduated in 1/32 inch, although the difference is not important if one is measuring the thickness of a 2x4 for framing a house.  But some day you may want to measure something besides a 2x4....

On to the substance:

"Do the best you can" isn't necessarily about adding anything to a design.  It's about carefully determining an error budget and developing a design that meets the budget.  Of course, you can set the design goals for each subsystem so that the overall system should juuuust work if everything else is perfect, or so that the system should work under most conditions, or so you'll never have to consider whether that subsystem might be contributing anything significant to the system errors.  If the latter is no more difficult and no more expensive than either of the former, why WOULDN'T you design it that way?

It is rare that an improvement does not impact cost or complexity. It most certainly is not the case in this situation.

I was taught many years ago that "good thinking doesn't cost any more than bad thinking," and I have generally found that to be true.  Meaning, it is frequently the case that "the best you can do" is no more difficult and no more expensive than doing something less, it just takes better thinking and a more accurate analysis.  Whenever that is the case, which IME is very often, doing less is, IMO, a design fault.

Most often, it's a matter of, "Why ground that capacitor there?  Over here would be better," or "Why use a noninverting amplifier?  If you use an inverting amplifier, the HF rolloff can continue beyond unity gain," or something similar.

Note, also, that many of the people asking questions on the list do not seem to have a thorough design specification for their project, and may not even know what all they will use a gizmo for.  Settling for what a list pundit might think is "good enough" for the person's needs (e.g., residual phase noise floor ~ -150dB and reverse isolation of ~ 40dB for a buffer amplifier) may turn out to be inadequate when the person acquires some better oscillators and a DMTD setup and needs -175dB and 90dB.  If they do the best they can the first time, they may not have to re-do it later.

But rather than looking at the system and its needs, we spin off to “improvements”. Inevitably the result is a -175 dB solution to a -145 dB problem.

  2. Others read these threads and decide "maybe I need to do that..".
  3. Still others look at this and decide "If I need to do that, I'm not even going to start". That's not good either.

Again, neither one is a problem if doing the best one can is no more difficult and no more expensive than doing something less.

Except that in the actual example case at hand it very much is more expensive and more difficult.

If someone has already done the good thinking and suggests a workable approach, and all you have to do is a sanity check to implement the idea (perhaps even improving on the design), again -- why WOULDN'T you?

That’s not what’s being done here. The example case is not following the course you are talking about at all.

There is always someone handy who is quick to point out all of the other ways to do things, so the person contemplating the project can evaluate the different approaches for himself.

Sometimes, of course, going the next step up the "best you can" ladder involves an expensive part (e.g., silicon-on-sapphire semiconductors), or a much more complex design, or some use restriction (must be submerged in liquid nitrogen).  In that case, one must think very carefully about the error budget and determine if that step is really necessary.  But the vast majority of the time, we do not face that situation IME.

For a very few people the limit may indeed be liquid helium. There is a much larger group who are turned off at a far lower cost or complexity point.

The bottom line is:  There is no virtue in doing "just enough," certainly not in the case of amateur projects that will not be manufactured in large numbers for slim profit (where every millipence must be saved, if the accountants are to be believed -- often, they shouldn't be, but that's another topic entirely).  Never apologize for doing better than "just enough," as long as doing so does not cause collateral problems.

To me, that is the art of design -- knowing that the finished gizmo is the best I could make at the time and with the resources available.

In philosophy-of-design circles, one sometimes hears that "a race car should be designed so that everything is totally spent as it crosses the finish line -- the engine should explode, the transmission should break, and all four tires should blow out simultaneously.  Anything that is still working was, by definition, overdesigned."  Aside from the obvious hyperbole, I think the underlying point is plain wrong.  I know I admire the designers, whoever they were, when someone pulls a submarine off the ocean floor after 70 years and the batteries still have a charge and the gauges and radios still work.

Finally, one not-so-obvious point about amateur designs.  Sometimes, something is a true one-off -- there will never be another made to that design.  In that case, some design requirements can be relaxed.  You can use trimmer caps or resistors where you would prefer not to in a commercial design, for example, and you may use disfavored logic kludges to work around timing problems.  But then there are designs that you will publish or otherwise share -- and these, I suggest, need to be even more bulletproof than commercial designs, since you are not in control of the construction, parts choices, etc. that others who follow your lead will make.  Yes, you can make disclaimers and suggest where the sensitive bits are, but for the design to be truly useful to others, you need to pay attention to all that and design as many of the traps out as you possibly can -- which can be much harder than designing something to work properly when it is made in a factory under your supervision.

The issue is not “do people go overboard”; of course they do. Everybody does. Turning that by itself into a virtue is a mistake. In 99.99% of real-world cases, cost and complexity will go up and reliability will thus go down. The result will not be better, it will be worse. The best design that achieves the goal is the simplest design. That’s every bit as true in the basement as it is in volume production.

Your comments do not address the other side of what I’ve been trying to convey:

Going overboard with no analysis is not a good way to do any of this. Even after getting the system specs and design constraints for this example, we do not bring them into the discussion. We get into long (and very well written) defenses of complexity for its own sake. We don’t get into analysis. The takeaway for the casual reader is that complexity is the goal and that analysis is an unimportant part of the process that only optionally comes into it. There are a lot of people reading the list who could execute a simple design. There are far fewer who can properly execute a very complex one. The focus on complexity for its own sake does have an impact.

————————————

Are we really that far apart? Not really. We are each talking about two sides of the same coin. The real world is a messy place. Analysis often takes a back seat to the “fun of doing something”. That’s not to say it should, though…

Have Fun

Bob

Best regards,

Charles


time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.

TS
Tim Shoppa
Fri, Dec 12, 2014 1:52 PM

I grew up in an industry where we called everything that was way
overspec'ed, "platinum-iridium xxx".

I think there is broad interest in, e.g., low-tempco engineering and
low-noise regulators, and having some in-the-pocket designs that start
with jellybean discrete parts and add the occasional hi-spec part where
it actually matters is a great idea.

Tim N3QE

JL
Jim Lux
Fri, Dec 12, 2014 2:15 PM

On 12/12/14, 5:09 AM, Bob Camp wrote:
———

Are we really that far apart - not really. We each are talking about
two sides of the same coin. The real world is a messy place. Analysis
often takes a back seat to the “fun of doing something”. That’s not
to say it should though …

And sometimes, the analysis is the fun part.

These days, with awesome computational horsepower at your fingertips,
anywhere, you can do analysis in places and times that are infeasible
for bench work.

I get a lot of useful analysis and modeling done on airplane flights
across the country. No phone calls, no meetings to go to, no emails to
respond to, no self induced distractions. 4-5 glorious mostly
uninterrupted hours with forced seat time. (especially with more planes
having 110V outlets at the seat)
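As one example, a first-order estimate of how much of a room-temperature
swing actually reaches a boxed-up reference fits comfortably in a seat-back
session; every parameter below is an assumption, just to show the method:

import math

# How much of a sinusoidal room-temperature swing makes it through a
# simple first-order thermal lag?  Every value here is an assumption.
period_s = 30 * 60   # assumed room-temperature cycle, seconds
tau_s    = 10 * 60   # assumed thermal time constant of the enclosure, seconds
swing_c  = 1.0       # assumed ambient swing amplitude, degrees C

w = 2 * math.pi / period_s                             # angular frequency of the swing
attenuation = 1.0 / math.sqrt(1.0 + (w * tau_s) ** 2)  # first-order low-pass response

print(f"swing seen inside: {swing_c * attenuation:.2f} C "
      f"(attenuation {attenuation:.2f})")
# prints roughly: swing seen inside: 0.43 C (attenuation 0.43)

From there it is a one-line change to sweep the time constant or to tack
a reference tempco onto the end of the calculation.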

BC
Bob Camp
Fri, Dec 12, 2014 2:20 PM

Hi

On Dec 12, 2014, at 8:52 AM, Tim Shoppa tshoppa@gmail.com wrote:

I grew up in an industry where we called everything that was way
overspec'ed, "platinum-iridium xxx".

I think there is broad interest in, e.g., low-tempco engineering and
low-noise regulators, and having some in-the-pocket designs that start
with jellybean discrete parts and add the occasional hi-spec part where
it actually matters is a great idea.

Which is the reason I split this part of it off from the main thread. There are most certainly reasons why you would use low-noise / low-tempco / low-aging references. Digging into that is not a bad thing, and I’m in no way knocking that part of it.

Bob

Tim N3QE



time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.

BC
Bob Camp
Fri, Dec 12, 2014 2:22 PM

Hi

On Dec 12, 2014, at 9:15 AM, Jim Lux jimlux@earthlink.net wrote:

On 12/12/14, 5:09 AM, Bob Camp wrote:
———

Are we really that far apart - not really. We each are talking about
two sides of the same coin. The real world is a messy place. Analysis
often takes a back seat to the “fun of doing something”. That’s not
to say it should though …

And sometimes, the analysis is the fun part.

Indeed

These days, with awesome computational horsepower at your fingertips, anywhere, you can do analysis in places and times that are infeasible for bench work.

In some cases, the analysis is pretty simple ...

I get a lot of useful analysis and modeling done on airplane flights across the country. No phone calls, no meetings to go to, no emails to respond to, no self induced distractions. 4-5 glorious mostly uninterrupted hours with forced seat time. (especially with more planes having 110V outlets at the seat)

As long as you don’t have some guy named Bob in the seat next to you doing all sorts of distracting things …:)

Bob


time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.
