time-nuts@lists.febo.com

Discussion of precise time and frequency measurement


Simple simulation model for an OCXO?

Matthias Welwarsky
Mon, May 2, 2022 3:12 PM

Dear all,

I'm trying to come up with a reasonably simple model for an OCXO that I can
parametrize to experiment with a GPSDO simulator. For now I have the following
matlab function that "somewhat" does what I think is reasonable, but I would
like a reality check.

This is the matlab code:

function [phase] = synth_osc(samples,da,wn,fn)
  # aging
  phase = (((1:samples)/86400).^2)*da;
  # white noise
  phase += (rand(1,samples)-0.5)*wn;
  # flicker noise
  phase += cumsum(rand(1,samples)-0.5)*fn;
end

There are three components in the model, aging, white noise and flicker noise,
with everything expressed in fractions of seconds.

The first term basically creates a base vector that has a quadratic aging
function. It can be parametrized e.g. from an OCXO datasheet, daily aging
given in s/s per day.

The second term models white noise. It's just a random number scaled to the
desired 1-second uncertainty.

The third term is supposed to model flicker noise. It's basically a random
walk scaled to the desired magnitude.

As an example, the following function call would create a phase vector for a
10MHz oscillator with one day's worth of samples, with an aging of about 5
millihertz per day, 10ps/s white noise and 10ns/s of flicker noise:

phase = synth_osc(86400, -44e-6, 10e-12, 10e-9);
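For readers without Octave/Matlab at hand, the model can be sketched in Python
with NumPy. This is a rough port of the function above, not the author's code;
the seed argument is my addition for reproducibility:

```python
import numpy as np

def synth_osc(samples, da, wn, fn, seed=0):
    """Rough NumPy port of the Octave model: quadratic aging plus
    uniform white phase noise plus a scaled random walk."""
    rng = np.random.default_rng(seed)
    t = np.arange(1, samples + 1)
    # aging: quadratic base vector, da given in s/s per day
    phase = (t / 86400.0) ** 2 * da
    # white noise: uniform deviates scaled to the 1 s uncertainty
    phase = phase + (rng.random(samples) - 0.5) * wn
    # "flicker" term as written: actually a random walk (see the replies)
    phase = phase + np.cumsum(rng.random(samples) - 0.5) * fn
    return phase

phase = synth_osc(86400, -44e-6, 10e-12, 10e-9)
```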

What I'd like to know - is that a "reasonable" model or is it just too far off
of reality to be useful? What could be changed or improved?

Best regards,
Matthias

Attila Kinali
Mon, May 2, 2022 8:28 PM

On Mon, 02 May 2022 17:12:47 +0200
Matthias Welwarsky <time-nuts@welwarsky.de> wrote:

There are three components in the model, aging, white noise and flicker noise,
with everything expressed in fractions of seconds.

So far so good, but you are missing one major component that is
important for a GPSDO: temperature dependence. All OCXOs have
a slight temperature dependence that can be anything from a
few 1e-12/K to tens of 1e-9/K. It's often also not linear
and, generally, not well specified in OCXO data sheets.
There were plenty of discussions about this on time-nuts
and especially Bob Camp has shared a lot of good information.
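One way to fold this into the simulation is to turn a temperature record into a
fractional-frequency offset and integrate it into phase. A minimal sketch,
where the daily sine, the convection noise level, and the 1e-10/K tempco are
all illustrative assumptions rather than measured values:

```python
import numpy as np

def tempco_phase(samples, tempco=1e-10, t_amp=2.0, period=86400.0, seed=0):
    """Hypothetical temperature term: fractional frequency offset
    tempco * dT(t), integrated into phase (1 s samples)."""
    rng = np.random.default_rng(seed)
    t = np.arange(samples)
    # stand-in room temperature: a daily sine around the setpoint, in kelvin
    dtemp = t_amp * np.sin(2 * np.pi * t / period)
    # plus a little "convection" noise, as in the attached plots
    dtemp = dtemp + rng.normal(0.0, 0.05, samples)
    y = tempco * dtemp          # fractional frequency offset y(t)
    return np.cumsum(y)         # accumulated phase error in seconds

phase_t = tempco_phase(86400)
```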

The third term is supposed to model flicker noise. It's basically a random
walk scaled to the desired magnitude.

Random walk is not flicker noise. Random walk has a PSD
that is proportional to 1/f^2 while flicker noise has a PSD
that is proportional to 1/f.

Though, at the time scales that are interesting for a GPSDO
(i.e. (milli-)seconds to minutes) the dominant noise would be flicker
frequency with a (phase) PSD of 1/f^3 or even random walk
frequency with a (phase) PSD of 1/f^4.
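The distinction is easy to check numerically: the cumulative sum of white noise
(the model's "flicker" term) has a PSD slope near -2 on a log-log plot, not the
-1 of true flicker. A quick sketch using a plain periodogram:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1 << 16
walk = np.cumsum(rng.standard_normal(n))   # integrated white noise

# plain periodogram of the random walk
spec = np.abs(np.fft.rfft(walk)) ** 2 / n
freq = np.fft.rfftfreq(n, d=1.0)

# fit the log-log slope over a mid-frequency decade
mask = (freq > 1e-3) & (freq < 1e-2)
slope = np.polyfit(np.log10(freq[mask]), np.log10(spec[mask]), 1)[0]
# slope comes out near -2, i.e. a 1/f^2 PSD: random walk, not 1/f flicker
```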

What I'd like to know - is that a "reasonable" model or is it just too far off
of reality to be useful? What could be changed or improved?

The model is ok, though I would definitely add the model for
temperature dependence. Also, for a GPSDO you will need to model
more than just a day. A lot of effects, not just temperature,
have periods of 12-24h. If you simulate only a single day, you'll
not properly simulate all those long term effects. Not to mention
that temperature looks weird when looking at it on a larger scale.

Attached are two representative plots of what the temperature in
a normal (small) office looks like, when nobody is allowed to enter.
One graph is for Jan-Sept of 2021 and one for May (units in °C).
You will see that there is quite some structure and "weirdness"
to it. Especially note that there is a not-quite daily spike in
temperature when the heating comes on somewhen between 18:00
and 22:00 and shuts off again between 21:00 and 00:00.

The noise in the temperature is mostly due to convection in
the room and not due to the noise of the temperature sensor.

I would also recommend that you create your noise vector with
something like sigma-theta by François Vernotte[1,2].
The bruiteur tool can generate all types of noise directly and
give you a single vector to read and process. This should be
faster than building your own power law noise simulator.

		Attila Kinali

[1] https://theta.obs-besancon.fr/spip.php?article103&lang=fr
[2] https://sourcesup.renater.fr/www/sigmatheta/

--
The driving force behind research is the question: "Why?"
There are things we don't understand and things we always
wonder about. And that's why we do research.
-- Kobayashi Makoto

Bob kb8tq
Mon, May 2, 2022 9:43 PM

Hi

By far the best approach is to use actual data. Grab a pair of OCXO’s and
compare them. A single mixer setup is one easy ( = cheap ) way to do it. You
will get the sum of the two devices, but for simulation purposes, it will be much
closer to reality than anything you can brew up with a formula.

Bob

On May 2, 2022, at 10:12 AM, Matthias Welwarsky <time-nuts@welwarsky.de> wrote:

Dear all,

I'm trying to come up with a reasonably simple model for an OCXO that I can
parametrize to experiment with a GPSDO simulator. For now I have the following
matlab function that "somewhat" does what I think is reasonable, but I would
like a reality check.

This is the matlab code:

function [phase] = synth_osc(samples,da,wn,fn)
  # aging
  phase = (((1:samples)/86400).^2)*da;
  # white noise
  phase += (rand(1,samples)-0.5)*wn;
  # flicker noise
  phase += cumsum(rand(1,samples)-0.5)*fn;
end

There are three components in the model, aging, white noise and flicker noise,
with everything expressed in fractions of seconds.

The first term basically creates a base vector that has a quadratic aging
function. It can be parametrized e.g. from an OCXO datasheet, daily aging
given in s/s per day.

The second term models white noise. It's just a random number scaled to the
desired 1-second uncertainty.

The third term is supposed to model flicker noise. It's basically a random
walk scaled to the desired magnitude.

As an example, the following function call would create a phase vector for a
10MHz oscillator with one day worth of samples, with an aging of about 5
Millihertz per day, 10ps/s white noise and 10ns/s of flicker noise:

phase = synth_osc(86400, -44e-6, 10e-12, 10e-9);

What I'd like to know - is that a "reasonable" model or is it just too far off
of reality to be useful? What could be changed or improved?

Best regards,
Matthias


time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an email to time-nuts-leave@lists.febo.com
To unsubscribe, go to and follow the instructions there.

Greg Maxwell
Mon, May 2, 2022 10:13 PM

On Mon, May 2, 2022 at 10:01 PM Bob kb8tq <kb8tq@n1k.org> wrote:

By far the best approach is to use actual data. Grab a pair of OCXO’s and
compare them. A single mixer setup is one easy ( = cheap ) way to do it. You
will get the sum of the two devices, but for simulation purposes, it will be much
closer to reality than anything you can brew up with a formula.

But a programmatic way of generating plausible OCXO noise lets you
generate millions of times more data, over a much larger spectrum of
plausible operating conditions-- so that you can test the stability of
an algorithm over a wider collection of conditions.

It's not a complete replacement for using real data-- you should do
that too.  But it can be a much more comprehensive test.

Bob kb8tq
Tue, May 3, 2022 12:14 AM

Hi

….. except that having done this for many decades on hundreds of
designs, a single data set from a real OCXO is likely to show you
things that millions of simulations from a formula will somehow miss ….

Bob

On May 2, 2022, at 5:13 PM, Greg Maxwell <gmaxwell@gmail.com> wrote:

On Mon, May 2, 2022 at 10:01 PM Bob kb8tq kb8tq@n1k.org wrote:

By far the best approach is to use actual data. Grab a pair of OCXO’s and
compare them. A single mixer setup is one easy ( = cheap ) way to do it. You
will get the sum of the two devices, but for simulation purposes, it will be much
closer to reality than anything you can brew up with a formula.

But a programmatic way of generating plausible OCXO noise lets you
generally millions of times more data, over a much larger spectrum of
plausible operating conditions-- so that you can test the stability of
an algorithm over a wider collection of conditions.

It's not a complete replacement for using real data-- you should do
that too.  But it can be a much more comprehensive test.



Magnus Danielson
Tue, May 3, 2022 1:09 AM

Matthias,

On 2022-05-02 17:12, Matthias Welwarsky wrote:

Dear all,

I'm trying to come up with a reasonably simple model for an OCXO that I can
parametrize to experiment with a GPSDO simulator. For now I have the following
matlab function that "somewhat" does what I think is reasonable, but I would
like a reality check.

This is the matlab code:

function [phase] = synth_osc(samples,da,wn,fn)
  # aging
  phase = (((1:samples)/86400).^2)*da;
  # white noise
  phase += (rand(1,samples)-0.5)*wn;
  # flicker noise
  phase += cumsum(rand(1,samples)-0.5)*fn;
end

There are three components in the model, aging, white noise and flicker noise,
with everything expressed in fractions of seconds.

The first term basically creates a base vector that has a quadratic aging
function. It can be parametrized e.g. from an OCXO datasheet, daily aging
given in s/s per day.

The second term models white noise. It's just a random number scaled to the
desired 1-second uncertainty.

The third term is supposed to model flicker noise. It's basically a random
walk scaled to the desired magnitude.

What you have shown is 2 out of the standard 5 noise-types. The Leeson
model describes how 4 noise-types can be generated; in addition to
those, Random Walk Frequency Modulation is included. For the standard, see
IEEE Std 1139. For the Leeson model, see David Leeson's article from the
special issue of Feb 1966, which I believe you can find on the IEEE UFFC
site. The Wikipedia Allan Deviation article may also be of some guidance.

Noise-types:

White Phase Modulation - your thermal noise on phase

Flicker Phase Modulation - your 1/f flicker noise on phase

Because we have oscillators that integrate inside the resonator's
bandwidth, we also get their integrated equivalents:

White Frequency Modulation - your thermal noise on frequency - forms a
Random Walk Phase Modulation

Flicker Frequency Modulation - Your 1/f flicker noise on frequency

Random Walk Frequency Modulation - Observed on disturbed oscillators, a
random walk in frequency, mostly analyzed for finding rare faults.

You have only modelled the first two. This may or may not be relevant
depending on the bandwidth of your control loop and the dominant noise
there. The phase-noise graph will illustrate
well where the powers of the various noise-types intercept and thus
provide a range over frequency for which one or the other is dominant.
Your control loop bandwidth will high-pass filter this for your locked
oscillator, and low-pass filter it for your reference oscillator/source.
The end result will be a composite of both. The Q-value / damping factor
will control just how much jitter peaking occurs at the cross-over
frequency, and the basic recommendation is to keep it well damped at all
times.

The ADEV variant of this is similar.

Now, to simulate flicker you have a basic problem: whatever processing
you do will end up needing the full time of your simulation data to
maintain the slope. You can naturally cheat and do a much reduced setup
that only provides the 1/f slope of the PSD at and around the area where
it dominates. Notice that this can occur both for the FPM and FFM cases.
Also notice that if we respect the model, these should be completely
independent.

Simulation-wise, you can turn WPM into WFM by integration, and FPM into
FFM by integration. Similarly, WFM becomes RWFM through a second
integration. Just source each of these independently to respect the model.
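That recipe can be sketched as follows, assuming independent unit-variance
white sources; the flicker terms are omitted here since a true 1/f generator
(such as bruiteur) is needed for those, and the scaling coefficients are
placeholders rather than calibrated values:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 86400

# each noise type starts from its own independent white sequence
wpm = rng.standard_normal(n)                         # white PM (phase)
wfm = np.cumsum(rng.standard_normal(n))              # white FM: one integration
rwfm = np.cumsum(np.cumsum(rng.standard_normal(n)))  # RW FM: two integrations

# scale each contribution independently, then sum into one phase vector
phase = 1e-11 * wpm + 1e-12 * wfm + 1e-15 * rwfm
```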

The Leeson model for an oscillator does not have these fully
independent, but for most uses, simulate them fully independently and you
will not make more of a fool of yourself than anyone else usually does,
present company included.

As you see, flicker and random walk are not the same thing. Often "random
walk" implies Random Walk Frequency Modulation.

Another thing. I think the rand function you use will give you a normal
distribution rather than one being Gaussian or at least pseudo-Gaussian.
A very quick-and-dirty trick to get pseudo-Gaussian noise is to take 12
normal distribution random numbers, subtract them pair-wise and then add
the six pairs. The subtraction removes any bias. The 12 samples will
create a normalized deviation of 1.0, but the peak-to-peak range is
limited to within +/- 6, so it may not be relevant for all noise
simulations. Another approach is that of Box-Muller, which creates a much
better shape but comes at some cost in basic processing.
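The trick is easy to sketch and check numerically: a uniform deviate has
variance 1/12, so each pair-wise difference has variance 1/6 and the sum of
six differences has unit variance, zero mean, and a hard +/- 6 bound:

```python
import numpy as np

rng = np.random.default_rng(3)

def pseudo_gauss(size):
    """Quick-and-dirty pseudo-Gaussian: 12 uniform deviates per sample,
    subtracted pair-wise, the six differences summed."""
    u = rng.random((size, 12))
    return (u[:, 0::2] - u[:, 1::2]).sum(axis=1)

x = pseudo_gauss(200_000)
# x has mean ~0, standard deviation ~1, and never exceeds +/- 6
```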

As an example, the following function call would create a phase vector for a
10MHz oscillator with one day worth of samples, with an aging of about 5
Millihertz per day, 10ps/s white noise and 10ns/s of flicker noise:

phase = osc_synth(86400, -44e-6, 10e-12, 10e-9);

What I'd like to know - is that a "reasonable" model or is it just too far off
of reality to be useful? What could be changed or improved?

As I've stated above, I think it misses key noise-types. I really wonder
if the flicker noise model you have is producing real flicker noise.

Using Bob's suggestion of real data is not bad; noise simulation
is hard and a research field in itself. My quick-and-dirty methods get
you started at relatively low cost, except for flicker; flicker always
hurts.

Cheers,
Magnus

Lux, Jim
Tue, May 3, 2022 1:43 AM

On 5/2/22 6:09 PM, Magnus Danielson via time-nuts wrote:

Matthias,

On 2022-05-02 17:12, Matthias Welwarsky wrote:

Dear all,

I'm trying to come up with a reasonably simple model for an OCXO that I can
parametrize to experiment with a GPSDO simulator. For now I have the
following matlab function that "somewhat" does what I think is reasonable,
but I would like a reality check.

This is the matlab code:

function [phase] = synth_osc(samples,da,wn,fn)
  # aging
  phase = (((1:samples)/86400).^2)*da;
  # white noise
  phase += (rand(1,samples)-0.5)*wn;
  # flicker noise
  phase += cumsum(rand(1,samples)-0.5)*fn;
end

There are three components in the model, aging, white noise and flicker
noise, with everything expressed in fractions of seconds.

The first term basically creates a base vector that has a quadratic aging
function. It can be parametrized e.g. from an OCXO datasheet, daily aging
given in s/s per day.

The second term models white noise. It's just a random number scaled to the
desired 1-second uncertainty.

The third term is supposed to model flicker noise. It's basically a random
walk scaled to the desired magnitude.

<snip>

Another thing. I think the rand function you use will give you a
normal distribution rather than one being Gaussian or at least
pseudo-Gaussian.

rand() gives a uniform distribution on [0,1). (Matlab's doc says (0,1),
though I've seen zero but never 1.) What you want is randn(), which
gives a zero-mean, unit-variance Gaussian distribution.

https://www.mathworks.com/help/matlab/ref/randn.html
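For what it's worth, the same distinction holds in NumPy, whose generators
mirror Matlab's rand()/randn() pair:

```python
import numpy as np

rng = np.random.default_rng(4)
u = rng.random(100_000)           # uniform on [0, 1), like Matlab's rand()
g = rng.standard_normal(100_000)  # zero mean, unit variance, like randn()
```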

A very quick-and-dirty trick to get pseudo-Gaussian noise is to take
12 normal distribution random numbers, subtract them pair-wise and
then add the six pairs.

That would be for uniform distribution. A time-honored approach from the
IBM Scientific Subroutine Package.

The subtraction removes any bias. The 12 samples will create a
normalized deviation of 1.0, but the peak-to-peak range is limited to
within +/- 6, so it may not be relevant for all noise
simulations. Another approach is that of Box-Muller, which creates a
much better shape but comes at some cost in basic processing.

MD
Magnus Danielson
Tue, May 3, 2022 2:03 AM

Hi Jim,

Thanks for the corrections. Was way too tired to get the uniform and
normal distributions right.

rand() is, by classical UNIX tradition, generated as an unsigned
integer divided by the appropriate power of two (2^32), so the maximum
value will never appear; this is also why a small bias is introduced,
since 0 can be reached but 1 cannot.

In practice the bias is small, but care is taken nevertheless.

Cheers,
Magnus

On 2022-05-03 03:43, Lux, Jim wrote:

On 5/2/22 6:09 PM, Magnus Danielson via time-nuts wrote:

Matthias,

On 2022-05-02 17:12, Matthias Welwarsky wrote:

Dear all,

I'm trying to come up with a reasonably simple model for an OCXO
that I can
parametrize to experiment with a GPSDO simulator. For now I have the
following
matlab function that "somewhat" does what I think is reasonable, but
I would
like a reality check.

This is the matlab code:

function [phase] = synth_osc(samples,da,wn,fn)
# aging
phase = (((1:samples)/86400).^2)*da;
# white noise
phase += (rand(1,samples)-0.5)*wn;
# flicker noise
phase += cumsum(rand(1,samples)-0.5)*fn;
end

There are three components in the model, aging, white noise and
flicker noise,
with everything expressed in fractions of seconds.

The first term basically creates a base vector that has a quadratic
aging
function. It can be parametrized e.g. from an OCXO datasheet, daily
aging
given in s/s per day.

The second term models white noise. It's just a random number scaled
to the
desired 1-second uncertainty.

The third term is supposed to model flicker noise. It's basically a
random
walk scaled to the desired magnitude.

<snip>

Another thing. I think the rand function you use will give you a
normal distribution rather than one being Gaussian or at least
pseudo-Gaussian.

rand() gives uniform distribution from [0,1). (Matlab's doc says
(0,1), but I've seen zero, but never seen 1.) What you want is
randn(), which gives a zero mean, unity variance Gaussian distribution.

https://www.mathworks.com/help/matlab/ref/randn.html

A very quick-and-dirty trick to get pseudo-Gaussian noise is to take
12 normal distribution random numbers, subtract them pair-wise and
then add the six pairs.

That would be for uniform distribution. A time-honored approach from
the IBM Scientific Subroutine Package.

The subtraction removes any bias. The 12 samples will create a
normalized deviation of 1.0, but the peak-to-peak range is limited to
within +/- 12, so it may not be relevant for all noise
simulations. Another approach is that of Box-Jenkins, which creates a
much better shape but comes at some cost in basic processing.


time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe
send an email to time-nuts-leave@lists.febo.com
To unsubscribe, go to and follow the instructions there.

LJ
Lux, Jim
Tue, May 3, 2022 3:19 AM

On 5/2/22 7:03 PM, Magnus Danielson via time-nuts wrote:

Hi Jim,

Thanks for the corrections. Was way too tired to get the uniform and
normal distributions right.

rand() is, by classical UNIX tradition, generated as an unsigned
integer divided by the appropriate power of two (2^32), so the maximum
value will never appear; this is also why a small bias is introduced,
since 0 can be reached but 1 cannot.

I'll bet it's "pre-Unix" -

System/360 Scientific Subroutine Package Programmer's Manual, Version
III, DOI: 10.3247/SL2Soft08.001 says 1968 version, but I'm pretty sure
the SSP is older (it is Version 3 after all)

I used it on an IBM 1130 as a mere youth in 1969

http://media.ibm1130.org/1130-106-ocr.pdf

    SUBROUTINE RANDU(IX,IY,YFL)
    IY = IX*899
    IF (IY) 5,6,6
5   IY=IY+32767+1
6   YFL=IY
    YFL=YFL/32767.
    RETURN
    END

GAUSS does normal distribution, with this comment:

Y approaches a true normal distribution asymptotically as K approaches
infinity. For this subroutine, K was chosen as 12 to reduce execution time.

It also helps that the variance of a uniform distribution is 1/12, so
summing 12 numbers produces a distribution with a variance of 1.
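That variance fact can be verified both analytically and by simulation. A quick sketch in Python (the SSP routine itself is Fortran), mimicking the GAUSS approach of summing 12 uniforms and subtracting 6 to center:

```python
import numpy as np

# Var of U(0,1) is E[X^2] - E[X]^2 = 1/3 - 1/4 = 1/12, so the sum
# of 12 independent uniforms has variance 12 * (1/12) = 1;
# subtracting 6 centers it, which is what the SSP GAUSS routine does.
assert abs((1/3 - 1/4) - 1/12) < 1e-15

rng = np.random.default_rng(1)
s = rng.random((200_000, 12)).sum(axis=1) - 6.0
print(s.mean(), s.var())  # close to 0 and 1
```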

But it's older than that: I found a reference to it in some
documentation for the 7090 from 1961. Since Unix wasn't even a name until
1970...

In practice the bias is small, but care is taken never the less.

Yes, that's a clever technique.

And the less said about the actual "randomness" of generators from that
era, the better.

Cheers,
Magnus

MW
Matthias Welwarsky
Tue, May 3, 2022 8:57 AM

Dear all,

thanks for your kind comments, corrections and suggestions. Please forgive if
I don't reply to all of your comments individually. Summary response follows:

Attila - yes, I realize temperature dependence is one key parameter. I model
this meanwhile as a frequency shift over time.

Bob - I agree in principle, real world data is a good reality check for any
model, but there are only so few datasets available and most of the time they
don't contain associated environmental data. You get a mix of effects without
any chance to isolate them.

Magnus, Jim - thanks a lot. Your post encouraged me to look especially into
flicker noise and how to generate it in the time domain. I now use randn() and
a low-pass filter. Also, I think I understood now how to create phase vs
frequency noise.

I've some Timelab screenshots attached, ADEV and frequency plot of a data set
I generated with the following matlab function, plus some temperature response
modeled outside of this function.

function [phase] = synth_osc(samples,da,wpn,wfn,fpn,ffn)
# low-pass butterworth filter for 1/f noise generator
[b,a] = butter(1, 0.1);

# aging
phase = (((1:samples)/86400).^2)*da;
# white phase noise
phase += (randn(1, samples))*wpn;
# white frequency noise
phase += cumsum(randn(1, samples))*wfn;
# 1/f phase noise
phase += filter(b,a,randn(1,samples))*fpn;
# 1/f frequency noise
phase += cumsum(filter(b,a,randn(1,samples))*ffn);

end

osc = synth_osc(400000, -50e-6, 5e-11, 1e-11, 5e-11, 5e-11);
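For readers without Octave, a Python/NumPy translation of the function above. The Butterworth coefficients are hard-coded from the bilinear transform so no SciPy is needed; they should match Octave's butter(1, 0.1), but treat them as an assumption.

```python
import numpy as np

def lowpass(x, b=(0.13673, 0.13673), a1=-0.72654):
    """First-order Butterworth low-pass, cutoff at 0.1 x Nyquist
    (coefficients precomputed via the bilinear transform)."""
    y = np.empty_like(x)
    y[0] = b[0] * x[0]
    for n in range(1, len(x)):
        y[n] = b[0] * x[n] + b[1] * x[n - 1] - a1 * y[n - 1]
    return y

def synth_osc(samples, da, wpn, wfn, fpn, ffn, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(1, samples + 1)
    phase = (t / 86400.0) ** 2 * da                            # aging
    phase = phase + rng.standard_normal(samples) * wpn         # white PM
    phase = phase + np.cumsum(rng.standard_normal(samples)) * wfn   # white FM
    phase = phase + lowpass(rng.standard_normal(samples)) * fpn     # "1/f" PM
    phase = phase + np.cumsum(lowpass(rng.standard_normal(samples)) * ffn)  # "1/f" FM
    return phase

osc = synth_osc(400_000, -50e-6, 5e-11, 1e-11, 5e-11, 5e-11)
```

Note that, as Magnus points out later in the thread, a first-order low-pass only approximates a true 1/f spectral slope over a limited frequency range.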

Thanks.

On Montag, 2. Mai 2022 17:12:47 CEST Matthias Welwarsky wrote:

Dear all,

I'm trying to come up with a reasonably simple model for an OCXO that I can
parametrize to experiment with a GPSDO simulator. For now I have the
following matlab function that "somewhat" does what I think is reasonable,
but I would like a reality check.

This is the matlab code:

function [phase] = synth_osc(samples,da,wn,fn)
# aging
phase = (((1:samples)/86400).^2)*da;
# white noise
phase += (rand(1,samples)-0.5)*wn;
# flicker noise
phase += cumsum(rand(1,samples)-0.5)*fn;
end

There are three components in the model, aging, white noise and flicker
noise, with everything expressed in fractions of seconds.

The first term basically creates a base vector that has a quadratic aging
function. It can be parametrized e.g. from an OCXO datasheet, daily aging
given in s/s per day.

The second term models white noise. It's just a random number scaled to the
desired 1-second uncertainty.

The third term is supposed to model flicker noise. It's basically a random
walk scaled to the desired magnitude.

As an example, the following function call would create a phase vector for a
10MHz oscillator with one day worth of samples, with an aging of about 5
Millihertz per day, 10ps/s white noise and 10ns/s of flicker noise:

phase = osc_synth(86400, -44e-6, 10e-12, 10e-9);

What I'd like to know - is that a "reasonable" model or is it just too far
off of reality to be useful? What could be changed or improved?

Best regards,
Matthias



LJ
Lux, Jim
Tue, May 3, 2022 3:06 PM

On 5/3/22 1:57 AM, Matthias Welwarsky wrote:

Magnus, Jim - thanks a lot. Your post encouraged me to look especially into
flicker noise an how to generate it in the time domain. I now use randn() and
a low-pass filter. Also, I think I understood now how to create phase vs
frequency noise.

There's some papers out there (mentioned on the list in the past) about
synthesizing colored noise. Taking "White" noise and running it through
a filter is one approach. Another is doing an inverse FFT, but that has
the issue of needing to know how many samples you need. Although I
suppose one could do some sort of continuous overlapping window scheme
(which, when it comes right down to it is just another way of filtering)).
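The inverse-FFT approach mentioned above can be sketched as follows, in Python/NumPy for convenience: build a one-sided spectrum whose amplitude falls as 1/sqrt(f), so that power falls as 1/f, give each bin a random phase, and inverse-transform.

```python
import numpy as np

def flicker_ifft(n, seed=0):
    """Approximate 1/f (flicker) noise of length n via inverse FFT.
    Amplitude proportional to 1/sqrt(f) gives a PSD proportional to 1/f."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n)             # one-sided frequency grid
    amp = np.zeros(freqs.size)
    amp[1:] = 1.0 / np.sqrt(freqs[1:])     # skip DC (f = 0)
    spec = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, freqs.size))
    x = np.fft.irfft(spec, n)              # real-valued time series
    return x / x.std()                     # normalize to unit deviation

noise = flicker_ifft(4096)
```

This illustrates the caveat raised above: the record length n must be chosen up front, and extending the run means redoing the transform (or resorting to an overlapping-window scheme).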

https://dl.acm.org/doi/abs/10.1145/368273.368574

or

C. A. Greenhall. Models for Flicker Noise in DSN Oscillators. The Deep
Space Network Progress Report, Volume XIII, pp. 183-193, February 15, 1973.
https://ipnpr.jpl.nasa.gov/progress_report/XIII/XIIIY.PDF

W. J. Hurd. Efficient Generation of Statistically Good Pseudonoise by
Linearly Interconnected Shift Registers. The Deep Space Network Progress
Report, Volume XI, pp. 1-1, October 15, 1972.
https://ipnpr.jpl.nasa.gov/progress_report/XI/XIQ.PDF

A Wideband Digital Pseudo-Gaussian Noise Generator
W. J. Hurd
https://ipnpr.jpl.nasa.gov/progress_report/III/IIIP.PDF

MD
Magnus Danielson
Tue, May 3, 2022 8:08 PM

Dear Matthias,

On 2022-05-03 10:57, Matthias Welwarsky wrote:

Dear all,

thanks for your kind comments, corrections and suggestions. Please forgive if
I don't reply to all of your comments individually. Summary response follows:

Attila - yes, I realize temperature dependence is one key parameter. I model
this meanwhile as a frequency shift over time.

Bob - I agree in principle, real world data is a good reality check for any
model, but there are only so few datasets available and most of the time they
don't contain associated environmental data. You get a mix of effects without
any chance to isolate them.

Environmental effects tend to be recognizable by their periodic
behavior, i.e. the period of the day and the period of the heating/AC. Real
oscillator data tends to be quite relevant, as you can simulate what it
would mean to lock that oscillator up. TvB made a simulator on those
grounds. Good exercise.

Magnus, Jim - thanks a lot. Your post encouraged me to look especially into
flicker noise and how to generate it in the time domain. I now use randn() and
a low-pass filter. Also, I think I understood now how to create phase vs
frequency noise.

Happy to get you up to speed on that.

One particular name to check out articles for is Charles "Chuck"
Greenhall, JPL.

For early work, also look at James "Jim" Barnes, NBS (later named NIST).

Both of these fine gentlemen are recommended reading; almost anything
they write on the topic, actually.

I've some Timelab screenshots attached, ADEV and frequency plot of a data set
I generated with the following matlab function, plus some temperature response
modeled outside of this function.

function [phase] = synth_osc(samples,da,wpn,wfn,fpn,ffn)
# low-pass butterworth filter for 1/f noise generator
[b,a] = butter(1, 0.1);

Notice that 1/f refers to the power spectral density; a straight
low-pass filter will give you a 1/f^2 power spectrum, just like an
integration slope.

One approach to a flicker filter is an IIR filter with a weighting of
1/sqrt(n+1), where n is the tap index, fed with normal noise. You need to
"flush out" the state before you use it, so that you have a long history to
help the shaping. For a 1024-sample series, I generate 2048 samples and only
use the last 1024. Efficient? No. Quick-and-dirty? Yes.
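The 1/sqrt(n+1) weighting Magnus describes can be sketched as a truncated convolution filter applied to white normal noise; the truncation length here is an assumption, and the generate-2n-keep-the-last-n warm-up follows his quick-and-dirty recipe (Python/NumPy for convenience):

```python
import numpy as np

def flicker_quickdirty(n, taps=512, seed=0):
    """Approximate 1/f noise: filter white normal noise with impulse
    response h[k] = 1/sqrt(k+1), truncated to `taps` coefficients;
    generate 2n samples and keep only the last n to flush the state."""
    rng = np.random.default_rng(seed)
    h = 1.0 / np.sqrt(np.arange(taps) + 1.0)   # 1/sqrt(n+1) weighting
    w = rng.standard_normal(2 * n)             # white Gaussian input
    y = np.convolve(w, h)[:2 * n]              # filtered, same length
    return y[n:]                               # discard the warm-up half

x = flicker_quickdirty(1024)
```

The truncation limits how far down in frequency the 1/f slope holds, which is why the long warm-up history matters.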

The pole/zero type of filters of Barnes lets you synthesize a 1/f slope
by balancing the properties. How dense, and thus how small the ripples you
get, you decide. Greenhall made the point of recording the state, and
provides BASIC code that calculates the state rather than running an
infinite sequence to let the initial state converge to the 1/f state.

Greenhall published an article illustrating a whole range of methods to
do it. He wrote the simulation code used at JPL for their clock
development.

Flicker noise is indeed picky.

Cheers,
Magnus

# aging
phase = (((1:samples)/86400).^2)*da;
# white phase noise
phase += (randn(1, samples))*wpn;
# white frequency noise
phase += cumsum(randn(1, samples))*wfn;
# 1/f phase noise
phase += filter(b,a,randn(1,samples))*fpn;
# 1/f frequency noise
phase += cumsum(filter(b,a,randn(1,samples))*ffn);

end

osc = synth_osc(400000, -50e-6, 5e-11, 1e-11, 5e-11, 5e-11);

Thanks.

On Montag, 2. Mai 2022 17:12:47 CEST Matthias Welwarsky wrote:

Dear all,

I'm trying to come up with a reasonably simple model for an OCXO that I can
parametrize to experiment with a GPSDO simulator. For now I have the
following matlab function that "somewhat" does what I think is reasonable,
but I would like a reality check.

This is the matlab code:

function [phase] = synth_osc(samples,da,wn,fn)
# aging
phase = (((1:samples)/86400).^2)*da;
# white noise
phase += (rand(1,samples)-0.5)*wn;
# flicker noise
phase += cumsum(rand(1,samples)-0.5)*fn;
end

There are three components in the model, aging, white noise and flicker
noise, with everything expressed in fractions of seconds.

The first term basically creates a base vector that has a quadratic aging
function. It can be parametrized e.g. from an OCXO datasheet, daily aging
given in s/s per day.

The second term models white noise. It's just a random number scaled to the
desired 1-second uncertainty.

The third term is supposed to model flicker noise. It's basically a random
walk scaled to the desired magnitude.

As an example, the following function call would create a phase vector for a
10MHz oscillator with one day worth of samples, with an aging of about 5
Millihertz per day, 10ps/s white noise and 10ns/s of flicker noise:

phase = osc_synth(86400, -44e-6, 10e-12, 10e-9);

What I'd like to know - is that a "reasonable" model or is it just too far
off of reality to be useful? What could be changed or improved?

Best regards,
Matthias



BK
Bob kb8tq
Tue, May 3, 2022 9:23 PM

Hi

The gotcha is that there are a number of very normal OCXO “behaviors” that are not
covered by any of the standard statistical models. Coping with these issues is at least
as important as working with the stuff that is covered by any of the standard statistical
models ….

Bob

On May 3, 2022, at 3:57 AM, Matthias Welwarsky time-nuts@welwarsky.de wrote:

Dear all,

thanks for your kind comments, corrections and suggestions. Please forgive if
I don't reply to all of your comments individually. Summary response follows:

Attila - yes, I realize temperature dependence is one key parameter. I model
this meanwhile as a frequency shift over time.

Bob - I agree in principle, real world data is a good reality check for any
model, but there are only so few datasets available and most of the time they
don't contain associated environmental data. You get a mix of effects without
any chance to isolate them.

Magnus, Jim - thanks a lot. Your post encouraged me to look especially into
flicker noise and how to generate it in the time domain. I now use randn() and
a low-pass filter. Also, I think I understood now how to create phase vs
frequency noise.

I've some Timelab screenshots attached, ADEV and frequency plot of a data set
I generated with the following matlab function, plus some temperature response
modeled outside of this function.

function [phase] = synth_osc(samples,da,wpn,wfn,fpn,ffn)
# low-pass butterworth filter for 1/f noise generator
[b,a] = butter(1, 0.1);

# aging
phase = (((1:samples)/86400).^2)*da;
# white phase noise
phase += (randn(1, samples))*wpn;
# white frequency noise
phase += cumsum(randn(1, samples))*wfn;
# 1/f phase noise
phase += filter(b,a,randn(1,samples))*fpn;
# 1/f frequency noise
phase += cumsum(filter(b,a,randn(1,samples))*ffn);

end

osc = synth_osc(400000, -50e-6, 5e-11, 1e-11, 5e-11, 5e-11);

Thanks.

On Montag, 2. Mai 2022 17:12:47 CEST Matthias Welwarsky wrote:

Dear all,

I'm trying to come up with a reasonably simple model for an OCXO that I can
parametrize to experiment with a GPSDO simulator. For now I have the
following matlab function that "somewhat" does what I think is reasonable,
but I would like a reality check.

This is the matlab code:

function [phase] = synth_osc(samples,da,wn,fn)
# aging
phase = (((1:samples)/86400).^2)*da;
# white noise
phase += (rand(1,samples)-0.5)*wn;
# flicker noise
phase += cumsum(rand(1,samples)-0.5)*fn;
end

There are three components in the model, aging, white noise and flicker
noise, with everything expressed in fractions of seconds.

The first term basically creates a base vector that has a quadratic aging
function. It can be parametrized e.g. from an OCXO datasheet, daily aging
given in s/s per day.

The second term models white noise. It's just a random number scaled to the
desired 1-second uncertainty.

The third term is supposed to model flicker noise. It's basically a random
walk scaled to the desired magnitude.

As an example, the following function call would create a phase vector for a
10MHz oscillator with one day worth of samples, with an aging of about 5
Millihertz per day, 10ps/s white noise and 10ns/s of flicker noise:

phase = osc_synth(86400, -44e-6, 10e-12, 10e-9);

What I'd like to know - is that a "reasonable" model or is it just too far
off of reality to be useful? What could be changed or improved?

Best regards,
Matthias


time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an
email to time-nuts-leave@lists.febo.com To unsubscribe, go to and follow
the instructions there.

[Attachments: oscillator-freq.png, oscillator.png]

AK
Attila Kinali
Wed, May 4, 2022 4:49 PM

On Tue, 3 May 2022 08:06:22 -0700
"Lux, Jim" jim@luxfamily.com wrote:

There's some papers out there (mentioned on the list in the past) about
synthesizing colored noise. Taking "White" noise and running it through
a filter is one approach. Another is doing an inverse FFT, but that has
the issue of needing to know how many samples you need. Although I
suppose one could do some sort of continuous overlapping window scheme
(which, when it comes right down to it, is just another way of filtering).

https://dl.acm.org/doi/abs/10.1145/368273.368574

This one does not work with the noises relevant to oscillators.
First of all, 1/f^a noise is not stationary: the conditional
expectation E[X(t)|X(t_0)], knowing the value X(t_0) at time t_0,
is E[X(t)|X(t_0)] = X(t_0). I.e., it is neither the mean of X(t)
nor its value X(0) at some start time. Also, the ensemble
variance grows over time (IIRC with t^a).
Yes, this also means that 1/f^a noise is not ergodic, and thus
most of what you learned in statistics classes does not apply!
You have been warned! ;-)

Second, 1/f^a noise has singularities in its auto-covariance
for a= 2*n + 1 (n=0,1,2,...). Most notably for a=1 and a=3.
See Mandelbrot and Van Ness' work [1] on this.

Over the years, I've tried a few methods of generating noise
for oscillator simulation, but only two had any practical
value (the others had flaws somewhere that made them sub-optimal):

  1. FFT-based systems
  2. Filter-based systems

FFT-based systems take a white, normally distributed noise source,
Fourier transform it, filter it in the frequency domain and transform
it back. Runtime is dominated by the FFT and thus O(n*log(n)).
There was a nice paper by either Barnes or Greenhall (or both?)
on this, which I currently seem unable to find. This is also the
method employed by the bruiteur tool from sigma-theta.

The biggest disadvantage of this method is that it operates on the
whole sample length multiple times, i.e. it becomes slow very
quickly, especially when the whole sample length is larger
than main memory. But it delivers exact results, with exactly
the spectrum / time-correlation you want.
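A minimal NumPy sketch of such an FFT-based generator, shaping white Gaussian noise to a 1/f^alpha power spectrum (the function name, exponent and sample count here are illustrative, not from the tools mentioned above):

```python
import numpy as np

def fft_colored_noise(n, alpha, seed=None):
    """Shape white Gaussian noise to a 1/f^alpha power spectrum.

    The white sequence is taken to the frequency domain, its
    amplitude spectrum is scaled by f^(-alpha/2) (so the power
    spectrum falls as 1/f^alpha), and transformed back.
    """
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                      # sidestep the DC singularity
    spectrum *= f ** (-alpha / 2.0)  # amplitude shaping
    return np.fft.irfft(spectrum, n)

# e.g. flicker (1/f) noise, 2^16 samples:
flicker = fft_colored_noise(2**16, 1.0, seed=42)
```

Note that the whole vector lives in memory at once, which is exactly the scaling limitation described above.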

Filter-based approaches use an approximation with linear
filters to get the same effect. They can achieve O(n*log(n)) as
well, with O(log(n)) memory consumption if done right [2]
(based on [3], which explains the filter in simpler terms).
Be aware that it's only an approximation and that the spectrum
will have some wiggles, though this shouldn't be a problem
in practice. Speed is, in my experience, slightly quicker
than FFT-based approaches, as less memory is touched. But
it uses quite a bit more randomness, and thus the random
number generator becomes the bottleneck. I have to admit,
though, that the implementation I had for comparison was far from
well coded, so take this with a grain of salt.

There is also the option of doing fractional integration, as
proposed by Barnes and Allan in [4]. Unfortunately,
the naive implementation of this approach leads to O(n^2)
runtime and O(n) memory consumption. There are faster
algorithms for fractional integration (e.g. [5]), and
I have a few ideas of my own, but I haven't had time to try
any of them yet.

And while we are at it, another warning: rand(3), and by extension
all random number generators based on it, has a rather small
number of states. IIRC the one implemented in glibc has only 31
bits of state. While two billion states sounds like a lot, it
isn't much for simulating noise in oscillators. Even
algorithms that are efficient in terms of randomness consume
tens of bytes of random numbers per sample produced. If a
normal distribution is approximated by averaging samples, that
quickly goes into the hundreds of bytes. And suddenly the
random number generator wraps around after just a few million
samples, or even fewer with a filter-based approach.

Thus, I would highly recommend using a PRNG with a large
state, such as xoroshiro1024* [6]. That the algorithm
does not pass all randomness tests with a perfect score isn't
much of an issue in this application, as long as the random
numbers are sufficiently uncorrelated (which they are).

I also recommend using an efficient Gaussian random number
generator instead of averaging. Not only does averaging
impose a maximum and minimum value based on the number of
samples averaged over, it is also very slow because it uses
a lot of random numbers. A better approach is the
Ziggurat algorithm [7], which uses only about 72-80 bits
of entropy per generated sample.
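The clipping effect of the averaging approach is easy to demonstrate: the classic "sum of 12 uniforms" approximation to a standard normal can never produce a value beyond plus or minus 6, so the Gaussian tails that matter for rare phase excursions simply do not exist (a small illustrative check in NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Classic CLT shortcut: sum 12 U(0,1) samples and subtract 6 to
# approximate N(0, 1).  The output is hard-bounded to [-6, 6], so
# events beyond 6 sigma can never occur, no matter the sample count.
approx_normal = rng.random((100_000, 12)).sum(axis=1) - 6.0

print(approx_normal.min(), approx_normal.max())  # never outside [-6, 6]
```

It also consumes 12 uniform draws per output sample, which is the randomness cost mentioned above.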

And before you ask: yes, sigma-theta/bruiteur uses xoroshiro1024*
and the Ziggurat algorithm ;-)

		Attila Kinali

[1] Fractional Brownian Motions, Fractional Noises and Applications,
by Mandelbrot and Van Ness, 1968
http://users.math.yale.edu/~bbm3/web_pdfs/052fractionalBrownianMotions.pdf

[2] Efficient Generation of 1/f^a Noise Sequences for Pulsed Radar
Simulation, by Brooker and Inggs, 2010

[3] Efficient modeling of 1/f^a Noise Using Multirate Process,
by Park, Muhammad, and Roy, 2006

[4] A Statistical Model of Flicker Noise, by Barnes and Allan, 1966

[5] A high-speed algorithm for computation of fractional
differentiation and fractional integration,
by Fukunaga and Shimizu, 2013
http://dx.doi.org/10.1098/rsta.2012.0152

[6] https://prng.di.unimi.it/

[7] The Ziggurat Method for Generating Random Variables,
by Marsaglia and Tsang, 2000
http://dx.doi.org/10.18637/jss.v005.i08

The driving force behind research is the question: "Why?"
There are things we don't understand and things we always
wonder about. And that's why we do research.
-- Kobayashi Makoto

AK
Attila Kinali
Wed, May 4, 2022 4:50 PM

Hoi Bob,

On Tue, 3 May 2022 16:23:27 -0500
Bob kb8tq kb8tq@n1k.org wrote:

The gotcha is that there are a number of very normal OCXO “behaviors” that are not
covered by any of the standard statistical models.

Could you elaborate a bit on what these "normal behaviours" are?

		Attila Kinali

--
The driving force behind research is the question: "Why?"
There are things we don't understand and things we always
wonder about. And that's why we do research.
-- Kobayashi Makoto

MW
Matthias Welwarsky
Wed, May 4, 2022 5:31 PM

Magnus, Attila, Bob,

thanks again for the inspirational posts, truly appreciated.

However, I'm looking for something reasonably simple, just for the purpose of
GPSDO simulation. Here most of the finer details of noise are not very
relevant; I don't really care about the PSD, for example. What I'm looking for
is a tool that produces a phase vector that just resembles what a real
oscillator is doing, viewed from afar with a little squinting. For example:

synth_osc(N, -1e-8, 2.5e-11, 2e-11, 0, 0);

This gives me a vector that, as far as Allan deviation is concerned, looks
remarkably like an LPRO-101. With some other parameters I can produce a
credible resemblance to a PRS10. Add a bit of temperature wiggle and it's
enough to run through the simulator and tune parameters; the finer details
are completely lost on a GPSDO anyway. Reaction to transients, especially from
GPS, is much more interesting, which is why the logical next step is to
produce a GPS (or GNSS) phase vector that can be parametrized and spiked with
some oddities, to see how different control loop parameters influence the
output. But I have no immediate need for that; the GPS data files on
Leapsecond.com are enough for now.

Regards,
Matthias

On Tuesday, 3 May 2022 22:08:49 CEST Magnus Danielson via time-nuts wrote:

Dear Matthias,

On 2022-05-03 10:57, Matthias Welwarsky wrote:

Dear all,

thanks for your kind comments, corrections and suggestions. Please forgive
if I don't reply to all of your comments individually. Summary response
follows:

Attila - yes, I realize temperature dependence is one key parameter. I
model this meanwhile as a frequency shift over time.

Bob - I agree in principle, real world data is a good reality check for
any
model, but there are only so few datasets available and most of the time
they don't contain associated environmental data. You get a mix of
effects without any chance to isolate them.

Environmental effects tend to be recognizable by their periodic
behavior, i.e. the period of the day and the period of the heating/AC. Real
oscillator data tends to be quite relevant, as you can simulate what it
would mean to lock that oscillator up. TvB made a simulator on those
grounds. Good exercise.

Magnus, Jim - thanks a lot. Your post encouraged me to look especially
into
flicker noise and how to generate it in the time domain. I now use randn()
and a low-pass filter. Also, I think I understood now how to create phase
vs frequency noise.

Happy to get you up to speed on that.

One particular name to check out articles for is Charles "Chuck"
Greenhall, JPL.

For early work, also look at James "Jim" Barnes, NBS (later renamed NIST).

Both of these fine gentlemen are recommended reading; almost anything
they write on the topic, actually.

I've some Timelab screenshots attached, ADEV and frequency plot of a data
set I generated with the following matlab function, plus some temperature
response modeled outside of this function.

function [phase] = synth_osc(samples,da,wpn,wfn,fpn,ffn)

# low-pass butterworth filter for 1/f noise generator
[b,a] = butter(1, 0.1);

Notice that 1/f refers to the power spectral density; a straight
integrating filter will give you 1/f^2 in the power spectrum, i.e. an
integration slope.

One approach to a flicker filter is an IIR filter with tap weights of
1/sqrt(n+1), where n is the tap index, fed with normal noise. You need to
"flush out" the state before you use it, so you have a long history to help
the shaping. For a 1024-sample series, I generate 2048 samples and only use
the last 1024. Efficient? No. Quick-and-dirty? Yes.
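As I understand that quick-and-dirty recipe, it can be sketched as a convolution in NumPy (the function name is made up; the 1/sqrt(k+1) taps, the 2x oversized run and keeping only the second half follow the description above):

```python
import numpy as np

def flicker_qd(n, seed=None):
    """Quick-and-dirty 1/f noise per the recipe above: convolve
    white Gaussian noise with taps weighted 1/sqrt(k+1), generate
    twice the needed length and keep only the second half so the
    filter state has had time to settle."""
    rng = np.random.default_rng(seed)
    taps = 1.0 / np.sqrt(np.arange(2 * n) + 1.0)
    white = rng.standard_normal(2 * n)
    shaped = np.convolve(white, taps)[: 2 * n]  # causal filter output
    return shaped[n:]                           # discard the warm-up half

x = flicker_qd(1024, seed=1)
```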

The pole/zero type of filters of Barnes lets you synthesize a 1/f slope
by balancing the properties. How dense, and thus how small the ripples
you get, is up to you. Greenhall made the point of recording the state, and
provides BASIC code that calculates the state rather than running an infinite
sequence to let the initial state converge to the 1/f state.

Greenhall published an article illustrating a whole range of methods to
do it. He wrote the simulation code used at JPL for their clock
development.

Flicker noise is indeed picky.

Cheers,
Magnus

# aging
phase = (((1:samples)/86400).^2)*da;
# white phase noise
phase += (randn(1, samples))*wpn;
# white frequency noise
phase += cumsum(randn(1, samples))*wfn;
# 1/f phase noise
phase += filter(b,a,randn(1,samples))*fpn;
# 1/f frequency noise
phase += cumsum(filter(b,a,randn(1,samples))*ffn);

end

osc = synth_osc(400000, -50e-6, 5e-11, 1e-11, 5e-11, 5e-11);

Thanks.

On Monday, 2 May 2022 17:12:47 CEST Matthias Welwarsky wrote:

Dear all,

I'm trying to come up with a reasonably simple model for an OCXO that I
can
parametrize to experiment with a GPSDO simulator. For now I have the
following matlab function that "somewhat" does what I think is
reasonable,
but I would like a reality check.

This is the matlab code:

function [phase] = synth_osc(samples,da,wn,fn)
# aging
phase = (((1:samples)/86400).^2)*da;
# white noise
phase += (rand(1,samples)-0.5)*wn;
# flicker noise
phase += cumsum(rand(1,samples)-0.5)*fn;
end

There are three components in the model: aging, white noise and flicker
noise, with everything expressed in fractions of seconds.

The first term basically creates a base vector that has a quadratic aging
function. It can be parametrized e.g. from an OCXO datasheet, daily aging
given in s/s per day.

The second term models white noise. It's just a random number scaled to
the
desired 1-second uncertainty.

The third term is supposed to model flicker noise. It's basically a
random
walk scaled to the desired magnitude.

As an example, the following function call would create a phase vector
for a 10 MHz oscillator with one day's worth of samples, with an aging of
about 5 millihertz per day, 10 ps/s white noise and 10 ns/s of flicker
noise:

phase = osc_synth(86400, -44e-6, 10e-12, 10e-9);

What I'd like to know - is that a "reasonable" model or is it just too
far
off of reality to be useful? What could be changed or improved?

Best regards,
Matthias







BK
Bob kb8tq
Thu, May 5, 2022 12:17 AM

Hi

The most basic is the “phase pop”, which is not modeled by any of the
normal noise formulas. The further you dig in, the more you find things
that the models really don’t cover.

Bob

On May 4, 2022, at 11:50 AM, Attila Kinali attila@kinali.ch wrote:

Hoi Bob,

On Tue, 3 May 2022 16:23:27 -0500
Bob kb8tq kb8tq@n1k.org wrote:

The gotcha is that there are a number of very normal OCXO “behaviors” that are not
covered by any of the standard statistical models.

Could you elaborate a bit on what these "normal behaviours" are?

		Attila Kinali

--
The driving force behind research is the question: "Why?"
There are things we don't understand and things we always
wonder about. And that's why we do research.
-- Kobayashi Makoto



AK
Attila Kinali
Thu, May 5, 2022 4:01 PM

On Wed, 04 May 2022 19:31:08 +0200
Matthias Welwarsky time-nuts@welwarsky.de wrote:

However. I'm looking for something reasonably simple just for the purpose of
GPSDO simulation. Here, most of the finer details of noise are not very
relevant.

If you want it very simple, then I would only add white noise (i.e. uncorrelated
Gaussian noise) and 1/f^4 noise (integrate white noise twice) at the appropriate level.
This should give you the simplest, yet still reasonably faithful, approximation.
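That minimal recipe can be sketched as follows (white phase noise plus twice-integrated white noise for the 1/f^4 component; the function name and amplitude parameters are placeholders of my own choosing):

```python
import numpy as np

def synth_osc_minimal(n, wpn=1e-11, rwfn=1e-13, seed=0):
    """White phase noise plus 1/f^4 (random-walk frequency) noise.

    The 1/f^4 phase component is white noise integrated twice:
    once to get frequency random walk, once more to get phase.
    """
    rng = np.random.default_rng(seed)
    phase = rng.normal(0.0, wpn, n)                          # white phase noise
    phase += np.cumsum(np.cumsum(rng.normal(0.0, rwfn, n)))  # 1/f^4 in phase
    return phase
```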

But be aware that you will miss a lot of the corner cases, of the dynamic
things that happen, which make the control loop design of a GPSDO challenging.

I don't really care for PSD, for example. What I'm looking for is a
tool that can produce a phase vector that just resembles what a real
oscillator is doing, looking from afar, with a little squinting.

This is what we are getting at: a pure noise simulation might not get
you close enough for a faithful simulation to design a GPSDO control loop.

This gives me a vector that, as far as Allan deviation is concerned, looks
remarkably like an LPRO-101. With some other parameters I can produce a
credible resemblance to a PRS10.

Oh.. a big fat warning here: ADEV and friends are very bad tools to check
whether two oscillator data sets are similar in behavior or not. *ADEV
do a lot of data compression. And it's logarithmic too! You enter a million
data points and get ~50 data points out. You enter 10 million data points and
get ~60 data points out. There is a lot lost in this compression. Unless
you make some quite limiting assumptions on what kind of behaviour the
oscillators are allowed to have, *DEV cannot be used for validation.
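To illustrate the compression: an overlapping ADEV is evaluated at a handful of log-spaced taus per decade, so a million phase samples map onto a few dozen points. A hedged sketch of the computation (my own implementation of the standard overlapping estimator):

```python
import numpy as np

def overlapping_adev(phase, tau0, m_list):
    """Overlapping Allan deviation from a phase record (in seconds).

    tau0: sample interval; m_list: averaging factors (tau = m * tau0).
    """
    out = []
    for m in m_list:
        # second difference of phase over span m
        d = phase[2*m:] - 2.0*phase[m:-m] + phase[:-2*m]
        out.append(np.sqrt(np.mean(d**2) / (2.0 * (m * tau0)**2)))
    return np.array(out)
```

Note that a pure frequency offset (a linear phase ramp) cancels exactly in the second difference, which is part of why very different phase records can produce similar-looking ADEV curves.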

Attached is the ADEV plot of an LPRO against a GPSDO. I guess yours looks
similar? You can see the shoulder due to the GPSDO's internal oscillator,
then going down with 1/tau to a few parts in 1e-13 and going up again.
Looks pretty normal and you can identify white/flicker phase noise and
a frequency random walk region. A white frequency noise region
seems to be absent. So we expect to see mostly white noise and frequency
random walk.

Let us now look at the phase plot.... Does it look like what we expect?
Not quite. We see two or three distinct regions where the frequency seems
to be pretty stable, but in between there is a change in frequency which occurs
over the stretch of a few hours/days. Not quite what we expected. It definitely
does not look like frequency random walk. At best we can approximate it with
3 regions of almost constant frequency.

Now, let us zoom in on the first 114 days, which seem to be pretty stable.
This doesn't look like frequency random walk either. Again, we have regions
of almost constant frequency, separated by hours/days of close to
constant frequency drift. But not only that: there seems to be an overarching
arc of decreasing frequency, which does not proceed linearly but rather in
discrete steps that take time.

With this in mind, a faithful model of an LPRO would look quite different
from what one would have guessed from the ADEV.

(Before anyone asks: no, the frequency changes are not correlated with temperature
or ambient air pressure. My best guess is that they are light shift related, or
changes in the vapor cell but I cannot prove it)

			Attila Kinali

--
In science if you know what you are doing you should not be doing it.
In engineering if you do not know what you are doing you should not be doing it.
-- Richard W. Hamming, The Art of Doing Science and Engineering

MD
Magnus Danielson
Thu, May 5, 2022 4:34 PM

Hi,

Could not agree more on this point.

It's even to the point we have two standards for it, the IEEE Std 1139
for the basic measures and noises, and then IEEE Std 1193 for the
"environmentals", or rather, the rest.

Both are being revised: 1139 just went out for re-balloting
after receiving ballot comments, and 1193 is about to get approved to
be sent to balloting. The work has been led by NIST T&F department
chief Elizabeth Donley, who also received an award at EFTF-IFCS 2022 for her
contributions to the field, and to these standards in particular. Very well
deserved, I might add.

While simple models help to test and analyze specific things in
isolation, as you bring things together and move towards a real-life
application, reality is always somewhat different from the models. It
takes some time to learn just how much you can pick up
from models and what to use them for when adapting to the expected
real-life situation.

The two IEEE standards have tons of references, and it is well worth
following those. The IEEE UFFC also has quite a bit of educational
reference material at hand, which can be worthwhile reading.

Cheers,
Magnus

On 2022-05-03 23:23, Bob kb8tq wrote:

Hi

The gotcha is that there are a number of very normal OCXO “behaviors” that are not
covered by any of the standard statistical models. Coping with these issues is at least
as important as working with the stuff that is covered by any of the standard statistical
models ….

Bob

On May 3, 2022, at 3:57 AM, Matthias Welwarsky time-nuts@welwarsky.de wrote:

Dear all,

thanks for your kind comments, corrections and suggestions. Please forgive if
I don't reply to all of your comments individually. Summary response follows:

Attila - yes, I realize temperature dependence is one key parameter. I model
this meanwhile as a frequency shift over time.

Bob - I agree in principle, real world data is a good reality check for any
model, but there are only so few datasets available and most of the time they
don't contain associated environmental data. You get a mix of effects without
any chance to isolate them.

Magnus, Jim - thanks a lot. Your post encouraged me to look especially into
flicker noise and how to generate it in the time domain. I now use randn() and
a low-pass filter. Also, I think I understood now how to create phase vs
frequency noise.

I've some Timelab screenshots attached, ADEV and frequency plot of a data set
I generated with the following matlab function, plus some temperature response
modeled outside of this function.

function [phase] = synth_osc(samples,da,wpn,wfn,fpn,ffn)
  # low-pass butterworth filter for 1/f noise generator
  [b,a] = butter(1, 0.1);

  # aging
  phase = (((1:samples)/86400).^2)*da;
  # white phase noise
  phase += (randn(1, samples))*wpn;
  # white frequency noise
  phase += cumsum(randn(1, samples))*wfn;
  # 1/f phase noise
  phase += filter(b,a,randn(1,samples))*fpn;
  # 1/f frequency noise
  phase += cumsum(filter(b,a,randn(1,samples))*ffn);
end

osc = synth_osc(400000, -50e-6, 5e-11, 1e-11, 5e-11, 5e-11);

Thanks.

On Montag, 2. Mai 2022 17:12:47 CEST Matthias Welwarsky wrote:

Dear all,

I'm trying to come up with a reasonably simple model for an OCXO that I can
parametrize to experiment with a GPSDO simulator. For now I have the
following matlab function that "somewhat" does what I think is reasonable,
but I would like a reality check.

This is the matlab code:

function [phase] = synth_osc(samples,da,wn,fn)
  # aging
  phase = (((1:samples)/86400).^2)*da;
  # white noise
  phase += (rand(1,samples)-0.5)*wn;
  # flicker noise
  phase += cumsum(rand(1,samples)-0.5)*fn;
end

There are three components in the model, aging, white noise and flicker
noise, with everything expressed in fractions of seconds.

The first term basically creates a base vector that has a quadratic aging
function. It can be parametrized e.g. from an OCXO datasheet, daily aging
given in s/s per day.

The second term models white noise. It's just a random number scaled to the
desired 1-second uncertainty.

The third term is supposed to model flicker noise. It's basically a random
walk scaled to the desired magnitude.

As an example, the following function call would create a phase vector for a
10MHz oscillator with one day worth of samples, with an aging of about 5
Millihertz per day, 10ps/s white noise and 10ns/s of flicker noise:

phase = osc_synth(86400, -44e-6, 10e-12, 10e-9);

What I'd like to know - is that a "reasonable" model or is it just too far
off of reality to be useful? What could be changed or improved?

Best regards,
Matthias


<oscillator-freq.png><oscillator.png>

CA
Carsten Andrich
Tue, May 10, 2022 6:20 AM

First, thanks to everyone who chimed in on this highly interesting topic.

On 04.05.22 18:49, Attila Kinali wrote:

FFT based systems take a white, normal distributed noise source,
Fourier transform it, filter it in frequency domain and transform
it back. Runtime is dominated by the FFT and thus O(n*log(n)).
There was a nice paper by either Barnes or Greenhall (or both?)
on this, which I seem currently unable to find. This is also the
method employed by the bruiteur tool from sigma-theta.

Biggest disadvantage of this method is, that it operates on the
whole sample length multiple times. I.e it becomes slow very
quickly, especially when the whole sample length is larger
than main memory. But they deliver exact results with exactly
the spectrum / time-correlation you want.

If you happen to find the paper, please share a reference. I'm curious
about implementation details and side-effects, e.g., whether
implementing the filter via circular convolution (straightforward
multiplication in the frequency domain) carries any penalties regarding
stochastic properties due to periodicity of the generated noise.

Also, any reason to do this via forward and inverse FFT? AFAIK the
Fourier transform of white noise is white noise, because the underlying
Gaussian is an eigenfunction of the transform. Generating in the time domain
as follows (Python code using NumPy)

N = int(1e6) # number of samples
A = 1e-9     # Allan deviation for tau = 1 sec

# generate white noise in time-domain
X = np.fft.rfft(np.random.normal(0, A * 3**-0.5, N))
# multiply with frequency response of desired power-law noise
# and apply inverse Fourier transform
# NOTE: implements circular convolution
x = np.fft.irfft(X * H)

should yield the same properties as generating the white noise directly
in the frequency domain (there may be an off-by-one-or-two in there
regarding variance scaling), but the latter will halve the computational
cost:

# generate white noise directly in frequency-domain
# NOTE: imaginary components at DC and Nyquist are discarded by irfft()
X = np.random.normal(0, A * 6**-0.5 * N**0.5, N+2).view(np.complex128)
# multiply with frequency response of desired power-law noise
# and apply inverse Fourier transform
# NOTE: implements circular convolution
x = np.fft.irfft(X * H)
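The snippets above leave the filter response H unspecified. One hedged way to construct it, shown here for an illustrative 1/f (flicker-phase) power spectrum; zeroing the DC bin is my own choice to sidestep the division by zero:

```python
import numpy as np

N = 2**16
rng = np.random.default_rng(1)

# one-sided frequency bins matching the rfft layout
f = np.fft.rfftfreq(N, d=1.0)

# amplitude response ~ f^(-1/2), i.e. power ~ 1/f (flicker phase noise);
# the DC bin is forced to zero to avoid dividing by zero
H = np.zeros_like(f)
H[1:] = f[1:] ** -0.5

X = np.fft.rfft(rng.normal(0.0, 1.0, N))
x = np.fft.irfft(X * H, n=N)
```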

On 05.05.22 18:34, Magnus Danielson via time-nuts wrote:

Both is being revisioned and 1139 just went out for re-balloting
process after receiving ballotting comments and 1193 is just to get
approved to be sent to balloting.

Any details you can share regarding the changes over the current
version? Are drafts publicly available?

Best regards,
Carsten

NM
Neville Michie
Tue, May 10, 2022 8:37 AM

The use of forward then reverse Fourier transforms is one of the most important
achievements of the Fourier transform. When one data set is convolved with
another data set, it appears impossible to undo the tangle.
But if the data is transformed into the Fourier domain, serial division can separate
the data, and transformation back will yield the original data.
See: "The Fourier transform and its applications” by Ron Bracewell.
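A toy illustration of that separation (my own example, assuming circular convolution and a kernel whose transfer function has no zeros on the unit circle):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=64)             # original data
h = np.array([1.0, 0.5, 0.25])      # known convolution kernel
H = np.fft.fft(h, 64)               # kernel transfer function, zero-padded

# circular convolution via the Fourier domain
y = np.fft.ifft(np.fft.fft(x) * H).real

# deconvolution: divide in the Fourier domain and transform back
x_rec = np.fft.ifft(np.fft.fft(y) / H).real
```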

Cheers, Neville Michie

On 10 May 2022, at 16:20, Carsten Andrich carsten.andrich@tu-ilmenau.de wrote:

First, thanks to everyone who chimed in on this highly interesting topic.

On 04.05.22 18:49, Attila Kinali wrote:

FFT based systems take a white, normal distributed noise source,
Fourier transform it, filter it in frequency domain and transform
it back. Runtime is dominated by the FFT and thus O(n*log(n)).
There was a nice paper by either Barnes or Greenhall (or both?)
on this, which I seem currently unable to find. This is also the
method employed by the bruiteur tool from sigma-theta.

Biggest disadvantage of this method is, that it operates on the
whole sample length multiple times. I.e it becomes slow very
quickly, especially when the whole sample length is larger
than main memory. But they deliver exact results with exactly
the spectrum / time-correlation you want.

If you happen to find the paper, please share a reference. I'm curious about implementation details and side-effects, e.g., whether implementing the filter via circular convolution (straightforward multiplication in the frequency domain) carries any penalties regarding stochastic properties due to periodicity of the generated noise.

Also, any reason to do this via forward and inverse FFT? AFAIK the Fourier transform of white noise is white noise, because the underlying Gaussian is an eigenfunction of the transform. Generating in the time domain as follows (Python code using NumPy)

N = int(1e6) # number of samples
A = 1e-9     # Allan deviation for tau = 1 sec

# generate white noise in time-domain
X = np.fft.rfft(np.random.normal(0, A * 3**-0.5, N))
# multiply with frequency response of desired power-law noise
# and apply inverse Fourier transform
# NOTE: implements circular convolution
x = np.fft.irfft(X * H)

should yield the same properties as generating the white noise directly in the frequency domain (there may be an off-by-one-or-two in there regarding variance scaling), but the latter will halve the computational cost:

# generate white noise directly in frequency-domain
# NOTE: imaginary components at DC and Nyquist are discarded by irfft()
X = np.random.normal(0, A * 6**-0.5 * N**0.5, N+2).view(np.complex128)
# multiply with frequency response of desired power-law noise
# and apply inverse Fourier transform
# NOTE: implements circular convolution
x = np.fft.irfft(X * H)

On 05.05.22 18:34, Magnus Danielson via time-nuts wrote:

Both are being revised: 1139 just went out for re-balloting after receiving ballot comments, and 1193 is about to get approved to be sent to balloting.

Any details you can share regarding the changes over the current version? Are drafts publicly available?

Best regards,
Carsten



AK
Attila Kinali
Tue, May 10, 2022 10:58 AM

On Tue, 10 May 2022 08:20:35 +0200
Carsten Andrich carsten.andrich@tu-ilmenau.de wrote:

If you happen to find the paper, please share a reference. I'm curious
about implementation details and side-effects, e.g., whether
implementing the filter via circular convolution (straightforward
multiplication in frequency-domain) carries any penalties regarding
stochastic properties due to periodicity of the generated noise.

Yes. For one, noise is not periodic, and neither is the filter you need.
You can't build a filter with a 1/sqrt(f) slope over the whole
frequency range. That would require a fractional integrator, which
is a non-trivial task. Unless you actually do fractional integration,
all time-domain filters will be approximations of the required filter.

Also, any reason to do this via forward and inverse FFT? AFAIK the
Fourier transform of white noise is white noise, because the underlying
Gaussian is an Eigenfuntion of the transform. Generating in time-domain
as follows (Python code using NumPy)

I had the same question when I first saw this. Unfortunately I don't have a good
answer, besides that forward + inverse ensures that the noise looks like it is
supposed to, while I'm not 100% sure whether there is an easy way to generate
time-domain Gaussian i.i.d. noise in the frequency domain.

If you know how, please let me know.

But be aware: while the Gauss bell curve is an eigenfunction of the Fourier
transform, the noise we feed into it is not the Gauss bell curve. It's the
probability distribution of the noise that has this bell curve, not the noise
itself. Hence the probability distribution of a random variable is not necessarily
invariant under the Fourier transform.

On 05.05.22 18:34, Magnus Danielson via time-nuts wrote:

Both are being revised: 1139 just went out for re-balloting
after receiving ballot comments, and 1193 is about to get
approved to be sent to balloting.

Any details you can share regarding the changes over the current
version? Are drafts publicly available?

Both 1139 and 1193 saw major rework. Large portions have been replaced and rewritten
and the rest re-arranged to make more sense. I would look at the upcoming revisions
more as complete rewrites than as mere updates.

I am not aware of any public drafts. As the groups working on them are well networked
with those who would use the standards, having a public discussion never came up
and thus no public draft policy was formed. But the revision process is mostly
complete; both should be published by the end of the year, probably earlier.

			Attila Kinali

--
In science if you know what you are doing you should not be doing it.
In engineering if you do not know what you are doing you should not be doing it.
-- Richard W. Hamming, The Art of Doing Science and Engineering

CA
Carsten Andrich
Wed, May 11, 2022 6:15 AM

On 10.05.22 10:37, Neville Michie wrote:

The use of forward then reverse Fourier transforms is one of the most important
achievements of the Fourier transform. When one data set is convolved with
another data set, it appears impossible to undo the tangle.
But if the data is transformed into the Fourier domain, serial division can separate
the data, and transformation back will yield the original data.

Absolutely, but in this case I was wondering why do the costly O(n
log n) forward transform at all, if its output can be directly computed
in O(n).

On 10.05.22 12:58, Attila Kinali wrote:

If you happen to find the paper, please share a reference. I'm curious
about implementation details and side-effects, e.g., whether
implementing the filter via circular convolution (straightforward
multiplication in frequency-domain) carries any penalties regarding
stochastic properties due to periodicity of the generated noise.

Yes, for one, noise is not periodic, neither is the filter you need.
You can't build a filter with a 1/sqrt(f) slope over all the
frequency range. That would require a fractional integrator, which
is a non-trivial task. Unless you actually do fractional integration,
all time domain filters will be approximations of the required filter.

Any chance the Paper was "FFT-BASED METHODS FOR SIMULATING FLICKER FM"
by Greenhall [1]?

I've had a look at the source code of the bruiteur tool, which you
previously recommended, and it appears to opt for the circular
convolution approach. Would you consider that the state-of-the-art for
1/sqrt(f)? The same is used here [3]. Kasdin gives an extensive overview
of the subject [2], but I haven't read the 20+ page paper yet.

I had the same question when I first saw this. Unfortunately I don't have a good
answer, besides that forward + inverse ensures that the noise looks like it is
supposed to do, while I'm not 100% whether there is an easy way to generate
time-domain Gauss i.i.d. noise in the frequency domain.

If you know how, please let me know.

Got an idea on that, will report back.

But be aware, while the Gauss bell curve is an eigenfunction of the Fourier
transform, the noise we feed into it is not the Gauss bell curve.

Thanks for pointing that out. It appears I went for the first reasonable
sounding explanation to support my gut feeling without thinking it
through :D

Best regards,
Carsten

[1] https://apps.dtic.mil/sti/pdfs/ADA485683.pdf
[2] https://ieeexplore.ieee.org/document/381848
[3] https://rjav.sra.ro/index.php/rjav/article/view/40

CA
Carsten Andrich
Fri, May 13, 2022 3:25 PM

On 11.05.22 08:15, Carsten Andrich wrote:

Also, any reason to do this via forward and inverse FFT? AFAIK the
Fourier transform of white noise is white noise, [...]

I had the same question when I first saw this. Unfortunately I don't have a good
answer, besides that forward + inverse ensures that the noise looks like it is
supposed to do, while I'm not 100% whether there is an easy way to generate
time-domain Gauss i.i.d. noise in the frequency domain.

If you know how, please let me know.

Got an idea on that, will report back.

Disclaimer: I'm an electrical engineer, not a mathematician, so someone
trained in the latter craft should verify my train of thought. Feedback
is much appreciated.

Turns out the derivation of the DFT of time-domain white noise is
straightforward. The DFT formula

X[k] = \Sigma_{n=0}^{N-1} x[n] * e^{-2j*\pi*k*n/N}

illustrates that a single frequency-domain sample is just a sum of
scaled time-domain samples. Now let x[n] be N normally distributed
samples with zero mean and variance \sigma^2, thus each X[k] is a sum of
scaled i.i.d. random variables. According to the central limit theorem,
the sum of these random variables is normally distributed.

To ascertain the variance of X[k], rely on the linearity of variance
[1], i.e., Var(a*X+b*Y) = a^2*Var(X) + b^2*Var(Y) + 2*a*b*Cov(X,Y), and
the fact that the covariance of uncorrelated variables is zero, so,
separating into real and imaginary components, one gets:

Var(Re{X[k]}) = \Sum_{n=0}^{N-1} Var(x[n]) * Re{e^{-2j*\pi*k*n/N}}^2
              = \Sum_{n=0}^{N-1} \sigma^2  * cos(2*\pi*k*n/N)^2
              = \sigma^2 * \Sum_{n=0}^{N-1} cos(2*\pi*k*n/N)^2

Var(Im{X[k]}) = \Sum_{n=0}^{N-1} Var(x[n]) * Im{e^{-2j*\pi*k*n/N}}^2
              = ...
              = \sigma^2 * \Sum_{n=0}^{N-1} sin(2*\pi*k*n/N)^2

The sum over squared sin(…)/cos(…) is always N/2, except for k=0 and
k=N/2, where cos(…) is N and sin(…) is 0, resulting in X[k] with real DC
and Nyquist components as is to be expected for a real x[n].
Finally, for an x[n] ~ N(0, \sigma^2), the DFT's X[k] has the following
variance:

                { N   * \sigma^2, k = 0
Var(Re{X[k]}) = { N   * \sigma^2, k = N/2
                { N/2 * \sigma^2, else

                { 0             , k = 0
Var(Im{X[k]}) = { 0             , k = N/2
                { N/2 * \sigma^2, else

Therefore, a normally distributed time domain-sequence x[n] ~ N(0,
\sigma^2) with N samples has the following DFT (note: N is the number of
samples and N(0, 1) is a normally distributed random variable with mean
0 and variance 1):

       { N^0.5     * \sigma *  N(0, 1)                , k = 0
X[k] = { N^0.5     * \sigma *  N(0, 1)                , k = N/2
       { (N/2)^0.5 * \sigma * (N(0, 1) + 1j * N(0, 1)), else

Greenhall has the same results, with two noteworthy differences [2].
First, normalization with the sample count occurs after the IFFT.
Second, his FFT is of size 2N, resulting in a N^0.5 factor between his
results and the above. Finally, Greenhall discards half (minus one) of
the samples returned by the IFFT to realize linear convolution instead
of circular convolution, fundamentally implementing a single iteration
of overlap-save fast convolution [3]. If I didn't miss anything skimming
over the bruiteur source code, it seems to skip that very step and
therefore generates periodic output [4].
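The derived variances are easy to check numerically. A minimal Monte Carlo sketch (my own, not from the thread), assuming NumPy:

```python
import numpy as np

# Monte Carlo check of the variances derived above:
# Var(Re{X[0]}) = N*sigma^2, Var(Re{X[k]}) = Var(Im{X[k]}) = N/2*sigma^2 else.
rng = np.random.default_rng(0)
N, sigma, trials = 256, 1.5, 10000
x = rng.normal(0.0, sigma, (trials, N))   # white Gaussian time-domain noise
X = np.fft.fft(x, axis=1)                 # one DFT per realization

var_dc = X[:, 0].real.var()               # expect N   * sigma^2
var_re = X[:, 7].real.var()               # expect N/2 * sigma^2
var_im = X[:, 7].imag.var()               # expect N/2 * sigma^2
```

The sample variances converge on the predicted values to within a few percent for this number of trials.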

Best regards,
Carsten

[1] https://en.wikipedia.org/wiki/Variance#Basic_properties
[2] https://apps.dtic.mil/sti/pdfs/ADA485683.pdf#page=5
[3] https://en.wikipedia.org/wiki/Overlap%E2%80%93save_method
[4] https://github.com/euldulle/SigmaTheta/blob/master/source/filtre.c#L130

MD
Magnus Danielson
Sat, May 14, 2022 6:59 AM

Hi Carsten,

On 2022-05-13 09:25, Carsten Andrich wrote:

On 11.05.22 08:15, Carsten Andrich wrote:

Also, any reason to do this via forward and inverse FFT? AFAIK the
Fourier transform of white noise is white noise, [...]

I had the same question when I first saw this. Unfortunately I don't
have a good
answer, besides that forward + inverse ensures that the noise looks
like it is
supposed to do, while I'm not 100% whether there is an easy way to
generate
time-domain Gauss i.i.d. noise in the frequency domain.

If you know how, please let me know.

Got an idea on that, will report back.

Disclaimer: I'm an electrical engineer, not a mathematician, so
someone trained in the latter craft should verify my train of though.
Feedback is much appreciated.

Turns out the derivation of the DFT of time-domain white noise is
straightforward. The DFT formula

X[k] = \Sigma_{n=0}^{N-1} x[n] * e^{-2j*\pi*k*n/N}

illustrates that a single frequency-domain sample is just a sum of
scaled time-domain samples. Now let x[n] be N normally distributed
samples with zero mean and variance \sigma^2, thus each X[k] is a sum
of scaled i.i.d. random variables. According to the central limit
theorem, the sum of these random variables is normally distributed.

Do note that the model of no correlation is not a correct model of
reality. There are several effects which make "white noise" slightly
correlated, even if for most practical uses the correlation is very
small. Not that it significantly changes your conclusions, but you
should remember that the model only goes so far. To avoid aliasing, you
need an anti-aliasing filter, which causes correlation between samples.
Also, the noise has inherent bandwidth limitations, and further, thermal
noise is convergent because of the power distribution of thermal noise
as established by Max Planck, which really comes down to the existence
of photons. The physics cannot be fully ignored as one goes into the
math; rather, one should be aware that the simplified models may fool
you in the mathematical exercise.

To ascertain the variance of X[k], rely on the linearity of variance
[1], i.e., Var(a*X+b*Y) = a^2*Var(X) + b^2*Var(Y) + 2*a*b*Cov(X,Y), and
the fact that the covariance of uncorrelated variables is zero, so,
separating into real and imaginary components, one gets:

Var(Re{X[k]}) = \Sum_{n=0}^{N-1} Var(x[n]) * Re{e^{-2j*\pi*k*n/N}}^2
              = \Sum_{n=0}^{N-1} \sigma^2  * cos(2*\pi*k*n/N)^2
              = \sigma^2 * \Sum_{n=0}^{N-1} cos(2*\pi*k*n/N)^2

Var(Im{X[k]}) = \Sum_{n=0}^{N-1} Var(x[n]) * Im{e^{-2j*\pi*k*n/N}}^2
              = ...
              = \sigma^2 * \Sum_{n=0}^{N-1} sin(2*\pi*k*n/N)^2

The sum over squared sin(…)/cos(…) is always N/2, except for k=0 and
k=N/2, where cos(…) is N and sin(…) is 0, resulting in X[k] with real
DC and Nyquist components as is to be expected for a real x[n].
Finally, for an x[n] ~ N(0, \sigma^2), the DFT's X[k] has the
following variance:

                { N   * \sigma^2, k = 0
Var(Re{X[k]}) = { N   * \sigma^2, k = N/2
                { N/2 * \sigma^2, else

                { 0             , k = 0
Var(Im{X[k]}) = { 0             , k = N/2
                { N/2 * \sigma^2, else

Therefore, a normally distributed time domain-sequence x[n] ~ N(0,
\sigma^2) with N samples has the following DFT (note: N is the number
of samples and N(0, 1) is a normally distributed random variable with
mean 0 and variance 1):

       { N^0.5     * \sigma *  N(0, 1)                , k = 0
X[k] = { N^0.5     * \sigma *  N(0, 1)                , k = N/2
       { (N/2)^0.5 * \sigma * (N(0, 1) + 1j * N(0, 1)), else

Here you skipped a few steps compared to your other derivation. You
should explain how X[k] comes out of Var(Re(X[k])) and Var(Im(X[k])).

Greenhall has the same results, with two noteworthy differences [2].
First, normalization with the sample count occurs after the IFFT.
Second, his FFT is of size 2N, resulting in a N^0.5 factor between his
results and the above. Finally, Greenhall discards half (minus one) of
the samples returned by the IFFT to realize linear convolution instead
of circular convolution, fundamentally implementing a single iteration
of overlap-save fast convolution [3]. If I didn't miss anything
skimming over the bruiteur source code, it seems to skip that very
step and therefore generates periodic output [4].

This is a result of using real-only values in the complex Fourier
transform. It creates mirror images. Greenhall uses one method to
circumvent the issue.
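The mirror images in question are the conjugate-symmetric upper half of the spectrum of a real signal, which a quick sketch (illustration only, not from the thread) makes visible:

```python
import numpy as np

# For real-valued input, the DFT is conjugate-symmetric: X[N-k] = conj(X[k]).
# These "mirror images" are why rfft()/irfft() only store N/2+1 bins.
x = np.random.default_rng(1).normal(size=8)
X = np.fft.fft(x)
symmetric = np.allclose(X[1:], np.conj(X[1:][::-1]))  # X[k] == conj(X[N-k])
```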

Cheers,
Magnus

MW
Matthias Welwarsky
Sat, May 14, 2022 2:58 PM

On Dienstag, 3. Mai 2022 22:08:49 CEST Magnus Danielson via time-nuts wrote:

Dear Matthias,

Notice that 1/f is power-spectrum density, straight filter will give you
1/f^2 in power-spectrum, just as an integration slope.

One approach to a flicker filter is an IIR filter with a weighting of
1/sqrt(n+1), where n is the tap index, fed with normal noise. You need to
"flush out" state before you use it so you have a long history to help
shaping. For a 1024 sample series, I do 2048 samples and only use the
last 1024. Efficient? No. Quick-and-dirty? Yes.

I went "window shopping" on Google and found something that would probably fit
my needs here:

https://ccrma.stanford.edu/~jos/sasp/Example_Synthesis_1_F_Noise.html

Matlab code:

Nx = 2^16;  % number of samples to synthesize
B = [0.049922035 -0.095993537 0.050612699 -0.004408786];
A = [1 -2.494956002  2.017265875  -0.522189400];
nT60 = round(log(1000)/(1-max(abs(roots(A))))); % T60 est.
v = randn(1,Nx+nT60); % Gaussian white noise: N(0,1)
x = filter(B,A,v);    % Apply 1/F roll-off to PSD
x = x(nT60+1:end);    % Skip transient response

It looks quite simple and there is no explanation of where the filter
coefficients come from, but I checked the PSD and it looks quite reasonable.
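For reference, the Matlab snippet translates almost line-for-line to NumPy/SciPy. A sketch (my translation, not from the CCRMA page):

```python
import numpy as np
from scipy import signal

# NumPy/SciPy port of the CCRMA 1/f-noise snippet quoted above.
B = [0.049922035, -0.095993537, 0.050612699, -0.004408786]
A = [1.0, -2.494956002, 2.017265875, -0.522189400]
Nx = 2**16                                                 # samples to synthesize
nT60 = int(round(np.log(1000) / (1 - np.abs(np.roots(A)).max())))  # T60 est.
v = np.random.default_rng(0).normal(size=Nx + nT60)        # Gaussian white noise
x = signal.lfilter(B, A, v)[nT60:]                         # 1/f roll-off, skip transient
```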

The ADEV of a synthesized oscillator, using the above generator to generate
1/f FM noise, is interesting: it's an almost completely flat curve that moves
"sideways" until the drift becomes dominant.

Regards,
Matthias

CA
Carsten Andrich
Sat, May 14, 2022 4:43 PM

On 14.05.22 16:58, Matthias Welwarsky wrote:

I went "window shopping" on Google and found something that would probably fit
my needs here:

https://ccrma.stanford.edu/~jos/sasp/Example_Synthesis_1_F_Noise.html

Matlab code:

Nx = 2^16;  % number of samples to synthesize
B = [0.049922035 -0.095993537 0.050612699 -0.004408786];
A = [1 -2.494956002  2.017265875  -0.522189400];
nT60 = round(log(1000)/(1-max(abs(roots(A))))); % T60 est.
v = randn(1,Nx+nT60); % Gaussian white noise: N(0,1)
x = filter(B,A,v);    % Apply 1/F roll-off to PSD
x = x(nT60+1:end);    % Skip transient response

Hi Matthias,

the coefficients could come from the digital transform of an analog filter
that approximates a 10 dB/decade slope, see:
https://electronics.stackexchange.com/questions/200583/is-it-possible-to-make-a-weak-analog-low-pass-filter-with-less-than-20-db-decade
https://electronics.stackexchange.com/questions/374247/is-the-roll-off-gain-of-filters-always-20-db-dec
https://m.youtube.com/watch?v=fn2UGyk5DP4

However, even for the 2^16 samples used by the CCRMA snippet, the filter
slope rolls off too quickly. I've attached its frequency response. It
exhibits a slightly wobbly 1/f power slope over 3 orders of magnitude, but
it's essentially flat over the remaining two orders of magnitude. The IIR
filter is simply too short to affect the lower frequencies.

If you want a good approximation of 1/f over the full frequency range, I
personally believe Greenhall's "discrete spectrum algorithm" [1] to be
the best choice, as per the literature I've consulted so far. It
realizes an appropriately large FIR filter, i.e., as large as the
input/output signal, and it's straightforward to implement. Because it's
generic, it can be used to generate any combination of power-law noises
or arbitrary characteristics, e.g., phase noise based on measurements,
in a single invocation.
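In that spirit, a frequency-domain flicker synthesis can be sketched as follows (a simplification, not Greenhall's exact algorithm; the normalization is left arbitrary). Following his overlap-save trick discussed earlier, it generates 2N samples and keeps only N:

```python
import numpy as np

# Shape white noise with an amplitude response of f^-0.5 (PSD ~ 1/f) in the
# frequency domain, then keep only half the IFFT output to approximate
# linear (rather than circular) convolution.
rng = np.random.default_rng(2)
N = 2**14
M = 2 * N                                   # oversized FFT, half discarded below
f = np.fft.rfftfreq(M)                      # normalized frequencies 0..0.5
H = np.concatenate(([0.0], f[1:] ** -0.5))  # 1/sqrt(f) magnitude, DC zeroed
X = np.fft.rfft(rng.normal(size=M))         # DFT of white Gaussian noise
x = np.fft.irfft(X * H)[:N]                 # flicker-ish noise, first half kept
```

A periodogram of the result shows the expected concentration of power at low frequencies.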

Best regards,
Carsten

[1] https://apps.dtic.mil/sti/pdfs/ADA485683.pdf#page=5

P.S.: Python Code to plot attached frequency response:

import matplotlib.pyplot as plt
import numpy as np
import scipy.signal as signal

w, h = signal.freqz(
[0.049922035, -0.095993537, 0.050612699, -0.004408786],
[1, -2.494956002,  2.017265875,  -0.522189400],
2**16)

plt.loglog(w/np.pi, abs(h))
plt.plot([1e-3, 1e-0], [0.93, 0.93/10**(0.5*3)], "--")

plt.xlabel("Normalized frequency ($f_\mathrm{Nyquist}$)")
plt.ylabel("Gain")
plt.grid()

CA
Carsten Andrich
Sat, May 14, 2022 5:38 PM

Hi Magnus,

On 14.05.22 08:59, Magnus Danielson via time-nuts wrote:

Do note that the model of no correlation is not a correct model of
reality. There are several effects which make "white noise" slightly
correlated, even if for most practical uses this is a very small
correlation. Not that it significantly changes your conclusions, but
you should remember that the model only goes so far. To avoid aliasing,
you need an anti-aliasing filter that causes correlation between
samples. Also, the noise has inherent bandwidth limitations, and
further, thermal noise is convergent because of the power distribution
of thermal noise as established by Max Planck, which is really the
existence of photons. The physics of it cannot be fully ignored as one
goes into the math field; rather, one should be aware that the
simplified models may fool you in the mathematical exercise.

Thank you for that insight. Duly noted. I'll opt to ignore the residual
correlation. As was pointed out here before, the 5 component power law
noise model is an oversimplification of oscillators, so the remaining
error due to residual correlation is hopefully negligible compared to
the general model error.

Here you skipped a few steps compared to your other derivation. You
should explain how X[k] comes out of Var(Re(X[k])) and Var(Im(X[k])).

Given the variance of X[k] and E{X[k]} = 0 \forall k, it follows that

X[k] = Var(Re{X[k]})^0.5 * N(0, 1) + 1j * Var(Im{X[k]})^0.5 * N(0, 1)

because the scaling factor that maps a standard Gaussian N(0, 1)
distribution to a given variance is the square root of that variance.

This is a result of using real-only values in the complex Fourier
transform. It creates mirror images. Greenhall uses one method to
circumvent the issue.

Can't quite follow on that one. What do you mean by "mirror images"? Do
you mean that my formula for X[k] is missing the complex conjugates for
k = N/2+1 ... N-1? Used with a regular, complex IFFT the previously
posted formula for X[k] would obviously generate complex output, which
is wrong. I missed that one because my implementation uses a
complex-to-real IFFT, which has the complex conjugate implied. However,
for the regular, complex (I)FFT given by my derivation, the correct
formula for X[k] should be the following:

       { N^0.5     * \sigma *  N(0, 1)                 , k = 0, N/2
X[k] = { (N/2)^0.5 * \sigma * (N(0, 1) + 1j * N(0, 1)), k = 1 ... N/2 - 1
       { conj(X[N-k])                                  , k = N/2 + 1 ... N - 1
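For reference, the piecewise definition above can be sketched in Python/NumPy. The complex-to-real IFFT (np.fft.irfft) supplies the conj(X[N-k]) branch implicitly, so only bins 0 ... N/2 need to be drawn; the function name and parameters are illustrative, not from any posted implementation:

```python
import numpy as np

def white_noise_freq_domain(N, sigma, rng=None):
    """Draw the rfft spectrum X[k] per the piecewise definition above and
    inverse-transform it to N real samples of white noise with standard
    deviation sigma. N is assumed even; np.fft.irfft implies the
    conjugate symmetry X[N-k] = conj(X[k])."""
    rng = np.random.default_rng() if rng is None else rng
    X = np.empty(N // 2 + 1, dtype=complex)
    # DC (k = 0) and Nyquist (k = N/2) bins are real with variance N * sigma^2
    X[0] = np.sqrt(N) * sigma * rng.standard_normal()
    X[-1] = np.sqrt(N) * sigma * rng.standard_normal()
    # interior bins: independent real/imag parts, each with variance (N/2) * sigma^2
    X[1:-1] = np.sqrt(N / 2) * sigma * (
        rng.standard_normal(N // 2 - 1) + 1j * rng.standard_normal(N // 2 - 1))
    return np.fft.irfft(X, n=N)

x = white_noise_freq_domain(2**16, 10e-12)  # e.g. 10 ps of white phase noise
```

With NumPy's 1/N inverse-transform normalization, these bin variances reproduce a time series with variance sigma^2, which is easy to verify against the sample standard deviation.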

Best regards,
Carsten

--
M.Sc. Carsten Andrich

Technische Universität Ilmenau
Fachgebiet Elektronische Messtechnik und Signalverarbeitung (EMS)
Helmholtzplatz 2
98693 Ilmenau
T +49 3677 69-4269

MW
Matthias Welwarsky
Sat, May 14, 2022 6:30 PM

On Samstag, 14. Mai 2022 18:43:13 CEST Carsten Andrich wrote:

However, even for the 2^16 samples used by the CCRMA snippet, the filter
slope rolls off too quickly. I've attached its frequency response. It
exhibits a little wobbly 1/f power slope over 3 orders of magnitude, but
it's essentially flat over the remaining two orders of mag. The used IIR
filter is too short to affect the lower frequencies.

Ah. That explains why the ADEV "degrades" for longer tau. It bends "down". For
very low frequencies, i.e. long tau in ADEV terms, the filter is invisible,
i.e. it passes white noise through unshaped. That indeed makes it unusable for
my purposes.

BR,
Matthias

MD
Magnus Danielson
Sun, May 15, 2022 4:33 PM

Hi Matthias,

On 2022-05-14 08:58, Matthias Welwarsky wrote:

On Dienstag, 3. Mai 2022 22:08:49 CEST Magnus Danielson via time-nuts wrote:

Dear Matthias,

Notice that 1/f is power-spectrum density, straight filter will give you
1/f^2 in power-spectrum, just as an integration slope.

One approach to flicker filter is an IIR filter with the weighing of
1/sqrt(n+1) where n is tap index, and feed it normal noise. You need to
"flush out" state before you use it so you have a long history to help
shaping. For a 1024 sample series, I do 2048 samples and only use the
last 1024. Efficient? No. Quick-and-dirty? Yes.

I went "window shopping" on Google and found something that would probably fit
my needs here:

https://ccrma.stanford.edu/~jos/sasp/Example_Synthesis_1_F_Noise.html

Matlab code:

Nx = 2^16;  % number of samples to synthesize
B = [0.049922035 -0.095993537 0.050612699 -0.004408786];
A = [1 -2.494956002  2.017265875  -0.522189400];
nT60 = round(log(1000)/(1-max(abs(roots(A))))); % T60 est.
v = randn(1,Nx+nT60); % Gaussian white noise: N(0,1)
x = filter(B,A,v);    % Apply 1/F roll-off to PSD
x = x(nT60+1:end);    % Skip transient response

It looks quite simple and there is no explanation where the filter
coefficients come from, but I checked the PSD and it looks quite reasonable.

This is a variant of the James "Jim" Barnes filter to use lead-lag
filter to approximate 1/f slope. You achieve it within a certain range
of frequency. The first article with this is available as a technical
note from NBS 1965 (available at NIST T&F archive - check for Barnes and
Allan), but there is a more modern PTTI article by Barnes and Greenhall
(also in NIST archive) that uses a more flexible approach where the
spread of pole/zero pairs is parametric rather than fixed. The latter
paper is important as it also contains the code to initialize the state
of the filter as if it had been running forever, so that the state has
stabilized. A particularly interesting thing in that article is the plot
of the filter property aligned to the 1/f slope; it illustrates very
well the useful range of the produced filter. This plot is achieved by
scaling the amplitude response with sqrt(f).
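As a rough illustration of the lead-lag idea (not the coefficients or the state-initialization code from the Barnes & Greenhall paper), a cascade of geometrically spaced pole/zero pairs can be built with SciPy. The band edges, pair density, and function name below are assumptions chosen for the example:

```python
import numpy as np
from scipy import signal

def flicker_filter(f_lo=1e-4, f_hi=1e-1, pairs_per_decade=2, fs=1.0):
    """Lead-lag cascade approximating a -10 dB/decade (1/f power) slope
    between f_lo and f_hi (in Hz, at sample rate fs), with parametric
    pole/zero spread. Returns digital (b, a) via the bilinear transform."""
    n = int(np.ceil(np.log10(f_hi / f_lo) * pairs_per_decade))
    ratio = 10.0 ** (1.0 / pairs_per_decade)
    # poles geometrically spaced; each zero sqrt(ratio) above its pole,
    # so the average amplitude slope is f^-1/2, i.e. 1/f in power
    f_p = f_lo * ratio ** np.arange(n)
    f_z = f_p * np.sqrt(ratio)
    z = -2 * np.pi * f_z
    p = -2 * np.pi * f_p
    k = np.prod(p / z)  # normalize DC gain to 1
    zd, pd, kd = signal.bilinear_zpk(z, p, k, fs)
    return signal.zpk2tf(zd, pd, kd)

b, a = flicker_filter()
flicker = signal.lfilter(b, a, np.random.default_rng(0).standard_normal(2**16))
```

Outside [f_lo, f_hi] the response flattens, which is exactly the band-coverage caveat above: the cascade must span the whole range where flicker noise dominates, and the transient before the filter state settles still has to be handled, e.g. by the initialization from the paper or by discarding a warm-up segment.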

I recommend using the Barnes & Greenhall variant rather than what you
found. It needs to be adapted to the simulation at hand, so the fixed
setup you have will fit only some needs. One needs to have the filter
covering the full frequency range where used flicker noise is dominant
or near dominant. As one uses flicker shaping for both flicker phase
modulation and flicker frequency modulation, there are two different
frequency ranges where they are dominant or near dominant.

There are many other approaches; see Attila's splendid review the other day.

Let me also say that I find myself getting this question even from the
most senior researchers, as recently as last week: "What is your favorite
flicker noise simulation model?". They keep acknowledging my basic message
that "flicker noise simulation is hard". No single model fits all
applications, so no one solution solves it all. One needs to validate that
it fits the application at hand.

Cheers,
Magnus

The ADEV of a synthesized oscillator, using the above generator to generate 1/
f FM noise is interesting: it's an almost completely flat curve that moves
"sideways" until the drift becomes dominant.

Regards,
Matthias


time-nuts mailing list -- time-nuts@lists.febo.com
To unsubscribe send an email to time-nuts-leave@lists.febo.com
