Hi John:
But the data does not include the variation we get because of the
sawtooth error in the Motorola GPS receivers. I'm using an old 8
channel UT+ timing receiver with the asymmetrical sawtooth error. Maybe
if I get one of the newer M12+ Timing receivers and correct the sawtooth
either in software or hardware then I could approach the numbers shown
on the NIST GPS archive web page. The ideal would be an Ashtech Z12
setup for carrier phase time transfer.
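For the software route, the correction can be applied per pulse. A minimal sketch (the function and variable names are mine, not Motorola's, and the sign convention of the reported correction should be checked against the receiver manual):

```python
def correct_sawtooth(ti_readings_ns, sawtooth_ns):
    """Subtract the receiver-reported sawtooth correction from each
    time-interval reading.  Whether the correction is added or
    subtracted depends on the receiver's sign convention."""
    return [ti - saw for ti, saw in zip(ti_readings_ns, sawtooth_ns)]

raw = [103.0, 78.0, 121.0]     # example counter readings, ns
saw = [12.0, -14.0, 30.0]      # receiver-reported sawtooth, ns
print(correct_sawtooth(raw, saw))   # [91.0, 92.0, 91.0]
```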
Tom's post suggesting looking at one day prompted me to look at
what averaging is needed for one-day readings, hence the subject post.
I'm going to try 5,000 second averages to try and reduce the GPS 1 PPS
variation. The SR620 can only average in a 1, 2, 5 * En sequence up to
5E6 readings. It would be nice to be able to do 86400 second averages.
Have Fun,
Brooke
John Ackermann N8UR wrote:
Hi Brooke --
As you well know, I'm still learning my way around this stuff. I've
done plots of a standard vs. raw GPS using anything from 100 second to
3600 second (1 hour) averages. Recently, Tom suggested reducing data
to 1 day when looking at the Cesium offset.
Someone recently posted a link to the NIST GPS data archives --
http://tf.nist.gov/timefreq/service/gpstrace.htm -- which they've (I
think) redone to make the data format much more useful.
What's really interesting is that they show plots of the GPS
constellation vs. UTC(NIST) using 1 hour averages. Yesterday, the
range was about 8ns, while over ten days the range was 22ns, and
there was almost no increase out to 30 days. There's a definite
daily pattern in the data.
Their AVAR data shows about 5E-13 at one hour, and around 4E-14
at 24 hours.
From that, I gather that a 1 hour average may be usable, but as Tom
said, if you're patient enough, using 1 day averages is probably the
right way to get useful offset data and avoid chasing after ghosts.
Brooke Clarke wrote:
Hi:
Is this line of reasoning correct?
When I average the raw GPS 1 PPS using 100 to 1,000 pulses and look
at the standard deviation (assuming everything else is OK) it's in
the mid 30 ns area.
So when the time interval between a single raw GPS 1 PPS and a
perfect clock is measured the expected error would be +/- 3 * Sigma
or about 100 ns.
Comparing two of these readings taken 24 hours apart would give an
observation error of 200 ns/86,400 sec, or about 2.3E-12. The 200 ns
comes from the starting observation being, say, +100 ns and the ending
observation being -100 ns.
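The arithmetic above, spelled out (the 35 ns sigma stands in for the "mid 30 ns" figure):

```python
sigma_ns = 35.0                    # mid-30s ns standard deviation
single_shot_ns = 3 * sigma_ns      # 3-sigma, about 100 ns
worst_case_ns = 2 * 100.0          # +100 ns start, -100 ns end
tau_s = 86_400                     # one day
frac_freq = worst_case_ns * 1e-9 / tau_s
print(f"observation error: {frac_freq:.1e}")   # about 2.3e-12
```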
To get better would take either waiting many more days or averaging
the time interval to reduce the uncertainty. Averaging gets the
result much faster. Now the question is how much averaging to use?
Averaging improves the measurement proportionally to the square root
of the number of averages. With 100 second averages 2.3E-13 could be
seen in 24 hours, and with 10,000 second averaging 2.3E-14 could be
seen in one day.
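Those two figures follow directly from the square-root rule; a quick check:

```python
import math

base = 2.3e-12     # one-day resolution from single raw 1 PPS readings
resolution = {n: base / math.sqrt(n) for n in (100, 10_000)}
print(resolution[100])      # about 2.3e-13
print(resolution[10_000])   # about 2.3e-14
```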
To compute the Allan deviation using a series of measurements, the
amount of averaging on the GPS 1 PPS would need to be such that the
GPS noise is much smaller than the uncertainty of what's being
measured.
Have Fun,
Brooke Clarke, N6GCE
time-nuts mailing list
time-nuts@febo.com
https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
Brooke Clarke wrote:
Hi John:
But the data does not include the variation we get because of the
sawtooth error in the Motorola GPS receivers. I'm using an old 8
channel UT+ timing receiver with the asymmetrical sawtooth error.
Maybe if I get one of the newer M12+ Timing receivers and correct the
sawtooth either in software or hardware then I could approach the
numbers shown on the NIST GPS archive web page. The ideal would be an
Ashtech Z12 setup for carrier phase time transfer.
Tom's post suggesting looking at one day prompted me to look
at what averaging is needed for one-day readings, hence the subject post.
I'm going to try 5,000 second averages to try and reduce the GPS 1 PPS
variation. The SR620 can only average in a 1, 2, 5 * En sequence up
to 5E6 readings. It would be nice to be able to do 86400 second
averages.
I think any averaging over 1000 seconds will pretty much squash the
sawtooth, even in the old receivers. My current plot is not old enough
to have really good data, but right now I'm showing AVAR of 7.2E-13
for tau=3600 seconds (but that's with only 21 samples).
I normally do my averaging in software rather than the counter, although
the setup for the current experiment is actually averaging an average --
I'm doing 100 sample averages in the counter and logging that to the
data file, then reducing that to one hour averages by averaging 36
readings from the data file. But since each counter average is based on
the same number of samples, I don't think I'm breaking any statistical
laws by doing it this way.
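The averaging-of-averages step is indeed safe as long as every counter average covers the same number of samples; a quick numerical check with made-up readings:

```python
import random

random.seed(1)
raw = [random.gauss(0.0, 35.0) for _ in range(3600)]   # fake raw readings, ns

grand_mean = sum(raw) / len(raw)                        # all 3600 at once
group_means = [sum(raw[i:i + 100]) / 100                # 36 counter averages
               for i in range(0, 3600, 100)]
mean_of_means = sum(group_means) / len(group_means)     # hourly figure

print(abs(grand_mean - mean_of_means) < 1e-9)           # True
```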
John
Group,
What if I use two clocks and a Time Interval counter?
I'd apply the sawtooth 10 MHz output to a decade divider and
feed 1 MHz to a Datum 9300 time code generator. The generator
provides a day-time clock and a 1 PPS output.
I'd apply the standard-under-test 10 MHz output to an identical
divider and time code generator, also getting time and 1 PPS.
I'd expect some systematic errors from the rise and fall times
of all of those SSI and MSI chips, but I can make the measurement
interval long enough that those errors fall below the resolution of
the time-of-year reading plus the TI counter digits.
I don't need high-speed computer input (which, I think, takes
the fun out of it for most of you). I sample the counter once
an hour while I tweak the standard down to 10E-10 error, using
100 ns resolution on the counter. The standard won't move any
faster than that, so there's no point to more resolution.
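Reading the 10E-10 above as a fractional error of 1 part in 10^10 (my interpretation), the time for the standard to move one 100 ns counter digit works out like this:

```python
resolution_ns = 100.0      # counter resolution
frac_error = 1e-10         # reading "10E-10" as 1 part in 1e10
t_seconds = resolution_ns * 1e-9 / frac_error
# about 1000 s per digit, so hourly samples move roughly 3.6 digits
print(t_seconds)
```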
If I could sample every counter reading, I'd know exactly what
the maximum reading error would be. Then I'd adjust the sampling
times to swamp the reading error.
This procedure does not yield fast results, but it is very
accurate when the sources of errors are not known quantitatively.
Of course, you can't do this without a UPS - or converting
everything to run from batteries. That leads to an assortment
of batteries and chargers.
Bill Hawkins
The trouble with averaging GPS 1 PPS is that it needs to be done over
multiples of 12-hour periods to counter the systematic effect of
reception conditions.
I found that simply recording the 1PPS timings and comparing raw
samples with 86400 seconds intervals and then averaging to be
the best strategy.
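That strategy can be sketched in a few lines. The data here is synthetic and noiseless, purely to show that day-apart differencing of raw phase samples recovers a frequency offset (Poul-Henning's actual processing is not shown in the thread):

```python
# phase[i] is the 1 PPS phase reading at second i, in ns (synthetic).
N = 3 * 86_400
f_offset = 2.3e-12                              # assumed fractional offset
phase = [f_offset * 1e9 * i for i in range(N)]

day = 86_400
diffs = [phase[i + day] - phase[i] for i in range(N - day)]
freq_estimate = (sum(diffs) / len(diffs)) * 1e-9 / day
print(freq_estimate)                            # recovers about 2.3e-12
```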
Usually what I do is to make the 1PPS from the GPS be the start
signal to my HP5370B and feed the 5 or 10MHz signal to stop.
The dataset needs to be collected on a computer (via GPIB), and
any cycle shifts in the 5/10 MHz need to be resolved before
the comparison can be done.
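One common way to resolve such cycle shifts is to assume consecutive samples move by much less than one period of the stop signal, and shift each reading by whole periods accordingly (a sketch, not Poul-Henning's actual code):

```python
PERIOD_NS = 100.0              # 10 MHz stop signal; use 200.0 for 5 MHz

def unwrap(readings_ns):
    """Shift each reading by whole periods so consecutive steps stay small."""
    out = [readings_ns[0]]
    for r in readings_ns[1:]:
        k = round((r - out[-1]) / PERIOD_NS)   # nearest whole-period jump
        out.append(r - k * PERIOD_NS)
    return out

# A slip: the 3.0 ns reading really sits one period above the 98.0 ns one.
print(unwrap([95.0, 98.0, 3.0, 6.0]))          # [95.0, 98.0, 103.0, 106.0]
```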
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk@FreeBSD.ORG | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
Averaging improves the measurement proportionally to the square root of
the number of averages. With 100 second averages 2.3E-13 could be seen in
24 hours, and with 10,000 second averaging 2.3E-14 could be seen in one day.
Ah, but the square root rule is only true when the
samples are independent. 1 PPS samples from
a GPS receiver are not completely independent.
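Tom's caveat can be seen numerically: with correlated samples (AR(1) noise here, purely illustrative), the variance of an N-sample mean shrinks far more slowly than 1/N:

```python
import random

def var_of_mean(n, rho, trials=2000, seed=42):
    """Variance of the mean of n AR(1) samples with lag-1 correlation rho."""
    rng = random.Random(seed)
    means = []
    for _ in range(trials):
        x = rng.gauss(0, 1)        # start in the stationary distribution
        acc = 0.0
        for _ in range(n):
            acc += x
            x = rho * x + (1 - rho ** 2) ** 0.5 * rng.gauss(0, 1)
        means.append(acc / n)
    m = sum(means) / trials
    return sum((v - m) ** 2 for v in means) / trials

print(var_of_mean(100, 0.0))   # close to 1/100, as the sqrt rule predicts
print(var_of_mean(100, 0.9))   # many times larger than 1/100
```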
/tvb
Tom's post suggesting looking at one day prompted me to look at
what averaging is needed for one-day readings, hence the subject post.
Yeah, the reason I suggested one-day averaging is
that there are a host of effects you notice when you
use short GPS averaging times: sawtooth, jitter,
wander, counter resolution, GPS orbits, multi-path,
diurnal temperature, etc.
Now these effects are all very interesting, to be sure,
but in the context of measuring the performance of
a cesium, or watching the effect of a C-field change,
it is my opinion that all these effects get in the way
of what you are trying to measure. Hence the
suggestion to boil it down to one point per day.
An obvious by-product of one-point-per-day is that
it forces one to be very patient, which, I think, is
a necessary lesson when doing performance
comparisons with cesium clocks. It is easy to get
excited about a "trend" between lunch and dinner,
only to find your cesium decides to change its mind
between dinner and breakfast. Best to leave the
poor thing alone for a week before you jump to any
conclusions.
/tvb