One of the intended use cases of my DMTD is to measure minute frequency
differences to enable the frequency tuning of clocks against a
reference. A typical frequency display update rate for this use case would be
at most once per second.
As the DMTD can only measure phase, this phase has to be converted into
frequency. The advice I received was to make many short-duration phase
measurements and apply a linear regression to those phase measurements to
calculate the frequency at the display update interval.
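In Python, that regression-based phase-to-frequency conversion would look
roughly like the sketch below (the function name, the numpy usage and the
phase-in-seconds convention are just my illustration, not the actual DMTD
software):

  import numpy as np

  def freq_from_regression(t, phase_s, f0=10e6):
      # t: sample times in seconds, phase_s: measured phase in seconds.
      # The slope of phase (seconds) versus time (seconds) is the
      # fractional frequency offset y; scale by the nominal frequency
      # to get the difference in Hz at the 10 MHz carrier.
      slope, _intercept = np.polyfit(t, phase_s, 1)
      return slope * f0

  # e.g. 100 phase samples spread over one second:
  # t = np.linspace(0.0, 1.0, 100, endpoint=False)
  # delta_f_hz = freq_from_regression(t, phase_s)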
A typical display of the measured frequency using linear regression over
100 phase measurements and the noise of the phase measurements (green
trace) can be found here:
http://athome.kaashoek.com/time-nuts/DMTD/freq_capture.png
An alternative method would be to measure the phase once per display update
and convert the difference between the current and the previous phase
measurement, divided by the measurement interval, into a frequency.
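A sketch of that two-point conversion, under the same illustrative
conventions as above:

  def freq_from_endpoints(phase_prev_s, phase_now_s, tau=1.0, f0=10e6):
      # Fractional frequency from two phase samples tau seconds apart,
      # scaled to a frequency difference in Hz at the 10 MHz carrier.
      y = (phase_now_s - phase_prev_s) / tau
      return y * f0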
But which method delivers the most accurate frequency? Linear regression
accuracy improves with the number of measurements involved, but the accuracy
of each individual phase measurement decreases as its duration gets shorter.
To determine the accuracy, both methods were used to measure two 10 MHz
clocks with a 10 microhertz frequency difference over a period of 100
seconds. From these 100 measurements the ADEV was calculated and compared
at a tau of 1 second.
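For reference, a minimal sketch of such an ADEV calculation in Python,
assuming an array y of 100 fractional frequency values (the function name is
mine and the non-overlapping estimator at the basic tau is an assumption):

  import numpy as np

  def adev_tau0(y):
      # Non-overlapping Allan deviation at the basic sampling interval:
      # sigma_y(tau0) = sqrt(0.5 * mean((y[i+1] - y[i])**2))
      d = np.diff(np.asarray(y, dtype=float))
      return np.sqrt(0.5 * np.mean(d ** 2))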
The method using linear regression over 100 phase measurements every second
had an ADEV at 1 second of 2.1e-13; the method using a single phase
measurement every second had an ADEV at 1 second of 1.2e-13.
At first sight the second method seems to have an advantage, but as it also
uses the phase measurement of the previous second it effectively draws on
twice the measurement data, so in practice I feel there is no difference.
Is this to be expected?