Dear all,
I have written multiple times about the mission, but I thought I would
share a separate view this time, and I think it may be equally
interesting. This time I would like to bring up the topic of logging.
You can see traces in various posts from before, but maybe I can
contribute a somewhat more comprehensive picture.
I realized early that there were many things to log for this mission.
For starters, I wanted to make sure I was logging the 5071A cesium
clock's state, to see how it was behaving. Much of the state consists of
control loops which compensate for environmental effects. There is also
a temperature reading in the clock. Just logging these as we expose the
clock to the elements (the boot of my car, a move to another car, a move
into the lab of the observatory and back) will improve confidence that
the clock was happy with life in its own reference, which is otherwise
an assumed property.
I also wanted to log the power system, since the charging and
discharging of batteries is critical, and to add additional
environmental sensors.
I had previously developed a tool to log my EFOS-B active hydrogen
maser, using a Python script that posts data into an InfluxDB database,
with Grafana pulling data back out of it for graphs. I learned immensely
from doing this, so I realized that I should attempt to extend it to
incorporate these new loggings.
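For those who have not poked at InfluxDB from Python, the write side of
such a logger is small. A minimal sketch, assuming the InfluxDB 1.x
"influxdb" Python client, with database, measurement and field names
made up purely for illustration:

    # Minimal sketch of pushing one sample into InfluxDB.
    # Assumptions: InfluxDB 1.x, the python "influxdb" client package,
    # and made-up database/measurement/field names.
    from datetime import datetime, timezone
    from influxdb import InfluxDBClient

    client = InfluxDBClient(host="localhost", port=8086, database="masermon")

    point = {
        "measurement": "environment",               # hypothetical name
        "tags": {"sensor": "bme280"},
        "time": datetime.now(timezone.utc).isoformat(),
        "fields": {"temperature": 23.4, "pressure": 1013.2},
    }
    client.write_points([point])

Grafana then only needs a data source pointing at the same database and
a query per panel.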
In essence I use a Raspberry Pi 4B with an external hard disk (where the
InfluxDB files live). I bought a 7-port USB hub with a separate power
supply.
I used two DC/DC converters to convert the 24 V of the LOAD PANEL down
to the 5 V needed. It turned out I had power-limit issues causing
problems, so I had to move the Raspberry Pi over to a wall wart running
off the generated 230 VAC.
For fun I found a Raspberry Pi hat called Enviro+ which has a BME280
sensor, which had been suggested by others, so I bought that and
installed it. Further, I had a DPM7885 pressure sensor lying around, so
hooking that one up to the 12 V bus and using a USB-serial adaptor got
that started.
So, first I added the 5071A logging, using a USB-serial adaptor. Coding
that up was fairly quick. As always, some work is needed to get it to
sync up cleanly. Then I added the DPM7885 with a similar dance. That was
quite easy. Each of these had its own temperature sensor.
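The sync-up dance is mostly about discarding the partial line you
inevitably start reading in the middle of. A rough illustration with
pyserial (the port name, baud rate and framing are placeholders, not the
actual 5071A settings, and the parsing is left out):

    # Illustration only: sync up to a line-oriented serial instrument.
    # The port and baud rate are placeholders, not the real 5071A settings.
    import serial

    ser = serial.Serial("/dev/ttyUSB4", 9600, timeout=2)
    ser.reset_input_buffer()   # drop whatever half-line sits in the buffer
    ser.readline()             # throw away the first (possibly partial) line

    while True:
        line = ser.readline().decode("ascii", errors="replace").strip()
        if not line:
            continue           # read timed out, try again
        # ... parse fields out of 'line' and push them into InfluxDB ...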
These were in place before leaving home for my summer house in southern
Sweden, a 600 km drive.
Later I had the time to add the BME280 sensor through the Python library
code provided. Picking up the example code turned out to be a fairly
quick adaptation. With that I added another temperature sensor, another
pressure sensor and a humidity sensor. The BME280 has compensation
factors in it, and I validated that the code actually did use the
compensation scheme. I could however see a remaining temperature
dependence that was uncompensated, because that is what you see in your
graphs if you look and are curious (who? me?).
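The library makes the reading part almost trivial. A sketch along the
lines of the Pimoroni Enviro+ examples (assuming the pimoroni-bme280 and
smbus2 packages; the factory compensation is applied inside the
library):

    # Sketch based on the Pimoroni Enviro+ examples.
    # Assumes the pimoroni-bme280 and smbus2 packages are installed.
    import time
    from smbus2 import SMBus
    from bme280 import BME280

    bme280 = BME280(i2c_dev=SMBus(1))

    while True:
        temperature = bme280.get_temperature()   # degrees C, compensated
        pressure = bme280.get_pressure()          # hPa
        humidity = bme280.get_humidity()          # %RH
        # ... push the three fields into InfluxDB here ...
        time.sleep(10)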
In addition I also wanted to add the TAPR TICC, and that was a fairly
quick integration. I did however end up having the problem that the PPS
from the 5071A only caused a few triggers on the TICC channel B, so I
could not really use it as intended.
That is how far I got while at my summer house; then I had to leave for
an 1100 km drive to the south of Belgium, the next day a good 850 km
drive to just outside Grenoble in the south of France, directly followed
by the remaining 220 km drive to the village of Saint Véran, all in just
three days.
I was not able to add the Victron VE.Direct logging until up on the
mountain, after restoring the setup. Again I used the available Python
library and its example code and was fairly quickly able to integrate
it. Naturally I made an off-by-one on the digits, so my current readings
came out 10 times larger than correct, but I fixed that.
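For reference, the VE.Direct text protocol itself is simple enough to
parse by hand, and it also shows where the digit slip comes from, since
currents arrive in mA and voltages in mV. A rough sketch (not the
library actually used; the port name is a placeholder):

    # Rough sketch of the VE.Direct text protocol: tab-separated label/value
    # lines at 19200 baud, a block ending with a "Checksum" line.
    # Currents are in mA and voltages in mV, hence the easy scaling slips.
    import serial

    ser = serial.Serial("/dev/ttyUSB6", 19200, timeout=2)   # placeholder port

    record = {}
    while True:
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if "\t" not in line:
            continue
        label, value = line.split("\t", 1)
        if label == "Checksum":
            volts = int(record.get("V", "0")) / 1000.0   # mV -> V
            amps = int(record.get("I", "0")) / 1000.0    # mA -> A
            # ... push volts/amps into InfluxDB ...
            record = {}
        else:
            record[label] = value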
I had intended to let my Mosaic-T emit NMEA messages, and to parse and
ingest those into the database to get time, position and height. That
did not get done during the mission.
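The parsing side of that would have been small as well; something like
the following with pynmea2 (the port, baud rate and sentence choice are
assumptions, since this never made it into the setup):

    # Not done during the mission; a sketch of what the NMEA parsing could
    # look like with pynmea2. Port and baud rate are placeholders.
    import pynmea2
    import serial

    ser = serial.Serial("/dev/ttyUSB5", 115200, timeout=2)

    while True:
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if "GGA" not in line:
            continue
        try:
            msg = pynmea2.parse(line)
        except pynmea2.ParseError:
            continue
        # msg.timestamp (UTC), msg.latitude, msg.longitude, msg.altitude (m)
        # ... push into InfluxDB ...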
The logging system was riddled with a number of power issues, much of
which can be blamed on having too little time to do things properly, and
too little time to test. Also, with the power problems came reboot
issues, such as needing to have a screen present during booting and the
logging not starting automatically. These things I was not able to fix
during the mission, but I have fixed them since, since they bugged me
and I want to be prepared for the future, both in my own lab and for any
similar missions. It's only when you attempt to do stuff that you learn
how difficult it is. The fact that the screen was starting to fail ended
up being a problem too. I have since made the Raspberry Pi able to boot
safely without a screen.
Now, since I have three USB devices showing up as /dev/ttyUSB0,
/dev/ttyUSB1 and /dev/ttyUSB2 in the Raspbian/Debian Linux distro, and I
will not know in which order they show up, I needed to provide a mapping
to a known order. The mapping is done by a system called udev, and you
can add your own rules. This was well known, and I thought I had
resolved it. Turns out no, so I had to redo it. Much of the
documentation and many of the examples are outdated and do not work
with modern code. However, most of the basic hints are correct; it is
more about the details of how the rules are encoded. Once I sorted that
out, I was able to make the devices show up as /dev/ttyUSB4,
/dev/ttyUSB5 and /dev/ttyUSB6 but with a fixed mapping. This removes
much of the manual hassle of figuring out which ended up being which.
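For anyone fighting the same thing: the rules end up in a file under
/etc/udev/rules.d/ and match on attributes of the specific adapter,
which you can list with "udevadm info -a -n /dev/ttyUSB0". An
illustrative rule (the vendor, product and serial values here are
placeholders, not my actual adapters):

    # /etc/udev/rules.d/99-logging-serial.rules (illustrative values only)
    SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="A600XXXX", SYMLINK+="ttyUSB4"

After "udevadm control --reload-rules" and replugging (or "udevadm
trigger"), the fixed name shows up regardless of enumeration order.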
Then it was time to set up automatic logging. I decided to take on the
modern solution of systemd, and using the example it took me only a few
attempts to get it shaped up to work right. I have now built three
example logging services using systemd for my DPM7885, BME280 and
VE.Direct loggings. The 5071A used was by then already sitting in the
basement of the Observatoire Royal de Belgique, so I did not bother to
do that one, but with the examples I have it is trivial, so I can do it
when needed.
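The service files are short. A sketch of what one such unit can look
like (the file name, paths and script name are illustrative guesses, not
necessarily what is in the repository):

    # /etc/systemd/system/bme280-logger.service (illustrative paths and names)
    [Unit]
    Description=BME280 environmental logging to InfluxDB
    After=network-online.target influxdb.service
    Wants=network-online.target

    [Service]
    Type=simple
    ExecStart=/usr/bin/python3 /home/pi/masermon/bme280_logger.py
    Restart=on-failure
    RestartSec=10

    [Install]
    WantedBy=multi-user.target

Enabled with "systemctl enable --now bme280-logger.service", it restarts
the logger after power glitches without anyone touching a keyboard.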
I chose to log data every 10 s, except for the VE.Direct, which spews
out data every second, so just consuming that and pushing it into the
database was easiest.
In Grafana, setting up plots was fairly straightforward. I aim to clean
up my plot setup so that it can be exported and imported into another
setup.
The code for all this you can find at my GitHub repository
https://github.com/sa0mad/masermon
I aim to add more stuff over time, and if people find it useful, more
things can be integrated. I aim to add Grafana setups once I have them
in suitable form.
I've found that integrating many different systems with a dash of Python
and then dumping the data streams into InfluxDB has been a great
strategy. I use UTC time-stamps for everything and I use NTP to give the
Raspberry Pi the right time. I did not bother to integrate the GNSS
receiver's time, but that would be the next logical step, especially
since the Mosaic-T has both NTP and PTP support; however, since it ended
up having issues, I did not bother.
Graphing data was a lot of fun, and in the pressure you can see the
descent into Saint Véran village but also the drive over the Col
d'Izoard at 2360 m. You can see the temperature swings.
Plotting the difference between the pressure sensors, you can see how
they differ especially in scale sensitivity but also in temperature
sensitivity.
Plotting the temperatures, it was easy to see the different temperature
levels due to where the sensors were mounted, but also how processing
load on the Raspberry Pi made its hat temperature vary.
I have not yet had the time to analyze all the data, but I have already
started on a tool to extract data and do least-squares fits. That data,
from the TICC, turns out to be quite meaningless though. It seems like
the PPS was returned through the receiver rather than being separate, so
I was measuring extremely small shifts in delay.
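The least-squares part itself is a one-liner once the data is extracted;
a sketch with numpy (the file name and column layout are assumptions,
not the actual extraction format):

    # Sketch of a least-squares line fit over an extracted time series.
    # File name and column layout are assumptions, not the repository format.
    import numpy as np

    t, x = np.loadtxt("ticc_chA.txt", unpack=True)   # seconds, time error in seconds
    slope, offset = np.polyfit(t, x, 1)              # slope ~ fractional frequency offset
    residual = x - (slope * t + offset)
    print(f"frequency offset: {slope:.3e}  rms residual: {residual.std():.3e} s")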
Among the clean-ups will also be checking the function of the TICCs etc.
I must say that the TAPR TICC is almost ideal for a setup like this,
with its low power and the performance it gives. The details of how it
starts up and how I can control it and sync up need more work, and I
just had not had time to make that bulletproof. For logging I need a
fixed way for a script to establish contact, set the device up exactly
as needed for that measurement, and then get robust output. I ended up
losing data because I had not been able to make the TICC logging robust
to various start-up scenarios. That issue occurred to various degrees
with all the equipment, and it just takes time to sort out. Time was not
plentiful before this project. Testing, and testing again, would be
needed to know how to get things robust and in place. I have learned
immensely about what can mess up, and will test a lot now.
So, a very intense vacation, and I hope many of the learnings and the
associated code may inspire and come to use.
Oh, yes. Many of you may wonder: "What about the results, then?" We keep
doing reference measurements and will be post-processing the data when
people are back from vacation etc. We all have to be patient. In the
meantime I try to share a lot of other things.
Cheers,
Magnus
On 8/3/23 6:55 AM, Magnus Danielson via time-nuts wrote:
Dear all,
I have written multiple times about the mission, but I thought I would
share a separate view this time, and I think it may be equally
interesting. This time I would like to bring up the topic of logging.
You can see traces in various posts from before, but maybe I can
contribute a somewhat more comprehensive picture.
This is great stuff, Magnus. It's all these little practical details
that help someone planning a similar activity know what kinds of stuff
happens - it won't be the same, of course, but it's the kind of thing
you encounter.
And why we used to have this cynical phrase "all you gotta do is..."
Someone would have a block diagram on the whiteboard or a PowerPoint
slide - it looks simple in concept, but the practicalities take a lot of
time and effort.
One thing that one always seems to have to create your own tools for is
combining time series of measurements where the clocks are not the same.
InfluxDB does time series (or, I suppose, it has a native time type and
is theoretically optimized for such things), but it doesn't do
resynchronization or time correlation. That's still on the user.
Dear Jim,
On 2023-08-04 18:27, Lux, Jim via time-nuts wrote:
This is great stuff, Magnus. It's all these little practical details
that help someone planning a similar activity know what kinds of stuff
happens - it won't be the same, of course, but it's the kind of thing
you encounter.
Thanks!
And why we used to have this cynical phrase "all you gotta do is..."
Someone would have a block diagram on the whiteboard or a PowerPoint
slide - it looks simple in concept, but the practicalities take a lot of
time and effort.
I think this is where it becomes much more relevant for many: to see the
hurdles and to learn from them. My motto is that the real failure is not
to learn from the failure. I think all these big and small problems are
the interesting stuff, but they also display how things actually work.
Most importantly, don't do this with two weeks of preparation. Build
things such that they keep working, autostart with as much autonomy as
possible, have protection switching etc., so that with that as your base
you can do the dance and transfer things over.
This afternoon I transferred the remaining system (the 5071A used is in
Brussels) over from the car and into the lab. While it does not matter
as much now, I have kept the experiment running as if there were still a
clock there, and thus moved the batteries and measurement case over
while maintaining continuous operation. I do have a problem with the
RPi, and I think this is a recurrence of an old problem; I want to track
that problem down and finally fix it.
The block schematics for this I drew in my notebook as I was sitting at
the Observatoire de Saint-Véran. One page for the power setup, and
another page for the scientific interconnections. The latter is about
2/3 of the first. Power and the various interconnecting cables are what
bring things down in real life. Like having exactly the right USB
cables, the needed DB9 null-modem cables or the right N, TNC, BNC and
SMA adapter(s). The constant need to be inventive to solve things, and
having prepared for it, is the real challenge.
So, will I do this again? Who knows. Will I be better prepared? Sure.
Much better prepared. I also know what it takes to make this stuff work,
and it will be much better tested in advance.
Much of it is the same as what is needed for a fixed laboratory, so that
is what I will build now.
One thing that one always seems to have to create your own tools for is
combining time series of measurements where the clocks are not the same.
InfluxDB does time series (or, I suppose, it has a native time type and
is theoretically optimized for such things), but it doesn't do
resynchronization or time correlation. That's still on the user.
InfluxDB is not a tool for processing time series, it's just a tool to
record them in and retrieve them from. It does that job much better than
having to mess with SQL databases directly. I've done one tool to
extract the TICC time series, but eventually it should be generalized so
that I can do correlations etc. Now, much of the needed toolbox exists
in Python, so it is more a matter of making the pieces fit. Luckily,
these are things that can be done in post-processing.
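As a sketch of the kind of glue needed (not a tool from the repository):
two independently time-stamped series can be put on a common time base
with plain linear interpolation before correlating them.

    # Sketch: resample two independently time-stamped series onto a common
    # 10 s grid before correlating them. Synthetic data stands in for the
    # InfluxDB query results.
    import numpy as np

    t_a = np.sort(np.random.uniform(0, 3600, 300))          # irregular timestamps
    a = np.sin(t_a / 600.0) + 0.05 * np.random.randn(t_a.size)
    t_b = np.arange(0, 3600, 10.0) + np.random.uniform(-1, 1, 360)
    b = np.sin(t_b / 600.0) + 0.05 * np.random.randn(t_b.size)

    t_ref = np.arange(0, 3600, 10.0)                         # common 10 s grid
    a_common = np.interp(t_ref, t_a, a)
    b_common = np.interp(t_ref, t_b, b)
    print("correlation:", np.corrcoef(a_common, b_common)[0, 1])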
Cheers,
Magnus
Hello Magnus
I enjoyed reading your account of your clocks' alpine holiday.
I did something similar in spirit a couple of years ago, except the
destination was a nuclear reactor.
That experiment had to be run remotely though, once in place.
Getting software interacting reliably with hardware does take a lot of
work, and sometimes you just need to run a few systems for a long time
to pick up all of the things that can go wrong. At work, we have a
bunch of Perl scripts that run our calibration setups etc. Every time
I think of updating them to something like Python, I then think of the
30 years (in some cases) of battle testing they have had and think
"Nahh ..."
Regarding the TAPR TICC, this python script from OpenTTP might be a
useful starting point
https://github.com/openttp/openttp/blob/develop/software/gpscv/common/bin/ticclog.py
There are also other bits and pieces in OpenTTP to manage automatic
starting of logging processes and so on.
Perhaps they might be useful to you.
Cheers
Michael
Dear Michael,
On 2023-08-05 01:30, Michael Wouters via time-nuts wrote:
Hello Magnus
I enjoyed reading your account of your clocks' alpine holiday.
Thanks! :)
I did something similar in spirit a couple of years ago, except the
destination was a nuclear reactor.
That experiment had to be run remotely though, once in place.
Interesting!
Getting software interacting reliably with hardware does take a lot of
work, and sometimes you just need to run a few systems for a long time
to pick up all of the things that can go wrong. At work, we have a
bunch of Perl scripts that run our calibration setups etc. Every time
I think of updating them to something like Python, I then think of the
30 years (in some cases) of battle testing they have had and think
"Nahh ..."
Indeed, it takes time to learn all the issues and to catch all the
different behaviours and how to counter them, but as one picks up the
learnings, one bumps the MTBF up quite considerably. Testing is key, and
learning what it takes enables accelerated testing. Some of the things
one has to overcome are also limitations in the setup, so some faults
can actually disappear on a newer platform. Decoding old solutions, one
can sometimes figure out better and more robust methods. Some of these
need to be updated eventually, and doing such porting work is also a way
to learn the subtle problems, and has a value of its own, so a 30-year-old
system may actually benefit from being upgraded.
Regarding the TAPR TICC, this python script from OpenTTP might be a
useful starting point
https://github.com/openttp/openttp/blob/develop/software/gpscv/common/bin/ticclog.py
Oh, thanks. I had not seen that one.
There are also other bits and pieces in OpenTTP to manage automatic
starting of logging processes and so on.
Perhaps they might be useful to you.
I'll have a look. Thanks for sharing!
For many of these things, I'm in the clean-up phase. The clean-up is to
prepare for my lab and for future missions.
Cheers,
Magnus