I get that time on a Unix system is calculated as the current time minus the Epoch, but how does the physical computer know when the Epoch was? You can't really hard-code the starting time, because hardware is made at different times.
I figure you could have a source of truth that could be accessible on the internet, but that would mean offline computers would never know what time it is.
Computers have a built-in (battery powered) clock that the user can set.
Time is synchronized over the internet (see NTP), but offline computers simply use their built-in clock (which usually starts to drift away from real time at some point and has to be adjusted every so often).
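As a side note, "knowing the Epoch" is just a convention: the kernel keeps a counter of seconds elapsed since 1970-01-01 00:00:00 UTC, initialized from the battery-backed clock at boot. A minimal sketch (POSIX assumed) that reads this counter:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* time() returns seconds elapsed since 1970-01-01 00:00:00 UTC.
           The epoch is not stored anywhere; it is simply the agreed zero
           point of this counter, which the kernel initializes from the
           battery-backed hardware clock at boot. */
        time_t now = time(NULL);
        printf("seconds since the Unix epoch: %ld\n", (long)now);
        printf("which is: %s", ctime(&now));  /* human-readable local time */
        return 0;
    }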
We have a system with two micro-controllers and we are developing a simulation environment. In the real system the micro-controllers communicate through UARTs, constantly sending small pieces of data back and forth, every millisecond (asking for variable values, sending commands, etc.).
In the simulation each micro-controller is a separate application and we must find a way for them to communicate. I have tried TCP/IP before, but I notice very long delays from time to time: for example, it works fine and then, after about 20 messages, there is a one-second delay. I assume this is because TCP/IP is not designed to send small packets of ~10 bytes of data every millisecond. I have found some information about the issue but nothing conclusive on how to avoid the problem.
I was wondering if NamedPipes would work better. Will there be a huge performance hit when using NamedPipes to send ~10 bytes of data back and forth every millisecond? Should we look into another alternative? In the past the company has used virtual serial ports, but that requires special third-party software and is too cumbersome to set up.
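One thing worth ruling out before switching transports: sporadic sub-second stalls with tiny messages are often caused by Nagle's algorithm interacting with delayed ACKs. That is only a guess about the cause here, but disabling Nagle on the socket is a cheap experiment. A minimal sketch, assuming POSIX sockets rather than the actual project code:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Disable Nagle's algorithm on a connected TCP socket so small writes
       are transmitted immediately instead of being coalesced while the
       stack waits for an ACK. */
    int make_low_latency(int sock)
    {
        int one = 1;
        return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
    }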
After doing some research on the Network Time Protocol (NTP), I was wondering what a real use for it would be. From my knowledge, almost all devices have a real-time clock that keeps updating itself even when the machine is shut down. Then, when the machine boots up, the software clock takes its value from this hardware clock (the real-time clock).
What is the point of taking the time from a server and thereby exposing the machine to some kind of time attack? Is the goal to keep a set of hosts strictly synchronized and prevent their times from differing too much (and how much is "too much" in practice)? Also: if a host is configured to use NTP, is the software clock still initialized from the real-time clock and then simply corrected according to the received NTP packets, or not?
If I am making a p2p file sharing application, I need to know how many regular home computers I must replicate a file on for it to be ALMOST ALWAYS available. Any idea?
It is very hard to evaluate, because it is not only a question of the machine being powered on; it is also a question of being reachable, and of workload and bandwidth. Just because a PC has the file and is online does not mean it will be able to deliver it (especially if it is a big file).
This kind of information is impossible to guess from a theoretical perspective; the best approach is to measure it on your live system. But if you really need an estimate, an average user turns their PC on between 7 and 9 AM and shuts it down between 8 and 11 PM, with maybe a couple of hours off during the day.
You may want to search for "P2P churn". There is some theory out there that could help you build a model, but honestly, in my experience, nothing beats concrete, real data.
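If you do want a rough theoretical baseline before you have churn data, a common starting point is to treat each replica as independently reachable with some probability p, so availability with n copies is 1 - (1 - p)^n. A minimal sketch of that calculation (the probability and the target are made-up placeholders, and the independence assumption ignores correlated downtime such as everyone switching off at night):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double p = 0.3;       /* assumed probability a given home PC is reachable */
        double target = 0.99; /* desired availability ("almost always") */

        /* Smallest n such that 1 - (1 - p)^n >= target, assuming replicas
           go online/offline independently of each other. */
        int n = (int)ceil(log(1.0 - target) / log(1.0 - p));
        printf("replicas needed for %.0f%% availability: %d\n", target * 100, n);
        return 0;
    }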
If the system time changes for daylight saving time, does the output of gettimeofday() change?
Note that Unix systems do not "change the system time" with daylight saving time -- it is all handled when programs want to print the current time, typically using localtime(3).
The seconds since the epoch keep counting monotonically even when local governments decide to change the clocks on our walls.
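To illustrate, a minimal sketch (POSIX assumed): the value gettimeofday() returns is the same counter regardless of DST; only the localtime() conversion differs from the gmtime() one.

    #include <stdio.h>
    #include <sys/time.h>
    #include <time.h>

    int main(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);             /* seconds since the epoch */

        char utc[64], local[64];
        struct tm tm_utc, tm_local;
        gmtime_r(&tv.tv_sec, &tm_utc);       /* no timezone or DST applied */
        localtime_r(&tv.tv_sec, &tm_local);  /* timezone and DST applied here */

        strftime(utc, sizeof utc, "%Y-%m-%d %H:%M:%S", &tm_utc);
        strftime(local, sizeof local, "%Y-%m-%d %H:%M:%S %Z", &tm_local);

        printf("epoch seconds: %ld\n", (long)tv.tv_sec);
        printf("UTC:   %s\n", utc);
        printf("local: %s\n", local);
        return 0;
    }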
UNIX/Linux computers use UNIX time, which is more or less UTC (i.e. Greenwich/London time). Only when you print or display the time does it get converted into local time. This is done by consulting the timezone setting.
On Linux the timezone database is in /usr/share/zoneinfo and the current timezone is defined by /etc/localtime. This file handles daylight saving time, so the conversion when you print is handled cleanly. Note that timezones are usually defined for geographical and political reasons.
Your timezone database will be periodically updated as various governments and/or local councils decide to change their timezone or daylight saving time settings. For example, Samoa skipped a complete day at the end of 2011; provided the timezone files on any local Samoan UNIX/Linux computers had been updated beforehand, no problems would have been encountered.
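To make the zoneinfo lookup concrete, here is a minimal sketch (the zone names are just examples) showing the same epoch value rendered through different entries of the timezone database:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static void show(const char *zone, time_t t)
    {
        char buf[64];
        struct tm tm;
        setenv("TZ", zone, 1);  /* point libc at /usr/share/zoneinfo/<zone> */
        tzset();
        localtime_r(&t, &tm);
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", &tm);
        printf("%-15s %s\n", zone, buf);
    }

    int main(void)
    {
        time_t now = time(NULL);    /* one and the same epoch value throughout */
        show("UTC", now);
        show("Europe/London", now);
        show("Pacific/Apia", now);  /* Samoa, mentioned above */
        return 0;
    }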
Other considerations are NTP time synchronisation and leap seconds. If you run an NTP client to keep your computer in sync, it 'slews' the internal computer clock by either speeding it up or slowing it down. Note that it is usually a bad idea to set the time manually using the 'date' or 'rdate' commands, as this can cause a jump in time and may affect software that relies on timeouts of some kind (a recent example was an Asterisk PABX that I had to maintain).
If you are going to change the time manually, use the ntpdate command and point it at an upstream NTP server.
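To make the slewing-versus-stepping distinction concrete, a minimal sketch (illustrative only, requires root privileges): adjtime(3) asks the kernel to correct the clock gradually, while settimeofday(2) jumps it, which is what upsets timeout-based software.

    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        /* Slew: ask the kernel to apply a +0.5 s correction gradually,
           the way an NTP client would. */
        struct timeval delta = { .tv_sec = 0, .tv_usec = 500000 };
        if (adjtime(&delta, NULL) != 0)
            perror("adjtime");

        /* Step: jump the clock forward by one second, roughly what
           'date -s' does; this is what can upset timeout-based software. */
        struct timeval now;
        gettimeofday(&now, NULL);
        now.tv_sec += 1;
        if (settimeofday(&now, NULL) != 0)
            perror("settimeofday");
        return 0;
    }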
I am sending real-time-critical data over the internet between two dedicated computers, using my own protocol.
There is, of course, latency involved.
For debugging and optimization, I'd like to have both computers use the same timebase, i.e. I need to know the time difference between their clocks so that I can judge the latencies better.
Of course, relativity and such doesn't really allow me to sync them perfectly, but I'd like to get as close as possible.
Relying on NTP alone does not appear good enough - clocks can be off by half a second in my experience (clarification: so far I have relied on the default setup provided by Apple).
I need precision in the 1/10 s range at least. The two computers won't be too far apart; ICMP ping times are usually less than 100 ms.
Any suggestions how to do this?
(currently, the machines involved run OS X, so if you know a solution just for them, that'll be a start)
Get the time from a GPS receiver connected to the machines. If they are in a data centre, though, it can unfortunately be difficult to get an antenna into a location where it can get a lock.
I would suggest that your best bet is to install an ntp server on one of the machines and get the other to sync to it.
Did you try using one of the machines as an NTP server for the other? Maybe they won't be in sync with the 'real' time, but this may bring you within the precision you require.
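For judging latencies yourself, the offset/delay arithmetic that NTP uses can also be applied to your own request/reply messages: timestamp the request on send (T1), have the peer record when it received it (T2) and when it replied (T3), and timestamp the reply on arrival (T4). A minimal sketch of that calculation (the timestamps here are made-up numbers; in practice they would come from your own protocol):

    #include <stdio.h>

    /* Classic NTP-style clock offset estimate from four timestamps (seconds):
       T1 = client send, T2 = server receive, T3 = server send, T4 = client receive. */
    static double clock_offset(double t1, double t2, double t3, double t4)
    {
        return ((t2 - t1) + (t3 - t4)) / 2.0;
    }

    static double round_trip_delay(double t1, double t2, double t3, double t4)
    {
        return (t4 - t1) - (t3 - t2);
    }

    int main(void)
    {
        /* Made-up numbers for illustration: the server clock is ~0.25 s ahead. */
        double t1 = 100.000, t2 = 100.260, t3 = 100.262, t4 = 100.030;
        printf("offset: %.3f s, round-trip delay: %.3f s\n",
               clock_offset(t1, t2, t3, t4),
               round_trip_delay(t1, t2, t3, t4));
        return 0;
    }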
"Relying on NTP alone does not appear good enough - clocks can be off by half a second in my experience."
That's strange. ntpd over the Internet is supposed to give you much greater precision.