If the system time changes for daylight saving time, does the output of gettimeofday() change?
Note that Unix systems do not "change the system time" with daylight saving time -- it is all handled when programs want to print the current time, typically using localtime(3).
The seconds since the epoch keep counting monotonically even when local governments decide to change the clocks on our walls.
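A minimal C sketch of that split: the counter gettimeofday() returns is unaffected by DST; only the conversion for display consults the zone rules.

```c
/* Sketch: the epoch count from gettimeofday() is independent of the
 * timezone; only the conversion for display (localtime) depends on it. */
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

int main(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);          /* seconds since the epoch, UTC-based */

    time_t t = tv.tv_sec;
    char buf[64];

    /* The raw counter: unaffected by DST or timezone settings. */
    printf("epoch seconds: %lld\n", (long long)t);

    /* Conversion to wall-clock time happens only here, via the tz rules. */
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", localtime(&t));
    printf("local time:    %s\n", buf);
    return 0;
}
```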
UNIX/Linux computers use UNIX time, which is more or less UTC (i.e. Greenwich, London). Only when you print or display the time does it get converted into local time. This is done by consulting the timezone setting.
On Linux the timezone database is in /usr/share/zoneinfo; the current timezone is defined in /etc/localtime. This file handles daylight saving time, so the conversion when you print is handled cleanly. Note that timezones are usually defined for geographical and political reasons.
Your timezone database will be periodically updated as various governments and/or local councils decide to change their timezone or daylight saving settings. For example, Samoa skipped a complete day at the end of 2011. Provided the timezone files on any local Samoan UNIX/Linux computers had been updated beforehand, no problems would have been encountered.
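A hedged sketch of that consultation, assuming a glibc-style system: the same epoch second rendered under two real zoneinfo entries, Pacific/Apia being the Samoan zone mentioned above.

```c
/* Sketch: one epoch value rendered under two zoneinfo entries.
 * Zone names refer to files under /usr/share/zoneinfo. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static void show(const char *zone, time_t t)
{
    char buf[64];
    setenv("TZ", zone, 1);   /* point libc at a different zoneinfo file */
    tzset();
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", localtime(&t));
    printf("%-16s %s\n", zone, buf);
}

int main(void)
{
    time_t now = time(NULL);        /* one epoch value ...        */
    show("Europe/London", now);     /* ... two local renderings   */
    show("Pacific/Apia", now);      /* the zone that skipped a day */
    return 0;
}
```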
Other considerations are NTP time synchronisation and leap seconds. If you run an NTP client to keep your computer in sync, it 'slews' the internal computer clock by either speeding it up or slowing it down. Note that it is usually a bad idea to set the time manually using the 'date' or 'rdate' commands, as this can cause a jump in time and may affect software that uses timeouts of some kind (a recent example was an Asterisk PABX that I had to maintain).
If you are going to change the time manually, use the ntpdate command and specify an upstream NTP server.
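For the curious, 'slewing' at the API level looks roughly like this hedged C sketch using adjtime(3); the half-second delta is illustrative, and the call needs root privileges.

```c
/* Sketch of slewing vs stepping, assuming a privileged process:
 * adjtime(3) nudges the clock gradually instead of jumping it the
 * way settimeofday() or the date command would. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval delta = { .tv_sec = 0, .tv_usec = 500000 }; /* +0.5 s */
    struct timeval pending;

    /* The kernel speeds the clock up slightly until the 0.5 s has been
     * absorbed; no discontinuity for timeout-based software to trip on. */
    if (adjtime(&delta, &pending) != 0) {
        perror("adjtime (needs root)");
        return 1;
    }
    printf("previously pending adjustment: %ld.%06ld s\n",
           (long)pending.tv_sec, (long)pending.tv_usec);
    return 0;
}
```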
I get that the calculation for time on a Unix system is the current time minus the Epoch, but how does the physical computer know when the Epoch was? You can't really hard-code the starting time because hardware is made at different times.
I figure you could have a source of truth accessible on the internet, but that would mean offline computers would never know what time it is.
Computers have a built-in (battery powered) clock that the user can set.
Time is synchronized over the internet (see NTP), but offline computers simply use their built-in clock (which usually starts to drift away from real time at some point and has to be adjusted every so often).
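A Linux-specific sketch of reading that battery-backed clock directly, assuming a readable /dev/rtc device (some systems expose it as /dev/rtc0 instead):

```c
/* Sketch: querying the battery-backed hardware clock the system
 * reads at boot to initialize the software clock. Linux-specific. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/rtc.h>

int main(void)
{
    int fd = open("/dev/rtc", O_RDONLY);
    if (fd < 0) { perror("open /dev/rtc"); return 1; }

    struct rtc_time rt;
    if (ioctl(fd, RTC_RD_TIME, &rt) < 0) {
        perror("RTC_RD_TIME");
        close(fd);
        return 1;
    }

    /* rtc_time counts years since 1900 and months from 0, like struct tm */
    printf("hardware clock: %04d-%02d-%02d %02d:%02d:%02d\n",
           rt.tm_year + 1900, rt.tm_mon + 1, rt.tm_mday,
           rt.tm_hour, rt.tm_min, rt.tm_sec);
    close(fd);
    return 0;
}
```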
I have a stored procedure that inserts the value of SYSUTCDATETIME() into a column.
I live in Hastings in the UK (pretty close to the meridian line), so we are currently on British Summer Time (one hour ahead of GMT or UTC). Ergo, when I ran this procedure at 19.46 last night, I expected it to record 18.46 as the UTC value in the relevant field of the underlying table. Instead I was surprised to see the value recorded as 18.06, forty minutes adrift of what I had expected.
This makes me wonder if SQL Server is in fact using the internet to determine UTC time (my computer always records its location as being considerably further west than it actually is, which I assume is connected with my ISP's location). If this is indeed the case then I need to find a way of getting SQL Server to calculate UTC time from the local machine rather than using the internet, as an anomaly like this could lead to some very serious consequences.
Does anyone know what SQL Server actually uses as its source for calculating UTC time, and, if it is the internet (when available), how to force it to use the local machine on which it is running?
Do Unix timestamps share the same value in different time zones?
Basically, if one computer is running in Japan under a Japanese timezone and another is running in the UK, is the Unix timestamp the same for both at a particular instant in time? I believe it is, as the Unix timestamp is based on UTC.
In other words, in my mobile application, is it safe to calculate time differences on the client side based on the Unix timestamp? It wouldn't be if the current Unix timestamp were different depending on the timezone.
Yes, assuming that the two servers have their clocks synchronized using some NTP service.
NOTE:
In MySQL, UNIX_TIMESTAMP() is independent of the local time_zone and returns its result with respect to UTC, but UNIX_TIMESTAMP(date) assumes that the specified date is in the local time_zone, so it first converts it to UTC and then returns the result.
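To the client-side question above, a minimal C sketch of why difference calculations are safe; the 90-second offset stands in for a second reading.

```c
/* Sketch: timestamp arithmetic never touches the timezone, so the same
 * difference comes out whether the device is set to Tokyo or London. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t start = time(NULL);   /* epoch seconds, the same worldwide */
    /* ... do work, or receive a second timestamp from a server ... */
    time_t end = start + 90;     /* stand-in for a later reading */

    printf("elapsed: %.0f seconds\n", difftime(end, start));
    return 0;
}
```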
After doing some research related to the Network Time Protocol (NTP), I was wondering what its real use could be. From my knowledge, almost all devices have a real-time clock that keeps updating itself even when the machine is shut down. Then, when the machine boots up, the software clock takes its value from this hardware clock (the real-time clock).
What is the point of taking the time from a server and thereby exposing the machine to some kind of time attack? Is the goal to keep a set of hosts strictly synchronized and prevent their times from differing too much (and how much, in reality, is this "too much")? Also: if a host is configured to use NTP, is its software clock still initialized from the real-time clock and simply corrected according to the received NTP packets, or not?
I'm trying out the sample applications provided with the PingFederate .NET Integration Kit. I was able to make them work for the single-server set-up (my machine served as both the IdP and the SP).
But when I tried setting up two machines as specified in this link:
https://documentation.pingidentity.com/display/NETIK/Deploying+the+Sample+Applications
A more realistic scenario is to deploy the applications on a separate IIS server machine
I was able to edit the Adapter Instance and the Default URL, but there's this problem of clock skew between servers:
Verify that your server clocks are synchronized. If they are not synchronized, you can account for this by adjusting the Not Before Tolerance value in the OpenToken adapter configuration, which is the amount of time (in seconds) to allow for clock skew between servers. The default and recommended value is 0.
I checked the possible values and the max is 3600 seconds.
Question: What if my servers have more than an hour of time difference between them? Is that set-up still possible? (The servers are actually in different time zones.)
The OpenToken uses GMT, so timezones are taken out of the picture. As long as each server is set to the proper time, and the actual proper timezone for where it is, it should work just fine. For example, you can have serverA in New York City and serverB in Los Angeles. If serverA is set to Eastern Time and serverB is set to Pacific Time, then the OpenToken will work - since it converts times to GMT, the times on the token will be the "same".
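A hypothetical sketch, not the actual OpenToken code, of what a "Not Before Tolerance" check boils down to once both sides work in epoch (GMT-based) seconds; the function name and values are illustrative.

```c
/* Hypothetical sketch of a "Not Before Tolerance" style check: both
 * sides compare GMT-based epoch seconds, so the zones the servers
 * display in never enter the math. Not the actual OpenToken source. */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* tolerance_s mirrors the adapter's skew allowance (0..3600 in the UI) */
static bool token_usable(time_t not_before, time_t now, int tolerance_s)
{
    return now + tolerance_s >= not_before;
}

int main(void)
{
    time_t now = time(NULL);
    time_t not_before = now + 120;   /* issuer's clock runs 2 min ahead */

    printf("tolerance   0 s: %s\n",
           token_usable(not_before, now, 0) ? "accepted" : "rejected");
    printf("tolerance 300 s: %s\n",
           token_usable(not_before, now, 300) ? "accepted" : "rejected");
    return 0;
}
```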
Hope that makes sense - I need another cup of coffee this morning. :)