Can any math sort out the date from a time in milliseconds (e.g. 1544901911)?
It is possible to get the time of day by first taking the value modulo 86400, then dividing by 3600 (hours), and taking the remainder modulo 3600 and dividing by 60 (minutes).
Is it possible to get the date from this as well? I really don't know how it works (I just know that it counts from 1 January 1970 onwards).
I am not asking about any programming language, just the mathematics behind it.
I have problems making sense of what you wrote. 86400 is the number of seconds in a day. So if you have the time in seconds, and want the time of the day, then modulo 86400 makes sense. As your question starts with time in milliseconds, modulo 86400000 would be more appropriate. But I guess we get the idea either way.
So as I said, extracting the time of day works because you know the number of seconds in a day. The number of seconds in a year is harder, as you have to deal with leap days. You can have a look at existing standard library implementations, e.g. Python's datetime. That starts by taking the time (or rather the number of days, i.e. time divided by time per day, whatever the unit) modulo 400 years, since the number of whole days in 400 years is fixed. Then it goes on looking at 100-year cycles, 4-year cycles and years, with case distinctions for leap years, and tables containing month lengths, and so on. Yes, this can be done, but it's tedious, and it's already part of the standard library for most languages.
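For concreteness, here is a rough Python sketch of the arithmetic described above, using the timestamp from the question and treating it as seconds. The final year/month/day step is delegated to the standard library, since that is where the leap-year and month-length bookkeeping lives.

from datetime import datetime, timezone

ts = 1544901911                      # seconds since 1970-01-01T00:00:00Z

# Time of day: seconds within the current day, then hours/minutes/seconds.
secs_of_day = ts % 86400             # 86400 seconds per day
hours, rem = divmod(secs_of_day, 3600)
minutes, seconds = divmod(rem, 60)
print(hours, minutes, seconds)       # 19 25 11 (UTC)

# Date: number of whole days since the epoch. Turning this into a calendar
# date needs leap-year and month-length bookkeeping, so defer to the library.
days_since_epoch = ts // 86400
print(days_since_epoch)                               # 17880 whole days
print(datetime.fromtimestamp(ts, tz=timezone.utc))    # 2018-12-15 19:25:11+00:00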
For passing times in JSON to/from a web API, why would I choose to use an ISO8601 string instead of simply the UTC epoch value? For example, both of these are the same:
Epoch = 1511324473
iso8601 = 2017-11-22T04:21:13Z
The epoch value is obviously shorter in length, which is always good for mobile data usage, and it's pretty simple to convert between epoch values and the language's local Date type variable.
I'm just not seeing the benefit to using an ISO string value.
Both are unambiguous and easy to parse in programs. The benefit of the epoch value, as you mentioned, is that it is smaller and will be faster to process in your program. The downside is that it means nothing to humans.
ISO 8601 dates are easy to read on their own and don't require the user to translate a number into a recognizable date. The size increase of ISO 8601 is unnoticeable compared to much, much larger things like images.
Personally I would pick ease of reading over speed for an API as it will cut down on debugging time while inspecting values sent and received. In another situation such as passing times around internally you may wish to choose the speed of an integer over text so it depends which you think will be more useful.
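For illustration, here is a rough Python sketch (not tied to any particular web framework) of the round trip between the two representations, using the values from the question:

from datetime import datetime, timezone

epoch = 1511324473

# Epoch seconds -> ISO 8601 string (UTC)
dt = datetime.fromtimestamp(epoch, tz=timezone.utc)
iso = dt.isoformat().replace("+00:00", "Z")
print(iso)                                  # 2017-11-22T04:21:13Z

# ISO 8601 string -> epoch seconds
parsed = datetime.fromisoformat("2017-11-22T04:21:13+00:00")
print(int(parsed.timestamp()))              # 1511324473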
Unix/Epoch Time
+ Compact
+ Easy to do arithmetic without any libraries, e.g. var tomorrow = now() + 60*60*24
- Not human-readable
- Cannot represent dates before 1 January 1970 (unless negative values are used)
- Cannot represent dates after 19 January 2038 (if using Int32)
- Timezone and offset are "external" info; there is ambiguity as to whether the value is UTC or some other offset.
- Officially the spec supports only seconds.
- When someone changes the value to milliseconds for better resolution, there is ambiguity as to whether the value is in seconds or milliseconds.
- Older than ISO 8601 format
- Represents seconds since 1970 (as opposed to instant in time)
- Precision of seconds
ISO 8601 Time
+ Human readable
+ Represents instant in time, as opposed to seconds since 1970
+ Newer than the Unix time format
+ Specifies representation of date, time, date-time, duration and interval!
+ Supports an offset representation
+ Precision of nanoseconds
- Less compact
- For any arithmetic, a rich date/time library is required (like java.time.OffsetDateTime)
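As a rough illustration of the arithmetic point in both lists, here is a Python sketch (standing in for java.time) contrasting plain integer arithmetic on an epoch value with library-assisted arithmetic on an ISO 8601 value:

import time
from datetime import datetime, timedelta, timezone

# Epoch time: "tomorrow" is plain integer arithmetic.
now = int(time.time())
tomorrow_epoch = now + 60 * 60 * 24

# ISO 8601: parse, use a library's calendar-aware arithmetic, re-format.
now_dt = datetime.now(timezone.utc)
tomorrow_iso = (now_dt + timedelta(days=1)).isoformat()

print(tomorrow_epoch)
print(tomorrow_iso)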
I'm running into a situation where a cron job I thought was running every 55 minutes is actually running at 55 minutes after the hour and at the top of the hour. Actually, it's not a cron job, but it's a PHP scheduling application that uses cron syntax.
When I ask this application to schedule a job every 55 minutes, it creates a crontab line like the following.
*/55 * * * *
This crontab line ends up not running a job every 55 minutes. Instead, a job runs at 55 minutes after the hour and at the top of the hour. This is not what I want. I've run this through a cron tester, and it confirms that the undesired behavior is correct cron behavior.
This led me to look up what the / actually means. When I looked at the cron manual I learned that the slash indicates "steps", but the manual itself is a little fuzzy on what that means:
Step values can be used in conjunction with ranges. Following a range with "/<number>" specifies skips of the number's value through the range. For example, "0-23/2" can be used in the hours field to specify command execution every other hour (the alternative in the V7 standard is "0,2,4,6,8,10,12,14,16,18,20,22"). Steps are also permitted after an asterisk, so if you want to say "every two hours", just use "*/2".
The manual's description ("specifies skips of the number's value through the range") is a little vague, and the "every two hours" example is a little misleading (which is probably what led to the bug in the application)
So, two questions:
How does the unix cron program use the "step" information (the number after the slash) to decide whether to run a job? (Modular division? If so, on what, and with what condition deciding a run versus a skip? Or is it something else?)
Is it possible to configure a unix cron job to run every "N" minutes?
Step values can be used in conjunction with ranges. Following a range
with "<number>" specifies skips of the number's value through the range. For
example, "0-23/2" can be used in the hours field to specify command
execution every other hour (the alternative in the V7 standard is
"0,2,4,6,8,10,12,14,16,18,20,22"). Steps are also permitted after an
asterisk, so if you want to say "every two hours", just use "*/2".
The "range" being referred to here is the range given before the /, which is a subrange of the range of times for the particular field. The first field specifies minutes within an hour, so */... specifies a range from 0 to 59. A first field of */55 specifies all minutes (within the range 0-55) that are multiples of 55 -- i.e., 0 and 55 minutes after each hour.
Similarly, 0-23/2 or */2 in the second (hours) field specifies all hours (within the range 0-23) that are multiples of 2.
If you specify a range starting other than at 0, the number (say N) after the / specifies every Nth minute/hour/etc starting at the lower bound of the range. For example, 3-23/7 in the second field means every 7th hour starting at 03:00 (03:00, 10:00, 17:00).
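A rough way to model this (an approximation, not cron's actual source): a field of the form range/step matches every step-th value starting at the low end of the range, which is exactly what Python's range() produces:

# Rough model of how cron expands a field of the form <range>/<step>:
# take every <step>-th value starting at the low end of the range.
def expand(low, high, step):
    return list(range(low, high + 1, step))

print(expand(0, 59, 55))   # */55 in the minutes field -> [0, 55]
print(expand(0, 23, 2))    # 0-23/2 or */2 in the hours field -> [0, 2, 4, ..., 22]
print(expand(3, 23, 7))    # 3-23/7 in the hours field -> [3, 10, 17]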
This works best when the interval you want happens to divide evenly into the next higher unit of time. For example, you can easily specify an event to occur every 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, or 30 minutes, or every 1, 2, 3, 4, 6, or 12 hours. (Thank the Babylonians for choosing time units with so many nice divisors.)
Unfortunately, cron has no concept of "every 55 minutes" within a time range longer than an hour.
If you want to run a job every 55 minutes (say, at 00:00, 00:55, 01:50, 02:45, etc.), you'll have to do it indirectly. One approach is to schedule a script to run every 5 minutes; the script then checks the current time, and does its work only once every 11 times it's called.
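One possible sketch of that indirect approach (the crontab line and script path are hypothetical): have cron invoke a small script every 5 minutes, e.g. */5 * * * * /usr/bin/python3 /path/to/every55.py, and have the script do its real work only when the number of whole minutes since the epoch is a multiple of 55, i.e. on every 11th invocation. The runs are anchored to the Unix epoch rather than to local midnight, but they are a genuine 55 minutes apart.

import time

# Cron calls this every 5 minutes; only act when the whole-minute count
# since the epoch is a multiple of 55 (every 11th call, since 5 * 11 = 55).
minutes_since_epoch = int(time.time()) // 60
if minutes_since_epoch % 55 == 0:
    print("running the 55-minute job")   # ... actual work goes here ...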
Or you can use multiple lines in your crontab file to run the same job at 00:00, 00:55, 01:50, etc. -- except that a day is not a multiple of 55 minutes. If you don't mind having a longer or shorter interval once a day, week, or month, you can write a program to generate a large crontab with as many entries as you need, all running the same command at a specified time.
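And a sketch of that crontab-generating idea (the command path is hypothetical): emit one crontab line per 55-minute slot of the day, accepting that the final gap, from 23:50 back to 00:00, is only 10 minutes.

# Emit one crontab line per 55-minute slot of the day: 00:00, 00:55, ..., 23:50.
command = "/path/to/job.sh"          # hypothetical command to run
for t in range(0, 24 * 60, 55):
    hour, minute = divmod(t, 60)
    print(f"{minute} {hour} * * * {command}")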
I came across this website that is helpful with regard to cron jobs.
https://crontab.guru
And specific to your case with */55:
https://crontab.guru/#*/55_*_*_*_*
It helped to get a better understanding of the concept behind it.
There is another tool named at that should be considered. It can be used instead of cron to achieve what the topic starter wants. As far as I remember, it is pre-installed in OS X but it isn't bundled with some Linux distros like Debian (simply apt install at).
It runs a job at a specific time of day and that time can be calculated using a complex specification. In our case the following can be used:
You can also give times like now + count time-units, where the time-units can be minutes, hours, days, or weeks and you
can tell at to run the job today by suffixing the time with today and to run the job tomorrow by suffixing the time with tomorrow.
The script every2min.sh is executed every 2 minutes. Each time it runs, it schedules its own next execution:
#!/bin/sh
# Re-schedule this script to run again in 2 minutes, then do the actual work.
at -f ./every2min.sh now + 2 minutes
echo "$(date +'%F %T') running..." >> /tmp/every2min.log
Which outputs
2019-06-27 14:14:23 running...
2019-06-27 14:16:00 running...
2019-06-27 14:18:00 running...
As at does not know about a "seconds" unit, the execution time is rounded to a full minute after the first run. But for the given task (a 55-minute interval) that should not be a big problem.
There might also be security considerations:
For both at and batch, commands are read from standard input or the file specified with the -f option and executed. The working directory, the environment (except for the variables BASH_VERSINFO, DISPLAY, EUID, GROUPS, SHELLOPTS, TERM, UID, and _) and the umask are retained from the time of invocation.
This is the easiest way I've seen so far to schedule something to be run every X minutes.
Time is often converted into a numeric value (e.g., milliseconds or other units) elapsed since a reference date (the epoch).
The overview on wikipedia is very incomplete:
http://en.wikipedia.org/wiki/Epoch_%28reference_date%29
What is the list of epoch dates for all major OS platforms and programming languages?
(e.g., R running on different OS platforms such as Unix, Windows, Android, and Apple, as well as Perl, Python, Ruby, C++, Java).
In most modern frameworks, it's the Unix/POSIX standard of 1/1/1970.
You asked about R - it's 1/1/1970. Reference here.
Most languages/frameworks that are cross-platform either do this internally, or they abstract it. It would be too painful otherwise. Imagine having to compensate for a different epoch every time you re-targeted. That would be awful.
BTW - There is another list here that may be more interesting to you.
I have got the following data:
In a computing context, an epoch is the date and time relative to which a computer's clock and timestamp values are determined. The epoch traditionally corresponds to 0 hours, 0 minutes, and 0 seconds (00:00:00) Coordinated Universal Time (UTC) on a specific date, which varies from system to system. Most versions of Unix, for example, use January 1, 1970 as the epoch date; Windows uses January 1, 1601; Macintosh systems use January 1, 1904, and Digital Equipment Corporation's Virtual Memory System (VMS) uses November 17, 1858.
Reference: Epoch
Also see: Epoch Computer
In Java the epoch is the same as Unix, i.e. midnight on January 1, 1970 (UTC), which is widely used in programming.
In the Thrift IDL there isn't a Date type. What's the best cross language mechanism to represent a date object. I think there are 2 ideal candidates but I'd love to hear other ideas.
String - in each language you could use something like strftime to convert the date back.
i32 - Time since epoch can be converted back.
I'm sure there are other things to think about besides conversion. Hoping people out there have some good feedback.
tl;dr: use an appropriately encoded string unless there is a reason to do otherwise.
It depends on what is required. Here are some differences - keep in mind that modern computers are fast and conversion is likely only a small fraction of overall application time, so "more processing" is generally not even measurable!
String (with ISO 8601 or the stricter XML dateTime):
"more space" / "more processing" (see above) / fixed size or variable size
standardized culture-neutral format
human readable and easily identifiable
supports timezones
more range (-9999 to 9999)
more/arbitrary precision (up to 1us)
lexicographically ordered (within same timezone and compatible format)
Epoch (UNIX variant):
"less space" / "less processing" / fixed size
standardized culture-neutral format
not human readable (a diligent coder should be able to identify "about now")
no timezones (can't even distinguish between "local" and UTC)
less range (1970 to 2038 with a signed 32-bit number)
less/fixed precision (1 second)
numerically ordered
(The Julian day is another encoding with many similarities to an Epoch time.)
Conclusion:
Unless space/performance is a proven issue - this requires a performance analysis and functional requirements - I'd pick the former. Computers today are a good bit faster than computers just a few years ago and much, much faster than computers decades old.
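As a rough illustration (in Python rather than Thrift IDL), here is the round trip for both candidate encodings; the values echo the earlier epoch/ISO example, and the integer variant would want i64 rather than i32 if dates past 2038 matter:

from datetime import datetime, timezone

dt = datetime(2017, 11, 22, 4, 21, 13, tzinfo=timezone.utc)

# Candidate 1: ISO 8601 / XML dateTime string field
as_string = dt.isoformat().replace("+00:00", "Z")            # '2017-11-22T04:21:13Z'
back_from_string = datetime.fromisoformat(as_string.replace("Z", "+00:00"))

# Candidate 2: integer seconds since the Unix epoch
as_int = int(dt.timestamp())                                  # 1511324473
back_from_int = datetime.fromtimestamp(as_int, tz=timezone.utc)

assert back_from_string == back_from_int == dt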
Just for posterity, you may be interested in temporenc (http://temporenc.org), a comprehensive binary encoding format for dates and times.