In the Elixir DateTime implementation, what is meant by Calendar.std_offset? - datetime

In the docs for DateTime I see that Calendar.std_offset is "The time zone standard offset in seconds (not zero in summer times)", from this link.
A Calendar.utc_offset is the offset in seconds from Coordinated Universal Time (UTC), according to Wikipedia. So what is the purpose of Calendar.std_offset? What does it do? It seems you could specify an offset purely with "utc_offset". Is "std_offset" there to account for daylight saving time only?

The standard offset is the additional offset applied on top of the standard time/UTC offset during summer/daylight saving time for the given zone. So given a UTC offset of 5 hours and a standard offset of 1 hour, the total summer/daylight saving time offset is 6 hours, while the standard time/UTC offset remains 5 hours.
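For comparison, here is a minimal Python sketch of the same split (assuming Python 3.9+ with the zoneinfo module and an available tz database): utcoffset() is the total offset from UTC, while dst() is the summer-time adjustment that Elixir keeps separate as std_offset.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+; may need the tzdata package on some systems

tz = ZoneInfo("America/New_York")
summer = datetime(2021, 7, 1, 12, 0, tzinfo=tz)
winter = datetime(2021, 1, 1, 12, 0, tzinfo=tz)

# Total offset from UTC (in Elixir terms: utc_offset + std_offset)
print(summer.utcoffset())  # -1 day, 20:00:00  -> -04:00 in summer
print(winter.utcoffset())  # -1 day, 19:00:00  -> -05:00 in winter

# The daylight saving adjustment alone (the analogue of std_offset)
print(summer.dst())        # 1:00:00
print(winter.dst())        # 0:00:00
```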

Related

Is it possible to encode date AND time (with some caveats) into 12 bits?

I have at my disposal 16 bits. Of them, 4 bits are the header and cannot be touched. This leaves us with 12 bits. I would like to encode date and time data into them. These are essentially logs being sent over LPWAN.
Obviously, it's impossible to encode a proper, generic date and time into that: the Unix timestamp uses 32 bits, and projects like Compact Time Format use 5 bytes.
Let's say we don't really need the year, because this information is available elsewhere. Let's also say the time resolution of seconds doesn't have to be super accurate, so we can split the seconds into 30 second intervals. If we were to simply encode the data as is then:
4 bits month (0-11)
5 bits day (0-31)
5 bits hour (0-23)
6 bits minute (0-59)
1 bit second (0,30)
-----------------------------
21 bits
21 bits is much better than 32, but it's still not 12. I could subtract one bit from the minutes (rounding to the nearest even minute) and remove the seconds bit, but that still leaves us with 19 bits, which is still far from 12.
Just wondering if it's possible, and if anyone has any ideas.
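For reference, the layout above packs like this in Python (field order as listed in the question; the function name is purely illustrative):

```python
def pack21(month, day, hour, minute, half_minute):
    """Pack month (0-11), day (1-31), hour (0-23), minute (0-59) and a
    30-second flag (0 or 1) into a single 21-bit integer."""
    value = month
    value = (value << 5) | day
    value = (value << 5) | hour
    value = (value << 6) | minute
    value = (value << 1) | half_minute
    return value  # always < 2**21, but still 9 bits over the 12-bit budget

print(pack21(11, 31, 23, 59, 1).bit_length())  # 21
```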
12 bits can hold 2^12 = 4096 values, which is pretty tight for this task. I'm not sure much can be done in terms of compressing a date and time into a number no larger than 4096; it is simply too little space to represent this data.
There are some workarounds, none of them able to achieve what you want, but maybe something you could use anyway:
Split date and time. Alternate, using some algorithm, between sending the date and the time; one bit can indicate which of the two is being sent. This leaves 11 bits to encode either the date or the time. You could go a bit further and split the time this way as well. The receiving side can then reconstruct a full date-time from the previously received data.
You could have a scheme where one date packet is sent as a starting point, and subsequent packets carry increments of N-second intervals from that starting epoch.
Remove the date-time from the data packet completely, saving 12 bits, and send it periodically as a stand-alone heartbeat/date-time packet.
You could try compressing the whole data packet, which could allow using more bits to represent the date-time while still fitting into a smaller overall packet size.
If data is sent at reasonably fixed intervals, you could use a circular counter of N-second intervals. This may work if you have few devices and you can keep track of when they start transmitting. For example: a satellite was launched at date-time XYZ and sends a counter every 30 seconds; if we receive a counter value of 100, the date is simply XYZ + 30*100 seconds (see the sketch below).
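A minimal sketch of that last idea, assuming the start time ("XYZ") is known to both sides and the 12-bit field is a counter of 30-second ticks (the start date below is made up):

```python
from datetime import datetime, timedelta, timezone

# Assumed start-of-transmission time, known out of band (the "XYZ" above)
START = datetime(2023, 1, 1, tzinfo=timezone.utc)
TICK_SECONDS = 30

def time_from_counter(counter):
    """Reconstruct the absolute time from a received counter value:
    START + 30 * counter seconds. A 12-bit counter wraps after 4096 ticks
    (about 34 hours), so the receiver must also keep track of wraps."""
    return START + timedelta(seconds=TICK_SECONDS * counter)

print(time_from_counter(100))  # START + 3000 seconds = 2023-01-01 00:50:00+00:00
```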
No, unless you'd be happy with representing a span of less than a day and a half: 4096 30-second intervals cover only 34 hours and eight minutes. 4096 two-minute intervals is just four times that, or five days, 16 hours, and 32 minutes. Still a small fraction of a year.
If you can ensure that the difference between successive log entries is small, then you can fit that difference in 12 bits. You will need a special entry to give an initial date, and you could insert such an entry whenever the difference between successive entries is too large (see the sketch below).
@oleksii has some other good suggestions as well.
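A rough sketch of that delta idea, assuming timestamps are Unix seconds, 30-second resolution, and a special full-timestamp "sync" entry whenever the gap doesn't fit in 12 bits (names are illustrative only):

```python
TICK = 30                 # 30-second resolution
MAX_DELTA = 2 ** 12 - 1   # 4095 ticks, roughly 34 hours between entries

def encode(timestamps):
    """Turn absolute Unix-second timestamps into ("sync", ts) or ("delta", n) entries."""
    entries, prev = [], None
    for ts in timestamps:
        if prev is None or (ts - prev) // TICK > MAX_DELTA:
            entries.append(("sync", ts))   # full timestamp, sent as a special entry
            prev = ts
        else:
            delta = (ts - prev) // TICK
            entries.append(("delta", delta))
            prev += delta * TICK           # keep encoder and decoder in lockstep
    return entries

def decode(entries):
    out, prev = [], None
    for kind, value in entries:
        prev = value if kind == "sync" else prev + value * TICK
        out.append(prev)
    return out

logs = [1700000000, 1700000031, 1700000061, 1700300000]
print(decode(encode(logs)))  # timestamps recovered to within 30 seconds
```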

Getting date from the world time in millis

Is there any math that can sort out the date from the time in milliseconds? (e.g. 1544901911)
It is possible to get the time of day by first taking the overall milli value modulo 86400, then dividing by 3600 (hour), and by taking it modulo 3600 and dividing by 60 (minute).
Is it possible to get the date from this as well? I really don't know how it works (I just know that it counts from 1 January 1970 onwards).
I'm not asking about any programming language, just the mathematics behind this.
I have problems making sense of what you wrote. 86400 is the number of seconds in a day. So if you have the time in seconds, and want the time of the day, then modulo 86400 makes sense. As your question starts with time in milliseconds, modulo 86400000 would be more appropriate. But I guess we get the idea either way.
So as I said, extracting the time of the day works as you know the number of seconds in a day. The number of seconds in a year is harder, as you have to deal with leap days. You can have a look at existing standard library implementations, e.g. Python datetime. That starts by taking the time (or rather number of days, i.e. time divided by time per day, whatever the unit) modulo 400 years, since the number of whole days in 400 years is fixed. Then it goes on looking at 100 year cycles, 4 year cycles and years, with case distinctions for leap years, and tables containing information about month lengths, and so on. Yes, this can be done, but it's tedious plus already part of standard libraries for most languages.
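A small sketch of both halves, assuming (as the answer above does) that the example value 1544901911 is in seconds rather than milliseconds:

```python
t = 1544901911

# Time of day: plain div/mod with the number of seconds in a day
seconds_today = t % 86400
hour = seconds_today // 3600
minute = (seconds_today % 3600) // 60
second = seconds_today % 60
print(hour, minute, second)          # 19 25 11 (UTC)

# Date: the 400/100/4-year cycle logic is already in the standard library
from datetime import datetime, timezone
print(datetime.fromtimestamp(t, tz=timezone.utc))  # 2018-12-15 19:25:11+00:00
```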

Why use ISO 8601 format for datetime in API instead of numeric milliseconds? [duplicate]

For passing times in JSON to/from a web API, why would I choose to use an ISO8601 string instead of simply the UTC epoch value? For example, both of these are the same:
Epoch = 1511324473
iso8601 = 2017-11-22T04:21:13Z
The epoch value is obviously shorter in length, which is always good for mobile data usage, and it's pretty simple to convert between epoch values and the language's local Date type variable.
I'm just not seeing the benefit to using an ISO string value.
Both are unambiguous and easy to parse in programs. The benefit of epoch like you have mentioned is that it is smaller and will be faster to process in your program. The downside is it means nothing to humans.
ISO 8601 dates are easy to read on their own and don't require the user to translate a number into a recognizable date. The size increase of ISO 8601 is unnoticeable compared to much larger things like images.
Personally I would pick ease of reading over speed for an API, as it will cut down on debugging time while inspecting values sent and received. In other situations, such as passing times around internally, you may wish to choose the speed of an integer over text, so it depends which you think will be more useful.
Unix/Epoch Time
+ Compact
+ Easy to do arithmetic actions without any libraries, i.e. var tomorrow=now()+60*60*24
- Not human-readable
- Cannot represent dates before 1 January 1970 (without resorting to negative values, which many systems handle poorly)
- Cannot represent dates after 19 January 2038 (if using Int32)
- Timezone and offset are "external" info, there is ambiguity if the value is UTC or any other offset.
- Officially the spec supports only seconds.
- When someone changes the value to milliseconds for better resolution, there is ambiguity as to whether the value is in seconds or milliseconds.
- Older than ISO 8601 format
- Represents seconds since 1970 (as opposed to instant in time)
- Precision of seconds
ISO 8601 Time
+ Human readable
+ Represents instant in time, as opposed to seconds since 1970
+ Newer than the Unix time format
+ Specifies representation of date, time, date-time, duration and interval!
+ Supports an offset representation
+ Arbitrary sub-second precision via fractional seconds (nanoseconds and beyond)
- Less compact
- For any arithmetic, a rich library is required (like java.time.OffsetDateTime)
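As a quick sanity check of the equivalence shown in the question, a Python sketch converting between the two representations:

```python
from datetime import datetime, timezone

epoch = 1511324473
iso = datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat()
print(iso)  # 2017-11-22T04:21:13+00:00 (the same instant as 2017-11-22T04:21:13Z)

# ...and back (fromisoformat accepts the +00:00 form on all supported versions)
parsed = datetime.fromisoformat("2017-11-22T04:21:13+00:00")
print(int(parsed.timestamp()))  # 1511324473
```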

Unix time uncertainty

I have a gap in my understanding of Unix time. Unix time started being counted on 1 January 1970, but in what timezone?
Say it's 11 p.m. on 31 December 1969 in London (Unix time -3600).
In Sydney it is 8 a.m. on 1 January 1970 (Unix time 28800) at the same time.
So my question is: when did they start counting Unix time? 1 January 1970 in which timezone?
Thank you
"Unix time" should always be UTC.
http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html#tag_04_15
Wikipedia has some further verbiage around this at https://en.wikipedia.org/wiki/Unix_time#UTC_basis:
The precise definition of Unix time as an encoding of UTC is only
uncontroversial when applied to the present form of UTC. Fortunately,
the fact that the Unix epoch predates the start of this form of UTC
does not affect its use in this era: the number of days from 1 January
1970 (the Unix epoch) to 1 January 1972 (the start of UTC) is not in
question, and the number of days is all that is significant to Unix
time.
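A short Python check of the same point: epoch 0 is midnight on 1 January 1970 UTC, and the asker's London example falls out of that definition directly:

```python
from datetime import datetime, timezone

# Epoch 0 is 1970-01-01 00:00:00 UTC, independent of any local timezone
print(datetime.fromtimestamp(0, tz=timezone.utc))  # 1970-01-01 00:00:00+00:00

# 11 p.m. on 31 December 1969 in London (UTC+0 in winter) is indeed -3600
print(int(datetime(1969, 12, 31, 23, 0, tzinfo=timezone.utc).timestamp()))  # -3600
```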

How to save date and time with the smallest amount of space

I have a program which saves some data to an NFC tag. The NFC tag only has a few bytes of memory, and because I need to save a date and time in minutes (decimal) to the tag, I need to save this in the most memory-efficient way possible. For instance, the decimal number 23592786 requires 36 bits, but if the decimal number is converted to a base-36 value it only requires 25 bits of memory.
The number 23592786 requires 25 bits, because the binary representation of this number is 25 bits long. You can save some bits if the date range is limited: one year contains about 526,000 minutes, so an interval in minutes from 0:00 on 1 January 2000 (an arbitrary start date) will take 24 bits (3 bytes) and can represent dates until about the year 2031.
The simplest might be to use a Unix time; this gives the number of seconds since 1 January 1970 and typically takes 32 bits. As MBo has said, you can reduce the number of bits by 6 by just counting minutes, or by choosing a more recent start date. However, there are advantages to using an industry standard. Depending on your application you might be able to get it down to 2 bytes, which could represent about 45 days (see the sketch below).
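A minimal sketch of the minutes-since-2000 idea from the first answer (the epoch choice is arbitrary, as noted there); 3 bytes last until roughly 2031, while 2 bytes from a more recent start date cover about 45 days:

```python
from datetime import datetime, timedelta, timezone

EPOCH_2000 = datetime(2000, 1, 1, tzinfo=timezone.utc)  # arbitrary start date

def encode_minutes(dt, size=3):
    """Minutes since the chosen epoch, packed big-endian into `size` bytes."""
    minutes = int((dt - EPOCH_2000).total_seconds()) // 60
    assert 0 <= minutes < 2 ** (8 * size)
    return minutes.to_bytes(size, "big")

def decode_minutes(raw):
    return EPOCH_2000 + timedelta(minutes=int.from_bytes(raw, "big"))

raw = encode_minutes(datetime(2024, 6, 1, 12, 34, tzinfo=timezone.utc))
print(raw.hex(), decode_minutes(raw))  # 3 bytes, minute resolution
```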
