I have a program which saves some data to an NFC tag. The NFC tag only has a few bytes of memory, and because I need to save a date and time in minutes (decimal) to the tag, I need to store it in the most memory-efficient way possible. For instance, the decimal number 23592786 requires 36 bits, but if the decimal number is converted to a base36 value it only requires 25 bits of memory.
The number 23592786 requires 25 bits either way, because the binary representation of this number is 25 bits long. You can save some bits if the date range is limited: one year contains about 526,000 minutes, so an interval in minutes from 00:00 on 1 Jan 2000 (an arbitrary start date) will take 24 bits (3 bytes) and can represent dates until about the year 2031.
The simplest might be to use Unix time, which gives the number of seconds since 1 Jan 1970; this typically takes 32 bits. As MBo has said, you can reduce the number of bits by 6 by just counting minutes, or save more by choosing a more recent start date. However, there are advantages in using an industry standard. Depending on your application you might be able to get it down to 2 bytes, which could represent about 45 days.
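A minimal sketch of that minute-counting scheme in C, assuming a 3-byte field on the tag as MBo describes (the helper names and big-endian layout are my own choices, not part of either answer):

#include <stdint.h>
#include <time.h>

/* Minutes elapsed since 2000-01-01 00:00:00 UTC (Unix time 946684800).
   The count fits in 24 bits until roughly the year 2031. */
static uint32_t minutes_since_2000(time_t now) {
    return (uint32_t)((now - 946684800) / 60);
}

/* Pack the 24-bit minute count into 3 tag bytes, big-endian. */
static void pack24(uint32_t minutes, uint8_t out[3]) {
    out[0] = (uint8_t)(minutes >> 16);
    out[1] = (uint8_t)(minutes >> 8);
    out[2] = (uint8_t)minutes;
}

For the 2-byte variant, 2^16 minutes is about 45.5 days, so the same counter would have to be taken relative to a rolling reference date known to both sides.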
Related
I have at my disposal 16 bits. Of them, 4 bits are the header and cannot be touched. This leaves us with 12 bits. I would like to encode date and time data into them. These are essentially logs being sent over LPWAN.
Obviously, it's impossible to encode a proper generic date and time into it: the Unix timestamp uses 32 bits, and projects like the Compact Time Format use 5 bytes.
Let's say we don't really need the year, because this information is available elsewhere. Let's also say the time resolution of seconds doesn't have to be super accurate, so we can split the seconds into 30 second intervals. If we were to simply encode the data as is then:
4 bits month (0-11)
5 bits day (0-31)
5 bits hour (0-23)
6 bits minute (0-59)
1 bit second (0,30)
-----------------------------
21 bits
21 bits is much better than 32, but it's still not 12. I could shave one bit from the minutes (rounding to the nearest even minute) and remove the seconds bit, but that still leaves us with 19 bits, which is still far from 12.
Just wondering if it's possible, and if anyone has any ideas.
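For concreteness, the 21-bit layout above could be packed like this (a sketch only; the field order is arbitrary and not any LPWAN convention):

#include <stdint.h>

/* Pack month (0-11), day (0-31), hour (0-23), minute (0-59) and a
   half-minute flag (0 or 1) into the low 21 bits of a 32-bit word. */
static uint32_t pack_datetime(unsigned month, unsigned day, unsigned hour,
                              unsigned minute, unsigned half) {
    return ((uint32_t)month  << 17) |   /* 4 bits */
           ((uint32_t)day    << 12) |   /* 5 bits */
           ((uint32_t)hour   <<  7) |   /* 5 bits */
           ((uint32_t)minute <<  1) |   /* 6 bits */
            (uint32_t)half;             /* 1 bit  */
}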
12 bits can hold 2^12 = 4096 values, which is pretty tight for this task. I'm not sure much can be done in terms of compressing a date-time into one of 4096 values; it is too little space to represent this data.
There are some workarounds, none of them able to achieve what you want, but maybe something you could use anyway:
Split date and time. Alternate, with some algorithm, between sending the date and the time; one bit can be used to indicate which is being sent. This leaves 11 bits to encode either the date or the time. You could go a bit further and split the time this way as well. The receiving side can then reconstruct a full date-time from previously received data.
You could have a scheme where one date packet is sent as a starting point, and subsequent packets count N-second intervals from that starting point.
Remove the date-time from the data completely, saving 12 bits, but send it periodically as a stand-alone heartbeat/date-time packet.
You could try compressing the whole data packet, which could free up more bits to represent the date-time while still fitting into a smaller overall packet size.
If data is sent at reasonably fixed intervals, you could use a circular counter of N-second intervals. This can work if you have few devices and can keep track of when each starts transmitting. For example: a satellite was launched at date-time XYZ and sends a counter every 30 seconds; if we receive a counter value of 100, we calculate the date with simple math as XYZ + 30*100 seconds (see the sketch below).
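A sketch of that last counter idea, assuming the receiver tracks each device's start time and counter wrap-arounds (all names here are illustrative):

#include <time.h>

#define INTERVAL_SECONDS 30
#define COUNTER_PERIOD   4096   /* a 12-bit counter wraps every ~34 hours */

/* Reconstruct a timestamp from a 12-bit circular counter, given the
   device's known start time and how often the counter has wrapped. */
static time_t counter_to_time(time_t start, unsigned wraps, unsigned counter) {
    return start + (time_t)(wraps * COUNTER_PERIOD + counter) * INTERVAL_SECONDS;
}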
No, unless you'd be happy with representing less than a span of a day and a half: 4096 30-second intervals cover just 34 hours and eight minutes. 4096 two-minute intervals is just four times that, or five days, 16 hours and 32 minutes. Still a small fraction of a year.
If you can ensure that the difference between successive log entries is small, then you can fit that difference in 12 bits. You will need a special entry to give an initial date, and you could also insert such an entry whenever the difference between successive entries is too large.
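A sketch of that delta idea: each 12-bit value holds the offset from the previous entry in 30-second units, with the all-ones value reserved to announce a full-timestamp entry (the reserved value is my assumption, not something from the question):

#include <stdint.h>

#define DELTA_FULL 0xFFF   /* reserved: the next record carries a full date */

/* Encode the gap to the previous log entry in 30-second units.
   Returns DELTA_FULL when the gap does not fit in 12 bits, so the
   caller knows to emit a full-timestamp entry instead. */
static uint16_t encode_delta(long gap_seconds) {
    long units = gap_seconds / 30;
    if (units < 0 || units >= DELTA_FULL)
        return DELTA_FULL;
    return (uint16_t)units;
}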
@oleksii has some other good suggestions as well.
Is there any math that can sort out the date from a Unix time value (e.g. 1544901911)?
It is possible to get the time of day by first taking the value modulo 86400, then dividing by 3600 for the hour, and taking the remainder modulo 3600 divided by 60 for the minute.
Is it possible to get the date from this as well? I really don't know how it works (I just know that it begins from 1 Jan 1970 onwards).
I'm not asking about any particular programming language, just the mathematics behind this.
I have problems making sense of what you wrote. 86400 is the number of seconds in a day, so if you have the time in seconds and want the time of day, then modulo 86400 makes sense. As your question starts with time in milliseconds, modulo 86400000 would be more appropriate, but I guess we get the idea either way.
So, as I said, extracting the time of day works because you know the number of seconds in a day. The number of seconds in a year is harder, as you have to deal with leap days. You can have a look at existing standard library implementations, e.g. Python's datetime. That starts by taking the time (or rather the number of days, i.e. the time divided by the time per day, whatever the unit) modulo 400 years, since the number of whole days in 400 years is fixed. Then it goes on looking at 100-year cycles, 4-year cycles and single years, with case distinctions for leap years, tables containing month lengths, and so on. Yes, this can be done, but it's tedious, and it's already part of the standard library of most languages.
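To make the arithmetic concrete, here is a sketch in C. The time of day falls out of the divisions described in the question; the date part uses the well-known days-to-civil-date algorithm from Howard Hinnant's chrono-compatible date algorithms, not something of my own:

#include <stdio.h>

/* Convert days since 1970-01-01 to a proleptic Gregorian year/month/day. */
static void civil_from_days(long days, long *y, unsigned *m, unsigned *d) {
    days += 719468;                               /* shift epoch to 0000-03-01 */
    long era = (days >= 0 ? days : days - 146096) / 146097;
    unsigned doe = (unsigned)(days - era * 146097);                 /* [0, 146096] */
    unsigned yoe = (doe - doe/1460 + doe/36524 - doe/146096) / 365; /* [0, 399]    */
    unsigned doy = doe - (365*yoe + yoe/4 - yoe/100);               /* [0, 365]    */
    unsigned mp  = (5*doy + 2) / 153;                               /* [0, 11]     */
    *d = doy - (153*mp + 2)/5 + 1;
    *m = mp < 10 ? mp + 3 : mp - 9;
    *y = (long)yoe + era * 400 + (*m <= 2);
}

int main(void) {
    long t = 1544901911;              /* the question's example, in seconds */
    long s = t % 86400;               /* seconds into the current day */
    long y; unsigned m, d;
    civil_from_days(t / 86400, &y, &m, &d);
    printf("%ld-%02u-%02u %02ld:%02ld:%02ld UTC\n",
           y, m, d, s / 3600, (s % 3600) / 60, s % 60);  /* 2018-12-15 19:25:11 */
    return 0;
}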
I used PerfMon on Windows XP to check the network load of an application that I have written.
In the below example you see five columns:
Date Time, Bandwidth, [x] Bytes per second sent, [x] Bytes per second received, [x] Total Bytes per second
[x] == The network interface that I checked the load against
Here's the data.
02/18/2014 15:30:50.894,"1000000000","922.92007218169454","826.92838536756381","1749.8484575492582"
02/18/2014 15:30:51.894,"1000000000","994.06970480770792","774.05427718427154","1768.1239819919795"
02/18/2014 15:30:52.894,"1000000000","1446.0226222234514","1319.0206353476713","2765.0432575711229"
02/18/2014 15:30:53.894,"1000000000","2652.0592714274339","1207.0269760983833","3859.0862475258173"
Date, Time and Bandwidth (10^9 bit = 1 Gbit, the LAN connection speed) are obviously correct.
The other three columns are hard to interpret. It says the unit is bytes per second for each, but how can the system resolve 13 or 14 digits after the decimal point if these were really bytes?
What is 0.0000000000000001 byte?
Indeed, the values are plausible up to the decimal point.
The timer's resolution is higher than shown. You might send 923076 bytes in 100003 microseconds; the trace then shows 100 milliseconds and ignores the microseconds in the time column, but calculates 923076/100003 for the bytes-per-second column. Note: I made up the numbers; it doesn't make much sense to hunt for a pair that gives your 922.9200... exactly.
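A toy illustration of why the long fraction appears (the numbers are made up, as above; this is not the actual PerfMon code):

#include <stdio.h>

int main(void) {
    long bytes = 923;             /* exact byte count in the sample interval */
    double elapsed = 1.000083;    /* seconds, from a high-resolution timer   */
    /* An exact integer count divided by a not-quite-round elapsed time
       yields a long fractional tail that carries no physical meaning. */
    printf("%.17g bytes/sec\n", bytes / elapsed);
    return 0;
}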
I am preparing for my exams and was solving problems on the Sliding Window Protocol when I came across these questions.
A 1000 km long cable operates at 1 MBps (megabytes per second). The propagation delay is 10 microsec/km. If the frame size is 1 kB, then how many bits are required for the sequence number?
A) 3 B) 4 C) 5 D) 6
I got the answer as option C, as follows:
The propagation time is 10 microsec/km,
so for 1000 km it is 10*1000 microsec, i.e. 10 millisec.
Then the RTT will be 20 millisec.
In 10^3 millisec the link carries 8*10^6 bits,
so in 20 millisec it carries X bits:
X = 20*(8*10^6)/10^3 = 160*10^3 bits.
Now, 1 frame is of size 1 kB, i.e. 8000 bits,
so the total number of frames will be 160*10^3 / 8000 = 20. This will be the window size.
Hence, to represent 20 frames uniquely we need 5 bits.
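The same arithmetic as a sketch in C (the formula simply divides the bits in flight during the round trip by the frame size, as in the steps above):

#include <math.h>
#include <stdio.h>

int main(void) {
    double bandwidth = 8e6;     /* 1 MBps = 8*10^6 bits per second */
    double rtt = 0.020;         /* 2 * 1000 km * 10 microsec/km    */
    double frame_bits = 8000;   /* 1 kB frames                     */
    double window = bandwidth * rtt / frame_bits;        /* 20 frames */
    printf("window = %.0f frames, bits = %d\n",
           window, (int)ceil(log2(window)));             /* 5 bits    */
    return 0;
}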
The answer was correct as per the answer key. And then I came across this one:
Frames of 1000 bits are sent over a 10^6 bps duplex link between two hosts. The propagation time is 25 ms. Frames are to be transmitted into this link to maximally pack them in transit (within the link).
What is the minimum number of bits (l) that will be required to represent the sequence numbers distinctly? Assume that no time gap needs to be given between transmission of two frames.
(A) l=2 (B) l=3 (C) l=4 (D) l=5
As per the earlier one, I solved this as follows:
The propagation time is 25 ms,
so the RTT will be 50 ms.
In 10^3 ms the link carries 10^6 bits,
so in 50 ms it carries X bits:
X = 50*(10^6)/10^3 = 50*10^3 bits.
Now, 1 frame is of size 1 kb, i.e. 1000 bits,
so the total number of frames will be 50*10^3 / 1000 = 50. This will be the window size.
Hence, to represent 50 frames uniquely we need 6 bits.
And 6 is not even among the options. The answer key uses the same solution but takes the propagation time, not the RTT, for the calculation, and their answer is 5 bits. I am totally confused: which one is correct?
I don't see what RTT has to do with it. The frames are only being sent in one direction.
Round-trip time means that you have to take into account the ACK (acknowledgement message) you must receive, which tells you that the frames you are sending are being received on the other side of the link. This time window is the period in which you get to send the remaining frames the window allows before you expect an ACK.
Ideally you want to be able to transmit continuously, i.e. not having to stop at the window limit to wait for an ACK (which essentially turns the protocol into stop-and-wait). The solution to this question is the minimum number of frames that will be transmitted from the moment the first frame is transmitted to the moment you get an ACK (also known as the size of a large window).
Your calculations look correct in both cases, and it would be safe to assume the answer choices for the second question are wrong.
Here it is a duplex channel, so they take RTT = Tp; hence they have considered only the propagation time. You then get X = 25*10^3 bits, i.e. a window of 25 frames, so 5 bits are needed for the sequence numbers.
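Plugging the second question into the same sketch as before, with the answer key's choice of Tp instead of RTT:

double window = 1e6 * 0.025 / 1000.0;   /* 25 frames in transit (Tp only) */
int bits = (int)ceil(log2(window));     /* ceil(log2(25)) = 5 bits        */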
"Geolocation is the identification of the real-world geographic location of an object. Geolocation may refer to the practice of assessing the location, or to the actual assessed location." -- http://en.wikipedia.org/wiki/Geolocation
Is there a standard way to describe temporal locations/coordinates that extend beyond Unix timestamps? If not, please suggest or describe an outline for one. Such a system would formalize times like:
-13,750,000,000 ± 11,000,000 (Big Bang)
1970-01-01:00:00:00 (Unix Epoch)
1 (Year 1 CE)
For example: both Geolocations and Chronolocations frequently lack precision -- this is just one consideration but I'm sure there are more.
My goal is to formalize a way to store and retrieve temporal locations of all kinds. As you might imagine this is more complex than it sounds.
I have never heard of such a system, but it would be fairly trivial to write a class around a structured data type like this:
struct bigTime {
    long millennium;    /* whole millennia before/after the reference point */
    int  decade;        /* whole decades within the millennium */
    long seconds;       /* whole seconds within the decade */
    int  milliseconds;  /* sub-second part, for millisecond resolution */
} bigtime;              /* renamed from "time", which collides with the C library function */
You could store millennia before/after an arbitrary point (even 1970, for simplicity) for long range, decades for mid range, then use seconds and milliseconds for short term.
You could create a class where adding or subtracting X seconds, minutes, hours, days, weeks, months, years, decades, centuries or millennia would be straightforward.
Say you wanted to go 156 years back. That's -15 decades and -189 341 556 seconds.
Or 3205 years, 2 weeks and a day back. That's -3 millennia, -20 decades, -159 080 630 seconds.
Or even 67,000,012 years (from Jonathan's off-topic joke). That's -67000 millennia, -1 decade, -63 113 851.9 seconds.
All of those are from today, but would be from whatever arbitrary point you chose.
The system I describe would give you about 4.2 trillion years to work with in either direction, down to the millisecond, while more or less minimizing the memory required. (I'm sure it could be brought down more if you tried.)
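A sketch of the carry logic such a class would need, building on the struct above and the roughly 31,556,926-second year implied by the examples (the constants and function name are mine):

#define SECONDS_PER_DECADE     315569260L   /* 10 years of 31,556,926 s */
#define DECADES_PER_MILLENNIUM 100

/* Add a (possibly negative) offset in seconds to a struct bigTime value,
   carrying overflow up into the decade and millennium fields; milliseconds
   would be handled the same way one level further down. */
static void bigtime_add_seconds(struct bigTime *t, long delta) {
    t->seconds += delta;
    t->decade  += (int)(t->seconds / SECONDS_PER_DECADE);
    t->seconds %= SECONDS_PER_DECADE;
    t->millennium += t->decade / DECADES_PER_MILLENNIUM;
    t->decade     %= DECADES_PER_MILLENNIUM;
}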