While going through the Logstash date plugin documentation at
https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html#plugins-filters-date-match
I came across TAI64N date format.
Can someone please explain this time format?
TAI stands for Temps Atomique International, the current international real-time standard. One TAI second is defined as the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium atom. TAI also specifies a frame of reference.
From Toward a Unified Timestamp with explicit precision
TAI64 defines a 64-bit integer format where each value identifies a particular SI second. The duration of SI seconds is defined through a known, precise count of state transitions of a cesium atom. Time is structured as a sequence of seconds starting January 1, 1970 in the Gregorian calendar, when atomic time (TAI) became the international standard for real time. The standard defines 2^62 seconds before the year 1970, and another 2^62 from this epoch onward, thus covering a span of roughly 300 billion years, enough for most applications.
The extensions TAI64N and TAI64NA allow for finer time resolutions by
referring to particular nanoseconds and attoseconds (10^-18 s), respectively, within a particular second.
While TAI64 is compellingly simple and consistent, it has to be extended not only
with regard to fine resolutions, but in other ways as well.
It is only concerned with time-points, but a complete model of time needs to address the interrelation of time-points and intervals as well. Most models conceive intervals as sets of consecutive timepoints. This creates the obvious transformation problem - assuming a dense time domain - that no amount of time-points with a postulated duration of 0 can yield an interval with a duration larger than 0, and that even the shortest interval is a set of an infinite number of time-points.
TAI64 does not address uncertainty with respect to time.
It emphasizes monotonically-increasing continuity of time measurement. Human perception of time, however, is shaped by less regular astronomical phenomena.
More precisely, the TAI64 format is attractive for several reasons (a decoding sketch follows this list):
International Atomic Time
Strictly monotonic (no leap seconds)
64-bit uint #seconds from epoch
32-bit uint #nano-seconds (TAI64N)
32-bit uint #atto-seconds (TAI64NA)
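As a concrete illustration (a minimal sketch of my own, not from the Logstash docs), the external TAI64N format is simply an 8-byte big-endian TAI64 label followed by a 4-byte big-endian nanosecond count; the '@'-prefixed hex string is the textual rendering used by daemontools-style loggers:

import struct

TAI64_EPOCH = 2**62   # label value of the second that began 1970 TAI

def decode_tai64n(stamp: str):
    # Returns (seconds relative to 1970 TAI, nanoseconds); negative means before 1970.
    if stamp.startswith("@"):
        stamp = stamp[1:]
    raw = bytes.fromhex(stamp)            # 12 bytes: 8-byte label + 4-byte nanosecond count
    label, nanos = struct.unpack(">QI", raw)
    if label >= 2**63:
        raise ValueError("labels >= 2^63 are reserved for future extensions")
    return label - TAI64_EPOCH, nanos

# The second that began 1970 TAI, plus 0 nanoseconds:
print(decode_tai64n("@400000000000000000000000"))   # -> (0, 0)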
You can read further in Bernstein, D. J., 2002, "TAI64, TAI64N, and TAI64NA":
TAI64, TAI64N, and TAI64NA
TAI and real time
TAI64 labels and external TAI64 format. A TAI64 label is an integer
between 0 and 2^64 referring to a particular second of real time.
Integer s refers to the TAI second beginning exactly 2^62 - s seconds
before the beginning of 1970 TAI, if s is between 0 inclusive and 2^62
exclusive; or the TAI second beginning exactly s - 2^62 seconds after
the beginning of 1970 TAI, if s is between 2^62 inclusive and 2^63
exclusive. Integers 2^63 and larger are reserved for future
extensions. Under many cosmological theories, the integers under 2^63
are adequate to cover the entire expected lifetime of the universe; in
this case no extensions will be necessary. A TAI64 label is normally
stored or communicated in external TAI64 format, consisting of eight
8-bit bytes in big-endian format. This means that bytes b0 b1 b2 b3 b4 b5 b6 b7 represent the label b0 * 2^56 + b1 * 2^48 + b2 * 2^40 + b3 * 2^32 + b4 * 2^24 + b5 * 2^16 + b6 * 2^8 + b7.
For example, bytes 3f ff ff ff ff ff ff ff hexadecimal represent the
second that ended 1969 TAI; bytes 40 00 00 00 00 00 00 00 hexadecimal
represent the second that began 1970 TAI; bytes 40 00 00 00 00 00 00 01 hexadecimal represent the following second. Bytes 40 00 00 00 2a 2b 2c 2d hexadecimal represent 1992-06-02 08:07:09 TAI, also known as
1992-06-02 08:06:43 UTC.
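To make the byte arithmetic concrete, here is a small Python sketch (mine, not part of the quoted text) that rebuilds the last example: it converts 1992-06-02 08:07:09 TAI into a TAI64 label and prints the external, big-endian byte form.

import struct
from datetime import datetime

TAI64_EPOCH = 2**62   # label of the second that began 1970 TAI

# Seconds from 1970-01-01 00:00:00 TAI to 1992-06-02 08:07:09 TAI.
# Plain calendar arithmetic suffices because both instants are TAI,
# so no leap seconds are involved.
seconds = int((datetime(1992, 6, 2, 8, 7, 9) - datetime(1970, 1, 1)).total_seconds())

label = TAI64_EPOCH + seconds
print(struct.pack(">Q", label).hex())   # expected: 400000002a2b2c2d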
source
Related
I'm trying to interpret the communication between an ISO 7816 type card and the card reader. I've connected inline between the card and the reader. When I dump the output to the console I'm getting data that I'm not expecting, see below:
Action: Card inserted to reader, expect an ATR only
Expected output:
3B 65 00 00 B0 40 40 22 00
Actual Output:
E0 3B 65 00 B0 40 40 22 00 90 00 80 9B 80 E0 E2
The 90 00 is the standard status word for OK after a reset, but why am I still logging additional data, both before the ATR (the E0) and after it?
The communication line is documented in ISO 7816-3 (electrical interface and transmission protocols); look at the respective chapters for the T=0 and T=1 protocols. T=1 is a block-oriented protocol involving a prologue containing node addresses and an epilogue containing a CRC/LRC.
During the ATR, however, no protocol is running yet, since the ATR carries the information about which protocols the card supports, for the terminal to choose from. A 90 00 this early is certainly not SW1/SW2.
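As a side note, if the trace later shows T=1 blocks, the epilogue's error-detection code is easy to recompute. Below is a minimal Python sketch assuming the LRC variant (a plain XOR over the prologue and information field) rather than the CRC variant; the block contents are made up for illustration, not taken from the trace above.

def t1_lrc(block_without_edc: bytes) -> int:
    # ISO 7816-3 T=1 LRC: XOR of the NAD, PCB, LEN and INF bytes.
    lrc = 0
    for b in block_without_edc:
        lrc ^= b
    return lrc

# Hypothetical I-block: NAD=0x00, PCB=0x00, LEN=0x02, INF=0x00 0xA4
block = bytes([0x00, 0x00, 0x02, 0x00, 0xA4])
print(hex(t1_lrc(block)))   # the byte that would follow as the epilogue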
I'm writing some crypto (known algorithm - not rolling my own) but I couldn't find any specific documentation on this case.
One method of padding (although any of the others could have the same problem) works like this:
If your block is < 8 bytes, pad the end with the number of padding bytes
So FF E2 B8 AA becomes FF E2 B8 AA 04 04 04 04
That is great, and gives you a pretty obvious marker with which to remove the padding during decryption. But my question is: instead of the above example, say I have this:
10 39 ff ef 09 64 aa (7 bytes in length). In this situation the above algorithm says to convert this to 10 39 ff ef 09 64 aa 01. But then, when decrypting, if you get a 01 byte at the end of the decrypted message, how do you know whether it is padding (and should be stripped) or part of the actual message that you should keep?
The most reasonable solutions I can think of would be to append/prepend the size of the actual message during encryption, or to add a parity block to state whether there's padding or not, both of which have their own problems in my mind.
I'm assuming this problem has been encountered before but I was wondering what the solution was.
PKCS #5/7 padding is always added – if the length of the plaintext is a multiple of the block size, a whole block of padding is added. This way there is no ambiguity, which is the main benefit of PKCS #7 over, say, zero padding.
Quoted from the PKCS #7 specification:
2. Some content-encryption algorithms assume the
input length is a multiple of k octets, where k > 1, and
let the application define a method for handling inputs
whose lengths are not a multiple of k octets. For such
algorithms, the method shall be to pad the input at the
trailing end with k - (l mod k) octets all having value k -
(l mod k), where l is the length of the input. In other
words, the input is padded at the trailing end with one of
the following strings:
01 -- if l mod k = k-1
02 02 -- if l mod k = k-2
.
.
.
k k ... k k -- if l mod k = 0
The padding can be removed unambiguously since all input is
padded and no padding string is a suffix of another. This
padding method is well-defined if and only if k < 256;
methods for larger k are an open issue for further study.
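To illustrate why the removal is unambiguous, here is a minimal Python sketch of PKCS #7 padding and unpadding for a block size of 8, run on the questioner's two examples. (A real implementation would come from a vetted crypto library and would validate the padding in constant time, which this sketch does not attempt.)

def pkcs7_pad(data: bytes, k: int = 8) -> bytes:
    n = k - (len(data) % k)        # always 1..k, never 0: padding is always added
    return data + bytes([n]) * n

def pkcs7_unpad(padded: bytes, k: int = 8) -> bytes:
    n = padded[-1]
    if not 1 <= n <= k or padded[-n:] != bytes([n]) * n:
        raise ValueError("invalid padding")
    return padded[:-n]

print(pkcs7_pad(bytes.fromhex("ffe2b8aa")).hex())          # ffe2b8aa04040404
print(pkcs7_pad(bytes.fromhex("1039ffef0964aa")).hex())    # 1039ffef0964aa01
print(pkcs7_pad(bytes.fromhex("1039ffef0964aa01")).hex())  # ...aa01 plus a full block of 08s

A trailing 01 that belongs to the message is therefore never confused with padding: if the plaintext already ends on a block boundary, a whole extra block of 08 bytes is appended and later stripped.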
I don't understand this question:
Consider a system that has a byte-addressable memory organized in 32-bit words according to the big-endian scheme. A program reads 2 integers into an array and stores them in successive locations, starting at address 0x00001000. The 2 integers are 1025 and 287,454,020.
Show the contents of the two memory words at locations 0x00001000 and 0x00001004 after the two integers have been stored.
Can anyone explain how to do this? This is like a new language to me.
Big endian just means that the bytes are ordered from most significant to least significant at increasing memory addresses, so:
0x00001000 00 00 04 01 ; 1025 (decimal) = 00000401 (hex)
0x00001004 11 22 33 44 ; 287454020 (decimal) = 11223344 (hex)
Just for completeness, if this were a little endian system then memory would look like this:
0x00001000 01 04 00 00 ; 1025 (decimal) = 00000401 (hex)
0x00001004 44 33 22 11 ; 287454020 (decimal) = 11223344 (hex)
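If you want to double-check the byte orders yourself (a quick sketch, not part of the original answer), Python's struct module can pack the same integers both ways:

import struct

def hexdump(b: bytes) -> str:
    return " ".join(f"{x:02x}" for x in b)

for value in (1025, 287454020):
    print(f"{value:>11}:",
          "big-endian", hexdump(struct.pack(">I", value)),
          "| little-endian", hexdump(struct.pack("<I", value)))

#        1025: big-endian 00 00 04 01 | little-endian 01 04 00 00
#   287454020: big-endian 11 22 33 44 | little-endian 44 33 22 11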
I spent a long time decoding IR codes with Ken Shirriff's Arduino library. I modified the code a bit so that I was able to dump the 56-bit signals of a Samsung air conditioner (MH026FB).
The results of my work are in the Google Docs document Samsung MH026FB AirCon IR Codes Dump.
It is a spreadsheet with all the dumped values and the interpretation of the results. AFAIK, the air conditioner unit sends out two or three "bursts" of 56-bit data, depending on the command. I was able to decode the bits properly, figuring out where the air conditioner temperature, fan, function and other options are located.
The problem I have is related to the checksum. In all those 7-byte codes, the second one is computed somehow from the latter 5 bytes, for example:
BF B2 0F FF FF FF F0 (lead-in code)
7F B8 8A 71 F6 4F F0 (auto mode - 25 degrees)
7F B2 80 71 7A 4F F0 (auto mode - 26 degrees)
7F B4 80 71 FA 7D F0 (heat mode - 26 degrees - fan auto)
Since I re-create the IR codes at runtime, I need to be able to compute checksum for these codes.
I tried many standard checksum algorithms; none of them gave meaningful results. The checksum seems to be related to the number of zeroes in the rest of the code (bytes 3 to 7), but I really can't figure out how.
Is there a solution to this problem?
Ken Shirriff sorted this out. The algorithm is as follows:
Count the number of 1 bits in all the bytes except #2 (checksum)
Compute count mod 15. If the value is 0, use 15 instead.
Take the value from step 2, flip the 4 bits (invert them), and reverse their order.
The checksum is Bn where n is the value from the previous step.
Congratulations to him for his smartness and sharpness.
When the bit order in bytes/packets and the 0/1 levels are interpreted properly (from the algorithm it appears that both are reversed), the algorithm would be just the count of 0 bits modulo 15.
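Here is a small Python sketch of those steps (my own rendering, not Ken Shirriff's code). It appears to reproduce the checksum byte of the mode/temperature codes listed in the question; the lead-in code may follow a different rule.

def samsung_checksum(code_hex: str) -> str:
    # Recompute byte #2 of a 7-byte Samsung AC code from the other bytes.
    data = bytes.fromhex(code_hex)
    # 1. count the 1 bits in every byte except byte #2 (the checksum byte)
    ones = sum(bin(b).count("1") for i, b in enumerate(data) if i != 1)
    # 2. count mod 15; if the result is 0, use 15 instead
    n = ones % 15 or 15
    # 3. flip the 4 bits, then reverse their order
    n = (~n) & 0x0F
    n = int(f"{n:04b}"[::-1], 2)
    # 4. the checksum byte is Bn
    return f"B{n:X}"

for code in ("7F B8 8A 71 F6 4F F0",
             "7F B2 80 71 7A 4F F0",
             "7F B4 80 71 FA 7D F0"):
    print(code, "->", samsung_checksum(code.replace(" ", "")))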
It is nearly correct.
Count the 0's / 1's (you can call them what you like, but it is the short signals).
Do not count the 2nd byte, nor the first/last bit of the 3rd byte (depending on whether you see it as big- or little-endian).
Take the result and subtract 30 (29 - 30 = -1, which is 15 when only looking at 4 bits!).
Reverse the result.
Checksum = 0x4 followed by the reversed result if the short signals are 0, and 0xB followed by the reversed result if the long signals are 0.
I used Ken's method but mod 15 didn't work for me.
Count the number of 1 bits in all the bytes except #2 (checksum).
Compute count mod 17. If the value is 16, use the first byte of the mod result (0).
Take the value and flip the 4 bits.
The checksum is 0xn9 where n is the value from the previous step.
I have the following frame:
7e 01 00 00 01 00 18 ef 00 00 00 b5 20 c1 05 10 02 71 2e 1a c2 05 10 01 71 00 6e 87 02 00 01 42 71 2e 1a 01 96 27 be 27 54 17 3d b9 93 ac 7e
If I understand correctly, then it is this portion of the frame on which the FCS is calculated:
010000010018ef000000b520c1051002712e1ac205100171006e8702000142712e1a019627be2754173db9
I've tried entering this into a number of online calculators but I can't produce 0x93ac from the above data.
http://www.lammertbies.nl/comm/info/crc-calculation.html with input type hex.
How is 0x93ac arrived at?
Thanks,
Barry
Answering rather for others who got here while searching for advice.
The key is what several points in the closely related ITU-T recommendations (e.g. Q.921, available online for quite some time already) say:
1. the lowest order bit is transmitted (and thus received) first
This legacy behaviour is contrary to everyday conventions, where the highest-order digits are written first in reading order; all the generic online calculators and libraries perform the calculation using the conventional order and provide optional settings to facilitate the reversed one.
Therefore, you must ask the online calculator
to reverse the order of bits in the message you've input in the "conventional" format before performing the calculation,
to reverse the order of bits of the result so that you get them in the same order as in the message itself.
Quite reasonably, some calculators offer just a single common setting for both.
This explains the settings "reverse data bytes" and "reverse CRC result before Final XOR" recommended in the other answer;
2. the result of the CRC calculation must be bit-inverted before sending
Bit inversion is another name for "xor by 0xffff...". There is a purpose in bit-inverting the CRC calculation result before sending it as the message FCS (the last two bytes of the message, the '93 ac' in your example).
See point 4 for details.
This explains the setting "Final value ffff", whose name is quite misleading, as it actually defines the pattern to be xor'ed with the result of the calculation. As such an operation is required by several CRC types (only the xor patterns vary, from 0, i.e. no-op, through 0xfff..., i.e. complete inversion), generic calculators/libraries offer it for simplicity of use.
3. the calculation must include processing of a leading sequence of 0xffff
This explains the setting "Initial value ffff".
4. on the receiving (checking) side, it is recommended to push the complete message, i.e. including the FCS, through the CRC calculation, and expect the result to be 0x1d0f
There is some clever thinking behind this:
the intrinsic property of the CRC algorithm is that
CRC( x.CRC(x) )
is always 0 (x represents the original message and "." represents concatenation).
running the complete message through the calculation, rather than calculating only the message itself and comparing with the FCS received separately, means a much simpler algorithm (or even circuitry) at the receiving side.
however, it is too easy to make a coding mistake causing a result to become 0. Luckily, thanks to the CRC algorithm intrinsic properties again,
CRC( x.(CRC(x))' )
yields a constant value independent of x and different from 0 (at least for CRC-CCITT, which we talk about here). The "'" sign represents the bit inversion as required in point 2.
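Points 1 to 4 can be reproduced with a short Python sketch (my own, using the reflected formulation often labelled CRC-16/X-25: polynomial 0x1021 fed LSB first, initial value 0xffff, result inverted before sending). For the data quoted in the question it should yield 0xac93, transmitted low byte first as 93 ac; and pushing the body plus FCS back through the register should end at the constant residue 0xf0b8, which is simply the bit-reversed form of the 0x1d0f mentioned above.

def crc16_x25_register(data: bytes) -> int:
    # Raw CRC register: poly 0x1021 reflected (0x8408), init 0xffff, bytes LSB first.
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc

frame_body = bytes.fromhex(
    "010000010018ef000000b520c1051002712e1ac2051001"
    "71006e8702000142712e1a019627be2754173db9")

fcs = crc16_x25_register(frame_body) ^ 0xFFFF     # bit inversion before sending (point 2)
print(hex(fcs))                                   # expect 0xac93 -> sent low byte first: 93 ac
fcs_bytes = bytes([fcs & 0xFF, fcs >> 8])         # 93 ac, as in the frame
print(hex(crc16_x25_register(frame_body + fcs_bytes)))   # expect the residue 0xf0b8 (point 4)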
First of all, the CRC value is 0xac93.
Use this calculator: http://www.zorc.breitbandkatze.de/crc.html
Set CRC order 16
Polynomial 1021
Initial value ffff
Final value ffff
"reverse data bytes"
"reverse CRC result before Final XOR"
Enter your sequence as:
%01%00%00%01%00%18%ef%00%00%00%b5%20%c1%05%10%02%71%2e%1a%c2%05%10%01%71%00%6e%87%02%00%01%42%71%2e%1a%01%96%27%be%27%54%17%3d%b9
Press "calculate" and you get 0xAC93
This is a simple Python script for HDLC CRC calculation. You can use it for DLMS.
def byte_mirror(c):
    # Reverse the bit order within one byte (HDLC transmits the LSB first).
    c = (c & 0xF0) >> 4 | (c & 0x0F) << 4
    c = (c & 0xCC) >> 2 | (c & 0x33) << 2
    c = (c & 0xAA) >> 1 | (c & 0x55) << 1
    return c

CRC_INIT = 0xffff
POLYNOMIAL = 0x1021
DATA_VALUE = 0xA0   # unused here, kept from the original script

# Example DLMS/HDLC SNRM request frame, flags included.
SNRM_request = [0x7E, 0xA0, 0x08, 0x03, 0x02, 0xFF, 0x93, 0xCA, 0xE4, 0x7E]

print("sent>>", end=" ")
for x in SNRM_request:
    if x > 15:
        print(hex(x), end=" ")
    else:                       # pad single hex digits with a leading zero
        a = str(hex(x))
        a = a[:2] + "0" + a[2:]
        print(a, end=" ")
lenn = len(SNRM_request)
print(" ")

crc = CRC_INIT
for i in range(lenn):
    # Skip the opening flag and the last three bytes (the FCS and the closing flag).
    if (i != 0) and (i != (lenn - 1)) and (i != (lenn - 2)) and (i != (lenn - 3)):
        print("i>>", i)
        c = SNRM_request[i]
        c = byte_mirror(c)      # feed the byte LSB first
        c = c << 8
        print(hex(c))
        for j in range(8):      # bit-by-bit CRC-CCITT update
            print(hex(c))
            print("CRC", hex(crc))
            if (crc ^ c) & 0x8000:
                crc = (crc << 1) ^ POLYNOMIAL
            else:
                crc = crc << 1
            c = c << 1
            crc = crc % 65536   # keep both registers at 16 bits
            c = c % 65536

print("CRC-CALC", hex(crc))
crc = 0xFFFF - crc              # bit-invert the result (final xor with 0xffff)
print("CRC- NOT", hex(crc))
crc_HI = crc // 256
crc_LO = crc % 256
print("CRC-HI", hex(crc_HI))
print("CRC-LO", hex(crc_LO))
crc_HI = byte_mirror(crc_HI)    # mirror each FCS byte back to line bit order
crc_LO = byte_mirror(crc_LO)
print("CRC-HI-zrc", hex(crc_HI))
print("CRC-LO-zrc", hex(crc_LO))
crc = 256 * crc_HI + crc_LO
print("CRC-END", hex(crc))
For future readers, there's code in appendix C of RFC1662 to calculate FCS for HDLC.