I am trying to set up the LPSTK-CC1352R LaunchPad with Node-RED over a Bluetooth connection.
The built-in sensor is an HDC2080.
I'm not an electronics engineer, so the datasheets are a bit confusing to me.
I got to the point where I have a Bluetooth connection to the MCU and receive the temperature values every second. Unfortunately, I get each value as an array of four hex bytes:
[04 4a d5 41]
[dc 44 d5 41]
[b4 3f d5 41]
[8c 3a d5 41]
...
These are example readings.
I have tried many ways to convert them into a plain temperature value, but without success.
I even found a tutorial of sorts, but that did not help either.
Could anyone help me with the conversion?
Thank you :)
You have to reorder the bytes from right to left: the last byte never changes, which means the value has to be little-endian.
https://en.wikipedia.org/wiki/Endianness
Four bytes are 32 bits; interpreted as an IEEE 754 single-precision float:
[41 d5 4a 04] = 26.6611404419
[41 d5 44 dc] = 26.6586227417
[41 d5 3f b4] = 26.6561050415
[41 d5 3a 8c] = 26.6535873413
https://www.h-schmidt.net/FloatConverter/IEEE754.html
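To make the conversion concrete, here is a minimal sketch in C# (just an illustration of the byte handling, not tied to Node-RED) that reinterprets the four received bytes as a little-endian float:

using System;

// Bytes exactly as received over Bluetooth: [04 4a d5 41]
byte[] raw = { 0x04, 0x4A, 0xD5, 0x41 };

// BitConverter uses the machine's native byte order; on the usual
// little-endian hosts this already matches the sensor data, otherwise
// reverse the array first.
if (!BitConverter.IsLittleEndian)
    Array.Reverse(raw);

float temperature = BitConverter.ToSingle(raw, 0);
Console.WriteLine(temperature);   // prints roughly 26.66114

In a Node-RED function node the equivalent would be reading the received buffer with a little-endian float accessor such as readFloatLE, assuming the payload arrives as a Buffer.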
While going through the Logstash date plugin documentation at
https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html#plugins-filters-date-match
I came across the TAI64N date format.
Can someone please explain this time format?
TAI stands for Temps Atomique International, the current international real-time standard. One TAI second is defined as the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium atom. TAI also specifies a frame of reference.
From Toward a Unified Timestamp with Explicit Precision:
TAI64 defines a 64-bit integer format where each value identifies a particular SI second. The duration of SI seconds is defined through a known, precise count of state transitions of a cesium atom. Time is structured as a sequence of seconds starting January 1, 1970 in the Gregorian calendar, when atomic time (TAI) became the international standard for real time. The standard defines 2^62 seconds before the year 1970, and another 2^62 from this epoch onward, thus covering a span of roughly 300 billion years, enough for most applications.
The extensions TAI64N and TAI64NA allow for finer time resolutions by
referring to particular nanoseconds and attoseconds (10^-18 s), respectively, within a particular second.
While TAI64 is compellingly simple and consistent, it has to be extended not only
with regard to fine resolutions, but in other ways as well.
It is only concerned with time-points, but a complete model of time needs to address the interrelation of time-points and intervals as well. Most models conceive intervals as sets of consecutive timepoints. This creates the obvious transformation problem - assuming a dense time domain - that no amount of time-points with a postulated duration of 0 can yield an interval with a duration larger than 0, and that even the shortest interval is a set of an infinite number of time-points.
TAI64 does not address uncertainty with respect to time.
It emphasizes monotonically-increasing continuity of time measurement. Human perception of time, however, is shaped by less regular astronomical phenomena.
More precisely, the TAI64 format is advantageous for several reasons:
International Atomic Time
Strictly monotonic (no leap seconds)
64-bit uint #seconds from epoch
32-bit uint #nano-seconds (TAI64N)
32-bit uint #atto-seconds (TAI64NA)
You can read further in Bernstein, D. J. 2002, "TAI64, TAI64N, and TAI64NA":
TAI64, TAI64N, and TAI64NA
TAI and real time
TAI64 labels and external TAI64 format. A TAI64 label is an integer
between 0 and 2^64 referring to a particular second of real time.
Integer s refers to the TAI second beginning exactly 2^62 - s seconds
before the beginning of 1970 TAI, if s is between 0 inclusive and 2^62
exclusive; or the TAI second beginning exactly s - 2^62 seconds after
the beginning of 1970 TAI, if s is between 2^62 inclusive and 2^63
exclusive. Integers 2^63 and larger are reserved for future
extensions. Under many cosmological theories, the integers under 2^63
are adequate to cover the entire expected lifetime of the universe; in
this case no extensions will be necessary. A TAI64 label is normally
stored or communicated in external TAI64 format, consisting of eight
8-bit bytes in big-endian format. This means that bytes b0 b1 b2 b3 b4 b5 b6 b7 represent the label b0 * 2^56 + b1 * 2^48 + b2 * 2^40 + b3 * 2^32 + b4 * 2^24 + b5 * 2^16 + b6 * 2^8 + b7.
For example, bytes 3f ff ff ff ff ff ff ff hexadecimal represent the
second that ended 1969 TAI; bytes 40 00 00 00 00 00 00 00 hexadecimal
represent the second that began 1970 TAI; bytes 40 00 00 00 00 00 00 01 hexadecimal represent the following second. Bytes 40 00 00 00 2a 2b 2c 2d hexadecimal represent 1992-06-02 08:07:09 TAI, also known as
1992-06-02 08:06:43 UTC.
source
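To illustrate the external format, here is a small sketch (C#, for illustration only) that decodes the label from the example above into its offset from the 1970 TAI epoch; a full conversion to UTC would additionally need the leap-second table:

using System;

// External TAI64 label for 1992-06-02 08:07:09 TAI (from the example above)
byte[] label = { 0x40, 0x00, 0x00, 0x00, 0x2A, 0x2B, 0x2C, 0x2D };

// Accumulate the eight bytes big-endian into a 64-bit integer.
ulong value = 0;
foreach (byte b in label)
    value = (value << 8) | b;

// Labels of 2^62 and above count seconds after the beginning of 1970 TAI.
long secondsSince1970Tai = (long)(value - (1UL << 62));
Console.WriteLine(secondsSince1970Tai);   // 707472429, i.e. 8188 days 08:07:09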
Today I got this reader from a local shop. Earlier I worked with Wiegand-type readers with no problem. Anyway, when I try to read an EM-type card with the ID 0009177233 (printed on the card), I would expect to get at least 9177233, plus the start and stop characters. Instead I get 50008C0891:
ASCII 50008C0891
HEX 02 35 30 30 30 38 43 30 38 39 31 0D 0A 03
BIN 00000010 00110101 00110000 00110000 00110000 00111000 01000011 00110000
00111000 00111001 00110001 00001101 00001010 00000011
I use a USB-RS232 converter and the RealTerm software.
Does anyone have any idea why?
Are there two IDs?
Decimal 9177233 equals hex 8C0891, so the reader gives you the serial number in hexadecimal notation. I think the full number 50008C0891 is the 5 bytes (40 bits) of the UID of the EM-type chip.
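A small sketch of that interpretation (C#, assuming the frame really is STX + ASCII digits + CR LF + ETX, as the hex dump suggests):

using System;

// ASCII payload between the STX (02) and CR LF ETX (0D 0A 03) framing bytes
string payload = "50008C0891";

ulong fullUid = Convert.ToUInt64(payload, 16);   // the full 40-bit UID
uint serial = (uint)(fullUid & 0xFFFFFF);        // low 3 bytes, 0x8C0891

Console.WriteLine(fullUid.ToString("X10"));      // 50008C0891
Console.WriteLine(serial.ToString("D10"));       // 0009177233, as printed on the card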
Regards
I'm trying to interpret the communication between an ISO 7816 card and the card reader. I've connected inline between the card and the reader, but when I dump the output to the console I get data that I'm not expecting; see below:
Action: card inserted into the reader; I expect an ATR only.
Expected output:
3B 65 00 00 B0 40 40 22 00
Actual Output:
E0 3B 65 00 B0 40 40 22 00 90 00 80 9B 80 E0 E2
The 90 00 is the standard status word for OK after a reset, but why am I still logging additional data, both before the ATR (the E0) and after it?
The communication line is documented in ISO 7816-3 (electrical interface and transmission protocols); look for the respective chapters on the T=0 and T=1 protocols. T=1 is a block-oriented protocol with a prologue containing node addresses and an epilogue containing a CRC/LRC.
During the ATR, however, no protocol is running yet, since the ATR itself carries the information about which protocols the card supports, for the terminal to choose from. So early in the exchange, 90 00 is certainly not SW1/SW2.
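To show what such a T=1 block looks like, here is a rough sketch; the block contents are made up for illustration, only the prologue/epilogue structure and the XOR-based LRC come from ISO 7816-3:

using System;

// Prologue: NAD (node address), PCB (protocol control byte), LEN,
// followed by the information field. The epilogue here is a 1-byte LRC,
// defined as the XOR of all preceding bytes of the block.
byte[] prologueAndInf = { 0x00, 0x00, 0x02, 0xA4, 0x04 };   // hypothetical I-block

byte lrc = 0;
foreach (byte b in prologueAndInf)
    lrc ^= b;

Console.WriteLine(lrc.ToString("X2"));   // A2 - this byte would follow on the wire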
I spent a long time decoding IR codes with Ken Shirriff's Arduino library. I modified the code a bit so that I was able to dump the 56-bit signals of a Samsung air conditioner (MH026FB).
The results of my work are located in the Google Docs document Samsung MH026FB AirCon IR Codes Dump.
It is a spreadsheet with all dumped values and the interpretation of the results. AFAIK, the air conditioner unit sends out two or three "bursts" of 56-bit data, depending on the command. I was able to decode the bits properly, figuring out where the temperature, fan, function and other options are located.
The problem I have is related to the checksum. In all those 7-byte codes, the second byte is somehow computed from the other bytes, for example:
BF B2 0F FF FF FF F0 (lead-in code)
7F B8 8A 71 F6 4F F0 (auto mode - 25 degrees)
7F B2 80 71 7A 4F F0 (auto mode - 26 degrees)
7F B4 80 71 FA 7D F0 (heat mode - 26 degrees - fan auto)
Since I re-create the IR codes at runtime, I need to be able to compute the checksum for these codes.
I tried many standard checksum algorithms, but none of them gave meaningful results. The checksum seems to be related to the number of zeroes in the rest of the code (bytes 3 to 7), but I really can't figure out how.
Is there a solution to this problem?
Ken Shirriff sorted this out. The algorithm is as follows (see the sketch below):
Count the number of 1 bits in all the bytes except #2 (checksum)
Compute count mod 15. If the value is 0, use 15 instead.
Take the value from step 2, invert the 4 bits, and reverse their order.
The checksum byte is 0xBn, where n is the value from the previous step.
Congratulations to him for his smartness and sharpness.
When the bit order within bytes/packets and the 0/1 interpretation are chosen properly (from the algorithm it appears that both are reversed here), the algorithm would simply be the count of 0 bits modulo 15.
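Here is a minimal sketch of that algorithm (C#, for illustration), checked against the three command codes from the question; the lead-in code does not appear to follow the same rule:

using System;

// Returns the expected checksum byte (0xBn) for a 7-byte command code.
static byte Checksum(byte[] code)
{
    int ones = 0;
    for (int i = 0; i < code.Length; i++)
    {
        if (i == 1) continue;                 // skip byte #2, the checksum itself
        int v = code[i];
        for (; v != 0; v >>= 1) ones += v & 1;
    }

    int n = ones % 15;
    if (n == 0) n = 15;

    n = ~n & 0xF;                             // invert the 4 bits
    n = ((n & 0x1) << 3) | ((n & 0x2) << 1)   // reverse the 4 bits
      | ((n & 0x4) >> 1) | ((n & 0x8) >> 3);

    return (byte)(0xB0 | n);
}

byte[][] codes =
{
    new byte[] { 0x7F, 0xB8, 0x8A, 0x71, 0xF6, 0x4F, 0xF0 },   // auto, 25 degrees
    new byte[] { 0x7F, 0xB2, 0x80, 0x71, 0x7A, 0x4F, 0xF0 },   // auto, 26 degrees
    new byte[] { 0x7F, 0xB4, 0x80, 0x71, 0xFA, 0x7D, 0xF0 },   // heat, 26 degrees
};

foreach (var code in codes)
    Console.WriteLine($"computed {Checksum(code):X2}, in code {code[1]:X2}");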
It is nearly correct.
Count the 0s / 1s (you can call them what you like, but it is the short signals).
Do not count the 2nd byte, nor the first/last bit of the 3rd byte (depending on whether you are reading it as big- or little-endian).
Take the result and subtract 30 (29 - 30 wraps to 15 when only 4 bits are kept!).
Reverse the result.
Checksum = 0x4 followed by the reversed result if the short signals are 0, and 0xB followed by the reversed result if the long signals are 0.
I used Ken's method, but mod 15 didn't work for me.
Count the number of 1 bits in all the bytes except #2 (checksum)
Compute count mod 17. If the value is 16, use 0 instead.
Take the value and flip the 4 bits.
The checksum is 0xn9, where n is the value from the previous step.
I'm retrieving a file from a database server and allowing the user to download it. The problem is that I'm not getting the same byte stream out as I read from the server.
I have confirmed (through lots of Response.Write calls) that I've received the right array of bytes and that they're in the right order, etc.
Here's the code for the download (st.FileContents is a byte[]):
Response.Clear();
Response.AddHeader("Content-Disposition",
"attachment; filename=" + st.FileName);
Response.AddHeader("Content-Length", st.FileSize.ToString());
Response.ContentType = "application/octet-stream";
Response.Write(new System.Text.ASCIIEncoding()
.GetString(st.FileContents)); // Problem line
Response.End();
I've tried a few ways of converting that byte[] to a string, and none gives the result I need. Instead of the expected stream of bytes:
FF D8 FF E0 1E D0 4A 46 58 58 00 10 FF D8 FF DB
(yes, that's the start of a jpeg image)
I wind up with something like:
C3 BF C3 98 C3 BF C3 A0 1E C3 90 4A 46 58 58 00
The first 6 bytes get mangled into 10 completely different bytes. What gives?
Edit
The answer is, of course, to use BinaryWrite instead of Write.
You shouldn't be treating binary data as text. Round-tripping the bytes through a string corrupts anything outside the chosen encoding: ASCIIEncoding only supports characters in the range 0-127, so any byte outside that range is treated as invalid and replaced with a question mark, and the response then re-encodes whatever characters result (the two-byte C3 xx pairs in your dump are what UTF-8 output produces for byte values above 0x7F). Use HttpResponse.BinaryWrite instead:
Response.BinaryWrite(st.FileContents);
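For completeness, the corrected download code, keeping the headers from the question (st is the same object as above):

Response.Clear();
Response.AddHeader("Content-Disposition",
    "attachment; filename=" + st.FileName);
Response.AddHeader("Content-Length", st.FileSize.ToString());
Response.ContentType = "application/octet-stream";
Response.BinaryWrite(st.FileContents);   // write the raw bytes; no text encoding involved
Response.End();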