This question already has an answer here:
Decode bluetooth data from the Indoor Bike Data characteristic
(1 answer)
Closed 2 years ago.
I'm trying to read data from the BTLE fitness machine service, specifically the Indoor Bike Data characteristic.
A typical reading I'm getting has the bytes 44-02-9c-09-5c-00-4f-00-50. The first two bytes are the flags field, which indicates that the rest of the bytes represent, in order:
Instantaneous cadence (uint16)
Instantaneous power (sint16)
Heart rate (uint8)
The trouble is that this only accounts for 5 more bytes, but there are 7 more bytes in the value. It looks like 5c-00 is cadence, 4f-00 is power, and 50 is heart rate, but
I don't know what the 9c-09 represents, but more importantly,
I don't know how to reliably read this characteristic if it's going to send me data that the flags field says is not present.
What do I need to do to parse these bytes correctly? In this specific case I could maybe skip those two bytes, but that won't be reliable over different device manufacturers.
Update: FWIW I don't think it was correct to mark this as a duplicate. I was able to parse the bytes, the problem was that the result appeared to contradict the fitness machine spec. The accepted answer clarified that.
The 9c-09 value is instantaneous speed, which is present (counterintuitively) if the first flag bit is 0. See Fitness Machine Service spec, section 4.9.1.1.
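For completeness, here is a minimal parsing sketch in Python. The field sizes and scaling factors are taken from my reading of the FTMS spec (section 4.9), so double-check them against the document; only the fields present in your example bytes are handled, and a full parser would have to walk every flag bit in order.

```python
import struct

def parse_indoor_bike_data(value: bytes) -> dict:
    flags = struct.unpack_from("<H", value, 0)[0]
    offset = 2
    result = {}

    # Bit 0 is the "More Data" flag: instantaneous speed is present when it is 0.
    if not flags & 0x0001:
        speed, = struct.unpack_from("<H", value, offset)
        result["instantaneous_speed_kmh"] = speed * 0.01
        offset += 2
    if flags & 0x0002:   # average speed (uint16) - skipped here
        offset += 2
    if flags & 0x0004:   # instantaneous cadence (uint16, 0.5 rpm resolution)
        cadence, = struct.unpack_from("<H", value, offset)
        result["instantaneous_cadence_rpm"] = cadence * 0.5
        offset += 2
    # ... bits 3-5 (average cadence, total distance, resistance) go here ...
    if flags & 0x0040:   # instantaneous power (sint16)
        power, = struct.unpack_from("<h", value, offset)
        result["instantaneous_power_w"] = power
        offset += 2
    # ... bits 7-8 (average power, expended energy) go here ...
    if flags & 0x0200:   # heart rate (uint8)
        result["heart_rate_bpm"] = value[offset]
        offset += 1
    return result

print(parse_indoor_bike_data(bytes.fromhex("44029c095c004f0050")))
# {'instantaneous_speed_kmh': 24.6, 'instantaneous_cadence_rpm': 46.0,
#  'instantaneous_power_w': 79, 'heart_rate_bpm': 80}
```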
Related
I am new to the DNP3 protocol and I have a question.
I understand that the data is stored in arrays.
But I did not understand: can the array be non-contiguous?
In addition, is there any beginner-level source of information for the DNP3 protocol? (I have tried to read the DNP3 specification, but it was unclear to me.)
I would appreciate your answer!
Yes, data indexes may be non-contiguous.
To achieve "more efficient" transmission of data, section 5.1.2 of the IEEE Standard for Electric Power Systems Communications— Distributed Network Protocol (DNP3) states that "gaps in the point index range are permissible but should be avoided wherever possible."
The DNP3 standard does not specify how data is stored, but rather how it is transmitted. The indexes are part of an addressing scheme used to identify individual pieces of data in a device. A given piece of data, or point, is identified by its Group number and an Index. For example, "Group 30 : Index 9" is the 10th readable, analog value ("10th" because the lists are zero-based).
Another way to state the answer is that point addresses (meaning indexes within a Group) are not required to be contiguous.
Note that even if points in a device are indexed contiguously, the device could return data with non-contiguous indexes in a single transmission packet. For example, a packet of data from a device might contain the 2nd, 5th, and 12th readable analog inputs.
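As an illustration (a sketch in Python, not tied to any particular DNP3 library), a master can simply keep received points in a map keyed by (group, index), so gaps in the index range cost nothing:

```python
# Hypothetical sketch: store points keyed by (group, index) so that
# non-contiguous indexes are handled naturally.
points = {}

def on_point_received(group: int, index: int, value) -> None:
    points[(group, index)] = value

# A single response might carry only the 2nd, 5th and 12th analog inputs
# (Group 30), with nothing in between:
on_point_received(30, 1, 12.7)
on_point_received(30, 4, 3.3)
on_point_received(30, 11, 48.0)

print(sorted(points))   # [(30, 1), (30, 4), (30, 11)] - the gaps are fine
```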
I don't have any specific recommendation for beginner-level information sources.
I have a task where I need to read 2 parameters from a BLE Beacon. The documentation was seriously lacking and after a fair amount of effort, I managed to get some basic information about reading the data from the BLE Beacon.
The parameters to read are
1) Battery Voltage of the sensor
2) Temperature the beacon has a built in temperature sensor.
I think I have tried almost every popular Python BLE library out there, but I just can't seem to get the temperature reading out of the beacon. "I think" I am able to read the voltage. The reason I say "I think" is that the value seems to match what was provided in the minimal documentation, and when I put the beacon into the charger I can see the value go up, an indication that it is the voltage reading. I could not read the temperature (the value at the UUIDs mentioned in the document never seems to change). I have tried enabling the sensor in every possible way and method described, e.g. by writing 01:00. I spent a fair amount of time trying to reverse engineer the thing. I ran a packet sniffer and managed to capture the data being transferred between the beacon and the mobile app (they have a mobile app), but I am still not able to figure out how the temperature readings are communicated between the beacon and the app. Let me break the whole thing into smaller blocks.
Hardware: a BLE beacon from which voltage and temperature can be read. The temperature sensor is built into the beacon. The beacon itself is from Texas Instruments, but the temperature/voltage sensing part is done by a third party. They provided us with some minimal information, and it was difficult to make sense of some of the sentences as they have trouble communicating in English.
The sequence to get the data goes like this:
Scan for beacons
When the beacon is found then connect to it
Enable notification
Set notification interval
Get the voltage and temperature reading.
I have been able to do the first 4 real fast, and "half" of no. 5, i.e. getting the voltage part. When I say real fast, I mean I got that stuff working with nearly no documentation available at the time.
As per the info that I have, the data resides in the characteristics/UUIDs below. Please note that the UUIDs are not standard 128-bit UUIDs, and this caused me issues with certain libraries, but after some tries I was able to read/write to them using handles. The handles and other details I printed were read using PYGATT (a Python wrapper for gatttool).
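For reference, here is a stripped-down sketch of what I am doing with pygatt; the MAC address, UUIDs and payload values below are placeholders (the real ones are the vendor's non-standard UUIDs/handles), so treat it as an outline of steps 2-5 rather than working code.

```python
import pygatt

BEACON_MAC = "AA:BB:CC:DD:EE:FF"                        # placeholder
NOTIFY_UUID = "0000aaa1-0000-1000-8000-00805f9b34fb"    # placeholder for D
INTERVAL_UUID = "0000aaa2-0000-1000-8000-00805f9b34fb"  # placeholder for E

def on_notification(handle, value):
    # 'value' arrives as a bytearray; dump it raw first, decode later.
    print("handle 0x%04x -> %s" % (handle, value.hex()))

adapter = pygatt.GATTToolBackend()
try:
    adapter.start()
    device = adapter.connect(BEACON_MAC)          # step 2: connect

    # Steps 3/5: subscribing writes 01:00 to the CCCD; notifications are
    # then delivered to the callback.
    device.subscribe(NOTIFY_UUID, callback=on_notification)

    # Step 4: set the notification interval (the payload here is a guess).
    device.char_write(INTERVAL_UUID, bytearray([0x05]))

    input("Listening for notifications, press Enter to stop...\n")
finally:
    adapter.stop()
```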
The UUIDs are marked as 1st, 2nd, 3rd and 4th parameters, and the documentation has the following to say about them:
- A: 1 byte (2nd Param)
- B: Maj + Min values, 4 bytes (4th Param)
- C: 4 bytes (3rd Param)
- D: Enable/disable notification ( I have been able to turn this on )
- E: Set notification interval ( I have been able to set this and can notice the change in notification interval )
This is minimal so as to not have a large file. All it does is this: the mobile app connects to the beacon, then the notifications start and the temperature readings are retrieved by the mobile app. Like I mentioned, I don't seem to have a problem reading the voltage; it's only the temperature that I am getting stuck at. I have been at it for a week now. I think I have tried nearly everything that I could think of. I even enumerated all the writable characteristics and tried writing numbers like 1 (enables the sensor?). I could have offered a bounty for this straight away if it were possible. I rarely get stuck for so long with a problem. This is driving me a little crazy. I am getting close to my wits' end - I guess it's time for a superhero - anyone out there? :) I can provide every bit of information needed if someone could indicate what is wrong. I even wrote a Cordova app ... and tried a bunch of stuff from my Android phone. I can connect ... write to characteristics, read stuff etc., but a temperature read? Nah, it just won't budge. All I get is the same set of values (I used a JSON.stringify to display A, B and C). I can bother about the byte order later; I guess that is a smaller problem.
The communication between the beacon and a third party mobile app is fine, it is able to read the temperature info just fine.
I have been looking at the Wireshark data and I am fairly sure that the temperature data is being communicated at this stage. But when I decode the "value", it looks like it's the voltage. It mentions L2CAP, but I am not sure how that is being used here to send the temperature readings (if it is used for that in the first place).
Update: I wrote to every writable characteristic, with values like 1, 0100, 2, 7. At the same time I was reading every readable characteristic (in a loop) and doing a comparison (just true/false) with the previous set of values. This seemed like a quicker and easier way to know if something changed. I didn't want to take chances with converting the hex to a float; I can figure out the byte order later.
From the sniffed data (wireshark) I can only see 3 writes happening on the beacon.
I am not fully sure, even after a long discussion, but it seems that the four bytes of the notification are used for the voltage as well as the temperature, since the temperature can most probably be derived from the voltage.
From the values it seems that those four bytes represent the voltage in float (if you ignore the absurd factor of 10^-38 that comes in because only 4 bytes instead of 8 bytes are used).
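To make the float interpretation concrete, this is roughly how the four-byte value can be tried in both byte orders (a sketch; the payload below is made up):

```python
import struct

payload = bytes.fromhex("40533333")   # made-up example notification value

# Read the same 4 bytes as an IEEE-754 single-precision float in both byte
# orders; typically one gives a plausible voltage and the other a
# meaningless tiny number.
le, = struct.unpack("<f", payload)
be, = struct.unpack(">f", payload)
print("little-endian: %g   big-endian: %g" % (le, be))
```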
Since typically the temperature T is derived from a resistivity measurement, where the resistivity R is proportional to the voltage U (if the current is constant), you can in principle calculate the temperature T from the voltage U.
The problem is that T(R) is fairly linear, but not perfectly so (in contrast to U(R), which is assumed to be exactly linear, U = RI). So you may need to plot the values of T(U) to find out the curve that they are using.
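If you can collect pairs of (voltage you read, temperature shown by their app), something like this can recover the curve empirically (a sketch with made-up calibration data):

```python
import numpy as np

# Made-up calibration pairs: voltage read from the beacon vs. temperature
# reported by the vendor's app at the same moment.
voltage = np.array([1.10, 1.15, 1.20, 1.25, 1.30])
temp_c  = np.array([18.0, 21.5, 25.2, 29.1, 33.3])

# Fit a low-order polynomial T(U); degree 2 is often enough to capture the
# mild non-linearity of T(R).
T = np.poly1d(np.polyfit(voltage, temp_c, 2))

print("T(1.22 V) = %.1f degC" % T(1.22))
```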
To add to the confusion, I got the best results when only using the first five bits of the third byte and the eight bits of the fourth byte. I am not aware why this is the case, and it might point to some trouble still.
The best option is to ask for their function T(U) that they are using. If they can and will provide it for you...
Background: I've spent a while working with a variety of device interfaces and have seen a lot of protocols, many of them serial or UDP, in which data integrity is handled at the application protocol level. I've been seeking to improve my receive routine's handling of protocols in general, and considering the "ideal" design of a protocol.
My question is: is there any protocol framing scheme out there that can definitively identify corrupt data in all cases? For example, consider the standard framing scheme of many protocols:
Field: Length in bytes
<SOH>: 1
<other framing information>: arbitrary, but fixed for a given protocol
<length>: 1 or 2
<data payload etc.>: based on length field (above)
<checksum/CRC>: 1 or 2
<ETX>: 1
For the vast majority of cases, this works fine. When you receive some data, you search for the SOH (or whatever your start byte sequence is), move forward a fixed number of bytes to your length field, and then move that number of bytes (plus or minus some fixed offset) to the end of the packet to your CRC, and if that checks out you know you have a valid packet. If you don't have enough bytes in your input buffer to find an SOH or to have a CRC based on the length field, then you wait until you receive enough to check the CRC. Disregarding CRC collisions (not much we can do about that), this guarantees that your packet is well formed and uncorrupted.
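My receive handler looks roughly like the sketch below (simplified: 1-byte length field, 1-byte additive checksum, and the fixed framing bytes reduced to just SOH/ETX):

```python
SOH, ETX = 0x01, 0x03

def try_extract_frame(buf: bytearray):
    """Try to pull one well-formed frame out of 'buf'.

    Returns (payload, remaining_buffer), or (None, buf) if more data is
    needed. No maximum length or timeout handling - which is exactly where
    a corrupt length field hurts.
    """
    # Resynchronise on the start byte.
    while buf and buf[0] != SOH:
        buf.pop(0)
    if len(buf) < 2:
        return None, buf                       # need at least SOH + length

    length = buf[1]
    frame_len = 2 + length + 2                 # SOH, length, payload, checksum, ETX
    if len(buf) < frame_len:
        return None, buf                       # wait for more bytes

    payload = bytes(buf[2:2 + length])
    checksum, etx = buf[2 + length], buf[3 + length]
    if etx == ETX and checksum == (sum(payload) & 0xFF):
        return payload, buf[frame_len:]

    # Bad frame: drop this SOH and rescan from the next byte.
    return None, buf[1:]
```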
However, if the length field itself is corrupt and has a high value (which I'm running into), then you can't check the (corrupt) packet's CRC until you fill up your input buffer with enough bytes to meet the corrupt length field's requirement.
So is there a deterministic way to get around this, either in the receive handler or in the protocol design itself? I can set a maximum packet length or a timeout to flush my receive buffer in the receive handler, which should solve the problem on a practical level, but I'm still wondering if there's a "pure" theoretical solution that works for the general case and doesn't require setting implementation-specific maximum lengths or timeouts.
Thanks!
The reason why all protocols I know of, including those handling "streaming" data, chop the data stream up into smaller transmission units, each with its own checks on board, is exactly to avoid the problems you describe. Probably the fundamental flaw in your protocol design is that the blocks are too big.
The accepted answer of this SO question contains a good explanation and a link to a very interesting (but rather heavy on math) paper about this subject.
So, in short, you should stick to smaller transmission units, not only because of practical programming-related arguments but also because of the message length's role in determining the security offered by your CRC.
One way would be to encode the length parameter so that corruption of it can be detected easily, saving you from filling a large buffer just to check the CRC.
For example, the XModem protocol embeds an 8-bit packet number followed by its one's complement.
It could mean doubling the size of your length field, but it's an option.
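A sketch of that idea, transposed to the length field rather than XModem's packet number (the field layout is my own invention):

```python
def encode_length(length: int) -> bytes:
    # Send the length followed by its one's complement (XModem-style).
    return bytes([length & 0xFF, ~length & 0xFF])

def decode_length(field: bytes):
    length, check = field[0], field[1]
    if (~length & 0xFF) != check:
        return None          # length field is corrupt: resync instead of buffering
    return length

assert decode_length(encode_length(42)) == 42
assert decode_length(b"\x2a\xff") is None   # corrupted complement is caught
```

It still cannot catch the case where both bytes are corrupted in a mutually consistent way, but combined with a sane maximum length it narrows the window considerably.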
The Ethernet II frame format does not contain a length field, and I'd like to understand how the end of a frame can be detected without it.
Unfortunately, I have little knowledge of physics, but the following sounds reasonable to me: we assume that Layer 1 (the Physical Layer) provides us with a way of transmitting raw bits such that it is possible to distinguish between the situation where bits are being sent and the situation where nothing is sent (if digital data were coded into analog signals via phase modulation, this would be true, for example - but I don't know if that is really what's done). In this case, an Ethernet card could simply wait until a certain time interval occurs in which no more bits are transmitted, and then decide that the frame transmission must be finished.
Is this really what's happening?
If yes: where can I find these things, and what are common values for the length of this "certain time interval"? And why does IEEE 802.3 have a length field at all?
If not: how is it done instead?
Thank you for your help!
Hanno
Your assumption is right. The length field inside the frame is not needed for layer1.
Layer1 uses other means to detect the end of a frame which vary depending on the type of physical layer.
With 10Base-T, a frame is followed by a TP_IDL waveform; the lack of further Manchester-coded data bits can be detected.
With 100Base-T, a frame ends with an End-of-Stream Delimiter bit pattern that cannot occur in payload data (because of the 4B/5B encoding).
A rough description can be found, for example, here:
http://ww1.microchip.com/downloads/en/AppNotes/01120a.pdf "Ethernet Theory of Operation"
I am writing a networking application. It has some unexpected lags. I need to calculate some figures, but I can't find the information: how many bits can be transferred over an Ethernet connection in each tick?
I know that the resulting transfer rate is 100 Mbps / 1 Gbps. But Ethernet should use hardware ticks to sync both ends, I suppose, so it moves data in ticks.
So the question is: how many ticks per second, or how many bits per tick, does Ethernet use?
The actual connection is 100 Mbps full-duplex.
I found the answer myself. Here is the article: http://discountcablesusa.com/ethernetcables.html
The first table says 100BaseTX has a frequency of 31.25 MHz and a signal rate of 125 Mbit/s (25 Mbit/s of that is Ethernet encoding overhead, so we receive 100 Mbit/s of "useful" data).
Hence 125 / 31.25 = 4 bits per tick (clock cycle), which is roughly the answer to my question.
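A quick sanity check of that arithmetic (a sketch; the 25% overhead comes from the 4B/5B line coding, and whether you call the 31.25 MHz a "tick" depends on line-coding details the table glosses over):

```python
data_rate = 100e6               # useful data, bits per second
line_rate = data_rate * 5 / 4   # 4B/5B: 5 line bits per 4 data bits -> 125 Mbit/s
clock_hz  = 31.25e6             # fundamental frequency quoted in the table

print(line_rate / clock_hz)     # 4.0 line bits per clock cycle
```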