I'm attempting to get an estimated range to a BLE device, given the device's RSSI and calibrated transmission power.
The calibrated txPower is supposedly emitted as the last byte in the peripheral's iBeacon advertisement packet. These packets, according to documentation I've found, should be 30 bytes in length.
However, the returned byte[] for the packet is 62 bytes in length. Because of this, the format of the advertisement data is unknown.
Why might this be happening, and is there any way to decipher the format of the 62 byte packet?
When scanning for BLE devices, Android APIs return not just the bytes for the raw advertisement PDUs but also the scan response PDUs. The latter are tacked on to the end of the former in the byte array returned by the scanning APIs.
For this reason, you cannot reliably use a negative offset from the end of the byte array to access beacon fields. Using a positive offset from the beginning is more reliable, but even this can fail if unusual PDUs are inserted before the manufacturer advertisement PDU, which is the one you care about.
For 100% reliability you must parse out all the PDUs, find the manufacturer advertisement one, and look at the bytes in that. I learned this the hard way when writing the Android Beacon Library. It is open source, so even if you want to roll your own scanning, it is a good idea to see how it does it.
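As a rough sketch of what that parsing looks like (shown in Python for brevity, since the scan record is just a byte array; the function name is mine, and it assumes the standard Apple company ID 0x004C and iBeacon layout):

# Walk the length/type/data AD structures in a scan record and pull the
# calibrated tx power out of an iBeacon manufacturer-specific PDU.
def parse_ibeacon_tx_power(scan_record: bytes):
    i = 0
    while i + 1 < len(scan_record):
        length = scan_record[i]
        if length == 0:                               # zero-length structure: end of real data / padding
            break
        ad_type = scan_record[i + 1]
        data = scan_record[i + 2 : i + 1 + length]    # 'length' counts the type byte plus the data
        if ad_type == 0xFF and len(data) >= 25:       # manufacturer-specific data
            company = data[0] | (data[1] << 8)        # little-endian company identifier
            if company == 0x004C and data[2] == 0x02 and data[3] == 0x15:   # Apple iBeacon
                tx_power = data[24]                   # last byte of the 25-byte iBeacon payload
                return tx_power - 256 if tx_power > 127 else tx_power       # signed dBm at 1 m
        i += 1 + length
    return None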
I'm a little confused about the inter-character gap in Modbus and whether it's required when a master sends a message to a slave. The protocol spec says you can't have more than a 3.5 character gap between bytes when transmitting, but is there any specific minimum amount of time you must have between bytes?
I've written a Modbus driver (master) that is able to communicate with a variety of devices, and most don't seem to care about any gap between characters when receiving messages. However, I've come across a couple of devices where I was unable to communicate reliably without putting in some kind of delay (measured in microseconds) between bytes, which is determined by the baud rate.
Is the character gap an absolute requirement, or does it depend on the manufacturer of the device and how they implement the Modbus protocol?
Does Modbus RTU require a gap between characters when transmitting?
No.
In fact the Modbus spec states in section 2.5.1.1 MODBUS Message RTU Framing that "[t]he entire message frame must be transmitted as a continuous stream of characters."
Requiring intercharacter gaps would be contradictory to specifying "a continuous stream".
The protocol spec says you can't have more than a 3.5 character gap between bytes when transmitting ...
You're misquoting the protocol spec.
Only a 1.5 character gap is tolerated between characters in an RTU message.
From the Modbus spec: "If a silent interval of more than 1.5 character times occurs between two characters, the message frame is declared incomplete and should be discarded by the receiver."
A silent (idle) line with a duration of 3.5 characters must precede a message.
In other words, a gap of 2 characters (i.e. more than 1.5 and less than 3.5) would prematurely end the current message, and the following characters (of that malformed message) would not be considered the start of a new message and must be discarded (until the line goes idle for at least 3.5 characters).
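For concreteness, a small sketch of the timing numbers involved (assuming the usual 11-bit RTU character: start bit, 8 data bits, parity, stop bit):

# Compute the Modbus RTU silence thresholds for a given baudrate.
# (Above 19200 baud the spec recommends fixed values of 750 us and 1.75 ms.)
def modbus_silence_times(baudrate: int):
    char_time = 11.0 / baudrate          # seconds per 11-bit character
    t1_5 = 1.5 * char_time               # largest gap tolerated inside a frame
    t3_5 = 3.5 * char_time               # minimum idle time separating frames
    return char_time, t1_5, t3_5

# Example: at 9600 baud a character takes ~1.15 ms,
# so t1.5 is ~1.72 ms and t3.5 is ~4.01 ms.
print(modbus_silence_times(9600))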
... is there any specific minimum amount of time you must have between bytes?
The Modbus spec does not mention any such requirement.
Such a requirement would be impractical.
UARTs typically do not have the capability to meter their output by inserting a delay between the transmission of character frames.
Adding such a delay is an additional burden on the processor and requires the use of a timer.
On the contrary, UARTs have evolved to transmit characters as fast as the baudrate allows with the least processor intervention, e.g. hardware FIFOs and DMA.
A "minimum amount of time you must have between bytes" is simply a reduction in the effective data rate.
Therefore an appropriate reduction of the baudrate would accomplish the exact same data rate.
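A quick illustration of that equivalence (again assuming 11-bit characters; the numbers are only examples):

# Effective character rate with an inserted inter-character delay
# versus simply running the line at a lower baudrate.
def effective_chars_per_second(baudrate: int, gap_seconds: float = 0.0):
    char_time = 11.0 / baudrate
    return 1.0 / (char_time + gap_seconds)

print(effective_chars_per_second(19200))            # ~1745 chars/s, no gap
print(effective_chars_per_second(19200, 0.0006))    # ~853 chars/s with a 600 us gap
print(effective_chars_per_second(9600))             # ~873 chars/s at the lower baudrate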
Is the character gap an absolute requirement, or does it depend on the manufacturer of the device and how they implement the Modbus protocol?
No, you are probably using too fast a baudrate for the device/circumstances in question.
A microprocessor or microcontroller should be able to keep a UART busy and transmit without any intercharacter gaps.
A UART setup that requires gaps while receiving is, IMO, an overloaded system and is broken.
For reliable communication (without flow control) use a baudrate low enough so that metering the transmitted characters is not necessary.
Addendum
Apparently there is at least one UART (from Siemens) that can meter its output by holding the line idle for N bit-times between character frames.
It is at the end of the message that there should be a pause of 3.5 characters or longer.
Usually, data transmission protocols include the number of bytes that follow in the first positions of the byte sequence, but Modbus RTU does not send that length; what determines when a message ends is the pause of 3.5 characters.
If you send the sequence of bytes all at once, there should never be any pause between characters.
If you are writing a master you should not worry about this, since it is the slave that must wait 3.5 characters to know when the master's request is finished.
From the master side you simply wait for the slave to reply, since you know how many bytes the slave is going to send: in the request you already specified how many bits or 16-bit words you want to read.
And if you have communication problems with some devices, it is probably due to the combination of communication speed and poor line quality. Try a lower baud rate; adding a wait between characters doesn't make much sense to me.
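A minimal master-side sketch of that approach (using pyserial; the port name, slave address, register addresses, and counts are made-up placeholders, not a real device map):

import serial  # pyserial

def crc16(frame: bytes) -> bytes:
    # Standard Modbus CRC-16 (polynomial 0xA001), appended low byte first.
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return bytes([crc & 0xFF, crc >> 8])

# Hypothetical request: read 2 holding registers starting at 0x0000 from slave 1.
request = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x02])
request += crc16(request)

ser = serial.Serial("/dev/ttyUSB0", baudrate=9600, bytesize=8,
                    parity=serial.PARITY_EVEN, stopbits=1, timeout=1.0)
ser.write(request)      # one write: the whole frame goes out as a continuous stream
reply = ser.read(9)     # expected reply length: addr + func + count + 4 data bytes + 2 CRC bytes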
I have encountered UARTs which require 2 stop bits on receive. If the receiver is set for 2 stop bits, this may explain the requirement for a gap, to extend the stop period beyond 2 stop bits.
Usually, only the first stop bit is checked on receive to determine a framing error, regardless of the stop bit setting.
I'm developing a web front end for a GNU Radio application developed by a colleague.
I have a TCP client connecting to the output of two TCP Sink blocks, and the data encoding is not as I expect it to be.
One TCP Sink is sending complex data and the other is sending float data.
I'm decoding the data at the client by reading each 4-byte chunk as a float32 value. The server and the client are both little-endian systems, but I also tried byte swapping (with the GNU Radio Endian Swap block and also manually at the client), and the data is still not right. Actually it's much worse then, confirming there is no byte order mismatch.
When I execute the flow graph in GNU Radio Companion with appropriate GUI elements, the plots look correct. The data values are shown, as expected, to be between 0 and 10.
However the values decoded at the client are generally around 0.00xxxxx, and the plot looks like noise rather than showing a simple tone as is seen in GNU Radio. If I manually scale the data by multiplying by 1000 it still looks like noise.
I'll describe the pre-D path in GNU Radio since it's shorter, but I see the same problem on the post-D path, where a WBFM Receive and a Rational Resampler are added, followed by a Throttle block and then a TCP Sink block sending float data.
File Source (Output Type: complex, vector length: 1) =>
Throttle (vector length: 1) =>
Low Pass Filter (FIR Type: Complex->Complex (Decimating)) =>
Throttle (vector length: 1) =>
TCP Sink (input type: complex, vector length: 1).
This seems to be the correct way to specify the stream parameters (and indeed Companion shows errors if I make changes which mismatch the stream items), but I can find no way to decode the data correctly on the other end of the stream.
"the historic RFC 1700 (also known as Internet standard STD 2) has defined the network order for protocols in the Internet protocol suite to be big-endian , hence the use of the term 'network byte order' for big-endian byte order."
see https://en.wikipedia.org/wiki/Endianness
Having mentioned that the network order for protocols is big-endian, this actually says nothing about the byte order of the network payload itself.
Also note: Sun Microsystems made big-endian native-byte-order computers (upon which much Internet protocol development was done).
I am surprised the previous answer has gone this long without a lesson on network byte order versus native byte order.
GNURadio appears to assume native byte order from a UDP Source block.
Examining the datatype color codes in Help->Types of GNURadio Companion, the orange colored 'float' connections are float32.
To verify a computer's native byte order, in Python, do:
from sys import byteorder
byteorder
the result will be 'little' or 'big'
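With that in mind, a client-side sketch (Python with numpy assumed; the host and port are placeholders) that reinterprets the raw TCP bytes as native-endian items might look like this; on a little-endian pair of machines the '<' dtypes are the native ones:

import socket
import numpy as np

sock = socket.create_connection(("localhost", 52001))   # hypothetical TCP Sink address
buf = sock.recv(4096)
buf = buf[: len(buf) - (len(buf) % 8)]                   # drop any partial item at the end

floats = np.frombuffer(buf, dtype="<f4")     # the float TCP Sink: one float32 per item
samples = np.frombuffer(buf, dtype="<c8")    # the complex TCP Sink: interleaved float32 I,Q pairs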
It might be possible that no matter what type of floats you are sending, when the bytes get on the network they are ordered little-endian. I had a similar problem with a UDP connection, and I solved it by parsing the floats as little-endian on the client side.
Let's say we have two machines on a network, MA and MB:
MA treats the order of the bits in a byte as little-endian,
while MB treats the order of the bits in a byte as big-endian.
How do MA and MB agree on which "endianness" to use for the bits in a byte when communicating over the network?
Is there a standard "network endianness", or what?
Do socket programmers have to take any actions in ensuring a correct communication ?
For example, HTTP is a text protocol, meaning that machines send and receive bytes which represent characters; what if the bit-level endianness of the character encoding differs between the two machines?
Yes, the hardware protocols specify the bit order of bytes on all network links. This is generally handled automatically by the NIC hardware.
See, for example, this description of Ethernet frame format.
Ethernet transmission is strange, in that the byte order is big-endian (leftmost byte is sent first), but the bit order is little-endian (the rightmost bit, the LSB (Least Significant Bit) of the byte, is sent first).
Check this page: http://www.comptechdoc.org/independent/networking/protocol/protlayers.html
It suggests that byte ordering is done at the Presentation Layer which is quite high up. This however will relate specifically to the application that you are using. I suspect data at lower levels (wrapping the higher levels) has a predetermined byte and bit order.
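At the application level the usual approach is to pick an explicit wire order and convert at both ends, e.g. with the '!' (network, big-endian) format of Python's struct module; the value here is arbitrary:

import struct

value = 0x12345678
wire = struct.pack("!I", value)          # network (big-endian) order: 12 34 56 78
native = struct.pack("I", value)         # native order: 78 56 34 12 on a little-endian host
decoded = struct.unpack("!I", wire)[0]   # both ends agree, whatever their native order is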
I'm trying to understand asynchronous serial data transmission. I know that the transmitting device sends a start bit (e.g. 1) to the receiver to indicate that transmission has begun; then a stop bit (e.g. 0) afterwards to indicate that the transmission has ended.
What I don't understand: how does the receiving device know which bit is the stop bit? The stop bit is surely no different from the other bits of data. The only way I can think of is if the transmitting device stops sending bits for a significant gap, the receiving device would know that no more bits are forthcoming, and the last bit must have been a stop bit. But if that is the case, then why would a stop bit be required at all, rather than the receiving device simply waiting for a bit, and considering the transmission to be ended when the transmitting device doesn't send any more bits?
That becomes a question of protocol. Start and stop bits only have meaning if the communicating devices agree on that meaning (e.g. a frame consists of a start bit, 8 data bits, and a stop bit). Similarly, how to denote when a particular communication is complete needs to be agreed between the participants (e.g. define one or more frames that denote message termination). So for a particular communication either a full frame is received and the listener keeps listening, a partial frame is received with no subsequent data transmission and the connection can be considered faulted after some duration, or a full frame is received and that frame denotes the end of the exchange.
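As a small illustration of that last point (pyserial assumed; the frame size, timeout, and terminator byte are arbitrary choices, not part of any standard):

import serial

# The UART layer delivers whole characters; the protocol layer decides
# what a frame is and when the exchange is over.
FRAME_SIZE = 10              # agreed frame: e.g. 1 header byte + 8 data bytes + 1 terminator
port = serial.Serial("/dev/ttyUSB0", 9600, timeout=0.5)

while True:
    frame = port.read(FRAME_SIZE)
    if len(frame) == 0:              # nothing before the timeout: the peer has stopped talking
        break
    if len(frame) < FRAME_SIZE:      # partial frame followed by silence: treat the link as faulted
        raise IOError("incomplete frame")
    if frame[-1] == 0x04:            # agreed terminator byte (EOT here) ends the exchange
        break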
I have a situation where, after reading the byte stream from the COM port, I have exactly one byte of data in a QByteArray object. BUT one very unfriendly protocol requires 9 bits of data per item read from the COM port.
According to the Win32 API function ReadFile(....), I can only read whole bytes (1, 2, 3, ...) from the COM stream.
So that's why I am reading only 8 bits (1 byte) with this function, and with some operations on the parity bit I am calculating the value of the 9th bit of the overall data...
So on one hand I have 1 byte (8 bits) of real data; on the other hand I have the value of this 9th bit (0 or 1): two objects which together must form the value of the overall data item.
How can I combine these objects into one final QByteArray object? Because the global function ReadComData can and must return only a QByteArray object.
UARTs cannot "write" 9-bit data. On the wire, your (typically 8-bit) data are usually framed between a start bit and a stop bit, so you have 10 bits transmitted for every byte you send. If you have a parity bit, it is transmitted after the last data bit, but before the stop bit. But this is generated by the sending UART, not part of a protocol. The data bus for a typical 16550 UART is only 8 bits wide (you can actually send 5-, 6-, 7-, or 8-bit data).
On the receiving end, the UART has to be configured based on what is on the wire. If your sender is using a parity bit, then you program the UART (via the "COM" port settings) accordingly. The parity bit is just to help check for errors on the wire. It is based on the data bits -- you cannot put another data bit in a parity bit. The receiving UART can be used to check for parity errors (read via the line status register (LSR)), and this can be passed up to you via system calls.
It is possible your protocol is splitting up the data across multiple bytes. If that's the case, then convert two bytes into one 16-bit word and mask off the bits you don't want to use.
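If the ninth (parity-derived) bit really must travel with each data byte inside the byte array, one option is to widen each item to two bytes; a small sketch (the little-endian packing is my own choice, not part of any protocol):

# Store each 9-bit item as a little-endian 16-bit word, so the result can be
# appended to a plain byte buffer (e.g. a QByteArray on the C++ side).
def pack_nine_bit(data_byte: int, ninth_bit: int) -> bytes:
    value = ((ninth_bit & 1) << 8) | (data_byte & 0xFF)   # 9 significant bits, 7 unused
    return value.to_bytes(2, "little")

def unpack_nine_bit(pair: bytes):
    value = int.from_bytes(pair, "little")
    return value & 0xFF, (value >> 8) & 1                 # (data byte, ninth bit)

print(pack_nine_bit(0xA5, 1).hex())   # 'a501'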