What throughput (B/s) can be achieved with Movesense send_ble_nus with version 1.6.2? I assume a packet length of 20 bytes is optimal. At 50 Hz * 20 B = 1000 B/s there is no loss when listening with the Xamarin Forms https://github.com/aritchie/bluetoothle component on Windows 10 and Android 8.1. At 100 Hz * 20 B = 2000 B/s some packets are lost (Windows 10 <1 %, Android 8.1 <0.1 %). Can 2000 B/s be reached reliably, e.g. with MTU changes or with more optimal code?
The Movesense sensor supports an MTU of up to 158 bytes and BLE 4.2 Data Length Extension (DLE). If the counterpart knows how to use a large MTU and DLE, the optimum is to fill the whole packet, i.e. to put the data in 155-byte packets. Theoretically it is possible to reach speeds of up to 800 kbps, but in practice with a mobile phone it will be less (maybe much less).
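As a rough illustration (this is not Movesense API code; the 20 B and 155 B payload sizes come from the figures above, while the notification rates are just example values), application throughput is simply payload per packet times packet rate, so larger packets cut the rate needed for a given B/s target:

# Application throughput = payload bytes per packet * packets per second.
# 20 B and 155 B payloads are from the discussion above; the rates are example values.
def throughput_bps(payload_bytes, rate_hz):
    return payload_bytes * rate_hz

for payload, rate in [(20, 100), (155, 13), (155, 100)]:
    print(payload, "B *", rate, "Hz =", throughput_bps(payload, rate), "B/s")
# 20 B * 100 Hz = 2000 B/s; 155 B * 13 Hz ~ 2000 B/s; 155 B * 100 Hz = 15500 B/s

With 155-byte packets the same 2000 B/s needs only about 13 notifications per second instead of 100.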
With Android it is easy to see which connection parameters are negotiated by enabling the Bluetooth HCI snoop log in Developer options and studying the resulting log file with the Wireshark protocol analyzer.
Related
When transferring a raw stream of data over Bluetooth LE L2CAP between Linux and iOS, I get a throughput of approximately 9 kB of payload per second. I expected to be able to transfer approximately 25 kB/s.
How can I investigate and/or optimize the transfer rate?
Client Details
The L2CAP client is an iPhone 13 Pro running iOS 16 using the "CBL2CAPChannel-Demo" app over PSM 0x95.
It is essentially using Apple's openL2CAPChannel(_ PSM:) method in CBPeripheral.
As far as I see, Apple offers no configuration options for changing connection intervals, MTUs or anything like that. It all seems to be automatic.
Server Details
The L2CAP server is a Linux computer running Linux 5.10 with the BlueZ Bluetooth stack.
The test program is the command l2cat from Rust's bluer-tools.
I have used btmon to examine the exchanged packets, and they seem to be generally either 188 or 243 bytes in length.
Per Apple's recommendation, I have tried setting the connection interval min/max to 15 like so:
echo 15 > /sys/kernel/debug/bluetooth/hci0/conn_min_interval
echo 15 > /sys/kernel/debug/bluetooth/hci0/conn_max_interval
It did not have any effect on throughput.
How can I tell whether the Bluetooth module is using the 1Mbps PHY or the 2Mbps PHY?
I have tested the server on two separate types of hardware:
Raspberry Pi CM4 with its built-in Bluetooth module and an external antenna
i.MX 8M Mini computer with an Intel WiFi 6 AX200 module and an external antenna
Note: It was tested with two different types of external antenna, and it was verified that the antennas were connected properly.
I am trying to understand the bottleneck in my setup, if one exists.
From an iPhone 11 app (central) I am writing to an L2CAP channel made available by a Raspberry Pi 4B (peripheral using BlueZ). (Using the CoreBluetooth framework on the iOS side)
From the btmon log I see that the iPhone suggests an MTU and MPS of 1498. The Pi suggests an MTU of 65535 and an MPS of 247.
Throughput gets better with a higher MTU, but only up to a point. There is no difference between specifying an MTU of 5000 or 65535 from the peripheral side.
On the peripheral I have set the connection interval min to 15ms and max to 15ms (and tried 30ms). Higher intervals result in slower throughput as expected.
The throughput does not seem to go above 137 kbit/s, which is far lower than the 394 kbit/s shown by Apple at WWDC 2017.
Data length extension is available on both devices.
From the btmon logs I see that the majority of packets are of size 247 (243 bytes of payload) and a few are of size 146 (142 bytes of payload). This could account for some slowness, but I doubt it causes the throughput to go down by a factor of 3.
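For reference, a rough model of where these numbers could come from: throughput is roughly the payload carried per connection event divided by the connection interval. The 243-byte payload and 15 ms interval are from above; the packets-per-event counts below are illustrative assumptions, not measured values.

# Back-of-the-envelope L2CAP throughput: payload per connection event / connection interval.
# 243 B payload and 15 ms interval are from the text; packets per event is an assumption.
def throughput_kbit_s(payload_bytes, packets_per_event, interval_ms):
    return payload_bytes * packets_per_event * 8 / interval_ms  # bits per ms == kbit/s

for n in (1, 2, 3):
    print(n, "packet(s) per event ->", round(throughput_kbit_s(243, n, 15)), "kbit/s")
# 1 -> ~130 kbit/s (close to the observed 137), 3 -> ~389 kbit/s (close to Apple's 394)

In other words, the observed 137 kbit/s is consistent with only about one full packet being sent per connection event, while Apple's figure would need roughly three.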
Am I missing something or is this the limit for my setup?
I have a device sending a series of 20-byte UART frames (20 bytes each, with no delay between the bytes). When sending data from the device to a PC using an FTDI TTL-to-USB converter, there is sometimes a 16 ms delay in the middle of the 20 bytes. Why is that?
This was monitored with the Look RS232 software on the PC.
When observed with the monitoring software on the PC, there is a 16 ms delay in the middle of the 20 bytes in some of the frames, e.g. 4 bytes arrive first and then the remaining 16, or 6 bytes first and then the remaining 10 after 16 ms.
I have tried different baud rates; the same issue persists.
I read that the FTDI chip has a 16 ms latency timeout that applies when fewer than 64 bytes have been received. But this should not affect my application, since data is sent continuously in bursts of at most 20 bytes, after which there is a gap before the next 20 bytes arrive. So as soon as the 20 bytes are received the timeout should fire; there should not be a delay in the middle of a frame.
Reference:
https://www.ftdichip.com/Support/Documents/AppNotes/AN_107_AdvancedDriverOptions_AN_000073.pdf
Could this be due to USB scheduling delay and the fact that it is not exactly interrupt driven? Does anybody have a possible solution for this? The delay is always 16 ms.
The expected result is 20 bytes of data with no delay in between.
"Without delay" will be impossible: since we are talking about USB, there is always some jitter due to the nature of USB and of most bus systems in general. The latency/timeout setting of the FTDI driver is typically 16 ms. You can change it under Windows in Device Manager via the settings dialog of the corresponding virtual COM port. The best I could get with an FT2232H at baud rates beyond 10 MBd is 1.46 ± 0.31 ms (n = 1000 packets) for packets travelling back and forth.
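If you prefer to change the latency timer programmatically instead of through Device Manager, a sketch along these lines with the pyftdi library should work (the FT2232H VID/PID 0x0403/0x6010 and the 2 ms value are assumptions to adapt; on Linux the same timer is also exposed as /sys/bus/usb-serial/devices/ttyUSB0/latency_timer):

# Sketch: lower the FTDI latency timer with pyftdi (assumed FT2232H, VID 0x0403 / PID 0x6010).
from pyftdi.ftdi import Ftdi

ftdi = Ftdi()
ftdi.open(vendor=0x0403, product=0x6010)  # open the first matching FTDI device
ftdi.set_latency_timer(2)                 # 2 ms instead of the default 16 ms
ftdi.close()

Keep in mind that a lower latency timer means more, smaller USB transfers, so there is a trade-off against CPU and USB overhead.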
How long does it take for an iBeacon to send an advertising packet?
I want to clarify this in order to configure the advertising interval for beacons so that I can read hundreds of beacons as reliably and as quickly as possible while avoiding collisions between iBeacon advertising packets.
This is a useful question in terms of getting an ideal lower bound on power usage by the device (excluding any compute power used by the device).
The BLE advertising packet has a preamble of 1 byte, an access address of 4 bytes, a header of 2 bytes, a MAC address of 6 bytes, data of up to 31 bytes, and then a CRC of 3 bytes. That's a total of 47 bytes, or 376 bits.
BLE has a nominal data rate of 1 Mbit/s. According to this article, that excludes framing / error checking / connecting (although an advertising packet probably won't spend a lot of time connecting). So assuming the best case of 1 Mbit = 1024*1024 bits, we can send about 2788 advertising packets per second. That means each one takes about 0.36 ms - in an ideal world. If the article is right, and the effective data rate is as much as 4x slower, it could be as long as about 1.4 ms.
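The same arithmetic in a few lines, using the nominal 1 Mbit/s = 10^6 bit/s PHY rate (the field sizes are from the breakdown above; the 4x factor is the article's pessimistic assumption, not a measured value):

# Airtime of a maximal BLE 1M PHY advertising packet: on-air bits / PHY bit rate.
on_air_bytes = 1 + 4 + 2 + 6 + 31 + 3   # preamble + access address + header + MAC + data + CRC
on_air_bits = on_air_bytes * 8          # 376 bits
phy_rate = 1_000_000                    # 1M PHY: one microsecond per bit
print(on_air_bits / phy_rate * 1e3)     # ~0.38 ms per advertising packet
print(on_air_bits / phy_rate * 1e3 * 4) # ~1.5 ms if the effective rate were 4x slower

Either way, a single advertisement occupies the channel for well under half a millisecond in the ideal case.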
I am trying to estimate the bandwidth usage of an XMPP application.
The application receives a 150-byte ping each second and answers with a ping of the same size. (*)
However, when I measure the data usage, I get something like 900 bytes per ping (and not the 300 expected).
I have a suspicion this might relate to something in a layer below (TCP? IP?) and datagram sizes, but so far reading the TCP/IP guide has not led me anywhere.
Another hypothesis would be that this overhead comes from XMPP itself, somehow.
Can anyone enlighten me?
(*) To get this "150 bytes" figure I counted the number of characters in the <iq> stanza (the XML representation of the ping).
I am using TLS, but not BOSH (actually, BOSH is used on the other connection: I am measuring results in the Android client, and the pings are coming from a web application, but I think that should not matter).
The client is Xabber, running on Android.
Let's try to calculate the worst-case overhead down to the IP level.
For TLS we have:
With TLS 1.1 and up, in CBC mode: an IV of 16 bytes.
Again, in CBC mode: TLS padding. TLS uses blocks of 16 bytes, so it may need to add 15 bytes of padding.
(Technically TLS allows for up to 255 bytes of padding, but in practice I think that's rare.)
With SHA-384: a MAC of 48 bytes.
A TLS record header of 5 bytes.
That's 84 extra bytes.
TCP and IP headers are 40 bytes (from this answer) if no extra options are used, but for IPv6 this would be 60 bytes.
So you could be seeing 84 + 60 + 150 = 294 bytes per ping.
However, on the TCP level we also need ACKs. If you are pinging a different client (especially over BOSH), then the pong will likely be too late to piggyback the TCP ACK for the ping. So the server must send a 60 byte ACK for the ping and the client also needs to send a 60 byte ACK for the pong.
That brings us to:
294 + 60 + 294 + 60 = 708
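As a quick check, the same accounting in a few lines (all figures are from the list above; the 60-byte header is the IPv6 worst case):

# Worst-case per-stanza overhead from the figures above (CBC mode, SHA-384 MAC, IPv6).
payload = 150                       # XMPP <iq> ping/pong, counted from the XML
tls_overhead = 16 + 15 + 48 + 5     # IV + max CBC padding + MAC + record header = 84
ip_tcp_headers = 60                 # IPv6 + TCP without options (40 for IPv4)
ack = 60                            # bare TCP ACK, one for the ping and one for the pong

per_stanza = tls_overhead + ip_tcp_headers + payload   # 294 bytes
total = per_stanza + ack + per_stanza + ack             # ping + ACK + pong + ACK
print(per_stanza, total)                                # 294 708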
900 still sounds a lot too large. Are you sure the ping and the pong are both 150 bytes?