How do I make the peripheral of a BLE connection request a PHY update to 2M PHY? - bluetooth-lowenergy

Parameters like the connection interval can be changed by editing debugfs files such as conn_min_interval.
The value in this file is then used for negotiating the connection interval between the central and peripheral.
Is there an equivalent file for changing the preferred LE PHY to 2M PHY or Coded PHY?
From this I imagine the peripheral would trigger a PHY update request.
I have found plenty of documentation on what 2M PHY and Coded PHY are, but no documentation on how to actually enable them.
(I'm using BlueZ with devices that support both 2M PHY and Coded PHY.)
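For reference, here is a minimal sketch of the connection-interval tuning the question describes, assuming the usual debugfs location (/sys/kernel/debug/bluetooth/hci0/) and values in units of 1.25 ms; the exact path and adapter name may differ on your system, and this does not touch the PHY itself.

#include <stdio.h>

/* Minimal sketch: set the preferred connection interval limits through the
 * Bluetooth debugfs files mentioned in the question. The hci0 adapter name
 * and paths are assumptions; values are in units of 1.25 ms. Requires root. */
int main(void)
{
    const char *min_path = "/sys/kernel/debug/bluetooth/hci0/conn_min_interval";
    const char *max_path = "/sys/kernel/debug/bluetooth/hci0/conn_max_interval";

    FILE *f = fopen(min_path, "w");
    if (f == NULL) { perror(min_path); return 1; }
    fprintf(f, "24");                 /* 24 * 1.25 ms = 30 ms */
    fclose(f);

    f = fopen(max_path, "w");
    if (f == NULL) { perror(max_path); return 1; }
    fprintf(f, "40");                 /* 40 * 1.25 ms = 50 ms */
    fclose(f);

    return 0;
}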

Related

Is there a way to send data from the FPGA logic on a Zedboard to an external CPU without involvement of the ZYNQ PS?

I am a high school student who is not very familiar with FPGAs or the Xilinx line.
I am running a ring oscillator module on a Zybo Z7 board. I am also running a counter module, which I want to sample at a high rate. I am currently sending the data through AXI to the ZYNQ processing system, and then using the built-in UART-to-USB buffer to send the data through a USB cable to my computer. On the computer side, I treat this input as a virtual serial line and use a Python script to read and log the data from an I/O stream. This method is very slow, however, and I am trying to increase the sample rate. Thus, I was wondering if I could bypass the onboard PS and connect the FPGA fabric directly to the UART buffer.
I have tried optimizing my PS code, which I have written in C. I have reached the point where it takes 30 oscillations of the onboard ZYNQ clock between samples. Now, however, I have created a newer and more reliable sampling framework in the FPGA logic, which requires a 'handshake' mechanism to start and stop the counter between samples. It takes a very long time for the PS to sample the counter, send the sample, and then restart the counter. Thus, the uptime of my sampling framework is a fraction of what I want it to be. Removing the PS would be ideal, as I know I can automate this handshake signal within the PL if I am able to connect it to a UART interface.
You can implement logic in the PL that handles the UART communication, thus bypassing the PS.
Here's an implementation you can try: https://github.com/jakubcabal/uart-for-fpga
You would connect the UART pins to one of the Zybo Z7 Pmod ports and use an external USB-to-UART adapter such as this one (any adapter will work as long as it supports 3.3 V): https://www.adafruit.com/product/5335
The adapter built into the board is connected directly to the PS MIO pins and cannot be used by the PL.
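On the computer side, the external adapter still shows up as a virtual serial device, so the host read loop stays much the same as before. Below is a minimal sketch of such a reader in C; the /dev/ttyUSB0 path and 115200 baud rate are assumptions and should match however the PL UART core is configured.

#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

/* Minimal sketch: read raw counter samples from the USB-to-UART adapter
 * and dump them to stdout for logging. Device path and baud rate are
 * assumptions. */
int main(void)
{
    int fd = open("/dev/ttyUSB0", O_RDONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);             /* raw bytes, no line editing or echo */
    cfsetispeed(&tio, B115200);  /* assumed baud rate */
    cfsetospeed(&tio, B115200);
    tcsetattr(fd, TCSANOW, &tio);

    unsigned char buf[256];
    for (;;) {
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n <= 0)
            break;
        fwrite(buf, 1, (size_t)n, stdout);
    }
    close(fd);
    return 0;
}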

Adding delay in packets using DPDK

I want to simulate latency in packets I send using DPDK.
Initially I added usleep(10) and it worked, but later I realized that sleeping might hinder the performance of my traffic generator.
usleep(10);
rte_eth_tx_burst(m_repid, queue_id, tx_pkts, nb_pkts);
So, I tried using a polling mechanism. Something like this:
inline void add_latency(float lat) {
    //usleep(lat);
    float start = now_sec();
    float elapsed;
    do {
        elapsed = now_sec() - start;
    } while (elapsed < (lat / 1000));
}
But the packets are not getting sent.
tx_pkts: 0
What am I doing wrong?
EDIT:
DPDK version: DPDK 22.03
Firmware:
# dmidecode -s bios-version
2.0.19
NIC:
0000:01:00.0 'I350 Gigabit Network Connection' if=em1 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic *Active*
0000:01:00.3 'I350 Gigabit Network Connection' if=em4 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic
As of DPDK 22.03, neither the Intel i350 NIC nor the Mellanox MT27800 supports HW offload for delayed packet transmission. Delayed packet transmission is a hardware feature that allows a packet to be transmitted at a defined future timestamp. For example, if one needs to send a packet 10 microseconds after the time of DMA to the NIC buffer, the TX descriptor can be updated with that 10 us as the TX timestamp.
A similar (approximate) behaviour can be achieved by enabling HW TX timestamping, where the NIC reports the timestamp back in the transmit descriptor. The captured timestamp is the time at which the first byte of the packet is sent out on the wire. With an estimate of the time required to DMA the packet from DPDK main memory to NIC SRAM, one can approximate delayed packet transmission.
But there are certain caveats to this approach, such as:
The DPDK NIC PMD should support a low-latency mode (allowing TX of a 1-packet burst), for example via the Intel E810 NIC PMD args.
It should allow disabling of the HW switch engine and lookup, for example vswitch_disable or eswitch_disable in the case of Mellanox CX-5 and CX-6 NICs.
It should support HW TX timestamps to allow software control over TX intervals.
Note:
The Intel i210, in the Linux driver, supports delayed transmission with the help of a TX shaper.
With the Mellanox ConnectX-7 NIC, the PMD arg tx_pp can be used; it provides the capability to schedule traffic directly on the timestamp specified in the descriptor.
Since the question does not clarify the packet size or the inter-frame gap delay for "simulate latency in packets I send using DPDK", the assumption made here is 64B packets on the wire with the fixed default IFG.
Suggestion:
Option-1: if the packets are 64B, the best approach is to create an array of pause packets for the TX burst. Based on a HW or SW timestamp, select the time interval and swap the actual packet to be sent into the appropriate array index (see the sketch at the end of this answer).
Option-2: use SyncE packets to synchronize timestamps between server and client. Using this out-of-band information, do a dynamic sleep (accounting for the approximate cost of DMA and wire transfer) to skew towards the desired result.
Please note that if the intention is to check the latency of a DUT, the whole approach as specified in the code snippet is not correct. Refer to the DPDK SyncE example or DPDK pktgen latency measurement for more clarity.
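For Option-1, a rough sketch of what the pause-packet filler burst could look like is below. It assumes an already-initialized port, queue, and mbuf pool (here called mbuf_pool); the helper names are made up for illustration, and the pause-time quanta and filler count would have to be chosen from your own timestamp measurements.

#include <string.h>
#include <rte_byteorder.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Build one 802.3x pause frame in an mbuf (illustrative helper).
 * pause_quanta is the pause time in units of 512 bit times. */
static struct rte_mbuf *
make_pause_frame(struct rte_mempool *mbuf_pool, uint16_t pause_quanta)
{
    static const uint8_t hdr[] = {
        0x01, 0x80, 0xC2, 0x00, 0x00, 0x01,   /* dst: MAC control multicast */
        0x00, 0x00, 0x00, 0x00, 0x00, 0x00,   /* src: fill in if needed     */
        0x88, 0x08,                           /* EtherType: MAC control     */
        0x00, 0x01                            /* opcode: PAUSE              */
    };
    struct rte_mbuf *m = rte_pktmbuf_alloc(mbuf_pool);
    if (m == NULL)
        return NULL;
    char *p = rte_pktmbuf_append(m, 60);      /* minimum frame size w/o FCS */
    if (p == NULL) {
        rte_pktmbuf_free(m);
        return NULL;
    }
    memset(p, 0, 60);
    memcpy(p, hdr, sizeof(hdr));
    uint16_t quanta_be = rte_cpu_to_be_16(pause_quanta);
    memcpy(p + sizeof(hdr), &quanta_be, sizeof(quanta_be));
    return m;
}

/* Send 'real' preceded by n_fill pause frames in a single burst, so the
 * wire carries a known amount of filler before the packet of interest. */
static uint16_t
tx_with_filler(uint16_t port_id, uint16_t queue_id,
               struct rte_mempool *mbuf_pool,
               struct rte_mbuf *real, uint16_t n_fill)
{
    struct rte_mbuf *burst[64];
    uint16_t n = 0;

    while (n < n_fill && n < 63) {
        struct rte_mbuf *m = make_pause_frame(mbuf_pool, 0xFFFF);
        if (m == NULL)
            break;
        burst[n++] = m;
    }
    burst[n++] = real;

    uint16_t sent = rte_eth_tx_burst(port_id, queue_id, burst, n);
    while (sent < n)                  /* free whatever the driver rejected */
        rte_pktmbuf_free(burst[sent++]);
    return sent;
}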

BLUEZ - is propagation delay of ~50ms normal for BLE?

I'm trying to sample TI CC2650STK sensor data at ~120 Hz with my Raspberry Pi 3B, and when comparing the signal trace to a wired MPU6050 I seem to have a ~50 ms phase shift in the resultant signal (in the image below, orange is the data received over BLE and blue is the data received over I2C from the other sensor, an MPU6050).
The firmware on the sensor side doesn't seem to have any big buffers:
(50 ms / 8 ms per sample ≈ 6 samples, where each sample is 18 bytes long, so roughly a 6 × 18 = 108-byte buffer would be required, I guess).
On the RPi side I use BlueZ with the bluepy library, and again I see no buffers that could cause such a delay. For test purposes the sensor is lying right next to my Pi, so surely the over-the-air transmission cannot be taking 40-50 ms? Moreover, timing my code that handles the incoming notifications shows that the whole handling chain (my high-level code + the bluepy library + the BlueZ stack) takes less than 1-2 ms.
Is it normal to see such huge propagation delay or would you say I'm missing something in my code?
Looks legit to me.
BLE is time-slotted. The peripheral cannot transmit whenever it wants; it has to wait for the next connection event to send its payload. If the next connection event falls right after a sensor data update, the message gets sent with little latency. If the sensor data is generated right after a connection event, the peripheral stack has to wait a complete connection interval for the next connection event.
The connection interval is an amount of time, a multiple of 1.25 ms between 7.5 ms and 4 s, set by the master of the connection (your Pi's HCI) upon connection. It can be updated by the master arbitrarily. The slave can kindly ask the master to modify the parameters, but the master can do whatever it wants (most master implementations do try to respect the constraints requested by the slave, though).
If you measure an average delay of 50 ms, you are probably using a connection interval of 100 ms (probably a little less because of constant delays in the chain).
BlueZ provides the hcitool lecup command, which can change the connection parameters of a given connection.
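To make the arithmetic concrete, here is a small sketch of the unit conversion and the expected-delay estimate; the 100 ms interval is just the example value from the answer, and the resulting number is what would be passed as the min/max interval in a connection parameter update (e.g. via hcitool lecup).

#include <stdio.h>

/* Convert a desired connection interval in milliseconds to the 1.25 ms
 * units used by the HCI connection parameter update, and estimate the
 * average notification delay as half the interval. */
int main(void)
{
    double interval_ms = 100.0;               /* example interval */
    unsigned units = (unsigned)(interval_ms / 1.25);
    double avg_delay_ms = interval_ms / 2.0;  /* data becomes ready at a
                                                 random point in the interval */
    printf("interval in 1.25 ms units: %u (0x%04X)\n", units, units);
    printf("expected average delay:    %.1f ms\n", avg_delay_ms);
    return 0;
}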

Sending and receiving data over Bluetooth Low Energy (BLE) using Telit BlueMod+SR

We are looking at using the Telit BlueMod+SR chip in a hardware idea we are working on. Towards that I've been trying to build a Bluetooth Low Energy (BLE) server simulation using the Telit BlueEva+SR evaluation board driven over USB by a Python script.
The two relevant manuals appear to be:
BlueMod+SR AT Command Reference
Terminal I/O Profile Client Implementation Guide (though I'm implementing the server)
(N.B. these are available here but are behind a register/login.)
I'm stuck on something basic: how to send or receive data. I understand that this is done by setting the value of a Generic Attribute Profile (GATT) service's characteristic. The BlueMod+SR already has the GATT service characteristics that I need (a UART data TX characteristic and a UART data RX characteristic) on its Terminal I/O Service. The UUIDs of the characteristics I need are given in the Terminal I/O Profile Client Implementation Guide, but I cannot see how to read from or write to them. The AT Command Reference has a section on GATT server commands, but the only one listed, +LEATTRIB, is for defining the attributes of a service (and the ones I need are already defined).
What are the commands I need to read and write the values for the characteristics UART Data TX, UART Data RX, UART Credits TX, and UART Credits RX?
It turns out that I did not need to use the credit mechanism; that's handled for me. So to write to the TX characteristic I can either connect over BLE and just write the data, or use the multiplexing mode and write the data to channel 0x01 (Terminal I/O). Reading the RX characteristic is similarly just reading the serial connection.

Serial Transfer UART Delay

I currently have an embedded device connected to a PC through a serial port. I am having trouble receiving data on the PC. When I use my PCI serial port card I am able to receive data right away (no delays). When I use my USB-to-serial plug or the motherboard's built-in serial port I have to delay reading data (40 ms for 32-byte packets).
The only difference I can find between the hardware is the UART. The PCI card uses a 16650 and the plug/motherboard uses a standard 16550A. The PCI card is set to interrupt at 28 bytes and the plug is set to interrupt at 14 bytes.
I am connected at 56700 Baud (if this helps).
The delay becomes the majority of the duty cycle and really increases transfer time (a 10-minute transfer becomes a 1-hour transfer).
Does anyone have an explanation for why I have to use a delay with the plug/motherboard? Can anyone suggest a possible solution to minimizing or removing this delay?
Linux has an ASYNC_LOW_LATENCY flag for the serial driver that may help. Whatever driver you're using may have something similar.
However, latency shouldn't make a difference on a bulk transfer. It should add 40 ms at the very start of the transfer and that's it, which is why drivers don't worry about it in the first place. I would recommend refactoring your transfer protocol to use a sliding window protocol, with a window size of around 100 packets, if you are doing 32-byte packets at that baud rate and latency. In other words, you only want to stop transmitting if you haven't received an ACK for the packet you sent 100 packets ago.
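To illustrate the sliding-window suggestion, here is a rough sketch; send_packet(), poll_for_ack() and the packet type are placeholders for whatever your transfer protocol actually provides, not part of any real API.

#define WINDOW 100                            /* packets allowed in flight */

struct packet { unsigned char data[32]; };    /* placeholder 32-byte packet */

/* Placeholders for the real protocol: transmit one packet, and return the
 * index just past the highest packet acknowledged so far. */
void send_packet(const struct packet *p);
int  poll_for_ack(void);

/* Keep transmitting until WINDOW packets are unacknowledged, instead of
 * stopping to wait for an ACK after every single packet. */
void transfer(const struct packet *pkts, int n_pkts)
{
    int next_to_send = 0;   /* first packet not yet transmitted       */
    int next_unacked = 0;   /* oldest packet still waiting for an ACK */

    while (next_unacked < n_pkts) {
        /* Fill the window. */
        while (next_to_send < n_pkts &&
               next_to_send - next_unacked < WINDOW) {
            send_packet(&pkts[next_to_send]);
            next_to_send++;
        }
        /* Consume any ACKs that have arrived; each ACK advances the
         * trailing edge of the window. */
        int acked = poll_for_ack();
        if (acked > next_unacked)
            next_unacked = acked;
    }
}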
You'll probably find that different USB-Serial converters produce different results. We've found that the FTDI ones work well for talking with embedded devices. Some converters seem to buffer the data for a long time and/or fragment it.
I've never seen a problem with a motherboard connection - not sure what is going on there! Can you change the interrupt point for the motherboard serial port?
I have a serial-to-USB converter. When I hook it up to my breakout box and create a loopback, I am able to send/receive at close to 1 Mbps without problems. The serial port sends binary data that may be translated into ASCII data.
Using .NET, I set my software to fire an event on every byte (ReceivedBytesThreshold = 1), though that doesn't mean it will.
