Is there any known solution to speed up USB response time on Windows 7?
When I communicate with Usb.SerialPort I always have a delay of at least 10 ms for a response. I suspect the issue is not the serial port but USB communication in Windows itself.
Related
I have a project consisting of a microcontroller (RP2040) and a Raspberry Pi 4. The microcontroller has to send 2000 bytes every 10 milliseconds to the Raspberry Pi over USB, and there is an application on the Raspberry Pi which consumes this data from USB.
The issue is: if the consumer app waits 200 milliseconds and only then consumes from USB, this causes a bottleneck on the microcontroller, which leads to data loss.
How can I increase the USB buffer size on the Raspberry Pi? I'm working on Raspberry Pi OS 32-bit.
Thank you
You should probably just make the consumer app faster or use a thread so it doesn't wait 200 milliseconds at the wrong time. If you can't do that, it might be possible to increase the size and number of read buffers in the Linux kernel by editing the code in cdc-acm.c. I'm not an expert on that code, but when I search for the word read_buffers I see some code that looks relevant. I'm assuming your microcontroller's USB interface is CDC ACM.
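To expand on the thread suggestion: a dedicated reader thread can drain the port continuously into a queue, so the kernel's limited buffers never back up even when the consumer stalls for 200 ms. A minimal sketch, assuming pySerial on the Pi and a CDC ACM device at /dev/ttyACM0 (the names and chunk size are illustrative):

```python
# Sketch: drain the serial port in a dedicated thread so the kernel's
# limited CDC ACM buffers never overflow, even if the consumer stalls.
import queue
import threading

def drain(port, out_q, chunk=2000):
    """Continuously read from `port` (any object with .read) into out_q."""
    while True:
        data = port.read(chunk)
        if not data:            # port closed / EOF
            break
        out_q.put(data)

def start_drain_thread(port, out_q):
    t = threading.Thread(target=drain, args=(port, out_q), daemon=True)
    t.start()
    return t

# Hypothetical usage with pySerial:
#   import serial
#   ser = serial.Serial("/dev/ttyACM0", timeout=0.01)
#   q = queue.Queue()
#   start_drain_thread(ser, q)
#   ... the consumer calls q.get() whenever it likes; nothing is dropped.
```

The consumer can then be as slow as it likes, since the backlog moves from the kernel's fixed-size buffers into a Python queue that grows as needed.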
I want to simulate latency in packets I send using DPDK.
Initially I added usleep(10), and it worked, but I later realized that sleeping might hinder the performance of my traffic generator.
usleep(10);
rte_eth_tx_burst(m_repid, queue_id, tx_pkts, nb_pkts);
So, I tried using a polling mechanism. Something like this:
inline void add_latency(float lat) {
    // usleep(lat);
    float start = now_sec();
    float elapsed;
    do {
        elapsed = now_sec() - start;
    } while (elapsed < (lat / 1000));
}
But the packets are not getting sent:
tx_pkts: 0
What am I doing wrong?
EDIT:
DPDK version: DPDK 22.03
Firmware:
# dmidecode -s bios-version
2.0.19
NIC:
0000:01:00.0 'I350 Gigabit Network Connection' if=em1 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic *Active*
0000:01:00.3 'I350 Gigabit Network Connection' if=em4 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic
As per DPDK 22.03, neither the Intel i350 NIC nor the Mellanox MT27800 supports HW offload for delayed packet transmission. Delayed packet transmission is a hardware feature which allows transmission of a packet at a defined future timestamp. For example, if one needs to send a packet 10 microseconds after DMA to the NIC buffer, the TX descriptor can be updated with the 10 µs TX timestamp.
A similar (approximate) behaviour can be achieved by enabling TX timestamping in HW, i.e. reporting the timestamp back in the transmit descriptor. The timestamp captured is the time at which the first byte of the packet is sent out on the wire. With an approximation of the time required to DMA the packet from DPDK main memory to NIC SRAM, one can achieve delayed packet transmission.
But there are certain caveats, such as:
The DPDK NIC PMD should support low-latency mode (allow TX of a 1-packet burst), e.g. via the Intel E810 NIC's PMD args.
Allow disabling of the HW switch engine and lookup, e.g. vswitch_disable or eswitch_disable in the case of Mellanox CX-5 and CX-6 NICs.
Support for HW TX timestamps, to allow software control over TX intervals.
note:
The Intel i210's Linux driver supports delayed transmission with the help of a TX shaper.
With the Mellanox ConnectX-7 NIC, the PMD arg tx_pp provides the capability to schedule traffic directly on the timestamp specified in the descriptor.
Since the question does not clarify the packet size or the inter-frame gap delay for "simulate latency in packets I send using DPDK", the assumption made here is 64B on the wire with the fixed default IFG.
Suggestion:
Option-1: if it is 64B, the best approach is to create an array of pause packets for the TX burst. Select the time intervals based on a HW or SW timestamp, and swap the array index with the actual packet intended to be sent.
Option-2: use SyncE packets to synchronize timestamps between server and client. Using this out-of-band information, do a dynamic sleep (with an approximate cost for DMA and wire transfer) to skew toward the desired result.
Please note: if the intention is to check latency on a DUT, the whole approach specified in the code snippet is not correct. Refer to the DPDK SyncE example or DPDK pktgen's latency measurement for more clarity.
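Independent of HW offload, any software pacing loop must keep its time units consistent. The snippet in the question compares now_sec() deltas (seconds) against lat/1000, which is only correct if lat is in milliseconds; if lat is in microseconds, as the original usleep(10) implies, the loop spins roughly 1000 times too long. A minimal sketch of a monotonic busy-wait with explicit units (illustrative Python, not DPDK-specific):

```python
# Sketch: busy-wait pacing against a monotonic clock, with all units
# converted to seconds once, up front. Illustrative only.
import time

def busy_wait_us(lat_us):
    """Spin until lat_us microseconds have elapsed."""
    deadline = time.monotonic() + lat_us / 1_000_000.0
    while time.monotonic() < deadline:
        pass                     # burn CPU; no scheduler sleep involved

start = time.monotonic()
busy_wait_us(10_000)             # pace by 10 ms
elapsed = time.monotonic() - start
```

The same discipline applies to the C version: convert lat to seconds (or to TSC cycles via rte_get_tsc_hz()) exactly once, and compare like against like.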
I'm trying to reverse engineer a program that communicates with a device through a COM port. Monitoring the program shows that it sends 4 bytes of 0x00, and then the device replies with the same and switches to flash mode. When I send the same bytes, the device does not respond, and in fact, no matter what I send, the serial port monitor says 0x00 is being written.
I've tried several different COM port monitors, tried several different baud rates, and flushed the COM port before sending the command. Nothing seems to work. What could cause this type of issue?
no matter what I send, the serial port monitor says 0x00
From what you describe, it seems the serial sniffer you are using is faulty.
The serial port sniffer I'm using successfully is this Serial Analyzer.
I am reading data from a sensor connected to a standard RS-232 serial port on a conventional Linux kernel (Ubuntu 12.04).
The sensor outputs at 1000 Hz and connects at a baud rate of 115200, 8N1. Each sensor reading is 4 bytes, for a total throughput of 4 kB/s. The pattern of transmission, confirmed by oscilloscope, is a 4-byte burst followed by a near-millisecond pause. The sensor is very, very consistent.
99% of the packets are received with very low latency. However for about 0.5% of the bytes, the serial port read blocks for 2-8ms. Following this block, all the "missed" bytes are read very quickly. This suggests data is, on rare occasions, being buffered.
I have experimented with scheduler priority (nice) and serial port settings (ASYNC_LOW_LATENCY, VMIN, VTIME, raw, non-blocking settings, etc.). None of these seem to have any discernible effect.
Is there anything else I can do to get more consistent serial port reads short of recompiling the kernel or switching to a more real-time OS?
An answer can be provided with software or hardware arguments. See for example High delay in RS232 communication on a PXA270 or https://electronics.stackexchange.com/questions/96893/what-can-i-do-to-decrease-the-latency-from-these-serial-ports-which-are-attached. You can also try the low_latency parameter, as suggested in Minimize Linux Serial Port Latency.
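Besides low_latency, aligning reads with the sensor's 4-byte bursts can help: in raw mode with VMIN=4 and VTIME=0, each read() blocks until exactly one complete sample is available instead of returning partial bytes. A minimal sketch of that termios setup; it uses a pty pair purely for illustration, where a real system would open the actual device such as /dev/ttyS0:

```python
# Sketch: raw-mode termios read with VMIN=4, VTIME=0, so each read()
# blocks until one complete 4-byte sensor sample is available.
# Demonstrated on a pty pair; a real setup would open /dev/ttyS0.
import os
import pty
import termios
import tty

master, slave = pty.openpty()
tty.setraw(slave)                      # raw mode: no line buffering
attrs = termios.tcgetattr(slave)
attrs[6][termios.VMIN] = 4             # wait for a full 4-byte sample
attrs[6][termios.VTIME] = 0            # no inter-byte timeout
termios.tcsetattr(slave, termios.TCSANOW, attrs)

os.write(master, b"\x01\x02\x03\x04")  # simulate one sensor burst
sample = os.read(slave, 4)             # one complete sample per read
```

This does not remove the occasional 2-8 ms stall, but it guarantees that when a read does return, it returns whole samples rather than fragments that must be reassembled.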
I'm using Python 2.7 with pySerial on Windows 7.
I have 8 devices connected to my PC via virtual COM ports (Silicon Labs CP210x USB-to-UART Bridge). I'm testing them with multiprocessing: all 8 COM ports are open, and each time I send a command to one unit only; there is no multithreading.
The problem is that after some time X (it could be 10 minutes or 5 hours), the output queue of the serial ports fails to deliver responses. It's not a specific port each time; it's a different port (and it can be several ports).
It's important to say that the device receives my command and executes it; the failure is in getting the response. The device I'm testing is definitely OK.
I am sniffing the port with a serial monitor: all the commands are sent OK and the device executes them; it just doesn't respond.
Any ideas?
There could be any number of reasons:
The buffers might be full
The COM port might not be working
The device might be malfunctioning
Check out these things; maybe they will help you.
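On the "buffers might be full" point, a defensive pattern for each per-port worker is to treat a timed-out read as a signal to flush the port's stale buffers before retrying, so one missed reply doesn't wedge the port permanently. A rough sketch assuming a pySerial Serial object opened with a read timeout (read_response and its parameters are illustrative, not pySerial API):

```python
# Sketch: defensive response read for a per-port worker process.
# `port` is assumed to be a pySerial Serial opened with a read timeout,
# so port.read() returns short/empty data instead of blocking forever.
def read_response(port, expected_len, retries=3):
    for _ in range(retries):
        resp = port.read(expected_len)      # short or empty on timeout
        if len(resp) == expected_len:
            return resp                     # got a complete response
        # Timed out: discard half-received bytes and stale output,
        # then retry, so one missed reply does not wedge the port.
        port.reset_input_buffer()
        port.reset_output_buffer()
    return None                             # port really is unresponsive
```

Note that reset_input_buffer()/reset_output_buffer() are the pySerial 3.x names; older releases matching Python 2.7-era code call them flushInput()/flushOutput().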