I am reading data from a sensor connected to a standard RS-232 serial port on a conventional Linux kernel (Ubuntu 12.04).
The sensor outputs at 1000 Hz and connects at a baud rate of 115200, 8N1. Each sensor reading is 4 bytes, for a total throughput of 4 KB/s. The pattern of transmission, confirmed by oscilloscope, is a 4-byte burst followed by a near-millisecond pause. The sensor is very, very consistent.
99% of the packets are received with very low latency. However, for about 0.5% of the bytes, the serial port read blocks for 2-8 ms. Following this block, all the "missed" bytes are read very quickly. This suggests data is, on rare occasions, being buffered.
I have experimented with scheduler priority (nice) and serial port settings (ASYNC_LOW_LATENCY, VMIN, VTIME, raw, non-blocking settings, etc.). None of these seem to have any discernible effect.
Is there anything else I can do to get more consistent serial port reads short of recompiling the kernel or switching to a more real-time OS?
An answer can be provided with software or hardware arguments. See for example High delay in RS232 communication on a PXA270 or https://electronics.stackexchange.com/questions/96893/what-can-i-do-to-decrease-the-latency-from-these-serial-ports-which-are-attached. You can also try the low_latency parameter, as suggested in Minimize Linux Serial Port Latency.
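For reference, a minimal sketch of setting that flag from C, assuming a device at /dev/ttyS0 (the same effect as `setserial /dev/ttyS0 low_latency`); whether it actually changes timing depends on the underlying UART driver:

/* Hedged sketch: enable the low_latency flag on a serial port via
 * TIOCGSERIAL/TIOCSSERIAL. /dev/ttyS0 is a placeholder device name. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/serial.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct serial_struct ser;
    if (ioctl(fd, TIOCGSERIAL, &ser) < 0) { perror("TIOCGSERIAL"); return 1; }
    ser.flags |= ASYNC_LOW_LATENCY;   /* ask the driver to push received bytes up immediately */
    if (ioctl(fd, TIOCSSERIAL, &ser) < 0) { perror("TIOCSSERIAL"); return 1; }

    close(fd);
    return 0;
}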
I want to simulate latency in packets I send using DPDK.
Initially I added usleep(10), and it worked, but later I realized that using sleep might hinder the performance of my traffic generator.
usleep(10);
rte_eth_tx_burst(m_repid, queue_id, tx_pkts, nb_pkts);
So, I tried using a polling mechanism. Something like this:
inline void add_latency(float lat) {
    //usleep(lat);
    float start = now_sec();
    float elapsed;
    do {
        elapsed = now_sec() - start;
    } while (elapsed < (lat / 1000));
}
But the packets are not getting sent.
tx_pkts: 0
What am I doing wrong?
EDIT:
DPDK version: DPDK 22.03
Firmware:
# dmidecode -s bios-version
2.0.19
NIC:
0000:01:00.0 'I350 Gigabit Network Connection' if=em1 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic *Active*
0000:01:00.3 'I350 Gigabit Network Connection' if=em4 drv=igb unused=igb_uio,vfio-pci,uio_pci_generic
As per DPDK 22.03, neither the Intel i350 NIC nor the Mellanox MT27800 supports HW offload for delayed packet transmission. Delayed packet transmission is a hardware feature which allows transmission of a packet at a defined future timestamp. For example, if one needs to send a packet 10 microseconds after the DMA to the NIC buffer, the TX descriptor can be updated with the 10 us TX timestamp.
A similar (approximate) behaviour can be achieved by enabling TX timestamping in HW, that is, reporting back the timestamp in the transmit descriptor. The timestamp captured will be the time at which the first byte of the packet is sent out on the wire. With an approximation of the time required to DMA the packet from DPDK main memory to NIC SRAM, one can achieve the delayed packet transmit.
But there are certain caveats for the same, such as:
The DPDK NIC PMD should support low-latency mode (allow TX of a 1-packet burst), for example via the Intel E810 NIC PMD args.
Allow disabling of the HW switch engine and lookup, for example vswitch_disable or eswitch_disable in the case of Mellanox CX-5 and CX-6 NICs.
Support for HW TX timestamps to allow software control of TX intervals.
Note:
The Intel i210 Linux driver supports delayed transmission with the help of a TX shaper.
With the Mellanox ConnectX-7 NIC, the PMD argument tx_pp provides the capability to schedule traffic directly on the timestamp specified in the descriptor.
Since the question does not clarify the packet size or inter-frame gap delay for "simulate latency in packets I send using DPDK", the assumption made is 64B on the wire with the fixed default IFG.
Suggestion:
Option-1: if it is 64B, the best approach is to create an array of pause packets for the TX burst. Select the time intervals based on HW or SW timestamps and swap the array index with the actual packet intended to be sent.
Option-2: allow SyncE packets to synchronize the timestamps between server and client. Using the out-of-band information, do a dynamic sleep (with an approximate cost for DMA and wire transfer) to skew towards the desired result.
Please note that if the intention is to check the latency of a DUT, the whole approach as specified in the code snippet is not correct. Refer to the DPDK SyncE example or DPDK pktgen latency for more clarity.
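As a rough software-only illustration of the dynamic-sleep idea (not the HW-timestamp approach above), one can busy-wait on DPDK's TSC helpers instead of usleep(); m_repid, queue_id, tx_pkts and nb_pkts are assumed to be set up as in the question:

/* Illustrative sketch only: insert a software inter-burst gap with a TSC
 * busy-wait (rte_cycles.h) instead of usleep(). Port, queue and mbuf setup
 * are omitted; this does not give the HW-accurate scheduling discussed above. */
#include <rte_cycles.h>
#include <rte_ethdev.h>
#include <rte_pause.h>

static inline void tx_with_gap(uint16_t m_repid, uint16_t queue_id,
                               struct rte_mbuf **tx_pkts, uint16_t nb_pkts,
                               double gap_us)
{
    /* Busy-wait on the TSC so the lcore is never descheduled mid-gap. */
    const uint64_t start = rte_get_tsc_cycles();
    const uint64_t wait  = (uint64_t)(gap_us * rte_get_tsc_hz() / 1e6);

    while (rte_get_tsc_cycles() - start < wait)
        rte_pause();

    uint16_t sent = rte_eth_tx_burst(m_repid, queue_id, tx_pkts, nb_pkts);
    /* tx_burst may accept fewer than nb_pkts on a full ring; the caller
     * must retry or free the unsent mbufs. */
    (void)sent;
}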
I'm writing a simple program to transmit data from the MCU to the PC.
I'm using FTDI cable to achieve that.
The data that I'm sending is the digit characters 0 to 9 (0x30 to 0x39 as ASCII codes).
Both the MCU and the PC terminal are configured to 9600 bps, 8 data bits, no parity, no flow control, one stop bit.
When the data is transferred from the MCU to the PC, the symbols are wrong.
When the TX and RX lines of the MCU are connected to each other, I can see that all symbols that were sent were received by the MCU.
When TX and RX lines of the FTDI cable (connected to the PC) are connected to each other - all symbols that were sent from the PC terminal were received by the PC.
I cannot understand what can be wrong in sending data from the MCU to the PC.
Please, help!
The symptoms you describe suggest a timing mismatch between the PC and the MCU. UART serial comms can tolerate a baud rate mismatch of <5% at either end. In practice, because the PC is almost certainly accurate, you might get away with up to 10% error in the embedded target, but that is extreme. Either the baud rate divisor for your part is incorrectly programmed, or your system clock is inaccurate or simply not the frequency you believe it to be. RC oscillators used on some MCUs to reduce cost can be off-nominal by as much as +/-10%.
You should verify the clock and the baud rate directly with an oscilloscope, or laboriously verify every clock setting from the PLL to the UART baud-rate generator.
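As a rough worked example (assuming a hypothetical 8 MHz peripheral clock feeding a 16x-oversampled UART), the error introduced by an integer baud divisor can be estimated like this:

/* Hedged sketch: estimate the baud-rate error of a 16x-oversampled UART.
 * The 8 MHz clock is only an illustrative assumption. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double clock_hz = 8000000.0;   /* assumed peripheral clock */
    const double target   = 9600.0;      /* desired baud rate */

    unsigned divisor = (unsigned)round(clock_hz / (16.0 * target));
    double actual    = clock_hz / (16.0 * divisor);
    double error_pct = 100.0 * (actual - target) / target;

    /* Roughly 0.16% here; anything approaching 5% risks framing errors. */
    printf("divisor=%u actual=%.1f baud, error=%.2f%%\n",
           divisor, actual, error_pct);
    return 0;
}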
The solution is simpler than I thought.
For my previous applications I used the ATC-810 cable (USB-to-UART, FT232BL chip).
In the past it worked, but now for some reason it doesn't work. New drivers from FTDI, maybe...
When I took the TTL-232R-3V3 cable, all the data that I send from the MCU is received on the PC!
Thanks a lot for trying to help!!
I am using a 2.6.32 OMAP-based Linux kernel. I have observed that at a high data rate (serial port set to 460800 baud) serial port HW FIFO overflows happen.
The serial port is configured to generate an interrupt every 8 bytes in both the rx and tx directions (i.e. when the serial port HW FIFO is 8 bytes full, a serial interrupt is generated which reads the data from the serial port at once).
I am transmitting 114-byte packets continuously (the serial driver has no clue about the packet mode; it receives data in raw mode). Based on calculations:
460800 bits/sec => 460800/10 = 46080 bytes/sec (with 1 start bit and 1 stop bit), so in 1 second I can transmit, in the worst case, 46080/114 => 404.21 packets without any issue.
But I expect the serial port to handle at least 1000 packets per second; that is why I have configured the serial driver to generate an interrupt every 8 bytes.
I tried the same using Windows XP and I am able to read up to 600 packets/second.
Do you think this is feasible on Linux under the above circumstances, or am I missing something? Let me know your comments.
Could someone also send the important configuration settings that need to be set in the .config file? I am unable to attach the .config file, otherwise I would share it.
There are two kinds of overflow that can occur for a serial port. The first is the one you are talking about: the driver not responding to the interrupt fast enough to empty the FIFO. FIFOs are typically around 16 bytes deep, so getting a FIFO overflow requires the interrupt handler to be unresponsive for 1 / (46080 / 16) = 347 microseconds. That's a really, really long time. You have to have a pretty drastically screwed-up driver with a higher-priority interrupt to trip that.
The second kind is the one you didn't consider and offers more hope for a logical explanation. The driver copies the bytes from the FIFO into a receive buffer, where they will sit until the user-mode program calls read() to read them. An overflow of that buffer will happen when you don't configure any kind of handshaking with the device and the user-mode program is not calling read() often enough. It looks exactly like a FIFO overflow: bytes just disappear. There are status bits to warn about these problems, but not checking them is a frequent oversight. You didn't mention doing that either.
So start by improving the diagnostics: check the overflow status bits to know what's going on. Then consider enabling handshaking if you find out that it is actually a buffer overflow issue. Increasing the buffer size is possible but not a solution if this is a fire-hose problem. Getting the user-mode program to call read() more often is a fix, but not an easy one. Just lowering the baud rate, yeah, that always works.
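On Linux, one hedged way to check those status bits is the TIOCGICOUNT ioctl, which most (but not all) serial drivers implement; fd is an already-opened port descriptor:

/* Sketch: read the driver's error counters to tell a HW FIFO overrun
 * apart from a tty receive-buffer overflow. */
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/serial.h>

void print_overflow_counters(int fd)
{
    struct serial_icounter_struct ic;

    if (ioctl(fd, TIOCGICOUNT, &ic) < 0) {
        perror("TIOCGICOUNT");
        return;
    }
    /* overrun     = UART FIFO overruns (interrupt handler too slow)
     * buf_overrun = tty buffer overruns (read() not called often enough) */
    printf("fifo overruns: %d, buffer overruns: %d\n",
           ic.overrun, ic.buf_overrun);
}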
How can I tell if the unit connected to the serial port is powered on?
Does serial communication have any means of acknowledging that a command has been received that I can check for?
Or is it entirely dependent on whatever is plugged into the serial port?
Most RS232 devices (such as modems) will raise the DSR (data set ready) line when they are powered on and ready to work. You can query the status of this line in software.
In a similar fashion, computers generally raise DTR (data terminal ready) to tell the modem (or whatever device) that they are ready. You can control this line from software.
Acknowledgement is not specified by RS232 and varies from one device to another, but many devices do indeed use hardware handshaking to indicate willingness to receive data. Specifically, they will raise CTS (clear to send) when they are ready. If the device is powered on, but can temporarily not receive data, it will leave DSR high, but will clear CTS.
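A small sketch of doing that on Linux with the TIOCM* ioctls (fd is an already-opened serial port descriptor):

/* Hedged sketch: raise DTR and poll DSR/CTS with the modem-control ioctls. */
#include <stdio.h>
#include <sys/ioctl.h>

int peer_ready(int fd)
{
    int dtr = TIOCM_DTR;
    int status = 0;

    ioctl(fd, TIOCMBIS, &dtr);            /* assert DTR: "we are ready" */

    if (ioctl(fd, TIOCMGET, &status) < 0) {
        perror("TIOCMGET");
        return -1;
    }
    /* DSR high: device powered/ready; CTS high: device can accept data now */
    printf("DSR=%d CTS=%d\n", !!(status & TIOCM_DSR), !!(status & TIOCM_CTS));
    return (status & TIOCM_DSR) != 0;
}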
I currently have an embedded device connected to a PC through a serial port. I am having trouble with receiving data on the PC. When I use my PCI serial port card I am able to receive data right away (no delays). When I use my USB-to-Serial plug or the motherboard's built-in serial port I have to delay reading data (40 ms for 32-byte packets).
The only difference I can find between the hardware is the UART. The PCI card uses a 16650 and the plug/motherboard uses a standard 16550A. The PCI card is set to interrupt at 28 bytes and the plug is set to interrupt at 14 bytes.
I am connected at 56700 Baud (if this helps).
The delay becomes the majority of the duty cycle and really increases transfer time (10-minute transfer vs. 1-hour transfer).
Does anyone have an explanation for why I have to use a delay with the plug/motherboard? Can anyone suggest a possible solution to minimizing or removing this delay?
Linux has an ASYNC_LOW_LATENCY flag for the serial driver that may help. Whatever driver you're using may have something similar.
However, latency shouldn't make a difference on a bulk transfer. It should add 40 ms at the very start of the transfer and that's it, which is why drivers don't worry about it in the first place. I would recommend refactoring your transfer protocol to use a sliding window protocol, with a window size of around 100 packets, if you are doing 32-byte packets at that baud rate and latency. In other words, you only want to stop transmitting if you haven't received an ACK for the packet you sent 100 packets ago.
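A very rough sketch of that sliding-window send loop; send_packet() and poll_acks() are hypothetical application functions standing in for whatever framing and ACK parsing the protocol uses:

/* Sketch: keep up to WINDOW_SIZE packets in flight so the per-packet
 * latency is paid once, not on every packet. */
#define WINDOW_SIZE 100

void send_packet(unsigned seq);   /* hypothetical: frame and write packet #seq */
unsigned poll_acks(void);         /* hypothetical: count of contiguously ACKed packets */

void transfer(unsigned total_packets)
{
    unsigned next_to_send = 0;    /* next sequence number to transmit */
    unsigned acked = 0;           /* packets acknowledged so far */

    while (acked < total_packets) {
        /* Transmit until the window is full or everything has been sent. */
        while (next_to_send < total_packets &&
               next_to_send - acked < WINDOW_SIZE) {
            send_packet(next_to_send++);
        }
        acked = poll_acks();      /* update from whatever ACKs have arrived */
    }
}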
You'll probably find that different USB-Serial converters produce different results. We've found that the FTDI ones work well for talking with embedded devices. Some converters seem to buffer the data for a long time and/or fragment it.
I've never seen a problem with a motherboard connection - not sure what is going on there! Can you change the interrupt point for the motherboard serial port?
I have a serial-to-USB converter. When I hook it up to my breakout box and create a loopback, I am able to send/receive at close to 1 Mbps without problems. The serial port sends binary data that may be translated into ASCII data.
Using .NET, I set my software to fire an event on every byte (ReceivedBytesThreshold = 1), though that doesn't mean it will.