Serial driver HW FIFO overrun at 460800 baud

I am using a 2.6.32 OMAP-based Linux kernel. I have observed that at a high data rate (serial port set to 460800 baud) the serial port's hardware FIFO overflows.
The serial port is configured to generate an interrupt every 8 bytes in both the RX and TX directions (i.e. when the RX FIFO holds 8 bytes, an interrupt is generated and the driver reads the data from the port at once).
I am transmitting 114-byte packets continuously (the serial driver knows nothing about packet boundaries; it receives the data in raw mode). Based on my calculations,
460800 bits/sec => 460800/10 = 46080 bytes/sec (1 start bit + 8 data bits + 1 stop bit = 10 bits per byte), so in one second I can transmit, worst case, 46080/114 = 404.21 packets without any issue.
But I expect the serial port to handle at least 1000 packets per second, which is why I configured the serial driver to generate an interrupt every 8 bytes.
I tried the same thing on Windows XP and was able to read up to 600 packets per second.
Do you think this is feasible on Linux under the above circumstances, or am I missing something? Let me know your comments.
Could someone also send the important configuration settings that need to be set in the kernel .config file? I am unable to attach the .config file here; otherwise I could share it.

There are two kinds of overflow that can occur for a serial port. The first is the one you are talking about: the driver not responding to the interrupt fast enough to empty the FIFO. FIFOs are typically around 16 bytes deep, so a FIFO overflow requires the interrupt handler to be unresponsive for 1 / (46080 / 16) = 347 microseconds. That's a really, really long time. You would need a pretty drastically screwed-up driver with a higher-priority interrupt to trip that.
The second kind is the one you didn't consider, and it offers more hope for a logical explanation. The driver copies the bytes from the FIFO into a receive buffer, where they sit until the user-mode program calls read() to fetch them. That buffer overflows when you don't configure any kind of handshaking with the device and the user-mode program doesn't call read() often enough. It looks exactly like a FIFO overflow: bytes just disappear. There are status bits to warn about these problems, but not checking them is a frequent oversight. You didn't mention doing that either.
So start by improving the diagnostics: check the overflow status bits to find out what is actually going on. Then consider enabling handshaking if you find that it really is a buffer overflow issue. Increasing the buffer size is possible but not a solution if this is a fire-hose problem. Getting the user-mode program to call read() more often is a fix, but not an easy one. Just lowering the baud rate, yeah, that always works.
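
On Linux, one way to do that diagnosis is the TIOCGICOUNT ioctl, which reports the driver's error counters and distinguishes the two overflow types. A minimal sketch, assuming your driver implements the ioctl (not all do) and using /dev/ttyO0 only as an example OMAP device path:

    #include <stdio.h>
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/serial.h>

    int main(void)
    {
        int fd = open("/dev/ttyO0", O_RDONLY | O_NOCTTY); /* example device */
        if (fd < 0) { perror("open"); return 1; }

        struct serial_icounter_struct ic;
        if (ioctl(fd, TIOCGICOUNT, &ic) < 0) { perror("TIOCGICOUNT"); return 1; }

        /* 'overrun' counts hardware FIFO overruns (interrupt served too late);
         * 'buf_overrun' counts tty buffer overflows (read() called too rarely). */
        printf("hw fifo overruns:   %d\n", ic.overrun);
        printf("sw buffer overruns: %d\n", ic.buf_overrun);
        return 0;
    }

Run it while the transfer is in progress: a climbing overrun count points at interrupt latency, while a climbing buf_overrun count points at the user-mode read() side.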

Related

Is DMA synchronous in network card drivers?

My understanding is that when a NIC receives new packets, the top-half handler uses DMA to copy the data from the RX buffer into main memory. I think this handler should not exit or release the INT pin before the transfer is complete; otherwise new packets would corrupt the old ones.
However, DMA is generally considered asynchronous and itself relies on the interrupt mechanism to notify the CPU that the transfer is done. Hence my question: is DMA actually synchronous here, or can an interrupt in fact happen within another interrupt handler?
In general, this synchronisation happens via ring descriptors shared between the NIC (device driver) and the host CPU; the linked reference covers the packet-path details. I have explained the ring descriptors below.
Edit:
Let me explain with Intel's Ethernet controller. If you look at section 3.2.3 of the datasheet, where the RX descriptor format is given, it has a status field that solves the packet-ownership problem. There are two major points that avoid contention and packet corruption over who owns a packet (the NIC driver or the CPU).
DMA (from the I/O device to host memory): the RX/TX ring consists of 'hardware descriptors' and 'buffers' (carved from host memory). When we say the controller transfers data by DMA, the data moves from the hardware FIFOs into this host memory.
Suppose my ring buffers (512 bytes each) are not big enough to hold a complete incoming packet (1500 bytes, or a jumbo frame). In that case the packet may span multiple ring buffers, and the EOP (End Of Packet) status bit indicates that the complete packet has now been received (assuming all the sanity checks/checksums have already passed).
The second point is who owns the packet now (the NIC, or the CPU for further consumption). Until the DD (Descriptor Done) status flag is set, the descriptor still belongs to the hardware; once it is set, the CPU can grab it for picking and poking. A sketch of the descriptor layout follows below.
This is specific to the RX path; the TX path is slightly different.
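
For concreteness, here is a rough sketch of that legacy RX descriptor as a C struct, based on the Intel 8254x-class datasheet the answer refers to; the field names are paraphrased, and the two status bits shown are the ones discussed above:

    #include <stdint.h>

    /* One entry in the RX ring: a 16-byte legacy receive descriptor. */
    struct rx_desc {
        uint64_t buffer_addr; /* physical address of the host buffer */
        uint16_t length;      /* bytes DMA'd into that buffer */
        uint16_t csum;        /* packet checksum */
        uint8_t  status;      /* ownership/progress bits, see below */
        uint8_t  errors;
        uint16_t special;
    };

    #define RX_STATUS_DD  0x01 /* Descriptor Done: DMA finished, CPU may consume */
    #define RX_STATUS_EOP 0x02 /* End Of Packet: last buffer of this frame */

    /* The driver's RX loop polls DD instead of racing the NIC for ownership. */
    static inline int descriptor_ready(const volatile struct rx_desc *d)
    {
        return d->status & RX_STATUS_DD;
    }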
Consider it this way: there are multiple interrupts (I/O, keyboard, mouse, etc.) happening all the time in a system, but the gaps between two interrupts are so large that the CPU can do a lot of other useful work in between, and DMA offloads the CPU further by handling the data transfer. When an interrupt is raised and its service routine is called, subsequent interrupts can be masked because you are already inside that routine; but trust me, these routines are tiny and hardly consume any time before the next packet arrives. Problems only appear if packets arrive faster than you can process them.
Another example: in routers and switches, 99% of the work is routing and switching, so the service routines and interrupt priorities are set up completely differently; moreover, these boxes are bombarded with packets all the time, so the service routine may effectively never go idle because another packet is always waiting. At least, that has been my experience working on such networking gear.

How can my 57.6 kbaud serial connection transmit over 100 kbytes per second?

I have an Arduino which dumps serial data continuously, and a Qt application that reads the data on my computer.
I use QSerialPort, and I count a much larger number of received bytes than what QSerialPort::baudRate() indicates.
Interestingly, if I use a serial-to-USB converter on the external device, the baud rate set on the device corresponds to the rate at which I can read the data in the Qt application. In this mode, the Arduino just sends data over its USART to a serial-to-USB converter, which is seen as a virtual COM port on the PC.
However, if I use the "native USB" port on the Arduino Due, it is still detected as a virtual COM port and I can read from it with QSerialPort without any special settings, just by calling open() and read(). If I check the baud rate by calling baudRate(), I get 57600. Yet by timing with QTime and counting the received bytes, I measure a much higher data rate than 57600 bits per second.
With a read buffer of 20 bytes I manage around 100 kbyte/sec; with a buffer of 180 bytes I can receive at almost 1.4 Mbyte/sec.
I checked that the data is received correctly (it's not just junk).
Is QSerialPort::baudRate() just broken, or did I miss something obvious?
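
As a cross-check independent of Qt, you can time raw reads from the virtual COM port yourself. A minimal sketch, assuming the Due enumerates as /dev/ttyACM0 (adjust the path) and that the port is already configured in raw mode (e.g. via stty raw):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <time.h>

    int main(void)
    {
        int fd = open("/dev/ttyACM0", O_RDONLY | O_NOCTTY); /* example device */
        if (fd < 0) { perror("open"); return 1; }

        char buf[4096];
        long total = 0;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        while (total < 1000000) {            /* sample roughly 1 MB */
            ssize_t n = read(fd, buf, sizeof buf);
            if (n <= 0) break;
            total += n;
        }

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.0f bytes/sec\n", total / secs);
        return 0;
    }

If this also shows far more than 5760 bytes/sec (57600/10), the data really is arriving faster than the reported baud rate and is not being miscounted by Qt.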

9-bit UART emulation with /dev/tty*

I have an uncommon protocol which requires 9600 baud, 9 data bits, and one stop bit. I can't find any driver that implements this sending/receiving.
Can I send something to /dev/tty* to emulate these transfers? What should I send? How can I emulate the 9600 baud rate?
You can use sticky parity, also called MARK and SPACE parity; termios.h supports this. However, you need to change the parity setting accordingly before sending address bytes or data bytes, and depending on the hardware this may introduce undesired delays between the two types of bytes. I have seen delays from 0.4 ms to 10 ms with FT232RL and FT232BL USB-to-serial converters. I'm not sure, but I suspect it is also affected by the motherboard and the USB port you use (USB2 or USB3). You also need to be sure that the transmit buffer is empty before attempting a parity-mode change, because the change also affects the parity of bytes already placed in the transmit buffer.
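
A minimal sketch of that technique on Linux, using the Linux-specific CMSPAR flag for sticky parity; the guard is there because some libc headers only expose CMSPAR under certain feature macros:

    #define _DEFAULT_SOURCE
    #include <termios.h>
    #include <unistd.h>

    #ifndef CMSPAR
    #define CMSPAR 010000000000 /* Linux sticky ("stick") parity bit */
    #endif

    /* Select the 9th bit for subsequent writes: mark (1) for address bytes,
     * space (0) for data bytes. Assumes the port is already set to 8 data bits. */
    static int set_ninth_bit(int fd, int mark)
    {
        struct termios tio;
        if (tcgetattr(fd, &tio) < 0)
            return -1;

        tio.c_cflag |= PARENB | CMSPAR; /* enable sticky parity */
        if (mark)
            tio.c_cflag |= PARODD;      /* MARK parity: 9th bit = 1 */
        else
            tio.c_cflag &= ~PARODD;     /* SPACE parity: 9th bit = 0 */

        /* TCSADRAIN waits for the transmit buffer to empty before applying
         * the change, addressing the caveat mentioned above. */
        return tcsetattr(fd, TCSADRAIN, &tio);
    }

On the receive side you would enable INPCK and PARMRK and watch for the \377 \0 marker sequence to recover the 9th bit; that part is omitted here.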

Periodic serial port latency

I am reading data from a sensor connected to a standard RS-232 serial port on a conventional Linux kernel (Ubuntu 12.04).
The sensor outputs at 1000 Hz and connects at a baud rate of 115200, 8N1. Each sensor reading is 4 bytes, for a total throughput of 4 kbytes/sec. The transmission pattern, confirmed by oscilloscope, is a 4-byte burst followed by a near-millisecond pause. The sensor is very, very consistent.
99% of the packets are received with very low latency. However, for about 0.5% of the bytes the serial port read blocks for 2-8 ms, after which all the "missed" bytes are read very quickly. This suggests the data is, on rare occasions, being buffered.
I have experimented with scheduler priority (nice) and serial port settings (ASYNC_LOW_LATENCY, VMIN, VTIME, raw, non-blocking settings, etc.). None of these seem to have any discernible effect.
Is there anything else I can do to get more consistent serial port reads short of recompiling the kernel or switching to a more real-time OS?
An answer can involve software or hardware. See for example High delay in RS232 communication on a PXA270 or https://electronics.stackexchange.com/questions/96893/what-can-i-do-to-decrease-the-latency-from-these-serial-ports-which-are-attached. You can also try the low_latency parameter, as suggested in Minimize Linux Serial Port Latency.
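
The low_latency request can be made from user space through the serial driver flags; a minimal sketch (support varies by driver, and recent kernels ignore the flag for many of them):

    #include <stdio.h>
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <linux/serial.h>

    int main(void)
    {
        int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY); /* example device */
        if (fd < 0) { perror("open"); return 1; }

        struct serial_struct ss;
        if (ioctl(fd, TIOCGSERIAL, &ss) < 0) { perror("TIOCGSERIAL"); return 1; }

        ss.flags |= ASYNC_LOW_LATENCY; /* ask the driver to push RX data up immediately */

        if (ioctl(fd, TIOCSSERIAL, &ss) < 0) { perror("TIOCSSERIAL"); return 1; }
        return 0;
    }

The same effect is available from the shell with setserial /dev/ttyS0 low_latency.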

Serial Transfer UART Delay

I currently have an embedded device connected to a PC through a serial port, and I am having trouble receiving data on the PC. When I use my PCI serial port card I receive data right away (no delays). When I use my USB-to-serial plug or the motherboard's built-in serial port, I have to delay reading the data (40 ms for 32-byte packets).
The only difference I can find between the hardware is the UART. The PCI card uses a 16650, while the plug and motherboard use a standard 16550A. The PCI card is set to interrupt at 28 bytes and the plug at 14 bytes.
I am connected at 56700 baud (if this helps).
The delay becomes the majority of the duty cycle and really increases the transfer time (a 10-minute transfer becomes a 1-hour transfer).
Does anyone have an explanation for why I have to use a delay with the plug/motherboard? Can anyone suggest a way to minimize or remove this delay?
Linux has an ASYNC_LOW_LATENCY flag for the serial driver that may help. Whatever driver you're using may have something similar.
However, latency shouldn't make a difference on a bulk transfer. It should add 40 ms at the very start of the transfer and that's it, which is why drivers don't worry about it in the first place. Given 32-byte packets at that baud rate and latency, I would recommend refactoring your transfer protocol into a sliding-window protocol with a window size of around 100 packets. In other words, you only stop transmitting if you haven't received an ACK for the packet you sent 100 packets ago; a sender-side sketch follows below.
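
A minimal sketch of that idea, assuming hypothetical send_packet() and poll_ack() helpers and cumulative ACKs; the framing, retransmission, and timeout logic a real protocol needs are omitted:

    #include <stdint.h>
    #include <stdbool.h>

    #define WINDOW_SIZE 100 /* packets allowed in flight before stalling */

    /* Hypothetical helpers: transmit one 32-byte packet with this sequence
     * number, and fetch the next cumulative ACK if one has arrived. */
    bool send_packet(uint32_t seq);
    bool poll_ack(uint32_t *acked_seq);

    void transfer(uint32_t total_packets)
    {
        uint32_t next_seq = 0;       /* next packet to transmit */
        uint32_t oldest_unacked = 0; /* lowest sequence not yet ACKed */

        while (oldest_unacked < total_packets) {
            /* Keep the link busy: transmit as long as the window is open. */
            while (next_seq < total_packets &&
                   next_seq - oldest_unacked < WINDOW_SIZE) {
                send_packet(next_seq++);
            }

            /* Slide the window as ACKs arrive; a 40 ms latency now only
             * stalls the sender when 100 packets are outstanding. */
            uint32_t acked;
            while (poll_ack(&acked)) {
                if (acked >= oldest_unacked)
                    oldest_unacked = acked + 1;
            }
        }
    }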
You'll probably find that different USB-Serial converters produce different results. We've found that the FTDI ones work well for talking with embedded devices. Some converters seem to buffer the data for a long time and/or fragment it.
I've never seen a problem with a motherboard connection - not sure what is going on there! Can you change the interrupt point for the motherboard serial port?
I have a serial-to-USB converter. When I hook it up to my breakout box and create a loopback, I am able to send/receive at close to 1 Mbps without problems. The serial port sends binary data that may be translated into ASCII data.
Using .NET, I set my software to fire an event on every byte (ReceivedBytesThreshold = 1), though that doesn't mean it will.
