How to reduce MATLAB serial port transmit delay

I'm writing an application to read blocks of data from an external device over an RS485 half-duplex link running at 921600 baud. The external device sends a block of data, and the MATLAB app then needs to send a short acknowledge response before the device will send the next block. I find that MATLAB takes a varying amount of time, often in excess of 35 ms, after reading in the data block before it transmits the acknowledgement back. Is there some way to speed up this apparent serial port transmit delay? It is slowing the interaction down a lot. I am using MATLAB R2018b. Thanks.
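For reference, the read-block-then-acknowledge loop described in the question looks roughly like the sketch below. It is written in Python with pyserial rather than MATLAB, and the port name, block size, and ACK byte are assumptions; the point is only the pattern of reading an exact byte count and writing the acknowledgement immediately afterwards, with no polling delay in between.

```python
# Illustrative sketch (Python/pyserial, not MATLAB): read one fixed-size block,
# then send the acknowledgement immediately. PORT, BLOCK_SIZE and ACK are
# assumed values, not taken from the question.
import serial

PORT = "/dev/ttyUSB0"   # assumed port name
BLOCK_SIZE = 512        # assumed size of one data block from the device
ACK = b"\x06"           # assumed one-byte acknowledge

# Reading an exact byte count returns as soon as that many bytes have arrived,
# instead of waiting for a polling interval or inter-character timeout.
ser = serial.Serial(PORT, baudrate=921600, timeout=1.0)

while True:
    block = ser.read(BLOCK_SIZE)
    if len(block) < BLOCK_SIZE:
        break               # timed out: the device stopped sending
    # ... process(block) ...
    ser.write(ACK)          # transmit the acknowledgement straight away
    ser.flush()             # wait until the ACK has actually left the OS buffer
```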

Related

How do programs apply backpressure over a network?

Consider the example of a download stream that can be throttled (e.g. a torrent client, Dropbox sync, etc.). How does a program apply backpressure to the network?
My thought is that, from a software perspective, you can choose to read from a socket at a certain speed. But how does the socket you're reading from know that you only want your device to receive data so quickly? Does the NIC itself apply backpressure over the network somehow? If so, by what mechanism?
Backpressure is built into the TCP protocol. If a slow consumer does not read bytes from the connection in a timely manner, the producer cannot put more bytes in flight than there is buffer memory on the sending and receiving sides.
In contrast, UDP datagrams are not flow-controlled and are simply dropped if there is no free memory on the receiver side to store them.
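A minimal way to see this in practice is a deliberately slow TCP consumer, sketched below in Python; the host and port are placeholders. Once unread data fills the receiver's kernel buffer and the sender's send buffer, the advertised TCP window drops to zero and the peer's send() blocks until the reader catches up.

```python
# Minimal sketch: a deliberately slow TCP consumer. Once the kernel receive
# buffer on this side and the send buffer on the peer fill up, TCP's window
# advertisement drops to zero and the peer's send() blocks -- that is the
# backpressure. Host and port are placeholders.
import socket
import time

HOST, PORT = "127.0.0.1", 9000   # assumed endpoint

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, PORT))
srv.listen(1)
conn, _ = srv.accept()

while True:
    chunk = conn.recv(4096)      # read in small pieces...
    if not chunk:
        break
    time.sleep(0.1)              # ...and slowly: unread data accumulates in the
                                 # kernel buffer until the sender is throttled
conn.close()
srv.close()
```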

What happens when ethernet reception buffer is full

I have a rather newbie question: assume that I have two devices communicating via Ethernet (TCP/IP) at 100 Mbps. On one side, I will be feeding the device with data to transmit. On the other side, I will be consuming the received data. I have the ability to choose an adequate buffer size for both devices.
Now my question is: if the data consumption rate at the second device is slower than the data feeding rate at the first one, what will happen?
I found some material talking about an overrun counter.
Is there anything in Ethernet communication indicating that a device is momentarily busy and can't receive new packets, so that transmission towards the receiving device can be paused?
Can someone point me to a document or documents that explain this issue in detail, because I didn't find any?
Thank you in advance.
The Ethernet protocol runs on the MAC controller chip. The MAC has two separate rings, an RX ring (for ingress packets) and a TX ring (for egress packets), which means it is full duplex in nature. The RX/TX rings also have on-chip FIFOs, but the rings hold PDUs in host memory buffers. I have covered a little bit of this functionality in one of the related posts.
Now, congestion can happen, but again RX and TX are two different paths, and congestion will be due to the following conditions:
Queuing/de-queuing of rx-buffers/tx-buffers is not fast compared to the line rate. This happens when the CPU is busy and does not honor the interrupts fast enough.
Host memory is slower (e.g. DRAM rather than SRAM), or there is not enough memory (due to a memory leak).
Intermediate processing of the buffers takes too long.
Now, about the peer device: backpressure can be handled within a standalone system, and when that happens we usually tail-drop the packets. This is agnostic to the peer device; if the peer device is slow, that is that device's problem.
The definition of overrun is: the number of times the receiver hardware was unable to hand received data to a hardware buffer because the input rate exceeded the receiver's ability to handle the data.
I recommend picking up any MAC controller's datasheet (e.g. an Intel Ethernet controller) and you will find all your questions covered, or looking at the device driver for any MAC controller.
TCP/IP is an upper-layer stack that sits inside the kernel (it can live in user space as well), whereas the ARPA protocol (Ethernet) is implemented inside the MAC controller hardware. If you understand this, you will understand the difference between routers and switches (where there is no TCP/IP stack).
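On Linux, one quick way to watch the receive drop/overrun counters mentioned above is to read /proc/net/dev, as in the Python sketch below. The exact semantics of the errs/drop/fifo columns are driver-dependent (the "fifo" field is the one that usually tracks RX FIFO overruns), and "eth0" is an assumed interface name.

```python
# Rough sketch: read a NIC's receive counters on Linux by parsing /proc/net/dev.
# Column meanings (errs, drop, fifo) vary by driver; "eth0" is assumed.
def rx_counters(ifname="eth0"):
    with open("/proc/net/dev") as f:
        for line in f:
            if ":" not in line:
                continue                      # skip the two header lines
            name, rest = line.split(":", 1)
            if name.strip() == ifname:
                fields = rest.split()
                # first 8 fields are the RX side of the table
                keys = ["bytes", "packets", "errs", "drop",
                        "fifo", "frame", "compressed", "multicast"]
                return dict(zip(keys, map(int, fields[:8])))
    raise ValueError(f"interface {ifname!r} not found")

if __name__ == "__main__":
    print(rx_counters("eth0"))
```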

Possible reason why Xbee is not able to send data

I am using an Arduino Pro Mini 328P (3.3 V, 8 MHz) with an XBee Series 1. I have set the clock frequency to 1 MHz and the baud rate to 9600. I have also set the baud rate to 9600 on the XBee, and I have verified that at this baud rate the XBee sends data properly in a normal scenario.
Now what I have done in my project:
I have registered my XBee with the gateway, after which it goes to sleep (I have used pin hibernate mode). It is then woken by a digital pin of the Pro Mini. I have put in a delay of 19 ms, after which the XBee tries to send data. After sending the data it goes back to sleep.
The problem is that it behaves randomly when sending data to the gateway (which has the same XBee Series 1). Sometimes it sends the data perfectly, sometimes sending fails. I have also enabled RR to retry 6 times in case the XBee fails to send the data the first time.
I have no idea how to solve this problem because of the randomness in sending the data.
I have put two XBees near each other (I have two nodes with the same hardware and the same code). There is an interval of around 4 minutes between them. So when one XBee sends its data perfectly, the other one fails to send its data after that 4-minute gap (the time difference between the two RTCs on the different nodes). What can I conclude from this?
As a side note, the XBee tries to send data every hour. To time that hour I use an RTC, which seems to work fine (I am sure because I have taken logs, and the RTC never fails to generate an interrupt).
So I am wondering what the possible reason could be and how I can fix this problem (ideally without restarting anything; nothing would be better than that).
And I have no option to restart my controller.
How to debug this?
A few things. First, if possible, increase your baud rate so you spend less time sending data to/from the XBee. If you have a limited power budget, faster baud rates save time and energy. I don't know how the UARTs work on the Arduino, so I can't say whether 115,200 bps is possible with a 1 MHz CPU clock.
Second, make sure you wait for the XBee to assert CTS back to the Arduino after you wake it up. Never send to the XBee unless it's "clear to send".
Third, if you use API mode, you can watch for a "Transmit Status" frame from the local XBee back to the Arduino which will let you know when the module has successfully sent the frame, and it's safe for you to put it back to sleep.
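As a rough illustration of that flow (wake, wait for CTS, send, confirm), the sketch below uses pyserial on a host machine rather than Arduino code. The port name, payload, and frame lengths are placeholders; a real XBee API-mode implementation would build and parse proper API frames, for example with an XBee library.

```python
# Sketch of the wake -> wait-for-CTS -> send -> confirm flow using pyserial on a
# host machine (not Arduino code). Port, payload, and frame lengths are
# placeholders, not a working XBee API implementation.
import time
import serial

ser = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1.0)  # assumed port

def wait_for_cts(port, timeout_s=0.5):
    """Poll the CTS modem line (pyserial's .cts) until the XBee asserts it."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if port.cts:
            return True
        time.sleep(0.001)
    return False

# wake_xbee()                  # platform-specific: toggle the sleep pin to wake it
if wait_for_cts(ser):
    ser.write(b"...")          # placeholder for the API TX request frame
    status = ser.read(16)      # placeholder: read back the Transmit Status frame
    # parse the status frame to confirm delivery before sleeping the XBee again
else:
    print("XBee never asserted CTS; not sending")
```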

TCP or UDP for image transfer

I'm using the lwIP stack on my embedded platform. I have connected the board to my PC via Ethernet. My application running on the board dumps the image data out over Ethernet. The PC application waits for a header; after the header it decodes the data and displays the image.
This is for debug purposes only. My images are 4 MB each and I receive 20 frames per second, so that is 80 MB of data per second.
Is it advisable to use TCP or UDP?
I tried using TCP, but my send buffers become full and it waits around 200 ms to receive an acknowledgement. In the meantime I lose 5-6 images coming from the sensor. Can this be fixed if I use UDP?
Thanks,
Sathya
I suggest you apply some kind of compression to your images before sending them over the network.
That said, if you use UDP you may get a better transfer rate, but you do need receiving code that can handle lost packets (discard the image, request a resend, or pad the affected area).
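One common way to make lost packets detectable on the receiving side is to prefix each UDP datagram with a frame id, sequence number, and packet count, as sketched below. The destination address, chunk size, and header layout are assumptions, not the poster's protocol.

```python
# Sketch of the UDP approach with per-packet sequence numbers so the receiver
# can detect gaps and discard (or re-request) an incomplete frame. Addresses,
# chunk size, and header layout are assumed.
import socket
import struct

DEST = ("192.168.1.100", 5000)   # assumed PC address
CHUNK = 1400                     # stay under a typical Ethernet MTU

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_image(frame_id: int, image: bytes) -> None:
    total = (len(image) + CHUNK - 1) // CHUNK
    for seq in range(total):
        payload = image[seq * CHUNK:(seq + 1) * CHUNK]
        # 12-byte header: frame id, packet sequence, packet count
        header = struct.pack("!III", frame_id, seq, total)
        sock.sendto(header + payload, DEST)
```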

Serial Transfer UART Delay

I currently have an embedded device connected to a PC through a serial port. I am having trouble receiving data on the PC. When I use my PCI serial port card I am able to receive data right away (no delays). When I use my USB-to-serial adapter or the motherboard's built-in serial port I have to delay reading the data (40 ms for 32-byte packets).
The only difference I can find between the hardware is the UART. The PCI card uses a 16650 and the adapter/motherboard use a standard 16550A. The PCI card is set to interrupt at 28 bytes and the adapter at 14 bytes.
I am connected at 56700 baud (if this helps).
The delay becomes the majority of the duty cycle and really increases the transfer time (a 10-minute transfer vs. a 1-hour transfer).
Does anyone have an explanation for why I have to use a delay with the adapter/motherboard? Can anyone suggest a possible solution to minimize or remove this delay?
Linux has an ASYNC_LOW_LATENCY flag for the serial driver that may help. Whatever driver you're using may have something similar.
However, latency shouldn't make a difference on a bulk transfer. It should add 40 ms at the very start of the transfer and that's it, which is why drivers don't worry about it in the first place. I would recommend refactoring your transfer protocol to use a sliding window protocol, with a window size of around 100 packets, if you are doing 32-byte packets at that baud rate and latency. In other words, you only want to stop transmitting if you haven't received an ACK for the packet you sent 100 packets ago.
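A bare-bones sketch of that sliding-window idea is below; send_packet and poll_acks are hypothetical stand-ins for the real link code, and a real implementation would also need timeouts and retransmission.

```python
# Bare-bones sliding-window sketch: keep up to WINDOW packets outstanding and
# only stall when the window is full. send_packet and poll_acks are hypothetical
# stand-ins for the real link code.
from collections import deque

WINDOW = 100  # packets allowed in flight before the sender stops and waits

def transfer(packets, send_packet, poll_acks):
    """send_packet(seq, data) transmits; poll_acks() yields acknowledged seq numbers."""
    in_flight = deque()                        # sequence numbers sent but not yet ACKed
    for seq, data in enumerate(packets):
        while len(in_flight) >= WINDOW:        # window full: wait for ACKs to open it
            for acked in poll_acks():
                while in_flight and in_flight[0] <= acked:
                    in_flight.popleft()        # treat ACKs as cumulative
        send_packet(seq, data)
        in_flight.append(seq)
```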
You'll probably find that different USB-Serial converters produce different results. We've found that the FTDI ones work well for talking with embedded devices. Some converters seem to buffer the data for a long time and/or fragment it.
I've never seen a problem with a motherboard connection - not sure what is going on there! Can you change the interrupt point for the motherboard serial port?
I have a serial-to-USB converter. When I hook it up to my breakout box and create a loopback, I am able to send/receive at close to 1 Mbps without problems. The serial port sends binary data that may be translated into ASCII data.
Using .NET, I set my software to fire an event on every byte (ReceivedBytesThreshold = 1), though that doesn't mean it will.
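For what it's worth, a quick loopback throughput check of the kind described above might look like the Python/pyserial sketch below (TX jumpered to RX); the port name, baud rate, and payload size are assumptions.

```python
# Quick loopback throughput check (TX jumpered to RX). Port name, baud rate, and
# payload size are assumed; keep the payload small enough to fit the driver's
# receive buffer, since a bare loopback has no flow control.
import time
import serial

ser = serial.Serial("/dev/ttyUSB0", baudrate=921600, timeout=2.0)  # assumed port
payload = bytes(range(256)) * 4                # 1 KiB test pattern

start = time.monotonic()
ser.write(payload)
echoed = ser.read(len(payload))                # everything written loops straight back
elapsed = time.monotonic() - start

print(f"received {len(echoed)}/{len(payload)} bytes in {elapsed:.3f} s "
      f"({8 * len(echoed) / elapsed / 1000:.0f} kbit/s)")
assert echoed == payload, "loopback data mismatch"
```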
