I'm designing a device that will encrypt a long (assume infinite) stream of data sent from the PC and send it back. I'm planning to use a single serial port on the device running full duplex with hardware handshaking and "block" the data, sending a CRC value after every block. The device will only buffer a limited number of blocks- ideally just one buffer accumulating the block being received and one buffer holding the block presently being sent, switching them over at each block boundary and using hardware handshaking to keep things in sync.
The problem I'm considering is what happens when there's corruption and there's a mismatch between the CRC value calculated by the receiver- which could be either the PC or the device- and the one sent. If the receiver detects an error, it sets a break condition on its transmit line- because although TX and RX are doing different things that's all we CAN do- and then we drop into a recovery sequence.
Recovery is easy when the error condition is detected before the data disappears from the sender, but particularly on the PC receiving there may be a significant amount of buffer space, and by the time the PC catches up and detects the corruption the data may have disappeared from the device and we can't simply retransmit. It's difficult to "rewind" cipher generation, so resending the source data and trying to pick things up in the middle is difficult- and indeed the source data may not be available to resend depending on where it's ultimately coming from.
I considered having each side send its "last frame successfully received" counter along with its last frame sent CRC value, and having the device drop RTS if there's too much unconfirmed data waiting at the output, but that would then deadlock- the device never gets the confirmation that the PC's receive thread has caught up.
I've also considered having the PC send a block and then not send another block until the first block's been confirmed processed and received back, but that's essentially going to half duplex or block-synchronous operation and the system runs slower than it can do. A compromise is to have a number of buffers in the device, the PC to know how many buffers and to throttle its own output based on what it thinks the device is doing, but having that degree of 'intelligence' needed in the PC side seems inelegant and hacky.
Serial comms is quite ancient tech. Surely there's a good way of doing this?
Designing a reliable protocol is not that easy. Some notes with what you've talked about so far:
Only use RTS to do what it is designed to do: avoid receive buffer overflow. It is not suitable for anything more.
Strongly consider not having multiple unacknowledged frames in flight. That only matters when the connection suffers from high latency, which is not a problem with serial ports.
Achieve full duplex operation by layering, use the OSI model as a guide.
Be sure to treat the input and output of your protocol as plain byte streams. Framing is a detail of the protocol implementation; the actual frame size does not matter. If the app signals by using messages, then implement that on top of the protocol; otherwise it falls out automatically from proper layering.
Keep in mind that a frame can do more than just transmit data, it can also include an ACK for a received frame. In other words, you only need a separate ACK frame if there isn't anything to transmit back.
And avoid reinventing the wheel; this has been done before. I can recommend RATP, the subject of RFC 916. Widely ignored, by the way, so you are not likely to find code you can copy, but I've implemented it and had good success. It has only one flaw that I know of: it is not resilient to multiple connection attempts being present in the receive buffer, so intentionally purging the buffer when you open the port is important.
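To make the piggybacked-ACK idea concrete, here is a minimal sketch in Python; the field layout, sizes, and CRC choice are made up for illustration and are not the actual RATP frame format:

```python
import struct
import zlib

# Hypothetical frame layout (illustration only, not the RATP format):
# 1-byte seq | 1-byte ack | 2-byte length | payload | 4-byte CRC-32
def build_frame(seq, ack, payload):
    header = struct.pack(">BBH", seq, ack, len(payload))
    crc = zlib.crc32(header + payload)
    return header + payload + struct.pack(">I", crc)

def parse_frame(frame):
    seq, ack, length = struct.unpack(">BBH", frame[:4])
    payload = frame[4:4 + length]
    crc, = struct.unpack(">I", frame[4 + length:8 + length])
    if zlib.crc32(frame[:4 + length]) != crc:
        raise ValueError("CRC mismatch")
    return seq, ack, payload
```

With a layout like this, every data frame confirms the peer's traffic for free via the ack field, and a pure ACK is just a frame with an empty payload.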
Related
Hi, I've got a simple Arduino WiFi program that waits for UDP commands sent by a Python script. When the Python script sends a command packet, it expects an acknowledge packet (and in certain circumstances some returned data packets). So basically there are two kinds of commands: SET commands, which only expect an acknowledge packet, and GET commands, which expect an acknowledge packet plus one or more data packets. Right now, when a command packet is lost from the Python script's perspective, a timeout is raised and the script tries again after a small delay. For now this doesn't cause any problems with the GET commands because, at worst, the Arduino replies twice and I receive the data. But it can cause problems with the SET commands, i.e. the Arduino could get the command to toggle an LED twice (on, off, on). What could I do to remedy this problem? Should I add some framing to the UDP packet command structure, like packet counters? The receiving Arduino needs to know if packets were lost and tell the Python script to restart whatever action it was trying to do.
It is the nature of UDP that packets may get lost or duplicated. You have, essentially, three options.
If you need reliable data transmission, use a protocol that provides it. UDP is a bad choice when you need all the features TCP provides anyway, so switch to TCP.
Re-architect the protocol so that you don't need reliable data transmission. For example, your "toggle LED" command could include a sequence number and if the toggle sequence matches the previous one, it's ignored. So you send "toggle LED, sequence 2" over and over until you get an acknowledgement, then in your next request, it's "toggle LED, sequence 3". Be careful, not only may data packets get lost, duplicated or interleaved, but responses may too. It's easy to mess this up.
Implement reliable data transmission. For example, each request may contain a sequence number and you repeat it until you get an acknowledge with the same sequence. Only then move onto the next sequence. Do this with multi-datagram replies too. This is painful, but that's why you are offered TCP -- so you don't have to re-invent it every time you need reliable data transmission.
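A minimal sketch of the sequence-number idea from options 2 and 3, with made-up names; the sender resends the same (seq, command) pair until the matching ACK arrives, then increments seq:

```python
# Sketch of a receiver that makes retransmitted SET commands harmless by
# tracking the last sequence number it acted on. Names are illustrative.
class CommandReceiver:
    def __init__(self):
        self.last_seq = None
        self.led = False

    def handle(self, seq, command):
        ack = ("ACK", seq)
        if seq == self.last_seq:
            return ack            # duplicate packet: re-ack, do nothing
        self.last_seq = seq
        if command == "TOGGLE_LED":
            self.led = not self.led
        return ack
```

The key property is that a lost ACK only causes a retransmission, never a second toggle, because the duplicate is recognized and merely re-acknowledged.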
I have quite a newbie question: assume I have two devices communicating via Ethernet (TCP/IP) at 100 Mbps. On one side, I will be feeding the device with data to transmit. On the other side, I will be consuming the received data. I have the ability to choose an adequate buffer size for both devices.
And now my question is: if the data consumption rate at the second device is slower than the data feeding rate at the first one, what will happen?
I found some material talking about an overrun counter.
Is there anything in Ethernet communication indicating that a device is momentarily busy and can't receive new packets, so that the transmission to it can be paused?
Can someone provide me with a document or documents that explain this issue in detail? I didn't find any.
Thank you in advance.
The Ethernet protocol runs on the MAC controller chip. The MAC has two separate rings, an RX ring (for ingress packets) and a TX ring (for egress packets), which means it is full duplex in nature. The RX/TX rings also have on-chip FIFOs, but the rings themselves hold PDUs in host memory buffers. I have covered a little of this functionality in one of the related posts.
Now, congestion can happen, but again, RX and TX are two different paths; congestion will be due to the following conditions:
Queueing/dequeueing of rx-buffers/tx-buffers is not fast enough compared to the line rate. This happens when the CPU is busy and does not honor the interrupts fast enough.
Host memory is slow (e.g. DRAM rather than SRAM), or there is not enough memory (due to a memory leak).
Intermediate processing of the buffers takes too long.
Now, about the peer device: back-pressure can be handled within a standalone system, and when that happens we usually tail-drop the packets. This is agnostic to the peer device; if the peer device is slow, that is that device's problem.
The definition of overrun is: the number of times the receiver hardware was unable to hand received data to a hardware buffer because the input rate exceeded the receiver's ability to handle the data.
I recommend picking up any MAC controller's datasheet (e.g. for an Intel Ethernet controller); it will cover all your questions. Alternatively, look at the device driver for any MAC controller.
TCP/IP is an upper-layer stack that sits inside the kernel (it can live in user space as well), whereas the Ethernet protocol itself is implemented in the MAC controller hardware. If you understand this, you will understand the difference between routers and switches (the latter have no TCP/IP stack).
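Since you mention choosing buffer sizes: the one knob you can turn from the application is the socket receive buffer, which is what TCP's receive window advertises from, so a slow consumer throttles the sender automatically at the TCP layer. A quick illustration (the values you get back are OS-dependent, and the kernel may round or cap the request):

```python
import socket

# Inspect and request socket receive-buffer sizes; the OS may round or
# cap the requested value, so read it back rather than assuming.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
default_rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)  # ask for 1 MiB
actual = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.close()
```

When this buffer fills because the application is not reading, TCP shrinks the advertised window toward zero and the peer stops transmitting; overruns at the MAC level are a separate, lower-layer phenomenon.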
I read in some places that advertising packets are sent to everyone within range. However, does the other device have to be scanning to receive them, or will it receive them anyway?
The problem:
Let's say I'm establishing a piconet between 5 or 6 BLE devices. At some point I have some connections between the slaves and one master. Then, if one of the devices gets removed or shut off for a few days, I would like it to reconnect to the network as soon as it is turned on.
I read about the autoConnect feature, but it seems that when you set it to true, the device starts a background scan which is actually slower (in frequency) than manual scanning. This makes me conclude that for autoConnect to work, the device which is being turned on again needs to advertise again, right? And if autoConnect really runs a slow scan in the background, it seems you can never receive the adv packets instantly unless you're already scanning somehow. Does that make sense?
If so, is there any way around it? I mean, detecting a device that is coming back into range instantly?
Nothing is "Instant". You are talking about radio protocols with delays, timeouts, retransmits, jamming, etc. There are always delays. The important thing is what you consider acceptable for your application.
A radio transceiver is either receiving, sleeping or transmitting, on one given channel at a time. Transmitting and receiving implies power consumption.
When a Central is idle (not handling any connection at all), all it has to do is scanning. It can do it full time (even if spec says this should be duty cycled). You can expect to actually receive an advertising packet from peer Peripheral the first time it is transmitted.
When a Central is maintaining a connection to multiple peripherals, its transceiver time is shared between all the connections to maintain. Background scanning is considered low priority, and takes some of the remaining transceiver time. Then an advertising Peripheral may send its ADV packet while Central is not listening.
Here comes statistical magic:
Spec says interval between two advertising events must be augmented with a (pseudo-)random delay. This ensures Central (scanner) and Peripheral (advertiser) will manage to see each other at some point in time. Without this random delay, their timing allocations could become harmonic, but out of phase, and it could happen they never see each other.
Depending on the parameters used on Central and Peripheral (advInterval, advDelay, scanWindow, scanInterval) and radio link quality, you can compute the probability to be able to reach a node after a given time. This is left as an exercise to the reader... :)
In the end, the question you should ask yourself looks like "is it acceptable my Peripheral is reconnected to my Central after 300 ms in 95% of cases" ?
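For what it's worth, here is a back-of-the-envelope version of that exercise, under the simplifying assumptions of a perfect radio link and independence between advertising events (which the random advDelay makes roughly true). Function and parameter names are my own, not from the spec:

```python
import math

# Toy model, not a spec-accurate simulation: assume each advertising
# event is heard independently with probability
# p = scanWindow / scanInterval.
def time_to_confidence(adv_interval_ms, scan_window_ms, scan_interval_ms,
                       confidence=0.95):
    p = scan_window_ms / scan_interval_ms
    # smallest n such that 1 - (1 - p)**n >= confidence
    n = math.ceil(math.log(1 - confidence) / math.log(1 - p))
    return n * adv_interval_ms
```

For example, with a 100 ms advInterval and a 30 ms scan window every 100 ms, this model predicts about 900 ms to reach 95% confidence of discovery.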
I'm trying to understand whether it's redundant for me to include some kind of CRC or checksum in my communication protocol. Do chrome.serial, and the other Chrome hardware communication APIs in general (e.g. chrome.hid, chrome.bluetoothLowEnergy, ...), already provide any error checking, if anyone can speak to them?
Serial communications is simply a way of transmitting bits and its major reason for existence is that it's one bit at a time -- and can therefore work over just a single communications link, such as a simple telephone line. There's no built-in CRC or checksum or anything.
There are many systems that live on top of serial comms that attempt to deal with the fact that communication often takes place in a noisy environment. Back in the day of modems over telephone lines, you might have to deal with the fact that someone else in the house could pick up another extension on the phone line and inject a bunch of noise into your download. Thus, protocols like XMODEM were invented, wrapping serial comms in a more robust framework. (Then, as XMODEM showed its limitations, we moved on to YMODEM and ZMODEM.)
Depending on what you're talking to (for example, a device like an Arduino connected to a USB serial port over a wire that's 25 cm long), you might find that putting the work into checksumming the data isn't worth the trouble, because the likelihood of interference is so low and the consequences are trivial. On the other hand, if you're talking to a controller for a laser weapon, you might want to make sure the command you send is the command that's received.
I don't know anything about the other systems you mention, but I'm old enough to have spent a lot of time doing serial comms back in the '80s (and now doing it again for devices using chrome.serial, go figure).
I'm using Chrome's serial API to communicate with Arduino devices, and I have yet to experience random corruption in the middle of an exchange (my exchanges are short bursts, 50-500 bytes max). However, I do see garbage bytes blast out if a connection is flaky or a cable is "rudely" disconnected (like a few minutes ago when I tripped over the FTDI cable).
In my project, a mis-processed command won't break anything, and I can get by with a master-slave protocol. Because of this, I designed a pretty slim solution: the Arduino slave listens for an "attention byte" (!) followed by a command byte, after which it reads a fixed number of data bytes depending on the command. Since the Arduino discards input until it hears an attention byte and a valid command, the breaking errors usually occur when a connection is cut while a slave is "awaiting x data bytes". To account for this, the first thing the master does on connect is blindly blast out enough attention bytes to push the Arduino through "awaiting data" even in the worst-case scenario. Crude, yet sufficient.
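For anyone curious, the discard-until-attention-byte logic above is just a small state machine; a sketch in Python (the byte values and command table are made up for illustration):

```python
# State machine: ignore bytes until an attention byte, then a valid
# command byte, then collect that command's fixed number of data bytes.
ATTN = ord('!')
CMD_LEN = {0x01: 2, 0x02: 4}   # hypothetical command byte -> data length

def make_parser(on_command):
    state = {"phase": "idle", "cmd": None, "buf": bytearray()}

    def feed(byte):
        if state["phase"] == "idle":
            if byte == ATTN:
                state["phase"] = "cmd"
        elif state["phase"] == "cmd":
            if byte in CMD_LEN:
                state["cmd"] = byte
                state["buf"].clear()
                state["phase"] = "data"
            else:
                state["phase"] = "idle"   # not a valid command: discard
        else:  # collecting data bytes
            state["buf"].append(byte)
            if len(state["buf"]) == CMD_LEN[state["cmd"]]:
                on_command(state["cmd"], bytes(state["buf"]))
                state["phase"] = "idle"
    return feed
```

Blasting attention bytes on connect works with this structure because any stray attention byte consumed as a "data" byte just shortens the garbage run, and the parser soon lands back in the idle state.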
I realize my solution is pretty lo-fi, so I did a bit of surfing around and I found this post to be pretty comprehensive: Simple serial point-to-point communication protocol
Also, if you need a strategy for error-correction over error-detection/re-transmission (or over my strategy, which I guess is "error-brute-forcing"), you may want to check out the link to a technique called "Hamming," near the bottom of that thread - That one looked promising!
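Since Hamming came up: the classic Hamming(7,4) code is small enough to sketch in full. It encodes 4 data bits into 7 and corrects any single flipped bit:

```python
# Hamming(7,4): 4 data bits -> 7 code bits, single-bit error correction.
# Bit positions follow the classic 1-indexed layout: p1 p2 d1 p3 d2 d3 d4.
def hamming74_encode(d):          # d: list of 4 bits [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4             # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4             # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4             # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):          # c: list of 7 bits, returns data bits
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-indexed position of the flip, or 0
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

The syndrome directly names the flipped position, which is the elegant bit: no retransmission needed for single-bit errors, at the cost of nearly doubling the data on the wire.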
Good luck!
-Matt
I'm building an electronic device that has to be prepared for RS232 connections, and I'd like to know if it's really necessary to make room for more than 3 pins (Tx, Rx, GND) on each port.
If I don't use the rest of signals (those made for handshaking): am I going to find problems communicating with any device?
Generally, yes, that's a problem. It's the kind of problem you can only avoid by giving the client specific instructions on how to configure the port on their end, which is itself always a problem: if it's not done properly then data transfer just won't occur, and finding out why can be very awkward. You are almost guaranteed to get a support call.
A lot of standard programs pay attention to your DTR signal, DSR on their end. Data Terminal Ready indicates that your device is powered up and whatever the client receives is not produced by electrical noise. Without DSR they'll just ignore what you send. Very simple to implement, just tie it to your power supply.
Pretty common is flow control through the RTS/CTS signals. If enabled in the client program, it won't send you anything until you turn on the Request To Send signal. Again very simple to implement if you don't need flow control, just tie it logically high like DTR so the client program's configuration doesn't matter.
DCD and Ring are modem signals, pretty unlikely to matter to a generic device. Tie them logically low.
Very simple to implement, avoids lots of mishaps and support calls, do wire them.
And do consider whether you can actually live without flow control. It is very rarely a problem on the client end; modern machines can easily keep up with the data rates that are common on serial ports. That is not necessarily the case on your end: the usual limitations are the amount of RAM you can reserve for the receive buffer and the speed of the embedded processor. A modern machine can firehose you with data pretty easily, and if your UART FIFO, receive interrupt handler, or data-processing code cannot keep up, the inevitable data loss is very hard to deal with. It is not an issue if you use RTS/CTS or XON/XOFF handshaking, if you use a master/slave protocol, or if you are comfortable with a low enough baud rate.
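The receive-side decision of when to deassert and reassert RTS is just watermark hysteresis; a sketch (thresholds are arbitrary, and real firmware would toggle the pin from the UART interrupt):

```python
# High/low watermark flow control on the receive side. Thresholds are
# arbitrary; real code would drive the RTS pin from the receive ISR.
class RxBuffer:
    def __init__(self, capacity=64, high=48, low=16):
        self.buf = bytearray()
        self.capacity, self.high, self.low = capacity, high, low
        self.rts = True            # True = "clear to send me data"

    def on_byte(self, b):
        if len(self.buf) < self.capacity:
            self.buf.append(b)
        if len(self.buf) >= self.high:
            self.rts = False       # tell the sender to pause

    def consume(self, n):
        taken = bytes(self.buf[:n])
        del self.buf[:n]
        if len(self.buf) <= self.low:
            self.rts = True        # safe to resume
        return taken
```

The gap between the high watermark and full capacity absorbs the bytes already in flight when RTS drops, since the sender's UART and FIFO will not stop instantly.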