Serial port handshake: what's the difference between Hardware and None handshaking?

I'm trying to determine the difference between opening a serial port with hardware handshaking and with no handshaking.
It seems that in both cases I have to control RTS/CTS signals (just tested it with one COM device).
So what's the difference between opening a serial port with hardware handshaking and without handshaking?
From my previous understanding, when we work without handshaking, we don't care about RTS/CTS and DTR/DSR signals. Just send and receive data whenever we want. Was I wrong?
One more question: can we only work without handshaking in full-duplex mode?

As you probably know, the issue is "flow control". Like the Clash song "Should I stay or should I go?".
RTS/CTS is hardware flow control. XON/XOFF is software flow control. With handshaking set to none, the answer is always "just keep going".
This link might explain further:
http://www.lammertbies.nl/comm/info/RS-232_flow_control.html
I suspect that when you were asking about "handshaking" with respect to "duplex", perhaps you meant this:
http://en.wikipedia.org/wiki/RS-232
In older versions of the specification, RS-232's use of the RTS and
CTS lines is asymmetric: The DTE asserts RTS to indicate a desire to
transmit to the DCE, and the DCE asserts CTS in response to grant
permission. This allows for half-duplex modems that disable their
transmitters when not required, and must transmit a synchronization
preamble to the receiver when they are re-enabled.
'Hope that helps!

The difference is mostly about whether the kernel will pay attention to the CTS/RTS lines when deciding whether to send more data. With hardware handshaking turned on, it will. With it set to none, it won't; the driver may still leave RTS asserted so that the peer knows it can send data, but transmission no longer waits on CTS.
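For reference, this distinction is typically just a flag when the port is opened. A minimal sketch using pyserial (the port name and baud rate are assumptions, not anything from the question):

```python
import serial

# Hardware handshaking: the driver transmits only while the peer asserts CTS,
# and it drops RTS when its own receive buffer is getting full.
with serial.Serial("/dev/ttyS0", 9600, rtscts=True, timeout=1) as port:
    port.write(b"hello\r\n")

# No handshaking: data goes out regardless of CTS; RTS is left to the
# application, which can still assert it so the peer knows it may send.
with serial.Serial("/dev/ttyS0", 9600, rtscts=False, timeout=1) as port:
    port.rts = True
    port.write(b"hello\r\n")
```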


Understanding serial device settings

Please feel free to slap me and send a link if this question has already been answered; I just couldn't find it. I did search though.
I've been trouble-shooting communication with a serial device. In looking over lots of documentation, I now understand what the settings for "baud rate," "data bits," "stop bit," and "parity" mean. But what I can't seem to understand is who (sender or receiver) determines these settings.
Say I have a serial device plugged into my computer. In my code, I open a connection to the serial port and specify something like 9600,8,E,1. When I specify these settings, do they get sent to the sending device itself, so that it knows how to send the data to my receiver? Or is it more common for a sender to expect a receiver to comply with strict settings?
The issue I'm having is that I attempted to use "Even" parity, and that resulted in tons of irregular transfer errors. When I use "Odd" parity, however, those errors go away. There is also a USB-to-serial adapter involved in my setup. There aren't any transfer errors with Even or Odd parity without the adapter in the middle. So I'm just having a hard time understanding whether the device itself doesn't support sending with Even parity, or whether the adapter is the thing causing trouble, etc.
Thanks.
When I specify these settings, do they get sent to the sending device itself, so that it knows how to send the data to my receiver?
No.
To expand on the comment by Hans Passant, both sides of the serial port have to agree on the settings, otherwise they won't talk to each other. If they don't agree, you will get gibberish data on either side as the hardware will read the data at an incorrect time. The settings are normally documented in the manual for the device that you are attempting to communicate with. For example, to communicate with a Cisco router, you will generally use the following settings:
Bits per sec : 9600
Data bits : 8
Parity : none
Stop bits : 1
Flow control : none
When you set up the serial port on your side, you must use these same settings; there is no hardware-level handshake between the two devices that negotiates the speed at which they will communicate.
Sometimes the serial port settings are given in a compact form like the following:
9600,8,N,1
which is just shorthand for the settings quoted above (9600 baud, 8 data bits, no parity, 1 stop bit).
In my experience, most devices default to 9600,8,N,1; the next most common setting is 115200,8,N,1.
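To make that concrete: on the PC side, those documented settings are simply passed verbatim when the port is opened; nothing on the wire negotiates them. A minimal pyserial sketch (the port name and command are assumptions):

```python
import serial

# These values must match what the device's manual specifies.
port = serial.Serial(
    port="/dev/ttyUSB0",            # or "COM3" etc. on Windows
    baudrate=9600,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    timeout=1,
)
port.write(b"show version\r\n")     # whatever the device expects
print(port.read(256))
port.close()
```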

Does chrome.serial API ensure data integrity?

I'm trying to understand whether it's redundant for me to include some kind of CRC or checksum in my communication protocol. Does chrome.serial ensure data integrity? And what about Chrome's other hardware communication APIs in general (e.g. chrome.hid, chrome.bluetoothLowEnergy, ...), if anyone can speak to them?
Serial communications is simply a way of transmitting bits and its major reason for existence is that it's one bit at a time -- and can therefore work over just a single communications link, such as a simple telephone line. There's no built-in CRC or checksum or anything.
There are many systems that live on top of serial comms that attempt to deal with the fact that communications often takes place in a noisy environment. Back in the day of modems over telephone lines, you might have to deal with the fact that someone else in the house might pick up another extension on the phone line and inject a bunch of noise into your download. Thus, protocols like XMODEM were invented, wrapping serial comms in a more robust framework. (Then, when XMODEM proved unreliable, we went to YMODEM and ZMODEM.)
Depending on what you're talking to (for example, a device like an Arduino connected to a USB serial port over a wire that's 25 cm long) you might find that putting the work into checksumming the data isn't worth the trouble, because the likelihood of interference is so low and the consequences are trivial. On the other hand, if you're talking to a controller for a laser weapon, you might want to make sure the command you send is the command that's received.
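If you do decide the data matters enough, a checksum is only a few lines of work. A sketch using Python's standard-library CRC-32 (the 4-byte little-endian trailer is just an assumed convention, not part of any existing protocol):

```python
import binascii
import struct

def frame(payload: bytes) -> bytes:
    """Append a CRC-32 so the receiver can detect corruption."""
    return payload + struct.pack("<I", binascii.crc32(payload))

def unframe(data: bytes) -> bytes:
    """Return the payload, or raise if the trailing CRC-32 doesn't match."""
    payload, crc = data[:-4], struct.unpack("<I", data[-4:])[0]
    if binascii.crc32(payload) != crc:
        raise ValueError("CRC mismatch - ask the sender to retransmit")
    return payload
```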
I don't know anything about the other systems you mention, but I'm old enough to have spent a lot of time doing serial comms back in the '80s (and now doing it again for devices using chrome.serial, go figure).
I'm using Chrome's serial API to communicate with Arduino devices, and I have yet to experience random corruption in the middle of an exchange (my exchanges are short bursts, 50-500 bytes max). However, I do see garbage bytes blast out if a connection is flaky or a cable is "rudely" disconnected (like a few minutes ago when I tripped over the FTDI cable).
In my project, a mis-processed command won't break anything, and I can get by with a master-slave protocol. Because of this, I designed a pretty slim solution: the Arduino slave listens for an "attention byte" (!) followed by a command byte, after which it reads a fixed number of data bytes depending on the command. Since the Arduino discards until it hears an attention byte and a valid command, the breaking errors usually occur when a connection is cut while a slave is "awaiting x data bytes". To account for this, the first thing the master does on connect is to blindly blast out enough attention bytes to push the Arduino through "awaiting data" even in the worst-case scenario. Crude, yet sufficient.
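For illustration only, the slave's receive loop in a scheme like that might look like the following Python sketch (the command table and byte counts are placeholders, not the actual values from the project):

```python
import serial

ATTENTION = b"!"
# Hypothetical command table: command byte -> number of data bytes that follow.
COMMANDS = {b"L": 3, b"S": 1}

def read_command(port):
    """Discard input until an attention byte and a known command arrive,
    then read that command's fixed-length payload."""
    while True:
        if port.read(1) != ATTENTION:
            continue                  # noise or a partial frame: skip it
        cmd = port.read(1)
        if cmd in COMMANDS:
            return cmd, port.read(COMMANDS[cmd])
```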
I realize my solution is pretty lo-fi, so I did a bit of surfing around and I found this post to be pretty comprehensive: Simple serial point-to-point communication protocol
Also, if you need a strategy for error-correction over error-detection/re-transmission (or over my strategy, which I guess is "error-brute-forcing"), you may want to check out the link to a technique called "Hamming," near the bottom of that thread - That one looked promising!
Good luck!
-Matt

Is handshaking really necessary on an RS232 connection?

I'm building an electronic device that has to be prepared for RS232 connections, and I'd like to know if it's really necessary to make room for more than 3 pins (Tx, Rx, GND) on each port.
If I don't use the rest of the signals (those meant for handshaking), am I going to run into problems communicating with any device?
Generally, yes, that's a problem; the kind of problem that you can only avoid if you can give the client specific instructions on how to configure the port on his end. That is always a risk: if it isn't done properly then data transfer just won't occur, and finding out why can be very awkward. You are almost guaranteed to get a support call.
A lot of standard programs pay attention to your DTR signal, DSR on their end. Data Terminal Ready indicates that your device is powered up and whatever the client receives is not produced by electrical noise. Without DSR they'll just ignore what you send. Very simple to implement, just tie it to your power supply.
Pretty common is flow control through the RTS/CTS signals. If enabled in the client program, it won't send you anything until you turn on the Request To Send signal. Again very simple to implement if you don't need flow control, just tie it logically high like DTR so the client program's configuration doesn't matter.
DCD and Ring are modem signals, pretty unlikely to matter to a generic device. Tie them logically low.
Very simple to implement, avoids lots of mishaps and support calls, do wire them.
And do consider whether you can actually live without flow control. It is very rarely a problem on the client end; modern machines can very easily keep up with the kind of data rates that are common on serial ports. That is not necessarily the case on your end: the usual limitations are the amount of RAM you can reserve for the receive buffer and the speed of the embedded processor. A modern machine can firehose you with data pretty easily. If your UART FIFO, receive interrupt handler, or data processing code cannot keep up, the inevitable data loss is very hard to deal with. Not an issue if you use RTS/CTS or XON/XOFF handshaking, if you use a master/slave protocol, or if you are comfortable with a low enough baud rate.
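If you do go three-wire but still want some flow control, XON/XOFF is the usual software fallback; on the PC side it is just an option at open time. A pyserial sketch (port name assumed):

```python
import serial

# Three-wire link (TX, RX, GND): no RTS/CTS available, so fall back to
# XON/XOFF software flow control if the device firmware supports it;
# otherwise keep the baud rate low enough for the device to keep up.
port = serial.Serial("/dev/ttyUSB0", 9600, xonxoff=True, rtscts=False, timeout=1)
port.write(b"data the device can pace with XOFF/XON as needed\r\n")
```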

Reliable full-duplex serial comms

I'm designing a device that will encrypt a long (assume infinite) stream of data sent from the PC and send it back. I'm planning to use a single serial port on the device running full duplex with hardware handshaking and "block" the data, sending a CRC value after every block. The device will only buffer a limited number of blocks- ideally just one buffer accumulating the block being received and one buffer holding the block presently being sent, switching them over at each block boundary and using hardware handshaking to keep things in sync.
The problem I'm considering is what happens when there's corruption and there's a mismatch between the CRC value calculated by the receiver (which could be either the PC or the device) and the one sent. If the receiver detects an error, it sets a break condition on its transmit line (because although TX and RX are doing different things, that's all we can do) and then we drop into a recovery sequence.
Recovery is easy when the error condition is detected before the data disappears from the sender, but particularly on the PC receiving there may be a significant amount of buffer space, and by the time the PC catches up and detects the corruption the data may have disappeared from the device and we can't simply retransmit. It's difficult to "rewind" cipher generation, so resending the source data and trying to pick things up in the middle is difficult- and indeed the source data may not be available to resend depending on where it's ultimately coming from.
I considered having each side send its "last frame successfully received" counter along with the CRC value of its last sent frame, and having the device drop RTS if there's too much unconfirmed data waiting at the output, but that would then deadlock: the device never gets the confirmation that the PC's receive thread has caught up.
I've also considered having the PC send a block and then not send another block until the first block has been confirmed processed and received back, but that's essentially dropping to half-duplex or block-synchronous operation, and the system runs slower than it could. A compromise is to have a number of buffers in the device, with the PC knowing how many buffers there are and throttling its own output based on what it thinks the device is doing, but needing that degree of 'intelligence' on the PC side seems inelegant and hacky.
Serial comms is quite ancient tech. Surely there's a good way of doing this?
Designing a reliable protocol is not that easy. Some notes with what you've talked about so far:
Only use RTS to do what it is designed to do, avoid receive buffer overflow. It is not suitable to do more.
Strongly consider not having multiple un-acknowledged frames in flight. That is only important if the connection suffers from high latency, and that is not a problem with serial ports.
Achieve full duplex operation by layering, use the OSI model as a guide.
Be sure to treat the input and output of your protocol as plain byte streams. Framing is only a detail of the protocol implementation; the actual frame size does not matter. If the app signals by using messages then that should be implemented on top of the protocol; it is otherwise the automatic outcome of proper layering.
Keep in mind that a frame can do more than just transmit data, it can also include an ACK for a received frame. In other words, you only need a separate ACK frame if there isn't anything to transmit back.
And avoid reinventing the wheel; this has been done before. I can recommend RATP, the subject of RFC 916. Widely ignored, by the way, so you are not likely to find code you can copy. I've implemented it and had good success. It has only one flaw that I know of: it is not resilient to multiple connection attempts being present in the receive buffer. Intentionally purging the buffer when you open the port is important.
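To make the piggybacked-ACK point concrete, here is a sketch of one possible frame layout; this is not RATP itself, just an illustration with made-up field sizes:

```python
import binascii
import struct

def build_frame(seq, ack, payload):
    """Frame = sequence number, ack of the last good frame received,
    payload length, payload, CRC-32. The ack rides along with outgoing
    data, so a separate ACK frame is only needed when there is nothing
    to send back."""
    header = struct.pack("<BBH", seq & 0xFF, ack & 0xFF, len(payload))
    body = header + payload
    return body + struct.pack("<I", binascii.crc32(body))
```

And per the last point, calling pyserial's reset_input_buffer() right after opening the port is one way to do that purge.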

Automatically detecting a serial port's configuration?

I am designing software around an existing hardware product. I have full control of the communication protocol but I'm not sure how to facilitate device detection.
A device could have a range of possible configurations (i.e. baud rate, data bits, parity bits, stop bits) that must be detected at runtime. What is the easiest, most reliable way for the software to figure out what configuration it is using? Again, I have full control of the communication protocol so I can define any mechanism I wish.
Is this a full-duplex or half-duplex device? Can you control request-to-send and monitor clear-to-send on both ends of the serial line? Is the serial line point-to-point (like RS-232) or multi-drop (like RS-485)? It will make an (albeit small) difference if you are going to interfere with other already-connected devices while negotiating with a newly connected one.
If you think of the handshake process like a modem negotiating a link layer protocol, it uses a standard set of messages to describe the type of communications it would like to have and waits for an "ack" from the other end. In your case I recommend having a "let's talk" standard message that your head end generates with the range of bit rates and waits for the ack from the device.
I also recommend reducing the number of configuration options for the device. Forget about variable data bits, parity bits, and stop bits. The serial communications world is no longer as unstable as it was back in the 70's. Just use 8 data bits, no parity, one stop bit and vary the bit rate. A CRC on the end of messages provides plenty of error-checking.
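One way to implement that from the head end is to probe each candidate bit rate at 8-N-1 with the "let's talk" message until the device answers. A pyserial sketch (the probe and reply contents are hypothetical, to be defined by your own protocol):

```python
import serial

PROBE = b"?SYNC\r"    # hypothetical "let's talk" message
REPLY = b"!ACK"       # hypothetical acknowledgement prefix from the device

def detect_baud(port_name):
    """Return the first bit rate at which the device answers, or None."""
    for baud in (115200, 57600, 38400, 19200, 9600):
        with serial.Serial(port_name, baud, timeout=0.5) as port:
            port.reset_input_buffer()      # drop garbage read at wrong rates
            port.write(PROBE)
            if port.read(len(REPLY)) == REPLY:
                return baud
    return None
```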
