Is handshaking really necessary on an RS-232 connection? - serial-port

I'm building an electronic device that has to be prepared for RS-232 connections, and I'd like to know whether it's really necessary to make room for more than three pins (TX, RX, GND) on each port.
If I don't use the rest of the signals (the ones meant for handshaking), am I going to have problems communicating with other devices?

Generally, yes, that's a problem. It is the kind of problem you can only avoid by giving the client specific instructions on how to configure the port on their end. That is always a risk: if it isn't done properly then data transfer just won't occur, and finding out why can be very awkward. You are almost guaranteed to get a support call.
A lot of standard programs pay attention to your DTR signal, which appears as DSR on their end. Data Terminal Ready indicates that your device is powered up and that whatever the client receives is not produced by electrical noise. Without DSR they'll simply ignore what you send. It is very simple to implement: just tie the line to your power supply.
Flow control through the RTS/CTS signals is also quite common. If it is enabled in the client program, the program won't send you anything until you turn on your Request To Send signal. Again, this is very simple to implement if you don't need flow control: tie it logically high like DTR, so the client program's configuration doesn't matter.
DCD and RI (Ring Indicator) are modem signals, pretty unlikely to matter to a generic device. Tie them logically low.
All of this is very simple to implement and avoids lots of mishaps and support calls, so do wire these signals.
And do consider whether you can actually live without flow control. It is very rarely a problem on the client end: modern machines can easily keep up with the data rates that are common on serial ports. That is not necessarily the case on your end, where the usual limitations are the amount of RAM you can reserve for the receive buffer and the speed of the embedded processor. A modern machine can firehose you with data pretty easily, and if your UART FIFO, receive interrupt handler, or data processing code cannot keep up, the inevitable data loss is very hard to deal with. It is not an issue if you use RTS/CTS or XON/XOFF handshaking, if you use a master/slave protocol, or if you are comfortable with a low enough baud rate. A sketch of receiver-side RTS flow control follows.
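To make the RTS/CTS option concrete, here is a minimal sketch of receiver-side flow control on the embedded end. The gpio_set_rts()/gpio_clear_rts() helpers and the watermark values are hypothetical placeholders, not a definitive implementation:

    /* Minimal sketch of receiver-side RTS/CTS flow control.
     * gpio_set_rts()/gpio_clear_rts() and the watermarks are
     * hypothetical placeholders -- substitute your own driver
     * calls and thresholds. Assumes flow control prevents overrun. */
    #include <stdint.h>

    #define RX_BUF_SIZE   256   /* power of two so the indices wrap cleanly  */
    #define RX_HIGH_WATER 192   /* deassert RTS here, before the buffer fills */
    #define RX_LOW_WATER   64   /* reassert RTS once the app has drained space */

    extern void gpio_set_rts(void);    /* placeholder: drive RTS active   */
    extern void gpio_clear_rts(void);  /* placeholder: drive RTS inactive */

    static volatile uint8_t  rx_buf[RX_BUF_SIZE];
    static volatile uint16_t rx_head, rx_tail;

    static uint16_t rx_count(void) {
        return (uint16_t)(rx_head - rx_tail);   /* valid while no overrun */
    }

    /* Called from the UART receive interrupt for every byte. */
    void uart_rx_isr(uint8_t byte) {
        rx_buf[rx_head++ % RX_BUF_SIZE] = byte;
        if (rx_count() >= RX_HIGH_WATER)
            gpio_clear_rts();               /* ask the sender to pause */
    }

    /* Called from the main loop when the application consumes data. */
    int uart_read_byte(uint8_t *out) {
        if (rx_head == rx_tail)
            return 0;                       /* buffer empty */
        *out = rx_buf[rx_tail++ % RX_BUF_SIZE];
        if (rx_count() <= RX_LOW_WATER)
            gpio_set_rts();                 /* safe to resume sending */
        return 1;
    }

The high watermark must leave headroom for the bytes the PC keeps sending after RTS drops; PC UARTs and drivers typically still have anywhere from a few bytes to a few kilobytes in flight.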

Related

How do smart phones use AT commands and data connection(s)? GSM mux? Multiple UARTs?

I am involved in a project where we have some kind of IoT device: an NXP processor with an LTE modem on a PCB. The software running on it connects to the modem over a single UART interface. It initializes the modem through AT commands and finally makes a data call to the provider (PPP).
Then it uses lwIP (lightweight IP) to open some MQTT subscriptions and to allow user code to make HTTP GET/POST requests to our servers.
Every 15 minutes we want to retrieve the signal strength from the modem and report it back to the server. What I do now is put the modem back in command mode, retrieve the signal strength info, go back to data mode, and resume normal operation.
The round trip from data mode to command mode and back to data mode takes several seconds (4-5 ish). This is annoying, because during that time we are not responsive to commands.
I've read about GSM mux 07.10. By following a defined protocol it allows creating virtual serial ports over one physical UART. That sounds nice, although I realize it will come at some cost in performance (bytes are added to each frame we send in either command mode or data mode).
The GSM mux 07.10 spec dates from 1999, and I am far from an expert in mobile solutions. I was wondering: is muxing still the way to go? How does a typical smart phone deal with this, for example? Do smart phones include modems with more than one UART, to get parallel access to AT commands and a live internet connection? Or do they in fact still rely on GSM mux?
Would somebody be so kind as to give some insight, also on potential C libraries that implement GSM mux 07.10? TinyGSM seems to implement it (although I can't find where), and I can also find the Linux kernel driver that implements GSM mux 07.10. But that driver is written on top of the tty interfaces in Linux, so I would have to reverse engineer the kernel driver, strip out the tty parts, and replace them with my own UART implementation.
First of all, 07.10 belongs to the old GSM specification numbering, so the old specs will never be updated; only the new specifications with the new numbering scheme will. I do not remember exactly when the switch was made, but I do remember someone at work giving a presentation on 07.10 around 1998/1999, so the switch probably happened a few years after that or around that time (and definitely before 2009).
The newer spec numbering scheme uses three digits for the first part.
So for instance the old AT command spec 07.07 is now 27.007, and the old 07.10 multiplex specification is now 27.010.
The following is what I remember of 07.10.
The motivation for developing 07.10 was to support exactly the kind of scenario that you describe. Remember, back in the mid 90's, if a mobile phone had a serial interface then it was RS-232 through each manufacturer's proprietary connector at the bottom of the phone. One single serial interface.
However, in order to use the 07.10 mux over serial you needed to install specific serial drivers with 07.10 support in Windows (and I think there may have been some reliability issues with them?), and for that reason 07.10 never took off and never became more than a rarely used solution.
Also, by the end of the 90's, additional serial interfaces like Bluetooth and IrDA became available on many phones, and later USB as well; these added more physical interfaces as well as native multiplexing within each protocol.
So the need for multiplexing over physical RS-232 became less of an issue, and whatever little popularity 07.10 ever had dwindled down to virtually nothing.
Fast forward a couple of decades and suddenly someone asks about it on Stack Overflow. Good on you :) As far as I can tell, there are no fundamental problems with using it for the purpose you present.
Modern smart phones that support AT commands will most likely have a code base for AT command parsing with roots in the 90's, which most likely includes the AT+CMUX command. Of course, manufacturers today have zero explicit wish to support it, but when it is already present it just comes along with the collection of all the other legacy AT commands they support.
So if the modem supports AT+CMUX you should be good to go; a sketch of probing for it is shown below. I have no experience or recommendations with regard to client protocol libraries.
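As a starting point, here is a hedged sketch of enabling the multiplexer from C before switching the UART over to 27.010 framing. uart_write_str() and uart_read_line() are hypothetical stand-ins for your UART driver, and the CMUX parameters shown are common defaults; check your modem's AT manual before relying on them:

    /* Hedged sketch: enable 3GPP 27.010 basic-mode multiplexing.
     * uart_write_str() and uart_read_line() are hypothetical helpers
     * for your UART driver; the CMUX parameters are common defaults,
     * so check your modem's AT manual before relying on them. */
    #include <string.h>

    extern void uart_write_str(const char *s);
    extern int  uart_read_line(char *buf, int maxlen, int timeout_ms);

    int modem_enable_cmux(void) {
        char line[64];

        /* AT+CMUX=<mode>[,<subset>...]: mode 0 = basic option,
         * subset 0 = UIH frames only. */
        uart_write_str("AT+CMUX=0,0\r");

        while (uart_read_line(line, sizeof line, 1000) > 0) {
            if (strcmp(line, "OK") == 0)
                return 0;    /* modem now expects 27.010 frames */
            if (strcmp(line, "ERROR") == 0)
                return -1;   /* CMUX unsupported or bad parameters */
        }
        return -1;           /* timeout */
    }

Once the modem answers OK it stops speaking plain AT text on that UART and expects 27.010 frames, so your mux layer has to take over immediately.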

Does chrome.serial API ensure data integrity?

I'm trying to understand whether it's redundant for me to include some kind of CRC or checksum in my communication protocol. Does chrome.serial ensure data integrity? And what about Chrome's other hardware communication APIs in general, if anyone can speak to them (e.g. chrome.hid, chrome.bluetoothLowEnergy, ...)?
Serial communication is simply a way of transmitting bits, and its major reason for existence is that it sends one bit at a time -- so it can work over a single communications link, such as a simple telephone line. There's no built-in CRC or checksum or anything.
There are many systems that live on top of serial comms and attempt to deal with the fact that communication often takes place in a noisy environment. Back in the day of modems over telephone lines, you might have to deal with the fact that someone else in the house might pick up another extension on the phone line and inject a bunch of noise into your download. Thus protocols like XMODEM were invented, wrapping serial comms in a more robust framework. (Then, when XMODEM proved unreliable, we went to YMODEM and ZMODEM.)
Depending on what you're talking to (for example, a device like an Arduino connected to a USB serial port over a wire that's 25 cm long), you might find that putting the work into checksumming the data isn't worth the trouble, because the likelihood of interference is so low and the consequences are trivial. On the other hand, if you're talking to a controller for a laser weapon, you might want to make sure the command you send is the command that's received.
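If you do decide to checksum, a classic choice is a 16-bit CRC appended to each frame. Here is a minimal sketch of CRC-16-CCITT, one common variant among many; pick whatever both ends agree on:

    #include <stddef.h>
    #include <stdint.h>

    /* CRC-16-CCITT (polynomial 0x1021, init 0xFFFF), bit by bit.
     * Append the two CRC bytes to each frame; the receiver recomputes
     * the CRC over the payload and compares. */
    uint16_t crc16_ccitt(const uint8_t *data, size_t len) {
        uint16_t crc = 0xFFFF;
        while (len--) {
            crc ^= (uint16_t)(*data++) << 8;
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }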
I don't know anything about the other systems you mention, but I'm old enough to have spent a lot of time doing serial comms back in the '80s (and now doing it again for devices using chrome.serial, go figure).
I'm using Chrome's serial API to communicate with Arduino devices, and I have yet to see random corruption in the middle of an exchange (my exchanges are short bursts, 50-500 bytes max). However, I do see garbage bytes blast out if a connection is flaky or a cable is "rudely" disconnected (like a few minutes ago, when I tripped over the FTDI cable).
In my project a mis-processed command won't break anything, and I can get by with a master/slave protocol. Because of this, I designed a pretty slim solution: the Arduino slave listens for an "attention byte" (!) followed by a command byte, after which it reads a fixed number of data bytes depending on the command. Since the Arduino discards input until it hears an attention byte and a valid command, the breaking errors usually occur when a connection is cut while the slave is "awaiting x data bytes". To account for this, the first thing the master does on connect is blindly blast out enough attention bytes to push the Arduino through "awaiting data" even in the worst case. Crude, yet sufficient. A sketch of the parser is below.
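For the curious, the receive side of that scheme boils down to a small state machine. This is an illustrative reconstruction in plain C; the command table and handle_command() are invented placeholders (the real sketch runs as Arduino code, but the logic is the same):

    /* Illustrative reconstruction of the attention-byte parser in plain C.
     * The command table and handle_command() are invented placeholders;
     * feed every received byte into parser_feed(). */
    #include <stdint.h>

    #define ATTN '!'

    enum state { WAIT_ATTN, WAIT_CMD, WAIT_DATA };

    /* Payload length for each valid command byte; 0xFF = unknown. */
    static uint8_t payload_len(uint8_t cmd) {
        switch (cmd) {
            case 'S': return 2;     /* hypothetical: set-servo, id + angle */
            case 'L': return 1;     /* hypothetical: set-LED, on/off       */
            default:  return 0xFF;
        }
    }

    extern void handle_command(uint8_t cmd, const uint8_t *data, uint8_t len);

    void parser_feed(uint8_t byte) {
        static enum state st = WAIT_ATTN;
        static uint8_t cmd, need, got;
        static uint8_t data[16];

        switch (st) {
        case WAIT_ATTN:                     /* discard noise until '!' */
            if (byte == ATTN) st = WAIT_CMD;
            break;
        case WAIT_CMD:
            need = payload_len(byte);
            if (need == 0xFF) { st = WAIT_ATTN; break; }  /* invalid */
            cmd = byte;
            got = 0;
            if (need == 0) {
                handle_command(cmd, data, 0);
                st = WAIT_ATTN;
            } else {
                st = WAIT_DATA;   /* the risky "awaiting data" state */
            }
            break;
        case WAIT_DATA:
            data[got++] = byte;
            if (got == need) {
                handle_command(cmd, data, need);
                st = WAIT_ATTN;
            }
            break;
        }
    }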
I realize my solution is pretty lo-fi, so I did a bit of searching around, and I found this post to be pretty comprehensive: Simple serial point-to-point communication protocol.
Also, if you need a strategy for error correction rather than error detection plus retransmission (or rather than my strategy, which I guess is "error brute-forcing"), you may want to check out the link to a technique called Hamming coding near the bottom of that thread - that one looked promising!
Good luck!
-Matt

Reliable full-duplex serial comms

I'm designing a device that will encrypt a long (assume infinite) stream of data sent from the PC and send it back. I'm planning to use a single serial port on the device running full duplex with hardware handshaking and "block" the data, sending a CRC value after every block. The device will only buffer a limited number of blocks- ideally just one buffer accumulating the block being received and one buffer holding the block presently being sent, switching them over at each block boundary and using hardware handshaking to keep things in sync.
The problem I'm considering is what happens when there's corruption and there's a mismatch between the CRC value calculated by the receiver - which could be either the PC or the device - and the one sent. If the receiver detects an error, it sets a break condition on its transmit line - because although TX and RX are doing different things, that's all we CAN do - and then we drop into a recovery sequence.
Recovery is easy when the error condition is detected before the data disappears from the sender, but on the PC receiving side in particular there may be a significant amount of buffer space, and by the time the PC catches up and detects the corruption, the data may have disappeared from the device and we can't simply retransmit. It's difficult to "rewind" cipher generation, so resending the source data and trying to pick things up in the middle is awkward - and indeed the source data may not be available to resend, depending on where it ultimately comes from.
I considered having each side send its "last frame successfully received" counter along with its last frame sent CRC value, and having the device drop RTS if there's too much unconfirmed data waiting at the output, but that would then deadlock- the device never gets the confirmation that the PC's receive thread has caught up.
I've also considered having the PC send a block and then not send another block until the first block has been confirmed as processed and received back, but that essentially degrades to half-duplex or block-synchronous operation, and the system then runs slower than it otherwise could. A compromise is to have a number of buffers in the device, with the PC knowing how many and throttling its own output based on what it thinks the device is doing, but needing that degree of 'intelligence' on the PC side seems inelegant and hacky.
Serial comms is quite ancient tech. Surely there's a good way of doing this?
Designing a reliable protocol is not that easy. Some notes with what you've talked about so far:
Only use RTS to do what it is designed to do: avoid receive buffer overflow. It is not suitable for more.
Strongly consider not having multiple unacknowledged frames in flight. That only matters when the connection suffers from high latency, which is not a problem with serial ports.
Achieve full duplex operation by layering, use the OSI model as a guide.
Be sure to treat the input and output of your protocol as plain byte streams. Framing is only a detail of the protocol implementation, and the actual frame size does not matter. If the app signals by using messages, then that should be implemented on top of the protocol; otherwise it is the automatic outcome of proper layering.
Keep in mind that a frame can do more than just transmit data: it can also carry the ACK for a frame received in the other direction. In other words, you only need a separate ACK frame when there isn't anything to transmit back; see the header sketch after these notes.
And avoid reinventing the wheel; this has been done before. I can recommend RATP, the subject of RFC 916. Widely ignored, by the way, so you are not likely to find code you can copy. I've implemented it and had good success. It has only one flaw that I know of: it is not resilient to multiple connection attempts sitting in the receive buffer, so intentionally purging the buffer when you open the port is important.
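To illustrate the piggybacked-ACK note, here is what a frame header along those lines might look like. The layout is invented for this example; RATP (RFC 916) defines its own header, so consult the RFC if you adopt it:

    #include <stdint.h>

    /* Illustrative frame header for piggybacked ACKs. The field layout
     * is invented for this example; RATP (RFC 916) defines its own
     * header, so consult the RFC if you adopt it. */
    struct frame_hdr {
        uint8_t soh;   /* start-of-header marker                        */
        uint8_t seq;   /* sequence number of this frame                 */
        uint8_t ack;   /* last frame received correctly from the peer   */
        uint8_t len;   /* payload bytes that follow; 0 = pure ACK frame */
        /* payload[len] and a 16-bit CRC over header+payload follow */
    };

With this shape, every data frame flowing in one direction confirms progress in the other, and a zero-length frame serves as the standalone ACK.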

Serial protocol for sending image data

We have a custom-built microcontroller card (STM32 / ARM Cortex-M3) with a camera attached. The camera captures 10-bit greyscale at 1280x1024 resolution. We need to send that image data back to a PC host over serial. That's quite a big chunk of data: at 115200 baud the transfer takes about 3 minutes, assuming everything goes fine. Anything I implement to ensure robustness seems likely to slow that process down (e.g. splitting the data into blocks, checksumming the blocks, asking for a resend if a block is corrupt). So I was wondering how people make a good compromise between speed and integrity.
We are currently seeing real transfer times of about 6 minutes. We had to set the UART baud rate to a weird value - 1036800 - because at 115200 there were issues (the PC is running at 115200). I'm more software than hardware, so any thoughts as to why that might happen would be helpful!
Start by doing some easy compression on your image. Either run-length encoding or delta encoding will give you less data to send; a sketch of the former follows. There are much better schemes, like the algorithms used in TIFF, but you may want to trade the complexity of TIFF-ing your buffer for simpler software on the embedded side.
Then you can afford something simple like XMODEM for your compressed data. That has the useful property of being a standard protocol too, which might lead you to a terminal+XMODEM transfer style of interface to your host. That would make debugging the interface pretty simple as well.
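For reference, a minimal run-length encoder in C might look like the sketch below. It assumes the 10-bit pixels have already been packed into bytes, and note that the worst case doubles the data, so it only pays off on images with flat regions:

    #include <stddef.h>
    #include <stdint.h>

    /* Minimal run-length encoder emitting (count, value) pairs, runs
     * capped at 255. Worst case it doubles the data, so it only pays
     * off on images with flat regions; dst must hold 2*n bytes. Assumes
     * the 10-bit pixels were already packed into bytes. */
    size_t rle_encode(const uint8_t *src, size_t n, uint8_t *dst) {
        size_t out = 0, i = 0;
        while (i < n) {
            uint8_t value = src[i];
            uint8_t run = 1;
            while (i + run < n && src[i + run] == value && run < 255)
                run++;
            dst[out++] = run;
            dst[out++] = value;
            i += run;
        }
        return out;    /* number of bytes written to dst */
    }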
Tim Williscroft's answer about compressing your data is a nice first step.
Now, from the serial protocol side, the real transfer rate depends a lot on how you configure and implement your software on both sides. The baud rate is not the only thing to care about:
Are you using hardware flow control? With hardware flow control you will be able to increase the baud rate significantly (10x) without generating overrun errors.
On the STM32, are you using DMA, interrupts, or - even worse - polling to manage the data transmission? I don't know the exact STM32 part you are using, but on the STM32s I have used, the UART transmit FIFO was limited to 1 byte, so you are practically obliged to use DMA if you have performance issues (see the sketch after this list).
Still on the STM32 side, you can greatly improve performance by taking care with the bus accesses (and possible arbitration conflicts) your application generates.
Moreover, on the STM32 all clocks are configurable. Using an external high-speed oscillator (if one is available on the board) may be a good way to improve performance over the internal RC oscillator. Also take care with the internal bus clock configuration!
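As an example of the DMA point above, here is a hedged sketch of streaming a large buffer out in chunks with the ST HAL. It assumes a HAL project where huart2 and its DMA channel are already configured (e.g. by CubeMX); huart2, CHUNK, and the header name are placeholders for your own setup:

    /* Hedged sketch: stream a large buffer over UART using DMA with the
     * ST HAL. Assumes huart2 and its DMA channel are configured elsewhere
     * (e.g. by CubeMX); huart2, CHUNK and the header are placeholders. */
    #include "stm32f4xx_hal.h"    /* use the header for your STM32 family */

    extern UART_HandleTypeDef huart2;

    #define CHUNK 4096

    static const uint8_t *tx_ptr;
    static uint32_t       tx_left;

    static void send_next_chunk(void) {
        uint16_t n = (tx_left > CHUNK) ? CHUNK : (uint16_t)tx_left;
        HAL_UART_Transmit_DMA(&huart2, (uint8_t *)tx_ptr, n);
        tx_ptr  += n;
        tx_left -= n;
    }

    void image_send_start(const uint8_t *buf, uint32_t len) {
        tx_ptr  = buf;
        tx_left = len;
        send_next_chunk();
    }

    /* The HAL calls this from the DMA/UART interrupt when a chunk has
     * gone out; immediately queue the next one so the line never idles. */
    void HAL_UART_TxCpltCallback(UART_HandleTypeDef *huart) {
        if (huart == &huart2 && tx_left > 0)
            send_next_chunk();
    }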
Now, from the PC side, performance may also be affected by how your application buffers and processes the received data.
The first thing to do is to find out where the time is being spent:
Observe your UART signal with a scope. Since you say the transfer takes twice the theoretical time, you should not see a continuous signal. Without hardware flow control, it is the STM32 that is taking time to output data. With hardware flow control, also look at the flow control signals to determine which side causes the pauses (it may be both).

What is the difference between a Relay Controller and a Microcontroller?

What is the difference between a Relay Controller and a Microcontroller?
I'm looking into Arduino boards and am just getting into electronics, so I wanted to know the difference.
I know this is not a programming question, but I am developing in PHP and would like to know what the difference is before I start to code to make sure I'm going down the right path.
Those two devices are very different. Depending on exactly what you're trying to do, you may be able to use either, however. You'll have to tell more about your goal.
If you're switching high-current or high-voltage loads on and off, you'll need some sort of relay (or perhaps a large FET). If your current and voltage requirements are sufficiently low (5V, 40mA), you may be able to drive your load directly from the Arduino's output pins.
The Arduino is a microcontroller. That means it's an entire computer, just simplified: it has RAM, registers, an ALU, etc. Microcontrollers are generally specialized so that instead of interfacing to peripherals through some kind of bus, as a desktop computer processor does, they have I/O capabilities built in, often simply in the form of outputs that can be set high (the supply voltage, usually 5V) or low (0V) programmatically. The Arduino probably uses its own programming language, although there may be more than one language available for it (I've never used one). I doubt PHP is one of those languages.
The relay controller is exactly what the name implies: a simple circuit that controls some relays. Relays are electrically actuated switches. There's no intelligence in the relay controller; it can't be programmed and must be controlled externally via USB. If you're attempting to interface with it from PHP on a desktop/server computer, this is probably your best choice. You're right that it's expensive; you could probably build your own for a fraction of the cost, especially if you're willing to use the parallel port on your computer (googling for how should turn up simple instructions). It's worth noting that that relay controller, and presumably most others, likely contains its own microcontroller, with the I/O pins connected to circuitry that boosts the current and/or voltage to the point where it can drive the relays, which in turn switch the load.
Hmm... only very vaguely programming related :) I think we may need another StackOverflow for electronics. Maybe SparkOverflow?
