What value to set the baud rate to - serial-port

In MATLAB, I am establishing a serial link to an Arduino. Is a higher baud rate always better? I am using 9,600 baud now, but that is merely because it is the most standard value.

You'll have better luck over at https://arduino.stackexchange.com/.
Why do people settle?
People settle because it is more than fast enough. The most common use is just to print some stuff on a terminal for debugging. 9600 baud is roughly 960 characters per second (10 bits per character with the usual 8N1 framing), or about 12 lines of 80 characters per second. How fast can you read? :)
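For instance, a minimal Arduino-style sketch of that kind of debug printing (the analog pin and message are placeholders, not from the original answer) might look like this:

    // Typical debug output at 9600 baud. With 8N1 framing each character costs
    // 10 bits on the wire, so the budget is roughly 960 characters per second.
    void setup() {
      Serial.begin(9600);              // plenty for human-readable debug output
    }

    void loop() {
      Serial.print("sensor = ");
      Serial.println(analogRead(A0));  // one short line per iteration
      delay(100);                      // ~10 lines per second, far below the budget
    }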
If your program is using the serial port for bulk data transfer, you would choose not to settle.
See the following resources:
Serial.begin(): Why not always use 28800?
How high of a baud rate can I go (without errors)?

Good question. I've spent years working with modems and I'm no stranger to baud rates. My Arduino uses a USB connection and it handles the baud rate, so I never got into messing with it.
It's strictly a matter of how quickly you want your program loaded; it has no other effect. It would be reasonable to consider that low-end equipment might not support the higher speeds. From the communications perspective, the higher the baud rate, the greater the chance of data errors. I think it's a stretch to think the communication between the PC and an Arduino is going to have much of an issue.

Related

Setting the baud rate for USART in STM32 microcontroller

Why do we need to calculate the baud rate using the following formula?
baud = fCK / (16*USARTDIV)
I mean, why can't we write 9600 or any other desired baud rate value directly in USART_BRR register? Why do we need to perform this calculation first? What are we calculating here anyway?
You can't just write your desired baud rate into a register, because the peripheral doesn't know how fast the clock feeding it is, so it wouldn't be able to set up its dividers correctly.
It would be possible to make a USART that you could tell a baud rate to, but it would require extra complexity and would still need to know how fast its clock is.
You are not configuring software, but hardware. That means the bits you set are (more or less) directly connected to clock dividers which control the baud rate of the UART module.
As an example, you have to use fCK in your calculation. Imagine the UART module had to calculate fCK by itself. The problem is that fCK is the result of all the clock dividers in use and is not configured in any central place. The effort to calculate fCK in hardware would just be disproportionate (effort = cost). It's much easier to let the user calculate it.
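As a concrete sketch of that user-side calculation (assuming oversampling by 16 and the common STM32 layout of a 12-bit mantissa plus 4-bit fraction in USART_BRR; the function name is just for illustration):

    #include <stdint.h>

    // USARTDIV = fCK / (16 * baud). Because BRR holds USARTDIV as a fixed-point
    // value (mantissa in bits 15:4, sixteenths in bits 3:0), the register value
    // is simply fCK / baud, rounded to the nearest integer.
    uint16_t usart_brr_for(uint32_t fck_hz, uint32_t baud)
    {
        uint32_t div_times_16 = (fck_hz + baud / 2U) / baud;
        return (uint16_t)div_times_16;   // e.g. fCK = 72 MHz, baud = 9600 -> 7500 (0x1D4C)
    }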

Is DMA the Correct Way to Receive High-Speed Digital Data on a Microprocessor?

I have been using the Teensy 3.6 microcontroller board (180 MHz ARM Cortex-M4 processor) to try and implement a driver for a sensor. The sensor is controlled over SPI and when it is commanded to make a measurement, it sends out the data over two lines, DOUT and PCLK. PCLK is a 5 MHz clock signal and the bits are sent over DOUT, measured on the falling edges of the PCLK signal. The data frame itself consists of 1,024 16-bit values.
My first attempt consisted of a relatively naïve approach: I attached an interrupt to the PCLK pin looking for falling edges. When it detects a falling edge, it sets a bool indicating that a new bit is available and sets another bool to the value of the DOUT line. The main loop of the program assembles a uint16_t value from these bits and collects 1,024 of these values for the full measurement frame.
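A rough reconstruction of that approach (my own sketch of what is described above; pin numbers and variable names are placeholders, not from the original post):

    // Interrupt-per-bit reception. At a 5 MHz PCLK this fires an ISR roughly
    // every 200 ns, which leaves the CPU almost no time for anything else.
    const int PIN_PCLK = 2;   // assumed wiring
    const int PIN_DOUT = 3;   // assumed wiring

    volatile bool bitReady = false;
    volatile bool bitValue = false;

    void onFallingEdge() {
      bitValue = digitalRead(PIN_DOUT);
      bitReady = true;
    }

    void setup() {
      pinMode(PIN_PCLK, INPUT);
      pinMode(PIN_DOUT, INPUT);
      attachInterrupt(digitalPinToInterrupt(PIN_PCLK), onFallingEdge, FALLING);
    }

    void loop() {
      static uint16_t word = 0;
      static int bitCount = 0;
      if (bitReady) {
        bitReady = false;
        word = (word << 1) | (bitValue ? 1 : 0);
        if (++bitCount == 16) {
          // store `word` into the 1024-sample frame buffer here
          bitCount = 0;
        }
      }
    }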
However, this program locks up the Teensy almost immediately. From my experiments, it seems to lock up as soon as the interrupt is attached. I believe that the microprocessor is being swamped by interrupts.
I think that the correct way of doing this is by using the Teensy's DMA controller. I have been reading Paul Stoffregen's DMAChannel library but I can't understand it. I need to trigger the DMA measurements from the PCLK digital pin and have it read in bits from the DOUT digital pin. Could someone tell me if I am looking at this problem in the correct way? Am I overlooking something, and what resources should I view to better understand DMA on the Teensy?
Thanks!
I put this on the Software Engineering Stack Exchange because I feel that this is primarily a programming problem, but if it is an EE problem, please feel free to move it to the EE SE.
Is DMA the Correct Way to Receive High-Speed Digital Data on a Microprocessor?
There is more than one source of 'high speed digital data'. DMA is not the globally correct solution for all data, but it can be a solution.
it sends out the data over two lines, DOUT and PCLK. PCLK is a 5 MHz clock signal and the bits are sent over DOUT, measured on the falling edges of the PCLK signal.
I attached an interrupt to the PCLK pin looking for falling edges. When it detects a falling edge, it sets a bool that a new bit is available and sets another bool to the value of the DOUT line.
This approach would be called 'bit bashing': you are using the CPU to physically sample the pins. It is a worst-case solution, yet one I see many experienced developers implement. It will work with any hardware connection. Fortunately, the Kinetis K66 has several peripherals that may be able to assist you.
Specifically, the FTM, CMP, I2C, SPI and UART modules may be useful. These hardware modules are capable of reducing the workload from processing each bit to processing groups of bits. For instance, the FTM supports a capture mode. The idea is to ignore the PCLK signal and just measure the time between edges on DOUT. These times will be integer multiples of the bit period (the PCLK period). If the timer captures a two-bit period, then you know that two identical bits (two ones or two zeros) were sent.
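A hardware-agnostic sketch of that decoding step (my own illustration; the timestamps would come from whatever capture unit you end up using):

    #include <cstdint>
    #include <vector>

    // Turn capture timestamps of DOUT edges into bits, given a known bit period.
    // Each edge-to-edge interval is a run of identical bits, and the line level
    // toggles at every captured edge.
    std::vector<int> decodeEdges(const std::vector<uint32_t>& edgeTimesNs,
                                 uint32_t bitPeriodNs, int firstLevel)
    {
        std::vector<int> bits;
        int level = firstLevel;
        for (size_t i = 1; i < edgeTimesNs.size(); ++i) {
            uint32_t dt = edgeTimesNs[i] - edgeTimesNs[i - 1];
            uint32_t run = (dt + bitPeriodNs / 2) / bitPeriodNs;  // round to whole bit periods
            for (uint32_t b = 0; b < run; ++b) {
                bits.push_back(level);
            }
            level ^= 1;  // the data line changed state at this edge
        }
        return bits;
    }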
Also, your signal looks like SSI, which is a 'digital audio' channel. Unfortunately, the K66 doesn't have an SSI module. Typical I2C is open drain and it always has a start bit and a fixed word size. It may be possible to use it if you have some knowledge of the data and/or can attach some circuit to fake some bits (to be removed later).
You could use the UART and the time between characters to capture data. The time will correspond to a run of bits that aren't the start bit. However, it looks like this UART module requires stop bits (the SIM features are probably very limited).
Once you have done this, the decision between DMA, interrupts and polling can be made. There is nothing faster than polling if the CPU actually uses the data. DMA and interrupts are needed if you need to multiplex the CPU with the data transfer. DMA is better if the CPU doesn't need to act on most of the data, or if the work the CPU is doing is not memory intensive (number crunching). The cost of interrupts depends on your context-save overhead, which can be minimized depending on the facilities your mainline code uses.
Some glue circuitry to adapt the signal to one of the K66 modules could go a long way toward making a more efficient solution. If you can't change the signal, another (NXP?) SoC with an SSI module would work well. The NXP modules usually support chaining to an eDMA module as well as interrupts.

Why 9600 data rate is used over other rates for Serial.begin()?

I am working with an Arduino, and the Serial.begin() function is used for communication with the computer. There is a whole range of data rates from 300 to 115200, yet the majority uses 9600!
Why is that? What is its significance?
During the previous millennium, 9600 baud was a standard speed for many devices.
Currently this speed is enough for most cases, so people stick to it; many devices use 9600 baud as a default.
Personally, I use serial for debugging most often. At 9600 baud it can print more than 10 lines per second, which is more than I can read.
Keep in mind, though, that the transmit buffer is limited to 64 characters; when it is full, the Arduino will block on a Serial.write() call until there is enough space in the buffer. That is why you encounter some slowdown with slow baud rates (see the sketch after this answer).
On the other side, you will start to burden the MCU at speeds around 0.5 Mbaud on hardware serial, and with SoftwareSerial you may see an impact much sooner.
Personally, I had some trouble with a Chinese Nano clone that used a CH340 USB/serial chip; Python communication to the Arduino with pySerial was unreliable at speeds over 9600 baud.
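A small illustrative sketch of that buffering effect (assuming the classic Arduino core with its 64-byte transmit buffer; the numbers are approximate):

    // At 9600 baud each character takes about 1 ms to leave the buffer, so a
    // burst of prints larger than 64 bytes makes the later calls block.
    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      unsigned long t0 = micros();
      for (int i = 0; i < 10; i++) {
        Serial.println("0123456789012345678901234567890123456789");  // ~42 bytes with CR/LF
      }
      unsigned long elapsed = micros() - t0;   // grows sharply once the buffer fills
      delay(2000);                             // let the buffer drain before reporting
      Serial.println(elapsed);
    }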
Many devices use 9600 or 19200 baud, and I guess that people just copy over values without thinking about them, thereby continuing the practice even if it is no longer necessary.
That said, the maximum length of a serial cable depends on the baud rate you choose. Higher baud rates require shorter cables. So if you don't need the higher rate, just stay with a low one like 9600.

Serial Comms baud rate, parity and stop bits. Which options to use and when?

I'm trying to pick up some serial comms for a new job I am starting. I have done some reading, which has helped a lot; however, a lot of the reading tells you about the specification of serial comms and what everything is, but not when it is best to use particular options.
My searches for this information so far only seem to pull in the spec; perhaps as a novice I am searching for the wrong terms.
My questions then!
Baud Rate - I have read this is signal changes per second and is often mislabelled as bits per second. Is this essentially bits per second including the frame data if asynchronous, and actually bits per second if synchronous?
Parity - Even/Odd. Is there any difference at all between the two? I'm thinking in terms of efficiency or similar. Does this only still exist for compatibility's sake?
Stop Bits - I have read so far you can have 1 or 2 stop bits. In C# there seems to be an option for 1.5 too. I can't find anything on why you would want/need more than 1.
If anyone can advise on these points, or point me to some recommended reading material I would be very grateful.
Thanks for reading.
edit: typo
You very rarely have a choice; you must make it compatible with the settings that the device uses. If you don't know them, then you need to look in a manual or pick up a phone. Do keep in mind that it is increasingly rare to work with a real serial port device, one that uses a UART. Most commonly you actually talk to an emulated serial port, implemented by a USB or Bluetooth device driver. The settings you use don't matter in such a case, since the actual signaling is implemented by the underlying bus.
If you can configure the device then basic guidelines are:
Baudrate is directly related to the length of the cable and the amount of electrical interference that's present. You have to go slower when you get bit errors. The RS-232 spec only allows for a maximum of 50 ft at 9600 baud.
Parity ought to be used when you don't use an error-correcting protocol. It does not matter whether you pick Odd or Even. Odd people pick odd, it's their prerogative.
Stopbits is usually 1. Picking 1.5 or 2 helps a bit to relieve pressure on a device whose interrupt response times are poor, which shows up as data loss.
Databits is almost always 8, sometimes 7 if the device only handles ASCII codes.
Handshaking is an important setting that never stops causing trouble, since many programmers just overlook it. Modern computers are almost always fast enough to not need it, but that's not necessarily true for devices. The most basic stay-out-of-trouble configuration is to turn DTR on when you open the port and to tell the device driver to take care of RTS/CTS handshaking. Xon/Xoff handshaking is sometimes used, depending on the device.
A good 90% of the battle is won by implementing solid error checking. It is almost always skimped on, which is a bad idea. It is very important for serial port devices, since they have no error-correcting capabilities themselves and very weak error detection. Always make sure that you can detect and properly report overrun, parity and framing errors. And test them by getting the settings intentionally wrong.
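To make the guidelines above concrete, here is a minimal sketch of opening and configuring a port on Linux with termios (the device path and the choice of 9600 8N1 with RTS/CTS handshaking are assumptions for illustration, not settings from the answer):

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);   // device path is an assumption
        if (fd < 0) { perror("open"); return 1; }

        termios tio{};
        tcgetattr(fd, &tio);
        cfsetispeed(&tio, B9600);                  // baud rate, both directions
        cfsetospeed(&tio, B9600);
        tio.c_cflag &= ~(PARENB | CSTOPB | CSIZE); // no parity, 1 stop bit, clear size bits
        tio.c_cflag |= CS8 | CLOCAL | CREAD;       // 8 data bits, ignore modem lines, enable RX
        tio.c_cflag |= CRTSCTS;                    // hardware RTS/CTS handshaking
        tio.c_lflag = 0;                           // raw mode: no echo, no canonical input
        tio.c_iflag = 0;
        tio.c_oflag = 0;
        tcsetattr(fd, TCSANOW, &tio);

        // ... read()/write() on fd, checking return values for overrun/parity/framing errors ...

        close(fd);
        return 0;
    }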

Maximum potential speed for serial port rs232 [closed]

What is the potential maximum speed for an RS-232 serial port on a modern PC? I know that the specification says it is 115200 bps, but I believe it can be faster. What influences the speed of the RS-232 port? I believe it is the quartz resonator, but I am not sure.
This goes back to the original IBM PC. The engineers that designed it needed a cheap way to generate a stable frequency, and turned to crystals that were widely in use at the time, used in any color TV in the USA: a crystal made to run an oscillator circuit at the color burst frequency of the NTSC television standard, which is 315/88 = 3.579545 megahertz. From there, it first went through a programmable divider, the one you change to set the baud rate. The UART itself then divides it by 16 to generate the sub-sampling clock for the data line.
So the highest baud rate you can get comes from setting the divider to the smallest value, 2, which produces 3579545 / 2 / 16 = 111861 baud. That is roughly a 3% error from the ideal 115200 baud, but close enough: the clock rate doesn't have to be exact. That is the point of asynchronous signalling, the A in UART: the start bit always re-synchronizes the receiver.
Getting real RS-232 hardware running at 115200 baud reliably is a significant challenge. The electrical standard is very sensitive to noise, there is no attempt at canceling induced noise and no attempt at creating an impedance-matched transmission line. The maximum recommended cable length at 9600 baud is only 50 feet. At 115200 only very short cables will do in practice. To go further you need a different approach, like RS-422's differential signals.
This is all ancient history and doesn't exactly apply to modern hardware anymore. True serial hardware based on a UART chip like the 16550 has been disappearing rapidly, replaced by USB emulators, which use a custom driver to emulate a serial port. They do accept a baud rate selection but just ignore it for the USB bus itself; it only applies to the last half-inch in the dongle you plug into the device. Whether or not the driver accepts 115200 as the maximum value is a driver implementation detail; they usually accept higher values.
The maximum speed is limited by the specs of the UART hardware.
I believe the "classical" PC UART (the 16550) in modern implementations can handle at least 1.5 Mbps. If you use a USB-based serial adapter, there's no 16550 involved and the limit is instead set by the specific chip(s) used in the adapter, of course.
I regularly use an RS-232 link running at 460,800 bps, with a USB-based adapter.
In response to the comment about clocking (with a caveat: I'm a software guy): asynchronous serial communication doesn't transmit the clock (that's the asynchronous part right there) along with the data. Instead, transmitter and receiver are supposed to agree beforehand about which bitrate to use.
A start bit on the data line signals the start of each "character" (typically a byte, but with start/stop/parity bits framing it). The receiver then starts sampling the data line in order to determine whether it's a 0 or a 1. This sampling is typically done at least 16 times faster than the actual bit rate, to make sure it's stable. So for a UART communicating at 460,800 bps like I mentioned above, the receiver will be sampling the RX signal at around 7.4 MHz. This means that even if you clock the actual UART with a raw frequency f, you can't expect it to reliably receive data at that rate; there is overhead.
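A quick back-of-the-envelope check of that 16x oversampling figure (plain C++, just the arithmetic from the paragraph above):

    #include <cstdio>

    int main()
    {
        const long rates[] = {9600, 115200, 460800};
        for (long baud : rates) {
            // A 16x oversampling receiver needs a sampling clock of 16 * baud.
            std::printf("%6ld baud -> %.4f MHz sampling clock\n", baud, baud * 16 / 1e6);
        }
        return 0;   // 460800 baud -> ~7.37 MHz, i.e. "around 7.4 MHz"
    }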
Yes, it is possible to run at higher speeds, but the major limitation is the environment: in a noisy environment there will be more corrupt data, limiting the speed. Another limitation is the length of the cable between the devices; you may need to add a repeater or some other device to strengthen the signal.
