I am implementing UART (serial communication) on a Rock Pi S. I configured a baud rate of 9600 bps and am transferring data; on the other end, the Docklight software captures it.
The time gap between two packets is 17 milliseconds at 9600 bps. Is it possible to reduce that 17 milliseconds to 10 milliseconds?
I have attached an image below.
I am attempting to reverse engineer a device that uses a serial line to communicate.
I have not yet been able to determine the baud rate of the serial line. When I read the data at a given baud rate and convert it to hex, I can see that some of the digits change each time I take a reading.
I am certain that each time I take a reading the device is performing the same action, and should therefore be outputting the same signal.
If I receive different digits each time I take a measurement, could this be a symptom of having selected an incorrect baud rate?
For a project I need a device that communicates on a CAN bus network, an Ethernet network, and over RS-232. I want to use a single MCU that will act as the main unit of the CAN bus star topology and the Ethernet star topology, and that MCU will also forward the RS-232 data it receives to another device. I want to use high-speed CAN, which can run at up to 1 Mbit/s; RS-232, however, tops out at around 20 kbaud. I wonder whether it is doable with one MCU to handle three different communication interfaces (CAN bus, Ethernet, and RS-232). I am afraid of getting overrun with data at some point. I can buffer data short term if it comes in bursts that can be averaged out. For continuous data where I'll never be able to keep up, I'll need to discard messages, perhaps in a managed way, but I do not want to discard any data. So my question is: would using one MCU for this case work? And are there any software tricks that would help me here (like giving the CAN bus a higher priority, etc.)?
Yes, this can be done with a single MCU. Even a simple MCU should easily be able to handle data rates of 1 Mbit/s. Most likely you will want to use DMA-enabled transfers, so the CPU core only needs to act when the transfer of a chunk of data has completed.
The problem of being overrun by data due to the mismatch in data rate is a separate topic:
If the mismatch persists, no system can handle it, no matter how capable.
If the mismatch is temporary, it's just a function of the available buffer size.
So if the worst case you want to handle is 10 s of incoming data at 1 Mbit/s (with an outgoing rate of 20 kbit/s), then you will need 10 s x (1 Mbit/s - 20 kbit/s) = 9.8 Mbit = 1.225 MByte of buffer memory.
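To make the buffering idea concrete, here is a minimal single-producer/single-consumer ring-buffer sketch in C. The names and sizes are illustrative only (the 1.25 MiB figure is just the worst case above rounded up); it assumes one fast receive path filling the buffer and one slow transmit path draining it.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Illustrative sizing: 10 s burst at (1 Mbit/s - 20 kbit/s) = 9.8 Mbit ~ 1.225 MB,
   rounded up to 1.25 MiB here. Adjust to your own worst case. */
#define BURST_BUF_SIZE 1310720u     /* 1.25 MiB */

static uint8_t buf[BURST_BUF_SIZE];
static volatile size_t head = 0;    /* written by the fast (e.g. CAN) side   */
static volatile size_t tail = 0;    /* read by the slow (e.g. RS-232) side   */

/* Called from the fast receive path: returns false if the buffer is full. */
bool burst_put(uint8_t byte)
{
    size_t next = (head + 1) % BURST_BUF_SIZE;
    if (next == tail)
        return false;               /* overrun: burst exceeded the budget */
    buf[head] = byte;
    head = next;
    return true;
}

/* Called from the slow transmit path: returns false if nothing is queued. */
bool burst_get(uint8_t *byte)
{
    if (tail == head)
        return false;
    *byte = buf[tail];
    tail = (tail + 1) % BURST_BUF_SIZE;
    return true;
}
```

If burst_put() ever returns false, the burst exceeded the buffer budget and you are back in the persistent-mismatch case, where data has to be dropped or the sender throttled.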
I have been using the Teensy 3.6 microcontroller board (180 MHz ARM Cortex-M4 processor) to try and implement a driver for a sensor. The sensor is controlled over SPI and when it is commanded to make a measurement, it sends out the data over two lines, DOUT and PCLK. PCLK is a 5 MHz clock signal and the bits are sent over DOUT, measured on the falling edges of the PCLK signal. The data frame itself consists of 1,024 16-bit values.
My first attempt consisted of a relatively naïve approach: I attached an interrupt to the PCLK pin looking for falling edges. When it detects a falling edge, it sets a bool indicating that a new bit is available and sets another bool to the value of the DOUT line. The main loop of the program assembles a uint16_t value from these bits and collects 1,024 of these values for the full measurement frame.
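In rough outline (a sketch using the Arduino/Teensyduino API; the pin numbers are placeholders, not taken from the question), that approach looks like this:

```c
#include <Arduino.h>

#define PCLK_PIN 2   /* placeholder pin numbers */
#define DOUT_PIN 3

volatile bool bitReady = false;
volatile bool bitValue = false;

/* Fires on every falling PCLK edge: latch the DOUT level and flag a new bit. */
void pclkFallingIsr(void)
{
    bitValue = digitalRead(DOUT_PIN);
    bitReady = true;
}

uint16_t frame[1024];

void setup(void)
{
    pinMode(PCLK_PIN, INPUT);
    pinMode(DOUT_PIN, INPUT);
    attachInterrupt(digitalPinToInterrupt(PCLK_PIN), pclkFallingIsr, FALLING);
}

void loop(void)
{
    static uint16_t word = 0;
    static int bitCount = 0;
    static int wordCount = 0;

    /* Bits are lost if loop() cannot keep up with the 5 MHz edge rate. */
    if (bitReady) {
        bitReady = false;
        word = (uint16_t)((word << 1) | (bitValue ? 1u : 0u));
        if (++bitCount == 16) {
            frame[wordCount++] = word;
            word = 0;
            bitCount = 0;
            if (wordCount == 1024)
                wordCount = 0;      /* full frame captured */
        }
    }
}
```

At a 5 MHz PCLK this means one interrupt every 200 ns, which is only 36 CPU cycles at 180 MHz, so the core spends essentially all of its time entering and leaving the ISR and never gets back to loop(); that is consistent with the lock-up described next.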
However, this program locks up the Teensy almost immediately. From my experiments, it seems to lock up as soon as the interrupt is attached. I believe that the microprocessor is being swamped by interrupts.
I think that the correct way of doing this is by using the Teensy's DMA controller. I have been reading Paul Stoffregen's DMAChannel library but I can't understand it. I need to trigger the DMA measurements from the PCLK digital pin and have it read in bits from the DOUT digital pin. Could someone tell me if I am looking at this problem in the correct way? Am I overlooking something, and what resources should I view to better understand DMA on the Teensy?
Thanks!
I put this on the Software Engineering Stack Exchange because I feel that this is primarily a programming problem, but if it is an EE problem, please feel free to move it to the EE SE.
Is DMA the Correct Way to Receive High-Speed Digital Data on a Microprocessor?
There is more than one source of 'high speed digital data'. DMA is not the globally correct solution for all data, but it can be a solution.
it sends out the data over two lines, DOUT and PCLK. PCLK is a 5 MHz clock signal and the bits are sent over DOUT, measured on the falling edges of the PCLK signal.
I attached an interrupt to the PCLK pin looking for falling edges. When it detects a falling edge, it sets a bool that a new bit is available and sets another bool to the value of the DOUT line.
This approach would be called 'bit bashing': you are using the CPU to physically sample the pins. It is a worst-case solution that I see many experienced developers implement, and it will work with any hardware connection. Fortunately, the Kinetis K66 has several peripherals that may be able to assist you.
Specifically, the FTM, CMP, I2C, SPI and UART modules may be useful. These hardware modules can reduce the workload from processing each bit to processing groups of bits. For instance, the FTM supports a capture mode. The idea is to ignore the PCLK signal and just measure the time between edges on the data line; these times will always be whole multiples of the bit period (one PCLK cycle). If the timer captures an interval of two bit periods, then you know that two identical bits (two ones or two zeros) were sent.
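As a rough illustration of that decoding step (not FTM-specific code; it assumes you already have an array of edge timestamps in timer ticks and know the number of ticks per bit period):

```c
#include <stdint.h>
#include <stddef.h>

/* Round an edge-to-edge interval to a whole number of bit periods. */
static unsigned interval_to_bits(uint32_t ticks, uint32_t ticks_per_bit)
{
    return (ticks + ticks_per_bit / 2u) / ticks_per_bit;
}

/* Decode edge timestamps into a bit stream. 'first_level' is the line level
   before the first captured edge; the level toggles at every edge. */
size_t decode_edges(const uint32_t *edge_ticks, size_t n_edges,
                    uint32_t ticks_per_bit, int first_level,
                    uint8_t *bits_out, size_t max_bits)
{
    size_t n_bits = 0;
    int level = first_level;

    for (size_t i = 1; i < n_edges; i++) {
        uint32_t dt = edge_ticks[i] - edge_ticks[i - 1];
        unsigned run = interval_to_bits(dt, ticks_per_bit);
        for (unsigned k = 0; k < run && n_bits < max_bits; k++)
            bits_out[n_bits++] = (uint8_t)level;
        level = !level;             /* the line toggled at this edge */
    }
    return n_bits;
}
```

With PCLK at 5 MHz the bit period is 200 ns, so ticks_per_bit is whatever 200 ns works out to at the timer's counter clock.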
Also, your signal looks like SSI, which is a 'digital audio' interface. Unfortunately, the K66 doesn't have an SSI module. Typical I2C is open drain and always has a start bit and a fixed word size; it may be possible to use it if you have some knowledge of the data and/or can attach some circuitry to fake some bits (to be removed later).
You could use the UART and the time between characters to capture data. The time will be a run of bits that aren't the start bit. However, it looks like this UART module requires stop bits (the SIM features are probably very limited).
Once you have done this, the decision between DMA, interrupts, and polling can be made. Nothing is faster than polling if the CPU actually uses the data. DMA and interrupts are needed if you need to multiplex the CPU with the data transfer. DMA is better if the CPU doesn't need to act on most of the data, or if the work the CPU is doing is not memory-intensive (number crunching). Interrupts depend on your context-save overhead, which can be minimized depending on the facilities your main-line code uses.
Some glue circuitry to adapt the signal to one of the K66 modules could go a long way to making a more efficient solution. If you can't change the signal, another (NXP?) SOC with an SSI module would work well. The NXP modules usually support chaining to an eDMA module as well as interrupts.
While running some tests with an FT232R USB-to-RS-232 chip, which should be able to manage speeds of up to 3 Mbaud, I have the problem that my actual speed is only around 38 kbaud, or 3.8 KB/s.
I've searched the web, but I could not find any comparable data to prove or disprove this limitation.
While I am looking further into this, I would like to know, if someone here has comparable data.
I tested with my own code and with this tool here:
http://www.aggsoft.com/com-port-stress-test.htm
Settings were 115,200 baud, 8N1, and 64-byte data packets.
I would have expected results like these:
At 115,200 baud -> effectively 11,520 bytes/s or 11.52 KB/s
At 921,600 baud -> 92.16 KB/s
I need to confirm a minimum speed of 11.2 KB/s, and ideally speeds of around 15-60 KB/s.
Based on the datasheet, this should be no problem; based on reality, I am stuck at 3.8 KB/s, for now at least.
Update: I found a quite useful hint. My transfer rate is highly dependent on the size of the packets. Using 64-byte packets I end up with 3.8 KB/s, and using 180-byte packets it averages around 11.26 KB/s. The light really went on when I checked the speed for 1-byte packets: around 64 bytes/s!
Adding some math to it: 11.52 KB/s divided by 180 equals 64 bytes/s. So basically the speed scales with the packet size. Is this right? And why is that?
The results that you observe are due to the way serial over USB works. This is a USB 1.1 chip, and USB does transfers in packets, not as a continuous stream the way a plain serial line does.
So your device gets a time-sliced window, and it is up to the driver to utilize this window effectively. When you set the packet size to 1, you can only transmit one byte per USB transfer. To transmit the next byte, you have to wait for your turn again.
Usually a USB device has a buffer on the device end where it can store data between transfers and thus keep the output rate constant. You are under-flowing this buffer when you set the packet size too low. The time slice on USB 1.1 is 10 ms, which only gives you 100 transfers per second to be shared between all of the devices.
When you make a "send" call, all of your data goes out in one transfer to keep interactive applications responsive. It is best to use the maximum transfer size to achieve the best performance on USB devices. This is not always possible for an interactive application, but it usually is for a bulk data-transfer application.
What is the maximum potential speed of an RS-232 serial port on a modern PC? I know that the specification says 115,200 bps, but I believe it can go faster. What determines the speed of the RS-232 port? I believe it is the quartz resonator, but I am not sure.
This goes back to the original IBM PC. The engineers who designed it needed a cheap way to generate a stable frequency, and they turned to crystals that were widely in use at the time in every color TV in the USA: a crystal made to run an oscillator circuit at the color-burst frequency of the NTSC television standard, which is 315/88 = 3.579545 MHz. From there, the clock first goes through a programmable divider, the one you change to set the baud rate. The UART itself then divides it by 16 to generate the sub-sampling clock for the data line.
So the highest baud rate you can get comes from setting the divider to the smallest value, 2, which produces 3579545 / 2 / 16 = 111861 baud, roughly a 2.9% error from the ideal 115,200 baud. But that is close enough; the clock rate doesn't have to be exact. That is the point of asynchronous signalling, the A in UART: the start bit re-synchronizes the receiver for every character.
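If it helps, the divider arithmetic can be checked with a few lines of C (the values are the ones given above; nothing here is specific to any particular UART driver):

```c
#include <stdio.h>

int main(void)
{
    double clock_hz = 3579545.0;   /* NTSC color-burst crystal */
    int    divisor  = 2;           /* smallest programmable divider */
    double ideal    = 115200.0;

    double baud  = clock_hz / divisor / 16.0;          /* ~111861 baud */
    double error = (ideal - baud) / ideal * 100.0;     /* ~2.9 %       */

    printf("baud = %.0f, error vs %.0f = %.1f %%\n", baud, ideal, error);
    return 0;
}
```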
Getting real RS-232 hardware running at 115200 baud reliably is a significant challenge. The electrical standard is very sensitive to noise: there is no attempt at canceling induced noise and no attempt at creating an impedance-matched transmission line. The maximum recommended cable length at 9600 baud is only 50 feet; at 115200, only very short cables will do in practice. To go further you need a different approach, like RS-422's differential signals.
This is all ancient history and doesn't exactly apply to modern hardware anymore. True serial hardware based on a UART chip like the 16550 has been disappearing rapidly, replaced by USB adapters with a custom driver that emulates a serial port. They do accept a baud rate selection but simply ignore it for the USB bus itself; it only applies to the last half-inch inside the dongle you plug into the device. Whether or not the driver accepts 115200 as the maximum value is a driver implementation detail; they usually accept higher values.
The maximum speed is limited by the specs of the UART hardware.
I believe the "classical" PC UART (the 16550) in modern implementations can handle at least 1.5 Mbps. If you use a USB-based serial adapter, there's no 16550 involved and the limit is instead set by the specific chip(s) used in the adapter, of course.
I regularly use a RS232 link running at 460,800 bps, with a USB-based adapter.
In response to the comment about clocking (with a caveat: I'm a software guy): asynchronous serial communication doesn't transmit the clock (that's the asynchronous part right there) along with the data. Instead, transmitter and receiver are supposed to agree beforehand about which bitrate to use.
A start bit on the data line signals the start of each "character" (typically a byte, framed by start/stop/parity bits). The receiver then samples the data line to determine whether each bit is a 0 or a 1. This sampling is typically done at least 16 times faster than the actual bit rate, to make sure the value is read when it's stable. So for a UART communicating at 460,800 bps like the one I mentioned above, the receiver will be sampling the RX signal at around 7.4 MHz. This means that even if you clock the actual UART with a raw frequency f, you can't expect it to reliably receive data at that rate; there is overhead.
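As a small illustration of the framing and the 16x oversampling figure (a standalone sketch in C; the byte value and baud rate are just examples):

```c
#include <stdio.h>
#include <stdint.h>

/* Build the 10-bit 8N1 frame for one byte: start bit (0), eight data bits
   LSB first, stop bit (1). Bit 0 of the return value is sent first. */
static uint16_t frame_8n1(uint8_t byte)
{
    return (uint16_t)(0u                 /* start bit = 0        */
           | ((uint16_t)byte << 1)       /* data bits, LSB first */
           | (1u << 9));                 /* stop bit = 1         */
}

int main(void)
{
    uint16_t f = frame_8n1(0x41);        /* 'A' -> "0100000101" on the wire */
    for (int i = 0; i < 10; i++)
        printf("%u", (f >> i) & 1u);
    printf("\n");

    double baud = 460800.0;
    printf("16x oversampling clock: %.1f MHz\n", baud * 16.0 / 1e6);  /* ~7.4 MHz */
    return 0;
}
```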
Yes, it is possible to run at higher speeds, but the major limitation is the environment: in a noisy environment there will be more corrupted data, which limits the usable speed. Another limitation is the length of the cable between the devices; you may need to add a repeater or some other device to strengthen the signal.