Using two USARTs running at 115200 baud on an STM32F2, one to communicate with a radio module and one for serial from a PC. The clock speed is 120 MHz.
When receiving data from both USARTs simultaneously, overrun errors can occur on one USART or the other. Doing some quick back-of-the-envelope calculations, there should be enough time to process both, as the interrupt handlers simply copy the byte into a circular buffer.
Both in theory and from measurement, the interrupt code that pushes a byte into the buffer should/does run in the order of 2-4 µs, and at 115200 baud we have around 70 µs to process each character.
Why are we seeing occasional OREs on one or the other USART?
Update - additional information:
No other ISRs in our code are firing at this time.
We are running Keil RTX with the SysTick interrupt configured to fire every 10 ms.
We are not disabling any interrupts at this time.
According to this book (The Designer's Guide to the Cortex-M Processor Family), the interrupt latency is around 12 cycles (not really deadly).
Given all the above, 70 µs is at least a factor of 10 more than the time we take to clear the interrupts, so I'm not sure it is so easy to explain. Should I be concluding that there must be some other factor I am overlooking?
MDK-ARM is version 4.70
The SysTick interrupt is used by the RTOS, so I cannot time it; the other ISRs take 2-3 µs per byte to run.
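For reference, each RX handler is essentially of the shape below (a simplified sketch, not the actual code; the buffer size, names and register-level access are placeholders):
#define RX_BUF_SIZE 256

static volatile uint8_t  rx_buf[RX_BUF_SIZE];
static volatile uint16_t rx_head = 0;   // written only by the ISR
static volatile uint16_t rx_tail = 0;   // read by the main loop / RTOS task

void USART1_IRQHandler(void)
{
    uint32_t sr = USART1->SR;                 // reading SR then DR also clears ORE
    if (sr & (USART_SR_RXNE | USART_SR_ORE)) {
        uint8_t byte = (uint8_t)USART1->DR;
        uint16_t next = (rx_head + 1) % RX_BUF_SIZE;
        if (next != rx_tail) {                // drop the byte if the ring buffer is full
            rx_buf[rx_head] = byte;
            rx_head = next;
        }
    }
}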
I ran into a similar problem to yours a few months ago on a Cortex-M4 (SAM4S). I have a function that gets called at 100 Hz based on a timer interrupt.
At the same time I had a UART configured to interrupt on character reception. The expected data over the UART was 64-byte packets, and interrupting on every character caused so much latency that my 100 Hz update function was running at about 20 Hz. 100 Hz is relatively slow on this particular 120 MHz processor, but interrupting on every character was causing massive delays.
I decided to configure the UART to use PDC (Peripheral DMA controller) and my problems disappeared instantly.
DMA allows the UART to store data in memory WITHOUT interrupting the processor until the buffer is full, saving lots of overhead.
In my case, I told the PDC to store UART data into a buffer (byte array) and specified the length. When the UART (via the PDC) had filled the buffer, the PDC issued an interrupt.
In the PDC ISR (a rough sketch follows below):
Give the PDC a new, empty buffer
Restart the UART PDC (so it can collect data while we do other work in the ISR)
memcpy the full buffer into the ring buffer
Exit the ISR
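Roughly like this (a sketch only: pdc_set_rx_buffer(), pdc_restart_rx() and ringbuffer_write() are hypothetical stand-ins for whatever your vendor library provides, and the ping-pong buffer size is an assumption):
#include <stdint.h>

#define PDC_BUF_SIZE 64

// Hypothetical helpers standing in for the vendor/ASF PDC and ring-buffer calls.
void pdc_set_rx_buffer(uint8_t *buf, uint32_t len);
void pdc_restart_rx(void);
void ringbuffer_write(const uint8_t *src, uint32_t len);

static uint8_t pdcBuf[2][PDC_BUF_SIZE];   // two buffers used ping-pong style
static int     activeBuf = 0;

void UART_PDC_Handler(void)
{
    uint8_t *fullBuf = pdcBuf[activeBuf];

    activeBuf ^= 1;                                    // hand the PDC the other (empty) buffer
    pdc_set_rx_buffer(pdcBuf[activeBuf], PDC_BUF_SIZE);
    pdc_restart_rx();                                  // reception continues while we copy

    ringbuffer_write(fullBuf, PDC_BUF_SIZE);           // copy the full buffer into the ring buffer
}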
As swineone recommended above, implement DMA and you'll love life.
Had a similar problem. Short solution: change the oversampling to 8x, which makes the USART clock more precise. And choose your MCU clock wisely!
huart1.Init.OverSampling = UART_OVERSAMPLING_8;
Furthermore, add a USART error handler and a mechanism to check that your data is valid, such as CRC16. Here is an example for the STM32F0xx series; I am assuming it should be pretty similar across the series.
void UART_flush(void) {
    // Flush the UART RX buffer if RXNE is set
    if (READ_BIT(huart1.Instance->ISR, USART_ISR_RXNE)) {
        SET_BIT(huart1.Instance->RQR, UART_RXDATA_FLUSH_REQUEST);
    }
    // Not available on F030xx devices!
    // SET_BIT(huart1.Instance->RQR, UART_TXDATA_FLUSH_REQUEST);
    // Clear all errors (if needed)
    if (READ_BIT(huart1.Instance->ISR, USART_ISR_ORE | USART_ISR_FE | USART_ISR_NE)) {
        SET_BIT(huart1.Instance->ICR, USART_ICR_ORECF | USART_ICR_FECF | USART_ICR_NCF);
    }
}
// USART error handler
void HAL_UART_ErrorCallback(UART_HandleTypeDef *huart) {
    if (huart->Instance == USART1) {
        // See if we have any errors
        if (READ_BIT(huart1.Instance->ISR, USART_ISR_ORE | USART_ISR_FE | USART_ISR_NE | USART_ISR_RXNE)) {
            // Flush the errors
            UART_flush();
            // Raise the error handler
            _Error_Handler(__FILE__, __LINE__);
        }
    }
}
DMA might help as well. My problem was related to USART clock tolerances, which can cause overrun errors even with DMA implemented, since it is a USART hardware problem. Anyway, hope this helps someone out there! Cheers!
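If you do try the DMA route with HAL, the receive side can be as simple as the sketch below (it assumes the DMA channel for USART1 RX has been configured in circular mode, e.g. via CubeMX; the buffer size is arbitrary):
// Minimal sketch: start circular DMA reception into a byte buffer.
static uint8_t dmaRxBuf[256];

void start_uart_rx_dma(void)
{
    if (HAL_UART_Receive_DMA(&huart1, dmaRxBuf, sizeof(dmaRxBuf)) != HAL_OK) {
        _Error_Handler(__FILE__, __LINE__);
    }
}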
I had this problem recently, so I implemented the UART_ErrorCallback function, which was not implemented yet (there was just the __weak version).
It looks like this:
void HAL_UART_ErrorCallback(UART_HandleTypeDef *huart)
{
    if(huart == &huart1)
    {
        HAL_UART_DeInit(&huart1);
        MX_USART1_UART_Init(); // my initialization code
        ...
    }
}
And this solved the overrun issue.
THE GOAL
In my Qt application, I need to control a GPIO pin, depending on data being sent over the serial bus. So, I need to set it HIGH for as long as I transmit data, and LOW immediately after the transmission ends. Consider it a serial communication flow-control pin which, when set to 1, enables transmission, and when set to 0, enables reception of data. The entire system is half-duplex and communicates in a master-slave fashion.
THE PROBLEM
I managed to come close to a solution by setting the pin HIGH immediately before any transmission, introducing some constant delay (I used QThread::usleep()) depending on the baud rate, and then setting it LOW again, but I was getting random "stretchings" of the pulse (staying HIGH longer than it should) when I visualized it with an oscilloscope.
ATTEMPTED SOLUTIONS
Well, it seems that some "magic" is taking place, which adds some extra delay, on top of the one I have manually defined. In order to get rid of that possibility, I used the bytesWritten() signal, so I can fire my setPinLow() slot when we finish writing the actual data to the port. So my code now looks like this:
classTTY::classTTY(/*someStuff*/) : port(/*some other stuff*/)
{
    s_port = new QSerialPort();
    connect(s_port, SIGNAL(bytesWritten(qint64)), this, SLOT(setPinLow()));
    if(GPIOPin->open(QFile::ReadWrite | QFile::Truncate | QFile::Text | QFile::Unbuffered)) {
        qDebug() << "GPIO pin ready to switch.";
    } else {
        qDebug() << "Failed to access GPIO pin";
    }
}

bool classTTY::sendData(data, replyLength)
{
    gpioPinEnable(true);
    if(s_port->isOpen()) {
        s_expectedReplyLength = replyLength;
        s_receivedData.clear();
        s_port->flush();
        s_port->write(data);
        return true;
    }
    return false;
}
void classTTY::setPinLow()
{
    gpioPinEnable(false);
}

void classTTY::gpioPinEnable(bool enable){
    if(enable == true){
        GPIOPin->write("1");
    } else if (enable == false) {
        GPIOPin->write("0");
    }
}
After implementing this, the pin started to give really short pulses, much more like "spikes", which implies (I think) that it now stays HIGH only for as long as the Qt write() call lasts, and not for as long as the data actually propagates on the wire.
THE QUESTION(S)
What is that extra delay being added when I use the naive QThread::usleep() approach, that causes the stretching of the pulse?
Why is the signal-slot approach not working, since it is event-driven?
In general, how can I instruct the pin to go active ONLY during the transmission of data and then drop again to zero, so I can receive the slave's reply?
What is that extra delay being added when I use the naive QThread::usleep() approach, that causes the stretching of the pulse?
Linux is not a real-time operating system; a thread sleep suspends the thread for no less than the time specified. During the sleep, other threads and processes may run, and they may not yield the processor until well after your sleep period has expired, or may not yield at all and consume their entire OS-allocated time slice. Besides that, kernel-driver interrupt handlers will always preempt a user-level process. Linux has a build option for real-time scheduling, but the guarantees remain less robust than those of a true RTOS and the latencies are typically worse.
Note also that not only can your thread be suspended for longer than the sleep period, but the transmission itself may also take longer than number-of-bits divided by baud rate: the kernel driver can be preempted by other drivers and introduce inter-character gaps over which you have no control.
Why is the signal-slot approach not working, since it is event-driven?
The documentation for QSerialPort::waitForBytesWritten() states:
This function blocks until at least one byte has been written to the serial port and the bytesWritten() signal has been emitted.
So it is clear that the semantics of this are "some data has been written" rather than "all data has been written". It will return whenever a byte is written; then, if you call it again, it will likely return immediately if bytes are continuing to be written (because QSerialPort is buffered and will write data independently of your application).
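If what you actually need is "all application-side data handed to the driver" (which is still not the same as "all bits on the wire"), something like the sketch below gets you closer than a single wait; this is only a sketch inside sendData(), and it still ignores data buffered in the kernel driver and the UART FIFO:
    s_port->write(data);
    while (s_port->bytesToWrite() > 0) {
        s_port->waitForBytesWritten(100);   // wait up to 100 ms for another chunk to reach the driver
    }
    gpioPinEnable(false);                   // the pin may still drop before the final bits leave the wire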
In general, how can I instruct the pin to go active ONLY during the transmission of data and then drop again to zero, so I can receive the slave's reply?
Qt is unfortunately not the answer; this behaviour needs to be implemented in the serial port kernel driver, or at least at a lower level than Qt. The Qt QSerialPort abstraction does not give you the level of control or insight into what actually happens "on the wire" that you need. It is somewhat arm's-length from the hardware, for good reason.
However, there is a simple solution: don't bother! It seems entirely unnecessary. It is a master-slave communication, and as such the data itself is the flow control. The slave does not talk until spoken to, and the master must expect and wait for a reply after it has spoken. Why does the slave need any permission to speak other than that implied by being spoken to?
I have steppers being stepped from an interrupt timer at 50, and I had all my code working between the interrupts until I tried reading serial commands more than one character long.
I'm getting dropped bytes, so my strings are missing a letter every 4-5 chars. I researched all day to try and figure out a solution but have come up with nothing. If I don't use an interrupt, my stepper stops for 2 seconds while reading a one-character serial input as a String.
My goal is to have a remote control app sending speed commands. I need help working this problem out.
https://sourceforge.net/p/open-slider/code/ci/master/tree/OpenSliderFirmware/
String incomingString = "";
if (Serial.available() > 0) {
    incomingString = Serial.readString();
    Serial.println(incomingString);
}
Using the AccelStepper library
interrupt:
//Interrupt Timer1
void ISR_stepperManager() {
    Slide.runSpeed();
    Xaxis.runSpeed();
    Yaxis.runSpeed();
}
Quick answer: you don't, if the interrupt timer is cutting in too often.
I resolved the problem by using a variable interrupt timer and a step multiplier. Basically, the steps are issued every time the timer interrupt fires, instead of checking millis() inside the interrupt function. This solved many issues. The speed of the stepper is now controlled by the interrupt period. This gave me more free cycles to fully read the incoming serial data without corruption and improved efficiency. Issuing several steps per interrupt when doing over 4k steps/s also improved efficiency, requiring fewer cycles for a high step rate.
The serial is processed one char per cycle to prevent blocking.
Overall, if you are using serial together with a timer interrupt, and the interrupt fires more often than every ~100 µs, you should be cautious about how much code you run inside the interrupt, or it will cause issues with incoming serial and user inputs. With more than a few lines of code in a 25 µs timer interrupt, incoming serial will not function.
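To illustrate the "one char per cycle" idea (a sketch only: the buffer size, the '\n' terminator and handleCommand() are assumptions, not taken from the linked firmware):
// Read at most one byte per loop() pass, so the loop never blocks while the
// timer interrupt keeps the steppers running.
const uint8_t CMD_MAX = 32;
char cmdBuf[CMD_MAX];
uint8_t cmdLen = 0;

void handleCommand(const char *cmd);   // hypothetical: apply the received speed command

void loop() {
    if (Serial.available() > 0) {
        char c = Serial.read();
        if (c == '\n' || cmdLen >= CMD_MAX - 1) {
            cmdBuf[cmdLen] = '\0';         // terminate and hand off the complete command
            handleCommand(cmdBuf);
            cmdLen = 0;
        } else {
            cmdBuf[cmdLen++] = c;
        }
    }
    // stepping continues from the timer interrupt; nothing here blocks
}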
I'm not sure if it will help with your problem, but I have seen over time that the String type is not safe to use when other things need to happen.
I prefer to use a char array and read one char at a time.
char data[32];   // buffer size is an assumption
uint8_t x = 0;

while (Serial.available() && x < sizeof(data))
{
    data[x] = Serial.read();
    x++;
}
I find it much more reliable.
Hope it helps!
I'm using the Bluetooth Serial Port Profile to communicate with an Arduino. The Bluetooth module (HC-06) is connected to my digital pins 10 and 11 (RX, TX). The module is working properly, but I need an interrupt on data receive. I can't periodically check for incoming data, as the Arduino is working on a time-sensitive task (music playing through a passive buzzer) and I need control signals to interrupt immediately on receive. I've looked through many documents, including Arduino's own site, and they all explain how to establish regular communication by checking serialPort.available() periodically. I've found one SO question, Arduino Serial Interrupts, but that's too complicated for my level. Any suggestions on reading real-time input through serial?
Note that the current version of SoftwareSerial actually uses PCINT to detect the individual bits. Hence I believe defining the same interrupt again in your sketch would conflict with SoftwareSerial's own detection of the bits.
I am reluctant to suggest this, as it means modifying a core library, which is difficult to avoid when sharing interrupts. But if desperate, you could modify that routine to suit your need.
within
\arduino-1.5.7\hardware\arduino\avr\libraries\SoftwareSerial\SoftwareSerial.cpp.
//
// The receive routine called by the interrupt handler
//
void SoftwareSerial::recv()
{
  ...
  // if buffer full, set the overflow flag and return
  if ((_receive_buffer_tail + 1) % _SS_MAX_RX_BUFF != _receive_buffer_head)
  {
    // save new data in buffer: tail points to where byte goes
    _receive_buffer[_receive_buffer_tail] = d; // save new byte
    _receive_buffer_tail = (_receive_buffer_tail + 1) % _SS_MAX_RX_BUFF;
#ifdef YOUR_THING_ENABLE
    // Quickly check if it is what you want and DO YOUR THING HERE!
#endif
  }
  ...
}
But beware: you are still in an ISR, all interrupts are OFF, and you are blocking EVERYTHING. One should not lollygag nor dilly-dally here. Do something quick and get out.
I have a PIC24-based system equipped with a 24-bit, 8-channel ADC (google "MCP3914 Evaluation Board" for more details...).
I have got the board to sample all 8 channels, store the data in a 512x8 buffer and transmit the data to a PC using a USB module when the buffer is full (this is done by different interrupts).
The only problem is that when the MCU is transmitting data (the UART transmit interrupt has higher priority than the ADC reading interrupt) the ADC is not sampling, hence there will be data loss (the sample rate is around 500 samples/sec).
Is there any way to prevent this data loss? Maybe some multitasking?
Simply transmit the information to the UART register without using interrupts, but by polling the TXIF bit:
while (PIR1.TXIF == 0);   // wait until the transmit register is free
TXREG = data;             // the byte you want to send
The same applies to the ADC conversion: if you were using interrupts to start/stop a conversion, simply poll the required bits (ADON) and that's it.
The TX bits and AD bits may vary depending on your PIC.
That prevents the MCU from entering an interrupt service routine and losing 3-4 samples.
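On a PIC24 the polled transmission of a whole buffer might look like the sketch below (U1STAbits.UTXBF and U1TXREG are the UART1 names from the XC16 device headers and are an assumption here; adjust to the UART instance you actually use):
#include <xc.h>
#include <stdint.h>

// Send a buffer by polling instead of using the TX interrupt.
void uart1_send_polled(const uint8_t *buf, unsigned len)
{
    for (unsigned i = 0; i < len; i++) {
        while (U1STAbits.UTXBF) {    // wait while the transmit buffer is full
            ;
        }
        U1TXREG = buf[i];            // load the next byte; the hardware shifts it out
    }
}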
In the PIC24 an interrupt can be assigned one of 8 priority levels. Take a look at the corresponding section in the "Family Reference Manual" -> http://ww1.microchip.com/downloads/en/DeviceDoc/70000600d.pdf
Alternatively you can use the DMA channels, which are very handy. You can configure your ADC to use DMA, and thus sampling and feeding the buffer won't use any CPU time; the same goes for the UART, I believe.
http://ww1.microchip.com/downloads/en/DeviceDoc/39742A.pdf
http://esca.atomki.hu/PIC24/code_examples/docs/manuallyCreated/Appendix_H_ADC_with_DMA.pdf
Background
I need a data logging application running on an "Arduino compatible" chipKit UNO32 board, with a connected sensor. Data should be logged to an SD card found on an "Arduino Wireless SD shield".
The sensor is connected via I2C.
My problem is that when I use the Arduino SD library, writing is slow: 25 ms per print() operation, which gives me a maximum of 40 Hz, which is laughable compared to the 100-800 Hz data rate of my sensor.
My faulty solution
Luckily the sensor comes equipped with an on-chip FIFO that can store 32 sensor values. This means I can go to at least 200 Hz without any trouble, since the time to fill the FIFO is much longer than the time to write to the card.
But I'd still really like to get to at least 400Hz, so I thought I'd have the following setup:
Tell sensor to put data in the FIFO
When the FIFO is almost full, the sensor triggers an interrupt (sensor does this, and it works, I can catch the interrupt)
When the Arduino receives the interrupt, it polls the sensor for data (via I2C) and stores the data in a buffer in SRAM.
When the SRAM buffer is getting full, write its contents to the SD-card.
Unfortunately, this does not seem to work, since the Arduino Wire library that handles I2C uses interrupts and cannot be called from within an interrupt handler. I have tried it, and it freezes the microcontroller.
My question
There seem to be other I2C libraries for Arduino that do not rely on interrupts. Should I try that route?
Or is my way of thinking (grabbing a load of data in an ISR) bad from the start? Is there another approach I should take?
Just use the interrupt to set a flag and finish the ISR. From the main loop, do the I2C call instead of calling it directly from the ISR.
volatile boolean fifoFull = false;   // shared with the ISR, so it must be volatile

void fifoISR() {
    fifoFull = true;                 // just set the flag; no I2C here
}

void loop() {
    if (fifoFull) {
        fifoFull = false;            // clear first, so a new interrupt during the transfer isn't lost
        // Do the I2C transfer here (Wire calls are safe outside the ISR)
    }
}
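For completeness, a possible setup (a sketch only: interrupt number 0 and the RISING edge are assumptions, and the external-interrupt pin mapping is board specific on the chipKit):
#include <Wire.h>

void setup() {
    // The sensor's "FIFO almost full" line triggers fifoISR(), which only sets the flag;
    // all I2C traffic stays in loop().
    attachInterrupt(0, fifoISR, RISING);
    Wire.begin();
}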