I'm using the DMA (described as PDC in the datasheet) of the SAM4SD16C with the USART0 peripheral.
I've set up a timer which generates an interrupt every millisecond. Every 5 ms a data transfer should be performed via DMA.
Another interrupt should occur when the TXEMPTY flag is set.
To see when the transmission starts and ends I toggle an output pin and watch it on an oscilloscope. I then realized that the end of reception varies in time by 20 µs (my main clock is 120 MHz), which is not acceptable in my project. Meanwhile, the start of transmission is precise to 100 ns, so there is no problem on that point.
I'm wondering if there is a way to get better control over the DMA transfer timing.
As discussed in the comments above, the imprecision of the end-of-reception instant is due to the baud rate: the jitter is around one baud period, probably plus some additional bus idle time.
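For reference, a minimal sketch of how such a timer-driven PDC transmit might be started (assuming ASF-style register definitions for the SAM4S; the buffer name and length are placeholders). Since TXEMPTY is only raised after the last stop bit leaves the shift register, its timing is quantized by the baud period, consistent with the jitter described above.

#include "sam4s.h"                        // ASF/CMSIS device header (assumption)

static uint8_t tx_buf[32];                // hypothetical payload

/* Called from the 5 ms timer tick: hand the buffer to the PDC. */
void start_pdc_tx(void)
{
    USART0->US_TPR  = (uint32_t)tx_buf;   // PDC transmit pointer
    USART0->US_TCR  = sizeof(tx_buf);     // PDC transmit counter
    USART0->US_PTCR = US_PTCR_TXTEN;      // enable the PDC transmitter
    USART0->US_IER  = US_IER_TXEMPTY;     // interrupt when the shifter drains
}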
I am trying to measure power usage using a DDS353 kWh meter. This meter has a pulse output. I am interested in using the ESP32, since I can periodically send the data over the internet to a Node-RED dashboard. I am also very interested in running the ESP32 in low-power mode and periodically waking it up to send data over MQTT. I have tried out examples from GitHub using Espressif IDF, but I would not mind an Arduino equivalent. I would like to use a hardware interrupt so that when one of the RTC GPIO pins goes high a counter is incremented, while a separate timer interrupt runs and occasionally wakes up the main Xtensa cores, which fetch the count from the RTC domain and send it on. I have looked at the pulse counter examples, but with my limited knowledge I cannot tell whether the interrupts are triggered while the ULP is in sleep mode or only when it is on. I would really be glad if someone could show me how to use the ULP to count pulses even in sleep mode and periodically wake the main cores. I am OK with IDF or Arduino examples.
If you want to count pulses while in deep sleep, you use the ULP. Code on the ULP continues to execute when the board wakes up and goes to normal power mode, so when the chip is awake it will still run the counter on the ULP coprocessor. Unless you stop the ULP periodic wake-up timer, the ULP will keep waking up and running while the main CPU is active.
As you have already checked with this example, it should be pretty close to what you need. The only difference seems to be that the example is set to wake up after a given number of pulses rather than after a fixed amount of time. However, it should be easy to change that by enabling deep-sleep wake-up from the timer. For the Arduino you could look for an equivalent example. Some additional info:
The ULP doesn't have GPIO interrupts. But using a deep-sleep wake stub (a small piece of code which runs immediately after deep sleep, prior to loading the application from flash into RAM), you can increment the pulse counter variable and go back to sleep again. This way you get low power consumption (~5 µA) between pulses and moderate power consumption (around 13 mA) only while the wake stub runs, which is a very short time.
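A minimal sketch of that wake-stub idea in ESP-IDF terms (the counter name is mine; whether you can drop straight back into sleep from the stub is chip- and IDF-version-dependent, so that part is only indicated in a comment):

#include "esp_attr.h"
#include "esp_sleep.h"

// Lives in RTC slow memory, so it survives deep sleep.
RTC_DATA_ATTR static uint32_t pulse_count;

// Runs from RTC fast memory right after wake-up, before the application
// is loaded from flash. It must only touch RTC-resident data.
void RTC_IRAM_ATTR esp_wake_deep_sleep(void)
{
    esp_default_wake_deep_sleep();  // required default housekeeping
    pulse_count++;                  // one EXT0 wake-up == one meter pulse
    // Returning here continues the normal boot. Going straight back to
    // sleep without booting needs the wake-stub sleep API (not shown).
}

In the main application you would arm the wake-up with esp_sleep_enable_ext0_wakeup() on the pulse pin and call esp_deep_sleep_start().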
So it's up to you to experiment with your specific scenario.
You can use the Pulse Counter (PCNT) peripheral of the ESP32 to count pulses in the background; with it you can wake up periodically and just read the count. It is also possible to raise an event when the count reaches a certain threshold, and there are lots of other options.
For information on the available interfaces and APIs for the Pulse Counter (PCNT), please follow this link: https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-reference/peripherals/pcnt.html
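A minimal setup sketch using the legacy ESP-IDF PCNT driver (GPIO 4 as the pulse input is an assumption). Note that PCNT only counts while the chip is awake; it does not run during deep sleep, which is why the ULP/wake-stub approaches above exist.

#include "driver/pcnt.h"

void pcnt_setup(void)
{
    pcnt_config_t cfg = {
        .pulse_gpio_num = 4,                  // meter pulse input (assumption)
        .ctrl_gpio_num  = PCNT_PIN_NOT_USED,  // no control signal
        .channel        = PCNT_CHANNEL_0,
        .unit           = PCNT_UNIT_0,
        .pos_mode       = PCNT_COUNT_INC,     // count rising edges
        .neg_mode       = PCNT_COUNT_DIS,     // ignore falling edges
        .lctrl_mode     = PCNT_MODE_KEEP,
        .hctrl_mode     = PCNT_MODE_KEEP,
        .counter_h_lim  = 32767,
        .counter_l_lim  = 0,
    };
    pcnt_unit_config(&cfg);
    pcnt_counter_clear(PCNT_UNIT_0);
    pcnt_counter_resume(PCNT_UNIT_0);
}

int16_t read_pulses(void)                     // call after each periodic wake-up
{
    int16_t count = 0;
    pcnt_get_counter_value(PCNT_UNIT_0, &count);
    return count;
}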
Initially I faced a lot of issues getting the Pulse Counter (PCNT) to work in the Arduino IDE for the ESP32. After multiple attempts I got it working, and the sample code is uploaded to GitHub for reference. I have not used all the APIs in the official documentation, but the few I did use are working.
I created the sample program for a water flow meter; there, too, we get pulses which need to be counted to measure the water flow rate, so the idea is similar to a kWh meter.
GitHub sample code path: https://github.com/Embedded-Linux-Developement/Arduino_Sample_Programs/tree/main/ESP_32/Water_Flow_Pulse_counter_WithOut_Interrupt_Using_PCNT
I am not placing the code here because it is on GitHub and is not directly for the asked question, but it is similar and you can reuse it. It is working code that I tested on hardware.
Hope it's helpful.
Regards, Jerry James
I have been using the Teensy 3.6 microcontroller board (180 MHz ARM Cortex-M4 processor) to try and implement a driver for a sensor. The sensor is controlled over SPI and when it is commanded to make a measurement, it sends out the data over two lines, DOUT and PCLK. PCLK is a 5 MHz clock signal and the bits are sent over DOUT, measured on the falling edges of the PCLK signal. The data frame itself consists of 1,024 16-bit values.
My first attempt took a relatively naïve approach: I attached an interrupt to the PCLK pin looking for falling edges. When it detects a falling edge, it sets a bool indicating that a new bit is available and sets another bool to the value of the DOUT line. The main loop of the program assembles a uint16_t value from these bits and collects 1,024 of these values for the full measurement frame.
However, this program locks up the Teensy almost immediately. From my experiments, it seems to lock up as soon as the interrupt is attached. I believe that the microprocessor is being swamped by interrupts.
I think that the correct way of doing this is by using the Teensy's DMA controller. I have been reading Paul Stoffregen's DMAChannel library but I can't understand it. I need to trigger the DMA measurements from the PCLK digital pin and have it read in bits from the DOUT digital pin. Could someone tell me if I am looking at this problem in the correct way? Am I overlooking something, and what resources should I view to better understand DMA on the Teensy?
Thanks!
I put this on the Software Engineering Stack Exchange because I feel that this is primarily a programming problem, but if it is an EE problem, please feel free to move it to the EE SE.
Is DMA the Correct Way to Receive High-Speed Digital Data on a Microprocessor?
There is more than one source of 'high speed digital data'. DMA is not the globally correct solution for all data, but it can be a solution.
it sends out the data over two lines, DOUT and PCLK. PCLK is a 5 MHz clock signal and the bits are sent over DOUT, measured on the falling edges of the PCLK signal.
I attached an interrupt to the PCLK pin looking for falling edges. When it detects a falling edge, it sets a bool that a new bit is available and sets another bool to the value of the DOUT line.
This approach would be called 'bit banging'. You are using the CPU to physically sample the pins. It is a worst-case solution that I see many experienced developers implement. It will work with any hardware connection. Fortunately, the Kinetis K66 has several peripherals that may be able to assist you.
Specifically, the FTM, CMP, I2C, SPI and UART modules may be useful. These hardware modules can reduce the workload from processing each bit to processing groups of bits. For instance, the FTM supports a capture mode. The idea is to ignore the PCLK signal and just measure the time between edges on DOUT. These intervals will be whole multiples of the bit period. If the timer captures an interval two bit-periods long, then you know that two consecutive ones or zeros were sent; see the sketch below.
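To illustrate, here is a rough, peripheral-agnostic sketch of that decoding idea (the capture buffer and bit period are hypothetical; real code would get them from the FTM):

#include <stdint.h>
#include <stddef.h>

// Decode an NRZ stream from edge timestamps: each interval between edges
// is a whole number of bit periods, all at the same logic level.
size_t decode_edges(const uint32_t *edges, size_t n_edges,
                    uint32_t ticks_per_bit, uint8_t *bits, size_t max_bits)
{
    size_t n_bits = 0;
    uint8_t level = 1;                  // assumed idle-high starting level
    for (size_t i = 1; i < n_edges; i++) {
        uint32_t dt  = edges[i] - edges[i - 1];
        uint32_t run = (dt + ticks_per_bit / 2) / ticks_per_bit; // round
        while (run-- && n_bits < max_bits)
            bits[n_bits++] = level;     // emit 'run' identical bits
        level ^= 1;                     // each edge toggles the line
    }
    return n_bits;
}

One caveat: a long run with no edges produces no captures at all, so a frame timeout is needed to flush the trailing bits.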
Also, your signal looks like SSI, which is a 'digital audio' interface. Unfortunately, the K66 doesn't have an SSI module. Typical I2C is open drain, and it always has a start bit and a fixed word size. It may be possible to use it if you have some knowledge of the data and/or can attach some circuitry to fake some bits (to be removed later).
You could use the UART and the time between characters to capture data. The time will correspond to a run of bits that aren't start bits. However, it looks like this UART module requires stop bits (the SIM features are probably very limited).
Once you do this, the decision between DMA, interrupts and polling can be made. There is nothing faster than polling if the CPU actually uses the data. DMA and interrupts are needed if you have to multiplex the CPU with the data transfer. DMA is better if the CPU doesn't need to act on most of the data, or if the work the CPU is doing is not memory intensive (number crunching). The cost of interrupts depends on your context-save overhead, which can be minimized depending on the facilities your main-line code uses.
Some glue circuitry to adapt the signal to one of the K66 modules could go a long way toward a more efficient solution. If you can't change the signal, another (NXP?) SoC with an SSI module would work well. The NXP modules usually support chaining to an eDMA module as well as interrupts.
I have a PIC24-based system equipped with a 24-bit, 8-channel ADC (google "MCP3914 Evaluation Board" for more details...).
I have got the board to sample all 8 channels, store the data in a 512x8 buffer, and transmit the data to a PC over a USB module when the buffer is full (this is done by different interrupts).
The only problem is that while the MCU is transmitting data (the UART transmission interrupt has higher priority than the ADC reading interrupt), the ADC is not sampling, hence there is data loss (the sample rate is around 500 samples/sec).
Is there any way to prevent this data loss? Maybe some multitasking?
Simply transmit the information to the UART register without using interrupts, by polling the TXIF bit:
while (!PIR1bits.TXIF);      // wait until the transmit register is free
TXREG = data;                // write the byte you want to send
The same applies to the ADC conversion: if you were using interrupts to start/stop a conversion, simply poll the required status bits (e.g. the GO/DONE flag) instead, and that's it.
The exact TX and ADC bit names vary depending on your PIC.
That prevents the MCU from entering an interrupt service routine and losing 3-4 samples.
On the PIC24 an interrupt can be assigned one of 8 priorities. Take a look at the corresponding section in the Family Reference Manual: http://ww1.microchip.com/downloads/en/DeviceDoc/70000600d.pdf
Alternatively you can use the DMA channels, which are very handy. You can configure your ADC to use DMA, so that sampling and filling the buffer won't use any CPU time; the same goes for the UART, I believe.
http://ww1.microchip.com/downloads/en/DeviceDoc/39742A.pdf
http://esca.atomki.hu/PIC24/code_examples/docs/manuallyCreated/Appendix_H_ADC_with_DMA.pdf
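As a rough sketch of the ADC-to-DMA route (register names from the PIC24H/dsPIC33F family reference manual; the buffer size, DMA request number and interrupt wiring are assumptions to check against your exact device):

#include <xc.h>

static int adc_buf[512] __attribute__((space(dma))); // DMA-accessible RAM

void dma0_adc_init(void)
{
    DMA0CONbits.AMODE = 0;      // register indirect, post-increment
    DMA0CONbits.MODE  = 0;      // continuous, ping-pong disabled
    DMA0PAD = (volatile unsigned int)&ADC1BUF0;  // peripheral address
    DMA0CNT = 511;              // 512 transfers per block
    DMA0REQ = 13;               // ADC1 convert-done triggers a transfer
    DMA0STA = __builtin_dmaoffset(adc_buf);
    IFS0bits.DMA0IF = 0;
    IEC0bits.DMA0IE = 1;        // one interrupt per full buffer
    DMA0CONbits.CHEN = 1;       // enable the channel
}

void __attribute__((interrupt, no_auto_psv)) _DMA0Interrupt(void)
{
    IFS0bits.DMA0IF = 0;
    // start the UART/USB transmission of adc_buf here
}

This way the CPU is interrupted once per 512 samples instead of once per sample, so the UART transmission can no longer starve the ADC.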
We are using two USARTs running at 115200 baud on an STM32F2, one to communicate with a radio module and one for serial from a PC. The clock speed is 120 MHz.
When receiving data from both USARTs simultaneously, overrun errors can occur on one USART or the other. Doing some quick back-of-the-envelope calculations, there should be enough time to process both, as the interrupt handlers just copy the byte to a circular buffer.
Both in theory and from measurement, the interrupt code that pushes a byte to the buffer should/does run on the order of 2-4 µs, and at 115200 baud we have around 70 µs to process each character.
Why are we seeing occasional OREs on one or the other USART?
Update - additional information:
No other ISRs in our code are firing at this time.
We are running Keil RTX with the systick interrupt configured to fire every 10 ms.
We are not disabling any interrupts at this time.
According to this book (The Designer's Guide to the Cortex-M Processor Family), the interrupt latency is around 12 cycles (not really deadly).
Given all the above, 70 µs is at least a factor of 10 more than the time we take to clear the interrupts, so I'm not sure it is so easy to explain. Should I conclude that there must be some other factor I am overlooking?
MDK-ARM is version 4.70
The systick interrupt is used by the RTOS so I cannot time it; the other ISRs take 2-3 µs to run per byte each.
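For reference, the per-byte ISR in question is presumably of this shape (a hypothetical reconstruction for the F2-series register layout, not our actual code):

volatile uint8_t  rx_buf[256];
volatile uint16_t rx_head;

void USART1_IRQHandler(void)
{
    if (USART1->SR & USART_SR_RXNE) {
        // reading DR clears RXNE; on the F2, reading SR then DR also clears ORE
        rx_buf[rx_head++ & 0xFF] = (uint8_t)USART1->DR;
    }
}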
I ran into a problem similar to yours a few months ago on a Cortex-M4 (SAM4S). I have a function that gets called at 100 Hz based on a timer interrupt.
In the meantime I had a UART configured to interrupt on character reception. The expected data over the UART was 64-byte packets, and interrupting on every character caused enough latency that my 100 Hz update function was running at about 20 Hz. 100 Hz is relatively slow on this particular 120 MHz processor, but interrupting on every character was causing massive delays.
I decided to configure the UART to use PDC (Peripheral DMA controller) and my problems disappeared instantly.
DMA allows the UART to store data in memory WITHOUT interrupting the processor until the buffer is full, saving lots of overhead.
In my case, I told the PDC to store UART data into a buffer (byte array) and specified the length. When the UART (via the PDC) had filled the buffer, the PDC issued an interrupt.
In the PDC ISR (a register-level sketch follows):
Give the PDC a new, empty buffer
Restart the UART PDC (so it can collect data while we do other stuff in the ISR)
memcpy the full buffer into the ring buffer
Exit the ISR
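In SAM4S register terms the ISR body was roughly the following (a sketch; the buffer names and the ring-buffer/swap helpers are hypothetical):

#define PDC_BUF_LEN 64
static uint8_t bufA[PDC_BUF_LEN], bufB[PDC_BUF_LEN];
static uint8_t *pdc_buf_full = bufA, *pdc_buf_next = bufB;

void USART0_Handler(void)   // RXBUFF interrupt: the PDC filled a buffer
{
    if (USART0->US_CSR & US_CSR_RXBUFF) {
        USART0->US_RPR  = (uint32_t)pdc_buf_next;     // 1. fresh buffer
        USART0->US_RCR  = PDC_BUF_LEN;
        USART0->US_PTCR = US_PTCR_RXTEN;              // 2. restart reception
        ring_buffer_write(pdc_buf_full, PDC_BUF_LEN); // 3. hypothetical helper
        swap_buffers(&pdc_buf_full, &pdc_buf_next);   //    hypothetical helper
    }
}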
As swineone recommended above, implement DMA and you'll love life.
I had a similar problem. The short solution: change the oversampling to 8x, which gave a more accurate USART baud rate for my clock. And choose your MCU clock wisely!
huart1.Init.OverSampling = UART_OVERSAMPLING_8;
Furthermore, add a USART error handler and a mechanism to check that your data is valid, such as a CRC16. Here is an example for the STM32F0xx series; I assume it should be pretty similar across the series.
void UART_flush(void) {
    // Flush the UART RX buffer if RXNE is set
    if (READ_BIT(huart1.Instance->ISR, USART_ISR_RXNE)) {
        SET_BIT(huart1.Instance->RQR, UART_RXDATA_FLUSH_REQUEST);
    }
    // Not available on F030xx devices!
    // SET_BIT(huart1.Instance->RQR, UART_TXDATA_FLUSH_REQUEST);
    // Clear all errors (if needed)
    if (READ_BIT(huart1.Instance->ISR, USART_ISR_ORE | USART_ISR_FE | USART_ISR_NE)) {
        SET_BIT(huart1.Instance->ICR, USART_ICR_ORECF | USART_ICR_FECF | USART_ICR_NCF);
    }
}

// USART error handler
void HAL_UART_ErrorCallback(UART_HandleTypeDef *huart) {
    if (huart->Instance == USART1) {
        // See if we have any errors
        if (READ_BIT(huart1.Instance->ISR, USART_ISR_ORE | USART_ISR_FE | USART_ISR_NE | USART_ISR_RXNE)) {
            // Flush errors
            UART_flush();
            // Raise the error handler
            _Error_Handler(__FILE__, __LINE__);
        }
    }
}
DMA might help as well, but my problem was related to USART clock tolerances, which can cause overrun errors even with DMA implemented, since it is a USART hardware problem. Anyway, hope this helps someone out there! Cheers!
I ran into this problem recently, so I implemented the HAL_UART_ErrorCallback function, which had not been implemented yet (just the __weak default version).
It looks like this:
void HAL_UART_ErrorCallback(UART_HandleTypeDef *huart)
{
    if (huart == &huart1)
    {
        HAL_UART_DeInit(&huart1);
        MX_USART1_UART_Init(); // my initialization code
        ...
And this solved the overrun issue.
I'm looking to add the ability to capture waveforms to an ATmega328-based product, and I've been unable to find details on how responsive the ATmega328 is when doing A/D conversions. The code is being prototyped on an Arduino but will be migrated to a custom board when done.
My plan is to have a total period (typically 16 to 20 milliseconds, based on the local AC line frequency) and sample a single pin on the order of 50 to 100 times during that interval. Can the ATmega328 reliably perform that many conversions successively? The minimum interval per conversion is 16 ms / 100 = 160 µs.
I can add a code example if anyone needs to see code, but right now I'm more concerned about the minimum period between successive A/D conversions.
The easiest way would be to write an Arduino sketch and do some timing benchmarks yourself.
The other way, doing this by spec, requires some more input at each level involved.
At the lowest level is the ATmega328 chip. The datasheet's ADC section says:
By default, the successive approximation circuitry requires an input clock frequency between 50 kHz and 200 kHz to get maximum resolution. If a lower resolution than 10 bits is needed, the input clock frequency to the ADC can be higher than 200 kHz to get a higher sample rate.
Assuming a 16 MHz clock for the ATmega, the only suitable prescaler value for the ADC clock is 128, which gives 125 kHz for 10-bit resolution. You could use the prescaler value 64 (250 kHz) if you can get away with 8-bit resolution.
Next: The doc says:
A normal conversion takes 13 ADC clock cycles. The first conversion after the ADC is switched on (ADEN in ADCSRA is set) takes 25 ADC clock cycles in order to initialize the analog circuitry.
So with the 125 kHz ADC clock this means a ~9600 Hz sample rate in single-conversion mode (125 kHz / 13 cycles ≈ 9600 conversions/s), i.e. 104 µs per sample. These are the Arduino defaults.
Compared to your requirement of 160 µs this looks good.
BUT: so far only the conversion itself has been considered. You still have to transfer the data somewhere. Also, the Arduino analogRead() function has some overhead, as you can see in the file wiring_analog.c in the Arduino distribution.
This overhead might be too much; you have to test it yourself.
On the other hand, nobody forces you to use the Arduino analogRead() function. Some available choices (a combined sketch follows this list):
you can ditch the overhead of analogRead and/or
you can reconfigure the ADC to your needs (8 bit only, higher ADC clock) and/or
you can use "advanced" modes like continuous sampling ("free-running mode") of the ADC or
you can even use interrupts to trigger the conversions.
Of course all of these choices heavily depend on your knowledge and time budget. :-)
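Combining those choices, a bare-metal sketch of an 8-bit, free-running, interrupt-driven configuration might look like this (channel ADC0 and the AVcc reference are assumptions):

#include <avr/io.h>
#include <avr/interrupt.h>

volatile uint8_t latest_sample;

void adc_freerun_init(void)
{
    ADMUX  = (1 << REFS0)   // AVcc reference
           | (1 << ADLAR);  // left-adjust: 8-bit result in ADCH, channel ADC0
    ADCSRB = 0;             // ADTS = 000 -> free-running trigger source
    ADCSRA = (1 << ADEN)    // enable the ADC
           | (1 << ADATE)   // auto-trigger (free-running per ADCSRB)
           | (1 << ADIE)    // interrupt on every completed conversion
           | (1 << ADPS2) | (1 << ADPS1)  // prescaler 64 -> 250 kHz ADC clock
           | (1 << ADSC);   // start the first conversion
    sei();
}

ISR(ADC_vect)               // fires every ~13.5 ADC clocks (~54 us here)
{
    latest_sample = ADCH;   // in 8-bit mode reading ADCH alone is enough
}

At 250 kHz that is one sample roughly every 54 µs, comfortably inside the 160 µs budget, at the cost of 8-bit resolution.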