Multiple timers or single task with multiple counters? - arduino

Assume you have some functions that must be called at different points in time but continuously (periodic tasks, e.g. every 250 ms, every 2 s, every 5 min).
Is it better to use 4-5 timers, each one dedicated to a task, or is it better to code everything in the shortest-period task and then use a counter variable to run the other functions?
e.g.
//callback each 250 ms
uint32_t counter = 0;
void task_250ms(void) {
    counter++;
    if (counter % 8 != 0) { //250 ms * 8 = 2 s
        return;
    }
    // do 2 s stuff
    if (counter != 4800) { //250 ms * 4800 = 20 min
        return;
    }
    // do 20 min stuff
    counter = 0;
}
Assume also that you want to avoid/be bulletproof against situations like this:
before doing the 2 s stuff, you MUST be sure that the 8th 250 ms task has completed.
before doing the 20 min stuff, you MUST be sure that the 4800th 250 ms task and the 600th 2 s task have completed.
The question is about best practice and performance.
Moreover, is it better to perform those calculations in the callback, or to use the callback to set flags and perform the calculations in the main loop?

I assume you are using STM32 since you tagged STM32.
Unless your application is so time-critical that you need preemptive, asynchronous timer interrupts (for example, the 5 min task is so important that it must run even while a separate 250 ms callback task is running), using multiple timer interrupts is just a waste of timers, and you should use as few interrupts as possible, IMHO. A counting variable is not costly, so that approach is fine.
The real consideration is the length of the tasks. ISRs should be as short as possible, so if the timer callback tasks are fairly long you should set flags and poll them in the main loop. Polling flags is preferable especially when you run multiple callbacks from a single timer ISR. Imagine the moment when the 250 ms, 2 s, and 20 min callbacks all fall due in the same ISR invocation: the ISR will take three times longer than usual.
By the way, if you decide to use a single timer, why not use SysTick? The SysTick timer is provided in every Cortex-M MCU and its operation is the same across MCU families. You can configure it as a 1 ms interrupt timer very easily. As long as you poll in the main loop, a 1 ms interrupt is fine. There are many tutorials on SysTick (for example, part1 and part2).
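A minimal sketch of that flag-and-poll pattern, assuming a 1 ms SysTick interrupt (the handler name follows the CMSIS convention; the task functions are placeholders):

#include <stdint.h>
#include <stdbool.h>

extern void task_250ms(void);
extern void task_2s(void);
extern void task_20min(void);

volatile uint32_t ms_ticks = 0;
volatile bool run_250ms = false, run_2s = false, run_20min = false;

void SysTick_Handler(void)  /* fires every 1 ms */
{
    ms_ticks++;
    if (ms_ticks % 250 == 0)     run_250ms = true;
    if (ms_ticks % 2000 == 0)    run_2s    = true;
    if (ms_ticks % 1200000 == 0) run_20min = true;  /* 20 min = 1,200,000 ms */
}

int main(void)
{
    /* ... clock setup, then e.g. SysTick_Config(SystemCoreClock / 1000) ... */
    for (;;) {
        /* polling in this order guarantees the 250 ms work has run
           before the 2 s work of the same tick, and so on */
        if (run_250ms) { run_250ms = false; task_250ms(); }
        if (run_2s)    { run_2s    = false; task_2s();    }
        if (run_20min) { run_20min = false; task_20min(); }
    }
}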

The standard way to do this for tasks that aren't very time-critical is to implement a single timer which triggers once every millisecond.
That timer then goes through a list of registered "software timers" and checks whether it is time for them to be executed. If so, it calls a function pointer containing the timer-specific code: a callback function invoked by the timer driver.
If these functions are kept minimal, for example just setting a flag, you can execute them from the main timer ISR.
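A minimal sketch of such a timer driver (the names are illustrative, not from any particular library):

#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint32_t period_ms;      /* how often the callback should fire */
    uint32_t countdown_ms;   /* time left until the next fire */
    void (*callback)(void);  /* the timer-specific code */
} sw_timer_t;

extern void task_250ms(void);
extern void task_2s(void);

static sw_timer_t sw_timers[] = {
    { 250,  250,  task_250ms },
    { 2000, 2000, task_2s    },
};

/* called from the single hardware timer ISR, once every millisecond */
void sw_timers_tick(void)
{
    for (size_t i = 0; i < sizeof sw_timers / sizeof sw_timers[0]; i++) {
        if (--sw_timers[i].countdown_ms == 0) {
            sw_timers[i].countdown_ms = sw_timers[i].period_ms;
            sw_timers[i].callback();  /* keep this minimal, e.g. set a flag */
        }
    }
}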

You can make various arguments regarding power consumption and real-time requirements. It really depends on your application. But questions like this can deliver insightful answers for beginners, and even for more experienced developers. The keyword here is scheduling.
The typical setup I prefer, bare metal real-time:
Main runs all low-priority and idle tasks. Main bases these timings on the systick timer that ticks every 1 ms: if( (now - then) > delay ){ then = now; foo(); } (expanded in the sketch after this list).
These tasks can be interrupted by everything, except inside a critical section (when touching data shared with ISRs).
Low-priority tasks are blinking LEDs and handling communications.
There are peripheral interrupts and timers that set IRQ pending bits to signal that real-time work is ready to be done. E.g.: read the UART or ADC register before overrun.
The interrupt priorities and timers are set up in such a way that the work is done in the correct order at the correct time. E.g.: when ADC samples are being processed and the hardware alarm IRQ arrives, the alarm is handled immediately.
This way I have the DMA signal that samples are ready to be processed, whilst a synchronized timer at a lower frequency sets the IRQ pending bit for the process loop. The process loop must run after the samples, and thus has lower priority in the NVIC.
Advantage: real-time performance is not impeded when the communication channel is overflowed with data.
Disadvantage: the CPU never sleeps for long.
The ISRs of the real-time tasks may not exceed their time window. This is where windowed watchdog timers are useful. Also, idle tasks will only run when there is time to spare; they might be late.
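Expanding the one-line timing pattern above into a minimal sketch (systick_ms is assumed to be a volatile counter incremented in the 1 ms SysTick ISR; the task functions are placeholders):

#include <stdint.h>

extern volatile uint32_t systick_ms;  /* incremented in the SysTick ISR */
extern void blink_led(void);
extern void handle_comms(void);

int main(void)
{
    uint32_t then_blink = 0, then_comms = 0;
    for (;;) {
        uint32_t now = systick_ms;
        if ((now - then_blink) > 500) {  /* low priority: blink the LED */
            then_blink = now;
            blink_led();
        }
        if ((now - then_comms) > 10) {   /* low priority: poll comms */
            then_comms = now;
            handle_comms();
        }
        /* real-time work runs in the ISRs; this loop is freely preempted */
    }
}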
A similar option here is to use a real-time operating system, like ChibiOS.
However, when you're building a battery application you don't want the MCU to wake up on every tick. You want the MCU to wake up only when work has to be done. You can do this in two ways.
Multiple hardware timers signal the wake-up event.
This requires multiple timers to keep running and might still use too much energy.
Tickless operation. You use one timer; the chip wakes up and does the work when the deadline is reached, then reloads the timer compare register with the time of the next deadline (see the sketch below). If your intervals are far enough apart you can use the RTC for this to get ultra-low power consumption.
Advantage: the chip is allowed to sleep for longer periods, depending on workload.
Disadvantage: the design is a bit more complicated to implement and debug.
A similar option here is to use a tickless operating system.
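A hedged sketch of the tickless idea; rtc_now(), set_wakeup_alarm() and enter_sleep() are hypothetical stand-ins for your MCU's RTC and low-power API:

#include <stdint.h>
#include <stddef.h>

extern uint32_t rtc_now(void);             /* current time in RTC ticks */
extern void set_wakeup_alarm(uint32_t t);  /* program the compare register */
extern void enter_sleep(void);             /* sleep until the alarm IRQ */

typedef struct {
    uint32_t period;   /* interval in RTC ticks */
    uint32_t next;     /* next absolute deadline */
    void (*run)(void);
} task_t;

extern task_t tasks[];
extern const size_t task_count;

void scheduler_loop(void)
{
    for (;;) {
        uint32_t now = rtc_now();
        uint32_t earliest = now + INT32_MAX;           /* far in the future */
        for (size_t i = 0; i < task_count; i++) {
            if ((int32_t)(now - tasks[i].next) >= 0) { /* deadline reached */
                tasks[i].run();
                tasks[i].next += tasks[i].period;
            }
            if ((int32_t)(tasks[i].next - earliest) < 0)
                earliest = tasks[i].next;
        }
        set_wakeup_alarm(earliest);  /* reload compare with next deadline */
        enter_sleep();               /* wake only when it is reached */
    }
}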

Assuming you're not using a real-time OS, I'd use a timer interrupt to do the time-critical stuff (if it can be handled in a few clock cycles) and to maintain the long timer counters, and do the non-time-critical stuff with longer periods in the main loop (with or without a watchdog timer/sleep).
The interrupts will preempt the main-loop stuff, so you can be sure the time-critical stuff happens when it needs to, while the less time-critical stuff happens whenever it can.
You could use a state machine in the main loop to do the logic and make sure everything is done in the right order: things are checked, values loaded, sensors read, etc.
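For illustration, a minimal main-loop state machine (the state names and helper functions are made up):

#include <stdbool.h>

typedef enum { READ_SENSORS, PROCESS, ACTUATE, IDLE } state_t;

extern void read_sensors(void);
extern void compute_outputs(void);
extern void drive_outputs(void);
extern bool next_cycle_due(void);  /* e.g. checks a flag set by a timer */

void main_loop(void)
{
    state_t state = READ_SENSORS;
    for (;;) {
        switch (state) {
        case READ_SENSORS: read_sensors();    state = PROCESS; break;
        case PROCESS:      compute_outputs(); state = ACTUATE; break;
        case ACTUATE:      drive_outputs();   state = IDLE;    break;
        case IDLE:         if (next_cycle_due()) state = READ_SENSORS; break;
        }
    }
}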

There is no right answer here. Best practice is to implement the design that meets the requirements, and since requirements vary from project to project, there is no single right answer. One common solution will fail to work for a wide array of products, as will another common solution. You could force one solution, but that can add a lot of hacked-up band-aids, simply adding risk to the project, possibly leading to failures, recalls, or unnecessary field upgrades that make the product and the company look bad. Do your system engineering and most of the time the correct solution will simply present itself; don't do your system engineering and the failures will simply present themselves.

Related

Pause SAMD21 TCC counter

The Atmel SAMD21 TCC peripheral provides a STOP command, which pauses the counter. The counter can be resumed with a RETRIGGER command.
When STOP is issued, the TCC enters a fault state, in which the outputs are either tristated, or driven to states specified in a config register. Presumably this mechanism is designed to support a fixed failsafe output state.
In my case I want the output pins to freeze in the state they're in at the time of the STOP command. The only way I can see to do this is to update the configured fault output state register every time the outputs are updated, requiring interrupt processing that rather defeats the purpose of much of the TCC's output waveform extension architecture, as well as being a processing load I'd prefer to avoid. There are other complications too, such as accounting for the dead-time mechanism, and hardware/software races.
So I've been looking at ways to achieve this that don't involve the STOP command, but I can't see any other way of stopping the counter. There's no way to gate the peripheral clock input, and disabling it in GCLK is ruled out since the same generator also clocks TCC1. (And who knows what other effects that would have.) Clearing the ENABLE bit, besides being overkill, unsurprisingly also tristates the outputs. Modifying the configuration in various other ways usually requires writing to enable-protected registers, and thus disabling the peripheral first.
(One idea I haven't investigated that yet is to drive the counter from the event system, and control the event generation/gating instead.)
So: is there any way of pausing the peripheral in its current state, while maintaining the state of the output pins?
All that I can think of to try is the async 'COUNT' event, which sounds like it acts as a gate for the clock to the counter.
(page numbers from the 03/2016 manual)
31.6.4.3. Events, p.712;
Count during active state of an asynchronous event (increment or decrement, depending on counter direction). In this case, the counter will be incremented or decremented on each cycle of the prescaled clock, as long as the event is active.
31.8.9. Event Control, p.734;
EVCTRL register,
Bits 2:0 – EVACT0[2:0]: Timer/Counter Event Input 0 Action
0x5 COUNT (async) Count on active state of asynchronous event
The downside is that software events have to be synchronous.
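For what it's worth, a hedged sketch of that approach (the macro and register names are assumed from the SAMD21 device headers, so verify against your vendor pack; note that EVCTRL is enable-protected, so it has to be written while the TCC is disabled, i.e. during init before the waveform is started):

#include "sam.h"  /* SAMD21 device header (assumed include name) */

void tcc0_use_async_count_event(void)
{
    /* EVCTRL is enable-protected: disable TCC0 first */
    TCC0->CTRLA.bit.ENABLE = 0;
    while (TCC0->SYNCBUSY.bit.ENABLE) { }

    /* EVACT0 = 0x5 (COUNT, async) and enable event input 0: the counter
       then only advances while the event is active, so de-asserting the
       event pauses the count without touching the output pins */
    TCC0->EVCTRL.reg |= TCC_EVCTRL_EVACT0_COUNT | TCC_EVCTRL_TCEI0;

    TCC0->CTRLA.bit.ENABLE = 1;
    while (TCC0->SYNCBUSY.bit.ENABLE) { }
}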

Can I delay/bundle reactions to QPlainTextEditor.textChanged events?

I have a small IDE for a modeling language I wrote, implemented in PyQt/PySide, and am trying to implement a code navigator that lets you jump to different sections in the file being edited.
The current implementation is: (1) connect to QPlainTextEditor.textChanged, (2) any time a change is made, (sloppily) parse the file and update the navigator pane
It seems to work OK, but I'm worried this could cause major performance issues for large files on slower systems, in particular if more stuff is connected to textChanged in the future.
My question: Has anybody here implemented a delayed reaction to events, so that multiple events (e.g. keystrokes) within a short period only trigger a single update (say once per second)? And is there a proper Qt way of doing this?
Thanks,
Michael
You can try using timers if you want some "delay".
There would be 2 ways to use them (with different results).
One is to only parse after no input has been done for a certain amount of time.
NOTE: I only know C++ Qt, but I assume the same things are valid for PyQt, so this is kind of "pseudocode"; I hope you get the concept though.
QTimer timer; // member of your class
timer.setSingleShot(true); // only fire once per burst of edits
connect(&timer, &QTimer::timeout, this, &MyClass::OnTimerDone); // in the constructor

void MyClass::OnTextChanged()
{
    timer.start(500); // (re)start: wait 500 ms
}

void MyClass::OnTimerDone()
{
    DoStuff();
}
This restarts the timer on every input, so while the timer is still running the timeout signal is not emitted. Only when no input has occurred for that amount of time does the timer time out, and you parse the file.
The second option would be to have a periodic timer running (setSingleShot(false)).
Just start the timer with an interval of, say, one second, and timeout will be emitted once a second. You can combine that with a variable that you set to true when the input changes and to false when the file is parsed, so you avoid parsing when nothing has changed.
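A minimal sketch of this second option in Qt C++ (the member names and the reparseAndUpdateNavigator() slot are illustrative; the same structure works in PyQt):

// in the constructor of your editor window (member variables:
// QTimer m_pollTimer; bool m_dirty = false;)
m_pollTimer.setInterval(1000);     // check once per second
m_pollTimer.setSingleShot(false);  // periodic
connect(&m_pollTimer, &QTimer::timeout, this, [this] {
    if (m_dirty) {                 // only parse if something changed
        m_dirty = false;
        reparseAndUpdateNavigator();
    }
});
connect(editor, &QPlainTextEdit::textChanged,
        this, [this] { m_dirty = true; });
m_pollTimer.start();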
In C++ Qt you won't have to worry about multi-threading, because the slot is called in the GUI thread. I assume it is the same for Python, but you should probably check this.

Which is a better option for multitasking in arduino - millis( ) or Scheduler Library?

I have an application wherein I want to flash (trigger for a second) a solenoid every 10 seconds and at the same time receive serial input to rotate a servo motor.
The delay() function creates conflicts, so I have gone through the millis() function, which is easy to understand. But on the Arduino website there is something called the Scheduler library, which looks pretty damn easy (haven't tried it though).
So which is the better and more efficient option: millis() or Scheduler?
Thank you,
The Scheduler library uses millis() as well to calculate the delay between tasks.
To link a function to the scheduler, it needs to have a void f(void) prototype.
So to add a function that returns something or takes parameters, you need to wrap it in another function with a void f(void) prototype.
IMHO, a scheduler library is useful for organizing your code when you have multiple tasks (this library has a maximum of 10 tasks, but you can change that).
In your case, you only have two tasks, so it may be better to just write your own little scheduler using millis().
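For the two tasks in the question, a minimal millis()-based sketch might look like this (the pin numbers and the serial protocol are assumptions):

#include <Servo.h>

const int solenoidPin = 7;     // assumption: solenoid driver on pin 7
Servo servo;                   // assumption: servo signal on pin 9

unsigned long lastFlash = 0;   // when the solenoid last turned on
bool solenoidOn = false;

void setup() {
    pinMode(solenoidPin, OUTPUT);
    servo.attach(9);
    Serial.begin(9600);
}

void loop() {
    unsigned long now = millis();

    if (!solenoidOn && now - lastFlash >= 10000UL) {  // every 10 s
        digitalWrite(solenoidPin, HIGH);              // trigger solenoid
        solenoidOn = true;
        lastFlash = now;
    }
    if (solenoidOn && now - lastFlash >= 1000UL) {    // keep it on for 1 s
        digitalWrite(solenoidPin, LOW);
        solenoidOn = false;
    }

    if (Serial.available() > 0) {                     // non-blocking check
        int angle = Serial.parseInt();                // e.g. "90\n"
        if (angle >= 0 && angle <= 180) servo.write(angle);
    }
}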
If you want to stay with millis(), then
Simple Multi-tasking in Arduino
will be of help. It covers: adding a loop timer to see how slowly your loop/tasks are running, removing delays from your code and third-party libraries, reading serial input without blocking, sending serial prints without blocking, and giving important tasks more time.
Finally it transfers the code unchanged to the ESP32 to add remote control.
The basic code is
void loop() {
    callTask_1(); // do something
    callTask_2(); // do something else
    callTask_1(); // check the first task again, as it needs to be more responsive than the others
    callTask_3(); // do something else
}
The trick is that each callTask..() method must return quickly so that the other tasks in the loop get called promptly and often. The rest of the instructable covers how to keep your tasks running quickly and not holding everything up, using a temperature-controlled, stepper-motor-driven damper with a user interface as a concrete example.

Need an Arduino Sketch that uses digitalWrite for a certain number of seconds

I need a simple way to run a program using digitalWrite for a certain number of seconds.
I am driving two DC motors. I already have my setup complete, and have driven the motors using pause() and digitalWrite(). I will be making time measurements in milliseconds.
I want an adjustable runtime, and would preferably have non-blocking code.
You could use a timer-driven interrupt to trigger code execution that handles the output (decrementing the required time value and eventually switching off the output), or use threads.
I would suggest using threads.
Your requirement is similar to a "blinking diodes" case I described in a different thread.
If you replace the defines setting the time intervals with variables, you could use that code to drive your outputs, or simplify the whole thing by using only one thread working the same way the aforementioned timer interrupt would.
If you would like to try the timer interrupt-driven approach, this post gives a good overview and examples (but you have to change OCR1A to about 16 to get an overflow every 1 ms).
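If you'd rather avoid both threads and timer interrupts, a minimal non-blocking millis() version of the "drive an output for an adjustable time" requirement could look like this (the pin number is an assumption):

const int motorPin = 5;           // assumption: motor driver on pin 5
unsigned long runTimeMs = 5000;   // adjustable run time in milliseconds
unsigned long startTime = 0;
bool running = false;

void setup() {
    pinMode(motorPin, OUTPUT);
    digitalWrite(motorPin, HIGH); // start the motor...
    startTime = millis();         // ...and remember when it started
    running = true;
}

void loop() {
    if (running && millis() - startTime >= runTimeMs) {
        digitalWrite(motorPin, LOW);  // time is up: switch the output off
        running = false;
    }
    // other, non-blocking work can run here while the motor is on
}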

Qt: When QTimer actually started?

When an application calls QTimer::start(), is the timer started immediately, or only after the current event has been processed? In other words, should I use a single-shot timer with time correction in case of long processing in its timeout() slot?
To answer with certainty would require inspecting platform-specific code within Qt. That's a good sign that this is not something you should be depending on. Moreover, QTimer doesn't promise much in terms of accuracy:
Timers will never time out earlier than the specified timeout value and they are not guaranteed to time out at the exact value specified. In many situations, they may time out late by a period of time that depends on the accuracy of the system timers.
The accuracy of timers depends on the underlying operating system and hardware. Most platforms support a resolution of 1 millisecond, though the accuracy of the timer will not equal this resolution in many real-world situations.
If Qt is unable to deliver the requested number of timer clicks, it will silently discard some.
If you need to know precisely how much time has passed between timeout signals, use your QTimer in conjunction with a QElapsedTimer.
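A minimal sketch of that combination (the snippet assumes it lives inside some long-lived object so the timers stay alive):

#include <QTimer>
#include <QElapsedTimer>
#include <QDebug>

QTimer timer;           // member variables
QElapsedTimer elapsed;

elapsed.start();
QObject::connect(&timer, &QTimer::timeout, [&elapsed] {
    qint64 ms = elapsed.restart();  // actual ms since the last timeout
    qDebug() << "timeout after" << ms << "ms";
});
timer.start(250);       // nominal 250 ms period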
