Reliable/persistent timer in Flutter - asynchronous

My application needs a timer that asynchronously launches a new UI screen when it expires.
The following code shows the desired semantics:
Timer t = Timer(Duration(seconds: myDuration), () {
  askQuestion('How are you doing?');
});
// Maybe later, before the timer goes off...
t.cancel();
Using Timer, my implementation works sometimes, especially with small durations.
But when the duration is large, say 25 or 400 minutes, the app may be killed and restarted in the meantime; the Timer is then lost and the timeout never fires.
My question is: how can I make the Timer persistent so that it survives a restart of the app, or achieve the same reliability with some equivalent mechanism?
I could build an ad hoc persistent solution by storing the expiry time and checking it at (re)start of my program to decide whether to relaunch the timer. But I'd like to learn whether a generic solution already exists, since surviving a program restart seems like a general requirement for Future.delayed operations; otherwise the use of Future.delayed is limited.
Otherwise, I might look for alternative mechanisms that provide persistence; I recall that local notifications might.

I think you have to start the timer when the application starts, but you also need to write the time at which the screen should be shown to a cache (shared preferences, a service, or a database). Then, if your application is killed and started again, you restart the timer with the remaining time (because you start the timer on every launch and you know how much time is left).
Imagine that you should show the screen every 400 minutes. The target time is DateTime.now().add(Duration(minutes: 400)), and on every start you can safely compute the remaining time and start the timer from there. Pseudocode, but the idea should be clear:
final timeLeft = cachedDateTime.difference(DateTime.now());
if (timeLeft.isNegative) {
  // Show your page immediately and write the next target time to the cache.
  askQuestion('How are you doing?');
  // e.g. cache DateTime.now().add(Duration(minutes: 400)) for the next round
} else {
  // Start the timer with the time that is left.
  Timer(timeLeft, () => askQuestion('How are you doing?'));
}

Related

Multiple timers or single task with multiple counter?

Assume you have some functions that must be called at different points in time but continuously (periodic tasks, e.g. every 250 ms, every 2 s, every 5 min).
Is it better to use 4-5 timers, each one dedicated to a task, or is it better to code everything in the fastest task and then use a counter variable to run the other functions?
e.g.
// callback each 250 ms
void TASK_250ms(void) {
    counter++;                // one call = 250 ms elapsed
    // do 250 ms stuff
    if (counter % 8 != 0) {   // 250 ms * 8 = 2 s
        return;
    }
    // do 2 s stuff
    if (counter != 4800) {    // 250 ms * 4800 = 20 min
        return;
    }
    // do 20 min stuff
    counter = 0;
}
Assume also that you want to be bulletproof against situations like these:
before doing the 2 s stuff you MUST be sure that the 8th 250 ms task has been computed.
before doing the 20 min stuff you MUST be sure that the 4800th 250 ms task and the 600th 2 s task have been computed.
The question is related to best practice and performance.
Moreover, is it better to perform those calculations in the callback, or to use the callback only to set flags and perform the calculations in the main loop?
I assume you are using STM32 since you tagged STM32.
Unless your application is so time-critical that you need preemptive, asynchronous timer interrupts (for example, the 5 min task is so important that it must be called even while a separate 250 ms callback is running), using multiple timer interrupts is just a waste of timers, and you should use as few interrupts as possible, IMHO. A counting variable is not costly, so that approach is fine.
The real consideration is the length of the tasks. ISRs should be as short as possible, so if the timer callback tasks are fairly long you should set flags and poll them in the main loop. Polling flags is preferable especially when you run multiple callbacks from a single timer ISR: imagine the moment when the 250 ms, 2 s, and 20 min callbacks all have to run in the same ISR invocation and the ISR takes three times longer than usual.
By the way, if you decide to use a single timer, why not use SysTick? The SysTick timer is provided in every Cortex-M MCU and its operation is the same across MCU families. You can easily configure it as a 1 ms interrupt timer. As long as you poll in the main loop, a 1 ms interrupt is fine. There are many tutorials on SysTick.
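For reference, a minimal sketch of that setup, assuming a CMSIS-based Cortex-M project where the device header and SystemCoreClock are already in place (TASK_250ms is the counter-based callback from the example above, and the flag name is just illustrative):
#include <stdint.h>
#include <stdbool.h>
// The device header (e.g. "stm32f4xx.h") provides SysTick_Config() and SystemCoreClock.

volatile uint32_t ms_ticks;
volatile bool     run_250ms;                  // polled in the main loop

void SysTick_Handler(void)                    // standard CMSIS handler name, fires every 1 ms
{
    ms_ticks++;
    if (ms_ticks % 250u == 0u) {
        run_250ms = true;                     // keep the ISR short: only set a flag
    }
}

void systick_init(void)
{
    SysTick_Config(SystemCoreClock / 1000u);  // 1 ms interrupt derived from the core clock
}
The main loop then polls run_250ms, clears it, and calls TASK_250ms() when it is set.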
The standard way to do this for tasks that aren't very time-critical is to implement a single timer which triggers once every millisecond.
That timer then goes through a list of registered "software timers" and checks whether it is time for them to execute. If so, it calls a function pointer which contains the timer-specific code, that is, a callback function invoked by the timer driver.
If these functions are kept minimal, for example just setting a flag, you can execute them from the main timer ISR (a sketch follows).
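For illustration, a sketch of such a driver, assuming a 1 ms hardware tick and callbacks that only set flags (sw_timer_t, the timer table and the callback names are not from any particular library):
#include <stdint.h>

typedef struct {
    uint32_t period_ms;        // how often the callback should run
    uint32_t countdown_ms;     // time left until the next run
    void (*callback)(void);    // timer-specific code, kept minimal (e.g. set a flag)
} sw_timer_t;

static void on_2s(void)    { /* set a flag polled in main() */ }
static void on_20min(void) { /* set a flag polled in main() */ }

static sw_timer_t timers[] = {
    { 2000u,    2000u,    on_2s    },
    { 1200000u, 1200000u, on_20min },
};

void timer_1ms_isr(void)       // called from the 1 ms hardware timer ISR
{
    for (unsigned i = 0; i < sizeof timers / sizeof timers[0]; i++) {
        if (--timers[i].countdown_ms == 0u) {
            timers[i].countdown_ms = timers[i].period_ms;
            timers[i].callback();
        }
    }
}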
You can make various arguments regarding power consumption and real-time requirements; it really depends on your application. But questions like this can deliver insightful answers for beginners, and even for more experienced developers. The keyword here is scheduling.
The typical setup I prefer, bare metal real-time:
Main runs all low priority and idle tasks. Main bases these timings on the systick timer that ticks every 1 ms: if( (now - then) > delay ){ then = now; foo(); }
These tasks can be interrupted by everything, except in a critical zone (when using ISR threadspace data).
Low priority tasks are blinking LED's and handling communications.
There are peripheral interrupts and timers that set IRQ pending bits to signal real-time work is ready to be done. Eg: read uart or adc register before overrun.
The interrupt priorities and timers are setup in a way that the work is done in the correct order at the correct time. Eg: when processing ADC samples, and the hardware alarm IRQ arrives, this is handled immediately.
This way I have the DMA signal that samples are ready to be processed, while a synchronized timer at a lower frequency sets the IRQ-pending bit for the process loop. The process loop must run after the samples and thus has lower priority in the NVIC.
Advantage: Real time performance is not impeded when the communication channel is overflowed with data.
Disadvantage: The cpu never sleeps long.
The ISRs of the real-time tasks may not exceed their time window; this is where windowed watchdog timers are useful. Also, idle tasks will only run when there is time to spare, so they might be late.
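A rough sketch of that main-loop structure (ms_ticks is a 1 ms tick counter as in the SysTick example above; the flag, the intervals and the task functions are placeholders):
volatile bool adc_block_ready;                 // set by the DMA/ADC ISR (high priority)

int main(void)
{
    systick_init();
    uint32_t then_blink = 0, then_comms = 0;

    for (;;) {
        if (adc_block_ready) {                 // real-time work signalled by an IRQ-pending flag
            adc_block_ready = false;
            process_samples();                 // placeholder
        }
        if ((ms_ticks - then_blink) > 500u) {  // low-priority / idle tasks, systick-based
            then_blink = ms_ticks;
            blink_led();                       // placeholder
        }
        if ((ms_ticks - then_comms) > 10u) {
            then_comms = ms_ticks;
            handle_comms();                    // placeholder
        }
    }
}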
A similar option here is to use a real time operating system. Like ChibiOS.
However, in a battery-powered application you don't want the MCU to wake up every second. You want the MCU to wake up only when work has to be done. You can do this in two ways.
Multiple hardware timers signal the wake-up event.
This requires multiple timers to keep running and might still use too much energy.
Tickless operation. You use one timer; the chip wakes up and does the work when the time is reached, then reloads the timer compare register with the time of the next deadline. If your intervals are far enough apart you can use the RTC for this and get ultra-low power consumption (see the sketch further below).
Advantage: chip is allowed to go to sleep for longer period depending on workload.
Disadvantage: the design is a bit more complicated to implement and debug.
A similar option here is to use a tickless operating system.
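A heavily simplified sketch of that tickless idea (rtc_now(), rtc_set_alarm(), enter_sleep() and the task table are hypothetical placeholders; a real implementation also has to deal with counter wrap-around and with deadlines that expire while the loop is running):
#include <stdint.h>

typedef struct {
    uint32_t next_deadline;        // absolute time, in RTC ticks
    uint32_t interval;
    void (*run)(void);
} task_t;

#define NUM_TASKS 3u
extern task_t tasks[NUM_TASKS];    // hypothetical task table

void scheduler_loop(void)
{
    for (;;) {
        uint32_t now = rtc_now();                              // hypothetical RTC read
        uint32_t earliest = UINT32_MAX;

        for (unsigned i = 0; i < NUM_TASKS; i++) {
            if ((int32_t)(tasks[i].next_deadline - now) <= 0) {
                tasks[i].run();                                // deadline reached: do the work
                tasks[i].next_deadline += tasks[i].interval;
            }
            if (tasks[i].next_deadline < earliest) {
                earliest = tasks[i].next_deadline;
            }
        }
        rtc_set_alarm(earliest);   // hypothetical: program the alarm/compare for the next deadline
        enter_sleep();             // hypothetical: low-power mode until the alarm wakes the chip
    }
}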
Assuming you're not using a real-time OS, I'd use a timer interrupt for the time-critical stuff (if it can be handled in a few clock cycles) and for maintaining long timer counters, and handle the non-time-critical stuff at longer periods in the main loop (with or without a watchdog timer / sleep).
The interrupts will preempt the main-loop work, so you can be sure the time-critical stuff happens when it needs to, while the less time-critical stuff happens whenever it can.
You could use a state machine in the main loop to handle the logic and make sure everything is done in the right order: things are checked, loaded, sensors read, etc. (a minimal sketch follows).
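For illustration, a minimal main-loop state machine of that kind (the states and the called functions are placeholders):
typedef enum { STATE_CHECK, STATE_LOAD, STATE_READ_SENSORS, STATE_IDLE } app_state_t;

static app_state_t state = STATE_CHECK;

void main_loop_step(void)              // called repeatedly from the main loop
{
    switch (state) {
    case STATE_CHECK:
        check_inputs();                // placeholder
        state = STATE_LOAD;
        break;
    case STATE_LOAD:
        load_settings();               // placeholder
        state = STATE_READ_SENSORS;
        break;
    case STATE_READ_SENSORS:
        read_sensors();                // placeholder
        state = STATE_IDLE;
        break;
    case STATE_IDLE:
    default:
        // wait here until a timer flag says it is time to start the next cycle
        break;
    }
}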
There is no right answer here. Best practice is to implement the design that meets the requirements, and since requirements vary from project to project there is no single right answer; one common solution will fail to work for a wide array of products, and so will another. You could force one solution, but that can add a lot of hacked-up band-aids, simply adding risk to the project and possibly leading to failure, recalls, or field upgrades that were unnecessary and make the product and the company look bad. Do your system engineering and most of the time the correct solution will simply present itself; don't do your system engineering and the failures will simply present themselves.

Can I delay/bundle reactions to QPlainTextEditor.textChanged events?

I have a small IDE for a modeling language I wrote, implemented in PyQt/PySide, and am trying to implement a code navigator that lets you jump to different sections of the file being edited.
The current implementation is: (1) connect to QPlainTextEditor.textChanged, (2) any time a change is made, (sloppily) parse the file and update the navigator pane
It seems to work OK, but I'm worried this could cause major performance issues for large files on slower systems, in particular if more stuff is connected to textChanged in the future.
My question: has anybody here implemented a delayed reaction to events, so that multiple events (i.e. keystrokes) within a short period only trigger a single update (say once per second)? And is there a proper Qt way of doing this?
Thanks,
Michael
You can try using timers if you want some "delay".
There are two ways to use them (with different results).
One is to parse only after no input has occurred for a certain amount of time.
NOTE: I only know C++ Qt, but I assume the same applies to PyQt, so treat this as pseudocode; I hope you get the concept, though.
QTimer timer;                    // somewhere, e.g. a member of your editor class
timer.setSingleShot(true);       // only fire once per burst of changes
connect(&timer, &QTimer::timeout, this, &MyEditor::OnTimerDone);   // MyEditor is a placeholder class name

void MyEditor::OnTextChanged()
{
    timer.start(500);            // (re)start: wait 500 ms after the last change
}

void MyEditor::OnTimerDone()
{
    DoStuff();                   // parse the file, update the navigator pane
}
This restarts the timer on every input, so as long as you keep calling start() before the timer has finished, the timeout signal is not emitted. Once no input arrives for the given amount of time, the timer times out and you parse the file.
The second option would be to have a periodic timer running (setSingleShot(false)).
Just start the timer with, say, a one-second interval, and timeout will be emitted once per second. You can combine that with a flag which you set to true when the input changes and to false when the file has been parsed, so you avoid parsing when nothing has changed (sketched below).
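A sketch of that second option in C++ Qt (again pseudocode in spirit; MyEditor and the dirty flag are placeholders, and the same idea should translate to PyQt):
QTimer pollTimer;                    // member of your editor class
bool   dirty = false;                // set whenever the text changed

pollTimer.setSingleShot(false);      // periodic timer
pollTimer.start(1000);               // check once per second
connect(&pollTimer, &QTimer::timeout, this, &MyEditor::OnPollTimeout);

void MyEditor::OnTextChanged()
{
    dirty = true;                    // cheap: just remember that there is work to do
}

void MyEditor::OnPollTimeout()
{
    if (dirty) {
        dirty = false;
        DoStuff();                   // parse the file, update the navigator pane
    }
}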
In C++ Qt you won't have to worry about multithreading here, because the slot is called in the GUI thread. I assume it is the same for Python, but you should probably check this.

Is QTimer smart enough to resynchronize itself

Let's say we start a QTimer with a 100ms interval at t0.
Let's say first timeout occurs at t0+100ms. Fine.
Let's say that, due to huge CPU load and/or lots of events having to be handled by the event loop, second timeout occurs at t0+230ms.
Let's say the CPU is back to a normal load. Is there any chance that the third timeout could occur at t0+300ms (the QTimer object realising it was late and trying to correct that by resynchronizing itself), or will it most likely time out at t0+330ms?
Per the QTimer documentation:
All timer types may time out later than expected if the system is busy or unable to provide the requested accuracy. In such a case of timeout overrun, Qt will emit activated() only once, even if multiple timeouts have expired, and then will resume the original interval.
I'm not sure I understand this correctly but, apparently, it won't resynchronize itself and the third timeout will occur at t0+330ms.

QTcpSocket readyRead() signal emits multiple times

I'm new to Qt and currently learning to code with QTcpServer and QTcpSocket.
My code to process data is like
MyClass::MyClass()
{
    connect(&socket, &QTcpSocket::readyRead, this, &MyClass::processData);
}

void MyClass::processData()
{
    /* Process the data, which may be time-consuming */
}
Is it correct to use the signal like that? As I read the documentation, within the same thread the slot is invoked immediately, which means that if my processing work hasn't finished and new data arrives, Qt will pause the current work and enter processData() again. That is not exactly what I want, so should I use Qt::QueuedConnection in the signal/slot connection?
Or could you please suggest some good approaches that I should adopt in this case?
Qt will not pause your current work when data comes in; it will call processData() only when the event loop is free and waiting for new events.
So, when your application is busy executing your code, it will appear unresponsive because it can't respond to external events: processData() won't get called for data received on the socket until the current function (which may contain your heavy code) returns and control is back in the event loop, which then processes the queued events (the data received on the socket, the user clicking some QPushButton, etc.).
In short, that's why you always have to keep your code as short and optimized as possible, in order not to block the event loop for a long time.
With the event delivery stuck, widgets won't update themselves (QPaintEvent objects will sit in the queue), no further interaction with widgets is possible (for the same reason), timers won't fire and networking communications will slow down and stop. Moreover, many window managers will detect that your application is not handling events any more and tell the user that your application isn't responding. That's why is so important to quickly react to events and return to the event loop as soon as possible!
see https://wiki.qt.io/Threads_Events_QObjects
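In practice, a common pattern is to keep the slot itself cheap: drain whatever bytes are available, and only run the heavy part once a complete message has been assembled. A sketch under the assumption that messages are newline-delimited, buffer is a QByteArray member of MyClass, and handleMessage() is a placeholder:
void MyClass::processData()
{
    buffer.append(socket.readAll());                 // readyRead() may fire for partial chunks

    int idx;
    while ((idx = buffer.indexOf('\n')) != -1) {     // a full message has arrived
        const QByteArray message = buffer.left(idx);
        buffer.remove(0, idx + 1);
        handleMessage(message);                      // keep this short, or hand it off to a worker thread
    }
}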

QTimer firing issue in QGIS (Quantum GIS)

I have been involved in building a custom QGIS application in which live data is to be shown in the application's viewer.
The IPC being used is Unix message queues.
The data is to be refreshed at a specified interval, say 3 seconds.
The problem I am facing is that processing the data to be shown takes more than 3 seconds. So what I have done is: before the app starts to process data for the next update, the refresh QTimer is stopped, and after the data is processed I restart the QTimer. The app should work in such a way that after an update/refresh (during which the app goes unresponsive) the user gets ample time to continue working in the app apart from seeing the data being updated. I am able to get acceptable pauses for the user to work in one scenario.
But on a different OS (RHEL 5.0 vs. RHEL 5.2) the situation is different: the timer goes wild and continues to fire without any pause between successive updates, effectively going into an infinite loop. Handling the update data definitely takes longer than 3 seconds, but for that very reason I stop and restart the timer around the processing, and the same logic works in one scenario while in the other it doesn't. The other thing I have observed is that when this rapid firing of the timer happens, the time taken by the refresh function to exit is very small, about 300 ms, so the start/stop of the timer that I have placed at the start and end of this function happens within that small window. So before the actual processing of the data finishes, there are 3-4 starts of the timer queued up waiting to be executed, and the infinite-looping problem gets worse from that point with every successive update.
The important thing to note here is that for the same code, on one OS the refresh time is around 4000 ms (the actual processing time for that amount of data) while on the other OS it is 300 ms.
Maybe this has something to do with newer libs on the updated OS, but I don't know how to debug it because I am not getting any clues as to why it happens; maybe something related to pthreads changed between the OS versions?
So my query is: is there any way to ensure that some processing in my app runs on a timed schedule (and is OS-independent) without using QTimer, as I think QTimer is not a good option to achieve what I want?
What options are there? pthreads or Boost threads? Which would be better if I were to use threads as an alternative? And how can I ensure at least a 3-second gap between successive updates no matter how long the update processing takes?
Kindly help.
Thanks.
If I was trying to get an acceptable, longer-term solution, I would investigate updating your display in a separate thread. In that thread, you could paint the display to an image, updating as often as you desire... although you might want to throttle the thread so it doesn't take all of the processing time available. Then in the UI thread, you could read that image and draw it to screen. That could improve your responsiveness to panning, since you could be displaying different parts of the image. You could update the image every 3 seconds based on a timer (just redraw from the source), or you could have the other thread emit a signal whenever the new data is completely refreshed.
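A rough sketch of that separate-thread idea (class, sizes and method names are placeholders; QImage, unlike QPixmap, can be painted on outside the GUI thread, and a queued signal hands the finished image back to the UI):
#include <QObject>
#include <QImage>
#include <QPainter>

class RenderWorker : public QObject
{
    Q_OBJECT
public slots:
    void renderOnce()
    {
        QImage img(1024, 768, QImage::Format_RGB32);
        img.fill(Qt::white);
        QPainter p(&img);
        // ... read the message queue and paint the current data into img ...
        p.end();
        emit imageReady(img);          // QImage can safely cross thread boundaries
    }
signals:
    void imageReady(const QImage &img);
};

// In the GUI thread: move a RenderWorker to its own QThread, trigger renderOnce()
// every 3 s (or re-trigger it after each imageReady()), store the image in the
// viewer's imageReady() slot, call update(), and draw it in paintEvent().
// The GUI stays responsive no matter how long a single render takes.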
