The Atmel SAMD21 TCC peripheral provides a STOP command, which pauses the counter. The counter can be resumed with a RETRIGGER command.
When STOP is issued, the TCC enters a fault state, in which the outputs are either tristated, or driven to states specified in a config register. Presumably this mechanism is designed to support a fixed failsafe output state.
In my case I want the output pins to freeze in whatever state they're in at the time of the STOP command. The only way I can see to do this is to update the configured fault output state register every time the outputs are updated - requiring interrupt processing, which rather defeats the purpose of much of the TCC's output waveform extension architecture, as well as being a processing load I'd prefer to avoid. There are other complications too, such as accounting for the dead-time mechanism and hardware/software races.
So I've been looking at ways to achieve this that don't involve the STOP command - but I can't see any other way of stopping the counter. There's no way to gate the peripheral clock input, and disabling it in GCLK is ruled out as it also runs TCC1. (And who knows what other effects this would have.) Negating the ENABLE bit, besides being overkill, unsurprisingly also tristates the outputs. Modifying the configuration in various other ways usually requires writing to enable-protected registers, thus requiring disabling the peripheral first.
(One idea I haven't investigated yet is to drive the counter from the event system, and control the event generation/gating instead.)
So: is there any way of pausing the peripheral in its current state, while maintaining the state of the output pins?
All I can think of to try is the async 'COUNT' event, which sounds like it acts as a gate for the clock to the counter.
(page numbers from the 03/2016 manual)
31.6.4.3. Events, p.712;
Count during active state of an asynchronous event (increment or decrement, depending on counter direction). In this case, the counter will be incremented or decremented on each cycle of the prescaled clock, as long as the event is active.
31.8.9. Event Control, p.734;
EVCTRL register,
Bits 2:0 – EVACT0[2:0]: Timer/Counter Event Input 0 Action
0x5 COUNT (async) Count on active state of asynchronous event
The downside is that software events have to be synchronous.
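For what it's worth, a minimal sketch of what configuring that event action might look like, assuming the SAMD21 CMSIS device headers (the TCC_EVCTRL_* macro names are taken from those headers and may differ between header versions; EVCTRL is enable-protected, so this is configuration-time code done before ENABLE is set):

#include "samd21.h"   /* assumed Atmel/Microchip CMSIS device header */

static void tcc0_count_on_async_event(void)
{
    /* Must be done while the TCC is disabled: EVCTRL is enable-protected. */
    uint32_t ev = TCC0->EVCTRL.reg;
    ev &= ~TCC_EVCTRL_EVACT0_Msk;
    ev |= TCC_EVCTRL_EVACT0(0x5)    /* 0x5 = COUNT (async), per 31.8.9 */
        | TCC_EVCTRL_TCEI0;         /* enable Timer/Counter Event Input 0 */
    TCC0->EVCTRL.reg = ev;

    /* The counter then advances only while the input event (routed via
     * EVSYS) is active; gating that event pauses the count while the
     * waveform outputs keep driving their last state. */
}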
Assume you have some functions that must be called at different points in time, but continuously (periodic tasks, e.g. every 250 ms, every 2 s, every 5 min).
Is it better to use 4-5 timers, each one dedicated to a task, or is it better to code everything around the shortest-period task and use a counter variable to run the other functions?
e.g.
//callback each 250ms
#include <stdint.h>

static uint32_t counter = 0;

void TASK_250ms(void){   // note: "250ms_TASK" is not a valid C identifier
    // do 250ms stuff
    counter++;
    if (counter % 8 != 0){ //250ms*8 = 2s
        return;
    }
    // do 2 sec stuff
    if (counter != 4800){ //250ms*4800 = 20min
        return;
    }
    //do 20min stuff
    counter = 0;
}
Assume also that you want to be bulletproof against situations like this:
before doing the 2 s stuff you MUST be sure that the 8th 250 ms task has run.
before doing the 20 min stuff you MUST be sure that the 4800th 250 ms task and the 600th 2 s task have run.
The question is related to best practice and performance.
Moreover, is it better to perform those calculations in the callback, or to use the callback only to set flags and perform the calculations in the main loop?
I assume you are using STM32 since you tagged STM32.
Unless your application is so time-critical that you need preemptive, asynchronous timer interrupts (for example, the 5 min task is so important that it must run even while a separate 250 ms callback task is running), using multiple timer interrupts is just a waste of timers, and you should use as few interrupts as possible IMHO. A counting variable is not costly, so it is fine to do that.
The real consideration is the length of the tasks. ISRs should be as short as possible, so if the timer callback tasks are quite long you should set flags and poll them in the main loop. Polling flags is preferable especially when you run multiple callbacks from a single timer ISR: imagine the moment when the 250 ms, 2 s, and 20 min callbacks all have to be called in the same ISR, making it three times longer than usual.
By the way, if you decide to use a single timer, why not use SysTick? The SysTick timer is provided in every Cortex-M MCU and works the same across MCU families. You can easily configure it as a 1 ms interrupt timer. As long as you only poll flags in the main loop, a 1 ms interrupt is fine. There are many tutorials on SysTick (for example, part1 and part2).
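For illustration, a minimal sketch of that pattern (assuming a CMSIS-based project; the device header name and the task bodies are placeholders):

#include <stdbool.h>
#include <stdint.h>
#include "stm32f4xx.h"                 /* placeholder device header, pick yours */

static volatile bool flag_250ms, flag_2s, flag_20min;

void SysTick_Handler(void)             /* standard CMSIS handler name */
{
    static uint32_t ms;
    ms++;
    if (ms % 250u == 0u)     flag_250ms = true;
    if (ms % 2000u == 0u)    flag_2s    = true;
    if (ms % 1200000u == 0u) flag_20min = true;   /* 20 min = 1,200,000 ms */
}

int main(void)
{
    SysTick_Config(SystemCoreClock / 1000u);      /* 1 ms tick */
    for (;;) {
        if (flag_250ms) { flag_250ms = false; /* do 250 ms work */ }
        if (flag_2s)    { flag_2s    = false; /* do 2 s work    */ }
        if (flag_20min) { flag_20min = false; /* do 20 min work */ }
    }
}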
The standard way to do this for tasks that aren't very time-critical is to implement a single timer which triggers once every millisecond.
That timer then goes through a list of registered "software timers" and checks whether it is time for them to be executed. If so, it calls a function pointer containing the timer-specific code - that is, a callback function invoked by the timer driver.
If these functions are kept minimal, for example just setting a flag, you can execute them from the main timer ISR.
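A minimal sketch of such a list of registered software timers (the names are illustrative, not from any particular library):

#include <stdint.h>

typedef void (*timer_cb_t)(void);

typedef struct {
    uint32_t   period_ms;    /* how often the task should run */
    uint32_t   elapsed_ms;   /* time since it last ran */
    timer_cb_t callback;     /* timer-specific code, e.g. just sets a flag */
} soft_timer_t;

static void task_250ms(void) { /* ... */ }
static void task_2s(void)    { /* ... */ }
static void task_20min(void) { /* ... */ }

static soft_timer_t timers[] = {
    { 250u,     0u, task_250ms },
    { 2000u,    0u, task_2s    },
    { 1200000u, 0u, task_20min },
};

/* Called once per millisecond from the single hardware timer ISR. */
void soft_timers_tick(void)
{
    for (unsigned i = 0; i < sizeof timers / sizeof timers[0]; i++) {
        if (++timers[i].elapsed_ms >= timers[i].period_ms) {
            timers[i].elapsed_ms = 0u;
            timers[i].callback();
        }
    }
}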
You can make various arguments regarding power consumption and real-time requirements; it really depends on your application. But questions like this can prompt insightful answers for beginners, and for more experienced developers too. The keyword here is scheduling.
The typical setup I prefer, bare metal real-time:
Main runs all low-priority and idle tasks. Main bases their timing on the SysTick timer that ticks every 1 ms: if( (now - then) > delay ){ then = now; foo(); }
These tasks can be interrupted by everything, except inside a critical section (when touching data shared with an ISR).
Low-priority tasks are things like blinking LEDs and handling communications.
There are peripheral interrupts and timers that set IRQ pending bits to signal that real-time work is ready to be done, e.g. read a UART or ADC register before it overruns.
The interrupt priorities and timers are set up so that the work is done in the correct order at the correct time, e.g. when ADC samples are being processed and the hardware alarm IRQ arrives, the alarm is handled immediately.
This way the DMA signals that samples are ready to be processed, while a synchronized timer at a lower frequency sets the IRQ pending bit for the processing loop. The processing loop must run after the samples, so it has a lower priority in the NVIC.
Advantage: real-time performance is not impeded when the communication channel is flooded with data.
Disadvantage: the CPU never sleeps for long.
The ISRs of the real-time tasks may not exceed their time window. This is where windowed watchdog timers are useful. Also, idle tasks will only run when there is time to spare; they might be late.
A similar option here is to use a real-time operating system, like ChibiOS.
However, when you have a battery-powered application you don't want the MCU to wake up every second. You want the MCU to wake up only when work has to be done. You can do this in two ways.
Multiple hardware timers signal the wake-up event.
This requires multiple timers to keep running and might still use too much energy.
Tickless operation. You use one timer; the chip wakes up and does the work when the time is reached, then reloads the timer compare with the time of the next deadline. If your intervals are far enough apart you can use the RTC for this and get ultra-low power consumption.
Advantage: the chip is allowed to sleep for longer periods, depending on the workload.
Disadvantage: the design is a bit more complicated to implement and debug.
A similar option here is to use a tickless operating system.
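A rough sketch of the tickless idea (rtc_now(), rtc_set_compare() and sleep_until_irq() are hypothetical placeholders for whatever your hardware provides, not a real HAL):

#include <stdint.h>

typedef struct {
    uint32_t next_due;           /* absolute time of the next deadline, in ticks */
    uint32_t period;             /* task period, in ticks */
    void (*run)(void);
} task_t;

/* Hypothetical hardware hooks. */
uint32_t rtc_now(void);
void     rtc_set_compare(uint32_t t);
void     sleep_until_irq(void);

extern task_t tasks[];
extern const unsigned n_tasks;

void tickless_loop(void)
{
    for (;;) {
        uint32_t now = rtc_now();
        uint32_t min_dt = UINT32_MAX;

        for (unsigned i = 0; i < n_tasks; i++) {
            if ((int32_t)(now - tasks[i].next_due) >= 0) {   /* deadline reached */
                tasks[i].run();
                tasks[i].next_due += tasks[i].period;
            }
            uint32_t dt = tasks[i].next_due - now;           /* time to next deadline */
            if (dt < min_dt)
                min_dt = dt;
        }

        rtc_set_compare(now + min_dt);   /* wake only at the next deadline */
        sleep_until_irq();               /* e.g. WFI on a Cortex-M */
    }
}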
Assuming you're not using a real-time OS, I'd use a timer interrupt for the time-critical stuff (if it can be handled in a few clock cycles) and for maintaining the longer-period counters, and do the non-time-critical stuff with longer periods in the main loop (with or without a watchdog timer/sleep).
The interrupts will interrupt the main-loop stuff, so you can be sure the time-critical stuff happens when it needs to, while the less time-critical stuff happens whenever it can.
You could use a state machine in the main loop to handle the logic: making sure everything is done in the right order, things are checked and loaded, sensors are read, and so on.
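For example, a bare-bones main-loop state machine might look like this (the states and helper functions are made up for illustration):

#include <stdbool.h>

typedef enum { ST_IDLE, ST_READ_SENSORS, ST_PROCESS, ST_REPORT } state_t;

/* Illustrative helpers -- fill in with real application code. */
extern volatile bool tick_flag;     /* set by the timer interrupt */
void read_sensors(void);
void process_samples(void);
void send_report(void);

void main_loop_step(void)
{
    static state_t state = ST_IDLE;

    switch (state) {
    case ST_IDLE:         if (tick_flag) { tick_flag = false; state = ST_READ_SENSORS; } break;
    case ST_READ_SENSORS: read_sensors();    state = ST_PROCESS; break;
    case ST_PROCESS:      process_samples(); state = ST_REPORT;  break;
    case ST_REPORT:       send_report();     state = ST_IDLE;    break;
    }
}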
There is no right answer here. Best practice is to implement a design that meets the requirements, and since requirements vary from project to project there is no single right answer. One common solution will fail to work for a wide array of products, as will another common solution. You could force one solution, but that can add a lot of hacked-up band-aids that simply add risk to the project, and possibly lead to failures, recalls, or field upgrades that were unnecessary and that make the product and the company look bad. Do your system engineering and most of the time the correct solution will simply present itself; don't do your system engineering and the failures will simply present themselves.
Will an AVR MCU (Arduino) remember all interrupts that happen while inside a noInterrupts() section?
void f1() {
    noInterrupts();
    // critical, time-sensitive code here
    interrupts();
}
// now jump to queued interrupts, if any
Will it execute them after interrupts()?
I ask this because, after reading the datasheet, I have the feeling that each interrupt has its own flag, so this should be no problem. But I'm not that experienced and am probably missing something, because tutorials always state vaguely "don't stay there too long, since no interrupts at all may happen at that time".
Why?
I have a circular buffer where I put packets from I2C (they are written whenever the I2C interrupt occurs). I also read this buffer from the main loop once in a while, at unpredictable times, and overwriting is allowed.
Also, I use the same buffer class (but different instances) in the opposite direction (I2C then triggers interrupt and reads).
My problem is: I would like to turn off interrupts during the non-interrupt read/write, so I can be sure I will not end up in a situation where the I2C interrupt fires and overwrites the item currently being read from the normal main loop.
I currently handle the situation with flags set before and after the read, so the interrupt first checks whether the item is free, but I'm not confident in that approach and would like to make it work with noInterrupts() and interrupts() around the main-loop read.
Thank you.
Your approach seems normal. You need to disable interrupts to protect the buffer.
While the interrupts are disabled, interrupt flags will continue to be set but the handlers won't be called. When interrupts are enabled, the interrupt handlers will run.
The warning not to take too long with interrupts disabled is because two interrupts of the same type may be triggered while they are disabled. If that happens, the single interrupt flag is only set once, and there is no way to tell that two interrupts were pending; the interrupt handler only runs once, so essentially the first of the two interrupts is lost.
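On AVR you can also express the critical section with avr-libc's ATOMIC_BLOCK, which saves and restores the interrupt state for you. A sketch, with an illustrative buffer layout (not your actual class):

#include <util/atomic.h>     /* avr-libc */
#include <stdbool.h>
#include <stdint.h>

#define BUF_SIZE 32u
extern volatile uint8_t buf[BUF_SIZE];
extern volatile uint8_t head, tail;   /* head is advanced by the I2C ISR */

bool buffer_pop(uint8_t *out)
{
    bool ok = false;
    ATOMIC_BLOCK(ATOMIC_RESTORESTATE)  /* interrupts disabled only for these lines */
    {
        if (head != tail) {
            *out = buf[tail];
            tail = (uint8_t)((tail + 1u) % BUF_SIZE);
            ok = true;
        }
    }
    return ok;   /* any interrupt flagged meanwhile is serviced right after */
}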
I have a small IDE for a modeling language I wrote, implemented in PyQt/PySide, and am trying to implement a code navigator that lets you jump to different sections in the file being edited.
The current implementation is: (1) connect to QPlainTextEdit.textChanged, (2) any time a change is made, (sloppily) parse the file and update the navigator pane.
It seems to work OK, but I'm worried this could cause major performance issues for large files on slower systems, in particular if more stuff is connected to textChanged in the future.
My question: Has anybody here implemented a delayed reaction to events, so that multiple events (i.e. keystrokes) within a short period only trigger a single update (say once per second)? And is there a proper Qt way of doing this?
Thanks,
Michael
You can try using timers if you want some "delay".
There would be 2 ways to use them (with different results).
One is to only parse after no input has occurred for a certain amount of time.
NOTE: I only know C++ Qt, but I assume the same things are valid for PyQt, so this is kind of "pseudocode"; I hope you get the concept, though.
QTimer timer;                    // member of your class ("MyClass" below is a placeholder)
timer.setSingleShot(true);       // only fire once per start()
connect(&timer, &QTimer::timeout, this, &MyClass::OnTimerDone);

void MyClass::OnTextChanged()
{
    timer.start(500);            // (re)start the 500 ms countdown on every change
}

void MyClass::OnTimerDone()
{
    DoStuff();                   // parse the file, update the navigator
}
Calling start() on a running timer restarts it, so as long as input keeps arriving before it expires, the timeout signal is never emitted. Once no input has been made for the set amount of time, the timer times out and you parse the file.
The second option would be to have a periodic timer running (setSingleShot(false)).
Just start the timer with a period of, say, one second, and timeout will be emitted once a second. You can combine that with a variable which you set to true when the input changes and to false when the file is parsed, so you avoid parsing when nothing has changed.
In C++ Qt you won't have to worry about multi-threading because the slot gets called in the GUI thread. I assume it is the same for Python, but you should probably check this.
Can someone explain to me how to write ISRs and how to set their priorities when there are many in one program?
What is the function of vectors, and is it necessary to consider them when handling interrupts?
If possible, please provide some examples as well (C code).
Just like when a doorbell or phone rings at your home: you stop what you are doing, deal with the interrupt, then, ideally, return to what you were doing.
Same with a processor (MSP430 or otherwise). There are ways to interrupt the processor for various reasons: I have a new byte in the UART for you, a timer has timed out, a GPIO pin has changed state, etc. Things that you have configured to be something that interrupts the processor when they happen.
Just like the doorbell, the hardware has to have a way to stop and save something to remember what it was doing, find out what the interrupt is and handle it, then go back to what it was doing. Processors often quite literally interrupt between instructions: they finish the current instruction (with pipelines, "current" is a bit fuzzy). Then, based on the interrupt and the design of the processor, there is some place that the hardware and software agree upon (the hardware dictates it and the programmers use it) such that the software can tell the processor where the code is that handles all interrupts, or that particular flavor of interrupt, depending on how the processor is designed. A common solution is an interrupt vector table: a list of addresses, usually set by the programmer, that point to the code handling each of those events or interrupts. Both the programmer and the hardware know that a particular interrupt will cause a particular address in the memory space to be read, and the hardware assumes that what it finds there leads to the code for that interrupt.
So the processor gets an interrupt and saves the state of the machine, which at a minimum is the program counter; depending on the design it may also save the status register and GPRs, but often the programmer is responsible for saving GPRs and such as needed. The hardware then, based on the interrupt/event, reads from an address; usually that address contains the address of a handler. For example, 0xFFF8 might be the address of the interrupt vector (I don't know, I didn't look it up for the MSP430), so 0xFFF8 is not where the code is, but the number stored at that address is where the code is, maybe 0xD008 for example. It depends on the processor architecture, but when you finish handling the interrupt you need to tell the processor so it can return to what was interrupted; often that is a special return-from-interrupt instruction, but different processors have different solutions.
Priority, if any, is dictated by the hardware design. Something as simple as an MSP430 might not have a priority scheme (not sure offhand) other than whoever gets here first, and the scheme might be that before you exit the handler you check whether any other interrupts came in while you were handling the one that interrupted you. If there is a priority scheme in the design, then the process simply repeats: save the state (of the interrupt or foreground code that was interrupted) and find the entry point for the handler, usually via a vector table. When the highest-priority handler finishes, it returns and control goes back to the next-higher-priority thing, and eventually back to the foreground task (assuming nothing else comes along).
In general an ISR must not destroy anything the foreground task was using: preserve the state of the GPRs if needed, preserve the state of the status register, don't mess up the stack or memory used by the foreground task, etc. And ideally keep the ISR lean and mean; don't waste a lot of time there. The vector table is just where you fill in the addresses of the entry points into the code: reset handler, interrupt handlers, etc.
An interrupt handler (also known as an interrupt service routine or ISR) is a piece of code that runs when an event (I/O) occurs that requires CPU attention. An interrupt event is typically asynchronous, hence the reason a handler must be registered for the event.
For example, in the case of Serial communication, data is received by the USCI peripheral (configured for UART) that needs to be processed. In this case, an interrupt will be issued by the USCI peripheral and the CPU will begin executing from the interrupt handler (addressed by the interrupt vector). Vectors are at fixed locations and are outlined in the datasheet of your device. When the end of the interrupt handler is reached, the CPU will go back to where it left off (or service another interrupt). A datasheet/user's guide will explain the default priorities of interrupts.
A typical interrupt handler using the IAR Embedded Workbench IDE will look like the following:
// Port 1 interrupt service routine
#pragma vector=PORT1_VECTOR
__interrupt void Port_1(void)
{
    P1OUT ^= 0x01;    // P1.0 = toggle
    P1IFG &= ~0x10;   // P1.4 IFG cleared
}
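For comparison, with msp430-gcc the same handler would use an attribute instead of the IAR pragma (a sketch; check your toolchain's documentation):

// msp430-gcc / GCC-style equivalent of the handler above
#include <msp430.h>

__attribute__((interrupt(PORT1_VECTOR)))
void Port_1_ISR(void)
{
    P1OUT ^= 0x01;    // toggle P1.0
    P1IFG &= ~0x10;   // clear the P1.4 interrupt flag
}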
Further reading is available here.
I am thinking about modelling a material flow network. There are processes which operate at a certain speed, buffers which can overflow or underflow and connections between these.
I don't see any problems modelling this in a classic Discrete Event Simulation (DES) fashion using a global event queue. I tried modelling the system without a queue but failed in early stages. Still I do not understand the underlying reason why a queue is needed, at least not for events which originate "inside" the network.
The idea of a queue-less DES is to treat the whole network as a function which takes a stream of events from the outside world and returns a stream of state changes. Every node in the network should only be affected by nodes which are directly connected to it. I have set some hopes on Haskell's arrows and Functional Reactive Programming (FRP) in general, but I am still learning.
An event queue looks too "global" to me. If my network falls apart into two subnets with no connections between them and I only ask questions about the state changes of one subnet, the other subnet should not do any computations at all. I could use two event queues in that case. However, as soon as I connect the two subnets I would have to put all events into a single queue. I don't like the idea, that I need to know the topology of the network in order to set up my queue(s).
So
is anybody aware of DES algorithms which do not need a global queue?
is there a reason why this is difficult or even impossible?
is FRP useful in the context of DES?
To answer the first point, no I'm not aware of any discrete-event simulation (DES) algorithms that do not need a global event queue. It is possible to have a hierarchy of event queues, in which each event queue is represented in its parent event queue as an event (corresponding to the time of its next event). If a new event is added to an event queue such that it becomes the queue's next event, then the event queue needs to be rescheduled in its parent to preserve the order of event execution. However, you will ultimately still boil down to a single, global event queue that is the parent of all of the others in hierarchy, and which dispatches each event.
Alternatively, you could dispense with DES and perform something more akin to a programmable logic controller (PLC) which reevaluates the state of the entire network every small increment of time. However, typically, that would be a lot slower (it may not even run as fast as real-time), because most of the time it would have nothing to do. If you pick too big a time increment, the simulation may lose accuracy.
The simplest answer to the second point is that, ultimately, to the best of my knowledge, it is impossible to do without a global event queue. Each simulation event needs to execute at the correct time, and - since time cannot run backwards - the order in which events are dispatched matters. The current simulation time is defined by the time that the current event executes. If you have separate event queues, you also have separate clocks, which would make things very confusing, to say the least.
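To make the mechanism concrete, here is a minimal sketch of the kind of global event queue a DES engine is built around (illustrative C, not any particular library): a time-ordered queue from which events are dispatched one at a time, each event being allowed to schedule further events in the future.

#include <stdlib.h>

typedef struct event {
    double t;                        /* simulation time at which the event fires */
    void (*fire)(struct event *);    /* state change to apply */
    struct event *next;
} event_t;

static event_t *queue = NULL;        /* singly linked list, kept sorted by time */
static double now = 0.0;             /* current simulation time */

void schedule(event_t *e)            /* insert in time order */
{
    event_t **p = &queue;
    while (*p && (*p)->t <= e->t)
        p = &(*p)->next;
    e->next = *p;
    *p = e;
}

void run(double t_end)               /* dispatch events until t_end */
{
    while (queue && queue->t <= t_end) {
        event_t *e = queue;
        queue = e->next;
        now = e->t;                  /* time only ever moves forward */
        e->fire(e);                  /* handlers may call schedule() for later events */
    }
}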
In your case, if your subnetworks are completely independent, you could simulate each subnetwork individually. However, if the state of one subnetwork affects the state of the total network, and the state of the total network affects the state of each subnetwork, then - since an event is influenced by the events that preceded it, can only influence the events that follow, but cannot influence what preceded it - you have to simulate the whole network with a global event queue.
If it's any consolation, a true DES simulation does not perform any processing in between events (other than determining what the next event is), so there should be no wasted processing in one subnetwork if all the action is taking place in another.
Finally, functional reactive programming (FRP) is absolutely useful in the context of a DES. Indeed, I now write a lot of my DES simulations in Scala using this approach.
I hope this helps!
UPDATE: Since writing the above, I've used Sodium (an excellent FRP library, which was referenced by the OP in the comments below), and can add some further explanation: Sodium provides a means for subscribing to events, and for performing actions when those events occur. However, here I'm using the term event in a general sense, such as a button being clicked by a user in a GUI, or a network package arriving, etc. In other words, the events are not necessarily simulation events.
You can still use Sodium—or any other FRP library—as part of a simulation, to subscribe to simulation events and perform actions when they occur; however, these tools typically have no built-in support for simulation, and so you must incorporate a simulation engine as the source of simulation events, in the same way that a GUI is incorporated as the source of user interaction events. It is within this engine that the global event queue must reside.
Incidentally, if you are trying to perform parallel or distributed simulation model execution, things get considerably more complicated. You have multiple event queues in these situations, but they must be synchronized (giving the appearance of a single queue). The two basic approaches are conservative synchronization and optimistic synchronization.