I am a beginner in RTOS programming and I have a query regarding the same.
Query: I understand that context switching happens between various tasks as per the priority assigned. I wanted to know how exactly a higher-priority task interrupts a lower-priority task, technically. Is each task assigned to a hardware interrupt pin, so that whenever the microcontroller is interrupted on that pin by external hardware, the specific task is processed, provided it has a higher priority than the task that is presently being processed? But practically speaking, if there are 128 tasks present in the program, it might require 7 hardware pins reserved for interrupts. What is the logic I am missing?
I recommend reading the pretty good docs on https://www.freertos.org, e.g. RTOS Fundamentals.
I'm sure this will provide a good overview and the related details.
Besides that, you'll find out that you usually don't need external hardware pins to run a multitasking OS.
FreeRTOS uses only the SysTick/OS tick hardware interrupt for context switching. This is a high-precision periodic interrupt configured on the underlying controller,
for example on Cortex-M:
https://www.keil.com/pack/doc/CMSIS/Core/html/group__SysTick__gr.html
In the handler of this interrupt, the FreeRTOS scheduler switches tasks based on the ready-task list and the tasks' priorities.
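To make that concrete, here is a minimal sketch using the FreeRTOS task API (the task names, priorities and the 10 ms delay are made up for illustration). Preemption needs nothing but the tick interrupt: when the tick handler sees that the higher-priority task's delay has expired, the scheduler switches to it.

    #include "FreeRTOS.h"
    #include "task.h"

    static void vLowPriorityTask(void *pvParameters)
    {
        (void)pvParameters;
        for (;;) {
            /* Background work; runs whenever nothing of higher priority is ready. */
        }
    }

    static void vHighPriorityTask(void *pvParameters)
    {
        (void)pvParameters;
        for (;;) {
            /* Block for 10 ms worth of ticks. When the tick interrupt counts the
               delay down, the scheduler marks this task ready and, because its
               priority is higher, switches to it from within that tick ISR. */
            vTaskDelay(pdMS_TO_TICKS(10));
            /* ...periodic work... */
        }
    }

    int main(void)
    {
        xTaskCreate(vLowPriorityTask,  "low",  configMINIMAL_STACK_SIZE, NULL, 1, NULL);
        xTaskCreate(vHighPriorityTask, "high", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
        vTaskStartScheduler();   /* configures the tick interrupt and never returns */
        for (;;);
    }

No external pin is involved; the same mechanism scales to 128 tasks because the scheduler only needs the one tick interrupt plus its internal ready lists.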
Hello, I have just started using FreeRTOS with STM32. I understand the concept of synchronisation between tasks or threads using semaphores. But what I really don't get is the use of semaphores/mutexes within an interrupt service routine (ISR). Why would I use xSemaphoreGiveFromISR() instead of just using xSemaphoreGive(), when both of them are mainly used for synchronisation purposes, not for interrupting? Also, what is the difference between software timers and interrupts? I know when and how I should use interrupts, but when would I need to use software timers?
If you dig into the sources you'll see the difference between the normal and the *FromISR API. There are a few more of those. It's mainly an optimization to minimize execution time in ISRs (if supported by the MCU used), because ISRs should be kept as short as possible.
Also, the ISR (calling) context is different from the normal task context, and the *FromISR API takes care of this.
It’s an implementation detail - just follow the documented rules and you’ll be fine :)
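For illustration, a minimal sketch of the documented pattern, assuming a binary semaphore and a hypothetical peripheral interrupt handler name:

    #include "FreeRTOS.h"
    #include "task.h"
    #include "semphr.h"

    /* Created at startup with xSemaphoreCreateBinary(); the handler task is
       created with xTaskCreate() at startup as well. */
    static SemaphoreHandle_t xDataReady;

    void EXAMPLE_IRQHandler(void)          /* hypothetical peripheral ISR */
    {
        BaseType_t xHigherPriorityTaskWoken = pdFALSE;

        /* The plain xSemaphoreGive() is not safe here: it assumes task context
           and may try to block. The FromISR variant never blocks and only does
           what is legal inside an interrupt. */
        xSemaphoreGiveFromISR(xDataReady, &xHigherPriorityTaskWoken);

        /* If the give unblocked a task with higher priority than the one that
           was interrupted, request a context switch on exit from the ISR. */
        portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
    }

    static void vHandlerTask(void *pvParameters)
    {
        (void)pvParameters;
        for (;;) {
            if (xSemaphoreTake(xDataReady, portMAX_DELAY) == pdTRUE) {
                /* ...do the longer processing here, in task context... */
            }
        }
    }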
Basically, software timers are used to support a couple (or many) timers using a single HW timer. Software often needs a number of simultaneously running timers, e.g. to trigger a number of periodic jobs/actions with differing periods, but HW resources (timers) are limited.
This also applies to the FreeRTOS timer feature, which uses the FreeRTOS systick that usually runs anyway.
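As a minimal sketch with the FreeRTOS software timer API (the names, periods and callbacks are made up; configUSE_TIMERS must be enabled in FreeRTOSConfig.h):

    #include "FreeRTOS.h"
    #include "timers.h"

    static void vBlinkCallback(TimerHandle_t xTimer)
    {
        (void)xTimer;
        /* e.g. toggle an LED every 500 ms */
    }

    static void vLogCallback(TimerHandle_t xTimer)
    {
        (void)xTimer;
        /* e.g. flush a log buffer every 2 s */
    }

    void vSetupTimers(void)
    {
        /* Two "simultaneous" periodic timers, both served by the existing tick;
           no additional hardware timer is consumed. */
        TimerHandle_t xBlink = xTimerCreate("blink", pdMS_TO_TICKS(500),  pdTRUE, NULL, vBlinkCallback);
        TimerHandle_t xLog   = xTimerCreate("log",   pdMS_TO_TICKS(2000), pdTRUE, NULL, vLogCallback);

        xTimerStart(xBlink, 0);
        xTimerStart(xLog,   0);
    }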
Interrupts in general are a different thing. They're the way peripheral HW can interact with the connected processor on which an application is running.
For instance, a suitably configured HW timer fires a (HW) interrupt to trigger software, via an ISR, to do something on that event.
See also the recommended and comprehensive FreeRTOS documentation.
I am using 3 microcontrollers on a board:
a main micro, a gateway micro and a safety micro;
the names suggest the associated applications.
An internal watchdog exists for all three, but I need external supervision so that buggy timer code cannot nullify the effect of the internal watchdog. Also, to keep the BOM cost low, I can use just 1 external watchdog.
I propose to use the following strategy:
Main microcontroller: We plan to have the internal watchdog and as well an external watchdog for this.
Safety Microcontroller: We plan to have internal watchdog and as well monitoring over SPI by Main microcontroller.
Gateway Microcontroller: We plan to have internal watchdog and as well monitoring over SPI by Main microcontroller.
One issue with this is EMI or noise on the line causing SPI corruption and hence a false RESET from the main micro.
Has anybody faced similar challenge? Any suggestions for this?
Many thanks for your help!
Not knowing the specifics of your application, it is not possible to give you a definitive answer. The way you would normally solve this sort of problem is to do a failure mode and effects analysis. Essentially you list out all the parts of your system and then brainstorm all the possible failure modes you think could happen; EMC would be one of them. You then estimate a probability that each failure mode will occur and assign a severity to it in the event that it does occur. Multiplying these out will allow you to identify the areas that carry greater impact and need extra protection. When all the failure modes have a probability x severity value below a threshold set by your application, you will have a 'valid' solution.
Not doing a thorough analysis like this means you may very well put all your effort into defending the front door while leaving the back door unlocked.
I was reading about microcontroller interrupts, and I have a question: does the save of the current state happen for software interrupts too, or just for hardware interrupts?
Yes. The general concept behind any interrupt is that it can suspend the execution of the underlying program without that program realising it has been interrupted. This requires storing the CPU and (some) register state and restoring it once the interrupt service routine has completed. The CPU typically has a special hardware mechanism for doing this.
Software interrupts share the same mechanism and so the state is saved and restored.
Note, however, that the normal use for software interrupts is on more complex microprocessors, where they provide a safe hardware mechanism for moving between privilege modes - e.g. between your application and an operating system. On a low-level microcontroller they are of little use: if you already know you want to call the piece of code in the interrupt, you might as well just call it directly as a function.
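To make the privilege-mode case concrete, here is a minimal Cortex-M sketch, assuming GCC inline assembly and the CMSIS-style SVC_Handler name; it is only an illustration, not a complete OS entry path:

    #include <stdint.h>

    /* Requesting the software interrupt: the SVC instruction traps into the
       SVCall exception just as a hardware interrupt would, and the core stacks
       r0-r3, r12, lr, pc and xPSR automatically before the handler runs. */
    static inline void request_service(void)
    {
        __asm volatile ("svc #0");
    }

    /* Runs in privileged Handler mode; on exception return the stacked state
       is restored and the interrupted code continues unaware. */
    void SVC_Handler(void)
    {
        /* ...do the privileged work (e.g. an OS service) here... */
    }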
Typically the state saved is minimal: just enough to restore the previous state when the only action of the ISR is RTI. Usually just the instruction pointer (and code segment) and flags. The whole processor state is not saved, otherwise it would place too much overhead on a time-critical interrupt. It is up to the ISR to save and restore anything more than the bare-bones mechanism of the processor interrupt.
Yes and no: yes, as Jon has described, but software interrupts can also be used as your interface to the operating system, and for that use case you will likely be setting up registers as parameters into the software interrupt and expecting registers to hold the result when it returns.
So software interrupts can be treated like hardware interrupts, where you have to preserve the state, or they can be treated as (or used as) fancy function calls, with parameters going in and results coming out.
In short, it's a maybe.
Without knowing the exact example I couldn't say. What gets saved or doesn't get saved really depends on which software is generating/receiving the interrupt.
At low levels there is a lot of concern about speed at which things happen. And hence work that may not need to be done is avoided. Therefore there is no point in "saving your state" if you don't intend to modify it. So it really depends on what exactly we are talking about.
This is where specifications come into play. Operating systems and processors all specify exactly what state they do save and what state they do not save and depending on your specific problem the answer may differ...
dwelch's answer explains it for Linux (which is closest to what you were asking about).
Jon's answer explains it for processors in general.
weather-vane's answer explains it for microcontrollers.
These are all specific examples. If you were working with FreeRTOS on a microcontroller, you would probably have to define your own software interrupt protocol yourself, in which case you can choose to save state or leave that responsibility to the caller. In the end, it is just a choice that the programmer who wrote the system made, so you have to go and find their notes and read them.
Ultimately, it is mostly a gentlemen's agreement.
I want to do some interrupt-driven signal processing on an Atmega328, which might not have enough SRAM (2K) to store the data of an entire run. This means I'll have to write part of the buffer to external memory while still gathering data.
My question is whether it is safe to have serial writes or I2C communications (e.g. to an SD card) while still triggering interrupts. I think serial communications themselves are interrupt driven so this might become an issue. Is this true? How about I2C? If both are likely to cause problems, what would be the recommended way (if any) to flush a buffer while still gathering data?
It is a common scenario and really depends on how much processing time you require exactly for each task and how tight your timing restrictions are regarding the device communication.
Things you should review/consider:
You have to be able to write the data out to the external memory faster than it is acquired -> how fast do you gather data?
Make sure you spend as little time as possible in your interrupt handlers.
Keep an eye on interrupt starvation. Priority can cause ISR A not to be executed if ISR B triggers more frequently than A AND has a higher priority. Adjust execution intervals if possible.
Check which serial data has to be sent in sequence or in short succession to honour timing requirements. You may have to pause/delay other processing for a short time.
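One common way to flush while still gathering is a ping-pong (double) buffer: the ISR fills one buffer while the main loop writes the other out. A minimal AVR sketch, assuming a free-running, left-adjusted ADC and with the actual SD/I2C flush left as a placeholder:

    #include <avr/io.h>
    #include <avr/interrupt.h>
    #include <stdint.h>

    #define BUF_LEN 128

    static volatile uint8_t buf[2][BUF_LEN];
    static volatile uint8_t active = 0;      /* buffer currently filled by the ISR  */
    static volatile uint8_t fill   = 0;      /* write index into the active buffer  */
    static volatile uint8_t ready  = 0xFF;   /* index of a full buffer, 0xFF = none */

    /* Placeholder for the real flush routine (SD over SPI, I2C FRAM, ...). */
    static void flush_buffer(const volatile uint8_t *p, uint8_t n)
    {
        (void)p; (void)n;
    }

    /* Assumes the ADC runs free-running with left-adjusted results, so ADCH
       holds an 8-bit sample. Keep this handler as short as possible. */
    ISR(ADC_vect)
    {
        buf[active][fill++] = ADCH;
        if (fill == BUF_LEN) {               /* hand the full buffer to the main loop */
            ready   = active;                /* if the old one wasn't flushed yet, that's an overrun */
            active ^= 1;
            fill    = 0;
        }
    }

    int main(void)
    {
        /* ...ADC, SPI/I2C and storage initialisation omitted... */
        sei();
        for (;;) {
            if (ready != 0xFF) {             /* a buffer is full: flush it while the ISR fills the other */
                uint8_t idx = ready;
                ready = 0xFF;
                flush_buffer(buf[idx], BUF_LEN);
            }
        }
    }

Because the handler only stores one byte and swaps an index, sampling keeps running while the (slower, itself interrupt-driven) serial transfer happens in the main loop.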
For example, a process waiting for disk I/O to complete will sleep on the address of the buffer header corresponding to the data being transferred. When the interrupt routine for the disk driver notes that the transfer is complete, it calls wakeup on the buffer header. The interrupt uses the kernel stack for whatever process happened to be running at the time, and the wakeup is done from that system process.
Can you please explain the last line in the paragraph, which I have emphasised? It is about waking up a process which has been waiting for some event to occur and has therefore slept. This paragraph is from Galvin. By the way, can you suggest a good book or link for studying Unix operating systems?
Thanks.
There is some process running at the time the interrupt is received. The kernel doesn't change over to some other process context to handle it -- that would take time -- it just does what's necessary in the current context, and lets the scheduler know that the next time it schedules, the waiting process is ready to proceed.
There are a number of good internals books around. I'm fond of the various McKusick et al books, like The Design and Implementation of the FreeBSD Operating System.
Maurice Bach's Design of the Unix Operating System is the most well-known and comprehensive book on the subject.
The I/O completion interrupt will be executed as soon as the disk signals the end of the transfer. This is done regardless of what the kernel is currently doing. Interrupt handlers are usually very small and self-contained. Therefore it is faster to re-use the current runtime environment (stack, CPU state, etc.) instead of doing a full context switch to a separate thread. On the downside, this means that interrupt handlers are only allowed to do very limited things, like setting a flag somewhere else or enqueuing a work item. Also, they have to clean up very carefully after themselves so that the running process is not disturbed.
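As a deliberately simplified C illustration of that idea (not actual Unix source; the structures and names are invented), the handler does nothing more than mark the sleeper runnable and return:

    #include <stdbool.h>
    #include <stddef.h>

    #define NPROC 8

    struct proc {
        bool        runnable;
        const void *sleep_channel;   /* the "address" the process is sleeping on */
    };

    static struct proc proc_table[NPROC];

    /* Called from the disk interrupt handler, on the kernel stack of whichever
       process happened to be running. No context switch happens here: the
       handler only marks the sleeper runnable; the scheduler picks the woken
       process the next time it runs. */
    void wakeup(const void *channel)
    {
        for (size_t i = 0; i < NPROC; i++) {
            if (proc_table[i].sleep_channel == channel) {
                proc_table[i].sleep_channel = NULL;
                proc_table[i].runnable      = true;
            }
        }
    }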
Eric Raymond's 'The Art of Unix Programming' should be read to understand the Unix philosophy and culture, and to actually know and appreciate the reasons behind its design.