For example, a process waiting for disk I/O to complete will sleep on the address of the buffer header corresponding to the data being transferred. When the interrupt routine for the disk driver notes that the transfer is complete, it calls wakeup on the buffer header. The interrupt uses the kernel stack for whatever process happened to be running at the time, and the wakeup is done from that system process.
Can you please explain the last line of the paragraph, which I have emphasised? It is about waking up a process that has been waiting for some event to occur and has therefore slept. This paragraph is from Galvin. By the way, can you suggest a good book or link for studying Unix operating systems?
Thanks.
There is some process running at the time the interrupt is received. The kernel doesn't change over to some other process context to handle it -- that would take time -- it just does what's necessary in the current context, and lets the scheduler know that the next time it schedules, the waiting process is ready to proceed.
There are a number of good internals books around. I'm fond of the various McKusick et al books, like The Design and Implementation of the FreeBSD Operating System.
Maurice Bach's Design of the Unix Operating System is the most well-known and comprehensive book on the subject.
The I/O completion interrupt will be executed as soon as the disk signals the end of the transfer. This is done regardless of what the kernel is currently doing. Interrupt handlers are usually very small and self-contained. Therefore it is faster to re-use the current runtime environment (stack, CPU state, etc) instead of doing a full context switch to a separate thread. On the down side this means that interrupt handlers are only allowed to do very limited things, like setting a flag somewhere else, or enqueuing a work item. Also, they have to clean up very carefully after themselves, so that the running process is not disturbed.
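Here is a rough user-space analogy of that sleep()/wakeup() pairing, with pthreads standing in for processes and a second thread standing in for the disk interrupt. The chan_sleep/chan_wakeup names and the buf struct are made up for this sketch; the real kernel keys sleepers by an address rather than using a condition variable.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;

struct buf { bool done; };                  /* stand-in for a buffer header */
static struct buf iobuf;

static void chan_sleep(struct buf *chan) {  /* "sleep on the address of the buffer header" */
    pthread_mutex_lock(&m);
    while (!chan->done)
        pthread_cond_wait(&c, &m);
    pthread_mutex_unlock(&m);
}

static void chan_wakeup(struct buf *chan) { /* what the disk interrupt routine would call */
    pthread_mutex_lock(&m);
    chan->done = true;                      /* flag the completed transfer */
    pthread_cond_broadcast(&c);             /* mark every sleeper on this channel runnable */
    pthread_mutex_unlock(&m);
}

static void *fake_disk_interrupt(void *arg) {
    (void)arg;
    chan_wakeup(&iobuf);                    /* runs in whatever context happens to be current */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, fake_disk_interrupt, NULL);
    chan_sleep(&iobuf);                     /* the waiting "process" blocks here */
    puts("transfer complete, process resumes");
    pthread_join(&t, NULL);
    return 0;
}

The important parallel is that chan_wakeup only flips a flag and marks the sleeper runnable; the real work continues later in the woken "process", not in the interrupt context.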
Eric Raymond's 'The Art of Unix Programming' should be read to understand the Unix philosophy and culture, and to actually know and appreciate the reasons behind its design.
I am teaching myself operating systems by going through the lecture notes of the course at IIT Bombay (https://www.cse.iitb.ac.in/~mythili/os/). One of the questions in the Process worksheet asks which of the following doesn't always happen in the situation described in the title. The answer is C.
A. The process moves to kernel mode.
B. The program counter of the CPU shifts to the kernel part of the address space.
C. The process is context-switched out and a separate kernel process starts execution.
D. The OS code that deals with handling TCP/IP packets is invoked
I'm a bit confused though. I thought when an interrupt routine occurs the process is context-switched out so other processes can run and the CPU is not idle during that time. The kernel, then, will take care of the packet sending. Why would C not be correct then?
You are right in saying that "when an interrupt routine occurs the process is context-switched out so other processes can run and the CPU is not idle during that time", but the words "generally or mostly" need to be added to it.
In most cases there is another process waiting for CPU time that can be scheduled, but that is not the case 100% of the time. The question hinges on the word "always": while the other options always occur in the given situation, option C is a choice the OS makes at run time. If the OS determines that switching out this process would be less optimal than performing the system call and resuming the same process, it may not perform the context switch.
There is a cost associated with context switching, and if the other processes are also blocked on some I/O, it may be optimal for the OS to NOT switch the context. There can be other reasons as well: if only one process is running, there is no other process to switch the context to!
I was reading about micro-controller interrupts, and I have a question: does saving the current state happen for software interrupts too, or just for hardware interrupts?
Yes. The general concept behind any interrupt is that it can suspend the execution of the underlying program without that program realising it has been interrupted. This requires storing the CPU and (some) register states and restoring them once the interrupt service routine has completed. The CPU typically has a special hardware mechanism for doing this.
Software interrupts share the same mechanism and so the state is saved and restored.
Note, however, that the normal use for software interrupts is on more complex microprocessors where they allow a safe hardware mechanism for moving between privilege modes - e.g. your application and an operating system. On a low level microcontroller they are of little use, as if you already know you want to call the piece of code in the interrupt, you might as well just call it directly as a function.
Typically the state saved is minimal: just enough to restore the previous state when the only action of the ISR is RTI. Usually just the instruction pointer (and code segment) and flags. The whole processor state is not saved, otherwise it would place too much overhead on a time-critical interrupt. It is up to the ISR to save and restore anything more than the bare-bones mechanism of the processor interrupt.
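As a user-space analogy (not micro-controller code), POSIX makes that saved state visible: a sigaction handler installed with SA_SIGINFO receives a pointer to a ucontext_t holding the interrupted registers, which the kernel restores on return, so the interrupted code never notices. A minimal sketch, assuming a Linux/glibc system:

#include <signal.h>
#include <stdio.h>
#include <ucontext.h>
#include <unistd.h>

static void handler(int sig, siginfo_t *info, void *uctx) {
    (void)sig; (void)info;
    ucontext_t *saved = uctx;            /* snapshot of the interrupted CPU context */
    /* printf is not strictly async-signal-safe; acceptable for a demo only */
    printf("interrupted; saved context lives at %p\n", (void *)saved);
}

int main(void) {
    struct sigaction sa = {0};
    sigemptyset(&sa.sa_mask);
    sa.sa_sigaction = handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGALRM, &sa, NULL);
    alarm(1);                            /* the "interrupt" arrives one second later */
    pause();                             /* execution resumes here after the handler returns */
    puts("back in main, state restored transparently");
    return 0;
}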
Yes and no. Yes, as Jon has described, but software interrupts can also be used as your interface to the operating system, and for that use case you will likely be setting up registers as parameters into the software interrupt, and expecting registers on return to hold the result.
So software interrupts can be treated like hardware interrupts, where you have to preserve the state, or they can be treated as, and used as, fancy function calls, with parameters going in and results coming out.
In short it's a maybe.
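For the "interface to the operating system" case, here is what that register convention looks like on 32-bit x86 Linux, where int 0x80 is the software interrupt into the kernel (syscall number in eax, arguments in ebx/ecx/edx, result back in eax). This is just one historical ABI shown as an illustration; build with -m32 on an x86 Linux box.

#include <stddef.h>

/* Raw write(2) via the i386 software-interrupt ABI: 4 == __NR_write on i386. */
static long sys_write(int fd, const void *buf, size_t len) {
    long ret;
    __asm__ volatile ("int $0x80"
                      : "=a"(ret)                            /* result comes back in eax */
                      : "a"(4), "b"(fd), "c"(buf), "d"(len)  /* number + parameters in registers */
                      : "memory");
    return ret;
}

int main(void) {
    const char msg[] = "hello via software interrupt\n";
    sys_write(1, msg, sizeof msg - 1);
    return 0;
}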
Without knowing the exact example I couldn't say. What gets saved or doesn't get saved really depends on which software is generating/receiving the interrupt.
At low levels there is a lot of concern about speed at which things happen. And hence work that may not need to be done is avoided. Therefore there is no point in "saving your state" if you don't intend to modify it. So it really depends on what exactly we are talking about.
This is where specifications come into play. Operating systems and processors all specify exactly what state they do save and what state they do not save and depending on your specific problem the answer may differ...
@dwelch's answer explains it for Linux. (which is closest to what you were asking for)
@Jon's answer explains it for processors in general.
@weather-vane's answer explains it for micro-controllers.
These are all specific examples. If you were working with FreeRTOS on a micro-controller, you would probably have to make your own software interrupt protocol and define it yourself, in which case you can choose to save state or leave that responsibility to the caller. In the end, it is just a choice that the programmer who wrote the system made, and hence you must go find their notes and read them.
In the end, it is mostly a gentlemen's agreement.
I'm trying to get my head around the concept of a cooperative multitasking system and exactly how it works in a single-threaded application.
My understanding is that this is a "form of multitasking in which multiple tasks execute by voluntarily ceding control to other tasks at programmer-defined points within each task."
So if you have a list of tasks and one task is executing, how do you determine when to pass execution to another task? And when you give execution back to a previous task, how do you resume from where you were previously?
I find this a bit confusing because I don't understand how this can be achieved without a multithreaded application.
Any advice would be very helpful :)
Thanks
In your specific scenario, where a single process (or thread of execution) uses cooperative multitasking, you can use something like Windows' fibers or the POSIX setcontext family of functions. I will use the term fiber here.
Basically when one fiber is finished executing a chunk of work and wants to voluntarily allow other fibers to run (hence the "cooperative" term), it either manually switches to the other fiber's context or more typically it performs some kind of yield() or scheduler() call that jumps into the scheduler's context, then the scheduler finds a new fiber to run and switches to that fiber's context.
What do we mean by context here? Basically the stack and registers. There is nothing magic about the stack, it's just a block of memory the stack pointer happens to point to. There is also nothing magic about the program counter, it just points to the next instruction to execute. Switching contexts simply saves the current registers somewhere, changes the stack pointer to a different chunk of memory, updates the program counter to a different stream of instructions, copies that context's saved registers into the CPU, then does a jump. Bam, you're now executing different instructions with a different stack. Often the context switch code is written in assembly that is invoked in a way that doesn't modify the current stack or it backs out the changes, in either case it leaves no traces on the stack or in registers so when code resumes execution it has no idea anything happened. (Again, the theme: we assume that method calls fiddle with registers, push arguments to the stack, move the stack pointer, etc but that is just the C calling convention. Nothing requires you to maintain a stack at all or to have any particular method call leave any traces of itself on the stack).
Since each stack is separate, you don't have some continuous chain of seemingly random method calls eventually overflowing the stack (which might be the result if you naively tried to implement this scheme using standard C methods that continuously called each other). You could implement this manually with a state machine where each fiber kept a state machine of where it was in its work, periodically returning to the calling dispatcher's method, but why bother when actual fiber/co-routine support is widely available?
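To make the "save registers, switch stacks, jump" description concrete, here is a minimal sketch using the POSIX setcontext family mentioned above. The fiber naming is mine; ucontext is marked obsolescent by POSIX but still works on e.g. glibc.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, fiber_ctx;
static char fiber_stack[64 * 1024];          /* the fiber gets its own stack */

static void fiber_body(void) {
    puts("fiber: doing some work");
    swapcontext(&fiber_ctx, &main_ctx);      /* voluntary yield back to main */
    puts("fiber: resumed exactly where it left off");
}

int main(void) {
    getcontext(&fiber_ctx);
    fiber_ctx.uc_stack.ss_sp = fiber_stack;
    fiber_ctx.uc_stack.ss_size = sizeof fiber_stack;
    fiber_ctx.uc_link = &main_ctx;           /* where to go when fiber_body returns */
    makecontext(&fiber_ctx, fiber_body, 0);

    swapcontext(&main_ctx, &fiber_ctx);      /* save main's registers, switch stack, jump */
    puts("main: the fiber yielded, scheduling it again");
    swapcontext(&main_ctx, &fiber_ctx);      /* resume the fiber after its yield point */
    puts("main: fiber finished");
    return 0;
}

A real scheduler would keep many such contexts and pick the next one inside its yield() call, but the save/switch/jump mechanics are exactly these two swapcontext calls.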
Also remember that cooperative multitasking is orthogonal to processes, protected memory, address spaces, etc. Witness Mac OS 9 or Windows 3.x. They supported the idea of separate processes. But when you yielded, the context was changed to the OS context, allowing the OS scheduler to run, which then potentially selected another process to switch to. In theory you could have a full protected virtual memory OS that still used cooperative multitasking. In those systems, if an errant process never yielded, the OS scheduler never ran, so all other processes in the system were frozen. **
The next natural question is what makes something pre-emptive... The answer is that the OS schedules an interrupt timer with the CPU to stop the currently executing task and switch back to the OS scheduler's context regardless of whether the current task cares to release the CPU or not, thus "pre-empting" it.
If the OS uses CPU privilege levels, the (kernel configured) timer is not cancelable by lower level (user mode) code, though in theory if the OS didn't use such protections an errant task could mask off or cancel the interrupt timer and hijack the CPU. There are some other scenarios, like IO calls, where the scheduler can be invoked outside the timer, and the scheduler may decide no other process has higher priority and return control to the same process without a switch... And in reality most OSes don't do a real context switch here because that's expensive; the scheduler code runs inside the context of whatever process was executing, so it has to be very careful not to step on the stack, to save register states, etc.
** You might ask why not just fire a timer if yield isn't called within a certain period of time. The answer lies in multi-threaded synchronization. In a cooperative system, you don't have to bother taking locks, worry about re-entrance, etc because you only yield when things are in a known good state. If this mythical timer fires, you have now potentially corrupted the state of the program that was interrupted. If programs have to be written to handle this, congrats... You now have a half-assed pre-emptive multitasking system. Might as well just do it right! And if you are changing things anyway, may as well add threads, protected memory, etc. That's pretty much the history of the major OSes right there.
The basic idea behind cooperative multitasking is trust - that each subtask will relinquish control, of its own accord, in a timely fashion, to avoid starving other tasks of processor time. This is why tasks in a cooperative multitasking system need to be tested extremely thoroughly, and in some cases certified for use.
I don't claim to be an expert, but I imagine cooperative tasks could be implemented as state machines, where passing control to the task would cause it to run for the absolute minimal amount of time it needs to make any kind of progress. For example, a file reader might read the next few bytes of a file, a parser might parse the next line of a document, or a sensor controller might take a single reading, before returning control back to a cooperative scheduler, which would check for task completion.
Each task would have to keep its internal state on the heap (at object level), rather than on the stack frame (at function level) like a conventional blocking function or thread.
And unlike conventional multitasking, which relies on a hardware timer to trigger a context switch, cooperative multitasking relies on the code to be written in such a way that each step of each long-running task is guaranteed to finish in an acceptably small amount of time.
The tasks will execute an explicit wait or pause or yield operation which makes the call to the dispatcher. There may be different operations for waiting on IO to complete or explicitly yielding in a heavy computation. In an application task's main loop, it could have a *wait_for_event* call instead of busy polling. This would suspend the task until it has input to process.
There may also be a time-out mechanism for catching runaway tasks, but it is not the primary means of switching (or else it wouldn't be cooperative).
One way to think of cooperative multitasking is to split a task into steps (or states). Each task keeps track of the next step it needs to execute. When it's the task's turn, it executes only that one step and returns. That way, in the main loop of your program you are simply calling each task in order, and because each task only takes up a small amount of time to complete a single step, we end up with a system which allows all of the tasks to share cpu time (ie. cooperate).
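A tiny self-contained sketch of that "one step per turn" structure (all names here are invented for illustration): each task records how far it has gotten, and the main loop is the cooperative scheduler.

#include <stdbool.h>
#include <stdio.h>

typedef struct {
    const char *name;
    int step;                 /* the task's saved "where was I?" state */
    int total_steps;
} task_t;

/* Run one small step and report whether the task still has work left. */
static bool task_step(task_t *t) {
    if (t->step >= t->total_steps)
        return false;
    printf("%s: step %d of %d\n", t->name, ++t->step, t->total_steps);
    return true;              /* returning is the "yield": other tasks get a turn */
}

int main(void) {
    task_t tasks[] = { {"file reader", 0, 3}, {"parser", 0, 2}, {"sensor", 0, 4} };
    bool any_left = true;
    while (any_left) {        /* the cooperative "scheduler": a plain round-robin loop */
        any_left = false;
        for (size_t i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
            any_left |= task_step(&tasks[i]);
    }
    return 0;
}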
I always wondered what they are: every time I hear about them, images of futuristic flywheel-like devices go dancing (rolling?) through my mind...
What are they?
When you use regular locks (mutexes, critical sections, etc), the operating system puts your thread in the WAIT state and preempts it by scheduling other threads on the same core. This has a performance penalty if the wait time is really short, because your thread now has to wait to be scheduled again before it receives CPU time.
Besides, kernel objects are not available in every state of the kernel, such as in an interrupt handler or when paging is not available etc.
Spinlocks don't cause preemption but wait in a loop ("spin") until the other core releases the lock. This prevents the thread from losing its quantum and lets it continue as soon as the lock gets released. The simple mechanism of spinlocks allows a kernel to utilize them in almost any state.
That's why on a single core machine a spinlock is simply a "disable interrupts" or "raise IRQL" which prevents thread scheduling completely.
Spinlocks ultimately allow kernels to avoid a "Big Kernel Lock" (a lock acquired when a core enters the kernel and released at the exit) and to have granular locking over kernel primitives, which leads to better multi-processing on multi-core machines and thus better performance.
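For concreteness, the core of a spinlock is just an atomic test-and-set in a loop. A minimal user-level sketch in C11 follows; the names are mine, and a real kernel spinlock additionally deals with preemption/IRQL as described above.

#include <stdatomic.h>

typedef struct {
    atomic_flag locked;
} spinlock_t;

#define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

static void spin_lock(spinlock_t *l) {
    /* test-and-set returns the previous value; keep spinning while it was already set */
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
        ;                                   /* busy-wait: burn CPU instead of sleeping */
}

static void spin_unlock(spinlock_t *l) {
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}

Usage is simply spinlock_t lock = SPINLOCK_INIT; spin_lock(&lock); ... spin_unlock(&lock); with the critical section kept as short as possible.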
EDIT: A question came up: "Does that mean I should use spinlocks wherever possible?" and I'll try to answer it:
As I mentioned, Spinlocks are only useful in places where anticipated waiting time is shorter than a quantum (read: milliseconds) and preemption doesn't make much sense (e.g. kernel objects aren't available).
If the waiting time is unknown, or if you're in user mode, spinlocks aren't efficient. You consume 100% CPU time on the waiting core while checking if a spinlock is available, and you prevent other threads from running on that core until your quantum expires. This scenario is only feasible for short bursts at kernel level and is unlikely to be an option for a user-mode application.
Here is a question on SO addressing that: Spinlocks, How Useful Are They?
Say a resource is protected by a lock; a thread that wants access to the resource needs to acquire the lock first. If the lock is not available, the thread might repeatedly check whether the lock has been freed. During this time the thread busy-waits, checking for the lock, using CPU but not doing any useful work. Such a lock is termed a spin lock.
It is pretty much a loop that keeps going until a certain condition is met:

while (cantGoOn) {}

while (something != TRUE) {}
// it happened
move_on();
It's a type of lock that does busy waiting
It's considered an anti-pattern, except for very low-level driver programming (where it can happen that calling a "proper" waiting function has more overhead than simply busy locking for a few cycles).
See for example Spinlocks in Linux kernel.
Spinlocks are locks on which the thread waits in a loop until the lock is available. They are normally used to avoid the overhead of obtaining a kernel object when there is a good chance of acquiring the lock within some small time period.
Ex:
while (SpinCount-- && kernel object is not free)
{
}
try acquiring the kernel object
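In user space the same spin-then-fall-back idea is often expressed by spinning on a try-lock before blocking. A hedged sketch with pthreads (lock_with_spin is my own name, not a standard API):

#include <pthread.h>

/* Try to grab the mutex a bounded number of times without sleeping;
 * if that fails, give up spinning and block normally. */
static void lock_with_spin(pthread_mutex_t *m, int spin_count) {
    while (spin_count-- > 0) {
        if (pthread_mutex_trylock(m) == 0)
            return;                 /* acquired while spinning */
    }
    pthread_mutex_lock(m);          /* fall back to the blocking path */
}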
You would want to use a spinlock when you think it is cheaper to enter a busy-waiting loop and poll a resource instead of blocking when the resource is locked.
Spinning can be beneficial when locks are fine grained and large in number (for example, a lock per node in a linked list) as well as when lock hold times are always extremely short. In general, while holding a spin lock, one should avoid blocking, calling anything that itself may block, holding more than one spin lock at once, making dynamically dispatched calls (interface and virtuals), making statically dispatched calls into any code one doesn't own, or allocating memory.
It's also important to note that SpinLock is a value type, for performance reasons. As such, one must be very careful not to accidentally copy a SpinLock instance, as the two instances (the original and the copy) would then be completely independent of one another, which would likely lead to erroneous behavior of the application. If a SpinLock instance must be passed around, it should be passed by reference rather than by value.
It's a loop that spins around until a condition is met.
In a nutshell, a spinlock employs atomic compare-and-swap (CAS) or test-and-set style instructions to implement a thread-safe lock without putting the waiter to sleep. Such structures scale well on multi-core machines.
Well, yes - the point of spin locks (vs. traditional critical sections, etc) is that they offer better performance under some circumstances (multicore systems...), because they don't immediately yield the rest of the thread's quantum.
A spinlock is a type of lock that is non-blockable and non-sleepable. Any thread that wants to acquire a spinlock for a shared or critical resource will continuously spin, wasting CPU processing cycles until it acquires the lock for the specified resource. Once the spinlock is acquired, the thread tries to complete the work within its quantum and then releases the resource. A spinlock is the highest-priority type of lock; simply put, it is a non-preemptive kind of lock.
This is an operating-systems related question; I don't know if I can ask it here, but I thought I would get a proper explanation in this forum.
When a process is executing in user context, won't the higher-priority processes in kernel context keep blocking the process in user context all the time?
The concepts are hazy for me.
There are two main kinds of schedulers in operating systems: preemptive schedulers and non-preemptive schedulers.
Non-preemptive schedulers behave the way you describe: a process with higher rights and higher priority will keep using the CPU until it finishes OR until it blocks (on a mutex, for example, or with a call to yield, which explicitly releases the CPU so that another process can be scheduled).
But non-preemptive schedulers are rare, and the Linux scheduler isn't that kind. It uses time slices to let a process work for a short period of time before de-scheduling it; it also takes priority into account but keeps scheduling processes with lower priority. You should take a look at this Linux scheduler article.
This Stackoverflow posting has a discussion that includes a run-down of how kernel mode works with an explanation of some of the jargon. In particular look at the section titled 'A brief primer on kernel vs. user mode'. It might help to shed some light on your question.
http://en.wikipedia.org/wiki/Ring_0
Your process can be preempted in kernel mode as well, when its quantum expires.
Wikipedia: Preemption