Why is sleeping or blocking not allowed in an interrupt handler?
Assume I have the following setup:
A single-core system.
Developing a bare-metal application using FreeRTOS.
There are many FreeRTOS APIs which cannot be called from ISR context because they may block waiting for events to occur. So this means we cannot put the ISR into a blocked state.
If you block in an interrupt handler, it commonly cannot be triggered again, and all other interrupts of the same or lower priority, as well as the non-interrupt part of your program, are blocked too.
Bottom line: don't do it.
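Instead, the usual FreeRTOS pattern is to keep the ISR non-blocking and defer the real work to a task through a queue, using the *FromISR API variants, which never block. Here is only a minimal sketch of that pattern; names such as vUartISR, prvWorkerTask and xEventQueue are placeholders, not anything from your project:

#include <stdint.h>
#include "FreeRTOS.h"
#include "queue.h"
#include "task.h"

static QueueHandle_t xEventQueue;            /* created elsewhere, e.g. xQueueCreate( 16, sizeof( uint8_t ) ) */

void vUartISR( void )                        /* interrupt context: never block here */
{
    uint8_t ucByte = 0;                      /* pretend this was read from the hardware */
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    /* The FromISR variants never block; they fail immediately if the queue is full. */
    xQueueSendFromISR( xEventQueue, &ucByte, &xHigherPriorityTaskWoken );
    portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}

static void prvWorkerTask( void *pvParameters )
{
    uint8_t ucByte;
    ( void ) pvParameters;
    for( ;; )
    {
        /* Task context: blocking here is fine. */
        if( xQueueReceive( xEventQueue, &ucByte, portMAX_DELAY ) == pdTRUE )
        {
            /* ...process the byte... */
        }
    }
}

If the send wakes a higher-priority task, portYIELD_FROM_ISR() requests a context switch as the ISR exits, so the deferred work runs as soon as the handler returns.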
When exactly does a signal start executing in Unix? Is the signal processed when the system switches into kernel mode, or immediately when the process receives it? I assume it is processed immediately upon receipt.
A signal is the Unix mechanism for allowing a user space process to receive asynchronous notifications. As such, signals are always "delivered by" the kernel. And hence, it is impossible for a signal to be delivered without a transition into kernel mode. Therefore it doesn't make sense to talk of a process "receiving" a signal (or sending one) without the involvement of the kernel.
Signals can be generated in different ways.
They can be generated by a device driver within the kernel (for example, the tty driver in response to the interrupt, kill, or stop keys, or in response to input or output by a backgrounded process).
They can be generated by the kernel in response to an out-of-memory condition.
They can be generated by a processor exception in response to something the process itself does during its execution (illegal instruction, divide by zero, reference an illegal address).
They can be generated directly by another process (or by the receiving process itself) via kill(2).
SIGPIPE can be generated as a result of writing to a pipe that has no reader.
But in every case, the signal is delivered to the receiving process by the kernel and hence through a kernel-mode transition.
The kernel might need to force that transition -- pre-empt the receiving process -- in order to deliver the signal (for example, in the case of a CPU-bound process running on processor A being sent a signal by a different process running on processor B).
In some cases, the signal may be handled for the process by the kernel itself (for example, with SIGKILL -- or several others when no signal handler is configured).
Actually invoking a process's signal handler is done by manipulating the process's user-space stack so that the signal handler is invoked on return from kernel mode; then, if/when the signal handler procedure returns, the originally executing code can be resumed.
As to when it is processed, that is subject to a number of different factors.
There are operating system (i.e. kernel) operations that are never interrupted by signals (these are generally relatively short duration operations), in which case the signal will be processed after their completion.
The process may have temporarily blocked delivery of the signal, in which case the signal stays "pending" until it is unblocked (see the sketch after this list).
The process could be swapped out or non-runnable for any of a number of reasons -- in which case, its signal handler cannot be invoked until the process is runnable again.
Resuming the process in order to deliver the signal might be delayed by interrupts and higher priority tasks.
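As a small illustration of the "temporarily blocked" case, here is a hedged sketch in POSIX C: SIGINT is blocked with sigprocmask(), stays pending while blocked, and is only handled once it is unblocked.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

static void on_sigint(int signo)
{
    (void)signo;
    got_sigint = 1;                          /* only set a flag: handlers should stay tiny */
}

int main(void)
{
    struct sigaction sa;
    sigset_t block, pending;

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);

    sigemptyset(&block);
    sigaddset(&block, SIGINT);
    sigprocmask(SIG_BLOCK, &block, NULL);    /* delivery deferred from here on */

    sleep(5);                                /* press Ctrl-C now: nothing visible happens */

    sigpending(&pending);
    if (sigismember(&pending, SIGINT))
        printf("SIGINT is pending, not yet delivered\n");

    sigprocmask(SIG_UNBLOCK, &block, NULL);  /* the handler runs during/right after this call */
    printf("handler ran: %d\n", (int)got_sigint);
    return 0;
}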
A signal will be immediately detected by the process which receives it.
Depending on the signal type, the process might treat it with the default handler, might ignore it, or might execute a custom handler. It depends a lot on what the process is and what signal it receives. The exception is the kill signal (9), which is handled by the kernel and terminates the execution of the process that was supposed to receive it.
If a spinlock is held in process context, what will happen if the same spinlock is required in an interrupt context?
Will the interrupt handler wait until the spinlock is released by the process, or will the interrupt handler be scheduled on another processor, as mentioned in the following Stack Overflow thread?
How does Kernel handle the lock in process context when an interrupt comes?
But the question remains the same: will the interrupt handler wait for the spinlock to be released?
If a spinlock is held in process context, what will happen if the same spinlock is required in an interrupt context?
In short, this is a bad design and will lead to deadlock. That's why there are APIs like spin_lock_irq/spin_lock_irqsave that disable interrupts before acquiring such locks and avoid this contention.
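To make that concrete, here is a minimal sketch using the Linux kernel spinlock API (my_lock, shared_counter and my_irq_handler are made-up names for this example). The process-context path takes the _irqsave variant, so the interrupt cannot fire on this CPU while the lock is held, and the deadlock described above cannot happen:

#include <linux/spinlock.h>
#include <linux/interrupt.h>

static DEFINE_SPINLOCK(my_lock);
static int shared_counter;                     /* data shared with the interrupt handler */

/* process context */
static void update_from_process(void)
{
    unsigned long flags;

    spin_lock_irqsave(&my_lock, flags);        /* disable local IRQs, then take the lock */
    shared_counter++;
    spin_unlock_irqrestore(&my_lock, flags);   /* release the lock, restore IRQ state */
}

/* interrupt context (a handler registered elsewhere with request_irq) */
static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
    spin_lock(&my_lock);                       /* interrupts are already off on this CPU */
    shared_counter++;
    spin_unlock(&my_lock);
    return IRQ_HANDLED;
}

If the handler runs on another CPU, it simply spins briefly until the process-context holder releases the lock, which is fine as long as the critical section is short.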
All port operations in Rebol 3 are asynchronous. The only way I can find to do synchronous communication is calling wait.
But the problem with calling wait in this case is that it will check events for all open ports (even ports that are not in the port block passed to wait). It then calls their corresponding event handlers, and a read/write could be done in one of those event handlers, which could result in recursive calls to "wait".
How do I get around this?
Why don't you create a kind of "buffer" function to receive all messages from asynchronous entries and process them as a FIFO (first in, first out)?
This way you keep the asynchronous characteristics of your ports and still process them in sync mode.
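Since this is a design suggestion rather than Rebol-specific code, here is a rough, language-agnostic sketch in C of the buffering idea (fifo_push/fifo_pop are invented names): the asynchronous event handler only appends messages, and the synchronous part of the program drains them in FIFO order whenever it is ready.

#include <stddef.h>
#include <string.h>

#define FIFO_SLOTS 64
#define MSG_LEN    128

static char fifo[FIFO_SLOTS][MSG_LEN];
static size_t head, tail;                 /* head = next write slot, tail = next read slot */

/* called from the async event handler: do nothing but store the message */
static int fifo_push(const char *msg)
{
    if ((head + 1) % FIFO_SLOTS == tail)
        return 0;                         /* full: drop or grow in a real program */
    strncpy(fifo[head], msg, MSG_LEN - 1);
    fifo[head][MSG_LEN - 1] = '\0';
    head = (head + 1) % FIFO_SLOTS;
    return 1;
}

/* called from the synchronous processing loop */
static int fifo_pop(char *out)
{
    if (tail == head)
        return 0;                         /* empty */
    strcpy(out, fifo[tail]);
    tail = (tail + 1) % FIFO_SLOTS;
    return 1;
}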
In cases where there are only asynchronous events and a synchronous reply is needed, start a timer or sleep for a timeout; if the handler runs or the required objective is met, report true, else false, and make sure the event gets cancelled/reset if it is critical.
I think that there are 2 design problems (maybe intrinsic to the tools / solutions at hand).
Wait is doing too much - it will check events for all open ports. In a sound environment, waiting should be implemented only where it is needed: per device, per port, per socket... Creating unnecessary inter-dependencies between shared resources cannot end well - especially knowing that shared resources (even without inter-dependencies) can create a lot of problems.
The event handlers may do too much. An event handler should be as short as possible, and it should only handle the event. If it does more, then the handler is doing too much - especially if it involves other shared resources. In many situations, the handler just saves the data which would otherwise be lost, and an asynchronous job does the more complex things.
You can just use a lock. Communication1 can set some global lock state, i.e. with a variable (be sure that it's thread safe), e.g. locked = true. Then Communication2 can wait until it's unlocked:
loop do
  sleep 0.01            # poll every 10 ms
  break unless locked   # Communication1 has released the lock
end
locked = true           # take the lock for Communication2
handle_communication()
I have a process which is already in a signal handler and has made a blocking call. What will happen if one more signal arrives for this process?
By default, signals don't block each other. A signal only blocks itself during its own delivery. So, in general, handler code can be interrupted by the delivery of another signal.
You can control this behavior by setting the process signal mask relative to each signal's delivery. This means that you can block (or serialize) signal delivery. For instance, you can declare that you accept being interrupted by signal S1 while handling signal S2, but not the converse...
Remember that signal delivery introduces some concurrency into your code, so controlling the blocking is needed.
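A minimal sketch of that idea with sigaction(), assuming two hypothetical handlers for SIGUSR1 and SIGUSR2: while the SIGUSR2 handler runs, SIGUSR1 is held back via sa_mask, but the SIGUSR1 handler can still be interrupted by SIGUSR2.

#include <signal.h>
#include <string.h>

static void on_usr1(int signo)
{
    (void)signo;    /* may be interrupted by a SIGUSR2 delivery */
}

static void on_usr2(int signo)
{
    (void)signo;    /* SIGUSR1 delivery is deferred while this runs */
}

static void install_handlers(void)
{
    struct sigaction sa1, sa2;

    memset(&sa1, 0, sizeof sa1);
    sa1.sa_handler = on_usr1;
    sigemptyset(&sa1.sa_mask);              /* nothing extra blocked here */
    sigaction(SIGUSR1, &sa1, NULL);

    memset(&sa2, 0, sizeof sa2);
    sa2.sa_handler = on_usr2;
    sigemptyset(&sa2.sa_mask);
    sigaddset(&sa2.sa_mask, SIGUSR1);       /* serialize: no SIGUSR1 during SIGUSR2 */
    sigaction(SIGUSR2, &sa2, NULL);
}

Note that, by default, each handler also blocks its own signal for the duration of the call (unless SA_NODEFER is set), which is the "a signal only blocks itself during its own delivery" behavior mentioned above.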
I'm pretty sure signals are blocked while a handler is being executed, but I'm having a hard time finding something that says that definitively.
Also, you may wish to see this question - some of the answers talk about what functions you should and shouldn't call from a signal handler.
In general, you should consider a signal handler like an interrupt handler - do the very least you can in the handler, and return quickly.
I'm programming a file transfer handler with a speed-limit feature, where the rate is based on the user's level. How do I control/calculate the transfer rate in an HttpHandler?
Some ASP.NET resources tell me that using Thread.Sleep will block the ASP.NET thread pool.
It is generally a bad idea to sleep any thread from ASP.NET, because those threads could otherwise be used to service requests from the pool. If there were, say, 10 threads in the pool, sleeping 10 threads that were processing downloads would cause all other requests to pile up in the queue until a download had finished.
You are perhaps best served by creating an IHttpAsyncHandler instead of an IHttpHandler, as described in:
http://msdn.microsoft.com/en-us/library/ms227433.aspx
You can use a timer to periodically pump x bytes of data to the client (but be sure to periodically poll for a closed connection using IsClientConnected or some such).
You might want to try using timers and a timer callback to do this. The idea would be to have a timer (or maybe two) that controls when your handler can run and for how long. Every time the "go" timer expires, it starts a thread which writes your data to the response until the "stop" timer expires (or the same timer expires again); then that thread finishes what it was doing, does the housekeeping for the next thread, resets the "go" timer, and exits. Your main thread just sets up the initial timer and the data for the transfer, then invokes the timer and exits. Presumably you'd need to keep a handle to the response somewhere so that you could get access to it again. By varying how long the handler has to wait and how long it may execute, you can control how many resources it uses.
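The arithmetic behind either timer approach is the same regardless of language. A rough sketch (plain C, with made-up numbers) of how the per-tick chunk size falls out of the user's byte-rate limit:

#include <stdio.h>

int main(void)
{
    const long rate_bytes_per_sec = 64 * 1024;  /* hypothetical per-user limit: 64 KiB/s */
    const long tick_ms = 250;                   /* how often the timer callback fires */

    /* each tick may push at most this many bytes to the response stream */
    const long chunk = rate_bytes_per_sec * tick_ms / 1000;

    printf("send at most %ld bytes every %ld ms\n", chunk, tick_ms);
    return 0;
}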