If a process is waiting on a semaphore, does the OS allocate it CPU time?

I am working on a Windows system. I have a main thread from which I start a few threads. The new threads do the processing. My main thread waits on WaitForMultipleObjects(). Is my main thread also allocated CPU at regular intervals, or, since it is waiting, do the other threads share the CPU?

(Slightly different) Duplicate: Does WaitForSingleObject give up a thread's time slice?
"[No] -- the thread is blocked until whatever it's waiting on becomes signaled. The thread won't be scheduled to run while it's blocked, so other threads get all the CPU time."

Related

hyperthreading - which thread runs when both threads are ready

The idea behind Intel's hyper-threading is (as far as I understand) that one core is used for two threads in a time-multiplexed manner.
The hardware supports this by duplicating the state-related resources and time-sharing the other resources. If the running thread stalls (e.g. because it has to fetch new data from RAM), the other thread gets access to the shared resources. The result is better utilization of the shared resources.
So if one thread isn't ready, the other thread is allowed to run. In other words, a thread switch can happen when the executing thread stalls.
I've tried to find out what happens if both threads are ready for a long time, but I haven't been able to find the information.
What happens if the running thread doesn't stall?
Will the running thread continue as long as it is ready?
Will the core switch to the other thread after some time? If so - what is the criteria for the switch? Is it controlled by HW or SW?
Hyperthreading is simultaneous multithreading (SMT), so it doesn't just switch back and forth at some relatively coarse grain (like stalls). In the case of Sandy Bridge and newer, the fetcher and the decoder alternate between the threads, and the execution units are shared competitively, so even if neither thread stalls the two threads can together achieve better utilization than either would alone (though that's not typical). So the problems you identified don't apply, because it doesn't work like that in the first place.

How does async task interrupt main thread (from itself - the main one)?

I can't seem to find this specific implementation detail, or even a pointer to where in an OS book to find this.
Basically, main thread calls an async task (to be run later) on itself. So... when does it run?
Does it wait for the run loop to finish? Or does it just randomly interrupt the run-loop in the middle of any function?
I understand the registers will be the same (unless it's a separate thread), but I'm not sure about the instruction pointer, or what happens to the stack, if anything does happen.
Thank you
In C# the task is scheduled to run on the current SynchronizationContext. The context basically has a queue of tasks which it schedules to run on the threads it is associated with; in a GUI app there is only one such thread, so the task is scheduled to run there.
The GUI thread is not interrupted; it executes the task once it has finished all the other tasks preceding it in the queue.
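Setting the .NET types aside, the mechanism is easy to see in a generic sketch (plain C++ for illustration, not the actual SynchronizationContext API): the thread owns a queue, "posting" a task merely enqueues it, and the task runs only when the loop reaches it between other tasks, never by interrupting whatever function is currently executing.

```cpp
#include <deque>
#include <functional>
#include <iostream>
#include <mutex>

// Minimal single-threaded "run loop": a queue of tasks drained in order.
class RunLoop {
public:
    void post(std::function<void()> task) {          // "schedule on myself"
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push_back(std::move(task));
    }
    void run() {                                     // drain the queue in order
        for (;;) {
            std::function<void()> task;
            {
                std::lock_guard<std::mutex> lock(mutex_);
                if (queue_.empty()) return;
                task = std::move(queue_.front());
                queue_.pop_front();
            }
            task();   // posted work runs here, between tasks, never mid-task
        }
    }
private:
    std::mutex mutex_;
    std::deque<std::function<void()>> queue_;
};

int main() {
    RunLoop loop;
    loop.post([&] {
        std::cout << "first task\n";
        loop.post([] { std::cout << "posted from within a task, runs later\n"; });
        std::cout << "first task still finishing\n";   // not interrupted
    });
    loop.post([] { std::cout << "second task\n"; });
    loop.run();
}
```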
The threads of a process all share the same address space, but not the same CPU registers. How thread scheduling is done depends on the programming language and the OS. Usually there are explicit scheduling points, such as returning from a system call, blocking while awaiting I/O completion, or between p-code instructions for interpreted languages. Some OS implementations also reschedule based on how long a thread has run, for time-based scheduling. Often languages include a function that explicitly offers the CPU to any other thread or process by transferring control to the process or thread scheduler component of the OS.
The act of switching from one thread or process to another is known as a context switch and is carefully tuned code because this is often done thousands of times per second. This can make the code difficult to follow.
The best explanation of this I've ever seen is the classic The Design of the UNIX Operating System: http://www.amazon.com/The-Design-UNIX-Operating-System/dp/0132017997
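As a concrete example of the last point about explicitly offering the CPU, C++ exposes such a call directly as std::this_thread::yield() (SwitchToThread() plays the same role in Win32). A trivial, illustrative sketch:

```cpp
#include <atomic>
#include <thread>

// Busy-waiting loop that explicitly offers the CPU to any other runnable
// thread on each iteration instead of spinning hot.
void wait_until_ready(const std::atomic<bool>& ready)
{
    while (!ready.load()) {
        std::this_thread::yield();   // hand the time slice back to the scheduler
    }
}
```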

"System Idle Process" eats CPU on a high threading application

I have a multi-threaded web application with about 1000~2000 threads in the production environment.
I expect the CPU usage to show up under w3wp.exe, but the System Idle Process eats the CPU instead. Why?
The Idle process isn't actually a real process and it doesn't "eat" your CPU time; the %CPU you see next to it is really unused %CPU (more or less).
The reason for the poor performance of your application is most likely your 2000 threads. Windows (or indeed any operating system) was never meant to run so many threads at a time. Most of the time is wasted just context switching between them, with each thread getting a couple of milliseconds of processing time every ~30 seconds (15 ms * 2000 = 30 s!).
Rethink your application.
The Idle process simply holds processor time until a program needs it; it isn't actually eating any cycles at all. You can think of System Idle time as "available CPU".
System Idle Process is not a real process, it represents unused processor time.
This means that your app doesn't utilize the processor completely; it may be memory- or I/O-bound, or possibly the threads are waiting for each other or for external resources. Context-switching overhead could also be a culprit: unless you have 2000 cores, the threads are not actually all running at the same time but are assigned time slices by the scheduler, and that switching also takes time.
You have not provided a lot of details, so I can only speculate at this point. I would say it is likely that most of those threads are doing nothing. The ones that are doing something are probably I/O-bound, meaning that they spend most of their time waiting for an external resource to respond.
Now let's talk about the "1000~2000 threads". There are very few cases (maybe none) where having that many threads is a good idea, and I think your current issue is a perfect example of why. Most of those threads are (apparently, anyway) doing nothing but wasting resources. If you want to process multiple tasks in parallel, especially if they are I/O-bound, it is better to take advantage of pooled resources such as the ThreadPool or the Task Parallel Library.
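The ThreadPool and the Task Parallel Library are .NET facilities; purely to illustrate the pooled approach the answer recommends, here is a generic C++ sketch (the names are illustrative, not any particular library's API) in which a handful of threads service thousands of queued work items instead of one dedicated thread per item.

```cpp
#include <atomic>
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A tiny fixed-size pool: a few threads drain a shared task queue.
class ThreadPool {
public:
    explicit ThreadPool(unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] { workerLoop(); });
    }
    ~ThreadPool() {                       // finish queued work, then stop
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stopping_ = true;
        }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }
    void submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }
private:
    void workerLoop() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return stopping_ || !tasks_.empty(); });
                if (stopping_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();
        }
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool stopping_ = false;
};

int main() {
    std::atomic<int> done{0};
    {
        // 2000 units of work, but only as many threads as the hardware has.
        ThreadPool pool(std::thread::hardware_concurrency());
        for (int i = 0; i < 2000; ++i)
            pool.submit([&done] { ++done; });   // stand-in for real work
    }   // destructor drains the queue and joins the workers
    std::cout << done << " tasks completed\n";
}
```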

one timer per thread using Qt

I modified Qt's broadcast sender example so that it has ten threads, and in each thread it starts a timer, but only the timer of the first thread is ever triggered. How can I have one timer running in each thread?
Timers only work if the thread has an event loop.
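In other words, each thread must run its own event loop for a QTimer created in it to fire. A minimal sketch of one way to do that (subclassing QThread; the class name and interval are illustrative):

```cpp
#include <QCoreApplication>
#include <QThread>
#include <QTimer>
#include <QDebug>

// Each thread runs its own event loop (exec()), so a QTimer created
// inside run() fires in that thread.
class TimerThread : public QThread {
protected:
    void run() override {
        QTimer timer;                              // lives in this thread
        QObject::connect(&timer, &QTimer::timeout, [] {
            qDebug() << "timeout in thread" << QThread::currentThreadId();
        });
        timer.start(1000);
        exec();                                    // per-thread event loop
    }
};

int main(int argc, char* argv[]) {
    QCoreApplication app(argc, argv);
    TimerThread threads[10];
    for (auto& t : threads) t.start();
    return app.exec();
}
```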
A couple of years later, in an OS course, I learned:
Timers are a per-process thing. When the OS kernel sends a timer trigger event, whatever thread is currently running gets the call and processes it.
So I couldn't have one timer per thread in a straightforward manner.

What's the difference between a worker thread and an I/O thread?

Looking at the processModel element in the Web.config, there are two attributes.
maxWorkerThreads="25"
maxIoThreads="25"
What is the difference between worker threads and I/O threads?
Fundamentally, not a lot; it's all about how ASP.NET and IIS allocate I/O wait objects and manage the contention and latency of communicating over the network and transferring data.
I/O threads are set aside as such because they will be doing I/O (as the name implies) and may have to wait for "long" periods of time (hundreds of milliseconds). They also can be optimized and used differently to take advantage of I/O completion port functionality in the Windows kernel. A single I/O thread may be managing multiple completion ports to maintain throughput.
Windows has a lot of capabilities for dealing with I/O blocking whereas ASP.NET/.NET has a plain concept of "Thread". ASP.NET can optimize for I/O by using more of the unmanaged threading capabilities in the OS. You wouldn't want to do this all the time for every thread as you lose a lot of capabilities that .NET gives you which is why there is a distinction between how the threads are intended to be used.
Worker threads are threads upon which regular "work" or just plain code/processing happens. Worker threads are unlikely to block a lot or wait on anything and will be short running and therefore require more aggressive scheduling to maximize processing power and throughput.
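For a feel of the completion-port machinery mentioned above, here is a bare-bones Win32 C++ sketch of an I/O thread's loop; real code would associate sockets or file handles with the port and issue actual overlapped operations, and the byte counts and keys here are made up for illustration.

```cpp
#include <windows.h>
#include <cstdio>

// One I/O thread draining a completion port: it sleeps in the kernel
// until an overlapped operation completes, then handles the result.
DWORD WINAPI IoThread(LPVOID param)
{
    HANDLE port = static_cast<HANDLE>(param);
    for (;;) {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        OVERLAPPED* ov = nullptr;
        if (!GetQueuedCompletionStatus(port, &bytes, &key, &ov, INFINITE)) {
            if (ov == nullptr) break;      // port closed or wait failed
            continue;                      // an operation completed with an error
        }
        if (ov == nullptr) break;          // null packet used here as a "stop" signal
        printf("completed %lu bytes for key %llu\n",
               bytes, static_cast<unsigned long long>(key));
        // ... dispatch the completed operation ...
    }
    return 0;
}

int main()
{
    // Create the port; handles doing real I/O would be associated with it
    // through further CreateIoCompletionPort calls.
    HANDLE port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, nullptr, 0, 1);
    HANDLE thread = CreateThread(nullptr, 0, IoThread, port, 0, nullptr);

    OVERLAPPED fake = {};                                // simulated completion
    PostQueuedCompletionStatus(port, 42, 7, &fake);      // pretend 42 bytes finished
    PostQueuedCompletionStatus(port, 0, 0, nullptr);     // stop signal

    WaitForSingleObject(thread, INFINITE);
    CloseHandle(thread);
    CloseHandle(port);
    return 0;
}
```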
[Edit]: I also found this link which is particularly relevant to this question:
http://blogs.msdn.com/ericeil/archive/2008/06/20/windows-i-o-threads-vs-managed-i-o-threads.aspx
Just to add on to chadmyers...
It seems I/O threads were the old way ASP.NET serviced requests:
"Requests in IIS 5.0 are typically serviced over I/O threads, or threads performing asynchronous I/O because requests are dispatched to the worker process using asynchronous writes to a named pipe."
With IIS 6.0 this has changed:
"Thus all requests are now serviced by worker threads drawn from the CLR thread pool and never on I/O threads."
Source: http://msdn.microsoft.com/hi-in/magazine/cc164128(en-us).aspx
