Why is a context switch slow? - asynchronous

I want to know why a context switch is slow compared to asynchronous operations on the same thread.
Why is it better to run N threads (with N equal to the number of cores), each one processing M clients asynchronously, instead of running M threads? I've been told the reason is context switch overhead, but I can't find figures for how slow context switches actually are.

Just to clarify, I will assume that when you say "instead of running M threads" you mean N*M threads (if you ran only M threads, each one would need to process N clients to match the same total number of clients, and that would be a similar case).
So the difference between N threads running on N cores, each one processing M clients, and N*M threads running on the same number of cores, is that in the first case you won't have to create new threads and, as you said, you won't have context switching. This is an advantage because creating OS threads is expensive: the kernel must allocate a new stack, set up scheduling structures, and so on. Besides, if you have more threads, the OS scheduler will constantly be suspending and resuming them, which is also time-consuming. And every time the scheduler moves a different thread onto a core, that thread's working set is unlikely to still be in the core's caches, so it incurs a burst of cache misses and consequently more time.
On the other hand, if you have a fixed number of threads, equal to the number of cores (sometimes even N-1 is suggested), you can manage the "tasks" or clients with a user-level scheduler. This may add a little computation inside your program, but it avoids a lot of OS process and memory management, making the overall execution faster. Some current parallel APIs, such as the .NET Task Parallel Library (TPL), OpenMP, Intel's Threading Building Blocks, and Cilk, embody this model of parallelism, called dynamic multithreading.
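As a rough illustration of the fixed-pool idea, here is a minimal Python sketch (handle_client and the client count are hypothetical placeholders, not anything from the question):

    import os
    from concurrent.futures import ThreadPoolExecutor

    def handle_client(client_id):
        # Hypothetical placeholder for per-client work (I/O, parsing, ...).
        return f"client {client_id} done"

    NUM_CORES = os.cpu_count() or 4
    M_CLIENTS = 10_000

    # Fixed pool: N OS threads are multiplexed over all M clients, so the
    # kernel only ever schedules N runnable threads instead of M, avoiding
    # M thread creations and most of the context switching.
    with ThreadPoolExecutor(max_workers=NUM_CORES) as pool:
        results = list(pool.map(handle_client, range(M_CLIENTS)))

Spawning one thread per client instead would pay the thread-creation and context-switch costs M times over, which is exactly the overhead the question asks about.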

Related

Hyperthreading makes my code run slower?

Some multithreaded code I just wrote appears to run slower under hyperthreaded CPUs - i.e. disabling hyperthreading makes it run FASTER. Is this normal?
This depends entirely on use case. A subjective term like normal has a lot of leeway! There are use cases where Hyper-Threading (HT) makes sense, and cases where it will have a performance impact.
One such case of performance decrease is for applications making heavy use of AVX instructions. The AVX instructions are carried out in the vector processing unit (VPU), of which there is one per core in Intel Xeon processors. Additional threads will block when trying to access the VPU if it is not available, leading to no performance improvement with the use of HT.
If you have, say, 4 cores with HT, allowing you to run 8 threads, only 4 VPU instructions can actually execute at a time - so the other 4 threads will block while they wait for the unit. The additional overhead of the blocking and scheduling will usually net you lower throughput than running 4 threads on 4 cores with HT disabled.
Likewise, if you run just 4 threads on the 8 logical cores, the OS scheduler may place the threads on any logical core - so there may still be times when one thread blocks waiting for another to finish with the VPU. Some newer applications and job schedulers can now coordinate with the OS to "pin" threads to physical cores, allowing HT to be enabled without oversubscribing the number of threads running on a core. Over time this will probably get better, but it does require awareness on the developer's part (see the sketch below).
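To make the pinning concrete, here is a minimal Linux-only Python sketch using os.sched_setaffinity. The CPU numbering is an assumption (on many Linux boxes logical CPUs 0-3 are the physical cores and 4-7 their HT siblings); check lscpu or /proc/cpuinfo for your actual topology:

    import os

    # Assumed topology: logical CPUs 0-3 are distinct physical cores.
    PHYSICAL_CORES = {0, 1, 2, 3}

    os.sched_setaffinity(0, PHYSICAL_CORES)   # 0 = the calling process
    print("now restricted to CPUs:", os.sched_getaffinity(0))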
For more general-purpose use cases, like a generic server handling many types of workloads, the advantage of HT running additional threads on a single core is usually a performance gain.

Difference between processor and process in parallel computing?

Every time I come across something like "process 0 does x task", I am inclined to think they mean processor.
After reading a bit more about it, I find that there are two memory classifications, shared memory and distributed memory:
A shared-memory system executes something like a thread (implying the same data is available to all processors, hence it makes sense to call it a process). However, even for distributed memory it is called a process instead of a processor. For example: "Process 0 is computing the partial dot product"
Why is this so? Why is it called a process and not a processor?
PS. I hope this question is not trivial :)
These other answers are all pretty spot on. Processors are physical, processes are software. So a quad core CPU will have 4 processors, but can run many more processes.
Your confusion around distributed terminology is fair though. In distributed computing, typically X processes will be executed, where X equals the number of hardware processors. In this scenario, each process gets an ID in software, often called a rank. Ranks are independent of processors, and different ranks will have different tasks. So when you report a status, the information is relative to the process rank, not the physical processor.
To rephrase, in distributed computing there will usually be one process running on each processor. The process will have a unique id that is more important in the software than the physical processor it is running on, so status information is given about the process. As the number of processes and processors are equal, this distinction can get a bit blurred.
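To make the rank idea concrete, here is a minimal sketch of the partial dot product using mpi4py (the vector length and contents are made up for illustration, and any remainder of the division is ignored):

    # Run with, e.g.: mpiexec -n 4 python dot.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()      # the process ID ("rank"), not a CPU number
    size = comm.Get_size()

    chunk = 1_000_000 // size   # each rank owns one slice of the vectors
    x = np.ones(chunk)
    y = np.ones(chunk)

    partial = float(x @ y)      # "process <rank> computes the partial dot product"
    total = comm.reduce(partial, op=MPI.SUM, root=0)

    if rank == 0:
        print("dot product =", total)

Note that nothing here says which physical processor rank 0 is running on; status is reported per process, exactly as in the question.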
The distinction is hardware vs software.
The process is the logical instance of your program. The processor is the hardware entity that runs the process. Most of the time, you don't care about the actual processor, only the process that's executing.
For instance, the OS may decide to temporarily put your processes to sleep in order to give other applications runtime, and later it may awaken them on different processors. As long as your processes produce the expected results, this should not be of any interest to you: all you care about is the computation, not where it's happening.
For me, processor refers to the machine component that is responsible for computing operations. A process is a single instance of some program. (I hope I understood what you meant.)
I would say that people use the terms interchangeably because most of the time the context allows it and the difference may be subtle. That is, since each process (when it is single-threaded) executes on a processor, people typically do not want to make the distinction between the physical entity (processor) and the logical entity (process).
This assumption might be wrong when considering processors with multithreading capabilities (SMT, or Hyper-Threading on Intel processors) and/or multi-threaded applications, because processes run on whatever processor (or hardware thread) is available. In those situations, people should be stricter when making these statements. Still, since it is possible to bind a process (and even a thread) to a processor (or a processor's hardware thread) using affinity commands, the two terms can still be used interchangeably under those circumstances.

Max number of workers/slaves for parallel job snow

I'm running a foreach loop with the snow back-end on a Windows machine. I have 8 cores to work with. The R script is executed via a system call embedded in a Python script, so there is an active Python instance too.
Is there any benefit to not having #workers = #cores and instead #workers < #cores, so there is always an opening for system processes or the Python instance?
It successfully runs with #workers = #cores, but do I take a performance hit by saturating the cores (max possible threads) with the R worker instances?
It will depend on:
1. your processor (specifically hyperthreading),
2. how much info has to be copied to/from the different images,
3. whether you're implementing this over multiple boxes (LAN).
For 1), hyperthreading helps. I know my machine does it, so I typically have twice as many workers as cores, and my code completes in about 85% of the time compared to matching the number of workers to the number of cores. It won't improve beyond that.
For 2), if you're not forking (using sockets, for instance), you're working as if in a distributed-memory paradigm, which means creating one copy in memory for every worker. This can take a non-trivial amount of time. Also, multiple images on the same machine may take up a lot of memory, depending on what you're working on. I often match the number of workers to the number of cores because doubling the workers would make me run out of memory.
This is compounded by 3), network speeds over multiple workstations. Locally between machines, our switch will transfer things at about 20 MB/s, which is 10x faster than my internet download speed at home but a snail's pace compared to making copies within the same box.
You might consider increasing R's nice value so that Python has priority when it needs to do something.
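Since the driver here is already Python, a minimal sketch of the leave-one-core-free pattern on the Python side might look like this (job.R and the argument range are hypothetical stand-ins; inside R, snow's cluster size would be set to the same workers value):

    import os
    import subprocess
    from concurrent.futures import ProcessPoolExecutor

    def run_job(arg):
        # Hypothetical: one R job per input; "job.R" stands in for your script.
        return subprocess.run(["Rscript", "job.R", str(arg)]).returncode

    if __name__ == "__main__":
        # cores - 1 leaves headroom for the OS and this Python process.
        workers = max(1, (os.cpu_count() or 2) - 1)
        with ProcessPoolExecutor(max_workers=workers) as pool:
            exit_codes = list(pool.map(run_job, range(100)))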

Will using Parallel.ForEach in ASP.NET prevent other threads from processing requests?

I'm looking at using some PLINQ in an ASP.NET application to increase throughput on some I/O bound and computationally bound processing we have (e.g., iterating a collection to use for web service calls and calculating some geo-coordinate distances).
My question is: should I be concerned that running operations like Parallel.ForEach will take threads away from processing other requests, or does it use a separate pool of threads (I've heard about something called I/O completion ports, but not sure how that plays into the discussion)?
Parallel.ForEach will, at most, use roughly one thread per core. The thread pool's default maximum size is 250 times the number of cores you have, so you'd have to be really trying to run out of available threads.

Multitasking on Linux with multiple CPUs

I feel my question is quite basic, but I couldn't find any related SO question.
I need to run a program a few thousand times (with different input each time), and currently it is done by a shell script. The machine runs Ubuntu and has 8 CPUs (as revealed by cat /proc/cpuinfo). Using top, I see that only 1 CPU is utilized. In order to speed things up, I want to utilize all 8 CPUs. I know I can start the program in the background and then call it again (and indeed top reveals that 2 CPUs are utilized in that case), so I could change my shell script to call the program in groups of 8. My question is, is that a recommended way to utilize all CPUs, or is there another, somewhat 'cleaner' way?
You can use CPU affinity to be explicit about which processor a process runs on.
http://www.cyberciti.biz/tips/setting-processor-affinity-certain-task-or-process.html
However, if each process runs on a CPU (as it should; the kernel will make sure that things run as efficiently as possible), then just fire off n processes (8 in your case; or have your shell script figure out what n is so the script is a bit more robust, or make it a command-line option) and let the kernel do it for you. Each time a process ends, fire off another process until you are done.
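A minimal Python sketch of that fire-and-replace loop ("./myprog" and inputs.txt are hypothetical stand-ins for your program and its few thousand inputs):

    import os
    import subprocess
    import time

    N = os.cpu_count() or 8
    inputs = [line.strip() for line in open("inputs.txt")]

    running = []
    for arg in inputs:
        while len(running) >= N:              # all slots busy: wait for one
            time.sleep(0.1)
            running = [p for p in running if p.poll() is None]
        running.append(subprocess.Popen(["./myprog", arg]))

    for p in running:                         # drain the final batch
        p.wait()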
The question is overly vague.
That you want to use all the CPUs implies you want the end result as quickly as possible - but a major concern for the performance of multiple instances would be contention for resources (reducing performance) and caching (improving performance).
Splitting the job amongst multiple processes will usually yield results faster, and there are many, many ways of sharding the workload. But without knowing a lot more about what the program is doing, it is difficult to recommend a particular approach.
Given that you have 8 CPUs, and assuming the only constrained resource is the CPU, you don't want more than 8 threads running concurrently on the job. The problem then becomes how to schedule the work so that you use the 8 cores optimally. If you split the work into 8 scripts and run them concurrently, you will initially see all 8 scripts running, but it is very likely, depending on the nature of the work, that the scripts will finish at different times.
So if you really want to use the hardware optimally, that means running 8 processes as daemons, preferably with each process having a CPU affinity set, fed by a message queue (see the sketch below). But is it really worthwhile coding all this if you're not going to be running it regularly? Also, it may be faster to run just 7 workers and keep one CPU free for handling the queue and other demands placed on the box.
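A minimal sketch of that daemons-plus-queue design in Python (Linux is assumed for the affinity call, and handle is a hypothetical per-job function):

    import multiprocessing as mp
    import os

    def handle(job):
        pass                                  # hypothetical per-job work

    def worker(cpu, q):
        os.sched_setaffinity(0, {cpu})        # Linux-only: pin worker to one CPU
        while True:
            job = q.get()
            if job is None:                   # sentinel: shut down
                break
            handle(job)

    if __name__ == "__main__":
        n_workers = 7                         # keep the 8th CPU for the feeder
        q = mp.Queue(maxsize=2 * n_workers)
        procs = [mp.Process(target=worker, args=(i, q)) for i in range(n_workers)]
        for p in procs:
            p.start()
        for job in range(10_000):             # the feeder: enqueue all the work
            q.put(job)
        for _ in procs:                       # one sentinel per worker
            q.put(None)
        for p in procs:
            p.join()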
