Does the number of cores affect execution time? (multiprocessor)

Say I have written a program that takes 30 seconds to execute on a dual-core processor. How long would it take on a 16-core processor? The same, or different?
Two cases:
One: the program is written with multiple cores in mind.
Two: the program is written irrespective of the number of cores.

Viewed in isolation, unless you have explicitly written multithreaded code, the runtime should be the same.[1]
Of course, it might be faster if you have other applications running simultaneously, because they can now run on the other cores.
If you have written multithreaded code, then the speedup you see will be based on all sorts of factors (memory bandwidth, IO bandwidth, memory access patterns, cache coherency, synchronisation, etc.), as well as Amdahl's law. It will always be some number less than N (where N is the number of cores).
[1] Assuming we're talking about a conventional platform.
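To make Amdahl's law concrete, here is a minimal C sketch of the bound S(N) = 1/((1-p) + p/N); the 90% parallel fraction is a made-up number for illustration, not something from the question:

#include <stdio.h>

/* Amdahl's law: best-case speedup on n cores when a fraction p
   of the runtime is parallelizable. */
static double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    double p = 0.9; /* hypothetical: 90% of the work parallelizes */
    printf("2 cores:  %.2fx\n", amdahl(p, 2));  /* ~1.82x */
    printf("16 cores: %.2fx\n", amdahl(p, 16)); /* ~6.40x */
    return 0;
}

Even with 90% of the work parallel, 16 cores give well under 16x: the serial fraction dominates as N grows.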

Related

Should I aim for multiple physical cores or multiple threads for parallel computing in R?

I am new to parallel computing and this may be a trivial question. I am thinking about which processor to choose for parallel computing (on a single machine). In particular, I would like to know whether I should aim for a high number of (physical) cores or a high number of threads?
I am working with R (package parallel) on Windows. Typically, the datasets are not large, so the limit is not the memory but the number and duration of independent processes run on the data.
I understood that parallel makes use of logical cores (i.e., hardware threads), but that such threads do not work truly in parallel because they share "execution resources" (https://en.wikipedia.org/wiki/Hyper-threading). So, would, e.g., 4 (physical) cores with 1 thread each result in more speed (throughput) than 2 (physical) cores with 2 threads each (i.e., 4 logical cores)?
Suggestions on specific processors are also more than welcome.
For memory- or I/O-intensive workloads, an HT-enabled processor provides better performance and power efficiency at lower cost. For compute-intensive workloads, the possible gain from additional logical threads shrinks. Your application seems to be compute intensive. If that is the only kind of workload the system has to execute, you can look for a system with a higher number of physical cores.
The problem is that only a limited number of processors come without logical threads: most Intel processors support Hyper-Threading, and the cost difference between processors with and without HT is small. With an HT-enabled processor, a system can handle more diverse workloads and multitask more efficiently.
It is possible to disable HT by configuring the BIOS.
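If you want to check what a given machine exposes, here is a minimal C sketch for Linux (sysconf reports logical CPUs, so with HT enabled it is typically twice the physical core count; the physical count itself has to come from /proc/cpuinfo or lscpu):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* _SC_NPROCESSORS_ONLN counts *logical* CPUs: with Hyper-Threading
       enabled this is typically twice the number of physical cores. */
    long logical = sysconf(_SC_NPROCESSORS_ONLN);
    printf("logical CPUs online: %ld\n", logical);
    return 0;
}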

Hyperthreading makes my code run slower?

Some multithreaded code I just wrote appears to run slower on hyperthreaded CPUs - i.e., disabling hyperthreading makes it run FASTER. Is this normal?
This depends entirely on the use case. A subjective term like "normal" has a lot of leeway! There are use cases where Hyper-Threading (HT) makes sense, and cases where it will hurt performance.
One such case of performance decrease is applications making heavy use of AVX instructions. AVX instructions are carried out in the vector processing unit (VPU), of which there is one per core in Intel Xeon processors. Additional threads will block when trying to access the VPU if it is not available, so HT yields no performance improvement.
If you have, say, 4 cores with HT, allowing you to run 8 threads, you will only actually be able to run 4 VPU instructions at a time, so the other 4 threads will block until the first 4 complete. The added overhead of blocking and scheduling will usually net you lower throughput than running 4 threads on 4 cores with HT disabled.
Likewise, running just 4 threads on the 8 logical cores, the OS scheduler can place the threads on any logical core, so there may still be cases where one thread blocks waiting for another to complete. Some newer applications and job schedulers can coordinate with the OS to "pin" threads to physical cores, allowing HT to stay enabled without oversubscribing the number of threads running on a core (see the sketch below). Over time this will probably get better, but it does require awareness on the developer's part.
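As a rough illustration of such pinning on Linux, here is a sketch using pthread_setaffinity_np; the assumption that even-numbered logical CPUs are distinct physical cores is hypothetical and varies by machine, so check lscpu first:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Pin the calling thread to one logical CPU. The mapping of logical
   CPU IDs to physical cores is machine dependent -- here we *assume*
   even IDs are distinct physical cores. */
static int pin_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *worker(void *arg) {
    int cpu = *(int *)arg;
    if (pin_to_cpu(cpu) != 0)
        fprintf(stderr, "could not pin to CPU %d\n", cpu);
    /* ... AVX-heavy work would go here ... */
    return NULL;
}

int main(void) {
    pthread_t t[4];
    int cpus[4] = {0, 2, 4, 6}; /* hypothetical: one per physical core */
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, &cpus[i]);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}

Compile with -pthread; with 4 threads each pinned to its own physical core, no two threads compete for the same VPU.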
For more general-purpose use cases, like a generic server handling many types of workloads, the ability of HT to run additional threads on a single core is usually a performance gain.

What is the practical relationship between the STREAM memory bandwidth benchmark and the potential speedup from running MPI locally?

I ran the STREAM memory bandwidth benchmark (https://www.cs.virginia.edu/stream/) on a computer with 10 processors. The benchmark indicated that after 3 or 4 processors, the speedup plateaued at about 3x. What are the practical implications of this result for the performance of an MPI code? For simplicity, assume the program is running multiple processes locally on this multicore machine only. Does this mean that if you are running a memory-access-intensive program, then you will not be able to get more than 3x speedup, even if you use all the cores? If you ran a program that was not memory-access intensive, could you theoretically get the full 10x? If you simultaneously ran two or three memory-access-intensive programs, each using three processors, would they each be able to get 3x speedup, or would they interfere with each other and slow each other down as they all simultaneously pulled from RAM?
Speedup is about how much parallelism exists in the code. Moreover, any resource can become a bottleneck depending on the type of application. If your application is memory intensive, you will be limited by the memory bandwidth. If it's not memory intensive and it is highly parallel (take Monte Carlo sampling as an example), then you will get close to the full speedup from your cores.
To answer your last question (multiple memory-intensive programs): at the end of the day we rely on memory controllers to do the reads and writes, so it depends on the memory banks and on where the physical pages are allocated from. Either of the two situations you mentioned could happen.
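For a feel of what "memory-access intensive" means here, consider a STREAM-style triad kernel (a sketch in C, not the actual benchmark code):

#include <stddef.h>

/* STREAM-style triad: a[i] = b[i] + s*c[i]. Roughly one fused
   multiply-add per 24 bytes moved, so the loop is limited by memory
   bandwidth rather than by arithmetic -- once bandwidth saturates,
   adding more cores gains nothing. */
void triad(double *a, const double *b, const double *c,
           double s, size_t n) {
    for (size_t i = 0; i < n; i++)
        a[i] = b[i] + s * c[i];
}

A code dominated by loops like this hits the ~3x plateau you measured; a code doing many arithmetic operations per byte loaded can keep scaling toward 10x.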

Difference between processor and process in parallel computing?

Every time I come across something like "process 0 does x task", I am inclined to think they mean processor.
After reading a bit more about it, I find that there are two memory classifications, shared memory and distributed memory:
In shared memory, execution looks something like threading (the same data is available to all processors, hence it makes sense to call the unit of execution a process). However, even for distributed memory it is called a process instead of a processor. For example: "Process 0 is computing the partial dot product".
Why is this so? Why is it called a process and not a processor?
PS. I hope this question is not trivial :)
These other answers are all pretty spot on: processors are physical, processes are software. So a quad-core CPU will have 4 processors, but can run many more processes.
Your confusion around distributed terminology is fair, though. In distributed computing, typically a number of processes is launched equal to the number of hardware processors. In this scenario, each process gets an ID in software, often called a rank. Ranks are independent of processors, and different ranks will have different tasks. So when you report a status, information is relative to the process rank, not the physical processor.
To rephrase, in distributed computing there will usually be one process running on each processor. The process will have a unique id that is more important in the software than the physical processor it is running on, so status information is given about the process. As the number of processes and processors are equal, this distinction can get a bit blurred.
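A minimal MPI sketch in C (the calls are standard MPI; the dot-product task is just the example from the question) shows that the code reasons about ranks, not physical processors:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's ID (rank) */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

    /* Status is reported per rank; which physical core runs this
       process is decided by the OS / MPI runtime, not by the code. */
    if (rank == 0)
        printf("Process 0 of %d is computing the partial dot product\n", size);

    MPI_Finalize();
    return 0;
}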
The distinction is hardware vs software.
The process is the logical instance of your program. The processor is the hardware entity that runs the process. Most of the time, you don't care about the actual processor, only the process that's executing.
For instance, the OS may decide to temporarily put your processes to sleep in order to give other applications runtime, and later it may awaken them on different processors. As long as your processes produce the expected results, this should not be of any interest to you: all you care about is the computation, not where it's happening.
For me, "processor" refers to the machinery responsible for computing operations. A "process" is a single instance of some program. (I hope I understood what you meant.)
I would say that the terms are used interchangeably because most of the time the context allows it and the difference can be subtle. That is, since each (single-threaded) process executes on a processor, people typically do not want to make the distinction between the physical entity (processor) and the logical entity (process).
This assumption might be wrong when considering processors with multithreading capabilities (SMT, called Hyper-Threading on Intel processors) and/or multi-threaded applications, because processes run on any available processor (or hardware thread). In those situations, people should be stricter when making these statements. Still, since it is possible to bind one process (and even one thread) to a processor (or processor thread) using affinity commands, the two terms can be used interchangeably under those circumstances.

MPI: cores or processors?

Hi I am kind of MPI noob so please bear with me on this one. :)
Say I have an MPI program called foo.c and I run the executable with
mpirun -np 3 ./foo
Now this means the program will be run in parallel using 3 processors (1 process per processor). But since most processors today have more than one core (say 2 cores per processor), does this mean the program will be run on 3 cores or 3 processors?
Probably this has to do with my poor understanding of what the difference between a core and a processor really is so if you could also explain a little more that would be helpful.
Thank you.
mpirun will execute a number of "processes" on the machine. The CPU or core where these processes are executed is operating-system dependent.
On a machine with N CPUs and M cores per CPU, there is room for N*M processes running at full speed.
But, typically:
If you have multiple cores, each process will run on a separate core
If you ask for more processes than the available cores*CPUs, everything will still run, but with lower efficiency (yes, you can run multi-process jobs on a single-CPU, single-core machine...)
If you are using a queuing system or a preconfigured MPI system for which a list of remote machines exists, the allocation will be distributed on the remote machines.
(Depending on the MPI implementation, there might be options to force a specific CPU or core, but you should not need to worry about that.)
Distribution of processes to cores and processors is handled by the operating system and the MPI implementation. Running on a desktop, the operating system will generally put each process on a different core, potentially redistributing processes during run-time. In larger systems such as a supercomputer or a cluster, the distribution is handled by resource managers such as SLURM. However this happens, one or more processes will be assigned to each core.
Regarding hardware, a core can run only a single process at a time. Technologies such as hyper-threading allow multiple processes to share the resources of a single core. There are cases where two or more processes per core are optimal. For instance, if one process is doing a large amount of file I/O, another may take its place and do computation while the first is blocked on a read or write.
In short, give MPI the number of processes you want to execute. Distribution of these processes is then handled transparently to the user. The number of processes you use should be determined by the requirements of the application (powers of 2, number of files to be read), the number of cores available, and the optimal number of processes per core for the application.
The OS scheduler will try to optimally allocate separate cores to your parallel application's processes in a multi-core system, or separate processors in a multi-processor system.
The interesting case is a multi-core, multi-CPU system. Again, you can let the OS scheduler do it for you, or you can enforce (logical/physical) core affinity on your processes to bind them to a particular core.
The mpirun command uses a hostlist. If you don't specify one, it will probably use "localhost" and run all your processes there. If you run 3 processes and you have a 4-core machine, you will probably get good speedup because the OS will generally put them on different cores. If you only have two cores, then one core will get two processes.
The previous is not entirely true, since the OS is allowed to move processes, so you may want to use numactl to bind them to a core.
If you are on a multi-node cluster, then a well-set-up MPI will generate a hostfile where each node appears as many times as it has cores. So on a 4-node cluster with 8 cores per node, you can request up to 32 processes and expect close to perfect speedup. (If your code and your algorithm allow that, of course.) Requesting 9 processes on that cluster may put 8 on one node and the 9th on another, which is of course not great for performance. You'd hope that your cluster software comes with an mpirun that spreads the processes out better than that.
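As a sketch of what such a hostfile can look like (Open MPI syntax with hypothetical node names; other implementations differ):

node01 slots=8
node02 slots=8
node03 slots=8
node04 slots=8

You would then launch with something like mpirun -np 32 --hostfile hosts ./foo, and the runtime would fill the slots node by node.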
From the performance point of view of an MPI job, there are some explicit rules:
1) If your code is pure MPI (e.g., BLAS is not tuned with OpenMP), turn off hyper-threading and set the number of tasks per node to the number of cores per node.
2) If your code is MPI+OpenMP, you can set PPN (processes per node) to the number of cores per node and OMP_NUM_THREADS to 2 (if there are two hardware threads per core). A sketch of this layout follows below.
3) If your code is MPI+OpenMP and your cluster is huge, you can set PPN (processes per node) to 1 and OMP_NUM_THREADS to the number of logical CPUs, to save communication overhead.
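Here is a minimal hybrid MPI+OpenMP sketch in C of the layout rule 2 describes (hypothetical, just to show the structure; compile with something like mpicc -fopenmp):

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* One MPI process per core (PPN = cores per node); each process
       spawns OMP_NUM_THREADS OpenMP threads, e.g. 2 to cover both
       hardware threads of its core. */
    #pragma omp parallel
    printf("rank %d, thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}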
In order to provide a useful framework I would consider this hierarchy:
a motherboard can hold one or more chips/dice;
a chip/die can contain one or more cores (independent CPUs);
a core/CPU can work on one or more threads concurrently (the multithreading I know of consists of two threads per core)
In the early days, you most often had one motherboard with one chip with one CPU running one thread. Only one process at a time could be dealt with, and the attending hardware set was referred to as the processor. There was a one-to-one mapping between pieces of software (the task to run) and pieces of hardware (the device to run the task).
Process is definitely a software notion. 'Thread' is, cast quite simply, a refinement of 'process' in the context of parallel concurrent computing. Nowadays 'processor' can refer to a physical device as well as its extended processing capabilities (multithreading again, which to be sure is a technological implementation). For example, you can have machines with two chips on the motherboard, with four cores/CPUs per chip, and with each core/CPU running two threads concurrently. Then you would be able to run 2x4x2=16 processes (without oversubscription of resources, of course).
The MPI syntax you quote addresses processes (option -np), or threads if you like. The description part of man mpirun even refers to processes as 'slots' (for example, see the specs for the hostfile).
Slots indicate how many processes can potentially execute on a node.
This usage sounds like a legacy of that close correspondence between units of hardware and units of software that was standard back then. 'Slot' is originally a material/hardware feature, not unlike the term 'socket' that has undergone a similar change of semantics at times.
So indeed I feel quite some sympathy for your confusion. If you are a Linux user, you can inspect the output of cat /proc/cpuinfo. These lines refer to one processor named '2' out of four:
processor : 2
...
physical id : 0
siblings : 4
core id : 2
cpu cores : 4
These lines say that on this machine I have only one chip (since 'physical id' takes only one value in the whole list, omitted here), that this one chip has 4 'cpu cores', and that this one chip is running four siblings (4 threads, so there is no multithreading). In this case there are 4 processing elements and 4 cpu cores.
In the example above with multithreading, you would see a listing for 16 processors, 2 values for 'physical id' (chips), 'cpu cores' equal to 4 (per chip), and 'siblings' equal to 8 (per chip) since multithreading is enabled on each chip. In this case you have twice as many processors as cores.
Therefore, in this extended context, 'processor' indicates the machine's capability to work on a 'process', and this is what MPI and you want to use, regardless of the number and features of the cores that enable this. You only need to gain an overview of where these processing capabilities come from.
Another useful Linux command is then lscpu:
...
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
...
There, 'socket' is the physical connector on the motherboard where the chip is plugged in, so it is indeed a byname for the chip. And again, no multithreading here (1 thread per core).
I am indebted to the discussions in this other post https://unix.stackexchange.com/q/146051/132913
