What are some computers that support NUMA? Also, how many cores are required? I have tried searching in Google and Bing but couldn't find any answers.
NUMA Support
The traditional model for multiprocessor support is symmetric multiprocessor (SMP). In this model, each processor has equal access to memory and I/O. As more processors are added, the processor bus becomes a limitation for system performance.
System designers use non-uniform memory access (NUMA) to increase processor speed without increasing the load on the processor bus. The architecture is non-uniform because each processor is close to some parts of memory and farther from other parts of memory. The processor quickly gains access to the memory it is close to, while it can take longer to gain access to memory that is farther away.
In a NUMA system, CPUs are arranged in smaller systems called nodes. Each node has its own processors and memory, and is connected to the larger system through a cache-coherent interconnect bus.
The system attempts to improve performance by scheduling threads on processors that are in the same node as the memory being used. It attempts to satisfy memory-allocation requests from within the node, but will allocate memory from other nodes if necessary. It also provides an API to make the topology of the system available to applications. You can improve the performance of your applications by using the NUMA functions to optimize scheduling and memory usage.
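For example, a minimal sketch using two of the Win32 NUMA functions, GetNumaHighestNodeNumber and GetNumaNodeProcessorMask, might enumerate the node topology like this (error handling kept to a minimum; this is only an illustration of the API, not part of the documentation above):

/* Sketch: enumerate NUMA nodes with the Win32 NUMA API.
   Assumes Windows; build with a Windows toolchain. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    ULONG highestNode = 0;

    /* Number of the highest NUMA node in the system (0 on a non-NUMA machine). */
    if (!GetNumaHighestNodeNumber(&highestNode)) {
        fprintf(stderr, "GetNumaHighestNodeNumber failed: %lu\n", GetLastError());
        return 1;
    }
    printf("highest NUMA node number: %lu\n", highestNode);

    /* Which logical processors belong to each node. */
    for (ULONG node = 0; node <= highestNode; ++node) {
        ULONGLONG mask = 0;
        if (GetNumaNodeProcessorMask((UCHAR)node, &mask))
            printf("node %lu processor mask: 0x%llx\n", node, mask);
    }
    return 0;
}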
************************************************************************
Multiple Processors
Computers with multiple processors are typically designed for one of two architectures: non-uniform memory access (NUMA) or symmetric multiprocessing (SMP).
In a NUMA computer, each processor is closer to some parts of memory than others, making memory access faster for some parts of memory than other parts. Under the NUMA model, the system attempts to schedule threads on processors that are close to the memory being used. For more information about NUMA, see NUMA Support.
In an SMP computer, two or more identical processors or cores connect to a single shared main memory. Under the SMP model, any thread can be assigned to any processor. Therefore, scheduling threads on an SMP computer is similar to scheduling threads on a computer with a single processor. However, the scheduler has a pool of processors, so that it can schedule threads to run concurrently. Scheduling is still determined by thread priority, but it can be influenced by setting thread affinity and thread ideal processor, as discussed in this topic.
Related
I have a problem that is memory bandwidth limited -- I need to read a lot (many GB) of data sequentially from RAM, do some quick processing and write it sequentially to a different location in RAM. Memory latency is not a concern.
Is there any benefit from dividing the work between two or more cores in different NUMA zones? Equivalently, does working across zones reduce the available bandwidth?
For bandwidth-limited, multi-threaded code, the behavior in a NUMA system will primarily depend on how "local" each thread's data accesses are, and secondarily on details of the remote accesses.
In a typical 2-socket server system, the local memory bandwidth available to two NUMA nodes is twice that available to a single node. (But remember that it may take many threads running on many cores to reach asymptotic bandwidth for each socket.)
The STREAM Benchmark, for example, is typically run in a configuration that allows almost all accesses from every thread to be "local". This is implemented by assuming "first touch" NUMA placement -- when allocated memory is first written, the OS has to create mappings from the process virtual address space to physical addresses, and (by default) the OS chooses physical addresses that are in the same NUMA node as the core that executed the store instruction.
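As a rough illustration of first-touch placement (the array size and the scale kernel below are illustrative assumptions in the spirit of STREAM, not the benchmark itself), each thread writes its own chunk of the arrays first, so the pages it later reads and writes end up on its own NUMA node:

/* Sketch: first-touch NUMA placement with OpenMP.
   Error checks omitted for brevity. Build e.g. with: gcc -O2 -fopenmp firsttouch.c */
#include <stdlib.h>
#include <omp.h>

#define N (1 << 25)   /* illustrative size, ~256 MiB per array */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);

    /* First touch: the parallel initialization decides page placement,
       because the OS maps each page near the core that first writes it. */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++) {
        a[i] = 1.0;
        b[i] = 2.0;
    }

    /* Bandwidth-limited kernel: the same static schedule means each thread
       touches the pages that were placed on its own NUMA node. */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++)
        a[i] = 2.5 * b[i];

    free(a);
    free(b);
    return 0;
}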
"Local" bandwidth (to DRAM) in most systems is approximately symmetric (for reads and writes) and relatively easy to understand. "Remote" bandwidth is much more asymmetric for reads and writes, and there is usually significant contention between the read/write commands going between the chips and the data moving between the chips. The overall ratio of local to remote bandwidth also varies significantly across processor generations. For some processors (e.g., Xeon E5 v3 and probably v4), the interconnect is relatively fast, so jobs with poor locality can often be run with all of the memory interleaved between the two sockets.
Local bandwidths have increased significantly since then, with more recent processors generally strongly favoring local access.
Example from the Intel Xeon Platinum 8160 (2 UPI links between chips):
Local Bandwidth for Reads (each socket) ~112 GB/s
Remote Bandwidth for Reads (one-direction at a time) ~34 GB/s
Local bandwidth scales perfectly in two-socket systems, and remote bandwidth also scales very well when using both sockets (each socket reading data from the other socket).
It gets more complicated with combined read and write traffic between sockets, because the read traffic from node 0 to node 1 competes with the write traffic from node 1 to node 0, etc.
Local Bandwidth for 1R:1W (each socket) ~101 GB/s (reduced due to read/write scheduling overhead)
Remote Bandwidth for 1R:1W (one socket running at a time) ~50 GB/s -- more bandwidth is available because both directions are being used, but this also means that if both sockets are doing the same thing, there will be conflicts. I see less than 60 GB/s aggregate when both sockets are running 1R:1W remote at the same time.
Of course different ratios of local to remote accesses will change the scaling. Timing can also be an issue -- if the threads are doing local accesses at the same time, then remote accesses at the same time, there will be more contention in the remote access portion (compared to a case in which the threads are doing their remote accesses at different times).
I was wondering if there is a scientific differentiation in terminology when speaking of CPU Usage and CPU Utilization. I have the feeling that both words are used as synonyms. They both describe the relation between CPU time and CPU capacity. Wikipedia calls it CPU usage; Microsoft uses CPU utilization. But I also found an article where Microsoft uses the term CPU usage. VMware defines CPU utilization in the context of physical CPUs and CPU usage in the context of logical CPUs. Also, there is no tag for cpu_utilization on Stack Overflow.
Does anyone know a scientific differentiation?
Usage
CPU usage as a percentage during the interval.
o VM - Amount of actively used virtual CPU, as a percentage of total available CPU. This is the host's view of the CPU usage, not the guest operating system view. It is the average CPU utilization over all available virtual CPUs in the virtual machine. For example, if a virtual machine with one virtual CPU is running on a host that has four physical CPUs and the CPU usage is 100%, the virtual machine is using one physical CPU completely.
virtual CPU usage = usagemhz / (# of virtual CPUs x core frequency)
o Host - Actively used CPU of the host, as a percentage of the total available CPU. Active CPU is approximately equal to the ratio of the used CPU to the available CPU.
available CPU = # of physical CPUs x clock rate
100% represents all CPUs on the host. For example, if a four-CPU host is running a virtual machine with two CPUs, and the usage is 50%, the host is using two CPUs completely.
o Cluster - Sum of actively used CPU of all virtual machines in the cluster, as a percentage of the total available CPU.
CPU Usage = CPU usagemhz / effectivecpu
CPU usage, as measured in megahertz, during the interval.
o VM - Amount of actively used virtual CPU. This is the host's view of the CPU usage, not the guest operating system view.
o Host - Sum of the actively used CPU of all powered on virtual machines on a host. The maximum possible value is the frequency of the processors multiplied by the number of processors. For example, if you have a host with four 2GHz CPUs running a virtual machine that is using 4000MHz, the host is using two CPUs completely.
4000 / (4 x 2000) = 0.50
Used:
Time accounted to the virtual machine. If a system service runs on behalf of this virtual machine, the time spent by that service (represented by cpu.system) should be charged to this virtual machine. If not, the time spent (represented by cpu.overlap) should not be charged against this virtual machine.
Reference: http://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.wssdk.apiref.doc%2Fcpu_counters.html
Very doubtful. You will probably find exact definitions in some academic textbooks, but I bet they'll be inconsistent between textbooks. I've seen definitions in man pages that are inconsistent with the actual implementation within the code. This is a case where everyone assumes the definitions are so obvious that they never check whether theirs is consistent with others'.
My suggestion is to fully define your usage and go with that. Others can then have a reference (your formula/algorithm) and can translate between yours and theirs.
By the way, figuring out utilization, usage, etc. is very complicated and fraught with traps. OSs move tasks around, logical CPUs move between cores, turbo modes temporarily bump clock rates, work is offloaded to internal coprocessors, processors go to sleep or drop in frequency, hyperthreading where multiple logical CPUs contend for shared resources, etc. What's worse is that it is a moving target. Exact and well-defined metrics today will start to get out of date quickly as hardware and software architectures continue to evolve per Moore's law and any SW equivalent.
Within a single context (paper, book, web article, etc.), there may be a difference, but there are not, as far as I know, consistent universally accepted standard definitions for these terms.
Within one author's writings, however, they might be used to describe different things. For example (not an exhaustive list):
How much of a single CPU's computing capacity is being used over a specific sample period
How much of a single CPU's computing capacity is being used by a specific schedulable entity (thread, process, light-weight process, kernel, interrupt routine, etc.) over a specific sample period
Either of the above, but taking all CPUs in the system into account
Any of the above, but with a difference in perspective between real CPUs and virtual CPUs (whether hyperthreading or CPUs actually being emulated by VMware, KVM/QEMU, Xen, Virtualbox or the like)
A comparative measure of how much CPU capacity is being used in one algorithm over another
Probably several other possibilities as well....
I am developing a program that runs on a 4-node cluster with 4 cores on each node. I have a quite fast OpenMP version of the program that only runs on a single node, and I am trying to scale it using MPI. Due to my limited experience I am wondering which one would give me better performance, an OpenMP/MPI hybrid architecture or an MPI-only architecture? I have seen this slide claiming that the hybrid one generally cannot outperform the pure MPI one, but it does not give supporting evidence and is kind of counter-intuitive for me.
BTW, my platform uses InfiniBand to interconnect nodes.
Thanks a lot,
Bob
Shared memory is usually more efficient than message passing, as the latter usually requires increased data movement (moving data from the source to its destination) which is costly both performance-wise and energy-wise. This cost is predicted to keep growing with every generation.
The material states that MPI-only applications are usually on par with, or better than, hybrid applications, although they usually have larger memory requirements.
However, those results are based on the fact that most of the large hybrid applications shown were structured as parallel computation followed by serialized communication.
This kind of implementation is usually susceptible to the following problems:
Non-uniform memory access: having two sockets in a single node is a popular setup in HPC. Since modern processors have their memory controller on chip, half of the memory will be easily accessible from the local memory controller, while the other half has to pass through the remote memory controller (i.e., the one present in the other socket). Therefore, how the program allocates memory is very important: if the memory is reserved in the serialized phase (on the closest possible memory), then half of the cores will suffer longer main-memory accesses.
Load balance: each *parallel computation then serialized communication* phase implies a synchronization barrier. These barriers force the fastest cores to wait for the slowest cores in a parallel region. The fastest/slowest imbalance may be affected by OS preemption (time is shared with other system processes), dynamic frequency scaling, etc.
Some of these issues are more straightforward to solve than others. For example, the multi-socket NUMA problem can be mitigated by placing different MPI processes in different sockets inside the same node.
To really exploit the efficiency of shared memory parallelism, the best option is trying to overlap communication with computation and ensure load balance between all processes, so that the synchronization cost is mitigated.
However, developing hybrid applications which are both load balanced and do not impose big synchronization barriers is very difficult, and nowadays there is a strong research effort to address this complexity.
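As a rough sketch of that overlap idea (the ring-style halo exchange, buffer names, and interior loop below are illustrative assumptions, not code from any of the applications discussed), a hybrid MPI+OpenMP process can post nonblocking communication from the master thread while the OpenMP team computes on interior data. Requesting MPI_THREAD_FUNNELED keeps all MPI calls on the master thread, which most MPI implementations handle efficiently.

/* Sketch: hybrid MPI + OpenMP, overlapping a nonblocking halo exchange
   with an OpenMP-parallel interior computation. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000   /* illustrative interior size */

static double interior[N];

int main(int argc, char **argv)
{
    int provided, rank, size;
    double sendbuf[2] = {0.0, 0.0}, halo[2] = {0.0, 0.0};
    MPI_Request reqs[4];

    /* FUNNELED: only the thread that called MPI_Init_thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int left  = (rank - 1 + size) % size;
    int right = (rank + 1) % size;

    /* The master thread posts a ring-style halo exchange ... */
    MPI_Irecv(&halo[0],    1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&halo[1],    1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &reqs[1]);
    MPI_Isend(&sendbuf[0], 1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(&sendbuf[1], 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[3]);

    /* ... while the whole OpenMP team computes on interior data,
       so communication and computation overlap. */
    #pragma omp parallel for schedule(static)
    for (long i = 0; i < N; i++)
        interior[i] = 0.5 * i;

    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);

    if (rank == 0)
        printf("ran %d MPI ranks with up to %d OpenMP threads each\n",
               size, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}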
I was just wondering how it is possible that OpenMP (shared memory) and MPI (distributed memory) can run on normal desktop CPUs like the i7, for example. Is there some kind of virtual machine that can simulate shared and distributed memory on these CPUs? I am asking because when learning OpenMP and MPI, the structure of supercomputers is shown, with shared memory, or with different nodes for distributed memory, each node with its own processor and memory.
MPI assumes nothing about how and where MPI processes run. As far as MPI is concerned, processes are just entities that have a unique address known as their rank, and MPI gives them the ability to send and receive data in the form of messages. How exactly the messages are transferred is left to the implementation. The model is so general that MPI can run on virtually any platform imaginable.
OpenMP deals with shared-memory programming using threads. Threads are just concurrent instruction flows that can access a shared memory space. They can execute in a timesharing fashion on a single CPU core, or they can execute on multiple cores inside a single CPU chip, or they can be distributed among multiple CPUs connected together by some sophisticated network that allows them to access each other's memory.
Given all that, MPI does not require that each process executes on a dedicated CPU core or that millions of cores necessarily be put on separate boards connected with some high-speed network - performance does, as well as technical limitations. You can happily run a 100-process MPI job on a single CPU core; performance would be very, very bad, but it would still work (given enough memory). The same applies to OpenMP - it does not require that each thread is scheduled on a dedicated CPU core, but doing so gives the best performance.
That's why MPI and OpenMP are called abstractions - they are general enough that the execution hardware can vary greatly while source code is kept the same.
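To make that concrete, here is a minimal sketch (the usual hello-world pattern; the build and launch commands in the comment assume an MPI installation such as Open MPI or MPICH) that runs unchanged on a laptop or a cluster; only where the launcher places the ranks differs:

/* Sketch: the same MPI source runs on a desktop or a cluster.
   Build e.g. with: mpicc hello.c -o hello
   Run  e.g. with: mpirun -np 4 ./hello */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, namelen;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &namelen);

    /* On a desktop every rank typically reports the same host name;
       on a cluster the ranks report whichever nodes they were placed on. */
    printf("rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}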
A modern multicore-CPU-based PC is a shared-memory computer. It is a sensible approximation to think of each core as a processor, and that they all have equal access to the same RAM. This approximation hides a lot of details of processor and chip architectures.
It has always (well, perhaps not always, but for almost as long as MPI has been around) been possible to use message-passing (of which MPI is one standard) on a shared-memory computer so that you can run the same MPI-enabled program as you would on a genuinely distributed-memory machine.
At the application level a programmer only cares about calls to MPI routines. At the systems level, the MPI run-time translates these calls, on a cluster or supercomputer, into instructions to send stuff over the interconnect. On a shared-memory computer it could instead translate these calls into instructions to send stuff over the internal bus.
This is by no means a comprehensive introduction to the topics you've raised, but that's what Google and all the published sources out there are for.
Hi, I am kind of an MPI noob so please bear with me on this one. :)
Say I have an MPI program called foo.c and I run the executable with
mpirun -np 3 ./foo
Now this means the program will be run in parallel using 3 processors (1 process per processor). But since most processors today have more than one core (say, 2 cores per processor), does this mean the program will be run on 3 cores or 3 processors?
Probably this has to do with my poor understanding of what the difference between a core and a processor really is so if you could also explain a little more that would be helpful.
Thank you.
mpirun will execute a number of "processes" on the machine. The cpu or core where these processes are executed is operating-system dependent.
On a machine with N CPUs and M cores per CPU, you have room for N*M processes running at full speed.
But, typically:
If you have multiple cores, each process will run on a separate core
If you ask for more processes than the available cores x CPUs, everything will run, but with lower efficiency (yes, you can run multi-process jobs on a single-CPU, single-core machine...)
If you are using a queuing system or a preconfigured MPI system for which a list of remote machines exists, the allocation will be distributed on the remote machines.
(Depending on the MPI implementation, there might be some options to force a specific CPU or core, but you should not need to worry about that.)
Distribution of processes to cores and processors is handled by the operating system and the MPI implementation. On a desktop, the operating system will generally put each process on a different core, potentially redistributing processes during run-time. In larger systems such as a supercomputer or a cluster, the distribution is handled by resource managers such as SLURM. However this happens, one or multiple processes will be assigned to each core.
Regarding hardware, a core can run only a single process at a time. Technologies such as hyper-threading allow multiple processes to share the resources of a single core. There are cases where two or more processes per core are optimal. For instance, if a process is doing a large amount of file I/O, another may take its place and do computation while the first is hung on a read or write.
In short, give MPI the number of processes you want to execute. Distribution of these processes is then handled transparently to the user. The number of processes that you use should be determined by the requirements of the application (powers of 2, number of files to be read), the number of cores available, and the optimal number of processes per core for the application.
The OS scheduler will try to optimally allocate separate cores to your parallel application's processes in a multi-core system, or separate processors in a multi-processor system.
The interesting case is a multi-core, multi-CPU system. Again, you can let the OS scheduler do it for you, or you can enforce (logical/physical) core affinity on your processes to bind them to a particular core.
The mpirun command uses a hostlist. If you don't specify one, it will probably use "localhost" and run all your processes there. If you run 3 processes and you have a 4-core machine, you will probably get good speedup because the OS will generally put them on different cores. If you only have two cores, then one core will get two processes.
The previous is not entirely true, since the OS is allowed to move processes, so you may want to use numactl to bind them to a core.
If you are on a multi-node cluster, then a well-setup mpi will generate a hostfile where each node appears as many times as it has cores. So on a 4 node cluster with 8 cores per node, you can request up to 32 processes and expect close to perfect speedup. (If your code and your algorithm allow that, of course.) Requesting 9 processes on that cluster may put 8 on one node and the 9th on another, which is of course not great for performance. You'd hope that your cluster software comes with an mpirun that spreads the processes out better than that.
From the performance point of view of an MPI job, there are some explicit rules:
1) if your code is pure MPI code (BLAS is not tuned with OpenMP), turn off hyper-threading and set the number of tasks per node to the number of cores per node
2) if your code is MPI+OpenMP, you can set PPN (processes per node) to the number of cores per node and OMP_NUM_THREADS to 2 (if there are two hardware threads per core)
3) if your code is MPI+OpenMP and your cluster is huge, then you can set PPN (processes per node) to 1 and OMP_NUM_THREADS to the number of logical CPUs to save communication overhead
In order to provide a useful framework I would consider this hierarchy:
a motherboard can hold one or more chips/dice;
a chip/die can contain one or more cores (independent CPUs);
a CPU can execute one or more threads concurrently (the multithreading I know of consists of two threads)
In the early days, you most often had one motherboard with one chip with one CPU running one thread. Only one process at a time could be dealt with, and the attending hardware set was referred to as the processor. There was a one-to-one mapping between pieces of software (the task to run) and pieces of hardware (the device to run the task).
Process is definitely a software notion. 'Thread' is, cast quite simply, a specification of 'process' in the context of parallel concurrent computing. Nowadays 'processor' can refer to a physical device as well as its extended processing capabilities (multithreading again, which to be sure is a technological implementation). For example, you can have machines with two chips on the motherboard, with four cores/CPUs per chip, and with each core/CPU running two threads concurrently. Then you would be able to run 2x4x2=16 processes (without oversubscription of resources, of course).
The MPI syntax you quote addresses processes (option np), or threads if you like. The description part of man mpirun even refers to processes as 'slots' (for example, see the specs for the hostfile).
Slots indicate how many processes can potentially execute on a node.
This usage sounds like a legacy of that close correspondence between units of hardware and units of software that was standard back then. 'Slot' is originally a material/hardware feature, not unlike the term 'socket' that has undergone a similar change of semantics at times.
So indeed I feel quite some sympathy for your confusion. If you are a Linux user, you can inspect the output of cat /proc/cpuinfo. These lines refer to one processor, named '2', out of four:
processor : 2
...
physical id : 0
siblings : 4
core id : 2
cpu cores : 4
They say that in this one machine I have got only one chip (since 'physical id' takes only one value in the whole list, omitted here), that this one chip has 4 'cpu cores', and that this one chip is running four siblings (4 threads, so there is no multithreading). In this case there are 4 processing elements and 4 cpu cores.
In the example above with multithreading, you would see a listing for 16 processors, 2 values for 'physical id' (chips), 'cpu cores' equal to 4 (per chip), and 'siblings' equal to 8 (per chip) since multithreading is enabled on that chip. In this case you have twice as many processors as cores.
Therefore, in this extended context, 'processor' indicates the machine's capability to work on a 'process', and this is what MPI and you want to use, regardless of the number and feats of the cores that enable this. You only need to gain an overview of where these processing capabilities come from.
Another useful Linux command is then lscpu:
...
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
...
There, 'socket' is indeed the physical connector on the motherboard where the chip is plugged in, so it is effectively another name for the chip. And again, no multithreading here.
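If you prefer to query the same information from a program rather than from the shell, a small sketch along these lines (assuming a Linux/POSIX system and an OpenMP-capable compiler, e.g. gcc -fopenmp) gives a similar overview:

/* Sketch: count logical processors as the OS and the OpenMP runtime see them. */
#include <stdio.h>
#include <unistd.h>
#include <omp.h>

int main(void)
{
    /* Logical CPUs currently online -- the same count as the number of
       "processor" entries in /proc/cpuinfo or "CPU(s)" in lscpu. */
    long online = sysconf(_SC_NPROCESSORS_ONLN);

    printf("online logical processors: %ld\n", online);
    printf("OpenMP processors seen   : %d\n", omp_get_num_procs());
    printf("OpenMP default team size : %d\n", omp_get_max_threads());
    return 0;
}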
I am indebted to the discussions in this other post https://unix.stackexchange.com/q/146051/132913