In my sample MPI code, I'm trying to merge two groups of processes (parent and child) after MPI_Comm_spawn using
MPI_Intercomm_merge(intercomm, 0, &intracomm);
where 'intercomm' is the inter-communicator handle returned by the spawn operation and 'intracomm' is the merged intra-communicator.
Here I try to measure the time taken for this merge operation using
MPI_Wtime();
For 24 MPI processes, the merge takes 0.93 seconds, whereas MPI_Comm_spawn takes just 0.14 seconds.
Is the MPI_Intercomm_merge operation heavy in terms of computation time? I am not sure whether this is weird or normal.
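Roughly, the measurement looks like this (a simplified sketch rather than my full code; the spawned binary name ./worker and the process counts are placeholders):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    MPI_Comm intercomm, intracomm;

    double t = MPI_Wtime();
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 24, MPI_INFO_NULL, 0,
                   MPI_COMM_WORLD, &intercomm, MPI_ERRCODES_IGNORE);
    double t_spawn = MPI_Wtime() - t;

    t = MPI_Wtime();
    /* collective over both groups: the spawned ./worker processes obtain the
       inter-communicator via MPI_Comm_get_parent and call the merge as well */
    MPI_Intercomm_merge(intercomm, 0, &intracomm);
    double t_merge = MPI_Wtime() - t;

    printf("spawn: %f s, merge: %f s\n", t_spawn, t_merge);
    MPI_Finalize();
    return 0;
}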
I'm new to Julia, and after some courses on numerical analysis, programming became a hobby of mine.
I ran some tests using all my cores as worker processes and did the same with threads to compare. I noticed that heavier computation went better with the threaded loop than with the processes, but it was about the same when it came to simple addition. (The operations were chosen more or less at random as examples.)
After some research it's all still a bit vague to me, and I'd ultimately like some perspective from someone who uses the same language, if that matters at all.
Some technical info: 8 physical cores; addprocs() added a vector of 16 workers, and Threads.nthreads() is 16.
using Distributed
addprocs()
@everywhere using SharedArrays;
@everywhere using BenchmarkTools;

function test(lim)
    r = zeros(Int64(lim / 16), Threads.nthreads())
    Threads.@threads for i in eachindex(r)
        r[Threads.threadid()] = (BigInt(i)^7 + 5) % 7;
    end
    return sum(r)
end

@btime test(10^4) # 1.178 ms (240079 allocations: 3.98 MiB)

@everywhere function test2(lim)
    a = SharedArray{Int64}(lim);
    @sync @distributed for i = 1:lim
        a[i] = (BigInt(i)^7 + 5) % 7;
    end
    return sum(a)
end

@btime test2(10^4) # 3.796 ms (4413 allocations: 189.02 KiB)
Note that your loops do very different things.
In the first loop, each thread keeps updating the same single cell of the array. Most likely, since only a single memory cell is updated by each thread, the processor's caching mechanism can be used to speed things up.
In the second loop, on the other hand, each process updates several different memory cells, so such caching is not possible.
The first array holds Float64 values while the second holds Int64 values.
After correcting those things, the difference gets smaller (this is on my laptop, where I have only 8 threads):
julia> @btime test(10^4)
2.781 ms (220037 allocations: 3.59 MiB)
29997
julia> @btime test2(10^4)
4.867 ms (2145 allocations: 90.14 KiB)
29997
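For reference, the corrected versions could look roughly like this (a sketch; writing to a separate cell per iteration and using an Int element type are the two changes described above):

using Distributed
addprocs()
@everywhere using SharedArrays
@everywhere using BenchmarkTools

function test(lim)
    r = zeros(Int, lim)                      # Int elements, not Float64
    Threads.@threads for i in eachindex(r)
        r[i] = (BigInt(i)^7 + 5) % 7         # each iteration writes its own cell
    end
    return sum(r)
end

@everywhere function test2(lim)
    a = SharedArray{Int64}(lim)
    @sync @distributed for i in 1:lim
        a[i] = (BigInt(i)^7 + 5) % 7
    end
    return sum(a)
end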
The other issue is that when Distributed is used you are doing inter-process communication, which does not occur when using threads.
Basically, inter-process parallelism does not make sense for jobs lasting a few milliseconds. When you increase the processing volume, the difference should start to diminish.
So when to use what? It depends. General guidelines (somewhat subjective) are the following:
Processes are more robust (threads are still experimental)
Threads are easier as long as you do not need to use locking or atomic values
When the parallelism level goes beyond 16, threads become inefficient and Distributed should be used (this is my personal observation)
When writing utility packages for others, use threads - do not distribute code inside a package. Explanation: if you add multi-threading to a package, its behavior can be transparent to the user. On the other hand, Julia's multiprocessing abstraction (the Distributed package) does not distinguish between parallel and distributed - your workers can be either local or remote. This makes a fundamental difference in how code is designed (e.g. SharedArrays vs DistributedArrays); moreover, the design of the code might also depend on, e.g., the number of servers or the possibility of limiting inter-node communication. Hence, normally, Distributed-related logic should be kept separate from a standard utility package, while multi-threaded functionality can simply be made transparent to the package user. There are of course some exceptions to this rule, such as providing distributed data-processing server tools, but this is a general rule of thumb.
For huge-scale computations I always use processes, because you can easily move onto a computer cluster with them and distribute the workload across hundreds of machines.
I have ~200 .Rds datasets that I perform various operations on (in different scripts) in a pipeline of multiple scripts. In most of these scripts I've begun with a for loop and upgraded to a foreach. My problem is that the dataset objects are of very different sizes (a plot of size in MB, omitted here, shows the spread),
so if I optimise the core count (I have a 12-core, 16 GB RAM machine at the office and a 16-core, 32 GB RAM machine at home), it'll whip through the first 90 without incident, but then the larger files bunch up and max out the total RAM allocation (remember .Rds files are compressed, so these are larger in RAM than on disk, but the variability in file size at least gives an indication of the problem). This causes workers to crash and typically leaves me with 1 to 3 cores running through the remainder of the big files (using .errorhandling = "pass"). I'm thinking it would be great to choose the core count based on the number and RAM size of the workers and the total available RAM, and I figured others might have been in a similar dilemma and developed strategies to address this. Some approaches I've thought of but not tried:
Approach 1: first loop or list through the files on disk, potentially by opening and closing them, and use object.size() to get their sizes in RAM; sort largest to smallest, cut the list halfway, reverse the order of the second half, and intersperse them: smallest, biggest, 2nd smallest, 2nd biggest, etc. Two workers (or any even-numbered multiple) should therefore be working at roughly the 'mean' RAM usage. However: worker 1 will finish its job faster than any other job in the stack, go on to job 3 (the 2nd smallest), likely finish that really quickly too, and then take job 4 (the second largest) while worker 2 is still on the largest. So by job 4 this approach has the machine processing the 2 largest RAM objects concurrently, which is the opposite of what we want.
Approach 2: sort the objects by size-in-RAM, small to large. Starting from object 1, iteratively add subsequent objects' RAM requirements until the available RAM (or the core count) would be exceeded; run a foreach on that batch, then repeat. This would work but requires some convoluted coding (probably a for loop wrapper around the foreach which passes the foreach its task list each time; a rough sketch is given after this list of approaches). Also, if there are a lot of tasks which won't exceed the RAM (as in my example), the batching limit means all 12 or 16 have to complete before the next 12 or 16 are started, introducing inefficiency.
Approach 3: sort small to large as in Approach 2. Run the foreach with all cores. This will churn through the small ones maximally efficiently until the tasks get bigger, at which point workers will start to crash, reducing the number of workers sharing the RAM and thus increasing the chance that the remaining workers can continue. Conceptually this means cores-1 tasks fail and need to be re-run, but the code is easy and should run fast. I already have code that checks the output directory and removes tasks from the jobs list if they've already been completed, which means I could just re-run this approach; however, I should anticipate further losses, and therefore further reruns, unless I lower the core count.
Approach 4: as Approach 3, but somehow close the worker (reduce the core count) BEFORE the task is assigned, so the task doesn't have to trigger a RAM overrun and fail in order to reduce the worker count. This would also mean not having to restart RStudio.
Approach 5: ideally there would be some intelligent queueing system in foreach that would do all this for me, but beggars can't be choosers! Conceptually this would be similar to Approach 4: for each worker, don't start the next task until there's sufficient RAM available.
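For concreteness, here is the rough sketch of Approach 2 mentioned above. est_ram_gb() and process_one() are placeholders for however the in-RAM size is estimated and for the real per-dataset work, and the RAM budget and core count are just examples:

library(foreach)
library(doParallel)

files <- list.files("data", pattern = "\\.Rds$", full.names = TRUE)

# Placeholder: estimate each object's in-RAM size in GB, e.g. by scaling
# file.size() or by reading each file once and calling object.size().
est_ram_gb <- function(f) 3 * file.size(f) / 1024^3

sizes   <- sort(sapply(files, est_ram_gb))   # small to large, names = file paths
ram_cap <- 16                                # RAM budget in GB (machine-dependent)
n_cores <- 12

# Split the sorted files into batches that fit the RAM budget and core count.
batches <- list(); cur <- character(0); cur_ram <- 0
for (f in names(sizes)) {
  if (length(cur) > 0 && (cur_ram + sizes[f] > ram_cap || length(cur) == n_cores)) {
    batches[[length(batches) + 1]] <- cur
    cur <- character(0); cur_ram <- 0
  }
  cur <- c(cur, f); cur_ram <- cur_ram + sizes[f]
}
if (length(cur) > 0) batches[[length(batches) + 1]] <- cur

cl <- makeCluster(n_cores); registerDoParallel(cl)
results <- list()
for (b in batches) {
  results <- c(results,
               foreach(f = b, .errorhandling = "pass") %dopar% process_one(f))
}
stopCluster(cl)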
Any thoughts appreciated from folks who've run into similar issues. Cheers!
I've thought a bit about this too.
My problem is a bit different: I don't get any crashes, but rather some slowdowns due to swapping when there is not enough RAM.
Things that may work:
randomize the iterations so that the load is approximately evenly distributed (without needing to know the timings in advance)
similar to Approach 5, have some barriers (make workers wait with a while loop and Sys.sleep()) while there is not enough memory (e.g. determined via the {memuse} package); a rough sketch is given after these lists
Things I do in practice:
always store the results of foreach iterations on disk and test whether they have already been computed (i.e. the RDS file already exists)
skip some iterations if needed
rerun the "intensive" iterations using fewer cores
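Here is a rough sketch of the barrier idea. It reads MemAvailable from /proc/meminfo, so it is Linux-only ({memuse} offers a cross-platform way to obtain the same number); the file list, the per-task estimate, and process_one() are placeholders:

library(foreach)
library(doParallel)

files <- list.files("data", pattern = "\\.Rds$", full.names = TRUE)

# Free system RAM in GB (Linux-only sketch: parse MemAvailable, reported in kB).
free_ram_gb <- function() {
  line <- grep("^MemAvailable:", readLines("/proc/meminfo"), value = TRUE)
  as.numeric(gsub("[^0-9]", "", line)) / 1024^2
}

cl <- makeCluster(12); registerDoParallel(cl)
results <- foreach(f = files, .errorhandling = "pass",
                   .export = "free_ram_gb") %dopar% {
  needed_gb <- 3 * file.size(f) / 1024^3          # crude per-task RAM estimate
  while (free_ram_gb() < needed_gb) Sys.sleep(30) # barrier: wait until RAM frees up
  process_one(f)                                  # placeholder for the real work
}
stopCluster(cl)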
I am facing a big challenge trying to explain the performance of the following snippet of my code, which uses the Intel MPI library.
double time = 0;
time = time - MPI_Wtime();
MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win_global_scheduling_step);
MPI_Win_unlock(0, win_global_scheduling_step);
time = time + MPI_Wtime();
if (id == 0)
    sleep(10);
printf("%d sync time %f\n", id, time);
The output depends on how long rank 0 sleeps. For example:
0 sync time 0.000305
1 sync time 10.00045
2 sync time 10.00015
If I change rank 0's sleep to 5 seconds instead of 10 seconds, then the sync time at the other ranks is likewise on the order of 5 seconds.
The actual data associated with the window "win_global_step" is owned by rank 0.
Any discussion or thoughts about the code would be so helpful
If rank 0 owns the win_global_step, and rank 0 goes to sleep or cranks away on a computation kernel, or otherwise does not make MPI calls, many MPI implementations will not be able to service other requests.
There is an environment variable (MPICH_ASYNC_PROGRESS) you might try setting. It introduces some big performance tradeoffs, but it can in some instances let RMA operations make progress without explicit calls to MPI routines.
Despite the "MPICH" in the variable's name, it might work for you, as Intel MPI is based on the MPICH implementation.
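For example, when launching the job (./your_app is a placeholder for the actual binary):

export MPICH_ASYNC_PROGRESS=1
mpirun -n 3 ./your_app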
I have nested *apply calls and I want to parallelize them. I have the option to parallelize either the top call or the nested inner call. I believe that, in theory, the first one is supposed to be better, but my problem is that I have 4 cores while the outermost object has 5 parts of very varying sizes. When I ran the first example, all 4 cores ran for about 10 minutes before 2 of them finished. The third one finished at the 1-hour mark, and the 4th was the last to finish at 1:45, having been given the two largest pieces.
What are the pros and cons of each?
parLapply(cl, object, function(obj) lapply(obj, funct))
-- OR --
lapply(object, function(obj) parLapply(cl, obj, funct))
Additionally, is there is a way to manually distribute the load? That way I could separate the two large objects and put the two smallest together.
EDIT: Generally, what does CS theory say about this situation? Which is generally the best place for the parallel call (excluding peculiar circumstances like this one)?
parLapply groups your tasks so there is one group of tasks per cluster worker. That doesn't work well if you need load balancing, so I suggest that you try clusterApplyLB instead:
clusterApplyLB(cl, object, function(obj) lapply(obj, funct))
If you have 5 tasks and 4 workers, this will schedule tasks 1-4 on workers 1-4, and then it will schedule task 5 on the worker that finishes its task first. That may work reasonably well, but it will work better if the last task is the shortest.
If instead you use:
lapply(object, function(obj) clusterApplyLB(cl, obj, funct))
it will execute 5 separate parallel jobs. That could be very inefficient if the tasks within those parallel jobs are small, and you also pay for the load-balancing problems of each of the 5 jobs separately. Thus, this approach doesn't usually work well.
You usually want to use the first case, but load balancing is often a serious problem when the number of tasks isn't much larger than the number of workers. If each call to funct takes a reasonable amount of time (at least a couple of seconds, for example), you could try unrolling the loop using the nesting operator from the foreach package:
r <-
foreach(obj=object) %:%
foreach(o=obj) %dopar% {
funct(o)
}
This turns all of the calls to funct into a single stream of tasks, but still returns the results in a list of lists.
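Note that for %dopar% to run in parallel you also need a registered backend; with doParallel, for example, the setup could look like this (the worker count is just an example):

library(doParallel)
cl <- makeCluster(4)     # e.g. one worker per physical core
registerDoParallel(cl)
# ... run the nested foreach loop shown above ...
stopCluster(cl)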
You can find out more about using the foreach nesting operator in a vignette that I wrote called Nesting Foreach Loops.
I wish to calculate the speedup of my MPI application against the number of parallel processes/nodes.
The application mostly performs huge matrix computations in parallel.
I can measure the elapsed time using MPI_Wtime(), something like this:
double start = MPI_Wtime();
....
double end = MPI_Wtime();
double elapsed = end - start;
But how can I measure this against the degree of parallelization?
The usual definition of speedup is the time on 1 process divided by the time on p processes: S(p) = T(1) / T(p).
If you wish to present the performance of your code, it's good to pick a range of p from 1 to the highest count you have access to, and plot the results on a speedup-vs-p plot.
Note that, strictly speaking, speedup should compare the time on p processes against the best possible sequential code, not just your parallel code run sequentially. This may seem like a minor point, but in some areas parallel codes are pretty awful in the sequential case. In the sparse-matrix world, for example, you can find parallel codes that are 10-50x slower than the best sequential code.
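To get a clean T(p) for each run, one common approach (a sketch; the computation itself is elided) is to synchronize before timing and report the maximum elapsed time across ranks, since the slowest rank determines the run time:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Barrier(MPI_COMM_WORLD);             /* line everyone up before timing */
    double start = MPI_Wtime();
    /* ... the parallel matrix computation goes here ... */
    double local = MPI_Wtime() - start;

    double elapsed;                           /* slowest rank defines T(p) */
    MPI_Reduce(&local, &elapsed, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("p = %d  T(p) = %f s\n", size, elapsed);

    MPI_Finalize();
    return 0;
}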