What does the load-average used by parallel make represent? - build-process

Using GNU make on Windows, what exactly does the load-average value represent?
For example:
make -j --load-average=2.5
What does the 2.5 mean?

It means that make will not start any new job until the number of runnable processes, averaged over some period of time, is below 2.5.
Edit, following vines' remark
A runnable process, in Unix parlance, is a process that is either currently running or waiting for CPU time. Technically, it is a process in the TASK_RUNNING state.
However... this prompted me to re-read the original question, and note its "on Windows" part....
While my original answer is loosely correct for GNU Make on Unix-like hosts, it simply isn't accurate on Windows. The discrepancy in behavior comes from the fact that the two operating systems provide very different metrics to describe their "current" CPU load. Consequently, Make's logic has to interpret these CPU load readings differently to serve its --load-average feature.
The purpose of the --load-average parameter is to give Make guidance as to when it may start new jobs, so that Make shares CPU resources with other applications (and within itself) more gracefully.
On Linux, the semantics of this parameter are very close to its name: new Make jobs are allowed when the load average, as reported by the kernel (I'm assuming this is the one-minute load average, though it could be the five-minute one), is less than the parameter value.
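For reference, this is the same figure the kernel exposes through /proc/loadavg and uptime, so you can watch it while a build runs (a quick sketch; which of the three averages Make actually consults is the assumption stated above):
cat /proc/loadavg             # the first three fields are the 1-, 5- and 15-minute load averages
uptime                        # prints the same three figures at the end of its output
make -j --load-average=2.5    # make keeps spawning new jobs only while the load stays below 2.5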
On Windows, Make computes the load average from a weighted average of the CPU load (as reported by the GetSystemTimes function) and the memory load (e.g., from the GlobalMemoryStatusEx function).
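For illustration only, here is a minimal C sketch of reading those two inputs; the blend at the end is a made-up weighting, not Make's actual formula:
#include <windows.h>
#include <stdio.h>

/* Convert a FILETIME to a 64-bit tick count. */
static unsigned long long ft64(FILETIME ft)
{
    return ((unsigned long long)ft.dwHighDateTime << 32) | ft.dwLowDateTime;
}

int main(void)
{
    FILETIME idle1, kern1, user1, idle2, kern2, user2;
    MEMORYSTATUSEX mem = { sizeof(mem) };

    /* Sample the cumulative idle/kernel/user times twice, one second apart. */
    GetSystemTimes(&idle1, &kern1, &user1);
    Sleep(1000);
    GetSystemTimes(&idle2, &kern2, &user2);
    GlobalMemoryStatusEx(&mem);

    unsigned long long idle = ft64(idle2) - ft64(idle1);
    unsigned long long busy = (ft64(kern2) - ft64(kern1))   /* kernel time includes idle time */
                            + (ft64(user2) - ft64(user1)) - idle;
    double cpu_load = (double)busy / (double)(busy + idle);  /* 0.0 .. 1.0 */
    double mem_load = mem.dwMemoryLoad / 100.0;              /* 0.0 .. 1.0 */

    /* A hypothetical 50/50 blend of the two readings; Make's real formula differs. */
    printf("cpu %.2f, mem %.2f, blended %.2f\n",
           cpu_load, mem_load, 0.5 * cpu_load + 0.5 * mem_load);
    return 0;
}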

On Windows - nothing, apparently. This is a UNIX term: http://en.wikipedia.org/wiki/Load_%28computing%29
My copy of Cygwin reports zero load averages when I run the uptime command. I don't think there is a quick way of calculating this on Windows; it was asked on the Cygwin mailing list in the past.
In other words: it's not implemented, so it's always zero.
Here's the implementation of getloadavg, directly from the GNU Make 3.81 sources:
# if !defined (LDAV_DONE) && (defined (__MSDOS__) || defined (WINDOWS32))
# define LDAV_DONE
  /* A faithful emulation is going to have to be saved for a rainy day. */
  for ( ; elem < nelem; elem++)
    {
      loadavg[elem] = 0.0;
    }
# endif /* __MSDOS__ || WINDOWS32 */
I haven't checked on newer versions of GNU make but I doubt it's changed.

Related

Why is FastR (i.e., the GraalVM version of R) 10x *slower* than normal R, despite Oracle's claim of it being 40x *faster*?

Oracle claims that its GraalVM implementation of R (called "FastR") is up to 40x faster than normal R (https://www.graalvm.org/r/). However, I ran this super simple (but realistic) 4-line test program, and not only was GraalVM/FastR not 40x faster, it was actually 10x SLOWER!
x <- 1:300000/300000
mu <- exp(-400*(x-0.6)^2)+
5*exp(-500*(x-0.75)^2)/3+2*exp(-500*(x-0.9)^2)
y <- mu+0.5*rnorm(300000)
t1 <- system.time(fit1 <- smooth.spline(x,y,spar=0.6))
t1
In FastR, t1 returns this value:
user system elapsed
0.870 0.012 0.901
While in the original normal R, I get this result:
user system elapsed
0.112 0.000 0.113
As you can see, FastR is super slow even for this simple test (i.e., 4 lines of code, no extra/special libraries imported, etc.). I tested this on a 16-core VM on Google Cloud. Thoughts? (FYI: I took a quick peek at the smooth.spline code, and it does call Fortran, but according to the Oracle marketing site, GraalVM/FastR is faster than even Fortran-R code.)
====================================
EDIT:
Per the comments from Ben Bolker and user438383 below, I modified the code to include a for loop so that the code ran for much longer and I had time to monitor CPU usage. The modified code is below:
x <- 1:300000/300000
mu <- exp(-400*(x-0.6)^2)+
5*exp(-500*(x-0.75)^2)/3+2*exp(-500*(x-0.9)^2)
y <- mu+0.5*rnorm(300000)
forloopfunction <- function(xTrain, yTrain) {
  for (x in 1:100) {
    smooth.spline(xTrain, yTrain, spar = 0.6)
  }
}
t1 <- system.time(fit1 <- forloopfunction(x, y))
t1
Now, the normal R returns this for t1:
user system elapsed
19.665 0.008 19.667
while FastR returns this:
user system elapsed
76.570 0.210 77.918
So now FastR is only 4x slower, but that's still considerably slower. (I would be OK with a 5% or even 10% difference, but that's a 400% difference.) Moreover, I checked the CPU usage. Normal R used only 1 core (at 100%) for the entirety of the 19 seconds. However, surprisingly, FastR used between 100% and 300% of CPU (i.e., between 1 full core and 3 full cores) during the ~78 seconds. So I think it is fairly reasonable to conclude that, at least for this test (which happens to be a realistic test for my very simple scenario), FastR is at least 4x slower while consuming ~1x to 3x the CPU. Particularly given that I'm not importing any special libraries which the FastR team may not have had time to properly analyze (i.e., I'm using just vanilla R code that ships with R), I think there's something not quite right with the FastR implementation, at least when it comes to speed. (I haven't tested accuracy, but I think that's moot for now.) Has anyone else experienced anything similar, or does anyone know of any "magic" configuration that one needs to apply to FastR to get its claimed speeds (or at least similar, i.e., within ±5% of normal R's speed)? (Or maybe there's some known limitation to FastR that I may be able to work around, e.g., don't use the normal Fortran binaries but use these special ones instead, etc.)
TL;DR: your example is indeed not the best use case for FastR, because it spends most of its time in R builtins and Fortran code. There is no reason for it to be slower on FastR, though, and we will work on fixing that. FastR may still be useful for your application overall, or just for some selected algorithms that run slowly on GNU-R but would be a good fit for FastR (loopy, "scalar" code; see the FastRCluster package).
As others have mentioned, when it comes to micro benchmarks one needs to repeat the benchmark multiple times to allow the system to warm up. This is important in any case, but even more so for systems that rely on dynamic compilation, like FastR.
Dynamic just-in-time compilation works by first interpreting the program while recording the profile of the execution, i.e., learning how the program executes, and only then compiling the program using this knowledge to optimize it better(*). In the case of dynamic languages like R, this can be very beneficial, because we can observe types and other dynamic behavior that is hard, if not impossible, to determine statically without actually running the program.
It should now be clear why FastR needs a few iterations to show the best performance it can achieve. It is true that FastR's interpretation mode has not been optimized very much, so the first few iterations are actually slower than GNU-R. This is not an inherent limitation of the technology FastR is based on, but a tradeoff of where we put our resources. Our priority in FastR has been peak performance, i.e., performance after a sufficient warm-up, for micro benchmarks or for applications that run for a long enough time.
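A simple way to see the warm-up effect with the benchmark from the question is to time each repetition separately and compare the first few with the last few (a sketch; the iteration count of 20 is arbitrary):
x <- 1:300000/300000
mu <- exp(-400*(x-0.6)^2) + 5*exp(-500*(x-0.75)^2)/3 + 2*exp(-500*(x-0.9)^2)
y <- mu + 0.5*rnorm(300000)
# On a JIT-based runtime the first iterations include compilation work;
# the later ones show steady-state performance.
times <- sapply(1:20, function(i) system.time(smooth.spline(x, y, spar = 0.6))[["elapsed"]])
print(times)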
On to your concrete example. I could reproduce the issue, and I analyzed it by running the program with the built-in CPU sampler:
$GRAALVM_HOME/bin/Rscript --cpusampler --cpusampler.Delay=20000 --engine.TraceCompilation example.R
...
-----------------------------------------------------------------------------------------------------------
Thread[main,5,main]
Name || Total Time || Self Time || Location
-----------------------------------------------------------------------------------------------------------
order || 2190ms 81.4% || 2190ms 81.4% || order.r~1-42:0-1567
which || 70ms 2.6% || 70ms 2.6% || which.r~1-6:0-194
ifelse || 140ms 5.2% || 70ms 2.6% || ifelse.r~1-34:0-1109
...
--cpusampler.Delay=20000 delays the start of sampling by 20 seconds
--engine.TraceCompilation prints basic info about the JIT compilation
when the program finishes, it prints the table from the CPU sampler
(example.R runs the micro benchmark in a loop)
One observation is that the Fortran routine called from smooth.spline is not to blame here. That makes sense, because FastR runs the very same native Fortran code as GNU-R. FastR does have to convert the data to native memory, but that is probably a small cost compared to the computation itself. Also, the transition between native and R code is in general more expensive on FastR, but here it does not play a role.
So the problem here seems to be the builtin function order. In GNU-R, builtin functions are implemented in C: they basically do a big switch on the type of the input (integer/real/...) and then execute highly optimized C code that does the work on a plain C integer/double/... array. That is already the most efficient thing one can do, and FastR cannot beat it, but there is no reason for FastR not to be as fast. Indeed, it turns out that there is a performance bug in FastR, and the fix is on its way to master. Thank you for bringing it to our attention.
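You can reproduce the hotspot without smooth.spline by timing order directly on a vector of the same size (a rough check; absolute numbers will of course vary by machine and runtime):
x <- runif(300000)
system.time(for (i in 1:100) order(x))   # order() is the builtin that dominates the profile above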
Other points raised:
but according to the Oracle marketing site, GraalVM/FastR is faster than even Fortran-R code
YMMV. The concrete benchmark presented on our website does spend a considerable amount of time in R code, so the overhead of the R<->native transition does not skew the result as much. The best results come from translating the Fortran code to R, making the whole thing a pure R program. This shows that FastR can run the same algorithm in R as fast as, or quite close to, Fortran, and that is, performance-wise, the main benefit of FastR. There is no free lunch. Warm-up time and the cost of the R<->native transition are currently the price to pay.
FastR used between 100% and 300% of CPU usage
This is due to JIT compilations running on background threads. Again, no free lunch.
To summarize:
FastR can run R code faster by using dynamic just-in-time compilation and optimizing chunks of R code (functions, or possibly multiple functions inlined into one compilation unit) to the point that it can get close to, or even match, equivalent native code, i.e., run significantly faster than GNU-R. This matters for "scalar" R code, i.e., code with loops. For code that spends the majority of its time in builtin R functions, like, e.g., sum((x - mean(x))^2) for large x, this doesn't gain much, because that code already spends most of its time in optimized native code even on GNU-R. (A short illustration of this distinction follows the summary.)
What FastR cannot do is beat GNU-R on the execution of a single R builtin function, which is likely to be highly optimized C code already in GNU-R. For individual builtins we may beat GNU-R, because we happen to choose a slightly better algorithm or GNU-R has a performance bug somewhere, or it can be the other way around, as in this case.
What FastR also cannot do is speed up native code, like the Fortran routines that some R code may call. FastR runs the very same native code. On top of that, the transition between native and R code is more costly in FastR, so programs making this transition too often may end up being slower on FastR.
Note: what FastR can do, and what is a work in progress, is run LLVM bitcode instead of the native code. GraalVM supports execution of LLVM bitcode and can optimize it together with other languages, which removes the cost of the R<->native transition and gives the compiler even more power to optimize across this boundary.
Note: you can use FastR via the cluster package interface to execute only parts of your application.
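To make the distinction between "scalar" loopy code and builtin-dominated code concrete, here is an illustrative pair of equivalent computations; the hand-written loop is the kind of code where the JIT can pay off, while the vectorized form already spends its time in optimized native code even on GNU-R:
x <- runif(1e6)
# Builtin/vectorized version: dominated by optimized native code even on GNU-R
ss_builtin <- function(x) sum((x - mean(x))^2)
# "Scalar" loopy version: interpreted on GNU-R, a good JIT target on FastR
ss_loop <- function(x) {
  m <- 0
  for (v in x) m <- m + v
  m <- m / length(x)
  s <- 0
  for (v in x) s <- s + (v - m)^2
  s
}
system.time(for (i in 1:10) ss_builtin(x))
system.time(for (i in 1:10) ss_loop(x))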
(*) The first profiling tier may also be compiled, which gives different tradeoffs.

Reusing FFTW wisdom on clusters

I'm running distributed MPI programs on clusters using multiple nodes, where I make use of the MPI FFTs of FFTW. To save time I reuse wisdom from one run to the next. To generate this wisdom, FFTW experiments with a lot of different algorithms and whatnot for the given problem. I am worried that, because I am working on a cluster, the best solution stored as wisdom for one set of CPUs/nodes may not be the best solution for some other set of CPUs/nodes performing the same task, and so I should not reuse wisdom unless I am running on exactly the same CPUs/nodes as the run where the wisdom was gathered.
Is this correct, or is the wisdom somehow completely indifferent to the physical hardware on which it is generated?
If your cluster is homogeneous, the saved FFTW plans likely make sense, though the way the processes are connected may affect the optimal plans for MPI-related operations. But if your cluster is not homogeneous, saving the FFTW plan can be suboptimal, and problems related to load balance could prove hard to solve.
Taking a look at the wisdom files produced by fftw and fftw_mpi for a 2D c2c transform, I can see additional lines, likely related to phases such as transposition where MPI communications are required, for instance:
(fftw_mpi_transpose_pairwise_register 0 #x1040 #x1040 #x0 #x394c59f5 #xf7d5729e #xe8cf4383 #xce624769)
Indeed, there are different algorithms for transposing the 2D (or 3D) array: in the mpi folder of the FFTW source, the files transpose-pairwise.c, transpose-alltoall.c and transpose-recurse.c implement these algorithms. When the flags FFTW_MEASURE or FFTW_EXHAUSTIVE are set, these algorithms are run to select the fastest, as stated here. The result might depend on the topology of the network of processes (how many processes on each node? how are these nodes connected?). If the optimal plan depends on where the processes are running and on the network topology, using the wisdom utility will not be decisive. Otherwise, using the wisdom feature can save some time as the plan is built.
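For completeness, if you do reuse saved wisdom, the usual pattern is to import it on one rank and broadcast it to the others before creating any plans; a sketch (error handling omitted, file name matching the export shown below):
/* At the start of a later run: rank 0 reads the saved wisdom and shares it
   with every process before any plan is created. */
int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (rank == 0) fftw_import_wisdom_from_filename("wisdommpi.txt");
fftw_mpi_broadcast_wisdom(MPI_COMM_WORLD);
/* ... then create the plans with FFTW_MEASURE / FFTW_EXHAUSTIVE as usual ... */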
To test whether the optimal plan changed, you can perform a couple of runs and save the resulting plan in files: a reproducibility test!
int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);

/* Gather the wisdom of all processes onto rank 0 and save it to a single file. */
fftw_mpi_gather_wisdom(MPI_COMM_WORLD);
if (rank == 0) fftw_export_wisdom_to_filename("wisdommpi.txt");

/* Also save the plan seen by each individual process. Depending on the file
   system of the cluster, performing communications can be required. */
char filename[42];
sprintf(filename, "wisdom%d.txt", rank);
fftw_export_wisdom_to_filename(filename);
Finally, to compare the produced wisdom files, try in a bash script:
for filename in wis*.txt; do
  for filename2 in wis*.txt; do
    echo "."
    if grep -Fqvf "$filename" "$filename2"; then
      echo "$filename"
      echo "$filename2"
      echo "There are lines in file1 that don't occur in file2."
    fi
  done
done
This script checks that all lines in each file are also present in the other files, following Check if all lines from one file are present somewhere in another file.
On my personal computer, using mpirun -np 4 main, all wisdom files are identical except for a permutation of lines.
If the files differ from one run to another, it could be attributed to the communication pattern between processes... or to the sequential DFT performance of each process. The piece of code above saves the optimal plan for each process. If lines related to sequential operations, without fftw_mpi in them, such as:
(fftw_codelet_n1fv_10_sse2 0 #x1440 #x1440 #x0 #xa9be7eee #x53354c26 #xc32b0044 #xb92f3bfd)
become different, it is a clue that the optimal sequential algorithm changes from one process to another. In that case, the wall-clock time of the sequential operations may also differ between processes. Hence, checking the load balance between processes could be instructive. As noted in the FFTW documentation about load balance:
Load balancing is especially difficult when you are parallelizing over heterogeneous machines; ... FFTW does not deal with this problem, however—it assumes that your processes run on hardware of comparable speed, and that the goal is therefore to divide the problem as equally as possible.
This assumption is consistent with the operation performed by fftw_mpi_gather_wisdom():
(If the plans created for the same problem by different processes are not the same, fftw_mpi_gather_wisdom will arbitrarily choose one of the plans.) Both of these functions may result in suboptimal plans for different processes if the processes are running on non-identical hardware...
The transpose operation in 2D and 3D FFTs requires a lot of communication: one of the implementations is a call to MPI_Alltoall involving almost the whole array. Hence, good connectivity between nodes (InfiniBand...) can prove useful.
Let us know if you found different optimal plans from one run to another and how these plans differ!

nvprof R gputools code never ends

I am trying to run "nvprof" from the command line on R. Here is how I am doing it:
./nvprof --print-gpu-trace --devices 0 --analysis-metrics --export-profile /home/xxxxx/%p R
This gives me an R prompt and I write R code. I can do it with Rscript too.
The problem I see is that when I give the --analysis-metrics option, it gives me lots of lines similar to
==44041== Replaying kernel "void ger_kernel(cublasGerParams)"
And the R process never ends. I am not sure what I am missing.
nvprof doesn't modify process exit behavior, so I think you're just suffering from slowness because your app invokes a lot of kernels. You have two options to speed this up.
1. Selectively profiling metrics
The --analysis-metrics option enables collection of a number of metrics, which requires kernels to be replayed, collecting a different set of metrics on each run.
If your application has a lot of kernel invocations, this can take time. I'd suggest you query the available metrics with the nvprof --query-metrics command, and then manually choose the metrics you are interested in.
Once you know which metrics you want, you can collect them using nvprof -m metric_1,metric_2,.... This way, nvprof will profile fewer metrics, requiring fewer replays and running faster.
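For example (the metric names below are only common examples and may differ by GPU architecture and CUDA version; check nvprof --query-metrics on your machine, and your_script.R is a placeholder):
nvprof --query-metrics                                                    # list what your device supports
nvprof -m achieved_occupancy,dram_read_throughput Rscript your_script.R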
2. Selectively profiling kernels
Alternatively, you can profile only a specific kernel using the --kernels <context id/name>:<stream id/name>:<kernel name>:<invocation> option.
For example, nvprof --kernels ::foo:2 --analysis-metrics ./your_cuda_app will profile all analysis metrics for the kernel whose name contains the string foo, and only on its second invocation. This option takes regular expressions, and is quite powerful.
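Applied to the R workflow from the question, that might look like the following; ger_kernel is the kernel name from your own trace output, the metric is just an example, and your_script.R is a placeholder:
nvprof --kernels ::ger_kernel:1 -m achieved_occupancy Rscript your_script.R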
You can mix and match the above two approaches to speed up profiling. You will be able to find more help about these and other nvprof options using the command nvprof --help.

Can I recursively source a TCL script indefinitely?

I have a Tcl script running inside a Tcl shell (Synopsys PrimeTime, if it makes any difference).
The script is initiated by source <script> from the shell.
After a specific time interval has passed, the script calls itself recursively by running source <script> at its end.
My question is a bit academic: could there be a stack-overflow issue if the script keeps calling itself in this manner?
To expand the question: what happens when a Tcl script sources another script? Does it fork a child process? If so, then every call forks another child, which would eventually stack up into a pile of processes; but since the source command itself is not parallel, there is no fork (from my understanding).
Hope the question is clear.
Thanks.
Short answer: yes.
If you're using Tcl 8.5 or earlier, you'll run out of C stack. There's code that tries to detect this and throw a soft (catchable) error when it happens. There's also a (lower) limit on the number of recursions that can be done, controllable via interp recursionlimit. Note that this counts recursive entries into the core Tcl script interpreter engine; it's not exactly the recursion level in your script, though it is very close.
# Set the recursion limit for the current interpreter to 2000
interp recursionlimit {} 2000
The default is 1000, which is enough for nearly any non-recursive algorithm.
In Tcl 8.6, a non-recursive execution engine is used for most commands (including source). This lets your code use much greater recursion depths, limited mainly by how much general memory you have. I've successfully run code with recursion depths of over a million on conventional hardware.
You'll still need to raise the interp recursionlimit though; the default 1000 limit remains because it catches more bugs (i.e., unintentional recursions) than not. It's just that you can meaningfully raise it much more.
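For example, in Tcl 8.6 something like this artificial demonstration completes once the limit is raised (depth and limit chosen arbitrarily):
# Raise the limit well above the default of 1000, then recurse deeply.
interp recursionlimit {} 1100000
proc countdown {n} {
    if {$n == 0} { return done }
    return [countdown [expr {$n - 1}]]
}
puts [countdown 1000000]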
The source command doesn't fork a new process. It acts as if the lines of the sourced file were there in place of the invocation of source; they are interpreted by the current interpreter unless you specify otherwise.

Give CPU more power to plot in Octave

I made a function in Octave which plots fractals. Now, it takes a long time to plot all the points I've calculated. I've made my function as efficient as possible; the only way I think I can make it plot faster is by having my CPU focus completely on the function, or somehow telling it that it should focus on my plot.
Is there a way I can do this or is this really the limit?
To determine how much CPU is being consumed by your plot, run your plot and, in a separate window (assuming you're on Linux/Unix), run the top command. (For Windows, launch the Task Manager, switch to the 'Processes' tab, and click on the CPU header to sort by CPU.)
(The rollover description for the Octave tag on your question says that Octave is a scripting language. I would expect it calls gnuplot to create the plots; look for that as the highest CPU consumer.)
You should see that your Octave/gnuplot command is near the top of the list, and in top there is a column labeled %CPU (or similar). This will show you how much CPU that process is consuming.
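On Linux, for instance, you can point top directly at the relevant processes (a sketch; it assumes processes whose command lines contain octave or gnuplot are already running):
# Watch only the Octave/gnuplot processes and their %CPU column
top -p "$(pgrep -d, -f 'octave|gnuplot')"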
I would expect to see that process consuming 95% or more of the CPU. If you see a significantly lower number, then you need to check the processes below it: are they consuming the remaining CPU (some sort of virus scan (on a PC), or a DB or server?)? If a competing program is the problem, then you'll have to decide whether you can wait until it/they are finished, OR whether you can kill them and restart them later. (For Linux, use kill -15 pid first; only use kill -9 pid as a last resort. Search here for articles on the correct order of signals to try.)
If there are no competing processes AND octave/gnuplot is using less than 95%, then you'll have to find other tools to see what is holding up the process. (This is unlikely, but it's possible some part of your overall plotting process is either disk-I/O or network-I/O bound.)
So, it depends on the timescale you're currently experiencing versus the time you "want" to experience.
Does your system have multiple CPUs? Then you'll need to study the Octave/gnuplot documentation to see if it supports a switch meaning "use $n available CPUs for processing". (Or find a plotting program that does support using multiple CPUs.)
Realistically, if your process now takes 10 minutes and you can, by eliminating competing processes, go from 60% to 90% CPU, that is a 50% increase in CPU, but it will only reduce the run time to roughly 6-7 minutes (not certain, maybe less, math is not my strong point ;-) ). Being able to divide the task over 5-10-?? CPUs will be the most certain path to faster turn-around times.
So, to go further with this, you'll need to edit your question with some data points. How long is your plot taking? How big is the file it's processing? Is there something especially math-intensive about the plotting you're doing? Could a pre-processed data file speed up the calculations? Also, if the results of top don't show gnuplot running at 99% CPU, then edit your posting to show the top output; that will help us understand your problem. (Paste in your top output, select it with your mouse, and then use the formatting tool {} at the top of the input box to keep the formatting and avoid having the output wrap in your posting.)
IHTH.
P.S. Note the number of followers for each of the tags you've assigned to your question by rolling over them. You might get more useful "eyes" on your question by including a tag for the OS you're using and a tag related to performance measurement/testing. (Go to the tags tab and type in various terms to see how many followers they have. One bit of S.O. etiquette is to specify only one programming language (if appropriate), and that may apply to OSes too.)
