Without root access, run R with a tuned BLAS when R is linked with the reference BLAS

Can anyone tell me why I cannot successfully test OpenBLAS's dgemm performance (in GFLOPs) in R in the following way?
1. link R with the reference BLAS libblas.so;
2. compile my C program mmperf.c against the OpenBLAS library libopenblas.so;
3. load the resulting shared library mmperf.so into R, call the R wrapper function mmperf, and report dgemm performance in GFLOPs.
Point 1 looks strange, but I have no choice because I have no root access on the machines I want to test on, so actually linking R to OpenBLAS is impossible. By "not successfully" I mean that my program ends up reporting the performance of the reference BLAS instead of OpenBLAS. I hope someone can explain:
1. why my approach does not work;
2. whether it is possible to make it work at all (this is important, because if it is impossible, I must write a C main function and do the benchmark in a pure C program).
I've investigated this issue for two days; below I include various pieces of system output to help you make a diagnosis. To make things reproducible, I also include the code, the makefile, and the shell commands.
Part 1: system environment before testing
There are 2 ways to invoke R, either using R or Rscript. There are some differences in what is loaded when they are invoked:
~/Desktop/dgemm$ readelf -d $(R RHOME)/bin/exec/R | grep "NEEDED"
0x00000001 (NEEDED) Shared library: [libR.so]
0x00000001 (NEEDED) Shared library: [libpthread.so.0]
0x00000001 (NEEDED) Shared library: [libc.so.6]
~/Desktop/dgemm$ readelf -d $(R RHOME)/bin/Rscript | grep "NEEDED"
0x00000001 (NEEDED) Shared library: [libc.so.6]
Here we need to choose Rscript, because R loads libR.so, which will automatically load the reference BLAS libblas.so.3:
~/Desktop/dgemm$ readelf -d $(R RHOME)/lib/libR.so | grep blas
0x00000001 (NEEDED) Shared library: [libblas.so.3]
~/Desktop/dgemm$ ls -l /etc/alternatives/libblas.so.3
... 31 May /etc/alternatives/libblas.so.3 -> /usr/lib/libblas/libblas.so.3.0
~/Desktop/dgemm$ readelf -d /usr/lib/libblas/libblas.so.3 | grep SONAME
0x0000000e (SONAME) Library soname: [libblas.so.3]
By comparison, Rscript gives a cleaner environment.
Part 2: OpenBLAS
After downloading the OpenBLAS source and running a simple make, a shared library of the form libopenblas-<arch>-<release>.so-<version> is generated. Note that we have no root access to install it; instead, we copy this library into our working directory ~/Desktop/dgemm and rename it simply libopenblas.so. At the same time we make another copy named libopenblas.so.0, since that is the SONAME the run-time loader will look for.
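A sketch of those two steps (the source path and the bracketed name are placeholders for whatever your build actually produced):
~/Desktop/dgemm$ cp /path/to/OpenBLAS/libopenblas-<arch>-<release>.so-<version> libopenblas.so
~/Desktop/dgemm$ cp libopenblas.so libopenblas.so.0
readelf confirms the embedded SONAME: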
~/Desktop/dgemm$ readelf -d libopenblas.so | grep "RPATH\|SONAME"
0x0000000e (SONAME) Library soname: [libopenblas.so.0]
Note that no RPATH attribute is given, which means this library is intended to be placed in /usr/lib, after which ldconfig would add it to ld.so.cache. But again, we have no root access to do this. In fact, if we could, all the difficulties would be gone: we could then use update-alternatives --config libblas.so.3 to effectively link R to OpenBLAS.
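For reference, the root route would look roughly like this (a sketch; exact paths vary by distribution):
$ sudo cp libopenblas.so.0 /usr/lib
$ sudo ldconfig
$ sudo update-alternatives --config libblas.so.3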
Part 3: C code, Makefile, and R code
Here is a C source file mmperf.c that computes the GFLOPs achieved when multiplying two square matrices of size N:
#include <R.h>
#include <Rmath.h>
#include <Rinternals.h>
#include <R_ext/BLAS.h>
#include <sys/time.h>
/* standard C subroutine */
double mmperf (int n) {
    /* local vars */
    int n2 = n * n, tmp;
    double *A, *C, one = 1.0;
    struct timeval t1, t2;
    double elapsedTime, GFLOPs;
    /* simulate an N-by-N random matrix A */
    A = (double *)calloc(n2, sizeof(double));
    GetRNGstate();
    tmp = 0; while (tmp < n2) {A[tmp] = runif(0.0, 1.0); tmp++;}
    PutRNGstate();
    /* generate an N-by-N zero matrix C */
    C = (double *)calloc(n2, sizeof(double));
    /* time 'dgemm.f' for C <- A * A + C */
    gettimeofday(&t1, NULL);
    F77_CALL(dgemm) ("N", "N", &n, &n, &n, &one, A, &n, A, &n, &one, C, &n);
    gettimeofday(&t2, NULL);
    /* free memory */
    free(A); free(C);
    /* compute elapsed time in microseconds (1e-6 sec) */
    elapsedTime = (double)(t2.tv_sec - t1.tv_sec) * 1e+6;
    elapsedTime += (double)(t2.tv_usec - t1.tv_usec);
    /* convert microseconds to nanoseconds (1e-9 sec) */
    elapsedTime *= 1e+3;
    /* dgemm performs 2*N^3 flops, so flops per nanosecond equals GFLOPs */
    GFLOPs = 2.0 * (double)n2 * (double)n / elapsedTime;
    return GFLOPs;
}

/* R wrapper */
SEXP R_mmperf (SEXP n) {
    double GFLOPs = mmperf(asInteger(n));
    return ScalarReal(GFLOPs);
}
Here is a simple R script mmperf.R to report GFLOPs for the case N = 2000:
mmperf <- function (n) {
    dyn.load("mmperf.so")
    GFLOPs <- .Call("R_mmperf", n)
    dyn.unload("mmperf.so")
    return(GFLOPs)
}

GFLOPs <- round(mmperf(2000), 2)
cat(paste("GFLOPs =", GFLOPs, "\n"))
Finally there is a simple makefile to generate the shared library mmperf.so:
mmperf.so: mmperf.o
	gcc -shared -L$(shell pwd) -Wl,-rpath=$(shell pwd) -o mmperf.so mmperf.o -lopenblas

mmperf.o: mmperf.c
	gcc -fpic -O2 -I$(shell Rscript --default-packages=base --vanilla -e 'cat(R.home("include"))') -c mmperf.c
Put all these files under working directory ~/Desktop/dgemm, and compile it:
~/Desktop/dgemm$ make
~/Desktop/dgemm$ readelf -d mmperf.so | grep "NEEDED\|RPATH\|SONAME"
0x00000001 (NEEDED) Shared library: [libopenblas.so.0]
0x00000001 (NEEDED) Shared library: [libc.so.6]
0x0000000f (RPATH) Library rpath: [/home/zheyuan/Desktop/dgemm]
The output reassures us that OpenBLAS is correctly linked, and the run time load path is correctly set.
Part 4: testing OpenBLAS in R
Let's do
~/Desktop/dgemm$ Rscript --default-packages=base --vanilla mmperf.R
Note our script needs only the base package in R, and --vanilla is used to ignore all user settings on R start-up. On my laptop, my program returns:
GFLOPs = 1.11
Oops! This is truly reference BLAS performance, not OpenBLAS performance (which should be around 8-9 GFLOPs).
Part 5: Why?
To be honest, I don't know why this happens. Each step seems correct. Does something subtle occur when R is invoked? For example, is there any chance the OpenBLAS library is overridden by the reference BLAS at some point, for some reason? Any explanations and solutions? Thanks!

why my way does not work
First, shared libraries on UNIX are designed to mimic the way archive libraries work (archive libraries came first). In particular this means that if you have libfoo.so and libbar.so, both defining the symbol foo, then whichever library is loaded first wins: all references to foo from anywhere within the program (including from libbar.so) will bind to libfoo.so's definition of foo.
This mimics what would happen if you linked your program against libfoo.a and libbar.a where both archives define the same symbol foo. More info on archive linking here.
It should be clear from the above that if libblas.so.3 and libopenblas.so.0 define the same set of symbols (which they do), and libblas.so.3 is loaded into the process first, then routines from libopenblas.so.0 will never be called.
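Here is a minimal sketch of that rule (all file and symbol names are illustrative, not taken from R or BLAS). Two libraries export the same function foo, and a program linked against both always gets the first one:
/* foo1.c -> libfoo.so */
#include <stdio.h>
void foo(void) { puts("foo from libfoo"); }

/* foo2.c -> libbar.so */
#include <stdio.h>
void foo(void) { puts("foo from libbar"); }

/* main.c */
void foo(void);
int main(void) { foo(); return 0; }

$ gcc -shared -fpic -o libfoo.so foo1.c
$ gcc -shared -fpic -o libbar.so foo2.c
$ gcc -o main main.c -L. -lfoo -lbar -Wl,-rpath=.
$ ./main    # prints "foo from libfoo"; swapping -lbar and -lfoo flips the result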
Second, you've correctly decided that since R directly links against libR.so, and since libR.so directly links against libblas.so.3, it is guaranteed that libopenblas.so.0 will lose the battle.
However, you erroneously decided that Rscript is better; it is not: Rscript is a tiny binary (11K on my system; compare with 2.4MB for libR.so), and essentially all it does is exec R. This is trivial to see in strace output:
strace -e trace=execve /usr/bin/Rscript --default-packages=base --vanilla /dev/null
execve("/usr/bin/Rscript", ["/usr/bin/Rscript", "--default-packages=base", "--vanilla", "/dev/null"], [/* 42 vars */]) = 0
execve("/usr/lib/R/bin/R", ["/usr/lib/R/bin/R", "--slave", "--no-restore", "--vanilla", "--file=/dev/null", "--args"], [/* 43 vars */]) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=89625, si_status=0, si_utime=0, si_stime=0} ---
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=89626, si_status=0, si_utime=0, si_stime=0} ---
execve("/usr/lib/R/bin/exec/R", ["/usr/lib/R/bin/exec/R", "--slave", "--no-restore", "--vanilla", "--file=/dev/null", "--args"], [/* 51 vars */]) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=89630, si_status=0, si_utime=0, si_stime=0} ---
+++ exited with 0 +++
This means that by the time your script starts executing, libblas.so.3 has already been loaded, and the libopenblas.so.0 loaded as a dependency of mmperf.so will not actually be used for anything.
is it possible at all to make it work
Probably. I can think of two possible solutions:
1. Pretend that libopenblas.so.0 is actually libblas.so.3.
2. Rebuild R itself against libopenblas.so.
For #1, you need to ln -s libopenblas.so.0 libblas.so.3, then make sure that your copy of libblas.so.3 is found before the system one, by setting LD_LIBRARY_PATH appropriately.
This appears to work for me:
mkdir /tmp/libblas
# pretend that libc.so.6 is really libblas.so.3
cp /lib/x86_64-linux-gnu/libc.so.6 /tmp/libblas/libblas.so.3
LD_LIBRARY_PATH=/tmp/libblas /usr/bin/Rscript /dev/null
Error in dyn.load(file, DLLpath = DLLpath, ...) :
unable to load shared object '/usr/lib/R/library/stats/libs/stats.so':
/usr/lib/liblapack.so.3: undefined symbol: cgemv_
During startup - Warning message:
package ‘stats’ in options("defaultPackages") was not found
Note how I got an error (my "pretend" libblas.so.3 doesn't define symbols expected of it, since it's really a copy of libc.so.6).
You can also confirm which version of libblas.so.3 is getting loaded this way:
LD_DEBUG=libs LD_LIBRARY_PATH=/tmp/libblas /usr/bin/Rscript /dev/null |& grep 'libblas\.so\.3'
91533: find library=libblas.so.3 [0]; searching
91533: trying file=/usr/lib/R/lib/libblas.so.3
91533: trying file=/usr/lib/x86_64-linux-gnu/libblas.so.3
91533: trying file=/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/server/libblas.so.3
91533: trying file=/tmp/libblas/libblas.so.3
91533: calling init: /tmp/libblas/libblas.so.3
For #2, you said:
I have no root access on machines I want to test, so actual linking to OpenBLAS is impossible.
but that seems to be a bogus argument: if you can build libopenblas, surely you can also build your own version of R.
Update:
You mentioned in the beginning that libblas.so.3 and libopenblas.so.0 define the same symbols; what does this mean? They have different SONAMEs; is that insufficient for the system to distinguish them?
The symbols and the SONAME have nothing to do with each other.
You can see symbols in the output from readelf -Ws libblas.so.3 and readelf -Ws libopenblas.so.0. Symbols related to BLAS, such as cgemv_, will appear in both libraries.
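For example, one concrete way to list the symbols the two libraries have in common (a sketch using standard tools; the awk field picks the symbol-name column of readelf's output):
~/Desktop/dgemm$ readelf -Ws /usr/lib/libblas.so.3 | awk '{print $8}' | sort -u > ref.txt
~/Desktop/dgemm$ readelf -Ws libopenblas.so.0 | awk '{print $8}' | sort -u > open.txt
~/Desktop/dgemm$ comm -12 ref.txt open.txt   ## symbols defined in both; cgemv_ will be among them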
Your confusion about the SONAME possibly comes from Windows. DLLs on Windows are designed completely differently: when FOO.DLL imports the symbol bar from BAR.DLL, both the name of the symbol (bar) and the DLL from which it was imported (BAR.DLL) are recorded in FOO.DLL's import table.
That makes it easy to have R import cgemv_ from BLAS.DLL, while MMPERF.DLL imports the same symbol from OPENBLAS.DLL.
However, that makes library interpositioning hard, and works completely differently from the way archive libraries work (even on Windows).
Opinions differ on which design is better overall, but neither system is likely to ever change its model.
There are ways for UNIX to emulate Windows-style symbol binding: see RTLD_DEEPBIND in dlopen man page. Beware: these are fraught with peril, likely to confuse UNIX experts, are not widely used, and likely to have implementation bugs.
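For completeness, a deep-bound load would look roughly like the sketch below (glibc-specific, for illustration only; the library path is hypothetical):
#define _GNU_SOURCE   /* RTLD_DEEPBIND is a GNU extension */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* symbols inside the loaded library will prefer its own definitions
       over identically named ones already in the global scope */
    void *h = dlopen("./libopenblas.so.0", RTLD_NOW | RTLD_DEEPBIND);
    if (h == NULL) { fprintf(stderr, "%s\n", dlerror()); return 1; }
    /* ... look up dgemm_ with dlsym(h, "dgemm_") and call it ... */
    dlclose(h);
    return 0;
}
Compile with gcc main.c -ldl. Again, this trades one set of surprises for another, per the warning above.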
Update 2:
you mean I compile R and install it under my home directory?
Yes.
Then when I want to invoke it, I should explicitly give the path to my version of executable program, otherwise the one on the system might be invoked instead? Or, can I put this path at the first position of environment variable $PATH to cheat the system?
Either way works.

*********************
Solution 2:
*********************
Here we offer another solution, exploiting the environment variable LD_PRELOAD mentioned in solution 1. LD_PRELOAD is more "brutal": it forces a given library to be loaded into the program before any other library, even the C library libc.so! This is often used for urgent patching in Linux development.
As shown in part 2 of the original post, the shared BLAS library libopenblas.so has SONAME libopenblas.so.0. An SONAME is the internal name that the dynamic loader looks for at run time, so we need to make a symbolic link to libopenblas.so under this SONAME:
~/Desktop/dgemm$ ln -sf libopenblas.so libopenblas.so.0
then we export it:
~/Desktop/dgemm$ export LD_PRELOAD=$(pwd)/libopenblas.so.0
Note that a full path to libopenblas.so.0 needs to be fed to LD_PRELOAD for a successful load, even though libopenblas.so.0 is in the current directory.
Now we launch Rscript and check what happens with LD_DEBUG:
~/Desktop/dgemm$ LD_DEBUG=libs Rscript --default-packages=base --vanilla /dev/null |& grep blas
4860: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4860: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4865: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4868: calling fini: /home/zheyuan/Desktop/dgemm/libopenblas.so [0]
4870: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4869: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4867: calling fini: /home/zheyuan/Desktop/dgemm/libopenblas.so [0]
4860: find library=libblas.so.3 [0]; searching
4860: trying file=/usr/lib/R/lib/libblas.so.3
4860: trying file=/usr/lib/i386-linux-gnu/i686/sse2/libblas.so.3
4860: trying file=/usr/lib/i386-linux-gnu/i686/cmov/libblas.so.3
4860: trying file=/usr/lib/i386-linux-gnu/i686/libblas.so.3
4860: trying file=/usr/lib/i386-linux-gnu/sse2/libblas.so.3
4860: trying file=/usr/lib/i386-linux-gnu/libblas.so.3
4860: trying file=/usr/lib/jvm/java-7-openjdk-i386/jre/lib/i386/client/libblas.so.3
4860: trying file=/usr/lib/libblas.so.3
4860: calling init: /usr/lib/libblas.so.3
4860: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4874: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4876: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4860: calling fini: /home/zheyuan/Desktop/dgemm/libopenblas.so [0]
4860: calling fini: /usr/lib/libblas.so.3 [0]
Comparing with what we saw in solution 1, where we cheated R with our own version of libblas.so.3, we can see that:
libopenblas.so.0 is loaded first, hence found first by Rscript;
after libopenblas.so.0 is found, Rscript goes on to search for and load libblas.so.3. However, by the "first come, first served" rule explained in the original answer, this has no effect.
Good, everything works, so we test our mmperf.c program:
~/Desktop/dgemm$ Rscript --default-packages=base --vanilla mmperf.R
GFLOPs = 9.62
The figure of 9.62 is higher than the 8.77 we saw in the other solution merely by chance; since this only tests that OpenBLAS is being used, we don't repeat the experiment many times for a more precise result.
Finally, as usual, we unset the environment variable when we are done:
~/Desktop/dgemm$ unset LD_PRELOAD

*********************
Solution 1:
*********************
Thanks to Employed Russian, my problem is finally solved. The investigation required real skill in Linux system debugging and patching, and I believe what I learned is a great asset. Here I post the solution, and also correct several points in my original post.
1. About invoking R
In my original post I mentioned that there are two ways to launch R, via R or via Rscript, but I wrongly exaggerated their difference. Let's now investigate their start-up process using an important Linux debugging facility, strace (see man strace). There is actually a lot going on after we type a command in the shell, and we can use
strace -e trace=process [command]
to trace all system calls involving process management, watching the fork, wait, and execution steps of a process. Though not stated in the manual page, @Employed Russian shows that it is possible to trace only a subclass of these calls, for example execve for the execution steps alone.
For R we have
~/Desktop/dgemm$ time strace -e trace=execve R --vanilla < /dev/null > /dev/null
execve("/usr/bin/R", ["R", "--vanilla"], [/* 70 vars */]) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5777, si_status=0, si_utime=0, si_stime=0} ---
execve("/usr/lib/R/bin/exec/R", ["/usr/lib/R/bin/exec/R", "--vanilla"], [/* 79 vars */]) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5778, si_status=0, si_utime=0, si_stime=0} ---
+++ exited with 0 +++
real 0m0.345s
user 0m0.256s
sys 0m0.068s
while for Rscript we have
~/Desktop/dgemm$ time strace -e trace=execve Rscript --default-packages=base --vanilla /dev/null
execve("/usr/bin/Rscript", ["Rscript", "--default-packages=base", "--vanilla", "/dev/null"], [/* 70 vars */]) = 0
execve("/usr/lib/R/bin/R", ["/usr/lib/R/bin/R", "--slave", "--no-restore", "--vanilla", "--file=/dev/null"], [/* 71 vars */]) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5822, si_status=0, si_utime=0, si_stime=0} ---
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5823, si_status=0, si_utime=0, si_stime=0} ---
execve("/usr/lib/R/bin/exec/R", ["/usr/lib/R/bin/exec/R", "--slave", "--no-restore", "--vanilla", "--file=/dev/null"], [/* 80 vars */]) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5827, si_status=0, si_utime=0, si_stime=0} ---
+++ exited with 0 +++
real 0m0.063s
user 0m0.020s
sys 0m0.028s
We have also used time to measure the start-up time. Note that
Rscript starts about 5.5 times faster than R. One reason is that R loads six default packages on start-up, while Rscript loads only the base package, as requested by --default-packages=base; but it is still much faster even without this setting.
In the end both start-up processes arrive at $(R RHOME)/bin/exec/R, and in my original post I already used readelf -d to show that this executable loads libR.so, which is linked against libblas.so.3. According to @Employed Russian's explanation, the BLAS library loaded first wins, so there is no way my original method could work.
To run strace smoothly, we used the file /dev/null as the input file and/or output file where necessary: Rscript demands an input file, while R demands both input and output. Feeding the null device to them keeps the command running smoothly and the output clean. The null device is an actual file, but it behaves specially: reading from it yields immediate end-of-file, and writing to it discards everything.
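A quick demonstration of both directions:
~/Desktop/dgemm$ printf "discarded" > /dev/null   ## writes disappear
~/Desktop/dgemm$ wc -c < /dev/null                ## reads see zero bytes
0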
2. Cheat R
Now, since libblas.so.3 will be loaded anyway, the only thing we can do is provide our own version of this library. As I said in the original post, with root access this is really easy: update-alternatives --config libblas.so.3 lets the system make the switch for us. But @Employed Russian offers an awesome way to cheat the system without root access: check how R finds the BLAS library on start-up, and make sure our version is found before the system default! To monitor how shared libraries are located and loaded, use the environment variable LD_DEBUG.
There are a number of Linux environment variables with the prefix LD_, documented in man ld.so. These can be set on the command line, before the name of an executable, to change how the program runs. Useful ones include:
LD_LIBRARY_PATH for setting the run-time library search path;
LD_DEBUG for tracing the look-up and loading of shared libraries;
LD_TRACE_LOADED_OBJECTS for displaying all the libraries a program loads, much like ldd (see the example after this list);
LD_PRELOAD for forcing a library to be loaded into a program at the very start, before any other library is searched for;
LD_PROFILE and LD_PROFILE_OUTPUT for profiling a specified shared library. R users who have read section 3.4.1.1 (sprof) of Writing R Extensions will recall that this is how compiled code is profiled from within R.
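Here is the example promised above for LD_TRACE_LOADED_OBJECTS: setting it makes the dynamic loader print every library the executable would load, with the path each one resolves to, and exit without running the program, so the libblas.so.3 line reveals which BLAS would be picked up:
~/Desktop/dgemm$ LD_TRACE_LOADED_OBJECTS=1 $(R RHOME)/bin/exec/R | grep blas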
The use of LD_DEBUG can be seen by:
~/Desktop/dgemm$ LD_DEBUG=help cat
Valid options for the LD_DEBUG environment variable are:
libs display library search paths
reloc display relocation processing
files display progress for input file
symbols display symbol table processing
bindings display information about symbol binding
versions display version dependencies
scopes display scope information
all all previous options combined
statistics display relocation statistics
unused determined unused DSOs
help display this help message and exit
To direct the debugging output into a file instead of standard output a filename can be specified using the LD_DEBUG_OUTPUT environment variable.
Here we are particularly interested in using LD_DEBUG=libs. For example,
~/Desktop/dgemm$ LD_DEBUG=libs Rscript --default-packages=base --vanilla /dev/null |& grep blas
5974: find library=libblas.so.3 [0]; searching
5974: trying file=/usr/lib/R/lib/libblas.so.3
5974: trying file=/usr/lib/i386-linux-gnu/i686/sse2/libblas.so.3
5974: trying file=/usr/lib/i386-linux-gnu/i686/cmov/libblas.so.3
5974: trying file=/usr/lib/i386-linux-gnu/i686/libblas.so.3
5974: trying file=/usr/lib/i386-linux-gnu/sse2/libblas.so.3
5974: trying file=/usr/lib/i386-linux-gnu/libblas.so.3
5974: trying file=/usr/lib/jvm/java-7-openjdk-i386/jre/lib/i386/client/libblas.so.3
5974: trying file=/usr/lib/libblas.so.3
5974: calling init: /usr/lib/libblas.so.3
5974: calling fini: /usr/lib/libblas.so.3 [0]
shows the various attempts the R program makes to locate and load libblas.so.3. So if we provide our own version of libblas.so.3 and make sure R finds it first, the problem is solved.
Let's first make a symbolic link libblas.so.3 in our working directory pointing to the OpenBLAS library libopenblas.so, then prepend our working directory to LD_LIBRARY_PATH (and export it):
~/Desktop/dgemm$ ln -sf libopenblas.so libblas.so.3
~/Desktop/dgemm$ export LD_LIBRARY_PATH=$(pwd):$LD_LIBRARY_PATH  ## put our working path first
Now let's check again the library loading process:
~/Desktop/dgemm$ LD_DEBUG=libs Rscript --default-packages=base --vanilla /dev/null |& grep blas
6063: find library=libblas.so.3 [0]; searching
6063: trying file=/usr/lib/R/lib/libblas.so.3
6063: trying file=/usr/lib/i386-linux-gnu/i686/sse2/libblas.so.3
6063: trying file=/usr/lib/i386-linux-gnu/i686/cmov/libblas.so.3
6063: trying file=/usr/lib/i386-linux-gnu/i686/libblas.so.3
6063: trying file=/usr/lib/i386-linux-gnu/sse2/libblas.so.3
6063: trying file=/usr/lib/i386-linux-gnu/libblas.so.3
6063: trying file=/usr/lib/jvm/java-7-openjdk-i386/jre/lib/i386/client/libblas.so.3
6063: trying file=/home/zheyuan/Desktop/dgemm/libblas.so.3
6063: calling init: /home/zheyuan/Desktop/dgemm/libblas.so.3
6063: calling fini: /home/zheyuan/Desktop/dgemm/libblas.so.3 [0]
Great! We have successfully cheated R.
3. Experiment with OpenBLAS
~/Desktop/dgemm$ Rscript --default-packages=base --vanilla mmperf.R
GFLOPs = 8.77
Now, everything works as expected!
4. Unset LD_LIBRARY_PATH (to be safe)
It is a good practice to unset LD_LIBRARY_PATH after use.
~/Desktop/dgemm$ unset LD_LIBRARY_PATH

Related

Inheriting dependencies when running unit tests from command line

I am trying to run Julia unit tests from the command line, but the unit tests fail to run because they cannot find a dependency that I am using in my main project. How can I make this work? The actual command I try to execute is julia test/test_blueprint.jl from the project root. More details follow.
Details about the setup
My project is located at the path /home/jonas/prog/julia/blueprint. In that directory, I have a Project.toml file containing these lines:
name = "blueprint"
uuid = "c1615a0c-c255-402d-ae34-0b88819b43c6"
authors = [""]
version = "0.1.0"
[deps]
FunctionalCollections = "de31a74c-ac4f-5751-b3fd-e18cd04993ca"
Setfield = "efcf1570-3423-57d1-acb7-fd33fddbac46"
along with the Manifest.toml file.
I have a subdirectory at test/ with unit tests that I created following this guide and that directory contains another Project.toml file containing
[deps]
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
There is a file test/test_blueprint.jl with unit tests and that file starts with
using Test
include("../src/blueprint.jl") # Alternative 1
#using blueprint # Alternative 2
using FunctionalCollections
using LinearAlgebra
...
The actual code being tested is in the file src/blueprint.jl.
Details about the problem
From the project root, I attempt to run the unit tests using the command julia test/test_blueprint.jl. When I run that command it produces the following output:
ERROR: LoadError: ArgumentError: Package Setfield not found in current path:
- Run `import Pkg; Pkg.add("Setfield")` to install the Setfield package.
Stacktrace:
[1] require(into::Module, mod::Symbol)
# Base ./loading.jl:967
[2] include(fname::String)
# Base.MainInclude ./client.jl:451
[3] top-level scope
# ~/prog/julia/blueprint/test/test_blueprint.jl:8
in expression starting at /home/jonas/prog/julia/blueprint/src/blueprint.jl:1
in expression starting at /home/jonas/prog/julia/blueprint/test/test_blueprint.jl:8
suggesting that it cannot find the dependency Setfield. If I edit the top of the file test/test_blueprint.jl slightly from
include("../src/blueprint.jl") # Alternative 1
#using blueprint # Alternative 2
to
#include("../src/blueprint.jl") # Alternative 1
using blueprint # Alternative 2
it still fails, but with a different error:
ERROR: LoadError: ArgumentError: Package blueprint not found in current path:
- Run `import Pkg; Pkg.add("blueprint")` to install the blueprint package.
Stacktrace:
[1] require(into::Module, mod::Symbol)
# Base ./loading.jl:967
in expression starting at /home/jonas/prog/julia/blueprint/test/test_blueprint.jl:9
Question: How can I make the unit tests run from the command line?
Note that I can run the unit tests from within the Julia REPL in Emacs by activating the project using C-c C-a at the src/blueprint.jl file and calling C-c C-b at the unit test file test/test_blueprint.jl. My Julia version is 1.7.0 (2021-11-30). Don't hesitate to ask for more clarifications.
First, a few naming conventions that are probably not (but may be) contributing to the issues here:
By convention, package names begin with a single capital, so I would recommend changing the name to Blueprint everywhere
By default, ] test runs the tests found in test/runtests.jl, so I would recommend naming your top-level testing script runtests.jl to avoid confusion, even though it does seem from the errors here that your test_blueprint.jl file is being found one way or another.
Now, while I can't test this without the full code of your package, what I suspect is happening here is the following:
Normally, dependencies of the package you are testing (let's say MyPackage) are not required in test/Project.toml because they are implicit in MyPackage. After a successful using MyPackage, while those dependencies will still not be available to functions written in your test scripts (test/runtests.jl), they will be available to the functions written in MyPackage -- just as if you had typed using MyPackage at the REPL and then run your test code there. This is the only reason you don't normally need to duplicate all the deps from the main Project.toml in test/Project.toml.
Since the using Blueprint approach is failing here for other reasons, when you simply include the code from src/blueprint.jl, the usings within that file will in turn fail, because those packages are not present in the active environment defined by test/Project.toml (even if they are present elsewhere on your system).
Consequently, one quick fix with the current include("../src/blueprint.jl") approach would be to simply add those dependencies to your test/Project.toml, as sketched below.
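A sketch of what test/Project.toml would then contain (the FunctionalCollections and Setfield UUIDs are copied from the main Project.toml above; the LinearAlgebra UUID is the standard library's fixed one):
[deps]
FunctionalCollections = "de31a74c-ac4f-5751-b3fd-e18cd04993ca"
LinearAlgebra = "37e2e46d-f89d-540f-b938-e35acf47c82c"
Setfield = "efcf1570-3423-57d1-acb7-fd33fddbac46"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"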
However, it would be more satisfying to fix the problem you are having with using Blueprint. I don't have enough information to debug this without seeing the full structure of your packages, but I would suggest as a start
making sure that your code is properly structured as a package
testing that, even if unregistered, you can ] add your package from the REPL by git repo URL (i.e. ] add https://some_website.com/you/Blueprint.jl)
EDIT:
Upon inspection of the code linked in the comments (https://github.com/jonasseglare/Blueprint), a few other issues:
Although they are already installed by default, standard libraries these days do need to be included in [deps]. In this case, that means the LinearAlgebra stdlib
Any package whose functions you directly use in your test scripts, other than your package itself, does need to be added to test/Project.toml; packages used only indirectly, via the exported functions of your package, do not.
In your case, the former means LinearAlgebra and FunctionalCollections, but not Setfield (that one only needs to be in the regular Project.toml, since it is not used directly in runtests.jl).
Consequently, with a few minor changes to your repo we are able to simply
] add https://github.com/brenhinkeller/Blueprint
] test Blueprint
or, since you preferred at the command line
user$ julia -e "using Pkg; Pkg.add(url=\"https://github.com/brenhinkeller/Blueprint\")"
user$ julia -e "using Pkg; Pkg.test(\"Blueprint\")"
Testing Blueprint
Status `/private/var/folders/qk/2qyrdb854mvd2tn4crc802lw0000gn/T/jl_fSypP7/Project.toml`
[c1615a0c] Blueprint v0.1.0 `https://github.com/brenhinkeller/Blueprint#master`
[de31a74c] FunctionalCollections v0.5.0
[37e2e46d] LinearAlgebra `#stdlib/LinearAlgebra`
[8dfed614] Test `#stdlib/Test`
Status `/private/var/folders/qk/2qyrdb854mvd2tn4crc802lw0000gn/T/jl_fSypP7/Manifest.toml`
[c1615a0c] Blueprint v0.1.0 `https://github.com/brenhinkeller/Blueprint#master`
[187b0558] ConstructionBase v1.3.0
[de31a74c] FunctionalCollections v0.5.0
[1914dd2f] MacroTools v0.5.9
[ae029012] Requires v1.3.0
[efcf1570] Setfield v0.8.1
[56f22d72] Artifacts `#stdlib/Artifacts`
[2a0f44e3] Base64 `#stdlib/Base64`
[9fa8497b] Future `#stdlib/Future`
[b77e0a4c] InteractiveUtils `#stdlib/InteractiveUtils`
[8f399da3] Libdl `#stdlib/Libdl`
[37e2e46d] LinearAlgebra `#stdlib/LinearAlgebra`
[56ddb016] Logging `#stdlib/Logging`
[d6f4376e] Markdown `#stdlib/Markdown`
[9a3f8284] Random `#stdlib/Random`
[ea8e919c] SHA `#stdlib/SHA`
[9e88b42a] Serialization `#stdlib/Serialization`
[8dfed614] Test `#stdlib/Test`
[cf7118a7] UUIDs `#stdlib/UUIDs`
[e66e0078] CompilerSupportLibraries_jll `#stdlib/CompilerSupportLibraries_jll`
[4536629a] OpenBLAS_jll `#stdlib/OpenBLAS_jll`
[8e850b90] libblastrampoline_jll `#stdlib/libblastrampoline_jll`
Testing Running tests...
Test Summary: | Pass Total
Plane tests | 7 7
Test Summary: | Pass Total
Plane intersection | 2 2
Test Summary: | Pass Total
Plane intersection 2 | 4 4
Test Summary: | Pass Total
Plane shadowing | 3 3
Test Summary: | Pass Total
Polyhedron tests | 3 3
Test Summary: | Pass Total
Polyhedron tests 2 | 5 5
Test Summary: | Pass Total
Beam tests | 2 2
Test Summary: | Pass Total
Half-space test | 2 2
Test Summary: | Pass Total
Ordered pair test | 2 2
Test Summary: | Pass Total
Test plane/line intersection | 2 2
Test Summary: | Pass Total
Update line bounds test | 21 21
Testing Blueprint tests passed
FWIW, you should also be able to mix and match those command-line and REPL approaches (i.e., install in repl, test via command line or vice versa).
While I had not originally considered this case, one additional possibility discussed in the comments is where one wishes to test the local state of a package without relying on a git remote; in this case @Rulle reports that activating the package directory, i.e.,
julia -e "using Pkg; Pkg.activate(\".\"); Pkg.test(\"Blueprint\")"
or
julia --project=. -e "using Pkg; Pkg.test(\"Blueprint\")"
or equivalently in the REPL
] activate .
] test Blueprint
will work, assuming the package directory is the current directory (.)
Possible answer to my own question:
To make it work, specify the main project root directory on the command line when calling the script using --project. In this case, we would call
julia --project=/home/jonas/prog/julia/blueprint test/test_blueprint.jl
However, there seems to be some hidden state that I don't understand, because after this command has been run once, it seems as if the --project option can be omitted. On the other hand, I have also tried to provide a nonsense project directory, e.g. /tmp:
julia --project=/tmp test/test_blueprint.jl
and sometimes it will still run the unit tests (!) and sometimes it won't. But when it fails to run the unit tests, it will succeed again as soon as I specify the correct path, that is /home/jonas/prog/julia/blueprint. I also don't understand how this interacts with whether I use using blueprint or include('../src/blueprint.jl'), but it seems that when I use using, it works only if the --project path is set correctly. But I am still not sure.

How to properly save Common Lisp image using SBCL?

If I want to create a Lisp-image of my program, how do I do it properly? Are there any prerequisites? And doesn't it play nicely with QUICKLISP?
Right now, if I start SBCL (with just QUICKLISP pre-loaded) and save the image:
(save-lisp-and-die "core")
And then try to start SBCL again with this image
sbcl --core core
And then try to do:
(ql:quickload :cl-yaclyaml)
I get the following:
To load "cl-yaclyaml":
Load 1 ASDF system:
cl-yaclyaml
; Loading "cl-yaclyaml"
.......
debugger invoked on a SB-INT:EXTENSION-FAILURE in thread
#<THREAD "main thread" RUNNING {100322C613}>:
Don't know how to REQUIRE sb-sprof.
See also:
The SBCL Manual, Variable *MODULE-PROVIDER-FUNCTIONS*
The SBCL Manual, Function REQUIRE
Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.
restarts (invokable by number or by possibly-abbreviated name):
0: [RETRY ] Retry completing load for #<REQUIRE-SYSTEM "sb-sprof">.
1: [ACCEPT ] Continue, treating completing load for #<REQUIRE-SYSTEM "sb-sprof"> as having been successful.
2: Retry ASDF operation.
3: [CLEAR-CONFIGURATION-AND-RETRY] Retry ASDF operation after resetting the configuration.
4: [ABORT ] Give up on "cl-yaclyaml"
5: Exit debugger, returning to top level.
(SB-IMPL::REQUIRE-ERROR "Don't know how to ~S ~A." REQUIRE "sb-sprof")
0]
Alternatively, if I try:
(require 'sb-sprof)
when sbcl is started with saved core, I get the same error. If sbcl is started just as sbcl there is no error reported.
In fact, pre-loading QUICKLISP is not a problem: the same problem happens if sbcl is called initially with sbcl --no-userinit --no-sysinit.
Am I doing it wrong?
PS. If I use roswell, ros -L sbcl-bin -m core run somehow doesn't pick up the image (tested by declaring variable *A* before saving and not seeing it once restarted).
PS2. So far what it looks like is that sbcl does not provide extension modules (SB-SPROF, SB-POSIX, etc.) unless they are explicitly required prior saving the image.
Thanks to the help from @jkiiski, here is the full explanation and solution:
SBCL uses extra modules (SB-SPROF, SB-POSIX and others) that are not always loaded into the image. These modules reside in the contrib directory, located either where the SBCL_HOME environment variable points (if it is set) or where the image resides (for example, /usr/local/lib/sbcl/).
When an image is saved in another location and SBCL_HOME is not set, SBCL cannot find contrib, hence the errors I saw.
Setting SBCL_HOME to point to the contrib location (or copying contrib to the image location, or the new image to the contrib location) solves the problem.
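For example (a sketch; the actual path depends on where your SBCL installation keeps contrib):
$ export SBCL_HOME=/usr/local/lib/sbcl   # directory that contains contrib/
$ sbcl --core core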
Finally, about roswell: the roswell parameter -m searches for images in a specific location. For SBCL (sbcl-bin) it would be something like ~/.roswell/impls/x86-64/linux/sbcl-bin/1.3.7/dump/. Secondly, the image name for SBCL must have the form <name>.core, and to start it, use ros -m <name> -L sbcl-bin run. (Quick edit: as was pointed out to me, it is better to use ros dump to save images when using roswell.)
If you want to create executables, you could try the following:
(sb-ext:save-lisp-and-die
 "core"
 :compression t
 ;; this is the main function:
 :toplevel (lambda ()
             (print "hell world")
             0)
 :executable t)
With this you should be able to call QUICKLOAD as you wish. Maybe you want to checkout my extension to CL-PROJECT for creating executables: https://github.com/ritschmaster/cl-project

Calling external program from R with multiple commands in system

I am new to programming; mainly I can write some scripts within R, but for my work I need to call an external program. For this program to work in the Ubuntu terminal I have to first set an environment variable and then execute the program. Googling, I found the system() and Sys.setenv() functions, but unfortunately I cannot make them work.
This is the code that does work in the Ubuntu terminal:
$ export PATH=/home/meme/bin:$PATH
$ mast "/home/meme/meme.txt" "/home/meme/seqs.txt" -o "/home/meme/output" -comp
Where the first two arguments are input files, the -o argument is the output directory and the -comp is another parameter for the program to run.
The reason I need to do it in R, despite it already working in the terminal, is that I need to run the program 1000 times with 1000 different files, so I want to make a for loop where the input name changes in every iteration, and then analyze every output in R.
I have already tried to use:
Sys.setenv(PATH="/home/meme/bin"); system(mast "/home/meme/meme.txt" "/home/meme/seqs.txt" -o "/home/meme/output" -comp )
and
system(Sys.setenv(PATH="/home/meme/bin") && mast "/home/meme/meme.txt" "/home/meme/seqs.txt" -o "/home/meme/output" -comp )
but always received:
Error: unexpected constant string in "system(mast "/home/meme/meme.txt""
or
Error: unexpected symbol in "system(Sys.setenv(PATH="/home/meme/bin") && mast "/home/meme/meme.txt""
At this point I have run out of ideas to make this work. If this has already been answered, then my googling has simply been poor, and I would appreciate any links to the relevant answers.
Thank you very much for your time.
Carlos
Additional details:
I use Ubuntu 12.04 64-bit, RStudio version 0.97.551, and R version 3.0.2 (2013-09-25) -- "Frisbee Sailing", platform x86_64-pc-linux-gnu (64-bit).
The program I use (MAST) finds a sequence pattern in a list of letters; it is part of the MEME SUITE version 4.9.1, found at http://meme.nbcr.net/meme/doc/meme-install.html, and is run from the command line. The command-line usage for mast is:
mast <motif file> <sequence file> [options]
Construct the string you want to execute with paste and feed that to system:
for(i in 1:10){
    cmd = paste("export FOO=", i, " ; echo \"$FOO\" ", sep='')
    system(cmd)
}
Note the use of sep='' to stop paste inserting spaces, and the backslash-escaped quote marks in the string to preserve them.
Test before running by using print(cmd) instead of system(cmd) to make sure you are getting the right command built. Maybe do:
if(TESTING){print(cmd)}else{system(cmd)}
and set TESTING=TRUE or FALSE in R before running.
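Applied to your mast case, a sketch might look like this (the per-iteration file-naming scheme is invented for illustration; substitute your own):
# prepend the MEME bin directory to PATH once, then loop over inputs
Sys.setenv(PATH = paste("/home/meme/bin", Sys.getenv("PATH"), sep = ":"))
for (i in 1:1000) {
    seqfile <- paste("/home/meme/seqs_", i, ".txt", sep = "")   # hypothetical naming
    outdir  <- paste("/home/meme/output_", i, sep = "")
    cmd <- paste("mast /home/meme/meme.txt", seqfile, "-o", outdir, "-comp")
    system(cmd)
}
Note that Sys.setenv() here prepends to PATH rather than replacing it, so mast's helper programs and the usual system tools remain findable.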
If you are going to be running more than one shell command per system call, it might be better to put them all in one shell script file and call that instead, passing parameters from R. Something like:
cmd = paste("/home/me/bin/dojob.sh ",i,i+1)
system(cmd)
and then dojob.sh is a shell script that parses the args. You'll need to learn a bit more shell scripting.

frama-c jessie killed during VC generation

I'm trying to apply Frama-C/Jessie to a module of a proprietary safety-critical system from our customer. The function under analysis is pretty big (around 700 uncommented lines) with a lot of conditional statements as well as complex conditions (&&, ||, etc.).
I got this error message when I ran it on a 64-bit Ubuntu VM. It appears Error 137 is related to running out of memory, but I'm not quite sure.
Any suggestion for how to approach this error is greatly appreciated.
[formal_verification]$ frama-c -jessie test.c
[kernel] preprocessing with "gcc -C -E -I. -dD test.c"
[jessie] Starting Jessie translation
[jessie] Producing Jessie files in subdir test.jessie
[jessie] File test.jessie/test.jc written.
[jessie] File test.jessie/test.cloc written.
[jessie] Calling Jessie tool in subdir tests.jessie
Generating Why function testFun
[jessie] Calling VCs generator.
gwhy-bin [...] why/test.why
Computation of VCs...
Killed
make: *** [test.stat] Error 137
with a lot of conditional statements as well as complex (&&, ||, etc).
You should use the so-called “fast WP” option when analyzing functions with lots of nested conditionals. Otherwise, the target does not even need to be very large to cause a blowup.
It happens to be the example in Jessie's manual for passing options to Why (it is really a Why option):
-jessie-why-opt=<s>
give an option to Why (e.g., -fast-wp)
You would therefore use -jessie-why-opt=-fast-wp.
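Putting it together with the original invocation, that would be something like:
[formal_verification]$ frama-c -jessie -jessie-why-opt=-fast-wp test.c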

Using python to execute .R script

After seven hours of googling and rereading through somewhat similar questions, and then lots of trial and error, I'm now comfortable asking for some guidance.
To simplify my actual task, I created a very basic R script (named test_script):
x <- c(1,2,3,4,5)
avg <- mean(x)
write.csv(avg, file = "output.csv")
This works as expected.
I'm new to python and I'm just trying to figure out how to execute the R script so that the same .csv file is created.
Notable results come from:
subprocess.call(["C:/Program Files/R/R-2.15.2/bin/R", 'C:/Users/matt/Desktop/test_script.R'])
This opens a cmd window with the typical R start-up verbiage, except there is a message which reads, "ARGUMENT 'C:/Users/matt/Desktop/test_script.R' __ ignored __"
And:
subprocess.call(['C:/Program Files/R/R-2.15.2/bin/Rscript', 'C:/Users/matt/Desktop/test_script.r'])
This flashes a cmd window and returns a 0, but no .csv file is created.
Otherwise, I've tried every suggestion I could identify on this site or any other. Any insight will be greatly appreciated. Thanks in advance for your time and efforts.
Running R --help at the command prompt prints:
Usage: R [options] [< infile] [> outfile]
or: R CMD command [arguments]
Start R, a system for statistical computation and graphics, with the
specified options, or invoke an R tool via the 'R CMD' interface.
Options:
-h, --help Print short help message and exit
--version Print version info and exit
...
-f FILE, --file=FILE Take input from 'FILE'
-e EXPR Execute 'EXPR' and exit
FILE may contain spaces but not shell metacharacters.
Commands:
BATCH Run R in batch mode
COMPILE Compile files for use with R
...
Try
call(["C:/Program Files/R/R-2.15.2/bin/R", '-f', 'C:/Users/matt/Desktop/test_script.R'])
There are also some other command-line arguments you can pass to R that may be helpful. Run R --help to see the full list.
It might be too late, but I hope this helps others:
Just add --vanilla in the call list.
subprocess.call(['C:/Program Files/R/R-2.15.2/bin/Rscript', '--vanilla', 'C:/Users/matt/Desktop/test_script.r'])
