I am trying to run Julia unit tests from the command line, but the tests fail to run because they cannot find a dependency that I am using in my main project. How can I make this work? The actual command that I execute from the project root is julia test/test_blueprint.jl. More details follow.
Details about the setup
My project is located at the path /home/jonas/prog/julia/blueprint. In that directory, I have a Project.toml file containing these lines:
name = "blueprint"
uuid = "c1615a0c-c255-402d-ae34-0b88819b43c6"
authors = [""]
version = "0.1.0"
[deps]
FunctionalCollections = "de31a74c-ac4f-5751-b3fd-e18cd04993ca"
Setfield = "efcf1570-3423-57d1-acb7-fd33fddbac46"
along with the Manifest.toml file.
I have a subdirectory test/ with unit tests, which I created following this guide, and that directory contains another Project.toml file containing
[deps]
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
There is a file test/test_blueprint.jl with unit tests and that file starts with
using Test
include("../src/blueprint.jl") # Alternative 1
#using blueprint # Alternative 2
using FunctionalCollections
using LinearAlgebra
...
The actual code being tested is in the file src/blueprint.jl.
Details about the problem
From the project root, I attempt to run the unit tests using the command julia test/test_blueprint.jl. When I run that command it produces the following output:
ERROR: LoadError: ArgumentError: Package Setfield not found in current path:
- Run `import Pkg; Pkg.add("Setfield")` to install the Setfield package.
Stacktrace:
[1] require(into::Module, mod::Symbol)
# Base ./loading.jl:967
[2] include(fname::String)
# Base.MainInclude ./client.jl:451
[3] top-level scope
# ~/prog/julia/blueprint/test/test_blueprint.jl:8
in expression starting at /home/jonas/prog/julia/blueprint/src/blueprint.jl:1
in expression starting at /home/jonas/prog/julia/blueprint/test/test_blueprint.jl:8
suggesting that it cannot find the dependency Setfield. If I edit the top of the file test/test_blueprint.jl slightly from
include("../src/blueprint.jl") # Alternative 1
#using blueprint # Alternative 2
to
#include("../src/blueprint.jl") # Alternative 1
using blueprint # Alternative 2
it still fails, but with a different error:
ERROR: LoadError: ArgumentError: Package blueprint not found in current path:
- Run `import Pkg; Pkg.add("blueprint")` to install the blueprint package.
Stacktrace:
[1] require(into::Module, mod::Symbol)
# Base ./loading.jl:967
in expression starting at /home/jonas/prog/julia/blueprint/test/test_blueprint.jl:9
Question: How can I make the unit tests run from the command line?
Note that I can run the unit tests from within the Julia REPL in Emacs by activating the project using C-c C-a at the src/blueprint.jl file and calling C-c C-b at the unit test file test/test_blueprint.jl. My Julia version is 1.7.0 (2021-11-30). Don't hesitate to ask for more clarifications.
First, a few naming conventions that are probably not (but may be) contributing to the issues here:
By convention, package names begin with a single capital, so I would recommend changing the name to Blueprint everywhere
By default, ] test runs the tests found in test/runtests.jl, so I would recommend naming your top-level testing script runtests.jl to avoid confusion (a minimal sketch follows this list), even though it does seem from the errors here that your test_blueprint.jl file is being found and run one way or another.
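For reference, a sketch of what test/runtests.jl could look like under these conventions (assuming the package is renamed to Blueprint as suggested above and the existing tests stay in test_blueprint.jl):

using Test
using Blueprint

include("test_blueprint.jl")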
Now, while I can't test this without the full code of your package, what I suspect is happening here is the following:
Normally, dependencies of the package you are testing (let's say MyPackage) are not required in test/Project.toml because they are implicit in MyPackage. So after a successful using MyPackage, while they will still not be available to any functions written in your test scripts (test/runtests.jl), they will be available to the functions written in MyPackage -- just as if you had typed using MyPackage at the REPL and then run your test code there. This is the only reason you don't normally need to duplicate all the deps from the main Project.toml in test/Project.toml.
Since the using Blueprint approach is failing here for other reasons, when you simply include the code from src/blueprint.jl, the using statements within that file will in turn fail, because those packages are not present in the active environment given by test/Project.toml (even if they are present elsewhere on your system).
Consequently, one quick fix to your problem with the current include("../src/blueprint.jl") approach would be to simply add those dependencies to your test/Project.toml
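For concreteness, a sketch of what test/Project.toml might then contain (the FunctionalCollections, Setfield and Test UUIDs are copied from the files shown above; the LinearAlgebra UUID is the standard stdlib one, and in any case running pkg> add from an activated test environment fills these in for you):

[deps]
FunctionalCollections = "de31a74c-ac4f-5751-b3fd-e18cd04993ca"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
Setfield = "efcf1570-3423-57d1-acb7-fd33fddbac46"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"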
However, it would be more satisfying to fix the problem you are having with using Blueprint. I don't have enough information to debug this without seeing the full structure of your packages, but I would suggest as a start
making sure that your code is properly structured as a package
testing that, even if unregistered, you can ] add your package from the REPL by git repo URL (i.e. ] add https://some_website.com/you/Blueprint.jl)
EDIT:
Upon inspection of the code linked in the comments (https://github.com/jonasseglare/Blueprint), a few other issues:
Although they are already installed by default, standard libraries these days do need to be included in [deps]. In this case, that means the LinearAlgebra stdlib
Any packages you are explicitly using in your test scripts, other than your package itself, do need to be added to test/Project.toml. That is, any package whose functions you call directly in your test scripts (rather than only indirectly via the exported functions of your package) must be listed there.
In your case, the latter would appear to mean LinearAlgebra and FunctionalCollections, but not Setfield (that one only needs to be included in the regular Project.toml, since it's not being directly used in runtests.jl).
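One way to add these without editing the TOML (and UUIDs) by hand is to activate the test environment and let Pkg do it; a sketch, run from the project root:

julia -e 'using Pkg; Pkg.activate("test"); Pkg.add(["FunctionalCollections", "LinearAlgebra"])'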
Consequently, with a few minor changes to your repo we are able to simply
] add https://github.com/brenhinkeller/Blueprint
] test Blueprint
or, since you preferred the command line:
user$ julia -e "using Pkg; Pkg.add(url=\"https://github.com/brenhinkeller/Blueprint\")"
user$ julia -e "using Pkg; Pkg.test(\"Blueprint\")"
Testing Blueprint
Status `/private/var/folders/qk/2qyrdb854mvd2tn4crc802lw0000gn/T/jl_fSypP7/Project.toml`
[c1615a0c] Blueprint v0.1.0 `https://github.com/brenhinkeller/Blueprint#master`
[de31a74c] FunctionalCollections v0.5.0
[37e2e46d] LinearAlgebra `#stdlib/LinearAlgebra`
[8dfed614] Test `#stdlib/Test`
Status `/private/var/folders/qk/2qyrdb854mvd2tn4crc802lw0000gn/T/jl_fSypP7/Manifest.toml`
[c1615a0c] Blueprint v0.1.0 `https://github.com/brenhinkeller/Blueprint#master`
[187b0558] ConstructionBase v1.3.0
[de31a74c] FunctionalCollections v0.5.0
[1914dd2f] MacroTools v0.5.9
[ae029012] Requires v1.3.0
[efcf1570] Setfield v0.8.1
[56f22d72] Artifacts `#stdlib/Artifacts`
[2a0f44e3] Base64 `#stdlib/Base64`
[9fa8497b] Future `#stdlib/Future`
[b77e0a4c] InteractiveUtils `#stdlib/InteractiveUtils`
[8f399da3] Libdl `#stdlib/Libdl`
[37e2e46d] LinearAlgebra `#stdlib/LinearAlgebra`
[56ddb016] Logging `#stdlib/Logging`
[d6f4376e] Markdown `#stdlib/Markdown`
[9a3f8284] Random `#stdlib/Random`
[ea8e919c] SHA `#stdlib/SHA`
[9e88b42a] Serialization `#stdlib/Serialization`
[8dfed614] Test `#stdlib/Test`
[cf7118a7] UUIDs `#stdlib/UUIDs`
[e66e0078] CompilerSupportLibraries_jll `#stdlib/CompilerSupportLibraries_jll`
[4536629a] OpenBLAS_jll `#stdlib/OpenBLAS_jll`
[8e850b90] libblastrampoline_jll `#stdlib/libblastrampoline_jll`
Testing Running tests...
Test Summary: | Pass Total
Plane tests | 7 7
Test Summary: | Pass Total
Plane intersection | 2 2
Test Summary: | Pass Total
Plane intersection 2 | 4 4
Test Summary: | Pass Total
Plane shadowing | 3 3
Test Summary: | Pass Total
Polyhedron tests | 3 3
Test Summary: | Pass Total
Polyhedron tests 2 | 5 5
Test Summary: | Pass Total
Beam tests | 2 2
Test Summary: | Pass Total
Half-space test | 2 2
Test Summary: | Pass Total
Ordered pair test | 2 2
Test Summary: | Pass Total
Test plane/line intersection | 2 2
Test Summary: | Pass Total
Update line bounds test | 21 21
Testing Blueprint tests passed
FWIW, you should also be able to mix and match those command-line and REPL approaches (i.e., install in the REPL and test via the command line, or vice versa).
While I had not originally considered this case, one additional possibility discussed in the comments is where one wishes to test the local state of a package without, or without relying upon, a git remote; in this case @Rulle reports that activating the package directory, i.e.,
julia -e "using Pkg; Pkg.activate(\".\"); Pkg.test(\"Blueprint\")"
or
julia --project=. -e "using Pkg; Pkg.test(\"Blueprint\")"
or equivalently in the REPL
] activate .
] test Blueprint
will work, assuming the package directory is the current working directory (.)
Possible answer to my own question:
To make it work, specify the main project root directory on the command line when calling the script using --project. In this case, we would call
julia --project=/home/jonas/prog/julia/blueprint test/test_blueprint.jl
However, there seems to be some hidden state that I don't understand, because after this command has been run once, it seems as if the --project option can be omitted. On the other hand, I have also tried to provide a nonsense project directory, e.g. /tmp:
julia --project=/tmp test/test_blueprint.jl
and sometimes it will still run the unit tests (!) and sometimes it won't. But when it fails to run the unit tests, it will succeed again as soon as I specify the correct path, that is /home/jonas/prog/julia/blueprint. I also don't understand how this interacts with whether I use using blueprint or include("../src/blueprint.jl"), but it seems as if, when I use using, it works if and only if the --project path is set correctly. But I am still not sure.
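For what it's worth, the relative form shown earlier also works when running the test script directly, as long as the command is launched from the project root; this avoids hard-coding the absolute path:

julia --project=. test/test_blueprint.jl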
I am attempting to use mypy with modules that use ruamel.yaml, and mypy cannot find ruamel.yaml even though Python has no problem finding it. I am puzzled because I can't find a module called YAML.py or a class called YAML either, even though these statements work in Python:
from ruamel.yaml import YAML
yaml = YAML()
x = yaml.load()
What do I need to do to get mypy to recognize ruamel.yaml?
A workaround is to run without the incremental logic of mypy:
python -m mypy --no-incremental myfile.py
Background
There is a known issue in mypy, see here.
In summary:
Something is not working with the incremental logic of mypy when it encounters ruamel.
When you run it once, all goes ok. This is the command:
python -m mypy myfile.py
Then, when you run it again, you get an error:
error: Skipping analyzing 'ruamel': found module but no type hints or library stubs [import]
Then, when you run it again, it all goes ok
etc.
You should not be looking for a file YAML.py. The YAML in
yaml = YAML()
is a class that is defined in ruamel/yaml/main.py and that gets imported into ruamel/yaml/__init__.py (both under site-packages). That is why you do:
from ruamel.yaml import YAML
(the alternative would be that there is a file yaml.py under the directory ruamel, but the loader/dumper is a bit too much to put in one file).
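A quick way to confirm where the YAML class actually lives (a sketch; run it with the same interpreter/environment that mypy is checking):

python -c "from ruamel.yaml import YAML; import inspect; print(inspect.getsourcefile(YAML))"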
If the above knowledge doesn't help you resolve things, what might work is explicitly setting the global config option mypy_path or the environment variable MYPYPATH. This has to include the directory in which the directory ruamel is located.
(I could not find it mentioned in the documentation, but from the source (mypy/build.py:mypy_path()) you can see that this is supposed to be a string that gets split on os.pathsep, which is the colon (:) on my Linux-based system.)
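For example, either of these forms should work (the site-packages path is only a placeholder; use whichever directory contains the ruamel directory on your system):

# mypy.ini
[mypy]
mypy_path = /home/user/.venv/lib/python3.7/site-packages

# or via the environment variable
MYPYPATH=/home/user/.venv/lib/python3.7/site-packages python -m mypy myfile.py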
I have the same issue.
Even after setting MYPYPATH=./.venv/lib/python3.7/site-packages
A temporary 'solution' is to ignore the missing-import error:
mypy --ignore-missing-imports
Can anyone tell me why I cannot successfully test OpenBLAS's dgemm performance (in GFLOPs) in R in the following way?
link R with the "reference BLAS" libblas.so
compile my C program mmperf.c with OpenBLAS library libopenblas.so
load the resulting shared library mmperf.so into R, call the R wrapper function mmperf and report dgemm performance in GFLOPs.
Point 1 looks strange, but I have no choice, because I have no root access on the machines I want to test on, so actually linking R to OpenBLAS is impossible. By "not successfully" I mean my program ends up reporting dgemm performance for the reference BLAS instead of OpenBLAS. I hope someone can explain to me:
why my way does not work;
is it possible at all to make it work (this is important, because if it is impossible, I must write a C main function and do my job in a C program.)
I've investigated this issue for two days; here I will include various system output to help you make a diagnosis. To make things reproducible, I will also include the code and makefile as well as the shell commands.
Part 1: system environment before testing
There are 2 ways to invoke R, either using R or Rscript. There are some differences in what is loaded when they are invoked:
~/Desktop/dgemm$ readelf -d $(R RHOME)/bin/exec/R | grep "NEEDED"
0x00000001 (NEEDED) Shared library: [libR.so]
0x00000001 (NEEDED) Shared library: [libpthread.so.0]
0x00000001 (NEEDED) Shared library: [libc.so.6]
~/Desktop/dgemm$ readelf -d $(R RHOME)/bin/Rscript | grep "NEEDED"
0x00000001 (NEEDED) Shared library: [libc.so.6]
Here we need to choose Rscript, because R loads libR.so, which will automatically load the reference BLAS libblas.so.3:
~/Desktop/dgemm$ readelf -d $(R RHOME)/lib/libR.so | grep blas
0x00000001 (NEEDED) Shared library: [libblas.so.3]
~/Desktop/dgemm$ ls -l /etc/alternatives/libblas.so.3
... 31 May /etc/alternatives/libblas.so.3 -> /usr/lib/libblas/libblas.so.3.0
~/Desktop/dgemm$ readelf -d /usr/lib/libblas/libblas.so.3 | grep SONAME
0x0000000e (SONAME) Library soname: [libblas.so.3]
Comparatively, Rscript gives a cleaner environment.
Part 2: OpenBLAS
After downloading the source from OpenBLAS and running a simple make command, a shared library of the form libopenblas-<arch>-<release>.so-<version> is generated. Note that we do not have root access to install it; instead, we copy this library into our working directory ~/Desktop/dgemm and rename it simply to libopenblas.so. At the same time we make another copy named libopenblas.so.0, as this is the SONAME the run-time loader will look for:
~/Desktop/dgemm$ readelf -d libopenblas.so | grep "RPATH\|SONAME"
0x0000000e (SONAME) Library soname: [libopenblas.so.0]
Note that no RPATH attribute is given, which means this library is intended to be put in /usr/lib, where we would call ldconfig to add it to ld.so.cache. But again we don't have root access to do this. In fact, if this could be done, all the difficulties would be gone: we could then use update-alternatives --config libblas.so.3 to effectively link R to OpenBLAS.
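For concreteness, the copy/rename step described above might look like this (the source path is simply wherever your OpenBLAS build produced the library):

~/Desktop/dgemm$ cp /path/to/OpenBLAS/libopenblas-<arch>-<release>.so-<version> libopenblas.so
~/Desktop/dgemm$ cp libopenblas.so libopenblas.so.0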
Part 3: C code, Makefile, and R code
Here is a C source file mmperf.c that computes the GFLOPs achieved when multiplying two square matrices of size N:
#include <R.h>
#include <Rmath.h>
#include <Rinternals.h>
#include <R_ext/BLAS.h>
#include <sys/time.h>
/* standard C subroutine */
double mmperf (int n) {
/* local vars */
int n2 = n * n, tmp; double *A, *C, one = 1.0;
struct timeval t1, t2; double elapsedTime, GFLOPs;
/* simulate N-by-N matrix A */
A = (double *)calloc(n2, sizeof(double));
GetRNGstate();
tmp = 0; while (tmp < n2) {A[tmp] = runif(0.0, 1.0); tmp++;}
PutRNGstate();
/* generate N-by-N zero matrix C */
C = (double *)calloc(n2, sizeof(double));
/* time 'dgemm.f' for C <- A * A + C */
gettimeofday(&t1, NULL);
F77_CALL(dgemm) ("N", "N", &n, &n, &n, &one, A, &n, A, &n, &one, C, &n);
gettimeofday(&t2, NULL);
/* free memory */
free(A); free(C);
/* compute elapsed time in microseconds (usec or 1e-6 sec) */
elapsedTime = (double)(t2.tv_sec - t1.tv_sec) * 1e+6;
elapsedTime += (double)(t2.tv_usec - t1.tv_usec);
/* convert microseconds to nanoseconds (1e-9 sec) */
elapsedTime *= 1e+3;
/* compute and return GFLOPs */
GFLOPs = 2.0 * (double)n2 * (double)n / elapsedTime;
return GFLOPs;
}
/* R wrapper */
SEXP R_mmperf (SEXP n) {
double GFLOPs = mmperf(asInteger(n));
return ScalarReal(GFLOPs);
}
Here is a simple R script mmperf.R to report GFLOPs for case N = 2000
mmperf <- function (n) {
dyn.load("mmperf.so")
GFLOPs <- .Call("R_mmperf", n)
dyn.unload("mmperf.so")
return(GFLOPs)
}
GFLOPs <- round(mmperf(2000), 2)
cat(paste("GFLOPs =",GFLOPs, "\n"))
Finally there is a simple makefile to generate the shared library mmperf.so:
mmperf.so: mmperf.o
gcc -shared -L$(shell pwd) -Wl,-rpath=$(shell pwd) -o mmperf.so mmperf.o -lopenblas
mmperf.o: mmperf.c
gcc -fpic -O2 -I$(shell Rscript --default-packages=base --vanilla -e 'cat(R.home("include"))') -c mmperf.c
Put all these files under working directory ~/Desktop/dgemm, and compile it:
~/Desktop/dgemm$ make
~/Desktop/dgemm$ readelf -d mmperf.so | grep "NEEDED\|RPATH\|SONAME"
0x00000001 (NEEDED) Shared library: [libopenblas.so.0]
0x00000001 (NEEDED) Shared library: [libc.so.6]
0x0000000f (RPATH) Library rpath: [/home/zheyuan/Desktop/dgemm]
The output reassures us that OpenBLAS is correctly linked, and the run time load path is correctly set.
Part 4: testing OpenBLAS in R
Let's do
~/Desktop/dgemm$ Rscript --default-packages=base --vanilla mmperf.R
Note our script needs only the base package in R, and --vanilla is used to ignore all user settings on R start-up. On my laptop, my program returns:
GFLOPs = 1.11
Oops! This is truly reference BLAS performance, not OpenBLAS (which is about 8-9 GFLOPs).
Part 5: Why?
To be honest, I don't know why this happens. Each step seems to work correctly. Does something subtle occur when R is invoked? For example, is there any possibility that the OpenBLAS library is overridden by the reference BLAS at some point, for some reason? Any explanations and solutions? Thanks!
why my way does not work
First, shared libraries on UNIX are designed to mimic the way archive libraries work (archive libraries were there first). In particular, that means that if you have libfoo.so and libbar.so, both defining symbol foo, then whichever library is loaded first is the one that wins: all references to foo from anywhere within the program (including from libbar.so) will bind to libfoo.so's definition of foo.
This mimics what would happen if you linked your program against libfoo.a and libbar.a, where both archive libraries defined the same symbol foo. More info on archive linking here.
It should be clear from above, that if libblas.so.3 and libopenblas.so.0 define the same set of symbols (which they do), and if libblas.so.3 is loaded into the process first, then routines from libopenblas.so.0 will never be called.
Second, you've correctly decided that since R directly links against libR.so, and since libR.so directly links against libblas.so.3, it is guaranteed that libopenblas.so.0 will lose the battle.
However, you erroneously decided that Rscript is better, and it's not: Rscript is a tiny binary (11K on my system, compared to 2.4MB for libR.so), and approximately all it does is exec R. This is trivial to see in strace output:
strace -e trace=execve /usr/bin/Rscript --default-packages=base --vanilla /dev/null
execve("/usr/bin/Rscript", ["/usr/bin/Rscript", "--default-packages=base", "--vanilla", "/dev/null"], [/* 42 vars */]) = 0
execve("/usr/lib/R/bin/R", ["/usr/lib/R/bin/R", "--slave", "--no-restore", "--vanilla", "--file=/dev/null", "--args"], [/* 43 vars */]) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=89625, si_status=0, si_utime=0, si_stime=0} ---
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=89626, si_status=0, si_utime=0, si_stime=0} ---
execve("/usr/lib/R/bin/exec/R", ["/usr/lib/R/bin/exec/R", "--slave", "--no-restore", "--vanilla", "--file=/dev/null", "--args"], [/* 51 vars */]) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=89630, si_status=0, si_utime=0, si_stime=0} ---
+++ exited with 0 +++
Which means that by the time your script starts executing, libblas.so.3 has been loaded, and libopenblas.so.0 that will be loaded as a dependency of mmperf.so will not actually be used for anything.
is it possible at all to make it work
Probably. I can think of two possible solutions:
Pretend that libopenblas.so.0 is actually libblas.so.3
Rebuild the entire R package against libopenblas.so.
For #1, you need to ln -s libopenblas.so.0 libblas.so.3, then make sure that your copy of libblas.so.3 is found before the system one, by setting LD_LIBRARY_PATH appropriately.
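A minimal sketch of approach #1, assuming the OpenBLAS copy lives in ~/Desktop/dgemm as in the question (the demonstration that follows instead substitutes a copy of libc.so.6, purely to show that the LD_LIBRARY_PATH override takes effect):

cd ~/Desktop/dgemm
ln -s libopenblas.so.0 libblas.so.3
export LD_LIBRARY_PATH=$(pwd):$LD_LIBRARY_PATH
Rscript --default-packages=base --vanilla mmperf.R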
This appears to work for me:
mkdir /tmp/libblas
# pretend that libc.so.6 is really libblas.so.3
cp /lib/x86_64-linux-gnu/libc.so.6 /tmp/libblas/libblas.so.3
LD_LIBRARY_PATH=/tmp/libblas /usr/bin/Rscript /dev/null
Error in dyn.load(file, DLLpath = DLLpath, ...) :
unable to load shared object '/usr/lib/R/library/stats/libs/stats.so':
/usr/lib/liblapack.so.3: undefined symbol: cgemv_
During startup - Warning message:
package ‘stats’ in options("defaultPackages") was not found
Note how I got an error (my "pretend" libblas.so.3 doesn't define symbols expected of it, since it's really a copy of libc.so.6).
You can also confirm which version of libblas.so.3 is getting loaded this way:
LD_DEBUG=libs LD_LIBRARY_PATH=/tmp/libblas /usr/bin/Rscript /dev/null |& grep 'libblas\.so\.3'
91533: find library=libblas.so.3 [0]; searching
91533: trying file=/usr/lib/R/lib/libblas.so.3
91533: trying file=/usr/lib/x86_64-linux-gnu/libblas.so.3
91533: trying file=/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/server/libblas.so.3
91533: trying file=/tmp/libblas/libblas.so.3
91533: calling init: /tmp/libblas/libblas.so.3
For #2, you said:
I have no root access on machines I want to test, so actual linking to OpenBLAS is impossible.
but that seems to be a bogus argument: if you can build libopenblas, surely you can also build your own version of R.
Update:
You mentioned in the beginning that libblas.so.3 and libopenblas.so.0 define the same symbols; what does this mean? They have different SONAMEs, so is that insufficient for the system to distinguish them?
The symbols and the SONAME have nothing to do with each other.
You can see symbols in the output from readelf -Ws libblas.so.3 and readelf -Ws libopenblas.so.0. Symbols related to BLAS, such as cgemv_, will appear in both libraries.
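For example (paths as in the question; the exact symbol-table lines will differ from system to system):

~/Desktop/dgemm$ readelf -Ws /usr/lib/libblas/libblas.so.3 | grep -w dgemm_
~/Desktop/dgemm$ readelf -Ws libopenblas.so.0 | grep -w dgemm_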
Your confusion about SONAME possibly comes from Windows. DLLs on Windows are designed completely differently. In particular, when FOO.DLL imports symbol bar from BAR.DLL, both the name of the symbol (bar) and the DLL from which that symbol was imported (BAR.DLL) are recorded in FOO.DLL's import table.
That makes it easy to have R import cgemv_ from BLAS.DLL, while MMPERF.DLL imports the same symbol from OPENBLAS.DLL.
However, that makes library interpositioning hard, and works completely differently from the way archive libraries work (even on Windows).
Opinions differ on which design is better overall, but neither system is likely to ever change its model.
There are ways for UNIX to emulate Windows-style symbol binding: see RTLD_DEEPBIND in the dlopen man page. Beware: these are fraught with peril, likely to confuse UNIX experts, not widely used, and likely to have implementation bugs.
Update 2:
you mean I compile R and install it under my home directory?
Yes.
Then, when I want to invoke it, should I explicitly give the path to my version of the executable, since otherwise the one on the system might be invoked instead? Or can I put this path at the front of the environment variable $PATH to cheat the system?
Either way works.
*********************
Solution 2:
*********************
Here we offer another solution, exploiting the environment variable LD_PRELOAD mentioned in solution 1. The use of LD_PRELOAD is more "brutal", as it forces a given library to be loaded into the program before any other library, even before the C library libc.so! This is often used for urgent patching in Linux development.
As shown in part 2 of the original post, the shared BLAS library libopenblas.so has SONAME libopenblas.so.0. An SONAME is the internal name that the dynamic loader looks for at run time, so we need to make a symbolic link to libopenblas.so with this SONAME:
~/Desktop/dgemm$ ln -sf libopenblas.so libopenblas.so.0
then we export it:
~/Desktop/dgemm$ export LD_PRELOAD=$(pwd)/libopenblas.so.0
Note that a full path to libopenblas.so.0 needs to be fed to LD_PRELOAD for a successful load, even if libopenblas.so.0 is under $(pwd).
Now we launch Rscript and check what happens by LD_DEBUG:
~/Desktop/dgemm$ LD_DEBUG=libs Rscript --default-packages=base --vanilla /dev/null |& grep blas
4860: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4860: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4865: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4868: calling fini: /home/zheyuan/Desktop/dgemm/libopenblas.so [0]
4870: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4869: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4867: calling fini: /home/zheyuan/Desktop/dgemm/libopenblas.so [0]
4860: find library=libblas.so.3 [0]; searching
4860: trying file=/usr/lib/R/lib/libblas.so.3
4860: trying file=/usr/lib/i386-linux-gnu/i686/sse2/libblas.so.3
4860: trying file=/usr/lib/i386-linux-gnu/i686/cmov/libblas.so.3
4860: trying file=/usr/lib/i386-linux-gnu/i686/libblas.so.3
4860: trying file=/usr/lib/i386-linux-gnu/sse2/libblas.so.3
4860: trying file=/usr/lib/i386-linux-gnu/libblas.so.3
4860: trying file=/usr/lib/jvm/java-7-openjdk-i386/jre/lib/i386/client/libblas.so.3
4860: trying file=/usr/lib/libblas.so.3
4860: calling init: /usr/lib/libblas.so.3
4860: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4874: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4876: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4860: calling fini: /home/zheyuan/Desktop/dgemm/libopenblas.so [0]
4860: calling fini: /usr/lib/libblas.so.3 [0]
Comparing with what we saw in solution 1, where we cheated R with our own version of libblas.so.3, we can see that
libopenblas.so.0 is loaded first, hence found first by Rscript;
after libopenblas.so.0 is found, Rscript goes on to search for and load libblas.so.3. However, this has no effect, by the "first come, first served" rule explained in the original answer.
Good, everything works, so we test our mmperf.c program:
~/Desktop/dgemm$ Rscript --default-packages=base --vanilla mmperf.R
GFLOPs = 9.62
The fact that the result 9.62 is bigger than the 8.77 we saw in the earlier solution is merely chance. Since this is just a test that OpenBLAS is being used, we don't run the experiment many times for a more precise result.
Then as usual, we unset environment variable in the end:
~/Desktop/dgemm$ unset LD_PRELOAD
*********************
Solution 1:
*********************
Thanks to Employed Russian, my problem is finally solved. The investigation required important skills in Linux system debugging and patching, and I believe what I learned is a great asset. Here I will post a solution, as well as correct several points in my original post.
1. About invoking R
In my original post, I mentioned there are two ways to launch R, either via R or via Rscript. However, I wrongly exaggerated their difference. Let's now investigate their start-up process via an important Linux debugging facility, strace (see man strace). There are actually lots of interesting things happening after we type a command in the shell, and we can use
strace -e trace=process [command]
to trace all system calls involving process management. As a result we can watch the fork, wait, and execution steps of a process. Though not stated in the manual page, @Employed Russian shows that it is possible to specify only a subset of these calls, for example execve for the execution steps.
For R we have
~/Desktop/dgemm$ time strace -e trace=execve R --vanilla < /dev/null > /dev/null
execve("/usr/bin/R", ["R", "--vanilla"], [/* 70 vars */]) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5777, si_status=0, si_utime=0, si_stime=0} ---
execve("/usr/lib/R/bin/exec/R", ["/usr/lib/R/bin/exec/R", "--vanilla"], [/* 79 vars */]) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5778, si_status=0, si_utime=0, si_stime=0} ---
+++ exited with 0 +++
real 0m0.345s
user 0m0.256s
sys 0m0.068s
while for Rscript we have
~/Desktop/dgemm$ time strace -e trace=execve Rscript --default-packages=base --vanilla /dev/null
execve("/usr/bin/Rscript", ["Rscript", "--default-packages=base", "--vanilla", "/dev/null"], [/* 70 vars */]) = 0
execve("/usr/lib/R/bin/R", ["/usr/lib/R/bin/R", "--slave", "--no-restore", "--vanilla", "--file=/dev/null"], [/* 71 vars */]) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5822, si_status=0, si_utime=0, si_stime=0} ---
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5823, si_status=0, si_utime=0, si_stime=0} ---
execve("/usr/lib/R/bin/exec/R", ["/usr/lib/R/bin/exec/R", "--slave", "--no-restore", "--vanilla", "--file=/dev/null"], [/* 80 vars */]) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5827, si_status=0, si_utime=0, si_stime=0} ---
+++ exited with 0 +++
real 0m0.063s
user 0m0.020s
sys 0m0.028s
We have also used time to measure the start-up time. Note that
Rscript is about 5.5 times faster than R. One reason is that R loads 6 default packages on start-up, while Rscript loads only the base package, controlled by --default-packages=base. But it is still much faster even without this setting.
In the end both start-up processes are directed to $(R RHOME)/bin/exec/R, and in my original post I have already used readelf -d to show that this executable loads libR.so, which is linked against libblas.so.3. According to @Employed Russian's explanation, the BLAS library loaded first wins, so there is no way my original method could work.
To run strace smoothly, we used the file /dev/null as the input file and output file where necessary. For example, Rscript demands an input file, while R demands both. We feed the null device to both to make the commands run smoothly and the output clean. The null device is an actual file on the filesystem, but it behaves specially: reads from it return nothing, and writes to it are discarded.
2. Cheat R
Now, since libblas.so will be loaded anyway, the only thing we can do is provide our own version of this library. As I said in the original post, if we had root access this would be really easy, using update-alternatives --config libblas.so.3, so that the system would complete the switch for us. But @Employed Russian offers an awesome way to cheat the system without root access: check how R finds the BLAS library on start-up, and make sure it finds our version before the system default! To monitor how shared libraries are found and loaded, use the environment variable LD_DEBUG.
There are a number of Linux environment variables with the prefix LD_, as documented in man ld.so. These variables can be set when invoking an executable, so that we can change the run-time behaviour of a program. Some useful variables include:
LD_LIBRARY_PATH for setting run time library search path;
LD_DEBUG for tracing look-up and loading of shared libraries;
LD_TRACE_LOADED_OBJECTS for displaying all libraries loaded by a program (behaves similarly to ldd; see the example after this list);
LD_PRELOAD for forcing a library to be loaded into a program at the very start, before any other library is searched for;
LD_PROFILE and LD_PROFILE_OUTPUT for profiling one specified shared library. R users who have read section 3.4.1.1 sprof of Writing R Extensions should recall that this is used for profiling compiled code from within R.
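As a quick illustration of LD_TRACE_LOADED_OBJECTS (mentioned above), the following prints the libraries the R executable would load, much like ldd, without actually running R:

~/Desktop/dgemm$ LD_TRACE_LOADED_OBJECTS=1 $(R RHOME)/bin/exec/R | grep blas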
The use of LD_DEBUG can be seen by:
~/Desktop/dgemm$ LD_DEBUG=help cat
Valid options for the LD_DEBUG environment variable are:
libs display library search paths
reloc display relocation processing
files display progress for input file
symbols display symbol table processing
bindings display information about symbol binding
versions display version dependencies
scopes display scope information
all all previous options combined
statistics display relocation statistics
unused determined unused DSOs
help display this help message and exit
To direct the debugging output into a file instead of standard output a filename can be specified using the LD_DEBUG_OUTPUT environment variable.
Here we are particularly interested in using LD_DEBUG=libs. For example,
~/Desktop/dgemm$ LD_DEBUG=libs Rscript --default-packages=base --vanilla /dev/null |& grep blas
5974: find library=libblas.so.3 [0]; searching
5974: trying file=/usr/lib/R/lib/libblas.so.3
5974: trying file=/usr/lib/i386-linux-gnu/i686/sse2/libblas.so.3
5974: trying file=/usr/lib/i386-linux-gnu/i686/cmov/libblas.so.3
5974: trying file=/usr/lib/i386-linux-gnu/i686/libblas.so.3
5974: trying file=/usr/lib/i386-linux-gnu/sse2/libblas.so.3
5974: trying file=/usr/lib/i386-linux-gnu/libblas.so.3
5974: trying file=/usr/lib/jvm/java-7-openjdk-i386/jre/lib/i386/client/libblas.so.3
5974: trying file=/usr/lib/libblas.so.3
5974: calling init: /usr/lib/libblas.so.3
5974: calling fini: /usr/lib/libblas.so.3 [0]
shows the various attempts the R program makes to locate and load libblas.so.3. So if we can provide our own version of libblas.so.3 and make sure R finds it first, the problem is solved.
Let's first make a symbolic link libblas.so.3 in our working path to the OpenBLAS library libopenblas.so, then expand default LD_LIBRARY_PATH with our working path (and export it):
~/Desktop/dgemm$ ln -sf libopenblas.so libblas.so.3
~/Desktop/dgemm$ export LD_LIBRARY_PATH=$(pwd):$LD_LIBRARY_PATH ## put our working path at the front
Now let's check again the library loading process:
~/Desktop/dgemm$ LD_DEBUG=libs Rscript --default-packages=base --vanilla /dev/null |& grep blas
6063: find library=libblas.so.3 [0]; searching
6063: trying file=/usr/lib/R/lib/libblas.so.3
6063: trying file=/usr/lib/i386-linux-gnu/i686/sse2/libblas.so.3
6063: trying file=/usr/lib/i386-linux-gnu/i686/cmov/libblas.so.3
6063: trying file=/usr/lib/i386-linux-gnu/i686/libblas.so.3
6063: trying file=/usr/lib/i386-linux-gnu/sse2/libblas.so.3
6063: trying file=/usr/lib/i386-linux-gnu/libblas.so.3
6063: trying file=/usr/lib/jvm/java-7-openjdk-i386/jre/lib/i386/client/libblas.so.3
6063: trying file=/home/zheyuan/Desktop/dgemm/libblas.so.3
6063: calling init: /home/zheyuan/Desktop/dgemm/libblas.so.3
6063: calling fini: /home/zheyuan/Desktop/dgemm/libblas.so.3 [0]
Great! We have successfully cheated R.
3. Experiment with OpenBLAS
~/Desktop/dgemm$ Rscript --default-packages=base --vanilla mmperf.R
GFLOPs = 8.77
Now, everything works as expected!
4. Unset LD_LIBRARY_PATH (to be safe)
It is a good practice to unset LD_LIBRARY_PATH after use.
~/Desktop/dgemm$ unset LD_LIBRARY_PATH