frama-c jessie killed during VC generation - frama-c

I'm trying to apply frama-c/jessie to a module of a proprietary safety-critical system from our customer. The function under analysis is pretty big (around 700 uncommented lines) with a lot of conditional statements as well as complex boolean expressions (&&, ||, etc.).
I got the error message below when I ran it on a 64-bit Ubuntu VM. It appears Error 137 is related to running out of memory, but I'm not quite sure.
Any suggestion for how to approach this error is greatly appreciated.
[formal_verification]$ frama-c -jessie test.c
[kernel] preprocessing with "gcc -C -E -I. -dD test.c"
[jessie] Starting Jessie translation
[jessie] Producing Jessie files in subdir test.jessie
[jessie] File test.jessie/test.jc written.
[jessie] File test.jessie/test.cloc written.
[jessie] Calling Jessie tool in subdir tests.jessie
Generating Why function testFun
[jessie] Calling VCs generator.
gwhy-bin [...] why/test.why
Computation of VCs...
Killed
make: *** [test.stat] Error 137

with a lot of conditional statements as well as complex (&&, ||, etc).
You should use the so-called “fast WP” option when analyzing functions with lots of nested conditionals. Otherwise, the target does not even need to be very large to cause a blowup.
It happens to be the example in Jessie's manual for passing options to Why (it is really a Why option):
-jessie-why-opt=<s>
give an option to Why (e.g., -fast-wp)
You would therefore use -jessie-why-opt=-fast-wp.
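For example, the original command could be rerun along these lines (a sketch; the option is simply forwarded to Why, as described above):
[formal_verification]$ frama-c -jessie -jessie-why-opt=-fast-wp test.c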

Related

produce binary code from IR generated with llvmlite

Does anyone know if it is possible to get binary code from IR generated with llvmlite? In LLVM, we can simply run clang -emit-llvm -o foo.bc -c foo.c. What if I am using llvmlite?
As far as I can tell, llvmlite doesn't include a linker. You can write object code with, for example,
# assumes `module` is an llvmlite.binding module you have already created/parsed
import llvmlite.binding as llvm
llvm.initialize()
llvm.initialize_native_target()
llvm.initialize_native_asmprinter()
target = llvm.Target.from_default_triple()
machine = target.create_target_machine()
with llvm.create_mcjit_compiler(module, machine) as mcjit:
    def on_compiled(module, objbytes):
        open('mymodule.o', 'wb').write(objbytes)  # object code is raw bytes
    mcjit.set_object_cache(on_compiled, lambda m: None)
    mcjit.finalize_object()
Then use your standard linker, ld, to link the object file; usually you would invoke it via gcc or clang. LLVM 4 ships with its own linker, lld, which you could use manually, but llvmlite isn't on version 4 yet and so can't expose that functionality.
On my machine for example, I can run from bash
$ gcc -o llvmapp mymodule.o
$ ./llvmapp
It appears the easiest solution thus far is to write all your code directly in Python, but that comes at the cost of run time, which I know some people don't care about.
Unfortunately, I would have to agree with @Jimmy. I haven't seen anything yet, and it is 2019, two years later, and still nothing.

Without root access, run R with tuned BLAS when it is linked with reference BLAS

Can anyone tell me why I cannot successfully test OpenBLAS's dgemm performance (in GFLOPs) in R in the following way?
1. link R with the "reference BLAS" libblas.so;
2. compile my C program mmperf.c with the OpenBLAS library libopenblas.so;
3. load the resulting shared library mmperf.so into R, call the R wrapper function mmperf, and report the dgemm performance in GFLOPs.
Point 1 looks strange, but I have no choice because I have no root access on machines I want to test, so actual linking to OpenBLAS is impossible. By "not successfully" I mean my program ends up reporting dgemm performance for reference BLAS instead of OpenBLAS. I hope someone can explain to me:
why my way does not work;
is it possible at all to make it work (this is important, because if it is impossible, I must write a C main function and do my job in a C program.)
I've investigated this issue for two days; here I will include various system output to help you make a diagnosis. To make things reproducible, I will also include the code, the makefile, and the shell commands.
Part 1: system environment before testing
There are 2 ways to invoke R, either using R or Rscript. There are some differences in what is loaded when they are invoked:
~/Desktop/dgemm$ readelf -d $(R RHOME)/bin/exec/R | grep "NEEDED"
0x00000001 (NEEDED) Shared library: [libR.so]
0x00000001 (NEEDED) Shared library: [libpthread.so.0]
0x00000001 (NEEDED) Shared library: [libc.so.6]
~/Desktop/dgemm$ readelf -d $(R RHOME)/bin/Rscript | grep "NEEDED"
0x00000001 (NEEDED) Shared library: [libc.so.6]
Here we need to choose Rscript, because R loads libR.so, which will automatically load the reference BLAS libblas.so.3:
~/Desktop/dgemm$ readelf -d $(R RHOME)/lib/libR.so | grep blas
0x00000001 (NEEDED) Shared library: [libblas.so.3]
~/Desktop/dgemm$ ls -l /etc/alternatives/libblas.so.3
... 31 May /etc/alternatives/libblas.so.3 -> /usr/lib/libblas/libblas.so.3.0
~/Desktop/dgemm$ readelf -d /usr/lib/libblas/libblas.so.3 | grep SONAME
0x0000000e (SONAME) Library soname: [libblas.so.3]
Comparatively, Rscript gives a cleaner environment.
Part 2: OpenBLAS
After downloading the source from OpenBLAS and running a simple make command, a shared library of the form libopenblas-<arch>-<release>.so-<version> is generated. Note that we do not have root access to install it; instead, we copy this library into our working directory ~/Desktop/dgemm and rename it simply libopenblas.so. At the same time we make another copy named libopenblas.so.0, as this is the SONAME the runtime loader will look for:
~/Desktop/dgemm$ readelf -d libopenblas.so | grep "RPATH\|SONAME"
0x0000000e (SONAME) Library soname: [libopenblas.so.0]
Note that the RPATH attribute is not given, which means this library is intended to be put in /usr/lib, where we would call ldconfig to add it to ld.so.cache. But again we don't have root access to do this. In fact, if this could be done, all the difficulties would be gone: we could then use update-alternatives --config libblas.so.3 to effectively link R to OpenBLAS.
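For concreteness, the copy-and-rename step above might look like this (a sketch only; the exact library file name produced by make depends on your CPU and the OpenBLAS release):
~/Desktop/dgemm$ cp /path/to/OpenBLAS/libopenblas-*.so ./libopenblas.so
~/Desktop/dgemm$ cp ./libopenblas.so ./libopenblas.so.0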
Part 3: C code, Makefile, and R code
Here is the C file mmperf.c, which computes the GFLOPs achieved when multiplying two square matrices of size N:
#include <R.h>
#include <Rmath.h>
#include <Rinternals.h>
#include <R_ext/BLAS.h>
#include <sys/time.h>

/* standard C subroutine */
double mmperf (int n) {
  /* local vars */
  int n2 = n * n, tmp; double *A, *C, one = 1.0;
  struct timeval t1, t2; double elapsedTime, GFLOPs;
  /* simulate N-by-N matrix A */
  A = (double *)calloc(n2, sizeof(double));
  GetRNGstate();
  tmp = 0; while (tmp < n2) {A[tmp] = runif(0.0, 1.0); tmp++;}
  PutRNGstate();
  /* generate N-by-N zero matrix C */
  C = (double *)calloc(n2, sizeof(double));
  /* time 'dgemm.f' for C <- A * A + C */
  gettimeofday(&t1, NULL);
  F77_CALL(dgemm) ("N", "N", &n, &n, &n, &one, A, &n, A, &n, &one, C, &n);
  gettimeofday(&t2, NULL);
  /* free memory */
  free(A); free(C);
  /* compute elapsedTime in microseconds (usec or 1e-6 sec) */
  elapsedTime = (double)(t2.tv_sec - t1.tv_sec) * 1e+6;
  elapsedTime += (double)(t2.tv_usec - t1.tv_usec);
  /* convert microseconds to nanoseconds (1e-9 sec) */
  elapsedTime *= 1e+3;
  /* compute and return GFLOPs */
  GFLOPs = 2.0 * (double)n2 * (double)n / elapsedTime;
  return GFLOPs;
}

/* R wrapper */
SEXP R_mmperf (SEXP n) {
  double GFLOPs = mmperf(asInteger(n));
  return ScalarReal(GFLOPs);
}
Here is a simple R script mmperf.R to report GFLOPs for case N = 2000
mmperf <- function (n) {
  dyn.load("mmperf.so")
  GFLOPs <- .Call("R_mmperf", n)
  dyn.unload("mmperf.so")
  return(GFLOPs)
}
GFLOPs <- round(mmperf(2000), 2)
cat(paste("GFLOPs =", GFLOPs, "\n"))
Finally there is a simple makefile to generate the shared library mmperf.so:
mmperf.so: mmperf.o
	gcc -shared -L$(shell pwd) -Wl,-rpath=$(shell pwd) -o mmperf.so mmperf.o -lopenblas

mmperf.o: mmperf.c
	gcc -fpic -O2 -I$(shell Rscript --default-packages=base --vanilla -e 'cat(R.home("include"))') -c mmperf.c
Put all these files under working directory ~/Desktop/dgemm, and compile it:
~/Desktop/dgemm$ make
~/Desktop/dgemm$ readelf -d mmperf.so | grep "NEEDED\|RPATH\|SONAME"
0x00000001 (NEEDED) Shared library: [libopenblas.so.0]
0x00000001 (NEEDED) Shared library: [libc.so.6]
0x0000000f (RPATH) Library rpath: [/home/zheyuan/Desktop/dgemm]
The output reassures us that OpenBLAS is correctly linked, and the run time load path is correctly set.
Part 4: testing OpenBLAS in R
Let's do
~/Desktop/dgemm$ Rscript --default-packages=base --vanilla mmperf.R
Note our script needs only the base package in R, and --vanilla is used to ignore all user settings on R start-up. On my laptop, my program returns:
GFLOPs = 1.11
Oops! This is truly reference BLAS performance, not OpenBLAS (which is about 8-9 GFLOPs).
Part 5: Why?
To be honest, I don't know why this happens. Each step seems to work correctly. Does something subtle occur when R is invoked? For example, is there any possibility that the OpenBLAS library is overridden by reference BLAS at some point, for some reason? Any explanations and solutions? Thanks!
why my way does not work
First, shared libraries on UNIX are designed to mimic the way archive libraries work (archive libraries were there first). In particular, that means that if you have libfoo.so and libbar.so, both defining symbol foo, then whichever library is loaded first is the one that wins: all references to foo from anywhere within the program (including from libbar.so) will bind to libfoo.so's definition of foo.
This mimics what would happen if you linked your program against libfoo.a and libbar.a, where both archive libraries defined the same symbol foo. More info on archive linking here.
It should be clear from the above that if libblas.so.3 and libopenblas.so.0 define the same set of symbols (which they do), and if libblas.so.3 is loaded into the process first, then routines from libopenblas.so.0 will never be called.
Second, you've correctly decided that since R directly links against libR.so, and since libR.so directly links against libblas.so.3, it is guaranteed that libopenblas.so.0 will lose the battle.
However, you erroneously decided that Rscript is better; it's not: Rscript is a tiny binary (11K on my system; compare with 2.4MB for libR.so), and approximately all it does is exec R. This is trivial to see in the strace output:
strace -e trace=execve /usr/bin/Rscript --default-packages=base --vanilla /dev/null
execve("/usr/bin/Rscript", ["/usr/bin/Rscript", "--default-packages=base", "--vanilla", "/dev/null"], [/* 42 vars */]) = 0
execve("/usr/lib/R/bin/R", ["/usr/lib/R/bin/R", "--slave", "--no-restore", "--vanilla", "--file=/dev/null", "--args"], [/* 43 vars */]) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=89625, si_status=0, si_utime=0, si_stime=0} ---
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=89626, si_status=0, si_utime=0, si_stime=0} ---
execve("/usr/lib/R/bin/exec/R", ["/usr/lib/R/bin/exec/R", "--slave", "--no-restore", "--vanilla", "--file=/dev/null", "--args"], [/* 51 vars */]) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=89630, si_status=0, si_utime=0, si_stime=0} ---
+++ exited with 0 +++
This means that by the time your script starts executing, libblas.so.3 has already been loaded, and libopenblas.so.0, which will be loaded later as a dependency of mmperf.so, will not actually be used for anything.
is it possible at all to make it work
Probably. I can think of two possible solutions:
1. Pretend that libopenblas.so.0 is actually libblas.so.3.
2. Rebuild the entire R package against libopenblas.so.
For #1, you need to ln -s libopenblas.so.0 libblas.so.3, then make sure that your copy of libblas.so.3 is found before the system one, by setting LD_LIBRARY_PATH appropriately.
This appears to work for me:
mkdir /tmp/libblas
# pretend that libc.so.6 is really libblas.so.3
cp /lib/x86_64-linux-gnu/libc.so.6 /tmp/libblas/libblas.so.3
LD_LIBRARY_PATH=/tmp/libblas /usr/bin/Rscript /dev/null
Error in dyn.load(file, DLLpath = DLLpath, ...) :
unable to load shared object '/usr/lib/R/library/stats/libs/stats.so':
/usr/lib/liblapack.so.3: undefined symbol: cgemv_
During startup - Warning message:
package ‘stats’ in options("defaultPackages") was not found
Note how I got an error (my "pretend" libblas.so.3 doesn't define symbols expected of it, since it's really a copy of libc.so.6).
You can also confirm which version of libblas.so.3 is getting loaded this way:
LD_DEBUG=libs LD_LIBRARY_PATH=/tmp/libblas /usr/bin/Rscript /dev/null |& grep 'libblas\.so\.3'
91533: find library=libblas.so.3 [0]; searching
91533: trying file=/usr/lib/R/lib/libblas.so.3
91533: trying file=/usr/lib/x86_64-linux-gnu/libblas.so.3
91533: trying file=/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/server/libblas.so.3
91533: trying file=/tmp/libblas/libblas.so.3
91533: calling init: /tmp/libblas/libblas.so.3
For #2, you said:
I have no root access on machines I want to test, so actual linking to OpenBLAS is impossible.
but that seems to be a bogus argument: if you can build libopenblas, surely you can also build your own version of R.
Update:
You mentioned in the beginning that libblas.so.3 and libopenblas.so.0 define the same symbols. What does this mean? They have different SONAMEs; is that not enough for the system to distinguish them?
The symbols and the SONAME have nothing to do with each other.
You can see symbols in the output from readelf -Ws libblas.so.3 and readelf -Ws libopenblas.so.0. Symbols related to BLAS, such as cgemv_, will appear in both libraries.
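For example, a quick way to confirm the overlap is something like the following (a sketch; paths and exact output will differ per system):
readelf -Ws /usr/lib/libblas.so.3 | grep ' cgemv_'
readelf -Ws libopenblas.so.0 | grep ' cgemv_'
Both commands should show a defined cgemv_ symbol.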
Your confusion about SONAME possibly comes from Windows. The DLLs on Windows are designed completely differently. In particular, when FOO.DLL imports symbol bar from BAR.DLL, both the name of the symbol (bar) and the DLL from which that symbol was imported (BAR.DLL) are recorded in FOO.DLL's import table.
That makes it easy to have R import cgemv_ from BLAS.DLL, while MMPERF.DLL imports the same symbol from OPENBLAS.DLL.
However, that makes library interpositioning hard, and works completely differently from the way archive libraries work (even on Windows).
Opinions differ on which design is better overall, but neither system is likely to ever change its model.
There are ways for UNIX to emulate Windows-style symbol binding: see RTLD_DEEPBIND in dlopen man page. Beware: these are fraught with peril, likely to confuse UNIX experts, are not widely used, and likely to have implementation bugs.
Update 2:
You mean I should compile R and install it under my home directory?
Yes.
Then when I want to invoke it, I should explicitly give the path to my version of the executable, otherwise the one on the system might be invoked instead? Or can I put this path first in the environment variable $PATH to cheat the system?
Either way works.
*********************
Solution 2:
*********************
Here we offer another solution, exploiting the environment variable LD_PRELOAD mentioned in solution 1. The use of LD_PRELOAD is more "brutal", as it forces a given library to be loaded into the program before any other library, even before the C library libc.so! This is often used for urgent patching in Linux development.
As shown in part 2 of the original post, the shared BLAS library libopenblas.so has SONAME libopenblas.so.0. An SONAME is the internal name that the dynamic loader searches for at run time, so we need to make a symbolic link to libopenblas.so with this SONAME:
~/Desktop/dgemm$ ln -sf libopenblas.so libopenblas.so.0
then we export it:
~/Desktop/dgemm$ export LD_PRELOAD=$(pwd)/libopenblas.so.0
Note that the full path to libopenblas.so.0 needs to be fed to LD_PRELOAD for a successful load, even if libopenblas.so.0 is under $(pwd).
Now we launch Rscript and check what happens with LD_DEBUG:
~/Desktop/dgemm$ LD_DEBUG=libs Rscript --default-packages=base --vanilla /dev/null |& grep blas
4860: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4860: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4865: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4868: calling fini: /home/zheyuan/Desktop/dgemm/libopenblas.so [0]
4870: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4869: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4867: calling fini: /home/zheyuan/Desktop/dgemm/libopenblas.so [0]
4860: find library=libblas.so.3 [0]; searching
4860: trying file=/usr/lib/R/lib/libblas.so.3
4860: trying file=/usr/lib/i386-linux-gnu/i686/sse2/libblas.so.3
4860: trying file=/usr/lib/i386-linux-gnu/i686/cmov/libblas.so.3
4860: trying file=/usr/lib/i386-linux-gnu/i686/libblas.so.3
4860: trying file=/usr/lib/i386-linux-gnu/sse2/libblas.so.3
4860: trying file=/usr/lib/i386-linux-gnu/libblas.so.3
4860: trying file=/usr/lib/jvm/java-7-openjdk-i386/jre/lib/i386/client/libblas.so.3
4860: trying file=/usr/lib/libblas.so.3
4860: calling init: /usr/lib/libblas.so.3
4860: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4874: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4876: calling init: /home/zheyuan/Desktop/dgemm/libopenblas.so
4860: calling fini: /home/zheyuan/Desktop/dgemm/libopenblas.so [0]
4860: calling fini: /usr/lib/libblas.so.3 [0]
Comparing with what we saw in solution 1, where we cheated R with our own version of libblas.so.3, we can see that:
libopenblas.so.0 is loaded first, hence found first by Rscript;
after libopenblas.so.0 is found, Rscript goes on to search for and load libblas.so.3. However, this has no effect, by the "first come, first served" rule explained in the original answer.
Good, everything works, so we test our mmperf.c program:
~/Desktop/dgemm$ Rscript --default-packages=base --vanilla mmperf.R
GFLOPs = 9.62
The outcome of 9.62 being bigger than the 8.77 we saw in the earlier solution is merely by chance. As this is only a test that OpenBLAS is being used, we don't run the experiment many times for a more precise result.
Then, as usual, we unset the environment variable at the end:
~/Desktop/dgemm$ unset LD_PRELOAD
*********************
Solution 1:
*********************
Thanks to Employed Russian, my problem is finally solved. The investigation required real skills in Linux system debugging and patching, and I believe what I learned is a great asset. Here I post the solution, as well as correct several points in my original post.
1. About invoking R
In my original post, I mentioned there are two ways to launch R, either via R or Rscript. However, I wrongly exaggerated their difference. Let's now investigate their start-up processes via an important Linux debugging facility, strace (see man strace). There are actually lots of interesting things happening after we type a command in the shell, and we can use
strace -e trace=process [command]
to trace all system calls involving process management. As a result we can watch the fork, wait, and execution steps of a process. Though not stated in the manual page, @Employed Russian shows that it is possible to trace only a subset of these calls, for example execve for the execution steps.
For R we have
~/Desktop/dgemm$ time strace -e trace=execve R --vanilla < /dev/null > /dev/null
execve("/usr/bin/R", ["R", "--vanilla"], [/* 70 vars */]) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5777, si_status=0, si_utime=0, si_stime=0} ---
execve("/usr/lib/R/bin/exec/R", ["/usr/lib/R/bin/exec/R", "--vanilla"], [/* 79 vars */]) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5778, si_status=0, si_utime=0, si_stime=0} ---
+++ exited with 0 +++
real 0m0.345s
user 0m0.256s
sys 0m0.068s
while for Rscript we have
~/Desktop/dgemm$ time strace -e trace=execve Rscript --default-packages=base --vanilla /dev/null
execve("/usr/bin/Rscript", ["Rscript", "--default-packages=base", "--vanilla", "/dev/null"], [/* 70 vars */]) = 0
execve("/usr/lib/R/bin/R", ["/usr/lib/R/bin/R", "--slave", "--no-restore", "--vanilla", "--file=/dev/null"], [/* 71 vars */]) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5822, si_status=0, si_utime=0, si_stime=0} ---
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5823, si_status=0, si_utime=0, si_stime=0} ---
execve("/usr/lib/R/bin/exec/R", ["/usr/lib/R/bin/exec/R", "--slave", "--no-restore", "--vanilla", "--file=/dev/null"], [/* 80 vars */]) = 0
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5827, si_status=0, si_utime=0, si_stime=0} ---
+++ exited with 0 +++
real 0m0.063s
user 0m0.020s
sys 0m0.028s
We have also used time to measure the start-up time. Note that
Rscript is about 5.5 times faster than R. One reason is that R loads 6 default packages on start-up, while Rscript loads only the base package, as controlled by --default-packages=base. But it is still much faster even without this setting.
In the end both start-up processes are directed to $(R RHOME)/bin/exec/R, and in my original post I already used readelf -d to show that this executable loads libR.so, which is linked with libblas.so.3. According to @Employed Russian's explanation, the BLAS library loaded first wins, so there is no way my original method can work.
To run strace cleanly, we used the file /dev/null as the input file and output file where necessary. For example, Rscript demands an input file, while R demands both. We feed the null device to both to make the command run smoothly and keep the output clean. The null device is an actual file, but it behaves specially: when read, it contains nothing; when written to, it discards everything.
2. Cheat R
Now since libblas.so.3 will be loaded anyway, the only thing we can do is provide our own version of this library. As I said in the original post, if we had root access this would be really easy: using update-alternatives --config libblas.so.3, the system would complete the switch for us. But @Employed Russian offers an awesome way to cheat the system without root access: check how R finds the BLAS library on start-up, and make sure our version is found before the system default! To monitor how shared libraries are found and loaded, use the environment variable LD_DEBUG.
There are a number of Linux environment variables with the prefix LD_, as documented in man ld.so. These variables can be set when invoking an executable, so that we can change the run-time behaviour of a program. Some useful variables include:
LD_LIBRARY_PATH for setting the run-time library search path;
LD_DEBUG for tracing the look-up and loading of shared libraries;
LD_TRACE_LOADED_OBJECTS for displaying all libraries loaded by a program (it behaves similarly to ldd; see the short example after this list);
LD_PRELOAD for forcing a library to be loaded into a program at the very start, before any other library is searched for;
LD_PROFILE and LD_PROFILE_OUTPUT for profiling one specified shared library. R users who have read section 3.4.1.1 sprof of Writing R Extensions should recall that this is used for profiling compiled code from within R.
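For example, a quick way to see LD_TRACE_LOADED_OBJECTS in action (a sketch; the paths printed will differ per system):
~/Desktop/dgemm$ LD_TRACE_LOADED_OBJECTS=1 $(R RHOME)/bin/exec/R | grep blas
Like ldd, this prints the shared libraries the executable depends on (here it should list libblas.so.3) instead of actually running it.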
The use of LD_DEBUG can be seen by:
~/Desktop/dgemm$ LD_DEBUG=help cat
Valid options for the LD_DEBUG environment variable are:
libs display library search paths
reloc display relocation processing
files display progress for input file
symbols display symbol table processing
bindings display information about symbol binding
versions display version dependencies
scopes display scope information
all all previous options combined
statistics display relocation statistics
unused determined unused DSOs
help display this help message and exit
To direct the debugging output into a file instead of standard output a filename can be specified using the LD_DEBUG_OUTPUT environment variable.
Here we are particularly interested in using LD_DEBUG=libs. For example,
~/Desktop/dgemm$ LD_DEBUG=libs Rscript --default-packages=base --vanilla /dev/null |& grep blas
5974: find library=libblas.so.3 [0]; searching
5974: trying file=/usr/lib/R/lib/libblas.so.3
5974: trying file=/usr/lib/i386-linux-gnu/i686/sse2/libblas.so.3
5974: trying file=/usr/lib/i386-linux-gnu/i686/cmov/libblas.so.3
5974: trying file=/usr/lib/i386-linux-gnu/i686/libblas.so.3
5974: trying file=/usr/lib/i386-linux-gnu/sse2/libblas.so.3
5974: trying file=/usr/lib/i386-linux-gnu/libblas.so.3
5974: trying file=/usr/lib/jvm/java-7-openjdk-i386/jre/lib/i386/client/libblas.so.3
5974: trying file=/usr/lib/libblas.so.3
5974: calling init: /usr/lib/libblas.so.3
5974: calling fini: /usr/lib/libblas.so.3 [0]
shows the various attempts the R program makes to locate and load libblas.so.3. So if we can provide our own version of libblas.so.3 and make sure R finds it first, the problem is solved.
Let's first make a symbolic link libblas.so.3 in our working directory pointing to the OpenBLAS library libopenblas.so, then prepend our working directory to LD_LIBRARY_PATH (and export it):
~/Desktop/dgemm$ ln -sf libopenblas.so libblas.so.3
~/Desktop/dgemm$ export LD_LIBRARY_PATH=$(pwd):$LD_LIBRARY_PATH  ## put our working path first
Now let's check again the library loading process:
~/Desktop/dgemm$ LD_DEBUG=libs Rscript --default-packages=base --vanilla /dev/null |& grep blas
6063: find library=libblas.so.3 [0]; searching
6063: trying file=/usr/lib/R/lib/libblas.so.3
6063: trying file=/usr/lib/i386-linux-gnu/i686/sse2/libblas.so.3
6063: trying file=/usr/lib/i386-linux-gnu/i686/cmov/libblas.so.3
6063: trying file=/usr/lib/i386-linux-gnu/i686/libblas.so.3
6063: trying file=/usr/lib/i386-linux-gnu/sse2/libblas.so.3
6063: trying file=/usr/lib/i386-linux-gnu/libblas.so.3
6063: trying file=/usr/lib/jvm/java-7-openjdk-i386/jre/lib/i386/client/libblas.so.3
6063: trying file=/home/zheyuan/Desktop/dgemm/libblas.so.3
6063: calling init: /home/zheyuan/Desktop/dgemm/libblas.so.3
6063: calling fini: /home/zheyuan/Desktop/dgemm/libblas.so.3 [0]
Great! We have successfully cheated R.
3. Experiment with OpenBLAS
~/Desktop/dgemm$ Rscript --default-packages=base --vanilla mmperf.R
GFLOPs = 8.77
Now, everything works as expected!
4. Unset LD_LIBRARY_PATH (to be safe)
It is a good practice to unset LD_LIBRARY_PATH after use.
~/Desktop/dgemm$ unset LD_LIBRARY_PATH

How do Perl Cwd::cwd and Cwd::getcwd functions differ?

The question
What is the difference between Cwd::cwd and Cwd::getcwd in Perl, generally, without regard to any specific platform? Why does Perl have both? What is the intended use, which one should I use in which scenarios? (Example use cases will be appreciated.) Does it matter? (Assuming I don’t mix them.) Does choice of either one affect portability in any way? Which one is more commonly used in modules?
Even if I interpret the manual as saying that, except for corner cases, cwd is `pwd` and getcwd just calls getcwd from unistd.h, what is the actual difference? That holds only on POSIX systems, anyway.
I can always read the implementation, but that tells me nothing about the meaning of those functions. Implementation details may change; the defined meaning should not. (Otherwise a breaking change occurs, which is serious business.)
What does the manual say
Quoting Perl’s Cwd module manpage:
Each of these functions are called without arguments and return the absolute path of the current working directory.
getcwd
my $cwd = getcwd();
Returns the current working directory.
Exposes the POSIX function getcwd(3) or re-implements it if it's not available.
cwd
my $cwd = cwd();
The cwd() is the most natural form for the current architecture. For most systems it is identical to `pwd` (but without the trailing line terminator).
And in the Notes section:
Actually, on Mac OS, the getcwd(), fastgetcwd() and fastcwd() functions are all aliases for the cwd() function, which, on Mac OS, calls `pwd`. Likewise, the abs_path() function is an alias for fast_abs_path()
OK, I know that on Mac OS [1] there is no difference between getcwd() and cwd(), as both actually boil down to `pwd`. But what about other platforms? (I'm especially interested in Debian Linux.)
[1] Classic Mac OS, not OS X. $^O values are MacOS and darwin for Mac OS and OS X, respectively. Thanks, @tobyink and @ikegami.
And a little meta-question: How to avoid asking similar questions for other modules with very similar functions? Is there a universal way of discovering the difference, other than digging through the implementation? (Currently, I think that if the documentation is not clear about intended use and differences, I have to ask someone more experienced or read the implementation myself.)
Generally speaking
I think the idea is that cwd() always resolves to the external, OS-specific way of getting the current working directory. That is, running pwd on Linux, command /c cd on DOS, /usr/bin/fullpath -t in QNX, and so on — all examples are from actual Cwd.pm. getcwd() is supposed to use the POSIX system call if it is available, falling back to cwd() if not.
Why do we have both? In the current implementation, I believe exporting just getcwd() would be enough for most systems, but who knows where the logic of "if the syscall is available, use it, else run cwd()" might fail on some system (e.g. on MorphOS in Perl 5.6.1).
On Linux
On Linux, cwd() will run `/bin/pwd` (it will actually execute the binary and capture its output), while getcwd() will issue the getcwd(2) system call.
Actual effect inspected via strace
One can use strace(1) to see that in action:
Using cwd():
$ strace -f perl -MCwd -e 'cwd(); ' 2>&1 | grep execve
execve("/usr/bin/perl", ["perl", "-MCwd", "-e", "cwd(); "], [/* 27 vars */]) = 0
[pid 31276] execve("/bin/pwd", ["/bin/pwd"], [/* 27 vars */] <unfinished ...>
[pid 31276] <... execve resumed> ) = 0
Using getcwd():
$ strace -f perl -MCwd -e 'getcwd(); ' 2>&1 | grep execve
execve("/usr/bin/perl", ["perl", "-MCwd", "-e", "getcwd(); "], [/* 27 vars */]) = 0
Reading Cwd.pm source
You can take a look at the source (Cwd.pm, e.g. on CPAN) and see that on Linux the cwd() call is mapped to _backtick_pwd, which, as the name suggests, calls pwd in backticks.
Here is a snippet from Cwd.pm, with my comments:
unless ($METHOD_MAP{$^O}{cwd} or defined &cwd) {
    ...
    # some logic to find the pwd binary here, $found_pwd_cmd is set to 1 on Linux
    ...
    if( $os eq 'MacOS' || $found_pwd_cmd )
    {
        *cwd = \&_backtick_pwd;    # on Linux we actually go here
    }
    else {
        *cwd = \&getcwd;
    }
}
Performance benchmark
Finally, the difference between the two is that cwd(), which spawns another binary, must be slower. We can run a rough performance test:
$ time perl -MCwd -e 'for (1..10000) { cwd(); }'
real 0m7.177s
user 0m0.380s
sys 0m1.440s
Now compare it with the system call:
$ time perl -MCwd -e 'for (1..10000) { getcwd(); }'
real 0m0.018s
user 0m0.009s
sys 0m0.008s
Discussion, choice
But as you don't usually query the current working directory very often, both options will work, unless you cannot spawn any more processes for some reason (ulimit, an out-of-memory situation, etc.).
Finally, as for selecting which one to use: on Linux, I would always use getcwd(). I suppose you will need to run your own tests and select which function to use if you are going to write a portable piece of code that must run on some really strange platform (Linux, OS X, and Windows are of course not on the list of strange platforms).

frama-c mingw __restrict__ keyword

I am new to Frama-C. I would like to run it under Windows. My compiler is gcc (MinGW).
I have tried to run some examples from the Value Analysis tutorial, but I have a problem with library header files.
I've found that it's not possible to run frama-c because of the restrict keyword. It shows an error in the string.h file:
void * __cdecl memcpy(void * __restrict__ _Dst,const void * __restrict__ _Src,size_t _Size) __MINGW_ATTRIB_DEPRECATED_SEC_WARN;
When I manually add #define restrict to all *.c files in the Skein project
schneier.com/code/skein_NIST_CD_102610.zip
everything works correctly. But doing it by hand is not what I'm looking for.
The next step was to add the argument -D__restrict__:
frama-c -cpp-extra-args=-D__restrict__ -main=Init -val SHA3api_ref.c
[kernel] preprocessing with "gcc -C -E -I. -D__restrict__ SHA3api_ref.c"
../lib/gcc/i686-w64-mingw32/4.7.2/../../../../i686-w64-mingw32/include/string.h:41:[kernel] user error: syntax error
[kernel] user error: skipping file "SHA3api_ref.c" that has errors.
[kernel] Frama-C aborted because of an invalid user input.
I've also generated preprocessed *.i files, but the error is still the same:
gcc -E -D__restrict__ SHA3api_ref.c >SHA3api_ref.i
frama-c -main=Init -val SHA3api_ref.i
../lib/gcc/i686-w64-mingw32/4.7.2/../../../../i686-w64-mingw32/include/string.h:41:[kernel] user error: syntax error
[kernel] user error: skipping file "SHA3api_ref.i" that has errors.
[kernel] Frama-C aborted because of an invalid user input.
What can I do about this?
Your system headers contain non-standard syntax extensions that are not supported by Frama-C. This is normal, as the headers are often provided as part of a complete package with the compiler, so the headers and the compiler only need to work together, not to work with all the other programs that take C source code as input.
Generally speaking, you should always use the headers provided with Frama-C instead of those from your system.
When using GCC or a compatible compiler such as Clang, this involves passing the pre-processor the options -nostdinc and -I... where ... stands for the place where Frama-C's headers were installed. This location can be obtained from Frama-C with the option -print-share-path.
All in all, on a Unix system, it may look like:
frama-c -cpp-extra-args=-nostdinc -cpp-extra-args=-I`frama-c -print-share-path`/libc .....
Doing the same thing with Windows and MinGW follows the same idea but sometimes involves extra trouble due to the perpetual ambiguity between \ and / as directory separators.
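For example, from an MSYS (bash) shell, the command from the question might become something along these lines (a sketch only; quoting and the share path may need adjusting for your installation):
frama-c -cpp-extra-args=-nostdinc -cpp-extra-args=-I`frama-c -print-share-path`/libc -main=Init -val SHA3api_ref.c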
Recently, Frank Dordowsky has been having trouble with using a very new GCC version to pre-process C files for Frama-C. That was only when using -pp-annot, but in any case, the solution was to switch to Clang as pre-processor.

Calling external program from R with multiple commands in system

I am new to programming; mainly I am able to write some scripts within R, but for my work I need to call an external program. For this program to work in the Ubuntu terminal I have to first set an environment variable and then execute the program. Googling, I've found the system() and Sys.setenv() functions, but unfortunately I cannot make them work.
This is the code that does work in the Ubuntu terminal:
$ export PATH=/home/meme/bin:$PATH
$ mast "/home/meme/meme.txt" "/home/meme/seqs.txt" -o "/home/meme/output" -comp
where the first two arguments are input files, the -o argument gives the output directory, and -comp is another parameter for the program.
The reason I need to do this from R, even though it already works in the terminal, is that I need to run the program 1000 times with 1000 different files, so I want to make a for loop where the input name changes on every iteration and then analyze every output in R.
I have already tried to use:
Sys.setenv(PATH="/home/meme/bin"); system(mast "/home/meme/meme.txt" "/home/meme/seqs.txt" -o "/home/meme/output" -comp )
and
system(Sys.setenv(PATH="/home/meme/bin") && mast "/home/meme/meme.txt" "/home/meme/seqs.txt" -o "/home/meme/output" -comp )
but always received:
Error: unexpected constant string in "system(mast "/home/meme/meme.txt""
or
Error: unexpected symbol in "system(Sys.setenv(PATH="/home/meme/bin") && mast "/home/meme/meme.txt""
At this point I have run out of ideas to make this work. If this has already been answered, then my googling has just been poor and I would appreciate any links to the answer.
Thank you very much for your time.
Carlos
Additional details:
I use Ubuntu 12.04 (64-bit), RStudio version 0.97.551, and R version 3.0.2 (2013-09-25) -- "Frisbee Sailing", platform x86_64-pc-linux-gnu (64-bit).
The program I use (MAST) finds a sequence pattern in a list of letters; it is part of the MEME Suite version 4.9.1, found at http://meme.nbcr.net/meme/doc/meme-install.html, and is run from the command line. The command-line usage for mast is:
mast <motif file> <sequence file> [options]
Construct the string you want to execute with paste and feed that to system:
for(i in 1:10){
  cmd=paste("export FOO=",i," ; echo \"$FOO\" ",sep='')
  system(cmd)
}
Note the use of sep='' to stop paste putting spaces in, and the backslash-escaped quote marks in the string to preserve them.
Test before running by using print(cmd) instead of system(cmd) to make sure you are getting the right command built. Maybe do:
if(TESTING){print(cmd)}else{system(cmd)}
and set TESTING=TRUE or FALSE in R before running.
If you are going to be running more than one shell command per system call, it might be better to put them all in one shell script file and call that instead, passing parameters from R. Something like:
cmd = paste("/home/me/bin/dojob.sh ",i,i+1)
system(cmd)
and then dojob.sh is a shell script that parses the args. You'll need to learn a bit more shell scripting.
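For instance, dojob.sh might look something like this (a sketch only; the meme_$1.txt / seqs_$1.txt naming scheme is hypothetical and should be adapted to how your 1000 input files are actually named):
#!/bin/sh
# dojob.sh -- run mast on one pair of input files; $1 is the index passed from R
export PATH=/home/meme/bin:$PATH
mast "/home/meme/meme_$1.txt" "/home/meme/seqs_$1.txt" -o "/home/meme/output_$1" -comp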
