Does anyone know whether it is possible to produce binary code from IR generated with llvmlite? With LLVM itself we can simply run clang -emit-llvm -o foo.bc -c foo.c. What if I am using llvmlite?
As far as I can tell, llvmlite doesn't include a linker. You can write object code with, for example,
import llvmlite.binding as llvm

# Assumes llvm.initialize(), llvm.initialize_native_target() and
# llvm.initialize_native_asmprinter() have already been called, and that
# `module` is an llvmlite module you have already created.
target = llvm.Target.from_default_triple()
machine = target.create_target_machine()
with llvm.create_mcjit_compiler(module, machine) as mcjit:
    def on_compiled(module, objbytes):
        open('mymodule.o', 'wb').write(objbytes)  # object bytes, so binary mode
    mcjit.set_object_cache(on_compiled, lambda m: None)
    mcjit.finalize_object()
And then use your standard linker (ld, which you usually get via gcc or clang) to link the object file. LLVM 4 ships with its own linker, lld, which you could run manually, but llvmlite isn't on LLVM 4 yet and doesn't expose that functionality.
On my machine for example, I can run from bash
$ gcc -o llvmapp mymodule.o
$ ./llvmapp
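For completeness, here is a minimal end-to-end sketch of the same idea without the JIT callback, using TargetMachine.emit_object (available in recent llvmlite releases); llvm_ir is assumed to hold your textual IR, and the gcc call is the same linking step as above:

import subprocess
import llvmlite.binding as llvm

llvm.initialize()
llvm.initialize_native_target()
llvm.initialize_native_asmprinter()

mod = llvm.parse_assembly(llvm_ir)  # llvm_ir: your IR as a string (assumed)
mod.verify()

target = llvm.Target.from_default_triple()
machine = target.create_target_machine()

with open('mymodule.o', 'wb') as f:
    f.write(machine.emit_object(mod))  # compile the module to native object code

subprocess.check_call(['gcc', '-o', 'llvmapp', 'mymodule.o'])  # link with the system toolchain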
It appears the easiest solution thus far is to write all your code directly in Python, but that comes at the cost of run time, which I know some people don't care about.
Unfortunately, I have to agree with @Jimmy. It is now 2019, two years later, and I still haven't seen anything.
I'm fairly new to the world of Intel's HPC toolchain and I'm having trouble getting even a simple DPC++ application to work when gtest is used as the testing framework.
This is the CMakeLists.txt "structure" I'm following:
cmake_minimum_required(VERSION 3.14)
project(foo)
set(CMAKE_CXX_COMPILER "dpcpp")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++17 -O3 -fsycl")
# add executables
# target linked libraries
# ...
option(ENABLE_TESTS ON)
if(ENABLE_TESTS)
    FetchContent_Declare(
        googletest
        GIT_REPOSITORY https://github.com/google/googletest.git
        GIT_TAG release-1.11.0
    )
    set(gtest_force_shared_crt ON CACHE BOOL "" FORCE)
    FetchContent_MakeAvailable(googletest)
    add_subdirectory(tests)
endif()
If I remove the last block, it compiles and runs as expected, otherwise I get the following error:
CMake Error at build/_deps/googletest-src/CMakeLists.txt:10 (project):
The CMAKE_CXX_COMPILER:
dpcpp
is not a full path and was not found in the PATH.
Tell CMake where to find the compiler by setting either the environment
variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path
to the compiler, or to the compiler name if it is in the PATH.
-- Configuring incomplete, errors occurred!
See also "/home/u141905/foo/build/CMakeFiles/CMakeOutput.log".
See also "/home/u141905/foo/build/CMakeFiles/CMakeError.log".
You have changed variables that require your cache to be deleted.
Configure will be re-run and you may have to reset some variables.
The following variables have changed:
CMAKE_CXX_COMPILER= /usr/bin/c++
-- Generating done
CMake Generate step failed. Build files cannot be regenerated correctly.
make: *** [Makefile:2: all] Error 1
Please note that dpcpp is correctly installed; in fact I'm using Intel's DevCloud platform.
Setting CXX to the output of whereis dpcpp produces the same error.
The only "workaround" I found (I doubt it really is one, though) is using clang++ instead (the version from Intel's LLVM). Any help or suggestion is much appreciated, thanks in advance!
EDIT: after some more attempts, I noticed that if I set CMAKE_CXX_COMPILER just after fetching gtest, everything works fine. However, I don't understand why this happens or how it can be properly fixed.
Use the full path to the dpcpp binary when setting CMAKE_CXX_COMPILER instead of set(CMAKE_CXX_COMPILER "dpcpp"). After setting CMAKE_CXX_COMPILER to the full path ("/opt/intel/oneapi/compiler/2022.0.1/linux/bin/dpcpp"), the program runs successfully.
Please find the below CMakeLists.txt for setting the CMAKE_CXX_COMPILER:
cmake_minimum_required(VERSION 3.14)
project(foo)
set(CMAKE_CXX_COMPILER "/opt/intel/oneapi/compiler/2022.0.1/linux/bin/dpcpp")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++17 -O3 -fsycl")
# add executables
# target linked libraries
# ...
set(ENABLE_TESTS ON)
include(FetchContent)
if(ENABLE_TESTS)
    FetchContent_Declare(
        googletest
        GIT_REPOSITORY https://github.com/google/googletest.git
        GIT_TAG release-1.11.0
    )
    set(gtest_force_shared_crt ON CACHE BOOL "" FORCE)
    FetchContent_MakeAvailable(googletest)
    add_subdirectory(tests)
endif()
Thanks & Regards,
Hemanth
Did you run the source /opt/intel/oneapi/setvars.sh intel64 script? I.e., is dpcpp on your PATH before running cmake?
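For what it's worth, a minimal command-line flow on DevCloud could look like the following; the paths and the ENABLE_TESTS option name come from the snippets above, and the exact invocation is only a sketch:

source /opt/intel/oneapi/setvars.sh intel64   # puts dpcpp on PATH
cmake -S . -B build -DCMAKE_CXX_COMPILER=$(which dpcpp) -DENABLE_TESTS=ON
cmake --build build

Passing the compiler on the command line (or via the CXX environment variable) also avoids hard-coding an absolute path in CMakeLists.txt.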
The question
What is the difference between Cwd::cwd and Cwd::getcwd in Perl, generally, without regard to any specific platform? Why does Perl have both? What is the intended use, which one should I use in which scenarios? (Example use cases will be appreciated.) Does it matter? (Assuming I don’t mix them.) Does choice of either one affect portability in any way? Which one is more commonly used in modules?
Even if I read the manual as saying that, except for corner cases, cwd is `pwd` and getcwd just calls getcwd from unistd.h, what is the actual difference? That holds only on POSIX systems, anyway.
I can always read the implementation, but that tells me nothing about the meaning of those functions. Implementation details may change; the defined meaning should not. (Otherwise a breaking change occurs, which is serious business.)
What does the manual say
Quoting Perl’s Cwd module manpage:
Each of these functions are called without arguments and return the absolute path of the current working directory.
getcwd
my $cwd = getcwd();
Returns the current working directory.
Exposes the POSIX function getcwd(3) or re-implements it if it's not available.
cwd
my $cwd = cwd();
The cwd() is the most natural form for the current architecture. For most systems it is identical to `pwd` (but without the trailing line terminator).
And in the Notes section:
Actually, on Mac OS, the getcwd(), fastgetcwd() and fastcwd() functions are all aliases for the cwd() function, which, on Mac OS, calls `pwd`. Likewise, the abs_path() function is an alias for fast_abs_path()
OK, I know that on Mac OS¹ there is no difference between getcwd() and cwd(), as both actually boil down to `pwd`. But what about other platforms? (I'm especially interested in Debian Linux.)
¹ Classic Mac OS, not OS X. $^O values are MacOS and darwin for Mac OS and OS X, respectively. Thanks, @tobyink and @ikegami.
And a little meta-question: How to avoid asking similar questions for other modules with very similar functions? Is there a universal way of discovering the difference, other than digging through the implementation? (Currently, I think that if the documentation is not clear about intended use and differences, I have to ask someone more experienced or read the implementation myself.)
Generally speaking
I think the idea is that cwd() always resolves to the external, OS-specific way of getting the current working directory: running pwd on Linux, command /c cd on DOS, /usr/bin/fullpath -t on QNX, and so on (all examples are from the actual Cwd.pm). getcwd() is supposed to use the POSIX system call if it is available and fall back to cwd() if not.
Why do we have both? In the current implementation I believe exporting just getcwd() would be enough for most systems, but who knows where the logic of “if the syscall is available, use it, else run cwd()” might fail on some system (e.g. on MorphOS in Perl 5.6.1).
On Linux
On Linux, cwd() will run `/bin/pwd` (it will actually execute the binary and capture its output), while getcwd() will issue the getcwd(2) system call.
Actual effect inspected via strace
One can use strace(1) to see that in action:
Using cwd():
$ strace -f perl -MCwd -e 'cwd(); ' 2>&1 | grep execve
execve("/usr/bin/perl", ["perl", "-MCwd", "-e", "cwd(); "], [/* 27 vars */]) = 0
[pid 31276] execve("/bin/pwd", ["/bin/pwd"], [/* 27 vars */] <unfinished ...>
[pid 31276] <... execve resumed> ) = 0
Using getcwd():
$ strace -f perl -MCwd -e 'getcwd(); ' 2>&1 | grep execve
execve("/usr/bin/perl", ["perl", "-MCwd", "-e", "getcwd(); "], [/* 27 vars */]) = 0
Reading Cwd.pm source
You can take a look at the sources (Cwd.pm, e.g. on CPAN) and see that for Linux the cwd() call is mapped to _backtick_pwd which, as the name suggests, calls pwd in backticks.
Here is a snippet from Cwd.pm, with my comments:
unless ($METHOD_MAP{$^O}{cwd} or defined &cwd) {
    ...
    # some logic to find the pwd binary here; $found_pwd_cmd is set to 1 on Linux
    ...
    if( $os eq 'MacOS' || $found_pwd_cmd )
    {
        *cwd = \&_backtick_pwd; # on Linux we actually go here
    }
    else {
        *cwd = \&getcwd;
    }
}
Performance benchmark
Finally, the difference between the two is that cwd(), which spawns another binary, must be slower. We can run a rough performance test:
$ time perl -MCwd -e 'for (1..10000) { cwd(); }'
real 0m7.177s
user 0m0.380s
sys 0m1.440s
Now compare it with the system call:
$ time perl -MCwd -e 'for (1..10000) { getcwd(); }'
real 0m0.018s
user 0m0.009s
sys 0m0.008s
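If you prefer a more controlled comparison than wall-clock time, a small sketch with the core Benchmark module would look like this (the numbers will of course vary by machine):

use strict;
use warnings;
use Benchmark qw(cmpthese);
use Cwd qw(cwd getcwd);

# Run each variant for at least 2 CPU seconds and print a comparison table.
cmpthese(-2, {
    cwd    => sub { cwd() },
    getcwd => sub { getcwd() },
});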
Discussion, choice
But as you don't usually query the current working directory very often, both options will work, unless you cannot spawn any more processes for some reason (a ulimit restriction, an out-of-memory situation, etc.).
Finally, as for selecting which one to use: on Linux, I would always use getcwd(). If you are going to write a portable piece of code that has to run on some really strange platform (Linux, OS X and Windows are, of course, not on the list of strange platforms), you will need to run your own tests to select which function to use.
I am new to Frama-C. I would like to run it under Windows environments. My compiler is gcc (MinGW).
I have tried to run some examples from the Value Analysis tutorial, but I have a problem with library header files.
I've found that it's not possible to run frama-c because of the restrict keyword. It shows an error in the string.h file:
void * __cdecl memcpy(void * __restrict__ _Dst,const void * __restrict__ _Src,size_t _Size) __MINGW_ATTRIB_DEPRECATED_SEC_WARN;
When I manually add #define restrict to all *.c files in the Skein project (schneier.com/code/skein_NIST_CD_102610.zip), everything works correctly. But doing it by hand is not what I'm looking for.
The next step was to add the argument -D__restrict__:
frama-c -cpp-extra-args=-D__restrict__ -main=Init -val SHA3api_ref.c
[kernel] preprocessing with "gcc -C -E -I. -D__restrict__ SHA3api_ref.c"
../lib/gcc/i686-w64-mingw32/4.7.2/../../../../i686-w64-mingw32/include/string.h:41:[kernel] user error: syntax error
[kernel] user error: skipping file "SHA3api_ref.c" that has errors.
[kernel] Frama-C aborted because of an invalid user input.
I've also generated preprocessed *.i files, but the error is still the same:
gcc -E -D__restrict__ SHA3api_ref.c >SHA3api_ref.i
frama-c -main=Init -val SHA3api_ref.i
../lib/gcc/i686-w64-mingw32/4.7.2/../../../../i686-w64-mingw32/include/string.h:41:[kernel] user error: syntax error
[kernel] user error: skipping file "SHA3api_ref.i" that has errors.
[kernel] Frama-C aborted because of an invalid user input.
What can I do about it?
Your system headers contain non-standard syntax extensions that are not supported by Frama-C. This is normal, as the headers are often provided as part of a complete package with the compiler, so the headers and the compiler only need to work together, not to work with all the other programs that take C source code as input.
Generally speaking, you should always use the headers provided with Frama-C
instead of those from your system.
When using GCC or a compatible compiler such as Clang, this involves
passing the pre-processor the options -nostdinc and -I... where ...
stands for the place where Frama-C's headers were installed. This
location can be obtained from Frama-C with the option -print-share-path.
All in all, on a Unix system, it may look like:
frama-c -cpp-extra-args=-nostdinc -cpp-extra-args=-I`frama-c -print-share-path`/libc .....
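Equivalently, you can capture the share path in a variable first; this is the same command with the -main=Init -val options from the question added back (POSIX shell syntax, not tested under MinGW):

FRAMAC_SHARE=$(frama-c -print-share-path)
frama-c -cpp-extra-args=-nostdinc -cpp-extra-args=-I"$FRAMAC_SHARE"/libc -main=Init -val SHA3api_ref.c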
Doing the same thing with Windows and MinGW follows the same idea but sometimes involves extra trouble due to the perpetual ambiguity between \ and / as directory separators.
Recently, Frank Dordowsky has been having trouble with using a very new GCC version to pre-process C files for Frama-C. That was only when using -pp-annot, but in any case, the solution was to switch to Clang as pre-processor.
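If you do need to change the preprocessor, Frama-C also lets you override the preprocessing command entirely; a hedged example (the exact quoting and option behavior may vary between Frama-C versions) would be:

frama-c -cpp-command "clang -C -E -I." -main=Init -val SHA3api_ref.c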
I am trying to develop a plug-in for Frama-C.
I built the plug-in, which consists of several files, and then created the Makefile referencing all the files I needed.
I was able to run make, then make install, and execute the plug-in. My problem appears when I call functions from the ocamlyices library in one of my functions...
I am still able to run make and make install, but when I try to execute the plug-in I get the following error:
[kernel] warning: cannot load plug-in 'name' (incompatible with Fluorine-20130601).
[kernel] user error: option `<name>' is unknown.
use `frama-c -help' for more information.
[kernel] Frama-C aborted: invalid user input.
So it says the plug-in is incompatible once I add the calls to ocamlyices functions. Is there an option or configuration I am missing somewhere?
Thank you for your help.
The final solution looked like this:
FRAMAC_SHARE := $(shell frama-c.byte -print-path)
FRAMAC_LIBDIR := $(shell frama-c.byte -print-libpath)
PLUGIN_NAME = Fact
# All needed files
PLUGIN_CMO = ../base/basic_types concolic_search run_fact ../lib/lib
PLUGIN_DOCFLAGS = -I ../base -I ../lib -I $(YICES) -I /usr/lib/ocaml/batteries -I ../instrumentation
PLUGIN_BFLAGS = -I ../base -I ../lib -I $(YICES) -I ../instrumentation
PLUGIN_OFLAGS = -I ../base -I ../lib -I $(YICES) -I ../instrumentation
PLUGIN_EXTRA_BYTE = $(YICES)/ocamlyices.cma
PLUGIN_EXTRA_OPT = $(YICES)/ocamlyices.cmxa
include $(FRAMAC_SHARE)/Makefile.dynamic
The variable $(YICES) is defined as
export YICES="/usr/local/lib/ocaml/3.12.1/ocamlyices"
As mentioned by Anne, if your plug-in uses an external library that is not already included by Frama-C itself, you need a few more steps than for a basic plug-in, especially setting PLUGIN_EXTRA_BYTE and PLUGIN_EXTRA_OPT to the external libraries that you want to be linked to your plug-in. It might also be necessary to adapt the flags passed to the linker with PLUGIN_LINK_BFLAGS and PLUGIN_LINK_OFLAGS, but this is heavily dependent on ocamlyices itself. More information on the variables that can be used to customize the compilation of your plug-in can be found in section 5.3.3 of Frama-C's development guide.
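As an illustration only (the flag values below are hypothetical and depend entirely on how ocamlyices was built; check its documentation for the real ones), extra linker flags would be passed along these lines in the same Makefile:

# Hypothetical: only needed if the library requires extra C libraries at link time.
PLUGIN_LINK_BFLAGS = -cclib -lyices
PLUGIN_LINK_OFLAGS = -cclib -lyices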
I'm trying to apply Frama-C/Jessie to a module of a proprietary safety-critical system from our customer. The function under analysis is pretty big (around 700 uncommented lines) with a lot of conditional statements as well as complex conditions (&&, ||, etc.).
I got this error message when I ran it on a 64-bit Ubuntu VM. It appears Error 137 is related to memory size, but I'm not quite sure.
Any suggestion for how to approach this error is greatly appreciated.
[formal_verification]$ frama-c -jessie test.c
[kernel] preprocessing with "gcc -C -E -I. -dD test.c"
[jessie] Starting Jessie translation
[jessie] Producing Jessie files in subdir test.jessie
[jessie] File test.jessie/test.jc written.
[jessie] File test.jessie/test.cloc written.
[jessie] Calling Jessie tool in subdir tests.jessie
Generating Why function testFun
[jessie] Calling VCs generator.
gwhy-bin [...] why/test.why
Computation of VCs...
Killed
make: *** [test.stat] Error 137
with a lot of conditional statements as well as complex conditions (&&, ||, etc.).
You should use the so-called “fast WP” option when analyzing functions with lots of nested conditionals. Otherwise, the target does not even need to be very large to cause a blowup. (Exit code 137 means the process was killed with signal 9, i.e. 137 = 128 + 9, which together with the "Killed" line above usually points at the kernel's out-of-memory killer.)
It happens to be the example in Jessie's manual for passing options to Why (it is really a Why option):
-jessie-why-opt=<s>
give an option to Why (e.g., -fast-wp)
You would therefore use -jessie-why-opt=-fast-wp.
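Applied to the command from the question, the invocation would then look something like this (same source file as above, with the Why option added):

frama-c -jessie -jessie-why-opt=-fast-wp test.c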