Detect whether the compiler has full C++11 support from Rcpp

I am building a package using Rcpp and RcppArmadillo, and when I install it on one of my machines I get the following compiler warnings from functions which use RcppArmadillo:
WARNING: your C++ compiler is in C++11 mode, but it has incomplete
support for C++11 features; if something breaks, you get to keep all
the pieces
As far as I can tell, this doesn't break any of my code; however, it would be nice to silence these warnings if possible.
Based on another question I know I can disable C++11 support for Armadillo by adding the following macro before I include the RcppArmadillo header file:
#define ARMA_DONT_USE_CXX11
which is effectively shorthand for:
#if defined(ARMA_DONT_USE_CXX11)
#undef ARMA_USE_CXX11
#undef ARMA_USE_CXX11_RNG
#endif
What I'd like to be able to do is detect whether the compiler has full support for the C++11 extensions, and #undef the appropriate macros only if the machine's compiler doesn't.
According to the R extensions manual, this should be stored in CXX1XSTD, so I want to do something like:
#if CXX1XSTD == "-std=c++0x"
#undef ARMA_USE_CXX11
#undef ARMA_USE_CXX11_RNG
#endif
However, CXX1XSTD isn't a predefined macro in Rcpp or C++, so that doesn't work.

Briefly:
The first warning is from Armadillo. Conrad works hard to keep his code compliant; he chose to put this warning in. You did not say which compiler version you used, so I can't comment further.
The rest of your post is a little confused. There are actual test macros for compilers and versions -- a well-known source is this one at sf.net -- and we try our best to reflect that in our sources.
In short, g++ 4.8 or later (as on Linux) and clang 3.4 or 3.5 (as on Linux and OS X) are good enough.
Windows and its g++ 4.6.* have issues; that is the compiler we are given by Rtools, and there is little we can do.
CXX1XSTD is defined by R 3.1.0 or later. R tests the compiler during its own configure/build cycle and remembers the result, so when you ask for a C++11-compliant compiler, R knows whether to give you -std=c++0x or -std=c++11. In essence, you are assuming here that you know better; I think you may be wrong in that belief. But hey, if there is something we can do better than we currently do in the Rcpp / RcppArmadillo headers, then let us know.
I didn't see where you said why you turned C++11 on. Are you actually using C++11 or later features? If you aren't, the whole issue is moot. But if you are, then you need to be careful: many R installations will have old compilers (Windows, older RHEL, ...).
I suspect you'd get better responses overall if you posted on rcpp-devel.

Related

RcppEigen and Vectorization

The Eigen FAQ states that you need to enable vectorization in the compiler.
I am trying to develop an R package using RcppEigen. I would like it if the user would have the best performance without having to manually compile the package with specified flags.
What is best practice for an R package looking to enable vectorization in the Eigen library?
Do exactly what the FAQ says and set the compiler flags. You may have to turn those on from a configure script after testing what the current compiler supports -- and CRAN may still tell you that the flags are not portable.
Also, just to fix terms here: there is no "library" in our case. RcppEigen only uses headers from Eigen, which is designed as a templated, header-only package.
I'm a beginner too, and my many hours spent trying to understand Rcpp may be relevant to you, @jds. I wanted to enable vectorisation on my Dell Precision M2800 with AVX architecture, so I added the -mavx2 flag to my configure file using the following chunk three times:
CXXFLAGS= -O3 -std=c++11 -Wall -mavx2
This code change sped up my code (a series of double-nested for loops) from 4.1s to 1.4s!
Find out how to amend the compiler flags used by sourceCpp by building a skeleton package with configure and clean files to create your Makevars file, as beautifully demonstrated by @nrussell in How to change and set Rcpp compile arguments.

How to compile opencl-kernel-file(.cl) to LLVM IR

This question is related to LLVM/clang.
I already know how to compile an OpenCL kernel file (.cl) using the OpenCL API (clBuildProgram() and clGetProgramBuildInfo()).
My question is this:
How do I compile an OpenCL kernel file (.cl) to LLVM IR with OpenCL 1.2 or higher?
In other words, how do I compile an OpenCL kernel file (.cl) to LLVM IR without libclc?
I have tried various methods to get the LLVM IR of an OpenCL kernel file.
I first followed the Clang user manual (https://clang.llvm.org/docs/UsersManual.html#opencl-features), but it did not work.
Secondly, I found a way to use libclc.
The commands were:
clang++ -emit-llvm -c -target nvptx64-nvidia-nvcl -Dcl_clang_storage_class_specifiers -include /usr/local/include/clc/clc.h -fpack-struct=64 -o "$#".bc "$#"
llvm-link "$#".bc /usr/local/lib/clc/nvptx64--nvidiacl.bc -o "$#".linked.bc
llc -mcpu=sm_52 -march=nvptx64 "$#".linked.bc -o "$#".nvptx.s
This method worked fine, but since libclc was built against the OpenCL 1.1 specification, it cannot be used with OpenCL 1.2 or later code, such as code that uses printf.
Also, this method uses libclc, which implements the OpenCL built-in functions as ordinary functions. You can see in the resulting assembly (PTX) that the binary makes an actual function call instead of inlining the operation. I am concerned that this will affect GPU behavior and performance, such as execution time.
So now I am looking for a way to replace compilation using libclc.
As a last resort, I'm considering using libclc with the NVPTX backend and AMDGPU backend of LLVM.
But if there is already another way, I want to use it.
(I expect that the OpenCL front-end I have not found yet exists in clang)
My program's scenario is:
1. There is an OpenCL kernel source file (.cl).
2. Compile the file to LLVM IR.
3. Run IR-level processing on the IR.
4. Compile the IR (using llc) to a binary for each GPU target (nvptx, amdgcn, ...).
5. Using the binary, run the host (.c or .cpp with the OpenCL library) with clCreateProgramWithBinary().
Currently, when I compile the kernel source file to LLVM IR, I have to include the libclc header (the -include option in the first command above) to compile the built-in functions, and I have to link the libclc libraries before compiling the IR to a binary.
My environments are below:
GTX960
- NVIDIA's Binary appears in nvptx format
- I'm using sm_52 nvptx for my gpu.
Ubuntu Linux 16.04 LTS
LLVM/Clang 5.0.0
- If there is another way, I am willing to change the LLVM version.
Thanks in advance!
Clang 9 (and up) can compile OpenCL kernels written in the OpenCL C language. You can tell Clang to emit LLVM-IR by passing the -emit-llvm flag (add -S to output the IR in text rather than in bytecode format), and specify which version of the OpenCL standard using e.g. -cl-std=CL2.0. Clang currently supports up to OpenCL 2.0.
By default, Clang will not add the standard OpenCL headers, so if your kernel uses any of the OpenCL built-in functions you may see an error like the following:
clang-9 -c -x cl -emit-llvm -S -cl-std=CL2.0 my_kernel.cl -o my_kernel.ll
my_kernel.cl:17:12: error: implicit declaration of function 'get_global_id' is invalid in OpenCL
int i = get_global_id(0);
^
1 error generated.
You can tell Clang to include the standard OpenCL headers by passing the -finclude-default-header flag to the Clang frontend, e.g.
clang-9 -c -x cl -emit-llvm -S -cl-std=CL2.0 -Xclang -finclude-default-header my_kernel.cl -o my_kernel.ll
(I expect that the OpenCL front-end I have not found yet exists in clang)
There is an OpenCL front-end in Clang -- and you're using it, otherwise you couldn't compile a single line of OpenCL with Clang. The front-end is the part of Clang that recognizes the OpenCL language. There is no OpenCL backend of any kind in LLVM; that's not the job of LLVM. It's the job of the various OpenCL implementations to provide the proper libraries. Clang+LLVM just recognizes the language and compiles it to bitcode and machine binaries; that's all it does.
in the assembly(ptx) of result opencl binary, it goes straight to the function call instead of converting it to an inline assembly.
You could try linking to a different library instead of libclc, if you find one. Perhaps NVIDIA's CUDA has some bitcode libraries somewhere, but then again there are licensing issues... BTW, are you 100% sure you need LLVM IR? Getting OpenCL binaries using the OpenCL runtime, or using SPIR-V, might get you faster binaries and would certainly be less painful to work with. Even if you manage to get a nice LLVM IR, you'll need some runtime which actually accepts it (I could be wrong, but I doubt the proprietary AMD/NVIDIA OpenCL implementations will just accept random LLVM IR as input).
Clang does not provide a standard CL declaration header file (for example, C's stdio.h), which is why you're getting "undefined type float" and whatnot.
If you get one such header, you can then mark it as implicit include using "clang -include cl.h -x cl [your filename here]"
One such declaration header can be retrieved from the reference OpenCL compiler implementation at
https://github.com/KhronosGroup/SPIR-Tools/blob/master/headers/opencl_spir.h
And by the way, consider using this compiler which generates SPIR (albeit 1.0) which can be fed into OpenCL drivers as input.

gfortran -fcray-pointer, best way to use pointers in fortran?

I am not a fan of NEEDING numerous compiler flags to get a program to compile. I generally see it as poor programming practice.
I have some old Fortran code that uses the POINTER statement, and when using gfortran to compile these files it responds with the error:
Error: Cray pointer declaration at (1) requires -fcray-pointer flag
Here is a sample of what is causing it:
COMPLEX*16 matrix(1)
POINTER (PM, matrix)
COMMON /M_SHARED/ PM
If I use the Intel compiler then there is no problem doing just ifort -O3 myprogram.f, but I don't want to be dependent on the Intel compiler; I would prefer to use gfortran, which is free.
I would like to know how far behind the times my example is, and how it should be written properly. Or should I just use the -fcray-pointer flag?

Pocl `make check` fails all tests

I'm trying to set up pocl-0.11 on an ARM (llvm-3.3). I used ./configure --enable-debug --disable-icd --enable-testsuites=all (I'd like to get pocl to run without ICD loader as a first step).
During configure I got a couple of warnings about disabled tests due to missing glut, libDSL, boostlib, etc. Since the warnings 'only' concern some testsuites, I assume the configure is fine and I guess some basic tests will still be enabled!?
Furthermore I get the output:
checking LLC host CPU... cortex-a9
configure: using the ARM optimized kernel lib for the native device
<stdin>:1:19: error: 'test' declared as an array with a negative size
constant int test[sizeof(long)==8?1:-1]={1}; (Is that relevant? I don't really know what to do with this message.)
Eventually configure succeeds and make & make install run without any hint of a problem.
make check then fails all tests, even: check for pocl version FAILED (testsuite.at:29)
The 001/testsuite.log file indicates a linker problem!?
Do you have any idea?
Am I missing a configure flag or an environment variable? I didn't touch --prefix or any other paths.
LLVM 3.3 is quite old and its support will be dropped after the next pocl release. The configure error message you see probably means it fails to detect your CPU features correctly, but the testsuite error indicates that not all LLVM symbols are properly linked in. You can try fixing this by using a shared LLVM library, but I really suggest you upgrade LLVM. The upcoming 3.7 should work already; it fixes several issues and includes better OpenCL C support in Clang.

Setting up "configure" for openMP in R

I have an R package which is easily sped up by using OpenMP. If your compiler supports it then you get the win, if it doesn't then the pragmas are ignored and you get one core.
My problem is how to get the package build system to use the right compiler options and libraries. Currently I have:
PKG_CPPFLAGS=-fopenmp
PKG_LIBS=-fopenmp
hardcoded into src/Makevars on my machine, and this builds it with OpenMP support. But it produces a warning about non-standard compiler flags on check, and will probably fail hard on a machine with no openMP capabilities.
The solution seems to be to use configure and autoconf. There's some information around here:
http://cran.r-project.org/doc/manuals/R-exts.html#Using-Makevars
including a complex example to compile in odbc functionality. But I can't see how to begin tweaking that to check for openmp and libgomp.
None of the R packages I've looked at that talk about using openMP seem to have this set up either.
So does anyone have a walkthrough for setting up an R package with OpenMP?
[EDIT]
I may have cracked this now. I have a configure.ac script and a Makevars.in with #FOO# substitutions for the compiler options. But now I'm not sure of the workflow. Is it:
Run "autoconf configure.in > configure; chmod 755 configure" if I change the configure.in file.
Do a package build.
On package install, the system runs ./configure for me and creates Makevars from Makevars.in
But just to be clear, "autoconf configure.in > configure" doesn't run on package install -- it's purely a developer step to create the configure script that is distributed -- amirite?
Methinks you have the library option wrong; please try:
## -- compiling for OpenMP
PKG_CXXFLAGS=-fopenmp
##
## -- linking for OpenMP
PKG_LIBS= -fopenmp -lgomp
In other words, -lgomp gets you the OpenMP library linked. And I presume you know that this library is not part of the popular Rtools kit for Windows. On a modern Linux you should be fine.
In an unreleased test package I have here, I also add the following to PKG_LIBS, but that is mostly due to my use of Rcpp:
$(shell $(R_HOME)/bin/Rscript -e "Rcpp:::LdFlags()") \
$(LAPACK_LIBS) $(BLAS_LIBS) $(FLIBS)
Lastly, I think the autoconf business is not really needed unless you feel you need to test for OpenMP via configure.
Edit: SpacedMan is correct. Per the beginning of the libgomp-4.4 manual:
1 Enabling OpenMP
To activate the OpenMP extensions for C/C++ and Fortran, the compile-time flag `-fopenmp' must be specified. This enables the OpenMP directive [...] The flag also arranges for automatic linking of the OpenMP runtime library.
So I stand corrected. Seems that it doesn't hurt to manually add what would get added anyway, just for clarity...
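One more data point: current versions of Writing R Extensions provide make variables for exactly this, so a portable src/Makevars can let R supply whatever OpenMP flags its own configure run detected (a sketch; SHLIB_OPENMP_CXXFLAGS is defined by R itself and expands to nothing on compilers without OpenMP support, so the package still builds serially there):

```make
## src/Makevars -- let R supply the OpenMP flags it detected at build time
PKG_CXXFLAGS = $(SHLIB_OPENMP_CXXFLAGS)
PKG_LIBS = $(SHLIB_OPENMP_CXXFLAGS)
```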
Just addressing your question regarding the usage of autoconf: no, you do not want to run autoconf with any arguments, nor should you redirect its output. You are correct that running autoconf to build the configure script is something the package maintainer does, and the resulting configure script is distributed. Normally, to generate the configure script from configure.ac (older packages use the name configure.in, but that name has been discouraged for several years), the developer simply runs autoconf with no arguments. Before running autoconf, it may be necessary to run aclocal, autoheader, libtoolize, etc. There is also a tool, autoreconf, which simplifies the process and invokes all the required programs in the correct order. It is now more typical to run autoreconf instead of autoconf.
