Requiring OpenMP availability for use in an Rcpp package

I have prepared an R package using the RcppArmadillo and OpenMP libraries and the following commands:
RcppArmadillo.package.skeleton("mypackage")
compileAttributes(verbose=TRUE)
Also, in the DESCRIPTION file I added:
Imports: Rcpp (>= 0.12.8), RcppArmadillo
LinkingTo: Rcpp, RcppArmadillo
Depends: RcppArmadillo
and in the NAMESPACE file I added:
import(RcppArmadillo)
importFrom(Rcpp, evalCpp)
Then I ran the following commands in cmd:
R CMD build mypackage
R CMD INSTALL mypackage.tar.gz
I built and installed the package on my computer and I can use it now. But my colleagues and friends are not able to install the package. The error messages are about the RcppArmadillo and OpenMP libraries.
For instance:
fatal error: 'omp.h' file not found
Does anyone have experience with this situation? What settings do I need in my package to solve this problem?

Congratulations! You've most likely stumbled across macOS' lack of OpenMP support. This has been documented in the Rcpp FAQ as entry 2.10.3.
Defensive coding
The reason the error appears is that you did not protect the OpenMP code appropriately, e.g.
Header inclusions are protected with:
#ifdef _OPENMP
#include <omp.h>
#endif
Code is protected with:
#ifdef _OPENMP
// multithreaded OpenMP version of code
#else
// single-threaded version of code
#endif
This assumes you are not just using the #pragma omp preprocessor directives but also the more in-depth omp function calls. If it is the former (pragmas only), then the code protection is not important, as unknown preprocessor directives are simply discarded.
(For those long-time users of the above macro schemes coming here, please note that as of R 3.4.0 the SUPPORT_OPENMP definition was removed completely in favor of _OPENMP.)
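For instance, a small sketch (the function names below are made up, not from the question's package) showing both situations:
#include <vector>
#ifdef _OPENMP
#include <omp.h>
#endif

// A pragma alone is simply ignored by a compiler without OpenMP,
// so it needs no guard (at most you get an "unknown pragma" warning).
double vec_sum(const std::vector<double>& x) {
    double total = 0.0;
#pragma omp parallel for reduction(+:total)
    for (long i = 0; i < static_cast<long>(x.size()); ++i)
        total += x[i];
    return total;
}

// A runtime API call only exists when OpenMP is available,
// so it must be guarded and given a single-threaded fallback.
int threads_in_use() {
#ifdef _OPENMP
    return omp_get_max_threads();
#else
    return 1;
#endif
}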
Creating a package requirement for OpenMP via configure.ac
However, the above is just good defensive coding. If your package requires a specific feature, then it may be time to consider using an autoconf file called configure.ac to generate a configure script.
Place the configure.ac in the top level of your package.
packagename/
|- data/
|- inst/
|- man/
|- src/
|   |- Makevars
|   |- HelloWorld.cpp
|- DESCRIPTION
|- NAMESPACE
|- configure.ac
|- configure
The configure.ac should contain the following:
AC_PREREQ(2.61)
AC_INIT(your_package_name_here, m4_esyscmd_s([awk -e '/^Version:/ {print $2}' DESCRIPTION]))
AC_COPYRIGHT(Copyright (C) 2017 your name?)
## Determine Install Location of R
: ${R_HOME=$(R RHOME)}
if test -z "${R_HOME}"; then
AC_MSG_ERROR([Could not determine R_HOME.])
fi
## Setup RBin
RBIN="${R_HOME}/bin/R"
CXX=`"${RBIN}" CMD config CXX`
CPPFLAGS=`"${RBIN}" CMD config CPPFLAGS`
CXXFLAGS=`"${RBIN}" CMD config CXXFLAGS`
## Package Requires C++
AC_LANG(C++)
AC_REQUIRE_CPP
## Compiler flag check
AC_PROG_CXX
## Borrowed from BHC/imager/icd/randomForest
# Check for OpenMP
AC_OPENMP
# since some systems have broken OMP libraries
# we also check that the actual package will work
ac_pkg_openmp=no
if test -n "${OPENMP_CFLAGS}"; then
AC_MSG_CHECKING([OpenMP detected, checking if viable for package use])
AC_LANG_CONFTEST([AC_LANG_PROGRAM([[#include <omp.h>]], [[ return omp_get_num_threads (); ]])])
"$RBIN" CMD SHLIB conftest.c 1>&AS_MESSAGE_LOG_FD 2>&AS_MESSAGE_LOG_FD && "$RBIN" --vanilla -q -e "dyn.load(paste('conftest',.Platform\$dynlib.ext,sep=''))" 1>&AS_MESSAGE_LOG_FD 2>&AS_MESSAGE_LOG_FD && ac_pkg_openmp=yes
AC_MSG_RESULT([${ac_pkg_openmp}])
fi
# if ${ac_pkg_openmp} = "yes" then we have OMP, otherwise it will be "no"
if test "${ac_pkg_openmp}" = no; then
AC_MSG_WARN([No OpenMP support. If using GCC, upgrade to >= 4.2. If using clang, upgrade to >= 3.8.0])
AC_MSG_ERROR([Please use a different compiler.])
fi
# Fin
AC_OUTPUT
To generate the configure script, run:
autoconf
Once this is done, you will need to rebuild your package. Note that you may need to install autoconf on Windows, and on macOS you will likely need to install it via Homebrew.
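Independently of the configure check, the portable way to pass the OpenMP flags is through the macros R itself defines, so a minimal src/Makevars could look like this (a sketch; the BLAS/LAPACK entries are the usual RcppArmadillo additions):
# use the OpenMP flags R was configured with (empty on compilers without OpenMP)
PKG_CXXFLAGS = $(SHLIB_OPENMP_CXXFLAGS)
PKG_LIBS = $(SHLIB_OPENMP_CXXFLAGS) $(LAPACK_LIBS) $(BLAS_LIBS) $(FLIBS)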
Helping users get a viable OpenMP compiler
Now, you may want to ensure your colleagues are able to get the speedup from your OpenMP-enabled code. To do so, they must enable OpenMP by shifting away from the default system compiler to either a "true" gcc or an OpenMP-enabled clang compiler.
Instructions for both on macOS are given here:
http://thecoatlessprofessor.com/programming/openmp-in-r-on-os-x/

Related

R package requiring the 'libquadmath' library

I made an R package which uses Rcpp and which requires the libquadmath library (to use Boost's multiprecision numbers). On my personal laptop (Ubuntu 18.04), it works "as is". On win-builder it works by setting PKG_LIBS = -lquadmath or PKG_LIBS = $(FLIBS) in the Makevars file. But I also checked on r-hub with these settings, and for the Fedora Linux distribution (R-devel, clang, gfortran) I get a failure.
This failure is:
/home/docker/R/BH/include/boost/multiprecision/float128.hpp:40:10: fatal error: 'quadmath.h' file not found
So I fear that my package will not pass the CRAN checks. What is the way to go?
You write that you set "PKG_LIBS = -lquadmath or PKG_LIBS = $(FLIBS)". Those are linker instructions.
You write that you get fatal error: 'quadmath.h' file not found. That is a compiler error.
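As a general illustration (the paths below are made up, not taken from your setup), the two kinds of flags live in different Makevars variables:
PKG_CPPFLAGS = -I/opt/local/include      # preprocessor/compiler: where quadmath.h would have to be found
PKG_LIBS = -L/opt/local/lib -lquadmath   # linker: where the library lives and what to link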
Now, the error comes from float128.hpp which happens to be in a package I maintain, so I took a quick look:
#if defined(BOOST_MP_USE_FLOAT128)
extern "C" {
#include <quadmath.h>
}
So you could suppress the inclusion by ensuring BOOST_MP_USE_FLOAT128 is not defined. Other than that, I would recommend looking at the Boost documentation for the multiprecision library. They may have a hint or two.
Edit: I took a quick peep at the multiprecision documentation but didn't see any leads. For other Boost libraries I have often started from some of the example but I am less familiar with this one.
Edit 2: Your example is also not exactly reproducible. I run Ubuntu here too, and the Boost float128.cpp example works fine on my box via g++ -o fl128 fl128.cpp -lquadmath (when saved as fl128.cpp). You may need to do some discovery in a configure script to see why the other Linux systems at R-hub fail.
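Such discovery could be a simple header check in configure.ac, roughly like the sketch below (the substituted variable QUADMATH_DEFS and the define MYPKG_NO_FLOAT128 are placeholders you would wire into your own src/Makevars.in):
AC_PREREQ([2.61])
AC_INIT([mypackage], [0.1.0])
AC_PROG_CXX
AC_LANG(C++)
## can the compiler find quadmath.h? if not, pass a define that disables float128 use
AC_CHECK_HEADER([quadmath.h],
                [QUADMATH_DEFS=""],
                [QUADMATH_DEFS="-DMYPKG_NO_FLOAT128"])
AC_SUBST([QUADMATH_DEFS])
AC_CONFIG_FILES([src/Makevars])
AC_OUTPUT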

Install OpenCV 3.4.1 and Qt 5.10 with CMake

I installed the latest Qt 5.10 and OpenCV 3.4.1, but I couldn't build the library for this version of Qt with CMake. Can anybody help me do this on my Windows 10 64-bit machine, please?
I also tried following this video:
https://www.youtube.com/watch?v=ZOSu-2Oju-A
and at the cmd step I get the result shown in the linked picture.
I also carefully followed all the steps in this link ( https://wiki.qt.io/How_to_setup_Qt_and_openCV_on_Windows ), but afterwards I didn't find the bin folder in opencv-build. My OS is Windows 10 64-bit. Thanks for the help.
Note: I'm basing my answer more on [SO]: openCV mingw-32 error in cmd (the tool versions mentioned in the posted .pdf).
1. Preliminary considerations
At the beginning, I tried to build it using what I already had on my machine (Win 10):
CMake 3.6.4 (bundled with Android Studio - which I hadn't updated for months, BTW), command line only - as there's no GUI
MinGW 7.2.0 (that I built on my machine for a different task)
g++ 7.2.0
OpenCV 3.4.2 (downloaded from [GitHub]: opencv/opencv - opencv-3.4.2.zip, specifically for this task)
The build process passed this stage (well, it failed somewhere below this point, I didn't check why exactly).
Anyway, I thought that the build env that I used and the suggested one were too far apart, so I:
Downloaded the CMake 3.6.3 Win x64 bundle (which contains cmake-gui - as the command line can be a pain, especially for those who are not used to it)
Used MinGW 5.3.0 (part of my Qt installation)
g++ 5.3.0
The problem at its core (as specified in the comments, or by Googling the error) is that the g++ compiler doesn't use the C++11 standard by default (and the protobuf source code requires it). So I did a little test: I pasted the faulty code into a file (and added a dummy main), and tried it with the 2 MinGW installations.
code.cpp:
template <typename char_type>
bool null_or_empty(const char_type *s) {
return (s == nullptr) || (*s == 0);
}
int main() {
return 0;
}
Output:
[cfati#CFATI-5510-0:e:\Work\Dev\StackOverflow\q049459395]> sopr.bat
*** Set shorter prompt to better fit when pasted in StackOverflow (or other) pages ***
[prompt]> dir /b
build
build.bat
cmake-gui.exe - Shortcut.lnk
code.cpp
src
[prompt]> set _PATH=%PATH%
[prompt]> set PATH=%_PATH%;c:\Install\x64\MinGW32\MinGW32\7.2.0-posix-seh-rt_v5-rev1\mingw64\bin
[prompt]> "c:\Install\x64\MinGW32\MinGW32\7.2.0-posix-seh-rt_v5-rev1\mingw64\bin\g++" code.cpp
[prompt]> "c:\Install\x64\MinGW32\MinGW32\7.2.0-posix-seh-rt_v5-rev1\mingw64\bin\g++" --version
g++ (x86_64-posix-seh-rev1, Built by MinGW-W64 project) 7.2.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
[prompt]> dir /b
a.exe
build
build.bat
cmake-gui.exe - Shortcut.lnk
code.cpp
src
[prompt]> del /q a.exe
[prompt]> set PATH=%_PATH%;c:\Install\Qt\Qt\Tools\mingw530_32\bin
[prompt]> "c:\Install\Qt\Qt\Tools\mingw530_32\bin\g++" code.cpp
code.cpp: In function 'bool null_or_empty(const char_type*)':
code.cpp:3:17: error: 'nullptr' was not declared in this scope
return (s == nullptr) || (*s == 0);
^
[prompt]> "c:\Install\Qt\Qt\Tools\mingw530_32\bin\g++" --version
g++ (i686-posix-dwarf-rev0, Built by MinGW-W64 project) 5.3.0
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
[prompt]> "c:\Install\Qt\Qt\Tools\mingw530_32\bin\g++" code.cpp -std=c++0x
[prompt]>
As seen, the older g++ needs -std=c++0x explicitly.
Following the build steps, I got the same error as in the image below (I launched mingw32-make directly in protobuf's dir, to skip all the other stuff built before it).
2. Solutions
Both are done at cmake-gui level. After setting the paths:
Hit "Configure"
Do the required variable changes
Hit "Generate"
Launch mingw32-make from console, in the build dir
Notes:
Since I'm very far from being a CMake expert, before doing this I emptied the build dir to make sure there was nothing left from a previous build (of course, the drawback is that everything is built again, which usually takes a long time)
Since I didn't enable parallel build, I didn't wait for the full build to finish (as it takes forever), but just checked that it passes this point
2.1 Force C++11 standard
Click "Add Entry" and set [CMake 3.1]: CXX_STANDARD:
And below, notice that the build passed point where it was failing before:
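If you prefer the command line over cmake-gui, the rough equivalent would be the following (the source path is a placeholder; CMAKE_CXX_STANDARD needs CMake >= 3.1):
cmake -G "MinGW Makefiles" -DCMAKE_CXX_STANDARD=11 path\to\opencv-3.4.1\sources
mingw32-make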
2.2 Skip Protobuf build
Notes:
In the posted video, there's no Protobuf build attempted (probably it's not in that bundle?), which is why this error didn't pop up
I don't know what functionality will be missing from the final build, as I don't know what Protobuf is used for
I consider this more of a workaround
Search for Protobuf-related variables, and uncheck any that are in the BUILD or WITH groups (the blue lines).
Again, the effect is shown below - the Protobuf build no longer takes place (it would come just after libIlmImf).
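Again, roughly the command-line equivalent (variable names may differ slightly between OpenCV versions; the source path is a placeholder):
cmake -G "MinGW Makefiles" -DBUILD_PROTOBUF=OFF -DWITH_PROTOBUF=OFF path\to\opencv-3.4.1\sources
mingw32-make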

Compiling C++ code for R (CRAN) packages on Solaris

I am a little bit confused about how to efficiently prepare an R package so that it is compatible across all required system platforms. This is needed so that the new version of the package will be accepted by CRAN. The main difficulty comes from compiling an external C++ shared library, and optionally a CUDA version if the compiler is available. To support this flow I've created a specific Makefile, unfortunately using GNU extensions. It works fine on Linux, OSX, and when executed manually via gmake on Solaris. The relevant part is here:
# Checking whether nvcc compiler is available
NVCC_TEST = $(shell basename $(shell which nvcc 2> /dev/null)"")
ifeq ($(NVCC_TEST),nvcc)
ALL_LIBS += libcucubes_gpu.so
ALL_OBJS += $(GPU_OBJS)
ALL_FLAGS += $(GPU_FLAGS)
else
ALL_OBJS += gpu_fallback.o
endif
It turns out that, when running R CMD INSTALL (...) on Solaris, the installation fails with something like this:
make: Fatal error in reader: Makefile, line 39: Unexpected end of line seen
ERROR: compilation failed for package 'libcucubes'
As it turns out, this is caused by the fact that Solaris' version of make is executed instead of the GNU-compatible gmake (I've tested that the latter works fine), even though it is available. My question is whether there is any simple way to force the usage of gmake here, for the R package build. In general I know I could use autotools to solve compatibility issues during installation, but that seems to bring too much complexity for this simple case. Any advice will be really appreciated, thanks!
If you can't get your build process to use gmake instead of Solaris's pure POSIX make, you can use this hack:
Make a dedicated directory for this hack: mkdir $HOME/make_hack
Softlink gmake as make in that directory: ln -s /path/to/gmake $HOME/make_hack/make
Set your PATH: PATH=$HOME/make_hack:$PATH
Now, run your build process using that PATH, and it should use gmake. Hopefully it just uses make from its PATH environment variable and not some hardcoded full path.
Yeah, it's a hack. But it's probably a lot easier than modifying the build process to use gmake instead of make.
From Writing R Extensions:
If you really must require GNU make, declare it in the DESCRIPTION file by
SystemRequirements: GNU make
and ensure that you use the value of the environment variable MAKE (and not just make) in your scripts.
configure scripts are the preferred solution though. BTW, in general a Makevars file is also preferred over a full Makefile.
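A small sketch of that advice: after declaring SystemRequirements: GNU make, any shell script in the package (e.g. configure) should invoke the make named by the MAKE environment variable rather than a bare make (the makefile name below is a placeholder):
: ${MAKE=make}                          # use $MAKE as set by R, falling back to plain make
"${MAKE}" -C src -f Makefile.custom all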

Setting up "configure" for openMP in R

I have an R package which is easily sped up by using OpenMP. If your compiler supports it, then you get the win; if it doesn't, the pragmas are ignored and you get one core.
My problem is how to get the package build system to use the right compiler options and libraries. Currently I have:
PKG_CPPFLAGS=-fopenmp
PKG_LIBS=-fopenmp
hardcoded into src/Makevars on my machine, and this builds it with OpenMP support. But it produces a warning about non-standard compiler flags on check, and will probably fail hard on a machine with no OpenMP capabilities.
The solution seems to be to use configure and autoconf. There's some information around here:
http://cran.r-project.org/doc/manuals/R-exts.html#Using-Makevars
including a complex example to compile in ODBC functionality. But I can't see how to begin tweaking that to check for OpenMP and libgomp.
None of the R packages I've looked at that talk about using openMP seem to have this set up either.
So does anyone have a walkthrough for setting up an R package with OpenMP?
[EDIT]
I may have cracked this now. I have a configure.ac script and a Makevars.in with #FOO# substitutions for the compiler options. But now I'm not sure of the workflow. Is it:
Run "autoconf configure.in > configure; chmod 755 configure" if I change the configure.in file.
Do a package build.
On package install, the system runs ./configure for me and creates Makevars from Makevars.in
But just to be clear, "autoconf configure.in > configure" doesn't run on package install - it's purely a developer process to create the configure script that is distributed - amirite?
Methinks you have the library option wrong, please try
## -- compiling for OpenMP
PKG_CXXFLAGS=-fopenmp
##
## -- linking for OpenMP
PKG_LIBS= -fopenmp -lgomp
In other words, -lgomp gets you the OpenMP library linked. And I presume you know that this library is not part of the popular Rtools kit for Windows. On a modern Linux you should be fine.
In an unreleased test package I have here, I also add the following to PKG_LIBS, but that is mostly due to my use of Rcpp:
$(shell $(R_HOME)/bin/Rscript -e "Rcpp:::LdFlags()") \
$(LAPACK_LIBS) $(BLAS_LIBS) $(FLIBS)
Lastly, I think the autoconf business is not really needed unless you feel you need to test for OpenMP via configure.
Edit: SpacedMan is correct. Per the beginning of the libgomp-4.4 manual:
1 Enabling OpenMP
To activate the OpenMP extensions for C/C++ and Fortran, the compile-time flag `-fopenmp' must be specified. This enables the OpenMP directive [...] The flag also arranges for automatic linking of the OpenMP runtime library.
So I stand corrected. Seems that it doesn't hurt to manually add what would get added anyway, just for clarity...
Just addressing your question regarding the usage of autoconf--no, you do not want to run autoconf with any arguments, nor should you redirect its output. You are correct that running autoconf to build the configure script is something that the package maintainer does, and the resulting configure script is distributed. Normally, to generate the configure script from configure.ac (older packages use the name configure.in, but that name has been discouraged for several years), the developer simply runs autoconf with no arguments. Before running autoconf, it is necessary to run aclocal, autoheader, libtoolize, etc... There is also a tool (autoreconf) which simplifies the process and invokes all the required programs in the correct order. It is now more typical to run autoreconf instead of autoconf.
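So the developer-side workflow is roughly the following sketch (the package and tarball names are placeholders):
cd mypackage && autoreconf --install   # developer step: regenerate configure from configure.ac
cd .. && R CMD build mypackage         # the generated configure script ships inside the source tarball
R CMD INSTALL mypackage_1.0.tar.gz     # install time: R runs ./configure, creating Makevars from Makevars.in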

Building and installing an R package library with a jnilib extension

I'm building an R package and need to build a JNI library for OSX (called myPackage.jnilib) as part of my build process, and then have R's automatic installation mechanisms put it inside the libs directory of my package.
The problem is that R's default is to try and build an object called myPackage.so. I'd like to be able to customize this but can't see how.
I can get part of the way by subverting R's mechanisms using a phony "all" target in Makevars (described here) and then copying the file to the inst directory of my package. This is OK for my own local uses but generates headaches when trying to build universal binaries and isn't very portable. I'm currently preparing the package for CRAN so this method isn't likely to work.
I can see two potential solutions but haven't got either to work yet
Copy my library manually to the libs directory of my package during installation. Since this directory is created on the fly, how would I find out what it is from within Makevars or a configure script?
The best solution: Tell R CMD SHLIB the name of my output file so I can use R's normal package mechanisms and let it copy the file to the right directory.
In case anyone else encounters this problem I'm posting my own workaround here.
I define targets in my Makevars and copy the libraries directly (i.e. option 1 above). The variable R_LIBRARY_DIR provides the temporary location where the package is being built.
My Makevars now looks something like this
OBJECTS =
LIBSINSTDIR=$(R_LIBRARY_DIR)/myPackage/libs/
#ARCHFLAG is set in the configure script to i386 or ppc as appropriate
JNIINSTDIR=$(LIBSINSTDIR)/#ARCHFLAG#/
.PHONY: all
all: $(SHLIB) jnilib
jnilib: object1.o object2.o
$(CXX) -bundle $(JAVA_LIBS) $(JAVA_CPPFLAGS) -o libmyPackage.jnilib object1.o object2.o
mkdir -p $(JNIINSTDIR)
cp libmyPackage.jnilib $(JNIINSTDIR)
