How to use Combinatorial BLAS? - linear-algebra

I know how to install and use BLAS for C (cblas.h), but I do not know how to use Combinatorial BLAS (CombBLAS).
I am using CentOS 6.5.
Here is what I have installed successfully on my server:
# yum groupinstall "Development Tools"
# yum install openmpi openmpi-devel
# yum install atlas atlas-devel
# yum install gnuplot
# yum install lapack lapack-devel
# yum install boost boost-devel
# yum install cmake
And this is how I use BLAS in my code. I create a file helloblas.c:
#include <stdio.h>
#include <stdlib.h>
#include <cblas.h>

int main(void)
{
    int incx = 1;
    int incy = 1;
    double x[3] = {1, 2, 3};
    double y[3] = {3, 4, 5};
    double result = cblas_ddot(3, x, incx, y, incy);
    printf("Result = %lf\n", result);
    return 0;
}
Then I compile and execute it successfully using these commands:
$ gcc helloblas.c -L/usr/lib64/atlas -lcblas
$ ./a.out
I then proceeded to install Combinatorial BLAS (CombBLAS) using these steps:
# wget http://gauss.cs.ucsb.edu/~aydin/CombBLAS_FILES/CombBLAS_beta_14_0.tgz
# wget http://gauss.cs.ucsb.edu/~aydin/CombBLAS_FILES/testdata_combblas1.2.1.tgz
# tar zxvf CombBLAS_beta_14_0.tgz
# cp testdata_combblas1.2.1.tgz CombBLAS
# cd CombBLAS
# tar -xzvf testdata_combblas1.2.1.tgz
# module add openmpi-x86_64
# cmake .
# make
The CombBLAS installation appeared to be successful, since there were no error messages.
I have checked /usr/include; it contains no CombBLAS headers.
The CombBLAS header (CombBLAS.h) is located only in the source folder.
I have three questions:
How do I include and use the CombBLAS library in my code? Does anyone have example source code showing how to use CombBLAS? I mean a simple one, just like helloblas.c above.
I installed CombBLAS on my server so that other users can use it. But since there is no CombBLAS.h in /usr/include, how can they use it? Obviously, normal users are not permitted to access the /root directory.
Is there any user documentation for CombBLAS?
Thank you very much.

One of the authors of CombBLAS has helped me answer my question.
There are several code samples inside the Applications folder. For example, we can run one with these commands (it runs BFS on a graph with 2^10 vertices):
$ cd Applications/
$ make tdbfs
$ ./tdbfs Force 10
There are other executables that can be run: betwcent (Betweenness Centrality), dobfs (Direction-Optimizing Breadth-First Search), fbfs (Filtered Breadth-First Search), fmis, and mcl.
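For reference, here is a minimal stand-alone program built against the headers in the CombBLAS source tree. This is an untested sketch of my own; the class and method names (SpParMat, SpDCCols, ReadDistribute) are taken from the shipped Applications examples and may differ between CombBLAS releases, and "input.mtx" is only a placeholder Matrix Market file:

#include <mpi.h>
#include <iostream>
#include "CombBLAS.h"   // lives in the CombBLAS source root, not in /usr/include

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    {
        // Sparse matrix distributed over all MPI ranks (type names follow
        // the Applications examples and may vary between releases).
        typedef SpParMat< int64_t, double, SpDCCols<int64_t, double> > PSpMat;

        PSpMat A;
        A.ReadDistribute("input.mtx", 0);   // rank 0 reads the file and scatters it

        int myrank;
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
        if (myrank == 0)
            std::cout << A.getnrow() << " x " << A.getncol()
                      << " matrix with " << A.getnnz() << " nonzeros" << std::endl;
    }
    MPI_Finalize();
    return 0;
}

Since CombBLAS is a header-plus-static-library C++/MPI project rather than a system-wide C library, compilation goes through the MPI wrapper, roughly mpicxx -O2 -I/path/to/CombBLAS hellocombblas.cpp plus whatever static libraries the CMake build produced, and the resulting binary is launched with mpirun.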

Related

How to compile rsync on MacOS and install it (not using brew)?

For some time I have been trying to compile a full version (with all options) of rsync on macOS (please do not ask why I want to do it; if you can help me clearly and directly, thank you so much; otherwise, do not waste your time).
I found a really helpful script made by "junsionzhang" (https://gist.github.com/junsionzhang), which in my opinion is simple and direct. Thank you junsionzhang!
Even though it is a good, clear, step-by-step script, some steps do not work for me (and I have tried a lot, and for a while).
Here is the script (as of Oct 16, 2022):
#Compile rsync 3.0.7
#Follow these instructions in Terminal on both the client and server to download and compile rsync 3.0.7:
#Download and unarchive rsync and its patches
cd ~/Desktop
curl -O http://rsync.samba.org/ftp/rsync/src/rsync-3.1.2.tar.gz
tar -xzvf rsync-3.1.2.tar.gz
rm rsync-3.1.2.tar.gz
curl -O http://rsync.samba.org/ftp/rsync/src/rsync-patches-3.1.2.tar.gz
tar -xzvf rsync-patches-3.1.2.tar.gz
rm rsync-patches-3.1.2.tar.gz
cd rsync-3.1.2
#Apply patches relevant to preserving Mac OS X metadata
patch -p1 <patches/fileflags.diff
patch -p1 <patches/crtimes.diff
patch -p1 <patches/hfs-compression.diff
#Configure, make, install
./prepare-source
./configure
make
sudo make install
#Verify your installation
/usr/local/bin/rsync --version
#By default, rsync will be installed in /usr/local/bin.
#If that isn't in your path, you will need to call your new version of rsync by its absolute path (/usr/local/bin/rsync).
The three patch lines do not work for me. After applying the fileflags patch, the patched rsync cannot be "prepared", and of course cannot be configured. The other patches, crtimes.diff and hfs-compression.diff, do not exist in the TAR package.
So, questions:
Trying to compile version 3.2.6 on macOS Big Sur (11.7), what do I need, and what are the correct steps to patch, update, and end up with the "correct" version?
How do I (correctly) compile and install all the libraries to have a truly full rsync version, with all features available (ACL support, xattr support, xxhash, zstd, lz4, openssl crypto, and so on)?
I would like to update and contribute a new version of the "junsionzhang" script, adding options to install a simple/standard version (rsync only) or to install the libraries and choose a "more complete" version, to help other Mac users and the community. How can I write this bash script?
How do I install gawk, mawk, nawk, and awk, where do I get them from, and what are the differences between gawk, mawk, and nawk?
Some libraries I already have installed (and I do not know if I installed them correctly) seem to be outdated. How do I update them?
When running "./prepare-source", I get this: "make: Nothing to be done for `conf'." Is this right?
Thank you all! I really appreciate for all help I can get!
Completely untested (I don't run MacOS).
URLs to access the source code:
Source Version Tarballs: https://rsync.samba.org/ftp/rsync/src/
Git repository: https://github.com/WayneD/rsync
You might need these prerequisites (from INSTALL.md):
brew install automake
brew install xxhash
brew install zstd
brew install lz4
brew install openssl
Code to download and extract (to ~/Desktop/rsync-3.2.6):
# Download the relevant version of Rsync and the same version of
# Rsync patches, extract them and apply the "suggested"
# patches from the original script:
cd ~/Desktop
# Rsync
curl -O https://rsync.samba.org/ftp/rsync/src/rsync-3.2.6.tar.gz
tar -xzvf rsync-3.2.6.tar.gz
rm rsync-3.2.6.tar.gz
# Patches
curl -O https://rsync.samba.org/ftp/rsync/src/rsync-patches-3.2.6.tar.gz
tar -xzvf rsync-patches-3.2.6.tar.gz
rm rsync-patches-3.2.6.tar.gz
Apply the "suggested" patches, light the blue touch paper, and stand back:
cd ~/Desktop/rsync-3.2.6
# Apply patches relevant to preserving Mac OS X metadata
patch -p1 <patches/fileflags.diff
patch -p1 <patches/crtimes.diff
patch -p1 <patches/hfs-compression.diff
# Configure, make, install
./prepare-source
./configure
make
sudo make install
# Verify your installation
/usr/local/bin/rsync --version
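On top of the script above, and equally untested, the optional features the asker lists (xxhash, zstd, lz4, openssl) are only picked up by configure if it can find the Homebrew-installed libraries. One way to help it, assuming Intel Homebrew under /usr/local (Apple Silicon uses /opt/homebrew), is:

# Untested sketch: point configure at the Homebrew libraries before building.
export CPPFLAGS="-I/usr/local/include -I/usr/local/opt/openssl/include"
export LDFLAGS="-L/usr/local/lib -L/usr/local/opt/openssl/lib"
./configure
make
# The "Capabilities" and "Optimizations" lines show which features were compiled in:
./rsync --version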

Is it possible to use javapackager on ZuluFX for Mac

I was able to use ZuluFX 8 with javapackager on Windows. However, on a Mac I get this error:
Bundler Mac Application Image skipped because of a configuration problem: Cannot determine which JRE/JDK exists in the specified runtime directory.
Advice to fix: Point the runtime directory to one of the JDK/JRE root, the Contents/Home directory of that root, or the Contents/Home/jre directory of the JDK.
It's pretty easy to just move the package into Contents/Home but I doubt that will work as it seems there is no JRE bundled with the Mac version of ZuluFX 8. Is this something that can be worked around?
It's pretty easy to just move the package into Contents/Home but I doubt that will work as it seems there is no JRE bundled with the Mac version of ZuluFX 8.
From what I'm seeing, I'm not sure that's correct. The ZuluFX 8 archive for Mac contains a jre directory. I extracted the archive to ~/zuluFX and from there created the Contents/Home directory as required by macOS and added a symbolic link to said jre directory there. I then set $JAVA_HOME accordingly:
$ pwd
/Users/cody/zuluFX
$ mkdir -p Contents/Home
$ cd Contents/Home
$ ln -s ../../jre .
$ cd ../..
$ export JAVA_HOME=~/zuluFX
Then I utilized a simple javapackager example on github to test its usage (I have no other JREs/JDKs installed on this box). The example app simply dumps Java properties and environment variables in a TextArea.
I had to modify the 3build script in the example to comment out its attempt to re-set $JAVA_HOME, but otherwise, it builds successfully, with the following javapackager command:
javapackager \
-deploy -Bruntime=${JAVA_HOME} \
-native image \
-srcdir . \
-srcfiles MacJavaPropertiesApp.jar \
-outdir release \
-outfile ${APP_DIR_NAME} \
-appclass MacJavaPropertiesApp \
-name "MacJavaProperties" \
-title "MacJavaProperties" \
-nosign \
-v
When I launch the resulting app, it reports the usage of the azul/zulu jre as expected.

How do you create a fake install of a debian package for use in testing?

I have a package that previously only targeted RPM-based distros, for which I am now building .deb packages for Debian-based distros.
The aim is to simulate a test installation from user space that is isolated from the system you are building on. The build machine may be multi-user, and you do not want to require root access just to build the software. Many of our tests already simulate the installation directory structure. This is the next step up: simulating an actual installation using the built packages.
For the RPM packages I was able to create test installations using:
WSDIR=/where/I/want/my/tests/to/run
rpmdb --initdb --dbpath "$WSDIR"/rpmdb
rpm --relocate /opt="$WSDIR"/opt --dbpath $WSDIR/rpmdb -i <package>.rpm
The equivalent in the Debian world is something like:
dpkg --force-not-root --admindir=$WSDIR/dpkg --root=$WSDIR/install --install "$DEB"
However, I am stuck over the equivalent to the rpmdb --initdb step.
Note that I can just unpack the archive using:
dpkg-deb -x "$DEB" $WSDIR/install
But I would prefer to be closer to how a real package is installed.
Also I don't think this will run preinstall and postinstall scripts.
Similar questions have suggested using debootstrap to create a chroot environment, but this creates a complete new installation. As well as being overkill, it is too slow for an automated test. I intend to use this for quick tests of the installation package prior to further testing in actual test environments.
My experiments so far:
(cd $WSDIR/dpkg && mkdir alternatives info parts triggers updates)
cp /var/lib/dpkg/status $WSDIR/dpkg/status
have at best resulted in:
dpkg: error: unable to access dpkg status area: No such file or directory
which does not indicate clearly what is wrong.
So how do you create a dpkg admin directory?
Cross posted as https://superuser.com/questions/1271145/how-do-you-create-a-dpkg-admin-directory
Update 24/11/2017
I've tried copying the dpkg dir from an environment created by cowdancer (which uses debootstrap under the hood), and copying the real one from /var/lib/dpkg, but I still get the same error message, so perhaps the error (and/or the --admindir option) doesn't mean quite what I think it means.
Note that:
sudo dpkg --force-not-root --root=$WSDIR/install --admindir=/var/lib/dpkg --install "$DEB"
does work. So it is something to do with the admin dir.
I've also retitled the question, as "How do you create a dpkg admin directory?" is an interesting question, but the answer is not necessarily the solution to my problem.
The minimal way to create a dpkg database is something like this:
$ mkdir -p db/{updates,info}
$ touch db/{status,diversions,statoverride}
If you want to use that as non-root, currently the best way is to use fakeroot.
$ mkdir -p fsys
$ PATH=/sbin:/usr/sbin:$PATH fakeroot dpkg --log=/dev/null --admindir=db --instdir=fsys -i pkg.deb
But take into account that passing --root after --admindir or --instdir will reset those paths, which I think is the problem you have been having here.
Also, using sudo together with --force-not-root does not make much sense. :) It is definitely less confined than using just fakeroot. In the near future it will be possible to run dpkg fully unprivileged in some local tree.
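As a quick sanity check (my addition, untested), you can query the same private database afterwards to see what got "installed" into it:
$ dpkg --admindir=db -l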
I eventually found an answer for this. Thanks to Guillem Jover for some of this.
Pasting a copy of it here:
mkdir fake
mkdir fake/install
mkdir -p fake/dpkg/info
mkdir -p fake/dpkg/updates
touch fake/dpkg/status
PATH=/sbin:/usr/sbin:$PATH fakeroot dpkg --force-script-chrootless --log=`pwd`/fake/dpkg.log --root=`pwd`/fake --instdir `pwd`/fake --admindir=`pwd`/fake/dpkg --install *.deb
Some points to note:
--force-not-root is not enough. fakeroot is required.
ldconfig and start-stop-daemon must be on the path.
(hence PATH=/sbin:/usr/sbin:$PATH)
The log file needs to be relocated from the default /var/log/dpkg.log
The order of arguments is significant. If used --root must be before --instdir and --admindir.
The admindir is supposed to have the installation dir as a prefix.
If the package contains any pre or post installation scripts (preinst,postinst) then --force-script-chrootless is required as these scripts are normally run via chroot() which gives operation not permitted when attempted under fakeroot.
For a quick test of trivial dependencies, you can directly install on the system using 'dpkg -i' then 'dpkg -P' and 'apt-get autoremove' to purge the package and clean the dependencies.
Another, more secure but slower, solution could be to use the autopkgtest package:
https://people.debian.org/~mpitt/autopkgtest/README.package-tests.html
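For orientation, autopkgtest runs the tests declared in debian/tests/control; a minimal sketch, assuming a hypothetical test script at debian/tests/smoke, looks like:

Tests: smoke
Depends: @

Here "Depends: @" means the test runs with the package under test (and its dependencies) installed.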

Requiring OpenMP availability for use in an Rcpp package

I have prepared a package in R using the RcppArmadillo and OpenMP libraries and the following commands:
RcppArmadillo.package.skeleton("mypackage")
compileAttributes(verbose=TRUE)
Also, in the DESCRIPTION file I added:
Imports: Rcpp (>= 0.12.8), RcppArmadillo
LinkingTo: Rcpp, RcppArmadillo
Depends: RcppArmadillo
and in the NAMESPACE file I added:
import(RcppArmadillo)
importFrom(Rcpp, evalCpp)
Then I ran the following commands in cmd:
R CMD build mypackage
R CMD INSTALL mypackage.tar.gz
I built and installed the package on my computer and I use it now. But my colleagues and friends are not able to install the package. The error message is about the RcppArmadillo and OpenMP libraries.
For instance:
fatal error: 'omp.h' file not found
Does anyone have previous experience with this? What settings do I need in my package to solve this problem?
Congratulations! You've most likely stumbled across macOS's lack of OpenMP support. This is documented in the Rcpp FAQ as entry 2.10.3.
Defensive coding
The error appears because you did not protect the OpenMP code appropriately, e.g.
Header inclusions are protected with:
#ifdef _OPENMP
#include <omp.h>
#endif
Code has protections given by:
#ifdef _OPENMP
// multithreaded OpenMP version of code
#else
// single-threaded version of code
#endif
This assumes you are not just using the #pragma omp preprocessor directives but also the more in-depth omp function calls; a small sketch follows below. If you are only using the pragmas, then the code protection matters less, as unrecognized pragmas are simply discarded by the compiler.
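As a concrete illustration, here is a minimal sketch of my own (not code from the package in question) of an Rcpp function that guards an explicit OpenMP runtime call:

#include <Rcpp.h>
#ifdef _OPENMP
#include <omp.h>
#endif

// [[Rcpp::export]]
int thread_count() {
#ifdef _OPENMP
    // OpenMP runtime available: report how many threads it may use.
    return omp_get_max_threads();
#else
    // No OpenMP (e.g. stock Apple clang): single-threaded fallback.
    return 1;
#endif
}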
(For long-time users of the above macro schemes coming here: note that as of R 3.4.0, the SUPPORT_OPENMP definition was removed completely in favor of _OPENMP.)
Creating a package requirement for OpenMP via configure.ac
However, the above is just good defensive coding. If your package requires a specific feature, then it may be time to consider using an autoconf file called configure.ac to generate a configure script.
Place the configure.ac in the top level of your package.
packagename/
|- data/
|- inst/
|- man/
|- src/
|- Makevars
|- HelloWorld.cpp
|- DESCRIPTION
|- NAMESPACE
|- configure.ac
|- configure
The configure.ac should contain the following:
AC_PREREQ(2.61)
AC_INIT(your_package_name_here, m4_esyscmd_s([awk -e '/^Version:/ {print $2}' DESCRIPTION]))
AC_COPYRIGHT(Copyright (C) 2017 your name?)
## Determine Install Location of R
: ${R_HOME=$(R RHOME)}
if test -z "${R_HOME}"; then
AC_MSG_ERROR([Could not determine R_HOME.])
fi
## Setup RBin
RBIN="${R_HOME}/bin/R"
CXX=`"${RBIN}" CMD config CXX`
CPPFLAGS=`"${RBIN}" CMD config CPPFLAGS`
CXXFLAGS=`"${RBIN}" CMD config CXXFLAGS`
## Package Requires C++
AC_LANG(C++)
AC_REQUIRE_CPP
## Compiler flag check
AC_PROG_CXX
## Borrowed from BHC/imager/icd/randomForest
# Check for OpenMP
AC_OPENMP
# since some systems have broken OMP libraries
# we also check that the actual package will work
ac_pkg_openmp=no
if test -n "${OPENMP_CFLAGS}"; then
AC_MSG_CHECKING([OpenMP detected, checking if viable for package use])
AC_LANG_CONFTEST([AC_LANG_PROGRAM([[#include <omp.h>]], [[ return omp_get_num_threads (); ]])])
"$RBIN" CMD SHLIB conftest.c 1>&AS_MESSAGE_LOG_FD 2>&AS_MESSAGE_LOG_FD && "$RBIN" --vanilla -q -e "dyn.load(paste('conftest',.Platform\$dynlib.ext,sep=''))" 1>&AS_MESSAGE_LOG_FD 2>&AS_MESSAGE_LOG_FD && ac_pkg_openmp=yes
AC_MSG_RESULT([${ac_pkg_openmp}])
fi
# if ${ac_pkg_openmp} = "yes" then we have OMP, otherwise it will be "no"
if test "${ac_pkg_openmp}" = no; then
AC_MSG_WARN([No OpenMP support. If using GCC, upgrade to >= 4.2. If using clang, upgrade to >= 3.8.0])
AC_MSG_ERROR([Please use a different compiler.])
fi
# Fin
AC_OUTPUT
To generate the configure script, run:
autoconf
Once this is done, you will need to rebuild your package. Note that you may need to install autoconf on Windows, and on macOS you will likely need to install it via Homebrew.
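Independently of the configure check, the OpenMP compiler and linker flags are normally passed through src/Makevars. A typical src/Makevars for an OpenMP-enabled RcppArmadillo package looks like the following (this mirrors the RcppArmadillo skeleton; treat it as a sketch):

CXX_STD = CXX11
PKG_CXXFLAGS = $(SHLIB_OPENMP_CXXFLAGS)
PKG_LIBS = $(SHLIB_OPENMP_CXXFLAGS) $(LAPACK_LIBS) $(BLAS_LIBS) $(FLIBS)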
Helping users get a viable OpenMP compiler
Now, you may want to ensure your colleagues are able to get the speedup from your OpenMP-enabled code. To do so, they must enable OpenMP by shifting away from the default system compiler to either a "true" gcc or an OpenMP-enabled clang compiler.
Instructions for both on macOS are given here:
http://thecoatlessprofessor.com/programming/openmp-in-r-on-os-x/

Compiling an OpenCL program using a CL/cl.h file

I have sample "Hello, World!" code from the net and I want to run it on the GPU on my university's server. When I type gcc main.c, it responds with:
CL/cl.h: No such file or directory
What should I do? How can I have this header file?
Are you using an Ubuntu or Debian distro? Then you can install this package to solve the missing-header problem:
apt-get install opencl-headers
To solve linking issues, you must also install the OpenCL library via this Debian/Ubuntu package:
apt-get install ocl-icd-libopencl1
You can also use these nonfree libraries: nvidia-libopencl1 (Debian) or nvidia-libopencl1-xx (Ubuntu).
Make sure you have the appropriate toolkit installed.
This depends on what you intend to run your code on. If you have an NVIDIA card, then you need to download and install the CUDA toolkit, which also contains the necessary binaries and libraries for OpenCL.
Are you running Linux? If you believe you already have OpenCL installed, it could be that it lives somewhere other than the standard /usr/include. Type the following and see what results you get:
find / -iname cl.h 2>/dev/null
On my laptop, for example, the header is found at /usr/local/cuda-5.5/include. If your header file is at a different location, you simply have to specify the path during compilation:
g++ -I/usr/local/cuda-5.5/include main.c -lOpenCL
Alternatively, you can create a symbolic link from the path to /usr/include:
ln -s /usr/local/cuda-5.5/include/CL /usr/include
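For completeness, here is a small stand-alone check of my own (not the asker's "Hello, World!" program) that needs only the header and the OpenCL library; compile it with the -I and -lOpenCL options shown above:

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    /* Ask the OpenCL runtime how many platforms are available. */
    cl_uint num_platforms = 0;
    cl_int err = clGetPlatformIDs(0, NULL, &num_platforms);
    if (err != CL_SUCCESS) {
        fprintf(stderr, "clGetPlatformIDs failed: %d\n", (int)err);
        return 1;
    }
    printf("Found %u OpenCL platform(s)\n", (unsigned)num_platforms);
    return 0;
}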
