bedr: Binary availability checking for path all FAIL in R

This is my first time using bedr. Upon loading the library, all of the binary availability checks fail.
library("bedr", lib.loc="~/R/R-3.4.4/library")
#
bedr v1.0.4
#
checking binary availability...
Checking path for bedtools... FAIL
Checking path for bedops... FAIL
Checking path for tabix... FAIL
tests and examples will be skipped on R CMD check if binaries are missing
I was originally using R-3.5, and I read somewhere that there are some bugs in R-3.5 that can cause this, so I reverted to R-3.4.4. This didn't seem to help. Additionally, I am using a company laptop at a new employer and am waiting to gain administrative access to install/download programs as needed. I understand these issues might further complicate the matter. Does anyone have a way to diagnose the true cause of this failure and/or fix the problem?

You can install the bedtools, bedops and tabix packages.
If you are in a Unix environment, just use sudo apt install to install them.
After that, all the checks should PASS.
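The failures boil down to whether the three binaries are on the PATH that R sees, so you can diagnose this outside R first. A minimal sketch of that check (the helper name check_bin is mine; the apt package names bedtools, bedops and tabix are the Debian/Ubuntu ones, and other distributions may differ):

```shell
# Report whether each binary bedr needs is visible on PATH, and which
# package to install if not (Debian/Ubuntu package names assumed).
check_bin() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: PASS"
  else
    echo "$1: FAIL (try: sudo apt install $2)"
  fi
}

check_bin bedtools bedtools
check_bin bedops bedops
check_bin tabix tabix
```

Note that on the locked-down company laptop described in the question, sudo apt install itself requires administrative access, so the checks will keep failing until that is granted, or until the binaries are installed into a user-writable directory that is then added to PATH.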

Related

How to deal with build dependencies in source RPM?

I don't usually use Fedora or RPMs, so I'm flying blind here. There are lots of similar questions around here, but none that I found are to the exact point where I'm stuck.
I have the source RPM for an old game program on Fedora ("six" is the game). I want to add a couple of features, but first I want to make sure I know how to compile it so that any future problems are new. I have not made any changes yet at all.
I'm not completely helpless -- when I did
rpmbuild --recompile six-*.src.rpm
I got a complaint about a missing dependency: "kdelibs3-devel", but
dnf install kdelibs3-devel
took care of that.
However, now the complaints are more nuanced. When I retried rpmbuild, it ended with
checking crt_externs.h usability... no
checking crt_externs.h presence... no
checking for crt_externs.h... no
checking for _NSGetEnviron... no
checking for vsnprintf... yes
checking for snprintf... yes
checking for X... libraries /usr/lib64, headers .
checking for IceConnectionNumber in -lICE... yes
checking for libXext... yes
checking for pthread_create in -lpthread... yes
checking for extra includes... no
checking for extra libs... no
checking for libz... -lz
checking for libpng... -lpng -lz -lm
checking for libjpeg6b... no
checking for libjpeg... no
configure: WARNING: libjpeg not found. disable JPEG support.
checking for perl... /usr/bin/perl
checking for Qt... configure: error: Qt (>= Qt 3.3 and < 4.0) (library qt-mt) not found. Please check your installation!
For more details about this problem, look at the end of config.log.
Make sure that you have compiled Qt with thread support!
error: Bad exit status from /var/tmp/rpm-tmp.y8dvN5 (%build)
RPM build errors:
user mockbuild does not exist - using root
user mockbuild does not exist - using root
user mockbuild does not exist - using root
user mockbuild does not exist - using root
Bad exit status from /var/tmp/rpm-tmp.y8dvN5 (%build)
Several things here seem odd, but the obvious biggie is the failure to find Qt between 3.3 and 4.0. This obviously compiled for the Fedora maintainers, so the right thing should be available, but I have no idea what its exact name would be, or how to find it and make it available.
Help, please.
The best thing to do here is use higher-level tools. Specifically, use mock. This is a tool which:
manages a build environment (either chroot- or container-based), so you don't have to worry about that,
handles things like the build dependencies, so you don't have to worry about that, and
makes sure that your build is "clean" rather than influenced by your own user environment, so you don't have to worry about that.
In short: mock --rebuild six-*.src.rpm
If you see errors about a specific library missing, you can use dnf itself to find out the name. For example, on Fedora 35:
$ sudo dnf repoquery --whatprovides '*qt-mt*'
qt3-0:3.3.8b-88.fc35.i686
qt3-0:3.3.8b-88.fc35.x86_64
qt3-devel-0:3.3.8b-88.fc35.i686
qt3-devel-0:3.3.8b-88.fc35.x86_64
I guess you just need to dnf install qt3-devel.
As for libjpeg, do you have libjpeg-turbo-devel installed? Configure programs generally look for the -devel stuff rather than just the library (libjpeg). If you have it installed and it's still not picked up, maybe the software is just not compatible with libjpeg-turbo. Fixing that would be a separate challenge.
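The repoquery pattern above generalises to any missing library named in a configure error: wrap the name in a glob and ask dnf what provides it. A small sketch (the helper name whatprovides_cmd is mine, not a dnf feature; it only prints the command to run, since the query itself needs a Fedora system):

```shell
# Build the dnf query for a library name taken from a configure error.
# This only prints the command; run it on a Fedora system.
whatprovides_cmd() {
  printf "sudo dnf repoquery --whatprovides '*%s*'\n" "$1"
}

whatprovides_cmd qt-mt   # the Qt 3 library from the error above
whatprovides_cmd jpeg    # to track down the libjpeg -devel package
```

Once the query names the package, install its -devel variant and retry the mock rebuild.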
Using mock got me the ability to build the package, but I could not figure out how to do further work on it because the chroot left behind by mock did not seem functional. So I asked this question in a different way here, and got a really good answer (the second one).

Homebrew: `brew uses --installed gcc` does not give any result

I would want to get the list of installed packages that depend on gcc (installed with homebrew). When I try:
brew uses --installed gcc
it gives no result. And if I check e.g. r's dependencies with brew deps r, it returns gcc (among others). So I assume brew uses should at least return the value r.
Did anyone encounter a similar problem and could shed some light on this?
This is not an authoritative answer, but it appears to me that this is because r depends on :fortran, which is some kind of virtual dependency that can be resolved in different ways. brew deps answers the question, what would I need to install before installing this formula. And in your case it decides that installing gcc is a way to satisfy the :fortran requirement. But the reverse is apparently not supported: It doesn't know just from looking at gcc that this can be used to resolve the virtual dependency :fortran. This is somewhat plausible if one considers the way that virtual dependencies are implemented in Homebrew. Usually, it just looks around in the file system to see if a required binary is available (including ones supplied outside of Homebrew), but it doesn't establish a formula dependency link once it finds a candidate.
(In fact, this case might be even more complex. If you look at brew deps r --tree, you will see that the dependency is actually on :gcc, which is another level of virtual dependency.)
Although not directly related to your question, also note that deps by default is recursive but uses is not. So in order to get a symmetric picture, you'd need to use deps -1 or uses --recursive.
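The filesystem-lookup behaviour described above can be illustrated with a toy sketch (my simplification, not Homebrew's actual code): a virtual dependency like :fortran is satisfied by the first suitable binary found on PATH, and nothing records which formula, if any, provided it.

```shell
# Toy model of a virtual dependency: satisfied by the first candidate
# binary found on PATH, with no formula link recorded anywhere.
resolve_virtual() {
  for candidate in "$@"; do
    if command -v "$candidate" >/dev/null 2>&1; then
      echo "$candidate"
      return 0
    fi
  done
  echo "unsatisfied"
  return 1
}

resolve_virtual gfortran gcc   # either one could satisfy :fortran
```

This is why brew uses cannot walk the edge backwards: from gcc's side there is no recorded link to :fortran, so gcc never shows up as something r depends on.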

Compiling haskell module Network on win32/cygwin

I am trying to compile Network.HTTP (http://hackage.haskell.org/package/network) on win32/cygwin. However, it fails with the following message:
Setup.hs: Missing dependency on a foreign library:
* Missing (or bad) header file: HsNet.h
This problem can usually be solved by installing the system package that
provides this library (you may need the "-dev" version). If the library is
already installed but in a non-standard location then you can use the flags
--extra-include-dirs= and --extra-lib-dirs= to specify where it is.
If the header file does exist, it may contain errors that are caught by the C
compiler at the preprocessing stage. In this case you can re-run configure
with the verbosity flag -v3 to see the error messages.
Unfortunately it does not give more clues. HsNet.h includes sys/uio.h, which actually should not be included and should be configured correctly.
Don't use Cygwin; instead follow Johan Tibell's way:
Installing MSYS
Install the latest Haskell Platform. Use the default settings.
Download version 1.0.11 of MSYS. You'll need the following files:
MSYS-1.0.11.exe
msysDTK-1.0.1.exe
msysCORE-1.0.11-bin.tar.gz
The files are all hosted on haskell.org as they're quite hard to find in the official MinGW/MSYS repo.
Run MSYS-1.0.11.exe followed by msysDTK-1.0.1.exe. The former asks you if you want to run a normalization step. You can skip that.
Unpack msysCORE-1.0.11-bin.tar.gz into C:\msys\1.0. Note that you can't do that using an MSYS shell, because you can't overwrite the files in use, so make a copy of C:\msys\1.0, unpack it there, and then rename the copy back to C:\msys\1.0.
Add C:\Program Files\Haskell Platform\VERSION\mingw\bin to your PATH. This is necessary if you ever want to build packages that use a configure script, like network, as configure scripts need access to a C compiler.
These steps are what Tibell uses to compile the Network package for win and I have used this myself successfully several times on most of the haskell platform releases.
It is possible to build network on win32/cygwin, and the above steps (by Jonke), though useful, may not be necessary.
While doing the configuration step, specify
runghc Setup.hs configure --configure-option="--build=mingw32"
so that the library is configured for mingw32; otherwise you will get link errors or "undefined references" when you try to link against or use the network library.
This, combined with @Yogesh Sajanikar's answer, made it work for me (on win64/cygwin):
Make sure the gcc on your path is NOT the Mingw/Cygwin one, but the
C:\ghc\ghc-6.12.1\mingw\bin\gcc.exe
(Run
export PATH="/cygdrive/.../ghc-7.8.2/mingw/bin:$PATH"
before running cabal install network in the Cygwin shell)

R 3.0.0 crashes on startup

I just updated R from version 2.15.1 to version 3.0.0 on my Mac running OS X 10.6.8, and now R crashes on startup.
I get the error:
Error in getLoadedDLLs() : there is no .Internal function 'getLoadedDLLs'
Error in checkConflicts(value) :
".isMethodsDispatchOn" is not a BUILTIN function
Any ideas on how to go about fixing this?
The most common cause of this is having a corrupted ".Rdata" file in your working directory. Using the Mac Finder.app you will not by default be able to see files that begin with a ".", so-called dotfiles. Those files can be "seen" if you execute a change to the plist controlling the behavior of Finder.app. Open a Terminal.app window and run this bit of code:
defaults write com.apple.Finder AppleShowAllFiles YES
Then click and hold on the Finder icon in the Dock, and choose "Relaunch".
If you do so, you can then change it back with the obvious modification to that procedure. I happen to like seeing the hidden files, so that's the way I run my Mac all the time, but some people may feel it is too dangerous to expose the "hidden secrets" to their own bumbling.
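Alternatively, you can deal with the hidden file entirely from the Terminal without changing Finder's settings. A minimal sketch (it renames rather than deletes, in case the workspace turns out to be fine; the function name stash_rdata is mine):

```shell
# Move a possibly corrupted hidden .RData out of the way so R starts
# with a clean workspace. Renaming keeps the data recoverable.
stash_rdata() {
  dir="${1:-$HOME}"   # the directory R starts in
  if [ -f "$dir/.RData" ]; then
    mv "$dir/.RData" "$dir/.RData.bak"
    echo "moved .RData aside in $dir"
  else
    echo "no .RData in $dir"
  fi
}

stash_rdata "$HOME"
```

If R then starts cleanly, the old workspace file was the culprit and .RData.bak can be deleted (or inspected later).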
Paul raises a good point: I run the following R function in the R console after updating:
update.packages(checkBuilt=TRUE, ask=FALSE)
I have a lot of installed packages and paging through the entire list has gotten too tiresome so I bypass the ask-messages. Sometimes you will get errors because there may be dependencies on r-forge or Omegahat packages or on packages that need to be compiled from source. These may need to be updated "by hand". And you may need more than one pass through such an effort. Take notes of which packages are missing and fill them in.
I had the same problem running RKWard on ubuntu 12.04.
Check your r-base-core, like Paul suggested, to make sure it is also at the latest version. Mine didn't update automatically. I had a platform-dependent version, but RKWard was calling the new version. To solve this problem, I simply marked r-base-core for removal and reinstalled the latest version of r-base-core. Poof, problem fixed, bippity boppity boo!
I suspect that your error is similar to mine because I had also JUST updated RKWard. Start by updating r-base-core, or try to get all of the dependencies to match up their versions.
I hope that you can translate this into what to do on a Mac.
SU

fast install package during development with multiarch

I'm working on a package "xyz" that uses Rcpp with several cpp files.
When I'm only updating the R code, I would like to run R CMD INSTALL xyz on the package directory without having to recompile all the shared libraries that haven't changed. That works fine if I specify the --no-multiarch flag: the source directory src gets populated the first time with the compiled objects, and if the sources don't change they are re-used the next time. With multiarch on, however, R decides to make two copies of src, src-i386 and src-x86_64. It seems to confuse R CMD INSTALL which always re-runs all the compilation. Is there any workaround?
(I'm aware that there are alternative ways, e.g. devtools::load_all, but I'd rather stick to R CMD INSTALL if possible.)
The platform is MacOS 10.7, and I have the latest version of R.
I have a partial answer for you. One really easy speed-up is provided by ccache, which you can enable for all R compilation (e.g. via R CMD whatever, thereby also covering inline, attributes, RStudio use, ...) globally through ~/.R/Makevars:
edd@max:~$ tail -10 .R/Makevars
VER=4.6
CC=ccache gcc-$(VER)
CXX=ccache g++-$(VER)
SHLIB_CXXLD=g++-$(VER)
FC=ccache gfortran
F77=ccache gfortran
MAKE=make -j8
edd@max:~$
It takes care of all caching of compilation units.
Now, that does not explicitly address the --no-multiarch aspect, which I don't play with much, as we are still mostly 'single arch' on Linux. This will change eventually, but hasn't yet. Yet I suspect that by letting the compiler decide the caching, you too will get the net effect.
Other aspects can be controlled too, e.g. ~/.R/check.Renviron can be used to turn certain tests on or off. I tend to keep 'em all on -- better to waste a few seconds here than to get a nastygram from Vienna.
