Homebrew: `brew uses --installed gcc` does not give any result

I want to get the list of installed packages that depend on gcc (installed with Homebrew). When I try:
brew uses --installed gcc
it gives no result. Yet if I check, e.g., r's dependencies with brew deps r, it returns gcc (among others). So I would assume brew uses should at least return r.
Did anyone encounter a similar problem and could shed some light on this?

This is not an authoritative answer, but it appears that this happens because r depends on :fortran, which is a kind of virtual dependency that can be resolved in different ways. brew deps answers the question: what would I need to install before installing this formula? In your case it decides that installing gcc is one way to satisfy the :fortran requirement. The reverse direction is apparently not supported: it doesn't know, just from looking at gcc, that gcc can be used to resolve the virtual dependency :fortran. This is somewhat plausible given the way virtual dependencies are implemented in Homebrew: usually it just looks around the file system to see whether a required binary is available (including binaries supplied outside of Homebrew), but it doesn't establish a formula dependency link once it finds a candidate.
(In fact, this case might be even more complex. If you look at brew deps r --tree, you will see that the dependency is actually on :gcc, which is another level of virtual dependency.)
Although not directly related to your question, also note that deps is recursive by default while uses is not. So to get a symmetric picture, you'd need to use deps -1 or uses --recursive.
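For example, a sketch of the symmetric comparison (flag spellings can vary between Homebrew versions, and the output depends on what you have installed):
brew deps -1 r                          # direct dependencies only
brew uses --installed --recursive gcc   # recursive reverse dependencies among installed formulae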

Related

bedr: Binary Availability checking for path all FAIL in R

This is my first time using bedr. Upon loading the library, all of the binary availability checks fail.
library("bedr", lib.loc="~/R/R-3.4.4/library")
#
bedr v1.0.4
#
checking binary availability...
Checking path for bedtools... FAIL
Checking path for bedops... FAIL
Checking path for tabix... FAIL
tests and examples will be skipped on R CMD check if binaries are missing
I was originally using R-3.5, and I read somewhere that there are bugs in R-3.5 that can cause this, so I reverted to R-3.4.4. That didn't seem to help. Additionally, I am using a company laptop at a new employer and am waiting on administrative access to install/download programs as needed. I understand these issues might further complicate the matter. Does anyone have a way to diagnose the true cause of this failure and/or fix the problem?
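As a first diagnostic, you can check from a shell whether the binaries are visible on your PATH at all (a generic sketch; R inherits the PATH of the shell it was started from):
for b in bedtools bedops tabix; do
  command -v "$b" >/dev/null && echo "$b: found" || echo "$b: NOT on PATH"
done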
You can install the bedtools, bedops and tabix packages.
If you are in a Unix environment, just use sudo apt install to install them.
After that, all the checks should PASS.
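For example, on Debian/Ubuntu (assuming your distribution ships the tools under these package names; this does require administrative rights):
sudo apt install bedtools bedops tabix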

Why can R be linked to a shared BLAS later even if it was built with `--with-blas=lblas`?

The BLAS section in the R Installation and Administration manual says that when R is built from source with the configuration parameter --without-blas, it will build Netlib's reference BLAS into a standalone shared library at R_HOME/lib/libRblas.so, alongside the standard R shared library R_HOME/lib/libR.so. This makes it easier for users to switch between and benchmark different tuned BLAS libraries in the R environment. The guide suggests that researchers might use a symbolic link to libRblas.so to achieve this, and this article gives more details.
By contrast, when simply installing a pre-compiled binary version of R, either from one of CRAN's mirrors or from Ubuntu's repository (for Linux users like me), it should in theory be more difficult to switch between different BLAS libraries without rebuilding R, because a pre-compiled R is configured with --with-blas = (some BLAS library). We can easily check this, either by reading the configuration file at R_HOME/etc/Makeconf, or by checking the result of R CMD config BLAS_LIBS. For example, on my machine it returns -lblas, so R was linked to the reference BLAS at build time. As a result, there is no R_HOME/lib/libRblas.so, only R_HOME/lib/libR.so.
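Both checks can be run from a shell, e.g.:
R CMD config BLAS_LIBS                       # prints -lblas on my machine
grep '^BLAS_LIBS' "$(R RHOME)/etc/Makeconf"  # R RHOME prints the R home directory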
However, this R blog post says that it is possible to switch between different BLAS libraries even if R was not installed from source. The author tried ATLAS and OpenBLAS from Ubuntu's repositories and then used update-alternatives --config to switch between them. It is also possible to configure and install a tuned BLAS from source, add it to the "alternatives" through update-alternatives --install, and later switch to it in the same way. The BLAS library (a symbolic link) in this case is found at /usr/lib/libblas.so.3, which is on the library search path (LD_LIBRARY_PATH) of both Ubuntu and R. I have tested this and it does work! But I am very surprised at how R achieves this. As I said, R should have been tied to the BLAS library configured at build time, i.e., I would expect all BLAS routines to be integrated into R_HOME/lib/libR.so. So why is it still possible to change the BLAS via /usr/lib/libblas.so.3?
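A sketch of the switching workflow described in that blog post (the alternative's name and the libR.so path may differ across Ubuntu releases):
sudo update-alternatives --config libblas.so.3   # choose ATLAS, OpenBLAS, reference BLAS, ...
ldd /usr/lib/R/lib/libR.so | grep -i blas        # check which BLAS R actually resolves at load time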
Thanks to anyone who can explain this.

how to make sure a Debian package does not have a dependency

I am building a Debian package using dpkg.
The package has a dependency on libvirt, which is not desired.
The rules file does not specify this dependency, but it is added by dpkg, I suppose due to some calls to libvirt-dev at build time.
However, my package works fine without libvirt. As such, libvirt should at most be a "Recommends", not a "Depends". How do I override this dependency and make sure it is not present in my final deb file?
Hard to know without seeing your actual package, but I'd guess that you have a binary or shared library which is linked against libvirt. That would cause dh_shlibdeps to include libvirt in the ${shlibs:Depends} substvar.
If that's your problem, then the right fix depends on what's getting linked to libvirt. It should be straightforward to determine; just run ldd on each binary or shared library object in your package, and grep for "libvirt".
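For example, a quick scan over the staged package tree (debian/yourpkg is a placeholder for your package's staging directory):
find debian/yourpkg -type f \( -perm -u+x -o -name '*.so*' \) \
  -exec sh -c 'ldd "$1" 2>/dev/null | grep -q libvirt && echo "$1 links libvirt"' _ {} \;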
If the thing linked against libvirt is only incidental to the package, and isn't part of the main functionality, then using Recommends: would indeed be the right thing. To make dh_shlibdeps exclude that object from its dependency scanning, give it a -X option. Example target for debian/rules, assuming debhelper 7-style packaging:
override_dh_shlibdeps:
	dh_shlibdeps -Xname_of_your_object_to_exclude
If the thing(s) linked to libvirt actually are an important part of the package functionality, then the generated libvirt dependency is appropriate. If you still don't want it, you'll need to work out how to avoid linking against libvirt during your build.
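Either way, you can verify what ended up in the finished package by inspecting its control metadata with dpkg-deb (the filename is a placeholder):
dpkg-deb -I ../yourpkg_1.0_amd64.deb | grep -E 'Depends|Recommends'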

fast install package during development with multiarch

I'm working on a package "xyz" that uses Rcpp with several cpp files.
When I'm only updating the R code, I would like to run R CMD INSTALL xyz on the package directory without having to recompile all the shared libraries that haven't changed. That works fine if I specify the --no-multiarch flag: the source directory src gets populated with the compiled objects the first time, and if the sources don't change they are re-used the next time. With multiarch on, however, R makes two copies of src, src-i386 and src-x86_64. This seems to confuse R CMD INSTALL, which then always re-runs the whole compilation. Is there any workaround?
(I'm aware that there are alternative ways, e.g. devtools::load_all, but I'd rather stick to R CMD INSTALL if possible.)
The platform is Mac OS X 10.7, and I have the latest version of R.
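For reference, the two invocations being compared (xyz is the package directory):
R CMD INSTALL --no-multiarch xyz   # re-uses unchanged objects in src/
R CMD INSTALL xyz                  # builds src-i386 and src-x86_64, recompiling everything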
I have a partial answer for you. One really easy speed-up is provided by ccache, which you can enable for all R compilation (e.g. via R CMD whatever, thereby also covering inline, attributes, RStudio use, ...) globally through ~/.R/Makevars:
edd@max:~$ tail -10 .R/Makevars
VER=4.6
CC=ccache gcc-$(VER)
CXX=ccache g++-$(VER)
SHLIB_CXXLD=g++-$(VER)
FC=ccache gfortran
F77=ccache gfortran
MAKE=make -j8
edd@max:~$
It takes care of all caching of compilation units.
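You can confirm that the cache is actually being hit by checking its statistics before and after a rebuild (assuming ccache is installed):
ccache -s   # look for a growing 'cache hit' count across repeated builds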
Now, that does not explicitly address the --no-multiarch aspect, which I don't play much with as we are still mostly 'single arch' on Linux. This will change eventually, but hasn't yet. Still, I suspect that by letting ccache decide what actually needs recompiling, you will get the same net effect.
Other aspects can be controlled too, e.g. ~/.R/check.Renviron can be used to turn certain tests on or off. I tend to keep 'em all on -- better to waste a few seconds here than to get a nastygram from Vienna.

Generating dependency graph of Autotools packages

I have a software system comprising 30+ open-source packages, most of them using the GNU Autotools suite.
Are there tools to automatically generate a package-to-package dependency graph? I.e., I'd like to see something like gst-plugins-good -> gst-plugins-base -> gstreamer -> glib.
I don't think so, but you could probably whip something together with this knowledge:
Scan the file named either configure.ac or configure.in in the package's root directory.
Look for a string of the form PKG_CHECK_MODULES([...],[...]...)
The second argument of that macro consists of package requirements of the form package or package >= version separated by whitespace.
The requirement string might not be the same as the package tarball name; a tarball that contains package.pc or package.pc.in provides the package package.
This only works for dependencies that use pkg-config. Some don't, and you'll need to keep track of those dependencies by hand.
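A minimal sketch of such a scan (it assumes one source tree per subdirectory, and will miss PKG_CHECK_MODULES calls that span multiple lines):
for d in */; do
  for f in "${d}configure.ac" "${d}configure.in"; do
    [ -f "$f" ] || continue
    echo "== ${d%/} =="
    grep 'PKG_CHECK_MODULES' "$f"   # the macro's second argument lists the requirements
  done
done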
Probably not, because this is a hard problem. If there were only one way to build a package, it might not be too bad, but in general that isn't the case: you have the --enable-foo and --with-foo options that you can pass to configure, and those are sometimes package-dependent as well, requiring yet more packages. Most Linux distros (I think, but am not completely sure) maintain this sort of dependency list by hand for yum or zypper or apt or whatever the package manager is, and only one layer deep, leaving it up to the package manager to traverse the graph. The packages for the distro are built only one way. It's not unusual for these lists to be broken, either.
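For distro-packaged software you can therefore lean on that hand-maintained metadata, e.g. on Debian/Ubuntu (distro package names differ from the upstream tarball names):
apt-cache depends gstreamer1.0-plugins-good   # one layer of the dependency graph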
