How do I set up 'make install' to check the md5 of the installed libs/bins and only install if changed? - gnu-make

I've inherited a fairly large project that is built using autoconfigure/automake (the configure.ac/Makefile.am files have their own issues, but that's a separate question).
My problem is that a top-level build + install generates several static and dynamic libs as well as binaries. So far so good. The problem is that 'make install' will indiscriminately copy over every single one of those libs/bins, which takes a while.
I'd like it to only copy over libs/bins that have changed - potentially by comparing the md5sum of the target and source files.
How can I hook this up in my configure.ac/Makefile.am?

The actual program that copies the files is install (usually /usr/bin/install); it is defined in the INSTALL make variable.
Your install implementation might support the -C flag:
-C, --compare
compare each pair of source and destination files,
and in some cases, do not modify the destination at all
So you could try to provide a script that does what you want (compare the source file with the destination file, and only copy if needed), by overriding this variable.
You could also just inject the -C flag to see if it gives you any speedup (I tend to agree with ldav1s' comment that it might not):
make install INSTALL="/usr/bin/install -C"
Note that install accepts quite a number of arguments, so if you are going to re-implement a compatible script, you might have to implement some subset thereof.
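As a minimal sketch of such a wrapper (the script name install-if-changed is made up here; it only special-cases the plain "install [options] SRC DST" form, and multi-file or -t invocations simply fall through to the real install):

#!/bin/sh
# install-if-changed: skip the copy when source and destination files
# already have identical md5sums; otherwise run the real install.
[ $# -ge 2 ] || exec /usr/bin/install "$@"
for arg in "$@"; do :; done                 # DST is the last argument
dst=$arg
src=$(eval "echo \"\${$(($# - 1))}\"")      # SRC is the second-to-last argument
if [ -f "$src" ] && [ -f "$dst" ] &&
   [ "$(md5sum < "$src")" = "$(md5sum < "$dst")" ]; then
    exit 0                                  # identical content: skip the copy
fi
exec /usr/bin/install "$@"

You would then run make install INSTALL=/path/to/install-if-changed.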

Related

Meson / Ninja build system - How to run custom script at Uninstall?

Meson/Ninja provide an easy method to run a script at install time.
For example, this line will tell Meson to run the glib-compile-schemas command to compile the GSettings on Linux (system configuration options).
meson.add_install_script('glib-compile-schemas', schemas_dir)
(this command will be automatically run when the user executes ninja install)
How can I tell Meson to run a custom command at uninstall?
In this specific case I would like to delete (or at least reset to default) the key-value pairs in GSettings. To reset them, I have found that the command is gsettings reset-recursively <path> (successfully tested in terminal).
Adding a custom uninstall script is still being discussed; it was proposed quite some time ago but has not yet been implemented. It looks like this task is typically left to the package manager (and therefore to the corresponding packaging scripts).
But I agree, there is some illogical asymmetry in the case of the meson install command. As a workaround, you can create your own target:
run_target('my-uninstall', command : ['scripts/uninstall.sh'])
The drawbacks, of course, are that it has to be invoked explicitly, it cannot override, append to, or rename the internal uninstall target, and the script must have executable permissions.
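For the GSettings case in the question, scripts/uninstall.sh could be as small as the sketch below (org.example.myapp is a placeholder schema ID; substitute your application's own):

#!/bin/sh
# reset every key under the application's GSettings schema to its default
gsettings reset-recursively org.example.myapp

You would then invoke it explicitly with ninja my-uninstall.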
The internal, reserved uninstall target, however, does revert all explicit install operations:
Meson allows you to uninstall an install step by invoking the uninstall target. This will remove all files installed as part of install. Note that this does not restore the original files. This also does not undo changes done by custom install scripts (because they can do arbitrary install operations).

libsqlite3 issue: I am not able to use cl-sql

I am trying to use cl-sql for database access to sqlite3.
But I am getting the error
Couldn't load foreign libraries "libsqlite3", "sqlite3". (searched CLSQL-SYS:*FOREIGN-LIBRARY-SEARCH-PATHS*: (#P"/usr/lib/clsql/" #P"/usr/lib/"))
The same happens with sqlite.
I have installed sqlite3 using apt-get, and there is a file libsqlite.so.0 in the /usr/lib directory.
I also tried to build sqlite3 from source, but I couldn't get the .so file. What is it that I am doing wrong?
Your problem is that cl-sql has a third party dependency. If you inspect the implementation of cl-sql (probably under "~/quicklisp/dists/quicklisp/software/clsql-202011220-git/db-sqlite3/sqlite3-loader.lisp") you will see that the function database-type-load-foreign is trying to load a library named either "libsqlite3" or "sqlite3".
Depending on your operating system this is either looking for a .dll or .so with exactly one of those names.
Given that libsqlite.so has a different name on your particular distribution of Linux, you have a number of options to make this library work (the first two are sketched below):
Install a version of sqlite3 with the correct binary
Create a soft link to your binary that redirects via ln -s /usr/lib/libsqlite.so.0 /usr/lib/libsqlite3.so (assuming libsqlite.so.0 is the file that clsql is looking for)
Add new paths to CLSQL-SYS:*FOREIGN-LIBRARY-SEARCH-PATHS* to point to the correct binary if it is installed elsewhere (via clsql:push-library-path)
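A quick sketch of the first two options on a Debian-like system (package and file names are assumptions; check what your distribution actually ships):

ldconfig -p | grep sqlite                   # see what the library is really called
sudo apt-get install libsqlite3-dev         # the -dev package ships the unversioned .so
sudo ln -s /usr/lib/libsqlite.so.0 /usr/lib/libsqlite3.so   # or link it by hand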

Compiling haskell module Network on win32/cygwin

I am trying to compile Network.HTTP (http://hackage.haskell.org/package/network) on win32/cygwin. However, it fails with the following message:
Setup.hs: Missing dependency on a foreign library:
* Missing (or bad) header file: HsNet.h
This problem can usually be solved by installing the system package that
provides this library (you may need the "-dev" version). If the library is
already installed but in a non-standard location then you can use the flags
--extra-include-dirs= and --extra-lib-dirs= to specify where it is.
If the header file does exist, it may contain errors that are caught by the C
compiler at the preprocessing stage. In this case you can re-run configure
with the verbosity flag -v3 to see the error messages.
Unfortunately it does not give more clues. HsNet.h includes sys/uio.h, which actually should not be included and should be configured correctly.
Don't use Cygwin; instead, follow Johan Tibell's way:
Installing MSYS
Install the latest Haskell Platform. Use the default settings.
Download version 1.0.11 of MSYS. You'll need the following files:
MSYS-1.0.11.exe
msysDTK-1.0.1.exe
msysCORE-1.0.11-bin.tar.gz
The files are all hosted on haskell.org as they're quite hard to find in the official MinGW/MSYS repo.
Run MSYS-1.0.11.exe followed by msysDTK-1.0.1.exe. The former asks you if you want to run a normalization step. You can skip that.
Unpack msysCORE-1.0.11-bin.tar.gz into C:\msys\1.0. Note that you can't do that using an MSYS shell, because you can't overwrite the files in use, so make a copy of C:\msys\1.0, unpack it there, and then rename the copy back to C:\msys\1.0.
Add C:\Program Files\Haskell Platform\VERSION\mingw\bin to your PATH. This is necessary if you ever want to build packages that use a configure script, like network, as configure scripts need access to a C compiler.
These steps are what Tibell uses to compile the network package for Windows, and I have used them successfully myself several times on most of the Haskell Platform releases.
It is possible to build network on win32/cygwin, and the above steps (by Jonke), though useful, may not be necessary.
While doing the configuration step, specify
runghc Setup.hs configure --configure-option="--build=mingw32"
so that the library is configured for mingw32; otherwise you will get "undefined reference" link errors when you try to link against or use the network library.
This combined with @Yogesh Sajanikar's answer made it work for me (on win64/cygwin):
Make sure the gcc on your path is NOT the MinGW/Cygwin one, but the one bundled with GHC, e.g.
C:\ghc\ghc-6.12.1\mingw\bin\gcc.exe
(Run
export PATH="/cygdrive/.../ghc-7.8.2/mingw/bin:$PATH"
before running cabal install network in the Cygwin shell)

Why always ./configure; make; make install; as 3 separate steps?

Every time you compile something from source, you go through the same 3 steps:
$ ./configure
$ make
$ make install
I understand that it makes sense to divide the installation process into different steps, but I don't get why each and every coder on this planet has to write the same three commands again and again just to get one single job done. From my point of view it would make total sense to have an ./install.sh script automatically delivered with the source code, containing the following text:
#!/bin/sh
./configure
make
make install
Why would people do the 3 steps separately?
Because each step does different things
Prepare (set up) the environment for building
./configure
This script has lots of options that you may want to change, like --prefix or --with-dir=/foo. That means every system has a different configuration. ./configure also checks for missing libraries that should be installed. Anything wrong here will cause your application not to build. That's why distros have packages that are installed in different places: every distro thinks it's better to install certain libraries and files into certain directories. The instructions may say just to run ./configure, but in practice you should almost always tweak its options.
For example, have a look at the Arch Linux packages site. There you'll see that each package uses different configure parameters (assuming they use autotools for the build system).
Building the system
make
This is actually make all by default, and every make has different actions to do: some do the building, some run tests after building, some check sources out from external SCM repositories. Usually you don't have to give any parameters, but again, some packages execute them differently.
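For instance (the target names follow common autotools convention; not every package provides them):

make            # runs the first target in the Makefile, conventionally "all"
make check      # many autotools packages run their test suite this way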
Install to the system
make install
This installs the package in the place specified with configure. If you want, you can tell ./configure to point to your home directory. However, lots of configure options point to /usr or /usr/local. That means you actually have to use sudo make install, because only root can copy files to /usr and /usr/local.
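For example, a per-user install that never needs root (the prefix shown is just one common choice):

./configure --prefix="$HOME/.local"
make
make install    # no sudo needed: all files go under $HOME/.local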
Now you see that each step is a prerequisite for the next step. Each step is a preparation to make things work in a smooth flow. Distros use this model to build packages (like RPM, deb, etc.).
Here you'll see that each step is actually a different state, which is why package managers have different wrappers. Below is an example of a wrapper that lets you build the whole package in one step. But remember that each application has a different wrapper (and those wrappers have names like spec, PKGBUILD, etc.):
def setup:
    ... # use ./configure if autotools is used
def build:
    ... # use make if autotools is used
def install:
    ... # use make install if autotools is used
Here one can use autotools, meaning ./configure, make and make install. But another package might use SCons, a Python-related setup, or something different.
As you see splitting each state makes things much easier for maintaining and deployment, especially for package maintainers and distros.
First, it should be ./configure && make && make install, since each step depends on the success of the former. Part of the reason is evolution and part of the reason is convenience for the development workflow.
Originally, most Makefiles would only contain the commands to compile a program, and installation was left to the user. An extra rule allows make install to place the compiled output in a place that might be correct; there are still plenty of good reasons you might not want to do this, including not being the system administrator or not wanting to install it at all. Moreover, if I am developing the software, I probably don't want to install it: I want to make some changes and test the version sitting in my directory. This becomes even more salient if I'm going to have multiple versions lying around.
./configure goes and detects what is available in the environment and/or is desired by the user to determine how to build the software. This is not something that needs to change very often and can often take some time. Again, if I am a developer, it's not worth the time to reconfigure constantly. More importantly, since make uses timestamps to rebuild modules, if I rerun configure there is a possibility that flags will change, and now some of the components in my build will be compiled with one set of flags and others with a different set of flags that might lead to different, incompatible behaviour. So long as I don't rerun configure, I know that my compilation environment remains the same even if I change my sources. If I rerun configure, I should run make clean first, to remove any built sources and ensure things are built uniformly.
The only cases where the three commands are run in a row are when users install the program or a package is built (e.g., Debian's debuild or Red Hat's rpmbuild). And that assumes the package can be given a plain configure, which is not usually the case for packaging, where at least --prefix=/usr is desired. Packagers are also likely to have to deal with fakeroots when doing the make install part. Since there are lots of exceptions, making ./configure && make && make install the rule would be inconvenient for a lot of people who do it on a far more frequent basis!
configure may fail if it finds that dependencies are missing.
make runs a default target, the first one listed in the Makefile. Often this target is all, but not always, so you could only run make all install if you knew that was the target.
So ...
#!/bin/sh
if ./configure $*; then
    if make; then
        make install
    fi
fi
or:
./configure $* && make && make install
The $* is included because one often has to provide options to configure.
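For instance, configure options then pass straight through the wrapper (the flags shown are just typical autotools examples):

./install.sh --prefix="$HOME/.local" --disable-shared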
But why not just let people do it themselves? Is this really such a big win?
Firstly, ./configure doesn't always find everything that it needs; in other cases it finds everything it requires but not everything it could use. In that case you would want to know about it (and your ./install.sh script would fail anyway!). The classic example of non-failure with unintended consequences, from my point of view, is compiling large applications like ffmpeg or MPlayer. These will use libraries if they are available, but will compile anyway if they aren't, leaving some options disabled. The problem is that you then discover later that the program was compiled without support for some format or another, requiring you to go back and redo it.
Another thing ./configure does somewhat interactively is give you the option to customise where on the system the application will be installed. Different distributions/environments have different conventions, and you would probably want to stick to the convention on your system. Also, you might want to install it locally (solely for yourself). Traditionally, the ./configure and make steps aren't run as root, while make install (unless it is installed solely for yourself) has to be run as root.
Specific distributions often provide scripts that perform this ./install.sh functionality in a distribution-sensitive manner - for example, source RPMs + spec file + rpmbuild or slackbuilds.
(Footnote: that being said, I agree that ./configure; make; make install; can get extremely tedious.)

C compilation flags from R

Can you set R's C and C++ flags at compilation time when installing with R CMD INSTALL? (Essentially, in this particular case I want to turn off compiler optimization, but ideally there's a general solution.)
I know you can affect some options using --configure-args="...", and I rather optimistically tried --configure-args="disable-optimization", to no avail. Similarly, I could also edit $RHOME/etc/Makeconf, but again this is not really the kind of solution I'm looking for (and not possible where I don't have the relevant write permission).
I define my flags through an autoconf script and with a Makevars file in the package/src directory, if this makes any difference.
Dirk - very helpful discussion (as always), and it definitely pointed me in the right direction. For my specific issue, it turned out that in addition to the Makevars file I had to pass arguments through to configure. I have no idea why this is the case (and from reading around it doesn't seem to be the norm, so maybe I've done something wrong somewhere), but if anyone else has the same problem, using a ~/.R/Makevars combined with the following arguments for configure/INSTALL worked for me.
R CMD INSTALL --configure-args="CFLAGS=-g CXXFLAGS=-g" package.tar.gz
Yes, I use a file ~/.R/Makevars for that. Also handy to set CC and CXX to different compilers when, say, switching gcc versions, or switching to llvm, or ...
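A minimal sketch for the no-optimization case from the question (writing the file from a shell; the flag values are just an example):

mkdir -p ~/.R
cat > ~/.R/Makevars <<'EOF'
CFLAGS = -g -O0
CXXFLAGS = -g -O0
EOF
R CMD INSTALL package.tar.gz    # package compilation now picks up ~/.R/Makevars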
I can confirm that the Makevars file is very useful (especially if you need to use "-L/my/libs" or "-I/my/includes", or other build flags).
For the build, if you want to set an option for the site/machine, you can also change variables in the Makeconf file (/path/R/install/[lib64/R/]etc/Makeconf).
However, if, like me, you still have problems managing and using libraries later, you can also set libraries with the ldpaths file [1]. This file contains the R_LD_LIBRARY_PATH used by R; this variable is the equivalent of the well-known LD_LIBRARY_PATH on Unix [2].
I just added some content (just before the comment on MacOS / Darwin) to this file (/path/R/install/[lib64/R/]etc/ldpaths):
if test -n "${LD_LIBRARY_PATH}"; then
  R_LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:${R_LD_LIBRARY_PATH}"
fi
## This is DYLD_FALLBACK_LIBRARY_PATH on Darwin (OS X) and
Then you will be able to manage your libraries dynamically, e.g. using "environment modules" or "lmod".
Note that you can change many other environment and R variables with the files in that etc directory (Renviron, repositories, javaconf, Rprofile.site, ...).
[1] https://support.rstudio.com/hc/en-us/community/posts/200645248-Setting-up-LD-LIBRARY-PATH-for-a-rsession
[2] http://www.tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html
