I am working on Ubuntu 18.10 and want to recompile several of my libraries:
zlib, jasper, libpng, hdf5, netcdf
Even with
make distclean
I see that the old lib and include files still remain. Can I assume that a recompilation replaces these files, or should I remove them with sudo apt-get purge/remove?
I am not sure whether removing them manually is safe: it might not remove all of them, and/or it may remove other unrelated files.
I would appreciate some guidance.
Calling make distclean will clean up enough that you will get a fresh compile when doing make afterwards.
The additional target maintainer-clean is available which may remove even more files, but you really shouldn't need to use it.
I assume that if you have kept all the source folders and built your libraries within them, then make distclean, if properly implemented, will clean out all the compiled artifacts and you can re-build everything.
It is quite possible, however, that make distclean does not touch the installed files: if make install was executed, the built executables/libraries/headers were also linked or copied to system paths such as /usr/bin, /usr/local/..., or whatever directories your system uses.
Nevertheless, if you re-build everything and then (after a successful re-compile) run make install again, the new versions of the binaries/libraries will overwrite the old ones.
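For illustration, a full clean-rebuild-reinstall cycle for one of these libraries might look like this (a sketch, assuming a configure-based source tree; the directory name and options are illustrative):
cd zlib-src                # your unpacked source folder (name is illustrative)
make distclean             # remove all previously built artifacts
./configure                # re-detect the environment; add options as needed
make
sudo make install          # overwrites the previously installed files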
On Debian, I had a bunch of cruft installed in /usr/lib/sbcl/site-systems that wouldn't load because the FASLs didn't match the version of SBCL that is actually installed.
For some reason, none of these files were associated with any Debian package (this is an old computer that has been running the same Debian install for over a decade; it's on Debian Sid).
I deleted the bad systems one at a time, and for most of them, Quicklisp did the right thing and downloaded the Quicklisp version. Sometimes, ASDF would insist that the system should exist at its previous path, but restarting SBCL got past that problem.
But for one system, ASDF has persistently cached the location of its .asd file as being in the /usr/lib/sbcl/site-systems/ directory. Loading this system is impossible because ASDF will not look anywhere else, even after restarting SBCL.
I tried looking in all the paths specified in various config files under /etc/common-lisp. None of those files contain a reference to the now-missing library.
I've resorted to doing a grep -rli across all the files under /usr. I don't expect that to complete in less than a day, and it might not find anything, in which case I'll be forced to grep the whole hard drive, which might take a whole week. Hopefully, the cache isn't compressed, because then I'll never find it.
Does anyone happen to know how ASDF persists the paths of files?
After a lot of excruciating debugging, I discovered that the files in /usr/lib/sbcl/site-systems/ actually do exist. They're broken symlinks.
The files I deleted were in the similar-looking path /usr/lib/sbcl/site/, to which the symlinks pointed.
Removing the symlinks fixed all the loading errors.
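If you suspect the same problem, broken symlinks are easy to spot without any debugging; for example (a sketch, assuming GNU find, with the path from above):
find /usr/lib/sbcl/site-systems -xtype l    # lists symlinks whose targets no longer exist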
A couple of ideas about troubleshooting Quicklisp, particularly if you're getting bizarre behavior:
If you use Quicklisp for any length of time, you'll probably eventually use local packages, found by default in ~/quicklisp/local-projects. It's valid to symlink your projects into that directory. If you ever rename one of your projects, of course, don't forget to create a new symlink and delete the old one.
Likewise, if you rename a local project, also delete the system index, which Quicklisp will then recreate the next time it runs: ~/quicklisp/local-projects/system-index.txt. It doesn't hurt to delete it from time to time just to keep your system fresh (see the sketch after this list).
Your *.fasl files can become stale too; deleting the system cache forces Quicklisp to recompile everything. On an Ubuntu system running SBCL, that would mean deleting the contents of:
rm -rf ~/.cache/common-lisp
Try updating the Quicklisp client:
(ql:update-client)
Potentially deleting and reinstalling Quicklisp itself at ~/quicklisp can be necessary. (It's possible to inadvertently edit source files when you're debugging and using Swank's lookup-definition feature, breaking installed packages that used to work. Not that I would ever have done something as careless as that.)
Also, don't forget that ASDF descends into directories looking for *.asd files. If you have a stray one that's improperly structured, it can wreak havoc on your build system. (See my comment above about registering local projects with Quicklisp.)
Finally, don't forget to check your Lisp init file, e.g. ~/.sbclrc, for any debugging or quick-and-dirty hacks you might have left there and forgotten about.
These are all things that have worked for me at one time or another; hopefully I'm not perpetuating legend and cant about things that have long since been fixed!
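As a quick reference, the cleanup steps above might be scripted like this (a sketch, assuming the default Quicklisp and SBCL cache paths mentioned earlier):
# force Quicklisp to rebuild its index of local projects
rm -f ~/quicklisp/local-projects/system-index.txt
# drop all compiled .fasl files so everything is recompiled from source
rm -rf ~/.cache/common-lisp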
I've inherited a fairly large project that is built using autoconfigure/automake (the configure.ac/Makefile.am files have their own issues, but that's a separate question).
My problem is that a top-level build plus install generates several static and dynamic libs as well as binaries. So far so good. The problem is that 'make install' will indiscriminately copy over every single one of those libs/bins. (This takes a while.)
I'd like it to only copy over libs/bins that have changed - potentially by comparing the md5sum of the target and source files.
How can I hook this up in my configure.ac/Makefile.am?
The actual program that copies the files is install (usually /usr/bin/install); it is defined in the INSTALL Make variable.
Your install implementation might support the -C flag:
-C, --compare
compare each pair of source and destination files,
and in some cases, do not modify the destination at all
So you could try to provide a script that does what you want (compare the source file with the destination file, and only copy if needed) by overriding this variable.
You could also just inject the -C flag to see if it gives you any speedup (I tend to agree with ldav1s' comment that it might not):
make install INSTALL="/usr/bin/install -C"
Note that install accepts quite a number of arguments, and if you are going to re-implement a compatible script, you might have to implement some subset thereof.
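For illustration, a minimal sketch of such a compare-then-copy wrapper (the script name install-if-changed.sh is hypothetical, and it only handles the plain two-argument form; a real replacement would also have to honor flags like -m, -o, -g and -d):
#!/bin/sh
# install-if-changed.sh (hypothetical): skip the copy when source and
# destination are already byte-identical.
src=$1
dst=$2
if [ -f "$dst" ] && cmp -s "$src" "$dst"; then
    exit 0    # unchanged; leave the destination alone
fi
exec /usr/bin/install "$src" "$dst"
You would then hook it in via the variable described above, e.g. make install INSTALL="$PWD/install-if-changed.sh".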
I wrote a .spec file on RHEL and I am building an RPM using rpmbuild. I need ideas on how to handle the situation below.
My RPM creates an empty log directory inside the installation folder when it installs for the first time, like below:
/opt/MyInstallation-1.0.0-1/          (some executables)
/opt/MyInstallation-1.0.0-1/lib/      (shared objects, .so files)
/opt/MyInstallation-1.0.0-1/config/   (XML and custom configuration files, .xml, etc.)
/opt/MyInstallation-1.0.0-1/log/      (this is where the application writes logs)
When my RPM upgrades MyInstallation-1.0.0-1 to MyInstallation-1.0.0-2, for example, everything works as I wanted.
But my question is: how do I preserve the log files written under MyInstallation-1.0.0-1, or, more precisely, copy the log directory over to MyInstallation-1.0.0-2?
I believe that if you tag the directory as %config, RPM expects that the user will have files in there, so it will leave it alone.
I found a solution, or workaround, to this by trial and error :)
I am using rpmbuild version 4.8.0 on RHEL 6.3 x86_64. I believe it will work on other distros as well.
If you install under one name only, like "MyInstallation", rather than "MyInstallation-<version>-<build number>", and create the log directory as a standard directory with no additional flags on it (see the original question for the scenario), then whenever you upgrade, you normally don't touch the log directory and RPM will leave its contents as they are. All you have to do is ensure that you keep the line below in the %install section.
%install
install --directory $RPM_BUILD_ROOT%{_prefix}/%{name}/log
Here, %{_prefix} and %{name} are macros; they have nothing to do with the underlying concept.
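For completeness, a matching %files entry might look like the sketch below (my addition, based on standard RPM behavior rather than the original answer): %dir makes RPM own the directory itself without claiming the files inside it, so the log files the application writes there are untouched on upgrade.
%files
%dir %{_prefix}/%{name}/log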
Regarding config files, the following is a very precise table that will help you guard your config files. Again, this rule can't be applied to the logs our applications create.
http://www-uxsup.csx.cam.ac.uk/~jw35/docs/rpm_config.html
How do I completely wipe (remove) Julia from my system?
Unless you've made changes to the code in packages, you can delete the whole .julia directory when you get into trouble. Either via a file manager, or (on a Unix system) via the command line,
rm -rf ~/.julia
Tim's answer is good, however you can also be a bit more specific.
I usually do the following (since I'm using v0.5, the path contains v0.5; that will depend on what version you are using):
rm -rf ~/.julia/lib/v0.5 ~/.julia/v0.5/<packagename>
Deleting the lib subdirectory gets rid of any precompiled code, which might be also in a bad state.
Every time you compile something from source, you go through the same 3 steps:
$ ./configure
$ make
$ make install
I understand that it makes sense to divide the installation process into different steps, but I don't get why each and every coder on this planet has to write the same three commands again and again just to get one single job done. From my point of view it would make total sense to have an ./install.sh script automatically delivered with the source code which contains the following text:
#!/bin/sh
./configure
make
make install
Why would people do the 3 steps separately?
Because each step does different things
Preparing (setting up) the environment for building
./configure
This script has lots of options that you may need to change, like --prefix or --with-dir=/foo. That means every system builds with a different configuration. ./configure also checks for missing libraries that need to be installed; anything wrong here will cause your application not to build. That's why distros have packages that are installed in different places: every distro thinks it's best to install certain libraries and files into certain directories. The instructions may just say to run ./configure, but in practice you should almost always adjust its options.
For example have a look at the Arch Linux packages site. Here you'll see that any package uses a different configure parameter (assume they are using autotools for the build system).
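For instance, a typical invocation might look like this (a sketch; --with-dir is just the illustrative option from above, and the real options differ per package, so check ./configure --help):
$ ./configure --prefix=/usr/local --with-dir=/foo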
Building the system
make
This is actually make all by default, and every project's make has different actions to perform. Some do the building, some run tests after building, some do a checkout from external SCM repositories. Usually you don't have to give any parameters, but again, some packages execute things differently.
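One commonly useful parameter, assuming GNU make and a build that is safe to parallelize (not every package's is):
$ make -j"$(nproc)"    # run the build on all available CPU cores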
Installing to the system
make install
This installs the package in the place specified with configure. If you want, you can tell ./configure to point to your home directory. However, lots of configure options point to /usr or /usr/local. That means you then have to use sudo make install, because only root can copy files to /usr and /usr/local.
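For example, to avoid needing root at all, you can point the prefix at your home directory (a sketch; ~/.local is just one common choice):
$ ./configure --prefix="$HOME/.local"
$ make
$ make install    # no sudo needed; everything lands under ~/.local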
Now you see that each step is a prerequisite for the next step, and each is a preparation to make things work in a problem-free flow. Distros use this same model to build packages (like RPM, deb, etc.).
Here you'll see that each step is actually a different state. That's why package managers have different wrappers. Below is an example of a wrapper that lets you build the whole package in one step. But remember that each application has a different wrapper (actually these wrappers have a name like spec, PKGBUILD, etc.):
def setup:
... #use ./configure if autotools is used
def build:
... #use make if autotools is used
def install:
... #use make install if autotools is used
One package can use autotools, which means ./configure, make and make install; another can use SCons, a Python-related setup, or something different.
As you can see, splitting out each state makes things much easier for maintenance and deployment, especially for package maintainers and distros.
First, it should be ./configure && make && make install since each depends on the success of the former. Part of the reason is evolution and part of the reason is convenience for the development workflow.
Originally, most Makefiles would only contain the commands to compile a program, and installation was left to the user. An extra rule allows make install to place the compiled output in a place that might be correct; there are still plenty of good reasons you might not want to do this, including not being the system administrator or not wanting to install it at all. Moreover, if I am developing the software, I probably don't want to install it: I want to make some changes and test the version sitting in my directory. This becomes even more salient if I'm going to have multiple versions lying around.
./configure goes and detects what is available in the environment and/or is desired by the user, to determine how to build the software. This is not something that needs to change very often, and it can take some time. Again, if I am a developer, it's not worth the time to reconfigure constantly. More importantly, since make uses timestamps to rebuild modules, if I rerun configure there is a possibility that flags will change, and now some of the components in my build will be compiled with one set of flags and others with a different set of flags that might lead to different, incompatible behaviour. So long as I don't rerun configure, I know that my compilation environment remains the same even if I change my sources. If I do rerun configure, I should run make clean first, to remove any built sources and ensure things are built uniformly.
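In other words, the safe sequence after changing the configuration is something like:
$ make clean                  # drop objects built with the old flags
$ ./configure [options...]    # re-detect with the new configuration
$ make                        # everything is now built with one set of flags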
The only case where the three commands are run in a row is when users install the program or a package is built (e.g., Debian's debuild or RedHat's rpmbuild). And that assumes that the package can be given a plain configure, which is not usually the case for packaging, where at least --prefix=/usr is desired. And packagers are likely to have to deal with fake roots when doing the make install part. Since there are lots of exceptions, making ./configure && make && make install the rule would be inconvenient for a lot of people who do it on a far more frequent basis!
configure may fail if it finds that dependencies are missing.
make runs a default target, the first one listed in the Makefile. Often this target is all, but not always. So you could only run make all install if you knew that all was indeed the target.
So ...
#!/bin/sh
if ./configure "$@"; then
    if make; then
        make install
    fi
fi
or:
./configure "$@" && make && make install
The "$@" is included because one often has to provide options to configure.
But why not just let people do it themselves? Is this really such a big win?
Firstly, ./configure doesn't always find everything that it needs, or in other cases it finds everything it requires but not everything it could use. In that case you would want to know about it (and your ./install.sh script would fail anyway!). The classic example of non-failure with unintended consequences, from my point of view, is compiling large applications like ffmpeg or mplayer. These will use libraries if they are available, but will compile anyway if they aren't, leaving some options disabled. The problem is that you then discover later that it was compiled without support for some format or another, thus requiring you to go back and redo it.
Another thing ./configure does somewhat interactively is giving you the option to customise where on the system the application will be installed. Different distributions/environments have different conventions, and you would probably want to stick to the convention on your system. Also, you might want to install it locally (solely for yourself). Traditionally the ./configure and make steps aren't run as root, while make install (unless it is installed solely for yourself) has to be run as root.
Specific distributions often provide scripts that perform this ./install.sh functionality in a distribution-sensitive manner - for example, source RPMs + spec file + rpmbuild or slackbuilds.
(Footnote: that being said, I agree that ./configure; make; make install; can get extremely tedious.)