I am implementing an idea on top of sqlite3. Every time I want to test my code, I have to compile the whole project. The following is exactly what I do:
sudo make uninstall
sudo make clean
./configure
sudo make
sudo make install
Some of the above commands take a long time. What should I do to save time?
Skip the other steps and run only
sudo make
sudo make install
after you change some source code.
Also, don't use sudo at all. You should be able to run an instance without actually "installing" it anywhere. This is what developers will normally do, rather than having to keep installing code they're working on into the very system they're using.
If you have a dual-core machine, use make -j2 to compile 2 files at a time in parallel. Quad core: make -j4, etc. This helps a lot when a header change forces many files to be recompiled.
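If you don't want to hard-code the core count, a common variant (assuming GNU coreutils' nproc is available) is:
make -j"$(nproc)"    # one job per available CPU core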
And listen to S.Mark: only do the steps you need to do each time. You probably won't need to run the slow ./configure again. If you run/link your tests against the sqlite in your build directory, you don't need make install either, leaving you with just make.
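A minimal sketch of that workflow, run from inside your build directory and assuming a libtool-based sqlite build tree plus a hypothetical test program test.c (paths and linker flags may vary on your system):
make                                            # rebuilds only what changed
gcc -I. -o mytest test.c .libs/libsqlite3.a -lpthread -ldl -lm
./mytest                                        # runs against the freshly built library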
ccache might be your friend.
On Ubuntu (or similar systems), you start with apt-get install ccache and then before you compile, do PATH=/usr/lib/ccache:$PATH. It'll cache stuff in ~/.ccache and likely speed up subsequent compiles.
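A quick sketch of the whole flow (package names and the cache directory may differ between distributions):
sudo apt-get install ccache
export PATH=/usr/lib/ccache:$PATH
make clean && make    # the first build fills the cache
make clean && make    # later builds hit the cache and finish much faster
ccache -s             # show hit/miss statistics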
Background: I compile bitcoind on one system but run it on another. When I compiled bitcoind 0.19.1 some time back using the following method, I was able to run bitcoind and bitcoin-cli on the target system without issue. I think.
./autogen.sh
./configure --disable-wallet --disable-tests --disable-bench --disable-gui --enable-util-tx=no --prefix=$HOME/bitcoind/x64 --exec-prefix=$HOME/bitcoind/x64
make && make install
Today I compiled v0.20.0 using the same method. If I run ./bitcoind -version on the system I compiled the binary it runs fine, but if I take the binary to my target system I get the following error:
./bitcoind: error while loading shared libraries: libboost_filesystem.so.1.67.0: cannot open shared object file: No such file or directory
The binary seemed to be portable last time, and the pre-compiled binary I download from the Bitcoin Core team runs fine.
Note that on the target system libboost-filesystem-dev and libboost-filesystem1.67-dev are not installed; this is likely the source of my error. That said, the pre-compiled binary from the Core team runs, so why doesn't mine?
Can someone help me understand whether I did something wrong or whether I need to add ./configure flags to make the binary more portable? Specifically, what did I likely do differently from the core developers that made my binary fail where theirs works?
EDIT 1: Running ./configure --enable-static or ./configure LDFLAGS=-static does not result in a portable binary either.
Also note that installing the libboost-filesystem library with apt does fix the error.
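One way to see the difference is to compare the dynamic dependencies of the two binaries with ldd (a sketch; the release path is a placeholder):
ldd ./bitcoind                   # my build: should list libboost_filesystem.so.1.67.0
ldd /path/to/release/bitcoind    # the Core team build, for comparison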
Thanks to Andrew Chow for his helpful answer to this on the Bitcoin Stack Exchange. I needed to build the depends as per the depends documentation. Since I'm building for the same platform I'll be running on, I can run make in the depends directory with no arguments other than -j2, which uses two cores. Change the number to however many cores you want to commit to the compile.
cd depends
make -j2
cd ..
./autogen.sh
./configure --prefix=$PWD/depends/x86_64-pc-linux-gnu
make -j2 && make install
I've followed the instructions to install the stable branch of Virtuoso Open Source 7 on Ubuntu 16.04. There don't appear to be any errors throughout the process of —
./autogen.sh
CFLAGS="-O2 -m64"
export CFLAGS
./configure
make
make install
However, when I go to /usr/local/virtuoso-opensource/var/lib/virtuoso/db (which contains only virtuoso.ini) and run —
virtuoso-t -f &
The first time I do this the terminal just vanishes. When I reopen the terminal and run the same command again, it just reads: The program 'virtuoso-t' is currently not installed. You can install it by typing: apt install virtuoso-opensource-6.1-bin.
I've tried installing both the stable and develop branches of 7 from GitHub and both produce the same result. I'd rather use 7, but I tried installing 6 via the Ubuntu package and Conductor wouldn't work for me; not having much luck all round, one of those days.
Thanks for any assistance you can provide.
Sounds like you didn't adjust your $PATH variable after make install.
$PATH should include the path to the directory which contains the virtuoso-t binary, or you can include that path in the launch command, e.g. —
/path/to/virtuoso-t -f -c /usr/local/virtuoso-opensource/var/lib/virtuoso/db/virtuoso.ini &
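For example, assuming the default install prefix of /usr/local/virtuoso-opensource, something like this should work:
export PATH=/usr/local/virtuoso-opensource/bin:$PATH    # add to ~/.bashrc to make this permanent
cd /usr/local/virtuoso-opensource/var/lib/virtuoso/db
virtuoso-t -f &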
(Note that the develop/7 branch is recommended over stable/7 at the moment, due to the number of fixes there.)
I have a package that previously only targeted RPM based distros for which I am now building .deb packages for Debian based distros.
The aim is to simulate a test installation from user-space that is isolated from the system you are building on. The build machine may be multi-user, and you do not want to require root access just to build the software. Many of our tests simulate the installation directory structure already. This is the next step up: simulating an actual installation using the packages that were built.
For the RPM packages I was able to create test installations using:
WSDIR=/where/I/want/my/tests/to/run
rpmdb --initdb --dbpath "$WSDIR"/rpmdb
rpm --relocate /opt="$WSDIR"/opt --dbpath "$WSDIR"/rpmdb -i <package>.rpm
The equivalent in the Debian world is something like:
dpkg --force-not-root --admindir=$WSDIR/dpkg --root=$WSDIR/install --install "$DEB"
However, I am stuck over the equivalent to the rpmdb --initdb step.
Note that I can just unpack the archive using:
dpkg-deb -x "$DEB" $WSDIR/install
But I would prefer to be closer to how a real package is installed.
Also, I don't think this will run the preinstall and postinstall scripts.
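(For completeness, the maintainer scripts can at least be extracted for inspection with dpkg-deb -e, though nothing runs them automatically; a sketch:)
dpkg-deb -e "$DEB" $WSDIR/install/DEBIAN    # extracts preinst, postinst, etc.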
Similar questions have suggested using debootstrap to create a chroot environment, but this creates a completely new installation. As well as being overkill, it is too slow for an automated test. I intend to use this for quick tests of the installation package prior to further testing in actual test environments.
My experiments so far:
(cd $WSDIR/dpkg && mkdir alternatives info parts triggers updates)
cp /var/lib/dpkg/status $WSDIR/dpkg/status
have at best resulted in:
dpkg: error: unable to access dpkg status area: No such file or directory
which does not indicate clearly what is wrong.
So how do you create a dpkg admin directory?
Cross posted as https://superuser.com/questions/1271145/how-do-you-create-a-dpkg-admin-directory
Update 24/11/2017
I've tried copying the dpkg dir from an environment created by cowdancer (which uses debootstrap under the hood) and copying the real one from /var/lib/dpkg, but I still get the same error message, so perhaps the error (and/or the --admindir option) doesn't mean quite what I think it means.
Note that:
sudo dpkg --force-not-root --root=$WSDIR/install --admindir=/var/lib/dpkg --install "$DEB"
does work. So it is something to do with the admin dir.
I've also retitled the question, as "How do you create a dpkg admin directory?" is an interesting question but the answer is not necessarily the solution to my problem.
The minimal way to create a dpkg database is something like this:
$ mkdir -p db/{updates,info}
$ touch db/{status,diversions,statoverride}
If you want to use that as non-root, currently the best way is to use fakeroot.
$ mkdir -p fsys
$ PATH=/sbin:/usr/sbin:$PATH fakeroot dpkg --log=/dev/null --admindir=db --instdir=fsys -i pkg.deb
But take into account that passing --root after --admindir or --instdir will reset those paths, which I think is the problem you have been having here.
Also, using sudo together with --force-not-root does not make much sense. :) And it is definitely less confined than using just fakeroot. In the near future it will be possible to run dpkg fully unprivileged in some local tree.
I eventually found an answer for this. Thanks to Guillem Jover for some of this.
Pasting a copy of it here:
mkdir fake
mkdir fake/install
mkdir -p fake/dpkg/info
mkdir -p fake/dpkg/updates
touch fake/dpkg/status
PATH=/sbin:/usr/sbin:$PATH fakeroot dpkg --force-script-chrootless --log=`pwd`/fake/dpkg.log --root=`pwd`/fake --instdir `pwd`/fake --admindir=`pwd`/fake/dpkg --install *.deb
Some points to note:
--force-not-root is not enough. fakeroot is required.
ldconfig and start-stop-daemon must be on the path.
(hence PATH=/sbin:/usr/sbin:$PATH)
The log file needs to be relocated from the default /var/log/dpkg.log
The order of arguments is significant. If used, --root must come before --instdir and --admindir.
The admindir is supposed to have the installation dir as a prefix.
If the package contains any pre- or post-installation scripts (preinst, postinst), then --force-script-chrootless is required, as these scripts are normally run via chroot(), which fails with "operation not permitted" when attempted under fakeroot.
For a quick test when the dependencies are trivial, you can install directly on the system using 'dpkg -i', then use 'dpkg -P' and 'apt-get autoremove' to purge the package and clean up the dependencies.
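A sketch of that flow (the package name and .deb filename are placeholders):
sudo dpkg -i mypackage_1.0-1_amd64.deb    # install directly on the system
sudo dpkg -P mypackage                    # purge it again after testing
sudo apt-get autoremove                   # clean up any now-unneeded dependencies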
Another, more secure but slower, solution could be to use the autopkgtest package:
https://people.debian.org/~mpitt/autopkgtest/README.package-tests.html
How do I completely wipe (remove) Julia from my system?
Unless you've made changes to the code in packages, you can delete the whole .julia directory when you get into trouble. Either via a file manager, or (on a Unix system) via the command line,
rm -rf ~/.julia
Tim's answer is good; however, you can also be a bit more specific.
I usually do the following (since I'm using v0.5 the path contains v0.5; adjust it for whichever version you are using):
rm -rf ~/.julia/lib/v0.5 ~/.julia/v0.5/<packagename>
Deleting the lib subdirectory gets rid of any precompiled code, which might also be in a bad state.
Every time you compile something from source, you go through the same 3 steps:
$ ./configure
$ make
$ make install
I understand that it makes sense to divide the installation process into different steps, but I don't get why each and every coder on this planet has to write the same three commands again and again just to get one single job done. From my point of view it would make total sense to have an ./install.sh script automatically delivered with the source code which contains the following:
#!/bin/sh
./configure
make
make install
Why would people do the 3 steps separately?
Because each step does different things.
Prepare (set up) the environment for building
./configure
This script has lots of options that you may want to change, like --prefix or --with-dir=/foo. That means every system has a different configuration. ./configure also checks for missing libraries that should be installed; anything wrong here will prevent your application from building. That's why distros install packages in different places: every distro thinks certain libraries and files belong in certain directories. The instructions say just to run ./configure, but in practice you will almost always pass it some options.
For example, have a look at the Arch Linux packages site. There you'll see that each package uses different configure parameters (assuming it uses autotools as the build system).
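If you are not sure which options a particular package supports, the configure script can list them itself:
./configure --help | less    # shows --prefix, --with-*/--without-* and feature switches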
Building the system
make
This is actually make all by default, and every project's make does different things: some just build, some run tests after building, some check sources out of external SCM repositories. Usually you don't have to give any parameters, but again, some packages run things differently.
Install to the system
make install
This installs the package in the place specified by configure. If you want, you can tell ./configure to point to your home directory. However, many configure defaults point to /usr or /usr/local. That means you actually have to use sudo make install, because only root can copy files to /usr and /usr/local.
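For instance, a sketch of an install into your home directory, which avoids sudo entirely (the prefix here is just an example):
./configure --prefix="$HOME/.local"
make
make install    # no root needed; files end up under ~/.local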
Now you see that each step is a prerequisite for the next step. Each step is a preparation to make things work in a problem-free flow. Distros use this model to build packages (RPM, deb, etc.).
Here you'll see that each step is actually a different stage. That's why package managers have different wrappers. Below is an example of a wrapper that lets you build the whole package in one step. But remember that each application has a different wrapper (these wrappers have names like spec, PKGBUILD, etc.):
def setup:
    ...  # use ./configure if autotools is used
def build:
    ...  # use make if autotools is used
def install:
    ...  # use make install if autotools is used
Here one package can use autotools, which means ./configure, make and make install, while another one can use SCons, a Python-related setup, or something different.
As you can see, splitting out each stage makes things much easier for maintenance and deployment, especially for package maintainers and distros.
First, it should be ./configure && make && make install since each depends on the success of the former. Part of the reason is evolution and part of the reason is convenience for the development workflow.
Originally, most Makefiles would only contain the commands to compile a program, and installation was left to the user. An extra rule allows make install to place the compiled output in a place that might be correct; there are still plenty of good reasons you might not want to do this, including not being the system administrator or not wanting to install it at all. Moreover, if I am developing the software, I probably don't want to install it. I want to make some changes and test the version sitting in my directory. This becomes even more salient if I'm going to have multiple versions lying around.
./configure detects what is available in the environment and/or desired by the user in order to determine how to build the software. This is not something that needs to change very often and can take some time. Again, if I am a developer, it's not worth the time to reconfigure constantly. More importantly, since make uses timestamps to rebuild modules, if I rerun configure there is a possibility that flags will change, and now some of the components in my build will be compiled with one set of flags and others with a different set, which might lead to different, incompatible behaviour. So long as I don't rerun configure, I know that my compilation environment remains the same even if I change my sources. If I rerun configure, I should run make clean first, to remove any built objects and ensure things are built uniformly.
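A sketch of that rule of thumb, with hypothetical flags:
./configure CFLAGS="-O0 -g"    # flags changed, so...
make clean                     # ...remove objects built with the old flags
make                           # rebuild everything uniformly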
The only case where the three commands are run in a row is when users install the program or a package is built (e.g., Debian's debuild or Red Hat's rpmbuild). And that assumes the package can be given a plain configure, which is not usually the case for packaging, where at least --prefix=/usr is desired. And packagers are likely to have to deal with fake roots when doing the make install part. Since there are lots of exceptions, making ./configure && make && make install the rule would be inconvenient for a lot of people who do it on a far more frequent basis!
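For what it's worth, packaging tools usually stage the install into a scratch directory instead of the live system; a sketch using the DESTDIR convention supported by most autotools-generated Makefiles:
./configure --prefix=/usr
make
make DESTDIR="$PWD/pkgroot" install    # files land under ./pkgroot/usr, not /usr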
configure may fail if it finds that dependencies are missing.
make runs the default target, the first one listed in the Makefile. Often this target is all, but not always. So you could only run make all install if you knew that all was the target.
So ...
#!/bin/sh
if ./configure $*; then
    if make; then
        make install
    fi
fi
or:
./configure $* && make && make install
The $* is included because one often has to provide options to configure.
But why not just let people do it themselves? Is this really such a big win?
Firstly, ./configure doesn't always find everything it needs; in other cases it finds everything it requires but not everything it could use. In the first case you would want to know about it (and your ./install.sh script would fail anyway!). The classic example of non-failure with unintended consequences, from my point of view, is compiling large applications like ffmpeg or mplayer: these will use libraries if they are available but will compile anyway if they aren't, leaving some options disabled. The problem is that you discover later that it was compiled without support for some format or other, requiring you to go back and redo it.
Another thing ./configure does somewhat interactively is giving you the option to customise where on the system the application will be installed. Different distributions/environments have different conventions, and you would probably want to stick to the convention on your system. Also, you might want to install it locally (solely for yourself). Traditionally the ./configure and make steps aren't run as root, while make install (unless it is installed solely for yourself) has to be run as root.
Specific distributions often provide scripts that perform this ./install.sh functionality in a distribution-sensitive manner - for example, source RPMs + spec file + rpmbuild or slackbuilds.
(Footnote: that being said, I agree that ./configure; make; make install; can get extremely tedious.)