Installing cross-compiled Debian packages to a fake "rootfs" with dpkg-deb

The setup I have is like this: I have two sets of libraries that are compiled for amd64 (PC) and armelx (ARM). Both are used to cross-compile some software on a build machine.
The first set (amd64) can be updated without hassle by updating the apt repository and using apt-get install on the build machine. The packages for ARM, however, I don't want to install with apt, because it does not support installing to a different directory. If I installed them to the default directories, the two versions could not coexist. Right?
So far, the build machine was updated manually each time there was a new version of the packages, simply by extracting them with dpkg -x to a dedicated "fake" rootfs directory. This is where the compiler would also look when cross-compiling other software. The problem is, there is no information about these extracted packages or their versions anywhere on the system, right? It should have ended up in the status file.
My thought was to install these packages into this rootfs dir with dpkg -i <package.deb> --root=<rootfs>. Would this work? I have a feeling it will not, because the deb packages have no pre/post-install/remove scripts, so it may work for a virgin install somehow, but not for upgrading? Also, what must the rootfs directory structure look like, and what must it contain, for this to work even the first time? Is there a tool to help with this?
Thanks.

Once you have a base armel Debian system, you can actually enter it and run the armel code inside it using something like QEMU. The qemu-arm-static tool (in the qemu-user-static package) can make use of the binfmt_misc capability in Linux so that ARM executables are run directly under the QEMU ARM user-mode emulator. So you can run dpkg, apt-get, and so on inside the armel "rootfs" while running on amd64 hardware.
Example:
my_arm_system=/mnt/arm_system
sudo cp /usr/bin/qemu-arm-static "$my_arm_system/usr/bin/"
sudo chroot "$my_arm_system" apt-get update
sudo chroot "$my_arm_system" apt-get install $somepkg
sudo chroot "$my_arm_system" /bin/bash
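If ARM binaries inside the chroot fail with "Exec format error", the binfmt handler is probably not registered. Assuming the binfmt-support package is installed on the host (an assumption; it usually comes in alongside qemu-user-static), you can check its status with:
update-binfmts --display qemu-arm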
As for setting up the base armel system in the first place: Debootstrap is the typical method for setting up a Debian base system, whether in a chroot or otherwise. You can use it for installing a base system of a different architecture, but it takes a few extra steps:
distro=jessie # or whatever
echo "Debootstrap phase 1"
sudo mkdir "$my_arm_system"
sudo debootstrap --arch=armel --verbose --foreign "$distro" "$my_arm_system"
sudo cp /usr/bin/qemu-arm-static "$my_arm_system"/usr/bin/
echo "Debootstrap phase 2"
sudo chroot "$my_arm_system" /debootstrap/debootstrap --second-stage
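At this point the chroot may not have any APT sources configured yet. As a sketch (the mirror URL is an assumption; adjust to your mirror), you can write one before running apt-get update inside the chroot:
echo "deb http://deb.debian.org/debian $distro main" | sudo tee "$my_arm_system/etc/apt/sources.list"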
Multistrap is another tool that might be useful; it is intended for setting up Debian environments of one architecture on a host of a different architecture, or for using more complicated APT source combinations. It's not perfect, as it doesn't follow all the deb installation "rules" exactly. It takes some shortcuts/deviations in order to make its job reasonably possible.
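For reference, a minimal multistrap configuration might look like this (a sketch only: the paths, suite, and mirror are assumptions; see the multistrap man page for the full format):
[General]
arch=armel
directory=/mnt/arm_system
cleanup=true
noauth=true
bootstrap=Debian
aptsources=Debian

[Debian]
packages=apt
source=http://deb.debian.org/debian
suite=jessie
You would then run sudo multistrap -f <config-file> to build the tree.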

Related

copy libpq.5.dylib to /usr/lib/libpq.5.dylib

I can't load packages in R because the file libpq.5.dylib is not in /usr/lib/libpq.5.dylib. It is in /usr/local/Cellar/libpq/13.0/lib/libpq.5.dylib
I tried this line: sudo ln -s /usr/local/Cellar/libpq/13.0/lib/libpq.5.dylib /usr/lib/libpq.5.dylib but I get this response: ln: /usr/lib/libpq.5.dylib: Operation not permitted
What can I do to get the file in /usr/lib/libpq.5.dylib without causing issues? This solution suggests that I may face problems down the line so I don't understand what to do.
You really don't want it in /usr/lib. Apple has declared that directory off-limits, and on newer macOS versions it lives on a read-only volume. Unless you're willing to go into recovery mode and manually tamper with the volume (and possibly repeat that on future OS updates), this is not the way to go.
Instead, let's address the core issue:
Dynamic libraries on macOS embed their own install path inside the binary, and the linker copies that into binaries linking against them. This information can be changed with install_name_tool (see man install_name_tool).
1. Examine the install name of the dylib:
otool -l /usr/local/Cellar/libpq/13.0/lib/libpq.5.dylib | fgrep -A2 LC_ID_DYLIB
If the printed path already points to the dylib itself (or to a path that is symlinked to it), use this path as [new_path] below and skip step 2.
2. If the dylib's install name does not point back to itself, run this:
sudo install_name_tool -id /usr/local/Cellar/libpq/13.0/lib/libpq.5.dylib /usr/local/Cellar/libpq/13.0/lib/libpq.5.dylib
and use /usr/local/Cellar/libpq/13.0/lib/libpq.5.dylib as [new_path] below.
3. For each binary that links against the dylib, run:
sudo install_name_tool -change /usr/lib/libpq.5.dylib [new_path] [path_to_binary]
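To verify that the change took effect, you can list the binary's linked libraries again ([path_to_binary] is the same placeholder as above):
otool -L [path_to_binary] | grep libpq
The entry that previously read /usr/lib/libpq.5.dylib should now show [new_path].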
I had the same issue building a container through Docker for API use: RPostgres was installed, but the library couldn't be loaded, with the same error message.
Since I had Postgres installed on my local machine, I figure the problem was masked there, which is why I never saw the message locally. Here's how I solved it in my Dockerfile, verified on a machine with nothing R-related installed:
RUN apt-get update && apt-get install libpq5 -y
So executing apt-get update && apt-get install libpq5 -y in your terminal should do the trick. Light and efficient.
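For context, a minimal Dockerfile sketch (the r-base base image and the RPostgres install step are assumptions; adapt to your image):
FROM r-base
# libpq5 provides the shared library that RPostgres loads at runtime
RUN apt-get update && apt-get install -y libpq5
RUN Rscript -e 'install.packages("RPostgres")'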
In my case, R tried to load libpq.5.dylib via the symlink /opt/homebrew/opt/postgresql/lib/libpq.5.dylib but could not find the file the link points to, so you need to update the link:
# TODO: get this from the error, after "Library not loaded:"
SYMLINK_PATH="/opt/homebrew/opt/postgresql/lib/libpq.5.dylib"
# TODO: find this on your machine; the version may be different from mine
DESTINATION_PATH="/opt/homebrew/opt/postgresql/lib/postgresql@14/libpq.5.dylib"
sudo mv "$SYMLINK_PATH" "$SYMLINK_PATH.old"
sudo ln -s "$DESTINATION_PATH" "$SYMLINK_PATH"
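You can then confirm that the new symlink resolves to an existing file:
ls -l "$SYMLINK_PATH"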

How do you create a fake install of a debian package for use in testing?

I have a package that previously targeted only RPM-based distros, for which I am now building .deb packages for Debian-based distros.
The aim is to simulate a test installation from user space that is isolated from the system you are building on. The build machine may be multi-user, and you do not want to require root access just to build the software. Many of our tests already simulate the installation directory structure; this is the next step up: simulating an actual installation using the packages that were built.
For the RPM packages I was able to create test installations using:
WSDIR=/where/I/want/my/tests/to/run
rpmdb --initdb --dbpath "$WSDIR"/rpmdb
rpm --relocate /opt="$WSDIR"/opt --dbpath $WSDIR/rpmdb -i <package>.rpm
The equivalent in the Debian world is something like:
dpkg --force-not-root --admindir=$WSDIR/dpkg --root=$WSDIR/install --install "$DEB"
However, I am stuck over the equivalent to the rpmdb --initdb step.
Note that I can just unpack the archive using:
dpkg-deb -x "$DEB" $WSDIR/install
But I would prefer to be closer to how a real package is installed.
Also I don't think this will run preinstall and postinstall scripts.
Similar questions have suggested using debootstrap to create a chroot environment, but this creates a complete new installation. As well as being overkill, it is too slow for an automated test. I intend to use this for quick tests of the installation package prior to further testing in actual test environments.
My experiments so far:
(cd $WSDIR/dpkg && mkdir alternatives info parts triggers updates)
cp /var/lib/dpkg/status $WSDIR/dpkg/status
have at best resulted in:
dpkg: error: unable to access dpkg status area: No such file or directory
which does not indicate clearly what is wrong.
So how do you create a dpkg admin directory?
Cross posted as https://superuser.com/questions/1271145/how-do-you-create-a-dpkg-admin-directory
Update 24/11/2017
I've tried copying the dpkg dir from an environment created by cowdancer (which uses debootstrap under the hood), and copying the real one from /var/lib/dpkg, but I still get the same error message, so perhaps the error (and/or the --admindir option) doesn't mean quite what I think it means.
Note that:
sudo dpkg --force-not-root --root=$WSDIR/install --admindir=/var/lib/dpkg --install "$DEB"
does work. So it is something to do with the admin dir.
I've also retitled the question, as "How do you create a dpkg admin directory?" is an interesting question but its answer is not necessarily the solution to my problem.
The minimal way to create a dpkg database is something like this:
$ mkdir -p db/{updates,info}
$ touch db/{status,diversions,statoverride}
If you want to use that as non-root, currently the best way is to use fakeroot.
$ mkdir -p fsys
$ PATH=/sbin:/usr/sbin:$PATH fakeroot dpkg --log=/dev/null --admindir=db --instdir=fsys -i pkg.deb
But take into account that passing --root after --admindir or --instdir will reset those paths, which I think is the problem you have been having here.
Also, using sudo and --force-not-root does not make much sense? :) And it is definitely less confined than using just fakeroot. In the near future it will be possible to run dpkg fully unprivileged in some local tree.
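To illustrate the ordering pitfall: --root=<dir> implicitly sets instdir to <dir> and admindir to <dir>/var/lib/dpkg, so it silently overrides any --admindir or --instdir given before it. A sketch (pkg.deb is a placeholder):
# wrong: --root resets the admindir and instdir given earlier
fakeroot dpkg --admindir=db --instdir=fsys --root=fsys -i pkg.deb
# right: give --root first, then override the pieces you need
fakeroot dpkg --root=fsys --admindir=db --instdir=fsys -i pkg.deb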
I eventually found an answer for this. Thanks to Guillem Jover for some of this.
Pasting a copy of it here:
mkdir fake
mkdir fake/install
mkdir -p fake/dpkg/info
mkdir -p fake/dpkg/updates
touch fake/dpkg/status
PATH=/sbin:/usr/sbin:$PATH fakeroot dpkg --force-script-chrootless --log=`pwd`/fake/dpkg.log --root=`pwd`/fake --instdir `pwd`/fake --admindir=`pwd`/fake/dpkg --install *.deb
Some points to note:
--force-not-root is not enough. fakeroot is required.
ldconfig and start-stop-daemon must be on the path.
(hence PATH=/sbin:/usr/sbin:$PATH)
The log file needs to be relocated from the default /var/log/dpkg.log
The order of arguments is significant. If used, --root must come before --instdir and --admindir.
The admindir is supposed to have the installation dir as a prefix.
If the package contains any pre- or post-installation scripts (preinst, postinst), then --force-script-chrootless is required, as these scripts are normally run via chroot(), which gives "operation not permitted" when attempted under fakeroot.
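Once a package is installed this way, you can query the private database just as you would the system one, e.g. to list what the fake install contains:
dpkg --admindir=`pwd`/fake/dpkg -l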
For a quick test of trivial dependencies, you can install directly on the system using dpkg -i, then use dpkg -P and apt-get autoremove to purge the package and clean up the dependencies.
Another, more secure but slower, solution could be to use the autopkgtest package:
https://people.debian.org/~mpitt/autopkgtest/README.package-tests.html

libcrypto.so.10: cannot open shared object file: No such file or directory

I am trying to install the ODBC driver for Debian according to these instructions: https://blog.afoolishmanifesto.com/posts/install-and-configure-the-ms-odbc-driver-on-debian/
However trying to run:
sqlcmd -S localhost
I get the error
libcrypto.so.10: cannot open shared object file: No such file or directory
What could be the cause?
So far I have tried
1.
$ cd /usr/lib
$ sudo ln -s libssl.so.0.9.8 libssl.so.10
$ sudo ln -s libcrypto.so.0.9.8 libcrypto.so.10
2.
Adding /usr/local/lib64 to the /etc/ld.so.conf.d/doubango.conf file.
3.
sudo apt-get update
sudo apt-get install libssl1.0.0 libssl-dev
cd /lib/x86_64-linux-gnu
sudo ln -s libssl.so.1.0.0 libssl.so.10
sudo ln -s libcrypto.so.1.0.0 libcrypto.so.10
4. sudo apt-get install libssl0.9.8:i386
None of these have helped.
As I'm quite familiar with Debian and programming, here is some advice:
if you have questions about setting up your system, ask on Super User and/or (if your question is specific to a Un*x flavour) on Unix & Linux
when fiddling around with symlinks to shared libraries, you should have a thorough understanding of what you are doing. These files are named for a reason - and the reason is to protect you (the user of the system) from weird crashes caused by an application using a wrong/incompatible library.
a tutorial that tells you to do so should give a proper warning and an explanation of what you are about to do.
So, why are these instructions in the tutorial you are following?
The application you are trying to run has been linked against libcrypto.so.
On the developer machine (the one used to produce the application binary), libcrypto.so was a symlink to libcrypto.so.10, but this is missing on Debian: maybe because the library has been removed (and replaced by a new and incompatible version), or because Debian uses a different naming scheme than the system that was used to compile the application.
If it is the former, then you cannot solve the issue by using symlinks.
You have to get the right library (or the application linked against the correct libraries).
If it is the latter, you may get away with symlinking the expected library name to the correct library file found on your system. (This assumes that the only difference between the two systems is indeed the so-naming scheme.)
So, how to do it?
First of all, find out which libraries your application was actually linked against, and which of those libraries are missing:
$ ldd /path/to/my/app | grep -i "not found"
libfoo.so.10 => not found
Then find out whether you have a (hopefully compatible) library on your system. A good place to start is /usr/lib/, but not so recently Debian started moving libraries to /usr/lib/<host-triplet>, where <host-triplet> describes a target architecture. If your application was indeed built for the architecture you are running on (e.g. linux-amd64), you can get the string by running something like:
$ gcc -print-multiarch
Imagine you discover that you have /usr/lib/x86_64-linux-gnu/libfoo.so.1.0.0.
If you have good reason to believe that this can act as a replacement for libfoo.so.10, you can make the found library available to your application by means of a symlink, e.g.
# cd /usr/local/lib/
# ln -s /usr/lib/x86_64-linux-gnu/libfoo.so.1.0.0 libfoo.so.10
Finally, you might need to refresh the dynamic linker's cache so it starts using the new library, by running ldconfig as root/superuser.
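For example (libfoo is the hypothetical library name used above):
# ldconfig
# ldconfig -p | grep libfoo
The second command prints the matching entries of the linker cache, so you can check that libfoo.so.10 is now resolvable.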

Compiling OpenCL on Ubuntu

My programming experience is about one year of C/C++ from high school, but I did my research and wrote a simple program with OpenCL a few months ago. I was able to compile and run it on an Apple computer relatively easily with g++ and the -framework option. Now I'm on my Ubuntu machine and I have no idea how to compile it. The correct drivers have been downloaded, along with ATI's Stream SDK (I have an ATI Radeon HD 5870). Any help would be appreciated!
Try
locate libOpenCL.so
If it is in one of the standard directories (most likely /usr/lib or /usr/local/lib), you need to replace "-framework OpenCL" with "-lOpenCL". If g++ cannot find the lib, you can tell g++ to look in a specific directory by adding "-L/path/to/library".
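For example, a complete compile line might look like this (main.cpp and the library path are assumptions; point -L at wherever your SDK installed libOpenCL.so):
g++ main.cpp -o myclprogram -L/usr/local/lib -lOpenCL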
I wish I had my Linux box here so I could be more helpful... It is probably best if you re-download the ati-stream-sdk; after extracting it, open the terminal, cd /path/to/extracted/files, and in that directory execute make && sudo make install
make - you probably know this from Windows; it compiles whatever needs to be compiled
&& - chains commands together; the following command will only be executed if the first command succeeded
sudo make install - this puts the files in the expected places (sudo executes a command with superuser privileges; you will have to enter your password)
Hope that helps.
You might be missing the dynamic libraries from the dynamic linker configuration.
Search for where the libraries are. Most likely /usr/lib, or /usr/local/lib.
Make sure the path location is also configured at one of these places:
LD_LIBRARY_PATH - you can set it in your shell environment, e.g. in .bashrc
/etc/ld.so.conf - you will need to run ldconfig to update the cache, and root access is required to change the file (see the example below)
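A sketch of the second approach (the file name opencl.conf is an arbitrary choice; use the directory where libOpenCL.so actually lives on your system):
echo "/usr/local/lib" | sudo tee /etc/ld.so.conf.d/opencl.conf
sudo ldconfig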
Reason
Like @bjoernz said, my system couldn't find the libOpenCL.so file.
That was because the directory that should contain it was missing.
After searching the internet, I found out that the libOpenCL.so file can be provided by the ocl-icd-opencl-dev package.
Solution
You just need to install the package mentioned above by typing into the terminal:
sudo apt update
sudo apt install ocl-icd-opencl-dev
Afterwards, libOpenCL.so can be found under the /usr/lib/x86_64-linux-gnu/ folder.
My System Information
OS: Ubuntu 16.04 LTS
GPU: NVIDIA GeForce GTX 660
GPU Driver: nvidia-375
OpenCL: 1.2
Reference:
[1] How to install libOpenCL.so on ubuntu
[2] How to set up OpenCL in Linux

installing R packages on ubuntu 8.10

Preface: I'm an OS X user coming to Linux, so excuse my ignorance in advance.
I've installed R using Synaptic and now I'm trying to install packages.
I open R, then try:
install.packages("some_package")
The system tries to default to /site-library, then tells me it's not writable, then asks about making a personal library.
Should I just make site-library writable? Or is there something more to this?
The directory /usr/local/lib/R is the default location; the directory has ownership root:staff by default. If you add yourself to group staff (easiest: by editing /etc/group and /etc/gshadow), you can write there and do not need sudo powers to install packages. That is what I do.
Alternatively, do apt-get install littler and copy the example file /usr/share/doc/littler/examples/install.r to /usr/local/bin and chmod 755 it. Then you can just do sudo install.r lattice ggplot2, to take two popular examples.
BTW Ubuntu 8.1 does not exist as a version. Maybe you meant 8.10? Consider upgrading to 9.10 ...
Edit: Also have a look at this recent SO question.
I faced the same issue. The most convenient way is to start R as super user.
sudo R
After that, install.packages("some package") should work.
If you are the only user who needs those packages, then the easiest and neatest way is to let R create a personal library for you. That way you don't need to mess with the system directories managed by the package management system.
Another way to install some packages in Ubuntu is to look for Ubuntu packages with names like r-cran-*. This way you do not have to worry about dependencies, the packages become available to all users, and updates are taken care of by the Ubuntu package management system. But only a small proportion of CRAN packages are available this way and you may not get the latest version.
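For example, to install the ggplot2 package system-wide this way (the package name follows the r-cran-* convention):
sudo apt-get install r-cran-ggplot2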
Well, I prefer to install packages into a local R folder, ~/R/, but it's just a matter of individual preference... You can also grant yourself write permission to the default library; it doesn't make any difference.
Be sure to use up-to-date packages. The packages available in the default repos are quite old: R 2.9.0 is available by default in 9.10, while 2.10.1 is out now.
So stay up-to-date; add this line to the file /etc/apt/sources.list (replace <my.favorite.cran.mirror> with a CRAN server address; you can find server addresses on www.r-project.org > CRAN > Linux > Ubuntu):
deb http://<my.favorite.cran.mirror>/bin/linux/ubuntu karmic/
then run this line in terminal:
gpg --keyserver subkeys.pgp.net --recv-key E2A11821 && gpg -a --export E2A11821 | sudo apt-key add -
and if keys are imported properly, run:
sudo apt-get install r-base-core
or if you already installed R, run:
sudo apt-get update && sudo apt-get upgrade
You should also check out alias functions (try man alias in the terminal) to automate repetitive tasks... Feel comfortable in the terminal; Synaptic is indeed a good tool, but most Linux users prefer the command-line approach for a good reason - it's highly customizable =)
I recommend that you stick with one server (be advised when choosing the default server - I prefer UCLA's server, Berkeley works just fine, the main server is usually busy as hell... so there...)
Alternatively, you can add default CRAN server to .First() function:
# replace '<my.favorite.cran.mirror>' with your preferred mirror
.First <- function() {
  options("repos" = c(CRAN = "<my.favorite.cran.mirror>"))
}
now you can just type:
> install.packages('<somepackage>')
and you'll lose the boring Tcl/Tk serverlist window! Oh, what a relief!
Welcome to Ubuntu!
Cheers, mate!
