Problem Definition
I'm attempting to adapt these rosjava installation instructions so that I can include rosjava on a target image built by the BitBake build system. I'm using the jethro branch of Poky.
Implementation Attempt: Build From .deb with package_deb.bbclass
According to the installation instructions, all that really needs to be done to install rosjava is the following:
sudo apt-get install ros-indigo-rosjava
Which works perfectly fine on my build machine. I figured that if I could just point at a .deb and use the Poky metadata class package_deb, it would do all the heavy lifting for me, so I produced the following simple recipe, adapted from this posting on the Yocto Project mailing list:
inherit package_deb
SRC_URI = "http://packages.ros.org/ros/ubuntu/pool/main/r/ros-indigo-rosjava/ros-indigo-rosjava_0.2.1-0trusty-20160207-031808-0800_amd64.deb"
SRC_URI[md5sum] = "2020ccc8b4a67dd918a9a2c426eece0b"
SRC_URI[sha256sum] = "ab9493fabe1285b0d21aab031348d0d733d116b0b2470bae90025709b303b649"
The relevant part of the errors I get during the above recipe's do_unpack is:
| no entry data.tar.gz in archive
|
| gzip: stdin: unexpected end of file
| tar: This does not look like a tar archive
| tar: Exiting with failure status due to previous errors
| DEBUG: Python function base_do_unpack finished
| DEBUG: Python function do_unpack finished
The following command produces the output below:
$ ar t python-rosdistro_0.4.5-1_all.deb
debian-binary
control.tar.gz
data.tar.xz
You can see here that there's a data.tar.xz, not data.tar.gz. What can I do to remedy this error and install from this particular .deb?
I've included package_deb in my PACKAGE_CLASSES variable and package-management in my IMAGE_FEATURES. I've tried other methods of installation which have all failed; I thought this method in particular would be very useful to know how to implement.
Update - 3/22
I'm attempting to circumvent the problems with the method above by doing my installation through a ROOTFS_POSTPROCESS_COMMAND, which I've adapted from forum posts like this one:
install_rosjava() {
${STAGING_BINDIR_NATIVE}/dpkg \
--root=${IMAGE_ROOTFS}/ \
--admindir=${IMAGE_ROOTFS}/var/lib/dpkg/ \
--install /var/cache/apt/archives/ros-indigo-rosjava_0.2.1-0trusty-20160207-031808-0800_amd64.deb
}
ROOTFS_POSTPROCESS_COMMAND += " install_rosjava ; "
However, this fails because dpkg is not a command found within the ${STAGING_BINDIR_NATIVE} path. The Yocto Project Reference Manual states that:
STAGING_BINDIR_NATIVE Specifies the path to the /usr/bin subdirectory of the sysroot directory for the build host.
Taking a look inside this directory yields a lot of commands, but not dpkg. (The recipe depends on the dpkg package, and the command can be found in my target rootfs after the build is finished; I've also tried pointing to ${IMAGE_ROOTFS}/usr/bin/dpkg, which yields the same result.) From what I understand of the BitBake process, this command may live in another sysroot, but I must admit that this is where my understanding breaks down.
Can I adjust this method so that it works, or will I need to start from scratch on an installation from source?
Perhaps there's a different method entirely which I could consider?
If you really want to install their deb directly then your rootfs postprocess is one solution. It doesn't work because depending on dpkg will build you a dpkg for the target, but you want a dpkg that will run on the build host. Add a dependency on dpkg-native to your image.
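A minimal sketch of what that looks like in the image recipe (untested; the task-level form is an alternative some setups use):
# dpkg-native provides a dpkg binary that runs on the build host,
# installed into the native sysroot referenced by STAGING_BINDIR_NATIVE
DEPENDS += "dpkg-native"

# alternatively, expressed as a task-level dependency:
# do_rootfs[depends] += "dpkg-native:do_populate_sysroot"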
Though personally I'd either inherit bin_package and extract the deb they provide then re-package it as a standard package in OE, or ideally write a proper recipe to build rosjava and submit it to meta-ros (https://github.com/bmwcarit/meta-ros).
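For the bin_package route, a recipe skeleton might look roughly like this (an untested sketch: the LICENSE value and S are assumptions, runtime dependencies would still need declaring by hand, and it only works once the fetcher's data.tar.xz bug discussed below is fixed):
SUMMARY = "rosjava repackaged from the upstream Ubuntu binary package"
# placeholder: fill in the real licensing of the package
LICENSE = "CLOSED"

inherit bin_package

SRC_URI = "http://packages.ros.org/ros/ubuntu/pool/main/r/ros-indigo-rosjava/ros-indigo-rosjava_0.2.1-0trusty-20160207-031808-0800_amd64.deb"
SRC_URI[md5sum] = "2020ccc8b4a67dd918a9a2c426eece0b"
SRC_URI[sha256sum] = "ab9493fabe1285b0d21aab031348d0d733d116b0b2470bae90025709b303b649"

# assumption: the deb's payload is unpacked directly into WORKDIR
S = "${WORKDIR}"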
package_deb is where the packaging machinery for deb packages is stored; it's not something you'd inherit in a recipe, but something that should be listed in PACKAGE_CLASSES.
When you put a .deb in SRC_URI, the fetcher will try to unpack it so you can access the contents: the assumption is that you're going to repack the contents as a regular Yocto recipe.
If that's what you want to do then first you'll need to fix the unpack logic (in bitbake/lib/bb/fetch2/__init__.py) to handle .debs with xz-compressed data. This is a bug in bitbake and a bug report and/or patch would be appreciated.
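To see the mismatch, you can replay by hand roughly what the unpack logic does (a sketch; the exact command bitbake runs may differ slightly):
$ ar p ros-indigo-rosjava_0.2.1-0trusty-20160207-031808-0800_amd64.deb data.tar.gz | tar -xz
# fails exactly as in the log above: ar reports "no entry data.tar.gz in archive"
# and tar then reads an empty stream
$ ar p ros-indigo-rosjava_0.2.1-0trusty-20160207-031808-0800_amd64.deb data.tar.xz | tar -xJ
# what a fixed unpacker would have to run for xz-compressed payloads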
The alternative would be to use their deb directly but I don't recommend that as it's likely the dependencies don't match. The best long-term solution would be to build it from source directly instead of attempting to use a package for another distro.
Related
My environment is below.
・Operating System and version: Windows 10 64-bit
・Compiler: C:\msys64\mingw64\bin\g++.exe
・PCL version: 1.9.1
Compiling under the environment above fails with the error below because pcl_config.h is not found. This header file is indeed not present anywhere in the source tree. How can I solve this?
PS C:\pcl\pcl\examples\common> g++ -o minmax -I ../../io/include -I ../../common/include .\example_get_max_min_coordinates.cpp
In file included from ../../common/include/pcl/PCLHeader.h:10,
from ../../common/include/pcl/point_cloud.h:47,
from ../../io/include/pcl/io/pcd_io.h:42,
from .\example_get_max_min_coordinates.cpp:2:
../../common/include/pcl/pcl_macros.h:64:10: fatal error: pcl/pcl_config.h: No such file or directory
#include <pcl/pcl_config.h>
^~~~~~~~~~~~~~~~~~
compilation terminated.
Short answer
pcl_config.h is generated from pcl_config.h.in by CMake, so it seems that PCL was never successfully configured and built.
Longer answer
Please make sure you have compiled the relevant modules of PCL (at least pcl-core) before proceeding.
You might prefer a pre-built installation from releases, or one distributed by a package/source manager of your choice.
PCL makes heavy use of other libraries, so it is best to supply the dependencies (as mentioned below) via CMake, or manually via the -I and -l options; see the sketch after the build commands below. If you only provide the location of pcl_config.h, the compiler will complain about Eigen next.
The build instructions are available here. TL;DR: after satisfying the dependencies (CMake, a C++ compiler, Boost, Eigen, FLANN, VTK and others depending on use case), run the following commands:
cd $PCL_SOURCE_DIR
mkdir -p build; cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j8
Feel free to use any build generator (like Ninja) or change build type to Debug or RelWithDebInfo as per your needs.
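With an in-tree build like the above, the compile line from the question then needs the generated config header, the PCL include dirs, Eigen, and the built libraries. A hedged sketch (paths are assumptions: build/ is the CMake build directory created above, Eigen is assumed under /mingw64/include/eigen3 on MSYS2, and further -I/-l flags, e.g. for Boost, may be needed):
g++ -o minmax example_get_max_min_coordinates.cpp \
    -I ../../build/include \
    -I ../../common/include -I ../../io/include \
    -I /mingw64/include/eigen3 \
    -L ../../build/lib -lpcl_common -lpcl_io
# -I ../../build/include  -> location of the generated pcl/pcl_config.h
# -lpcl_common -lpcl_io   -> getMinMax3D lives in pcl_common, PCD loading in pcl_io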
We manage systems and thus manage their repositories. We remove repositories we do not use by deleting the corresponding /etc/yum.repos.d/<file>.
Our problem is that after an update/upgrade of the system, CentOS automatically re-creates the repositories that were removed, which is an issue for us.
Question: is there a command or method to ensure repositories are not re-created after an upgrade on CentOS 7 systems?
Those repositories are created by someone; the OS doesn't recreate them by itself.
Either they are restored by an update of an RPM package such as centos-release, or by an automatic script you set up/run (Ansible?).
I'm not aware of an automatic method to delete a repo; I see a couple of solutions:
1. Exclude centos-release from the upgradable packages, by adding
exclude=centos-release
to /etc/yum.conf (space-separated list), but this could break some updates;
2. Disable them with:
# yum-config-manager --disable base updates extras centosplus epel whatever
(this can be easily scripted and put in a cron job or in your Ansible playbook)
3. Write a small script and place it in /etc/cron.hourly/, e.g. /etc/cron.hourly/wipe_repos, containing:
#!/usr/bin/env bash
rm -f /etc/yum.repos.d/CentOS-Base.repo
or, better:
#!/usr/bin/env bash
yum-config-manager --disable base updates extras centosplus epel whatever
I would suggest using solution 2, since the repo files aren't overwritten by updates; the new versions are placed alongside the old ones as .rpmnew files.
This is guaranteed by the flag %config(noreplace) in the source rpm of centos-release, applied to all files in /etc/yum.repos.d/.
You can check this by downloading the .src.rpm and opening the centos-release.spec file.
$ mkdir test && cd test
$ yumdownloader --source centos-release
$ rpm2cpio centos-release*.rpm | cpio -idmv
$ cat centos-release.spec
(or search for the package online and download the src.rpm)
Then scroll down to section %files and you'll notice:
%config(noreplace) /etc/yum.repos.d/*
%config(noreplace) means that all those files are not replaced with new files from an update; instead, the files from the new rpm are saved with the extension .rpmnew, so you'll have:
$ ls /etc/yum.repos.d/
CentOS-Base.repo <-- here you set them as disabled
CentOS-Base.repo.rpmnew <-- this comes from the update, but yum will ignore it
For reference, see http://people.ds.cam.ac.uk/jw35/docs/rpm_config.html or https://serverfault.com/a/48819/.
As I already said in the comments below the question, the reason why those repositories keep reappearing after an update is quite simple: the files defining the system repositories are owned by the package centos-release and whenever this package gets updated or reinstalled, the repositories reappear.
The package centos-release is a very basic package: it provides the capabilities redhat-release and system-release, and a number of other basic packages depend on it.
[local ~]$ rpm -q --provides centos-release
centos-release = 7-6.1810.2.el7.centos
centos-release(upstream) = 7.6
centos-release(x86-64) = 7-6.1810.2.el7.centos
config(centos-release) = 7-6.1810.2.el7.centos
redhat-release = 7.6-1
system-release = 7.6-1
system-release(releasever) = 7
[local ~]$ rpm -q --whatrequires system-release
setup-2.8.71-10.el7.noarch
grubby-8.28-25.el7.x86_64
[local ~]$ rpm -q --whatrequires redhat-release
initscripts-9.49.46-1.el7.x86_64
systemd-219-62.el7_6.5.x86_64
There is no easy way out of this.
But one possible solution might be to create a customized RPM package to replace centos-release. It should contain the pointers to your own repositories and of course needs to provide the capabilities redhat-release and system-release.
Please be aware that I have no idea if this is actually going to work, it's just something that came to my mind while thinking about the problem. It might save you the work of creating a full custom distribution derived from CentOS, which is the only other way I can think of to achieve what you seem to want.
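To sketch the idea (entirely untested; the name, versions and repo file are placeholders, and the usual packaging boilerplate such as sources and %prep/%install is omitted):
Name:       acme-release
Version:    7.6
Release:    1
Summary:    Custom release package pointing at our internal repositories
License:    GPLv2
BuildArch:  noarch
# provide the capabilities other base packages require
Provides:   redhat-release = %{version}-%{release}
Provides:   system-release = %{version}-%{release}
Provides:   system-release(releasever) = 7
# take the place of the stock package so updates cannot bring it back
Obsoletes:  centos-release
Conflicts:  centos-release

%description
Replaces centos-release so that system updates cannot restore the stock
CentOS repository definitions.

%files
%config(noreplace) /etc/yum.repos.d/acme.repo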
My solution doesn't exactly solve the problem you ask about ("how do I delete default repository config files forever?"), but it does stabilize your config changes. If you zero out the files instead of deleting them, then system updates will leave your 'edited' versions unchanged.
I do feel that this is a 'hack', leaving named ghost files, but it's one I can live with. No need to disable or customize redhat-release or system-release.
My problem was slightly different than yours - I maintained different configs for the same repositories for different situations, indicated by filename. On updates the original files would return, leaving me with redundant and incorrect definitions. Now they don't.
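A minimal sketch of the zeroing approach (the glob is an assumption; restrict it to the files you actually want to neutralize):
# truncate the stock repo files to zero bytes instead of deleting them;
# being modified %config(noreplace) files, updates then leave them alone
for f in /etc/yum.repos.d/CentOS-*.repo; do
    : > "$f"
done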
I have a package that previously only targeted RPM based distros for which I am now building .deb packages for Debian based distros.
The aim is to simulate a test installation from user space that is isolated from the system you are building on. The system may be multi-user, and you do not want to require root access just to build the software. Many of our tests already simulate the installation directory structure; this is the next step up, simulating an actual installation using the packages built.
For the RPM packages I was able to create test installations using:
WSDIR=/where/I/want/my/tests/to/run
rpmdb --initdb --dbpath "$WSDIR"/rpmdb
rpm --relocate /opt="$WSDIR"/opt --dbpath $WSDIR/rpmdb -i <package>.rpm
The equivalent in the Debian world is something like:
dpkg --force-not-root --admindir=$WSDIR/dpkg --root=$WSDIR/install --install "$DEB"
However, I am stuck over the equivalent to the rpmdb --initdb step.
Note that I can just unpack the archive using:
dpkg-deb -x "$DEB" $WSDIR/install
But I would prefer to be closer to how a real package is installed.
Also, I don't think this will run the preinst and postinst scripts.
Similar questions have suggested using debootstrap to create a chroot environment, but this creates a complete new installation. As well as being overkill, it is too slow for an automated test. I intend to use this for quick tests of the installation package prior to further testing in actual test environments.
My experiments so far:
(cd $WSDIR/dpkg && mkdir alternatives info parts triggers updates)
cp /var/lib/dpkg/status $WSDIR/dpkg/status
have at best resulted in:
dpkg: error: unable to access dpkg status area: No such file or directory
which does not indicate clearly what is wrong.
So how do you create a dpkg admin directory?
Cross posted as https://superuser.com/questions/1271145/how-do-you-create-a-dpkg-admin-directory
Update 24/11/2017
I've tried copying the dpkg dir from an environment created by cowdancer (which uses debootstrap under the hood), and copying the real one from /var/lib/dpkg, but I still get the same error message, so perhaps the error (and/or the --admindir option) doesn't mean quite what I think it means.
Note that:
sudo dpkg --force-not-root --root=$WSDIR/install --admindir=/var/lib/dpkg --install "$DEB"
does work. So it is something to do with the admin dir.
I've also retitled the question, as "How do you create a dpkg admin directory" is an interesting question, but the answer is not necessarily the solution to my problem.
The minimal way to create a dpkg database is something like this:
$ mkdir -p db/{updates,info}
$ touch db/{status,diversions,statoverride}
If you want to use that as non-root, currently the best way is to use fakeroot.
$ mkdir -p fsys
$ PATH=/sbin:/usr/sbin:$PATH fakeroot dpkg --log=/dev/null --admindir=db --instdir=fsys -i pkg.deb
But take into account that passing --root after --admindir or --instdir will reset those paths, which I think is the problem you have been having here.
Also, using sudo together with --force-not-root does not make much sense :) and is definitely less confined than just using fakeroot. In the near future it will be possible to run dpkg fully unprivileged in some local tree.
I eventually found an answer for this. Thanks to Guillem Jover for some of this.
Pasting a copy of it here:
mkdir fake
mkdir fake/install
mkdir -p fake/dpkg/info
mkdir -p fake/dpkg/updates
touch fake/dpkg/status
PATH=/sbin:/usr/sbin:$PATH fakeroot dpkg --force-script-chrootless --log=`pwd`/fake/dpkg.log --root=`pwd`/fake --instdir `pwd`/fake --admindir=`pwd`/fake/dpkg --install *.deb
Some points to note:
--force-not-root is not enough. fakeroot is required.
ldconfig and start-stop-daemon must be on the path.
(hence PATH=/sbin:/usr/sbin:$PATH)
The log file needs to be relocated from the default /var/log/dpkg.log
The order of arguments is significant: if used, --root must come before --instdir and --admindir.
The admindir is supposed to have the installation dir as a prefix.
If the package contains any pre or post installation scripts (preinst,postinst) then --force-script-chrootless is required as these scripts are normally run via chroot() which gives operation not permitted when attempted under fakeroot.
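Once the install succeeds, the same private database can be queried by pointing dpkg at it again (a small example; the package name is a placeholder):
dpkg --admindir=`pwd`/fake/dpkg -l               # list packages known to the private db
dpkg --admindir=`pwd`/fake/dpkg -L mypackage     # files a given package installed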
For a quick test of trivial dependencies, you can directly install on the system using 'dpkg -i' then 'dpkg -P' and 'apt-get autoremove' to purge the package and clean the dependencies.
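As a sketch of that cycle (the package file and name are placeholders):
sudo dpkg -i mypkg_1.0-1_amd64.deb    # install onto the real system
sudo dpkg -P mypkg                    # purge the package again
sudo apt-get autoremove               # clean up dependencies installed for it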
Another, more secure but slower, solution could be to use the autopkgtest package:
https://people.debian.org/~mpitt/autopkgtest/README.package-tests.html
Suppose I have a project for which I have developed an R package. The hierarchy might look something like this.
/project
---Makefile
---workflow.R
---test.R
---/mypackage
---DESCRIPTION
---NAMESPACE
---/R
---func1.R
---func2.R
workflow.R depends on the latest version of mypackage being installed. However, I only want to re-build the package if any file inside of it has been modified.
Currently, in my Makefile, I have:
PACKAGE=$(wildcard mypackage/**/*)
all: install test workflow

install: $(PACKAGE)
	R CMD INSTALL mypackage

workflow: install
	Rscript workflow.R

test: install
	Rscript test.R
However, this will re-install the package every time I run make test, even if nothing inside the package has changed. Is there a clean way to avoid this?
The install rule does not create a file named install in the current directory, so make tries to remake it each time. This looks like it should be a .PHONY target, but that itself won't fix the issue as it will still execute the recipes.
One solution is to have another rule that creates a stub file:
.PHONY: all install test workflow

all: install test workflow

install: install.done

install.done: $(PACKAGE)
	R CMD INSTALL mypackage
	touch $@
Or you could just make install the stub file itself and make it a non-.PHONY rule.
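That variant would look something like this (a sketch based on the Makefile from the question; install is deliberately left out of .PHONY so the touched file carries the timestamp):
.PHONY: all test workflow

all: install test workflow

install: $(PACKAGE)
	R CMD INSTALL mypackage
	touch $@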
It sounds like you want to treat the installation as an intermediate step. You can do this by adding
.INTERMEDIATE: install
to your makefile.
The make manual explains (link):
If an ordinary file b does not exist, and make considers a target that depends on b, it invariably creates b and then updates the target from b. But if b is an intermediate file, then make can leave well enough alone. It won’t bother updating b, or the ultimate target, unless some prerequisite of b is newer than that target or there is some other reason to update that target.
I am trying to install the ODBC driver for Debian according to these instructions: https://blog.afoolishmanifesto.com/posts/install-and-configure-the-ms-odbc-driver-on-debian/
However trying to run:
sqlcmd -S localhost
I get the error
libcrypto.so.10: cannot open shared object file: No such file or directory
What could be the cause?
So far I have tried
1.
$ cd /usr/lib
$ sudo ln -s libssl.so.0.9.8 libssl.so.10
$ sudo ln -s libcrypto.so.0.9.8 libcrypto.so.10
2.
Adding /usr/local/lib64 to the /etc/ld.so.conf.d/doubango.conf file
3.
sudo apt-get update
sudo apt-get install libssl1.0.0 libssl-dev
cd /lib/x86_64-linux-gnu
sudo ln -s libssl.so.1.0.0 libssl.so.10
sudo ln -s libcrypto.so.1.0.0 libcrypto.so.10
4. sudo apt-get install libssl0.9.8:i386
None of these have helped.
As I'm quite familiar with Debian and programming, here is some advice:
if you have questions about setting up your system, ask on SuperUser and/or (if your question is specific to a Un*x flavour) on Unix&Linux
when fiddling around with symlinks to shared libraries, you should have a thorough understanding of what you are doing. These files are named for a reason - and the reason is to protect you (the user of the system) from weird crashes caused by an application using a wrong/incompatible library.
a tutorial that tells you to do so should give a proper warning and an explanation of what you are about to do.
So, why are these instructions in the tutorial you are following?
The application you are trying to run has been linked against libcrypto.so.
On the developer machine (the one used to produce the application binary), libcrypto.so was a symlink to libcrypto.so.10, but this is missing on Debian: maybe because the library has been removed (and replaced by a new and incompatible version), or because Debian uses a different naming scheme than the system that was used to compile the application.
If it is the former, then you cannot solve the issue by using symlinks.
You have to get the right library (or the application linked against the correct libraries).
If it is the latter, you may get away with symlinking the expected library name to the correct library file found on your system. (This is assuming that the only difference between the two systems is indeed the so-naming scheme.)
So, how to do it?
first of all, find out which libraries your application was really linked against, and which of these libraries are missing:
$ ldd /path/to/my/app | grep -i "not found"
libfoo.so.10 => not found
then find out whether you have a (hopefully compatible) library on your system. A good place to start is /usr/lib/, but more recently Debian has started moving libraries to /usr/lib/<host-triplet>, with <host-triplet> describing a target architecture. If your application was indeed built for the architecture you are running on (e.g. linux-amd64), you can get the string by running something like:
$ gcc -print-multiarch
Imagine you discover that you have /usr/lib/x86_64-linux-gnu/libfoo.so.1.0.0.
if you have good reason to believe that this can act as a replacement for libfoo.so.10, you can make the found library available to your application by means of a symlink, e.g.
# cd /usr/local/lib/
# ln -s /usr/lib/x86_64-linux-gnu/libfoo.so.1.0.0 libfoo.so.10
Finally, you might need to refresh the cache of the dynamic linker so it starts using the new library, by running ldconfig as root/superuser.
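A short sketch of that final step, reusing the hypothetical libfoo example from above:
# rebuild the dynamic linker cache so the new symlink is found
sudo ldconfig
# confirm that the application now resolves the library
ldd /path/to/my/app | grep libfoo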