Executing opkg post install script after image installation - bitbake

We are creating a filesystem image in BitBake and one of the packages requires that its post install script be executed on the device, after the image itself has been installed on the device, and not while the rootfs image is being generated by the build server.
Looking at the package with "opkg status <package>", it says that the package has been successfully installed -- "install ok installed". However, none of the side effects have been performed, and simply running the .postinst file from /var/lib/opkg/info/<package>.postinst works and reports no errors.
How do I get this to work? It seems that the package is being "installed" in the rootfs image with the incorrect status.

Please see the development manual section "Post-Installation Scripts": with recent Yocto (>= 2.7) you can use pkg_postinst_ontarget_${PN}() when you know your script should always run on the target during first boot, and never during rootfs generation.
On older Yocto versions you can do manually what pkg_postinst_ontarget_${PN} does, inside your pkg_postinst_${PN}() function:
if [ -n "$D" ]; then
    echo "Delaying until first boot"
    exit 1
fi
# actual post install script here
$D is defined during rootfs generation, so the postinstall script will fail there. This means the script will be run again during first boot on the target.
The best option is still fixing the postinstall script so that it works during rootfs generation -- sometimes this isn't possible of course.
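For completeness, here is what the modern variant looks like in a recipe; the package name mypkg and the script body are made-up placeholders:

```
# Hypothetical recipe fragment for Yocto >= 2.7: BitBake guarantees this
# function runs on the target at first boot, never during rootfs generation,
# so no $D check is needed.
pkg_postinst_ontarget_mypkg() {
    # side effects that only make sense on the real device
    echo "mypkg configured on first boot" >> /var/log/mypkg.log
}
```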


Singularity container with stringr fails only locally with 'libicui18n.so.66: cannot open shared object file: No such file or directory'

I enjoy using the Singularity container software, as well as the R package 'stringr' to work with strings.
What I fail to understand is why a Singularity container fails locally (i.e. on my Ubuntu 20.04 computer), yet passes remotely (i.e. on GitHub Actions), when both containers are built at approximately the same time.
Here I run a simple script ([1], see below) that uses stringr:
singularity run --bind $PWD/scripts/ stringr.sif scripts/demo_container.R
(I can remove --bind $PWD/scripts/, but I want to have exactly the same call here as on GitHub Actions)
The error I get is:
'stringr.sif' running with arguments 'scripts/demo_container.R'
Error: package or namespace load failed for ‘stringr’ in dyn.load(file, DLLpath = DLLpath, ...):
unable to load shared object '/home/richel/R/x86_64-pc-linux-gnu-library/4.1/stringi/libs/stringi.so':
libicui18n.so.66: cannot open shared object file: No such file or directory
Execution halted
On GitHub Actions, however, this exact call passes without problems (from this GitHub Actions log):
'stringr.sif' running with arguments 'scripts/demo_container.R'
Hello world
The Singularity script is very simple: it only updates apt and then installs the stringr package ([2], see below).
I understand that this is a shared-object problem. There are some workarounds that fail in this context:
sudo apt install libicu-dev: ICU is the library that stringr uses
uninstall stringr and install it again, forcing a recompilation of the shared object, from this GitHub Issue comment
How can it be that my Singularity container fails locally yet passes on GitHub Actions? How can I fix this, so that the container works in both environments?
A non-fix is to use rocker/tidyverse as a base, which I can get to work successfully, as the question is more about why this stringr setup fails.
Thanks and cheers, Richel Bilderbeek
[1] demo_container.R
library(stringr)
message(stringr::str_trim(" Hello world "))
[2] Singularity
Bootstrap: docker
From: r-base
%post
sed -i 's/$/ universe/' /etc/apt/sources.list
apt-get update
apt-get clean
Rscript -e 'install.packages("stringr")'
%runscript
echo "'stringr.sif' running with arguments '$@'"
Rscript "$@"
I had what seems like the same problem, and setting the environment variable R_LIBS solved it. Details below.
As background, the typical error message would look something like this:
Error in dyn.load(file, DLLpath = DLLpath, ...) :
unable to load shared object '/home/mrodo/R/x86_64-pc-linux-gnu-library/4.2/fs/libs/fs.so':
/usr/lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /home/mrodo/R/x86_64-pc-linux-gnu-library/4.2/fs/libs/fs.so)
The reason, as I understand it, is that the R package fs had previously been installed into the library that the container is using, but from a different system (either the host system or a container based on a different image). This installation of fs is incompatible with the running container and so throws an error. In this case the error is because a different version of GLIBC is available than the one fs wants to use.
How this scenario arose is as follows. Upon trying to install a package using R inside a previous container based off a different image but also running R4.2.x, no default library was writeable as R_LIBS_USER was not set and/or did not exist (if it does not exist when R is loaded, then it is not added to the library paths). Then R prompted to install to a personal library, and I accepted this. But my latest container prompted to use the same personal library, creating the clash.
The solution I used was to set a directory in my home directory as the first path returned by .libPaths(). To do this, you create the directory (this won't work if it doesn't exist first) and add its path as the first entry of R_LIBS. You can do that in various ways, but I used the --env option for singularity run.
To make it easier to set this every time, I created an alias for the run command, adding the following to .bashrc:
export R_LIBS_USER_AR42=~/.R/ar42
mkdir -p "$R_LIBS_USER_AR42"
alias ar42='singularity run --env "R_LIBS='"$R_LIBS_USER_AR42"':/usr/local/lib/R/site-library:/usr/local/lib/R/library" "$sif"/ar42.sif radian'
That would just be tweaked for your own settings.
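Spelled out without the alias, the idea is just this (the library paths are the usual r-base container defaults, but verify them in your image):

```shell
# Create a container-specific user library and put it first on R's search
# path; inside the container R turns each R_LIBS entry into a .libPaths()
# entry, and packages install into the first writable one.
mkdir -p "$HOME/.R/ar42"
export R_LIBS="$HOME/.R/ar42:/usr/local/lib/R/site-library:/usr/local/lib/R/library"
echo "$R_LIBS" | cut -d: -f1   # first entry: where install.packages() writes
```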
If you look at the error message, you'll see that the library that cannot be loaded is in your HOME on the host OS: /home/richel/R/x86_64-pc-linux-gnu-library/4.1/stringi/libs/stringi.so
This suggests that the R being used is one you have locally installed on the host and not the one installed in the image. Since singularity processes inherit the full host environment by default, I'd guess you've modified your $PATH and that's clobbering the value set inside the container. Since the environment on the CI / actions server is clean, it is able to run successfully.
I strongly recommend always using the -e/--cleanenv parameters to ensure that the environment inside the singularity container is the same anywhere it is run.
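For the call in the question that would be singularity run --cleanenv --bind "$PWD/scripts/" stringr.sif scripts/demo_container.R. What --cleanenv does can be illustrated with env -i, which likewise launches a process with a scrubbed environment:

```shell
# A host-side variable that would otherwise leak into the container and point
# R at host-installed libraries:
export R_LIBS_USER="$HOME/R/x86_64-pc-linux-gnu-library/4.1"
# With a scrubbed environment the variable is simply absent:
env -i sh -c 'echo "R_LIBS_USER in clean env: [$R_LIBS_USER]"'   # prints []
```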

Install riscv spike simulator; "Failed to run dtc: No such file or directory", "Child dtb process failed"

I am trying to install the riscv tools on my Ubuntu 18.04.4 LTS server.
I used the following git repos and followed their build procedures:
spike simulator
GNU tool
Installation (Newlib)
riscv pk
Issuing spike pk hello gives me:
Failed to run dtc: No such file or directory
Child dtb process failed
I have already installed the device-tree-compiler through apt command.
And checked with which dtc, outputs /usr/bin/dtc
What might be the problem?
Any help would be appreciated.
I run those commands on a command-line interface that is not capable of running any graphical user interface. I am not sure if that causes this problem.
The spike simulator is my first attempt to execute riscv code; I also welcome other recommendations.
I figured it out: creating a symbolic link with ln -s $(which dtc) solved the problem.
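Presumably the link destination was a directory that spike searches; the answer does not say which, so the path in the comment below is an assumption. The pattern itself can be exercised with any stand-in binary:

```shell
# Reported fix, destination assumed (adjust to your toolchain prefix):
#   ln -s "$(which dtc)" "$RISCV/bin/dtc"
# Same pattern with a stand-in tool so it can run anywhere:
mkdir -p /tmp/linkdemo
ln -sf "$(command -v sh)" /tmp/linkdemo/fake-dtc
/tmp/linkdemo/fake-dtc -c 'echo "symlinked tool works"'
```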

Let Docker image build fail when R package installation returns error

I am trying to create a custom Docker image based on Rocker using Dockerfile. In the Dockerfile I am pulling my own R package from a custom GitLab server using:
RUN R -e "devtools::install_git('[custom gitlab server]', quiet = FALSE)"
Everything usually works, but I have noticed that when the GitLab server is down, or the machine running Docker is low on RAM, the package does not install correctly and returns an error message in the R console. This behavior is to be expected. However, Docker does not notice the error produced by R and continues evaluating the rest of the Dockerfile. I would like Docker to fail building the image when this occurs. In that way, I could ultimately prevent automatic deployment of the incomplete Docker container by Kubernetes.
So far I have thought of two potential solutions, but I am struggling with the execution:
R level: Wrap tryCatch() around devtools::install_git to catch the error. But then what? Use stop? Will this cause the Docker building process to stop as well? Could withCallingHandlers() be used?
Dockerfile level: Use a shell command to check for errors? I cannot check the contents of R --help as I do not have a Linux machine at the moment, so I am not sure what R -e actually does (execute, I presume) and which other commands could be passed along with R.
It seems that a similar issue is discussed here and here, but I do not understand how they have solved it.
Thus how to make sure no Docker image ends up running on the Kubernetes cluster without the custom package?
The Docker build process should stop once one of the commands in the Dockerfile returns a non-zero status.
install_git doesn't seem to throw an error when the package wasn't installed successfully, so execution continues.
An obvious way to go would be to wrap the installation inside a dedicated R script and throw an error if it didn't finish successfully, which would then stop the build.
So I would suggest something like this ...
Create installation script install_gitlab.R:
### file install_gitlab.R
## change repo- and package name!!
repo <- '[custom gitlab server]'
pkgname <- 'testpackage'
devtools::install_git(repo, quiet = FALSE)
stopifnot(pkgname %in% installed.packages()[,'Package'])
Modify your Dockerfile accordingly (replace the install_git line):
...
ADD install_gitlab.R /runscripts/install_gitlab.R
RUN Rscript /runscripts/install_gitlab.R
...
One thing to keep in mind is, this approach assumes the package you're trying to install is NOT installed prior to calling the command.
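The mechanism this relies on is just exit statuses: docker build aborts at the first RUN whose command exits non-zero, and Rscript exits non-zero when the script throws an error (as stopifnot() does on failure). A minimal shell illustration, standing in for the real build:

```shell
# Simulate the failing install script; RUN would see the non-zero status and
# abort the image build at this step.
status=0
sh -c 'echo "package check failed"; exit 1' || status=$?
echo "exit status RUN would see: $status"   # prints: exit status RUN would see: 1
```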
If you're using a rocker image, they already have the littler package installed, which provides the handy installGithub.r script. I believe it should already have the functionality you want. If not, it at least simplifies running a custom install script.
A docker RUN command using littler just looks like:
RUN installGithub.r "yourRepo"

Installing Python modules

I am trying to install the pyperclip module for Python 3.6 on Windows (32 bit). I have looked at various documentations (Python documentation, pypi.python.org and online courses) and they all said the same thing.
1) Install and update pip
I downloaded get-pip.py from python.org and it ran immediately, so pip should be updated.
2) Use the command python -m pip install SomePackage
Okay here is where I'm having issues. Everywhere says to run this in the command line, or doesn't specify a place to run it.
I ran this in the command prompt: python -m pip install pyperclip. But I got the error message "'python' is not recognized as an internal or external command, operable program or batch file."
If I run it inside the Python 3.6 interpreter, it says pip is invalid syntax. Running it in IDLE gives me the same message.
I have no idea where else to run it. I have the pyperclip module in my python folder. It looks like a really simple problem, but I have been stuck on this for ages!
You need to add the location of python.exe to your PATH variable. This depends on your installation location. In my case it is C:\Anaconda3. The default is C:\Python as far as I know.
To edit your path variable you can do the following thing. Go to your Control Panel then search for system. You should see something like: "Edit the system environment variables". Click on this and then click on environment variables in the panel that opened. There you have a list of system variables. You should now look for the Path variable. Now click edit and add the Python path at the end. Make sure that you added a semicolon before adding the path to not mess with your previous configuration.
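The underlying mechanism is plain PATH lookup: the shell (cmd.exe on Windows) only finds python if the directory containing it is listed in PATH. A POSIX sketch of the same behavior, with made-up directory and tool names:

```shell
# A command is not found until its directory appears on PATH.
mkdir -p /tmp/pathdemo
printf '#!/bin/sh\necho "found it"\n' > /tmp/pathdemo/mytool
chmod +x /tmp/pathdemo/mytool
mytool 2>/dev/null || echo "not found yet"
PATH="/tmp/pathdemo:$PATH"
mytool   # prints: found it
```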

Can I install a .deb during a BitBake Build?

Problem Definition
I'm attempting to adapt these rosjava installation instructions so that I can include rosjava on a target image built by the BitBake build system. I'm using the jethro branch of Poky.
Implementation Attempt: Build From .deb with package_deb.bbclass
According to the installation instructions, all that really needs to be done to install rosjava is the following:
sudo apt-get install ros-indigo-rosjava
Which works perfectly fine on my build machine. I figured that if I could just point to a .deb and use the Poky metadata class package_deb, it would do all the heavy lifting for me, so I produced the following simple recipe adapted from this posting on the Yocto Project mailing list:
inherit package_deb
SRC_URI = "http://packages.ros.org/ros/ubuntu/pool/main/r/ros-indigo-rosjava/ros-indigo-rosjava_0.2.1-0trusty-20160207-031808-0800_amd64.deb"
SRC_URI[md5sum] = "2020ccc8b4a67dd918a9a2c426eece0b"
SRC_URI[sha256sum] = "ab9493fabe1285b0d21aab031348d0d733d116b0b2470bae90025709b303b649"
The relevant part of the errors I get during the above recipe's do_unpack are:
| no entry data.tar.gz in archive
|
| gzip: stdin: unexpected end of file
| tar: This does not look like a tar archive
| tar: Exiting with failure status due to previous errors
| DEBUG: Python function base_do_unpack finished
| DEBUG: Python function do_unpack finished
The following command produces the output below:
$ ar t python-rosdistro_0.4.5-1_all.deb
debian-binary
control.tar.gz
data.tar.xz
You can see here that there's a data.tar.xz, not data.tar.gz. What can I do to remedy this error and install from this particular .deb?
I've included package_deb in my PACKAGE_CLASSES variable and package-management in my IMAGE_FEATURES. I've tried other methods of installation which have all failed; I thought this method in particular would be very useful to know how to implement.
Update - 3/22
I'm attempting to circumvent the problems with the method above by doing my installation through a ROOTFS_POSTPROCESS_COMMAND, which I've adapted from forum posts like this:
install_rosjava() {
    ${STAGING_BINDIR_NATIVE}/dpkg \
        --root=${IMAGE_ROOTFS}/ \
        --admindir=${IMAGE_ROOTFS}/var/lib/dpkg/ \
        -L /var/cache/apt/archives/ros-indigo-rosjava_0.2.1-0trusty-20160207-031808-0800_amd64.deb
}
ROOTFS_POSTPROCESS_COMMAND += " install_rosjava ; "
However, this fails due to dpkg not being a command found within the ${STAGING_BINDIR_NATIVE} path. The Yocto Project Reference Manual states that:
STAGING_BINDIR_NATIVE Specifies the path to the /usr/bin subdirectory of the sysroot directory for the build host.
Taking a look inside this directory yields a lot of commands but not dpkg (The recipe depends on the dpkg package, and this command can be found in my target rootfs after the build is finished; I've also tried pointing to ${IMAGE_ROOTFS}/usr/bin/dpkg which yields the same results). From what I understand of the BitBake process, this command may be in another sysroot, but I must admit that this is where my understanding breaks down.
Can I adjust this method so that it works, or will I need to start from scratch on an installation from source?
Perhaps there's a different method entirely which I could consider?
If you really want to install their deb directly then your rootfs postprocess is one solution. It doesn't work because depending on dpkg will build you a dpkg for the target but you want a dpkg that will run on the host. Add a dependency on dpkg-native to your image.
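A sketch of adding that dependency in the image recipe; the varflag form below is the standard way to make a native tool available before do_rootfs runs, but check it against your Poky release:

```
# Stage dpkg-native into the native sysroot before the rootfs is assembled,
# so ${STAGING_BINDIR_NATIVE}/dpkg exists for the postprocess function.
do_rootfs[depends] += "dpkg-native:do_populate_sysroot"
```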
Though personally I'd either inherit bin_package and extract the deb they provide then re-package it as a standard package in OE, or ideally write a proper recipe to build rosjava and submit it to meta-ros (https://github.com/bmwcarit/meta-ros).
package_deb is where the packaging machinery for deb packages is stored, it's not something you'd inherit in a recipe but should be listed in PACKAGE_CLASSES.
When you put a .deb in a SRC_URI the fetcher will try to unpack it so you can access the contents: the assumption is that you're going to repack the contents as a native Yocto recipe.
If that's what you want to do then first you'll need to fix the unpack logic (in bitbake/lib/bb/fetch2/__init__.py) to handle .debs with xz-compressed data. This is a bug in bitbake and a bug report and/or patch would be appreciated.
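As background, a .deb is an ar archive whose payload member may be data.tar.gz or data.tar.xz, which is exactly what the unpack logic trips over. A runnable sketch with a tiny stand-in package (a real .deb, such as the rosjava one above, has the same member layout):

```shell
# Build a minimal .deb-shaped ar archive with an xz payload, then extract the
# payload the way a fixed fetcher would.
mkdir -p pkgroot/usr/share/doc
echo "demo" > pkgroot/usr/share/doc/README
echo "2.0" > debian-binary
tar -cJf data.tar.xz -C pkgroot .
tar -czf control.tar.gz -T /dev/null       # empty control member, demo only
ar rc demo.deb debian-binary control.tar.gz data.tar.xz
ar t demo.deb                              # debian-binary control.tar.gz data.tar.xz
mkdir -p extracted
ar p demo.deb data.tar.xz | tar -xJ -C extracted
cat extracted/usr/share/doc/README         # prints: demo
```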
The alternative would be to use their deb directly but I don't recommend that as it's likely the dependencies don't match. The best long-term solution would be to build it from source directly instead of attempting to use a package for another distro.
