Yocto: recipe cannot be made due to missing directories - qt

I am starting to work with Yocto and am trying to add my first recipe. I already created the layer for it, called "meta-layer", using the bitbake-layers tools (create and add).
Its conf/layer.conf looks like this:
# We have a conf and classes directory, add to BBPATH
BBPATH .= ":${LAYERDIR}"
# We have recipes-* directories, add to BBFILES
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
${LAYERDIR}/recipes-*/*/*.bbappend"
BBFILE_COLLECTIONS += "meta-layer"
BBFILE_PATTERN_meta-layer = "^${LAYERDIR}/"
BBFILE_PRIORITY_meta-layer = "6"
LAYERDEPENDS_meta-layer = "core"
LAYERSERIES_COMPAT_meta-layer = "mickledore"
And the bblayers.conf in the build folder looks like this:
# POKY_BBLAYERS_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
POKY_BBLAYERS_CONF_VERSION = "2"
BBPATH = "${TOPDIR}"
BBFILES ?= "${LAYERDIR}/recipes-*/*/*.bb \
${LAYERDIR}/recipes-*/*/*.bbappend"
BBLAYERS ?= " \
/home/ydaelemans/workdir/poky/meta \
/home/ydaelemans/workdir/poky/meta-poky \
/home/ydaelemans/workdir/poky/meta-yocto-bsp \
/home/ydaelemans/workdir/poky/meta-qt5 \
/home/ydaelemans/workdir/poky/meta-hello \
/home/ydaelemans/workdir/poky/build/workspace \
/home/ydaelemans/workdir/poky/build/meta-layer \
"
When I now add a recipe using devtool add, it puts the recipe .bb file in what seems to me like the wrong place, namely in:
devtool add hello /home/ydaelemans//workdir/poky/meta-layer/recipes-hello
/home/ydaelemans/workdir/poky/build/workspace/recipes/hello/hello.bb
That is in the build directory instead of in the layer in the source directory. When I try to build from there, it gives errors that it cannot find the source files, so I copy the .bb file back into the layer (one level above the source files). From there I make some adjustments, since I am building a Qt program (and yes, the meta-qt5 layer is already pulled in and built on this setup). The file now looks like this:
SUMMARY = "bitbake-layers recipe"
DESCRIPTION = "Recipe created by bitbake-layers"
LICENSE = "CLOSED"
DEPENDS += "qtbase wayland"
SRC_URI = "file://hello.pro \
file://main.cpp \
file://mainwindow.cpp \
file://mainwindow.h \
file://mainwindow.ui \
file://hello.pro.user \
file://Makefile"
S = "{WORKDIR}"
do_install:append () {
# install -d ${D}${bindir}
# install -m 0775 qt-app ${D}${bindir}/
}
FILES:${PN} += "${bindir}/qt-app"
inherit qmake5
When I then run it using:
bitbake -b /home/ydaelemans/workdir/poky/meta-layer/recipes-hello/hello.bb
I get these errors:
WARNING: Buildfile specified, dependencies will not be handled. If this is not what you want, do not use -b / --buildfile.
Loading cache: 100% |########################################################| Time: 0:00:00
Loaded 1786 entries from dependency cache.
Build Configuration:
BB_VERSION = "2.2.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "universal"
TARGET_SYS = "arm-poky-linux-gnueabi"
MACHINE = "qemuarm"
DISTRO = "poky"
DISTRO_VERSION = "4.1"
TUNE_FEATURES = "arm vfp cortexa15 neon thumb callconvention-hard"
TARGET_FPU = "hard"
meta
meta-poky
meta-yocto-bsp = "master:0ce159991d8e49f8fa97bdf5f088fdfd753a32dc"
meta-qt5 = "master:1d1b19ff577835bf847152eed44d52e8267d9093"
meta-hello
workspace
meta-layer = "master:0ce159991d8e49f8fa97bdf5f088fdfd753a32dc"
Initialising tasks: 100% |###################################################| Time: 0:00:01
Sstate summary: Wanted 0 Local 0 Mirrors 0 Missed 0 Current 0 (0% match, 0% complete)
NOTE: No setscene tasks
NOTE: Executing Tasks
ERROR: hello-1.0-r0 do_configure: ExecutionError('/home/ydaelemans/workdir/poky/build/tmp/work/cortexa15t2hf-neon-poky-linux-gnueabi/hello/1.0-r0/temp/run.qmake5_base_postconfigure.22783', 2, None, None)
ERROR: Logfile of failure stored in: /home/ydaelemans/workdir/poky/build/tmp/work/cortexa15t2hf-neon-poky-linux-gnueabi/hello/1.0-r0/temp/log.do_configure.22783
Log data follows:
| DEBUG: Executing python function extend_recipe_sysroot
| DEBUG: Python function extend_recipe_sysroot finished
| DEBUG: Executing python function externalsrc_configure_prefunc
| DEBUG: Python function externalsrc_configure_prefunc finished
| DEBUG: Executing shell function qmake5_base_preconfigure
| DEBUG: Shell function qmake5_base_preconfigure finished
| DEBUG: Executing shell function do_configure
| DEBUG: Shell function do_configure finished
| DEBUG: Executing python function do_qa_configure
| DEBUG: Python function do_qa_configure finished
| DEBUG: Executing shell function qmake5_base_postconfigure
| WARNING: exit code 2 from a shell command.
| /home/ydaelemans/workdir/poky/build/tmp/work/cortexa15t2hf-neon-poky-linux-gnueabi/hello/1.0-r0/temp/run.qmake5_base_postconfigure.22783: 152: cannot create /home/ydaelemans/workdir/poky/build/tmp/work-shared/hello/1.0-r0/configure.sstate: Directory nonexistent
ERROR: Task (/home/ydaelemans/workdir/poky/meta-layer/recipes-hello/hello.bb:do_configure) failed with exit code '1'
NOTE: Tasks Summary: Attempted 7 tasks of which 6 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
/home/ydaelemans/workdir/poky/meta-layer/recipes-hello/hello.bb:do_configure
Summary: There was 1 WARNING message.
Summary: There was 1 ERROR message, returning a non-zero exit code.
It looks like it is failing during configuration, which I am not touching. It seems odd that I would need to create that directory by hand, no? Or is that indeed what you need to do?
PS: why does devtool add put my recipe .bb file in the build directory instead of in my layer in the source directory?

Regarding the path of your recipe after devtool add: this is completely normal.
Devtool uses a “Workspace” layer in which to accomplish builds. This layer is not specific to any single devtool command but is rather a common working area used across the tool.
https://docs.yoctoproject.org/ref-manual/devtool-reference.html
Normally, after devtool add, the workspace layer path in bblayers.conf is updated automatically.
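If you want the recipe to end up in your own layer rather than in the workspace, devtool can also move it there for you once you are done editing. A minimal example, using the layer path from the question:
devtool finish hello /home/ydaelemans/workdir/poky/meta-layer
devtool finish copies the recipe (plus any patches for your changes) into the given layer and removes it from the workspace.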
Regarding the configuration error: no, you shouldn't have to create that directory by hand. I highly recommend avoiding devtool add if it confuses you. Instead:
Run bitbake -c cleanall hello
Create your layer and your recipe (see the sketch below).
Add the layer path to bblayers.conf.
Add IMAGE_INSTALL:append = " hello" to your image recipe or local.conf (do not forget the leading space before hello).
And that's it: launch bitbake.
Please read the documentation.
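For reference, a minimal qmake-based recipe along the lines of the steps above could look roughly like this. It is only a sketch: the source files and the qtbase/wayland dependencies are taken from the recipe in the question, and the produced binary being named "hello" is an assumption.
SUMMARY = "Qt hello example"
LICENSE = "CLOSED"

DEPENDS += "qtbase wayland"

SRC_URI = "file://hello.pro \
           file://main.cpp \
           file://mainwindow.cpp \
           file://mainwindow.h \
           file://mainwindow.ui"

S = "${WORKDIR}"

inherit qmake5

# qmake5 drives configure/compile through qmake; if hello.pro does not
# define install targets, install the binary explicitly (the binary name
# "hello" is an assumption based on the project file name)
do_install() {
    install -d ${D}${bindir}
    install -m 0755 ${B}/hello ${D}${bindir}/
}
With the recipe in a layer that is listed in bblayers.conf, a plain bitbake hello (without -b) will also resolve the qtbase dependency instead of skipping dependency handling, as the WARNING at the top of the log points out.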

Related

How do I cross-compile a Qt project with a recipe in Yocto

I am trying to cross-compile a Qt project from a recipe. I have created a recipe file, but when I try to bitbake it I am met with an error.
Here is my recipe file:
DESCRIPTION = "my_project File Transfer"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
SRC_URI = "git://git#bitbucket.org/johndoe/my_ui.git;protocol=ssh;rev=master"
S = "${WORKDIR}/git/my_project"
RDEPENDS_${PN} ="bash"
inherit qmake5
require recipes-qt/qt5/qt5.inc
do_install_append() {
## Creating Folder Structure
install -d ${D}/opt/my_project/bin
install -d ${D}/home/root/my_project
install -d ${D}/home/root/my_project/font
install -d ${D}/home/root/my_project/Images
install -d ${D}/home/root/my_project/Qml
### run the qmake-generated install target
oe_runmake INSTALL_ROOT=${D} install
#### Copying files
install -m 0755 ${S}/font/* ${D}/home/root/my_project/font/
install -m 0755 ${S}/Images/* ${D}/home/root/my_project/Images/
install -m 0755 ${S}/Qml/* ${D}/home/root/my_project/Qml/
}
FILES_${PN} = "/home/root/my_project"
The error that I see is
Sstate summary: Wanted 335 Found 327 Missed 8 Current 1958 (97% match, 99% complete)
NOTE: Executing SetScene Tasks
NOTE: Executing RunQueue Tasks
ERROR: myproject-project-1.0-r0 do_configure: Error calling /home/blue/yacto/rpi-qt5/build/tmp/work/all-poky-linux/myproject-project/1.0-r0/recipe-sysroot-native/usr/bin/qmake -makefile -o Makefile /home/blue/yacto/rpi-qt5/build/tmp/work/all-poky-linux/myproject-project/1.0-r0/git/myproject/myproject.pro --
ERROR: myproject-project-1.0-r0 do_configure: Function failed: do_configure (log file is located at /home/blue/yacto/rpi-qt5/build/tmp/work/all-poky-linux/myproject-project/1.0-r0/temp/log.do_configure.20982)
ERROR: Logfile of failure stored in: /home/blue/yacto/rpi-qt5/build/tmp/work/all-poky-linux/myproject-project/1.0-r0/temp/log.do_configure.20982
Log data follows:
| DEBUG: Executing shell function qmake5_base_preconfigure
| DEBUG: Shell function qmake5_base_preconfigure finished
| DEBUG: Executing shell function do_configure
| NOTE: qmake prevar substitution: ' '
| Could not find qmake spec 'linux-oe-g++'.
| Error processing project file: /home/blue/yacto/rpi-qt5/build/tmp/work/all-poky-linux/myproject-project/1.0-r0/git/myproject/myproject.pro
| ERROR: Error calling /home/blue/yacto/rpi-qt5/build/tmp/work/all-poky-linux/myproject-project/1.0-r0/recipe-sysroot-native/usr/bin/qmake -makefile -o Makefile /home/blue/yacto/rpi-qt5/build/tmp/work/all-poky-linux/myproject-project/1.0-r0/git/myproject/myproject.pro --
| WARNING: exit code 1 from a shell command.
| ERROR: Function failed: do_configure (log file is located at /home/blue/yacto/rpi-qt5/build/tmp/work/all-poky-linux/myproject-project/1.0-r0/temp/log.do_configure.20982)
ERROR: Task (/home/blue/yacto/poky-warrior-21.0.1/meta-rpi_custom/recipes-custom/myproject-project/myproject-project_1.0.bb:do_configure) failed with exit code '1'
NOTE: Tasks Summary: Attempted 4242 tasks of which 4241 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
/home/blue/yacto/poky-warrior-21.0.1/meta-rpi_custom/recipes-custom/myproject-project/myproject-project_1.0.bb:do_configure
I know that in order to cross-compile, I had to run qmake from my cross-compile toolchain location and then run make on it.
I am guessing that is what's missing in my recipe. So my question is: do I add that in my do_configure?
If that's the case, can anyone help me, or point me to how I populate my do_configure?
Is it as simple as source /opt/poky/2.7.1/environment----
then qmake?
I am drawing a blank at this step.
Please let me know what I am doing wrong.
Edit 1: removed inherit allarch from the recipe.
I would just keep inherit qmake5 and add DEPENDS += "qtbase qtxyz ...", where qtxyz stands for whatever other Qt dependency modules the project needs.
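Applied to the recipe in the question, that would look roughly like the sketch below. The qtdeclarative module is only an example (needed if the project uses QML), and the SRCREV/branch handling is an assumption; adapt it to your repository.
DESCRIPTION = "my_project File Transfer"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

SRC_URI = "git://git@bitbucket.org/johndoe/my_ui.git;protocol=ssh;branch=master"
SRCREV = "${AUTOREV}"
S = "${WORKDIR}/git/my_project"

# qtbase stages the target mkspecs (including linux-oe-g++) and Qt headers
# into the recipe sysroot; add further Qt modules as the project needs them,
# e.g. qtdeclarative for QML
DEPENDS += "qtbase qtdeclarative"

RDEPENDS_${PN} = "bash"

inherit qmake5
The do_install_append from the question can stay as it is; the key point is inheriting qmake5 without allarch and without requiring qt5.inc, plus the DEPENDS line, which is what typically makes the cross qmake and the linux-oe-g++ mkspec available during do_configure.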

How are zsh autocompletions for commands with subcommands defined?

I am trying to write a tab-completion script for borg.
So far, I have managed to define completions for borg itself, as well as borg key with its subcommands and borg benchmark with its singular subcommand. However, I am now trying to define completion for borg init and I am having trouble.
The issue presents itself only when I define two arguments under the borg init command to use the same description text; i.e. both -e and --encryption should use the same description, as they are practically the same argument. This has worked fine for borg's arguments, but now it breaks.
This is my code, slightly redacted to spare you the redundancy:
compdef _borg borg
function _borg {
local line ret=1
local -a argus
local logs="--critical --error --warning --debug --info -v --verbose"
argus+=(
"(*)"{-h,--help}"[Show help and exit]"
"(*)-V[Show Borg version and exit]"
"($logs)--critical[Work on log level CRITICAL]"
"($logs)--error[Work on log level ERROR]"
"($logs)--warning[Work on log level WARNING (default)]"
"($logs)"{--info,-v,--verbose}"[Work on log level INFO]"
"($logs)--debug[Enable debug output; log level DEBUG]"
{-p,--progress}"[Show progress]"
"--log-json[Output one JSON object per log line instead of formatted text]"
"--show-version[Show/log borg version]"
"--show-rc[Show/log returncode]"
"--consider-part-files[treat part files like normal files (e.g. to list/extract them)]"
"--lock-wait[Wait at most SECONDS for acquiring a repository/cache lock (default 1)]:SECONDS:()"
"--umask[Set umask to M (local and remote; default 0077)]:M (umask value, e.g. 0077):()"
"--remote-path[Use PATH as borg executable on the remote (default: \"borg\")]:PATH:()"
"--remote-ratelimit[Set remote network upload rate limit in kiByte/s (default: 0=unlimited)]:RATE:()"
"--debug-profile[Write execution profile in Borg format into FILE.]:FILE:_files"
"--rsh[Use this command to connect to the \"borg serve\" process (default: \"ssh\")]:RSH:()"
"1: :((init\:\"Initialize a new repository\" \
create\:\"Create a new archive\" \
extract\:\"Extract the contents of an archive\" \
check\:\"Verifies consistency of a repository and its archives\" \
rename\:\"Renames an archive in a repository\" \
list\:\"Lists contents of a repository or archive\" \
diff\:\"Finds differences between archives\" \
delete\:\"Deletes an archive or an entire repository (and its cache)\" \
prune\:\"Prunes a repository\" \
info\:\"Shows info about a repository or archive\" \
mount\:\"Mounts an archive as a FUSE filesystem\" \
unmount\:\"Unmounts a FUSE filesystem mounted with \\\"borg mount\\\"\" \
key\:\"Keyword for key-related functions\" \
upgrade\:\"Upgrade a local Borg repository\" \
recreate\:\"EXPERIMENTAL: Recreates contents of existing archives\" \
export-tar\:\"Creates a tarball from an archive\" \
serve\:\"Starts repository server process. Not usually used manually.\" \
config\:\"Gets and sets options in local repository and cache config files\" \
with-lock\:\"Executes another command with the repository lock held\" \
break-lock\:\"Breaks the repository and cache locks\" \
benchmark\:\"Keyword for the benchmark function\"))" \
"*::arg:->args"
)
_arguments -w -s -S -C $argus[@] && ret=0
case $line[1] in
benchmark)
_borg_benchmark
;;
init)
_borg_init
;;
key)
_borg_key
;;
esac
return ret
}
function _borg_benchmark {
# stuff
}
function _borg_benchmark_crud {
# stuff again
}
function _borg_init {
local line ret=1
local -a argus
argus+=(
"-t[This is a test]"
"--test[This is a test]"
"(--append-only)--append-only[Create an append-only mode repository]"
"*::arg:->args"
)
_arguments -w -s -S -C $argus[@] && ret=0
return ret
}
function _borg_key {
# key stuff
}
function _borg_key_changepassphrase {
# stuff
}
function _borg_key_export {
# more stuff
}
function _borg_key_import {
# other stuff
}
If I try to tab-complete borg init - using this setup, I get the following output:
$ borg init -
Completing option
--append-only
--test
-t
-- Create an append-only mode repository
-- This is a test
--append-only
--test
-t
-- Create an append-only mode repository
-- This is a test
--append-only
--test
-t
-- Create an append-only mode repository
-- This is a test
--append-only
--test
-t
-- Create an append-only mode repository
-- This is a test
The completion appears to forget what tabs are and repeats itself four times. If I change --test[This is a test] to --test[This is another test] in _borg_init, I instead get the following completion:
$ borg init -
Completing option
--append-only -- Create an append-only mode repository
--test -- This is another test
-t -- This is a test
The above is "correct", in the sense that it's not broken, but I cannot seem to define arguments that share a description in a subcommand. How should I do that? And, more generally, how are you supposed to define completions for commands with subcommands (which may, in turn, have more arguments)?
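For comparison, the top-level function above already declares paired options in a single spec, e.g. "(*)"{-h,--help}"[Show help and exit]". The same pattern can be applied inside _borg_init so that -t and --test share one description; a sketch, not a verified fix for the repetition shown above:
function _borg_init {
    local line ret=1
    local -a argus
    argus+=(
        # one spec for both spellings: they share the description and
        # exclude each other on the command line
        "(-t --test)"{-t,--test}"[This is a test]"
        "--append-only[Create an append-only mode repository]"
        "*::arg:->args"
    )
    _arguments -w -s -S -C $argus[@] && ret=0
    return ret
}
The leading (-t --test) exclusion list also keeps the second form from being offered once one of them is already on the command line.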

Sdkman: incorrect zsh completion script output

I am using oh-my-zsh and I have been trying to develop a custom completion script for sdkman.
However, I have encountered a small problem when trying to share logic between some of the commands.
Below is the beginning of the completion script. There are three functions using the _describe method to output completion help.
#compdef sdk
zstyle ':completion:*:descriptions' format '%B%d%b'
# Gets the candidate list and strips everything unnecessary to keep just the candidate names
__get_candidate_list() {
echo `sdk list | grep --color=never "$ sdk install" | sed 's/\$ sdk install //g' | sed -e 's/[\t ]//g;/^$/d'`
}
__get_current_installed_list() {
echo `sdk current | sed "s/Using://g" | sed "s/\:.*//g" | sed -e "s/[\t ]//g;/^$/d"`
}
__describe_commands() {
local -a commands
commands=(
'install: install a program'
'uninstall: uninstal an existing program'
)
_describe -t commands "Commands" commands && ret=0
}
__describe_install() {
local -a candidate_list
candidate_list=( $( __get_candidate_list ) )
_describe -t candidate_list "Candidates available" candidate_list && ret=0
}
__describe_uninstall() { # FIXME This is not working, it only displays the candidate list
local -a candidates_to_uninstall
candidates_to_uninstall=( $( __get_current_installed_list ) )
_describe -t candidates_to_uninstall "Uninstallable candidates" candidates_to_uninstall && ret=0
}
The __get_candidate_list echoes the following values:
ant asciidoctorj bpipe ceylon crash cuba cxf gaiden glide gradle grails groovy groovyserv infrastructor java jbake kotlin kscript lazybones leiningen maven micronaut sbt scala spark springboot sshoogr vertx visualvm
The __get_current_installed_list echoes the following values:
gradle java kotlin maven sbt scala
The second part of the script below is where we call everything so that the completion script is used correctly by zsh:
function _sdk() {
local ret=1
local target=$words[2]
_arguments -C \
'1: :->first_arg' \
'2: :->second_arg' \
&& ret=0
case $state in
first_arg)
__describe_commands
;;
second_arg)
case $target in
install)
__describe_install
;;
uninstall)
__describe_uninstall
;;
*)
;;
esac
;;
esac
return $ret
}
_sdk "$@"
The problem is the following: when I type sdk install I get the right output, the one from the __get_candidate_list method, BUT when I use sdk uninstall it still gives me the output from __get_candidate_list, although I am expecting the __get_current_installed_list output.
EDIT: After a bit of debugging, it seems that zsh is not at fault here. I can't figure out why, but sdkman gives me the same output for both sdk list and sdk current (after the sed commands) from inside the completion script. In my shell, both commands work properly.
Is there something wrong with the way I use the _describe method ?
Is there anything else I am not seeing ?
Thanks for your help.
So I finally found a workaround to fix this, but it is not ideal.
I chose to launch the commands in the background when the plugin is loaded and fill text files with the results, so that the completion script can read from them afterwards.
Below is the code I used in the zsh-sdkman.plugin.zsh file, in case my GitHub repository disappears:
# --------------------------
# -------- Executed on shell launch for completion help
# --------------------------
# NOTE: Sdkman seems to always output the candidate list rather than the currently installed list when called from the completion script
# There are two goals in the code below:
# - Optimization: the _sdkman_get_candidate_list command can take time, so it is done once and in background
# - Bug correction: correct the problem with sdkman command output explained above
# WARNING: We are setting this as a local variable because we don't have it yet at the time of initialization
# A better approach would be welcome
SDKMAN_DIR_LOCAL=~/.sdkman
# Custom variables for later
export ZSH_SDKMAN_CANDIDATE_LIST_HOME=~/.zsh-sdkman.candidate-list
export ZSH_SDKMAN_INSTALLED_LIST_HOME=~/.zsh-sdkman.current-installed-list
_sdkman_get_candidate_list() {
(sdk list | grep --color=never "$ sdk install" | sed 's/\$ sdk install //g' | sed -e 's/[\t ]//g;/^$/d' > $ZSH_SDKMAN_CANDIDATE_LIST_HOME &)
}
_sdkman_get_current_installed_list() {
(sdk current | sed "s/Using://g" | sed "s/\:.*//g" | sed -e "s/[\t ]//g;/^$/d" > $ZSH_SDKMAN_INSTALLED_LIST_HOME &)
}
# "sdk" command is not found if we don't do this
source "$SDKMAN_DIR_LOCAL/bin/sdkman-init.sh"
# Initialize files with the available candidate list and the currently installed candidates
_sdkman_get_candidate_list "$@"
_sdkman_get_current_installed_list "$@"
For more information, you can see the complete repository of my plugin: https://github.com/matthieusb/zsh-sdkman
If you have another, cleaner solution, I'd be willing to make the necessary modifications; don't hesitate to make a pull request on the project.
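On the completion side, the describe functions then read from those files instead of calling sdk directly; roughly like this (a sketch, the plugin's actual code may differ):
__describe_uninstall() {
    local -a candidates_to_uninstall
    # read the pre-generated list written at plugin load time
    candidates_to_uninstall=( $(cat $ZSH_SDKMAN_INSTALLED_LIST_HOME) )
    _describe -t candidates_to_uninstall "Uninstallable candidates" candidates_to_uninstall && ret=0
}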

Argument in Dockerfile not passed as executed command

In my Dockerfile I'm trying to download the latest WordPress version without any content inside it, but I'm having trouble automating the latest version number so that I don't have to manually change it when the new version of WordPress comes out.
In my Dockerfile I have
ARG LATESTWPVER="$(curl -s https://api.wordpress.org/core/version-check/1.5/ | head -n 4 | tail -n 1)"
ADD $(https://downloads.wordpress.org/release/wordpress-$LATESTWPVER-no-content.zip) /var/www/latest.zip
But the problem is that my LATESTWPVER is not 4.9.8, and I get the error
ADD failed: stat /var/lib/docker/tmp/docker-builder962069305/$(https:/downloads.wordpress.org/release/wordpress-$(curl -s https:/api.wordpress.org/core/version-check/1.5/ | head -n 4 | tail -n 1)-no-content.zip): no such file or directory
It passes the entire command and I'd like to have the output of that command.
In my shell script, the
#!/bin/bash
WP_LATEST="$(curl -s https://api.wordpress.org/core/version-check/1.5/ | head -n 4 | tail -n 1)"
echo $WP_LATEST
will return the number 4.9.8.
From the error, I'm guessing that you can only assign something to the variable, but not execute it. Is there a way to execute a command and assign it to a variable and pass it as an argument?
A Dockerfile is not a shell or a build script, so it will not execute what you pass in ARG. There is a workaround: define the version as an ARG and pass its value in at build time.
Dockerfile:
FROM ubuntu:latest
ARG LATESTWPVER
RUN echo $LATESTWPVER
ADD https://downloads.wordpress.org/release/wordpress-$LATESTWPVER-no-content.zip /var/www/latest.zip
docker build --build-arg LATESTWPVER=`curl -s https://api.wordpress.org/core/version-check/1.5/ | head -n 4 | tail -n 1` .
Sending build context to Docker daemon 6.656kB
Step 1/4 : FROM ubuntu:latest
---> 113a43faa138
Step 2/4 : ARG LATESTWPVER
---> Using cache
---> 64f47dcfe7fa
Step 3/4 : RUN echo $LATESTWPVER
---> Running in eb5fdd005d77
4.9.8
Removing intermediate container eb5fdd005d77
---> 1015629b927e
Step 4/4 : ADD https://downloads.wordpress.org/release/wordpress-$LATESTWPVER-no-content.zip /var/www/latest.zip
Downloading [==================================================>] 7.118MB/7.118MB
---> 72f0d3790e51
Successfully built 72f0d3790e51

How to deploy home-grown applications with rpm?

Here is my scenario:
Our team develops on AIX.
We have dozens of applications, mostly Perl, shell scripts, batch Java, and C.
I would like to simplify our deployment/rollback procedures; currently we use plain old tarballs with backups.
I looked into installp vs. rpm for packaging (see my SO question) and decided to go with rpm: better docs, plus IBM ships it alongside their own packaging tool, which is a valid enough reason for me.
I want to use a separate rpm db, not the main one, since I don't have root access and I also feel it would be beneficial to keep OS packages separate from our own.
The workflow would look like this:
Each app has a corresponding rpm .spec file, checked into source control.
Build the rpm in a home directory.
Install/upgrade using our own packages.rpm database.
NOTE: I will use this question as notes to myself as I proceed.
Building rpms in my home directory:
1.
I need a .rpmmacros file in my home directory which overrides some system-wide rpm settings:
%_signature gpg
%_gpg_name {yourname}
%_gpg_path ~/.gnupg
%distribution AIX 5.3
%vendor {Northwind? :)}
%make make
2.
The following script creates the directory structure needed for rpm builds; it also updates .rpmmacros:
#!/bin/sh
[ "x$1" = "x-d" ] && {
DEBUG="y"
export DEBUG
shift 1
}
IAM=`id -un`
PASSWDDIR=`grep ^$IAM: /etc/passwd | awk -F":" '{print $6}'`
HOMEDIR=${HOME:=$PASSWDDIR}
[ ! -d $HOMEDIR ] && {
echo "ERROR: Home directory for user $IAM not found in /etc/passwd."
exit 1
}
RHDIR="$HOMEDIR/rpmbuild"
RPMMACROS="$HOMEDIR/.rpmmacros"
touch $RPMMACROS
TOPDIR="%_topdir"
ISTOP=`grep -c ^$TOPDIR $RPMMACROS`
[ $ISTOP -lt 1 ] && {
echo "%_topdir $HOMEDIR/rpmbuild" >> $RPMMACROS
}
TMPPATH="%_tmppath"
ISTMP=`grep -c ^$TMPPATH $RPMMACROS`
[ $ISTMP -lt 1 ] && {
echo "%_tmppath $HOMEDIR/rpmbuild/tmp" >> $RPMMACROS
}
[ "x$DEBUG" != "x" ] && {
echo "$IAM $HOMEDIR $RPMMACROS"
echo "$RHDIR $TOPDIR $ISTOP"
}
[ ! -d $RHDIR ] && mkdir -p $RHDIR
cd $RHDIR
for i in RPMS SOURCES SPECS SRPMS BUILD tmp ; do
[ ! -d ./$i ] && mkdir ./$i
done
exit 0
You can check whether rpm picked up your changes with:
rpm --showrc | grep topdir
3.
Specify a non-default location for the RPM database and initialize it, for example:
rpm --dbpath /location/of/your/rpm/database --initdb
I usually check my spec files in to the same place as my code.
I run a build server (I use Hudson) to kick off a build every night (could be continuous but I chose nightly). The build server checks out the code, builds it, and runs rpmbuild. Hudson sets up a workspace folder that can be deleted after each build so if you set %_topdir to point to that area then you can guarantee there won't be build artifacts left over from a previous build. At the end of the build I check my rpms into version control with a comment containing the build number.
Rolling back is a matter of pulling out the last good rpm from version control, erasing the current rpm, and installing the old rpm.
Sounds like you already have a good handle on using your own package db.
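To make the rollback part concrete, install/upgrade/erase against the separate database from step 3 looks roughly like this (a sketch; the database path and package file names are placeholders):
RPMDB=$HOME/packages.rpm      # the separate db initialised with --initdb in step 3

# install or upgrade the rpm produced by the latest good build
rpm --dbpath $RPMDB -Uvh $HOME/rpmbuild/RPMS/ppc/myapp-1.2-1.ppc.rpm

# roll back: erase the current version, then install the previous rpm
# pulled back out of version control
rpm --dbpath $RPMDB -e myapp
rpm --dbpath $RPMDB -ivh myapp-1.1-1.ppc.rpm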
