Yocto: remove packageconfig item - qt

In a recipe (meta-qt5/recipes-qt/qt5/qttools_git.bb) I found:
PACKAGECONFIG ??= ""
PACKAGECONFIG[qtwebkit] = ",,qtwebkit"
Now, under my own meta-custom-layer, I'm going to create the same path and add a .bbappend file: meta-custom-layer/meta-qt5/recipes-qt/qt5/qttools_git.bbappend.
I want to remove the second line, because I'm not interested in qtwebkit.
Would it be enough to put:
PACKAGECONFIG[qtwebkit] = ""
or do I need something else?
Because of the ??= operator, I guess the PACKAGECONFIG variable is updated with qtwebkit elsewhere. Do I need to find and remove that assignment as well? Is there a quick way to find out where it is appended?
UPDATE
To find where the qtwebkit is configured I tried to use grep:
$ grep -nrw . -e qtwebkit
./layers/meta-st/meta-st-openstlinux/recipes-samples/packagegroups/packagegroup-framework-sample-qt-extra.bb:30: qtwebkit \
./layers/meta-st/meta-st-openstlinux/recipes-samples/packagegroups/packagegroup-framework-sample-qt-extra.bb:53: qtwebkit-examples \
Binary file ./layers/meta-qt5/.git/index matches
./layers/meta-qt5/README.md:8:When building stuff like `qtdeclarative`, `qtquick`, `qtwebkit`, make
./layers/meta-qt5/recipes-qt/packagegroups/packagegroup-qt5-toolchain-target.bb:12: ${@bb.utils.contains('DISTRO_FEATURES', 'opengl', 'qtwebkit-dev', '', d)} \
./layers/meta-qt5/recipes-qt/qt5/qttools/0001-add-noqtwebkit-configuration.patch:25: BROWSER = qtwebkit
./layers/meta-qt5/recipes-qt/qt5/qttools/0001-add-noqtwebkit-configuration.patch:32:-equals(BROWSER, "qtwebkit") {
./layers/meta-qt5/recipes-qt/qt5/qttools/0001-add-noqtwebkit-configuration.patch:33:+equals(BROWSER, "qtwebkit"):!contains(CONFIG, noqtwebkit) {
./layers/meta-qt5/recipes-qt/qt5/qttools_git.bb:28:PACKAGECONFIG[qtwebkit] = ",,qtwebkit"
./layers/meta-qt5/recipes-qt/qt5/qttools_git.bb:32: ${@bb.utils.contains('PACKAGECONFIG', 'qtwebkit', '', 'CONFIG+=noqtwebkit', d)} \
./layers/meta-qt5/recipes-qt/qt5/qt5-creator_git.bb:17:DEPENDS = "qtbase qtscript qtwebkit qtxmlpatterns qtx11extras qtdeclarative qttools qttools-native qtsvg chrpath-replacement-native"
./layers/meta-qt5/recipes-qt/qt5/qtbase_git.bb:76:# This is in qt5.inc, because qtwebkit-examples are using it to enable ca-certificates dependency
./layers/meta-qt5/recipes-qt/qt5/qtwebkit-examples_git.bb:18:DEPENDS += "qtwebkit qtxmlpatterns"
./layers/meta-qt5/recipes-qt/qt5/qtwebkit-examples_git.bb:19:RDEPENDS_${PN}-examples += "qtwebkit-qmlplugins"
./layers/meta-qt5/recipes-qt/qt5/qtwebkit_git.bb:12:# Patches from https://github.com/meta-qt5/qtwebkit/commits/b5.11
./layers/meta-qt5/lib/recipetool/create_qt5.py:101: 'webkit': 'qtwebkit',
./layers/meta-qt5/lib/recipetool/create_qt5.py:102: 'webkitwidgets': 'qtwebkit',
So I think the line to remove is the one I described above.
bitbake -e <image> produces output so long that it overflows the console buffer... I tried grepping the output for qtwebkit but nothing is returned.
The same applies to grep -nrw . -e DISTRO_FEATURES | grep qtwebkit.

The PACKAGECONFIG[qtwebkit] = ",,qtwebkit" line only defines what enabling or disabling that feature would do if qtwebkit appears in that package's PACKAGECONFIG variable (see the PACKAGECONFIG documentation in the Yocto reference manual). The comma-separated fields are: configure options when the feature is enabled, configure options when it is disabled, and extra build dependencies. So ",,qtwebkit" adds no configure options in either case; its only effect is to add qtwebkit as a build dependency when the feature is enabled.
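If the goal is simply to make sure the feature can never be enabled, the usual approach is not to blank the flag definition but to strip the item from PACKAGECONFIG itself. A minimal sketch of the .bbappend (my suggestion, not from the original answer; this uses the old-style override syntax that matches the rest of this thread, while newer releases spell it PACKAGECONFIG:remove):
# meta-custom-layer/meta-qt5/recipes-qt/qt5/qttools_git.bbappend
# Drop qtwebkit from PACKAGECONFIG even if a distro or machine
# configuration appended it somewhere else.
PACKAGECONFIG_remove = "qtwebkit"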
More towards your question about how to diagnose something like "why is this variable set": a starting point is to use bitbake -e [optional package or image name] > env.log to dump the environment to a log file that you can review. It is worth checking this with no package or image name, as well as with the package and with whatever image you're trying to build: sometimes the image configuration enables a feature in another package's PACKAGECONFIG via other variables, and checking the environment will often show you why something was set.
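For example, with the recipe from the question (qttools), something along these lines usually pinpoints the origin, because bitbake -e precedes each variable with comment lines recording every file that set or appended it:
$ bitbake -e qttools > env.log
$ grep -n -B 10 '^PACKAGECONFIG=' env.log
The -B 10 context size is arbitrary; widen it if the variable has a long history.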

How to select all files from one sample?

I have a problem figuring out how to make the input directive only select all {samples} files in the rule below.
rule MarkDup:
    input:
        expand("Outputs/MergeBamAlignment/{samples}_{lanes}_{flowcells}.merged.bam", zip,
               samples=samples['sample'],
               lanes=samples['lane'],
               flowcells=samples['flowcell']),
    output:
        bam = "Outputs/MarkDuplicates/{samples}_markedDuplicates.bam",
        metrics = "Outputs/MarkDuplicates/{samples}_markedDuplicates.metrics",
    shell:
        "gatk --java-options -Djava.io.tempdir=`pwd`/tmp \
        MarkDuplicates \
        $(echo ' {input}' | sed 's/ / --INPUT /g') \
        -O {output.bam} \
        --VALIDATION_STRINGENCY LENIENT \
        --METRICS_FILE {output.metrics} \
        --MAX_FILE_HANDLES_FOR_READ_ENDS_MAP 200000 \
        --CREATE_INDEX true \
        --TMP_DIR Outputs/MarkDuplicates/tmp"
Currently it will create correctly named output files, but it selects all files that match the pattern based on all wildcards. So I'm perhaps halfway there. I tried changing {samples} to {{samples}} in the input directive as such:
expand("Outputs/MergeBamAlignment/{{samples}}_{lanes}_{flowcells}.merged.bam", zip,
lanes=samples['lane'],
flowcells=samples['flowcell']),`
but this broke the previous rule somehow. So what I'm after is something like
input:
    "{sample}_*.bam"
But clearly this doesn't work.
Is it possible to collect all files that match {sample}_*.bam with a function and use that as input? And if so, will the function still work with $(echo ' {input}' etc...) in the shell directive?
If you just want all the files in the directory, you can use a lambda function:
from glob import glob

rule MarkDup:
    input:
        lambda wcs: glob('Outputs/MergeBamAlignment/%s*.bam' % wcs.samples)
    output:
        bam="Outputs/MarkDuplicates/{samples}_markedDuplicates.bam",
        metrics="Outputs/MarkDuplicates/{samples}_markedDuplicates.metrics"
    shell:
        ...
Just be aware that this approach can't do any checking for missing files, since it will always report that the files needed are the files that are present. If you do need confirmation that the upstream rule has been executed, you can have the previous rule touch a flag, which you then require as input to this rule (though you don't actually use the file for anything other than enforcing execution order).
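A minimal sketch of that flag idea (the .done flag name and the touch() output are illustrative, not from the question):
from glob import glob

rule MarkDup:
    input:
        # Requiring the flag forces MergeBamAlignment to run first,
        # so the glob below sees the finished BAMs.
        flag="Outputs/MergeBamAlignment/{samples}.done",
        bams=lambda wcs: glob('Outputs/MergeBamAlignment/%s*.bam' % wcs.samples)
    output:
        bam="Outputs/MarkDuplicates/{samples}_markedDuplicates.bam"
    shell:
        "... {input.bams} ..."
with the upstream rule declaring touch("Outputs/MergeBamAlignment/{samples}.done") among its outputs.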
If I understand correctly, zip needs to be applied only to {lanes} and {flowcells} and not to {samples}. In that case, using two expand instances achieves that: the doubled braces in {{samples}} escape the wildcard, so the inner expand leaves a literal {samples} in each path and the outer expand then fills it in.
input:
    expand(expand("Outputs/MergeBamAlignment/{{samples}}_{lanes}_{flowcells}.merged.bam",
                  zip, lanes=samples['lane'], flowcells=samples['flowcell']),
           samples=samples['sample'])
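To see why the doubled braces work, you can evaluate the two expand calls by hand; a quick sketch with made-up sample/lane/flowcell values:
from snakemake.io import expand

# Inner expand: zip pairs lanes with flowcells element-wise, and the
# doubled braces leave a literal {samples} placeholder in each pattern.
inner = expand("Outputs/MergeBamAlignment/{{samples}}_{lanes}_{flowcells}.merged.bam",
               zip, lanes=["L1", "L2"], flowcells=["F1", "F2"])
# ['Outputs/MergeBamAlignment/{samples}_L1_F1.merged.bam',
#  'Outputs/MergeBamAlignment/{samples}_L2_F2.merged.bam']

# Outer expand: fills {samples} into every pattern produced above.
print(expand(inner, samples=["A", "B"]))
# ['Outputs/MergeBamAlignment/A_L1_F1.merged.bam',
#  'Outputs/MergeBamAlignment/A_L2_F2.merged.bam',
#  'Outputs/MergeBamAlignment/B_L1_F1.merged.bam',
#  'Outputs/MergeBamAlignment/B_L2_F2.merged.bam']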
PS: output.tmp file uses {sample} instead of {samples}. Typo?

Bitbake recipe not applying patch as expected

I have a tarball src.tar.gz whose contents are unpacked into src/, and a patch of these sources generated with this command:
$ diff -Nurp src/ src_mod/ > my.patch
The patch header starts with these three lines:
diff -Nurp src/path/to/file src_mod/path/to/file
--- src/path/to/file 2012-10-22 05:52:59.000000000 +0200
+++ src_mod/path/to/file 2016-03-14 12:27:52.892802283 +0100
My bitbake recipe references both the patch and the tarball using this SRC_URI:
SRC_URI = " \
file://my.patch \
file://src.tar.gz \
"
do_fetch and do_unpack tasks work as expected, leaving my.patch and src.tar.gz inside the ${S} directory, that is:
${S}/my.patch
${S}/src.tar.gz
But do_patch task is failing with this ERROR message:
ERROR: Command Error: exit status: 1 Output:
Applying patch my.patch
can't find file to patch at input line 4
Perhaps you used the wrong -p or --strip option?
I have tested different alternatives, for example setting the "patchdir" attribute as shown below:
SRC_URI = " \
file://my.patch;patchdir=${S}/src \
file://src.tar.gz \
"
I expected "patchdir" to behave the same as using "patch -d dir", but it doesn't work as expected; it always returns the same ERROR message.
What am I doing wrong?
My variable ${S} was re-defined inside my recipe with this content:
S = "${WORKDIR}/${PN}-${PV}"
But the fetcher downloads my.patch and src.tar.gz into ${WORKDIR}, not into the ${S} directory, so:
${WORKDIR}/my.patch
${WORKDIR}/src.tar.gz
And the tarball was also extracted inside ${WORKDIR}:
${WORKDIR}/src/
The fix was setting the "patchdir" attribute properly, replacing ${S} with ${WORKDIR}:
SRC_URI = " \
file://my.patch;patchdir=${WORKDIR}/src \
file://src.tar.gz \
"
Now it works!
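An alternative that should work just as well here (my observation, not something tested in the original post): since the tarball unpacks into ${WORKDIR}/src, pointing ${S} there lets do_patch run with its defaults, because the default striplevel of 1 strips the leading src/ from the patch header paths:
S = "${WORKDIR}/src"

SRC_URI = " \
    file://my.patch \
    file://src.tar.gz \
"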
Though the author provided their own solution to their distinct problem, this question is the first result that comes up when searching for solutions to the "can't find file to patch" error in Yocto builds, so I want to share a solution for a different situation that produces the same error output.
In my case, I was trying to use a .bbappend file to apply an override-controlled patch to a pre-existing recipe, and was receiving the "can't find file to patch" error. The thread at https://community.nxp.com/thread/474138 identified the solution: use the '_append' syntax instead of the '+=' syntax. That is, use:
SRC_URI_append_machineoverride = " file://my.patch"
Instead of
SRC_URI_machineoverride += "file://my.patch"
Note that the use of '_append' requires a leading space (contra the trailing space noted in the thread linked above). I have not yet investigated this enough to explain why this syntax is necessary, but I thought this would still be a valuable addition to the record for this question.
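A plausible explanation, based on BitBake's documented override semantics (my reading, since the original thread doesn't cover it): an override-qualified variable is a conditional replacement value, while _append with an override is a conditional append.
# Replaces SRC_URI wholesale when machineoverride is active;
# the recipe's original SRC_URI entries are lost:
SRC_URI_machineoverride += "file://my.patch"

# Appends to SRC_URI only when machineoverride is active,
# keeping the original entries:
SRC_URI_append_machineoverride = " file://my.patch"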

Adding custom commands to existing targets in qmake

Is there a way to specify, in a .pro file, extra commands to be added to a standard target in the Makefile that qmake generates? For example, consider distclean: extra commands might be desired to:
Remove *~ files.
Clean out runtime-generated output files from the source tree.
Etc.
I want to use the normal target and not a custom target because I want this to be completely transparent in my workflow. That is (again using distclean as an example), I don't want to...
... require knowledge in a multi-project setup that certain Makefiles use a custom rule instead of distclean.
... document custom rules, even for stand-alone projects, as distclean is already well-known and intuitive†.
I found How to add custom targets in a qmake generated Makefile?, but this describes adding custom targets (which is already documented, even back in 4.6) rather than adding rules to existing targets. While it does contain some hints, all of them require adding new custom targets, as specifying the same target more than once in a Makefile replaces (not adds) commands from the previous target.
The only thing I could really think of to try was to add target.commands += new commands to the .pro file as a wild guess (e.g. distclean.commands += rm \"*~\"). This has no effect.
How can I transparently add custom commands to existing targets with qmake?
† For the distclean example: While maintainer-clean is also on that "standard target" list, in practice I have found it to be rarely used, and in any case qmake doesn't generate it by default; I consider it to be unsuitable.
There are two straightforward ways to accomplish this, depending on how self-contained / portable you want your solution to be and how lenient you want to be with the order of command execution.
Option 1
The first option is to create a custom target in the .pro file for the new commands, then add that target as a prerequisite to the standard target that you are modifying. Going back to the distclean example, let's say you want to add a command to remove all *~ files:
Create a custom target in your .pro file. Note that you have to escape quotes and slashes in .pro files. For example, add:
extraclean.commands = find . -name \"*~\" -exec rm -v {} \\;
Add this target as a dependency of the target you are modifying:
distclean.depends = extraclean
This won't actually modify the distclean rule just yet, as this method can't be used to modify existing rules. However...
Add both your new target and the target you are modifying as extra targets:
QMAKE_EXTRA_TARGETS += distclean extraclean
This will add a second specification of distclean to the Makefile, but this works because you can add dependencies to existing targets in make in separate rules, even though you can't add commands that way. If you were to also specify distclean.commands in your .pro file, you would break the existing distclean by replacing its default recipe.
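In plain make terms, the generated Makefile ends up with two rules for the same target, which make merges; only one of them may carry a recipe. Illustrative only, since the real generated recipe varies by qmake version:
# Generated by qmake: supplies the recipe.
distclean: clean
	-$(DEL_FILE) Makefile

# Added via QMAKE_EXTRA_TARGETS: supplies only an extra prerequisite.
distclean: extraclean

extraclean:
	find . -name "*~" -exec rm -v {} \;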
So, putting that all together, in the .pro file:
extraclean.commands = find . -name \"*~\" -exec rm -v {} \\;
distclean.depends = extraclean
QMAKE_EXTRA_TARGETS += distclean extraclean
Where extraclean is some custom target with the commands you want to add, and distclean is the existing target that you wish to modify.
Pros:
Completely self-contained in a .pro file.
As portable as you can get, leaves the actual Makefile syntax and generation up to qmake.
Cons:
Your new commands aren't appended to the existing recipe. Rather, they happen after all prerequisite targets are satisfied but before the existing recipe. In the distclean example, with the version of qmake that I'm using, this places the commands after the source tree clean but before Makefile itself is deleted (which is the only action the default recipe takes). This is not an issue for this example, but may be an issue for you.
Option 2
The second option is to change the name of the Makefile that qmake generates, and create your own custom Makefile that defers to the generated one, rather than includes + overrides it. This is also a straightforward option; while not as self-contained as option 1, it gives you the ability to execute commands both before and after the default generated recipe.
You don't want to include + override the existing Makefile, because you don't want to replace the default recipes. If you do, you have to re-implement the default, but this can be an issue as that default may change (and you have to keep up with the changes). It's best to let qmake do as much work as possible, and not repeat its work.
To do this:
First, change the name of the file that qmake generates. This can be accomplished by adding a line such as this to the .pro file:
MAKEFILE = RealMakefile
That will cause qmake to output RealMakefile instead of Makefile.
The next step is to create your own Makefile with your custom commands. However, there are some caveats here. First, a full example, again using distclean. In a file named Makefile:
.DEFAULT_GOAL := all

%:
	@$(MAKE) -f RealMakefile $@

distclean:
	@$(MAKE) -f RealMakefile $@
	@find . -name "*~" -exec rm -v {} \;
Some notes about this:
We set .DEFAULT_GOAL because otherwise distclean would be the default. An alternative to this, if you're not comfortable with .DEFAULT_GOAL, is to specify an all rule using @$(MAKE) -f RealMakefile $@ as the recipe.
The % target matches any target that isn't otherwise defined in this Makefile. It simply delegates processing to RealMakefile.
The distclean target is where we add our commands. We still need to delegate to RealMakefile, but additional commands can be added both before and after that happens.
Pros:
More control over command order. Commands can be added both before and after the default recipe.
Cons:
Not self-contained in a .pro.
Not as portable: It doesn't leave all Makefile generation up to qmake, and also I'm not actually sure what parts are specific to GNU make here (comments welcome).
So, while this answer may be a little long, both of these methods are very straightforward. I would prefer option 1 unless the command execution order is an issue.
Another solution is to add the files you want to delete to the QMAKE_CLEAN and QMAKE_DISTCLEAN qmake variables:
build_tests {
    TINYORM_SQLITE_DATABASE = $$quote($$TINYORM_BUILD_TREE/tests/q_tinyorm_test_1.sqlite3)

    QMAKE_CLEAN = $$TINYORM_SQLITE_DATABASE
    QMAKE_DISTCLEAN = $$TINYORM_SQLITE_DATABASE
}
This is only relevant when you know in advance which files you want to delete; you cannot use an rm command or any sort of globbing here.

Conditional commands in Qt .pro file

I have doubts about my Qt .pro file... I had seen another post about a similar question at this link, but I used the contains() function and it didn't work.
In my case, I have a file called mainconfig.h where I define some project configuration flags, really just #defines, like #define MY_CONFIG_DEFINE. Those flags define which menu options will be shown, etc. My problem is that all files are always compiled, even the ones I don't use because of some flag defined in the mainconfig.h file. I would like to avoid compiling files that I will not use, by defining some variables in my .pro file and using conditional commands to include only the files I want.
Can someone help me?
I tried this in my .pro file:
# This variable defines the current project
ADRIANO_PROJECT = PROJECT_TYPE_1
(...)
FORMS += ui/form1.ui \
    contains(ADRIANO_PROJECT, PROJECT_TYPE_1) {
        ui/myform1.ui \
        ui/myform2.ui \
    }
    ui/form2.ui \
    ui/form3.ui
(...)
# This is only an example, ok?
Sorry for my English, and thanks.
IMHO your syntax is wrong. Try this instead:
ADRIANO_PROJECT = PROJECT_TYPE_1

FORMS += ui/form1.ui \
    ui/form2.ui \
    ui/form3.ui

contains(ADRIANO_PROJECT, PROJECT_TYPE_1) {
    FORMS += ui/myform1.ui \
        ui/myform2.ui
}
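If the C++ sources also need to know which project type was selected, the same contains() test can drive DEFINES as well, making the .pro file the single source of truth instead of mainconfig.h. A sketch (MY_CONFIG_DEFINE is the question's macro; the file names are invented):
contains(ADRIANO_PROJECT, PROJECT_TYPE_1) {
    DEFINES += MY_CONFIG_DEFINE

    SOURCES += src/type1_feature.cpp
    HEADERS += src/type1_feature.h
}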

Completion when program has sub-commands

I have written a command-line tool that uses sub-commands much like Mercurial, Git, Subversion &c., in that its general usage is:
>myapp [OPTS] SUBCOMMAND [SUBCOMMAND-OPTS] [ARGS]
E.g.
>myapp --verbose speak --voice=samantha --quickly "hello there"
I'm now in the process of building Zsh completion for it but have quickly found out that it is a very complex beast. I have had a look at the _hg and _git completions, but they are very complex and different in approach (I struggle to understand them); both seem to handle each sub-command separately.
Does anyone know if there is a way, using the built-in functions (_arguments, _values, pick_variant &c.), to handle the concept of sub-commands correctly, including handling general options and sub-command specific options appropriately? Or would the best approach be to handle the general options and sub-commands manually?
A noddy example would be very much appreciated.
Many thanks.
Writing completion scripts for zsh can be quite difficult. Your best bet is to use an existing one as a guide. The one for Git is way too much for a beginner. You can use this repo:
https://github.com/zsh-users/zsh-completions
As for your question, you have to use the concept of state. You define your subcommands in a list and then identify via $state which command you are in. Then you define the options for each command. You can see this in the completion script for play. A simplified version is below:
_play() {
    local ret=1
    _arguments -C \
        '1: :_play_cmds' \
        '*::arg:->args' \
        && ret=0
    case $state in
        (args)
            case $line[1] in
                (build-module|list-modules|lm|check|id)
                    _message 'no more arguments' && ret=0
                    ;;
                (dependencies|deps)
                    _arguments \
                        '1:: :_play_apps' \
                        '(--debug)--debug[Debug mode (even more informations logged than in verbose mode)]' \
                        '(--jpda)--jpda[Listen for JPDA connection. The process will suspended until a client is plugged to the JPDA port.]' \
                        '(--sync)--sync[Keep lib/ and modules/ directory synced. Delete unknow dependencies.]' \
                        '(--verbose)--verbose[Verbose Mode]' \
                        && ret=0
                    ;;
            esac
            ;;
    esac
}
(If you are going to paste this, use the original source, as this won't work).
It looks daunting, but the general idea is not that complicated:
The subcommand comes first (_play_cmds is a list of subcommands with a description for each one).
Then come the arguments. The arguments are built based on which subcommand you are choosing. Note that you can group multiple subcommands if they share arguments.
With man zshcompsys, you can find more info about the whole system, although it is somewhat dense.
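Applied to the myapp example from the question, a noddy sketch using that state technique could look like this (only the speak subcommand and its flags come from the question; sing, the voices, and the descriptions are invented):
#compdef myapp

_myapp() {
    local curcontext="$curcontext" state line
    _arguments -C \
        '--verbose[enable verbose output]' \
        '1: :(speak sing)' \
        '*::arg:->args'

    case $state in
        args)
            case $line[1] in
                speak)
                    _arguments \
                        '--voice=[voice to use]:voice:(samantha daniel)' \
                        '--quickly[speak quickly]' \
                        '*:text to speak'
                    ;;
            esac
            ;;
    esac
}

_myapp "$@"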
I found a technique that works well and is easy to understand. Basically, you create a new completion function for each subcommand and call it from the top-level completion function. Here's an example with dolt, showing how dolt completes to dolt table and dolt table completes to dolt table import, which then completes with a set of flags:
_dolt() {
    local line state

    _arguments -C \
        "1: :->cmds" \
        "*::arg:->args"

    case "$state" in
        cmds)
            _values "dolt command" \
                "table[Commands for copying, renaming, deleting, and exporting tables.]" \
            ;;
        args)
            case $line[1] in
                table)
                    _dolt_table
                    ;;
            esac
            ;;
    esac
}

_dolt_table() {
    local line state

    _arguments -C \
        "1: :->cmds" \
        "*::arg:->args"

    case "$state" in
        cmds)
            _values "dolt_table command" \
                "import[Creates, overwrites, replaces, or updates a table from the data in a file.]" \
            ;;
        args)
            case $line[1] in
                import)
                    _dolt_table_import
                    ;;
            esac
            ;;
    esac
}

_dolt_table_import() {
    _arguments -s \
        {-c,--create-table}'[Create a new table, or overwrite an existing table (with the -f flag) from the imported data.]' \
        {-u,--update-table}'[Update an existing table with the imported data.]' \
        {-f,--force}'[If a create operation is being executed, data already exists in the destination, the force flag will allow the target to be overwritten.]' \
        {-r,--replace-table}'[Replace existing table with imported data while preserving the original schema.]' \
        '(--continue)--continue[Continue importing when row import errors are encountered.]' \
        {-s,--schema}'[The schema for the output data.]' \
        {-m,--map}'[A file that lays out how fields should be mapped from input data to output data.]' \
        {-pk,--pk}'[Explicitly define the name of the field in the schema which should be used as the primary key.]' \
        '(--file-type)--file-type[Explicitly define the type of the file if it can''t be inferred from the file extension.]' \
        '(--delim)--delim[Specify a delimiter for a csv style file with a non-comma delimiter.]'
}
I wrote a full guide here:
https://www.dolthub.com/blog/2021-11-15-zsh-completions-with-subcommands/
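One practical detail neither answer spells out: to be picked up automatically, the file containing these functions must start with a #compdef tag (as in the sketch above) and sit in a directory on $fpath before compinit runs. Alternatively, you can bind an already-defined function by hand:
# Assuming _dolt is already defined in the current shell:
compdef _dolt dolt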
