Bitbake recipe not applying patch as expected

I have a tarball src.tar.gz whose contents are unpacked into src/ and a patch against these sources generated with this command:
$ diff -Nurp src/ src_mod/ > my.patch
The patch header starts with these three lines:
diff -Nurp src/path/to/file src_PATCHED/path/to/file
--- src/path/to/file 2012-10-22 05:52:59.000000000 +0200
+++ src_PATCHED/path/to/file 2016-03-14 12:27:52.892802283 +0100
My BitBake recipe references both the patch and the tarball using this SRC_URI:
SRC_URI = " \
file://my.patch \
file://src.tar.gz \
"
The do_fetch and do_unpack tasks work as expected, leaving my.patch and src/ inside the ${S} directory, that is:
${S}/my.patch
${S}/src.tar.gz
But do_patch task is failing with this ERROR message:
ERROR: Command Error: exit status: 1 Output:
Applying patch my.patch
can't find file to patch at input line 4
Perhaps you used the wrong -p or --strip option?
I have tested different alternatives, for example setting the "patchdir" attribute as shown below:
SRC_URI = " \
file://my.patch;patchdir=${S}/src \
file://src.tar.gz \
"
I expected "patchdir" being the same as using "patch -d dir". But it doesn't work as expected, it always returns the same ERROR message.
What I am doing wrong?

My variable ${S} was re-defined inside my recipe with this content:
S = "${WORKDIR}/${PN}-${PV}"
But the fetcher downloads my.patch and src.tar.gz into ${WORKDIR}, not into the ${S} directory, so:
${WORKDIR}/my.patch
${WORKDIR}/src.tar.gz
And the tarball was also extracted inside ${WORKDIR}:
${WORKDIR}/src/
The fix was setting the "patchdir" attribute properly, replacing ${S} with ${WORKDIR}:
SRC_URI = " \
file://my.patch;patchdir=${WORKDIR}/src \
file://src.tar.gz \
"
That is working now!
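An alternative worth noting (a sketch, assuming nothing else in the recipe depends on the current value of ${S}): point ${S} at the unpacked sources, so do_patch applies the patch inside ${S} by default, and rely on the striplevel option (default 1, equivalent to patch -p1) to drop the leading src/ component from the patch header paths:
S = "${WORKDIR}/src"

SRC_URI = " \
    file://src.tar.gz \
    file://my.patch;striplevel=1 \
"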

Though the author provided their own solution to their distinct problem, this question is the first result when searching for solutions to the "can't find file to patch" error in Yocto builds, so I want to share a solution for a different situation that produces the same error output.
In my case, I was trying to use a .bbappend file to apply an override-controlled patch to a pre-existing recipe, and was receiving the "can't find file to patch" error. The thread at https://community.nxp.com/thread/474138 identified the solution: using the '_append' syntax instead of the '+=' syntax. That is, use:
SRC_URI_append_machineoverride = " file://my.patch"
Instead of
SRC_URI_machineoverride += "file://my.patch"
Note that the use of '_append' requires a leading space (as opposed to the trailing space noted in the thread linked above). I have not yet investigated this enough to explain why this syntax is necessary, but I thought this would still be a valuable addition to the record for this question.
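If it helps anyone debugging the same situation, one quick way to see which sources and patches actually ended up in SRC_URI (using a hypothetical recipe name my-recipe here) is to dump the parsed value:
bitbake -e my-recipe | grep '^SRC_URI='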

Yocto: remove packageconfig item

In a recipe (meta-qt5/recipes-qt/qt5/qttools_git.bb) I found:
PACKAGECONFIG ??= ""
PACKAGECONFIG[qtwebkit] = ",,qtwebkit"
now, under my own meta-custom-layer I'm going to create the same path and add a .bbappend file: meta-custom-layer/meta-qt5/recipes-qt/qt5/qttools_git.bbappend.
I want to remove the second line, because I'm not interested in qtwebkit.
Would it be enough to put:
PACKAGECONFIG[qtwebkit] = ""
or do I need something else?
Because of the ??= operator, I guess the PACKAGECONFIG variable is updated with qtwebkit elsewhere. Do I need to find and remove that assignment as well? Is there a quick way to find out where it is appended?
UPDATE
To find where qtwebkit is configured, I tried to use grep:
$ grep -nrw . -e qtwebkit
./layers/meta-st/meta-st-openstlinux/recipes-samples/packagegroups/packagegroup-framework-sample-qt-extra.bb:30: qtwebkit \
./layers/meta-st/meta-st-openstlinux/recipes-samples/packagegroups/packagegroup-framework-sample-qt-extra.bb:53: qtwebkit-examples \
Binary file ./layers/meta-qt5/.git/index matches
./layers/meta-qt5/README.md:8:When building stuff like `qtdeclarative`, `qtquick`, `qtwebkit`, make
./layers/meta-qt5/recipes-qt/packagegroups/packagegroup-qt5-toolchain-target.bb:12: ${#bb.utils.contains('DISTRO_FEATURES', 'opengl', 'qtwebkit-dev', '', d)} \
./layers/meta-qt5/recipes-qt/qt5/qttools/0001-add-noqtwebkit-configuration.patch:25: BROWSER = qtwebkit
./layers/meta-qt5/recipes-qt/qt5/qttools/0001-add-noqtwebkit-configuration.patch:32:-equals(BROWSER, "qtwebkit") {
./layers/meta-qt5/recipes-qt/qt5/qttools/0001-add-noqtwebkit-configuration.patch:33:+equals(BROWSER, "qtwebkit"):!contains(CONFIG, noqtwebkit) {
./layers/meta-qt5/recipes-qt/qt5/qttools_git.bb:28:PACKAGECONFIG[qtwebkit] = ",,qtwebkit"
./layers/meta-qt5/recipes-qt/qt5/qttools_git.bb:32: ${#bb.utils.contains('PACKAGECONFIG', 'qtwebkit', '', 'CONFIG+=noqtwebkit', d)} \
./layers/meta-qt5/recipes-qt/qt5/qt5-creator_git.bb:17:DEPENDS = "qtbase qtscript qtwebkit qtxmlpatterns qtx11extras qtdeclarative qttools qttools-native qtsvg chrpath-replacement-native"
./layers/meta-qt5/recipes-qt/qt5/qtbase_git.bb:76:# This is in qt5.inc, because qtwebkit-examples are using it to enable ca-certificates dependency
./layers/meta-qt5/recipes-qt/qt5/qtwebkit-examples_git.bb:18:DEPENDS += "qtwebkit qtxmlpatterns"
./layers/meta-qt5/recipes-qt/qt5/qtwebkit-examples_git.bb:19:RDEPENDS_${PN}-examples += "qtwebkit-qmlplugins"
./layers/meta-qt5/recipes-qt/qt5/qtwebkit_git.bb:12:# Patches from https://github.com/meta-qt5/qtwebkit/commits/b5.11
./layers/meta-qt5/lib/recipetool/create_qt5.py:101: 'webkit': 'qtwebkit',
./layers/meta-qt5/lib/recipetool/create_qt5.py:102: 'webkitwidgets': 'qtwebkit',
So I think the line to remove is the one I described above.
bitbake -e <image> leads to output so long that it overflows the console buffer... I tried to grep the output looking for qtwebkit but nothing is returned.
The same applies to grep -nrw . -e DISTRO_FEATURES | grep qtwebkit.
The PACKAGECONFIG[qtwebkit] = ",,qtwebkit" line defines what enabling or disabling that feature would do if qtwebkit appears in that package's PACKAGECONFIG variable (see here). Based on that line and the documentation, it adds no configure options either way; enabling it would only add qtwebkit as a build dependency.
More towards your question about how to diagnose "why is this variable set": a starting point is to use bitbake -e [optional package or image name] > env.log to dump the environment to a log file that you can review. It is worth checking this with no package or image name, as well as with the package and with whatever image you're trying to build; sometimes the image configuration enables a feature in another package's PACKAGECONFIG via other variables, and checking the environment will often show you why something was set.
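For completeness, if the environment dump shows qtwebkit being enabled somewhere, a .bbappend in your layer could force it off (a sketch; _remove is BitBake's removal operator, and whether you need it at all depends on what bitbake -e shows):
# meta-custom-layer/meta-qt5/recipes-qt/qt5/qttools_git.bbappend
PACKAGECONFIG_remove = "qtwebkit"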

How to select all files from one sample?

I have a problem figuring out how to make the input directive select all of the files for a single {samples} value in the rule below.
rule MarkDup:
    input:
        expand("Outputs/MergeBamAlignment/{samples}_{lanes}_{flowcells}.merged.bam", zip,
               samples=samples['sample'],
               lanes=samples['lane'],
               flowcells=samples['flowcell']),
    output:
        bam = "Outputs/MarkDuplicates/{samples}_markedDuplicates.bam",
        metrics = "Outputs/MarkDuplicates/{samples}_markedDuplicates.metrics",
    shell:
        "gatk --java-options -Djava.io.tempdir=`pwd`/tmp \
        MarkDuplicates \
        $(echo ' {input}' | sed 's/ / --INPUT /g') \
        -O {output.bam} \
        --VALIDATION_STRINGENCY LENIENT \
        --METRICS_FILE {output.metrics} \
        --MAX_FILE_HANDLES_FOR_READ_ENDS_MAP 200000 \
        --CREATE_INDEX true \
        --TMP_DIR Outputs/MarkDuplicates/tmp"
Currently it creates correctly named output files, but it selects all files that match the pattern across all wildcards, so I'm perhaps halfway there. I tried changing {samples} to {{samples}} in the input directive, like so:
expand("Outputs/MergeBamAlignment/{{samples}}_{lanes}_{flowcells}.merged.bam", zip,
lanes=samples['lane'],
flowcells=samples['flowcell']),`
but this broke the previous rule somehow. So the solution is something like
input:
"{sample}_*.bam"
But clearly this doesn't work.
Is it possible to collect all files that match {sample}_*.bam with a function and use that as input? And if so, will the function still work with $(echo ' {input}' etc...) in the shell directive?
If you just want all the files in the directory, you can use a lambda function
from glob import glob

rule MarkDup:
    input:
        lambda wcs: glob('Outputs/MergeBamAlignment/%s*.bam' % wcs.samples)
    output:
        bam="Outputs/MarkDuplicates/{samples}_markedDuplicates.bam",
        metrics="Outputs/MarkDuplicates/{samples}_markedDuplicates.metrics"
    shell:
        ...
Just be aware that this approach can't do any checking for missing files, since it will always report that the files needed are the files that are present. If you do need confirmation that the upstream rule has been executed, you can have the previous rule touch a flag, which you then require as input to this rule (though you don't actually use the file for anything other than enforcing execution order).
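If you do want that ordering guarantee, a minimal sketch of the flag idea might look like the following (hypothetical rule and file names; only the ordering mechanism is shown, not the real commands):
rule upstream:
    output:
        # touch() creates an empty marker file once the rule finishes successfully
        flag=touch("Outputs/flags/upstream.done")
    shell:
        "echo 'real upstream work would happen here'"

rule downstream:
    input:
        # requiring the flag forces 'upstream' to run before this rule
        "Outputs/flags/upstream.done"
    output:
        "Outputs/downstream/result.txt"
    shell:
        "echo done > {output}"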
If I understand correctly, zip needs to be applied only to {lanes} and {flowcells} and not to {samples}. In that case, using two expand instances can achieve that.
input:
    expand(expand("Outputs/MergeBamAlignment/{{samples}}_{lanes}_{flowcells}.merged.bam",
                  zip, lanes=samples['lane'], flowcells=samples['flowcell']),
           samples=samples['sample'])
PS: output.tmp file uses {sample} instead of {samples}. Typo?
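To make the two-step expansion above concrete, here is a small illustration (toy lane/flowcell/sample values, not from the question) of what the inner and outer expand calls produce:
# Illustration only: toy values, not from the question.
from snakemake.io import expand

lanes = ['L1', 'L2']
flowcells = ['F1', 'F1']

# Inner expand: {{samples}} stays a wildcard, lanes and flowcells are zipped.
inner = expand("Outputs/MergeBamAlignment/{{samples}}_{lanes}_{flowcells}.merged.bam",
               zip, lanes=lanes, flowcells=flowcells)
# inner == ['Outputs/MergeBamAlignment/{samples}_L1_F1.merged.bam',
#           'Outputs/MergeBamAlignment/{samples}_L2_F1.merged.bam']

# Outer expand: now fill in the sample name(s).
print(expand(inner, samples=['sampleA']))
# ['Outputs/MergeBamAlignment/sampleA_L1_F1.merged.bam',
#  'Outputs/MergeBamAlignment/sampleA_L2_F1.merged.bam']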

Zsh completions with multiple repeated options

I am attempting to bend zsh, my shell of choice, to my will, and am completely at a loss on the syntax and operation of completions.
My use case is this: I wish to have completions for 'ansible-playbook' under the '-e' option support three variations:
Normal file completion: ansible-playbook -e vars/file_name.yml
Prepended file completion: ansible-playbook -e @vars/file_name.yml
Arbitrary strings: ansible-playbook -e key=value
I started out with https://github.com/zsh-users/zsh-completions/blob/master/src/_ansible-playbook which worked decently, but required modifications to support the prefixed file pathing. To achieve this I altered the following lines (the -e line):
...
"(-D --diff)"{-D,--diff}"[when changing (small files and templates, show the diff in those. Works great with --check)]"\
"(-e --extra-vars)"{-e,--extra-vars}"[EXTRA_VARS set additional variables as key=value or YAML/JSON]:extra vars:(EXTRA_VARS)"\
'--flush-cache[clear the fact cache]'\
to this:
...
"(-D --diff)"{-D,--diff}"[when changing (small files and templates, show the diff in those. Works great with --check)]"\
"(-e --extra-vars)"{-e,--extra-vars}"[EXTRA_VARS set additional variables as key=value or YAML/JSON]:extra vars:__at_files"\
'--flush-cache[clear the fact cache]'\
and added the '__at_files' function:
__at_files () {
  compset -P @; _files
}
This may be very noobish, but for someone that has never encountered this before, I was pleased that this solved my problem, or so I thought.
This fails me if I have multiple '-e' parameters, which is totally a supported model (similar to how docker allows multiple -v or -p arguments). What this means is that the prefixed completion works for the first '-e' parameter, but any '-e' parameters after that become 'dumb' and only allow normal '_files' completion, from what I can tell. So the following will not complete properly:
ansible-playbook -e key=value -e @vars/file
but this would complete for the file itself:
ansible-playbook -e key=value -e vars/file
Did I mess up? I see the same type of behavior for this particular completion plugin's '-M' option (it also becomes 'dumb' and does basic file completion). I may have simply not searched for the correct terminology or combination of terms, or perhaps in the rather complicated documentation missed what covers this, but again, with only a few days experience digging into this, I'm lost.
If multiple -e options are valid, the _arguments specification should start with *, so instead of:
"(-e --extra-vars)"{-e,--extra-vars}"[EXTR ....
use:
\*{-e,--extra-vars}"[EXTR ...
The (-e --extra-vars) part indicates a list of options that cannot follow the one being specified. That isn't needed anymore, because it is presumably valid to do, e.g.:
ansible-playbook -e key=value --extra-vars @vars/file
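Putting the two pieces together, the -e spec would presumably end up looking like this (a sketch combining the * repetition with the __at_files action from the question):
\*{-e,--extra-vars}"[EXTRA_VARS set additional variables as key=value or YAML/JSON]:extra vars:__at_files"\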

compiling an ICC binary [duplicate]

I am getting the following error running make:
Makefile:168: *** missing separator. Stop.
What is causing this?
As indicated in the online manual, the most common cause for that error is that lines are indented with spaces when make expects tab characters.
Correct
target:
\tcmd
where \t is TAB (U+0009)
Wrong
target:
....cmd
where each . represents a SPACE (U+0020).
Just for grins, and in case somebody else runs into a similar error:
I got the infamous "missing separator" error because I had invoked a rule defining a function as
($eval $(call function,args))
rather than
$(eval $(call function,args))
i.e. ($ rather than $(.
This is a syntax error in your Makefile. It's quite hard to be more specific than that, without seeing the file itself, or relevant portion(s) thereof.
For me, the problem was that I had some end-of-line # ... comments embedded within a define ... endef multi-line variable definition. Removing the comments made the problem go away.
My error was on a variable declaration line with a multi-line extension. I had a trailing space after the "\", which made that an invalid line continuation.
MY_VAR = \
val1 \ <-- 0x20 there caused the error.
val2
In my case, I was actually missing a tab before the command on the line after ifeq. No spaces were there to begin with.
ifeq ($(wildcard $DIR_FILE), )
cd $FOLDER; cp -f $DIR_FILE.tpl $DIR_FILE.xs;
endif
Should have been:
ifeq ($(wildcard $DIR_FILE), )
<tab>cd $FOLDER; cp -f $DIR_FILE.tpl $DIR_FILE.xs;
endif
Note that <tab> is an actual tab character.
In my case, the error was caused by trying to execute commands globally, i.e. outside of any target.
UPD: to run a command globally it must be properly formed. For example, the command
ln -sf ../../user/curl/$SRC_NAME ./$SRC_NAME
would become:
$(shell ln -sf ../../user/curl/$(SRC_NAME) ./$(SRC_NAME))
In my case, this error was caused by the lack of a mere space. I had this conditional in my makefile:
ifeq($(METHOD),opt)
CFLAGS=
endif
which should have been:
ifeq ($(METHOD),opt)
CFLAGS=
endif
with a space after ifeq.
In my case, the same error was caused because a colon was missing at the end of a target line, as in staging.deploy:. So note that it can be a simple syntax mistake.
I had the missing separator error in Makefiles generated by qmake. I was porting Qt code to a different platform and didn't have QMAKESPEC or MAKE set. Here's the link where I found the answer:
https://forum.qt.io/topic/3783/missing-separator-error-in-makefile/5
Just to add yet another reason this can show up:
$(eval VALUE)
is not valid and will produce a "missing separator" error.
$(eval IDENTIFIER=VALUE)
is acceptable. This sort of error showed up for me when I had a macro defined with define and tried to do
define SOME_MACRO
... some expression ...
endef
VAR=$(eval $(call SOME_MACRO,arg))
where the macro did not evaluate to an assignment.
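For illustration, a version where the macro does expand to an assignment is accepted (a minimal sketch; the names are made up):
define SOME_MACRO
$(1)_FLAG = enabled
endef

# $(call ...) expands to "featureA_FLAG = enabled", which $(eval ...) then parses.
$(eval $(call SOME_MACRO,featureA))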
I had this error because I had no colon after .PHONY.
Not this,
.PHONY install
install:
install -m0755 bin/ytdl-clean /usr/local/bin
But this (notice the colon)
.PHONY: install
...
The following Makefile code worked:
obj-m = hello.o
all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
clean:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
So apparently, all I needed was the "build-essential" package, then to run autoconf first (which generated the Makefile.pre.in), then ./configure, and then make, which works perfectly...

Grunt NameError: variable @gutter is undefined

I am using Shoestrap, a WordPress theme based on Roots that uses Bootstrap and LESS. I added the Bootswatch Yeti theme's variables.less to assets/less/bootstrap as a replacement for the existing one, and also added bootswatch.less. Then I added bootswatch.less to bootstrap.less. To recompile I ran grunt. I have added the Gruntfile content here.
I ran into two issues, though: one error I do not know how to fix, and one major issue, which is that grunt seems to keep removing assets/css/main.min.css and then tells me the file or directory is missing. Here are the errors I had with --force activated:
grunt --force
Running "clean:dist" (clean) task
Running "less:dist" (less) task
>> NameError: variable @gutter is undefined in assets/less/app.less on line 5, column 13:
>> 4 .gallery-row {
>> 5 padding: (@gutter / 2) 0;
>> 6 }
Warning: Error compiling assets/less/app.less Used --force, continuing.
Running "uglify:dist" (uglify) task
File "assets/js/scripts.min.js" created.
Running "version" task
Warning: ENOENT, no such file or directory 'assets/css/main.min.css' Used --force, continuing.
Done, but with warnings.
It was the Byte Order Mark (BOM) signature that was the issue, as I linked to before in the question. But neither TextWrangler nor Dreamweaver removed it. I found one command that did help here: Using awk to remove the Byte-order mark. And I ran
awk '{if(NR==1)sub(/^\xef\xbb\xbf/,"");print}' app.less > app.less
which worked like a charm! Only the theme still has not changed in styling. That is rather odd.
Update I
The awk command emptied my app.less (redirecting the output to the same file truncates it before awk reads it). I ran another command, found here as well: Using awk to remove the Byte-order mark, and that command:
sed -i .bak '1 s/^\xef\xbb\xbf//' *.less
did work without removing all data from app.less, but then I got the same error again:
Reading assets/less/app.less...OK
>> NameError: variable @gutter is undefined in assets/less/app.less on line 5, column 13:
>> 4 .gallery-row {
>> 5 padding: (@gutter / 2) 0;
>> 6 }
I did see that TextMate had added attributes, and removed those using xattr -d com.macromates.caret file.less, but that did not do the trick either.
Update II
It seems that the variable @gutter does not exist; there is a variable @grid-gutter-width instead. I was notified of this at Roots Discourse: http://discourse.roots.io/t/grunt-hits-a-snag-compiling-gutter-not-defined/940/3. Making that adjustment does not help though, as other variables pop up as issues. Will see if I can get some feedback from the Shoestrap team.
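For reference, the adjusted snippet would presumably look like this (a sketch using Bootstrap 3's @grid-gutter-width variable; this is not the eventual Shoestrap fix):
.gallery-row {
  padding: (@grid-gutter-width / 2) 0;
}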
There was a bug in the Shoestrap theme; I believe we managed to fix it with this commit: https://github.com/shoestrap/shoestrap/commit/ff75cf73cf778e4b80c5e11544c0a67717fbcc10
Please let me know if that works for you...
