How to SRC_URI:append a patch per ${MACHINE}?

I have a Yocto native recipe that should apply a different patch to the source code depending on the target ${MACHINE} it is built for. The folder structure looks like this:
recipe-folder
|-files
| |-machine1
| | |-p1.patch
| |
| |-machine2
| | |-p2.patch
| |
| |-common.patch
|
|-recipe-native_0.1.bb
The relevant contents of the recipe are:
inherit native
SRC_URI = <some git repo>
SRC_URI:append = "file://common.patch"
SRC_URI:append:machine1 = "file://p1.patch"
SRC_URI:append:machine2 = "file://p2.patch"
do_configure() {
    ./configure --static
}
do_compile() {
    oe_runmake tool1
}
do_install() {
    # Default sigtrace installation directory
    install -d ${D}${bindir}
    install -m 0755 ${S}/output/linux/${release}/tool1 ${D}${bindir}/tool1
}
The above does not work - only the common patch gets applied.
I also tried
SRC_URI = <some git repo>
SRC_URI:append = "\
file://common.patch \
file://p1.patch \
file://p2.patch \
"
which applies all patches for every target; that is also not what I am aiming for.
Am I using the override syntax wrong? Is there another way to achieve this?
Thank you in advance for your help
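For reference, two things are worth checking here (hedged, since the full recipe isn't shown): `:append` performs no whitespace insertion, so each appended fragment needs its own leading space, and native.bbclass clears MACHINEOVERRIDES (native packages are meant to be machine-independent), which would explain why `:machine1`-style overrides never fire in a native recipe. A sketch of the append syntax with explicit spaces, assuming the machine names appear in OVERRIDES for a non-native recipe:

```bitbake
# Sketch only: ":append" does not insert whitespace, so each fragment
# needs a leading space to avoid gluing onto the previous URI.
SRC_URI:append = " file://common.patch"
# Overrides like the two below only fire when the machine name is in
# OVERRIDES; native.bbclass clears MACHINEOVERRIDES, so for a native
# recipe they are silently skipped.
SRC_URI:append:machine1 = " file://p1.patch"
SRC_URI:append:machine2 = " file://p2.patch"
```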

Related

Need bash script line to log in to WordPress with curl/wget or similar on a daily basis with cron

I have an endpoint URL that needs to be hit daily to execute an API call. It requires login. I can't seem to get a cURL or wget command that will successfully log in. I've tried this cURL command:
/usr/bin/curl -L --silent --data \
"log=login&pwd=password&ag_login_accept=1&ag_type=login" \
https://www.the-url.com 2>&1 | /usr/bin/mail -s "subject" email@domain.com
but the output is the HTML of the login page, not the API output I get if I log in manually and then go to the URL.
I also tried wget:
wget --save-cookies ~/sites/scripts/cookies.txt --keep-session-cookies \
--post-data="log=login&pwd=password&ag_login_accept=1&ag_type=login" \
"https://www.the-url.com"
with the same result.
I do this sort of thing:
#!/bin/bash
WPUSR=user_name_goes_here
WPPWD=password_goes_here
COOKIEFILE=`mktemp`
COPT="--load-cookies $COOKIEFILE --save-cookies $COOKIEFILE --keep-session-cookies"
WGET="wget -nv -q ${COPT}"
MSG=`which banner || which figlet || which echo`
function printout() {
    links -dump ${1} | grep -v "^ *$" | grep -A 10 "Skip to content"
}
function message() {
    $MSG "$1"
}
# login
message 'Login'
LOGIN="log=${WPUSR}&pwd=${WPPWD}"
LOGIN="${LOGIN}&redirect_to=http://127.0.0.1/wp/?p=1"
${WGET} -O page_01.html --post-data="${LOGIN}" 'http://127.0.0.1/wp/wp-login.php'
printout page_01.html
# show post
message 'View Post'
${WGET} -O page_02.html 'http://127.0.0.1/wp/?p=2'
printout page_02.html
rm "${COOKIEFILE}"
output:
| | ___ __ _(_)_ __
| | / _ \ / _` | | '_ \
| |__| (_) | (_| | | | | |
|_____\___/ \__, |_|_| |_|
|___/
Skip to content
Sitename
Sitename
Just another WordPress site
Posted on 2018-04-11 by jmullee
Hello world!
Welcome to WordPress. This is your first post. Edit or delete it, then
start writing!
One Reply to “Hello world!”
1. A WordPress Commenter says:
2018-04-11 at 16:05
__ ___ ____ _
\ \ / (_) _____ __ | _ \ ___ ___| |_
\ \ / /| |/ _ \ \ /\ / / | |_) / _ \/ __| __|
\ V / | | __/\ V V / | __/ (_) \__ \ |_
\_/ |_|\___| \_/\_/ |_| \___/|___/\__|
Skip to content
Sitename
Sitename
Just another WordPress site
Sample Page
This is an example page. It’s different from a blog post because it will
stay in one place and will show up in your site navigation (in most
themes). Most people start with an About page that introduces them to
potential site visitors. It might say something like this:
Hi there! I’m a bike messenger by day, aspiring actor by night, and this
is my website. I live in Los Angeles, have a great dog named Jack, and I
Alternative suggestion: connect over SSH from a PHP script.
Additional php package needed: php-ssh2
sudo apt install php-ssh2
#!/usr/bin/php
<?php
$connect = ssh2_connect('20.32.66.66.xx', 22);
ssh2_auth_password($connect, 'root', 'PtrDHfutyxxx');
$shell = ssh2_shell($connect, 'xterm');
$stream = ssh2_exec($connect, 'ls -a'); // Example command execute ls
stream_set_blocking($stream, true);
$stream_out = ssh2_fetch_stream($stream, SSH2_STREAM_STDIO);
echo stream_get_contents($stream_out); // Output command result
// ...
This is a basic starting point, fine to use from the shell over a VPN.
To avoid passwords in scripts, and to secure it one level up, use cryptographic key pairs for logging in.
http://php.net/manual/en/function.ssh2-publickey-init.php

How to run Ag asynchronously with vimproc and quickrun - VIM?

I'm using Ag, vimproc, quickrun
I'm able to run .php files asynchronously with these settings in my .vimrc:
nnoremap <leader>r :QuickRun<CR>
let g:quickrun_config = get(g:, 'quickrun_config', {})
let g:quickrun_config = {
\ "_" : {
\ 'runner': 'vimproc',
\ 'runner/vimproc/updatetime': 60,
\ 'outputter/quickfix': ':quickrun-module-outputter/quickfix',
\ 'outputter/buffer/split': ':rightbelow 8sp',
\ },
\}
Does anyone know how can I run ag asynchronously?
Ag settings in .vimrc
if executable("ag")
let g:ctrlp_user_command = 'ag %s -i --nocolor --nogroup --hidden
\ --ignore .git
\ --ignore .svn
\ --ignore .hg
\ --ignore .DS_Store
\ --ignore "**/*.pyc"
\ -g ""'
endif
nnoremap <leader>a :Ag!<Space>
nnoremap <leader>aa :Ag! <C-r>=expand('<cword>')<CR>
nnoremap <leader>aaa :Ag! <C-r>=expand('<cword>')<CR><CR>
A generic solution for running asynchronous commands in Vim is the vim-dispatch plugin; you can launch a command in the background with :Start!
Alternatively, there is also a fork of Vim called Neovim which is trying to address this issue. At the time of this post it is not necessarily mature enough, but it is something to consider for the future.

Is `cap` a reserved word? (zsh completion)

I'm trying to create a Capistrano multistage completion for zsh:
$ cap |
production staging
$ cap production |
deploy -- Deploy a new release
deploy:bundle -- Bundle
...
Completion code:
#compdef cap
#autoload
# /Users/pablo/.oh-my-zsh/custom/plugins/capistrano_custom/_capistrano_custom
local curcontext="$curcontext" state line ret=1
local -a _configs
_arguments -C \
    '1: :->cmds' \
    '2:: :->args' && ret=0
_cap_tasks() {
    if [[ ! -f .cap_tasks~ ]]; then
        echo "\nGenerating .cap_tasks~..." > /dev/stderr
        cap -v --tasks | grep '#' | cut -d " " -f 2 > .cap_tasks~
    fi
    cat .cap_tasks~
}
_cap_stages() {
    find config/deploy -name \*.rb | cut -d/ -f3 | sed s:.rb::g
}
case $state in
    cmds)
        if [[ -d config/deploy ]]; then
            compadd `_cap_stages`
        else
            compadd `_cap_tasks`
        fi
        ret=0
        ;;
    args)
        compadd `_cap_tasks`
        ret=0
        ;;
esac
return ret
The problem:
#compdef cap doesn't work. If I type cap and hit [TAB] it doesn't execute the completion, but with other words (e.g. shipit) it works fine.
Any ideas?
Solution:
cap really does collide with a zsh builtin, and it seems that we can't use it with #compdef cap.
I'm wondering how the cap and capistrano completions worked before (maybe on an older version of zsh).
Solution dotfiles code: capistrano_custom
Solution oh-my-zsh/PR: #2471
Both solutions use shipit instead of cap.
$ shipit |
production staging
$ shipit production |
deploy -- Deploy a new release
deploy:bundle -- Bundle
...
Yes, cap is a zsh builtin (provided by the zsh/cap module). Quoting from the zsh docs:
The zsh/cap module is used for manipulating POSIX.1e (POSIX.6)
capability sets. [...]. The builtins in this module are:
cap [ capabilities ] Change the shell’s process capability sets to the
specified capabilities, otherwise display the shell’s current
capabilities.

convert a `find` like output to a `tree` like output

This question is a generalized version of the Output of ZipArchive() in tree format question.
Before I waste time writing this (*nix command-line) utility, it would be a good idea to find out whether someone has already written it. I would like a utility that takes as its standard input a list such as the one returned by find(1), and outputs something similar to tree(1).
E.g.:
Input:
/fruit/apple/green
/fruit/apple/red
/fruit/apple/yellow
/fruit/banana/green
/fruit/banana/yellow
/fruit/orange/green
/fruit/orange/orange
/i_want_my_mommy
/person/men/bob
/person/men/david
/person/women/eve
Output
/
|-- fruit/
| |-- apple/
| | |-- green
| | |-- red
| | `-- yellow
| |-- banana/
| | |-- green
| | `-- yellow
| `-- orange/
| |-- green
| `-- orange
|-- i_want_my_mommy
`-- person/
|-- men/
| |-- bob
| `-- david
`-- women/
`-- eve
Usage should be something like:
list2tree --delimiter="/" < Input > Output
Edit0: It seems that I was not clear about the purpose of this exercise. I like the output of tree, but I want it for arbitrary input. It might not be part of any file system name-space.
Edit1: Fixed the person branch in the output. Thanks, @Alnitak.
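As a quick approximation before reaching for a dedicated tool, a classic sed one-liner turns a path list into an indented outline. It only indents by depth and does not draw tree's `|--` and `` `-- `` connectors, so it is a sketch rather than a full answer:

```shell
# First substitution replaces each "segment/" prefix with an indent
# marker; the second turns the leading run of markers into pipes.
printf '%s\n' /fruit/apple/green /fruit/apple/red /i_want_my_mommy \
  | sed -e 's;[^/]*/;|__;g' -e 's;__|; |;g'
# | | |__green
# | | |__red
# |__i_want_my_mommy
```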
In my Debian 10 I have tree v1.8.0. It supports --fromfile.
--fromfile
Reads a directory listing from a file rather than the file-system. Paths provided on the command line are files to read from rather than directories to search. The dot (.) directory indicates that tree should read paths from standard input.
This way I can feed tree with output from find:
find /foo | tree -d --fromfile .
Problems:
If tree reads /foo/whatever or foo/whatever then foo will be reported as a subdirectory of .. Similarly with ./whatever: . will be reported as an additional level named . under the top level .. So the results may not entirely meet your formal expectations, there will always be a top level . entry. It will be there even if find finds nothing or throws an error.
Filenames with newlines will confuse tree. Using find -print0 is not an option because there is no corresponding switch for tree.
I whipped up a Perl script that splits the paths (on "/"), creates a hash tree, and then prints the tree with Data::TreeDumper. Kinda hacky, but it works:
#!/usr/bin/perl
use strict;
use warnings;
use Data::TreeDumper;
my %tree;
while (<>) {
    chomp;
    my $t = \%tree;
    foreach my $part (split m!/!, $_) {
        next if $part eq '';
        $t->{$part} ||= {};
        $t = $t->{$part};
    }
}
# Prune empty leaf hashes to undef; recurse only into non-empty ones.
sub check_tree {
    my $t = shift;
    foreach my $hash (values %$t) {
        if (keys %$hash) {
            check_tree($hash);
        } else {
            undef $hash;
        }
    }
}
check_tree(\%tree);
my $output = DumpTree(\%tree);
$output =~ s/ = undef.*//g;
$output =~ s/ \[H\d+\].*//g;
print $output;
Here's the output:
$ perl test.pl test.data
|- fruit
| |- apple
| | |- green
| | |- red
| | `- yellow
| |- banana
| | |- green
| | `- yellow
| `- orange
| |- green
| `- orange
|- i_want_my_mommy
`- person
|- men
| |- bob
| `- david
`- women
`- eve
Another tool is treeify, written in Rust.
Assuming you have Rust installed, you can get it with:
$ cargo install treeify
So, I finally wrote what I hope will become the Python tree utils. Find it at http://pytree.org
I would simply use tree myself, but here's a simple thing I wrote a few days ago that prints a tree of a directory. It doesn't expect input from find (which makes it different from your requirements) and doesn't do the |- display (which can be added with some small modifications). You have to call it like tree <base_path> <initial_indent>, where initial_indent is the number of characters the first "column" is indented.
function tree() {
    local root=$1
    local indent=$2
    cd "$root" || return
    for i in *
    do
        for j in $(seq 0 "$indent")
        do
            echo -n " "
        done
        if [ -d "$i" ]
        then
            echo "$i/"
            (tree "$i" $((indent + 5)))
        else
            echo "$i"
        fi
    done
}

How to extract the name of immediate directory along with the filename?

I have a file whose complete path is like
/a/b/c/d/filename.txt
If I do a basename on it, I get filename.txt, but this filename is not unique enough.
So it would be better if I could extract the filename as d_filename.txt, i.e.
{immediate directory}_{basename result}
How can I achieve this result?
file="/path/to/filename"
echo $(basename $(dirname "$file")_$(basename "$file"))
or
file="/path/to/filename"
filename="${file##*/}"
dirname="${file%/*}"
dirname="${dirname##*/}"
filename="${dirname}_${filename}"
This code will recursively search through your hierarchy starting with the directory that you run the script in. I've coded the loop in such a way that it will handle any filename you throw at it; file names with spaces, newlines etc.
Note: the loop is currently written to not include any files in the directory that this script resides in; it only looks at subdirs below it. This was done because it was the easiest way to make sure the script does not include itself in its processing. If for some reason you must include the directory the script resides in, it can be changed to accommodate this.
Code
#!/bin/bash
while IFS= read -r -d $'\0' file; do
    dirpath="${file%/*}"
    filename="${file##*/}"
    temp="${dirpath}_${filename}"
    parent_file="${temp##*/}"
    printf "dir: %10s orig: %10s new: %10s\n" "$dirpath" "$filename" "$parent_file"
done < <(find . -mindepth 2 -type f -print0)
Test tree
$ tree -a
.
|-- a
| |-- b
| | |-- bar
| | `-- c
| | |-- baz
| | `-- d
| | `-- blah
| `-- foo
`-- parent_file.sh
Output
$ ./parent_file.sh
dir: ./a/b/c/d orig: blah new: d_blah
dir: ./a/b/c orig: baz new: c_baz
dir: ./a/b orig: bar new: b_bar
dir: ./a orig: foo new: a_foo
$ FILE=/a/b/c/d/f.txt
$ echo $FILE
/a/b/c/d/f.txt
$ echo $(basename ${FILE%%$(basename $FILE)})_$(basename $FILE)
d_f.txt
No need to call an external command:
s="/a/b/c/d/filename.txt"
t=${s%/*}
t=${t##*/}
filename=${t}_${s##*/}
Take the example:
/a/1/b/c/d/file.txt
/a/2/b/c/d/file.txt
The only reliable way to qualify file.txt and avoid conflicts is to build the entire path into the new filename, e.g.
/a/1/b/c/d/file.txt -> a_1_b_c_d_file.txt
/a/2/b/c/d/file.txt -> a_2_b_c_d_file.txt
You may be able to skip part of the beginning if you know for sure that it will be common to all files, e.g if you know that all files reside somewhere underneath the directory /a above:
/a/1/b/c/d/file.txt -> 1_b_c_d_file.txt
/a/2/b/c/d/file.txt -> 2_b_c_d_file.txt
To achieve this on a per-file basis:
# file="/path/to/filename.txt"
new_file="`echo \"$file\" | sed -e 's:^/::' -e 's:/:_:g'`"
# new_file -> path_to_filename.txt
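The same per-file transformation can also be done without sed, using parameter expansion alone (a bash/ksh/zsh-specific sketch; the ${var//pattern/repl} form is not POSIX sh):

```shell
file="/path/to/filename.txt"
new_file="${file#/}"          # strip the leading slash -> path/to/filename.txt
new_file="${new_file//\//_}"  # replace every remaining slash with an underscore
echo "$new_file"              # -> path_to_filename.txt
```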
Say you want to do this recursively in a directory and its subdirectories:
# dir = /a/b
( cd "$dir" && find . | sed -e 's:^\./::' | while read file ; do
new_file="`echo \"$file\" | sed -e 's:/:_:g'`"
echo "rename $dir/$file to $new_file"
done )
Output:
rename /a/b/file.txt to file.txt
rename /a/b/c/file.txt to c_file.txt
rename /a/b/c/e/file.txt to c_e_file.txt
rename /a/b/d/e/file.txt to d_e_file.txt
...
The above is highly portable and will run on essentially any Unix system under any variant of sh (including bash, ksh, etc.). Note, though, that the plain read in the loop will mangle filenames containing backslashes.