I can't figure out the correct syntax to use a conditionally defined variable with a foreach loop in GNU Make 3.81.
A simple makefile
SET := A B C
define da_loop
ifeq ($(S), A)
T := equals_A
else
T := not_equals_A
endif
out_$(S):
echo "$(S) $$(T)"
endef
$(foreach S, $(SET), $(eval $(call da_loop, $S)))
Expected output:
$ make out_A out_B out_C
echo "A equals_A"
A equals_A
echo "B not_equals_A"
B not_equals_A
echo "C not_equals_A"
C not_equals_A
Actual output:
$ make out_A out_B out_C
echo "A not_equals_A"
A not_equals_A
echo "B not_equals_A"
B not_equals_A
echo "C not_equals_A"
C not_equals_A
Changing "eval" to "info" looks to me like this ought to work:
ifeq (A, A)
T := equals_A
else
T := not_equals_A
endif
out_A:
echo "A $(T)"
ifeq (B, A)
T := equals_A
else
T := not_equals_A
endif
out_B:
echo "B $(T)"
ifeq (C, A)
T := equals_A
else
T := not_equals_A
endif
out_C:
echo "C $(T)"
I've tried every combination of extra "$", quotes, and = vs := that I can think of, but none has worked yet. Any ideas?
The problem has nothing to do with loops, eval, etc. The problem is simpler and more fundamental than that: in make, variables are globally defined, and all rules are only invoked after the entire makefile has been parsed. The info trick actually DID show you the problem. Let's simplify it a bit:
T := equals_A
out_A:
	echo "A $(T)"
T := not_equals_A
out_B:
	echo "B $(T)"
T := not_equals_A
out_C:
	echo "C $(T)"
The problem is that all variable assignment happens when the makefile is read in, but expansion of recipes doesn't happen until much later, when make is building targets and decides to run the recipe. So, your makefile could be written equivalently as:
T := equals_A
T := not_equals_A
T := not_equals_A
out_A:
	echo "A $(T)"
out_B:
	echo "B $(T)"
out_C:
	echo "C $(T)"
Now you can see why you get the behavior you do.
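To see the two phases directly, here is a tiny illustration of my own (not part of the original answer): the $(info ...) lines run while the makefile is being read, but the recipe is only expanded when make actually builds the target.

T := first
$(info while reading: T is $(T))
show:
	echo "when the recipe runs: T is $(T)"
T := second
$(info while reading: T is now $(T))

Running make show prints "first" and then "second" during parsing, and the recipe then echoes "second", because by the time it is expanded the last assignment has already won.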
There are numerous ways to "fix" this; which of them is most appropriate depends on what you really want to do in your real makefile. A very simple option is to use target-specific variables, like this:
SET := A B C
define da_loop
ifeq ($(S), A)
out_$(S): T := equals_A
else
out_$(S): T := not_equals_A
endif
out_$(S):
echo "$(S) $$(T)"
endef
$(foreach S, $(SET), $(eval $(call da_loop, $S)))
By using this, you've specified that T is bound to the scope of each specific target, so every target can have a different value. Other options would be constructed variable names, or simply expanding the value directly into the recipe using the $(if ...) function rather than setting it as a separate variable; a sketch of the latter follows.
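For instance, that last option might look roughly like this (my own sketch of the same toy example, not taken from the original answer); the comparison is done once per eval, so nothing has to survive until recipe-expansion time:

SET := A B C
define da_loop
out_$(S):
	echo "$(S) $(if $(filter A,$(S)),equals_A,not_equals_A)"
endef
$(foreach S, $(SET), $(eval $(call da_loop, $S)))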
ifeq of course also has another syntax (with quotes), in which no whitespace stripping is done inside the quoted text.
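For example (a minimal illustration of the quoted form, not from the original answer):

ifeq "$(S)" "A"
T := equals_A
endif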
I'm trying to do some floating-point math in a zsh script. I'm seeing the following behavior:
$ (( a = 1.23456789 * 0.00000001 )); printf "a = %g\n" $a
a = 1.23e-08
$ (( a = 1.23456789 * 0.00000001 )); printf "a = %e\n" $a
a = 1.230000e-08
$ (( a = 1.23456789 * 0.0000001 )); printf "a = %e\n" $a
a = 1.235000e-07
I expect not to lose the first number's mantissa precision when I merely multiply it by a number whose mantissa is 1 (or at least very close to 1, if the true binary representation is considered). In other words, I'd expect to get a = 1.23456789e-08, or maybe some truncated mantissa, but not zeros after 1.23 / 1.235.
I'm running the following version:
$ zsh --version
zsh 5.8 (x86_64-apple-darwin20.0)
Am I missing something? Or is it an issue in zsh? I'm new to zsh, and I don't have a lot of experience in shell programming in general, so any help is appreciated. Thanks!
It appears that (( x = 1.0 )), when x is not defined, will cause Zsh to declare the variable as -F: a double precision floating point which is formatted to fixed-point with 10 decimal digits on output:
% unset x; (( x = 0.12345678901234567 )); declare -p x
typeset -F x=0.1234567890
% unset x; x=$((0.12345678901234567)); declare -p x
typeset x=0.12345678901234566
I don't know why it works this way, but if you manually declare your variable as a string first, this won't happen, and you'll get the full value:
% unset a; typeset a; (( a = 1.23456789 * 0.00000001 )); printf "a = %g\n" $a
a = 1.23457e-08
The difference comes from how you pass the value of a to printf. If you write it as
$ (( a = 1.23456789 * 0.00000001 )); printf "a = %e\n" $((a))
$ (( a = 1.23456789 * 0.0000001 )); printf "a = %e\n" $((a))
the problem does not occur. This is described here, where it says:
floating point numbers can be declared with the float builtin; there are two types, differing only in their output format, as described for the typeset builtin. The output format can be bypassed by using arithmetic substitution instead of the parameter substitution, i.e. ‘${float}’ uses the defined format, but ‘$((float))’ uses a generic floating point format
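A small way to see the two output paths side by side (my own sketch, using a made-up variable f):

% typeset -F f=1.23456789e-8
% print ${f}        # parameter expansion uses the fixed-point -F format
0.0000000123
% print $((f))      # arithmetic substitution uses the generic float format
1.23456789e-08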
I need to auto-complete two parameters of a function, where the second parameter depends on the first one.
An example: the first parameter of a function "foo" can have values of "a" or "b". The second parameter can have values "10" or "11" in case the first parameter is "a", and "20" and "21" in case the first parameter is "b". So the following combinations of the parameters are legal:
foo a 10
foo a 11
foo b 20
foo b 21
The combinations are known upfront (they can be hardcoded).
The zsh completion system doc is quite obscure, and the great How To didn't solve my problem either. The closest would be to use _arguments, possibly with a state action, but I didn't manage to make it work.
_arguments would make sense if you've got options mixed in with the arguments that you've described. Otherwise, just look in the $words array for the previous word - something like:
# $words[1] is the command itself, so its first argument sits at index 2
# and CURRENT is 2 while that argument is being completed
if (( CURRENT == 2 )); then
  _wanted letters expl letter compadd a b
else
  local -a numbers
  case $words[2] in
    a) numbers=( 10 11 ) ;;
    b) numbers=( 20 21 ) ;;
  esac
  _wanted numbers expl number compadd -a numbers
fi
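Assuming the snippet above becomes the body of a completion function (say _foo, a name made up here), it can be tied to the command with:

compdef _foo foo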
I solved my case with the following:
IFS='
'
local -a options
_arguments "1:first:->first" "*:next:->next"
case "$state" in
  first)
    for o in $(_command_to_autocomplete_first); do
      options+=("$o")
    done
    _describe 'values' options
    ;;
  next)
    for o in $(_command_to_complete_others $words[2]); do
      options+=("$o")
    done
    _describe 'values' options
    ;;
esac
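For the hard-coded a/b 10/11 case from the question, the same idea might look roughly like this (a sketch of mine, untested against a real foo command):

#compdef foo
local context state state_descr line
typeset -A opt_args
_arguments '1:first argument:->first' '2:second argument:->second'
case $state in
  first) compadd a b ;;
  second)
    case $words[2] in
      a) compadd 10 11 ;;
      b) compadd 20 21 ;;
    esac
    ;;
esac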
I am currently writing a short program to print the global macro variables of the current Stata session.
I cannot understand the outcome of the following piece of code:
macro drop _all
global glob0: all globals
cap program drop print_globals
program define print_globals
args start_globs
di "$glob0"
di "`start_globs'"
end
print_globals $glob0
The outcome of this is:
S_level S_ADO S_StataMP S_StataSE S_FLAVOR S_OS S_OSDTL S_MACH
S_level
Why am I not passing the entire contents of glob0 to start_globs?
Your args statement assigns only the first argument supplied to the program to a local macro; if there are other arguments they are ignored.
The essence of the matter is whether double quotes are used to bind what is supplied into one argument.
Whether you supply an argument as a global or a local is immaterial: globals and locals mentioned on the command line are evaluated before the program even runs and are not seen as such; only their contents are passed to the program.
Define this simpler program and run through the possibilities:
program showfirstarg
args first
di "`first'"
end
global G "A B C D E"
local L "A B C D E"
showfirstarg $G
showfirstarg "$G"
showfirstarg `L'
showfirstarg "`L'"
Results in turn:
. showfirstarg $G
A
. showfirstarg "$G"
A B C D E
. showfirstarg `L'
A
. showfirstarg "`L'"
A B C D E
In order to print the content of the program argument as intended, one must use compound quotes:
print_globals `" ${glob0} "'
and not print_globals ${glob0}.
To see this, consider the following example:
local A "a b c d e"
global B "a b c d e"
cap program drop print_prog
program define print_prog
args loc_input
di "print global: $B"
di "print local: `loc_input'"
end
print_prog `A'
print_prog `" `A' "' // prints both A and B as initially intended
The confusion here comes from the fact that B is printed as intended without having to use compound quotes, whereas the same does not apply to the local macro A when it is passed as an argument to the program.
In fact, as highlighted in the comments below, in the latter case only the first element is passed as a program argument (a in the example).
By using compound quotes we supply a b c d e as a single argument, and the final result is the one wanted.
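For completeness, everything typed after the program name is also available inside the program as the local macro `0', so another way to see the full argument list is something like this sketch (show_args is a made-up name):

capture program drop show_args
program define show_args
    di `"`0'"'
end
show_args $B

This displays a b c d e, since `0' receives the whole expanded command line.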
I have a list of products:
1, 2, 3, 4...
Which depends on a different list of sources:
z, y, x, w...
The dependence is one-to-one (the nth element of the first list has the nth element of the second list as its source), and the recipe is the same in all cases. There is, for all intents and purposes, no structure to the lists - it's not possible to write a simple expression which allows the nth element of the list to be generated from n. The solution that I know will work is
1 : z
[recipe]
2 : y
[identical recipe]
3 : x
[identical recipe]
4 : w
[identical recipe]
...but I don't like this because it makes it easier to make a mistake when modifying the lists or the recipe. I would prefer to take advantage of the correspondence pattern and begin with
SRCLIST = z y x w
DSTLIST = 1 2 3 4
And then somehow have a general rule like
DSTLIST_n : SRCLIST_n
[recipe]
Is there any way of doing something like this?
This is a bit ugly but I believe it should work. (There are probably slightly better ways but this was the first thing I came up with.)
SRCLIST = z y x w
DSTLIST = 1 2 3 4
# Create a list of ":" the same length as SRCLIST
MIDLIST = $(foreach s,$(SRCLIST),:)
$(info SRCLIST:$(SRCLIST))
$(info DSTLIST:$(DSTLIST))
$(info MIDLIST:$(MIDLIST))
# Join the three lists together (in two passes since $(join) only takes two lists)
JOINLIST = $(join $(join $(DSTLIST),$(MIDLIST)),$(SRCLIST))
$(info joined:$(JOINLIST))
# eval each of the created strings to create the prerequisite entries
$(foreach r,$(JOINLIST),$(eval $r))
# Set the rules to build all the targets.
$(DSTLIST):
	echo '$@ for $^'
$ touch z y x w
$ make
SRCLIST:z y x w
DSTLIST:1 2 3 4
MIDLIST:: : : :
joined:1:z 2:y 3:x 4:w
echo '1 for z'
1 for z
echo '2 for y'
2 for y
echo '3 for x'
3 for x
echo '4 for w'
4 for w
I should note that this will not deal with spaces in any of the entries at all well (but that's generally true of make so nothing specific to this solution).
You could also always create a Canned Recipe and then stick that in each explicitly written-out target, as in your original idea; a sketch follows.
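A canned recipe along those lines might look roughly like this (a sketch; print-rule is a made-up name):

define print-rule
echo '$@ for $^'
endef

1: z
	$(print-rule)
2: y
	$(print-rule)

The recipe text lives in one place, but each target/prerequisite pair still has to be written out by hand.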
Inspired by Etan, here is what I found worked:
SRCLIST = z y x w
DSTLIST = 1 2 3 4
# Make a list of ":" for combining
SEPARATOR = $(foreach s,$(SRCLIST),:)
# Define a parameterized rule which accepts the dst:src info as an argument
define dst-src
$1
[rule]
endef
# Define the list of dependencies
DST_SRC_RELNS = $(join $(join $(DSTLIST),$(SEPARATOR)),$(SRCLIST))
# ^ DST_SRC_RELNS evaluates to 1:z 2:y 3:x 4:w
# Print a preview of the rules the makefile generates itself
$(info $(foreach r,$(DST_SRC_RELNS),$(call dst-src,$r)))
# Generate the rules
$(foreach r,$(DST_SRC_RELNS),$(eval $(call dst-src,$r)))
I think that you could get away with not defining the parameterized rule dst-src by actually writing the rule out inside the $(eval ...), but I didn't like this for two reasons:
you need to define a newline macro for the result to be something that make will recognize as a rule
adding more text within the $(foreach ...) makes it even harder for a human reader to figure out what's really going on
Nice problem. You didn't mention which version of make you are using, but .SECONDEXPANSION often works well for these sorts of source lookup tables.
A sketch:
srcs := z y x w
targets := 1 2 3 4
.SECONDEXPANSION:
pairs := $(join ${targets},$(addprefix :,${srcs}))
lookup-src = $(patsubst $1:%,%,$(filter $1:%,${pairs}))
${targets}: $$(call lookup-src,$$@)
	echo '[$^] -> [$@]'
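With the lists above (my reading of the sketch, not spelled out in the original answer), the lookup works like this:

pairs                = 1:z 2:y 3:x 4:w
$(call lookup-src,3) = $(patsubst 3:%,%,$(filter 3:%,${pairs})) = x

so at second expansion each target picks its own source out of the table via $$@.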
Let's say I have an input file where each line contains the path from the root (A) to a leaf
echo "A\tB\tC\nA\tB\tD\nA\tE" > lines.txt
A B C
A B D
A E
How can I easily generate the resulting tree, (A(B(C,D),E))?
I'd like to use GNU tools (awk, sed, etc.) because they tend to work better with large files, but an R script would also work. The R input would be:
# lines <- lapply(readLines("lines.txt"), strsplit, " +")
lines <- list(list(c("A", "B", "C")), list(c("A", "B", "D")), list(c("A","E")))
In Perl:
#!/usr/bin/env perl
use strict;
my $t = {};
while (<>) {
    my @a = split;
    my $t1 = $t;
    while (my $a = shift @a) {
        $t1->{$a} = {} if not exists $t1->{$a};
        $t1 = $t1->{$a};
    }
}
print &p($t)."\n";

sub p {
    my ($t) = @_;
    return
        unless keys %$t;
    return '('
        . join(',', map { $_ . p($t->{$_}) } sort keys %$t)
        . ')';
}
This script returns:
% cat <<EOF | perl l.pl
A B C
A B D
A E
EOF
(A(B(C,D),E))
Note that this script, due to the recursion in p, is not at all suited for large datasets. But that can easily be resolved by turning it into a double loop, like the first while above.
Why do it the easy way, if you can use Bourne Shell script instead? Note, this is not even Bash, this is plain old Bourne shell, without arrays...
#!/bin/sh
#
# A B C
# A B D
# A E
#
# "" vs "A B C" -> 0->3, ident 0 -> -0+3 -> "(A(B(C"
# "A B C" vs "A B D" -> 3->3, ident 2 -> -1+1 -> ",D"
# "A B D" vs "A E" -> 3->2, ident 1 -> -2+1 -> "),E"
# "A E" vs. endc -> 2->0, ident 0 -> -2+0 -> "))"
#
# Result: (A(B(C,D),E))
#
# Input stream is a path per line, path segments separated with spaces.
process_line () {
    local line2="$@"
    n2=$#
    set -- $line1
    n1=$#
    s=
    if [ $n2 = 0 ]; then # last line (empty)
        for s1 in $line1; do
            s="$s)"
        done
    else
        sep=
        remainder=false
        for s2 in $line2; do
            if ! $remainder; then
                if [ "$1" != $s2 ]; then
                    remainder=true
                    if [ $# = 0 ]; then # only children
                        sep='('
                    else # sibling to an existing element
                        sep=,
                        shift
                        for s1 in $@; do
                            s="$s)"
                        done
                    fi
                fi
            fi
            if $remainder; then # Process remainder as mismatch
                s="$s$sep$s2"
                sep='('
            fi
            shift # remove the first element of line1
        done
    fi
    result="$result$s"
}
result=
line1=
(
    cat - \
        | sed -e 's/[[:space:]]\+/ /' \
        | sed -e '/^$/d' \
        | sort -u
    echo '' # last line marker
) | while read line2; do
    process_line $line2
    line1="$line2"
    test -n "$line2" \
        || echo $result
done
This produces the correct answer for two different files (l.sh is the shell version, l.pl the version in Perl):
% for i in l l1; do cat $i; ./l.sh < $i; ./l.pl < $i; echo; done
A
A B
A B C D
A B E F
A G H
A G H I
(A(B(C(D),E(F)),G(H(I))))
(A(B(C(D),E(F)),G(H(I))))
A B C
A B D
A E
(A(B(C,D),E))
(A(B(C,D),E))
Hoohah!
Okay, so I think I got it:
# input
lines <- c(list(c("A", "B", "C")), list(c("A", "B", "D")), list(c("A","E")))
# generate children
generate_children <- function(lines){
  children <- list()
  for (line in lines) {
    for (index in 1:(length(line)-1)){
      parent <- line[index]
      next_child <- line[index + 1]
      if (is.null(children[[parent]])){
        children[[parent]] <- next_child
      } else {
        if (!(next_child %in% children[[parent]])){
          children[[parent]] <- c(children[[parent]], next_child)
        }
      }
    }
  }
  children
}
expand_children <- function(current_parent, children){
  if (current_parent %in% names(children)){
    expanded_children <- sapply(children[[current_parent]], function(current_child){
      expand_children(current_child, children)
    }, USE.NAMES = FALSE)
    output <- setNames(list(expanded_children), current_parent)
  } else {
    output <- current_parent
  }
  output
}
children <- generate_children(lines)
root <- names(children)[1]
tree <- expand_children(root, children)
dput(tree)
# structure(list(A = structure(list(B = c("C", "D"), "E"), .Names = c("B",""))), .Names = "A")
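To get the parenthesised string the question actually asked for, the nested list can be flattened with a small recursive helper (my own sketch; fmt is a made-up name):

fmt <- function(node) {
  # leaves are plain character vectors: join them with commas
  if (!is.list(node)) return(paste(node, collapse = ","))
  parts <- vapply(seq_along(node), function(i) {
    nm <- names(node)[i]
    inner <- fmt(node[[i]])
    # unnamed elements (like "E") sit directly at this level
    if (is.null(nm) || nm == "") inner else paste0(nm, "(", inner, ")")
  }, character(1))
  paste(parts, collapse = ",")
}
paste0("(", fmt(tree), ")")
# [1] "(A(B(C,D),E))"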
Is there a simpler answer?