Variable expansion in makefile for loop - gnu-make

APPS = a b c
a_OBJ = x.o y.o z.o
b_OBJ = p.o q.o r.o
c_OBJ = s.o t.o k.o
all:
for i in $(APPS); \
do
echo $("$${i}_OBJ"); \
done;
In the above sample Makefile I want to print the list of object files inside the for loop, but the list is not getting expanded as expected. What exactly am I missing here for the proper expansion of the OBJ lists?

All make does with recipe lines is pass them one by one to the shell after expanding them. Each line gets a separate shell instance, so by the time you reach the echo the shell won't even know about a b c, let alone the makefile variables a_OBJ etc. (It also fails before that, because for i in a b c; do isn't valid shell syntax on its own - remember each line is completely separate.)
You can override this behavior with the .ONESHELL special target, which sends all of a target's recipe lines to a single shell instance:
.ONESHELL:
all:
# etc.
but if you try this you'll see that bash executes
for i in a b c; \
do
echo ; \
done;
Make expands $("$${i}_OBJ") to an empty string before sending the line to the shell, because there is no make variable named "${i}_OBJ".
You'll need to expand the variables before they are sent, with something like the following:
APPS = a b c
a_OBJ = x.o y.o z.o
b_OBJ = p.o q.o r.o
c_OBJ = s.o t.o k.o
all: $(APPS)
$(APPS):
@echo $($@_OBJ)
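If you'd rather keep a single recipe that loops over all the apps, you can also let make expand the lists up front with $(foreach ...); a rough sketch (untested, and recipe lines must start with a tab):
# expand each <app>_OBJ list at the make level, then loop over the quoted groups in the shell
all:
	@for objs in $(foreach app,$(APPS),"$($(app)_OBJ)"); do \
		echo "$$objs"; \
	done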

Related

Print sets of lines from multiple folders as rows, not columns?

I have .out files in multiple folders.
Let's say I am in a directory containing folders A, B, C, D. I use the command below to print a specific value from the 8th column of lines containing the keyword VALUE in all .out files in folders A, B, C, D:
awk '/VALUE/{print $8}' ./*/*.out
My result would look like:
output1_A
output2_A
output3_A
output1_B
output2_B
output3_B
output1_C
output2_C
output3_C
Is there a way I could get my output to look like what is shown below instead?
output1_A output2_A output3_A
output1_B output2_B output3_B
output1_C output2_C output3_C
In other words, have a space separate outputs from the same folder, and not a linebreak?
Could you please try the following (since I don't have your directory structure I couldn't test it; if you could post the files' contents perhaps it could be done in a single awk as well).
awk '/VALUE/{print $8}' ./*/*.out | xargs -n 3
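xargs -n 3 just re-wraps the single-column output three items per line, so it assumes every folder contributes exactly three matches. paste can do the same re-wrapping if you prefer:
awk '/VALUE/{print $8}' ./*/*.out | paste -d' ' - - -
# paste reads one line of stdin per "-", so three "-" join every three lines with a space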
Another:
$ awk '/VALUE/{b=b (FNR==(NR>FNR)?ORS:ofs) $8;ofs=OFS}END{print b}' dir?/file1
output1_A output2_A output3_A
output1_B output2_B output3_B
output1_C output2_C output3_C
Explained:
$ awk '
/VALUE/ { # magic keyword
b=b (FNR==(NR>FNR)?ORS:ofs) $8 # gather into a buffer, using ORS or OFS as appropriate
ofs=OFS # ... but for NR==1 we want ""
}
END {
print b # output buffer
}' dir?/file1
The two unexplained empty records in your sample are not accounted for, but they would probably just cause extra OFSes at the ends of the output records.
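If the number of matches per folder is not always three, another option is to let awk start a new output row whenever it moves on to the next input file; a rough sketch (untested, and a file with no VALUE lines would produce an empty row):
awk 'FNR==1 { if (NR>1) printf ORS; sep="" }     # new file: start a new row, reset the separator
     /VALUE/ { printf "%s%s", sep, $8; sep=OFS } # append the 8th field to the current row
     END     { printf ORS }' ./*/*.out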

Unix: Using filename from another file

A basic Unix question.
I have a script which counts the number of records in a delta file.
awk '{
n++
} END {
if(n >= 1000) print "${completeFile}"; else print "${deltaFile}";
}' <${deltaFile} >${fileToUse}
Then, depending on the IF condition, I want to process the appropriate file:
cut -c2-11 < ${fileToUse}
But how do I use the contents of the file as the filename itself?
And if there are any tweaks to be made, feel free.
Thanks in advance
Cheers
Simon
To use as a filename the contents of a file which is itself identified by a variable (as asked)
cut -c2-11 <"$( cat $filetouse )"
// or in zsh just
cut -c2-11 <"$( < $filetouse )"
unless the filename stored in the file ends with one or more newline character(s) - which people rarely do because it's quite awkward and inconvenient - in which case you need something like:
read -rdX var <$filetouse; cut -c2-11 < "${var%?}"
// where X is a character that doesn't occur in the filename
// maybe something like $'\x1f'
Tweaks: your awk prints the variable reference ${completeFile} or ${deltaFile} literally (because they're inside the single-quoted awk script), not the value of either variable. If you actually want the value, as I'd expect from your description, you should pass the shell variables to awk variables like this:
awk -vf="$completeFile" -vd="$deltaFile" '{n++} END{if(n>=1000)print f; else print d}' <"$deltaFile"`
# the " around $var can be omitted if the value contains no whitespace and no glob chars
# people _often_ but not always choose filenames that satisfy this
# and they must not contain backslash in any case
or export the shell vars as env vars (if they aren't already) and access them like
awk '{n++} END{if(n>=1000) print ENVIRON["completeFile"]; else print ENVIRON["deltaFile"]}' <"$deltaFile"
Also you don't need your own counter, awk already counts input records
awk -vf=... -vd=... 'END{if(NR>=1000)print f;else print d}' <...
or more briefly
awk -vf=... -vd=... 'END{print (NR>=1000?f:d)}' <...
or using a file argument instead of redirection so the name is available to the script
awk -vf="$completeFile" 'END{print (NR>=1000?f:FILENAME)}' "$deltaFile" # no <
and barring trailing newlines as above you don't need an intermediate file at all, just
cut -c2-11 <"$( awk -vf="$completeFile" -'END{print (NR>=1000?f:FILENAME)}' "$deltaFile")"
Or you don't really need awk, wc can do the counting and any POSIX or classic shell can do the comparison
if [ $(wc -l <"$deltaFile") -ge 1000 ]; then c="$completeFile"; else c="$deltaFile"; fi
cut -c2-11 <"$c"

How to extract only the required portion of a line in shell script?

In a file I have lines with a pattern like
a=1;
b=2;
c = 3;
d =4
I used sed -n -e 's/.*a=//p' filename, which gives the entire line, but I need the result as the values a=1 b=2.
I need the values of a and b. How can I extract them?
You can use a command like:
cat filename | tr ' ' '\n' | egrep 'a|b'
This command will work if you need to extract only a or b.
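If you literally only want the a and b assignments, anchoring the match to the start of the line avoids picking up other lines that merely contain those letters; one possible sketch using grep and xargs:
grep -oE '^[ab] *= *[0-9]+' filename | tr -d ' ' | xargs
# -o prints only the matched text (a=1 and b=2), tr strips any spaces around =,
# and xargs with no command joins the results onto one line: a=1 b=2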

I've been searching this to get a sense but I am still confused

I'm confused about the $ symbol in Unix.
According to the definition, it denotes the value stored by the variable following it. I'm not following the definition - could you please give me an example of how it is used?
thanks
You define a variable like this:
greeting=hello
export name=luc
and use them like this:
echo $greeting $name
If you use export, the variable will also be visible to child processes (such as scripts you run from this shell).
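For example, only the exported variable survives into a child process (a quick sketch to paste into an interactive bash session):
greeting=hello                    # not exported: local to the current shell
export name=luc                   # exported: also visible to child processes
bash -c 'echo "greeting=$greeting name=$name"'
# prints: greeting= name=luc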
EDIT: If you want to assign a string containing spaces, you have to quote it with either double quotes (") or single quotes ('). Variables inside double quotes are expanded, whereas inside single quotes they are not:
axel@loro:~$ name=luc
axel@loro:~$ echo "hello $name"
hello luc
axel@loro:~$ echo 'hello $name'
hello $name
In the case of shell scripts: when you assign a value to a variable you do not need the $ symbol, only when you want to access the value of that variable.
Examples:
VARIABLE=100000
echo "$VARIABLE"
othervariable=$((VARIABLE + 10))   # arithmetic needs $(( )); $VARIABLE+10 alone would just give the string "100000+10"
echo "$othervariable"
One other thing: when you assign a value, do not leave spaces before or after the = symbol.
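For example, the space turns the assignment into a command invocation, which fails (a quick sketch of the error):
VARIABLE = 100000    # wrong: the shell tries to run a command named VARIABLE -> "VARIABLE: command not found"
VARIABLE=100000      # correct: no spaces around =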
Here is a good bash tutorial:
http://linuxconfig.org/Bash_scripting_Tutorial
mynameis.sh:
#!/bin/sh
finger | grep "`whoami` " | tail -n 1 | awk '{print $2, $3}'
finger: prints all logged-in users; example result:
login Name Tty Idle Login Time Office Office Phone
xuser Forname Nickname tty7 3:18 Mar 9 07:23 (:0)
...
grep: filters the lines containing the given string (in this example we keep the line for xuser if our login name is xuser)
http://www.gnu.org/software/grep/manual/grep.html
whoami: prints my login name
http://linux.about.com/library/cmd/blcmdl1_whoami.htm
tail -n 1: shows only the last line of the results
http://unixhelp.ed.ac.uk/CGI/man-cgi?tail
the awk script: prints the second and third columns of the result: Forname, Nickname
http://www.staff.science.uu.nl/~oostr102/docs/nawk/nawk_toc.html

Maximum number of characters in a field of a csv file using unix shell commands?

I have a csv file. In one of the fields, say the second field, I need to know maximum number of characters in that field. For example, given the file below:
adf,jlkjl,lkjlk
jf,j,lkjljk
jlkj,lkejflkj,adfafef,
jfje,jj,lkjlkj
jjee,eeee,ereq
the answer would be 8 because row 3 has 8 characters in the second field. I would like to integrate this into a bash script, so common unix command line programs are preferred. Imaginary bonus points for explaining what the command is doing.
EDIT: Here is what I have so far
cut --delimiter=, -f 2 test.csv | wc -m
This gives me the character count for all of the fields, not just one, so I still have progress to make.
I would use awk for the task. It uses a comma to split each line into fields and, for each line, checks whether the length of the second field is bigger than the value already saved.
awk '
BEGIN {
FS = ","
}
{ c = length( $2 ) > c ? length( $2 ) : c }
END {
print c
}
' infile
Use it as a one-liner and assign the return value to a variable, like:
num=$(awk 'BEGIN { FS = "," } { c = length( $2 ) > c ? length( $2 ) : c } END { print c }' infile)
Well @oob, you basically provided the answer with your last edit, and it's the simplest of all the answers given. However, I also like @Birei's answer just because I enjoy AWK. :-)
I too had to find the longest possible value for a given field inside a text file today. Tested with your sample and got the expected 8.
cut -d, -f2 test.csv | wc -L
As you see, it's just a matter of using the correct option for wc (which I hope you have already figured out by now); note that -L / --max-line-length is a GNU coreutils extension.
My solution is to loop over the lines. Then I exchange the commas for newlines to loop over the words, then I check which is the longest word and save the data.
#!/bin/bash
lineno=1
matchline=0
matchlen=0
for line in $(cat input.txt); do
words=`echo $line | sed -e 's/,/\n/g'`
for word in $words; do
# echo "line: $lineno; length: ${#word}; input: $word"
if [ $matchlen -lt ${#word} ]; then
matchlen=${#word}
matchline=$lineno
fi
done;
lineno=$(($lineno + 1))
done;
echo max length is $matchlen in line $matchline
Bash and Coreutils Solution
There are a number of ways to solve this, but I vote for simplicity. Here's a solution that uses Bash parameter expansion and a few standard shell utilities to measure each line:
cut -d, -f2 /tmp/foo |
while read; do
echo ${#REPLY}
done | sort -n | tail -n1
The idea here is to split the CSV file, and then use the parameter length expansion of the implicit REPLY variable to measure the characters on each line. When we sort the measurements numerically, the last line of the sorted output holds the length of the longest line found.
cut out the desired column
print each line length
sort the line lengths
grab the max line length
cut -d, -f2 test.csv | awk '{print length($0);}' | sort -n | tail -n 1
