How to coerce AWK to evaluate a string as a math expression?

Is there a way to evaluate a string as a math expression in awk?
balter@spectre3:~$ echo "sin(0.3) 0.3" | awk '{print $1,sin($2)}'
sin(0.3) 0.29552
I would like a way to have the first field evaluated as well, so that it also prints 0.29552.

You can just create your own eval function, which calls awk again to evaluate whatever expression you hand it:
$ cat tst.awk
{ print eval($1), sin($2) }

function eval(str,   cmd, line, ret) {
    # \047 is a single quote, so this runs  awk 'BEGIN{print <str>; exit}'  and reads its output
    cmd = "awk \047BEGIN{print " str "; exit}\047"
    if ( (cmd | getline line) > 0 ) {
        ret = line
    }
    close(cmd)
    return ret
}
$ echo 'sin(0.3) 0.3' | awk -f tst.awk
0.29552 0.29552
$ echo '4*7 0.3' | awk -f tst.awk
28 0.29552
$ echo 'tolower("FOO") 0.3' | awk -f tst.awk
foo 0.29552

awk lacks an eval(...) function, so you cannot turn input strings into code once the awk program is running. It could perhaps be done, but not without writing your own parsing and evaluation engine in awk.
I would recommend using bc for this, like
[edwbuck@phoenix ~]$ echo "s(0.3)" | bc -l
.29552020666133957510
Note that this requires sin to be shortened to s, as that is the sine function in bc's -l math library.
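For the original two-field input, the two tools can be combined; this is just a sketch, assuming sin is the only function name that has to be translated to bc's s:
echo "sin(0.3) 0.3" | while read -r expr num; do
    # evaluate the first field with bc, and the second with bc's sine as well
    printf '%s %s\n' "$(echo "$expr" | sed 's/sin(/s(/' | bc -l)" "$(echo "s($num)" | bc -l)"
done
Both fields then come out as .29552020666133957510.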

Here's a simple one-liner!
math(){ awk "BEGIN{printf $1}"; }
Examples of use:
math 1+1
Yields "2"
math 'sqrt(25)'
Yields "5"
x=100; y=5; math "sqrt($x) + $y"
Yields "15"
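One caveat: printf uses the result of the expression as its format string and prints no trailing newline, so a result containing % characters would be misinterpreted. A sketch of a slightly safer variant uses print instead:
math(){ awk "BEGIN{print $1}"; }
math '2^10 + sqrt(25)'
Yields "1029"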

With gawk version 4.1.2:
echo "sin(0.3) 0.3" | awk '{split($1,a,/[()]/); f=a[1]; print @f(a[2]), sin($2)}'
It works with tolower(FOO) too.
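The same idea spread out, as a sketch; gawk's @ indirect calls on built-in functions need a sufficiently recent gawk (the answer above used 4.1.2):
echo "sin(0.3) 0.3" | gawk '{
    split($1, a, /[()]/)      # a[1] = function name, a[2] = argument text
    f = a[1]
    print @f(a[2]), sin($2)   # call the function named in f indirectly
}'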

You can try Perl, as it has an eval() function.
$ echo "sin(0.3)" | perl -ne ' print eval '
0.29552020666134
$
For the given input,
$ echo "sin(0.3) 0.3" | perl -ne ' /(\S+)\s+(\S+)/ and print eval($1), " ", $2 '
0.29552020666134 0.3
$

Related

Remove duplicated string stored in variable

I have a variable $var with this content:
var=word1,word2,word3,word1,word3
and I need to delete the duplicate words, with the result stored back in the same variable $var.
Try
var="word1,word2,word3,word1,word3"
list=$(echo $var | tr "," "\n")
var=($(printf "%s\n" "${list[@]}" | sort | uniq -c | sort -rnk1 | awk '{ print $2 }'))
echo "${var[@]}"
If open to perl then:
$ var="word1,word2,word3,word1,word3"
$ var=$(perl -F, -lane'{$h{$_}++ or push @a, $_ for @F; print join ",", @a}' <<< "$var")
$ echo "$var"
word1,word2,word3
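If the original order of the words matters, here is a sketch that keeps the first occurrence of each word and the comma separators (it assumes the values contain no newlines):
var="word1,word2,word3,word1,word3"
var=$(tr ',' '\n' <<< "$var" | awk '!seen[$0]++' | paste -sd, -)
echo "$var"
word1,word2,word3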

How to reverse a string in ksh

Please help me with this problem. I have an array which holds 1000 lines of numbers treated as strings, and I want to reverse each of them one by one. My problem is how to reverse them, because I have to use ksh; with bash or something else it would be easy. What I have now is below, but
rev="$rev${copy:$y:1}" doesn't work in ksh.
i=0
while [[ $i -lt 999 ]]
do
    rev=""
    var=${xnumbers[$i]}
    copy=${var}
    len=${#copy}
    y=$(expr $len - 1)
    while [[ $y -ge 0 ]]
    do
        rev="$rev${copy:$y:1}"
        echo "y = " $y
        y=$(expr $y - 1)
    done
    echo "i = " $i
    echo "rev = " $rev
    #xnumbers[$i]=$(expr $xnumbers[$i] "|" $rev)
    echo "xum = " ${xnumbers[$i]}
    echo "##############################################"
    i=$(expr $i + 1)
done
I am not sure why we cannot use the rev command.
$ echo 798|rev
897
You can also try:
$ echo 798 | awk '{ for(i=length;i!=0;i--)x=x substr($0,i,1);}END{print x}'
897
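If rev is not available and the ${var:offset:length} syntax is what fails in ksh, here is a sketch of a pure-shell reverse that only uses POSIX parameter expansion (it should work in ksh88, ksh93 and bash):
reverse() {
    s=$1 out=""
    while [ -n "$s" ]; do
        rest=${s%?}            # the string minus its last character
        last=${s#"$rest"}      # just that last character
        out=$out$last
        s=$rest
    done
    printf '%s\n' "$out"
}
reverse 798
897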
If you can print the contents of the array to a file, you can then process the file with this awk one-liner.
awk '{s1=split($0,A,""); line=""; for (i=s1;i>0;i--) line=line A[i];print line}' file
Check this!!
other_var=`echo ${xnumbers[$i]} | awk '{s1=split($0,A,""); line=""; for (i=s1;i>0;i--) line=line A[i];print line}'`
I have tested this on Ubuntu with ksh, same results:
number="789"
other_var=`echo $number | awk '{s1=split($0,A,""); line=""; for (i=s1;i>0;i--) line=line A[i];print line}'`
echo $other_var
987
You could use cut, paste and rev together, just change printf to cat file.txt:
paste -d' ' <(printf "%s data\n" {1..100} | cut -d' ' -f1) <(printf "%s data\n" {1..100} | cut -d' ' -f2 |rev)
Or rev alone, if it's not a numbered file, as clarified by the OP.

Find all words starting with a fixed string in a file?

How can I find all the words in my CSV file starting with $?
My file is like:
Test1,$Var1,$varCab1,$Vargab1,Comment1
Test2,$Var2,$varCab2,$Vargab2,Comment2
Test3,$Var3,$varCab3,$Vargab3,Comment3
As an output I want
$Var1
$varCab1
$Vargab1
$Var2
$varCab2
$Vargab2
$Var3
$varCab3
$Vargab3
Try the following (grep -oE '\$\w+' filename):
$ cat 1.csv
Test1,$Var1,$varCab1,$Vargab1,Comment1
Test2,$Var2,$varCab2,$Vargab2,Comment2
Test3,$Var3,$varCab3,$Vargab3,Comment3
$ grep -oE '\$\w+' 1.csv
$Var1
$varCab1
$Vargab1
$Var2
$varCab2
$Vargab2
$Var3
$varCab3
$Vargab3
Using awk:
$ awk -F, '{ for(i=1;i<=NF;i++) if ($i ~ /\$/) print $i; }' 1.csv
$Var1
$varCab1
$Vargab1
$Var2
$varCab2
$Vargab2
$Var3
$varCab3
$Vargab3
Use tr and grep:
$ tr ',' '\n' < inputfile | grep "^[$]"
$Var1
$varCab1
$Vargab1
$Var2
$varCab2
$Vargab2
$Var3
$varCab3
$Vargab3
Using perl:
perl -ne 'for (m/\$\w+/g) { print $_, "\n" }' < inputfile
Or even shorter:
perl -ne 'print map("$_\n", m/\$\w+/g)' < inputfile
Explanation:
The regular expression \$\w+ matches a $ followed by one or more word characters.
The m//g expression returns a list of matches.
perl -ne runs the expression for each line of input, putting the line in $_, which is then used by the m//g expression.
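If neither GNU grep's -o nor perl is available, a plain awk loop over match() does the same job. A sketch, with \w spelled out as an explicit character class for portability:
awk '{
    s = $0
    while (match(s, /\$[A-Za-z0-9_]+/)) {      # find the next $word
        print substr(s, RSTART, RLENGTH)
        s = substr(s, RSTART + RLENGTH)        # continue after the match
    }
}' 1.csv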

How to find the distinct values in unix

I need distinct values from the below columns:
AA|BB|CC
a@gmail.com,c@yahoo.co.in|a@gmail.com|a@gmail.com
y@gmail.com|x@yahoo.in,z@redhat.com|z@redhat.com
c@gmail.com|b@yahoo.co.in|c@uix.xo.in
Here the records are '|' separated, and in the 1st column we can have two email IDs which are ',' separated, so I want to consider that also. I want the distinct email IDs across the AA, BB and CC columns, whether they are '|' separated or ',' separated.
Expected output:
c@yahoo.co.in|a@gmail.com|
y@gmail.com|x@yahoo.in|z@redhat.com
c@gmail.com|b@yahoo.co.in|c@uix.xo.in
is awk unix enough for you?
{
    for (i = 1; i <= NF; i++) {
        if ($i ~ /@/) {
            mail[$i]++
        }
    }
}

END {
    for (x in mail) {
        print mail[x], x
    }
}
output:
$ awk -F'[|,]' -f v.awk f1
2 z@redhat.com
3 a@gmail.com
1 x@yahoo.in
1 c@yahoo.co.in
1 c@gmail.com
1 y@gmail.com
1 b@yahoo.co.in
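That prints one global count per address for the whole file; if the goal is the per-line, order-preserving output shown in the question, here is a sketch along the same lines:
awk -F'[|,]' '{
    out = ""
    split("", seen)                              # reset the per-line duplicate tracker
    for (i = 1; i <= NF; i++)
        if ($i != "" && !seen[$i]++)
            out = out (out == "" ? "" : "|") $i
    print out
}' f1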
Using awk:
cat file | tr ',' '|' | awk -F '|' '{ line=""; for (i=1; i<=NF; i++) {if ($i != "" && list[NR"#"$i] != 1){line=line $i "|"}; list[NR"#"$i]=1 }; print line}'
Prints:
a@gmail.com|c@yahoo.co.in|
y@gmail.com|x@yahoo.in|z@redhat.com|
c@gmail.com|b@yahoo.co.in|c@uix.xo.in|
Edit:
Now works properly with inputs such as:
a@gmail.com|c@yahoo.co.in|
y@gmail.com|x@yahoo.in|a@gmail.com|
c@gmail.com|c@yahoo.co.in|c@uix.xo.in|
Prints:
a@gmail.com|c@yahoo.co.in|
y@gmail.com|x@yahoo.in|a@gmail.com|
c@gmail.com|c@yahoo.co.in|c@uix.xo.in|
The following python code will solve your problem:
#!/usr/bin/env python
while True:
    try:
        addrs = raw_input()
    except EOFError:
        break
    print '|'.join(set(addrs.replace(',', '|').split('|')))
In Bash only:
while read s; do
    IFS='|,'
    for e in $s; do
        echo "$e"
    done | sort | uniq
    unset IFS
done
This seems to work, although I'm not sure what to do if there are more than three unique mails. Run with awk -f filename.awk dataname.dat
BEGIN { FS = "[,|]" }

NF {
    delete uniqmails;
    for (i = 1; i <= NF; i++)
        uniqmails[$i] = 1;
    sep = "";
    n = 0;
    for (m in uniqmails) {
        printf "%s%s", sep, m;
        sep = "|";
        n++;
    }
    for (; n < 3; n++) printf "|";
    print "";    # end of line
}
}
There's also this "one-liner" that doesn't need awk:
while read line; do
    echo $line | tr ",|" "\n" | sort -u |\
        paste <( seq 3) - | cut -f 2 |\
        tr "\n" "|" |\
        rev | cut -c 2- | rev;
done
With perl:
perl -lane '$s{$_}++ for split /[|,]/; END { print for keys %s;}' input
I have edited this post; hope it will work now.
while read line
do
    val1=`echo $line|awk -F"|" '{print $1}'`
    val2=`echo $line|awk -F"|" '{print $2}'`
    val3=`echo $line|awk -F"|" '{print $3}'`
    a=`echo $line|awk -F"|" '{print $2,"|",$3}'|sed 's/'$val1'//g'`
    aa=`echo "$val1|$a"`
    b=`echo $aa|awk -F"|" '{print $1,"|",$3}'|sed 's/'$val2'//g'`
    b1=`echo $b|awk -F"|" '{print $1}'`
    b2=`echo $b|awk -F"|" '{print $2}'`
    bb=`echo "$b1|$val2|$b2"`
    c=`echo $bb|awk -F"|" '{print $1,"|",$2}'|sed 's/'$val3'//g'`
    cc=`echo "$c|$val3"|sed 's/,,/,/;s/,|/|/;s/|,/|/;s/^,//;s/ //g'`
    echo "$cc">>abcd
done<ab.dat
cat abcd
c@yahoo.co.in||a@gmail.com
y@gmail.com|x@yahoo.in|z@redhat.com
c@gmail.com|b@yahoo.co.in|c@uix.xo.in
You can split out all the ','-separated values and parse them in the same way, if your values are ','-separated as well.

Unix cut command taking an unordered list as arguments

The Unix cut command takes a list of fields, but not the order that I need it in.
$ echo 1,2,3,4,5,6 | cut -d, -f 1,2,3,5
1,2,3,5
$ echo 1,2,3,4,5,6 | cut -d, -f 1,3,2,5
1,2,3,5
However, I would like a Unix shell command that will give me the fields in the order that I specify.
Use:
pax> echo 1,2,3,4,5,6 | awk -F, 'BEGIN {OFS=","}{print $1,$3,$2,$5}'
1,3,2,5
or:
pax> echo 1,2,3,4,5,6 | awk -F, -vOFS=, '{print $1,$3,$2,$5}'
1,3,2,5
Or just use the shell
$ set -f
$ string="1,2,3,4,5"
$ IFS=","
$ set -- $string
$ echo $1 $3 $2 $5
1 3 2 5
The awk-based solution is elegant. Here is a perl-based one:
echo 1,2,3,4,5,6 | perl -e '@order=(1,3,2,5);@a=split/,/,<>;for(@order){print $a[$_-1];}'
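If reordering delimited fields comes up often, the awk approach can be wrapped in a small helper. This is only a sketch; the reorder name and its argument convention are invented for illustration:
# usage: reorder N N N ...   reads comma-separated lines on stdin
reorder() {
    awk -F, -v OFS=, -v order="$*" '{
        n = split(order, o, / /)               # e.g. "1 3 2 5" -> o[1..4]
        out = ""
        for (i = 1; i <= n; i++)
            out = out (i > 1 ? OFS : "") $(o[i])
        print out
    }'
}
echo 1,2,3,4,5,6 | reorder 1 3 2 5
1,3,2,5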
