How to run awk on a file with cedilla (Ç) as delimiter - unix

I have a file with the below contents:
cat file1.dat
anuÇ89Çhyd
binduÇ45Çchennai
I would like to print the second column, using Ç as the delimiter.
The output should be:
89
45

The manpage of awk mentions the following:
-F fs
--field-separator fs
Use fs for the input field separator (the value of the FS predefined variable).
So, this command does what you want:
awk -F'Ç' '{print $2}' file1.dat
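Since -F only sets the FS predefined variable, you can equally set it yourself in a BEGIN block; a minimal equivalent sketch:
awk 'BEGIN{FS="Ç"} {print $2}' file1.dat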

Given:
$ cat file
anuÇ89Çhyd
binduÇ45Çchennai
You can use cut (note that GNU cut requires a single-byte delimiter, so this only works in a locale where Ç is one byte):
$ cut -f 2 -d 'Ç' file
awk:
$ awk -F'Ç' '{print $2}' file
sed:
$ sed -E 's/^[^Ç]*Ç([^Ç]*).*/\1/' file
GNU grep:
$ grep -oP '^[^Ç]*Ç\K[^Ç]+(?=Ç)' file
Perl:
$ perl -lnE 'print $1 if /^[^Ç]*Ç([^Ç]+)Ç/' file
All those print:
89
45

Related

awk to sort two fields:

I would like to sort Input.csv based on fields $1 and $5, producing country-wise A-Z order.
The sort needs to take the country name from $1, or from $5 when $1 is blank.
Input.csv
Country,Amt,Des,Details,Country,Amt,Des,Network,Details
abc,10,03-Apr-14,Aug,abc,10,DL,ABC~XYZ,Sep
,,,,mno,50,DL,ABC~XYZ,Sep
abc,10,22-Jan-07,Aug,abc,10,DL,ABC~XYZ,Sep
jkl,40,11-Sep-13,Aug,,,,,
,,,,ghi,30,AL,DEF~PQZ,Sep
abc,10,03-Apr-14,Aug,abc,10,MN,ABC~XYZ,Sep
abc,10,19-Feb-14,Aug,abc,10,MN,ABC~XYZ,Sep
def,20,02-Jul-13,Aug,,,,,
def,20,02-Aug-13,Aug,,,,,
Desired Output.csv
Country,Amt,Des,Details,Country,Amt,Des,Network,Details
abc,10,03-Apr-14,Aug,abc,10,DL,ABC~XYZ,Sep
abc,10,22-Jan-07,Aug,abc,10,DL,ABC~XYZ,Sep
abc,10,03-Apr-14,Aug,abc,10,MN,ABC~XYZ,Sep
abc,10,19-Feb-14,Aug,abc,10,MN,ABC~XYZ,Sep
def,20,02-Jul-13,Aug,,,,,
def,20,02-Aug-13,Aug,,,,,
,,,,ghi,30,AL,DEF~PQZ,Sep
jkl,40,11-Sep-13,Aug,,,,,
,,,,mno,50,DL,ABC~XYZ,Sep
I have tried the command below but am not getting the desired output. Please suggest:
head -1 Input.csv > Output.csv; sort -t, -k1,1 -k5,5 <(tail -n +2 Input.csv) >> Output.csv
awk to the rescue!
$ awk -F, '{print ($1==""?$5:$1) "\t" $0}' file | sort | cut -f2-
Country,Amt,Des,Details,Country,Amt,Des,Network,Details
abc,10,03-Apr-14,Aug,abc,10,DL,ABC~XYZ,Sep
abc,10,03-Apr-14,Aug,abc,10,MN,ABC~XYZ,Sep
abc,10,19-Feb-14,Aug,abc,10,MN,ABC~XYZ,Sep
abc,10,22-Jan-07,Aug,abc,10,DL,ABC~XYZ,Sep
def,20,02-Aug-13,Aug,,,,,
def,20,02-Jul-13,Aug,,,,,
,,,,ghi,30,AL,DEF~PQZ,Sep
jkl,40,11-Sep-13,Aug,,,,,
,,,,mno,50,DL,ABC~XYZ,Sep
Here the header starts with an uppercase letter and the data is lowercase, so the header happens to sort first. If that's not a valid assumption, the header needs special handling as you did above, or better, handle it within awk:
$ awk -F, 'NR==1{print; next} {print ($1==""?$5:$1) "\t" $0 | "sort | cut -f2-"}' file
Is this what you want? (It omits the first line.)
awk 'NR != 1' file_containing_your_lines | sed "s/,/\t/g" | sort -t$'\t' -k1,1 -k5,5 | sed "s/\t/,/g"
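For completeness, here is a sketch that combines the decorate/sort/cut trick from the answers above with the header handling from the original attempt (same technique, just spelled out as two steps):
head -1 Input.csv > Output.csv
tail -n +2 Input.csv | awk -F, '{print ($1==""?$5:$1) "\t" $0}' | sort -t$'\t' -k1,1 | cut -f2- >> Output.csv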

No results when making a precise match using awk

I have rows like this in my source file:
"Sumit|My Application|PROJECT|1|6|Y|20161103084527"
I want to make a precise match on column 3, i.e. I do not want to use the '~' operator in my awk command. The command:
awk -F '|' '($3 ~ /'"$Var_ApPJ"'/) {print $3}' ${Var_RDR}/${Var_RFL};
fetches the correct result, but the command:
awk -F '|' '($3 == "${Var_ApPJ}") {print $3}' ${Var_RDR}/${Var_RFL};
fails to do so. Can anyone explain why this happens? I want to use '==' because I do not want a match when the value in the source file is "PROJECT1".
Parameter Var_ApPJ="PROJECT"
${Var_RDR}/${Var_RFL} -> refers to the source file.
Refer to the awk documentation on how to pass a variable to awk (the -v option).
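A minimal sketch of that approach for this question (pat is just an arbitrary awk variable name chosen here):
awk -F '|' -v pat="$Var_ApPJ" '$3 == pat {print $3}' ${Var_RDR}/${Var_RFL}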
I found an alternative to '==' using '~':
awk -F '|' '($3 ~ "^'"${Var_ApPJ}"'$") {print $3}' ${Var_RDR}/${Var_RFL};
here is the problem - inside the single-quoted awk program, the shell never expands "${Var_ApPJ}", so awk compares $3 against the literal string ${Var_ApPJ}.
try the below command, which passes the shell variable in with -v -
awk -F '|' -v Var_ApPJ="$Var_ApPJ" '$3 == Var_ApPJ {print $3}' ${Var_RDR}/${Var_RFL};
Inside the awk program, drop the quotes and curly braces: Var_ApPJ there is an awk variable, not a shell expansion.
vipin@kali:~$ cat kk.txt
a 5 b cd ef gh
vipin@kali:~$ awk -v var1="5" '$2 == var1 {print $3}' kk.txt
b
vipin@kali:~$
OR
#cat kk.txt
a 5 b cd ef gh
#var1="5"
#echo $var1
5
#awk '$2 == "'"$var1"'" {print $3}' kk.txt ### without {}
b
#
#awk '$2 == "'"${var1}"'" {print $3}' kk.txt ### with {}
b
#

How to find the count of a particular word in different files in Unix

How do I find the count of a particular word in different files in Unix?
I have 50 files in a directory (abc.txt, abc.txt.1, abc.txt.2, etc.).
What I want: to find the number of instances of the word 'Hello' in each file.
What I have used is grep -c Hello abc* | grep -v :0
It gave me results in the form:
<<File name>> : <<count>>
I want the output to be in the form:
<<Date>> <<File_Name>> <<Number of Instances of word Hello in the file>>
1-1-2001 abc.txt 23
1-1-2014 abc.txt.19 57
2-5-2015 abc.txt.49 16
You can use GNU awk >= 4.0 (ENDFILE is a gawk extension) to count occurrences rather than matching lines (which is all grep -c counts).
If you tell us where the date should come from, I will add it.
awk '{for (i=1;i<=NF;i++) if ($i~/Hello/) a++} ENDFILE {print FILENAME,a;a=0}' abc.txt*
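If the date you want is each file's modification time (an assumption; the question does not say where the date comes from), a shell sketch that produces the Date/File/Count layout:
for f in abc.txt*; do
  # assumption: "date" means the file's mtime (GNU date -r); grep -ow prints each
  # whole-word match on its own line, so wc -l counts occurrences, not lines
  printf '%s %s %s\n' "$(date -r "$f" +%d-%m-%Y)" "$f" "$(grep -ow Hello "$f" | wc -l)"
done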
### Sample code for you to tweak for your needs:
touch test.txt
echo "ravi chandran marappan 30" > test.txt
echo "ramesh kumar marappan 24" >> test.txt
echo "ram lakshman marappan 22" >> test.txt
sed -e 's/ /\n/g' test.txt | sort | uniq |
awk '{print "echo \"" $1 " -$(grep -wc " $1 " test.txt)\""}' | sh
Results:
22 -1
24 -1
30 -1
chandran -1
kumar -1
lakshman -1
marappan -3
ram -1
ramesh -1
ravi -1

How to add a value to the end of each row in Unix

I have fileA and fileB with the data shown below.
fileA
,,"user1","email"
,,"user2","email"
,,"user3","email"
,,"user4","email"
fileB
,,user2,location
,,user4,location
,,user1,location
,,user3,location
I want to look up each fileA user in fileB, take only the location, and append it to fileA (or write it to another file).
Output expecting like
,,"user1","email",location
,,"user2","email",location
,,"user3","email",location
,,"user4","email",location
I'm trying the logic in a while loop: take the username from fileA and search for it in fileB to get the location, but I'm failing to add it back to fileA.
Your help is much appreciated.
This should work:
for user in $(awk -F\" '{print $2}' fileA)
do
loc=$(grep "${user}" fileB | awk -F',' '{print $4}')
sed -i "/${user}/ s/$/,${loc}/" fileA
done
Adding the example:
$ cat fileA
,,"user1","email"
,,"user2","email"
,,"user3","email"
,,"user4","email"
$ cat fileB
,,user2,location2
,,user4,location4
,,user1,location1
,,user3,location3
$ for user in $(awk -F\" '{print $2}' fileA); do loc=$(grep "${user}" fileB | awk -F',' '{print $4}'); sed -i "/${user}/ s/$/,${loc}/" fileA; done
$ cat fileA
,,"user1","email",location1
,,"user2","email",location2
,,"user3","email",location3
,,"user4","email",location4
The description is not clear, but based on the question you can use the following command to append a value to the end of each matching row:
sed -i '/search_pattern/ s/$/string_to_be_appended/' filename
You can do this entirely in awk:
awk -F, '
NR==FNR{a[$3]=$4;next}
{for(x in a) if(index($3,x)>0) print $0","a[x]}' file2 file1
Test:
$ cat file1
,,"user1","email"
,,"user2","email"
,,"user3","email"
,,"user4","email"
$ cat file2
,,user2,location2
,,user4,location4
,,user1,location1
,,user3,location3
$ awk -F, 'NR==FNR{a[$3]=$4;next}{for(x in a) if(index($3,x)>0) print $0","a[x]}' file2 file1
,,"user1","email",location1
,,"user2","email",location2
,,"user3","email",location3
,,"user4","email",location4

Unix cut command taking an unordered list as arguments

The Unix cut command takes a list of fields, but it always outputs them in input order, not the order that I need them in:
$ echo 1,2,3,4,5,6 | cut -d, -f 1,2,3,5
1,2,3,5
$ echo 1,2,3,4,5,6 | cut -d, -f 1,3,2,5
1,2,3,5
However, I would like a Unix shell command that will give me the fields in the order that I specify.
Use:
pax> echo 1,2,3,4,5,6 | awk -F, 'BEGIN {OFS=","}{print $1,$3,$2,$5}'
1,3,2,5
or:
pax> echo 1,2,3,4,5,6 | awk -F, -vOFS=, '{print $1,$3,$2,$5}'
1,3,2,5
Or just use the shell:
$ set -f
$ string="1,2,3,4,5"
$ IFS=","
$ set -- $string
$ echo $1 $3 $2 $5
1 3 2 5
An awk-based solution is elegant. Here is a Perl-based solution:
echo 1,2,3,4,5,6 | perl -e '@order=(1,3,2,5);@a=split/,/,<>;for(@order){print $a[$_-1];}'
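If the field order should come from a variable rather than being hard-coded, a sketch that splits an order string inside awk (order is an assumed name):
$ echo 1,2,3,4,5,6 | awk -F, -v order='1,3,2,5' '{n=split(order, o, ","); for (i=1; i<=n; i++) printf "%s%s", $(o[i]), (i<n ? "," : "\n")}'
1,3,2,5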
