AWK to print field $2 first, then field $1 - unix

Here is the input (sample):
name1#gmail.com|com.emailclient.account
name2#msn.com|com.socialsite.auth.account
I'm trying to achieve this:
Emailclient name1#gmail.com
Socialsite name2#msn.com
If I use AWK like this:
cat foo | awk 'BEGIN{FS="|"} {print $2 " " $1}'
it messes up the output by overlaying field 1 on top of field 2.
Any tips/suggestions? Thank you.

A couple of general tips (besides the DOS line ending issue):
cat is for concatenating files; it's not the only tool that can read files! If a command doesn't read files, use redirection like command < file.
You can set the field separator with the -F option, so instead of:
cat foo | awk 'BEGIN{FS="|"} {print $2 " " $1}'
Try:
awk -F'|' '{print $2" "$1}' foo
This will output:
com.emailclient.account name1#gmail.com
com.socialsite.auth.account name2#msn.com
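The same command also accepts input redirection, as mentioned in the first tip, if you prefer that form:
awk -F'|' '{print $2" "$1}' < foo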
To get the desired output you could do a variety of things. I'd probably split() the second field:
awk -F'|' '{split($2,a,".");print a[2]" "$1}' file
emailclient name1#gmail.com
socialsite name2#msn.com
Finally, converting the first character to uppercase is a bit of a pain in awk, as there is no nice built-in ucfirst() function:
awk -F'|' '{split($2,a,".");print toupper(substr(a[2],1,1)) substr(a[2],2),$1}' file
Emailclient name1#gmail.com
Socialsite name2#msn.com
If you want something more concise (at the cost of an extra sed process) you could do:
awk -F'|' '{split($2,a,".");print a[2]" "$1}' file | sed 's/^./\U&/'
Emailclient name1#gmail.com
Socialsite name2#msn.com

Use a dot or a pipe as the field separator:
awk -v FS='[.|]' '{
printf "%s%s %s.%s\n", toupper(substr($4,1,1)), substr($4,2), $1, $2
}' << END
name1#gmail.com|com.emailclient.account
name2#msn.com|com.socialsite.auth.account
END
gives:
Emailclient name1#gmail.com
Socialsite name2#msn.com

Maybe your file has CRLF line terminators, i.e. every line ends with \r\n.
awk then sees $2 as actually being $2\r. The \r (carriage return) moves the cursor back to the start of the line.
So print $2 " " $1 prints $2 (ending in \r), returns to the start of the line, and then prints $1, which is why field 2 appears overwritten by field 1.

The awk is OK. I'm guessing the file is from a Windows system and has a CR (^M, ASCII 0x0D) at the end of each line.
This causes the cursor to go to the start of the line after $2 is printed.
Use dos2unix or vi with :se ff=unix to get rid of the CRs.
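If you'd rather not convert the file, another option (a minimal sketch building on the split() answer above; foo is the input file from the question) is to strip the trailing CR inside awk before using the fields:
awk -F'|' '{
  sub(/\r$/, "")     # remove the trailing CR from the record (this also re-splits the fields)
  split($2, a, ".")
  print toupper(substr(a[2],1,1)) substr(a[2],2), $1
}' foo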

Related

Merge a string to a line extracted from a text file in UNIX

I want to merge a string, ABC, with a line that I have extracted from a file.
The following command extracts lines 20-25 from file_ABC, takes only the first column, and transposes it into a single row (line).
sed -n '20,25p' < file_ABC | awk '{print $1}' | paste -s
This is the result:
2727778 14734 0 0 0 2713044
I would like to add the string ABC at the first position of this line:
ABC 2727778 14734 0 0 0 2713044
Any suggestions on how to do that?
A quick hack would be to use something like
printf 'ABC\t%s\n' "$(sed -n '20,25p' < file_ABC | awk '{print $1}' | paste -s)"
You could modify your initial command instead to use awk for everything, though:
awk '
BEGIN {printf "ABC"}
NR>=20 && NR<=25 {printf "\t%s", $1}
END {print ""}
' file_ABC
This might work for you (GNU sed):
sed '20,25{s/\s.*//;H};$!d;x;s/^/ABC/;s/\n/ /g' file
Gather up the first column fields by appending them to the hold space for rows 20 to 25 only. At the end of the file prepend ABC and replace the introduced newlines by spaces.
For fun, bash only
filename=file_ABC
words=("${filename##*_}")
i=0
while read -r word rest_of_line; do
((++i < 20 )) && continue
(( i > 25 )) && break
words+=("$word")
done < "$filename"
join() { local IFS=$1; shift; echo "$*"; }
join $'\t' "${words[@]}"
But this will be much slower than a single awk call.
If you want to keep it all in one script:
$ awk 'BEGIN {line="ABC"}
NR>=20 && NR<=25 {line=line FS $1}
NR==25 {print line; exit}' file
Improved version, as suggested by @EdMorton:
$ awk 'NR>=20 {line=line OFS $1}
NR==25 {print "ABC" line; exit}' file

Unix find line number of a string in a file using awk/grep

I'm trying to find the position of a string:
awk -F : '{if ( $0 ~ /Red Car/) print $0}' /var/lab/lab2/rusiuot/stud2001 | tail -l
and somehow I need to find the line number of Red Car. Is it possible to do that using awk or grep?
You can do
awk '/Red Car/ {print NR}' /var/lab/lab2/rusiuot/stud2001
This will print the line number of each line containing Red Car.
If you'd like the line numbers to be printed at the end of the output:
awk '/Red Car/ {a[NR]} 1; END {print "\nlines with pattern";for (i in a) printf "%s ",i;print ""}' file
Try something like:
grep -n "Red Car" /var/lab/lab2/rusiuot/stud2001 | cut -d":" -f 1
The -n option will display the line number along with the line where the pattern is found.
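As a quick, self-contained check of that pipeline, feeding it three made-up lines instead of the real file:
printf 'Blue Car\nRed Car\nGreen Car\n' | grep -n "Red Car" | cut -d":" -f 1
2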

awk — getting minus instead of FILENAME

I am trying to add the filename to the end of each line as a new field. It works, except that instead of getting the filename I get -.
Base file:
070323111|Hudson
What I want:
070323111|Hudson|20150106.csv
What I get:
070323111|Hudson|-
This is my code:
mv $1 $1.bak
cat $1.bak | awk '{print $0 "|" FILENAME}' > $1
- is the way awk presents the filename when there is no such info. Since you are doing cat $1.bak | awk ..., awk is not reading from a file but from stdin.
Instead, just do:
awk '...' file
in your case:
awk '{print $0 "|" FILENAME}' $1.bak > $1
From man awk:
FILENAME
The name of the current input file. If no files are specified on the
command line, the value of FILENAME is “-”. However, FILENAME is
undefined inside the BEGIN rule (unless set by getline).
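A quick way to see both behaviours, using the sample record from the question (with an awk such as gawk or mawk that follows the man page wording above, and assuming the real file is named 20150106.csv as in the desired output):
printf '070323111|Hudson\n' | awk '{print $0 "|" FILENAME}'
070323111|Hudson|-
awk '{print $0 "|" FILENAME}' 20150106.csv
070323111|Hudson|20150106.csv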

awk syntax to invoke function with argument read from a file

I have a function
xyz()
{
    x=$(( $1 * 2 ))
    echo "$x"
}
Then I want to use it to replace a particular column in a CSV file using awk.
File input.csv:
abc,2,something
def,3,something1
I want output like:
abc,4,something
def,6,something1
Command used:
cat input.csv|awk -F, -v v="'"`xyz "$2""'" 'BEGIN {FS=","; OFS=","} {$2=v1; print $0}'
It should open input.csv, call the function xyz with the file's 2nd field as the argument, and store the result back in position 2 of the file, but it is not working!
If I put a constant in place of $2 when calling the function, it works:
cat input.csv|awk -F, -v v="'"`xyz "14""'" 'BEGIN {FS=","; OFS=","} {$2=v1; print $0}'
This line calls the xyz function and puts the result back into the 2nd column of input.csv, but only with 14*2, since 14 is taken as a constant.
Please help me to do this.
There's a back-quote missing from your command line, and a UUOC (Useless Use of Cat), and a mismatch between variable v on the command line and v1 in the awk program:
cat input.csv|awk -F, -v v="'"`xyz "$2""'" 'BEGIN {FS=","; OFS=","} {$2=v1; print $0}'
^ Here ^ Here ^ Here
That should be written using $(…) instead:
awk -F, -v v="'$(xyz "$2")'" 'BEGIN {FS=","; OFS=","} {$2=v; print $0}' input.csv
This leaves you with a problem, though; the function xyz is invoked once by the shell before you start your awk script running, and is never invoked by awk. You simply can't do it that way. However, you can define your function in awk (and on the fly):
awk -F, 'BEGIN { FS = ","; OFS = "," }
function xyz(a) { return a * 2 }
{ $2 = xyz($2); print $0 }' \
input.csv
For your two-line input file, it produces your desired output.

Parsing each field and processing it using 'awk'/'gawk'

Here is a query:
grep bar 'foo.txt' | awk '{print $3}'
The field values emitted by the awk query are mangled C++ symbol names. I want to pass each one to dem and finally output the result of dem, i.e. the demangled symbols.
Assume that the field separator is a ' ' (space).
awk is a pattern matching language. The grep is totally unnecessary.
awk '/bar/{print $3}' foo.txt
does what your example does.
Edit: Fixed up a bit after reading the comments on the preceding answer (I don't know a thing about dem...):
You can make use of the system() call in awk with something like:
awk '/bar/{cline="dem " $3; system(cline)}' foo.txt
but this would spawn an instance of dem for each symbol processed. Very inefficient.
So let's get more clever:
awk '/bar/{list = list " " $3;}END{cline="dem " list; system(cline)}' foo.txt
BTW, untested, as I don't have dem or your input.
Another thought: if you're going to use the xargs formulation offered by other posters, cut might well be more efficient than awk. At that point, however, you would need grep again.
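For reference, that cut formulation might look like the line below; note this assumes the fields really are separated by single spaces, since cut does not collapse runs of whitespace the way awk does:
grep bar foo.txt | cut -d' ' -f3 | xargs dem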
How about
grep bar 'foo.txt' | awk '{ print $3 }' | xargs dem | awk '{ print $3 }'
This will print the demangled symbols, complete with argument lists in the case of methods:
awk '/bar/ { print $3 }' foo.txt | xargs dem | sed -e 's:.* == ::'
This will print the demangled symbols, without argument lists in the case of methods:
awk '/bar/ { print $3 }' foo.txt | xargs dem | sed -e 's:.* == \([^(]*\).*:\1:'
