Remove a number string from a text file in Unix

I have a file with a list of names and phone numbers as follows:
Smith,John 315-555-1212
Jones,Graham 315-234-2344
Aikman,Troy 312-153-3232
Young,Steve 415-343-3421
I need a command string that will output just the lines with the "315" area code, strip the area code from those lines, and sort by last name. I would like the output to look like this:
Jones,Graham 234-2344
Smith,John 555-1212
So far I have this for getting the list. "areacode" is the name of my file. How can I print the sorted list of names and remove the area code from my list of names in the output?
awk '$2~/315/ { print }' areacode

grep ' 315-' areacode | awk '{ sub(/315-/ ,"" ); print $0}' | sort -t, -k1 > newfile
Try that for a start, assuming I understood what you asked. This can (except for the sort) be done all in awk as well:
awk '/315-/ { sub(/315-/, ""); print $0 }' areacode | sort -t, -k1 > newfile
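With the sample data above, newfile should then contain:
Jones,Graham 234-2344
Smith,John 555-1212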

Here is one way to do it, piping awk's print into an external sort command held in the awk variable sort (assigned on the command line before the filename):
awk '/ 315-/ {sub(/315-/ ,"");print|sort}' sort="sort -t, -k1" areacode
Jones,Graham 234-2344
Smith,John 555-1212
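The same pipe-to-a-command idiom also works with a literal command string instead of a variable; a minimal sketch doing the same thing:
awk '/ 315-/ { sub(/315-/, ""); print | "sort -t, -k1" }' areacode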

Try the following; you can pass the area code you are searching for as a parameter:
awk '$2 ~ "^"areacode {split($2,a,"-"); print $1" "a[2]"-"a[3]}' areacode=315 myfile.txt | sort -t, -k1
Sample output when area code 315 is passed:
Jones,Graham 234-2344
Smith,John 555-1212

Using GNU awk:
awk -F'[ ,-]' '$3==315 { a[$1]=$2 } END { n=asorti(a,b); for(i=1;i<=n;++i) print a[b[i]],b[i] }' file
Split the line into fields on space, comma or hyphen. Populate an array a whose keys are the surnames and values are the forenames. After processing the file, sort on the array keys and loop through the sorted array.
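For reference, asorti(a, b) is a GNU awk (gawk) built-in: it fills b with the indices of a in sorted order and returns how many there are. A tiny standalone sketch:
gawk 'BEGIN { a["Smith"]="John"; a["Jones"]="Graham"; n=asorti(a,b); for(i=1;i<=n;++i) print b[i], a[b[i]] }'
which prints Jones Graham and then Smith John.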
Using awk and sort:
awk -F'[ ,-]' '$3==315 { print $2,$1 }' file | sort -k2
Print the forename followed by the surname and sort alphabetically on the surname.
Output, using either approach:
Graham Jones
John Smith
If you want to keep the original "surname,forename" format, you could instead use (GNU awk):
awk -F'[ ,]' 'BEGIN { OFS="," } $3 ~ /^315/ { a[$1]=$2 } END { n=asorti(a,b); for(i=1;i<=n;++i) print b[i],a[b[i]] }' file
Or:
awk '$2 ~ /^315/ { print $1 }' file | sort -t, -k1
Output:
Jones,Graham
Smith,John

$ sort -t, -k1 file | awk -v area="315" 'sub("^"area"-","",$2)'
Jones,Graham 234-2344
Smith,John 555-1212
or if you only want the names:
$ sort -t, -k1 file | awk -v area="315" '$2 ~ "^"area{print $1}'
Jones,Graham
Smith,John

Related

Merge a string to a line extracted from a text file in UNIX

I want to add a string, ABC, to a line that I have extracted from a file.
The following command extracts lines 20-25 of file_ABC, takes only the first column, and transposes it into a single row (line).
sed -n '20,25p' < file_ABC | awk '{print $1}' | paste -s
This is the result:
2727778 14734 0 0 0 2713044
I would like to add the string ABC at the first position of this line:
ABC 2727778 14734 0 0 0 2713044
Any suggestion on how to do that?
A quick hack would be to use something like
printf 'ABC\t%s\n' "$(sed -n '20,25p' < file_ABC | awk '{print $1}' | paste -s)"
You could modify your initial command instead to use awk for everything, though:
awk '
BEGIN {printf "ABC"}
NR>=20 && NR<=25 {printf "\t%s", $1}
END {print ""}
' file_ABC
This might work for you (GNU sed):
sed '20,25{s/\s.*//;H};$!d;x;s/^/ABC/;s/\n/ /g' file
Gather up the first column fields by appending them to the hold space for rows 20 to 25 only. At the end of the file prepend ABC and replace the introduced newlines by spaces.
For fun, bash only
filename=file_ABC
words=("${filename##*_}")  # start the array with "ABC", taken from the filename suffix
i=0
while read -r word rest_of_line; do
((++i < 20 )) && continue
(( i > 25 )) && break
words+=("$word")
done < "$filename"
join() { local IFS=$1; shift; echo "$*"; }
join $'\t' "${words[@]}"
But this will be much slower than a single awk call.
If you want to keep it all in one script:
$ awk 'BEGIN {line="ABC"}
NR>=20 && NR<=25 {line=line FS $1}
NR==25 {print line; exit}' file
Improved version as suggested by @EdMorton:
$ awk 'NR>=20 {line=line OFS $1}
NR==25 {print "ABC" line; exit}' file

Extract file string from left side but following 2nd delimiter from right

Below are the full file names.
qwertyuiop.abcdefgh.1234567890.txt
qwertyuiop.1234567890.txt
I am trying to use:
awk -F'.' '{print $1}'
How can I use an awk command to extract the output below?
qwertyuiop.abcdefgh
qwertyuiop
Edit
I have a list of files in a directory, and I am trying to extract time, size, owner, and filename into separate variables.
For the filenames:
NAME=$(ls -lrt /tmp/qwertyuiop.1234567890.txt | awk -F'/' '{print $3}' | awk -F'.' '{print $1}')
$ echo $NAME
qwertyuiop
$
NAME=$(ls -lrt /tmp/qwertyuiop.abcdefgh.1234567890.txt | awk -F'/' '{print $3}' | awk -F'.' '{print $1}')
$ echo $NAME
qwertyuiop
$
expected
qwertyuiop.abcdefgh
With GNU awk and other versions that allow manipulation of NF
$ awk -F. -v OFS=. '{NF-=2} 1' ip.txt
qwertyuiop.abcdefgh
qwertyuiop
NF-=2 effectively deletes the last two fields.
1 is an awk idiom to print the contents of $0.
Note that this assumes there are at least two fields in every line, otherwise you'd get an error.
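If some lines might have two or fewer fields, a guarded variant (just a sketch of the same idea) leaves those short lines untouched:
awk -F. -v OFS=. 'NF>2{NF-=2} 1' ip.txt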
Similar concept with perl; it prints an empty line if the number of fields in the line is less than 3:
$ perl -F'\.' -lane 'print join ".", @F[0..$#F-2]' ip.txt
qwertyuiop.abcdefgh
qwertyuiop
With sed, you can preserve lines if the number of fields is less than 3:
$ sed 's/\.[^.]*\.[^.]*$//' ip.txt
qwertyuiop.abcdefgh
qwertyuiop
EDIT: Taking inspiration from Sundeep's solution, adding the following to the mix too.
awk 'BEGIN{FS=OFS="."} {$(NF-1)=$NF="";sub(/\.+$/,"")} 1' Input_file
Could you please try the following.
awk -F'.' '{for(i=(NF-1);i<=NF;i++){$i=""};sub(/\.+$/,"")} 1' OFS="." Input_file
OR
awk 'BEGIN{FS=OFS="."} {for(i=(NF-1);i<=NF;i++){$i=""};sub(/\.+$/,"")} 1' Input_file
Explanation: Adding explanation for above code too here.
awk '
BEGIN{ ##Mentioning BEGIN section of awk program here.
FS=OFS="." ##Setting FS and OFS variables for awk to DOT here as per OPs sample Input_file.
} ##Closing BEGIN section here.
{
for(i=(NF-1);i<=NF;i++){ ##Starting for loop from i value from (NF-1) to NF for all lines.
$i="" ##Setting value of respective field to NULL.
} ##Closing for loop block here.
sub(/\.+$/,"") ##Substituting all DOTs till end of line with NULL in current line.
}
1 ##Mentioning 1 here to print edited/non-edited current line here.
' Input_file ##Mentioning Input_file name here.

Unix find line number of a string in a file using awk/grep

I'm trying to find the position of a string:
awk -F : '{if ( $0 ~ /Red Car/) print $0}' /var/lab/lab2/rusiuot/stud2001 | tail -l
and somehow I need to find the line position of Red Car. Is it possible to do that using awk or grep?
You can do
awk '/Red Car/ {print NR}' /var/lab/lab2/rusiuot/stud2001
This will print the line number of each line containing Red Car.
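If you also want to see the matching line next to its number, a small variation (a sketch) is:
awk '/Red Car/ {print NR": "$0}' /var/lab/lab2/rusiuot/stud2001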
If you'd like the line numbers to be printed at the end, after the whole file:
awk '/Red Car/ {a[NR]} 1; END {print "\nlines with pattern";for (i in a) printf "%s ",i;print ""}' file
Try something like:
grep -n "Red Car" /var/lab/lab2/rusiuot/stud2001 | cut -d":" -f 1
The -n option displays the line number along with each line where the pattern is found, separated by a colon; the cut then keeps just the number.

AWK to print field $2 first, then field $1

Here is the input (sample):
name1@gmail.com|com.emailclient.account
name2@msn.com|com.socialsite.auth.account
I'm trying to achieve this:
Emailclient name1@gmail.com
Socialsite name2@msn.com
If I use AWK like this:
cat foo | awk 'BEGIN{FS="|"} {print $2 " " $1}'
it messes up the output by overlaying field 1 on top of field 2.
Any tips/suggestions? Thank you.
A couple of general tips (besides the DOS line ending issue):
cat is for concatenating files; it's not the only tool that can read files! If a command doesn't read files itself, use redirection like command < file.
You can set the field separator with the -F option so instead of:
cat foo | awk 'BEGIN{FS="|"} {print $2 " " $1}'
Try:
awk -F'|' '{print $2" "$1}' foo
This will output:
com.emailclient.account name1@gmail.com
com.socialsite.auth.account name2@msn.com
To get the desired output you could do a variety of things. I'd probably split() the second field:
awk -F'|' '{split($2,a,".");print a[2]" "$1}' file
emailclient name1@gmail.com
socialsite name2@msn.com
Finally to get the first character converted to uppercase is a bit of a pain in awk as you don't have a nice built in ucfirst() function:
awk -F'|' '{split($2,a,".");print toupper(substr(a[2],1,1)) substr(a[2],2),$1}' file
Emailclient name1@gmail.com
Socialsite name2@msn.com
If you want something more concise (although it costs an extra sub-process), you could do the following; note that \U in the sed replacement is a GNU extension:
awk -F'|' '{split($2,a,".");print a[2]" "$1}' file | sed 's/^./\U&/'
Emailclient name1@gmail.com
Socialsite name2@msn.com
Use a dot or a pipe as the field separator:
awk -v FS='[.|]' '{
printf "%s%s %s.%s\n", toupper(substr($4,1,1)), substr($4,2), $1, $2
}' << END
name1@gmail.com|com.emailclient.account
name2@msn.com|com.socialsite.auth.account
END
gives:
Emailclient name1@gmail.com
Socialsite name2@msn.com
Maybe your file has CRLF line terminators, i.e. every line ends with \r\n.
awk then sees $2 as actually $2\r; when printed, the \r moves the cursor back to the start of the line.
So {print $2 $1} really prints $2\r$1: it prints $2 first, then returns to the start of the line, then prints $1, and field 2 is overwritten by field 1.
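If you can't fix the file itself, one option (a sketch) is to strip the trailing CR inside awk before printing:
awk -F'|' '{sub(/\r$/,""); print $2" "$1}' foo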
The awk is OK. I'm guessing the file is from a Windows system and has a CR (^M, ASCII 0x0D) at the end of each line.
This will cause the cursor to go to the start of the line after $2.
Use dos2unix or vi with :se ff=unix to get rid of the CRs.
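If dos2unix isn't available, tr can strip the carriage returns as well (foo.unix below is just a placeholder name for the cleaned copy):
tr -d '\r' < foo > foo.unix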

Parsing each field and process it using 'awk'/'gawk'

Here is a query:
grep bar 'foo.txt' | awk '{print $3}'
The field emitted by the awk query is a mangled C++ symbol name. I want to pass each one to dem and finally print the output of dem, i.e. the demangled symbols.
Assume that the field separator is a ' ' (space).
awk is a pattern matching language. The grep is totally unnecessary.
awk '/bar/{print $3}' foo.txt
does what your example does.
Edit: fixed up a bit after reading the comments on the preceding answer (I don't know a thing about dem...):
You can make use of the system call in awk with something like:
awk '/bar/{cline="dem " $3; system(cline)}' foo.txt
but this would spawn an instance of dem for each symbol processed. Very inefficient.
So let's get more clever:
awk '/bar/{list = list " " $3;}END{cline="dem " list; system(cline)}' foo.txt
BTW-- Untested as I don't have dem or your input.
Another thought: if you're going to use the xargs formulation offered by other posters, cut might well be more efficient than awk. At that point, however, you would need grep again.
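That cut-based pipeline might look something like this (untested sketch, assuming the single-space field separator mentioned above):
grep bar foo.txt | cut -d' ' -f3 | xargs dem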
How about
grep bar 'foo.txt' | awk '{ print $3 }' | xargs dem | awk '{ print $3 }'
This will print the demangled symbols, complete with argument lists in the case of methods:
awk '/bar/ { print $3 }' foo.txt | xargs dem | sed -e 's:.* == ::'
This will print the demangled symbols, without argument lists in the case of methods:
awk '/bar/ { print $3 }' foo.txt | xargs dem | sed -e 's:.* == \([^(]*\).*:\1:'
Cheers,
V.
