I wanted to keep only the 10.100.52.111 and delete everything else; the IP keeps changing, so I don't want to hard-code it.
The original output was as below:
"PrivateIpAddress": "10.100.52.111",
I tried the below command, which removed the leading "PrivateIpAddress": " portion:
sudo aws ec2 describe-instances --filter Name=tag:Name,Values=bip-spark-es-worker1 |grep PrivateIpAddress |head -1|sed 's/^[ \t]*\"PrivateIpAddress\"[:]* \"//g'
so the output of the above command is now
10.100.52.111",
I want to delete the trailing quote and comma as well.
I tried ["].$ and also \{2\}.$, but neither worked.
Please help.
Let sed do all the work. You don't need grep or head:
sed -n '/"PrivateIpAddress": /{s///; s/[",]//g; p; q}'
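A quick sanity check against the sample line (my test, not part of the answer):
$ echo '"PrivateIpAddress": "10.100.52.111",' | sed -n '/"PrivateIpAddress": /{s///; s/[",]//g; p; q}'
10.100.52.111
(With the real, indented aws output, any leading whitespace on the line would still be left in place.)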
If the content within the quotes does not contain any quotes itself, then
grep PrivateIpAddress |head -1|sed 's/^[ \t]*\"PrivateIpAddress\"[:]* \"//g'
can be replaced with
awk -F\" '/PrivateIpAddress/{print $4; exit}'
-F\" use " as field separator
/PrivateIpAddress/ if line matches this string
print $4 print 4th field which is 10.100.52.111 for given sample
exit will quit as only first match is required
Some awk proposals:
echo '"PrivateIpAddress": "10.100.52.111",'| awk -F: '{print substr($2,3,13)}'
10.100.52.111
echo '"PrivateIpAddress": "10.100.52.111",'| awk -F\" '{print $4}'
10.100.52.111
Alternative:
$ echo "\"PrivateIpAddress\": \"10.100.52.111\", "
"PrivateIpAddress": "10.100.52.111",
$ echo "\"PrivateIpAddress\": \"10.100.52.111\", " |grep -Po '(\d+[.]){3}\d+'
10.100.52.111
$ echo "\"PrivateIpAddress\": \"10.100.52.111\", " |grep -Eo '([[:digit:]]+[.]){3}[[:digit:]]+'
10.100.52.111
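Applied to the original command, a sketch along the same lines (assuming GNU grep; -m1 stops after the first match and \K drops the key from the output):
sudo aws ec2 describe-instances --filter Name=tag:Name,Values=bip-spark-es-worker1 | grep -oPm1 '"PrivateIpAddress": "\K[\d.]+'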
Below are the full file names.
qwertyuiop.abcdefgh.1234567890.txt
qwertyuiop.1234567890.txt
I am trying to use
awk -F'.' '{print $1}'
How can I use an awk command to extract the below output?
qwertyuiop.abcdefgh
qwertyuiop
Edit
I have a list of files in a directory.
I am trying to extract time, size, owner, and filename into separate variables.
For the filenames:
NAME=$(ls -lrt /tmp/qwertyuiop.1234567890.txt | awk -F'/' '{print $3}' | awk -F'.' '{print $1}')
$ echo $NAME
qwertyuiop
$
NAME=$(ls -lrt /tmp/qwertyuiop.abcdefgh.1234567890.txt | awk -F'/' '{print $3}' | awk -F'.' '{print $1}')
$ echo $NAME
qwertyuiop
$
Expected:
qwertyuiop.abcdefgh
With GNU awk and other versions that allow manipulation of NF
$ awk -F. -v OFS=. '{NF-=2} 1' ip.txt
qwertyuiop.abcdefgh
qwertyuiop
NF-=2 effectively deletes the last two fields
1 is an awk idiom to print the contents of $0
Note that this assumes there are at least two fields in every line; otherwise you'd get an error
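A guarded variant (my sketch, same GNU awk assumption) that leaves lines with fewer than three fields untouched:
$ awk -F. -v OFS=. 'NF>2{NF-=2} 1' ip.txt
qwertyuiop.abcdefgh
qwertyuiop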
Similar concept with perl; it prints an empty line if the number of fields in the line is less than 3:
$ perl -F'\.' -lane 'print join ".", @F[0..$#F-2]' ip.txt
qwertyuiop.abcdefgh
qwertyuiop
With sed, lines with fewer than 3 fields are preserved:
$ sed 's/\.[^.]*\.[^.]*$//' ip.txt
qwertyuiop.abcdefgh
qwertyuiop
EDIT: Taking inspiration from Sundeep's solution and adding the following to the mix as well.
awk 'BEGIN{FS=OFS="."} {$(NF-1)=$NF="";sub(/\.+$/,"")} 1' Input_file
Could you please try the following.
awk -F'.' '{for(i=(NF-1);i<=NF;i++){$i=""};sub(/\.+$/,"")} 1' OFS="." Input_file
OR
awk 'BEGIN{FS=OFS="."} {for(i=(NF-1);i<=NF;i++){$i=""};sub(/\.+$/,"")} 1' Input_file
Explanation: Adding an explanation for the above code here as well.
awk '
BEGIN{ ##Mentioning the BEGIN section of the awk program here.
FS=OFS="." ##Setting the FS and OFS variables for awk to DOT, as per the OP's sample Input_file.
} ##Closing the BEGIN section here.
{
for(i=(NF-1);i<=NF;i++){ ##Starting a for loop from i=(NF-1) to NF for all lines.
$i="" ##Setting the value of the respective field to NULL.
} ##Closing the for loop block here.
sub(/\.+$/,"") ##Substituting the trailing DOT(s) at the end of the line with NULL.
}
1 ##Mentioning 1 here to print the edited/non-edited current line.
' Input_file ##Mentioning Input_file name here.
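Running the first one-liner on the sample names (my check, with the two names saved as Input_file):
$ awk 'BEGIN{FS=OFS="."} {$(NF-1)=$NF="";sub(/\.+$/,"")} 1' Input_file
qwertyuiop.abcdefgh
qwertyuiop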
I have the following list in a text file:
10.1.2.200
10.1.2.201
10.1.2.202
10.1.2.203
I want to enclose each value in "double quotes", comma-separate them, and join them into one string.
Can this be done in sed or awk?
Expected output:
"10.1.2.200","10.1.2.201","10.1.2.202","10.1.2.203","10.1.2.204"
The easiest is something like this (in pseudo code):
Read a line;
Put the line in quotes;
Keep that quoted line in a stack or string;
At the end (or while constructing the string), join the lines together with a comma.
Depending on the language, that is fairly straightforward to do:
With awk:
$ awk 'BEGIN{OFS=","}{s=s ? s OFS "\"" $1 "\"" : "\"" $1 "\""} END{print s}' file
"10.1.2.200","10.1.2.201","10.1.2.202","10.1.2.203"
Or, with less of a 'wall of quotes', define a quote character:
$ awk 'BEGIN{OFS=",";q="\""}{s=s ? s OFS q$1q : q$1q} END{print s}' file
With sed (the second sed gathers all lines into the pattern space with the :a/N/$!ba loop, then replaces the embedded newlines with commas):
$ sed -E 's/^(.*)$/"\1"/' file | sed -e ':a' -e 'N' -e '$!ba' -e 's/\n/,/g'
"10.1.2.200","10.1.2.201","10.1.2.202","10.1.2.203"
(Perl and Ruby have a join function, so it is easiest to push the elements onto an array and then join it.)
Perl:
$ perl -lne 'push @a, "\"$_\""; END{print join(",", @a)}' file
"10.1.2.200","10.1.2.201","10.1.2.202","10.1.2.203"
Ruby:
$ ruby -ne 'BEGIN{@arr=[]}; @arr.push "\"#{$_.chomp}\""; END{puts @arr.join(",")}' file
"10.1.2.200","10.1.2.201","10.1.2.202","10.1.2.203"
Here is another alternative:
sed 's/.*/"&"/' file | paste -sd,
"10.1.2.200","10.1.2.201","10.1.2.202","10.1.2.203"
awk -F'\n' -v RS="\0" -v OFS='","' -v q='"' '{NF--}$0=q$0q' file
should work for the given example: RS="\0" makes gawk read the whole file as a single record, -F'\n' turns each line into a field, NF-- drops the empty field left by the trailing newline and rebuilds $0 with OFS between the fields, and $0=q$0q then wraps the result in quotes; the assignment also acts as a true condition, so the line is printed.
Tested with gawk:
kent$ cat f
10.1.2.200
10.1.2.201
10.1.2.202
10.1.2.203
kent$ awk -F'\n' -v RS="\0" -v OFS='","' -v q='"' '{NF--}$0=q$0q' f
"10.1.2.200","10.1.2.201","10.1.2.202","10.1.2.203"
$ awk '{o=o (NR>1?",":"") "\""$0"\""} END{print o}' file
"10.1.2.200","10.1.2.201","10.1.2.202","10.1.2.203"
I'm trying to extract headers from emails and create a JSON fragment from them. I'm using sed to pull out the keys and values, but it's failing to put the trailing quote on each of the lines:
$ cat email1 | grep -i -e "^subject:" -e "^from:" -e "^to:" | \
  sed -n 's/^\([^:]*\):[ ]*\(.*\)$/"\1":"\2"/gp'
"From":"Blah Blech <blah.blech@blahblech.com>
"To":"foo@bar.com
"Subject":"Yeah
I don't understand why the replacement pattern isn't working.
awk to the rescue!
$ awk -F": *" -vOFS=":" -vq="\"" 'tolower($0)~/^(from|to|subject):/{print q$1q,q$2q}' email1
which folds in the cat and grep steps as well.
Stripping the carriage returns as @tripleee suggested fixed the issue with sed (using ctrl-v ctrl-m to capture the literal carriage return):
$ cat email1 | tr -d '^M' | grep -i -e "^subject:" -e "^from:" -e "^to:" | \
sed -n 's/^\([^:]*\):[ ]*\(.*\)$/"\1":"\2"/gp'
"From":"Blah Blech <blah.blech#blahblech.com>"
"To":"foo#bar.com"
"Subject":"Yeah"
I have one file:
file.txt
101|aaa {rating=1, dept=10, date=10/02/2013, com=11}
106|bbb {rating=2, dept=11, date=10/03/2013, com=11}
103|vvv {rating=3, dept=12, date=10/03/2013, com=11}
102|aaa {rating=1, dept=10, date=10/04/2013, com=11}
109|bbb {rating=2, dept=11, date=10/05/2013, com=11}
104|bbb {rating=2, dept=11, date=10/07/2013, com=11}
I am grepping it based on:
for i in `cat file.txt | grep -i "|aaa "`
do
echo `echo $i|cut -d' ' -f1`"|" `sed -n '/date=/,/, com/p' $i` >> output.txt
done
This error occurs:
"/sysdate=/,/systime/p: No such file or directory"
Please help.
The output should be:
output.txt
101|aaa|10/02/2013
102|aaa|10/04/2013
awk is way better for these cases:
$ awk -F"[ =,]" -v OFS="|" '/aaa/{print $1, $9}' a
101|aaa|10/02/2013
102|aaa|10/04/2013
This sets field separators to either space, = or , and fetches the first and 9th fields, whenever the text aaa is found in the line.
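If you'd rather not count fields, a variant of the same idea (my sketch, assuming the same file layout) looks up the date key by name instead of by position:
$ awk -F'[ =,]+' -v OFS='|' '/\|aaa /{for(i=1;i<NF;i++) if($i=="date") print $1, $(i+1)}' file.txt
101|aaa|10/02/2013
102|aaa|10/04/2013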
I am missing something subtle. I tried running the below command but it didn't work. Can you please help?
ls | awk '{ split($1,a,".gz")} {cp " " $1 " " a[1]".gz"}'
However, when I try to print it, it does show the copy command.
ls | awk '{ split($1,a,".gz")} {print "cp" " " $1 " " a[1]".gz"}'
Not sure where the problem is. Any pointers would be helpful.
To summarize some of the comments and point out what's wrong with the first example:
ls | awk '{ split($1,a,".gz")} {cp " " $1 " " a[1]".gz"}'
^ unassigned variable
The cp defaults to "" and is not treated as the program cp. If you do the following in a directory with one file, test.gz_monkey, you'll see why:
ls | awk '{split($1,a,".gz"); cmd=cp " " $1 " " a[1] ".gz"; print ">>" cmd "<<" }'
results in
>> test.gz_monkey test.gz<<
^ the space here is because cp was "" when cmd was assigned
Notice that you can separate statements with a ; instead of having two action blocks. awk does support running external commands: one way is system(), another is reading from a command with getline. With the following changes, your concept can work:
ls | awk '{split($1,a,".gz"); cmd="cp "$1" "a[1]".gz"; system(cmd) }'
^ notice cp has moved inside a string
Another thing to note: ls isn't a good choice for reliably listing only files in the current directory. Instead, try find:
find . -type f -name "*.gz_*" | awk '{split($1,a,".gz"); cmd="cp "$1" "a[1]".gz"; system(cmd) }'
Personally, though, I think something like the following is more readable:
find . -type f -name "*.gz_*" | awk '{split($1,a,".gz"); system(sprintf( "cp %s %s.gz", $1, a[1])) }'
Why are you using awk at all? Try:
for f in *; do cp "$f" "${f%.gz*}.gz"; done
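To see what the parameter expansion does, a quick check with the example file name used in the earlier answer:
$ f=test.gz_monkey
$ echo "${f%.gz*}.gz"
test.gz
${f%.gz*} strips the shortest suffix matching .gz* (here .gz_monkey), and the .gz is then re-appended.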