AWK : Unzipping and printing File Name and first line - unix

I am trying to unzip files in a folder and print the first line containing LASTMODIFIEDDATE.
But the code below prints the first line with '-' instead of the file name.
for file in /export/home/xxxxxx/New_folder/*.gz;
do
gzip -dc "$file" | awk 'NR=1 {print $0, FILENAME}' | awk '/LASTMODIFIEDDATE/'
done
1. How can I modify the above code to print the name of the file that is unzipped?
2. I am a beginner, and suggestions to improve the above code are welcome.

A few issues:
Your first awk should have double equals signs if you mean to address the first line:
awk 'NR==1{...}'
Your second awk only ever sees the output of the first awk, which prints only the first line, so you will never see lines containing LASTMODIFIED unless they happen to be first. The loop below shows the first line and any lines containing LASTMODIFIED:
for ...
do
echo "$file"
gzip -dc "$file" | awk 'NR==1 || /LASTMODIFIED/'
done
Or you may mean this:
for ...
do
gzip -dc "$file" | awk -v file="$file" 'NR==1{print $0 " " file} /LASTMODIFIED/'
done
which will print the first line followed by the filename and also any lines containing LASTMODIFIED.

Print the file name with an echo before decompressing. Also, you might want to use grep instead of awk in this case.
for file in /export/home/xxxxxx/New_folder/*.gz;
do
echo "$file"
gzip -dc "$file" | grep LASTMODIFIEDDATE
done
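If you want each matching line tagged with the file it came from, a small variation of the same idea works (just a sketch, reusing the path from the question):
for file in /export/home/xxxxxx/New_folder/*.gz; do
    gzip -dc "$file" | awk -v f="$file" '/LASTMODIFIEDDATE/ {print f ": " $0}'
done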

Related

AWK or bash script to get the rows of a file where the specific column is equal to the given variable [duplicate]

I found some ways to pass external shell variables to an awk script, but I'm confused about ' and ".
First, I tried with a shell script:
$ v=123test
$ echo $v
123test
$ echo "$v"
123test
Then I tried awk:
$ awk 'BEGIN{print "'$v'"}'
$ 123test
$ awk 'BEGIN{print '"$v"'}'
$ 123
Why the difference?
Lastly I tried this:
$ awk 'BEGIN{print " '$v' "}'
$ 123test
$ awk 'BEGIN{print ' "$v" '}'
awk: cmd. line:1: BEGIN{print
awk: cmd. line:1: ^ unexpected newline or end of string
I'm confused about this.
Getting shell variables into awk may be done in several ways. Some are better than others. This should cover most of them.
Using -v (The best way, most portable)
Use the -v option: (P.S. use a space after -v or it will be less portable. E.g., awk -v var= not awk -vvar=)
variable="line one\nline two"
awk -v var="$variable" 'BEGIN {print var}'
line one
line two
This should be compatible with most awk implementations, and the variable is available in the BEGIN block as well.
If you have multiple variables:
awk -v a="$var1" -v b="$var2" 'BEGIN {print a,b}'
Warning: as Ed Morton writes, escape sequences will be interpreted, so \t becomes a real tab and not a literal \t, if that is what you are searching for. This can be avoided by using ENVIRON[] or by accessing the value via ARGV[].
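For example (a small illustration of that behaviour; var is just a placeholder name):
awk -v var='a\tb' 'BEGIN {print var}'            # \t is interpreted, prints a real tab between a and b
var='a\tb' awk 'BEGIN {print ENVIRON["var"]}'    # prints the literal string a\tb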
PS: if you have a vertical bar or other regexp metacharacters as a separator, like |?( etc., they must be double escaped. For example, three vertical bars ||| becomes -F'\\|\\|\\|'. You can also use -F"[|][|][|]".
Example of getting data from a program/function into awk (here date is used):
awk -v time="$(date +"%F %H:%M" -d '-1 minute')" 'BEGIN {print time}'
Example of testing the contents of a shell variable as a regexp:
awk -v var="$variable" '$0 ~ var{print "found it"}'
Variable after code block
Here we get the variable after the awk code. This will work fine as long as you do not need the variable in the BEGIN block:
variable="line one\nline two"
echo "input data" | awk '{print var}' var="${variable}"
or
awk '{print var}' var="${variable}" file
Adding multiple variables:
awk '{print a,b,$0}' a="$var1" b="$var2" file
In this way we can also set a different field separator (FS) for each file:
awk 'some code' FS=',' file1.txt FS=';' file2.ext
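For instance, printing the first field of a comma-separated file and then a semicolon-separated file in one pass (the file names are made up here):
awk '{print $1}' FS=',' file1.csv FS=';' file2.csv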
Variable after the code block will not work for the BEGIN block:
echo "input data" | awk 'BEGIN {print var}' var="${variable}"
Here-string
A variable can also be added to awk using a here-string, in shells that support them (including Bash):
variable="test"
awk '{print $0}' <<< "$variable"
test
This is the same as:
printf '%s' "$variable" | awk '{print $0}'
P.S. this treats the variable as a file input.
ENVIRON input
As TrueY writes, you can use ENVIRON to print environment variables.
Export a variable before running awk and you can print it out like this:
X=MyVar
export X
awk 'BEGIN{print ENVIRON["X"],ENVIRON["SHELL"]}'
MyVar /bin/bash
ARGV input
As Steven Penny writes, you can use ARGV to get the data into awk:
v="my data"
awk 'BEGIN {print ARGV[1]}' "$v"
my data
To get the data into the body of the code, not just the BEGIN block:
v="my data"
echo "test" | awk 'BEGIN{var=ARGV[1];ARGV[1]=""} {print var, $0}' "$v"
my data test
Variable within the code: USE WITH CAUTION
You can use a variable within the awk code, but it's messy and hard to read, and as Charles Duffy points out, this version may also be a victim of code injection. If someone adds bad stuff to the variable, it will be executed as part of the awk code.
This works by expanding the shell variable inside the awk code, so it becomes part of the code itself.
If you want to make an awk script that changes dynamically with the use of variables, you can do it this way, but DO NOT use it for normal variables.
variable="line one\nline two"
awk 'BEGIN {print "'"$variable"'"}'
line one
line two
Here is an example of code injection:
variable='line one\nline two" ; for (i=1;i<=1000;++i) print i"'
awk 'BEGIN {print "'"$variable"'"}'
line one
line two
1
2
3
.
.
1000
You can add lots of commands to awk this way. You can even make it crash with invalid commands.
One valid use of this approach, though, is when you want to pass a symbol to awk to be applied to some input, e.g. a simple calculator:
$ calc() { awk -v x="$1" -v z="$3" 'BEGIN{ print x '"$2"' z }'; }
$ calc 2.7 '+' 3.4
6.1
$ calc 2.7 '*' 3.4
9.18
There is no way to do that using an awk variable populated with the value of a shell variable; you NEED the shell variable to expand to become part of the text of the awk script before awk interprets it. (See the comment below by Ed M.)
Extra info:
Use of double quote
It's always good to double quote a variable as "$variable".
If you don't, multiple lines will be joined into one long single line.
Example:
var="Line one
This is line two"
echo $var
Line one This is line two
echo "$var"
Line one
This is line two
Other errors you can get without double quotes:
variable="line one\nline two"
awk -v var=$variable 'BEGIN {print var}'
awk: cmd. line:1: one\nline
awk: cmd. line:1: ^ backslash not last character on line
awk: cmd. line:1: one\nline
awk: cmd. line:1: ^ syntax error
And with single quotes, the value of the variable is not expanded:
awk -v var='$variable' 'BEGIN {print var}'
$variable
More info about AWK and variables
Read this faq.
It seems that the good-old ENVIRON awk built-in hash is not mentioned at all. An example of its usage:
$ X=Solaris awk 'BEGIN{print ENVIRON["X"], ENVIRON["TERM"]}'
Solaris rxvt
You could pass in the command-line option -v with a variable name (v) and a value (=) taken from the environment variable ("${v}"):
% awk -vv="${v}" 'BEGIN { print v }'
123test
Or to make it clearer (with far fewer vs):
% environment_variable=123test
% awk -vawk_variable="${environment_variable}" 'BEGIN { print awk_variable }'
123test
You can utilize ARGV:
v=123test
awk 'BEGIN {print ARGV[1]}' "$v"
Note that if you are going to continue into the body, you will need to adjust
ARGC:
awk 'BEGIN {ARGC--} {print ARGV[2], $0}' file "$v"
I just adapted @Jotne's answer for use in a for loop:
for i in `seq 11 20`; do host myserver-$i | awk -v i="$i" '{print "myserver-"i" " $4}'; done
I had to insert the date at the beginning of the lines of a log file, and it's done like below:
DATE=$(date +"%Y-%m-%d")
awk '{ print "'"$DATE"'", $0; }' /path_to_log_file/log_file.log
The output can be redirected to another file to save it.
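For example, combining it with the -v approach recommended above and redirecting the result (dated_log_file.log is just an illustrative name):
DATE=$(date +"%Y-%m-%d")
awk -v d="$DATE" '{print d, $0}' /path_to_log_file/log_file.log > /path_to_log_file/dated_log_file.log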
Pro Tip
It could come in handy to create a function that handles this so you don't have to type everything every time. Using the selected solution we get:
awk_switch_columns() {
    awk -v a="$1" -v b="$2" '{ t = $a; $a = $b; $b = t; print }'
}
And use it as...
echo 'a b c d' | awk_switch_columns 2 4
Output:
a d c b

Download and change filename to a list of urls in a txt file

Let's say I have a .txt file containing a list of image links that I want to download.
Example:
image.jpg
image2.jpg
image3.jpg
I use: cat images.txt | xargs wget and it works just fine
What I want to do now is to provide another .txt file with the following format:
some_id1:image.jpg
some_id2:image2.jpg
some_id3:image3.jpg
What I want to do is split each line on the ':', download the link on the right, and rename the downloaded file with the id provided on the left.
I want to somehow use wget image.jpg -O some_id1.jpg
So the output will be:
some_id1.jpg
some_id2.jpg
some_id3.jpg
Any ideas?
My go-to for such tasks is awk:
while read line; do lfn=$(echo "$line" | awk -F":" '{ print $1".jpg" }'); rfn=$(echo "$line" | awk -F":" '{ print $2 }'); wget "$rfn" -O "$lfn"; done < images.txt
This presumes, of course, all the local file names should have the .jpg extension.
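A leaner variant under the same assumption, letting the shell split each line on ':' instead of calling awk twice per line:
while IFS=: read -r id url; do
    wget -O "${id}.jpg" "$url"
done < images.txt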

need some help on awk command

I need help with awk. I am reading a CSV file and doing some substitution on some of the columns. The 9th column (string type) should be replaced by the value of (the 9th column itself + the value of the 4th column), then the 15th column by $15+$12, and the 26th column by $26+$23. The same has to be done line by line for all the records. Suggestions please.
Below is the sample I/O. The first line, which is the description header, must be left as is.
Sample input
EmpID|Empname|Empadd|roleId|roleDesc|Dept
100|mst|Del|20|SD|DA
101|ms|Del|21|XS|DA
Sample output
EmpID|Empname|Empadd|roleId|roleDesc|Dept
100|mst100|Del|20|SD20|DA
101|ms101|Del|21|XS21|DA
It's like empname has been concatenated with empid, and the role desc with roleId. Hope that's helpful :)
This will perform the needed transformation:
$ awk 'NR>1{$2=$2$1;$5=$5$4}1' FS='|' OFS='|' file
EmpID|Empname|Empadd|roleId|roleDesc|Dept
100|mst100|Del|20|SD20|DA
101|ms101|Del|21|XS21|DA
If you have to do this for many columns you can use a for loop like so (provided an arithmetic or geometric step size):
$ awk 'NR>1{for(i=2;i<=5;i+=3)$i=$i$(i-1)}1' FS='|' OFS='|' file
EmpID|Empname|Empadd|roleId|roleDesc|Dept
100|mst100|Del|20|SD20|DA
101|ms101|Del|21|XS21|DA
When you say +, I'm assuming you mean string concatenation. In awk, there is no specific concatenation operator; you just put two strings side by side.
awk -F, -v OFS=, '{$9 = $9 $4; $15=$15$12; $26=$26$23; print}' file.csv
Also assuming that by "csv", you actually mean comma-separated.
If you want to edit the file in-place, you need to do this:
awk ... file.csv > newfile && mv file.csv file.csv.bak && mv newfile file.csv
Edit: to leave the first line untouched:
awk -F, -v OFS=, 'NR>1 {$9 = $9 $4; $15=$15$12; $26=$26$23} {print}' file.csv
Now the columns are modified for the 2nd and subsequent lines, but every line is printed.
You'll sometimes see that written this way:
awk -F, -v OFS=, 'NR>1 {$9 = $9 $4; $15=$15$12; $26=$26$23} 1' file.csv

Combining awk and csum to hash a field

I have pipe-delimited text files that requires an MD5 hash of a particular field, or set of fields. Because I'm on AIX and have to use the csum function, I don't think I can simply pass the file and a hashing function to awk to do it in one fell swoop.
So I'm writing a script that reads through each line, passes the to-be-hashed field to csum, then drops the result back in as a replacement via a gsub. 99% of the time it seems to work OK, but sometimes something goes awry because the gsub replaces something it shouldn't.
#!/bin/ksh
rm $2 #Get rid of output file
while read line; do #loop through each line
MYFIELD=$(echo "$line" | cut -d "|" -f 6); #push the 6th field into a var
MYHASH=$(echo $MYFIELD | csum -h MD5 -); #csum will hash a string only on the stdin
echo $line | sed -e "s/$MYFIELD/${MYHASH}/g" >> $2 #gsub replaces, but not always what we want
done < $1 #read in the input file
I think instead I could use awk to update the field. But it's beyond me how to do that one line at a time. Ideally I would like to have a script that would allow me to pass two mandatory parameters (infile and outfile) and then any number of field positions that would get hashed and replaced. A la
foo infile.txt outfile.txt 2 6 12
Which would read in infile.txt, hash fields 2, 6, and 12, and write out to outfile.txt.
Your suggestions would be most appreciated
What about doing it with awk?
Instead of
echo $line | sed -e "s/$MYFIELD/${MYHASH}/g" >> $2 #gsub replaces, but not always what we want
You can use
old=$MYFIELD; new=$MYHASH; echo $line | awk -F"|" -v o="$old" -v n="$new" '{OFS=FS} sub(o, n, $6) {print}' >> $2
Basically what we do is:
old=$MYFIELD; new=$MYHASH We assign the parameters to be sent to awk.
echo $line We output the line so that awk can get it.
In awk,
-F"|" define | as field separator.
-v o="$old" and -v n="$new" let awk work with variables $old and $new naming them o and n respectively.
{OFS=FS} - define the delimiter between fields. It could also be OFS="|", but this way we indicate awk to use the same we defined on -F="|". It is more flexible to keep the field separator in case it changes.
sub(o, n, $6) replaces the text on variable o (that is, $MYFIELD) with text on variable v (that is, $MYHASH), but just on field 6.
print the whole line with substituted text
This worked for me in the example you gave on comments:
old="hashit"; new="WE_DID"; echo "donthashit|foo1|bar1|foo2|bar2|hashit" | awk -F"|" -v o="$old" -v n="$new" '{OFS=FS} sub(o,n,$6) {print}'
donthashit|foo1|bar1|foo2|bar2|WE_DID
Hope it helps.
Edit
I found a way to pass variables to awk easily: -v o=${variable_name}
This way, the solution can be:
echo $line | awk -F"|" -v o=${MYFIELD} -v n=${MYHASH} '{OFS=FS} sub(o, n, $6) {print}' >> $2
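For completeness, here is a rough sketch of the multi-field script the question asks for (foo infile.txt outfile.txt 2 6 12). It is only a sketch and makes assumptions: that csum -h MD5 - prints the hash as the first whitespace-separated word of its output, and that the hashed fields contain no double quotes or backslashes (those would break the command string built below):
#!/bin/ksh
# usage: foo infile outfile field1 [field2 ...]
infile=$1; outfile=$2; shift 2
awk -F'|' -v OFS='|' -v fields="$*" '
{
    n = split(fields, f, " ")
    for (i = 1; i <= n; i++) {
        cmd = "printf %s \"" $(f[i]) "\" | csum -h MD5 -"   # hash the field via stdin
        cmd | getline out
        close(cmd)
        split(out, parts, " ")
        $(f[i]) = parts[1]                                   # assume the hash is the first word
    }
    print
}' "$infile" > "$outfile"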

How to keep a file's format if you use the uniq command (in shell)?

In order to use the uniq command, you have to sort your file first.
But in the file I have, the order of the information is important, thus how can I keep the original format of the file but still get rid of duplicate content?
Another awk version:
awk '!_[$0]++' infile
This keeps the first occurrence of each line. It is the same algorithm the other answers use, written more explicitly:
awk '!($0 in lines) { print $0; lines[$0]; }'
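A quick demonstration (the input lines are made up):
printf 'b\na\nb\nc\na\n' | awk '!seen[$0]++'
b
a
c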
Here's one that only needs to store duplicated lines (as opposed to all lines) using awk:
sort file | uniq -d | awk '
FNR == NR { dups[$0] }
FNR != NR && (!($0 in dups) || !lines[$0]++)
' - file
There's also the "line-number, double-sort" method.
nl -n ln file | sort -u -k 2 | sort -k 1n | cut -f 2-
You can run uniq -d on the sorted version of the file to find the duplicate lines, then run some script that says:
if this_line is in duplicate_lines {
if not i_have_seen[this_line] {
output this_line
i_have_seen[this_line] = true
}
} else {
output this_line
}
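A sketch of that in awk, along the same lines as the two-pass answer above (dups.txt is just a name for the uniq -d output, assumed non-empty):
sort file | uniq -d > dups.txt
awk '
    NR == FNR { dup[$0]; next }       # first file: remember the duplicated lines
    !($0 in dup)                      # not a duplicate: always print
    ($0 in dup) && !seen[$0]++        # duplicate: print only its first occurrence
' dups.txt file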
Using only uniq and grep:
Create d.sh:
#!/bin/sh
sort "$1" | uniq > "${1}_uniq"
while IFS= read -r line; do
    grep -m1 -F -x -- "$line" "${1}_uniq" >> "${1}_out"
    grep -v -F -x -- "$line" "${1}_uniq" > "${1}_uniq2"
    mv "${1}_uniq2" "${1}_uniq"
done < "$1"
rm "${1}_uniq"
Example:
./d.sh infile
You could use some horrible O(n^2) thing, like this (Pseudo-code):
file2 = EMPTY_FILE
for each line in file1:
if not line in file2:
file2.append(line)
This is potentially rather slow, especially if implemented at the Bash level. But if your files are reasonably short, it will probably work just fine, and would be quick to implement (not line in file2 is then just a grep that finds nothing, and so on).
Otherwise you could of course code up a dedicated program, using some more advanced data structure in memory to speed it up.
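At the shell level that might look roughly like this (a sketch; file1 is the input and file2 the de-duplicated output):
: > file2                                # start with an empty output file
while IFS= read -r line; do
    if ! grep -q -F -x -- "$line" file2; then
        printf '%s\n' "$line" >> file2
    fi
done < file1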
sort file1 | uniq | while IFS= read -r line; do
    grep -n -m1 -F -x -- "$line" file1 >> out
done
sort -n out
First do the sort,
then for each unique value grep for the first match (-m1)
and preserve the line numbers,
then sort the output numerically (-n) by line number.
You could then remove the line numbers with sed, awk, or cut.
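For example, with cut (stripping the number and colon that grep -n added):
sort -n out | cut -d: -f2-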
