I have 10 devices running HP-UX and I want to check the disk space on each of them.
My requirement is that if the space used is more than 90%, the device name and space usage should be saved to a log.
This is the list of devices and IP addresses, which I keep in a file named ipadd:
lo1 100.45.32.43
lot2 100.45.32.44
lot3 100.45.32.44
lot4 100.45.32.45
lot5 100.45.32.46
and so on..
This is my script so far:
#!/bin/csh -f
set ipaddress = (`awk '{print $2}' "ipadd"`)
set device = (`awk '{print $1}' "ipadd"`)
@ j = 1
while ($j <= $#ipaddress)
echo $ipaddress
set i = 90 # Threshold set at 90%
set max = 100
while ($i <= $max)
rsh $ipaddress[$j] bdf | grep /dev/vg00 | grep $i% \
|awk '{ file=substr($6,index($6,"/") + 1,length($6)); print "WARNING: $device[$j]:/" file " has reached " $5 ". Perform HouseKeeping IMMEDIATELY..." >> "/scripts/space." file ".file"}'
@ i++
end
@ j++
end
The output after bdf:
/dev/vg00/lvol2 15300207 10924582 28566314 79% /
/dev/vg00/lvol4 42529 23786 25510 55% /stand
The output at the terminal after executing the script:
100.45.32.43
100.45.32.44
The output at .file:
WARNING: $device[$j]:/ has reached 79%. Perform HouseKeeping IMMEDIATELY...
My question is: is there something wrong with my looping? It seems to iterate only once, because the .file output shows only one device.
And why does $device[$j] not come out in the .file output?
Or does awk have a problem?
Thank you for the advice.
Your code tested for each possible percentage between 90 and 100.
Presumably, you'd be OK with code that checks once and asks 'is the device's usage greater than 90%?'. Then you don't need the inner loop at all, and you make only one connection per machine. Try:
#!/bin/csh -f
set ipaddress = (`awk '{print $2}' "ipadd"`)
set device = (`awk '{print $1}' "ipadd"`)
@ j = 1
set i = 90 # Threshold set at 90%
while ($j <= $#ipaddress)
echo $ipaddress
echo "#dbg: ipaddress[$j]=${ipaddress[$j]}"
rsh $ipaddress[$j] bdf \
| awk -v thresh="$i" -v dev="$device[$j]" \
'/\/dev\/vg00/ { \
sub(/%/,"",$5); \
if ($5 > thresh) { \
file=substr($6,index($6,"/") + 1,length($6)); \
print "WARNING: " dev ":/" file " has reached " $5 ". Perform HouseKeeping IMMEDIATELY..." >> "/scripts/space." file ".file" \
}\
}'
@ j++
end
Sorry, but I don't have a csh available to double-check for syntax errors.
So here is a one-liner that we determined worked in your environment:
rsh $ipaddress[$j] bdf | nawk -v thresh="$i" -v dev="$device[$j]" '/\/dev\/vg00/ { sub(/%/,"",$5) ; if ($5 > thresh) { file=substr($6,index($6,"/") + 1,length($6));print "#dbg:file="file; print "WARNING: " dev ":/" file " has reached " $5 ". Perform HouseKeeping IMMEDIATELY..." >> "/scripts/space.file.TMP" } }'
I don't have a system with bdf available. Change the two references to $5 in the sub() and if test to match the field-number of the output that has the percentage you want to test.
Note that -v var="value" is the standard way to pass a variable value from the shell to an awk script that is enclosed in single-quotes.
Be careful that any '\' chars at the end of a line are the last chars, no trailing space or tabs, or you'll get an indecipherable error msg. ;-)
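For example, here is a minimal stand-alone sketch (not part of the original script; thresh is a made-up shell variable) contrasting the two ways of getting a shell value into awk:
set thresh = 75
# substitution inside the quotes: the shell pastes 75 into the program text
echo "82%" | awk '{ sub(/%/,""); if ($1 > '$thresh') print "over" }'
# -v: awk receives the value in its own variable t, which is easier to quote and read
echo "82%" | awk -v t="$thresh" '{ sub(/%/,""); if ($1 > t) print "over" }'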
IHTH
Related
I have got a Unix (AIX) command which includes a small awk script. It works, and here it is...
ps -eaf | awk 'ARGIND == 1 {$pids[$0] = 1} ARGIND > 1 {if ($2 in pids) printf("%s\n",$0)}' /home/richard/myFile.flg -
When I run this command from a different box using ssh it doesn't work.
ssh myuser@myOtherBox ps -eaf | awk 'ARGIND == 1 {$pids[$0] = 1} ARGIND > 1 {if ($2 in pids) printf("%s\n",$0)}' /home/richard/myFile.flg -
I've worked out that I need to quote the awk script and escape some characters in the awk command, but I can't get the escapes right.
Would someone please help me with quoting the awk part of the script and escaping what is required?
thanks
What happens when you execute
ssh myuser@myOtherBox ps -eaf | ...
is that ps -eaf is run on the other box, and the output is returned; ssh then writes the output it receives to its own stdout, which is (locally) redirected through the command ...; in this case, an awk command.
Unfortunately, (I assume) /home/richard/myFile.flg is on the remote machine and not the local machine, so the awk command fails.
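A quick way to see the difference (same hypothetical host as above; the grep is just an arbitrary example command):
ssh myuser@myOtherBox ps -eaf | grep sshd        # grep runs locally, on the output ssh brings back
ssh myuser@myOtherBox 'ps -eaf | grep sshd'      # the whole pipeline runs on the remote box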
To get the whole thing to run on the remote machine, you need to provide it as a single argument; one way which doesn't require much quoting effort is to use a here-doc:
ssh myuser@myOtherBox "$(cat <<"END"
ps -eaf |
awk 'ARGIND == 1 {pids[$0] = 1}
ARGIND > 1 {if ($2 in pids) printf("%s\n",$0)}' \
/home/richard/myFile.flg -
END
)"
Note that printf("%s\n",$0) is really just a complicated way of writing print, so you could simplify the remote command quite a bit. But you would still need to deal with the single quotes in the awk command:
ssh myuser@myOtherBox '
ps -eaf |
awk '"'"'ARGIND == 1 {pids[$0] = 1; next}
$2 in pids {print}'"'"' \
/home/richard/myFile.flg -'
To understand '"'"', you need to break it into pieces:
' close '-quoted string
"'" A (quoted) '
' open another '-quoted string
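A simple way to check what the remote shell will actually receive is to echo the quoted argument locally first (just a throwaway sanity check, not part of the answer above):
echo 'awk '"'"'$2 in pids {print}'"'"' /home/richard/myFile.flg -'
which prints
awk '$2 in pids {print}' /home/richard/myFile.flg -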
In cases like this you need double escaping; this should work:
ssh myuser@myOtherBox "ps -eaf | awk \"ARGIND == 1 {\\\$pids[\\\$0] = 1} ARGIND > 1 {if (\\\$2 in pids) printf(\\\"%s\n\\\",\\\$0)}\" /home/richard/myFile.flg -"
If you can use bash's $'STRING' syntax, then things remain quite
readable; in this case one only has to escape the single-quotes and
backslashes:
$'ps -eaf |
awk \'
ARGIND == 1 {$pids[$0] = 1}
ARGIND > 1 {if ($2 in pids) printf("%s\\n",$0)}\' /home/richard/myFile.flg -'
I have the following line in a unix script:
head -1 $line | cut -c22-29 >> $file
I want to append this output with no newline, but rather with the values separated by commas. Is there any way to feed the output of this command to printf? I have tried:
head -1 $line | cut -c22-29 | printf "%s, " >> $file
I have also tried:
printf "%s, " head -1 $line | cut -c22-29 >> $file
Neither of those has worked. Anyone have any ideas?
You just want tr in your case
tr '\n' ','
will replace all the newlines ('\n') with commas
head -1 $line | cut -c22-29 | tr '\n' ',' >> $file
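If you specifically want printf, command substitution works too; this is a small sketch assuming $line and $file are set as in the question (the trailing newline from cut is stripped by the substitution):
printf '%s, ' "$(head -1 "$line" | cut -c22-29)" >> "$file"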
A very old topic, but even now I needed to do this (with only limited commands available), and the command suggested in the earlier reply didn't work for me due to its length.
Appending to a file can also be done using file descriptors:
touch file.txt (create a new blank file)
exec 100<> file.txt (open a new fd with id 100)
echo -n test >&100 (echo test to the new fd)
exec 100>&- (close the new fd)
Writing starting from a specific character position can be done by reading the file up to that point first, e.g.:
exec 100<> file.txt - open a new descriptor
read -n 4 <&100 - read 4 characters, which advances the offset
echo -n test >&100 - write test into the file starting after the fourth character
exec 100>&- - close the new fd
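Putting that together as a quick demonstration (bash syntax; file.txt is just a scratch file here):
printf 'ABCDEFGH' > file.txt
exec 100<> file.txt
read -n 4 <&100     # consume ABCD, leaving the shared offset at 4
echo -n test >&100  # overwrite EFGH with test
exec 100>&-
cat file.txt        # now contains ABCDtest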
I have a file in following format:
B: that
I: White
I: House
B: the
I: emergency
I: rooms
B: trauma
I: centers
What I need to do is read the file line by line from the top; if a line begins with B: then remove the B:.
If it begins with I: then remove the I: and join it to the previous line (the previous one having been processed by the same rule).
Expected Output:
that White House
the emergency rooms
trauma centers
What I tried:
while read line
do
    string=$line
    echo $string | grep "B:" 1>/dev/null
    if [ $? -eq 0 ]               # if it starts with " B: "
    then
        newstring=${string:4}     # cut the first 4 characters, including B: and the space
    fi
    echo $string | grep "I:" 1>/dev/null
    if [ $? -eq 0 ]               # if it starts with " I: "
    then
        newstring=${string:4}     # cut the first 4 characters, including I: and the space
    fi
done < file.txt
What I don't know is how to write the result back to the file and how to join a line to the previously processed one.
Using awk, print the second field of the I: and B: records. The variable first is used to control the newline output.
/B:/ searches for the B: pattern; this pattern marks the start of a group. If the group is NOT the first one, a newline is printed before the data ($2) is printed.
If the pattern found is I:, the data $2 (the second field, which follows the I:) is printed.
awk 'BEGIN{first=1}
/B:/ { if (first) first=0; else print ""; printf("%s ", $2); }
/I:/ { printf("%s ", $2) }
END {print ""}' filename
awk -F": " '
/^B:/ { if (line != "") print line; line=$2 }
/^I:/ { line=line" "$2 }
END   { if (line != "") print line }' Your_file
awk '/^B/ {printf "\n%s",$2} /^I/ {printf " %s",$2}' file
that White House
the emergency rooms
trauma centers
Shortened a bit:
awk '/./ {printf /^B/?"\n%s":" %s",$2}' file
There is an interesting solution using awk auto-split on RS patterns. Note that this is a bit sensitive to variations in the input format:
<infile awk 1 RS='(^|\n)B: ' | awk 1 RS='\n+I: ' ORS=' ' | grep -v '^ *$'
Output:
that White House
the emergency rooms
trauma centers
This works at least with GNU awk and Mike's awk (mawk).
This might work for you (GNU sed):
sed -r ':a;$!N;s/\n$//;s/\n\s*I://;ta;s/B://g;s/^\s*//;P;D' file
or:
sed -e ':a' -e '$!N' -e 's/\n$//' -e 's/\n\s*I://' -e 'ta' -e 's/B://g' -e 's/^\s*//' -e 'P' -e 'D' file
When I run the program below, I get no output, yet it keeps running forever until I kill it. Can someone please explain to me why this would happen? I am trying to get this complex awk statement to work, but have been very unsuccessful.
The code I am using in my C shell is (it's all on one line, but I split it here to make it easier to read):
awk '{split($2,b,""); counter = 1; while (counter < 13)
{if (b[counter] == 1 && "'$cmonth'" > counter)
{{printf("%s%s%s\n", $1, "'$letter'","'$year3'")}; counter++;
else if (b[counter] == 1 && "'$cmonth'" <= counter)
{{printf("%s%s%s\n", $1, "'$letter'","'$year2'")}; counter++;}
else echo "fail"}}' fileRead >> $year$month
The text file I am reading from looks like
fff 101010101010
yyy 100100100100
Here $year2 and $year3 represent counters that start from 1987 and go up 1 year for each line read.
$cmonth is just a month counter from 1–12.
$letter is just an ID.
The goal is for the program to read each line and print out the ID, month, and year if the position in the byte code is 1.
You have some mismatched curly braces; I have reformatted the code to one consistent standard of indentation.
awk '{ \
split($2,b,""); counter = 1 \
while (counter < 13) { \
if (b[counter] == 1 && "'$cmonth'" > counter){ \
printf("%s%s%s\n", $1, "'$letter'","'$year3'") \
counter++ \
} \
else if (b[counter] == 1 && "'$cmonth'" <= counter) { \
printf("%s%s%s\n", $1, "'$letter'","'$year2'") \
counter++ \
} \
else print "fail" \
} # while \
}' fileRead >> $year$month
Also, awk doesn't support echo.
Make sure that the \ is the LAST char on the line (no space or tab chars!!!), or you'll get a syntax error.
Else, you can 'fold' all of the lines up into one line, adding the occasional ';' as needed.
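For example, folded onto one line it would look roughly like this (a sketch only; I have not run it under csh):
awk '{split($2,b,""); counter = 1; while (counter < 13) { if (b[counter] == 1 && "'$cmonth'" > counter) { printf("%s%s%s\n", $1, "'$letter'","'$year3'"); counter++ } else if (b[counter] == 1 && "'$cmonth'" <= counter) { printf("%s%s%s\n", $1, "'$letter'","'$year2'"); counter++ } else print "fail" }}' fileRead >> $year$month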
edit
OR you can take the previous version of this awk script (without the \ line-continuation chars), put it in a file (dropping everything outside of the single quotes) and run it with awk -f. You'll also need to change it so that you can pass in the variables cmonth, letter, year2, year3 and any others that I've missed.
Save it as a file.
Edit the file: remove any \ chars and change all vars like "'$letter'" to letter.
Call the program like:
awk -v letter="$letter" -v year2="$year2" -v year3="$year3" -v cmonth="$cmonth" -f myScript fileRead >> $year$month
for example
printf("%s%s%s\n", $1, "'$letter'","'$year2'")
becomes
printf("%s%s%s\n", $1, letter,year2)
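For reference, the whole script file (myScript in the command above) might then look like the sketch below. Note that I have also moved counter++ out of the branches: as posted, the loop never advances when a bit is 0, which is presumably why your program runs forever. I have dropped the else print "fail", which would otherwise print for every 0 bit.
# myScript -- expects cmonth, letter, year2 and year3 to be passed in with -v
{
    split($2, b, "")
    counter = 1
    while (counter < 13) {
        if (b[counter] == 1 && cmonth > counter)
            printf("%s%s%s\n", $1, letter, year3)
        else if (b[counter] == 1 && cmonth <= counter)
            printf("%s%s%s\n", $1, letter, year2)
        counter++
    }
}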
IHTH.
This was an interview question, but it is nevertheless still a programming question.
I have a Unix file with two columns, name and score. I need to display the count of each score.
like
jhon 100
dan 200
rob 100
mike 100
the output should be
100 3
200 1
You only need built-in Unix utilities to solve it, so I am assuming shell scripts, regexes, or plain Unix commands.
I understand looping would be one way to do it: store all the values you have already seen and then grep every record for unseen values. Is there any other, more efficient way of doing it?
Try this:
cut -d ' ' -f 2 < /tmp/foo | sort -n | uniq -c \
| (while read n v ; do printf "%s %s\n" "$v" "$n" ; done)
The initial cut could be replaced with another while read loop, which would be more resilient to variations in the input file format (extra whitespace). If some of the names consist of several words, simple field extraction will not work as easily, but sed can do it.
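For instance, a sketch of that while read variant (assuming whitespace-separated name/score pairs, as in the example):
while read -r name score
do
    echo "$score"
done < /tmp/foo | sort -n | uniq -c \
| while read -r n v ; do printf "%s %s\n" "$v" "$n" ; done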
Otherwise, use your favorite programming language. Perl would probably shine. It is not difficult either in Java or even in C or Forth.
$ cat foo.txt
jhon 100
dan 200
rob 100
mike 100
$ awk '{print $2}' foo.txt | sort | uniq -c
3 100
1 200
It's a pity you can't do a count with sort or uniq alone.
Edit: I just noticed I have the count in front ... to get it exactly the same you can do:
$ awk '{print $2}' foo.txt | sort | uniq -c | awk '{ print $2 " " $1 }'
Not very complicated in perl:
#!/usr/bin/perl -w
use strict;
use warnings;
my %count = ();
while (<>) {
chomp;
my ($name, $score) = split(/ /);
$count{$score}++;
}
foreach my $key (sort keys %count) {
print "$key ", $count{$key}, "\n";
}
You could go with awk:
awk '{ a[$2]++ } END { for (x in a) print x, a[x] }' record_file.txt
Alternatively with shell commands:
for i in `awk '{print $2}' inputfile | sort -u`
do
echo -n "$i "
grep $i inputfile | wc -l
done
The first awk command produces a list of all the different scores (e.g. 100 and 200), which the for loop then iterates over, counting each one separately. Not super efficient, but simple. If the file is not too big, it should not be a problem.