Get last five minutes load average using ksh with uptime - unix

To get an idea of the CPU load average, I'm using uptime in a ksh script:
uptime | awk '{print $11}' | sed '$s/.$//' | read CPU
where I then use the variable CPU later.
The $11 part isolates the five-minute load average. But I noticed today that this was not working: the five-minute value was now in $9, because uptime was printing fewer fields. The machine had recently been rebooted, so uptime was showing minutes since reboot instead of days and minutes.
Is there a way I can consistently get only the last five minutes part of uptime?

cut -d ' ' -f2 /proc/loadavg
/proc/loadavg is the source of the data shown by uptime, w and others.
It has a simpler format, and the numbers always have a dot before the decimal part (uptime and friends use the current locale, so you may find something like
load average: 0,18, 0,26, 0,30
which is harder to parse).
Plus, it's faster, by an incredibly small factor! ;-)
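On Linux you can therefore skip uptime entirely. A minimal sketch (the parsing is demonstrated against a canned sample line, so it runs even where /proc is absent; on a real Linux box replace the sample with `cat /proc/loadavg`):

```shell
# /proc/loadavg fields: 1min 5min 15min running/total last_pid
sample="0.18 0.26 0.30 1/123 4567"        # stand-in for $(cat /proc/loadavg)
load5=$(printf '%s\n' "$sample" | cut -d ' ' -f2)
echo "$load5"                             # 0.26
```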

Try splitting away the text before "load average", and then use awk on the remaining part.
uptime | sed 's/.*load average: //' | awk -F\, '{print $2}'

It might be simpler to read the 2nd to last field rather than the 9th or the 11th:
uptime | awk '{print $(NF-1)}' FS=,
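This is robust against the reboot problem, because the load averages are always the last three comma-separated fields no matter how the uptime part is formatted. A quick check against both output shapes (the sample lines here are fabricated for illustration):

```shell
long=" 12:30:01 up 10 days,  2:15,  3 users,  load average: 0.10, 0.20, 0.30"
short=" 12:30:01 up 5 min,  1 user,  load average: 0.10, 0.20, 0.30"
# $(NF-1) is the 5-minute value in both cases; tr strips the leading blank
printf '%s\n' "$long"  | awk '{print $(NF-1)}' FS=, | tr -d ' '   # 0.20
printf '%s\n' "$short" | awk '{print $(NF-1)}' FS=, | tr -d ' '   # 0.20
```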

This little shell function should work with bash or ksh(93)
function loadavg {
typeset minutes=$1 t1=$(uptime)
echo ${t1#*load average: } | (
IFS=', ' && read L1 L5 L15
case $minutes in
(1) echo $L1;;
(5) echo $L5;;
(15) echo $L15;;
("") echo $L1 $L5 $L15;;
(*) echo "usage: loadavg [ 1 | 5 | 15 ]" 1>&2
esac
)
}
Explanation:
This code uses IFS to split the string after "load average: " into three fields.
'typeset' and the subshell isolate the function variables from other shell variables.
The following simplifies the result, and just returns the answer to the original question:
function load5 {
typeset t1=$(uptime)
echo ${t1#*load average: } | (
IFS=', ' && read L1 L5 L15
echo $L5
)
}
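The prefix-stripping and IFS-splitting steps that these functions rely on can be exercised in isolation on a canned uptime string (the value here is made up for illustration):

```shell
# canned uptime output, fabricated for the example
t1="12:30 up 5 min, 1 user, load average: 0.10, 0.20, 0.30"
rest=${t1#*load average: }          # shortest-match prefix removal
IFS=', ' read L1 L5 L15 <<EOF
$rest
EOF
echo "$L5"                          # 0.20
```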

This could give you the best result; I am using it to get my load average every 5 minutes:
$ uptime | awk '{ print $11 }' | tr -d ','
(Note that this relies on the field position $11, which is exactly what breaks after a reboot, as described in the question.)

So, I was having trouble writing one that worked in both Linux and Mac OS X. After much fighting I came up with this:
uptime | sed 's/.*load average[s]*://' | awk '{print $3}'
Hope this is helpful for someone.

Related

Introducing wait time in UNIX shell script for every 'n' executions

I have 2 files: file.dat and mapping.dat.
file.dat contains entries (can contain duplicate), and mapping.dat is a static file which contains the entries and their corresponding session/job names separated by a comma.
I have developed a simple UNIX Shell script which runs in loop where for each entry in file.dat, it searches for the session/job name in mapping.dat and displays the output.
Content of file.dat
ekm
ckm
cnc
cnx
ekm
hnm
dam
Content of mapping.dat
#Entry,Job_Name#
ckm,BDSCKM
cnc,BDSCNC
cnx,BDSCNX
ekm,BDSEKM
azm,BDSAZM
bam,BDSBAM
cam,BDSCAM
oid,BDSOID
hnm,BDSHNM
dam,BDSDAM
Current Script:
#!/bin/ksh
for FILE in `cat file.dat`
do
SESSION=`grep $FILE mapping.dat | cut -d, -f2`
echo "For ${FILE}, Session launched is: ${SESSION} "
done
Current output
For ekm, Session launched is: BDSEKM
For ckm, Session launched is: BDSCKM
For cnc, Session launched is: BDSCNC
For cnx, Session launched is: BDSCNX
For ekm, Session launched is: BDSEKM
For hnm, Session launched is: BDSHNM
For dam, Session launched is: BDSDAM
My question is I want to introduce a wait/sleep time for every 2 occurrences of output i.e. it should first display
For ekm, Session launched is: BDSEKM
For ckm, Session launched is: BDSCKM
wait for 90 seconds, and then
For cnc, Session launched is: BDSCNC
For cnx, Session launched is: BDSCNX
..and so on
Try helping yourself using the modulo operator %.
#!/bin/ksh
count=1
for file in $( cat file.dat )
do
session=$( grep $file mapping.dat | cut -d, -f2 )
echo "For ${file}, session launched is ${session}."
if (( count % 2 == 0 ))
then
sleep 90
fi
(( count++ ))
done
Here's how I would do it. I added a message for unrecognised items (you can safely remove that by deleting || session='UNRECOGNIZED' from the first line of the while read loop). I'm not overly familiar with ksh, but I believe read behaves the same as in bash in this context (and I'm very familiar with bash).
I tested with your example data, and it works on both ksh and bash.
#!/bin/ksh
# Print 2 mappings every 90 seconds
FILE="./file.dat"
MAP="./mapping.dat"
while IFS= read -r line; do
session=$(grep "$line" "$MAP") || session='UNRECOGNIZED'
echo "For $line, session launched is: ${session#*,}"
((count++ % 2)) && sleep 90
done < "$FILE"
I used shortest-match prefix removal (${session#*,}) to isolate the session name.
To print the first 2 consecutive lines, then wait 90 seconds, then print the next 2 lines, etc.
Use the % modulo operator as suggested by Tony Stark. There is no need to initialize the counter.
#!/bin/bash
for file in $( cat file.dat )
do
session=$( grep $file mapping.dat | cut -d, -f2 )
echo "For ${file}, session launched is ${session}."
(( count++ % 2 )) && sleep 90
done
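Why `(( count++ % 2 ))` works as the gate: the arithmetic command succeeds (exit status 0) exactly when the expression is non-zero, and the post-increment uses the old value, so with count starting at 0 (or unset) the test is true after every second item. A sketch you can run on its own (bash/ksh arithmetic, with the sleep replaced by an echo):

```shell
count=0
for i in 1 2 3 4 5 6; do
    echo "item $i"
    (( count++ % 2 )) && echo "  (sleep 90 would fire here)"
done
```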

Performance considerations when using pipe | within awk

awk -F'/' '{ print $1 |" sort " }' infile > outfile
versus
awk -F'/' '{ print $1 }' infile | sort > outfile
Are these MVCEs exactly equivalent, or are there portability or performance issues that I don't know about if I use a pipe (or a redirect) from within awk?
Both commands produce the correct output.
Update: Did some research myself - see my answer below.
tl;dr Using a pipe within awk can be twice as slow.
I went and had a quick read through of io.c in the gawk source.
Piping within awk is POSIX, as long as you don't use co-processes (i.e. |&).
If you have an OS that doesn't support pipes (this came up in the comments), gawk will simulate them by writing to temporary files, as you'd expect. That will take a while, but at least you get pipes where you otherwise wouldn't.
If you have a real OS, gawk forks a child and writes the output to it, so you wouldn't expect a huge performance drop from using the pipe within awk.
Interestingly, though, gawk has some optimisations for simple cases like
awk '{print $1}'
so I ran a test case.
for i in $(seq 1 10000000); do echo $(( 10000000-$i )) " " $i;done > infile
Ten million records seemed like enough to smooth out variance from other jobs on the system.
Then
time awk '{ print $1 }' infile | sort -n > /dev/null
real 0m10.350s
user 0m7.770s
sys 0m3.000s
or thereabouts on average.
but
time awk '{ print $1 | " sort -n " }' infile > /dev/null
real 0m25.870s
user 0m13.880s
sys 0m13.030s
As you can see this is quite a dramatic difference.
So the conclusion: although piping inside awk can be much slower, there are plenty of use cases where the convenience outweighs the performance hit. It is really only in simple cases like the MVCE above that you should keep the pipe outside awk.
There is a discussion here about the difference between redirecting into awk versus calling awk with a filename. Although not directly related, it might be of interest if you have bothered to read this far.
If you use | inside awk, the output of the print statements is fed down a pipe to the shell command inside the quotes; the command is started the first time the pipe is used and reads the lines as a stream.
Consider:
$ echo 1 4 2 3 | awk '{for (i=1; i<=NF; i++) print $i}'
1
4
2
3
Now try:
$ echo 1 4 2 3 | awk '{for (i=1; i<=NF; i++) print $i | "sort" }'
1
2
3
4
The single stream 1\n4\n2\n3 is built up internally and passed by awk to sort. This can be combined into a more complex invocation, such as:
awk '{ print $1 > "names.unsorted"
command = "sort -r > names.sorted"
print $1 | command }' names
More at GNU awk manual on redirection.
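One detail worth knowing when piping from inside awk: the pipe stays open for the duration of the program unless you call close() with the exact same command string, and sort can only emit its output once its input side is closed. Calling close() in END (or simply letting awk exit) is what releases the sorted output. A minimal sketch:

```shell
printf '3\n1\n2\n' | awk '
    { print $1 | "sort -n" }      # stream each line into the pipe
    END { close("sort -n") }      # flush and reap sort before awk exits
'
```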

How to repeat a character in Bourne Shell?

I want to repeat # 10 times, something like:
"##########"
How can I do it in the Bourne shell (/bin/sh)? I have tried using print, but I guess it only works in other shells.
Please don't give bash syntax.
The shell itself has no obvious facility for repeating a string. For just ten repetitions, it's hard to beat the obvious
echo '##########'
For repeating a single character a specified number of times, this should work even on a busy BusyBox.
dd if=/dev/zero bs=10 count=1 | tr '\0' '#'
Not very elegant but fairly low overhead. (You may need to redirect the standard error from dd to get rid of pesky progress messages.)
If you have a file which is guaranteed to be long enough (such as, for example, the script you are currently running) you could replace the first 10 characters with tr.
head -c 10 "$0" | tr '\000-\377' '#'
If you have a really traditional userspace (such that head doesn't support the -c option) a 1980s-compatible variant might be
yes '#' | head -n 10 | tr -d '\n'
(Your tr might not support exactly the backslash sequences I have used here. Consult its man page or your local academic programmer from the late 1970s.)
... or, heck
strings /bin/sh | sed 's/.*/#/;10q' | tr -d '\n' # don't do this at home :-)
In pure legacy Bourne shell with no external utilities, you can't really do much better than
for f in 0 1 2 3 4 5 6 7 8 9; do
printf '#'
done
In the general case, if you can come up with a generator expression which produces (at least) the required number of repetitions of something, you can loop over that. Here's a simple replacement for seq and jot:
echo | awk '{ for (i=0; i<10; ++i) print i }'
but then you might as well do the output from Awk:
echo | awk '{ for (i=0; i<10; ++i) printf "#" }'
Well, the pure Bourne Shell (POSIX) solution, without pipes and forks would probably be
i=0; while test $i -lt 10; do printf '#'; : $((++i)); done; printf '\n'
This easily generalizes to other repeated strings, e.g. a shell function
rept () {
i=0
while test $i -lt $1; do
printf '%s' "$2"
: $((++i))
done
printf '\n'
}
rept 10 '#'
If the HP Bourne Shell is not quite POSIX and does not support arithmetic substitution with : $(()) you can use i=$(expr $i + 1) instead.
You can use this trick with printf:
$ printf "%0.s#" {1..10}
##########
If the number can be a variable, then you need to use seq:
$ var=30
$ printf "%0.s#" $(seq $var)
##############################
This prints ##########:
for a in `seq 10`; do echo -n "#"; done

Unix - Need to cut a file which has multiple blanks as delimiter - awk or cut?

I need to get the records from a text file in Unix. The delimiter is multiple blanks. For example:
2U2133 1239
1290fsdsf 3234
From this, I need to extract
1239
3234
The delimiter for all records will be always 3 blanks.
I need to do this in a unix script (.scr) and write the output to another file, or use it as input to a do-while loop. I tried the below:
while read readline
do
read_int=`echo "$readline"`
cnt_exc=`grep "$read_int" ${Directory path}/file1.txt| wc -l`
if [ $cnt_exc -gt 0 ]
then
int_1=0
else
int_2=0
fi
done < awk -F' ' '{ print $2 }' ${Directoty path}/test_file.txt
test_file.txt is the input file and file1.txt is a lookup file. But the above is not working and gives me syntax errors near awk -F.
I tried writing the output to a file. The following worked in command line:
more test_file.txt | awk -F' ' '{ print $2 }' > output.txt
This works and writes the records to output.txt on the command line. But the same command does not work in the unix script (it is a .scr file).
Please let me know where I am going wrong and how I can resolve this.
Thanks,
Visakh
The job of replacing multiple delimiters with just one is left to tr:
cat <file_name> | tr -s ' ' | cut -d ' ' -f 2
tr translates or deletes characters, and is perfectly suited to prepare your data for cut to work properly.
The manual states:
-s, --squeeze-repeats
replace each sequence of a repeated character that is
listed in the last specified SET, with a single occurrence
of that character
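Applied to the sample data from the question, the squeeze-then-cut pipeline looks like this (the input columns are separated by three blanks):

```shell
printf '2U2133   1239\n1290fsdsf   3234\n' | tr -s ' ' | cut -d ' ' -f2
# prints:
# 1239
# 3234
```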
It depends on the version or implementation of cut on your machine. Some versions support an option, usually -i, that means 'ignore blank fields' or, equivalently, allow multiple separators between fields. If that's supported, use:
cut -i -d' ' -f 2 data.file
If not (and it is not universal — and maybe not even widespread, since neither GNU nor MacOS X have the option), then using awk is better and more portable.
You need to pipe the output of awk into your loop, though:
awk -F' ' '{print $2}' ${Directory_path}/test_file.txt |
while read readline
do
read_int=`echo "$readline"`
cnt_exc=`grep "$read_int" ${Directory_path}/file1.txt| wc -l`
if [ $cnt_exc -gt 0 ]
then int_1=0
else int_2=0
fi
done
The only residual issue is whether the while loop runs in a sub-shell and therefore does not modify your main shell script's variables, only its own copies of them.
With bash, you can use process substitution:
while read readline
do
read_int=`echo "$readline"`
cnt_exc=`grep "$read_int" ${Directory_path}/file1.txt| wc -l`
if [ $cnt_exc -gt 0 ]
then int_1=0
else int_2=0
fi
done < <(awk -F' ' '{print $2}' ${Directory_path}/test_file.txt)
This leaves the while loop in the current shell, but arranges for the output of the command to appear as if from a file.
The blank in ${Directory path} is not normally legal — unless it is another Bash feature I've missed out on; you also had a typo (Directoty) in one place.
Other ways of doing the same thing aside, the error in your program is this: You cannot redirect from (<) the output of another program. Turn your script around and use a pipe like this:
awk -F' ' '{ print $2 }' ${Directory path}/test_file.txt | while read readline
etc.
Besides, the use of "readline" as a variable name may or may not get you into problems.
In this particular case, you can use the following line
sed 's/ /\t/g' <file_name> | cut -f 2
to get your second columns.
In bash you can start from something like this:
for n in `cat "${Directory path}/test_file.txt" | cut -d " " -f 4`
do
grep -c $n ${Directory path}/file*.txt
done
This should have been a comment, but since I cannot comment yet, I am adding this here.
This is from an excellent answer here: https://stackoverflow.com/a/4483833/3138875
tr -s ' ' <text.txt | cut -d ' ' -f4
tr -s '<character>' squeezes multiple repeated instances of <character> into one.
It's not working in the script because of the typo in "Directo*t*y path" (last line of your script).
Cut isn't flexible enough. I usually use Perl for that:
cat file.txt | perl -F'   ' -ane 'print $F[1]."\n"'
Instead of the triple space after -F you can put any Perl regular expression. You access fields as $F[n], where n is the field number (counting starts at zero). This way there is no need for sed or tr.

Generate a random filename in unix shell

I would like to generate a random filename in unix shell (say tcshell). The filename should consist of random 32 hex letters, e.g.:
c7fdfc8f409c548a10a0a89a791417c5
(to which I will add whatever is necessary). The point is to be able to do it in the shell alone, without resorting to an external program.
Assuming you are on a linux, the following should work:
cat /dev/urandom | tr -cd 'a-f0-9' | head -c 32
This is only pseudo-random if your system runs low on entropy, but is (on linux) guaranteed to terminate. If you require genuinely random data, cat /dev/random instead of /dev/urandom. This change will make your code block until enough entropy is available to produce truly random output, so it might slow down your code. For most uses, the output of /dev/urandom is sufficiently random.
If you on OS X or another BSD, you need to modify it to the following:
cat /dev/urandom | env LC_CTYPE=C tr -cd 'a-f0-9' | head -c 32
Why not use the unix mktemp command?
$ TMPFILE=`mktemp tmp.XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX` && echo $TMPFILE
tmp.MnxEsPDsNUjrzDIiPhnWZKmlAXAO8983
One command, no pipe, no loop:
hexdump -n 16 -v -e '/1 "%02X"' -e '/16 "\n"' /dev/urandom
If you don't need the newline, for example when you're using it in a variable:
hexdump -n 16 -v -e '/1 "%02X"' /dev/urandom
Using "16" generates 32 hex digits.
uuidgen generates exactly this, except you have to remove hyphens. So I found this to be the most elegant (at least to me) way of achieving this. It should work on linux and OS X out of the box.
uuidgen | tr -d '-'
As you probably noticed from each of the answers, you generally have to "resort to a program".
However, without using any external executables, in Bash and ksh:
string=''; for i in {0..31}; do string+=$(printf "%x" $(($RANDOM%16)) ); done; echo $string
in zsh:
string=''; for i in {0..31}; do string+=$(printf "%x" $(($RANDOM%16)) ); dummy=$RANDOM; done; echo $string
Change the lower case x in the format string to an upper case X to make the alphabetic hex characters upper case.
Here's another way to do it in Bash but without an explicit loop:
printf -v string '%X' $(printf '%.2s ' $((RANDOM%16))' '{00..31})
In the following, "first" and "second" printf refers to the order in which they're executed rather than the order in which they appear in the line.
This technique uses brace expansion to produce a list of 32 random numbers mod 16 each followed by a space and one of the numbers in the range in braces followed by another space (e.g. 11 00). For each element of that list, the first printf strips off all but the first two characters using its format string (%.2) leaving either single digits followed by a space each or two digits. The space in the format string ensures that there is then at least one space between each output number.
The command substitution containing the first printf is not quoted so that word splitting is performed and each number goes to the second printf as a separate argument. There, the numbers are converted to hex by the %X format string and they are appended to each other without spaces (since there aren't any in the format string) and the result is stored in the variable named string.
When printf receives more arguments than its format string accounts for, the format is applied to each argument in turn until they are all consumed. If there are fewer arguments, the unmatched format string (portion) is ignored, but that doesn't apply in this case.
I tested it in Bash 3.2, 4.4 and 5.0-alpha. But it doesn't work in zsh (5.2) or ksh (93u+) because RANDOM only gets evaluated once in the brace expansion in those shells.
Note that because of using the mod operator on a value that ranges from 0 to 32767 the distribution of digits using the snippets could be skewed (not to mention the fact that the numbers are pseudo random in the first place). However, since we're using mod 16 and 32768 is divisible by 16, that won't be a problem here.
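The divisibility argument is easy to verify with shell arithmetic: $RANDOM yields 32768 distinct values (0..32767), and since 32768 = 16 * 2048, each residue modulo 16 is produced by exactly 2048 of them, so no hex digit is favoured:

```shell
echo $(( 32768 % 16 ))   # 0    -> 16 divides the range evenly
echo $(( 32768 / 16 ))   # 2048 -> values mapping to each hex digit
```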
In any case, the correct way to do this is using mktemp as in Oleg Razgulyaev's answer.
Tested in zsh, should work with any BASH compatible shell!
#!/bin/zsh
SUM=`md5sum <<EOF
$RANDOM
EOF`
FN=`echo $SUM | awk '// { print $1 }'`
echo "Your new filename: $FN"
Example:
$ zsh ranhash.sh
Your new filename: 2485938240bf200c26bb356bbbb0fa32
$ zsh ranhash.sh
Your new filename: ad25cb21bea35eba879bf3fc12581cc9
Yet another way[tm].
R=$(echo $RANDOM $RANDOM $RANDOM $RANDOM $RANDOM | md5 | cut -c -8)
FILENAME="abcdef-$R"
This answer is very similar to fmarks's, so I cannot really take credit for it, but I found the cat and tr command combination quite slow, and this version quite a bit faster. You need hexdump.
hexdump -e '/1 "%02x"' -n32 < /dev/urandom
Another thing you can add is running the date command as follows:
date +%S%N
This reads the nanosecond part of the current time, which adds a lot of variability to the result.
The first answer is good, but why fork cat if it's not required?
tr -dc 'a-f0-9' < /dev/urandom | head -c32
Grab 16 bytes from /dev/random, convert them to hex, take the first line, remove the address, remove the spaces.
head /dev/random -c16 | od -tx1 -w16 | head -n1 | cut -d' ' -f2- | tr -d ' '
Assuming that "without resorting to a program" means "using only programs that are readily available", of course.
If you have openssl on your system you can use it to generate random hex (or -base64) strings of a defined length. I found it pretty simple and usable in one-line cron jobs. Note that the argument is a byte count, so -hex 32 yields 64 hex digits; use openssl rand -hex 16 for a 32-character name.
openssl rand -hex 32
8c5a7515837d7f0b19e7e6fa4c448400e70ffec88ecd811a3dce3272947cb452
Hope to add a (maybe) better solution to this topic.
Notice: this only works with bash 4 and some implementations of mktemp (for example, the GNU one).
Try this
fn=$(mktemp -u -t 'XXXXXX')
echo ${fn/\/tmp\//}
This one is twice as fast as head /dev/urandom | tr -cd 'a-f0-9' | head -c 32, and eight times as fast as cat /dev/urandom | tr -cd 'a-f0-9' | head -c 32.
Benchmark:
With mktemp:
#!/bin/bash
# a.sh
for (( i = 0; i < 1000; i++ ))
do
fn=$(mktemp -u -t 'XXXXXX')
echo ${fn/\/tmp\//} > /dev/null
done
time ./a.sh
./a.sh 0.36s user 1.97s system 99% cpu 2.333 total
And the other:
#!/bin/bash
# b.sh
for (( i = 0; i < 1000; i++ ))
do
cat /dev/urandom | tr -dc 'a-zA-Z0-9' | head -c 32 > /dev/null
done
time ./b.sh
./b.sh 0.52s user 20.61s system 113% cpu 18.653 total
If you are on Linux, then Python will likely come pre-installed. So you can go for something similar to the below:
python -c "import uuid; print(uuid.uuid1())"
If you don't like the dashes, then use the replace function as shown below:
python -c "import uuid; print(str(uuid.uuid1()).replace('-',''))"
