Using QNX Neutrino, I need to subtract hex-valued file names from their predecessors. The files are named by their creation time in hex. The following gets me a list of pure hex values, but I cannot subtract them from each other.
last=0
find /path/ -type f \( ! -iname ".*" \) -exec basename {} \; |
while read fname
do
current=$fname
echo "difference is $((current - last)) seconds
done
The find command gives me:
51b71f38
51b71f44
51b71f50
51b71f5c
51b71f74
I have tried using echo "ibase=16; $fname" | bc, but that only converts the value for output. Is there a way to get an integer number which is the difference of these hex values?
Maybe something like this:
find /path/ -type f \( ! -iname ".*" \) -exec basename {} \; |
while read fname; do
last="$current"
current="$fname"
if [ -n "$last" ]; then
echo "difference is $(( 0x$current - 0x$last )) seconds"
fi
done
Test:
I used your find command as input from a file for the test.
$ cat ff
51b71f38
51b71f44
51b71f50
51b71f5c
51b71f74
$ while read fname; do last="$current" ; current="$fname" ; if [ -n "$last" ]; then echo "difference is $(( 0x$current - 0x$last )) seconds" ; fi ; done < ff
difference is 12 seconds
difference is 12 seconds
difference is 12 seconds
difference is 24 seconds
current=$(echo "ibase=16; $fname" | bc)
actually gives me the decimal value I needed, inline.
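For reference, here is a minimal sketch of the whole pipeline using bc instead of shell hex arithmetic. One caveat worth noting: GNU bc requires uppercase hex digits when ibase=16, hence the tr step (the /path/ location is from the question):
last=0
find /path/ -type f \( ! -iname ".*" \) -exec basename {} \; |
while read -r fname
do
# tr uppercases the hex digits so bc accepts them with ibase=16
current=$(echo "ibase=16; $(echo "$fname" | tr 'a-f' 'A-F')" | bc)
if [ "$last" -ne 0 ]; then
echo "difference is $((current - last)) seconds"
fi
last=$current
done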
Related
I have 60 subdirs in a directory, for example test/queues.
The subdirs:
test/queues/subdir1
test/queues/subdir2
test/queues/subdir3
(...)
test/queues/subdir60
I want a command that gives me the output of the number of files in each subdirectory, listed separately, example:
test/queues/subdir1 - 45 files
test/queues/subdir2 - 76 files
test/queues/subdir3 - 950 files
(...)
test/queues/subdir60 - 213 files
Through my research, I have only found the command ls -lat test/queues/* | wc -l, but it outputs the total number of files across all of these subdirs. For example, it returns just 4587, which is the total number of files in all 60 subdirs. I want the quantity of files in each folder listed separately.
How can I do that?
Use a loop to count the lines for every subdirectory individually:
for d in test/queues/*/
do
echo "$d" - $(ls -lat "$d" | wc -l)
done
Note that the output of ls -lat some_directory will contain a few additional lines like
total 123
drwxr-xr-x 1 user group 0 Feb 26 09:51 ../
drwxr-xr-x 1 user group 0 Jan 25 12:35 ./
If your ls command supports these options, you can use:
for d in test/queues/*/
do
echo "$d" - $(ls -A1 "$d" | wc -l)
done
You can apply ls | wc -l in a loop to all subdirs
for x in *; do echo "$x => $(ls "$x" | wc -l)"; done
If you want to restrict the output to directories that are one level deep and you only want a count of regular files, you could do:
find . -maxdepth 1 -type d -exec sh -c '
printf "%s\t" "$0"; find "$0" -maxdepth 1 -type f | wc -l' {} \; \
| column -t
You can get the "name - %d files" format with:
find . -maxdepth 1 -type d -exec sh -c '
printf "%s - %d files\n" "$0" \
"$(find "$0" -maxdepth 1 -type f | wc -l)"' {} \;
Using find and awk:
find test/queues -maxdepth 2 -mindepth 2 -printf "%h\n" | awk '{ map[$0]++ } END { for (i in map) { print i" - "map[i]} }'
Use -maxdepth and -mindepth to ensure that we only search the directory structure one level down. Print only the leading directories through printf "%h". Pipe the output into awk and build an incrementing map array with the directories as the index. At the end, loop through the map array, printing the directories and the counts.
On systems whose find lacks the -printf option, use -exec dirname instead:
find test/queues -maxdepth 2 -mindepth 2 -exec dirname {} \; | awk '{ map[$0]++ } END { for (i in map) { print i" - "map[i]} }'
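With the example layout from the question, either variant prints one line per subdirectory, along the lines of:
test/queues/subdir1 - 45
test/queues/subdir2 - 76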
I want to count the number of characters before the first occurrence of the pattern 030 in a megarow (one very long line), without reading data forward from that point, so that the whole megarow is not read into memory.
It should return 28.
Megastring Data
48000000fe5a1eda480000000d00030001000000cd010000020000000000000000000000000000000000000000000000000000000200000001000000ffffffff57ea5e55ff640c00585e0000fe5a1eda480000000d00030007000000cd010000010000000000000002000000000000800000000000000000000000
My initial idea was to split at the first instance of 030, but I did not succeed with that.
I am also not aware of any capability of the split command to read only up to the end of a pattern.
How can I count quickly up to the first match?
If your megarow is in a file named megarow_file you could do the following:
#!/bin/bash
INPUT=megarow_file
SEARCH_STRING="030"
char_count=0
comp_string=""
while IFS= read -r -n1 char
do
char_count=`expr $char_count + 1`
comp_string="${comp_string}${char}"
comp_string_length=${#comp_string}
if [ $comp_string_length -eq 3 ]; then
# echo comparing value $comp_string
if [ "$comp_string" = "$SEARCH_STRING" ]; then
# echo match
break
fi
fi
if [ $comp_string_length -gt 3 ]; then
# echo its bigger than 3, strip 1st char
comp_string="${comp_string:1:3}"
# echo comparing value $comp_string
if [ "$comp_string" = "$SEARCH_STRING" ]; then
# echo match
break
fi
fi
done < "$INPUT"
count_up_to_comp_string=`expr $char_count - ${#SEARCH_STRING}`
echo "Length up to ${SEARCH_STRING} was ${count_up_to_comp_string} characters"
Comparing GNU awk and BSD awk, prompted by BlueMoon's comment:
$ time cat megaRow | awk '{print index($0, "fafafafa")-1}'
48584
real 1m13.489s
user 1m11.608s
sys 0m4.685s
$ time cat megaRow | gawk '{print index($0, "fafafafa")-1}'
48584
real 1m12.792s
user 1m8.845s
sys 0m4.933s
GNU awk is slightly faster here, but not significantly so; the difference is within the measurement uncertainty.
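As an alternative sketch: if your grep supports the -b (byte offset) and -o (only matching) options, as GNU grep does, the offset of the first match can be read off directly, again assuming the row is in megarow_file. Like the awk approach this still reads the whole line, but it avoids the per-character shell loop:
$ grep -bo '030' megarow_file | head -n 1 | cut -d: -f1
28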
Usually the grep command is used to display lines containing the specified pattern. Is there any way to display n lines before and after the line which contains the pattern?
Can this be achieved using awk?
Yes, use
grep -B num1 -A num2
to include num1 lines of context before the match, and num2 lines of context after the match.
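For example, to show 2 lines of context before and 3 after every match (pattern and logfile are placeholders):
grep -B 2 -A 3 'pattern' logfile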
EDIT:
It seems the OP is using AIX, which has a different set of options that doesn't include -B and -A.
This link describes grep on AIX 4.3 (it doesn't look promising).
Matt's perl script might be a better solution.
Here is what I usually do on AIX:
before=2   # the number of lines to be shown before
after=2    # the number of lines to be shown after
grep -n <pattern> <filename> | cut -d':' -f1 | xargs -n1 -I % awk "NR<=%+$after && NR>=%-$before" <filename>
If you do not want the two extra variables, you can always use it as a one-liner:
grep -n <pattern> <filename> | cut -d':' -f1 | xargs -n1 -I % awk 'NR<=%+<<after>> && NR>=%-<<before>>' <filename>
Suppose I have the pattern 'stack' and the filename is flow.txt.
I want 2 lines before and 3 lines after. Then the command will be:
grep -n 'stack' flow.txt | cut -d':' -f1 | xargs -n1 -I % awk 'NR<=%+3 && NR>=%-2' flow.txt
If I want only the 2 lines before, the command will be:
grep -n 'stack' flow.txt | cut -d':' -f1 | xargs -n1 -I % awk 'NR<=% && NR>=%-2' flow.txt
If I want only the 3 lines after, the command will be:
grep -n 'stack' flow.txt | cut -d':' -f1 | xargs -n1 -I % awk 'NR<=%+3 && NR>=%' flow.txt
Multiple files: switch from grep to pure awk. As above, for the pattern 'stack' with the filenames flow.*, 2 lines before and 3 lines after, the command will be:
awk 'BEGIN {
before=2; after=3; pattern="stack";
i=0; hold[before]=""; afterprints=0}
{
#Print the lines from the previous Match
if (afterprints > 0)
{
print FILENAME ":" FNR ":" $0
afterprints-- #keep a track of the lines to print after - this can be reset if a match is found
if (afterprints == 0) print "---"
}
#Look for the pattern in current line
if ( match($0, pattern) > 0 )
{
# print the lines in the hold round robin buffer from the current line to line-1
# if (before >0) => user wants lines before avoid divide by 0 in %
# and afterprints => 0 - we have not printed the line already
for(j=i; j < i+before && before > 0 && afterprints == 0 ; j++)
print hold[j%before]
if (afterprints == 0) # print the line if we have not printed the line already
print FILENAME ":" FNR ":" $0
afterprints=after
}
if (before > 0) # Store the lines in the round robin hold buffer
{ hold[i]=FILENAME ":" FNR ":" $0
i=(i+1)%before }
}' flow.*
From the tags, it's likely that the system has a grep that may not support providing context (Solaris is one system that doesn't and I can't remember about AIX). If that is the case, there's a perl script that may help at http://www.sun.com/bigadmin/jsp/descFile.jsp?url=descAll/cgrep__context_grep.
If you have sed you could use this shell script
BEFORE=2
AFTER=3
FILE=file.txt
PATTERN=pattern
for i in $(grep -n $PATTERN $FILE | sed -e 's/\:.*//')
do head -n $(($AFTER+$i)) $FILE | tail -n $(($AFTER+$BEFORE+1))
done
What it does: grep -n prefixes each match with the line number it was found at, and the sed strips everything but that line number. Then head takes the lines up to the matching line plus an additional $AFTER lines, and that is piped to tail to keep just $BEFORE + $AFTER + 1 lines (that is, your matching line plus the requested number of lines before and after).
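As a worked example: with BEFORE=2, AFTER=3 and a match on line 10, head -n $((3+10)) keeps lines 1..13, and tail -n $((3+2+1)) keeps the last 6 of those, i.e. lines 8..13: two lines before the match, the match itself, and three lines after.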
Sure there is (from the grep man page):
-B NUM, --before-context=NUM
Print NUM lines of leading context before matching lines.
Places a line containing a group separator (--) between
contiguous groups of matches. With the -o or --only-matching
option, this has no effect and a warning is given.
-A NUM, --after-context=NUM
Print NUM lines of trailing context after matching lines.
Places a line containing a group separator (--) between
contiguous groups of matches. With the -o or --only-matching
option, this has no effect and a warning is given.
and if you want the same amount of lines before AND after the match, use:
-C NUM, -NUM, --context=NUM
Print NUM lines of output context. Places a line containing a
group separator (--) between contiguous groups of matches. With
the -o or --only-matching option, this has no effect and a
warning is given.
You can use awk:
awk 'BEGIN{t=4}
c--&&c>=0
/pattern/{ c=t; for(i=NR;i<NR+t;i++)print a[i%t] }
{ a[NR%t]=$0}
' file
output
$ more file
1
2
3
4
5
pattern
6
7
8
9
10
11
$ ./shell.sh
2
3
4
5
6
7
8
9
In a UNIX shell script, what can I use to convert decimal numbers into hexadecimal? I thought od would do the trick, but it's not realizing I'm feeding it ASCII representations of numbers.
printf? Gross! Using it for now, but what else is available?
Tried printf(1)?
printf "%x\n" 34
22
There are probably ways of doing that with builtin functions in all shells but it would be less portable. I've not checked the POSIX sh specs to see whether it has such capabilities.
echo "obase=16; 34" | bc
If you want to filter a whole file of integers, one per line:
( echo "obase=16" ; cat file_of_integers ) | bc
Hexadecimal to decimal:
$ echo $((0xfee10000))
4276158464
Decimal to hexadecimal:
$ printf '%x\n' 26
1a
bash-4.2$ printf '%x\n' 4294967295
ffffffff
bash-4.2$ printf -v hex '%x' 4294967295
bash-4.2$ echo $hex
ffffffff
Sorry, my fault. Try this...
#!/bin/bash
declare -r HEX_DIGITS="0123456789ABCDEF"
dec_value=$1
hex_value=""
until [ "$dec_value" -eq 0 ]; do
rem_value=$((dec_value % 16))
dec_value=$((dec_value / 16))
hex_digit=${HEX_DIGITS:$rem_value:1}
hex_value="${hex_digit}${hex_value}"
done
echo -e "${hex_value}"
Example:
$ ./dtoh 1024
400
Try:
printf "%X\n" ${MY_NUMBER}
In my case, I stumbled upon an issue with the printf solution:
$ printf "%x" 008
bash: printf: 008: invalid octal number
The easiest way was to use the bc solution suggested in a post above:
$ bc <<< "obase=16; 008"
8
In zsh you can do this sort of thing:
% typeset -i 16 y
% print $(( [#8] x = 32, y = 32 ))
8#40
% print $x $y
8#40 16#20
% setopt c_bases
% print $y
0x20
Example taken from zsh docs page about Arithmetic Evaluation.
I believe Bash has similar capabilities.
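For the record, bash does accept arbitrary input bases with the base#digits form inside arithmetic expansion, though output formatting still goes through printf:
$ echo $(( 16#20 ))
32
$ echo $(( 8#40 ))
32
$ printf '%x\n' 32
20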
# interactive hex-to-decimal converter
xd() {
printf "hex> "
while read i
do
printf "dec $(( 0x${i} ))\n\nhex> "
done
}
# interactive decimal-to-hex converter
dx() {
printf "dec> "
while read i
do
printf 'hex %x\n\ndec> ' $i
done
}
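A sample interactive session (ff is typed at the prompt; Ctrl-D ends the loop):
$ xd
hex> ff
dec 255

hex>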
# number conversion.
while true
do
echo "Menu"
echo "1.Decimal to Hexadecimal"
echo "2.Decimal to Octal"
echo "3.Hexadecimal to Binary"
echo "4.Octal to Binary"
echo "5.Hexadecimal to Octal"
echo "6.Octal to Hexadecimal"
echo "7.Exit"
read choice
case $choice in
1) echo "Enter the decimal no."
read n
hex=`echo "ibase=10;obase=16;$n"|bc`
echo "The hexadecimal no. is $hex"
;;
2) echo "Enter the decimal no."
read n
oct=`echo "ibase=10;obase=8;$n"|bc`
echo "The octal no. is $oct"
;;
3) echo "Enter the hexadecimal no."
read n
binary=`echo "ibase=16;obase=2;$n"|bc`
echo "The binary no. is $binary"
;;
4) echo "Enter the octal no."
read n
binary=`echo "ibase=8;obase=2;$n"|bc`
echo "The binary no. is $binary"
;;
5) echo "Enter the hexadecimal no."
read n
oct=`echo "ibase=16;obase=8;$n"|bc`
echo "The octal no. is $oct"
;;
6) echo "Enter the octal no."
read n
hex=`echo "ibase=8;obase=16;$n"|bc`
echo "The hexadecimal no. is $hex"
;;
7) exit
;;
*) echo "invalid no."
;;
esac
done
This is not a shell script, but it is the cli tool I'm using to convert numbers among bin/oct/dec/hex:
#!/usr/bin/perl
if (@ARGV < 2) {
printf("Convert numbers among bin/oct/dec/hex\n");
printf("\nUsage: base b/o/d/x num num2 ... \n");
exit;
}
for ($i=1; $i<@ARGV; $i++) {
if ($ARGV[0] eq "b") {
$num = oct("0b$ARGV[$i]");
} elsif ($ARGV[0] eq "o") {
$num = oct($ARGV[$i]);
} elsif ($ARGV[0] eq "d") {
$num = $ARGV[$i];
} elsif ($ARGV[0] eq "h") {
$num = hex($ARGV[$i]);
} else {
printf("Usage: base b/o/d/x num num2 ... \n");
exit;
}
printf("0x%x = 0d%d = 0%o = 0b%b\n", $num, $num, $num, $num);
}
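A hypothetical session, with the script saved as base and made executable:
$ ./base h ff
0xff = 0d255 = 0377 = 0b11111111
$ ./base d 10 26
0xa = 0d10 = 012 = 0b1010
0x1a = 0d26 = 032 = 0b11010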
For those who would like to use a variable, first export it by running:
export NUM=100
Then run:
printf "%x\n" $NUM
Otherwise, you can ignore the variable and run it directly, as shown below:
printf "%x\n" 100
NB: Substitute NUM with the variable name of your choice.
Exporting makes it an environment variable (global).
Wow, I didn't realize that printf was available at the shell!
With that said, I'm surprised no-one commented about putting the printf into a shell script (which then you could put in your personal bin directory if you wanted).
echo "printf "0x%x\n" $1" > hex
chmod +x hex
Now just run:
./hex 123
It returns:
0x7b
I am working on a UNIX box, trying to run an application which writes some debug logs to standard output. I have redirected this output to a log file, but now wish to get the lines where an error is shown.
My problem here is that a simple
cat output.log | grep FAIL
does not help, as this shows only the lines which have FAIL in them. I want some more information along with this, like the 2-3 lines above each line with FAIL. Is there any way to do this via a simple shell command? I would like a single command line (pipes are fine) to do the above.
grep -C 3 FAIL output.log
Note that this also gets rid of the useless use of cat (UUOC).
grep -A $NUM
This will print $NUM lines of trailing context after matches.
-B $NUM prints leading context.
man grep is your best friend.
So in your case:
cat log | grep -A 3 -B 3 FAIL
I have two implementations of what I call sgrep, one in Perl, one using just pre-Perl (pre-GNU) standard Unix commands. If you've got GNU grep, you've no particular need of these. It would be more complex to deal with forwards and backwards context searches, but that might be a useful exercise.
Perl solution:
#!/usr/perl/v5.8.8/bin/perl -w
#
# @(#)$Id: sgrep.pl,v 1.6 2007/09/18 22:55:20 jleffler Exp $
#
# Perl-based SGREP (special grep) command
#
# Print lines around the line that matches (by default, 3 before and 3 after).
# By default, include file names if more than one file to search.
#
# Options:
# -b n1 Print n1 lines before match
# -f n2 Print n2 lines following match
# -n Print line numbers
# -h Do not print file names
# -H Do print file names
use strict;
use constant debug => 0;
use Getopt::Std;
my(%opts);
sub usage
{
print STDERR "Usage: $0 [-hnH] [-b n1] [-f n2] pattern [file ...]\n";
exit 1;
}
usage unless getopts('hnf:b:H', \%opts);
usage unless @ARGV >= 1;
if ($opts{h} && $opts{H})
{
print STDERR "$0: mutually exclusive options -h and -H specified\n";
exit 1;
}
my $op = shift;
print "# regex = $op\n" if debug;
# print file names if -h omitted and more than one argument
$opts{F} = (defined $opts{H} || (!defined $opts{h} and scalar @ARGV > 1)) ? 1 : 0;
$opts{n} = 0 unless defined $opts{n};
my $before = (defined $opts{b}) ? $opts{b} + 0 : 3;
my $after = (defined $opts{f}) ? $opts{f} + 0 : 3;
print "# before = $before; after = $after\n" if debug;
my @lines = (); # Accumulated lines
my $tail = 0; # Line number of last line in list
my $tbp_1 = 0; # First line to be printed
my $tbp_2 = 0; # Last line to be printed
# Print lines from @lines in the range $tbp_1 .. $tbp_2,
# leaving $leave lines in the array for future use.
sub print_leaving
{
my ($leave) = @_;
while (scalar(@lines) > $leave)
{
my $line = shift @lines;
my $curr = $tail - scalar(@lines);
if ($tbp_1 <= $curr && $curr <= $tbp_2)
{
print "$ARGV:" if $opts{F};
print "$curr:" if $opts{n};
print $line;
}
}
}
# General logic:
# Accumulate each line at end of @lines.
# ** If current line matches, record range that needs printing
# ** When the line array contains enough lines, pop line off front and,
# if it needs printing, print it.
# At end of file, empty line array, printing requisite accumulated lines.
while (<>)
{
# Add this line to the accumulated lines
push @lines, $_;
$tail = $.;
printf "# array: N = %d, last = $tail: %s", scalar(@lines), $_ if debug > 1;
if (m/$op/o)
{
# This line matches - set range to be printed
my $lo = $. - $before;
$tbp_1 = $lo if ($lo > $tbp_2);
$tbp_2 = $. + $after;
print "# $. MATCH: print range $tbp_1 .. $tbp_2\n" if debug;
}
# Print out any accumulated lines that need printing
# Leave $before lines in array.
print_leaving($before);
}
continue
{
if (eof)
{
# Print out any accumulated lines that need printing
print_leaving(0);
# Reset for next file
close ARGV;
$tbp_1 = 0;
$tbp_2 = 0;
$tail = 0;
@lines = ();
}
}
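A typical invocation, putting the options above to use (pattern and file name are placeholders):
$ perl sgrep.pl -n -b 2 -f 3 'FAIL' output.log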
Pre-Perl Unix solution (using plain ed, sed, and sort - though it uses getopt which was not necessarily available back then):
#!/bin/ksh
#
# #(#)$Id: old.sgrep.sh,v 1.5 2007/09/15 22:15:43 jleffler Exp $
#
# Special grep
# Finds a pattern and prints lines either side of the pattern
# Line numbers are always produced by ed (substitute for grep),
# which allows us to eliminate duplicate lines cleanly. If the
# user did not ask for numbers, these are then stripped out.
#
# BUG: if the pattern occurs in in the first line or two and
# the number of lines to go back is larger than the line number,
# it fails dismally.
set -- `getopt "f:b:hn" "$#"`
case $# in
0) echo "Usage: $0 [-hn] [-f x] [-b y] pattern [files]" >&2
exit 1;;
esac
# Tab required - at least with sed (perl would be different)
# But then the whole problem would be different if implemented in Perl.
number="'s/^\\([0-9][0-9]*\\) /\\1:/'"
filename="'s%^%%'" # No-op for sed
f=3
b=3
nflag=no
hflag=no
while [ $# -gt 0 ]
do
case $1 in
-f) f=$2; shift 2;;
-b) b=$2; shift 2;;
-n) nflag=yes; shift;;
-h) hflag=yes; shift;;
--) shift; break;;
*) echo "Unknown option $1" >&2
exit 1;;
esac
done
pattern="${1:?'No pattern'}"
shift
case $# in
0) tmp=${TMPDIR:-/tmp}/`basename $0`.$$
trap "rm -f $tmp ; exit 1" 0
cat - >$tmp
set -- $tmp
sort="sort -t: -u +0n -1"
;;
*) filename="'s%^%'\$file:%"
sort="sort -t: -u +1n -2"
;;
esac
case $nflag in
yes) num_remove='s/[0-9][0-9]*://';;
no) num_remove='s/^//';;
esac
case $hflag in
yes) fileremove='s%^$file:%%';;
no) fileremove='s/^//';;
esac
for file in $*
do
echo "g/$pattern/.-${b},.+${f}n" |
ed - $file |
eval sed -e "$number" -e "$filename" |
$sort |
eval sed -e "$fileremove" -e "$num_remove"
done
rm -f $tmp
trap 0
exit 0
The shell version of sgrep was written in February 1989, and bug fixed in May 1989. It then remained unchanged except for an administrative change (SCCS to RCS transition) in 1997 until 2007, when I added the -h option. I switched to the Perl version in 2007.
http://thedailywtf.com/Articles/The_Complicator_0x27_s_Gloves.aspx
You can use sed to print specific lines. Let's say you want line 20:
sed -n '20p' FILE_YOU_WANT_THE_LINE_FROM
Done.
-n prevents echoing lines from the file. The part in quotes is a sed rule to apply; it specifies that you want the rule to apply to line 20, and that you want to print it.
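The same idea extends to ranges, which gives you hand-rolled context once you know the match line: to print lines 18 through 22 (two lines either side of a match on line 20):
sed -n '18,22p' FILE_YOU_WANT_THE_LINE_FROM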
With GNU grep on Windows:
$ grep --context 3 FAIL output.log
$ grep --help | grep context
-B, --before-context=NUM print NUM lines of leading context
-A, --after-context=NUM print NUM lines of trailing context
-C, --context=NUM print NUM lines of output context
-NUM same as --context=NUM