I want the wc -w word count value to be assigned to a variable.
I've tried something like this, but I'm getting an error. What is wrong?
winget="this is the first line"
wdCount=$winget | wc -w
echo $wdCount
You need to use $(...) to assign the result:
wdCount=$(echo "$winget" | wc -w)
Or you can avoid echo by using a here-string:
wdCount=$(wc -w <<<"$winget")
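Putting it together with the string from the question, a quick check (expected output shown as a comment):
winget="this is the first line"
wdCount=$(wc -w <<<"$winget")
echo "$wdCount"    # prints 5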
You can get the word count without the filename appearing in the output by redirecting the file into wc:
word_count=$(< "$file" wc -w)
See https://unix.stackexchange.com/a/126999/320461
You can use this to store the word count in a variable:
word_count=$(wc -w filename.txt | awk '{print $1}')
I am struggling with an awk problem in my bash shell script. In the snippet of code below, I am passing a variable var_awk to awk for use as a regular expression. The idea is to get lines above a regular expression, but the echo below is not displaying any data.
echo `ls -ltr $date*$f* | /usr/xpg4/bin/awk -v reg=$var_awk '/reg/ {print $0}'`
I am unable to use reg as a regex; when I print reg the value is printed, but it is not being applied as a regex as expected.
if [ $GE == "HBCA" ] || [ $GE == "HBUS" ] || [ $GE == "HBEU" ]; then
for f in `ls -ltr $date*GEN*REVAL*log|grep -v LPD | awk '{split($9,a,"_")}{print a[3]}'`; do
echo $f
var_awk="$date"_RESET_CALC_"$f"
echo $var_awk
echo `ls -ltr $date*$f* | /usr/xpg4/bin/awk -v reg=$var_awk '/reg/ {print $0}'`
You cannot use a variable inside /.../ that way. You need to do:
/usr/xpg4/bin/awk -v reg="$var_awk" '$0~reg{ print $0 }'
or simply
/usr/xpg4/bin/awk -v reg="$var_awk" '$0~reg'
Inside / /, reg will be used as the literal word "reg", not as your variable.
Quote your shell variables.
Try this:
...whatever you had already..|awk -v reg="$var_awk" '$0~reg'
It is better to wrap the shell variable in quotes, e.g. in case your variable has spaces.
/pattern/ in awk is called a regex constant. It cannot be used with a variable; that's why it is called a constant. We need a dynamic regex here, as in this example.
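To see the difference, a quick check (sample input assumed):
$ printf 'foo\nbar\n' | awk -v reg="bar" '/reg/'
$ printf 'foo\nbar\n' | awk -v reg="bar" '$0 ~ reg'
bar
The first command prints nothing, because /reg/ matches the literal string "reg"; the second uses the variable's value as a dynamic regex.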
I have a fixed-width-field file which I'm trying to sort using the UNIX (Cygwin, in my case) sort utility.
The problem is there is a two-line header at the top of the file which is being sorted to the bottom of the file (as each header line begins with a colon).
Is there a way to tell sort either "pass the first two lines across unsorted" or to specify an ordering which sorts the colon lines to the top? The remaining lines always start with a 6-digit number (which is actually the key I'm sorting on), if that helps.
Example:
:0:12345
:1:6:2:3:8:4:2
010005TSTDOG_FOOD01
500123TSTMY_RADAR00
222334NOTALINEOUT01
477821USASHUTTLES21
325611LVEANOTHERS00
should sort to:
:0:12345
:1:6:2:3:8:4:2
010005TSTDOG_FOOD01
222334NOTALINEOUT01
325611LVEANOTHERS00
477821USASHUTTLES21
500123TSTMY_RADAR00
(head -n 2 <file> && tail -n +3 <file> | sort) > newfile
The parentheses create a subshell, wrapping up the stdout so you can pipe it or redirect it as if it had come from a single command.
If you don't mind using awk, you can take advantage of awk's built-in pipe abilities
eg.
extract_data | awk 'NR<3{print $0;next}{print $0| "sort -r"}'
This prints the first two lines verbatim and pipes the rest through sort.
Note that this has the very specific advantage of being able to selectively sort parts of a piped input. All the other methods suggested will only sort plain files which can be read multiple times; this works on anything.
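For instance, a sketch that keeps the header of a live command's output while sorting the body (the column number assumes the usual ps aux layout):
ps aux | awk 'NR==1{print; next}{print $0 | "sort -rnk3"}'
Here the header line passes through untouched and the process list comes out sorted by %CPU.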
In simple cases, sed can do the job elegantly:
your_script | (sed -u 1q; sort)
or equivalently,
cat your_data | (sed -u 1q; sort)
The key is in the 1q -- print first line (header) and quit (leaving the rest of the input to sort).
For the example given, 2q will do the trick.
The -u switch (unbuffered) is required for those seds (notably, GNU's) that would otherwise read the input in chunks, thereby consuming data that you want to go through sort instead.
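A quick check of the idea (sample input assumed):
$ printf 'header\n3\n1\n2\n' | (sed -u 1q; sort)
header
1
2
3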
Here is a version that works on piped data:
(read -r; printf "%s\n" "$REPLY"; sort)
If your header has multiple lines:
(for i in $(seq $HEADER_ROWS); do read -r; printf "%s\n" "$REPLY"; done; sort)
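For example, with a two-line header (sample input assumed):
$ printf 'h1\nh2\n3\n1\n2\n' | (HEADER_ROWS=2; for i in $(seq $HEADER_ROWS); do read -r; printf "%s\n" "$REPLY"; done; sort)
h1
h2
1
2
3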
You can use tail -n +3 <file> | sort ... (tail will output the file contents from the 3rd line).
head -2 <your_file> && nawk 'NR>2' <your_file> | sort
example:
> cat temp
10
8
1
2
3
4
5
> head -2 temp && nawk 'NR>2' temp | sort -r
10
8
5
4
3
2
1
It only takes 2 lines of code...
head -1 test.txt > a.tmp;
tail -n+2 test.txt | sort -n >> a.tmp;
For numeric data, -n is required. For an alphabetical sort, -n is not required.
Example file:
$ cat test.txt
header
8
5
100
1
-1
Result:
$ cat a.tmp
header
-1
1
5
8
100
So here's a bash function whose arguments are exactly like sort's, supporting both files and pipes.
function skip_header_sort() {
    if [[ $# -gt 0 ]] && [[ -f ${@: -1} ]]; then
        local file=${@: -1}
        set -- "${@:1:$(($#-1))}"
    fi
    awk -v sargs="$*" 'NR<2{print; next}{print | "sort "sargs}' $file
}
How it works: this line checks if there is at least one argument and if the last argument is a file.
if [[ $# -gt 0 ]] && [[ -f ${@: -1} ]]; then
This saves the file to a separate variable, since we're about to erase the last argument.
local file=${@: -1}
Here we remove the last argument, since we don't want to pass it to sort.
set -- "${@:1:$(($#-1))}"
Finally, we do the awk part, passing the arguments (minus the last argument if it was a file) to sort inside awk. This was originally suggested by Dave, and modified to take sort arguments. We rely on the fact that $file will be empty if we're piping, and thus ignored.
awk -v sargs="$*" 'NR<2{print; next}{print | "sort "sargs}' $file
Example usage with a comma separated file.
$ cat /tmp/test
A,B,C
0,1,2
1,2,0
2,0,1
# SORT NUMERICALLY SECOND COLUMN
$ skip_header_sort -t, -nk2 /tmp/test
A,B,C
2,0,1
0,1,2
1,2,0
# SORT REVERSE NUMERICALLY THIRD COLUMN
$ cat /tmp/test | skip_header_sort -t, -nrk3
A,B,C
0,1,2
2,0,1
1,2,0
Here's a bash shell function derived from the other answers. It handles both files and pipes. First argument is the file name or '-' for stdin. Remaining arguments are passed to sort. A couple examples:
$ hsort myfile.txt
$ head -n 100 myfile.txt | hsort -
$ hsort myfile.txt -k 2,2 | head -n 20 | hsort - -r
The shell function:
hsort ()
{
if [ "$1" == "-h" ]; then
echo "Sort a file or standard input, treating the first line as a header.";
echo "The first argument is the file or '-' for standard input. Additional";
echo "arguments to sort follow the first argument, including other files.";
echo "File syntax : $ hsort file [sort-options] [file...]";
echo "STDIN syntax: $ hsort - [sort-options] [file...]";
return 0;
elif [ -f "$1" ]; then
local file=$1;
shift;
(head -n 1 "$file" && tail -n +2 "$file" | sort "$@");
elif [ "$1" == "-" ]; then
shift;
(read -r; printf "%s\n" "$REPLY"; sort "$@");
else
>&2 echo "Error. File not found: $1";
>&2 echo "Use either 'hsort <file> [sort-options]' or 'hsort - [sort-options]'";
return 1 ;
fi
}
This is the same as Ian Sherbin's answer, but my implementation is:
cut -d'|' -f3,4,7 "$arg1" | uniq > filetmp.tc
head -1 filetmp.tc > file.tc;
tail -n+2 filetmp.tc | sort -t"|" -k2,2 >> file.tc;
Another simple variation on all the others, reading the file only once (note this relies on head leaving the file offset just past the header, which works for seekable regular files but not for pipes):
HEADER_LINES=2
(head -n $HEADER_LINES; sort) < data-file.dat
With Python:
import sys

HEADER_ROWS = 2
for _ in range(HEADER_ROWS):
    sys.stdout.write(next(sys.stdin))
for row in sorted(sys.stdin):
    sys.stdout.write(row)
cat file_name.txt | sed 1d | sort
This deletes the header line (sed 1d) and sorts the rest; note the header is discarded rather than kept at the top.
This was an interview question, but it is nevertheless still a programming question.
I have a unix file with two columns, name and score. I need to display the count of each score.
like
jhon 100
dan 200
rob 100
mike 100
the output should be
100 3
200 1
You only need built-in unix utilities to solve it, so I am assuming shell scripting, regexes, or unix commands.
I understand looping would be one way to do it: store all the values you have already seen and then grep every record for unseen values. Is there any other, more efficient way of doing it?
Try this:
cut -d ' ' -f 2 < /tmp/foo | sort -n | uniq -c \
| (while read n v ; do printf "%s %s\n" "$v" "$n" ; done)
The initial cut could be replaced with another while read loop, which would be more resilient to input file format variations (extra whitespace). If some of the names consist of several words, simple field extraction will not work as easily, but sed can do it.
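A sketch of that while read variant (still assuming single-word names):
while read -r name score ; do printf "%s\n" "$score" ; done < /tmp/foo \
| sort -n | uniq -c \
| (while read n v ; do printf "%s %s\n" "$v" "$n" ; done)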
Otherwise, use your favorite programming language. Perl would probably shine. It is not difficult either in Java or even in C or Forth.
$ cat foo.txt
jhon 100
dan 200
rob 100
mike 100
$ awk '{print $2}' foo.txt | sort | uniq -c
3 100
1 200
It's a pity you can't get a count with sort or uniq alone.
Edit: I just noticed I have the count in front... to get exactly the required output you can do:
$ awk '{print $2}' foo.txt | sort | uniq -c | awk '{ print $2 " " $1 }'
Not very complicated in perl:
#!/usr/bin/perl -w
use strict;
use warnings;
my %count = ();
while (<>) {
chomp;
my ($name, $score) = split(/ /);
$count{$score}++;
}
foreach my $key (sort keys %count) {
print "$key ", $count{$key}, "\n";
}
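Saved as count.pl (a hypothetical name) and run against the question's foo.txt, this would print:
$ perl count.pl foo.txt
100 3
200 1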
You could go with awk:
awk '{ a[$2]++ } END { for (x in a) print x, a[x] }' record_file.txt
Alternatively with shell commands:
for i in `awk '{print $2}' inputfile | sort -u`
do
echo -n "$i "
grep -c " $i$" inputfile
done
The first awk command gives a list of all the distinct scores (e.g. 100 and 200), which the for loop then iterates over, counting each one separately. Not very efficient, but simple. If the file is not too big, it should not be a problem.
Here is a query:
grep bar 'foo.txt' | awk '{print $3}'
The field values emitted by the awk query are mangled C++ symbol names. I want to pass each one to dem and finally output the result of dem, i.e. the demangled symbols.
Assume that the field separator is a ' ' (space).
awk is a pattern matching language. The grep is totally unnecessary.
awk '/bar/{print $3}' foo.txt
does what your example does.
Edit: Fixed up a bit after reading the comments on the preceding answer (I don't know a thing about dem...):
You can make use of the system call in awk with something like:
awk '/bar/{cline="dem " $3; system(cline)}' foo.txt
but this would spawn an instance of dem for each symbol processed. Very inefficient.
So let's get more clever:
awk '/bar/{list = list " " $3;}END{cline="dem " list; system(cline)}' foo.txt
BTW -- untested, as I don't have dem or your input.
Another thought: if you're going to use the xargs formulation offered by other posters, cut might well be more efficient than awk. At that point, however, you would need grep again.
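That would look something like this (assuming the single-space separator stated in the question):
grep bar foo.txt | cut -d' ' -f3 | xargs dem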
How about
grep bar 'foo.txt' | awk '{ print $3 }' | xargs dem | awk '{ print $3 }'
This will print the demangled symbols, complete with argument lists in the case of methods:
awk '/bar/ { print $3 }' foo.txt | xargs dem | sed -e 's:.* == ::'
This will print the demangled symbols, without argument lists in the case of methods:
awk '/bar/ { print $3 }' foo.txt | xargs dem | sed -e 's:.* == \([^(]*\).*:\1:'