Unix awk scripting to convert columns to rows

I need help converting rows to columns in a Unix script. My source is a file on the file system.
I tried the script below:
perl -nle '
if($. == 1)
{ (@a)=/([\w - .]+)(?=,|\s*$)/g }
else
{
(@b)=/([\w - .]+)(?=,|\s*$)/g;
print "$a[0]|$b[0]|$b[1]|$b[2]|$a[$_]|$b[$_+3]" foreach (0..$#a)
}
' ip.txt >op.txt
input data from file:
src,FI,QMA,PCG,PCC,PREI,G T
PIM2016.csv,MMR.S T - RED,334,114,120,34,123,725
output with latest script:
SRC|PIM2016.csv|MMRPPS|RED|SRC|334
SRC|PIM2016.csv|MMRPPS|RED|FI|114
SRC|PIM2016.csv|MMRPPS|RED|QMA|120
SRC|PIM2016.csv|MMRPPS|RED|PCG|34
SRC|PIM2016.csv|MMRPPS|RED|PCC|123
SRC|PIM2016.csv|MMRPPS|RED|PREI|725
SRC|PIM2016.csv|MMRPPS|RED|G T|
Required output:
SRC|PIM2016.csv|MMRPPS|S T -RED|FI|334
SRC|PIM2016.csv|MMRPPS|S T -RED|QMA|114
SRC|PIM2016.csv|MMRPPS|S T -RED|PCG|120
SRC|PIM2016.csv|MMRPPS|S T -RED|PCC|34
SRC|PIM2016.csv|MMRPPS|S T -RED|PREI|123
SRC|PIM2016.csv|MMRPPS|S T -RED|G T|725

$ cat ip.txt
HDR :FI,QA,PC,PM,PRE,G T
Detail row: MMRPPS,ST - RED,334,114,120,34,123,725
UP,UPR,0,0,0,0,0,0
Assuming no blank lines between rows:
$ perl -nle '
s/^.*:\s*|^\s*|\s*$//;
if($. == 1)
{ (@a) = /[^,]+/g }
else
{
(@b) = /[^,]+/g;
print "$b[0] $a[$_] $b[1] $b[$_+2]" foreach (0..$#a);
}
' ip.txt
MMRPPS FI ST - RED 334
MMRPPS QA ST - RED 114
MMRPPS PC ST - RED 120
MMRPPS PM ST - RED 34
MMRPPS PRE ST - RED 123
MMRPPS G T ST - RED 725
UP FI UPR 0
UP QA UPR 0
UP PC UPR 0
UP PM UPR 0
UP PRE UPR 0
UP G T UPR 0
Input lines are pre-processed to remove leading text up to ":" as well as any leading and trailing whitespace
From the first line, extract the comma-separated values into the @a array. The regex looks for strings of non-"," characters
For all other lines:
the same regex extracts the comma-separated values into the @b array
print in the desired order
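Since the question asks about awk, here is a rough awk equivalent of the same idea (a sketch only, assuming a POSIX awk and the ip.txt layout shown above; the variable names are mine):
awk -F',' '
{ sub(/^[^,]*:[[:space:]]*/, "") }                            # drop any leading "label:" prefix
NR == 1 { n = NF; for (i = 1; i <= n; i++) a[i] = $i; next }  # remember the header fields
{ for (i = 1; i <= n; i++) print $1, a[i], $2, $(i+2) }       # first field, header name, second field, matching value
' ip.txt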

@sundeep: thanks for your answer. The script below works:
perl -nle '
if($. == 1)
{ (@a)=/([\w -]+)(?=,|\s*$)/g }
else
{
(@b)=/([\w -]+)(?=,|\s*$)/g;
print "$b[0] $a[$_] $b[1] $b[$_+2]" foreach (0..$#a)
}
' ip.txt

Related

How to use awk to pull fields and make them variables to do data calculation

I am using awk to compute Mean Fractional Bias from a data file. How can I make the data points variables to feed into my equation?
Input.....
col1 col2
row #1 Yavg: 14.87954
row #2 Xavg: 20.83804
row #3 Ystd: 7.886613
row #4 Xstd: 8.628519
I am looking to feed into this equation....
MFB = .5 * (Yavg-Xavg)/[(Yavg+Xavg)/2]
output....
col1 col2
row #1 Yavg: 14.87954
row #2 Xavg: 20.83804
row #3 Ystd: 7.886613
row #4 Xstd: 8.628519
row #5 MFB: (computed value)
Currently I am trying to use the following code to do this, but it is not working:
var= 'linear_reg-County119-O3-2004-Winter2013-2018XYstats.out.out'
val1=$(awk -F, OFS=":" "NR==2{print $2; exit}" <$var)
val2=$(awk -F, OFS=":" "NR==1{print $2; exit}" <$var)
#MFB = .5*((val2-val1)/((val2+val1)/2))
awk '{ print "MFB :" .5*((val2-val1)/((val2+val1)/2))}' >> linear_regCounty119-O3-2004-Winter2013-2018XYstats-wMFB.out
Try running: awk -f mfb.awk input.txt where
mfb.awk:
BEGIN { FS = OFS = ": " } # set the separators
{ v[$1] = $2; print } # store each line in an array named "v"
END {
MFB = 0.5 * (v["Yavg"] - v["Xavg"]) / ((v["Yavg"] + v["Xavg"]) / 2)
print "MFB", MFB
}
input.txt:
Yavg: 14.87954
Xavg: 20.83804
Ystd: 7.886613
Xstd: 8.628519
Output:
Yavg: 14.87954
Xavg: 20.83804
Ystd: 7.886613
Xstd: 8.628519
MFB: -0.166823
Alternatively, mfb.awk can be the following, resembling your original code:
BEGIN { FS = OFS = ": " }
{ print }
NR == 1 { Yavg = $2 } NR == 2 { Xavg = $2 }
END {
MFB = 0.5 * (Yavg - Xavg) / ((Yavg + Xavg) / 2)
print "MFB", MFB
}
Note that you don't usually toss variables back and forth between the shell and Awk (at least when you deal with a single input file).
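If a computed value really is needed back in the shell afterwards, capturing awk's output is the usual route. A minimal sketch, assuming the input.txt layout shown above:
mfb=$(awk -F': ' 'NR == 1 { y = $2 } NR == 2 { x = $2 }
      END { print 0.5 * (y - x) / ((y + x) / 2) }' input.txt)
echo "MFB: $mfb"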

Combining big data files with different columns into one big file

I have N tab-separated files. Each file has a header line saying the names of the columns. Some of the columns are common to all of the files, but some are unique.
I want to combine all of the files into one big file containing all of the relevant headers.
Example:
> cat file1.dat
a b c
5 7 2
3 9 1
> cat file2.dat
a b e f
2 9 8 3
2 8 3 3
1 0 3 2
> cat file3.dat
a c d g
1 1 5 2
> merge file*.dat
a b c d e f g
5 7 2 - - - -
3 9 1 - - - -
2 9 - - 8 3 -
2 8 - - 3 3 -
1 0 - - 3 2 -
1 - 1 5 - - 2
The - can be replaced by anything, for example NA.
Caveat: the files are so big that I can not load all of them into memory simultaneously.
I had a solution in R using
write.table(do.call(plyr:::rbind.fill,
Map(function(filename)
read.table(filename, header=1, check.names=0),
filename=list.files('.'))),
'merged.dat', quote=FALSE, sep='\t', row.names=FALSE)
but this fails with a memory error when the data are too large.
What is the best way to accomplish this?
I am thinking the best route will be to first loop through all the files to collect the column names, then loop through the files to put them into the right format, and write them to disc as they are encountered. However, is there perhaps already some code available that performs this?
From an algorithm point of view I would take the following steps:
Process the headers:
read all headers of all input files and extract all column names
sort the column names in the order you want
create a lookup table which returns the column-name when a field number is given (h[n] -> "name")
process the files: after the headers, you can reprocess the files
read the header of the file
create a lookup table which returns the field number when given a column name. An associative array is useful here: (a["name"] -> field_number)
process the remainder of the file
loop over all fields of the merged file
get the column name with h
check if the column name is in a, if not print -, if so print the field number corresponding with a.
This is easily done with GNU awk, making use of the extensions nextfile and asorti. The nextfile statement allows us to read only the header and move on to the next file without processing the full file. Since we need to process each file twice (step 1 reads the header, step 2 reads the data), we ask awk to dynamically manipulate its argument list. Every time a file's header is processed, the file is added at the end of the argument list ARGV so it can be used again for step 2.
BEGIN { s="-" } # define symbol
BEGIN { f=ARGC-1 } # get total number of files
f { for (i=1;i<=NF;++i) h[$i] # read headers in associative array h[key]
ARGV[ARGC++] = FILENAME # add file at end of argument list
if (--f == 0) { # did we process all headers?
n=asorti(h) # sort header into h[idx] = key
for (i=1;i<=n;++i) # print header
printf "%s%s", h[i], (i==n?ORS:OFS)
}
nextfile # end of processing headers
}
# Start of processing the files
(FNR==1) { delete a; for(i=1;i<=NF;++i) a[$i]=i; next } # read header
{ for(i=1;i<=n;++i) printf "%s%s", (h[i] in a ? $(a[h[i]]) : s), (i==n?ORS:OFS) }
If you store the above in a file merge.awk you can use the command:
awk -f merge.awk f1 f2 f3 f4 ... fx
A similar way, but with less hassle with f:
BEGIN { s="-" } # define symbol
BEGIN { # modify argument list from
c=ARGC; # from: arg1 arg2 ... argx
ARGV[ARGC++]="f=1" # to: arg1 arg2 ... argx f=1 arg1 arg2 ... argx
for(i=1;i<c;++i) ARGV[ARGC++]=ARGV[i]
}
!f { for (i=1;i<=NF;++i) h[$i] # read headers in associative array h[key]
nextfile
}
(f==1) && (FNR==1) { # process merged header
n=asorti(h) # sort header into h[idx] = key
for (i=1;i<=n;++i) # print header
printf "%s%s", h[i], (i==n?ORS:OFS)
f=2
}
# Start of processing the files
(FNR==1) { delete a; for(i=1;i<=NF;++i) a[$i]=i; next } # read header
{ for(i=1;i<=n;++i) printf "%s%s", (h[i] in a ? $(a[h[i]]) : s), (i==n?ORS:OFS) }
This method is slightly different, but allows the processing of files with different field separators as
awk -f merge.awk f1 FS="," f2 f3 FS="|" f4 ... fx
If your argument list becomes too long, you can use awk to create it for you:
BEGIN { s="-" } # define symbol
BEGIN { # read argument list from input file:
fname=(ARGC==1 ? "-" : ARGV[1])
ARGC=1 # from: filelist or /dev/stdin
while ((getline < fname) > 0) # to: arg1 arg2 ... argx
ARGV[ARGC++]=$0
}
BEGIN { # modify argument list from
c=ARGC; # from: arg1 arg2 ... argx
ARGV[ARGC++]="f=1" # to: arg1 arg2 ... argx f=1 arg1 arg2 ... argx
for(i=1;i<c;++i) ARGV[ARGC++]=ARGV[i]
}
!f { for (i=1;i<=NF;++i) h[$i] # read headers in associative array h[key]
nextfile
}
(f==1) && (FNR==1) { # process merged header
n=asorti(h) # sort header into h[idx] = key
for (i=1;i<=n;++i) # print header
printf "%s%s", h[i], (i==n?ORS:OFS)
f=2
}
# Start of processing the files
(FNR==1) { delete a; for(i=1;i<=NF;++i) a[$i]=i; next } # read header
{ for(i=1;i<=n;++i) printf "%s%s", (h[i] in a ? $(a[h[i]]) : s), (i==n?ORS:OFS) }
which can be run as:
$ awk -f merge.awk filelist
$ find . | awk -f merge.awk "-"
$ find . | awk -f merge.awk
or any similar command.
As you can see, by adding only a tiny block of code, we were able to flexibly adjust the awk code to support our needs.
Miller (johnkerl/miller) is underused when dealing with huge files. It bundles features from many of the useful file-processing tools out there. As the official documentation says:
Miller is like awk, sed, cut, join, and sort for name-indexed data such as CSV, TSV, and tabular JSON. You get to work with your data using named fields, without needing to count positional column indices.
For this particular case, it supports the verb unsparsify, which the documentation describes as follows:
Prints records with the union of field names over all input records.
For field names absent in a given record but present in others, fills in
a value. This verb retains all input before producing any output.
You just need to do the following, reordering the columns back into the positions you desire:
mlr --tsvlite --opprint unsparsify then reorder -f a,b,c,d,e,f file{1..3}.dat
which produces the output in one shot:
a b c d e f g
5 7 2 - - - -
3 9 1 - - - -
2 9 - - 8 3 -
2 8 - - 3 3 -
1 0 - - 3 2 -
1 - 1 5 - - 2
You can even customize the character used to fill the empty fields; the default is -. For a custom character, use unsparsify --fill-with '#'.
A brief explanation of the options used:
To treat the input stream as tab-delimited content: --tsvlite
To pretty-print the tabular data: --opprint
And unsparsify, as explained above, takes the union of all the field names over the whole input stream
The reordering verb reorder is needed because the column headers appear in random order between the files. So to define the order explicitly, use the -f option with the column headers you want the output to appear with.
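Putting it together, a variant of the earlier command that fills the gaps with NA instead of the default dash (a sketch only, using the same sample files) would be:
mlr --tsvlite --opprint unsparsify --fill-with 'NA' then reorder -f a,b,c,d,e,f file{1..3}.dat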
Installation of the package is straightforward. Miller is written in portable, modern C, with zero runtime dependencies, and it can be installed through all the major package managers: Homebrew, MacPorts, apt-get, apt, and yum.
Given your updated information in the comments about having about 10^5 input files (and so exceeding the shell's max number of args for a non-builtin command) and wanting the output columns in the order they're seen rather than alphabetically sorted, the following will work using any awk and any find:
$ cat tst.sh
#!/bin/env bash
find . -maxdepth 1 -type f -name "$1" |
awk '
NR==FNR {
fileName = $0
ARGV[ARGC++] = fileName
if ( (getline fldList < fileName) > 0 ) {
if ( !seenList[fldList]++ ) {
numFlds = split(fldList,fldArr)
for (inFldNr=1; inFldNr<=numFlds; inFldNr++) {
fldName = fldArr[inFldNr]
if ( !seenName[fldName]++ ) {
hdr = (numOutFlds++ ? hdr OFS : "") fldName
outNr2name[numOutFlds] = fldName
}
}
}
}
close(fileName)
next
}
FNR == 1 {
if ( !doneHdr++ ) {
print hdr
}
delete name2inNr
for (inFldNr=1; inFldNr<=NF; inFldNr++) {
fldName = $inFldNr
name2inNr[fldName] = inFldNr
}
next
}
{
for (outFldNr=1; outFldNr<=numOutFlds; outFldNr++) {
fldName = outNr2name[outFldNr]
inFldNr = name2inNr[fldName]
fldValue = (inFldNr ? $inFldNr : "-")
printf "%s%s", fldValue, (outFldNr<numOutFlds ? OFS : ORS)
}
}
' -
.
$ ./tst.sh 'file*.dat'
a b c e f d g
5 7 2 - - - -
3 9 1 - - - -
2 9 - 8 3 - -
2 8 - 3 3 - -
1 0 - 3 2 - -
1 - 1 - - 5 2
Note that input to the script is now the globbing pattern you want find to use to find the files, not the list of files.
Original answer:
If you don't mind a combined shell+awk script then this will work using any awk:
$ cat tst.sh
#!/bin/env bash
awk -v hdrs="$(head -1 -q "$@" | tr ' ' '\n' | sort -u)" '
BEGIN {
numOutFlds = split(hdrs,outNr2name)
for (outFldNr=1; outFldNr<=numOutFlds; outFldNr++) {
fldName = outNr2name[outFldNr]
printf "%s%s", fldName, (outFldNr<numOutFlds ? OFS : ORS)
}
}
FNR == 1 {
delete name2inNr
for (inFldNr=1; inFldNr<=NF; inFldNr++) {
fldName = $inFldNr
name2inNr[fldName] = inFldNr
}
next
}
{
for (outFldNr=1; outFldNr<=numOutFlds; outFldNr++) {
fldName = outNr2name[outFldNr]
inFldNr = name2inNr[fldName]
fldValue = (inFldNr ? $inFldNr : "-")
printf "%s%s", fldValue, (outFldNr<numOutFlds ? OFS : ORS)
}
}
' "$@"
.
$ ./tst.sh file{1..3}.dat
a b c d e f g
5 7 2 - - - -
3 9 1 - - - -
2 9 - - 8 3 -
2 8 - - 3 3 -
1 0 - - 3 2 -
1 - 1 5 - - 2
otherwise this is all awk using GNU awk for arrays of arrays, sorted_in, and ARGIND:
$ cat tst.awk
BEGIN {
for (inFileNr=1; inFileNr<ARGC; inFileNr++) {
inFileName = ARGV[inFileNr]
if ( (getline < inFileName) > 0 ) {
for (inFldNr=1; inFldNr<=NF; inFldNr++) {
fldName = $inFldNr
name2inNr[fldName][inFileNr] = inFldNr
}
}
close(inFileName)
}
PROCINFO["sorted_in"] = "@ind_str_asc"
for (fldName in name2inNr) {
printf "%s%s", (numOutFlds++ ? OFS : ""), fldName
for (inFileNr in name2inNr[fldName]) {
outNr2inNr[numOutFlds][inFileNr] = name2inNr[fldName][inFileNr]
}
}
print ""
}
FNR > 1 {
for (outFldNr=1; outFldNr<=numOutFlds; outFldNr++) {
inFldNr = outNr2inNr[outFldNr][ARGIND]
fldValue = (inFldNr ? $inFldNr : "-")
printf "%s%s", fldValue, (outFldNr<numOutFlds ? OFS : ORS)
}
}
.
$ awk -f tst.awk file{1..3}.dat
a b c d e f g
5 7 2 - - - -
3 9 1 - - - -
2 9 - - 8 3 -
2 8 - - 3 3 -
1 0 - - 3 2 -
1 - 1 5 - - 2
For efficiency the 2nd script above does all the heavy lifting in the BEGIN section so there's as little work left to do as possible in the main body of the script that's evaluated once per input line. In the BEGIN section it creates an associative array (outNr2inNr[]) that maps the outgoing field numbers (alphabetically sorted list of all field names across all input files) to the incoming field numbers so all that's left to do in the body is print the fields in that order.
Here is the solution I (the OP) have come up with so far. It may have some advantage over other approaches in that it processes the files in parallel.
R code:
library(parallel)
library(parallelMap)
# specify the directory containing the files we want to merge
args <- commandArgs(TRUE)
directory <- if (length(args)>0) args[1] else 'sg_grid'
#output_fname <- paste0(directory, '.dat')
# make a tmp directory that will store all the files
tmp_dir <- paste0(directory, '_tmp')
dir.create(tmp_dir)
# list the .dat files we want to merge
filenames <- list.files(directory)
filenames <- filenames[grep('.dat', filenames)]
# a function to read the column names
get_col_names <- function(filename)
colnames(read.table(file.path(directory, filename),
header=T, check.names=0, nrow=1))
# grab all the headers of all the files and merge them
col_names <- get_col_names(filenames[1])
for (simulation in filenames) {
col_names <- union(col_names, get_col_names(simulation))
}
# put those column names into a blank data frame
name_DF <- data.frame(matrix(ncol = length(col_names), nrow = 0))
colnames(name_DF) <- col_names
# save that as the header file
write.table(name_DF, file.path(tmp_dir, '0.dat'),
col.names=TRUE, row.names=F, quote=F, sep='\t')
# now read in every file and merge with the blank data frame
# it will have NAs in any columns it didn't have before
# save it to the tmp directory to be merged later
parallelStartMulticore(max(1,
min(as.numeric(Sys.getenv('OMP_NUM_THREADS')), 62)))
success <- parallelMap(function(filename) {
print(filename)
DF <- read.table(file.path(directory, filename),
header=1, check.names=0)
DF <- plyr:::rbind.fill(name_DF, DF)
write.table(DF, file.path(tmp_dir, filename),
quote=F, col.names=F, row.names=F, sep='\t')
}, filename=filenames)
# and we're done
print(all(unlist(success)))
This creates temporary versions of all the files, which each now have all the headers, which we can then cat together into the result:
ls -1 sg_grid_tmp/* | while read fn ; do cat "$fn" >> sg_grid.dat; done

Comparing two files column by column in unix shell

I need to compare two files column by column using unix shell, and store the difference in a resulting file.
For example, if column 1 of the 1st record of the 1st file matches column 1 of the 1st record of the 2nd file, then the result will be stored as '=' in the resulting file for that column; but if any difference is found in the column values, the differing values need to be printed in the resulting file.
Below is the exact requirement.
File 1:
id code name place
123 abc Tom phoenix
345 xyz Harry seattle
675 kyt Romil newyork
File 2:
id code name place
123 pkt Rosy phoenix
345 xyz Harry seattle
421 uty Romil Sanjose
Expected resulting file:
id_1 id_2 code_1 code_2 name_1 name_2 place_1 place_2
= = abc pkt Tom Rosy = =
= = = = = = = =
675 421 kyt uty = = Newyork Sanjose
Columns are tab delimited.
This is rather crudely coded, but shows a way to use awk to emit what you want, and can handle files of identical "schema" - not just the particular 4-field files you give as tests.
This approach uses pr to do a simple merge of the files: the same line of each input file is concatenated to present one line to the awk script.
The awk script assumes clean input, and uses the fact that if a variable n has the value 2, the value of $n in the script is the same as $2. So, the script walks through pairs of fields using the i and j variables. For your test input, fields 1 and 5, then 2 and 6, etc., are processed.
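For example, a quick illustration of that field indirection (not part of the script):
$ echo 'a b c' | awk '{ n = 2; print $n }'
b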
Only very limited testing of input is performed: mainly, that the implied schema of the two input files (the names of columns/fields) is the same.
#!/bin/sh
[ $# -eq 2 ] || { echo "Usage: ${0##*/} <file1> <file2>" 1>&2; exit 1; }
[ -r "$1" -a -r "$2" ] || { echo "$1 or $2: cannot read" 1>&2; exit 1; }
set -e
pr -s -t -m "$@" | \
awk '
{
offset = int(NF/2)
tab = ""
for (i = 1; i <= offset; i++) {
j = i + offset
if (NR == 1) {
if ($i != $j) {
printf "\nColumn name mismatch (%s/%s)\n", $i, $j > "/dev/stderr"
exit
}
printf "%s%s_1\t%s_2", tab, $i, $j
} else if ($i == $j) {
printf "%s=\t=", tab
} else {
printf "%s%s\t%s", tab, $i, $j
}
tab = "\t"
}
printf "\n"
}
'
Tested on Linux: GNU Awk 4.1.0 and pr (GNU coreutils) 8.21.

How to concatenate the values based on other field in unix

I have a detail.txt file, which contains
cat >detail.txt
Student ID,Student Name, Percentage
101,A,75
102,B,77
103,C,34
104,D,42
105,E,75
106,F,42
107,G,77
1. I want to print concatenated output based on Percentage (group by Percentage) and print the student names on a single line separated by commas (,).
Expected Output:
75-A,E
77-B,G
42-D,F
34-C
For the above question I found out how to achieve this for 75 or 77 or 42, but I did not get how to write code that groups by the third field (Percentage).
I tried the code below:
awk -F"," '{OFS=",";if($3=="75") print $2}' detail.txt
2. I want to get output based on the grading system given below.
marks < 45 = THIRD
marks >= 45 and marks < 60 = SECOND
marks >= 60 and marks <= 75 = FIRST
marks > 75 = DIST
Expected Output:
DIST:B,G
FIRST:A,E
THIRD:C,D,F
Please help me to get the expected output. Thank you.
awk solution:
awk -F, 'NR>1{
if ($3<45) k="THIRD"; else if ($3>=45 && $3<60) k="SECOND";
else if ($3>=60 && $3<=75) k="FIRST"; else k="DIST";
a[k] = a[k]? a[k]","$2 : $2;
}END{ for(i in a) print i":"a[i] }' detail.txt
k - the variable that will be assigned the "grading system" name according to one of the if (...) <exp>; else if (...) <exp> ... statements
a[k] - array a is indexed by the determined "grading system" name k
a[k] = a[k]? a[k]","$2 : $2 - all "student names" (given by the 2nd field $2) are accumulated/grouped under the corresponding "grading system"
The output:
DIST:B,G
THIRD:C,D,F
FIRST:A,E
With GNU awk for true multi-dimensional arrays:
$ cat tst.awk
BEGIN { FS=OFS="," }
NR>1 {
stud = $2
pct = $3
if ( pct <= 45 ) { band = "THIRD" }
else if ( pct <= 60 ) { band = "SECOND" }
else if ( pct <= 75 ) { band = "FIRST" }
else { band = "DIST" }
pcts[pct][stud]
bands[band][stud]
}
END {
for (pct in pcts) {
out = ""
for (stud in pcts[pct]) {
out = (out == "" ? pct "-" : out OFS) stud
}
print out
}
print "----"
for (band in bands) {
out = ""
for (stud in bands[band]) {
out = (out == "" ? band ":" : out OFS) stud
}
print out
}
}
.
$ gawk -f tst.awk file
34-C
42-D,F
75-A,E
77-B,G
----
DIST:B,G
THIRD:C,D,F
FIRST:A,E
For your first question, the following awk one-liner should do:
awk -F, '{a[$3]=a[$3] (a[$3] ? "," : "") $2} END {for(i in a) printf "%s-%s\n", i, a[i]}' input.txt
The second question can work almost the same way, storing your mark divisions in an array, then stepping through that array to determine the subscript for a new array:
BEGIN { FS=","; m[0]="THIRD"; m[45]="SECOND"; m[60]="FIRST"; m[75]="DIST" } { for (i=0;i<=100;i++) if ((i in m) && $3 > i) mdiv=m[i]; marks[mdiv]=marks[mdiv] (marks[mdiv] ? "," : "") $2 } END { for(i in marks) printf "%s:%s\n", i, marks[i] }
But this is unreadable. When you need this level of complexity, you're past the point of a one-liner. :)
So, combining the two and breaking them out for easier reading (and commenting), we get the following:
BEGIN {
FS=","
m[0]="THIRD"
m[45]="SECOND"
m[60]="FIRST"
m[75]="DIST"
}
{
a[$3]=a[$3] (a[$3] ? "," : "") $2 # Build an array with percentage as the index
for (i=0;i<=100;i++) # Walk through the possible marks
if ((i in m) && $3 > i) mdiv=m[i] # selecting the correct divider on the way
marks[mdiv]=marks[mdiv] (marks[mdiv] ? "," : "") $2
# then build another array with divider
# as the index
}
END { # Once we've processed all the input,
for(i in a) # step through the array,
printf "%s-%s\n", i, a[i] # printing the results.
print "----"
for(i in marks) # step through the array,
printf "%s:%s\n", i, marks[i] # printing the results.
}
You may be wondering why we use for (i=0;i<=100;i++) instead of simply for (i in m). This is because awk does not guarantee the order of array elements, and when stepping through the m array it's important that we see the keys in increasing order.
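If GNU awk is available, an ordered traversal can replace the 0..100 loop. A minimal sketch of the grading part only, keeping the same m thresholds but relying on gawk's PROCINFO["sorted_in"] extension (and skipping the header line of detail.txt):
BEGIN {
    FS = ","
    m[0] = "THIRD"; m[45] = "SECOND"; m[60] = "FIRST"; m[75] = "DIST"
    PROCINFO["sorted_in"] = "@ind_num_asc"   # visit array keys in ascending numeric order
}
NR > 1 {
    mdiv = ""
    for (t in m)                             # thresholds now come out in order
        if ($3 > t) mdiv = m[t]              # keep the highest threshold passed
    marks[mdiv] = marks[mdiv] (marks[mdiv] ? "," : "") $2
}
END { for (i in marks) printf "%s:%s\n", i, marks[i] }
Run it as, say, gawk -f grade.awk detail.txt (grade.awk being whatever you name the file).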

How to gather characters usage statistics in text file using Unix commands?

I have got a text file created using OCR software - about one megabyte in size.
Some uncommon characters appear all over the document, and most of them are OCR errors.
I would like to find all characters used in the document so I can easily spot errors (like the uniq command, but for characters, not lines).
I am on Ubuntu.
What Unix command should I use to display all characters used in a text file?
This should do what you're looking for:
cat inputfile | sed 's/\(.\)/\1\n/g' | sort | uniq -c
The premise is that the sed puts each character in the file onto a line by itself, then the usual sort | uniq -c sequence strips out all but one of each unique character that occurs, and provides counts of how many times each occurred.
Also, you could append | sort -n to the end of the whole sequence to sort the output by how many times each character occurred. Example:
$ echo hello | sed 's/\(.\)/\1\n/g' | sort | uniq -c | sort -n
1
1 e
1 h
1 o
2 l
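Note that the \n in the sed replacement is a GNU sed feature; the same counting can also be done entirely in awk (a sketch only, assuming a POSIX awk; inputfile is a placeholder name):
awk '
{
    for (i = 1; i <= length($0); i++)   # visit every character on the line
        counts[substr($0, i, 1)]++      # ... and tally it (newlines themselves are not counted)
}
END {
    for (c in counts) printf "%7d %s\n", counts[c], c
}
' inputfile | sort -n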
This will do it:
#!/usr/bin/perl -n
#
# charcounts - show how many times each code point is used
# Tom Christiansen <tchrist@perl.com>
use open ":utf8";
++$seen{ ord() } for split //;
END {
for my $cp (sort {$seen{$b} <=> $seen{$a}} keys %seen) {
printf "%04X %d\n", $cp, $seen{$cp};
}
}
Run on itself, that program produces:
$ charcounts /tmp/charcounts | head
0020 46
0065 20
0073 18
006E 15
000A 14
006F 12
0072 11
0074 10
0063 9
0070 9
If you want the literal character and/or name of the character, too, that’s easy to add.
If you want something more sophisticated, this program figures out characters by Unicode property. It may be enough for your purposes, and if not, you should be able to adapt it.
#!/usr/bin/perl
#
# unicats - show character distribution by Unicode character property
# Tom Christiansen <tchrist@perl.com>
use strict;
use warnings qw<FATAL all>;
use open ":utf8";
my %cats;
our %Prop_Table;
build_prop_table();
if (@ARGV == 0 && -t STDIN) {
warn <<"END_WARNING";
$0: reading UTF-8 character data directly from your tty
\tSo please type stuff...
\t and then hit your tty's EOF sequence when done.
END_WARNING
}
while (<>) {
for (split(//)) {
$cats{Total}++;
if (/\p{ASCII}/) { $cats{ASCII}++ }
else { $cats{Unicode}++ }
my $gcat = get_general_category($_);
$cats{$gcat}++;
my $subcat = get_general_subcategory($_);
$cats{$subcat}++;
}
}
my $width = length $cats{Total};
my $mask = "%*d %s\n";
for my $cat(qw< Total ASCII Unicode >) {
printf $mask, $width => $cats{$cat} || 0, $cat;
}
print "\n";
my @catnames = qw[
L Lu Ll Lt Lm Lo
N Nd Nl No
S Sm Sc Sk So
P Pc Pd Ps Pe Pi Pf Po
M Mn Mc Me
Z Zs Zl Zp
C Cc Cf Cs Co Cn
];
#for my $cat (sort keys %cats) {
for my $cat (@catnames) {
next if length($cat) > 2;
next unless $cats{$cat};
my $prop = length($cat) == 1
? ( " " . q<\p> . $cat )
: ( q<\p> . "{$cat}" . "\t" )
;
my $desc = sprintf("%-6s %s", $prop, $Prop_Table{$cat});
printf $mask, $width => $cats{$cat}, $desc;
}
exit;
sub get_general_category {
my $_ = shift();
return "L" if /\pL/;
return "S" if /\pS/;
return "P" if /\pP/;
return "N" if /\pN/;
return "C" if /\pC/;
return "M" if /\pM/;
return "Z" if /\pZ/;
die "not reached one: $_";
}
sub get_general_subcategory {
my $_ = shift();
return "Lu" if /\p{Lu}/;
return "Ll" if /\p{Ll}/;
return "Lt" if /\p{Lt}/;
return "Lm" if /\p{Lm}/;
return "Lo" if /\p{Lo}/;
return "Mn" if /\p{Mn}/;
return "Mc" if /\p{Mc}/;
return "Me" if /\p{Me}/;
return "Nd" if /\p{Nd}/;
return "Nl" if /\p{Nl}/;
return "No" if /\p{No}/;
return "Pc" if /\p{Pc}/;
return "Pd" if /\p{Pd}/;
return "Ps" if /\p{Ps}/;
return "Pe" if /\p{Pe}/;
return "Pi" if /\p{Pi}/;
return "Pf" if /\p{Pf}/;
return "Po" if /\p{Po}/;
return "Sm" if /\p{Sm}/;
return "Sc" if /\p{Sc}/;
return "Sk" if /\p{Sk}/;
return "So" if /\p{So}/;
return "Zs" if /\p{Zs}/;
return "Zl" if /\p{Zl}/;
return "Zp" if /\p{Zp}/;
return "Cc" if /\p{Cc}/;
return "Cf" if /\p{Cf}/;
return "Cs" if /\p{Cs}/;
return "Co" if /\p{Co}/;
return "Cn" if /\p{Cn}/;
die "not reached two: <$_> " . sprintf("U+%vX", $_);
}
sub build_prop_table {
for my $line (<<"End_of_Property_List" =~ m{ \S .* \S }gx) {
L Letter
Lu Uppercase_Letter
Ll Lowercase_Letter
Lt Titlecase_Letter
Lm Modifier_Letter
Lo Other_Letter
M Mark (combining characters, including diacritics)
Mn Nonspacing_Mark
Mc Spacing_Mark
Me Enclosing_Mark
N Number
Nd Decimal_Number (also Digit)
Nl Letter_Number
No Other_Number
P Punctuation
Pc Connector_Punctuation
Pd Dash_Punctuation
Ps Open_Punctuation
Pe Close_Punctuation
Pi Initial_Punctuation (may behave like Ps or Pe depending on usage)
Pf Final_Punctuation (may behave like Ps or Pe depending on usage)
Po Other_Punctuation
S Symbol
Sm Math_Symbol
Sc Currency_Symbol
Sk Modifier_Symbol
So Other_Symbol
Z Separator
Zs Space_Separator
Zl Line_Separator
Zp Paragraph_Separator
C Other (means not L/N/P/S/Z)
Cc Control (also Cntrl)
Cf Format
Cs Surrogate (not usable)
Co Private_Use
Cn Unassigned
End_of_Property_List
my($short_prop, $long_prop) = $line =~ m{
\b
( \p{Lu} \p{Ll} ? )
\s +
( \p{Lu} [\p{L&}_] + )
\b
}x;
$Prop_Table{$short_prop} = $long_prop;
}
}
For example:
$ unicats book.txt
2357232 Total
2357199 ASCII
33 Unicode
1604949 \pL Letter
74455 \p{Lu} Uppercase_Letter
1530485 \p{Ll} Lowercase_Letter
9 \p{Lo} Other_Letter
10676 \pN Number
10676 \p{Nd} Decimal_Number
19679 \pS Symbol
10705 \p{Sm} Math_Symbol
8365 \p{Sc} Currency_Symbol
603 \p{Sk} Modifier_Symbol
6 \p{So} Other_Symbol
111899 \pP Punctuation
2996 \p{Pc} Connector_Punctuation
6145 \p{Pd} Dash_Punctuation
11392 \p{Ps} Open_Punctuation
11371 \p{Pe} Close_Punctuation
79995 \p{Po} Other_Punctuation
548529 \pZ Separator
548529 \p{Zs} Space_Separator
61500 \pC Other
61500 \p{Cc} Control
As far as using *nix commands goes, the answer above is good, but it doesn't give usage statistics.
However, if you actually want stats (like the rarest used, median, most used, etc.) for the file, this Python should do it.
def get_char_counts(fname):
    usage = {}
    with open(fname) as f:              # closes the file automatically
        for c in f.read():
            usage[c] = usage.get(c, 0) + 1
    return usage
