Unix Piping "AWK" - summation while matching - unix

Below I have some raw data. My goal is to match the 'column one' values and output the total number of bytes on a single line for each IP address.
Example output:
81.220.49.127 6654
81.226.10.238 328
81.227.128.93 84700
Raw Data:
81.220.49.127 328
81.220.49.127 328
81.220.49.127 329
81.220.49.127 367
81.220.49.127 5302
81.226.10.238 328
81.227.128.93 84700
Can anyone advise me on how to do this?

Using an associative array:
awk '{a[$1]+=$2}END{for (i in a){print i,a[i]}}' infile
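The same program, expanded with comments (functionally identical):
awk '{ a[$1] += $2 }                      # sum column 2, keyed by the IP in column 1
     END { for (i in a) print i, a[i] }   # after the last line, print each IP and its total
' infile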
An alternative that preserves input order (for (i in a) iterates in an unspecified order):
awk '!($1 in a){b[++cont]=$1}{a[$1]+=$2}END{for (c=1;c<=cont;c++){print b[c],a[b[c]]}}' infile
Another way, where arrays are not needed (this relies on the input being grouped by the first column; see the note after the results):
awk 'lip != $1 && lip != ""{print lip,sum;sum=0}
{sum+=$NF;lip=$1}
END{print lip,sum}' infile
Result
81.220.49.127 6654
81.226.10.238 328
81.227.128.93 84700
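Note that the third, array-free variant assumes the input is grouped by the first column (as the sample data is); if it is not, sort first, e.g.:
sort -k1,1 infile | awk 'lip != $1 && lip != ""{print lip,sum;sum=0}
                         {sum+=$NF;lip=$1}
                         END{print lip,sum}'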

Awk program to compare the number of space-separated fields of each line

I am trying to check whether every line in a file has the same length (i.e., the same number of fields).
I am doing the following but it does not seem to work:
NR==1 {length=NF}
NR>1 && NF!=length {print}
Can this be done with an awk one-liner? A longer program is also fine.
A sample of input would be:
12 34 54 56
12 89 34 33
12
29 56 42 42
My expected output would be "yes" if all lines have the same number of fields, and "no" otherwise.
You could try this command, which checks the number of fields in each line and compares it to the number of fields of the first line. (Your version fails because length is a built-in awk function and cannot be used as a variable name.)
awk 'NR==1{a=NF; b=0} (NR>1 && NF!=a){print "No"; b=1; exit 1}END{if (b==0) print "Yes"}' test.txt
Checking is aborted at the first line whose number of fields differs from that of the first line of input.
For input
12 43 43
12 32
you will get "No"
Try:
awk 'BEGIN{a="yes"} last!="" && NF!=last{a="no"; exit} {last=NF} END{print a}' file
How it works
BEGIN{a="yes"}
This initializes the variable a to yes. (We assume all lines have the same number of fields until proven otherwise.)
last!="" && NF!=last{a="no"; exit}
If last has been assigned a value and the number of fields on the current line is not the same as last, then set a to no and exit.
{last=NF}
Update last to the number of fields on the current line.
END{print a}
Before exiting, print a.
Examples
$ cat file1
2 34 54 56
12 89 34 33
12
29 56 42 42
$ awk 'BEGIN{a="yes"} last!="" && NF!=last{a="no"; exit} {last=NF} END{print a}' file1
no
$ cat file2
2 34 54 56
12 89 34 33
29 56 42 42
$ awk 'BEGIN{a="yes"} last!="" && NF!=last{a="no"; exit} {last=NF} END{print a}' file2
yes
I am assuming that you want to check whether all lines have the same number of fields; if that is the case, then try the following.
awk '
FNR==1{                           # remember the field count of the first line
  value=NF
  count++
  next
}
{
  count=NF==value?++count:count   # count lines matching the first line
}
END{
  if(count==FNR){                 # every line matched
    print "All lines are of same fields"
  }
  else{
    print "All lines are NOT of same fields."
  }
}
' Input_file
Additional stuff (only if required): in case you want to print the contents of the file, along with a message saying all lines have the same number of fields, when the check passes, then try the following.
awk '
{
  val=val?val ORS $0:$0           # accumulate the whole file in val
}
FNR==1{
  value=NF
  count++
  next
}
{
  count=NF==value?++count:count
}
END{
  if(count==FNR){
    print "All lines are of same fields" ORS val
  }
  else{
    print "All lines are NOT of same fields."
  }
}
' Input_file
This should do:
$ awk 'NR==1{p=NF} p!=NF{s=1; exit} END{print s?"No":"Yes"}' file
However, setting the exit status would be better if this will be part of a workflow.
Since equality is transitive, there is no need to keep any NF other than the first line's; and using 0 as the success value means no initialization to a default value is required.
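Along those lines, a sketch that sets the exit status instead of printing, so the check composes directly in a shell conditional:
if awk 'NR==1{p=NF} p!=NF{exit 1}' file; then
    echo "Yes"
else
    echo "No"
fi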
An efficient even-fields shell function, using sed to construct an anchored regex (based on the first line of input) to feed to GNU grep, which looks for field-count mismatches:
# Usage: ef filename
ef() { sed '1{s/[^ ]*/[^ ]*/g;s/.*/^&$/;q}' "$1" | grep -v -m 1 -q -f - "$1" \
       && echo no || echo yes ; }
For files with uneven fields, grep -m 1 quits after the first non-uniform line -- so if the file is a million lines long but the mismatch occurs on line #2, grep only needs to read two lines, not a million. On the other hand, if there is no mismatch, grep has to read the whole file.
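Usage, with the file1 and file2 shown earlier:
$ ef file1
no
$ ef file2
yes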

R truncates text files with certain encodings

I'm trying to read into R a test file encoded in Code page 437. Here is the file, and here is its hex-dump:
00000000: 0b0c 0e0f 1011 1213 1415 1617 1819 1a1b ................
00000010: 1c1d 1e1f 2021 2223 2425 2627 2829 2a2b .... !"#$%&'()*+
00000020: 2c2d 2e2f 3031 3233 3435 3637 3839 3a3b ,-./0123456789:;
00000030: 3c3d 3e3f 4041 4243 4445 4647 4849 4a4b <=>?#ABCDEFGHIJK
00000040: 4c4d 4e4f 5051 5253 5455 5657 5859 5a5b LMNOPQRSTUVWXYZ[
00000050: 5c5d 5e5f 6061 6263 6465 6667 6869 6a6b \]^_`abcdefghijk
00000060: 6c6d 6e6f 7071 7273 7475 7677 7879 7a7b lmnopqrstuvwxyz{
00000070: 7c7d 7e7f ffad 9b9c 9da6 aeaa f8f1 fde6 |}~.............
00000080: faa7 afac aba8 8e8f 9280 90a5 999a e185 ................
00000090: a083 8486 9187 8a82 8889 8da1 8c8b a495 ................
000000a0: a293 94f6 97a3 9681 989f e2e9 e4e8 eae0 ................
000000b0: ebee e3e5 e7ed fc9e f9fb ecef f7f0 f3f2 ................
000000c0: a9f4 f5c4 b3da bfc0 d9c3 b4c2 c1c5 cdba ................
000000d0: d5d6 c9b8 b7bb d4d3 c8be bdbc c6c7 ccb5 ................
000000e0: b6b9 d1d2 cbcf d0ca d8d7 cedf dcdb ddde ................
000000f0: b0b1 b2fe 0a .....
The file contains 245 characters (including the final newline), but R only reads 242 of them:
> test_text <- readLines(file('437__characters.txt', encoding='437'))
Warning message:
In readLines(file("437__characters.txt", :
incomplete final line found on '437__characters.txt'
> test_text
[1] "\v\f\016\017\020\021\022\023\024\025\026\027\030\031\032\033\034\035\036\037 !\"#$%&'()*+,-./0123456789:;<=>?#ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~\177 ¡¢£¥ª«¬°±²µ·º»¼½¿ÄÅÆÇÉÑÖÜßàáâäåæçèéêëìíîïñòóôö÷ùúûüÿƒΓΘΣΦΩαδεπστφⁿ₧∙√∞∩≈≡≤≥⌐⌠⌡─│┌┐└┘├┤┬┴┼═║╒╓╔╕╖╗╘╙╚╛╜╝╞╟╠╡╢╣╤╥╦╧╨╩╪╫╬▀▄█▌▐░▒"
> nchar(test_text)
[1] 242
You'll note that R doesn't read the final characters "▓■\n".
My best guess is that this has something to do with how R determines the length of text files, because of the following:
Even though the file is terminated with a newline (0x0a), R gives an 'incomplete final line found' warning
Adding seven or more characters to the end of the file makes it read correctly
Similarly, the file is read correctly if you remove three characters from anywhere in the file
The same issue seems to occur with reading files encoded in other DOS code pages
This question might be related: R: read.table stops when meeting specific utf-16 characters.
It appears to be something wrong with readLines(), but it could very well be an issue with the text-mode file connection, with something going amiss in the encoding= handling. Anyway, here's a workaround: load the file as binary, and then convert. And stay away from bad-voodoo 1980s code pages.
Using readLines()
This does not capture the last \n, since the newline delimits the unit of text input for readLines().
test_text2 <- readLines(file("~/Downloads/437__characters.txt", raw = TRUE))
test_text3 <- stringi::stri_conv(test_text2, "IBM437", "UTF-8")
stringi::stri_length(test_text3)
## [1] 244
test_text3
## [1] "\v\f\016\017\020\021\022\023\024\025\026\027\030\031\034\033\177\035\036\037 !\"#$%&'()*+,-./
## 0123456789:;<=>?#ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~\032 ¡¢£¥ª«¬°±²μ·º
## »¼½¿ÄÅÆÇÉÑÖÜßàáâäåæçèéêëìíîïñòóôö÷ùúûüÿƒΓΘΣΦΩαδεπστφⁿ₧∙√∞∩≈≡≤≥⌐⌠⌡─│┌┐└┘├┤┬┴┼═║╒╓╔╕╖╗╘╙╚╛╜╝╞╟╠╡╢╣╤╥
## ╦╧╨╩╪╫╬▀▄█▌▐░▒▓■"
Using readBin()
This captures everything, including the \n.
test_text_bin <- readBin(file("~/Downloads/437__characters.txt", "rb"),
                         n = 245, what = "raw")
test_text_bin_UTF8 <- stringi::stri_conv(test_text_bin, "IBM437", "UTF-8")
stringi::stri_length(test_text_bin_UTF8)
## [1] 245
test_text_bin_UTF8
## [1] "\v\f\016\017\020\021\022\023\024\025\026\027\030\031\034\033\177\035\036\037 !\"#$%&'()*+,-./
## 0123456789:;<=>?#ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~\032 ¡¢£¥ª«¬°±²μ·º
## »¼½¿ÄÅÆÇÉÑÖÜßàáâäåæçèéêëìíîïñòóôö÷ùúûüÿƒΓΘΣΦΩαδεπστφⁿ₧∙√∞∩≈≡≤≥⌐⌠⌡─│┌┐└┘├┤┬┴┼═║╒╓╔╕╖╗╘╙╚╛╜╝╞╟╠╡╢╣╤╥
## ╦╧╨╩╪╫╬▀▄█▌▐░▒▓■\n"
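As a cross-check outside R, a small shell sketch (assuming an iconv build with CP437 support and a UTF-8 locale) confirms that all 245 bytes convert cleanly to 245 characters:
$ wc -c 437__characters.txt
245 437__characters.txt
$ iconv -f CP437 -t UTF-8 437__characters.txt | wc -m
245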

AWK: extract lines if column in file 1 falls within a range declared in two columns of another file

Currently I'm struggling with an AWK problem that I haven't been able to solve yet. I have one huge file (30GB) with genomic data that holds a list of positions (declared in columns 1 and 2), and a second file that holds a number of ranges (declared in columns 3, 4 and 5). I want to extract all lines in the first file where the position falls within a range declared in the second file. As a position is only unique within a certain chromosome (chr), it first has to be tested whether the chromosomes are identical (i.e., column 1 in file 1 matches column 3 in file 2).
file 1
chromosome position another....hundred.....columns
chr1 816 .....
chr1 991 .....
chr2 816 .....
chr2 880 .....
chr2 18768 .....
...
chr22 9736286 .....
file 2
name identifier chromosome start end
GENE1 ucsc.86 chr1 800 900
GENE2 ucsc.45 chr2 700 1700
GENE3 ucsc.46 chr2 18000 19000
expected output
chromosome position another....hundred.....columns
chr1 816 .....
chr2 816 .....
chr2 880 .....
chr2 18768 .....
A summary of what I intend to do (half-coded):
if ($1 (in file 1) matches $3 (in file 2)) {            ## test if in the correct chr
    if ($2 (in file 1) >= $4 && $2 <= $5 (in file 2)) { ## test if pos is in the range
        print $0 (in file 1)                            ## if so, print the row from file 1
    }
}
I kind of understand how to solve this problem by putting file 1 in an array and using the position as the index, but then I still have the problem with the chromosome, and besides, file 1 is way too big to put in an array (although I have 128GB of RAM). I've tried some things with multi-dimensional arrays but couldn't really figure out how to do that either.
Thanks a lot for all your help.
Update 8/5/14
Added a third line in file 2 containing another range on the same chromosome as the second line. This line is skipped by the script below.
It'd be something like this, untested:
awk '
NR==FNR{ start[$3] = $4; end[$3] = $5; next }
(FNR==1) || ( ($1 in start) && ($2 >= start[$1]) && ($2 <= end[$1]) )
' file2 file1
The change in your data set actually modified the question greatly. You introduced an element which was used as a key, and since keys have to be unique, it got overwritten.
For your data set, you are better off making composite keys. Something like:
awk '
NR==FNR { range[$3,$4,$5]; next }
FNR==1
{
    for (x in range) {
        split(x, check, SUBSEP);
        if ($1==check[1] && $2>=check[2] && $2<=check[3]) print $0
    }
}
' file2 file1
chromosome position another....hundred.....columns
chr1 816 .....
chr2 816 .....
chr2 880 .....
chr2 18768 .....
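If there can be many ranges per chromosome, as in the updated file 2, a variation of the composite-key idea (a sketch, untested at 30GB scale) keeps a per-chromosome list of ranges so each position is only compared against the ranges on its own chromosome:
awk '
NR==FNR { n[$3]++; s[$3,n[$3]]=$4; e[$3,n[$3]]=$5; next }  # file2: store every range per chromosome
FNR==1  { print; next }                                    # file1: pass the header through
{
    for (i=1; i<=n[$1]; i++)                               # check only the ranges on this chromosome
        if ($2 >= s[$1,i] && $2 <= e[$1,i]) { print; break }
}' file2 file1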

Extract a string after a pattern

I want to extract the numbers following client_id and id, and pair up the client_id with each id on its line.
For example, for the following lines of log,
User(client_id:03)) results:[RelatedUser(id:204, weight:10),_RelatedUser(id:491,_weight:10),_RelatedUser(id:29, weight: 20)
User(client_id:04)) results:[RelatedUser(id:209, weight:10),_RelatedUser(id:301,_weight:10)
User(client_id:05)) results:[RelatedUser(id:20, weight: 10)
I want to output
03 204
03 491
03 29
04 209
04 301
05 20
I know I need to use sed or awk, but I do not know exactly how. Thanks.
This may work for you:
awk -F "[):,]" '{ for (i=2; i<=NF; i++) if ($i ~ /id/) print $2, $(i+1) }' file
Results:
03 204
03 491
03 29
04 209
04 301
05 20
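The same one-liner, expanded with comments (functionally identical):
awk -F "[):,]" '{
    for (i=2; i<=NF; i++)      # walk the fields produced by splitting on ")", ":" and ","
        if ($i ~ /id/)         # fields like "[RelatedUser(id" immediately precede an id value
            print $2, $(i+1)   # $2 holds the client_id value; $(i+1) holds the id value
}' file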
Here's an awk script that works (I put it on multiple lines and made it a bit more verbose so you can see what's going on):
#!/bin/bash
awk 'BEGIN{FS="[():,]"}
/client_id/ {
cid="no_client_id"
for (i=1; i<NF; i++) {
if ($i == "client_id") {
cid = $(i+1)
} else if ($i == "id") {
id = $(i+1);
print cid OFS id;
}
}
}' input_file_name
Output:
03 204
03 491
03 29
04 209
04 301
05 20
Explanation:
awk 'BEGIN{FS="[():,]"}: invoke awk, using (, ), :, and , as delimiters to separate your fields
/client_id/ {: Only do the following for the lines that contain client_id:
for (i=1; i<NF; i++) {: iterate through the fields on each line one field at a time
if ($i == "client_id") { cid = $(i+1) }: if the field we are currently on is client_id, then its value is the next field in order.
else if ($i == "id") { id = $(i+1); print cid OFS id;}: otherwise if the field we are currently on is id, then print the client_id : id pair onto stdout
input_file_name: supply the name of your input file as first argument to the awk script.
This might work for you (GNU sed):
sed -r '/.*(\(client_id:([0-9]+))[^(]*\(id:([0-9]+)/!d;s//\2 \3\n\1/;P;D' file
/.*(\(client_id:([0-9]+))[^(]*\(id:([0-9]+)/!d if the line doesn't contain the intended strings, delete it.
s//\2 \3\n\1/ re-arrange the line by copying the client_id and moving the first id ahead, thus reducing the line for successive iterations.
P print up to the introduced newline.
D delete up to the introduced newline.
I would prefer awk for this, but if you were wondering how to do this with sed, here's one way that works with GNU sed.
parse.sed
/client_id/ {
:a
s/(client_id:([0-9]+))[^(]+\(id:([0-9]+)([^\n]+)(.*)/\1 \4\5\n\2 \3/
ta
s/^[^\n]+\n//
}
Run it like this:
sed -rf parse.sed infile
Or as a one-liner:
<infile sed '/client_id/ { :a; s/(client_id:([0-9]+))[^(]+\(id:([0-9]+)([^\n]+)(.*)/\1 \4\5\n\2 \3/; ta; s/^[^\n]+\n//; }'
Output:
03 204
03 491
03 29
04 209
04 301
05 20
Explanation:
The idea is to repeatedly match client_id:([0-9]+) and id:([0-9]+) pairs and put them at the end of the pattern space. On each pass the matched id:([0-9]+) is removed.
The final replace removes the left-overs from the loop.

Awk script for subtracting the value on the row above

Hi, I have an input file with one field:
30
58
266
274
296
322
331
I need the output to be the difference of the 2nd and 1st rows (58-30=28), the 3rd and 2nd rows (266-58=208), and so on.
My output should look like below:
30 30
58 28
266 208
274 8
any help please?
data=`cat file | xargs`
echo $data | awk '{a=0; for(i=1; i<=NF;i++) { print $i, $i-a; a=$i}}'
30 30
58 28
266 208
274 8
296 22
322 26
331 9
Update upon comment: without cat/xargs:
awk '{printf "%d %d\n", $1, $1-a; a=$1;}' file
(An unset awk variable is treated as 0, so the first line prints 30 30 as required.)
You don't actually need the for loop from Khachick's answer, as awk will go through all the rows anyway. Simpler is:
awk 'BEGIN { a=0 } { print $1, $1-a; a=$1 }' file
However, if you do not want the first 30 30 line, it is also possible to skip the first row by initialising a flag in the BEGIN block and suppressing the print until the flag has been set. Sort of like:
BEGIN { started=0 } { if (started == 0) { started = 1 } else { print $1, $1-a }; a = $1 }
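Run as a complete command on the input above (a sketch; the trailing a = $1 remembers the previous value), that gives:
$ awk 'BEGIN { started=0 }
       { if (started == 0) { started = 1 } else { print $1, $1-a }; a = $1 }' file
58 28
266 208
274 8
296 22
322 26
331 9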
