While read line, awk $line with multiple delimiters - unix

I am trying a small variation of this, except I am telling awk that the delimiter of the file to be split based on the 5th field can be either a colon ":" or a tab \t. If I run the awk -F '[:\t]' part alone, it does indeed print the right $5 field.
However, when I try to incorporate this into the bigger command, it returns the following error:
print > f
awk: cmd. line:9: ^ syntax error
This is the code:
awk -F '[:\t]' ' # read the list of numbers in Tile_Number_List
FNR == NR {
    num[$1]
    next
}
# process each line of the .BAM file
# any lines with an "unknown" $5 will be ignored
$5 in num {
    f = "Alignments_" $5 ".sam" print > f
}' Tile_Number_List.txt little.sam
Why won't it work with the -F option?

The problem isn't with the value of FS; it's this line, as pointed to by the error:
f = "Alignments_" $5 ".sam" print > f
You have two statements on one line, so either separate them with a ; or a newline:
f = "Alignments_" $5 ".sam"; print > f
Or:
f = "Alignments_" $5 ".sam"
print > f
As full one liner:
awk -F '[:\t]' 'FNR==NR{n[$1];next}$5 in n{print > ("Alignments_"$5".sam")}' Tile_Number_List.txt little.sam
Or as a script file i.e script.awk:
BEGIN {
    FS = "[:\t]"
}
# read the list of numbers in Tile_Number_List
FNR == NR {
    num[$1]
    next
}
# process each line of the .BAM file
# any lines with an "unknown" $5 will be ignored
$5 in num {
    f = "Alignments_" $5 ".sam"
    print > f
}
To run it in this form: awk -f script.awk Tile_Number_List.txt little.sam.
Edit:
With many *nix tools, the character - is used to represent input from stdin instead of a file:
command | awk -f script.awk Tile_Number_List.txt -

Related

How to calculate max and min of multiple columns (row wise) using awk

This might be simple - I have a file as below:
df.csv
col1,col2,col3,col4,col5
A,2,5,7,9
B,6,10,2,3
C,3,4,6,8
I want to perform max(col2,col4) - min(col3,col5) and write the result in a new column, but I get an error using max and min in awk. So the desired output should look like:
col1,col2,col3,col4,col5,New_col
A,2,5,7,9,2
B,6,10,2,3,3
C,3,4,6,8,2
I used the code below but it does not work - how can I solve this?
awk -F, '{print $1,$2,$3,$4,$5,$(max($7,$9)-min($8,$10))}'
Thank you.
$ cat tst.awk
BEGIN { FS=OFS="," }
{ print $0, (NR>1 ? max($2,$4) - min($3,$5) : "New_col") }
function max(a,b) {return (a>b ? a : b)}
function min(a,b) {return (a<b ? a : b)}
$ awk -f tst.awk file
col1,col2,col3,col4,col5,New_col
A,2,5,7,9,2
B,6,10,2,3,3
C,3,4,6,8,2
If your actual "which is larger" calculation is more involved than just using >, e.g. if you were comparing dates in some non-alphabetic format or people's names where you have to compare the surname before the forename and handle titles, etc., then you'd write the functions as:
function max(a,b) {
# some algorithm to compare the 2 strings
}
function min(a,b) {return (max(a,b) == a ? b : a)}
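For instance, a minimal sketch of such a pair, assuming hypothetical DD/MM/YYYY date strings that have to be rekeyed to YYYYMMDD before a plain string comparison is meaningful:
function max(a,b,   ka,kb) {
    # sketch only: assumes values like 25/12/2023 (DD/MM/YYYY)
    ka = substr(a,7,4) substr(a,4,2) substr(a,1,2)   # build a YYYYMMDD sort key
    kb = substr(b,7,4) substr(b,4,2) substr(b,1,2)
    return (ka > kb ? a : b)
}
function min(a,b) {return (max(a,b) == a ? b : a)}
Defining min in terms of max keeps the comparison logic in one place.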
You may use this awk:
awk 'BEGIN{FS=OFS=","} NR==1 {print $0, "New_col"; next} {print $0, ($2 > $4 ? $2 : $4) - ($3 < $5 ? $3 : $5)}' df.csv
col1,col2,col3,col4,col5,New_col
A,2,5,7,9,2
B,6,10,2,3,3
C,3,4,6,8,2
A more readable version:
awk '
BEGIN { FS = OFS = "," }
NR == 1 {
print $0, "New_col"
next
}
{
print $0, ($2 > $4 ? $2 : $4) - ($3 < $5 ? $3 : $5)
}' df.csv
get an error using max and min in awk and write the result in a new column.
No such functions are available in awk, but for two values you might harness the ternary operator, so in place of
max($7,$9)
try
($7>$9)?$7:$9
and in place of
min($8,$10)
try
($8<$10)?$8:$10
The above exploits ?:, which might be read as check ? value_if_true : value_if_false. As a simple example, let file.txt's content be
100,100
100,300
300,100
300,300
then
awk 'BEGIN{FS=","}{print ($1>$2)?$1:$2}' file.txt
output
100
300
300
300
Also, are you sure about the first $ in $(max($7,$9)-min($8,$10))? By writing that you instructed awk to get the value of the n-th column, where n is the result of the computation inside (...).
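A tiny illustration of that behavior, using echo just to feed awk one line:
$ echo a,b,c | awk -F, '{print $(1+2)}'
c
Here 1+2 evaluates to 3, so awk prints $3.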

Stop awk search on a condition

At the moment I have my R function generate an awk script to load, selectively, a subset of a csv into fread.
The resulting awk string looks something like this:
tail -n +2 ../data/faults_main_only_dp_1_shopFlag.csv | parallel -k -q --block 500M --pipe awk -F , ' $5 > \"2013-01-01\" && $5 < \"2015-11-17\" && $2 ~ /^F59PHI$|^GP20ECO$|^GT42CU-ACE$/ && $20 ~ /^Disregard$|^EMD Work Item$|^Pending$|^Pre-Work Item$|^Road Failure$|^Unit Shopped$|^Watch$|^Work Item$|^NA$/ {print $2 \",\" $88 \",\" $17 \",\" $5 \",\" $9 \",\" $22 \",\" $3 \",\" $15 \",\" $14 } '
The thing is: my csv is now ordered by dates ($5) in descending order, so if the user enters a specific lower-bound date and awk gets to that line, it makes sense for it to stop. (I am not sure how that would work with the parallelization I am doing above. Maybe there is a way to select only the part of the csv that is “above” the lower-bound of the date and then pass the resulting csv into the awk script.) Is there a way to do that?
Generate an awk program such as this which explicitly exits on a condition (here if field 3 exceeds 4):
$2 > 3 && $2 < 10
$3 > 4 { exit }
Put the above in a file called myprog.awk, say, and assuming default separators run it with this (your awk may be called something else):
gawk -f myprog.awk mydata.dat
or put it on the command line but you will have to be careful regarding quoting depending on the shell you use:
$2 > 3 && $2 < 10; $3 > 4 { exit }
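Applied to the question's layout, a minimal sketch (assuming the csv really is sorted by $5 in descending order; the lower variable and the single-process invocation are illustrative, not the original parallel pipeline):
# stop reading as soon as $5 drops below the lower bound
awk -F, -v lower="2013-01-01" '
$5 < lower { exit }                 # sorted descending: no later line can match
$5 < "2015-11-17" { print $2, $5 }
' faults_main_only_dp_1_shopFlag.csv
Note that under parallel --pipe each block gets its own awk process, so an exit only ends that one block; the early-exit saving is only realized when a single awk reads the file.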

awk if statement with simple math

I'm just trying to do some basic calculations on a CSV file.
Data:
31590,Foo,70
28327,Bar,291
25155,Baz,583
24179,Food,694
28670,Spaz,67
22190,bawk,4431
29584,alfred,142
27698,brian,379
24372,peter,22
25064,weinberger,8
Here's my simple awk script:
#!/usr/local/bin/gawk -f
BEGIN { FPAT="([^,]*)|(\"[^\"]+\")"; OFS=","; OFMT="%.2f"; }
NR > 1
END { if ($3>1336) $4=$3*0.03; if ($3<1336) $4=$3*0.05;}1
Wrong output:
31590,Foo,70
28327,Bar,291
28327,Bar,291
25155,Baz,583
25155,Baz,583
24179,Food,694
24179,Food,694
28670,Spaz,67
28670,Spaz,67
22190,bawk,4431
22190,bawk,4431
29584,alfred,142
29584,alfred,142
27698,brian,379
27698,brian,379
24372,peter,22
24372,peter,22
25064,weinberger,8
25064,weinberger,8
Expected output:
31590,Foo,70,3.5
28327,Bar,291,14.55
25155,Baz,583,29.15
24179,Food,694,34.7
28670,Spaz,67,3.35
22190,bawk,4431,132.93
29584,alfred,142,7.1
27698,brian,379,18.95
24372,peter,22,1.1
25064,weinberger,8,0.4
The simple math is:
if field $3 > 1336, then $4 = $3 * .03
if field $3 < 1336, then $4 = $3 * .05
There's no need to force awk to recompile every record (by assigning to $4), just print the current record followed by the result of your calculation:
awk 'BEGIN{FS=OFS=","; OFMT="%.2f"} {print $0, $3*($3>1336?0.03:0.05)}' file
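As an aside, assigning to any field makes awk rebuild $0 using OFS, which is why these answers set FS and OFS together; a quick illustration:
$ echo 'a,b,c' | awk 'BEGIN{FS=","; OFS="-"} {$1=$1; print}'
a-b-c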
You shouldn't have anything in the END block:
BEGIN {
    FS = OFS = ","
    OFMT = "%.2f"
}
{
    if ($3 > 1336)
        $4 = $3 * 0.03
    else
        $4 = $3 * 0.05
    print
}
This results in
31590,Foo,70,3.5
28327,Bar,291,14.55
25155,Baz,583,29.15
24179,Food,694,34.7
28670,Spaz,67,3.35
22190,bawk,4431,132.93
29584,alfred,142,7.1
27698,brian,379,18.95
24372,peter,22,1.1
25064,weinberger,8,0.4
$ awk -F, -v OFS=, '{if ($3>1336) $4=$3*0.03; else $4=$3*0.05;} 1' data
31590,Foo,70,3.5
28327,Bar,291,14.55
25155,Baz,583,29.15
24179,Food,694,34.7
28670,Spaz,67,3.35
22190,bawk,4431,132.93
29584,alfred,142,7.1
27698,brian,379,18.95
24372,peter,22,1.1
25064,weinberger,8,0.4
Discussion
The END block is not executed at the end of each line but at the end of the whole file. Consequently, it is not helpful here.
The original code has two free standing conditions, NR>1 and 1. The default action for each is to print the line. That is why, in the "wrong output," all lines after the first were doubled in the output.
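A free-standing condition with no action uses the default action, which is to print the record; a quick check:
$ printf 'a\nb\n' | awk 'NR>1'
b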
With awk:
awk -F, -v OFS=, '$3>1336 ? ($4=$3*.03) : ($4=$3*.05)' file
The condition ? expr1 : expr2 form is the much shorter ternary operator in awk; here the assigned value serves as the pattern, so the default action prints each updated line (the parentheses keep the embedded assignments unambiguous).

While read line, awk $line and write to variable

I am trying to split a file into different smaller files depending on the value of the fifth field. A very nice way to do this was already suggested and also here.
However, I am trying to incorporate this into a .sh script for qsub, without much success.
The problem is that in the section where the output file for each line is specified,
i.e., f = "Alignments_" $5 ".sam" print > f
, I need to pass a variable declared earlier in the script, which specifies the directory where the file should be written. I need to do this with a variable which is built for each task when I send out the array job for multiple files.
So say $output_path = ./Sample1
I need to write something like
f = $output_path "/Alignments_" $5 ".sam" print > f
But it does not seem to like having a $variable that is not a $field belonging to awk. I don't even think it likes having two "strings" before and after the $5.
The error I get back shows that it takes the first line of the file to be split (little.sam) and tries to name f after it, followed by /Alignments_" $5 ".sam" (those last three put together correctly). It says, naturally, that the name is too long.
How can I write this so it works?
Thanks!
awk -F '[:\t]' ' # read the list of numbers in Tile_Number_List
FNR == NR {
    num[$1]
    next
}
# process each line of the .BAM file
# any lines with an "unknown" $5 will be ignored
$5 in num {
    f = "Alignments_" $5 ".sam" print > f
}' Tile_Number_List.txt little.sam
Update, after adding -v to awk and declaring the variable opath:
input=$1
outputBase=${input%.bam}
mkdir -v $outputBase\_TEST
newdir=$outputBase\_TEST
samtools view -h $input | awk 'NR >= 18' | awk -F '[\t:]' -v opath="$newdir" '
FNR == NR {
    num[$1]
    next
}
$5 in num {
    f = newdir"/Alignments_"$5".sam";
    print > f
}' Tile_Number_List.txt -
mkdir: created directory 'little_TEST'
awk: cmd. line:10: (FILENAME=- FNR=1) fatal: can't redirect to `/Alignments_1101.sam' (Permission denied)
awk variables are like C variables - just reference them by name to get their value, no need to stick a "$" in front of them like you do with shell variables:
awk -F '[:\t]' ' # read the list of numbers in Tile_Number_List
FNR == NR {
    num[$1]
    next
}
# process each line of the .BAM file
# any lines with an "unknown" $5 will be ignored
$5 in num {
    output_path = "./Sample1/"
    f = output_path "Alignments_" $5 ".sam"
    print > f
}' Tile_Number_List.txt little.sam
To pass the value of a shell variable such as $output_path to awk, you need to use the -v option.
$ output_path=./Sample1/
$ awk -F '[:\t]' -v opath="$output_path" '
# read the list of numbers in Tile_Number_List
FNR == NR {
    num[$1]
    next
}
# process each line of the .BAM file
# any lines with an "unknown" $5 will be ignored
$5 in num {
    f = opath "Alignments_" $5 ".sam"
    print > f
}' Tile_Number_List.txt little.sam
Also, you still have the error from your previous question left in your script.
EDIT:
The awk variable created with -v is opath but you use newdir; what you want is:
input=$1
outputBase=${input%.bam}
mkdir -v "${outputBase}_TEST"
newdir=${outputBase}_TEST
samtools view -h "$input" | awk -F '[\t:]' -v opath="$newdir" '
FNR == NR {
    num[$1]
    next
}
FNR >= 18 && $5 in num {
    f = opath "/Alignments_" $5 ".sam"   # <-- opath is the awk variable, not newdir
    print > f
}' Tile_Number_List.txt -
Note that the NR >= 18 header skip from the pipeline is folded in as FNR >= 18 on the second input, so it filters only the piped SAM stream and the separate awk 'NR >= 18' stage is no longer needed.
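One more hedged caveat: if the SAM stream contains many distinct $5 values, a non-gawk awk can run out of open file descriptors. In that case a sketch like the following, which appends and closes per line, trades speed for safety:
$5 in num {
    f = opath "/Alignments_" $5 ".sam"
    print >> f   # append, because the file is reopened on each matching line
    close(f)     # release the descriptor before the next line
}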

extracting a pattern and a certain field from the line above it using awk and grep preferably

I have a text file like this:
********** time1 **********
line of text1
line of text1.1
line of text1.2
********** time2 **********
********** time3 **********
********** time4 **********
line of text2.1
line of text2.2
********** time5 **********
********** time6 **********
line of text3.1
I want to extract each line of text and the time (without the stars) above it, and store the result in a file (times with no line of text beneath them have to be ignored). I want to do this preferably with grep and awk.
So for example, my output for the above input should be
time1 : line of text1
time1 : line of text1.1
time1 : line of text1.2
time4 : line of text2.1
time4 : line of text2.2
time6 : line of text3.1
How do I go about it?
This assumes that there are no spaces in the time and that there is only one (or zero) line of text after each time marker.
awk '$1 ~ /\*+/ {prev = $2} $1 !~ /\*+/ {print prev, ":", $0}' inputfile
Works with spaces in the time:
awk '/^[^*]+/ { gsub(/\*/,"",x); printf x": "; print } {x=$0}' data.txt
You can do it like this with vim:
:%s_\*\+ \(YOUR TIME PATTERN\) \*\+\_.\(\[^*\].*\)$_\1 : \2_ | g_\*\+ YOUR TIME PATTERN \*\+_d
That is: it searches for TIME PATTERN lines and saves the time pattern and the next line if it does not start with *. Then it creates the new line from them. Then it deletes every remaining TIME PATTERN line.
Note this assumes that the time pattern lines end with *, etc.
With awk:
awk '/\*+ YOUR TIME PATTERN \*+/ { time = gensub(/\*+ (YOUR TIME PATTERN) \*+/, "\\1", "g") }
   ! /\*+ YOUR TIME PATTERN \*+/ { print time " : " $0 }' INPUTFILE
And there are other ways to do it.
In awk, see:
#!/bin/bash
awk '
BEGIN {
    t = 0
}
{
    if ($0 ~ " time[0-9]+ ") {
        v = $2
        t = 1
    }
    else if ($0 ~ "line of text") {
        if (t == 1) {
            printf("%s : %s\n", v, $0)
        } else {
            t = 0;
        }
    }
}
' FILE
Just replace FILE by your filename.
This might work for you (GNU sed):
sed '/^\*\+ \S\+.*/!d;s/[ *]//g;$!N;/\n[^*]/!D;s/\n/ : /' file
Explanation:
Look for lines beginning with *'s; if not, delete. /^\*\+ \S\+.*/!d
Got a time line. Delete *'s and spaces (leaving time). s/[ *]//g
Get next line $!N
Check the second line doesn't begin with *'s otherwise delete first line /\n[^*]/!D
Got intended pattern, replace \n with spaced : and print. s/\n/ : /
awk '{ if( $0 ~ /^\*+ time[0-9] \*+$/ ) { time = $2 } else { print time " : " $0 } }' file
$ uniq -f 2 input-file | awk '{getline n; print $2 " : " n}'
If your timestamp has spaces in it, change the argument to the -f option so that uniq is only comparing the final string of *s. E.g., use -f X where X-2 is the number of spaces in the timestamp. Also, if there are spaces in the timestamp, the awk will need to change. Either of these will work:
$ uniq -f 3 input-file | awk -F '**********' '{getline n; print $2 " : " n}'
$ uniq -f 3 input-file | awk '{getline n; $1=""; $NF=""; print $0 ": " n }'
