How can I add days and months in the field? - unix

Using an if statement, check whether field one plus 1 day equals field two.
Using an if statement, check whether field one plus 1 month equals field two.
I have this input
09-11-2013 09-12-2013
10-02-2013 10-02-2013
26-10-2013 27-10-2013
12-01-2013 12-02-2013
22-02-2013 23-02-2013
I used this code but it works with years only:
awk '{if ($1+1==$2) print }'

Have a look at the mktime function in GNU awk (gawk).
With it you can convert a date to seconds, which makes comparison easy.
This prints how many days there are between $1 and $2:
awk '{split($1, sd, "-");split($2, ed, "-");print $0,(mktime(ed[3] s ed[2] s ed[1] s 0 s 0 s 0)-mktime(sd[3] s sd[2] s sd[1] s 0 s 0 s 0))/86400}' s=' ' file
09-11-2013 09-12-2013 30
10-02-2013 10-02-2013 0
26-10-2013 27-10-2013 1
12-01-2013 12-02-2013 31
22-02-2013 23-02-2013 1
Here it prints 1 if the dates are one day apart, and 2 if they are one month apart.
It takes into account that February may have 28 or 29 days:
awk '
BEGIN {
    arr = "31,28,31,30,31,30,31,31,30,31,30,31"
    split(arr, month, ",")
    x = 0
}
{
    split($1, sd, "-")
    split($2, ed, "-")
    # difference in whole days (mktime works in local time)
    t = (mktime(ed[3] s ed[2] s ed[1] s 0 s 0 s 0) - mktime(sd[3] s sd[2] s sd[1] s 0 s 0 s 0)) / 86400
    # simple leap-year test (ignores the 100/400-year rule)
    month[2] = sd[3] % 4 == 0 ? 29 : 28
}
t == month[sd[2]+0] { x = 2 }
t == 1              { x = 1 }
{
    print $0, x
    x = 0
}
' s=' ' file
09-11-2013 09-12-2013 2
10-02-2013 10-02-2013 0
26-10-2013 27-10-2013 1
12-01-2013 12-02-2013 2
22-02-2013 23-02-2013 1
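As an alternative, gawk's mktime() normalizes out-of-range fields (month 13 rolls over into the next year, and so on), so you can also add a day or a month directly and compare the formatted result. A minimal sketch, gawk-specific and assuming the DD-MM-YYYY format above (note that "+1 month" here means the same day number in the next month):
awk '{
    split($1, d, "-")
    # mktime() normalizes out-of-range fields, so day+1 and month+1
    # roll over automatically; noon avoids DST edge cases
    plus_day   = strftime("%d-%m-%Y", mktime(d[3] " " d[2] " " d[1]+1 " 12 0 0"))
    plus_month = strftime("%d-%m-%Y", mktime(d[3] " " d[2]+1 " " d[1] " 12 0 0"))
    if ($2 == plus_day)   print $0, "one day"
    if ($2 == plus_month) print $0, "one month"
}' file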

Related

Splitting text file based on column value in unix

I have a text file:
head train_test_split.txt
1 0
2 1
3 0
4 1
5 1
What I want to do is save the first column values for which the second column value is 1 to the file train.txt.
The first column values whose second column value is 1 are 2, 4 and 5, so in my train.txt file I want:
2
4
5
How can I do this easily in Unix?
You can use awk for this:
awk '$2 == 1 { print $1 }' inputfile
That is,
$2 == 1 is a filter,
matching lines where the 2nd column is 1,
and print $1 means to print the first column.
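To save the matches into train.txt, as the question asks, simply redirect the output:
awk '$2 == 1 { print $1 }' inputfile > train.txt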
In Perl:
$ perl -lane 'print "$F[0]" if $F[1]==1' file
Or GNU grep:
$ grep -oP '^(\S+)(?=[ \t]+1$)' file
But awk is the best. Use awk...

Replacing alternate spaces with newline?

I am trying to replace alternate spaces with newlines using Unix.
I tried the tr command but was unable to make it replace only alternate spaces.
Sample input:
0 1 2 3 4 5
Sample output:
0 1
2 3
4 5
How do we achieve this?
awk might help in this case:
echo "0 1 2 3 4 5" | awk '
{
for (i=1; i<=NF; i++)
{
if ((i-1)%2 == 0)
{
printf "%d ",$i;
}
else
{
print $i
}
}
}
'
We split on spaces and get 6 fields. We then loop over all fields and output each one: every other field is printed with print $i, which ends the line, while the fields in between are printed with printf "%d ",$i, which does not emit a newline.
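For simple whitespace-separated input like this, xargs can also regroup the words, two per line:
echo "0 1 2 3 4 5" | xargs -n 2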
echo "0 1 2 3 4 5" | sed 's/\([^ ][^ ]* [^ ][^ ]*\) */\1\n/g'
This can be made shorter with GNU sed which has the '+' notation.
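For example, GNU sed's \+ replaces each doubled bracket expression:
echo "0 1 2 3 4 5" | sed 's/\([^ ]\+ [^ ]\+\) */\1\n/g'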

Make a file counting instances in sets of 5

I have a file that looks like this:
1 rs531842 503939 61733 G A
1 rs10494103 35025 114771 C T
1 rs17038458 254490 21116837 G A
1 rs616378 525783 21127670 T C
1 rs3845293 432526 21199392 A C
2 rs16840461 233620 157112959 A G
2 rs1560628 224228 157113214 T C
2 rs17200880 269314 257145829 C T
2 rs10497165 35844 357156412 C T
2 rs7607531 624696 457156575 T C
...with column 1 stretching on to 22, and several thousand entries in total.
I want to create a file that lists bins of 5 million from column 4 which have data, separating by column 1.
Basically, all but columns 1 and 4 can be discarded. A simple input would look like this:
InputChr1:
61733
114771
21116837
21127670
21199392
InputChr2:
157112959
157113214
257145829
357156412
457156575
So, for the example above, I would want to get two files that look like this:
OutputChr1.txt
Start End Occurrences
1 5000000 2
20000001 25000000 3
OutputChr2.txt
Start End Occurrences
155000001 160000000 2
255000001 260000000 1
355000001 360000000 1
455000001 460000000 1
Any ideas? It seems like something that should be doable with lapply in R, but I can't get the for loops to work...
EDIT: Actually, I made this look much harder than it needed to be - basically, I want to split the original file by column 1, extract the data in column 4, and then count the instances in bins of 5 million.
(Apologies for slightly random tags, just trying to think of which tools might be best!)
Well, this turned out to be quite challenging. I couldn't find a way to do it in a single awk command, though.
awk -v const=5000000 -v max=150 '
{a[$1,int($4/const)]++; b[$1]}
END{for (i in b)
     {for (j=0; j<max; j++)
         print i, j*const+1, (j+1)*const, a[i,j]
     }
}' file
And then to get only the results:
awk 'NF==4'
Explanation
-v const=5000000 -v max=150 set the variables. const is the 5-million bin size used to split the results. max is the largest bin index up to which we will look for data in the END block.
a[$1,int($4/const)]++ creates an array indexed by (1st field, 4th field). Note that the second part, int($4/const), maps 23432 --> 0, 6000000 --> 1, etc. That is, it determines which block of values each 4th-column entry falls into.
b[$1] keeps track of the first columns that have been processed.
END{for (i in b) {for (j=0; j<max; j++) print i, j*const+1, (j+1)*const, a[i,j]}} prints the values.
awk 'NF==4' prints only those lines that have 4 columns. This way it outputs only the bins in which there were matches.
In case you want to store the values into a new file, you can do
awk 'NF==4 {print > "OutputChr"$1".txt"}'
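(Some awk implementations require a concatenated redirection target to be parenthesized, so awk 'NF==4 {print > ("OutputChr" $1 ".txt")}' is the more portable spelling.)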
Sample output
$ awk -v const=5000000 -v max=150 '{a[$1,int($4/const)]++; b[$1]} END{for (i in b) {for (j=0; j<max; j++) print i, j*const +1, (j+1)*const, a[i,j]}}' a | awk 'NF==4'
1 1 5000000 2
1 20000001 25000000 3
2 155000001 160000000 2
2 255000001 260000000 1
2 355000001 360000000 1
2 455000001 460000000 1
All in one
awk '{ v=int($4/const)
a[$1 FS v]++
min[$1]=min[$1]<v?min[$1]:v # track the minimum bin of column $4 for group $1
max[$1]=max[$1]>v?max[$1]:v # track the maximum bin of column $4 for group $1
}END{ for (i in min)
for (j=min[i];j<=max[i];j++) # loop from the min to the max bin
if (a[i FS j]!="") print j*const+1,(j+1)*const,a[i FS j] > "OutputChr" i ".txt" # if data exists for this bin, print it to "OutputChr" i ".txt"
}' const=5000000 file
result:
$ cat OutputChr1.txt
1 5000000 2
20000001 25000000 3
$ cat OutputChr2.txt
155000001 160000000 2
255000001 260000000 1
355000001 360000000 1
455000001 460000000 1

Unix: Increment date column by one day in csv file

Help needed: I want to increment the Date column (a string) in a CSV file by one day.
e.g. (date format yyyy-MM-dd)
Col1,Col2,Col3
ABC,001,1900-01-01
XYZ,002,2000-01-01
Expected OutPut
Col1,Col2,Col3
ABC,001,1900-01-02
XYZ,002,2000-01-02
There's one standard Unix utility that has all the date magic from September 14, 1752 through December 31, 9999 built in: the calendar program cal. Instead of reinventing the wheel and doing messy date calculations, we can use its intelligence to our advantage. The basic problem is: given a date, is it the last day of a month? If not, simply increment the day. If it is, reset the day to 1 and increment the month (and possibly the year).
However, the output of cal is unspecified and it may look like this:
$ cal 2 1900
   February 1900
Su Mo Tu We Th Fr Sa
             1  2  3
 4  5  6  7  8  9 10
11 12 13 14 15 16 17
18 19 20 21 22 23 24
25 26 27 28
What we would need is a list of days, 1 2 3 ... 28. We can do this by skipping everything up to the "1":
set -- $(cal 2 1900)
while test $1 != 1; do shift; done
Now the number of args gives us the number of days in February 1900:
$ echo $#
28
Putting it all together in a script:
#!/bin/sh
# Reads the CSV on stdin and writes the incremented CSV to stdout.
read -r header
printf "%s\n" "$header"
while IFS=,- read -r col1 col2 y m d; do
case $m-$d in
(12-31) y=$((y+1)) m=01 d=01;;
(*)
set -- $(cal $m $y)
# Shift away the month and weekday names.
while test $1 != 1; do shift; done
# Is the day the last day of a month?
if test ${d#0} -eq $#; then
# Yes: increment m and reset d=01.
m=$(printf %02d $((${m#0}+1)))
d=01
else
# No: increment d.
d=$(printf %02d $((${d#0}+1)))
fi
;;
esac
printf "%s,%s,%s-%s-%s\n" "$col1" "$col2" $y $m $d
done
Running it on this input:
Col1,Col2,Col3
ABC,001,1900-01-01
ABC,001,1900-02-28
ABC,001,1900-12-31
XYZ,002,2000-01-01
XYZ,002,2000-02-28
XYZ,002,2000-02-29
yields
Col1,Col2,Col3
ABC,001,1900-01-02
ABC,001,1900-03-01
ABC,001,1901-01-01
XYZ,002,2000-01-02
XYZ,002,2000-02-29
XYZ,002,2000-03-01
I made one little assumption: The first two columns don't contain a - or escaped comma. If they do, the IFS=,- read will act up.
Using GNU date, this can also be done from awk:
awk 'BEGIN{FS=OFS=","} NR==1{print; next} {cmd="date -d \""$3" +1 day\" +%Y-%m-%d"; cmd|getline $3; close(cmd); print}' file.in
If you can extract the date from the file, you can use this:
d="1900-01-01" # date from file
date --date @$(( $(date --date "$d" +%s) + 86400 )) +%Y-%m-%d
The @ prefix tells GNU date to interpret the number as epoch seconds. Note that adding 86400 seconds can misbehave around DST transitions; passing -u to both date invocations avoids that.
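If gawk is available, the same one-day increment can be done without calling out to date at all, via mktime()/strftime(). A sketch, gawk-specific and assuming the system's mktime() accepts dates as old as 1900:
awk 'BEGIN{FS=OFS=","}
     NR==1{print; next}
     {split($3, p, "-")
      # build the date at noon (avoids DST edge cases), add one day, reformat
      $3 = strftime("%Y-%m-%d", mktime(p[1] " " p[2] " " p[3] " 12 0 0") + 86400)
      print}' file.in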

Read a column value from previous line and next line but insert them as additional fields in the current line using awk

I hope you can help me out with my problem.
I have an input file with 3 columns of data which looks like this:
Apl_No Act_No Sfx_No
100 10 0
100 11 1
100 12 2
100 13 3
101 20 0
101 21 1
I need to create an output file which contains the data as in the input plus 3 additional fields. It should look like this:
Apl_No Act_No Sfx_No Crt_Act_No Prs_Act_No Cd_Act_No
100 10 0 - - -
100 11 1 10 11 12
100 12 2 11 12 13
100 13 3 12 13 10
101 20 0 - - -
101 21 1 20 21 20
Every Apl_No has a set of Act_No values mapped to it. 3 new fields need to be created: Crt_Act_No, Prs_Act_No and Cd_Act_No. When the first unique Apl_No is encountered, columns 4, 5 and 6 (Crt_Act_No, Prs_Act_No, Cd_Act_No) need to be dashed out. For every following occurrence of the same Apl_No, Crt_Act_No is the same as the Act_No on the previous line, Prs_Act_No is the same as the Act_No on the current line, and Cd_Act_No is the same as the Act_No on the next line. This continues for all following rows bearing the same Apl_No except the last one. In the last row, Crt_Act_No and Prs_Act_No are filled in the same way as above, but Cd_Act_No needs to be pulled from the Act_No of the first row of that Apl_No.
I wish to achieve this using awk. Can anyone please help me out with how to go about this?
One solution:
awk '
## Print header in first line.
FNR == 1 {
printf "%s %s %s %s\n", $0, "Crt_Act_No", "Prs_Act_No", "Cd_Act_No";
next;
}
## If the first field is not found in the hash, this is the first occurrence of a
## unique "Apl_No", so print the line with dashes and save some data for later use.
## The "line" variable holds the pending line from the previous group. Print it if it is set.
! apl[ $1 ] {
if ( line ) {
sub( /-/, orig_act, line );
print line;
line = "";
}
printf "%s %s %s %s\n", $0, "-", "-", "-";
orig_act = prev_act = $2;
apl[ $1 ] = 1;
next;
}
## For all non-unique "Apl_No"...
{
## If it is the first one after the line with
## dashes ("line" not set), save its content in "line" and the variable
## that will have to be checked later ("Act_No"). Note that a dash is left in the
## last field, to be substituted in the following iteration.
if ( ! line ) {
line = sprintf( "%s %s %s %s", $0, prev_act, $2, "-" );
prev_act = $2;
next;
}
## Now the field is known, so substitute the dash with it, print, and repeat
## the process with the current line.
sub( /-/, $2, line );
print line;
line = sprintf( "%s %s %s %s", $0, prev_act, $2, "-" );
prev_act = $2;
}
END {
if ( line ) {
sub( /-/, orig_act, line );
print line;
}
}
' infile | column -t
That yields:
Apl_No Act_No Sfx_No Crt_Act_No Prs_Act_No Cd_Act_No
100 10 0 - - -
100 11 1 10 11 12
100 12 2 11 12 13
100 13 3 12 13 10
101 20 0 - - -
101 21 1 20 21 20
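For comparison, a more compact approach (a sketch making the same assumptions about the input: whitespace-separated, rows grouped by Apl_No) buffers each group and prints it once the group is complete:
awk '
NR == 1 { print $0, "Crt_Act_No", "Prs_Act_No", "Cd_Act_No"; next }
$1 != prev { flush() }            # group changed: print the buffered group
{ line[++n] = $0; act[n] = $2; prev = $1 }
END { flush() }
function flush(   i) {
    for (i = 1; i <= n; i++)
        if (i == 1)
            print line[i], "-", "-", "-"
        else
            # previous, current, and next Act_No (first one again on the last row)
            print line[i], act[i-1], act[i], (i < n ? act[i+1] : act[1])
    n = 0
}' infile | column -t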
