Merging data frames and creating columns based on conditions - R

I have two data frames.
Data Frame A:
Time Reading
1 20
2 23
3 25
4 22
5 24
6 23
7 24
8 23
9 23
10 22
Data Frame B:
TimeStart TimeEnd Alarm
2 5 556
7 9 556
I would like to create the following joined data frame:
Time Reading AlarmTime Alarm AlarmNo
1 20 n/a n/a n/a
2 23 2 556 1
3 25 n/a 556 1
4 22 n/a 556 1
5 24 5 556 1
6 23 n/a n/a n/a
7 24 7 556 2
8 23 n/a 556 2
9 23 9 556 2
10 22 n/a n/a n/a
I can do the join easily enough; however, I'm struggling with filling the following rows with the alarm until the time the alarm ended, and with numbering each individual alarm so that even alarms with the same Alarm value are counted separately. Any thoughts on how I can do this would be great.
Thanks

library(sqldf)

# Number the alarms so repeated Alarm values are still counted separately
df_b$AlarmNo <- seq_len(nrow(df_b))

sqldf('
  select a.Time
       , a.Reading
       , case when a.Time in (b.TimeStart, b.TimeEnd)
              then a.Time
              else NULL
         end as AlarmTime
       , b.Alarm
       , b.AlarmNo
  from df_a a
  left join df_b b
    on a.Time between b.TimeStart and b.TimeEnd
')
# Time Reading AlarmTime Alarm AlarmNo
# 1 1 20 NA NA NA
# 2 2 23 2 556 1
# 3 3 25 NA 556 1
# 4 4 22 NA 556 1
# 5 5 24 5 556 1
# 6 6 23 NA NA NA
# 7 7 24 7 556 2
# 8 8 23 NA 556 2
# 9 9 23 9 556 2
# 10 10 22 NA NA NA
Or, with data.table, expand each alarm interval into one row per time point and merge:
library(data.table)
setDT(df_b)

# One row per time point covered by each alarm; .GRP numbers the alarms
df_c <-
  df_b[, .(Time = seq(TimeStart, TimeEnd), Alarm, AlarmNo = .GRP)
       , by = TimeStart]

merge(df_a, df_c, by = 'Time', all.x = TRUE)
# Time Reading TimeStart Alarm AlarmNo
# 1: 1 20 NA NA NA
# 2: 2 23 2 556 1
# 3: 3 25 2 556 1
# 4: 4 22 2 556 1
# 5: 5 24 2 556 1
# 6: 6 23 NA NA NA
# 7: 7 24 7 556 2
# 8: 8 23 7 556 2
# 9: 9 23 7 556 2
# 10: 10 22 NA NA NA
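The merged result keeps TimeStart rather than the AlarmTime column from the desired output. If you also want AlarmTime shown only on each alarm's first and last row, one possible follow-up (a sketch of my own, built on the df_c above) is:

res <- merge(df_a, df_c, by = 'Time', all.x = TRUE)

# Within each alarm, show the time only on its first and last row
res[!is.na(AlarmNo),
    AlarmTime := fifelse(Time == min(Time) | Time == max(Time), Time, NA_integer_),
    by = AlarmNo]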
Data used:
df_a <- fread('
Time Reading
1 20
2 23
3 25
4 22
5 24
6 23
7 24
8 23
9 23
10 22
')
df_b <- fread('
TimeStart TimeEnd Alarm
2 5 556
7 9 556
')
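For reference, a dplyr/tidyr sketch of the same idea (my own addition, not one of the answers above): expand each alarm interval into one row per time point, flag the start and end times, and join back onto df_a.

library(dplyr)
library(tidyr)

df_b %>%
  mutate(AlarmNo = row_number()) %>%
  rowwise() %>%
  mutate(Time = list(seq(TimeStart, TimeEnd))) %>%   # expand the interval
  ungroup() %>%
  unnest(Time) %>%
  mutate(AlarmTime = if_else(Time %in% c(TimeStart, TimeEnd), Time, NA_integer_)) %>%
  select(Time, AlarmTime, Alarm, AlarmNo) %>%
  right_join(df_a, by = "Time") %>%                  # keep every Time in df_a
  arrange(Time)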

Extend numerical series in data frame

Data
Let's take a look at a simple dataset (mine is actually >200,000 rows):
df <- data.frame(
  id = c(rep(1, 11), rep(2, 6)),
  ref.pos = c(NA, NA, NA, 301, 302, 303, 800, 801, NA, NA, NA, 500, 501, 502, NA, NA, NA),
  pos = c(1:11, 30:35)
)
This looks as follows:
id ref.pos pos
1 1 NA 1
2 1 NA 2
3 1 NA 3
4 1 301 4
5 1 302 5
6 1 303 6
7 1 800 7
8 1 801 8
9 1 NA 9
10 1 NA 10
11 1 NA 11
12 2 500 30
13 2 501 31
14 2 502 32
15 2 NA 33
16 2 NA 34
17 2 NA 35
What I want to achieve
Per id, I want to extend the numbers in ref.pos to fill the whole column, with ref.pos decreasing as you move up the data frame and increasing as you move down it. This would result in the following data frame:
id ref.pos pos
1 1 298 1
2 1 299 2
3 1 300 3
4 1 301 4
5 1 302 5
6 1 303 6
7 1 800 7
8 1 801 8
9 1 802 9
10 1 803 10
11 1 804 11
12 2 500 30
13 2 501 31
14 2 502 32
15 2 503 33
16 2 504 34
17 2 505 35
What I tried
I wish I could provide some code here, but I haven't figured out a proper way in two days, especially not one applicable to large datasets. I found df %>% group_by(id) %>% tidyr::fill(ref.pos, .direction = "downup") interesting, but it repeats the boundary numbers rather than counting down and up for me.
I hope my question is clear; otherwise, let me know in the comments!
An option using data.table:
# Fill NAs at both ends: last observation carried forward, then next carried back
fillends <- function(x) nafill(nafill(x, "locf"), "nocb")

setDT(df)[, ref.pos2 := {
  dif <- fillends(c(diff(ref.pos), NA_integer_))              # step at the nearest known edge
  frp <- fillends(ref.pos)                                    # nearest known ref.pos
  fp  <- fillends(replace(pos, is.na(ref.pos), NA_integer_))  # pos of that known ref.pos
  fifelse(is.na(ref.pos), frp + dif * (pos - fp), ref.pos)
}, id]
output:
id ref.pos pos ref.pos2
1: 1 NA 1 298
2: 1 NA 2 299
3: 1 NA 3 300
4: 1 301 4 301
5: 1 302 5 302
6: 1 303 6 303
7: 1 802 7 802
8: 1 801 8 801
9: 1 NA 9 800
10: 1 NA 10 799
11: 1 NA 11 798
12: 2 500 30 500
13: 2 501 31 501
14: 2 502 32 502
15: 2 NA 33 503
16: 2 NA 34 504
17: 2 NA 35 505
data (note ref.pos is 802, 801 at rows 7-8 here, a descending run, to show the fill direction):
df <- data.frame(
  id = c(rep(1, 11), rep(2, 6)),
  ref.pos = c(NA, NA, NA, 301, 302, 303, 802, 801, NA, NA, NA, 500, 501, 502, NA, NA, NA),
  pos = c(1:11, 30:35)
)
A base R option is to define a custom function fill and apply it per group with ave:
fill <- function(v) {
  inds <- range(which(!is.na(v)))       # first and last non-NA positions
  l <- 1:inds[1]                        # indices up to the first known value
  u <- inds[2]:length(v)                # indices from the last known value on
  v[l] <- v[inds[1]] - rev(l) + 1       # count down towards the top
  v[u] <- v[inds[2]] + seq_along(u) - 1 # count up towards the bottom
  v
}
df <- within(df, ref.pos <- ave(ref.pos, id, FUN = fill))
such that
> df
id ref.pos pos
1 1 298 1
2 1 299 2
3 1 300 3
4 1 301 4
5 1 302 5
6 1 303 6
7 1 800 7
8 1 801 8
9 1 802 9
10 1 803 10
11 1 804 11
12 2 500 30
13 2 501 31
14 2 502 32
15 2 503 33
16 2 504 34
17 2 505 35
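Since the question mentions dplyr, here is a dplyr sketch of the same logic as the base R answer (my own addition, and like that answer it assumes the series always steps by 1 at the ends):

library(dplyr)

df %>%
  group_by(id) %>%
  mutate(
    i  = row_number(),
    lo = min(which(!is.na(ref.pos))),    # first known position in the group
    hi = max(which(!is.na(ref.pos))),    # last known position in the group
    ref.pos = case_when(
      i < lo ~ ref.pos[lo] - (lo - i),   # count down above the first value
      i > hi ~ ref.pos[hi] + (i - hi),   # count up below the last value
      TRUE   ~ ref.pos
    )
  ) %>%
  ungroup() %>%
  select(-i, -lo, -hi)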

R Data.Table Filter by table A in table B

The goal of this code is to find the quadrant of a given point relative to a given circle.
I have two separate data.tables. Table A holds different variations of circle-equation parameters. Table B holds raw points, and I want to find how many points lie in each quadrant of each circle. The sequence is:
Get the circle equation from Table A
Filter Table B to the points that lie within the circle's coordinate bounds
Find which quadrant each point lies in (getQuadrant function)
Count how many points lie in each quadrant (Quadrants function)
I have made some attempts, but they are quite slow to return results. The tables are as follows:
set.seed(4)
TableA <- data.table(speed = rep(42:44, each = 3),
                     minX = rep(1:12, 3),
                     maxX = rep(10:21, 3),
                     minY = 1,
                     maxY = 10,
                     r = 5,
                     cX = rep(6:17, 3),
                     cY = 6,
                     indx = 1:36)
TableA
speed minX maxX minY maxY r cX cY indx
1: 42 1 10 1 10 5 6 6 1
2: 42 2 11 1 10 5 7 6 2
3: 42 3 12 1 10 5 8 6 3
4: 43 4 13 1 10 5 9 6 4
5: 43 5 14 1 10 5 10 6 5
6: 43 6 15 1 10 5 11 6 6
7: 44 7 16 1 10 5 12 6 7
8: 44 8 17 1 10 5 13 6 8
9: 44 9 18 1 10 5 14 6 9
TableB <- data.table(speed = rep(42:44, each = 100),
                     x = rep(sample(12), 100),
                     y = rep(sample(12), 100),
                     n = rep(sample(12), 100))
TableB
speed x y n
1: 42 8 2 8
2: 42 1 11 10
3: 42 3 5 5
4: 42 10 10 12
5: 42 7 8 11
Function to find the quadrant:
getQuadrant <- function(X = 0, Y = 0, R = 1, PX = 10, PY = 10){
  #' X and Y are the center of the circle
  #' R is the radius
  #' PX and PY are an arbitrary point
  # The point is at the center
  if (PX == X & PY == Y)
    return(0)
  val <- (PX - X)^2 + (PY - Y)^2
  # Outside the circle
  if (val > R^2)
    return(5)
  # 1st quadrant
  if (PX > X & PY >= Y)
    return(1)
  # 2nd quadrant
  if (PX <= X & PY > Y)
    return(2)
  # 3rd quadrant
  if (PX < X & PY <= Y)
    return(3)
  # 4th quadrant
  if (PX >= X & PY < Y)
    return(4)
}
Function to return the number of points in each quadrant:
Quadrants <- function(dt, radius, centerX, centerY){
  #' dt is the filtered data for the circle
  #' radius is the circle's radius
  #' centerX and centerY are the center of the circle
  if (nrow(dt) > 0){
    dt[, quadrant := factor(
      mapply(function(X, Y, R, PX, PY) getQuadrant(X = X, Y = Y, R = R, PX = PX, PY = PY),
             centerX, centerY, radius, x, y),
      levels = c("1", "2", "3", "4", "5"))]
    dt <- dt[, .(.N), keyby = .(quadrant)]
    setkeyv(dt, c("quadrant"))
    dt <- dt[CJ(levels(dt[, quadrant])), ]   # keep all five quadrant levels
    dd <- list(Q1 = dt$N[1], Q2 = dt$N[2], Q3 = dt$N[3], Q4 = dt$N[4], Q5 = dt$N[5])
  } else {
    dd <- list(Q1 = NA, Q2 = NA, Q3 = NA, Q4 = NA, Q5 = NA)
  }
  return(dd)
}
I have the following attempt, but it doesn't work:
finalTable <- TableA[, c('Q1', 'Q2', 'Q3', 'Q4', 'Q5') := mapply(
  function(a, b, c, d, e, f, g, h)
    Quadrants(TableB[, .SD[x %between% c(a, b) & y %between% c(c, d) & speed == h]],
              radius = e, centerX = f, centerY = g),
  minX, maxX, minY, maxY, r, cX, cY, speed)]
I don't think I'm doing it right, because the results below are not the expected ones.
speed minX maxX minY maxY r cX cY indx Q1 Q2 Q3 Q4 Q5
1: 42 1 10 1 10 5 6 6 1 32 32 100 68 68
2: 42 2 11 1 10 5 7 6 2 32 32 100 68 68
3: 42 3 12 1 10 5 8 6 3 32 32 100 68 68
4: 43 4 13 1 10 5 9 6 4 32 32 100 68 68
...
11: 42 11 20 1 10 5 16 6 11 32 32 100 68 68
12: 42 12 21 1 10 5 17 6 12 32 32 100 68 68
13: 43 1 10 1 10 5 6 6 13 32 32 100 68 68
14: 43 2 11 1 10 5 7 6 14 32 32 100 68 68
15: 43 3 12 1 10 5 8 6 15 32 32 100 68 68
...
22: 43 10 19 1 10 5 15 6 22 32 32 100 68 68
23: 43 11 20 1 10 5 16 6 23 32 32 100 68 68
24: 43 12 21 1 10 5 17 6 24 32 32 100 68 68
25: 44 1 10 1 10 5 6 6 25 32 32 100 68 68
26: 44 2 11 1 10 5 7 6 26 32 32 100 68 68
27: 44 3 12 1 10 5 8 6 27 32 32 100 68 68
28: 42 4 13 1 10 5 9 6 28 32 32 100 68 68
...
35: 44 11 20 1 10 5 16 6 35 32 32 100 68 68
36: 44 12 21 1 10 5 17 6 36 32 32 100 68 68
Can anyone take a look, please? I'd really appreciate it.
Expected Output:
speed minX maxX minY maxY r cX cY indx Q1 Q2 Q3 Q4 Q5
1: 42 2 11 1 10 5 7 6 1 200 100 400 100 200
2: 42 3 12 1 10 5 8 6 2 200 100 300 100 200
3: 42 4 13 1 10 5 9 6 3 200 100 300 100 100
4: 42 5 14 1 10 5 10 6 4 100 200 300 NA 100
...
11: 42 12 21 1 10 5 17 6 11 NA NA NA NA NA
12: 42 13 22 1 10 5 18 6 12 NA NA NA NA NA
13: 43 2 11 1 10 5 7 6 13 200 100 400 100 200
14: 43 3 12 1 10 5 8 6 14 200 100 300 100 200
15: 43 4 13 1 10 5 9 6 15 200 100 300 100 100
...
22: 43 11 20 1 10 5 16 6 22 NA NA NA NA 100
23: 43 12 21 1 10 5 17 6 23 NA NA NA NA NA
24: 43 13 22 1 10 5 18 6 24 NA NA NA NA NA
25: 44 2 11 1 10 5 7 6 25 200 100 400 100 200
26: 44 3 12 1 10 5 8 6 26 200 100 300 100 200
27: 44 4 13 1 10 5 9 6 27 200 100 300 100 100
28: 44 5 14 1 10 5 10 6 28 100 200 300 NA 100
...
35: 44 12 21 1 10 5 17 6 35 NA NA NA NA NA
36: 44 13 22 1 10 5 18 6 36 NA NA NA NA NA
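Not a verified answer, but one way to avoid the row-by-row mapply could be a single non-equi join followed by a vectorised quadrant calculation. A rough sketch of my own, using the question's column names:

library(data.table)

# Pair each point with every circle whose speed matches and whose bounds contain it
paired <- TableB[TableA,
                 on = .(speed, x >= minX, x <= maxX, y >= minY, y <= maxY),
                 .(indx, px = x.x, py = x.y, cX, cY, r),
                 allow.cartesian = TRUE]

# Vectorised equivalent of getQuadrant
paired[, quadrant := {
  dx <- px - cX
  dy <- py - cY
  fifelse(dx^2 + dy^2 > r^2, 5L,
  fifelse(dx == 0 & dy == 0, 0L,
  fifelse(dx > 0 & dy >= 0, 1L,
  fifelse(dx <= 0 & dy > 0, 2L,
  fifelse(dx < 0 & dy <= 0, 3L, 4L)))))
}]

# Count points per circle and quadrant, reshape to Q1..Q5, and join back to TableA
counts <- paired[quadrant %in% 1:5, .N, by = .(indx, quadrant)]
counts[, q := paste0("Q", quadrant)]
merge(TableA, dcast(counts, indx ~ q, value.var = "N"), by = "indx", all.x = TRUE)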

How to insert a row which calculates the average of the rows above it?

I was looking to separate rows of data by Cue and add a row that calculates the averages per subject. Here is an example:
Before:
Cue ITI a b c
1 0 16 0.82062 0.52185 0.27679
2 0 24 0.53894 0.49957 0.35767
3 4 22 0.26855 0.17487 0.22461
4 4 20 0.15106 0.48767 0.49072
5 7 18 0.11627 0.12604 0.2832
6 7 24 0.50201 0.14252 0.21454
7 12 16 0.27649 0.96008 0.42114
8 12 18 0.60852 0.21637 0.18799
9 22 20 0.32867 0.65308 0.29388
10 22 24 0.25726 0.37048 0.32379
After:
Cue ITI a b c
1 0 16 0.82062 0.52185 0.27679
2 0 24 0.53894 0.49957 0.35767
3 0.67978 0.51071 0.31723
4 4 22 0.26855 0.17487 0.22461
5 4 20 0.15106 0.48767 0.49072
6 0.209 0.331 0.357
7 7 18 0.11627 0.12604 0.2832
8 7 24 0.50201 0.14252 0.21454
9 0.309 0.134 0.248
10 12 16 0.27649 0.96008 0.42114
11 12 18 0.60852 0.21637 0.18799
12 0.442 0.588 0.304
13 22 20 0.32867 0.65308 0.29388
14 22 24 0.25726 0.37048 0.32379
15 0.292 0.511 0.308
So in the "after" example, line 3 is the average of lines 1 and 2, line 6 is the average of lines 4 and 5, and so on.
Any help/information would be greatly appreciated!
Thank you!
You can use base R to do something like:
Reduce(rbind, by(data, data[1], function(x) rbind(x, c(NA, NA, colMeans(x[-(1:2)])))))
Cue ITI a b c
1 0 16 0.820620 0.521850 0.276790
2 0 24 0.538940 0.499570 0.357670
3 NA NA 0.679780 0.510710 0.317230
32 4 22 0.268550 0.174870 0.224610
4 4 20 0.151060 0.487670 0.490720
31 NA NA 0.209805 0.331270 0.357665
5 7 18 0.116270 0.126040 0.283200
6 7 24 0.502010 0.142520 0.214540
33 NA NA 0.309140 0.134280 0.248870
7 12 16 0.276490 0.960080 0.421140
8 12 18 0.608520 0.216370 0.187990
34 NA NA 0.442505 0.588225 0.304565
9 22 20 0.328670 0.653080 0.293880
10 22 24 0.257260 0.370480 0.323790
35 NA NA 0.292965 0.511780 0.308835
Here is one idea. Split the data frame, perform the analysis, and then combine them together.
DF_list <- split(DF, f = DF$Cue)

DF_list2 <- lapply(DF_list, function(x){
  df_temp <- as.data.frame(t(colMeans(x[, -c(1, 2)])))
  df_temp[, c("Cue", "ITI")] <- NA
  df <- rbind(x, df_temp)
  return(df)
})

DF2 <- do.call(rbind, DF_list2)
rownames(DF2) <- 1:nrow(DF2)
DF2
# Cue ITI a b c
# 1 0 16 0.820620 0.521850 0.276790
# 2 0 24 0.538940 0.499570 0.357670
# 3 NA NA 0.679780 0.510710 0.317230
# 4 4 22 0.268550 0.174870 0.224610
# 5 4 20 0.151060 0.487670 0.490720
# 6 NA NA 0.209805 0.331270 0.357665
# 7 7 18 0.116270 0.126040 0.283200
# 8 7 24 0.502010 0.142520 0.214540
# 9 NA NA 0.309140 0.134280 0.248870
# 10 12 16 0.276490 0.960080 0.421140
# 11 12 18 0.608520 0.216370 0.187990
# 12 NA NA 0.442505 0.588225 0.304565
# 13 22 20 0.328670 0.653080 0.293880
# 14 22 24 0.257260 0.370480 0.323790
# 15 NA NA 0.292965 0.511780 0.308835
DATA
DF <- read.table(text = " Cue ITI a b c
1 0 16 0.82062 0.52185 0.27679
2 0 24 0.53894 0.49957 0.35767
3 4 22 0.26855 0.17487 0.22461
4 4 20 0.15106 0.48767 0.49072
5 7 18 0.11627 0.12604 0.2832
6 7 24 0.50201 0.14252 0.21454
7 12 16 0.27649 0.96008 0.42114
8 12 18 0.60852 0.21637 0.18799
9 22 20 0.32867 0.65308 0.29388
10 22 24 0.25726 0.37048 0.32379", header = TRUE)
A data.table approach, but if someone can offer some improvements I'd be keen to hear.
library(data.table)
dt <- data.table(df)
dt2 <- dt[, lapply(.SD, mean), by = Cue][,ITI := NA][]
data.table(rbind(dt, dt2))[order(Cue)][is.na(ITI), Cue := NA][]
Cue ITI a b c
1: 0 16 0.820620 0.521850 0.276790
2: 0 24 0.538940 0.499570 0.357670
3: NA NA 0.679780 0.510710 0.317230
4: 4 22 0.268550 0.174870 0.224610
5: 4 20 0.151060 0.487670 0.490720
6: NA NA 0.209805 0.331270 0.357665
If you want to leave the Cue values as-is to confirm the group, just drop the [is.na(ITI), Cue := NA] from the last line.
I would use group_by and summarise from the dplyr package to get a data frame with the average values, then rbind the new data frame to the old one and sort by Cue:

library(dplyr)

df_averages <- df_orig %>%
  group_by(Cue) %>%
  summarise(ITI = NA, a = mean(a), b = mean(b), c = mean(c)) %>%
  ungroup()

df_all <- arrange(rbind(df_orig, df_averages), Cue)
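A more compact dplyr sketch of the same split-apply-combine idea (my own addition; assumes dplyr 1.0+ for across) appends a mean row to each group with group_modify. Note that Cue stays filled on the average rows here, which, as noted above, can help confirm the group:

library(dplyr)

df_orig %>%
  group_by(Cue) %>%
  group_modify(~ bind_rows(.x, summarise(.x, across(c(a, b, c), mean)))) %>%
  ungroup()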

Aggregation of all possible unique combinations with observations in the same column in R

I am trying to shorten a chunk of code to make it faster and easier to modify. This is a short example of my data:
order obs year var1 var2 var3
1 3 1 1 32 588 NA
2 4 1 2 33 689 2385
3 5 1 3 NA 678 2369
4 33 3 1 10 214 1274
5 34 3 2 10 237 1345
6 35 3 3 10 242 1393
7 78 6 1 5 62 NA
8 79 6 2 5 75 296
9 80 6 3 5 76 500
10 93 7 1 NA NA NA
11 94 7 2 4 86 247
12 95 7 3 3 54 207
Basically, I want R to find every possible unique combination of two values (observations) in the obs column within the same year, creating a new matrix or data frame whose observations are aggregations of the originals. Order is not important, so 1+6 = 6+1. For instance, with 150 observations I would expect 11,175 feasible combinations per year.
I sort of got what I want with basic coding but, as you will see, it is way too long (I have built 66 different new data sets this way, so it does not really make sense), and I am wondering how to shorten it. I made some attempts (plyr, ...) with no real success. Here is what I did:
# For the 1st year, groups of 2 obs
newmatrix <- data.frame(t(combn(unique(data$obs[data$year==1]), 2)))
colnames(newmatrix) <- c("obs1", "obs2")
newmatrix$name <- do.call(paste, c(newmatrix[c("obs1", "obs2")], sep = "_"))
# and the aggregation of var. using indexes, which I will skip here to save your time :)
To illustrate, here is the result I would get for the 1st year from the sample above. NA appears where either of the two values was missing, since I only computed pairs where both were valid, and I show only variables 1 and 3. I used the sum, but it could be any other function:
order obs1 obs2 name var1 var3
1 1 1 3 1_3 42 NA
2 2 1 6 1_6 37 NA
3 3 1 7 1_7 NA NA
4 4 3 6 3_6 15 NA
5 5 3 7 3_7 NA NA
6 6 6 7 6_7 NA NA
And here are the first two lines of the same type of matrix for the 3rd year:
order obs1 obs2 name var1 var3
1 1 1 3 1_3 NA 3762
2 2 1 6 1_6 NA 2868
.......... etc ............
I hope I explained myself clearly; otherwise, let me know in the comments. Thank you in advance for your hints on how to do this more efficiently.
I would use split-apply-combine to split by year, find all the combinations, and then combine back together:
do.call(rbind, lapply(split(data, data$year), function(x) {
  p <- combn(nrow(x), 2)
  data.frame(order = paste(x$order[p[1,]], x$order[p[2,]], sep = "_"),
             obs1 = x$obs[p[1,]],
             obs2 = x$obs[p[2,]],
             year = x$year[1],
             var1 = x$var1[p[1,]] + x$var1[p[2,]],
             var2 = x$var2[p[1,]] + x$var2[p[2,]],
             var3 = x$var3[p[1,]] + x$var3[p[2,]])
}))
# order obs1 obs2 year var1 var2 var3
# 1.1 3_33 1 3 1 42 802 NA
# 1.2 3_78 1 6 1 37 650 NA
# 1.3 3_93 1 7 1 NA NA NA
# 1.4 33_78 3 6 1 15 276 NA
# 1.5 33_93 3 7 1 NA NA NA
# 1.6 78_93 6 7 1 NA NA NA
# 2.1 4_34 1 3 2 43 926 3730
# 2.2 4_79 1 6 2 38 764 2681
# 2.3 4_94 1 7 2 37 775 2632
# 2.4 34_79 3 6 2 15 312 1641
# 2.5 34_94 3 7 2 14 323 1592
# 2.6 79_94 6 7 2 9 161 543
# 3.1 5_35 1 3 3 NA 920 3762
# 3.2 5_80 1 6 3 NA 754 2869
# 3.3 5_95 1 7 3 NA 732 2576
# 3.4 35_80 3 6 3 15 318 1893
# 3.5 35_95 3 7 3 13 296 1600
# 3.6 80_95 6 7 3 8 130 707
This gives you a lot of flexibility in how you combine the data for pairs of observations within a year: x[p[1,],] is the year-specific data for the first element of each pair and x[p[2,],] is the data for the second. You can return a year-specific data frame with any combination of data for the pairs, and the year-specific data frames are combined into a single final data frame with do.call and rbind.
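If you have many variable columns (the question mentions building 66 derived data sets), a small generalisation of the same approach might help. A sketch of my own, assuming the variable columns are listed in vars:

vars <- c("var1", "var2", "var3")

do.call(rbind, lapply(split(data, data$year), function(x) {
  p <- combn(nrow(x), 2)
  base <- data.frame(order = paste(x$order[p[1,]], x$order[p[2,]], sep = "_"),
                     obs1 = x$obs[p[1,]],
                     obs2 = x$obs[p[2,]],
                     year = x$year[1])
  # Sum each variable over the two members of the pair (NA if either is NA)
  sums <- lapply(x[vars], function(v) v[p[1,]] + v[p[2,]])
  cbind(base, sums)
}))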

Conditional column-wise or row-wise subtraction in a data frame

I need to do a column-wise subtraction and row-wise subtraction in R.
id on fail
1 10-10-2014 11-11-2014
1 11-10-2014 12-12-2014
1 12-10-2014 12-01-2015
2 13-10-2014 12-02-2015
2 14-10-2014 15-03-2015
2 15-10-2014 15-04-2015
2 16-10-2014 16-05-2015
3 17-10-2014 16-06-2015
3 18-10-2014 17-07-2015
3 19-10-2014 17-08-2015
3 20-10-2014 17-09-2015
For example, in the table above, whenever a new id appears the subtraction should be column-wise (fail minus on on that row); otherwise it should be row-wise (fail minus the previous row's fail). I need a result like this:
id on fail res
1 10-10-2014 11-11-2014 32
1 11-10-2014 12-12-2014 31
1 12-10-2014 12-01-2015 31
2 13-10-2014 12-02-2015 122
2 14-10-2014 15-03-2015 31
2 15-10-2014 15-04-2015 31
2 16-10-2014 16-05-2015 31
3 17-10-2014 16-06-2015 242
3 18-10-2014 17-07-2015 31
3 19-10-2014 17-08-2015 31
3 20-10-2014 17-09-2015 31
As of now I am using the following code:
data[, 2] <- as.Date(data[, 2], format = "%d-%m-%Y")
data[, 3] <- as.Date(data[, 3], format = "%d-%m-%Y")
x <- as.numeric(diff(data[, 3]))
DF <- read.table(text="id on fail
1 10-10-2014 11-11-2014
1 11-10-2014 12-12-2014
1 12-10-2014 12-01-2015
2 13-10-2014 12-02-2015
2 14-10-2014 15-03-2015
2 15-10-2014 15-04-2015
2 16-10-2014 16-05-2015
3 17-10-2014 16-06-2015
3 18-10-2014 17-07-2015
3 19-10-2014 17-08-2015
3 20-10-2014 17-09-2015 ", header=TRUE)
DF[, 2:3] <- lapply(DF[, 2:3], as.Date, format = "%d-%m-%Y")

# Row-wise: difference from the previous fail date
DF$res <- c(NA, diff(DF$fail))

# Column-wise on the first row of each id: fail minus on
first <- c(TRUE, diff(DF$id) != 0)
DF[first, "res"] <- DF[first, "fail"] - DF[first, "on"]
# id on fail res
# 1 1 2014-10-10 2014-11-11 32
# 2 1 2014-10-11 2014-12-12 31
# 3 1 2014-10-12 2015-01-12 31
# 4 2 2014-10-13 2015-02-12 122
# 5 2 2014-10-14 2015-03-15 31
# 6 2 2014-10-15 2015-04-15 31
# 7 2 2014-10-16 2015-05-16 31
# 8 3 2014-10-17 2015-06-16 242
# 9 3 2014-10-18 2015-07-17 31
# 10 3 2014-10-19 2015-08-17 31
# 11 3 2014-10-20 2015-09-17 31
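A data.table alternative (my own addition, assuming the same DF with the date columns already converted): take the difference from the previous fail within each id with shift, then fill each id's first row with fail - on.

library(data.table)

setDT(DF)[, res := as.integer(fail - shift(fail)), by = id]   # row-wise within id
DF[is.na(res), res := as.integer(fail - on)]                  # first row of each id
DF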
