Find daily percentiles for a large data set of irregular data in R

I have a very large data set (> 1 million rows) for which percentiles need to be calculated across all rows sharing the same calendar day (e.g., all Jan 1, all Jan 2, ..., all Dec 31). There are many rows with the same year, month and day but different data. Below is an example of the data:
Year Month Day A B C D
2007 Jan 1 1 2 3 4
2007 Jan 1 5 6 7 8
2007 Feb 1 1 2 3 4
2007 Feb 1 5 6 7 8
.
.
2010 Dec 30 1 2 3 4
2010 Dec 30 5 6 7 8
2010 Dec 31 1 2 3 4
2010 Dec 31 5 6 7 8
So to calculate the 95th percentile for Jan 1, it would need to include all Jan 1 rows for all years (e.g., 2007-2010) and all columns (A, B, C and D). The same is then done for Jan 2, Jan 3, ..., Dec 30 and Dec 31. This is easy for small data sets in Excel using nested IF statements; e.g., ={PERCENTILE(IF(Month($B$2:$B$1000000)="Jan",IF(Day($C$2:$C$1000000)="1",$D$2:$G$1000000)),95%)}
The percentiles could then be added to a new data table containing only the month and day:
Month Day P95 P05
Jan 1
Jan 2
Jan 3
.
.
Dec 30
Dec 31
Then, using the percentiles, I need to evaluate whether each data value in columns A, B, C and D for its respective date (e.g., Jan 1) is larger than P95 or smaller than P05. New columns could then be added to the first data table containing 1 or 0 (1 if the value falls outside the percentile bounds, 0 if not):
Year Month Day A B C D A05 B05 C05 D05 A95 B95 C95 D95
2007 Jan 1 1 2 3 4 1 0 0 0 0 0 0 0
2007 Jan 1 5 6 7 8 0 0 0 0 0 0 1 1
.
.
2010 Dec 31 5 6 7 8 0 0 0 0 0 0 0 1

I've called your data dat:
library(plyr)
library(reshape2)
# melt values so all values are in 1 column
dat_melt <- melt(dat, id.vars=c("Year", "Month", "Day"), variable.name="letter", value.name="value")
# get quantiles, split by day
dat_quantiles <- ddply(dat_melt, .(Month, Day), summarise,
                       P05 = quantile(value, 0.05), P95 = quantile(value, 0.95))
# merge original data with quantiles
all_dat <- merge(dat_melt, dat_quantiles)
# See if in bounds
all_dat <- transform(all_dat, less05=ifelse(value < P05, 1, 0), greater95=ifelse(value > P95, 1, 0))
Month Day Year letter value P05 P95 less05 greater95
1 Dec 30 2010 A 1 1.35 7.65 1 0
2 Dec 30 2010 A 5 1.35 7.65 0 0
3 Dec 30 2010 B 2 1.35 7.65 0 0
4 Dec 30 2010 B 6 1.35 7.65 0 0
5 Dec 30 2010 C 3 1.35 7.65 0 0
6 Dec 30 2010 C 7 1.35 7.65 0 0
7 Dec 30 2010 D 4 1.35 7.65 0 0
8 Dec 30 2010 D 8 1.35 7.65 0 1
9 Dec 31 2010 A 1 1.35 7.65 1 0
10 Dec 31 2010 A 5 1.35 7.65 0 0
11 Dec 31 2010 B 2 1.35 7.65 0 0
12 Dec 31 2010 B 6 1.35 7.65 0 0
13 Dec 31 2010 C 3 1.35 7.65 0 0
14 Dec 31 2010 C 7 1.35 7.65 0 0
15 Dec 31 2010 D 4 1.35 7.65 0 0
16 Dec 31 2010 D 8 1.35 7.65 0 1
17 Feb 1 2007 A 1 1.35 7.65 1 0
18 Feb 1 2007 A 5 1.35 7.65 0 0
19 Feb 1 2007 B 2 1.35 7.65 0 0
20 Feb 1 2007 B 6 1.35 7.65 0 0
21 Feb 1 2007 C 3 1.35 7.65 0 0
22 Feb 1 2007 C 7 1.35 7.65 0 0
23 Feb 1 2007 D 4 1.35 7.65 0 0
24 Feb 1 2007 D 8 1.35 7.65 0 1
25 Jan 1 2007 A 1 1.35 7.65 1 0
26 Jan 1 2007 A 5 1.35 7.65 0 0
27 Jan 1 2007 B 2 1.35 7.65 0 0
28 Jan 1 2007 B 6 1.35 7.65 0 0
29 Jan 1 2007 C 3 1.35 7.65 0 0
30 Jan 1 2007 C 7 1.35 7.65 0 0
31 Jan 1 2007 D 4 1.35 7.65 0 0
32 Jan 1 2007 D 8 1.35 7.65 0 1
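If you also need the wide 0/1 columns (A05 ... D95) from the question, one hedged sketch using dcast() from reshape2 (the row counter and the w05/w95/flags names are mine, not from the original answer; the counter is needed because the same Year/Month/Day/letter combination occurs on several rows):
# number the duplicate rows within each (Year, Month, Day, letter) group
all_dat$row <- ave(seq_along(all_dat$value),
                   all_dat$Year, all_dat$Month, all_dat$Day, all_dat$letter,
                   FUN = seq_along)
# one cast per flag, then merge the two wide tables on the id columns
w05 <- dcast(all_dat, Year + Month + Day + row ~ letter, value.var = "less05")
w95 <- dcast(all_dat, Year + Month + Day + row ~ letter, value.var = "greater95")
names(w05)[5:8] <- paste0(names(w05)[5:8], "05")
names(w95)[5:8] <- paste0(names(w95)[5:8], "95")
flags <- merge(w05, w95)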

Something along these lines can then be merged back into the original dataframe:
aggregate(dfrm[, c("A", "B", "C", "D")], list(Month = dfrm$Month, Day = dfrm$Day),
          FUN = quantile, probs = c(0.05, 0.95))
Notice I suggested merge(). Your description implied (but was not explicit) that you wanted all years' worth of Jan 1 values to be pooled together. I think this is a lot "easier" than the expression you were using in Excel. This computes both the 0.05 and the 0.95 quantile on all four columns.
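To make that concrete, here is a hedged base-R sketch of the whole aggregate-then-merge route, pooling A:D per day as the question asks (dfrm as in the answer; long, q and dfrm2 are hypothetical helper names):
# stack A:D into one column so each day's quantiles pool all four variables
long <- data.frame(Month = rep(dfrm$Month, 4),
                   Day   = rep(dfrm$Day, 4),
                   value = unlist(dfrm[, c("A", "B", "C", "D")]))
q <- aggregate(value ~ Month + Day, data = long,
               FUN = quantile, probs = c(0.05, 0.95))
# aggregate returns the two quantiles as a matrix column; flatten it
q <- data.frame(q[1:2], P05 = q$value[, 1], P95 = q$value[, 2])
dfrm2 <- merge(dfrm, q, by = c("Month", "Day"))
# flag values outside the per-day band, one indicator column per letter
for (col in c("A", "B", "C", "D")) {
  dfrm2[[paste0(col, "05")]] <- as.integer(dfrm2[[col]] < dfrm2$P05)
  dfrm2[[paste0(col, "95")]] <- as.integer(dfrm2[[col]] > dfrm2$P95)
}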

Related

Annual moving window over a data frame

I have a data frame of discharge data. Below is a reproducible example:
library(lubridate)
set.seed(1)  # fix the RNG so the sampled example is actually reproducible
Date <- sample(seq(as.Date("1981/01/01"), as.Date("1982/12/31"), by = "day"), 24)
Date <- sort(Date, decreasing = FALSE)
Station <- rep("A", 24)
Discharge <- rnorm(n = 24, mean = 1, sd = 1)
df <- cbind.data.frame(Station, Date, Discharge)
df$Year <- year(df$Date)
df$Month <- month(df$Date)
df$Day <- day(df$Date)
The output:
> df
Station Date Discharge Year Month Day
1 A 1981-01-23 0.75514968 1981 1 23
2 A 1981-02-17 -0.08552776 1981 2 17
3 A 1981-03-20 1.47586712 1981 3 20
4 A 1981-04-26 3.64823544 1981 4 26
5 A 1981-05-22 1.21880453 1981 5 22
6 A 1981-05-23 2.19482857 1981 5 23
7 A 1981-07-02 -0.13598754 1981 7 2
8 A 1981-07-23 0.12365626 1981 7 23
9 A 1981-07-24 2.12557882 1981 7 24
10 A 1981-09-02 2.79879494 1981 9 2
11 A 1981-09-04 1.67926948 1981 9 4
12 A 1981-11-06 0.49720784 1981 11 6
13 A 1981-12-21 -0.25272271 1981 12 21
14 A 1982-04-08 1.39706157 1982 4 8
15 A 1982-04-19 -0.13965981 1982 4 19
16 A 1982-05-26 0.55238425 1982 5 26
17 A 1982-06-23 3.94639154 1982 6 23
18 A 1982-06-25 -0.03415929 1982 6 25
19 A 1982-07-15 1.00996167 1982 7 15
20 A 1982-09-11 3.18225186 1982 9 11
21 A 1982-10-17 0.30875497 1982 10 17
22 A 1982-10-30 2.26209011 1982 10 30
23 A 1982-11-06 0.34430489 1982 11 6
24 A 1982-11-19 2.28251458 1982 11 19
What I need to do is create a moving-window function using base R. I have tried the runner package, but it is proving not to be flexible enough. This moving window (of size 3, say) should take 3 rows at a time and calculate the mean discharge. The window should continue until the last date of 1981; another window should then start in 1982 and do the same. How should I approach this?
Using base R only:
w <- 3
df$DischargeM <- sapply(1:nrow(df), function(x) {
  tmp <- NA
  if (x >= w) {
    # only take a mean when the whole window falls within a single year
    if (length(unique(df$Year[(x - w + 1):x])) == 1) {
      tmp <- mean(df$Discharge[(x - w + 1):x])
    }
  }
  tmp
})
Station Date Discharge Year Month Day DischargeM
1 A 1981-01-21 2.0009355 1981 1 21 NA
2 A 1981-02-11 0.5948567 1981 2 11 NA
3 A 1981-04-17 0.2637090 1981 4 17 0.95316705
4 A 1981-04-18 3.9180253 1981 4 18 1.59219699
5 A 1981-05-09 -0.2589129 1981 5 9 1.30760712
6 A 1981-07-05 1.1055913 1981 7 5 1.58823456
7 A 1981-07-11 0.7561600 1981 7 11 0.53427946
8 A 1981-07-22 0.0978999 1981 7 22 0.65321706
9 A 1981-08-04 0.5410163 1981 8 4 0.46502541
10 A 1981-08-13 -0.5044425 1981 8 13 0.04482458
11 A 1981-10-06 1.5954315 1981 10 6 0.54400178
12 A 1981-11-08 -0.5757041 1981 11 8 0.17176164
13 A 1981-12-24 1.3892440 1981 12 24 0.80299047
14 A 1982-01-07 1.9363874 1982 1 7 NA
15 A 1982-02-20 1.4340554 1982 2 20 NA
16 A 1982-05-29 0.4536461 1982 5 29 1.27469632
17 A 1982-06-10 2.9776761 1982 6 10 1.62179253
18 A 1982-06-17 1.6371733 1982 6 17 1.68949847
19 A 1982-06-28 1.7585579 1982 6 28 2.12446908
20 A 1982-08-17 0.8297518 1982 8 17 1.40849432
21 A 1982-09-21 1.6853808 1982 9 21 1.42456348
22 A 1982-11-13 0.6066167 1982 11 13 1.04058309
23 A 1982-11-16 1.4989263 1982 11 16 1.26364126
24 A 1982-11-28 0.2273658 1982 11 28 0.77763625
(Make sure your df is ordered by date.)
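If it might not be, a minimal sketch to sort it first:
df <- df[order(df$Station, df$Date), ]  # sort so each window runs forward in time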
You can do this with dplyr and the rollmean or rollmeanr function from zoo: group the data by year, then apply rollmeanr inside mutate().
library(dplyr)
df %>%
  group_by(Year) %>%
  mutate(avg = zoo::rollmeanr(Discharge, k = 3, fill = NA))
# A tibble: 24 x 7
# Groups: Year [2]
Station Date Discharge Year Month Day avg
<chr> <date> <dbl> <dbl> <dbl> <int> <dbl>
1 A 1981-01-04 1.00 1981 1 4 NA
2 A 1981-03-26 0.0468 1981 3 26 NA
3 A 1981-03-28 0.431 1981 3 28 0.494
4 A 1981-05-04 1.30 1981 5 4 0.593
5 A 1981-08-26 2.06 1981 8 26 1.26
6 A 1981-10-14 1.09 1981 10 14 1.48
7 A 1981-12-10 1.28 1981 12 10 1.48
8 A 1981-12-23 0.668 1981 12 23 1.01
9 A 1982-01-02 -0.333 1982 1 2 NA
10 A 1982-04-13 0.800 1982 4 13 NA
# ... with 14 more rows
Kindly let me know if this is what you were anticipating.
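For reference, rollmeanr() is simply rollmean() with align = "right", so each mean lands on the last row of its 3-row window. Spelled out for the 1981 rows (a sketch):
# equivalent right-aligned call on the 1981 subset (zoo)
zoo::rollmean(df$Discharge[df$Year == 1981], k = 3, fill = NA, align = "right")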
Base version (rollapply comes from zoo):
library(zoo)
result <- transform(df,
  Discharge_mean = ave(Discharge, Year,
                       FUN = function(x) rollapply(x, width = 3, mean,
                                                   align = "right", fill = NA))
)
dplyr version:
result <- df %>%
  group_by(Year) %>%
  mutate(Discharge_mean = rollapply(Discharge, 3, mean, align = "right", fill = NA))
Output:
> result
Station Date Discharge Year Month Day Discharge_mean
1 A 1981-01-09 0.560448487 1981 1 9 NA
2 A 1981-01-17 0.006777809 1981 1 17 NA
3 A 1981-02-08 2.008959399 1981 2 8 0.8587286
4 A 1981-02-21 1.166452993 1981 2 21 1.0607301
5 A 1981-04-12 3.120080595 1981 4 12 2.0984977
6 A 1981-04-24 2.647325960 1981 4 24 2.3112865
7 A 1981-05-01 0.764980310 1981 5 1 2.1774623
8 A 1981-05-20 2.203700845 1981 5 20 1.8720024
9 A 1981-06-19 0.519390897 1981 6 19 1.1626907
10 A 1981-07-06 1.704146872 1981 7 6 1.4757462
# 14 more rows

Summarizing percentage by subgroups

I don't know how to explain my problem well, but I want to summarize the distance categories and get the percentage for each distance per month. In my table each week sums to 100%, and now I want to calculate the same for each month, using the percentages from the weeks. Something like sum(percent) / number of weeks in the month.
This is what I have:
year month year_week distance object_remarks weeksum percent
1 2017 05 2017_21 15 ctenolabrus_rupestris 3 0.75
2 2017 05 2017_21 10 ctenolabrus_rupestris 1 0.25
3 2017 05 2017_22 5 ctenolabrus_rupestris 5 0.833
4 2017 05 2017_22 0 ctenolabrus_rupestris 1 0.167
5 2017 06 2017_22 0 ctenolabrus_rupestris 9 1
6 2017 06 2017_23 20 ctenolabrus_rupestris 6 0.545
7 2017 06 2017_23 0 ctenolabrus_rupestris 5 0.455
I want to have an output like this:
year month distance object_remarks weeksum percent percent_month
1 2017 05 15 ctenolabrus_rupestris 3 0.75 0.375
2 2017 05 10 ctenolabrus_rupestris 1 0.25 0.125
3 2017 05 5 ctenolabrus_rupestris 5 0.833 0.4165
4 2017 05 0 ctenolabrus_rupestris 1 0.167 0.0835
5 2017 06 0 ctenolabrus_rupestris 14 1.455 0.7275
6 2017 06 20 ctenolabrus_rupestris 6 0.545 0.2725
Thanks a lot!
You may need to use group_by() twice.
library(dplyr)
df %>%
  select(-year_week) %>%
  group_by(month, distance) %>%
  mutate(percent = sum(percent), weeksum = sum(weeksum)) %>%
  distinct() %>%
  group_by(month) %>%
  mutate(percent_month = percent / sum(percent))
# A tibble: 6 x 7
# Groups: month [2]
# year month distance object_remarks weeksum percent percent_month
# <int> <int> <int> <chr> <int> <dbl> <dbl>
# 1 2017 5 15 ctenolabrus_rupestris 3 0.75 0.375
# 2 2017 5 10 ctenolabrus_rupestris 1 0.25 0.125
# 3 2017 5 5 ctenolabrus_rupestris 5 0.833 0.416
# 4 2017 5 0 ctenolabrus_rupestris 1 0.167 0.0835
# 5 2017 6 0 ctenolabrus_rupestris 14 1.46 0.728
# 6 2017 6 20 ctenolabrus_rupestris 6 0.545 0.272
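If you instead want to normalize by the number of weeks in the month, as the question's sum(percent) / number-of-weeks formula suggests, a hedged variant (n_weeks is a helper column I introduce) counts the distinct year_week values before collapsing:
library(dplyr)
df %>%
  group_by(month) %>%
  mutate(n_weeks = n_distinct(year_week)) %>%        # weeks observed in the month
  group_by(year, month, distance, object_remarks) %>%
  summarise(weeksum = sum(weeksum),
            percent = sum(percent),
            percent_month = percent / first(n_weeks))
With two weeks in each month this gives 0.7275 and 0.2725 for June, matching the arithmetic in the desired output.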

Is there a way to make `lm` report the number of observations used (instead of omitted)?

Simple question. I would like to immediately get the number of observations used by the lm model when I subset the data. Just to give a reproducible example:
library(data.table)
df <- fread(
"ID DEP C fac H I clvl iso year matchcode
1 1 1 NA 9 1 1 NLD 2009 NLD2009
2 1 1 NA 8 1 1 NLD 2009 NLD2009
3 7 0 NA 3 0 2 NLD 2014 NLD2014
4 8 0 NA 4 0 2 NLD 2014 NLD2014
5 1 0 B 6 0 2 AUS 2011 AUS2011
6 2 0 B 7 0 2 AUS 2011 AUS2011
7 4 1 B 8 1 2 AUS 2007 AUS2007
8 5 1 B 7 7 2 AUS 2007 AUS2007
9 6 0 NA 5 1 1 USA 2007 USA2007
10 1 0 NA 5 1 1 USA 2007 USA2007
11 0 1 NA 0 0 2 USA 2011 USA2010
12 2 1 NA 1 0 2 USA 2011 USA2010
13 2 0 NA 6 NA 3 USA 2013 USA2013
14 9 0 NA 4 0 3 USA 2013 USA2013
15 8 1 A 5 1 2 BLG 2007 BLG2007
16 2 0 A 6 0 4 BEL 2009 BEL2009
17 NA 0 A 1 0 4 BEL 2009 BEL2009
18 9 1 A 0 1 4 BEL 2012 BEL2012",
header = TRUE
)
ols <- lm(DEP ~ H + I + iso, data=df, subset=(ID != 15))
summary(ols)
Is there a way to make lm report the number of observations used (instead of omitted)? I seriously cannot find this anywhere. It matters because I don't know the number of observations in each subset by heart.
Call:
lm(formula = DEP ~ H + I + iso, data = df, subset = (ID != 15))
Residuals:
Min 1Q Median 3Q Max
-4.930 -1.907 -0.167 1.855 6.065
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 5.658 3.589 1.58 0.15
H -0.499 0.436 -1.14 0.28
I 0.417 0.594 0.70 0.50
isoBEL 1.130 3.592 0.31 0.76
isoNLD 1.376 2.685 0.51 0.62
isoUSA -0.728 3.030 -0.24 0.82
Residual standard error: 3.6 on 9 degrees of freedom
(2 observations deleted due to missingness)
Multiple R-squared: 0.206, Adjusted R-squared: -0.235
F-statistic: 0.468 on 5 and 9 DF, p-value: 0.791
If not, what would be the easiest way to deduce it from the lm output?
The nobs() function tells you how many observations were used:
nobs(ols)
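If you prefer to deduce it from the fitted object, the residual vector has one entry per observation used, and the residual degrees of freedom plus the number of estimated coefficients gives the same count:
length(residuals(ols))      # observations actually used (15 here)
ols$df.residual + ols$rank  # same count: 9 residual df + 6 coefficients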

Inserting rows into a table

I have this table (visit_ts) -
Year Month Number_of_visits
2011 4 1
2011 6 3
2011 7 23
2011 12 32
2012 1 123
2012 11 3200
The aim is to insert rows with Number_of_visits as 0 for the months that are missing from the table.
Do not insert rows for 2011 where the month is 1, 2 or 3, or for 2012 where the month is 12.
The following code works correctly -
vec_month <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)
vec_year <- c(2011, 2012, 2013, 2014, 2015, 2016)
i <- 1
startyear <- head(visit_ts$Year, n = 1)
endyear <- tail(visit_ts$Year, n = 1)
x <- head(visit_ts$Month, n = 1)
y <- tail(visit_ts$Month, n = 1)
for (year in vec_year) {
  if (year %in% visit_ts$Year) {
    a <- subset(visit_ts, visit_ts$Year == year)
    index <- which(!vec_month %in% a$Month)
    for (j in index) {
      if ((year == startyear & j > x) | (year == endyear & j < y)) {
        visit_ts <- rbind(visit_ts, c(year, j, 0))
      } else if (year != startyear & year != endyear) {
        visit_ts <- rbind(visit_ts, c(year, j, 0))
      }
    }
  } else {
    i <- i + 1
  }
}
As I am new to R, I am looking for an alternative/better solution that does not involve hard-coding the year and month vectors. Also, please feel free to point out best programming practices.
We can use expand.grid with merge or left_join
library(dplyr)
expand.grid(Year = min(df1$Year):max(df1$Year), Month = 1:12) %>%
  filter(!(Year == min(df1$Year) & Month %in% 1:3 |
             Year == max(df1$Year) & Month == 12)) %>%
  left_join(., df1) %>%
  mutate(Number_of_visits = replace(Number_of_visits, is.na(Number_of_visits), 0))
# Year Month Number_of_visits
#1 2012 1 123
#2 2012 2 0
#3 2012 3 0
#4 2011 4 1
#5 2012 4 0
#6 2011 5 0
#7 2012 5 0
#8 2011 6 3
#9 2012 6 0
#10 2011 7 23
#11 2012 7 0
#12 2011 8 0
#13 2012 8 0
#14 2011 9 0
#15 2012 9 0
#16 2011 10 0
#17 2012 10 0
#18 2011 11 0
#19 2012 11 3200
#20 2011 12 32
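The same idea works in base R, with merge() doing the left join; a sketch using the question's visit_ts (grid and keep are helper names of mine):
grid <- expand.grid(Year = min(visit_ts$Year):max(visit_ts$Year), Month = 1:12)
# drop the months before the first and after the last observation
keep <- !(grid$Year == min(visit_ts$Year) & grid$Month %in% 1:3 |
            grid$Year == max(visit_ts$Year) & grid$Month == 12)
out <- merge(grid[keep, ], visit_ts, by = c("Year", "Month"), all.x = TRUE)
out$Number_of_visits[is.na(out$Number_of_visits)] <- 0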
We can make it more dynamic by grouping by 'Year', getting the sequence of 'Month' from minimum to maximum in a list, unnesting the column, joining with the original dataset (left_join), and replacing the NA values with 0.
library(tidyr)
df1 %>%
  group_by(Year) %>%
  summarise(Month = list(min(Month):max(Month))) %>%
  unnest(Month) %>%
  left_join(., df1) %>%
  mutate(Number_of_visits = replace(Number_of_visits, is.na(Number_of_visits), 0))
# Year Month Number_of_visits
# <int> <int> <dbl>
#1 2011 4 1
#2 2011 5 0
#3 2011 6 3
#4 2011 7 23
#5 2011 8 0
#6 2011 9 0
#7 2011 10 0
#8 2011 11 0
#9 2011 12 32
#10 2012 1 123
#11 2012 2 0
#12 2012 3 0
#13 2012 4 0
#14 2012 5 0
#15 2012 6 0
#16 2012 7 0
#17 2012 8 0
#18 2012 9 0
#19 2012 10 0
#20 2012 11 3200
Or another option is data.table. Convert the 'data.frame' to a 'data.table' (setDT(df1)); grouped by 'Year', get the sequence of min to max 'Month', join with the original dataset on 'Year' and 'Month', and replace the NA values with 0.
library(data.table)
setDT(df1)
df1[df1[, .(Month = min(Month):max(Month)), Year],
    on = c("Year", "Month")][is.na(Number_of_visits), Number_of_visits := 0][]
# Year Month Number_of_visits
# 1: 2011 4 1
# 2: 2011 5 0
# 3: 2011 6 3
# 4: 2011 7 23
# 5: 2011 8 0
# 6: 2011 9 0
# 7: 2011 10 0
# 8: 2011 11 0
# 9: 2011 12 32
#10: 2012 1 123
#11: 2012 2 0
#12: 2012 3 0
#13: 2012 4 0
#14: 2012 5 0
#15: 2012 6 0
#16: 2012 7 0
#17: 2012 8 0
#18: 2012 9 0
#19: 2012 10 0
#20: 2012 11 3200
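tidyr::complete() condenses the dynamic approach into one step; a hedged sketch that, like the unnest version above, fills the min-to-max month range within each year:
library(dplyr)
library(tidyr)
df1 %>%
  group_by(Year) %>%
  complete(Month = min(Month):max(Month), fill = list(Number_of_visits = 0)) %>%
  ungroup()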

Counting the distinct values for each day and group and inserting the value in an array in R

I want to transform the data below into an association array holding, for each day, the count of unique IDs shared between each pair of groups. So, for example, from the data below
Year Month Day Group ID
2014 04 26 1 A
2014 04 26 1 B
2014 04 26 2 B
2014 04 26 2 C
2014 05 12 1 B
2014 05 12 2 E
2014 05 12 2 F
2014 05 12 2 G
2014 05 12 3 G
2014 05 12 3 F
2015 05 19 1 F
2015 05 19 1 D
2015 05 19 2 E
2015 05 19 2 G
2015 05 19 2 D
2015 05 19 3 A
2015 05 19 3 E
2015 05 19 3 B
I want to make an array that gives:
[1] (04/26/2014)
Grp 1 2 3
1 0 1 0
2 1 0 0
3 0 0 0
[2] (05/12/2014)
Grp 1 2 3
1 0 0 1
2 0 0 2
3 1 2 0
[3] (05/19/2015)
Grp 1 2 3
1 0 1 0
2 1 0 1
3 0 1 0
The 'Grp' is just to indicate the group number. I know how to count the distinct values within the table overall, but I'm trying to use for loops to insert the appropriate value for each day, e.g., counting the unique IDs that are present in both group 1 and group 2 on 04/26/2014 and inserting that number into the group 1 / group 2 cell of that day's association matrix. Any help would be appreciated.
I don't quite understand how you get the second one, but you can try this:
dd <- read.table(header = TRUE, text = "Year Month Day Group ID
2014 04 26 1 A
2014 04 26 1 B
2014 04 26 2 B
2014 04 26 2 C
2014 05 12 1 B
2014 05 12 2 E
2014 05 12 2 F
2014 05 12 2 G
2014 05 12 3 G
2014 05 12 3 F
2015 05 19 1 F
2015 05 19 1 D
2015 05 19 2 E
2015 05 19 2 G
2015 05 19 2 D
2015 05 19 3 A
2015 05 19 3 E
2015 05 19 3 B")
dd <- within(dd, {
  date <- as.Date(apply(dd[, 1:3], 1, paste0, collapse = '-'))
  Group <- factor(Group)
  Year <- Month <- Day <- NULL
})
E.g., for the first one:
sp <- split(dd, dd$date)[[1]]
tbl <- table(sp$ID, sp$Group)
`diag<-`(crossprod(tbl), 0)
# 1 2 3
# 1 0 1 0
# 2 1 0 0
# 3 0 0 0
And to do them all at once:
lapply(split(dd, dd$date), function(x) {
  cp <- crossprod(table(x$ID, x$Group))
  diag(cp) <- 0
  cp
})
# $`2014-04-26`
#
# 1 2 3
# 1 0 1 0
# 2 1 0 0
# 3 0 0 0
#
# $`2014-05-12`
#
# 1 2 3
# 1 0 0 0
# 2 0 0 2
# 3 0 2 0
#
# $`2015-05-19`
#
# 1 2 3
# 1 0 1 0
# 2 1 0 1
# 3 0 1 0
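Why crossprod() does the counting: for the ID-by-Group incidence table T, crossprod(T) is t(T) %*% T, so cell (i, j) is the number of IDs present in both group i and group j, and zeroing the diagonal drops each group's overlap with itself. A quick check of cell (1, 2) for the first date, reusing dd from above:
sp <- split(dd, dd$date)[[1]]
# IDs shared by groups 1 and 2 on 2014-04-26: just "B", so the count is 1
length(intersect(sp$ID[sp$Group == "1"], sp$ID[sp$Group == "2"]))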
A possible solution with dplyr and tidyr is as follows:
library(dplyr)
library(tidyr)
df$date <- as.Date(paste(df$Year, df$Month, df$Day, sep = '-'))
df %>%
  expand(date, Group) %>%
  left_join(., df) %>%
  group_by(date, Group) %>%
  summarise(nID = n_distinct(ID)) %>%
  split(., .$date)
Resulting output:
$`2014-04-26`
Source: local data frame [3 x 3]
Groups: date [1]
date Group nID
(date) (int) (int)
1 2014-04-26 1 2
2 2014-04-26 2 2
3 2014-04-26 3 1
$`2014-05-12`
Source: local data frame [3 x 3]
Groups: date [1]
date Group nID
(date) (int) (int)
1 2014-05-12 1 1
2 2014-05-12 2 3
3 2014-05-12 3 2
$`2015-05-19`
Source: local data frame [3 x 3]
Groups: date [1]
date Group nID
(date) (int) (int)
1 2015-05-19 1 2
2 2015-05-19 2 3
3 2015-05-19 3 3
