I have a column with an ID and, for each ID, several event dates. I want to create two columns: for each row of an ID, one column with the first date and the other with the next consecutive date. The next row for that ID should then have the previous row's second date in its first column, paired with the next consecutive date for that ID. An example:
This is the data I have
id date
1 1 2015-01-01
2 1 2015-01-18
3 1 2015-08-02
4 2 2015-01-01
5 2 2015-01-13
6 3 2015-01-01
This is the data I want
id date1 date2
1 1 2015-01-01 2015-01-18
2 1 2015-01-18 2015-08-02
3 1 2015-08-02 NA
4 2 2015-01-01 2015-01-13
5 2 2015-01-13 NA
6 3 2015-01-01 NA
Using dplyr:
library(dplyr)
df %>% group_by(id) %>%
mutate(date2 = lead(date))
id date date2
(int) (fctr) (fctr)
1 1 2015-01-01 2015-01-18
2 1 2015-01-18 2015-08-02
3 1 2015-08-02 NA
4 2 2015-01-01 2015-01-13
5 2 2015-01-13 NA
6 3 2015-01-01 NA
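The (fctr) columns in that output show that date was read in as a factor. If you want date2 to be an actual Date for later arithmetic, a minimal sketch (assuming df is the example data above) is to convert before taking the lead:
df %>%
  mutate(date = as.Date(date)) %>%
  group_by(id) %>%
  mutate(date2 = lead(date))   # date and date2 are now Date-class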
Using data.table, you can do as follows:
require(data.table)
DT[, .(date1 = date, date2 = shift(date, type = "lead")), by = id]
Or simply (as also mentioned by @docendodiscimus):
DT[, date2 := shift(date, type = "lead"), by = id]
Also, if you are interested in creating n lead columns at once (edited, taking advantage of @docendodiscimus's comment to simplify the code); with i = 1:5 below this creates date2 through date6, each shifted one step further:
i = 1:5
DT[, paste0("date", i+1) := shift(date, i, type = "lead"), by = id]
Base R solution using transform() and ave():
transform(df,date1=date,date2=ave(date,id,FUN=function(x) c(x[-1L],NA)),date=NULL);
## id date1 date2
## 1 1 2015-01-01 2015-01-18
## 2 1 2015-01-18 2015-08-02
## 3 1 2015-08-02 <NA>
## 4 2 2015-01-01 2015-01-13
## 5 2 2015-01-13 <NA>
## 6 3 2015-01-01 <NA>
The above line of code produces a copy of the data.frame. The return value can be assigned over the original df, assigned to a new variable, or passed as an argument/operand to a function/operator. If you want to modify df in place, which is a more efficient way to overwrite it, you can do this:
df$date2 <- ave(df$date,df$id,FUN=function(x) c(x[-1L],NA));
colnames(df)[colnames(df)=='date'] <- 'date1';
df;
## id date1 date2
## 1 1 2015-01-01 2015-01-18
## 2 1 2015-01-18 2015-08-02
## 3 1 2015-08-02 <NA>
## 4 2 2015-01-01 2015-01-13
## 5 2 2015-01-13 <NA>
## 6 3 2015-01-01 <NA>
df$date2 = ifelse(df$id==c(df$id[-1],-1), c(df$date[-1],NA), NA)
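One caveat with the ifelse() approach: ifelse() drops attributes, so if date is Date-class, date2 comes back as the underlying day numbers. A minimal follow-up sketch (assuming date is Date-class to begin with):
# ifelse() strips the Date class, so restore it from the numeric days-since-epoch values
df$date2 <- as.Date(df$date2, origin = "1970-01-01")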
Related
In R, how can you count the number of observations fulfilling a condition over a time range?
Specifically, I want to count the number of different id values by country over the last 8 months, but only if an id occurs at least twice during those 8 months. Hence, for the count it does not matter whether an id occurs 2x or 100x (doing this in 2 steps is maybe easier). NA values exist both in id and in country; accounting for them is not strictly necessary, since I could take care of that separately, but it would still be helpful.
My current best try is below, but it does not account for the restriction (an id must appear at least twice in the previous 8 months). I also find its counting odd for date = "2017-12-12", where desired_unrestricted should equal 4 by my counting but the code gives 2.
dt[, date := as.Date(date)][
, totalids := sapply(date,
function(x) length(unique(id[between(date, x - lubridate::month(8), x)]))),
by = country]
Data
library(data.table)
library(lubridate)
ID <- c("1","1","1","1","1","1","2","2","2","3","3",NA,"4")
Date <- c("2017-01-01","2017-01-01", "2017-01-05", "2017-05-01", "2017-05-01","2018-05-02","2017-01-01", "2017-01-05", "2017-05-01", "2017-05-01","2017-05-01","2017-12-12","2017-12-12" )
Value <- c(2,4,3,5,2,5,8,17,17,3,7,5,3)
Country <- c("UK","UK","US","US",NA,"US","UK","UK","US","US","US","US","US")
Desired <- c(1,1,0,2,NA,0,1,2,2,2,2,1,1)
Desired_unrestricted <- c(2,2,1,3,NA,1,2,2,3,3,3,4,4)
dt <- data.frame(id=ID, date=Date, value=Value, country=Country, desired_output=Desired, desired_unrestricted=Desired_unrestricted)
setDT(dt)
Thanks in advance.
This data.table-only answer was motivated by a comment.
dt[, date := as.Date(date)] # if not already `Date`-class
dt[, date8 := do.call(c, lapply(dt$date, function(z) seq(z, length=2, by="-8 months")[2]))
][, results := dt[dt, on = .(country, date > date8, date <= date),
length(Filter(function(z) z > 1, table(id))), by = .EACHI]$V1
][, date8 := NULL ]
# id date value country desired_output desired_unrestricted results
# <char> <Date> <num> <char> <num> <num> <int>
# 1: 1 2017-01-01 2 UK 1 2 1
# 2: 1 2017-01-01 4 UK 1 2 1
# 3: 1 2017-01-05 3 US 0 1 0
# 4: 1 2017-05-01 5 US 1 3 2
# 5: 1 2017-05-01 2 <NA> NA NA 0
# 6: 1 2018-05-02 5 US 0 1 0
# 7: 2 2017-01-01 8 UK 1 2 1
# 8: 2 2017-01-05 17 UK 2 2 2
# 9: 2 2017-05-01 17 US 1 3 2
# 10: 3 2017-05-01 3 US 2 3 2
# 11: 3 2017-05-01 7 US 2 3 2
# 12: <NA> 2017-12-12 5 US 2 4 1
# 13: 4 2017-12-12 3 US 2 4 1
That's a lot to absorb.
Quick walk-through:
"8 months ago":
seq(z, length=2, by="-8 months")[2]
seq.Date (inferred by calling seq with a Date-class first argument) starts at z (current date for each row) and produces a sequence of length 2 with 8 months between them. seq always starts at the first argument, so length=1 won't work (it'll only return z); length=2 guarantees that the second value in the returned vector will be the "8 months before date" that we need.
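A quick sanity check at the console (the date here is just illustrative):
seq(as.Date("2017-12-12"), length = 2, by = "-8 months")
# [1] "2017-12-12" "2017-04-12"
seq(as.Date("2017-12-12"), length = 2, by = "-8 months")[2]
# [1] "2017-04-12"   <- the "8 months before" value we keep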
Date subtraction:
[, date8 := do.call(c, lapply(dt$date, function(z) seq(...)[2])) ]
A simple base-R method for subtracting 8 months is seq(date, length=2, by="-8 months")[2]. seq.Date requires its first argument to be length-1, so we need to sapply or lapply over the dates; unfortunately, sapply drops the Date class, so we lapply and then combine the results with do.call(c, ...) (assigning the list directly would create a list-column, and unlist would strip the class). (Perhaps this part can be improved.)
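To see why the lapply/do.call(c, ...) dance is needed, a tiny illustration (toy dates, not the real data):
d <- as.Date(c("2017-01-01", "2017-12-12"))
sapply(d, function(z) seq(z, length = 2, by = "-8 months")[2])
# plain numbers (days since 1970-01-01): the Date class is gone
do.call(c, lapply(d, function(z) seq(z, length = 2, by = "-8 months")[2]))
# [1] "2016-05-01" "2017-04-12"   <- Date class preserved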
We need that in dt first since we do a non-equi (range-based) join based on this value.
Counting id with 2 or more visits:
length(Filter(function(z) z > 1, table(id)))
We produce a table(id), which gives us the count of each id within the join-period. Filter(fun, ...) allows us to reduce those that have a count below 2, and we're left with a named-vector of ids that had 2 or more visits. Retrieving the length is what we need.
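A toy illustration of that counting step (made-up ids, not the real data):
ids <- c("1", "1", "2", "3", "3", "3")
table(ids)                               # counts per id: "1" -> 2, "2" -> 1, "3" -> 3
Filter(function(z) z > 1, table(ids))    # keeps only ids seen at least twice: "1" and "3"
length(Filter(function(z) z > 1, table(ids)))
# [1] 2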
Self non-equi join:
dt[dt, on = .(country, date > date8, date <= date), ... ]
Relatively straightforward. This is an open/closed range (date8 excluded, date included); it can be changed to closed on both ends if you prefer.
Self non-equi join but count ids by-row: by=.EACHI.
Retrieve the results of that and assign into the original dt:
[, results := dt[...]$V1 ]
Since the non-equi join included a value (length(Filter(...))) without a name, it's named V1, and all we want is that. (To be honest, I don't know exactly why assigning it more directly doesn't work ... but the counts are all wrong. Perhaps it's backwards by-row tallying.)
Cleanup:
[, date8 := NULL ]
(Nothing fancy here, just proper data-stewardship :-)
There are some discrepancies between my counts and your desired_output; I wonder if those are just typos in the OP. I think the math is right ...
Here is another option:
setkey(dt, country, date, id)
dt[, date := as.IDate(date)][,
eightmthsago := as.IDate(sapply(as.IDate(date), function(x) seq(x, by="-8 months", length.out=2L)[2L]))]
dt[, c("out", "out_unres") :=
dt[dt, on=.(country, date>=eightmthsago, date<=date),
by=.EACHI, {
v <- id[!is.na(id)]
.(uniqueN(v[duplicated(v)]), uniqueN(v))
}][,1L:3L := NULL]
]
dt
Output (like r2evans, I am also getting results that differ from desired_output, as there seems to be a miscount in the desired output):
id date value country desired_output desired_unrestricted eightmthsago out out_unres
1: 1 2017-05-01 2 <NA> NA NA 2016-09-01 0 1
2: 1 2017-01-01 2 UK 1 2 2016-05-01 1 2
3: 1 2017-01-01 4 UK 1 2 2016-05-01 1 2
4: 2 2017-01-01 8 UK 1 2 2016-05-01 1 2
5: 2 2017-01-05 17 UK 2 2 2016-05-05 2 2
6: 1 2017-01-05 3 US 0 1 2016-05-05 0 1
7: 1 2017-05-01 5 US 1 3 2016-09-01 2 3
8: 2 2017-05-01 17 US 1 3 2016-09-01 2 3
9: 3 2017-05-01 3 US 2 3 2016-09-01 2 3
10: 3 2017-05-01 7 US 2 3 2016-09-01 2 3
11: <NA> 2017-12-12 5 US 2 4 2017-04-12 1 4
12: 4 2017-12-12 3 US 2 4 2017-04-12 1 4
13: 1 2018-05-02 5 US 0 1 2017-09-02 0 2
Although this question is tagged with data.table, here is a dplyr::rowwise solution to the problem. Is this what you had in mind? The output looks valid to me: the number of ids in the last 8 months that occur at least twice.
library(dplyr)
library(lubridate)
dt <- dt %>% mutate(date = as.Date(date))
dt %>%
group_by(country) %>%
group_modify(~ .x %>%
rowwise() %>%
mutate(totalids = .x %>%
filter(date <= .env$date, date >= .env$date %m-% months(8)) %>%
pull(id) %>%
table() %>%
`[`(. >1) %>%
length
))
#> # A tibble: 13 x 7
#> # Groups: country [3]
#> country id date value desired_output desired_unrestricted totalids
#> <chr> <chr> <date> <dbl> <dbl> <dbl> <int>
#> 1 UK 1 2017-01-01 2 1 2 1
#> 2 UK 1 2017-01-01 4 1 2 1
#> 3 UK 2 2017-01-01 8 1 2 1
#> 4 UK 2 2017-01-05 17 2 2 2
#> 5 US 1 2017-01-05 3 0 1 0
#> 6 US 1 2017-05-01 5 1 3 2
#> 7 US 1 2018-05-02 5 0 1 0
#> 8 US 2 2017-05-01 17 1 3 2
#> 9 US 3 2017-05-01 3 2 3 2
#> 10 US 3 2017-05-01 7 2 3 2
#> 11 US <NA> 2017-12-12 5 2 4 1
#> 12 US 4 2017-12-12 3 2 4 1
#> 13 <NA> 1 2017-05-01 2 NA NA 0
Created on 2021-09-02 by the reprex package (v2.0.1)
I have two data frames (DF1 and DF2):
(1) DF1 contains information at the individual level, i.e. on 10,000 individuals nested in 30 units across 11 years (2000-2011). It contains four variables:
"individual" (numeric id for each individual; ranging from 1-10.000)
"unit" (numeric id for each unit; ranging from 1-30)
"date1" (a date in date format, i.e. 2000-01-01, etc; ranging from 2000-01-01 to 2010-12-31)
"date2" ("Date1" + 1 year)
(2) DF2 contains information at the unit level, i.e. on the same 30 units as in DF1 across the same time period (2000-2011), and further contains a numeric variable ("x"):
"unit" (numeric id for each unit; ranging from 1-30)
"date" (a date in date format, i.e. 2000-01-01, etc; ranging from 2000-01-01 to 2011-12-31)
"x" (a numeric variable, ranging from 0 to 200)
I would like to create a new variable ("newvar") that gives me, for each "individual" per "unit", the sum of "x" (DF2) counting from "date1" (DF1) to "date2" (DF1). This means that I would like to add this new variable to DF1.
For instance, if "individual"=1 in "unit"=1 has "date1"=2000-01-01 and "date2"=2001-01-01, and in DF2 "unit"=1 has three observations in the time period "date1" to "date2" (i.e. 2000-01-01 to 2001-01-01) with "x"=1, "x"=2 and "x"=3, then I would like to add a new variable giving "newvar"=6 for "individual"=1 in "unit"=1.
I assume that I need to use a for loop in R and have been using the following code:
for(i in length(DF1)){
DF1$newvar[i] <-sum(DF2$x[which(DF1$date == DF1$date1[i] &
DF1$date == DF1P$date1[i] &
DF2$unit == DF1P$unit[i]),])
}
but get the error message:
Error in DF2$x[which(DF2$date == : incorrect number of dimensions
Any ideas of how to create this variable would be tremendously appreciated!
Here is a small example as well as the expected output, using one unit for the sake of simplicity:
Assume DF1 looks as follows:
individual unit date1 date2
1 1 2000-01-01 2001-01-01
2 1 2000-02-02 2001-02-02
3 1 2000-03-03 2001-03-03
4 1 2000-04-04 2001-04-04
5 1 2000-12-31 2001-12-31
(...)
996 1 2010-01-01 2011-01-01
997 1 2010-02-15 2011-02-15
998 1 2010-03-05 2011-03-05
999 1 2010-04-10 2011-04-10
1000 1 2010-12-27 2011-12-27
1001 2 2000-01-01 2001-01-01
1002 2 2000-02-02 2001-02-02
1003 2 2000-03-03 2001-03-03
1004 2 2000-04-04 2001-04-04
1005 2 2000-12-31 2001-12-31
(...)
1996 2 2010-01-01 2011-01-01
1997 2 2010-02-15 2011-02-15
1998 2 2010-03-05 2011-03-05
1999 2 2010-04-10 2011-04-10
2000 2 2010-12-27 2011-12-27
(...)
3000 34 2000-02-02 2002-02-02
3001 34 2000-05-05 2001-05-05
3002 34 2000-06-06 2001-06-06
3003 34 2000-07-07 2001-07-07
3004 34 2000-11-11 2001-11-11
(...)
9996 34 2010-02-06 2011-02-06
9997 34 2010-05-05 2011-05-05
9998 34 2010-09-09 2011-09-09
9999 34 2010-09-25 2011-09-25
10000 34 2010-10-15 2011-10-15
Assume DF2 looks as follows:
unit date x
1 2000-01-01 1
1 2000-05-01 2
1 2000-12-01 3
1 2001-01-02 10
1 2001-07-05 20
1 2001-12-31 30
(...)
2 2010-05-05 1
2 2010-07-01 1
2 2010-08-09 1
3 (...)
This is what I would like DF1 to look like after running the code:
individual unit date1 date2 newvar
1 1 2000-01-01 2001-01-01 6
2 1 2000-02-02 2001-02-02 16
3 1 2000-03-03 2001-03-03 15
4 1 2000-04-04 2001-04-04 15
5 1 2000-12-31 2001-12-31 60
(...)
996 1 2010-01-01 2011-01-01 3
997 1 2010-02-15 2011-02-15 2
998 1 2010-03-05 2011-03-05 2
999 1 2010-04-10 2011-04-10 2
1000 1 2010-12-27 2011-12-27 0
(...)
However, I cannot simply aggregate: imagine that in DF1 each "unit" has several hundred individuals for each year between 2000 and 2011, and DF2 has many observations for each unit across the years 2000-2011.
We can use data.table
library(data.table)
setDT(DF1)
setDT(DF2)
DF1[DF2[, .(newvar = sum(x)), .(unit, individual = cumsum(date %in% DF1$date1))],
newvar := newvar, on = .(individual, unit)]
DF1
# individual unit date1 date2 newvar
#1: 1 1 2000-01-01 2001-01-01 6
#2: 2 1 2001-01-02 2002-01-02 60
Or we can use a non-equi join
DF1[DF2[DF1, sum(x), on = .(unit, date >= date1, date <= date2),
by = .EACHI], newvar := V1, on = .(unit, date1=date)]
DF1
# individual unit date1 date2 newvar
#1: 1 1 2000-01-01 2001-01-01 6
#2: 2 1 2001-01-02 2002-01-02 60
You were almost there. I just modified your for loop slightly and also made sure that the date variables are treated as dates:
DF1$date1 = as.Date(DF1$date1,"%Y-%m-%d")
DF1$date2 = as.Date(DF1$date2,"%Y-%m-%d")
DF2$date = as.Date(DF2$date,"%Y-%m-%d")
for(i in 1:nrow(DF1)){
DF1$newvar[i] <-sum(DF2$x[which(DF2$unit == DF1$unit[i] &
DF2$date>= DF1$date1[i] &
DF2$date<= DF1$date2[i])])
}
The problem was that you were requiring DF2$date to be simultaneously equal to DF1$date1 and DF1$date2.
Also, length(DF1) gives you the number of columns. To get the number of rows you can use either nrow(DF1) or dim(DF1)[1].
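A quick illustration of the difference on a hypothetical 3-row, 2-column data.frame:
d <- data.frame(a = 1:3, b = c("x", "y", "z"))
length(d)    # 2 -- the number of columns
nrow(d)      # 3 -- the number of rows
dim(d)[1]    # 3 -- equivalent to nrow(d)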
I have a dataset in long form with start and end dates. For each id you will see multiple start and end dates.
I need to find the difference between the first end date and second start date. I am not sure how to use two rows to calculate the difference. Any help is appreciated.
df=data.frame(c(1,2,2,2,3,4,4),
as.Date(c( "2010-10-01","2009-09-01","2014-01-01","2014-02-01","2009-01-01","2013-03-01","2014-03-01")),
as.Date(c("2016-04-30","2013-12-31","2014-01-31","2016-04-30","2014-02-28","2013-05-01","2014-08-31")));
names(df)=c('id','start','end')
My output would look like this:
df$diff=c(NA,1,1,NA,NA,304, NA)
Here's an attempt in base R that I think does what you want:
df$diff <- NA
split(df$diff, df$id) <- by(df, df$id, FUN=function(SD) c(SD$start[-1], NA) - SD$end)
df
# id start end diff
#1 1 2010-10-01 2016-04-30 NA
#2 2 2009-09-01 2013-12-31 1
#3 2 2014-01-01 2014-01-31 1
#4 2 2014-02-01 2016-04-30 NA
#5 3 2009-01-01 2014-02-28 NA
#6 4 2013-03-01 2013-05-01 304
#7 4 2014-03-01 2014-08-31 NA
Alternatively, in data.table it would be:
setDT(df)[, diff := shift(start,n=1,type="lead") - end, by=id]
Here's an alternative using the popular dplyr package:
library(dplyr)
df %>%
group_by(id) %>%
mutate(diff = difftime(lead(start), end, units = "days"))
# id start end diff
# (dbl) (date) (date) (dfft)
# 1 1 2010-10-01 2016-04-30 NA days
# 2 2 2009-09-01 2013-12-31 1 days
# 3 2 2014-01-01 2014-01-31 1 days
# 4 2 2014-02-01 2016-04-30 NA days
# 5 3 2009-01-01 2014-02-28 NA days
# 6 4 2013-03-01 2013-05-01 304 days
# 7 4 2014-03-01 2014-08-31 NA days
You can wrap diff in as.numeric if you want.
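For instance, a sketch of that wrapping (same pipeline, just coercing the difftime to a plain number of days):
df %>%
  group_by(id) %>%
  mutate(diff = as.numeric(difftime(lead(start), end, units = "days")))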
Again with base R, you can do the following:
df$noofdays <- as.numeric(as.difftime(df$end-df$start, units=c("days"), format="%Y-%m-%d"))
I'm trying to aggregate two data frames (df1 and df2).
The first contains 3 variables: ID, Date1 and Date2.
df1
ID Date1 Date2
1 2016-03-01 2016-04-01
1 2016-04-01 2016-05-01
2 2016-03-14 2016-04-15
2 2016-04-15 2016-05-17
3 2016-05-01 2016-06-10
3 2016-06-10 2016-07-15
The second also contains 3 variables: ID, Date3 and Value.
df2
ID Date3 Value
1 2016-03-15 5
1 2016-04-04 7
1 2016-04-28 7
2 2016-03-18 3
2 2016-03-27 5
2 2016-04-08 9
2 2016-04-20 2
3 2016-05-05 6
3 2016-05-25 8
3 2016-06-13 3
The idea is to get, for each df1 row, the sum of df2$Value that have the same ID and for which Date3 is between Date1 and Date2:
ID Date1 Date2 SumValue
1 2016-03-01 2016-04-01 5
1 2016-04-01 2016-05-01 14
2 2016-03-14 2016-04-15 17
2 2016-04-15 2016-05-17 2
3 2016-05-01 2016-06-10 14
3 2016-06-10 2016-07-15 3
I know how to do this with a loop, but the data frames are huge! Does someone have an efficient solution? I have explored data.table, plyr and dplyr but could not find one.
A couple of data.table solutions that should scale well (and a good stop-gap until non-equi joins are implemented):
Do the comparison in j using by=.EACHI.
library(data.table)
setDT(df1)
setDT(df2)
df1[, `:=`(Date1 = as.Date(Date1), Date2 = as.Date(Date2))]
df2[, Date3 := as.Date(Date3)]
df1[ df2,
{
idx = Date1 <= i.Date3 & i.Date3 <= Date2
.(Date1 = Date1[idx],
Date2 = Date2[idx],
Date3 = i.Date3,
Value = i.Value)
},
on=c("ID"),
by=.EACHI][, .(sumValue = sum(Value)), by=.(ID, Date1, Date2)]
# ID Date1 Date2 sumValue
# 1: 1 2016-03-01 2016-04-01 5
# 2: 1 2016-04-01 2016-05-01 14
# 3: 2 2016-03-14 2016-04-15 17
# 4: 2 2016-04-15 2016-05-17 2
# 5: 3 2016-05-01 2016-06-10 14
# 6: 3 2016-06-10 2016-07-15 3
foverlaps join (as suggested in the comments)
library(data.table)
setDT(df1)
setDT(df2)
df1[, `:=`(Date1 = as.Date(Date1), Date2 = as.Date(Date2))]
df2[, Date3 := as.Date(Date3)]
df2[, Date4 := Date3]
setkey(df1, ID, Date1, Date2)
foverlaps(df2,
df1,
by.x=c("ID", "Date3", "Date4"),
type="within")[, .(sumValue = sum(Value)), by=.(ID, Date1, Date2)]
# ID Date1 Date2 sumValue
# 1: 1 2016-03-01 2016-04-01 5
# 2: 1 2016-04-01 2016-05-01 14
# 3: 2 2016-03-14 2016-04-15 17
# 4: 2 2016-04-15 2016-05-17 2
# 5: 3 2016-05-01 2016-06-10 14
# 6: 3 2016-06-10 2016-07-15 3
Further reading
Rolling join on data.table with duplicate keys
foverlap joins in data.table
With the recently implemented non-equi joins feature in the current development version of data.table, v1.9.7, this can be done as follows (dt1 and dt2 are the data.table versions of df1 and df2):
dt2[dt1, .(sum = sum(Value)), on=.(ID, Date3>=Date1, Date3<=Date2), by=.EACHI]
# ID Date3 Date3 sum
# 1: 1 2016-03-01 2016-04-01 5
# 2: 1 2016-04-01 2016-05-01 14
# 3: 2 2016-03-14 2016-04-15 17
# 4: 2 2016-04-15 2016-05-17 2
# 5: 3 2016-05-01 2016-06-10 14
# 6: 3 2016-06-10 2016-07-15 3
The column names need some fixing; will work on it later.
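One way to tidy them afterwards, as a sketch (assuming dt1 and dt2 as above): the two boundary columns both come back named after dt2's Date3, as shown in the output, so rename them by position with setnames():
res <- dt2[dt1, .(sum = sum(Value)), on = .(ID, Date3 >= Date1, Date3 <= Date2), by = .EACHI]
setnames(res, 2:3, c("Date1", "Date2"))   # rename the duplicated join columns by position
res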
Here's a base R solution using sapply():
df1 <- data.frame(ID=c(1L,1L,2L,2L,3L,3L),Date1=as.Date(c('2016-03-01','2016-04-01','2016-03-14','2016-04-15','2016-05-01','2016-06-01')),Date2=as.Date(c('2016-04-01','2016-05-01','2016-04-15','2016-05-17','2016-06-15','2016-07-15')));
df2 <- data.frame(ID=c(1L,1L,1L,2L,2L,2L,2L,3L,3L,3L),Date3=as.Date(c('2016-03-15','2016-04-04','2016-04-28','2016-03-18','2016-03-27','2016-04-08','2016-04-20','2016-05-05','2016-05-25','2016-06-13')),Value=c(5L,7L,7L,3L,5L,9L,2L,6L,8L,3L));
cbind(df1,SumValue=sapply(seq_len(nrow(df1)),function(ri) sum(df2$Value[df1$ID[ri]==df2$ID & df1$Date1[ri]<=df2$Date3 & df1$Date2[ri]>df2$Date3])));
## ID Date1 Date2 SumValue
## 1 1 2016-03-01 2016-04-01 5
## 2 1 2016-04-01 2016-05-01 14
## 3 2 2016-03-14 2016-04-15 17
## 4 2 2016-04-15 2016-05-17 2
## 5 3 2016-05-01 2016-06-15 17
## 6 3 2016-06-01 2016-07-15 3
Note that your df1 and expected output have slightly different dates in some cases; I used the df1 dates.
Here's another approach that attempts to be more vectorized: Precompute a cartesian product of indexes into the two frames, then perform a single vectorized conditional expression using the index vectors to get matching pairs of indexes, and finally use the matching indexes to aggregate the desired result:
cbind(df1,SumValue=with(expand.grid(i1=seq_len(nrow(df1)),i2=seq_len(nrow(df2))),{
x <- df1$ID[i1]==df2$ID[i2] & df1$Date1[i1]<=df2$Date3[i2] & df1$Date2[i1]>df2$Date3[i2];
tapply(df2$Value[i2[x]],i1[x],sum);
}));
## ID Date1 Date2 SumValue
## 1 1 2016-03-01 2016-04-01 5
## 2 1 2016-04-01 2016-05-01 14
## 3 2 2016-03-14 2016-04-15 17
## 4 2 2016-04-15 2016-05-17 2
## 5 3 2016-05-01 2016-06-15 17
## 6 3 2016-06-01 2016-07-15 3
I want to create a 4-hour interval using a reference column from a data frame. I have a data frame like this one:
species<-"ABC"
ind<-rep(1:4,each=24)
hour<-rep(seq(0,23,by=1),4)
depth<-runif(length(ind),1,50)
df<-data.frame(cbind(species,ind,hour,depth))
df$depth<-as.numeric(df$depth)
What I would like is to create a new column (without changing the information or dimensions of the original data frame) that looks at my hour column (the reference column) and, based on that value, gives me a 4-hour time interval. For example, if the value in the hour column is between 0 and 3, the value in the new column will be 0; if the value is between 4 and 7, the value in the new column will be 4, and so on. In Excel I used to use the floor/ceiling functions for this, but in R they are not exactly the same. Also, if someone has an easier suggestion using the original date/time data, that could work too. In my original script I used as.POSIXct to get the date/time data, and from there my hour column.
I appreciate your help,
What about taking the column of hours, converting it to integers, and using integer division to get the floor? Something like this:
# convert hour to integer (hour is currently a col of factors)
i <- as.numeric(levels(df$hour))[df$hour]
# make new column
df$interval <- (i %/% 4) * 4
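A quick check of the arithmetic on a few hand-picked hours:
(c(0, 3, 4, 7, 12, 23) %/% 4) * 4
# [1]  0  0  4  4 12 20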
Expanding on my comment, since I think you're ultimately looking for actual dates at some point...
Some sample hourly data:
set.seed(1)
mydata <- data.frame(species = "ABC",
ind = rep(1:4, each=24),
depth = runif(96, 1, 50),
datetime = seq(ISOdate(2000, 1, 1, 0, 0, 0),
by = "1 hour", length.out = 96))
list(head(mydata), tail(mydata))
# [[1]]
# species ind depth datetime
# 1 ABC 1 14.00992 2000-01-01 00:00:00
# 2 ABC 1 19.23407 2000-01-01 01:00:00
# 3 ABC 1 29.06981 2000-01-01 02:00:00
# 4 ABC 1 45.50218 2000-01-01 03:00:00
# 5 ABC 1 10.88241 2000-01-01 04:00:00
# 6 ABC 1 45.02109 2000-01-01 05:00:00
#
# [[2]]
# species ind depth datetime
# 91 ABC 4 12.741841 2000-01-04 18:00:00
# 92 ABC 4 3.887784 2000-01-04 19:00:00
# 93 ABC 4 32.472125 2000-01-04 20:00:00
# 94 ABC 4 43.937191 2000-01-04 21:00:00
# 95 ABC 4 39.166819 2000-01-04 22:00:00
# 96 ABC 4 40.068132 2000-01-04 23:00:00
Transforming that data using cut and format:
mydata <- within(mydata, {
hourclass <- cut(datetime, "4 hours") # Find the intervals
hourfloor <- format(as.POSIXlt(hourclass), "%H") # Display just the "hour"
})
list(head(mydata), tail(mydata))
# [[1]]
# species ind depth datetime hourclass hourfloor
# 1 ABC 1 14.00992 2000-01-01 00:00:00 2000-01-01 00:00:00 00
# 2 ABC 1 19.23407 2000-01-01 01:00:00 2000-01-01 00:00:00 00
# 3 ABC 1 29.06981 2000-01-01 02:00:00 2000-01-01 00:00:00 00
# 4 ABC 1 45.50218 2000-01-01 03:00:00 2000-01-01 00:00:00 00
# 5 ABC 1 10.88241 2000-01-01 04:00:00 2000-01-01 04:00:00 04
# 6 ABC 1 45.02109 2000-01-01 05:00:00 2000-01-01 04:00:00 04
#
# [[2]]
# species ind depth datetime hourclass hourfloor
# 91 ABC 4 12.741841 2000-01-04 18:00:00 2000-01-04 16:00:00 16
# 92 ABC 4 3.887784 2000-01-04 19:00:00 2000-01-04 16:00:00 16
# 93 ABC 4 32.472125 2000-01-04 20:00:00 2000-01-04 20:00:00 20
# 94 ABC 4 43.937191 2000-01-04 21:00:00 2000-01-04 20:00:00 20
# 95 ABC 4 39.166819 2000-01-04 22:00:00 2000-01-04 20:00:00 20
# 96 ABC 4 40.068132 2000-01-04 23:00:00 2000-01-04 20:00:00 20
Note that your new "hourclass" variable is a factor and the new "hourfloor" variable is character, but you can easily change those, even during the within stage.
str(mydata)
# 'data.frame': 96 obs. of 6 variables:
# $ species : Factor w/ 1 level "ABC": 1 1 1 1 1 1 1 1 1 1 ...
# $ ind : int 1 1 1 1 1 1 1 1 1 1 ...
# $ depth : num 14 19.2 29.1 45.5 10.9 ...
# $ datetime : POSIXct, format: "2000-01-01 00:00:00" "2000-01-01 01:00:00" ...
# $ hourclass: Factor w/ 24 levels "2000-01-01 00:00:00",..: 1 1 1 1 2 2 2 2 3 3 ...
# $ hourfloor: chr "00" "00" "00" "00" ...
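For example, a sketch of the same within() call with hourfloor converted to an integer on the spot (hourclass stays a factor):
mydata <- within(mydata, {
  hourclass <- cut(datetime, "4 hours")                        # find the 4-hour intervals
  hourfloor <- as.integer(format(as.POSIXlt(hourclass), "%H")) # starting hour as an integer
})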
Tip number 1: don't use cbind to create a data.frame with columns of differing types; everything gets coerced to the same type (in this case factor).
findInterval or cut would seem appropriate here.
df <- data.frame(species,ind,hour,depth)
# copy
df2 <- df
df2$fourhour <- c(0,4,8,12,16,20)[findInterval(df$hour, c(0,4,8,12,16,20))]
Though there is probably a simpler way, here is one attempt.
First, make your data.frame without cbind, so hour is numeric rather than a factor:
df <- data.frame(species,ind,hour,depth)
Then:
df$interval <- factor(findInterval(df$hour,seq(0,23,4)),labels=seq(0,23,4))
Result:
> head(df)
species ind hour depth interval
1 ABC 1 0 23.11215 0
2 ABC 1 1 10.63896 0
3 ABC 1 2 18.67615 0
4 ABC 1 3 28.01860 0
5 ABC 1 4 38.25594 4
6 ABC 1 5 30.51363 4
You could also make the labels a bit nicer like:
cutseq <- seq(0,23,4)
df$interval <- factor(
findInterval(df$hour,cutseq),
labels=paste(cutseq,cutseq+3,sep="-")
)
Result:
> head(df)
species ind hour depth interval
1 ABC 1 0 23.11215 0-3
2 ABC 1 1 10.63896 0-3
3 ABC 1 2 18.67615 0-3
4 ABC 1 3 28.01860 0-3
5 ABC 1 4 38.25594 4-7
6 ABC 1 5 30.51363 4-7