I have a dataframe (tibble) with multiple rows; each row contains an IDNR, a start date, an end date, and an exposure status. IDNR is a character variable, start and end are Date variables, and exposure is a numeric variable. This is what the top 3 rows look like:
# A tibble: 48,266 x 4
IDNR start end exposure
<chr> <date> <date> <dbl>
1 1 2018-02-15 2018-07-01 0
2 2 2017-10-30 2018-07-01 0
3 3 2016-02-11 2016-12-03 1
# ... with 48,256 more rows
In order to do a time-varying Cox regression, I want to split the rows into 90-day parts while maintaining the start and end dates. Here is an example of what I would like to achieve. The new end date becomes start + 90 days, and a new row is created whose start date equals the previous row's end date. If the remaining time between start and end is then less than 90 days, that is fine (as for IDNR 1 and 3); however, for IDNR 2 the remaining time still exceeds 90 days, so a third row needs to be added.
# A tibble: 48,266 x 4
# Groups: IDNR [33,240]
IDNR start end exposure
<chr> <date> <date> <dbl>
1 1 2018-02-15 2018-05-16 0
2 1 2018-05-16 2018-07-01 0
3 2 2017-10-30 2018-01-28 0
4 2 2018-01-28 2018-04-28 0
5 2 2018-04-28 2018-07-01 0
6 3 2016-02-11 2016-08-09 1
7 3 2016-08-09 2016-12-03 1
I'm relatively new to coding in R, but I've found dplyr to be very useful so far. So, if someone knows a solution using dplyr I would really appreciate that.
Thanks in advance!
Here you go:
Using df as your data frame:
df = data.frame(IDNR = 1:3,
                start = c("2018-02-15", "2017-10-30", "2016-02-11"),
                end = c("2018-07-01", "2018-07-01", "2016-12-03"),
                exposure = c(0, 0, 1))
Do:
library(lubridate)

newDF = apply(df, 1, function(x){
  # Cut points from start to end in 90-day steps
  newStart = seq(from = ymd(x["start"]), to = ymd(x["end"]), by = 90)
  # Each interval ends where the next begins; the last ends at the original end
  newEnd = c(newStart[-1], ymd(x["end"]))
  data.frame(IDNR = rep(x["IDNR"], length(newStart)),
             start = newStart,
             end = newEnd,
             exposure = rep(x["exposure"], length(newStart)))
})
newDF = do.call(rbind, newDF)
# Drop zero-length intervals created when end falls exactly on a 90-day boundary
newDF = newDF[newDF$start != newDF$end, ]
Result:
> newDF
IDNR start end exposure
1 1 2018-02-15 2018-05-16 0
2 1 2018-05-16 2018-07-01 0
3 2 2017-10-30 2018-01-28 0
4 2 2018-01-28 2018-04-28 0
5 2 2018-04-28 2018-07-01 0
6 3 2016-02-11 2016-05-11 1
7 3 2016-05-11 2016-08-09 1
8 3 2016-08-09 2016-11-07 1
9 3 2016-11-07 2016-12-03 1
What this does is create a sequence of dates from start to end in 90-day steps and build a smaller data frame from them, along with the IDNR and exposure. apply returns a list of data frames that you can bind together using do.call(rbind, ...). The last line removes rows that have the same start and end date.
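Since you asked for dplyr: here is a minimal tidyverse sketch of the same idea. It assumes dplyr >= 1.1.0 for reframe(), which can return several rows per input row; treat it as a starting point rather than a tested drop-in replacement:

library(dplyr)
library(lubridate)

df %>%
  mutate(start = ymd(start), end = ymd(end)) %>%
  rowwise() %>%
  reframe(IDNR, exposure,
          new_start = seq(start, end, by = 90),
          new_end   = c(seq(start, end, by = 90)[-1], end)) %>%
  filter(new_start != new_end)  # drop zero-length intervals, as above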
In my longitudinal data set, each row represents a time period of observation for each person, and each row is bounded by a start and end date. The rows are numbered ('episode'), and contain many row-specific variables (eg, 'edu_level') that I need to retain throughout the following steps.
I created a new date variable, hx_start, which can relate to the start and end date of each row of data in 1 of 3 ways (below). For each scenario, I need to edit (splice) the existing row of data accordingly, using dplyr:
1. Between a given row's start and end date (as for persons 2 and 4). In this case, I want to splice the existing row into two new ones, so that the date of hx_start is the start date of one of the rows. The other row would retain the original row's start date, and its end date would be one day before the date of hx_start.
2. On the same date as someone's row start date (as for person 1). In this case, no change is needed.
3. On the same date as someone's row end date (as for person 3). Same as #1: I need to splice the existing row into two new ones, so that the date of hx_start is the start date of one of the rows. The other row would retain the original row's start date, and its end date would be one day before the date of hx_start.
So far, I have created a new data set that has 2 duplicates of each row, assuming that I will need to edit up to 2 rows per existing row, and then drop the originals (or retain only the original, in the case of person 1). Importantly, I need a way to carry forward all of the other variables from the original row to all new rows without naming them all, if possible (there are many in my real data set).
#Load packages
library(lubridate)
#>
#> Attaching package: 'lubridate'
#> The following objects are masked from 'package:base':
#>
#> date, intersect, setdiff, union
#Create data set
person <- c(1, 2, 3, 4)
episode <- c(33, 50, 65, 70)
start <- c('2013-01-01', '2010-01-21', '2009-09-18', '2010-05-26')
end <- c('2013-06-04', '2010-06-19', '2009-12-31', '2010-12-24')
hx_start <- c('2013-01-01', '2010-03-09', '2009-12-31', '2010-07-04')
edu_level <- c(2, 3, 2, 1)
#Populate data frame
d <- cbind(person, episode, start, hx_start, end, edu_level)
d <- as.data.frame(d)
#Format dates and add to data frame
d$start <- as.Date(start, format = '%Y-%m-%d')
d$end <- as.Date(end, format = '%Y-%m-%d')
d$hx_start <- as.Date(hx_start, format = '%Y-%m-%d')
#Create 2 duplicates of this row for each person
d1 <- d[rep(seq_len(nrow(d)), each = 3), ]
d1
#> person episode start hx_start end edu_level
#> 1 1 33 2013-01-01 2013-01-01 2013-06-04 2
#> 1.1 1 33 2013-01-01 2013-01-01 2013-06-04 2
#> 1.2 1 33 2013-01-01 2013-01-01 2013-06-04 2
#> 2 2 50 2010-01-21 2010-03-09 2010-06-19 3
#> 2.1 2 50 2010-01-21 2010-03-09 2010-06-19 3
#> 2.2 2 50 2010-01-21 2010-03-09 2010-06-19 3
#> 3 3 65 2009-09-18 2009-12-31 2009-12-31 2
#> 3.1 3 65 2009-09-18 2009-12-31 2009-12-31 2
#> 3.2 3 65 2009-09-18 2009-12-31 2009-12-31 2
#> 4 4 70 2010-05-26 2010-07-04 2010-12-24 1
#> 4.1 4 70 2010-05-26 2010-07-04 2010-12-24 1
#> 4.2 4 70 2010-05-26 2010-07-04 2010-12-24 1
Created on 2022-03-23 by the reprex package (v2.0.0)
You can do this by creating a small helper function. I've done it using data.table syntax:
library(data.table)

f <- function(s, m, e) {
  # hx_start falls strictly after start: splice into two rows
  if (m > s) return(list("start" = c(m, s), "hx_start" = c(m, m), "end" = c(e, m - 1)))
  # hx_start equals start: keep the row unchanged
  if (m == s) return(list("start" = s, "hx_start" = m, "end" = e))
}

# Drop the three date columns, then join the spliced dates back on person;
# all other columns (episode, edu_level, ...) are carried along without naming them
setDT(d)[, !c(3:5)][d[, f(start, hx_start, end), by = person], on = .(person)]
Output:
person episode edu_level start hx_start end
1: 1 33 2 2013-01-01 2013-01-01 2013-06-04
2: 2 50 3 2010-03-09 2010-03-09 2010-06-19
3: 2 50 3 2010-01-21 2010-03-09 2010-03-08
4: 3 65 2 2009-12-31 2009-12-31 2009-12-31
5: 3 65 2 2009-09-18 2009-12-31 2009-12-30
6: 4 70 1 2010-07-04 2010-07-04 2010-12-24
7: 4 70 1 2010-05-26 2010-07-04 2010-07-03
Notice that:
For persons 2 and 4, one row now has hx_start as its start date, while the other row keeps the original start date and gets an end date one day before hx_start.
For person 1, there has been no change.
For person 3, the same splice as for persons 2 and 4 applies.
Tidyverse option (also uses the function above; dplyr and tidyr must be loaded):
library(dplyr)
library(tidyr)

inner_join(
  d %>% select(-c(start, hx_start, end)),
  d %>%
    rowwise() %>%
    summarize(person = max(person),
              dates = list(f(start, hx_start, end))) %>%
    unnest_wider(dates) %>%
    unnest(cols = everything()),
  by = "person"
)
Output:
person episode edu_level start hx_start end
1: 1 33 2 2013-01-01 2013-01-01 2013-06-04
2: 2 50 3 2010-03-09 2010-03-09 2010-06-19
3: 2 50 3 2010-01-21 2010-03-09 2010-03-08
4: 3 65 2 2009-12-31 2009-12-31 2009-12-31
5: 3 65 2 2009-09-18 2009-12-31 2009-12-30
6: 4 70 1 2010-07-04 2010-07-04 2010-12-24
7: 4 70 1 2010-05-26 2010-07-04 2010-07-03
This seems like it should be straightforward but I cannot find a way to do this.
I have a sales cycle that begins around August 1 of each year, and I need to sum sales by week number. I need to create a "week number" field where week #1 begins on a date that I specify. Thus far I have looked at lubridate, base R, and strftime, and I cannot find a way to change the "start" date from 01/01/YYYY to something else.
Solution needs to let me specify the start date and iterate week numbers as 7 days from the start date. The actual start date doesn't always occur on a Sunday or Monday.
EG Data Frame
eg_data <- data.frame(
  cycle = c("cycle2019", "cycle2019", "cycle2018", "cycle2018",
            "cycle2017", "cycle2017", "cycle2016", "cycle2016"),
  dates = as.POSIXct(c("2019-08-01", "2019-08-10", "2018-07-31", "2018-08-16",
                       "2017-08-03", "2017-08-14", "2016-08-05", "2016-08-29")),
  week_n = c("1", "2", "1", "3", "1", "2", "1", "4"))
I'd like the result to look like what is above - it would take the min date for each cycle and use that as a starting point, then iterate up week numbers based on a given date's distance from the cycle starting date.
This almost works. (Date arithmetic on POSIXct gives durations in seconds; there may be a smoother way to convert with lubridate tools.)
library(dplyr)

secs_per_week <- 60 * 60 * 24 * 7
eg_data %>%
  group_by(cycle) %>%
  mutate(nw = 1 + as.numeric(round((dates - min(dates)) / secs_per_week)))
The results don't match for 2017, because there is an 11-day gap between the first and second observation ...
cycle dates week_n nw
<chr> <dttm> <chr> <dbl>
5 cycle2017 2017-08-03 00:00:00 1 1
6 cycle2017 2017-08-14 00:00:00 2 3
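A small tweak fixes that case: counting completed weeks with integer division instead of round() matches the expected numbering here (a sketch, checked only against this example):

library(dplyr)

eg_data %>%
  group_by(cycle) %>%
  mutate(nw = 1 + as.numeric(dates - min(dates), units = "days") %/% 7)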
If someone has a better answer, please post, but this works.
Take the data frame in the example, eg_data:
library(dplyr)

eg_data %>%
  group_by(cycle) %>%
  mutate(cycle_start = as.Date(min(dates)),
         days_diff = as.Date(dates) - cycle_start,
         week_n = days_diff / 7,
         week_n_whole = ceiling(days_diff / 7)) -> eg_data_check
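One caveat, as a hedged note: with this formula the cycle start date itself gets ceiling(0/7) = 0. If week 1 should include day 0, floor() + 1 reproduces the week_n values above:

library(dplyr)

eg_data %>%
  group_by(cycle) %>%
  mutate(days_diff = as.numeric(as.Date(dates) - as.Date(min(dates))),
         week_n_whole = floor(days_diff / 7) + 1)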
(First time I've answered my own question)
library("lubridate")
eg_data %>%
as_tibble() %>%
group_by(cycle) %>%
mutate(new_week = week(dates)-31)
This doesn't quite work the same as your example, but perhaps with some fiddling based on your domain experience you could adapt it:
library(dplyr)
library(lubridate)
library(stringr)

eg_data %>%
  mutate(aug1 = ymd_h(paste(str_sub(cycle, start = -4), "080100")),
         week_n2 = ceiling((dates - aug1) / ddays(7)))
EDIT: If you have specific known dates for the start of each cycle, it might be helpful to join those dates to your data for the calc:
library(lubridate)
cycle_starts <- data.frame(
cycle = c("cycle2019", "cycle2018", "cycle2017", "cycle2016"),
start_date = ymd_h(c(2019080100, 2018072500, 2017080500, 2016071300))
)
eg_data %>%
left_join(cycle_starts) %>%
mutate(week_n2 = ceiling((dates - start_date)/ddays(7)))
#Joining, by = "cycle"
# cycle dates week_n start_date week_n2
#1 cycle2019 2019-08-01 1 2019-08-01 1
#2 cycle2019 2019-08-10 2 2019-08-01 2
#3 cycle2018 2018-07-31 1 2018-07-25 1
#4 cycle2018 2018-08-16 3 2018-07-25 4
#5 cycle2017 2017-08-03 1 2017-08-05 0
#6 cycle2017 2017-08-14 2 2017-08-05 2
#7 cycle2016 2016-08-05 1 2016-07-13 4
#8 cycle2016 2016-08-29 4 2016-07-13 7
This is a concise solution using lubridate:
library(dplyr)
library(lubridate)

eg_data %>%
  group_by(cycle) %>%
  mutate(new_week = floor(as.period(ymd(dates) - ymd(min(dates))) / weeks()) + 1)
# A tibble: 8 x 4
# Groups: cycle [4]
cycle dates week_n new_week
<chr> <dttm> <chr> <dbl>
1 cycle2019 2019-08-01 00:00:00 1 1
2 cycle2019 2019-08-10 00:00:00 2 2
3 cycle2018 2018-07-31 00:00:00 1 1
4 cycle2018 2018-08-16 00:00:00 3 3
5 cycle2017 2017-08-03 00:00:00 1 1
6 cycle2017 2017-08-14 00:00:00 2 2
7 cycle2016 2016-08-05 00:00:00 1 1
8 cycle2016 2016-08-29 00:00:00 4 4
I have a data frame like this
transactionId user_id total_in_pennies created_at X yearmonth
1 345068 8 9900 2018-09-13 New Customer 2018-09-01
2 346189 8 9900 2018-09-20 Repeat Customer 2018-09-01
3 363500 8 7700 2018-10-11 Repeat Customer 2018-10-01
4 376089 8 7700 2018-10-25 Repeat Customer 2018-10-01
5 198450 11 0 2018-01-18 New Customer 2018-01-01
6 203966 11 0 2018-01-25 Repeat Customer 2018-01-01
it has many more rows, but that little snippet can be used.
I am trying to group using dplyr so I can get a final data frame like this
I use this code
df_RFM11 <- data2 %>%
  group_by(yearmonth) %>%
  summarise(New_Customers = sum(X == "New Customer"),
            Repeat_Customers = sum(X == "Repeat Customer"),
            New_Customers_sales = sum(total_in_pennies & X == "New Customers"),
            Repeat_Customers_sales = sum(total_in_pennies & X == "Repeat Customers"))
and I get this result
> head(df_RFM11)
# A tibble: 6 x 5
yearmonth New_Customers Repeat_Customers New_Customers_sales Repeat_Customers_sales
<date> <int> <int> <int> <int>
1 2018-01-01 4880 2428 0 0
2 2018-02-01 2027 12068 0 0
3 2018-03-01 1902 15296 0 0
4 2018-04-01 1921 13363 0 0
5 2018-05-01 2631 18336 0 0
6 2018-06-01 2339 14492 0 0
and I am able to get the first 2 columns I need, the counts of new and repeat customers, but I get 0's when I try to get the sum of total_in_pennies for new and repeat customers.
Any help on what I am doing wrong?
Two things are going wrong: total_in_pennies & X == "New Customers" computes a logical AND (so at best you count rows rather than sum sales), and "New Customers" (plural) never matches the actual value "New Customer", which is why you get 0. You need to subset with square brackets, like below:
df_RFM11 <- data2 %>%
group_by(yearmonth) %>%
summarise(New_Customers=sum(X=="New Customer"),
Repeat_Customers=sum(X=="Repeat Customer"),
New_Customers_sales=sum(total_in_pennies[X=="New Customer"]),
Repeat_Customers_sales=sum(total_in_pennies[X=="Repeat Customer"])
)
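A tiny illustration of the difference, with toy vectors (not from the question's data):

x <- c(10, 20, 30)
grp <- c("a", "b", "a")

sum(x & grp == "a")  # 2: x is coerced to TRUE/FALSE, AND-ed, and the TRUEs are counted
sum(x[grp == "a"])   # 40: sums the values of x where the condition holds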
I'm getting started with R, so please bear with me
For example, I have this data.table (or data.frame) object :
Time Station count_starts count_ends
01/01/2015 00:30 A 2 3
01/01/2015 00:40 A 2 1
01/01/2015 00:55 B 1 1
01/01/2015 01:17 A 3 1
01/01/2015 01:37 A 1 1
My end goal is to group the Time column by hour and sum count_starts and count_ends by hourly time and station:
Time Station sum(count_starts) sum(count_ends)
01/01/2015 01:00 A 4 4
01/01/2015 01:00 B 1 1
01/01/2015 02:00 A 4 2
I did some research and found out that I should use the xts library.
Thanks for helping me out
UPDATE :
I converted the type of transactions$Time to POSIXct, so the xts package should be able to use the timeseries directly.
Using base R, we can still do the above. The only difference is that the hour will be one less for all of them, because cut() labels each interval by the start of the hour rather than the end:
dat=read.table(text = "Time Station count_starts count_ends
'01/01/2015 00:30' A 2 3
'01/01/2015 00:40' A 2 1
'01/01/2015 00:55' B 1 1
'01/01/2015 01:17' A 3 1
'01/01/2015 01:37' A 1 1",
header = TRUE, stringsAsFactors = FALSE)
dat$Time=cut(strptime(dat$Time,"%m/%d/%Y %H:%M"),"hour")
aggregate(.~Time+Station,dat,sum)
Time Station count_starts count_ends
1 2015-01-01 00:00:00 A 4 4
2 2015-01-01 01:00:00 A 4 2
3 2015-01-01 00:00:00 B 1 1
You can use the order function to rearrange the table or even the sort.POSIXlt function:
m=aggregate(.~Time+Station,dat,sum)
m[order(m[,1]),]
Time Station count_starts count_ends
1 2015-01-01 00:00:00 A 4 4
3 2015-01-01 00:00:00 B 1 1
2 2015-01-01 01:00:00 A 4 2
A solution using dplyr and lubridate. The key is to use ceiling_date to convert the date time column to hourly time-step, and then group and summarize the data.
library(dplyr)
library(lubridate)
dt2 <- dt %>%
mutate(Time = mdy_hm(Time)) %>%
mutate(Time = ceiling_date(Time, unit = "hour")) %>%
group_by(Time, Station) %>%
summarise(`sum(count_starts)` = sum(count_starts),
`sum(count_ends)` = sum(count_ends)) %>%
ungroup()
dt2
# # A tibble: 3 x 4
# Time Station `sum(count_starts)` `sum(count_ends)`
# <dttm> <chr> <int> <int>
# 1 2015-01-01 01:00:00 A 4 4
# 2 2015-01-01 01:00:00 B 1 1
# 3 2015-01-01 02:00:00 A 4 2
DATA
dt <- read.table(text = "Time Station count_starts count_ends
'01/01/2015 00:30' A 2 3
'01/01/2015 00:40' A 2 1
'01/01/2015 00:55' B 1 1
'01/01/2015 01:17' A 3 1
'01/01/2015 01:37' A 1 1",
header = TRUE, stringsAsFactors = FALSE)
Explanation
mdy_hm converts the string to a date-time class; the name means "month-day-year hour-minute", which matches the structure of the string. ceiling_date rounds a date-time object up to the unit specified. group_by groups the variables, and summarise conducts the summary operations.
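For instance, a quick check of the rounding behavior:

library(lubridate)

x <- mdy_hm("01/01/2015 00:30")
ceiling_date(x, unit = "hour")
# [1] "2015-01-01 01:00:00 UTC"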
There are basically two things required:
1) Round the Time down to the start of its 1-hour window (this uses floor_date; see below for a ceiling_date variant that matches the hour-end labels in the desired output):
library(data.table)
library(lubridate)
data=data.table(Time=c('01/01/2015 00:30','01/01/2015 00:40','01/01/2015 00:55','01/01/2015 01:17','01/01/2015 01:37'),Station=c('A','A','B','A','A'),count_starts=c(2,2,1,3,1),count_ends=c(3,1,1,1,1))
data[, Time_conv := as.POSIXct(strptime(Time, '%m/%d/%Y %H:%M'))]  # month/day/year, as in the question
data[, Time_round := floor_date(Time_conv, unit = "1 hour")]
2) Aggregate the data table obtained above, grouping by both the rounded time and the station, to get the desired result:
New_data = data[, list(count_starts_sum = sum(count_starts),
                       count_ends_sum = sum(count_ends)),
                by = c('Time_round', 'Station')]
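If you want each hour labelled by its end, as in the desired output, ceiling_date from lubridate can replace floor_date in step 1 (otherwise the same approach):

data[, Time_round := ceiling_date(Time_conv, unit = "1 hour")]
New_data = data[, list(count_starts_sum = sum(count_starts),
                       count_ends_sum = sum(count_ends)),
                by = c('Time_round', 'Station')]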
I have a data table with three columns: id, dtstart, dtend. For example:
id start end
1 01/01/2015 31/01/2015
1 02/02/2015 28/02/2015
1 01/07/2016 31/07/2016
1 01/08/2016 31/08/2016
2 01/03/2015 31/03/2015
2 01/04/2015 30/04/2015
2 01/02/2016 28/02/2016
2 01/03/2016 31/03/2016
...
I need to create another data table, grouped by id, with the same columns, where the new start date is the minimum of the original start dates and the new end date is the maximum of the original end dates.
When there is a break of more than one day between an end date and the next start date, the rows should be grouped separately.
For example for the above the new table would be:
id start end
1 01/01/2015 28/02/2015
1 01/07/2016 31/08/2016
2 01/03/2015 30/04/2015
2 01/02/2016 31/03/2016
...
Do I need a for loop, or is there a more efficient way (data.table grouping, for example)? The table has over 20 million rows and 100k+ unique ids.
Cheers
Andrew
This can be done using dplyr:
library(dplyr)

dt.new <- dt %>%
  arrange(id, start, end) %>%
  mutate(gr = cumsum(lag(id, default = min(id)) != id |
                       as.numeric(difftime(start, lag(end, default = first(start)), units = 'days')) > 1)) %>%
  group_by(id, gr) %>%
  summarise(start = first(start),
            end = last(end))
The result is:
Source: local data frame [6 x 4]
Groups: id [?]
id gr start end
<int> <int> <dttm> <dttm>
1 1 0 2015-01-01 2015-01-31
2 1 1 2015-02-02 2015-02-28
3 1 2 2016-07-01 2016-08-31
4 2 3 2015-03-01 2015-04-30
5 2 4 2016-02-01 2016-02-28
6 2 5 2016-03-01 2016-03-31
This works but doesn't match your output exactly, because you requested a one-day margin (if you want a two-day margin, switch from > 1 to > 2), and 2016 was a leap year in R's internal calendar, so the margin between 2016-02-28 and 2016-03-01 is 2 days.
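Given the 20-million-row scale you mention, a data.table translation of the same grouping logic may be faster. This is a sketch under the assumptions that the columns are named as in your example and the dates parse with the day/month/year format shown (the > 1 margin is the same knob as above):

library(data.table)

setDT(dt)
dt[, `:=`(start = as.Date(start, "%d/%m/%Y"), end = as.Date(end, "%d/%m/%Y"))]
setorder(dt, id, start)

# Start a new group whenever the gap to the previous row's end exceeds one day
dt[, gr := cumsum(as.numeric(start - shift(end, fill = first(start))) > 1), by = id]
dt[, .(start = first(start), end = last(end)), by = .(id, gr)]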
Thanks again @akash87.
For example, row 6 below is within a month, so it should still return one row for id 1, from 01/02/2006 to 30/09/2006, but it breaks into two: the first from 01/02/2006 to 12/06/2006 and the second from 01/07/2006 to 30/09/2006.
id dtstart dtend
1 01/02/2006 28/02/2006
1 01/03/2006 31/03/2006
1 01/04/2006 30/04/2006
1 01/05/2006 31/05/2006
1 01/06/2006 30/06/2006
1 10/06/2006 12/06/2006
1 01/07/2006 31/07/2006
1 01/08/2006 31/08/2006
1 01/09/2006 30/09/2006
2 01/04/2006 30/04/2006
2 01/05/2006 31/05/2006
2 01/09/2006 30/09/2006
2 01/10/2006 31/10/2006
So instead of returning
id start end
1 01/02/2006 30/09/2006
2 01/04/2006 31/05/2006
2 01/09/2006 31/10/2006
We have
id start end
1 01/02/2006 12/06/2006
1 01/07/2006 30/09/2006
2 01/04/2006 31/05/2006
2 01/09/2006 31/10/2006
Andrew
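A possible fix for that overlap case, as a hedged sketch building on the dplyr answer above (it assumes dtstart and dtend are already parsed as Date, and was checked by hand against this example only): compare each start against the running maximum of all previous end dates using cummax(), so a short row contained inside an earlier range (like row 6) does not reset the comparison point.

library(dplyr)

dt %>%
  arrange(id, dtstart, dtend) %>%
  group_by(id) %>%
  # Running maximum of previous end dates (as integers, since cummax drops the Date class)
  mutate(prev_end = lag(cummax(as.integer(dtend)),
                        default = as.integer(first(dtstart))),
         gr = cumsum(as.integer(dtstart) - prev_end > 1)) %>%
  group_by(id, gr) %>%
  summarise(start = min(dtstart), end = max(dtend), .groups = 'drop')

This collapses id 1 into a single row (01/02/2006 to 30/09/2006) while keeping the two separate spells for id 2.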