I have lab records for 30,000 unique IDs. I need to convert my data from long to wide format for each ID and the TEST_DATEs related to that unique ID.
Example for one ID :
I need to convert this to a wider format like this:
I have a dataset with 30,000 IDs and need to do this for each one. The ID with the maximum number of tests determines the number of columns.
I would appreciate any ideas you might have to solve this problem. Thank you!
Try this:
library(dplyr)
library(tidyr)
#Code
new <- df %>%
  group_by(ACCT, TEST_DATE) %>%
  summarise(RESULT = round(mean(RESULT, na.rm = TRUE), 2)) %>%
  ungroup() %>%
  mutate(across(-ACCT, ~ as.character(.))) %>%
  pivot_longer(-ACCT) %>%
  group_by(ACCT, name) %>%
  mutate(name = paste0(name, row_number())) %>%
  pivot_wider(names_from = name, values_from = value) %>%
  mutate(across(starts_with('RESULT'), ~ as.numeric(.)))
Output:
# A tibble: 2 x 7
# Groups: ACCT [2]
ACCT TEST_DATE1 RESULT1 TEST_DATE2 RESULT2 TEST_DATE3 RESULT3
<int> <chr> <dbl> <chr> <dbl> <chr> <dbl>
1 37733 9/1/2016 3 10/18/2016 2 11/1/2016 1
2 37734 9/1/2016 5 10/18/2016 4 11/1/2016 3
Some data used:
#Data
df <- structure(list(ACCT = c(37733L, 37733L, 37733L, 37734L, 37734L,
37734L), TEST_DATE = c("9/1/2016", "10/18/2016", "11/1/2016",
"9/1/2016", "10/18/2016", "11/1/2016"), RESULT = c(3L, 2L, 1L,
5L, 4L, 3L)), class = "data.frame", row.names = c(NA, -6L))
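As the question notes, the number of TEST_DATE/RESULT column pairs is driven by the ID with the most tests; a quick sanity check of that count, as a sketch using the sample df above:
# The widest ID determines how many TEST_DATE*/RESULT* column pairs appear
max(table(df$ACCT))
# [1] 3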
Here is a data.table option with dcast that might help (borrowing data from @Duck):
> dcast(setDT(df)[, Q := seq(.N), ACCT], ACCT ~ Q, value.var = c("TEST_DATE", "RESULT"))
ACCT TEST_DATE_1 TEST_DATE_2 TEST_DATE_3 RESULT_1 RESULT_2 RESULT_3
1: 37733 9/1/2016 10/18/2016 11/1/2016 3 2 1
2: 37734 9/1/2016 10/18/2016 11/1/2016 5 4 3
Another option is to use melt along with dcast, which might give exactly the format you are after:
suppressWarnings({
type.convert(
dcast(
melt(
setDT(df)[, Q := seq(.N), ACCT],
id = c("ACCT", "Q"),
measure = c("TEST_DATE", "RESULT")
)[order(ACCT, Q)],
ACCT ~ Q + variable,
value.var = "value"
),
as.is = TRUE
)
})
which gives
ACCT 1_TEST_DATE 1_RESULT 2_TEST_DATE 2_RESULT 3_TEST_DATE 3_RESULT
1: 37733 9/1/2016 3 10/18/2016 2 11/1/2016 1
2: 37734 9/1/2016 5 10/18/2016 4 11/1/2016 3
Take this simple route
library(tidyverse)
df %>%
  group_by(ACCT, TEST_DATE) %>%
  summarise(RESULT = mean(RESULT)) %>%
  group_by(ACCT) %>%
  mutate(testno = row_number(), resultno = row_number()) %>%
  pivot_wider(id_cols = ACCT, names_from = c("testno", "resultno"),
              values_from = c(TEST_DATE, RESULT))
# A tibble: 2 x 9
# Groups: ACCT [2]
ACCT TEST_DATE_1_1 TEST_DATE_2_2 TEST_DATE_3_3 TEST_DATE_4_4 RESULT_1_1 RESULT_2_2 RESULT_3_3 RESULT_4_4
<int> <date> <date> <date> <date> <dbl> <dbl> <dbl> <dbl>
1 37733 2016-01-07 2016-01-09 2016-01-11 2016-08-10 5 4.5 1 2
2 37734 2016-01-21 2016-08-20 NA NA 3 4 NA NA
data (dput) used
> dput(df)
structure(list(ACCT = c(37733L, 37733L, 37733L, 37733L, 37734L,
37734L, 37733L), TEST_DATE = structure(c(16809, 17023, 16811,
16807, 17033, 16821, 16809), class = "Date"), RESULT = c(3L,
2L, 1L, 5L, 4L, 3L, 6L)), row.names = c(NA, -7L), class = "data.frame")
df
> df
ACCT TEST_DATE RESULT
1 37733 2016-01-09 3
2 37733 2016-08-10 2
3 37733 2016-01-11 1
4 37733 2016-01-07 5
5 37734 2016-08-20 4
6 37734 2016-01-21 3
7 37733 2016-01-09 6
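As a side note, testno and resultno are always identical here, so a single counter would give the same reshape with simpler column names; a sketch of that variation (not the answer's original code), using the same df:
df %>%
  group_by(ACCT, TEST_DATE) %>%
  summarise(RESULT = mean(RESULT)) %>%
  group_by(ACCT) %>%
  mutate(testno = row_number()) %>%
  pivot_wider(id_cols = ACCT, names_from = testno,
              values_from = c(TEST_DATE, RESULT))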
I know my question is not as clear as it should be, so I hope my explanation will make it more comprehensible. I have data like this:
# total_call data
call_id | from_number | retrieved_date
      1 |           1 | 2020-01-12 12:03:34
      2 |           1 | 2020-01-12 12:06:34
      3 |           2 | 2020-01-15 13:02:40
      4 |           2 | 2020-01-15 13:05:40
      5 |           1 | 2020-01-12 13:09:34
I want to group the calls by from_number and retrieved_date, where each call must fall within 1 hour of the earliest call in its group; after 1 hour, a call belongs to a new group. Then I want to keep the latest time of each group. This is the result I want:
# total_call data
call_id | from_number | retrieved_date
      2 |           1 | 2020-01-12 12:06:34
      4 |           2 | 2020-01-15 13:05:40
      5 |           1 | 2020-01-12 13:09:34
Thanks for your attention. I’m looking forward to your reply.
We convert retrieved_date to POSIXct format, arrange the data, create a new group whenever the current retrieved_date is more than an hour after the previous one, and select the row with the maximum retrieved_date in each group.
library(dplyr)
df %>%
  mutate(retrieved_date = lubridate::ymd_hms(retrieved_date)) %>%
  arrange(from_number, retrieved_date) %>%
  group_by(from_number) %>%
  group_by(gr = cumsum(difftime(retrieved_date, lag(retrieved_date,
             default = first(retrieved_date)), units = "hours") > 1), add = TRUE) %>%
  slice(which.max(retrieved_date)) %>%
  ungroup() %>%
  select(-gr)
# A tibble: 3 x 3
# call_id from_number retrieved_date
# <int> <int> <dttm>
#1 2 1 2020-01-12 12:06:34
#2 5 1 2020-01-12 13:09:34
#3 4 2 2020-01-15 13:05:40
data
df <- structure(list(call_id = 1:5, from_number = c(1L, 1L, 2L, 2L,
1L), retrieved_date = structure(c(1L, 2L, 4L, 5L, 3L),
.Label = c("2020- 01-12 12:03:34","2020-01-12 12:06:34", "2020-01-12 13:09:34",
"2020-01-15 13:02:40", "2020-01-15 13:05:40"), class = "factor")),
class = "data.frame", row.names = c(NA, -5L))
I was wondering if someone here could help me with a lapply question.
Every month, data are extracted and the data frames are named according to the date extracted (01-08-2019, 01-09-2019, 01-10-2019, etc.). The contents of each data frame are similar to the example below:
01-09-2019
ID DOB
3 01-07-2019
5 01-06-2019
7 01-05-2019
8 01-09-2019
01-10-2019
ID DOB
2 01-10-2019
5 01-06-2019
8 01-09-2019
9 01-02-2019
As the months roll on, there are more data sets being downloaded.
I want to calculate the ages of the people in each of the data sets based on the date the data was extracted; in essence, the age is the difference between the date in the data frame name and the DOB variable.
01-09-2019
ID DOB AGE(months)
3 01-07-2019 2
5 01-06-2019 3
7 01-05-2019 4
8 01-09-2019 0
01-10-2019
ID DOB AGE(months)
2 01-10-2019 0
5 01-06-2019 4
8 01-09-2019 1
9 01-02-2019 8
I was thinking of putting all of the data frames together in a list (as there are a lot) and then using lapply to calculate age across all data frames. How do I go about calculating the difference between a data frame name and a column?
If I may suggest a slightly different approach: it might make more sense to collapse your list into a single data frame before calculating the ages. Given that your data looks something like this, i.e. a list of data frames where the list element names are the dates of extraction:
$`01-09-2019`
# A tibble: 4 x 2
ID DOB
<dbl> <date>
1 3 2019-07-01
2 5 2019-06-01
3 7 2019-05-01
4 8 2019-09-01
$`01-10-2019`
# A tibble: 4 x 2
ID DOB
<dbl> <date>
1 2 2019-10-01
2 5 2019-06-01
3 8 2019-09-01
4 9 2019-02-01
You can call bind_rows first with parameter .id = "date_extracted" to turn your list into a data frame, and then calculate age in months.
library(tidyverse)
library(lubridate)
tib <- bind_rows(tib_list, .id = "date_extracted") %>%
mutate(date_extracted = dmy(date_extracted),
DOB = dmy(DOB),
age_months = month(date_extracted) - month(DOB)
)
#### OUTPUT ####
# A tibble: 8 x 4
date_extracted ID DOB age_months
<date> <dbl> <date> <dbl>
1 2019-09-01 3 2019-07-01 2
2 2019-09-01 5 2019-06-01 3
3 2019-09-01 7 2019-05-01 4
4 2019-09-01 8 2019-09-01 0
5 2019-10-01 2 2019-10-01 0
6 2019-10-01 5 2019-06-01 4
7 2019-10-01 8 2019-09-01 1
8 2019-10-01 9 2019-02-01 8
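One caveat: month(date_extracted) - month(DOB) only works while both dates fall in the same calendar year, as they do here. If extracts can span year boundaries, a full month difference is safer; a sketch, still assuming the tib built above and lubridate loaded:
# Whole months elapsed between DOB and the extraction date, robust across years
tib <- tib %>%
  mutate(age_months = interval(DOB, date_extracted) %/% months(1))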
This can be solved with lapply as well, but we can also use Map here to iterate over the list and its names after putting all the data frames in a list. In base R:
Map(function(x, y) {
x$DOB <- as.Date(x$DOB)
transform(x, age = as.integer(format(as.Date(y), "%m")) -
as.integer(format(x$DOB, "%m")))
}, list_df, names(list_df))
#$`01-09-2019`
# ID DOB age
#1 3 0001-07-20 2
#2 5 0001-06-20 3
#3 7 0001-05-20 4
#4 8 0001-09-20 0
#$`01-10-2019`
# ID DOB age
#1 2 0001-10-20 0
#2 5 0001-06-20 4
#3 8 0001-09-20 1
#4 9 0001-02-20 8
We can also do the same in the tidyverse:
library(dplyr)
library(lubridate)
purrr::imap(list_df, ~ .x %>% mutate(age = month(dmy(.y)) - month(dmy(DOB))))
data
list_df <- list(`01-09-2019` = structure(list(ID = c(3L, 5L, 7L, 8L),
DOB = structure(c(3L, 2L, 1L, 4L), .Label = c("01-05-2019", "01-06-2019",
"01-07-2019", "01-09-2019"), class = "factor")), class = "data.frame",
row.names = c(NA, -4L)), `01-10-2019` = structure(list(ID = c(2L, 5L, 8L, 9L),
DOB = structure(c(4L, 2L, 3L, 1L), .Label = c("01-02-2019",
"01-06-2019", "01-09-2019", "01-10-2019"), class = "factor")),
class = "data.frame", row.names = c(NA, -4L)))
It's bad practice to use dates and numbers as data frame names; consider prefixing the date with an "x", as shown below in this base R solution:
df_list <- list(x01_09_2019 = `01-09-2019`, x01_10_2019 = `01-10-2019`)
df_list <- mapply(cbind, "report_date" = names(df_list), df_list, SIMPLIFY = F)
df_list <- lapply(df_list, function(x){
  x$report_date <- as.Date(gsub("_", "-", gsub("x", "", x$report_date)), "%d-%m-%Y")
  x$Age <- x$report_date - x$DOB
  return(x)
})
Data:
`01-09-2019` <- structure(list(ID = c(3, 5, 7, 8),
DOB = structure(c(18078, 18048, 18017, 18140), class = "Date")),
class = "data.frame", row.names = c(NA, -4L))
`01-10-2019` <- structure(list(ID = c(2, 5, 8, 9),
DOB = structure(c(18170, 18048, 18140, 17928), class = "Date")),
class = "data.frame", row.names = c(NA, -4L))
I have a dataset of patient medications with Start.Date and Stop.Date, one course per row. I would like to merge rows where the time intervals are sequential, as below:
ID = c(2, 2, 2, 2, 3, 5)
Medication = c("aspirin", "aspirin", "aspirin", "tylenol", "lipitor", "advil")
Start.Date = c("05/01/2017", "05/05/2017", "06/20/2017", "05/01/2017", "05/06/2017", "05/28/2017")
Stop.Date = c("05/04/2017", "05/10/2017", "06/27/2017", "05/15/2017", "05/12/2017", "06/13/2017")
df = data.frame(ID, Medication, Start.Date, Stop.Date)
ID Medication Start.Date Stop.Date
2 aspirin 05/01/2017 05/04/2017
2 aspirin 05/05/2017 05/10/2017
2 aspirin 06/20/2017 06/27/2017
2 tylenol 05/01/2017 05/15/2017
3 lipitor 05/06/2017 05/12/2017
5 advil 05/28/2017 06/13/2017
I would like to collapse rows by ID and Medication when a row's Stop.Date is the day before the next row's Start.Date. The result should look like this:
ID Medication Start.Date Stop.Date
2 aspirin 05/01/2017 05/10/2017
2 aspirin 06/20/2017 06/27/2017
2 tylenol 05/01/2017 05/15/2017
3 lipitor 05/06/2017 05/12/2017
5 advil 05/28/2017 06/13/2017
library(tidyverse)
library(lubridate)
df %>%
  group_by(Medication) %>%
  mutate_at(vars(3:4), mdy) %>%
  mutate(Start.Date = coalesce(
           if_else((Start.Date - lag(Stop.Date)) == 1, lag(Start.Date), Start.Date),
           Start.Date),
         s = lead(Start.Date) != Start.Date) %>%
  filter(s | is.na(s)) %>%
  select(-s)
# A tibble: 5 x 4
# Groups: ID, Medication [4]
ID Medication Start.Date Stop.Date
<dbl> <chr> <date> <date>
1 2 aspirin 2017-05-01 2017-05-10
2 2 aspirin 2017-06-20 2017-06-27
3 2 tylenol 2017-05-01 2017-05-15
4 3 lipitor 2017-05-06 2017-05-12
5 5 advil 2017-05-28 2017-06-13
How about this?
df %>%
mutate_at(vars(ends_with("Date")), function(x) as.Date(x, format = "%m/%d/%Y")) %>%
group_by(ID, Medication) %>%
mutate(
isConsecutive = lead(Start.Date) - Stop.Date == 1,
isConsecutive = ifelse(
is.na(isConsecutive) & lag(isConsecutive) == TRUE, FALSE, isConsecutive),
grp = cumsum(isConsecutive)) %>%
group_by(ID, Medication, grp) %>%
mutate(Start.Date = min(Start.Date), Stop.Date = max(Stop.Date)) %>%
slice(1) %>%
ungroup() %>%
select(-isConsecutive, -grp)
## A tibble: 5 x 4
# ID Medication Start.Date Stop.Date
# <dbl> <fct> <date> <date>
#1 2. aspirin 2017-05-01 2017-05-10
#2 2. aspirin 2017-06-20 2017-06-27
#3 2. tylenol 2017-05-01 2017-05-15
#4 3. lipitor 2017-05-06 2017-05-12
#5 5. advil 2017-05-28 2017-06-13
It's best to test this with a few more examples to ensure robustness. Let's try a more complex example:
df <- structure(list(ID = c(2, 2, 2, 2, 2, 3, 5, 5), Medication = structure(c(2L,
2L, 2L, 2L, 4L, 3L, 1L, 1L), .Label = c("advil", "aspirin", "lipitor",
"tylenol"), class = "factor"), Start.Date = structure(c(1L, 2L,
6L, 7L, 1L, 3L, 4L, 5L), .Label = c("05/01/2017", "05/05/2017",
"05/06/2017", "05/28/2017", "06/14/2017", "06/20/2017", "06/28/2017"
), class = "factor"), Stop.Date = structure(c(2L, 3L, 8L, 1L,
5L, 4L, 6L, 7L), .Label = c("04/30/2017", "05/04/2017", "05/10/2017",
"05/12/2017", "05/15/2017", "06/13/2017", "06/20/2017", "06/27/2017"
), class = "factor")), .Names = c("ID", "Medication", "Start.Date",
"Stop.Date"), row.names = c(NA, -8L), class = "data.frame")
df;
# ID Medication Start.Date Stop.Date
#1 2 aspirin 05/01/2017 05/04/2017
#2 2 aspirin 05/05/2017 05/10/2017
#3 2 aspirin 06/20/2017 06/27/2017
#4 2 aspirin 06/28/2017 04/30/2017
#5 2 tylenol 05/01/2017 05/15/2017
#6 3 lipitor 05/06/2017 05/12/2017
#7 5 advil 05/28/2017 06/13/2017
#8 5 advil 06/14/2017 06/20/2017
Note that here we have two consecutive blocks for ID=2 (rows 1+2 and rows 3+4), as well as one consecutive block for ID=5 (rows 7+8).
Output is
df %>%
mutate_at(vars(ends_with("Date")), function(x) as.Date(x, format = "%m/%d/%Y")) %>%
group_by(ID, Medication) %>%
mutate(
isConsecutive = lead(Start.Date) - Stop.Date == 1,
isConsecutive = ifelse(
is.na(isConsecutive) & lag(isConsecutive) == TRUE, FALSE, isConsecutive),
grp = cumsum(isConsecutive)) %>%
group_by(ID, Medication, grp) %>%
mutate(Start.Date = min(Start.Date), Stop.Date = max(Stop.Date)) %>%
slice(1) %>%
ungroup() %>%
select(-isConsecutive, -grp)
## A tibble: 5 x 4
# ID Medication Start.Date Stop.Date
# <dbl> <fct> <date> <date>
#1 2. aspirin 2017-05-01 2017-05-10
#2 2. aspirin 2017-06-20 2017-06-27
#3 2. tylenol 2017-05-01 2017-05-15
#4 3. lipitor 2017-05-06 2017-05-12
#5 5. advil 2017-05-28 2017-06-20
Results seem to be robust.
Convert the 'Start.Date' and 'Stop.Date' columns to Date class with mdy (from lubridate), group by 'ID' and 'Medication', then filter for rows where the absolute difference between the lead of 'Start.Date' and 'Stop.Date' is not equal to 1:
library(dplyr)
library(lubridate)
df %>%
mutate_at(3:4, mdy) %>%
group_by(ID, Medication) %>%
filter(abs(lead(Start.Date, default = last(Start.Date)) - Stop.Date) != 1)
# A tibble: 5 x 4
# Groups: ID, Medication [4]
# ID Medication Start.Date Stop.Date
# <dbl> <fct> <date> <date>
#1 2 aspirin 2017-05-05 2017-05-10
#2 2 aspirin 2017-06-20 2017-06-27
#3 2 tylenol 2017-05-01 2017-05-15
#4 3 lipitor 2017-05-06 2017-05-12
#5 5 advil 2017-05-28 2017-06-13
Or using similar methodology in data.table:
library(data.table)
setDT(df)[df[, (shift(mdy(Start.Date), type = 'lead',
fill = last(Start.Date)) - mdy(Stop.Date)) != 1 , ID]$V1]
# ID Medication Start.Date Stop.Date
#1: 2 aspirin 05/05/2017 05/10/2017
#2: 2 aspirin 06/20/2017 06/27/2017
#3: 2 tylenol 05/01/2017 05/15/2017
#4: 3 lipitor 05/06/2017 05/12/2017
#5: 5 advil 05/28/2017 06/13/2017
NOTE: We could convert the Date columns to Date class first as before
NOTE2: Both are simple methods based on the example provided by the OP
So I've been trying to get my head around this, but I can't figure out how to do it.
This is an example:
ID Hosp. date Discharge date
1 2006-02-02 2006-02-04
1 2006-02-04 2006-02-18
1 2006-02-22 2006-03-24
1 2008-08-09 2008-09-14
2 2004-01-03 2004-01-08
2 2004-01-13 2004-01-15
2 2004-06-08 2004-06-28
What I want is a way to combine rows by ID if the discharge date is the same as the Hosp. date in the next row (or within ±7 days). It would look like this:
ID Hosp. date Discharge date
1 2006-02-02 2006-03-24
1 2008-08-09 2008-09-14
2 2004-01-03 2004-01-15
2 2004-06-08 2004-06-28
Using the data.table package:
# load the package
library(data.table)
# convert to a 'data.table'
setDT(d)
# make sure you have the correct order
setorder(d, ID, Hosp.date)
# summarise
d[, grp := cumsum(Hosp.date > (shift(Discharge.date, fill = Discharge.date[1]) + 7))
, by = ID
][, .(Hosp.date = min(Hosp.date), Discharge.date = max(Discharge.date))
, by = .(ID,grp)]
you get:
ID grp Hosp.date Discharge.date
1: 1 0 2006-02-02 2006-03-24
2: 1 1 2008-08-09 2008-09-14
3: 2 0 2004-01-03 2004-01-15
4: 2 1 2004-06-08 2004-06-28
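To see how the grp counter is built, here is a base R sketch of the same comparison for ID 1, with the dates taken from the question's table:
hosp  <- as.Date(c("2006-02-02", "2006-02-04", "2006-02-22", "2008-08-09"))
disch <- as.Date(c("2006-02-04", "2006-02-18", "2006-03-24", "2008-09-14"))

# Compare each admission with the previous discharge (the first row is compared with itself)
prev_disch <- c(disch[1], head(disch, -1))

# A new stay starts when the admission is more than 7 days after the previous discharge
cumsum(hosp > prev_disch + 7)
# [1] 0 0 0 1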
The same logic with dplyr:
library(dplyr)
d %>%
arrange(ID, Hosp.date) %>%
group_by(ID) %>%
mutate(grp = cumsum(Hosp.date > (lag(Discharge.date, default = Discharge.date[1]) + 7))) %>%
group_by(grp, add = TRUE) %>%
summarise(Hosp.date = min(Hosp.date), Discharge.date = max(Discharge.date))
Used data:
d <- structure(list(ID = c(1L, 1L, 1L, 1L, 2L, 2L, 2L),
Hosp.date = structure(c(13181, 13183, 13201, 14100, 12420, 12430, 12577), class = "Date"),
Discharge.date = structure(c(13183, 13197, 13231, 14136, 12425, 12432, 12597), class = "Date")),
.Names = c("ID", "Hosp.date", "Discharge.date"), class = "data.frame", row.names = c(NA, -7L))
ID Date
1 1-1-2016
1 2-1-2016
1 3-1-2016
2 5-1-2016
3 6-1-2016
3 11-1-2016
3 12-1-2016
4 7-1-2016
5 9-1-2016
5 19-1-2016
5 20-1-2016
6 11-04-2016
6 12-04-2016
6 16-04-2016
6 04-08-2016
6 05-08-2016
6 06-08-2016
The expected data frame is based on consecutive dates, pairwise:
1st_Date is the date of the first visit.
2nd_Date is the date from which he visited for 2 consecutive days.
3rd_Date is the date from which he visited for 3 consecutive days.
For e.g :
For ID = 1, he visited for the first time on 1-1-2016, and his runs of 2 and 3 consecutive visits also began on 1-1-2016.
Similarly, for ID = 2, he visited only once, so the rest remain blank.
For ID = 3, he visited for the first time on 6-1-2016 but visited for 2 consecutive days starting on 11-1-2016.
NOTE: This has to be done only up to the earliest 3rd Date.
Expected Output
ID 1st_Date 2nd_Date 3rd_Date
1 1-1-2016 1-1-2016 1-1-2016
2 5-1-2016 NA NA
3 6-1-2016 11-1-2016 NA
4 7-1-2016 NA NA
5 9-1-2016 19-1-2016 NA
6 11-04-2016 11-04-2016 04-08-2016
Here is an attempt using dplyr and tidyr. The first thing to do is to convert your Date with as.Date and group_by the IDs. We next create a few new variables. The first one, new, checks which dates are consecutive. Date is then updated to NA for those consecutive dates. However, if not all the dates are consecutive, we filter out the ones that were converted to NA. We then fill (replace each NA with the latest non-NA date for each ID), remove the unwanted columns and spread.
library(dplyr)
library(tidyr)
df %>%
mutate(Date = as.Date(Date, format = '%d-%m-%Y')) %>%
group_by(ID) %>%
mutate(new = cumsum(c(1, diff.difftime(Date, units = 'days'))),
Date = replace(Date, c(0, diff(new)) == 1, NA),
new1 = sum(is.na(Date)),
new2 = seq(n())) %>%
filter(!is.na(Date)|new1 != 1) %>%
fill(Date) %>%
select(-c(new, new1)) %>%
spread(new2, Date) %>%
select(ID:`3`)
# ID `1` `2` `3`
#* <int> <date> <date> <date>
#1 1 2016-01-01 2016-01-01 2016-01-01
#2 2 2016-01-05 <NA> <NA>
#3 3 2016-01-06 2016-01-11 <NA>
#4 4 2016-01-07 <NA> <NA>
#5 5 2016-01-09 2016-01-09 2016-01-09
With your updated data set, it gives:
# ID `1` `2` `3`
#* <int> <date> <date> <date>
#1 1 2016-01-01 2016-01-01 2016-01-01
#2 2 2016-01-05 <NA> <NA>
#3 3 2016-01-06 2016-01-11 <NA>
#4 4 2016-01-07 <NA> <NA>
#5 5 2016-01-09 2016-01-19 <NA>
DATA USED
dput(df)
structure(list(ID = c(1L, 1L, 1L, 2L, 3L, 3L, 3L, 4L, 5L, 5L,
5L), Date = structure(c(1L, 5L, 7L, 8L, 9L, 2L, 3L, 10L, 11L,
4L, 6L), .Label = c("1-1-2016", "11-1-2016", "12-1-2016", "19-1-2016",
"2-1-2016", "20-1-2016", "3-1-2016", "5-1-2016", "6-1-2016",
"7-1-2016", "9-1-2016"), class = "factor")), .Names = c("ID",
"Date"), class = "data.frame", row.names = c(NA, -11L))
Use reshape. The code below assumes z is your data frame, where date is a Date variable sorted in increasing order.
# a "set" variable represents a set of consecutive dates
z$set <- unsplit(tapply(z$date, z$ID, function(x) cumsum(diff(c(x[1], x)) > 1)), z$ID)
# "first.date" represents the first date in the set (of consecutive dates)
z$first.date <- unsplit(lapply(split(z$date, z[, c("ID", "set")]), min), z[, c("ID", "set")])
# "occurence" is a consecutive occurence #
z$occurrence <- unsplit(lapply(split(seq(nrow(z)), z$ID), seq_along), z$ID)
reshape(z[, c("ID", "first.date", "occurrence")], direction = "wide",
idvar = "ID", v.names = "first.date", timevar = "occurrence")
The result:
ID first.date.1 first.date.2 first.date.3
1 1 2016-01-01 2016-01-01 2016-01-01
4 2 2016-01-05 <NA> <NA>
5 3 2016-01-06 2016-01-11 2016-01-11
8 4 2016-01-07 <NA> <NA>
9 5 2016-01-09 2016-01-09 2016-01-09
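For intuition, the intermediate set variable simply numbers the runs of consecutive dates within each ID; a minimal sketch for ID 3, with dates from the question:
x <- as.Date(c("2016-01-06", "2016-01-11", "2016-01-12"))

# 0 while dates stay consecutive, incremented when a gap of more than one day appears
cumsum(diff(c(x[1], x)) > 1)
# [1] 0 1 1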