I have two data frames (DF1 and DF2):
(1) DF1 contains individual-level information, i.e. on 10,000 individuals nested in 30 units across 11 years (2000-2011). It contains four variables:
"individual" (numeric id for each individual; ranging from 1-10,000)
"unit" (numeric id for each unit; ranging from 1-30)
"date1" (a date in date format, i.e. 2000-01-01, etc; ranging from 2000-01-01 to 2010-12-31)
"date2" ("Date1" + 1 year)
(2) DF2 contains unit-level information, i.e. on the same 30 units as in DF1 across the same time period (2000-2011), and further contains a numeric variable ("x"):
"unit" (numeric id for each unit; ranging from 1-30)
"date" (a date in date format, i.e. 2000-01-01, etc; ranging from 2000-01-01 to 2011-12-31)
"x" (a numeric variable, ranging from 0 to 200)
I would like to create a new variable ("newvar") that gives me, for each "individual" per "unit", the sum of "x" (DF2) counting from "date1" (DF1) to "date2" (DF1). This means that I would like to add this new variable to DF1.
For instance, if "individual"=1 in "unit"=1 has "date1"=2000-01-01 and "date2"=2001-01-01, and in DF2 "unit"=1 has three observations in the time period "date1" to "date2" (i.e. 2000-01-01 to 2001-01-01) with "x"=1, "x"=2 and "x"=3, then I would like to add a new variable that gives "newvar"=6 for "individual"=1 in "unit"=1.
I assume that I need to use a for loop in R and have been using the following code:
for(i in length(DF1)){
DF1$newvar[i] <- sum(DF2$x[which(DF2$date == DF1$date1[i] &
DF2$date == DF1$date2[i] &
DF2$unit == DF1$unit[i]),])
}
but I get the error message:
Error in DF2$x[which(DF2$date == : incorrect number of dimensions
Any ideas of how to create this variable would be tremendously appreciated!
Here is a small example, as well as the expected output (shown for unit 1 only, for the sake of simplicity):
Assume DF1 looks as follows:
individual unit date1 date2
1 1 2000-01-01 2001-01-01
2 1 2000-02-02 2001-02-02
3 1 2000-03-03 2001-03-03
4 1 2000-04-04 2001-04-04
5 1 2000-12-31 2001-12-31
(...)
996 1 2010-01-01 2011-01-01
997 1 2010-02-15 2011-02-15
998 1 2010-03-05 2011-03-05
999 1 2010-04-10 2011-04-10
1000 1 2010-12-27 2011-12-27
1001 2 2000-01-01 2001-01-01
1002 2 2000-02-02 2001-02-02
1003 2 2000-03-03 2001-03-03
1004 2 2000-04-04 2001-04-04
1005 2 2000-12-31 2001-12-31
(...)
1996 2 2010-01-01 2011-01-01
1997 2 2010-02-15 2011-02-15
1998 2 2010-03-05 2011-03-05
1999 2 2010-04-10 2011-04-10
2000 2 2010-12-27 2011-12-27
(...)
3000 34 2000-02-02 2001-02-02
3001 34 2000-05-05 2001-05-05
3002 34 2000-06-06 2001-06-06
3003 34 2000-07-07 2001-07-07
3004 34 2000-11-11 2001-11-11
(...)
9996 34 2010-02-06 2011-02-06
9997 34 2010-05-05 2011-05-05
9998 34 2010-09-09 2011-09-09
9999 34 2010-09-25 2011-09-25
10000 34 2010-10-15 2011-10-15
Assume DF2 looks as follows:
unit date x
1 2000-01-01 1
1 2000-05-01 2
1 2000-12-01 3
1 2001-01-02 10
1 2001-07-05 20
1 2001-12-31 30
(...)
2 2010-05-05 1
2 2010-07-01 1
2 2010-08-09 1
3 (...)
This is what I would like DF1 to look like after running the code:
individual unit date1 date2 newvar
1 1 2000-01-01 2001-01-01 6
2 1 2000-02-02 2001-02-02 16
3 1 2000-03-03 2001-03-03 15
4 1 2000-04-04 2001-04-04 15
5 1 2000-12-31 2001-12-31 60
(...)
996 1 2010-01-01 2011-01-01 3
997 1 2010-02-15 2011-02-15 2
998 1 2010-03-05 2011-03-05 2
999 1 2010-04-10 2011-04-10 2
1000 1 2010-12-27 2011-12-27 0
(...)
However, I cannot simply aggregate: imagine that in DF1 each "unit" has several hundred individuals for each year between 2000 and 2011, and DF2 has many observations for each unit across the years 2000-2011.
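For reference, here is a minimal, runnable pair of data frames that is consistent with the printed output of the answers below (illustrative values only):
# two individuals in unit 1, with one-year windows
DF1 <- data.frame(individual = 1:2,
                  unit = 1,
                  date1 = as.Date(c("2000-01-01", "2001-01-02")),
                  date2 = as.Date(c("2001-01-01", "2002-01-02")))
# unit-level observations of x; three fall into each individual's window
DF2 <- data.frame(unit = 1,
                  date = as.Date(c("2000-01-01", "2000-05-01", "2000-12-01",
                                   "2001-01-02", "2001-07-05", "2001-12-31")),
                  x = c(1, 2, 3, 10, 20, 30))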
We can use data.table
library(data.table)
setDT(DF1)
setDT(DF2)
DF1[DF2[, .(newvar = sum(x)), .(unit, individual = cumsum(date %in% DF1$date1))],
newvar := newvar, on = .(individual, unit)]
DF1
# individual unit date1 date2 newvar
#1: 1 1 2000-01-01 2001-01-01 6
#2: 2 1 2001-01-02 2002-01-02 60
Or we can use a non-equi join
DF1[DF2[DF1, sum(x), on = .(unit, date >= date1, date <= date2),
by = .EACHI], newvar := V1, on = .(unit, date1=date)]
DF1
# individual unit date1 date2 newvar
#1: 1 1 2000-01-01 2001-01-01 6
#2: 2 1 2001-01-02 2002-01-02 60
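To unpack the non-equi join: DF2[DF1, sum(x), on = .(unit, date >= date1, date <= date2), by = .EACHI] computes sum(x) separately for each row of DF1, over exactly those rows of DF2 whose date falls into that row's [date1, date2] window (by = .EACHI evaluates sum(x) once per row of i, here DF1). The outer update join then writes those sums back into DF1, matching on unit and date1.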
You were almost there; I just modified your for loop slightly, and also made sure that the date variables are treated as dates:
DF1$date1 = as.Date(DF1$date1,"%Y-%m-%d")
DF1$date2 = as.Date(DF1$date2,"%Y-%m-%d")
DF2$date = as.Date(DF2$date,"%Y-%m-%d")
for(i in 1:nrow(DF1)){
  DF1$newvar[i] <- sum(DF2$x[which(DF2$unit == DF1$unit[i] &
                                   DF2$date >= DF1$date1[i] &
                                   DF2$date <= DF1$date2[i])])
}
The problem was that you were asking DF2$date to be simultaneously equal to DF1$date1 and DF1$date2.
Also, length(DF1) gives you the number of columns; to get the number of rows you can use either nrow(DF1) or dim(DF1)[1].
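A quick illustration of the distinction (assuming the four-column DF1 from the question):
length(DF1)  # 4 -- for a data.frame, length() counts columns
ncol(DF1)    # 4 -- the same, but explicit
nrow(DF1)    # number of rows, i.e. one per individual
dim(DF1)[1]  # identical to nrow(DF1)
Also note that this loop rescans DF2 once per row of DF1, so on 10,000 individuals the data.table non-equi join shown above should be considerably faster.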
I have data that looks similar to the following except with hundreds of IDs and thousands of observations:
ID date measles
1 2008-09-12 1
1 2008-10-25 NA
1 2009-01-12 1
1 2009-03-12 NA
1 2009-05-12 1
2 2010-05-17 NA
2 2010-06-12 NA
2 2010-07-02 1
2 2010-08-13 NA
I want to create a variable that stores the previous date for each ID, like the following:
ID date measles previous_date
1 2008-09-12 1 NA
1 2008-10-25 NA 2008-09-12
1 2009-01-12 1 2008-10-25
1 2009-03-12 NA 2009-01-12
1 2009-05-12 1 2009-03-12
2 2010-05-17 NA NA
2 2010-06-12 NA 2010-05-17
2 2010-07-02 1 2010-06-12
2 2010-08-13 NA 2010-07-02
This should be an extremely easy task, but I have been unsuccessful at getting a lag variable to work properly. I have tried a few methods, such as the following:
dt[, previous_date:=c(NA, current_date[-.N]), by=c("ID")]
dt[,previous_date:=current_date-shift(current_date,1,type="lag"),by=ID]
The code samples above either produce sporadic numbers in the previous_date variable or produce all NAs, and I'm not sure why. Is it because I'm using a date variable as opposed to an integer?
Is there a better way to accomplish this task that would work for a date variable?
We can just use shift on the 'date' column, grouped by 'ID'. By default the type is "lag":
library(data.table)
dt[, previous_date := shift(date), ID]
dt
# ID date measles previous_date
#1: 1 2008-09-12 1 <NA>
#2: 1 2008-10-25 NA 2008-09-12
#3: 1 2009-01-12 1 2008-10-25
#4: 1 2009-03-12 NA 2009-01-12
#5: 1 2009-05-12 1 2009-03-12
#6: 2 2010-05-17 NA <NA>
#7: 2 2010-06-12 NA 2010-05-17
#8: 2 2010-07-02 1 2010-06-12
#9: 2 2010-08-13 NA 2010-07-02
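For comparison, here is a dplyr sketch of the same grouped lag (assuming dt is a data frame or tibble):
library(dplyr)
dt %>%
  group_by(ID) %>%
  mutate(previous_date = lag(date)) %>%  # lag() shifts within each group
  ungroup()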
I have this situation:
ID date Weight
1 2014-12-02 23
1 2014-10-02 25
2 2014-11-03 27
2 2014-09-03 45
3 2014-07-11 56
3 NA 34
4 2014-10-05 25
4 2014-08-09 14
5 NA NA
5 NA NA
And I would like to split the dataset into two, like this:
1 - IDs with complete dates:
ID date Weight
1 2014-12-02 23
1 2014-10-02 25
2 2014-11-03 27
2 2014-09-03 45
4 2014-10-05 25
4 2014-08-09 14
2 - IDs with at least one missing date:
ID date Weight
3 2014-07-11 56
3 NA 34
5 NA NA
5 NA NA
I tried this for the second dataset:
dt <- dt[order(dt$ID, dt$date), ]
dt.2=dt[duplicated(dt$ID), ]
but it didn't work.
Get the IDs for which date is NA and then subset based on that:
NA_ids <- unique(df$ID[is.na(df$date)])
subset(df, !ID %in% NA_ids)
# ID date Weight
#1 1 2014-12-02 23
#2 1 2014-10-02 25
#3 2 2014-11-03 27
#4 2 2014-09-03 45
#7 4 2014-10-05 25
#8 4 2014-08-09 14
subset(df, ID %in% NA_ids)
# ID date Weight
#5 3 2014-07-11 56
#6 3 <NA> 34
#9 5 <NA> NA
#10 5 <NA> NA
Using dplyr, we can create a new column which holds TRUE/FALSE for each ID based on the presence of NA, and then use group_split to split into a list of two:
library(dplyr)
df %>%
group_by(ID) %>%
mutate(NA_ID = any(is.na(date))) %>%
ungroup %>%
group_split(NA_ID, keep = FALSE)
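Note that in dplyr 1.0.0 and later the argument is spelled .keep rather than keep, i.e. group_split(NA_ID, .keep = FALSE).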
The above dplyr logic can also be implemented in base R using ave and split:
# flag IDs that have at least one missing date
df$NA_ID <- with(df, ave(is.na(date), ID, FUN = any))
# split on the flag, dropping the helper column NA_ID (column 4)
split(df[-4], df$NA_ID)
Using the data below, I want to create a new unique customer id based on each customer's contact dates.
Rule: A customer keeps the same id as long as each contact date falls within two days of their previous one; whenever more than two days have passed since the previous contact, assign a new unique id to this customer.
I couldn't go any further than calculating date differences.
The original dataset I work is bigger; therefore, I prefer a data.table solution if possible.
library(data.table)
threshold <- 2
dt <- structure(list(customer_id = c('10','20','20','20','20','20','30','30','30','30','30','40','50','50'),
contact_date = as.Date(c("2019-01-05","2019-01-01","2019-01-01","2019-01-02",
"2019-01-08","2019-01-09","2019-02-02","2019-02-05",
"2019-02-05","2019-02-09","2019-02-12","2019-02-01",
"2019-02-01","2019-02-05")),
desired_output = c(1,2,2,2,3,3,4,5,5,6,7,8,9,10)),
class = "data.frame",
row.names = 1:14)
setDT(dt)
setorder(dt, customer_id, contact_date)
dt[, date_diff_in_days:=contact_date - shift(contact_date, type = c("lag")), by=customer_id]
dt[, date_diff_in_days:=as.numeric(date_diff_in_days)]
dt
customer_id contact_date desired_output date_diff_in_days
1: 10 2019-01-05 1 NA
2: 20 2019-01-01 2 NA
3: 20 2019-01-01 2 0
4: 20 2019-01-02 2 1
5: 20 2019-01-08 3 6
6: 20 2019-01-09 3 1
7: 30 2019-02-02 4 NA
8: 30 2019-02-05 5 3
9: 30 2019-02-05 5 0
10: 30 2019-02-09 6 4
11: 30 2019-02-12 7 3
12: 40 2019-02-01 8 NA
13: 50 2019-02-01 9 NA
14: 50 2019-02-05 10 4
When creating a new ID, if you set up the by= vectors correctly to capture the rule, the auto-counter .GRP can be used:
thresh <- 2
dt[, g := .GRP, by=.(
customer_id,
cumsum(contact_date - shift(contact_date, fill=first(contact_date)) > thresh)
)]
dt[, any(g != desired_output)]
# [1] FALSE
I think the code above is correct since it works on the example, but you might want to check it on your actual data (comparing against results from, e.g., Gregor's approach) to be sure.
We use cumsum to increment whenever date_diff_in_days is NA or when the threshold is exceeded.
dt[, result := cumsum(is.na(date_diff_in_days) | date_diff_in_days > threshold)]
# customer_id contact_date desired_output date_diff_in_days result
# 1: 10 2019-01-05 1 NA 1
# 2: 20 2019-01-01 2 NA 2
# 3: 20 2019-01-01 2 0 2
# 4: 20 2019-01-02 2 1 2
# 5: 20 2019-01-08 3 6 3
# 6: 20 2019-01-09 3 1 3
# 7: 30 2019-02-02 4 NA 4
# 8: 30 2019-02-05 5 3 5
# 9: 30 2019-02-05 5 0 5
# 10: 30 2019-02-09 6 4 6
# 11: 30 2019-02-12 7 3 7
# 12: 40 2019-02-01 8 NA 8
# 13: 50 2019-02-01 9 NA 9
# 14: 50 2019-02-05 10 4 10
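Note that both answers rely on the rows being sorted by customer_id and contact_date (the setorder() call from the question): cumsum and shift run in row order, so an unsorted table would interleave customers and produce wrong groups.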
I have a list my.list of 100+ time series data frames, with daily observations for each product in its own data frame. Some values are missing without any record of the date. I would like to update each data frame in this list so that it shows every date, with NA where there is no record for that date.
Dates:
start = as.Date('2016/04/08')
full <- seq(start, by='1 days', length=10)
Sample Time Series Data:
d1 <- data.frame(Date = seq(start, by ='2 days',length=5), Sales = c(5,10,15,20,25))
d2 <- data.frame(Date = seq(start, by= '1 day', length=10),Sales = c(1, 2, 3,4,5,6,7,8,9,10))
my.list <- list(d1, d2)
I want to merge the full set of date values into each data frame, and if no match exists then Sales is NA:
my.list
[[d1]]
Date Sales
2016-04-08 5
2016-04-09 NA
2016-04-10 10
2016-04-11 NA
2016-04-12 15
2016-04-13 NA
2016-04-14 20
2016-04-15 NA
2016-04-16 25
2016-04-17 NA
[[d2]]
Date Sales
2016-04-08 1
2016-04-09 2
2016-04-10 3
2016-04-11 4
2016-04-12 5
2016-04-13 6
2016-04-14 7
2016-04-15 8
2016-04-16 9
2016-04-17 10
If I understand correctly, the OP wants to update each of the data frames in my.list to contain one row for each date given in the vector of dates full.
Base R
In base R, merge() can be used as already mentioned by Hack-R. However, the answer below expands this to work on all data frames in the list:
# create a data frame from the vector of full dates
full.df <- data.frame(Date = full)
# merge each data frame in the list with the full dates, keeping all dates
lapply(my.list, merge, y = full.df, all.y = TRUE)
[[1]]
Date Sales
1 2016-04-08 5
2 2016-04-09 NA
3 2016-04-10 10
4 2016-04-11 NA
5 2016-04-12 15
6 2016-04-13 NA
7 2016-04-14 20
8 2016-04-15 NA
9 2016-04-16 25
10 2016-04-17 NA
[[2]]
Date Sales
1 2016-04-08 1
2 2016-04-09 2
3 2016-04-10 3
4 2016-04-11 4
5 2016-04-12 5
6 2016-04-13 6
7 2016-04-14 7
8 2016-04-15 8
9 2016-04-16 9
10 2016-04-17 10
Caveat
The answer assumes that full covers the overall range of Date across all data frames in the list.
In order to avoid any mishaps, the overall range of Date can be retrieved from the available data in my.list:
overall_date_range <- Reduce(range, lapply(my.list, function(x) range(x$Date)))
full <- seq(overall_date_range[1], overall_date_range[2], by = "1 days")
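Alternatively, the same fill-in can be sketched with tidyr's complete(), which expands each data frame to the full set of dates and fills Sales with NA (assuming the tidyr package is available and full is defined as above):
library(tidyr)
# add a row for every date in full that is missing from d
lapply(my.list, function(d) complete(d, Date = full))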
Using rbindlist()
Alternatively, the data frames in the list, which are identical in structure, can be stored in one large data frame. An additional column indicates which product each row belongs to. The homogeneous structure simplifies subsequent operations.
The code below uses the rbindlist() function from the data.table package to create a large data.table. CJ() (cross join) creates all combinations of dates and product ids, which is then merged / joined to fill in the missing dates:
library(data.table)
all_products <- rbindlist(my.list, idcol = "product.id")[
CJ(product.id = unique(product.id), Date = seq(min(Date), max(Date), by = "1 day")),
on = .(Date, product.id)]
all_products
product.id Date Sales
1: 1 2016-04-08 5
2: 1 2016-04-09 NA
3: 1 2016-04-10 10
4: 1 2016-04-11 NA
5: 1 2016-04-12 15
6: 1 2016-04-13 NA
7: 1 2016-04-14 20
8: 1 2016-04-15 NA
9: 1 2016-04-16 25
10: 1 2016-04-17 NA
11: 2 2016-04-08 1
12: 2 2016-04-09 2
13: 2 2016-04-10 3
14: 2 2016-04-11 4
15: 2 2016-04-12 5
16: 2 2016-04-13 6
17: 2 2016-04-14 7
18: 2 2016-04-15 8
19: 2 2016-04-16 9
20: 2 2016-04-17 10
Subsequent operations can be grouped by product.id, e.g., to determine the number of valid sales data for each product:
all_products[!is.na(Sales), .(valid.sales.data = .N), by = product.id]
product.id valid.sales.data
1: 1 5
2: 2 10
Or the total sales per product:
all_products[, .(total.sales = sum(Sales, na.rm = TRUE)), by = product.id]
product.id total.sales
1: 1 75
2: 2 55
If required, the result can be converted back to a list with
split(all_products, by = "product.id")
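This returns a named list (the names are taken from the values of product.id, here "1" and "2"), with one data.table per product.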
I thought this would be a very easy question, but I am a complete beginner in R.
I have a data.table with lots of columns, two of which could be set as the key. I want to reshape the table by that key.
For example, take the simple data below. In this case the key is ID and Act, and we get a total of 4 groups.
ID ValueDate Act Volume
1 2015-01-01 EUR 21
1 2015-02-01 EUR 22
1 2015-01-01 MAD 12
1 2015-02-01 MAD 11
2 2015-01-01 EUR 5
2 2015-02-01 EUR 7
3 2015-01-01 EUR 4
3 2015-02-01 EUR 2
3 2015-03-01 EUR 6
Here is a code to generate test data:
dd <- data.table(ID = c(1,1,1,1,2,2,3,3,3),
ValueDate = c("2015-01-01", "2015-02-01", "2015-01-01","2015-02-01", "2015-01-01","2015-02-01","2015-01-01","2015-02-01","2015-03-01"),
Act = c("EUR","EUR","MAD","MAD","EUR","EUR","EUR","EUR","EUR"),
Volume=c(21,22,12,11,5,7,4,2,6))
After the change, each column should represent a specific group defined by the key (ID and Act).
Below is the result:
ValueDate ID1_EUR ID1_MAD ID2_EUR ID3_EUR
2015-01-01 21 12 5 4
2015-02-01 22 11 7 2
2015-03-01 NA NA NA 6
Thanks a lot!
What you are trying to do is not recreating the data.table, but reshaping it from a long format to a wide format. You can use dcast for this:
dcast(dd, ValueDate ~ ID + Act, value.var = "Volume")
which gives:
ValueDate 1_EUR 1_MAD 2_EUR 3_EUR
1: 2015-01-01 21 12 5 4
2: 2015-02-01 22 11 7 2
3: 2015-03-01 NA NA NA 6
If you want the numbers in the resulting columns to be preceded by ID, you can use:
dcast(dd, ValueDate ~ paste0("ID",ID) + Act, value.var = "Volume")
which gives:
ValueDate ID1_EUR ID1_MAD ID2_EUR ID3_EUR
1: 2015-01-01 21 12 5 4
2: 2015-02-01 22 11 7 2
3: 2015-03-01 NA NA NA 6
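If you later need to go back from this wide layout to the original long format, data.table's melt() is the inverse of dcast() (a sketch; the combined ID/Act labels would need to be split apart again afterwards):
wide <- dcast(dd, ValueDate ~ paste0("ID", ID) + Act, value.var = "Volume")
# one row per (ValueDate, ID_Act) pair; na.rm = TRUE drops the filled-in NAs
melt(wide, id.vars = "ValueDate",
     variable.name = "ID_Act", value.name = "Volume", na.rm = TRUE)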