Adding row for missing value in data.table - r

My question is somewhat related to Fastest way to add rows for missing values in a data.frame?, but a bit tougher, I think, and I can't figure out how to adapt that solution to my problem.
Here is what my data.table looks like:
ida idb value date
1: A 2 26600 2004-12-31
2: A 3 19600 2005-03-31
3: B 3 18200 2005-06-30
4: B 4 1230 2005-09-30
5: C 2 8700 2005-12-31
The difference is that every 'ida' has its own set of dates: every date for a given 'ida' appears in at least one row, but not necessarily for every 'idb'. I want to insert every missing ('ida', 'idb') pair with the corresponding date and 0 as the value.
Moreover, there is no periodicity to the dates.
How would you do this?
Desired output:
ida idb value date
1: A 2 26600 2004-12-31
2: A 2 0 2005-03-31
3: A 3 19600 2005-03-31
4: A 3 0 2004-12-31
5: B 3 18200 2005-06-30
6: B 3 0 2005-09-30
7: B 4 1230 2005-09-30
8: B 4 0 2005-06-30
9: C 2 8700 2005-12-31
The order doesn't matter; every missing date is filled with a value of 0.

You just do the same thing as in your linked question, but by each ida:
setkey(dt, idb, date)
dt[, .SD[CJ(unique(idb), unique(date))], by = ida][is.na(value), value := 0][]
# ida idb value date
#1: A 2 26600 2004-12-31
#2: A 2 0 2005-03-31
#3: A 3 0 2004-12-31
#4: A 3 19600 2005-03-31
#5: C 2 8700 2005-12-31
#6: B 3 18200 2005-06-30
#7: B 3 0 2005-09-30
#8: B 4 0 2005-06-30
#9: B 4 1230 2005-09-30
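If you prefer not to set a key, a variant of the same idea (a sketch, not part of the original answer) builds the full (idb, date) grid within each ida using CJ and joins the observed values back onto it:
library(data.table)
# reconstruct the example data from the question
dt <- data.table(ida = c("A", "A", "B", "B", "C"),
                 idb = c(2, 3, 3, 4, 2),
                 value = c(26600, 19600, 18200, 1230, 8700),
                 date = as.Date(c("2004-12-31", "2005-03-31", "2005-06-30", "2005-09-30", "2005-12-31")))
# full (idb, date) grid per ida, then left-join the observed values onto it;
# rows the join cannot match get NA, which we replace with 0
grid <- dt[, CJ(idb = unique(idb), date = unique(date)), by = ida]
res <- dt[grid, on = .(ida, idb, date)]
res[is.na(value), value := 0][]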

Related

How to calculate moving average from previous rows in data.table?

I have data like this:
library(data.table)
set.seed(1)
df <- data.table(store = sample(LETTERS[1:2], size = 10, replace = TRUE),
                 week = sample(1:10),
                 demand = round(sample(rnorm(10, mean = 20, sd = 2)), 2))
random_na_index <- sample(1:nrow(df), 3)
df[random_na_index, demand := NA]
setorder(df, store, week)
store week demand
1: A 3 19.18
2: A 5 NA
3: A 6 NA
4: A 8 19.55
5: A 9 20.50
6: A 10 NA
7: B 1 20.75
8: B 2 17.70
9: B 4 19.40
10: B 7 17.52
I need to calculate a moving average using the 2 weeks before the current week. I couldn't do it because zoo's rollmean and data.table's frollmean also use the current row when calculating the moving average. I also don't know how to handle NAs while applying a rolling function.
The desired output should look like this:
store week demand desired_column
1: A 3 19.18 NA
2: A 5 NA 19.180
3: A 6 NA 19.180
4: A 8 19.55 NA
5: A 9 20.50 19.550
6: A 10 NA 20.025
7: B 1 20.75 NA
8: B 2 17.70 20.750
9: B 4 19.40 19.225
10: B 7 17.52 18.550
You could shift the values before applying frollmean with the na.rm = TRUE argument:
df[order(store, week), desired := frollmean(shift(demand), n = 2, na.rm = TRUE), by = .(store)][]
store week demand desired
<char> <int> <num> <num>
1: A 3 19.18 NA
2: A 5 NA 19.180
3: A 6 NA 19.180
4: A 8 19.55 NaN
5: A 9 20.50 19.550
6: A 10 NA 20.025
7: B 1 20.75 NA
8: B 2 17.70 20.750
9: B 4 19.40 19.225
10: B 7 17.52 18.550
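One small difference from the desired output: when both shifted values are NA (row 4 above), frollmean with na.rm = TRUE returns NaN rather than NA. If that matters, a short clean-up step (a sketch) converts it afterwards:
# frollmean(..., na.rm = TRUE) yields NaN when the whole window is NA;
# replace those with NA_real_ to match the desired output exactly
df[is.nan(desired), desired := NA_real_]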

How to add rows and extrapolate the data by multiple variables?

I'm trying to add the missing rows for "day" and extrapolate the data in "value". In my data each subject ("id") has 2 periods (period 1 and period 2) and values for consecutive days.
An example of my data looks like this:
df <- data.frame(
  id = c(1,1,1,1, 1,1,1,1, 2,2,2,2, 2,2,2,2, 3,3,3,3, 3,3,3,3),
  period = c(1,1,1,1, 2,2,2,2, 1,1,1,1, 2,2,2,2, 1,1,1,1, 2,2,2,2),
  day = c(1,2,4,5, 1,3,4,5, 2,3,4,5, 1,2,3,5, 2,3,4,5, 1,2,3,4),
  value = c(10,12,15,16, 11,14,15,17, 13,14,15,16, 15,16,18,20, 16,17,19,29, 14,16,18,20))
For each id and period I am missing data for days 3, 2, 1, 4, 1 and 5, respectively. I want to expand the data to a fixed number of days (7 in the example below) and extrapolate the value column (e.g. with linear regression).
My final df should be something like this:
df2 <- data.frame(
  id = c(1,1,1,1,1,1,1, 1,1,1,1,1,1,1, 2,2,2,2,2,2,2, 2,2,2,2,2,2,2, 3,3,3,3,3,3,3, 3,3,3,3,3,3,3),
  period = c(1,1,1,1,1,1,1, 2,2,2,2,2,2,2, 1,1,1,1,1,1,1, 2,2,2,2,2,2,2, 1,1,1,1,1,1,1, 2,2,2,2,2,2,2),
  day = c(1,2,3,4,5,6,7, 1,2,3,4,5,6,7, 1,2,3,4,5,6,7, 1,2,3,4,5,6,7, 1,2,3,4,5,6,7, 1,2,3,4,5,6,7),
  value = c(10,12,13,15,16,17,18, 11,12,14,15,17,18,19, 12,13,14,15,16,18,22, 15,16,18,19,20,22,23, 15,16,17,19,29,39,49, 14,16,18,20,22,24,26))
The most similar example I found doesn't extrapolate by two variables (id and period in my case); it extrapolates only by year. I tried to adapt the code, but with no success.
Another example extrapolates the data by multiple ids but doesn't add rows for missing data.
I couldn't combine the two approaches with my limited experience in R. Any suggestions?
Thanks in advance...
We can use complete from tidyr to add the missing days, then interpolate the missing values with forecast::na.interp:
library(dplyr)
library(tidyr)
library(forecast)
df %>%
  group_by(id, period) %>%
  complete(day = 1:7) %>%
  mutate(value = as.numeric(na.interp(value)))
@akrun's answer is good as long as you don't mind using linear interpolation. However, if you do want to use a linear model, you could try this data.table approach.
library(data.table)
model <- lm(value ~ day + period + id, data = df)
dt <- as.data.table(df)[, .SD[, .(day = 1:7, value = value[match(1:7, day)])], by = .(id, period)]
dt[is.na(value), value := predict(model, .SD)]
dt
id period day value
1: 1 1 1 10.00000
2: 1 1 2 12.00000
3: 1 1 3 12.86714
4: 1 1 4 15.00000
5: 1 1 5 16.00000
6: 1 1 6 18.13725
7: 1 1 7 19.89396
8: 1 2 1 11.00000
9: 1 2 2 12.15545
10: 1 2 3 14.00000
11: 1 2 4 15.00000
12: 1 2 5 17.00000
13: 1 2 6 19.18227
14: 1 2 7 20.93898
15: 2 1 1 11.90102
16: 2 1 2 13.00000
17: 2 1 3 14.00000
18: 2 1 4 15.00000
19: 2 1 5 16.00000
20: 2 1 6 20.68455
21: 2 1 7 22.44125
22: 2 2 1 15.00000
23: 2 2 2 16.00000
24: 2 2 3 18.00000
25: 2 2 4 18.21616
26: 2 2 5 20.00000
27: 2 2 6 21.72957
28: 2 2 7 23.48627
29: 3 1 1 14.44831
30: 3 1 2 16.00000
31: 3 1 3 17.00000
32: 3 1 4 19.00000
33: 3 1 5 29.00000
34: 3 1 6 23.23184
35: 3 1 7 24.98855
36: 3 2 1 14.00000
37: 3 2 2 16.00000
38: 3 2 3 18.00000
39: 3 2 4 20.00000
40: 3 2 5 22.52016
41: 3 2 6 24.27686
42: 3 2 7 26.03357
id period day value
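Note that the pooled model above borrows information across ids and periods. If you would rather fit a separate straight line within each id/period combination, a possible variant (a sketch, not one of the original answers) is:
library(data.table)
dt2 <- as.data.table(df)
# complete the day grid per (id, period), keeping the observed values
full <- dt2[, .(day = 1:7), by = .(id, period)]
full <- dt2[full, on = .(id, period, day)]
# fit a simple linear trend within each group and fill only the gaps
# with its predictions; observed values are left untouched
full[, value := {
  fit <- lm(value ~ day, data = .SD[!is.na(value)])
  fifelse(is.na(value), predict(fit, newdata = .SD), value)
}, by = .(id, period)]
full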

How to update a data.frame based on information from another data.frame

I have two tables: Display and Review. The Review table contains information on reviews of products in an online store. Each row records the date of a review as well as the cumulative number of reviews and the average rating for the product up to that date.
page_id<-c("1072659", "1072659" , "1072659","1072650","1072660","1072660")
review_id<-c("1761023","1761028","1762361","1918387","1761427","1863914")
date<-as.Date(c("2013-07-11","2013-08-12","2014-07-15","2014-09-10","2013-07-27","2014-08-12"),format = "%Y-%m-%d")
cumulative_No_reviews<-c(1,2,3,1,1,2)
average_rating<-c(5,3.5,4,3,5,5)
Review<-data.frame(page_id,review_id,date,cumulative_No_reviews,average_rating)
page_id review_id date cumulative_No_reviews average_rating
1072659 1761023 2013-07-11 1 5
1072659 1761028 2013-08-12 2 3.5
1072659 1762361 2014-07-15 3 4
1072650 1918387 2014-09-10 1 3
1072660 1761427 2013-07-27 1 5
1072660 1863914 2014-08-12 2 5
The Display table captures data on customers’ visits to product pages.
page_id<-c("1072659","1072659","1072659","1072650","1072650","1072660","1072660","1072660")
date<-as.Date(c("2013-07-10","2013-08-03","2015-02-11","2014-08-10","2014-09-09","2013-08-12","2014-09-12","2015-08-12"),format = "%Y-%m-%d")
Display<-data.frame(page_id,date)
page_id date
1072659 2013-07-10
1072659 2013-08-03
1072659 2015-02-11
1072650 2014-08-10
1072650 2014-09-09
1072660 2013-08-12
1072660 2014-09-12
1072660 2015-08-12
I’d like to add two columns to the Display table (call the result Display2) so that it reflects the latest review information up to the point of each visit, as follows:
page_id<-c("1072659","1072659","1072659","1072650","1072650","1072660","1072660","1072660")
date<-as.Date(c("2013-07-10","2013-08-03","2015-02-11","2014-08-10","2014-09-09","2013-08-12","2014-09-12","2015-08-12"),format = "%Y-%m-%d")
cumulative_No_reviews<-c(0,1,3,0,0,1,2,2)
average_rating<-c(NA,5,4,NA,NA,5,5,5)
Display2<-data.frame(page_id,date,cumulative_No_reviews,average_rating)
page_id date cumulative_No_reviews average_rating
1072659 2013-07-10 0 NA
1072659 2013-08-03 1 5
1072659 2015-02-11 3 4
1072650 2014-08-10 0 NA
1072650 2014-09-09 0 NA
1072660 2013-08-12 1 5
1072660 2014-09-12 2 5
1072660 2015-08-12 2 5
I would appreciate your help with this.
You can do this with a data.table non-equi join: join the Review table to the Display table on the condition that the page_ids match and the Review date is earlier than the Display date. For some rows of Display there will be multiple matching rows of Review, so with mult = 'last' we pick the last one; since Review is sorted by date, that is the one with the most recent review date.
library(data.table) # 1.12.6 for nafill (used below)
setDT(Display)
setDT(Review)
Display2 <- Review[Display, on = .(page_id, date < date), mult = 'last']
Display2
# page_id review_id date cumulative_No_reviews average_rating
# 1: 1072659 <NA> 2013-07-10 NA NA
# 2: 1072659 1761023 2013-08-03 1 5
# 3: 1072659 1762361 2015-02-11 3 4
# 4: 1072650 <NA> 2014-08-10 NA NA
# 5: 1072650 <NA> 2014-09-09 NA NA
# 6: 1072660 1761427 2013-08-12 1 5
# 7: 1072660 1863914 2014-09-12 2 5
# 8: 1072660 1863914 2015-08-12 2 5
This output almost matches what you show in the question; we just need to remove the review_id column and replace the NAs in the cumulative_No_reviews column with 0.
Display2[, review_id := NULL]
Display2[, cumulative_No_reviews := nafill(cumulative_No_reviews, fill = 0)][]
# page_id date cumulative_No_reviews average_rating
# 1: 1072659 2013-07-10 0 NA
# 2: 1072659 2013-08-03 1 5
# 3: 1072659 2015-02-11 3 4
# 4: 1072650 2014-08-10 0 NA
# 5: 1072650 2014-09-09 0 NA
# 6: 1072660 2013-08-12 1 5
# 7: 1072660 2014-09-12 2 5
# 8: 1072660 2015-08-12 2 5
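A rolling join is another idiomatic way to express "latest review as of each visit". One caveat: roll = TRUE matches a review dated on or before the visit (<=), whereas the non-equi join above is strictly before (<); for this particular data the two give the same result. A sketch:
library(data.table)
setDT(Review)
setDT(Display)
# for each Display row, roll the most recent Review with the same page_id
# and a review date on or before the visit date forward onto the visit
Display2b <- Review[Display, on = .(page_id, date), roll = TRUE]
Display2b[, review_id := NULL]
Display2b[, cumulative_No_reviews := nafill(cumulative_No_reviews, fill = 0)][]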

Remove rows after a certain date based on a condition in R

I've seen similar questions, but none of them applies this to specific rows of a data.table or data.frame; rather, they apply it to the whole table:
Subset a dataframe between 2 dates
How to select some rows with specific date from a data frame in R
I have a dataset with patients who were diagnosed with OA and those who were not:
dt <- data.table(ID = seq(1,10,1), OA = c(1,0,0,1,0,0,0,1,1,0),
oa.date = as.Date(c("01/01/2006", "01/01/2001", "01/01/2001", "02/03/2005","01/01/2001","01/01/2001","01/01/2001","05/06/2010", "01/01/2011", "01/01/2001"), "%d/%m/%Y"),
stop.date = as.Date(c("01/01/2006", "31/12/2007", "31/12/2008", "02/03/2005", "31/12/2011", "31/12/2011", "31/12/2011", "05/06/2010", "01/01/2011", "31/12/2011"), "%d/%m/%Y"))
dt$oa.date[dt$OA==0] <- NA
> dt
ID OA oa.date stop.date
1: 1 1 2006-01-01 2006-01-01
2: 2 0 <NA> 2007-12-31
3: 3 0 <NA> 2008-12-31
4: 4 1 2005-03-02 2005-03-02
5: 5 0 <NA> 2011-12-31
6: 6 0 <NA> 2011-12-31
7: 7 0 <NA> 2011-12-31
8: 8 1 2010-06-05 2010-06-05
9: 9 1 2011-01-01 2011-01-01
10: 10 0 <NA> 2011-12-31
What I want to do is delete those who were diagnosed with OA (OA==1) before start:
start <- as.Date("01/01/2009", "%d/%m/%Y")
So I want my final data to be:
> dt
ID OA oa.date stop.date
1: 2 0 <NA> 2007-12-31
2: 3 0 <NA> 2008-12-31
3: 5 0 <NA> 2011-12-31
4: 6 0 <NA> 2011-12-31
5: 7 0 <NA> 2011-12-31
6: 8 1 2010-06-05 2010-06-05
7: 9 1 2011-01-01 2011-01-01
8: 10 0 <NA> 2011-12-31
What I have tried:
dt[dt$OA==1] <- dt[!(oa.date < start)]
I've also tried a loop but to no effect.
Any help is much appreciated.
This should be straightforward:
> dt[!(OA & oa.date < start)]
# ID OA oa.date stop.date
#1: 2 0 <NA> 2007-12-31
#2: 3 0 <NA> 2008-12-31
#3: 5 0 <NA> 2011-12-31
#4: 6 0 <NA> 2011-12-31
#5: 7 0 <NA> 2011-12-31
#6: 8 1 2010-06-05 2010-06-05
#7: 9 1 2011-01-01 2011-01-01
#8: 10 0 <NA> 2011-12-31
The OA column is binary (1/0), so it is coerced to logical (TRUE/FALSE) in the i-expression.
You can try
dt <- dt[dt$OA == 0 | (dt$OA == 1 & !(dt$oa.date < start)), ]
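If, in your real data, an OA == 1 row could have a missing oa.date, a slightly more defensive variant of the accepted answer (a sketch) spells out the NA handling so such rows are kept rather than dropped:
# keep a row unless it is an OA case with a known diagnosis date before `start`
dt[!(OA == 1 & !is.na(oa.date) & oa.date < start)]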

R: How can I make a new variable with an order number (by date) for every level (for reshaping)?

I'm new to R and I have to deal with a large data set. I googled a lot, but I just can't find a way to do what I need (although it sounds like an easy thing to do).
What I want to do is reshape my data into wide form. To do that the way I want, I need a new variable giving the order number by date for every factor level (restarting at one for each new level).
Now, this is a small example of what I have:
ID<-c("A","A","A","B","B","C","D","D","D","D")
Date<-c("01-01-2014", "05-01-2014", "06-01-2014",
"01-01-2014", "12-01-2014", "25-01-2014",
"06-01-2014", "12-01-2014", "25-01-2014",
"26-01-2014")
Value<-c(2.5, 3.4, 2.5, 305.66, 300.00, 55.01,
205.32, 99.99, 210.25, 105.125)
mydata<-data.frame(ID, Date, Value)
mydata
ID Date Value
1 A 01-01-2014 2.500
2 A 05-01-2014 3.400
3 A 06-01-2014 2.500
4 B 01-01-2014 305.660
5 B 12-01-2014 300.000
6 C 25-01-2014 55.010
7 D 06-01-2014 205.320
8 D 12-01-2014 99.990
9 D 25-01-2014 210.250
10 D 26-01-2014 105.125
(The data set is sorted first by the ID factor, then by date within each factor level.)
And this is what I need: a new variable called "Order".
ID Date Value Order
1 A 01-01-2014 2.500 1
2 A 05-01-2014 3.400 2
3 A 06-01-2014 2.500 3
4 B 01-01-2014 305.660 1
5 B 12-01-2014 300.000 2
6 C 25-01-2014 55.010 1
7 D 06-01-2014 205.320 1
8 D 12-01-2014 99.990 2
9 D 25-01-2014 210.250 3
10 D 26-01-2014 105.125 4
The end goal is to reshape the data based on the "Order" variable, like this:
library(reshape)
goal <- reshape(mydata2,
                idvar = "ID",
                timevar = "Order",
                direction = "wide")
goal
ID Date.1 Value.1 Date.2 Value.2 Date.3 Value.3 Date.4 Value.4
1 A 01-01-2014 2.50 05-01-2014 3.40 06-01-2014 2.50 <NA> NA
4 B 01-01-2014 305.66 12-01-2014 300.00 <NA> NA <NA> NA
6 C 25-01-2014 55.01 <NA> NA <NA> NA <NA> NA
7 D 06-01-2014 205.32 12-01-2014 99.99 25-01-2014 210.25 26-01-2014 105.125
Or is there another way to reshape data like this without the "Order" variable?
This is precisely what the getanID function in my "splitstackshape" package is for:
> library(splitstackshape)
> getanID(mydata, "ID")
ID Date Value .id
1: A 01-01-2014 2.500 1
2: A 05-01-2014 3.400 2
3: A 06-01-2014 2.500 3
4: B 01-01-2014 305.660 1
5: B 12-01-2014 300.000 2
6: C 25-01-2014 55.010 1
7: D 06-01-2014 205.320 1
8: D 12-01-2014 99.990 2
9: D 25-01-2014 210.250 3
10: D 26-01-2014 105.125 4
Alternatively, you can explore the development version of "data.table", which reimplements dcast in a very flexible way that allows you to do this transformation without needing to generate a "time" variable yourself.
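For reference, a sketch of that data.table route (assuming a reasonably current data.table, where rowid() is available and dcast() accepts multiple value.var columns):
library(data.table)
setDT(mydata)
# per-ID counter, equivalent to the "Order" variable above
mydata[, Order := rowid(ID)]
# wide reshape keeping both Date and Value for each Order slot
dcast(mydata, ID ~ Order, value.var = c("Date", "Value"))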
