R: Find and add missing (/non existing) rows in time related data frame - r

I'm struggling with the following.
I have a (big) data frame with:
several columns for which the combination of values is a 'unique' combination, say ID
a time-related column
a measure-related column
I want to make sure that for each unique ID, for each time interval, a measure is available in the data frame. And if it is not, I want to add a 0 (or NA) measure for that time/ID.
To illustrate the problem, create the following test data frame:
test <- data.frame(
  YearWeek   = rep(c("2012-01", "2012-02"), each = 4),
  ProductID  = rep(c(1, 2), times = 4),
  CustomerID = rep(c("a", "b"), each = 2, times = 2),
  Quantity   = 5:12
)[1:7, ]
YearWeek ProductID CustomerID Quantity
1 2012-01 1 a 5
2 2012-01 2 a 6
3 2012-01 1 b 7
4 2012-01 2 b 8
5 2012-02 1 a 9
6 2012-02 2 a 10
7 2012-02 1 b 11
The 8th row is left out, on purpose. This way I simulate a 'missing value' (missing Quantity) for ID '2-b' (ProductID-CustomerID) for the time value "2012-02".
What I want to do is adjust the data.frame in such a way that for all time values (these are known, in this example just "2012-01" and "2012-02"), for all ID-combinations (these are not known upfront, but this is 'all unique ID combinations in the data frame', thus the unique set on the ID columns), a Quantity is available in the data frame.
This should result for this example (if we choose NA for the missing value, typically I want to have control on that):
YearWeek ProductID CustomerID Quantity
1 2012-01 1 a 5
2 2012-01 2 a 6
3 2012-01 1 b 7
4 2012-01 2 b 8
5 2012-02 1 a 9
6 2012-02 2 a 10
7 2012-02 1 b 11
8 2012-02 2 b NA
The ultimate goal is to create time series for these ID combinations, and I therefore want to have Quantities for all time values. I need to do different aggregations (on time), using different levels of IDs, from a big dataset.
I tried several things, for instance with melt and cast from the reshape package, but so far I didn't manage to do it. The next step would be a function with for-loops etc., but that is not really useful from a performance perspective.
Maybe there is an easier way to create time series instantly, given a data.frame like test. Does anybody have an idea on this one?
Thanks in advance!
Note that in the actual problem there are more than two 'ID columns'.
EDIT:
I should describe the problem further. There is a difference between the 'time' column and the 'ID' columns. The first (and great!) answer to the question, by joran, maybe didn't get a clear understanding of what I want (and the example I gave didn't make the difference clear). I said above:
for all ID-combinations (these are not known upfront, but this is 'all
unique ID combinations in the data frame', thus the unique set on the
ID columns)
So I do not want 'all possible ID combinations' but 'all ID combinations within the data'.
For each of those combinations I want a value for every unique time-value.
Let me make it clear by expanding test to test2, as follows
> test2 <- rbind(test, c("2012-02", 3, "a", 13))
> test2
YearWeek ProductID CustomerID Quantity
1 2012-01 1 a 5
2 2012-01 2 a 6
3 2012-01 1 b 7
4 2012-01 2 b 8
5 2012-02 1 a 9
6 2012-02 2 a 10
7 2012-02 1 b 11
8 2012-02 3 a 13
Which means I want in the resulting data frame no '3-b' ID combination, because this combination is not within test2. If I use the method of the first answer I will get the following:
> vals2 <- expand.grid(YearWeek   = unique(test2$YearWeek),
                       ProductID  = unique(test2$ProductID),
                       CustomerID = unique(test2$CustomerID))
> merge(vals2, test2, all = TRUE)
YearWeek ProductID CustomerID Quantity
1 2012-01 1 a 5
2 2012-01 1 b 7
3 2012-01 2 a 6
4 2012-01 2 b 8
5 2012-01 3 a <NA>
6 2012-01 3 b <NA>
7 2012-02 1 a 9
8 2012-02 1 b 11
9 2012-02 2 a 10
10 2012-02 2 b <NA>
11 2012-02 3 a 13
12 2012-02 3 b <NA>
So I don't want the rows 6 and 12 to be here.
To overcome this problem I found the solution below. Here I cross the unique values of the time column with the unique ID combinations. The difference from the approach above is thus the word 'combination': the unique set of ID combinations occurring in the data, not the unique values of every ID column separately.
> temp_merge <- merge(unique(test2["YearWeek"]),
                      unique(test2[c("ProductID", "CustomerID")]))
> merge(temp_merge, test2, all = TRUE)
YearWeek ProductID CustomerID Quantity
1 2012-01 1 a 5
2 2012-01 1 b 7
3 2012-01 2 a 6
4 2012-01 2 b 8
5 2012-01 3 a <NA>
6 2012-02 1 a 9
7 2012-02 1 b 11
8 2012-02 2 a 10
9 2012-02 2 b <NA>
10 2012-02 3 a 13
What are the comments on this one?
Is this an elegant way, or are there better ways?
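For what it's worth, the tidyr package (not mentioned in the original post) expresses exactly this "all time values × ID combinations present in the data" idea with complete() and nesting(); a sketch:

```r
library(tidyr)

# test2 from the question: the 2-b combination is missing for 2012-02,
# and 3-b never occurs at all
test2 <- data.frame(
  YearWeek   = c(rep("2012-01", 4), rep("2012-02", 3), "2012-02"),
  ProductID  = c(1, 2, 1, 2, 1, 2, 1, 3),
  CustomerID = c("a", "a", "b", "b", "a", "a", "b", "a"),
  Quantity   = c(5:11, 13)
)

# nesting() keeps only the ID combinations that occur in the data;
# crossing them with YearWeek adds the missing 2012-02 / 2-b row
# (but not 2012-01 / 3-b, since 3-b is not in the data)
res <- complete(test2, YearWeek, nesting(ProductID, CustomerID))
```

Passing `fill = list(Quantity = 0)` to complete() gives the zero fill instead of NA, which covers the "I want to have control on that" requirement.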

Use expand.grid and merge:
vals <- expand.grid(YearWeek   = unique(test$YearWeek),
                    ProductID  = unique(test$ProductID),
                    CustomerID = unique(test$CustomerID))
> merge(vals, test, all = TRUE)
YearWeek ProductID CustomerID Quantity
1 2012-01 1 a 5
2 2012-01 1 b 7
3 2012-01 2 a 6
4 2012-01 2 b 8
5 2012-02 1 a 9
6 2012-02 1 b 11
7 2012-02 2 a 10
8 2012-02 2 b NA
The NAs can be replaced after the fact with whatever values you choose using subsetting and is.na.
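For instance, a zero fill after the merge might look like this (a sketch using the test data from the question):

```r
# Rebuild the example data, merge against the full grid, then zero-fill
test <- data.frame(
  YearWeek   = rep(c("2012-01", "2012-02"), each = 4),
  ProductID  = rep(c(1, 2), times = 4),
  CustomerID = rep(c("a", "b"), each = 2, times = 2),
  Quantity   = 5:12
)[1:7, ]

vals <- expand.grid(YearWeek   = unique(test$YearWeek),
                    ProductID  = unique(test$ProductID),
                    CustomerID = unique(test$CustomerID))
res <- merge(vals, test, all = TRUE)

# Replace the NA quantities with 0 (or any other fill value)
res$Quantity[is.na(res$Quantity)] <- 0
```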

Related

How to create a new dataframe with the series of answers (at different times) to a question for each user id

I'm working on a dataframe with a lot of questions, and some people answered the inquiry several times. I would like to study the evolution of their answers.
I have a database that looks like:
User ID   Time      Answer
User A    2012-01   5
User B    2012-02   6
User B    2012-01   5
User B    2012-03   6
User A    2012-02   5
User C    2012-03   6
And I would like to have a dataframe with the answers of each user classed by time, like that:
User ID   2012-01   2012-02   2012-03
User A    5         5         X
User B    5         6         6
User C    X         X         6
Do you know how I could do that?
I've tried to use group by user ID but it didn't work.
library(tidyr)
# Your data (comma-separated so that "User A" stays one field;
# whitespace-separated input breaks read.table here because
# "User A" is two tokens against a three-field header)
df <- read.table(text = "
UserID,Time,Answer
User A,2012-01,5
User B,2012-02,6
User B,2012-01,5
User B,2012-03,6
User A,2012-02,5
User C,2012-03,6",
header = TRUE, sep = ",")
df %>%
  pivot_wider(names_from = Time,
              values_from = Answer)
# A tibble: 3 × 4
  UserID `2012-01` `2012-02` `2012-03`
  <chr>      <int>     <int>     <int>
1 User A         5         5        NA
2 User B         5         6         6
3 User C        NA        NA         6
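If the X placeholders from the question are wanted instead of NA, pivot_wider() takes a values_fill argument (a sketch; the fill value has to match the column type, so a numeric 0 is used here rather than a literal "X"):

```r
library(tidyr)

df <- read.table(text = "
UserID,Time,Answer
User A,2012-01,5
User B,2012-02,6
User B,2012-01,5
User B,2012-03,6
User A,2012-02,5
User C,2012-03,6",
header = TRUE, sep = ",")

# values_fill replaces the missing cells at reshape time
wide <- pivot_wider(df, names_from = Time, values_from = Answer,
                    values_fill = 0)
```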

How to run a loop in R to find a unique combination of numbers within a range of 7?

I have a dataset which looks something like this:-
Key Days
A 1
A 2
A 3
A 8
A 9
A 36
A 37
B 14
B 15
B 44
B 45
I would like to split the individual keys based on the days, in groups of 7. For example:
Key Days
A 1
A 2
A 3
Key Days
A 8
A 9
Key Days
A 36
A 37
Key Days
B 14
B 15
Key Days
B 44
B 45
I could use ifelse and specify buckets of 1-7, 8-14, etc. up to 63-70 (the max possible value of days). However, the issue lies with the days column: there are lots of cases with an overlap in days. Take days 14-15 as an example, which would fall into 2 brackets if split using the ifelse logic (8-14 & 15-21).
The ideal method of splitting this would be to identify a day, add 7 to it, and check how many rows of data actually fall under that category. I think we need to use loops for this. I could do it in Excel, but I have 20000 rows of data for 2000 keys, hence I'm using R. I would need a loop which checks each key value, and for each key further checks the value of days and buckets them in groups of 7 based on the first day value of each range.
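The anchor-based bucketing described above (take the first day, add 7, start a new bucket at the first day outside that window) can be sketched as follows; the helper `bucket` is hypothetical, not from any posted answer, and assumes days are sorted within each key:

```r
# Start a new group whenever a day falls more than 6 days after the
# day that opened the current group
bucket <- function(days) {
  grp <- integer(length(days))
  g <- 1L
  anchor <- days[1]
  for (i in seq_along(days)) {
    if (days[i] > anchor + 6) {   # outside the current 7-day window
      g <- g + 1L
      anchor <- days[i]
    }
    grp[i] <- g
  }
  grp
}

df <- data.frame(Key  = c(rep("A", 7), rep("B", 4)),
                 Days = c(1, 2, 3, 8, 9, 36, 37, 14, 15, 44, 45))
df$grp <- ave(df$Days, df$Key, FUN = bucket)      # per-key grouping
split(df, interaction(df$Key, df$grp, drop = TRUE))
```

Unlike the %/% 7 approach, the windows here are anchored at the first observed day of each run, so days 14-15 always land in a single bucket.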
We create a grouping variable by applying %/% on the 'Days' column and then split the dataset into a list based on that 'grp'.
grp <- df$Days %/% 7
split(df, factor(grp, levels = unique(grp)))
#$`0`
# Key Days
#1 A 1
#2 A 2
#3 A 3
#$`1`
# Key Days
#4 A 8
#5 A 9
#$`5`
# Key Days
#6 A 36
#7 A 37
#$`2`
# Key Days
#8 B 14
#9 B 15
#$`6`
# Key Days
#10 B 44
#11 B 45
Update
If we need to split by 'Key' also
lst <- split(df, list(factor(grp, levels = unique(grp)), df$Key), drop=TRUE)

Identify and remove duplicates by a criteria in R

Hi, I am puzzled by a problem concerning duplicates in R. I have looked around a lot and don't seem to find any help. I have a dataset like this:
x <- data.frame(
  id        = c("A","A","A","A","A","A","A","B","B","B","B"),
  StartDate = c("09/07/2006", "09/07/2006", "09/07/2006", "08/10/2006",
                "08/10/2006", "09/04/2007", "02/03/2011", "05/05/2005",
                "08/06/2009", "07/09/2009", "07/09/2009"),
  EndDate   = c("06/08/2006", "06/08/2006", "06/08/2006", "19/11/2006",
                "19/11/2006", "07/05/2007", "30/03/2011", "02/06/2005",
                "06/07/2009", "05/10/2009", "05/10/2009"),
  Group     = c(1, 1, 1, 2, 2, 3, 4, 2, 3, 4, 4),
  TestDate  = c("09/06/2006", "08/09/2006", "08/10/2006", "08/09/2006",
                "08/10/2006", NA, "02/03/2011", NA, "07/09/2009",
                "07/09/2009", "08/10/2009"),
  Code      = c(4, 4, 4858, 4, 4858, NA, 4, NA, 795, 795, 4)
)
> x
id StartDate EndDate Group TestDate Code
1 A 09/07/2006 06/08/2006 1 09/06/2006 4
2 A 09/07/2006 06/08/2006 1 08/09/2006 4
3 A 09/07/2006 06/08/2006 1 08/10/2006 4858
4 A 08/10/2006 19/11/2006 2 08/09/2006 4
5 A 08/10/2006 19/11/2006 2 08/10/2006 4858
6 A 09/04/2007 07/05/2007 3 NA NA
7 A 02/03/2011 30/03/2011 4 02/03/2011 4
8 B 05/05/2005 02/06/2005 2 NA NA
9 B 08/06/2009 06/07/2009 3 07/09/2009 795
10 B 07/09/2009 05/10/2009 4 07/09/2009 795
11 B 07/09/2009 05/10/2009 4 08/10/2009 4
So basically what I am trying to do is to identify duplicates in the TestDate variable by ID. For example, dates 08/09/2006 and 08/10/2006 are repeated for the same person but in different Groups, and I don't want the same TestDate to appear in different Groups for one ID.
The criterion for choosing which row to keep is the difference in days between TestDate and the StartDate/EndDate of each group: keep the row with the smallest difference. For example, for the date 08/10/2006 I would like to keep row 5, as the TestDate there is closer to the StartDate than it is in row 3. Eventually, I would like to end up with a dataset like this:
> xfinal
id StartDate EndDate Group TestDate Code
1 A 09/07/2006 06/08/2006 1 09/06/2006 4
4 A 08/10/2006 19/11/2006 2 08/09/2006 4
5 A 08/10/2006 19/11/2006 2 08/10/2006 4858
6 A 09/04/2007 07/05/2007 3 NA NA
7 A 02/03/2011 30/03/2011 4 02/03/2011 4
8 B 05/05/2005 02/06/2005 2 NA NA
10 B 07/09/2009 05/10/2009 4 07/09/2009 795
11 B 07/09/2009 05/10/2009 4 08/10/2009 4
Any help on that will be much appreciated. Thanks
x$StartDate <- as.Date(x$StartDate, format = "%d/%m/%Y")
x$EndDate   <- as.Date(x$EndDate,   format = "%d/%m/%Y")
x$TestDate  <- as.Date(x$TestDate,  format = "%d/%m/%Y")
x$Diff <- difftime(x$EndDate, x$StartDate, units = "days")
x <- x[order(x$id, x$Diff), ]
x <- x[!duplicated(x[, c("id", "TestDate")]), ]
x$Diff <- NULL
x
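Note that Diff above is the length of each group's interval, not the distance of TestDate from it. A sketch that follows the stated criterion more literally (assuming "difference in days" means the distance from TestDate to the [StartDate, EndDate] window, 0 when it falls inside) could be:

```r
x <- data.frame(
  id        = c("A","A","A","A","A","A","A","B","B","B","B"),
  StartDate = as.Date(c("09/07/2006","09/07/2006","09/07/2006","08/10/2006",
                        "08/10/2006","09/04/2007","02/03/2011","05/05/2005",
                        "08/06/2009","07/09/2009","07/09/2009"), "%d/%m/%Y"),
  EndDate   = as.Date(c("06/08/2006","06/08/2006","06/08/2006","19/11/2006",
                        "19/11/2006","07/05/2007","30/03/2011","02/06/2005",
                        "06/07/2009","05/10/2009","05/10/2009"), "%d/%m/%Y"),
  TestDate  = as.Date(c("09/06/2006","08/09/2006","08/10/2006","08/09/2006",
                        "08/10/2006",NA,"02/03/2011",NA,"07/09/2009",
                        "07/09/2009","08/10/2009"), "%d/%m/%Y")
)

# Distance (in days) from TestDate to the group's window; 0 when inside
dist <- pmax(as.numeric(x$StartDate - x$TestDate),
             as.numeric(x$TestDate - x$EndDate), 0)

# Sort so the closest row comes first, then drop later (id, TestDate)
# repeats; rows with NA TestDate are kept as-is
x <- x[order(x$id, x$TestDate, dist), ]
res <- x[is.na(x$TestDate) | !duplicated(x[, c("id", "TestDate")]), ]
```

On the question's data this keeps exactly the 8 rows shown in xfinal.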

R finding date intervals by ID

Having the following table which comprises some key columns which are: customer ID | order ID | product ID | Quantity | Amount | Order Date.
All this data is in LONG format, in that you will get multiple line items for the one Customer ID.
I can get the first and last dates using R date arithmetic, but converting the file to WIDE format using plyr still ends up with the same problem of multiple orders per customer, just fewer rows and more columns.
Is there an R function that works out the time interval between purchases by Customer ID? That is, the time between order 1 and 2, order 2 and 3, and so on, assuming these orders exist.
CID Order.Date Order.DateMY Order.No_ Amount Quantity Category.Name Locality
1 26/02/13 Feb-13 zzzzz 1 r MOSMAN
1 26/05/13 May-13 qqqqq 1 x CHULLORA
1 28/05/13 May-13 wwwww 1 r MOSMAN
1 28/05/13 May-13 wwwww 1 x MOSMAN
2 19/08/13 Aug-13 wwwwww 1 o OAKLEIGH SOUTH
3 3/01/13 Jan-13 wwwwww 1 x CURRENCY CREEK
4 28/08/13 Aug-13 eeeeeee 1 t BRISBANE
4 10/09/13 Sep-13 rrrrrrrrr 1 y BRISBANE
4 25/09/13 Sep-13 tttttttt 2 e BRISBANE
It is not clear what you want to do since you don't give the expected result, but I guess you want the intervals between consecutive orders.
library(data.table)
DT <- as.data.table(DF)
DT[, list(Order.Date,
          diff = c(0, diff(sort(as.Date(Order.Date, '%d/%m/%y'))))), CID]
CID Order.Date diff
1: 1 26/02/13 0
2: 1 26/05/13 89
3: 1 28/05/13 2
4: 1 28/05/13 0
5: 2 19/08/13 0
6: 3 3/01/13 0
7: 4 28/08/13 0
8: 4 10/09/13 13
9: 4 25/09/13 15
Split the data frame and find the intervals for each Customer ID.
df <- data.frame(customerID = as.factor(c(rep("A", 3), rep("B", 4))),
                 OrderDate  = as.Date(c("2013-07-01", "2013-07-02", "2013-07-03",
                                        "2013-06-01", "2013-06-02", "2013-06-03",
                                        "2013-07-01")))
dfs <- split(df,df$customerID)
lapply(dfs,function(x){
tmp <-diff(x$OrderDate)
tmp
})
Or use plyr
library(plyr)
dfs <- dlply(df,.(customerID),function(x)return(diff(x$OrderDate)))
I know this question is very old, but I just figured out another way to do it and wanted to record it:
> library(dplyr)
> library(lubridate)
> df %>% group_by(customerID) %>%
    mutate(SinceLast = interval(ymd(lag(OrderDate)), ymd(OrderDate)) / ddays(1))
# A tibble: 7 x 3
# Groups: customerID [2]
customerID OrderDate SinceLast
<fct> <date> <dbl>
1 A 2013-07-01 NA
2 A 2013-07-02 1.
3 A 2013-07-03 1.
4 B 2013-06-01 NA
5 B 2013-06-02 1.
6 B 2013-06-03 1.
7 B 2013-07-01 28.
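The same per-customer gaps can also be computed without lubridate, using difftime on the lagged order dates (a sketch, rebuilding the df from the earlier answer):

```r
library(dplyr)

df <- data.frame(customerID = as.factor(c(rep("A", 3), rep("B", 4))),
                 OrderDate  = as.Date(c("2013-07-01", "2013-07-02", "2013-07-03",
                                        "2013-06-01", "2013-06-02", "2013-06-03",
                                        "2013-07-01")))

# lag() is NA for each customer's first order, so SinceLast starts at NA
gaps <- df %>%
  group_by(customerID) %>%
  mutate(SinceLast = as.numeric(difftime(OrderDate, lag(OrderDate),
                                         units = "days")))
```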

cross sectional sub-sets in data.table

I have a data.table which contains multiple columns, which is well represented by the following:
DT <- data.table(date = as.IDate(rep(c("2012-10-17", "2012-10-18", "2012-10-19"),
                                     each = 10)),
                 session = c(1, 2, 3), price = c(10, 11, 12, 13, 14),
                 volume = runif(30, min = 10, max = 1000))
I would like to extract a multiple column table which shows the volume traded at each price in a particular type of session -- with each column representing a date.
At present, I extract this data one date at a time using the following:
DT[session==1,][date=="2012-10-17", sum(volume), by=price]
and then bind the columns.
Is there a way of obtaining the end product (a table with each column referring to a particular date) without sticking all the single queries together, as I'm currently doing?
thanks
Does the following do what you want?
A combination of reshape2 and data.table
library(reshape2)
.DT <- DT[,sum(volume),by = list(price,date,session)][, DATE := as.character(date)]
# reshape2 for casting to wide -- it doesn't seem to like IDate columns, hence
# the character DATE column
dcast(.DT, session + price ~ DATE, value.var = 'V1')
session price 2012-10-17 2012-10-18 2012-10-19
1 1 10 308.9528 592.7259 NA
2 1 11 649.7541 NA 816.3317
3 1 12 NA 502.2700 766.3128
4 1 13 424.8113 163.7651 NA
5 1 14 682.5043 NA 147.1439
6 2 10 NA 755.2650 998.7646
7 2 11 251.3691 695.0153 NA
8 2 12 791.6882 NA 275.4777
9 2 13 NA 111.7700 240.3329
10 2 14 230.6461 817.9438 NA
11 3 10 902.9220 NA 870.3641
12 3 11 NA 719.8441 963.1768
13 3 12 361.8612 563.9518 NA
14 3 13 393.6963 NA 718.7878
15 3 14 NA 871.4986 582.6158
If you just wanted session 1
dcast(.DT[session == 1L], session + price ~ DATE, value.var = 'V1')
session price 2012-10-17 2012-10-18 2012-10-19
1 1 10 308.9528 592.7259 NA
2 1 11 649.7541 NA 816.3317
3 1 12 NA 502.2700 766.3128
4 1 13 424.8113 163.7651 NA
5 1 14 682.5043 NA 147.1439
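Since data.table gained its own dcast (around version 1.9.6), the reshape2 detour and the character DATE workaround are no longer needed; a sketch against the same DT (with a seed added so the volumes are reproducible):

```r
library(data.table)

set.seed(42)  # reproducible volumes
DT <- data.table(date = as.IDate(rep(c("2012-10-17", "2012-10-18", "2012-10-19"),
                                     each = 10)),
                 session = c(1, 2, 3), price = c(10, 11, 12, 13, 14),
                 volume = runif(30, min = 10, max = 1000))

# data.table's dcast handles IDate columns directly; empty cells get
# sum(numeric(0)) = 0 rather than NA when fun.aggregate is supplied
wide <- dcast(DT, session + price ~ date, value.var = "volume",
              fun.aggregate = sum)
```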
