Need to format R data

This is a follow-up to my only other question, but hopefully more direct. I need data that looks like this:
custID custChannel custDate
1 151 Direct 2015-10-10 00:15:32
2 151 GooglePaid 2015-10-10 00:16:45
3 151 Converted 2015-10-10 00:17:01
4 5655 BingPaid 2015-10-11 00:20:12
5 7855 GoogleOrganic 2015-10-12 00:05:32
6 7862 YahooOrganic 2015-10-13 00:18:20
7 9655 GooglePaid 2015-10-13 00:08:35
8 9655 GooglePaid 2015-10-13 00:11:11
9 9655 Converted 2015-10-13 00:11:35
10 9888 GooglePaid 2015-10-14 00:08:35
11 9888 GooglePaid 2015-10-14 00:11:11
12 9888 Converted 2015-10-14 00:11:35
To be sorted so that the output looks like this:
Path                             Path Count
BingPaid                                  1
Direct>GooglePaid>Converted               1
GoogleOrganic                             1
GooglePaid>GooglePaid>Converted           2
YahooOrganic                              1
The idea is to capture customer paths (as identified by custID) and count for the entire data set how many people took that exact path (Path Count). I need to perform this over a data set of 5 million rows.

Using data.table you can do this as follows:
require(data.table)
setDT(dat)[, paste(custChannel, collapse = ">"), custID][, .("path count" = .N), .(path = V1)]
Result:
path path count
1: Direct>GooglePaid>Converted 1
2: BingPaid 1
3: GoogleOrganic 1
4: YahooOrganic 1
5: GooglePaid>GooglePaid>Converted 2
Step by step:
setDT(dat) # make dat a data.table
# get path by custID
dat_path <- dat[, paste(custChannel, collapse = ">"), custID]
# get the count of customers for each path created in the previous step
res <- dat_path[, .("path count" = .N), by = .(path = V1)]
Have a look at dat_path and res to understand what happened.
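For reference, here is a minimal dat that reproduces the example above (the column types are an assumption; custDate is left as character since it is not used in the aggregation):
library(data.table)
dat <- data.frame(
  custID = c(151, 151, 151, 5655, 7855, 7862, 9655, 9655, 9655, 9888, 9888, 9888),
  custChannel = c("Direct", "GooglePaid", "Converted", "BingPaid", "GoogleOrganic",
                  "YahooOrganic", "GooglePaid", "GooglePaid", "Converted",
                  "GooglePaid", "GooglePaid", "Converted"),
  custDate = c("2015-10-10 00:15:32", "2015-10-10 00:16:45", "2015-10-10 00:17:01",
               "2015-10-11 00:20:12", "2015-10-12 00:05:32", "2015-10-13 00:18:20",
               "2015-10-13 00:08:35", "2015-10-13 00:11:11", "2015-10-13 00:11:35",
               "2015-10-14 00:08:35", "2015-10-14 00:11:11", "2015-10-14 00:11:35"),
  stringsAsFactors = FALSE
)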

Related

Scoping when creating a new r data.table column in a function using :=

This is a continuation of a question I posted here: Creating a new r data.table column based on values in another column and grouping, to which @Frank provided an excellent answer.
As I have to do several of these calculations with different date intervals, I want to write a function that does them. However, I seem to be running into a scoping problem. I have read the vignettes, the FAQ, and a ton of questions here, and I am still left baffled.
We'll use the same data:
library(data.table)
set.seed(88)
DT <- data.table(date = Sys.Date() - 365 + sort(sample(1:100, 10)),
                 zip = sample(c("2000", "1150", "3000"), 10, replace = TRUE),
                 purchaseAmount = sample(1:20, 10))
Here is the answer @Frank provided:
DT[, new_col :=
     DT[.(zip = zip, d0 = date - 10, d1 = date),
        on = .(zip, date >= d0, date <= d1),
        sum(purchaseAmount),
        by = .EACHI]$V1
]
DT
date zip purchaseAmount new_col
1: 2016-01-08 1150 5 5
2: 2016-01-15 3000 15 15
3: 2016-02-15 1150 16 16
4: 2016-02-20 2000 18 18
5: 2016-03-07 2000 19 19
6: 2016-03-15 2000 11 30
7: 2016-03-17 2000 6 36
8: 2016-04-02 1150 17 17
9: 2016-04-08 3000 7 7
10: 2016-04-09 3000 20 27
And now the actual problem I have encountered. I created the following function which enables dynamically changing the interval:
sumPreviousPurchases = function(dt, newColName, daysFrom, daysUntil){
  zip = substitute(zip)
  newColName = substitute(newColName)
  dt[, newColName :=
       dt[.(zip = zip, d0 = (date - daysUntil), d1 = (date - daysFrom)),
          on = .(zip, date >= d0, date <= d1),
          sum(purchaseAmount),
          by = .EACHI]$V1
  ]
}
sumPreviousPurchases(DT, prevPurch1to10, 0, 10)
DT
date zip purchaseAmount newColName
1: 2016-02-07 1150 5 5
2: 2016-02-14 3000 15 15
3: 2016-03-16 1150 16 16
4: 2016-03-21 2000 18 18
5: 2016-04-06 2000 19 19
6: 2016-04-14 2000 11 30
7: 2016-04-16 2000 6 36
8: 2016-05-02 1150 17 17
9: 2016-05-08 3000 7 7
10: 2016-05-09 3000 20 27
What troubles me is the scoping. The function names the new column newColName regardless of what I pass in the function call. From my reading I gathered that when referring to data.table column names in function arguments, one should use substitute(). However, that does not work here; the result is the same even if I leave the whole newColName = substitute(newColName) line out. I suppose this is because the column does not exist yet, but I do not know how to address the issue.
As a bonus I would like to ask: is there also a way to name the column dynamically, i.e. in the example as "daysFrom_to_daysUntil", so that the name would be "0_to_10"?
----- EDIT ----
I also stumbled upon a possible answer myself, somewhat similar to @lmo's answer, using an idea from here: http://brooksandrew.github.io/simpleblog/articles/advanced-data-table/#assign-a-column-with--named-with-a-character-object
The most important differences from the question: I removed the newColName = substitute(newColName) line entirely, and wrapped newColName in parentheses in dt[, (newColName) :=
sumPreviousPurchases = function(dt, newColName, daysFrom, daysUntil){
  zip = substitute(zip)
  #newColName = substitute(newColName)
  dt[, (newColName) :=
       dt[.(zip = zip, d0 = (date - daysUntil), d1 = (date - daysFrom)),
          on = .(zip, date >= d0, date <= d1),
          sum(purchaseAmount),
          by = .EACHI]$V1
  ]
}
Additionally, I added quotes around "prevPurch1to10" in the call:
sumPreviousPurchases(DT, "prevPurch1to10", 0, 10)
and got the answer
date zip purchaseAmount prevPurch1to10
1: 2016-02-17 1150 7 7
2: 2016-02-22 3000 8 8
3: 2016-03-04 1150 2 2
4: 2016-03-16 2000 14 14
5: 2016-04-03 2000 11 11
6: 2016-04-11 3000 12 12
7: 2016-04-21 1150 17 17
8: 2016-04-22 3000 3 3
9: 2016-05-03 2000 9 9
10: 2016-05-11 3000 4 4
However, there are still two weird things:
a) substitute() is not needed when newColName is wrapped in parentheses, (newColName). Why is that?
b) quotes are required around the "prevPurch1to10". Again, why? Is there a more data.tableish way to do this, without the quotes?
You can use substitute directly in the assignment:
sumPreviousPurchases = function(dt, newColName, daysFrom, daysUntil){
  zip = substitute(zip)
  dt[, substitute(newColName) :=
       dt[.(zip = zip, d0 = (date - daysUntil), d1 = (date - daysFrom)),
          on = .(zip, date >= d0, date <= d1),
          sum(purchaseAmount),
          by = .EACHI]$V1
  ]
}
Then give it a try
sumPreviousPurchases(DT, "prevPurch1to10", 0, 10)
which returns
DT
date zip purchaseAmount prevPurch1to10
1: 2016-02-07 1150 5 5
2: 2016-02-14 3000 15 15
3: 2016-03-16 1150 16 16
4: 2016-03-21 2000 18 18
5: 2016-04-06 2000 19 19
6: 2016-04-14 2000 11 30
7: 2016-04-16 2000 6 36
8: 2016-05-02 1150 17 17
9: 2016-05-08 3000 7 7
10: 2016-05-09 3000 20 27
Notes:
The parentheses in your solution, (newColName), force the evaluation of the argument. This is implemented in base R and is a common technique across many programming languages, based on the mathematical concept of order of operations (first evaluate objects in parentheses, then exponentiate, etc.). The use of substitute makes the substitution explicit, perhaps for easier reading.
Often, an argument to a function that will define a future object, like prevPurch1to10, requires quotes, since the object does not exist prior to calling the function. Using such an argument without quotes will usually result in an error: "object X not found."
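On the bonus question about avoiding the quotes: one common idiom is deparse(substitute()), which turns a bare symbol into the character string that the (newColName) := form needs. Below is a minimal sketch building on the working version above; the paste0() line for dynamic names is a suggestion, not from the original answers:
sumPreviousPurchases = function(dt, newColName, daysFrom, daysUntil){
  zip = substitute(zip)                           # kept from the original function
  newColName = deparse(substitute(newColName))    # bare symbol -> "prevPurch1to10"
  # for dynamic names like "0_to_10", one could instead build the string, e.g.
  # newColName = paste0(daysFrom, "_to_", daysUntil)
  dt[, (newColName) :=
       dt[.(zip = zip, d0 = (date - daysUntil), d1 = (date - daysFrom)),
          on = .(zip, date >= d0, date <= d1),
          sum(purchaseAmount),
          by = .EACHI]$V1
  ]
}
sumPreviousPurchases(DT, prevPurch1to10, 0, 10)   # no quotes needed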

Filtering rows in a data frame based on date column

I have the following data frame:
id day total_amount
1 2016-06-09 1000
1 2016-06-23 100
1 2016-06-24 200
1 2015-11-27 2392
1 2015-12-16 123
7 2015-07-09 200
7 2015-07-09 1000
7 2015-08-27 100018
7 2015-11-25 1000
How can I throw away rows where the day column is older than three weeks from today, using both base R and other packages such as dplyr?
We can use subset
subset(df1, as.Date(day) > Sys.Date()-21)
Just to fill in two additional possibilities (nearly identical to one another in syntax, and quite similar to @akrun's use of subset).
You can use with as follows to cut down on typing:
with(df, df[as.Date(day) > Sys.Date()-21,])
As you mentioned a desire to see other packages, here is one way to drop old observations using the data.table package.
library(data.table)
# turn df into a data.table
setDT(df)
df[as.Date(day) > Sys.Date()-21,]
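Since the question explicitly asks about dplyr as well, here is a minimal sketch (assuming dplyr is installed; filter also works on the data.table created above, since a data.table is still a data.frame):
library(dplyr)
df %>% filter(as.Date(day) > Sys.Date() - 21)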
data
df <- read.table(header=T, text="id day total_amount
1 2016-06-09 1000
1 2016-06-23 100
1 2016-06-24 200
1 2015-11-27 2392
1 2015-12-16 123
7 2015-07-09 200
7 2015-07-09 1000
7 2015-08-27 100018
7 2015-11-25 1000")

How do I copy a date from one variable to another in R data.table without losing the date format?

I have a data.table containing two date variables. The data set was read into R from a .csv file (originally an .xlsx file) as a data.frame, and the two variables were then converted to date format using as.Date() so that they display as below:
df
id specdate recdate
1 1 2014-08-12 2014-08-17
2 2 2014-08-15 2014-08-20
3 3 2014-08-21 2014-08-26
4 4 <NA> 2014-08-28
5 5 2014-08-25 2014-08-30
6 6 <NA> <NA>
I then converted the data.frame to a data.table:
df <- data.table(df)
I then wanted to create a third variable that would include "specdate" if present, but replace it with "recdate" if "specdate" was missing (NA). This is where I'm having some difficulty: it seems that no matter how I approach this, data.table displays dates in date format only if a complete variable that is already in date format is copied. Otherwise, individual values are displayed as numbers (even when using as.IDate), and I gather that an origin date is needed to correct this. Is there any way to avoid supplying an origin date but still display the dates as dates in data.table?
Below is my attempt to fill the NAs of specdate with the recdate dates:
# Function to fill NAs:
fillnas <- function(dataref, lookupref, nacol, replacecol, replacelist = NULL) {
  nacol <- as.character(nacol)
  if (!is.null(replacelist))
    nacol <- factor(ifelse(dataref == lookupref & (is.na(nacol) | nacol %in% replacelist), replacecol, nacol))
  else
    nacol <- factor(ifelse(dataref == lookupref & is.na(nacol), replacecol, nacol))
  nacol
}
# Fill the NAs in specdate with the function:
df[, finaldate := fillnas(dataref=id, lookupref=id, nacol=specdate, replacecol=as.IDate(recdate, format="%Y-%m-%d"))]
Here is what happens:
> df
id specdate recdate finaldate
1: 1 2014-08-12 2014-08-17 2014-08-12
2: 2 2014-08-15 2014-08-20 2014-08-15
3: 3 2014-08-21 2014-08-26 2014-08-21
4: 4 <NA> 2014-08-28 16310
5: 5 2014-08-25 2014-08-30 2014-08-25
6: 6 <NA> <NA> NA
The display problem is compounded if I create the new variable from scratch by using ifelse:
df[, finaldate := ifelse(!is.na(specdate), specdate, recdate)]
This gives:
> df
id specdate recdate finaldate
1: 1 2014-08-12 2014-08-17 16294
2: 2 2014-08-15 2014-08-20 16297
3: 3 2014-08-21 2014-08-26 16303
4: 4 <NA> 2014-08-28 16310
5: 5 2014-08-25 2014-08-30 16307
6: 6 <NA> <NA> NA
Alternatively, if I try a find-and-replace type approach, I get a warning about the number of items to replace not being a multiple of the replacement length (I'm guessing this is because that approach is not vectorised?), and the values from recdate are recycled and end up in the wrong place:
> df$finaldate <- df$specdate
> df$finaldate[is.na(df$specdate)] <- df$recdate
Warning message:
In NextMethod(.Generic) :
number of items to replace is not a multiple of replacement length
> df
id specdate recdate finaldate
1: 1 2014-08-12 2014-08-17 2014-08-12
2: 2 2014-08-15 2014-08-20 2014-08-15
3: 3 2014-08-21 2014-08-26 2014-08-21
4: 4 <NA> 2014-08-28 2014-08-17
5: 5 2014-08-25 2014-08-30 2014-08-25
6: 6 <NA> <NA> 2014-08-20
So in conclusion - the function I applied gets me closest to what I want, except that where NAs have been replaced, the replacement value is displayed as a number and not in date format. Once displayed as a number, the origin is required to again display it as a date (and I would like to avoid supplying the origin since I usually don't know it and it seems unnecessarily repetitive to have to supply it when the date was originally in the correct format).
Any insights as to where I'm going wrong would be much appreciated.
I'd approach it like this, maybe:
DT <- data.table(df)
DT[, finaldate := specdate]
DT[is.na(specdate), finaldate := recdate]
It seems you want to add a new column so you can retain the original columns as well. I do that a lot, too. Sometimes, I just update in place:
DT <- data.table(df)
DT[is.na(specdate), specdate := recdate]
setnames(DT, "specdate", "finaldate")
Using i like that avoids creating a whole new column's worth of data, which might be very large. It depends on how important retaining the original columns is to you, how many of them there are, and your data size. (Note that a whole column's worth of data is still created by the is.na() call, but at least there isn't a second column's worth for the new finaldate. Would be great to optimize i=!is.na() in future (#1386), and if you use data.table this way now you won't need to change your code in future to benefit.)
It seems that you might have various "NA" strings that you're replacing. Note that fread in v1.9.6 on CRAN has a fix for that. From the README:
correctly handles na.strings argument for all types of columns - it detects possible NA values without coercion to character, like in base read.table. Fixes #504. Thanks to @dselivanov for the PR. Also closes #1314, which closes this issue completely, i.e., na.strings = c("-999", "FALSE") etc. also work.
Btw, you've made one of the top 3 mistakes mentioned here: https://github.com/Rdatatable/data.table/wiki/Support
Works for me. You may want to check that your NA values are not the strings or factor levels "<NA>", which print just like real NA values:
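# note: the snippet below assumes specdate/recdate are already Date class
# (stored as days since 1970-01-01), so ifelse() returns the underlying day
# counts and * 86400 converts them to seconds for as.POSIXct()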
dt[, finaldate := ifelse(is.na(specdate), recdate, specdate)][
,finaldate := as.POSIXct(finaldate*86400, origin="1970-01-01", tz="UTC")]
# id specdate recdate finaldate
# 1: 1 2014-08-12 2014-08-17 2014-08-12
# 2: 2 2014-08-15 2014-08-20 2014-08-15
# 3: 3 2014-08-21 2014-08-26 2014-08-21
# 4: 4 NA 2014-08-28 2014-08-28
# 5: 5 2014-08-25 2014-08-30 2014-08-25
# 6: 6 NA NA NA
Data
df <- read.table(text=" id specdate recdate
1 1 2014-08-12 2014-08-17
2 2 2014-08-15 2014-08-20
3 3 2014-08-21 2014-08-26
4 4 NA 2014-08-28
5 5 2014-08-25 2014-08-30
6 6 NA NA", header=T, stringsAsFactors=F)
dt <- as.data.table(df)
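As an aside: newer versions of data.table (1.12.4+, if I remember right) ship fifelse(), which preserves the Date class of its yes/no arguments and so avoids the origin round-trip entirely. A minimal sketch against the df defined above (the dates are read as character there, hence the as.Date() conversion first):
library(data.table)
dt <- as.data.table(df)
# the Data block reads the dates as character, so convert them to Date first
dt[, c("specdate", "recdate") := lapply(.SD, as.Date), .SDcols = c("specdate", "recdate")]
# fifelse() keeps the Date class, so no origin is needed
dt[, finaldate := fifelse(is.na(specdate), recdate, specdate)]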

data.table outer join based on groups in R

I have data with the following columns:
CaseID, Time, Value.
The Time values are not at regular one-minute intervals. I am trying to add the missing times, filling the remaining columns (except CaseID) with NA.
Case Value Time
1 100 07:52:00
1 110 07:53:00
1 120 07:55:00
2 10 08:35:00
2 11 08:36:00
2 12 08:38:00
Desired output:
Case Value Time
1 100 07:52:00
1 110 07:53:00
1 NA 07:54:00
1 120 07:55:00
2 10 08:35:00
2 11 08:36:00
2 NA 08:37:00
2 12 08:38:00
I tried dt[CJ(unique(CaseID),seq(min(Time),max(Time),"min"))] but it gives the following error:
Error in vecseq(f__, len__, if (allow.cartesian || notjoin) NULL else as.integer(max(nrow(x), :
Join results in 9827315 rows; more than 9620640 = max(nrow(x),nrow(i)). Check for duplicate key values in i, each of which join to the same group in x over and over again. If that's ok, try including `j` and dropping `by` (by-without-by) so that j runs for each group to avoid the large allocation. If you are sure you wish to proceed, rerun with allow.cartesian=TRUE. Otherwise, please search for this error message in the FAQ, Wiki, Stack Overflow and datatable-help for advice.
I cannot make it work; any help would be appreciated.
Like this?
dt[,Time:=as.POSIXct(Time,format="%H:%M:%S")]
result <- dt[,list(Time=seq(min(Time),max(Time),by="1 min")),by=Case]
setkey(result,Case,Time)
setkey(dt,Case,Time)
result <- dt[result][,Time:=format(Time,"%H:%M:%S")]
result
# Case Value Time
# 1: 1 100 07:52:00
# 2: 1 110 07:53:00
# 3: 1 NA 07:54:00
# 4: 1 120 07:55:00
# 5: 2 10 08:35:00
# 6: 2 11 08:36:00
# 7: 2 NA 08:37:00
# 8: 2 12 08:38:00
Another way:
dt[, Time := as.POSIXct(Time, format = "%H:%M:%S")]
setkey(dt, Time)
dt[, .SD[J(seq(min(Time), max(Time), by='1 min'))], by=Case]
We group by Case and join on Time within each group using .SD (hence setting the key on Time). From here you can use format() as shown above.
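For reproducibility, a dt matching the question's data might be built like this (Time as character, since both answers convert it with as.POSIXct themselves):
library(data.table)
dt <- data.table(Case  = c(1, 1, 1, 2, 2, 2),
                 Value = c(100, 110, 120, 10, 11, 12),
                 Time  = c("07:52:00", "07:53:00", "07:55:00",
                           "08:35:00", "08:36:00", "08:38:00"))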

R: subset column entries in "df A" against column entries in "df B" and eliminate if they match

I'm an R beginner and having difficulty with the following pretty simple problem:
I have two data frames (All_df, Bad_df) and want to generate a third such that
All_df - Bad_df = Good_df
> All_df
Row# Originator Recipient Date Time
4 1 6 2000-05-16 16:15:00
7 2 7 2000-05-16 16:25:00
22 2 4 2000-07-04 18:05:00
25 2 9 2000-08-07 05:23:00
10 3 2 2000-06-17 18:07:00
13 4 8 2000-06-21 06:49:00
> Bad_df
Row# Originator Recipient Date Time
4 2 6 2000-05-16 16:15:00
7 2 7 2000-05-16 16:25:00
22 6 4 2000-07-04 18:05:00
25 12 9 2000-08-07 05:23:00
10 30 2 2000-06-17 18:07:00
13 32 8 2000-06-21 06:49:00
I want to generate Good_df similar to this:
> Good_df
Row# Originator Recipient Date Time
4 1 6 2000-05-16 16:15:00
10 3 2 2000-06-17 18:07:00
13 4 8 2000-06-21 06:49:00
Essentially I need a function which searches All_df$Originator for values that appear in Bad_df$Originator, eliminating any matches before returning the remaining rows as Good_df.
I have tried
Good_df <- subset(All_df, Originator %in% Bad_df$Originator)
however the row counts of each data frame look a little off!
> nrow(All_df)
[1] 26032
> nrow(Bad_df)
[1] 1452
> nrow(Good_df)
[1] 12395
Any assistance would be greatly appreciated.
Quite intuitively,
Good_df <- subset(All_df, Originator %in% Bad_df$Originator)
gives you the subset of All_df for bad originators. What you want is to negate your filter to get the subset of good (or non-bad) originators, using the ! operator:
Good_df <- subset(All_df, !Originator %in% Bad_df$Originator)
If you are uncomfortable with the precedence rules, you can add a set of parentheses:
Good_df <- subset(All_df, !(Originator %in% Bad_df$Originator))
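As a quick check of the precedence point: %in% binds tighter than !, so the extra parentheses are purely cosmetic. A toy example with made-up vectors:
x   <- c(1, 2, 3, 4)        # e.g. some Originator values
bad <- c(2, 6, 12, 30, 32)  # e.g. bad Originator values
!x %in% bad                 # TRUE FALSE TRUE TRUE
identical(!x %in% bad, !(x %in% bad))  # TRUE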
