I have read many of the threads and do not think my question has been asked before. I have a data.frame in R related to advertisements shown to customers, like the one below. I have many customers and 8 different products, so this is just a sample:
mydf <- data.frame(Cust = c(1, 1), age = c(24, 24),
                   state = c("NJ", "NJ"), Product = c(1, 1), cost = c(400, 410),
                   Time = c(35, 23), Purchased = c("N", "Y"))
mydf
# Cust age state Product cost Time Purchased
# 1 1 24 NJ 1 400 35 N
# 2 1 24 NJ 1 410 23 Y
And I want to transform it to look like this:
Cust | age | state | Product | cost.1 | time.1 | purch.1 | cost.2 | time.2 | purch.2
1 | 24 | NJ | 1 | 400 | 35 | N | 410 | 23 | Y
How can I do this? There are a few static variables for each customer, such as age, state, and a few others, and then there are the details associated with each offer presented to a given customer: the product # in the offer, the cost, the time, and whether they purchased it. I want to get all of this onto one line per customer to perform analysis.
It is worth noting that the number of products maxes out at 7, but for individual customers it can be anywhere from 1 to 7.
I have no sample code to really show. I have tried using the aggregate function, but I do not want to aggregate or do any sums; I just want to do some joins. Research suggests the cbind and tapply functions may be useful.
Thank you for your help. I am very new to R.
You are essentially asking to do a "long" to "wide" reshape of your data.
It looks to me like you're using "Cust", "age", "state", and "Product" as your ID variables. You don't have an actual "time" variable, though ("time" as in the sequential count of records by the IDs mentioned above). However, such a variable is easy to create:
mydf$timevar <- with(mydf,
                     ave(rep(1, nrow(mydf)),
                         Cust, age, state, Product, FUN = seq_along))
mydf
# Cust age state Product cost Time Purchased timevar
# 1 1 24 NJ 1 400 35 N 1
# 2 1 24 NJ 1 410 23 Y 2
From there, this is pretty straightforward with the reshape function in base R.
reshape(mydf, direction = "wide",
        idvar = c("Cust", "age", "state", "Product"),
        timevar = "timevar")
# Cust age state Product cost.1 Time.1 Purchased.1 cost.2 Time.2 Purchased.2
# 1 1 24 NJ 1 400 35 N 410 23 Y
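If you prefer the tidyverse, the same long-to-wide reshape can be written with tidyr's pivot_wider. A minimal sketch, assuming tidyr >= 1.0 (note the resulting names use "_" rather than "." as the separator):
library(dplyr)
library(tidyr)

mydf %>%
  group_by(Cust, age, state, Product) %>%
  mutate(timevar = row_number()) %>%   # same sequential counter as ave() above
  ungroup() %>%
  pivot_wider(names_from = timevar,
              values_from = c(cost, Time, Purchased))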
I have a data set like this:
df = data.frame(Business = c('HR','HR','Finance','Finance','Legal','Legal','Research'),
                Country = c('Iceland','Iceland','Norway','Norway','US','US','France'),
                Gender = c('Female','Male','Female','Male','Female','Male','Male'),
                Value = c(10,5,20,40,10,20,50))
I need to filter the data down to the rows where, within each Business/Country pair, both the male value and the female value are >= 10. For example, Iceland HR should be removed (its male value is only 5), as should Research France (there is no female row).
I've tried df %>% group_by(Business, Country) %>% filter(Value >= 10), but this only drops the individual rows with a value less than 10. Any ideas?
Maybe this can help:
library(dplyr)
library(reshape2)
df2 <- reshape(df, idvar = c('Business','Country'), timevar = 'Gender', direction = 'wide')
df2 %>% mutate(Index = ifelse(Value.Female >= 10 & Value.Male >= 10, 1, 0)) %>%
  filter(Index == 1) -> df3
df4 <- reshape2::melt(df3[, -5], id.vars = c('Business','Country'))  # drop the Index column before melting
Business Country variable value
1 Finance Norway Value.Female 20
2 Legal US Value.Female 10
3 Finance Norway Value.Male 40
4 Legal US Value.Male 20
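Since you were already using dplyr, you could also keep the data long and use a grouped filter; a minimal sketch, assuming each Business/Country pair has at most one row per Gender:
library(dplyr)

df %>%
  group_by(Business, Country) %>%
  filter(n() == 2, all(Value >= 10)) %>%   # both genders present and both values >= 10
  ungroup()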
You could just use two ave steps, one with length, one with min.
df <- df[with(df, ave(Value, Country, FUN=length)) == 2, ]
df[with(df, ave(Value, Country, FUN=min)) >= 10, ]
# Business Country Gender Value
# 3 Finance Norway Female 20
# 4 Finance Norway Male 40
# 5 Legal US Female 10
# 6 Legal US Male 20
Notice that this also works if we shuffle the rows of the data frame.
set.seed(42)
df2 <- df[sample(1:nrow(df)), ]
df2 <- df2[with(df2, ave(Value, Country, FUN=length)) == 2, ]
df2[with(df2, ave(Value, Country, FUN=min)) >= 10, ]
# Business Country Gender Value
# 5 Legal US Female 10
# 6 Legal US Male 20
# 3 Finance Norway Female 20
# 4 Finance Norway Male 40
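Grouping by Country alone works here because each Country appears under a single Business. If the same Country could show up under several Business values, you could group on the interaction of the two instead; a sketch under that assumption:
df$grp <- with(df, interaction(Business, Country, drop = TRUE))
df2 <- df[ave(df$Value, df$grp, FUN = length) == 2, ]
df2$grp <- droplevels(df2$grp)   # discard the groups removed in the first step
df2[ave(df2$Value, df2$grp, FUN = min) >= 10, names(df2) != "grp"]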
I have found dplyr speedy and simple for aggregating and summarising data, but I can't work out how to solve the following problem with it.
Given these data frames:
df_2017 <- data.frame(
  expand.grid(1:195, 1:65, 1:39),
  value = sample(1:1000000, (195*65*39)),
  period = rep("2017", (195*65*39)),
  stringsAsFactors = F
)
df_2017 <- df_2017[sample(1:(195*65*39),450000),]
names(df_2017) <- c("company", "product", "acc_concept", "value", "period")
df_2017$company <- as.character(df_2017$company)
df_2017$product <- as.character(df_2017$product)
df_2017$acc_concept <- as.character(df_2017$acc_concept)
df_2017$value <- as.numeric(df_2017$value)
ratio_df <- data.frame(concept = c("numerator","numerator","numerator","denom","denom","denom","name"),
                       ratio1 = c("1","","","4","","","Sales over Assets"),
                       ratio2 = c("1","","","5","6","","Sales over Expenses A + B"),
                       stringsAsFactors = F)
where the columns in df_2017 are:
company = This is a categorical variable with companies from 1 to 195
product = This is a categorical variable, with home appliance products from 1 to 65. For example, 1 could be equal to irons, 2 to televisions, etc.
acc_concept = This is a categorical variable with accounting concepts from 1 to 39. For example, 1 would be equal to "Sales", 2 to "Total Expenses", 3 to "Returns", 4 to "Assets", etc.
value = This is a numeric variable, with USD from 1 to 100,000,000
period = Categorical variable. Always 2017
As the expand.grid implies, the company - product - acc_concept combinations are never duplicated, but it can happen that certain companies do not have every product - acc_concept combination. That's why the code line "df_2017 <- df_2017[sample(1:(195*65*39), 450000), ]" is there, and that's why the output can contain NA (see below).
And where the columns in ratio_df are:
Concept = which acc_concept corresponds to the numerator, which one to the denominator, and which row holds the name of the ratio
ratio1 = acc_concept and name for ratio1
ratio2 = acc_concept and name for ratio2
I want to calculate 2 ratios (ratio_df) between acc_concept, for each product within each company.
For example:
I take the first ratio's acc_concepts and name from ratio_df:
num_acc_concept <- ratio_df[ratio_df$concept == "numerator", 2]
denom_acc_concept <- ratio_df[ratio_df$concept == "denom", 2]
ratio_name <- ratio_df[ratio_df$concept == "name", 2]
Then I calculate the ratio for one product of one company, just to show what I want to do:
ratio1_value <- sum(df_2017[df_2017$company == 1 & df_2017$product == 1 &
                              df_2017$acc_concept %in% num_acc_concept, 4]) /
                sum(df_2017[df_2017$company == 1 & df_2017$product == 1 &
                              df_2017$acc_concept %in% denom_acc_concept, 4])
Output:
output <- data.frame(Company="1", Product="1", desc_ratio=ratio_name, ratio_value = ratio1_value, stringsAsFactors = F)
As I said before, I want to do this for each product within each company.
The output data.frame could be something like this (the ratios aren't the true ones because I haven't done the calculations yet):
company product desc_ratio ratio_value
1 1 Sales over Assets 0.9303675
1 2 Sales over Assets 1.30
1 3 Sales over Assets NaN
1 4 Sales over Assets Inf
1 5 Sales over Assets 2.32
1 6 Sales over Assets NA
.
.
.
1 1 Sales over Expenses A + B 3.25
.
.
.
2 1 Sales over Assets 0.256
and so on...
NaN when ratio is 0 / 0
Inf when ratio is number / 0
NA when there is no data for certain company and product.
I hope I have made myself clear this time :)
Is there any way to solve this problem with dplyr? Should I cast df_2017 before mutating? If so, which is the best way to do the casting?
Any help would be welcome!
This is one way of doing it. At the end I timed the code on all of your records.
First, create a function to compute all the ratios. Do note, this function is only useful inside the dplyr code below.
ratio <- function(data){
  result <- data.frame(desc_ratio = rep(NA, ncol(ratio_df) - 1),
                       ratio_value = rep(NA, ncol(ratio_df) - 1))
  for(i in 2:ncol(ratio_df)){
    num <- ratio_df[ratio_df$concept == "numerator", i]
    denom <- ratio_df[ratio_df$concept == "denom", i]
    result$desc_ratio[i-1] <- ratio_df[ratio_df$concept == "name", i]
    result$ratio_value[i-1] <- sum(ifelse(data$acc_concept %in% num, data$value, 0)) /
      sum(ifelse(data$acc_concept %in% denom, data$value, 0))
  }
  return(result)
}
Using dplyr, tidyr and purrr to put everything together: first group the data, nest the columns needed by the function, run the function inside a mutate on the nested data, drop the nested column that is no longer needed, and unnest to get your wanted output. I leave the sorting up to you.
library(dplyr)
library(purrr)
library(tidyr)
output <- df_2017 %>%
  group_by(company, product, period) %>%
  nest() %>%
  mutate(ratios = map(data, ratio)) %>%
  select(-data) %>%
  unnest()
output
# A tibble: 25,350 x 5
company product period desc_ratio ratio_value
<chr> <chr> <chr> <chr> <dbl>
1 103 2 2017 Sales over Assets 0.733
2 103 2 2017 Sales over Expenses A + B 0.219
3 26 26 2017 Sales over Assets 0.954
4 26 26 2017 Sales over Expenses A + B 1.01
5 85 59 2017 Sales over Assets 4.14
6 85 59 2017 Sales over Expenses A + B 1.83
7 186 38 2017 Sales over Assets 7.85
8 186 38 2017 Sales over Expenses A + B 0.722
9 51 25 2017 Sales over Assets 2.34
10 51 25 2017 Sales over Expenses A + B 0.627
# ... with 25,340 more rows
Time it took to run this code on my machine, measured with system.time:
user system elapsed
6.75 0.00 6.81
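For comparison, here is a nest-free sketch of the same calculation (using the same df_2017 and ratio_df, and assuming dplyr >= 1.0 for the .groups argument): summarise each group once per ratio column. Because the sum over an empty selection is 0, a 0/0 group still comes out as NaN and number/0 as Inf; company/product pairs entirely absent from the data simply produce no row.
library(dplyr)

ratios_for <- function(col) {
  num   <- ratio_df[[col]][ratio_df$concept == "numerator"]
  denom <- ratio_df[[col]][ratio_df$concept == "denom"]
  df_2017 %>%
    group_by(company, product, period) %>%
    summarise(desc_ratio  = ratio_df[[col]][ratio_df$concept == "name"],
              ratio_value = sum(value[acc_concept %in% num]) /
                            sum(value[acc_concept %in% denom]),
              .groups = "drop")
}

output2 <- bind_rows(lapply(c("ratio1", "ratio2"), ratios_for))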
I was wondering how I would use R to calculate the below.
Assuming a CSV with the following purchase data:
| Customer ID | Purchase Date |
| 1 | 01/01/2017 |
| 2 | 01/01/2017 |
| 3 | 01/01/2017 |
| 4 | 01/01/2017 |
| 1 | 02/01/2017 |
| 2 | 03/01/2017 |
| 2 | 07/01/2017 |
I want to figure out the average time between repurchases by customer.
The math would be like the one below:
| Customer ID | AVG repurchase |
| 1           | 30 days        | = (02/01 - 01/01) / 1 order
| 2           | 90 days        | = ((03/01 - 01/01) + (07/01 - 03/01)) / 2 orders
| 3 | n/a |
| 4 | n/a |
The output would be the total average across customers -- so: 60 days = (30 avg for customer1 + 90 avg for customer2) / 2 customers.
I've assumed you have read your CSV into a dataframe named df and I've renamed your variables using snake case (customer_id, purchase_date); having variables with a space in the name is inconvenient, which leads many to adopt either snake case or camel case naming conventions.
Here is a base R solution. Note that purchase_date must be of class Date for diff() to give day differences; if it came out of the CSV as character, convert it first:
df$purchase_date <- as.Date(df$purchase_date, format = "%m/%d/%Y")
mean(sapply(by(df$purchase_date, df$customer_id, diff), mean), na.rm=TRUE)
[1] 60.75
You may notice that we get 60.75 rather than 60 as you expected. This is because there are 31 days between customer 1's purchases (31 days in January until February 1), and similarly for customer 2's purchases -- there are not always 30 days in a month.
Explanation
by(df$purchase_date, df$customer_id, diff)
The by() function applies another function to data by groupings. Here, we are applying diff() to df$purchase_date by the unique values of df$customer_id. By itself, this would result in the following output:
df$customer_id: 1
Time difference of 31 days
-----------------------------------------------------------
df$customer_id: 2
Time differences in days
[1] 59 122
We then use
sapply(by(df$purchase_date, df$customer_id, diff), mean)
to apply mean() to the elements of the previous result. This gives us each customer's average time to repurchase:
1 2 3 4
31.0 90.5 NaN NaN
(we see customers 3 and 4 never repurchased). Finally, we need to average these average repurchase times, which means we need to also deal with those NaN values, so we use:
mean(sapply(by(df$purchase_date, df$customer_id, diff), mean), na.rm=TRUE)
which will average the previous results, ignoring missing values (which, in R include NaN values).
Here's another solution with dplyr + lubridate:
library(dplyr)
library(lubridate)
df %>%
  mutate(Purchase_Date = mdy(Purchase_Date)) %>%
  group_by(Customer_ID) %>%
  summarize(AVG_Repurchase = sum(difftime(Purchase_Date,
                                          lag(Purchase_Date), units = "days"),
                                 na.rm = TRUE) / (n() - 1))
or with data.table:
library(data.table)
setDT(df)[, Purchase_Date := mdy(Purchase_Date)]
df[, .(AVG_Repurchase = sum(difftime(Purchase_Date,
                                     shift(Purchase_Date), units = "days"),
                            na.rm = TRUE) / (.N - 1)), by = "Customer_ID"]
Results (from dplyr, then from data.table):
# A tibble: 4 x 2
Customer_ID AVG_Repurchase
<dbl> <time>
1 1 31.0 days
2 2 90.5 days
3 3 NaN days
4 4 NaN days
Customer_ID AVG_Repurchase
1: 1 31.0 days
2: 2 90.5 days
3: 3 NaN days
4: 4 NaN days
Note:
I first parsed Purchase_Date (which is in mm/dd/yyyy format) with mdy(), then grouped by Customer_ID. Finally, for each Customer_ID, I calculated the mean difference in days between each Purchase_Date and its lag.
Data:
df = structure(list(Customer_ID = c(1, 2, 3, 4, 1, 2, 2), Purchase_Date = c(" 01/01/2017",
" 01/01/2017", " 01/01/2017", " 01/01/2017", " 02/01/2017", " 03/01/2017",
" 07/01/2017")), .Names = c("Customer_ID", "Purchase_Date"), class = "data.frame", row.names = c(NA,
-7L))
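If you also want the single overall figure from the question (the mean of the per-customer averages), one more summarize on top of the dplyr result does it; a sketch, using the same df:
library(dplyr)
library(lubridate)

df %>%
  mutate(Purchase_Date = mdy(Purchase_Date)) %>%
  group_by(Customer_ID) %>%
  summarize(AVG_Repurchase = mean(diff(sort(Purchase_Date)))) %>%  # per customer
  summarize(Overall = mean(AVG_Repurchase, na.rm = TRUE))          # across customers
This gives 60.75 days, for the same not-every-month-has-30-days reason noted in the first answer.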
I am looking for a way to loop through a hierarchy of unknown depth in R (I only know the data once I request it). For example, I request the highest hierarchy level and put it in a data frame:
id name
1 Books
2 DVDs
3 Computer
For the next step I want to descend into the Books category, so I do a new request with id 1 and get:
id name
11 Child books
12 Fantasy
Now I want to look into the next child category, Child books, and do a new request for id 11:
id name
111 Baby
112 Education
113 History
And so on:
id name
1111 Sound
1112 Touch
At this point I don't know how deep each hierarchy is, and the depth differs between categories. In the end I would like the data frame to look like this:
Id name Id name Id name id name id name
1 Books 11 Child books 111 Baby 1111 Sound ...
1 Books 11 Child books 111 Baby 1112 Touch ...
1 Books 11 Child books 112 Education etc.
1 Books 11 Child books 113 History etc.
1 Books 12 Fantasy etc.
.................
2 DVDs etc.
.................
3 Computer etc.
.................
So I can extract the number of rows of the next hierarchy level and repeat each row that number of times:
df[rep(x,each=nrow(df_next)),]
But I have no idea how to loop over an unknown (and changing) i.
Here's a not so elegant solution:
(i) subFn is a custom function that splits an id into one column per digit, based on its length:
subFn <- function(id){
  # assumes every id passed in a single call has the same number of digits
  len <- nchar(id)[1]
  tmp <- lapply(1:len, function(x) substring(id, x, x))
  names(tmp) <- paste0("level_", 1:length(tmp))
  return(tmp)
}
## example
subFn("1111")
$level_1
[1] "1"
$level_2
[1] "1"
$level_3
[1] "1"
$level_4
[1] "1"
(ii) create a list of data.frames, where each id is separated into a number of columns based on its length:
dat_list <- lapply(list(df1, df2, df3), function(x)
  do.call(data.frame, c(list(name = x[, "name"], stringsAsFactors = FALSE),
                        subFn(x[, "id"]))))
(iii) Using dplyr left_join to join two frames at a time:
dat_list[[1]] %>%
  left_join(dat_list[[2]], by = "level_1") %>%
  left_join(dat_list[[3]], by = c("level_1", "level_2"))
name.x level_1 name.y level_2 name level_3
1 Books 1 Child books 1 Baby 1
2 Books 1 Child books 1 Education 2
3 Books 1 Child books 1 History 3
4 Books 1 Fantasy 2 <NA> <NA>
5 DVDs 2 <NA> <NA> <NA> <NA>
6 Computer 3 <NA> <NA> <NA> <NA>
To avoid the lengthy and convoluted chain of left_joins over multiple data.frames, here's a solution inspired by How to join multiple data frames using dplyr?
func <- function(...){
  df1 <- list(...)[[1]]
  df2 <- list(...)[[2]]
  col <- grep("level", names(df1), value = TRUE)
  left_join(..., by = col)
}
Reduce( func, dat_list)
Input data:
df1 <- data.frame(id = 1:3, name = c("Books", "DVDs", "Computer"))
df2 <- data.frame(id = 11:12, name = c("Child books", "Fantasy"))
df3 <- data.frame(id = 111:113, name=c("Baby", "Education", "History"))
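The input data above assumes the three levels were already fetched. For the original unknown-depth problem, the request loop itself can be written recursively. A sketch, where get_children(id) is a hypothetical stand-in for your request function (it returns a data.frame with columns id and name, a zero-row data.frame at a leaf, and the top level for id = NULL):
fetch_paths <- function(id = NULL, path = character(0)) {
  children <- get_children(id)                  # one request per node (hypothetical)
  if (nrow(children) == 0) return(list(path))   # leaf reached: one finished path
  unlist(lapply(seq_len(nrow(children)), function(i) {
    fetch_paths(children$id[i],
                c(path, children$id[i], children$name[i]))
  }), recursive = FALSE)
}

paths <- fetch_paths()          # list of alternating id/name vectors, any depth
width <- max(lengths(paths))
# pad shorter paths with NA so every row has the same number of columns
wide <- t(vapply(paths,
                 function(p) c(p, rep(NA_character_, width - length(p))),
                 character(width)))
df_wide <- as.data.frame(wide, stringsAsFactors = FALSE)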
I am trying to group my data by Year and CountyID and then use splinefun (cubic spline interpolation) on the subsetted data. I am open to ideas; however, splinefun is a must and cannot be changed.
Here is the code I am trying to use:
age <- seq(from = 0, by = 5, length.out = 18)
TOT_POP <- df %.%
group_by(unique(df$Year), unique(df$CountyID) %.%
splinefun(age, c(0, cumsum(df$TOT_POP)), method = "hyman")
Here is a sample of my data: Year = 2010:2013, Agegrp = 1:17, and the CountyIDs are equal to all counties in the US.
CountyID Year Agegrp TOT_POP
1001 2010 1 3586
1001 2010 2 3952
1001 2010 3 4282
1001 2010 4 4136
1001 2010 5 3154
What I am doing is taking Agegrp 1:17, where each group currently represents 5 years, and splitting the grouping into individual years of age, 0 - 84. splinefun allows me to do this while providing a level of mathematical rigour to the process; i.e., splinefun lets me produce a population total per each year of age, in each individual county in the US.
Lastly, the splinefun code by itself does work, but within the group_by function it does not; it produces:
Error: wrong result size(4), expected 68 or 1.
The splinefun code, the way I am using it, works like this:
TOT_POP <- splinefun(age, c(0, cumsum(df$TOT_POP)),
                     method = "hyman")
TOT_POP <- pmax(0, diff(TOT_POP(c(0:85))))
This was tested on one CountyID during one Year. I need to iterate the process over "x" number of years and roughly 3200 counties.
# Reproducible data set
set.seed(22)
df = data.frame(CountyID = rep(1001:1005, each = 100),
                Year = rep(2001:2010, each = 10),
                Agegrp = sample(1:17, 500, replace = TRUE),
                TOT_POP = rnorm(500, 10000, 2000))
# Convert Agegrp to age
df$Agegrp = df$Agegrp*5
colnames(df)[3] = "age"
# Make a spline function for every CountyID-Year combination
split.dfs = split(df, interaction(df$CountyID, df$Year))
spline.funs = lapply(split.dfs, function(x) splinefun(x[,"age"], x[,"TOT_POP"]))
# Use the spline functions to interpolate populations for all years between 0 and 85
new.split.dfs = list()
for (i in seq_along(split.dfs)) {
  new.split.dfs[[i]] = data.frame(CountyID = split.dfs[[i]]$CountyID[1],
                                  Year = split.dfs[[i]]$Year[1],
                                  age = 0:85,
                                  TOT_POP = spline.funs[[i]](0:85))
}
# Does this do what you want? If so, then it will be
# easier for others to work from here
# > head(new.split.dfs[[1]])
# CountyID Year age TOT_POP
# 1 1001 2001 0 909033.4
# 2 1001 2001 1 833999.8
# 3 1001 2001 2 763181.8
# 4 1001 2001 3 696460.2
# 5 1001 2001 4 633716.0
# 6 1001 2001 5 574829.9
# > tail(new.split.dfs[[2]])
# CountyID Year age TOT_POP
# 81 1002 2001 80 10201.693
# 82 1002 2001 81 9529.030
# 83 1002 2001 82 8768.306
# 84 1002 2001 83 7916.070
# 85 1002 2001 84 6968.874
# 86 1002 2001 85 5923.268
First, I believe I was using the wrong wording for what I was trying to achieve, my apologies; group_by actually wasn't going to solve the issue. However, I was able to solve the problem using two functions and ddply. Here is the code that solved the issue:
library(plyr)

interpolate <- function(x, ageVector){
  result <- splinefun(ageVector,
                      c(0, cumsum(x)), method = "hyman")
  diff(result(c(0:85)))
}

mainFunc <- function(df){
  age <- seq(from = 0, by = 5, length.out = 18)
  colNames <- setdiff(colnames(df),
                      c("Year", "CountyID", "Agegrp"))
  # apply interpolate() to every population column at once
  colWiseSpline <- colwise(interpolate)(df[, colNames, drop = FALSE],
                                        ageVector = age)
  cbind(data.frame(
          Year = df$Year[1],
          County = df$CountyID[1],
          Agegrp = 0:84
        ),
        colWiseSpline
  )
}

CompleteMainRaw <- ddply(.data = df,
                         .variables = .(CountyID, Year),
                         .fun = mainFunc)
The code now takes each county by year and runs splinefun on that subset of the population data. At the same time it creates a data.frame with the results, i.e., it splits the data from 17 age groups into 85 single years of age while apportioning the totals appropriately, which is what splinefun makes possible.
Thanks!