R: applying a data frame on another data frame

I have two data frames.
set.seed(1234)
df <- data.frame(
  id = factor(rep(1:24, each = 10)),
  price = runif(20) * 100,
  quantity = sample(1:100, 240, replace = TRUE)
)
df2 <- data.frame(
  id = factor(1:24),
  eq.quantity = sample(1:100, 24, replace = TRUE)
)
I would like to use df2$eq.quantity to find the closest absolute value in df$quantity within each level of the factor variable id. I would like to do that for each id in df2 and bind the results into a new data frame called results.
I can do it like this for each individual id:
d.1 <- df2[df2$id == 1, 2]
df.1 <- subset(df, id == 1)
id.1 <- df.1[which.min(abs(df.1$quantity-d.1)),]
Which would give the solution:
id price quantity
1 66.60838 84
But I would really like a smarter solution, and also to gather the results into a data frame. If I did it manually, it would look something like this:
results <- rbind(id.1, id.2, etc..., id.24)
I had some trouble giving this question a good title.

data.tables are smart!
Adding this to your current example...
library(data.table)
dt = data.table(df)
dt2 = data.table(df2)
setkey(dt, id)
setkey(dt2, id)
dt[dt2, dif:=abs(quantity - eq.quantity)]
dt[,list(price=price[which.min(dif)], quantity=quantity[which.min(dif)]), by=id]
Result:
id price quantity
1: 1 66.6083758 84
2: 2 29.2315840 19
3: 3 62.3379442 63
4: 4 54.4974836 31
5: 5 66.6083758 6
6: 6 69.3591292 13
...
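With data.table 1.9.6 or later, the same join can be written with on=, avoiding the setkey calls. A minimal sketch, assuming the df and df2 from the question (i.eq.quantity refers to the column coming from dt2):
library(data.table)
dt <- as.data.table(df)
dt2 <- as.data.table(df2)
# join on id and compute the absolute difference in one step
dt[dt2, dif := abs(quantity - i.eq.quantity), on = "id"]
# keep the row with the smallest difference per id
results <- dt[, .SD[which.min(dif)], by = id]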

Merge the two datasets and use lapply to perform the function on each id.
df3 <- merge(df, df2, all.x = TRUE, by = "id")
diffvar <- function(level){
  df4 <- subset(df3, id == level)
  df4[which.min(abs(df4$quantity - df4$eq.quantity)), ]
}
resultslist <- lapply(levels(df3$id), diffvar)
Combine the resulting list elements into a data frame:
resultsdf <- do.call(rbind, resultslist)
Or more easily, with plyr:
library(plyr)
resultsdf <- ddply(df3, .(id), function(x)x[which.min(abs(x$quantity-x$eq.quantity)),])
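If you prefer dplyr over plyr, a roughly equivalent sketch using the merged df3 from above (slice keeps the row with the smallest difference within each group):
library(dplyr)
results <- df3 %>%
  group_by(id) %>%
  slice(which.min(abs(quantity - eq.quantity))) %>%
  ungroup()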

Related

How can I create a function to generate new variables based on values in different dataframe in R

I would like to create a function like this (obviously not proper code):
forEach ID in DATAFRAME1 look at each row with ID in DATAFRAME2 {
  if DATAFRAME2$VARIABLE1 = something {
    DATAFRAME1$VARIABLE1 = TRUE;
    DATAFRAME1$VARIABLE2 = DATAFRAME2$VARIABLE2
  }
}
In plain text: I've got a list of individuals and a database with mixed information on these individuals. Let's say DATAFRAME2 contains information on books read, c(id, title, author, date). I want to create a new variable in DATAFRAME1 with a boolean for whether the individual has read a specific book (VARIABLE1 above) and the date they first read it (VARIABLE2 above). Adding a third variable with the number of times read would be interesting but not necessary.
I haven't really done this in R before, having mostly done basic statistics and basic wrangling with dplyr. I guess I could use dplyr and a join, but this approach feels better. Any help to get me started would be much appreciated.
The following function does what the question asks for. Its arguments are:
DF1 and DF2, which have an obvious meaning;
var1 and var2, which are VARIABLE1 and VARIABLE2 in the question;
value, which is the value of something.
The test data is at the end.
fun <- function(DF1, DF2, ID = 'ID', var1, var2, value){
  DF1[[var1]] <- NA
  DF1[[var2]] <- NA
  k <- DF2[[var1]] == value
  for(id in DF1[[ID]]){
    i <- DF1[[ID]] == id
    j <- DF2[[ID]] == id
    if(any(j & k)){
      DF1[[var1]][i] <- TRUE
      DF1[[var2]][i] <- DF2[[var2]][j & k]
    }
  }
  DF1
}
fun(df1, df2, value = 4, var1 = 'X', var2 = 'Y')
# ID X Y
#1 a NA NA
#2 d TRUE 19
Test data.
set.seed(1234)
df1 <- data.frame(ID = c("a", "d"))
df2 <- data.frame(ID = rep(letters[1:5], 4),
                  X = sample(20, 20, TRUE),
                  Y = sample(20))
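Since the asker mentions dplyr, here is a sketch of the same idea as a join, using the test data above; unlike fun, it keeps only the first matching Y per ID:
library(dplyr)
# rows of df2 where X equals the target value, reduced to one row per ID
matches <- df2 %>%
  filter(X == 4) %>%
  group_by(ID) %>%
  summarize(X = TRUE, Y = first(Y))
# left_join keeps every row of df1; IDs without a match get NA, as in fun()
left_join(df1, matches, by = "ID")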

How can I select rows from a data frame that do not match, but grouped by id?

I'm trying to identify the values in a data frame that do not match based on id, but I can't figure out how to do this.
a_id <- c(1,1,1,2,2,2,3,3,3)
a_no <- c(1,2,3,1,2,3,1,2,3)
a <- data.frame(a_id,a_no)
b_id <- c(1,1,1,2,2,3,3)
b_no <- c(1,2,3,1,3,2,3)
b <- data.frame(b_id,b_no)
I'm looking for a data frame similar to this:
output_id <- c(2,3)
output_no <- c(2,1)
output <- data.frame(output_id,output_no)
I've tried to adjust the code here, but no luck: How I can select rows from a dataframe that do not match?
We could use anti_join
library(dplyr)
anti_join(a, b, by = c("a_id" = "b_id", "a_no" = "b_no"))
# a_id a_no
#1 2 2
#2 3 1
Or with data.table
library(data.table)
setDT(a)[!b, on = .(a_id = b_id, a_no = b_no)]
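A base R equivalent, assuming the id/no pairs can be compared as pasted strings:
# keep rows of a whose (id, no) pair does not appear in b
a[!(paste(a$a_id, a$a_no) %in% paste(b$b_id, b$b_no)), ]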

Big tasks in R: how to avoid for loops and run faster

My code runs, but very, very slowly. That is a big problem: it has to run quicker. Here is the task:
I have a dataset with telecommunication records, and I want to apply multiple functions to each customer's records and put the results in another data frame.
df1 is the data frame where each row has a unique customer ID and columns with some profile information. df2 is a very big data frame with about 800,000 telecommunication records identified by the customer IDs. Now I want to compute, e.g., the average data usage for each customer in df2 and save the result in df1.
df1 looks like
df1 <- read.table(header = TRUE, sep=",",
text="CUSTOMER_ID,Age,ContractType, Gender
ID1,45,Postpaid,m
ID2,50,Postpaid,f
ID3,35,Postpaid,f
ID4,44,Postpaid,m
ID5,32,Postpaid,m
ID6,48,Postpaid,f
ID7,50,Postpaid,m
ID8,51,Postpaid,f")
df2 looks like
df2 <- read.table(header = TRUE, sep=",",
text="CUSTOMER_ID,EVENT,VOLUME, DURATION, MONTH
ID1,100,500,200,201505
ID1,50,400,150,201506
ID1,80,600,50,201507
ID2,40,800,45,201505
ID2,25,650,120,201506
ID2,65,380,250,201507
ID3,30,950,110,201505
ID3,25,630,85,201506
ID3,15,780,60,201507")
My code looks like this:
USAGE <- c("EVENT", "VOLUME", "DURATION")  # column names of df2
# list of functions I want to apply to df2
StatFunctions <- list(
  max = function(x) max(x),
  mean = function(x) mean(x),
  sum = function(x) sum(x)
)
In my original data set the customer IDs are more complex, so I chose this pattern search for the customer IDs. This is only an excerpt of my code, but the rest has the same problem with the for loops.
func.num <- function(prefix, target.df, n) {
  active.df <- get(target.df)
  return(StatFunctions[[n]](active.df[grep(pattern = prefix,
                                           x = active.df$CUSTOMER_ID), USAGE[m]]))
}
for (x in df1$CUSTOMER_ID) {
  for (m in 1:length(USAGE)) {
    for (n in 1:length(StatFunctions)) {
      df1[df1$CUSTOMER_ID == x,
          paste(names(StatFunctions[n]), USAGE[m], sep = "_")] <-
        func.num(prefix = x, target.df = "df2", n)
    }
  }
}
I know the code is very complicated and should be simplified.
And I want a data frame like this:
Customer_ID Age ContractType Gender max_EVENT mean_EVENT sum_EVENT ... sum_DURATION
ID1          45 Postpaid     m      100       76.67      230       ... 400
So how can I avoid the for loops and make this run faster?
I would use the dplyr package to summarize df2 by customer ID, then merge with df1.
Using df1 and df2 exactly as defined in the question:
df1$CUSTOMER_ID <- gsub(" ", "", df1$CUSTOMER_ID)
df2$CUSTOMER_ID <- gsub(" ", "", df2$CUSTOMER_ID)
library(dplyr)
USAGE <- c("EVENT", "VOLUME", "DURATION")
FUNC <- c("max", "mean", "sum")
dots <- lapply(USAGE, function(u) sprintf("%s(%s)", FUNC, u)) %>% unlist()
dots <- setNames(dots, sub("\\)", "", sub("\\(", "_", dots)))
sum_df <- df2 %>%
  group_by(CUSTOMER_ID) %>%
  summarize_(.dots = dots) %>%
  ungroup()
df1$CUSTOMER_ID <- as.character(df1$CUSTOMER_ID)
sum_df$CUSTOMER_ID <- as.character(sum_df$CUSTOMER_ID)
df1 <- left_join(df1, sum_df)
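Note that summarize_() with .dots is deprecated in current dplyr; with dplyr 1.0 or later the same summary table can be built with across(). A sketch, producing columns named EVENT_max, EVENT_mean, and so on:
library(dplyr)
sum_df <- df2 %>%
  group_by(CUSTOMER_ID) %>%
  summarize(across(c(EVENT, VOLUME, DURATION),
                   list(max = max, mean = mean, sum = sum)))
df1 <- left_join(df1, sum_df, by = "CUSTOMER_ID")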
First we fetch the columns that are to be operated on, and the IDs:
mycols <- c("EVENT","VOLUME","DURATION")
id <- levels(df2$CUSTOMER_ID)
We are going to do this with the (much faster) apply family of functions, which lets us operate on all the columns at once instead of one by one. Create a function that applies such an operation to each of the columns; we will then apply it over each ID.
For taking the mean and the sum, we may use the (very fast) colMeans and colSums.
applyfun <- function(i, FUN){
  FUN(df2[df2$CUSTOMER_ID == i, mycols])
}
For the maximum, we create a similar function:
colMax <- function(colData) {
  apply(colData, MARGIN = 2, max)
}
Apply the three functions:
outmean <- sapply(id, applyfun, colMeans)
outsum <- sapply(id, applyfun, colSums)
outmax <- sapply(id, applyfun, colMax)
out <- data.frame(CUSTOMER_ID = rownames(t(outmean)),
                  mean = t(outmean),
                  sum = t(outsum),
                  max = t(outmax))
Merge the data onto df1:
merge(df1, out, by = "CUSTOMER_ID", all.x = TRUE)
which gives the output:
CUSTOMER_ID Age ContractType Gender mean.EVENT ... max.DURATION
1 ID1 45 Postpaid m 76.66667 ... 200
2 ID2 50 Postpaid f 43.33333 ... 250
3 ID3 35 Postpaid f 23.33333 ... 110
4 ID4 44 Postpaid m NA ... NA
I had some whitespace problems with CUSTOMER_ID in your examples of df1 and df2, which I suppose you do not have. To fix this I used:
df1$CUSTOMER_ID <- as.factor(trimws(df1$CUSTOMER_ID))
df2$CUSTOMER_ID <- as.factor(trimws(df2$CUSTOMER_ID))
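Staying in base R, aggregate can also compute all three statistics per customer in one call. A sketch; aggregate returns matrix columns, which do.call(data.frame, ...) flattens into EVENT.max, EVENT.mean, and so on:
agg <- aggregate(cbind(EVENT, VOLUME, DURATION) ~ CUSTOMER_ID, data = df2,
                 FUN = function(x) c(max = max(x), mean = mean(x), sum = sum(x)))
agg <- do.call(data.frame, agg)  # flatten the matrix columns
merge(df1, agg, by = "CUSTOMER_ID", all.x = TRUE)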

Select row by level of a factor

I have a data frame, df2, containing observations grouped by an ID factor that I would like to subset. I have used another function to identify which row within each factor group I want to select. This is shown below in df:
df <- data.frame(ID = c("A", "B", "C"),
                 pos = c(1, 3, 2))
df2 <- data.frame(ID = c(rep("A", 5), rep("B", 5), rep("C", 5)),
                  obs = c(1:15))
In df, pos corresponds to the index of the row that I want to select within the factor level named in ID, not in the whole data frame df2. I'm looking for a way to select the rows for each ID according to the right index (their row number within each factor level of df2).
So, in this example, I want to select the first value in df2 with ID == 'A', the third value in df2 with ID == 'B' and the second value in df2 with ID == 'C'.
This would then give me:
df3 <- data.frame(ID = c("A", "B", "C"),
                  obs = c(1, 8, 12))
dplyr
library(dplyr)
merge(df,df2) %>%
group_by(ID) %>%
filter(row_number() == pos) %>%
select(-pos)
# ID obs
# 1 A 1
# 2 B 8
# 3 C 12
base R
df2m <- merge(df,df2)
do.call(rbind,
by(df2m, df2m$ID, function(SD) SD[SD$pos[1], setdiff(names(SD),"pos")])
)
by splits the merged data frame df2m by df2m$ID and operates on each part; it returns the results in a list, so they must be combined with rbind at the end. Each subset of the data (associated with each value of ID) is filtered by pos, and the "pos" column is dropped using normal data.frame syntax.
data.table, suggested by @DavidArenburg in a comment
library(data.table)
setkey(setDT(df2), "ID")[df][,
  .SD[pos[1L], !"pos", with = FALSE],
  by = ID]
The first part, setkey(setDT(df2), "ID")[df], is the merge. After that, the resulting table is split by = ID, and each Subset of Data, .SD, is operated on. pos[1L] subsets in the normal way, while !"pos", with = FALSE drops the pos column.
See @eddi's answer for a better data.table approach.
Here's a base R solution:
df2$pos <- ave(df2$obs, df2$ID, FUN=seq_along)
merge(df, df2)
ID pos obs
1 A 1 1
2 B 3 8
3 C 2 12
If df2 is sorted by ID, you can just do df2$pos <- sequence(table(df2$ID)) for the first line.
Using data.table version 1.9.5+:
setDT(df2)[df, .SD[pos], by = .EACHI, on = 'ID']
which merges on the ID column, then selects the pos-th row for each row of df.
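Another base R route: split df2 by ID and pick the pos-th row of each piece. A sketch, assuming df2 is still a plain data frame and each ID has exactly one pos:
parts <- split(df2, df2$ID)
# look up each ID's piece, take its pos-th row, then stack the rows
df3 <- do.call(rbind, Map(function(g, p) g[p, ], parts[as.character(df$ID)], df$pos))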

Matching data from unequal length data frames in R

This seems like it should be really simple. I've got 2 data frames of unequal length in R; one is simply a random subset of the larger data set, so they share exactly the same data and a UniqueID that is exactly the same. What I would like to do is put an indicator, say a 0 or 1, in the larger data set that says a row is in the smaller data set.
I can use which(long$UniqID %in% short$UniqID), but I can't seem to figure out how to match this indicator back to the long data set.
I made some sample data:
long<-data.frame(UniqID=sample(letters[1:20],20))
short<-data.frame(UniqID=sample(letters[1:20],10))
You can use %in% without which() to get TRUE and FALSE values, and then convert them to 0 and 1 with as.numeric():
long$sh <- as.numeric(long$UniqID %in% short$UniqID)
I'll use @AnandaMahto's data to illustrate another way, using duplicated, which works whether or not you have a unique ID column. The idea: stack df2 on top of df1, so any df1 row that also appears in df2 gets flagged as a duplicate; dropping the first nrow(df2) flags leaves one indicator per row of df1.
Case 1: Has unique id column
set.seed(1)
df1 <- data.frame(ID = 1:10, A = rnorm(10), B = rnorm(10))
df2 <- df1[sample(10, 4), ]
transform(df1,
          indicator = 1 * duplicated(rbind(df2, df1)[, "ID", drop = FALSE])[-seq_len(nrow(df2))])
Case 2: Has no unique id column
set.seed(1)
df1 <- data.frame(A = rnorm(10), B = rnorm(10))
df2 <- df1[sample(10, 4), ]
transform(df1, indicator = 1 * duplicated(rbind(df2, df1))[-seq_len(nrow(df2))])
The answers so far are good. However, a question was raised: what if there isn't a "UniqID" column?
At that point, perhaps merge can be of assistance:
Here's an example using merge and %in% where an ID is available:
set.seed(1)
df1 <- data.frame(ID = 1:10, A = rnorm(10), B = rnorm(10))
df2 <- df1[sample(10, 4), ]
temp <- merge(df1, df2, by = "ID")$ID
df1$matches <- as.integer(df1$ID %in% temp)
And a similar example where an ID isn't available:
set.seed(1)
df1_NoID <- data.frame(A = rnorm(10), B = rnorm(10))
df2_NoID <- df1_NoID[sample(10, 4), ]
temp <- merge(df1_NoID, df2_NoID, by = "row.names")$Row.names
df1_NoID$matches <- as.integer(rownames(df1_NoID) %in% temp)
You can directly use the logical vector as a new column:
long$Indicator <- 1*(long$UniqID %in% short$UniqID)
See if this can get you started:
long <- data.frame(UniqID=sample(1:100)) #creating a long data frame
short <- data.frame(UniqID=long[sample(1:100, 30), ]) #creating a short one with the same ids.
long$indicator <- long$UniqID %in% short$UniqID #creating an indicator column in long.
> head(long)
UniqID indicator
1 87 TRUE
2 15 TRUE
3 100 TRUE
4 40 FALSE
5 89 FALSE
6 21 FALSE
