Create data.frame conditional on another df without for loop - r

I'm trying to create a data.frame that takes different values depending on the value of a reference data.frame. I only know how to do this with a "for loop", but have been advised to avoid for loops in R... and my actual data have ~500,000 rows x ~200 columns.
a <- as.data.frame(matrix(rbinom(10,1,0.5),5,2,dimnames=list(c(1:5),c("a","b"))))
b <- data.frame(v1=c(2,10,12,5,11,3,4,14,2,13),v2=c("a","b","b","a","b","a","a","b","a","b"))
c <- as.data.frame(matrix(0,5,2))
for (i in 1:5) {
  for (j in 1:2) {
    if (a[i, j] == 1) {
      c[i, j] <- mean(b$v1[b$v2 == colnames(a)[j]])
    } else {
      c[i, j] <- mean(b$v1)
    }
  }
}
c
I create data.frame "c" based on the value in each cell, and the corresponding column name, of data.frame "a".
Is there another way to do this? Indexing? Using data.table? Maybe apply functions?
Any and all help is greatly appreciated!

(a == 0) * mean(b$v1) + t(t(a) * c(tapply(b$v1, b$v2, mean)))
Run in pieces to understand what's happening. Also, note that this assumes ordered names in a (and 0's and 1's as entries in it, as per OP).
An alternative to a bunch of t's as above is using mapply (this assumes a is a data.frame or data.table and not a matrix, while the above doesn't care):
(a == 0) * mean(b$v1) + mapply(`*`, a, tapply(b$v1, b$v2, mean))
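For a quick sanity check, assuming a, b, and the loop-built c from the question are in the workspace, both expressions reproduce the loop result:
res1 <- (a == 0) * mean(b$v1) + t(t(a) * c(tapply(b$v1, b$v2, mean)))
res2 <- (a == 0) * mean(b$v1) + mapply(`*`, a, tapply(b$v1, b$v2, mean))
all.equal(as.matrix(c), res1, check.attributes = FALSE)  # TRUE
all.equal(as.matrix(c), res2, check.attributes = FALSE)  # TRUE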

#subsetting a matrix is faster
res <- as.matrix(a)
#calculate fill-in values outside the loop
in1 <- mean(b$v1)
in2 <- sapply(colnames(a),function(i) mean(b$v1[b$v2==i]))
#loop over columns and use a vectorized approach
for (i in seq_len(ncol(res))) {
  res[, i] <- ifelse(res[, i] == 0, in1, in2[i])
}
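As a quick check against the loop result c from the question (assuming it is still in the workspace):
all.equal(as.matrix(c), res, check.attributes = FALSE)  # TRUE
as.data.frame(res) gets you back to a data.frame if you need one.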

Related

Saving rows into variables in R

I have an 18-by-48 matrix.
Is there a way to save each of the 18 rows automatically in a separate variable (e.g., from r1 to r18)?
I'd definitely advise against splitting a data.frame or matrix into its constituent rows. If I absolutely had to split the rows up, I'd put them in a list and then operate from there.
If you desperately had to split it up, you could do something like this:
toy <- matrix(1:(18*48),18,48)
variables <- list()
for (i in 1:nrow(toy)) {
  variables[[paste0("variable", i)]] <- toy[i, ]
}
list2env(variables, envir = .GlobalEnv)
I'd be inclined to stop after the for loop and avoid the list2env. But I think this should give you your result.
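If a list with one element per row is good enough (and it usually is), you can skip the loop entirely. A sketch using asplit() (base R, version 3.6 or later), assuming the same toy matrix:
variables <- asplit(toy, 1)                        # list with one element per row
names(variables) <- paste0("variable", seq_len(nrow(toy)))
variables[["variable1"]]                           # the first row of toy
# list2env(variables, envir = .GlobalEnv)          # only if you really need 18 separate objects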
I believe you can select a row r from your dataframe d by indexing without a column specified:
var <- d[r,]
Thus you can extract all of the rows into a variable by using
var <- d[1:nrow(d),]
Where var[1, ] is the first row, var[2, ] the second, etc. Not sure if this is exactly what you are looking for. Why would you want 18 different variables, one for each row?
Assuming your matrix is mat:
result <- data.frame(t(mat))
colnames(result) <- paste("r", 1:18, sep="")
attach(result)
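After attach(result), each row of mat is available by name; a quick check (note that result$r1 gives the same thing without attach(), which is generally safer):
all.equal(r1, mat[1, ], check.attributes = FALSE)   # TRUE: r1 is the first row of mat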

converting FOR loop to apply in R

I'm trying to learn how to use apply, but am getting a bit stuck. This loop is converting all of the columns (except the first four) to their accumulated sums.
Can anyone help me? Thanks!
for (i in seq_along(newbuilds.byfuel.bydate)) {
  if (i >= 5) {
    newbuilds.byfuel.bydate[i] <- cumsum(newbuilds.byfuel.bydate[i])
  }
}
This is how I would do this if your object is a data.frame:
## dummy dataset
x <- mtcars
Use lapply to loop over the desired columns, calculate the cumsum, and then overwrite the original columns:
x[5:ncol(x)] <- lapply(x[5:ncol(x)], cumsum)
Alternatively, select the columns by dropping the first four with negative indexing:
x[-(1:4)] <- lapply(x[-(1:4)], cumsum)
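To convince yourself the lapply version matches the original loop, run both on copies of the dummy data and compare (a quick check using mtcars as the stand-in):
x <- mtcars
y <- mtcars
x[5:ncol(x)] <- lapply(x[5:ncol(x)], cumsum)
for (i in seq_along(y)) {
  if (i >= 5) {
    y[i] <- cumsum(y[i])
  }
}
all.equal(x, y)  # TRUE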

Count the occurrences of unequal numbers from a data frame

I have a data frame df with four columns. I would like to find the number of unequal values for each pair of rows.
I have tried to do it using a for loop and it works, but it takes a very long time to run. Please see my code below:
dist_mat <- matrix(0, nrow(df), nrow(df))
for (i in 1:nrow(df)) {
  for (j in 1:nrow(df)) {
    dist_mat[i, j] <- sum(df[i, 1:4] != df[j, 1:4])
  }
}
I thought there would be other way of doing this fast. Any suggestion is appreciated.
P.S. The data is numeric.
Given that the matrix is symmetric, and the diagonal will be zero, you don't need to loop twice over each row so you can cut the looping down by over half:
for (i in 1:(nrow(df) - 1)) {
  for (j in (i + 1):nrow(df)) {
    dist_mat[i, j] <- sum(df[i, 1:4] != df[j, 1:4])
  }
}
# mirror the upper triangle into the lower triangle (note the transpose)
dist_mat[lower.tri(dist_mat)] <- t(dist_mat)[lower.tri(dist_mat)]
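A small worked example (a hypothetical 3-row df), showing that the halved loop plus the mirroring step gives the full symmetric matrix:
df <- data.frame(a = c(1, 1, 2), b = c(1, 2, 2), c = c(0, 0, 1), d = c(5, 5, 5))
dist_mat <- matrix(0, nrow(df), nrow(df))
for (i in 1:(nrow(df) - 1)) {
  for (j in (i + 1):nrow(df)) {
    dist_mat[i, j] <- sum(df[i, 1:4] != df[j, 1:4])
  }
}
dist_mat[lower.tri(dist_mat)] <- t(dist_mat)[lower.tri(dist_mat)]
dist_mat
#      [,1] [,2] [,3]
# [1,]    0    1    3
# [2,]    1    0    2
# [3,]    3    2    0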
This is a job for combn:
DF <- data.frame(x=rep(1,6), y=rep(1:2,3))
combn(seq_len(nrow(DF)), 2, FUN = function(ind, df) {
  c(ind[1], ind[2], sum(df[ind[1], ] != df[ind[2], ]))
}, df = as.matrix(DF))
Note that I convert the data.frame into a matrix, since matrix subsetting is faster than data.frame subsetting. Depending on your data types this could become a problem (mixed column types get coerced to a common type, e.g. character).
If your distance measure wasn't so unusual, dist would be helpful (and fast).
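Since the data are numeric, another loop-free option (a sketch, not from the answers above) is to build the count column by column with outer():
m <- as.matrix(DF)   # DF from the combn example
dist_mat <- Reduce(`+`, lapply(seq_len(ncol(m)),
                               function(j) outer(m[, j], m[, j], `!=`)))
dist_mat   # full symmetric matrix of pairwise counts of unequal values
This trades memory for speed: it allocates one n-by-n matrix per column, so it suits moderate row counts.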

How to vectorize a for loop in R

I'm trying to clean this code up and was wondering if anybody has any suggestions on how to run this in R without a loop. I have a dataset called data with 100 variables and 200,000 observations. What I want to do is essentially expand the dataset by multiplying each observation by a specific scalar and then combine the data together. In the end, I need a data set with 800,000 observations (I have four categories to create) and 101 variables. Here's a loop that I wrote that does this, but it is very inefficient and I'd like something quicker and more efficient.
datanew <- c()
for (i in 1:51) {
  for (k in 1:6) {
    for (m in 1:4) {
      sub <- subset(data, data$var1 == i & data$var2 == k)
      sub[, 4:(ncol(sub) - 1)] <- filingstat0711[i, k, m] * sub[, 4:(ncol(sub) - 1)]
      sub$newvar <- m
      datanew <- rbind(datanew, sub)
    }
  }
}
Please let me know what you think and thanks for the help.
Below is some sample data with 2K observations instead of 200K
# SAMPLE DATA
#------------------------------------------------#
mydf <- as.data.frame(matrix(rnorm(100 * 20e2), ncol=100, nrow=20e2))
var1 <- c(sapply(seq(41), function(x) sample(1:51)))[1:20e2]
var2 <- c(sapply(seq(2 + 20e2/6), function(x) sample(1:6)))[1:20e2]
#----------------------------------#
mydf <- cbind(var1, var2, round(mydf[3:100]*2.5, 2))
filingstat0711 <- array(round(rnorm(51*6*4)*1.5 + abs(rnorm(2)*10)), dim=c(51,6,4))
#------------------------------------------------#
You can try the following. Notice that we replaced the first two for loops with a call to mapply and the third for loop with a call to lapply.
Also, we are creating two vectors that we will combine for vectorized multiplication.
# create a table of the i-k index combinations using `expand.grid`
ixk <- expand.grid(i=1:51, k=1:6)
# Take a look at what expand.grid does
head(ixk, 60)
# create two vectors for multiplying against our dataframe subset
multpVec <- c(rep(c(0, 1), times=c(4, ncol(mydf)-4-1)), 0)
invVec <- !multpVec
# example of how we will use the vectors
(multpVec * filingstat0711[1, 2, 1] + invVec)
# Instead of for loops, we can use mapply.
newdf <-
mapply(function(i, k)
# The function that you are `mapply`ing is:
# rbind'ing a list of dataframes, which were subsetted by matching var1 & var2
# and then multiplying by a value in filingstat
do.call(rbind,
# iterating over m
lapply(1:4, function(m)
# the cbind is for adding the newvar=m, at the end of the subtable
cbind(
# we transpose twice: first the subset, to multiply by our vector,
# then the result, to get back our original form
t( t(subset(mydf, var1 == i & var2 == k)) *
(multpVec * filingstat0711[i,k,m] + invVec)),
# this is an argument to cbind
"newvar"=m)
)),
# the two lists you are passing as arguments are the columns of the expanded grid
ixk$i, ixk$k, SIMPLIFY=FALSE
)
# flatten the data frame
newdf <- do.call(rbind, newdf)
Two points to note:
Try not to use names like data, table, df, or sub, which are already the names of commonly used functions.
In the above code I used mydf in place of data.
You can use apply(ixk, 1, fu..) instead of the mapply that I used, but I think mapply makes for cleaner code in this situation
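As a sanity check on the result (with the sample sizes above), every row of mydf should appear once for each m = 1..4, plus the extra newvar column:
dim(mydf)                 # should be 2000 rows, 100 columns
dim(newdf)                # should be 8000 rows, 101 columns (newdf is a matrix after the rbind)
table(newdf[, "newvar"])  # 2000 rows for each of m = 1, 2, 3, 4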

Subsetting a list of different data types

I have a list of different data types (factors, data.frames, and vectors, all with the same length or number of rows). What I would like to do is subset each element of the list by a vector (let's call it rows) that represents row names.
If it was a data.frame() I would:
x <- x[rows,]
If it was a vector() or factor() I would:
x <- x[rows]
So, I've been playing around with this:
x <- lapply(my_list, function(x) ifelse(is.data.frame(x), x[rows,], x[rows]))
So, how do I accomplish my goal of getting a list of subsetted data?
I think this is YAIEP (Yet Another If Else Problem). From ?ifelse:
ifelse returns a value with the same shape as test which is filled with elements selected from either yes or no depending on whether the element of test is TRUE or FALSE.
See the trouble? Same shape as test.
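A one-liner shows the consequence: with a length-one test, ifelse() returns a length-one result, not the whole yes value:
ifelse(TRUE, 1:10, 0)  # returns 1, not 1:10 -- the result has the shape of the test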
So just do this:
l <- list(a = data.frame(x=1:10,y=1:10),b = 1:10, c = factor(letters[1:20]))
rows <- 1:3
fun <- function(x) {
  if (is.data.frame(x)) {
    x[rows, ]
  } else {
    x[rows]
  }
}
lapply(l,fun)
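The same thing can be written inline, since a plain if/else (rather than ifelse) inside the anonymous function is all that's needed:
lapply(l, function(x) if (is.data.frame(x)) x[rows, ] else x[rows])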
