I have 100 matrices, each with 604800 columns and 101 rows.
For each matrix, I need to reduce the number of columns to 60480 by averaging every 10 adjacent columns.
For example, for a vector
c(1,2,3,4,5,6,7,8,9,10,...)
the 5-column average would be:
c(3,8,13,18,...)
The code I am using to do this is:
col.av = tapply(col, rep(1:(length(col)/10), each = 10), mean)
where col is one of my 101 x 604800 matrices. I have a for loop that iterates over the 100 matrices; however, my problem is the time needed to compute a single run.
Even for a single matrix it takes 20+ minutes to execute, which is not feasible.
Are there any suggestions on how I can improve the speed of computation?
Thanks
If you are fine with a for loop, this one works for your case:
col.av <- matrix(0, nrow(col), ncol(col)/10)
for (i in 1:ncol(col.av)) {
  col.av[, i] <- rowMeans(col[, (10*(i-1)+1):(10*i)])
}
Or, without a for loop, using a custom function for readability. You can always wrap this in your for loop over the 100 matrices or in a call to apply.
# generate data
nc <- 604800
nr <- 101
test_m <- matrix(rnorm(nc*nr), ncol = nc)

# function to get row means over 'window'-column blocks
get_rowmeans <- function(mm, window = 10){
  indices <- seq(1, ncol(mm), by = window)
  res <- sapply(indices, function(i){
    return(rowMeans(mm[, i:(i + (window - 1))]))
  })
  res
}
tt <- get_rowmeans(test_m)
#check one
> all(tt[,1]==rowMeans(test_m[,1:10]))
[1] TRUE
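For completeness, here is a fully vectorised sketch (it assumes the number of columns is an exact multiple of the window) that lets rowsum add up blocks of columns on the transposed matrix instead of looping:
window <- 10
grp <- rep(seq_len(ncol(test_m) / window), each = window)
tt2 <- t(rowsum(t(test_m), group = grp) / window)  # 101 x 60480 matrix of block means
all.equal(tt2[, 1], rowMeans(test_m[, 1:10]))      # should be TRUE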
I am trying to multiply the values stored in a list containing 1,000 values with another list containing ages. Ultimately, I want to store the resulting 1,000 rows in a data frame.
I wonder whether it's better to use the lapply function or a for loop here.
list 1
lambdaSamples1 <- lapply(
floor(runif(numSamples, min = 1, max = nrow(mcmcMatrix))),
function(x) mcmcMatrix[x, lambdas[[1]]])
The output is 1,000 different values in a list.
list 2
ager1= 14:29
What I want to do is
for (i in 1:numSamples) {
  assign(paste0("newRow1_", i), 1 - exp(-lambdaSamples1[[i]] * ager1))
}
Now I have 1,000 rows of values that I want to store in a predetermined data frame, outDf_1 (nrow = 1000, ncol = length(ager1)).
I tried:
for (i in 1:numSamples) {
  outDf_1[i, ] <- newRow1_i
}
I want to store newRow1_1, ..., newRow1_1000 in the 1,000 rows of the outDf_1 data frame.
Should I approach this a different way?
I think you're overcomplicating this a bit. Many operations in R are vectorized, so you shouldn't need lapply or for loops for this. You didn't give us any data to work with, but the code below should do what you want in a more straightforward and faster way.
lambdaSamples1 <- mcmcMatrix[sample(nrow(mcmcMatrix), numSamples, replace=T),
lambdas[[1]]]
outDF_1 <- 1 - exp(-lambdaSamples1 %*% t(ager1))
Just note that this makes outDF_1 a matrix, not a data frame.
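For a sense of the shapes involved, here is a toy illustration (the lambda values are made up, not draws from the OP's mcmcMatrix):
lambdaSamples1 <- c(0.1, 0.2, 0.3)                # 3 made-up sampled lambdas
ager1 <- 14:29                                    # 16 ages
outDF_1 <- 1 - exp(-lambdaSamples1 %*% t(ager1))  # 3 x 16 matrix, one row per sample
outDF_1 <- as.data.frame(outDF_1)                 # convert only if a data frame is really needed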
To do this for multiple ages, you could use a loop to save your resulting matrices in a list:
outDF <- list()
x <- 5
for (i in seq_len(x)) {
lambdaSamples <- mcmcMatrix[sample(nrow(mcmcMatrix), numSamples, replace=T),
lambdas[[1]]]
outDF[[i]] <- 1 - exp(-lambdaSamples %*% t(ager[[i]]))
}
Here, ager1, ..., agerx are expected to be stored in a list (ager).
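For example, that list could be built along these lines (ager2 through ager5 are placeholders for your own age vectors):
ager <- list(ager1, ager2, ager3, ager4, ager5)
# or, if they all follow the same naming pattern:
ager <- mget(paste0("ager", 1:x))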
I understand for loops are slow in R, and the suite of apply() functions is designed to be used instead (in many cases).
However, I can't figure out how to use those functions in my situation, and advice would be greatly appreciated.
I have a list/vector of values (say length = 10,000), and at every point, starting at the 21st value, I need to take the standard deviation of the trailing 20 values. So at the 21st value, I take the SD of values 1 to 21; at the 22nd value, SD(2:22); and so on.
So you see I have a rolling window where I need to take the SD() of the previous 20 indices. Is there any way to accomplish this faster, without a for-loop?
I found a solution to my question.
The zoo package has a function called "rollapply" which does exactly that: uses apply() on a rolling window basis.
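A minimal sketch of that approach, using a window of 21 values to match SD(1:21), SD(2:22), ... from the question:
library(zoo)
x <- rnorm(10000)                              # dummy data
roll_sd <- rollapply(x, width = 21, FUN = sd)  # rolling SD over 21-value windows
length(roll_sd)                                # 10000 - 21 + 1 results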
library(microbenchmark)
library(ggplot2)
# dummy vector
c <- 50
x <- sample(1:100, c, replace=T)
# parameter
y <- 20 # length of each vector
z <- length(x) - y # final starting index
# benchmark
xx <- microbenchmark(
  lapply = {a <- lapply(1:z, \(i) sd(x[i:(i+y)]))},
  loop = {
    b <- vector("list", z)
    for (i in 1:z) {
      b[[i]] <- sd(x[i:(i+y)])
    }
  },
  times = 30
)
# plot
autoplot(xx) +
ggtitle(paste('vector of size', c))
It would appear that, while lapply has the speed advantage for smaller vectors, a loop should be used with longer vectors.
I would maintain, however, that loops are not slow per se as long as they are not applied incorrectly (e.g. iterating over rows).
I'm trying to calculate the differences between all pairs of points in a vector of length 10605 in R. For example, I am trying to do this:
for (i in 1:10605){
  for (j in 1:10605){
    differences[i] = housedata$Mean_household_income[i] - housedata$Mean_household_income[j]
  }
}
It is taking so long to compute, and I'm thinking there's a more timely way to calculate the difference between all the points with each other in this vector. Does anyone have any suggestions?
Thanks!
Seems like the dist function should do that. Distance matrices are only lower triangular because distance(x,y) == distance(y,x):
my.distances <- dist(housedata$Mean_household_income)
It's going to be faster since it's done in C code. Just type dist at the console to see the implementation.
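A sketch of how that might look; note that for one-dimensional input, dist returns absolute (unsigned) differences:
d <- dist(housedata$Mean_household_income)  # lower triangle of |x_i - x_j|
diff_mat <- as.matrix(d)                    # expand to a full symmetric matrix if needed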
You could loop through an incrementally shifted/wrapped copy of the vector and subtract the two vectors. You still have to loop through the length of the data once and shift and subtract the vector each time, but it will probably save some time.
Here is an example:
# make a shift/wrap function
shift <- function(df, offset){
  df[((1:length(df)) - 1 - offset) %% length(df) + 1]
}

# make some data
data <- seq(1, 4)

# make an empty vector to hold the results
difs <- vector()

# loop through the data
for (i in 1:length(data)){
  shifted <- shift(data, i)
  result <- data - shifted
  difs <- c(difs, result)
}

print(difs)
What about using outer? It uses a vectorized function (here -) on all combinations of two vectors and stores the results in a matrix.
For example,
x <- runif(10605)
system.time(
differences <- outer(x, x, '-')
)
takes one second on my computer.
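As a quick sanity check, entry [i, j] of the result is x[i] - x[j]:
all.equal(differences[2, 5], x[2] - x[5])  # TRUE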
dataset <- matrix(rnorm(100), 20, 5)
My dataset is a matrix of 100 returns, of 5 assets over 20 days.
I want to calculate the average return for each asset, over rows 1:10 and rows 11:20.
Then I want to store the returns so computed in two vectors, which in turn go into a list.
The following list should hold the two vectors of returns computed over rows 1:10 and 11:20.
returns <- vector(mode="list", 2)
I have implemented a for loop, as reported below, to calculate the mean of returns only over rows 1:10.
assets <- 5
r <- rep(0, assets) # this vector should include the returns over rows 1:10
for (i in 1:assets){
  r[i] <- mean(dataset[1:10, i])
}
returns[[1]] <- r
How could I adapt this for loop to also calculate the mean of returns over rows 11:20?
I have tried to "index" the rows of the dataset, in the following way.
time <- c(1, 10, 11, 20)
and then implement a double for loop, but the lengths are different. Moreover, in this case I have difficulties managing the vector "r", because I should now have two vectors rather than only one as before.
for (j in 1:length(time)){
  for (i in 1:assets){
    r[i] <- mean(dataset[1:10, i])
  }
}
returns[[1]] <- r
You don't even need a for loop. You can use colMeans
returns <- vector(mode="list", 2)
returns[[1]] <- colMeans(dataset[1:10,])
returns[[2]] <- colMeans(dataset[11:20,])
Using a for loop, your solution could be something like the following
for (i in 1:assets){
  returns[[1]] <- c(returns[[1]], mean(dataset[1:10, i]))
  returns[[2]] <- c(returns[[2]], mean(dataset[11:20, i]))
}
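If the row windows ever change, the same idea generalises with lapply over a list of row ranges (a sketch; the boundaries here are just the ones from the question):
windows <- list(1:10, 11:20)
returns <- lapply(windows, function(rows) colMeans(dataset[rows, , drop = FALSE]))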
Preliminaries: this question is mostly of educational value; the actual task at hand is completed, even if the approach is not entirely optimal. My question is whether the code below can be optimized for speed and/or implemented more elegantly, perhaps using additional packages such as plyr or reshape. Run on the actual data it takes about 140 seconds, much longer than on the simulated data, since some of the original rows contain nothing but NA and additional checks have to be made. For comparison, the simulated data are processed in about 30 seconds.
Conditions: the dataset contains 360 variables, 30 times the set of 12. Let's name them V1_1, V1_2... (first set), V2_1, V2_2 ... (second set) and so forth. Each set of 12 variables contains dichotomous (yes/no) responses, in practice corresponding to a career status. For instance: work (yes/no), study (yes/no) and so forth, in total 12 statuses, repeated 30 times.
Task: the task at hand is to recode each set of 12 dichotomous variables into a single variable with 12 response categories (e.g. work, study... ). Ultimately we should get 30 variables, each with 12 response categories.
Data: I cannot post the actual dataset, but here is a good simulated approximation:
randomRow <- function() {
  # make a row with a single 1 and some NA's
  sample(x = c(rep(0, 9), 1, NA, NA), size = 12, replace = FALSE)
}

# create a data frame with 12 variables and 1500 cases
makeDf <- function() {
  data <- matrix(NA, ncol = 12, nrow = 1500)
  for (i in 1:1500) {
    data[i, ] <- randomRow()
  }
  return(data)
}

mydata <- NULL
# combine 30 of these data frames horizontally
for (i in 1:30) {
  mydata <- cbind(mydata, makeDf())
}
mydata <- as.data.frame(mydata) # example data ready
My solution:
# Divide the dataset into a list with 30 data frames, each with 12 variables
S1 <- lapply(1:30, function(i) {
  Z <- rep(1:30, each = 12) # define selection vector
  mydata[Z == i]            # use selection vector to get groups of 12 variables
})

recodeDf <- function(df) {
  result <- as.numeric(apply(df, 1, function(x) {
    if (any(!is.na(x))) which(x == 1) else NA # return the position of the "1" per row
  })) # the if/else check is for the real data
  return(result)
}

# Combine individual position vectors into a data frame
final.df <- as.data.frame(do.call(cbind, lapply(S1, recodeDf)))
All in all, there is a double *apply: one across the list, the other across the data frame rows. This makes it a bit slow. Any suggestions? Thanks in advance.
Here is an approach that is basically instantaneous (system.time ~ 0.1 seconds). It uses data.table's set. The columnMatch component will depend on your data, but if it is every 12 columns, then the following will work.
MYD <- data.table(mydata)
# a new data.table (changed to numeric : Arun)
newDT <- as.data.table(replicate(30, numeric(nrow(MYD)),simplify = FALSE))
# for each column, which values equal 1
whiches <- lapply(MYD, function(x) which(x == 1))
# create a list of column matches (those you wish to aggregate)
columnMatch <- split(names(mydata), rep(1:30,each = 12))
setattr(columnMatch, 'names', names(newDT))
# cycle through all new columns
# and assign the rows in the new data.table
## Arun: had to generate numeric indices for
## cycling through 1:12, 13:24 in whiches[[.]]. That was the problem.
for (jj in seq_along(columnMatch)) {
  for (ii in seq_along(columnMatch[[jj]])) {
    set(newDT, j = jj, i = whiches[[ii + 12 * (jj - 1)]], value = ii)
  }
}
This would work just as well adding the columns by reference to the original.
Note that set works on data.frames as well.
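For instance, something along these lines should work on a plain data.frame too (a toy sketch, separate from the answer above):
library(data.table)
df <- data.frame(a = 1:3, b = 4:6)
set(df, i = 2L, j = 1L, value = 99L)  # modify df[2, "a"] in place, no copy
df$a                                  # 1 99 3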
I really like #Arun's matrix multiplication idea. Interestingly, if you compile R against an OpenBLAS library, you could get this to operate in parallel.
However, I wanted to provide another solution, perhaps slower than matrix multiplication, that uses your original pattern but is much faster than your implementation:
# Match is usually faster than which, because it only returns the first match
# (and therefore won't fail on multiple matches)
# It also neatly handles your *all NA* case
recodeDf2 <- function(df) apply(df,1,match,x=1)
# You can split your data.frame by column with split.default
# (Using split on data.frame will split-by-row)
S2<-split.default(mydata,rep(1:30,each=12))
final.df2<-lapply(S2,recodeDf2)
If you had a very large data frame, and many processors, you may consider parallelizing this operation with:
library(parallel)
final.df2<-mclapply(S2,recodeDf2,mc.cores=numcores)
# Where numcores is your number of processors.
Having read #Arun's and #mnel's answers, I learned a lot about how to improve this function by avoiding the coercion to an array and by processing the data.frame by column instead of by row. I don't mean to "steal" an answer here; the OP should consider switching the checkbox to #mnel's answer.
I wanted, however, to share a solution that doesn't use data.table and avoids for loops. It is still slower than #mnel's solution, albeit slightly.
nograpes2 <- function(mydata) {
  test <- function(df) {
    l <- lapply(df, function(x) which(x == 1))
    lens <- lapply(l, length)
    rep.int(seq.int(l), times = lens)[order(unlist(l))]
  }
  S2 <- split.default(mydata, rep(1:30, each = 12))
  data.frame(lapply(S2, test))
}
I would also like to add that #Aaron's approach, using which with arr.ind=TRUE, would also be very fast and elegant if mydata started out as a matrix rather than a data.frame. Coercion to a matrix is slower than the rest of the function, so if speed were an issue, it would be worth considering reading the data in as a matrix in the first place.
IIUC, you have only one 1 per 12 columns, with the rest 0's or NA's. If so, the operation can be performed much faster with this idea.
The idea: Instead of going through each row and asking for the position of 1, you could use a matrix with dimensions 1500 * 12 where each row is just 1:12. That is:
mul.mat <- matrix(rep(1:12, nrow(DT)), ncol = 12, byrow=TRUE)
Now, you can multiply this matrix element-wise with each of your 12-column subsets (of the same dimensions, 1500 * 12 here) and then take their rowSums (which is vectorised) with na.rm = TRUE. This directly gives the position of the 1 in each row (because that 1 will have been multiplied by the corresponding value between 1 and 12).
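On a single 12-column block, the idea looks like this in base R (just a sketch to illustrate, not the data.table code below):
sub <- as.matrix(mydata[, 1:12])                  # one group of 12 columns
mul.mat <- matrix(rep(1:12, nrow(sub)), ncol = 12, byrow = TRUE)
pos1 <- rowSums(sub * mul.mat, na.rm = TRUE)      # position of the 1 in each row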
data.table implementation: Here, I'll use data.table to illustrate the idea. Since it creates columns by reference, I'd expect that the same idea used on a data.frame would be a tad slower, although it should drastically speed up your current code.
require(data.table)
DT <- data.table(mydata)
ids <- seq(1, ncol(DT), by = 12)

# for multiplying with each subset and taking rowSums to get position of 1
mul.mat <- matrix(rep(1:12, nrow(DT)), ncol = 12, byrow = TRUE)

for (i in ids) {
  sdcols <- i:(i + 12 - 1)
  # keep appending the new columns by reference to the original data
  DT[, paste0("R", i %/% 12 + 1) := rowSums(.SD * mul.mat, na.rm = TRUE),
     .SDcols = sdcols]
}

# delete all original 360 columns by reference from the original data
DT[, grep("V", names(DT), value = TRUE) := NULL]
Now, you'll be left with 30 columns that correspond to the position of 1's. On my system, this takes about 0.4 seconds.
all(unlist(final.df) == unlist(DT)) # not a fan of `identical`
# [1] TRUE
Another way to do this with base R is to simply get the values you want to put in the new matrix and fill them in directly with matrix indexing.
idx <- which(mydata == 1, arr.ind = TRUE)  # get indices of the 1's
i <- (idx[, 2] - 1) %% 12 + 1              # position of the 1 within its group of 12
idx[, 2] <- (idx[, 2] - 1) %/% 12 + 1      # get "group" and put it in the "col" of idx
out <- array(NA, dim = c(1500, 30))        # make empty matrix
out[idx] <- i                              # and fill it in!