How to optimize 2 loops - r

I am running a simulation trying to find the probability of something taking place in a number of binomial trials. I start by specifying the data:
iter <- 5000
data <- data.frame(prob = runif(300), value = runif(300))
data <- data[sample(nrow(data), iter, replace = TRUE), ]
Then I add the trial columns:
cols <- c("one","two","three","four","five","six",
"seven","eight","nine","ten","eleven","twelve")
data[,cols] <- NA
one contains the result of a single binomial trial, two contains the results of two binomial trials, and so on. If the binomial event takes place in any of the one, two, three, ..., twelve trials, the cell is marked 1, else 0.
Then I run the trials for iter = 5000 simulations:
for (col in 3:14) {
  for (i in 1:iter) if (sum(rbinom((col - 2), 1, data[i, 1])) > 0) data[i, col] <- 1 else data[i, col] <- 0
}
Then I evaluate mean(data$value[data$one == 0]) through mean(data$value[data$twelve == 0]).
My problem is that the simulation code takes forever for iter > 15000:
for (col in 3:14) {
  for (i in 1:iter)
    data[i, col] <- if (sum(rbinom((col - 2), 1, data[i, 1])) > 0) 1 else 0
}
Any ideas?

sim2 <- function(iter) {
  dat <- data.frame(prob = runif(300), value = runif(300))
  dat <- dat[sample(nrow(dat), iter, replace = TRUE), ]
  cols <- c("one", "two", "three", "four", "five", "six",
            "seven", "eight", "nine", "ten", "eleven", "twelve")
  dat[, cols] <- 0
  for (col in 3:14) {
    dat[, col] <- as.numeric(vapply(dat[, 1],
                                    function(p) sum(rbinom((col - 2), 1, p)) > 0,
                                    FUN.VALUE = TRUE))
  }
  vapply(3:14, function(col) mean(dat$value[dat[, col] == 0]), FUN.VALUE = 1)
}
For iter = 16000, this runs in 2.29 s on my machine, compared to an (estimated) 1781 s for your original version. In general, don't assign individual elements of a data frame when you can assign the whole column at once. There may be more improvements possible, but I'll stop at a >750x speedup (and at changing the running time from O(n^2) to O(n)).
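One further improvement worth trying (a sketch only, not benchmarked here): rbinom() is vectorized over its prob argument, so the vapply() can be dropped and each column filled with a single call. This assumes the same dat and cols as in sim2 above.
for (col in 3:14) {
  # one binomial draw of size col - 2 per row, using that row's probability;
  # "at least one success" is equivalent to summing col - 2 Bernoulli draws
  dat[, col] <- as.integer(rbinom(nrow(dat), size = col - 2, prob = dat$prob) > 0)
}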


While loop within function stops even when conditions not met

I'm a bit stumped. I'm attempting to write code that runs Monte Carlo simulations at increasing sample sizes until certain conditions are met. First off, the bit of code that I know does work:
##Step 0 - load packages##
library(tidyverse)
library(ggplot2)
library(ggthemes)

##Step 1 - Define number of cycles per simulation##
ncycles <- 250000

##Step 2 - Define function for generating volumes and checking proportion of failed cycles##
volSim <- function(ncycles) {
  tols <- rnorm(ncycles, 0, 0.3)         # generate n unique tolerances
  vols <- 0                              # establish vols variable within function
  for (tol in 2:ncycles) {               # for loop creates n unique volumes from tolerances
    vols[tol] <- 2.2 + tols[tol] - tols[tol - 1]
  }
  cell <- rnorm(1, 3.398864, 0.4810948)  # generate a unique threshold
  return(c(mean(vols > cell), mean(vols > cell * 2), mean(vols > cell * 20)))  # output a vector of failure rates
}
This works fine and outputs three values equivalent to the proportion of events over multiples of the threshold. Now for the bit that's not behaving:
##Step 3 - Define a function to run multiple iterations of simulation and check convergence ##
regres <- function(ncycles) {
  # Establish parameters used within function
  converged <- FALSE
  fail_rate_5k <- 0
  se_5k <- 0
  ncells <- 0
  fail_rate_10k <- 0
  se_10k <- 0
  fail_rate_100k <- 0
  se_100k <- 0
  n <- 0
  while ((converged == FALSE & n < 6) | n < 4) {
    n <- n + 1
    res <- replicate(2^(n + 5), volSim(ncycles))
    fail_rate_5k[n] <- mean(res[1, ] > 0)
    se_5k[n] <- sqrt(fail_rate_5k[n] * (1 - fail_rate_5k[n]) / 2^(n + 5))
    ncells[n] <- 2^(n + 5)
    fail_rate_10k[n] <- mean(res[2, ] > 0)
    se_10k[n] <- sqrt(fail_rate_10k[n] * (1 - fail_rate_10k[n]) / 2^(n + 5))
    fail_rate_100k[n] <- mean(res[3, ] > 0)
    se_100k[n] <- sqrt(fail_rate_100k[n] * (1 - fail_rate_100k[n]) / 2^(n + 5))
    if ((fail_rate_5k[n] <= 0 | se_5k[n] < 0.5 * fail_rate_5k[n]) &
        (fail_rate_10k[n] <= 0 | se_10k[n] < 0.5 * fail_rate_10k[n]) &
        (fail_rate_100k[n] <= 0 | se_100k[n] < 0.5 * fail_rate_100k[n])) {
      converged <- TRUE
    } else {
      converged <- FALSE
    }
    return(data.frame(k5 = fail_rate_5k, se_k5 = se_5k, ncells_k5 = ncells,
                      k10 = fail_rate_10k, se_k10 = se_10k, ncells_k10 = ncells,
                      k100 = fail_rate_100k, se_k100 = se_100k, ncells_k100 = ncells))
  }
}
The intention is that the simulation will repeat at increasing sample sizes until the standard error for all fail rates (5k, 10k, 100k) is less than half of the fail rate, or the fail rate itself is zero (to avoid a divide-by-zero scenario). Two caveats are that the simulation must run at least four times (the n < 4 condition in the while loop) and stop after a maximum of six.
Now, if I run the code within the regres function in isolation (with ncycles set to 250000), I generate a nice data frame with 5 rows; I can see that n = 5, converged = TRUE, and everything else I expect to be happening within the function is just fine. If I run result <- regres(ncycles), however, it outputs a single-row data frame every time. The while loop is stopping at n = 1 despite the n < 4 condition. I cannot for the life of me figure out why the behaviour is different when the function is called versus when the code inside it is run in isolation.
While I'm really looking to find out why this method is not working, if the method itself is completely outlandish I'm open to using a different approach entirely.
Your return statement is inside the while loop, so the function returns the data.frame at the end of the first iteration (essentially a break before the loop condition is even checked again).
Try:
...
      converged <- TRUE
    } else {
      converged <- FALSE
    }
  }
  return(data.frame(k5 = fail_rate_5k, se_k5 = se_5k, ncells_k5 = ncells,
                    k10 = fail_rate_10k, se_k10 = se_10k, ncells_k10 = ncells,
                    k100 = fail_rate_100k, se_k100 = se_100k, ncells_k100 = ncells))
}
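To see the behaviour in isolation, here is a minimal sketch (not the original code): return() inside any loop exits the enclosing function immediately, so the loop body only ever runs once.
f <- function() {
  for (i in 1:5) {
    return(i)  # exits f() on the first pass, before the loop can continue
  }
}
f()  # returns 1, not 5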

Trouble coding a number of matrix models to run simultaneously

I made a matrix-based population model; however, I would like to run more than one simultaneously, so as to represent different groups of animals and allow dispersing individuals to move between matrices. I originally just repeated everything to get a second matrix, but then I realised that because I run the model using a for loop and break() under certain conditions (when that specific matrix should stop running, i.e. that group has died out), it is, understandably, stopping the whole model rather than just that single matrix.
I was wondering if anyone had any suggestions on the best way to code the model so that instead of breaking, and stopping the whole for loop, it just stops running that specific matrix. I'm a little stumped. I have included a single run of one matrix below.
Also, if anyone has a more efficient way of creating and running 9 matrices than writing everything out 9 times, advice would be much appreciated.
n.steps <- 100
mats <- array(0, c(85, 85, n.steps))
ns <- array(0, c(85, n.steps))
ns[1, 1] <- 0
ns[12, 1] <- rpois(1, 3)
ns[24, 1] <- rpois(1, 3)
ns[85, 1] <- 1
birth <- 4
nextbreed <- 12
for (i in 2:n.steps) {
  # set up an empty matrix
  mat <- matrix(0, nrow = 85, ncol = 85)
  surv.age.1 <- 0.95
  x <- 2:10
  diag(mat[x, (x - 1)]) <- surv.age.1
  surv.age.a <- 0.97
  disp <- 1:74
  disp <- disp * -0.001
  disp1 <- 0.13
  disp <- 1 - (disp + disp1)
  survdisp <- surv.age.a * disp
  x <- 11:84
  diag(mat[x, (x - 1)]) <- survdisp
  if (i == nextbreed) {
    pb <- 1
  } else {
    pb <- 0
  }
  if (pb == 1) {
    nextbreed <- nextbreed + 12
  }
  mat[1, 85] <- pb * birth
  mat[85, 85] <- 1
  death <- sample(c(replicate(1000,
                              sample(c(1, 0), prob = c(0.985, 1 - 0.985), size = 1))), 1)
  if (death == 0) {
    break
  }
  mats[, , i] <- mat
  ns[, i] <- mat %*% ns[, i - 1]
}
group.size <- apply(ns[1:85, ], 2, sum)
plot(group.size)
View(mat)
View(ns)
As somebody else suggested on Twitter, one solution might be to simply turn the matrix into all 0s whenever death happens. It looks to me like death is the probability that a local population disappears? In which case it seems to make good biological sense to just turn the entire population matrix into 0s.
A few other small changes: I made a list of replicate simulations so I could summarize them easily.
If I understand correctly,
death <- sample(c(replicate(1000, sample(c(1, 0), prob = c(0.985, 1 - 0.985), size = 1))), 1)
says " a local population dies completely with probability 1.5% ". In which case, I think you could replace it with rbinom(). I did that below and my plots look similar to those I made with your code.
Hope that helps!
lots <- replicate(100, simplify = FALSE, expr = {
  for (i in 2:n.steps) {
    # set up an empty matrix
    mat <- matrix(0, nrow = 85, ncol = 85)
    surv.age.1 <- 0.95
    x <- 2:10
    diag(mat[x, (x - 1)]) <- surv.age.1
    surv.age.a <- 0.97
    disp <- 1:74
    disp <- disp * -0.001
    disp1 <- 0.13
    disp <- 1 - (disp + disp1)
    survdisp <- surv.age.a * disp
    x <- 11:84
    diag(mat[x, (x - 1)]) <- survdisp
    if (i == nextbreed) {
      pb <- 1
    } else {
      pb <- 0
    }
    if (pb == 1) {
      nextbreed <- nextbreed + 12
    }
    mat[1, 85] <- pb * birth
    mat[85, 85] <- 1
    # one Bernoulli draw per step (note: prob = 0.985 would reproduce the
    # original 1.5% death rate; 0.6 is used here)
    death <- rbinom(1, size = 1, prob = 0.6)
    if (death == 0) {
      mat[] <- 0  # instead of break(): zero the whole matrix so this group stays extinct
    }
    mats[, , i] <- mat
    ns[, i] <- mat %*% ns[, i - 1]
  }
  ns
})
lapply(lots, FUN = function(x) apply(x[1:85, ], 2, sum))
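A possible follow-up (a sketch, not part of the original answer): collect the per-step group sizes from every replicate into one matrix and plot the mean trajectory.
sizes <- sapply(lots, function(x) colSums(x[1:85, ]))  # n.steps x 100 matrix of group sizes
plot(rowMeans(sizes), type = "l",
     xlab = "time step", ylab = "mean group size across replicates")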

improve a for loop with apply inside

I have a data.frame, ordered by the mean column, that looks like this:
        10SE191_2  10SE207  10SE208     mean
7995783  12.64874 13.06391 12.69378 12.73937
8115327  12.69979 12.52285 12.41582 12.50363
8108370  12.58685 12.87818 12.66021 12.45720
7945680  12.46392 12.26087 11.77040 12.36518
7923547  11.98463 11.96649 12.50666 12.33138
8016718  12.81610 12.71548 12.48164 12.32703
I would like to apply a t.test to each row, using as input the intensity values df[i, 1:3] and the mean values from the rows with lower intensities. For example, for the first row I want to compute a t.test of df[1, 1:3] vs the mean values from row 2 to row 6. My code uses a for loop, but my current data.frame has more than 20,000 rows and 24 columns and it takes a long time. Any ideas for improving the code?
Thanks
Code:
temp <- matrix(-9, nrow = dim(matrix.order)[1], ncol = 2)  # create a result matrix
l <- dim(matrix.order)[1]
for (i in 1:l) {
  j <- 1 + i
  if (i < l | j + 2 == l) {  # avoid not enough y observations
    mean.val <- matrix.order[j:l, 4]
    p <- t.test(matrix.order[i, 1:3], mean.val)
    temp[i, 1] <- p$p.value
  } else {
    temp[i, 1] <- 1
  }
}
dput for my df
structure(list(`10SE191_2` = c(12.6487418898415, 12.6997932097351,12.5868508174491, 12.4639169398277, 11.9846348627906, 12.8160978540904), `10SE207` = c(13.0639063105224, 12.522848114011, 12.8781769160682, 12.260865493177, 11.9664905651469, 12.7154788700468), `10SE208` = c(12.6937808736673, 12.4158248856386, 12.6602128982717, 11.7704045448312, 12.5066604109231, 12.4816357798965), mean = c(12.7393707471856, 12.5036313008127, 12.4572035036992, 12.3651842840775, 12.3313821056582, 12.3270331271091)), .Names = c("10SE191_2", "10SE207", "10SE208", "mean"), row.names = c("7995783", "8115327", "8108370", "7945680", "7923547", "8016718"), class = "data.frame")
You can obtain all p-values (if possible) with this command:
apply(df, 1, function(x) {
  y <- df$mean[df$mean < x[4]]
  if (length(y) > 1)
    t.test(x[1:3], y)$p.value
  else NA
})
The function will return NA if there are not enough values for y.
7995783 8115327 8108370 7945680 7923547 8016718
0.08199794 0.15627947 0.04993244 0.50885253 NA NA
Running 20,000 t.tests probably takes a lot of time no matter what. Try using Rprof to find the hot spots. You might also want to use mclapply or similar parallel-processing tools, since your analysis of each row is independent of all other data (which means this is a task well suited to multicore parallel processing).
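A sketch of that parallel idea (hypothetical, not from the original answer; it assumes the full data frame is called df as above, and mclapply() falls back to serial execution on Windows):
library(parallel)

pvals <- unlist(mclapply(seq_len(nrow(df)), function(i) {
  y <- df$mean[df$mean < df$mean[i]]  # means of the rows with lower intensity
  if (length(y) > 1) t.test(unlist(df[i, 1:3]), y)$p.value else NA
}, mc.cores = max(1, detectCores() - 1)))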

statistical moments in R

I've got a data set in R of a variable, repeated 10,000 times and sampled 200 times on each repeat, so a 10,000-by-200 matrix. I would like to calculate statistical moments of the variable up to an arbitrary order, so in the end I would like a numeric vector holding the values of the moments.
I can get the variance and the mean for the data set using colMean and colVar, but they only go so far.
I am also aware of the moments package in R; however, using the all.moments command returns moments for each time course, or treats each column or row as an individual variable, which is not what I want.
Does anyone know an equivalent to colMean and colVar for higher-order moments? And if possible also for cross moments?
Many thanks!
I stole this code from an obscure R package e1071:
theskew <- function(x) {
  x <- as.vector(x)
  sum((x - mean(x))^3) / (length(x) * sd(x)^3)
}

thekurt <- function(x) {
  x <- as.vector(x)
  sum((x - mean(x))^4) / (length(x) * var(x)^2) - 3
}
You can fold these into your code by feeding them one column at a time.
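For example (a sketch; here X stands for the 10,000-by-200 matrix, with one sample per column):
skews <- apply(X, 2, theskew)  # one skewness value per column
kurts <- apply(X, 2, thekurt)  # one excess-kurtosis value per column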
Okay, I did this yesterday; for posterity, here is a loop that will do what I asked.
Provided your data is a time course of a variable you are measuring, and you want the moments of that variable:
rm(list = ls())
yourdata <- read.table("whereveryourdatais/and/variableyouwant")
yourdata <- t(yourdata)    # only do this at your own discretion
mu <- colMeans(yourdata)   # column means

NumMoments <- 5
rawmoments <- matrix(NA, nrow = NumMoments, ncol = ncol(yourdata))
for (i in 1:NumMoments) {
  rawmoments[i, ] <- colMeans(yourdata^i)
}
plot(rawmoments[1, ])

# centre the data about the first raw moment (the column means)
holder <- matrix(NA, nrow = nrow(yourdata), ncol = ncol(yourdata))
middles <- matrix(NA, nrow = 1, ncol = ncol(yourdata))
for (j in 1:nrow(yourdata)) {
  for (o in 1:ncol(rawmoments)) {
    middles[o] <- yourdata[j, o] - rawmoments[1, o]
  }
  holder[j, ] <- middles
}

centmoments <- matrix(NA, nrow = NumMoments, ncol = ncol(yourdata))
for (i in 1:NumMoments) {
  centmoments[i, ] <- colMeans(holder^i)
}
Then centmoments holds the central moments and rawmoments holds the raw moments; you can specify how many moments to take by changing the value of NumMoments.
Note that the first row in "centmoments" will be approximately 0.
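As an aside (a sketch, not part of the original post), the centering loops and the final loop can be replaced by sweep() and sapply() with the same result:
holder <- sweep(yourdata, 2, rawmoments[1, ])  # subtract each column mean
centmoments <- t(sapply(1:NumMoments, function(i) colMeans(holder^i)))  # rows = moment order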
Is this what you're looking for?
X <- matrix(1:12, 3, 4)  # your data
NumMoments <- 5
moments <- matrix(NA, nrow = NumMoments, ncol = ncol(X))
for (i in 1:NumMoments) {
  moments[i, ] <- colMeans(X^i)
}
EDIT:
Okay, apparently you want central moments:
X <- matrix(1:12, 3, 4)
NumMoments <- 5
moments <- matrix(NA, nrow = NumMoments, ncol = ncol(X))
moments[1, ] <- colMeans(X)  # first raw moment (column means), needed for the centering below
Y <- X
for (i in 1:ncol(X)) {
  Y[, i] <- Y[, i] - moments[1, i]
}
for (i in 2:NumMoments) {
  moments[i, ] <- colMeans(Y^i)
}

Faster solution to looped grouped RLE calculation

I have a working solution to my problem, but I will not be able to use it because it is so slow (my calculations predict that the whole simulation will take 2-3 years!). Thus I am looking for a better (faster) solution. This is (in essence) the code I am working with:
N <- 4
x <- NULL
for (i in 1:N) {  # first loop
  v <- sample(0:1, 1000000, 1/2)   # generate data
  v <- as.data.frame(v)            # convert to data frame
  v$t <- rep(1:2, each = 250)      # group
  v$p <- rep(1:2000, each = 500)   # p.number
  # second loop
  for (j in 1:2000) {
    # count rle for group 1 for each p.number
    x <- rbind(x, table(rle(v$v[v$t == 1 & v$p == j])))
    # count rle for group 2 for each p.number
    x <- rbind(x, table(rle(v$v[v$t == 2 & v$p == j])))
  }  # end second loop
}  # end first loop
# total rle counts for both groups 1 & 2
y <- aggregate(x, list(as.numeric(rownames(x))), sum)
In words: the code generates a coin-flip simulation (v). A group factor is generated (1 & 2). A p.number factor is generated (1:2000). The run lengths are recorded for each p.number (1:2000) for both group 1 and group 2 (each p.number has runs in both groups). After N loops (the first loop), the total run lengths are presented as a table (aggregate), that is, the run lengths for each group, for each p.number, totalled over N loops.
I need the first loop because the data that I am working with comes in individual files (so I'm loading the file, calculating various statistics etc and then loading the next file and doing the same). I am much less attached to the second loop, but can't figure out how to replace it with something faster.
What can be done to the second loop to make it (hopefully, a lot) faster?
You are committing the cardinal sin of growing an object within a for() loop in R. Don't (I repeat don't) do this. Allocate sufficient storage for x at the beginning and then fill in x as you go.
x <- matrix(nrow = N * (2000 * 2), ncol = ??)
Then in the inner loop
x[ii, ] <- table(rle(....))
where ii is a loop counter that you initialise to 1 before the first loop and increment within the second loop:
x <- matrix(nrow = N * (2000 * 2), ncol = ??)
ii <- 1
for (i in 1:N) {
  ....  # stuff here
  for (j in 1:2000) {
    ....  # stuff here
    x[ii, ] <- table(rle(....))
    ## increment ii
    ii <- ii + 1
    x[ii, ] <- table(rle(....))
    ## increment ii
    ii <- ii + 1
  }  ## end inner loop
}  ## end outer loop
Also note that you are reusing the index i in both for() loops, which will not work. i is just a normal R object, and so both for() loops will be overwriting it as they progress. Use j for the second loop, as I did above.
Try that simple optimisation first and see if that will allow the real simulation to complete in an acceptable amount of time. If not, come back with a new Q showing the latest code and we can think about other optimisations. The optimisation above is simple to do; optimising table() and rle() might take a lot more work. On that note, you might look at the tabulate() function, which does the heavy lifting in table() and might be one avenue for optimising that particular step.
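A minimal illustration of the tabulate() hint (a sketch only, using a small fake coin-flip vector rather than your data):
r <- rle(sample(0:1, 500, replace = TRUE))
# tabulate() counts positive integers directly, skipping table()'s factor overhead
counts <- tabulate(r$lengths)       # counts[k] = number of runs of length k
names(counts) <- seq_along(counts)
counts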
If you just want to run rle and table for each combination of the values of v$t and v$p separately, there is no need for the second loop. It is much faster this way:
# encode v, t and p into a single value so that runs never cross group or p.number boundaries
values <- v$v + v$t * 10 + v$p * 100
runlength <- rle(values)
runlength$values <- runlength$values %% 2  # recover the original 0/1 outcome
x <- table(runlength)
y <- aggregate(unclass(x), list(as.numeric(rownames(x))), sum)
The whole code will look like this. If N is as low as 4, the growing object x will not be a severe problem, but generally I agree with @GavinSimpson that it is not a good programming technique.
N <- 4
x <- NULL
for (i in 1:N) {  # first loop
  v <- sample(0:1, 1000000, 1/2)   # generate data
  v <- as.data.frame(v)            # convert to data frame
  v$t <- rep(1:2, each = 250)      # group
  v$p <- rep(1:2000, each = 500)   # p.number
  values <- v$v + N * 10 + v$t * 100 + v$p * 1000
  runlength <- rle(values)
  runlength$values <- runlength$values %% 2
  x <- rbind(x, table(runlength))
}  # end first loop
y <- aggregate(x, list(as.numeric(rownames(x))), sum)  # total rle counts for both groups
