Optimise code for a simple Monte Carlo-like simulation - r

I run the following code that works but just takes ages, and I'm sure there is a way to get the same results much faster.
runs <- 1000
prediction <- runif(77,0,1)
n <- length(prediction)
df.all <- data.frame(Preds = rep(prediction, runs),
simno=rep(1:runs,each=n))
for (x in 1:runs) {
for (i in 1:length(df.all$Preds)){
df.all$rand[i] <- sample(1:100,1)
df.all$Win[i] <- ifelse(df.all$rand[i]<df.all$Preds[i]*100,1,0)
}
}
df.all %>% group_by(simno) %>% summarise(Wins=sum(Win)) -> output

This can easily be vectorised by:
Performing a single sample() call (note the additional replace = TRUE argument).
Performing a single vectorised comparison (<).
You can remove the inner for loop (filling each simulation's rows in one vectorised step) to get
for (x in 1:runs) {
  idx = df.all$simno == x
  df.all$rand[idx] = sample(1:100, size = length(prediction), replace=TRUE)
  df.all$Win[idx] = df.all$rand[idx] < df.all$Preds[idx]*100
}
You can then take it one step further and remove that loop
df.all$rand = sample(1:100, size = nrow(df.all), replace=TRUE)
df.all$Win = df.all$rand < df.all$Preds*100
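For reference, here is a sketch of the whole thing end to end with no loops at all (assumes the dplyr package is loaded for the grouping step; Win is kept as 0/1 as in the original):
library(dplyr)

runs <- 1000
prediction <- runif(77, 0, 1)
n <- length(prediction)

df.all <- data.frame(Preds = rep(prediction, runs),
                     simno = rep(1:runs, each = n))

# one draw per row, then one vectorised comparison
df.all$rand <- sample(1:100, size = nrow(df.all), replace = TRUE)
df.all$Win  <- as.integer(df.all$rand < df.all$Preds * 100)

output <- df.all %>%
  group_by(simno) %>%
  summarise(Wins = sum(Win))
This avoids both loops entirely and should run in well under a second for 77,000 rows.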

Related

R: C stack error when using recursive function - optimizing the code

I am trying to make a simple "model" of loss of an initial value based on a number of criteria, as seen in my code below:
mtDNAlen = 16299
copies = 10
mito = 50
fraction = copies*mtDNAlen*0.75
Rounds = 40
A = data.frame(
Length = c(rep(mtDNAlen,copies))
)
data = list()
for (i in 1:mito){
data[[i]]=list()
}
for (i in 1:length(data)){
for (j in 1:(Rounds+1)){
data[[i]][[j]]=data.frame(A)
}}
finaldata = data.frame(X = seq(1,copies,1))
random.sample = function(x) {
x = sample(sample.data, copies,
prob=Prob,
replace=FALSE)
if (sum(x) > fraction) return(x)
Recall(x)
}
for (i in 1:length(data)){
for(j in 1:Rounds){
data[[i]][[j]]$Deletion = sample(c("Yes","No"), nrow(data[[i]][[j]]), prob=c(0.05,0.95), replace=TRUE)
data[[i]][[j]]$DelLength = ifelse(data[[i]][[j]]$Deletion == "Yes", sample(seq(0,15000,1), replace = TRUE),0)
data[[i]][[j]]$Length = data[[i]][[j]]$Length - data[[i]][[j]]$DelLength
data[[i]][[j]] = data[[i]][[j]][data[[i]][[j]]$Length > 2000,]
data[[i]][[j]] = rbind(data[[i]][[j]],data[[i]][[j]])
Prob = c(rep(16300,nrow(data[[i]][[j]]))) - data[[i]][[j]]$Length
Prob = Prob / sum(Prob)
sample.data = c(data[[i]][[j]]$Length)
data[[i]][[j+1]]$Length = random.sample(sample.data)
finaldata[[i]] = data[[i]][[j+1]]$Length
}
}
However, when I try to run my code I get the following error:
Error: C stack usage 7969700 is too close to the limit
After searching, this seems to occur when using recursive functions. However, I am new to these types of functions, and I don't know how to optimise my code further in order to get rid of this error.
I should mention, that when I run my code without prob=Prob in the random.sample function, there is no error. So I guess the error comes from having to redo the random.sample so many times? Can I do this in a better way to avoid the C stack error? I haven't been able to find an alternative myself.
Finally, if I set Rounds and mito to very small values, I can complete the calculation, but it is not really usable for me...
Thanks!
EDIT
I tried to switch to using repeat instead, making my for loop look like this:
for (i in 1:length(data)){
for(j in 1:Rounds){
data[[i]][[j]]$Deletion = sample(c("Yes","No"), nrow(data[[i]][[j]]), prob=c(0.05,0.95), replace=TRUE)
data[[i]][[j]]$DelLength = ifelse(data[[i]][[j]]$Deletion == "Yes", sample(seq(0,15000,1), replace = TRUE),0)
data[[i]][[j]]$Length = data[[i]][[j]]$Length - data[[i]][[j]]$DelLength
data[[i]][[j]] = data[[i]][[j]][data[[i]][[j]]$Length > 2000,]
data[[i]][[j]] = rbind(data[[i]][[j]],data[[i]][[j]])
Prob = c(rep(16300,nrow(data[[i]][[j]]))) - data[[i]][[j]]$Length
Prob = Prob / sum(Prob)
sample.data = c(data[[i]][[j]]$Length)
repeat {
v2 <- sample(sample.data, copies,
prob=Prob,
replace=FALSE)
if( sum(v2) > fraction )
break
}
return(v2)
data[[i]][[j+1]]$Length = v2
finaldata[[i]] = data[[i]][[j+1]]$Length
}
}
However, now I can't get the sampled data to go to the next dataframe, i.e. the line data[[i]][[j+1]]$Length = v2 seems not to be working. I can see that v2 is generated and it looks to have the appropriate form and data stored...
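For what it's worth, here is a minimal sketch of the repeat idea wrapped back inside a helper function (my own suggestion, not code from the question): return() is only legal inside a function, so putting the repeat loop into a helper both removes the recursion and avoids the stray return(v2) at the top level.
random.sample.iter <- function(sample.data, Prob, copies, fraction) {
  repeat {
    x <- sample(sample.data, copies, prob = Prob, replace = FALSE)
    if (sum(x) > fraction) return(x)  # return() works here because we are inside a function
  }
}

# inside the nested loops, in place of the recursive call:
# data[[i]][[j+1]]$Length <- random.sample.iter(sample.data, Prob, copies, fraction)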

Loop in a dataset simulation

I hope to get help on the following problem in R.
I have the following code to generate a 30-column dataset based on an exponential distribution:
x0=0
xmax=8000
xout=3000
lambda=0.0002
n=1
x1=x0+rexp(n,lambda)-xout
x2=x1+rexp(n,lambda)-xout
x3=x2+rexp(n,lambda)-xout
x4=x3+rexp(n,lambda)-xout
x5=x4+rexp(n,lambda)-xout
x6=x5+rexp(n,lambda)-xout
x7=x6+rexp(n,lambda)-xout
x8=x7+rexp(n,lambda)-xout
x9=x8+rexp(n,lambda)-xout
x10=x9+rexp(n,lambda)-xout
x11=x10+rexp(n,lambda)-xout
x12=x11+rexp(n,lambda)-xout
x13=x12+rexp(n,lambda)-xout
x14=x13+rexp(n,lambda)-xout
x15=x14+rexp(n,lambda)-xout
x16=x15+rexp(n,lambda)-xout
x17=x16+rexp(n,lambda)-xout
x18=x17+rexp(n,lambda)-xout
x19=x18+rexp(n,lambda)-xout
x20=x19+rexp(n,lambda)-xout
x21=x20+rexp(n,lambda)-xout
x22=x21+rexp(n,lambda)-xout
x23=x22+rexp(n,lambda)-xout
x24=x23+rexp(n,lambda)-xout
x25=x24+rexp(n,lambda)-xout
x26=x25+rexp(n,lambda)-xout
x27=x26+rexp(n,lambda)-xout
x28=x27+rexp(n,lambda)-xout
x29=x28+rexp(n,lambda)-xout
x30=x29+rexp(n,lambda)-xout
I have three questions:
1 - Is there any way to write this function in a reduced form?
2 - This row (30 columns) needs to be simulated 10,000 times. How to do this in a loop?
3 - The values of each cell (x1, x2, x3, ...) must be limited to the interval x0 to xmax (0-8000). How to do this?
That depends on what you want to do with values over 8000. Here's a solution that just takes those values and wraps them around with a modulo operator.
library(tidyverse)
test <- data.frame(x0 = rep(0, n))
for (i in 1:30) {
new_col <- sym(paste0("x", i))
old_col <- sym(paste0("x", i - 1))
test <- test %>%
mutate(!!new_col := (!!old_col + rexp(n, lambda) - xout) %% xmax)
}
I don't know how familiar you may or may not be with the tidyverse and tidy evaluation, which I've used here liberally. The !! operator, combined with sym(), turns the variable names into actual variables. The %>% operator "pipes" data from one function to the next. The := operator is needed only if you want to make assignments with a !! on the lefthand side.
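If tidy evaluation is new to you, here is a tiny self-contained illustration of those three pieces (a toy example of mine, not part of the question; sym() is available once the tidyverse is loaded, as in the answer above):
library(tidyverse)

d <- data.frame(a = 1:3)
new_col <- sym("b")   # sym() turns the string "b" into a symbol
old_col <- sym("a")

d %>% mutate(!!new_col := !!old_col * 2)   # !! unquotes; := allows !! on the left-hand side
#   a b
# 1 1 2
# 2 2 4
# 3 3 6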
I think this is my first time actually trying to post an answer on StackOverflow, so be easy on me! :)
As I'm fairly new to R myself, I thought it would be good practice to try to write this out. Perhaps not the most efficient code, but it works:
xmax <- 8000
xout <- 3000
lambda <- 0.0002
n <- 1
iterations <- 30
df <- data.frame(matrix(ncol = 31, nrow = iterations))
names(df) <- c(paste("x", 0:30, sep=""))
for (j in 1:iterations) {
df$x0[j] <- 0
df$x1[j] <- df$x0[j] + rexp(n,lambda)-xout
if (df$x1[j] < 0) {
df$x1[j] <- 0
}
if (df$x1[j] > 8000) {
df$x1[j] <- 8000
}
for (i in 3:31) {
df[j,i] <- df[j, i-1] + rexp(n,lambda)-xout
if (df[j,i] < 0) {
df[j,i] <- 0
}
if (df[j,i] > 8000) {
df[j,i] <- 8000
}
}
}
You can change iterations to the number of simulations you need (10,000); for testing purposes I've used 30. Also, I didn't know whether you wanted to limit the values to 0 and 8000 before or after the next iteration, so I've done it before.
Is there any way to write this function in a reduced form?
I would do it like this. Pretty sure this is equivalent.
ncol = 30
row = rexp(ncol, lambda)
row = cumsum(row) - xout * (1:ncol)
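A quick way to convince yourself of the equivalence (my own check, not part of the answer; it resets the seed so both versions see exactly the same random draws):
set.seed(1)
x1 <- x0 + rexp(n, lambda) - xout
x2 <- x1 + rexp(n, lambda) - xout

set.seed(1)
row <- cumsum(rexp(2, lambda)) - xout * (1:2)
all.equal(c(x1, x2), row)
# [1] TRUE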
This row (30 columns) needs to be simulated 10,000 times. How to do this in a loop?
Use replicate with the code above:
sim_data = t(replicate(10000, {
row = rexp(ncol, lambda)
row = cumsum(row) - xout * (1:ncol)
}))
replicate gives 10000 columns and 30 rows. We use t() to transpose it to 10000 rows with 30 columns.
The values of each cell (x1, x2, x3, ...) must be limited to the interval x0 to xmax (0-8000). How to do this?
Use pmin() and pmax(). Not sure if you want this done before or after the cumulative summing...
sim_data = t(replicate(10000, {
row = rexp(ncol, lambda)
row = cumsum(row) - xout * (1:ncol)
row = pmax(0, row)
row = pmin(xmax, row)
row
}))
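If you also want the original column names back on the simulated data, a small follow-up (my assumption: you only need x1 to x30, since x0 is always 0):
sim_df <- as.data.frame(sim_data)
names(sim_df) <- paste0("x", 1:ncol)   # ncol is 30 here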

Avoiding a loop when populating data frames in R

I have an empty data frame T_modelled with 2784 columns and 150 rows.
T_modelled <- data.frame(matrix(ncol = 2784, nrow = 150))
names(T_modelled) <- paste0("t=", t_sec_ERT)
rownames(T_modelled) <- paste0("z=", seq(from = 0.1, to = 15, by = 0.1))
where
t_sec_ERT <- seq(from = -23349600, to = 6706800, by = 10800)
z <- seq(from = 0.1, to = 15, by = 0.1)
I filled T_modelled by column with a nested for loop, based on a formula:
for (i in 1:ncol(T_modelled)) {
col_tmp <- colnames(T_modelled)[i]
for (j in 1:nrow(T_modelled)) {
z_tmp <- z[j]-0.1
T_tmp <- MANSRT+As*e^(-z_tmp*(omega/(2*K))^0.5)*sin(omega*t_sec_ERT[i]-((omega/(2*K))^0.5)*z_tmp)
T_modelled[j ,col_tmp] <- T_tmp
}
}
where
MANSRT <- -2.051185
As <- 11.59375
omega <- (2*pi)/(347.875*24*60*60)
c <- 790
k <- 0.00219
pb <- 2600
K <- (k*1000)/(c*pb)
e <- exp(1)
I do get the desired results, but I keep thinking there must be a more efficient way of filling that data frame. The loop is quite slow and looks cumbersome to me. I guess there is an opportunity to take advantage of R's vectorized way of calculating; I just cannot see how to incorporate the formula in a simpler way to fill T_modelled.
Anyone got any ideas how to get the same result in a faster, more "R-like" manner?
I believe this does it.
Run this first instruction right after creating T_modelled; it will be needed to test that the results are equal.
Tm <- T_modelled
Now run your code then run the code below.
z_tmp <- z - 0.1
for (i in 1:ncol(Tm)) {
T_tmp <- MANSRT + As*exp(-z_tmp*(omega/(2*K))^0.5)*sin(omega*t_sec_ERT[i]-((omega/(2*K))^0.5)*z_tmp)
Tm[ , i] <- T_tmp
}
all.equal(T_modelled, Tm)
#[1] TRUE
You don't need the inner loop, that's the only difference.
(I also used exp directly but that is of secondary importance.)
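If you want to drop the remaining loop as well, the whole matrix can be built in a single outer() call (my own sketch, not part of the answer above; the result is a plain matrix rather than a data frame):
alpha <- (omega / (2 * K))^0.5
Tm2 <- outer(z - 0.1, t_sec_ERT,
             function(zt, t) MANSRT + As * exp(-zt * alpha) * sin(omega * t - alpha * zt))
dimnames(Tm2) <- list(paste0("z=", z), paste0("t=", t_sec_ERT))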
Much like the solution to your previous question which you accepted, consider simply using sapply, iterating through the vector t_sec_ERT, which has the same length as the number of columns in your desired dataframe. But first adjust every element of z by 0.1. Also, there's no need to create the empty dataframe beforehand.
z_adj <- z - 0.1
T_modelled2 <- data.frame(sapply(t_sec_ERT, function(ert)
MANSRT+As*e^(-z_adj*(omega/(2*K))^0.5)*sin(omega*ert-((omega/(2*K))^0.5)*z_adj)))
colnames(T_modelled2) <- paste0("t=", t_sec_ERT)
rownames(T_modelled2) <- paste0("z=", z)
all.equal(T_modelled, T_modelled2)
# [1] TRUE
Rui is of course correct; I just want to suggest a way of reasoning when writing a loop like this.
You have two numeric vectors. Functions for numerics in R are usually vectorized. By which I mean you can do stuff like this
x <- c(1, 6, 3)
sum(x)
not needing something like this
x_ <- 0
for (i in x) {
x_ <- i + x_
}
x_
That is, there is no need for explicit looping in R. Of course looping takes place nonetheless; it just happens in the underlying C, Fortran, etc. code, where it can be done more efficiently. This is usually what we mean when we call a function vectorized: the looping takes place "under the hood", as it were. (The output of Vectorize() thus isn't strictly vectorized by this definition.)
When you have two numeric vectors you want to loop over, you first have to check whether the constituent functions are vectorized, usually by reading the docs.
If they are, you continue by constructing the central vectorized compound function and start testing it with one vector and one scalar. In your case it would be something like this (testing with just the first element of t_sec_ERT).
z_tmp <- z - 0.1
i <- 1
T_tmp <- MANSRT + As *
exp(-z_tmp*(omega/(2*K))^0.5) *
sin(omega*t_sec_ERT[i] - ((omega/(2*K))^0.5)*z_tmp)
Looks OK. Then you start looping over the elements of t_sec_ERT.
T_tmp <- matrix(nrow=length(z), ncol=length(t_sec_ERT))
for (i in 1:length(t_sec_ERT)) {
T_tmp[, i] <- MANSRT + As *
exp(-z_tmp*(omega/(2*K))^0.5) *
sin(omega*t_sec_ERT[i] - ((omega/(2*K))^0.5)*z_tmp)
}
Or you can do it with sapply() which is often neater.
f <- function(x) {
MANSRT + As *
exp(-z_tmp*(omega/(2*K))^0.5) *
sin(omega*x - ((omega/(2*K))^0.5)*z_tmp)
}
T_tmp <- sapply(t_sec_ERT, f)
I would prefer to put the data in a long format, with all combinations of z and t_sec_ERT as two columns, in order to take advantage of vectorization. Although I usually prefer tidyr for switching between long and wide formats, I've tried to keep this as a base solution:
t_sec_ERT <- seq(from = -23349600, to = 6706800, by = 10800)
z <- seq(from = 0.1, to = 15, by = 0.1)
v <- expand.grid(t_sec_ERT, z)
names(v) <- c("t_sec_ERT", "z")
v$z_tmp <- v$z-0.1
v$T_tmp <- MANSRT+As*e^(-v$z_tmp*(omega/(2*K))^0.5)*sin(omega*v$t_sec_ERT-((omega/(2*K))^0.5)*v$z_tmp)
T_modelled <- data.frame(matrix(v$T_tmp, nrow = length(z), ncol = length(t_sec_ERT), byrow = TRUE))
names(T_modelled) <- paste0("t=", t_sec_ERT)
rownames(T_modelled) <- paste0("z=", seq(from = 0.1, to = 15, by = 0.1))
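For completeness, the final reshape could also be done with tidyr instead of matrix() (a sketch on my part; assumes tidyr >= 1.0 for pivot_wider, and leaves z as a regular column rather than as row names):
library(tidyr)

wide <- pivot_wider(v, id_cols = z, names_from = t_sec_ERT,
                    values_from = T_tmp, names_prefix = "t=")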

Speed up while loop in R

As part of a project I made a smoother to smooth out missing data. I make use of the slope of the previous data points to calculate new values. After calculating each new point I use it to calculate the next value (and so on). Hence I used a while loop to calculate each value (both from left to right and from right to left, eventually taking an average of these two values). This script works fine!
Although I expect that I could significantly accelerate this with a function from the apply family, I still want to use this while loop. The script is, however, really slow (3 days for ~2,500,000 data points). Do you have tips (for the current script) on what to change to speed things up?
#Loop from: bottom -> top
number_rows <- nrow(weight_id)
i <- nrow(weight_id)
while (i >= 1){
j = as.integer(weight_id[i,1])
prev1 <- temp[j+1,]$new_MAP_bottom
if(j<max(weight_id)){
previous_slope <- ifelse((temp[j+2,]$duration-temp[j+1,]$duration)>0,prev1-temp[j+2,]$new_MAP_bottom,0)
}else{
previous_slope <- 0
}
new_MAP <- round(prev1+((previous_slope-(factor*temp[j,]$steps))/(1+factor)), digit=2)
temp[j,]$new_MAP_bottom <- new_MAP
i <- i-1
}
#Loop from: top -> bottom
weight_factor <- 0
i <- 1
while (i <= nrow(weight_id)) {
j = as.integer(weight_id[i,1])
prev1 <- temp[j-1,]$new_MAP_top
if(j>2){
previous_slope <- ifelse((temp[j-1,]$duration-temp[j-2,]$duration)>0,prev1-temp[j-2,]$new_MAP_top,0)
}else{
previous_slope <- 0
}
new_MAP <- round(prev1+((previous_slope+(factor*temp[j,]$steps))/(1+factor)), digit=2)
temp[j,]$new_MAP_top <- new_MAP
#Take weighted average of two approaches (top -> bottom/bottom -> top)
if(weight_factor < 1){ weight_factor = temp[j,]$weight-1 }
weight_top <- weight_factor
weight_bottom <- temp[j,]$weight-weight_factor
if(weight_top>weight_bottom){ weight_top<-weight_top-1 }
if(weight_top<weight_bottom){ weight_bottom<-weight_bottom-1}
temp[j,]$MAP <- round(((new_MAP*weight_top)+(temp[j,]$new_MAP_bottom*weight_bottom))/(weight_top+weight_bottom),digit=0)
weight_factor <- weight_factor-1
i <- i+1
}
I did not read all of your code, especially without example data, but from the textual description it sounds like plain linear interpolation. Please check whether the built-in functions approx and approxfun already do what you are trying to implement yourself, as these will be better optimised than anything you can write with reasonable effort.
par(mfrow=c(2,1))
example <- data.frame(x = 1:14,
y = c(3,4,5,NA, NA, NA, 6,7,8.1, 8.2, NA, 8.4, 8.5, NA))
plot(example)
f <- approxfun(example)
plot(example$x, f(example$x))
The apply family tends to give you shorter, more succinct code, but not necessarily much more speed than loops. If you are after speed, first check whether somebody else has already implemented what you need, then try vectorisation.
Edit:
The following runs in about a second on my computer. If it does something close enough to your own "linear smoother" that you can replace yours with it, that is a speed increase of about 3 days.
n <- 2500000
example <- data.frame(x = 1:n,
y = sample(1:1000, n, replace = TRUE))
example$y[sample(1:n, n/5)] <- NA
print(Sys.time())
f <- approxfun(example)
mean(f(example$x))
print(Sys.time())
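If you need the interpolated series itself (rather than the mean, which is only there for the timing), it can be written back in one vectorised assignment; a minimal sketch:
na_idx <- is.na(example$y)
example$y[na_idx] <- f(example$x[na_idx])   # fill only the gaps, keep observed values as they are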

Optimising a calculation on every cumulative subset of a vector in R

I have a collection of DNA sequencing reads of various lengths, sorted from longest to shortest. I would like to know the largest number of reads I can include in a set such that the N50 of that set is above some threshold t.
For any given set of reads, the total amount of data is just the cumulative sum of the lengths of the reads. The N50 is defined as the length of the read such that half of the data are contained in reads at least that long.
I have a solution below, but it is slow for very large read sets. I tried vectorising it, but this was slower (probably because my threshold is usually relatively large, such that my solution below stops calculating fairly early on).
Here's a worked example:
df = data.frame(l = 100:1) # read lengths
df$cs = cumsum(df$l) # getting the cumulative sum is easy and quick
t = 95 # let's imagine that this is my threshold N50
for(i in 1:nrow(df)){
N50 = df$l[min(which(df$cs>df$cs[i]/2))]
if(N50 < t){ break }
}
# the loop will have gone one too far, so I subtract one
number.of.reads = as.integer(i-1)
This works fine on small datasets, but my actual data are more like 5m reads that vary from ~200,000 down to 1 in length (longer reads are rarer), and I'm interested in an N50 of 100,000, at which point it gets pretty slow.
This example is closer to something that's realistic. It takes ~15s on my desktop.
l = ceiling(runif(100000, min = 0, max = 19999))
l = sort(l, decreasing = T)
df = data.frame(l = l)
df$cs = cumsum(df$l)
t = 18000
for(i in 1:nrow(df)){
n = df$l[min(which(df$cs>df$cs[i]/2))]
if(n < t){ break }
}
result = as.integer(i-1)
So, I'm interested in any ideas, tips, or tricks to noticeably optimise this. It seems like this should be possible, but I'm out of ideas.
As n is decreasing with i, you should use a binary search algorithm.
binSearch <- function(min, max) {
print(mid <- floor(mean(c(min, max))))
if (mid == min) {
if (df$l[min(which(df$cs>df$cs[min]/2))] < t) {
return(min - 1)
} else {
return(max - 1)
}
}
n = df$l[min(which(df$cs>df$cs[mid]/2))]
if (n >= t) {
return(binSearch(mid, max))
} else {
return(binSearch(min, mid))
}
}
Then, just call
binSearch(1, nrow(df))
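If you prefer to avoid recursion altogether, the same idea can be written as a plain while loop (my own sketch; it assumes the N50 is >= t at the first row and < t at the last row):
binSearchIter <- function(df, t) {
  lo <- 1          # invariant: N50 at lo is >= t
  hi <- nrow(df)   # invariant: N50 at hi is <  t
  while (hi - lo > 1) {
    mid <- floor((lo + hi) / 2)
    n50 <- df$l[min(which(df$cs > df$cs[mid] / 2))]
    if (n50 >= t) lo <- mid else hi <- mid
  }
  lo               # the largest number of reads whose N50 is still >= t
}

binSearchIter(df, t)   # should match `result` from the question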
Since your data are ordered by DNA/read length, maybe you could avoid testing every single row. Instead, you can iterate and test a limited number of rows (reasonably spaced) at each iteration (using while(), for example), and so get progressively closer to your solution. This should make things much faster. Just make sure that once you get close to the solution, you stop iterating.
This is your solution
set.seed(111)
l = ceiling(runif(100000, min = 0, max = 19999))
l = sort(l, decreasing = T)
df = data.frame(l = l)
df$cs = cumsum(df$l)
t = 18000
for(i in 1:nrow(df)){
n = df$l[min(which(df$cs>df$cs[i]/2))]
if(n < t){ break }
}
result = as.integer(i-1)
result
# 21216, in ~29 seconds
Instead of testing every row, let's set a range
i1 <- 1
i2 <- nrow(df)
i.range <- as.integer(seq(i1, i2, length.out = 10))
Now, test only these 10 rows. Get the closest one and "focus in" by re-defining the range. Stop when you cannot increase granularity.
while(sum(duplicated(i.range))==0){
for(i in 1:length(i.range)){
N50 = df$l[min(which(df$cs>df$cs[i.range[i]]/2))]
if(N50 < t){ break }
}
#update i1 and i2
i1 <- i.range[(i-1)]
i2 <- i.range[i]
i.range <- as.integer(seq(i1, i2, length.out = 10))
}
i.range <- seq(i1, i2, by=1)
for(i in i.range){
N50 = df$l[min(which(df$cs>df$cs[i]/2))]
if(N50 < t){ break }
}
result <- as.integer(i-1)
result
#21216, in ~ 0.06 seconds
Same result in a fraction of the time.
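Since the cumulative sums are already sorted, the N50 of every prefix can also be computed in one vectorised shot with findInterval(), which avoids any explicit loop (my own sketch, relying on the same monotonicity of N50 in i that the other answers use):
N50_all <- df$l[findInterval(df$cs / 2, df$cs) + 1]   # N50 of the first i reads, for every i
result2 <- sum(N50_all >= t)                          # number of prefixes whose N50 is still >= t
result2
# 21216 again with the set.seed(111) data above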
