R: How could I change this loop to apply?

I'm currently working on an R program in which one part computes, in a loop, two values that are interdependent. Since I have to do 100,000 iterations, it takes a very long time.
So I would like to replace this for loop with an apply call or some other more efficient function, but I don't know how to do it. Could someone help me?
p <- c()
for(i in 1:n) {
  if(i == 1) {
    x <- b[i]
  } else {
    x <- c(x, max(h[i - 1], p[i]))
  }
  h <- c(h, x[i] + y[i])
}
Thank you very much!!

You don't seem to have a full working example here, but the main problem is that building up the x and h vectors with the c() function is very slow. It's better to preallocate them:
x <- numeric(n) # allocate vector of size n
h <- numeric(n)
and then fill them in as you go by assigning to x[i] and h[i]. For example, the following loop:
x <- c(); for (i in 1:100000) x <- c(x,1)
takes about 10 seconds to run on my laptop, but this version:
x <- numeric(100000); for (i in 1:100000) x[i] <- 1
does the same thing while running almost instantly.
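Applied to your loop, a preallocated version might look like the following. This is only a sketch, since b, p, y and n are not defined in your snippet; I'm assuming they are numeric vectors of length at least n:
# sketch: assumes b, p, y are numeric vectors and n is their common length
x <- numeric(n)
h <- numeric(n)
x[1] <- b[1]
h[1] <- x[1] + y[1]
for (i in 2:n) {
  x[i] <- max(h[i - 1], p[i])  # same recurrence as before, without growing x
  h[i] <- x[i] + y[i]          # and without growing h
}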

Related

General code for a summation in R

I'm writing some code in R and I came across the following problem:
Basically, I want to calculate a variable X[k], where X takes on a value for each k, like this:
X[k] = (1/k) * Σ_{j=1}^{k} A[n-k] / A[n-j+1]
where A is a known variable which takes on different values for each index.
For the moment, I have something like this:
k <- NULL
X <- NULL
z <- 1:n
for (k in seq(along = z)){
  for (j in seq(along = 1:k)){
    X[k] = 1/k*sum(A[n-k]/A[n-j+1])
  }
}
which can't be right. Any idea on how to fix this one?
As always, any help would be dearly appreciated.
Try this
# define A
A <- c(1,2,3,4)
n <- length(A)
z <- 1:n
# predefine X (don't worry, all values will be overwritten, but it will have the same length as A)
X <- A
for(k in z){
  for(j in 1:k){
    X[k] = 1/k*sum(A[n-k]/A[n-j+1])
  }
}
You don't need to define z; it is only used inside the for loop. In this case, just write for(k in 1:n){.
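Note, though, that the inner j loop overwrites X[k] on every pass, so only the j = k term survives. If the goal really is the sum over j from the question, an explicit accumulation would look like this (a sketch based on the formula above):
for (k in 1:n){
  X[k] <- 0
  for (j in 1:k){
    X[k] <- X[k] + sum(A[n-k]/A[n-j+1])  # sum() also guards the empty A[0] case at k = n
  }
  X[k] <- X[k]/k
}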
You can do the following
set.seed(42)
A <- rnorm(10)
k <- sample(length(A), 4)
calc_x <- function(A, k){
  n <- length(A)
  # rev(A)[j] is A[n-j+1], so this cumulative sum collects the inner sums over j in one pass
  c_sum <- cumsum(1/rev(A)[1:max(k)])
  A[n-k]/k * c_sum[k]
}
calc_x(A, k)
which returns:
[1] 0.07775603 2.35789999 -0.45393983 0.13323284

Poisson Distribution Function infinite loop

I have the following code that is trying to simulate the deposition of metallic atoms onto a cold substrate; however, it runs in an infinite loop.
Can anyone see where I'm making a mistake?
l <- 20
n <- 2000
e <- 1000
lsize <- matrix(0,l,l)
deposits <- rep(0,n)
avg.deposits <- rep(0,n)
prob <- rep(0,n)
n.deposits <- rep(0,n)
for(m in 1:e){
  for(j in 1:l){
    for(k in 1:l){
      lsize[j,k] <- 0
    }
  }
  for(i in 1:n){
    ra <- runif(1)
    x <- floor(1+l*ra)
    ra <- runif(1)
    y <- floor(1+l*ra)
    lsize[x,y] <- lsize[x,y]+1
    s <- 0
    for(j in 1:l){
      for(k in 1:l){
        if(lsize[j,k] <- 1){  # note: "<-" assigns 1 here and is always true; "==" or ">=" was probably intended
          s <- s+1
        }
      }
    }
    n.deposits[i] <- n.deposits[i]+s
  }
}
for(i in 1:n){
  avg.deposits[i] <- n.deposits[i]/e
  prob[i] <- avg.deposits[i]/(l*l)
  deposits[i] <- i
}
plot(deposits, prob)
There is no infinite loop problem.
This is easy to check if you run your code with smaller l, n and e arguments. Your code simply scales sub-optimally (super-linearly in this case) as you increase any of those arguments.
Obvious points:
Preallocate matrices, and don't re-zero lsize element by element with nested loops on every pass; lsize[] <- 0 resets it in one step.
Limit your function calls, to runif() in this case. You don't have to call the same function thousands of times: call it once outside the loop to generate all the random numbers you need, then access the next element in turn within the loop.
Use print and cat statements to print out the loop counters. Try small values to make sure the program does what you want before setting the counters to thousands.
Look to vectorize your code where possible. E.g. if a <- runif(100) and you want to set every element where a < 0.5 equal to 4, there is no reason to loop over the elements sequentially; a[a < 0.5] <- 4 is enough. A sketch applying these points follows below.
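Putting these points together, the simulation might look like the following. This is a sketch of the advice above, not a drop-in replacement: it assumes the if(lsize[j,k] <- 1) in your code was meant to count occupied sites, i.e. lsize[j,k] >= 1.
l <- 20; n <- 2000; e <- 1000
n.deposits <- rep(0, n)
for (m in 1:e) {
  lsize <- matrix(0, l, l)        # reset in one step, no nested loops
  x <- floor(1 + l * runif(n))    # all random coordinates generated at once
  y <- floor(1 + l * runif(n))
  for (i in 1:n) {
    lsize[x[i], y[i]] <- lsize[x[i], y[i]] + 1
    n.deposits[i] <- n.deposits[i] + sum(lsize >= 1)  # vectorized count of occupied sites
  }
}
prob <- (n.deposits / e) / (l * l)
plot(seq_len(n), prob)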

Loop inside a loop in R

I am trying to create an R code that puts another loop inside of the one I've already created. Here is my code:
t <- rep(1,1000)
omega <- seq(from=1, to=12, by=1)
for(i in 1:1000){
  omega <- setdiff(omega, sample(1:12,1))
  t[i] <- length(omega)
  remove <- 0
  f <- length(t[!t %in% remove]) + 1
}
When I run this code, I get the number of trials, f, that it takes to reach the zero vector, but I want to do 10,000 iterations of this experiment.
replicate is probably how you want to run the outer loop. There's also no need for the f assignment to be inside the loop. Here I've moved it outside and converted it to simply the count of the elements of t that are greater than 0, plus 1.
result <- replicate(10000, {
  t <- rep(1, 1000)
  omega <- 1:12
  for(i in seq_along(t)) {
    omega <- setdiff(omega, sample(1:12, 1))
    t[i] <- length(omega)
  }
  sum(t > 0) + 1
})
I suspect your code could be simplified in other ways as well, and also that you could just write down the distribution that you're looking for without simulation. I believe your variable of interest is just how long until you get at least one of each of the numbers 1:12, yes?
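If so, this is the classic coupon collector problem, and the simulation can be sanity-checked against the closed-form expectation (a quick sketch using the result vector from above):
# expected number of draws to see all 12 numbers: 12 * (1 + 1/2 + ... + 1/12)
12 * sum(1 / (1:12))  # about 37.24
mean(result)          # simulated mean from the replicate() call above; should be close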
Are you just looking to run your existing loop 10,000 times, like below?
f <- rep(NA, 10000)
for(j in 1:10000) {
  t <- rep(1, 1000)                   # reset t for each experiment
  omega <- seq(from=1, to=12, by=1)   # reset omega, otherwise it stays empty after the first experiment
  for(i in 1:1000){
    omega <- setdiff(omega, sample(1:12,1))
    t[i] <- length(omega)
  }
  remove <- 0
  f[j] <- length(t[!t %in% remove]) + 1
}

R: Speeding up Code, multicore

I have a problem with the performance of my R code: it is very slow. I have to loop over a vector of 3000 elements, and on every iteration I call many functions.
I tried parallelization first, but it doesn't work there, because every step needs the results of the previous steps.
Now I have an idea: I could divide the vector into 3 pieces of 1000 elements and compute each piece by itself. At the first element of pieces 2 and 3 I will have a problem, but I can handle it.
I would like to compute each of the 3 pieces on a separate CPU core.
I could actually create 3 .R files and start 3 R sessions (= 3 cores) to do the calculation, but I would like to do it in one file and specify that my first loop is computed by core 1 and the other ones by the other cores.
Is it possible?
Thank you.
Here is a simple example that describes my problem.
# Situation now
vec3000 <- rnorm(3000)
result3000 <- rep(NA, length(vec3000))
for (i in 1:3000){
  if (i == 1){
    result3000[i] <- vec3000[i]
  } else {
    result3000[i] <- result3000[i - 1] + vec3000[i]
  }
}

# New situation
vec1000_1 <- vec3000[1:1000]
vec1000_2 <- vec3000[1001:2000]
vec1000_3 <- vec3000[2001:3000]
result1000_1 <- rep(NA, 1000)
result1000_2 <- rep(NA, 1000)
result1000_3 <- rep(NA, 1000)

# Calculated by core 1
for (i in 1:1000){
  if (i == 1){
    result1000_1[i] <- vec1000_1[i]
  } else {
    result1000_1[i] <- result1000_1[i - 1] + vec1000_1[i]
  }
}
# Calculated by core 2
for (i in 1:1000){
  if (i == 1){
    result1000_2[i] <- vec1000_2[i]
  } else {
    result1000_2[i] <- result1000_2[i - 1] + vec1000_2[i]
  }
}
# Calculated by core 3
for (i in 1:1000){
  if (i == 1){
    result1000_3[i] <- vec1000_3[i]
  } else {
    result1000_3[i] <- result1000_3[i - 1] + vec1000_3[i]
  }
}
Here's an example using the foreach package that operates on vector chunks in parallel:
library(doParallel)
library(itertools)

nworkers <- 3
cl <- makePSOCKcluster(nworkers)
registerDoParallel(cl)

vec3000 <- rnorm(3000)  # dummy input

# This computes "resvecs", which is a list of "nworkers" vectors
resvecs <- foreach(vec = isplitVector(vec3000, chunks = nworkers)) %dopar% {
  result <- double(length = length(vec))
  for (i in seq_along(result)) {
    if (i == 1) {
      result[i] <- vec[i]
    } else {
      result[i] <- result[i - 1] + vec[i]
    }
  }
  result
}
This uses the "isplitVector" function from the itertools package to split "vec3000" into three chunks to make use of three cores. You can change the value of "nworkers" to control the number of cores that are used.
Note that I used the doParallel backend so the example would work on Windows, Mac OS X and Linux.
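One caveat (the problem at the first element mentioned in the question): each chunk's running sum restarts from zero, so to recover the full sequential result you still need to add the previous chunk's final value as an offset. A sketch of that stitching step, applied to resvecs from above:
# offset for each chunk = final full value of the previous chunk
offsets <- cumsum(c(0, head(sapply(resvecs, tail, 1), -1)))
result3000_par <- unlist(Map(`+`, resvecs, offsets))
all.equal(result3000_par, cumsum(vec3000))  # should be TRUE
For this toy example the whole computation is of course just cumsum(vec3000); the parallel machinery only pays off when the per-element work is heavier.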

How to vectorize triple nested loops?

I've searched for similar problems and I have a vague idea of what I should do: vectorize everything or use the apply() family. But I'm a beginner at R programming and both of the above methods are quite confusing.
Here is my source code:
x <- rlnorm(100,0,1.6)
j=0
k=0
i=0
h=0
lambda <- rep(0,200)
sum1 <- rep(0,200)
constjk=0
wj=0
wk=0
for (h in 1:200) {
  lambda[h]=2+h/12.5
  N=ceiling(lambda[h]*max(x))
  for (j in 0:N) {
    wj=(sum(x<=(j+1)/lambda[h])-sum(x<=j/lambda[h]))/100
    for (k in 0:N) {
      constjk=dbinom(k, j + k, 0.5)
      wk=(sum(x<=(k+1)/lambda[h])-sum(x<=k/lambda[h]))/100
      sum1[h]=sum1[h]+(lambda[h]/2)*constjk*wk*wj
    }
  }
}
Let me explain a bit. I want to collect 200 sum1 values (that's the first loop), and every sum1 value is the summation of (lambda[h]/2)*constjk*wk*wj, hence the other two loops. The most tedious part is that N changes with h, so I have no idea how to vectorize the j-loop and the k-loop. Of course I can vectorize the h-loop with lambda <- seq() and N <- ceiling(), but that's the best I can do. Is there a way to further simplify the code?
Your code can be perfectly vectorized with 3 nested sapply calls. It might be a bit hard to read for the untrained eye, but the essence of it is that instead of adding one value at a time to sum1[h], we calculate all the terms produced by the innermost loop in one go and sum them up.
Although this vectorized solution is faster than your triple for loop, the improvement is not dramatic. If you plan to use it many times, I suggest you implement it in C or Fortran (with regular for loops), which improves the speed a lot. Beware though that it has high time complexity and will scale badly with increasing values of lambda, ultimately reaching a point where it cannot be computed in reasonable time regardless of the implementation.
lambda <- 2 + 1:200/12.5
sum1 <- sapply(lambda, function(l){
  N <- ceiling(l*max(x))
  sum(sapply(0:N, function(j){
    wj <- (sum(x <= (j+1)/l) - sum(x <= j/l))/100
    sum(sapply(0:N, function(k){
      constjk <- dbinom(k, j + k, 0.5)
      wk <- (sum(x <= (k+1)/l) - sum(x <= k/l))/100
      l/2*constjk*wk*wj
    }))
  }))
})
Btw, you don't need to predefine variables like h, j, k, wj and wk, especially not when vectorizing: assignments inside the functions fed to sapply create local variables of the same name, shadowing the ones you predefined.
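If you do go the C route, you don't have to leave R: the Rcpp package lets you compile the same triple loop as C++. A rough sketch (sum1_cpp is an invented name, it assumes x from above exists, and the hard-coded /100 from the question is replaced by the sample size):
library(Rcpp)

cppFunction('
NumericVector sum1_cpp(NumericVector x, NumericVector lambda) {
  int nl = lambda.size(), nx = x.size();
  double mx = max(x);
  NumericVector out(nl);
  for (int h = 0; h < nl; ++h) {
    double l = lambda[h];
    int N = std::ceil(l * mx);
    for (int j = 0; j <= N; ++j) {
      double wj = 0;
      for (int i = 0; i < nx; ++i) wj += (x[i] <= (j + 1) / l) - (x[i] <= j / l);
      wj /= nx;
      for (int k = 0; k <= N; ++k) {
        double wk = 0;
        for (int i = 0; i < nx; ++i) wk += (x[i] <= (k + 1) / l) - (x[i] <= k / l);
        wk /= nx;
        out[h] += l / 2 * R::dbinom(k, j + k, 0.5, 0) * wk * wj;
      }
    }
  }
  return out;
}')

sum1 <- sum1_cpp(x, 2 + 1:200/12.5)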
Let's wrap your simulation in a function and time it:
sim1 <- function(num=20){
  set.seed(42)
  x <- rlnorm(100,0,1.6)
  j=0
  k=0
  i=0
  h=0
  lambda <- rep(0,num)
  sum1 <- rep(0,num)
  constjk=0
  wj=0
  wk=0
  for (h in 1:num) {
    lambda[h]=2+h/12.5
    N=ceiling(lambda[h]*max(x))
    for (j in 0:N) {
      wj=(sum(x<=(j+1)/lambda[h])-sum(x<=j/lambda[h]))/100
      for (k in 0:N) {
        set.seed(42)  # note: redundant, as nothing below draws random numbers; it mainly adds overhead
        constjk=dbinom(k, j + k, 0.5)
        wk=(sum(x<=(k+1)/lambda[h])-sum(x<=k/lambda[h]))/100
        sum1[h]=sum1[h]+(lambda[h]/2)*constjk*wk*wj
      }
    }
  }
  sum1
}
system.time(res1 <- sim1())
#  user  system elapsed
#   5.4     0.0     5.4
Now let's make it faster:
sim2 <- function(num=20){
  set.seed(42)  # to make it reproducible
  x <- rlnorm(100,0,1.6)
  h <- 1:num
  sum1 <- numeric(num)
  lambda <- 2+1:num/12.5
  N <- ceiling(lambda*max(x))
  # function to calculate values of sum1
  fun1 <- function(N,h,x,lambda) {
    sum1 <- 0
    set.seed(42)  # to make it reproducible
    # calculate constants using outer
    const <- outer(0:N[h], 0:N[h], FUN=function(j,k) dbinom(k, j + k, 0.5))
    wk <- numeric(N[h]+1)
    # loop only once to calculate wk
    for (k in 0:N[h]){
      wk[k+1] <- (sum(x<=(k+1)/lambda[h])-sum(x<=k/lambda[h]))/100
    }
    for (j in 0:N[h]) {
      wj <- (sum(x<=(j+1)/lambda[h])-sum(x<=j/lambda[h]))/100
      for (k in 0:N[h]) {
        sum1 <- sum1+(lambda[h]/2)*const[j+1,k+1]*wk[k+1]*wj
      }
    }
    sum1
  }
  for (h in 1:num) {
    sum1[h] <- fun1(N,h,x,lambda)
  }
  sum1
}
system.time(res2 <- sim2())
#  user  system elapsed
#  1.25    0.00    1.25

all.equal(res1, res2)
#[1] TRUE
Timings for @Backlin's code (with 20 iterations) for comparison:
#  user  system elapsed
#  3.30    0.00    3.29
If this is still too slow and you cannot or don't want to use another language, there is also the possibility of parallelization. As far as I can see, the outer loop is embarrassingly parallel, and there are some nice and easy packages for parallelization; see the sketch below.
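For example, with the doParallel setup from the multicore question above, the loop over h distributes directly across cores. A sketch, assuming fun1 from sim2 has been defined at the top level along with x, lambda and N:
library(doParallel)

cl <- makePSOCKcluster(3)
registerDoParallel(cl)

# each h is independent of the others, so the iterations can run on any core;
# foreach exports fun1, N, x and lambda to the workers automatically
sum1 <- foreach(h = 1:length(lambda), .combine = c) %dopar% fun1(N, h, x, lambda)

stopCluster(cl)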
