Creating a Vector from a Loop in R

Hello, I am entirely new to R and am running into some problems trying to implement the attached equation. Below is the general idea of what I am trying to code, where PMU1 = omega and PMU2 = omega' from the images.
I see two problems: Vh[i] is out of bounds for i+1 when i = 7, and I can't get a vector solution. Evaluating the omega matrix above should give Vh = (0.25, 0.25, 0, 0, 0.5, 0). I'll eventually be using a different set of matrices, but for now I am just trying to turn the equation into code.
PMU1 <- as.matrix(PMU1)
PMU2 <- as.matrix(PMU2)
m <- nrow(PMU1)
n <- ncol(PMU1)
for (j in 1:n) {
  Vh[i] <- sum(abs(PMU1[i, j] - PMU1[i + 1, j]))
}
Vh[i]

While a fully vectorized approach probably exists, a simple one is to use sapply:
PMU <- matrix(c(0,1,1,1,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,1,1,0,0,0,0,0,1,1),nrow = 7)
V <- sapply(1:(nrow(PMU)-1),function(i)mean(PMU[i+1,]-PMU[i,]))
After running this code, V = 0.25 0.25 0.00 0.00 0.50 0.00
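For what it's worth, a fully vectorized variant also exists, because diff() operates column-wise on matrices; this is a sketch that should reproduce V above:
V2 <- rowMeans(diff(PMU))   # diff(PMU) is PMU[i+1,] - PMU[i,] for each consecutive pair of rows
all.equal(V, V2)            # should be TRUE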

Related

Solve equation with datatable in R

I am working with a large dataset in R using data.table. I need to solve an equation and find the value of x in the expression:
data[,mean(pnorm(qnorm(var1)+x))]= 0.07
I have tried to use the optimx function with the following code:
library(optimx)
fnToFindRoot = function(x) {
  data[, mean(pnorm(qnorm(var1) + x))]
}
rootSearch = optimx(0.07, fnToFindRoot)
str(rootSearch)
fnToFindRoot(rootSearch$par)
But the result it produces is not correct. Can someone help me solve this equation?
As it's only doing optimisation over one variable, optimize should work fine e.g.
fnToFindRoot = function(x, a = 0.07) {
  y <- data[, (mean(pnorm(qnorm(var1) + x)) - a)^2]
  print(sprintf("x: %s, y:%s", x, y))
  y
}
rootSearch = optimize(fnToFindRoot, interval=c(-5, 5), a=0.07)
fnToFindRoot(rootSearch$minimum)
The problem with the way you had it set up is that the optimiser is always trying to minimise the objective. As you wrote it, it was trying to minimise mean(pnorm(qnorm(var1)+x)), with 0.07 as the starting value of x. Instead, you want to get the objective as close to 0.07 as possible, so minimise (mean(pnorm(qnorm(var1)+x)) - a)^2.
The interval argument controls the range of x that optimize can search.
Edit: I was using made-up data, so check whether rootSearch$minimum works for you. My made-up data:
set.seed(1)
data <- data.table()
data[, var1 := runif(100, 0.04, 0.45)]
> fnToFindRoot(rootSearch$minimum)
[1] "x: -0.857685927870974, y:4.1043516952502e-13"
[1] 4.104352e-13
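Since we really just need the x at which the mean equals 0.07, an alternative sketch (using the same made-up data and a hypothetical fnSigned helper) is to hand the signed difference to uniroot instead of minimising the squared one:
library(data.table)
fnSigned <- function(x, a = 0.07) data[, mean(pnorm(qnorm(var1) + x)) - a]  # signed gap from the target
rootSearch2 <- uniroot(fnSigned, interval = c(-5, 5), a = 0.07)
rootSearch2$root   # should land very close to rootSearch$minimum above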

Running two loops in R for mathematical expression calculation

I am trying to write R code for the expression given in the image. I tried using two loops and the sapply function, but failed. Can anyone suggest suitable code for calculating this expression? The lines I tried are based on the image.
Please see below:
gamma <- 1.5
s <- 1
k <- 3
i <- s:k                                               # outer index i = s, ..., k
j <- lapply(i, function(x) 0:x)                        # for each i, inner index j = 0, ..., i
prod_i <- sapply(j, function(x) prod(k + gamma - x))   # product over j of (k + gamma - j)
f <- sum(factorial(k) / factorial(k - i) * prod_i)     # sum over i of k!/(k-i)! times the product
f
# [1] 637.875
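Since the question asked for two loops, here is an equivalent double-loop sketch (my own rewrite of the vectorized code above, using the same s, k and gamma):
f2 <- 0
for (i in s:k) {                      # outer sum over i = s, ..., k
  prod_i <- 1
  for (j in 0:i) {                    # inner product over j = 0, ..., i
    prod_i <- prod_i * (k + gamma - j)
  }
  f2 <- f2 + factorial(k) / factorial(k - i) * prod_i
}
f2                                    # should again be 637.875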

Calculating a GP correlation matrix outside of a loop

So I am trying to calculate the correlation matrix associated with a Gaussian process using R, and was hoping for suggestions on doing so without the triple for-loop I have written below. Mainly I want to condense the code for readability and also to speed up the calculation.
#Example Data
n = 500
x1 = sample(1:100, n, replace = T)
x2 = sample(1:100, n, replace = T)
x3 = sample(1:100, n, replace = T)
X = cbind(x1, x2, x3)
R = matrix(NA, nrow = n, ncol = n)
for (i in 1:nrow(X)) {
  for (j in 1:nrow(X)) {
    temp = 0
    for (k in 1:ncol(X)) {
      temp = -abs(X[i, k] - X[j, k])^1.99 + temp
    }
    R[i, j] = exp(temp)
  }
}
So as n gets large, the code gets much slower. Also worth noting: since this is a correlation matrix, the matrix is symmetric and the diagonal is equal to 1.
It's much faster using this:
y <- t(X)
R <- exp(-sapply(1:ncol(y), function(i) colSums((y-y[,i])^2)))
If you want to keep your original formula:
R <- exp(-sapply(1:ncol(y), function(i) colSums(abs(y-y[,i])^1.99)))
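As another sketch (my own suggestion, assuming the 1.99 exponent is kept): base R's dist() with the Minkowski metric computes (sum_k |X[i,k]-X[j,k]|^1.99)^(1/1.99), so raising it back to the 1.99 power recovers the sum in the exponent:
D  <- as.matrix(dist(X, method = "minkowski", p = 1.99))  # pairwise Minkowski distances between rows of X
Rd <- exp(-D^1.99)                                        # undo the 1/1.99 root, then exponentiate
all.equal(R, Rd, check.attributes = FALSE)                # should be TRUE up to floating point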
I'm wondering if you could cut your calculation and looping time roughly in half by changing these two lines? (In my test the timing actually improved by more than 50%: 14.304 secs down to 6.234 secs.)
1: for(j in 1:nrow(X)){
2: R[i,j] = exp(temp)
To:
1: for(j in i:nrow(X)){
2: R[i,j] = R[j,i]= exp(temp)
Tested:
> all.equal(R, R2)
[1] TRUE
That way you populate the lower triangle without doing any calculations. BTW, what's with the 1.99? This is perhaps a problem better suited to a compiled implementation; the Rcpp package supports this and there are a lot of worked examples on SO. Perhaps a search on: [r] rcpp nested loops
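To make the Rcpp suggestion concrete, here is a minimal sketch (it assumes the Rcpp package is available; gp_corr is just a made-up name) that mirrors the triple loop in compiled code and fills both triangles at once:
library(Rcpp)
cppFunction('
NumericMatrix gp_corr(NumericMatrix X, double p) {
  int n = X.nrow(), k = X.ncol();
  NumericMatrix R(n, n);
  for (int i = 0; i < n; i++) {
    R(i, i) = 1.0;                        // a point correlates perfectly with itself
    for (int j = i + 1; j < n; j++) {
      double s = 0.0;
      for (int m = 0; m < k; m++)
        s += std::pow(std::fabs(X(i, m) - X(j, m)), p);
      R(i, j) = R(j, i) = std::exp(-s);   // symmetric, so fill both triangles
    }
  }
  return R;
}')
R3 <- gp_corr(X, 1.99)   # should match the loop-built R up to floating point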

Stock Price Simulation R code - Slow - Monte Carlo

I need to perform a stock price simulation using R code. The problem is that the code is a little bit slow.
Basically I need to simulate the stock price for each time step (daily) and store it in a matrix.
An example, assuming the stock process is geometric Brownian motion:
for (j in 1:100000) {
  for (i in 1:252) {
    S[i] <- S[i-1] * exp((r - v^2/2)*dt + v*sqrt(dt)*rnorm(1))
  }
  U[j, ] <- S
}
Any suggestion to improve and speed up the code?
Assuming S[0] = 1, you can build U as follows:
Ncols <- 252
Nrows <- 100000
U <- matrix(exp((r-v^2/2)*dt+v*sqrt(dt)*rnorm(Ncols*Nrows)), ncol=Ncols, nrow=Nrows)
U <- do.call(rbind, lapply(1:Nrows, function(j)cumprod(U[j,])))
EDIT: using Joshua's and Ben's suggestions:
product version:
U <- matrix(exp((r-v^2/2)*dt+v*sqrt(dt)*rnorm(Ncols*Nrows)), ncol=Ncols, nrow=Nrows)
U <- t(apply(U, 1, cumprod))
sum version:
V <- matrix((r-v^2/2)*dt+v*sqrt(dt)*rnorm(Ncols*Nrows), ncol=Ncols, nrow=Nrows)
V <- exp( t(apply(V, 1, cumsum)) )
EDIT: as suggested by @Paul:
Execution times for each proposal (using 10,000 rows instead of 10^5):
Using apply + cumprod
user system elapsed
0.61 0.01 0.62
Using apply + cumsum
user system elapsed
0.61 0.02 0.63
Using OP's original code
user system elapsed
67.38 0.00 67.52
Notes: the times shown above are the third measurement of system.time for each version; the first two measurements were discarded. I used r <- sqrt(2), v <- sqrt(3) and dt <- pi. In the OP's original code, I also replaced S[i-1] with ifelse(i==1, 1, S[i-1]) and preallocated U.
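For reference, the adjusted original loop I timed looked roughly like this (a sketch under the parameter assumptions above, with 10,000 rows):
r <- sqrt(2); v <- sqrt(3); dt <- pi
Nrows <- 10000; Ncols <- 252
U3 <- matrix(NA_real_, nrow = Nrows, ncol = Ncols)   # preallocated result matrix
S <- numeric(Ncols)
for (j in 1:Nrows) {
  for (i in 1:Ncols) {
    # ifelse(i == 1, 1, S[i - 1]) supplies the S[0] = 1 starting value
    S[i] <- ifelse(i == 1, 1, S[i - 1]) * exp((r - v^2/2)*dt + v*sqrt(dt)*rnorm(1))
  }
  U3[j, ] <- S
}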

Is there an efficient way to parallelize mapply?

I have many rows and on every row I compute the uniroot of a non-linear function. I have a quad-core Ubuntu machine which hasn't stopped running my code for two days now. Not surprisingly, I'm looking for ways to speed things up ;-)
After some research, I noticed that only one core is currently used and parallelization is the thing to do. Digging deeper, I came to the conclusion (maybe incorrectly?) that the package foreach isn't really meant for my problem because too much overhead is produced (see, for example, SO). A good alternative seems to be multicore for Unix machines. In particular, the pvec function seems to be the most efficient one after I checked the help page.
However, if I understand it correctly, this function only takes one vector and splits it up accordingly. I need a function that can be parallelized but takes multiple vectors (or a data.frame instead), just like the mapply function does. Is there anything out there that I missed?
Here is a small example of what I want to do: (Note that I include a plyr example here because it can be an alternative to the base mapply function and it has a parallelization option. However, it is slower in my implementation, and internally it calls foreach to parallelize, so I don't think it will help. Is that correct?)
library(plyr)
library(foreach)
n <- 10000
df <- data.frame(P   = rnorm(n, mean = 100, sd = 10),
                 B0  = rnorm(n, mean = 40, sd = 5),
                 CF1 = rnorm(n, mean = 30, sd = 10),
                 CF2 = rnorm(n, mean = 30, sd = 5),
                 CF3 = rnorm(n, mean = 90, sd = 8))
get_uniroot <- function(P, B0, CF1, CF2, CF3) {
  uniroot(function(x) {-P + B0 + CF1/x + CF2/x^2 + CF3/x^3},
          lower = 1,
          upper = 10,
          tol = 0.00001)$root
}
system.time(x1 <- mapply(get_uniroot, df$P, df$B0, df$CF1, df$CF2, df$CF3))
#user system elapsed
#0.91 0.00 0.90
system.time(x2 <- mdply(df, get_uniroot))
#user system elapsed
#5.85 0.00 5.85
system.time(x3 <- foreach(P=df$P, B0=df$B0, CF1=df$CF1, CF2=df$CF2, CF3=df$CF3, .combine = "c") %do% {
get_uniroot(P, B0, CF1, CF2, CF3)})
#user system elapsed
# 10.30 0.00 10.36
all.equal(x1, x2$V1) #TRUE
all.equal(x1, x3) #TRUE
Also, I tried to implement Ryan Thompson's chunkapply function from the SO link above (I only got rid of the doMC part because I couldn't install it; his example still works, even after adjusting his function), but I didn't get it to work. However, since it uses foreach, I figured the same arguments as above apply, so I didn't try for too long.
#chunkapply(get_uniroot, list(P=df$P, B0=df$B0, CF1=df$CF1, CF2=df$CF2, CF3=df$CF3))
#Error in { : task 1 failed - "invalid function value in 'zeroin'"
PS: I know that I could just increase tol to reduce the number of steps needed to find a uniroot. However, I have already set tol as large as possible.
I'd use the parallel package that's built into R 2.14 and work with matrices. You could then simply use mclapply like this:
library(parallel)   # provides mclapply
dfm <- as.matrix(df)
result <- mclapply(seq_len(nrow(dfm)),
                   function(x) do.call(get_uniroot, as.list(dfm[x, ])),
                   mc.cores = 4L)
unlist(result)
This is basically doing the same as mapply does, but in parallel.
But...
Mind you that parallelization always carries some overhead as well. As I explained in the question you link to, going parallel only pays off if your inner function takes significantly longer to compute than the overhead involved. In your case, the uniroot function works pretty fast. You might then consider cutting your data frame into bigger chunks and combining both mapply and mclapply. A possible way to do this is:
ncores <- 4
id <- floor(
  quantile(0:nrow(df),
           1 - (0:ncores)/ncores)
)
idm <- embed(id, 2)
mapply_uniroot <- function(id) {
  tmp <- df[(id[1] + 1):id[2], ]
  mapply(get_uniroot, tmp$P, tmp$B0, tmp$CF1, tmp$CF2, tmp$CF3)
}
result <- mclapply(nrow(idm):1,
                   function(x) mapply_uniroot(idm[x, ]),
                   mc.cores = ncores)
final <- unlist(result)
This might need some tweaking, but it essentially breaks your df into exactly as many chunks as there are cores and runs the mapply on each core. To show this works:
> x1 <- mapply(get_uniroot, df$P, df$B0, df$CF1, df$CF2, df$CF3)
> all.equal(final,x1)
[1] TRUE
It's an old topic, but FYI you now have parallel::mcmapply (the doc is here). Don't forget to set mc.cores in the options. I usually use mc.cores = parallel::detectCores() - 1 to leave one CPU free for OS operations.
x4 <- mcmapply(get_uniroot, df$P, df$B0, df$CF1, df$CF2, df$CF3, mc.cores = parallel::detectCores() - 1)
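One caveat: the mc* functions rely on forking, which isn't available on Windows. A rough cross-platform sketch (my own addition, assuming df and get_uniroot from above) uses a socket cluster with parallel::clusterMap, the cluster analogue of mapply:
library(parallel)
cl <- makeCluster(4)                       # socket cluster, also works on Windows
clusterExport(cl, "get_uniroot")           # ship the worker function to the nodes
x5 <- unlist(clusterMap(cl, get_uniroot,
                        df$P, df$B0, df$CF1, df$CF2, df$CF3))
stopCluster(cl)
all.equal(x1, x5)                          # should be TRUE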
This isn't exactly a best practices suggestion, but considerable speed-up can be had by identifying the root for all parameters in a 'vectorized' fashion. For instance,
bisect <-
  function(f, interval, ..., lower = min(interval), upper = max(interval),
           f.lower = f(lower, ...), f.upper = f(upper, ...), maxiter = 20)
{
  nrow <- length(f.lower)
  bounds <- matrix(c(lower, upper), nrow, 2, byrow = TRUE)
  for (i in seq_len(maxiter)) {
    ## move lower or upper bound to mid-point, preserving opposite signs
    mid <- rowSums(bounds) / 2
    updt <- ifelse(f(mid, ...) > 0, 0L, nrow) + seq_len(nrow)
    bounds[updt] <- mid
  }
  rowSums(bounds) / 2
}
and then
> system.time(x2 <- with(df, {
+ f <- function(x, PB0, CF1, CF2, CF3)
+ PB0 + CF1/x + CF2/x^2 + CF3/x^3
+ bisect(f, c(1, 10), PB0, CF1, CF2, CF3)
+ }))
user system elapsed
0.180 0.000 0.181
> range(x1 - x2)
[1] -6.282406e-06 6.658593e-06
versus about 1.3 s for applying uniroot separately to each row. This also combined P and B0 into a single value ahead of time, since that is how they enter the equation.
The bounds on the final value are +/- diff(interval) * (0.5 ^ maxiter) or so. A fancier implementation would replace bisection with linear or quadratic interpolation (as in the reference cited in ?uniroot), but then uniformly efficient convergence (and error handling in all cases) would be trickier to arrange.
