Stock Price Simulation R code - Slow - Monte Carlo - r

I need to perform a stock price simulation using R code. The problem is that the code is a little bit slow.
Basically I need to simulate the stock price for each time step (daily) and store it in a matrix.
An example, assuming the stock price process is Geometric Brownian Motion:
for(j in 1:100000){
  for(i in 1:252){
    S[i] <- S[i-1]*exp((r-v^2/2)*dt+v*sqrt(dt)*rnorm(1))
  }
  U[j,] <- S
}
Any suggestion to improve and speed up the code?

Assuming S[0] = 1, you can build U as follows:
Ncols <- 252
Nrows <- 100000
U <- matrix(exp((r-v^2/2)*dt+v*sqrt(dt)*rnorm(Ncols*Nrows)), ncol=Ncols, nrow=Nrows)
U <- do.call(rbind, lapply(1:Nrows, function(j)cumprod(U[j,])))
EDIT: using Joshua's and Ben's suggestions:
product version:
U <- matrix(exp((r-v^2/2)*dt+v*sqrt(dt)*rnorm(Ncols*Nrows)), ncol=Ncols, nrow=Nrows)
U <- t(apply(U, 1, cumprod))
sum version:
V <- matrix((r-v^2/2)*dt+v*sqrt(dt)*rnorm(Ncols*Nrows), ncol=Ncols, nrow=Nrows)
V <- exp( t(apply(V, 1, cumsum)) )
EDIT: as suggested by @Paul:
Execution time for each proposal (using 10000 rows instead of 10^5):
Using apply + cumprod
user system elapsed
0.61 0.01 0.62
Using apply + cumsum
user system elapsed
0.61 0.02 0.63
Using OP's original code
user system elapsed
67.38 0.00 67.52
Notes: The times shown above are the third measurement of system.time for each piece of code; the first two measurements were discarded. I've used r <- sqrt(2), v <- sqrt(3) and dt <- pi. In the OP's original code, I've also replaced S[i-1] with ifelse(i==1, 1, S[i-1]) and preallocated U.
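For reference, here is a minimal self-contained version of the cumsum approach. The parameter values (r, v, dt and the implicit S0 = 1) are illustrative assumptions, not taken from the question:
r  <- 0.05       # assumed risk-free rate
v  <- 0.20       # assumed volatility
dt <- 1/252      # daily time step
Nrows <- 100000  # number of simulated paths
Ncols <- 252     # number of time steps per path
# one row per path: cumulate the log-returns, then exponentiate (S0 = 1)
V <- matrix((r-v^2/2)*dt + v*sqrt(dt)*rnorm(Ncols*Nrows), ncol=Ncols, nrow=Nrows)
U <- exp(t(apply(V, 1, cumsum)))  # U[j, i] is the price of path j at step i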

Related

Creating a Vector from a Loop in R

Hello, I am entirely new to using R and am having some problems trying to implement the attached equation. Provided below is the general idea of what I am trying to code, where PMU1 = omega and PMU2 = omega' from the images.
I am running into two problems: Vh[i] is out of bounds for i+1 when i = 7, and I can't get a vector as the result. The expected answer for the omega matrix above is Vh = (0.25, 0.25, 0, 0, 0.5, 0). I'll eventually be using a different set of matrices, but for now I am just trying to turn the equation into code.
PMU1 <- as.matrix(PMU1)
PMU2 <- as.matrix(PMU2)
m <- nrow(PMU1)
n <- ncol(PMU1)
for (j in 1:n)
{
  Vh[i] <- sum(abs(PMU1[i,j]-PMU1[i+1,j]))
}
Vh[i]
While a fully vectorized approach also exists (see the sketch after the output below), a simple option is to use sapply:
PMU <- matrix(c(0,1,1,1,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,1,1,0,0,0,0,0,1,1),nrow = 7)
V <- sapply(1:(nrow(PMU)-1),function(i)mean(PMU[i+1,]-PMU[i,]))
After running this code, V = 0.25 0.25 0.00 0.00 0.50 0.00
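A fully vectorized variant is also possible (a sketch, not part of the original answer; the name V2 is just for illustration). diff() on a matrix takes differences between consecutive rows, so the per-row mean of absolute differences, which matches the formula in the question, is:
V2 <- rowMeans(abs(diff(PMU)))  # 0.25 0.25 0.00 0.00 0.50 0.00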

Speed up Simulation in R with Code Optimization

The generic version of what I am trying to do is to conduct a simulation study where I manipulate a few variables to see how that impacts a result. I'm having some speed issues with R. The latest simulation worked with a few iterations (10 per experiment). However, when I moved to a large scale (10k per experiment) version, the simulation has been running for 14 hours (and is still running).
Below is the code (with comments) that I am running. Being a rookie with R, I am struggling to make the simulation efficient. My hope is to learn from the comments and suggestions provided here, both to optimize this code and to apply them to future simulation studies.
Let me say a few things about what this code is supposed to do. I am manipulating two variables: effect size and sample size. Each combination is run 10k times (i.e., 10k experiments per condition). I initialize a data frame to store my results (called Results). I loop over three variables: Effect size, sample size, and iterations (10k).
Within the loops, I initialize four NULL components: p.test, p.rep, d.test, and d.rep. The former two capture the p-value of the initial t-test and the p-value of the replication (replicated under similar conditions). The latter two calculate the effect size (Cohen's d).
I generate my random data from a standard normal for the control condition (DVcontrol), and I use my effect size as the mean for the experimental condition (DVexperiment). I take the difference between the values and throw the result into the t-test function in R (paired-samples t-test). I store the results in a list called Trials and I rbind this to the Results data frame. This process is repeated 10k times until completion.
# Set Simulation Parameters
## Effect Sizes (ES is equal to mean difference when SD equals Variance equals 1)
effect_size_range <- seq(0, 2, .1) ## ES
## Sample Sizes
sample_size_range <- seq(10, 1000, 10) ## SS
## Iterations for each ES-SS Combination
iter <- 10000
# Initialize the Vector of Results
Results <- data.frame()
# Set Random Seed
set.seed(12)
# Loop over the Different ESs
for(ES in effect_size_range) {
  # Loop over the Different Sample Sizes
  for(SS in sample_size_range) {
    # Create p-value Vectors
    p.test <- NULL
    p.rep <- NULL
    d.test <- NULL
    d.rep <- NULL
    # Loop over the iterations
    for(i in 1:iter) {
      # Generate Test Data
      DVcontrol <- rnorm(SS, mean=0, sd=1)
      DVexperiment <- rnorm(SS, mean=ES, sd=1)
      DVdiff <- DVexperiment - DVcontrol
      p.test[i] <- t.test(DVdiff, alternative="greater")$p.value
      d.test[i] <- mean(DVdiff) / sd(DVdiff)
      # Generate Replication Data
      DVcontrol <- rnorm(iter, mean=0, sd=1)
      DVexperiment <- rnorm(iter, mean=ES, sd=1)
      DVdiff <- DVexperiment - DVcontrol
      p.rep[i] <- t.test(DVdiff, alternative="greater")$p.value
      d.rep[i] <- mean(DVdiff) / sd(DVdiff)
    }
    # Results
    Trial <- list(ES=ES, SS=SS,
                  d.test=mean(d.test), d.rep=mean(d.rep),
                  p.test=mean(p.test), p.rep=mean(p.rep),
                  r=cor(p.test, p.rep, method="kendall"),
                  r.log=cor(log2(p.test)*(-1), log2(p.rep)*(-1), method="kendall"))
    Results <- rbind(Results, Trial)
  }
}
Thanks in advance for your comments and suggestions,
Josh
The general approach to optimization is to run a profiler to determine what portion of the code the interpreter spends the most time in, and then to optimize that portion. Let's say your code resides in a file called test.R. In R, you can profile it by running the following sequence of commands:
Rprof() ## Start the profiler
source( "test.R" ) ## Run the code
Rprof( NULL ) ## Stop the profiler
summaryRprof() ## Display the results
(Note that these commands will generate a file Rprof.out in the directory of your R session.)
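If you also want per-line rather than per-function timings, the profiler can keep line information (a sketch; this needs the file to be sourced with source references kept):
Rprof(line.profiling = TRUE)          ## Start the profiler with line-level detail
source("test.R", keep.source = TRUE)  ## Run the code, keeping source references
Rprof(NULL)                           ## Stop the profiler
summaryRprof(lines = "show")          ## Report time per source line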
If we run the profiler on your code (with iter <- 10, rather than iter <- 10000), we get the following profile:
# $by.self
# self.time self.pct total.time total.pct
# "rnorm" 1.56 24.53 1.56 24.53
# "t.test.default" 0.66 10.38 2.74 43.08
# "stopifnot" 0.32 5.03 0.86 13.52
# "rbind" 0.32 5.03 0.52 8.18
# "pmatch" 0.30 4.72 0.34 5.35
# "mean" 0.26 4.09 0.42 6.60
# "var" 0.24 3.77 1.38 21.70
From here, we observe that rnorm and t.test are your most expensive operations (shouldn't really be a surprise as these are in your inner-most loop).
Once you have figured out where the expensive function calls are, the actual optimization consists of two steps:
Optimize the function, and/or
Optimize the number of times the function is called.
Since t.test and rnorm are built-in R functions, your only option for Step 1 above is to look for alternative packages that may have faster implementations of sampling from the normal distribution and/or running multiple t tests. Step 2 is really about restructuring your code in a way that does not recompute the same thing multiple times. For example, the following lines of code do not depend on i:
# Generate Test Data
DVcontrol <- rnorm(SS, mean=0, sd=1)
DVexperiment <- rnorm(SS, mean=ES, sd=1)
Does it make sense to move these outside the loop, or do you really need a new sample of your test data for each different value of i?
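To make Step 2 concrete, here is a sketch (not part of the original answer) of how the inner loop over i could be replaced by matrix operations for a single (ES, SS) combination. It relies on two facts: a paired t-test is a one-sample t-test on the differences, and the difference of an N(ES, 1) draw and an independent N(0, 1) draw is N(ES, sqrt(2)):
# draw all iter samples of differences at once: an SS x iter matrix
DVdiff <- matrix(rnorm(SS * iter, mean = ES, sd = sqrt(2)), nrow = SS)
m <- colMeans(DVdiff)                   # mean difference per iteration
s <- apply(DVdiff, 2, sd)               # SD per iteration (matrixStats::colSds is faster still)
tstat  <- m / (s / sqrt(SS))            # one-sample t statistics
p.test <- pt(tstat, df = SS - 1, lower.tail = FALSE)  # one-sided ("greater") p-values
d.test <- m / s                         # Cohen's d per iteration
# the replication quantities (p.rep, d.rep) follow the same pattern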

Speeding up time series simulation (for bootstrap)

I need to run a bootstrap on a time series with non-standard dependence. To do this I need to create a function that simulates the time series by making adjustments one time step at a time.
testing <- function(){
  sampleData<-as.zoo(data.frame(index=1:1000,vol=(rnorm(1000))^2,x=NA))
  sampleData[,"x"]<-sampleData[,"vol"]+rnorm(1000) # treat this as completely exogenous and unknown in connection to vol
  sampleData<-cbind(sampleData,mean=rollmean(sampleData[,"vol"],k=3,align="right"))
  sampleData<-cbind(sampleData,vol1=lag(sampleData[,"vol"],k=-1),x1=lag(sampleData[,"x"],k=-1),mean1=lag(sampleData[,"mean"],k=-1))
  # get estimate
  mod<-lm(vol~vol1+x1+mean1,data=sampleData)
  res<-mod$residuals
  for(i in 5:1000){
    # recursively estimate
    sampleData[i,"vol"]<-as.numeric(predict(mod,newdata=data.frame(sampleData[i-1,])))+res[i-3]
    # now must update the other parameters
    # first our rolling average
    sampleData[i,"mean"]<-mean(sampleData[(i-3):i,"vol"])
    # re-update our lagged variables
    sampleData[i,"vol1"]<-sampleData[i-1,"vol"]
    sampleData[i,"mean1"]<-sampleData[i-1,"mean"]
  }
  lm(vol~vol1+x1+mean1,data=sampleData)
}
When I run this code and measure the run time I get
system.time(testing())
user system elapsed
2.711 0.201 2.915
This is a slight problem for me, as I will be integrating this code into a bootstrap. This means any time taken here is multiplied by about 100 for each step, and I am repeating that a few thousand times, so a single run will take hours (or even days).
Is there anyway to speed this code up?
Kind regards,
Matthew
Here's how to avoid the overhead of predict.lm. Also note that I used a matrix instead of a zoo object; a zoo object is only a tiny bit slower per operation, but you can see just how much that slowed down your code over the whole loop. That's the price you pay for convenience.
testing.jmu <- function() {
  if(!require(xts)) stop("xts package not installed")
  set.seed(21) # for reproducibility
  sampleData <- .xts(data.frame(vol=(rnorm(1000))^2,x=NA), 1:1000)
  sampleData$x <- sampleData$vol+rnorm(1000)
  sampleData$mean <- rollmean(sampleData$vol, k=3, align="right")
  sampleData$vol1 <- lag(sampleData$vol,k=1)
  sampleData$x1 <- lag(sampleData$x,k=1)
  sampleData$mean1 <- lag(sampleData$mean,k=1)
  sampleMatrix <- na.omit(cbind(as.matrix(sampleData),constant=1))
  mod.fit <- lm.fit(sampleMatrix[,c("constant","vol1","x1","mean1")],
                    sampleMatrix[,"vol"])
  res.fit <- mod.fit$residuals
  for(i in 5:nrow(sampleMatrix)){
    sampleMatrix[i,"vol"] <-
      sum(sampleMatrix[i-1,c("constant","vol1","x1","mean1")] *
            mod.fit$coefficients)+res.fit[i-3]
    sampleMatrix[i,"mean"] <- mean(sampleMatrix[(i-3):i,"vol"])
    sampleMatrix[i,c("vol1","mean1")] <- sampleMatrix[i-1,c("vol","mean")]
  }
  lm.fit(sampleMatrix[,c("constant","vol1","x1","mean1")], sampleMatrix[,"vol"])
}
system.time(out <- testing.jmu())
# user system elapsed
# 0.05 0.00 0.05
coef(out)
# constant vol1 x1 mean1
# 1.08787779 -0.06487441 0.03416802 -0.02757601
Add the set.seed(21) call to your function and you'll see that my function returns the same coefficients as yours.
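If the remaining for loop is still a bottleneck once this sits inside the bootstrap, byte-compiling the function is a cheap extra step to try (a sketch, not part of the original answer; recent R versions already compile functions automatically, so the gain may be small):
testing.cmp <- compiler::cmpfun(testing.jmu)  # byte-compile the whole function
system.time(out.cmp <- testing.cmp())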

Why is R slow on this random permutation function?

I am new to R (Revolution Analytics R) and have been translating some Matlab functions into R.
Question: Why is the function GRPdur(n) so slow?
GRPdur = function(n){
  #
  # Durstenfeld's Permute algorithm, CACM 1964
  # generates a random permutation of {1,2,...,n}
  #
  p = 1:n # start with identity p
  for (k in seq(n,2,-1)){
    r = 1+floor(runif(1)*k); # random integer between 1 and k
    tmp = p[k];
    p[k] = p[r]; # Swap(p(r),p(k)).
    p[r] = tmp;
  }
  return(p)
}
Here is what I get on a Dell Precision 690, 2x Quadcore Xeon 5345 @ 2.33 GHz, Windows 7 64-bit:
> system.time(GRPdur(10^6))
user system elapsed
15.30 0.00 15.32
> system.time(sample(10^6))
user system elapsed
0.03 0.00 0.03
Here is what I get in Matlab 2011b
>> tic;p = GRPdur(10^6);disp(toc)
0.1364
tic;p = randperm(10^6);disp(toc)
0.1116
Here is what I get in Matlab 2008a
>> tic;p=GRPdur(10^6);toc
Elapsed time is 0.124169 seconds.
>> tic;p=randperm(10^6);toc
Elapsed time is 0.211372 seconds.
>>
Links: GRPdur is part of RPGlab, a package of Matlab functions I wrote that generates and tests various random permutation generators. The notes can be viewed separately here: Notes on RPGlab.
The original Durstenfeld Algol program is here.
Both Matlab and S (later R) started out as thin wrappers around FORTRAN functions for doing math stuff.
In S/R, for-loops have "always" been slow, but that has been OK because there are usually vectorized ways of expressing the problem. R also has thousands of functions in Fortran or C that do higher-level things quickly, for instance the sample function, which does exactly what your for-loop does, but much more quickly.
So why then is MATLAB much better at executing scripted for-loops? Two simple reasons: RESOURCES and PRIORITIES.
MathWorks who make MATLAB is a rather big company with around 2000 employees. They decided years ago to prioritize improving the performance of scripts. They hired a bunch of compiler experts and spent years developing a Just-In-Time compiler (JIT) that takes the script code and turns it into assembler code. They did a very good job too. Kudos to them!
R is open source, and the R core team works on improving R in their spare time. Luke Tierney of R core has worked hard and developed a compiler package for R that compiles R scripts to byte code. It does NOT turn it into assembler code however, but works pretty well. Kudos to him!
...But the amount of effort put into the R compiler vs. the MATLAB compiler is simply much less, and therefore the result is slower:
system.time(GRPdur(10^6)) # 9.50 secs
# Compile the function...
f <- compiler::cmpfun(GRPdur)
system.time(f(10^6)) # 3.69 secs
As you can see, the for-loop became 3x faster by compiling it to byte code. Another difference is that the R JIT compiler is not enabled by default as it is in MATLAB.
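For completeness, the byte-code JIT can also be switched on globally instead of compiling functions one by one (a sketch; in current R releases it is enabled by default, so this mainly matters for older versions):
compiler::enableJIT(3)     # level 3: compile closures and top-level loops automatically
system.time(GRPdur(10^6))  # GRPdur is now byte-compiled before it runs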
UPDATE: Just for the record, here is a slightly more optimized R version (based on Knuth's algorithm), where the random number generation has been vectorized as @joran suggested:
f <- function(n) {
  p <- integer(n)
  p[1] <- 1L
  rv <- runif(n, 1, 1:n) # random draws in [1, k]; truncated to integers when used as indices
  for (k in 2:n) {
    r <- rv[k]
    p[k] = p[r] # Swap(p(r),p(k)).
    p[r] = k
  }
  p
}
g <- compiler::cmpfun(f)
system.time(f(1e6)) # 4.84
system.time(g(1e6)) # 0.98
# Compare to Joran's version:
system.time(GRPdur1(10^6)) # 6.43
system.time(GRPdur2(10^6)) # 1.66
...still a magnitude slower than MATLAB. But again, just use sample or sample.int which apparently beats MATLAB's randperm by 3x!
system.time(sample.int(10^6)) # 0.03
Because you wrote a C program in an R skin.
n = 10^6L
p = 1:n
system.time( sample(p,n))
0.03 0.00 0.03
My response to the OP's request was too long to fit in a comment, so here's what I was referring to:
#Create r outside for loop
GRPdur1 <- function(n){
  p <- 1:n
  k <- seq(n,2,-1)
  r <- 1 + floor(runif(length(k)) * k)
  for (i in 1:length(k)){
    tmp <- p[k[i]];
    p[k[i]] <- p[r[i]];
    p[r[i]] <- tmp;
  }
  return(p)
}
library(compiler)
GRPdur2 <- cmpfun(GRPdur1)
set.seed(1)
out1 <- GRPdur(100)
set.seed(1)
out2 <- GRPdur1(100)
#Check that GRPdur1 generates identical output
identical(out1,out2)
system.time(GRPdur(10^6))
user system elapsed
12.948 0.389 13.232
system.time(GRPdur2(10^6))
user system elapsed
1.908 0.018 1.910
Not quite 10x, but more than the 3x Tommy showed just using the compiler. For a somewhat more accurate timing:
library(rbenchmark)
benchmark(GRPdur(10^6),GRPdur2(10^6),replications = 10)
test replications elapsed relative user.self sys.self
1 GRPdur(10^6) 10 127.315 6.670946 124.358 3.656
2 GRPdur2(10^6) 10 19.085 1.000000 19.040 0.222
So the 10x comment was (perhaps not surprisingly, being based on a single system.time run) optimistic, but the vectorization gains you a fair bit more speed over what the byte compiler does.

Surprisingly Slow Standard Deviation in R

I am calculating standard deviations on an expanding window where at each point I recalculate the standard deviation. This seems like a fairly straightforward thing to do that should be relatively fast. However, it takes a lot longer than you might think (~45 seconds). Am I missing something here? In Matlab this is quite fast.
t0 <- proc.time()[[3]]
z <- rep(0, 7000)
x <- rnorm(8000)
for(i in 1000:8000){
  ## print(i)
  z[i] <- sd(x[1:i])
}
print(proc.time()[[3]]- t0)
You might also try an algorithm that updates the standard deviation (well, actually, the sum of squares of differences from the mean) as you go. On my system this reduces the time from ~0.8 seconds to ~0.002 seconds.
n <- length(x)
m <- cumsum(x)/(1:n)
m1 <- c(NA,m[1:(n-1)])
ssd <- (x-m)*(x-m1)
v <- c(0,cumsum(ssd[-1])/(1:(n-1)))
z <- sqrt(v)
See http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance for details.
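A quick sanity check of the cumulative formula against R's own sd() (a sketch, not part of the original answer):
x <- rnorm(10)
n <- length(x)
m <- cumsum(x)/(1:n)
m1 <- c(NA, m[1:(n-1)])
v <- c(0, cumsum(((x-m)*(x-m1))[-1])/(1:(n-1)))
all.equal(sqrt(v)[-1], sapply(2:n, function(i) sd(x[1:i])))  # TRUE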
Also see the answers to this question: Efficient calculation of matrix cumulative standard deviation in r
Edited to fix some typos, sorry.
This takes ~1.3 seconds on my machine:
t0 <- proc.time()[[3]]
x <- rnorm(8000)
z <- sapply(1000:8000,function(y){sd(x[seq_len(y)])})
print(proc.time()[[3]]- t0)
and I'd be willing to bet there are even faster ways of doing this. Avoid explicit for loops!
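A small variation on the same idea (a sketch): vapply does the same work but declares the result type up front, which is a little safer and skips sapply's simplification step.
z <- vapply(1000:8000, function(y) sd(x[seq_len(y)]), numeric(1))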
When a somewhat similar question about cumulative variance and cumulative kurtosis operations came up on r-help a few days ago, here is what I offered:
daily <- rnorm(1000000)
mbar <- mean(daily)
cumvar <- cumsum( (daily-cumsum(daily)/1:length(daily) )^2)
cumskew <- cumsum( (daily-cumsum(daily)/1:length(daily))^3)/cumvar^(3/2)
It's certainly faster than the sapply method but may be comparable to Aaron's.
system.time( cumvar <- cumsum( (daily-cumsum(daily)/1:length(daily) )^2) )
user system elapsed
0.037 0.026 0.061
system.time(cumsd <- sqrt(cumvar) )
user system elapsed
0.009 0.005 0.013
