I am trying to implement a Monte Carlo simulation method to estimate an integral in R. However, I still get the wrong answer. My code is as follows:
f <- function(x){
  ((cos(x))/x)*exp(log(x)-3)^3
}
t <- integrate(f,0,1)
n <- 10000 #Assume we conduct 10000 simulations
int_gral <- Monte_Car(n)
int_gral
You are not doing Monte Carlo here. Monte Carlo is a simulation method that helps you approximate integrals using sums/means based on random variables.
You should do something in this flavour (you might have to verify that it is correct to say that the mean of the f output approximates your integral):
f <- function(n){
  x <- runif(n)
  return(
    ((cos(x))/x)*exp(log(x)-3)^3
  )
}
int_gral <- mean(f(10000))
What your code does is take a number n and return ((cos(n))/n)*exp(log(n)-3)^3; there is no randomness in that.
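As a quick sanity check (using a renamed copy of your integrand so it does not clash with the f above), you can compare the Monte Carlo estimate with integrate:
g <- function(x) ((cos(x))/x)*exp(log(x)-3)^3   # the integrand from the question
set.seed(1)
mean(g(runif(10000)))      # Monte Carlo estimate of the integral over (0, 1)
integrate(g, 0, 1)$value   # deterministic quadrature, for comparison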
Update
Now, to get a more precise estimate, you need to replicate this step K times. Rather than using a loop, you can use the replicate function:
K <- 100
dist <- data.frame(
  int = replicate(K, mean(f(10000)))
)
You get a distribution of estimates for your integral:
library(ggplot2)
ggplot(dist) + geom_histogram(aes(x = int, y = ..density..))
and you can use mean to have a numerical value:
mean(dist$int)
# [1] 2.95036e-05
You can evaluate the precision of your estimates with
sd(dist$int)
# [1] 2.296033e-07
Here it is small because n is already large, giving good precision from the first step.
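As an aside, you can also estimate this precision from a single run: the standard error of a Monte Carlo mean is the standard deviation of the draws divided by sqrt(n) (using the random f defined above).
draws <- f(10000)
mean(draws)               # point estimate of the integral
sd(draws) / sqrt(10000)   # estimated standard error of that mean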
I have managed to change the code as follows. Kindly confirm that I am doing the right thing.
Regards.
f <- function(x){
  ((cos(x))/x)*exp(log(x)-3)^3
}
set.seed(234)
n <- 10000
for (i in 1:10000) {
  x <- runif(n)
  I <- sum(f(x))/n
}
I
I was under the impression that simulations involving geometric Brownian motion are not supposed to yield negative numbers. However, I was trying the following Monte Carlo simulation in R for a GBM, where my initial asset price is $98.78$, $\mu = 0.208$, $\sigma = 0.824$. I initialized my data frame as follows (I am just doing 1000 simulations over 5 years, simulating the price each year):
V = matrix(0, nrow = 1000, ncol = 6)
V_df = data.frame(V)
Then:
V_df[, 1] <- 98.78
I then perform the simulations (with dt = 1, and mu and sigma as given above):
mu <- 0.208
sigma <- 0.824
dt <- 1
for (i in 1:1000) {
  for (j in 1:5) {
    V_df[i, j+1] <- V_df[i, j]*(mu*dt + sigma*sqrt(dt)*rnorm(1)) + V_df[i, j]
  }
}
When I then check V_df there are many negative entries, which is not supposed to be the case. Would anyone have an idea as to why this is so?
Thanks.
Your solution to the GBM is not correct. One step should read
V_df[i,j+1] <- V_df[i,j]*exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*rnorm(1))
However, doing this with a double loop is very inefficient. You can create a matrix of random numbers and use cumprod or cumsum to generate the paths. Which function you use depends on when you take the exp.
See also https://en.m.wikipedia.org/wiki/Geometric_Brownian_motion
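For illustration, here is a vectorised sketch of the same 1000-path, 5-step simulation with cumprod (the starting price, mu, sigma and dt are taken from the question; set.seed is added only for reproducibility):
set.seed(1)
S0 <- 98.78; mu <- 0.208; sigma <- 0.824; dt <- 1
n_paths <- 1000; n_steps <- 5
# one matrix of per-step growth factors, one row per path
Z <- matrix(rnorm(n_paths * n_steps), nrow = n_paths)
growth <- exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z)
# cumulative products along each row give the paths; prepend the starting price
paths <- cbind(S0, S0 * t(apply(growth, 1, cumprod)))
range(paths)  # all entries are strictly positive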
I have the following likelihood function which I used in a rather complex model (in practice on a log scale):
library(plyr)
dcustom = function(x, sd, L, R){
  R. = (log(R) - log(x))/sd
  L. = (log(L) - log(x))/sd
  ll = pnorm(R.) - pnorm(L.)
  return(ll)
}
df=data.frame(Range=seq(100,500),sd=rep(0.1,401),L=200,U=400)
df=mutate(df, Likelihood = dcustom(Range, sd,L,U))
with(df,plot(Range,Likelihood,type='l'))
abline(v=200)
abline(v=400)
In this function, sd is predetermined and L and R are "observations" (very much like the endpoints of a uniform distribution), so all three of them are given. The function returns a likelihood close to 1 if the model estimate x (a derived parameter) lies within the L-R range, a smooth decrease (between 0 and 1) near the bounds (whose sharpness depends on sd), and roughly 0 if x is far outside the range.
This function works very well to obtain estimates of x, but now I would like to do the inverse: draw a random x from the above function. If I would do this many times, I would generate a histogram that follows the shape of the curve plotted above.
The ultimate goal is to do this in C++, but I think it would be easier for me if I could first figure out how to do this in R.
There's some useful information online that helps me start (http://matlabtricks.com/post-44/generate-random-numbers-with-a-given-distribution, https://stats.stackexchange.com/questions/88697/sample-from-a-custom-continuous-distribution-in-r) but I'm still not entirely sure how to do it and how to code it.
I presume (not sure at all!) the steps are:
transform likelihood function into probability distribution
calculate the cumulative distribution function
inverse transform sampling
Is this correct and if so, how do I code this? Thank you.
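A minimal sketch of those three steps (assuming an arbitrary finite support, here [50, 600], over which the likelihood is normalised, and a CDF tabulated on a grid rather than inverted analytically) might look like this:
# 1. turn the likelihood into a proper density on a finite support
support <- c(50, 600)
norm_const <- integrate(function(x) dcustom(x, sd = 0.1, L = 200, R = 400),
                        support[1], support[2])$value
dens <- function(x) dcustom(x, sd = 0.1, L = 200, R = 400) / norm_const
# 2. tabulate the cumulative distribution function on a grid
grid <- seq(support[1], support[2], length.out = 2000)
cdf <- cumsum(dens(grid)) * diff(grid)[1]
cdf <- cdf / max(cdf)   # guard against small numerical drift
# 3. inverse transform sampling: invert the tabulated CDF by interpolation
draws <- approx(x = cdf, y = grid, xout = runif(10000), rule = 2, ties = "ordered")$y
hist(draws, breaks = 100)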
One idea might be to use the Metropolis-Hastings algorithm to obtain a sample from the distribution, given all the other parameters and your likelihood.
# Metropolis-Hastings algorithm
set.seed(2018)
n_sample <- 100000
posterior_sample <- rep(NA, n_sample)
x <- 300 # starting value: I chose 300 based on your likelihood plot
for (i in 1:n_sample){
  lik <- dcustom(x = x, sd = 0.1, L = 200, R = 400)
  # propose a value for x (you can adjust the step size with the sd)
  x.proposed <- x + rnorm(1, 0, sd = 20)
  lik.proposed <- dcustom(x = x.proposed, sd = 0.1, L = 200, R = 400)
  r <- lik.proposed/lik # acceptance ratio (the proposal is symmetric)
  # accept the proposed value with probability min(1, r), otherwise keep the current one
  if (runif(1) < r) {
    x <- x.proposed
  }
  posterior_sample[i] <- x # record the current state at every iteration
}
# plotting the density
approximate_distr <- na.omit(posterior_sample)
d <- density(approximate_distr)
plot(d, main = "Sample from distribution")
abline(v=200)
abline(v=400)
# If you now want to sample just a few values (for example, 5) you could use
sample(approximate_distr,5)
#[1] 281.7310 371.2317 378.0504 342.5199 412.3302
I have a relationship:
y = a + b + c
I have the average and standard deviation of a, b and c, and I would like to obtain the probability distribution of y from this by Monte Carlo simulation.
Is there a function or package or easy way that I can use to do this?
I assume that you are assuming your inputs a, b and c are normally distributed because you say you can define them with a mean and a standard deviation. If that is the case, you can do this pretty fast without any special package.
mu.a = 33
mu.b = 32
mu.c = 13
sigma.a = 22
sigma.b = 22
sigma.c = 222
n = 10^5   # a large number of draws
a = rnorm(n, mu.a, sigma.a)
b = rnorm(n, mu.b, sigma.b)
c = rnorm(n, mu.c, sigma.c)
y = a + b + c
plot(density(y))
mean(y)
sd(y)
Make sure to be aware of all the assumptions we are making about y, a, b and c.
If you want to do something more complex, like figure out the sampling variance of the mean of y, then do this procedure many times, collecting the mean each time, and plot the results.
mysimfun = function(n, mu, sigma, stat.you.want = 'mean') {
  # mu and sigma each have length 3 (one entry per input variable)
  a = rnorm(n, mu[1], sigma[1])
  b = rnorm(n, mu[2], sigma[2])
  c = rnorm(n, mu[3], sigma[3])
  y = a + b + c
  return(ifelse(stat.you.want == 'mean', mean(y), sd(y)))
}
mu = c(mu.a, mu.b, mu.c)
sigma = c(sigma.a, sigma.b, sigma.c)
mi = rep(NA, 100)
Then run it in a loop of some sort.
for (i in 1:100) { mi[i] = mysimfun(10, mu, sigma, stat.you.want = 'mean') }
par(mfrow = c(2, 1))
hist(mi)
plot(density(mi))
mean(mi)
sd(mi)
There would be two approaches: bootstrapping, which I think is what you might mean by Monte Carlo, or, if you are more interested in theory than in constructing estimates from empirical distributions, the 'distr' package and its friends 'distrSim' and 'distrTEst'.
require(boot)
ax <- rnorm(100); bx <- runif(100); cx <- rexp(100)
dat <- data.frame(ax = ax, bx = bx, cx = cx)
boot(dat, function(d, i) with(d[i, ], mean(ax + bx + cx)), R = 1000)
boot(dat, function(d, i) with(d[i, ], sd(ax + bx + cx)), R = 1000)
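And a sketch of the 'distr' route, assuming normally distributed inputs with the means and standard deviations from the first answer (distr overloads arithmetic on distribution objects, so the convolution of the three inputs is handled for you):
library(distr)
A <- Norm(mean = 33, sd = 22)
B <- Norm(mean = 32, sd = 22)
C <- Norm(mean = 13, sd = 222)
Y <- A + B + C   # the distribution of y
plot(Y)          # density, CDF and quantile function
r(Y)(10)         # ten random draws from y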
I'm now performing a location model using non-parametric smoothing to estimate the parameters. One of the smoothed parameters is the lambda that I have to optimize, so I decided to use the nlminb function to do it.
However, my program gives me the same $par value even though it iterated 150 times and made 200 evaluations (by default); it simply returns the start value (0.000001) as $par. I think there must be something wrong with my program.
My program looks like the following (note: w is the parameter that I want to optimize and LOO stands for leave-one-out):
BEGIN
Myfunc <- function(w, n1, n2, v1, v2, g)
{ ## open loop for main function
  ## DATA generation
  # generate data from groups 1 and 2
  # for each group: discretise the continuous to binary
  # newdata <- combine the groups 1 and 2

  ## MODEL construction
  countError <- 0
  n <- nrow(newdata)
  for (k in 1:n)
  { # open loop for leave-one-out
    # construct model based on n-1 objects using the smoothing method
    # classify the omitted object
    countError <- countError + countE
  } # close loop for LOO process
  Error <- countError / n # error rate counted from the LOO procedure
  return(Error) # the average error rate from the LOO procedure
} # close loop for Myfunc

library(stats)
nlminb(start=0.000001, Myfunc, lower=0.000001, upper=0.999999,
       control=list(eval.max=100, iter.max=100))
END
Could someone help me? Your concerns and guidance are highly appreciated and really needed.
Hashibah,
Statistics PhD Student
In your question, you provide nlminb with a univariate starting value. If you are doing univariate optimisation, it is probably worth looking at optimize. If your function is multivariate, then you need to call nlminb slightly differently.
You need to define the objective function such that the parameters to optimize over are provided as a vector in its first argument. Other inputs to the objective function should be provided as subsequent arguments.
For example (modified from the nlminb help page):
X <- rnbinom(100, mu = 10, size = 10)
hdev <- function(par, x) {
  -sum(dnbinom(x, mu = par[1], size = par[2], log = TRUE))
}
nlminb(start = c(9, 12), hdev, x = X)
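For the univariate case mentioned above, optimize searches an interval directly; a toy sketch (the quadratic objective here is just a stand-in, since the Myfunc from the question is not runnable as posted):
obj <- function(w) (w - 0.3)^2   # stand-in for your LOO error-rate function
optimize(obj, interval = c(0.000001, 0.999999))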
In R, what's the best way to simulate an arbitrary univariate random variate if only its probability density function is available?
Here is a (slow) implementation of the inverse cdf method when you are only given a density.
den <- dnorm  # replace with your own density

# calculates the cdf by numerical integration
cdf <- function(x) integrate(den, -Inf, x)[[1]]

# inverts the cdf
inverse.cdf <- function(x, cdf, starting.value = 0){
  # widen the bracket downwards until the cdf falls below x
  lower.found <- FALSE
  lower <- starting.value
  while(!lower.found){
    if(cdf(lower) >= (x - .000001))
      lower <- lower - (lower - starting.value)^2 - 1
    else
      lower.found <- TRUE
  }
  # widen the bracket upwards until the cdf exceeds x
  upper.found <- FALSE
  upper <- starting.value
  while(!upper.found){
    if(cdf(upper) <= (x + .000001))
      upper <- upper + (upper - starting.value)^2 + 1
    else
      upper.found <- TRUE
  }
  # solve cdf(y) = x within the bracket
  uniroot(function(y) cdf(y) - x, c(lower, upper))$root
}

# generates 1000 random variables with density 'den'
vars <- apply(matrix(runif(1000)), 1, function(x) inverse.cdf(x, cdf))
hist(vars)
To clarify the "use Metropolis-Hastings" answer above:
Suppose ddist() is your probability density function; then you could do something like:
n <- 10000
cand.sd <- 0.1
init <- 0
vals <- numeric(n)
vals[1] <- init
oldprob <- 0
for (i in 2:n) {
  newval <- rnorm(1, mean = vals[i-1], sd = cand.sd)
  newprob <- ddist(newval)
  if (runif(1) < newprob/oldprob) {
    vals[i] <- newval
  } else vals[i] <- vals[i-1]
  oldprob <- newprob
}
Notes:
completely untested
efficiency depends on candidate distribution (i.e. value of cand.sd).
For maximum efficiency, tune cand.sd to an acceptance rate of 25-40%
results will be autocorrelated ... (although I guess you could always sample() the results to scramble them, or thin)
may need to discard a "burn-in", if your starting value is weird
The classical approach to this problem is rejection sampling (see e.g. Press et al Numerical Recipes)
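A rough sketch of that approach, assuming the target density is bounded above by a known constant M on a finite interval [a, b] (the function and its arguments are illustrative, not from any package):
# draws follow `den` truncated to [a, b]; requires den(x) <= M on that interval
rejection_sample <- function(n, den, a, b, M) {
  out <- numeric(0)
  while (length(out) < n) {
    x <- runif(n, a, b)             # candidates from a uniform envelope
    keep <- runif(n) < den(x) / M   # accept each with probability den(x)/M
    out <- c(out, x[keep])
  }
  out[1:n]
}
# e.g. an (approximately) standard normal, truncated to [-4, 4]
hist(rejection_sample(10000, dnorm, -4, 4, M = dnorm(0)), breaks = 50)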
Use the cumulative distribution function http://en.wikipedia.org/wiki/Cumulative_distribution_function
Then just use its inverse.
Check here for a better picture http://en.wikipedia.org/wiki/Normal_distribution
That means: pick a random number from [0,1], treat it as a value of the CDF, and look up the corresponding value of the variable.
The inverse CDF is also called the quantile function.
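For instance, when the inverse CDF has a closed form (here an Exponential(2), chosen purely for illustration), the recipe is a one-liner:
u <- runif(10000)
x <- -log(1 - u) / 2   # inverse of F(x) = 1 - exp(-2*x)
hist(x, breaks = 50)   # matches dexp(x, rate = 2)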
This is a comment, but I don't have enough reputation to leave a comment on Ben Bolker's answer.
I am new to Metropolis, but IMHO this code is wrong because:
a) newval is drawn from a normal distribution, whereas in other implementations it is drawn from a uniform distribution; this value must be drawn from the range covered by the random variable. For example, for a Gaussian distribution this should be something like runif(1, -5, +5).
b) the prob value must be updated only upon acceptance.
Hope this helps, and I hope that someone with reputation can correct this answer (especially mine, if I am wrong).
# the distribution
ddist <- dnorm
# number of random numbers
n <- 100000
# the center of the range is taken as init
init <- 0
# the following should go into a function
vals <- numeric(n)
vals[1] <- init
oldprob <- 0
for (i in 2:n) {
  newval <- runif(1, -5, +5)
  newprob <- ddist(newval)
  if (runif(1) < newprob/oldprob) {
    vals[i] <- newval
    oldprob <- newprob
  } else vals[i] <- vals[i-1]
}
# Final view
hist(vals, breaks = 100)
# and comparison
hist(rnorm(length(vals)), breaks = 100)