Monte Carlo simulation from a pdf using runif - r

I'm given a pdf for X where f(x) = 2x when x is between 0 and 1, and f(x) = 0 otherwise. In class we learned to sample from a uniform distribution and transform the data to solve for y, however, I'm unsure how to apply that here because if I generate data from a uniform distribution then most of it will be between 0 and 1.
Am I doing these steps in the wrong order? It just seems weird to have a PDF that will lead to most of the data just being multiplied by 2.

I will use R's convention of naming PDFs with an initial d, CDFs with an initial p, and quantile functions (inverse CDFs) with an initial q.
It is very simple: integrate dmydist(x) = 2*x to get the CDF x^2, then invert it to get the quantile function qmydist(u) = sqrt(u). Feeding uniform draws to the quantile function (inverse transform sampling) gives the RNG immediately.
dmydist <- function(x) {
  ifelse(x >= 0 & x <= 1, 2 * x, 0)
}
qmydist <- function(u) {
  ifelse(u >= 0 & u <= 1, sqrt(u), 0)
}
rmydist <- function(n) qmydist(runif(n))
set.seed(1234)
x <- rmydist(10000)
hist(x, prob = TRUE)
lines(seq(0, 1, by = 0.01), dmydist(seq(0, 1, by = 0.01)))
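As a quick check (a sketch reusing the objects above), the empirical CDF of the simulated values should line up with the theoretical CDF F(x) = x^2:
u <- seq(0, 1, by = 0.01)
plot(u, ecdf(x)(u), type = "l", ylab = "CDF")  # empirical CDF of the draws
lines(u, u^2, lty = 2, col = "red")            # theoretical CDF x^2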

There are many ways to do this. One of them is rejection sampling (https://en.wikipedia.org/wiki/Rejection_sampling). Simply put:
Sample a point on the x-axis from the proposal distribution (here, uniform on [0, 1]).
Draw a vertical line at this x-position, up to the maximum of the target probability density function (here, 2).
Sample uniformly along this line from 0 to that maximum. If the sampled value is greater than the value of the desired density at this x-position, return to step 1; otherwise keep the x-value as a draw from the desired distribution.
n <- 1e5
x <- runif(n)        # step 1: proposals from U(0, 1)
u <- 2 * runif(n)    # steps 2-3: uniform heights between 0 and the pdf maximum, 2
hist(x[u < 2 * x])   # keep the x-values whose height falls under f(x) = 2x
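The same steps can be wrapped in a small helper that keeps sampling until n points have been accepted (a sketch; rreject is a made-up name, and it assumes the target density lives on [0, 1] and is bounded by M):
rreject <- function(n, dtarget, M) {
  out <- numeric(0)
  while (length(out) < n) {
    x <- runif(n)                     # step 1: proposals from U(0, 1)
    u <- runif(n, 0, M)               # steps 2-3: uniform heights up to the bound M
    out <- c(out, x[u < dtarget(x)])  # accept where the height falls under the target pdf
  }
  out[seq_len(n)]
}
hist(rreject(1e4, function(x) 2 * x, M = 2))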

Related

How would you count the number of elements that are TRUE in a vector?

The pdf is fR(r) = 1/(1+r)^2 and Rsample = Xsample/Ysample, where X and Y are independent exponential distributions with rate = 0.001; Xsample is 100 values stored in x and Ysample is 100 values stored in y.
Find the CDF FR(r) corresponding to the pdf and evaluate it at r ∈ {0.1, 0.2, 0.25, 0.5, 1, 2, 4, 5, 10}. Find the proportions of values in Rsample less than each of these values of r and plot the proportions against FR(0.1), FR(0.2), ..., FR(5), FR(10). What does this plot show?
I know that the CDF is the integral of the pdf, but wouldn't this give me negative values? Also, for the proportions part, how would you count the number of elements that are TRUE, that is, the number of elements for which Rsample is less than each element of r?
r=c(0.1,0.2,0.2,0.5,1,2,4,5,10)
prop=c(1:9)
for(i in 1:9)
{
x=Rsample<r[i]
prop[i]=c(TRUE,FALSE)
}
sum(prop[i])
You've made a few different errors here. The solution should look something like this.
Start by defining your variables and drawing your samples from the exponential distribution using rexp(100, 0.001):
r <- c(0.1, 0.2, 0.25, 0.5, 1, 2, 4, 5, 10)
set.seed(69) # Make random sample reproducible
x <- rexp(100, 0.001) # 100 random samples from exponential distribution
y <- rexp(100, 0.001) # 100 random samples from exponential distribution
Rsample <- x/y
The tricky part is getting the proportion of Rsample that is less than each value of r. For this we can use sapply instead of a loop.
props <- sapply(r, function(x) length(which(Rsample < x))/length(Rsample))
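Since the mean of a logical vector is the proportion of TRUE values, an equivalent and slightly shorter version is:
props <- sapply(r, function(x) mean(Rsample < x))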
We get the cdf from the pdf by integrating from 0 to r; the bare antiderivative -1/(1+r) is what was giving you negative values, and evaluating the definite integral adds the constant back:
cdf_at_r <- r/(1 + r) # integral of 1/(1+r)^2 from 0 to r, i.e. 1 - 1/(1+r)
And we can see what happens when we plot the proportions against the cdf evaluated at the same values of r:
plot(cdf_at_r, props)
# What do we notice?
lines(c(0, 1), c(0, 1), lty = 2, col = "red") # the points hug the 45-degree line
This is how you can count the number of elements for which R-sample is less than each element of r:
r <- c(0.1, 0.2, 0.25, 0.5, 1, 2, 4, 5, 10)
counts <- integer(length(r))
for (i in seq_along(r)) {
  counts[i] <- sum(Rsample < r[i])  # sum() of a logical vector counts the TRUE values
}
counts
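The loop can also be collapsed to a one-liner, because sum() over a logical comparison already does the counting:
sapply(r, function(ri) sum(Rsample < ri))  # one count per element of r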

How to run a Monte Carlo simulation from a custom distribution in R

I would like to pull 1000 samples from a custom distribution in R.
I have the following custom distribution:
library(gamlss)
mu <- 1
sigma <- 2
tau <- 3
kappa <- 3
rate <- 1
Rmax <- 20
x <- seq(1, 2e1, 0.01)
points <- Rmax * dexGAUS(x, mu = mu, sigma = sigma, nu = tau) * pgamma(x, shape = kappa, rate = rate)
plot(points ~ x)
How can I randomly sample via Monte Carlo simulation from this distribution?
My first attempt was the following code which produced a histogram shape I did not expect.
hist(sample(points, 1000), breaks = 51)
This is not what I was looking for as it does not follow the same distribution as the pdf.
If you want a Monte Carlo simulation, you'll need to sample from the distribution a large number of times, not take a large sample one time.
Your object, points, has values that increase with the index up to a threshold around 400, level off, and then decrease. That is what plot(points ~ x) shows. It may describe a distribution, but the actual distribution of the values stored in points is different: a histogram of points shows how often its values fall within a certain range. You'll notice that the x axis of your histogram is similar to the y axis of the plot(points ~ x) plot. The actual distribution of values in the points object is easy enough to see, and it is similar to what you get when sampling 1000 values at random, without replacement, from an object with roughly 1900 values in it. Here's the distribution of values in points (no simulation required):
hist(points, 100)
I used 100 breaks on purpose so you could see some of the fine details.
Notice the little bump in the tail at the top, which you may not expect if you want the histogram to look like the plot of the values against the index (or some increasing x). It means there are more values in points around 2 than around 1. Look at how the curve of plot(points ~ x) flattens where the value is around 2, and how steep it is between 0.5 and 1.5. Notice also the large hump at the low end of the histogram, and look at the plot(points ~ x) curve again: most of the values, whether they sit at the low end or the high end of that curve, are close to 0, or at least less than 0.25. If you look at those details, you may be able to convince yourself that the histogram is, in fact, exactly what you should expect :)
If you want a Monte Carlo simulation of a sample from this object, you might try something like:
samples <- replicate(1000, sample(points, 100, replace = TRUE))
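Each column of samples is then one simulated draw of size 100, so you can summarise across replications, for example by looking at the distribution of the column means (a sketch):
dim(samples)             # 100 x 1000: one column per replication
hist(colMeans(samples))  # distribution of the mean of points across replications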
If you want to generate data using points as a probability density function, that question has been asked and answered here
Let's define your (not normalized) probability density function as a function:
library(gamlss)
fun <- function(x, mu = 1, sigma = 2, tau = 3, kappa = 3, rate = 1, Rmax = 20)
  Rmax * dexGAUS(x, mu = mu, sigma = sigma, nu = tau) *
    pgamma(x, shape = kappa, rate = rate)
Now one approach is to use some MCMC (Markov chain Monte Carlo) method. For instance,
simMCMC <- function(N, init, fun, ...) {
  out <- numeric(N)
  out[1] <- init
  for (i in 2:N) {
    pr <- out[i - 1] + rnorm(1, ...)                # random-walk proposal
    r <- fun(pr) / fun(out[i - 1])                  # Metropolis acceptance ratio
    out[i] <- ifelse(runif(1) < r, pr, out[i - 1])  # accept the proposal or keep the current value
  }
  out
}
It starts from the point init and gives N draws. The approach can be improved in many ways, but here I simply start from init = 5, discard a burn-in period of 20,000 draws, and keep every second draw (thinning) to reduce the number of repeated values:
d <- tail(simMCMC(20000 + 2000, init = 5, fun = fun), 2000)[c(TRUE, FALSE)]
plot(density(d))
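To check the sampler, you can overlay the normalised target density on the kernel density estimate of the draws (a sketch; the normalising constant is computed numerically with integrate):
konst <- integrate(fun, -Inf, Inf)$value
plot(density(d))
curve(fun(x) / konst, from = 0, to = 20, add = TRUE, lty = 2, col = "red")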
You invert the ECDF of the distribution:
ecd.points <- ecdf(points)                                    # empirical CDF of the values in points
invecdfpts <- with(environment(ecd.points), approxfun(y, x))  # interpolated inverse of the ECDF
samp.inv.ecd <- function(n = 100) invecdfpts(runif(n))        # inverse-transform sampling from it
plot(density(samp.inv.ecd(100)))
plot(density(points))
png(); layout(matrix(1:2, 1))
plot(density(samp.inv.ecd(100)), main = "The Sample")
plot(density(points), main = "The Original"); dev.off()
Here's another way to do it, which draws on R: Generate data from a probability density distribution and How to create a distribution function in R?:
x <- seq(1, 2e1, 0.01)
points <- 20*dexGAUS(x,mu=1,sigma=2,nu=3)*pgamma(x,shape=3,rate=1)
f <- function (x) (20*dexGAUS(x,mu=1,sigma=2,nu=3)*pgamma(x,shape=3,rate=1))
C <- integrate(f, -Inf, Inf)
C$value
# [1] 11.50361
# normalize by C$value
f <- function(x)
  (20 * dexGAUS(x, mu = 1, sigma = 2, nu = 3) * pgamma(x, shape = 3, rate = 1) / 11.50361)
pdf <- data.frame(x = x, y = f(x))  # tabulated (normalised) pdf on the grid defined above
random.points <- approx(cumsum(pdf$y)/sum(pdf$y), pdf$x, runif(10000))$y
hist(random.points, 1000)
hist(random.points * 40, 1000) will give scaling like your original function.
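As a sanity check (a sketch reusing f and the grid from above), a density-scaled histogram of the draws should roughly track the normalised pdf over the grid [1, 20]:
hist(random.points, 100, prob = TRUE)
curve(f(x), from = 1, to = 20, add = TRUE, col = "red")  # f was normalised above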

Creating a histogram from iterations of a binomial distribution in R

Here are the instructions:
Create 10,000 iterations (N = 10,000) of rbinom(50, 1, 0.5) with n = 50 and your guess of p0 = 0.50 (hint: you will need to construct a for loop). Plot a histogram of the results of the sample. Then plot your pstar on the histogram. If pstar is not in the extreme region of the histogram, you would assume your guess is correct and vice versa. Finally calculate the probability that p0 < pstar (this is a p value).
I know how to create the for loop and the rbinom function, but I am unsure how to transfer this information into plotting a histogram, in addition to plotting a custom point (my guess value).
I'm not doing your homework for you, but this should get you started. You don't say what pstar is supposed to be, so I am assuming you are interested in the (distribution of the) maximum likelihood estimates for p.
You create 10,000 samples of n = 50 draws each (there is no need for a for loop):
sample <- lapply(seq(10^4), function(x) rbinom(50, 1, 0.5))
The ML estimates for p are then
phat <- sapply(sample, function(x) sum(x == 1) / length(x))
Inspect the distribution
library(ggplot2)
ggplot(data.frame(phat = phat), aes(phat)) + geom_histogram(bins = 30)
and calculate the probability that p0 < phat.
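To mark your guess on the histogram and get the p value, you can add a vertical line at p0 and take the proportion of estimates that exceed it (a sketch; p0 = 0.50 as in the instructions):
ggplot(data.frame(phat = phat), aes(phat)) +
  geom_histogram(bins = 30) +
  geom_vline(xintercept = 0.5, colour = "red")  # your guess p0
mean(0.5 < phat)  # proportion of ML estimates larger than p0 (the p value)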
Edit 1
If you insist, you can also use a for loop to generate your samples.
sample <- list()
for (i in 1:10^4) {
  sample[[i]] <- rbinom(50, 1, 0.5)
}

R: what is the vector of quantiles in the density function dmvnorm

library(mvtnorm)
dmvnorm(x, mean = rep(0, p), sigma = diag(p), log = FALSE)
The dmvnorm provides the density function for a multivariate normal distribution. What exactly does the first parameter, x represent? The documentation says "vector or matrix of quantiles. If x is a matrix, each row is taken to be a quantile."
> dmvnorm(x=c(0,0), mean=c(1,1))
[1] 0.0585
Here is the sample code from the help page. In that case, are you computing the probability of having quantile 0 under a normal distribution with mean 1 and sd 1 (assuming that's the default)? Since this is a multivariate normal density function and a vector of quantiles (0, 0) was passed in, why isn't the output a vector of probabilities?
Taking a bivariate normal (X1, X2) as an example: by passing in x = (0, 0) you ask for the joint density evaluated at the single point (X1 = 0, X2 = 0), which is a single value (for a continuous distribution the probability of any exact point is zero, so dmvnorm returns a density, not a probability). Why would you expect a vector?
If you want a vector, you need to pass in a matrix. For example, x = cbind(c(0, 1), c(0, 1)) gives
the density at (X1 = 0, X2 = 0)
the density at (X1 = 1, X2 = 1)
In this situation, each row of the matrix is evaluated separately, and you get one density value per row.
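A quick illustration (a sketch, relying on the default identity covariance): a two-row matrix returns one density per row, matching the corresponding single-point calls.
library(mvtnorm)
xmat <- rbind(c(0, 0), c(1, 1))  # each row is one quantile point
dmvnorm(xmat, mean = c(1, 1))    # two densities, one per row
c(dmvnorm(c(0, 0), mean = c(1, 1)),
  dmvnorm(c(1, 1), mean = c(1, 1)))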

Extract approximate probability density function (pdf) in R from random sampling

I have n > 2 independent continuous random variables (RVs). For example, say I have 4 uniform RVs with different upper and lower bounds:
W ~ U[-1, 5], X ~ U[0, 1], Y ~ U[0, 2], Z ~ U[0.5, 2]
I am trying to find an approximate PDF for the sum of these RVs, i.e. for T = W + X + Y + Z. As I don't need any closed-form solution, I have sampled 1 million points from each of them to get 1 million samples of T. Is it possible in R to get an approximate PDF function, or a way to get the approximate probability P(t < T), from the samples I have drawn? For example, is there an easy way to calculate P(0.5 < T) in R? My priority here is to get the probability first, even if getting the density function is not possible.
Thanks
Consider the ecdf function:
set.seed(123)
W <- runif(1e6, -1, 5)
X <- runif(1e6, 0, 1)
Y <- runif(1e6, 0, 2)
Z <- runif(1e6, 0.5, 2)
T <- Reduce(`+`, list(W, X, Y, Z))
cdfT <- ecdf(T)
1 - cdfT(0.5) # Pr(T > 0.5)
# [1] 0.997589
See How to calculate cumulative distribution in R? for more details.
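For the approximate pdf the question also asks about, a kernel density estimate of the same draws can be turned into a callable function (a sketch reusing T from above):
dens <- density(T)               # kernel density estimate of T
dT <- approxfun(dens$x, dens$y)  # approximate pdf as an R function
dT(0.5)                          # approximate density at t = 0.5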
