Plot density curve of a mixture of two normal distributions - r

I am rather new to R and could use some basic help. I'd like to generate sums of two normal random variables (variance = 1 for each) as their means move apart and plot the results. The basic idea: if the means are sufficiently far apart, the distribution will be bimodal. Here's the code I'm trying:
x <- seq(-3, 3, length=500)
for(i in seq(0, 3, 0.25)) {
  y <- dnorm(x, mean=0-i, sd=1)
  z <- dnorm(x, mean=0+i, sd=1)
  plot(x, y+z, type="l", xlim=c(-3,3))
}
Several questions:
Are there better ways to do this?
I'm only getting one PDF on my plot. How can I put multiple PDFs on the same plot?
Thank you in advance!

It is not difficult to do this using base R features. We first define a function f to compute the density of this mixture of normals:
## `x` is an evaluation grid
## `dev` is deviation of mean from 0
f <- function (x, dev) {
  (dnorm(x, -dev) + dnorm(x, dev)) / 2
}
Then we use sapply to loop through various dev to get the corresponding densities:
## `dev` sequence to test
dev <- seq(0, 3, 0.25)
## evaluation grid; extending `c(-1, 1) * max(dev)` by 4 standard deviations
x <- seq(-max(dev) - 4, max(dev) + 4, by = 0.1)
## density matrix
X <- sapply(dev, f, x = x)
## a comment on 2022-07-31: X <- outer(x, dev, f)
Finally we use matplot for plotting:
matplot(x, X, type = "l", lty = 1)
Explanation of sapply:
During sapply, x is not changed, while we pick up one element of dev at each iteration. It is equivalent to
X <- matrix(0, nrow = length(x), ncol = length(dev))
for (i in 1:length(dev)) X[, i] <- f(x, dev[i])
matplot(x, X) will plot columns of X one by one, against x.
A comment on 2022-07-31: Just use outer. Here are more examples:
Run a function of 2 arguments over a span of parameter values in R
Plot of a Binomial Distribution for various probabilities of success in R
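For reference, a quick sketch (reusing the x, dev and f defined above) showing that outer() reproduces the sapply() result:
## sketch: outer() evaluates f over the full (x, dev) grid in one call
X2 <- outer(x, dev, f)
all.equal(X, X2)  ## should be TRUE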


Trying to replicate the rgeom() function

As an exercise, I'm trying to write a function which replicates the rgeom() function. I want it to have the same arguments and return values. I've started out by using runif to generate a vector with x elements, but I'm not sure how to apply the probability distribution:
rgeometric <- function(x, prob) {
  outcomes <- runif(x)
  P <- (1 - prob)^length(x) * prob
  return (P)
}
Would it be something like the following? How can I check that the distribution is geometric?
set.seed(0)
rgeometric <- function(x, prob) {
  outcomes <- runif(x)
  P <- (1 - prob)^length(x) * prob
  for (i in x) {
    x[i] <- x[i]*P
  }
  return (outcomes)
}
rgeometric(5, 0.4)
We can accomplish this task using Inverse Transform Sampling.
First, let's clear up some of your notation.
In the rgeom() function, we'll want that first argument to be n, an integer vector of length one giving the number of samples to generate:
rgeometric <- function(n, prob) {
  u <- runif(n)
  ## do stuff
}
So how does inverse transform sampling work?
First we generate a vector u of standard uniform deviates, as shown above.
Then, for each element ui of u, we find the value of the inverse of the cumulative distribution function at ui.
For the geometric distribution, the CDF is 1 - (1 - prob)^(x+1). Solving u = 1 - (1 - prob)^(x+1) for x gives x = log(1 - u) / log(1 - prob) - 1, and taking the ceiling maps this onto the integer support, so the inverse CDF is ceiling(log(1-u) / log(1-prob)) - 1.
So, we can complete the function like so:
rgeometric <- function(n, prob) {
  u <- runif(n)
  return(ceiling(log(1-u) / log(1-prob)) - 1)
}
Your last question is how we can test whether the resulting samples follow a geometric distribution.
I don't know of a formal test that will help, but we can see it appears to work when we compare the density of 1 million random draws from this custom function to the density of 1 million random draws from base R's rgeom() function:
n <- 1e6
p <- 0.25
set.seed(0)
x <- rgeometric(n, p)
y <- rgeom(n, p)
png("so-answer.png", width = 960)
opar <- par(mfrow = c(1, 2))
plot(density(x), main = "Draws from custom function")
plot(density(y), main = "Draws from base R function")
par(opar)
dev.off()
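Another quick check (a sketch): compare the empirical quantiles of the two samples directly with a Q-Q plot; the points should fall close to the identity line.
## sketch: quantiles of the custom draws vs. quantiles of base R's rgeom draws
qqplot(x, y, main = "Custom rgeometric vs. base rgeom")
abline(0, 1, col = "red")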
Note that for the definition of the geometric distribution implemented by R, the random variable is the number of failures until the first success. Therefore you could do:
my_rgeom <- function(n, p) {
  fun <- function(p) {
    n <- 0
    stopifnot(p > 0)
    while (runif(1) > p) n <- n + 1
    n
  }
  replicate(n, fun(p))
}
Now test the function:
n <- 100000
p <- 0.25
X <- rgeom(n, p)
Y <- my_rgeom(n, p)
You can run ks.test on X and Y, though it is intended for continuous variables. A better choice here is chisq.test, comparing the binned counts of the two samples to determine whether they are similar.
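For instance, a minimal sketch of such a chi-squared comparison (the bin boundaries here are an arbitrary choice; the tail is pooled into one bin so expected counts stay reasonable):
## sketch: compare binned counts of X and Y with a chi-squared test of homogeneity
breaks <- c(0:15, Inf)
tab <- rbind(table(cut(X, breaks, right = FALSE)),
             table(cut(Y, breaks, right = FALSE)))
chisq.test(tab)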
Lastly, we could use graphical methods, e.g. superimposed histograms:
barplot(table(X), col = rgb(0.5, 1, 0.5, 0.4))
barplot(table(Y), add = TRUE, col = rgb(1, 0.5, 0, 0.3))
From the resulting plot you can see that the two are nearly identical.

Integrating the square of a probability density?

Suppose I have
set.seed(2020) # make the results reproducible
a <- rnorm(100, 0, 1)
My probability density is estimated through a kernel density estimator (Gaussian) in R, using the built-in function density. The question is how to integrate the square of the estimated density. It does not matter between which values; let us suppose between -Inf and +Inf. I have tried the following:
f <- approxfun(density(a)$x, density(a)$y)
integrate (f*f, min(density(a)$x), max(density(a)$x))
There are a couple of problems here. First, make sure you pass x and y to approxfun in the right order. Secondly, you can't multiply function objects together: you need to define a new function that returns the square of your original function:
set.seed(2020)
a <- rnorm(100, 0, 1)
f <- approxfun(density(a)$x, density(a)$y)
f2 <- function(v) ifelse(is.na(f(v)), 0, f(v)^2)
integrate(f2, -Inf, Inf)
#> 0.2591153 with absolute error < 0.00011
We can also plot the original density function and the squared density function:
curve(f, -3, 3)
curve(f2, -3, 3, add = TRUE, col = "red")
I think you should write the objective function as function(x) f(x)**2, rather than f*f, e.g.,
integrate(function(x) f(x)**2, min(density(a)$x), max(density(a)$x))
#> 0.2331793 with absolute error < 6.6e-06
Here is a way using function trapz from package caTools. It computes the integral given a vector x and its corresponding image y, using the trapezoidal rule.
I also include a function trapzf, based on the original, that computes the integral from the function returned by approxfun:
library(caTools)

trapzf <- function(x, FUN) trapz(x, FUN(x))

set.seed(2020) # make the results reproducible
a <- rnorm(100, 0, 1)
d <- density(a)
f <- approxfun(d$x, d$y)

int1 <- trapz(d$x, d$y^2)
int2 <- trapzf(d$x, function(x) f(x)^2)

int1
#[1] 0.2591226
identical(int1, int2)
#[1] TRUE
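If you would rather avoid the package dependency, the trapezoidal rule is a one-liner in base R (a sketch; trapz_base is a hypothetical helper name):
## sketch: trapezoidal rule in base R
trapz_base <- function(x, y) sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2)
trapz_base(d$x, d$y^2)  ## should agree with int1 above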

Find x from a given y in a spline function in R

I am interested in a general root finding problem for an interpolation function.
Suppose I have the following (x, y) data:
set.seed(0)
x <- 1:10 + runif(10, -0.1, 0.1)
y <- rnorm(10, 3, 1)
as well as a linear interpolation and a cubic spline interpolation:
f1 <- approxfun(x, y)
f3 <- splinefun(x, y, method = "fmm")
How can I find x-values where these interpolation functions cross a horizontal line y = y0? The following is a graphical illustration with y0 = 2.85.
par(mfrow = c(1, 2))
curve(f1, from = x[1], to = x[10]); abline(h = 2.85, lty = 2)
curve(f3, from = x[1], to = x[10]); abline(h = 2.85, lty = 2)
I am aware of a few previous threads on this topic, like
predict x values from simple fitting and annoting it in the plot
Predict X value from Y value with a fitted model
It is suggested that we simply reverse x and y, do an interpolation for (y, x) and compute the interpolated value at y = y0.
However, this is a bogus idea. Let y = f(x) be an interpolation function for (x, y); this idea is only valid when f(x) is a monotonic function of x, so that f is invertible. Otherwise x is not a function of y, and interpolating (y, x) makes no sense.
Taking the linear interpolation with my example data, this fake idea gives
fake_root <- approx(y, x, 2.85)[[2]]
# [1] 6.565559
First of all, the number of roots is incorrect. We see two roots from the figure (on the left), but the code only returns one. Secondly, it is not a correct root, as
f1(fake_root)
#[1] 2.906103
is not 2.85.
I made a first attempt at this general problem in How to estimate x value from y value input after approxfun() in R. That solution turns out to be stable for linear interpolation, but not necessarily stable for non-linear interpolation. I am now looking for a stable solution, especially for a cubic interpolation spline.
How can a solution be useful in practice?
Sometimes after a univariate linear regression y ~ x or a univariate non-linear regression y ~ f(x) we want to backsolve x for a target y. This Q & A is an example and has attracted many answers: Solve best fit polynomial and plot drop-down lines, but none is truly adaptive or easy to use in practice.
The accepted answer using polyroot works only for a simple polynomial regression;
Answers using the quadratic formula for an analytical solution work only for a quadratic polynomial;
My answer using predict and uniroot works in general, but is not convenient, as in practice using uniroot needs interaction with users (see Uniroot solution in R for more on uniroot); a sketch of the uniroot approach is shown after this list.
It would be really good if there is an adaptive and easy-to-use solution.
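For illustration, here is roughly what that uniroot approach looks like on the example data (a sketch only; the bracketing intervals are hand-picked assumptions, valid only if f3 changes sign on each of them, and uniroot finds at most one root per interval, which is exactly the inconvenience described above):
## sketch: uniroot() needs a user-supplied bracketing interval per root
uniroot(function(t) f3(t) - 2.85, interval = c(3, 5))$root
uniroot(function(t) f3(t) - 2.85, interval = c(6, 7))$root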
First of all, let me copy in the stable solution for linear interpolation proposed in my previous answer.
## given (x, y) data, find x where the linear interpolation crosses y = y0
## the default value y0 = 0 implies root finding
## since linear interpolation is just a linear spline interpolation
## the function is named RootSpline1
RootSpline1 <- function (x, y, y0 = 0, verbose = TRUE) {
  if (is.unsorted(x)) {
    ind <- order(x)
    x <- x[ind]; y <- y[ind]
  }
  z <- y - y0
  ## which piecewise linear segment crosses zero?
  k <- which(z[-1] * z[-length(z)] <= 0)
  ## analytical root finding
  xr <- x[k] - z[k] * (x[k + 1] - x[k]) / (z[k + 1] - z[k])
  ## make a plot?
  if (verbose) {
    plot(x, y, "l"); abline(h = y0, lty = 2)
    points(xr, rep.int(y0, length(xr)))
  }
  ## return roots
  xr
}
For cubic interpolation splines returned by stats::splinefun with methods "fmm", "natural", "periodic" and "hyman", the following function provides a stable numerical solution.
RootSpline3 <- function (f, y0 = 0, verbose = TRUE) {
  ## extract piecewise construction info
  info <- environment(f)$z
  n_pieces <- info$n - 1L
  x <- info$x; y <- info$y
  b <- info$b; c <- info$c; d <- info$d
  ## list of roots on each piece
  xr <- vector("list", n_pieces)
  ## loop through pieces
  i <- 1L
  while (i <= n_pieces) {
    ## complex roots
    croots <- polyroot(c(y[i] - y0, b[i], c[i], d[i]))
    ## real roots (be careful when testing 0 for floating point numbers)
    rroots <- Re(croots)[round(Im(croots), 10) == 0]
    ## the parametrization is for (x - x[i]), so need to shift the roots
    rroots <- rroots + x[i]
    ## real roots in (x[i], x[i + 1])
    xr[[i]] <- rroots[(rroots >= x[i]) & (rroots <= x[i + 1])]
    ## next piece
    i <- i + 1L
  }
  ## collapse list to atomic vector
  xr <- unlist(xr)
  ## make a plot?
  if (verbose) {
    curve(f, from = x[1], to = x[n_pieces + 1], xlab = "x", ylab = "f(x)")
    abline(h = y0, lty = 2)
    points(xr, rep.int(y0, length(xr)))
  }
  ## return roots
  xr
}
It uses polyroot piecewise: first finding all roots in the complex plane, then retaining only the real ones lying within the piecewise interval. This works because a cubic interpolation spline is just a number of piecewise cubic polynomials. My answer at How to save and load spline interpolation functions in R? shows how to obtain the piecewise polynomial coefficients, so using polyroot is straightforward.
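As a quick illustration (a sketch using the f3 from the question), the piecewise construction info that RootSpline3 reads lives in the closure environment of the spline function:
## sketch: peek at the piecewise-polynomial data stored by splinefun()
pieces <- environment(f3)$z
str(pieces[c("n", "x", "y", "b", "c", "d")])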
Using the example data in the question, both RootSpline1 and RootSpline3 correctly identify all roots.
par(mfrow = c(1, 2))
RootSpline1(x, y, 2.85)
#[1] 3.495375 6.606465
RootSpline3(f3, 2.85)
#[1] 3.924512 6.435812 9.207171 9.886640
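As a sanity check (a sketch), evaluating f3 at the reported roots should give back 2.85 up to floating-point error:
## sketch: verify the roots found by RootSpline3
f3(RootSpline3(f3, 2.85, verbose = FALSE))  ## all values should be ~2.85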
Given data points and spline function as above, simply apply findzeros() from the pracma package.
library(pracma)
xs <- findzeros(function(x) f3(x) - 2.85, min(x), max(x))
xs  # [1] 3.924513 6.435812 9.207169 9.886618
points(xs, f3(xs))

How to run a Monte Carlo simulation from a custom distribution in R

I would like to pull 1000 samples from a custom distribution in R. I have the following custom distribution:
library(gamlss)
mu <- 1
sigma <- 2
tau <- 3
kappa <- 3
rate <- 1
Rmax <- 20
x <- seq(1, 2e1, 0.01)
points <- Rmax * dexGAUS(x, mu = mu, sigma = sigma, nu = tau) * pgamma(x, shape = kappa, rate = rate)
plot(points ~ x)
How can I randomly sample via Monte Carlo simulation from this distribution?
My first attempt was the following code which produced a histogram shape I did not expect.
hist(sample(points, 1000), breaks = 51)
This is not what I was looking for as it does not follow the same distribution as the pdf.
If you want a Monte Carlo simulation, you'll need to sample from the distribution a large number of times, not take a large sample one time.
Your object, points, has values that increase with the index up to a threshold around 400, level off, and then decrease. That's what plot(points ~ x) shows. It may describe a distribution, but the actual distribution of the values inside points is different: a histogram of points shows how often its values fall within a certain range. You'll notice your x axis for the histogram is similar to the y axis for the plot(points ~ x) plot. The actual distribution of values in the points object is easy enough to see, and it is similar to what you're seeing when sampling 1000 values at random, without replacement, from an object with 1900 values in it. Here's the distribution of values in points (no simulation required):
hist(points, 100)
I used 100 breaks on purpose so you could see some of the fine details.
Notice the little bump in the tail at the top, which you may not be expecting if you want the histogram to look like the plot of the values vs. the index (or some increasing x). It means that there are more values in points around 2 than there are around 1. See how the curve of plot(points ~ x) flattens when the value is around 2, and how steep it is between 0.5 and 1.5. Notice also the large hump at the low end of the histogram, and look at the plot(points ~ x) curve again: most of the values (whether at the low end or the high end of that curve) are close to 0, or at least less than 0.25. If you look at those details, you may be able to convince yourself that the histogram is, in fact, exactly what you should expect :)
If you want a Monte Carlo simulation of a sample from this object, you might try something like:
samples <- replicate(1000, sample(points, 100, replace = TRUE))
If you want to generate data using points as a probability density function, that question has been asked and answered here
Let's define your (not normalized) probability density function as a function:
library(gamlss)
fun <- function(x, mu = 1, sigma = 2, tau = 3, kappa = 3, rate = 1, Rmax = 20)
  Rmax * dexGAUS(x, mu = mu, sigma = sigma, nu = tau) *
    pgamma(x, shape = kappa, rate = rate)
Now one approach is to use some MCMC (Markov chain Monte Carlo) method. For instance,
simMCMC <- function(N, init, fun, ...) {
  out <- numeric(N)
  out[1] <- init
  for(i in 2:N) {
    pr <- out[i - 1] + rnorm(1, ...)
    r <- fun(pr) / fun(out[i - 1])
    out[i] <- ifelse(runif(1) < r, pr, out[i - 1])
  }
  out
}
It starts from point init and gives N draws. The approach can be improved in many ways, but I'm simply going to start from init = 5, include a burn-in period of 20000 draws, and keep every second draw to reduce the dependence between them:
d <- tail(simMCMC(20000 + 2000, init = 5, fun = fun), 2000)[c(TRUE, FALSE)]
plot(density(d))
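To judge the fit visually, one could overlay the normalized target density (a sketch; the normalizing constant is obtained numerically with integrate()):
## sketch: overlay the normalized target density on the MCMC density estimate
K <- integrate(fun, -Inf, Inf)$value
curve(fun(x) / K, from = 0, to = 20, add = TRUE, col = "red")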
You invert the ECDF of the distribution:
ecd.points <- ecdf(points)
invecdfpts <- with(environment(ecd.points), approxfun(y, x))
samp.inv.ecd <- function(n = 100) invecdfpts(runif(n))
plot(density(samp.inv.ecd(100)))
plot(density(points))
png(); layout(matrix(1:2, 1))
plot(density(samp.inv.ecd(100)), main = "The Sample")
plot(density(points), main = "The Original")
dev.off()
Here's another way to do it that draws from R: Generate data from a probability density distribution and How to create a distribution function in R?:
x <- seq(1, 2e1, 0.01)
points <- 20 * dexGAUS(x, mu = 1, sigma = 2, nu = 3) * pgamma(x, shape = 3, rate = 1)
f <- function (x) 20 * dexGAUS(x, mu = 1, sigma = 2, nu = 3) * pgamma(x, shape = 3, rate = 1)
C <- integrate(f, -Inf, Inf)
C$value
#[1] 11.50361
# normalize by C$value
f <- function (x)
  20 * dexGAUS(x, mu = 1, sigma = 2, nu = 3) * pgamma(x, shape = 3, rate = 1) / 11.50361
## density values on the evaluation grid, used to build the empirical CDF
pdf <- data.frame(x = x, y = f(x))
random.points <- approx(cumsum(pdf$y)/sum(pdf$y), pdf$x, runif(10000))$y
hist(random.points, 1000)
Running hist(random.points * 40, 1000) will recover the scaling of your original function.

Illustrating the LLN (Law of Large Numbers)

I have to illustrate the Law of Large Numbers through simulations in R.
More precisely, I would like to illustrate that the cumulative distribution function of the mean converges to the function f given by
f(x) = 0 if x ≤ μ and f(x) = 1 if x > μ.
In my case, I have to use a die; that is, each Xi is uniformly distributed on {1, 2, 3, 4, 5, 6}, so μ = 3.5.
Using R, I have tried to proceed in the following way:
n <- 100
N <- 10000
mu <- 3.5
V <- numeric(N)
for(j in 1:N)
{
  V[j] <- sum(sample(1:6, n, replace = TRUE))
}
f <- function(x)
{
  if (x <= 3.5)
  {
    y <- 0
  }
  else
  {
    y <- 1
  }
  y
}
Vf <- Vectorize(f, "x")
My idea is to compare the cumulative distribution function of the mean with the function f using a plot. How can I implement this properly in R? That is, I have to plot the cumulative distribution function and the function f in one plot.
You can simulate dice-rolls like this
set.seed(1)
n.rolls <- 100
dicerolls <- sample(1:6, n.rolls, replace=TRUE)
mean(dicerolls)
As for the rest of your question I'm afraid I'd need some further explanation. Maybe you can draw an image of what kind of plot you want?
Maybe it's something like this you're thinking of?
dicerolls <- function(rolls = 2, reps = 10^4) {
  replicate(reps, mean(sample(1:6, rolls, replace = TRUE)))
}
set.seed(1)
dice.seq <- c(1:6, 20, 100)
opar <- par(no.readonly=TRUE)
par(mar = c(2, 2.5, 1, 0.1), mfrow = c(length(dice.seq), 2),
    cex = 0.5, mgp = c(1.5, 0.5, 0))
for (i in dice.seq) {
  hist(dicerolls(i), breaks = 50, col = "darkgrey",
       xlim = c(1, 6), ylim = c(0, 3), freq = FALSE, main = "", xlab = "")
  legend("topleft", paste(i, "dice"), bty = "n")
  plot(ecdf(dicerolls(i)), xlim = c(1, 6), main = "", frame.plot = FALSE)
}
par(opar)
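To address the comparison asked for in the question directly, here is a minimal sketch (assuming n = 100 rolls per replicate, as in the question) that overlays the ECDF of the sample mean with the limiting step function f:
## sketch: ECDF of the sample mean vs. the limiting step function f
set.seed(1)
means <- replicate(1e4, mean(sample(1:6, 100, replace = TRUE)))
plot(ecdf(means), xlim = c(1, 6), main = "ECDF of the mean vs. its limit")
curve(as.numeric(x > 3.5), from = 1, to = 6, n = 1001,
      add = TRUE, col = "red", lty = 2)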
Consider a die-rolling experiment and its expected value, E[X] = (1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5.
Suppose we perform the experiment of throwing the die n times, recording the number observed each time; let the observations be X1, X2, ..., Xn.
If we compute the mean, Xbar = (X1 + X2 + ... + Xn) / n, then for large n, Xbar should tend to E[X].
For a fuller explanation of the intuition and the mathematics, together with an interactive simulation and accompanying Python code, see https://statisticsexplained.blogspot.com/2020/06/law-of-large-numbers-explained-using.html.
