Suppose I have
set.seed(2020) # make the results reproducible
a <- rnorm(100, 0, 1)
My probability density is estimated with a (Gaussian) kernel density estimator, using the built-in R function density. The question is how to integrate the square of the estimated density. It does not matter between which limits; let us suppose between -Inf and +Inf. I have tried the following:
f <- approxfun(density(a)$x, density(a)$y)
integrate (f*f, min(density(a)$x), max(density(a)$x))
There are a couple of problems here. First, make sure x and y are passed to approxfun in the right order (x first, then y). Secondly, you can't multiply function names together. You need to specify a new function that gives you the square of your original function:
set.seed(2020)
a <- rnorm(100, 0, 1)
f <- approxfun(density(a)$x, density(a)$y)
f2 <- function(v) ifelse(is.na(f(v)), 0, f(v)^2)
integrate (f2, -Inf, Inf)
#> 0.2591153 with absolute error < 0.00011
We can also plot the original density function and the squared density function:
curve(f, -3, 3)
curve(f2, -3, 3, add = TRUE, col = "red")
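As a rough sanity check (an addition, not part of the original answer): for the true standard normal density the integral of its square is 1/(2*sqrt(pi)) ≈ 0.282, so a value around 0.26 from a kernel estimate based on only 100 points is in the right ballpark:
1 / (2 * sqrt(pi))   # exact value of the integral of dnorm(x)^2 over the real line
#> [1] 0.2820948
integrate(function(x) dnorm(x)^2, -Inf, Inf)$value   # the same value, numerically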
I think you should write the integrand as function(x) f(x)**2, rather than f*f, e.g.,
> integrate (function(x) f(x)**2, min(density(a)$x), max(density(a)$x))
0.2331793 with absolute error < 6.6e-06
Here is a way using function trapz from package caTools. Given a vector x and the corresponding values y, it computes the integral using the trapezoidal rule.
I also include a small wrapper trapzf, based on trapz, so that the integral can be computed from the function returned by approxfun.
library(caTools)
trapzf <- function(x, FUN) trapz(x, FUN(x))
set.seed(2020) # make the results reproducible
a <- rnorm(100, 0, 1)
d <- density(a)
f <- approxfun(d$x, d$y)
int1 <- trapz(d$x, d$y^2)
int2 <- trapzf(d$x, function(x) f(x)^2)
int1
#[1] 0.2591226
identical(int1, int2)
#[1] TRUE
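If you prefer to avoid the caTools dependency, the trapezoidal rule is easy to write in base R; a small sketch (the helper name trapz_base is ours):
trapz_base <- function(x, y) sum(diff(x) * (head(y, -1) + tail(y, -1)) / 2)
trapz_base(d$x, d$y^2)   # should agree with int1 above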
I wrote a function to calculate the log-likelihood of a set of observations sampled from a mixture of two normal distributions. This function is not giving me the correct answer.
I will not know which of the two distributions any given sample is from, so the function needs to sum over possibilities.
This function takes a vector of five model parameters as its first argument (μ1, σ1, μ2, σ2 and p) where μi and σi are the mean and standard deviation of the ith distribution and p is the probability a sample is from the first distribution. For the second argument, the function takes a vector of observations.
I have written the following function:
mixloglik <- function(p, v) {
sum(log(dnorm(v, p[1], p[2])*p[5] + dnorm(v,p[3],p[4]))*p[5])
}
I can create test data, for which I know the solution should be ~ -854.6359:
set.seed(42)
v<- c(rnorm(100), rnorm(200, 8, 2))
p <- c(0, 1, 6, 2, 0.5)
When I test this function on the test data I do not get the correct solution
> mixloglik(p, v)
[1] -356.7194
I know the solution should be ~ -854.6359. Where am I going wrong in my function?
The correct expression for the log-likelihood is the following.
mixloglik <- function(p, v) {
sum(log(p[5]*dnorm(v, p[1], p[2]) + (1 - p[5])*dnorm(v, p[3], p[4])))
}
Now try it:
set.seed(42)
v<- c(rnorm(100), rnorm(200, 8, 2))
p <- c(0, 1, 6, 2, 0.5)
mixloglik(p, v)
#[1] -854.6359
In cases like this, the best way to track down the error is to start again: write the expression out on paper and then recode it from scratch.
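As a follow-up sketch (ours, not part of the original answer): once the log-likelihood is correct you can, for example, hand it to optim to estimate the five parameters from the data, reusing p as starting values; fnscale = -1 turns optim into a maximiser, and the bounds keep the standard deviations positive and the mixing weight in [0, 1]:
fit <- optim(p, mixloglik, v = v, method = "L-BFGS-B",
             lower = c(-Inf, 1e-6, -Inf, 1e-6, 0),
             upper = c( Inf,  Inf,  Inf,  Inf, 1),
             control = list(fnscale = -1))
fit$par    # estimates of mu1, sigma1, mu2, sigma2 and the mixing weight
fit$value  # maximised log-likelihood, at least as large as -854.6359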
Generally, for the inverse sampling method we have a density and we would like to sample from it. The first step is to find the cumulative distribution function (CDF) of the density, then to find its inverse, and finally to evaluate that inverse at values sampled from the uniform distribution.
For example, I have the density y = (3/2)/(1+x)^2, so the cdf equals (3x)/(2(x+1)) and the inverse of the cdf is 2u/(3 - 2u).
To do this in R, I wrote
f<-function(x){
y= ((3/2)/(1+x)^2)
return(y)
}
cdf <- function(x){
integrate(f, -Inf, x)$value
}
invcdf <- function(q){
uniroot(function(x){cdf(x) - q}, range(x))$root
}
U <- runif(1e6)
X <- invcdf(U)
I have two problems. First, the code returns a function rather than the samples.
Second, is there a simpler way to do this, for example to find the cdf and its inverse more directly?
I would like to add that I am not looking for efficient code; I am just interested in code that a beginner could write.
You could try a numerical approach to inverse sampling. As per your request, this is more about transparency of method than efficiency.
This function numerically integrates a given function over the given range (trimming very large or infinite bounds to ±10,000):
cdf <- function(f, lower_bound, upper_bound)
{
if(lower_bound < -10000) lower_bound <- -10000 # Trim large negatives
if(upper_bound > 10000) upper_bound <- 10000 # Trim large positives
x <- seq(lower_bound, upper_bound, length.out = 100001) # Finely divide x axis
delta <- mean(diff(x)) # Get delta x (i.e. dx)
mid_x <- (x[-1] + x[-length(x)])/2 # Get the mid point of each slice
result <- cumsum(delta * f(mid_x)) # sum f(x) dx
result <- result / max(result) # normalize
list(x = mid_x, cdf = result) # return both x and the cdf values in a list
}
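A quick sanity check (an addition, not in the original answer): for the standard normal density the numerical cdf should track pnorm closely:
check <- cdf(dnorm, -5, 5)
max(abs(check$cdf - pnorm(check$x)))   # small; accurate to roughly the step size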
And to get the inverse, we draw a random number from the uniform distribution between 0 and 1, find the closest value in the cdf, and then see which value of x corresponds to that value of the cdf. We want to be able to do this for n samples at a time, so we use sapply:
inv_sample <- function(f, n = 1, lower_bound = -1000, upper_bound = 1000)
{
CDF <- cdf(f, lower_bound, upper_bound)
samples <- runif(n)
sapply(samples, function(s) CDF$x[which.min(abs(s - CDF$cdf))])
}
We can test it by drawing histograms of the results. We'll start with the normal distribution's density function (dnorm in R), drawing 1000 samples and plotting their distribution:
hist(inv_sample(dnorm, 1000))
And we can do the same for the exponential distribution, this time setting the limits of integration between 0 and 100:
hist(inv_sample(dexp, 1000, 0, 100))
And finally we can do the same with your own example:
f <- function(x) 3/2/(1 + x)^2
hist(inv_sample(f, 1000, 0, 10))
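To see how close the numerical samples come to the target shape, here is a small extra check (ours, not part of the original answer): draw more samples and overlay the density, rescaled to integrate to 1 over the integration range, on a density-scaled histogram:
samples <- inv_sample(f, 10000, 0, 10)
hist(samples, breaks = 50, freq = FALSE, main = "Numerical inverse sampling of f")
curve(f(x) / integrate(f, 0, 10)$value, add = TRUE, col = "red", lwd = 2)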
I am rather new to R and could use some basic help. I'd like to plot the combined (mixture) density of two normal distributions (variance = 1 for each) as their means move apart. The basic idea: if the means are sufficiently far apart, the distribution will be bimodal. Here's the code I'm trying:
x <- seq(-3, 3, length=500)
for(i in seq(0, 3, 0.25)) {
y <- dnorm(x, mean=0-i, sd=1)
z <- dnorm(x, mean=0+i, sd=1)
plot(x,y+z, type="l", xlim=c(-3,3))
}
Several questions:
Are there better ways to do this?
I'm only getting one PDF on my plot. How can I put multiple PDFs on the same plot?
Thank you in advance!
It is not difficult to do this using basic R features. We first define a function f to compute the density of this mixture of normals:
## `x` is an evaluation grid
## `dev` is deviation of mean from 0
f <- function (x, dev) {
(dnorm(x, -dev) + dnorm(x, dev)) / 2
}
Then we use sapply to loop through various dev to get corresponding density:
## `dev` sequence to test
dev <- seq(0, 3, 0.25)
## evaluation grid; extending `c(-1, 1) * max(dev)` by 4 standard deviations
x <- seq(-max(dev) - 4, max(dev) + 4, by = 0.1)
## density matrix
X <- sapply(dev, f, x = x)
## a comment on 2022-07-31: X <- outer(x, dev, f)
Finally we use matplot for plotting:
matplot(x, X, type = "l", lty = 1)
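If you want to tell the curves apart, an extra sketch (ours): redraw with explicit colours and add a legend keyed to dev:
matplot(x, X, type = "l", lty = 1, col = seq_along(dev))
legend("topright", legend = paste("dev =", dev), lty = 1,
       col = seq_along(dev), cex = 0.7)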
Explanation of sapply:
During sapply, x is not changed, while we pick one element of dev at each iteration. It is equivalent to
X <- matrix(0, nrow = length(x), ncol = length(dev))
for (i in 1:length(dev)) X[, i] <- f(x, dev[i])
matplot(x, X) will plot columns of X one by one, against x.
A comment on 2022-07-31: Just use outer. Here are more examples:
Run a function of 2 arguments over a span of parameter values in R
Plot of a Binomial Distribution for various probabilities of success in R
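A small check (ours) that outer really does reproduce the sapply result:
X2 <- outer(x, dev, f)   # f(x, dev) evaluated on the full grid
all.equal(X, X2)         # should be TRUE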
I am not familiar with applying a function over a vector in R.
I would like a vector with the cumulative probabilities of a bivariate normal as several parameters change value simultaneously, each according to a different function. For example:
library(mvtnorm)
m<-2
corr<-diag(2)
corr[2,1]<-0
vapply(2*1:3,function(x)
pmvnorm(mean=c(2,x),corr,lower=c(-Inf,-Inf), upper=c(1,2)),1)
[1] 7.932763e-02 3.609428e-03 5.024809e-06
This gives the cumulative probabilities when the mean of the second distribution takes the values 2, 4 and 6.
My problem is that I also want to change the mean of the first distribution at the same time. I can't write the vapply call properly when more than one argument varies. What can I do?
Thank you very much.
You will need to use mapply for this task:
library(mvtnorm)
corr <- diag(2)
m1 <- c(3, 5, 7)
m2 <- c(2, 4, 6)
mapply(function(x, y)
pmvnorm(mean = c(x, y), corr, lower = c(-Inf, -Inf), upper = c(1, 2)),
m1, m2)
## [1] 1.1375e-02 7.2052e-07 3.1246e-14
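If you would rather keep vapply, as in your original attempt, here is a sketch (ours) that iterates over an index and picks the two means pairwise:
vapply(seq_along(m1), function(i)
  pmvnorm(mean = c(m1[i], m2[i]), corr, lower = c(-Inf, -Inf), upper = c(1, 2)),
  numeric(1))
## same values as the mapply call above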
I would like to use the output of density() as a function (to do many things, such as take its derivative, integrate it over a specific interval, or evaluate it at a specific point, ...).
To be clear, let's take an example:
a=c(1,3,10,-5,0,0,2, 1, 3, 8,2, -2)
b=density(a)
I would like some transformation of b
f=some_transformation(b) # transformation I don't know
is.function(f) # answer must be "TRUE"
so that I can evaluate the density at any point
f(1.2) # evaluate density at 1.2
compute its derivative
Df=D(body(f), "x") # derivative of f
Df(1.2) # derivative at 1.2
and do other R stuff as if f is a function.
You can use approxfun.
a <- c(1,3,10,-5,0,0,2, 1, 3, 8,2, -2)
b <- density(a)
f <- approxfun(b, rule=2)
is.function(f)
f(1.2)
Since it is not defined by a formula, you cannot use D (symbolic differentiation) to compute its derivative. You can estimate it numerically, though.
library(numDeriv)
df <- function(x) grad(f,x)
curve( f(x), lwd=3, xlim=c(-10,10) )
curve( df(x), lwd=3, xlim=c(-10,10) )
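The question also asks about integrating over a specific interval. Since f is now an ordinary R function, integrate works directly; a small sketch (ours), keeping in mind that with rule = 2 the interpolation is only meaningful inside the range of the density grid:
integrate(f, 0, 5)$value               # area under the estimated density on [0, 5]
integrate(f, min(b$x), max(b$x))$value # over the whole grid of the KDE; close to 1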
D takes an expression, not a function, as its first argument. It is for doing symbolic calculus, not finding the gradient of numeric values. You can numerically calculate the derivative of b wrt x using:
with(b, diff(y) / diff(x))
Here's a visualisation of the gradient to give an example of how you might use it.
library(ggplot2)
gradient_data <- with(
density(a),
{
data.frame(
dy_by_dx = diff(y) / diff(x),
x = (x[-1] + x[-length(x)]) / 2
)
}
)
(gradient_plot <- ggplot(gradient_data, aes(x, dy_by_dx)) +
geom_line()
)
If you want to evaluate the function at any point, then use approx.
with(density(a), approx(x, y, xout = -8:13))
The answer will be more accurate if you increase the n argument to the density function.
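For example, a sketch (ours): increasing n from its default of 512 gives a finer grid for the interpolation:
d_fine <- density(a, n = 2048)               # default is n = 512
with(d_fine, approx(x, y, xout = -8:13))$y   # interpolated density values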