Method of Moments for Gamma distribution - histogram and superimposing the PDF - R

I have this question: 'Model the data in nfsold (nfsold is just a vector containing 150 numbers) as a set of 150 independent observations from a Gamma(lambda; k) distribution. Use the Method of Moments to obtain estimates of k and lambda. Draw a histogram of the data and superimpose the PDF of your fitted gamma distribution as a preliminary check that this distribution matches the observed data.'
This is the code I have written.
#The first moment of each Xi, i = 1,...,n, is E(Xi) = k/lambda.
#The second moment of each Xi is E(Xi^2) = k(k+1)/lambda^2.
#Since we have to estimate two quantities, k and lambda, we require two moments.
x_bar = sum(nfsold)/150 #This is the first sample moment
x_bar
second_moment = sum(nfsold^2)/150
second_moment
#(1/n) * sum(xi)   = k/lambda
#(1/n) * sum(xi^2) = k(k+1)/lambda^2
#Solving these two moment equations gives the estimates of k and lambda.
lamda_hat = (x_bar)/((second_moment)-(x_bar)^2)
lamda_hat
k_hat = (x_bar)^2/ ((second_moment)-(x_bar)^2)
k_hat
independent_observations = dgamma(x,k_hat, rate = lamda_hat)
hist( independent_observations, breaks = 15, prob = TRUE, main="Histogram for the Gamma Distribution of the data in nfsold", xlab="Independent Observations", ylab="P.D.F")
curve(dgamma(x,k_hat, rate =lamda_hat), add=TRUE, col="green")
My problem is that the superimposed curve does not follow my histogram, so I feel there is something wrong with my code. Please could I have some help with correcting it?
Thanks!
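For comparison, here is a minimal sketch (an assumption about the intended check, not a confirmed fix) of how the histogram and fitted curve are usually combined: the histogram is drawn from the raw data nfsold itself, and the fitted density is overlaid with curve(). It assumes nfsold, k_hat and lamda_hat from above are in the workspace.
# Sketch only: plot the observed data, not density values computed from them
hist(nfsold, breaks = 15, prob = TRUE,
     main = "Histogram of nfsold with fitted Gamma PDF",
     xlab = "nfsold", ylab = "Density")
# overlay the fitted Gamma(shape = k_hat, rate = lamda_hat) density
curve(dgamma(x, shape = k_hat, rate = lamda_hat), add = TRUE, col = "green")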

Related

Estimating PDF with monotonically declining density at tails

tl;dr: I am numerically estimating a PDF from simulated data and I need the density to decrease monotonically outside of the 'main' density region (as x -> infinity). What I have yields a density close to zero, but one that does not decrease monotonically.
Detailed Problem
I am estimating a simulated maximum likelihood model, which requires me to numerically evaluate the probability density function of some random variable (the probability of which cannot be derived analytically) at some (observed) value x. The goal is to maximize the log-likelihood of these densities, which requires them not to have spurious local maxima.
Since I do not have an analytic likelihood function, I numerically simulate the random variable by drawing the random component from some known distribution function and applying some non-linear transformation to it. I save the results of this simulation in a vector named Simulated_stats_.
I then use density() to approximate the PDF and approxfun() to evaluate the PDF at x:
#some example simulation
Simulated_stats_ <- runif(n = 500, 10, 15) + rnorm(n = 500, mean = 15, sd = 3)
#approximation for x
approxfun(density(Simulated_stats_))(x)
This works well within the range of Simulated_stats_, see image:
Example PDF. The problem is that I need to be able to evaluate the PDF far from the range of the simulated data.
So in the image above, I would need to evaluate the PDF at, say, x=50:
approxfun(density(simulated_stats))(50)
> [1] NA
So instead I use the from and to arguments of the density function, which correctly approximate the near-zero tails, like so:
approxfun(
  density(Simulated_stats_, from = 0, to = max(Simulated_stats_) * 10)
)(50)
[1] 1.924343e-18
This is great, under one condition: I need the density to keep going to zero the further x is from the range of the data. That is, if I evaluate at x=51 the result must be strictly smaller. (Otherwise, my estimator may find local maxima far from the 'true' region, since the likelihood function is not monotonic very far from the 'main' density mass, i.e. the extrapolated region.)
To test this I evaluated the approximated PDF at fixed intervals, took logs, and plotted. The result is discouraging: far from the main density mass the probability 'jumps' up and down. Always very close to zero, but NOT monotonically decreasing.
a <- sapply(X = seq(from = 0, to = 100, by = 0.5), FUN = function(x) {
  approxfun(
    density(Simulated_stats_, from = 0, to = max(Simulated_stats_) * 10)
  )(x)
})
aa <- cbind(seq(from = 0, to = 100, by = 0.5), a)
plot(aa[, 1], log(aa[, 2]))
Result:
Non-monotonic log density far from density mass
My question
Does this happen because of the kernel estimation in density() or is it inaccuracies in approxfun()? (or something else?)
What alternative methods can I use that will deliver a monotonically declining PDF far from the simulated density mass?
Or - how can I manually change the approximated PDF to monotonically decline the further I am from the density mass? I would happily stick some linear trend that goes to zero...
Thanks!
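As an aside, here is a rough sketch of the kind of manual fix mentioned in the last question (splicing a monotone decay onto the tail of the kernel density estimate); the exponential form and the decay rate are arbitrary assumptions, and this is only an illustration rather than the answer below:
dens  <- density(Simulated_stats_, from = 0, to = max(Simulated_stats_) * 10)
f_hat <- approxfun(dens, yleft = 0, yright = 0)
f_tail <- function(x, decay = 1) {
  hi <- max(Simulated_stats_)
  ifelse(x <= hi,
         f_hat(x),                              # kernel estimate inside the data range
         f_hat(hi) * exp(-decay * (x - hi)))    # monotone decay beyond the data
}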
One possibility is to estimate the CDF using a beta regression model; a numerical estimate of the derivative of this model can then be used to estimate the PDF at any point. Here's an example of what I was thinking. I'm not sure if it helps you at all.
Import libraries
library(mgcv)
library(data.table)
library(ggplot2)
Generate your data
set.seed(123)
Simulated_stats_ <- runif(n = 5000, 10, 15) + rnorm(n = 5000, mean = 15, sd = 3)
Function to estimate CDF using gam beta regression model
get_mod <- function(ss, p = seq(0.02, 0.98, 0.02)) {
  qp = quantile(ss, probs = p)
  betamod = mgcv::gam(p ~ s(qp, bs = "cs"), family = mgcv::betar())
  return(betamod)
}
betamod <- get_mod(Simulated_stats_)
Very basic estimate of PDF at val given model that estimates CDF
est_pdf <- function(val, betamod, tol = 0.001) {
  xvals = c(val, val + tol)
  yvals = predict(betamod, newdata = data.frame(qp = xvals), type = "response")
  as.numeric((yvals[1] - yvals[2]) / (xvals[1] - xvals[2]))
}
Let's check that the estimate is monotonically increasing below the min of Simulated_stats_
test_x = seq(0,min(Simulated_stats_), length.out=1000)
pdf = sapply(test_x, est_pdf, betamod=betamod)
all(pdf == cummax(pdf))
[1] TRUE
Let's check that the estimate is monotonically decreasing above the max of Simulated_stats_
test_x = seq(max(Simulated_stats_), 60, length.out=1000)
pdf = sapply(test_x, est_pdf, betamod=betamod)
all(pdf == cummin(pdf))
[1] TRUE
Additional thoughts 3/5/22
As discussed in comments, using the betamod to predict might slow down the estimator. While this could be resolved to a great extent by writing your own predict function directly, there is another possible shortcut.
Generate estimates from the betamod over the range of X, including the extremes
k <- sapply(seq(0,max(Simulated_stats_)*10, length.out=5000), est_pdf, betamod=betamod)
Use the approach you were initially using, i.e. a linear interpolation, but rather than interpolating the density() output, interpolate k (i.e. the above estimates from the beta model)
lin_int = approxfun(x=seq(0,max(Simulated_stats_)*10, length.out=5000),y=k)
You can use the lin_int() function for prediction in the estimator, and it will be lightning fast. Note that it produces virtually the same value for a given x
c(est_pdf(38,betamod), lin_int(38))
[1] 0.001245894 0.001245968
and it is very fast
microbenchmark::microbenchmark(
  list = alist("betamod" = est_pdf(38, betamod), "lin_int" = lin_int(38)), times = 100
)
Unit: microseconds
    expr    min      lq     mean  median      uq    max neval
 betamod 1157.0 1170.20 1223.304 1188.25 1211.05 2799.8   100
 lin_int    1.7    2.25    3.503    4.35    4.50   10.5   100
Finally, let's check the same plot you did before, but using lin_int() instead of approxfun(density(....))
a <- sapply(X = seq(from = 0, to = 100, by = 0.5), lin_int)
aa <- cbind( seq(from = 0, to = 100, by = 0.5), a)
plot(aa[,1],log(aa[,2]))

Geometric distribution with general random variable

I got this exercise for my homework in the "Statistical Theory" course.
We were asked to simulate a geometric distribution for a random variable; so far this is my code, with the graph immediately after it.
Define a discrete random variable starting from the Uniform(0,1)
distribution. Simulate for n=1000 and plot the distribution of its
mean as a function of n and the PMF. Add a horizontal line for the
theoretical mean (find it analytically, write your solution in TeX;
you may use a known formula for this distribution).
Geometric(p): Choose the p parameter randomly from U(0,1) while writing
your code for a general p. Please avoid "magic numbers" inside the
code. The writing should be strictly parametric.
My question is: how can I get a better and more accurate result? My goal is for the blue line to converge as closely as possible to the theoretical expected value (mean).
library(glue)
p = runif(1) # choosing a random p
n = 1000
U = runif(n) # uniform draws used to generate the geometric variates
real_avg = 1/p
cum_sum = 0
avg = numeric()
for (i in 1:n) {
  cum_sum = cum_sum + ceiling(log(U[i], 10) / log(1 - p, 10))
  avg = c(avg, cum_sum / i)
}
plot(1:n, avg, type = "l", lwd = 2, col = "blue",
     ylab = glue("Observed Mean for p={round(p, digits = 4)}"),
     xlab = "Number of Experiments")
abline(h=real_avg,col="red")
print(glue("p={round(p,4)}"))
print(glue("E[X]={1/p}"))

How to compute some basic statistics in R with the density and support only?

I have no sample, and I'd like to compute the variance, mean, median, and mode of a distribution for which I only have a vector with its density and a vector with its support. Is there an easy way to compute these statistics in R with this information?
Suppose that I only have the following information:
Support
Density
sum(Density) == 1 #TRUE
length(Support)==length(Density)# TRUE
You have to do weighted summations
For example, starting with @Johann's example:
set.seed(312345)
x = rnorm(1000, mean=10, sd=1)
x_support = density(x)$x
x_density = density(x)$y
plot(x_support, x_density)
mean(x)
prints
[1] 10.00558
and what, I believe, you're looking for
m = weighted.mean(x_support, x_density)
computes mean as weighted mean of values, producing output
10.0055796130192
There are weighted.sd and weighted.sum helpers in various packages that should help you with the other quantities you're looking for.
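If you prefer to stay in base R, here is a minimal sketch (assuming x_support and x_density from above) that treats the normalised density values as weights:
w <- x_density / sum(x_density)                 # normalise the density to weights
m <- sum(w * x_support)                         # mean
v <- sum(w * (x_support - m)^2)                 # variance
med <- x_support[which(cumsum(w) >= 0.5)[1]]    # median: first support point where the CDF reaches 0.5
mode_est <- x_support[which.max(x_density)]     # mode: support point with the highest density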
If you don't need a mathematical solution, and an empirical one is all right, you can achieve a pretty good approximation by sampling.
Let's generate some data:
set.seed(6854684)
x = rnorm(50,mean=10,sd=1)
x_support = density(x)$x
x_density = density(x)$y
# see our example:
plot(x_support, x_density )
# the real mean of x
mean(x)
Now to 'reverse' the process we generate a large sample from that density distribution:
x_sampled = sample(x = x_support, 1000000, replace = T, prob = x_density)
# get the statistics
mean(x_sampled)
median(x_sampled)
var(x_sampled)
etc...

R: how to add L1 norm line to plot from glmnet

I'm doing lasso regression, and I want to choose the beta coefficients that best explain my model by using leave-one-out cross-validation.
Here is my code:
library(glmnet)
set.seed(19875)
n = 100
p = 500
real_p = 15
x = matrix(rnorm(n * p), nrow = n, ncol = p)
y = as.matrix(apply(x[, 1:real_p], 1, sum) + rnorm(n))
lasso = glmnet(x, y, alpha = 1)
plot(lasso)
# computing LOOCV
cvlassofit <- cv.glmnet(x, y, nfolds = n, grouped = FALSE)
plot(cvlassofit)
The first plot generates the beta coefficient paths:
Then I want to add a vertical line that chooses the best coefficients that have small mean square error. The plot should then look like this:
In the code part where I do CV I get the best lambda, i.e. the one with the smallest MSE (mean squared error).
Here is the plot:
Now, can I somehow based on the lambda get a value for the L1 norm, so that I could add a vertical line to the first plot? Or instead of log(lambda) in the last plot, could I do a L1 norm?
You can do it as follows:
lambda_min <- cvlassofit$lambda.min
estimates <- as.vector(coef(lasso, s = lambda_min, exact = TRUE))
norm. <- sum(abs(estimates))
plot(lasso, xlim = range(0, norm., as.vector(lasso$beta)))
abline(v = norm., col = "red")
Here is the result:

Sample from a custom likelihood function

I have the following likelihood function which I used in a rather complex model (in practice on a log scale):
library(plyr)
dcustom = function(x, sd, L, R) {
  R. = (log(R) - log(x)) / sd
  L. = (log(L) - log(x)) / sd
  ll = pnorm(R.) - pnorm(L.)
  return(ll)
}
df = data.frame(Range = seq(100, 500), sd = rep(0.1, 401), L = 200, U = 400)
df = mutate(df, Likelihood = dcustom(Range, sd, L, U))
with(df, plot(Range, Likelihood, type = 'l'))
abline(v = 200)
abline(v = 400)
In this function, sd is predetermined and L and R are "observations" (very much like the endpoints of a uniform distribution), so all three of them are given. The function returns a large likelihood (1) if the model estimate x (a derived parameter) is within the L-R range, a smooth likelihood decrease (between 0 and 1) near the bounds (the sharpness of which depends on sd), and roughly 0 if x is far outside that range.
This function works very well to obtain estimates of x, but now I would like to do the inverse: draw a random x from the above function. If I would do this many times, I would generate a histogram that follows the shape of the curve plotted above.
The ultimate goal is to do this in C++, but I think it would be easier for me if I could first figure out how to do this in R.
There's some useful information online that helps me start (http://matlabtricks.com/post-44/generate-random-numbers-with-a-given-distribution, https://stats.stackexchange.com/questions/88697/sample-from-a-custom-continuous-distribution-in-r) but I'm still not entirely sure how to do it and how to code it.
I presume (not sure at all!) the steps are:
transform likelihood function into probability distribution
calculate the cumulative distribution function
inverse transform sampling
Is this correct and if so, how do I code this? Thank you.
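A rough sketch of those three steps on a grid (an illustration under assumed settings, not the answer below; the grid range and step are arbitrary):
# 1. evaluate the likelihood on a grid and normalise it into a density
step <- 0.1
grid <- seq(50, 600, by = step)
lik  <- dcustom(grid, sd = 0.1, L = 200, R = 400)
dens <- lik / sum(lik * step)                  # normalise so the grid density integrates to 1
# 2. cumulative distribution function on the grid
cdf <- cumsum(dens * step)
# 3. inverse transform sampling: map uniform draws through the inverse CDF
u <- runif(10000)
x_draws <- grid[pmin(findInterval(u, cdf) + 1, length(grid))]
hist(x_draws, breaks = 50, prob = TRUE)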
One idea might be to use the Metropolis-Hastings algorithm to obtain a sample from the distribution, given all the other parameters and your likelihood.
# Metropolis-Hastings algorithm
set.seed(2018)
n_sample <- 100000
posterior_sample <- rep(NA, n_sample)
x <- 300 # starting value: I chose 300 based on your likelihood plot
for (i in 1:n_sample) {
  lik <- dcustom(x = x, sd = 0.1, L = 200, R = 400)
  # propose a value for x (you can adjust the stepsize with the sd)
  x.proposed <- x + rnorm(1, 0, sd = 20)
  lik.proposed <- dcustom(x = x.proposed, sd = 0.1, L = 200, R = 400)
  r <- lik.proposed / lik # this is the acceptance ratio
  # accept the new value with probability equal to the ratio
  if (runif(1) < r) {
    x <- x.proposed
    posterior_sample[i] <- x
  }
}
# plotting the density
approximate_distr <- na.omit(posterior_sample)
d <- density(approximate_distr)
plot(d, main = "Sample from distribution")
abline(v=200)
abline(v=400)
# If you now want to sample just a few values (for example, 5) you could use
sample(approximate_distr,5)
#[1] 281.7310 371.2317 378.0504 342.5199 412.3302
