Superimposing a gamma distribution curve on a plot - r

I have to superimpose a gamma distribution curve on a plot of another power-law-like curve.
I first plot the histogram points on a log-log scale:
plot(log(pp$mids),log(pp$density))
Then I would like to superimpose my gamma distribution curve by calling an external function gamma():
gamma <- function(X)
{
  n <- length(X)
  theta <- var(hh2$V1)/mean(hh2$V1)
  kappa <- mean(hh2$V1)/theta
  y <- rgamma(n, kappa, theta)
  xx <- hist(y, plot = FALSE)
  curve(log(xx$density), add = TRUE, col = 'violet', type = 'l')
  return(c(kappa))
}
but this returns an error, because curve() expects an expression or a function of x, not a vector of values. How can I do this?

Here is a somewhat-working variant of your code:
Generate an example structured as (I guess) your data are:
library(rmutil) ## for rpareto
set.seed(101)
hh2 <- data.frame(V1=rpareto(1000, m=1, s=1.5))
Initial histogram calculation:
pp <- hist(hh2$V1,plot=FALSE)
Function (better not to call it gamma as that masks a built-in function):
ghistfun <- function(x) {
  n <- length(x)
  scalepar <- var(x)/mean(x)
  shapepar <- mean(x)^2/var(x)
  y <- rgamma(n, shape = shapepar, scale = scalepar)
  xx <- hist(y, plot = FALSE)
  lines(log(xx$mids), log(xx$density), col = "red")
  curve(dgamma(exp(x), shape = shapepar, scale = scalepar, log = TRUE),
        add = TRUE, col = "blue")
  shapepar
}
It might be better to use a very large number for n rather than just the length of the data, unless you are particularly interested in seeing the random fluctuations in a data set of precisely the same size; a variant along those lines is sketched after the example below. Alternatively, you could just use curve(dgamma(x, ...)), as shown (in blue). (I initially thought you would have to allow for the change of variables from the density of x to the density of log(x), but because the histogram is computed on the original, unlogged scale and only the bin midpoints are log-transformed, you don't have to.)
plot(log(pp$mids),log(pp$density))
ghistfun(hh2$V1)
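A minimal variant of that idea (my sketch, not part of the original answer): draw a much larger gamma sample so that the simulated histogram is smoother; n = 1e5 below is an arbitrary choice.
ghistfun2 <- function(x, n = 1e5) {  # n is arbitrary, just "large"
  scalepar <- var(x)/mean(x)
  shapepar <- mean(x)^2/var(x)
  y <- rgamma(n, shape = shapepar, scale = scalepar)
  xx <- hist(y, plot = FALSE)
  lines(log(xx$mids), log(xx$density), col = "red")
  shapepar
}
plot(log(pp$mids), log(pp$density))
ghistfun2(hh2$V1)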

Related

How to plot bivariate density from first principles in R?

I am trying to plot the following bivariate density from first principles in R:
I have attempted to code this using the sn package, but it does not work: the density is a geometric mixture of multivariate normal distributions and is not of the skewing-mechanism form popularised by A. Azzalini. My code is as follows:
library(matlib)
#Make 2-D Grid Coordinates:
number_cor <- 1000 #the number of coordinates
x <- y <- seq(-5, 5, length.out=number_cor) #x and y coordinates
X <- cbind(u=grid2d(x)$x, v=grid2d(x)$y) #combining the coordinates
#Set Parameters:
d<-2
p<-0.75
mu<-as.vector(c(0,0))
cov<-matrix(c(2,0,0,2), nrow=2, ncol=2)
#Summation bounds:
lower <- 1
upper <- 10
#First calculate the density values
mvgsn_pdf_nosum <- function(k, i) {
  ((p*((1-p)**(k-1))) / (((2*pi)**(d/2))*(cov**0.5)*(k**(d/2)))) *
    (exp((-1/(2*k))*(t(i-k*mu))*(inv(cov))*(i-k*mu)))
}
mvgsn_pdf <- sapply(X,
                    function(i) sum(mvgsn_pdf_nosum(seq(lower, upper, 1), i=i)))
The above approach works perfectly in the univariate case. However, my problem arises when I run the mvgsn_pdf step in the above code. The density is supposed to look like the following:
I have never attempted to code a bivariate density from first principles (i.e. hardcoding the PDF). Any help would kindly be appreciated.
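This question has no answer in this excerpt, but as a rough sketch (mine, not from the thread): assuming the target density really is the geometric mixture f(x) = sum_k p(1-p)^(k-1) N(x; k*mu, k*Sigma) implied by the formula in the code, the sum can be evaluated row-wise over the grid with mvtnorm::dmvnorm, which handles the matrix algebra that the elementwise * and ** operators above do not:
library(mvtnorm)  ## for dmvnorm
## density of the geometric mixture of multivariate normals at each row of X
mvgsn_pdf2 <- function(X, p, mu, Sigma, kmax = 10) {
  rowSums(sapply(seq_len(kmax), function(k) {
    p * (1 - p)^(k - 1) * dmvnorm(X, mean = k * mu, sigma = k * Sigma)
  }))
}
dens <- mvgsn_pdf2(X, p = p, mu = mu, Sigma = cov, kmax = upper)
## dens can then be reshaped to a number_cor x number_cor matrix for contour() or persp()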

Interpolate density using a number of points instead of random sampling

When R calculates density(x) it uses n = 512 (I believe) so it randomly tries to pick 512 points and interpolates a density function using those points. I have a list of points (p) that I would like to use when computing density(x) INSTEAD of randomly sampling 512 points. (ignore any complications this may cause, validity, etc. - I am just looking for a way to implement this)
Is there a way to modify density(x) so I can pass a list
of points instead of using random sampling?
What is the algorithm that density(x) uses?
Is there a different function that I should use for my purposes?
So after a bit of research, you can calculate a kernel density estimate yourself as follows (formula from Wikipedia):
## Data
set.seed(1)                                   # used for reproducibility
data <- c(rnorm(100,-10,1), rnorm(100,10,1))  # two Normals mixed
## True density
phi  <- function(x) exp(-.5*x^2)/sqrt(2*pi)   # standard normal density
tpdf <- function(x) phi(x+10)/2 + phi(x-10)/2 # true density
## Kernel
h <- sd(data)*(4/3/length(data))^(1/5)        # bandwidth from Silverman's rule of thumb
Kernel2 <- function(x) mean(phi((x-data)/h)/h) # kernel density at a single point
kpdf <- function(x) sapply(x, Kernel2)        # elementwise application
## Plot
x <- seq(-25, 25, length=1000)                # evaluation grid
plot(x, tpdf(x), type="l", ylim=c(0,0.23), col="red")  # plot true density
par(new=TRUE)
plot(x, kpdf(x), type="l", ylim=c(0,0.23), xlab="", ylab="", axes=FALSE)  # plot kernel density estimate
Notice that I can select x in any way I want; kpdf() can be evaluated at an arbitrary set of points.
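The point for the original question is that kpdf() simply evaluates the estimate wherever you ask it to; for example (p here is just an illustrative set of points, not from the question):
p <- c(-12, -10, -9.5, 0, 9.5, 10, 12)      # any evaluation points you like
kpdf(p)                                     # KDE evaluated exactly at p
points(p, kpdf(p), col = "blue", pch = 19)  # mark them on the current plot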

Central Limit Theorem in R

I wish to simulate the central limit theorem in order to demonstrate it, and I am not sure how to do it in R. I want to create 10,000 samples with a sample size of n (which can be a number or a parameter), from a distribution of my choice (uniform, exponential, etc.). Then I want to graph in one plot (using the par and mfrow commands) the original distribution (histogram), the distribution of the means of all samples, and a Q-Q plot of the means; for the 4th graph (there are four, 2x2), I am not sure yet what to plot. Can you please assist me in getting started with programming it in R? I think once I have the simulated data I should be fine. Thank you.
My initial attempt is below; it is too simple and I am not even sure it is correct.
r <- 10000
n <- 20
M <- matrix(0, n, r)
Xbar <- rep(0, r)
for (i in 1:r)
{
  M[,i] <- runif(n, 0, 1)
}
for (i in 1:r)
{
  Xbar[i] <- mean(M[,i])
}
hist(Xbar)
The CLT states that, given i.i.d. samples from a distribution with finite mean and variance, the sample mean (as a random variable) has a distribution that converges to a Gaussian as the number of samples n increases. Here, I will assume that you want to generate r sample sets containing n samples each, to create r samples of the sample mean. Some code to do that is as follows:
set.seed(123) ## set the seed for reproducibility
r <- 10000
n <- 200 ## I use 200 instead of 20 to enhance convergence to Gaussian
## this function computes the r samples of the sample mean from the
## r*n original samples
sample.means <- function(samps, r, n) {
  rowMeans(matrix(samps, nrow=r, ncol=n))
}
For generating the plots, we use ggplot2 and Aaron's qqplot.data function from here. We also use gridExtra to plot multiple plots in one frame.
library(ggplot2)
library(gridExtra)
qqplot.data <- function(vec) {
  # following four lines from base R's qqline()
  y <- quantile(vec[!is.na(vec)], c(0.25, 0.75))
  x <- qnorm(c(0.25, 0.75))
  slope <- diff(y)/diff(x)
  int <- y[1L] - slope * x[1L]
  d <- data.frame(resids = vec)
  ggplot(d, aes(sample = resids)) + stat_qq() +
    geom_abline(slope = slope, intercept = int, colour = "red") +
    ggtitle("Q-Q plot")
}
generate.plots <- function(samps, samp.means) {
  p1 <- qplot(samps, geom="histogram", bins=30, main="Sample Histogram")
  p2 <- qplot(samp.means, geom="histogram", bins=30, main="Sample Mean Histogram")
  p3 <- qqplot.data(samp.means)
  grid.arrange(p1, p2, p3, ncol=2)
}
Then we can use these functions with the uniform distribution:
samps <- runif(r*n) ## uniform distribution [0,1]
# compute sample means
samp.means <- sample.means(samps, r, n)
# generate plots
generate.plots(samps, samp.means)
We get:
Or, with the Poisson distribution with mean = 3:
samps <- rpois(r*n,lambda=3)
# compute sample means
samp.means <- sample.means(samps, r, n)
# generate plots
generate.plots(samps, samp.means)
We get:
Or, with the exponential distribution with rate = 1 (mean = 1):
samps <- rexp(r*n,rate=1)
# compute sample means
samp.means <- sample.means(samps, r, n)
# generate plots
generate.plots(samps, samp.means)
We get:
Note that the sample-mean histograms all look like Gaussians, with a mean very close to the mean of the original generating distribution, whether that is uniform, Poisson, or exponential, as predicted by the CLT (their variance is also about 1/n = 1/200 times the variance of the original generating distribution).
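A quick numerical check of that claim for the exponential example above (my addition): with rate = 1 the generating distribution has mean 1 and variance 1, so the CLT predicts the sample means have mean about 1 and variance about 1/200.
mean(samp.means)  # should be close to 1
var(samp.means)   # should be close to 1/200 = 0.005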
Maybe this can help you get started. I have hard-coded the normal distribution and only shown two of your suggested plots: the histogram of one randomly selected sample, and a histogram of all sample means.
I guess my main suggestion is using a list to store the samples instead of a matrix.
r <- 10000
my.n <- 20
simulation <- list()
for (i in 1:r) {
  simulation[[i]] <- rnorm(my.n)
}
sample.means <- sapply(simulation, mean)
selected.sample <- sample(r, 1)  # pick one of the r samples at random (integer index)
dev.off()
par(mfrow = c(1, 2))
hist(simulation[[selected.sample]])
hist(sample.means)
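If you also want the Q-Q plot of the means that the question mentions, a base-R extension of this layout might look like the following (my sketch, not part of the answer above):
par(mfrow = c(1, 3))
hist(simulation[[selected.sample]], main = "One selected sample")
hist(sample.means, main = "All sample means")
qqnorm(sample.means)
qqline(sample.means, col = "red")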

Fit distribution to given frequency values in R

I have frequency values changing with time (x-axis units), as presented in the picture below. After some normalization these values may be seen as data points of a density function for some distribution.
Q: Assuming that these frequency points are from a Weibull distribution T, how can I best fit a Weibull density function to the points so as to infer the parameters of T from it?
sample <- c(7787,3056,2359,1759,1819,1189,1077,1080,985,622,648,518,
611,1037,727,489,432,371,1125,69,595,624)
plot(1:length(sample), sample, type = "l")
points(1:length(sample), sample)
Update.
To prevent being misunderstood, I would like to add a little more explanation. By saying I have frequency values changing with time (x-axis units) I mean I have data which says that I have:
7787 realizations of value 1
3056 realizations of value 2
2359 realizations of value 3 ... etc.
One way towards my goal (an incorrect one, I think) would be to create a set of these realizations:
# Loop to simulate values
set.values <- c()
for(i in 1:length(sample)){
  set.values <- c(set.values, rep(i, times = sample[i]))
}
hist(set.values)
lines(1:length(sample), sample)
points(1:length(sample), sample)
and use fitdistr on the set.values:
library(MASS)  ## for fitdistr
f2 <- fitdistr(set.values, 'weibull')
f2
Why do I think this is the incorrect way, and why am I looking for a better solution in R?
In the distribution-fitting approach presented above, it is assumed that set.values is a complete set of my realisations from the distribution T.
In my original question I know only the points from the first part of the density curve - I do not know its tail, and I want to estimate the tail (and the whole density function).
Here is a better attempt; like before, it uses optim to find the best value constrained to a set of values in a box (defined by the lower and upper vectors in the optim call). Notice it scales x and y as part of the optimization, in addition to the Weibull shape parameter, so there are 3 parameters to optimize over.
Unfortunately, when using all the points it pretty much always finds something on the edges of the constraining box, which indicates to me that Weibull is maybe not a good fit for all of the data. The problem is the first two points - they are just too large. You can see the attempted fit to all the data in the first plot.
If I drop those first two points and just fit the rest, we get a much better fit. You can see this in the second plot. I think this is a good fit; it is in any case a local minimum in the interior of the constraining box.
library(optimx)
sample <- c(60953,7787,3056,2359,1759,1819,1189,1077,1080,985,622,648,518,
            611,1037,727,489,432,371,1125,69,595,624)
t.sample <- 0:22
s.fit <- sample[3:23]
t.fit <- t.sample[3:23]
## model: y-scale * Weibull density evaluated at x-scaled time
wx <- function(param) {
  res <- param[2]*dweibull(t.fit*param[3], shape=param[1])
  return(res)
}
## objective: square root of the sum of squared residuals
minwx <- function(param){
  v <- s.fit - wx(param)
  sqrt(sum(v*v))
}
p0 <- c(1, 200, 1/20)   # starting values: shape, y-scale, x-scale
paramopt <- optim(p0, minwx, gr=NULL, method="L-BFGS-B",
                  lower=c(0.1,100,0.01), upper=c(1.1,5000,1))
popt <- paramopt$par
popt
rms <- paramopt$value
tit <- sprintf("Weibull - Shape:%.3f yscale:%.1f xscale:%.5f rms:%.1f",
               popt[1], popt[2], popt[3], rms)
plot(t.sample[2:23], sample[2:23], type = "p", col="darkred")
lines(t.fit, wx(popt), col="blue")
title(main=tit)
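One way to read the fitted parameters (my interpretation, not stated in the original answer): dweibull(t*xscale, shape) is proportional to a Weibull density in t with scale 1/xscale, so the fitted x-scaling corresponds to a conventional Weibull scale parameter of roughly
1/popt[3]  ## implied Weibull scale under the fitted x-scaling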
You can directly calculate the maximum likelihood parameters, as described here.
# Defining the error of the implicit shape equation
k.diff <- function(k, vec){
  x2 <- seq(length(vec))
  abs(k^-1 + weighted.mean(log(x2), w = vec) -
        weighted.mean(log(x2), w = x2^k * vec))
}
# Setting the error to "quite zero", fulfilling the equation
k <- optimize(k.diff, vec=sample, interval=c(0.1,5), tol=10^-7)$min
# Calculate lambda, given k (MLE scale = (weighted mean of x^k)^(1/k))
l <- weighted.mean(seq(length(sample))^k, w = sample)^(1/k)
# Plot
plot(density(rep(seq(length(sample)), sample)))
x <- 1:25
lines(x, dweibull(x, shape=k, scale=l))
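As a sanity check (my addition, assuming MASS is available): the closed-form k and l should roughly agree with fitdistr() run on the expanded realisations from the question, since the frequency-weighted likelihood is the same.
library(MASS)  ## for fitdistr
fitdistr(rep(seq_along(sample), times = sample), 'weibull')
c(shape = k, scale = l)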
Assuming the data are from a Weibull distribution, you can get an estimate of the shape and scale parameter like this:
sample <- c(7787,3056,2359,1759,1819,1189,1077,1080,985,622,648,518,
611,1037,727,489,432,371,1125,69,595,624)
library(MASS)  ## for fitdistr
f <- fitdistr(sample, 'weibull')
f
If you are not sure whether it is distributed Weibull, I would recommend using the ks.test. This tests whether your data is from a hypothesised distribution. Given your knowledge of the nature of the data, you could test for a few selected distributions and see which one works best.
For your example this would look like this:
ks = ks.test(sample, "pweibull", shape=f$estimate[1], scale=f$estimate[2])
ks
The p-value is not significant, hence you do not reject the hypothesis that the data are from a Weibull distribution.
Update: The histograms of either the Weibull or the exponential fit look like a good match to your data. I think the exponential distribution gives you a better fit; the Pareto distribution is another option.
f <- fitdistr(sample, 'weibull')
z <- rweibull(10000, shape = f$estimate[1], scale = f$estimate[2])
hist(z)

f <- fitdistr(sample, 'exponential')
z <- rexp(10000, f$estimate[1])
hist(z)
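To make the comparison less visual (my addition): fitdistr objects carry a log-likelihood, so the two candidate fits can also be compared by AIC, where a smaller value indicates a better fit.
library(MASS)  ## for fitdistr
fw <- fitdistr(sample, 'weibull')
fe <- fitdistr(sample, 'exponential')
c(weibull = AIC(fw), exponential = AIC(fe))  # smaller AIC = better fit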

Observation in a bivariate Ellipse

I am trying to find the probability that a point lies within an ellipse.
For example, if I am plotting the bivariate data (x,y) for 300 data sets in a 95% ellipsoid region, how do I calculate how many of the 300 points fall inside the ellipse?
Here's the code I am using:
library(MASS)
set.seed(1234)
x <- NULL
k <- 1
Sigma2 <- matrix(c(.72, .57, .57, .46), 2, 2)
Sigma2
rho <- Sigma2[1,2]/sqrt(Sigma2[1,1]*Sigma2[2,2])
rho
eta1 <- replicate(300, mvrnorm(k, mu=c(-1.59,-2.44), Sigma2))
library(car)
dataEllipse(eta1[1,], eta1[2,], levels=c(0.05, 0.95))
Thanks for your help.
I don't see why people are jumping on the OP. In context, it's clearly a programming question: it's about getting the empirical frequency of data points within a given ellipse, not a theoretical probability. The OP even posted code and a graph showing what they're trying to obtain.
It may be that they don't fully understand the statistical theory behind a 95% ellipse, but they didn't ask about that. Besides, making plots and calculating frequencies like this is an excellent way of coming to grips with the theory.
Anyway, here's some code that answers the narrowly-defined question of how to count the points within an ellipse obtained via a normal distribution (which is what underlies dataEllipse). The idea is to transform your data to the unit circle via principal components, then get the points within a certain radius of the origin.
within.ellipse <- function(x, y, plot.ellipse=TRUE)
{
  if(missing(y) && is.matrix(x) && ncol(x) == 2)
  {
    y <- x[,2]
    x <- x[,1]
  }
  if(plot.ellipse)
    dataEllipse(x, y, levels=0.95)

  ## principal-component scores, standardised to unit variance
  d <- scale(prcomp(cbind(x, y), scale.=TRUE)$x)
  ## radius of the 95% normal-theory ellipse on that standardised scale
  rad <- sqrt(2 * qf(.95, 2, nrow(d) - 1))
  ## proportion of points within that radius of the origin
  mean(sqrt(d[,1]^2 + d[,2]^2) < rad)
}
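Applied to the simulated data from the question (the eta1 matrix, whose first and second rows hold the x and y coordinates), a call looks like this; the return value is the observed fraction of the 300 points inside the 95% ellipse:
within.ellipse(eta1[1,], eta1[2,])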
It was also commented that a 95% data ellipse contains 95% of the data by definition. This is certainly not true, at least for normal-theory ellipses. If your distribution is particularly bad, the coverage frequency may not even converge to the assumed level as the sample size increases. Consider a generalised Pareto distribution, for example:
library(evd) # for rgpd
# generalised pareto has no variance for shape > 0.5
z <- sapply(1:1000, function(...) within.ellipse(rgpd(100, shape=5), rgpd(100, shape=5), FALSE))
mean(z)
[1] 0.97451
z <- sapply(1:1000, function(...) within.ellipse(rgpd(10000, shape=5), rgpd(10000, shape=5), FALSE))
mean(z)
[1] 0.9995808
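For contrast (my addition), repeating the experiment with genuinely bivariate-normal data gives coverage frequencies that stay close to the nominal 95%:
z <- sapply(1:1000, function(...) within.ellipse(rnorm(100), rnorm(100), FALSE))
mean(z)  # roughly 0.95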
