Inverse Moments of a Non-Central Chi-Square Distribution in R

I want to compute inverse moments and truncated inverse moments of a non-central chi-square distribution. How can I do that in R?
Suppose X follows the non-central chi-square distribution with degrees of freedom "k" and non-centrality parameter "t". My problem is to numerically compute the following expectations for various values of "t", so that I can simulate the risk of James-Stein type estimators:
(i) E[X^(-1)] and E[X^(-2)]
(ii) E[X^(-1)I(A)], where I(A) is the indicator function of a set A
(iii) E[1 - c{X^(-2)}I(A)], where c is a constant.

In general, you can numerically estimate the expected value of a random variable by drawing a large number of samples and averaging them. For instance, you could estimate the expected values of X^(-1) and X^(-2) with something like:
# Monte Carlo estimates based on one million draws (df = 3, ncp = 10)
mean(rchisq(1e6, df=3, ncp=10)^-1)
# [1] 0.1152163
mean(rchisq(1e6, df=3, ncp=10)^-2)
# [1] 0.1371877
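For the truncated moments in (ii) and (iii), the same idea works: multiply by the indicator inside the mean. A minimal sketch, assuming A = {X > a}, where a and c are placeholder values you would set yourself:

x <- rchisq(1e6, df = 3, ncp = 10)
a <- 5                              # placeholder cutoff defining A = {X > a}
cc <- 0.5                           # placeholder constant c
mean(x^-1 * (x > a))                # (ii)  E[X^(-1) I(A)]
mean((1 - cc * x^-2) * (x > a))     # one reading of (iii), E[{1 - c X^(-2)} I(A)]

One caveat on the untruncated case: E[X^(-p)] of a non-central chi-square is finite only when the degrees of freedom exceed 2p, so with df = 3 the first inverse moment exists but E[X^(-2)] does not, and its sample average will not stabilize however many draws you take. The truncated versions are fine whenever a > 0, because the indicator removes the singularity at zero.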

Paolella's book, Intermediate Probability, gives the moments of the non-central chi-square raised to various powers; see equation (10.10). You can find R code for these in the sadists package.
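If you want to avoid simulation noise altogether, you can also compute these expectations by one-dimensional numerical integration against the non-central chi-square density, keeping the existence condition above in mind (for the second inverse moment I switch to df = 5 so the integral converges; truncation over A = {X > a} just becomes a different lower limit):

integrate(function(x) x^-1 * dchisq(x, df = 3, ncp = 10), 0, Inf)
integrate(function(x) x^-2 * dchisq(x, df = 5, ncp = 10), 0, Inf)
integrate(function(x) x^-1 * dchisq(x, df = 3, ncp = 10), 5, Inf)  # E[X^(-1) I(X > 5)]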

Related

Using GAMLSS, the difference between fitDist() and gamlss()

When using the GAMLSS package in R, there are many different ways to fit a distribution to a set of data. My data is a single vector of values, and I am fitting a distribution over these values.
My question is this: what is the main difference between using fitDist() and gamlss() since they give similar but different answers for parameter values, and different worm plots?
Also, using the function confint() works for gamlss() fitted objects but not for objects fitted with fitDist(). Is there any way to produce confidence intervals for parameters fitted with the fitDist() function? Is there an accuracy difference between the two procedures? Thanks!
m1 <- fitDist(y)
fits many distributions to a data vector y and chooses the best according to a
generalized Akaike information criterion, GAIC(k), with penalty k for each
fitted parameter in the distribution, where k is specified by the user, e.g.
k = 2 for AIC,
k = log(n) for BIC,
k = 4 for a chi-squared test (rounded from 3.84, the 5% critical value of a chi-squared distribution with 1 degree of freedom), which is my preference.
m1$fits
gives the full results from the best to the worst distribution according to GAIC(k).
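A minimal sketch of both routes, using simulated gamma data as a stand-in for your vector (GA and the parameter values are placeholders):

library(gamlss)
set.seed(1)
y <- rGA(500, mu = 2, sigma = 0.5)          # placeholder data

m1 <- fitDist(y, k = 2, type = "realplus")  # fit all positive-real families, pick best by GAIC(2)
m1$fits                                     # GAIC of every family, best to worst
m1$family                                   # the winning family

m2 <- gamlss(y ~ 1, family = GA)            # fit one named family directly
confint(m2)                                 # works on gamlss objects, as you note

As far as I can tell, fitDist() fits each family with gamlssML() (direct numerical maximization of the likelihood) rather than the RS/CG algorithms that gamlss() uses, which would account for the small differences in parameter values you see. For confidence intervals after fitDist(), refitting the chosen family with gamlss() and calling confint() on that object is a reasonable workaround.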

How do I extract the principal components' values for all observations using the psych package

I'm performing dimensionality reduction using the psych package. After analyzing the scree plot I decided to use the 9 most important PCs (out of 15 variables) to build a linear model.
My question is, how do I extract the values of the 9 most important PCs for each of the 500 observations I have? Is there any built in function for that, or do I have to manually compute it using the loadings matrix?
Returns eigenvalues, loadings, and degree of fit for a specified number of components after performing an eigenvalue decomposition. Essentially, it does a principal components analysis (PCA) for n principal components of a correlation or covariance matrix. It can also display residual correlations; by comparing residual correlations to original correlations, the quality of the reduction in squared correlations is reported. In contrast to princomp, this only returns a subset of the best nfactors. To obtain component loadings more characteristic of factor analysis, the eigenvectors are rescaled by the square roots of the eigenvalues.
principal(r, nfactors = 1, residuals = FALSE, rotate = "varimax", n.obs = NA,
          covar = FALSE, scores = TRUE, missing = FALSE, impute = "median",
          oblique.scores = TRUE, method = "regression", ...)
So yes, there is a built-in way: with scores = TRUE (the default above), the fitted object contains the component scores for each observation.
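A minimal sketch (my_data is a placeholder for your 500 x 15 data set):

library(psych)
pc <- principal(my_data, nfactors = 9, rotate = "varimax", scores = TRUE)
dim(pc$scores)     # 500 x 9: one row of component scores per observation
head(pc$scores)    # the values you want for each observation

The scores can then go straight into your linear model, e.g. lm(response ~ ., data = data.frame(response = resp, pc$scores)), with resp standing in for your outcome variable.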

How are asymptotic p-values calculated in Hmisc: rcorr?

I am using the rcorr function from the Hmisc package in R to compute Pearson correlation coefficients and corresponding p-values when analyzing the correlation of several fishery landings time series. The data isn't really important here; what I would like to know is: how are the p-values calculated? The documentation states that the asymptotic P-values are approximated by using the t or F distributions, but I am wondering if someone could help me find more information, or an equation that describes exactly how these values are calculated.
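For the Pearson case, the usual large-sample recipe, which matches what rcorr reports as far as I can tell, converts each correlation r into a t statistic with n - 2 degrees of freedom, t = r * sqrt((n - 2)/(1 - r^2)), and takes the two-sided tail probability. A sketch you can check against rcorr itself:

library(Hmisc)
set.seed(7)
x <- rnorm(40)
y <- 0.5 * x + rnorm(40)

n <- length(x)
r <- cor(x, y)
tstat <- r * sqrt((n - 2) / (1 - r^2))        # t statistic with n - 2 df
p_manual <- 2 * pt(-abs(tstat), df = n - 2)   # two-sided asymptotic p-value

rc <- rcorr(cbind(x, y))
c(manual = p_manual, rcorr = rc$P["x", "y"])  # the two should agree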

Maximum likelihood of Compound Poisson Distributions

I'm trying to compute a maximum likelihood estimate for the compound Poisson-Gamma distribution in R. The distribution is defined by $\sum_{j=1}^{N} Y_j$, where the $Y_j$ are an i.i.d. sequence of $\mathrm{Gamma}(k,\theta)$ random variables and $N$ is Poisson-distributed with parameter $\beta$, independent of the $Y_j$. I'm trying to estimate the parameters $\theta$ and $\beta$, without luck.
If you wanted to do something similar, but for a negative binomial distribution, you could use the function negbin.mle from the Rfast package:
y <- rpois(100, 2)
Rfast::negbin.mle(y)
Output:
$iters
[1] 5

$loglik
[1] -162.855

$param
success probability  number of failures                mean
          0.9963271         480.1317031           1.7700000
Also, if you run the command:
Rfast::negbin.mle
you can see what the function is computing.
You can also check the function's manual with:
?Rfast::negbin.mle
Edit:
Unfortunately I haven't found something that perfectly fits your question.
As Ben states, this answer is for a Poisson with a Gamma-distributed mean.
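For the compound Poisson-Gamma itself, one direct route is to write the density as a Poisson mixture of Gamma densities, with a point mass at zero for the N = 0 case, truncate the series, and hand the negative log-likelihood to optim(). A sketch, assuming the Gamma shape k is known (dcpg and negll are names I made up):

# density of sum_{j=1}^N Y_j, Y_j ~ Gamma(k, theta), N ~ Poisson(beta):
# point mass exp(-beta) at 0; for x > 0, a Poisson-weighted mixture of
# Gamma(n*k, theta) densities, truncated at nmax terms
dcpg <- function(x, k, theta, beta, nmax = 100) {
  n <- 1:nmax
  sapply(x, function(xi) {
    if (xi == 0) exp(-beta)
    else sum(dpois(n, beta) * dgamma(xi, shape = n * k, scale = theta))
  })
}

# negative log-likelihood, parameters on the log scale to keep them positive
negll <- function(par, x, k) -sum(log(dcpg(x, k, exp(par[1]), exp(par[2]))))

set.seed(1)
N <- rpois(500, 3)                                                  # true beta = 3
x <- sapply(N, function(n) sum(rgamma(n, shape = 2, scale = 1.5)))  # k = 2, theta = 1.5

fit <- optim(c(0, 0), negll, x = x, k = 2)
exp(fit$par)    # estimates of (theta, beta); should be near (1.5, 3)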

Significance level of ACF and PACF in R

I want to obtain the limits that determine the significance of autocorrelation coefficients and partial autocorrelation coefficients, but I don't know how to do it.
I obtained the partial autocorrelogram using the function pacf(data). I want R to print the values indicated in the figure.
The limits that determine the significance of the autocorrelation coefficients are: +/- (exp(2*1.96/√(N-3)) - 1)/(exp(2*1.96/√(N-3)) + 1).
Here N is the length of the time series, and I used the 95% confidence level.
The correlation values that correspond to the m % confidence intervals chosen for the test are given by 0 ± i/√N where:
N is the length of the time series
i is the number of standard deviations we expect m % of the correlations to lie within under the null hypothesis that there is zero autocorrelation.
Since the observed correlations are assumed to be normally distributed:
i = 2 for a 95% confidence level (acf's default),
i = 3 for a 99.7% confidence level,
and so on, as dictated by the properties of a Gaussian distribution.
Figure A1, Page 1011 here provides a nice example of how the above principle applies in practice.
After investigating the acf and pacf functions and the psychometric library with its CIz and CIr functions, I found this simple code to do the task.
Compute the confidence interval on the Fisher z scale:
ciz = c(-1,1)*(-qnorm((1-alpha)/2)/sqrt(N-3))
where alpha is the confidence level (typically 0.95) and N is the number of observations.
Transform back to the correlation scale:
cir = (exp(2*ciz)-1)/(exp(2*ciz)+1)
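Putting the pieces together, a runnable sketch that reproduces both kinds of bounds and prints the coefficients themselves (the AR(1) series is just example data):

set.seed(42)
x <- arima.sim(model = list(ar = 0.6), n = 200)   # example series
N <- length(x)
alpha <- 0.95

# large-sample bound that R's acf()/pacf() plots draw: +/- qnorm((1+alpha)/2)/sqrt(N)
clim <- qnorm((1 + alpha) / 2) / sqrt(N)
c(-clim, clim)

# Fisher-z based bound from the formulas above
ciz <- c(-1, 1) * (-qnorm((1 - alpha) / 2) / sqrt(N - 3))
cir <- (exp(2 * ciz) - 1) / (exp(2 * ciz) + 1)
cir

pacf(x, plot = FALSE)   # prints the partial autocorrelations instead of plotting them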
