How to calculate the Godambe information matrix in R?

I have a likelihood function in R that I optimize with 'optim', computing the Hessian matrix by passing hessian=TRUE to optim. I want to calculate the Godambe information matrix in R, which is defined as:
G(theta) = H(theta) J(theta)^-1 H(theta)
where J(theta) is the variability matrix and H(theta) is the sensitivity matrix.
I am not sure how to compute these matrices in R for my likelihood function and the estimates returned by optim. Please help.
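A minimal sketch of one common approach, assuming an i.i.d. sample so that J can be estimated from per-observation score contributions (the data, the normal likelihood, and the use of numDeriv here are illustrative assumptions, not taken from the question):

```r
## Sketch: Godambe information G = H J^-1 H for an i.i.d. negative
## log-likelihood. H comes from optim's hessian; J from the outer
## products of per-observation numerical scores (numDeriv::grad).
library(numDeriv)

set.seed(1)
x <- rnorm(200, mean = 2, sd = 1.5)  # illustrative data

# negative log-likelihood contribution of a single observation
nll_i <- function(theta, xi) {
  -dnorm(xi, mean = theta[1], sd = exp(theta[2]), log = TRUE)
}
# total negative log-likelihood
nll <- function(theta) sum(vapply(x, function(xi) nll_i(theta, xi), numeric(1)))

fit <- optim(c(0, 0), nll, hessian = TRUE)

# Sensitivity matrix H: Hessian of the negative log-likelihood at the optimum
H <- fit$hessian

# Variability matrix J: sum of outer products of per-observation scores
scores <- t(vapply(x, function(xi) grad(nll_i, fit$par, xi = xi), numeric(2)))
J <- crossprod(scores)

# Godambe information
G <- H %*% solve(J) %*% H
```

For an ordinary (correctly specified) likelihood H and J coincide asymptotically; the sandwich G is what you want for composite or quasi-likelihoods.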

Related

Simulating data using a Gaussian copula when dealing with a non-positive definite matrix

I am simulating data using a Gaussian copula, which requires a correlation matrix. I constructed the correlation matrix from coefficients reported in the literature and past studies; however, the resulting matrix is not positive definite. How do you deal with a non-positive definite matrix when simulating non-normal data with a Gaussian copula, while ensuring that the simulated data end up with correlations close to those of the target matrix? I am looking for approaches to this in R.
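One common repair, sketched below with an illustrative 3x3 target (the matrix, margins, and sample size are assumptions, not from the question): project the target onto the nearest positive-definite correlation matrix with Matrix::nearPD, then simulate from the Gaussian copula and check how close the achieved correlations come.

```r
## Sketch: repair a non-PD target correlation matrix with the nearest
## positive-definite correlation (Matrix::nearPD), then simulate via a
## Gaussian copula with non-normal margins.
library(Matrix)
library(MASS)

R_target <- matrix(c(1.0,  0.9,  0.7,
                     0.9,  1.0, -0.9,
                     0.7, -0.9,  1.0), 3, 3)  # not positive definite

# nearest positive-definite correlation matrix
R_fixed <- as.matrix(nearPD(R_target, corr = TRUE)$mat)

set.seed(42)
z <- mvrnorm(5000, mu = rep(0, 3), Sigma = R_fixed)  # latent Gaussians
u <- pnorm(z)                  # copula scale: uniform margins
y <- qexp(u, rate = 1)         # e.g. non-normal (exponential) margins

round(cor(z) - R_fixed, 2)     # latent correlations close to the repaired target
```

Two caveats worth noting: nearPD can move entries noticeably when the target is far from positive definite, and transforming the margins (the qexp step) changes the Pearson correlations of y somewhat, although the rank correlations are preserved by the copula.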

How do I extract the principal components' values of all observations using the psych package

I'm performing dimensionality reduction using the psych package. After analyzing the scree plot, I decided to use the 9 most important PCs (out of 15 variables) to build a linear model.
My question is: how do I extract the values of the 9 most important PCs for each of my 500 observations? Is there a built-in function for that, or do I have to compute them manually from the loadings matrix?
From the documentation of psych::principal:
Returns eigenvalues, loadings, and degree of fit for a specified number of components after performing an eigenvalue decomposition. Essentially, it does a principal components analysis (PCA) of a correlation or covariance matrix for n components. It can also display residual correlations; by comparing residual correlations to original correlations, the quality of the reduction is reported. In contrast to princomp, this returns only the subset of the best nfactors. To obtain component loadings more characteristic of factor analysis, the eigenvectors are rescaled by the square root of the eigenvalues.
principal(r, nfactors = 1, residuals = FALSE, rotate = "varimax", n.obs = NA, covar = FALSE,
          scores = TRUE, missing = FALSE, impute = "median", oblique.scores = TRUE,
          method = "regression", ...)
I think so: with scores = TRUE, principal() returns the component scores in its $scores element, so there is no need to compute them manually from the loadings.
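A minimal sketch with simulated data standing in for the 500 x 15 dataset (the data frame here is an assumption):

```r
## Sketch: extract per-observation component scores from psych::principal.
library(psych)

set.seed(123)
dat <- as.data.frame(matrix(rnorm(500 * 15), nrow = 500))  # stand-in data

pc <- principal(dat, nfactors = 9, rotate = "none", scores = TRUE)
pc_scores <- pc$scores   # 500 x 9 matrix: one score per observation per component
dim(pc_scores)
```

The resulting matrix can be used directly as predictors in a linear model, e.g. lm(y ~ pc_scores).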

How are asymptotic p-values calculated in Hmisc: rcorr?

I am using the rcorr function from the Hmisc package in R to compute Pearson correlation coefficients and corresponding p-values when analyzing the correlation of several fishery landings time series. The data aren't really important here, but what I would like to know is: how are the p-values calculated? The documentation states that the asymptotic P-values are approximated using the t or F distributions, but I would appreciate more detail, or an equation describing exactly how these values are calculated.
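For a Pearson correlation this is the usual t statistic, t = r * sqrt((n - 2) / (1 - r^2)), referred to a t distribution on n - 2 degrees of freedom. The sketch below (with simulated data, an assumption for illustration) checks that formula against stats::cor.test, which uses the same statistic:

```r
## Sketch: the asymptotic P-value for a Pearson r via the t distribution.
set.seed(1)
x <- rnorm(40)
y <- 0.5 * x + rnorm(40)
n <- length(x)
r <- cor(x, y)

t_stat <- r * sqrt((n - 2) / (1 - r^2))     # t statistic on n - 2 df
p_val  <- 2 * pt(-abs(t_stat), df = n - 2)  # two-sided P-value

c(p_val, cor.test(x, y)$p.value)            # the two should agree
```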

Use a correlation matrix in robust PCA functions in R

I want to perform robust principal component analysis (PCA) on the correlation matrix, specifically with rrcov::PcaHubert.
I know that if I pass cor=TRUE, rrcov::CovMcd computes the robust covariance and correlation matrices. How can I force the PCA to use the correlation matrix instead of the covariance matrix?
Thanks!
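I don't think PcaHubert exposes a cor = TRUE switch directly; one workaround (a sketch with illustrative data, not a definitive answer) is to robustly standardize each column first, e.g. by median and MAD, so that the covariance of the standardized data plays the role of a correlation matrix:

```r
## Sketch: run PcaHubert on robustly standardized columns so that the
## analysis is scale-free, mimicking PCA on a (robust) correlation matrix.
library(rrcov)

set.seed(7)
X <- matrix(rnorm(100 * 4), 100, 4) %*% diag(c(1, 10, 100, 1000))  # wildly different scales

# robust standardization: median center, MAD scale, per column
X_std <- scale(X, center = apply(X, 2, median), scale = apply(X, 2, mad))

pca <- PcaHubert(X_std, k = 2)
getLoadings(pca)   # loadings on the standardized (correlation-like) scale
```

Without the standardization step, the column with the largest scale would dominate the leading component, which is exactly what using the correlation matrix is meant to avoid.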

BoxCox Transformation in auto.arima(): Does it also transform the residuals?

I am using the auto.arima() function from the forecast package in R and applied a Box-Cox transformation (lambda = 0.02492832, if you're curious). My data are on the order of 10^9 and exhibit increasing variance after differencing twice, so I think Box-Cox is appropriate. Strangely, the residuals are on the order of 10^-2. I'm not sure whether I have discovered a crystal ball or am missing something about how residuals are calculated under a Box-Cox transformation in auto.arima(). Are the residuals also transformed?
The residuals are on the scale of the transformed data. If you want to compute data - fitted instead, use fitted() to obtain the fitted values.
That also explains the magnitudes: with lambda of about 0.025 the Box-Cox transform behaves almost like a log, so values around 10^9 are mapped to numbers of order 10^1, and residuals of order 10^-2 on that scale are entirely plausible. When lambda is supplied to auto.arima(), forecasts from forecast() are automatically back-transformed to the original scale.
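A quick check of the scale, using the standard Box-Cox formula (which, as far as I know, is what forecast::BoxCox implements for nonzero lambda):

```r
## Sketch: how large is a value of 1e9 after the Box-Cox transform?
lambda <- 0.02492832
box_cox <- function(y, lambda) (y^lambda - 1) / lambda  # lambda != 0 branch

box_cox(1e9, lambda)   # roughly 27: this is the scale the residuals live on
```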
