McDonald's omega: warnings in R

I'm computing omega for several different scales and get different warning messages for different scales with different omega functions in R. My questions are how to interpret these warnings and whether it is safe to report the resulting omega statistics.
When I use the following function from the article "From alpha to omega: A practical solution to the pervasive problem of internal consistency estimation":
ci.reliability(subscale1, interval.type="bca", B=1000)
I get these warnings:
1: In lav_object_post_check(lavobject) :
lavaan WARNING: some estimated variances are negative
2: In lav_object_post_check(lavobject) :
lavaan WARNING: observed variable error term matrix (theta) is not positive definite; use inspect(fit,"theta") to investigate.
And it can be many of them!
What do they mean?
I still receive omega statistics; can they be interpreted or not?
When I use the function:
psych::omega(subscale1)
I get this warning:
Warning message:
In GPFoblq(L, Tmat = Tmat, normalize = normalize, eps = eps, maxit = maxit, :
convergence not obtained in GPFoblq. 1000 iterations used.
Again: what does it mean, and can I use the omega statistics that I get?
Note that these warnings appear on different subscales, so one subscale can be computed using one of the functions but not the other, and vice versa.
EDIT: If it helps: subscale1 consists of 4 items, and the sample includes N>300. Also, I can run a CFA on these 4 items in lavaan (Chi2=11.8, p<.001; CFI=0.98; RMSEA=0.123).

That particular article to which you are referring seems to be from the British Journal of Psychology (2014), 105, 399–412, by Dunn, Baguley and Brunsden. The omega coefficient they discuss is actually what Rick Zinbarg and I refer to as omega_total. (McDonald developed two omega coefficients, which has led to this confusion.)
You are having problems using omega in my psych package. The omega function in psych is meant to find omega_hierarchical as well as omega_total. Thus, it tries (by default) to extract three lower-level factors and then, in turn, factor the resulting correlations of those factors. However, with only 4 variables in your subscale, it cannot find a meaningful 3-factor solution. You can specify that you want to find two factors:
omega(subscale1, nfactors = 2)
and it will work. However, omega_h is not particularly meaningful for 4 items.
Contrary to the earlier suggestion that this is a sample-size issue, it is actually due to the number of items.
I think you might find the tutorial for finding omega_h using psych helpful:
http://personality-project.org/r/psych/HowTo/R_for_omega.pdf
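To make the two approaches concrete, here is a minimal sketch of both calls, assuming subscale1 is a data frame containing just the four items (ci.reliability() comes from the MBESS package):

```r
library(psych)
library(MBESS)  # provides ci.reliability()

# omega_total with a bootstrapped BCa confidence interval,
# as in Dunn, Baguley & Brunsden (2014)
ci.reliability(subscale1, type = "omega", interval.type = "bca", B = 1000)

# psych::omega with 2 group factors instead of the default 3,
# since 4 items cannot support a 3-factor solution
omega(subscale1, nfactors = 2)
```

With only four items, the omega_total estimates from the two packages should be close; omega_h from psych::omega is the part that is not very meaningful here.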


What is the current convergence criterion of glmnet?

I have attempted to reproduce the results of glmnet using the convergence criterion described in equations (1) and (2) in Appendix 0 on page 34 of the vignette: https://cran.r-project.org/web/packages/glmnet/vignettes/glmnet.pdf
[images of equations (1) and (2) from the vignette]
Considering that each observation has a weight of 1, this gives me:
delta[i]=crossprod(X[, i], X[, i])* (beta_last[i] - beta_new[i])**2
Then I check whether max(delta) >= eps, as described in the vignette.
Using this criterion, I do not get the same number of iterations as the glmnet results (often a lag of one or two iterations), leading me to believe that it is out of date. Incidentally, the convergence criterion of the glmnet algorithm in the Gaussian case seems to have changed several times in recent years.
Do you know what criterion is used to determine the convergence of the algorithm ?
Thanks in advance for your help.
glmnet rescales the weights to sum to 1 before starting the fit, so you're missing a 1/n factor in the definition of delta[i]. But with that fix, this is the criterion used in the current version of glmnet (4.1-3) and also in version 4.1-2. Keep in mind, there may be other differences, like the active set/strong set, that you may not be handling in exactly the same way as glmnet does, which can also affect the number of coordinate-descent iterations you perform.
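As a sketch, the corrected per-coordinate check with the 1/n factor looks like this (assuming X is the n-row design matrix, beta_last/beta_new are the coefficient vectors before and after one coordinate-descent pass, and eps is the threshold; all names are from the question, not from glmnet itself):

```r
# per-coordinate change after one coordinate-descent pass,
# with the 1/n factor that glmnet applies because it rescales
# the observation weights to sum to 1
n <- nrow(X)
delta <- (colSums(X^2) / n) * (beta_last - beta_new)^2

# the algorithm stops when no coordinate moved more than eps
converged <- max(delta) < eps
```

With unit weights, colSums(X^2) / n is exactly the weighted sum of squares of each column divided by n, which is where the missing factor enters.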

Gamma GLM: NaN production and divergence errors

Intro
I'm trying to construct a GLM that models the quantity (mass) of eggs the specimens of a fish population lay depending on their size and age.
Thus, the variables are:
eggW: the total mass of laid eggs, a continuous and positive variable ranging between 300 and 30000.
fishW: mass of the fish, continuous and positive, ranging between 3 and 55.
age: either 1 or 2 years.
No 0's, no NA's.
After checking the data and realising that a normal distribution was probably not appropriate, I decided to use a Gamma distribution. I chose Gamma basically because the variable is positive and continuous, with variance increasing at higher values, and appears to be skewed, as you can see in the image below.
Frequency distribution of eggW values:
fishW vs eggW:
The code
myglm <- glm(eggW ~ fishW * age, family = Gamma(link = identity),
             data = data,
             start = c(mean(data$eggW), 1, 1, 1),
             maxit = 100)
I added the maxit argument after seeing it suggested in a post on this site as a solution to the glm.fit: algorithm did not converge error, and it worked.
I chose to work with link=identity because of the more obvious and straightforward interpretation of the results in biological terms rather than using an inverse or log link.
So, the code above results in the following message:
Warning messages:
1: In log(ifelse(y == 0, 1, y/mu)) : NaNs produced
2: step size truncated due to divergence
Importantly, no warnings are shown if the variable fishW is dropped and only age is kept, and none are reported if a log link is used.
Questions
If the rationale behind the design of my model is acceptable, I would like to understand why these errors are reported and how to solve or avoid them. In any case, I would appreciate any criticism or suggestions.
You are looking to determine the mass of the eggs based upon the age and weight of the fish, correct? I think you need to use:
glm(eggW ~ fishW + age, family=Gamma(link=identity))
Instead of
glm(eggW ~ fishW * age, family=Gamma(link=identity))
Does your dataset have missing values?
Are your variables highly correlated?
Turn fishW * age into a separate column and just pass that to the algorithm.
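Since the question notes that no warnings appear with a log link, one hedged alternative worth sketching is to keep the interaction but switch to the log link, which keeps the fitted mean positive and sidesteps the NaN/divergence problems of the identity link (assuming the data frame is called data, as in the question):

```r
# Gamma GLM with log link: the fitted mean exp(eta) is always
# positive, so the NaNs from log(y/mu) cannot occur
myglm_log <- glm(eggW ~ fishW * age, family = Gamma(link = log),
                 data = data)

# coefficients are multiplicative on the original scale:
# exp(beta) is the factor by which expected egg mass changes
# per unit increase of the predictor
exp(coef(myglm_log))
```

The biological interpretation changes from additive to multiplicative effects, which may still be acceptable if reported as percentage changes.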

glmmLasso error and warning

I am trying to perform variable selection in a generalized linear mixed model using glmmLasso, but am coming up with an error and a warning that I cannot resolve. The dataset is unbalanced, with some participants (PTNO) having more samples than others; there is no missing data. My dependent variable is binary; all other variables (besides the ID variable PTNO) are continuous.
I suspect something very generic is happening, but obviously fail to see it and have not found any solution in the documentation or on the web.
The code, which is basically just adapted from the glmmLasso soccer example is:
glm8 <- glmmLasso(Group~NDUFV2_dCTABL+GPER1_dCTABL+ ESR1_dCTABL+ESR2_dCTABL+KLF12_dCTABL+SP4_dCTABL+SP1_dCTABL+ PGAM1_dCTABL+ANK3_dCTABL+RASGRP1_dCTABL+AKT1_dCTABL+NUDT1_dCTABL+ POLG_dCTABL+ ADARB1_dCTABL+OGG_dCTABL+ PDE4B_dCTABL+ GSK3B_dCTABL+ APOE_dCTABL+ MAPK6_dCTABL, rnd = list(PTNO=~1),
family = poisson(link = log), data = stackdata, lambda=100,
control = list(print.iter=TRUE,start=c(1,rep(0,29)),q.start=0.7))
The error message is displayed below. Specifically, I do not believe there are any NAs in the dataset, and I am unsure about the meaning of the warning regarding the factor variable.
Iteration 1
Error in grad.lasso[b.is.0] <- score.beta[b.is.0] - lambda.b * sign(score.beta[b.is.0]) :
NAs are not allowed in subscripted assignments
In addition: Warning message:
In Ops.factor(y, Mu) : ‘-’ not meaningful for factors
An abbreviated dataset containing the necessary variables is available in R format and can be downloaded here.
I hope I can be guided a bit as to how to go on with the analysis. Please let me know if there is anything wrong with the dataset or you cannot download it. ANY help is much appreciated.
Just to follow up on @Kristofersen's comment above. It is indeed the start vector that messes up your analysis.
If I run
glm8 <- glmmLasso(Group~NDUFV2_dCTABL+GPER1_dCTABL+ ESR1_dCTABL+ESR2_dCTABL+KLF12_dCTABL+SP4_dCTABL+SP1_dCTABL+ PGAM1_dCTABL+ANK3_dCTABL+RASGRP1_dCTABL+AKT1_dCTABL+NUDT1_dCTABL+ POLG_dCTABL+ ADARB1_dCTABL+OGG_dCTABL+ PDE4B_dCTABL+ GSK3B_dCTABL+ APOE_dCTABL+ MAPK6_dCTABL,
rnd = list(PTNO=~1),
family = binomial(),
data = stackdata,
lambda=100,
control = list(print.iter=TRUE))
then everything is fine and dandy (i.e., it converges and produces a solution). You have copied the example with Poisson regression, and you need to tweak the code to your situation. I have no idea whether the output makes sense.
Quick note: I ran with the binomial distribution in the code above since your outcome is binary. If it makes sense to estimate relative risks then Poisson may be reasonable (and it also converges), but you need to recode your outcome, as the two groups are coded 1 and 2, and that will certainly mess up the Poisson regression.
In other words do a
stackdata$Group <- stackdata$Group-1
before you run the analysis.
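Putting the recode and the Poisson variant together, a sketch might look like this (assuming Group is coded 1/2 as described; the as.character()/as.numeric() step is a defensive extra in case Group is stored as a factor, which the Ops.factor warning hints at):

```r
library(glmmLasso)

# recode the outcome from 1/2 to 0/1 before the Poisson fit;
# going via as.character() handles both numeric and factor storage
stackdata$Group <- as.numeric(as.character(stackdata$Group)) - 1

glm8p <- glmmLasso(Group ~ NDUFV2_dCTABL + GPER1_dCTABL + ESR1_dCTABL +
                     MAPK6_dCTABL,  # ...plus the remaining predictors from the question
                   rnd = list(PTNO = ~1),
                   family = poisson(link = log),
                   data = stackdata,
                   lambda = 100,
                   control = list(print.iter = TRUE))
```

As above, the default start vector is used; supplying a hand-built one is what broke the original call.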

Spark ML Logistic Regression with Categorical Features Returns Incorrect Model

I've been doing a head-to-head comparison of Spark 1.6.2 ML's LogisticRegression with R's glmnet package (the closest analog I could find based on other forum posts).
I'm specifically looking at these two fitting packages when using categorical features. When using continuous features, results for the two packages are comparable.
For my first attempt with Spark, I used the ML Pipeline API to transform my single 21-level categorical variable (called FAC for faculty) with StringIndexer followed by OneHotEncoder to get a binary vector representation.
When I fit my models in Spark and R, I get the following sets of results (that aren't even close):
SPARK 1.6.2 ML
lrModel.intercept
-3.1453838659926427
lrModel.weights
[0.37664264958084287,0.697784342445422,0.4269429071484017,0.3521764371898419,0.19233585960734872,0.6708049751689226,0.49342372792676115,0.5471489576300356,0.37650628365008465,1.0447861554914701,0.5371820187662734,0.4556833133252492,0.2873530144304645,0.09916227313130375,0.1378469333986134,0.20412095883272838,0.4494641670133712,0.4499625784826652,0.489912016708041,0.5433020878341336]
R (glmnet)
(Intercept) -2.79255253
facG -0.35292166
facU -0.16058275
facN 0.69187146
facY -0.06555711
facA 0.09655696
facI 0.02374558
facK -0.25373146
facX 0.31791765
facM 0.14054251
facC 0.02362977
facT 0.07407357
facP 0.09709607
facE 0.10282076
facH -0.21501281
facQ 0.19044412
facW 0.18432837
facF 0.34494177
facO 0.13707197
facV -0.14871580
facS 0.19431703
I've manually checked the glmnet results and they're correct (calculating the proportion of training samples with a particular level of the categorical feature and comparing that to the softmax probability under the estimated model). These results do not change even when the maximum number of iterations in the optimization is set to 1000000 and the convergence tolerance is set to 1E-15. They also do not change when the Spark LogisticRegression weights are initialized to the glmnet-estimated weights (is Spark optimizing a different cost function?).
I should say that the optimization problem is not different between these two approaches: you are minimizing logistic loss (a convex surface) and should thereby arrive at nearly the exact same answer.
Now, when I manually recode the FAC feature as a binary vector in the data file and read those binary columns as "DoubleType" (using Spark's DataFrame schema), I get very comparable results. (The order of the coefficients for the following results is different from the above results. Also the reference levels are different--"B" in the first case, "A" in the second--so the coefficients for this test should not match those from the above test.)
SPARK 1.6.2 ML
lrModel.intercept
-2.9530485080391378
lrModel.weights
[-0.19233467682265934,0.8524505857034615,0.09501714540028124,0.25712829253044844,0.18430675058702053,0.09317325898819705,0.4784688407322236,0.3010877381053835,0.18417033887042242,0.2346069926274015,0.2576267066227656,0.2633474197307803,0.05448893119304087,0.35096612444193326,0.3448460751810199,0.505448794876487,0.29757609104571175,0.011785058030487976,0.3548130904832268,0.15984047288368383]
R (glmnet)
s0
(Intercept) -2.9419468179
FAC_B -0.2045928975
FAC_C 0.8402716731
FAC_E 0.0828962518
FAC_F 0.2450427806
FAC_G 0.1723424956
FAC_H -0.1051037449
FAC_I 0.4666239456
FAC_K 0.2893153021
FAC_M 0.1724536240
FAC_N 0.2229762780
FAC_O 0.2460295934
FAC_P 0.2517981380
FAC_Q -0.0660069035
FAC_S 0.3394729194
FAC_T 0.3334048723
FAC_U 0.4941379563
FAC_V 0.2863010635
FAC_W 0.0005482422
FAC_X 0.3436361348
FAC_Y 0.1487405173
Standardization is set to FALSE for both, and no regularization is performed (you shouldn't perform it here, since you're really just learning the incidence rate of each level of the feature, and the binary feature columns are completely uncorrelated with one another). Also, the 21 levels of the categorical feature range in incidence counts from ~800 to ~3500, so the large errors in the first set of estimates are not due to lack of data.
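For reference, the glmnet side of such a comparison can be sketched as follows (assuming x is the one-hot design matrix and y the 0/1 outcome; the variable names are placeholders, not from the question's actual script):

```r
library(glmnet)

# unregularized, unstandardized logistic fit: lambda = 0 means the
# coefficients are plain maximum-likelihood estimates, matching the
# settings described above (tight tolerance, generous iteration cap)
fit <- glmnet(x, y, family = "binomial", lambda = 0,
              standardize = FALSE, thresh = 1e-15, maxit = 1e6)

coef(fit)  # intercept plus one weight per dummy column
```

With no penalty and uncorrelated dummy columns, these coefficients should agree with any correct logistic solver up to the choice of reference level.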
Has anyone experienced this? I'm one step away from reporting this to the Spark developers.
Thanks as always for the help.

Why is the rmvnorm() function returning "In sqrt(ev$values) : NaNs produced", what is this warning, and how can it be corrected or avoided?

I am working with financial/economic data, in case you are wondering about the large size of some of the coefficients below. My general question has to do with simulating parameter coefficients from a linear random-effects model in R. I am attempting to generate a random sample of beta coefficients using the model coefficients and the variance-covariance (VCOV) matrix from the same model. My question is: why am I receiving the warning below about the square root of the eigenvalues when using the rmvnorm() function from the mvtnorm package, and how can I deal with this warning/issue?
#Example call: lmer model with random effects by YEAR
#mlm<-lmer(DV~V1+V2+V3+V2*V3+V4+V5+V6+V7+V8+V9+V10+V11+(1|YEAR), data=dat)
#Note: 5 years (5 random effects total)
#LMER call yields the following information:
coef<-as.matrix(c(-28037800,0.8368619,2816347,8681918,-414002.6,371010.7,-26580.84,80.17909,271.417,-239.1172,3.463785,-828326))
sigma<-as.matrix(rbind(c(1834279134971.21,-415.95,-114036304870.57,-162630699769.14,-23984428143.44,-94539802675.96,
-4666823087.67,-93751.98,1735816.34,-1592542.75,3618.67,14526547722.87),
c(-415.95,0.00,41.69,94.17,-8.94,-22.11,-0.55,0.00,0.00,0.00,0.00,-7.97),
c(-114036304870.57,41.69,12186704885.94,12656728536.44,-227877587.40,-2267464778.61,
-4318868.82,8909.65,-355608.46,338303.72,-321.78,-1393244913.64),
c(-162630699769.14,94.17,12656728536.44,33599776473.37,542843422.84,4678344700.91,-27441015.29,
12106.86,-225140.89,246828.39,-593.79,-2445378925.66),
c(-23984428143.44,-8.94,-227877587.40,542843422.84,32114305557.09,-624207176.98,-23072090.09,
2051.16,51800.37,-49815.41,-163.76,2452174.23),
c(-94539802675.96,-22.11,-2267464778.61,4678344700.91,-624207176.98,603769409172.72,90275299.55,
9267.90,208538.76,-209180.69,-304.18,-7519167.05),
c(-4666823087.67,-0.55,-4318868.82,-27441015.29,-23072090.09,90275299.55,82486186.42,-100.73,
15112.56,-15119.40,-1.34,-2476672.62),
c(-93751.98,0.00,8909.65,12106.86,2051.16,9267.90,-100.73,2.54,8.73,-10.15,-0.01,-1507.62),
c(1735816.34,0.00,-355608.46,-225140.89,51800.37,208538.76,15112.56,8.73,527.85,-535.53,-0.01,21968.29),
c(-1592542.75,0.00,338303.72,246828.39,-49815.41,-209180.69,-15119.40,-10.15,-535.53,545.26,0.01,-23262.72),
c(3618.67,0.00,-321.78,-593.79,-163.76,-304.18,-1.34,-0.01,-0.01,0.01,0.01,42.90),
c(14526547722.87,-7.97,-1393244913.64,-2445378925.66,2452174.23,-7519167.05,-2476672.62,-1507.62,21968.29,
-23262.72,42.90,229188496.83)))
#Error begins here:
betas<-rmvnorm(n=1000, mean=coef, sigma=sigma)
#rmvnorm breaks, Error returned:
Warning message: In sqrt(ev$values) : NaNs produced
When I Google the search string "rmvnorm, Warning message: In sqrt(ev$values) : NaNs produced", the first result, http://www.nickfieller.staff.shef.ac.uk/sheff-only/mvatasksols6-9.pdf, indicates on page 4 that this warning points to negative eigenvalues, although I have no idea, conceptually or practically, what a negative eigenvalue is or why one would be produced in this instance.
The second result, http://www.r-tutor.com/r-introduction/basic-data-types/complex2, indicates that the warning arises from an attempt to take the square root of a negative number, which yields NaN unless complex arithmetic is used.
The question remains, what is going on here with the random generation of the betas, and how can this be corrected?
sessionInfo() R version 3.0.2 (2013-09-25) Platform:
x86_64-apple-darwin10.8.0 (64-bit)
Using the following packages/versions
mvtnorm_0.9-9994,
lme4_1.1-5,
Rcpp_0.10.3,
Matrix_1.1-2-2,
lattice_0.20-23
You have a huge range of scales in your eigenvalues:
range(eigen(sigma)$values)
## [1] -1.005407e-05 1.863477e+12
I prefer to use mvrnorm from the MASS package, just because it comes installed automatically with R. It also appears to be more robust:
set.seed(1001)
m <- MASS::mvrnorm(n=1000, mu=coef, Sigma=sigma) ## works fine
edit: OP points out that using method="svd" with rmvnorm also works.
If you print the code for MASS::mvrnorm, or run debug(MASS::mvrnorm) and step through it, you will see that it uses
if (!all(ev >= -tol * abs(ev[1L]))) stop("'Sigma' is not positive definite")
(where ev is the vector of eigenvalues in decreasing order, so ev[1L] is the largest) to decide on the positive definiteness of the variance-covariance matrix. In this case ev[1L] is about 2e12 and tol is 1e-6, so negative eigenvalues up to a magnitude of about 2e6 would be allowed. Here the minimum eigenvalue is about -1e-5, well within tolerance.
Farther down, MASS::mvrnorm uses pmax(ev, 0) -- that is, once it has decided that the negative eigenvalues are within tolerance (i.e. the test above did not fail), it truncates them to zero, which should be fine for practical purposes.
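That eigenvalue repair can be sketched directly, drawing the sample by hand (assuming coef and sigma as defined in the question; this mirrors the logic of MASS::mvrnorm rather than reproducing its exact code):

```r
e  <- eigen(sigma, symmetric = TRUE)
ev <- e$values                      # eigenvalues in decreasing order

# same tolerance test as MASS::mvrnorm (tol = 1e-6 by default):
# tiny negative eigenvalues relative to ev[1] are acceptable
tol <- 1e-6
stopifnot(all(ev >= -tol * abs(ev[1L])))

# truncate the tiny negative eigenvalues to zero before sqrt(),
# then build a matrix square root of sigma
L <- e$vectors %*% diag(sqrt(pmax(ev, 0)))

# 1000 multivariate normal draws: z %*% t(L) has covariance sigma
z <- matrix(rnorm(1000 * length(ev)), nrow = 1000)
betas <- z %*% t(L) +
  matrix(coef, nrow = 1000, ncol = length(ev), byrow = TRUE)
```

This is exactly why no NaNs appear: pmax(ev, 0) removes the negative values before sqrt() ever sees them.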
If you insisted on using rmvnorm you could use Matrix::nearPD, which tries to force the matrix to be positive definite -- it returns a list which contains (among other things) the eigenvalues and the "positive-definite-ified" matrix:
m <- Matrix::nearPD(sigma)
range(m$eigenvalues)
## [1] 1.863477e+04 1.863477e+12
The eigenvalues computed from the matrix are not quite identical -- nearPD and eigen use slightly different algorithms -- but they're very close.
range(eigen(m$mat)$values)
## [1] 1.861280e+04 1.863477e+12
More generally,
Part of the reason for the huge range of eigenvalues might be predictor variables that are scaled very differently. It might be a good idea to rescale your input data where possible to make the variances more similar to each other (this will also make all of your numerical computations more stable); you can always convert the generated values back to the original scale afterwards.
It's also the case that when matrices are very close to singular (i.e. some eigenvalues are very close to zero), small numerical differences can change the sign of the eigenvalues. In particular, if you copy and paste the values, you may lose some precision and cause this problem. Using dput(vcov(fit)) or save(vcov(fit)) to store the variance-covariance matrix at full precision is safer.
If you have no idea what "positive definite" means, you might want to read up on it. The Wikipedia articles on covariance matrices and positive definite matrices might be a little too technical to start with; this question on StackExchange is closer, but still a little technical. The next entry on my Google journey was this one, which looks about right.
