Mixed model starting values for lme4 - r

I am trying to fit a mixed model using the lmer function from the lme4 package. However, I do not understand what should be input to the start parameter.
My aim is to fit a simple linear regression first and use the coefficients estimated there as starting values for the mixed model.
Let's say that my model is the following:
linear_model = lm(y ~ x1 + x2 + x3, data = data)
coef = summary(linear_model)$coefficients[-1, 1]  # I remove the intercept
result = lmer(y ~ x1 + x2 + x3 | x1 + x2 + x3, data = data, start = coef)
This example is an oversimplified version of what I am doing since I won't be able to share my data.
Then I get the following kind of error:
Error during wrapup: incorrect number of theta components (!=105) #105 is the value I get from the real regression I am trying to fit.
I have tried many different solutions, such as providing a list and naming those values theta, as I saw suggested on some forums.
Also, the GitHub code tests whether the length is appropriate, but I can't work out what it refers to:
# Assign the start value to theta
if (is.numeric(start)) {
theta <- start
}
# Check the length of theta
length(theta)!=length(pred$theta)
However, I can't find where pred$theta is defined, so I don't understand where that value of 105 comes from.
Any help?

A few points:
lmer doesn't in fact fit any of the fixed-effect coefficients explicitly; these are profiled out so that they are solved for implicitly at each step of the nonlinear estimation process. The estimation involves only a nonlinear search over the variance-covariance parameters. This is detailed (rather technically) in one of the lme4 vignettes (eqs. 30-31, p. 15). Thus providing starting values for the fixed-effect coefficients is neither possible nor useful ...
glmer does fit fixed-effects coefficients explicitly as part of the nonlinear optimization (as @G.Grothendieck discusses in comments), if nAGQ>0 ...
it's admittedly rather obscure, but the starting values for the theta parameters (the only ones that are explicitly optimized in lmer fits) are 0 for the off-diagonal elements of the Cholesky factor, 1 for the diagonal elements: this is coded here
ll$theta[] <- is.finite(ll$lower) # initial values of theta are 0 off-diagonal, 1 on
... where you need to know further that, upstream, the values of the lower vector have been coded so that elements of the theta vector corresponding to diagonal elements have a lower bound of 0, while off-diagonal elements have a lower bound of -Inf. This is equivalent to starting with an identity matrix for the scaled variance-covariance matrix (i.e., the variance-covariance matrix of the random effects divided by the residual variance), or a random-effects variance-covariance matrix of sigma^2 * I.
If you have several random effects and big variance-covariance matrices for each, things can get a little hairy. If you want to recover the starting values that lmer will use by default you can use lFormula() as follows:
library(lme4)
ff <- lFormula(Reaction~Days+(Days|Subject),sleepstudy)
(lwr <- ff$reTrms$lower)
## [1] 0 -Inf 0
ifelse(lwr==0,1,0) ## starting values
## [1] 1 0 1
For this model, we have a single 2x2 random-effects variance-covariance matrix. The theta parameters correspond to the lower-triangle Cholesky factor of this matrix, in column-wise order, so the first and third elements are diagonal, and the second element is off-diagonal.
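If you ever do need to supply starting values for theta yourself, the mechanism looks roughly like this; this is a minimal sketch that just feeds back the default values recovered above (so the fit should be unchanged), using start = list(theta = ...):
library(lme4)
ff  <- lFormula(Reaction ~ Days + (Days | Subject), sleepstudy)
th0 <- ifelse(ff$reTrms$lower == 0, 1, 0)   # default: 1 on-diagonal, 0 off-diagonal
fm  <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy,
            start = list(theta = th0))      # equivalently start = th0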
The fact that you have 105 theta parameters worries me; fitting such a large random-effects model will be extremely slow and take an enormous amount of data to fit reliably. (If you know your model makes sense and you have enough data you might want to look into faster options, such as using Doug Bates's MixedModels package for Julia or possibly glmmTMB, which might scale better than lme4 for problems with large theta vectors ...)
your model formula, y ~ x1 + x2 + x3 | x1 + x2 + x3, seems very odd. I can't figure out any context in which it would make sense to have the same variables as random-effect terms and grouping variables in the same model!

Related

Is there a way to force the coefficient of an independent variable to be positive in a linear regression model in R?

In lm(y ~ x1 + x2 + x3 + ... + xn), not all estimated coefficients come out with the expected sign.
For example, we know that x1 to x5 must have positive coefficients and x6 to x10 must have negative coefficients.
However, when lm(y ~ x1 + x2 + x3 + ... + x10) is run in R, some of x1 to x5 get negative coefficients and some of x6 to x10 get positive coefficients.
I want to control this within a linear regression framework; is there a good way to do it?
The sign of a coefficient may change depending upon its correlation with the other predictors. As @TarJae noted, this looks like an example of (or a counterpart to) Simpson's Paradox, which describes cases where the sign of a correlation can reverse depending on whether we condition on another variable.
Here's a concrete example in which I've made two independent variables, x1 and x2, which are both highly correlated to y, but when they are combined the coefficient for x2 reverses sign:
# specially chosen seed; most seeds' result isn't as dramatic
set.seed(410)
df1 <- data.frame(y = 1:10,
                  x1 = rnorm(10, 1:10),
                  x2 = rnorm(10, 1:10))
lm(y ~ ., df1)
Call:
lm(formula = y ~ ., data = df1)

Coefficients:
(Intercept)           x1           x2
    -0.2634       1.3990      -0.4792
This result is not incorrect, but arises here (I think) because the prediction errors from x1 happen to be correlated with the prediction errors from x2, such that a better prediction is created by subtracting some of x2.
EDIT, additional analysis:
The more independent series you have, the more likely you are to see this phenomenon arise. For my example with just two series, only 2.4% of the integer seeds from 1 to 1000 produce this phenomenon, where one of the series gets a negative regression coefficient. This rises to 16% with three series, 64% with five series, and 99.9% with ten series.
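Here is a rough sketch of the kind of simulation behind those percentages (my own reconstruction, not the exact code used; k noisy copies of 1:10 serve as the predictors):
# Count how often at least one coefficient comes out negative
sign_flip <- function(seed, k) {
  set.seed(seed)
  df <- data.frame(y = 1:10, replicate(k, rnorm(10, 1:10)))
  any(coef(lm(y ~ ., df))[-1] < 0)
}
mean(sapply(1:1000, sign_flip, k = 2))  # proportion of seeds with a flipped sign (~2.4% reported above for k = 2)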
Constraints
Possibilities include using:
nls with algorithm = "port" in which case upper and lower bounds can be specified.
nnnpls in the nnls package, which supports zero upper or lower bounds on individual coefficients, or nnls in the same package if all coefficients should be non-negative.
bvls (bounded-variable least squares) in the bvls package, specifying the bounds.
there is an example of performing non-negative least squares in the vignette of the CVXR package.
reformulate it as a quadratic programming problem (see Wikipedia for the formulation) and use the quadprog package; a sketch follows this list.
nnls in the limSolve package. Negate the columns that should have negative coefficients to convert it to a non-negative least squares problem.
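Here is a sketch of the quadratic-programming route with quadprog, assuming df is a data frame with the response in the first column and that every non-intercept coefficient should be non-negative (negate columns beforehand for coefficients that should be non-positive):
library(quadprog)
X <- model.matrix(~ ., df[-1])   # design matrix built from the predictors
y <- df[[1]]
k <- ncol(X)
Dmat <- crossprod(X)             # t(X) %*% X
dvec <- drop(crossprod(X, y))    # t(X) %*% y
Amat <- diag(k)[, -1]            # one beta_j >= 0 constraint per non-intercept term
bvec <- rep(0, k - 1)
solve.QP(Dmat, dvec, Amat, bvec)$solution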
These packages mostly do not have a formula interface but instead require that a model matrix and dependent variable be passed as separate arguments. If df is a data frame containing the data and if the first column is the dependent variable then the model matrix can be calculated using:
A <- model.matrix(~., df[-1])
and the dependent variable is
df[[1]]
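For instance, a sketch with nnls, assuming df holds the response in its first column and that all coefficients (including the intercept) should be non-negative:
library(nnls)
A <- model.matrix(~ ., df[-1])  # model matrix built from the predictors
b <- df[[1]]                    # dependent variable
fit <- nnls(A, b)
fit$x                           # constrained coefficient estimates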
Penalties
Another approach is to add a penalty to the least squares objective function, i.e. the objective function becomes the sum of the squared residuals plus one or more additional terms that are functions of the coefficients and tuning parameters. Although this does not impose any hard constraints to guarantee the desired signs, it may result in the correct signs anyway. This is particularly useful if the problem is ill-conditioned or if there are more predictors than observations.
linearRidge in the ridge package will minimize the sum of the square of the residuals plus a penalty equal to lambda times the sum of the squares of the coefficients. lambda is a scalar tuning parameter which the software can automatically determine. It reduces to least squares when lambda is 0. The software has a formula method which along with the automatic tuning makes it particularly easy to use.
glmnet adds penalty terms containing two tuning parameters. It includes least squares and ridge regression as special cases. It also supports bounds on the coefficients. There are facilities to automatically set the two tuning parameters, but it does not have a formula method and the procedure is not as straightforward as in the ridge package. Read the vignettes that come with it for more information.
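For example, here is a sketch of sign constraints via coefficient bounds in glmnet, assuming the response is in the first column of df and that the first five predictors should have non-negative coefficients and the rest non-positive ones:
library(glmnet)
x <- model.matrix(~ . - 1, df[-1])  # predictor matrix without an intercept column
y <- df[[1]]
p <- ncol(x)
fit <- cv.glmnet(x, y,
                 lower.limits = c(rep(0, 5),   rep(-Inf, p - 5)),
                 upper.limits = c(rep(Inf, 5), rep(0,    p - 5)))
coef(fit, s = "lambda.min")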
1. One way is to set up an optimization problem and minimize the mean squared error subject to constraints and bounds (nlminb, optim, etc.); see the sketch below.
2. Another is to use the "lavaan" package, as described here:
https://stats.stackexchange.com/questions/96245/linear-regression-with-upper-and-or-lower-limits-in-r
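A minimal sketch of the first option with nlminb, assuming the response is in the first column of a data frame df, the intercept is left unconstrained, and the remaining coefficients are forced to be non-negative:
rss <- function(beta, X, y) sum((y - X %*% beta)^2)  # objective: residual sum of squares
X <- model.matrix(~ ., df[-1])
y <- df[[1]]
k <- ncol(X)
fit <- nlminb(start = rep(0, k), objective = rss, X = X, y = y,
              lower = c(-Inf, rep(0, k - 1)), upper = rep(Inf, k))
fit$par  # constrained coefficient estimates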

Removing variables resulting in singular matrix in R regression model

I've been using mnlogit in R to generate a multinomial logistic regression model. My original set of variables generated a singular matrix error, i.e.
Error in solve.default(hessian, gradient, tol = 1e-24) :
system is computationally singular: reciprocal condition number = 7.09808e-25
It turns out that several "sparse" columns (variables that are 0 for most sampled individuals) cause this singularity error. I need a systematic way of removing those variables that lead to a singularity error while retaining those that allow estimation of a regression model, i.e. something analogous to the use of the function step to select variables minimizing AIC via stepwise addition, but this time removing variables that generate singular matrices.
Is there some way to do this, since checking each variable by hand (there are several hundred predictor variables) would be incredibly tedious?
If X is the design matrix from your model which you can obtain using
X <- model.matrix(formula, data = data)
then you can find a (non-unique) set of variables that would give you a non-singular model using the QR decomposition. For example,
x <- 1:3
X <- model.matrix(~ x + I(x^2) + I(x^3))
QR <- qr(crossprod(X)) # Get the QR decomposition
vars <- QR$pivot[seq_len(QR$rank)] # Variable numbers
names <- rownames(QR$qr)[vars] # Variable names
names
#> [1] "(Intercept)" "x" "I(x^2)"
This is subject to numerical error and may not agree with whatever code you are using, for two reasons.
First, it doesn't do any weighting, whereas logistic regression normally uses iteratively reweighted least squares.
Second, it might not use the same tolerance as the other code. You can change its sensitivity by changing the tol parameter to qr() from the default 1e-07. Bigger values will cause more variables to be omitted from names.
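If you then want to refit using only the retained columns, a rough sketch looks like this (it assumes all predictors are numeric, so the model-matrix column names match the variable names; formula, data and the response name "y" are placeholders for your own objects):
X  <- model.matrix(formula, data = data)
QR <- qr(crossprod(X))
keep <- rownames(QR$qr)[QR$pivot[seq_len(QR$rank)]]  # columns spanning the design
keep <- setdiff(keep, "(Intercept)")                 # reformulate adds its own intercept
reduced_formula <- reformulate(keep, response = "y")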

Replicate SAS GLM proc to R

I want to compute a linear model in order to get means of some Y variable adjusted for a categorical variable Q and some numeric variables X.
Someone told me I could easily get them with SAS, so I used this piece of code:
proc glm data=TABLE_R;
class Q(ref="Q1");
model Y = Q X2 X3 X4 / solution;
lsmeans Q/ stderr pdiff cov out=adjmeans;
run;
But being much more comfortable with R, I wanted to replicate this procedure, and after some research I ended up with this code:
m = glm(Y ~ Q + X2 + X3 + X4, data=db) #using lm() didn't change anything
emmeans::emmeans(m, "Q")
The problem is that, while very close, the model coefficients are different. Here is an example with the intercept and 2 levels of Q:
# in R
 (Intercept)            Q2            Q3
-0.1790444126  0.0051160461 -0.0013756817
# in SAS
 (Intercept)            Q2            Q3
-0.1767853086  0.0016709301 -0.0031477746
Actually, in SAS I get a message saying that the coefficients needed additional computation (which I unfortunately don't understand; does R's glm() lack this?):
Note: The X'X matrix has been found to be singular, and a generalized
inverse was used to solve the normal equations. Terms whose estimates
are followed by the letter 'B' are not uniquely estimable.
Which option should I add here or there so I can get the same results with both SAS and R?
If I cannot, how can I choose which method is best suited?
Useful posts: Proc GLM (SAS) using R, X'X matrix found to be singular
EDIT: This is very strange, but the numbers of observations used are different in SAS and R:
#SAS
Observations read: 81733
Observations used: 9000
#R
16357 Residual
(88017 observations deleted due to missingness)
You will get the same coefficients if you first do
options(contrasts = c("contr.SAS", "contr.poly"))
before fitting the model. This will cause R to use the same parameterization that SAS uses.
However, even without this change, the fitted values from R will be identical to those from SAS, and the EMMs from R will match the lsmeans from SAS. That's because we are not really changing the model; we are only changing how it is parameterized.
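A quick illustration with the built-in warpbreaks data (this assumes the emmeans package is installed):
library(emmeans)
m_default <- lm(breaks ~ wool + tension, data = warpbreaks)  # contr.treatment coding
options(contrasts = c("contr.SAS", "contr.poly"))
m_sas <- lm(breaks ~ wool + tension, data = warpbreaks)      # SAS-style coding
coef(m_default); coef(m_sas)                 # coefficients differ ...
all.equal(fitted(m_default), fitted(m_sas))  # ... but fitted values are identical
emmeans(m_sas, "tension")                    # and the EMMs match either way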

How to get covariance matrix for random effects (BLUPs/conditional modes) from lme4

So, I've fitted a linear mixed model with two random intercepts in R:
Y = X beta + Z b + e,
where b ~ MVN(0, Sigma); X and Z are the fixed- and random-effects model matrices, respectively, and beta and b are the fixed-effect parameters and the random-effects BLUPs/conditional modes.
I would like to get my hands on the underlying covariance matrix of b, which doesn't seem to be a trivial thing in the lme4 package. You can only get the variances via VarCorr, not the actual covariance matrix.
According to one of the package vignettes (page 2), you can calculate the covariance of b as sigma^2 * Lambda %*% t(Lambda), and all of those components can be extracted from the lme4 output.
I was wondering if this is the way to go? Or do you have any other suggestions?
From ?ranef:
If ‘condVar’ is ‘TRUE’ each of the data frames has an attribute
called ‘"postVar"’ which is a three-dimensional array with symmetric
faces; each face contains the variance-covariance matrix for a
particular level of the grouping factor. (The name of this attribute
is a historical artifact, and may be changed to ‘condVar’ at some
point in the future.)
Set up an example:
library(lme4)
fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
rr <- ranef(fm1,condVar=TRUE)
Get the conditional variance-covariance matrices of the intercept and slope b values for each Subject:
pv <- attr(rr[[1]],"postVar")
str(pv)
##num [1:2, 1:2, 1:18] 145.71 -21.44 -21.44 5.31 145.71 ...
So this is a 2x2x18 array; each slice is the variance-covariance matrix among the conditional intercept and slope for a particular subject (by definition, the intercepts and slopes for each subject are independent of the intercepts and slopes for all other subjects).
To convert this to a variance-covariance matrix (see getMethod("image",sig="dgTMatrix") ...)
library(Matrix)
vc <- bdiag(   ## make a block-diagonal matrix
  lapply(
    ## split the 3-d array into a list of sub-matrices ...
    split(pv, slice.index(pv, 3)),
    ## ... and put them back into 2x2 matrices
    matrix, 2))
image(vc, sub = "", xlab = "", ylab = "", useRaster = TRUE)
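Two optional extras: in more recent lme4 versions the attribute may be named "condVar" (as the help text above anticipates), and there is an as.data.frame() method for ranef objects that returns the conditional modes together with their conditional standard deviations:
pv[, , 1]                # conditional var-cov matrix for the first subject
head(as.data.frame(rr))  # long format with condval / condsd columns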

Warning message using robust regression with lmRob: Denominator smaller than tl= 1e-100 in test for bias

I'm doing robust regressions in R using lmRob from the robust package. I have four response variables (R1-R4) and eight predictors (X1-X8), thus doing four separate robust regressions à la:
library(robust)
mod.rob <- lmRob(R1 ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8, data = dataset)
summary(mod.rob)
... and so on for the four response variables.
R1 to R3 work perfectly, but when I try R4 I get two warnings. First:
Warning message:
In lmRob.fit.compute(x, y, x1.idx = x1.idx, nrep = nrep, robust.control = robust.control, :
Max iteration for refinement reached.
... and after summary(mod.rob):
Warning messages:
1: In test.lmRob(object) :
  Denominator smaller than tl= 1e-100 in test for bias.
2: In test.lmRob(object) :
  Denominator smaller than tl= 1e-100 in test for bias.
In the resulting model all t-values and estimates are zero and all p-values are 1, except for the intercept. The test-for-bias M-estimate has NaN for both statistic and p-value, which I suspect is where the model fails for some reason. If I change the parameter mxr to 200 I get rid of the first warning, but not the second two.
I have tried to modify some other parameters (initial.alg, tl, tlo, tua), but to no avail. Doing an ordinary LS fit with mod.lm <- lm(R4 ~ X1 ... X8) works just fine. I guess my statistics knowledge is lacking here, because I don't really understand what is wrong.
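For reference, this is roughly the kind of call I have been using to pass those control parameters (a sketch of the same model, with mxr and tl supplied via lmRob.control()):
library(robust)
ctrl <- lmRob.control(mxr = 200, tl = 1e-100)  # more refinement iterations, default tl
mod.rob <- lmRob(R4 ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8,
                 data = dataset, control = ctrl)
summary(mod.rob)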
Edit:
I have uploaded the data here: http://jmp.sh/QVV6O9w
My goal is to fit regression models using these data. My statistics background is a lot weaker than I thought, so when diving deeper into regression modelling I found it more complicated than I had first hoped. In short, I just want to construct models that are as accurate as possible. The predictors are all based on previous literature, and from what I have read it is more justified to keep them all in the model than to use some stepwise method. I have so far tried to learn various robust techniques, GAMs, and bootstrapping, since the OLS linear models violate the normality-of-residuals assumption. I am at a loss as to how to proceed, really.
