Derive standard error of a transformed variable in linear regression in R

I would like to calculate the standard error of a transformed variable from my linear regression, i.e. divide two coefficients and get the standard error of the ratio.
I use the deltamethod function from the msm package, but fail to get accurate standard errors.
For example:
Simulation of data:
library(data.table)

set.seed(123)
nobs <- 1000
data <- data.table(
  x1 = rnorm(nobs),
  x2 = rnorm(nobs),
  x3 = rnorm(nobs),
  x4 = rnorm(nobs),
  y  = rnorm(nobs))
Linear regression:
reg2 <- lm(y~x1+x2+x3+x4, data=data)
Get the coefficients and vcov. (Here I need to drop the missing values, as some coefficients in my real data are NA and I run many regressions in a loop.)
vcov_reg <- vcov(reg2)
coef_reg <- coef(reg2)
coef_reg <- na.omit(coef_reg)
coef_reg <- as.numeric(coef_reg)
Delta method, for the variable x1 divided by x3 (meaning I should refer to x2 and x4 in the formula, since the msm package counts the intercept as x1):
deltamethod(~ x2/x4, coef_reg, vcov_reg)
This gives me a standard error for the transformed variable (x1/x3) of 3.21, while all standard errors from this regression are around 0.03.
Any ideas why / what's wrong here?
Other suggestions for calculating it are also welcome.

There is nothing wrong with the result. In your example the data are centered at 0, so it shouldn't be too surprising that dividing by such a coefficient produces a large variance / standard error.
Note that your estimated coefficient for x3 is -0.017408626, so with a standard error of about 0.03 the CI for this coefficient crosses 0. And that's the quantity we're dividing by. Hopefully that gives you some intuition for why the standard error seems to explode. For some evidence that this really is part of the issue, consider x1/x2 instead.
> deltamethod(~ x2/x3, coef_reg, vcov_reg)
[1] 0.3752063
This is much smaller, since the estimated coefficient for the denominator is bigger in this case (0.09).
But really there is nothing wrong with your code; your intuition was just off. Alternative ways to estimate what you want are to bootstrap, or to use a Bayesian regression and look at the posterior distribution of the transformation.
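As an illustration of the bootstrap alternative, here is a sketch that re-runs the regression on resampled rows and takes the standard deviation of the resulting ratios. It regenerates the question's simulated data so the snippet is self-contained; the number of replicates (2000) is an arbitrary choice.

```r
# Bootstrap sketch of the SE of the coefficient ratio x1/x3.
library(data.table)

set.seed(123)
nobs <- 1000
data <- data.table(x1 = rnorm(nobs), x2 = rnorm(nobs),
                   x3 = rnorm(nobs), x4 = rnorm(nobs),
                   y  = rnorm(nobs))

ratio_boot <- replicate(2000, {
  idx <- sample(nobs, replace = TRUE)        # resample rows with replacement
  cf  <- coef(lm(y ~ x1 + x2 + x3 + x4, data = data[idx]))
  cf["x1"] / cf["x3"]                        # the ratio of interest
})
sd(ratio_boot)  # bootstrap SE; typically large, echoing the delta-method result
```

Because the denominator's CI crosses zero, the bootstrap distribution of the ratio has very heavy tails, which is the same phenomenon the delta method is picking up.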


Is there a way to force the coefficient of the independent variable to be a positive coefficient in the linear regression model used in R?

In lm(y ~ x1 + x2 + x3 + ... + xn), not all estimated coefficients come out with the expected sign.
For example, we know that x1 to x5 must have positive coefficients and x6 to x10 must have negative coefficients.
However, when lm(y ~ x1 + x2 + ... + x10) is fitted in R, some of x1 to x5 get negative coefficients and some of x6 to x10 get positive coefficients.
I want to control this in a linear regression; is there any good way?
The sign of a coefficient may change depending upon its correlation with other predictors. As @TarJae noted, this seems like an example of (or counterpart to?) Simpson's Paradox, which describes cases where the sign of a correlation might reverse depending on whether we condition on another variable.
Here's a concrete example in which I've made two independent variables, x1 and x2, which are both highly correlated to y, but when they are combined the coefficient for x2 reverses sign:
# specially chosen seed; most seeds' results aren't as dramatic
set.seed(410)
df1 <- data.frame(y  = 1:10,
                  x1 = rnorm(10, 1:10),
                  x2 = rnorm(10, 1:10))
lm(y ~ ., df1)

Call:
lm(formula = y ~ ., data = df1)

Coefficients:
(Intercept)           x1           x2
    -0.2634       1.3990      -0.4792
This result is not incorrect, but arises here (I think) because the prediction errors from x1 happen to be correlated with the prediction errors from x2, such that a better prediction is created by subtracting some of x2.
EDIT, additional analysis:
The more independent series you have, the more likely you are to see this phenomenon arise. For my example with just two series, only 2.4% of the integer seeds from 1 to 1000 produce this phenomenon, where one of the series gets a negative regression coefficient. This increases to 16% with three series, 64% with five series, and 99.9% with 10 series.
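A rough sketch of the seed scan described above, for the two-series case (the exact percentage depends on R's default RNG, so treat the number as approximate):

```r
# For seeds 1..1000, simulate two predictors that both track y positively,
# fit y ~ x1 + x2, and count how often at least one slope comes out negative.
flipped <- sapply(1:1000, function(s) {
  set.seed(s)
  df <- data.frame(y  = 1:10,
                   x1 = rnorm(10, 1:10),
                   x2 = rnorm(10, 1:10))
  any(coef(lm(y ~ ., df))[-1] < 0)  # [-1] drops the intercept
})
mean(flipped)  # proportion of seeds showing a sign flip
```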
Constraints
Possibilities include using:
nls with algorithm = "port" in which case upper and lower bounds can be specified.
nnnpls in the nnls package, which supports constraining individual coefficients to be non-negative or non-positive; or nnls in the same package if all coefficients should be non-negative.
bvls (bounded value least squares) in the bvls package and specify the bounds.
There is an example of performing non-negative least squares in the vignette of the CVXR package.
reformulate it as a quadratic programming problem (see Wikipedia for the formulation) and use quadprog package.
nnls in the limSolve package. Negate the columns that should have negative coefficients to convert it to a non-negative least squares problem.
These packages mostly do not have a formula interface; instead they require that a model matrix and the dependent variable be passed as separate arguments. If df is a data frame containing the data, with the dependent variable in the first column, then the model matrix can be calculated using:
A <- model.matrix(~., df[-1])
and the dependent variable is
df[[1]]
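As a sketch of how these pieces fit together, here is nnls applied to a model matrix built as above, with the column-negation trick from the limSolve bullet used to force one coefficient to be non-positive. The data and column names are illustrative, not from the question.

```r
# Sign-constrained least squares via the nnls package.
library(nnls)

set.seed(1)
df <- data.frame(y  = rnorm(50),
                 x1 = rnorm(50),
                 x2 = rnorm(50))

A <- model.matrix(~ ., df[-1])    # predictors, including an intercept column
b <- df[[1]]                      # dependent variable (first column of df)

# To force a non-positive coefficient on x2, negate that column first,
# then flip the sign of the fitted coefficient back afterwards.
A[, "x2"] <- -A[, "x2"]
fit <- nnls(A, b)                 # all fitted coefficients constrained >= 0
coefs <- fit$x
coefs[colnames(A) == "x2"] <- -coefs[colnames(A) == "x2"]
coefs                             # x2's coefficient is now <= 0
```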
Penalties
Another approach is to add a penalty to the least squares objective function, i.e. the objective function becomes the sum of the squares of the residuals plus one or more additional terms that are functions of the coefficients and tuning parameters. Although doing this does not impose any hard constraints to guarantee the desired signs it may result in the correct signs anyways. This is particularly useful if the problem is ill conditioned or if there are more predictors than observations.
linearRidge in the ridge package will minimize the sum of the square of the residuals plus a penalty equal to lambda times the sum of the squares of the coefficients. lambda is a scalar tuning parameter which the software can automatically determine. It reduces to least squares when lambda is 0. The software has a formula method which along with the automatic tuning makes it particularly easy to use.
glmnet adds penalty terms containing two tuning parameters. It includes least squares and ridge regression as a special cases. It also supports bounds on the coefficients. There are facilities to automatically set the two tuning parameters but it does not have a formula method and the procedure is not as straight forward as in the ridge package. Read the vignettes that come with it for more information.
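A sketch of the coefficient bounds in glmnet, via its lower.limits and upper.limits arguments (the data are illustrative; in practice you would pick lambda by cross-validation with cv.glmnet):

```r
# Sign constraints in glmnet: force the first two coefficients >= 0
# and the last two <= 0.
library(glmnet)

set.seed(1)
x <- matrix(rnorm(100 * 4), ncol = 4)
y <- rnorm(100)

fit <- glmnet(x, y,
              lower.limits = c(0, 0, -Inf, -Inf),
              upper.limits = c(Inf, Inf, 0, 0))

coef(fit, s = min(fit$lambda))  # coefficients at the smallest fitted lambda
```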
One way is to set up an optimization problem and minimize the mean squared error subject to constraints and bounds (nlminb, optim, etc.).
Another is to use the lavaan package, as described here:
https://stats.stackexchange.com/questions/96245/linear-regression-with-upper-and-or-lower-limits-in-r

F-test with HAC estimate

I am estimating a multivariate OLS regression in R, and I know the residuals are autocorrelated. I know I can use the Newey-West correction when performing a t-test to check whether one of the coefficients is zero. I can do that using:
require(sandwich)
require(lmtest)  # coeftest lives here
model <- lm(y ~ x1 + x2)
coeftest(model, vcov = NeweyWest(model))
where y is the variable to regress and x1 and x2 are the predictors. This seems a good approach since my sample size is large.
But what if I want to run an F-test of whether the coefficient of x1 is 1 and the coefficient of x2 is zero simultaneously? I cannot find a way to do that in R while accounting for the autocorrelation of the residuals. For instance, if I use the function linearHypothesis, it seems that Newey-West cannot be passed as the vcov argument. Any suggestions? An alternative would be bootstrapping to estimate a confidence ellipse around my point (1, 0), but I was hoping to use an F-test if possible. Thank you!
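As far as I know, car::linearHypothesis does accept a user-supplied covariance matrix through its vcov. argument, so the Newey-West estimate can be plugged straight into the joint test. A minimal sketch with simulated data (the variable names mirror the question; the true coefficients are x1 = 1, x2 = 0, so the null holds):

```r
# Joint F-test of x1 = 1 and x2 = 0 with a Newey-West (HAC) covariance.
library(sandwich)
library(lmtest)
library(car)

set.seed(1)
x1 <- rnorm(200)
x2 <- rnorm(200)
y  <- x1 + rnorm(200)            # data generated under the null hypothesis
model <- lm(y ~ x1 + x2)

linearHypothesis(model, c("x1 = 1", "x2 = 0"),
                 vcov. = NeweyWest(model), test = "F")
```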

generating errors with heteroscedasticity

I have a question regarding generating errors with heteroscedasticity
This is how my friend told me to do it:
n <- 30
x1 <- rnorm(n,0,1) # 1st predictor
x2 <- rnorm(n,0,1) # 2nd predictor
e <- rnorm(n,0,x1^2) # errors with heteroscedasticity
b1 <- 0.5; b2 <- 0.5
y <- x1*b1+x2*b2+e
To me, e <- rnorm(n,0,x1^2) looks like autocorrelation rather than a heteroscedastic error distribution.
But my friend said this is the correct way to generate errors with heteroscedasticity.
Am I missing something here?
I thought heteroscedasticity occurs when the variance of the error terms differ across observations.
Does e<-rnorm(n,0,x1^2) this syntax generate errors with heteroscedasticity correctly?
If not, could anyone tell me how to generate errors with heteroscedasticity?
This specification does generate a particular (slightly odd) kind of heteroscedasticity. You define heteroscedasticity as "the variance of the error term differ[ing] across observations". Since the value of x1 is different for different observations, and you have chosen your error values with a standard deviation of x1^2, the variances will be different for different observations.
Note that:
rnorm() specifies variability in terms of the standard deviation rather than the variance, so here the error variance is actually x1^4.
Autocorrelation refers to non-independence between (successive) observations. rnorm() draws independent deviates, so this specification does not produce an autocorrelated sample.
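A quick way to check both claims empirically is to fit the model and run a Breusch-Pagan test (heteroscedasticity) and a Durbin-Watson test (autocorrelation) from the lmtest package. This sketch uses a larger n than the question so the tests have power:

```r
# Simulate the friend's specification and test it for heteroscedasticity
# and for autocorrelation.
library(lmtest)

set.seed(1)
n  <- 500
x1 <- rnorm(n)
x2 <- rnorm(n)
e  <- rnorm(n, 0, x1^2)          # sd (not variance) proportional to x1^2
y  <- 0.5 * x1 + 0.5 * x2 + e

fit <- lm(y ~ x1 + x2)
bptest(fit, ~ x1 + I(x1^2))      # Breusch-Pagan: rejects homoscedasticity
dwtest(fit)                      # Durbin-Watson: no evidence of autocorrelation
```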

Mixed model starting values for lme4

I am trying to fit a mixed model using the lmer function from the lme4 package. However, I do not understand what should be input to the start parameter.
My purpose is to fit a simple linear regression first and use the coefficients estimated there as starting values for the mixed model.
Let's say that my model is the following:
linear_model = lm(y ~ x1 + x2 + x3, data = data)
coef = summary(linear_model)$coefficients[- 1, 1] #I remove the intercept
result = lmer(y ~ x1 + x2 + x3 | x1 + x2 + x3, data = data, start = coef)
This example is an oversimplified version of what I am doing since I won't be able to share my data.
Then I get the following kind of error:
Error during wrapup: incorrect number of theta components (!=105) #105 is the value I get from the real regression I am trying to fit.
I have tried many different solutions, such as providing a list and naming those values theta, as I saw suggested on some forums.
Also, the GitHub code tests whether the length is appropriate, but I can't find what it is compared against:
# Assign the start value to theta
if (is.numeric(start)) {
theta <- start
}
# Check the length of theta
length(theta)!=length(pred$theta)
However I can't find where pred$theta is defined and so I don't understand where that value 105 is coming from.
Any help?
A few points:
lmer doesn't in fact fit any of the fixed-effect coefficients explicitly; these are profiled out so that they are solved for implicitly at each step of the nonlinear estimation process. The estimation involves only a nonlinear search over the variance-covariance parameters. This is detailed (rather technically) in one of the lme4 vignettes (eqs. 30-31, p. 15). Thus providing starting values for the fixed-effect coefficients is impossible, and useless ...
glmer does fit fixed-effects coefficients explicitly as part of the nonlinear optimization (as @G.Grothendieck discusses in comments), if nAGQ > 0 ...
it's admittedly rather obscure, but the starting values for the theta parameters (the only ones that are explicitly optimized in lmer fits) are 0 for the off-diagonal elements of the Cholesky factor, 1 for the diagonal elements: this is coded here
ll$theta[] <- is.finite(ll$lower) # initial values of theta are 0 off-diagonal, 1 on
... where you need to know further that, upstream, the values of the lower vector have been coded so that elements of the theta vector corresponding to diagonal elements have a lower bound of 0, off-diagonal elements have a lower bound of -Inf; this is equivalent to starting with an identity matrix for the scaled variance-covariance matrix (i.e., the variance-covariance matrix of the random-effects parameters divided by the residual variance), or a random-effects variance-covariance matrix of (sigma^2 I).
If you have several random effects and big variance-covariance matrices for each, things can get a little hairy. If you want to recover the starting values that lmer will use by default you can use lFormula() as follows:
library(lme4)
ff <- lFormula(Reaction~Days+(Days|Subject),sleepstudy)
(lwr <- ff$reTrms$lower)
## [1] 0 -Inf 0
ifelse(lwr==0,1,0) ## starting values
## [1] 1 0 1
For this model, we have a single 2x2 random-effects variance-covariance matrix. The theta parameters correspond to the lower-triangle Cholesky factor of this matrix, in column-wise order, so the first and third elements are diagonal, and the second element is off-diagonal.
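If you do want to pass explicit theta starting values, lmer accepts them via start = list(theta = ...). A sketch using the default values recovered above, on the sleepstudy data shipped with lme4:

```r
# Supplying explicit theta starting values to lmer.
library(lme4)

ff     <- lFormula(Reaction ~ Days + (Days | Subject), sleepstudy)
theta0 <- ifelse(ff$reTrms$lower == 0, 1, 0)  # 1 on-diagonal, 0 off-diagonal

fit <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy,
            start = list(theta = theta0))

getME(fit, "theta")  # the optimized theta parameters
```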
The fact that you have 105 theta parameters worries me; fitting such a large random-effects model will be extremely slow and take an enormous amount of data to fit reliably. (If you know your model makes sense and you have enough data you might want to look into faster options, such as using Doug Bates's MixedModels package for Julia or possibly glmmTMB, which might scale better than lme4 for problems with large theta vectors ...)
your model formula, y ~ x1 + x2 + x3 | x1 + x2 + x3, seems very odd. I can't figure out any context in which it would make sense to have the same variables as random-effect terms and grouping variables in the same model!

Warning message using robust regression with lmRob: Denominator smaller than tl= 1e-100 in test for bias

I'm doing robust regressions in R using lmRob from the robust package. I have four response variables (R1-R4) and eight predictors (X1-X8), thus doing four separate robust regressions à la:
library(robust)
mod.rob <- lmRob(R1 ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8, data = dataset)
summary(mod.rob)
... and so on for the four response variables.
R1 to R3 work perfectly, but when I try R4 I get two warning messages. First:
Warning message:
In lmRob.fit.compute(x, y, x1.idx = x1.idx, nrep = nrep, robust.control = robust.control, :
Max iteration for refinement reached.
... and after summary(mod.rob):
Warning messages:
1: In test.lmRob(object) :
  Denominator smaller than tl= 1e-100 in test for bias.
2: In test.lmRob(object) :
  Denominator smaller than tl= 1e-100 in test for bias.
In the resulting model all t-values and estimates are zero and all p-values are 1, except for the intercept. The test for bias (M-estimate) has NaN for both statistic and p-value, which I suspect is where the model fails for some reason. If I change the parameter mxr to 200 I get rid of the first warning, but not the second ones.
I have tried to modify some other parameters (initial.alg, tl, tlo, tua), but to no avail. Doing an ordinary least-squares fit with mod.lm <- lm(R4 ~ X1 ... X8) works just fine. I guess my statistics knowledge is lacking here, because I don't really understand what is wrong.
Edit:
I have uploaded the data here: http://jmp.sh/QVV6O9w
My goal is to fit regression models using these data. My statistics background is weaker than I thought, so when diving deeper into regression modelling I found it more complicated than I had hoped. In short, I just want to construct models that are as accurate as possible. The predictors are all based on previous literature, and from what I have read, it is more justified to keep them all in the model than to use some stepwise method. So far I have tried to learn various robust techniques, GAMs, and bootstrapping, since the OLS linear models violate the normality-of-residuals assumption. I am at a loss as to how to proceed, really.
