Scatter plot between two predictors X1 and X2 (R)

Given the following scatter plot between two predictors X1 and X2:
Is there a way to get the number of parameters of a linear model like that?
model <- lm(Y~X1+X2)
I would like to get the number 3 somehow (intercept + X1 + X2). I looked for something like this in the structures that lm, summary(model) and anova(model) return, but I couldn't find it.
In case I don't get an answer, I'll stick with dim(model.matrix(model))[2]. Thank you
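For what it's worth, a minimal sketch of an equivalent one-liner, assuming model is the fitted lm object above:

length(coef(model))        # 3: intercept + X1 + X2
ncol(model.matrix(model))  # same count as dim(model.matrix(model))[2]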
I was thinking that X1 and X2 are correlated. Collinearity will reduce the accuracy of the estimates of the regression coefficients.
Maybe the importance of either the X1 or the X2 variable is masked due to the presence of collinearity?
Though both could be correct.
Thank you!

In a linear model, to get a second beta you need your y variable to be predicted/explained by at least two independent variables. If y is explained by only one variable, your linear model will produce only one beta (plus the intercept).
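A tiny illustration of that count, with arbitrary simulated data (names are only for the example):

set.seed(1)
d <- data.frame(Y = rnorm(10), X1 = rnorm(10), X2 = rnorm(10))
length(coef(lm(Y ~ X1, d)))       # 2: intercept + one slope
length(coef(lm(Y ~ X1 + X2, d)))  # 3: intercept + two slopes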


Is there a way to force the coefficient of an independent variable to be positive in a linear regression model in R?

In lm(y ~ x1 + x2 + x3 + ... + xn), not all of the independent variables are expected to have positive coefficients.
For example, we know that x1 to x5 must have positive coefficients and x6 to x10 must have negative coefficients.
However, when lm(y ~ x1 + x2 + x3 + ... + x10) is fit in R, some of x1 ~ x5 get negative coefficients and some of x6 ~ x10 get positive coefficients.
I want to control this within a linear regression approach; is there a good way to do it?
The sign of a coefficient may change depending on its correlation with the other predictors. As @TarJae noted, this looks like an example of (or a counterpart to?) Simpson's Paradox, which describes cases where the sign of a correlation can reverse depending on whether we condition on another variable.
Here's a concrete example in which I've made two independent variables, x1 and x2, which are both highly correlated to y, but when they are combined the coefficient for x2 reverses sign:
# specially chosen seed; most seeds' result isn't as dramatic
set.seed(410)
df1 <- data.frame(y  = 1:10,
                  x1 = rnorm(10, 1:10),
                  x2 = rnorm(10, 1:10))
lm(y ~ ., df1)

Call:
lm(formula = y ~ ., data = df1)

Coefficients:
(Intercept)           x1           x2
    -0.2634       1.3990      -0.4792
This result is not incorrect, but arises here (I think) because the prediction errors from x1 happen to be correlated with the prediction errors from x2, such that a better prediction is created by subtracting some of x2.
EDIT, additional analysis:
The more independent series you have, the more likely you are to see this phenomenon arise. For my example with just two series, only 2.4% of the integer seeds from 1 to 1000 produce this phenomenon, where one of the series gets a negative regression coefficient. This increases to 16% with three series, 64% with five series, and 99.9% with ten series.
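For reference, a rough sketch of the kind of seed sweep described above for the two-series case (the percentages quoted are from the original analysis; this only illustrates how such a check could be run):

flipped <- sapply(1:1000, function(s) {
  set.seed(s)
  d <- data.frame(y = 1:10, x1 = rnorm(10, 1:10), x2 = rnorm(10, 1:10))
  any(coef(lm(y ~ ., d))[-1] < 0)   # TRUE if any slope comes out negative
})
mean(flipped)                       # proportion of seeds showing a sign reversal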
Constraints
Possibilities include using:
nls with algorithm = "port" in which case upper and lower bounds can be specified.
nnnpls in the nnls package which supports upper and lower 0 bounds or use nnls in the same package if all coefficients should be non-negative.
bvls (bounded value least squares) in the bvls package and specify the bounds.
the approach shown in the vignette of the CVXR package, which has an example of performing non-negative least squares.
a quadratic programming reformulation (see Wikipedia for the formulation) together with the quadprog package.
nnls in the limSolve package. Negate the columns that should have negative coefficients to convert it to a non-negative least squares problem.
These packages mostly do not have a formula interface but instead require that a model matrix and dependent variable be passed as separate arguments. If df is a data frame containing the data and if the first column is the dependent variable then the model matrix can be calculated using:
A <- model.matrix(~., df[-1])
and the dependent variable is
df[[1]]
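As a concrete illustration, here is a rough sketch of how those two pieces could be passed to nnls from the nnls package (a sketch only; it assumes every coefficient, including the intercept column, should be non-negative, with columns that should have negative coefficients already negated as described above):

library(nnls)
A <- model.matrix(~ ., df[-1])   # model matrix built from the predictors
b <- df[[1]]                     # dependent variable in the first column
fit <- nnls(A, b)
fit$x                            # constrained coefficient estimates, in column order of A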
Penalties
Another approach is to add a penalty to the least squares objective function, i.e. the objective function becomes the sum of the squared residuals plus one or more additional terms that are functions of the coefficients and tuning parameters. Although doing this does not impose any hard constraints to guarantee the desired signs, it may result in the correct signs anyway. This is particularly useful if the problem is ill-conditioned or if there are more predictors than observations.
linearRidge in the ridge package will minimize the sum of the squared residuals plus a penalty equal to lambda times the sum of the squared coefficients. lambda is a scalar tuning parameter which the software can determine automatically. It reduces to least squares when lambda is 0. The software has a formula method which, along with the automatic tuning, makes it particularly easy to use.
glmnet adds penalty terms containing two tuning parameters. It includes least squares and ridge regression as special cases. It also supports bounds on the coefficients. There are facilities to set the two tuning parameters automatically, but it does not have a formula method and the procedure is not as straightforward as in the ridge package. Read the vignettes that come with it for more information.
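As a sketch of the glmnet route applied to the original sign question (illustrative only; it assumes a data frame df whose first column is y and whose remaining ten columns are x1 to x10, and uses lambda = 0 so the fit reduces to bounded least squares):

library(glmnet)
X <- as.matrix(df[-1])                                      # predictor matrix
fit <- glmnet(X, df[[1]], alpha = 0, lambda = 0,
              lower.limits = c(rep(0, 5), rep(-Inf, 5)),    # x1..x5 >= 0
              upper.limits = c(rep(Inf, 5), rep(0, 5)))     # x6..x10 <= 0
coef(fit)                                                   # constrained estimates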
1. One way is to define an optimization problem and minimize the mean squared error subject to constraints and bounds (nlminb, optim, etc.); see the sketch after this answer.
2. Another is to use the lavaan package, as discussed here:
https://stats.stackexchange.com/questions/96245/linear-regression-with-upper-and-or-lower-limits-in-r
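A minimal sketch of option 1, minimizing the residual sum of squares with box constraints via optim (the bounds are illustrative, matching the x1..x5 positive / x6..x10 negative example above; df is again assumed to hold y in its first column and x1..x10 after it):

sse <- function(beta, X, y) sum((y - X %*% beta)^2)
X  <- cbind(1, as.matrix(df[-1]))        # intercept column plus predictors
lo <- c(-Inf, rep(0, 5), rep(-Inf, 5))   # intercept free, x1..x5 >= 0
hi <- c( Inf, rep(Inf, 5), rep(0, 5))    # x6..x10 <= 0
fit <- optim(rep(0, ncol(X)), sse, X = X, y = df[[1]],
             method = "L-BFGS-B", lower = lo, upper = hi)
fit$par                                  # constrained coefficient estimates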

How to find overall significance for main effects in a dummy interaction using anova()

I ran a Cox regression with two categorical variables (x1 and x2) and their interaction. I need to know the significance of the overall effect of x1, of x2, and of the interaction.
The overall effect of the interaction:
I know how to find out the overall effect of the interaction using anova():
library(survival)
fit_x1_x2 <- coxph(Surv(time, death) ~ x1 + x2 , data= df)
fit_full <- coxph(Surv(time, death) ~ x1 + x2 + x1:x2, data= df)
anova(fit_x1_x2, fit_full)
But how are we supposed to use anova() to find out the overall effect of x1 or x2? What I tried is this:
The overall effect of x1
fit_x2_ia <- coxph(Surv(time, death) ~ x2 + x1:x2, data= df)
fit_full <- coxph(Surv(time, death) ~ x1 + x2 + x1:x2, data= df)
anova(fit_x2_ia, fit_full)
The overall effect of x2
fit_x1_ia <- coxph(Surv(time, death) ~ x1 + x1:x2, data= df)
fit_full <- coxph(Surv(time, death) ~ x1 + x2 + x1:x2, data= df)
anova(fit_x1_ia, fit_full)
I am not sure whether this is how we are supposed to use anova(). The fact that the output shows zero degrees of freedom makes me sceptical. I am even more puzzled that both times, for the overall effect of x1 and of x2, the test is significant, although the log-likelihood values of the models are the same and the chi-squared value is zero.
Here is the data I used
set.seed(1) # make it reproducible
df <- data.frame(x1= rnorm(1000), x2= rnorm(1000)) # generate data
df$death <- rbinom(1000,1, 1/(1+exp(-(1 + 2 * df$x1 + 3 * df$x2 + df$x1 * df$x2)))) # dead or not
library(tidyverse) # for cut_number() function
df$x1 <- cut_number(df$x1, 4); df$x2 <- cut_number(df$x2, 4) # make predictors to groups
df$time <- rnorm(1000); df$time[df$time<0] <- -df$time[df$time<0] # add survival times
The two models you have constructed for the "overall effect" do not really appear to satisfy the statistical property of being hierarchical, i.e., properly nested. Specifically, if you look at the actual models that get constructed with that code, you should see that they are actually the same model with different labels for the two-way crossed effects. In both cases you have 15 estimated coefficients (hence the zero-degrees-of-freedom difference), and you will note that the x1 parameter in the full model has the same coefficient as the x2[-3.2532,-0.6843):x1[-0.6973,-0.0347) parameter in the "reduced" model looking for an x1 effect, namely 0.19729. The crossing operator is basically filling in all the missing cells for the main effects with interaction results.
There really is little value in looking at interaction models without all of the main effects if you want to stay within the bounds of generally accepted statistical practice.
If you type:
fit_full
... you should get a summary of the model that has p-values for the x1 levels, the x2 levels, and the interaction levels. Because you chose to categorize each of these into four arbitrary groups, you end up with a total of 15 parameter estimates. If instead you made no cuts and modeled the linear effects and the linear-by-linear interaction, you could get three p-values directly. I'm guessing there was a suspicion that the effects were not linear; if so, a cubic spline model might be more parsimonious and distort the biological reality less than discretization into 4 disjoint levels. If you thought the effects might be non-linear but ordinal, there is an ordered version of factor-classed variables, but the results are generally confusing to the uninitiated.
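For illustration, a minimal sketch of the linear-by-linear version mentioned above, using the continuous x1 and x2 before they are cut into groups (a sketch only; note that anova() on a single coxph fit gives sequential likelihood-ratio tests in the order the terms are entered):

fit_lin <- coxph(Surv(time, death) ~ x1 * x2, data = df)   # continuous x1, x2
summary(fit_lin)   # Wald p-values for x1, x2 and x1:x2
anova(fit_lin)     # sequential likelihood-ratio tests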
The answer from 42- is informative, but after reading it I still did not know how to determine the three p-values, or whether this is possible at all. So I talked to a professor of biostatistics at my university. His answer was quite simple, and I share it in case others have similar questions.
In this design it is not possible to determine the three p-values for the overall effects of x1, x2 and their interaction. If we want the p-values of the three overall effects, we need to keep the continuous variables as they are. Breaking the variables up into groups answers a different question, so we cannot test the hypotheses about the overall effects, no matter which statistical model we use.

Interpretation of main effects when interaction is present in gam

Consider a GAM model with the following structure:
gam(y ~ s(x1, by = x2) + x2 + s(x3)), where x1 and x3 are continuous variables and x2 is categorical. If I want to know the effect of x1 (in terms of deviance explained), I remove x1 from the model and compare the deviance explained (following this thread), like this:
library(mgcv)
model1 <- gam(y ~ s(x1, by = x2) + x2 + s(x3))
model2 <- gam(y ~ x2 + s(x3))
## deviance explained by x1:
summary(model1)$dev.expl - summary(model2)$dev.expl
But what if I want to know the effect of x2? I am not interested in the effect of x2 on x1; I just want to know the effect of x2 by itself. Could I do this:
model3 <- gam(y ~ s(x1, by = x2) + s(x3))
## deviance explained by x2:
summary(model1)$dev.expl - summary(model3)$dev.expl
I know that for linear models, if a significant interaction is present, one cannot remove the main effects of the variables in that interaction, even if they are not significant. Does the same apply here, in that I cannot know the effect of x2 on y independently of its effect on x1?
Yes, the same applies here. Whenever there are interactions involving a variable, you cannot make claims about the effects of that variable on its own.
However, notice that the type of effect you are retrieving from explained deviance doesn't have the same interpretation as the usual effect in linear models, where you can say that a one-unit change in x2 corresponds to an increase of beta2 in the mean of y. They are two different kinds of effects. Hence, by removing only the x2 term, you can still report an interpretable increase in explained deviance. The only difference is that the interpretation is in terms of information loss, or decrease in uncertainty, which is absolutely fine.

F-test with HAC estimate

I am calculating a multivariate OLS regression in R, and I know the residuals are autocorrelated. I know I can use the Newey-West correction when performing the t-test to check whether one of the coefficients is zero. I can do that using:
require(sandwich)   # NeweyWest()
require(lmtest)     # coeftest()
model <- lm(y ~ x1 + x2)
coeftest(model, vcov = NeweyWest(model))
where y is the variable to regress and x1 and x2 are the predictors. This seems a good approach since my sample size is large.
But what if I want to run an F-test to test whether the coefficient of x1 is 1 and the coefficient of x2 is zero simultaneously? I cannot find a way to do that in R while accounting for the autocorrelation of the residuals. For instance, if I use the function linearHypothesis in R, it seems that Newey-West cannot be used as an argument of vcov. Any suggestions? An alternative would be to bootstrap a confidence ellipse for my point (1, 0), but I was hoping to use an F-test if possible. Thank you!
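For what it's worth, one avenue that is sometimes suggested is the vcov. argument (note the trailing dot) of linearHypothesis in the car package, which accepts a covariance matrix; a rough sketch, assuming the same model object as above:

library(car)
linearHypothesis(model, c("x1 = 1", "x2 = 0"),
                 vcov. = NeweyWest(model), test = "F")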

Derive standard error of a transformed variable in linear regression

I would like to calculate the standard error of a transformed quantity from my linear regression, i.e. divide two coefficients and get the standard error of the ratio.
I use the deltamethod function from the msm package, but fail to get accurate standard errors.
For example:
Simulation of data:
library(data.table)
library(msm)   # for deltamethod()
set.seed(123)
nobs <- 1000
data <- data.table(
  x1 = rnorm(nobs),
  x2 = rnorm(nobs),
  x3 = rnorm(nobs),
  x4 = rnorm(nobs),
  y  = rnorm(nobs))
Linear regression:
reg2 <- lm(y~x1+x2+x3+x4, data=data)
Get the coef and vcov (here I need to get rid of the missings, as some coefficients in my real data are NA and I calculate a lot of regressions in a loop):
vcov_reg <- vcov(reg2)
coef_reg <- coef(reg2)
coef_reg <- na.omit(coef_reg)
coef_reg <- as.numeric(coef_reg)
Delta method, for the variable x1 divided by x3 (meaning I should use x2 and x4 in the formula, since the msm package indexes the coefficient vector and the intercept is its first element):
deltamethod(~ x2/x4, coef_reg, vcov_reg)
This gives me a standard error of the transformed variable (x1/x3) of 3.21, while all standard errors from this regression are around 0.03.
Any ideas why / what's wrong here?
Other suggestions to calculate it are also welcome.
There is nothing wrong with the result. In your example the data are centered at 0, so the estimated coefficients are close to 0, and it shouldn't be too surprising that dividing by something so close to 0 produces a large variance / standard error.
Note that your estimated coefficient for x3 is -0.017408626, so with a standard error of about 0.03 the CI for this coefficient crosses 0, and that is the quantity we are dividing by. Hopefully that gives you some intuition for why the standard error seems to explode. For some evidence that this really is part of the issue, consider x1/x2 instead:
> deltamethod(~ x2/x3, coef_reg, vcov_reg)
[1] 0.3752063
This is much smaller, since the estimated coefficient in the denominator is larger in this case (about 0.09).
But really there is nothing wrong with your code; it was just that your intuition was off. Alternative ways to estimate what you want would be to bootstrap, or to use a Bayesian regression and look at the posterior distribution of the transformation.
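As a rough sketch of the bootstrap alternative mentioned above (illustrative only; it resamples rows of the simulated data and recomputes the coefficient ratio each time):

library(boot)
ratio_fn <- function(d, idx) {
  cf <- coef(lm(y ~ x1 + x2 + x3 + x4, data = d[idx, ]))
  cf["x1"] / cf["x3"]                 # the ratio of interest
}
b <- boot(data, ratio_fn, R = 2000)
sd(b$t)                               # bootstrap standard error of the ratio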
