Drawing a quadratic function in mixed effect model - r

It's a bit of a long question so thanks for bearing with me.
Here's my data
https://www.dropbox.com/s/jo22d68a8vxwg63/data.csv?dl=0
I constructed a mixed effect model
library(lme4)
mod <- lmer(sqrt(y) ~ x1 + I(x1^2) + x2 + I(x2^2) + x3 + I(x3^2) + x4 + I(x4^2) + x5 + I(x5^2) +
x6 + I(x6^2) + x7 + I(x7^2) + x8 + I(x8^2) + (1|loc) + (1|year), data = data)
All the predictors are standardised, and I am interested in knowing how y changes with changes in x5 while keeping the other variables at their mean values (equal to 0, since all the variables are standardised).
This is how I do it.
# make all predictors except x5 equal to zero
data$x1<-0
data$x2<-0
data$x3<-0
data$x4<-0
data$x6<-0
data$x7<-0
data$x8<-0
# Use the predict function
library(merTools)
fitted <- predictInterval(merMod = mod, newdata = data, level = 0.95, n.sims = 1000, stat = "median", include.resid.var = TRUE)
Now I want to plot the fitted values as a quadratic function of x5. I do this:
i<-order(data$x5)
plot(data$x5[i],fitted$fit[i],type="l")
I expected this to produce a plot of y as a quadratic function of x5, but the resulting plot does not show any quadratic curve. Can anyone tell me what I am doing wrong here?

I'm not sure where predictInterval comes from, but you can do this with predict. The trick is just to make sure you set your random effects to 0. Here's how you can do that
newdata <- data
newdata[,paste0("x", setdiff(1:8,5))] <- 0
y <- predict(mod, newdata=newdata, re.form=NA)
plot(data$x5, y)
The re.form=NA part drops out the random effects.
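If you want the fitted values drawn as a smooth curve rather than a cloud of points, order by x5 before plotting, just as in the original attempt. Because every other predictor is held at 0 and the random effects are dropped, the ordered predictions trace the quadratic in x5:
i <- order(data$x5)
plot(data$x5[i], y[i], type = "l", xlab = "x5", ylab = "predicted sqrt(y)")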

Related

Use linear model for propensity score matching (R MatchIt)

I want to use the propensity score fitted from a linear model to match observations using the MatchIt library.
For example, suppose df is a dataframe. If I were to balance its treatment column in terms of x1, x2 and x3 using the propensity score from a logit, I would set distance = 'glm' and link = 'logit':
m <- matchit(formula = treatment ~ x1 + x2 + x3,
             data = df,
             method = 'nearest',
             distance = 'glm',
             link = 'logit')
How can I do the same with a linear model instead of a logit?
As per the documentation:
When link is prepended by "linear.", the linear predictor is used instead of the predicted probabilities.
Hence, I tried:
m <- matchit(formula = treatment ~ x1 + x2 + x3,
             data = df,
             method = 'nearest',
             distance = 'glm',
             link = 'linear.logit')
I'm afraid that doing this (link = linear.logit) would use the score from the log-odds of the logit model.
Is there any way I can just use a linear model instead of a generalized linear model?
You cannot do this from within matchit() (and you shouldn't, in general, which is why it is not allowed). But you can always estimate the propensity score outside matchit() however you want and then supply it to matchit(). To use a linear probability model, you would run the following:
fit <- lm(treatment ~ x1 + x2 + x3, data = df)
ps <- fitted(fit)
m <- matchit(treatment ~ x1 + x2 + x3,
             data = df,
             method = 'nearest',
             distance = ps)
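Once the matching is done, summary(m) reports covariate balance for the matched sample, and match.data(m) extracts the matched data set for the outcome analysis:
summary(m)
matched <- match.data(m)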

Linear Regression with coefficients question

We are given the following dataset (https://github.com/Iron-Maiden-19/regression/blob/master/shel2x.csv) and we fit this linear regression model, Model A:
modelA <- lm(Y ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8,data=shel2x)
which is fine, but then we are given the following problem, which I am unsure how to solve: fit Model B and compare its AIC to that of modelA. Here is Model B:
Y = β0 + β1*X1 + β2*X2 + β3*X2^2 + β4*X4 + β5*X6 + ε
So I know the beta values represent my coefficients from the first model, but how do I run this regression and how do I form the equation for it?
In R, you perform a linear regression just the way you already have.
modelA <- lm(Y ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8,data=shel2x)
ModelA is a linear model of the form:
Y = beta0 + beta1*X1 + beta2*X2 + beta3*X3 + beta4*X4 + beta5*X5 + beta6*X6 + beta7*X7 + beta8*X8
So, to fit model B, you would just create another linear model in the following manner. Note that the quadratic term has to be wrapped in I(), because ^ has a special meaning inside a model formula:
modelB <- lm(Y ~ X1 + X2 + I(X2^2) + X4 + X6, data = shel2x)
Then calling:
summary(modelA)
summary(modelB)
Should give you the summary output for the two separate linear models. Note, however, that summary() on an lm fit does not report AIC; to compare the models on AIC you need to compute it separately, as shown below. Without running the models and without looking at your data, I would expect modelB to come out with the smaller AIC if the dropped terms add little, since AIC penalises model complexity and tends to favour the more parsimonious model.
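For example:
AIC(modelA, modelB)
This returns a small table with the degrees of freedom and the AIC of each model, so you can compare them side by side.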

Running all Combinations of Dummy Variables Through a Regression Equation

I need to extract output from a regression for all possible combinations of the dummy variables while keeping the continuous predictor variables fixed.
The problem is that my model contains over 100 combinations of interactions, and manually calculating all of these would be quite tedious. Is there an efficient method for iteratively calculating the output?
The only way I can think of is to write a loop that generates all desired combinations to subsequently feed into the predict() function.
Some context:
I am trying to identify the regional differences of automobile resale prices by the model of car.
My model looks something like this:
lm(price ~ age + mileage + region_dummy_1 + ... + region_dummy_n + model_dummy_1 + ... + model_dummy_n + region_dummy_1 * model_dummy_1 + ... + region_dummy_1 * model_dummy_n, data = data)
My question is:
How do I produce a table of predicted prices for every model/region combination?
Use .*.
lm(price ~ .*.)
Here's a small reproducible example:
> df <- data.frame(y = rnorm(100,0,1),
+ x1 = rnorm(100,0,1),
+ x2 = rnorm(100,0,1),
+ x3 = rnorm(100,0,1))
>
> lm(y ~ .*., data = df)
Call:
lm(formula = y ~ . * ., data = df)
Coefficients:
(Intercept)           x1           x2           x3        x1:x2        x1:x3        x2:x3
   -0.02036      0.08147      0.02354     -0.03055      0.05752     -0.02399      0.24065
How does it work?
. is shorthand for "all predictors", and * includes the two-way interaction term.
For example, consider a dataframe with 3 columns: Y (the response variable) and 2 predictors (X1 and X2). The syntax lm(Y ~ X1*X2) is shorthand for lm(Y ~ X1 + X2 + X1:X2), where X1:X2 is the interaction term.
Extending this simple case, imagine we have a data frame with 3 predictors, X1, X2, and X3. lm(Y ~ .*.) is equivalent to lm(Y ~ X1 + X2 + X3 + X1:X2 + X1:X3 + X2:X3).
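To get the actual table of predicted prices for every model/region combination that the question asks for, a common approach is to build a prediction grid with expand.grid() and pass it to predict(). Here is a minimal sketch, assuming a data frame cars with columns price, age, mileage and factor columns region and model (these names are placeholders, not taken from the original post):
fit <- lm(price ~ age + mileage + region * model, data = cars)
grid <- expand.grid(age     = mean(cars$age),
                    mileage = mean(cars$mileage),
                    region  = levels(cars$region),
                    model   = levels(cars$model))
grid$predicted_price <- predict(fit, newdata = grid)
grid  # one row per region/model combination, continuous predictors held at their means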

R : Plotting Prediction Results for a multiple regression

I want to observe the effect of a treatment variable on my outcome Y. I ran a multiple regression, fit <- lm(Y ~ x1 + x2 + x3), where x1 is the treatment variable and x2, x3 are the control variables. I used the predict function holding x2 and x3 at their means and plotted the result.
Now I would like to add a line to my plot, similar to abline for a simple regression, but I do not know how to do this.
I think I have to use lines(x, y), where y is the predicted values and x is a sequence of values for my variable x1, but R tells me the lengths of y and x differ.
I think you are looking for termplot:
## simulate some data
set.seed(0)
x1 <- runif(100)
x2 <- runif(100)
x3 <- runif(100)
y <- cbind(1,x1,x2,x3) %*% runif(4) + rnorm(100, sd = 0.1)
## fit a model
fit <- lm(y ~ x1 + x2 + x3)
termplot(fit, se = TRUE, terms = "x1")
termplot uses predict.lm(, type = "terms") for term-wise prediction. If a model has an intercept (as above), predict.lm will centre each term (see: What does predict.glm(, type = "terms") actually do?). In this way, each term is predicted to be 0 at the mean of its covariate, and the standard error at the mean is 0 (hence the confidence band has zero width and meets the fitted line at the mean).
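If you would rather stay on the response scale and add the curve yourself, as attempted in the question, you can build a newdata grid over x1 with x2 and x3 held at their means and draw it with lines(). A small sketch using the simulated data above:
nd <- data.frame(x1 = seq(min(x1), max(x1), length.out = 100),
                 x2 = mean(x2),
                 x3 = mean(x3))
plot(x1, y)
lines(nd$x1, predict(fit, newdata = nd), col = "red")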

R probit regression marginal effects

I am using R to replicate a study and obtain mostly the same results the author reported. At one point, however, I calculate marginal effects that seem unrealistically small. I would greatly appreciate it if you could have a look at my reasoning and the code below and see whether I am mistaken at some point.
My sample contains 24,535 observations; the dependent variable "x028bin" is a binary variable taking the values 0 and 1, and there are 10 explanatory variables. Nine of those independent variables are numeric; the tenth, "f025grouped", is a factor consisting of different religious denominations.
I would like to run a probit regression including dummies for religious denomination and then compute marginal effects. In order to do so, I first eliminate missing values and use cross-tabs between the dependent and independent variables to verify that there are no small or 0 cells. Then I run the probit model which works fine and I also obtain reasonable results:
probit4AKIE <- glm(x028bin ~ x003 + x003squ + x025secv2 + x025terv2 + x007bin + x04chief + x011rec + a009bin + x045mod + c001bin + f025grouped, family=binomial(link="probit"), data=wvshm5red2delna, na.action=na.pass)
summary(probit4AKIE)
However, when calculating marginal effects with all variables at their means from the probit coefficients and a scale factor, the marginal effects I obtain are much too small (e.g. 2.6042e-78).
The code looks like this:
ttt <- cbind(wvshm5red2delna$x003,
wvshm5red2delna$x003squ,
wvshm5red2delna$x025secv2,
wvshm5red2delna$x025terv2,
wvshm5red2delna$x007bin,
wvshm5red2delna$x04chief,
wvshm5red2delna$x011rec,
wvshm5red2delna$a009bin,
wvshm5red2delna$x045mod,
wvshm5red2delna$c001bin,
wvshm5red2delna$f025grouped,
wvshm5red2delna$f025grouped,
wvshm5red2delna$f025grouped,
wvshm5red2delna$f025grouped,
wvshm5red2delna$f025grouped,
wvshm5red2delna$f025grouped,
wvshm5red2delna$f025grouped,
wvshm5red2delna$f025grouped,
wvshm5red2delna$f025grouped) #I put variable "f025grouped" 9 times because this variable consists of 9 levels
ttt <- as.data.frame(ttt)
xbar <- as.matrix(mean(cbind(1,ttt[1:19]))) #1:19 position of variables in dataframe ttt
betaprobit4AKIE <- probit4AKIE$coefficients
zxbar <- t(xbar) %*% betaprobit4AKIE
scalefactor <- dnorm(zxbar)
marginprobit4AKIE <- scalefactor * betaprobit4AKIE[2:20] #2:20 are the positions of variables in the output of the probit model 'probit4AKIE' (variables need to be in the same ordering as in data.frame ttt), the constant in the model occupies the first position
marginprobit4AKIE #in this step I obtain values that are much too small
I apologize that I cannot provide you with a working example, as my dataset is much too large. Any comment would be greatly appreciated. Thanks a lot.
Best,
Tobias
@Gavin is right, and it's better to ask this on the sister site (Cross Validated).
In any case, here's my trick to interpret probit coefficients.
The probit regression coefficients are essentially the same as the logit coefficients, up to a scale factor of about 1.6. So, if the fit of a probit model is Pr(y=1) = Φ(.5 - .3*x), with Φ the standard normal CDF, this is approximately equivalent to the logistic model Pr(y=1) = invlogit(1.6*(.5 - .3*x)).
I use this to make a graphic with the invlogit function from the arm package. Another possibility is to multiply all coefficients (including the intercept) by 1.6 and then apply the 'divide by 4 rule' (see the book by Gelman and Hill): divide the new coefficients by 4, and you obtain an upper bound on the predictive difference corresponding to a unit difference in x.
Here's an example.
x1 <- rbinom(100, 1, .5)
x2 <- rbinom(100, 1, .3)
x3 <- rbinom(100, 1, .9)
ystar <- -.5 + x1 + x2 - x3 + rnorm(100)
y <- ifelse(ystar > 0, 1, 0)
probit <- glm(y ~ x1 + x2 + x3, family = binomial(link = 'probit'))
# means of the predictors, with a leading 1 for the intercept
xbar <- colMeans(cbind(1, x1, x2, x3))
# now the graphic, i.e., the marginal effect of x1, x2 and x3
library(arm)
curve(invlogit(1.6*(probit$coef[1] + probit$coef[2]*x + probit$coef[3]*xbar[3] + probit$coef[4]*xbar[4])))  # x1
curve(invlogit(1.6*(probit$coef[1] + probit$coef[2]*xbar[2] + probit$coef[3]*x + probit$coef[4]*xbar[4])))  # x2
curve(invlogit(1.6*(probit$coef[1] + probit$coef[2]*xbar[2] + probit$coef[3]*xbar[3] + probit$coef[4]*x)))  # x3
This will do the trick for probit or logit:
mfxboot <- function(modform, dist, data, boot = 1000, digits = 3){
  x <- glm(modform, family = binomial(link = dist), data = data)
  # marginal effects: average density evaluated at the linear predictor, times the coefficients
  pdf <- ifelse(dist == "probit",
                mean(dnorm(predict(x, type = "link"))),
                mean(dlogis(predict(x, type = "link"))))
  marginal.effects <- pdf * coef(x)
  # start bootstrap
  bootvals <- matrix(rep(NA, boot * length(coef(x))), nrow = boot)
  set.seed(1111)
  for(i in 1:boot){
    samp1 <- data[sample(1:dim(data)[1], dim(data)[1], replace = TRUE), ]
    x1 <- glm(modform, family = binomial(link = dist), data = samp1)
    # the scale factor must come from the refitted model x1, not the original fit x
    pdf1 <- ifelse(dist == "probit",
                   mean(dnorm(predict(x1, type = "link"))),
                   mean(dlogis(predict(x1, type = "link"))))
    bootvals[i, ] <- pdf1 * coef(x1)
  }
  res <- cbind(marginal.effects,
               apply(bootvals, 2, sd),
               marginal.effects / apply(bootvals, 2, sd))
  if(names(x$coefficients[1]) == "(Intercept)"){
    # drop the intercept row and round to the requested number of digits
    res1 <- res[2:nrow(res), ]
    res2 <- matrix(as.numeric(sprintf(paste("%.", paste(digits, "f", sep = ""), sep = ""), res1)),
                   nrow = dim(res1)[1])
    rownames(res2) <- rownames(res1)
  } else {
    res2 <- matrix(as.numeric(sprintf(paste("%.", paste(digits, "f", sep = ""), sep = ""), res)),
                   nrow = dim(res)[1])
    rownames(res2) <- rownames(res)
  }
  colnames(res2) <- c("marginal.effect", "standard.error", "z.ratio")
  return(res2)
}
Source: http://www.r-bloggers.com/probitlogit-marginal-effects-in-r/
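Applied to the model from the question, the call would look something like this (untested, just plugging in the formula and data frame from above):
mfx <- mfxboot(x028bin ~ x003 + x003squ + x025secv2 + x025terv2 + x007bin + x04chief + x011rec + a009bin + x045mod + c001bin + f025grouped,
               dist = "probit",
               data = wvshm5red2delna)
mfx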
