glmer logit - interaction effects on probability scale (replicating `effects` with `predict`) - r

I am running glmer logit models using the lme4 package. I am interested in various two- and three-way interaction effects and their interpretations. To simplify, I am only concerned with the fixed-effects coefficients.
I managed to come up with code to calculate and plot these effects on the logit scale, but I am having trouble transforming them to the predicted-probability scale. Eventually I would like to replicate the output of the effects package.
The example relies on UCLA's data on cancer patients.
library(lme4)
library(ggplot2)
library(plyr)

# mode of a vector
getmode <- function(v) {
  uniqv <- unique(v)
  uniqv[which.max(tabulate(match(v, uniqv)))]
}
# smallest and largest numeric level of a factor
facmin <- function(n) {
  min(as.numeric(levels(n)))
}
facmax <- function(x) {
  max(as.numeric(levels(x)))
}
hdp <- read.csv("http://www.ats.ucla.edu/stat/data/hdp.csv")
head(hdp)
hdp <- hdp[complete.cases(hdp), ]
hdp <- within(hdp, {
  Married <- factor(Married, levels = 0:1, labels = c("no", "yes"))
  DID <- factor(DID)
  HID <- factor(HID)
  CancerStage <- revalue(hdp$CancerStage, c("I" = "1", "II" = "2", "III" = "3", "IV" = "4"))
})
Up to this point it is all data management, helper functions, and the packages I need.
m <- glmer(remission ~ CancerStage * LengthofStay + Experience +
             (1 | DID), data = hdp, family = binomial(link = "logit"))
summary(m)
This is the model. It takes a minute and it converges with the following warning:
Warning message:
In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge with max|grad| = 0.0417259 (tol = 0.001, component 1)
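A quick way to probe whether such a warning matters is to refit with a different optimizer and compare the fixed effects; a minimal sketch (the bobyqa choice and the comparison are only illustrative):
# Refit with a different optimizer and compare fixed effects; "m" is the model above
m_bobyqa <- update(m, control = glmerControl(optimizer = "bobyqa"))
max(abs(fixef(m) - fixef(m_bobyqa)))  # a value near zero suggests the warning is benign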
Even though I am not quite sure whether I should worry about the warning, I use the estimates to plot the average marginal effects for the interaction of interest. First I prepare the dataset to be fed into the predict function, and then I calculate the marginal effects as well as the confidence intervals using the fixed-effects parameters.
newdat <- expand.grid(
  remission = getmode(hdp$remission),
  CancerStage = as.factor(seq(facmin(hdp$CancerStage), facmax(hdp$CancerStage), 1)),
  LengthofStay = seq(min(hdp$LengthofStay, na.rm = TRUE), max(hdp$LengthofStay, na.rm = TRUE), 1),
  Experience = mean(hdp$Experience, na.rm = TRUE))

mm <- model.matrix(terms(m), newdat)
newdat$remission <- predict(m, newdat, re.form = NA)
pvar1 <- diag(mm %*% tcrossprod(vcov(m), mm))
cmult <- 1.96
## lower and upper CI
newdat <- data.frame(
  newdat,
  plo = newdat$remission - cmult * sqrt(pvar1),
  phi = newdat$remission + cmult * sqrt(pvar1))
I am fairly confident these are correct estimates on the logit scale, but maybe I am wrong. Anyhow, this is the plot:
plot_remission <- ggplot(newdat, aes(LengthofStay,
fill=factor(CancerStage), color=factor(CancerStage))) +
geom_ribbon(aes(ymin = plo, ymax = phi), colour=NA, alpha=0.2) +
geom_line(aes(y = remission), size=1.2) +
xlab("Length of Stay") + xlim(c(2, 10)) +
ylab("Probability of Remission") + ylim(c(0.0, 0.5)) +
labs(colour="Cancer Stage", fill="Cancer Stage") +
theme_minimal()
plot_remission
I think the y-axis is now on the logit scale, but to make sense of it I would like to transform it to predicted probabilities. Based on Wikipedia, something like exp(value)/(exp(value)+1) should do the trick. While I could do newdat$remission <- exp(newdat$remission)/(exp(newdat$remission)+1), I am not sure how I should do this for the confidence intervals.
Eventually I would like to get the same plot that the effects package generates. That is:
library(effects)
eff.m <- effect("CancerStage*LengthofStay", m, KR=T)
eff.m <- as.data.frame(eff.m)
plot_remission2 <- ggplot(eff.m, aes(LengthofStay,
fill=factor(CancerStage), color=factor(CancerStage))) +
geom_ribbon(aes(ymin = lower, ymax = upper), colour=NA, alpha=0.2) +
geom_line(aes(y = fit), size=1.2) +
xlab("Length of Stay") + xlim(c(2, 10)) +
ylab("Probability of Remission") + ylim(c(0.0, 0.5)) +
labs(colour="Cancer Stage", fill="Cancer Stage") +
theme_minimal()
plot_remission2
Even though I could just use the effects package, it unfortunately fails with a lot of the models I had to run for my own work:
Error in model.matrix(mod2) %*% mod2$coefficients :
non-conformable arguments
In addition: Warning message:
In vcov.merMod(mod) :
variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
Fixing that would require adjusting the estimation procedure, which at the moment I would like to avoid. Plus, I am also curious what effects actually does here.
I would be grateful for any advice on how to tweak my initial syntax to get to predicted probabilities!

To obtain a result similar to the effect function used in your question, you just have to back-transform both the predicted values and the boundaries of your confidence interval from the logit scale to the probability scale with the transformation you mention: exp(x)/(1+exp(x)).
This transformation can be done in base R with the plogis function:
> a <- 1:5
> plogis(a)
[1] 0.7310586 0.8807971 0.9525741 0.9820138 0.9933071
> exp(a)/(1+exp(a))
[1] 0.7310586 0.8807971 0.9525741 0.9820138 0.9933071
So, using the proposal from @eipi10 to draw the confidence bands as ribbons instead of dotted lines (I also find this presentation more readable):
ggplot(newdat, aes(LengthofStay, fill=factor(CancerStage), color=factor(CancerStage))) +
geom_ribbon(aes(ymin = plogis(plo), ymax = plogis(phi)), colour=NA, alpha=0.2) +
geom_line(aes(y = plogis(remission)), size=1.2) +
xlab("Length of Stay") + xlim(c(2, 10)) +
ylab("Probability of Remission") + ylim(c(0.0, 0.5)) +
labs(colour="Cancer Stage", fill="Cancer Stage") +
theme_minimal()
The results are the same (with effects_3.1-2 and lme4_1.1-13):
> compare <- merge(newdat, eff.m)
> compare[, c("remission", "plo", "phi")] <-
+ sapply(compare[, c("remission", "plo", "phi")], plogis)
> head(compare)
CancerStage LengthofStay remission Experience plo phi fit se lower upper
1 1 10 0.20657613 17.64129 0.12473504 0.3223392 0.20657613 0.3074726 0.12473625 0.3223368
2 1 2 0.35920425 17.64129 0.27570456 0.4522040 0.35920425 0.1974744 0.27570598 0.4522022
3 1 4 0.31636299 17.64129 0.26572506 0.3717650 0.31636299 0.1254513 0.26572595 0.3717639
4 1 6 0.27642711 17.64129 0.22800277 0.3307300 0.27642711 0.1313108 0.22800360 0.3307290
5 1 8 0.23976445 17.64129 0.17324422 0.3218821 0.23976445 0.2085896 0.17324530 0.3218805
6 2 10 0.09957493 17.64129 0.06218598 0.1557113 0.09957493 0.2609519 0.06218653 0.1557101
> compare$remission-compare$fit
[1] 8.604228e-16 1.221245e-15 1.165734e-15 1.054712e-15 9.714451e-16 4.718448e-16 1.221245e-15 1.054712e-15 8.326673e-16
[10] 6.383782e-16 4.163336e-16 7.494005e-16 6.383782e-16 5.689893e-16 4.857226e-16 2.567391e-16 1.075529e-16 1.318390e-16
[19] 1.665335e-16 2.081668e-16
The differences between the confidence boundaries are larger but still very small:
> compare$plo-compare$lower
[1] -1.208997e-06 -1.420235e-06 -8.815678e-07 -8.324261e-07 -1.076016e-06 -5.481007e-07 -1.429258e-06 -8.133438e-07 -5.648821e-07
[10] -5.806940e-07 -5.364281e-07 -1.004792e-06 -6.314904e-07 -4.007381e-07 -4.847205e-07 -3.474783e-07 -1.398476e-07 -1.679746e-07
[19] -1.476577e-07 -2.332091e-07
But if I use the exact normal quantile, cmult <- qnorm(0.975), instead of cmult <- 1.96, I obtain very small differences for these boundaries as well:
> compare$plo-compare$lower
[1] 5.828671e-16 9.992007e-16 9.992007e-16 9.436896e-16 7.771561e-16 3.053113e-16 9.992007e-16 8.604228e-16 6.938894e-16
[10] 5.134781e-16 2.289835e-16 4.718448e-16 4.857226e-16 4.440892e-16 3.469447e-16 1.006140e-16 3.382711e-17 6.765422e-17
[19] 1.214306e-16 1.283695e-16

Related

How to increase the number of values returned by R MASS::profile?

The short question is: profile() returns 12 parameter values. How can it be made to return a greater number?
The motivation for my question is to reproduce Fig. 1.3 in Applied Logistic Regression 3rd Edition by David W. Hosmer Jr., Stanley Lemeshow and Rodney X. Sturdivant (2009), which plots the profile log likelihood against the coefficient for x = age over the confint() interval.
The glm model was
fit <- glm(chd ~ age, data = chdage, family = binomial(link = "logit"))
which relates the presence or absence of coronary heart disease to age for 100 patients. The results of the model agree with Table 1.3 in the text on p. 10.
For convenience, a csv file of the data is in my gist
Using the guidance provided by Ben Bolker in a 2011 post, where MASS::profile's deviance output is multiplied by -0.5 to convert it to a log-likelihood, together with the tidying function provided by jebyrnes in a later comment on the same post:
library(dplyr)
library(MASS)
library(purrr)
library(ggplot2)  # used for the plot below

get_profile_glm <- function(aglm) {
  prof <- MASS:::profile.glm(aglm)
  disp <- attr(prof, "summary")$dispersion
  purrr::imap_dfr(prof, .f = ~ data.frame(par = .y,
                                          deviance = .x$z^2 * disp + aglm$deviance,
                                          values = as.data.frame(.x$par.vals)[[.y]],
                                          stringsAsFactors = FALSE))
}
pll <- get_profile_glm(fit) %>%
  filter(par == "age") %>%
  mutate(beta = values) %>%
  mutate(pll = deviance * -0.5) %>%
  select(-c(par, values, deviance))
pll
> pll
beta pll
1 0.04895 -57.70
2 0.06134 -56.16
3 0.07374 -55.02
4 0.08613 -54.25
5 0.09853 -53.81
6 0.11092 -53.68
7 0.12332 -53.80
8 0.13571 -54.17
9 0.14811 -54.74
10 0.16050 -55.49
11 0.17290 -56.41
12 0.18529 -57.47
This can be plotted to obtain an approximation of HLS Figure 1.3, with reference lines for the alpha = 0.95 interval taken from confint(fit) and logLik(fit).
The asymmetry reported in the plot title can be calculated with
asymmetry <- function(x) {
  ci <- confint(x, level = 0.95)
  ci_lower <- ci[2, 1]
  ci_upper <- ci[2, 2]
  coeff <- x$coefficients[2]
  round(100 * ((ci_upper - coeff) - (coeff - ci_lower)) / (ci_upper - ci_lower), 2)
}
asym <- asymmetry(fit)
The plot is produced with
ggplot(data = pll, aes(x = beta, y = pll)) +
geom_line() +
scale_x_continuous(breaks = c(0.06, 0.08, 0.10, 0.12, 0.14, 0.16)) +
scale_y_continuous(breaks = c(-57, -56, -55, -54)) +
xlab("Coefficient for age") +
ylab("Profile log-likelihood function") +
geom_vline(xintercept = confint(fit)[2,1]) +
geom_vline(xintercept = confint(fit)[2,2]) +
geom_hline(yintercept = (logLik(fit) - (qchisq(0.95, df = 1)/2))) +
theme_classic() +
ggtitle(paste("Asymmetry =", scales::percent(asym/100, accuracy = 0.1))) +
theme(plot.title = element_text(hjust = 0.5))
Two adjustments are needed:
The curve should be smoothed by the addition of beta values and log likelihood values along the x and y axes, respectively.
The range of beta should be set comparably to approximately [0.0575,0.1625] (visually, from the figure). I assume this can be done by subsetting as required.
A note regarding the logLik y intercept on the figure. It appears to be based on a transposed value of log-likelihood. See Table 1-3 at p. 10, where it is given as -53.676546, compared to the equation on p. 19 where it is transposed to -53.6756.
Thanks to prompting from kjetil b halvorsen and this comment by Ben Bolker in the bbmle package vignette for mle2:
– del Step size (scaled by standard error) (Default: zmax/5.) Presumably (?) copied from MASS::profile.glm, which says (in ?profile.glm): "[d]efault value chosen to allow profiling at about 10 parameter values."
I "solved" my issue with a change to the get_profile_glm function in the original post:
prof <- MASS:::profile.glm(aglm, del = .05, maxsteps = 52)
which yielded 100 points and produced the plot I was hoping for.
I say "solved" because the hard-coded values were determined by trial and error. But it will do for now.

Adjusting ODE model output using a Rogan-Gladen estimator in R

I have made an ODE model in R using the package deSolve. Currently the output of the model gives me the "observed" prevalence of a disease (i.e. the prevalence not accounting for diagnostic imperfection).
However, I want to adjust the model to output the "true" prevalence, using a simple adjustment formula called the Rogan-Gladen estimator (http://influentialpoints.com/Training/estimating_true_prevalence.htm):
True prevalence =
(Apparent prev. + (Specificity-1)) / (Specificity + (Sensitivity-1))
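For reference, here is the estimator written as a small R function (a sketch; the function and argument names are mine, just to make the formula concrete):
# Rogan-Gladen estimator as a standalone function (illustrative names)
rogan_gladen <- function(apparent_prev, se, sp) {
  (apparent_prev + sp - 1) / (se + sp - 1)
}
rogan_gladen(0.20, se = 0.95, sp = 0.92)  # apparent 20% -> roughly 13.8% "true" prevalence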
As you will see in the code below, I have attempted to adjust only one of the differential equations (diggP).
Running the model without adjustment gives an expected output (a proportion between 0 and 1). However, attempting to adjust the model using the RG-estimator gives a spurious output (a proportion less than 0).
Any advice on what might be going wrong here would be very much appreciated.
# Load required packages
library(tidyverse)
library(broom)
library(deSolve)
# Set time (age) for function
time = 1:80
# Defining exponential decay of lambda over age
y1 = 0.003 + (0.15 - 0.003) * exp(-0.05 * time) %>% jitter(10)
df <- data.frame(t = time, y = y1)
fit <- nls(y ~ SSasymp(time, yf, y0, log_alpha), data = df)
fit
# Values of lambda over ages 1-80 years
data <- as.matrix(0.003 + (0.15 - 0.003) * exp(-0.05 * time))
lambda<-as.vector(data[,1])
t<-as.vector(seq(1, 80, by=1))
foi<-cbind(t, lambda)
foi[,1]
# Making lambda varying by time useable in the ODE model
input <- approxfun(x = foi[,1], y = foi[,2], method = "constant", rule = 2)
# Model
ab <- function(time, state, parms) {
  with(as.list(c(state, parms)), {
    # lambda, changing by time
    import <- input(time)
    # Derivatives
    # RG estimator:
    # True prevalence = (apparent prev + (sp - 1)) / (sp + (se - 1))
    diggP <- (((import * iggN) - iggR * iggP) + (sp_igg - 1)) / (sp_igg + (se_igg - 1))
    diggN <- (-import * iggN) + iggR * iggP
    dtgerpP <- (0.5 * import) * tgerpN - tgerpR * tgerpP
    dtgerpN <- (0.5 * -import) * tgerpN + tgerpR * tgerpP
    # Return results
    return(list(c(diggP, diggN, dtgerpP, dtgerpN)))
  })
}
# Initial values
yini <- c(iggP=0, iggN=1,
tgerpP=0, tgerpN=1)
# Parameters
pars <- c(iggR = 0, tgerpR = (1/8)/12,
se_igg = 0.95, sp_igg = 0.92)
# Solve model
results<- ode(y=yini, times=time, func=ab, parms = pars)
# Plot results
plot(results, xlab="Time (years)", ylab="Proportion")
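For comparison, a sketch of an alternative placement of the adjustment (assuming ab is the unadjusted model, i.e. the diggP line without the RG terms): solve the ODE first, then apply Rogan-Gladen to the apparent prevalence afterwards.
# Sketch: assumes `ab` is the *unadjusted* model; adjust after integration
se <- unname(pars["se_igg"])
sp <- unname(pars["sp_igg"])
results_unadj <- ode(y = yini, times = time, func = ab, parms = pars)
true_iggP <- (results_unadj[, "iggP"] + sp - 1) / (se + sp - 1)
plot(time, true_iggP, type = "l",
     xlab = "Time (years)", ylab = "Adjusted (true) proportion")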

PLS regression in R: Testing alternative model specifications

In R, I would like to test the specification of a partial least squares (PLS) model m1 against a non-nested alternative m2, applying the Davidson-MacKinnon J test. For a simple linear outcome Y it works quite well using the plsr estimator followed by the jtest command:
# Libraries and data
library(pls)      # plsr() lives in the pls package
library(plsRglm)
library(lmtest)
Z <- Cornell # illustration dataset coming with the plsRglm package
# Simple linear model
m1 <- plsr(Z$Y ~ Z$X1 + Z$X2 + Z$X3 + Z$X4 + Z$X5 ,2) # including X1
m2 <- plsr(Z$Y ~ Z$X6 + Z$X2 + Z$X3 + Z$X4 + Z$X5 ,2) # including X6 as alternative
jtest(m1,m2)
However, if I use the generalized linear model estimator (plsRglm) to account for a possibly non-normal distribution of the outcome, e.g.:
# Generalized Model
m1 <- plsRglm(Z$Y ~ Z$X1 + Z$X2 + Z$X3 + Z$X4 + Z$X5 ,2, modele = "pls-glm-family", family=Gamma(link = "log"), pvals.expli=TRUE)
m2 <- plsRglm(Z$Y ~ Z$X6 + Z$X2 + Z$X3 + Z$X4 + Z$X5 ,2, modele = "pls-glm-family", family=Gamma(link = "log"), pvals.expli=TRUE)
I am running into an error when using jtest:
> jtest(m1,m2)
Error in terms.default(formula1) : no terms component nor attribute
>
It seems that plsRglm does not store a "formula"/terms component that jtest can handle. Does anybody have a suggestion for how to edit my code to get this to work?
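For reference, the mechanics of the J test are simple: each model is augmented with the rival model's fitted values and the added regressor is tested. With plain glm fits a sketch would look like the following, where yhat1 and yhat2 stand for fitted values (on the response scale) extracted from m1 and m2; how to extract them from plsRglm objects is exactly what I am unsure about.
# J-test mechanics with ordinary glm fits (not a plsRglm-specific recipe)
Z$yhat1 <- yhat1
Z$yhat2 <- yhat2
j1 <- glm(Y ~ X1 + X2 + X3 + X4 + X5 + yhat2, data = Z, family = Gamma(link = "log"))
j2 <- glm(Y ~ X6 + X2 + X3 + X4 + X5 + yhat1, data = Z, family = Gamma(link = "log"))
summary(j1)$coefficients["yhat2", ]  # significant -> evidence against m1's specification
summary(j2)$coefficients["yhat1", ]  # significant -> evidence against m2's specification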
Thanks!

ROCR error: Format of predictions is invalid

After getting my predictions from glmnet, I am trying to use the prediction function in the ROCR package to get tpr, fpr, etc., but I get this error:
pred <- prediction(pred_glmnet_s5_3class, y)
Error in prediction(pred_glmnet_s5_3class, y) :
Format of predictions is invalid.
I have printed both the glmnet predictions and the labels, and they look like they are in a similar format, so I don't understand what is invalid here.
The code is as follows, and the input data can be found here. It is a small dataset and should not take much time to run.
library("ROCR")
library("caret")
sensor6data_s5_3class <- read.csv("/home/sensei/clustering /sensor6data_f21_s5_with3Labels.csv")
sensor6data_s5_3class <- within(sensor6data_s5_3class, Class <- as.factor(Class))
sensor6data_s5_3class$Class2 <- relevel(sensor6data_s5_3class$Class,ref="1")
set.seed("4321")
inTrain_s5_3class <- createDataPartition(y = sensor6data_s5_3class$Class, p = .75, list = FALSE)
training_s5_3class <- sensor6data_s5_3class[inTrain_s5_3class,]
testing_s5_3class <- sensor6data_s5_3class[-inTrain_s5_3class,]
y <- testing_s5_3class[,22]
ctrl_s5_3class <- trainControl(method = "repeatedcv", number = 10, repeats = 10 , savePredictions = TRUE)
model_train_glmnet_s5_3class <- train(Class2 ~ ZCR + Energy + SpectralC + SpectralS + SpectralE + SpectralF + SpectralR + MFCC1 + MFCC2 + MFCC3 + MFCC4 + MFCC5 + MFCC6 + MFCC7 + MFCC8 + MFCC9 + MFCC10 + MFCC11 + MFCC12 + MFCC13, data = training_s5_3class, method="glmnet", trControl = ctrl_s5_3class)
pred_glmnet_s5_3class = predict(model_train_glmnet_s5_3class, newdata=testing_s5_3class, s = "model_train_glmnet_s5_3class$finalModel$lambdaOpt")
pred <- prediction(pred_glmnet_s5_3class, y)
Appreciate your help!
The main problem is that prediction takes "a vector, matrix, list, or data frame" for both predictions and labels arguments. Even though pred_glmnet_s5_3class and y look like vectors, they are not, e.g.
sapply(c(is.vector, is.matrix, is.list, is.data.frame), do.call, list(y))
# [1] FALSE FALSE FALSE FALSE
In fact, they are factors (which can be seen from e.g. class(y)), and ?is.vector informs us that
Note that factors are not vectors; ‘is.vector’ returns ‘FALSE’
and ‘as.vector’ converts a factor to a character vector for ‘mode
= "any"’.
We can convert both objects to numeric:
pred <- prediction(as.numeric(pred_glmnet_s5_3class), as.numeric(y))
# Number of classes is not equal to 2.
# ROCR currently supports only evaluation of binary classification tasks.
Unfortunately, it produces a different problem which is beyond the scope of this question.

How to replicate Stata's "margins at" in R after lm()

From Stata:
margins, at(age=40)
To understand why that yields the desired result, let us tell you that if you were to type
. margins
margins would report the overall margin—the margin that holds nothing constant. Because our model is logistic, the average value of the predicted probabilities would be reported. The at() option fixes one or more covariates to the value(s) specified and can be used with both factor and continuous variables. Thus, if you typed
margins, at(age=40)
then margins would average over the data the responses for everybody, setting age=40.
Could someone point me to a package that could be useful? I already tried taking the mean of the predicted values for a subset of the data, but that does not work for sequences, for example margins, at(age=40(1)50).
There are many ways to get marginal effects in R.
You should understand that Stata's margins, at are simply marginal effects evaluated at means or representative points (see this and the documentation).
I think that you'll like this solution best as it's most similar to what you're used to:
library(devtools)
install_github("leeper/margins")
Source: https://github.com/leeper/margins
margins is an effort to port Stata's (closed source) margins command
to R as an S3 generic method for calculating the marginal effects (or
"partial effects") of covariates included in model objects (like those
of classes "lm" and "glm"). A plot method for the new "margins" class
additionally ports the marginsplot command.
library(margins)
x <- lm(mpg ~ cyl * hp + wt, data = mtcars)
(m <- margins(x))
cyl hp wt
0.03814 -0.04632 -3.11981
See also the prediction command (?prediction) in this package.
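Conceptually, what margins, at(age=40(1)50) computes can also be sketched directly in base R with predict(); the data frame dat, the outcome y, and the covariates here are hypothetical, and this only illustrates the idea, not the margins implementation.
# Average predicted probability with age fixed at each value of a sequence
fit <- glm(y ~ age + x, family = binomial, data = dat)
at_ages <- 40:50
margins_at <- sapply(at_ages, function(a) {
  d <- dat
  d$age <- a                                            # fix age for everybody
  mean(predict(fit, newdata = d, type = "response"))    # average predicted probability
})
cbind(age = at_ages, avg_pred = margins_at)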
Aside from that, here are some other solutions I've compiled:
I. erer (package)
maBina() command
http://cran.r-project.org/web/packages/erer/erer.pdf
II. mfxboot
mfxboot <- function(modform, dist, data, boot = 1000, digits = 3) {
  x <- glm(modform, family = binomial(link = dist), data)
  # get marginal effects
  pdf <- ifelse(dist == "probit",
                mean(dnorm(predict(x, type = "link"))),
                mean(dlogis(predict(x, type = "link"))))
  marginal.effects <- pdf * coef(x)
  # start bootstrap
  bootvals <- matrix(rep(NA, boot * length(coef(x))), nrow = boot)
  set.seed(1111)
  for (i in 1:boot) {
    samp1 <- data[sample(1:dim(data)[1], replace = TRUE, dim(data)[1]), ]
    x1 <- glm(modform, family = binomial(link = dist), samp1)
    # recompute the scale factor from the bootstrap refit
    pdf1 <- ifelse(dist == "probit",
                   mean(dnorm(predict(x1, type = "link"))),
                   mean(dlogis(predict(x1, type = "link"))))
    bootvals[i, ] <- pdf1 * coef(x1)
  }
  res <- cbind(marginal.effects, apply(bootvals, 2, sd), marginal.effects / apply(bootvals, 2, sd))
  if (names(x$coefficients[1]) == "(Intercept)") {
    res1 <- res[2:nrow(res), ]
    res2 <- matrix(as.numeric(sprintf(paste("%.", paste(digits, "f", sep = ""), sep = ""), res1)), nrow = dim(res1)[1])
    rownames(res2) <- rownames(res1)
  } else {
    res2 <- matrix(as.numeric(sprintf(paste("%.", paste(digits, "f", sep = ""), sep = ""), res)), nrow = dim(res)[1])
    rownames(res2) <- rownames(res)
  }
  colnames(res2) <- c("marginal.effect", "standard.error", "z.ratio")
  return(res2)
}
Source: http://www.r-bloggers.com/probitlogit-marginal-effects-in-r/
III. Source: R probit regression marginal effects
x1 = rbinom(100,1,.5)
x2 = rbinom(100,1,.3)
x3 = rbinom(100,1,.9)
ystar = -.5 + x1 + x2 - x3 + rnorm(100)
y = ifelse(ystar>0,1,0)
probit = glm(y~x1 + x2 + x3, family=binomial(link='probit'))
xbar <- as.matrix(colMeans(cbind(1, x1, x2, x3)))  # means of the regressors (intercept column first)
Now the graphic, i.e., the marginal effect of x1, x2 and x3
library(arm)
curve(invlogit(1.6*(probit$coef[1] + probit$coef[2]*x + probit$coef[3]*xbar[3] + probit$coef[4]*xbar[4]))) #x1
curve(invlogit(1.6*(probit$coef[1] + probit$coef[2]*xbar[2] + probit$coef[3]*x + probit$coef[4]*xbar[4]))) #x2
curve(invlogit(1.6*(probit$coef[1] + probit$coef[2]*xbar[2] + probit$coef[3]*xbar[3] + probit$coef[4]*x))) #x3
library(AER)
data(SwissLabor)
mfx1 <- mfxboot(participation ~ . + I(age^2),"probit",SwissLabor)
mfx2 <- mfxboot(participation ~ . + I(age^2),"logit",SwissLabor)
mfx3 <- mfxboot(participation ~ . + I(age^2),"probit",SwissLabor,boot=100,digits=4)
mfxdat <- data.frame(cbind(rownames(mfx1),mfx1))
mfxdat$me <- as.numeric(as.character(mfxdat$marginal.effect))
mfxdat$se <- as.numeric(as.character(mfxdat$standard.error))
# coefplot
library(ggplot2)
ggplot(mfxdat, aes(V1, marginal.effect,ymin = me - 2*se,ymax= me + 2*se)) +
scale_x_discrete('Variable') +
scale_y_continuous('Marginal Effect',limits=c(-0.5,1)) +
theme_bw() +
geom_errorbar(aes(x = V1, y = me),size=.3,width=.2) +
geom_point(aes(x = V1, y = me)) +
geom_hline(yintercept=0) +
coord_flip() +
ggtitle("Marginal Effects with 95% Confidence Intervals")
