Is it possible to have similar standard errors for marginal effects under a probit regression for all estimates? - r

Data: Data
Code:
## Regression
probit_enae <- glm(emploi ~ genre + filiere + satisfaction + competence + anglais,
                   family = binomial(link = "probit"), data = ENAE_Probit.df)
summary(probit_enae) #Summary output of the regression
confint(probit_enae) #Gives the 95% confidence interval for the estimated coefficients
## Marginal effects
mfx_enae = mfx(probit_enae)
So, when I run the mfx command to get the marginal effects, it works. But the reported standard error (9.901) is identical for every estimated parameter. Is this normal behavior?
Thanks
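For reference, a sketch of how probit marginal effects are usually obtained with the mfx package (an assumption about which package the question means): probitmfx() reports a separate delta-method standard error for each marginal effect, so an identical SE across all parameters would be unexpected.

```r
# Sketch, assuming the mfx package and the data/variable names above.
library(mfx)

mfx_enae <- probitmfx(emploi ~ genre + filiere + satisfaction + competence + anglais,
                      data = ENAE_Probit.df, atmean = TRUE)
mfx_enae$mfxest  # one row per regressor: dF/dx, Std. Err., z, P>|z|
```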

Related

R coxme: How to get study-specific treatment effect and 95% confidence interval from a mixed effect model?

I am using coxme to fit a mixed-effects Cox model. I have two random effects in the model (intercept [baseline hazard] and slope of the treatment). After running the model, I got the fixed coefficients and random effects output:
Fixed coefficients
                 coef  exp(coef)    se(coef)     z     p
treatment  0.42046371  1.5226675  0.28313885  1.49  0.14
Random effects
Group  Variable   Std Dev    Variance   Corr
Study  Intercept  1.3979618  1.9542971  -0.6995296
       treatment  0.4815433  0.2318839
I also used ranef() to get the random-effect coefficients for each study. Output:
        Intercept   treatment
Study1  -1.591497   0.5479255
Study2   1.276003  -0.4392570
Study3  -1.051512   0.3621938
Study4   1.367005  -0.4708623
I was wondering how I can obtain the treatment effect (coefficient) and 95% CI for each study separately. Can I get the point estimate by summing the overall fixed-effect coefficient and the study-specific random effect (e.g. 0.420463710 + 0.5479255 for Study1)? But what about the 95% CI?
Is there any way to get the study-specific treatment effect and 95% CI (and the study's weight), like in ipdforest in Stata?
Thank you very much.
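A sketch of the point-estimate part (assuming the fitted coxme object is called fit and the grouping factor is Study, as in the output above). Note that a proper 95% CI would also need the variance of each BLUP and its covariance with the fixed coefficient, which coxme does not expose directly; fitting the studies separately or using a Bayesian model are common fallbacks.

```r
# Sketch: study-specific treatment effect = fixed coefficient + random slope (BLUP)
fix_trt   <- fixef(fit)["treatment"]
re_trt    <- ranef(fit)$Study[, "treatment"]
study_trt <- fix_trt + re_trt   # e.g. 0.420463710 + 0.5479255 for Study1
exp(study_trt)                  # study-specific hazard ratios
```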

How to get confidence interval for hypothesis test of non-linear multiple parameters

I am trying to do something that seems very simple, yet I cannot find any good advice out there. I would like to get the confidence interval for a non-linear combination of two coefficients in a regression model. I can use linearHypothesis() to conduct an F-test and get the p-value for a linear combination. The code I ran for that part is:
reg4 <- lm(bpsys ~ current_tobac + male + wtlb + age, data=NAMCS2010)
linearHypothesis(reg4, "current_tobac + male = 0")
I can use glht() from the multcomp package to get the confidence interval for a linear combination of parameters:
confcm <- summary(glht(reg4, linfct = c("current_tobac + male = 0")))
confint(confcm)
But I'm not sure what to do for a non-linear combination like summary(reg4)$coefficients[2] / summary(reg4)$coefficients[4].
Any advice?
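One standard route for a non-linear combination is the delta method; a sketch assuming the msm package and the reg4 model above (in deltamethod()'s formula, x2 and x4 refer to the 2nd and 4th coefficients, matching the ratio in the question):

```r
# Sketch: delta-method 95% CI for a ratio of two coefficients of reg4
library(msm)

b   <- coef(reg4)
V   <- vcov(reg4)
est <- b[2] / b[4]
se  <- deltamethod(~ x2 / x4, mean = b, cov = V)  # first-order (delta-method) SE
est + c(-1, 1) * qnorm(0.975) * se                # approximate 95% CI
```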

How can I get CI of extracted fixed effect estimates from a linear mixed effects model?

I have a linear mixed effects model that looks like this:
model.1 <- lmer(x ~ 0 + treatment + (1|block), data)
I pulled out the fixed effect estimates from the model:
data$FittedValues <- fixef(model.1)
I made a histogram of the fitted values, and I need to know the 95% CI of the fitted values. I tried confint(), which gives a CI for each treatment, but what I need is a CI for the entire set of fitted values. I can run a t-test on the fitted values, but I don't think this gives me the correct answer.
t.test(data$FittedValues,
       alternative = "two.sided",
       conf.level = 0.95)  # note: t.test() has no na.rm argument; drop NAs beforehand if needed
I am new to stats and R, but I searched for quite some time and couldn't find an answer. Please excuse me if this is too simple a question for this board.
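If the target really is a single interval for the collection of fixed-effect estimates, one sketch treats them as a sample and builds a t-interval for their mean (whether that is the right quantity depends on the scientific question):

```r
# Sketch: t-interval for the mean of the fixed-effect estimates of model.1
fe <- fixef(model.1)
n  <- length(fe)
mean(fe) + qt(c(0.025, 0.975), df = n - 1) * sd(fe) / sqrt(n)
# equivalently: t.test(fe, conf.level = 0.95)$conf.int
```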

Quantile regression analysis in R

I have noticed that whenever I try to plot the coefficient graphs with their confidence intervals (CI) alongside the OLS coefficients and their CIs, I get an error whenever I force the regression through the origin.
So if I use this code (engel is data for a quantile regression example in R):
data(engel)
fit1 <- rq(foodexp ~ income, tau = c(0.1,0.25,0.5,0.75,0.9), data = engel)
plot(summary(fit1))
I have no problem and my coefficient graphs are drawn. But if I use this:
data(engel)
fit1 <- rq(foodexp ~ 0+income, tau = c(0.1,0.25,0.5,0.75,0.9), data = engel)
plot(summary(fit1))
I have a problem because the regression goes through the origin. How can I get the same plots as in the first code for the quantile regression without an intercept?
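A hedged guess at a workaround: plot.summary.rqs() superimposes an OLS fit by default, and that overlay assumes a model with an intercept; its ols argument can switch the overlay off (assuming a current quantreg):

```r
# Sketch: plot the no-intercept quantile fits without the OLS overlay
library(quantreg)
data(engel)

fit2 <- rq(foodexp ~ 0 + income, tau = c(0.1, 0.25, 0.5, 0.75, 0.9), data = engel)
plot(summary(fit2), ols = FALSE)  # ols = FALSE drops the OLS reference line/band
```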

How to estimate the odds ratio with CI for X in a logistic regression containing the square of X using R?

I am trying to calculate odds ratios in R for variables with not only linear but also quadratic terms in logistic regression. Let's say there is X and X^2 in the model. I know how to get the odds ratio (for a unit change of X) when X takes a specific value, but I don't know how to calculate a confidence interval for this estimate. I found this reference showing how it's done in SAS: http://support.sas.com/kb/35/189.html , but I would like to do it in R. Any suggestions?
@BenBolker
Here is an example:
mydata <-read.csv("http://www.ats.ucla.edu/stat/data/binary.csv")
mydata <- transform(mydata, gpaSquared=gpa^2,greSquared=gre^2)
model <- glm(admit ~ gpa + gpaSquared + gre , family = binomial(logit), data = mydata)
In this example the odds ratio for gpa depends on the actual value of gpa (e.g. the effect of a unit change in gpa if gpa=4). I can calculate the log odds for gpa=5 and gpa=4 and get the odds ratio from those, but I don't know how to get a CI for the OR. (Please ignore that in the example the squared term is not statistically significant...)
m <- glm(x ~ X1 + I(X1^2) + X2, data = data, family = binomial(link = "logit"))  # note: inside a formula, X1^2 must be wrapped in I() to actually square the variable
summary(m)
confint(m) # 95% CI for the coefficients using profiled log-likelihood
confint.default(m) ## CIs using standard errors
exp(coef(m)) # exponentiated coefficients
exp(confint(m)) # 95% CI for exponentiated coefficients
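For the quadratic model above, the log odds ratio of a one-unit increase at gpa = x0 is b_gpa + ((x0+1)^2 - x0^2) * b_gpaSquared, and its variance follows from vcov(); a sketch (x0 = 3 is an arbitrary illustration):

```r
# Sketch: delta-method 95% CI for the OR of a one-unit increase in gpa at gpa = x0,
# using the fitted `model` from above (coefficients gpa and gpaSquared)
x0 <- 3
k  <- (x0 + 1)^2 - x0^2   # = 2 * x0 + 1
b  <- coef(model)
V  <- vcov(model)

logOR <- b["gpa"] + k * b["gpaSquared"]
se    <- sqrt(V["gpa", "gpa"] + k^2 * V["gpaSquared", "gpaSquared"] +
              2 * k * V["gpa", "gpaSquared"])
exp(logOR + c(-1, 0, 1) * qnorm(0.975) * se)  # lower, estimate, upper
```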
