Estimate confidence intervals from a model with clustered standard errors - r

I am trying to estimate predicted values of y, with confidence intervals, at specific values of x in an OLS regression. My model includes an interaction term, and I use clustered standard errors and weights.
A similar question was asked and answered previously, and I thought it could be a good starting point:
robust standard errors in ggplot2
The problem is that the solution offered there does not work when there are interaction terms OR weights in the model, yet it does produce an outcome when there are both weights and interaction terms. I found this confusing, but I am relatively new to R and could not pin down the source of the problem.
In the second and third examples (lm2 & lm3) I get "Error in X %*% V : non-conformable arguments". My best guess for the source of the error in the third case is that model.frame(lm3) does not include the interaction term, but I don't know whether I am on the right track and could not find a way to fix it. Besides, it's not clear to me how I can set x1 to a specific value in this example. Can someone help me revise the code below, or offer an alternative way to get the predicted standard errors when x is set to a specific value?
df <- data.frame(x1 = rnorm(100), x2 = rnorm(100), w1 = runif(100, 0.1, 2),
                 y = rnorm(100), group = as.factor(sample(1:10, 100, replace = TRUE)))
lm1 <- lm(y ~ x1 + x2, data = df)
lm2 <- lm(y ~ x1 + x2, data = df, weights = w1)
lm3 <- lm(y ~ x1 * x2, data = df)
lm4 <- lm(y ~ x1 * x2, data = df, weights = w1)
# cluster-robust variance-covariance matrix with a small-sample correction
getvcov <- function(fm, dfcw, cluster) {
  library(sandwich); library(lmtest)
  M <- length(unique(cluster))        # number of clusters
  N <- length(cluster)                # number of observations
  K <- fm$rank                        # number of estimated coefficients
  dfc <- (M/(M-1)) * ((N-1)/(N-K))    # degrees-of-freedom correction
  uj <- apply(estfun(fm), 2, function(x) tapply(x, cluster, sum))
  dfc * sandwich(fm, meat = crossprod(uj)/N) * dfcw
}
V <- getvcov(lm1,1,df$group)
X <- as.matrix(model.frame(lm1))
se <- predict(lm1,se=TRUE)$se.fit
se_robust1 <- sqrt(diag(X %*% V %*% t(X)))
V <- getvcov(lm2,1,df$group)
X <- as.matrix(model.frame(lm2))
se <- predict(lm2,se=TRUE)$se.fit
se_robust2 <- sqrt(diag(X %*% V %*% t(X)))
V <- getvcov(lm3,1,df$group)
X <- as.matrix(model.frame(lm3))
se <- predict(lm3,se=TRUE)$se.fit
se_robust3 <- sqrt(diag(X %*% V %*% t(X)))
V <- getvcov(lm4,1,df$group)
X <- as.matrix(model.frame(lm4))
se <- predict(lm4,se=TRUE)$se.fit
se_robust4 <- sqrt(diag(X %*% V %*% t(X)))
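To illustrate that guess (this check is a sketch added here, not part of the original post), compare what model.frame() and model.matrix() return for lm3, using the objects defined above:
## sketch: model.frame() vs model.matrix() for the interaction model
colnames(model.frame(lm3))      # "y" "x1" "x2"                    -- 3 columns, no interaction column
colnames(model.matrix(lm3))     # "(Intercept)" "x1" "x2" "x1:x2"  -- 4 columns, matches coef(lm3)
dim(getvcov(lm3, 1, df$group))  # 4 4, so X %*% V needs an X with 4 columns
Note also that as.matrix(model.frame()) contains the response y (and, for the weighted fits, a "(weights)" column) rather than the design matrix, which would explain why the dimensions happen to line up for lm4 but not for lm2 or lm3.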

Related

Fitting probit model in R

For my thesis I have to fit some GLM-type models by maximum likelihood that R doesn't provide out of the box. I was doing fine for the models with a closed form, but now I have to use the Gaussian CDF, so I decided to start by fitting a simple probit model.
This is the code:
Data:
set.seed(123)
x <- matrix(rnorm(50, 2, 4), 50, 1)
m <- matrix(runif(50, 2, 4), 50, 1)
t <- matrix(rpois(50, 0.5), 50, 1)
z <- (1 + exp(-((x - mean(x)/sd(x)))))^-1 + runif(50)
y <- ifelse(z < 1.186228, 0, 1)
data1 <- as.data.frame(cbind(y, x, m, t))
myprobit <- function (formula, data)
{
  mf <- model.frame(formula, data)
  y <- model.response(mf, "numeric")
  X <- model.matrix(formula, data = data)
  if (any(is.na(cbind(y, X))))
    stop("Some data are missing.")
  loglik <- function(betas, X, y, sigma) { # log-likelihood
    p <- length(betas)
    beta <- betas[-p]
    eta <- X %*% beta
    sigma <- 1 # because of identification, sigma must be equal to 1
    G <- pnorm(y, mean = eta, sd = sigma)
    sum(y * log(G) + (1 - y) * log(1 - G))
  }
  ls.reg <- lm(y ~ X - 1) # starting values from OLS; the model matrix already includes a constant
  start <- coef(ls.reg)
  fit <- optim(start, loglik, X = X, y = y, control = list(fnscale = -1),
               method = "BFGS", hessian = TRUE) # maximize
  if (fit$convergence > 0) {
    print(fit)
    stop("optim failed to converge!") # verify convergence
  }
  return(fit)
}
myprobit(y ~ x + m + t,data = data1)
And I get: Error in X %*% beta : non-conformable arguments. If I change start <- coef(ls.reg) to start <- c(coef(ls.reg), 1) I get wrong estimates compared with:
probit <- glm(y ~ x + m + t, data = data1, family = binomial(link = "probit"))
What am I doing wrong?
Is it possible to correctly fit this model using pnorm? If not, what algorithm should I use to approximate the Gaussian CDF? Thanks!!
The line of code responsible for your error is the following:
eta <- X %*% beta
Note that "%*%" is the matrix multiplication operator. By reproducing your code I noticed that X is a matrix with 50 rows and 4 columns. Hence, for matrix multiplication to be possible your "beta" needs to have 4 rows. But when you run "betas[-p]" you subset the betas vector by removing its last element, leaving only three elements instead of the four you need for matrix multiplication to be defined. If you remove [-p] the code will work.
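As a quick check (a sketch, not part of the original answer, assuming the question's data1 and variables have been created as above), the dimensions can be verified directly:
## sketch: verify the dimension mismatch described above
X <- model.matrix(y ~ x + m + t, data = data1)
dim(X)                        # 50  4 (intercept, x, m, t)
betas <- coef(lm(y ~ X - 1))  # OLS starting values, as in myprobit
length(betas)                 # 4
length(betas[-length(betas)]) # 3 -> X %*% beta is non-conformable
head(drop(X %*% betas))       # conformable once the last coefficient is kept
Dropping the [-p] only resolves the conformability error; if the estimates still differ from glm's probit fit, one likely reason (not covered in the answer above) is that the log-likelihood evaluates pnorm(y, mean = eta) rather than the usual probit form sum(y * log(pnorm(eta)) + (1 - y) * log(1 - pnorm(eta))).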

Predict function for lm object in R

Why are prediction_me and prediction_R not equal? I'm attempting to follow the formula given by Lemma 5 here. Does the predict function use a different formula, have I made a mistake in my computation somewhere, or is it just rounding error? (the two are pretty close)
set.seed(100)
# generate data
x <- rnorm(100, 10)
y <- 3 + x + rnorm(100, 5)
data <- data.frame(x = x, y = y)
# fit model
mod <- lm(y ~ x, data = data)
# new observation
data2 <- data.frame(x = rnorm(5, 10))
# prediction for new observation
d <- as.matrix(cbind(1, data[,-2]))
d2 <- as.matrix(cbind(1, data2))
fit <- d2 %*% mod$coefficients
t <- qt(1 - .025, mod$df.residual)
s <- summary(mod)$sigma
half <- as.vector(t*s*sqrt(1 + d2%*%solve(t(d)%*%d, t(d2))))
prediction_me <- cbind(fit, fit - half, fit + half)
prediction_R <- predict(mod, newdata = data2, interval = 'prediction')
prediction_me
prediction_R
Your current code is almost fine. Just note that the formula in Lemma 5 is for a single newly observed x. For this reason, half contains not only relevant variances but also covariances, while you only need the former ones. Thus, as.vector should be replaced with diag:
half <- diag(t * s * sqrt(1 + d2 %*% solve(t(d) %*%d , t(d2))))
prediction_me <- cbind(fit, fit - half, fit + half)
prediction_R <- predict(mod, newdata = data2, interval = 'prediction')
range(prediction_me - prediction_R)
# [1] 0 0
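As an aside (not in the original answer), the covariances never need to be formed at all; the diagonal terms can be computed directly with rowSums, avoiding the full matrix product:
half <- t * s * sqrt(1 + rowSums((d2 %*% solve(t(d) %*% d)) * d2))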

How to calculate variance of least squares estimator using QR decomposition in R?

I'm trying to learn QR decomposition, but can't figure out how to get the variance of beta_hat without resorting to traditional matrix calculations. I'm practising with the iris data set, and here's what I have so far:
y<-(iris$Sepal.Length)
x<-(iris$Sepal.Width)
X<-cbind(1,x)
n<-nrow(X)
p<-ncol(X)
qr.X<-qr(X)
b<-(t(qr.Q(qr.X)) %*% y)[1:p]
R<-qr.R(qr.X)
beta<-as.vector(backsolve(R,b))
res<-as.vector(y-X %*% beta)
Thanks for your help!
setup (copying in your code)
y <- iris$Sepal.Length
x <- iris$Sepal.Width
X <- cbind(1,x)
n <- nrow(X)
p <- ncol(X)
qr.X <- qr(X)
b <- (t(qr.Q(qr.X)) %*% y)[1:p] ## can be optimized; see Remark 1 below
R <- qr.R(qr.X) ## can be optimized; see Remark 2 below
beta <- as.vector(backsolve(R, b))
res <- as.vector(y - X %*% beta)
math
The QR factorization gives X = QR with Q orthonormal, so X'X = R'R. The usual OLS result Var(beta_hat) = sigma^2 * (X'X)^-1 therefore becomes sigma^2 * (R'R)^-1, and chol2inv(R) computes (R'R)^-1 directly from R. Only the error variance sigma^2 remains to be estimated.
computation
The residual degrees of freedom are n - p, so the estimated error variance is
se2 <- sum(res ^ 2) / (n - p)
Thus, the variance-covariance matrix of the estimated coefficients is
V <- chol2inv(R) * se2
#             [,1]        [,2]
# [1,]  0.22934170 -0.07352916
# [2,] -0.07352916  0.02405009
validation
Let's check the correctness by comparing with lm:
fit <- lm(Sepal.Length ~ Sepal.Width, iris)
vcov(fit)
#             (Intercept) Sepal.Width
# (Intercept)  0.22934170 -0.07352916
# Sepal.Width -0.07352916  0.02405009
Identical result!
Remark 1 (skip forming 'Q' factor)
Instead of b <- (t(qr.Q(qr.X)) %*% y)[1:p], you can use function qr.qty (to avoid forming 'Q' matrix):
b <- qr.qty(qr.X, y)[1:p]
Remark 2 (skip forming 'R' factor)
You don't have to extract R <- qr.R(qr.X) for backsolve; passing qr.X$qr is sufficient, because backsolve only reads the upper triangle of its first argument, and the upper triangle of qr.X$qr is exactly R:
beta <- as.vector(backsolve(qr.X$qr, b))
Appendix: A function for estimation
The above is the simplest demonstration. In practice, column pivoting and rank-deficiency need to be dealt with. The following is an implementation: X is a model matrix and y is the response. Results should be compared with lm(y ~ X + 0).
qr_estimation <- function (X, y) {
  ## QR factorization
  QR <- qr(X)
  r <- QR$rank
  piv <- QR$pivot[1:r]
  ## estimate identifiable coefficients
  b <- qr.qty(QR, y)[1:r]
  beta <- backsolve(QR$qr, b, r)
  ## fitted values
  yhat <- base::c(X[, piv] %*% beta)
  ## residuals
  resi <- y - yhat
  ## error variance
  se2 <- base::c(crossprod(resi)) / (nrow(X) - r)
  ## variance-covariance for coefficients
  V <- chol2inv(QR$qr, r) * se2
  ## post-processing on pivoting and rank-deficiency
  p <- ncol(X)
  beta_full <- rep.int(NA_real_, p)
  beta_full[piv] <- beta
  V_full <- matrix(NA_real_, p, p)
  V_full[piv, piv] <- V
  ## return
  list(coefficients = beta_full, vcov = V_full,
       fitted.values = yhat, residuals = resi, sig = sqrt(se2))
}
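As a quick usage check (not part of the original answer), the iris example from the question can be run through this function and compared with lm:
## sketch: compare qr_estimation() with lm() on the iris example
X <- cbind(1, iris$Sepal.Width)
y <- iris$Sepal.Length
out <- qr_estimation(X, y)
out$coefficients  # matches unname(coef(lm(y ~ X + 0)))
out$vcov          # matches unname(vcov(lm(y ~ X + 0)))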

Extracting the Linear Discriminant Equation

So I have this data and I would like to extract the coefficients from the equation it produces. That way I would be able to plug in a new data point and see where it would be placed.
library(MASS)
Iris <- data.frame(rbind(iris3[,,1], iris3[,,2], iris3[,,3]),
                   Sp = rep(c("s","c","v"), rep(50,3)))
train <- sample(1:150, 75)
table(Iris$Sp[train])
## your answer may differ
## c s v
## 22 23 30
z <- lda(Sp ~ ., Iris, prior = c(1,1,1)/3, subset = train)
I know I can get this:
> z
Call:
lda(Sp ~ ., data = Iris, prior = c(1, 1, 1)/3, subset = train)
Prior probabilities of groups:
        c         s         v 
0.3333333 0.3333333 0.3333333 

Group means:
  Sepal.L. Sepal.W. Petal.L.  Petal.W.
c 5.969231 2.753846 4.311538 1.3384615
s 5.075000 3.541667 1.500000 0.2583333
v 6.700000 2.936000 5.552000 1.9880000

Coefficients of linear discriminants:
                LD1        LD2
Sepal.L. -0.5458866  0.5215937
Sepal.W. -1.5312824  1.7891248
Petal.L.  1.8087255 -1.2637188
Petal.W.  2.8620894  3.2868849

Proportion of trace:
   LD1    LD2 
0.9893 0.0107 
But is there a way to get just the equation, so I would not have to calculate the result for a new observation by hand?
Just turning this into an answer: you need predict(). The predict.lda method in the MASS package has your exact example on its help page:
tr <- sample(1:50, 25)
train <- rbind(iris3[tr,,1], iris3[tr,,2], iris3[tr,,3])
test <- rbind(iris3[-tr,,1], iris3[-tr,,2], iris3[-tr,,3])
cl <- factor(c(rep("s",25), rep("c",25), rep("v",25)))
z <- lda(train, cl)
predict(z, test)$class
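For a single new observation (the measurements below are made-up values, not from the original post), the same predict() call returns the predicted class, the posterior probabilities, and the discriminant scores, i.e. the "equation" evaluated at the new point:
## sketch: score one hypothetical new flower with the fit z above
new_obs <- matrix(c(6.1, 2.9, 4.7, 1.4), nrow = 1)  # Sepal L., Sepal W., Petal L., Petal W.
predict(z, new_obs)$class      # predicted species
predict(z, new_obs)$posterior  # posterior probability of each class
predict(z, new_obs)$x          # LD1/LD2 scores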
The default method is "plug-in", so this is the relevant code from MASS:::predict.lda; object is the fit object and x comes from the newdata argument, converted to a matrix:
# snipped preamble and error checking
means <- colSums(prior * object$means)
scaling <- object$scaling
x <- scale(x, center = means, scale = FALSE) %*% scaling
dm <- scale(object$means, center = means, scale = FALSE) %*% scaling
method <- match.arg(method)
dimen <- if (missing(dimen)) length(object$svd) else min(dimen, length(object$svd))
N <- object$N
if (method == "plug-in") {
    dm <- dm[, 1L:dimen, drop = FALSE]
    dist <- matrix(0.5 * rowSums(dm^2) - log(prior), nrow(x),
                   length(prior), byrow = TRUE) - x[, 1L:dimen, drop = FALSE] %*% t(dm)
    dist <- exp(-(dist - apply(dist, 1L, min, na.rm = TRUE)))
}
# snipped two other methods
posterior <- dist/drop(dist %*% rep(1, ng))
This is mostly put in to demonstrate why Gregor's answer is the most sensible approach. Trying to pull out an "equation" seems unfruitful. (I can remember using the results of linear regression to do such an exercise in my first-year regression class in grad school.)

profile confidence intervals in R: mle2

I am trying to use the command mle2, in the package bbmle. I am looking at p2 of "Maximum likelihood estimation and analysis with the bbmle package" by Bolker. Somehow I fail to enter the right start values. Here's the reproducible code:
l.lik.probit <- function(par, ivs, dv){
  Y <- as.matrix(dv)
  X <- as.matrix(ivs)
  K <- ncol(X)
  b <- as.matrix(par[1:K])
  phi <- pnorm(X %*% b)
  sum(Y * log(phi) + (1 - Y) * log(1 - phi))
}
n=200
set.seed(1000)
x1 <- rnorm(n)
x2 <- rnorm(n)
x3 <- rnorm(n)
x4 <- rnorm(n)
latentz<- 1 + 2.0 * x1 + 3.0 * x2 + 5.0 * x3 + 8.0 * x4 + rnorm(n,0,5)
y <- latentz
y[latentz < 1] <- 0
y[latentz >=1] <- 1
x <- cbind(1,x1,x2,x3,x4)
values.start <-c(1,1,1,1,1)
foo2 <- mle2(l.lik.probit, start = list(dv = 0, ivs = values.start),
             method = "BFGS", optimizer = "optim", data = list(Y = y, X = x))
And this is the error I get:
Error in mle2(l.lik.probit, start = list(Y = 0, X = values.start), method = "BFGS", :
some named arguments in 'start' are not arguments to the specified log-likelihood function
Any idea why? Thanks for your help!
You've missed a couple of things, but the most important is that by default mle2 takes a list of parameters; you can make it take a parameter vector instead, but you have to work a little bit harder.
I have tweaked the code slightly in places. (I changed the log-likelihood function to a negative log-likelihood function, without which this would never work!)
l.lik.probit <- function(par, ivs, dv){
  K <- ncol(ivs)
  b <- as.matrix(par[1:K])
  phi <- pnorm(ivs %*% b)
  -sum(dv * log(phi) + (1 - dv) * log(1 - phi))
}
n <- 200
set.seed(1000)
dat <- data.frame(x1 = rnorm(n),
                  x2 = rnorm(n),
                  x3 = rnorm(n),
                  x4 = rnorm(n))
beta <- c(1,2,3,5,8)
mm <- model.matrix(~x1+x2+x3+x4,data=dat)
latentz<- rnorm(n,mean=mm%*%beta,sd=5)
y <- latentz
y[latentz < 1] <- 0
y[latentz >=1] <- 1
x <- mm
values.start <- rep(1,5)
Now we do the fit. The main thing is to specify vecpar=TRUE and to use parnames to let mle2 know the names of the elements in the parameter vector ...
library("bbmle")
names(values.start) <- parnames(l.lik.probit) <- paste0("b",0:4)
m1 <- mle2(l.lik.probit, start = values.start,
           vecpar = TRUE,
           method = "BFGS", optimizer = "optim",
           data = list(dv = y, ivs = x))
As pointed out above, for this particular example you have just re-implemented probit regression (although I understand that you now want to extend this to allow for heteroscedasticity in some way ...)
dat2 <- data.frame(dat,y)
m2 <- glm(y ~ x1 + x2 + x3 + x4, family = binomial(link = "probit"),
          data = dat2)
As a final note, I would say that you should check out the parameters argument, which allows you to specify a linear sub-model for any one of the parameters, and the formula interface:
m3 <- mle2(y ~ dbinom(prob = pnorm(eta), size = 1),
           parameters = list(eta ~ x1 + x2 + x3 + x4),
           start = list(eta = 0),
           data = dat2)
PS confint(m1) appears to work fine (giving profile CIs as requested) with this set-up.
ae <- function(x, y) all.equal(unname(coef(x)), unname(coef(y)), tol = 5e-5)
ae(m1, m2) && ae(m2, m3)
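For completeness (a sketch, not part of the original answer), the profile confidence intervals the question title asks about:
confint(m1)  # profile CIs for b0..b4 from the mle2 fit
confint(m2)  # glm's profile CIs on the same model, for comparison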
