I am trying to figure out how to plot the profile likelihood curve of a GLM
parameter, with the 95% profile likelihood confidence intervals (pLCIs) marked
on the same plot. The example I have been trying with is below. The plots I am
getting are not the likelihood curves I was expecting: the y-axis of the plots
is tau, and I would like that axis to be the likelihood, so that I get a curve
that peaks at the parameter estimate. I am not sure where to find those
likelihood values; I may just be misinterpreting the theory behind this.
Thanks for any help you can give.
Max
clotting <- data.frame(
u = c(5,10,15,20,30,40,60,80,100),
lot1 = c(118,58,42,35,27,25,21,19,18),
lot2 = c(69,35,26,21,18,16,13,12,12))
glm2<-glm(lot2 ~ log(u), data=clotting, family=Gamma)
prof<-profile(glm2)
plot(prof)
Regenerating your example:
clotting <- data.frame(
u = c(5,10,15,20,30,40,60,80,100),
lot1 = c(118,58,42,35,27,25,21,19,18),
lot2 = c(69,35,26,21,18,16,13,12,12))
glm2 <- glm(lot2 ~ log(u), data=clotting, family=Gamma)
The profile.glm function actually lives in the MASS package:
library(MASS)
prof<-profile(glm2)
In order to figure out what profile.glm and plot.profile are doing, see ?profile.glm and ?plot.profile. However, in order to dig into the profile object it may also be useful to examine the code of MASS:::profile.glm and MASS:::plot.profile. Basically, what these tell you is that profile returns the signed square root of the difference between the deviance and the minimum deviance, scaled by the dispersion parameter. The reason this is done is so that a perfectly quadratic profile will appear as a straight line (it's much easier to detect deviations from a straight line than from a parabola by eye).
The other thing that may be useful to know is how the profile is stored. Basically, it's a list of data frames (one for each parameter profiled), except that the individual data frames are a little bit weird (containing one vector component and one matrix component).
> str(prof)
List of 2
$ (Intercept):'data.frame': 12 obs. of 3 variables:
..$ tau : num [1:12] -3.557 -2.836 -2.12 -1.409 -0.702 ...
..$ par.vals: num [1:12, 1:2] -0.0286 -0.0276 -0.0267 -0.0258 -0.0248 ...
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : NULL
.. .. ..$ : chr [1:2] "(Intercept)" "log(u)"
..$ dev : num [1:12] 0.00622 0.00753 0.00883 0.01012 0.0114 ...
$ log(u) :'data.frame': 12 obs. of 2 variables:
..$ tau : num [1:12] -3.516 -2.811 -2.106 -1.403 -0.701 ...
..$ par.vals: num [1:12, 1:2] -0.0195 -0.0204 -0.0213 -0.0222 -0.023 ...
.. ..- attr(*, "dimnames")=List of 2
It also contains attributes summary and original.fit that you can use to recover the dispersion and minimum deviance:
disp <- attr(prof,"summary")$dispersion
mindev <- attr(prof,"original.fit")$deviance
Now reverse the transformation for parameter 1:
dev1 <- prof[[1]]$tau^2
dev2 <- dev1*disp+mindev
Plot:
plot(prof[[1]][,1],dev2,type="b")
(This is the plot of the deviance. You can multiply by 0.5 to get the negative log-likelihood, or -0.5 to get the log-likelihood ...)
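Since the original question asked for a curve that peaks at the estimate with the 95% limits marked, here is one possible extension of the above (my addition, not part of the original answer): plot against the parameter values from par.vals, rescale the profile deviance to a relative likelihood, and draw the cutoff corresponding to tau = ±1.96, which is what confint() uses for a 95% profile interval. With an estimated dispersion, as in this Gamma model, this is a scaled (approximate) likelihood rather than an exact one.
## relative profile likelihood for the intercept (peaks at 1 at the MLE);
## assumes prof, disp, mindev and dev2 from the code above
rel.lik <- exp(-0.5 * (dev2 - mindev) / disp)   # = exp(-tau^2 / 2)
plot(prof[[1]]$par.vals[, "(Intercept)"], rel.lik, type = "b",
     xlab = "(Intercept)", ylab = "relative profile likelihood")
abline(h = exp(-qchisq(0.95, 1) / 2), lty = 2)  # ~0.147, the 95% cutoff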
edit: some more general functions to transform the profile into a useful format for lattice/ggplot plotting ...
tmpf <- function(x, n) {
  data.frame(par = n, tau = x$tau,
             deviance = x$tau^2*disp + mindev,
             x$par.vals, check.names = FALSE)
}
pp <- do.call(rbind,mapply(tmpf,prof,names(prof),SIMPLIFY=FALSE))
library(reshape2)
pp2 <- melt(pp,id.var=1:3)
pp3 <- subset(pp2,par==variable,select=-variable)
Now plot it with lattice:
library(lattice)
xyplot(deviance~value|par,type="b",data=pp3,
scales=list(x=list(relation="free")))
Or with ggplot2:
library(ggplot2)
ggplot(pp3,aes(value,deviance))+geom_line()+geom_point()+
facet_wrap(~par,scale="free_x")
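To mark the 95% profile limits on these facetted deviance plots (again my addition, not from the original answer), you can add a horizontal line at the deviance cutoff that corresponds to tau = ±1.96:
## 95% cutoff on the deviance scale (assumes disp and mindev from above)
ggplot(pp3, aes(value, deviance)) + geom_line() + geom_point() +
  geom_hline(yintercept = mindev + qchisq(0.95, 1) * disp, linetype = 2) +
  facet_wrap(~par, scale = "free_x")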
FYI, for fun, I took the above and whipped it together into a single function using purrr::imap_dfr, as I couldn't find a package that already implements this.
get_profile_glm <- function(aglm){
  prof <- MASS:::profile.glm(aglm)
  disp <- attr(prof, "summary")$dispersion
  ## note: depending on the MASS/R version the profile column is named "z" or
  ## "tau" (the str() output above shows "tau"); adjust .x$z if needed
  purrr::imap_dfr(prof, .f = ~data.frame(par = .y,
                                         deviance = .x$z^2*disp + aglm$deviance,
                                         values = as.data.frame(.x$par.vals)[[.y]],
                                         stringsAsFactors = FALSE))
}
Works great!
counts <- c(18,17,15,20,10,20,25,13,12)
outcome <- gl(3,1,9)
treatment <- gl(3,3)
print(d.AD <- data.frame(treatment, outcome, counts))
glm.D93 <- glm(counts ~ outcome + treatment, family = poisson())
ggplot(get_profile_glm(glm.D93), aes(x = values, y = deviance)) +
geom_point() +
geom_line() +
facet_wrap(~par, scale = "free_x")
How do you get the random-intercept effect estimates from an lme4 result object?
set.seed(247)
# Create Data
n=1000
x = runif(n)
id = rep(NA,n)
for (i in 1:10) {
id_s = (i-1)*100+1
id_e = i*100
id[id_s:id_e] = i
}
effects = rnorm(10)
lp = -0.5+0.5*x + effects[id]
probs = exp(lp)/(1+exp(lp))
Y2 = rbinom(n, 1, probs)
library(lme4)
fit_glmm2 = glmer(Y2 ~ x + (1|id), family = "binomial",control = glmerControl(calc.derivs = FALSE))
I thought maybe they were the u values, but there's a slight difference between them:
yy = coef(fit_glmm2) # looking only at the intercept
fit_glmm2@u + fit_glmm2@beta[1]
If you want the random effects, ranef() is the best way to get them:
r <- ranef(fit_glmm2)
str(r)
## List of 1
## $ id:'data.frame': 10 obs. of 1 variable:
## ..$ (Intercept): num [1:10] -0.693 0.297 0.54 -0.467 0.755 ...
## ..- attr(*, "postVar")= num [1, 1, 1:10] 0.0463 0.0385 0.0392 0.0436 0.0409 ...
## - attr(*, "class")= chr "ranef.mer"
raw <- unname(unlist(ranef(fit_glmm2)$id))
identical(raw, fit_glmm2@u*fit_glmm2@theta) ## TRUE
As described in vignette("lmer", package = "lme4"), the @u values are the spherical random effects, i.e. they're iid N(0,1) and need to be transformed to get to the random effects b used in the formula X %*% beta + Z %*% b. In this case (an intercept-only RE), theta corresponds to the standard deviation of the random effect. u*theta won't work for more complicated cases; for those you need getME(fit_glmm2, "Lambda") %*% getME(fit_glmm2, "u").
getME(., "b") will also work, but again for more complex models you'll have to work out how the b-vector is split into random intercepts, slopes, different RE terms, etc..
Turns out you can get them by multiplying the u parameter by the theta parameter, or by calling getME(., "b"):
yy = coef(fit_glmm2) # looking only at the intercept
fit_glmm2@u*fit_glmm2@theta + fit_glmm2@beta[1] # or
# getME(fit_glmm2,"b") + fit_glmm2@beta[1]
Background
I want to plot the hazard ratio over time, including its confidence intervals, of a survival dataset. As an example, I will take a simplified dataset from the survival package: the colon dataset.
library(survival)
library(tidyverse)
# Colon survival dataset
data <- colon %>%
filter(etype == 2) %>%
select(c(id, rx, status, time)) %>%
filter(rx == "Obs" | rx == "Lev+5FU") %>%
mutate(rx = factor(rx))
The dataset contains patients that received a treatment (i.e., "Lev+5FU") and patients that did not (i.e., "Obs"). The survival curves are as follows:
fit <- survfit(Surv(time, status) ~ rx, data = data )
plot(fit)
Attempt
Using the cox.zph function, you can plot the hazard ratio of a Cox model.
cox <- coxph(Surv(time, status) ~ rx, data = data)
plot(cox.zph(cox))
However, I want to plot the hazard ratio including 95% CI for this survival dataset using ggplot.
Question(s)
How do you extract the hazard ratio data and the 95% CIs from this cox.zph object to plot them in ggplot?
Are there other R packages that enable doing the same in a more convenient way?
Note: it's important to recognize the correction of Dion Groothof. The lines and CIs are not really hazard ratios; they are estimates and bounds around time-varying log hazard ratios. You would need to exponentiate to get HRs.
The values are in the result returned from cox.zph:
str(cox.zph(cox))
#----------------------
List of 7
$ table : num [1:2, 1:3] 1.188 1.188 1 1 0.276 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:2] "rx" "GLOBAL"
.. ..$ : chr [1:3] "chisq" "df" "p"
$ x : num [1:291] 0 0.00162 0.00323 0.00485 0.00646 ...
$ time : num [1:291] 23 34 45 52 79 113 125 127 138 141 ...
$ y : num [1:291, 1] 2.09 2.1 2.1 2.1 2.11 ...
..- attr(*, "dimnames")=List of 2
.. ..$ : chr [1:291] "23" "34" "45" "52" ...
.. ..$ : chr "rx"
$ var : num [1, 1] 4.11
$ transform: chr "km"
$ call : language cox.zph(fit = cox)
- attr(*, "class")= chr "cox.zph"
To get a plot with any of the paradigms (base, lattice, or ggplot2), use time as the x axis, the x component for the solid line, and y for the points:
z <- cox.zph(cox)
ggdf <- data.frame( unclass(z)[c("time", "x","y")])
ggplot(data=ggdf, aes(x=time, y=-x))+
geom_line()+ ylim(range(z$y))+
geom_point(aes(x=time,y=z$y) )
To get the CIs, look at getAnywhere(plot.cox.zph); the relevant pieces are:
xx <- x$x
yy <- x$y
df <- max(df)
nvar <- ncol(yy)
pred.x <- seq(from = min(xx), to = max(xx), length = nsmo)
#------------
if (se) {
bk <- backsolve(qmat$qr[1:df, 1:df], diag(df))
xtx <- bk %*% t(bk)
seval <- ((pmat %*% xtx) * pmat) %*% rep(1, df)
temp <- 2 * sqrt(x$var[i, i] * seval)
yup <- yhat + temp
ylow <- yhat - temp
yr <- range(yr, yup, ylow)
#---------------
if (se) {
lines(pred.x, exp(yup), col = col[2], lty = lty[2],
lwd = lwd[2])
lines(pred.x, exp(ylow), col = col[2], lty = lty[2],
lwd = lwd[2])
}
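Putting those internals together, here is a sketch (assembled from the excerpt above, so only approximate and version-dependent; newer survival releases use a slightly different smoother and scale the variance by the number of events) that builds a ggplot-ready data frame with the smoothed log hazard ratio and roughly 95% pointwise bands for the single covariate rx:
library(splines)
z    <- cox.zph(cox)          # cox as fitted above
xx   <- z$x                   # KM-transformed time
yy   <- z$y[, 1]              # scaled Schoenfeld residuals for rx
df   <- 4; nsmo <- 40
pred.x <- seq(min(xx), max(xx), length.out = nsmo)
lmat <- ns(c(pred.x, xx), df = df, intercept = TRUE)
pmat <- lmat[1:nsmo, ]         # spline basis at prediction points
qmat <- qr(lmat[-(1:nsmo), ])  # spline basis at the data points
yhat  <- as.vector(pmat %*% qr.coef(qmat, yy))
bk    <- backsolve(qmat$qr[1:df, 1:df], diag(df))
seval <- as.vector(((pmat %*% (bk %*% t(bk))) * pmat) %*% rep(1, df))
half  <- 2 * sqrt(z$var[1, 1] * seval)   # ~95% pointwise half-width
band <- data.frame(x = pred.x, fit = yhat,
                   low = yhat - half, up = yhat + half)
ggplot(band, aes(x, fit)) +
  geom_ribbon(aes(ymin = low, ymax = up), alpha = 0.2) +
  geom_line() +
  labs(x = "KM-transformed time", y = "Beta(t) for rx (log hazard ratio)")
As noted above, these are log hazard ratios; exponentiate fit, low and up if you want the bands on the hazard-ratio scale.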
The survminer package will do this for you:
library(survminer)
ggcoxzph(cox.zph(cox))
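If I remember the survminer API correctly (worth verifying), ggcoxzph() returns a list of ggplot objects, one per covariate, so you can also pull a panel out and restyle it:
gg <- ggcoxzph(cox.zph(cox))
gg[[1]] + theme_bw()   # the rx panel, with a different theme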
I am trying to run a linear model in R that does not specify an intercept. The reason is to eventually calculate the reduction in sums of squares when an intercept is added. However, I am receiving different results when specifying this model using built-in factor contrasts versus explicitly stating the contrast values (i.e., -.5 and .5).
More specifically, using contrasts() results in a model with 2 terms (no intercept) while explicitly stating the contrast values via a column vector results in the correct model (no intercept and 1 term specifying the contrast).
group <- rep(c("c", "t"), each = 5)
group_cont <- rep(c(-.5, .5), each = 5)
var1 <- runif(10)
var2 <- runif(10)
test_data <- data.frame(
group = factor(group),
group_cont = group_cont,
y = var1,
x = var2
)
contrasts(test_data$group) <- cbind(grp = c(-.5, .5))
summary(lm(y ~ 1 + group, data = test_data)) # full model
summary(lm(y ~ 0 + group, data = test_data)) # weird results
summary(lm(y ~ 0 + group_cont, data = test_data)) # expected
Is there a way to specify a linear model without an intercept, but still use contrasts() to specify the contrast?
lm() asks for a data frame and column names as inputs. When you use contrasts(), you are assigning an attribute to the column in your data frame, which you can inspect directly using the contrasts() function or attr(). However, you are not changing the data type itself. Using your example above:
> str(test_data)
'data.frame': 10 obs. of 4 variables:
$ group : Factor w/ 2 levels "c","t": 1 1 1 1 1 2 2 2 2 2 #### still a factor ####
..- attr(*, "contrasts")= num [1:2, 1] -0.5 0.5 #### NOTE The contrast attribute ####
.. ..- attr(*, "dimnames")=List of 2
.. .. ..$ : chr "c" "t"
.. .. ..$ : chr "grp"
$ group_cont: num -0.5 -0.5 -0.5 -0.5 -0.5 0.5 0.5 0.5 0.5 0.5
$ y : num 0.161 0.518 0.417 0.335 0.301 ...
$ x : num 0.34 0.729 0.766 0.629 0.191 ...
> attr(test_data$group, "contrasts")
grp
c -0.5
t 0.5
So an attribute was added, but the type is still a factor, and lm() treats it like a factor, giving you a coefficient for each level. Moreover, providing contrasts() or calling attr() inside lm() will throw an error. Depending on what you want the end result to look like, you may need to explore a different package such as contrast. There is also a contrasts argument in lm(), but I am not 100% sure that is what you are really looking for; see ?lm for more on that.
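One possible workaround (my sketch, not part of the answer above): keep the contrasts() attribute, let model.matrix() apply it, and hand the resulting contrast-coded column to lm() yourself. With the -.5/.5 coding this reproduces the group_cont fit:
## model.matrix() honours the contrasts attribute set earlier
X <- model.matrix(~ group, data = test_data)
test_data$grp_cc <- X[, "groupgrp"]              # the -0.5 / 0.5 coded column
summary(lm(y ~ 0 + grp_cc, data = test_data))    # matches lm(y ~ 0 + group_cont)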
While fitting a smooth coefficient kernel regression with the help of npscoef {np} in R, I cannot output the standard errors for the regression estimates.
The Help states that if errors = TRUE, asymptotic standard errors should be computed and returned in the resulting smoothcoefficient object.
Based on the example provided by the authors of the np package:
library("np")
data(wage1)
NP.Ydata <- wage1$lwage
NP.Xdata <- wage1[c("educ", "tenure", "exper", "expersq")]
NP.Zdata <- wage1[c("female", "married")]
NP.bw.scoef <- npscoefbw(xdat=NP.Xdata, ydat=NP.Ydata, zdat=NP.Zdata)
NP.scoef <- npscoef(NP.bw.scoef,
betas = TRUE,
residuals = TRUE,
errors = TRUE)
The coefficients are available via coef(NP.scoef) because betas = TRUE was set:
> str(coef(NP.scoef))
num [1:526, 1:5] 0.146 0.504 0.196 0.415 0.415 ...
- attr(*, "dimnames")=List of 2
..$ : NULL
..$ : chr [1:5] "Intercept" "educ" "tenure" "exper" ...
But shouldn't the standard errors for the estimates be saved when errors = TRUE?
I see only one column vector, not 5 (intercept + 4 explanatory variables).
> str(se(NP.scoef))
num [1:526] 0.015 0.0155 0.0155 0.0268 0.0128 ...
I am confused and would appreciate a clarification.
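For reference, the full structure of the object returned with errors = TRUE can be inspected directly, which should show whatever error components were actually stored:
str(NP.scoef, max.level = 1)
names(NP.scoef)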
From this question, I was wondering whether it is possible to extract the quadratic discriminant analysis (QDA) scores and reuse them afterwards, like PCA scores.
## follow example from ?lda
library(MASS)
Iris <- data.frame(rbind(iris3[,,1], iris3[,,2], iris3[,,3]),
Sp = rep(c("s","c","v"), rep(50,3)))
set.seed(1) ## remove this line if you want it to be pseudo random
train <- sample(1:150, 75)
table(Iris$Sp[train])
## your answer may differ
## c s v
## 22 23 30
Using the QDA here
z <- qda(Sp ~ ., Iris, prior = c(1,1,1)/3, subset = train)
## get the whole prediction object
pred <- predict(z)
## show first few sample scores on LDs
Here, you can see that it's not working.
head(pred$x)
# NULL
plot(LD2 ~ LD1, data = pred$x)
# Error in eval(expr, envir, enclos) : object 'LD2' not found
NOTE: Too long/formatted for a comment. NOT AN ANSWER
You may want to try the rrcov package:
library(rrcov)
z <- QdaCov(Sp ~ ., Iris[train,], prior = c(1,1,1)/3)
pred <- predict(z)
str(pred)
## Formal class 'PredictQda' [package "rrcov"] with 4 slots
## ..# classification: Factor w/ 3 levels "c","s","v": 2 2 2 1 3 2 2 1 3 2 ...
## ..# posterior : num [1:41, 1:3] 5.84e-45 5.28e-50 1.16e-25 1.00 1.48e-03 ...
## ..# x : num [1:41, 1:3] -97.15 -109.44 -54.03 2.9 -3.37 ...
## ..# ct : 'table' int [1:3, 1:3] 13 0 1 0 16 0 0 0 11
## .. ..- attr(*, "dimnames")=List of 2
## .. .. ..$ Actual : chr [1:3] "c" "s" "v"
## .. .. ..$ Predicted: chr [1:3] "c" "s" "v"
It also has robust PCA methods that may be useful.
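To reuse the scores the way you would PCA scores, they can be pulled out of the x slot of the prediction object; a quick sketch using the pred object above (here x has one column per class):
scores <- as.data.frame(pred@x)   # discriminant scores, one column per class
head(scores)
## e.g. plot the first two score columns, coloured by predicted class
plot(scores[, 1], scores[, 2],
     col = as.integer(pred@classification),
     xlab = "score 1", ylab = "score 2")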
Unfortunately, not every model in R conforms to the same object structure/API, and QDA is not a linear model, so it will not conform to the APIs you may expect from linear-model fits.
There's an example of how to visualize the qda results here — http://ramhiser.com/2013/07/02/a-brief-look-at-mixture-discriminant-analysis/
And, you can do:
library(klaR)
partimat(Sp ~ ., data=Iris, method="qda", subset=train)
for a partition plot of the qda results.