Incorrect logistic regression output - R

I'm doing logistic regression on the Boston data with a column high.medv (yes/no) which indicates whether the median house price given by the column medv is greater than 25 or not.
Below is my code for logistic regression.
# Applying the desired condition to medv and storing the result in a new variable called "high.medv"
high.medv <- ifelse(Boston$medv > 25, "Y", "N")
ourBoston <- data.frame (Boston, high.medv)
ourBoston$high.medv <- as.factor(ourBoston$high.medv)
attach(Boston)
# 70% of data <- Train
train2<- subset(ourBoston,sample==TRUE)
# 30% will be Test
test2<- subset(ourBoston, sample==FALSE)
glm.fit <- glm(high.medv ~ lstat, data = train2, family = binomial)
summary(glm.fit)
The output is as follows:
Deviance Residuals:
[1] 0
Coefficients: (1 not defined because of singularities)
Estimate Std. Error z value Pr(>|z|)
(Intercept) -22.57 48196.14 0 1
lstat NA NA NA NA
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 0.0000e+00 on 0 degrees of freedom
Residual deviance: 3.1675e-10 on 0 degrees of freedom
AIC: 2
Number of Fisher Scoring iterations: 21
Also, I need the following:
I'm required to use the misclassification rate as the measure of error for two cases:
using lstat as the predictor, and
using all predictors except high.medv and medv,
but I am stuck at the regression itself.

With every classification algorithm, much of the art lies in choosing the threshold at which you decide whether the result is positive or negative.
When you predict the outcomes in the test data set, you estimate probabilities of the response variable being either 1 or 0. You therefore need to tell it where to cut, i.e., the threshold at which the prediction becomes 1 or 0.
A high threshold is more conservative about labeling a case as positive, which makes it less likely to produce false positives and more likely to produce false negatives. The opposite happens for low thresholds.
The usual procedure is to plot the rates that interest you, e.g., true positive and false positive rates against each other, and then choose the trade-off that is best for you.
set.seed(666)
# simulation of logistic data
x1 = rnorm(1000) # some continuous variables
z = 1 + 2*x1 # linear combination with a bias
pr = 1/(1 + exp(-z)) # pass through an inv-logit function
y = rbinom(1000, 1, pr)
df = data.frame(y = y, x1 = x1)
df$train = 0
df$train[sample(1:(2*nrow(df)/3))] = 1 # rows 1 to 2n/3 form the training set (the simulated rows are i.i.d., so this acts as a random split)
df$new_y = NA
# modelling the response variable
mod = glm(y ~ x1, data = df[df$train == 1,], family = "binomial")
df$new_y[df$train == 0] = predict(mod, newdata = df[df$train == 0,], type = 'response') # predicted probabilities
dat = df[df$train==0,] # test data
To use the misclassification error to evaluate your model, you first need to set a threshold. For that, you can use the roc function from the pROC package, which calculates the rates and provides the corresponding thresholds:
library(pROC)
rates = roc(dat$y, dat$new_y)
plot(rates) # visualize the trade-off
rates$specificities # the ratio of true negatives over all negatives, at each threshold
rates$thresholds # the corresponding thresholds
dat$jj = as.numeric(dat$new_y > 0.7) # using 0.7 as the threshold above which we predict y = 1
table(dat$y, dat$jj) # shows the misclassifications given the 0.7 threshold
      0   1
  0  86  20
  1  64 164
The accuracy of your model can be computed as the ratio of the number of observations you got right to the size of your sample.
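For the misclassification rate asked about in the question, a minimal sketch based on the confusion table above (the 0.7 cutoff and the objects dat$y and dat$jj come from this simulated example, not from the Boston data):
conf <- table(dat$y, dat$jj)
accuracy <- sum(diag(conf)) / sum(conf) # proportion of observations predicted correctly
misclass <- 1 - accuracy # misclassification rate, here (20 + 64) / 334
misclass
The same computation applies to the Boston models once you have test-set predictions and a chosen threshold.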

Related

What is the scale of parameter estimates produced by nnet::multinom?

I'm using the multinom function from the nnet package to do multinomial logistic regression in R. When I fit the model, I expected to get parameter estimates on the logit scale. However, transforming the coefficients with the inverse logit doesn't give probability estimates that match the predicted values; see the example below.
The help file states that "A log-linear model is fitted, with coefficients zero for the first class", but how do I transform parameter estimates to get predicted effects on the probability scale?
library("nnet")
set.seed(123)
# Simulate some simple fake data
groups <- t(rmultinom(500, 1, prob = c(0.05, 0.3, 0.65))) %*% c(1:3)
moddat <- data.frame(group = factor(groups))
# Fit the multinomial model
mod <- multinom(group ~ 1, moddat)
predict(mod, type = "probs")[1,] # predicted probabilities recover generating probs
# But transformed coefficients don't become probabilities
plogis(coef(mod)) # inverse logit
1/(1 + exp(-coef(mod))) # inverse logit
Using predict I can recover the generating probabilities:
1 2 3
0.06 0.30 0.64
But taking the inverse logit of the coefficients does not give probabilities:
(Intercept)
2 0.8333333
3 0.9142857
The inverse logit is the correct back transformation for a binomial model. In the case of a multinomial model, the appropriate back transformation is the softmax function, as described in this question.
The statement from the documentation that a "log-linear model is fitted with coefficient zero for the first class" essentially means that the reference probability is set to 0 on the link scale.
To recover the probabilities manually from the example above:
library("nnet")
set.seed(123)
groups <- t(rmultinom(500, 1, prob = c(0.05, 0.3, 0.65))) %*% c(1:3)
moddat <- data.frame(group = factor(groups))
mod <- multinom(group ~ 1, moddat)
# weights: 6 (2 variable)
# initial value 549.306144
# final value 407.810115
# converged
predict(mod, type = "probs")[1,] # predicted probabilities recover generating probs
# 1 2 3
# 0.06 0.30 0.64
# Inverse logit is incorrect
1/(1 + exp(-coef(mod))) # inverse logit
# (Intercept)
# 2 0.8333333
# 3 0.9142857
# Use softmax transformation instead
softmax <- function(x){
  expx <- exp(x)
  return(expx / sum(expx))
}
# Add the reference category probability (0 on the link scale) and use the softmax transformation
all_coefs <- rbind("1" = 0, coef(mod))
softmax(all_coefs)
# (Intercept)
# 1 0.06
# 2 0.30
# 3 0.64

Estimating mixed-level logistic regression coefficients without using the lme4 package in R

I have a 2-level dataset of 37,000 instances, which represents the choices of 199 subjects. I have to estimate logistic regression coefficients for each of the 199 individuals. I have done this manually 199 times by subsetting, but I want to know whether there is a more efficient way of getting the coefficients by looping, without using the lme4 package. Also, I need to store the coefficients as variables within each subject's data.
Here is my code.
### Split of the dataset in each subject ID
mylist <- split(df_merged2, df_merged2$sjind)
### Indication of subject 1 in the first subsetting
df1 <- mylist[[1]]
### Logistic regression
glm1 <- glm(rep ~ reward_v.2 + trans_v.2 + reward_transition, data = df1, family = binomial)
### Extracting the coefficients
reward_transition <- coef(glm1)[4]
reward <- coef(glm1)[2]
transition <- coef(glm1)[3]
reward<- as.numeric(reward)
reward_transition <- as.numeric(reward_transition)
transition <- as.numeric(transition)
omega <- reward_transition - reward
### Computing the constant coefficients as variables
df1$rewardmix <- 1
df1$rewardmix <- reward
df1$omega <- 1
df1$omega <- omega
df1$transmix <- 1
df1$transmix <- transition
df1$reward_transitionmix <- reward_transition
You can use the by() function from the base package, whose short description is "Apply a Function to a Data Frame Split by Factors" (ref: help(by))
Here is an example using your terminology for the data frame and the subject ID variable names:
# Make the simulated data reproducible
set.seed(1717)
# The IDs can be sorted in any order
ids = c('A','B','B','A','A','B','B','B','C','C','C','B','C')
# Sample data frame with: subject ID, target variable (y), input variable (x)
df_merged2 = data.frame(sjind = ids,
                        y = rnorm(length(ids)),
                        x = rnorm(length(ids)))
head(df_merged2)
The top 6 rows of the data look like:
sjind y x
1 A -1.4548934 1.1004932
2 B -1.7084245 -0.7731208
3 B 2.1004557 -1.6229203
4 A -1.0283021 0.4233806
5 A 0.4133888 1.2398577
6 B -1.4104637 0.3746706
Now use the by() function to fit a GLM model for each group defined by the sjind unique values:
glm_by_sjind = by(df_merged2, as.factor(df_merged2$sjind),
                  function(df) glm(y ~ x, data = df))
The output object glm_by_sjind is a list with the following properties:
It has as many elements as the number of unique values in sjind (in this case 3)
It is indexed by the unique values of the sjind variable (in this case "A", "B", "C")
Each element contains the regression output from glm() run on each split of the input data frame (where splits are clearly defined by the sjind unique values)
So for example, you can request the summary of the regression output for subject "B" as follows:
> summary(glm_by_sjind[["B"]])
Call:
glm(formula = y ~ x, data = df)
Deviance Residuals:
2 3 6 7 8 12
-1.40226 1.59040 -0.00186 0.06400 -1.93118 1.68091
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -1.0487 0.7472 -1.404 0.233
x -0.9605 0.9170 -1.047 0.354
(Dispersion parameter for gaussian family taken to be 2.763681)
Null deviance: 14.087 on 5 degrees of freedom
Residual deviance: 11.055 on 4 degrees of freedom
AIC: 26.694
Number of Fisher Scoring iterations: 2
If we go a little further, we can also perform a sanity check that each GLM model is based on the expected number of cases (i.e. the number of cases in each model should be equal to the frequency distribution of the sjind variable in the input data frame).
freq_sjind_in_data = as.list( table(df_merged2$sjind) )
ncases_in_each_glm = lapply( glm_by_sjind, function(glm) NROW(glm$data) )
all.equal( freq_sjind_in_data,
ncases_in_each_glm )
which returns TRUE.
Or also inspect that visually:
as.data.frame(freq_sjind_in_data)
as.data.frame(ncases_in_each_glm)
which return
A B C
1 3 6 4
in both cases.
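To get back to the original goal of per-subject coefficients, you can also collect them from the same list in one step; a minimal sketch using the toy model above (the column names come from the simulated data, not from your df_merged2):
# One row of coefficients per subject, collected from the by() output
coef_by_sjind <- do.call(rbind, lapply(glm_by_sjind, coef))
coef_by_sjind # rows named "A", "B", "C"; columns "(Intercept)" and "x"
# If needed, merge them back onto the data by subject ID:
df_with_coefs <- merge(df_merged2,
                       data.frame(sjind = rownames(coef_by_sjind), coef_by_sjind),
                       by = "sjind")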

Notation of categorical variables in regression analysis

In the process of studying logistic regression using caret's mdrr data, some questions arose.
I created a full model using a total of 19 variables, and I have questions about the notation of the categorical variables.
In my regression model, the categorical variables are:
nDB : 0 or 1 or 2
nR05 : 0 or 1
nR10 : 1 or 2
I created a full model using glm, but I do not know why the names of the categorical variables have one of the category values appended to them.
-------------------------------------------------------------------------------
glm(formula = mdrrClass ~ ., family = binomial, data = train)
#Coefficients:
#(Intercept) nDB1 nDB2 nX nR051 nR101 nBnz2
#5.792e+00 5.287e-01 -3.103e-01 -2.532e-01 -9.291e-02 9.259e-01 -2.108e+00
#SPI BLI PW4 PJI2 Lop BIC2 VRA1
#3.222e-05 -1.201e+01 -3.754e+01 -5.467e-01 1.010e+00 -5.712e+00 -2.424e-04
# PCR H3D FDI PJI3 DISPm DISPe G.N..N.
# -6.397e-02 -4.360e-04 3.458e+01 -6.579e+00 -5.690e-02 2.056e-01 -7.610e-03
#Degrees of Freedom: 263 Total (i.e. Null); 243 Residual
#Null Deviance: 359.3
#Residual Deviance: 232.6 AIC: 274.6
-------------------------------------------------------------------------------
The above results show that nDB appears with numbers attached (nDB1, nDB2), and similarly for nR05 and nR10.
I am wondering why numbers are attached as above.
When you have categorical predictors in any regression model, you need to create dummy variables. R does this for you, and the output you see shows the contrasts.
Your variable nDB has 3 levels: 0, 1, 2.
One of those needs to be chosen as the reference level (R chose 0 for you in this case, but this can also be specified manually). Then dummy variables are created to compare every other level against the reference level: 0 vs 1 and 0 vs 2.
R names these dummy variables nDB1 and nDB2. nDB1 is for the 0 vs 1 contrast, and nDB2 is for the 0 vs 2 contrast. The numbers after the variable names just indicate which contrast you're looking at.
The coefficient values are interpreted as the difference in your y (outcome) value between groups 0 and 1 (nDB1), and separately between groups 0 and 2 (nDB2). In other words, what change in the outcome would you expect when moving from one group to the other?
Your other categorical variables have 2 levels and are just a simpler case of the above.
For example, nR05 only has 0 and 1 as values. 0 was chosen as your reference, and because there's only 1 possible contrast here, a single dummy variable is created comparing 0 vs 1. In the output that dummy variable is called nR051.
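If you want to see exactly which dummy variables R creates, model.matrix() shows the design matrix. A small sketch with made-up data (the factor below just mimics nDB; it is not the mdrr variable):
nDB <- factor(c(0, 1, 2, 1, 0, 2)) # hypothetical factor with levels 0, 1, 2
model.matrix(~ nDB)
#   (Intercept) nDB1 nDB2
# 1           1    0    0   (level 0, the reference: both dummies are 0)
# 2           1    1    0   (level 1: nDB1 = 1)
# 3           1    0    1   (level 2: nDB2 = 1)
# ...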
This is always the case for categorical variables, especially when they are not binary (like your nDB). It's so that you know which value the coefficient refers to. For the nDB variable, the model has created two new variables: nDB1, which equals 1 if nDB=1 and 0 if nDB=0 or nDB=2, and nDB2, which equals 1 if nDB=2 and 0 otherwise.
To analyze a binary variable (whose values would be TRUE / FALSE, 0/1, or YES / NO) according to a quantitative explanatory variable, a logistic regression can be used.
Consider for example the following data, where x is the age of 40 people, and y the variable indicating whether they bought a death metal album in the last 5 years (1 if "yes", 0 if "no").
Graphically, we can see that, broadly, the older people are, the less likely they are to have bought one.
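The original data are not reproduced here. As a stand-in so the code below can be run, here is a small simulated data set with the same structure (ages x and a 0/1 purchase indicator y); it is purely illustrative, and the summary output further down comes from the original data, not from this simulation:
set.seed(42)
x <- sample(20:70, 40, replace = TRUE) # ages of 40 hypothetical people
p <- plogis(5.9 - 0.12 * x) # purchase probability decreasing with age
y <- rbinom(40, 1, p) # 1 = bought a death metal album, 0 = did not
plot(x, y)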
Logistic regression is a special case of the Generalized Linear Model (GLM).
With a classical linear regression model, we consider the following model:
Y = αX + β
The expectation of Y is therefore predicted as follows:
E(Y) = αX + β
Here, because of the binary distribution of Y, the above relation cannot apply. To "generalize" the linear model, we therefore consider that
g(E(Y)) = αX + β
where g is a link function.
In this case, for a logistic regression, the link function corresponds to the logit function:
logit(p) = log(p / (1 - p))
Note that this logit function transforms a value p between 0 and 1 (such as a probability) into a value between -∞ and +∞.
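As a quick check of the link function, base R already provides the logit and its inverse as qlogis and plogis:
p <- 0.75
qlogis(p) # logit(p) = log(0.75 / 0.25) ≈ 1.0986
plogis(qlogis(p)) # the inverse logit maps it back to 0.75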
Here's how to do the logistic regression under R:
myreg <- glm(y ~ x, family = binomial(link = logit))
summary(myreg)
glm(formula = y ~ x, family = binomial(link = logit))
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.8686 -0.7764 0.3801 0.8814 2.0253
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 5.9462 1.9599 3.034 0.00241 **
## x -0.1156 0.0397 -2.912 0.00360 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 52.925 on 39 degrees of freedom
## Residual deviance: 39.617 on 38 degrees of freedom
## AIC: 43.617
##
## Number of Fisher Scoring iterations: 5
We obtain the following model:
logit(E(Y)) = -0.12X + 5.95
and we note that the (negative) influence of age on the purchase of death metal albums is significant at the 5% level (Pr(>|z|) < 0.05).
Thus, logistic regression is often used to bring out risk factors (such as age, but also BMI, sex, and so on).

weights option in GAM

My dataset has many redundant (duplicated) observations, but each observation should be counted. So I am considering the 'weights' option in GAM because it significantly reduces computation time.
The documentation for the gam function (in the mgcv package) suggests the two are 'equivalent' (from ?gam, on the weights argument):
"Note that a weight of 2, for example, is equivalent to having made exactly the same observation twice."
But it does not seem right.
library(mgcv)
yy = c(5,2,8,9)
xx = 1:4
wgts = c(3,2,4,1)
yy2 = rep(yy, wgts)
xx2 = rep(xx, wgts)
mod1 = gam(yy2 ~ xx2)
mod2 = gam(yy ~ xx, weights = wgts)
mod3 = gam(yy ~ xx, weights = wgts / mean(wgts))
predict(mod1,data.frame(xx2=1:4))
predict(mod2,data.frame(xx=1:4))
predict(mod3,data.frame(xx=1:4))
The estimates are identical in all three models.
Standard errors are the same in models 2 and 3 but different in model 1.
GCV is different in all three models.
I understand GCVs can be different. But how can we say that the models are identical if standard errors are different? Is this an error, or is there any good explanation for this?
The issue you see is not about GAM. You have used gam to fit a purely parametric model, in which case gam behaves almost the same as lm. To answer your questions, it is sufficient to focus on the linear regression case; what happens for a linear model also happens for GLMs and GAMs. Here is how we can reproduce the issue with lm:
yy <- c(5,2,8,9)
xx <- 1:4
wgts <- c(3,2,4,1)
yy2 <- rep(yy,wgts)
xx2 <- rep(xx,wgts)
fit1 <- lm(yy2 ~ xx2)
fit2 <- lm(yy ~ xx, weights = wgts)
fit3 <- lm(yy ~ xx, weights = wgts/mean(wgts))
summary1 <- summary(fit1)
summary2 <- summary(fit2)
summary3 <- summary(fit3)
pred1 <- predict(fit1, list(xx2 = xx), interval = "confidence", se.fit = TRUE)
pred2 <- predict(fit2, list(xx = xx), interval = "confidence", se.fit = TRUE)
pred3 <- predict(fit3, list(xx = xx), interval = "confidence", se.fit = TRUE)
All models have the same regression coefficients, but other results may differ. You asked:
For the weighted regressions fit2 and fit3, why is almost everything the same except the residual standard error?
Why is weighted regression (fit2 or fit3) not equivalent to ordinary regression on the data with repeated observations (fit1)?
Your first question is about the invariance of weighted least squares to rescaling of the weights. Here is a brief summary:
If we rescale the weights by an arbitrary positive value, only the residual standard error and the unscaled covariance change. Such a change does not imply a different, non-equivalent model; in fact, nothing related to prediction is affected. In weighted regression, don't just look at sigma2; it is only a marginal variance. What is really of interest is the per-observation variance sigma2 / w_i. If you divide your weights by 2, you will find that the estimated sigma2 is halved as well, so sigma2 / w_i stays the same.
summary2$coef
summary3$coef
# Estimate Std. Error t value Pr(>|t|)
#(Intercept) 2.128713 3.128697 0.6803832 0.5664609
#xx 1.683168 1.246503 1.3503125 0.3094222
pred2
pred3
#$fit
# fit lwr upr
#1 3.811881 -5.0008685 12.62463
#2 5.495050 -0.1299942 11.12009
#3 7.178218 0.6095820 13.74685
#4 8.861386 -1.7302209 19.45299
#
#$se.fit
# 1 2 3 4
#2.048213 1.307343 1.526648 2.461646
#
#$df
#[1] 2
#
#$residual.scale ## for `pred2`
#[1] 3.961448
#
#$residual.scale ## for `pred3`
#[1] 2.50544
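A quick numerical check of that claim, using the objects defined above:
# sigma^2 scales with the weights...
summary2$sigma^2 / summary3$sigma^2 # equals mean(wgts) = 2.5
# ...so the per-observation variance sigma^2 / w_i is identical for fit2 and fit3
all.equal(summary2$sigma^2 / wgts,
          summary3$sigma^2 / (wgts / mean(wgts))) # TRUE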
Your second question is about the meaning of weights. Weights are used to model a heteroscedastic response, so that unusually noisy observations do not exert undue influence in ordinary least squares regression. Weights are proportional to the reciprocal variance: you give bigger weights to data with smaller expected errors. Weights can be non-integer, so they do not have a natural interpretation in terms of repeated data. Thus, what is written in the mgcv documentation is not rigorously correct.
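As an illustration of weights as reciprocal variances, a small sketch on invented data (the variances v are hypothetical and known only because we simulate them):
set.seed(1)
n <- 100
x <- runif(n)
v <- ifelse(x > 0.5, 4, 1) # unequal error variances
y <- 1 + 2 * x + rnorm(n, sd = sqrt(v))
fit_wls <- lm(y ~ x, weights = 1 / v) # noisier observations get smaller weights
coef(fit_wls)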
The real difference between fit1 and fit2 is the degrees of freedom. n is the number of data points you have, while p is the number of non-NA coefficients, so n - p is the residual degrees of freedom. For both models we have p = 2 (intercept and slope), but for fit1 we have n = 10 while for fit2 we have n = 4. This has a dramatic effect on inference, as the standard errors for coefficients and predictions (hence confidence intervals) now differ. These two models are far from equivalent.
summary1$coef
# Estimate Std. Error t value Pr(>|t|)
#(Intercept) 2.128713 1.5643486 1.360766 0.21068210
#xx2 1.683168 0.6232514 2.700625 0.02704784
summary2$coef
# Estimate Std. Error t value Pr(>|t|)
#(Intercept) 2.128713 3.128697 0.6803832 0.5664609
#xx 1.683168 1.246503 1.3503125 0.3094222
pred1
#$fit
# fit lwr upr
#1 3.811881 1.450287 6.173475
#2 5.495050 3.987680 7.002419
#3 7.178218 5.417990 8.938446
#4 8.861386 6.023103 11.699669
#
#$se.fit
# 1 2 3 4
#1.0241066 0.6536716 0.7633240 1.2308229
#
#$df # note, this is `10 - 2 = 8`
#[1] 8
#
#$residual.scale
#[1] 1.980724
pred2
#$fit
# fit lwr upr
#1 3.811881 -5.0008685 12.62463
#2 5.495050 -0.1299942 11.12009
#3 7.178218 0.6095820 13.74685
#4 8.861386 -1.7302209 19.45299
#
#$se.fit
# 1 2 3 4
#2.048213 1.307343 1.526648 2.461646
#
#$df # note, this is `4 - 2 = 2`
#[1] 2
#
#$residual.scale ## for `pred2`
#[1] 3.961448

Assessing/Improving prediction with linear discriminant analysis or logistic regression

I recently needed to combine two or more variables in a data set to evaluate whether their combination could enhance predictivity, so I ran some logistic regressions in R. On the statistics Q&A site, someone suggested that I might use linear discriminant analysis instead.
Since I don't have fitcdiscr.m in MATLAB, I'd rather go with lda in R, but I cannot use the fit results to compute an AUC or anything similar. Indeed, I see that the fit output of lda in R is some sort of object with multiple components, and I guess I should use fit$posterior to predict Cases against Controls, but I cannot extract that data from it.
For further information, I get this results as fit$posterior:
$posterior
0 1
1 0.7707927 0.22920726
2 0.7085165 0.29148352
3 0.6990989 0.30090106
4 0.5902161 0.40978387
5 0.8667109 0.13328912
6 0.6924406 0.30755939
7 0.7471086 0.25289141
8 0.7519326 0.24806736
And so on up to the last observation which is 242. Every time I try to take, for example, column 1 by fit$posterior[,1], I get:
1 2 3 4 5 6 7 8
0.7707927 0.7085165 0.6990989 0.5902161 0.8667109 0.6924406 0.7471086 0.7519326
9 10 11 12 13 14 15 16
0.7519326 0.6902850 0.7519326 0.8080445 0.8075360 0.8484318 0.4860899 0.8694121
I don't know which part of the code could be useful, since I made only very basic computations:
library(gdata)
data=read.xls("ECGvarious.xls", perl="C:/Strawberry/perl/bin/perl.exe");
i=6;
p=19;
temp=data[,i];
temp1=data[, p];
library(MASS)
fit <- lda(Case ~ temp + temp1, data=data, na.action="na.omit", CV=TRUE)
I can't link the data; anyway, ECGvarious is simply an N observations × P variables table, with N = N1 + N2, where N1 is the number of Controls and N2 the number of Cases, and the Cases are defined as subjects who developed the pathology during follow-up. The very last column of the data is just 0 or 1 for Controls and Cases, respectively.
When I performed the logistic regression, I did:
mod1<-glm(Case ~ temp + temp1, data=data, family="binomial");
auctemp=auc(Case~predict(mod1), data=data);
Here's my input concerning logistic regression and prediction (I don't know much about linear discrimination but understand it's closely related to logistic regression, which I know much better). I'm not sure I'm following all of your reasoning, nor if this will be a satisfactory answer, but hopefully it won't hurt. This has been a review of some epidemiology classes for me. I hope it's not too formal and addresses at least in part some of your questions. If not, and if other users think this would better belong on Cross Validated, I won't take offense. :)
Sample data
We'll first generate 200 observations, having increasing levels of probability for Case=1. The first predictor (pred1) will follow a distribution that is nonlinear, close to the one being modeled when doing logistic regression. It will be rather closely related to the proportion of Cases. The second predictor will just be random, uniformly distributed noise.
set.seed(2351)
df <- data.frame(Case = c(sample(c(0,1), size = 67, prob = c(0.8, 0.2), replace = TRUE),
                          sample(c(0,1), size = 66, prob = c(0.5, 0.5), replace = TRUE),
                          sample(c(0,1), size = 67, prob = c(0.2, 0.8), replace = TRUE)),
                 pred1 = 6/(1 + 4*exp(-seq(from = -3, to = 5, length.out = 200))) + rnorm(n = 200, mean = 2, sd = .5),
                 pred2 = runif(n = 200, min = 0, max = 100))
We see in the boxplot below that the observations where case==1 generally have higher pred1, which is intended (from the way we generated the data). At the same time, there is an overlap, otherwise it would make it too easy to decide on a cutoff point/threshold.
boxplot(pred1 ~ Case, data=df, xlab="Case", ylab="pred1")
Fitting the logistic model
First using both predictors:
model.1 <- glm(Case ~ pred1 + pred2, data=df, family=binomial(logit))
summary(model.1)
# Coefficients:
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) -2.058258 0.479094 -4.296 1.74e-05 ***
# pred1 0.428491 0.075373 5.685 1.31e-08 ***
# pred2 0.003399 0.005500 0.618 0.537
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# (Dispersion parameter for binomial family taken to be 1)
#
# Null deviance: 276.76 on 199 degrees of freedom
# Residual deviance: 238.51 on 197 degrees of freedom
# AIC: 244.51
As we'd expect, the first predictor is rather strongly related, and the second, poorly related to the outcome.
Note that to get Odds Ratios from those coefficients, we need to exponentiate them:
exp(model.1$coefficients[2:3])
# pred1 pred2
# 1.534939 1.003405 # Odds Ratios (making the relationships appear more clearly).
# Use `exp(confint(model.1))` to get confidence intervals.
We'll compare this model to a simpler model, removing the second predictor:
model.2 <- glm(Case ~ pred1, data=df, family=binomial(logit))
summary(model.2)
# Coefficients:
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) -1.87794 0.37452 -5.014 5.32e-07 ***
# pred1 0.42651 0.07514 5.676 1.38e-08 ***
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# (Dispersion parameter for binomial family taken to be 1)
#
# Null deviance: 276.76 on 199 degrees of freedom
# Residual deviance: 238.89 on 198 degrees of freedom
# AIC: 242.89
exp(model.2$coefficients)[2]
# pred1
# 1.531907 # Odds Ratio
We could also run an anova(model.1, model.2), but let's skip this part and move on to prediction, keeping the simpler model since the second variable doesn't add much predictive value, if any. In practice, having more predictors is rarely a problem unless they're truly random noise, but here I focus more on the operation of predicting and choosing a proper threshold.
Stored predictions
In the model.2 object (a list), there is an item named fitted.values. Those values are exactly the same as what we'd get from predict(model.2, type="response") and can be interpreted as probabilities; one for each row, based on the predictor(s) and their coefficient(s).
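A quick way to confirm this equivalence:
all.equal(model.2$fitted.values,
          predict(model.2, type = "response")) # TRUE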
New predictions
It is also possible to predict the outcome for hypothetical rows not in our initial dataframe.
With model.1 (2 predictors):
predict(model.1, newdata = list(pred1=1, pred2=42), type="response")
# 1
# 0.1843701
With model.2 (1 predictor):
predict(model.2, newdata = list(pred1=12), type="response")
# 1
# 0.96232
Going from probability to binary response
Looking back at the link between our predictor pred1 and the calculated probability of having Case=1:
plot(df$pred1, model.2$fitted.values,
xlab="pred1", ylab="probability that Case=1")
We note that since we have only one predictor, the probability is a direct function of it. If we had kept the other predictor in the equation, we'd see points grouped around the same line, but in a cloud of points.
But this doesn't change the fact that if we are to evaluate how well our model can predict binary outcomes, we need to settle on a threshold above which we'll consider that the observation is a Case. Several packages have tools to help picking that threshold. But even without any additional package, we can calculate various properties over a range of thresholds using a function such as the following, which will calculate the sensitivity (ability to detect True Cases), specificity (ability to identify True Non Cases), and other properties well described here.
df.ana <- data.frame(thresh = seq(from = 1, to = 100, by = 1) / 100) # thresholds 0.01 to 1, matching the row numbers shown below
for(i in seq_along(df.ana$thresh)) {
df.ana$sensitivity[i] <- sum(df$Case==1 & (predict(model.2, type="resp") >= df.ana$thresh[i])) / sum(df$Case==1)
df.ana$specificity[i] <- sum(df$Case==0 & (predict(model.2, type="resp") < df.ana$thresh[i])) / sum(df$Case==0)
df.ana$pos.pred.value[i] <- sum(df$Case == 1 & (predict(model.2, type="resp") >= df.ana$thresh[i])) / sum(predict(model.2, type="resp") >= df.ana$thresh[i])
df.ana$neg.pred.value[i] <- sum(df$Case == 0 & (predict(model.2, type="resp") < df.ana$thresh[i])) / sum(predict(model.2, type="resp") < df.ana$thresh[i])
df.ana$accuracy[i] <- sum((predict(model.2, type="resp") >= df.ana$thresh[i]) == df$Case) / nrow(df)
}
which.max(df.ana$accuracy)
# [1] 46
optimal.thresh <- df.ana$thresh[which.max(df.ana$accuracy)] # 0.46
The accuracy is the proportion of correct predictions over all predictions. The 46th threshold (0.46) is the "best" for that matter. Let's check a few other neighboring rows in the generated dataframe; it tells us that 0.47 would work as well on all fronts. Fine-tuning would involve adding some new data to our initial dataframe.
df.ana[45:48,]
# thresh sensitivity specificity pos.pred.value neg.pred.value accuracy
# 45 0.45 0.7142857 0.6947368 0.7211538 0.6875000 0.705
# 46 0.46 0.7142857 0.7157895 0.7352941 0.6938776 0.715
# 47 0.47 0.7142857 0.7157895 0.7352941 0.6938776 0.715
# 48 0.48 0.7047619 0.7157895 0.7326733 0.6868687 0.710
Note that the auc function (area under the curve) will give the same number as the accuracy for that threshold:
library(pROC)
auc(Case ~ as.numeric(predict(model.2, type="response") >= optimal.thresh), data=df)
# Area under the curve: 0.715
Some plots
# thresholds against accuracy
plot(x=df.ana$thresh, y=df.ana$accuracy, type="l",
xlab="Threshold", ylab="", xlim=c(0,1), ylim=c(0,1))
text(x = 0.1, y = 0.5, labels = "Accuracy", col="black")
# thresholds against Sensitivity
lines(x=df.ana$thresh, y=df.ana$sensitivity, type="l",col="blue") # Sensitivity We want to maximize this, but not too much
text(x = 0.1, y = 0.95, labels = "Sensitivity", col="blue")
# thresholds against specificity
lines(x=df.ana$thresh, y=df.ana$specificity, type="l", col="red") # Specificity we want to maximize also, but not too much
text(x = 0.1, y = 0.05, labels = "Specificity", col="red")
# optimal threshold vertical line
abline(v=optimal.thresh)
text(x=optimal.thresh + .01, y=0.05, labels= optimal.thresh)
Incidentally, all lines converge more or less to the same point, which suggests this is a good compromise between all the qualities we look for in a predictive tool. But depending on your objectives, it might be better picking a lower or a higher threshold. Statistical tools are useful, but in the end, some other considerations are often more important in making a final decision.
About ROC
The following graph is the same as the one which would be produced with pROC's roc:
plot(x=df.ana$specificity, y = df.ana$sensitivity, type="l", col="blue",
xlim = c(1,0), xlab = "Specificity", ylab = "Sensitivity")
# Equivalent to
# plot(roc(predictor=model.2$fitted.values, response = model.2$y))
Tabulations and other stats
The following function allows one to calculate, for a logistic model fit, the same stats seen above, and gives a 2x2 table for any chosen threshold.
diagnos.test <- function(model, threshold) {
  output <- list()
  pred <- predict(model, type = "response")  # predicted probabilities from the supplied model
  output$stats <- c(
    sensitivity  = sum(model$y == 1 & (pred >= threshold)) / sum(model$y == 1),
    specificity  = sum(model$y == 0 & (pred <  threshold)) / sum(model$y == 0),
    pos.pr.value = sum(model$y == 1 & (pred >= threshold)) / sum(pred >= threshold),
    neg.pr.value = sum(model$y == 0 & (pred <  threshold)) / sum(pred <  threshold),
    accuracy     = sum((pred >= threshold) == model$y) / length(model$y))
  output$tab <- addmargins(t(table(model$y, as.numeric(pred >= threshold),
                                   dnn = list("Cases", "Predictions")))[2:1, 2:1])
  return(output)
}
diagnos.test(model.2, 0.47)
# $stats
# sensitivity specificity pos.pr.value neg.pr.value accuracy
# 0.7142857 0.7157895 0.7352941 0.6938776 0.7150000
#
# $tab
# Cases
# Predictions 1 0 Sum
# 1 75 27 102
# 0 30 68 98
# Sum 105 95 200
Final note
I don't pretend I have covered everything on prediction, sensitivity and specificity; my goal was more to go as far as possible using common language and calculations, not relying on any specific packages.
