Generating predictions from an aggregated binomial regression - r

Assessing model accuracy is reasonably easy with Bernoulli outcomes, but I am unsure how to generate meaningful predictions from an aggregated binomial regression.
Take this example. We want to model the number of drug counselling sessions (variable numCouns) a client attends over a twelve-week period based on: (1) how many years they had been using cannabis regularly prior to starting treatment (variable durationRegUse) and (2) the number of grams of cannabis they used on an average day (variable gms). The maximum number of counselling sessions each client can attend is six.
Here is the data
df <- data.frame(durationRegUse = c(19, 9, 13, 19, 10, 13, 2, 14, 11, 12, 7, 6, 3, 18, 17, 9, 9, 10, 0, 20, 4, 4, 8, 5, 4, 19, 25, 10, 27, 1, 10, 25, 8, 24, 8, 18, 15, 10, 6, 14, 16, 13, 4, 4, 5, 17, 13, 21, 8, 7, 10, 17, 13, 12, 28, 38, 23, 19, 36, 3, 14, 14, 22, 11, 26, 17, 4, 8, 25, 35, 14, 28, 32, 29, 22, 21, 2, 23, 35, 34, 31, 34, 15, 14, 26, 6, 3, 25, 24, 31, 31, 27, 30, 14.5, 12, 9, 3, 13, 5, 6, 23, 21, 27, 7, 36, 19, 22, 15, 11, 17, 11, 26, 21, 15),
gms = c(3.5, 2, 0.5, 10, 3, 3, 4, 4, 2, 2, 2, 2, 2, 2, 1, 1.75, 4, 1.75, 0.33, 5, 2.5, 1.25, 1, 0.5, 3, 2, 5, 3, 3, 0.571, 1, 0.5, 2, 4, 2.5, 1.25, 1.5, 1, 2.5, 2, 1, 2, 1.5, 2, 0.2, 1, 1, 2, 14, 2, 3.5, 3, 2, 1.75, 2, 0.55, 1, 2, 6, 0.5, 0.5, 0.5, 3, 1, 2.75, 4.5, 3, 3, 3, 2, 2, 1, 2.5, 1.75, 1, 1.5, 2, 0.7, 7, 0.5, 2, 1.2, 0.4, 3, 0.8, 1.3, 1.2, 2, 1.5, 3, 2, 2, 4, 3, 1, 6, 1, 0.5, 1.5, 2.5, 1, 2.5, 1.5, 1, 1.5, 2.5, 1.5, 2.5, 10, 1.5, 1.5, 0.5, 5, 1.5),
numCouns = c(6, 1, 2, 6, 0, 6, 0, 0, 2, 6, 0, 0, 0, 0, 0, 0, 0, 0, 0, 6, 0, 2, 5, 6, 0, 0, 6, 0, 6, 3, 6, 0, 0, 0, 4, 5, 0, 0, 4, 0, 4, 3, 0, 1, 2, 6, 4, 2, 4, 3, 1, 0, 2, 2, 5, 2, 0, 1, 3, 0, 3, 2, 1, 6, 0, 0, 1, 0, 1, 2, 0, 0, 5, 1, 1, 1, 5, 3, 5, 6, 6, 5, 3, 6, 2, 4, 3, 4, 6, 1, 0, 6, 4, 3, 3, 1, 5, 0, 1, 1, 6, 6, 6, 3, 3, 2, 0, 0, 5, 1, 6, 3, 0, 0))
To model this as an aggregated binomial regression we need to create a coverage variable (the maximum number of sessions).
df$coverage <- 6
Now we can create the aggregated binomial regression model
aggBinMod <- glm(
  formula = cbind(numCouns, coverage - numCouns) ~ durationRegUse + gms,
  data = df,
  family = binomial(link = "logit")
)
And here is the output
summary(aggBinMod)
#output
# Coefficients:
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) -1.157570 0.183116 -6.322 2.59e-10 ***
# durationRegUse 0.035975 0.008455 4.255 2.09e-05 ***
# gms 0.075838 0.039273 1.931 0.0535 .
Here is the part I am unsure of: how to generate predictions with which to assess model accuracy. As I understand it, if we use the predict() function with type = "response", we get the predicted per-trial probability of drawing a 1 on a Bernoulli response scale (i.e. [0, 1]).
predBin <- predict(aggBinMod, type = "response")
predBin
# (predicted Bernoulli probability for the first 16 participants)
# 1 2 3 4 5 6 7 8
# 0.4480346 0.3357882 0.3425441 0.5706073 0.3611657 0.3864206 0.3138308 0.4132440
# 9 10 11 12 13 14 15 16
# 0.3520203 0.3602692 0.3199350 0.3121589 0.2894678 0.4113600 0.3845787 0.3315728
Following that logic, to predict the number of sessions for each client from our aggregated binomial regression model, we should be able to simply multiply this probability by the number of trials, in our case 6. So to generate the predictions we would run
predBin6 <- predict(aggBinMod, type = "response")*6
predBin6
# predicted number of sessions (out of a possible 6) for the first 18 clients
# 1 2 3 4 5 6 7 8 9
# 2.688208 2.014729 2.055265 3.423644 2.166994 2.318524 1.882985 2.479464 2.112122
# 10 11 12 13 14 15 16 17 18
# 2.161615 1.919610 1.872954 1.736807 2.468160 2.307472 1.989437 2.222478 2.037563
And from there it is straightforward to assess model accuracy via the mean squared error
error <- predBin6 - df$numCouns
mse <- mean(error^2)
mse
# output
# [1] 4.871892
So my question is: is this the correct way to generate predictions from an aggregated binomial regression?

More or less, yes.
Instead of hard-coding the fact that there are 6 trials per observation (in some applications the number of trials differs from observation to observation), I would recommend
predBin6 <- predict(aggBinMod, type = "response")*weights(aggBinMod)
(which should give the same answer in your case).
I would also say that MSE is reasonable, but not necessarily the best measure of predictive accuracy for a binomial model (it doesn't take the dependence of the variance on the mean into account). (I don't have a particular alternative recommendation, but the deviance (deviance(aggBinMod)) or something similar might be appropriate.)
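For instance, a minimal sketch putting the two accuracy measures side by side, reusing predBin6 and the model object from above:
# mean squared error, as computed in the question
mean((predBin6 - df$numCouns)^2)
# residual deviance: a measure that respects the binomial mean-variance relationship
deviance(aggBinMod)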

Related

How do I interpret the coefficients of a glm with binomial error distribution?

I would be happy if someone could help me understand a glm with binomial error distribution.
Let's assume the following df:
year<-c(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
3, 3, 3, 3, 3, 3, 3, 3, 3, 3,3, 3, 3, 3, 3, 3, 3, 3)
success<-c(1, 0, 3, 1, 1, 2, 6, 0, 1, 1, 12, 2, NA, 6, 12, 0, 10,
7, 4, 10, 13, 1, 2, 1, 18, 6, 3, 8, 3, 1, 9, 15, 6, 12,
6, 15, 13, 6, 8, 6, 2, 11, 6, 1, 12, 0, 4, 15, 0, 3, 18,
5, 6, 17, 5, 3, 17, 8, 0, 7, 12, 10, 26, 12, 4, 17, 1, 8,
2, 7, 14, 8)
no_success<-c(1, 9, 5, 4, 6, 1, 4, 4, 6, 10, 16, 4, NA, 3, NA, 3,
5, 5, 6, 10, 0, 5, 3, 10, 1, 7, 11, 8, 20, 4, 3, 3,
19, 1, 11, 4, 6, 4, 9, 4, 10, 4, 2, 8, 3, 1, 13, 3,
5, 7, 5, 9, 3, 6, 3, 4, 3, 13, 6, 5, 10, 3, 1, 0,
18, 6, 13, 0, 3, 2, 2, 2)
df<-data.frame(year,success,no_success)
df$success<-as.integer(df$success)
df$no_success<-as.integer(df$no_success)
If I want to know whether there is a linear increase or decrease across years in the success or no_success of a made-up treatment, I apply a binomial glm:
m <- glm(cbind(success, no_success) ~ year,
         data = df, family = "quasibinomial",
         na.action = na.exclude)
summary(m)
I changed to "quasibinomial" here because of overdispersion.
From the summary I see that there is a significant effect: P: 0.0219 *
As the coefficients in a binomial glm represent log odds,
I get exp(estimate) = exp(0.3099) = 1.363.
So the odds of success are multiplied by 1.363 per year.
My Questions are:
1.) When I exponentiate a negative estimate the result is always positive; that cannot be correct. There must be a way to express negative relationships.
2.) When I want to visualize multiple linear models, I like to display the estimates.
In a "normal" lm I would display the estimate and confidence interval like this: divide the estimate by the mean of the observations, then subtract and add 1.96 times the standard error (also divided by the mean of the observations).
Estimate.mean<-exp(0.3099)/mean(df$or,na.rm=TRUE)
Std.Error.mean<-exp(0.1321)/mean(df$or,na.rm=TRUE)
low<-Estimate.mean-Std.Error.mean*1.96
high<-Estimate.mean+Std.Error.mean*1.96
If this confidence interval does not touch the zero line, the effect should be significant; if it crosses zero, the effect is not significantly different from zero.
But here the lower bound is -0.3901804 and the upper bound is 1.608095. That does not look like a significant linear relationship, despite the low p-value from the glm (0.0219).
What have I mixed up here?
I am happy for any suggestions
The "zero line" in this case is x=1 and not x=0.
Question 2:
the question is. Is there a effect that is different from zero?
But odds of 1 basicaly means zero.
Question 1:
When the estimate is exp the result can not be negative.But odds below 1 express a negative effect.
Here are some sources to calculate the confidence intervall for anyone stumbling over this post.
https://fromthebottomoftheheap.net/2018/12/10/confidence-intervals-for-glms/
https://stats.stackexchange.com/questions/304833/how-to-calculate-odds-ratio-and-95-confidence-interval-for-logistic-regression
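Following those sources, a minimal sketch of the usual odds-ratio confidence interval, assuming the quasibinomial model m fitted in the question:
# Wald-type interval on the log-odds scale, exponentiated to the odds-ratio scale
est <- coef(summary(m))["year", "Estimate"]
se <- coef(summary(m))["year", "Std. Error"]
exp(est + c(lower = -1.96, upper = 1.96) * se)
# profile-likelihood alternative, also exponentiated
exp(confint(m, "year"))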

How to plot the expected value of the difference in values for paired data using ggplot2

I have paired data as below and I want to compute the expected value of the difference in value (the column called value) within each pair. In every pair, one sibling has the disease and the other does not, as you can see from the data. In other words, I want the expected value of the difference in value between one sibling and the other.
The descriptions of the variables in the data are:
id = individual ID
familyID = family ID showing which individuals are paired
status = 1 means disease and status = 0 means no disease
Any guidance is appreciated.
d <- structure(list(id = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20),
familyID = c(1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10),
status = c(0,1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1),
value = c(29,26, 39, 22.3, 24, 41, 29.7, 24, 25.9, 21, 29,24,26,29, 15.2, 11, 35, 15.4,16, 13.4)),
class = c("tbl_df","tbl", "data.frame"), row.names = c(NA, -20L))
I'm not certain if this is what you are looking for, but I used pivot_wider from tidyr to spread the values into two columns, those with status 0 and those with status 1. Then I used mutate to take the difference between the two columns, and plotted familyID against the newly created difference with ggplot. Note that I removed the id column for pivot_wider to work.
library(dplyr)
library(tidyr)
library(ggplot2)
d %>%
  select(-id) %>%
  pivot_wider(values_from = value, names_from = status) %>%
  mutate(Diff = `0` - `1`) %>%
  ggplot() +
  aes(as.character(familyID), Diff) +
  geom_point()
You can group by familyID, then use summarize() from the dplyr package to find the differences.
Also note the conversion of id, familyID, and status to factors, which may make life easier so they aren't confused with being integers.
library(dplyr)
library(forcats)
library(ggplot2)
d <- structure(list(id = as.factor(c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20)),
familyID = as.factor(c(1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10)),
status = as.factor(c(0,1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1)),
value = c(29,26, 39, 22.3, 24, 41, 29.7, 24, 25.9, 21, 29,24,26,29, 15.2, 11, 35, 15.4,16, 13.4)),
class = c("tbl_df","tbl", "data.frame"), row.names = c(NA, -20L))
diffs <- group_by(d, familyID) %>%
  summarize(diff = value[status == 0] - value[status == 1])
Reordering the families by difference can help give a sense of the distribution of differences:
diffs$familyID <- fct_reorder(diffs$familyID, diffs$diff, .desc = TRUE)
ggplot(diffs, aes(x = familyID, y = diff)) +
geom_bar(stat="identity")
If you really have a lot of families you may want to display a summary of the differences.
One option is with a histogram (modifying binwidth can control how fine the bins are):
ggplot(diffs, aes(x = diff)) +
geom_histogram(binwidth = 3)
Similar to a histogram is a density plot:
ggplot(diffs, aes(x = diff)) +
geom_density()
Finally, a boxplot is also a familiar summary. Boxplots are mostly meant for comparing multiple groups, but they work okay with just one. I've added the individual points using the geom_jitter() function.
ggplot(diffs, aes(y = diff)) + #If using multiple groups add x=group inside the aes() function.
geom_boxplot() +
geom_jitter(aes(x = 0))
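If the goal is the expected value itself (the mean of the per-family differences), it can be computed from the same diffs data frame and marked on any of these plots; a small sketch using the objects defined above:
# mean difference across families, with a rough normal-approximation interval
mean_diff <- mean(diffs$diff)
se_diff <- sd(diffs$diff) / sqrt(nrow(diffs))
c(estimate = mean_diff,
  lower = mean_diff - 1.96 * se_diff,
  upper = mean_diff + 1.96 * se_diff)
# mark the mean difference on the density plot shown above
ggplot(diffs, aes(x = diff)) +
  geom_density() +
  geom_vline(xintercept = mean_diff, linetype = "dashed")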

Density plot of a vector shows tails below its minimum and above its maximum

I have the following vector:
v<-c(1, 1, 8, 3, 1, 9, 4, 21, 13, 13, 1, 1, 3, 10, 1, 13, 22, 1,
1, 4, 2, 1, 13, 1, 5, 1, 2, 1, 1, 2, 12, 10, 26, 15, 2, 9, 6,
5, 1, 3, 18, 2, 10, 2, 8, 9, 4, 1, 11, 4, 2, 12, 3, 14, 2, 1,
27, 3, 6, 2, 1, 1, 3, 16, 3, 36, 13, 9, 11, 10, 24, 2, 27, 4,
4, 2, 9, 1, 3, 13, 3, 1, 8, 5, 5, 15, 1, 1, 3, 1, 4, 14, 8, 1,
1, 2, 20, 1, 9, 3, 1, 2, 5, 14, 5, 11, 1, 3, 2, 9, 10, 21, 9,
1, 20, 5, 11, 23, 2, 1, 1, 2, 1, 7, 2, 9, 1, 19, 9, 9, 2, 15,
17, 8, 11, 17, 2, 14, 2, 8, 13, 1, 2, 9, 15, 25, 3, 8, 32, 4,
11, 1, 1, 2)
I would like to estimate its density in R using the density command. With a few lines of code:
d<-density(v)
df<-data.frame(x=d$x,y=d$y,stringsAsFactors = FALSE)
plot(df)
The resulting plot doesn't add up: max(v) is 36 and min(v) is 1, yet the graph shows tails extending below 0 and beyond 40.
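The tails come from the kernel smoothing itself: by default density() evaluates the estimate on a grid that extends cut * bw beyond the data range. A small sketch that confines the grid to the observed range of v, using density()'s from and to arguments:
# restrict the evaluation grid to [min(v), max(v)]
d2 <- density(v, from = min(v), to = max(v))
df2 <- data.frame(x = d2$x, y = d2$y)
plot(df2)
# for comparison, the default grid extends beyond the data:
range(density(v)$x)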

Finding differences between populations

I have equivalent data from 2019 and 2020. The proportions of diagnoses in 2020 look like they differ from 2019, but I'd like to ...
a) statistically test the populations are different.
b) determine which categories are the most different.
I've worked out I can do 'a' using:
chisq.test(test$count.2020, test$count.2019)
I don't know how to find out which categories are the ones that are the most different between 2020 and 2019. Any help would be amazing, thanks!
diagnosis <- data.frame(mf_label = c("Audiovestibular", "Autonomic", "Cardiovascular",
"Cerebral palsy", "Cerebrovascular", "COVID", "Cranial nerves",
"CSF disorders", "Developmental", "Epilepsy and consciousness",
"Functional", "Head injury", "Headache", "Hearing loss", "Infection",
"Maxillofacial", "Movement disorders", "Muscle and NMJ", "Musculoskeletal",
"Myelopathy", "Neurodegenerative", "Neuroinflammatory", "Peripheral nerve",
"Plexopathy", "Psychiatric", "Radiculopathy", "Spinal", "Syncope",
"Toxic and nutritional", "Tumour", "Visual system"),
count.2019 = c(5, 0, 1, 1, 2, 0, 4, 3, 0, 7, 4, 0, 24, 0, 0, 2, 22, 3, 3, 0, 3, 18, 12, 0, 0, 2, 2, 0, 1, 4, 0),
count.2020 = c(5, 1, 1, 3, 28, 9, 11, 13, 1, 13, 30, 5, 68, 1, 1, 2, 57, 14, 5, 8, 16, 37, 27, 3, 13, 17, 3, 1, 8, 13, 11))
Your chi-square test is not correct. You need to provide the counts as a table or matrix, not as two separate vectors. Because you have very small expected values for half of the cells, you need to use simulation to estimate the p-value:
results <- chisq.test(diagnosis[, 2:3], simulate.p.value=TRUE)
The overall table is barely significant at .05. The chisq.test function returns a list including the original data, the expected values, residuals, and standardized residuals. The manual page describes these (?chisq.test) and provides some citations for more details.
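A minimal sketch of that suggestion, pulling the standardized residuals out of the results object created above (labelling the rows with mf_label is the only addition):
# standardized residuals: large absolute values flag the categories whose
# 2019/2020 split departs most from the overall pattern
stdres <- results$stdres
rownames(stdres) <- diagnosis$mf_label
head(stdres[order(-abs(stdres[, 1])), ])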

Logarithmic scaling with ggplot2 in R

I am trying to create a diagram using ggplot2. There are several very small values to be displayed and a few larger ones. I'd like to display all of them in an appropriate way using logarithmic scaling. This is what I do:
plotPointsPre <- ggplot(data = solverEntries, aes(x = val, y = instance,
color = solver, group = solver))
...
finalPlot <- plotPointsPre + coord_trans(x = 'log10') + geom_point() +
xlab("costs") + ylab("instance")
The result is just the same as without coord_trans(x = 'log10'). However, if I apply it to the y-axis, the scaling does take effect.
How do I achieve logarithmic scaling on the x-axis? It does not really seem to be about the x-axis itself: if I swap the x and y values, the scaling works on the x-axis and no longer on the y-axis. So there seems to be some problem with the displayed values. Does anybody have an idea how to fix this?
Edit - Here's the used data contained in solverEntries:
solverEntries <- data.frame(instance = c(1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 10, 10, 10, 10, 11, 11, 11, 11, 12, 12, 12, 12, 13, 13, 13, 13, 14, 14, 14, 14, 15, 15, 15, 15, 16, 16, 16, 16, 17, 17, 17, 17, 18, 18, 18, 18, 19, 19, 19, 19, 20, 20, 20, 20),
solver = c(4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1, 4, 3, 2, 1),
time = c(1, 24, 13, 6, 1, 41, 15, 5, 1, 26, 16, 5, 1, 39, 7, 4, 1, 28, 11, 3, 1, 31, 12, 3, 1, 38, 20, 3, 1, 37, 10, 4, 1, 25, 11, 3, 1, 32, 18, 4, 1, 27, 21, 3, 1, 23, 22, 3, 1, 30, 17, 2, 1, 36, 8, 3, 1, 37, 19, 4, 1, 40, 21, 3, 1, 29, 11, 4, 1, 33, 10, 3, 1, 34, 9, 3, 1, 35, 14, 3),
val = c(6553.48, 6565.6, 6565.6, 6577.72, 6568.04, 7117.14, 6578.98, 6609.28, 6559.54, 6561.98, 6561.98, 6592.28, 6547.42, 7537.64, 6549.86, 6555.92, 6546.24, 6557.18, 6557.18, 6589.92, 6586.22, 6588.66, 6588.66, 6631.08, 6547.42, 7172.86, 6569.3, 6582.6, 6547.42, 6583.78, 6547.42, 6575.28, 6555.92, 6565.68, 6565.68, 6575.36, 6551.04, 6551.04, 6551.04, 6563.16, 6549.86, 6549.86, 6549.86, 6555.92, 6544.98, 6549.86, 6549.86, 6561.98, 6558.36, 6563.24, 6563.24, 6578.98, 6566.86, 7080.78, 6570.48, 6572.92, 6565.6, 7073.46, 6580.16, 6612.9, 6557.18, 7351.04, 6562.06, 6593.54, 6547.42, 6552.3, 6552.3, 6558.36, 6553.48, 6576.54, 6576.54, 6612.9, 6555.92, 6560.8, 6560.8, 6570.48, 6566.86, 6617.78, 6572.92, 6578.98))
Your data in its current form is not log-distributed: most val values sit around 6500, with some about 10% higher. If you want to stretch the data, you could build a custom transformation with scales::trans_new(); here is a simpler version that just subtracts a baseline value to make a log transform useful. After subtracting 6500, the small values are mapped to around 50 and the large values to around 1000, which is a more appropriate range for a log scale. We then apply the same shift to the breaks so that the labels appear in the right spots (i.e. the label 6550 is placed at the position of 6550 - 6500 = 50).
This method helps if you want to make the underlying values more distinguishable, but at the cost of distorting the proportions between them. You can mitigate this by picking useful breaks and labeling them with scaling stats, e.g. labeling 7000 as "+7% over min".
my_breaks <- c(6550, 6600, 6750, 7000, 7500)
baseline <- 6500
library(ggplot2)
ggplot(data = solverEntries,
       aes(x = val - baseline, y = instance,
           color = solver, group = solver)) +
  geom_point() +
  scale_x_log10(breaks = my_breaks - baseline,
                labels = my_breaks, name = "val")
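If you prefer not to relabel the breaks by hand, the scales::trans_new() route mentioned above can wrap the same shift-then-log idea into a reusable transformation; a sketch, with the 6500 baseline assumed as before:
library(scales)
# a "shifted log10" transformation: subtract the baseline before taking logs,
# and add it back when inverting, so the axis labels stay on the original scale
shift_log10_trans <- function(shift = 6500) {
  trans_new(name = paste0("shift-log10-", shift),
            transform = function(x) log10(x - shift),
            inverse = function(x) 10^x + shift)
}
ggplot(solverEntries, aes(x = val, y = instance, color = solver, group = solver)) +
  geom_point() +
  scale_x_continuous(trans = shift_log10_trans(), breaks = my_breaks)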
Is this what you're looking for?
library(ggplot2)
x_data <- seq(from = 1, to = 50)
y_data <- 2 * x_data + rnorm(n = 50, mean = 0, sd = 5)
# non-log y
ggplot() +
  aes(x = x_data, y = y_data) +
  geom_point()
# log y scale
ggplot() +
  aes(x = x_data, y = y_data) +
  geom_point() +
  scale_y_log10()
# log x scale
ggplot() +
  aes(x = x_data, y = y_data) +
  geom_point() +
  scale_x_log10()
