mgcv: How to use 'exclude' argument in predict.gam?

I have a model structured as follows, and I would like to extract the predicted values while ignoring the random effect. As described in ?predict.gam, I am using the exclude argument, but I am getting an error. Where is my mistake?
library(mgcv)
library(data.table)

dt <- data.frame(n1 = runif(500, min = 0, max = 1),
                 n2 = rep(1:10, 50),
                 n3 = runif(500, min = 0, max = 2),
                 n4 = runif(500, min = 0, max = 2),
                 c1 = factor(rep(c("X", "Y"), 250)),
                 c2 = factor(rep(c("a", "b", "c", "d", "e"), 100)))
mod = gam(n1 ~
            s(n2, n3, n4, by = c1) +
            s(c2, bs = "re"),
          data = dt)
newd = data.table(expand.grid(n1 = seq(min(dt$n1), max(dt$n1), 0.5),
                              n2 = 1:10,
                              n3 = seq(min(dt$n3), max(dt$n3), 0.5),
                              n4 = seq(min(dt$n4), max(dt$n4), 0.5),
                              c1 = c("X", "Y")))
newd$pred <- predict.gam(mod, newd, exclude = "s(c2)")
Error in predict.gam(mod, newd, exclude = "s(c2)"): not all required variables have been supplied in newdata!

exclude does not work in the way you assumed: you still need to supply all of the model's variables in newd for predict.gam. See my earlier answer on what goes on behind predict.gam.
Here is what you need to do:
## pad newd with an arbitrary value for variable c2
newd$c2 <- "a"
## termwise prediction
pt <- predict.gam(mod, newd, type = "terms", exclude = "s(c2)")
## linear predictor without random effect
lp_no_c2 <- rowSums(pt) + attr(pt, "constant")
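Alternatively, once c2 has been padded you can let exclude do the zeroing directly on the summed linear predictor; for this Gaussian, identity-link model the one-liner below (a minimal sketch of the same idea) gives the same values as lp_no_c2:
## default (link-scale) prediction with the random-effect term excluded
newd$pred <- predict.gam(mod, newd, exclude = "s(c2)")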

Related

Predicting from glmmTMB with truncated counts

I'm running a glmmTMB model with various truncated count distributions (truncated_poisson, truncated_compois, truncated_nbinom1, truncated_nbinom2). When I predict from the model, the values seem to be lower than expected, as if the prediction is not accounting for the truncation. Where am I going wrong? A toy example is provided, showing that predicted values are lower than observed means.
Any advice would be appreciated. Extra points if the advice can extend to the other truncated count distributions (see above) and if it shows how to correctly get the 95% confidence band around the estimated values in these cases.
library(dplyr)
library(extraDistr)
library(glmmTMB)
set.seed(1)
df <- data.frame(Group = rep(c("a", "b"), each = 20),
                 N = rtpois(40, 1, a = 0),
                 ran = "a") %>%
  mutate(N = ifelse(N == 0, 1, N))
m <- glmmTMB(N ~ Group + (1|ran), data = df, family = "truncated_poisson")
df %>% group_by(Group) %>% summarize(mean(N))
predict(m, newdata = data.frame(Group = c("a", "b"), ran = NA), type = "response")
I think the main issue is probably that you're using a slightly older version of glmmTMB (< 1.1.5, before a relevant prediction bug was fixed; see e.g. https://github.com/glmmTMB/glmmTMB/issues/860).
sample data
Streamlined slightly (we don't need to include a random effect for this example), with a truncated nbinom2 response added.
library(dplyr)
library(extraDistr)
library(glmmTMB)
set.seed(1)
df <- data.frame(Group = rep(c("a", "b"), each = 20),
                 Np = rtpois(40, 1, a = 0))
## clunky truncated-nbinom generator: redraw zeros until none remain
tnb <- rep(0, 40)
z <- (tnb == 0)
while (any(z)) {
  tnb[z] <- rnbinom(sum(z), mu = 1, size = 1)
  z <- (tnb == 0)
}
df$Nnb <- tnb
## summarize
df %>% group_by(Group) %>% summarize(across(starts_with("N"), mean))
##   Group    Np   Nnb
## 1 a      1.75   1.8
## 2 b      1.45  2.35
fit models
m1 <- glmmTMB(Np ~ Group, data = df, family = "truncated_poisson")
m2 <- update(m1, Nnb ~ ., family = truncated_nbinom2)
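With a fixed glmmTMB version the response-scale predictions should now line up with the observed group means shown above; a quick sanity check (output not shown):
predict(m1, newdata = data.frame(Group = c("a", "b")), type = "response")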
Predicting with se.fit = TRUE will give you standard errors for the predictions, from which you can compute confidence intervals (assuming Normality, i.e. Wald intervals, with the usual caveats):
pfun <- function(m, level = 0.95) {
  pp <- predict(m, newdata = data.frame(Group = c("a", "b")),
                type = "response",
                se.fit = TRUE)
  list(est = unname(pp$fit),
       lwr = unname(pp$fit + qnorm((1-level)/2)*pp$se.fit),
       upr = unname(pp$fit + qnorm((1+level)/2)*pp$se.fit))
}
pfun(m1)
pfun(m2)
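If you prefer intervals that are guaranteed to stay positive, a common variant (a sketch, not part of the answer above) is to build the Wald interval on the link (log) scale and back-transform with the inverse link:
pfun_link <- function(m, level = 0.95) {
  pp <- predict(m, newdata = data.frame(Group = c("a", "b")),
                type = "link", se.fit = TRUE)
  linkinv <- family(m)$linkinv   ## inverse link (exp for a log link)
  z <- qnorm(c((1 - level)/2, (1 + level)/2))
  list(est = linkinv(unname(pp$fit)),
       lwr = linkinv(unname(pp$fit + z[1]*pp$se.fit)),
       upr = linkinv(unname(pp$fit + z[2]*pp$se.fit)))
}
pfun_link(m1)
pfun_link(m2)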

Avoid writing large number of column names in a model formula with bs() terms

I want to use the bs() function for the numerical variables in my dataset when fitting a logistic regression model.
df <- data.frame(a = c(0,1), b = c(0,1), d = c(0,1), e = c(0,1),
                 f = c("m","f"), output = c(0,1))
library(splines)
model <- glm(output ~ bs(a, df = 2) + bs(b, df = 2) + bs(d, df = 2) + bs(e, df = 2) +
               factor(f),
             data = df,
             family = "binomial")
In my actual dataset, I need to apply bs() to way more columns than this example. Is there a way I can do this without writing all the terms?
We can use some string manipulation with sprintf, together with reformulate:
predictors <- c("a", "b", "d", "e")
bspl.terms <- sprintf("bs(%s, df = 2)", predictors)
other.terms <- "factor(f)"
form <- reformulate(c(bspl.terms, other.terms), response = "output")
#output ~ bs(a, df = 2) + bs(b, df = 2) + bs(d, df = 2) + bs(e,
# df = 2) + factor(f)
If you want to use a different df and degree for each spline, that is also straightforward (note that df cannot be smaller than degree).
predictors <- c("a", "b", "d", "e")
dof <- c(3, 4, 3, 6)
degree <- c(2, 2, 2, 3)
bspl.terms <- sprintf("bs(%s, df = %d, degree = %d)", predictors, dof, degree)
other.terms <- "factor(f)"
form <- reformulate(c(bspl.terms, other.terms), response = "output")
#output ~ bs(a, df = 3, degree = 2) + bs(b, df = 4, degree = 2) +
# bs(d, df = 3, degree = 2) + bs(e, df = 6, degree = 3) + factor(f)
Prof. Ben Bolker: I was going to suggest something a little bit fancier, something like predictors <- setdiff(names(df)[sapply(df, is.numeric)], "output").
Yes, this is good for safety, and of course an automatic way to include all numerical variables other than "output" as predictors, if that is what the OP wants.
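Either way, the constructed formula can be passed straight to glm(), completing the example (a brief sketch using the form built above):
model <- glm(form, data = df, family = "binomial")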

How can I perform bootstrap to find the confidence interval for a k-nn model in R?

I have a training df with 2 columns like
      a   b
1  1000  20
2  1008  13
...
n   ...  ...
I am required to find a 95% CI for the estimate of 'b' at a specific value of 'a', using a 'k' of my choice, and to compare that CI with the results for other values of 'k'. My question is: how can I perform a bootstrap with 1000 replicates, given that I must use a knn model fitted to the training data with kernel = 'gaussian' and k restricted to the range 1-20?
I have found that the best k for this model is k = 5, and I had a go at the bootstrap, but it doesn't work:
library(kknn)
library(boot)
boot.kn = function(formula, data, indices)
{
  # Create a bootstrapped version
  d = data[indices,]
  # Fit a model for bs
  fit.kn = fitted(train.kknn(formula, data, kernel = "gaussian", ks = 5))
  # Do I even need this complicated block
  target = as.character(fit.kn$terms[[2]])
  rv = my.pred.stats(fit.kn, d[,target])
  return(rv)
}
bs = boot(data = df, statistic = boot.kn, R = 1000, formula = b ~ a)
boot.ci(bs, conf = 0.95, type = "bca")
Please inform me for more info if I'm not clear enough. Thank you.
Here is a way to regress b on a with the k-nearest neighbors algorithm.
First, a data set. This is a subset of the iris data set, keeping the first two columns. One row is removed to later be the new data.
i <- which(iris$Sepal.Length == 5.3)
df1 <- iris[-i, 1:2]
newdata <- iris[i, 1:2]
names(df1) <- c("a", "b")
names(newdata) <- c("a", "b")
Now load the packages to be used and determine the optimal value for k with package kknn.
library(caret)
library(kknn)
library(boot)
fit <- kknn::train.kknn(
formula = b ~ a,
data = df1,
kmax = 15,
kernel = "gaussian",
distance = 1
)
k <- fit$best.parameters$k
k
#[1] 9
Now bootstrap the prediction for the new point, a = 5.3.
boot.kn <- function(data, indices, formula, newdata, k){
  # resample the training data
  d <- data[indices, ]
  # fit the k-nearest-neighbour regression, passing the chosen k
  fit <- knnreg(formula, data = d, k = k)
  # predicted b at the new a value
  predict(fit, newdata = newdata)
}
set.seed(2021)
R <- 1e4
bs <- boot(df1, boot.kn, R = R, formula = b ~ a, newdata = newdata, k = k)
ci <- boot.ci(bs, level = 0.95, type = "bca")
ci
#BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
#Based on 10000 bootstrap replicates
#
#CALL :
#boot.ci(boot.out = bs, type = "bca", level = 0.95)
#
#Intervals :
#Level BCa
#95% ( 3.177, 3.740 )
#Calculations and Intervals on Original Scale
Plot the results.
old_par <- par(mfrow = c(2, 1),
oma = c(5, 4, 0, 0) + 0.1,
mar = c(1, 1, 1, 1) + 0.1)
hist(bs$t, main = "Histogram of bootstrap values")
abline(v = 3.7, col = "red")
abline(v = mean(bs$t), col = "blue")
abline(v = ci$bca[4:5], col = "blue", lty = "dashed")
plot(b ~ a, df1)
points(5.3, 3.7, col = "red", pch = 19)
points(5.3, mean(bs$t), col = "blue", pch = 19)
arrows(x0 = 5.3, y0 = ci$bca[4],
x1 = 5.3, y1 = ci$bca[5],
col = "blue", angle = 90, code = 3)
par(old_par)
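Since the question also asks to compare the interval across several choices of k, the same bootstrap can simply be repeated for each candidate value (a brief sketch reusing boot.kn and the objects above, with the 1000 replicates the OP requires):
ks <- c(3, 5, 9, 15)
cis <- lapply(ks, function(kk) {
  bs_k <- boot(df1, boot.kn, R = 1000, formula = b ~ a, newdata = newdata, k = kk)
  boot.ci(bs_k, conf = 0.95, type = "bca")$bca[4:5]  ## lower and upper BCa bounds
})
names(cis) <- paste0("k = ", ks)
cis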

Silhouette value of each cluster using clustMixType

Using the clustMixType package, I'm trying to get the silhouette values of the following dataset, following the example on page 13 of the reference manual:
https://cran.r-project.org/web/packages/clustMixType/clustMixType.pdf
n <- 10
prb <- 0.99
muk <- 2.5
x1 <- sample(c("A","B"), 2*n, replace = TRUE, prob = c(prb, 1-prb))
x1 <- c(x1, sample(c("A","B"), 2*n, replace = TRUE, prob = c(1-prb, prb)))
x1 <- as.factor(x1)
x2 <- sample(c("A","B"), 2*n, replace = TRUE, prob = c(prb, 1-prb))
x2 <- c(x2, sample(c("A","B"), 2*n, replace = TRUE, prob = c(1-prb, prb)))
x2 <- as.factor(x2)
x3 <- c(rnorm(n, mean = -muk), rnorm(n, mean = muk), rnorm(n, mean = -muk), rnorm(n, mean = muk))
x4 <- c(rnorm(n, mean = -muk), rnorm(n, mean = muk), rnorm(n, mean = -muk), rnorm(n, mean = muk))
x <- data.frame(x1,x2,x3,x4)
This seems to work to obtain the silhouette value for several k number of clusters:
library(clustMixType)
Essil <- numeric(6)
for (i in 2:6) {
  kpres <- kproto(x, k = i, na.rm = FALSE)
  val_sil <- validation_kproto(method = "silhouette", object = kpres)
  Essil[i] <- val_sil
}
plot(1:6, Essil, type = "b", ylab = "Silhouette", xlab = "Number of clusters")
Essil
Therefore, as shown in the plot, the best number of clusters would be 4.
But validation_kproto gives the mean silhouette value for each k; perhaps 4 clusters gives the best mean silhouette while some of its clusters have a silhouette lower than 0.5, in which case 5 clusters might be a better solution.
It would therefore be important to know the silhouette value of each cluster; would that be possible?
I've tried setting kp_obj = 'all' but it does not work.
That is correct: the validation_kproto function only outputs the silhouette index for the entire cluster partition. It is currently not possible to get the indices per cluster.
A possible alternative in the current state: use val <- validation_kproto(data = x, k = 2:6, kp_obj = "all") (instead of for(i in 2:6){...}) to get all of the examined kproto objects, so that you can compare the within-cluster distances (val$kp_obj$withinss).
Many greetings Rabea
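If per-cluster values are needed anyway, one workaround (a rough sketch, not part of the answer above, and only an approximation because kproto uses its own mixed-type distance rather than Gower) is to compute silhouette widths from a Gower distance matrix with the cluster package:
library(cluster)
kpres <- kproto(x, k = 4, na.rm = FALSE)
d_gower <- daisy(x, metric = "gower")      ## mixed-type dissimilarity matrix
sil <- silhouette(kpres$cluster, d_gower)  ## silhouette width per observation
summary(sil)$clus.avg.widths               ## mean silhouette width per cluster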

Apply a Bayesian model (JAGS) for various iterations

Consider the following data frame:
set.seed(5678)
sub_df<- data.frame(clustersize= rep(1, 4),
lepsp= c("A", "B", "C", "D"),
dens= round(runif(4, c(0, 1)), 3),
db= sample(1:10, 4, replace=TRUE))
Let's say I wanted to run the following Bayesian linear model, which returns samples, a list of mcarray objects:
library("rjags")
library("coda")
dataForJags <- list(dens=sub_df$dens, db=sub_df$db, N=length(sub_df$dens))
model<-"model{
for(i in 1:N){
dens[i] ~ dnorm(mu[i], tau)
# identity
mu[i] <- int + beta1*db[i]
}
tau ~ dgamma(0.1,0.1)
int ~ dnorm(0, 0.001)
beta1 ~ dnorm(0, 0.001)
}"
##compile
mod1 <- jags.model(textConnection(model),data= dataForJags,n.chains=2)
##samples returns a list of mcarray objects
samples<-jags.samples(model= mod1,variable.names=c("beta1",
"int","mu","tau"),n.iter=100000)
Given that samples$beta1[,,] contains random samples from the posterior distribution of beta1, my next step would be to summarize it by calculating the posterior median and the 95% credible interval. So I would use:
coeff_output<- round(quantile(samples$beta1[,,],probs=c(0.5,0.025,0.975)),3)
Now, let's say my actual data frame has multiple levels of clustersize.
set.seed(5672)
df<- data.frame(clustersize= c(rep(1, 4), rep(2,4), rep(3, 3)),
lepsp= c("A", "B", "C", "D", "B", "C", "D", "E", "A", "D", "F"),
dens= round(runif(11, c(0, 1)), 3),
db= sample(1:10, 11, replace=TRUE))
How would I run this model for each level of clustersize separately and compile the output into a single results data frame, using a for loop or an apply function? For each level of clustersize, the resulting mcarray object samples should be output to result_list and the coeff_output should be output to a data frame result_coeff.
Below I calculate the output for each clustersize separately, to produce the expected result list and data frame.
#clustersize==1
sub_df1<- data.frame(clustersize= rep(1, 4),
lepsp= c("A", "B", "C", "D"),
dens= round(runif(4, c(0, 1)), 3),
db= sample(1:10, 4, replace=TRUE))
dataForJags <- list(dens=sub_df1$dens, db=sub_df1$db, N=length(sub_df1$dens))
model<-"model{
for(i in 1:N){
dens[i] ~ dnorm(mu[i], tau)
mu[i] <- int + beta1*db[i]
}
tau ~ dgamma(0.1,0.1)
int ~ dnorm(0, 0.001)
beta1 ~ dnorm(0, 0.001)
}"
mod1 <- jags.model(textConnection(model),data= dataForJags,n.chains=2)
samples1<-jags.samples(model= mod1,variable.names=c("beta1",
"int","mu","tau"),n.iter=100000)
coeff_output1<-
data.frame(as.list(round(quantile(samples1$beta1[,,],probs=c(0.5,0.025,0.975)),3)))
#clustersize==2
sub_df2<- data.frame(clustersize= rep(2,4),
lepsp= c( "B", "C", "D", "E"),
dens= round(runif(4, c(0, 1)), 3),
db= sample(1:10, 4, replace=TRUE))
dataForJags <- list(dens=sub_df2$dens, db=sub_df2$db, N=length(sub_df2$dens))
model<-"model{
for(i in 1:N){
dens[i] ~ dnorm(mu[i], tau)
mu[i] <- int + beta1*db[i]
}
tau ~ dgamma(0.1,0.1)
int ~ dnorm(0, 0.001)
beta1 ~ dnorm(0, 0.001)
}"
mod1 <- jags.model(textConnection(model),data= dataForJags,n.chains=2)
samples2<-jags.samples(model= mod1,variable.names=c("beta1",
"int","mu","tau"),n.iter=100000)
coeff_output2<-
data.frame(as.list(round(quantile(samples2$beta1[,,],probs=c(0.5,0.025,0.975)),3)))
#clustersize==3
sub_df3<- data.frame(clustersize= rep(3, 3),
lepsp= c("A", "D", "F"),
dens= round(runif(3, c(0, 1)), 3),
db= sample(1:10, 3, replace=TRUE))
dataForJags <- list(dens=sub_df3$dens, db=sub_df3$db, N=length(sub_df3$dens))
model<-"model{
for(i in 1:N){
dens[i] ~ dnorm(mu[i], tau)
mu[i] <- int + beta1*db[i]
}
tau ~ dgamma(0.1,0.1)
int ~ dnorm(0, 0.001)
beta1 ~ dnorm(0, 0.001)
}"
mod1 <- jags.model(textConnection(model),data= dataForJags,n.chains=2)
samples3<-jags.samples(model= mod1,variable.names=c("beta1",
"int","mu","tau"),n.iter=100000)
coeff_output3<-
data.frame(as.list(round(quantile(samples3$beta1[,,],probs=c(0.5,0.025,0.975)),3)))
Desired final output:
result_list<- list(samples1, samples2, samples3)
result_coeff<-rbind(coeff_output1, coeff_output2, coeff_output3)
Here is a link to the actual data frame. The solution should be able to process a large dataframe with clustersizes up to 600.
temp <- tempfile(fileext = ".csv")
download.file("https://drive.google.com/file/d/1ZYIQtb_QHbYsInDGkta-5P2EJrFRDf22/view?usp=sharing", temp)
There are a few issues to consider here, which are caused by the scale of what you're trying to do. You are creating over 550 different jags.sample objects with 100000 iterations each, and then trying to store all of them in a single list. On most machines, this will cause memory issues: the output is simply too large.
There are at least two ways that we can deal with this:
Take measures to reduce the memory usage of our input data by as much as possible.
Tune our JAGS output so that it does not save so many iterations from each chain.
I've made a number of modifications to your code that should allow it to work with your actual dataset.
Creating Input Data:
In your original code, clustersize and db both have the data type numeric, even though they only need to be integers. The numeric type takes 8 bytes, while the integer type only takes 4 bytes. If we coerce these two columns to the integer type, we can actually reduce the memory size of the list of dataframes in the next step by about 30%.
library("tidyverse")
#### Load Raw Data ####
df <- read_csv("example.csv") %>%
select(-1) %>%
mutate(clustersize = as.integer(clustersize),
db = as.integer(db))
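To see the storage difference concretely, here is a toy check (illustrative numbers only, not the actual dataset):
x_num <- as.numeric(1:1e6)
x_int <- as.integer(1:1e6)
object.size(x_num)  ## roughly 8 MB
object.size(x_int)  ## roughly 4 MB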
Initial JAGS Tuning
You are using far too many iterations for each of your chains; n.iter = 100000 is extremely high. You should also be specifying an adaptation period (n.adapt in jags.model()), a burn-in period (an initial run of update() whose draws are discarded), and a thinning interval (thin). The thinning interval is especially important here, since it directly reduces the number of iterations saved from each chain: a thinning interval of 50 means only every 50th draw is kept, so n.iter = 25000 with thin = 50 stores just 500 draws per chain.
There are post-hoc methods for selecting your thinning parameters, burn-in, and adaptation period, but that discussion is beyond the scope of SO. For some basic information on what all of these arguments do, there is an excellent answer here: https://stackoverflow.com/a/38875637/9598813. For now, I've provided values that will allow this code to run on your entire dataset, but I recommend that you carefully select the values that you use for your final analysis.
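As a rough sketch (not part of the original answer), the usual coda diagnostics can help guide those choices once you have an mcmc.list such as the samples object produced by coda.samples() below:
library(coda)
effectiveSize(samples)                      ## effective sample size per monitored parameter
autocorr.diag(samples)                      ## autocorrelation at increasing lags
gelman.diag(samples, multivariate = FALSE)  ## per-parameter convergence across the two chains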
Using tidybayes
The following solution uses the tidybayes package. This provides a clean output and allows us to neatly row-bind all of the coefficient summaries into a single dataframe. Note that we use coda.samples() instead of jags.samples(), because this provides a more universal MCMC object that we can pass to spread_draws(). We also use dplyr::group_split() which is slightly more computationally efficient than split().
library("rjags")
library("coda")
library("tidybayes")
set.seed(5672)
result <- df %>% group_split(clustersize) %>% map(~{
dataForJags <- list(dens=.x$dens, db=.x$db, N=length(.x$dens))
# Declare model structure; the adaptation period is set in jags.model()
mod1 <- jags.model(textConnection(model),
                   data = dataForJags,
                   n.chains = 2,
                   n.adapt = 5000)
# Burn-in draws, discarded before sampling
update(mod1, n.iter = 10000)
# coda.samples returns an mcmc.list object
samples <- coda.samples(model = mod1,
                        variable.names = c("beta1", "int", "mu", "tau"),
                        n.iter = 25000,
                        thin = 50)
# Extract individual draws
samp <- spread_draws(samples, beta1)
# Summarize the posterior median and 95% credible interval
coeff_output <- samp %>%
  median_qi(beta1)
list(samples = samp, coeff_output = coeff_output)
}) %>% transpose()
# List of sample objects
result$samples
# Dataframe of coefficient estimates and 95% credible intervals
result_coeff <- bind_rows(result$coeff_output, .id = "clustersize")
You can use map from the purrr package and split the data over the different clustersize values:
library(rjags)
library(coda)
library(purrr)
set.seed(5678)
set.seed(5672)
df<- data.frame(clustersize= c(rep(1, 4), rep(2,4), rep(3, 3)),
lepsp= c("A", "B", "C", "D", "B", "C", "D", "E", "A", "D", "F"),
dens= round(runif(11, c(0, 1)), 3),
db= sample(1:10, 11, replace=TRUE))
model<-"model{
for(i in 1:N){
dens[i] ~ dnorm(mu[i], tau)
# identity
mu[i] <- int + beta1*db[i]
}
tau ~ dgamma(0.1,0.1)
int ~ dnorm(0, 0.001)
beta1 ~ dnorm(0, 0.001)
}"
# split data for different clustersize and calculate result
result <- df %>% split(.$clustersize) %>% map(~{
dataForJags <- list(dens=.x$dens, db=.x$db, N=length(.x$dens))
##compile
mod1 <- jags.model(textConnection(model),data= dataForJags,n.chains=2)
##samples returns a list of mcarray objects
samples<-jags.samples(model= mod1,variable.names=c("beta1","int","mu","tau"),n.iter=100000)
coeff_output<- data.frame(as.list(round(quantile(samples$beta1[,,],probs=c(0.5,0.025,0.975)),3)))
list(samples = samples, coeff_output = coeff_output)
}) %>% transpose()
result$samples
result$coeff_output
Note the use of purrr::transpose to reshape the final result into a list of samples and a list of coefficient summaries, as per your request.
