Constraining Bayesian multinomial logistic in R via JAGS

I am learning how to fit Bayesian multinomial logistic models in R. This is my first attempt at using JAGS via rjags. The code below gives a MWE of what I am trying to do:
## simulate data
set.seed(123)
n <- 2000
## one multinomial draw per subject across the 4 outcome levels
rr <- t(rmultinom(n, 1, c(.4, .1, .3, .2)))
r2 <- rr[, 2]  # indicator for outcome level 2
r3 <- rr[, 3]  # indicator for outcome level 3
r4 <- rr[, 4]  # indicator for outcome level 4
abt <- rbinom(n, 1, .1); smk <- rbinom(n, 1, .3)
age <- rnorm(n); bmi <- rnorm(n)
## load programs
library("rjags")
## model
NMMmodel.string <- "
model{
for (i in 1:N){
## outcome levels 2, 3, and 4
r2[i] ~ dbern(pi2[i])
r3[i] ~ dbern(pi3[i])
r4[i] ~ dbern(pi4[i])
## linear predictors
logit(pi2[i]) <- g[1]+g[2]*age[i]+g[3]*abt[i]+g[4]*smk[i]
logit(pi3[i]) <- g[5]+g[6]*bmi[i]+g[7]*age[i]+g[8]*smk[i]
logit(pi4[i]) <- g[9]+g[10]*age[i]+g[11]*smk[i]+g[12]*bmi[i]
## probability that outcome is level 1
pi1[i] <- 1-pi2[i]-pi3[i]-pi4[i]
}
for (j in 1:12) {
g[j] ~ dnorm(0, 0.01)
}
}
"
NMMmodel.spec<-textConnection(NMMmodel.string)
## fit model w JAGS
jags <- jags.model(NMMmodel.spec,
data = list('r2'=r2,'r3'=r3,'r4'=r4,
'abt'=abt,'smk'=smk,
'age'=age,'bmi'=bmi,'N'=n),
n.chains=4,
n.adapt=100)
Here are two questions, in order of decreasing importance:
Question 1: I would like to put a constraint on the estimated parameters indexed by g[1] to g[12] such that pi1 lies between some arbitrary upper and lower bounds: say, a=0.25 and b=0.75. One way is to use rejection sampling, where rjags rejects all samples that return pi1 less than a or greater than b. How can I do this?
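For concreteness, is it something like the following? (My rough sketch, not from a working example: an all-ones dummy vector and step() make the likelihood zero whenever the constraint is violated; ones = rep(1, n), a = 0.25, and b = 0.75 would be added to the data list.)
## inside the existing for (i in 1:N) loop, after pi1[i] is defined:
ones[i] ~ dbern(ind[i])
ind[i] <- step(pi1[i] - a) * step(b - pi1[i]) #equals 1 only when a <= pi1[i] <= b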
Question 2: What, exactly, is this program doing? For example, if this program implements a Gibbs sampler, is there a way to code it up without resorting to JAGS, Stan, or BUGS? Something like the first set of code on this website?
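For instance, would it look something like this hand-rolled random-walk Metropolis sampler? (My rough sketch, reusing the simulated data above; I gather JAGS itself assigns samplers per node rather than running exactly this.)
log_post <- function(g) {
  pi2 <- plogis(g[1] + g[2]*age + g[3]*abt + g[4]*smk)
  pi3 <- plogis(g[5] + g[6]*bmi + g[7]*age + g[8]*smk)
  pi4 <- plogis(g[9] + g[10]*age + g[11]*smk + g[12]*bmi)
  sum(dbinom(r2, 1, pi2, log = TRUE)) +
    sum(dbinom(r3, 1, pi3, log = TRUE)) +
    sum(dbinom(r4, 1, pi4, log = TRUE)) +
    sum(dnorm(g, 0, 10, log = TRUE)) #prior dnorm(0, 0.01) is precision 0.01, i.e. sd 10
}
n_iter <- 5000
draws <- matrix(NA, n_iter, 12)
g_cur <- rep(0, 12); lp_cur <- log_post(g_cur)
for (s in 1:n_iter) {
  g_prop <- g_cur + rnorm(12, 0, 0.05) #random-walk proposal (step size untuned)
  lp_prop <- log_post(g_prop)
  if (log(runif(1)) < lp_prop - lp_cur) { g_cur <- g_prop; lp_cur <- lp_prop }
  draws[s, ] <- g_cur
}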

Related

How to specify zero-inflated negative binomial model in JAGS

I'm currently working on constructing a zero-inflated negative binomial (ZINB) model in JAGS to model yearly change in abundance using count data, and I'm currently a bit lost on how best to specify the model. I've included an example of the base model I'm using below.
The main issue I'm struggling with is that in the model output I'm getting poor convergence (high Rhat values, low Neff values) and the 95% credible intervals are huge. I realize that without seeing/running the actual data there's probably not much anyone can help with, but I thought I'd at least try and see if there are any obvious errors in the way I have the basic model specified.
I also tried fitting a variety of other model types (regular negative binomial, Poisson, and zero-inflated Poisson), but decided to go with the ZINB since it had the lowest DIC score of all the models and also makes the most intuitive sense to me given my data structure.
library(R2jags)
# Create example dataframe
years <- c(1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2)
sites <- c(1,1,1,2,2,2,3,3,3,1,1,1,2,2,2,3,3,3)
months <- c(1,2,3,1,2,3,1,2,3,1,2,3,1,2,3,1,2,3)
# Count data (seed set so the example is reproducible)
set.seed(42)
day1 <- floor(runif(18,0,7))
day2 <- floor(runif(18,0,7))
day3 <- floor(runif(18,0,7))
day4 <- floor(runif(18,0,7))
day5 <- floor(runif(18,0,7))
df <- as.data.frame(cbind(years, sites, months, day1, day2, day3, day4, day5))
# Put count data into array
y <- array(NA,dim=c(2,3,3,5))
for(m in 1:2){
  for(k in 1:3){
    sel.rows <- df$years == m & df$months == k
    y[m,k,,] <- as.matrix(df)[sel.rows, 4:8]
  }
}
# JAGS model
sink("model1.txt")
cat("
model {
# PRIORS
for(m in 1:2){
r[m] ~ dunif(0,50)
}
t.int ~ dlogis(0,1)
b.int ~ dlogis(0,1)
p.det ~ dunif(0,1)
# LIKELIHOOD
# ECOLOGICAL SUBMODEL FOR TRUE ABUNDANCE
for (m in 1:2) {
zero[m] ~ dbern(pi[m])
pi[m] <- ilogit(mu.binary[m])
mu.binary[m] <- t.int
for (k in 1:3) {
for (i in 1:3) {
N[m,k,i] ~ dnegbin(p[m,k,i], r[m])
p[m,k,i] <- r[m] / (r[m] + (1 - zero[m]) * lambda.count[m,k,i]) - 1e-10 * zero[m]
lambda.count[m,k,i] <- exp(mu.count[m,k,i])
mu.count[m,k,i] <- b.int # log-scale mean (exponentiated once above)
# OBSERVATIONAL SUBMODEL FOR DETECTION
for (j in 1:5) {
y[m,k,i,j] ~ dbin(p.det, N[m,k,i])
}#j
}#i
}#k
}#m
}#END", fill=TRUE)
sink()
win.data <- list(y = y)
Nst <- apply(y,c(1,2,3),max)+1
inits <- function()list(N = Nst)
params <- c("N")
nc <- 3
nt <- 1
ni <- 50000
nb <- 5000
out <- jags(win.data, inits, params, "model1.txt",
n.chains = nc, n.thin = nt, n.iter = ni, n.burnin = nb,
working.directory = getwd())
print(out)
The way that I have tended to specify zero-inflated models is to model the data as being Poisson distributed with mean that is either zero if that individual is part of the zero-inflated group, or distributed according to a gamma distribution otherwise. Something like:
Obs[i] ~ dpois(lambda[i] * is_zero[i]) # mean is forced to 0 when is_zero[i] == 0
is_zero[i] ~ dbern(zero_prob) # NB: despite the name, is_zero[i] == 1 marks the non-zero-inflated group
lambda[i] ~ dgamma(k, k/mean) # gamma with mean 'mean' and shape k
Something similar to this was first used in this paper: https://www.researchgate.net/publication/5231190_The_distribution_of_the_pathogenic_nematode_Nematodirus_battus_in_lambs_is_zero-inflated
These models usually converge OK, although the performance is not as good as for simpler models of course. You also need to make sure to supply initial values for is_zero so that the model starts with all individuals with positive counts in the appropriate group.
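For example (a sketch, using the names from the snippet above):
inits <- function() list(is_zero = as.numeric(Obs > 0)) #every positive count starts in the non-zero group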
In your case, you have multiple timepoints, so you need to decide if the zero-inflation is fixed over time points (i.e. an individual cannot switch to or from zero-inflated group over time), or if each observation is completely independent with respect to zero-inflation status. You also need to decide if you want to have co-variates of year/month/site affecting the mean count (i.e. the gamma part) or the probability of a positive count (i.e. the zero-inflation part). For the former, you need to index mean (in my formulation) by i and then use a GLM-like formula (probably using log link) to relate this to the appropriate covariates. For the latter, you need to index zero_prob by i and then use a GLM-like formula (probably using logit link) to relate this to the appropriate covariates. It is also possible to do both, but if you try to use the same covariates in both parts then you can expect convergence problems!
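For instance, a rough sketch of the former option (an observation-indexed mean on the log link; the covariate vectors year[] and site[] and the coefficient names are placeholders of mine, not from the question):
for (i in 1:N_obs) {
Obs[i] ~ dpois(lambda[i] * is_zero[i])
is_zero[i] ~ dbern(zero_prob)
lambda[i] ~ dgamma(k, k / mean[i])
log(mean[i]) <- b0 + b.year[year[i]] + b.site[site[i]] #GLM-like formula on the log link
}
#(plus priors for b0, b.year[], b.site[], k, and zero_prob)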
It would arguably be better to replace the separate Poisson-Gamma distributions with a single Negative Binomial distribution using the 'ecology parameterisation' with mean and k. This is not currently implemented in JAGS, but I will add it for the next update.
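In the meantime, the same mean/k parameterisation can be written with the existing dnegbin(prob, size), e.g. (a sketch):
Obs[i] ~ dnegbin( k / (k + mean[i]), k ) #implies E[Obs[i]] = mean[i] with overdispersion parameter k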

Including an offset when using cph {rms} for validation of a Cox model

I am externally validating and updating a Cox model in R. The model predicts 5 year risk. I don't have access to the original data, just the equation for the linear predictor and the value of the baseline survival probability at 5 years.
I have assessed calibration and discrimination of the model in my dataset and found that the model needs to be updated.
I want to update the model by adjusting baseline risk only, so I have been using a Cox model with the linear predictor ("beta.sum") included as an offset term, to restrict its coefficient to be 1.
I want to be able to use cph instead of coxph as it makes internal validation by bootstrapping much easier. However, when including the linear predictor as an offset I get the error:
"Error in exp(object$linear.predictors) :
non-numeric argument to mathematical function"
Is there something I am doing incorrectly, or does the cph function not allow an offset within the formula? If so, is there another way to restrict the coefficient to 1?
My code is below:
load(file="k.Rdata")
### Predicted risk ###
# linear predictor (LP)
k$beta.sum <- -0.2201 * ((k$age/10)-7.036) + 0.2467 * (k$male - 0.5642) - 0.5567 * ((k$epi/5)-7.222) +
0.4510 * (log(k$acr_mgmmol/0.113)-5.137)
k$pred <- 1 - 0.9365^exp(k$beta.sum)
# Recalibrated model
# Using coxph:
library(survival)
cox.new <- coxph(Surv(time, rrt) ~ offset(beta.sum), data = k, x=TRUE, y=TRUE)
# new baseline survival at 5 years
library(pec)
predictSurvProb(cox.new, newdata=data.frame(beta.sum=0), times = 5) #baseline = 0.9570
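# Alternatively (a sketch of mine using survival::basehaz, not verified on my real data):
H0 <- basehaz(cox.new, centered = FALSE) # Breslow cumulative baseline hazard with offset = 0
exp(-approx(H0$time, H0$hazard, xout = 5)$y) # baseline survival at 5 years; should match 0.9570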
# Using cph
library(rms)
cph.new <- cph(Surv(time, rrt) ~ offset(beta.sum), data=k, x=TRUE, y=TRUE, surv=TRUE)
The model will run without surv=TRUE included, but this means a lot of the commands I want to use cannot work, such as calibrate, validate and predictSurvProb.
EDIT:
I will include a way to reproduce the error
library(purrr)
library(rms)
n <- 1000
set.seed(1234)
status <- as.numeric(rbernoulli(n, p=0.1))
time <- -5* log(runif(n))
lp <- rnorm(n, mean=-2.7, sd=1)
mydata <- data.frame(status, time, lp)
test <- cph(Surv(time, status) ~ offset(lp), data=mydata, surv=TRUE)

Predicting new values in jags (mixed model)

I asked a similar question a while ago on how to get model predictions in JAGS for mixed models. Here's my original question.
This time, I'm trying to get predictions for the same model but using new data and not the original that was used to fit the model.
model<-"model {
# Priors
mu_int~dnorm(0, 0.0001)
sigma_int~dunif(0, 100)
tau_int <- 1/(sigma_int*sigma_int)
for (j in 1:(M)){
alpha[j] ~ dnorm(mu_int, tau_int)
}
beta~dnorm(0, 0.01)
sigma_res~dunif(0, 100)
tau_res <- 1/(sigma_res*sigma_res)
# Likelihood
for (i in 1:n) {
mu[i] <- alpha[Mat[i]]+beta*Temp[i] # Expectation
D47[i]~dnorm(mu[i], tau_res) # The actual (random) responses
}
for(i in 1:n){
D47_pred[i] ~ dnorm(mu[i], tau_res) # posterior predictive draws
}
}"
I know this can be done using the posterior distributions of the resulting parameters, but I'm wondering if it could also be implemented inside JAGS.
Thank you!
It absolutely could be done inside JAGS. If you wanted predictions for new values of Temp for some of the same observations in Mat, you would just have to append them to the existing data with a corresponding D47 value of NA.
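For example (a sketch; Temp_new and Mat_new stand in for whatever new covariate values you have):
Temp_all <- c(Temp, Temp_new)
Mat_all <- c(Mat, Mat_new) # random-effect levels must already exist in 1:M
D47_all <- c(D47, rep(NA, length(Temp_new))) # NA responses become predicted nodes
jags_data <- list(D47 = D47_all, Temp = Temp_all, Mat = Mat_all,
n = length(D47_all), M = max(Mat_all))
## monitoring D47 (or D47_pred) at the appended indices then gives the predictions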

Coding an integrated Bayesian model with a mix of stochastic and deterministic inputs

The Problem
I am having trouble figuring out how to implement a Bayesian Framework for a predictive model that contains many deterministic inputs mixed with a few stochastic inputs. Conceptually the problem seems easy, but from a coding standpoint I am unsure how to specify this model as an integrated unit in JAGS/BUGS language (e.g., the entire equation described within a single JAGS/BUGS model).
The Way Forward
I have been told that the entire analysis should be within the confines of a single JAGS/BUGS model, rather than as separate JAGS models whose posterior predictive distributions (PPDs) each contribute 100,000 simulated data points that are then input into the main equation.
Perhaps all that is required is to copy and paste all the JAGS code into one place and add the larger equation?
The Equation
The basic components of this equation are:
Outcome = probability of an event * rate of exposure * ability to avoid that event
The full equation (given as an image in the original post) has stochastic variables G, PQ+R, and avoid, where PQ+R is a linear regression that helps predict annual variability in exposure; the remaining inputs are deterministic. I have empirical data for each of these variables and need to use a Bayesian framework (or other probabilistic approach) to model the outcome of this equation based on this combination of deterministic and stochastic inputs.
Initial Attempt
While I am a fan of Bayesian approaches in theory and am working to actualize this approach, my training to date has been more focused on resampling procedures. As such, the most logical thing to do seemed to be to generate separate JAGS code for each of the stochastic inputs and then use each of the saved steps in the MCMC chain for each parameter (s=100,000) to plug into the likelihood function and generate a dataset from the Posterior Predictive Distribution (PPD) to serve as an input for the larger equation.
Although the model is not integrated into a single JAGS code unit, mathematically this seems to be the proper approach, given the answer by baruuum to Fred L.'s question about Understanding Bayesian Predictive Distributions, URL (v: 2018-08-19): https://stats.stackexchange.com/q/335496, excerpted here:
Now, how do we obtain the samples from p(y~|y)? The method you describe is sometimes called the method of composition, which works as follows:
for s = 1, 2, ..., S do
draw θ(s) from p(θ|y) #this is the MCMC sample of the posterior from JAGS
draw y~(s) from p(y~|θ(s)) #this draws a simulated dataset from the PPD
where, in most situations, we already have the draws from p(θ|y), so that only the second step is required.
As a side note: I have not done this second step in JAGS either, as I had not found an example of how to do it, so I just took the 100,000 θ(s) from p(θ|y) and generated the y~(s) to serve as input into the larger equation.
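For example, for the log-normal model for G in Step 1 below, the second step would look something like this in R (a sketch; 'post' is the draws matrix from the coda object):
post <- as.matrix(mcmcCoda1)
y_tilde <- rlnorm(nrow(post), post[, "muOfLogY"], post[, "sigmaOfLogY"]) #one y~(s) per θ(s)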
Step 1
I have made three separate JAGS models to generate Posterior Predictive Distributions, one per stochastic variable:
1. One for estimating the parameters needed to generate a PPD for variable G;
2. One for estimating the parameters needed to generate a PPD for "avoid";
3. One for estimating the P (slope) and R (intercept) parameters of a linear relationship and generating Q (X-values) to generate the expected values of Y.
Each of these JAGS models has the following:
#JAGS ITERATIONS
adaptSteps = 5000 #Number of steps to "tune" the samplers
burnInSteps = 50000 #Number of steps to "burn-in" the samplers
nChains = 4 #Number of chains to run
thinSteps = 5 #Number of steps to "thin" (1=keep every step)
numSavedSteps = 100000 #Total number of steps in chains to save
nPerChain = ceiling( ( numSavedSteps * thinSteps ) / nChains ) #Steps per chain
Here is the JAGS model and code to implement it for:
G:
#JAGS MODEL CODE FOR G
modelString = "
model {
#Likelihood function: Log-normal
#Note that JAGS parameterizes the log-normal distribution with precision (the inverse of variance) rather than with variance or sd
for( i in 1 : N ) {
y[i] ~ dlnorm( muOfLogY , 1/sigmaOfLogY^2 )
}
#Prior for mu of log(y): Normal distribution with a location parameter matching mean of log(y) of flight speed with a large spread about the location
muOfLogY ~ dnorm( meanOfLogY , 1/(10*sdOfLogY)^2 )
#Prior for sigma of log(y): Uniform distribution with a wide range that included all probable values of variance for flight speed
sigmaOfLogY ~ dunif( 0.001*sdOfLogY , 1000*sdOfLogY )
#Derived quantities (not priors): mean, mode, and sd of Y on the original
#(non-logarithmic) scale; not essential for calculations of collision risk but
#tracked so that parameter distributions could be viewed on a more intuitive scale
muOfY <- exp( muOfLogY+sigmaOfLogY^2/2 )
modeOfY <- exp( muOfLogY-sigmaOfLogY^2 )
sigmaOfY <- sqrt( exp(2*muOfLogY+sigmaOfLogY^2)*(exp(sigmaOfLogY^2)-1) )
#-----------------
#References:
#Code adapted from Kruschke (2015): Doing Bayesian Data Analysis-A Tutorial with R, JAGS, and Stan (Jags-Ymet-Xnom1grp-MlogNormal-Script.R)
}
" # close quote for modelString
writeLines( modelString , con="JAGS_C1.txt" )
#PARAMETERS TO TRACK
parameters = c("muOfLogY" , "sigmaOfLogY" , "muOfY" , "modeOfY" , "sigmaOfY" )
# CREATE, INITIALIZE, AND ADAPT THE MODEL:
# (dataList must supply y, N, meanOfLogY, and sdOfLogY)
jagsModel = jags.model( "JAGS_C1.txt" , data=dataList ,
n.chains=nChains , n.adapt=adaptSteps )
# BURN IN THE MCMC CHAIN:
cat( "Burning in the MCMC chain...\n" )
update( jagsModel , n.iter=burnInSteps )
# SAVE MCMC CHAIN:
cat( "Sampling final MCMC chain...\n" )
mcmcCoda1 = coda.samples( jagsModel , variable.names=parameters ,
n.iter=nPerChain , thin=thinSteps )
avoid:
#JAGS MODEL FOR AVOID
#-----------------
modelString = "
model {
#Likelihood function: Weibull
#Note that the parameterization of this distribution in JAGS and R (dweibull) uses different notation/parameters
#Shape parameter: JAGS is 'nu', R (dweibull) is 'a'
#Scale parameter: JAGS is 'lambda', R (dweibull) is 'b'
for ( i in 1:N ) {
y[i] ~ dweib( nu , lambda ) #This is the JAGS parameterization of Weibull distribution
}
#Prior for nu: Uniform distribution that included all probable values of the shape parameter
nu <- a #Renaming parameter to match R language rather than JAGS language (more intuitive)
a ~ dunif( 0,200 )
#Prior for lambda: Uniform distribution that included all probable values of the scale parameter
lambda <- 1/b^a #Put parameter in terms of R language rather than JAGS language (more intuitive)
b ~ dunif(0,200)
#-----------------
#References:
#Code adapted from Kruschke (2015): Doing Bayesian Data Analysis-A Tutorial with R, JAGS, and Stan (Jags-YmetCensored-Xnom2grp-Mweibull.R)
}
" # close quote for modelString
writeLines( modelString , con="JAGS_C2.txt" )
#PARAMETERS TO TRACK
parameters = c("nu","a","b")
nPerChain = ceiling( ( numSavedSteps * thinSteps ) / nChains ) # Steps per chain.
# CREATE, INITIALIZE, AND ADAPT THE MODEL:
jagsModel = jags.model( "JAGS_C2.txt" , data=dataList ,
n.chains=nChains , n.adapt=adaptSteps )
# BURN IN THE MCMC CHAIN:
cat( "Burning in the MCMC chain...\n" )
update( jagsModel , n.iter=burnInSteps )
# SAVE MCMC CHAIN:
cat( "Sampling final MCMC chain...\n" )
mcmcCoda2 = coda.samples( jagsModel, variable.names=parameters ,
n.iter=nPerChain, thin=thinSteps )
# (coda.samples has no 'method' argument; for parallel chains see e.g. the runjags package)
P, Q, R and derived Y: For simplicity, we'll skip the JAGS code for #3 as it is probably not needed to make this a reproducible example. It is an adapted version of the code provided by John Kruschke and available here http://doingbayesiandataanalysis.blogspot.com/2015/10/posterior-predicted-distribution-for.html.
Step 2
The outcome of each JAGS model is saved as a coda object (mcmcCoda1, mcmcCoda2, ...), each containing 100,000 steps (or samples) of the parameters that determine the shape/scale/location of the Posterior Predictive Distribution. I use these to generate new data from the PPD to serve as inputs to the larger equation.
For purposes of a reproducible example
Let's say the equation was considerably simplified, with just stochastic variables 1 and 2 and one additional deterministic variable so the equation would be something like:
Outcome = StochasticVar1 * (1-StochasticVar2) * DeterministicVar1
Would the best approach be something like the following?
#INTEGRATED JAGS MODEL
#-----------------
modelString = "
model {
### Main equation ###
outcome <- SV1 * (1 - SV2) * DV1
### Likelihoods ###
#Likelihood function for G: Log-normal
#Note that JAGS parameterizes dlnorm with precision (the inverse of variance)
#NB: each likelihood needs its own data vector and sample size, so the two
#y's from the separate models are renamed y.G and y.avoid here
for( i in 1:N.G ) {
y.G[i] ~ dlnorm( muOfLogY , 1/sigmaOfLogY^2 )
}
#Likelihood function for "avoid": Weibull
#Shape parameter: JAGS is 'nu', R (dweibull) is 'a'
#Scale parameter: JAGS is 'lambda' (= 1/b^a), R (dweibull) is 'b'
for ( i in 1:N.avoid ) {
y.avoid[i] ~ dweib( nu , lambda )
}
### Priors ###
#All the priors from the above JAGS code
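If I understand my own sketch correctly, the missing piece is presumably that SV1 and SV2 must also be defined inside the model as deterministic functions of the fitted parameters, so that outcome inherits their posterior uncertainty. My guess at the linking lines (the choice of the distribution means here is an assumption of mine):
### Linking stochastic inputs to the main equation (assumed) ###
SV1 <- exp( muOfLogY + sigmaOfLogY^2/2 ) #log-normal mean (muOfY above)
SV2 <- b * exp( loggam( 1 + 1/a ) ) #Weibull mean; loggam() is JAGS's log-gamma function
#DV1 is passed in as data
}
" # close quote for modelString
Monitoring outcome in coda.samples() would then give its posterior directly, with no need to shuttle 100,000 PPD draws between separate models.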
The most complicated JAGS/BUGS model I have located so far is from Pardo et al (2015) https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0120727 and their supplemental code https://doi.org/10.1371/journal.pone.0120727.s002 that I have reproduced here:
##### Short-beaked common dolphins: #####
##### PRIORS: #####
a0 ~ dunif(-0.2,0.6)
a1 ~ dunif(-0.3,-0.02)
a2 ~ dunif(0.05,0.42)
sd.eps.a ~ dunif(0.15,0.65)
sd.eps.a2 ~ dunif(0.05,0.42)
logmu.gs ~ dunif(3.5,5.5)
logsd.gs ~ dunif(8.5,13.5)
nu0 ~ dunif(-3.6,-2.1)
nu1 ~ dunif(-14,-7.5)
nu2 ~ dunif(-45,-18)
sd.eps.nu ~ dunif(1.12,1.95)
#### OBSERVATION MODELS ####
## Models from sightings (j): ##
logvar.gs <- log(1+pow(logsd.gs/logmu.gs,2))
logtau.gs <- 1/logvar.gs
eps.a.tau <- 1/(sd.eps.a*sd.eps.a)
for (j in 1:1723){
perp.dist.sigh[j] ~ dnorm(0,perp.dist.tau[j]) # Eq. 1
perp.dist.tau[j] <- 1/(pow(w.sigh[j],2)*(2/pi)) # Eq. 2
gp.sz.sigh[j] ~ dlnorm(logmu.gs,logtau.gs) # Eq. 4
eps.a[j] ~ dnorm(0,eps.a.tau)
w.sigh[j] <- exp(a0+(a1*beauf.sigh[j])+(a2*log(gp.sz.sigh[j]))+eps.a[j]) # Eq. 6
}
mean.gs <- exp(logmu.gs+0.5*logvar.gs) # Eq. 7
#### g(0): ####
g0.barlow ~ dbeta(102.8362,3.180502) # Eqs. 13 to 16
#### Models from cells (i): ####
eps.a2.tau <- 1/(sd.eps.a2*sd.eps.a2)
eps.nu.tau <- 1/(sd.eps.nu*sd.eps.nu)
for (i in 1:11173){
#### Effective strip half-width model for cells: ####
eps.a2[i] ~ dnorm(0,eps.a2.tau)
w.cell[i] <-exp(a0+(a1*beauf.cell[i])+(a2*log(mean.gs))+eps.a2[i]) # Eq. 9
## Predicted group counts: ##
n.groups.cell[i] ~ dpois(pred.gp[i]) # Eq. 10
pred.gp[i] <- (2*w.cell[i]*eff.cell[i]*dens.cell[i]*g0.barlow)/mean.gs # Eq. 12
#### Check the group counts likelihood: ####
squared.res.obs.gp.counts[i] <- pow(n.groups.cell[i]-pred.gp[i],2)
new.n.groups.cell[i] ~ dpois(pred.gp[i])
new.squared.res.gp.counts[i] <- pow(new.n.groups.cell[i] - pred.gp[i], 2)
###################################### ECOLOGICAL MODEL: ######################################
eps.nu[i] ~ dnorm(0,eps.nu.tau)
dens.cell[i] <- exp(nu0+(nu1*ssh.cells[i])+(nu2*pow(ssh.cells[i],2))+eps.nu[i]) # Eq. 17
}
#### POSTERIOR PREDICTIVE CHECK: ####
fit.obs.counts <- sum(squared.res.obs.gp.counts[])
fit.new.counts <- sum(new.squared.res.gp.counts[])
test.fit.counts <- step(fit.new.counts-fit.obs.counts)
b.p.value.counts <- mean(test.fit.counts) # Bayesian p-value
Main point
I feel like I am missing something fundamental. Any advice, resources, or expertise that exists to help create a single integrated model or any thoughts on the most appropriate approach given the situation would be greatly appreciated.
Thank you in advance!

What is the algorithm of JAGS with a discrete likelihood and continuous prior

I am trying to understand the following rjags code.
library(rjags)
set.seed(1)
N <- 10
p <- rep(10,N)
cat("
model {
for (i in 1:N) {
p[i] ~ dpois(lambda)
}
lambda <- 2*exp(-2*alpha*3)/(2*pow(4,2))
alpha ~ dnorm(beta,tau)T(0,0.2)
beta ~ dnorm(0,10000)
tau ~ dgamma(2,0.01)
}", file= "example1.jag")
jags <- jags.model('example1.jag',data = list( "N" = N,"p"=p))
update(jags, 16000)
out_ex1<-jags.samples(jags, 'alpha',4000)
out_ex1$alpha
It has a Poisson likelihood and a (truncated) normal prior, so there is no closed form for Gibbs sampling. Then what MCMC method is used here? ARS? Slice sampling? Or Metropolis-Hastings?
You can always find out which samplers JAGS is using for stochastic variables using rjags::list.samplers - for example:
> list.samplers(jags)
$`base::RealSlicer`
[1] "alpha"
$`base::RealSlicer`
[1] "beta"
$`base::RealSlicer`
[1] "tau"
In this case this tells you that a slice sampler is being used for each of the three unobserved stochastic nodes in your model. Slice sampling is the main workhorse in JAGS so this is quite typical, except where a more efficient (e.g. conjugate) sampler is available (or if the GLM module is loaded for an appropriate model).
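For example (a sketch; the assignments will vary with the model and with which modules are loaded):
library(rjags)
load.module("glm") #optional: makes additional conjugate/GLM samplers available
jags <- jags.model('example1.jag', data = list("N" = N, "p" = p))
list.samplers(jags) #shows the sampler assigned to each unobserved stochastic node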
