JAGS Random Effects Model Prediction - r

I'm trying to fit a Bayesian regression in JAGS using an index as the response (D47), temperature as the predictor (Temp), and the random effects of a discrete variable (Material). I've found really good information on non-hierarchical regressions, including some posts that even cover a prediction strategy for those models. Despite this, I've run into a serious problem when predicting D47 values from my model, mostly because of the random intercept.
Is there any way to deal with a random intercept when predicting from a JAGS regression?
Thanks for your answer,
model1 <- "model {
  # Priors
  mu_int ~ dnorm(0, 0.0001)  # Mean hyperparameter for random intercepts
  sigma_int ~ dunif(0, 100)  # SD hyperparameter for random intercepts
  tau_int <- 1/(sigma_int*sigma_int)
  for (i in 1:n) {
    alpha[i] ~ dnorm(mu_int, tau_int)  # Random intercepts
  }
  beta ~ dnorm(0, 0.01)      # Common slope
  sigma_res ~ dunif(0, 100)  # Residual standard deviation
  tau_res <- 1/(sigma_res*sigma_res)
  # Likelihood
  for (i in 1:n) {
    mu[i] <- alpha[Mat[i]] + beta*Temp[i]  # Expectation
    D47[i] ~ dnorm(mu[i], tau_res)         # The actual (random) responses
  }
}"

Sure, you can make predictions with the random intercepts; all you need to do is specify the prediction as a derived (posterior predictive) quantity.
Try adding something like this to the model.
for (i in 1:n) {
  D47_pred[i] ~ dnorm(mu[i], tau_res)
}
And then track D47_pred as a parameter.
edit:
Also, you need to change how you specify the prior for the random intercept. This will take a couple of steps (code updated here from the comments).
You will need to add a new constant to your data list representing the number of unique groups in the vector Mat. I have labeled it M in this case (e.g., 4 groups in Mat means M = 4).
for (j in 1:M) {
  alpha[j] ~ dnorm(mu_int, tau_int)  # Random intercepts
}
This specification creates the correct number of random intercepts for your model (one per level of Mat, rather than one per observation).
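Putting both changes together, the full model would look something like the sketch below (a hedged sketch: M, the number of Material groups, and the posterior predictive node D47_pred are additions not in the original question):
model1 <- "model {
  # Priors
  mu_int ~ dnorm(0, 0.0001)
  sigma_int ~ dunif(0, 100)
  tau_int <- 1/(sigma_int*sigma_int)
  for (j in 1:M) {
    alpha[j] ~ dnorm(mu_int, tau_int)  # One random intercept per Material group
  }
  beta ~ dnorm(0, 0.01)
  sigma_res ~ dunif(0, 100)
  tau_res <- 1/(sigma_res*sigma_res)
  # Likelihood
  for (i in 1:n) {
    mu[i] <- alpha[Mat[i]] + beta*Temp[i]
    D47[i] ~ dnorm(mu[i], tau_res)
    D47_pred[i] ~ dnorm(mu[i], tau_res)  # Posterior predictive draw
  }
}"
Monitor D47_pred along with the other parameters when sampling (e.g., in coda.samples or whichever JAGS interface you use).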

Related

Estimating bias in linear regression and linear mixed model in R simulation

I want to run simulations to estimate the bias in a linear model and a linear mixed model. The bias is E(beta) - beta, where beta is the association between my X and Y.
I generated my X variable from a normal distribution and Y from a multivariate normal distribution.
I understand how to calculate E(beta) from the simulations (the sum of the beta estimates from all simulations divided by the total number of simulations), but I am not sure how to estimate the true beta.
meanY <- meanY + X*betaV
This is how I generated the meanY (betaV is the effect size), which is then used to generate the multivariate Y outcome, as shown below.
Y[jj,] <- rnorm(nRep, mean=meanY[jj], sd=sqrt(varY))
From my limited understanding, the true beta is not obtained from the data but from the simulation setting, where I set a fixed beta value.
Based on how I generated my data, how can I estimate the true beta?
There are a couple of methods for simulating bias. I'll take an easy example using a linear model. A linear mixed model could likely use a similar approach; however, I am not certain it would go as well for a generalized linear mixed model.
A simple method for estimating bias, when working with a simple linear model, is to choose which model to estimate one's bias from. Let's say, for example, Y = 3 + 4 * X + e. I have chosen beta <- c(3, 4), and as such I only need to simulate my data. For a linear model, the model assumptions are:
Observations are independent
Observations are normally distributed
The mean can be described by the linear predictor
Using these 3 assumptions, simulating a fixed design is simple.
set.seed(1)
xseq <- seq(-10, 10)
xlen <- length(xseq)
nrep <- 100
# Simulate X uniformly over the design points (a normal distribution would likely work fine as well)
X <- sample(xseq, size = xlen * nrep, replace = TRUE)
beta <- c(3, 4)
esd <- 1
emu <- 0
e <- rnorm(xlen * nrep, emu, esd)
Y <- cbind(1, X) %*% beta + e
fit <- lm(Y ~ X)
bias <- coef(fit) - beta
> bias
 (Intercept)            X
0.0121017239 0.0001369908
which indicates a small bias. To test whether this bias is significant, we could perform a Wald test or a t-test (or replicate the process 1000 times and check the distribution of outcomes).
# Simulate the linear model many times
model_frame <- cbind(1, X)
emany <- matrix(rnorm(xlen * nrep * 1000, emu, esd), ncol = 1000)
# Add simulated noise. sweep adds X %*% beta across all columns of emany
Ymany <- sweep(emany, 1, model_frame %*% beta, "+")
# Fit many models simultaneously (lm accepts a matrix response!)
manyFits <- lm(Ymany ~ X)
# Plot density of fitted parameters
par(mfrow = c(1, 2))
plot(density(coef(manyFits)[1, ]), main = "Density of intercept")
plot(density(coef(manyFits)[2, ]), main = "Density of beta")
# Calculate bias; here I use sweep to subtract beta across all rows of the coefficients
biasOfMany <- rowMeans(sweep(coef(manyFits), 1, beta, "-"))
> biasOfMany
 (Intercept)            X
5.896473e-06 -1.710337e-04
Here we see that the bias is reduced quite a bit and has changed sign for the X coefficient, giving reason to believe the bias is insignificant.
Changing the design would allow one to look into the bias of interactions, outliers, and other settings using the same method.
For linear mixed models one could perform the same method, but here you would have to design the random variables, which requires some more work, and the implementation of lmer, as far as I know, does not fit a model across all columns of Y.
However, b (the random effects) could be simulated, and so could any noise parameters. Do note, however, that as b is a single vector containing a single outcome of the simulation (often from a multivariate normal distribution), one would have to re-run the model for each simulation of b. Basically this increases the number of times one has to re-run the model-fitting procedure in order to get a good estimate of the bias; a sketch of that loop follows.
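For illustration, here is a minimal sketch of that loop using lmer from lme4 (the group structure, variance components, and replicate count below are invented for the example; the original answer names lmer but gives no code):
library(lme4)

set.seed(1)
ngroup <- 20                       # invented design: 20 groups, 10 obs each
nper <- 10
g <- rep(seq_len(ngroup), each = nper)
gf <- factor(g)
X <- rnorm(ngroup * nper)
beta <- c(3, 4)                    # true fixed effects
sigma_b <- 2                       # true random-intercept SD
sigma_e <- 1                       # true residual SD

nsim <- 200
est <- matrix(NA_real_, nsim, 2)
for (s in seq_len(nsim)) {
  b <- rnorm(ngroup, 0, sigma_b)   # new random effects for each simulation
  Y <- beta[1] + beta[2] * X + b[g] + rnorm(ngroup * nper, 0, sigma_e)
  fit <- lmer(Y ~ X + (1 | gf))
  est[s, ] <- fixef(fit)           # store the fixed-effect estimates
}
bias <- colMeans(est) - beta       # Monte Carlo estimate of the bias
bias
Note that, unlike the lm trick above, the model is refit once per simulated b, which is exactly the extra cost described in the previous paragraph.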

Mixed model without an intercept

I want to use a mixed model without a random intercept but with a correlation structure. The reason is to get the AIC to help choose the best correlation structure (e.g., autoregressive versus compound symmetry). So it is essentially a GEE, but GEEs don't allow estimation of the AIC. Such models are also called covariance pattern models.
The code below simulates random data with a compound-symmetry correlation. The model fits both a random intercept and a variance-covariance matrix. Is there any way to switch off the random intercept?
library(MASS)
library(nlme)
Sigma = toeplitz(c(1, 0.5, 0.5, 0.5))
data = data.frame(mvrnorm(n = 10, mu = 1:4, Sigma = Sigma))
data$id = 1:nrow(data)
long = reshape(data, direction = 'long', varying = list(1:4), v.names = 'Y')
cs = corCompSymm(0.5, form = ~ 1 | id)
model = lme(Y ~ time, random = list(~ 1 | id), data = long, correlation = cs)
summary(model)
If you are solely interested in comparing correlation structures, then I am pretty sure your goal could be served by a generalized least squares model fit with gls:
model = gls(Y~time, data=long, correlation=cs)
summary(model)
AIC(model)
Otherwise, a linear mixed effects model fit with lme must have random effects specified.
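For instance, to choose between compound symmetry and an autoregressive structure, you could fit one gls model per candidate and compare AICs. A minimal sketch, reusing long from the question (corAR1 with form = ~ time | id is an assumption about how the autoregressive structure would be specified here):
library(nlme)
# Candidate correlation structures for observations within id
cs  <- corCompSymm(0.5, form = ~ 1 | id)
ar1 <- corAR1(0.3, form = ~ time | id)
fit_cs  <- gls(Y ~ time, data = long, correlation = cs)
fit_ar1 <- gls(Y ~ time, data = long, correlation = ar1)
# The lower AIC suggests the better-fitting covariance pattern
AIC(fit_cs, fit_ar1)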

Save priors in rjags

I'm running a Bayesian model in rjags, and I would like to be able to output a trace plot of the MCMC, the posterior distribution of my parameters (which I can already obtain from coda), and a comparison of the posterior vs. prior distributions.
Is there any way to save the priors you specify in the JAGS model as a list, or in some other form that would not force me to copy and paste all the distributions with their parameters (thereby greatly increasing the likelihood of errors)?
I have the following piece of code
cat(
  'model{
    for (i in 1:n) {
      P.hat[i] ~ dnorm(pi, df/sigma2)
      SS[i] ~ dgamma((df-1)/2, sigma2/2)
      R[i] ~ dbin(theta, N)
    }
    # relations
    gam <- m*vs + (1-m)*va
    theta <- (pi*beta*gam)/(gam*dt + (1-gam)*du)
    # numerical values
    df <- 15
    # priors
    pi ~ dnorm(0.05, 2) T(0,1)
    sigma2 ~ dgamma(2, 0.1*df)
    beta ~ dunif(0, 0.4)
    m ~ dbeta(1, 4)
    vs ~ dbeta(2, 9)
    va ~ dbeta(2, 5)
    dt ~ dnorm(0.3, 2) T(0,10)
    du ~ dnorm(1.25, 2) T(0,10)
  }',
  file = 'model1.bug')
and I would like to "save" the "priors" section.
Thanks in advance for all your answers!
EM
The short answer is no - JAGS (and BUGS) make no explicit distinction between what you define as priors and the other distributions in the model, so there is no way to ask JAGS to give you information on specific sub-sections of the model. The usual way to look at your prior distributions is to plot (or otherwise summarise) them separately within R.
However, there is a trick that will work with your model to get what you want: set the upper index of your loop (n) to 0 in the data. This causes JAGS to ignore everything within that for loop, effectively removing the likelihood component of your model and leaving only the priors. If you monitor pi, sigma2, etc., you will see draws from the priors of these parameters. As there is no likelihood to compute, you should also see the model run much faster! You do need to run the model twice, though (once for the priors and once with the data, as normal, for the posteriors); a sketch of that two-run workflow follows.
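For concreteness, a minimal sketch with rjags (an assumption: the monitored nodes match the model above, and P.hat, SS, R, n, and N in the second data list stand in for your actual data objects):
library(rjags)

params <- c("pi", "sigma2", "beta", "m", "vs", "va", "dt", "du")

# Run 1: n = 0 switches off the likelihood loop, so the monitors
# return draws from the priors only
m_prior <- jags.model("model1.bug", data = list(n = 0), n.chains = 1)
prior_draws <- coda.samples(m_prior, params, n.iter = 10000)

# Run 2: the same model file with the real data gives the posteriors
m_post <- jags.model("model1.bug",
                     data = list(P.hat = P.hat, SS = SS, R = R,
                                 n = length(P.hat), N = N),
                     n.chains = 1)
post_draws <- coda.samples(m_post, params, n.iter = 10000)

# Overlay prior and posterior for one parameter, e.g. pi
plot(density(as.matrix(post_draws)[, "pi"]), main = "pi: prior vs posterior")
lines(density(as.matrix(prior_draws)[, "pi"]), lty = 2)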

Collinearity after accounting for random/mixed effects

Could two or more predictors become more or less collinear after accounting for random effects?
In my case I have tested for collinearity prior to modelling, e.g. using VIF, and everything checks out. However, the ranking (using information criteria) of different models makes me uncertain whether the selection can truly separate the predictors.
Any ideas?
PS: can someone with higher rep than me add a more relevant tag, such as collinearity?
There are some solutions listed at this blog post. They use some code to create a function that will calculate VIFs for lmer and lme model objects from the lme4 and nlme R packages, respectively. I have copied the code for the function below.
vif.lme <- function(fit) {
  ## adapted from rms::vif
  v <- vcov(fit)
  nam <- names(fixef(fit))
  ## exclude intercepts
  ns <- sum(1 * (nam == "Intercept" | nam == "(Intercept)"))
  if (ns > 0) {
    v <- v[-(1:ns), -(1:ns), drop = FALSE]
    nam <- nam[-(1:ns)]
  }
  d <- diag(v)^0.5
  v <- diag(solve(v/(d %o% d)))
  names(v) <- nam
  v
}
Once you run that code, you will be able to call the new function, vif.lme, within the R environment. I give an example below using a random data set and an uninformative random effect. I use an uninformative random effect so that lme from nlme generates the same parameter estimates for the predictors as lm in base R. I then use the code above to calculate variance inflation factors, as well as the vif function from the car package (which calculates VIFs for linear models), to show that they give the same output.
# make 4 vectors - c is used as an uninformative random effect for the lme model
a <- c(1:10)
b1 <- c(2, 4, 6, 8, 10, 100, 14, 16, 18, 20)
b2 <- c(1, 9, 2, 4, 5, 6, 4, 3, 2, -1)
c <- c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1)
test <- data.frame(a, b1, b2, c)
# model a as a function of b1 and b2, with c as a random effect
require(nlme)
fit <- lme(a ~ b1 + b2, random = ~ 1 | c, data = test)
# see how the model fits
summary(fit)
# check variance inflation factors
vif.lme(fit)
# create an ordinary linear regression model and check VIF using the car package;
# answers should be the same, as our random effect above was totally uninformative
require(car)
fit2 <- lm(a ~ b1 + b2, data = test)
# check that parameter estimates are the same
summary(fit2)
# check that variance inflation factors are the same
vif(fit2)

How to obtain AUC using leave-one-out cross-validation in R?

I have a matrix (x) containing 100 samples (rows) and 10000 independent features (columns). The observations are binary: each sample is either good or bad {0,1} (stored in vector y). I want to perform leave-one-out cross-validation and determine the Area Under the Curve (AUC) for each feature separately (something like colAUC from the caTools package). I tried to use glmnet, but it didn't work. As suggested in the manual, I set the nfolds parameter equal to the number of observations (100).
> result = cv.glmnet(x, y, nfolds = 100, type.measure = "auc", family = "binomial")
And I'm getting these warnings:
>"Warning messages:
1: Too few (< 10) observations per fold for type.measure='auc' in
cv.lognet; changed to type.measure='deviance'. Alternatively, use smaller
value for nfolds
2: Option grouped=FALSE enforced in cv.glmnet, since < 3 observations per
fold"
Any ideas what I'm doing wrong? And is there any other way or R package to obtain LOO-balanced AUC values for each of the features?
I'll really appreciate any help. Thank you!
When you do LOO-CV, the test set at each step contains only one sample, so you obviously cannot compute an AUC from it alone. However, you can loop over the steps, storing the held-out prediction each time:
k <- nrow(x)
predictions <- numeric(k)
for (i in 1:k) {
  model <- glmnet(x[-i, ], y[-i], family = "binomial")
  # drop = FALSE keeps newx a one-row matrix; a single lambda (s) must be
  # chosen, e.g. via an inner cv.glmnet, to get one prediction per sample
  predictions[i] <- predict(model, newx = x[i, , drop = FALSE],
                            s = min(model$lambda), type = "response")
}
So that in the end you can make a ROC curve, for example:
library(pROC)
roc(y, predictions)
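Since the original goal was one AUC per feature, here is a hedged sketch of the per-feature LOO loop with a plain univariate glm (all names are illustrative; with 10000 features this is slow, and colAUC from caTools on the raw feature values is a much faster screening alternative):
library(pROC)
n <- nrow(x)
p <- ncol(x)
auc_per_feature <- numeric(p)
for (j in 1:p) {
  preds <- numeric(n)
  for (i in 1:n) {
    # fit a one-feature logistic regression on all samples except i
    train <- data.frame(y = y[-i], feat = x[-i, j])
    fit <- glm(y ~ feat, data = train, family = binomial)
    preds[i] <- predict(fit, newdata = data.frame(feat = x[i, j]),
                        type = "response")
  }
  auc_per_feature[j] <- as.numeric(auc(roc(y, preds, quiet = TRUE)))
}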
