I have a question about R code that calls BUGS. I have run the model in WinBUGS itself and it runs fine, giving me the expected results. Below is the automation code I used when I had a single outcome (univariate data for the Y's). Now I want to use it for multiple outcomes, so I tried a different way of reading the data. There are 2 simulated datasets for testing, which are read from CSV files. I am not sure where in the code to specify that the same process should be repeated for 2 outcomes instead of one.
setwd("C://Tina/USB_Backup_042213/Testing/CSV")
matrix <- NULL
csvs <- paste("MVN", 1:2, ".csv", sep = "")
for (i in 1:length(csvs)) {
  matrix[[i]] <- read.csv(file = csvs[i], header = TRUE)
  print(matrix[[i]])
}
Y1 Y2
1 11 6
2 8 5
3 25 13
4 1 13
5 8 22
Y1 Y2
1 9 1
2 7 9
3 25 13
4 1 18
5 9 12
library("R2WinBUGS")
bugs.output <- list()
for(sim in 1:2){
Y <-(matrix[[sim]])
bugs.output[sim] <- bugs(
data=list(Y=as.matrix(Y), Nf=5, n=60, mn=c(-1.59, -2.44), prec=matrix(c(.0001,0,0,.0001),nrow=2,ncol=2), R=matrix(c(.001,0,0,.001),nrow=2,ncol=2)),
inits=list(list(gamma=c(0,0), T=matrix(c(0.9,0,0,0.9),nrow=2,ncol=2))),
model.file="M-LN_model_trial.txt",
parameters.to.save = c("p","rho","sigma2"),
n.chains=1, n.iter=12000, n.burnin=5000, debug=TRUE,
bugs.directory="C://Tina/USB_Backup_042213/winbugs14/WinBUGS14",
working.directory=NULL)
}
Warning messages:
1: In bugs.output[sim] <- bugs(data = list(Y = as.matrix(Y), Nf = 5, :
number of items to replace is not a multiple of replacement length
2: In bugs.output[sim] <- bugs(data = list(Y = as.matrix(Y), Nf = 5, :
number of items to replace is not a multiple of replacement length
When you get an error running your BUGS model from R, one option is to try a mock run of the model in OpenBUGS or WinBUGS itself. The cursor placement after you hit the check model button can help you locate the problematic lines.
I did this with your BUGS model and found problems in the definitions of mn, prec and R. You can drop these from the model, as they are already defined in the data (which looks like the appropriate place to define them). Once I dropped them from your BUGS model, everything ran fine.
Note that to run a model in OpenBUGS you have to edit the format of your data; for example, the script I ran was:
model{
  #likelihood
  for(j in 1:Nf){
    p1[j, 1:2] ~ dmnorm(gamma[1:2], T[1:2, 1:2])
    for(i in 1:2){
      logit(p[j,i]) <- p1[j,i]
      Y[j,i] ~ dbin(p[j,i], n)
    }
  }
  #priors
  gamma[1:2] ~ dmnorm(mn[1:2], prec[1:2, 1:2])
  expit[1] <- exp(gamma[1])/(1+exp(gamma[1]))
  expit[2] <- exp(gamma[2])/(1+exp(gamma[2]))
  T[1:2, 1:2] ~ dwish(R[1:2, 1:2], 2)
  sigma2[1:2, 1:2] <- inverse(T[,])
  rho <- sigma2[1,2]/sqrt(sigma2[1,1]*sigma2[2,2])
}
#data
list(Y=structure(.Data=c(1,11,6,1,8,5,1,25,13,1,1,13,1,8,22),.Dim=c(5,3)),
Nf=5, n=60, mn=c(-1.59,-2.44),
prec=structure(.Data=c(0.0001,0,0,0.0001),.Dim=c(2,2)),
R=structure(.Data=c(0.001,0,0,0.001),.Dim=c(2,2)))
#inits
list(gamma=c(0,0), T=structure(.Data=c(0.9,0,0,0.9),.Dim=c(2,2)))
where the data and inits need a bit of work to convert from your R script.
A couple of other points: 1) I am not sure you have the right format for Y: it has 3 columns, but your distribution only considers the first two (X and Y1). 2) You had an unnecessary set of curly brackets in the likelihood.
To run the code in BUGS via R you can use the following R syntax...
#BUGS code as a character string
bugs1<-
"model{
#likelihood
for(j in 1 : Nf){
p1[j, 1:2 ] ~ dmnorm(gamma[1:2], T[1:2 ,1:2])
for (i in 1:2){
logit(p[j,i]) <- p1[j,i]
Y[j,i] ~ dbin(p[j,i],n)
}
}
#priors
gamma[1:2] ~ dmnorm(mn[1:2],prec[1:2 ,1:2])
expit[1] <- exp(gamma[1])/(1+exp(gamma[1]))
expit[2] <- exp(gamma[2])/(1+exp(gamma[2]))
T[1:2 ,1:2] ~ dwish(R[1:2 ,1:2], 2)
sigma2[1:2, 1:2] <- inverse(T[,])
rho <- sigma2[1,2]/sqrt(sigma2[1,1]*sigma2[2,2])
}"
#write the BUGS code to a txt file in current working directory
writeLines(bugs1, "bugs1.txt")
#create data
Y<-data.frame(X=1,Y1=c(11,8,25,1,8),Y2=c(6,5,13,13,22))
#run BUGS from R
library("R2OpenBUGS")
mcmc1 <- bugs(data = list(Y=as.matrix(Y), Nf=5, n=60, mn=c(-1.59, -2.44),
prec=matrix(c(.0001,0,0,.0001),nrow=2,ncol=2),
R=matrix(c(.001,0,0,.001),nrow=2,ncol=2)),
inits = list(list(gamma=c(0,0), T=matrix(c(0.9,0,0,0.9),nrow=2,ncol=2))),
param = c("gamma", "sigma2"),
model = "bugs1.txt",
n.iter = 11000, n.burnin = 1000, n.chains = 1)
A couple of points to note here. 1) This uses OpenBUGS, not WinBUGS. 2) If you use R2WinBUGS you might hit a trap if you are not running R (or RStudio, or whatever you are using) as an administrator.
To run the above code 1,000 times you could put it within a loop, something like....
#create and write the BUGS code to a txt file in current working directory (outside the loop)
bugs1<-...
#load the package once, also outside the loop
library("R2OpenBUGS")
#loop
for(i in 1:1000){
Y <- read.csv(file=paste0("MVN",i,".csv"))
#run BUGS from R
mcmc1 <- bugs(data = list(Y=as.matrix(Y), Nf=5, n=60, mn=c(-1.59, -2.44),
prec=matrix(c(.0001,0,0,.0001),nrow=2,ncol=2),
R=matrix(c(.001,0,0,.001),nrow=2,ncol=2)),
inits = list(list(gamma=c(0,0), T=matrix(c(0.9,0,0,0.9),nrow=2,ncol=2))),
param = c("gamma", "sigma2"),
model = "bugs1.txt",
n.iter = 11000, n.burnin = 1000, n.chains = 1)
#save mcmc
write.csv(mcmc1$sims.matrix,paste0("mcmc",i,".csv"))
}
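Finally, a note on the two warnings you saw: they come not from BUGS but from the assignment bugs.output[sim] <- bugs(...). bugs() returns a list with many components, and single brackets try to splice all of those components into the single slot sim, hence "number of items to replace is not a multiple of replacement length". Double brackets store the whole fitted object in one element. A sketch of the corrected loop, reusing the objects from your question:
bugs.output <- vector("list", 2)   # pre-allocate one slot per simulation
for (sim in 1:2) {
  Y <- matrix[[sim]]   # your list of data frames (the name shadows base::matrix, but function calls still resolve)
  bugs.output[[sim]] <- bugs(   # "[[" keeps the whole bugs object together
    data = list(Y = as.matrix(Y), Nf = 5, n = 60, mn = c(-1.59, -2.44),
                prec = matrix(c(.0001, 0, 0, .0001), nrow = 2, ncol = 2),
                R = matrix(c(.001, 0, 0, .001), nrow = 2, ncol = 2)),
    inits = list(list(gamma = c(0, 0),
                      T = matrix(c(0.9, 0, 0, 0.9), nrow = 2, ncol = 2))),
    model.file = "M-LN_model_trial.txt",
    parameters.to.save = c("p", "rho", "sigma2"),
    n.chains = 1, n.iter = 12000, n.burnin = 5000,
    bugs.directory = "C://Tina/USB_Backup_042213/winbugs14/WinBUGS14")
}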
Related
I have already written the following code, all of which works OK:
directory_path <- "~/DAEN_698/sample_obs"
file_list <- list.files(path = directory_path, full.names = TRUE, recursive = TRUE)
head(file_list, n = 2)
> head(file_list, n = 2)
[1] "C:/Users/Spencer/Documents/DAEN_698/sample_obs2/0-5-1-1.csv"
[2] "C:/Users/Spencer/Documents/DAEN_698/sample_obs2/0-5-1-2.csv"
# Create another list with just the "n-n-n-n" part of the name of each dataset
DS_name_list = stri_sub(file_list, 49, 55)
head(DS_name_list, n = 3)
> head(DS_name_list, n = 3)
[1] "0-5-1-1" "0-5-1-2" "0-5-1-3"
# This command reads all the data in each of the N csv files via their names
# stored in the 'file_list' list of characters.
csvs <- lapply(file_list, read.csv)
### Run a Backward Elimination Stepwise Regression on each of the N csvs.
# Assign the full model (the one with all 30 candidate regressors included)
# as the initial model in step 1.
# This is crucial: if the initial model contains fewer than the total number
# of candidate factors for stepwise to select from in the datasets,
# then it could miss 1 or more of the true factors.
full_model <- lapply(csvs, function(i) {
lm(formula = Y ~ ., data = i) })
# my failed attempt at figuring it out myself
set.seed(50) # for reproducibility
BE_fits3 <- lapply(full_model, function(i) {step(object = i[["coefficients"]],
direction = 'backward', scope = formula(full_model), trace = 0)})
When I hit run on the above 2 lines of code after setting the seed, I get
the following error message in the Console:
Error in terms(object) : object 'i' not found
To elaborate briefly on why it is absolutely essential that the initial model include all candidate regressors when running a backward-elimination version of stepwise regression, consider the following example: suppose we start with an initial model of only 25 regressors, i.e. X1:X25 instead of X1:X30. In that case, stepwise regression could miss 1 or more of the IVs/factors X26 through X30, even if 1 or more of those really are included in the true underlying population model that characterizes dataset j.
Instead of two lapply loops, one to fit the models and a second to run the stepwise regressions, use a for loop that does both operations one after the other. This is an environments thing: it seems that step is not finding the data when run in the environment of the lapply function. (An lapply-based alternative is sketched after the corrected code below.)
I have also changed the code that creates DS_name_list. Below, it processes the full file names without code that depends on string positions.
DS_name_list <- basename(file_list)
DS_name_list <- tools::file_path_sans_ext(DS_name_list)
head(DS_name_list, n = 2)
And here is the regressions code.
csvs <- lapply(file_list, read.csv)
names(csvs) <- DS_name_list
set.seed(50) # for reproducibility
full_model <- vector("list", length = length(csvs))
BE_fits3 <- vector("list", length = length(csvs))
for(i in seq_along(csvs)) {
full_model[[i]] <- lm(formula = Y ~ ., data = csvs[[i]])
BE_fits3[[i]] <- step(object = full_model[[i]],
scope = formula(full_model[[i]]),
direction = 'backward',
trace = 0)
}
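If you prefer lapply, fitting and stepping inside the same function should also sidestep the scoping problem, because the data object is then visible in the frame where step() re-evaluates the model. A minimal sketch, untested against your files and assuming each CSV has a Y column plus the candidate regressors:
BE_fits3 <- lapply(csvs, function(dat) {
  fit <- lm(Y ~ ., data = dat)   # 'dat' lives in this function's frame,
  step(fit,                      # so step() can find it when refitting
       scope = formula(fit),
       direction = "backward",
       trace = 0)
})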
I am searching for an R equivalent of the extremely convenient Stata command simulate. The command basically allows you to declare a program (reg_simulation in the example below), invoke that program from simulate, and store the desired outputs.
Below is a Stata illustration of the usage of simulate, together with my attempt to replicate it in R.
Finally, my main question is: is this how R users would run a Monte Carlo simulation, or am I missing something in terms of structure or speed bottlenecks? Thanks a lot in advance.
Stata example
Defining reg_simulation program.
clear all
*Define "reg_simulation" to be used later on by "simulate" command
program reg_simulation, rclass
*Declaring Stata version
version 13
*Dropping all variables in memory
drop _all
*Set sample size (n=100)
set obs 100
*Simulate model
gen x1 = rnormal()
gen x2 = rnormal()
gen y = 1 + 0.5 * x1 + 1.5 *x2 + rnormal()
*Estimate OLS
reg y x1 x2
*Store coefficients
matrix B = e(b)
return matrix betas = B
end
Calling reg_simulation from simulate command:
*Set seed
set seed 1234
*Run the actual simulation 10 times using "reg_simulation"
simulate , reps(10) nodots: reg_simulation
Obtained result (data stored in memory):
_b_x1 _b_x2 _b_cons
.4470155 1.50748 1.043514
.4235979 1.60144 1.048863
.5006762 1.362679 .8828927
.5319981 1.494726 1.103693
.4926634 1.476443 .8611253
.5920001 1.557737 .8391003
.5893909 1.384571 1.312495
.4721891 1.37305 1.017576
.7109139 1.47294 1.055216
.4197589 1.442816 .9404677
R replication of the Stata program above.
Using R, I have managed to produce the following (I am not an R expert, though). The part that worries me the most is the for-loop structure that loops over the number of repetitions nreps.
Defining reg_simulation function.
#Defining a function
reg_simulation<- function(obs = 1000){
data <- data.frame(
#Generate data
x1 <-rnorm(obs, 0 , 1) ,
x2 <-rnorm(obs, 0 , 1) ,
y <- 1 + 0.5* x1 + 1.5 * x2 + rnorm(obs, 0 , 1) )
#Estimate OLS
ols <- lm(y ~ x1 + x2, data=data)
return(ols$coefficients)
}
Calling reg_simulation 10 times using a for-loop structure:
#Generate list to store results from simulation
results_list <- list()
# N repetitions
nreps <- 10
for (i in 1:nreps) {
#Set seed internally (to get different values in each run)
set.seed(i)
#Save results into list
results_list[i] <- list(reg_simulation(obs=1000))
}
#unlist results
df_results<- data.frame(t(sapply(results_list,
function(x) x[1:max(lengths(results_list))])))
Obtained result: df_results.
#final results
df_results
# X.Intercept. x1 x2
# 1 1.0162384 0.5490488 1.522017
# 2 1.0663263 0.4989537 1.496758
# 3 0.9862365 0.5144083 1.462388
# 4 1.0137042 0.4767466 1.551139
# 5 0.9996164 0.5020535 1.489724
# 6 1.0351182 0.4372447 1.444495
# 7 0.9975050 0.4809259 1.525741
# 8 1.0286192 0.5253288 1.491966
# 9 1.0107962 0.4659812 1.505793
# 10 0.9765663 0.5317318 1.501162
You're on the right track. Couple of hints/corrections:
1. Don't use <- inside data.frame()
In R, we construct data frames using = for internal column assignment, i.e. data.frame(x = 1:10, y = 11:20) rather than data.frame(x <- 1:10, y <- 11:20).
(There's more to be said about <- vs =, but I don't want to distract from your main question.)
In your case, you don't actually even need to create a data frame since x1, x2 and y will all be recognized as "global" variables within the scope of the function. I'll post some code at the end of my answer demonstrating this.
2. When growing a list via a for loop in R, always try to pre-allocate the list first
Pre-allocate the length (and type) of any list you are going to fill in a (long) for loop. Reason: that way, R knows how much memory to allocate to your object up front. In the case where you are only doing 10 reps, that would mean starting with something like:
results_list <- vector("list", 10)
3. Consider using lapply instead of for
for loops have a bit of a bad rep in R. (Somewhat unfairly, but that's a story for another day.) An alternative that many R users would consider is the functional-programming approach offered by lapply. I'll hold off on showing you the code for a second, but it will look very similar to a for loop. Just to note quickly, following on from point 2: one immediate benefit of lapply is that you don't need to pre-allocate the list.
4. Run large loops in parallel
A Monte Carlo simulation is an ideal candidate for running everything in parallel, since each iteration is supposed to be independent of the others. An easy way to go parallel in R is via the future.apply package.
Putting everything together, here's how I'd probably do your simulation. Note that this might be more "advanced" than you possibly need, but since I'm here...
library(data.table) ## optional, but what I'll use to coerce the list into a DT
library(future.apply) ## for parallel stuff
plan(multisession) ## use all available cores
obs <- 1e3
# Defining a function
reg_simulation <- function(...){
x1 <- rnorm(obs, 0 , 1)
x2 <- rnorm(obs, 0 , 1)
y <- 1 + 0.5* x1 + 1.5 * x2 + rnorm(obs, 0 , 1)
#Estimate OLS
ols <- lm(y ~ x1 + x2)
# return(ols$coefficients)
return(as.data.frame(t(ols$coefficients)))
}
# N repetitions
nreps <- 10
## Serial version
# results <- lapply(1:nreps, reg_simulation)
## Parallel version
results <- future_lapply(1:nreps, reg_simulation, future.seed = 1234L)
## Unlist / convert into a data.table
results <- rbindlist(results)
So, following up on the comments: you want to vary your independent variables (x) and the error term, simulate the coefficients, and also catch any errors that occur. The following does the trick:
set.seed(42)
#Defining a function
reg_simulation <- function(obs = 1000){
  #Generate data (columns built first, so y can use x1 and x2)
  x1 <- rnorm(obs, 0, 1)
  x2 <- rnorm(obs, 0, 1)
  y <- 1 + 0.5 * x1 + 1.5 * x2 + rnorm(obs, 0, 1)
  data <- data.frame(y, x1, x2)
  #Estimate OLS
tryCatch(
{
ols <- lm(y ~ x1 + x2, data=data)
return(ols$coefficients)
},
error = function(e){
return(c('(Intercept)'=NA, 'x1'=NA, 'x2'=NA))
}
)
}
output <- t(data.frame(replicate(10, reg_simulation())))
output
(Intercept) x1 x2
X1 0.9961328 0.4782010 1.481712
X2 1.0234698 0.4801982 1.556393
X3 1.0336289 0.5239380 1.435468
X4 0.9796523 0.5095907 1.493548
...
Here, tryCatch (see also failwith) catches the error and returns NA as the default value.
Note that you only need to set the seed once, because the seed advances automatically with every call to the random number generator, in a deterministic fashion.
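A quick illustration of that last point:
set.seed(42)
rnorm(2)  # reproducible first pair of draws
rnorm(2)  # a different pair, but the full sequence is identical on every run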
I am working through the textbook "Bayesian Ideas and Data Analysis" by Christensen et al.
There is a simple exercise in the book that involves cutting and pasting the following code to run in WinBUGS:
model{ y ~ dbin(theta, n) # Model the data
ytilde ~ dbin(theta, m) # Prediction of future binomial
theta ~ dbeta(a, b) # The prior
prob <- step(ytilde - 20) # Pred prob that ytilde >= 20 }
list(n=100, m=100, y=10, a=1, b=1) # The data
list(theta=0.5, ytilde=10) # Starting/initial values
I am trying to translate this into R2jags code and am running into some trouble. I thought I could write my R2jags model fairly directly in this fashion:
model {
#Likelihoods
y ~ dbin(theta,n)
yt ~ dbin(theta,m)
#Priors
theta ~ dbeta(a,b)
prob <- step(yt - 20)
}
with the R code:
library(R2jags)
n <- 100
m <- 100
y <- 10
a <- 1
b <- 1
jags.data <- list(n = n,
m = m,
y = y,
a = a,
b = b)
jags.init <- list(
list(theta = 0.5, yt = 10), #Chain 1 init
list(theta = 0.5, yt = 10), #Chain 2 init
list(theta = 0.5, yt = 10) #Chain 3 init
)
jags.param <- c("theta", "yt")
jags.fit <- jags.model(data = jags.data,
inits = jags.inits,
parameters.to.save = jags.param,
model.file = "hw21.bug",
n.chains = 3,
n.iter = 5000,
n.burnin = 100)
print(jags.fit)
However, calling the R code brings about the following error:
Error in jags.model(data = jags.data, inits = jags.inits, parameters.to.save = jags.param, :
unused arguments (parameters.to.save = jags.param, model.file = "hw21.bug", n.iter = 5000, n.burnin = 100)
Is it because I am missing a necessary for loop in my R2Jags model code?
The error is coming from the R function jags.model (not from JAGS): you are trying to pass arguments such as parameters.to.save to the wrong function.
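If you do want to stay with R2jags, its fitting function is jags(), which accepts the arguments you tried. A minimal sketch reusing the objects defined in your question (note your code defines jags.init but then passes inits = jags.inits; the names have to match):
library(R2jags)
jags.fit <- jags(data = jags.data,
                 inits = jags.init,   # the list of per-chain inits from the question
                 parameters.to.save = jags.param,
                 model.file = "hw21.bug",
                 n.chains = 3,
                 n.iter = 5000,
                 n.burnin = 100)
print(jags.fit)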
If you want to keep the model as similar to WinBUGS as possible, there is an easier way than specifying the data and initial values in R. Put the following into a text file called 'model.txt' in your working directory:
model{
y ~ dbin(theta, n) # Model the data
ytilde ~ dbin(theta, m) # Prediction of future binomial
theta ~ dbeta(a, b) # The prior
prob <- step(ytilde - 20) # Pred prob that ytilde >= 20
}
data{
list(n=100, m=100, y=10, a=1, b=1) # The data
}
inits{
list(theta=0.5, ytilde=10) # Starting/initial values
}
And then run this in R:
library('runjags')
results <- run.jags('model.txt', monitor='theta')
results
plot(results)
For more information on this method of translating WinBUGS models to JAGS see:
http://runjags.sourceforge.net/quickjags.html
Matt
This old blog post has an extensive example of converting BUGS to JAGS, accessed via the package rjags rather than R2jags. (I like the package runjags even better.) I know we're supposed to present self-contained answers here, not just links, but the post is rather long. It goes through each logical step of a script (a compact sketch following the same sequence appears after this list), including:
loading the package
specifying the model
assembling the data
initializing the chains
running the chains
examining the results
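For readers who want the shape of those steps without leaving this page, here is a compact rjags sketch following the same sequence; the model is just the beta-binomial example from the question above, standing in for whatever model you are converting:
library(rjags)                                        # load the package
model_string <- "model {
  y ~ dbin(theta, n)
  ytilde ~ dbin(theta, m)
  theta ~ dbeta(a, b)
}"                                                    # specify the model
dat <- list(y = 10, n = 100, m = 100, a = 1, b = 1)   # assemble the data
ini <- list(theta = 0.5, ytilde = 10)                 # initialize the chain
m <- jags.model(textConnection(model_string), data = dat,
                inits = ini, n.chains = 1)
update(m, 1000)                                       # burn-in
samp <- coda.samples(m, c("theta", "ytilde"), n.iter = 5000)  # run the chain
summary(samp)                                         # examine the results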
I am a student studying Bayesian statistics and have just begun to use JAGS, via an intro script written by my lecturer in which we (the students) only have to enter the data and the number of iterations. The following is the script with my data added into it:
setwd("C:\\Users\\JohnSmith\\Downloads")
rawdata = read.table("bwt.txt",header=TRUE)
Birthweight = rawdata$Birthweight
Age = rawdata$Age
model = "model
{
beta0 ~ dnorm(0, 1/1000^2)
beta1 ~ dnorm(0, 1/1000^2)
log_sigma ~ dunif(-10, 10)
sigma <- exp(log_sigma)
for(i in 1:N)
{
mu[i] <- beta0 + beta1 * Age[i]
Birthweight[i] ~ dnorm(mu[i], 1/sigma^2)
}
}
"
data = list(x=Birthweight, y=Age, N=24)
# Variables to monitor
variable_names = c('beta0','beta1')
# How many burn-in steps?
burn_in = 1000
# How many proper steps?
steps = 100000
# Thinning?
thin = 10
# Random number seed
seed = 2693795
# NO NEED TO EDIT PAST HERE!!!
# Just run it all and use the results list.
library('rjags')
# Write model out to file
fileConn=file("model.temp")
writeLines(model, fileConn)
close(fileConn)
if(all(is.na(data)))
{
m = jags.model(file="model.temp", inits=list(.RNG.seed=seed, .RNG.name="base::Mersenne-Twister"))
} else
{
m = jags.model(file="model.temp", data=data, inits=list(.RNG.seed=seed, .RNG.name="base::Mersenne-Twister"))
}
update(m, burn_in)
draw = jags.samples(m, steps, thin=thin, variable.names = variable_names)
# Convert to a list
make_list <- function(draw)
{
results = list()
for(name in names(draw))
{
# Extract "chain 1"
results[[name]] = as.array(draw[[name]][,,1])
# Transpose 2D arrays
if(length(dim(results[[name]])) == 2)
results[[name]] = t(results[[name]])
}
return(results)
}
results = make_list(draw)
However, when I run the script I get the following error message:
Error in jags.model(file = "model.temp", data = data, inits = list(.RNG.seed = seed, :
RUNTIME ERROR:
Compilation error on line 11.
Unknown parameter Age
In addition: Warning messages:
1: In jags.model(file = "model.temp", data = data, inits = list(.RNG.seed = seed, :
Unused variable "x" in data
2: In jags.model(file = "model.temp", data = data, inits = list(.RNG.seed = seed, :
Unused variable "y" in data
But as far as I can see, line 11 is blank, which leaves me stumped as to where the error is coming from. If anyone can give me some tips on how to solve this, it would be greatly appreciated.
The names of the elements of your list of data (data) should match the names of the variables in your model.
You have:
data = list(x=Birthweight, y=Age, N=24)
so JAGS is looking for variables called x and y in your model. However, in your model, you have:
mu[i] <- beta0 + beta1 * Age[i]
Birthweight[i] ~ dnorm(mu[i], 1/sigma^2)
That is, your variables are called Age and Birthweight.
So, either change your list to:
data <- list(Birthweight=Birthweight, Age=Age, N=24)
or change your model to:
mu[i] <- beta0 + beta1 * y[i]
x[i] ~ dnorm(mu[i], 1/sigma^2)
Had you done readLines('model.temp') (or opened model.temp in a text editor), you would have seen that line 11 of that file refers to the line that contains mu[i] <- beta0 + beta1 * Age[i], which is the first error that JAGS encountered due to the reference to Age, for which neither data nor a prior was provided.
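A quick way to see the file with the same line numbering JAGS uses:
# print model.temp with line numbers so JAGS error lines are easy to locate
mod_lines <- readLines("model.temp")
cat(sprintf("%2d: %s", seq_along(mod_lines), mod_lines), sep = "\n")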
I am using the package rjags to do MCMC in R and I would like to save the output of the function jags.model for later use in another R session.
Here is a simple example for the mean of a Normal distribution:
library(rjags)
N <- 1000
x <- rnorm(N, 0, 5)
model.str <- 'model {for (i in 1:N) {
x[i] ~ dnorm(mu, 5)}
mu ~ dnorm(0, .0001)}'
jags <- jags.model(textConnection(model.str), data = list(x = x, N = N))
update(jags, 1000)
I can generate samples of mu like this:
coda.samples(model=jags,n.iter=1,variable.names="mu")
# [[1]]
# Markov Chain Monte Carlo (MCMC) output:
# Start = 2001
# End = 2001
# Thinning interval = 1
# mu
# [1,] 0.2312028
#
# attr(,"class")
# [1] "mcmc.list"
Now I would like to save the model object jags for later use in a new R session, so that I don't have to initialize and burn in the Markov Chain again:
save(file="/tmp/jags.Rdata", list="jags")
quit()
However, after starting a new R session and reloading the model I get an error message that the JAGS model must be recompiled:
load("/tmp/jags.Rdata")
coda.samples(model=jags,n.iter=1,variable.names="mu")
# Error in model$iter() : JAGS model must be recompiled
Why is that? How can I save the object jags in R for later use?
Note: The question has been asked before, but the OP was not very specific about the problem.
Maybe I am totally off track regarding what you really want to do, but I would set up a JAGS model like this, using R2jags instead of rjags (it is just a slightly different wrapper):
library(R2jags)
N <- 1000
x <- rnorm(N, 0, 5)
sink("test.txt")
cat("
model{
for (i in 1:N) {
x[i] ~ dnorm(mu, 5)
}
mu ~ dnorm(0, .0001)
}
",fill = TRUE)
sink()
inits <- function() {
  list(
    mu = rnorm(1, 0, 0.01))  # random starting value (rnorm, not dnorm, which returns a density)
}
params <- c("mu")
chains <- 3
iter <- 1000
jags1 <- jags(model.file = "test.txt", data = list(x = x, N = N),
parameters.to.save = params, inits = inits,
n.chains = chains, n.iter = iter, n.burnin=floor(iter/2),
n.thin = ifelse(floor(iter/100) < 1, 1, floor(iter/100)))
jags2 <- update(jags1, 10000)
jags2
plot(jags2)
traceplot(jags2)
jags2.mcmc <- as.mcmc(jags2)
There is no difference in the results, and I like this procedure because it is much closer to the way I used WinBUGS, so...
The last line of code converts the jags2 object to an mcmc.list, which can be handled by the coda package.
Good luck!
P.S. Here's a second answer: after looking again at your code, the only thing missing after loading the jags object, to get the behavior you want, is:
jags$recompile()
coda.samples(model=jags,n.iter=1,variable.names="mu")
But if you really just want to use the already-obtained posterior samples, or perhaps just want to update the chains for more iterations, you can also use the R2jags procedure above.
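And if the goal is only to reuse the draws themselves rather than the compiled model, you can save the mcmc.list and reload it later with no recompilation at all; a small sketch using the names from the question:
# save the posterior draws instead of the model object
samples <- coda.samples(model = jags, n.iter = 1000, variable.names = "mu")
saveRDS(samples, "mu_samples.rds")
# ...later, in a fresh R session (no JAGS recompile needed):
samples <- readRDS("mu_samples.rds")
summary(samples)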