Stan MCMC chains switching back and forth between warmup and sampling - r

I am currently using R combined with Stan to conduct MCMC sampling to obtain the posterior distribution of a certain demand variable d, given historical demand dH and a currently observed variable x (so the task is figuring out P(d | dH, x), which is proportional to P(x | d) P(d | dH)).
My question
I found it really weird that the sampling process shows MCMC jumping back and forth between warmup and sampling (isn't it the case that the first n iterations are always in the warmup stage, followed by the actual sampling stage?). At the same time, it skipped Chain 1 completely (?!). Below is the progress output it shows (screenshot not reproduced here):
My code
for (i in 1:365) {
  nrow <- nrow(rte_m[[i]])
  ncol <- ncol(rte_m[[i]])
  A <- as.matrix(rte_m[[i]])
  sigma_x <- as.vector(sample.int(10, nrow(kf_vect[[i]]), replace = TRUE))
  sigma_y <- as.vector(eps_vect[[i]])
  yH <- as.vector(dh_vect[[i]])
  yT <- yH + as.vector(eps_vect[[i]])
  epsilon <- sample.int(10, nrow(kf_vect[[i]]), replace = TRUE)
  x <- as.vector(as.matrix(rte_m[[i]]) %*% yT) + epsilon
  iterations <- 500
  # collect the input data into a list called stan_data
  stan_data <- list(nrow = nrow, ncol = ncol,
                    yH = yH,
                    x = x, epsilon = epsilon,
                    A = A, sigma_x = sigma_x, sigma_y = sigma_y)
  # compile our Stan model file "stamodeling.stan"
  stanmodel1 <- stan_model(file = "stamodeling.stan",
                           model_name = "stanmodel1")
  # MCMC sampling
  stanfit <- sampling(stanmodel1, data = stan_data,
                      iter = iterations, warmup = 200, chains = 4, cores = 2)
}
(The Stan modeling file and the data files from the question are omitted here.)

What's happening isn't that a given chain is switching between warmup and sampling. Instead, what's happening is that the progress messages from the various chains are being interspersed with one another.
So, for example, when you see the following:
Iteration:  50 / 500 [ 10%]  (Warmup)
Iteration:  50 / 500 [ 10%]  (Warmup)
You're actually seeing two messages, one from Chain A and the second from Chain B.
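If the interleaved log is confusing, a minimal sketch of one workaround (assuming the stanmodel1 and stan_data objects from the question) is to run the chains sequentially with cores = 1, so each chain prints its complete warmup-then-sampling sequence in order, at the cost of speed:
## sketch: run chains one after another so progress messages don't interleave
stanfit <- sampling(stanmodel1, data = stan_data,
                    iter = 500, warmup = 200,
                    chains = 4, cores = 1)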

Related

Simulating ODE model for different initial conditions in R

I have a model, and I want to generate random initial conditions, run the model, and save the output so that each simulation is a replicate. But I have a hard time interpreting and implementing loops (and I also know they are not always the best to use in R), so I'm struggling.
My ultimate goal is to iterate the simulation across 10 different random initial conditions, and save the output of the ODE including a column for simulation number.
First I have my random initial conditions:
library(deSolve)
states <- c(r = runif(1, min = 0.1, max = 25),  # resource state variable
            c = runif(1, min = 0.1, max = 10))  # consumer state variable
Then I have my parameters and model:
parameters <- c(g = 5,          # resource growth rate
                K = 25,         # resource carrying capacity
                a = 1,          # consumer attack rate
                h = 1,          # consumer handling time
                e = 0.9,        # consumer conversion efficiency
                m = 0.5,        # consumer mortality rate
                avgrain = 1500, # average rainfall
                A = 1000,
                w = 0.6,
                phi = 8.5,
                ropt1 = 1500,   # optimal rainfall for resource growth
                s1 = 1000,      # standard deviation for plant growth rate as a function of rainfall
                ropt2 = 1000,   # optimal rainfall for herbivore attack (feeding) rate
                s2 = 500,       # standard deviation for herbivore attack rate as a function of rainfall
                avgtemp = 20,   # average temperature
                A_temp = 7,
                w_temp = 0.5,
                phi_temp = 0.5,
                topt1 = 13,     # optimal temperature for resource growth
                ts1 = 10        # standard deviation for plant growth rate as a function of temperature
)
model <- function(t, states, parameters) {
  with(as.list(c(states, parameters)), {
    # rainfall time series function
    rain <- avgrain + (A * sin((w * t) + phi))
    # temperature time series function
    temp <- avgtemp + (A_temp * sin((w_temp * t) + phi_temp))
    # dynamic g and a equations
    dg_both <- exp(-(rain - ropt1)^2 / (s1^2)) + exp(-(temp - topt1)^2 / (ts1^2))
    da <- exp(-(rain - ropt2)^2 / (s2^2))
    # rate of change of state variables
    dr <- dg_both * r * (1 - (r / K)) - ((c * da * r) / (1 + (da * h * r)))
    dc <- ((c * e * da * r) / (1 + (da * h * r))) - c * m
    # return rate of change
    list(c(dr, dc), rain = rain, temp = temp, dg_both = dg_both, da = da)
  })
}
times <- seq(0, 200, by = 1)
out <- ode(y = states, times = times, func = model, parms = parameters, method="lsoda")
Would I do this with a for loop? Thank you in advance!
Here is one of the other approaches, mentioned by @Ben Bolker: using replicate instead of a loop. This has the advantage that we don't need to create a list() for the results beforehand.
N <- 10
res <- replicate(N, ode(y = c(r = runif(1, min = 0.1, max = 25),
                              c = runif(1, min = 0.1, max = 10)),
                        times = times, func = model,
                        parms = parameters, method = "lsoda"),
                 simplify = FALSE)
plot(out, res)
As an additional goody, we can also plot the results using deSolve's built-in plotting function; this works of course also with res in Ben's approach. The resulting data structure can then be simplified to something like a matrix or array, either with do.call(rbind, res) as in Ben's example, or with the simplify option directly in replicate.
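For example, a minimal sketch of the simplify route (the array dimensions assume the times vector defined above):
## sketch: let replicate() simplify the runs into a 3-D array
## (rows = time steps, columns = variables, slices = runs)
res_arr <- replicate(N, ode(y = c(r = runif(1, min = 0.1, max = 25),
                                  c = runif(1, min = 0.1, max = 10)),
                            times = times, func = model,
                            parms = parameters, method = "lsoda"))
dim(res_arr)  # 201 x 7 x 10 for the model above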
Yes, a for loop will be fine. There are lots of other slightly fancier ways to do this (replicate or lapply from base R, purrr::map_dfr from tidyverse ...), but they won't save you any time or memory — they're just a slightly more elegant way to do the same thing.
set.seed(101)
N <- 10
res <- list()
for (i in 1:N) {
  ## pick new initial conditions
  cur_states <- c(r = runif(1, min = 0.1, max = 25),
                  c = runif(1, min = 0.1, max = 10))
  ## run model and attach index column to the matrix
  res[[i]] <- cbind(run = i,
                    ode(y = cur_states, times = times, func = model,
                        parms = parameters, method = "lsoda"))
}
## combine individual runs into one long matrix
res_final <- do.call(rbind, res)
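And for completeness, a minimal sketch of the lapply variant mentioned above; res_final2 is just an illustrative name:
## same logic with lapply: no explicit pre-allocation of the result list
res_final2 <- do.call(rbind, lapply(1:N, function(i) {
  cur_states <- c(r = runif(1, min = 0.1, max = 25),
                  c = runif(1, min = 0.1, max = 10))
  cbind(run = i, ode(y = cur_states, times = times, func = model,
                     parms = parameters, method = "lsoda"))
}))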

Modifying SIR model to include stochasticity

I am trying to establish a method of estimating infectious disease parameters by comparing real epidemic curves with simulations of a stochastic SIR model. To construct the stochastic SIR model, I am using the deSolve package and instead of using fixed parameter values I would like to draw the parameter value used in the equations at each time point from a Poisson distribution centered on the original parameter values.
Using the parameter beta as an example, beta represents the average number of transmission events per capita and is the product of the average number of contacts and the probability that transmission occurs upon contact. Realistically, there is variation in the number of contacts a person will have and since transmission is also a probabilistic event there is variation surrounding this too.
So even if the average transmission rate were to be 2.4 (for example), an individual can go on to infect 0, 1, 2 or 3 ... etc. people with varying probabilities.
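As a quick illustration of that spread, one rpois draw per individual:
## ten hypothetical individuals' transmission counts, average rate 2.4
rpois(10, lambda = 2.4)  # non-negative integers scattered around 2.4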
I have tried to incorporate this into my code below using the rpois function and reassigning the parameters used in the equations to the outputs of the rpois.
I have run my code with the same initial values and parameters multiple times, and all the curves are different, indicating that SOMETHING "stochastic" is going on, but I am unsure whether the code is sampling using rpois at each time point or just once at the beginning. I have only started coding very recently, so I do not have much experience.
I would be grateful if anyone more experienced than myself could verify what my code is ACTUALLY doing and whether it is sampling using rpois at each time point or not. If not I would be grateful for any suggestions for achieving this. Perhaps a loop is needed?
library('deSolve')
library('reshape2')
library('ggplot2')

# MODEL INPUTS
initial_state_values <- c(S = 10000,
                          I = 1,
                          R = 0)
# PARAMETERS
parameters <- c(beta = 2.4,
                gamma = 0.1)

# POISSON MODELLING OF PARAMETERS
# BETA
beta_p <- rpois(1, parameters[1])
# GAMMA
infectious_period_p <- rpois(1, 1/(parameters[2]))
gamma_p <- 1/infectious_period_p

# TIMESTEPS
times <- seq(from = 0, to = 50, by = 1)

# SIR MODEL FUNCTION
sir_model <- function(time, state, parameters) {
  with(as.list(c(state, parameters)), {
    N <- S + I + R
    lambda <- beta_p * I/N
    dS <- -lambda * S
    dI <- lambda * S - gamma_p * I
    dR <- gamma_p * I
    return(list(c(dS, dI, dR)))
  })
}

output <- as.data.frame(ode(y = initial_state_values,
                            times = times,
                            func = sir_model,
                            parms = parameters))
The code given in the question runs the model with constant parameters over time: rpois is called only once, before ode, so a single random draw is used for the entire simulation. Here is an example with parameters varying over time. Note, however, that this setting assumes that within a given time step the parameters are equal for all individuals of the population. If you want individual variability, you can either use a matrix formulation for different sub-populations or an individual-based model instead.
Model with fluctuating population parameters:
library('deSolve')

initial_state_values <- c(S = 10000,
                          I = 1,
                          R = 0)
parameters <- c(beta = 2.4, gamma = 0.1)
times <- seq(from = 0, to = 50, by = 1) # note time step = 1!

# +1 to add one for time = zero
beta_p <- rpois(max(times) + 1, parameters[1])
infectious_period_p <- rpois(max(times) + 1, 1/(parameters[2]))
gamma_p <- 1/infectious_period_p

sir_model <- function(time, state, parameters) {
  # cat(time, "\n") # show time steps for debugging
  with(as.list(c(state, parameters)), {
    # this overwrites the parms passed via parameters
    beta <- beta_p[floor(time) + 1]
    gamma <- gamma_p[floor(time) + 1]
    N <- S + I + R
    lambda <- beta * I/N
    dS <- -lambda * S
    dI <- lambda * S - gamma * I
    dR <- gamma * I
    list(c(dS, dI, dR))
  })
}

output <- ode(y = initial_state_values,
              times = times,
              func = sir_model,
              parms = parameters)
plot(output)
Here is another, slightly more generalized version. It is added as a second answer to keep the original version compact and simple. The new version differs in the following respects:
generalized, so that it can work with fixed parameters and stochastic forcing
pass parameters as list
run a basic Monte-Carlo simulation
library('deSolve')

sir_model <- function(time, state, parameters) {
  with(as.list(c(state, parameters)), {
    # this overwrites the parms passed via parameters
    if (time_dependent) {
      beta <- beta_p[floor(time) + 1]
      gamma <- gamma_p[floor(time) + 1]
    }
    N <- S + I + R
    lambda <- beta * I/N
    dS <- -lambda * S
    dI <- lambda * S - gamma * I
    dR <- gamma * I
    list(c(dS, dI, dR))
  })
}

initial_state_values <- c(S = 10000, I = 1, R = 0)
times <- seq(from = 0, to = 50, by = 1) # note time step = 1!

## (1) standard simulation with constant parameters
parameters <- c(beta = 2.4, gamma = 0.1)
out0 <- ode(y = initial_state_values,
            times = times,
            func = sir_model,
            parms = c(parameters, time_dependent = FALSE))
plot(out0)

## (2) single simulation with time-varying parameters
beta_p <- rpois(max(times) + 1, parameters[1])
infectious_period_p <- rpois(max(times) + 1, 1/(parameters[2]))
gamma_p <- 1/infectious_period_p

## the vectorized parameters are found globally here;
## alternatively, they can be passed as a list (see below)
out1 <- ode(y = initial_state_values, times = times,
            func = sir_model, parms = c(time_dependent = TRUE))
plot(out0, out1)

## (3) a sample of simulations
monte_carlo <- function(i) {
  beta_p <- rpois(max(times) + 1, parameters[1])
  infectious_period_p <- rpois(max(times) + 1, 1/(parameters[2]))
  gamma_p <- 1/infectious_period_p
  ode(y = initial_state_values, times = times,
      func = sir_model, parms = list(beta_p = beta_p,
                                     gamma_p = gamma_p,
                                     time_dependent = TRUE))
}

## run 10 simulations
out_mc <- lapply(1:10, monte_carlo)
plot(out0, out_mc, mfrow = c(1, 3))
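The Monte-Carlo sample can then be summarized as needed; as a hypothetical sketch, the final number of recovered individuals per run:
## sketch: final number of recovered individuals in each of the 10 runs
sapply(out_mc, function(o) o[nrow(o), "R"])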

Weighted k-nearest neighbors and R kknn package

I would like to understand how the R kknn package calculates weights, distances, and class probabilities for binary classification problems. In the R code below, there are three observations in the training sample and one observation in the holdout sample. The two predictor variables are height and weight. With Euclidean distance, the distances for each observation in the training sample are then:
sqrt((6-8)^2 + (4-5)^2) = 2.24
sqrt((6-3)^2 + (4-7)^2) = 4.24
sqrt((6-7)^2 + (4-3)^2) = 1.41.
With k=3 and with equal weights, I get a probability for the holdout as:
(1/3 * 1) + (1/3 * 0) + (1/3 * 1) = 0.67.
With k=2 and with equal weights, I get a probability for the holdout as:
(1/2 * 1) + (1/2 * 1) = 1.00.
I would like to understand how the R kknn package makes these same calculations with the "triangular," "gaussian," and "inverse" weights (and more generally).
library(kknn)
training <- data.frame(class = c(1, 0, 1), height = c(8, 3, 7), weight = c(5, 7, 3))
holdouts <- data.frame(class = 1, height = 6, weight = 4)
triangular_kernel <- kknn(class ~., training, holdouts, distance = 2, kernel = "triangular", k = 3)
triangular_kernel[["fitted.values"]]
triangular_kernel[["W"]]
triangular_kernel[["D"]]
gaussian_kernel <- kknn(class ~., training, holdouts, distance = 2, kernel = "gaussian", k = 3)
gaussian_kernel[["fitted.values"]]
gaussian_kernel[["W"]]
gaussian_kernel[["D"]]
inverse_kernel <- kknn(class ~., training, holdouts, distance = 2, kernel = "inv", k = 3)
inverse_kernel[["fitted.values"]]
inverse_kernel[["W"]]
inverse_kernel[["D"]]
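Before looking at the internals, note that the hand calculations above can be checked directly with the rectangular (equal-weight) kernel; a small sketch:
## rectangular kernel = unweighted k-NN; should reproduce the hand
## calculations above (0.67 for k = 3, 1.00 for k = 2)
rect3 <- kknn(class ~ ., training, holdouts, distance = 2,
              kernel = "rectangular", k = 3)
rect3[["fitted.values"]]  # expected: 0.67
rect2 <- kknn(class ~ ., training, holdouts, distance = 2,
              kernel = "rectangular", k = 2)
rect2[["fitted.values"]]  # expected: 1.00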
Typing kknn::kknn in the console (without parentheses) prints the source code of the kknn function. With it, one can go through the function line by line to see what it does.
Distance
kknn calls compiled C code, dmEuclid. To obtain its source code, we follow this guide, running the following code in R:
untar(download.packages(pkgs = "kknn", destdir = ".", type = "source")[,2])
and then open the src directory of the extracted kknn_1.3.1 source in your working directory (getwd()) to find and open dm.C using any text editor. Scroll about halfway down to find dmEuclid. To test the exact outputs of dmEuclid, you could install the build tools, open a C++ file in RStudio by selecting it in the dropdown menu, and run the code with different inputs.
Following the function outputs, in your case the dmtmp$dm results in
3.779645e-01 1.133893e+00 1.000000e+150 3.685210e-156
Per your specification k, the first 3 values are chosen as distance D.
maxdist is then manually set to 1e-06 by the package author, as the maximum distance is smaller than that in your case.
Weights
The kknn function uses the following section to allocate a weight scheme, per your defined kernel.
W <- D/maxdist
W <- pmin(W, 1 - (1e-06))
W <- pmax(W, 1e-06)
At this point your W values are larger than 1, so W is coerced to approximately 1 (more precisely, 1 - 1e-06).
if (kernel == "inv")
    W <- 1/W
if (kernel == "triangular")
    W <- 1 - W
if (kernel == "gaussian") {
    alpha <- 1/(2 * (k + 1))
    qua <- abs(qnorm(alpha))
    W <- W * qua
    W <- dnorm(W, sd = 1)
}
the explanation for which can be found in the paper linked by gowerc.
W is then converted to a matrix with W <- matrix(W, p, k): 1 row (p = 1) and 3 columns (k = 3).
Fitted value
In your case, p = 1, k = 3, and cl = c(1, 0, 1).
C <- matrix(dmtmp$cl, nrow = p, ncol = k + 1)
C <- C[, 1:k] + 1
CL <- matrix(cl[C], nrow = p, ncol = k)
W <- matrix(W, p, k)
fit <- rowSums(W * CL)/pmax(rowSums(W), 1e-06)
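To make the last line concrete, here is a hand-computed sketch; the weight values are made-up placeholders, not kknn's actual output:
## illustrative only: made-up weights for the k = 3 neighbours,
## ordered nearest first (1.41 -> class 1, 2.24 -> class 1, 4.24 -> class 0)
W  <- c(0.8, 0.6, 0.2)
CL <- c(1, 1, 0)
sum(W * CL) / max(sum(W), 1e-06)  # weighted class probability, here 0.875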

R - deSolve package (ode function): change a matrix of parameters in SIR model according to time

I am trying to simulate the transmission of viruses in a population using the function ode from the deSolve package. The basis of my model is an SIR model, and I posted a much simpler demo of my model here, which consists of only three states: S (susceptible), I (infectious) and R (recovered). Each state is represented by an m*n matrix in my code, since I have m age groups and n subpopulations in my population.
The problem is: during the simulation period, there will be several vaccination activities that transfer people from state S to state I. Each vaccination activity is characterized by a begin date, an end date, a coverage rate, and a duration. What I want is: once the time t falls into the interval between the begin and end dates of a vaccination activity, the code calculates the effective vaccination rate (also an m*n matrix, based on coverage rate and duration) and multiplies it with S (an m*n matrix) to get the matrix of people transferred to state I. Right now, I am using if() to decide whether time t is between a begin date and an end date:
# initialize the matrix of effective vaccination rates
irrate_matrix <- matrix(data = rep(0, m*n), nrow = m, ncol = n)
for (i in 1:length(tbegin)) {
  if (t >= tbegin[i] & t <= tend[i]) {
    for (j in 1:n) {
      irrate_matrix[, j] <- -log(1 - covir[(j-1)*length(tbegin) + i]) / duration[i]
    }
  }
}
Here, irrate_matrix is the m*n effective vaccination rate matrix, m = 2 is the number of age groups, n = 2 is the number of subpopulations, tbegin = c(5, 20, 35) is the begin date of 3 vaccination activities, tend = c(8, 23, 38) is the end date of 3 vaccination activities, covir = c(0.35, 0.25, 0.25, 0.225, 0.18, 0.13) is the coverage rate of each vaccination for each subpopulation (e.g., covir[1] = 0.35 is the coverage rate of the first vaccination for subpopulation1, while covir[4] = 0.225 is the coverage rate of the first vaccination for subpopulation2) and duration = c(4, 4, 4) is the duration of each vaccination (in days).
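For example, plugging the first activity and subpopulation 1 into this formula gives:
## worked example: first vaccination activity, subpopulation 1
## coverage covir[1] = 0.35 spread over duration[1] = 4 days
-log(1 - 0.35) / 4  # effective daily vaccination rate, approx. 0.108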
After calculating irrate_matrix, I plug it into the derivatives, so I have:
dS = as.matrix(b*N) - as.matrix(irrate_matrix*S) - as.matrix(mu*S)
dI = as.matrix(irrate_matrix*S) - as.matrix(gammaS*I) - as.matrix(mu*I)
dR = as.matrix(gammaS*I) - as.matrix(mu*R)
I want to do a simulation from day 0 to day 50, by 1-day step, thus:
times = seq(0, 50, 1)
The current issue with my code is: every time the time t comes close to one of tbegin[i] or tend[i], the simulation becomes much slower, since it iterates at this time point for many more rounds than at any other time point. For example, once the time t reaches tbegin[1] = 5, the model iterates at time point 5 for many rounds. I attached screenshots from printing out those iterations (screenshot1 and screenshot2). I found this is why my bigger model now takes a very long time to run.
I have tried using the "events" function of deSolve mentioned by tpetzoldt in this question stackoverflow: change the value of a parameter as a function of time. However, I found it's inconvenient for me to change a matrix of parameters and change it every time there is a vaccination activity.
I am looking for solutions regarding:
How to change my irrate_matrix to non-zero matrix when there is a vaccination activity and let it be zero matrix when there is no vaccination? (it has to be calculated for each vaccination)
At the same time, how to make the code run faster by avoiding iterating at any tbegin[i] or tend[i] for many rounds? (I think I should not use if() but I do not know what I should do with my case)
If I need to use the "forcing" or "events" functionality, could you please also tell me how to have multiple "forcings"/"events" in the model? Right now, I already use an "events" data frame in my bigger model to introduce a virus to the population:
virusevents = data.frame(var = "I1", time = 2, value = 1, method = "add")
Any good idea is welcome and directly providing some codes is much appreciated! Thank you in advance!
For reference, I post the whole demo here:
library(deSolve)

##################################
###(1) define the sir function####
##################################
sir_basic <- function (t, x, params) {
  # retrieve current states
  S = matrix(data = x[(0*m*n+1):(1*m*n)], nrow = m, ncol = n)
  I = matrix(data = x[(1*m*n+1):(2*m*n)], nrow = m, ncol = n)
  R = matrix(data = x[(2*m*n+1):(3*m*n)], nrow = m, ncol = n)
  with(as.list(params), {
    N = as.matrix(S + I + R)
    # print out current iteration
    print(paste0("Total population at time ", t, " is ", sum(N)))
    # calculate irrate_matrix by checking time t
    irrate_matrix = matrix(data = rep(0, m*n), nrow = m, ncol = n)
    for (i in 1:length(tbegin)) {
      if (t >= tbegin[i] & t <= tend[i]) {
        for (j in 1:n) {
          irrate_matrix[, j] = -log(1 - covir[(j-1)*length(tbegin) + i]) / duration[i]
        }
      }
    }
    # derivatives
    dS = as.matrix(b*N) - as.matrix(irrate_matrix*S) - as.matrix(mu*S)
    dI = as.matrix(irrate_matrix*S) - as.matrix(gammaS*I) - as.matrix(mu*I)
    dR = as.matrix(gammaS*I) - as.matrix(mu*R)
    derivatives <- c(dS, dI, dR)
    list(derivatives)
  })
}

##################################
###(2) characterize parameters####
##################################
m = 2                  # the number of age groups
n = 2                  # the number of sub-populations
tbegin = c(5, 20, 35)  # begin dates
tend = c(8, 23, 38)    # end dates
duration = c(4, 4, 4)  # durations
covir = c(0.35, 0.25, 0.25, 0.225, 0.18, 0.13) # coverage rates
b = 0.0006             # daily birth rate
mu = 0.0006            # daily death rate
gammaS = 0.05          # transition rate from I to R
parameters = c(m = m, n = n,
               tbegin = tbegin, tend = tend, duration = duration, covir = covir,
               b = b, mu = mu, gammaS = gammaS)

##################################
#######(3) initial states ########
##################################
inits = c(
  S = c(20000, 40000, 10000, 20000),
  I = rep(0, m*n),
  R = rep(0, m*n)
)

##################################
#######(4) run simulations########
##################################
times = seq(0, 50, 1)
traj <- ode(func = sir_basic,
            y = inits,
            parms = parameters,
            times = times)
plot(traj)
Element-wise operations are the same for matrices and vectors, so the as.matrix conversions are redundant; no true matrix multiplication is used. The same holds for rep: the zero is recycled anyway.
In effect, CPU time already drops to about 50%. In contrast, use of an external forcing with approxTime instead of the inner if and for made the model slower (not shown).
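A quick standalone check of the no-op (not part of the model itself):
## element-wise multiplication already returns a matrix,
## so wrapping it in as.matrix() changes nothing
S <- matrix(1:4, nrow = 2)
identical(as.matrix(0.0006 * S), 0.0006 * S)  # TRUE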
Simplified code
sir_basic2 <- function (t, x, params) {
  # retrieve current states
  S = x[(0*m*n+1):(1*m*n)]
  I = x[(1*m*n+1):(2*m*n)]
  R = x[(2*m*n+1):(3*m*n)]
  with(as.list(params), {
    N = S + I + R
    # print out current iteration
    #print(paste0("Total population at time ", t, " is ", sum(N)))
    # calculate irrate_matrix by checking time t
    irrate_matrix = matrix(data = 0, nrow = m, ncol = n)
    for (i in 1:length(tbegin)) {
      if (t >= tbegin[i] & t <= tend[i]) {
        for (j in 1:n) {
          irrate_matrix[, j] = -log(1 - covir[(j-1) * length(tbegin) + i]) / duration[i]
        }
      }
    }
    # derivatives
    dS = b*N - irrate_matrix*S - mu*S
    dI = irrate_matrix*S - gammaS*I - mu*I
    dR = gammaS*I - mu*R
    list(c(dS, dI, dR))
  })
}
Benchmark
Each model version is run 10 times. Model sir_basic is the original implementation, with the print line disabled for a fair comparison.
system.time(
  for (i in 1:10)
    traj <- ode(func = sir_basic,
                y = inits,
                parms = parameters,
                times = times)
)
system.time(
  for (i in 1:10)
    traj2 <- ode(func = sir_basic2,
                 y = inits,
                 parms = parameters,
                 times = times)
)
plot(traj, traj2)
summary(traj - traj2)
I observed another considerable speedup when I used method = "adams" instead of the default lsoda solver, but this may differ for your full model.
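For reference, a minimal sketch of that variant (everything as in the benchmark above, only the solver changed; traj3 is an illustrative name):
## same simplified model, non-stiff Adams solver instead of lsoda
traj3 <- ode(func = sir_basic2,
             y = inits,
             parms = parameters,
             times = times,
             method = "adams")
plot(traj2, traj3)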

Learning hidden markov model in R

A hidden Markov model (HMM) is one in which you observe a sequence of observations, but do not know the sequence of states the model went through to generate the observations. Analyses of hidden Markov models seek to recover the sequence of hidden states from the observed data.
I have data with both observations and hidden states (observations are of continuous values) where the hidden states were tagged by an expert. I would like to train a HMM that would be able - based on a (previously unseen) sequence of observations - to recover the corresponding hidden states.
Is there any R package to do that? The existing packages I have studied (depmixS4, HMM, seqHMM - the latter for categorical data only) only allow you to specify the number of hidden states.
EDIT:
Example:
data.tagged.by.expert = data.frame(
  hidden.state = c("Wake", "REM", "REM", "NonREM1", "NonREM2", "REM", "REM", "Wake"),
  sensor1 = c(1, 1.2, 1.2, 1.3, 4, 2, 1.78, 0.65),
  sensor2 = c(7.2, 5.3, 5.1, 1.2, 2.3, 7.5, 7.8, 2.1),
  sensor3 = c(0.01, 0.02, 0.08, 0.8, 0.03, 0.01, 0.15, 0.45)
)
data.newly.measured = data.frame(
  sensor1 = c(2,3,4,5,2,1,2,4,5,8,4,6,1,2,5,3,2,1,4),
  sensor2 = c(2.1,2.3,2.2,4.2,4.2,2.2,2.2,5.3,2.4,1.0,2.5,2.4,1.2,8.4,5.2,5.5,5.2,4.3,7.8),
  sensor3 = c(0.23,0.25,0.23,0.54,0.36,0.85,0.01,0.52,0.09,0.12,0.85,0.45,0.26,0.08,0.01,0.55,0.67,0.82,0.35)
)
I would like to create an HMM with discrete time t, where the random variable x(t) represents the hidden state at time t, x(t) ∈ {"Wake", "REM", "NonREM1", "NonREM2"}, and 3 continuous random variables sensor1(t), sensor2(t), sensor3(t) represent the observations at time t.
model.hmm = learn.model(data.tagged.by.expert)
Then I would like to use the created model to estimate hidden states responsible for newly measured observations
hidden.states = estimate.hidden.states(model.hmm, data.newly.measured)
Data (training/testing)
To be able to run the learning methods for the Naive Bayes classifier, we need a longer data set:
states = c("NonREM1", "NonREM2", "NonREM3", "REM", "Wake")
artificial.hypnogram = rep(c(5,4,1,2,3,4,5), times = c(40,150,200,300,50,90,30))

data.tagged.by.expert = data.frame(
  hidden.state = states[artificial.hypnogram],
  sensor1 = log(artificial.hypnogram) + runif(n = length(artificial.hypnogram), min = 0.2, max = 0.5),
  sensor2 = 10*artificial.hypnogram + sample(c(-8:8), size = length(artificial.hypnogram), replace = T),
  sensor3 = sample(1:100, size = length(artificial.hypnogram), replace = T)
)

hidden.hypnogram = rep(c(5,4,1,2,4,5), times = c(10,10,15,10,10,3))
data.newly.measured = data.frame(
  sensor1 = log(hidden.hypnogram) + runif(n = length(hidden.hypnogram), min = 0.2, max = 0.5),
  sensor2 = 10*hidden.hypnogram + sample(c(-8:8), size = length(hidden.hypnogram), replace = T),
  sensor3 = sample(1:100, size = length(hidden.hypnogram), replace = T)
)
Solution
In the solution, we use the Viterbi algorithm combined with a Naive Bayes classifier.
At each clock time t, a hidden Markov model consists of
an unobserved state (denoted as hidden.state in this case) taking a finite number of states
states = c("NonREM1", "NonREM2", "NonREM3", "REM", "Wake")
a set of observed variables (sensor1, sensor2, sensor3 in this case)
Transition matrix
A new state is entered based upon a transition probability distribution
(transition matrix). This can be easily computed from data.tagged.by.expert e.g. using
library(markovchain)
# note: despite its name, emit_p holds the fitted *transition* matrix;
# the name is kept because the Viterbi code below uses it
emit_p <- markovchainFit(data.tagged.by.expert$hidden.state)$estimate
Emission matrix
After each transition is made, an observation (sensor_i) is produced according to a conditional probability distribution (emission matrix) that depends only on the current value H of hidden.state. We will replace the emission matrices by a Naive Bayes classifier.
library(caret)
library(klaR)
library(e1071)
model = train(hidden.state ~ .,
              data = data.tagged.by.expert,
              method = 'nb',
              trControl = trainControl(method = 'cv', number = 10)
)
Viterbi algorithm
To solve the problem, we use the Viterbi algorithm with an initial probability of 1 for the "Wake" state and 0 otherwise (we expect the patient to be awake at the beginning of the experiment).
# we expect the patient to be awake in the beginning
start_p = c(NonREM1 = 0, NonREM2 = 0, NonREM3 = 0, REM = 0, Wake = 1)
# Naive Bayes model
model_nb = model$finalModel
# the observations
observations = data.newly.measured
nObs <- nrow(observations)  # number of observations
nStates <- length(states)   # number of states

# T1, T2 initialization
T1 <- matrix(0, nrow = nStates, ncol = nObs) # define two 2-dimensional tables
row.names(T1) <- states
T2 <- T1

Byj <- predict(model_nb, newdata = observations[1, ])$posterior

# init first column of T1
for (s in states)
  T1[s, 1] = start_p[s] * Byj[1, s]

# fill T1 and T2 tables
for (j in 2:nObs) {
  Byj <- predict(model_nb, newdata = observations[j, ])$posterior
  for (s in states) {
    res <- (T1[, j-1] * emit_p[, s]) * Byj[1, s]
    T2[s, j] <- states[which.max(res)]
    T1[s, j] <- max(res)
  }
}

# backtrack the best path
result <- rep("", times = nObs)
result[nObs] <- names(which.max(T1[, nObs]))
for (j in nObs:2) {
  result[j-1] <- T2[result[j], j]
}

# show the result
result
# show the original artificial data
states[hidden.hypnogram]
References
To read more about the problem, see: Vomlel, Jiří and Kratochvíl, Václav: Dynamic Bayesian Networks for the Classification of Sleep Stages. Proceedings of the 11th Workshop on Uncertainty Processing (WUPES'18), pp. 205-215. Eds: Kratochvíl, Václav and Vejnarová, Jiřina. Třeboň, CZ, 2018.
