I hope you are well and having a nice day. I am attempting to code an MH algorithm for a multiple linear regression model. I am following an online tutorial on simple linear regression with an MH algorithm, and I plan to apply the same principles to my multiple linear regression model, but I am running into a problem figuring out how to specify prior distributions for the parameters. Below is the code.
trueA <- 5
trueB <- 0
trueSd <- 10
sampleSize <- 31
# create independent x-values
x <- (-(sampleSize-1)/2):((sampleSize-1)/2)
# create dependent values according to ax + b + N(0,sd)
y <- trueA * x + trueB + rnorm(n=sampleSize,mean=0,sd=trueSd)
likelihood <- function(param){
  a = param[1]
  b = param[2]
  sd = param[3]
  pred = a*x + b
  singlelikelihoods = dnorm(y, mean = pred, sd = sd, log = T)
  sumll = sum(singlelikelihoods)
  return(sumll)
}
This is the portion where I am running into problems. In the tutorial, the author next defines prior distributions for the parameters. In this case, I would like to specify uninformative prior distributions for the slope(s), the intercept, and the standard deviation. The author says that they used uniform/normal distributions.
I found an example using Poisson regression, which looks like:
LogPriorFunction <- function(param){
  beta0 <- param[1]
  beta1 <- param[2]
  beta0prior <- dnorm(beta0, 0, sqrt(100), log=TRUE)
  beta1prior <- dnorm(beta1, 0, sqrt(100), log=TRUE)
  return(beta0prior + beta1prior) # Logarithm of prior distributions
}
But I am having trouble adapting it to the simple linear regression. Here is my attempt:
prior <- function(param){
  a = param[1]
  b = param[2]
  sd = param[3]
  aprior = dnorm(a, 0, 1000, log=TRUE)
  bprior = dnorm(b, 0, 1000, log=TRUE)
  sdprior = 1/rgamma(sd, shape = .001, scale = .001)
  return(aprior + bprior + sdprior) # Logarithm of prior distributions
}
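For what it's worth, my understanding from the author's description is that the prior should return a sum of log-densities (dnorm()/dunif() with log = TRUE) rather than a random draw, so something like the sketch below is roughly what I was aiming for; the prior scales and the upper bound on sd are placeholders I chose myself, not values from the article:
prior_sketch <- function(param){
  a = param[1]
  b = param[2]
  sd = param[3]
  aprior = dnorm(a, 0, 1000, log=TRUE)           # vague normal log-density for the slope
  bprior = dnorm(b, 0, 1000, log=TRUE)           # vague normal log-density for the intercept
  sdprior = dunif(sd, min=0, max=100, log=TRUE)  # flat log-density for sd (placeholder bound)
  return(aprior + bprior + sdprior)              # logarithm of the joint prior
}
But I am not certain whether this is how the standard deviation should be handled, which is part of what I am asking.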
The rest of the code is:
posterior <- function(param){
  return(likelihood(param) + prior(param))
}
proposalfunction <- function(param){
  return(rnorm(3, mean = param, sd = c(0.1,0.5,0.3)))
}
run_metropolis_MCMC <- function(startvalue, iterations){
  chain = array(dim = c(iterations+1,3))
  chain[1,] = startvalue
  for (i in 1:iterations){
    proposal = proposalfunction(chain[i,])
    probab = exp(posterior(proposal) - posterior(chain[i,]))
    if (runif(1) < probab){
      chain[i+1,] = proposal
    }else{
      chain[i+1,] = chain[i,]
    }
  }
  return(chain)
}
startvalue = c(4,0,10)
chain = run_metropolis_MCMC(startvalue, 10000)
burnIn = 5000
acceptance = 1-mean(duplicated(chain[-(1:burnIn),]))
Finally, in addition to the question about specifying the priors, I keep getting the following error and warning:
Error in if (runif(1) < probab) { : missing value where TRUE/FALSE needed
In addition: Warning message:
In if (runif(1) < probab) { :
the condition has length > 1 and only the first element will be used
I am not sure whether this is related to my incorrectly specifying the prior distributions, since the author of the article gets results from the MH algorithm that are similar to fitting a simple linear regression with the lm() function. Any help with these issues would be greatly appreciated. I look forward to hearing from you. Thank you.
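For context, the kind of comparison I have in mind looks something like this (my own sketch, not code from the article):
# compare approximate posterior means (after burn-in) with lm()
colMeans(chain[-(1:burnIn), ])   # order: a (slope), b (intercept), sd
coef(lm(y ~ x))                  # order: (Intercept) = b, x = a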
As a reference, here is the link to the article:
https://khayatrayen.github.io/MCMC.html#defining_the_prior
Finally, here is a link to the article from where I tried to emulate the priors, but does Poisson regression and not linear regression:
https://rpubs.com/SaraGarcesCespedes/586440
I'm trying to figure out custom objective functions in LightGBM, and I figured a good place to start would be replicating the built-in functions. The equation LightGBM uses to calculate the Tweedie metric (https://github.com/microsoft/LightGBM/blob/1c27a15e42f0076492fcc966b9dbcf9da6042823/src/metric/regression_metric.hpp#L300-L318) seems to match definitions of the Tweedie loss I've found online (https://towardsdatascience.com/tweedie-loss-function-for-right-skewed-data-2c5ca470678f), though they do a weird exp(ln(score)) process, I'm guessing for numerical stability. However, their equations for the gradient and Hessian seem to be done on the log of score directly (https://github.com/microsoft/LightGBM/blob/1c27a15e42f0076492fcc966b9dbcf9da6042823/src/objective/regression_objective.hpp#L702-L732).
It seems like they are using the equation:
gradients[i] = -label_[i] * e^((1 - rho_) * score[i]) + e^((2 - rho_) * score[i]);
where I would expect the gradient to be:
gradients[i] = -label_[i] * score[i]^(- rho_) + score[i]^(1 - rho_);
My guess is somewhere LightGBM is processing score as ln(score), like using parameter reg_sqrt, but I can't find where in the documentation this is described.
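Spelling that guess out (my own algebra, so it may be off): with the Tweedie loss $L(\mu) = -y\,\mu^{1-\rho}/(1-\rho) + \mu^{2-\rho}/(2-\rho)$, the gradient in $\mu$ is $\partial L/\partial\mu = -y\,\mu^{-\rho} + \mu^{1-\rho}$, which is what I expected. But if the raw score is $s = \ln(\mu)$, the chain rule gives $\partial L/\partial s = (\partial L/\partial\mu)\,\mu = -y\,e^{(1-\rho)s} + e^{(2-\rho)s}$, which matches the LightGBM gradient.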
Anyway I've tried recreating both their formula and my own calculations as custom objective functions, and neither seem to work:
library(lightgbm)
library(data.table)
# Tweedie gradient with variance = 1.5, according to my own math
CustomObj_t1 <- function(preds, dtrain) {
  labels <- dtrain$getinfo('label')
  grad <- -labels * preds^(-3/2) + preds^(-1/2)
  hess <- 1/2 * (3*labels*preds^(-5/2) - preds^(-3/2))
  return(list(grad = grad, hess = hess))
}
# Tweedie gradient with variance = 1.5, recreating code from LightGBM github
CustomObj_t2 <- function(preds, dtrain) {
  labels <- dtrain$getinfo('label')
  grad <- -labels*exp(-1/2*preds) + exp(1/2*preds)
  hess <- -labels*(-1/2)*exp(-1/2*preds) + 1/2*exp(1/2*preds)
  return(list(grad = grad, hess = hess))
}
params = list(objective = "tweedie",
              seed = 1,
              metric = "rmse")
params2 = list(objective = CustomObj_t1,
               seed = 1,
               metric = "rmse")
params3 = list(objective = CustomObj_t2,
               seed = 1,
               metric = "rmse")
# Create data
set.seed(321)
db_Custom = data.table(a=runif(2000), b=runif(2000))
db_Custom[,X := (a*4+exp(b))]
# break into test and training sets
db_Test = db_Custom[1:10]
db_Custom=db_Custom[11:nrow(db_Custom),]
FeatureCols = c("a","b")
# Create dataset
ds_Custom <- lgb.Dataset(data.matrix(db_Custom[, FeatureCols, with = FALSE]), label = db_Custom[["X"]])
# Train
fit = lgb.train(params, ds_Custom, verb=-1)
#print(" ")
fit2 = lgb.train(params2, ds_Custom, verb=-1)
#print(" ")
fit3 = lgb.train(params3, ds_Custom, verb=-1)
# Predict
pred = predict(fit, data.matrix(db_Test[, FeatureCols, with = FALSE]))
db_Test[, prediction := pmax(0, pred)]
pred2 = predict(fit2, data.matrix(db_Test[, FeatureCols, with = FALSE]))
db_Test[, prediction2 := pmax(0, pred2)]
pred3 = predict(fit3, data.matrix(db_Test[, FeatureCols, with = FALSE]))
db_Test[, prediction3 := pmax(0, pred3)]
print(db_Test[,.(X,prediction,prediction2,prediction3)])
I get the results (would expect prediction2 or prediction3 to be very similar to prediction):
"X" "prediction" "prediction2" "prediction3"
4.8931646234958 4.89996556839721 0 1.59154656425556
6.07328897031702 6.12313647937047 0 1.81022588429474
2.05728566704078 2.06824004875244 0 0.740577102751491
2.54732526765174 2.50329903656292 0 0.932517774958986
4.07044099941395 4.07047912554207 0 1.39922723582939
2.74639568121359 2.74408567443232 0 1.01628212910587
3.47720295158928 3.49241414141969 0 1.23049599462599
2.92043718858535 2.90464303454649 0 1.0680618051659
4.44415913080697 4.43091665909845 0 1.48607456777287
4.96566318066753 4.97898586895233 0 1.60163901781479
Is there something I'm missing? Am I just doing the math or coding wrong?
It appears, per the linked GitHub code and your prediction3 column, that the raw score is indeed on the log scale: if you exponentiate that column, it becomes very close to the X and prediction columns.
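For instance, reusing db_Test and the prediction3 column from your code above (a quick check of this claim, with prediction3_exp being a column name I made up):
# exponentiate the custom-objective predictions and compare with the built-in objective
db_Test[, prediction3_exp := exp(prediction3)]
print(db_Test[, .(X, prediction, prediction3_exp)])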
I am working through "Forecasting with Exponential Smoothing". I am stuck on exercise 16.4 on the part that states:
The data set partx contains a history of monthly sales of an automobile part. Apply a local Poisson model. Parameters should be estimated by either maximizing the likelihood or minimizing the sum of squared errors.
The local Poisson model is defined as:
$y_t \sim \text{Poisson}(\lambda_t)$, where $\lambda_1$ is an initial level to be estimated and $\lambda_{t+1} = (1-\alpha)\lambda_t + \alpha y_t$.
I have the following code, but it seems to be stuck. The optimization always returns something close to the starting values.
Am I fitting the local Poisson model correctly?
library(expsmooth)
data("partx")
S <- function(x) {
  a <- x[1]
  if(a < 0 | a > 1)
    return(Inf)
  n <- length(partx)
  lambda <- numeric(n+1)
  error <- numeric(n)
  lambda[1] <- x[2]
  for(i in 1:n) {
    error[i] <- partx[i]-rpois(1,lambda[i])
    lambda[i+1] <- (1-a)*lambda[i] + a*partx[i]
  }
  return(sum(error^2))
}
# returns a = 0.5153971 and lambda = 5.9282414
op1 <- optim(c(0.5,5L),S, control = list(trace = 1))
# returns a = 0.5999655 and lambda = 2.1000131
op2 <- optim(c(0.5,2L),S, control = list(trace = 1))
I know the book says you could use either the sum of squared errors or MLE, but the first option seems weird as you have set it up, because you have to sample from a Poisson distribution, so even if you fix the parameters you get a different sum of squared errors every time. Since you don't say that you have tried the MLE approach, I programmed it. The math is as follows:
$-\log L(\alpha, \lambda_1) = \sum_t \lambda_t - \sum_t y_t \log(\lambda_t) + \sum_t \log(y_t!)$, where $\lambda_1$ is the initial level and $\lambda_t = (1-\alpha)\lambda_{t-1} + \alpha y_{t-1}$ for $t > 1$.
And the code that implements it is:
MLELocalPoisson = function(par,y){
  alpha = par[1]
  lambda_ini = par[2]
  n = length(y)
  vec_lambda = rep(NA, n)
  for(i in 1:n){
    if(i==1){
      vec_lambda[i] = (1-alpha)*lambda_ini+alpha*y[i]
    }else{
      vec_lambda[i] = (1-alpha)*vec_lambda[i-1]+alpha*y[i]
    }
  }
  vec_lambda = c(lambda_ini,vec_lambda[-n])
  sum_factorial = sum(sapply(y,function(x)log(factorial(x))))
  sum_lambda = sum(vec_lambda)
  sum_prod = sum(log(vec_lambda)*y)
  loglike = -sum_prod+sum_lambda+sum_factorial
  return(loglike)
}
optim(par = c(0.1,1), fn = MLELocalPoisson, y = partx, method = "L-BFGS-B",
      lower = c(1e-10,1e-10), upper = c(1,Inf), control = list(maxit = 10000))
The lower bounds are set to 1e-10 so that the optimization does not try c(0,0), which would generate a log-likelihood of NaN.
EDIT
Taking a look at the Poisson regression literature, they usually define $\lambda = \exp(x\beta)$ and calculate the residuals as $y - \exp(x\beta)$. So it might be possible to do the same in this problem, using the formula given by the author for $\lambda$, like this:
LocalPoisson = function(par,y,optim){
  alpha = par[1]
  lambda_ini = par[2]
  n = length(y)
  vec_lambda = rep(NA, n)
  for(i in 1:n){
    if(i==1){
      vec_lambda[i] = (1-alpha)*lambda_ini+alpha*y[i]
    }else{
      vec_lambda[i] = (1-alpha)*vec_lambda[i-1]+alpha*y[i]
    }
  }
  # one-step-ahead means: lambda_t uses information up to t-1
  y_hat = c(lambda_ini,vec_lambda[-n])
  if(optim){
    return(sum((y_hat-y)^2))
  } else {
    return(data.frame(y_hat = y_hat, y = y, lambda = vec_lambda))
  }
}
optim(par = c(0.1,1), fn = LocalPoisson, y = partx, optim = TRUE, method = "L-BFGS-B",
      lower = c(1e-10,1e-10), upper = c(1,Inf), control = list(maxit = 10000))
It does not yield the same results as the MLE (and I feel more comfortable with that option), but it might be another possible way to estimate the parameters.
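As a possible follow-up (my own sketch, reusing the functions above; fit_sse and fitted_df are names I made up), you could refit with the SSE criterion and then pull out the fitted one-step-ahead means for inspection:
# refit with the SSE criterion and extract fitted values
fit_sse <- optim(par = c(0.1,1), fn = LocalPoisson, y = partx, optim = TRUE, method = "L-BFGS-B",
                 lower = c(1e-10,1e-10), upper = c(1,Inf), control = list(maxit = 10000))
fitted_df <- LocalPoisson(fit_sse$par, y = partx, optim = FALSE)
head(fitted_df)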
I'm trying to run an MLE for an infectious disease compartmental transmission model (an SEIR model, in my case SSEIR) with the mle2 command, trying to fit a curve of predicted weekly deaths to the observed weekly deaths, similar to this:
[plot of predicted vs. observed weekly deaths]
However, the parameter estimates always seem to end up on the (sensible) boundaries I provide, and the SEs, z-values, and p-values are NA.
I set up the SEIR model and then solve it with the ode solver. Using that model output and the observed data, I calculate a negative log likelihood, which I then submit to the mle2 function.
When I first set it up, there were multiple errors that stopped the script from running, but now that those are resolved, I cannot seem to find the root of why the fitting doesn't work.
I am certain that the boundaries I set for the parameter estimation are sensible. The parameters are transition rates between compartments and are therefore defined as (for example) delta = 1/duration of infectiousness, so there are very real biological boundaries on what the parameters can be.
I am aware that I am trying to fit a lot of parameters with not that much data, but the same problem persists when I try only fitting one, so that cannot be the root of it.
library(deSolve)
library(bbmle)
#data
gdta <- c(0, 36.2708172419082, 1.57129615346629, 28.1146409459558, 147.701669719614, 311.876708482584, 512.401145459178, 563.798275104372, 470.731269976821, 292.716043742125, 153.604156195608, 125.760068922451, 198.755685044427, 143.847282793854, 69.2693867232681, 42.2093135487066, 17.0200426587424)
#build seir function
seir <- function(time, state, parameters) {
  with(as.list(c(state, parameters)), {
    dS0 <- - beta0 * S0 * (I/N)
    dS1 <- - beta1 * S1 * (I/N)
    dE <- beta0 * S0 * (I/N) + beta1 * S1 * (I/N) - delta * E
    dI <- delta * E - gamma * I
    dR <- gamma * I
    return(list(c(dS0, dS1, dE, dI, dR)))
  })
}
# build function to run seir, include ode solver
run_seir <- function(time, state, beta0, beta1, delta, gamma, sigma, N, startInf) {
  parameters <- c(beta0, beta1, delta, gamma)
  names(parameters) <- c("beta0", "beta1", "delta", "gamma")
  init <- c(S0 = (N - startInf)*(sigma),
            S1 = (N - startInf)*(1-sigma),
            E = 0,
            I = startInf,
            R = 0)
  state_est <- as.data.frame(ode(y = init, times = times, func = seir, parms = parameters))
  return(state_est)
}
times <- seq(0, 16, by = 1) #sequence
states <- c("S0", "S1", "E", "I", "R")
# run the run_seir function to see if it works
run_seir(time = times, state= states, beta0 = 1/(1.9/7), beta1 = 0.3*(1/(1.9/7)), delta = 1/(4.1/7), gamma = 1/(4.68/7), sigma = 0.7, N = 1114100, startInf = 100)
#build calc likelihood function
calc_likelihood <- function(times, state, beta0, beta1, delta, gamma, sigma, N, startInf, CFR) {
  model.output <- run_seir(time, state, beta0, beta1, delta, gamma, sigma, N, startInf)
  LL <- sum(dpois(round(as.numeric(gdta)), (model.output$I)/(1/delta)*CFR, log = TRUE))
  print(LL)
  return(LL)
}
# run calc_likelihood function
calc_likelihood(time = times, state = states, beta0 = 1/(1.9/7), beta1 = 0.3*(1/(1.9/7)), delta = 1/(4.1/7), gamma = 1/(4.68/7), sigma = 0.7, N = 1114100, startInf = 100, CFR = 0.02)
#MLE
#parameters that are supposed to be fixed
fixed.pars <- c(N=1114100, startInf=100, CFR = 0.02)
#parameters that mle2 is supposed to estimate
free.pars <- c(beta0 = 1/(1.9/7), beta1 = 0.3*(1/(1.9/7)),
               delta = 1/(4.1/7), gamma = 1/(4.68/7), sigma = 0.7)
#lower bound
lower_v <- c(beta0 = 0, beta1 = 0, delta = 0, gamma = 0, sigma = 0)
#upper bound
upper_v <- c(beta0 = 15, beta1 = 15, delta = 15, gamma = 15, sigma = 1)
#sigma = 1, this is not a typo
#mle function - need to use L-BFGS-B since we need to include boundaries
test2 <- mle2(calc_likelihood, start = as.list(free.pars), fixed = as.list(fixed.pars),method = "L-BFGS-B", lower = lower_v, upper = upper_v)
summary(test2)
After I run mle2, I get a warning saying:
Warning message:
In mle2(calc_likelihood, start = as.list(free.pars), fixed = as.list(fixed.pars), :
some parameters are on the boundary: variance-covariance calculations based on Hessian may be unreliable
and if I look at summary(test2):
Warning message:
In sqrt(diag(object#vcov)) : NaNs produced
Based on the research I've done so far, I understand that the second error might be due to the estimates being on the boundaries, so my question really is how to address the first one.
If I run mle2 with only lower boundaries, I get parameter estimates in the millions, which cannot be correct.
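Concretely, the lower-bounds-only variant I mean is something like this (a sketch of the run I described, reusing the objects above; test3 is just a name I picked):
#mle function with only lower boundaries - this is the run that gives estimates in the millions
test3 <- mle2(calc_likelihood, start = as.list(free.pars), fixed = as.list(fixed.pars),
              method = "L-BFGS-B", lower = lower_v)
summary(test3)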
I am fairly certain that my model specification for the SEIR is correct, but after staring at this code and trying to resolve this issue for a week, I'm open to any input on how to make the fitting work.
Thanks,
JJ
I need to implement a logistic regression manually, using the Score/GMM approach, without using GLM. This is because at later stages the model will be much more complicated. Currently I am running into a problem where, for the logistic regression, the optimization procedures are very dependent on the initial point. To illustrate, here is my code using an online dataset. More details about the procedure are in the comments:
library(data.table)
library(nleqslv)
library(Matrix)
mydata <- read.csv("https://stats.idre.ucla.edu/stat/data/binary.csv")
data_analysis<-data.table(mydata)
data_analysis[,constant:=1]
#Likelihood function for logit
#The logistic regression will regress the binary variable
#admit on a constant and the variable gpa
LL <- function(beta){
  beta=as.numeric(beta)
  data_temp=data_analysis
  mat_temp2 = cbind(data_temp$constant,
                    data_temp$gpa)
  one = rep(1,dim(mat_temp2)[1])
  h = exp(beta %*% t(mat_temp2))
  choice_prob = h/(1+h)
  llf <- sum(data_temp$admit * log(choice_prob)) + (sum((one-data_temp$admit) * log(one-choice_prob)))
  return(-1*llf)
}
#Score to be used when optimizing using LL
#Identical to the Score function below but returns negative output
Score_LL <- function(beta){
  data_temp=data_analysis
  mat_temp2 = cbind(data_temp$constant,
                    data_temp$gpa)
  one = rep(1,dim(mat_temp2)[1])
  h = exp(beta %*% t(mat_temp2))
  choice_prob = h/(1+h)
  resid = as.numeric(data_temp$admit - choice_prob)
  score_final2 = t(mat_temp2) %*% Diagonal(length(resid), x=resid) %*% one
  return(-1*as.numeric(score_final2))
}
#The Score/Deriv/Jacobian of the Likelihood function
Score <- function(beta){
  data_temp=data_analysis
  mat_temp2 = cbind(data_temp$constant,
                    data_temp$gpa)
  one = rep(1,dim(mat_temp2)[1])
  h = exp(beta %*% t(mat_temp2))
  choice_prob = as.numeric(h/(1+h))
  resid = as.numeric(data_temp$admit - choice_prob)
  score_final2 = t(mat_temp2) %*% Diagonal(length(resid), x=resid) %*% one
  return(as.numeric(score_final2))
}
#Derivative of the Score function
Score_Deriv <- function(beta){
  data_temp=data_analysis
  mat_temp2 = cbind(data_temp$constant,
                    data_temp$gpa)
  one = rep(1,dim(mat_temp2)[1])
  h = exp(beta %*% t(mat_temp2))
  weight = (h/(1+h)) * (1- (h/(1+h)))
  weight_mat = Diagonal(length(weight), x=weight)
  deriv = t(mat_temp2)%*%weight_mat%*%mat_temp2
  return(-1*as.array(deriv))
}
#Quadratic Gain function
#Minimized at Score=0 and so minimizing is equivalent to solving the
#FOC of the Likelihood. This is the GMM approach.
Quad_Gain <- function(beta){
  h=Score(as.numeric(beta))
  return(sum(h*h))
}
#Derivative of the Quadratic Gain function
Quad_Gain_deriv <- function(beta){
  return(2*t(Score_Deriv(beta))%*%Score(beta))
}
sol1=glm(admit ~ gpa, data = data_analysis, family = "binomial")
sol2=optim(c(2,2),Quad_Gain,gr=Quad_Gain_deriv,method="BFGS")
sol3=optim(c(0,0),Quad_Gain,gr=Quad_Gain_deriv,method="BFGS")
When I run this code, I get that sol3 matches what glm produces (sol1) but sol2, with a different initial point, differs from the glm solution by a lot. This is something happening in my main code with the actual data as well. One solution is to create a grid and test multiple starting points. However, my main data set has 10 parameters and this would make the grid very large and the program computationally infeasible. Is there a way around this problem?
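To show what I mean by the grid workaround (my own sketch, reusing Quad_Gain and Quad_Gain_deriv from above; with 10 parameters this clearly does not scale):
# try a small grid of starting values and keep the best solution
starts <- expand.grid(b0 = seq(-2, 2, by = 1), b1 = seq(-2, 2, by = 1))
fits <- apply(starts, 1, function(s) optim(as.numeric(s), Quad_Gain, gr = Quad_Gain_deriv, method = "BFGS"))
best <- fits[[which.min(sapply(fits, function(f) f$value))]]
best$par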
Your code seems overly complicated. The following two functions define the negative log-likelihood and negative score vector for a logistic regression with the logit link:
logLik_Bin <- function (betas, y, X) {
  eta <- c(X %*% betas)
  - sum(dbinom(y, size = 1, prob = plogis(eta), log = TRUE))
}
score_Bin <- function (betas, y, X) {
  eta <- c(X %*% betas)
  - crossprod(X, y - plogis(eta))
}
Then you can use it as follows:
# load the data
mydata <- read.csv("https://stats.idre.ucla.edu/stat/data/binary.csv")
# fit with optim()
opt1 <- optim(c(-1, 1, -1), logLik_Bin, score_Bin, method = "BFGS",
              y = mydata$admit, X = cbind(1, mydata$gre, mydata$gpa))
opt1$par
# compare with glm()
glm(admit ~ gre + gpa, data = mydata, family = binomial())
Typically, for well-behaved covariates (i.e., when you expect the coefficients to lie roughly in the interval [-4, 4]), starting at 0 is a good idea.
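For example, starting at zero with the functions above (a quick sketch along those lines; opt0 is just a name I picked):
opt0 <- optim(c(0, 0, 0), logLik_Bin, score_Bin, method = "BFGS",
              y = mydata$admit, X = cbind(1, mydata$gre, mydata$gpa))
opt0$par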
I am using the 'KFAS' package in R to estimate a state-space model with the Kalman filter. My measurement and transition equations are:
$y_t = Z_t x_t + \epsilon_t$ (measurement)
$x_t = T_t x_{t-1} + R_t \eta_t$ (transition),
with $\epsilon_t \sim N(0, H_t)$ and $\eta_t \sim N(0, Q_t)$.
So, I want to estimate the variances $H_t$ and $Q_t$, but also $T_t$, the AR(1) coefficient. My code is as follows:
library(KFAS)
set.seed(100)
eps <- rt(200, 4, 1)
meas <- as.matrix((arima.sim(n=200, list(ar=0.6), innov = rnorm(200)*sqrt(0.5)) + eps),
                  ncol=1)
Zt <- 1
Ht <- matrix(NA)
Tt <- matrix(NA)
Rt <- 1
Qt <- matrix(NA)
ss_model <- SSModel(meas ~ -1 + SSMcustom(Z = Zt, T = Tt, R = Rt, Q = Qt), H = Ht)
fit <- fitSSM(ss_model, inits = c(0,0.6,0), method = 'L-BFGS-B')
But it returns: "Error in is.SSModel(do.call(updatefn, args = c(list(inits, model), update_args)),: System matrices (excluding Z) contain NA or infinite values, covariance matrices contain values larger than 1e+07"
Defining the variances as NA works well, as documented in the package's paper. However, it seems this cannot be done for the AR coefficient. Does anyone know how I can do this?
Note that I am aware of the SSMarima function, which eases the definition of the transition equation for ARIMA models. Although I am able to estimate the AR(1) coefficient and $Q_t$ that way, I still cannot estimate the $\epsilon_t$ variance ($H_t$). Moreover, I am migrating my Kalman filter code from EViews to R, so I need to learn SSMcustom for other, more complicated models.
Thanks!
It seems that you are missing something in your example, as your error message comes from the function fitSSM. If you want to use fitSSM for estimating general state-space models, you need to provide your own model-updating function. The default behaviour can only handle NAs in the covariance matrices H and Q; the main goal of fitSSM is just to get you started with simple cases. For complex models and/or large data, I would recommend writing your own objective function (with the help of the logLik method) and using your favourite numerical optimization routine manually for maximum performance. Something like this:
library(KFAS)
set.seed(100)
eps <- rt(200, 4, 1)
meas <- as.matrix((arima.sim(n=200, list(ar=0.6), innov = rnorm(200)*sqrt(0.5)) + eps),
                  ncol=1)
Zt <- 1
Ht <- matrix(NA)
Tt <- matrix(NA)
Rt <- 1
Qt <- matrix(NA)
ss_model <- SSModel(meas ~ -1 + SSMcustom(Z = Zt, T = Tt, R = Rt, Q = Qt), H = Ht)
objf <- function(pars, model, estimate = TRUE) {
  model$H[1] <- pars[1]
  model$T[1] <- pars[2]
  model$Q[1] <- pars[3]
  if (estimate) {
    -logLik(model)
  } else {
    model
  }
}
opt <- optim(c(1, 0.5, 1), objf, method = "L-BFGS-B",
             lower = c(0, -0.99, 0), upper = c(100, 0.99, 100), model = ss_model)
ss_model_opt <- objf(opt$par, ss_model, estimate = FALSE)
Same with fitSSM:
updatefn <- function(pars, model) {
  model$H[1] <- pars[1]
  model$T[1] <- pars[2]
  model$Q[1] <- pars[3]
  model
}
fit <- fitSSM(ss_model, c(1, 0.5, 1), updatefn, method = "L-BFGS-B",
              lower = c(0, -0.99, 0), upper = c(100, 0.99, 100))
identical(ss_model_opt, fit$model)
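As a possible follow-up (my own sketch, not part of the original answer): once the model has been updated with the optimized parameters, you can run the Kalman filter/smoother on it with KFS and inspect the results, for example:
opt$par                             # estimated values for H[1], T[1] and Q[1]
out <- KFS(ss_model_opt)            # filtering and smoothing with the fitted model
plot(as.numeric(meas), type = "l")  # observed series
lines(out$alphahat, col = 2)        # smoothed state estimates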