I would like to estimate this model by maximum likelihood (MLE) using MARSS (or another R package):
x_t=x_{t-1}+w_t , with w_t ~ N(0,q)
y_t= d1_t + \alpha d2_t + \beta (d3_t -x_{t-1}) + v_t, with v_t ~ N(0,6*q)
where the first line is the transition equation and the second, the observation one.
I managed to write it in a form accepted by MARSS, as below:
[x1_t, x2_t] = [1,0; 1,0] [x1_{t-1}, x2_{t-1}] + [w1_t, w2_t], with w1_t ~ N(0,q) and w2_t ~ N(0,0)
y_t = D d_t + Z x_t + v_t, with v_t ~ N(0,6*q)
where
x_t = [x1_t, x2_t], with x2_t = x1_{t-1}
D = [1, \alpha, \beta]
Z = [0, \beta]
d_t = [d1_t, d2_t, d3_t]
The problem is that I couldn't make the constraint work properly. When I run this system, R treats the \beta in the Z matrix separately from the \beta in the D matrix. All the examples I have seen online show a linear restriction within the Z matrix only (or within D only). The same issue occurs with the variances, which I would like to be multiples of each other.
Could anyone help me with this?
Here's some toy data:
B <- matrix(list(1,0,1,0),2,2,byrow=TRUE)
U <- matrix(0,2,1)
C <- matrix(0,2,1)
G <- matrix(list(1,0,0,0),2,2,byrow=TRUE)
Q <- matrix(list('d',0,0,0),2,2,byrow=TRUE)
Z <- matrix(list(0,'b'),1,2)
A <- matrix(0)
D <- matrix(list(1,'a','b'),1,3)
H <- matrix(1)
R=matrix(list('6*d'))
dt<-matrix(rnorm(300),3,100)
y<-rnorm(100)
x0=matrix(list(0.094,0.094),2,1)
V0=matrix(list(0.001,0,0,0.001),2,2)
model.list = list(B=B, U=U, C=C, Q=Q, Z=Z, A=A, D=D, d=dt, H=H, R=R,x0=x0,V0=V0)
kemfit = MARSS(y, model=model.list, control=list(maxit=100,conv.test.slope.tol=0.1,abstol=0.1),method='kem')
The EM algorithm in MARSS only allows constraints (like setting values equal) within the same matrix. Setting constraints across A & D or U & C is easy, but constraints across D & Z or R & Q require rewriting your model in an awkward way in which your covariates (dt) appear as dummy states (x's). So you don't want to do that.
Instead, you can write a function that returns the negative log-likelihood of your state-space model and then minimize that with optim(). I would do this with the KFAS package using the SSMcustom() function because that will be fast. However, here is how to do it with the MARSS package, just to show you the concept. As the author of MARSS, I can write this down immediately, whereas with the KFAS package (which I also use) I'd need to look up how to handle the covariates.
# Set up the parts that don't change
dt<-matrix(rnorm(300),3,100)
y<-rnorm(100)
x0=matrix(list(0.094,0.094),2,1)
V0=matrix(list(0.001,0,0,0.001),2,2)
B <- matrix(list(1,0,1,0),2,2,byrow=TRUE)
U <- A <- "zero"
# Put the parameters you will estimate into a vector
pars <- c(a=0.1624, b=-0.1, d=sqrt(0.2))
# Write a function to return the negative log-likelihood
negloglik <- function(pars){
  Q <- matrix(list(pars["d"]^2, 0, 0, 0), 2, 2, byrow=TRUE)
  Z <- matrix(list(0, pars["b"]), 1, 2)
  D <- matrix(list(1, pars["a"], pars["b"]), 1, 3)
  R <- matrix(6*pars["d"]^2)
  model.list <- list(B=B, U=U, Q=Q, Z=Z, A=A, D=D, d=dt, R=R, x0=x0, V0=V0)
  -1*MARSS(y, model=model.list,
           control=list(maxit=100, conv.test.slope.tol=0.1, abstol=0.1),
           method='kem', silent=TRUE)$logLik
}
optim(pars, negloglik, method="BFGS")
Using the MARSS() function to get the logLik is a bit silly here, since MARSS() is a fitting function, but with all the parameters fixed it will just return the logLik without fitting.
If you want to see what your KFAS model should look like, you can do this:
kfas.model <- MARSSkfas(kemfit, return.kfas.model=TRUE, return.lag.one=FALSE)$kfas.model
Then
library(KFAS)
logLik(kfas.model)
will get you the log-likelihood. But how the covariates enter the KFAS model is a little non-intuitive: they appear in the kfas.model$Z element as a time-varying Z. I am sure the KFAS package has some helper function for constructing models with covariates. I always construct KFAS models from matrices (no helper functions), so I am not familiar with those, but I know they exist.
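For comparison, here is a rough, untested sketch of how the same negative log-likelihood idea might look written directly with KFAS and SSMcustom(). For fixed parameter values the covariate part D d_t is known, so it can simply be subtracted from y, which avoids the time-varying Z; note that KFAS writes the transition for the state at t+1, so the exact timing of the initial state should be double-checked against the MARSS set-up.
library(KFAS)
negloglik_kfas <- function(pars) {
  q <- as.numeric(pars["d"])^2
  # subtract the known regression part D %*% d_t from y for the current parameters
  ystar <- as.vector(y - matrix(c(1, pars["a"], pars["b"]), 1, 3) %*% dt)
  mod <- SSModel(ystar ~ -1 +
                   SSMcustom(Z = matrix(c(0, pars["b"]), 1, 2),
                             T = matrix(c(1, 1, 0, 0), 2, 2),  # same B as above
                             R = matrix(c(1, 0), 2, 1),
                             Q = matrix(q),
                             a1 = matrix(c(0.094, 0.094), 2, 1),
                             P1 = diag(0.001, 2)),
                 H = matrix(6 * q))
  -as.numeric(logLik(mod))
}
optim(pars, negloglik_kfas, method = "BFGS")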
I am looking for a fast way to do nonnegative quantile and Huber regression in R (i.e. with the constraint that all coefficients are >= 0). I tried the CVXR package for quantile & Huber regression and the quantreg package for quantile regression, but CVXR is very slow and quantreg seems buggy when I use nonnegativity constraints. Does anybody know of a good and fast solution in R, e.g. using the Rcplex package or the R gurobi API, thereby using the faster CPLEX or Gurobi optimizers?
Note that I need to solve a problem of the size below about 80,000 times, whereby I only need to update the y vector in each iteration, while the predictor matrix X stays the same. In that sense, it feels inefficient that in CVXR I now have to rebuild obj <- sum(quant_loss(y - X %*% beta, tau=0.01)); prob <- Problem(Minimize(obj), constraints = list(beta >= 0)) within each iteration, when the problem in fact stays the same and all I want to update is y. Any thoughts on how to do all this better/faster?
Minimal example:
## Generate problem data
n <- 7 # n predictor vars
m <- 518 # n cases
set.seed(1289)
beta_true <- 5 * matrix(stats::rnorm(n), nrow = n)+20
X <- matrix(stats::rnorm(m * n), nrow = m, ncol = n)
y_true <- X %*% beta_true
eps <- matrix(stats::rnorm(m), nrow = m)
y <- y_true + eps
Nonnegative quantile regression using CVXR:
## Solve nonnegative quantile regression problem using CVX
require(CVXR)
beta <- Variable(n)
quant_loss <- function(u, tau) { 0.5*abs(u) + (tau - 0.5)*u }
obj <- sum(quant_loss(y - X %*% beta, tau=0.01))
prob <- Problem(Minimize(obj), constraints = list(beta >= 0))
system.time(beta_cvx <- pmax(solve(prob, solver="SCS")$getValue(beta), 0)) # estimated coefficients; note that they occasionally can go slightly negative, so I had to clip at 0
# 0.47s
cor(beta_true,beta_cvx) # correlation=0.99985, OK but very slow
Syntax for nonnegative Huber regression is the same but would use
M <- 1 ## Huber threshold
obj <- sum(CVXR::huber(y - X %*% beta, M))
Nonnegative quantile regression using the quantreg package:
### Solve nonnegative quantile regression problem using quantreg package with method="fnc"
require(quantreg)
R <- rbind(diag(n),-diag(n))
r <- c(rep(0,n),-rep(1E10,n)) # specify bounds on the coefficients; I want them to be nonnegative, and 1E10 should ideally be Inf
system.time(beta_rq <- coef(rq(y~0+X, R=R, r=r, tau=0.5, method="fnc"))) # estimated coefficients
# 0.12s
cor(beta_true,beta_rq) # correlation=-0.477, no good, and even worse with tau=0.01...
To speed up CVXR, you can get the problem data once in the beginning, then modify it within a loop and pass it directly to the solver's R interface. The code for this is
prob_data <- get_problem_data(prob, solver = "SCS")
Then, parse out the arguments and pass them to scs from the scs library (see Solver.solve in solver.R). You'll have to dig into the details of the canonicalization, but I expect that if you're just changing y at each iteration, it should be a straightforward modification.
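A skeletal sketch of that workflow (the exact structure of prob_data, and therefore which piece of it to overwrite, depends on the CVXR version and has to be read off by inspecting the object):
library(scs)   # the solver CVXR hands the canonicalized problem to
# canonicalize once, outside the loop
prob_data <- get_problem_data(prob, solver = "SCS")
str(prob_data, max.level = 1)  # inspect: where did y end up after canonicalization?
# Inside the loop, overwrite only the piece of the canonical data that depends on y
# (typically part of the right-hand-side vector; the exact slot and indices must be
# read off from the inspection above), then call the solver directly, e.g.
# scs::scs(A = ..., b = ..., obj = ..., cone = ...), and map the raw solution back
# to beta the same way Solver.solve in solver.R does.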
For exactly identified moments, GMM results should be the same regardless of the starting values. This doesn't appear to be the case, however.
library(gmm)
data(Finance)
x <- data.frame(rm=Finance[1:500,"rm"], rf=Finance[1:500,"rf"])
# want to solve for coefficients theta[1], theta[2] in exactly identified
# system
g <- function(theta, x) {
  m.1 <- x[,"rm"] - theta[1] - theta[2]*x[,"rf"]
  m.z <- (x[,"rm"] - theta[1] - theta[2]*x[,"rf"])*x[,"rf"]
  f <- cbind(m.1, m.z)
  return(f)
}
# gmm coefficient result should be identical to ols regressing rm on rf
# since two moments are E[u]=0 and E[u*rf]=0
model.lm <- lm(rm ~ rf, data=x)
model.lm
# gmm is consistent with lm given correct starting values
summary(gmm(g, x, t0=model.lm$coefficients))
# problem is that using different starting values leads to different
# coefficients
summary(gmm(g, x, t0=rep(0,2)))
Is there something wrong with my setup?
The gmm package author Pierre Chausse was kind enough to respond to my inquiry.
For linear models, he suggests using the formula approach:
gmm(rm ~ rf, ~rf, data=x)
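For an exactly identified linear model like this one, a quick way to confirm that the formula interface reproduces the OLS fit is to compare the coefficients side by side:
cbind(ols = coef(model.lm), gmm = coef(gmm(rm ~ rf, ~rf, data = x)))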
For non-linear models, he emphasizes that the starting values are indeed critical. In the case of exactly identified models, he suggests setting the fnscale to a small number to force the optim minimizer to converge closer to 0. Also, he thinks the BFGS algorithm works better with GMM.
summary(gmm(g, x, t0=rep(0,2), method = "BFGS", control=list(fnscale=1e-8)))
Both solutions work for this example. Thanks Pierre!
I am trying to determine whether there is a significant difference between two Gamma distributions. One distribution has (shape, scale) = (shapeRef, scaleRef), while the other has (shape, scale) = (shapeTarget, scaleTarget). I tried to do an analysis of variance with the following code:
n=10000
x=rgamma(n, shape=shapeRef, scale=scaleRef)
y=rgamma(n, shape=shapeTarget, scale=scaleTarget)
glmm1 <- gam(y~x,family=Gamma(link=log))
anova(glmm1)
The resulting p values keep changing and can be anywhere from <0.1 to >0.9.
Am I going about this the wrong way?
Edit: I used the following code instead:
f <- gl(2, n)
x=rgamma(n, shape=shapeRef, scale=scaleRef)
y=rgamma(n, shape=shapeTarget, scale=scaleTarget)
xy <- c(x, y)
anova(glm(xy ~ f, family = Gamma(link = log)),test="F")
But, every time I run it I get a different p-value.
You will indeed get a different p-value every time you run this, if you pick different realizations every time. Just as your data values are random variables that you'd expect to vary each time you ran an experiment, so is the p-value. If the null hypothesis is true (which was the case in your initial attempts), then the p-values will be uniformly distributed between 0 and 1.
Function to generate simulated data:
simfun <- function(n=100,shapeRef=2,shapeTarget=2,
scaleRef=1,scaleTarget=2) {
f <- gl(2, n)
x=rgamma(n, shape=shapeRef, scale=scaleRef)
y=rgamma(n, shape=shapeTarget, scale=scaleTarget)
xy <- c(x, y)
data.frame(xy,f)
}
Function to run anova() and extract the p-value:
sumfun <- function(d) {
aa <- anova(glm(xy ~ f, family = Gamma(link = log),data=d),test="F")
aa["f","Pr(>F)"]
}
Try it out, 500 times:
set.seed(101)
r <- replicate(500,sumfun(simfun()))
The p-values are always very small (the difference in scale parameters is easily distinguishable), but they do vary:
par(las=1,bty="l") ## cosmetic
hist(log10(r),col="gray",breaks=50)
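If you instead simulate with the null hypothesis true (identical shape and scale parameters in both groups), the histogram of p-values is roughly flat, illustrating the Uniform(0,1) behaviour described above:
r0 <- replicate(500, sumfun(simfun(shapeTarget = 2, scaleTarget = 1)))  # null is true
hist(r0, col = "gray", breaks = 20)  # approximately uniform on [0, 1]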
I'm looking for a script or package in R (Python will do too) to estimate the component distribution parameters of a mixture of Gaussian and Gamma distributions. So far I've used the R package "mixtools" to model the data as a mixture of Gaussians, but I think it can be better modeled by a Gamma plus a Gaussian.
Thanks
Here's one possibility:
Define utility functions:
rnormgammamix <- function(n,shape,rate,mean,sd,prob) {
ifelse(runif(n)<prob,
rgamma(n,shape,rate),
rnorm(n,mean,sd))
}
(This could be made a little bit more efficient ...)
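For example, a slightly more efficient variant (just a sketch) draws the component labels first and then only generates as many Gamma and Normal deviates as are actually needed, instead of computing both for every observation:
rnormgammamix2 <- function(n, shape, rate, mean, sd, prob) {
  from_gamma <- runif(n) < prob
  out <- numeric(n)
  out[from_gamma] <- rgamma(sum(from_gamma), shape, rate)
  out[!from_gamma] <- rnorm(sum(!from_gamma), mean, sd)
  out
}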
dnormgammamix <- function(x,shape,rate,mean,sd,prob,log=FALSE) {
r <- prob*dgamma(x,shape,rate)+(1-prob)*dnorm(x,mean,sd)
if (log) log(r) else r
}
Generate fake data:
set.seed(101)
r <- rnormgammamix(1000,1.5,2,3,2,0.5)
d <- data.frame(r)
Approach #1: bbmle package. Fit shape, rate, standard deviation on log scale, prob on logit scale.
library("bbmle")
m1 <- mle2(r~dnormgammamix(exp(logshape),exp(lograte),mean,exp(logsd),
plogis(logitprob)),
data=d,
start=list(logshape=0,lograte=0,mean=0,logsd=0,logitprob=0))
cc <- coef(m1)
png("normgam.png")
par(bty="l",las=1)
hist(r,breaks=100,col="gray",freq=FALSE)
rvec <- seq(-2,8,length=101)
pred <- with(as.list(cc),
dnormgammamix(rvec,exp(logshape),exp(lograte),mean,
exp(logsd),plogis(logitprob)))
lines(rvec,pred,col=2,lwd=2)
true <- dnormgammamix(rvec,1.5,2,3,2,0.5)
lines(rvec,true,col=4,lwd=2)
dev.off()
tcc <- with(as.list(cc),
c(shape=exp(logshape),
rate=exp(lograte),
mean=mean,
sd=exp(logsd),
prob=plogis(logitprob)))
cbind(tcc,c(1.5,2,3,2,0.5))
The fit is reasonable, but the parameters are fairly far off; I think this model isn't very strongly identifiable in this parameter regime (i.e., the Gamma and Gaussian components can be swapped).
Approach #2: fitdistr from the MASS package.
library("MASS")
ff <- fitdistr(r,dnormgammamix,
start=list(shape=1,rate=1,mean=0,sd=1,prob=0.5))
cbind(tcc,ff$estimate,c(1.5,2,3,2,0.5))
fitdistr gets the same result as mle2, which suggests we're
in a local minimum. If we start from the true parameters we get
to something reasonable and near the true parameters.
ff2 <- fitdistr(r,dnormgammamix,
start=list(shape=1.5,rate=2,mean=3,sd=2,prob=0.5))
-logLik(ff2) ## 1725.994
-logLik(ff) ## 1755.458
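One simple way to guard against this kind of local optimum (a rough sketch, reusing dnormgammamix and the data r from above) is to refit from several randomized starting values and keep the fit with the highest log-likelihood:
set.seed(102)
fits <- lapply(1:10, function(i) {
  st <- list(shape = runif(1, 0.5, 3), rate = runif(1, 0.5, 3),
             mean = rnorm(1, mean(r), 1), sd = runif(1, 0.5, 3),
             prob = runif(1, 0.2, 0.8))
  try(fitdistr(r, dnormgammamix, start = st), silent = TRUE)  # some starts may fail
})
fits <- fits[!sapply(fits, inherits, "try-error")]
best <- fits[[which.max(sapply(fits, logLik))]]
best$estimate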
I have a relationship:
y = a + b + c
I have the average and standard deviation of a, b, and c, and I would like to obtain the probability distribution of y from this by Monte Carlo simulation.
Is there a function or package or easy way that I can use to do this?
I assume your inputs a, b, and c are normally distributed, because you say you can define them with a mean and standard deviation. If that is the case, you can do this pretty quickly without any special package.
mu.a=33
mu.b=32
mu.c=13
sigma.a=22
sigma.b=22
sigma.c=222
n <- 10^5  # a large number
a=rnorm(n,mu.a,sigma.a)
b=rnorm(n,mu.b,sigma.b)
c=rnorm(n,mu.c,sigma.c)
y=a+b+c
plot(density(y))
mean(y)
sd(y)
Make sure to be aware of all the assumptions we are making about y,a,b and c.
If you want to do something more complex, like figuring out the sampling variance of the mean of y, then do this procedure many times, collecting the mean each time, and plot the results.
mysimfun <- function(n, mu, sigma, stat.you.want='mean') {
  # mu and sigma are each length 3 (one entry per input variable)
  a <- rnorm(n, mu[1], sigma[1])
  b <- rnorm(n, mu[2], sigma[2])
  c <- rnorm(n, mu[3], sigma[3])
  y <- a + b + c
  return(ifelse(stat.you.want=='mean', mean(y), sd(y)))
}
mu <- c(mu.a, mu.b, mu.c)
sigma <- c(sigma.a, sigma.b, sigma.c)
mi <- rep(NA, 100)
Then run it in a loop of some sort.
for(i in 1:100) {mi[i]=mysimfun(10,mu,sigma,stat.you.want='mean') }
par(mfrow=c(2,1))
hist(mi)
plot(density(mi))
mean(mi)
sd(mi)
There are two approaches: bootstrapping, which I think is what you might mean by Monte Carlo, or, if you are more interested in the theory than in constructing estimates from empirical distributions, the 'distr' package and its friends 'distrSim' and 'distrTEst'.
require(boot)
ax <- rnorm(100); bx <- runif(100); cx <- rexp(100)
dat <- data.frame(ax=ax, bx=bx, cx=cx)
# for sim="parametric", boot() needs a ran.gen function that regenerates the inputs
# from their assumed distributions on each replicate (otherwise every replicate
# would reuse the same data)
regen <- function(d, p) data.frame(ax=rnorm(nrow(d)), bx=runif(nrow(d)), cx=rexp(nrow(d)))
boot(dat, function(d){ with(d, mean(ax+bx+cx)) }, R=1000, sim="parametric", ran.gen=regen)
boot(dat, function(d){ with(d, sd(ax+bx+cx)) }, R=1000, sim="parametric", ran.gen=regen)
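For the 'distr' route mentioned above, a rough sketch (an illustration rather than tested code) might look like the following; the package overloads arithmetic on distribution objects, so the sum is handled by convolution, and r(dy) returns a random-number generator if draws are still wanted. The means and standard deviations are the ones used in the earlier answer.
library(distr)
da <- Norm(mean = 33, sd = 22)
db <- Norm(mean = 32, sd = 22)
dc <- Norm(mean = 13, sd = 222)
dy <- da + db + dc  # distribution object for y = a + b + c
plot(dy)            # density, cdf and quantile function of y
r(dy)(10)           # a few random draws, if simulation is still wanted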