I want to compute a null model, a saturated model, and a proposed model for a Poisson regression by hand. For that I designed a log-likelihood function and optimize it with the optim function. It worked well for the null and the proposed model. For the computation of the coefficients of the saturated model I get an error: "Error in beta %*% t(x) : non-conformable arguments". I know what the error means (the dimensions of the matrices don't fit), but I don't know how to fix it; maybe you can help.
data <- as.data.frame(warpbreaks)
# function for the log-likelihood
LogLike <- function(y, x, par) {
  beta <- par
  # the deterministic part of the model:
  lambda <- exp(beta %*% t(x))
  # and here comes the negative log-likelihood of the whole dataset,
  # given the model:
  LL <- -sum(dpois(y, lambda, log = TRUE))
  return(LL)
}
formula <- breaks ~ wool + tension
form <- formula(formula)
# model frame
model <- model.frame(formula, data = data)
# design matrix for the proposed model
x <- model.matrix(formula, data = data)
# response variable
y <- model.response(model)
# model matrix for the null model (intercept column only)
x1 <- as.matrix(x[, 1])
# coefficient of the null model
par1 <- rep(0, 1)
koef <- round(optim(par = par1, fn = LogLike, x = x1, y = y)$par, 4)
koef
# coefficients of the proposed model
par2 <- rep(0, ncol(x))
koef2 <- round(optim(par = par2, fn = LogLike, x = x, y = y)$par, 4)
koef2
# coefficients of the saturated model (this is where the error occurs)
par3 <- rep(0, length(y))
koef3 <- round(optim(par = par3, fn = LogLike, x = x, y = y)$par, 4)
koef3
A saturated model is a model that has as many parameters as data points. You need to build such an x matrix and everything should work. As it stands, x is a 54x4 matrix while it should be 54x54.
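For illustration, a minimal sketch of that fix (the identity-matrix trick and method = "BFGS" are my additions, not part of the original answer; optim's default Nelder-Mead is unreliable in 54 dimensions):
# identity design matrix: one parameter per observation (54 x 54)
xsat <- diag(length(y))
par3 <- rep(0, length(y))
koef3 <- optim(par = par3, fn = LogLike, x = xsat, y = y, method = "BFGS")$par
# in a saturated Poisson model the fitted means equal the observed data,
# so exp(koef3) should be approximately y itself
round(exp(koef3), 2)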
--- Advice (FWIW)
Avoid using functions' names as variables (beta and par in your case; both shadow existing R functions, base::beta and graphics::par).
Hope this helps.
Related
Two conceptually plausible methods of retrieving in-sample predictions (or "conditional expectations") of y[t] given y[t-1] from a bsts model yield different results, and I don't understand why.
One method uses the prediction errors returned by bsts (defined as e=y[t] - E(y[t]|y[t-1]); source: https://rdrr.io/cran/bsts/man/one.step.prediction.errors.html):
library(bsts)
get_yhats1 <- function(fit){
  # one-step prediction errors are defined as e = y[t] - yhat
  # (source: https://rdrr.io/cran/bsts/man/one.step.prediction.errors.html)
  # recover yhat as y - e
  bsts.pred.errors <- bsts.prediction.errors(fit, burn = SuggestBurn(0.1, fit))$in.sample
  predictions <- t(apply(bsts.pred.errors, 1, function(e){fit$original.series - e}))
  return(predictions)
}
Another sums the contributions of all model components at time t:
get_yhats2 <- function(fit){
  burn <- SuggestBurn(0.1, fit)
  X <- fit$state.contributions
  niter <- dim(X)[1]
  ncomp <- dim(X)[2]
  nobs <- dim(X)[3]
  # initialize the prediction matrix with zeros
  predictions <- matrix(data = 0, nrow = niter - burn, ncol = nobs)
  comps <- seq_len(ncomp)
  for (comp in comps) {
    # pull out the state contributions for this component:
    # a (niter - burn) x nobs matrix
    compX <- X[-seq_len(burn), comp, ]
    # accumulate the predictions across components
    predictions <- predictions + compX
  }
  return(predictions)
}
Fit a model:
## Air passengers data
data("AirPassengers")
# 11 years, monthly data (timestep=monthly) --> 132 observations
Y <- stats::window(AirPassengers, start=c(1949,1), end=c(1959,12))
y <- log(Y)
ss <- AddLocalLinearTrend(list(), y)
ss <- AddSeasonal(ss, y, nseasons=12, season.duration=1)
bsts.model <- bsts(y, state.specification=ss, niter=500, family='gaussian')
Compute and compare predictions using each of the functions
p1 <- get_yhats1(bsts.model)
p2 <- get_yhats2(bsts.model)
# Compare predictions for t=1:5, first MCMC iteration:
p1[1,1:5]; p2[1,1:5]
I'm the author of bsts.
The 'prediction errors' in bsts come from the filtering distribution. That is, they come from p(state | past data). The state contributions come from the smoothing distribution, i.e. p(state | all data). The filtering distribution looks backward in time, while the smoothing distribution looks both forward and backward. One typically needs the filtering distribution while using a fitted model, and the smoothing distribution while fitting the model in the first place.
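To make the distinction concrete for the two functions above (my gloss, not part of the author's answer):
# get_yhats1(): filtering,  E(y[t] | data before t)  -> one-step-ahead fit
# get_yhats2(): smoothing,  E(y[t] | all data)       -> in-sample fit
The two sets of "predictions" therefore answer different questions and need not agree.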
I am trying to recover in-sample predictions (fitted values) from a bsts model with a specified Poisson response, using the bsts package in R. The following results in the error: "Prediction errors are not supported for Poisson or logit models."
data("AirPassengers")
# 11 years, monthly data (timestep=monthly) --> 132 observations
Y <- stats::window(AirPassengers, start=c(1949,1), end=c(1959,12))
y <- log10(Y)
ss <- AddLocalLinearTrend(list(), y)
ss <- AddSeasonal(ss, y, nseasons=12, season.duration=1)
bsts.model <- bsts(Y, state.specification=ss, niter=150, family='poisson')
bsts.prediction.errors(bsts.model)
Is there a way to retrieve predictions on model-training data with a poisson model in bsts?
One way to do it is to extract the contribution of each model component at time t and sum them.
get_yhats2 <- function(fit){
  burn <- SuggestBurn(0.1, fit)
  X <- fit$state.contributions
  niter <- dim(X)[1]
  ncomp <- dim(X)[2]
  nobs <- dim(X)[3]
  # initialize the prediction matrix with zeros
  predictions <- matrix(data = 0, nrow = niter - burn, ncol = nobs)
  comps <- seq_len(ncomp)
  for (comp in comps) {
    # pull out the state contributions for this component:
    # a (niter - burn) x nobs matrix
    compX <- X[-seq_len(burn), comp, ]
    # accumulate the predictions across components
    predictions <- predictions + compX
  }
  return(predictions)
}
get_yhats2(bsts.model)
But I also posted here, showing that this method didn't necessarily match expectations I had even in the Gaussian case.
From "Poisson Regression by hand", this 'manual' Poisson coefficient function is provided:
LogLike <- function(y, x, par) {
  beta <- par
  # the deterministic part of the model:
  lambda <- exp(beta %*% t(x))
  # and here comes the negative log-likelihood of the whole dataset,
  # given the model:
  LL <- -sum(dpois(y, lambda, log = TRUE))
  return(LL)
}
PoisMod <- function(formula, data){
  # define the regression formula
  form <- formula(formula)
  # create the model frame
  model <- model.frame(formula, data = data)
  # create the design matrix
  x <- model.matrix(formula, data = data)
  # extract the response variable
  y <- model.response(model)
  par <- rep(0, ncol(x))
  erg <- list(optim(par = par, fn = LogLike, x = x, y = y)$par)
  return(erg)
}
PoisMod(breaks~wool+tension, as.data.frame(daten))
glm(breaks~wool+tension, family = "poisson", data = as.data.frame(daten))
Can anyone tell me exactly where the link function is computed here? What would this code look like with an identity link function? I have a basic understanding from YouTube videos etc., but no one explains the actual computation. And what would this code look like with an offset and weights?
The GLM is specified as
g(E[y | x]) = x * beta
or equivalently as
E[y | x] = g^(-1)(x * beta)
In the case of the Poisson model, the canonical link is g(lambda) = log(lambda), so
lambda = exp(x * beta)
You can see that in your code, where lambda = exp(beta %*% t(x)). If you wanted to estimate a model with an identity link, you would use lambda = beta %*% t(x), because in that case g is the identity and E[y | x] = x * beta directly.
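For concreteness, a minimal sketch of the identity-link version (my illustration; note that dpois() returns NaN whenever lambda <= 0, so optim() needs starting values that keep the linear predictor positive):
LogLikeIdent <- function(y, x, par) {
  beta <- par
  # identity link: the linear predictor is the mean itself
  lambda <- beta %*% t(x)
  -sum(dpois(y, lambda, log = TRUE))
}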
Addressing the follow-up questions in the comments: weights are often used to weight the likelihood contribution of each observation. So, imagine you had a variable wt that you passed into your function LogLike(); you could modify the function by multiplying each observation's log-likelihood contribution by its weight:
LL <- -sum(dpois(y, lambda, log = TRUE)*wt)
An offset is just a variable whose coefficient is forced to be 1 (see this post for a discussion). Let's say you had a variable called offset that you wanted to include in your model. Again, you could pass that argument to LogLike(), and you would need to modify the function as follows:
lambda <- exp(beta%*%t(x) + offset)
Following the example in the CrossValidated post I linked, offset = log(time). Which offset is correct is more of a theoretical or substantive matter.
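Putting the two modifications together, a minimal sketch of LogLike() with wt and offset arguments (my combination of the snippets above; the default values are assumptions):
LogLike <- function(y, x, par, wt = rep(1, length(y)), offset = rep(0, length(y))) {
  beta <- par
  # linear predictor plus the offset (a term with coefficient fixed at 1),
  # then the log link
  lambda <- exp(beta %*% t(x) + offset)
  # weighted negative log-likelihood
  -sum(dpois(y, lambda, log = TRUE) * wt)
}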
I may have found some sort of bug in the polr function (ordinal / polytomous regression) of the MASS library in R. The problem seems to be related to the use of coef() on the summary object, but maybe it is not.
The problem occurs in a function of type:
pol_me <- function(d){
  some_x <- d[,1]
  mod <- polr(some_x ~ d[,2])
  pol_sum <- summary(mod)
  return(pol_sum)
}
To illustrate, I simulate some data for an ordinal regression model.
set.seed(2016)
n <- 1000
x1 <- rnorm(n)
x2 <- 2*x1 + rnorm(n)
make_ord <- function(y){
  y_ord <- y
  y_ord[y < (-1)] <- 1
  y_ord[y >= (-1) & y <= (1)] <- 2
  y_ord[y >= 1] <- 3
  y_ord <- as.factor(y_ord)
}
x1 <- make_ord(x1)
dat <- data.frame(x1, x2)
When we now call the function:
library(MASS)
pol_me(d = dat)
We get the error:
Error in eval(expr, envir, enclos) : object 'some_x' not found
I do not think this should logically happen at this point. In fact, when we define an alternative function in which the model command is replaced by a linear model (lm) on a numerical dependent variable, i.e.
mod <- lm(as.numeric(some_x) ~ d[,2])
The resulting function works fine.
Is this really a bug or a programming problem in my code, and how can I get pol_me to run?
summary(polr(dat[,1] ~ dat[,2])) prints the message "Re-fitting to get Hessian", and that re-fit is the cause of the error: it re-evaluates the model call outside pol_me(), where some_x does not exist. polr's argument Hess = TRUE will solve your problem. (?polr says: "Hess: logical for whether the Hessian (the observed information matrix) should be returned. Use this if you intend to call summary or vcov on the fit.")
pol_me <- function(d){
  some_x <- d[,1]
  mod <- polr(some_x ~ d[,2], Hess = TRUE) # modified: compute the Hessian up front
  pol_sum <- summary(mod)
  return(pol_sum)
}
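With this change, the original call runs (a quick check, using the simulated dat from above):
pol_me(d = dat)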
I would like to estimate the coefficients of a nonlinear model with a binary dependent variable. The nonlinearity arises because two regressors, A and B, depend on a subset of the dataset and on the two parameters lambda1 and lambda2 respectively:
y = alpha + beta1 * A(lambda1) + beta2 * B(lambda2) + delta * X + epsilon
where for each observation i we have
A_i(lambda1) = sum_{k=1}^{a_i - 1} w_k * R_{i,k},   with   w_k = (a_i - k)^lambda1 / sum_{j=1}^{a_i - 1} (a_i - j)^lambda1
where a and the R's are variables in the data.frame. The regressor B(lambda2) is defined in a similar way.
Moreover, I need to include what in Stata are known as pweights, i.e. survey weights or sampling weights. For this reason, I'm working with the R package survey by Thomas Lumley.
First, I create a function for A (and B), i.e.:
A <- function(l1){
  R <- as.matrix(data[, 1:80])
  a <- data[, 169]
  N <- length(a)
  var <- numeric(N)
  for (i in 1:N) {
    ai <- rep(a[i], a[i]-1)  # vector repeating a(i)
    k <- 1:(a[i]-1)          # numbers from 1 to a(i)-1
    num <- (ai-k)^l1
    den <- sum((ai-k)^l1)
    w <- num/den
    w <- c(w, rep(0, dim(R)[2]-length(w)))  # pad with zeros to ncol(R)
    var[i] <- R[i,] %*% w
  }
  return(var)
}
B <- function(l2){
  C <- as.matrix(data[, 82:(161-1)])
  a <- data[, 169]
  N <- length(a)
  var <- numeric(N)
  for (i in 1:N) {
    ai <- rep(a[i], a[i]-1)  # vector repeating a(i)
    k <- 1:(a[i]-1)          # numbers from 1 to a(i)-1
    num <- (ai-k)^l2
    den <- sum((ai-k)^l2)
    w <- num/den
    w <- c(w, rep(0, dim(C)[2]-length(w)))  # pad with zeros to ncol(C)
    var[i] <- C[i,] %*% w
  }
  return(var)
}
But the problem is that I don't know how to include the nonlinear regressors in the model (or in the survey design, using the function svydesign):
d_test <- svydesign(id=~1, data = data, weights = ~data$hw0010)
Because, when I try to estimate the model:
# log-likelihood function:
LLsvy <- function(y, model, lambda1, lambda2){
  aux1 <- y * log(pnorm(model))
  aux2 <- (1-y) * log(1 - pnorm(model))
  LL <- aux1 + aux2
  return(LL)
}
fit <- svymle(loglike = LLsvy,
              formulas = list(~y, model = ~ A(lambda1) + B(lambda2) + X,
                              lambda1 = ~1, lambda2 = ~1),
              design = d_test,
              start = list(c(0,0,0,0), c(lambda1 = 11), c(lambda2 = 8)),
              na.action = "na.exclude")
I get the error message:
Error in eval(expr, envir, enclos) : object 'lambda1' not found
I think that the problem is in including the nonlinear part, because everything works fine if I fix A and B for some lambda1 and lambda2 (so that the model becomes linear):
lambda1=11
lambda2=8
data$A <- A(lambda1)
data$B <- B(lambda2)
d_test <- svydesign(id=~1, data = data, weights = ~data$hw0010)
LLsvylin <- function(y, model){
  aux1 <- y * log(pnorm(model))
  aux2 <- (1-y) * log(1 - pnorm(model))
  LL <- aux1 + aux2
  return(LL)
}
fitlin <- svymle(loglike = LLsvylin,
                 formulas = list(~y, model = ~ A + B + X),
                 design = d_test,
                 start = list(0,0,0,0),
                 na.action = "na.exclude")
On the contrary, if I don't use the sampling weights, I can easily estimate my nonlinear model using the function mle from package stats4 or the function mle2 from package bbmle.
To sum up: how can I combine sampling weights (svymle) with estimating a nonlinear model (which I can do using mle or mle2)?
=========================================================================
A problem with the nonlinear part of the model arises also when using the function svyglm (with fixed lambda1 and lambda2, in order to get good starting values for svymle):
lambda1=11
lambda2=8
model0 = y ~ A(lambda1) + B(lambda2) + X
probit1 <- svyglm(formula = model0,
                  data = data,
                  family = binomial(link = probit),
                  design = d_test)
Because I get the error message:
Error in svyglm.survey.design(formula = model0, data = data, family = binomial(link = probit), :
all variables must be in design= argument
This isn't what svymle does -- it's for generalised linear models, which have linear predictors and a potentially complicated likelihood or loss function. You want non-linear weighted least squares, with a simple loss function but complicated predictors.
There isn't an implementation of design-weighted nonlinear least squares in the survey package, probably because no-one has previously asked for one. You could try emailing the package author.
The upcoming version 4 of the survey package will have a function svynls, so if you know how to fit your model without sampling weights using nls you will be able to fit it with sampling weights.
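For reference, a hypothetical unweighted nls() fit of the model above (my sketch, not from the answer; it assumes A() and B() as defined earlier, a numeric 0/1 response y and a column X in data, and the starting values are only illustrative):
fit_nls <- nls(y ~ alpha + beta1 * A(lambda1) + beta2 * B(lambda2) + delta * X,
               data = data,
               start = list(alpha = 0, beta1 = 1, beta2 = 1, delta = 0,
                            lambda1 = 11, lambda2 = 8))
summary(fit_nls)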