I am trying to fit the generalized linear model defined below. Note that the response variable Var1 and the regressor Var2 both contain zero values, so a constant has been added to avoid problems when taking the log:
model = glm(Var1+2 ~ log(Var2+2) + offset(log(Var3/Var4)),
            family = gaussian(link = "log"), data = data2)
However, when I try to produce the half-normal plot for the diagnostic analysis with the hnp function, I get the following error:
library(hnp)
hnp(model)
Gaussian model (glm object)
Error in eval(family$initialize) :
cannot find valid starting values: please specify some
To work around this, I tried the manual implementation of hnp so I could build the plot myself, but the error message is still present.
dfun <- function(obj) resid(obj)
sfun <- function(n, obj) simulate(obj)[[1]]
ffun <- function(resp) glm(resp ~ log(Var2+2) + offset(log(Var3/Var4)),
                           family = gaussian(link = "log"), data = data2)
hnp(model, newclass = TRUE, diagfun = dfun, simfun = sfun, fitfun = ffun)
Error in eval(family$initialize) :
cannot find valid starting values: please specify some
I also followed some guidelines I found for this problem, such as supplying initial values for the estimation algorithm (both on the linear predictor scale and for the means), but these were not enough to solve it; see the routine below:
fit = lm(Var1+2 ~ log(Var2+2) + offset(log(Var3/Var4)), data=data2)
coefficients(fit)
(Intercept) log(Var2+2) 
  32.961103   -8.283306 
model = glm(Var1+2 ~ log(Var2+2) + offset(log(Var3/Var4)),
            family = gaussian(link = "log"), start = c(32.96, -8.28), data = data2)
hnp(model)
Error in eval(family$initialize) :
cannot find valid starting values: please specify some
The error persists even when I combine these starting values with the manual implementation of the half-normal plot.
dfun <- function(obj) resid(obj)
sfun <- function(n, obj) simulate(obj)[[1]]
ffun <- function(resp) glm(resp ~ log(Var2+2) + offset(log(Var3/Var4)),
                           family = gaussian(link = "log"), data = data2, start = c(32.96, -8.28))
hnp(model, newclass = TRUE, diagfun = dfun, simfun = sfun, fitfun = ffun)
Error in eval(family$initialize) :
cannot find valid starting values: please specify some
I also tried refitting the model after removing the zeros from the dataset, but the problem still persists.
I suspect what you meant to fit is a log-transformed response variable against your predictors. You can find more detail about the difference between a log-link GLM and a log-transformed response variable; essentially, when you use a log link you are assuming the errors are on the exponential scale. I am not so familiar with hnp, but my guess is that there are problems simulating the response variable.
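To make the distinction concrete, here is a minimal sketch on made-up data (none of these names come from your question): the log-link model keeps y on its original scale and models log(E[y]) with additive errors, while the transformed version fits a linear model to log(y).

## made-up positive response with additive noise on the original scale
set.seed(1)
d <- data.frame(x = runif(200, 1, 10))
d$y <- exp(1 + 0.3 * log(d$x)) + rnorm(200, sd = 0.5)

## (1) Gaussian GLM with a log link: log(E[y]) = b0 + b1 * log(x)
fit_link <- glm(y ~ log(x), family = gaussian(link = "log"), data = d)

## (2) log-transformed response: E[log(y)] = b0 + b1 * log(x)
fit_trans <- lm(log(y) ~ log(x), data = d)

coef(fit_link)
coef(fit_trans)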
If I run your regression like this using the data provided, it looks OK:
data2$Y <- with(data2, log((Var1 + 2) / (Var3 / Var4)))  # response with the offset log(Var3/Var4) moved to the left-hand side
model = glm(Y ~ log(Var2+2), data = data2)
hnp(model)
I am trying to run the following code in R:
m <- gam(Flp_pop ~ s(Flp_CO, bs = "cr", k = 30), data = data, family = poisson, method = "REML")
My dataset looks like this:
[screenshot of the dataset]
But when I try to run the model, I get this error message:
"Error in if (abs(old.score - score) > score.scale * conv.tol) { :
missing value where TRUE/FALSE needed
In addition: There were 50 or more warnings (use warnings() to see the first 50)"
I am very new to R, so maybe this is a very basic question, but does anyone know why this is happening?
Thanks!
The Poisson distribution has support on the non-negative integers, and you are passing a continuous variable as the response. Here's an example with simulated data:
library("mgcv")
library("gratia")
library("dplyr")
df <- data_sim("eg1", seed = 2) %>% # simulate Gaussian response
  mutate(yabs = abs(y))            # make y non-negative
mp <- gam(yabs ~ s(x2, bs = "cr"), data = df,
          family = poisson, method = "REML")
# fails
which reproduces the error you saw:
Error in if (abs(old.score - score) > score.scale * conv.tol) { :
missing value where TRUE/FALSE needed
In addition: There were 50 or more warnings (use warnings() to see the first 50)
The warnings are of the form:
$> warnings()[1]
Warning message:
In dpois(y, y, log = TRUE) : non-integer x = 7.384012
This points to the problem: the model is evaluating the Poisson probability mass for your response data given the estimated model, and it is being evaluated at the indicated non-integer value, which returns zero mass plus the warning.
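You can reproduce that behaviour on its own:

dpois(7.384012, lambda = 7.384012)
## returns 0, with a warning about non-integer x = 7.384012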
If we'd passed the original Gaussian variable as the response, which includes negative values, the function would have errored out earlier:
mp <- gam(y ~ s(x2, bs = "cr"), data = df,
          family = poisson, method = "REML")
which raises this error:
Error in eval(family$initialize) :
negative values not allowed for the 'Poisson' family
An immediate, but not necessarily advisable, solution is just to use the quasipoisson family:
mq <- gam(yabs ~ s(x2, bs = "cr"), data = df,
          family = quasipoisson, method = "REML")
which uses the same mean-variance relationship as the Poisson distribution but not the actual distribution, so we can get away with abusing it.
A better approach would be to ask yourself why you are trying to fit a model that is ostensibly for counts to a response that is a continuous (non-negative) variable.
If the answer is that you had a count but then normalised it in some way (say, by dividing by some measure of effort such as area surveyed or length of observation time), then you should add an offset of the form + offset(log(effort_var)) to the model formula and use the original, non-normalised integer variable as the response.
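As a sketch only (count_var is a hypothetical name for your raw count; effort_var is the effort measure mentioned above), that model would look something like:

m_off <- gam(count_var ~ s(Flp_CO, bs = "cr", k = 30) + offset(log(effort_var)),
             data = data, family = poisson, method = "REML")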
If you really have a continuous response and the Poisson was an oversight, try fitting with family = Gamma(link = "log") or family = tw().
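For example, with the simulated yabs from above, either of these fits without complaint (a quick sketch, not a recommendation of one family over the other):

mg <- gam(yabs ~ s(x2, bs = "cr"), data = df,
          family = Gamma(link = "log"), method = "REML")

mt <- gam(yabs ~ s(x2, bs = "cr"), data = df,
          family = tw(), method = "REML")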
If it's something else, you should edit your question to include that information; perhaps we can help here, or the question could be migrated to CrossValidated if the issue is more statistical in nature.
I am trying to run a logit regression and I tried two approaches:
m.logit <- glm(p4 ~ scale(log(gdp,orthodox,swb)),
               data = happiness,
               family = binomial("logit"))
summary(m.logit)
Throws: Error in summary(m.logit) : object 'm.logit' not found
While
m1.logit <- glm(p4 ~ gdp + orthodox + swb, family = binomial(link = "logit"), data = happiness)
Throws: Error in eval(family$initialize) : y values must be 0 <= y <= 1
I kind of understood the errors (in the former case m.logit is not found, and in the latter I think I need to transform the variables), but I don't know how to solve them.
Any help?
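For what it's worth, the second error usually means the response passed to family = binomial is neither 0/1 nor a proportion in [0, 1]. A minimal sketch with made-up data (the variable names and the cut-off are purely illustrative, not taken from your dataset):

set.seed(1)
happiness_demo <- data.frame(p4  = sample(1:10, 200, replace = TRUE),  # e.g. a 1-10 score, not 0/1
                             gdp = rnorm(200),
                             orthodox = rnorm(200),
                             swb = rnorm(200))

## reproduces: Error in eval(family$initialize) : y values must be 0 <= y <= 1
## glm(p4 ~ gdp + orthodox + swb, family = binomial("logit"), data = happiness_demo)

## runs once the response is recoded as 0/1 (an arbitrary cut-off, for illustration only)
happiness_demo$p4_bin <- as.integer(happiness_demo$p4 >= 6)
m.demo <- glm(p4_bin ~ gdp + orthodox + swb,
              family = binomial("logit"), data = happiness_demo)
summary(m.demo)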
I want to estimate a multilevel ordered logistic model and afterwards access the model matrix. When running a simplified example from ?clmm:
library("ordinal")
mod1 <- clmm(SURENESS ~ PROD + (1|RESP), data = soup)
model.matrix(mod1)
I get the error message Error in eval(predvars, data, env) : object 'SURENESS' not found. From other packages I am used to setting parameters like model = TRUE so that the data going in are also stored in the fitted model, but here all the relevant parameters seem to be set that way by default. Did I miss some parameter or element of mod1? (I went through attributes(mod1) but did not find a model matrix.)
Strangely, if I set up a random data.frame, it works:
set.seed(123)
df <- data.frame(y = factor(sample(c("A", "B", "C"), size = 1000, replace = TRUE), ordered = TRUE),
                 x = rnorm(1000),
                 id = factor(rep(1:10, each = 100)))
mod2 <- clmm(y ~ 1 + x + (1|id), data = df)
model.matrix(mod2)
So what is the difference between mod1 and mod2, and how do I get a model matrix from mod1?
I do not think model.matrix() works reliably for clmm objects. However, you can try to build a parallel model for the fixed-effects part using a function such as polr() and apply model.matrix() to that output object. The random-effects part can then be handled separately using the clmm output.
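A rough sketch of that idea (this assumes the MASS package is available; polr() fits a proportional-odds model to the fixed-effects part only):

library("MASS")     # polr()
library("ordinal")  # soup data

## parallel fixed-effects-only fit of the same formula
pfit <- polr(SURENESS ~ PROD, data = soup, Hess = TRUE)

## fixed-effects design matrix
## (equivalently: model.matrix(~ PROD, data = soup))
X <- model.matrix(terms(pfit), data = model.frame(pfit))
head(X)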
# Logistic regression
glm.fit <- glm(recent_cannabis_use ~ ., data = drug_use_train, family = binomial)
summary(glm.fit)
predict(glm.fit, with(drug_use_train, data.frame(Gender = "Male")), type = "response")
I am trying to find the predicted probability of recent_cannabis_use for a male.
You should use predict(glm.fit, newdata = data.frame(Gender = "Male")). Using with in this case is not warranted, since you are not accessing any of the variables in drug_use_train.
Note that this assumes your formula, upon expansion, is recent_cannabis_use ~ Gender. If you have other variables and you want to explore only the effect of Gender, you will need to set (pre-calculate or make up) all the other variables to some fixed value (remember how coefficients are interpreted: the change in y with a one-unit change in x, provided everything else stays the same). See for example this post.
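For instance, if the fitted model also contained other predictors (the names Age and Alcohol below are hypothetical), newdata has to supply a value for each of them, e.g. a representative value:

## hypothetical extra predictors held at representative values
newdat <- data.frame(Gender  = "Male",
                     Age     = median(drug_use_train$Age),
                     Alcohol = median(drug_use_train$Alcohol))

predict(glm.fit, newdata = newdat, type = "response")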
I'm trying to fit a mixed-effects model with a gamma distribution. The most basic model has one fixed predictor and one random effect. No matter which link I specify (I've tried log, identity and inverse), I get the following error. My real data has zeros in Y, but even when I use simulated data with only positive Y, as below, it throws the same error.
mockdf = data.frame(y = rnorm(100,77,6.5), x1 = sample(letters,100,replace = T), x2 = seq(1900,1999,1))
mod = lmer(y ~ (1|x1) + x2, family = gamma(link = 'identity'), na.action = na.exclude, data = mockdf)
Error in gamma(link = "identity") :
supplied argument name 'link' does not match 'x'
I searched through SO and couldn't find another person who ran into this error. Is my syntax incorrect?
Thanks for your help.
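For what it's worth, I think two things are going on here: lower-case gamma() is the base math function (its only argument is x, hence the error about 'link'), whereas the GLM family constructor is Gamma() with a capital G, and a non-Gaussian family is fitted with glmer() rather than lmer(). A sketch of what I believe was intended (the log link and the scaling of x2 are my own choices to keep the fit stable; note also that a Gamma family requires strictly positive y, so the zeros in your real data would still need handling):

library(lme4)

mockdf <- data.frame(y  = rnorm(100, 77, 6.5),
                     x1 = sample(letters, 100, replace = TRUE),
                     x2 = seq(1900, 1999, 1))

## Gamma (capital G) family, fitted with glmer()
mod <- glmer(y ~ scale(x2) + (1 | x1),
             family = Gamma(link = "log"),
             na.action = na.exclude, data = mockdf)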