Factors in nlme: singularity in backsolve error - r

I am parameterizing exponential fits for some metabolic scaling models. I have done this in lmer already, without problem, using logged dependent and independent variables. However, I would now like to incorporate other parameters that aren't necessarily exponentially related to the dependent variable. Hence, I've turned to nlme (lme4::nlmer doesn't seem to handle fixed effects), but I don't have much experience with it. Apologies in advance for newbie mistakes.
With the code below, I am getting the following error. I'm guessing that it has something to do with the 'site' factor being misspecified:
Error in nlme.formula(dep ~ scaling_fun(alpha, beta, ind, site), data = scale_df, :
Singularity in backsolve at level 0, block 1
When I fit a simpler function that does not involve 'site', the model seems to work correctly.
Any thoughts would be greatly appreciated!
Thanks,
Allie
# dput for data
# copy from http://pastebin.com/WNHhi2kZ (too large to include here)
> head(scale_df)
dep ind spp site
2 0.28069471 -0.0322841 157 A
3 -0.69719050 -1.2568901 183 A
4 0.29252012 0.1592420 246 A
5 0.72030740 -0.3282789 154 A
6 -0.08601891 0.3623756 110 A
7 0.30793594 0.2230840 154 A
scaling_fun <- function(alpha, beta, ind, site) {
  return(beta + ind^alpha + site * (ind^alpha))
}

# results in singularity in backsolve error
nlme(dep ~ scaling_fun(alpha, beta, ind, site),
     data = scale_df,
     fixed = list(alpha + beta + site ~ 1), random = alpha ~ 1 | spp,
     start = list(fixed = c(0.7, 0, 1)))
##############################
# simpler function converges #
##############################
scaling_fun <- function(alpha, beta, ind) {
  return(beta + ind^alpha)
}

nlme(dep ~ scaling_fun(alpha, beta, ind),
     data = scale_df,
     fixed = list(alpha + beta ~ 1), random = alpha ~ 1 | spp,
     start = list(fixed = c(0.7, 0)))

Your model does not really make sense, since site is a factor variable (not a numeric parameter), so the term site*(ind^alpha) cannot be estimated as written. I suspect you actually want to stratify alpha by site:
library(nlme)

scaling_fun <- function(alpha, beta, ind) {
  return(beta + ind^alpha)
}

nlme(dep ~ scaling_fun(alpha, beta, ind),
     data = scale_df,
     fixed = list(alpha ~ site, beta ~ 1), random = alpha ~ 1 | spp,
     start = list(fixed = c(0.487, rep(0, 19), -0.3)))
#Nonlinear mixed-effects model fit by maximum likelihood
# Model: dep ~ scaling_fun(alpha, beta, ind)
# Data: scale_df
# Log-likelihood: -716.4634
# Fixed: list(alpha ~ site, beta ~ 1)
#alpha.(Intercept) alpha.siteB alpha.siteC alpha.siteD alpha.siteE
# 0.57671912 -0.61258632 -0.59244337 -0.25793558 -0.24572998
# alpha.siteF alpha.siteG alpha.siteH alpha.siteI alpha.siteJ
# -0.23615274 -0.31015393 0.17970575 0.01286117 -0.12539377
# alpha.siteK alpha.siteL alpha.siteM alpha.siteN alpha.siteO
# 3.72445972 -0.08560994 0.13636185 0.31877456 -0.25952204
# alpha.siteQ alpha.siteR alpha.siteS alpha.siteT alpha.siteU
# 0.15663989 0.66511079 0.10785082 -0.21547379 -0.23656126
# beta
# -0.30280707
#
#Random effects:
# Formula: alpha ~ 1 | spp
# alpha.(Intercept) Residual
#StdDev: 0.6426563 0.4345844
#
#Number of Observations: 1031
#Number of Groups: 279
However, I also suspect that site should be a random effect.
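For completeness, a minimal sketch of that alternative (an assumption about the design on my part, not something stated in the question): treat site as a grouping level above spp, so that alpha gets a random deviation per site and per species within site.
# sketch: site as a random grouping level (assumes spp are nested within site)
nlme(dep ~ scaling_fun(alpha, beta, ind),
     data = scale_df,
     fixed = list(alpha + beta ~ 1),
     random = alpha ~ 1 | site/spp,
     start = list(fixed = c(0.487, -0.3)))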

Related

comparing segmented models in R

I have the hypothesis that the automatically found model below is not significantly better than one where the middle segment is horizontal. How could I test that?
df<-structure(list(ageThen=c(9,10,11,12,13,14,15,16,17,
18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,
34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,
50,51,52,53,54,55,56,57,58,59,60,61,62),mTh=c(-0.057,
-0.253,-0.345,-0.185,-0.155,-0.013,0.285,0.16,0.197,0.199,
0.215,0.288,0.401,0.363,0.387,0.37,0.387,0.28,0.571,
0.383,0.297,0.366,0.36,0.25,0.269,0.235,0.273,0.336,
0.354,0.286,0.331,0.21,0.32,0.278,0.195,0.257,0.259,
0.251,0.222,0.206,0.214,-0.072,-0.123,-0.043,-0.003,0.116,
-0.193,-0.218,-0.278,-0.265,-0.218,-0.541,-0.76,-0.401
),n=c(64L,524L,20595L,2504L,795L,704L,1700L,1239L,
1273L,1149L,1011L,1122L,1031L,814L,717L,667L,462L,414L,
405L,313L,256L,305L,187L,255L,240L,221L,262L,227L,230L,
239L,199L,290L,201L,246L,217L,215L,273L,229L,213L,193L,
199L,204L,159L,207L,148L,121L,115L,89L,87L,78L,68L,
85L,55L,80L)),class=c("tbl_df","tbl","data.frame"),row.names=c(NA,
-54L))
library(segmented)
m1 <- lm(mTh ~ ageThen, data = df, weights = n)  # initial fit
s2 <- segmented(m1, psi = c(20, 50))  # two breakpoints, estimated starting values
plot(mTh ~ ageThen, data = df)
lines(df$ageThen, predict(s2), col = 2, lwd = 2)
This workflow seems to solve my problem. The literature I saw suggests that there is a significant increase in the predictive value of a model if |elpd_diff/se_diff| is larger than 2 (in other places, 5). In my case it isn't, so the unconstrained model is not a much better predictor than the constrained (theoretical) one. A citable source to corroborate this would be much appreciated.
https://lindeloev.github.io/mcp/articles/comparison.html#what-is-loo-cv.
https://discourse.mc-stan.org/t/interpreting-elpd-diff-loo-package/1628
library(mcp)
# df as defined in the question above (same dput)
modelC = list(
  mTh | weights(n) ~ 1 + ageThen,  # slope
  ~ 0,                             # joined plateau
  ~ 0 + ageThen                    # joined slope
)
fitC <- mcp(modelC, data = df, prior = list(cp_1 = "dunif(12, 25)", cp_2 = "dunif(35, 60)"))
plot(fitC)
summary(fitC)
modelUc = list(
  mTh | weights(n) ~ 1 + ageThen,  # slope
  ~ 0 + ageThen,                   # joined slope
  ~ 0 + ageThen                    # joined slope
)
fitUc <- mcp(modelUc, data = df, prior = list(cp_1 = "dunif(12, 25)", cp_2 = "dunif(35, 60)"))
plot(fitUc)
summary(fitUc)

fitUc$loo = loo(fitUc)
fitC$loo = loo(fitC)

library(tidyverse)
as.data.frame(loo::loo_compare(fitUc$loo, fitC$loo)) %>%
  mutate(diffRatio = elpd_diff / se_diff, .keep = 'used')

Implementing multinomial-Poisson transformation with multilevel models

I know variations of this question have been asked before but I haven't yet seen an answer on how to implement the multinomial Poisson transformation with multilevel models.
I decided to make a fake dataset and follow the method outlined here, also consulting the notes the poster mentions as well as the Baker paper on MP transformation.
In order to check if I'm doing the coding correctly, I decided to create a binary outcome variable as a first step; because glmer can handle binary response variables, this will let me check I'm correctly recasting the logit regression as multiple Poissons.
The context of this problem is running multilevel regressions with survey data, where the outcome variable is the response to a question and the possible predictors are demographic variables. As I mentioned above, I wanted to see if I could properly code the binary outcome variable as a Poisson regression before moving on to outcome variables with more than two categories.
library(dplyr)
library(lme4)

key <- expand.grid(sex = c('Male', 'Female'),
                   age = c('18-34', '35-64', '45-64'))
set.seed(256)
probs <- runif(nrow(key))

# Make a fake dataset with 1000 responses
n <- 1000
df <- data.frame(sex = sample(c('Male', 'Female'), n, replace = TRUE),
                 age = sample(c('18-34', '35-64', '45-64'), n, replace = TRUE),
                 obs = seq_len(n), stringsAsFactors = FALSE)

age <- model.matrix(~ age, data = df)[, -1]
sex <- model.matrix(~ sex, data = df)[, -1]
beta_age <- matrix(c(0, 1), nrow = 2, ncol = 1)
beta_sex <- matrix(1, nrow = 1, ncol = 1)

# Create class probabilities as a function of age and sex
probs <- plogis(
  -0.5 +
    age %*% beta_age +
    sex %*% beta_sex +
    rnorm(n)
)
id <- ifelse(probs > 0.5, 1, 0)
df$y1 <- id
df$y2 <- 1 - df$y1

# First run the regular hierarchical logit, just with a varying intercept for age
glm_out <- glmer(y1 ~ (1|age), family = 'binomial', data = df)
summary(glm_out)

# Next, two Poisson regressions
glm_1 <- glmer(y1 ~ (1|obs) + (1|age), data = df, family = 'poisson')
glm_2 <- glmer(y2 ~ (1|obs) + (1|age), data = df, family = 'poisson')

coef(glm_1)$age - coef(glm_2)$age
coef(glm_out)$age
The outputs for the last two lines are:
> coef(glm_1)$age - coef(glm_2)$age
(Intercept)
18-34 0.14718933
35-64 0.03718271
45-64 1.67755129
> coef(glm_out)$age
(Intercept)
18-34 0.13517758
35-64 0.02190587
45-64 1.70852847
These estimates seem close, but they are not exactly the same. I'm thinking I've specified an equation wrong with the intercept.
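One way to sanity-check the coding (my suggestion, not part of the original post): with purely fixed effects the multinomial-Poisson transformation is exact, so a single Poisson fit on the stacked counts, with one fixed intercept per observation, reproduces the logit coefficients to numerical precision. With shrunken random effects, as in the glmer fits above, exact agreement is not expected.
# sketch, assuming the df and n created above; slow, since obs adds ~1000 dummies
long <- data.frame(count = c(df$y1, df$y2),
                   is2 = rep(c(0, 1), each = n),  # 0 = y1 row, 1 = y2 row
                   obs = factor(rep(df$obs, 2)),
                   age = rep(df$age, 2))
glm_mp <- glm(count ~ obs + is2:age, family = poisson, data = long)
glm_fix <- glm(y1 ~ age, family = binomial, data = df)
b <- coef(glm_mp)
# the negated contrasts of the is2:age terms recover the logit slopes:
-(b["is2:age35-64"] - b["is2:age18-34"]); unname(coef(glm_fix)["age35-64"])
-(b["is2:age45-64"] - b["is2:age18-34"]); unname(coef(glm_fix)["age45-64"])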

MCMCglmm binomial model prior

I want to estimate a binomial model with the R package MCMCglmm. The model shall incorporate an intercept and a slope, both as fixed and random parts. How do I specify a valid prior? (Note, here is a similar question, but in a much more complicated setting.)
Assume the data have the following form:
y x cluster
1 0 -0.56047565 1
2 1 -0.23017749 1
3 0 1.55870831 1
4 1 0.07050839 1
5 0 0.12928774 1
6 1 1.71506499 1
In fact, the data have been generated by
set.seed(123)
nj <- 15  # number of individuals per cluster
J <- 30   # number of clusters
n <- nj * J
x <- rnorm(n)
y <- rbinom(n, 1, prob = 0.6)
cluster <- factor(rep(1:J, each = nj))
dat <- data.frame(y = y, x = x, cluster = cluster)
The information in the question about the model suggests specifying fixed = y ~ 1 + x and random = ~ us(1 + x):cluster. With us() you allow the random effects to be correlated (cf. section 3.4 and table 2 in Hadfield's 2010 jstatsoft article).
First of all, as you only have one dependent variable (y), the G part of the prior (cf. equation 4 and section 3.6 in Hadfield's 2010 jstatsoft article) for the random-effects variances only needs one list element, called G1. This list element isn't the actual prior distribution - that was specified by Hadfield to be an inverse-Wishart distribution. Rather, with G1 you specify the parameters of this inverse-Wishart distribution: the scale matrix (Ψ in Wikipedia notation, V in MCMCglmm notation) and the degrees of freedom (ν in Wikipedia notation, nu in MCMCglmm notation). As you have two random effects (the intercept and the slope), V has to be a 2 x 2 matrix. A frequent choice is the two-dimensional identity matrix diag(2). Hadfield often uses nu = 0.002 for the degrees of freedom (cf. his course notes).
Now you also have to specify the R part of the prior, for the residual variance. Here again an inverse-Wishart distribution was specified by Hadfield, leaving the user to specify its parameters. As we only have one residual variance, V has to be a scalar (let's say V = 0.5). An optional element of R is fix. With this element you specify whether the residual variance shall be fixed to a certain value (then you write fix = TRUE or fix = 1) or not (then fix = FALSE or fix = 0). Note that you do not fix the residual variance at 0.5 by writing fix = 0.5! So when you find fix = 1 in Hadfield's course notes, read it as fix = TRUE and look at which value of V the variance was fixed.
All together, we set up the prior as follows:
prior0 <- list(G = list(G1 = list(V = diag(2), nu = 0.002)),
               R = list(V = 0.5, nu = 0.002, fix = FALSE))
With this prior we can run MCMCglmm:
library("MCMCglmm") # for MCMCglmm()
set.seed(123)
mod0 <- MCMCglmm(fixed = y ~ 1 + x,
                 random = ~ us(1 + x):cluster,
                 data = dat,
                 family = "categorical",
                 prior = prior0)
The draws from the Gibbs-sampler for the fixed effects are found in mod0$Sol, the draws for the variance parameters in mod0$VCV.
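They can be inspected with the usual coda methods, for instance (a quick sketch; the VCV column names follow from the us(1 + x):cluster specification):
# sketch: inspecting the posterior draws
summary(mod0$Sol)          # fixed effects: (Intercept) and x
colnames(mod0$VCV)         # four cluster (co)variance components plus "units"
plot(mod0$VCV[, "units"])  # trace and density of the residual variance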
Normally a binomial model requires the residual variance to be fixed, so we set the residual variance to be fixed at 0.5:
set.seed(123)
prior1 <- list(G = list(G1 = list(V = diag(2), nu = 0.002)),
               R = list(V = 0.5, nu = 0.002, fix = TRUE))
mod1 <- MCMCglmm(fixed = y ~ 1 + x,
                 random = ~ us(1 + x):cluster,
                 data = dat,
                 family = "categorical",
                 prior = prior1)
The difference can be seen by comparing mod0$VCV[, 5] to mod1$VCV[, 5]. In the latter case, all entries are 0.5, as specified.
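For instance (a quick check; column 5 is the "units" residual variance):
range(mod0$VCV[, 5])  # varies across the Gibbs draws
range(mod1$VCV[, 5])  # constant at 0.5, as fixed by the prior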

R: can't get an lme{nlme} to fit when using self-constructed interaction variables

I'm trying to get an lme with self-constructed interaction variables to fit. I need those for post-hoc analyses.
library(nlme)
# construct fake dataset
obsr <- 100
dist <- rep(rnorm(36), times = obsr)
meth <- dist + rnorm(length(dist), mean = 0, sd = 0.5); rm(dist)
meth <- meth / diff(range(meth)); meth <- meth - min(meth)
main <- data.frame(meth = meth,
                   cpgl = as.factor(rep(1:36, times = obsr)),
                   pbid = as.factor(rep(1:obsr, each = 36)),
                   agem = rep(rnorm(obsr, mean = 30, sd = 10), each = 36),
                   trma = as.factor(rep(sample(c(TRUE, FALSE), size = obsr, replace = TRUE), each = 36)),
                   depr = as.factor(rep(sample(c(TRUE, FALSE), size = obsr, replace = TRUE), each = 36)))
# check if all factor combinations are present
# TRUE for my real dataset; Naturally TRUE for the fake dataset
with(main, all(table(depr, trma, cpgl) >= 1))
# construct interaction variables
main$depr_trma <- interaction(main$depr, main$trma, sep=":", drop=TRUE)
main$depr_cpgl <- interaction(main$depr, main$cpgl, sep=":", drop=TRUE)
main$trma_cpgl <- interaction(main$trma, main$cpgl, sep=":", drop=TRUE)
main$depr_trma_cpgl <- interaction(main$depr, main$trma, main$cpgl, sep=":", drop=TRUE)
# model WITHOUT preconstructed interaction variables
form1 <- list(fixd = meth ~ agem + depr + trma + depr*trma + cpgl +
                     depr*cpgl + trma*cpgl + depr*trma*cpgl,
              rndm = ~ 1 | pbid,
              corr = ~ cpgl | pbid)
modl1 <- nlme::lme(fixed = form1[["fixd"]],
                   random = form1[["rndm"]],
                   correlation = corCompSymm(form = form1[["corr"]]),
                   data = main)

# model WITH preconstructed interaction variables
form2 <- list(fixd = meth ~ agem + depr + trma + depr_trma + cpgl +
                     depr_cpgl + trma_cpgl + depr_trma_cpgl,
              rndm = ~ 1 | pbid,
              corr = ~ cpgl | pbid)
modl2 <- nlme::lme(fixed = form2[["fixd"]],
                   random = form2[["rndm"]],
                   correlation = corCompSymm(form = form2[["corr"]]),
                   data = main)
The first model fits without any problems, whereas the second model gives me the following error:
Error in MEEM(object, conLin, control$niterEM) :
Singularity in backsolve at level 0, block 1
Nothing I have found about this error so far has helped me solve the problem. However, the solution is probably pretty easy.
Can someone help me? Thanks in advance!
EDIT 1:
When I run:
modl3 <- lm(form1[["fixd"]], data=main)
modl4 <- lm(form2[["fixd"]], data=main)
The summaries reveal that modl4 (with the self-constructed interaction variables), in contrast to modl3, shows many more predictors. All those that are in modl4 but not in modl3 show NA as coefficients. The problem therefore definitely lies in the way I create the interaction variables...
EDIT 2:
In the meantime I created the interaction variables "by hand" (mainly with paste() and grepl()) - it seems to work now. However, I would still be interested in how I could have realized it using the interaction() function.
I should have constructed only the largest of the interaction variables (combining all 3 simple variables).
If I do so, the model fits. The likelihoods are then very close to each other and the number of coefficients matches exactly.
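A minimal sketch of what that means (my reading of the edit above, not code from the post): enter only the full three-way factor in the fixed effects, so its dummies are not duplicated by the lower-order constructed terms and the design stays full rank.
# sketch: only the three-way interaction variable enters the fixed effects
form3 <- list(fixd = meth ~ agem + depr_trma_cpgl,
              rndm = ~ 1 | pbid,
              corr = ~ cpgl | pbid)
modl5 <- nlme::lme(fixed = form3[["fixd"]],
                   random = form3[["rndm"]],
                   correlation = corCompSymm(form = form3[["corr"]]),
                   data = main)
logLik(modl1); logLik(modl5)  # should be essentially identical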

correlation in multivariate mixed model in r

I am running a multivariate mixed model in R using the nlme package. Suppose that x and y are response variables for longitudinal data, where the errors within a group are assumed to be correlated. The residual error matrix (an image in the original post, not reproduced here) gives each response its own variance and allows correlation between them.
So my question is: how do I build this correlation into the lme function?
I tried corr = corComSymm(from = ~ 1 | x) and corr = corAR1(from = ~ 1 | x), but neither worked!
Here is an example:
# visiting time in months
time <- rep(c(0, 3, 6, 9), times = 50)
# subjects
subject <- rep(1:50, each = 4)
# indicator for the first response variable
x <- c(rep(0, 100), rep(1, 100))
# indicator for the second response variable
y <- c(rep(1, 100), rep(0, 100))
# values of both response variables (x_1, x_2)
value <- c(rnorm(100, 20, 1), rnorm(100, 48, 1))
# factor identifying the response variable
variable <- factor(c(rep(0, 100), rep(1, 100)), labels = c("X", "Y"))
df <- data.frame(subject, time, x, y, value, variable)
library(nlme)
# fit a model where each response has its own intercept and slope (time)
# in both the fixed and random effects, and each response has its own variance
f <- lme(value ~ -1 + x + y + x:time + y:time,
         random = ~ -1 + (x + y) + time:(x + y) | subject,
         weights = varIdent(form = ~ 1 | x),
         corr = corAR1(from = ~ 1 | x), control = lmeControl(opt = "optim"), data = df)
Error in corAR1(from = ~1 | x) : unused argument (from = ~1 | x)
Any suggestions?
I found the website below helpful and useful; I am posting it here in case someone else runs into this problem in the future.
https://rpubs.com/bbolker/3336
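Two notes on this. First, the immediate error is a typo: the argument of corAR1() / corCompSymm() is form, not from (and the constructor is corCompSymm, not corComSymm). Second, a sketch of the kind of call that page builds (my adaptation, assuming long-format data in which every subject has both responses at every visit - the fake df above would need to be reshaped that way):
# sketch: general correlation between the two responses measured on the
# same subject at the same visit, with a separate variance per response
f2 <- lme(value ~ -1 + variable + variable:time,
          random = ~ -1 + variable | subject,
          weights = varIdent(form = ~ 1 | variable),
          correlation = corSymm(form = ~ 1 | subject/time),
          control = lmeControl(opt = "optim"), data = df)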
