Using nls or nlsLM to fit global and group-specific parameters - r

I would like to use nls to fit a global parameter and group-specific parameters. The closest I have found to a minimal reproducible example is below (found here: https://stat.ethz.ch/pipermail/r-help/2015-September/432020.html):
#Generate some data
d <- transform(data.frame(x = seq(0, 1, len = 17),
                          group = rep(c("A","B","B","C"), len = 17)),
               y = round(1/(1.4 + x^ifelse(group == "A", 2.3,
                                           ifelse(group == "B", 3.1, 3.5))), 2))
#Fit to model using nls
nls(y~1/(b+x^p[group]), data=d, start=list(b=1, p=rep(3,length(levels(d$group)))))
This gives me an error:
Error in numericDeriv(form[[3L]], names(ind), env, central = nDcentral) :
Missing value or an infinity produced when evaluating the model
I have not been able to figure out whether the error comes from bad guesses for the starting values or from the way this code handles the group-specific parameters. The expression p=rep(3,length(levels(d$group))) seems intended to generate c(3,3,3), but replacing it with that vector directly does not remove the problem (the same error as above is obtained):
#Fit to model using nls
nls(y~1/(b+x^p[group]), data=d, start=list(b=1, p=c(3, 3, 3)))
Switching to nlsLM gives a different error, which leads me to believe I am having an issue with the group-specific parameters:
#Generate some data
library(minpack.lm)
d <- transform(data.frame(x = seq(0, 1, len = 17),
                          group = rep(c("A","B","B","C"), len = 17)),
               y = round(1/(1.4 + x^ifelse(group == "A", 2.3,
                                           ifelse(group == "B", 3.1, 3.5))), 2))
#Fit to model using nlsLM
nlsLM(y~1/(b+x^p[group]), data=d, start=list(b=1, p=c(3,3,3)))
Error:
Error in dimnames(x) <- dn :
length of 'dimnames' [2] not equal to array extent
Any ideas?

I think you can do this much more easily with nlme::gnls:
fit2 <- nlme::gnls(y ~ 1/(b + x^p),
                   params = list(p ~ group - 1, b ~ 1),
                   data = d,
                   start = list(b = 1, p = rep(3, 3)))
Results:
Generalized nonlinear least squares fit
Model: y ~ 1/(b + x^p)
Data: d
Log-likelihood: 62.05887
Coefficients:
p.groupA p.groupB p.groupC        b
2.262383 2.895903 3.475324 1.407561
Degrees of freedom: 17 total; 13 residual
Residual standard error: 0.007188101
The params argument allows you to specify fixed-effect submodels for each nonlinear parameter. Using p ~ group-1 parameterizes the model with a separate estimate for each group, rather than fitting a baseline (intercept) value for the first group and the differences between successive groups. (In R's formula language, -1 or +0 means "fit the model without an intercept/set the intercept to 0", which in this case corresponds to fitting all three groups separately.)
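To see what the -1 does to the design matrix, here is a quick illustrative sketch (not part of the original answer) comparing the default coding with the intercept-free coding:
head(model.matrix(~ group, data = d), 4)      # intercept plus differences from baseline level A
head(model.matrix(~ group - 1, data = d), 4)  # one indicator column per group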
I'm quite surprised that gnls and nls don't give identical results (although both give reasonable results); I'd like to dig into this further ...
Parameter estimates (code below):
  term  nls gnls
1    b 1.41 1.40
2   pA 2.28 2.28
3   pB 3.19 3.14
4   pC 3.60 3.51
par(las = 1, bty = "l")
plot(y ~ x, data = d, col = d$group, pch = 16)
xvec <- seq(0, 1, length = 21)
f <- function(x) factor(x, levels = c("A","B","C"))
## fit1 is the nls() fit
ll <- function(g, c = 1) {
  lines(xvec, predict(fit1, newdata = data.frame(group = f(g), x = xvec)), col = c)
}
Map(ll, LETTERS[1:3], 1:3)
d2 <- expand.grid(x = xvec, group = f(c("A","B","C")))
pp <- predict(fit2, newdata = d2)
ll2 <- function(g, c = 1) {
  lines(xvec, pp[d2$group == g], lty = 2, col = c)
}
Map(ll2, LETTERS[1:3], 1:3)
legend("bottomleft", lty = 1:2, col = 1, legend = c("nls", "gnls"))
library(tidyverse)
library(broom)
library(broom.mixed)
(purrr::map_dfr(list(nls = fit1, gnls = fit2), tidy, .id = "pkg")
  %>% select(pkg, term, estimate)
  %>% group_by(pkg)
  ## force common parameter names
  %>% mutate(across(term, ~ c("b", paste0("p", LETTERS[1:3]))))
  %>% pivot_wider(names_from = pkg, values_from = estimate)
)

I was able to get this to work by switching the class of group from character to factor. Note the addition of factor() when generating the dataset.
> d <- transform(data.frame(
+     x = seq(0, 1, len = 17),
+     group = rep(factor(c("A","B","B","C")), len = 17)),
+     y = round(1/(1.4 + x^ifelse(group == "A", 2.3, ifelse(group == "B", 3.1, 3.5))), 2)
+ )
> str(d)
'data.frame': 17 obs. of 3 variables:
$ x : num 0 0.0625 0.125 0.1875 0.25 ...
$ group: Factor w/ 3 levels "A","B","C": 1 2 2 3 1 2 2 3 1 2 ...
$ y : num 0.71 0.71 0.71 0.71 0.69 0.7 0.69 0.69 0.62 0.64 ...
> nls(y~1/(b+x^p[group]), data=d, start=list(b=1, p=c(3,3,3)))
Nonlinear regression model
model: y ~ 1/(b + x^p[group])
data: d
    b    p1    p2    p3
1.406 2.276 3.186 3.601
residual sum-of-squares: 9.537e-05
Number of iterations to convergence: 5
Achieved convergence tolerance: 4.536e-06
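The factor matters because p[group] subsets the parameter vector p by the factor's underlying integer codes (1, 2, 3). With a character column, the subscript is a lookup by name, and p has no names, so the model function produces NAs. A minimal sketch of the difference:
p <- c(3, 3, 3)
p[factor("B", levels = c("A", "B", "C"))]  # 3: subscripted by the integer code, 2
p["B"]                                     # NA: character subscripts look up names, and p has none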

Related

Any way to reverse the direction of comparisons when using emmeans contrast with "interaction" argument?

I'm trying to use emmeans to test "contrasts of contrasts" with custom orthogonal contrasts applied to a zero-inflated negative binomial model. The study design has 4 groups (study_group: grp1, grp2, grp3, grp4), each of which is assessed at 3 timepoints (time: Time1, Time2, Time3).
With the code below, I am able to get very close to, but not exactly, what I want. The contrasts that emerge are expressed in terms of ratios such as grp1/grp2, grp1/grp3,..., grp3/grp4 ("lower over higher"; see output following code).
What would be immensely helpful to me is a way to flip these ratios to be grp2/grp1, grp3/grp1,..., grp4/grp3 ("higher over lower"). I've tried sticking reverse=TRUE in various spots, but to no effect.
Short of re-leveling the study_group factor, is there any way to do this in emmeans?
Thanks!
library(glmmTMB)
library(emmeans)
set.seed(3456)
# Building grid for study design: 4 groups of 3 sites,
# each with 20 participants observed 3 times
site <- rep(1:12, each=60)
pid <- 1000*site+10*(rep(rep(1:20,each=3),12))
study_group <- c(rep("grp1",180), rep("grp2",180), rep("grp3",180), rep("grp4",180))
grp_num <- c(rep(0,180), rep(1,180), rep(2,180), rep(3,180))
time <- c(rep(c("Time1", "Time2", "Time3"),240))
time_num <- c(rep(c(0:2),240))
# Site-level random effects (intercepts)
site_eff_count = rep(rnorm(12, mean = 0, sd = 0.5), each = 60)
site_eff_zeros = rep(rnorm(12, mean = 0, sd = 0.5), each = 60)
# Simulating a neg binomial outcome
y_count <- rnbinom(n = 720, mu=exp(3.25 + grp_num*0.15 + time_num*-0.20 + grp_num*time_num*0.15 + site_eff_count), size=0.8)
# Simulating some extra zeros
log_odds = (-1.75 + grp_num*0.2 + time_num*-0.40 + grp_num*time_num*0.50 + site_eff_zeros)
prob_1 = plogis(log_odds)
prob_0 = 1 - prob_1
y_zeros <- rbinom(n = 720, size = 1, prob = prob_0)
# Building dataset with ZINB-ish outcome
data_ZINB <- data.frame(site, pid, study_group, time, y_count, y_zeros)
data_ZINB$y_obs <- ifelse(y_zeros==1, y_count, 0)
# Estimating ZINB GLMM in glmmTMB
mod_ZINB <- glmmTMB(y_obs ~ 1 + study_group + time + study_group*time
                    + (1|site),
                    family = nbinom2,
                    zi = ~ .,
                    data = data_ZINB)
#summary(mod_ZINB)
# Getting model-estimated "cell" means for conditional (non-zero) sub-model
# in response (not linear predictor) scale
count_means <- emmeans(mod_ZINB,
                       pairwise ~ time | study_group,
                       component = "cond",
                       type = "response",
                       adjust = "none")
# count_means
# Defining custom contrast function for orthogonal time contrasts
# contr1 = Time 2 - Time 1
# contr2 = Time 3 - Times 1 and 2
compare_arms.emmc <- function(levels) {
  k <- length(levels)
  contr1 <- c(-1, 1, 0)
  contr2 <- c(-1, -1, 2)
  coef <- as.data.frame(lapply(seq_len(k - 1), function(i) {
    if (i == 1) contr1 else contr2
  }))
  names(coef) <- c("T1vT2", "T1T2vT3")
  attr(coef, "adjust") <- "none"
  coef
}
# Estimating pairwise between-group "contrasts of contrasts"
# i.e., testing if time contrasts differ across groups
compare_arms_contrast <- contrast(count_means[[1]],
                                  interaction = c("compare_arms", "pairwise"),
                                  by = NULL)
compare_arms_contrast
Applying the emmeans::contrast function as above yields this:
time_compare_arms study_group_pairwise ratio SE df null t.ratio p.value
T1vT2 grp1 / grp2 1.091 0.368 693 1 0.259 0.7957
T1T2vT3 grp1 / grp2 0.623 0.371 693 1 -0.794 0.4276
T1vT2 grp1 / grp3 1.190 0.399 693 1 0.520 0.6034
T1T2vT3 grp1 / grp3 0.384 0.241 693 1 -1.523 0.1283
T1vT2 grp1 / grp4 0.664 0.245 693 1 -1.108 0.2681
...
T1T2vT3 grp3 / grp4 0.676 0.556 693 1 -0.475 0.6346
Tests are performed on the log scale
The answer, provided by Russ Lenth in the comments and in the emmeans documentation for the contrast function, is to replace pairwise with revpairwise in the contrast() call.
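With that change, the call from the question becomes:
compare_arms_contrast <- contrast(count_means[[1]],
                                  interaction = c("compare_arms", "revpairwise"),
                                  by = NULL)
compare_arms_contrast
This should print the same table with the group ratios flipped (grp2 / grp1, grp3 / grp1, ..., grp4 / grp3).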

Question on "Error in hasTsp(x) : argument "x" is missing, with no default" when using nls

I'm learning to do nonlinear least squares fitting with R, and I followed this tutorial:
p = function(x) x^3+2*x^2+5
x = seq(-0.99, 1, by = .01)
y = p(x) + runif(200)
df = data.frame(x = x, y = y)
head(df)
      x        y
1 -0.99 6.183018
2 -0.98 6.611669
3 -0.97 6.762615
4 -0.96 6.594278
5 -0.95 5.990637
6 -0.94 6.048369
# Then the author conducted a nonlinear regression fit.
fit = nls(y~a*x^2+b*x, data = df, start(a=0, b=0))
But when I try to run the code, it always says
"Error in hasTsp(x) : argument "x" is missing, with no default"
Does anyone know where the problem is?
Thank you so much!
We need to supply start as a list argument. In the original call, start(a=0, b=0) is instead evaluated as a call to the time-series generic stats::start(), whose required argument x is missing; that is exactly what the hasTsp(x) error message reports.
nls(y~a*x^2+b*x, data = df, start = list(a=0, b=0))
Output:
Nonlinear regression model
model: y ~ a * x^2 + b * x
data: df
a b
11.1241 0.5711
residual sum-of-squares: 2713
Number of iterations to convergence: 1
Achieved convergence tolerance: 3.246e-10
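For the curious, the error can be reproduced on its own by evaluating the misplaced call directly (assuming a standard R session with stats attached):
start(a = 0, b = 0)
# Error in hasTsp(x) : argument "x" is missing, with no default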

R lme4 model: calculating effect size between continuous predictor's max-min value

I'm struggling to calculate an effect size between a continuous predictor's max-min value while using an R lme4 multilevel model.
Simulated data: predictor "x" ranges from 1 to 3
library(tidyverse)
n <- 100
a <- tibble(y = rep(c("pos", "neg", "neg", "neg"), length.out = n),
            x = rep(3, length.out = n),
            group = rep(letters[1:7], length.out = n))
b <- tibble(y = rep(c("pos", "pos", "neg", "neg"), length.out = n),
            x = rep(2, length.out = n),
            group = rep(letters[1:7], length.out = n))
c <- tibble(y = rep(c("pos", "pos", "pos", "neg"), length.out = n),
            x = rep(1, length.out = n),
            group = rep(letters[1:7], length.out = n))
df <- rbind(a, b, c)
df <- df %>% mutate(y = as.factor(y))
df
Model
library("lme4")
m = glmer(
y ~ x + (x | group),
data = df,
family = binomial(link = "logit"))
Output (ggpredict() is from the ggeffects package):
library(ggeffects)
ggpredict(m, "x")
# Predicted probabilities of y
x | Predicted | 95% CI
----------------------------
1 | 0.75 | [0.67, 0.82]
2 | 0.50 | [0.44, 0.56]
3 | 0.25 | [0.18, 0.33]
Adjusted for:
* group = 0 (population-level)
I'm failing to calculate the effect size between the predictor's "x" max (3) and min (1) value
My best try
library("emmeans")
emmeans(m, "x", trans = "logit", type = "response", at = list(x = c(1, 3)))
x response SE df asymp.LCL asymp.UCL
1 0.75 0.0387 Inf 0.667 0.818
3 0.25 0.0387 Inf 0.182 0.333
Confidence level used: 0.95
Intervals are back-transformed from the logit scale
How can I calculate the effect size with CIs between the predictor's "x" max (3) and min (1) value? The effect size should be in probability scale.
I'll try to answer, though I'm still not sure what the question is. I am going to assume that what is wanted is the difference between the two probabilities.
There are a lot of moving parts in the emmeans call shown, so I will proceed in smaller steps. First, let's get estimates of the probabilities in question:
> library(emmeans)
> EMM = emmeans(m, "x", at = list(x = c(1, 3)), type = "response")
> EMM
x prob SE df asymp.LCL asymp.UCL
1 0.75 0.0387 Inf 0.667 0.818
3 0.25 0.0387 Inf 0.182 0.333
Confidence level used: 0.95
Intervals are back-transformed from the logit scale
The quickest way to obtain a pairwise comparison is via
> pairs(EMM)
contrast odds.ratio SE df null z.ratio p.value
1 / 3 9 2.94 Inf 1 6.728 <.0001
Tests are performed on the log odds ratio scale
As stated in the annotations (and also in the documentation, e.g. the vignette on comparisons), when a log or logit transformation is in place, the comparison is shown as a ratio. This happens because the tests are performed on the link (logit) scale, and the difference between logs is the log of a ratio.
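As a quick numerical sketch using the estimates above, back-transforming the difference of the two logits reproduces the odds ratio of 9:
exp(qlogis(0.75) - qlogis(0.25))  # log(3) - log(1/3) = log(9), so this returns 9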
If we want the difference between probabilities, it is necessary to create a new object where the primary quantities being estimated are the probabilities, rather than their logits. In emmeans, this may be done via the regrid() function:
> EMMP = regrid(EMM, transform = "response")
> EMMP
x prob SE df asymp.LCL asymp.UCL
1 0.75 0.0387 Inf 0.674 0.826
3 0.25 0.0387 Inf 0.174 0.326
Confidence level used: 0.95
This output looks a lot like the summary of EMM; however, all memory of the logit transformation has been erased, thus the confidence intervals are different because they are calculated directly from the SEs of the prob estimates. For more information, see the vignette on transformations.
So now if we compare these, we get the difference of the probabilities:
> confint(pairs(EMMP))
contrast estimate SE df asymp.LCL asymp.UCL
1 - 3 0.5 0.0612 Inf 0.38 0.62
Confidence level used: 0.95
(Note: I wrapped this in confint() so that we would obtain a confidence interval, rather than the default summary of the t ratio and P value.)
This could be accomplished in one line of code as follows:
confint(pairs(emmeans(m, "x", transform = "response", at = list(x = c(1, 3)))))
The transform argument requests that the reference grid be immediately passed to regrid(). Note that the correct argument here is transform = "response", rather than transform = "logit" (that is, specify what you want to end with, not what you started with). The latter undoes, then redoes, the logit transformation, putting you back where you started.
The emmeans package provides a lot of options, and I really do recommend reading the vignettes.

Logistic regression confusion matrix problem

I tried computing confusion-matrix for my glm model but I keep getting:
Error: data and reference should be factors with the same levels.
Below is my model:
model3 <- glm(winner ~ srs.1 + srs.2, data = train_set, family = binomial)
confusionMatrix(table(predict(model3, newdata = test_set, type = "response")) >= 0.5,
                train_set$winner == 1)
The winner variable contains team1 and team2; srs.1 and srs.2 are numerical values.
What is my problem here?
I suppose your winner label is a binary 0/1 variable. So let's use the example below:
library(caret)
set.seed(111)
data <- data.frame(srs.1 = rnorm(200),
                   srs.2 = rnorm(200))
data$winner <- ifelse(data$srs.1 * data$srs.2 > 0, 1, 0)
idx <- sample(nrow(data), 150)
train_set <- data[idx, ]
test_set <- data[-idx, ]
model3 <- glm(winner ~ srs.1 + srs.2, data = train_set, family = binomial)
As you did, we predict and code the result as 1 if the predicted probability is greater than 0.5, and 0 otherwise. You got the table() call about right; note that the predictions and the reference must come from the same dataset, whether that is test_set or train_set:
pred = as.numeric(predict(model3, newdata=test_set, type="response")>0.5)
ref = test_set$winner
confusionMatrix(table(pred,ref))
Confusion Matrix and Statistics

    ref
pred  0  1
   0 12  5
   1 19 14
Accuracy : 0.52
95% CI : (0.3742, 0.6634)
No Information Rate : 0.62
P-Value [Acc > NIR] : 0.943973
Kappa : 0.1085
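For comparison, the same computation on the training data would look like this (a sketch; predict() without newdata returns fitted probabilities for train_set):
pred_train <- as.numeric(predict(model3, type = "response") > 0.5)
confusionMatrix(table(pred_train, train_set$winner))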

R and factor coding in formula

How do I use the formula interface if I want custom-valued dummies, e.g. values 1 and 2 rather than 0 and 1? The estimation might look like the following, where supp is a factor variable.
fit <- lm(len ~ dose + supp, data = ToothGrowth)
In this example there is not much use for the different values, but in many re-parameterized ("re-written") models it can be useful.
EDIT: Actually, I have e.g. 3 levels, and want the two columns to be coded differently, so one is a 1/0 variable, and the other is a 1/2 variable. The above example only has two levels.
You can set the contrasts to whatever you want by creating the matrix you want to use and then either passing it to the contrasts argument of lm or setting it as the default contrast of the factor itself.
Some sample data:
set.seed(6)
d <- data.frame(g=gl(3,5,labels=letters[1:3]), x=round(rnorm(15,50,20)))
The contrasts you have in mind:
mycontrasts <- matrix(c(0,0,1,0,1,1), byrow=TRUE, nrow=3)
colnames(mycontrasts) <- c("12","23")
mycontrasts
#     12 23
#[1,]  0  0
#[2,]  1  0
#[3,]  1  1
Then you use this in the lm call:
> lm(x ~ g, data=d, contrasts=list(g=mycontrasts))
Call:
lm(formula = x ~ g, data = d, contrasts = list(g = mycontrasts))
Coefficients:
(Intercept) g12 g23
58.8 -13.6 5.8
We can check that it does the right thing by comparing the means:
> diff(tapply(d$x, d$g, mean))
b c
-13.6 5.8
The default contrast is to use the first level as baseline:
> lm(x ~ g, data=d)
Call:
lm(formula = x ~ g, data = d)
Coefficients:
(Intercept) gb gc
58.8 -13.6 -7.8
But that can be changed with the contrasts command:
> contrasts(d$g) <- mycontrasts
> lm(x ~ g, data=d)
Call:
lm(formula = x ~ g, data = d)
Coefficients:
(Intercept) g12 g23
58.8 -13.6 5.8
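To verify which coding was actually used in the fit, you can inspect the design matrix directly; a quick check (not part of the original answer), using model.matrix()'s contrasts.arg argument:
head(model.matrix(~ g, data = d, contrasts.arg = list(g = mycontrasts)))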
