Plot output of a non-linear model in ggplot2 - r

I have some data where the best fitting non-linear regression is the S curve model. I want to plot the S curve in ggplot2 but do not know how to specify this model. I assume I should use the following code but do not know how to specify the method or formula. Can anyone help?
geom_smooth(method = XXX,
            method.args = list(formula = XXX))

You can wrap a prediction in geom_function(). Example with a built-in dataset below:
library(ggplot2)
# From the ?nls examples
df <- subset(DNase, Run == 1)
fit <- nls(density ~ SSlogis(log(conc), Asym, xmid, scal), df)
ggplot(df, aes(conc, density)) +
  geom_point() +
  geom_function(
    fun = function(x) {
      predict(fit, newdata = data.frame(conc = x))
    },
    colour = "red"
  ) +
scale_x_continuous(trans = "log10")
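If you specifically want geom_smooth(), here is a minimal sketch of the same fit done inline. Note that the formula must use x and y rather than the column names, and se = FALSE is required because predict.nls() returns no standard errors:
# Sketch: the same self-starting logistic fit inside geom_smooth().
# se = FALSE is required; predict.nls() has no standard errors.
ggplot(df, aes(conc, density)) +
  geom_point() +
  geom_smooth(method = "nls",
              formula = y ~ SSlogis(log(x), Asym, xmid, scal),
              se = FALSE, colour = "red") +
  scale_x_continuous(trans = "log10")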

Related

Binomial logit model with glmer doesn't yield a good fit to sigmoidal data

I am trying to fit a model to my data, which has a dependent variable that can be 0 or 1.
I tried to fit a binomial glmer to the data, but the fit is pretty bad, as you can see below. This puzzles me because the data look quite sigmoidal, so I thought that kind of model would fit well. Am I using the wrong model?
(color is my data, black is the fit)
Here is the code I used in R:
library(lme4)
library(ggplot2)
exdata <- read.csv("https://raw.githubusercontent.com/FlorianLeprevost/dummydata/main/exdata.csv")
model <- glmer(VD ~ as.factor(VI2)*VI1 + (1|ID), exdata,
               family = binomial(link = "logit"),
               control = glmerControl(optimizer = "bobyqa",
                                      optCtrl = list(maxfun = 2e5)))
summary(model)
exdata$fit <- predict(model, type = "response")
ggplot(exdata, aes(VI1, VD, color = as.factor(VI2),
                   group = as.factor(VI2))) +
  stat_summary(geom = "line", size = 0.8) +
  stat_summary(aes(y = fit), geom = "line", size = 0.8, color = "black") +
  theme_bw()
I also tried without the random effect to see if it would change anything, but it didn't:
ggplot(exdata, aes(x = VI1, y = VD, color = as.factor(VI2),
                   group = as.factor(VI2))) +
  stat_summary(fun.data = mean_se, geom = "line", size = 1) +
  stat_smooth(method = "glm", se = FALSE,
              method.args = list(family = binomial), color = "black")
Here is the data: https://github.com/FlorianLeprevost/dummydata/blob/main/exdata.csv
tl;dr I don't think these data are as sigmoidal as you think. In particular, a logistic regression estimates a sigmoid curve that ranges from 0 to 1, whereas yours levels out (sort of) at 0.9. In much more detail:
Slightly streamlined data import/model fitting:
library(lme4)
library(ggplot2)
library(dplyr)
exdata <- (read.csv("https://raw.githubusercontent.com/FlorianLeprevost/dummydata/main/exdata.csv")
  |> mutate(across(VI2, factor))
)
model <- glmer(VD ~ VI2*VI1 + (1|ID),
               exdata,
               family = binomial(link = "logit"))
Compute data summary and predictions:
This can also be done with stat_summary(), but I like the finer control of doing it myself. In particular, I like to get Clopper-Pearson CIs on the proportions (could also do this with prop.test() to get score-test CIs). I'm also computing predictions across a wider VI1-range than the data (see why below).
ddsum <- (exdata
  |> group_by(VI1, VI2)
  |> summarise(
       num = n(),
       pos = sum(VD),
       VD = mean(VD),
       lwr = binom.test(pos, num)$conf.int[1],
       upr = binom.test(pos, num)$conf.int[2],
       .groups = "drop")
)
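As an aside, the score-test alternative mentioned above just swaps prop.test() in for binom.test(); for a single illustrative cell of 7 successes in 10 trials:
# Clopper-Pearson vs. score (Wilson-type) interval, illustrative counts:
binom.test(7, 10)$conf.int
prop.test(7, 10)$conf.int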
pframe <- expand.grid(
  VI1 = seq(-50, 50, length = 101),
  VI2 = unique(exdata$VI2))
pframe$VD <- predict(model, newdata = pframe, re.form = ~0, type = "response")
Plot:
gg0 <- ggplot(ddsum, aes(x = VI1, y = VD, color = VI2)) +
  geom_pointrange(position = position_dodge(width = 0.3),
                  aes(ymin = lwr, ymax = upr, size = num), alpha = 0.5) +
  scale_size_area(max_size = 4) +
  theme_bw()
gg1 <- gg0 + geom_line(data = pframe)
ggsave(gg1, file = "gglogist1.png")
Conclusion: the sharp increase from x=0 to x=15 combined with the saturation below 1.0 makes it hard to fit with a logistic curve.
We could try a quadratic-logistic fit:
model2 <- update(model, . ~ poly(VI1, 2)*VI2 + (1|ID))
pframe$VD2 <- predict(model2, newdata = pframe, re.form = ~0, type = "response")
gg2 <- gg1 + geom_line(data = pframe, aes(y = VD2), linetype = 2)
ggsave(gg2, file = "gglogist2.png")
This fits better (it couldn't fit worse), but might not make sense for your application.
In principle we could fit a logistic that saturated at a value <1, but it's a bit tricky ...
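For illustration only, here is a sketch of such a fit with nls(): a three-parameter logistic whose plateau Asym is itself estimated, fitted by least squares to the summarised proportions of a single VI2 group. This ignores the random effect and the binomial error structure, and the starting values are guesses, so treat it as a sketch rather than a replacement for the mixed model:
# Sketch: logistic with an estimated upper plateau Asym (values < 1 allowed),
# fitted to the observed proportions of one VI2 group. Ignores the random
# effect and binomial weighting; starting values are guesses.
d1 <- subset(ddsum, VI2 == levels(VI2)[1])
fit_sat <- nls(VD ~ Asym / (1 + exp((xmid - VI1) / scal)),
               data = d1,
               start = list(Asym = 0.9, xmid = 5, scal = 3))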

Fit a smoothed/cumulative distribution function to data and predict x from given y

I need to fit a smoothed/cumulative distribution function to my data and afterwards be able to predict the x-value for a given y. This is what my code looks like at the moment, but it doesn't work as expected: loess probably isn't the right method (the curve even goes below y < 0), and the prediction doesn't seem to work either. Any help would be highly appreciated!
test <- data.frame("xvar" = c(0.01, 0.86, 2, 6.3, 20),
                   "yvar" = c(0.14, 0.16, 5.16, 89.77, 100))
(testplot <- ggplot(test, aes(x = xvar, y = yvar)) +
   geom_point(lwd = 1) +
   geom_line(col = "red") +
   geom_smooth(method = "loess") +
   scale_x_continuous(trans = 'log10') +
   xlab("X") +
   ylab("Y") +
   labs(title = "Test"))
testf <- stats::loess(yvar ~ xvar, data = test)
predict(testf, 10)
Just eye-balling, but it looks like your data follows a logistic(ish) function. What about this:
library(tidyverse)
test <- data.frame("xvar" = c(0.01, 0.86, 2, 6.3, 20),
                   "yvar" = c(0.14, 0.16, 5.16, 89.77, 100))
fit <- nls(yvar ~ SSlogis(xvar, Asym, xmid, scal), data = test)
new_dat <- tibble(xvar = seq(0.01, 20, by = 0.01))
new_dat$yvar <- predict(fit, new_dat)
test |>
  ggplot(aes(xvar, yvar)) +
  geom_point() +
  geom_line(data = new_dat)
predict(fit, tibble(xvar = 10))[1]
#> [1] 99.83301
EDIT:
I see that you want to then calculate X given a Y:
calc_x <- function(y, model){
  # invert y = Asym / (1 + exp((xmid - x)/scal)):
  # x = xmid - scal * log(Asym/y - 1)
  cfs <- summary(model)$coefficients  # rows: Asym, xmid, scal
  -1 * ((log((cfs[1, 1] / y) - 1) * cfs[3, 1]) - cfs[2, 1])
}
calc_x(y = 10, model = fit)
#> [1] 2.666598
#test
predict(fit, tibble(xvar = 2.666598))[1]
#> [1] 9.999995
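A sketch of an alternative that skips the algebra: invert the fitted curve numerically with uniroot(), which works for any monotone fit that has a predict() method. The search interval here matches the data range and is an assumption:
# Numeric inversion: find the x where the fitted curve equals a target y.
# Assumes the curve is monotone over the search interval.
calc_x_num <- function(y, model, interval = c(0.01, 20)) {
  uniroot(function(x) predict(model, data.frame(xvar = x)) - y,
          interval = interval)$root
}
calc_x_num(y = 10, model = fit)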

How to fit non-linear function to data in ggplot2 using maximum likelihood model in R?

The data set (x.test, y.test) follows an exponential decay. I'm trying to fit a custom non-linear function; the code is attached below. The regular points plot just fine, but I'm unable to get the fit line to work. Any suggestions?
x.test <- runif(50, 2, 8)
y.test <- 0.5^(x.test)
df <- data.frame(x.test, y.test)
library(ggpmisc)
my.formula <- y ~ lambda / (1 + aii*x)
ggplot(data = df, aes(x = x.test, y = y.test)) +
  geom_point(shape = 21, fill = "white", color = "red", size = 3) +
  stat_smooth(method = "nls",
              formula = y.test ~ lambda / (1 + aii*x.test),
              method.args = list(start = c(lambda = 1000, aii = -816.39)),
              se = F, color = "red") +
  geom_smooth(method = "lm", formula = my.formula, col = "red") +
  stat_poly_eq(formula = my.formula,
               aes(label = stringr::str_wrap(paste(..eq.label.., ..rr.label..,
                                                   sep = "~~~"))),
               parse = TRUE, size = 2.5, col = "red") +
  stat_function(fun = function(x.test) {
    y.test ~ lambda / (1 + aii*x.test)
  }, color = "blue")
A few things:
you need to use y and x as the variable names in the formula argument to geom_smooth, regardless of what the names are in your data set
you need better starting values (see below)
there's a GLM trick you can use to fit this model; doesn't always work (can be numerically unstable), but it doesn't need starting values and will work more often than nls()
I don't think lm() and stat_poly_eq() are going to work as expected (or maybe at all) with a nonlinear formula ...
Simulate data:
(same as your code but using set.seed() - probably not important here but good practice)
set.seed(101)
x.test <- runif(50,2,8)
y.test <- 0.5^(x.test)
df <- data.frame(x.test, y.test)
Attempt an nls fit with your starting values:
It's usually a good idea to troubleshoot by fitting any smoothing terms outside of ggplot2, so you have fewer layers to dig through to find the problems:
nls(y.test ~ lambda/(1 + aii*x.test),
    start = list(lambda = 1000, aii = -816.39),
    data = df)
#> Error in nls(y.test ~ lambda/(1 + aii * x.test), start = list(lambda = 1000, :
#>   singular gradient
OK, still doesn't work. Let's use glm() to get better starting values: we use an inverse-link GLM:
1/y = b0 + b1*x
y   = 1/(b0 + b1*x)
    = (1/b0) / (1 + (b1/b0)*x)
so lambda = 1/b0 and aii = b1/b0:
g1 <- glm(y.test ~ x.test, family = gaussian(link = "inverse"))
s0 <- with(as.list(coef(g1)),
           list(lambda = 1/`(Intercept)`, aii = x.test/`(Intercept)`))
This gives lambda = -0.09, aii = -0.638 (with a little bit more work we could probably also figure out how to eyeball these by looking at the starting point and scale of the curve).
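As a quick sanity check (one less layer to debug), nls() should now converge outside ggplot2 with these starting values:
# With the GLM-derived starting values, nls() should converge:
nls(y.test ~ lambda / (1 + aii*x.test), start = s0, data = df)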
ggplot(data = df, aes(x = x.test, y = y.test)) +
  geom_point(shape = 21, fill = "white", color = "red", size = 3) +
  stat_smooth(method = "nls",
              formula = y ~ lambda / (1 + aii*x),
              method.args = list(start = s0),
              se = FALSE, color = "red") +
  stat_smooth(method = "glm",
              formula = y ~ x,
              method.args = list(family = gaussian(link = "inverse")),
              color = "blue", linetype = 2)

How to create a 2nd order trendline in R [duplicate]

I have a simple polynomial regression which I do as follows
attach(mtcars)
fit <- lm(mpg ~ hp + I(hp^2))
Now, I plot as follows
> plot(mpg~hp)
> points(hp, fitted(fit), col='red', pch=20)
This gives me the following
I want to connect these points into a smooth curve, using lines gives me the following
> lines(hp, fitted(fit), col='red', type='b')
What am I missing here? I want the output to be a smooth curve that connects the points.
I like to use ggplot2 for this because it's usually very intuitive to add layers of data.
library(ggplot2)
fit <- lm(mpg ~ hp + I(hp^2), data = mtcars)
prd <- data.frame(hp = seq(from = range(mtcars$hp)[1],
                           to = range(mtcars$hp)[2],
                           length.out = 100))
err <- predict(fit, newdata = prd, se.fit = TRUE)
prd$lci <- err$fit - 1.96 * err$se.fit
prd$fit <- err$fit
prd$uci <- err$fit + 1.96 * err$se.fit
ggplot(prd, aes(x = hp, y = fit)) +
  theme_bw() +
  geom_line() +
  geom_smooth(aes(ymin = lci, ymax = uci), stat = "identity") +
  geom_point(data = mtcars, aes(x = hp, y = mpg))
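As an aside, for this particular model ggplot2 can also refit the quadratic and draw the confidence band inline; a minimal equivalent sketch:
# Equivalent inline version: geom_smooth() refits the quadratic itself.
ggplot(mtcars, aes(x = hp, y = mpg)) +
  geom_point() +
  geom_smooth(method = "lm", formula = y ~ poly(x, 2))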
Try:
lines(sort(hp), fitted(fit)[order(hp)], col='red', type='b')
Your rows are not sorted by hp, so lines() connects the points in data order and the result is a mess.
Generally a good way to go is to use the predict() function. Pick some x values, use predict() to generate corresponding y values, and plot them. It can look something like this:
newdat <- data.frame(hp = seq(min(mtcars$hp), max(mtcars$hp), length.out = 100))
newdat$pred <- predict(fit, newdata = newdat)
plot(mpg ~ hp, data = mtcars)
with(newdat, lines(x = hp, y = pred))
See Roman's answer for a fancier version of this method, where confidence intervals are calculated too. In both cases the actual plotting of the solution is incidental: you can use base graphics, ggplot2, or anything else you like. The key is to use the predict() function to generate the proper y values. It's a good method because it extends to all sorts of fits, not just polynomial linear models: non-linear models, GLMs, smoothing splines, and anything else with a predict method.
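To illustrate that last point, here is the same recipe with a smoothing spline (a minimal sketch; any model with a predict() method slots in the same way):
# Same predict-then-plot recipe with a smoothing spline (illustrative):
fit2 <- smooth.spline(mtcars$hp, mtcars$mpg)
newdat$pred2 <- predict(fit2, x = newdat$hp)$y
with(newdat, lines(x = hp, y = pred2, col = "blue"))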
