I'm not quite sure I understand how non-linear regression can be analysed in R.
I found that I can still do a linear regression with lm by taking the log of my functions.
My first function is this one, where β1 is the intercept, β2 is the slope and ε is the error term:

log(Y) = β1 + β2*log(X1) + ε
I think that the following command gives me what I want:
result <- lm(log(Y) ~ log(X1), data=dataset)
The problem is with the following function:

Y = β1*exp(β2*X1 + ε)

I don't know what I should put inside lm in order to perform a linear regression on this function... Any ideas?
The following math shows how to transform your equation into a linear regression:
Y = b0*exp(b1*X1 + epsilon)
log(Y) = log(b0) + b1*X1 + epsilon
log(Y) = c0 + b1*X1 + epsilon
So in R this is just
fitted_model <- lm(log(Y) ~ X1, data = your_data)
You won't get a direct estimate for b0, though, just an estimate of log(b0). But you can back-transform the intercept and its confidence intervals by exponentiating them.
b0_est <- exp(coef(fitted_model)["(Intercept)"])
b0_ci <- exp(confint(fitted_model)["(Intercept)", ])
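Putting it together on simulated data (a quick sanity sketch; all names and values here are made up, not from the original question):

```r
set.seed(42)
# Simulate data from Y = b0 * exp(b1*X1 + eps) with known b0, b1
n   <- 200
b0  <- 2.5
b1  <- 0.8
X1  <- runif(n, 0, 3)
eps <- rnorm(n, sd = 0.1)
Y   <- b0 * exp(b1 * X1 + eps)

# Fit on the log scale, then back-transform the intercept
fitted_model <- lm(log(Y) ~ X1)
b0_est <- exp(coef(fitted_model)["(Intercept)"])
b1_est <- coef(fitted_model)["X1"]
c(b0 = unname(b0_est), b1 = unname(b1_est))  # should be close to 2.5 and 0.8
```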
"gls function fits a linear model using generalized least squares. The errors are allowed to be correlated and/or have unequal variances."
Example
# AR(1) errors within each Mare
fm1 <- gls(follicles ~ sin(2*pi*Time) + cos(2*pi*Time), Ovary,
           correlation = corAR1(form = ~ 1 | Mare))
# variance increases as a power of the absolute fitted values
fm2 <- update(fm1, weights = varPower())
I got all the above information from https://www.rdocumentation.org/packages/nlme/versions/3.1-137/topics/gls
In the example, they used a nonlinear model, "follicles ~ sin(2*pi*Time) + cos(2*pi*Time)". My question is: why did they use the gls function to fit this nonlinear model? Any ideas, please!
Thank you in advance
I am helping a colleague fit a compound-Poisson generalized linear mixed model in R, using the cpglmm function from the cplm package (link). The model involves a three-way interaction, and I would like to compute some interpretable quantities. So far, I have tried to calculate some odds ratios, but I am not sure this is the right way to do it.
# Fit model with three-way interaction in fixed effects #
m <- cpglmm(ncs ~ diversity_index * diversity_speciality * n_authors + selfcit +
              n_refs + (1 | region), data = diversity)
# Calculate Odds-ratio #
se <- sqrt(diag(vcov(m)))
tab <- cbind(Est  = m$fixef,
             S.E. = se,
             LL   = m$fixef - 1.96 * se,
             UL   = m$fixef + 1.96 * se)
print(exp(tab), digits=3)
I also want to compute some predicted values, e.g. predicted probabilities or the like, but I can't get predict() to work for the cpglmm. Are there any functions I could use?
I am currently looking for an "optimal" fit for some data. I would like to use AIC-based stepwise regression to find the "best" polynomial regression for my outcome (y) with three variables (a, b, c) and a maximum degree of 3. I also have interactions.
If I use:
lm_poly <- lm(y ~ a + I(a^2) + I(a^3) + b + I(b^2) + I(b^3) + c + a:b, my_data)
stepAIC(lm_poly, direction = "both")
I will get collinearities due to the use of the I(i^j) terms. This shows up in the beta (standardized) regression coefficients of the final fit: some of them are > |1|.
Is there a possibility to do stepwise regression with orthogonal terms?
Using poly() would be nice, but I just don't understand how to do stepwise regression with poly().
lm_poly2 <- lm(y ~ poly(a,3) + poly(b,3) + c + a:b, my_data)
stepAIC(lm_poly2, direction = "both")
This will not include steps with a, a^2 (and b respectively) and thus will not find the results I am looking for.
(I know that I might still have collinearities due to the interaction a:b.)
I hope someone can understand my point.
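One workaround is to expose each orthogonal column of poly() as its own variable, so that stepAIC can add or drop each degree individually. A sketch with simulated stand-in data (the column names a1..b3 and the data are made up for illustration):

```r
library(MASS)  # for stepAIC

set.seed(1)
# Simulated stand-in for my_data (illustrative only)
my_data <- data.frame(a = runif(100), b = runif(100), c = runif(100))
my_data$y <- 1 + 2 * my_data$a - 3 * my_data$a^2 + my_data$b + rnorm(100, sd = 0.1)

# Split the orthogonal polynomial bases into individual columns
pa <- poly(my_data$a, 3)
pb <- poly(my_data$b, 3)
my_data$a1 <- pa[, 1]; my_data$a2 <- pa[, 2]; my_data$a3 <- pa[, 3]
my_data$b1 <- pb[, 1]; my_data$b2 <- pb[, 2]; my_data$b3 <- pb[, 3]

# Now every degree is a separate term that stepAIC can drop on its own
lm_poly3 <- lm(y ~ a1 + a2 + a3 + b1 + b2 + b3 + c + a1:b1, my_data)
step_fit <- stepAIC(lm_poly3, direction = "both", trace = FALSE)
```

Note that predictions on new data then require projecting onto the same bases, e.g. with predict(pa, newdata = new_a), rather than calling poly() afresh.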
Thank you in advance.
Jans
I have a problem extracting the expression of a fitted smooth function in a generalized additive model.
I fitted an additive model like this:
library(mgcv)
model <- gam(y ~ x1 + x2 + s(x3, bs = "cr"), data = newdata)
I can plot the fitted smooth function s(x3), but I want to get its exact expression and compute its derivative. How can I achieve that?
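The smooth has no single closed-form expression (it is a weighted sum of spline basis functions), but its derivative can be approximated by finite differences on the linear predictor matrix, which mgcv exposes via predict(..., type = "lpmatrix"). A sketch with simulated stand-in data (the data and grid here are made up):

```r
library(mgcv)

set.seed(1)
# Simulated stand-in for newdata (illustrative only)
newdata <- data.frame(x1 = runif(200), x2 = runif(200), x3 = runif(200))
newdata$y <- newdata$x1 + sin(2 * pi * newdata$x3) + rnorm(200, sd = 0.1)
model <- gam(y ~ x1 + x2 + s(x3, bs = "cr"), data = newdata)

# Evaluate the linear predictor on a grid and on a slightly shifted grid;
# x1 and x2 are held fixed so their contribution cancels in the difference
eps   <- 1e-5
grid  <- data.frame(x1 = 0, x2 = 0, x3 = seq(0.05, 0.95, length.out = 100))
grid2 <- transform(grid, x3 = x3 + eps)
Xg0 <- predict(model, grid,  type = "lpmatrix")
Xg1 <- predict(model, grid2, type = "lpmatrix")

# Finite-difference approximation of d s(x3) / d x3 along the grid
deriv_s <- (Xg1 %*% coef(model) - Xg0 %*% coef(model)) / eps
```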
I have some multiple linear models without an intercept, like the one below:
Y = a*X1 + b*X2 + c*X3
This is a linear model, but since it does not have an intercept, I have been writing it as a non-linear model in R:
model1= nls(Y ~ a*X1+b*X2, data = trainDat, start = list(a = 1, b=1))
The problem is that summary(model1) does not give the usual model statistics, such as the F-statistic, because the fit is not an lm object.
How can we report the significance of these models in R?
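For what it's worth, a model without an intercept is still linear and can be fitted with lm directly by adding 0 (or -1) to the formula, which restores the usual summary output including the F-statistic. A sketch with simulated stand-in data (the data here are made up):

```r
set.seed(1)
# Simulated stand-in for trainDat (illustrative only)
trainDat <- data.frame(X1 = rnorm(50), X2 = rnorm(50))
trainDat$Y <- 2 * trainDat$X1 - 1.5 * trainDat$X2 + rnorm(50, sd = 0.2)

# '0 +' (equivalently '- 1') drops the intercept while keeping a linear model
model2 <- lm(Y ~ 0 + X1 + X2, data = trainDat)
summary(model2)  # reports coefficient t-tests and the overall F-statistic
```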