Non-linear regression analysis in R

I'm an R novice, but I'm looking for a way to estimate the three parameters A, B and C in the following function in R:
y = A * (x1^B) * (x2^C)
Can someone give me some hints about R methods that would help me achieve such a fit?

One option is the nls function, as @Sven Hohenstein suggested. Another option is to convert your nonlinear regression into a linear regression: take the log of both sides of the equation, and a little algebra gives a linear equation. You can run the regression using something like:
fit <- lm( log(y) ~ log(x1) + log(x2), data=mydata)
The intercept will be log(A), so use exp to recover A; the B and C parameters are the two slopes.
The big difference is the error structure: nls fits the model with normal errors added to the original equation, while the lm fit on logs assumes the errors in the original model are lognormal and multiplicative rather than additive. Many datasets will give similar results for the two methods.
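For example, a minimal sketch of recovering A, B and C from the fitted coefficients (same hypothetical mydata as above; y, x1 and x2 must be positive for the logs):

fit <- lm(log(y) ~ log(x1) + log(x2), data = mydata)
A <- exp(coef(fit)[["(Intercept)"]])  # intercept is log(A), so exponentiate
B <- coef(fit)[["log(x1)"]]           # exponent on x1
C <- coef(fit)[["log(x2)"]]           # exponent on x2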

You can fit a nonlinear least-squares model with the function nls. In practice you supply a data frame and starting values for the parameters, e.g.:
nls(y ~ A * (x1^B) * (x2^C), data = mydata, start = list(A = 1, B = 1, C = 1))

Why don't you use SVM (Support Vector Machine) regression? There's a package on CRAN named e1071 that can handle regression with SVMs.
You can check this tutorial: http://www.svm-tutorial.com/2014/10/support-vector-regression-r/
I hope it helps.
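A minimal sketch with e1071 (assuming the same hypothetical mydata; svm() defaults to eps-regression when the response is numeric):

library(e1071)
svm.fit <- svm(y ~ x1 + x2, data = mydata)  # eps-regression with an RBF kernel by default
predict(svm.fit, newdata = mydata)          # fitted values on the training data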

Related

How to force intercept to zero using the nls function in R?

I used the nls function to fit a nonlinear model (power curve, y = a*x^b).
cal <- nls(formula = agw ~ a * area^b, data = calibration_6, start = list(a = 1, b = 1))
summary(cal)
What I want now is to force the intercept (a) to zero to check something. In Excel, I can't set the intercept for a power curve. Is it possible to set the intercept in R?
If so, could you tell me how to do it?
The y ~ x^b model (which fixes a = 1) is what I considered first:
nls(formula = agw ~ area^b, data = calibration_6, start = list(b = 1))
but I also found another way; please check the link below.
How to calculate R-squared in nls package (non-linear model) in R?

How to obtain R^2 for robust mixed effect model (rlmer command; robustlmm)?

I estimated a robust mixed effect model with the rlmer command from the robustlmm package. Is there a way to obtain the marginal and conditional R^2 values?
Just going to answer that myself. I could not find an R function equivalent to r.squaredGLMM (which works for lmerMod objects) that handles rlmerMod objects, but there is a quick workaround: extract the variance components for the fixed effects, random effects and residuals, then manually calculate the marginal and conditional R^2 using the formulas provided by Nakagawa & Schielzeth (2013).
library(robustlmm)
library(insight)
library(lme4)

data(Dyestuff, package = "lme4")

# robust random-intercept model for Yield by Batch
robust.model <- rlmer(Yield ~ 1 + (1 | Batch), data = Dyestuff)

# extract the variance components
var.fix <- get_variance_fixed(robust.model)     # fixed-effects variance
var.ran <- get_variance_random(robust.model)    # random-effects variance
var.res <- get_variance_residual(robust.model)  # residual variance

# Nakagawa & Schielzeth (2013) formulas
R2m <- var.fix / (var.fix + var.ran + var.res)              # marginal R^2
R2c <- (var.fix + var.ran) / (var.fix + var.ran + var.res)  # conditional R^2
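As a cross-check, r.squaredGLMM (mentioned above for lmerMod objects) should give comparable numbers on a non-robust fit of the same model; a hedged sketch (classical.model is just an illustrative name, and for this intercept-only model the marginal R^2 is essentially zero):

library(MuMIn)
classical.model <- lmer(Yield ~ 1 + (1 | Batch), data = Dyestuff)
r.squaredGLMM(classical.model)  # returns marginal (R2m) and conditional (R2c) R^2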
Literature:
Nakagawa, S. and Schielzeth, H. (2013). A general and simple method for obtaining R2 from generalized linear mixed-effects models. Methods in Ecology and Evolution, 4: 133-142. doi:10.1111/j.2041-210x.2012.00261.x

R: fit mixed effect model

Suppose we have the following linear mixed effects model:
How do we fit this nested model in R?
For now, I tried two things:
Rewrite the model as:
Then use the lmer function in the lme4 package to fit the mixed-effects model, with X_i as both a random- and a fixed-effect covariate:
lmer(y ~ X - 1 + (0 + X | subject))
But when I pass the result to BIC for model selection, it always picks the simplest model, which is not correct.
I also tried regressing y_i on X_i first, treating X_i as a fixed effect, to get an estimate of the slope vector phi_i. I then treated the phi_i as new observations and regressed them on C_i to get beta. But this seems incorrect, since we do not know C_i in the real problem, and C_i and beta jointly determine the coefficients.
So, are there other ways to fit this kind of model in R and where are my mistakes?
Thanks for any help!

Vector regression in R

I would like to do a regression in R.
The formulas are y_t = alpha + beta * x_{t-1} and x_t = theta + rho * x_{t-1}.
Since I would like to estimate the covariance matrix of the errors, I need to fit both equations together, but I do not know how to run the regressions jointly. Thank you.
I tried
lm(c(y[2:756], x[2:756]) ~ c(x[1:755], x[1:755]), data = data1)
756 is the length of the vectors; this does not work.
Your example looks like you are trying to fit an autoregressive model with lm. Use a dedicated autoregressive modelling function instead. For multivariate autoregressive models I suggest the MTS package. Something like the following should work:
require("MTS")
VAR(data.frame(x=x, y=y))
For more detail, check out ?VAR. You may also want to have a look at the time series task view on CRAN.
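Since the question also asks for the covariance matrix of the errors, the fitted object should carry it; a hedged sketch (field names per the MTS documentation):

library(MTS)
fit <- VAR(data.frame(x = x, y = y), p = 1)  # bivariate VAR(1) on your two series
fit$Phi    # lag-1 coefficient matrix
fit$Sigma  # estimated covariance matrix of the innovations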

GLM with autoregressive term to correct for serial correlation

I have a stationary time series to which I want to fit a linear model with an autoregressive term to correct for serial correlation, i.e. using the formula A_t = c1*B_t + c2*C_t + u_t, where u_t = r*u_{t-1} + e_t
(u_t is an AR(1) term to correct for serial correlation in the error terms)
Does anyone know what to use in R to model this?
Thanks
Karl
The GLMMarp package will fit these models. If you just want a linear model with Gaussian errors, you can fit it with the arima() function, specifying the covariates via the xreg argument.
There are several ways to do this in R. Here are two examples using the "Seatbelts" time series dataset in the datasets package that comes with R.
The arima() function is in the stats package that ships with R. It takes an argument of the form order = c(p, d, q), where you specify the orders of the autoregressive, integrated, and moving-average components. In your question, you suggest that you want an AR(1) model to correct for first-order autocorrelation in the errors, and that's it. We can do that with the following command:
arima(Seatbelts[, "drivers"], order = c(1, 0, 0),
      xreg = Seatbelts[, c("kms", "PetrolPrice", "law")])
The value of order specifies that we want an AR(1) model. The xreg component should be the matrix of other Xs we want to include as regressors. The output looks a little like the output of summary.lm() turned on its side.
An alternative that may feel more familiar to the way you've fit regression models is gls() in the nlme package. The following code turns the Seatbelts time series object into a data frame and adds a new column (t) that is just a counter for the sorted time series:
Seatbelts.df <- data.frame(Seatbelts)
Seatbelts.df$t <- seq_len(nrow(Seatbelts.df))  # time index for the correlation structure
The two lines above just get the data into shape (arima() is designed for time series, so it can read time series objects directly). To fit the model with nlme you would then run:
library(nlme)
m <- gls(drivers ~ kms + PetrolPrice + law,
         data = Seatbelts.df,
         correlation = corARMA(p = 1, q = 0, form = ~t))
summary(m)
The correlation argument is how you pass the ARMA correlation structure to gls(). The results won't be exactly the same because arima() estimates models by maximum likelihood while gls() uses restricted maximum likelihood (REML) by default. If you add method = "ML" to the gls() call, you will get estimates identical to those from arima() above.
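Concretely, the ML refit would be (same variables as above):

m.ml <- gls(drivers ~ kms + PetrolPrice + law,
            data = Seatbelts.df,
            correlation = corARMA(p = 1, q = 0, form = ~t),
            method = "ML")
summary(m.ml)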
What is your link function?
The way you describe it sounds like a basic linear regression with autocorrelated errors. In that case, one option is to use lm to get consistent coefficient estimates and then compute Newey-West HAC standard errors (see the sketch below).
I'm not sure what the best answer is for GLMs more generally.
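A minimal sketch of that approach with the sandwich and lmtest packages (the data frame dat and its columns are hypothetical, named after your A_t, B_t, C_t notation):

library(sandwich)
library(lmtest)
fit <- lm(A ~ B + C, data = dat)       # consistent point estimates even with AR(1) errors
coeftest(fit, vcov. = NeweyWest(fit))  # Newey-West HAC standard errors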
