Model fitting with nls.lm in R, "Error: unused argument"

I'm trying to use the nls.lm function in the minpack.lm package to fit a non-linear model to some data from a psychophysics experiment.
I've searched around and can't find much information about the package, so I've essentially copied the format of the example given on the nls.lm help page. Unfortunately my script is still failing to run, and R is throwing this error:
Error in fn(par, ...) :
unused argument (observed = c(0.1429, 0.2857, 0.375, 0.3846, 0.4667, 0.6154))
It appears that the script thinks the data I want to fit the model to is irrelevant, which is definitely wrong.
I'm expecting it to fit the model and produce a value of 0.5403 for the free parameter (w).
Any help is greatly appreciated.
I'm making the move from Matlab over to R, so apologies if my code looks sloppy.
Here's the script:
install.packages("pracma")
require(pracma)
install.packages("minpack.lm")
require(minpack.lm)
# Residual function, uses parameter w (e.g. .23) to predict accuracy error at a given ratio [e.g. 2:1]
residFun=function(w,n) .5 * erfc( abs(n[,1]-n[,2])/ ((sqrt(2)*w) * sqrt( (n[,1]^2) + (n[,2]^2) ) ) )
# example for residFun
# calculates an error rate of 2.59%
a=matrix(c(2,1),1,byrow=TRUE)
residFun(.23,a)
# Initial guess for parameter to be fitted (w)
parStart=list(w=0.2)
# Recorded accuracies in a matrix; 1 - Acc gives the errors to input into residFun
# i.e. the y-values I want to fit the model to
Acc=1-(matrix(c(0.8571,0.7143,0.6250,0.6154,0.5333,0.3846),ncol=6))
# Ratios (converted to proportions) used in testing
# i.e. the points along the x-axis to fit the above data to
Ratios=matrix(c(0.3,0.7,0.4,0.6,0.42,0.58,0.45,0.55,0.47,0.53,0.49,0.51),nrow=6,byrow=TRUE)
# non-linear model fitting, attempting to calculate the value of w using the Levenberg-Marquardt nonlinear least-squares algorithm
output=nls.lm(par=parStart,fn=residFun,observed=Acc,n=Ratios)
# Error message shown after running
# Error in fn(par, ...) :
# unused argument (observed = c(0.1429, 0.2857, 0.375, 0.3846, 0.4667, 0.6154))

The error means you passed a function an argument that it did not expect. ?nls.lm has no argument named observed, so it is passed on through ... to the function given as fn, in your case residFun. However, residFun doesn't expect this argument either, hence the error. You need to redefine the function like this:
# Residual function, uses parameter w (e.g. .23) to predict accuracy error at a given ratio [e.g. 2:1]
residFun <- function(par, observed, n) {
  w <- par$w
  r <- observed - (.5 * erfc(abs(n[,1] - n[,2]) / ((sqrt(2)*w) * sqrt((n[,1]^2) + (n[,2]^2)))))
  return(r)
}
It gives the following result:
> output = nls.lm(par=parStart,fn=residFun,observed=Acc,n=Ratios)
> output
Nonlinear regression via the Levenberg-Marquardt algorithm
parameter estimates: 0.540285874836135
residual sum-of-squares: 0.02166
reason terminated: Relative error in the sum of squares is at most `ftol'.
Why that happened:
It seems that you were inspired by this example in the documentation:
## residual function
residFun <- function(p, observed, xx) observed - getPred(p,xx)
## starting values for parameters
parStart <- list(a=3,b=-.001, c=1)
## perform fit
nls.out <- nls.lm(par = parStart, fn = residFun, observed = simDNoisy,
                  xx = x, control = nls.lm.control(nprint = 1))
Note that observed is an argument of residFun here.
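As a general illustration of that mechanism (a toy sketch of my own, not from the nls.lm documentation): any named argument that nls.lm does not itself recognize is forwarded through ... to fn, so fn must be able to accept it:
library(minpack.lm)
# toy residual: 'extra' reaches f via nls.lm's ... forwarding
f <- function(par, extra) par$a - extra
nls.lm(par = list(a = 1), fn = f, extra = 3)$par   # estimate converges to a = 3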

Related

R non-linear model fitting using fitModel function

I want to fit a non-linear model to real data.
The real data consist of two known numerical vectors: thickness as 'x' and fh as 'y'.
thickness=seq(0.15,2.00,by=0.05)
fh = c(5.17641, 4.20461, 3.31091, 2.60899, 2.23541, 1.97771, 1.88141, 1.62821, 1.50138, 1.51075, 1.40850, 1.26222, 1.09432, 1.13202, 1.12918, 1.10355, 1.11867, 1.09740,1.08324, 1.05687, 1.19422, 1.22984, 1.34516, 1.19713,1.25398 ,1.29885, 1.33658, 1.31166, 1.40332, 1.39550,1.37855, 1.41491, 1.59549, 1.56027, 1.63925, 1.72440, 1.74192, 1.82049)
plot(thickness,fh)
This is apparently non-linear, so I am trying to fit the model
y = (2/3)*x + (2 + 2*a)/(3*x)
The variable a is an unknown constant, and I am trying to find the constant a that minimizes the sum of squared errors between the fitted curve and the real data.
I first used a function fitModel that I found on a YouTube video, Fitting Functions to Data in R.
library(TIMP)
f <- fitModel(fh ~ thickness*2/3 + (2 + 2*A)/(3*thickness))  # it should find the coefficient 'A'
coef(f)  # extract just the coefficient
However, this throws an error:
Error in modelspec[[datasetind[i]]] : subscript out of bounds
So, as an alternative, I want to plot 'a' against the sum of squared errors. This time I am having a hard time finding 'a' and producing that plot. By manual trial I figured out that the value of 'a' is somewhere near 0.2, but this is not a precise value.
It would be helpful if someone could explain either:
Why the fitModel function didn't work, or
How to find the value of a and plot the graph.
You could try this instead:
yf <- function(a, xv) xv*(2/3) + (2 + 2*a)/(3*xv)   # model prediction
yf(2, thickness)                                    # sanity check at a = 2
f <- function(a, y, xv) sum((y - yf(a, xv))^2)      # sum of squared errors
f(2, fh, thickness)
xmin <- optimize(f, c(0, 10), tol = 0.0001, y = fh, xv = thickness)
xmin                                                # $minimum is the fitted a
plot(thickness, fh)
lines(thickness, yf(xmin$minimum, thickness), col = 3)
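Since the model is linear in a, the least-squares value of a can also be written in closed form; here is a sketch of my own (equivalent to a one-parameter linear regression) under the same model:
# rearrange y = (2/3)*x + 2/(3*x) + a*2/(3*x)  into  z = a*g
z <- fh - (2/3)*thickness - 2/(3*thickness)  # data minus the a-free part of the model
g <- 2/(3*thickness)                         # coefficient multiplying a
a_hat <- sum(z * g) / sum(g^2)               # closed-form least-squares estimate of a
a_hat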

Error when fitting a beta distribution: the function mle failed to estimate the parameters with error code 100

I'm trying to use the fitdist() function from the fitdistrplus package to fit my data to different distributions. Let's say that my data look like:
x = c(1.300000, 1.220000, 1.160000, 1.300000, 1.380000, 1.240000,
1.150000, 1.180000, 1.350000, 1.290000, 1.150000, 1.240000,
1.150000, 1.120000, 1.260000, 1.120000, 1.460000, 1.310000,
1.270000, 1.260000, 1.270000, 1.180000, 1.290000, 1.120000,
1.310000, 1.120000, 1.220000, 1.160000, 1.460000, 1.410000,
1.250000, 1.200000, 1.180000, 1.830000, 1.670000, 1.130000,
1.150000, 1.170000, 1.190000, 1.380000, 1.160000, 1.120000,
1.280000, 1.180000, 1.170000, 1.410000, 1.550000, 1.170000,
1.298701, 1.123595, 1.098901, 1.123595, 1.110000, 1.420000,
1.360000, 1.290000, 1.230000, 1.270000, 1.190000, 1.180000,
1.298701, 1.136364, 1.098901, 1.123595, 1.316900, 1.281800,
1.239400, 1.216989, 1.785077, 1.250800, 1.370000)
Next, if I run fitdist(x, "gamma") everything is fine, but if I use fitdist(x, "beta") instead I get the following error:
Error in start.arg.default(data10, distr = distname) :
values must be in [0-1] to fit a beta distribution
OK, I'm not a native English speaker, but as far as I understand this method requires the data to be in the range [0,1], so I scale it using x_scaled = (x-min(x))/max(x). This gives me a vector with values in that range that correlates perfectly with the original vector x.
Because x_scaled is of class matrix, I convert it into a numeric vector using as.numeric(), and then fit the model with fitdist(x_scale, "beta").
This time I get the following error:
Error in fitdist(x_scale, "beta") :
the function mle failed to estimate the parameters, with the error code 100
After that I did some searching but couldn't find anything useful. Does anybody have an idea of what's going wrong here? Thank you.
Reading the source code shows that the default estimation method of fitdist is mle, which calls mledist from the same package; this constructs a negative log-likelihood for the distribution you have chosen and uses optim or constrOptim to minimize it numerically. If anything goes wrong in that numerical optimization, you get the error message you saw.
It seems the error occurs because x_scaled contains values on the boundary: when x_scaled contains 0 or 1, the negative log-likelihood of the beta distribution cannot be evaluated there, so the numerical optimization simply breaks. One dirty trick is to use x_scaled <- (x - min(x) + 0.001) / (max(x) - min(x) + 0.002), so that x_scaled contains neither 0 nor 1, and fitdist will work.
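A minimal sketch of that workaround (my own illustration, using the x vector from the question):
library(fitdistrplus)
# shift and rescale so every value lies strictly inside (0, 1)
x_scaled <- (x - min(x) + 0.001) / (max(x) - min(x) + 0.002)
fit <- fitdist(x_scaled, "beta")
summary(fit)   # shape1 / shape2 estimates for the rescaled data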

Maximum Likelihood Estimation by hand for normal distribution in R

I am a newbie in R and have searched several forums but haven't got an answer so far. We are asked to do a maximum likelihood estimation in R for an AR(1) model without using the arima() command. We should estimate the intercept alpha, the coefficient beta, and the variance sigma2. The data should follow a normal distribution, from which I derived the log-likelihood function. I then tried to program the function with the following code:
Y <- data$V2
nlogL <- function(theta, Y){
  alpha  <- theta[1]
  rho    <- theta[2]
  sigma2 <- theta[3]
  logl <- -(100/2)*log(2*pi) - (100/2)*log(theta[3]) - (0.5*theta[3])*sum(Y - (theta[1]/(1 - theta[2]))**2)
  return(-logl)
}
par0 <- c(0.1,0.1,0.1)
opt <- optim(par0, nlogL, hessian = TRUE)
When running this code I always get the error message: Error in Y - (theta[1]/(1 - theta[2]))^2 : 'Y' is missing.
It would be great if you could also have a look at whether the likelihood function is derived correctly.
Thank you very much in advance for your help!
Your nlogL function should take only a single argument, theta, because optim(par0, nlogL, hessian = TRUE) supplies just the parameter vector. So you can fix your immediate problem simply by removing the second argument from the function; the Y variable would then be resolved by its definition outside of nlogL. Alternatively, you can keep the signature of nlogL as is and pass Y as an additional argument through optim, like this: optim(par0, nlogL, hessian = TRUE, Y = Y). I would also second chinsoon12's suggestion to review ?optim.
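A minimal sketch of the second option (my own illustration; par0, nlogL, and Y as defined in the question):
# extra named arguments to optim are forwarded through ... to nlogL
opt <- optim(par0, nlogL, hessian = TRUE, Y = Y)
opt$par    # estimates of alpha, rho, sigma2
opt$value  # minimized negative log-likelihood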

passing sparse xreg to stlf in R causes optimisation error

I am trying to forecast a time series and regress on temperature. The residuals show different behaviour at low and high temperatures, so I want to use a piecewise linear approach and learn different coefficients for temperatures above and below 35 degrees.
The data are in a data frame with columns data$x, data$Season, data$Temp.
#Create data frame
len<-365*3 + 1 +31
x<-rnorm(len,mean=4000000,sd=100000)
Season<-c(rep(3,62),rep(4,91),rep(1,90),rep(2,92),rep(3,92),rep(4,91),rep(1,90),rep(2,92),rep(3,92),rep(4,91),rep(1,91),rep(2,92),rep(3,61))
Temp<-rnorm(len,mean=20,sd=5)
data<-data.frame(x,Season,Temp)
#Create model matrix
season_dummy<-model.matrix(~as.factor(data$Season)+0)
Temp_max=pmax(0,data$Temp-35) # creates 0, or a difference
Temp_restore<-restore_temp_up(Temp_max,data$Temp,35) # restores difference to original value
Temp_season_matrix_max=Temp_restore * season_dummy
#Create time-series and forecast
data_ts<-ts(data$x[1:1000],freq=365,start=c(2009,182))
len_train<-length(data_ts)
xreg1<-Temp_season_matrix_max[1:len_train,]
newxreg1<-Temp_season_matrix_max[(len_train+1):(len_train+30),]
stlf(data_ts,method="arima",h=30,xreg=xreg1,newxreg=newxreg1,s.window="periodic")
> Error in optim(init[mask], armaCSS, method = optim.method, hessian = FALSE, :
non-finite value supplied by optim
Error in auto.arima(x, xreg = xreg, seasonal = FALSE, ...) :
No suitable ARIMA model found
In addition: Warning message:
In auto.arima(x, xreg = xreg, seasonal = FALSE, ...) :
Unable to calculate AIC offset
>
Other threads suggest changing the solver method from CSS to ML, but I can't edit those parameters in stlf. The help file shows an optional parameter "forecastfunction", but there are no examples or real explanation of how to use it.
Note: when I set the minimum temperature to, say, 20 instead of 35, this works OK. I am sure it is because the xreg matrix containing temperatures above 35 degrees is sparse (most temperatures are below this value), but I am not sure how to get around this.
(I have included the code for restore_temp_up below; it is possibly inefficient, but it is included here for completeness.)
restore_temp_up <- function(x, original, k){
  if (!is.vector(x))
    stop('x must be a vector')
  for (i in 1:length(x)){
    if (!is.na(x[i])){
      if (x[i] > 0){
        x[i] <- x[i] + k
      }
      if (original[i] == k){
        x[i] <- original[i]  # if the original value WAS exactly k, we can't tell whether the difference was 0
      }
    }
  }
  return(x)
}
Your design matrix is rank deficient, so the regression is singular. To see this:
> eigen(t(xreg1) %*% xreg1)$val
[1] 1321.223 0.000 0.000 0.000
Three of the four eigenvalues are zero, so xreg1 has rank 1: with Temp drawn from N(20, 5), values above 35 are essentially never generated, and the pmax construction leaves the corresponding columns all zero. You cannot fit a regression model with a rank-deficient design matrix.
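An equivalent check, as a small sketch of my own using base R's qr():
qr(xreg1)$rank   # effective rank of the design matrix (here 1)
ncol(xreg1)      # full column rank would equal this (here 4)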

R portfolio analytics chart.EfficientFrontier function

I am trying to use the chart.EfficientFrontier function in the PortfolioAnalytics package in R to chart an efficient frontier object that I have created, but it keeps failing. Basically I am trying to find a frontier that minimizes annualized standard deviation. Eventually, once I get this working, I would also like to maximize annualized return.
Firstly I created an annualized standard deviation function using this code
pasd <- function(R, weights){
  as.numeric(StdDev(R = R, weights = weights) * sqrt(12))   # hardcoded for monthly data
  # as.numeric(StdDev(R = R, weights = weights) * sqrt(4))  # hardcoded for quarterly data
}
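(An aside of my own, not part of the question: the sqrt(12) comes from the fact that, for i.i.d. returns, standard deviation scales with the square root of time; a quick numerical check:)
set.seed(1)
r <- rnorm(12000, sd = 0.02)          # i.i.d. "monthly" returns
sd(colSums(matrix(r, nrow = 12)))     # SD of 12-month sums
sd(r) * sqrt(12)                      # matches, up to sampling noise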
I imported a csv file with monthly returns and my portfolio object looks like this:
> prt
**************************************************
PortfolioAnalytics Portfolio Specification
**************************************************
Call:
portfolio.spec(assets = colnames(returns))
Number of assets: 3
Asset Names
[1] "Global REITs" "Au REITs" "Au Util and Infra"
Constraints
Enabled constraint types
- leverage
- long_only
Objectives:
Enabled objective names
- mean
- pasd
Now I successfully create an efficient frontier object using this line:
prt.ef <- create.EfficientFrontier(R = returns, portfolio = prt, type = "DEoptim", match.col = "pasd")
But when I try to plot it I am getting the following error messages.
> chart.EfficientFrontier(prt.ef, match.col="pasd")
Error in StdDev(R = R, weights = weights) :
argument "weights" is missing, with no default
In addition: There were 26 warnings (use warnings() to see them)
Error in StdDev(R = R, weights = weights) :
argument "weights" is missing, with no default
Error in StdDev(R = R, weights = weights) :
argument "weights" is missing, with no default
Error in xlim[2] * 1.15 : non-numeric argument to binary operator
Anyone know why this is the case? When I use summary(prt.ef) I can see the weights, but why is the chart.EfficientFrontier function failing?
As @WaltS suggested, you need to be consistent in implementing functions to annualize mean and risk returns.
But actually, to get annualized statistics you have two options, and you are currently using neither:
1) Run the optimization on monthly data, with the original risk and return functions in the specification. For plotting you can annualize with:
Port.Anua.Returns <- prt.ef$frontier[,1]*12
Port.Anua.StDev <- prt.ef$frontier[,2]*12^.5
The weights will be the same for monthly or annualized portfolios:
prt.ef$frontier[,-(1:3)]
2) Transform your monthly returns into annualized returns by multiplying by 12, then run the optimization with the usual procedure; all risk and return figures in prt.ef$frontier will then already be annualized.
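A sketch of option 1's plotting step (my own illustration, reusing the two annualized vectors just defined):
plot(Port.Anua.StDev, Port.Anua.Returns, col = 2, pch = 19,
     xlab = "Annualized Risk", ylab = "Annualized Return",
     main = "Annualized frontier from monthly optimization")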
Regarding the jagged line in the EF: using your portfolio specification I was also able to recreate the same behaviour. For the following plot I used the edhec data and your specification, with the original mean and StdDev in the objectives:
data(edhec)
returns <- edhec[,1:3]
That behaviour must be influenced by the specification or the optimization algorithm you are using. I did the same optimization with solve.QP from the quadprog package; this is the result.
Update
The code is here:
require(quadprog)
# solve.QP minimizes (1/2) x^T D x - d^T x subject to A^T x >= b
MV_QP <- function(nx, tarRet, Sig = NULL, long_only = FALSE){
  if (is.null(Sig)) Sig <- cov(nx)
  dvec <- rep(0, ncol(Sig))
  meq <- 2
  Amat <- rbind(rep(1, ncol(Sig)),   # equality: weights sum to 1
                apply(nx, 2, mean))  # equality: portfolio mean equals the target return
  bvec <- c(1, tarRet)
  if (long_only) {
    meq <- 1
    Amat <- Amat[-1, ]
    Amat <- rbind(Amat,
                  diag(1, ncol(Sig)),   # weights >= 0
                  rep(1, ncol(Sig)),    # sum of weights >= .98
                  rep(-1, ncol(Sig)))   # sum of weights <= 1.02
    bvec <- bvec[-1]
    bvec <- c(bvec,
              rep(0, ncol(Sig)), .98, -1.02)
  }
  sol <- solve.QP(Dmat = Sig, dvec, t(Amat), bvec, meq = meq)$solution
}
steps <- 50
x <- returns
µ.b <- apply(X = x, 2, FUN = mean)
long_only <- TRUE
range.bl <- seq(from = min(µ.b), to = max(µ.b)*ifelse(long_only, 1, 1.6), length.out = steps)
risk.bl <- t(sapply(range.bl, function(targetReturn) {
  w <- MV_QP(x, targetReturn, long_only = long_only)
  c(sd(x %*% w), w)
}))
weightsl <- round(risk.bl[,-1], 4)
colnames(weightsl) <- colnames(x)
weightsl
risk.bl <- risk.bl[,1]
rets.bl <- weightsl %*% µ.b
fan <- 12
plot(x = risk.bl*fan^.5, y = rets.bl*fan, col = 2, pch = 21,
     xlab = "Annualized Risk",
     ylab = "Annualized Return", main = "long only EF with solve.QP")
Adding to Robert's comments: the optimization with monthly returns is a quadratic programming (QP) problem with linear constraints. When mean is the return objective and StdDev or var is the risk objective, optimize.portfolio and create.EfficientFrontier select the ROI method as the solver, which uses solve.QP, an efficient solver for this sort of problem. When the risk objective is changed to pasd, these functions don't recognize the problem as a QP, so they use DEoptim, a general nonlinear solver better suited to nonconvex problems than to convex QP ones (see Differential Evolution with DEoptim). This seems to be the cause of the jagged efficient frontier.
In order to have create.EfficientFrontier use solve.QP, which is much more efficient and accurate for this type of problem, you can make a custom moment function that computes the mean and variance and specify it with the momentFUN argument. However, create.EfficientFrontier at least in part uses means computed directly from the returns rather than mu from momentFUN. To deal with that, multiply the returns by 12 and divide the variance by 12, as shown in the example below.
library(PortfolioAnalytics)
data(edhec)
returns <- edhec[,1:3]
# define a custom moment function
annualized.moments <- function(R, scale = 12, portfolio = NULL){
  out <- list()
  out$mu <- matrix(colMeans(R), ncol = 1)
  out$sigma <- cov(R)/scale
  return(out)
}
# define the portfolio
prt <- portfolio.spec(assets = colnames(returns))
prt <- add.constraint(portfolio = prt, type = "long_only")
# leverage defaults to weight_sum = 1, so it is equivalent to a full_investment constraint
prt <- add.constraint(portfolio = prt, type = "leverage")
prt <- add.objective(portfolio = prt, type = "risk", name = "StdDev")
# calculate and plot the efficient frontier
prt_ef <- create.EfficientFrontier(R = 12*returns, portfolio = prt, type = "mean-StdDev",
                                   match.col = "StdDev", momentFUN = "annualized.moments", scale = 12)
xlim <- range(prt_ef$frontier[,2])*c(1, 1.5)
ylim <- range(prt_ef$frontier[,1])*c(.80, 1.05)
chart.EfficientFrontier(prt_ef, match.col = "StdDev", chart.assets = FALSE,
                        labels.assets = FALSE, xlim = xlim, ylim = ylim)
points(with(annualized.moments(12*returns, scale = 12), cbind(sqrt(diag(sigma)), mu)), pch = 19)
text(with(annualized.moments(12*returns, scale = 12), cbind(sqrt(diag(sigma)), mu)),
     labels = colnames(returns), cex = .8, pos = 4)
chart.EF.Weights(prt_ef, match.col = "StdDev")
The means and standard deviations of the assets also need to be adjusted and so are plotted outside of chart.EfficientFrontier and shown on the chart below.
At the end of the day it would be simpler, as Robert suggests, to compute the weights for the efficient frontier using the monthly returns, and then compute the portfolio returns and standard deviations using annualized asset means and standard deviations together with the monthly weights, which are the same in both cases. Still, perhaps this example is useful to show the use of custom moment and objective functions.
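A sketch of that simpler route (my own illustration; it assumes a frontier built from monthly returns with the plain StdDev objective, as in option 1 above, and follows the column layout used earlier in this thread, where prt.ef$frontier[,-(1:3)] holds the weights):
w <- prt.ef$frontier[, -(1:3)]              # monthly-optimal weights (unchanged by annualization)
mu.a <- 12 * colMeans(returns)              # annualized asset means
ann.ret <- w %*% mu.a                       # annualized portfolio returns
ann.sd <- sqrt(12) * prt.ef$frontier[, 2]   # annualized portfolio standard deviations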
This does not find the reason for the error, but setting the axis limits makes it partially work:
prt.ef$frontier  # inspect the EF
xylims <- apply(prt.ef$frontier[,c(2,1)], 2, range)*c(.98, 1.01)
chart.EfficientFrontier(prt.ef, match.col = "pasd",
                        main = "Portfolio Optimization",
                        xlim = xylims[,1], ylim = xylims[,2])
# or
plot(prt.ef$frontier[,c(2,1)], col = 2)
OK, so I tried the pasd function that WaltS suggested, and chart.EfficientFrontier seemed to work, but it gave me a jagged line rather than a smooth one.
I have now created an annualized return function using this code:
pamean <- function(R, weights=NULL){Return.annualized(apply(as.xts(t(t(R) * weights)),1,sum))}
and added this as an objective to my portfolio prt.
> prt
**************************************************
PortfolioAnalytics Portfolio Specification
**************************************************
Call:
portfolio.spec(assets = colnames(returns))
Number of assets: 3
Asset Names
[1] "Global REITs" "Au REITs" "Au Util and Infra"
Constraints
Enabled constraint types
- long_only
- leverage
Objectives:
Enabled objective names
- pamean
- pasd
I then create the efficient frontier again using this line:
> prt.ef <- create.EfficientFrontier(R=returns, portfolio=prt, type="DEoptim", match.col="pasd")
but when I use the summary function I see that only one frontier point has been generated. What does the error message mean, and why was only one point generated?
> summary(prt.ef)
**************************************************
PortfolioAnalytics Efficient Frontier
**************************************************
Call:
create.EfficientFrontier(R = returns, portfolio = prt, type = "DEoptim",
match.col = "pasd")
Efficient Frontier Points: 1
Error in `colnames<-`(`*tmp*`, value = character(0)) :
attempt to set 'colnames' on an object with less than two dimensions
