Optimize a function in R (Error in solve.default)

I have the following equation:

Rp,t+1 = rf + beta * rt+1    (1)

where Rp,t+1 is the return of the portfolio, rf is the risk-free rate and rt+1 is the return of a strategy.
beta has the following expression:

beta = x0 + x1*A + x2*B + x3*C + x4*D

estimated with the Generalized Method of Moments (GMM) using equation (1); A, B, C and D are the risk factors related to rt+1.
My objective is to find the optimal values of x1, x2, x3 and x4 that maximize the investor's utility function U(Rp,t+1; x), where x is the vector of parameters to optimize and Zt denotes the vector of the 4 risk factors.
The code is:
ret <- cbind(ret)                 # ret = rt+1
factors <- cbind(A, B, C, D)
func <- function(x, ret, factors) {
  df <- data.frame(A=factors$A*x[1], B=factors$B*x[2], C=factors$C*x[3], D=factors$D*x[4])
  H <- as.matrix(factors)
  HH <- matrix(H, 179, 4)
  m <- gmm(ret ~ ., data=df, HH)
  b <- coef(m)
  beta <- b[1] + b[2]*factors$A + b[3]*factors$B + b[4]*factors$C + b[5]*D
  beta <- cbind(beta)
  r <- RF + beta*ret              # equation (1)
  # Annual Sharpe ratio of the portfolio
  averp <- mean(r)*12
  sigmap <- sqrt(12)*sd(r)
  Sharpe <- averp/sigmap
  # Calculating utility
  u <- 1/nrow(r)*sum((1+r)^(1-5)/(1-5))
  obj <- u
  result <- list(obj=obj, u=u, beta=beta, r=r, averp=averp, sigmap=sigmap, Sharpe=Sharpe)
  return(result)
}
# Catching the obj from the function
Final <- function(x, ret, factors) {
  bra <- func(x, ret, factors)
  # print(bra$obj)
  return(-bra$obj)
}
p <- optim(par=c(0,1,2,3), Final, method="Nelder-Mead", ret=ret, factors=factors)
bra <- func(x=p$par, ret=ret, factors=factors)
When I run the code, I get the following errors:
For p:
Error in solve.default(crossprod(hm, xm), crossprod(hm, ym)) :
  Lapack routine dgesv: system is exactly singular: U[2,2] = 0
For bra:
Error in is.data.frame(x) :
  (list) object cannot be coerced to type 'double'
I would be very grateful if you could help me with this! Thank you.

I would write a unit test for your func function; you can use browser() to step through it.
Put ret and the scaled factors into a single data frame: df <- data.frame(ret, A=factors$A*x[1], ...).
Then run m <- gmm(ret~., data=df); beta <- coef(m).
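A minimal sketch of that suggestion (the regression formula, the instrument matrix and the assumption that factors is a data frame with columns A, B, C and D are my reading of the question, not a tested fix):

library(gmm)

func <- function(x, ret, factors) {
  # browser()   # uncomment to step through the function interactively
  # keep the response and the scaled factors together in one data frame
  df <- data.frame(ret = ret,
                   A = factors$A * x[1], B = factors$B * x[2],
                   C = factors$C * x[3], D = factors$D * x[4])
  m <- gmm(ret ~ A + B + C + D, x = as.matrix(factors), data = df)
  coef(m)
}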

Related

MLE of the parameters of a PDF written as an infinite sum of terms

My question relates to the use of R for deriving maximum likelihood estimates of parameters when a probability distribution is expressed as an infinite sum, such as the one below due to Rao, Girija et al.
I wanted to see if I could reproduce the maximum likelihood estimates obtained by these authors (who used Matlab, rather than R) when the model is applied to a given set of data. My attempt is given below, although this throws up several warnings that "longer object length is not a multiple of shorter object length". I know why I am getting this warning, but I do not know how to remedy it. How can I edit my code to overcome this?
Also, is there a better way to handle infinite sums? Here I'm just using an arbitrarily large number of terms (n = 1000).
library(bbmle)
svec <- list(c=1,lambda=1)
x <- scan(textConnection("0.1396263 0.1570796 0.2268928 0.2268928 0.2443461 0.3141593 0.3839724 0.4712389 0.5235988 0.5934119 0.6632251 0.6632251 0.6981317 0.7679449 0.7853982 0.8203047 0.8377580 0.8377580 0.8377580 0.8377580 0.8726646 0.9250245 0.9773844 0.9948377 1.0122910 1.0122910 1.0646508 1.0995574 1.1170107 1.1170107 1.1170107 1.1344640 1.1344640 1.1868239 1.2217305 1.2740904 1.3613568 1.3613568 1.3613568 1.4486233 1.4486233 1.5358897 1.5358897 1.5358897 1.5707963 1.6057029 1.6057029 1.6231562 1.6580628 1.6755161 1.7104227 1.7453293 1.7976891 1.8500490 1.9722221 2.0594885 2.4085544 2.6703538 2.6703538 2.7052603 3.5604717 3.7524579 3.8920842 3.9444441 4.1364303 4.1538836 4.2411501 4.2586034 4.3633231 4.3807764 4.4854962 4.6774824 4.9741884 5.5676003 5.9864793 6.1086524"))
dL <- function(x, c, lambda, n = 1000, log = TRUE) {
  k <- 0:n
  r <- log(sum(lambda*c*(x+2*k*pi)^(-c-1)*(exp(-(x+2*k*pi)^(-c))^(lambda))))
  if (log) return(r) else return(exp(r))
}
dat <- data.frame(x)
m1 <- mle2(x ~ dL(c, lambda),
           data = dat,
           start = svec,
           control = list(parscale = unlist(svec)),
           method = "L-BFGS-B",
           lower = c(0, 0)
)
I suggest starting out with that algorithm and making a density function that can be tested for proper behavior by integrating it over its range of definition, (0, 2*pi). You are calling it a "probability function", but that is a term I associate with CDFs rather than with densities (PDFs):
dL <- function(x, c=1, lambda=1, n = 1000, log=FALSE) {
  k <- 0:n
  r <- sum(lambda*c*(x+2*k*pi)^(-c-1)*(exp(-(x+2*k*pi)^(-c))^(lambda)))
  if (log) log(r) else r
}
vdL <- Vectorize(dL)
integrate(vdL, 0,2*pi)
#0.999841 with absolute error < 9.3e-06
LL <- function(x, c, lambda){ -sum( log( vdL(x, c, lambda))) }
(I think you were trying to pack too much into your log-likelihood function, so I decided to break the steps apart.)
When I ran that version I got a warning message from the final mle2 step that I didn't like; I thought the density function might occasionally be returning negative values, so this was my final version:
dL <- function(x, c=1, lambda=1, n = 1000) {
  k <- 0:n
  r <- max( sum(lambda*c*(x+2*k*pi)^(-c-1)*(exp(-(x+2*k*pi)^(-c))^(lambda))), 0.00000001)
}
vdL <- Vectorize(dL)
integrate(vdL, 0,2*pi)
#0.999841 with absolute error < 9.3e-06
LL <- function(x, c, lambda){ -sum( log( vdL(x, c, lambda))) }
(m0 <- mle2(LL,start=list(c=0.2,lambda=1),data=list(x=x)))
#------------------------
Call:
mle2(minuslogl = LL, start = list(c = 0.2, lambda = 1), data = list(x = x))
Coefficients:
        c    lambda
0.9009665 1.1372237
Log-likelihood: -116.96
(The warning and the warning-free LL numbers were the same.)
So I think you were attempting to pack too much into your definition of the log-likelihood function and got tripped up somewhere. There should have been two summations: one for the density approximation and a second one for the log-likelihood itself. The numbers of terms in those two summations differ, hence the length-mismatch warning you were seeing. Unpacking the steps allowed success, at least to the extent of not throwing errors. I'm not sure what that density represents and cannot verify correctness.
As for whether there is a better way to approximate an infinite series, the answer hinges on what is known about the rate of convergence of the partial sums, and on whether you can set a tolerance, compare successive values, and stop the calculation after a smaller number of terms.
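For example, here is a sketch of that idea applied to the density above; the tolerance and the cap on the number of terms are arbitrary choices of mine:

# sum terms until the next term changes the running total by less than tol
dL_adaptive <- function(x, c = 1, lambda = 1, tol = 1e-12, kmax = 10000) {
  total <- 0
  for (k in 0:kmax) {
    term <- lambda*c*(x + 2*k*pi)^(-c-1) * exp(-lambda*(x + 2*k*pi)^(-c))
    total <- total + term
    if (term < tol * total) break   # successive partial sums agree to the requested tolerance
  }
  total
}
vdL_adaptive <- Vectorize(dL_adaptive)
integrate(vdL_adaptive, 0, 2*pi)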
When I look at the density, it makes me wonder if it applies to some scattering process:
curve(vdL(x, c=.9, lambda=1.137), 0.00001, 2*pi)
You can examine the speed of convergence by looking at the ratios of successive terms. Here's a function that does that for the first 10 terms at an arbitrary x:
> ratios <- function(x, c=1, lambda=1) {lambda*c*(x+2*(1:11)*pi)^(-c-1)*(exp(-(x+2*(1:10)*pi)^(-c))^(lambda))/lambda*c*(x+2*(0:10)*pi)^(-c-1)*(exp(-(x+2*(0:10)*pi)^(-c))^(lambda)) }
> ratios(0.5)
[1] 1.015263e-02 1.017560e-04 1.376150e-05 3.712618e-06 1.392658e-06 6.351874e-07 3.299032e-07 1.880054e-07
[9] 1.148694e-07 7.409595e-08 4.369854e-08
Warning message:
In lambda * c * (x + 2 * (1:11) * pi)^(-c - 1) * (exp(-(x + 2 * :
longer object length is not a multiple of shorter object length
> ratios(0.05)
[1] 1.755301e-08 1.235632e-04 1.541082e-05 4.024074e-06 1.482741e-06 6.686497e-07 3.445688e-07 1.952358e-07
[9] 1.187626e-07 7.634088e-08 4.443193e-08
Warning message:
In lambda * c * (x + 2 * (1:11) * pi)^(-c - 1) * (exp(-(x + 2 * :
longer object length is not a multiple of shorter object length
That looks like pretty rapid convergence to me, so I'm guessing that you could use only the first 20 terms and get similar results. With 20 terms the results look like:
> integrate(vdL, 0,2*pi)
0.9924498 with absolute error < 9.3e-06
> (m0 <- mle2(LL,start=list(c=0.2,lambda=1),data=list(x=x)))
Call:
mle2(minuslogl = LL, start = list(c = 0.2, lambda = 1), data = list(x = x))
Coefficients:
        c    lambda
0.9542066 1.1098169
Log-likelihood: -117.83
Since you never interpret a log-likelihood in isolation but rather look at differences, I'm guessing that this minor difference will not affect your inferences adversely.

Getting more precision in pvalue in survdiff

I am running survdiff from the survival package and the p-value is 0.02. I would like to see it with more precision (i.e. 0.02xxxx). Is there an argument I can pass to specify the number of digits of the p-value? I read the documentation for the survival package and did not find any mention of how to specify it.
survdiff(surv_object~access_sam2$Area_mTLSHL)
Credits.
The computation of the p-value for objects of class "survdiff" is not completely obvious. I had to see what is going on in the print method for objects of that class to understand the way the degrees of freedom are computed.
The code below is a simplification of the code of print.survdiff and therefore the credits go to
citation("survival")
#
#Therneau T (2015). _A Package for Survival Analysis
#in S_. version 2.38, <URL:
#https://CRAN.R-project.org/package=survival>.
#
#Terry M. Therneau, Patricia M. Grambsch (2000).
#_Modeling Survival Data: Extending the Cox Model_.
#Springer, New York. ISBN 0-387-98784-3.
#
#To see these entries in BibTeX format, use
#'print(<citation>, bibtex=TRUE)', 'toBibtex(.)', or
#set 'options(citation.bibtex.max=999)'.
The code itself can be seen in the sources or by running
getAnywhere("print.survdiff")
Now for the question's problem.
I have written a generic pvalue function to make it easier to call a method for objects of the class returned by survdiff. The example is taken from the help page of that function.
The return value is a named list with 3 members whose names are self-explanatory. One of them, chisq, simply repeats a value returned by survdiff; I have included it for the sake of completeness.
pvalue <- function(x, ...) UseMethod("pvalue")

pvalue.survdiff <- function (x, ...)
{
  if (length(x$n) == 1) {
    df <- 1
    pval <- pchisq(x$chisq, 1, lower.tail = FALSE)
  } else {
    if (is.matrix(x$obs)) {
      otmp <- rowSums(x$obs)
      etmp <- rowSums(x$exp)
    } else {
      otmp <- x$obs
      etmp <- x$exp
    }
    df <- sum(etmp > 0) - 1
    pval <- pchisq(x$chisq, df, lower.tail = FALSE)
  }
  list(chisq = x$chisq, p.value = pval, df = df)
}
srv <- survdiff(Surv(futime, fustat) ~ rx, data = ovarian)
pvalue(srv)
#$chisq
#[1] 1.06274
#
#$p.value
#[1] 0.3025911
#
#$df
#[1] 1
I am not sure about the survival package, and you did not provide reproducible code (please do so next time). But in general, if you want to see more digits, what you need to do is
print(value, digits= n)
# n is the number of digits you want to see
In your case it is
print(survdiff(surv_object~access_sam2$Area_mTLSHL), 6)
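For instance, reusing the ovarian example from the answer above (the number of digits is an arbitrary choice):

library(survival)
srv <- survdiff(Surv(futime, fustat) ~ rx, data = ovarian)
print(srv, digits = 6)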

How does ar.yw estimate the variance

In R, how does the function ar.yw estimate the variance? Specifically, where does the number "var.pred" come from? It does not seem to come from the usual YW estimate of the variance, nor the sum of squared residuals divided by df (even though there is disagreement about what the df should be, none of the choices give an answer equivalent to var.pred). And yes, I know that there are better methods than YW; just trying to figure out what R is doing.
set.seed(82346)
temp <- arima.sim(n=10, list(ar = 0.5), sd=1)
fit <- ar(temp, method = "yule-walker", demean = FALSE, aic=FALSE, order.max=1)
## R's estimate of the sigma squared
fit$var.pred
## YW estimate
sum(temp^2)/10 - fit$ar*sum(temp[2:10]*temp[1:9])/10
## YW if there was a mean
sum((temp-mean(temp))^2)/10 - fit$ar*sum((temp[2:10]-mean(temp))*(temp[1:9]-mean(temp)))/10
## estimate based on residuals, different possible df.
sum(na.omit(fit$resid^2))/10
sum(na.omit(fit$resid^2))/9
sum(na.omit(fit$resid^2))/8
sum(na.omit(fit$resid^2))/7
You need to read the code if it's not documented.
?ar.yw
Which says: "In ar.yw the variance matrix of the innovations is computed from the fitted coefficients and the autocovariance of x." If that is not enough explanation, then you need to look at the code:
methods(ar.yw)
#[1] ar.yw.default* ar.yw.mts*
#see '?methods' for accessing help and source code
getAnywhere(ar.yw.default)
# there are two cases that I see
x <- as.matrix(x)
nser <- ncol(x)
if (nser > 1L) # .... not your situation
#....
else {
    r <- as.double(drop(xacf))
    z <- .Fortran(C_eureka, as.integer(order.max), r, r,
                  coefs = double(order.max^2), vars = double(order.max),
                  double(order.max))
    coefs <- matrix(z$coefs, order.max, order.max)
    partialacf <- array(diag(coefs), dim = c(order.max, 1L, 1L))
    var.pred <- c(r[1L], z$vars)
    #.......
    order <- if (aic) (0L:order.max)[xaic == 0L] else order.max
    ar <- if (order) coefs[order, seq_len(order)] else numeric()
    var.pred <- var.pred[order + 1L]
    var.pred <- var.pred * n.used/(n.used - (order + 1L))
So you now need to find the Fortran code for C_eureka. I think it is here: https://svn.r-project.org/R/trunk/src/library/stats/src/eureka.f This is the code that I think is returning the var.pred estimate. I'm not a time series guy, and it's your responsibility to review this process for applicability to your problem.
subroutine eureka (lr,r,g,f,var,a)
c
c solves Toeplitz matrix equation toep(r)f=g(1+.)
c by Levinson's algorithm
c a is a workspace of size lr, the number
c of equations
c
snipped
c estimate the innovations variance
var(l) = var(l-1) * (1 - f(l,l)*f(l,l))
if (l .eq. lr) return
d = 0.0d0
q = 0.0d0
do 50 i = 1, l
k = l-i+2
d = d + a(i)*r(k)
q = q + f(l,i)*r(k)
50 continue
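Putting those pieces together, here is a sketch of what the order-1, demean = FALSE case from the question seems to reduce to; the n/(n - (order + 1)) rescaling is read off the R source quoted above, so treat this as an illustration rather than an authoritative derivation:

set.seed(82346)
temp <- arima.sim(n = 10, list(ar = 0.5), sd = 1)
fit  <- ar(temp, method = "yule-walker", demean = FALSE, aic = FALSE, order.max = 1)
n   <- length(temp)
g0  <- sum(temp^2) / n                  # lag-0 autocovariance, no mean removed
g1  <- sum(temp[-1] * temp[-n]) / n     # lag-1 autocovariance
phi <- g1 / g0                          # Yule-Walker AR(1) coefficient
v1  <- g0 * (1 - phi^2)                 # innovations variance from the Levinson step
v1 * n / (n - 2)                        # compare with fit$var.pred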

R optim(){fExtremes} gets 0 hessian matrix

I am using the R package fExtremes to find the best parameters of a GEV distribution for my data (a vector), but I get the following error message:
Error in solve.default(fit$hessian) : Lapack routine dgesv: system is exactly singular: U[1,1] = 0
I traced it back to fit$hessian and found that my Hessian matrix is singular: all of its elements are 0. The source code of gevFit() (https://github.com/cran/fExtremes/blob/master/R/GevFit.R) shows that fit$hessian is calculated by optim(). The output parameters are exactly the same values as the initial parameters. I am wondering what property of my data could cause this problem. I copied my code here:
> min(sample);
[1] 5.240909
> max(sample)
[1] 175.8677
> length(sample)
[1] 6789
> mean(sample)
[1] 78.04107
>para<-gevFit(sample, type = "mle")
Error in solve.default(fit$hessian) :
Lapack routine dgesv: system is exactly singular: U[1,1] = 0
fit = optim(theta, .gumLLH, hessian = TRUE, ..., tmp = data)
> fit
$par
xi -0.3129225
mu 72.5542497
beta 16.4450897
$value
[1] 1e+06
$counts
function gradient
4 NA
$convergence
[1] 0
$message
NULL
$hessian
     xi mu beta
xi    0  0    0
mu    0  0    0
beta  0  0    0
I updated my dataset on google docs:
https://docs.google.com/spreadsheets/d/1IRRpjmdrrJPhNmfiLism_P0efV_Ot4HlEsa6kwMnljc/edit?usp=sharing
This is going to be a long story, possibly more suited to https://stats.stackexchange.com/.
====== Part 1 -- The problem ======
This is the sequence generating the error:
library(fExtremes)
samp <- read.csv("optimdata.csv")[ ,2]
## does not converge
para <- gevFit(samp, type = "mle")
We are facing the typical cause of lack-of-convergence when using optim() and friends: inadequate starting values for the optimisation.
To see what goes wrong, let us use the PWM estimator (http://arxiv.org/abs/1310.3222); this consists of an analytical formula, hence it does not run into convergence problems, since it makes no use of optim():
para <- gevFit(samp, type = "pwm")
fitpwm<- attr(para, "fit")
fitpwm$par.ests
The estimated tail parameter xi is negative, corresponding to a bounded upper tail; in fact the fitted distribution displays even more "upper tail boundedness" than the sample data, as you can see from the "leveling off" of the quantile-quantile graph at the right:
qqgevplot <- function(samp, params){
probs <- seq(0.1,0.99,by=0.01)
qqempir <- quantile(samp, probs)
qqtheor <- qgev(probs, xi=params["xi"], mu=params["mu"], beta=params["beta"])
rang <- range(qqempir,qqtheor)
plot(qqempir, qqtheor, xlim=rang, ylim=rang,
xlab="empirical", ylab="theoretical",
main="Quantile-quantile plot")
abline(a=0,b=1, col=2)
}
qqgevplot(samp, fitpwm$par.ests)
For xi < -0.5 the MLE estimator is not regular (http://arxiv.org/abs/1301.5611), and the value of -0.46 estimated by PWM for xi is very close to that threshold. Now the PWM estimates are used internally by gevFit() as starting values for optim(); you can see this if you print out the code of the function gevFit():
print(gevFit)
print(.gevFit)
print(.gevmleFit)
The starting value for optim is theta, obtained by PWM. For the specific data at hand, this starting value is not adequate, in that it leads to non-convergence of optim().
====== Part 2 -- solutions? ======
Solution 1 is to use para <- gevFit(samp, type = "pwm") as above. If you'd like to use ML, then you have to specify good starting values for optim(). Unfortunately, the fExtremes package does not make that easy. You can instead re-define your own version of .gevmleFit to accept them, e.g.
.gevmleFit <- function (data, block = NA, start.param, ...)
{
  data = as.numeric(data)
  n = length(data)
  if (missing(start.param)) {
    theta = .gevpwmFit(data)$par.ests
  } else {
    theta = start.param
  }
  fit = optim(theta, .gevLLH, hessian = TRUE, ..., tmp = data)
  if (fit$convergence)
    warning("optimization may not have succeeded")
  par.ests = fit$par
  varcov = solve(fit$hessian)
  par.ses = sqrt(diag(varcov))
  ans = list(n = n, data = data, par.ests = par.ests, par.ses = par.ses,
             varcov = varcov, converged = fit$convergence, nllh.final = fit$value)
  class(ans) = "gev"
  ans
}
## diverges, just as above
.gevmleFit(samp)
## diverges, just as above
startp <- fitpwm$par.ests
.gevmleFit(samp, start.param=startp)
## converges
startp <- structure(c(-0.1, 1, 1), names=names(fitpwm$par.ests))
.gevmleFit(samp, start.param=startp)$par.ests
Now check this out: the beta estimated by PWM is 0.1245; by changing it by a tiny amount, the MLE is made to converge:
startp <- fitpwm$par.ests
startp["beta"]
startp["beta"] <- 0.13
.gevmleFit(samp, start.param=startp)$par.ests
This hopefully illustrates that blindly optim()ising works until it doesn't, and can then turn into quite a delicate endeavour. For this reason, it might be useful to leave this reply here rather than migrate it to Cross Validated.

Write a program to minimize the sum of squares of recursive exponential function

This is the function that I'd like to code in R: Y is modeled as a constant plus the difference of two exponentially weighted sums of S over i = 1, 2, 3, ..., j-1.
a, b, c, f, g are to be determined by nls (with starting values arbitrarily set to 7, 30, 15, 1, 2); S and Y are in the dataset.
The function can be written as more computation-friendly recursive equations:
l1 = 0,  m1 = 0,  Y1 = a
li = exp(-1/c)*l(i-1) + S(i-1)
mi = exp(-1/g)*m(i-1) + S(i-1)
Yi = a + b*li - f*mi
Here is my attempt at the code, but I could not get it to converge:
S=c(235,90,1775,960,965,1110,370,485,667,140,588,10,0,1340,600,0,930,1250,930,120,895,825,0,935,695,270,0,610,0,0,445,0,0,370,470,819,717,0,0,60,0,135,690,0,825,730,1250,370,1010,261,0,865,570,1425,150,1515,1143,0,675,1465,375,0,690,290,0,430,735,510,270,450,1044,0,928,60,95,105,60,950,0,1640,3960,1510,500,1135,0,0,0,181,568,60,1575,247,0,1270,870,290,510,0,540,455,120,580,420,90,525,1116,499,0,60,150,660,1080,1715,90,1090,840,975,280,850,633,30,1530,1765,880,150,225,77,1380,810,835,0,540,1017,1108,0,300,600,90,370,910,0,60,60,0,0,0,0,50,0,735,900)
Y=c(NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,7.7,NA,NA,7.2,NA,NA,NA,NA,NA,NA,7.4,NA,NA,NA,NA,NA,NA,10.7,NA,NA,NA,NA,8.1,8.5,NA,NA,NA,NA,NA,9.9,NA,7.4,NA,NA,NA,9.5,NA,NA,9,NA,NA,NA,8.8,NA,NA,8.5,NA,NA,NA,6.9,NA,NA,7.9,NA,NA,NA,7.3,NA,7.9,8.3,NA,NA,NA,11.5,NA,NA,12.3,NA,NA,NA,6.1,NA,NA,9,NA,NA,NA,10.3,NA,NA,9.7,NA,NA,8.6,NA,9.1,NA,NA,11,NA,NA,12.4,11.1,10.1,NA,NA,NA,NA,11.7,NA,NA,9,NA,NA,NA,10.2,NA,NA,11.2,NA,NA,NA,11.8,NA,9.2,10,9.8,NA,9.5,11.3,10.3,9.5,10.2,10.6,NA,10.8,10.7,11.1,NA,NA,NA,NA,NA,NA,NA,NA,12.6,NA)
mydata = data.frame(Y,S)
f <- function(a, b, f, c, g, m) {
  model <- matrix(NA, nrow(m)+1, 3)
  model[1,1] = 0
  model[1,2] = 0
  model[1,3] = a
  for (i in 2:nrow(model)) {
    model[i,1] = exp(-1/c)*model[i-1,1] + m$S[i-1]
    model[i,2] = exp(-1/g)*model[i-1,2] + m$S[i-1]
    model[i,3] = a + b*model[i,1] - f*model[i,2]
  }
  model <- as.data.frame(model)
  colnames(model) = c('l','m','Y')
  model$Y[which(m$Y>0)]
}
Y=mydata$Y
nls(Y ~ f(a,b,f,c,g,mydata), start=list(a=7,b=5.3651,f=5.3656,c=16.50329,g=16.5006),control=list(maxiter=1000,minFactor=1e-12))
The errors that I've been getting depend on the starting values:
Error in nls(Y ~ f(a, b, f, c, g, mydata), start = list(a = 7, :
number of iterations exceeded maximum of 1000
Error in nls(Y ~ f(a, b, f, c, g, mydata), start = list(a = 7, :
singular gradient
I'm stuck and not sure what to do, any help would be greatly appreciated.
Try this:
ff <- function(a, b, f, c, g) {
  Y <- numeric(length(S))
  for (i in seq(from=2, to=length(S))) {
    j <- seq(length=i-1)
    Y[i] <- a + sum((b*exp(-(i-j)/c) - f*exp(-(i-j)/g))*S[j])
  }
  Y
}
S <- c(235,90,1775,960,965,1110,370,485,667,140,588,10,0,1340,600,0,930,1250,930,120,895,825,0,935,695,270,0,610,0,0,445,0,0,370,470,819,717,0,0,60,0,135,690,0,825,730,1250,370,1010,261,0,865,570,1425,150,1515,1143,0,675,1465,375,0,690,290,0,430,735,510,270,450,1044,0,928,60,95,105,60,950,0,1640,3960,1510,500,1135,0,0,0,181,568,60,1575,247,0,1270,870,290,510,0,540,455,120,580,420,90,525,1116,499,0,60,150,660,1080,1715,90,1090,840,975,280,850,633,30,1530,1765,880,150,225,77,1380,810,835,0,540,1017,1108,0,300,600,90,370,910,0,60,60,0,0,0,0,50,0,735,900)
Y <- c(NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,7.7,NA,NA,7.2,NA,NA,NA,NA,NA,NA,7.4,NA,NA,NA,NA,NA,NA,10.7,NA,NA,NA,NA,8.1,8.5,NA,NA,NA,NA,NA,9.9,NA,7.4,NA,NA,NA,9.5,NA,NA,9,NA,NA,NA,8.8,NA,NA,8.5,NA,NA,NA,6.9,NA,NA,7.9,NA,NA,NA,7.3,NA,7.9,8.3,NA,NA,NA,11.5,NA,NA,12.3,NA,NA,NA,6.1,NA,NA,9,NA,NA,NA,10.3,NA,NA,9.7,NA,NA,8.6,NA,9.1,NA,NA,11,NA,NA,12.4,11.1,10.1,NA,NA,NA,NA,11.7,NA,NA,9,NA,NA,NA,10.2,NA,NA,11.2,NA,NA,NA,11.8,NA,9.2,10,9.8,NA,9.5,11.3,10.3,9.5,10.2,10.6,NA,10.8,10.7,11.1,NA,NA,NA,NA,NA,NA,NA,NA,12.6,NA)
nls(Y ~ ff(a,b,f,c,g), start=list(a=7,b=5.3651,f=5.3656,c=16.50329,g=16.5006))
But I am unable to get nls to run here. You may also try a general-purpose optimizer. Construct the sum-of-squares function (the negative sum of squares, since we will maximize it):
SS <- function(par) {
  a <- par[1]
  b <- par[2]
  f <- par[3]
  c <- par[4]
  g <- par[5]
  -sum((Y - ff(a,b,f,c,g))^2, na.rm=TRUE)
}
and maximize:
library(maxLik)
start <- c(a=7, b=5.3651, f=5.3656, c=16.50329, g=16.5006)  # starting values from the question
summary(a <- maxBFGS(SS, start=start))
It works, but as you can see the gradients are still pretty large. The gradients become small if I re-run a Newton-Raphson optimizer on the output values of BFGS:
summary(b <- maxNR(SS, start=coef(a)))
which gives the results
Newton-Raphson maximisation
Number of iterations: 1
Return code: 2
successive function values within tolerance limit
Function value: -47.36338
Estimates:
   estimate      gradient
a 10.584488  0.0016371615
b  6.954444 -0.0043306656
f  6.955095  0.0043327901
c 28.622035 -0.0005735572
g 28.619185  0.0003871179
I don't know if this makes sense. The issues with nls and the other optimizers hint that you have numerical instabilities, related either to large numerical values or to the difference of exponentials in the model formula.
Check what is going on there :-)
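As a cross-check (my own variation, not part of the run above), the same fit can be done with base R's optim by minimizing the positive sum of squares directly:

SSmin <- function(par) {
  Yhat <- ff(par[1], par[2], par[3], par[4], par[5])
  sum((Y - Yhat)^2, na.rm = TRUE)
}
start <- c(a = 7, b = 5.3651, f = 5.3656, c = 16.50329, g = 16.5006)
fit <- optim(start, SSmin, method = "BFGS", control = list(maxit = 1000))
fit$par    # parameter estimates
fit$value  # residual sum of squares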
