I am currently converting a set of MATLAB models that calculate the log likelihood with the optimizer fminunc, to R using optim with method 'BFGS'.
I have the initial values, the maximum likelihood values, and the end parameter results for all the MATLAB models. Using the same initial parameters as MATLAB, most of the converted R models find the same log likelihood and the same end parameter values with optim. However, some get stuck at a local optimum; this can be fixed by supplying the MATLAB end parameter values as the initial values, and those models then find the MATLAB maximum likelihood values.
Is there a more powerful optimizer for R that is on par with MATLAB's, or is it simply that R is more likely to get stuck at a local optimum, so that the initial parameter values become more critical for reaching the maximum log likelihood rather than getting stuck at a local optimum?
res <- optim(par = x, fn = BOTH4classnochange, hessian = TRUE, method = "BFGS",
             control = list(maxit = MaxIter, abstol = TolFun, reltol = TolX))
Are you aware of the CRAN Task View on Optimization listing every related package? And yes, there are dozens.
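If local optima are the issue, one common and simple remedy is a multi-start loop around optim that keeps the best of several runs. A minimal sketch, where negloglik and start are placeholders for your objective (the negated log likelihood) and your original initial values:

# Multi-start sketch: run optim from several jittered starting points and keep
# the best result. negloglik and start are placeholders, and the jitter scale
# (sd = 0.5) would need to be chosen for your parameter scales.
set.seed(1)
starts <- lapply(1:20, function(i) start + rnorm(length(start), sd = 0.5))
fits   <- lapply(starts, function(s)
  optim(par = s, fn = negloglik, method = "BFGS", hessian = TRUE))
best   <- fits[[which.min(sapply(fits, `[[`, "value"))]]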
Related
I am doing model fitting using the minpack.lm package. The objective function is the residual between my experimental data and my ODE model.
The tricky part is the initial values for my ODE model, so I use a loop that draws random initial values and keeps the best result, i.e. the one that minimizes the objective function.
The problem is that if a random initial value is bad, the ODE cannot be solved, and the result I get is either NaN or an error such as "the problem not converged" or "number of time steps 1 exceeded maxsteps at t = 0". So my questions are:
Is there any way to detect when an initial value is bad and move on to the next initial value within the loop?
Do you have any advice for choosing better initial values than random sampling?
Thanks a lot
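A minimal sketch of the random-restart loop described above, with a tryCatch guard so that a bad start is simply skipped; ode_residuals(par) and the parameter ranges are placeholders for the actual residual function (which calls the ODE solver and may error or return NaN):

library(minpack.lm)

best <- NULL
for (i in 1:100) {
  start <- runif(3, min = 0, max = 10)                 # random start (placeholder ranges)
  fit <- tryCatch(nls.lm(par = start, fn = ode_residuals),
                  error = function(e) NULL)            # skip starts where the solver errors
  if (is.null(fit) || any(!is.finite(fit$fvec))) next  # skip NaN/Inf residuals
  if (is.null(best) || fit$deviance < best$deviance) best <- fit
}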
I'm working on a simulation project with a 3-dimensional piece-wise constant function, and I'm trying to find the inputs that maximize the output. Using optim() in R with the Nelder-Mead or SANN algorithms seems best (they don't require the function to be differentiable), but I'm finding that optim() ends up returning my starting value exactly. This starting value was obtained using a grid search, so it's likely reasonably good, but I'd be surprised if it was the exact peak.
I suspect that optim() is not testing points far enough out from the initial guess, leading to a situation where all tested points give the same output.
Is this a reasonable concern?
How can I tweak the breadth of values that optim() is testing as it searches?
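Two control parameters worth experimenting with are parscale and, for SANN, the starting temperature temp; as I understand optim's implementation, both influence how far it steps from the starting point. A sketch, with f and start as placeholders for the piece-wise constant objective and the grid-search starting point:

# Nelder-Mead: optim works on par/parscale, so choosing parscale on the order
# of the plateau width makes the initial simplex steps large enough to leave
# the current constant region. fnscale = -1 tells optim() to maximize.
res_nm <- optim(par = start, fn = f, method = "Nelder-Mead",
                control = list(fnscale = -1, parscale = c(10, 10, 10), maxit = 5000))

# SANN: the default Gaussian proposal scale is tied to the current temperature,
# so a higher starting temp (with more iterations) explores points further from
# the current iterate early on.
res_sa <- optim(par = start, fn = f, method = "SANN",
                control = list(fnscale = -1, temp = 20, maxit = 20000))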
I am using the R package "BB" to maximize a log likelihood function (the objective function). I didn't change any control parameters in BBoptim; I only supplied my objective function and its arguments.
The optimization algorithm successfully converged. However, I can see from the output of BBoptim that in the middle of its iterations it actually found a higher objective function value than the one it finally reports.
I think the default algorithm used by BBoptim is spg. I would really appreciate it if anyone could point me to the reason for this, and perhaps to a more appropriate setting of the BBoptim optimization parameters.
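One generic workaround, regardless of the optimizer, is to wrap the objective so that the best point seen during the iterations is recorded, independently of what the optimizer finally reports. A sketch, assuming a hypothetical loglik() and starting vector start, and negating the value since BBoptim minimizes by default:

library(BB)

# Record the best (highest) log-likelihood value seen, whatever BBoptim reports.
best <- list(value = -Inf, par = NULL)
tracked_negloglik <- function(par, ...) {
  val <- loglik(par, ...)                      # loglik is a placeholder
  if (is.finite(val) && val > best$value) best <<- list(value = val, par = par)
  -val                                         # negated: BBoptim minimizes by default
}
res <- BBoptim(par = start, fn = tracked_negloglik)
best$par                                       # parameters at the best value encountered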
I have written a function for performing maximum simulated likelihood estimation in R, which works quite well.
However, the problem is that optim does not evaluate the likelihood value and the gradient in a single call, the way MATLAB's fminunc optimizer does. Thus, every time optim wants to update the gradient, the simulation for the given parameter vector has to be repeated. In the end, optim has called the loglik function about 100 times to update the parameters, plus an additional 50 times to calculate the gradient.
I am wondering whether there is an elegant solution to avoid the 50 additional simulation steps, for example by storing the estimated likelihood value and gradient at each step. Then, before the likelihood function is called the next time, one would check whether that information is already available for the given parameter vector. This could be done by interposing an additional function between optim and the loglik function, but that seems a bit clumsy.
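A rough sketch of the kind of wrapper I have in mind (loglik here is a placeholder that returns both the simulated value and the gradient for a parameter vector):

# Caching wrapper: fn and gr share one simulation per parameter vector.
make_cached_loglik <- function(loglik) {
  cache <- new.env()
  cache$par <- NULL
  evaluate <- function(par) {
    if (is.null(cache$par) || !identical(par, cache$par)) {
      res <- loglik(par)            # res: list(value = ..., gradient = ...)
      cache$par      <- par
      cache$value    <- res$value
      cache$gradient <- res$gradient
    }
    invisible(NULL)
  }
  list(
    fn = function(par) { evaluate(par); cache$value },
    gr = function(par) { evaluate(par); cache$gradient }
  )
}

# Usage (my_simulated_loglik is a placeholder):
# cached <- make_cached_loglik(my_simulated_loglik)
# optim(par = start, fn = cached$fn, gr = cached$gr, method = "BFGS")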
Any good ideas?
Cheers,
Ben
I am trying to manually calculate the standard error of the constant in an ARIMA model, if it is included. I have referred to the Box and Jenkins (1994) text, especially Section 7.2, but my understanding is that the methods mentioned there calculate the variance-covariance matrix for the ARIMA parameters only, not the constant. I tried searching the Internet but couldn't find any theory. Software like Minitab, R, etc. calculates this, so I was wondering how it is done. Can someone provide any pointer(s) on this topic?
Thanks.
arima() will fit a regression model with ARMA errors. The constant is treated as the coefficient of a regression variable consisting only of 1s. So you need the covariance matrix of the regression coefficients, which is usually calculated separately from the covariance matrix of the ARMA coefficients. Look at Section 8.3 of Hamilton's "Time series analysis".
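In R you can see this directly: the var.coef component returned by arima() is the covariance matrix of all estimated coefficients, including the mean/constant term, so its diagonal gives the squared standard errors. A small illustration on simulated data:

set.seed(1)
y <- arima.sim(model = list(ar = 0.5), n = 200) + 10   # AR(1) around a non-zero mean

fit <- arima(y, order = c(1, 0, 0))   # include.mean = TRUE by default
fit$var.coef                          # covariance matrix of ar1 and intercept
sqrt(diag(fit$var.coef))              # standard errors; "intercept" is the constant term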
One of the nicest things about R is that you can access a lot of the source code of R itself from within the environment. If you simply type arima at the command prompt, you get the high-level source code for the arima() function. I got several pages of code when I tried it.
You do miss out on anything implemented internally within the R executable in native code, but often the high-level code tells you everything you want to know.
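For example:

arima                 # prints the R-level source of stats::arima
getAnywhere("arima")  # also locates definitions hidden in namespaces or S3 methods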
Perhaps a shift of perspective can solve this problem.
Rather than seeing the constant as something special, just consider the problem without a constant and with a regressor that is a vector of ones.
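A quick sketch of that perspective using arima()'s xreg argument (y is a placeholder series; for a non-differenced model this reproduces the usual mean term, and its standard error falls out of var.coef like any other regression coefficient):

ones <- rep(1, length(y))
fit2 <- arima(y, order = c(1, 0, 0), xreg = ones, include.mean = FALSE)
sqrt(diag(fit2$var.coef))    # the xreg entry is the standard error of the constant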