I am new to the Julia language. I have a large system of ODEs (around 500 equations). When I use the AutoTsit5(Rosenbrock23()) solver, I get this error:
This solver is not able to use mass matrices.
Does it mean that I have to use solvers for DAE problems? What other options exist?
Thanks!
I tried different solvers. Some work, some do not.
If you are using a solver to solve an ODEProblem in mass matrix form, you must use one of the methods documented as capable of doing this; these are listed on the DAE solver page. AutoTsit5 is not compatible because no explicit method is compatible with mass matrices (for clear mathematical reasons: a singular mass matrix makes the system a DAE, which cannot be stepped explicitly). FBDF would likely be recommended in a scenario like this.
In general, we would highly recommend not hand-picking an ODE solver and instead relying on DifferentialEquations.jl's default algorithm selection, unless you have a clear idea of why you're choosing a specific solver.
I need to estimate the parameters of a continuous-discrete nonlinear stochastic dynamic system using Kalman filtering techniques.
I'm going to use Julia's ode45() from the ODE package and implement the Extended Kalman Filter myself to compute the log-likelihood. ODE is written fully in Julia, and ForwardDiff supports differentiation of native Julia functions, including nested differentiation, which I also need because I want to use ForwardDiff in my EKF implementation.
Will ForwardDiff handle differentiation of a function as involved as the log-likelihood I've described?
ODE.jl is in maintenance mode so I would recommend using DifferentialEquations.jl instead. In the DiffEq FAQ there is an explanation about using ForwardDiff through the ODE solvers. It works, but as in the FAQ I would recommend using sensitivity analysis since that's a better way of calculating the derivatives (it will take a lot less compilation time). But yes, DiffEqParamEstim.jl is a whole repository for parameter estimation of ODEs/SDEs/DAEs/DDEs and it uses ForwardDiff.jl through the solvers.
(BTW, what you're looking to do sounds interesting. Feel free to get in touch with us in the JuliaDiffEq channel to talk about the development of parameter estimation tooling!)
I am new to the package CVXR. I am using it to do the convex optimization within each iteration of an EM algorithm. Everything is fine at first, but after 38 iterations I get this error:
Error in valuesById(object, results_dict, sym_data, solver) :
Solver failed. Try another.
I am not sure why the solver works fine at first but then fails later. I looked in the manual for how to change the solver but could not find the answer. I am also curious whether we can specify the learning step size in CVXR. I would really appreciate any help.
You can get the list of installed solvers in CVXR with
installed_solvers()
In my case that is:
# "ECOS" "ECOS_BB" "SCS"
You can change the one that is used by passing the solver argument, e.g. to change from the default ECOS to SCS:
result <- solve(prob, solver="SCS")
I think the developers are planning to support other solvers in the future, e.g. Gurobi.
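For example, here is a minimal sketch of switching solvers mid-stream; the toy problem below stands in for the one you rebuild inside each EM iteration, and the tryCatch falls back to SCS when the default solver errors out:

library(CVXR)

x    <- Variable(2)
prob <- Problem(Minimize(sum_squares(x - c(1, 2))), list(x >= 0))

## Try the default solver first; retry with SCS if it fails.
result <- tryCatch(
  solve(prob),
  error = function(e) solve(prob, solver = "SCS")
)
if (result$status != "optimal") warning("solver status: ", result$status)
result$getValue(x)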
I have done it in Excel but need to run a proper simulation in R.
I need to minimize a function F(x) (x is a vector) subject to the constraints sum(x) = 1, all values in x lying in [0, 1], and another function G(x) > G_0.
I have tried optim and constrOptim, but neither of them gives this option.
The problem you are referring to is (presumably) a non-linear optimization with non-linear constraints, which is one of the most general classes of optimization problems.
The package I have used for these purposes is called nloptr: see here. From my experience, it is both versatile and fast. You can specify both equality and inequality constraints by setting eval_g_eq and eval_g_ineq, respectively, as sketched below. If the Jacobians are known explicitly (i.e. can be derived analytically), specify them for faster convergence; otherwise, a numerical approximation is used.
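For instance, here is a minimal sketch of the setup described in the question, using the derivative-free ISRES algorithm (which supports both equality and inequality constraints). F_obj, G_fun, and G_0 are toy stand-ins for your own functions; note that nloptr expects constraints in the forms eval_g_eq(x) == 0 and eval_g_ineq(x) <= 0:

library(nloptr)

G_0   <- 0.5
F_obj <- function(x) sum((x - 0.2)^2)   # toy objective F(x)
G_fun <- function(x) sum(sqrt(x))       # toy constraint function G(x)

res <- nloptr(
  x0          = rep(0.25, 4),               # feasible starting point
  eval_f      = F_obj,
  lb          = rep(0, 4),                  # x_i in [0, 1]
  ub          = rep(1, 4),
  eval_g_eq   = function(x) sum(x) - 1,     # sum(x) = 1
  eval_g_ineq = function(x) G_0 - G_fun(x), # G(x) > G_0  <=>  G_0 - G(x) <= 0
  opts = list(algorithm = "NLOPT_GN_ISRES", # global, derivative-free
              xtol_rel = 1e-8, maxeval = 50000)
)
res$solution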
Use this list as a general reference to optimization problems.
Alternatively, write out the system of equations given by the Lagrange multiplier conditions, then solve it using the R command nlm.
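A hedged sketch of that idea with a toy objective (equality constraints only; an inequality like G(x) > G_0 would require the full KKT conditions): minimize sum(x^2) subject to sum(x) = 1 by driving the squared residuals of the Lagrange stationarity conditions to zero with nlm:

## Stationarity: 2*x_i - lambda = 0; feasibility: sum(x) - 1 = 0.
lagrange_sq <- function(z) {
  x <- z[1:3]; lambda <- z[4]
  sum((2 * x - lambda)^2) + (sum(x) - 1)^2
}
sol <- nlm(lagrange_sq, p = c(0.2, 0.3, 0.5, 0))
sol$estimate   # approx. (1/3, 1/3, 1/3) with lambda approx. 2/3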
You can do this in the OpenMx package (currently hosted at the site listed below; aiming for a 2.0 release on CRAN this year).
It is a general-purpose package mostly used for Structural Equation Modelling, but it handles nonlinear constraints.
For your case, make an mxModel() with your objective expressed in mxAlgebra() and the constraints in mxConstraint().
When you mxRun() the model, the algebras will be solved subject to the constraints, if possible; a rough sketch follows the link below.
http://openmx.psyc.virginia.edu/
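To make that concrete, here is a rough, hedged sketch with the post-2.0 API (the model name, target point, and objective are mine, chosen only for illustration): minimize (x1 - 0.2)^2 + (x2 - 0.9)^2 subject to x1 + x2 = 1, which should land at roughly (0.15, 0.85):

library(OpenMx)

model <- mxModel("constrainedMin",
  mxMatrix("Full", 1, 2, free = TRUE, values = c(0.5, 0.5),
           labels = c("x1", "x2"), name = "X"),              # free parameters
  mxMatrix("Full", 1, 2, values = c(0.2, 0.9), name = "C"),  # target point
  mxMatrix("Unit", 2, 1, name = "U"),                        # column of ones
  mxMatrix("Full", 1, 1, values = 1, name = "one"),
  mxAlgebra((X - C) %*% t(X - C), name = "obj"),     # squared distance, 1x1
  mxAlgebra(X %*% U, name = "rowSum"),               # sum of the parameters
  mxConstraint(rowSum == one, name = "sumToOne"),    # x1 + x2 = 1
  mxFitFunctionAlgebra("obj")                        # minimize the algebra
)
fit <- mxRun(model)
fit$output$estimate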
I'm currently looking for a Lua alternative to the R programming language's optim() function; does anyone know how to deal with this?
http://numlua.luaforge.net/ looks interesting but doesn't seem to have minimization. The most promising lead seems to be a Lua wrapper for GSL, which has a variety of multidimensional minimization algorithms included.
With derivatives
- BFGS (method="BFGS" in optim) and two conjugate gradient methods (Fletcher-Reeves and Polak-Ribiere) which are two of the three options available for method="CG" in optim.
Without derivatives
- the Nelder-Mead simplex (method="Nelder-Mead", the default in optim).
More specifically, see here for the Lua shell documentation covering minimization.
I agree with @Zack that you should try to use existing implementations if at all possible, and that you might need a bit more background knowledge to know which algorithms will be useful for your particular problems.
R's implementation of optim isn't actually written in R. If you type "optim" with no parentheses at the prompt, it'll dump out the definition of the function, and you can see that after some error checking and argument shuffling it invokes an .Internal routine (coded in C and/or Fortran) to do all the real work.
So your best bet is to find a C library for mathematical optimization -- sorry, I have no recommendations -- and wrap that into Lua. I doubt anyone has written native-Lua code for this, and I would not recommend trying to code it yourself; doing mathematical optimization efficiently is still an active domain of basic research, and the best-so-far algorithms are decidedly nontrivial to implement.
I'm testing a simple moving average crossing strategy in R. Instead of running a huge simulation over the 2-dimensional parameter space (length of the short-term moving average, length of the long-term moving average), I'd like to implement the Particle Swarm Optimization algorithm to find the optimal parameter values. I've been browsing the web and have read that this algorithm is very effective. Moreover, the way the algorithm works fascinates me...
Does anybody of you guys have experience with implementing this algorithm in R? Are there useful packages that can be used?
Thanks a lot for your comments.
Martin
Well, there is a package available on CRAN called pso, and indeed it is a particle swarm optimizer (PSO).
I recommend this package.
It is under active development (last update 22 Sep 2010) and is consistent with the reference implementation for PSO. In addition, the package includes functions for diagnostics and plotting results.
It certainly appears to be a sophisticated package, yet the main function interface (the function psoptim) is straightforward: just pass in a few parameters that describe your problem domain and a cost function.
More precisely, the key arguments to pass in when you call psoptim are:
- the dimensions of the problem, as a vector (par);
- lower and upper bounds for each variable (lower, upper); and
- a cost function (fn).
There are other parameters in the psoptim signature; those are generally related to convergence criteria and the like.
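For instance, a minimal sketch with a stand-in cost function (the Rastrigin test function; in the trading application fn would instead be, say, the negative backtested performance as a function of the two moving-average lengths):

library(pso)

rastrigin <- function(x) 10 * length(x) + sum(x^2 - 10 * cos(2 * pi * x))

set.seed(42)
res <- psoptim(par = rep(NA, 2),   # length of par sets the dimension
               fn = rastrigin,
               lower = -5.12, upper = 5.12,
               control = list(maxit = 1000))
res$par    # best parameters found
res$value  # cost at that point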
Are there any other PSO implementations in R?
There is an R package called ppso for parallel PSO, available on R-Forge. I do not know anything about this package; I have downloaded it and skimmed the documentation, but that's it.
Beyond those two, none that I am aware of. About three months ago, I looked for R implementations of the more popular meta-heuristics, and this is the only PSO implementation I found. The R bindings to the GNU Scientific Library (GSL) include a simulated annealing algorithm, but none of the biologically inspired meta-heuristics.
The other place to look is of course the CRAN Task View for Optimization. I did not find another PSO implementation beyond what I've cited here, though there are quite a few packages listed there, and most of them I did not check beyond the name and one-sentence summary.