In OpenMDAO v3.20.2, the optimization report was added (here). I'm using the ScipyOptimizeDriver with the SLSQP optimizer, and the report appears to have a bug: the iteration count in the report does not match the iteration count from the SciPy print statements (see below for the direct readout). I think this could be addressed using the information found (here) (e.g. in scipy_optimizer.py, set self.iter_count = result.nit).
Also, the optimizer returns plenty of key optimization details that could be useful. For example, status indicates why the optimization failed or terminated (e.g. iteration limit, too many constraints, etc.). That information would be more useful than the current exit status, which is just FAIL.
Hopefully this is a useful suggestion.
Iteration limit reached (Exit mode 9)
Current function value: 1.5695626156067153
Iterations: 20
Function evaluations: 42
Gradient evaluations: 16
Optimization FAILED.
Iteration limit reached
-----------------------------------
Time: 94.89042663574219
p.driver.opt_result['iter_count']
Out[2]: 43
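For reference, here is a minimal sketch of what SciPy already reports after an SLSQP run (the objective below is a toy stand-in, not my model; the attributes are those of scipy.optimize.OptimizeResult):

from scipy.optimize import minimize

def rosen(x):
    # Toy objective standing in for the wrapped OpenMDAO model.
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

# disp=True makes SLSQP print the same kind of block as in the readout above.
result = minimize(rosen, [0.0, 0.0], method="SLSQP",
                  options={"maxiter": 20, "disp": True})

print(result.nit)      # SLSQP iterations (what the "Iterations:" line reports)
print(result.nfev)     # function evaluations
print(result.status)   # numeric exit mode, e.g. 9 = iteration limit reached
print(result.message)  # human-readable reason for termination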
I have a program that simulates the paths of particles using the DifferentialEquations.jl package in Julia. The simulation allows particles to hit devices; to prevent the continued simulation of such particles, I use the unstable_check option of the solver (specifically of the EulerHeun solver). However, this leads to warnings like the following:
┌ Warning: Instability detected. Aborting
└ @ SciMLBase <path>\.julia\packages\SciMLBase\0s9uL\src\integrator_interface.jl:351
As I simulate thousands of particles, this can be quite annoying (and slow).
Can I suppress this warning? And if not, is there another (better) way to abort the simulation of some particles?
I don't think a code sample makes sense / is necessary here; let me know though if you think otherwise.
https://diffeq.sciml.ai/stable/basics/common_solver_opts/#Miscellaneous
verbose: Toggles whether warnings are thrown when the solver exits early. Defaults to true.
Thus to turn off the warnings, you simply do solve(prob,alg;verbose=false).
The simulation allows for particles to hit devices - to prevent the continued simulation of such particles, I use the unstable_check of the solver
Using a DiscreteCallback or ContinuousCallback with affect!(integrator) = terminate!(integrator) is a much better way to do this.
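A minimal sketch of that approach (the dynamics, the device position and the condition function are placeholders; the point is the callback wiring):

using DifferentialEquations

device_position = 1.0
# The event fires when the first coordinate crosses the device boundary.
condition(u, t, integrator) = u[1] - device_position
affect!(integrator) = terminate!(integrator)   # stop simulating this particle
cb = ContinuousCallback(condition, affect!)

f(u, p, t) = 0.5 .* u    # placeholder drift
g(u, p, t) = 0.1 .* u    # placeholder diffusion
prob = SDEProblem(f, g, [0.1], (0.0, 100.0))

# verbose = false additionally silences the early-exit warnings discussed above.
sol = solve(prob, EulerHeun(), dt = 0.01, callback = cb, verbose = false)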
There is Suppressor.jl, although I don't know whether this reduces the overhead you get from the warnings being created, so a DiffEq-specific setting might be the better way to go here (I don't know much about DiffEq though, sorry!)
Here's an example from the readme:
julia> using Suppressor
julia> @suppress begin
           println("This string doesn't get printed!")
           @warn("This warning is ignored.")
       end
For suppressing just the warnings, you want @suppress_err.
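Applied to the solve call, that would look roughly like this (a sketch; prob is whatever problem you already set up):

using Suppressor, DifferentialEquations

# Only stderr (where the warnings go) is suppressed; the solution is still returned.
sol = @suppress_err solve(prob, EulerHeun(), dt = 0.01)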
My experiments (using the C library directly) suggest that using the tm_lim parameter to limit the time taken by GLPK on a mixed integer programming problem results in a problem pointer that contains the best solution found so far. However, I can't find any confirmation of this in the documentation. Does a timed-out computation always leave the best discovered solution in the problem buffer?
Thanks!
In my anecdotal experience, stopping via the tm_lim parameter does indeed leave the best solution found so far in the problem object. I could not find verification of this in the documentation either, so I looked at the source.
glpk iterates over a loop, updating the solution in-place until one of four termination criteria (optimal solution, unbounded solution, time limit, iteration limit) is satisfied. Once this happens, glpk stops updating the solution and returns a value indicating the satisfied criterion.
You can verify this in the function ssx_phase_II in src/glpssx02.c in https://ftp.gnu.org/gnu/glpk/glpk-4.35.tar.gz. Look at references to tm_lim.
A final piece of justification is the documentation for the --tmlim command line option:
--tmlim nnn limit solution time to nnn seconds (--tmlim 0 allows
obtaining solution at initial point)
Passing --tmlim 0 would return the initial solution.
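For illustration, here is roughly how that looks with the glp_* C API (the model-building calls are omitted; GLP_ETMLIM is the return code for a time-limit stop):

#include <stdio.h>
#include <glpk.h>

int main(void) {
    glp_prob *mip = glp_create_prob();
    /* ... build rows, columns, bounds and the constraint matrix here ... */

    glp_iocp parm;
    glp_init_iocp(&parm);
    parm.presolve = GLP_ON;   /* solve the LP relaxation internally */
    parm.tm_lim   = 5000;     /* time limit, in milliseconds */

    int ret = glp_intopt(mip, &parm);
    if (ret == GLP_ETMLIM) {
        /* Time limit hit: the problem object still holds the incumbent,
           i.e. the best integer-feasible solution found so far (if any). */
        printf("timed out, incumbent objective = %g\n", glp_mip_obj_val(mip));
    } else if (ret == 0) {
        printf("solved, objective = %g\n", glp_mip_obj_val(mip));
    }

    glp_delete_prob(mip);
    return 0;
}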
I'm running CPLEX from IBM ILOG CPLEX Optimization Studio 12.6.
Here I'm facing a weird issue: solving the same optimization problem (pure LP) multiple times in a row yields different results.
The aim is to solve once, then iteratively modify the coefficient matrix, and re-solve the problem. However, we experienced that the changes between iterations did not correspond to the modifications.
This led us to try re-solving the problem without making modifications in between, which returned different results.
The catch is that we still do one major modification before we start iterating, and our hypothesis is that this change (cplex.setCoef(...) on about 10,000 rows) is done asynchronously, so that it is only partially done during the first re-solution iterations.
However, we cannot seem to find any documentation stating that this method is asynchronous, nor any way to ensure synchronous execution, so that all the changes are done before CPLEX restarts.
Does anyone know if this is the case? Is there any way to delay restart until cplex.setCoef(...) is done? The problem is quite huge, but the representative lines are:
functionUsingSetCoefOn10000rows();
for(var j = 0; j < 100; j++){
    cplex.solve();
    writeln("Iteration " + j + ": " + cplex.getObjValue());
    for(var k = 0; k < 100000; k++){
        doBusyWork(); // Just to kill time
    }
}
which outputs
Iteration 0: 1529486959.814946
Iteration 1: 1544325969.750444
Iteration 2: 1549669732.757587
Iteration 3: 1551818419.584333
...
Iteration 33: 1564007987.849925
...
Iteration 98: 1564007987.849925
Iteration 99: 1564007987.849925
Last minute update
Reducing the number of calls to cplex.setCoef to about 2,500 removes the issue, and all iterations return the same objective value. Sadly, we do need to change all 10,000 coefficients.
Edit: The OPL scripting and engine log: http://goo.gl/ywJhkm and here: http://goo.gl/v2Qhm9
Sorry that this is not really an answer, but it is too big to go as a comment...
I don't think that the setCoef() calls would be asynchronous and not complete - that would be very surprising. Such behaviour would be too unpredictable and too many other people would have problems with this behaviour. However, CPLEX itself will use multiple threads to solve a problem and that means that it can generate different solutions each time it runs. The example objective values that you show do seem to change significantly, so a few questions/observations:
1: The numbers seem to be monotonically increasing - are they all increasing like this until they reach the maximum value? It looks like some kind of convergence behaviour. On re-running, CPLEX will start from a previous solution if it can. Check that there isn't some other CPLEX parameter stopping the search early such as an iteration or time limit or wider solution optimality tolerance.
2: Have you looked at the CPLEX logs from each run to see what CPLEX is doing in each run?
3: If you have doubts about the model being solved, try dumping out the model as an LP file and checking the values in each iteration; they should all be the same in your case (a sketch follows this list). You can also try solving the LP file in the CPLEX standalone optimiser to see what value that gives.
4: Have you tried setting the parameters to make CPLEX use a different LP algorithm (e.g. primal simplex, barrier etc)?
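Regarding point 3, a minimal sketch inside your existing loop (exportModel and the file name here are illustrative; adapt them to your flow-control script):

functionUsingSetCoefOn10000rows();
for(var j = 0; j < 100; j++){
    cplex.solve();
    // Dump the model each iteration; the LP files should be identical
    // if nothing is actually modified between solves.
    cplex.exportModel("iteration_" + j + ".lp");
    writeln("Iteration " + j + ": " + cplex.getObjValue());
}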
I am writing a numerical model in R, for an ecological system, and solving it using "lsoda" from package deSolve.
My model has 14 state variables.
I define the model, set it up fine, and give time duration according to this:
nyears<-60
ndays<-nyears*365+1
times<-seq(0,nyears*365,by=1)
Rates of change of the state variables (e.g. the rate of change of variable "A1" is "dA1") are calculated from the current values of the state variables (at time = t) and a set of parameters.
Simplified example:
dA1<-Tf*A1*(ImaxA*p_sub)
Where Tf, ImaxA and p_sub are parameters, and A1 is my state variable at time=t.
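For context, in deSolve such a rate sits inside the model function, which looks roughly like this (a sketch; the real model has 14 state variables and their parameters):

model <- function(t, state, parms) {
  with(as.list(c(state, parms)), {
    # One of the 14 rate equations, as in the simplified example above.
    dA1 <- Tf * A1 * (ImaxA * p_sub)
    # ... rates for the other 13 state variables ...
    list(c(dA1))  # derivatives must be returned in the same order as `state`
  })
}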
When I solve the model, I use the lsoda solver like this:
out<-as.data.frame(lsoda(start,times,model,parms))
Sometimes (depending on my parameter combinations) the model run completes over the entire duration I have specified; however, sometimes it stops short of the mark (still giving me output up until the point where the solver "crashes"). When it "crashes", this message is displayed:
DLSODA- At current T (=R1), MXSTEP (=I1) steps
taken on this call before reaching TOUT
In above message, I1 = 5000
In above message, R1 = 11535.5
Warning messages:
1: In lsoda(start, times, model, parms) :
an excessive amount of work (> maxsteps ) was done, but integration was not successful - increase maxsteps
2: In lsoda(start, times, model, parms) :
Returning early. Results are accurate, as far as they go
It commonly appears when one of the state variables is getting exponentially bigger, or is tending very close to zero; however, sometimes it crashes when seemingly not much change is happening. I may be wrong, but is it due to the rates of change of the state variables becoming too large? If so, why might it also "crash" when there is not a fast rate of change?
Is there a way that I can make the solver complete its task with the specified parameter values, maybe with a more relaxed tolerance for error?
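For what it's worth, lsoda exposes the step budget and the error tolerances directly as arguments, so a more forgiving call would look roughly like this (the values are illustrative; start, times, model and parms are as above):

library(deSolve)

out <- as.data.frame(lsoda(start, times, model, parms,
                           rtol = 1e-4,        # relative tolerance (default 1e-6)
                           atol = 1e-6,        # absolute tolerance (default 1e-6)
                           maxsteps = 50000))  # default 5000, as in the warning above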
Thank you all for your contributions. I looked at some of the rates, and at the point of crashing the model was switching between two metabolic states; the fast rate of this binary switch caused the solver to stop, rejecting the solution because the rate of change was too large. I have fixed my model by introducing a gradual switch between states (with a logistic curve) instead of the binary switch. I acknowledge that I didn't give enough info in the original question, so thanks for the help you offered!
I have some questions about the options of the value analysis module and some extension options.
I use the command: frama-c-gui -val -slevel 100 -plevel 300 -absolute-valid-range 0x00000000-0xffffffff -metrics -metrics-value-cover -scope-def-interproc -main MYMAIN CODE/*.c
On a single file, -metrics gives me 3 gotos in a function that has none; how are gotos computed?
What is "Coverage estimation = 100.0%" with -metrics-value-cover I get a value between 80 and 100%, at the beginning I thought get <100% when I had dead code, but I had dead code when I get 100%, so I think get 100% if all functions in sources files are analysed ?
I suppose that "157 stmts in analyzed functions, 148 stmts analyzed (94.3%)" means I have dead code in my project, is that it?
With the option -scope-def-interproc I get 32 warnings (62 without), but on the website we can read (in the scope documentation):
The alarms emitted by the value analysis should therefore be examined carefully by the user.
So do I need to verify all 62 warnings, or just the 32 I get with this option?
On a single file, -metrics gives me 3 gotos in a function that has none
The C constructs &&, || and break; can be “normalized” into goto constructs.
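A hand-written illustration of the idea (not Frama-C's actual normalized output, which may differ):

/* Original: no goto, but a break inside a loop. */
int find_first_even(int *a, int n) {
    int i;
    for (i = 0; i < n; i++) {
        if (a[i] % 2 == 0)
            break;
    }
    return i;
}

/* Roughly how a normalizing front end can rewrite the break: */
int find_first_even_normalized(int *a, int n) {
    int i;
    for (i = 0; i < n; i++) {
        if (a[i] % 2 == 0)
            goto loop_exit;
    }
loop_exit:
    return i;
}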
I suppose that "157 stmts in analyzed functions, 148 stmts analyzed (94.3%)" means I have dead code in my project, is that it?
Yes. For the inputs visible to the value analysis, only 148 out of 157 statements are reachable. Note that if, for instance, main() has arguments argc and argv, the abstract values built for these arguments may not encompass all the values that should be considered. The best way to determine whether these 9 statements are really unreachable is to look at them (they are displayed in red in Frama-C's GUI).
With the option -scope-def-interproc I get 32 warnings (62 without), but on the website we can read (in the scope documentation)
It is not very clear what you are asking. It would help if you provided an example with full information (source code, command line(s)) so that one can reproduce your steps and clarify the meaning of the emitted messages for you. Produce a small, reduced example if you cannot share the complete one: it is (nearly) impossible to answer you with only the information provided so far.
What is "Coverage estimation = 100.0%" with -metrics-value-cover I get
a value between 80 and 100%, at the beginning I thought get <100% when
I had dead code, but I had dead code when I get 100%, so I think get
100% if all functions in sources files are analysed ?
Let's take an example.
[metrics] Value coverage statistics
=========================
Syntactically reachable functions = 3 (out of 3)
Semantically reached functions = 1
Coverage estimation = 33.3%
Unseen functions (2) =
<tests/metrics/reach.c>: baz; foo;
The first line, Syntactically reachable functions, is an over-approximation of the number of functions of your program that may eventually be called, starting from main. For example, a function whose address is never taken and which is never called directly won't be in this set.
Semantically reached functions is the number of functions of your program that were actually analyzed by the Value analysis.
Coverage estimation is the ratio between those two numbers. For small programs, in which all functions are reachable, it is usually 100%.
Unseen functions is the list of functions that were expected to be reached (syntactically), but were never analyzed by Value.
Notice that none of these numbers talk about instructions, so you may still get 100% with dead code.