I have a program that simulates the paths of particles using Julia's DifferentialEquations.jl package. The simulation allows for particles to hit devices; to prevent the continued simulation of such particles, I use the unstable_check option of the solver (specifically of the EulerHeun solver). However, this leads to warnings like the following:
┌ Warning: Instability detected. Aborting
└ @ SciMLBase <path>\.julia\packages\SciMLBase\0s9uL\src\integrator_interface.jl:351
As I simulate thousands of particles, this can be quite annoying (and slow).
Can I suppress this warning? And if not, is there another (better) way to abort the simulation of some particles?
I don't think a code sample makes sense / is necessary here; let me know though if you think otherwise.
https://diffeq.sciml.ai/stable/basics/common_solver_opts/#Miscellaneous
verbose: Toggles whether warnings are thrown when the solver exits early. Defaults to true.
Thus to turn off the warnings, you simply do solve(prob,alg;verbose=false).
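For example, a minimal sketch (the problem, the step size, and the particle_hit_device predicate are placeholders for your own setup):

# Keep the custom instability check, but silence the "Instability detected" warning.
sol = solve(prob, EulerHeun();
            dt = 1e-3,
            unstable_check = (dt, u, p, t) -> particle_hit_device(u),
            verbose = false)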
The simulation allows for particles to hit devices - to prevent the continued simulation of such particles, I use the unstable_check of the solver
Using a DiscreteCallback or ContinuousCallback with affect!(integrator) = terminate!(integrator) is a much better way to do this.
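For example, a minimal sketch with a toy ODE (the "device" condition is a placeholder; the same callback keyword also works for SDE problems solved with EulerHeun):

using DifferentialEquations

# Toy problem standing in for a particle trajectory.
f(u, p, t) = 1.01 * u
prob = ODEProblem(f, 0.5, (0.0, 10.0))

condition(u, t, integrator) = u - 1.0          # zero crossing when the "device" at u = 1.0 is hit
affect!(integrator) = terminate!(integrator)   # stop integrating this trajectory cleanly
cb = ContinuousCallback(condition, affect!)

sol = solve(prob, Tsit5(); callback = cb)      # the solution simply ends at the hit time, no warning thrown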
There is Suppressor.jl, although I don't know whether this reduces the overhead you get from the warnings being created, so a DiffEq-specific setting might be the better way to go here (I don't know much about DiffEq though, sorry!)
Here's an example from the readme:
julia> using Suppressor
julia> @suppress begin
           println("This string doesn't get printed!")
           @warn("This warning is ignored.")
       end
For suppressing just the warnings (i.e. output to stderr), you want @suppress_err.
I'm using the REPL inside VS Code and trying to debug code that gets stuck inside a certain package. I want to figure out which process is taking the time by looking at the stack trace, but I cannot interrupt it because the REPL does not respond to Ctrl+C. I pressed Ctrl+X by accident and that showed ^X on the screen.
I am using JuMP and GLPK, so it could be stuck there. However, I am not seeing any output.
I would also appreciate any tips on figuring out which process is causing it to be stuck.
Interrupts are not implemented in GLPK.jl. I've opened an issue: https://github.com/jump-dev/GLPK.jl/issues/171 (but it's unlikely to get fixed quickly).
If you're interested in contributing to JuMP, it's a good issue to get started with. You could look at the Gurobi.jl code for how we handle interrupts there as inspiration.
I started out using GLPK.jl and I also found that it would "hang" on large problems. However, I recommend trying the Cbc.jl solver. It has a time limit parameter which will interrupt the solver after a set number of seconds, and I found it to produce quality results. (Or you could use Cbc for dev/QA testing to determine what might be causing the hanging, and switch to GLPK for your production runs.)
You can set the time limit using the seconds parameter as follows.
For newer package versions:
model = Model(optimizer_with_attributes(Cbc.Optimizer,
    "seconds" => 60,
    "threads" => 4,
    "loglevel" => 0,
    "ratioGap" => 0.0001))
Or like this for older package versions:
model = Model(with_optimizer(Cbc.Optimizer,
    seconds=60,
    threads=4,
    loglevel=0,
    ratioGap=0.0001))
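As a rough alternative sketch for recent JuMP versions (the Cbc option names are the same as above; set_time_limit_sec is the solver-agnostic JuMP call):

using JuMP, Cbc

# Set the options after constructing the model instead of at construction time.
model = Model(Cbc.Optimizer)
set_optimizer_attribute(model, "seconds", 60)   # Cbc-specific time limit
set_optimizer_attribute(model, "loglevel", 0)
set_time_limit_sec(model, 60.0)                 # solver-agnostic alternative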
I have some Julia functions that are several hundreds of lines long that I would like to profile so that I can then work on optimizing the code.
I am aware of the BenchmarkTools package, which allows the overall execution time and memory consumption of a function to be measured using @btime or @benchmark. But those macros tell me nothing about where inside the functions the bottlenecks are. So my first step would have to be using some tool to identify which parts of the code are slow.
In Matlab for instance there is a very nice built-in profiler which runs a script/function and then reports the time spent on every line of the code. Similarly in Python there is a module called line_profiler which can produce a line-by-line report showing how much time was spent on every single line of a function.
What I’m looking for is simply a line-by-line report showing the total time spent on each line of code and how many times a particular piece of code was called.
Is there such a functionality in Julia? Either built-in or via some third-party package.
There is a Profiling chapter in Julia docs with all the necessary info.
Also, you can use ProfileView.jl or similar packages for visual exploration of the profiled code.
And, while not exactly a profiler, a package that is very useful in practice is TimerOutputs.jl.
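For completeness, a minimal sketch of the built-in statistical profiler described in the Profiling chapter (the profiled function is just a stand-in for your own code):

using Profile

work() = sum(abs2, rand(10_000) .- 0.5)   # stand-in for one of your long functions

work()                       # run once first so compilation is not included in the profile
@profile for _ in 1:10_000
    work()
end
Profile.print()              # per-line sample counts, organized as a call tree
# Profile.clear()            # reset the collected samples before profiling something else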
UPD: Since Julia is a compiled language, measuring the timing of individual lines can be misleading, because the actual code that is executed can be very different from what is written in the source.
For example, the following Julia code
function f()
    x = 0
    for i in 0:100_000_000
        x += i
    end
    x
end
compiles down to
julia> @code_llvm f()
;  @ REPL[8]:1 within `f'
define i64 @julia_f_594() {
top:
;  @ REPL[8]:7 within `f'
    ret i64 5000000050000000
}
I.e., there is no loop at all. This is why, instead of execution time, a proxy metric is used: how often a line appears in the set of all collected backtraces. Of course, this is not the same as the execution time, but it gives a good approximation of where the bottleneck is, because lines with long execution times appear in backtraces more often.
There is also OwnTime.jl. It doesn't do call counts, though that should be easy to add.
function somefun()
    x::Int = 1
    x = 0.5
end
This compiles with no warning. Of course, calling it produces an InexactError: Int64(0.5). Question: can you enforce a compile-time check?
Julia is a dynamic language in this sense. So, no, it appears you cannot detect whether an assignment will result in such an error without running the function first, as this kind of type checking is done at runtime.
I wasn't sure myself, so I wrapped this function in a module to force (pre)compilation in the absence of running the function, and the result was that no error was thrown, which confirms this idea. (see here if you want to see what I mean by this).
Having said this, to answer the spirit of your question: is there a way to avoid such obscure runtime errors from creeping up in unexpected ways?
Yes there is. Consider the following two, almost equivalent functions:
function fun1(x ); y::Int = x; return y; end;
function fun2(x::Int); y::Int = x; return y; end;
fun1(0.5) # ERROR: InexactError: Int64(0.5)
fun2(0.5) # ERROR: MethodError: no method matching fun2(::Float64)
You may think, big deal, we exchanged one error for another. But this is not the case. In the first instance, you don't know that your input will cause a problem until the point where it gets used in the function. Whereas in the second case, you are effectively enforcing a type check at the point of calling the function.
This is a trivial example of programming "by contract", by making use of Julia's elegant type-checking system. See Design By Contract for details.
So the answer to your question is: yes, if you rethink your design and follow good programming practices, such that these kinds of errors are caught early on, then you can avoid having them occur later in obscure scenarios where they are hard to detect or fix.
The Julia manual provides a style guide which may also be of help (the example I give above is right at the top!).
It's worth thinking through what "compile time" really is in Julia — because it's probably not what you're thinking.
When you define the function:
julia> function somefun()
           x::Int = 1
           x = 0.5
       end
somefun (generic function with 1 method)
You are not compiling it. Julia won't compile it, in fact, until you call it. Julia's compiler can be thought of as Just-Barely-Ahead-Of-Time, standing in contrast to typical JIT or AOT designs.
Now, when you call the function it compiles it and then runs it which throws the error. You can see this compilation happening the very first time you call the function — it takes a bit more time and memory as it generates and caches the specialized code:
julia> @time try somefun() catch end
  0.005828 seconds (6.76 k allocations: 400.791 KiB)

julia> @time try somefun() catch end
  0.000107 seconds (6 allocations: 208 bytes)
So perhaps you can see that with Julia's compilation model it doesn't so much matter if it gets caught at compile time or not — even if Julia refused to compile (and cache) the code it'd behave exactly like what you currently see. It'd still allow you to define the function in the first place, and it'd still only throw its error upon calling the function.
The question you mean to ask is if Julia could (or should) catch this error at function definition time. And then the question is really — is it ok to define a method that always results in an error? What about a function like error itself? In Julia, it's totally fine to define a method that unconditionally errors like this one, and there can be good reasons to do so.
Now, there are ways to ask Julia if it is able to detect that this method will always unconditionally error:
julia> @code_typed somefun()
CodeInfo(
1 ─     invoke Base.convert(Main.Int::Type{Int64}, 0.5::Float64)::Union{}
└──     $(Expr(:unreachable))::Union{}
) => Union{}
This is the very first step in Julia's process of compilation, and in this case it can see that everything beyond convert(Int, 0.5) is unreachable — that is, it errors. Further, it knows that since the function will never return, its return type is Union{} (that is, no possible type can ever be returned!). So you can ask Julia to do this step with, for example, the @inferred macro as part of a test suite.
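If you want something closer to a definition-time check, a rough sketch is to query inference directly without calling the function; an inferred return type of Union{} means the method can never return normally, i.e. it always throws:

# Evaluates to true for the somefun above; no call to somefun happens here.
only(Base.return_types(somefun, Tuple{})) === Union{}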
I'm benchmarking Julia execution speed. I executed @time [i^2 for i in 1:1000] at the Julia prompt, which resulted in something on the order of 20 ms. This seems strange, since my computer is modern with an i7 processor (I am using Linux Ubuntu).
Another strange thing is that when I execute the same command on a range of 1:10 the execution time is 15 ms.
There must be something trivial that I am missing here?
Several things, see performance tips:
Don't benchmark in global scope.
Don't measure the first execution of something like this.
Use BenchmarkTools.
Julia is a JIT-compiled language, so the first time you measure things, you're measuring compilation time. This is a small fixed overhead, so for anything that takes a substantial time, it's negligible, but for short-running code like this, it's almost all of the time. Non-constant global variables force the compiler to assume almost nothing about types, which tends to poison all of your performance. This is fine in some circumstances, but most of the time, you a) should write code so that the inputs are explicit parameters to functions, rather than implicit parameters coming from some globals, and b) shouldn't write code that uses mutable global state.
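A minimal sketch of how this is usually benchmarked (the work is wrapped in a function rather than left in global scope, and BenchmarkTools runs it many times after compilation):

using BenchmarkTools

squares(n) = [i^2 for i in 1:n]   # put the work in a function, not in global scope

@btime squares(1000);             # minimum time over many samples, compilation excluded
@benchmark squares(1000)          # full statistics: times and allocations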
I'm running CPLEX from IBM ILOG CPLEX Optimization Studio 12.6.
Here I'm facing a weird issue. Solving the same optimization problem (pure LP) multiple times in a row, yields different results.
The aim is to solve once, then iteratively modify the coefficient matrix, and re-solve the problem. However, we experienced that the changes between iterations did not correspond to the modifications.
This led us to try re-solving the problem without making modifications in between, which returned different results.
The catch is that we still do one major modification before we start iterating, and our hypothesis is that this change (cplex.setCoef(...) on about 10,000 rows) is done asynchronously, so that it is only partially done during the first re-solution iterations.
However, we cannot seem to find any documentation stating that this method is asynchronous, nor any way to ensure synchronous execution, so that all the changes are done before CPLEX restarts.
Does anyone know if this is the case? Is there any way to delay restart until cplex.setCoef(...) is done? The problem is quite huge, but the representative lines are:
functionUsingSetCoefOn10000rows();
for (var j = 0; j < 100; j++) {
    cplex.solve();
    writeln("Iteration " + j + ": " + cplex.getObjValue());
    for (var k = 0; k < 100000; k++) {
        doBusyWork(); // just to kill time
    }
}
which outputs
Iteration 0: 1529486959.814946
Iteration 1: 1544325969.750444
Iteration 2: 1549669732.757587
Iteration 3: 1551818419.584333
...
Iteration 33: 1564007987.849925
...
Iteration 98: 1564007987.849925
Iteration 99: 1564007987.849925
Last minute update
Reducing the number of calls to cplex.setCoef to about 2500 removes the issue, and all iterations return the same objective value. Sadly, we do need to change all the 10,000 coefficients.
Edit: The OPL scripting and engine log: http://goo.gl/ywJhkm and here: http://goo.gl/v2Qhm9
Sorry that this is not really an answer, but it is too big to go as a comment...
I don't think that the setCoef() calls would be asynchronous and incomplete - that would be very surprising. Such behaviour would be too unpredictable and too many other people would have run into it. However, CPLEX itself will use multiple threads to solve a problem, and that means it can generate different solutions each time it runs. The example objective values that you show do seem to change significantly, so a few questions/observations:
1: The numbers seem to be monotonically increasing - are they all increasing like this until they reach the maximum value? It looks like some kind of convergence behaviour. On re-running, CPLEX will start from a previous solution if it can. Check that there isn't some other CPLEX parameter stopping the search early, such as an iteration or time limit or a wider solution optimality tolerance.
2: Have you looked at the CPLEX logs from each run to see what CPLEX is doing in each run?
3: If you have doubts about the model being solved, try dumping out the model as an LP file and checking the values in each iteration. They should all be the same in your case. You can also try solving the LP file in the standalone CPLEX optimiser to see what value that gives.
4: Have you tried setting the parameters to make CPLEX use a different LP algorithm (e.g. primal simplex, barrier, etc.)?