Reading back through some of my old Python code to refresh myself on the yield keyword, I realized that I have not seen a similar idea in Julia. Does an analogue of yield exist? (Note that Julia's Base library comes with a yield function, but that is for tasks and does not act like the yield keyword does in Python.)
There are no built-ins for yield (unfortunately, if you ask me). However, Julia has a very advanced macro system, and the theory as well as multiple possible implementations of coroutines/generators are well studied, so there are a couple of implementations in third-party packages.
One of them is FGenerators.jl, previously GeneratorsX.jl, which works mostly in the transducers ecosystem.
Another is ResumableFunctions.jl.
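For example, ResumableFunctions.jl gives you @resumable and @yield macros that read very much like Python generators. A minimal sketch (adapted from the kind of Fibonacci example the package documents; check its README for the current API):

using ResumableFunctions

@resumable function fib(n::Int)
    a, b = 0, 1
    for i in 1:n
        @yield a          # behaves like Python's yield
        a, b = b, a + b
    end
end

for x in fib(5)
    println(x)            # prints 0, 1, 1, 2, 3
end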
You can construct generators that yield values when iterated upon:
f(x) = x*x
g = (f(i) for i = 1:5)  # generator that yields values
for x in g
    println(x)
end
I have some Julia functions that are several hundreds of lines long that I would like to profile so that I can then work on optimizing the code.
I am aware of the BenchmarkTools package, which allows the overall execution time and memory consumption of a function to be measured using @btime or @benchmark. But those macros tell me nothing about where inside the functions the bottlenecks are. So my first step would have to be using some tool to identify which parts of the code are slow.
In Matlab for instance there is a very nice built-in profiler which runs a script/function and then reports the time spent on every line of the code. Similarly in Python there is a module called line_profiler which can produce a line-by-line report showing how much time was spent on every single line of a function.
What I’m looking for is simply a line-by-line report showing the total time spent on each line of code and how many times a particular piece of code was called.
Is there such a functionality in Julia? Either built-in or via some third-party package.
There is a Profiling chapter in Julia docs with all the necessary info.
Also, you can use ProfileView.jl or similar packages for visual exploration of the profiled code.
And, while not exactly a profiler, TimerOutputs.jl is a very useful package in practice.
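To make that concrete, here is a minimal sketch of the built-in sampling workflow from the Profile standard library (the workload function profile_me is made up for illustration):

using Profile

function profile_me(n)                  # hypothetical workload to profile
    s = 0.0
    for i in 1:n
        s += sqrt(i) * sin(i)
    end
    return s
end

profile_me(10)                          # run once so compilation isn't profiled
@profile profile_me(10_000_000)         # collect backtrace samples while running
Profile.print()                         # per-line sample counts, organized by call tree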
UPD: Since Julia is a compiled language, it makes no sense to measure the timing of individual lines, since the actual code that is executed can be very different from what is written in the Julia source.
For example, the following Julia code
function f()
    x = 0
    for i in 0:100_000_000
        x += i
    end
    x
end
compiles down to
julia> @code_llvm f()
;  @ REPL[8]:1 within `f'
define i64 @julia_f_594() {
top:
;  @ REPL[8]:7 within `f'
  ret i64 5000000050000000
}
That is, there is no loop at all. This is why, instead of execution time, the proxy metric of how often a line appears in the set of collected backtraces is used. Of course, it is not the same as the execution time, but it gives a good approximation of where the bottleneck is, because lines with long execution times appear in backtraces more often.
There is also OwnTime.jl. It doesn't do call counts, though that should be easy to add.
I am trying to do some stuff in parallel. Based on @time the performance is excellent, but I am actually waiting quite a long time in front of my computer.
The code is something like the following.
function max(n)
    rand_n = SharedArray{Float64}(n, n)
    @distributed for i in 1:n
        @distributed for j in 1:n
            r = Random(Uniform(), 100)
            rand_n[i,j] = StatsBase.maximum(EV0)
        end
    end
    rand_n
end
@time max(1000)
0.000166 seconds (118 allocations: 18.203 KiB)
tick()
max(1000)
tock()
2.865833086s: 2 seconds, 865 milliseconds
So the actual time elapsed on the computer is much longer than what @time says.
You should read the documentation of @distributed (type ?@distributed at the prompt):
Note that without a reducer function, @distributed executes
asynchronously, i.e. it spawns independent tasks on all available
workers and returns immediately without waiting for completion. To
wait for completion, prefix the call with @sync, like:

@sync @distributed for var = range
    body
end
Currently, you are just starting the calculation and bailing out, so you get the timing of starting the calculation without waiting for it to finish.
A couple more things:
Please always provide a "minimal working example" so that other posters can just copy-paste your code and have it run. So include using Distributed and other required packages, define all variables, etc. What is EV0? What does Random mean here? And so on.
You're defining r but you're not using it. What's it for?
max is the name of a function in Base, it's probably not a good idea to overload that name like this.
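Putting those points together, a corrected version might look roughly like the sketch below. It assumes the intent was to store the maximum of 100 uniform draws in each cell (since EV0 and Random(Uniform(), 100) are undefined in the question), distributes only the outer loop, and waits for completion with @sync:

using Distributed
addprocs(4)                                     # start some worker processes
@everywhere using SharedArrays

function max_rand(n)                            # renamed to avoid shadowing Base.max
    rand_n = SharedArray{Float64}(n, n)
    @sync @distributed for i in 1:n
        for j in 1:n
            rand_n[i, j] = maximum(rand(100))   # assumed intent: max of 100 uniform draws
        end
    end
    return rand_n
end

@time max_rand(1000)                            # @time now waits for the workers to finish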
function somefun()
    x::Int = 1
    x = 0.5
end
This compiles with no warning. Of course, calling it produces an InexactError: Int64(0.5). Question: can you enforce a compile-time check?
Julia is a dynamic language in this sense. So, no, it appears you cannot detect whether an assignment will result in such an error without running the function first, as this kind of type checking is done at runtime.
I wasn't sure myself, so I wrapped this function in a module to force (pre)compilation without running the function, and the result was that no error was thrown, which confirms this idea.
Having said this, to answer the spirit of your question: is there a way to prevent such obscure runtime errors from creeping up in unexpected ways?
Yes there is. Consider the following two, almost equivalent functions:
function fun1(x ); y::Int = x; return y; end;
function fun2(x::Int); y::Int = x; return y; end;
fun1(0.5) # ERROR: InexactError: Int64(0.5)
fun2(0.5) # ERROR: MethodError: no method matching fun2(::Float64)
You may think, big deal, we exchanged one error for another. But this is not the case. In the first instance, you don't know that your input will cause a problem until the point where it gets used in the function. Whereas in the second case, you are effectively enforcing a type check at the point of calling the function.
This is a trivial example of programming "by contract", by making use of Julia's elegant type-checking system. See Design By Contract for details.
So the answer to your question is: yes, if you rethink your design and follow good programming practices such that these kinds of errors are caught early on, then you can avoid having them occur later in obscure scenarios where they are hard to detect or fix.
The Julia manual provides a style guide which may also be of help (the example I give above is right at the top!).
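As a further illustration (my own example, not from the post above), you can combine the dispatch-level check with an explicit precondition so that violations surface right at the call boundary:

function safe_sqrt(x::Real)
    x >= 0 || throw(DomainError(x, "safe_sqrt requires a non-negative input"))  # explicit precondition
    return sqrt(x)
end

safe_sqrt(4.0)     # 2.0
safe_sqrt("four")  # MethodError at the call site, before the body ever runs
safe_sqrt(-1.0)    # DomainError from the contract check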
It's worth thinking through what "compile time" really is in Julia — because it's probably not what you're thinking.
When you define the function:
julia> function somefun()
           x::Int = 1
           x = 0.5
       end
somefun (generic function with 1 method)
You are not compiling it. Julia won't compile it, in fact, until you call it. Julia's compiler can be thought of as Just-Barely-Ahead-Of-Time, standing in contrast to typical JIT or AOT designs.
Now, when you call the function it compiles it and then runs it which throws the error. You can see this compilation happening the very first time you call the function — it takes a bit more time and memory as it generates and caches the specialized code:
julia> @time try somefun() catch end
  0.005828 seconds (6.76 k allocations: 400.791 KiB)

julia> @time try somefun() catch end
  0.000107 seconds (6 allocations: 208 bytes)
So perhaps you can see that with Julia's compilation model it doesn't so much matter if it gets caught at compile time or not — even if Julia refused to compile (and cache) the code it'd behave exactly like what you currently see. It'd still allow you to define the function in the first place, and it'd still only throw its error upon calling the function.
The question you mean to ask is if Julia could (or should) catch this error at function definition time. And then the question is really — is it ok to define a method that always results in an error? What about a function like error itself? In Julia, it's totally fine to define a method that unconditionally errors like this one, and there can be good reasons to do so.
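To make that concrete (an illustration, not from the original answer): defining a method whose only job is to throw is perfectly legal, and the error only happens when the method is actually called:

frobnicate(x) = error("frobnicate is not implemented for $(typeof(x))")  # hypothetical fallback; fine to define

frobnicate(1.0)   # ERROR: frobnicate is not implemented for Float64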
Now, there are ways to ask Julia if it is able to detect that this method will always unconditionally error:
julia> @code_typed somefun()
CodeInfo(
1 ─ invoke Base.convert(Main.Int::Type{Int64}, 0.5::Float64)::Union{}
└── $(Expr(:unreachable))::Union{}
) => Union{}
This is the very first step in Julia's process of compilation, and in this case it can see that everything beyond convert(Int, 0.5) is unreachable — that is, it errors. Further, it knows that since the function will never return, its return type is Union{} (that is, no possible type can ever be returned!). So you can ask Julia to do this step with, for example, the @inferred macro as part of a test suite.
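Another way to ask inference the same question programmatically is a sketch like the following (note that Base.return_types is reflection on internals, not a stable public API, and its exact output can vary across Julia versions):

rts = Base.return_types(somefun, Tuple{})   # inferred return types for the zero-argument method
println(rts)                                # expected: Any[Union{}], i.e. inference says it never returns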
I am trying to test the speed of Julia ODE solvers. I used the Lorenz equation in the tutorial:
using DifferentialEquations
using Plots
function lorenz(t,u,du)
    du[1] = 10.0*(u[2]-u[1])
    du[2] = u[1]*(28.0-u[3]) - u[2]
    du[3] = u[1]*u[2] - (8/3)*u[3]
end
u0 = [1.0;1.0;1.0]
tspan = (0.0,100.0)
prob = ODEProblem(lorenz,u0,tspan)
sol = solve(prob,reltol=1e-8,abstol=1e-8,saveat=collect(0:0.01:100))
Loading the packages took about 25 s in the beginning, and the code ran for 7 s on a Windows 10 quad-core laptop in a Jupyter notebook. I understand that Julia needs to precompile packages first; is that why the loading time was so long? I found 25 s unbearable. Also, when I ran the solver again using different initial values it took much less time (~1 s) to run. Why is that? Is this the typical speed?
Tl;dr:
1. Julia packages have a precompilation phase. This helps make all further using calls quicker, at the cost of the first one storing some compilation data. This is only triggered on each package update.
2. using has to pull in the package, which takes a little bit (dependent on how much can precompile).
3. Precompilation isn't "complete", so the first time you run a function, even from a package, it will have to compile.
4. Julia devs know about this and there are already plans to get rid of (2) and (3) by making precompilation more complete. There are also plans to reduce compilation time, which I don't know details about.
5. All Julia functions specialize on the types that are given, and each function is a separate type, so DiffEq's internal functions are specializing on each ODE function you give.
6. In most cases with long computations, (5) doesn't actually matter since you aren't changing functions that often (if you are, consider changing parameters instead).
7. But (5) does matter when using it interactively. It makes it feel less "smooth".
8. We can get rid of this specialization on the ODE function, but it isn't the default because it causes a 2x-4x performance hit. Maybe it will be the default in the future.
9. Our timings post-precompilation are still about 20x better than things like SciPy's wrapped Fortran solvers on problems like this. So this is all a compilation-time problem, not a runtime problem. Compilation time is essentially constant (larger problems calling the same function have about the same compilation time), so this is really just an interactivity problem.
10. We (and Julia in general) can and will do better with interactivity in the future.
Full Explanation
This really isn't a DifferentialEquations.jl thing, this is just a Julia package thing. 25s would have to be including the precompilation time. The first time you load a Julia package it precompiles. Then that doesn't need to happen again until the next update. That's probably the longest initialization and it is quite long for DifferentialEquations.jl, but again that only happens each time you update the package code. Then, each time there's a small initialization cost for using. DiffEq is quite large, so it does take a bit to initialize:
@time using DifferentialEquations
5.201393 seconds (4.16 M allocations: 235.883 MiB, 4.09% gc time)
Then as noted in the comments you also have:
@time using Plots
6.499214 seconds (2.48 M allocations: 140.948 MiB, 0.74% gc time)
Then, the first time you run
function lorenz(t,u,du)
    du[1] = 10.0*(u[2]-u[1])
    du[2] = u[1]*(28.0-u[3]) - u[2]
    du[3] = u[1]*u[2] - (8/3)*u[3]
end
u0 = [1.0;1.0;1.0]
tspan = (0.0,100.0)
prob = ODEProblem(lorenz,u0,tspan)
@time sol = solve(prob,reltol=1e-8,abstol=1e-8,saveat=collect(0:0.01:100))
6.993946 seconds (7.93 M allocations: 436.847 MiB, 1.47% gc time)
But then the second and third time:
0.010717 seconds (72.21 k allocations: 6.904 MiB)
0.011703 seconds (72.21 k allocations: 6.904 MiB)
So what's going on here? The first time Julia runs a function, it will compile it. So the first time you run solve, it will compile all of its internal functions as it runs. All of the subsequent calls will run without that compilation. DifferentialEquations.jl also specializes on the function itself, so if we change the function:
function lorenz2(t,u,du)
    du[1] = 10.0*(u[2]-u[1])
    du[2] = u[1]*(28.0-u[3]) - u[2]
    du[3] = u[1]*u[2] - (8/3)*u[3]
end
u0 = [1.0;1.0;1.0]
tspan = (0.0,100.0)
prob = ODEProblem(lorenz2,u0,tspan)
we will incur some of the compilation time again:
@time sol = solve(prob,reltol=1e-8,abstol=1e-8,saveat=collect(0:0.01:100))
3.690755 seconds (4.36 M allocations: 239.806 MiB, 1.47% gc time)
So that's the what, now the why. There are a few things coming together here. First of all, Julia packages do not fully precompile: they don't keep the cached compiled versions of actual methods between sessions. This is something that is on the 1.x release list to do, and it would get rid of that first hit, similar to just calling a C/Fortran package since it would just be hitting a lot of ahead-of-time (AOT) compiled functions. So that'll be nice, but for now just note that there is a startup time.
Now let's talk about changing the functions. Every function in Julia automatically specializes on its arguments (see this blog post for details). The key idea here is that every function in Julia is a separate concrete type. So, since the problem type here is parameterized, changing the function triggers compilation. Note it's that relation: you can change parameters of the function (if you had parameters), you can change the initial conditions, etc., but it's only changing the type that triggers recompilation.
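A tiny illustration of that point (generic Julia, not DiffEq-specific): two functions with identical bodies still have distinct types, so anything that specializes on the function type compiles once per function:

f1(x) = 2x
f2(x) = 2x

typeof(f1) == typeof(f2)        # false: each function is its own singleton type

apply_twice(g, x) = g(g(x))     # specializes separately for every g it is given
apply_twice(f1, 1.0)            # triggers a compile for typeof(f1)
apply_twice(f2, 1.0)            # triggers another compile for typeof(f2)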
Is it worth it? Well, maybe. We want to specialize to have things fast for calculations which are difficult. Compilation time is constant (i.e. you can solve a 6-hour ODE and it'll still be a few seconds), so the computationally costly calculations aren't affected here. Monte Carlo simulations where you're running thousands of parameters and initial conditions aren't affected here either, because if you're just changing values of initial conditions and parameters then it won't recompile. But interactive use where you are changing functions does get a second or so hit in there, which isn't nice. One answer from the Julia devs for this is to spend post-Julia-1.0 time speeding up compilation times, which is something I don't know the details of, but I am assured there's some low-hanging fruit here.
Can we get rid of it? Yes. DiffEq Online doesn't recompile for each function because it's geared towards online use.
function lorenz3(t,u,du)
    du[1] = 10.0*(u[2]-u[1])
    du[2] = u[1]*(28.0-u[3]) - u[2]
    du[3] = u[1]*u[2] - (8/3)*u[3]
    nothing
end
u0 = [1.0;1.0;1.0]
tspan = (0.0,100.0)
f = NSODEFunction{true}(lorenz3,tspan[1],u0)
prob = ODEProblem{true}(f,u0,tspan)
@time sol = solve(prob,reltol=1e-8,abstol=1e-8,saveat=collect(0:0.01:100))
1.505591 seconds (860.21 k allocations: 38.605 MiB, 0.95% gc time)
And now we can change the function and not incur compilation cost:
function lorenz4(t,u,du)
    du[1] = 10.0*(u[2]-u[1])
    du[2] = u[1]*(28.0-u[3]) - u[2]
    du[3] = u[1]*u[2] - (8/3)*u[3]
    nothing
end
u0 = [1.0;1.0;1.0]
tspan = (0.0,100.0)
f = NSODEFunction{true}(lorenz4,tspan[1],u0)
prob = ODEProblem{true}(f,u0,tspan)
@time sol = solve(prob,reltol=1e-8,abstol=1e-8,saveat=collect(0:0.01:100))
0.038276 seconds (242.31 k allocations: 10.797 MiB, 22.50% gc time)
And tada, by wrapping the function in NSODEFunction (which internally uses FunctionWrappers.jl) it no longer specializes per function, and you hit the compilation time once per Julia session (and then, once that's cached, once per package update). But notice that this has about a 2x-4x cost, so I am not sure if it will be enabled by default. We could make this happen by default inside of the problem-type constructor (i.e. no extra specialization by default, but the user can opt into more speed at the cost of interactivity), but I am unsure what the better default is here (feel free to comment on the issue with your thoughts). But it will definitely get documented soon after Julia does its keyword-argument changes, and so "compilation-free" mode will be a standard way to use it, even if it isn't the default.
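To see the FunctionWrappers.jl idea in isolation, here is a rough sketch of the mechanism (not DiffEq's actual internals): the wrapper's type is fixed by the call signature, not by the wrapped function, so code compiled against the wrapper does not respecialize when the function changes.

using FunctionWrappers: FunctionWrapper

w1 = FunctionWrapper{Float64,Tuple{Float64}}(sin)
w2 = FunctionWrapper{Float64,Tuple{Float64}}(x -> 2x)

typeof(w1) == typeof(w2)   # true: same wrapper type regardless of the wrapped function
w1(1.0), w2(1.0)           # both callable as Float64 -> Float64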
But just to put it into perspective,
import numpy as np
from scipy.integrate import odeint
y0 = [1.0,1.0,1.0]
t = np.linspace(0, 100, 10001)
def f(u,t):
    return [10.0*(u[1]-u[0]),u[0]*(28.0-u[2])-u[1],u[0]*u[1]-(8/3)*u[2]]
%timeit odeint(f,y0,t,atol=1e-8,rtol=1e-8)
1 loop, best of 3: 210 ms per loop
we're looking at whether this interactive convenience should be made the default, at the cost of being 5x instead of 20x faster than SciPy's default here (though our default will usually be much more accurate than SciPy's default, but that's data for another time which can be found in the benchmarks, or just ask). On one hand it makes sense for ease of use, but on the other hand, if people don't know to re-enable the specialization for long calculations and Monte Carlo (which is where you really want speed), then lots of them will take a 2x-4x performance hit which could amount to extra days/weeks of computation. Ehh... tough choices.
So in the end there's a mixture of optimization choices and some precompilation features missing from Julia that affect the interactivity without affecting the true runtime speed. If you're looking to estimate parameters using some big Monte Carlo, or solve a ton of SDEs, or solve a big PDE, we have that down. That was our first goal and we made sure to hit it as well as possible. But playing around in the REPL does have 2-3 second "glitches" which we also cannot ignore (better than playing around in C/Fortran, of course, but still not ideal for a REPL). For this, I've shown you that there are solutions already being developed and tested, so hopefully this time next year we can have a better answer for that specific case.
PS
Two other things to note. If you're only using the ODE solvers, you can just do using OrdinaryDiffEq to keep from downloading/installing/compiling/importing all of DifferentialEquations.jl (this is described in the manual). Also, using saveat like that probably isn't the fastest way to solve this problem: solving it with a lot fewer saved points and using the dense output as necessary may be better here.
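For example (a sketch under the same setup as above), you could drop saveat, let the solver pick its own steps, and interpolate from the dense solution only where you need values:

sol = solve(prob, reltol=1e-8, abstol=1e-8)   # no saveat: stores far fewer points
sol(1.234)                                    # dense output: interpolated state at t = 1.234
vals = sol.(0:0.01:100)                       # interpolate onto a fine grid only if you need it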
Edit
I opened an issue detailing how we can reduce the "between function" compilation time without losing the speedup that specializing gives. I think this is something we can make a short-term priority since I agree that we could do better here.