I am having a problem running a spiking-neuron simulator. I keep getting the error message, "operation +: Warning adding a matrix with the empty matrix will give an empty matrix result." I'm writing this program in Scilab, but I'm hoping the problem will be clear to the educated eye regardless. What I am doing is converting an existing MATLAB program to Scilab. The original MATLAB program and an explanation can be found here: https://www.izhikevich.org/publications/spikes.pdf
What happens in my Scilab version is that the first pass through the loop produces all the expected values. I know this because I hit pause at the end of the first run, right before "end," and check all the values and matrix elements. However, if I run the program proper, which includes a loop of 20 iterations, I get the error message above and all of the matrix values are empty! I cannot figure out what the problem is. I am fairly new to programming, so the answer may be very simple as far as I know. Here is the Scilab version of the program:
Ne=8; Ni=2;
re=rand(Ne,1); ri=rand(Ni,1);
a=[0.02*ones(Ne,1); 0.02+0.08*ri];
b=[0.2*ones(Ne,1); 0.25-0.05*ri];
c=[-65+15*re.^2; -65*ones(Ni,1)];
d=[8-6*re.^2; 2*ones(Ni,1)];
S=[0.5*rand(Ne+Ni,Ne), -rand(Ne+Ni,Ni)];
v=60*rand(10,1)
v2=v
u=b.*v;
firings=[];
for t=1:20
    I=[5*rand(Ne,1,"normal");2*rand(Ni,1,"normal")];
    fired=find(v>=30);
    j = length(fired);
    h = t*ones(j,1);
    k=[h,fired'];
    firings=[firings;k];
    v(fired)=c(fired);
    u(fired)=u(fired)+d(fired);
    I=I+sum(S(:,fired),"c");
    v=v+0.5*(0.04*v.^2+5*v+140-u+I);
    v=v+0.5*(0.04*v.^2+5*v+140-u+I);
    u=u+a.*(b.*v-u);
end
plot(firings(:,1), firings(:,2),".");
I tried everything to no avail. The program should run through 20 iterations and produce a "raster plot" of dots representing the fired neurons at each of the 20 time steps.
You can add the following line
oldEmptyBehaviour("on")
at the beginning of your script in order to prevent the default Scilab rule (any algebraic operation involving an empty matrix yields an empty matrix). However, you will still get some warnings (although the result will be OK). As a definitive fix, I recommend testing the emptiness of fired in your code, like this:
Ne=8; Ni=2;
re=rand(Ne,1); ri=rand(Ni,1);
a=[0.02*ones(Ne,1); 0.02+0.08*ri];
b=[0.2*ones(Ne,1); 0.25-0.05*ri];
c=[-65+15*re.^2; -65*ones(Ni,1)];
d=[8-6*re.^2; 2*ones(Ni,1)];
S=[0.5*rand(Ne+Ni,Ne), -rand(Ne+Ni,Ni)];
v=60*rand(10,1)
v2=v
u=b.*v;
firings=[];
for t=1:20
    I=[5*rand(Ne,1,"normal");2*rand(Ni,1,"normal")];
    fired=find(v>=30);
    if ~isempty(fired)
        j = length(fired);
        h = t*ones(j,1);
        k=[h,fired'];
        firings=[firings;k];
        v(fired)=c(fired);
        u(fired)=u(fired)+d(fired);
        I=I+sum(S(:,fired),"c");
    end
    v=v+0.5*(0.04*v.^2+5*v+140-u+I);
    v=v+0.5*(0.04*v.^2+5*v+140-u+I);
    u=u+a.*(b.*v-u);
end
plot(firings(:,1), firings(:,2),".");
The operation [] + 1 is not really defined in a mathematical sense. It might fail or produce different results depending on the software you use. For example:
Scilab 5: [] + 1 produces 1
Scilab 6: [] + 1 produces [] and a warning
Julia 1.8: [] .+ 1 produces [], but [] + 1 is an error.
Python + NumPy 1.23: np.zeros((0,0)) + 1 produces an empty array.
I suggest checking with size() or a comparison to the empty matrix to avoid such strange behaviour.
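For instance, here is a minimal Scilab sketch of such a guard (isempty() is the built-in equivalent of the size()-based check):

x = [];
if isempty(x) then
    disp("empty operand, skipping the addition");
else
    y = x + 1;
end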
Following the approach of this answer, I am trying to understand exactly what happens and how expressions and generated functions work in Julia, in the context of metaprogramming.
The goal is to optimize a recursive function using expressions and generated functions (for a concrete example you can have a look at the question answered in the link provided above).
Consider the following modified Fibonacci function, in which I want to compute the Fibonacci series up to n and multiply it by a number p.
The straightforward, recursive implementation would be
function fib(n::Integer, p::Real)
    if n <= 1
        return 1 * p
    else
        return n * fib(n-1, p)
    end
end
As a first step, I could define a function which returns an expression instead of the computed value
function fib_expr(n::Integer, p::Symbol)
    if n <= 1
        return :(1 * $p)
    else
        return :($n * $(fib_expr(n-1, p)))
    end
end
which, e.g. returns something like
julia> ex = fib_expr(3, :myp)
:(3 * (2 * (1myp)))
In this way I get an expression which is fully expanded and depends on the value assigned to the symbol myp. I no longer see the recursion: basically, I am metaprogramming, since I created a function that creates another "function" (in this case we call it an expression, though).
I can now set myp = 0.5 and call eval(ex) to compute the result.
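Concretely, continuing from the ex above:

julia> myp = 0.5
0.5

julia> eval(ex)
3.0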
However, this is slower than the first approach.
What I can do though, is to generate a parametric function in the following way
@generated function fib_gen{n}(::Type{Val{n}}, p::Real)
    return fib_expr(n, :p)
end
And magically, calling fib_gen(Val{3}, 0.5) gets things done, and is incredibly fast.
So, what is going on?
To my understanding, in the first call to fib_gen(Val{3}, 0.5), the parametric function fib_gen{Val{3}}(...) gets compiled and its content is the fully expanded expression obtained through fib_expr(3, :p), i.e. 3*2*1*p with p substituted with the input value.
The reason why it is so fast, then, is that fib_gen is basically just a series of multiplications, whereas the original fib has to push a stack frame for every single recursive call, making it slower. Am I correct?
To give some numbers, here is my short benchmark using BenchmarkTools.
julia> @benchmark fib(10, 0.5)
...
mean time: 26.373 ns
...
julia> p = 0.5
0.5
julia> @benchmark eval(fib_expr(10, :p))
...
mean time: 177.906 μs
...
julia> @benchmark fib_gen(Val{10}, 0.5)
...
mean time: 2.046 ns
...
I have many questions:
Why is the second case so slow?
What exactly is ::Type{Val{n}}, and what does it mean? (I copied that from the answer linked above.)
Because of the JIT compiler, I am sometimes lost as to what happens at compile time and what happens at run time, as is the case here...
Furthermore, I tried to combine fib_expr and fib_gen into a single function as follows:
@generated function fib_tot{n}(::Type{Val{n}}, p::Real)
    if n <= 1
        return :(1 * p)
    else
        return :(n * fib_tot(Val{n-1}, p))
    end
end
which, however, is slow:
julia> @benchmark fib_tot(Val{10}, 0.5)
...
mean time: 4.601 μs
...
What am I doing wrong here? Is it even possible to combine fib_expr and fib_gen in a single function?
I realize this is more of a monograph than a question; however, even though I have read the metaprogramming section a few times, I am having a hard time grasping everything, in particular with an applied example such as this one.
A monograph in response:
Metaprogramming basics
It will be easier to start with "normal" macros first. I'll relax the definition you used a bit:
function fib_expr(n::Integer, p)
    if n <= 1
        return :(1 * $p)
    else
        return :($n * $(fib_expr(n-1, p)))
    end
end
That allows passing in more than just symbols for p, like integer literals or whole expressions. Given this, we can define a macro for the same functionality:
macro fib_macro(n::Integer, p)
    fib_expr(n, p)
end
Now, if @fib_macro 45 1 is used anywhere in the code, at compile time it will first be replaced by a long nested expression
:(45 * (44 * ... * (1 * 1)) ... )
and then compiled normally -- to a constant.
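You can inspect the expansion yourself with @macroexpand (available in recent Julia versions; a smaller n is used here to keep the output short):

julia> @macroexpand @fib_macro 3 1
:(3 * (2 * (1 * 1)))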
That's all there is to macros, really: replacing syntax during compile time; and by recursion, this can be an arbitrarily long alternation between compiling and evaluating functions on expressions. And for things that are essentially constant, but tedious to write otherwise, it is very useful: a good example is Base.Math.@evalpoly.
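For example, @evalpoly expands at compile time into a fixed Horner scheme for a given list of coefficients; here c0 + c1*x + c2*x^2 evaluated at x = 2.0:

julia> @evalpoly(2.0, 1, 0, 3)
13.0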
Evaluation at runtime?
But it has the problem that you cannot inspect values which are only known at runtime: you can't implement fib(n) = @fib_macro n 1, since at compile time, n is a symbol representing the parameter, and not a number you can dispatch on.
The next best solution to this would be to use
fib_eval(n::Integer) = eval(fib_expr(n, 1))
which works, but will repeat the compilation process every time it is called -- and that is much more overhead than the original function, since now at runtime, we perform the whole recursion on the expression tree and then call the compiler on the result. Not good.
Method dispatch & compilation
So we need a way to intermingle runtime and compile time. Enter @generated functions. These will at runtime dispatch on a type, and then work like a macro defining the function body.
First about type dispatch. If we have
f(x) = x + 1
and have a function call f(1), roughly the following will happen:
The type of the argument is determined (Int)
The method table of the function is consulted to find the best matching method
The method body is compiled for the specific Int argument type, if that hasn't been done before
The compiled method is evaluated on the concrete argument
If we then enter f(1.0), the same will happen again, with a new, different specialized method being compiled for Float64, based on the same function body.
Value types & singleton types
Now, Julia has the peculiar feature that you can use numbers as types. That means that the dispatch process outlined above will also work on the following function:
g(::Type{Val{N}}) where N = N + 1
That's a bit tricky. Remember that types are themselves values in Julia: Int isa Type.
Here, Val{N} is for every N a so-called singleton type having exactly one instance, namely Val{N}() -- just like Int is a type having many instances 0, -1, 1, -2, ....
Type{T} is also a singleton type, having as its single instance the type T. Int is a Type{Int}, and Val{3} is a Type{Val{3}} -- in fact, both are the only values of their type.
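These relationships can be checked directly in the REPL:

julia> Int isa Type{Int}
true

julia> Val{3} isa Type{Val{3}}
true

julia> Val{3}() isa Val{3}
true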
So, for each N, there is a type Val{N}, being the single instance of Type{Val{N}}. Thus, g will be dispatched and compiled for each single N. This is how we can dispatch on numbers as types. This already allows for optimization:
julia> @code_llvm g(Val{1})
define i64 @julia_g_61158(i8**) #0 !dbg !5 {
top:
  ret i64 2
}
julia> @code_llvm f(1)
define i64 @julia_f_61076(i64) #0 !dbg !5 {
top:
  %1 = add i64 %0, 1
  ret i64 %1
}
But remember that it requires compilation for each new N at the first call.
(And fkt(::T) is just short for fkt(x::T) if you don't use x in the body.)
Integrating generated functions and value types
Finally to generated functions. They work as a slight modification of the above dispatch pattern:
The type of the argument is determined (Int)
The method table of the function is consulted to find the best matching method
The method body is treated as a macro and called with the Int argument type as a parameter, if that hasn't been done before. The resulting expression is compiled into a method.
The compiled method is evaluated on the concrete argument
This pattern allows changing the implementation for each type on which the function is dispatched.
For our concrete setting, we want to dispatch on the Val types representing the arguments of the Fibonacci sequence:
@generated function fib_gen{n}(::Type{Val{n}}, p::Real)
    return fib_expr(n, :p)
end
You now see that your explanation was exactly right:
in the first call to fib_gen(Val{3}, 0.5), the parametric function fib_gen{Val{3}}(...) gets compiled and its content is the fully expanded expression obtained through fib_expr(3, :p), i.e. 3*2*1*p with p substituted with the input value.
I hope that the whole story has also answered all three of your listed questions:
The implementation using eval replicates the recursion every time, plus the overhead of compilation
Val is a trick to lift numbers to types, and Type{T} is the singleton type containing only T -- but I hope the examples were helpful enough
Compile time is not before execution; because of JIT, it happens whenever a method gets compiled for the first time, because it gets called.
First of all, let me join the commenters: your question is very well written & constructive.
I have reproduced your results using Julia 0.7-beta.
Difference between the @generated fib_tot (one piece of code) and fib_gen (which calls fib_expr)
With my Julia version the results are identical:
julia> @btime fib_tot(Val{10},0.5)
  0.042 ns (0 allocations: 0 bytes)
1.8144e6
julia> @btime fib_gen(Val{10},0.5)
  0.042 ns (0 allocations: 0 bytes)
1.8144e6
Sometimes breaking a function into multiple parts (see the official doc: performance tips) can be useful; however, in your particular case I do not see why it would be. At compile time Julia has everything it needs to optimize fib_tot. There is a branch, if n <= 1, but n is known at "compile time" thanks to the Type{Val{n}} trick, so this branch should be removed without problem in the generated (specialized) code.
The Type{Val{n}} trick
To specialize functions, Julia performs inference according to argument types, not argument values.
For instance, a compiled version of foo(n::Int) = ... is not generated for each value of n. You must define a type that depends on the value of n to reach this goal. This is precisely how Type{Val{n}} works: Val{n} is simply a parametrized empty structure:
struct Val{T} end
Hence, each Val{1}, Val{2}, ... Val{100}, ... is a different type. Consequently, if foo is defined as:
foo(::Type{Val{n}}) where {n} = ...
each of foo(Val{1}), foo(Val{2}), ... foo(Val{100}) will trigger a specialized foo version (because the argument type is different).
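A minimal REPL sketch of this (the body n + 1 is just a placeholder to make the specialization visible):

julia> foo(::Type{Val{n}}) where {n} = n + 1
foo (generic function with 1 method)

julia> foo(Val{41})
42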
The eval(fib_expr(n, 1)) case
This
julia> @btime eval(fib_expr(10, :p))
  401.651 μs (99 allocations: 6.45 KiB)
1.8144e6
is slow because your expression is (re-)compiled every time. The problem can be avoided if you use a macro instead (see phg's answer).
The fib version
julia> @btime fib(10,0.5)
  30.778 ns (0 allocations: 0 bytes)
1.8144e6
There is only one compiled version of this fib function. Consequently, it must contain all the runtime branch tests, etc. This explains why it is slow.
Just a remark about the deprecated syntax:
The foo{n}(::Type{Val{n}}) syntax is deprecated; the new one is foo(::Type{Val{n}}) where {n}. You can read the Julia doc on parametric methods for further details.
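Applied to the code above, the fib_gen definition in the new syntax reads:

@generated function fib_gen(::Type{Val{n}}, p::Real) where {n}
    return fib_expr(n, :p)
end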
My Julia version:
julia> versioninfo()
Julia Version 0.7.0-beta.0
Commit f41b1ecaec (2018-06-24 01:32 UTC)
Platform Info:
OS: Linux (x86_64-pc-linux-gnu)
CPU: Intel(R) Xeon(R) CPU E5-2603 v3 @ 1.60GHz
WORD_SIZE: 64
LIBM: libopenlibm
LLVM: libLLVM-6.0.0 (ORCJIT, haswell)
This particular example function is in Lua, but I think the main concepts are true for any language with lexical scoping, first-class functions, and iterators.
Code Description (TL;DR -- see code):
The code below defines an iterator constructor which merely defines a local value and returns an iterator function.
This iterator function, when run, uses the local value from the constructor and increases that value by 1. It then runs itself recursively until the value reaches 5, then adds 1 to the value and returns the number 5. If it's run again, it will run itself recursively until the value reaches 20 or higher, then it returns nil, which is the signal for the loop to stop.
I then run an outer loop using the iterator provided by the constructor, which will run the body of the loop when the iterator returns a value (5). In the body of the outer loop I put an inner loop that also uses an iterator provided by the constructor.
The program should run the body of the loop once (when the value in the outer loop iterator hits 5) and then run the inner loop completely (letting the inner loop value hit 20), then return back to the outer loop and run it until it finishes.
function iterConstr()
    local indexes = {0}
    function iter()
        print(indexes[1])
        indexes[1] = indexes[1] + 1
        if indexes[1] == 5 then
            indexes[1] = indexes[1] + 1
            return 5
        elseif indexes[1] >= 21 then
            print("finished!")
            return nil
        else
            print("returning next one")
            return iter()
        end
    end
    return iter
end
for val in iterConstr() do
    for newVal in iterConstr() do
    end
    print("big switch")
end
The behavior I did not expect is that the values from the inner loop and the outer loop seem to be linked together. When focus returns to the outer loop after running through the inner loop, it resumes with the expected next value for the outer loop (6), but then, instead of iterating and incrementing up to 20, it jumps immediately to 21, which is where the inner loop ended!
Can anyone help explain why this bizarre behavior occurs?
I believe your problem lies on line 3 (function iter()) - you declare iter as a global function instead of a local one, which makes it accessible from any Lua chunk. Changing that line to local function iter() corrects this behaviour. I think, therefore, that this is more of an issue with the developer applying lexical scoping than with lexical scoping itself :)
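For clarity, here is the fix in context (only the declaration of iter changes; a local function declaration still supports the recursive call inside its body):

function iterConstr()
    local indexes = {0}
    local function iter()  -- was: function iter(), which made iter global
        -- ...body unchanged...
    end
    return iter
end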
The following line of code is a recursive helper function for generating the nth Fibonacci number. From Julia's REPL I run include("filename.jl"). It's the only line of code in the file, and it won't compile. Julia ver. 4.5 and 5.1. I have tried using bracketing parentheses and have also rewritten it in standard if-else format, all to no avail. This code will simply not compile for me. Can anyone see what I'm doing wrong?
helper(current, next, n) = n==0 ? current : helper(next, current+next, n-1)
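For context, such a helper is typically driven by a wrapper along these lines (a sketch, assuming the usual seeds fib(0) == 0 and fib(1) == 1):

fib(n) = helper(0, 1, n)  # hypothetical wrapper; helper then accumulates the nth Fibonacci number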
I wrote a function that outputs a single numeric value after a 300-cycle for loop. I make it print about 10 lines in each cycle, so I know where it's at. Now I want to run this for loop itself inside a 1000-cycle for loop (and place the resulting numbers in a matrix). But it prints way too much stuff, and I can't tell where the execution of the outer (1000-cycle) for loop is at: the output from the inner for loop overwhelms the print statement executed at each of the outer loop's cycles. Here's how it looks:
for (i in 1:1000) {
  function(...){...}  # prints 10 lines 300 times before outputting a single numeric value
  cat("Outer loop step "); print(i)
}
Now, I don't want to remove the print statements from my function, but I do want to mute them when I call the function in that for loop. How can I run my function without executing its print() statements?
Modify your function so you can pass in a "debug" true/false parameter to control the print statements.
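A minimal sketch of that approach (my_fun and its body are placeholders):

my_fun <- function(x, debug = FALSE) {
  if (debug) print("step details ...")  # printed only when debug = TRUE
  x^2  # placeholder for the single numeric result
}

my_fun(3)               # silent
my_fun(3, debug = TRUE) # prints the debug line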
Don't use print or cat. Use message instead. You can then use suppressMessages to suppress the message output.
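For example (noisy_step is a placeholder for your function):

noisy_step <- function(i) {
  message("inner step ", i)  # message() writes to stderr and can be suppressed
  i^2  # placeholder for the single numeric result
}

results <- numeric(1000)
for (i in 1:1000) {
  results[i] <- suppressMessages(noisy_step(i))  # inner messages muted
  cat("Outer loop step "); print(i)              # outer progress still visible
}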