Is there a reason to put @static before if VERSION statements? - julia

For example in front of this:
@static if v"0.2" <= VERSION < v"0.3-"
    # do something specific to 0.2 release series
end
Is the @static necessary?

In addition to @DVNold's answer, here's a bit more explanation...
First, the difference between the two:
with a plain if in global scope, the condition and whichever branch it selects are both evaluated when that code is run;
with @static if, the condition is evaluated when the code is parsed (during macro expansion) and the whole expression is replaced by the code of whichever branch the condition selects; no if is left after macro expansion.
In top-level global scope, there's not much difference between these two since evaluation happens only once, right after the code is parsed, macro expanded and lowered. There are, however, some situations where you can't syntactically put an if expression. A good example is if you want the presence or absence of a field in a structure to be conditional on something:
have_baz_field = rand(Bool)

struct Foo
    bar::Int
    @static if have_baz_field
        baz::Int
    end
end
Here are two different evaluations of this code:
julia> have_baz_field = rand(Bool)
false

julia> struct Foo
           bar::Int
           @static if have_baz_field
               baz::Int
           end
       end

julia> fieldnames(Foo)
(:bar,)

julia> have_baz_field = rand(Bool)
true

julia> struct Foo
           bar::Int
           @static if have_baz_field
               baz::Int
           end
       end

julia> fieldnames(Foo)
(:bar, :baz)
Of course, randomly having a field or not isn't very useful; in practice, you'd want to have a more meaningful condition for whether to have the field or not, probably something based on the platform you're on when trying to match the layout of a C struct.
In local scope, there's more of a difference between if and @static if, since the code can be evaluated many times whereas it is parsed only once. Since the condition is evaluated at parse time in the global scope where the expression appears, the @static if form cannot access any local variables; they don't exist yet when the macro is being expanded. On the other hand, if the condition of a non-static if is based only on global constants, there's a strong chance that the compiler can predict the value of the condition, so there may still not be any effective difference, since the compiler will eliminate all code but the taken branch anyway. However, if you want to make sure that this happens at parse / compile time, you can use @static if to force it.
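For example, here is a minimal sketch of the local-scope case (the function name homedir_sep is made up for illustration):
function homedir_sep()
    @static if Sys.iswindows()
        return '\\'
    else
        return '/'
    end
end
Here the branch is chosen once, when the function definition is macro expanded, so the untaken branch is never lowered or compiled on this platform; with a plain if, both branches are compiled and the check conceptually happens on every call (though the compiler will usually fold it away).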
Hopefully that clarifies the difference. In short, you rarely need @static if: at top level it's usually equivalent to a plain if, and even in local scope, in the situations where you can use @static if, the compiler will probably do the equivalent anyway. There are a few situations where @static if is required in global scope because a plain branch is not syntactically allowed. In local scope it usually isn't necessary, but it may be desirable when you want to be really sure that the condition is not evaluated at run time. In most code, you don't need @static if. In the example given in the question, for example, I would personally just use if and omit the @static, since the expression appears at the top level and the condition is only evaluated once either way.

Yes. In some contexts if statements are not allowed but macros are, or you can't, for example, define a struct inside an if statement. Inside functions it's often not necessary, but sometimes you want to be absolutely sure that the if statement gets eliminated.

Related

Multiple dispatch in julia with the same variable type

Usually multiple dispatch in Julia is straightforward if one of the parameters of a function changes data type, for example Float64 vs Complex{Float64}. How can I implement multiple dispatch if the parameter is an integer, and I want two functions, one for even and another for odd values?
You may be able to solve this with a @generated function: https://docs.julialang.org/en/v1/manual/metaprogramming/#Generated-functions-1
But the simplest solution is to use an ordinary branch in your code:
function foo(x::MyType{N}) where {N}
    if isodd(N)
        return _oddfoo(x)
    else
        return _evenfoo(x)
    end
end
This may seem like a defeat for the type system, but if N is known at compile time, the compiler will actually select only the correct branch, and you will get static dispatch to the correct function, without loss of performance.
This is idiomatic, and as far as I know the recommended solution in most cases.
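For instance, here is a self-contained sketch along those lines (MyType and the helper names are placeholders, as in the answer above):
struct MyType{N} end

_oddfoo(::MyType)  = "odd path"
_evenfoo(::MyType) = "even path"

function foo(x::MyType{N}) where {N}
    # N is a compile-time constant here, so the untaken branch is removed.
    if isodd(N)
        return _oddfoo(x)
    else
        return _evenfoo(x)
    end
end

foo(MyType{3}())  # "odd path"
foo(MyType{4}())  # "even path"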
I expect that with type-based dispatch you ultimately still end up checking odd versus even somewhere, so the most economical code, without a run-time penalty, is to have the caller check the argument and call the proper function.
If you nevertheless have to be type-based, for some reason unrelated to run-time efficiency, here is an example of how to do it:
abstract type HasParity end

struct Odd <: HasParity
    i::Int64
    Odd(i::Integer) = new(isodd(i) ? i : error("not odd"))
end

struct Even <: HasParity
    i::Int64
    Even(i::Integer) = new(iseven(i) ? i : error("not even"))
end

parity(i) = iseven(i) ? Even(i) : Odd(i)

foo(i::Odd) = println("$i is odd.")
foo(i::Even) = println("$i is even.")

for n in 1:4
    k::HasParity = parity(n)
    foo(k)
end
So here's another option which I think is cleaner and more multiple-dispatch oriented (given by a coworker). Say N is the natural number to be checked and I want two functions that do different things depending on whether N is even or odd. Thus
boolN = rem(N,2) == 0
(...)
function f1(::Val{true}, ...)
    (...)
end

function f1(::Val{false}, ...)
    (...)
end
and to call the function just do
f1(Val(boolN))
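For completeness, here is a self-contained version of that sketch (the helper and variable names are illustrative):
f1(::Val{true}, n)  = "even: $n"
f1(::Val{false}, n) = "odd: $n"

# The caller computes the Bool once and lifts it into the type domain.
check_parity(n) = f1(Val(iseven(n)), n)

check_parity(4)  # "even: 4"
check_parity(5)  # "odd: 5"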
As @logankilpatrick pointed out, the dispatch system is type based. What you are dispatching on, though, is a well-established pattern known as a trait.
Essentially your code looks like
myfunc(num) = iseven(num) ? _even_func(num) : _odd_func(num)

KEYWORD_SET in IDL

I am new to IDL and find KEYWORD_SET difficult to grasp. I understand that it is a go/no-go switch. I think it's the switching on and off part that I am having difficulty with. I have written a small program to master this:
Pro get_this_done, keyword1 = keyword1
  WW = [3,6,8]
  PRINT, 'WW'
  print, WW
  y = WW*3
  IF KEYWORD_Set(keyword1) Then BEGIN
    print, 'y'
    print, y
  ENDIF
  Return
END
WW prints, but print, y is gated by the keyword. How do I switch the keyword on to allow y to print?
Silly little question, but if somebody can indulge me, it would be great.
After compiling the routine, type something like
get_this_done,KEYWORD1=1b
where the b after the one sets the numeric value to a BYTE type integer (also equivalent to TRUE). That should cause the y-variable to be printed to the screen.
The KEYWORD_SET function will return TRUE for lots of different types of input that are basically either defined or not zero. The IF block executes when the argument is TRUE.
Keywords are simply passed as arguments to the function:
get_this_done, KEYWORD1='whatever'
or also
get_this_done, /KEYWORD1
which will give KEYWORD1 the INT value of 1 inside the function. Inside the function KEYWORD_SET will return 1 (TRUE) when the keyword was passed any kind of value - no matter whether it makes sense or not.
Thus as a side note to the question: It often is advisable to NOT use KEYWORD_SET, but instead resort to a type query:
IF SIZE(variable, /TNAME) EQ 'UNDEFINED' THEN $
  variable = 'default value'
It has the advantage that you can actually check for the correct type of the keyword and handle unexpected or even different variable types:
IF SIZE(variable, /TNAME) NE 'LONG' THEN BEGIN
  IF SIZE(variable, /TNAME) EQ 'STRING' THEN $
    PRINT, "We need a number here... sure that the cast to LONG works?"
  variable = LONG(variable)
ENDIF

Julia macro expansion order

I would like to write a macro @unpack t which takes an object t and copies all its fields into local scope. For example, given
immutable Foo
    i::Int
    x::Float64
end
foo = Foo(42,pi)
the expression @unpack foo should expand into
i = foo.i
x = foo.x
Unfortunately, such a macro cannot exist since it would have to know the type of the passed object. To circumvent this limitation, I introduce a type-specific macro @unpackFoo foo with the same effect, but since I'm lazy I want the compiler to write @unpackFoo for me. So I change the type definition to
@unpackable immutable Foo
    i::Int
    x::Float64
end
which should expand into
immutable Foo
    i::Int
    x::Float64
end

macro unpackFoo(t)
    return esc(quote
        i = $t.i
        x = $t.x
    end)
end
Writing @unpackable is not too hard:
macro unpackable(expr)
    if expr.head != :type
        error("@unpackable must be applied on a type definition")
    end
    name = isa(expr.args[2], Expr) ? expr.args[2].args[1] : expr.args[2]
    fields = Symbol[]
    for bodyexpr in expr.args[3].args
        if isa(bodyexpr, Expr) && bodyexpr.head == :(::)
            push!(fields, bodyexpr.args[1])
        elseif isa(bodyexpr, Symbol)
            push!(fields, bodyexpr)
        end
    end
    return esc(quote
        $expr
        macro $(symbol("unpack"*string(name)))(t)
            return esc(Expr(:block, [:($f = $t.$f) for f in $fields]...))
        end
    end)
end
In the REPL, this definition works just fine:
julia> @unpackable immutable Foo
           i::Int
           x::Float64
       end

julia> macroexpand(:(@unpackFoo foo))
quote
    i = foo.i
    x = foo.x
end
Problems arise if I put the @unpackFoo in the same compilation unit as the @unpackable:
julia> @eval begin
           @unpackable immutable Foo
               i::Int
               x::Float64
           end
           foo = Foo(42,pi)
           @unpackFoo foo
       end
ERROR: UndefVarError: @unpackFoo not defined
I assume the problem is that the compiler tries to proceed as follows:
1. Expand @unpackable but do not parse it.
2. Try to expand @unpackFoo, which fails because the expansion of @unpackable has not been parsed yet.
3. If we didn't fail already at step 2, the compiler would now parse the expansion of @unpackable.
This circumstance prevents @unpackable from being used in a source file. Is there any way of telling the compiler to swap steps 2 and 3 in the above list?
The background to this question is that I'm working on an iterator-based implementation of iterative solvers in the spirit of https://gist.github.com/jiahao/9240888. Algorithms like MinRes require quite a number of variables in the corresponding state object (8 currently), and I neither want to write state.variable every time I use a variable in e.g. the next() function, nor do I want to copy all of them manually as this bloats up the code and is hard to maintain. In the end, this is mainly an exercise in meta-programming though.
Firstly, I would suggest writing this as:
immutable Foo
    ...
end

unpackable(Foo)
where unpackable is a function which takes the type, constructs the appropriate expression and evals it. There are a couple of advantages to this, e.g. that you can apply it to any type without it being fixed at definition time, and the fact that you don't have to do a bunch of parsing of the type declaration (you can just call fieldnames(Foo) == [:f, :i] and work with that).
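A hedged sketch of that function-based approach, in current Julia syntax (the names unpackable and the generated @unpackFoo follow the question; none of this comes from a package):
function unpackable(T::Type)
    name   = Symbol("unpack", nameof(T))
    fields = fieldnames(T)
    # @eval defines the type-specific macro in the current module.
    @eval macro $name(t)
        # At expansion time, build `f = getfield(t, :f)` for every recorded field.
        esc(Expr(:block, [Expr(:(=), f, Expr(:call, :getfield, t, QuoteNode(f))) for f in $fields]...))
    end
end
After unpackable(Foo) has been evaluated, @unpackFoo foo behaves like the hand-written version, with the caveat discussed below that the definition and the use still cannot sit inside one top-level block.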
Secondly, while I don't know your use case in detail (and dislike blanket rules) I will warn that this kind of thing is frowned upon. It makes code harder to read because it introduces a non-local dependency; suddenly, in order to know whether x is a local or global variable, you have to look up the definition of a type in a whole different file. A better, and more general, approach is to explicitly unpack variables, and this is available in MacroTools.jl via the @destruct macro:
@destruct _.(x, i) = myfoo
# now we can use x and i
(You can destruct nested data structures and indexable objects too, which is nice.)
To answer your question: you're essentially right about how Julia runs code (s/parse/evaluate). The whole block is parsed, expanded and evaluated together, which means in your example you're trying to expand @unpackFoo before it's been defined.
However, when loading a .jl file, Julia evaluates blocks in the file one at a time, rather than all at once.
This means that you can happily write a file like this:
macro foo()
    :(println("hi"))
end

@foo()
and run julia foo.jl or include("foo.jl") and it will run fine. You just can't have a macro definition and its use in the same block, as in your begin block above.
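So, applied to the question's example (keeping its 0.x-era syntax), splitting the definition and the use into separate top-level expressions in a file should work:
@unpackable immutable Foo
    i::Int
    x::Float64
end

foo = Foo(42, pi)

@unpackFoo foo   # fine: the macro already exists when this expression is expanded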
Try having a look at the Parameters package by Mauro (https://github.com/mauro3/Parameters.jl). It has an @unpack macro and accompanying machinery similar to what you suggest you need.
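For reference, a minimal usage sketch of that macro (the field names mirror the question's Foo):
using Parameters

struct Foo
    i::Int
    x::Float64
end

foo = Foo(42, pi)
@unpack i, x = foo   # now i == 42 and x == pi in the enclosing scope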

julia introspection - get name of variable passed to function

In Julia, is there any way to get the name of a variable passed to a function?
x = 10

function myfunc(a)
    # do something here
end

assert(myfunc(x) == "x")
Do I need to use macros or is there a native method that provides introspection?
You can grab the variable name with a macro:
julia> macro mymacro(arg)
           string(arg)
       end

julia> @mymacro(x)
"x"

julia> @assert(@mymacro(x) == "x")
but as others have said, I'm not sure why you'd need that.
Macros operate on the AST (code tree) during compilation, and the x is passed into the macro as the Symbol :x. You can turn a Symbol into a string and vice versa. Macros replace code with code, so @mymacro(x) is simply pulled out and replaced with the result of string(:x), i.e. the literal string "x".
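You can see that replacement directly by inspecting the expansion (using @macroexpand on current Julia versions):
julia> @macroexpand @mymacro(x)
"x"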
Ok, contradicting myself: technically this is possible in a very hacky way, under one (fairly limiting) condition: the function must have only one method signature. The idea is very similar to the answers to such questions for Python. Before the demo, I must emphasize that these are internal compiler details and are subject to change. Briefly:
julia> function foo(x)
           bt = backtrace()
           fobj = eval(current_module(), symbol(Profile.lookup(bt[3]).func))
           Base.arg_decl_parts(fobj.env.defs)[2][1][1]
       end
foo (generic function with 1 method)

julia> foo(1)
"x"
Let me re-emphasize that this is a bad idea, and should not be used for anything! (well, except for backtrace display). This is basically "stupid compiler tricks", but I'm showing it because it can be kind of educational to play with these objects, and the explanation does lead to a more useful answer to the clarifying comment by @ejang.
Explanation:
bt = backtrace() generates a ... backtrace ... from the current position. bt is an array of pointers, where each pointer is the address of a frame in the current call stack.
Profile.lookup(bt[3]) returns a LineInfo object with the function name (and several other details about each frame). Note that bt[1] and bt[2] are in the backtrace-generation function itself, so we need to go further up the stack to get the caller.
Profile.lookup(...).func returns the function name (the symbol :foo)
eval(current_module(), Profile.lookup(...)) returns the function object associated with the name :foo in the current_module(). If we modify the definition of function foo to return fobj, then note the equivalence to the foo object in the REPL:
julia> function foo(x)
           bt = backtrace()
           fobj = eval(current_module(), symbol(Profile.lookup(bt[3]).func))
       end
foo (generic function with 1 method)

julia> foo(1) == foo
true
fobj.env.defs returns the first Method entry from the MethodTable for foo/fobj
Base.arg_decl_parts is a helper function (defined in methodshow.jl) that extracts argument information from a given Method.
the rest of the indexing drills down to the name of the argument.
Regarding the restriction that the function have only one method signature, the reason is that multiple signatures will all be listed (see defs.next) in the MethodTable. As far as I know there is no currently exposed interface to get the specific method associated with a given frame address. (as an exercise for the advanced reader: one way to do this would be to modify the address lookup functionality in jl_getFunctionInfo to also return the mangled function name, which could then be re-associated with the specific method invocation; however, I don't think we currently store a reverse mapping from mangled name -> Method).
Note also that (1) backtraces are slow (2) there is no notion of "function-local" eval in Julia, so even if one has the variable name, I believe it would be impossible to actually access the variable (and the compiler may completely elide local variables, unused or otherwise, put them in a register, etc.)
As for the IDE-style introspection use mentioned in the comments: foo.env.defs as shown above is one place to start for "object introspection". From the debugging side, Gallium.jl can inspect DWARF local variable info in a given frame. Finally, JuliaParser.jl is a pure-Julia implementation of the Julia parser that is actively used in several IDEs to introspect code blocks at a high level.
Another method is to use the function's vinfo. Here is an example:
function test(argx::Int64)
    vinfo = code_lowered(test, (Int64,))
    string(vinfo[1].args[1][1])
end
test (generic function with 1 method)

julia> test(10)
"argx"
The above depends on knowing the signature of the function, but this is a non-issue if it is coded within the function itself (otherwise some macro magic could be needed).

In Julia, can a macro access the inferred type of its arguments?

In Julia, is there a way to write a macro that branches based on the (compile-time) type of its arguments, at least for arguments whose types can be inferred at compile time? Like, in the example below, I made up a function named code_type that returns the compile-time type of x. Is there any function like that, or any way to produce this kind of behavior? (Or do macros get expanded before types are inferred, such that this kind of thing is impossible?)
macro report_int(x)
    code_type(x) == Int64 ? "it's an int" : "not an int"
end
Macros cannot do this, but generated functions can.
Check out the docs here: https://docs.julialang.org/en/v1/manual/metaprogramming/#Generated-functions-1
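A hedged sketch of that approach (the function name mirrors the question's macro; inside an @generated function the argument name is bound to the argument's type, not its value):
@generated function report_int(x)
    # Here x is a type such as Int64 or Float64.
    if x === Int64
        return quote
            "it's an int"
        end
    else
        return quote
            "not an int"
        end
    end
end

report_int(1)    # "it's an int"
report_int(1.0)  # "not an int"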
In addition to spencerlyon2's answer, another option is to just generate explicit branches:
macro report_int(x)
    :(isa($(esc(x)), Int64) ? "it's an int" : "not an int")
end
If @report_int(x) is used inside a function, and the type of x can be inferred, then the JIT will be able to optimise away the dead branch (this approach is used by the @evalpoly macro in the standard library).
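For example, a quick usage sketch (describe is just an illustrative wrapper):
function describe(x)
    @report_int(x)
end

describe(3)    # "it's an int"
describe(3.0)  # "not an int"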
