Julia macro expansion order

I would like to write a macro @unpack t which takes an object t and copies all its fields into the local scope. For example, given
immutable Foo
    i::Int
    x::Float64
end
foo = Foo(42,pi)
the expression @unpack foo should expand into
i = foo.i
x = foo.x
Unfortunately, such a macro cannot exist since it would have to know the type of the passed object. To circumvent this limitation, I introduce a type-specific macro @unpackFoo foo with the same effect, but since I'm lazy I want the compiler to write @unpackFoo for me. So I change the type definition to
@unpackable immutable Foo
    i::Int
    x::Float64
end
which should expand into
immutable Foo
    i::Int
    x::Float64
end
macro unpackFoo(t)
    return esc(quote
        i = $t.i
        x = $t.x
    end)
end
Writing @unpackable is not too hard:
macro unpackable(expr)
    if expr.head != :type
        error("@unpackable must be applied on a type definition")
    end
    name = isa(expr.args[2], Expr) ? expr.args[2].args[1] : expr.args[2]
    fields = Symbol[]
    for bodyexpr in expr.args[3].args
        if isa(bodyexpr, Expr) && bodyexpr.head == :(::)
            push!(fields, bodyexpr.args[1])
        elseif isa(bodyexpr, Symbol)
            push!(fields, bodyexpr)
        end
    end
    return esc(quote
        $expr
        macro $(symbol("unpack"*string(name)))(t)
            return esc(Expr(:block, [:($f = $t.$f) for f in $fields]...))
        end
    end)
end
In the REPL, this definition works just fine:
julia> @unpackable immutable Foo
           i::Int
           x::Float64
       end

julia> macroexpand(:(@unpackFoo foo))
quote
    i = foo.i
    x = foo.x
end
Problems arise if I put the @unpackFoo in the same compilation unit as the @unpackable:
julia> @eval begin
           @unpackable immutable Foo
               i::Int
               x::Float64
           end
           foo = Foo(42,pi)
           @unpackFoo foo
       end
ERROR: UndefVarError: @unpackFoo not defined
I assume the problem is that the compiler tries to proceed as follows:
1. Expand @unpackable but do not parse it.
2. Try to expand @unpackFoo, which fails because the expansion of @unpackable has not been parsed yet.
3. If we wouldn't fail already at step 2, the compiler would now parse the expansion of @unpackable.
This circumstance prevents @unpackable from being used in a source file. Is there any way of telling the compiler to swap steps 2 and 3 in the above list?
The background to this question is that I'm working on an iterator-based implementation of iterative solvers in the spirit of https://gist.github.com/jiahao/9240888. Algorithms like MinRes require quite a number of variables in the corresponding state object (8 currently), and I neither want to write state.variable every time I use a variable in e.g. the next() function, nor do I want to copy all of them manually as this bloats up the code and is hard to maintain. In the end, this is mainly an exercise in meta-programming though.

Firstly, I would suggest writing this as:
immutable Foo
    ...
end
unpackable(Foo)
where unpackable is a function which takes the type, constructs the appropriate expression and evals it. There are a couple of advantages to this, e.g. that you can apply it to any type without it being fixed at definition time, and the fact that you don't have to do a bunch of parsing of the type declaration (you can just call fieldnames(Foo), which gives [:i, :x], and work with that).
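A rough, untested sketch of that function-based variant, reusing the quoting pattern and the unpackFoo naming convention from the question (it assumes everything lives in one module, so that the @eval lands in the right place):

# Sketch: build and eval a type-specific @unpackT macro from fieldnames(T)
function unpackable(T)
    macroname = symbol("unpack" * string(T))   # e.g. :unpackFoo; Symbol(...) on current Julia
    fields = fieldnames(T)
    @eval macro $macroname(t)
        # generates `i = t.i; x = t.x; ...` for the concrete field list
        esc(Expr(:block, [:($f = $t.$f) for f in $fields]...))
    end
end

unpackable(Foo)   # defines @unpackFoo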
Secondly, while I don't know your use case in detail (and dislike blanket rules) I will warn that this kind of thing is frowned upon. It makes code harder to read because it introduces a non-local dependency; suddenly, in order to know whether x is a local or global variable, you have to look up the definition of a type in a whole different file. A better, and more general, approach is to explicitly unpack variables, and this is available in MacroTools.jl via the @destruct macro:
@destruct _.(x, i) = myfoo
# now we can use x and i
(You can destruct nested data structures and indexable objects too, which is nice.)
To answer your question: you're essentially right about how Julia runs code (s/parse/evaluate). The whole block is parsed, expanded and evaluated together, which means in your example you're trying to expand @unpackFoo before it's been defined.
However, when loading a .jl file, Julia evaluates blocks in the file one at a time, rather than all at once.
This means that you can happily write a file like this:
macro foo()
:(println("hi"))
end
@foo()
and run julia foo.jl or include("foo.jl") and it will run fine. You just can't have a macro definition and its use in the same block, as in your begin block above.
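So, for instance, with the @unpackable macro definition loaded earlier (in the same file or another), a file can contain the type definition and the use as separate top-level expressions:

@unpackable immutable Foo
    i::Int
    x::Float64
end

foo = Foo(42, pi)

@unpackFoo foo    # fine: the previous top-level expression has already been evaluated,
                  # so @unpackFoo exists by the time this line is expanded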

Try having a look at the Parameters package by Mauro (https://github.com/mauro3/Parameters.jl). It has an @unpack macro and the accompanying machinery similar to what you suggest you need.
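A minimal usage sketch (assuming the Foo type from the question is already defined; exact behavior depends on the package version):

using Parameters

foo = Foo(42, pi)
@unpack i, x = foo    # binds variables i and x from foo's fields, no type-specific macro needed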

Related

Is there a reason to put @static before if VERSION statements?

For example in front of this:
@static if v"0.2" <= VERSION < v"0.3-"
    # do something specific to 0.2 release series
end
Is the @static necessary?
In addition to @DVNold's answer, here's a bit more explanation...
First, the difference between the two:
if in global scope the condition and whatever branch is selected are both evaluated when that code is run;
@static if evaluates the condition when the code is parsed (during macro expansion) and is replaced with the code for whichever branch the condition selects; no if is left after macro expansion.
In top-level global scope, there's not much difference between these two since evaluation happens only once, right after the code is parsed, macro expanded and lowered. There are, however, some situations where you can't syntactically put an if expression. A good example is if you want the presence or absence of a field in a structure to be conditional on something:
have_baz_field = rand(Bool)

struct Foo
    bar::Int
    @static if have_baz_field
        baz::Int
    end
end
Here are two different evaluations of this code:
julia> have_baz_field = rand(Bool)
false

julia> struct Foo
           bar::Int
           @static if have_baz_field
               baz::Int
           end
       end

julia> fieldnames(Foo)
(:bar,)

julia> have_baz_field = rand(Bool)
true

julia> struct Foo
           bar::Int
           @static if have_baz_field
               baz::Int
           end
       end

julia> fieldnames(Foo)
(:bar, :baz)
Of course, randomly having a field or not isn't very useful; in practice, you'd want to have a more meaningful condition for whether to have the field or not, probably something based on the platform you're on when trying to match the layout of a C struct.
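For example (a hypothetical sketch with made-up type and field names, mirroring the struct above), the condition could be a platform check:

struct CStatBuf
    size::Int64
    # suppose the corresponding C struct has an extra field on this platform only
    @static if Sys.isapple()
        flags::UInt32
    end
end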
In local scope, there's more difference between if and @static if since the code can be evaluated many times whereas it is parsed only once. Since the condition is evaluated at parse time in the global scope where the expression appears, the @static if form cannot access any local variables, since they don't exist yet when the macro is being expanded. On the other hand, if the condition of a non-static if is based only on global constants, there's a strong chance that the compiler can predict the value of the condition, so there still may not be any effective difference, since the compiler will eliminate all code but the taken branch anyway. However, if you want to make sure that this happens at parse / compile time, then you can use @static if to force it.
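A small sketch of the local-scope case (function name and branch bodies are illustrative only):

function scratch_dir()
    # the branch is chosen once, when this method definition is macro-expanded;
    # the compiled function body contains no conditional at all
    @static if Sys.iswindows()
        return get(ENV, "TEMP", "C:\\Temp")
    else
        return "/tmp"
    end
end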
Hopefully that clarifies the difference. In short, you rarely need @static if: it's usually equivalent in top-level scope, and even in local scope, in situations where you can use @static if, the compiler will probably already do the equivalent anyway. There are a few situations where @static if is required in global scope because a branch is not syntactically allowed. In local scope it usually isn't necessary, but it may be desirable when you want to be really sure that the condition is not evaluated at run time. In most code, you don't need @static if. In the example given in the question, for example, I would personally just use if and omit the @static, since the expression appears at the top level and the condition is only evaluated once either way.
Yes, in some contexts if statements are not allowed but macros are; you can't, for example, define a struct inside an if statement. Inside functions it is often not necessary, but sometimes you want to be absolutely sure that the if statement gets eliminated.

Julia scoping issue when creating function from string

I would like to build a Julia application where a user can specify a function using a configuration file (and therefore as a string). The configuration file then needs to be parsed before the function is evaluated in the program.
The problem is that while the function name is known locally, it is not known in the module containing the parser. One solution I have come up with is to pass the local eval function to the parsing function but that does not seem very elegant.
I have tried to come up with a minimal working example here, where instead of parsing a configuration file, the function name is already contained in a string:
module MyFuns
function myfun(a)
    return a+2
end
end

module MyUtil
# in a real application, parseconfig would parse the configuration file to extract funstr
function parseconfig(funstr)
    return eval(Meta.parse(funstr))
end
function parseconfig(funstr, myeval)
    return myeval(Meta.parse(funstr))
end
end
# test 1 -- succeeds
f1 = MyFuns.myfun
println("test1: $(f1(1))")
# test 2 -- succeeds
f2 = MyUtil.parseconfig("MyFuns.myfun", eval)
println("test2: $(f2(1))")
# test 3 -- fails
f3 = MyUtil.parseconfig("MyFuns.myfun")
println("test3: $(f3(1))")
The output is:
test1: 3
test2: 3
ERROR: LoadError: UndefVarError: MyFuns not defined
So, the second approach works but is there a better way to achieve the goal?
Meta.parse() will transform your string to an AST. What MyFuns.myfun refers to depends on the scope provided by the eval() you use.
The issue with your example is that the eval() inside MyUtil will evaluate in the context of that module. If that is the desired behavior, you are simply missing an import ..MyFuns (or using Main.MyFuns) inside MyUtil, so that the name MyFuns resolves there.
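In other words, if evaluating inside MyUtil is really what you want, something like this sketch should be enough (the relative import assumes MyFuns is defined in MyUtil's parent module, as in your example):

module MyFuns
myfun(a) = a + 2
end

module MyUtil
import ..MyFuns                                  # make the name MyFuns resolvable here
parseconfig(funstr) = eval(Meta.parse(funstr))   # evaluates in MyUtil, where MyFuns is now known
end

f3 = MyUtil.parseconfig("MyFuns.myfun")
f3(1)   # == 3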
But what you really want to do is write a macro. That way the code runs during macro expansion, while your program is being parsed and expanded, before it is executed. The macro has access to a special argument __module__, which is the module where the macro is used. So __module__.eval() will execute an expression in that very scope.
foo = "outside"
module MyMod
foo = "inside"
macro eval(string)
expr = Meta.parse(string)
__module__.eval(expr)
end
end
MyMod.#eval "foo"
# Output is "outside"
See also this explanation on macros:
https://docs.julialang.org/en/v1/manual/metaprogramming/index.html#man-macros-1
And for the sake of transforming the answer of @MauricevanLeeuwen into the framework of my question, this code will work:
module MyFuns
function myfun(a)
    return a+2
end
end

module MyUtil
macro parseconfig(funstr)
    __module__.eval(Meta.parse(funstr))
end
end
f4 = MyUtil.@parseconfig "MyFuns.myfun"
println("test4: $(f4(1))")

How to evaluate Julia expression defining and calling a macro?

I am generating some code that is later being evaluated. Even though the generated code is correct and evaluating it line by line does not cause issues, it fails to be correctly evaluated as a whole.
eval(quote
    macro m() "return" end
    @m()
end)
Returns:
ERROR: LoadError: UndefVarError: @m not defined
eval(quote macro m() "return" end end)
eval(@m())
Returns: "return"
Macro expansion is done before evaluation, so when this code is expanded, the macro definition in the first expression of the block has not been evaluated yet and therefore cannot affect the expansion of the second expression in the block. There is one special case which does what you want: the :toplevel expression type. This is automatically used for top-level global expressions in modules, but you can manually construct expressions of this type like this:
ex = Expr(:toplevel,
    :(macro m() "return" end),
    :(@m())
)
And sure enough, this does what you want:
julia> eval(ex)
"return"
Since Julia doesn't have locally scoped macros, this macro definition has to take place in global scope already so presumably this should work anywhere that the original macro would have worked—i.e. a macro definition should be valid in all the same places that a top-level compound expression is valid.

Call particular method in Julia [duplicate]

The problem is the following:
I have an abstract type MyAbstract and derived composite types MyType1 and MyType2:
abstract type MyAbstract end

struct MyType1 <: MyAbstract
    somestuff
end

struct MyType2 <: MyAbstract
    someotherstuff
end
I want to specify some general behaviour for objects of type MyAbstract, so I have a function
function dosth(x::MyAbstract)
    println(1) # instead of something useful
end
This general behaviour suffices for MyType1 but when dosth is called with an argument of type MyType2, I want some additional things to happen that are specific for MyType2 and, of course, I want to reuse the existing code, so I tried the following, but it did not work:
function dosth(x::MyType2)
    dosth(x::MyAbstract)
    println(2)
end

x = MyType2("")
dosth(x) # StackOverflowError
This means Julia did not recognize my attempt to treat x like its "supertype" for some time.
Is it possible to call an overloaded function from the overwriting function in Julia? How can I elegantly solve this problem?
You can use the invoke function
function dosth(x::MyType2)
    invoke(dosth, Tuple{MyAbstract}, x)
    println(2)
end
With the same setup, this gives the following output instead of a stack overflow:
julia> dosth(x)
1
2
There's a currently internal and experimental macro version of invoke which can be called like this:
function dosth(x::MyType2)
    Base.@invoke dosth(x::MyAbstract)
    println(2)
end
This makes the calling syntax quite close to what you wrote.

julia introspection - get name of variable passed to function

In Julia, is there any way to get the name of a variable passed to a function?
x = 10
function myfunc(a)
    # do something here
end
assert(myfunc(x) == "x")
Do I need to use macros or is there a native method that provides introspection?
You can grab the variable name with a macro:
julia> macro mymacro(arg)
string(arg)
end
julia> #mymacro(x)
"x"
julia> #assert(#mymacro(x) == "x")
but as others have said, I'm not sure why you'd need that.
Macros operate on the AST (code tree) at compile time, and the x is passed into the macro as the Symbol :x. You can turn a Symbol into a string and vice versa. Macros replace code with code, so the @mymacro(x) call is simply pulled out and replaced with the result of string(:x), i.e. the literal "x".
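You can see the replacement with macroexpand (one-argument form as in older Julia; on current versions you would write @macroexpand @mymacro(x)), which should show:

julia> macroexpand(:(@mymacro(x)))
"x"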
Ok, contradicting myself: technically this is possible in a very hacky way, under one (fairly limiting) condition: the function must have only one method signature. The idea is very similar to the answers to such questions for Python. Before the demo, I must emphasize that these are internal compiler details and are subject to change. Briefly:
julia> function foo(x)
           bt = backtrace()
           fobj = eval(current_module(), symbol(Profile.lookup(bt[3]).func))
           Base.arg_decl_parts(fobj.env.defs)[2][1][1]
       end
foo (generic function with 1 method)

julia> foo(1)
"x"
Let me re-emphasize that this is a bad idea, and should not be used for anything! (well, except for backtrace display). This is basically "stupid compiler tricks", but I'm showing it because it can be kind of educational to play with these objects, and the explanation does lead to a more useful answer to the clarifying comment by @ejang.
Explanation:
bt = backtrace() generates a ... backtrace ... from the current position. bt is an array of pointers, where each pointer is the address of a frame in the current call stack.
Profile.lookup(bt[3]) returns a LineInfo object with the function name (and several other details about each frame). Note that bt[1] and bt[2] are in the backtrace-generation function itself, so we need to go further up the stack to get the caller.
Profile.lookup(...).func returns the function name (the symbol :foo)
eval(current_module(), symbol(Profile.lookup(...).func)) returns the function object associated with the name :foo in the current_module(). If we modify the definition of function foo to return fobj, then note the equivalence to the foo object in the REPL:
julia> function foo(x)
           bt = backtrace()
           fobj = eval(current_module(), symbol(Profile.lookup(bt[3]).func))
       end
foo (generic function with 1 method)

julia> foo(1) == foo
true
fobj.env.defs returns the first Method entry from the MethodTable for foo/fobj
Base.arg_decl_parts is a helper function (defined in methodshow.jl) that extracts argument information from a given Method.
the rest of the indexing drills down to the name of the argument.
Regarding the restriction that the function have only one method signature, the reason is that multiple signatures will all be listed (see defs.next) in the MethodTable. As far as I know there is no currently exposed interface to get the specific method associated with a given frame address. (as an exercise for the advanced reader: one way to do this would be to modify the address lookup functionality in jl_getFunctionInfo to also return the mangled function name, which could then be re-associated with the specific method invocation; however, I don't think we currently store a reverse mapping from mangled name -> Method).
Note also that (1) backtraces are slow (2) there is no notion of "function-local" eval in Julia, so even if one has the variable name, I believe it would be impossible to actually access the variable (and the compiler may completely elide local variables, unused or otherwise, put them in a register, etc.)
As for the IDE-style introspection use mentioned in the comments: foo.env.defs as shown above is one place to start for "object introspection". From the debugging side, Gallium.jl can inspect DWARF local variable info in a given frame. Finally, JuliaParser.jl is a pure-Julia implementation of the Julia parser that is actively used in several IDEs to introspect code blocks at a high level.
Another method is to use the function's vinfo. Here is an example:
function test(argx::Int64)
    vinfo = code_lowered(test,(Int64,))
    string(vinfo[1].args[1][1])
end
test (generic function with 1 method)

julia> test(10)
"argx"
The above depends on knowing the signature of the function, but this is a non-issue if it is coded within the function itself (otherwise some macro magic could be needed).
