porting python class to Julialang - julia

I am seeing that Julia explicitly does NOT do classes, and that I should instead embrace mutable structs. Am I going down the correct path here? I compared my trivial example against the official Flux library but cannot work out how to reference self the way a Python object does. Is the cleanest way simply to pass the object as an argument to the function?
Python
import numpy as np

# Dense Layer
class Layer_Dense:
    def __init__(self, n_inputs, n_neurons):
        self.weights = 0.01 * np.random.randn(n_inputs, n_neurons)
        self.biases = np.zeros((1, n_neurons))
    def forward(self, inputs):
        pass
My JuliaLang version so far
mutable struct LayerDense
    num_inputs::Int64
    num_neurons::Int64
    weights
    biases
end

function forward(layer::LayerDense, inputs)
    layer.weights = 0.01 * randn(layer.num_inputs, layer.num_neurons)
    layer.biases = zeros((1, layer.num_neurons))
end
The Flux library's version of a dense layer looks very different to me, and I do not know what they're doing or why. For example, where is the forward pass call? Is it here in Flux just named after the layer, Dense?
source : https://github.com/FluxML/Flux.jl/blob/b78a27b01c9629099adb059a98657b995760b617/src/layers/basic.jl#L71-L111
struct Dense{F, M<:AbstractMatrix, B}
  weight::M
  bias::B
  σ::F
  function Dense(W::M, bias = true, σ::F = identity) where {M<:AbstractMatrix, F}
    b = create_bias(W, bias, size(W,1))
    new{F,M,typeof(b)}(W, b, σ)
  end
end

function Dense(in::Integer, out::Integer, σ = identity;
               initW = nothing, initb = nothing,
               init = glorot_uniform, bias=true)

  W = if initW !== nothing
    Base.depwarn("keyword initW is deprecated, please use init (which similarly accepts a funtion like randn)", :Dense)
    initW(out, in)
  else
    init(out, in)
  end

  b = if bias === true && initb !== nothing
    Base.depwarn("keyword initb is deprecated, please simply supply the bias vector, bias=initb(out)", :Dense)
    initb(out)
  else
    bias
  end

  return Dense(W, b, σ)
end

This is an equivalent of your Python code in Julia:
mutable struct Layer_Dense
    weights::Matrix{Float64}
    biases::Matrix{Float64}
    Layer_Dense(n_inputs::Integer, n_neurons::Integer) =
        new(0.01 * randn(n_inputs, n_neurons),
            zeros((1, n_neurons)))
end
forward(ld::Layer_Dense, inputs) = nothing
What is important here:
Here I create an inner constructor only, as an outer constructor is not needed; by contrast, in the Flux.jl code you have linked, the Dense type defines both an inner and an outer constructor.
In Python the forward function does not do anything, so I copied that in Julia (your Julia code did something different); note that instead of self one passes the instance of the object to the function as the first argument (and adds a ::Layer_Dense annotation so that Julia knows how to dispatch it correctly).
Similarly, in Python you store only weights and biases in the class, and I have reflected this in the Julia code; note, however, that for performance reasons it is better to give these two fields of the Layer_Dense struct explicit types.
like where is the forward pass call
In the code you have shared only the constructors of the Dense object are defined. However, a few lines further down in the same file the Dense type is made callable, i.e. it is defined to be a functor, and that call is what performs the forward pass.
Functors (callable structs) are explained in the Julia manual (the section on function-like objects) and in the Flux documentation (more specifically for your use case).
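If you want your own layer to work the way Flux's Dense does, the idiomatic move is to drop the separate forward function and make the struct callable (a functor). Here is a minimal sketch of that, based on the struct above (not Flux's actual code):
mutable struct LayerDense
    weights::Matrix{Float64}
    biases::Matrix{Float64}
    LayerDense(n_inputs::Integer, n_neurons::Integer) =
        new(0.01 * randn(n_inputs, n_neurons), zeros(1, n_neurons))
end

# Making the struct callable plays the role of Python's forward method.
(layer::LayerDense)(inputs) = inputs * layer.weights .+ layer.biases

layer = LayerDense(4, 3)
layer(randn(10, 4))   # forward pass: a 10×3 output matrix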

Related

How to achieve type stability when assigning values with StaticArrays?

I have the following struct (simplified), and some calculations done with this struct:
mutable struct XX{VecType}
    v::VecType
end
long_calculation(x::XX) = sum(x.v)
As a part of the program I need to update the v value. The struct is callable and mainly used as a cache. Here the use of static arrays helps a lot in speeding up calculations, but the type of v is ultimately defined by the user. My problem lies in assigning new values to XX.v:
function (f::XX)(w)
    f.v .= w  # here lies the problem
    return long_calculation(f)
end
This works if v <: Array and w is any value, but it doesn't work when v <: StaticArrays.StaticArray, because setindex! is not defined for that type.
How can I write f.v .= w in a way that performs an in-place modification when v allows it, but otherwise just creates a new value and stores it in the XX struct?
There's a package for exactly this use case: BangBang.jl. From there, you can use setindex!!:
f.v = setindex!!(f.v, w)
Here I propose a simple solution that should be enough in most cases. Use multiple dispatch and define the following function:
my_assign!(f::XX, w) = (f.v .= w)
my_assign!(f::XX{<:StaticArray}, w) = (f.v = w)
and then simply call it in your code like this:
function (f::XX)(w)
    my_assign!(f, w)
    return long_calculation(f)
end
Then, if you (or your users) hit an error with the default implementation, it is easy enough to add another method to my_assign! to cover other special cases.
Would such a solution be enough for you?
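Putting the pieces together, here is a self-contained sketch of that approach (assuming StaticArrays.jl is available):
using StaticArrays

mutable struct XX{VecType}
    v::VecType
end

long_calculation(x::XX) = sum(x.v)

# Default: mutate the cached vector in place.
my_assign!(f::XX, w) = (f.v .= w)
# Static arrays cannot be mutated, so rebind the field to a new value instead.
my_assign!(f::XX{<:StaticArray}, w) = (f.v = w)

function (f::XX)(w)
    my_assign!(f, w)
    return long_calculation(f)
end

XX([1.0, 2.0, 3.0])(ones(3))                        # in-place broadcast, returns 3.0
XX(SVector(1.0, 2.0, 3.0))(SVector(1.0, 1.0, 1.0))  # rebinds f.v, returns 3.0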

What is the connection between Refs and Broadcasting in Julia

For two objects A and B we could previously get the vector [A*A, A*B] with the code A .* [A, B]. From the deprecation warnings in Julia 0.7, it seems that the new way to do this is to use a reference of the first A. So it becomes Ref(A) .* [A,B].
It doesn't seem that there is a strong link between references and broadcasting operations. What is the link here and why is using a Reference preferred (by the deprecation warnings at least)?
Minimal Working Example
import Base.*
struct example
    num::Int
end

function *(lhs::example, rhs::example)
    return example(lhs.num * rhs.num)
end
A = example(2)
B = example(4)
# Previously this could be done as follows.
A .* [A, B]
# Now we need to use Refs
Ref(A) .* [A, B]
Here I will refer to the main use of Ref, that is, when you pass it one argument (there are also other subtypes of Ref, but they are not relevant here).
Writing Ref(x) creates a mutable wrapper around object x. The wrapper is a very simple RefValue type defined in the following way:
mutable struct RefValue{T} <: Ref{T}
    x::T
    RefValue{T}() where {T} = new()
    RefValue{T}(x) where {T} = new(x)
end
Now, why is this useful? Because Ref has the following utility functions defined:
eltype(x::Type{<:Ref{T}}) where {T} = @isdefined(T) ? T : Any
size(x::Ref) = ()
axes(x::Ref) = ()
length(x::Ref) = 1
ndims(x::Ref) = 0
ndims(::Type{<:Ref}) = 0
iterate(r::Ref) = (r[], nothing)
iterate(r::Ref, s) = nothing
IteratorSize(::Type{<:Ref}) = HasShape{0}()
which means that it can be used in broadcasting, as only objects that have axes defined and support indexing can be used with broadcast; a Ref has zero dimensions and length 1, so broadcast treats it as a scalar rather than iterating over it.
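A quick check in the REPL shows this scalar-like behaviour:
r = Ref(42)
size(r)     # ()
length(r)   # 1
r[]         # 42 (unwrap the reference)

Ref(2) .* [1, 2, 3]   # [2, 4, 6] -- the wrapped value is reused for every element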
Fortunately it is easy to avoid writing Ref(A) all the time.
Just define:
Base.broadcastable(e::example) = Ref(e)
and the machinery of broadcast will work again as Base.broadcastable is called on each argument of broadcast.
More details about customizing broadcasting can be found here https://docs.julialang.org/en/v1/manual/interfaces/#man-interfaces-broadcasting.
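For completeness, here is the MWE from the question again with that one extra definition added (a sketch; the only new line is the Base.broadcastable method):
import Base: *

struct example
    num::Int
end

*(lhs::example, rhs::example) = example(lhs.num * rhs.num)

# Opt the type into scalar-like broadcasting, so broadcast wraps it in Ref
# automatically.
Base.broadcastable(e::example) = Ref(e)

A = example(2)
B = example(4)

A .* [A, B]        # [example(4), example(8)] -- no explicit Ref needed
Ref(A) .* [A, B]   # still works as well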

Check if a type implements an interface in Julia

How do you check that a type implements an interface in Julia?
For example, the iteration interface is implemented by the functions start, next, and done.
What I need is a specialization of a function depending on whether the argument type implements a given interface or not.
EDIT
Here is an example of what I would like to do.
Consider the following code:
a = [7,8,9]
f = 1.0
s = Set()
push!(s,30)
push!(s,40)
function getsummary(obj)
    println("Object of type ", typeof(obj))
end

function getsummary{T<:AbstractArray}(obj::T)
    println("Iterable Object starting with ", next(obj, start(obj))[1])
end
getsummary(a)
getsummary(f)
getsummary(s)
The output is:
Iterable Object starting with 7
Object of type Float64
Object of type Set{Any}
Which is what we would expect since Set is not an AbstractArray. But clearly my second method only requires the type T to implement the iteration interface.
My issue isn't only related to the iteration interface but to all interfaces defined by a set of functions.
EDIT-2
I think my question is related to
https://github.com/JuliaLang/julia/issues/5
Since we could have imagined something like T<:Iterable
Typically, this is done with traits. See Traits.jl for one implementation; a similar approach is used in Base to dispatch on Base.iteratorsize, Base.linearindexing, etc. For instance, this is how Base implements collect using the iteratorsize trait:
"""
collect(element_type, collection)
Return an `Array` with the given element type of all items in a collection or iterable.
The result has the same shape and number of dimensions as `collection`.
"""
collect{T}(::Type{T}, itr) = _collect(T, itr, iteratorsize(itr))
_collect{T}(::Type{T}, itr, isz::HasLength) = copy!(Array{T,1}(Int(length(itr)::Integer)), itr)
_collect{T}(::Type{T}, itr, isz::HasShape) = copy!(similar(Array{T}, indices(itr)), itr)
function _collect{T}(::Type{T}, itr, isz::SizeUnknown)
a = Array{T,1}(0)
for x in itr
push!(a,x)
end
return a
end
See also Mauro Werder's talk on traits.
I would define an iterability(::T) trait as follows:
immutable Iterable end
immutable NotIterable end

iterability(T) =
    if method_exists(length, (T,)) || !isa(Base.iteratorsize(T), Base.HasLength)
        Iterable()
    else
        NotIterable()
    end
which seems to work:
julia> iterability(Set)
Iterable()
julia> iterability(Number)
Iterable()
julia> iterability(Symbol)
NotIterable()
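To get the specialization the question asks for, you then dispatch on the value of the trait. A sketch using the iterability trait above, redefining the question's getsummary in terms of it:
# The entry point computes the trait; dispatch then picks the right method.
getsummary(obj) = getsummary(iterability(typeof(obj)), obj)

getsummary(::Iterable, obj)    = println("Iterable Object starting with ", first(obj))
getsummary(::NotIterable, obj) = println("Object of type ", typeof(obj))

getsummary(Set([30, 40]))   # Iterable() branch: prints the first element of the set
getsummary(:foo)            # NotIterable() branch: prints "Object of type Symbol"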
You can check whether a type implements an interface via methodswith as follows:
foo(a_type::Type, an_interface::Symbol) = an_interface ∈ [i.name for i in methodswith(a_type, true)]
julia> foo(EachLine, :done)
true
But I don't quite understand the dynamic dispatch approach you mentioned in the comment. What does the generic function look like? What are the input and output of the function? I guess you want something like this?
function foo(a_type::Type, an_interface::Symbol)
    # assume bar baz are predefined
    if an_interface ∈ [i.name for i in methodswith(a_type, true)]
        # call function bar
    else
        # call function baz
    end
end
or some metaprogramming stuff to generate those functions respectively at compile time?

Julia: non-destructively update immutable type variable

Let's say there is a type
immutable Foo
    x :: Int64
    y :: Float64
end
and there is a variable foo = Foo(1,2.0). I want to construct a new variable bar using foo as a prototype with field y = 3.0 (or, alternatively non-destructively update foo producing a new Foo object). In ML languages (Haskell, OCaml, F#) and a few others (e.g. Clojure) there is an idiom that in pseudo-code would look like
bar = {foo with y = 3.0}
Is there something like this in Julia?
This is tricky. In Clojure this would work with a data structure, a dynamically typed immutable map, so we simply call the appropriate method to add/change a key. But when working with types we'll have to do some reflection to generate an appropriate new constructor for the type. Moreover, unlike Haskell or the various MLs, Julia isn't statically typed, so one does not simply look at an expression like {foo with y = 1} and work out what code should be generated to implement it.
Actually, we can build a Clojure-esque solution to this; since Julia provides enough reflection and dynamism that we can treat the type as a sort of immutable map. We can use fieldnames to get the list of "keys" in order (like [:x, :y]) and we can then use getfield(foo, :x) to get field values dynamically:
immutable Foo
    x
    y
    z
end
x = Foo(1,2,3)
with_slow(x, p) =
    typeof(x)(((f == p.first ? p.second : getfield(x, f)) for f in fieldnames(x))...)
with_slow(x, ps...) = reduce(with_slow, x, ps)
with_slow(x, :y => 4, :z => 6) == Foo(1,4,6)
However, there's a reason this is called with_slow. Because of the reflection it's going to be nowhere near as fast as a handwritten function like withy(foo::Foo, y) = Foo(foo.x, y, foo.z). If Foo is parametrised (e.g. Foo{T} with y::T) then Julia will be able to infer that withy(foo, 1.) returns a Foo{Float64}, but won't be able to infer with_slow at all. As we know, this kills performance.
The only way to make this as fast as ML and co is to generate code effectively equivalent to the handwritten version. As it happens, we can pull off that version as well!
# Fields
type Field{K} end

Base.convert{K}(::Type{Symbol}, ::Field{K}) = K
Base.convert(::Type{Field}, s::Symbol) = Field{s}()

macro f_str(s)
    :(Field{$(Expr(:quote, symbol(s)))}())
end

typealias FieldPair{F<:Field, T} Pair{F, T}

# Immutable `with`
for nargs = 1:5
    args = [symbol("p$i") for i = 1:nargs]
    @eval with(x, $([:($p::FieldPair) for p = args]...), p::FieldPair) =
        with(with(x, $(args...)), p)
end

@generated function with{F, T}(x, p::Pair{Field{F}, T})
    :($(x.name.primary)($([name == F ? :(p.second) : :(x.$name)
                           for name in fieldnames(x)]...)))
end
The first section is a hack to produce a symbol-like object, f"foo", whose value is known within the type system. The generated function is like a macro that takes types as opposed to expressions; because it has access to Foo and the field names it can generate essentially the hand-optimised version of this code. You can also check that Julia is able to properly infer the output type, if you parametrise Foo:
@code_typed with(x, f"y" => 4., f"z" => "hello") # => ...::Foo{Int,Float64,String}
(The for nargs line is essentially a manually-unrolled reduce which enables this.)
Finally, lest I be accused of giving slightly crazy advice, I want to warn that this isn't all that idiomatic in Julia. While I can't give very specific advice without knowing your use case, it's generally best to have types with a manageable (small) set of fields and a small set of functions which do the basic manipulation of those fields; you can build on those functions to create the final public API. If what you want is really an immutable dict, you're much better off just using a specialised data structure for that.
There is also setindex (without the ! at the end) implemented in the FixedSizeArrays.jl package, which does this in an efficient way.
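To illustrate what a non-mutating setindex does, here is a sketch using StaticArrays.jl (the successor of FixedSizeArrays.jl, which provides the same operation; treat the exact export as something to check for your version):
using StaticArrays

v = SVector(1, 2, 3)
w = setindex(v, 10, 2)   # returns a new SVector (1, 10, 3); v itself is unchanged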

constrain argument to be in a set of values in Julia function signature

Is there a way in Julia to specify that a function argument can take one of a set of values through type annotations? For example, let's say I have a function foo which accepts a single argument:
function foo(x::String)
    print(x)
end
The argument x can only be a String. Is there a way to further constrain it in the function signature so that it can only be, for example, one of the strings "right", "left", or "center"?
In Julia, the motto should be "There's a type for that!".
One way of handling this would be to create a type with a constructor that only allows the values you want (and possibly stores them in a more efficient manner).
Here is one example:
const directions = ["left", "right", "center"]
immutable MyDirection
    Direction::Int8
    function MyDirection(str::AbstractString)
        i = findnext(directions, str, 1)
        i == 0 && throw(ArgumentError("Invalid direction string"))
        return new(i)
    end
end
Base.show(io::IO, x::MyDirection) = print(io, string("MyDirection(\"",directions[x.Direction],"\")"))
function foo(x::MyDirection)
    println(x)
end

function foo(str::AbstractString)
    x = MyDirection(str)
    println(x)
end
test = MyDirection("left")
foo(test)
foo("right")
Note: my example is written with Julia 0.4!
Edit:
Another approach would be to use symbols, such as :left, :right, and :center,
instead of strings.
These have the advantage of being interned (so that they can be compared simply by comparing their address), and they can also be used directly for type parameters.
For example:
immutable MyDirection{Symbol} ; end

function MyDirection(dir::Symbol)
    dir in (:left, :right, :center) || error("invalid direction")
    MyDirection{dir}()
end
MyDirection(dir::AbstractString) = MyDirection(symbol(dir))
That will let you do things like:
x = MyDirection("left")
which will create an immutable object of type MyDirection{:left}.
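A nice side effect of encoding the value in the type parameter is that you can then dispatch on the direction directly. A sketch building on the definitions above (align is just an illustrative name):
align(::MyDirection{:left})   = println("aligning left")
align(::MyDirection{:right})  = println("aligning right")
align(::MyDirection{:center}) = println("centering")

align(MyDirection("left"))    # prints "aligning left"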
No, it is not. That would be dispatching on values, which isn't possible in Julia.
I'm not sure what your actual application is, but there are some possibly-appropriate workarounds to this, e.g.
abstract Sam81Args

type ArgRight <: Sam81Args end
type ArgLeft <: Sam81Args end
type ArgCenter <: Sam81Args end

function foo{T<:Sam81Args}(x::Type{T})
    println(T)
end
foo(ArgCenter)
