What does this struct mean in Julia?

I am trying to learn deep learning using Julia. One of the tutorials, which is about MLPs, uses the structure below for modeling multiple layers in an ANN. What does this code mean?
struct Chain
    layers
    Chain(layers...) = new(layers)
end

This definition in isolation doesn't really "mean" anything; it is just a user-defined struct with one field (called layers) and one inner constructor. Usually custom structs like this are used to collect some data and/or to define operations on, e.g. you could define a function f operating on this struct like this:
function f(c::Chain)
    # do something with the layers in the chain
end
but in order to understand what it is used for in this specific case you probably need to consult the documentation and/or the rest of the code.

One peculiarity in this example:
The inner constructor takes a variable number of arguments (layers...) and collects them into a tuple, which is assigned to the field layers.
julia> c = Chain(1, 2, "foo")
Chain((1, 2, "foo"))
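For context, this stripped-down Chain mirrors what Flux-style tutorials typically do next: make the struct callable so that calling the chain threads an input through each stored layer in turn. The foldl one-liner and the toy layers below are my own illustrative sketch, not the tutorial's exact code.
# given the Chain struct above, make it callable: fold the input through the
# stored layers from left to right (illustrative sketch only)
(c::Chain)(x) = foldl((acc, layer) -> layer(acc), c.layers; init = x)

double(x) = 2x
inc(x) = x + 1
c = Chain(double, inc)
c(3)    # == inc(double(3)) == 7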


In (Free) Pascal, can a function return a value that can be modified without dereference?

In Pascal, I understand that one could create a function returning a pointer which can be dereferenced and then assign a value to that, such as in the following (obnoxiously useless) example:
type ptr = ^integer;
var d: integer;

function f(x: integer): ptr;
begin
  f := @x;
end;

begin
  f(d)^ := 4;
end.
And now d is 4.
(The actual usage is to access part of a quite complicated array of records data structure. I know that a class would be better than an array of nested records, but it isn't my code (it's TeX: The Program) and was written before Pascal implementations supported object-orientation. The code was written using essentially a language built on top of Pascal that added macros which expand before the compiler sees them. Thus you could define some macro m that takes an argument x and expands into thearray[x + 1].f1.f2 instead of writing that every time; the usage would be m(x) := somevalue. I want to replicate this functionality with a function instead of a macro.)
However, is it possible to achieve this functionality without the ^ operator? Can a function f be written such that f(x) := y (no caret) assigns the value y to x? I know that this is stupid and the answer is probably no, but I just (a) don't really like the look of it and (b) am trying to mimic exactly the form of the macro I mentioned above.
References are not first-class objects in Pascal, unlike in languages such as C++ or D. So the simple answer is that you cannot directly achieve what you want.
Using a pointer as you illustrated is one way to achieve the same effect although in real code you'd need to return the address of an object whose lifetime extends beyond that of the function. In your code that is not the case because the argument x is only valid until the function returns.
You could use an enhanced record with operator overloading to encapsulate the pointer, and so encapsulate the pointer dereferencing code. That may be a good option, but it very much depends on your overall problem, of which we do not have sight.

The Idiomatic Way To Do OOP, Types and Methods in Julia

As I'm learning Julia, I am wondering how to properly do things I might have done in Python, Java or C++ before. For example, previously I might have used an abstract base class (or interface) to define a family of models through classes. Each class might then have a method like calculate. So to call it I might have model.calculate(), where the model is an object from one of the inheriting classes.
I get that Julia uses multiple dispatch to overload functions with different signatures such as calculate(model). The question I have is how to create different models. Do I use the type system for that and create different types like:
abstract type Model end
struct BlackScholes <: Model end
struct Heston <: Model end
where BlackScholes and Heston are different types of model? If so, then I can overload different calculate methods:
function calculate(model::BlackScholes)
    # code
end
function calculate(model::Heston)
    # code
end
But I'm not sure if this is a proper and idiomatic use of types in Julia. I will greatly appreciate your guidance!
This is a hard question to answer. Julia offers a wide range of tools to solve any given problem, and it would be hard for even a core developer of the language to assert that one particular approach is "right" or even "idiomatic".
For example, in the realm of simulating and solving stochastic differential equations, you could look at the approach taken by Chris Rackauckas (and many others) in the suite of packages under the JuliaDiffEq umbrella. However, many of these people are extremely experienced Julia coders, and what they do may be somewhat out of reach for less experienced Julia coders who just want to model something in a manner that is reasonably sensible and attainable for a mere mortal.
It is possible that the only "right" answer to this question is to direct users to the Performance Tips section of the docs, and then assert that as long as you aren't violating any of the recommendations there, then what you are doing is probably okay.
I think the best way I can answer this question from my own personal experience is to provide an example of how I (a mere mortal) would approach the problem of simulating different Ito processes. It is actually not too far off what you have put in the question, although with one additional layer. To be clear, I make no claim that this is the "right" way to do things, merely that it is one approach that utilizes multiple dispatch and Julia's type system in a reasonably sensible fashion.
I start off with an abstract type, for nesting specific subtypes that represent specific models.
abstract type ItoProcess end
Now I define some specific model subtypes, e.g.
struct GeometricBrownianMotion <: ItoProcess
    mu::Float64
    sigma::Float64
end
struct Heston <: ItoProcess
    mu::Float64
    kappa::Float64
    theta::Float64
    xi::Float64
end
Note, in this case I don't need to add constructors that convert arguments to Float64, since Julia does this automatically, e.g. GeometricBrownianMotion(1, 2.0) will work out-of-the-box, as Julia will automatically convert 1 to 1.0 when constructing the type.
However, I might want to add some constructors for common parameterizations, e.g.
GeometricBrownianMotion() = GeometricBrownianMotion(0.0, 1.0)
I might also want some functions that return useful information about my models, e.g.
number_parameter(model::GeometricBrownianMotion) = 2
number_parameter(model::Heston) = 4
In fact, given how I've defined the models above, I could actually be a bit sneaky and define a method that works for all subtypes:
number_parameter(model::T) where {T<:ItoProcess} = length(fieldnames(typeof(model)))
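For example, both the specific methods and the generic fallback agree with the field counts of the structs above (the parameter values here are arbitrary):
number_parameter(GeometricBrownianMotion(0.0, 1.0))    # 2
number_parameter(Heston(0.05, 2.0, 0.04, 0.3))         # 4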
Now I want to add some code that allows me to simulate my models:
function simulate(model::T, numobs::Int, stval) where {T<:ItoProcess}
    # code here that is common to all subtypes of ItoProcess
    simulate_inner(model, somethingelse)
    # maybe more code that is common to all subtypes of ItoProcess
end
function simulate_inner(model::GeometricBrownianMotion, somethingelse)
    # code here that is specific to GeometricBrownianMotion
end
function simulate_inner(model::Heston, somethingelse)
    # code here that is specific to Heston
end
Note that I have used the abstract type to allow me to group all code that is common to all subtypes of ItoProcess in the simulate function. I then use multiple dispatch and simulate_inner to run any code that needs to be specific to a particular subtype of ItoProcess. For the aforementioned reasons, I hesitate to use the phrase "idiomatic", but let me instead say that the above is quite a common pattern in typical Julia code.
The one thing to be careful of in the above code is to ensure that the output type of the simulate function is type-stable, that is, the output type can be uniquely determined by the input types. Type stability is usually an important factor in ensuring performant Julia code. An easy way in this case to ensure type-stability is to always return Matrix{Float64} (if the output type is fixed for all subtypes of ItoProcess then obviously it is uniquely determined). I examine a case where the output type depends on input types below for my estimate example. Anyway, for simulate I might always return Matrix{Float64}, since for GeometricBrownianMotion I only need one column, but for Heston I will need two (the first for the price of the asset, the second for the volatility process).
In fact, depending on how the code is used, type-stability is not always necessary for performant code (see eg using function barriers to prevent type-instability from flowing through to other parts of your program), but it is a good habit to be in (for Julia code).
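To make the type-stability point concrete, here is a hypothetical simulate_inner for GeometricBrownianMotion that always fills a Matrix{Float64}; the Euler-Maruyama discretisation and the extra arguments (numobs, stval, dt) are my own illustrative assumptions, not part of the answer above.
function simulate_inner(model::GeometricBrownianMotion, numobs::Int, stval::Float64, dt::Float64)
    out = Matrix{Float64}(undef, numobs, 1)    # fixed output type keeps simulate type-stable
    x = stval
    for t in 1:numobs
        # Euler-Maruyama step for dX = mu*X*dt + sigma*X*dW
        x += model.mu * x * dt + model.sigma * x * sqrt(dt) * randn()
        out[t, 1] = x
    end
    return out
end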
I might also want routines to estimate these models. Again, I can follow the same approach (but with a small twist):
function estimate(modeltype::Type{T}, data)::T where {T<:ItoProcess}
    # again, code common to all subtypes of ItoProcess
    estimate_inner(modeltype, data)
    # more common code
    return T(some stuff generated from function that can be used to construct T)
end
function estimate_inner(modeltype::Type{GeometricBrownianMotion}, data)
    # code specific to GeometricBrownianMotion
end
function estimate_inner(modeltype::Type{Heston}, data)
    # code specific to Heston
end
There are a few differences from the simulate case. Instead of inputting an instance of GeometricBrownianMotion or Heston, I instead input the type itself. This is because I don't actually need an instance of the type with defined values for the fields. In fact, the values of those fields are the very thing I am attempting to estimate! But I still want to use multiple dispatch, hence the ::Type{T} construct. Note also I have specified an output type for estimate. This output type depends on the ::Type{T} input, and so the function is type-stable (the output type can be uniquely determined by the input types). As in the simulate case, I have structured the code so that code that is common to all subtypes of ItoProcess only needs to be written once, and code that is specific to the subtypes is separated out.
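For concreteness, a hypothetical estimate_inner for GeometricBrownianMotion might back out the drift and volatility from log returns; the formulas and the assumption that data is a price path observed at unit time steps are mine, purely for illustration.
using Statistics   # mean, var

function estimate_inner(::Type{GeometricBrownianMotion}, data::Vector{Float64})
    r = diff(log.(data))          # log returns at unit time steps (assumed)
    sigma2 = var(r)
    mu = mean(r) + sigma2 / 2     # drift of the level process, with the Ito correction
    return (mu, sqrt(sigma2))
end

# estimate would then finish with something like
# return GeometricBrownianMotion(estimate_inner(GeometricBrownianMotion, data)...)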
This answer is turning into an essay, so I should tie it off here. Hopefully this is useful to the OP, as well as anyone else getting into Julia. I just want to finish by emphasizing that what I have done above is only one approach, there are others that will be just as performant, but I have personally found the above to be useful from a structural perspective, as well as reasonably common across the Julia ecosystem.

What is the name for a partially applied map?

In functional programming most common patterns tend to be given names that help differentiate them.
I am trying to find the name for a type of function that I would describe as a partially applied map.
An example in Python:
from functools import partial
seq_len = partial(map, len)
seq_len(['alpha', 'beta', 'charlie'])
I understand these could also be described as functions that take sequences as input.
Some functions that I identified as being interesting subjects of such partial application are:
map
reduce
filter
More:
Python uses the "iterator" name to refer to objects that implement the iterator protocol. But in itertools, functions that take sequences and return sequences are also referred to as "iterators" (https://docs.python.org/3.7/library/itertools.html).
JavaScript uses an identical definition for iterator (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Iterators_and_Generators).
Rust uses the term "consuming adaptors" for methods that consume the iterator and "iterator adaptors" for methods that transform the input iterator into a different iterator on output.
The term partially applied map implies that the map function is partially applied. This is confusing.
The general principle you are looking for is currying. The function map in your example normally takes two parameters in Python. If you curry map(fn, list) to produce map'(fn), and then apply map' to len, you get a new function map_len, which you can then apply to a list of the appropriate type.
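Since the rest of this page is Julia-centric, here is the same partial-application idea expressed in Julia; Base.Fix1 fixes the first argument of a two-argument callable, playing roughly the role of functools.partial in the Python example above.
seq_len = Base.Fix1(map, length)        # roughly partial(map, len)
seq_len(["alpha", "beta", "charlie"])   # -> [5, 4, 7]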
You can search for terms in functional programming, such as curry, compose and apply.

How to obtain deep copies of Julia composite types?

So here is the setting. I have multiple composite types defined with their own fields and constructors. Let's show two simplified components here:
mutable struct component1
    x
    y
end
mutable struct component2
    x
    y
    z
end
Now I want to define a new type that can store an array of size K of the previously defined composite types. So it is a parametric composite type with two fields: one is an integer K, and the other is an array of size K of the type passed.
mutable struct mixture{T}
    components::Array{T, 1}
    K::Int64
    function mixture(qq::T, K::Int64) where {T}
        components = Array{T, 1}(undef, K)
        for k in 1:K
            components[k] = qq
        end
        new{T}(components, K)
    end
end
But this is not the correct way to do it, because all the K components refer to one object, and manipulating mixture.components[k] will affect all K components. In Python I can remedy this with deepcopy. But the deepcopy in Julia is not defined for composite types. How do I solve this problem?
An answer to your specific question:
When you define a new type in Julia, it is common to extend some of the standard methods in Base to your new type, including deepcopy. For example:
mutable struct MyType
    x::Vector
    y::Vector
end
import Base.deepcopy
Base.deepcopy(m::MyType) = MyType(deepcopy(m.x), deepcopy(m.y))
Now you can call deepcopy over an instance of MyType and you will get a new, truly independent, copy of MyType as the output.
Note, my import Base.deepcopy is actually redundant, since I've referenced Base in my function definition, e.g. Base.deepcopy(m::MyType). However, I did both of these to show you the two ways of extending a method from Base.
Second note, if your type has lots of fields, you might instead iterate over the fields using deepcopy as follows:
Base.deepcopy(m::MyType) = MyType([ deepcopy(getfield(m, k)) for k = 1:fieldcount(typeof(m)) ]...)
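As a quick check that the extended method really gives an independent copy (using the MyType definition above):
m1 = MyType([1.0, 2.0], [3.0])
m2 = deepcopy(m1)
m2.x[1] = 99.0
m1.x[1]    # still 1.0, so m1 was not affected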
A comment on your code:
First, it is standard practice in Julia to capitalize type names, e.g. Component1 instead of component1. Of course, you don't have to do this, but...
Second, from the Julia docs performance tips: Declare specific types for fields of composite types. Note, you can parameterize these declarations, e.g.
mutable struct Component1{T1, T2}
    x::T1
    y::T2
end
Third, here is how I would have defined your new type:
mutable struct Mixture{T}
    components::Vector{T}
    Mixture{T}(c::Vector{T}) where {T} = new(c)
end
Mixture(c::Vector) = Mixture{eltype(c)}(c)
Mixture(x, K::Int) = Mixture([ deepcopy(x) for k = 1:K ])
There are several important differences here between my code and yours. I'll go through them one at a time.
Your K field was redundant (I think) because it appears to just be the length of components. So it might be simpler to just extend the length method to your new type as follows:
Base.length(m::Mixture) = length(m.components)
and now you can use length(m), where m is an instance of Mixture to get what was previously stored in the K field.
The inner constructor in your type mixture was unusual. Standard practice is for the inner constructor to take arguments that correspond one-to-one (in sequence) to the fields of your type, and then the remainder of the inner constructor just performs any error checks you would like to be done whenever initialising your type. You deviated from this since qq was not (necessarily) an array. This kind of behaviour is better reserved for outer constructors. So, what have I done with constructors?
The inner constructor of Mixture doesn't really do anything other than pass the argument into the field via new. This is because currently there aren't any error checks I need to perform (but I can easily add some in the future).
If I want to call this inner constructor, I need to write something like m = Mixture{MyType}(x), where x is Vector{MyType}. This is a bit annoying. So my first outer constructor automatically infers the contents of {...} using eltype(x). Because of my first outer constructor, I can now initialise Mixture using m = Mixture(x) instead of m = Mixture{MyType}(x)
My second outer constructor corresponds to your inner constructor. It seems to me the behaviour you are after here is to initialise Mixture with the same component repeated K times in components. So I do this with an array comprehension that calls deepcopy(x) K times, which will work as long as a deepcopy method exists for x; if none exists, you'll get a MethodError. This kind of programming is called duck-typing, and in Julia there is typically no performance penalty to using it.
Note, my second outer constructor calls my first outer constructor, which in turn calls my inner constructor. Nesting functionality in this way will, in more complicated scenarios, massively cut down on code duplication.
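As a quick sanity check of the nested constructors (using a throwaway mutable struct, since the question's component types are only sketched):
mutable struct Particle
    x::Float64
end

p = Particle(0.0)
m = Mixture(p, 3)         # three independent deep copies of p
m.components[1].x = 5.0
m.components[2].x         # still 0.0
length(m)                 # 3, via the Base.length extension above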
Sorry, this is a lot to take in I know. Hope it helps.

How to get a function from a symbol without using eval?

I've got a symbol that represents the name of a function to be called:
julia> func_sym = :tanh
I can use that symbol to get the tanh function and call it using:
julia> eval(func_sym)(2)
0.9640275800758169
But I'd rather avoid the 'eval' there as it will be called many times and it's expensive (and func_sym can have several different values depending on context).
IIRC in Ruby you can say something like:
obj.send(func_sym, args)
Is there something similar in Julia?
EDIT: some more details on why I have functions represented by symbols:
I have a type (from a neural network) that includes the activation function. Originally I included it as a function:
mutable struct NeuralLayer
    weights::Matrix{Float32}
    biases::Vector{Float32}
    a_func::Function
end
However, I needed to serialize these things to files using JLD, but it's not possible to serialize a Function, so I went with a symbol:
mutable struct NeuralLayer
    weights::Matrix{Float32}
    biases::Vector{Float32}
    a_func::Symbol
end
And currently I use the eval approach above to call the activation function. There are collections of NeuralLayers, and each can have its own activation function.
Isaiah's answer is spot-on, perhaps even more so after the edit to the original question. To elaborate and make this more specific to your case: I'd change your NeuralLayer type to be parametric:
mutable struct NeuralLayer{func_type}
    weights::Matrix{Float32}
    biases::Vector{Float32}
end
Since func_type doesn't appear in the types of the fields, the constructor will require you to explicitly specify it: layer = NeuralLayer{:excitatory}(w, b). One restriction here is that you cannot modify a type parameter.
Now, func_type could be a symbol (like you're doing now) or it could be a more functionally relevant parameter (or parameters) that tunes your activation function. Then you define your activation functions like this:
# If you define your NeuralLayer with just one parameter:
activation(layer::NeuralLayer{:inhibitory}) = …
activation(layer::NeuralLayer{:excitatory}) = …
# Or if you want to use several physiological parameters instead:
activation(layer::NeuralLayer{g_K,g_Na,g_l}) where {g_K,g_Na,g_l} = f(g_K, g_Na, g_l)
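Putting the one-parameter version together, a minimal runnable sketch might look like the following; the :tanh/:relu symbols and the activation bodies are illustrative stand-ins, not part of the answer above.
# assuming the parametric NeuralLayer{func_type} definition above
activation(layer::NeuralLayer{:tanh}, x) = tanh.(layer.weights * x .+ layer.biases)
activation(layer::NeuralLayer{:relu}, x) = max.(0f0, layer.weights * x .+ layer.biases)

layer = NeuralLayer{:tanh}(rand(Float32, 4, 3), zeros(Float32, 4))
activation(layer, rand(Float32, 3))    # dispatches on the :tanh type parameter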
The key point is that functions and behavior are external to the data. Use type definitions and abstract type hierarchies to organize dispatch, with the behavior coded in the external functions… but store only the data itself in the types. This is dramatically different from Python or other strongly object-oriented paradigms, and it takes some getting used to.
But I'd rather avoid the 'eval' there as it will be called many times and it's expensive (and func_sym can have several different values depending on context).
This sort of dynamic dispatch is possible in Julia, but not recommended. Changing the value of 'func_sym' based on context defeats type inference as well as method specialization and inlining. Instead, the recommended approach is to use multiple dispatch, as detailed in the Methods section of the manual.
