Fast multi-dimensional Walsh-Hadamard transforms in Julia? - julia

I was looking for a fast implementation of the FWHT (fast Walsh-Hadamard transform) to understand it and implement it in Python (the implementation should handle an n-dimensional array and be able to apply the transform along any specific set of dimensions).
I came across the Julia implementation (https://github.com/stevengj/Hadamard.jl), which seems to be pretty good, but as I am new to Julia I am not able to understand a part of the code:
for (Tr,Tc,fftw,lib) in ((:Float64,:Complex128,"fftw",FFTW.libfftw),
                         (:Float32,:Complex64,"fftwf",FFTW.libfftwf))
    @eval function Plan_Hadamard{N}(X::StridedArray{$Tc,N}, Y::StridedArray{$Tc,N},
                                    region, flags::Unsigned, timelimit::Real,
                                    bitreverse::Bool)
        set_timelimit($Tr, timelimit)
        dims, howmany = dims_howmany(X, Y, [size(X)...], region)
        dims = hadamardize(dims, bitreverse)
        plan = ccall(($(string(fftw,"_plan_guru64_dft")),$lib),
                     PlanPtr,
                     (Int32, Ptr{Int}, Int32, Ptr{Int},
                      Ptr{$Tc}, Ptr{$Tc}, Int32, UInt32),
                     size(dims,2), dims, size(howmany,2), howmany,
                     X, Y, FFTW.FORWARD, flags)
        set_timelimit($Tr, NO_TIMELIMIT)
        if plan == C_NULL
            error("FFTW could not create plan") # shouldn't normally happen
        end
        return cFFTWPlan{$Tc,FFTW.FORWARD,X===Y,N}(plan, flags, region, X, Y)
    end

    @eval function Plan_Hadamard{N}(X::StridedArray{$Tr,N}, Y::StridedArray{$Tr,N},
                                    region, flags::Unsigned, timelimit::Real,
                                    bitreverse::Bool)
        set_timelimit($Tr, timelimit)
        dims, howmany = dims_howmany(X, Y, [size(X)...], region)
        dims = hadamardize(dims, bitreverse)
        kind = Array{Int32}(size(dims,2))
        kind[:] = R2HC
        plan = ccall(($(string(fftw,"_plan_guru64_r2r")),$lib),
                     PlanPtr,
                     (Int32, Ptr{Int}, Int32, Ptr{Int},
                      Ptr{$Tr}, Ptr{$Tr}, Ptr{Int32}, UInt32),
                     size(dims,2), dims, size(howmany,2), howmany,
                     X, Y, kind, flags)
        set_timelimit($Tr, NO_TIMELIMIT)
        if plan == C_NULL
            error("FFTW could not create plan") # shouldn't normally happen
        end
        return r2rFFTWPlan{$Tr,(map(Int,kind)...),X===Y,N}(plan, flags, region, X, Y)
    end
end
In the above code, what is the plan variable, how is it used, and where can I find its implementation?
What are the arguments in the curly braces on the line below?
cFFTWPlan{$Tc,FFTW.FORWARD,X===Y,N}

This is constructing an FFTW "plan" to perform a multidimensional FFT. The cFFTWPlan type is a wrapper around the C fftw_plan pointer, and is implemented in the FFTW.jl module. The arguments in the curly braces are Julia type parameters: in this case, the number type (Tc), the FFTW transform direction (FORWARD), whether the transform is in-place (X === Y), and the dimensionality of the transform (N).
There are two methods here: one for an FWHT of complex-number data that creates a cFFTWPlan (which calls fftw_plan_guru_dft), and one for real-number data that creates an r2rFFTWPlan (which calls fftw_plan_guru_r2r). These internal types of FFTW.jl are undocumented; the low-level C calls directly into the FFTW library are documented in the FFTW manual.
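To get a feel for what the parameters in curly braces mean, here is a minimal toy type (purely hypothetical, not FFTW.jl's actual definition) that mixes a type parameter with plain-bits value parameters, much like cFFTWPlan does:

# Hypothetical illustration: T is a type parameter, while forward, inplace
# and N are isbits *value* parameters, analogous to FORWARD, X===Y and N above.
struct MyPlan{T,forward,inplace,N}
    ptr::Ptr{Cvoid}
end

p = MyPlan{ComplexF64,true,false,2}(C_NULL)
typeof(p)   # MyPlan{ComplexF64, true, false, 2}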
It should, in principle, be possible to make similar calls to FFTW for NumPy arrays. However, the existing pyFFTW wrappers don't seem to support FFTW's r2r transforms (needed for FWHTs of real data), so you'd have to add that.
Or you could call the Julia Hadamard.jl module from Python via the pyjulia package. Or you could use some other Python FWHT package, like https://github.com/FALCONN-LIB/FFHT
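If your end goal is simply to apply a multidimensional FWHT along chosen dimensions, you may not need these internals at all: Hadamard.jl's high-level interface already exposes that. A rough usage sketch, assuming the package's exported fwht/ifwht functions accept a set of dimensions and that the lengths along the transformed dimensions are powers of two:

using Hadamard

X = randn(8, 8, 8)       # 8 is a power of 2, as the transformed dimensions require
Y = fwht(X, (1, 3))      # transform along dimensions 1 and 3 only
Z = ifwht(Y, (1, 3))     # inverse transform; Z ≈ X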

Related

Type declaration in Julia for function argument that can be both array and scalar

What type should I specify in Julia for function arguments that can be either scalars or arrays? For instance, in the function below x and y could be e.g. Float64 or Array{Float64}.
function myfun(x, y)
    return x .+ y
end
Is there an appropriate type declaration for such variables? Or should I just refrain from declaring the type there (or writing functions that are that generic)?
You can safely refrain from specifying types. This will not have an impact on performance of your code.
However, if you want to explicitly spell out the type restriction you described, you can do the following (this is mostly useful to make sure your function is called with the proper arguments, and to fail fast if it is not):
function myfun(x::Union{Float64, Array{Float64}},
               y::Union{Float64, Array{Float64}})
    return x .+ y
end
However, most likely you will rather want the following signature:
function myfun(x::Union{AbstractFloat, AbstractArray{<:AbstractFloat}},
               y::Union{AbstractFloat, AbstractArray{<:AbstractFloat}})
    return x .+ y
end
which says you accept any scalar float or any array of floats (not necessarily only Float64 and Array). This is more flexible: for example, you can then accept views, or other float types (BigFloat or Float32) if you prefer to switch the precision of your computations. Such a signature clearly signals to your users what types of inputs you expect them to pass to myfun, while remaining flexible.
I recommend this version, because being overly restrictive (Union{Float64, Array{Float64}}), while accepted by the compiler, usually leads to problems later when you start using your function with various input types.
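For instance, the more general signature happily accepts combinations that the Float64-only version would reject; a quick sketch:

myfun(1.0f0, 2.0f0)                      # Float32 scalars
myfun(view([1.0, 2.0, 3.0], 1:2), 0.5)   # a view mixed with a scalar
myfun(big"1.5", big.([1.0, 2.0]))        # BigFloat scalar with a BigFloat array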

Julia Flux: writing a regularizer depending on the provided regularization coefficients

As a way to get to know Julia, I am writing a script that converts a Python Keras (v1.1.0) model to a Julia Flux model, and I am struggling with implementing regularization (I have read https://fluxml.ai/Flux.jl/stable/models/regularisation/).
So, in Keras's JSON model I have something like "W_regularizer": {"l2": 0.0010000000474974513, "name": "WeightRegularizer", "l1": 0.0} for each Dense layer. I want to use these coefficients to create regularization in the Flux model. The problem is that in Flux regularization is added directly to the loss, instead of being defined as a property of the layer itself.
To avoid posting too much code here, I've added it to the repo. Here is a small script that takes the JSON and creates Flux's Chain: https://github.com/iegorval/Keras2Flux.jl/blob/master/Keras2Flux/src/Keras2Flux.jl
Now, I want to create a penalty for each Dense layer with the predefined l1/l2 coefficient. I tried to do it like this:
using Pkg
pkg"activate /home/username/.julia/dev/Keras2Flux"
using Flux
using Keras2Flux
using LinearAlgebra
function get_penalty(model::Chain, regs::Array{Any, 1})
    index_model = 1
    index_regs = 1
    penalties = []
    for layer in model
        if layer isa Dense
            println(regs[index_regs](layer.W))
            penalty(m) = regs[index_regs](m[index_model].W)
            push!(penalties, penalty)
            #println(regs[i])
            index_regs += 1
        end
        index_model += 1
    end
    total_penalty(m) = sum([p(m) for p in penalties])
    println(total_penalty)
    println(total_penalty(model))
    return total_penalty
end
model, regs = convert_keras2flux("examples/keras_1_1_0.json")
penalty = get_penalty(model, regs)
So, I create a penalty function for each Dense layer and then sum them up into the total penalty. However, it gives me this error:
ERROR: LoadError: BoundsError: attempt to access 3-element Array{Any,1} at index [4]
I understand what it means, but I really don't understand how to fix it. It seems that when I call total_penalty(model), it uses index_regs == 4 (that is, the values of index_regs and index_model as they are AFTER the for loop). Instead, I want to use the values they had when I pushed the given penalty onto the list of penalties.
On the other hand, if I did it not as a list of functions but as a list of values, it also would not be correct, because I will define the loss as:
loss(x, y) = binarycrossentropy(model(x), y) + total_penalty(model). If I used just a list of values, then I would have a static total_penalty, while it should be recalculated for every Dense layer on every step of the model training.
I would be thankful if somebody with Julia experience could give me some advice, because I am definitely failing to understand how it works in Julia and, specifically, in Flux. How would I create a total_penalty that is recalculated automatically during training?
There are a couple parts to your question, and since you are new to Flux (and Julia?), I will answer in steps. But I suggest the solution at the end as a cleaner way to handle this.
First, there is the issue of p(m) calculating the penalty using index_regs and index_model as the values after the for-loop. This is because of the scoping rules in Julia. When you define the closure penalty(m) = regs[index_regs](m[index_model].W), index_regs is bound to the variable defined in get_penalty. So, as index_regs changes, so does the output of p(m). The other issue is the naming of the function as penalty(m). Every time you run this line, you are redefining penalty and all references to it that you pushed onto penalties. Instead, you should prefer to create an anonymous function. Here is how we incorporate these changes:
function get_penalty(model::Chain, regs::Array{Any, 1})
    index_model = 1
    index_regs = 1
    penalties = []
    for layer in model
        if layer isa Dense
            println(regs[index_regs](layer.W))
            penalty = let i = index_regs, index_model = index_model
                m -> regs[i](m[index_model].W)
            end
            push!(penalties, penalty)
            index_regs += 1
        end
        index_model += 1
    end
    total_penalty(m) = sum([p(m) for p in penalties])
    return total_penalty
end
I used i and index_model in the let block to drive home the scoping rules. I'd encourage you to replace the anonymous function in the let block with global penalty(m) = ... (and remove the assignment to penalty before the let block) to see the difference between using anonymous and named functions.
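To isolate the capture issue, here is a small self-contained sketch (toy functions, not from your code) of why every closure created inside the loop ends up seeing the final value of the captured variable, and how a let block fixes it:

function make_closures()
    fs = []
    i = 1
    for _ in 1:3
        push!(fs, () -> i)   # every closure captures the *same* binding of i
        i += 1
    end
    return fs
end
[f() for f in make_closures()]         # [4, 4, 4]: all see i's final value

function make_closures_fixed()
    fs = []
    i = 1
    for _ in 1:3
        f = let i = i                  # fresh binding captured per iteration
            () -> i
        end
        push!(fs, f)
        i += 1
    end
    return fs
end
[f() for f in make_closures_fixed()]   # [1, 2, 3]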
But, if we go back to your original issue, you want to calculate the regularization penalty for your model using the stored coefficients. Ideally, these would be stored with each Dense layer as in Keras. You can recreate the same functionality in Flux:
using Flux, Functors, LinearAlgebra   # LinearAlgebra provides norm

struct RegularizedDense{T, LT<:Dense}
    layer::LT
    w_l1::T
    w_l2::T
end

@functor RegularizedDense

(l::RegularizedDense)(x) = l.layer(x)

penalty(l) = 0
penalty(l::RegularizedDense) =
    l.w_l1 * norm(l.layer.W, 1) + l.w_l2 * norm(l.layer.W, 2)
penalty(model::Chain) = sum(penalty(layer) for layer in model)
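With those definitions in place, a hypothetical chain (made-up layer sizes, just to show how the pieces fit together; this, like the code above, assumes a Flux version where Dense stores its weight matrix in the W field) would look like:

model = Chain(
    RegularizedDense(Dense(10, 5, relu), 0.0, 1e-3),
    RegularizedDense(Dense(5, 1, sigmoid), 0.0, 1e-3),
)
penalty(model)   # sum of the per-layer L1/L2 penalties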
Then, in your Keras2Flux source, you can redefine get_regularization to return w_l1_reg and w_l2_reg instead of functions. And in create_dense you can do:
function create_dense(config::Dict{String,Any}, prev_out_dim::Int64=-1)
    # ... code you have already written
    dense = Dense(in, out, activation; initW = init, initb = zeros)
    w_l1, w_l2 = get_regularization(config)
    return RegularizedDense(dense, w_l1, w_l2)
end
Lastly, you can compute your loss function like so:
loss(x, y, m) = binarycrossentropy(m(x), y) + penalty(m)
# ... later, for training (m is the model, opt is an optimiser such as ADAM())
Flux.train!((x, y) -> loss(x, y, m), Flux.params(m), training_data, opt)
We define loss as a function of (x, y, m), instead of closing over a global model, to avoid the performance issues that come with non-constant globals.
So, in the end, this approach is cleaner because after model construction, you don't need to pass around an array of regularization functions and figure out how to index each function correctly with the corresponding dense layer.
If you prefer to keep the regularizer and model separate (i.e. have standard Dense layers in your model chain), then you can do that too. Let me know if you want that solution, but I'll leave it out for now.

Can I use a subtype of a function parameter in the function definition?

I would like to use a subtype of a function parameter in my function definition. Is this possible? For example, I would like to write something like:
g{T1, T2<:T1}(x::T1, y::T2) = x + y
So that g will be defined for any x::T1 and any y that is a subtype of T1. Obviously, if I knew, for example, that T1 would always be Number, then I could write g{T<:Number}(x::Number, y::T) = x + y and this would work fine. But this question is for cases where T1 is not known until run-time.
Read on if you're wondering why I would want to do this:
A full description of what I'm trying to do would be a bit cumbersome, but what follows is a simplified example.
I have a parameterised type, and a simple method defined over that type:
type MyVectorType{T}
    x::Vector{T}
end

f1!{T}(m::MyVectorType{T}, xNew::T) = (m.x[1] = xNew)
I also have another type, with an abstract super-type defined as follows
abstract MyAbstract
type MyType <: MyAbstract ; end
I create an instance of MyVectorType with vector element type set to MyAbstract using:
m1 = MyVectorType(Array(MyAbstract, 1))
I now want to place an instance of MyType in MyVectorType. I can do this, since MyType <: MyAbstract. However, I can't do this with f1!, since the function definition means that xNew must be of type T, and T will be MyAbstract, not MyType.
The two solutions I can think of to this problem are:
f2!(m::MyVectorType, xNew) = (m.x[1] = xNew)
f3!{T1, T2}(m::MyVectorType{T1}, xNew::T2) = T2 <: T1 ? (m.x[1] = xNew) : error("Oh dear!")
The first is essentially a duck-typing solution. The second performs the appropriate error check in the first step.
Which is preferred? Or is there a third, better solution I am not aware of?
The ability to define a function g{T, S<:T}(::Vector{T}, ::S) has been referred to as "triangular dispatch" as an analogy to diagonal dispatch: f{T}(::Vector{T}, ::T). (Imagine a table with a type hierarchy labelling the rows and columns, arranged such that the super types are to the top and left. The rows represent the element type of the first argument, and the columns the type of the second. Diagonal dispatch will only match the cells along the diagonal of the table, whereas triangular dispatch matches the diagonal and everything below it, forming a triangle.)
This simply isn't implemented yet. It's a complicated problem, especially once you start considering the scoping of T and S outside of function definitions and in the context of invariance. See issues #3766 and #6984 for more details.
So, practically, in this case, I think duck-typing is just fine. You're relying upon the implementation of MyVectorType to do the error checking when it assigns its elements, which it should be doing in any case.
The solution in base julia for setting elements of an array is something like this:
f!{T}(A::Vector{T}, x::T) = (A[1] = x)
f!{T}(A::Vector{T}, x) = f!(A, convert(T, x))
Note that it doesn't worry about the type hierarchy or the subtype "triangle." It just tries to convert x to T… which is a no-op if x::S, S<:T. And convert will throw an error if it cannot do the conversion or doesn't know how.
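As a quick illustration of how the convert fallback behaves (written here in current where-clause syntax; the snippet above uses the older form), a minimal sketch:

f!(A::Vector{T}, x::T) where {T} = (A[1] = x)
f!(A::Vector{T}, x) where {T}    = f!(A, convert(T, x))

v = Integer[1, 2, 3]
f!(v, 4)        # matches the first method directly
f!(v, 4.0)      # second method; convert(Integer, 4.0) == 4, so this succeeds
# f!(v, 4.5)    # would throw an InexactError: no lossless conversion to Integer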
UPDATE: This is now implemented on the latest development version (0.6-dev)! In this case I think I'd still recommend using convert like I originally answered, but you can now define restrictions within the static method parameters in a left-to-right manner.
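For reference, on Julia 0.7/1.0 and later the same triangular definition is written with a where clause rather than the curly-brace syntax shown in the transcript below:

f!(A::Vector{T1}, x::T2) where {T1, T2<:T1} = "success!"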
julia> f!{T1, T2<:T1}(A::Vector{T1}, x::T2) = "success!"
julia> f!(Any[1,2,3], 4.)
"success!"
julia> f!(Integer[1,2,3], 4.)
ERROR: MethodError: no method matching f!(::Array{Integer,1}, ::Float64)
Closest candidates are:
f!{T1,T2<:T1}(::Array{T1,1}, ::T2<:T1) at REPL[1]:1
julia> f!([1.,2.,3.], 4.)
"success!"

Passing functions in R as .Fortran arguments

After spending some days already searching for something like this on the internet, I still couldn't manage to find anything describing this problem. Reading through the (otherwise quite recommendable) 'Writing R Extensions' didn't offer a solution either. Thus, here's my most urgent question:
Is it possible to pass functions (for simplicity's sake, assume a simple R function - in reality, the problem is even uglier) as function/subroutine parameters to Fortran via a .Fortran(...) call - and if so, how?
I wrote two simple functions in order to test this, first a Fortran subroutine (tailored to use the function I originally intended to pass, thus the kinda weird dimensions in the interface):
subroutine foo(o, x)
    implicit none
    interface
        subroutine o(t, y, param, f)
            double precision, intent(in) :: t
            double precision, dimension(58), intent(in) :: y, param
            double precision, dimension(22), intent(out) :: f
        end subroutine
    end interface
    double precision, dimension(22), intent(out) :: x
    double precision, dimension(58) :: yt, paramt
    integer :: i

    do i = 1, 58
        yt(i) = rand(0)
        paramt(i) = rand(1)
    end do

    call o(dble(4.2), yt, paramt, x)
end subroutine
and a simple R function to pass to the above function:
asdf <- function(a, s, d, f){x <- c(a, s, d, f)}
Calling .Fortran("foo", asdf, vector(mode="numeric", length=22)) yields
Error: invalid mode (closure) to pass to Fortran (arg 1)
and passing "asdf" (as a string) results in a segfault, as the argument obviously doesn't fit the expected type (namely, a function).
FYI, I don't expect the code to do anything meaningful (that would be the task of another function); I mainly would like to know whether passing functions (or function pointers) from R is possible at all, or whether I had better give up on this approach instantly and look for something else that might work.
Thanks in advance,
Dean
You can't pass R objects via .Fortran. You would need to use the .Call or .External interface to pass the R objects to C/C++ code.
You could write a C/C++ wrapper for your R function, which you could then call from your Fortran code (see Calling-C-from-FORTRAN-and-vice-versa in Writing R Extensions).

New to OCaml: How would I go about implementing Gaussian Elimination?

I'm new to OCaml, and I'd like to implement Gaussian Elimination as an exercise. I can easily do it with a stateful algorithm, meaning keep a matrix in memory and recursively operating on it by passing around a reference to it.
This statefulness, however, smacks of imperative programming. I know there are capabilities in OCaml to do this, but I'd like to ask if there is some clever functional way I haven't thought of first.
OCaml arrays are mutable, and it's hard to avoid treating them just like arrays in an imperative language.
Haskell has immutable arrays, but from my (limited) experience with Haskell, you end up switching to monadic, mutable arrays in most cases. Immutable arrays are probably amazing for certain specific purposes. I've always imagined you could write a beautiful implementation of dynamic programming in Haskell, where the dependencies among array entries are defined entirely by the expressions in them. The key is that you really only need to specify the contents of each array entry one time. I don't think Gaussian elimination follows this pattern, and so it seems it might not be a good fit for immutable arrays. It would be interesting to see how it works out, however.
You can use a Map to emulate a matrix. The key would be a pair of integers referencing the row and column. You'll want to use your own get x y function to ensure x < n and y < n though, instead of accessing the Map directly. (edit) You can use the compare function in Pervasives directly.
module OrderedPairs = struct
  type t = int * int
  let compare = Pervasives.compare
end

module Pairs = Map.Make (OrderedPairs)

let get_ n set x y =
  assert (x < n && y < n);
  Pairs.find (x, y) set

let set_ n set x y v =
  assert (x < n && y < n);
  Pairs.add (x, y) v set   (* Map.add takes the key, then the value, then the map *)
Actually, having a general set of functions (get x y and set x y at a minimum), without specifying the implementation, would be an even better option. The functions can then be passed to the algorithm, or implemented in a module through a functor (a better solution, but having a set of functions that just do what you need would be a first step, since you're new to OCaml). In this way you could implement the matrix with a Map, an Array, a Hashtbl, or even a set of functions that access a file on the hard drive, if you wanted. This is the really important aspect of functional programming: you trust the interface rather than exploiting side effects, and you don't worry about the underlying implementation, since it's presumed to be pure.
The answers so far are using/emulating mutable data-types, but what does a functional approach look like?
To see, let's decompose the problem into some functional components:
Gaussian elimination involves a sequence of row operations, so it is useful first to define a function taking 2 rows and scaling factors, and returning the resultant row operation result.
The row operations we want should eliminate a variable (column) from a particular row, so lets define a function which takes a pair of rows and a column index and uses the previously defined row operation to return the modified row with that column entry zero.
Then we define two functions, one to convert a matrix into triangular form, and another to back-substitute a triangular matrix to the diagonal form (using the previously defined functions) by eliminating each column in turn. We could iterate or recurse over the columns, and the matrix could be defined as a list, vector or array of lists, vectors or arrays. The input is not changed, but a modified matrix is returned, so we can finally do:
let out_matrix = to_diagonal (to_triangular in_matrix)
What makes it functional is not whether the data types (array or list) are mutable, but how they are used. This approach may not be particularly 'clever' or the most efficient way to do Gaussian elimination in OCaml, but using pure functions lets you express the algorithm cleanly.
