What is the name for a partially applied map?

In functional programming, the most common patterns tend to be given names that help differentiate them. I am trying to find the name for a type of function that I would describe as a partially applied map.
An example in Python:
from functools import partial
seq_len = partial(map, len)
list(seq_len(['alpha', 'beta', 'charlie']))  # [5, 4, 7]
I understand these could also be described as functions that take sequences as input.
Some functions that I identified as being interesting subjects of such partial application are:
map
reduce
filter
More:
Python uses the name "iterator" for objects that implement the iterator protocol, but in itertools, functions that take sequences and return sequences are also referred to as "iterators" (https://docs.python.org/3.7/library/itertools.html).
JavaScript uses an identical definition of "iterator" (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Iterators_and_Generators).
Rust uses the term "consuming adaptors" for methods that consume the iterator and "iterator adaptors" for methods that transform the input iterator into a different iterator on output.

The term "partially applied map" implies that the map function is partially applied, which is confusing.
The general principle you are looking for is currying. The function map in your example normally takes two parameters in Python. If you curry map(fn, list), you get a function map' that takes fn alone; applying map' to len gives a new function map_len, which you can then apply to a list of the appropriate type.
You can search for functional programming terms such as curry, compose and apply.
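As a minimal sketch of the same idea (written in Julia rather than Python, purely for illustration; the names curried_map and map_len are made up):
curried_map = f -> xs -> map(f, xs)      # a hand-curried, two-step map
map_len = curried_map(length)            # partially apply to length
map_len(["alpha", "beta", "charlie"])    # [5, 4, 7]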

Related

What does this struct mean in Julia?

I am trying to learn deep learning using Julia. One of the tutorials, which is about MLPs, uses the structure below for modeling multiple layers in an ANN. What does this code mean?
struct Chain
    layers
    Chain(layers...) = new(layers)
end
This definition in isolation doesn't really "mean" anything; it is just a user-defined struct with one field (called layers) and one inner constructor. Usually custom structs like this are used for collecting some data and/or for defining operations on it; e.g., you could define a function f operating on this struct like this:
function f(c::Chain)
    # do something with the layers in the chain
end
but in order to understand what it is used for in this specific case you probably need to consult the documentation and/or the rest of the code.
One peculiarity in this example:
The inner constructor takes multiple arguments (layers...) and creates a tuple out of them, which is assigned to the field layers.
julia> c = Chain(1, 2, "foo")
Chain((1, 2, "foo"))
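For context, here is a hypothetical sketch (not from the tutorial) of how a struct like this is often put to work: making the Chain defined above callable, so that calling it applies each stored layer in sequence.
(c::Chain)(x) = foldl((y, layer) -> layer(y), c.layers; init = x)   # apply layers left to right
c = Chain(x -> 2x, x -> x + 1)
c(3)   # 7: the input 3 is doubled to 6, then incremented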

The Idiomatic Way To Do OOP, Types and Methods in Julia

As I'm learning Julia, I am wondering how to properly do things I might have done in Python, Java or C++ before. For example, previously I might have used an abstract base class (or interface) to define a family of models through classes. Each class might then have a method like calculate. So to call it I might have model.calculate(), where the model is an object from one of the inheriting classes.
I get that Julia uses multiple dispatch to overload functions with different signatures such as calculate(model). The question I have is how to create different models. Do I use the type system for that and create different types like:
abstract type Model end
struct BlackScholes <: Model end
struct Heston <: Model end
where BlackScholes and Heston are different types of model? If so, then I can overload different calculate methods:
function calculate(model::BlackScholes)
    # code
end
function calculate(model::Heston)
    # code
end
But I'm not sure if this is a proper and idiomatic use of types in Julia. I will greatly appreciate your guidance!
This is a hard question to answer. Julia offers a wide range of tools to solve any given problem, and it would be hard for even a core developer of the language to assert that one particular approach is "right" or even "idiomatic".
For example, in the realm of simulating and solving stochastic differential equations, you could look at the approach taken by Chris Rackauckas (and many others) in the suite of packages under the JuliaDiffEq umbrella. However, many of these people are extremely experienced Julia coders, and what they do may be somewhat out of reach for less experienced Julia coders who just want to model something in a manner that is reasonably sensible and attainable for a mere mortal.
It is possible that the only "right" answer to this question is to direct users to the Performance Tips section of the docs, and then assert that as long as you aren't violating any of the recommendations there, then what you are doing is probably okay.
I think the best way I can answer this question from my own personal experience is to provide an example of how I (a mere mortal) would approach the problem of simulating different Ito processes. It is actually not too far off what you have put in the question, although with one additional layer. To be clear, I make no claim that this is the "right" way to do things, merely that it is one approach that utilizes multiple dispatch and Julia's type system in a reasonably sensible fashion.
I start off with an abstract type, for nesting specific subtypes that represent specific models.
abstract type ItoProcess end
Now I define some specific model subtypes, e.g.
struct GeometricBrownianMotion <: ItoProcess
    mu::Float64
    sigma::Float64
end
struct Heston <: ItoProcess
    mu::Float64
    kappa::Float64
    theta::Float64
    xi::Float64
end
Note that in this case I don't need to add constructors that convert arguments to Float64, since Julia does this automatically; e.g., GeometricBrownianMotion(1, 2.0) will work out of the box, as Julia automatically converts 1 to 1.0 when constructing the object.
However, I might want to add some constructors for common parameterizations, e.g.
GeometricBrownianMotion() = GeometricBrownianMotion(0.0, 1.0)
I might also want some functions that return useful information about my models, e.g.
number_parameter(model::GeometricBrownianMotion) = 2
number_parameter(model::Heston) = 4
In fact, given how I've defined the models above, I could actually be a bit sneaky and define a method that works for all subtypes:
number_parameter(model::T) where {T<:ItoProcess} = length(fieldnames(typeof(model)))
Now I want to add some code that allows me to simulate my models:
function simulate(model::T, numobs::Int, stval) where {T<:ItoProcess}
    # code here that is common to all subtypes of ItoProcess
    simulate_inner(model, somethingelse)
    # maybe more code that is common to all subtypes of ItoProcess
end
function simulate_inner(model::GeometricBrownianMotion, somethingelse)
    # code here that is specific to GeometricBrownianMotion
end
function simulate_inner(model::Heston, somethingelse)
    # code here that is specific to Heston
end
Note that I have used the abstract type to allow me to group all code that is common to all subtypes of ItoProcess in the simulate function. I then use multiple dispatch and simulate_inner to run any code that needs to be specific to a particular subtype of ItoProcess. For the aforementioned reasons, I hesitate to use the phrase "idiomatic", but let me instead say that the above is quite a common pattern in typical Julia code.
The one thing to be careful of in the above code is to ensure that the output type of the simulate function is type-stable, that is, the output type can be uniquely determined by the input types. Type stability is usually an important factor in ensuring performant Julia code. An easy way in this case to ensure type-stability is to always return Matrix{Float64} (if the output type is fixed for all subtypes of ItoProcess then obviously it is uniquely determined). I examine a case where the output type depends on input types below for my estimate example. Anyway, for simulate I might always return Matrix{Float64} since for GeometricBrownianMotion I only need one column, but for Heston I will need two (the first for price of the asset, the second for the volatility process).
In fact, depending on how the code is used, type-stability is not always necessary for performant code (see, e.g., using function barriers to prevent type instability from flowing through to other parts of your program), but it is a good habit to be in (for Julia code).
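To make the function-barrier idea concrete, here is a small hypothetical sketch (it reuses the GeometricBrownianMotion type defined above; the names run_from_config and simulate_kernel, and the update step, are placeholders of my own, not part of the answer):
function run_from_config(config::Dict{String,Any})         # configuration values are untyped
    model = GeometricBrownianMotion(config["mu"], config["sigma"])
    return simulate_kernel(model, 1_000)                   # barrier: from here on, the model type is concrete
end
function simulate_kernel(model::GeometricBrownianMotion, numobs::Int)::Matrix{Float64}
    out = Matrix{Float64}(undef, numobs, 1)
    x = 1.0
    for t in 1:numobs
        x += model.mu + model.sigma * randn()               # placeholder update step
        out[t, 1] = x
    end
    return out
end
run_from_config(Dict{String,Any}("mu" => 0.0, "sigma" => 0.1))   # always returns a Matrix{Float64}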
I might also want routines to estimate these models. Again, I can follow the same approach (but with a small twist):
function estimate(modeltype::Type{T}, data)::T where {T<:ItoProcess}
    # again, code common to all subtypes of ItoProcess
    estimate_inner(modeltype, data)
    # more common code
    return T(...) # construct T from the quantities estimated above
end
function estimate_inner(modeltype::Type{GeometricBrownianMotion}, data)
    # code specific to GeometricBrownianMotion
end
function estimate_inner(modeltype::Type{Heston}, data)
    # code specific to Heston
end
There are a few differences from the simulate case. Instead of inputting an instance of GeometricBrownianMotion or Heston, I instead input the type itself. This is because I don't actually need an instance of the type with defined values for the fields. In fact, the values of those fields are the very thing I am attempting to estimate! But I still want to use multiple dispatch, hence the ::Type{T} construct. Note also that I have specified an output type for estimate. This output type depends on the ::Type{T} input, and so the function is type-stable (the output type can be uniquely determined by the input types). As in the simulate case, I have structured the code so that code common to all subtypes of ItoProcess only needs to be written once, and code specific to the subtypes is separated out.
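As a hypothetical usage sketch (it assumes the estimate_inner methods and the final construction of T have actually been filled in):
data = randn(1_000)                              # made-up data
gbm = estimate(GeometricBrownianMotion, data)    # the ::T annotation guarantees a GeometricBrownianMotion
hst = estimate(Heston, data)                     # ... and here a Heston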
This answer is turning into an essay, so I should tie it off here. Hopefully this is useful to the OP, as well as anyone else getting into Julia. I just want to finish by emphasizing that what I have done above is only one approach, there are others that will be just as performant, but I have personally found the above to be useful from a structural perspective, as well as reasonably common across the Julia ecosystem.

What does the jq notation <function>/<number> mean?

In various web pages, I see references to jq functions with a slash and a number following them. For example:
walk/1
I found the above notation used on a stackoverflow page.
I could not find a definition of this notation in the jq Manual page. I'm guessing it indicates that the walk function takes 1 argument. If so, I wonder why a more meaningful notation isn't used, such as the signature style of C++, Java, and other languages:
<function>(type1, type2, ..., typeN)
Can anyone confirm what the notation <function>/<number> means? Are other variants used?
The notation name/arity gives the name and arity of the function. "arity" is the number of arguments (i.e., parameters), so for example explode/0 means you'd just write explode without any arguments, and map/1 means you'd write something like map(f).
The fact that 0-arity functions are invoked by name, without any parentheses, makes the notation especially handy. The fact that a function name can have multiple definitions at any one time (each definition having a distinct arity) makes it easy to distinguish between them.
This notation is not used in jq programs, but it is used in the output of the (new) built-in filter, builtins/0.
By contrast, in some other programming languages, it (or some close variant, e.g. module:name/arity in Erlang) is also part of the language.
Why?
There are various difficulties which typically arise when attempting to graft a notation that's suitable for languages in which method-dispatch is based on types onto ones in which dispatch is based solely on arity.
The first, as already noted, has to do with 0-arity functions. This is especially problematic for jq as 0-arity functions are invoked in jq without parentheses.
The second is that, in general, jq functions do not require their arguments to be any one jq type. Having to write something like nth(string+number) rather than just nth/1 would be tedious at best.
This is why the manual strenuously avoids using "name(type)"-style notation. Thus we see, for example, startswith(str), rather than startswith(string). That is, the parameter names in the documentation are clearly just names, though of course they often give strong type hints.
If you're wondering why the 'name/arity' convention isn't documented in the manual, it's probably largely because the documentation was mostly written before jq supported multi-arity functions.
In summary -- any notational scheme can be made to work, but name/arity is (1) concise; (2) precise in the jq context; (3) easy-to-learn; and (4) widely in use for arity-oriented languages, at least on this planet.

How to get a function from a symbol without using eval?

I've got a symbol that represents the name of a function to be called:
julia> func_sym = :tanh
I can use that symbol to get the tanh function and call it using:
julia> eval(func_sym)(2)
0.9640275800758169
But I'd rather avoid the 'eval' there as it will be called many times and it's expensive (and func_sym can have several different values depending on context).
IIRC in Ruby you can say something like:
obj.send(func_sym, args)
Is there something similar in Julia?
EDIT: some more details on why I have functions represented by symbols:
I have a type (from a neural network) that includes the activation function. Originally I included it as a function:
mutable struct NeuralLayer
    weights::Matrix{Float32}
    biases::Vector{Float32}
    a_func::Function
end
However, I needed to serialize these things to files using JLD, but it's not possible to serialize a Function, so I went with a symbol:
mutable struct NeuralLayer
    weights::Matrix{Float32}
    biases::Vector{Float32}
    a_func::Symbol
end
And currently I use the eval approach above to call the activation function. There are collections of NeuralLayers, and each can have its own activation function.
Isaiah's answer is spot-on; perhaps even more so after the edit to the original question. To elaborate and make this more specific to your case: I'd change your NeuralLayer type to be parametric:
mutable struct NeuralLayer{func_type}
    weights::Matrix{Float32}
    biases::Vector{Float32}
end
Since func_type doesn't appear in the types of the fields, the constructor will require you to explicitly specify it: layer = NeuralLayer{:excitatory}(w, b). One restriction here is that you cannot modify a type parameter.
Now, func_type could be a symbol (like you're doing now) or it could be a more functionally relevant parameter (or parameters) that tunes your activation function. Then you define your activation functions like this:
# If you define your NeuralLayer with just one parameter:
activation(layer::NeuralLayer{:inhibitory}) = …
activation(layer::NeuralLayer{:excitatory}) = …
# Or if you want to use several physiological parameters instead:
activation(layer::NeuralLayer{g_K,g_Na,g_l}) where {g_K,g_Na,g_l} = f(g_K, g_Na, g_l)
The key point is that functions and behavior are external to the data. Use type definitions and abstract type hierarchies to define behavior, with that behavior coded in the external functions… but only store the data itself in the types. This is dramatically different from Python or other strongly object-oriented paradigms, and it takes some getting used to.
But I'd rather avoid the 'eval' there as it will be called many times and it's expensive (and func_sym can have several different values depending on context).
This sort of dynamic dispatch is possible in Julia, but not recommended. Changing the value of 'func_sym' based on context defeats type inference as well as method specialization and inlining. Instead, the recommended approach is to use multiple dispatch, as detailed in the Methods section of the manual.
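A minimal sketch of that recommended direction (the activation names and types below are made up for illustration): encode the choice of activation as a type and let dispatch pick the method, instead of eval'ing a Symbol at call time.
abstract type Activation end
struct TanhAct <: Activation end
struct ReluAct <: Activation end
apply_activation(::TanhAct, x) = tanh.(x)
apply_activation(::ReluAct, x) = max.(0, x)
apply_activation(TanhAct(), [1.0, 2.0])   # [0.7615941559557649, 0.9640275800758169]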

The difference between MapReduce and the map-reduce combination in functional programming

I read about MapReduce at http://en.wikipedia.org/wiki/MapReduce and understood the example of how to get the count of a "word" in many "documents". However, I did not understand the following line:
Thus the MapReduce framework transforms a list of (key, value) pairs into a list of values. This behavior is different from the functional programming map and reduce combination, which accepts a list of arbitrary values and returns one single value that combines all the values returned by map.
Can someone elaborate on the difference again (the MapReduce framework vs. the map and reduce combination)? In particular, what does reduce in functional programming do?
Thanks a great deal.
The main difference would be that MapReduce is apparently patentable. (Couldn't help myself, sorry...)
On a more serious note, the MapReduce paper, as I remember it, describes a methodology of performing calculations in a massively parallelised fashion. This methodology builds upon the map / reduce construct which was well known for years before, but goes beyond into such matters as distributing the data etc. Also, some constraints are imposed on the structure of data being operated upon and returned by the functions used in the map-like and reduce-like parts of the computation (the thing about data coming in lists of key/value pairs), so you could say that MapReduce is a massive-parallelism-friendly specialisation of the map & reduce combination.
As for the Wikipedia comment on the function being mapped in the functional programming's map / reduce construct producing one value per input... Well, sure it does, but here there are no constraints at all on the type of said value. In particular, it could be a complex data structure like perhaps a list of things to which you would again apply a map / reduce transformation. Going back to the "counting words" example, you could very well have a function which, for a given portion of text, produces a data structure mapping words to occurrence counts, map that over your documents (or chunks of documents, as the case may be) and reduce the results.
In fact, that's exactly what happens in this article by Phil Hagelberg. It's a fun and supremely short example of a MapReduce-word-counting-like computation implemented in Clojure with map and something equivalent to reduce (the (apply + (merge-with ...)) bit -- merge-with is implemented in terms of reduce in clojure.core). The only difference between this and the Wikipedia example is that the objects being counted are URLs instead of arbitrary words -- other than that, you've got a counting words algorithm implemented with map and reduce, MapReduce-style, right there. The reason why it might not fully qualify as being an instance of MapReduce is that there's no complex distribution of workloads involved. It's all happening on a single box... albeit on all the CPUs the box provides.
For in-depth treatment of the reduce function -- also known as fold -- see Graham Hutton's A tutorial on the universality and expressiveness of fold. It's Haskell based, but should be readable even if you don't know the language, as long as you're willing to look up a Haskell thing or two as you go... Things like ++ = list concatenation, no deep Haskell magic.
Using the word count example, the original functional map() would take a set of documents, optionally distribute subsets of that set, and for each document emit a single value representing the number of words (or a particular word's occurrences) in the document. A functional reduce() would then add up those per-document counts into a global total, so you get a total count (either of all words or of a particular word).
In MapReduce, the map would emit a (word, count) pair for each word in each document. A MapReduce reduce() would then add up the counts for each word across documents without mixing them into a single pile, so you get a list of words paired with their counts.
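A tiny sketch of the contrast (written in Julia purely for illustration, with made-up data):
docs = ["the cat sat", "the cat ran"]
# Functional map/reduce: one number per document, reduced to a single total.
total = reduce(+, map(d -> length(split(d)), docs))   # 6
# MapReduce-style: emit a (word, 1) pair per word, then reduce per key.
counts = Dict{String,Int}()
for d in docs, w in split(d)
    counts[w] = get(counts, w, 0) + 1
end
counts   # Dict("the" => 2, "cat" => 2, "sat" => 1, "ran" => 1)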
MapReduce is a framework built around splitting a computation into parallelizable mappers and reducers. It builds on the familiar idiom of map and reduce - if you can structure your tasks such that they can be performed by independent mappers and reducers, then you can write it in a way which takes advantage of a MapReduce framework.
Imagine a Python interpreter which recognized tasks which could be computed independently, and farmed them out to mapper or reducer nodes. If you wrote
reduce(lambda x, y: x+y, map(int, ['1', '2', '3']))
or
sum([int(x) for x in ['1', '2', '3']])
you would be using functional map and reduce methods in a MapReduce framework. With current MapReduce frameworks, there's a lot more plumbing involved, but it's the same concept.
