How to use method reference in Java 8 for Map merge?

I have the following two forms of calling a collect operation. Both return the same result, but I still cannot rely fully on method references and need a lambda.
<R> R collect(Supplier<R> supplier,
              BiConsumer<R, ? super T> accumulator,
              BiConsumer<R, R> combiner)
For this, consider the following stream consisting of 100 random numbers:
List<Double> dataList = new Random().doubles().limit(100).boxed()
.collect(Collectors.toList());
1) The following example uses pure lambdas:
Map<Boolean, Integer> partition = dataList.stream()
    .collect(() -> new ConcurrentHashMap<Boolean, Integer>(),
        (map, x) -> {
            map.merge(x < 0.5 ? Boolean.TRUE : Boolean.FALSE, 1, Integer::sum);
        },
        (map, map2) -> {
            map2.putAll(map);
        });
2) The following tries to use method references, but the 2nd argument still requires a lambda:
Map<Boolean, Integer> partition2 = dataList.stream()
    .collect(ConcurrentHashMap<Boolean, Integer>::new,
        (map, x) -> {
            map.merge(x < 0.5 ? Boolean.TRUE : Boolean.FALSE, 1, Integer::sum);
        },
        Map::putAll);
How can I rewrite the 2nd argument of the collect method in Java 8 to use a method reference instead of a lambda in this example?
System.out.println(partition.toString());
System.out.println(partition2.toString());
{false=55, true=45}
{false=55, true=45}

A method reference is a handy tool if you have an existing method doing exactly the intended thing. If you need adaptations or additional operations, there is no special syntax for method references to support that, unless you consider lambda expressions to be that syntax.
Of course, you can create a new method in your class doing the desired thing and create a method reference to it, and that's the right way to go when the complexity of the code rises, as it will then get a meaningful name and become testable. But for simple code snippets, you can use lambda expressions, which are just a simpler syntax for the same result. Technically, there is no difference, except that the compiler-generated method holding the lambda expression body will be marked as "synthetic".
In your example, you can’t even use Map::putAll as merge function, as that would overwrite all existing mappings of the first map instead of merging the values.
A correct implementation would look like
Map<Boolean, Integer> partition2 = dataList.stream()
    .collect(HashMap::new,
        (map, x) -> map.merge(x < 0.5, 1, Integer::sum),
        (m1, m2) -> m2.forEach((k, v) -> m1.merge(k, v, Integer::sum)));
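If you really want the accumulator and combiner to be method references, the extracted-method route mentioned above would look roughly like this; the Partitioning class and its method names are made up for illustration:
class Partitioning {
    // named accumulator: referenceable and testable on its own
    static void accumulate(Map<Boolean, Integer> map, Double x) {
        map.merge(x < 0.5, 1, Integer::sum);
    }
    // named combiner that actually merges the counts
    static void combine(Map<Boolean, Integer> m1, Map<Boolean, Integer> m2) {
        m2.forEach((k, v) -> m1.merge(k, v, Integer::sum));
    }
}
Map<Boolean, Integer> partition2 = dataList.stream()
    .collect(HashMap::new, Partitioning::accumulate, Partitioning::combine);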
but you don’t need to implement it by yourself. There are appropriate built-in collectors already offered in the Collectors class:
Map<Boolean, Long> partition2 = dataList.stream()
    .collect(Collectors.partitioningBy(x -> x < 0.5, Collectors.counting()));
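Note that Collectors.counting() yields Long values, so the result type becomes Map<Boolean, Long>. If you specifically need the Map<Boolean, Integer> shape from the question, one variant (my addition, not part of the original answer) is to sum a constant 1 per element:
Map<Boolean, Integer> partition3 = dataList.stream()
    .collect(Collectors.partitioningBy(x -> x < 0.5, Collectors.summingInt(x -> 1)));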

Related

porting python class to Julialang

I see that Julia explicitly does NOT do classes, and that I should instead embrace mutable structs. Am I going down the correct path here? I diffed my trivial example against an official Flux library layer, but I cannot work out how to reference self like on a Python object. Is the cleanest way simply to pass the type as a parameter in the function?
Python
# Dense Layer
class Layer_Dense:
def __init__(self, n_inputs, n_neurons):
self.weights = 0.01 * np.random.randn(n_inputs, n_neurons)
self.biases = np.zeros((1, n_neurons))
def forward(self, inputs):
pass
My JuliaLang version so far
mutable struct LayerDense
num_inputs::Int64
num_neurons::Int64
weights
biases
end
function forward(layer::LayerDense, inputs)
layer.weights = 0.01 * randn(layer.num_inputs, layer.num_neurons)
layer.biases = zeros((1, layer.num_neurons))
end
The Flux library's version of a dense layer looks very different to me, and I do not know what they're doing or why. For example, where is the forward pass call? Is it here in Flux just named after the layer, Dense?
source : https://github.com/FluxML/Flux.jl/blob/b78a27b01c9629099adb059a98657b995760b617/src/layers/basic.jl#L71-L111
struct Dense{F, M<:AbstractMatrix, B}
weight::M
bias::B
σ::F
function Dense(W::M, bias = true, σ::F = identity) where {M<:AbstractMatrix, F}
b = create_bias(W, bias, size(W,1))
new{F,M,typeof(b)}(W, b, σ)
end
end
function Dense(in::Integer, out::Integer, σ = identity;
initW = nothing, initb = nothing,
init = glorot_uniform, bias=true)
W = if initW !== nothing
Base.depwarn("keyword initW is deprecated, please use init (which similarly accepts a funtion like randn)", :Dense)
initW(out, in)
else
init(out, in)
end
b = if bias === true && initb !== nothing
Base.depwarn("keyword initb is deprecated, please simply supply the bias vector, bias=initb(out)", :Dense)
initb(out)
else
bias
end
return Dense(W, b, σ)
end
This is an equivalent of your Python code in Julia:
mutable struct Layer_Dense
weights::Matrix{Float64}
biases::Matrix{Float64}
Layer_Dense(n_inputs::Integer, n_neurons::Integer) =
new(0.01 * randn(n_inputs, n_neurons),
zeros((1, n_neurons)))
end
forward(ld::Layer_Dense, inputs) = nothing
What is important here:
- I create an inner constructor only, as an outer constructor is not needed; by contrast, the Flux.jl code you have linked defines both inner and outer constructors for the Dense type.
- In Python the forward function does not do anything, so I copied that in Julia (your Julia code worked a bit differently); note that instead of self you pass an instance of the object to the function as the first argument (and add the ::Layer_Dense type signature so that Julia knows how to dispatch it correctly).
- Similarly, in Python you store only weights and biases in the class, and I have reflected this in the Julia code; note, however, that for performance reasons it is better to give these two fields of the Layer_Dense struct explicit types.
like where is the forward pass call
In the code you have shared, only constructors of the Dense object are defined. However, in the lines below here and here the Dense type is defined to be a functor.
Functors are explained here (in general) and here (more specifically for your use case).

Java 8 Functional Programming - Passing function along with its argument

I have a question on Java 8 Functional Programming. I am trying to achieve something using functional programming, and need some guidance on how to do it.
My requirement is to wrap every method execution inside a timer function which times the method execution. Here's an example of the timer function and the 2 functions I need to time:
timerMethod(String timerName, Function func) {
    timer.start(timerName);
    func.apply();
    timer.stop();
}
functionA(String arg1, String arg2)
functionB(int arg1, intArg2, String ...arg3)
I am trying to pass functionA & functionB to timerMethod, but functionA & functionB expect different numbers and types of arguments for execution.
Any ideas on how I can achieve this?
Thanks!!
You should separate this into two concerns to keep your code easy to use and maintain: one is timing, the other is invoking. For example:
// v--- invoking occurs in request-time
R1 result1 = timerMethod("functionA", () -> functionA("foo", "bar"));
R2 result2 = timerMethod("functionB", () -> functionB(1, 2, "foo", "bar"));
// the timerMethod only calculates the timing cost
<T> T timerMethod(String timerName, Supplier<T> func) {
timer.start(timerName);
try {
return func.get();
} finally {
timer.stop();
}
}
If you want to return a functional interface rather than the result of that method, you can do it as below:
Supplier<R1> timingFunctionA =timerMethod("A", ()-> functionA("foo", "bar"));
Supplier<R2> timingFunctionB =timerMethod("B", ()-> functionB(1, 2, "foo", "bar"));
<T> Supplier<T> timerMethod(String timerName, Supplier<T> func) {
// v--- calculate the timing-cost when the wrapper function is invoked
return () -> {
timer.start(timerName);
try {
return func.get();
} finally {
timer.stop();
}
};
}
Notes
If the return type of all of your functions is void, you can replace Supplier with Runnable, make timerMethod's return type void, and remove the return keyword (see the sketch after these notes).
If some of your functions throw a checked exception, you can replace Supplier with Callable and invoke Callable#call instead.
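As a sketch of the first note, a void-returning variant could look like this (doSomething is a hypothetical void method, used only to show the call site):
void timerMethod(String timerName, Runnable func) {
    timer.start(timerName);
    try {
        func.run();
    } finally {
        timer.stop();
    }
}
// usage: timerMethod("task", () -> doSomething());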
Don't hold onto the arguments and then pass them at the last moment. Pass them immediately, but delay calling the function by wrapping it with another function:
Supplier<?> f1 =
    () -> functionA(arg1, arg2);
Supplier<?> f2 =
    () -> functionB(arg1, arg2, arg3);
Here, I'm wrapping each function call in a lambda (() -> ...) that takes 0 arguments. Then, just call them later with no arguments via get():
f1.get();
f2.get();
This forms a closure over the arguments that you supplied in the lambda, which allows you to use those values later, even after the original variables have gone out of scope.
Note, I have a ? as the type of the Supplier since I don't know what type your functions return. Change the ? to the return type of each function.
Introduction
The other answers show how to use a closure to capture the arguments of your function, no matter its number. This is a nice approach and it's very useful, if you know the arguments in advance, so that they can be captured.
Here I'd like to show two other approaches that don't require you to know the arguments in advance...
If you think about it in an abstract way, there are no such things as functions with multiple arguments. Functions either receive one set of values (aka a tuple), or they receive one single argument and return another function that receives another single argument, which in turn returns another one-argument function that returns... etc., with the last function of the sequence returning an actual result (aka currying).
Methods in Java might have multiple arguments, though. So the challenge is to build functions that always receive one single argument (either by means of tuples or currying), but that actually invoke methods that receive multiple arguments.
Approach #1: Tuples
The first approach is to use a Tuple helper class and have your function receive one tuple, either a Tuple2 or a Tuple3. The functionA of your example might then receive one single Tuple2<String, String> as an argument:
Function<Tuple2<String, String>, SomeReturnType> functionA = tuple ->
functionA(tuple.getFirst(), tuple.getSecond());
And you could invoke it as follows:
SomeReturnType resultA = functionA.apply(Tuple2.of("a", "b"));
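Tuple2 is not a JDK class, so you would have to supply it yourself; a minimal sketch that is sufficient for the snippets in this answer could be:
final class Tuple2<A, B> {
    private final A first;
    private final B second;
    private Tuple2(A first, B second) {
        this.first = first;
        this.second = second;
    }
    static <A, B> Tuple2<A, B> of(A first, B second) {
        return new Tuple2<>(first, second);
    }
    A getFirst() { return first; }
    B getSecond() { return second; }
}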
Now, in order to decorate the functionA with your timerMethod method, you'd need to do a few modifications:
static <T, R> Function<T, R> timerMethod(
String timerName,
Function<? super T, ? extends R> func){
return t -> {
timer.start(timerName);
R result = func.apply(t);
timer.stop();
return result;
};
}
Please note that you should use a try/finally block to make your code more robust, as shown in holi-java's answer.
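For completeness, the same decorator with a try/finally block would be (same signature, only the body changes):
static <T, R> Function<T, R> timerMethod(
        String timerName,
        Function<? super T, ? extends R> func) {
    return t -> {
        timer.start(timerName);
        try {
            return func.apply(t);
        } finally {
            timer.stop();
        }
    };
}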
Here's how you might use your timerMethod method for functionA:
Function<Tuple2<String, String>, SomeReturnType> timedFunctionA = timerMethod(
    "timerA",
    tuple -> functionA(tuple.getFirst(), tuple.getSecond()));
And you can invoke timedFunctionA as any other function, passing it the arguments now, at invocation time:
SomeReturnType resultA = timedFunctionA.apply(Tuple2.of("a", "b"));
You can take a similar approach with the functionB of your example, except that you'd need to use a Tuple3<Integer, Integer, String[]> for the argument (taking care of the varargs arguments).
The downside of this approach is that you need to create many Tuple classes, i.e. Tuple2, Tuple3, Tuple4, etc, because Java lacks built-in support for tuples.
Approach #2: Currying
The other approach is to use a technique called currying, i.e. functions that accept one single argument and return another function that accepts another single argument, etc, with the last function of the sequence returning the actual result.
Here's how to create a currified function for your 2-argument method functionA:
Function<String, Function<String, SomeReturnType>> currifiedFunctionA =
arg1 -> arg2 -> functionA(arg1, arg2);
Invoke it as follows:
SomeReturnType result = currifiedFunctionA.apply("a").apply("b");
If you want to decorate currifiedFunctionA with the timerMethod method defined above, you can do as follows:
Function<String, Function<String, SomeReturnType>> timedCurrifiedFunctionA =
arg1 -> timerMethod("timerCurryA", arg2 -> functionA(arg1, arg2));
Then, invoke timedCurrifiedFunctionA exactly as you'd do with any currified function:
SomeReturnType result = timedCurrifiedFunctionA.apply("a").apply("b");
Please note that you only need to decorate the last function of the sequence, i.e. the one that makes the actual call to the method, which is what we want to measure.
For the method functionB of your example, you can take a similar approach, except that the type of the currified function would now be:
Function<Integer, Function<Integer, Function<String[], SomeResultType>>>
which is quite cumbersome, to say the least. So this is the downside of currified functions in Java: the syntax to express their type. On the other hand, currified functions are very handy to work with and allow you to apply several functional programming techniques without needing to write helper classes.
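That said, if you often build curried functions from existing two-argument methods, a small adapter helps; this helper is my own sketch, not something the JDK provides:
import java.util.function.BiFunction;
import java.util.function.Function;

final class Curry {
    // adapts (a, b) -> r into a -> b -> r
    static <A, B, R> Function<A, Function<B, R>> curry(BiFunction<A, B, R> f) {
        return a -> b -> f.apply(a, b);
    }
}
// usage: Curry.curry((String a, String b) -> functionA(a, b)).apply("a").apply("b")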

Julia: non-destructively update immutable type variable

Let's say there is a type
immutable Foo
x :: Int64
y :: Float64
end
and there is a variable foo = Foo(1,2.0). I want to construct a new variable bar using foo as a prototype with field y = 3.0 (or, alternatively non-destructively update foo producing a new Foo object). In ML languages (Haskell, OCaml, F#) and a few others (e.g. Clojure) there is an idiom that in pseudo-code would look like
bar = {foo with y = 3.0}
Is there something like this in Julia?
This is tricky. In Clojure this would work with a data structure, a dynamically typed immutable map, so we simply call the appropriate method to add/change a key. But when working with types we'll have to do some reflection to generate an appropriate new constructor for the type. Moreover, unlike Haskell or the various MLs, Julia isn't statically typed, so one does not simply look at an expression like {foo with y = 1} and work out what code should be generated to implement it.
Actually, we can build a Clojure-esque solution to this, since Julia provides enough reflection and dynamism to treat the type as a sort of immutable map. We can use fieldnames to get the list of "keys" in order (like [:x, :y]) and we can then use getfield(foo, :x) to get field values dynamically:
immutable Foo
x
y
z
end
x = Foo(1,2,3)
with_slow(x, p) =
typeof(x)(((f == p.first ? p.second : getfield(x, f)) for f in fieldnames(x))...)
with_slow(x, ps...) = reduce(with_slow, x, ps)
with_slow(x, :y => 4, :z => 6) == Foo(1,4,6)
However, there's a reason this is called with_slow. Because of the reflection it's going to be nowhere near as fast as a handwritten function like withy(foo::Foo, y) = Foo(foo.x, y, foo.z). If Foo is parametrised (e.g. Foo{T} with y::T) then Julia will be able to infer that withy(foo, 1.) returns a Foo{Float64}, but it won't be able to infer with_slow at all. As we know, this kills performance.
The only way to make this as fast as ML and co is to generate code effectively equivalent to the handwritten version. As it happens, we can pull off that version as well!
# Fields
type Field{K} end
Base.convert{K}(::Type{Symbol}, ::Field{K}) = K
Base.convert(::Type{Field}, s::Symbol) = Field{s}()
macro f_str(s)
:(Field{$(Expr(:quote, symbol(s)))}())
end
typealias FieldPair{F<:Field, T} Pair{F, T}
# Immutable `with`
for nargs = 1:5
    args = [symbol("p$i") for i = 1:nargs]
    @eval with(x, $([:($p::FieldPair) for p = args]...), p::FieldPair) =
        with(with(x, $(args...)), p)
end
@generated function with{F, T}(x, p::Pair{Field{F}, T})
    :($(x.name.primary)($([name == F ? :(p.second) : :(x.$name)
        for name in fieldnames(x)]...)))
end
The first section is a hack to produce a symbol-like object, f"foo", whose value is known within the type system. The generated function is like a macro that takes types as opposed to expressions; because it has access to Foo and the field names it can generate essentially the hand-optimised version of this code. You can also check that Julia is able to properly infer the output type, if you parametrise Foo:
@code_typed with(x, f"y" => 4., f"z" => "hello") # => ...::Foo{Int,Float64,String}
(The for nargs line is essentially a manually-unrolled reduce which enables this.)
Finally, lest I be accused of giving slightly crazy advice, I want to warn that this isn't all that idiomatic in Julia. While I can't give very specific advice without knowing your use case, it's generally best to have types with a manageable (small) set of fields and a small set of functions which do the basic manipulation of those fields; you can build on those functions to create the final public API. If what you want is really an immutable dict, you're much better off just using a specialised data structure for that.
There is also setindex (without the ! at the end) implemented in the FixedSizeArrays.jl package, which does this in an efficient way.

Is there a way of providing a final transform method when chaining operations (like map reduce) in underscore.js?

(Really struggling to title this question, so if anyone has suggestions feel free.)
Say I wanted to do an operation like:
take an array [1,2,3]
multiply each element by 2 (map): [2,4,6]
add the elements together (reduce): 12
multiply the result by 10: 120
I can do this pretty cleanly in underscore using chaining, like so:
arr = [1,2,3]
map = (el) -> 2*el
reduce = (s,n) -> s+n
out = (r) -> 10*r
reduced = _.chain(arr).map(map).reduce(reduce).value()
result = out(reduced)
However, it would be even nicer if I could chain the 'out' method too, like this:
result = _.chain(arr).map(map).reduce(reduce).out(out).value()
Now this would be a fairly simple addition to a library like underscore. But my questions are:
Does this 'out' method have a name in functional programming?
Does this already exist in underscore (tap comes close, but not quite)?
This question got me quite hooked. Here are some of my thoughts.
It feels like using underscore.js in 'chain() mode' breaks away from the functional programming paradigm. Basically, instead of calling functions on functions, you're calling methods of an instance of a wrapper object in an OOP way.
I am using underscore's chain() myself here and there, but this question made me think. What if it's better to simply create more meaningful functions that can then be called in a sequence without having to use chain() at all? Your example would then look something like this:
arr = [1,2,3]
double = (arr) -> _.map(arr, (el) -> 2 * el)
sum = (arr) -> _.reduce(arr, (s, n) -> s + n)
out = (r) -> 10 * r
result = out sum double arr
# probably a less ambiguous way to do it would be
result = out(sum(double arr))
Looking at real functional programming languages (as in .. much more functional than JavaScript), it seems you could do exactly the same thing there in an even simpler manner. Here is the same program written in Standard ML. Notice how calling map with only one argument returns another function. There is no need to wrap this map in another function like we did in JavaScript.
val arr = [1,2,3];
val double = map (fn x => 2*x);
val sum = foldl (fn (a,b) => a+b) 0;
val out = fn r => 10*r;
val result = out(sum(double arr))
Standard ML also lets you create operators which means we can make a little 'chain' operator that can be used to call those functions in a more intuitive order.
infix 1 |>;
fun x |> f = f x;
val result = arr |> double |> sum |> out
I also think that this underscore.js chaining has something similar to monads in functional programming, but I don't know much about those. Though, I have a feeling that this kind of data manipulation pipeline is not something you would typically use monads for.
I hope someone with more functional programming experience can chip in and correct me if I'm wrong on any of the points above.
UPDATE
Getting slightly off topic, but one way to creating partial functions could be the following:
// extend underscore with partialr function
_.mixin({
partialr: function (fn, context) {
var args = Array.prototype.slice.call(arguments, 2);
return function () {
return fn.apply(context, Array.prototype.slice.call(arguments).concat(args));
};
}
});
This function can now be used to create a partial function from any underscore function, because most of them take the input data as the first argument. For example, the sum function can now be created like
var sum = _.partialr(_.reduce, this, function (s, n) { return s + n; });
sum([1,2,3]);
I still prefer arr |> double |> sum |> out over out(sum(double(arr))) though. Underscore's chain() is nice in that it reads in a more natural order.
In terms of the name you are looking for, I think what you are trying to do is just a form of function application: you have an underscore object and you want to apply a function to its value. In underscore, you can define it like this:
_.mixin({
app: function(v, f) { return f (v); }
});
then you can pretty much do what you asked for:
var arr = [1,2,3];
function m(el) { return 2*el; };
function r(s,n) { return s+n; };
function out(r) { return 10*r; };
console.log("result: " + _.chain(arr).map(m).reduce(r).app(out).value());
Having said all that, I think using traditional typed functional languages like SML makes this kind of thing a lot slicker and gives much lighter-weight syntax for function composition. Underscore is doing a kind of jQuery twist on functional programming that I'm not sure what I think of; but without static type checking it is frustratingly easy to make errors!

What is 'Pattern Matching' in functional languages?

I'm reading about functional programming and I've noticed that Pattern Matching is mentioned in many articles as one of the core features of functional languages.
Can someone explain for a Java/C++/JavaScript developer what does it mean?
Understanding pattern matching requires explaining three parts:
Algebraic data types.
What pattern matching is.
Why it's awesome.
Algebraic data types in a nutshell
ML-like functional languages allow you to define simple data types called "disjoint unions" or "algebraic data types". These data structures are simple containers, and can be recursively defined. For example:
type 'a list =
| Nil
| Cons of 'a * 'a list
defines a stack-like data structure. Think of it as equivalent to this C#:
public abstract class List<T>
{
public class Nil : List<T> { }
public class Cons : List<T>
{
public readonly T Item1;
public readonly List<T> Item2;
public Cons(T item1, List<T> item2)
{
this.Item1 = item1;
this.Item2 = item2;
}
}
}
So, the Cons and Nil identifiers define a simple class, where the of x * y * z * ... defines a constructor and some data types. The parameters to the constructor are unnamed; they're identified by position and data type.
You create instances of your 'a list class as such:
let x = Cons(1, Cons(2, Cons(3, Cons(4, Nil))))
Which is the same as:
Stack<int> x = new Cons(1, new Cons(2, new Cons(3, new Cons(4, new Nil()))));
Pattern matching in a nutshell
Pattern matching is a kind of type-testing. So let's say we created a stack object like the one above, we can implement methods to peek and pop the stack as follows:
let peek s =
match s with
| Cons(hd, tl) -> hd
| Nil -> failwith "Empty stack"
let pop s =
match s with
| Cons(hd, tl) -> tl
| Nil -> failwith "Empty stack"
The methods above are equivalent (although not implemented as such) to the following C#:
public static T Peek<T>(Stack<T> s)
{
if (s is Stack<T>.Cons)
{
T hd = ((Stack<T>.Cons)s).Item1;
Stack<T> tl = ((Stack<T>.Cons)s).Item2;
return hd;
}
else if (s is Stack<T>.Nil)
throw new Exception("Empty stack");
else
throw new MatchFailureException();
}
public static Stack<T> Pop<T>(Stack<T> s)
{
if (s is Stack<T>.Cons)
{
T hd = ((Stack<T>.Cons)s).Item1;
Stack<T> tl = ((Stack<T>.Cons)s).Item2;
return tl;
}
else if (s is Stack<T>.Nil)
throw new Exception("Empty stack");
else
throw new MatchFailureException();
}
(Almost always, ML languages implement pattern matching without run-time type-tests or casts, so the C# code is somewhat deceptive. Let's brush implementation details aside with some hand-waving please :) )
Data structure decomposition in a nutshell
Ok, let's go back to the peek method:
let peek s =
match s with
| Cons(hd, tl) -> hd
| Nil -> failwith "Empty stack"
The trick is understanding that the hd and tl identifiers are variables (errm... since they're immutable, they're not really "variables", but "values" ;) ). If s has the type Cons, then we're going to pull out its values out of the constructor and bind them to variables named hd and tl.
Pattern matching is useful because it lets us decompose a data structure by its shape instead of its contents. So imagine if we define a binary tree as follows:
type 'a tree =
| Node of 'a tree * 'a * 'a tree
| Nil
We can define some tree rotations as follows:
let rotateLeft = function
| Node(a, p, Node(b, q, c)) -> Node(Node(a, p, b), q, c)
| x -> x
let rotateRight = function
| Node(Node(a, p, b), q, c) -> Node(a, p, Node(b, q, c))
| x -> x
(The let rotateRight = function constructor is syntax sugar for let rotateRight s = match s with ....)
So in addition to binding data structure to variables, we can also drill down into it. Let's say we have a node let x = Node(Nil, 1, Nil). If we call rotateLeft x, we test x against the first pattern, which fails to match because the right child has type Nil instead of Node. It'll move to the next pattern, x -> x, which will match any input and return it unmodified.
For comparison, we'd write the methods above in C# as:
public abstract class Tree<T>
{
public abstract U Match<U>(Func<U> nilFunc, Func<Tree<T>, T, Tree<T>, U> nodeFunc);
public class Nil : Tree<T>
{
public override U Match<U>(Func<U> nilFunc, Func<Tree<T>, T, Tree<T>, U> nodeFunc)
{
return nilFunc();
}
}
public class Node : Tree<T>
{
readonly Tree<T> Left;
readonly T Value;
readonly Tree<T> Right;
public Node(Tree<T> left, T value, Tree<T> right)
{
this.Left = left;
this.Value = value;
this.Right = right;
}
public override U Match<U>(Func<U> nilFunc, Func<Tree<T>, T, Tree<T>, U> nodeFunc)
{
return nodeFunc(Left, Value, Right);
}
}
public static Tree<T> RotateLeft(Tree<T> t)
{
return t.Match(
() => t,
(l, x, r) => r.Match(
() => t,
(rl, rx, rr) => new Node(new Node(l, x, rl), rx, rr)));
}
public static Tree<T> RotateRight(Tree<T> t)
{
return t.Match(
() => t,
(l, x, r) => l.Match(
() => t,
(ll, lx, lr) => new Node(ll, lx, new Node(lr, x, r))));
}
}
For seriously.
Pattern matching is awesome
You can implement something similar to pattern matching in C# using the visitor pattern, but it's not nearly as flexible because you can't effectively decompose complex data structures. Moreover, if you are using pattern matching, the compiler will tell you if you left out a case. How awesome is that?
Think about how you'd implement similar functionality in C# or languages without pattern matching. Think about how you'd do it without type-tests and casts at runtime. It's certainly not hard, just cumbersome and bulky. And you don't have the compiler checking to make sure you've covered every case.
So pattern matching helps you decompose and navigate data structures in a very convenient, compact syntax, and it enables the compiler to check the logic of your code, at least a little bit. It really is a killer feature.
Short answer: Pattern matching arises because functional languages treat the equals sign as an assertion of equivalence instead of assignment.
Long answer: Pattern matching is a form of dispatch based on the “shape” of the value that it's given. In a functional language, the datatypes that you define are usually what are known as discriminated unions or algebraic data types. For instance, what's a (linked) list? A linked list List of things of some type a is either the empty list Nil or some element of type a Consed onto a List a (a list of as). In Haskell (the functional language I'm most familiar with), we write this
data List a = Nil
| Cons a (List a)
All discriminated unions are defined this way: a single type has a fixed number of different ways to create it; the creators, like Nil and Cons here, are called constructors. This means that a value of the type List a could have been created with two different constructors—it could have two different shapes. So suppose we want to write a head function to get the first element of the list. In Haskell, we would write this as
-- `head` is a function from a `List a` to an `a`.
head :: List a -> a
-- An empty list has no first item, so we raise an error.
head Nil = error "empty list"
-- If we are given a `Cons`, we only want the first part; that's the list's head.
head (Cons h _) = h
Since List a values can be of two different kinds, we need to handle each one separately; this is the pattern matching. In head x, if x matches the pattern Nil, then we run the first case; if it matches the pattern Cons h _, we run the second.
Short answer, explained: I think one of the best ways to think about this behavior is by changing how you think of the equals sign. In the curly-bracket languages, by and large, = denotes assignment: a = b means “make a into b.” In a lot of functional languages, however, = denotes an assertion of equality: let Cons a (Cons b Nil) = frob x asserts that the thing on the left, Cons a (Cons b Nil), is equivalent to the thing on the right, frob x; in addition, all variables used on the left become visible. This is also what's happening with function arguments: we assert that the first argument looks like Nil, and if it doesn't, we keep checking.
It means that instead of writing
double f(int x, int y) {
if (y == 0) {
if (x == 0)
return NaN;
else if (x > 0)
return Infinity;
else
return -Infinity;
} else
return (double)x / y;
}
You can write
f(0, 0) = NaN;
f(x, 0) | x > 0 = Infinity;
| else = -Infinity;
f(x, y) = (double)x / y;
Hey, C++ supports pattern matching too.
static const int PositiveInfinity = -1;
static const int NegativeInfinity = -2;
static const int NaN = -3;
template <int x, int y> struct Divide {
enum { value = x / y };
};
template <bool x_gt_0> struct aux { enum { value = PositiveInfinity }; };
template <> struct aux<false> { enum { value = NegativeInfinity }; };
template <int x> struct Divide<x, 0> {
enum { value = aux<(x>0)>::value };
};
template <> struct Divide<0, 0> {
enum { value = NaN };
};
#include <cstdio>
int main () {
printf("%d %d %d %d\n", Divide<7,2>::value, Divide<1,0>::value, Divide<0,0>::value, Divide<-1,0>::value);
return 0;
};
Pattern matching is sort of like overloaded methods on steroids. The simplest case would be roughly the same as what you see in Java: arguments are a list of types with names. The correct method to call is based on the arguments passed in, and it doubles as an assignment of those arguments to the parameter names.
Patterns just go a step further, and can destructure the arguments passed in even further. It can also potentially use guards to actually match based on the value of the argument. To demonstrate, I'll pretend like JavaScript had pattern matching.
function foo(a,b,c){} //no pattern matching, just a list of arguments
function foo2([a],{prop1:d,prop2:e}, 35){} //invented pattern matching in JavaScript
In foo2, it expects a to be an array, it breaks apart the second argument, expecting an object with two props (prop1,prop2) and assigns the values of those properties to variables d and e, and then expects the third argument to be 35.
Unlike in JavaScript, languages with pattern matching usually allow multiple functions with the same name, but different patterns. In this way it is like method overloading. I'll give an example in Erlang:
fibo(0) -> 0 ;
fibo(1) -> 1 ;
fibo(N) when N > 0 -> fibo(N-1) + fibo(N-2) .
Blur your eyes a little and you can imagine this in javascript. Something like this maybe:
function fibo(0){return 0;}
function fibo(1){return 1;}
function fibo(N) when N > 0 {return fibo(N-1) + fibo(N-2);}
Point being that when you call fibo, the implementation it uses is based on the arguments, but where Java is limited to types as the only means of overloading, pattern matching can do more.
Beyond function overloading as shown here, the same principle can be applied in other places, such as case statements or destructuring assignments. JavaScript even has this in 1.7.
Pattern matching allows you to match a value (or an object) against some patterns to select a branch of the code. From the C++ point of view, it may sound a bit similar to the switch statement. In functional languages, pattern matching can be used for matching on standard primitive values such as integers. However, it is more useful for composed types.
First, let's demonstrate pattern matching on primitive values (using extended pseudo-C++ switch):
switch(num) {
case 1:
// runs this when num == 1
case n when n > 10:
// runs this when num > 10
case _:
// runs this for all other cases (underscore means 'match all')
}
The second use deals with functional data types such as tuples (which allow you to store multiple objects in a single value) and discriminated unions which allow you to create a type that can contain one of several options. This sounds a bit like enum except that each label can also carry some values. In a pseudo-C++ syntax:
enum Shape {
Rectangle of { int left, int top, int width, int height }
Circle of { int x, int y, int radius }
}
A value of type Shape can now contain either Rectangle with all the coordinates or a Circle with the center and the radius. Pattern matching allows you to write a function for working with the Shape type:
switch(shape) {
case Rectangle(l, t, w, h):
// declares variables l, t, w, h and assigns properties
// of the rectangle value to the new variables
case Circle(x, y, r):
// this branch is run for circles (properties are assigned to variables)
}
Finally, you can also use nested patterns that combine both of the features. For example, you could use Circle(0, 0, radius) to match for all shapes that have the center in the point [0, 0] and have any radius (the value of the radius will be assigned to the new variable radius).
This may sound a bit unfamiliar from the C++ point of view, but I hope that my pseudo-C++ makes the explanation clear. Functional programming is based on quite different concepts, so it makes better sense in a functional language!
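Since the question asks from a Java perspective, it may help to know that recent Java versions (21 and later) have a close counterpart: sealed interfaces play the role of the discriminated union and record patterns in switch do the decomposition. A rough sketch of the Shape example above, with names chosen only for illustration:
sealed interface Shape permits Rectangle, Circle {}
record Rectangle(int left, int top, int width, int height) implements Shape {}
record Circle(int x, int y, int radius) implements Shape {}

class Shapes {
    static String describe(Shape shape) {
        return switch (shape) {
            // binds the record components to fresh variables, like the pseudo-C++ cases above
            case Circle(int x, int y, int r) when x == 0 && y == 0 -> "circle at the origin, radius " + r;
            case Circle(int x, int y, int r) -> "circle at (" + x + "," + y + "), radius " + r;
            case Rectangle(int l, int t, int w, int h) -> "rectangle " + w + "x" + h;
        };
    }
}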
Pattern matching is where the interpreter for your language will pick a particular function based on the structure and content of the arguments you give it.
It is not only a functional language feature but is available for many different languages.
The first time I came across the idea was when I learned Prolog, where it is really central to the language.
e.g.
last([LastItem], LastItem).
last([Head|Tail], LastItem) :-
last(Tail, LastItem).
The above code will give the last item of a list. The input arg is the first and the result is the second.
If there is only one item in the list the interpreter will pick the first version and the second argument will be set to equal the first i.e. a value will be assigned to the result.
If the list has both a head and a tail, the interpreter will pick the second version and recurse until there is only one item left in the list.
For many people, picking up a new concept is easier if some easy examples are provided, so here we go:
Let's say you have a list of three integers, and wanted to add the first and the third element. Without pattern matching, you could do it like this (examples in Haskell):
Prelude> let is = [1,2,3]
Prelude> head is + is !! 2
4
Now, although this is a toy example, imagine we would like to bind the first and third integer to variables and sum them:
addFirstAndThird is =
    let first = head is
        third = is !! 2
    in first + third
This extraction of values from a data structure is what pattern matching does. You basically "mirror" the structure of something, giving variables to bind for the places of interest:
addFirstAndThird [first,_,third] = first + third
When you call this function with [1,2,3] as its argument, [1,2,3] will be unified with [first,_,third], binding first to 1, third to 3 and discarding 2 (_ is a placeholder for things you don't care about).
Now, if you only wanted to match lists with 2 as the second element, you can do it like this:
addFirstAndThird [first,2,third] = first + third
This will only work for lists with 2 as their second element and throw an exception otherwise, because no definition for addFirstAndThird is given for non-matching lists.
Until now, we used pattern matching only for destructuring binding. Beyond that, you can give multiple definitions of the same function, where the first matching definition is used; thus, pattern matching is a little like "a switch statement on steroids":
addFirstAndThird [first,2,third] = first + third
addFirstAndThird _ = 0
addFirstAndThird will happily add the first and third elements of lists with 2 as their second element, and otherwise "fall through" and "return" 0. This "switch-like" functionality can be used not only in function definitions but also in expressions, e.g.:
Prelude> case [1,3,3] of [a,2,c] -> a+c; _ -> 0
0
Prelude> case [1,2,3] of [a,2,c] -> a+c; _ -> 0
4
Further, it is not restricted to lists, but can be used with other types as well, for example matching the Just and Nothing value constructors of the Maybe type in order to "unwrap" the value:
Prelude> case (Just 1) of (Just x) -> succ x; Nothing -> 0
2
Prelude> case Nothing of (Just x) -> succ x; Nothing -> 0
0
Sure, those were mere toy examples, and I did not even try to give a formal or exhaustive explanation, but they should suffice to grasp the basic concept.
You should start with the Wikipedia page that gives a pretty good explanation. Then, read the relevant chapter of the Haskell wikibook.
This is a nice definition from the above wikibook:
So pattern matching is a way of assigning names to things (or binding those names to those things), and possibly breaking down expressions into subexpressions at the same time (as we did with the list in the definition of map).
Here is a really short example that shows pattern matching usefulness:
Let's say you want to move an element up in a list:
["Venice","Paris","New York","Amsterdam"]
to (I've moved "New York" up):
["Venice","New York","Paris","Amsterdam"]
In a more imperative language you would write:
function up(city, cities){
for(var i = 0; i < cities.length; i++){
if(cities[i] === city && i > 0){
var prev = cities[i-1];
cities[i-1] = city;
cities[i] = prev;
}
}
return cities;
}
In a functional language you would instead write:
let up list value =
match list with
| [] -> []
| previous::current::tail when current = value -> current::previous::tail
| current::tail -> current::(up tail value)
As you can see, the pattern-matched solution has less noise; you can clearly see what the different cases are and how easy it is to traverse and de-structure our list.
I've written a more detailed blog post about it here.
