Learning functional programming - having trouble conceptualizing "no if statements" [duplicate] - functional-programming

This question already has answers here:
What's a functional replacement for if-then statements?
(7 answers)
Closed 9 years ago.
I was discussing programming with a friend, who is an advocate for functional programming. He mentioned that you don't need to use if statements, but I can't seem to conceptualize how you would implement
if (something):
do this;
else:
do something_else;
in a functional paradigm?
Edit: my friend specifically mentioned that there are cases where you wouldn't need to use an if expression, even though you can. For example:
if x is odd:
x + 1
else:
x / 2
Is there a way to implement the above without using any if statements or conditionals?

Without more context it's hard to know exactly what your friend meant, but two reasonable interpretations come to mind:
In functional languages, conditionals are expressions rather than statements, so you'd be using if expressions, not if statements. This difference means that you write things like:
let x =
  if condition
  then value1
  else value2
Instead of:
let x be a mutable variable
if condition
then x = value1
else x = value2
So this allows you to write in functional style without mutating variables.
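In OCaml, for example, the whole if is an expression that produces the value bound to x (a minimal sketch with concrete values in place of the placeholders above):
let condition = true
let x = if condition then 1 else 2  (* x is 1; nothing is mutated *)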
The other thing he could have meant is that many functional languages offer constructs like pattern matching or guards that you can use instead of if statements. Pattern matching allows you to inspect the structure of a value and take it apart at the same time. As an example you can write this:
match my_list with
| x :: xs -> x + sum xs
| [] -> 0
Instead of this:
if my_list is empty
then 0
else
  let x be the first element of my_list
  let xs be the list containing the remaining elements of my_list
  x + sum xs
Using pattern matching is preferable because it avoids calling functions on a value whose structure does not support it. In the example above, a function that returns the first element of a list would presumably cause an error when called on an empty list (which might happen if we mess up the if condition). But if we use pattern matching to get at the first element this can't happen because the syntax of the matching construct ensures that we only get x and xs if my_list is really not empty.
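Written out as a complete OCaml function, the pattern-matching version above becomes (a sketch; sum is our own recursive function):
let rec sum my_list =
  match my_list with
  | x :: xs -> x + sum xs
  | [] -> 0
(* List.hd [] would raise an exception at runtime; here x and xs
   simply do not exist in the branch where the list is empty *)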
Pattern guards allow you to add arbitrary conditions to pattern matching:
match f(x) with
| 0 -> "f(x) was zero"
| 1 -> "f(x) was one"
| x when x > 1 -> "f(x) was greater than one"
| _ -> "f(x) was negative"
This can be cleaner if you're pattern matching anyway, but it hardly means you shouldn't use if expressions in functional languages. If you don't have a situation where you want to pattern match on a value, introducing a match just so that you can use a guard makes little sense over a plain if expression.
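As for your friend's odd/even example, you can write it without an if by matching on the remainder directly (an OCaml sketch; step is just an illustrative name):
let step x =
  match x mod 2 with
  | 0 -> x / 2
  | _ -> x + 1
(* step 3 = 4, step 8 = 4 *)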

The part that should confuse you isn't the if, it's the "do".
In functional programming, you don't "do" anything.
You just define the result to be some function of the input.
The function may of course have conditionals (like cond ? a : b in languages like C#, Java, C++, etc.), but a and b are expressions that evaluate to some common type; they are not statements -- so the result is either a or b, depending on cond.
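In OCaml, for instance (a small sketch to show that both branches are expressions of one common type):
let abs_diff a b = if a > b then a - b else b - a
(* both branches are ints, so abs_diff returns an int; making one
   branch a string would be a type error, not a runtime surprise *)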

Related

Recursive functions and pattern matching in OCaml

The following code snippet comes from the official OCaml website:
# let rec compress = function
| a :: (b :: _ as t) -> if a = b then compress t else a :: compress t
| smaller -> smaller;;
val compress : 'a list -> 'a list = <fun>
The above function 'compresses' a list by removing consecutive duplicate elements, e.g.:
# compress ["a";"a";"a";"a";"b";"c";"c";"a";"a";"d";"e";"e";"e";"e"];;
- : string list = ["a"; "b"; "c"; "a"; "d"; "e"]
I'm having a devil of a time understanding the logic of the above code. I'm used to coding imperatively, so this recursive, functional approach, combined with OCaml's laconic (but, to me, obscure) syntax, is causing me to struggle.
For example, where is the base case? Is it smaller -> smaller? I know smaller is a variable, or an identifier, but what is it returning (is returning even the right term in OCaml for what's happening here)?
I know that lists in OCaml are singly linked, so I'm also wondering whether a new list is being generated, or whether elements of the existing list are being cut out. Since OCaml is functional, I'm inclined to think that lists are not mutable; is that correct? If you want to change a list, you essentially need to generate a new list with the elements you're seeking to add (or without the elements you're seeking to excise). Is this a correct understanding?
Yes, the base case is this:
| smaller -> smaller
The first pattern of the match expression matches any list of length 2 or greater. (It would be good to make sure you see why this is the case.)
Since OCaml matches patterns in order, the base case matches lists of lengths 0 and 1. That's why the programmer chose the name smaller. They were thinking "this is some smaller list".
The parts of a match expression look like this in general:
| pattern -> result
Any names in the pattern are bound to parts of the value matched against the pattern (as you say). So smaller is bound to the whole list. So in sum, the second part of the match says that if the list is of length 0 or 1, the result should just be the list itself.
Lists in OCaml are immutable, so it's not possible for the result of the function to be a modified version of the list. The result is a new list, unless the list is already a short list (of length 0 or 1).
So, what you say about the immutability of OCaml lists is exactly correct.
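To see how the two cases interact, here is a hand evaluation of a small call (each line rewrites the previous one):
compress ["a"; "a"; "b"]
= compress ["a"; "b"]        (* first pattern: a = "a", b = "a"; equal, so a is dropped *)
= "a" :: compress ["b"]      (* first pattern: a = "a", b = "b"; not equal, so a is kept *)
= "a" :: ["b"]               (* base case: ["b"] has length 1 *)
= ["a"; "b"]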

Explanation of lists:fold function

I am learning more and more about the Erlang language and have recently faced a problem. I read about the foldl(Fun, Acc0, List) -> Acc1 function. I used the learnyousomeerlang.com tutorial, and there was an example (the example is a Reverse Polish Notation calculator in Erlang):
% function that splits the input on whitespace and folds over the tokens
rpn(L) when is_list(L) ->
    [Res] = lists:foldl(fun rpn/2, [], string:tokens(L, " ")),
    Res.

% function that converts a string to an integer or floating point value
read(N) ->
    case string:to_float(N) of
        % string:to_float/1 returns {error, no_float} when no float is available
        {error, no_float} -> list_to_integer(N);
        {F, _} -> F
    end.

% rpn/2 handles each token, updating the stack accordingly
rpn("+", [N1, N2 | S]) -> [N2 + N1 | S];
rpn("-", [N1, N2 | S]) -> [N2 - N1 | S];
rpn("*", [N1, N2 | S]) -> [N2 * N1 | S];
rpn("/", [N1, N2 | S]) -> [N2 / N1 | S];
rpn("^", [N1, N2 | S]) -> [math:pow(N2, N1) | S];
rpn("ln", [N | S]) -> [math:log(N) | S];
rpn("log10", [N | S]) -> [math:log10(N) | S];
rpn(X, Stack) -> [read(X) | Stack].
As far as I understand, lists:foldl executes rpn/2 on every element of the list, but that is as far as my understanding goes. I read the documentation, but it does not help me much. Can someone explain to me how lists:foldl works?
Let's say we want to add a list of numbers together:
1 + 2 + 3 + 4.
This is a pretty normal way to write it. But I wrote "add a list of numbers together", not "write numbers with pluses between them". There is something fundamentally different between the way I expressed the operation in prose and the mathematical notation I used. We can do this because we know the notation is equivalent for addition (because addition is associative), and in our heads it reduces immediately to:
3 + 7.
and then
10.
So what's the big deal? The problem is that we have no way of understanding the idea of summation from this example. What if instead I had written "Start with 0, then take one element from the list at a time and add it to the starting value as a running sum"? This is what summation is actually about, and it does not involve arbitrarily deciding which two things to add first until the expression is reduced.
sum(List) -> sum(List, 0).
sum([], A) -> A;
sum([H|T], A) -> sum(T, H + A).
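Watching it run makes the accumulation explicit (each line is the next recursive call):
sum([1,2,3,4])
= sum([1,2,3,4], 0)
= sum([2,3,4], 1)
= sum([3,4], 3)
= sum([4], 6)
= sum([], 10)
= 10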
If you're with me so far, then you're ready to understand folds.
There is a problem with the function above: it is too specific. It braids three ideas together without specifying any of them independently:
iteration
accumulation
addition
It is easy to miss the difference between iteration and accumulation because most of the time we never give it a second thought. Most languages actually encourage us to miss the difference by having the same storage location change its value on each iteration of a loop.
It is easy to miss the independence of addition in this example merely because of the way it is written: "+" looks like an "operation", not a function.
What if I had said "Start with 1, then take one element from the list at a time and multiply it by the running value"? We would still be doing the list processing in exactly the same way, but with two examples to compare it is pretty clear that multiplication and addition are the only difference between the two:
prod(List) -> prod(List, 1).
prod([], A) -> A;
prod([H|T], A) -> prod(T, H * A).
This is exactly the same flow of execution but for the inner operation and the starting value of the accumulator.
So let's make the addition and multiplication bits into functions, so we can pull that part of the pattern out:
add(A, B) -> A + B.
mult(A, B) -> A * B.
How could we write the list operation on its own? We need to pass in a function -- addition or multiplication -- and have it operate over the values. We also have to pay attention to the identity element of the operation we are using, or else we will screw up the magic that is value aggregation. add(0, X) always returns X, so 0 is the identity for addition; in multiplication the identity is 1. So we must start our accumulator at 0 for addition and 1 for multiplication (and at an empty list for building lists, and so on). This also means we cannot write the function with a built-in accumulator value, because it would only be correct for some type+operation pairs.
So this means to write a fold we need to have a list argument, a function to do things argument, and an accumulator argument, like so:
fold([], _, Accumulator) ->
    Accumulator;
fold([H|T], Operation, Accumulator) ->
    fold(T, Operation, Operation(H, Accumulator)).
With this definition we can now write sum/1 using this more general pattern:
fsum(List) -> fold(List, fun add/2, 0).
And prod/1 also (note that the operation passed in is mult/2, the two-argument multiply, not prod itself):
fprod(List) -> fold(List, fun mult/2, 1).
And they are functionally identical to the ones we wrote above, but the notation is clearer and we don't have to write a bunch of recursive details that tangle the idea of iteration with the idea of accumulation with the idea of some specific operation like multiplication or addition. Erlang's lists:foldl/3 is exactly this fold, with the arguments in a different order: the operation first, then the initial accumulator, then the list.
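The same separation exists in other functional languages' standard libraries; in OCaml, for comparison, it is List.fold_left (a sketch, not part of the Erlang tutorial):
let fsum list = List.fold_left ( + ) 0 list    (* fsum [1; 2; 3; 4] = 10 *)
let fprod list = List.fold_left ( * ) 1 list   (* fprod [1; 2; 3; 4] = 24 *)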
In the case of the RPN calculator the idea of aggregate list operations is combined with the concept of selective dispatch (picking an operation to perform based on what symbol is encountered/matched). The RPN example is relatively simple and small (you can fit all the code in your head at once, it's just a few lines), but until you get used to functional paradigms the process it manifests can make your head hurt.

In functional programming a tiny amount of code can create an arbitrarily complex process of unpredictable (or even evolving!) behavior, based just on list operations and selective dispatch; this is very different from the conditional checks, input validation and procedural checking techniques used in other paradigms more common today. Analyzing such behavior is greatly assisted by single assignment and recursive notation, because each iteration is a conceptually independent slice of time which can be contemplated in isolation of the rest of the system. I'm talking a little ahead of the basic question, but this is a core idea you may wish to contemplate as you consider why we like to use operations like folds and recursive notations instead of procedural, multiple-assignment loops.
I hope this helped more than confused.
First, you have to remember how RPN works. If you want to evaluate the expression 2 * (3 + 5), you feed the function the input "3 5 + 2 *". This was useful at a time when you had 25 steps to enter a program :o)
The first function called simply splits this string into a list of tokens:
1> string:tokens("3 5 + 2 *"," ").
["3","5","+","2","*"]
2>
Then it processes the list with lists:foldl/3. For each element of the list, rpn/2 is called with that element and the current accumulator, and returns a new accumulator. Let's go step by step:
Step  Head  Accumulator  Matched rpn/2 clause                   Return value
1     "3"   []           rpn(X, Stack) -> [read(X) | Stack].    [3]
2     "5"   [3]          rpn(X, Stack) -> [read(X) | Stack].    [5,3]
3     "+"   [5,3]        rpn("+", [N1,N2|S]) -> [N2+N1|S];      [8]
4     "2"   [8]          rpn(X, Stack) -> [read(X) | Stack].    [2,8]
5     "*"   [2,8]        rpn("*", [N1,N2|S]) -> [N2*N1|S];      [16]
At the end, lists:foldl/3 returns [16], which matches [Res], and thus rpn/1 returns Res = 16.
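For comparison, the same fold-plus-dispatch shape can be sketched in OCaml (integer-only, with no guard against malformed input; rpn and step are just illustrative names):
let rpn s =
  let step stack tok =
    match tok, stack with
    | "+", a :: b :: rest -> (b + a) :: rest
    | "-", a :: b :: rest -> (b - a) :: rest
    | "*", a :: b :: rest -> (b * a) :: rest
    | "/", a :: b :: rest -> (b / a) :: rest
    | n, _ -> int_of_string n :: stack
  in
  match List.fold_left step [] (String.split_on_char ' ' s) with
  | [r] -> r
  | _ -> failwith "malformed expression"
(* rpn "3 5 + 2 *" evaluates to 16 *)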

Count negative numbers in list using list comprehension

Working through the first edition of "Introduction to Functional Programming", by Bird & Wadler, which uses a theoretical lazy language with Haskell-ish syntax.
Exercise 3.2.3 asks:
Using a list comprehension, define a function for counting the number
of negative numbers in a list
Now, at this point we're still scratching the surface of lists. I would assume the intention is that only concepts that have been introduced at that point should be used, and the following have not been introduced yet:
A function for computing list length
List indexing
Pattern matching i.e. f (x:xs) = ...
Infinite lists
All the functions and operators that act on lists (with one exception, noted below), e.g. ++, head, tail, map, filter, zip, foldr, etc.
What tools are available?
A maximum function that returns the maximal element of a numeric list
List comprehensions, with possibly multiple generator expressions and predicates
The notion that the output of the comprehension need not depend on the generator expression, implying the generator expression can be used for controlling the size of the generated list
Finite arithmetic sequence lists i.e. [a..b] or [a, a + step..b]
I'll admit, I'm stumped. Obviously one can extract the negative numbers from the original list fairly easily with a comprehension, but how does one then count them, with no notion of length or indexing?
The availability of the maximum function would suggest the end game is to construct a list whose maximal element is the number of negative numbers, with the final result of the function being the application of maximum to said list.
I'm either missing something blindingly obvious, or a smart trick, with a horrible feeling it may be the former. Tell me SO, how do you solve this?
My old and very yellowed copy of the first edition has a note attached to Exercise 3.2.3: "This question needs # (length), which appears only later". The moral of the story is to be more careful when setting exercises. I am currently finishing a third edition, which contains answers to every question.
By the way, did you answer Exercise 1.2.1, which asks you to write down all the ways that square (square (3 + 7)) can be reduced to normal form? It turns out that there are 547 ways!
I think you may be assuming too many restrictions - taking the length of the filtered list seems like the blindingly obvious solution to me.
A couple of alternatives, though both involve using some other function that you say wasn't introduced:
sum [1 | x <- xs, x < 0]
maximum (0:[index | (index, ()) <- zip [1..] [() | x <- xs, x < 0]])

Conditional function in APL

Is there a symbol or well-known idiom for the conditional function, in any of the APL dialects?
I'm sure I'm missing something, because it's such a basic language element. In other languages it's called the conditional operator, but I will avoid that term here, because an APL operator is something else entirely.
For example C and friends have x ? T : F
LISPs have (if x T F)
Python has T if x else F
and so on.
I know modern APLs have :If and friends, but they are imperative statements to control program flow: they don't return a value, cannot be used inside an expression and certainly cannot be applied to arrays of booleans. They have a different purpose altogether, which is just fine by me.
The only decent expression I could come up with to do a functional selection is (F T)[⎕IO+x], which doesn't look particularly concise or readable to me, although it gets the job done, even on arrays:
('no' 'yes')[⎕IO+(⍳5)∘.>(⍳5)]
no no no no no
yes no no no no
yes yes no no no
yes yes yes no no
yes yes yes yes no
I tried to come up with a similar expression using squad ⌷, but failed miserably on arrays of booleans. Even if I could, it would still have to embed ⎕IO or a hardcoded 1, which is even worse as far as readability is concerned.
Before I go ahead and define my own if and use it on every program I will ever write, is there any canon on this? Am I missing an obvious function or operator?
(Are there any APL programmers on SO? :-)
The trouble with these:
(f t)[x]
x⌷f t
x⊃f t
is that both t and f get evaluated.
If you want to short-circuit the thing, you can use guards:
{x:t ⋄ f}
This is equivalent to
if (x) {
    return t;
}
f;
in a C-like language.
Yes, there are APL programmers on SO (but not many!).
I think the answer is that there is no standard on this.
For a scalar solution, I use "pick":
x⊃f t
While for a Boolean array I use indexing as you do above:
f t[x]
I always use index origin zero, so there is no need to add 1, and the parens are not needed.
If these are not simple enough, I think you have to cover them with a function named "if". That will also let you put the true and false in the perhaps more natural ordering of t f.
In Dyalog APL you can use:
'value if true' (⊣⍣condition) 'value if false'
The idea is to apply ⊣ (left tack, which always returns its left argument and discards the right argument) to the right argument either 0 times (for false) or 1 time (for true). If it is applied 0 times (i.e. not at all), the right argument is returned unmodified, but if it is applied once, the left argument is returned instead. E.g.:
a b←3 5
Conditional←⊣⍣(a=b)
'match' Conditional 'different'
different
a b←4 4
Conditional←⊣⍣(a=b)
'match' Conditional 'different'
match
or
Cond←{⍺(⊣⍣⍺⍺)⍵}
bool←a=b
'match'(bool Cond)'different'
match
An old, old idiom which did something like C's ternary operator ? : and returned a result was the following:
r←⍎(¯3 3)[x=42]↑'6×8 ⋄ 6×7'
Note that this is written for origin 0, and the parens around the ¯3 3 are there for clarity.
x=42 evaluates to zero or one; depending on the answer we choose ¯3 or 3, and thus take and execute either the last 3 characters ('6×7') or the first 3 characters ('6×8') of the string. The diamond ⋄ is just there for decoration.
Needless to say, one would probably not code this way if one had :If and :Else available, though the control-structure form would not return a result.
This is a common question, and I think the reason there is no standard answer is that, for the things you do with APL, there is actually less need for it than in other languages.
That said, it is sometimes needed, and the way I implement an IFELSE operator in GNU APL is using this function:
∇Z ← arg (truefn IFELSE falsefn) condition ;v
v←⍬
→(0=⎕NC 'arg')/noarg
v←arg
noarg:
→condition/istrue
Z←falsefn v
→end
istrue:
Z←truefn v
end:
∇
The function can be called like this:
3 {'expression is true' ⍵} IFELSE {'was false' ⍵} 0
was false 3
This particular implementation passes in the left-hand argument as ⍵ to the clause, because that can be handy sometimes. Without a left-hand argument it passes in ⍬.
The APL expression (assuming the default ⎕IO of 1):
(1+x=0)⌷y z
is equivalent to the C expression
x?y:z
and likewise:
(1+x>0)⌷y z
for
x<=0?y:z
Etc. In general, if a, b and c are expressions of the respective languages, the APL expression:
(1+~a)⌷b c
is equivalent to the C expression:
a?b:c

Why does OCaml need both "let" and "let rec"? [duplicate]

This question already has answers here:
Possible Duplicate: Why are functions in Ocaml/F# not recursive by default?
Closed 11 years ago.
OCaml uses let to define a new function, or let rec to define a function that is recursive. Why does it need both of these? Couldn't we just use let for everything?
For example, to define a non-recursive successor function and recursive factorial in OCaml (actually, in the OCaml interpreter) I might write
let succ n = n + 1;;
let rec fact n =
if n = 0 then 1 else n * fact (n-1);;
Whereas in Haskell (GHCI) I can write
let succ n = n + 1
let fact n =
if n == 0 then 1 else n * fact (n-1)
Why does OCaml distinguish between let and let rec? Is it a performance issue, or something more subtle?
Well, having both available instead of only one gives the programmer tighter control over scoping. With let x = e1 in e2, the binding is only visible in e2's environment, while with let rec x = e1 in e2 the binding is visible in both e1's and e2's environments.
(Edit: I want to emphasize that it is not a performance issue, that makes no difference at all.)
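A small top-level sketch of the scope difference (succ here is our own function, shadowing the earlier definition):
let succ n = n + 1
let succ n = succ (succ n)  (* without rec: both calls refer to the previous succ, so this adds 2 *)

let rec fact n =            (* with rec: the fact on the right-hand side is this very definition *)
  if n = 0 then 1 else n * fact (n - 1)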
Here are two situations where having this non-recursive binding is useful:
shadowing an existing definition with a refinement that uses the old binding. Something like: let f x = (let x = sanitize x in ...), where sanitize is a function that ensures the input has some desirable property (e.g. it takes the norm of a possibly-non-normalized vector, etc.). This is very useful in some cases; see the sketch after this list.
metaprogramming, for example macro writing. Imagine I want to define a macro SQUARE(foo) that desugars into let x = foo in x * x, for any expression foo. I need this binding to avoid code duplication in the output (I don't want SQUARE(factorial n) to compute factorial n twice). This is only hygienic if the let binding is not recursive, otherwise I couldn't write let x = 2 in SQUARE(x) and get a correct result.
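A minimal sketch of the shadowing idiom from the first point, assuming a hypothetical normalize function in the role of sanitize:
let normalize (x, y) =
  let len = sqrt (x *. x +. y *. y) in
  (x /. len, y /. len)

let angle v =
  let v = normalize v in  (* non-recursive let: shadows the raw argument *)
  let (x, y) = v in
  atan2 y x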
So I claim it is very important indeed to have both the recursive and the non-recursive binding available. Now, the default behaviour of the let-binding is a matter of convention. You could say that let x = ... is recursive, and one must use let nonrec x = ... to get the non-recursive binder. Picking one default or the other is a matter of which programming style you want to favor, and there are good reasons to make either choice. Haskell suffers¹ from the unavailability of this non-recursive mode, and OCaml has exactly the same defect at the type level: type foo = ... is recursive, and there is no non-recursive option available -- see this blog post.
¹: when Google Code Search was available, I used it to search Haskell code for the pattern let x' = sanitize x in .... This is the usual workaround when non-recursive binding is not available, but it's less safe because you risk writing x instead of x' by mistake later on -- in some cases you want to have both available, so picking a different name can be deliberate. A good idiom would be to use a longer variable name for the first x, such as unsanitized_x. Anyway, just looking for x' literally (no other variable name) and x1 turned up a lot of results. Erlang (and all languages that try to make variable shadowing difficult: CoffeeScript, etc.) has even worse problems of this kind.
That said, the choice of having Haskell bindings recursive by default (rather than non-recursive) certainly makes sense, as it is consistent with lazy evaluation by default, which makes it really easy to build recursive values -- while strict-by-default languages have more restrictions on which recursive definitions make sense.
