Unimportant question about Erlang and functional programming

I stumbled upon this question and I realized I had forgotten a lot of stuff from my nonprocedural programming class.
As I was trying to understand the code, it seemed terribly long-winded to me, so I attempted to shorten it. Does this do the same thing as the original code?
merge([X|Xs], Ys) -> [X | merge(Ys, Xs)];
merge([], []) -> [].
... I've never worked with Erlang before, so I may have made some syntax errors :-)

Yes, it works properly, and it is more elegant in presentation. However, if I've learned correctly, not using the Zs variable as an accumulator makes it not tail recursive and thus less efficient. Also, building the result in reverse with an accumulator and reversing it once at the end is more efficient than appending in the correct order as you go. That, I believe, is why the original would in some cases be more appropriate. But readability should trump efficiency where efficiency doesn't matter.
Perhaps:
merge(Xs, Ys) -> lists:reverse(merge(Xs, Ys, [])).
merge([X|Xs], Ys, Zs) -> merge(Ys, Xs, [X|Zs]);
merge([], [], Zs) -> Zs.
This would merge the efficiency of the original with the concise comprehensibility of yours.

You could go further:
merge(Xs, Ys) -> lists:reverse(merge1(Xs, Ys, [])).
merge1([], [], Zs) -> Zs;
merge1([X | Xs], [Y | Ys], Zs) -> merge1(Xs, Ys, [Y, X | Zs]).
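For example, merge([1, 2], [a, b]) evaluates to [1, a, 2, b], the same interleaving the original produces.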
This has the considerable advantage over feonixrift's suggestion that you are not switching the parameter order (which violates the principle of least surprise).
It is also good practice to give the helper function (in this case merge1) a different name, as a different name is easier to spot than a mere change in arity. This is particularly true if, for instance, merge/2 is exported and merge1/3 isn't. It basically says "I'm just a helper function, don't call me directly!"
It is also handy to write the desired terminator clause first, as this makes the nature of the recursion explicit - you know as soon as you read the function definition that it terminates on list exhaustion.

Related

F# tree-building function causes stack overflow in Xamarin Studio

I'm trying to build up some rules in a tree structure, with logic gates, i.e. and, not, or, as well as conditions, e.g. property x equals value y. I wrote the most obvious recursive function first, which worked. I then tried to write a version in continuation-passing style that wouldn't cause a stack overflow, taking my cue from this post about generic tree folding and this answer on Stack Overflow.
It works for small trees (depth of approximately 1000), but unfortunately with a large tree it causes a stack overflow when I run it on my Mac with Xamarin Studio. Can anyone tell me whether I've misunderstood how F# treats tail-recursive code or whether this code isn't tail-recursive?
The full sample is here.
let FoldTree andF orF notF leafV t data =
    let rec Loop t cont =
        match t with
        | AndGate (left, right) ->
            Loop left (fun lacc ->
                Loop right (fun racc ->
                    cont (andF lacc racc)))
        | OrGate (left, right) ->
            Loop left (fun lacc ->
                Loop right (fun racc ->
                    cont (orF lacc racc)))
        | NotGate exp ->
            Loop exp (fun acc -> cont (notF acc))
        | EqualsExpression (property, value) -> cont (leafV (property, value))
    Loop t id

let evaluateContinuationPassingStyle tree data =
    FoldTree (&&) (||) (not) (fun (prop, value) -> data |> Map.find prop |> ((=) value)) tree data
The code is tail-recursive; you got that right. But the problem is with Mono. See, Mono is not as high-quality an implementation of .NET as the official thing. In particular, it doesn't do tail call elimination. Like, at all.
For the simplest (and most prevalent) case of self-recursion this doesn't matter too much, because the compiler catches it earlier. The F# compiler is smart enough to spot that the function is calling itself, figure out under what conditions, and convert it into a neat while loop, so that the compiled code doesn't make any calls at all.
But when your tail call is to a function passed as parameter, the compiler can't do that, because the actual function being called isn't known until runtime. In fact, even mutual recursion of two functions can't be converted into a loop reliably.
Possible solutions:
Switch to .NET Core.
Don't use recursive continuations; use an accumulator instead (might not be possible).
Use self-recursion and pass a manually maintained stack of continuations (see the sketch after this list).
If all else fails, use a mutable stack.
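To make the last two options concrete, here is a minimal sketch, in Haskell since the idea is language-agnostic (the Tree, Frame, and eval names and the simplified leaf are mine, not from the question): the evaluator keeps an explicit, heap-allocated stack of pending work, so every call is a tail call to a known function and no native call stack is consumed.
data Tree
    = AndGate Tree Tree
    | OrGate Tree Tree
    | NotGate Tree
    | Leaf Bool  -- simplified: a condition that has already been evaluated

-- One frame per piece of pending work.
data Frame
    = AndL Tree | AndR Bool  -- still to do: right child / combine with left result
    | OrL Tree  | OrR Bool
    | NotF

eval :: Tree -> Bool
eval t0 = go t0 []
  where
    -- Descend into the tree, pushing the work that remains.
    go (AndGate l r) fs = go l (AndL r : fs)
    go (OrGate l r)  fs = go l (OrL r : fs)
    go (NotGate e)   fs = go e (NotF : fs)
    go (Leaf b)      fs = unwind b fs
    -- Pop frames, combining results on the way back up.
    unwind b []            = b
    unwind b (AndL r : fs) = go r (AndR b : fs)
    unwind b (AndR l : fs) = unwind (l && b) fs
    unwind b (OrL r : fs)  = go r (OrR b : fs)
    unwind b (OrR l : fs)  = unwind (l || b) fs
    unwind b (NotF : fs)   = unwind (not b) fs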

How to create a Prolog predicate that removes 2nd to last element?

I need help creating a predicate, written in Prolog, that removes the 2nd-to-last element of a list and returns the resulting list. So far I have
remove([],[]).
remove([X],[X]).
remove([X,Y],[Y]).
That is as far as I've gotten. I need to figure out a way to recursively go through the list until it is only two elements long and then reassemble the list to be returned. Help with an explanation, if you can.
Your definition so far is perfect! It is a little bit too specialized, so we will have to extend it. But your program is a solid foundation.
You "only" need to extend it.
remove([],[]).
remove([X],[X]).
remove([_,X],[X]).
remove([X,_,Y], [X,Y]).
remove([X,Y,_,Z], [X,Y,Z]).
remove([X,Y,Z,_,Z2], [X,Y,Z,Z2]).
...
OK, you see how to continue. Now, let us identify common cases:
...
remove([X,Y,_,Z], [X,Y,Z]).
% ^^^ ^^^
remove([X,Y,Z,_,Z2], [X,Y,Z,Z2]).
% ^^^^^ ^^^^^
...
So, we have a common list prefix. We could say:
Whenever we have a list and its removed list, we can conclude that by adding one element on both sides, we get a longer list of that kind.
remove([X|Xs], [X|Ys]) :-
   remove(Xs,Ys).
Please note that the :- is really an arrow. It means: provided that what is on the right-hand side is true, what is on the left-hand side is true as well.
H-h-hold a minute! Is this really the case? How to test this? (If you test just for positive cases, you will always get a "yes".) We don't have the time to conjure up some test cases, do we? So let us let Prolog do the hard work for us! Prolog, fill in the blanks!
remove([],[]).
remove([X],[X]).
remove([_,X],[X]).
remove([X|Xs], [X|Ys]) :-
   remove(Xs,Ys).
?- remove(Xs,Ys). % most general goal
Xs = [], Ys = []
;  Xs = [A], Ys = [A]
;  Xs = [_,A], Ys = [A]
;  Xs = [A], Ys = [A]            % redundant, but OK
;  Xs = [A,B], Ys = [A,B]        % unexpected - WRONG
;  Xs = [A,_,B], Ys = [A,B]
;  Xs = [A,B], Ys = [A,B]        % unexpected - WRONG again!
;  Xs = [A,B,C], Ys = [A,B,C]    % unexpected - WRONG
;  Xs = [A,B,_,C], Ys = [A,B,C]
;  ... .
It is tempting to reject everything and start again from scratch.
But in Prolog you can do better than that, so let's calm down and estimate the actual damage:
Some answers are incorrect. And some answers are correct.
It could be that our current definition is just a little bit too general.
To better understand the situation, I will look at the unexpected success remove([1,2],[1,2]) in detail. Who is the culprit for it?
Even the following program slice/fragment succeeds.
remove([],[]).
remove([X],[X]) :- false.
remove([_,X],[X]) :- false.
remove([X|Xs], [X|Ys]) :-
   remove(Xs,Ys).
While this is a specialization of our program, it reads: remove/2 holds for all pairs of identical lists. That can't be true! To fix the problem we have to do something in the remaining visible part, and we have to specialize it. What is problematic here is that the recursive rule also holds for:
remove([1,2], [1,2]) :-
   remove([2], [2]).
remove([2], [2]) :-
   remove([], []).
That kind of conclusion must be avoided. We need to restrict the rule to those cases where the list has at least two further elements, by adding another goal, (=)/2.
remove([X|Xs], [X|Ys]) :-
   Xs = [_,_|_],
   remove(Xs, Ys).
So what was our error? In the informal
Whenever we have a list and its removed list, ...
the term "removed list" was ambiguous. It could mean that we are referring here to the relation remove/2 (which is incorrect, because remove([],[]) holds, but still nothing is removed), or we are referring here to a list with an element removed. Such errors inevitably happen in programming since you want to keep your intuitions afresh by using a less formal language than Prolog itself.
For reference, here again (and for comparison with other definitions) is the final definition:
remove([],[]).
remove([X],[X]).
remove([_,X],[X]).
remove([X|Xs], [X|Ys]) :-
   Xs = [_,_|_],
   remove(Xs,Ys).
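For example, remove([1,2,3,4], Zs) now answers Zs = [1,2,4], and remove([1,2], Zs) answers only Zs = [2].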
There are more efficient ways to do this, but this is the most straightforward way.
I will try to provide another solution, which is easier to construct if you only consider the meaning of "second-to-last element" and describe each possible case explicitly:
rem_2nd_last([], []).
rem_2nd_last([First|Rest], R) :-
    rem_2nd_last_2(Rest, First, R).         % "Lag" the list once

rem_2nd_last_2([], First, [First]).
rem_2nd_last_2([Second|Rest], First, R) :-
    rem_2nd_last_3(Rest, Second, First, R). % "Lag" the list twice

rem_2nd_last_3([], Last, _SecondLast, [Last]). % End of list: drop second last
rem_2nd_last_3([This|Rest], Prev, PrevPrev, [PrevPrev|R]) :-
    rem_2nd_last_3(Rest, This, Prev, R).    % Rest of list
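For example, rem_2nd_last([1,2,3,4], R) succeeds deterministically with R = [1,2,4].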
The explanation is hiding in plain view in the definition of the three predicates.
"Lagging" is a way to reach back from the end of the list but keep the predicate always deterministic. You just grab one element and pass the rest of the list as the first argument of a helper predicate. One way, for example, to define last/2, is:
last([H|T], Last) :-
    last_1(T, H, Last).

last_1([], Last, Last).
last_1([H|T], _, Last) :-
    last_1(T, H, Last).

In pure functional languages, is data (strings, ints, floats.. ) also just functions?

I was thinking about pure object-oriented languages like Ruby, where everything, including numbers, ints, floats, and strings, is itself an object. Is it the same in pure functional languages? For example, in Haskell, are numbers and strings also functions?
I know Haskell is based on lambda calculus, which represents everything, including data and operations, as functions. It would seem logical to me that a "purely functional language" would model everything as a function, as well as keep to the definition that a function always returns the same output for the same inputs and has no state.
It's okay to think about that theoretically, but...
Just like in Ruby not everything is an object (argument lists, for instance, are not objects), not everything in Haskell is a function.
For more reference, check out this neat post: http://conal.net/blog/posts/everything-is-a-function-in-haskell
#wrhall gives a good answer. However, you are somewhat correct that in the pure lambda calculus it is consistent for everything to be a function, and the language is Turing-complete (capable of expressing any pure computation that Haskell, etc. can).
That gives you some very strange things, since the only thing you can do to anything is apply it to something else. When do you ever get to observe something? You have some value f and want to know something about it; your only choice is to apply it to some value x to get f x, which is another function, and your only choice then is to apply it to another value y, to get f x y, and so on.
Often I interpret the pure lambda calculus as talking about transformations on things that are not functions, but only capable of expressing functions itself. That is, I can make a function (with a bit of Haskelly syntax sugar for recursion & let):
purePlus = \zero succ natCase ->
    let plus = \m n -> natCase m n (\m' -> succ (plus m' n))
    in plus (succ (succ zero)) (succ (succ zero))
Here I have expressed the computation 2+2 without needing to know that there are such things as non-functions. I simply took what I needed as arguments to the function I was defining, and the values of those arguments could be Church encodings or they could be "real" numbers (whatever that means) -- my definition does not care.
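The point is easy to check concretely (my own driver code, not part of the answer; intCase is a name I made up): instantiate purePlus once with ordinary machine integers.
-- purePlus again, with one possible (rank-1) type for it.
purePlus :: n -> (n -> n) -> (n -> n -> (n -> n) -> n) -> n
purePlus zero succ natCase =
    let plus m n = natCase m n (\m' -> succ (plus m' n))
    in  plus (succ (succ zero)) (succ (succ zero))

-- Case analysis on Ints: a zero case, then a successor case.
intCase :: Int -> Int -> (Int -> Int) -> Int
intCase m z s = if m == 0 then z else s (m - 1)

main :: IO ()
main = print (purePlus 0 (+ 1) intCase)  -- prints 4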
And you could think the same thing of Haskell. There is no particular reason to think that there are things which are not functions, nor is there a particular reason to think that everything is a function. But Haskell's type system at least prevents you from applying an argument to a number (anybody thinking about fromInteger right now needs to hold their tongue! :-). In the above interpretation, it is because numbers are not necessarily modeled as functions, so you can't necessarily apply arguments to them.
In case it isn't clear by now, this whole answer has been somewhat of a technical/philosophical digression, and the easy answer to your question is "no, not everything is a function in functional languages". Functions are the things you can apply arguments to, that's all.
The "pure" in "pure functional" refers to the "freedom from side effects" kind of purity. It has little relation to the meaning of "pure" being used when people talk about a "pure object-oriented language", which simply means that the language manipulates purely (only) in objects.
The reason is that pure-as-in-only is a reasonable distinction to use to classify object-oriented languages: there are languages like Java and C++, which clearly have values that don't have all that much in common with objects, and there are also languages like Python and Ruby, for which it can be argued that every value is an object.1
Whereas for functional languages, there are no practical languages which are "pure functional" in the sense that every value the language can manipulate is a function. It's certainly possible to program in such a language. The most basic versions of the lambda calculus don't have any notion of things that are not functions, but you can still do arbitrary computation with them by coming up with ways of representing the things you want to compute on as functions.2
But while the simplicity and minimalism of the lambda calculus tends to be great for proving things about programming, actually writing substantial programs in such a "raw" programming language is awkward. The function representation of basic things like numbers also tends to be very inefficient to implement on actual physical machines.
But there is a very important distinction between languages that encourage a functional style but allow untracked side effects anywhere, and ones that actually enforce that your functions are "pure" functions (similar to mathematical functions). Object-oriented programming is very strongly wed to the use of impure computations3, so there are no practical object-oriented programming languages that are pure in this sense.
So the "pure" in "pure functional language" means something very different from the "pure" in "pure object-oriented language".4 In each case the "pure vs not pure" distinction is one that is completely uninteresting applied to the other kind of language, so there's no very strong motive to standardise the use of the term.
1 There are corner cases to pick at in all "pure object-oriented" languages that I know of, but that's not really very interesting. It's clear that the object metaphor goes much further in languages in which 1 is an instance of some class, and that class can be sub-classed, than it does in languages in which 1 is something else than an object.
2 All computation is about representation anyway. Computers don't know anything about numbers or anything else. They just have bit-patterns that we use to represent numbers, and operations on bit-patterns that happen to correspond to operations on numbers (because we designed them so that they would).
3 This isn't fundamental either. You could design a "pure" object-oriented language that was pure in this sense. I tend to write most of my OO code to be pure anyway.
4 If this seems obtuse, you might reflect that the terms "functional", "object", and "language" have vastly different meanings in other contexts also.
A very different angle on this question: all sorts of data in Haskell can be represented as functions, using a technique called Church encodings. This is a form of inversion of control: instead of passing data to functions that consume it, you hide the data inside a set of closures, and to consume it you pass in callbacks describing what to do with this data.
Any program that uses lists, for example, can be translated into a program that uses functions instead of lists:
-- | A list corresponds to a function of this type:
type ChurchList a r = (a -> r -> r)  -- ^ how to handle a cons cell
                   -> r              -- ^ how to handle the empty list
                   -> r              -- ^ result of processing the list

listToCPS :: [a] -> ChurchList a r
listToCPS xs = \f z -> foldr f z xs
That function is taking a concrete list as its starting point, but that's not necessary. You can build up ChurchList functions out of just pure functions:
-- | The empty 'ChurchList'.
nil :: ChurchList a r
nil = \f z -> z

-- | Add an element at the front of a 'ChurchList'.
cons :: a -> ChurchList a r -> ChurchList a r
cons x xs = \f z -> f x (xs f z)

foldChurchList :: (a -> r -> r) -> r -> ChurchList a r -> r
foldChurchList f z xs = xs f z

-- Because ChurchList is a plain type synonym, the list arguments below must
-- be used at a more specific result type; with RankNTypes these signatures
-- could stay at ChurchList a r.
mapChurchList :: (a -> b) -> ChurchList a (ChurchList b r) -> ChurchList b r
mapChurchList f = foldChurchList step nil
  where step x = cons (f x)

filterChurchList :: (a -> Bool) -> ChurchList a (ChurchList a r) -> ChurchList a r
filterChurchList pred = foldChurchList step nil
  where step x xs = if pred x then cons x xs else xs
That last function uses Bool, but of course we can replace Bool with functions as well:
-- | A Bool can be represented as a function that chooses between two
-- given alternatives.
type ChurchBool r = r -> r -> r
true, false :: ChurchBool r
true a _ = a
false _ b = b
filterChurchList' :: (a -> ChurchBool (ChurchList a r)) -> ChurchList a (ChurchList a r) -> ChurchList a r
filterChurchList' pred = foldChurchList step nil
  where step x xs = pred x (cons x xs) xs
This sort of transformation can be done for basically any type, so in theory, you could get rid of all "value" types in Haskell, and keep only the () type, the (->) and IO type constructors, return and >>= for IO, and a suitable set of IO primitives. This would obviously be hella impractical—and it would perform worse (try writing tailChurchList :: ChurchList a r -> ChurchList a r for a taste).
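For a taste (my own sketch, not part of the answer above): the usual trick is to fold the list into a pair of "tail so far" and "list so far", then project out the first component. Note once more the extra instantiation the plain type synonym forces on the argument.
-- Reuses ChurchList, nil, and cons from above.
tailChurchList :: ChurchList a (ChurchList a r, ChurchList a r) -> ChurchList a r
tailChurchList xs = fst (xs step (nil, nil))
  where
    -- For each element, the old "list so far" becomes the new tail,
    -- and the element is consed on to form the new "list so far".
    step x (_, rest) = (rest, cons x rest)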
Is getChar :: IO Char a function or not? The Haskell Report doesn't provide us with a definition. But it states that getChar is a function (see here). (Well, at least we can say that it is a function.)
So I think the answer is YES.
I don't think there can be a correct definition of "function" except "everything is a function". (What is a "correct definition"? Good question...) Consider the next example:
{-# LANGUAGE NoMonomorphismRestriction #-}
import Control.Applicative
f :: Applicative f => f Int
f = pure 1
g1 :: Maybe Int
g1 = f
g2 :: Int -> Int
g2 = f
Is f a function or a datatype? It depends.
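(Concretely: with the Maybe instance, g1 is Just 1; with the function instance, pure 1 is const 1, so g2 42 evaluates to 1.)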

Folds versus recursion in Erlang

According to Learn You Some Erlang:
Pretty much any function you can think of that reduces lists to 1 element can be expressed as a fold. [...]
This means fold is universal in the sense that you can implement pretty much any other recursive function on lists with a fold
My first thought when writing a function that takes a list and reduces it to 1 element is to use recursion.
What are the guidelines that should help me decide whether to use recursion or a fold?
Is this a stylistic consideration or are there other factors as well (performance, readability, etc.)?
I personally prefer recursion over fold in Erlang (contrary to other languages, e.g. Haskell). I don't find fold more readable than recursion. For example:
fsum(L) -> lists:foldl(fun(X,S) -> S+X end, 0, L).
or
fsum(L) ->
    F = fun(X, S) -> S + X end,
    lists:foldl(F, 0, L).
vs
rsum(L) -> rsum(L, 0).
rsum([], S) -> S;
rsum([H|T], S) -> rsum(T, H+S).
It seems like more code, but it is pretty straightforward and idiomatic Erlang. Using fold requires less code, but the difference becomes smaller and smaller with more payload. Imagine we want to filter for odd values and map them to their squares.
lcfoo(L) -> [ X*X || X<-L, X band 1 =:= 1].
fmfoo(L) ->
    lists:map(fun(X) -> X*X end,
              lists:filter(fun(X) when X band 1 =:= 1 -> true;
                              (_) -> false
                           end, L)).

ffoo(L) ->
    lists:foldr(fun(X, A) when X band 1 =:= 1 -> [X*X|A];
                   (_, A) -> A
                end, [], L).
rfoo([]) -> [];
rfoo([H|T]) when H band 1 =:= 1 -> [H*H | rfoo(T)];
rfoo([_|T]) -> rfoo(T).
Here the list comprehension wins, the recursive function comes second, and the fold version is ugly and less readable.
And finally, it is not true that fold is faster than the recursive version, especially when compiled to native (HiPE) code.
Edit:
I add a fold version with the fun in a variable, as requested:
ffoo2(L) ->
    F = fun(X, A) when X band 1 =:= 1 -> [X*X|A];
           (_, A) -> A
        end,
    lists:foldr(F, [], L).
I don't see how it is more readable than rfoo/1, and I find the accumulator manipulation in particular more complicated and less obvious than direct recursion. It is even longer.
Folds are usually both more readable (since everybody knows what they do) and faster due to optimized implementations in the runtime (especially foldl, which should always be tail recursive). It's worth noting that they are only a constant factor faster, not a different order of complexity, so it's usually premature optimization if you find yourself choosing one over the other for performance reasons.
Use standard recursion when you do fancy things, such as working on more than one element at a time, splitting into multiple processes and similar, and stick to higher-order functions (fold, map, ...) when they already do what you want.
I expect fold is implemented recursively, so you may want to look at trying to implement some of the various list functions, such as map or filter, with fold, and see how useful it can be; see the sketch below.
Otherwise, if you are doing this recursively, you may basically be re-implementing fold.
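Here is what that exercise looks like (sketched in Haskell for brevity; the same shapes translate directly to Erlang's lists:foldr, and the names mapViaFold and filterViaFold are my own):
-- map and filter, each re-expressed as a right fold
mapViaFold :: (a -> b) -> [a] -> [b]
mapViaFold f = foldr (\x acc -> f x : acc) []

filterViaFold :: (a -> Bool) -> [a] -> [a]
filterViaFold p = foldr (\x acc -> if p x then x : acc else acc) []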
Learn to use what comes with the language, is my thought.
This discussion on foldl and recursion is interesting:
Easy way to break foldl
If you look at the first paragraph in this introduction (you may want to read all of it), he states it better than I did.
http://www.cs.nott.ac.uk/~gmh/fold.pdf
Old thread, but in my experience fold runs slower than a recursive function.

Implications of foldr vs. foldl (or foldl')

Firstly, Real World Haskell, which I am reading, says to never use foldl and instead use foldl'. So I trust it.
But I'm hazy on when to use foldr vs. foldl'. Though I can see the structure of how they work differently laid out in front of me, I'm too stupid to understand when "which is better." I guess it seems to me like it shouldn't really matter which is used, as they both produce the same answer (don't they?). In fact, my previous experience with this construct is from Ruby's inject and Clojure's reduce, which don't seem to have "left" and "right" versions. (Side question: which version do they use?)
Any insight that can help a smarts-challenged sort like me would be much appreciated!
The recursion for foldr f x ys where ys = [y1,y2,...,yk] looks like
f y1 (f y2 (... (f yk x) ...))
whereas the recursion for foldl f x ys looks like
f (... (f (f x y1) y2) ...) yk
An important difference here is that if the result of f x y can be computed using only the value of x, then foldr doesn't need to examine the entire list. For example,
foldr (&&) False (repeat False)
returns False whereas
foldl (&&) False (repeat False)
never terminates. (Note: repeat False creates an infinite list where every element is False.)
On the other hand, foldl' is tail recursive and strict. If you know that you'll have to traverse the whole list no matter what (e.g., summing the numbers in a list), then foldl' is more space- (and probably time-) efficient than foldr.
(Diagrams of foldr and foldl omitted here; see Fold on the Haskell wiki for the pictures.)
Their semantics differ so you can't just interchange foldl and foldr. The one folds the elements up from the left, the other from the right. That way, the operator gets applied in a different order. This matters for all non-associative operations, such as subtraction.
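For example, subtraction is non-associative, and the two folds give different results (a quick GHCi check):
ghci> foldl (-) 0 [1,2,3]   -- ((0 - 1) - 2) - 3
-6
ghci> foldr (-) 0 [1,2,3]   -- 1 - (2 - (3 - 0))
2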
Haskell.org has an interesting article on the subject.
In short, foldr is better when the accumulator function is lazy in its second argument. Read more at the Haskell wiki's Stack Overflow page (pun intended).
The reason foldl' is preferred to foldl for 99% of all uses is that it can run in constant space for most uses.
Take the function sum = foldl['] (+) 0. When foldl' is used, the sum is immediately calculated, so applying sum to an infinite list will just run forever, and most likely in constant space (if you’re using things like Ints, Doubles, Floats. Integers will use more than constant space if the number becomes larger than maxBound :: Int).
With foldl, a thunk is built up (like a recipe of how to get the answer, which can be evaluated later, rather than storing the answer). These thunks can take up a lot of space, and in this case, it’s much better to evaluate the expression than to store the thunk (leading to a stack overflow… and leading you to… oh never mind)
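A quick way to see the difference (my own snippet, not from the answer above):
import Data.List (foldl')

main :: IO ()
main = do
    print (foldl' (+) 0 [1 .. 10000000 :: Int])  -- strict: sums in constant space
    print (foldl  (+) 0 [1 .. 10000000 :: Int])  -- lazy: builds a ten-million-deep
                                                 -- chain of thunks before adding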
Hope that helps.
By the way, Ruby's inject and Clojure's reduce are foldl (or foldl1, depending on which version you use). Usually, when there is only one form in a language, it is a left fold, including Python's reduce, Perl's List::Util::reduce, C++'s accumulate, C#'s Aggregate, Smalltalk's inject:into:, PHP's array_reduce, Mathematica's Fold, etc. Common Lisp's reduce defaults to left fold but there's an option for right fold.
As Konrad points out, their semantics are different. They don't even have the same type:
ghci> :t foldr
foldr :: (a -> b -> b) -> b -> [a] -> b
ghci> :t foldl
foldl :: (a -> b -> a) -> a -> [b] -> a
ghci>
For example, the list append operator (++) can be implemented with foldr as
(++) = flip (foldr (:))
while
(++) = flip (foldl (:))
will give you a type error.
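This is because foldr (:) ys xs rebuilds xs with ys in place of the final [], i.e. x1 : (x2 : (... : ys)), whereas foldl would feed its accumulator to (:) as the first argument, which expects an element rather than a list.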
