In functional programming, is there a name for a function that takes an x and gives back a tuple (x, x)?

I was wondering: is there a commonly used term, in ML-family languages or in functional programming more generally, for a function that turns a value into a 2-tuple?
let toTuple2 x = (x, x)

In stack-based programming languages such as Forth, dup is a core operator that duplicates the top stack element (though it does not produce a tuple as such).
In Haskell, various packages provide this function under names like dup, dupe or double. Notice that two-tuples are also a core element of arrows, and dup = id &&& id.
I have not found anything specific to ML.
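For reference, here is a minimal Haskell sketch of that arrow-based definition (only Control.Arrow from base is assumed):
import Control.Arrow ((&&&))

-- (&&&) runs two functions on the same input and pairs the results,
-- so fanning out id twice duplicates the value.
dup :: a -> (a, a)
dup = id &&& id

-- ghci> dup 42
-- (42,42)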

I don't know of a name for that specific function.
However, that function can be seen as a special case of a more general one:
let applyCtorToXX c x = c x x
Indeed, you can verify that toTuple2 is equivalent to applyCtorToXX (,).
In combinatory logic, or at least as it is presented in To Mock a Mockingbird, such a function is named a "Warbler", and the symbol W is used for it (i.e. Wxy = xyy is the definition used in the book).
Looking at it from this perspective, your toTuple2 is W (,), which is the application of a warbler to the 2-tuple constructor.
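A minimal Haskell sketch of this view (the name w is just an illustrative choice for the warbler):
-- The W combinator: pass the same argument twice to a binary function.
w :: (a -> a -> b) -> a -> b
w f x = f x x

-- toTuple2 is the warbler applied to the 2-tuple constructor.
toTuple2 :: a -> (a, a)
toTuple2 = w (,)

-- ghci> toTuple2 'a'
-- ('a','a')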

Related

Terminology capturing ordering characteristic of arguments in functional composition

In the functional composition g compose f, what terms are used to refer to and differentiate the ordering of the function arguments f and g passed to the composition operator compose? For example, given the following compositions
val reverse = (s: String) => s.reverse
val dropThree = (s: String) => s.drop(3)
(reverse compose dropThree)("Make it so!") // ==> !os ti e: java.lang.String
(dropThree compose reverse)("Make it so!") // ==> ti ekaM: java.lang.String
what terminology makes explicit that reverse comes after in
reverse compose dropThree
whilst it comes first in
dropThree compose reverse
Over at Math SE they seem to think such precise terminology has not yet emerged
...I'd aim for an analogy with division or subtraction: you might, for
instance, call the outer function the composer and the inner the
composand. Words like this are not (as far as I know) in common use,
perhaps because the need for them hasn't arisen as often as those for
the elementary arithmetic operations.
However, in the software engineering world, composition, chaining, pipelining, etc. seem ubiquitous and are the bread and butter of functional programming, so there ought to exist precise terminology characterising the crucial ordering property of the operands involved in composition.
Note the terms the question is after refer specifically to particular arguments of composition, not the whole expression, akin to divisor and dividend which precisely describe which is which in division.
I don't think there's any formal terminology, but I would express a ∘ b (λx.a(b(x))) as
composition of a and b
a composed with b
a composed onto b
a chained onto b
a after b
b before a
x pipelined through b and a
As for the designations of the arguments to the compose function, I guess you're left with the Math.SE answer.
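To make the two orderings concrete, here is a small Haskell sketch; the names after and before are hypothetical, introduced only to label the two readings of (.):
-- "g `after` f" applies g to the result of f (the usual (.) order).
after :: (b -> c) -> (a -> b) -> a -> c
after = (.)

-- "f `before` g" is the same pipeline, read left to right.
before :: (a -> b) -> (b -> c) -> a -> c
before = flip (.)

-- ghci> (reverse `after` drop 3) "Make it so!"
-- "!os ti e"
-- ghci> (drop 3 `before` reverse) "Make it so!"
-- "!os ti e"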

Simple example of call-by-need

I'm trying to understand the theory behind "call-by-need." I do understand the definition, but I'm a bit confused. I would like to see a simple example that shows how call-by-need works.
After reading some previous threads, I found out that Haskell uses this kind of evaluation. Are there any other programming languages which support this feature?
I read about Scala's call-by-name, and I understand that call-by-name and call-by-need are similar but differ in that call-by-need keeps the evaluated value. But I really would love to see a real-life example (it does not have to be in Haskell) that shows call-by-need.
The function
say_hello numbers = putStrLn "Hello!"
ignores its numbers argument. Under call-by-value semantics, even though the argument is ignored, the argument expression at the call site is still evaluated, perhaps because of side effects that the rest of the program depends on.
In Haskell, we might call say_hello as
say_hello [1..]
where [1..] is the infinite list of naturals. Under call-by-value semantics, the CPU would run off trying to build an infinite list and never get to the say_hello at all!
Haskell merely outputs
$ runghc cbn.hs
Hello!
For less dramatic examples, the first ten natural numbers are
ghci> take 10 [1..]
[1,2,3,4,5,6,7,8,9,10]
The first ten odds are
ghci> take 10 $ filter odd [1..]
[1,3,5,7,9,11,13,15,17,19]
Under call-by-need semantics, each value — even a conceptually infinite one as in the examples above — is evaluated only to the extent required and no more.
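A couple more ghci one-liners make the "only to the extent required" point: the undefined parts below are never demanded, so no error is ever raised.
ghci> fst (1, undefined)
1
ghci> length [undefined, undefined, undefined]
3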
update: A simple example, as asked for:
ff 0 = 1
ff 1 = 1
ff n = go (ff (n-1))
  where
    go x = x + x
Under call-by-name, each invocation of go evaluates ff (n-1) twice, once for each appearance of x in its definition (because + is strict in both arguments, i.e. it demands the values of both of them).
Under call-by-need, go's argument is evaluated at most once. Specifically, here, x's value is found out only once, and reused for the second appearance of x in the expression x + x. If it weren't needed, x wouldn't be evaluated at all, just as with call-by-name.
Under call-by-value, go's argument is always evaluated exactly once, prior to entering the function's body, even if it isn't used anywhere in the function's body.
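One way to observe the difference in GHC is Debug.Trace; in this sketch (the names noisy and go are illustrative only) the argument announces when it is actually forced:
import Debug.Trace (trace)

-- trace prints its message at the moment the wrapped value is demanded.
noisy :: Int -> Int
noisy n = trace "argument evaluated" n

go :: Int -> Int
go x = x + x

main :: IO ()
main = print (go (noisy 21))
-- Under GHC's call-by-need, "argument evaluated" is printed only once,
-- even though x appears twice in go's body; under call-by-name it would
-- be printed twice, and under call-by-value once, before entering go.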
Here's my understanding of it, in the context of Haskell.
According to Wikipedia, "call by need is a memoized variant of call by name where, if the function argument is evaluated, that value is stored for subsequent uses."
Call by name:
take 10 . filter even $ [1..]
With a single consumer, each produced value is discarded right after being consumed, so it might as well be call-by-name.
Call by need:
import qualified Data.List.Ordered as O
h = 1 : map (2*) h <> map (3*) h <> map (5*) h
  where
    (<>) = O.union
The difference is that here the h list is reused by several consumers, at different tempos, so it is essential that the produced values are remembered. In a call-by-name language there would be much replication of computational effort here, because the defining expression for h would be substituted at each of its occurrences, causing a separate calculation for each. In a call-by-need-capable language like Haskell, the results of computing the elements of h are shared among all references to h.
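Assuming the data-ordlist package for Data.List.Ordered, the shared h above is a version of the classic Hamming-numbers stream and can be sampled directly:
ghci> take 10 h
[1,2,3,4,5,6,8,9,10,12]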
Another example: almost any data defined by fix is only possible under call-by-need. With call-by-value, the most we can have is the Y combinator.
See: Sharing vs. non-sharing fixed-point combinator and its linked entries and comments (among them, this, and its links, like Can fold be used to create infinite lists?).
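As a concrete sketch of data defined by fix (assuming only Data.Function.fix):
import Data.Function (fix)

-- The list refers to itself; under call-by-need the already-computed
-- prefix is shared, so each new element costs a single addition.
fibs :: [Integer]
fibs = fix (\xs -> 0 : 1 : zipWith (+) xs (tail xs))

-- ghci> take 10 fibs
-- [0,1,1,2,3,5,8,13,21,34]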

In Idris, why do interface parameters have to be type or data constructors?

To get some practice with Idris, I've been trying to represent various basic algebraic structures as interfaces. The way I thought of organizing things at first was to make the parameters of a given interface be the set and the various operations over it, and the methods/fields be proofs of the various axioms. For example, I was thinking of defining Group like so:
interface Group (G : Type) (op : G -> G -> G) (e : G) (inv : G -> G) where
  assoc : {x,y,z : G} -> (x `op` y) `op` z = x `op` (y `op` z)
  id_l : {x : G} -> e `op` x = x
  id_r : {x : G} -> x `op` e = x
  inv_l : {x : G} -> x `op` (inv x) = e
  inv_r : {x : G} -> (inv x) `op` x = e
My reasoning for doing it this way instead of just making op, e, and inv methods was that it would be easier to talk about the same set being a group in different ways. Like, mathematically, it doesn't make sense to talk about a set being a group; it only makes sense to talk about a set with a specified operation being a group. The same set can correspond to two completely different groups by defining the operation differently. On the other hand, the proofs of the various interface laws don't affect the group. While the inhabitants (proofs) of the laws may be different, it doesn't result in a different group. Thus, one would have no use for declaring multiple implementations.
More fundamentally, this approach seems like a better representation of the mathematical concepts. It's a category error to talk about a set being a group, so the mathematician in me isn't thrilled about asserting as much by making the group operation an interface method.
This scheme isn't possible, however. When I try it, the interface declaration itself typechecks, but as soon as I try to define an implementation, it doesn't work: Idris complains that, e.g.:
(+) cannot be a parameter of Algebra.Group
(Implementation arguments must be type or data constructors)
My question is: why this restriction? I assume there's a good reason, but for the life of me I can't see it. Like, I thought Idris collapses the value/type/kind hierarchy, so there's no real difference between types and values, so why do implementations treat types specially? And why are data constructors treated specially? It seems arbitrary to me.
Now, I could just achieve the same thing using named implementations, which I guess I'll end up doing now. I guess I'm just used to Haskell, where you can only have one instance of a typeclass for a given datatype. But it still feels rather arbitrary.... In particular, I would like to be able to define, e.g., a semiring as a tuple (R,+,*,0,1) where (R,+,0) is a monoid and (R,*,1) is a monoid (with the distributivity laws tacked on). But I don't think I can do that very easily without the above scheme, even with named implementations. I could only say whether or not R is a monoid---but for semirings, it needs to be a monoid in two distinct ways! I'm sure there are workarounds with some boilerplate type synonyms or something (which, again I'll probably end up doing), but I don't really see why that should be necessary.
$ idris --version
1.2.0

Why does ocaml need both "let" and "let rec"? [duplicate]

Possible Duplicate:
Why are functions in Ocaml/F# not recursive by default?
OCaml uses let to define a new function, or let rec to define a function that is recursive. Why does it need both of these - couldn't we just use let for everything?
For example, to define a non-recursive successor function and recursive factorial in OCaml (actually, in the OCaml interpreter) I might write
let succ n = n + 1;;
let rec fact n =
if n = 0 then 1 else n * fact (n-1);;
Whereas in Haskell (GHCI) I can write
let succ n = n + 1
let fact n =
if n == 0 then 1 else n * fact (n-1)
Why does OCaml distinguish between let and let rec? Is it a performance issue, or something more subtle?
Well, having both available instead of only one gives the programmer tighter control over scoping. With let x = e1 in e2, the binding is only visible in e2's environment, while with let rec x = e1 in e2 the binding is visible in both e1's and e2's environments.
(Edit: I want to emphasize that it is not a performance issue, that makes no difference at all.)
Here are two situations where having this non-recursive binding is useful:
shadowing an existing definition with a refinement that uses the old binding. Something like: let f x = (let x = sanitize x in ...), where sanitize is a function that ensures the input has some desirable property (e.g. it normalizes a possibly-non-normalized vector). This is very useful in some cases.
metaprogramming, for example macro writing. Imagine I want to define a macro SQUARE(foo) that desugars into let x = foo in x * x, for any expression foo. I need this binding to avoid code duplication in the output (I don't want SQUARE(factorial n) to compute factorial n twice). This is only hygienic if the let binding is not recursive, otherwise I couldn't write let x = 2 in SQUARE(x) and get a correct result.
So I claim it is very important indeed to have both the recursive and the non-recursive binding available. Now, the default behaviour of the let-binding is a matter of convention. You could say that let x = ... is recursive, and one must use let nonrec x = ... to get the non-recursive binder. Picking one default or the other is a matter of which programming style you want to favor, and there are good reasons to make either choice. Haskell suffers¹ from the unavailability of this non-recursive mode, and OCaml has exactly the same defect at the type level: type foo = ... is recursive, and there is no non-recursive option available -- see this blog post.
¹: when Google Code Search was available, I used it to search Haskell code for the pattern let x' = sanitize x in .... This is the usual workaround when non-recursive binding is not available, but it's less safe because you risk writing x instead of x' by mistake later on -- in some cases you want to have both available, so picking a different name can be deliberate. A good idiom would be to use a longer variable name for the first x, such as unsanitized_x. Anyway, just looking for x' literally (no other variable name) and x1 turned up a lot of results. Erlang (and all languages that try to make variable shadowing difficult: CoffeeScript, etc.) has even worse problems of this kind.
That said, the choice of having Haskell bindings recursive by default (rather than non-recursive) certainly makes sense, as it is consistent with lazy evaluation by default, which makes it really easy to build recursive values -- while strict-by-default languages have more restrictions on which recursive definitions make sense.
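To make the footnote's point concrete, here is a small Haskell sketch (sanitize is a hypothetical placeholder): because let is recursive by default, the attempted shadowing refers to itself, so a fresh name is required.
sanitize :: Double -> Double
sanitize = abs        -- hypothetical "sanitizer", for illustration only

bad :: Double -> Double
bad x = let x = sanitize x in x + 1    -- the right-hand x is the new binding itself: this never terminates

good :: Double -> Double
good x = let x' = sanitize x in x' + 1 -- the usual workaround: pick a fresh name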

How are functions curried?

I understand what the concept of currying is, and I know how to use it. Those are not my questions; rather, I am curious how this is actually implemented at some lower level than, say, Haskell code.
For example, when (+) 2 4 is curried, is a pointer to the 2 maintained until the 4 is passed in? Does Gandalf bend space-time? What is this magic?
Short answer: yes a pointer is maintained to the 2 until the 4 is passed in.
Longer than necessary answer:
Conceptually, you're supposed to think of Haskell as being defined in terms of the lambda calculus and term rewriting. Let's say you have the following definition:
f x y = x + y
This definition for f comes out in lambda calculus as something like the following, where I've explicitly put parentheses around the lambda bodies:
\x -> (\y -> (x + y))
If you're not familiar with the lambda calculus, this basically says "a function of an argument x that returns (a function of an argument y that returns (x + y))". In the lambda calculus, when we apply a function like this to some value, we can replace the application of the function by a copy of the body of the function with the value substituted for the function's parameter.
So then the expression f 1 2 is evaluated by the following sequence of rewrites:
(\x -> (\y -> (x + y))) 1 2
(\y -> (1 + y)) 2 # substituted 1 for x
(1 + 2) # substituted 2 for y
3
So you can see here that if we'd only supplied a single argument to f, we would have stopped at \y -> (1 + y). So we've got a whole term that is just a function for adding 1 to something, entirely separate from our original term, which may still be in use somewhere (for other references to f).
The key point is that if we implement functions like this, every function has only one argument but some return functions (and some return functions which return functions which return ...). Every time we apply a function we create a new term that "hard-codes" the first argument into the body of the function (including the bodies of any functions this one returns). This is how you get currying and closures.
Now, that's not how Haskell is directly implemented, obviously. Once upon a time, Haskell (or possibly one of its predecessors; I'm not exactly sure of the history) was implemented by graph reduction. This is a technique for doing something equivalent to the term reduction I described above, which automatically brings along lazy evaluation and a fair amount of data sharing.
In graph reduction, everything is references to nodes in a graph. I won't go into too much detail, but when the evaluation engine reduces the application of a function to a value, it copies the sub-graph corresponding to the body of the function, with the necessary substitution of the argument value for the function's parameter (but shares references to graph nodes where they are unaffected by the substitution). So essentially, yes: partially applying a function creates a new structure in memory that has a reference to the supplied argument (i.e. "a pointer to the 2"), and your program can pass around references to that structure (and even share it and apply it multiple times), until more arguments are supplied and it can actually be reduced. However, it's not as though it just remembers the function and accumulates arguments until it gets all of them; the evaluation engine actually does some of the work each time it's applied to a new argument. In fact, the graph reduction engine can't even tell the difference between an application that returns a function and still needs more arguments, and one that has just got its last argument.
I can't tell you much more about the current implementation of Haskell. I believe it's a distant mutant descendant of graph reduction, with loads of clever short-cuts and go-faster stripes. But I might be wrong about that; maybe they've found a completely different execution strategy that isn't anything at all like graph reduction anymore. But I'm 90% sure it'll still end up passing around data structures that hold on to references to the partial arguments, and it probably still does something equivalent to factoring in the arguments partially, as it seems pretty essential to how lazy evaluation works. I'm also fairly sure it'll do lots of optimisations and short cuts, so if you straightforwardly call a function of 5 arguments like f 1 2 3 4 5 it won't go through all the hassle of copying the body of f 5 times with successively more "hard-coding".
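At the source level, all of the above shows up simply as partial application producing a closure; here is a small sketch (the names add and addTwo are illustrative):
add :: Int -> Int -> Int
add x y = x + y

addTwo :: Int -> Int
addTwo = add 2        -- a value that "remembers" the 2 until the second argument arrives

-- ghci> addTwo 4
-- 6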
Try it out with GHC:
ghc -C Test.hs
This will generate C code in Test.hc
I wrote the following function:
f = (+) 16777217
And GHC generated this:
R1.p[1] = (W_)Hp-4;
*R1.p = (W_)&stg_IND_STATIC_info;
Sp[-2] = (W_)&stg_upd_frame_info;
Sp[-1] = (W_)Hp-4;
R1.w = (W_)&integerzmgmp_GHCziInteger_smallInteger_closure;
Sp[-3] = 0x1000001U;
Sp=Sp-3;
JMP_((W_)&stg_ap_n_fast);
The thing to remember is that in Haskell, partially applying is not an unusual case. There's technically no "last argument" to any function. As you can see here, Haskell is jumping to stg_ap_n_fast which will expect an argument to be available in Sp.
The stg here stands for "Spineless Tagless G-machine". There is a really good paper on it by Simon Peyton Jones. If you're curious about how the Haskell runtime is implemented, go read that first.

Resources