Is there a symbol or well-known idiom for the conditional function, in any of the APL dialects?
I'm sure I'm missing something, because it's such a basic language element. In other languages it's called conditional operator, but I will avoid that term here, because an APL operator is something else entirely.
For example, C and friends have x ? T : F
LISPs have (if x T F)
Python has T if x else F
and so on.
I know modern APLs have :If and friends, but they are imperative statements to control program flow: they don't return a value, cannot be used inside an expression and certainly cannot be applied to arrays of booleans. They have a different purpose altogether, which is just fine by me.
The only decent expression I could come up with to do a functional selection is (F T)[⎕IO+x], which doesn't look particularly concise or readable to me, although it gets the job done, even on arrays:
('no' 'yes')[⎕IO+(⍳5)∘.>(⍳5)]
no no no no no
yes no no no no
yes yes no no no
yes yes yes no no
yes yes yes yes no
I tried to come up with a similar expression using squad ⌷, but failed miserably on arrays of booleans. Even if I could, it would still have to embed ⎕IO or a hardcoded 1, which is even worse as far as readability is concerned.
Before I go ahead and define my own if and use it in every program I will ever write, is there any canon on this? Am I missing an obvious function or operator?
(Are there any APL programmers on SO? :-)
The trouble with these:
(f t)[x]
x⌷f t
x⊃f t
is that both t and f get evaluated.
If you want to short-circuit the thing, you can use guards:
{x:t ⋄ f}
This is equivalent to
if (x) {
    return t;
}
f;
in a C-like language.
Yes, there are APL programmers on SO (but not many!).
I think the answer is that there is no standard on this.
For a scalar solution, I use "pick":
x⊃f t
While for a Boolean array I use indexing as you do above:
(f t)[x]
I always use index origin zero, so there is no need to add 1. (The parens are still needed, since bracket indexing binds to the array immediately to its left.)
If these are not simple enough, I think you have to wrap them in a cover function named "if". That will also let you put the true and false values in the perhaps more natural ordering of t f.
In Dyalog APL you can use:
'value if true' (⊣⍣condition) 'value if false'
The idea is to apply ⊣ (left tack, which always returns its left argument and discards the right one) either 0 (for false) or 1 (for true) times to the right argument. If it is applied 0 times (i.e. not at all), the right argument is returned unmodified; if it is applied once, the left argument is returned instead. E.g.:
a b←3 5
Conditional←⊣⍣(a=b)
'match' Conditional 'different'
different
a b←4 4
Conditional←⊣⍣(a=b)
'match' Conditional 'different'
match
or
Cond←{⍺(⊣⍣⍺⍺)⍵}
bool←a=b
'match'(bool Cond)'different'
match
An old, old idiom which did something like C's ternary operator ? : and returned a result was the following:
r←⍎(¯3 3)[x=42]↑'6×8 ⋄ 6×7'
Note that this is written for origin 0 and the parens around ¯3 3 are there for clarity.
x=42 evaluates to zero or one; depending on that answer we choose ¯3 or 3, and thus take and execute either the last 3 characters ('6×7') or the first 3 characters ('6×8') of the string. The diamond ⋄ is just there for decoration.
Needless to say, one would probably not code this way if one had :If and :Else available, though the control structure form would not return a result.
This is a common question. I think the reason there is no standard answer is that, for the things you do with APL, there is actually less need for it than in other languages.
That said, it is sometimes needed, and the way I implement an IFELSE operator in GNU APL is using this function:
∇Z ← arg (truefn IFELSE falsefn) condition ;v
  v←⍬
  →(0=⎕NC 'arg')/noarg
  v←arg
noarg:
  →condition/istrue
  Z←falsefn v
  →end
istrue:
  Z←truefn v
end:
∇
The function can be called like this:
3 {'expression is true' ⍵} IFELSE {'was false' ⍵} 0
was false 3
This particular implementation passes in the left-hand argument as ⍵ to the clause, because that can be handy sometimes. Without a left-hand argument it passes in ⍬.
The APL expression:
(1+x=0)⌷y z
should be equivalent (assuming ⎕IO←1) to the C expression
x?y:z
Likewise:
(1+x>0)⌷y z
for
x<=0?y:z
Etc. In general, if a, b, and c are expressions in the respective languages, the APL expression:
(1+~a)⌷b c
should be equivalent to the C expression:
a?b:c
I'm trying to calculate the average of an integer array using the reduce function in one step. I can't do this:
say (reduce {($^a + $^b)}, <1 2 3>) / <1 2 3>.elems;
because it calculates the average in 2 separate pieces.
I need to do it like:
say reduce {($^a + $^b) / .elems}, <1 2 3>;
but it doesn't work of course.
How to do it in one step? (Using map or some other function is welcomed.)
TL;DR This answer starts with an idiomatic way to write equivalent code before discussing P6-flavored "tacit" programming and increasing brevity. I've also added "bonus" footnotes about the hyperoperation Håkon++ used in their first comment on your question.[5]
Perhaps not what you want, but an initial idiomatic solution
We'll start with a simple solution.[1]
P6 has built-in routines[2] that do what you're asking. Here's a way to do it using built-in subs:
say { sum($_) / elems($_) }(<1 2 3>); # 2
And here it is using corresponding[3] methods:
say { .sum / .elems }(<1 2 3>); # 2
What about "functional programming"?
First, let's replace .sum with an explicit reduction:
.reduce(&[+]) / .elems
When & is used at the start of an expression in P6 you know the expression refers to a Callable as a first class citizen.
A longhand way to refer to the infix + operator as a function value is &infix:<+>. The shorthand way is &[+].
As you clearly know, the reduce routine takes a binary operation as an argument and applies it to a list of values. In method form (invocant.reduce) the "invocant" is the list.
The above code calls two methods -- .reduce and .elems -- that have no explicit invocant. This is a form of "tacit" programming; methods written this way implicitly (or "tacitly") use $_ (aka "the topic" or simply "it") as their invocant.
Topicalizing (explicitly establishing what "it" is)
given binds a single value to $_ (aka "it") for a single statement or block.
(That's all given does. Many other keywords also topicalize but do something else too. For example, for binds a series of values to $_, not just one.)
Thus you could write:
say .reduce(&[+]) / .elems given <1 2 3>; # 2
Or:
$_ = <1 2 3>;
say .reduce(&[+]) / .elems; # 2
But given that your focus is FP, there's another way that you should know.
Blocks of code and "it"
First, wrap the code in a block:
{ .reduce(&[+]) / .elems }
The above is a Block, and thus a lambda. Lambdas without a signature get a default signature that accepts one optional argument.
Now we could again use given, for example:
say do { .reduce(&[+]) / .elems } given <1 2 3>; # 2
But we can also just use ordinary function call syntax:
say { .reduce(&[+]) / .elems }(<1 2 3>)
Because a postfix (...) calls the Callable on its left, and because in the above case one argument is passed in the parens to a block that expects one argument, the net result is the same as the do[4] and the given in the prior line of code.
Brevity with built ins
Here's another way to write it:
<1 2 3>.&{.sum/.elems}.say; #2
This calls a block as if it were a method. Imo that's still eminently readable, especially if you know P6 basics.
Or you can start to get silly:
<1 2 3>.&{.sum/$_}.say; #2
This is still readable if you know P6. The / is a numeric (division) operator. Numeric operators coerce their operands to be numeric. In the above $_ is bound to <1 2 3> which is a list. And in Perls, a collection in numeric context is its number of elements.
Changing P6 to suit you
So far I've stuck with standard P6.
You can of course write subs or methods and name them using any Unicode letters. If you want single letter aliases for sum and elems then go ahead:
my (&s, &e) = &sum, &elems;
But you can also extend or change the language as you see fit. For example, you can create user defined operators:
#| LHS ⊛ RHS.
#| LHS is an arbitrary list of input values.
#| RHS is a list: a reducer function, then the functions to be reduced.
sub infix:<⊛> (@lhs, *@rhs (&reducer, *@fns where *.all ~~ Callable)) {
    reduce &reducer, @fns».(@lhs)
}
say <1 2 3> ⊛ (&[/], &sum, &elems); # 2
I won't bother to explain this for now. (Feel free to ask questions in the comments.) My point is simply to highlight that you can introduce arbitrary (prefix, infix, circumfix, etc.) operators.
And if custom operators aren't enough you can change any of the rest of the syntax. cf "braid".
Footnotes
[1] This is how I would normally write code to do the computation asked for in the question. @timotimo++'s comment nudged me to alter my presentation to start with that, and only then shift gears to focus on a more FPish solution.
[2] In P6 all built-in functions are referred to by the generic term "routine" and are instances of a sub-class of Routine -- typically a Sub or Method.
[3] Not all built-in subs have correspondingly named methods, and vice-versa. Sometimes there are correspondingly named routines that don't work exactly the same way (the most common difference being whether or not the first argument to the sub is the same as the "invocant" in the method form). In addition, you can call a sub as if it were a method using the syntax .&foo for a named Sub or .&{ ... } for an anonymous Block, or call a method foo in a way that looks rather like a sub call using the syntax foo invocant: or foo invocant: arg2, arg3 if it has arguments beyond the invocant.
[4] If a block is used where it should obviously be invoked then it is. If it's not invoked then you can use an explicit do statement prefix to invoke it.
[5] Håkon's first comment on your question used "hyperoperation". With just one easy-to-recognize-and-remember "metaop" (for unary operations) or a pair of them (for binary operations), hyperoperations distribute an operation to all the "leaves"[6] of a data structure (for a unary operation) or create a new data structure by pairing up the "leaves" of two data structures (for a binary operation). NB. Hyperoperations are done in parallel.[7]
[6] What counts as a "leaf" for a hyperoperation is determined by a combination of the operation being applied (see the is nodal trait) and whether a particular element is Iterable.
[7] Hyperoperation is applied in parallel, at least semantically. Hyperoperation assumes[8] that the operations on the "leaves" have no mutually interfering side-effects -- that is to say, any side effect of applying the operation to one "leaf" can safely be ignored with respect to applying the operation to any other "leaf".
[8] By using a hyperoperation the developer is declaring that the assumption of no meaningful side-effects holds. The compiler will act on the basis that it does, but will not check that it is true. In the safety sense it's like a loop with a condition: the compiler will follow the dev's instructions, even if the result is an infinite loop.
Here is an example using given and the reduction meta operator:
given <1 2 3> { say ([+] $_) / $_.elems };
I'm trying to understand the theorem behind "call-by-need." I do understand the definition, but I'm a bit confused. I would like to see a simple example which shows how call-by-need works.
After reading some previous threads, I found out that Haskell uses this kind of evaluation. Are there any other programming languages which support this feature?
I read about Scala's call-by-name, and I do understand that call-by-name and call-by-need are similar, differing in that call-by-need keeps the evaluated value. But I really would love to see a real-life example (it does not have to be in Haskell) which shows call-by-need.
The function
say_hello numbers = putStrLn "Hello!"
ignores its numbers argument. Under call-by-value semantics, even though an argument is ignored, the argument expression at the call site is still evaluated, perhaps because of side effects that the rest of the program depends on.
In Haskell, we might call say_hello as
say_hello [1..]
where [1..] is the infinite list of naturals. Under call-by-value semantics, the CPU would run off trying to build an infinite list and never get to the say_hello at all!
Haskell merely outputs
$ runghc cbn.hs
Hello!
For less dramatic examples, the first ten natural numbers are
ghci> take 10 [1..]
[1,2,3,4,5,6,7,8,9,10]
The first ten odds are
ghci> take 10 $ filter odd [1..]
[1,3,5,7,9,11,13,15,17,19]
Under call-by-need semantics, each value — even a conceptually infinite one as in the examples above — is evaluated only to the extent required and no more.
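One more small illustration of "only to the extent required": length demands the spine of its list but never the elements, so even undefined elements are fine:
ghci> length [undefined, undefined, undefined]
3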
update: A simple example, as asked for:
ff 0 = 1
ff 1 = 1
ff n = go (ff (n-1))
  where
    go x = x + x
Under call-by-name, each invocation of go evaluates ff (n-1) twice, once for each appearance of x in its definition (because + is strict in both arguments, i.e. demands the values of both of them).
Under call-by-need, go's argument is evaluated at most once. Specifically, here, x's value is found out only once, and reused for the second appearance of x in the expression x + x. If it weren't needed, x wouldn't be evaluated at all, just as with call-by-name.
Under call-by-value, go's argument is always evaluated exactly once, prior to entering the function's body, even if it isn't used anywhere in the function's body.
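To make the difference observable, here is a minimal sketch of my own (not from the original answer) using Debug.Trace, whose trace prints its message each time the traced expression is actually forced:

import Debug.Trace (trace)

-- go doubles its argument, using x twice.
go :: Int -> Int
go x = x + x

main :: IO ()
main = print (go (trace "argument forced" (2 + 3)))
-- Under GHC's call-by-need this prints "argument forced" once:
-- the thunk for (2 + 3) is forced at the first use of x and shared.
-- Under call-by-name it would be forced twice, once per use of x.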
Here's my understanding of it, in the context of Haskell.
According to Wikipedia, "call by need is a memoized variant of call by name where, if the function argument is evaluated, that value is stored for subsequent uses."
Call by name:
take 10 . filter even $ [1..]
With only one consumer, each produced value is used once and then discarded, so it might as well be call-by-name.
Call by need:
import qualified Data.List.Ordered as O
h = 1 : map (2*) h <> map (3*) h <> map (5*) h
  where
    (<>) = O.union
The difference is, here the h list is reused by several consumers, at different tempos, so it is essential that the produced values are remembered. In a call-by-name language there'd be much replication of computational effort here because the computational expression for h would be substituted at each of its occurrences, causing separate calculation for each. In a call-by-need--capable language like Haskell the results of computing the elements of h are shared between each reference to h.
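As a usage sketch (O.union comes from the data-ordlist package), consuming the shared list yields the 5-smooth numbers:
ghci> take 10 h
[1,2,3,4,5,6,8,9,10,12]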
Another example: almost any data defined by fix is only possible under call-by-need. With call-by-value the most we can have is the Y combinator.
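For instance, a minimal sketch of the kind of fix-defined data meant here:

import Data.Function (fix)

-- An infinite list tied into a knot with fix; call-by-need lets us
-- consume any finite prefix of it.
ones :: [Int]
ones = fix (1 :)

main :: IO ()
main = print (take 5 ones)   -- [1,1,1,1,1]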
See: Sharing vs. non-sharing fixed-point combinator and its linked entries and comments (among them, Can fold be used to create infinite lists?).
I was discussing programming with a friend, who is an advocate for functional programming. He mentioned that you don't need to use if statements, but I can't seem to conceptualize how you would implement
if (something):
    do this;
else:
    do something_else;
In a functional paradigm?
Edit: my friend specifically mentioned that there are cases where you wouldn't need to use an if expression, even though you can. For example:
if x is odd:
    x + 1
else:
    x / 2
Is there a way to implement the above without using any if statements or conditionals?
Without more context it's hard to know exactly what your friend meant, but two things come to mind that he could have reasonably meant:
In functional languages, if conditionals are expressions, not statements, so you'd be using if expressions and not if statements. This difference means that you write things like:
let x =
  if condition
  then value1
  else value2
Instead of:
let x be a mutable variable
if condition
then x = value1
else x = value2
So this allows you to write in functional style without mutating variables.
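For concreteness, here is a hedged Haskell rendering of the same idea, using the odd/even example from the question (step is my own name for it):

-- if/then/else is an expression: it yields a value, so nothing mutates.
step :: Int -> Int
step x = if odd x then x + 1 else x `div` 2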
The other thing he could have meant is that many functional languages offer constructs like pattern matching or guards that you can use instead of if statements. Pattern matching allows you to inspect the structure of a value and take it apart at the same time. As an example you can write this:
match my_list with
| x :: xs -> x + sum xs
| [] -> 0
Instead of this:
if my_list is not empty
then
    let x be the first element of my_list
    let xs be the list containing the remaining elements of my_list
    x + sum xs
else
    0
Using pattern matching is preferable because it avoids calling functions on a value whose structure does not support it. In the example above, a function that returns the first element of a list would presumably cause an error when called on an empty list (which might happen if we mess up the if condition). But if we use pattern matching to get at the first element this can't happen because the syntax of the matching construct ensures that we only get x and xs if my_list is really not empty.
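Here is the same idea in actual Haskell (sumList is my own name for it):

-- Matching on the list's structure makes the empty case explicit, so we
-- never ask an empty list for its first element.
sumList :: [Int] -> Int
sumList (x:xs) = x + sumList xs
sumList []     = 0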
Pattern guards allow you to add arbitrary conditions to pattern matching:
match f(x) with
| 0 -> "f(x) was zero"
| 1 -> "f(x) was one"
| x when x > 1 -> "f(x) was greater than one"
| _ -> "f(x) was negative"
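For comparison, a sketch of the same conditions written with Haskell guards (describe is a hypothetical name; it would be applied to the result of f x):

-- Guards attach Boolean conditions to pattern-match clauses.
describe :: Int -> String
describe n
  | n == 0    = "f(x) was zero"
  | n == 1    = "f(x) was one"
  | n > 1     = "f(x) was greater than one"
  | otherwise = "f(x) was negative"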
This can be cleaner if you're pattern matching anyway, but that hardly means you shouldn't use if expressions in functional languages. If you aren't already pattern matching on a value, introducing a match just so that you can use a guard makes little sense compared with a plain if expression.
The part that should confuse you isn't the if, it's the "do".
In functional programming, you don't "do" anything.
You just define the result to be some function of the input.
The function may of course have conditionals (like cond ? a : b in languages like C#, Java, C++, etc.), but a and b are expressions that evaluate to some common type; they are not statements -- so the result is either a or b, depending on cond.
In Java we can write result ? 1 : 0. It is a short way to get a value depending on the boolean result.
How can I write such a thing in OCaml?
It sounds like this is a duplicate question. However, note that everything in OCaml is an expression. So the answer is if result then 1 else 0. You may need to parenthesize this depending on the context. (In the C family, a form I sometimes use for your expression is !!result.)
I understand what the concept of currying is, and know how to use it. These are not my questions, rather I am curious as to how this is actually implemented at some lower level than, say, Haskell code.
For example, when (+) 2 4 is curried, is a pointer to the 2 maintained until the 4 is passed in? Does Gandalf bend space-time? What is this magic?
Short answer: yes a pointer is maintained to the 2 until the 4 is passed in.
Longer than necessary answer:
Conceptually, you're supposed to think about Haskell being defined in terms of the lambda calculus and term rewriting. Let's say you have the following definition:
f x y = x + y
This definition for f comes out in lambda calculus as something like the following, where I've explicitly put parentheses around the lambda bodies:
\x -> (\y -> (x + y))
If you're not familiar with the lambda calculus, this basically says "a function of an argument x that returns (a function of an argument y that returns (x + y))". In the lambda calculus, when we apply a function like this to some value, we can replace the application of the function by a copy of the body of the function with the value substituted for the function's parameter.
So then the expression f 1 2 is evaluated by the following sequence of rewrites:
(\x -> (\y -> (x + y))) 1 2
(\y -> (1 + y)) 2 # substituted 1 for x
(1 + 2) # substituted 2 for y
3
So you can see here that if we'd only supplied a single argument to f, we would have stopped at \y -> (1 + y). So we've got a whole term that is just a function for adding 1 to something, entirely separate from our original term, which may still be in use somewhere (for other references to f).
The key point is that if we implement functions like this, every function has only one argument but some return functions (and some return functions which return functions which return ...). Every time we apply a function we create a new term that "hard-codes" the first argument into the body of the function (including the bodies of any functions this one returns). This is how you get currying and closures.
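In Haskell notation, the corresponding partial application looks like this (a small sketch of the concept, not of GHC's machinery; the names are my own):

f :: Int -> Int -> Int
f = \x -> \y -> x + y

-- addOne "hard-codes" 1 into the body: it is the closure \y -> 1 + y.
addOne :: Int -> Int
addOne = f 1

main :: IO ()
main = print (addOne 2)   -- 3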
Now, that's not how Haskell is directly implemented, obviously. Once upon a time, Haskell (or possibly one of its predecessors; I'm not exactly sure on the history) was implemented by graph reduction. This is a technique for doing something equivalent to the term reduction I described above, that automatically brings along lazy evaluation and a fair amount of data sharing.
In graph reduction, everything is references to nodes in a graph. I won't go into too much detail, but when the evaluation engine reduces the application of a function to a value, it copies the sub-graph corresponding to the body of the function, with the necessary substitution of the argument value for the function's parameter (but shares references to graph nodes where they are unaffected by the substitution). So essentially, yes: partially applying a function creates a new structure in memory that has a reference to the supplied argument (i.e. "a pointer to the 2"), and your program can pass around references to that structure (and even share it and apply it multiple times), until more arguments are supplied and it can actually be reduced.

However, it's not like it's just remembering the function and accumulating arguments until it gets all of them; the evaluation engine actually does some of the work each time it's applied to a new argument. In fact the graph reduction engine can't even tell the difference between an application that returns a function and still needs more arguments, and one that has just received its last argument.
I can't tell you much more about the current implementation of Haskell. I believe it's a distant mutant descendant of graph reduction, with loads of clever short-cuts and go-faster stripes. But I might be wrong about that; maybe they've found a completely different execution strategy that isn't anything at all like graph reduction anymore. But I'm 90% sure it'll still end up passing around data structures that hold on to references to the partial arguments, and it probably still does something equivalent to factoring in the arguments partially, as it seems pretty essential to how lazy evaluation works. I'm also fairly sure it'll do lots of optimisations and short cuts, so if you straightforwardly call a function of 5 arguments like f 1 2 3 4 5 it won't go through all the hassle of copying the body of f 5 times with successively more "hard-coding".
Try it out with GHC:
ghc -C Test.hs
This will generate C code in Test.hc
I wrote the following function:
f = (+) 16777217
And GHC generated this:
R1.p[1] = (W_)Hp-4;
*R1.p = (W_)&stg_IND_STATIC_info;
Sp[-2] = (W_)&stg_upd_frame_info;
Sp[-1] = (W_)Hp-4;
R1.w = (W_)&integerzmgmp_GHCziInteger_smallInteger_closure;
Sp[-3] = 0x1000001U;
Sp=Sp-3;
JMP_((W_)&stg_ap_n_fast);
The thing to remember is that in Haskell, partial application is not an unusual case. There's technically no "last argument" to any function. As you can see here, Haskell jumps to stg_ap_n_fast, which will expect an argument to be available in Sp.
The stg here stands for "Spineless Tagless G-machine". There is a really good paper on it by Simon Peyton Jones. If you're curious about how the Haskell runtime is implemented, go read that first.