Update this statement for Julia 1.0

Could someone explain how to update this for Julia 1.0?
function _encode_zigzag{T <: Integer}(n::T)
num_bits = sizeof(T) * 8
(n << 1) ⊻ (n >> (num_bits - 1))
end
And also what is the difference with:
function _encode_zigzag(n::Integer)
num_bits = sizeof(T) * 8
(n << 1) ⊻ (n >> (num_bits - 1))
end

Firstly, in Julia 1.x the subtype constraints on type parameters are stated after the argument list, introduced by the reserved word where.
function _encode_zigzag(n::T) where {T <: Integer}
num_bits = sizeof(T) * 8
(n << 1) ⊻ (n >> (num_bits - 1))
end
The curly braces are unnecessary when there is only one type parameter, but it is recommended to keep them for clarity.
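For example, calling the updated function on a couple of integer types (a hypothetical REPL session; the results were worked out by hand from the bit operations above):
julia> _encode_zigzag(Int8(3))
6
julia> _encode_zigzag(Int8(-3))
5
julia> _encode_zigzag(-1)
1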
Now for the second question. In the version of your method where n is declared as an Integer, sizeof will not work, since the size of an abstract type is undefined (and, as written, T is not even defined inside that method's body). Establishing the subtype constraint instead ensures that the given argument has a concrete type with a defined size while still allowing different integer types. Julia will compile a separate specialised version of the function for each Integer subtype that gets passed.
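To see the sizeof issue in isolation, here is a quick sketch of a REPL session (the exact error text may vary between Julia versions):
julia> sizeof(Int64), sizeof(UInt8)
(8, 1)
julia> sizeof(Integer)
ERROR: Abstract type Integer does not have a definite size.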
This is also preferable to declaring n with a concrete type such as Int64, since that would require the argument to be converted to exactly that type before the function could be called, without any gain in performance.
You can read more about this in the Julia documentation.

Related

Trying to pass an array into a function

I'm very new to Julia, and I'm trying to just pass an array of numbers into a function and count the number of zeros in it. I keep getting the error:
ERROR: UndefVarError: array not defined
I really don't understand what I am doing wrong, so I'm sorry if this seems like such an easy task that I can't do.
function number_of_zeros(lst::array[])
count = 0
for e in lst
if e == 0
count + 1
end
end
println(count)
end
lst = [0,1,2,3,0,4]
number_of_zeros(lst)
There are two issues with your function definition:
As noted in Shayan's answer and Dan's comment, the array type in Julia is called Array (capitalized) rather than array. To see:
julia> array
ERROR: UndefVarError: array not defined
julia> Array
Array
Empty square brackets are used to instantiate an array, and if preceded by a type, they specifically instantiate an array holding objects of that type:
julia> x = Int[]
Int64[]
julia> push!(x, 3); x
1-element Vector{Int64}:
3
julia> push!(x, "test"); x
ERROR: MethodError: Cannot `convert` an object of type String to an object of type Int64
Thus when you do Array[] you are actually instantiating an empty vector of Arrays:
julia> y = Array[]
Array[]
julia> push!(y, rand(2)); y
1-element Vector{Array}:
[0.10298669573927233, 0.04327245960128345]
Now it is important to note that there's a difference between a type and an object of a type, and if you want to restrict the types of input arguments to your functions, you want to do this by specifying the type that the function should accept, not an instance of this type. To see this, consider what would happen if you had fixed your array typo and passed an Array[] instead:
julia> f(x::Array[])
ERROR: TypeError: in typeassert, expected Type, got a value of type Vector{Array}
Here Julia complains that you have provided a value of the type Vector{Array} in the type annotation, when you should have provided a type.
More generally though, you should think about why you are adding any type restrictions to your functions. If you define a function without any input types, Julia will still compile a method instance specialised for the type of input provided when you first call the function, and will therefore (most of the time) generate machine code that is optimal for the specific types passed.
That is, there is no difference between
number_of_zeros(lst::Vector{Int64})
and
number_of_zeros(lst)
in terms of runtime performance when the second definition is called with an argument of type Vector{Int64}. Some people still like type annotations as a form of error check, but you also need to consider that adding type annotations makes your methods less generic and will often prevent you from using them in combination with code other people have written. The most common example of this is Julia's excellent autodiff capabilities: they rely on running your code with dual numbers, a special numeric type that enables automatic differentiation. If you strictly type your functions as suggested (Vector{Int}), you preclude them from being automatically differentiated in this way.
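As a small illustration (the function names here are hypothetical and only for this example):
# untyped: works for any container whose elements can be compared to 0
count_zeros_generic(lst) = count(==(0), lst)
# typed: only accepts exactly a Vector{Int64}
count_zeros_typed(lst::Vector{Int64}) = count(==(0), lst)
count_zeros_generic([0.0, 1.0, 0.0])  # returns 2
count_zeros_typed([0.0, 1.0, 0.0])    # ERROR: MethodError: no method matching count_zeros_typed(::Vector{Float64})
Both versions run at the same speed on a Vector{Int64}, because Julia specialises the untyped method for that argument type anyway.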
Finally, a note of caution about the Array type: Julia's arrays can be multidimensional, which means that Array{Int} is not a concrete type:
julia> isconcretetype(Array{Int})
false
to make it concrete, the dimensionality of the array has to be provided:
julia> isconcretetype(Array{Int, 1})
true
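For reference, Vector{Int} is simply an alias for the one-dimensional case:
julia> Vector{Int} === Array{Int, 1}
true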
First, it might be better to avoid variable names that clash with function names. count is a built-in Julia function, so if you want to use the count function inside number_of_zeros, you will undoubtedly face a problem.
Second, consider returning the value instead of printing it (although you also didn't place the print call correctly).
Third, you can update the value with +=, not just +.
Last but not least, type names in Julia always start with a capital letter, so there is no standard type called array; it's Array.
Here is the corrected code:
function number_of_zeros(lst::Array{Int64})
counter = 0
for e in lst
if e == 0
counter += 1
end
end
return counter
end
lst = [0,1,2,3,0,4]
number_of_zeros(lst)
would result in 2.
Additional explanation
As noted above, it is better to avoid variable names that clash with function names: count is a built-in Julia function, so shadowing it inside number_of_zeros causes problems.
Check this example:
function number_of_zeros(lst::Array{Int64})
count = 0
for e in lst
if e == 0
count += 1
end
end
return count, count(==(1), lst)
end
number_of_zeros(lst)
This code will lead to this error:
ERROR: MethodError: objects of type Int64 are not callable
Maybe you forgot to use an operator such as *, ^, %, / etc. ?
Stacktrace:
[1] number_of_zeros(lst::Vector{Int64})
@ Main \t.jl:10
[2] top-level scope
@ \t.jl:16
Because I shadowed the count function with my local count variable! It's possible to avoid such problems by calling the function through its module:
function number_of_zeros(lst::Array{Int64})
count = 0
for e in lst
if e == 0
count += 1
end
end
return count, Base.count(==(1), lst)
end
The point is that by writing Base.count, the compiler knows which count I mean.

How to make Pre and Post conditions for recursive functions in SPARK?

I'm translating an exercise I made in Dafny into SPARK, where one verifies a tail recursive function against a recursive one. The Dafny source (censored, because it might still be used for classes):
function Sum(n:nat):nat
decreases n
{
if n==0 then n else n+Sum(n-1)
}
method ComputeSum(n:nat) returns (s:nat)
ensures s == Sum(n)
{
s := 0;
// ...censored...
}
What I got in SPARK so far:
function Sum (n : in Natural) return Natural
is
begin
if n = 0 then
return n;
else
return n + Sum(n - 1);
end if;
end Sum;
function ComputeSum(n : in Natural) return Natural
with
Post => ComputeSum'Result = Sum(n)
is
s : Natural := 0;
begin
-- ...censored...
return s;
end ComputeSum;
I cannot seem to figure out how to express the decreases n condition (which now that I think about it might be a little odd... but I got graded for it a few years back so who am I to judge, and the question remains how to get it done). As a result I get warnings of possible overflow and/or infinite recursion.
I'm guessing there is a pre or post condition to be added. Tried Pre => n <= 1 which obviously does not overflow, but I still get the warning. Adding Post => Sum'Result <= n**n on top of that makes the warning go away, but that condition gets a "postcondition might fail" warning, which isn't right, but guess the prover can't tell. Also not really the expression I should check against, but I cannot seem to figure what other Post I'm looking for. Possibly something very close to the recursive expression, but none of my attempts work. Must be missing out on some language construct...
So, how could I express the recursive constraints?
Edit 1:
Following links to this SO answer and this SPARK doc section, I tried this:
function Sum (n : in Natural) return Natural
is
(if n = 0 then 0 else n + Sum(n - 1))
with
Pre => (n in 0 .. 2),
Contract_Cases => (n = 0 => Sum'Result = 0,
n >= 1 => Sum'Result = n + Sum(n - 1)),
Subprogram_Variant => (Decreases => n);
However getting these warnings from SPARK:
spark.adb:32:30: medium: overflow check might fail [reason for check: result of addition must fit in a 32-bits machine integer][#0]
spark.adb:36:56: warning: call to "Sum" within its postcondition will lead to infinite recursion
If you want to prove that the result of some tail-recursive summation function equals the result of a given recursive summation function for some value N, then it should, in principle, suffice to only define the recursive function (as an expression function) without any post-condition. You then only need to mention the recursive (expression) function in the post-condition of the tail-recursive function (note that there was no post-condition (ensures) on the recursive function in Dafny either).
However, as one of SPARK's primary goals is to prove the absence of runtime errors, you must also prove that overflow cannot occur, and for this reason you do need a post-condition on the recursive function. A reasonable choice for such a post-condition is, as @Jeffrey Carter already suggested in the comments, the explicit summation formula for an arithmetic progression:
Sum (N) = N * (1 + N) / 2
The choice is actually very attractive as with this formula we can now also functionally validate the recursive function itself against a well-known mathematically explicit expression for computing the sum of a series of natural numbers.
Unfortunately, using this formula as-is will only bring you somewhere half-way. In SPARK (and Ada as well), pre- and post-conditions are optionally executable (see also RM 11.4.2 and section 5.11.1 in the SPARK Reference Guide) and must therefore themselves be free of any runtime errors. Therefore, using the formula as-is will only allow you to prove that no overflow occurs for any positive number up until
max N s.t. N * (1 + N) <= Integer'Last <-> N = 46340
as in the post-condition, the multiplication is not allowed to overflow either (note that Natural'Last = Integer'Last = 2**31 - 1).
To work around this, you'll need to make use of the big integers package that has been introduced in the Ada 202x standard library (see also RM A.5.6; this package is already included in GNAT CE 2021 and GNAT FSF 11.2). Big integers are unbounded and computations with these integers never overflow. Using these integers, one can prove that overflow will not occur for any positive number up until
max N s.t. N * (1 + N) / 2 <= Natural'Last <-> N = 65535 = 2**16 - 1
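As a quick sanity check on these bounds: 46340 * 46341 = 2147441940 fits in 2**31 - 1 = 2147483647 while 46341 * 46342 = 2147534622 does not, and 65535 * 65536 / 2 = 2147450880 fits while 65536 * 65537 / 2 = 2147516416 does not.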
The usage of these integers in a post-condition is illustrated in the example below.
Some final notes:
The Subprogram_Variant aspect is only needed to prove that a recursive subprogram will eventually terminate. Such a proof of termination must be requested explicitly by adding an annotation to the function (also shown in the example below and as discussed in the SPARK documentation pointed out by @egilhh in the comments). The Subprogram_Variant aspect is, however, not needed for your initial purpose: proving that the result of some tail-recursive summation function equals the result of a given recursive summation function for some value N.
To compile a program that uses functions from the new Ada 202x standard library, use compiler option -gnat2020.
While I use a subtype to constrain the range of permissible values for N, you could also use a precondition. This should not make any difference. However, in SPARK (and Ada as well), it is in general considered best practice to express constraints using (sub)types as much as possible.
Consider counterexamples as possible clues rather than facts: they are optionally generated by some solvers and may or may not make sense. See also section 7.2.6 in the SPARK user's guide.
main.adb
with Ada.Numerics.Big_Numbers.Big_Integers;
procedure Main with SPARK_Mode is
package BI renames Ada.Numerics.Big_Numbers.Big_Integers;
use type BI.Valid_Big_Integer;
-- Conversion functions.
function To_Big (Arg : Integer) return BI.Valid_Big_Integer renames BI.To_Big_Integer;
function To_Int (Arg : BI.Valid_Big_Integer) return Integer renames BI.To_Integer;
subtype Domain is Natural range 0 .. 2**16 - 1;
function Sum (N : Domain) return Natural is
(if N = 0 then 0 else N + Sum (N - 1))
with
Post => Sum'Result = To_Int (To_Big (N) * (1 + To_Big (N)) / 2),
Subprogram_Variant => (Decreases => N);
-- Request a proof that Sum will terminate for all possible values of N.
pragma Annotate (GNATprove, Terminating, Sum);
begin
null;
end Main;
output (gnatprove)
$ gnatprove -Pdefault.gpr --output=oneline --report=all --level=1 --prover=z3
Phase 1 of 2: generation of Global contracts ...
Phase 2 of 2: flow analysis and proof ...
main.adb:13:13: info: subprogram "Sum" will terminate, terminating annotation has been proved
main.adb:14:30: info: overflow check proved
main.adb:14:32: info: subprogram variant proved
main.adb:14:39: info: range check proved
main.adb:16:18: info: postcondition proved
main.adb:16:31: info: range check proved
main.adb:16:53: info: predicate check proved
main.adb:16:69: info: division check proved
main.adb:16:71: info: predicate check proved
Summary logged in [...]/gnatprove.out
ADDENDUM (in response to comment)
So you can add the post condition as a recursive function, but that does not help in proving the absence of overflow; you will still have to provide some upper bound on the function result in order to convince the prover that the expression N + Sum (N - 1) will not cause an overflow.
To check the absence of overflow during the addition, the prover will consider all possible values that Sum might return according to its specification and see if at least one of those values might cause the addition to overflow. In the absence of an explicit bound in the post condition, Sum might, according to its return type, return any value in the range Natural'Range. That range includes Natural'Last and that value will definitely cause an overflow. Therefore, the prover will report that the addition might overflow. The fact that Sum never returns that value given its allowable input values is irrelevant here (that's why it reports might). Hence, a more precise upper bound on the return value is required.
If an exact upper bound is not available, then you'll typically fallback onto a more conservative bound like, in this case, N * N (or use saturation math as shown in the Fibonacci example from the SPARK user manual, section 5.2.7, but that approach does change your function which might not be desirable).
Here's an alternative example:
example.ads
package Example with SPARK_Mode is
subtype Domain is Natural range 0 .. 2**15;
function Sum (N : Domain) return Natural
with Post =>
Sum'Result = (if N = 0 then 0 else N + Sum (N - 1)) and
Sum'Result <= N * N; -- conservative upper bound if the closed form
-- solution to the recursive function would
-- not exist.
end Example;
example.adb
package body Example with SPARK_Mode is
function Sum (N : Domain) return Natural is
begin
if N = 0 then
return N;
else
return N + Sum (N - 1);
end if;
end Sum;
end Example;
output (gnatprove)
$ gnatprove -Pdefault.gpr --output=oneline --report=all
Phase 1 of 2: generation of Global contracts ...
Phase 2 of 2: flow analysis and proof ...
example.adb:8:19: info: overflow check proved
example.adb:8:28: info: range check proved
example.ads:7:08: info: postcondition proved
example.ads:7:45: info: overflow check proved
example.ads:7:54: info: range check proved
Summary logged in [...]/gnatprove.out
I landed in something that sometimes works, which I think is enough for closing the title question:
function Sum (n : in Natural) return Natural
is
(if n = 0 then 0 else n + Sum(n - 1))
with
Pre => (n in 0 .. 10), -- works with --prover=z3, not Default (CVC4)
-- Pre => (n in 0 .. 100), -- not working - "overflow check might fail, e.g. when n = 2"
Subprogram_Variant => (Decreases => n),
Post => ((n = 0 and then Sum'Result = 0)
or (n > 0 and then Sum'Result = n + Sum(n - 1)));
-- Contract_Cases => (n = 0 => Sum'Result = 0,
-- n > 0 => Sum'Result = n + Sum(n - 1)); -- warning: call to "Sum" within its postcondition will lead to infinite recursion
-- Contract_Cases => (n = 0 => Sum'Result = 0,
-- n > 0 => n + Sum(n - 1) = Sum'Result); -- works
-- Contract_Cases => (n = 0 => Sum'Result = 0,
-- n > 0 => Sum'Result = n * (n + 1) / 2); -- works and gives good overflow counterexamples for high n, but isn't really recursive
Command line invocation in GNAT Studio (Ctrl+Alt+F), with --counterexamples=on and --prover=z3 being my additions to it:
gnatprove -P%PP -j0 %X --output=oneline --ide-progress-bar --level=0 -u %fp --counterexamples=on --prover=z3
Takeaways:
Subprogram_Variant => (Decreases => n) is required to tell the prover n decreases for each recursive invocation, just like the Dafny version.
Works inconsistently for similar contracts, see commented Contract_Cases.
Default prover (CVC4) fails, using Z3 succeeds.
The counterexample given on failure makes no sense:
n = 2 is presented as a counterexample for range 0 .. 100, but not for 0 .. 10.
Possibly related to this mention in the SPARK user guide: However, note that since the counterexample is always generated only using CVC4 prover, it can just explain why this prover cannot prove the property.
Cleaning is required between changes of options such as --prover.

Why is Ada not trapping this specified range check?

I am going through the learn.adacore.com tutorial and have hit upon a problem that I am unsure of.
Specifically I understand that Ada is designed to trap attempts to overflow a variable with a specified range definition.
In the case below, the first attempt to do this causes a compiler 'range check failure' which is expected. However the following line doesn't trap it and I am not sure why:
with Ada.Text_IO; use Ada.Text_IO;
procedure Custom_Floating_Types is
type T_Norm is new float range -1.0 .. 1.0;
D : T_Norm := 1.0;
begin
Put_Line("The value of D =" & T_Norm'Image(D));
-- D := D + 1.0; -- This causes a range check failure at run time = completely expected.
Put_Line("The value of D =" & T_Norm'Image(D + 1.0)); -- This doesn't?
end Custom_Floating_Types;
You have a couple of pretty good answers, but I'm going to add another because it's clear that you expect the expression D + 1.0 to raise an exception, and the answers you have don't explain why it doesn't.
A type declaration like
type T_Norm is new float range -1.0 .. 1.0;
is roughly equivalent to
type T_Norm'Base is new Float;
subtype T_Norm is T_Norm'Base range -1.0 .. 1.0;
The type (called the "base type") isn't given a name, though it can often be referenced with the 'Base attribute. The name that is given is to a subtype, called the "first-named subtype".
This distinction is important and is often not given enough attention. As explained by egilhh, T_Norm'Image is defined in terms of T_Norm'Base. This is also true of the arithmetic operators. For example, "+" is defined as
function "+" (Left : in T_Norm'Base; Right : in T_Norm'Base) return T_Norm'Base;
2.0 is clearly in the range of T_Norm'Base, so evaluating D + 1.0 doesn't violate any constraints, nor does passing it to T_Norm'Image. However, when you try to assign the resulting value to D, which has subtype T_Norm, a check is performed that the value is in the range of the subtype, and an exception is raised because the check fails.
This distinction is used in other places to make the language work reasonably. For example, a constrained array type
type A is array (1 .. 10) of C;
is roughly equivalent to
type A'Base is array (Integer range <>) of C;
subtype A is A'Base (1 .. 10);
If you do
V : A;
... V (2 .. 4)
you might expect problems because the slice doesn't have the bounds of A. But it works because the slice doesn't have subtype A but rather the anonymous subtype A'Base (2 .. 4).
The definition of 'Image says:
For every scalar subtype S:
S'Image denotes a function with the following specification:
function S'Image(Arg : S'Base)
return String
As you can see, it takes a parameter of the base (unconstrained) type
T_Norm'Image(D + 1.0) neither assigns nor reads an out-of-range value. It asks for the 'Image attribute (string representation) of (D + 1.0), which is the same as asking for (1.0 + 1.0).
I can see two ways the confusion might arise. First, the name "attribute" could be misinterpreted to suggest that 'Image involves something intrinsic to D. It doesn't. The 'Image attribute is just a function, so D is just part of the expression that defines the value of the parameter (in your example = 2.0).
Second, the 'Image attribute comes from Float. Thus, any Float can be its parameter.
You can create a function that accepts only parameters of T_Norm'Range and make that function an attribute of T_Norm. See Ada Reference Manual 4.10.
Because you never store an out-of-range value in D.

printing string and calling recursive function

I am currently learning sml but I have one question that I can not find an answer for. I have googled but still have not found anything.
This is my code:
fun diamond(n) =
if(n=1) then (
print("*")
) else (
print("*")
diamond(n-1)
)
diamond(5);
That does not work. I want the code to print as many *s as the number n, and I want to do that with recursion, but I don't understand how.
I get an error when I try to run that code. This is the error:
Standard ML of New Jersey v110.78 [built: Thu Aug 20 19:23:18 2015]
[opening a4_p2.sml]
a4_p2.sml:8.5-9.17 Error: operator is not a function [tycon mismatch]
  operator: unit
  in expression:
    (print "*") diamond
/usr/local/bin/sml: Fatal error -- Uncaught exception Error with 0 raised at
  ../compiler/TopLevel/interact/evalloop.sml:66.19-66.27
Thank you
You can do side effects in ML by using ';'
It will evaluate whatever is before the ';' and discard its result.
fun diamond(n) =
if(n=1)
then (print "*"; 1)
else (print "*"; diamond(n-1));
diamond(5);
The reason for the error is that ML is a strongly typed language: although you don't need to specify types explicitly, it will infer them at compile time. For this reason, every expression, including an if ... then ... else, needs to evaluate to a single unambiguous type.
If you were allowed to do the following:
if(n=1)
then 1
else print "*";
then the compiler would infer a different type for the then and the else branch respectively:
the then branch would make the function int -> int, whereas the else branch would make it int -> unit.
Such a dichotomy is not allowed under a strongly typed language.
As you need to evaluate to a single type, you can see that ML does not support executing a block of instructions the way we commonly see in other paradigms, which transposed naively to ML would render something like this:
....
if(n=1)
then (print "1"
print "2"
)
else (print "3"
diamond(n-1)
)
...
because what type would the then branch evaluate to? int -> unit? Then what about the other print statement? A statement has to return a single result (even if it is a compound one), so that would not make sense. What about int -> unit * unit? No problem with that, except that syntactically speaking, you failed to communicate a tuple to the compiler.
For this reason, the following WOULD work:
fun diamond(n) =
if(n=1)
then (print "a", 1) /* A tuple of the type unit * int */
else diamond(n-1);
diamond(5);
As in this case you have a function of type int -> unit * int.
So, in order to satisfy the requirement of strongly typed functional programming that everything evaluates to a single result type, we need to communicate to the compiler that certain statements are to be executed purely as instructions and are not to be incorporated into the typing of the function under consideration.
For this reason, you use ';' to tell the compiler to simply evaluate that statement and discard its result, so that it is not incorporated into the type evaluation of the function.
As far as your actual objective is concerned, following is a better way of writing the function, diamond as type int -> string:
fun diamond(n) =
if(n=1)
then "*"
else "*" ^ diamond(n-1);
print( diamond(5) );
The above way is more for debugging purposes.

Which languages support *recursive* function literals / anonymous functions?

It seems quite a few mainstream languages support function literals these days. They are also called anonymous functions, but I don't care if they have a name. The important thing is that a function literal is an expression which yields a function which hasn't already been defined elsewhere, so for example in C, &printf doesn't count.
EDIT to add: if you have a genuine function literal expression <exp>, you should be able to pass it to a function f(<exp>) or immediately apply it to an argument, ie. <exp>(5).
I'm curious which languages let you write function literals which are recursive. Wikipedia's "anonymous recursion" article doesn't give any programming examples.
Let's use the recursive factorial function as the example.
Here are the ones I know:
JavaScript / ECMAScript can do it with callee:
function(n){if (n<2) {return 1;} else {return n * arguments.callee(n-1);}}
it's easy in languages with letrec, e.g. Haskell (which calls it let):
let fac x = if x<2 then 1 else fac (x-1) * x in fac
and there are equivalents in Lisp and Scheme. Note that the binding of fac is local to the expression, so the whole expression is in fact an anonymous function.
Are there any others?
Most languages support it through use of the Y combinator. Here's an example in Python (from the cookbook):
# Define Y combinator...come on Guido, put it in functools!
Y = lambda g: (lambda f: g(lambda arg: f(f)(arg))) (lambda f: g(lambda arg: f(f)(arg)))
# Define anonymous recursive factorial function
fac = Y(lambda f: lambda n: (1 if n<2 else n*f(n-1)))
assert fac(7) == 5040
C#
Reading Wes Dyer's blog, you will see that @Jon Skeet's answer is not totally correct. I am no genius on languages, but there is a difference between a recursive anonymous function and one where "the fib function really just invokes the delegate that the local variable fib references", to quote the blog.
The actual C# answer would look something like this:
delegate Func<A, R> Recursive<A, R>(Recursive<A, R> r);
static Func<A, R> Y<A, R>(Func<Func<A, R>, Func<A, R>> f)
{
Recursive<A, R> rec = r => a => f(r(r))(a);
return rec(rec);
}
static void Main(string[] args)
{
Func<int,int> fib = Y<int,int>(f => n => n > 1 ? f(n - 1) + f(n - 2) : n);
Func<int, int> fact = Y<int, int>(f => n => n > 1 ? n * f(n - 1) : 1);
Console.WriteLine(fib(6)); // displays 8
Console.WriteLine(fact(6));
Console.ReadLine();
}
You can do it in Perl:
my $factorial = do {
my $fac;
$fac = sub {
my $n = shift;
if ($n < 2) { 1 } else { $n * $fac->($n-1) }
};
};
print $factorial->(4);
The do block isn't strictly necessary; I included it to emphasize that the result is a true anonymous function.
Well, apart from Common Lisp (labels) and Scheme (letrec) which you've already mentioned, JavaScript also allows you to name an anonymous function:
var foo = {"bar": function baz() {return baz() + 1;}};
which can be handier than using callee. (This is different from a function declaration at top level; the latter would cause the name to appear in the enclosing scope too, whereas in the former case the name appears only in the scope of the function itself.)
In Perl 6:
my $f = -> $n { if ($n <= 1) {1} else {$n * &?BLOCK($n - 1)} }
$f(42); # ==> 1405006117752879898543142606244511569936384000000000
F# has "let rec"
You've mixed up some terminology here, function literals don't have to be anonymous.
In javascript the difference depends on whether the function is written as a statement or an expression. There's some discussion about the distinction in the answers to this question.
Let's say you are passing your example to a function:
foo(function(n){if (n<2) {return 1;} else {return n * arguments.callee(n-1);}});
This could also be written:
foo(function fac(n){if (n<2) {return 1;} else {return n * fac(n-1);}});
In both cases it's a function literal. But note that in the second example the name is not added to the surrounding scope - which can be confusing. But this isn't widely used as some javascript implementations don't support this or have a buggy implementation. I've also read that it's slower.
Anonymous recursion is something different again, it's when a function recurses without having a reference to itself, the Y Combinator has already been mentioned. In most languages, it isn't necessary as better methods are available. Here's a link to a javascript implementation.
In C# you need to declare a variable to hold the delegate, and assign null to it to make sure it's definitely assigned, then you can call it from within a lambda expression which you assign to it:
Func<int, int> fac = null;
fac = n => n < 2 ? 1 : n * fac(n-1);
Console.WriteLine(fac(7));
I think I heard rumours that the C# team was considering changing the rules on definite assignment to make the separate declaration/initialization unnecessary, but I wouldn't swear to it.
One important question for each of these languages / runtime environments is whether they support tail calls. In C#, as far as I'm aware the MS compiler doesn't use the tail. IL opcode, but the JIT may optimise it anyway, in certain circumstances. Obviously this can very easily make the difference between a working program and stack overflow. (It would be nice to have more control over this and/or guarantees about when it will occur. Otherwise a program which works on one machine may fail on another in a hard-to-fathom manner.)
Edit: as FryHard pointed out, this is only pseudo-recursion. Simple enough to get the job done, but the Y-combinator is a purer approach. There's one other caveat with the code I posted above: if you change the value of fac, anything which tries to use the old value will start to fail, because the lambda expression has captured the fac variable itself. (Which it has to in order to work properly at all, of course...)
You can do this in Matlab using an anonymous function which uses the dbstack() introspection to get the function literal of itself and then evaluating it. (I admit this is cheating because dbstack should probably be considered extralinguistic, but it is available in all Matlabs.)
f = @(x) ~x || feval(str2func(getfield(dbstack, 'name')), x-1)
This is an anonymous function that counts down from x and then returns 1. It's not very useful because Matlab lacks the ?: operator and disallows if-blocks inside anonymous functions, so it's hard to construct the base case/recursive step form.
You can demonstrate that it is recursive by calling f(-1); it will count down to infinity and eventually throw a max recursion error.
>> f(-1)
??? Maximum recursion limit of 500 reached. Use set(0,'RecursionLimit',N)
to change the limit. Be aware that exceeding your available stack space can
crash MATLAB and/or your computer.
And you can invoke the anonymous function directly, without binding it to any variable, by passing it directly to feval.
>> feval(@(x) ~x || feval(str2func(getfield(dbstack, 'name')), x-1), -1)
??? Maximum recursion limit of 500 reached. Use set(0,'RecursionLimit',N)
to change the limit. Be aware that exceeding your available stack space can
crash MATLAB and/or your computer.
Error in ==> create@(x)~x||feval(str2func(getfield(dbstack,'name')),x-1)
To make something useful out of it, you can create a separate function which implements the recursive step logic, using "if" to protect the recursive case against evaluation.
function out = basecase_or_feval(cond, baseval, fcn, args, accumfcn)
%BASECASE_OR_FEVAL Return base case value, or evaluate next step
if cond
out = baseval;
else
out = feval(accumfcn, feval(fcn, args{:}));
end
Given that, here's factorial.
recursive_factorial = @(x) basecase_or_feval(x < 2,...
1,...
str2func(getfield(dbstack, 'name')),...
{x-1},...
@(z)x*z);
And you can call it without binding.
>> feval( @(x) basecase_or_feval(x < 2, 1, str2func(getfield(dbstack, 'name')), {x-1}, @(z)x*z), 5)
ans =
120
It also seems Mathematica lets you define recursive functions using #0 to denote the function itself, as:
(expression[#0]) &
e.g. a factorial:
fac = Piecewise[{{1, #1 == 0}, {#1 * #0[#1 - 1], True}}] &;
This is in keeping with the notation #i to refer to the ith parameter, and the shell-scripting convention that a script is its own 0th parameter.
I think this may not be exactly what you're looking for, but in Lisp 'labels' can be used to dynamically declare functions that can be called recursively.
(labels ((factorial (x)                 ;define name and params
           ;; body of the local function
           (if (= x 1)
               1
               (+ x (factorial (- x 1)))))) ;should not close out labels yet
  ;; call factorial inside the labels form
  (factorial 5))                        ;this returns 15 from labels
Delphi includes anonymous functions as of version 2009.
Example from http://blogs.codegear.com/davidi/2008/07/23/38915/
type
// method reference
TProc = reference to procedure(x: Integer);
procedure Call(const proc: TProc);
begin
proc(42);
end;
Use:
var
proc: TProc;
begin
// anonymous method
proc := procedure(a: Integer)
begin
Writeln(a);
end;
Call(proc);
readln
end.
Because I was curious, I actually tried to come up with a way to do this in MATLAB. It can be done, but it looks a little Rube-Goldberg-esque:
>> fact = @(val,branchFcns) val*branchFcns{(val <= 1)+1}(val-1,branchFcns);
>> returnOne = @(val,branchFcns) 1;
>> branchFcns = {fact returnOne};
>> fact(4,branchFcns)
ans =
24
>> fact(5,branchFcns)
ans =
120
Anonymous functions exist in C++0x with lambda, and they may be recursive, although I'm not sure about anonymously.
// an 'auto' lambda cannot refer to itself by name; std::function (from <functional>) works:
std::function<void()> kek = [&kek] { kek(); };
'Tseems you've got the idea of anonymous functions wrong, it's not just about runtime creation, it's also about scope. Consider this Scheme macro:
(define-syntax lambdarec
(syntax-rules ()
((lambdarec (tag . params) . body)
((lambda ()
(define (tag . params) . body)
tag)))))
Such that:
(lambdarec (f n) (if (<= n 0) 1 (* n (f (- n 1)))))
Evaluates to a true anonymous recursive factorial function that can for instance be used like:
(let ;no letrec used
((factorial (lambdarec (f n) (if (<= n 0) 1 (* n (f (- n 1)))))))
(factorial 4)) ; ===> 24
However, the true reason that makes it anonymous is that if I do:
((lambdarec (f n) (if (<= n 0) 1 (* n (f (- n 1))))) 4)
The function is afterwards cleared from memory and has no scope, thus after this:
(f 4)
Will either signal an error, or will be bound to whatever f was bound to before.
In Haskell, an ad hoc way to achieve same would be:
\n -> let fac x = if x<2 then 1 else fac (x-1) * x
in fac n
The difference again being that this function has no scope; if I don't use it, then (Haskell being lazy) the effect is the same as an empty line of code. It is truly a literal, as it has the same effect as the C code:
3;
A literal number. And even if I use it immediately afterwards it will go away. This is what literal functions are about, not creation at runtime per se.
Clojure can do it, as fn takes an optional name specifically for this purpose (the name doesn't escape the definition scope):
> (def fac (fn self [n] (if (< n 2) 1 (* n (self (dec n))))))
#'sandbox17083/fac
> (fac 5)
120
> self
java.lang.RuntimeException: Unable to resolve symbol: self in this context
If it happens to be tail recursion, then recur is a much more efficient method:
> (def fac (fn [n] (loop [count n result 1]
(if (zero? count)
result
(recur (dec count) (* result count))))))
