Julia - how does @inline work? When to use function vs. macro?

I have many small functions I would like to inline, for example to test flags for some condition:
const COND = UInt(1<<BITS_FOR_COND)
function is_cond(flags::UInt)
return flags & COND != 0
end
I could also make a macro:
macro IS_COND(flags::UInt)
return :(flags & COND != 0)
end
My motivation is many similar macro functions in the C code I am working with:
#define IS_COND(flags) ((flags) & COND)
I repeatedly timed the function, the macro, the function defined with @inline, and the expression by itself, but none is consistently faster than the others across many runs. The generated code for the function call in 1) and 3) is much longer than for the expression in 4), but I don't know how to compare 2) since @code_llvm etc. don't work on macros.
1) for j=1:10 @time for i::UInt=1:10000 is_cond(i); end end
2) for j=1:10 @time for i::UInt=1:10000 @IS_COND(i); end end
3) for j=1:10 @time for i::UInt=1:10000 is_cond_inlined(i); end end
4) for j=1:10 @time for i::UInt=1:10000 i & COND != 0; end end
Questions: What is the purpose of @inline? I see from the sparse documentation that it appends the symbol :inline to the expression :meta, but what does that do, exactly? Is there any reason to prefer a function or a macro for this kind of task?
My understanding is that a C macro function just substitutes the literal text of the macro at compile time, so the resulting code has no jumps and is therefore more efficient than a regular function call. (Safety is another issue, but let's assume the programmers know what they're doing.) A Julia macro has intermediate steps like parsing its arguments, so it's not obvious to me whether 2) should be faster than 1). Ignoring for the moment that in this case the difference in performance is negligible, what technique results in the most efficient code?

If the two syntaxes result in exactly the same generated code, should you prefer one over the other? YES. Functions are vastly superior to macros in situations like this.
Macros are powerful, but they're tricky. You have three errors in your @IS_COND definition (you don't want to put a type annotation on the argument, you need to interpolate flags into the returned expression, and you need to use esc to get the hygiene correct).
The function definition just works as you expect.
Perhaps more importantly, the function works just as others would expect. Macros can do anything, so that @ sigil is a good warning for "something beyond normal Julia syntax is occurring here." If it's behaving just like a function, though, might as well make it one.
Functions are first-class objects in Julia; you can pass them around and use them with higher order functions like map.
Julia is built on inlined functions; its performance depends on it! Small functions typically don't even need the @inline annotation, since the compiler inlines them on its own. You can use @inline to give the compiler an extra nudge that a bigger function is especially important to inline, but Julia is often good at figuring that out on its own (like here). A small sketch of this and the previous point follows this list.
Backtraces and debugging work better with inlined functions than with macros.
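To make the first-class-function and @inline points above concrete, here is a small sketch (is_cond_fast is just an illustrative name, and COND is given the same value used in the setup below):
const COND = UInt(1) << 7
is_cond(flags::UInt) = flags & COND != 0
map(is_cond, UInt[0x7f, 0x80, 0xff])   # => [false, true, true]: the function is just a value you pass around
@inline is_cond_fast(flags::UInt) = flags & COND != 0   # explicit nudge; usually unnecessary for code this small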
So, now, do they result in the same generated code? One of the most powerful things about Julia is your ability to ask it for its "intermediate work."
First, some set up:
julia> const COND = UInt(1<<7)
is_cond(flags) = return flags & COND != 0
macro IS_COND(flags)
return :($(esc(flags)) & COND != 0) # careful!
end
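As a quick check of the esc/hygiene point, you can also expand the macro by hand; on the 0.5-era Julia shown here that's macroexpand (newer versions have @macroexpand). The escaped argument is left alone, while COND resolves to the macro's home module:
julia> macroexpand(:(@IS_COND(x)))   # an expression equivalent to :(x & Main.COND != 0), matching the lowered code below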
Now we can start looking at what happens when you use either is_cond or #IS_COND. In actual code, you'll be using these definitions within other functions, so let's create some test functions:
julia> test_func(x) = is_cond(x)
test_macro(x) = @IS_COND(x)
Now we can start moving down the chain to see if there's a difference. The first step is "lowering" — this simply converts the syntax to a limited subset to make life easier for the compiler. You can see that at this stage, the macro gets expanded but the function call still remains:
julia> @code_lowered test_func(UInt(1))
LambdaInfo template for test_func(x) at REPL[2]:1
:(begin
nothing
return (Main.is_cond)(x)
end)
julia> @code_lowered test_macro(UInt(1))
LambdaInfo template for test_macro(x) at REPL[2]:2
:(begin
nothing
return x & Main.COND != 0
end)
The next step, though, is inference and optimization. It's here that function inlining takes effect:
julia> @code_typed test_func(UInt(1))
LambdaInfo for test_func(::UInt64)
:(begin
return (Base.box)(Base.Bool,(Base.not_int)((Base.box)(Base.Bool,(Base.and_int)((Base.sle_int)(0,0)::Bool,((Base.box)(UInt64,(Base.and_int)(x,Main.COND)) === (Base.box)(UInt64,0))::Bool))))
end::Bool)
julia> @code_typed test_macro(UInt(1))
LambdaInfo for test_macro(::UInt64)
:(begin
return (Base.box)(Base.Bool,(Base.not_int)((Base.box)(Base.Bool,(Base.and_int)((Base.sle_int)(0,0)::Bool,((Base.box)(UInt64,(Base.and_int)(x,Main.COND)) === (Base.box)(UInt64,0))::Bool))))
end::Bool)
Look at that! This step in the internal representation is a little messier, but you can see that the function got inlined (even without @inline!) and now the code looks exactly identical between the two.
We can go farther and ask for the LLVM… and indeed the two are exactly identical:
julia> @code_llvm test_func(UInt(1)) | julia> @code_llvm test_macro(UInt(1))
|
define i8 @julia_test_func_70754(i64) #0 { | define i8 @julia_test_macro_70752(i64) #0 {
top: | top:
%1 = lshr i64 %0, 7 | %1 = lshr i64 %0, 7
%2 = xor i64 %1, 1 | %2 = xor i64 %1, 1
%3 = trunc i64 %2 to i8 | %3 = trunc i64 %2 to i8
%4 = and i8 %3, 1 | %4 = and i8 %3, 1
%5 = xor i8 %4, 1 | %5 = xor i8 %4, 1
ret i8 %5 | ret i8 %5
} | }
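If you want to go one step further, @code_native prints the final machine code; since the LLVM IR above is already identical, the native code comes out the same for both versions as well, apart from the generated symbol names (output omitted):
julia> @code_native test_func(UInt(1))
julia> @code_native test_macro(UInt(1))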

Related

How exactly is "interacting with Core.Compiler" undefined in a generated function?

The docs for generated functions at some point say the following:
Some operations that should not be attempted include:
...
Interacting with the contents or methods of Core.Compiler in any way.
What exactly is meant by "interacting" in this context? Is just using things from Core.Compiler from within a generated function undefined behaviour?
My use case is to detect builtin functions from within an IRTools dynamo (which constructs generated functions), but you can refer to the following dummy code (inspired by Cassette.canrecurse), which contains the actual "interactions" I want to perform:
julia> @generated function foo(f, args...)
mod = Core.Compiler.typename(f).module
is_builtin = ((f <: Core.Builtin) && !(mod === Core.Compiler))
if is_builtin
quote
println("$f is builtin")
f(args...)
end
else
:(f(args...))
end
end
foo (generic function with 1 method)
julia> foo(+, 1, 2)
3
julia> foo(Core.tuple, 1, 2)
tuple is builtin
(1, 2)
Which seems to work without problems.
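A variant that avoids Core.Compiler entirely also seems to work for this particular check (a sketch; parentmodule(f) is assumed to return the same module as Core.Compiler.typename(f).module, and foo2 is just an illustrative name):
@generated function foo2(f, args...)
    # parentmodule(T) reads T.name.module without going through Core.Compiler
    is_builtin = f <: Core.Builtin && parentmodule(f) !== Core.Compiler
    if is_builtin
        quote
            println("$f is builtin")
            f(args...)
        end
    else
        :(f(args...))
    end
end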

How do I pass a variable to a macro and evaluate it before macro execution?

If I have a macro
macro doarray(arr)
if in(:head, fieldnames(typeof(arr))) && arr.head == :vect
println("A Vector")
else
throw(ArgumentError("$(arr) should be a vector"))
end
end
it works if I write this
@doarray([x])
or
@doarray([:x])
but the following code rightly does not work, raising the ArgumentError (i.e. ArgumentError: alist should be a vector).
alist = [:x]
@doarray(alist)
How can I make the above act similarly to @doarray([x])?
Motivation:
I have a recursive macro (say mymacro) which takes a vector, operates on the first value, and then recursively calls mymacro with the rest of the vector (say rest_vector). I can create rest_vector and print its value correctly (for debugging), but I don't know how to evaluate rest_vector when I feed it to mymacro again.
EDIT 1:
I'm trying to implement logic programming in Julia, namely MiniKanren. In the Clojure implementation that I am basing this on, the code is as follows:
(defmacro fresh
[var-vec & clauses]
(if (empty? var-vec)
`(lconj+ ~@clauses)
`(call-fresh (fn [~(first var-vec)]
(fresh [~@(rest var-vec)]
~@clauses)))))
My failing Julia code based on that is below. I apologize if it does not make sense as I am trying to understand macros by implementing it.
macro fresh(varvec, clauses...)
if isempty(varvec.args)
:(lconjplus($(esc(clauses))))
else
varvecrest = varvec.args[2:end]
return quote
fn = $(esc(varvec.args[1])) -> @fresh($(varvecvest), $(esc(clauses)))
callfresh(fn)
end
end
end
The error I get when I run the code @fresh([x, y], ===(x, 42)) (you can disregard ===(x, 42) for this discussion) is:
ERROR: LoadError: LoadError: UndefVarError: varvecvest not defined
The problem line is fn = $(esc(varvec.args[1])) -> @fresh($(varvecvest), $(esc(clauses)))
If I understand your problem correctly, it is better to call a function (not a macro) inside the macro and have it operate on the AST passed to the macro. Here is a simple example of how you could do it:
function recarray(arr)
println("head: ", popfirst!(arr.args))
isempty(arr.args) || recarray(arr)
end
macro doarray(arr)
if in(:head, fieldnames(typeof(arr))) && arr.head == :vect
println("A Vector")
recarray(arr)
else
throw(ArgumentError("$(arr) should be a vector"))
end
end
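For instance, with the definitions above, all of the printing happens at macro-expansion time, so the names inside the vector never need to exist; the call should produce something like:
julia> @doarray([a, b, c])
A Vector
head: a
head: b
head: c
true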
Of course in this example we do not do anything useful. If you specified what exactly you want to achieve then I might suggest something more specific.

Infinite loop with counter in elixir

I'm studying functional programming and I want to implement something like this.
while(true) do
if(somethingHappensHere) {
break
}
counter++
end
return counter
How can I do this in a functional way using Elixir?
Thanks for this.
While in most functional programming languages one would use recursion for this task, Elixir in particular provides a way to do it without an explicit recursive call: Enum.reduce_while/3:
Enum.reduce_while(1..100, 0, fn i, acc ->
if condition, do: {:halt, acc}, else: {:cont, acc + i}
end)
For lazy evaluation one would use Stream.reduce_while/3.
To make it infinite, one might use one of infinite generators, provided by Stream module, like Stream.iterate/2:
Stream.iterate(0, &(&1+1)) |> Enum.reduce_while(0, fn i, acc ->
if i > 6, do: {:halt, acc}, else: {:cont, acc + 1}
end)
#⇒ 7
For the sake of recursion, this is how the recursive solution might be implemented in Elixir:
defmodule M do
def checker, do: & &1 <= 0
def factorial(v, acc \\ 1) do
if checker().(v), do: acc, else: factorial(v - 1, v * acc)
end
end
M.factorial 6
#⇒ 720
Not sure about elixir specifically, but you can achieve this using recursion:
function myFunction(int counter)
{
if (condition) {
return counter
}
return myFunction(counter + 1)
}
This essentially sets up a function that can infinitely recurse (call itself), each time passing in the next counter value.
By having the recursive call as the last thing the function does, this is known as tail-call recursion which elixir supports (as per: Does Elixir infinite recursion ever overflow the stack?)
This can then be used as such:
int counterValue = myFunction(0)
With the function only returning once the condition is true.
You could also make this more generic by having the function take another function that returns true or false (i.e. performs the conditional check).
As I said, unfortunately I'm not aware of the syntax of elixir, but I'm sure you'll be able to bridge that gap.
An example of something in Elixir syntax:
defmodule SOQuestion do
def test(counter) do
if (something_happens_here?()), do: counter, else: test(counter+1)
end
def something_happens_here?() do
true
end
end
And it would be invoked like this:
SOQuestion.test(0)
A couple of notes on this:
1.) It's a code fragment. Obviously it's tough to be very complete given the broad nature of your question.
2.) something_happens_here? being a predicate it would normally be named ending with a question mark.
3.) If something_happens_here? were defined in a different module, then the call would be if (Module.something_happens_here?())
4.) I've obviously coded something_happens_here? to simply return true unconditionally. In real code, of course, you'd want to pass some argument to something_happens_here? and act on that to determine which boolean to return.
Given all that, I totally agree with @mudasowba: this sort of thing is usually better handled with one of the higher-order functions built into the language. It's less error prone and often much easier for others to read too.
As mentioned, you could use a number of built-in functions like Enum.reduce_while/3. However, sometimes it is just as easy (or fun) to use simple recursion.
Using recursion:
I will make some generic examples and use bar(foo) as an example of your somethingHappensHere condition.
1) If bar(foo) is something allowed in a guard clause:
defmodule Counter do
def count(foo) do
count(foo, 0)
end
defp count(foo, count) when bar(foo), do: count
defp count(foo, count), do: count(foo, count + 1)
end
2) If bar(foo) is a function that returns a boolean:
defmodule Counter do
def count(foo) do
count(foo, 0)
end
defp count(foo, count) do
if bar(foo) do
count
else
count(foo, count + 1)
end
end
end
3) If bar(foo) returns something other than a boolean, that you can pattern-match on, like:
defmodule Counter do
def count(foo) do
count(foo, 0)
end
defp count(foo, count) do
case bar(foo) do
{:ok, baz} -> count
{:error, _} -> count(foo, count + 1)
end
end
end
Call the module and function:
Counter.count(foo)

How can I shorten a type in the body of a function

Best explained by an example:
I define a type
type myType
α::Float64
β::Float64
end
z = myType( 1., 2. )
Then suppose that I want to pass this type as an argument to a function:
myFunc( x::Vector{Float64}, m::myType ) =
x[1].^m.α+x[2].^m.β
Is there a way to pass myType so that I can actually use it in the body of the function in a "cleaner" fashion as follows:
x[1].^α+x[2].^β
Thanks for any answer.
One way is to use dispatch to a more general function:
myFunc( x::Vector{Float64}, α::Float64, β::Float64) = x[1].^α+x[2].^β
myFunc( x::Vector{Float64}, m::myType ) = myFunc(x, m.α, m.β)
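A quick sanity check with the z from the question (assuming the two definitions above):
julia> myFunc([2.0, 3.0], z)   # 2.0^1.0 + 3.0^2.0
11.0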
Or if your functions are longer, you may want to use Parameters.jl's @unpack:
function myFunc( x::Vector{Float64}, m::myType )
@unpack m: α,β # now those are defined
x[1].^α+x[2].^β
end
The overhead of unpacking is small because you're not copying, it's just doing a bunch of α=m.α which is just making an α which points to m.α. For longer equations, this can be a much nicer form if you have many fields and use them in long calculations (for reference, I use this a lot in DifferentialEquations.jl).
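Conceptually (a sketch of the idea, not the literal macro expansion), the unpacked version is doing nothing more than:
function myFunc( x::Vector{Float64}, m::myType )
    α = m.α   # plain local bindings; no data is copied
    β = m.β
    x[1].^α + x[2].^β
end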
Edit
There's another way as noted in the comments. Let me show this. You can define your type (with optional kwargs) using the @with_kw macro from Parameters.jl. For example:
using Parameters
@with_kw type myType
α::Float64 = 1.0 # Give a default value
β::Float64 = 2.0
end
z = myType() # Generate with the default values
Then you can use the @unpack_myType macro which is automatically made by the @with_kw macro:
function myFunc( x::Vector{Float64}, m::myType )
@unpack_myType m
x[1].^α+x[2].^β
end
Again, this only has the overhead of making the references α and β without copying, so it's pretty lightweight.
You could add this to the body of your function:
(α::Float64, β::Float64) = (m.α, m.β)
UPDATE: My original answer was wrong for a subtle reason, but I thought it was a very interesting bit of information so rather than delete it altogether, I'm leaving it with an explanation on why it's wrong. Many thanks to Fengyang for pointing out the global scope of eval! (as well as the use of $ in an Expr context!)
The original answer suggested that:
[eval( parse( string( i,"=",getfield( m,i)))) for i in fieldnames( m)]
would return a list comprehension with assignment side-effects, since it conceptually results in something like [α=1., β=2., etc]. The assumption was that these assignments would be within local scope. However, as pointed out, eval always evaluates at global scope, so the above one-liner does not do what it's meant to. Example:
julia> type MyType
α::Float64
β::Float64
end
julia> function myFunc!(x::Vector{Float64}, m::MyType)
α=5.; β=6.;
[eval( parse( string( i,"=",getfield( m,i)))) for i in fieldnames( m)]
x[1] = α; x[2] = β; return x
end;
julia> myFunc!([0.,0.],MyType(1., 2.))
2-element Array{Float64,1}:
5.0
6.0
julia> whos()
MyType 124 bytes DataType
myFunc 0 bytes #myFunc
α 8 bytes Float64
β 8 bytes Float64
I.e. as you can see, the intention was for the local variables α and β to be overwritten, but they weren't; eval placed the α and β variables at global scope instead. As a Matlab programmer I naively assumed that eval() was conceptually equivalent to Matlab's, without actually checking. It turns out it's more similar to the evalin('base',...) command.
Thanks again to Fengyang for giving another example of why the phrase "parse and eval" seems to have about the same effect on Julia programmers as the word "it" on the knights who until recently said "Ni". :)

Which languages support *recursive* function literals / anonymous functions?

It seems quite a few mainstream languages support function literals these days. They are also called anonymous functions, but I don't care if they have a name. The important thing is that a function literal is an expression which yields a function which hasn't already been defined elsewhere, so for example in C, &printf doesn't count.
EDIT to add: if you have a genuine function literal expression <exp>, you should be able to pass it to a function f(<exp>) or immediately apply it to an argument, i.e. <exp>(5).
I'm curious which languages let you write function literals which are recursive. Wikipedia's "anonymous recursion" article doesn't give any programming examples.
Let's use the recursive factorial function as the example.
Here are the ones I know:
JavaScript / ECMAScript can do it with callee:
function(n){if (n<2) {return 1;} else {return n * arguments.callee(n-1);}}
it's easy in languages with letrec, eg Haskell (which calls it let):
let fac x = if x<2 then 1 else fac (x-1) * x in fac
and there are equivalents in Lisp and Scheme. Note that the binding of fac is local to the expression, so the whole expression is in fact an anonymous function.
Are there any others?
Most languages support it through use of the Y combinator. Here's an example in Python (from the cookbook):
# Define Y combinator...come on Guido, put it in functools!
Y = lambda g: (lambda f: g(lambda arg: f(f)(arg))) (lambda f: g(lambda arg: f(f)(arg)))
# Define anonymous recursive factorial function
fac = Y(lambda f: lambda n: (1 if n<2 else n*f(n-1)))
assert fac(7) == 5040
C#
Reading Wes Dyer's blog, you will see that @Jon Skeet's answer is not totally correct. I am no genius on languages, but there is a difference between a recursive anonymous function and the "fib function really just invokes the delegate that the local variable fib references", to quote from the blog.
The actual C# answer would look something like this:
delegate Func<A, R> Recursive<A, R>(Recursive<A, R> r);
static Func<A, R> Y<A, R>(Func<Func<A, R>, Func<A, R>> f)
{
Recursive<A, R> rec = r => a => f(r(r))(a);
return rec(rec);
}
static void Main(string[] args)
{
Func<int,int> fib = Y<int,int>(f => n => n > 1 ? f(n - 1) + f(n - 2) : n);
Func<int, int> fact = Y<int, int>(f => n => n > 1 ? n * f(n - 1) : 1);
Console.WriteLine(fib(6)); // displays 8
Console.WriteLine(fact(6));
Console.ReadLine();
}
You can do it in Perl:
my $factorial = do {
my $fac;
$fac = sub {
my $n = shift;
if ($n < 2) { 1 } else { $n * $fac->($n-1) }
};
};
print $factorial->(4);
The do block isn't strictly necessary; I included it to emphasize that the result is a true anonymous function.
Well, apart from Common Lisp (labels) and Scheme (letrec) which you've already mentioned, JavaScript also allows you to name an anonymous function:
var foo = {"bar": function baz() {return baz() + 1;}};
which can be handier than using callee. (This is different from function in top-level; the latter would cause the name to appear in global scope too, whereas in the former case, the name appears only in the scope of the function itself.)
In Perl 6:
my $f = -> $n { if ($n <= 1) {1} else {$n * &?BLOCK($n - 1)} }
$f(42); # ==> 1405006117752879898543142606244511569936384000000000
F# has "let rec"
You've mixed up some terminology here: function literals don't have to be anonymous.
In javascript the difference depends on whether the function is written as a statement or an expression. There's some discussion about the distinction in the answers to this question.
Lets say you are passing your example to a function:
foo(function(n){if (n<2) {return 1;} else {return n * arguments.callee(n-1);}});
This could also be written:
foo(function fac(n){if (n<2) {return 1;} else {return n * fac(n-1);}});
In both cases it's a function literal. But note that in the second example the name is not added to the surrounding scope - which can be confusing. But this isn't widely used as some javascript implementations don't support this or have a buggy implementation. I've also read that it's slower.
Anonymous recursion is something different again, it's when a function recurses without having a reference to itself, the Y Combinator has already been mentioned. In most languages, it isn't necessary as better methods are available. Here's a link to a javascript implementation.
In C# you need to declare a variable to hold the delegate, and assign null to it to make sure it's definitely assigned, then you can call it from within a lambda expression which you assign to it:
Func<int, int> fac = null;
fac = n => n < 2 ? 1 : n * fac(n-1);
Console.WriteLine(fac(7));
I think I heard rumours that the C# team was considering changing the rules on definite assignment to make the separate declaration/initialization unnecessary, but I wouldn't swear to it.
One important question for each of these languages / runtime environments is whether they support tail calls. In C#, as far as I'm aware the MS compiler doesn't use the tail. IL opcode, but the JIT may optimise it anyway, in certain circumstances. Obviously this can very easily make the difference between a working program and stack overflow. (It would be nice to have more control over this and/or guarantees about when it will occur. Otherwise a program which works on one machine may fail on another in a hard-to-fathom manner.)
Edit: as FryHard pointed out, this is only pseudo-recursion. Simple enough to get the job done, but the Y-combinator is a purer approach. There's one other caveat with the code I posted above: if you change the value of fac, anything which tries to use the old value will start to fail, because the lambda expression has captured the fac variable itself. (Which it has to in order to work properly at all, of course...)
You can do this in Matlab using an anonymous function which uses the dbstack() introspection to get the function literal of itself and then evaluating it. (I admit this is cheating because dbstack should probably be considered extralinguistic, but it is available in all Matlabs.)
f = @(x) ~x || feval(str2func(getfield(dbstack, 'name')), x-1)
This is an anonymous function that counts down from x and then returns 1. It's not very useful because Matlab lacks the ?: operator and disallows if-blocks inside anonymous functions, so it's hard to construct the base case/recursive step form.
You can demonstrate that it is recursive by calling f(-1); it will count down to infinity and eventually throw a max recursion error.
>> f(-1)
??? Maximum recursion limit of 500 reached. Use set(0,'RecursionLimit',N)
to change the limit. Be aware that exceeding your available stack space can
crash MATLAB and/or your computer.
And you can invoke the anonymous function directly, without binding it to any variable, by passing it directly to feval.
>> feval(@(x) ~x || feval(str2func(getfield(dbstack, 'name')), x-1), -1)
??? Maximum recursion limit of 500 reached. Use set(0,'RecursionLimit',N)
to change the limit. Be aware that exceeding your available stack space can
crash MATLAB and/or your computer.
Error in ==> create@(x)~x||feval(str2func(getfield(dbstack,'name')),x-1)
To make something useful out of it, you can create a separate function which implements the recursive step logic, using "if" to protect the recursive case against evaluation.
function out = basecase_or_feval(cond, baseval, fcn, args, accumfcn)
%BASECASE_OR_FEVAL Return base case value, or evaluate next step
if cond
out = baseval;
else
out = feval(accumfcn, feval(fcn, args{:}));
end
Given that, here's factorial.
recursive_factorial = @(x) basecase_or_feval(x < 2,...
1,...
str2func(getfield(dbstack, 'name')),...
{x-1},...
@(z)x*z);
And you can call it without binding.
>> feval( @(x) basecase_or_feval(x < 2, 1, str2func(getfield(dbstack, 'name')), {x-1}, @(z)x*z), 5)
ans =
120
It also seems Mathematica lets you define recursive functions using #0 to denote the function itself, as:
(expression[#0]) &
e.g. a factorial:
fac = Piecewise[{{1, #1 == 0}, {#1 * #0[#1 - 1], True}}] &;
This is in keeping with the notation #i to refer to the ith parameter, and the shell-scripting convention that a script is its own 0th parameter.
I think this may not be exactly what you're looking for, but in Lisp 'labels' can be used to dynamically declare functions that can be called recursively.
(labels ((factorial (x)            ;define name and params
           ;; body of factorial
           (if (= x 1)
               1
               (* x (factorial (- x 1)))))) ;should not close out labels
  ;; call factorial inside the labels form
  (factorial 5)) ;this returns 120 from labels
Delphi includes the anonymous functions with version 2009.
Example from http://blogs.codegear.com/davidi/2008/07/23/38915/
type
// method reference
TProc = reference to procedure(x: Integer);
procedure Call(const proc: TProc);
begin
proc(42);
end;
Use:
var
proc: TProc;
begin
// anonymous method
proc := procedure(a: Integer)
begin
Writeln(a);
end;
Call(proc);
readln
end.
Because I was curious, I actually tried to come up with a way to do this in MATLAB. It can be done, but it looks a little Rube-Goldberg-esque:
>> fact = @(val,branchFcns) val*branchFcns{(val <= 1)+1}(val-1,branchFcns);
>> returnOne = @(val,branchFcns) 1;
>> branchFcns = {fact returnOne};
>> fact(4,branchFcns)
ans =
24
>> fact(5,branchFcns)
ans =
120
Anonymous functions exist in C++0x with lambda, and they can be made recursive, although not truly anonymously: a lambda cannot refer to itself directly, so you have to route the call through something it captures, such as a std::function.
std::function<void()> kek = [&kek]() { kek(); };
'Tseems you've got the idea of anonymous functions wrong, it's not just about runtime creation, it's also about scope. Consider this Scheme macro:
(define-syntax lambdarec
(syntax-rules ()
((lambdarec (tag . params) . body)
((lambda ()
(define (tag . params) . body)
tag)))))
Such that:
(lambdarec (f n) (if (<= n 0) 1 (* n (f (- n 1)))))
Evaluates to a true anonymous recursive factorial function that can for instance be used like:
(let ;no letrec used
((factorial (lambdarec (f n) (if (<= n 0) 1 (* n (f (- n 1)))))))
(factorial 4)) ; ===> 24
However, the true reason that makes it anonymous is that if I do:
((lambdarec (f n) (if (<= n 0) 1 (* n (f (- n 1))))) 4)
The function is afterwards cleared from memory and has no scope, thus after this:
(f 4)
Will either signal an error, or will be bound to whatever f was bound to before.
In Haskell, an ad hoc way to achieve same would be:
\n -> let fac x = if x<2 then 1 else fac (x-1) * x
in fac n
The difference, again, is that this function has no scope: if I don't use it, then with Haskell being lazy the effect is the same as an empty line of code. It is truly a literal, as it has the same effect as the C code:
3;
A literal number. And even if I use it immediately afterwards it will go away. This is what literal functions are about, not creation at runtime per se.
Clojure can do it, as fn takes an optional name specifically for this purpose (the name doesn't escape the definition scope):
> (def fac (fn self [n] (if (< n 2) 1 (* n (self (dec n))))))
#'sandbox17083/fac
> (fac 5)
120
> self
java.lang.RuntimeException: Unable to resolve symbol: self in this context
If it happens to be tail recursion, then recur is a much more efficient method:
> (def fac (fn [n] (loop [count n result 1]
(if (zero? count)
result
(recur (dec count) (* result count))))))
