Interview question: f(f(x)) == 1/x - math

Design a function f such that:
f(f(x)) == 1/x
Where x is a 32-bit float
Or how about
Given a function f, find a function g
such that
f(x) == g(g(x))
See Also
Interview question: f(f(n)) == -n

For the first part: this one is more trivial than f(f(x)) = -x, IMO:
float f(float x)
{
    return x >= 0 ? -1.0/x : -x;
}
The second part is an interesting question and an obvious generalization of the original question that this question was based on. There are two basic approaches:
a numerical method, such that x ≠ f(x) ≠ f(f(x)), which I believe was more in the spirit of the original question, but I don't think is possible in the general case
a method that involves g(g(x)) invoking f exactly once

Well, here's the C quick hack:
extern double f(double x);

double g(double x)
{
    static int parity = 0;
    parity ^= 1;
    return (parity ? x : f(x));
}
However, this breaks down if you do:
a = g(4.0); // => a = 4.0, parity = 1
b = g(2.0); // => b = f(2.0), parity = 0
c = g(a); // => c = 4.0, parity = 1
d = g(b); // => d = f(f(2.0)), parity = 0
In general, if f is a bijection f : D → D, what you need is a function σ that partitions the domain D into A and B such that:
1. D = A ∪ B (the partition is total)
2. ∅ = A ∩ B (the partition is disjoint)
3. σ(a) ∈ B and f(a) ∈ A, ∀ a ∈ A
4. σ(b) ∈ A and f(b) ∈ B, ∀ b ∈ B
5. σ has an inverse σ⁻¹ s.t. σ(σ⁻¹(d)) = σ⁻¹(σ(d)) = d, ∀ d ∈ D
6. σ(f(d)) = f(σ(d)), ∀ d ∈ D
Then, you can define g thusly:
g(a) = σ(f(a)) ∀ a ∈ A
g(b) = σ⁻¹(b) ∀ b ∈ B
This works because:
∀ a ∈ A, g(g(a)) = g(σ(f(a))). By (3), f(a) ∈ A, so σ(f(a)) ∈ B, so g(σ(f(a))) = σ⁻¹(σ(f(a))) = f(a).
∀ b ∈ B, g(g(b)) = g(σ⁻¹(b)). By (4), σ⁻¹(b) ∈ A, so g(σ⁻¹(b)) = σ(f(σ⁻¹(b))) = f(σ(σ⁻¹(b))) = f(b).
You can see from Miles's answer that, if we ignore 0, the operation σ(x) = -x works for f(x) = 1/x. You can check conditions 1-6 yourself (for D = the nonzero reals), with A being the positive numbers and B the negative numbers. With the double-precision standard there is a +0, a -0, a +inf, and a -inf, and these can be used to make the domain total (covering all double-precision numbers, not just the nonzero ones).
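A quick sanity check of this construction, sketched in Python with σ(x) = -x (which is its own inverse), A the positive reals, and B the negative reals:
def f(x):
    return 1.0 / x

def sigma(x):
    # the partition-swapping map; here sigma is its own inverse
    return -x

def g(x):
    # g = sigma . f on A (positives), sigma-inverse on B (negatives)
    return sigma(f(x)) if x > 0 else sigma(x)

for x in (0.5, 2.0, -3.0, 4.0):
    assert abs(g(g(x)) - 1.0 / x) < 1e-12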
The same method can be applied to the f(f(n)) == -n problem: the accepted solution there partitions the space by the remainder mod 2, using σ(x) = (x - 1), and handles the zero case specially.

I like the javascript/lambda suggestion from the earlier thread:
function f(x)
{
    if (typeof x == "function")
        return x();
    else
        return function () { return 1/x; };
}

The other solutions hint at needing extra state. Here's a more mathematical justification of that:
let f(x) = 1/(x^i) = x^(-i)
(where ^ denotes exponentiation, and i is the imaginary constant sqrt(-1))
f(f(x)) = (x^(-i))^(-i) = x^((-i)·(-i)) = x^(-1) = 1/x
So a solution exists for complex numbers. I don't know if there is a general solution sticking strictly to Real numbers.
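A numeric check of the complex-exponent identity, sketched in Python (complex ** uses the principal branch, so this only holds for x in roughly [e^-π, e^π]):
def f(z):
    # x^(-i), via the principal branch of the complex power
    return z ** -1j

print(f(f(2.0)))  # approximately (0.5+0j), i.e. 1/x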

If f(x) == g(g(x)), then g is known as the functional square root of f. I don't think there's a closed form in general, even if you allow x to be complex (you may want to go to mathoverflow to discuss :) ).

Again, it's specified as a 32-bit number. Make the return have more bits, use them to carry your state information between calls.
Const
  Flag = $100000000;

Function F(X : 64bit) : 64bit;
Begin
  If (64BitInt(X) And Flag) > 0 then
    Result := g(32bit(X))
  Else
    Result := 64BitInt(X) Or Flag;
End;
for any function g and any 32-bit datatype 32bit.
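The same state-in-the-wider-return-value trick, sketched in Python with integer masks (the example g is just a placeholder):
FLAG = 1 << 32  # a state bit just above the 32-bit payload

def g(v):
    # stands in for "any function g" on 32-bit values
    return (v + 1) & 0xFFFFFFFF

def F(x):
    if x & FLAG:
        # second call: the flag is set, so strip it and apply g
        return g(x & 0xFFFFFFFF)
    # first call: record the state in the wider return value
    return x | FLAG

assert F(F(7)) == g(7)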

There is another way to approach this, using the concept of fractional linear transformations: functions that send x -> (ax+b)/(cx+d), where a, b, c, d are real numbers.
For example, some algebra shows that if f is defined by f(x) = (ax+1)/(-x+d), where a^2 = d^2 = 1 and a+d <> 0, then f(f(x)) = -1/x for all real x. Note the sign: no real fractional linear transformation satisfies f(f(x)) = 1/x exactly, since its 2x2 coefficient matrix would have to square to a multiple of [[0,1],[1,0]], which forces b = c and a^2 = -b^2, impossible over the reals. Choosing a = 1, d = 1 gives, in C++:
float f(float x)
{
    return (x+1)/(-x+1);
}
The computation is f(f(x)) = f((x+1)/(-x+1)) = ((x+1)/(-x+1)+1)/(-(x+1)/(-x+1)+1)
= (2/(1-x))/(-2x/(1-x)) = -1/x on cancelling (1-x), so two applications give the negated reciprocal, and four applications give back x.
This doesn't work for x=1 or x=0 unless we allow an "infinite" value to be defined that satisfies 1/inf = 0, 1/0 = inf.
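A quick numeric check of the corrected sign (Python):
def f(x):
    return (x + 1) / (-x + 1)

print(f(f(2.0)))        # -0.5, i.e. -1/x
print(f(f(f(f(2.0)))))  # 2.0: four applications return x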

A C++ solution for g(g(x)) == f(x):
struct X{
    double val;
};

X g(double x){
    X ret = {x};
    return ret;
}

double g(X x){
    return f(x.val);
}
Here is a slightly shorter version (I like this one better :-) ):
struct X{
    X(double){}
    bool operator==(double) const{
        return true;
    }
};

X g(X x){
    return X(0);
}
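The same idea can be sketched without overloading in a dynamically typed language; here is a Python version (the wrapper class and the stand-in f are illustrative, not part of the original answer):
def f(x):
    return 1 / x  # stand-in for whatever f you are given

class Wrapped:
    # marks a value that has been through g exactly once
    def __init__(self, val):
        self.val = val

def g(x):
    if isinstance(x, Wrapped):
        return f(x.val)  # second call: unwrap and apply f once
    return Wrapped(x)    # first call: just wrap

assert g(g(4.0)) == f(4.0)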

Based on this answer, a solution to the generalized version (as a Perl one-liner):
sub g { $_[0] > 0 ? -f($_[0]) : -$_[0] }
Should always flip the variable's sign (a.k.a. state) twice, and should always call f() only once. For those languages not fortunate enough to have Perl's implicit returns, just pop a return at the start of the block and you're good.
This solution works as long as f() does not change the variable's sign. In that case, it returns the original result (for negative numbers) or the result of f(f()) (for positive numbers). An alternative could store the variable's state in even/odd like the answers to the previous question, but then it breaks if f() changes (or can change) the variable's value. A better answer, as has been said, is the lambda solution. Here is a similar but different solution in Perl (uses references, but same concept):
sub g {
    if (ref $_[0]) {
        return ${$_[0]};
    } else {
        local $var = f($_[0]);
        return \$var;
    }
}
Note: This is tested, and does not work. It always returns a reference to a scalar (and it's always the same reference). I've tried a few things, but this code shows the general idea, and though my implementation is wrong and the approach may even be flawed, it's a step in the right direction. With a few tricks, you could even use a string:
use Scalar::Util qw(looks_like_number);

sub g {
    return "s" . f($_[0]) if looks_like_number $_[0];
    return substr $_[0], 1;
}


Related

How to work with the result of the wild sympy

I have the following code:
f = tan(x)*x**2
q = Wild('q')
s = f.match(tan(q))
which gives s = {q_: x}.
How do I work with the result of the "wild"? It isn't indexed like an array, so s[0] or s{0} doesn't work.
Wild can be used when you have an expression which is the result of some complicated calculation, but you know it has to be of the form tan(something) times something else. Then s[q] will be the sympy expression for the "something", and s[p] for the "something else". This could be used to investigate both p and q, or to further work with a simplified version of f, substituting p and q with new variables, especially if p and q would be complex expressions involving multiple variables.
Many more use cases are possible.
Here is an example:
from sympy import *
from sympy.abc import x, y, z
p = Wild('p')
q = Wild('q')
f = tan(x) * x**2
s = f.match(p*tan(q))
print(f'f is the tangent of "{s[q]}" multiplied by "{s[p]}"')
g = f.xreplace({s[q]: y, s[p]:z})
print(f'f rewritten in simplified form as a function of y and z: "{g}"')
h = s[p] * s[q]
print(f'a new function h, combining parts of f: "{h}"')
Output:
f is the tangent of "x" multiplied by "x**2"
f rewritten in simplified form as a function of y and z: "z*tan(y)"
a new function h, combining parts of f: "x**3"
If you're interested in all arguments from tan that appear in f written as a product, you might try:
from sympy import *
from sympy.abc import x

f = tan(x+2)*tan(x*x+1)*7*(x+1)*tan(1/x)
if f.func == Mul:
    all_tan_args = [a.args[0] for a in f.args if a.func == tan]
    # note: the [0] is needed because args gives a tuple of arguments, and
    # in the case of tan you'd want the first (there is only one)
elif f.func == tan:
    all_tan_args = [f.args[0]]
else:
    all_tan_args = []

prod = 1
for a in all_tan_args:
    prod *= a

print(f'All the tangent arguments are: {all_tan_args}')
print(f'Their product is: {prod}')
Output:
All the tangent arguments are: [1/x, x**2 + 1, x + 2]
Their product is: (x + 2)*(x**2 + 1)/x
Note that neither method would work for f = tan(x)**2. For that, you'd need to write another match and decide whether you'd want to take the same power of the arguments.
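For example, a sketch of such a match (exact Wild-matching behavior can vary between SymPy versions):
from sympy import tan, Wild
from sympy.abc import x

p, q, r = Wild('p'), Wild('q'), Wild('r')
f = tan(x)**2
s = f.match(p * tan(q)**r)
print(s)  # expected: {p_: 1, q_: x, r_: 2}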

Load Error when trying to pass complicated function into Simpson's rule

I have written a method that approximates a definite integral by the composite Simpson's rule.
#=
f integrand
a lower integration bound
b upper integration bound
n number of iterations or panels
h step size
=#
function simpson(f::Function, a::Number, b::Number, n::Number)
    n % 2 == 0 || error("`n` must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4*sum(f(a .+ collect(1:2:n) .* h))
    s += 2*sum(f(a .+ collect(2:2:n-1) .* h))
    return h/3 * s
end
For "simple" functions, like e^(-x^2), the simpson function works.
Input: simpson(x -> exp.(-x.^2), 0, 5, 100)
Output: 0.8862269254513949
However, for the more complicated function f(x)
gArgs(x) = (30 .+ x, 0)
f(x) = exp.(-x.^2) .* maximum(generator.(gArgs.(x)...)[1])
where generator(θ, plotsol) is a function that takes in a defect θ in percent and a boolean value plotsol (either 0 or 1) that determines whether the generator should be plotted, and returns a vector with the magnetization in certain points in the generator.
When I try to compute the integral by running the below code
gArgs(x) = (30 .+ x, 0)
f(x) = exp.(-x.^2) .* maximum(generator.(gArgs.(x)...)[1])
println(simpson(x -> f(x), 0, 5, 10))
I encounter the error MethodError: no method matching generator(::Float64). With slight variants of the expression for f(x) I run into different errors, like DimensionMismatch("array could not be broadcast to match destination") and InexactError: Bool(33.75). In the end, I think the cause of the error boils down to me not being able to figure out how to properly write an expression for the integrand f(x). Could someone help me figure out how to enter f(x) correctly? Let me know if anything is unclear in my question.
Given an array x, gArgs.(x) returns an array of tuples, and you are then trying to broadcast over an array of tuples. But the behavior of broadcasting with tuples is a bit different: tuples are not treated as single elements, and they themselves broadcast.
julia> println.(gArgs.([0.5, 1.5, 2.5, 3.5, 4.5])...)
30.531.532.533.534.5
00000
This is not what you expected, is it?
You can also see the problem with the following example;
julia> (2, 5) .!= [(2, 5)]
2-element BitArray{1}:
true
true
I believe f is a function that actually takes a scalar and returns a scalar. Instead of making f work on arrays, you should leave the broadcasting to the caller. You are very likely to be better off implementing f element-wise. This is the more Julia way of doing things and will make your job much easier.
That said, I believe your implementation should work with the following modifications, if you do not have an error in generator.
function simpson(f::Function, a::Number, b::Number, n::Number)
    n % 2 == 0 || error("`n` must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4*sum(f.(a .+ collect(1:2:n) .* h)) # broadcast `f`
    s += 2*sum(f.(a .+ collect(2:2:n-1) .* h)) # broadcast `f`
    return h/3 * s
end

# define `gArgs` and `f` element-wise, and `generator`, too
gArgs(x) = (30 + x, 0) # get rid of broadcasting dot. Shouldn't `0` be `false`?
f(x) = exp(-x^2) * maximum(generator(gArgs(x)...)[1]) # get rid of broadcasting dots

println(simpson(f, 0, 5, 10)) # you can just write `f`
You should also define the generator function element-wise.

Implementing an algorithm to transform a real number to a continued fraction in F#

I am trying to implement a recursive function which takes a float and returns a list of ints representing the continued fraction representation of the float (https://en.wikipedia.org/wiki/Continued_fraction). In general I think I understand how the algorithm is supposed to work; it's fairly simple. What I have so far is this:
let rec float2cfrac (x : float) : int list =
    let q = int x
    let r = x - (float q)
    if r = 0.0 then
        []
    else
        q :: (float2cfrac (1.0 / r))
The problem is obviously with the base case. It seems the value r never reduces to 0.0; instead the algorithm keeps returning values like 0.0.....[number]. I am just not sure how to perform the comparison; how exactly should I go about it? The algorithm the function is based on says the base case is 0, so I naturally interpret this as 0.0; I don't see any other way. Also, note that this is for an assignment where I am explicitly asked to implement the algorithm recursively. Does anyone have some guidance for me? It would be much appreciated.
It seems the value r never reduces to 0.0; instead the algorithm keeps returning values like 0.0.....[number].
This is a classic issue with floating point comparisons. You need to use some epsilon tolerance value for comparisons, because r will never reach exactly 0.0:
let epsilon = 0.0000000001

let rec float2cfrac (x : float) : int list =
    let q = int x
    let r = x - (float q)
    if r < epsilon then
        []
    else
        q :: (float2cfrac (1.0 / r))
> float2cfrac 4.23
val it : int list = [4; 4; 2; 1]
See this MSDN documentation for more.
You could define a helper function for this:
let withinTolerance (x: float) (y: float) e =
    System.Math.Abs(x - y) < e
Also note your original solution isn't tail-recursive, so it consumes stack as it recurses and could overflow the stack. You could refactor it such that a float can be unfolded without recursion:
let float2cfrac (x: float) =
    let q = int x
    let r = x - (float q)
    if withinTolerance r 0.0 epsilon then None
    else Some (q, (1.0 / r))
4.23 |> Seq.unfold float2cfrac // seq [4; 4; 2; 1]

How to factor RSA modulus given the public and private exponent?

I have an RSA private key with modulus m, public exponent e, and private exponent d, but the program I am using needs the modulus's prime factors p and q.
Is it possible to use e and d to get p and q?
Yes -- once you know the modulus N, and public/private exponents d and e, it is not too difficult to obtain p and q such that N=pq.
This paper by Dan Boneh describes an algorithm for doing so. It relies on the fact that, by definition,
de ≡ 1 (mod φ(N)).
For any randomly chosen "witness" in (2, N), there is about a 50% chance of being able to use it to find a nontrivial square root of 1 mod N (call it x). Then gcd(x - 1, N) gives one of the factors.
You can use the open-source tool I developed in 2009 that converts RSA keys between the SFM format (n, e, d) and the CRT format (p, q, dp, dq, u), and the other way around. It is on SourceForge: http://rsaconverter.sourceforge.net/
The algorithm I implemented is based on ideas presented by Dan Boneh, as described by the previous answer.
I hope this will be useful.
Mounir IDRASSI - IDRIX
I posted a response on the crypto stack exchange answering the same question here. It uses the same approach as outlined in Boneh's paper, but does a lot more explanation as to how it actually works. I also try to assume a minimal amount of prior knowledge.
Hope this helps!
I put in the effort to dig through Boneh's paper. The "algorithm" for deriving (p, q) from (n, d) is buried at the end of §1.1, coded in maths jargon, and left as an exercise for the reader to render out of his (rather terse) proof that it's efficient to do so.
Let 〈N, e〉 be an RSA public key. Given the private key d, one can efficiently factor the modulus N = pq.
Proof. Compute k = de − 1. By definition of d and e we know that k is a multiple of φ(N). Since φ(N) is even, k = 2^t · r with r odd and t ≥ 1. We have g^k = 1 for every g ∈ ℤ_N^×, and therefore g^(k/2) is a square root of unity modulo N. By the Chinese Remainder Theorem, 1 has four square roots modulo N = pq. Two of these square roots are ±1. The other two are ±x, where x satisfies x = 1 mod p and x = −1 mod q. Using either one of these last two square roots, the factorization of N is revealed by computing gcd(x − 1, N). A straightforward argument shows that if g is chosen at random from ℤ_N^× then with probability at least 1/2 (over the choice of g) one of the elements in the sequence g^(k/2), g^(k/4), …, g^(k/2^t) mod N is a square root of unity that reveals the factorization of N. All elements in the sequence can be efficiently computed in time O(n^3), where n = log₂(N).
Obviously, this is pretty close to meaningless for anyone who doesn't know what $Z_N^\ast$ is, and has a pretty nonlinear structure that takes a good deal of time to twist into a linear algorithm.
So here is the worked solution:
from random import randrange
from math import gcd

def ned_to_pqe(secret_key):
    """
    https://crypto.stanford.edu/~dabo/papers/RSA-survey.pdf#:~:text=Given%20d%2C,reveals%20the%20factorization%20of%20N%2E
    """
    n, e, d = secret_key
    k = d * e - 1
    t = bit_scan1(k)
    trivial_sqrt1 = {1, n - 1}
    while True:
        g = randrange(2, n - 1)
        for j in range(1, t + 1):
            x = pow(g, k >> j, n)
            if pow(x, 2, n) == 1:
                if x in trivial_sqrt1: continue
                p = gcd(x - 1, n)
                q = n // p
                if q > p: p, q = q, p
                return p, q, e

def pqe_to_ned(secret_key):
    p, q, e = secret_key
    n = p * q
    l = (p - 1) * (q - 1)
    d = pow(e, -1, l)
    return n, e, d

def bit_scan1(i):
    """
    https://gmpy2.readthedocs.io/en/latest/mpz.html#mpz.bit_scan1
    """
    # https://stackoverflow.com/a/63552117/1874170
    return (i & -i).bit_length() - 1

def test():
    secret_key = (
        # https://en.wikipedia.org/wiki/RSA_numbers#RSA-100
        # Should take upwards of an hour to factor on a consumer desktop ca. 2022
        1522605027922533360535618378132637429718068114961380688657908494580122963258952897654000350692006139,
        65537,
        1435319569480661473883310243084583371347212233430112391255270984679722445287591616684593449660400673
    )
    if secret_key != pqe_to_ned(ned_to_pqe(secret_key)):
        raise ValueError

if __name__ == '__main__':
    test()
    print("Self-test OK")
Live demo (JS):
function ned_to_pqe({n, e, d}) {
    // https://crypto.stanford.edu/~dabo/papers/RSA-survey.pdf#:~:text=Given%20d%2C,reveals%20the%20factorization%20of%20N%2E
    let k = d * e - 1n;
    let t = scan1(k);
    let trivial_sqrt1 = new Set([1n, n - 1n]);
    while (true) {
        let g = insecure_randrange(2n, n - 1n);
        for ( let j = t ; j > 0 ; --j ) {
            let x = bn_powMod(g, k >> j, n);
            if (bn_powMod(x, 2n, n) === 1n) {
                if (trivial_sqrt1.has(x)) continue;
                let p = gcd(x - 1n, n), q = n/p;
                if (q > p) [p, q] = [q, p];
                return {p, q, e};
            }
        }
    }
}

function pqe_to_ned({p, q, e}) {
    let n = p * q;
    let l = (p - 1n) * (q - 1n);
    let d = bn_modInv(e, l);
    return {n, e, d};
}

function bn_powMod(x, e, m) {
    // h/t https://umaranis.com/2018/07/12/calculate-modular-exponentiation-powermod-in-javascript-ap-n/
    if (m === 1n) return 0n;
    let y = 1n;
    x = x % m;
    while (e > 0n) {
        if (e % 2n === 1n) // odd number
            y = (y * x) % m;
        e = e >> 1n; // divide by 2
        x = (x * x) % m;
    }
    return y;
}

function bn_modInv(x, m) {
    // TOY IMPLEMENTATION
    // DO NOT USE IN GENERAL-PURPOSE CODE
    // h/t https://rosettacode.org/wiki/Modular_inverse#C
    let m0 = m, t, q;
    let x0 = 0n, y = 1n;
    if (m === 1n) return 1n;
    while (x > 1n) {
        q = x / m;
        t = m;
        m = x % m;
        x = t;
        t = x0;
        x0 = y - q * x0;
        y = t;
    }
    if (y < 0n) y += m0;
    return y;
}

function gcd(a, b) {
    // h/t https://stackoverflow.com/a/17445304/1874170
    while (b) {
        [a, b] = [b, a % b];
    }
    return a;
}

function scan1(i) {
    // https://gmplib.org/manual/Integer-Logic-and-Bit-Fiddling#mpz_scan1
    let k = 0n;
    if ( i !== 0n ) {
        while( (i & 1n) === 0n ) {
            i >>= 1n;
            k += 1n;
        }
    }
    return k;
}

function insecure_randrange(a, b) {
    // h/t https://arxiv.org/abs/1304.1916
    let numerator = 0n;
    let denominator = 1n;
    let n = (b - a);
    while (true) {
        numerator <<= 1n;
        denominator <<= 1n;
        numerator |= BigInt(Math.random()>1/2);
        if (denominator >= n) {
            if (numerator < n)
                return a + numerator;
            numerator -= n;
            denominator -= n;
        }
    }
}
<form action="javascript:" onsubmit="(({target:form,submitter:{value:action}})=>{eval(action)(form)})(event)">
    <p>
        <label for="p">p=</label><input name="p" value="37975227936943673922808872755445627854565536638199" /><br />
        <label for="q">q=</label><input name="q" value="40094690950920881030683735292761468389214899724061" /><br />
        <label for="n">n=</label><input name="n" /><br />
        <label for="e">e=</label><input name="e" placeholder="65537" /><br />
        <label for="d">d=</label><input name="d" /><br />
    </p>
    <p>
        <button type="submit" value="pqe2nd">Get (n,d) from (p,q,e)</button><br />
        <button type="submit" value="delpq">Forget (p,q)</button><br />
        <button type="submit" value="ned2pq">Get (p,q) from (n,e,d)</button>
    </p>
</form>
<script>
function pqe2nd({elements}) {
    if (!elements['e'].value) elements['e'].value = elements['e'].placeholder;
    let p = BigInt(elements['p'].value||undefined);
    let q = BigInt(elements['q'].value||undefined);
    let e = BigInt(elements['e'].value||undefined);
    let {n, d} = pqe_to_ned({p,q,e});
    elements['n'].value = n.toString();
    elements['d'].value = d.toString();
}
function ned2pq({elements}) {
    if (!elements['e'].value) elements['e'].value = elements['e'].placeholder;
    let n = BigInt(elements['n'].value||undefined);
    let e = BigInt(elements['e'].value||undefined);
    let d = BigInt(elements['d'].value||undefined);
    let {p, q} = ned_to_pqe({n,e,d});
    elements['p'].value = p.toString();
    elements['q'].value = q.toString();
}
function delpq({elements}) {
    elements['p'].value = null;
    elements['q'].value = null;
}
</script>
To answer the question as-stated in the title: factoring N entails finding N. But you cannot, in the general case, derive N from (e, d). Therefore, you cannot, in the general case, derive the factors of N from (e, d); QED.
finding n from (e, d) is computationally feasible with fair probability, or even certainty, for a small but observable fraction of RSA keys of practical interest
If you want to try to do so anyway, you'll need to be able to factorize e * d - 1 (if I understand the above-linked answer correctly):
from itertools import permutations

def ed_to_pq(e, d):
    # NOT ALWAYS POSSIBLE -- the number e*d-1 must be small enough to factorize
    # h/t https://crypto.stackexchange.com/a/81620/8287
    factors = factorize(e * d - 1)
    factors.sort()
    # Unimplemented optimization:
    #   if two factors are larger than (p * q).bit_length()//4
    #   and the greater of (p, q) is not many times bigger than the lesser,
    #   then you can safely assume that the large factors belong to (p-1) and (q-1)
    #   and thereby reduce the number of iterations in the following loops
    # Unimplemented optimization:
    #   permutations are overkill for this partitioning scheme;
    #   a clever mathematician could come up with something more efficient
    # Unimplemented optimization:
    #   prune permutations based on "sanity" factor of logarithm knapsacking
    l = len(factors)
    for arrangement in permutations(factors):
        for l_pm1 in range(1, l - 1):
            for l_qm1 in range(1, l_pm1):
                pm1 = prod(arrangement[:l_pm1])
                qm1 = prod(arrangement[l_pm1:l_pm1 + l_qm1])
                try:
                    if pow(e, -1, pm1 * qm1) == d:
                        return (pm1 + 1, qm1 + 1)
                except Exception:
                    pass

from functools import reduce
from operator import mul

def prod(l):
    return reduce(mul, l)

What is a Y-combinator? [closed]

A Y-combinator is a computer science concept from the “functional” side of things. Most programmers don't know much at all about combinators, if they've even heard about them.
What is a Y-combinator?
How do combinators work?
What are they good for?
Are they useful in procedural languages?
A Y-combinator is a "functional" (a function that operates on other functions) that enables recursion, when you can't refer to the function from within itself. In computer-science theory, it generalizes recursion, abstracting its implementation, and thereby separating it from the actual work of the function in question. The benefit of not needing a compile-time name for the recursive function is sort of a bonus. =)
This is applicable in languages that support lambda functions. The expression-based nature of lambdas usually means that they cannot refer to themselves by name. And working around this by declaring the variable, referring to it, then assigning the lambda to it, to complete the self-reference loop, is brittle. The lambda variable can be copied, and the original variable re-assigned, which breaks the self-reference.
Y-combinators are cumbersome to implement, and often to use, in statically typed languages (which procedural languages often are), because typing restrictions usually require the number of arguments for the function in question to be known at compile time. This means that a Y-combinator must be written for each argument count one needs to use.
Below is an example of the usage and working of a Y-combinator in C#.
Using a Y-combinator involves an "unusual" way of constructing a recursive function. First you must write your function as a piece of code that calls a pre-existing function, rather than itself:
// Factorial, if func does the same thing as this bit of code...
x == 0 ? 1: x * func(x - 1);
Then you turn that into a function that takes a function to call, and returns a function that does so. This is called a functional, because it takes one function, and performs an operation with it that results in another function.
// A function that creates a factorial, but only if you pass in
// a function that does what the inner function is doing.
Func<Func<Double, Double>, Func<Double, Double>> fact =
    (recurs) =>
        (x) =>
            x == 0 ? 1 : x * recurs(x - 1);
Now you have a function that takes a function, and returns another function that sort of looks like a factorial, but instead of calling itself, it calls the argument passed into the outer function. How do you make this the factorial? Pass the inner function to itself. The Y-Combinator does that, by being a function with a permanent name, which can introduce the recursion.
// One-argument Y-Combinator.
public static Func<T, TResult> Y<T, TResult>(Func<Func<T, TResult>, Func<T, TResult>> F)
{
    return
        t =>         // A function that...
            F(       // Calls the factorial creator, passing in...
                Y(F) // The result of this same Y-combinator function call...
                     // (Here is where the recursion is introduced.)
            )
            (t);     // And passes the argument into the work function.
}
Rather than the factorial calling itself, what happens is that the factorial calls the factorial generator (returned by the recursive call to Y-Combinator). And depending on the current value of t the function returned from the generator will either call the generator again, with t - 1, or just return 1, terminating the recursion.
It's complicated and cryptic, but it all shakes out at run-time, and the key to its working is "deferred execution", and the breaking up of the recursion to span two functions. The inner F is passed as an argument, to be called in the next iteration, only if necessary.
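For comparison, the same deferred-execution Y transcribes almost mechanically into Python (a sketch, not part of the original C# answer):
def Y(F):
    # the outer lambda defers the recursive Y(F) call until an argument arrives
    return lambda t: F(Y(F))(t)

fact = Y(lambda recurs: lambda x: 1 if x == 0 else x * recurs(x - 1))
print(fact(5))  # 120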
If you're ready for a long read, Mike Vanier has a great explanation. Long story short, it allows you to implement recursion in a language that doesn't necessarily support it natively.
I've lifted this from http://www.mail-archive.com/boston-pm@mail.pm.org/msg02716.html which is an explanation I wrote several years ago.
I'll use JavaScript in this example, but many other languages will work as well.
Our goal is to be able to write a recursive function of 1
variable using only functions of 1 variable and no
assignments, defining things by name, etc. (Why this is our
goal is another question, let's just take this as the
challenge that we're given.) Seems impossible, huh? As
an example, let's implement factorial.
Well step 1 is to say that we could do this easily if we
cheated a little. Using functions of 2 variables and
assignment we can at least avoid having to use
assignment to set up the recursion.
// Here's the function that we want to recurse.
X = function (recurse, n) {
    if (0 == n)
        return 1;
    else
        return n * recurse(recurse, n - 1);
};

// This will get X to recurse.
Y = function (builder, n) {
    return builder(builder, n);
};

// Here it is in action.
Y(
    X,
    5
);
Now let's see if we can cheat less. Well firstly we're using
assignment, but we don't need to. We can just write X and
Y inline.
// No assignment this time.
function (builder, n) {
    return builder(builder, n);
}(
    function (recurse, n) {
        if (0 == n)
            return 1;
        else
            return n * recurse(recurse, n - 1);
    },
    5
);
But we're using functions of 2 variables to get a function of 1
variable. Can we fix that? Well a smart guy by the name of
Haskell Curry has a neat trick, if you have good higher order
functions then you only need functions of 1 variable. The
proof is that you can get from functions of 2 (or more in the
general case) variables to 1 variable with a purely
mechanical text transformation like this:
// Original
F = function (i, j) {
    ...
};
F(i,j);

// Transformed
F = function (i) { return function (j) {
    ...
}};
F(i)(j);
where ... remains exactly the same. (This trick is called
"currying" after its inventor. The language Haskell is also
named for Haskell Curry. File that under useless trivia.)
Now just apply this transformation everywhere and we get
our final version.
// The dreaded Y-combinator in action!
function (builder) { return function (n) {
    return builder(builder)(n);
}}(
function (recurse) { return function (n) {
    if (0 == n)
        return 1;
    else
        return n * recurse(recurse)(n - 1);
}})(
    5
);
Feel free to try it. alert() that return, tie it to a button, whatever.
That code calculates factorials, recursively, without using
assignment, declarations, or functions of 2 variables. (But
trying to trace how it works is likely to make your head spin.
And handing it to someone without the derivation, just slightly reformatted, will result in code that is sure to baffle and confuse.)
You can replace the 4 lines that recursively define factorial with
any other recursive function that you want.
I wonder if there's any use in attempting to build this from the ground up. Let's see. Here's a basic, recursive factorial function:
function factorial(n) {
    return n == 0 ? 1 : n * factorial(n - 1);
}
Let's refactor and create a new function called fact that returns an anonymous factorial-computing function instead of performing the calculation itself:
function fact() {
    return function(n) {
        return n == 0 ? 1 : n * fact()(n - 1);
    };
}
var factorial = fact();
That's a little weird, but there's nothing wrong with it. We're just generating a new factorial function at each step.
The recursion at this stage is still fairly explicit. The fact function needs to be aware of its own name. Let's parameterize the recursive call:
function fact(recurse) {
    return function(n) {
        return n == 0 ? 1 : n * recurse(n - 1);
    };
}
function recurser(x) {
    return fact(recurser)(x);
}
var factorial = fact(recurser);
That's great, but recurser still needs to know its own name. Let's parameterize that, too:
function recurser(f) {
    return fact(function(x) {
        return f(f)(x);
    });
}
var factorial = recurser(recurser);
Now, instead of calling recurser(recurser) directly, let's create a wrapper function that returns its result:
function Y() {
    return (function(f) {
        return f(f);
    })(recurser);
}
var factorial = Y();
We can now get rid of the recurser name altogether; it's just an argument to Y's inner function, which can be replaced with the function itself:
function Y() {
    return (function(f) {
        return f(f);
    })(function(f) {
        return fact(function(x) {
            return f(f)(x);
        });
    });
}
var factorial = Y();
The only external name still referenced is fact, but it should be clear by now that that's easily parameterized, too, creating the complete, generic, solution:
function Y(le) {
    return (function(f) {
        return f(f);
    })(function(f) {
        return le(function(x) {
            return f(f)(x);
        });
    });
}
var factorial = Y(function(recurse) {
    return function(n) {
        return n == 0 ? 1 : n * recurse(n - 1);
    };
});
Most of the answers above describe what the Y-combinator is but not what it is for.
Fixed point combinators are used to show that the lambda calculus is Turing complete. This is a very important result in the theory of computation and provides a theoretical foundation for functional programming.
Studying fixed point combinators has also helped me really understand functional programming. I have never found any use for them in actual programming though.
For programmers who haven't encountered functional programming in depth, and don't care to start now, but are mildly curious:
The Y combinator is a formula which lets you implement recursion in a situation where functions can't have names but can be passed around as arguments, used as return values, and defined within other functions.
It works by passing the function to itself as an argument, so it can call itself.
It's part of the lambda calculus, which is really maths but is effectively a programming language, and is pretty fundamental to computer science and especially to functional programming.
The day to day practical value of the Y combinator is limited, since programming languages tend to let you name functions.
In case you need to identify it in a police lineup, it looks like this:
Y = λf.(λx.f (x x)) (λx.f (x x))
You can usually spot it because of the repeated (λx.f (x x)).
The λ symbols are the Greek letter lambda, which gives the lambda calculus its name, and there are a lot of (λx.t) style terms because that's what the lambda calculus looks like.
y-combinator in JavaScript:
var Y = function(f) {
    return (function(g) {
        return g(g);
    })(function(h) {
        return function() {
            return f(h(h)).apply(null, arguments);
        };
    });
};

var factorial = Y(function(recurse) {
    return function(x) {
        return x == 0 ? 1 : x * recurse(x-1);
    };
});

factorial(5) // -> 120
Edit:
I learn a lot from looking at code, but this one is a bit tough to swallow without some background - sorry about that. With some general knowledge presented by other answers, you can begin to pick apart what is happening.
The Y function is the "y-combinator". Now take a look at the var factorial line where Y is used. Notice you pass a function to it that has a parameter (in this example, recurse) that is also used later on in the inner function. The parameter name basically becomes the name of the inner function, allowing it to perform a recursive call (since it uses recurse() in its definition). The y-combinator performs the magic of associating the otherwise anonymous inner function with the parameter name of the function passed to Y.
For the full explanation of how Y does the magic, check out the linked article (not by me btw.)
Anonymous recursion
A fixed-point combinator is a higher-order function fix that by definition satisfies the equivalence
forall f. fix f = f (fix f)
fix f represents a solution x to the fixed-point equation
x = f x
The factorial of a natural number can be defined by
fact 0 = 1
fact n = n * fact (n - 1)
Using fix, arbitrary constructive proofs over general/μ-recursive functions can be derived without named self-referentiality.
fact n = (fix fact') n
  where
    fact' rec n = if n == 0
                  then 1
                  else n * rec (n - 1)
such that
fact 3
= (fix fact') 3
= fact' (fix fact') 3
= if 3 == 0 then 1 else 3 * (fix fact') (3 - 1)
= 3 * (fix fact') 2
= 3 * fact' (fix fact') 2
= 3 * if 2 == 0 then 1 else 2 * (fix fact') (2 - 1)
= 3 * 2 * (fix fact') 1
= 3 * 2 * fact' (fix fact') 1
= 3 * 2 * if 1 == 0 then 1 else 1 * (fix fact') (1 - 1)
= 3 * 2 * 1 * (fix fact') 0
= 3 * 2 * 1 * fact' (fix fact') 0
= 3 * 2 * 1 * if 0 == 0 then 1 else 0 * (fix fact') (0 - 1)
= 3 * 2 * 1 * 1
= 6
This formal proof that
fact 3 = 6
methodically uses the fixed-point combinator equivalence for rewrites
fix fact' -> fact' (fix fact')
Lambda calculus
The untyped lambda calculus formalism consists in a context-free grammar
E ::= v          Variable
    | λ v. E     Abstraction
    | E E        Application
where v ranges over variables, together with the beta and eta reduction rules
(λ x. B) E -> B[x := E] Beta
λ x. E x -> E if x doesn’t occur free in E Eta
Beta reduction substitutes all free occurrences of the variable x in the abstraction (“function”) body B by the expression (“argument”) E. Eta reduction eliminates redundant abstraction. It is sometimes omitted from the formalism. An irreducible expression, to which no reduction rule applies, is in normal or canonical form.
λ x y. E
is shorthand for
λ x. λ y. E
(abstraction multiarity),
E F G
is shorthand for
(E F) G
(application left-associativity),
λ x. x
and
λ y. y
are alpha-equivalent.
Abstraction and application are the two only “language primitives” of the lambda calculus, but they allow encoding of arbitrarily complex data and operations.
The Church numerals are an encoding of the natural numbers similar to the Peano-axiomatic naturals.
0 = λ f x. x                  No application
1 = λ f x. f x                One application
2 = λ f x. f (f x)            Twofold
3 = λ f x. f (f (f x))        Threefold
. . .
SUCC = λ n f x. f (n f x)     Successor
ADD = λ n m f x. n f (m f x)  Addition
MULT = λ n m f x. n (m f) x   Multiplication
. . .
A formal proof that
1 + 2 = 3
using the rewrite rule of beta reduction:
ADD 1 2
= (λ n m f x. n f (m f x)) (λ g y. g y) (λ h z. h (h z))
= (λ m f x. (λ g y. g y) f (m f x)) (λ h z. h (h z))
= (λ m f x. (λ y. f y) (m f x)) (λ h z. h (h z))
= (λ m f x. f (m f x)) (λ h z. h (h z))
= λ f x. f ((λ h z. h (h z)) f x)
= λ f x. f ((λ z. f (f z)) x)
= λ f x. f (f (f x)) Normal form
= 3
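As a sanity check, the Church encodings above transcribe directly into Python lambdas (the to_int decoder is just for display):
ZERO = lambda f: lambda x: x                     # no application
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))  # successor
ADD = lambda n: lambda m: lambda f: lambda x: n(f)(m(f)(x))

to_int = lambda n: n(lambda k: k + 1)(0)         # decode for printing
ONE = SUCC(ZERO)
TWO = SUCC(ONE)
print(to_int(ADD(ONE)(TWO)))  # 3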
Combinators
In lambda calculus, combinators are abstractions that contain no free variables. Most simply: I, the identity combinator
λ x. x
isomorphic to the identity function
id x = x
Such combinators are the primitive operators of combinator calculi like the SKI system.
S = λ x y z. x z (y z)
K = λ x y. x
I = λ x. x
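These can also be played with as Python lambdas (a sketch); for instance, S K K behaves as the identity:
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
I = lambda x: x

# (S K K) a -> K a (K a) -> a
print(S(K)(K)(42))  # 42
print(I(42))        # 42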
Beta reduction is not strongly normalizing; not all reducible expressions, “redexes”, converge to normal form under beta reduction. A simple example is divergent application of the omega ω combinator
λ x. x x
to itself:
(λ x. x x) (λ y. y y)
= (λ y. y y) (λ y. y y)
. . .
= _|_ Bottom
Reduction of leftmost subexpressions (“heads”) is prioritized. Applicative order normalizes arguments before substitution, normal order does not. The two strategies are analogous to eager evaluation, e.g. C, and lazy evaluation, e.g. Haskell.
K (I a) (ω ω)
= (λ k l. k) ((λ i. i) a) ((λ x. x x) (λ y. y y))
diverges under eager applicative-order beta reduction
= (λ k l. k) a ((λ x. x x) (λ y. y y))
= (λ l. a) ((λ x. x x) (λ y. y y))
= (λ l. a) ((λ y. y y) (λ y. y y))
. . .
= _|_
since in strict semantics
forall f. f _|_ = _|_
but converges under lazy normal-order beta reduction
= (λ l. ((λ i. i) a)) ((λ x. x x) (λ y. y y))
= (λ l. a) ((λ x. x x) (λ y. y y))
= a
If an expression has a normal form, normal-order beta reduction will find it.
Y
The essential property of the Y fixed-point combinator
λ f. (λ x. f (x x)) (λ x. f (x x))
is given by
Y g
= (λ f. (λ x. f (x x)) (λ x. f (x x))) g
= (λ x. g (x x)) (λ x. g (x x)) = Y g
= g ((λ x. g (x x)) (λ x. g (x x))) = g (Y g)
= g (g ((λ x. g (x x)) (λ x. g (x x)))) = g (g (Y g))
. . . . . .
The equivalence
Y g = g (Y g)
is isomorphic to
fix f = f (fix f)
The untyped lambda calculus can encode arbitrary constructive proofs over general/μ-recursive functions.
FACT = λ n. Y FACT' n
FACT' = λ rec n. if n == 0 then 1 else n * rec (n - 1)
FACT 3
= (λ n. Y FACT' n) 3
= Y FACT' 3
= FACT' (Y FACT') 3
= if 3 == 0 then 1 else 3 * (Y FACT') (3 - 1)
= 3 * (Y FACT') (3 - 1)
= 3 * FACT' (Y FACT') 2
= 3 * if 2 == 0 then 1 else 2 * (Y FACT') (2 - 1)
= 3 * 2 * (Y FACT') 1
= 3 * 2 * FACT' (Y FACT') 1
= 3 * 2 * if 1 == 0 then 1 else 1 * (Y FACT') (1 - 1)
= 3 * 2 * 1 * (Y FACT') 0
= 3 * 2 * 1 * FACT' (Y FACT') 0
= 3 * 2 * 1 * if 0 == 0 then 1 else 0 * (Y FACT') (0 - 1)
= 3 * 2 * 1 * 1
= 6
(Multiplication delayed, confluence)
For Churchian untyped lambda calculus, there has been shown to exist a recursively enumerable infinity of fixed-point combinators besides Y.
X = λ f. (λ x. x x) (λ x. f (x x))
Y' = (λ x y. x y x) (λ y x. y (x y x))
Z = λ f. (λ x. f (λ v. x x v)) (λ x. f (λ v. x x v))
Θ = (λ x y. y (x x y)) (λ x y. y (x x y))
. . .
Normal-order beta reduction makes the unextended untyped lambda calculus a Turing-complete rewrite system.
In Haskell, the fixed-point combinator can be elegantly implemented
fix :: forall t. (t -> t) -> t
fix f = f (fix f)
Haskell’s laziness normalizes to a finite result before all subexpressions have been evaluated.
primes :: Integral t => [t]
primes = sieve [2 ..]
  where
    sieve = fix (\ rec (p : ns) ->
        p : rec [n | n <- ns
                   , n `rem` p /= 0])
David Turner: Church's Thesis and Functional Programming
Alonzo Church: An Unsolvable Problem of Elementary Number Theory
Lambda calculus
Church–Rosser theorem
Other answers provide a pretty concise answer to this, without one important fact: you don't need to implement the fixed-point combinator in any practical language in this convoluted way, and doing so serves no practical purpose (except "look, I know what a Y-combinator is"). It's an important theoretical concept, but of little practical value.
Here is a JavaScript implementation of the Y-Combinator and the Factorial function (from Douglas Crockford's article, available at: http://javascript.crockford.com/little.html).
function Y(le) {
    return (function (f) {
        return f(f);
    }(function (f) {
        return le(function (x) {
            return f(f)(x);
        });
    }));
}

var factorial = Y(function (fac) {
    return function (n) {
        return n <= 2 ? n : n * fac(n - 1);
    };
});

var number120 = factorial(5);
A Y-Combinator is another name for a flux capacitor.
I have written a sort of "idiots guide" to the Y-Combinator in both Clojure and Scheme in order to help myself come to grips with it. They are influenced by material in "The Little Schemer"
In Scheme:
https://gist.github.com/z5h/238891
or Clojure:
https://gist.github.com/z5h/5102747
Both tutorials are code interspersed with comments and should be cut & pastable into your favourite editor.
As a newbie to combinators, I found Mike Vanier's article (thanks Nicholas Mancuso) to be really helpful. I would like to write a summary, besides documenting my understanding; if it could be of help to some others, I would be very glad.
From Crappy to Less Crappy
Using factorial as an example, we use the following almost-factorial function to calculate factorial of number x:
def almost-factorial f x = if iszero x
                           then 1
                           else * x (f (- x 1))
In the pseudo-code above, almost-factorial takes in function f and number x (almost-factorial is curried, so it can be seen as taking in function f and returning a 1-arity function).
When almost-factorial calculates factorial for x, it delegates the calculation of factorial for x - 1 to function f and accumulates that result with x (in this case, it multiplies the result of (x - 1) with x).
It can be seen that almost-factorial takes in a crappy version of the factorial function (which can only calculate up to number x - 1) and returns a less-crappy version of factorial (which calculates up to number x). As in this form:
almost-factorial crappy-f = less-crappy-f
If we repeatedly pass the less-crappy version of factorial back to almost-factorial, we will eventually get our desired factorial function f, at which point it can be considered as:
almost-factorial f = f
Fix-point
The fact that almost-factorial f = f means f is the fix-point of function almost-factorial.
This was a really interesting way of seeing the relationships of the functions above and it was an aha moment for me. (please read Mike's post on fix-point if you haven't)
Three functions
To generalize: we have a non-recursive function fn (like our almost-factorial) and its fix-point function fr (like our f); what Y does is, when you give Y fn, Y returns the fix-point function of fn.
So in summary (simplified by assuming fr takes only one parameter; x degenerates to x - 1, x - 2... in recursion):
We define the core calculations as fn: def fn fr x = ...accumulate x with result from (fr (- x 1)), this is the almost-useful function - although we cannot use fn directly on x, it will be useful very soon. This non-recursive fn uses a function fr to calculate its result
fn fr = fr, fr is the fix-point of fn; fr is the useful function, and we can use fr on x to get our result
Y fn = fr, Y returns the fix-point of a function, Y turns our almost-useful function fn into useful fr
Deriving Y (not included)
I will skip the derivation of Y and go on to understanding Y. Mike Vanier's post has a lot of details.
The form of Y
Y is defined as (in lambda calculus format):
Y f = λs.(f (s s)) λs.(f (s s))
If we apply the left-hand function to the right-hand one, substituting the right-hand lambda for the variable s, we get
Y f = λs.(f (s s)) λs.(f (s s))
=> f (λs.(f (s s)) λs.(f (s s)))
=> f (Y f)
So indeed, the result of (Y f) is the fix-point of f.
Why does (Y f) work?
Depending on the signature of f, (Y f) can be a function of any arity; to simplify, let's assume (Y f) takes only one parameter, like our factorial function.
def fn fr x = accumulate x (fr (- x 1))
since fn fr = fr, we continue
=> accumulate x (fn fr (- x 1))
=> accumulate x (accumulate (- x 1) (fr (- x 2)))
=> accumulate x (accumulate (- x 1) (accumulate (- x 2) ... (fn fr 1)))
the recursive calculation terminates when the inner-most (fn fr 1) is the base case and fn doesn't use fr in the calculation.
Looking at Y again:
fr = Y fn = λs.(fn (s s)) λs.(fn (s s))
=> fn (λs.(fn (s s)) λs.(fn (s s)))
So
fr x = Y fn x = fn (λs.(fn (s s)) λs.(fn (s s))) x
To me, the magical parts of this setup are:
fn and fr depend on each other: fr 'wraps' fn inside; every time fr is used to calculate x, it 'spawns' ('lifts'?) an fn and delegates the calculation to that fn (passing in itself, fr, and x); on the other hand, fn depends on fr and uses fr to calculate the result of the smaller problem x-1.
At the time fr is used to define fn (when fn uses fr in its operations), the real fr is not yet defined.
It's fn which defines the real business logic. Based on fn, Y creates fr - a helper function in a specific form - to facilitate the calculation for fn in a recursive manner.
It helped me to understand Y this way for the moment; hope it helps you too.
BTW, I also found the book An Introduction to Functional Programming Through Lambda Calculus very good. I'm only partway through it, and the fact that I couldn't get my head around Y in the book led me to this post.
Here are answers to the original questions, compiled from the article (which is TOTALLY worth reading) mentioned in the answer by Nicholas Mancuso, as well as other answers:
What is a Y-combinator?
A Y-combinator is a "functional" (or a higher-order function — a function that operates on other functions) that takes a single argument, which is a function that isn't recursive, and returns a version of the function which is recursive.
Somewhat recursive =), but more in-depth definition:
A combinator — is just a lambda expression with no free variables.
Free variable — is a variable that is not a bound variable.
Bound variable — variable which is contained inside the body of a lambda expression that has that variable name as one of its arguments.
Another way to think about this is that a combinator is a lambda expression in which you are able to replace the name of the combinator with its definition everywhere it is found and have everything still work (you would get into an infinite loop if the combinator contained a reference to itself inside the lambda body).
Y-combinator is a fixed-point combinator.
Fixed point of a function is an element of the function's domain that is mapped to itself by the function.
That is to say, c is a fixed point of the function f(x) if f(c) = c
This means f(f(...f(c)...)) = f^n(c) = c
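For a concrete example, iterating cos converges to its unique fixed point, the Dottie number (a quick Python check):
import math

c = 0.0
for _ in range(100):
    c = math.cos(c)
print(c, math.cos(c))  # both ~0.739085: cos(c) == c at the fixed point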
How do combinators work?
Examples below assume strong + dynamic typing:
Lazy (normal-order) Y-combinator:
This definition applies to languages with lazy (also: deferred, call-by-need) evaluation — evaluation strategy which delays the evaluation of an expression until its value is needed.
Y = λf.(λx.f(x x)) (λx.f(x x)) = λf.(λx.(x x)) (λx.f(x x))
What this means is that, for a given function f (which is a non-recursive function), the corresponding recursive function can be obtained first by computing λx.f(x x), and then applying this lambda expression to itself.
Strict (applicative-order) Y-combinator:
This definition applies to languages with strict (also: eager, greedy) evaluation — evaluation strategy in which an expression is evaluated as soon as it is bound to a variable.
Y = λf.(λx.f(λy.((x x) y))) (λx.f(λy.((x x) y))) = λf.(λx.(x x)) (λx.f(λy.((x x) y)))
It is the same as the lazy one in its nature; it just has extra λ wrappers to delay the lambda's body evaluation. I've asked another question, somewhat related to this topic.
What are they good for?
Stolen borrowed from answer by Chris Ammerman: Y-combinator generalizes recursion, abstracting its implementation, and thereby separating it from the actual work of the function in question.
Even though the Y-combinator has some practical applications, it is mainly a theoretical concept, understanding of which will expand your overall vision and will likely increase your analytical and developer skills.
Are they useful in procedural languages?
As stated by Mike Vanier: it is possible to define a Y combinator in many statically typed languages, but (at least in the examples I've seen) such definitions usually require some non-obvious type hackery, because the Y combinator itself doesn't have a straightforward static type. That's beyond the scope of this article, so I won't mention it further
And as mentioned by Chris Ammerman: most procedural languages have static typing.
So the answer to this one: not really.
A fixed point combinator (or fixed-point operator) is a higher-order function that computes a fixed point of other functions. This operation is relevant in programming language theory because it allows the implementation of recursion in the form of a rewrite rule, without explicit support from the language's runtime engine. (src Wikipedia)
The y-combinator implements anonymous recursion. So instead of
function fib( n ){ if( n<=1 ) return n; else return fib(n-1)+fib(n-2) }
you can do
function ( fib, n ){ if( n<=1 ) return n; else return fib(n-1)+fib(n-2) }
Of course, the y-combinator only works in call-by-name languages. If you want to use it in any normal call-by-value language, then you will need the related z-combinator (the y-combinator will diverge/infinite-loop).
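For instance, in Python (a call-by-value language) the plain Y diverges, but the eta-expanded Z-combinator works (a sketch, mirroring the fib example above):
# Z = λf.(λx.f (λv. x x v)) (λx.f (λv. x x v))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

fib = Z(lambda rec: lambda n: n if n <= 1 else rec(n - 1) + rec(n - 2))
print(fib(10))  # 55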
The this-operator can simplify your life:
var Y = function(f) {
    return (function(g) {
        return g(g);
    })(function(h) {
        return function() {
            return f.apply(h(h), arguments);
        };
    });
};
Then you avoid the extra function:
var fac = Y(function(n) {
    return n == 0 ? 1 : n * this(n - 1);
});
Finally, you call fac(5).
I think the best way to answer this is to pick a language, like JavaScript:
function factorial(num)
{
    // If the number is less than 0, reject it.
    if (num < 0) {
        return -1;
    }
    // If the number is 0, its factorial is 1.
    else if (num == 0) {
        return 1;
    }
    // Otherwise, call this recursive procedure again.
    else {
        return (num * factorial(num - 1));
    }
}
Now rewrite it so that it doesn't use the name of the function inside the function, but still calls it recursively.
The only place the function name factorial should be seen is at the call site.
Hint: you can't use names of functions, but you can use names of parameters.
Work the problem. Don't look it up. Once you solve it, you will understand what problem the y-combinator solves.
