How does primitive recursion differ from "normal" recursion?

I am currently reading Simon Thompson's The Craft of Functional Programming and when describing recursion, he also mentions a form of recursion called Primitive Recursion.
Can you please explain how this type of recursion is different from "normal" recursive functions?
Here's an example of a primitive recursive function (in Haskell):
power2 n
  | n == 0 = 1
  | n > 0  = 2 * power2 (n - 1)

A simplified answer is that primitive recursive functions are those defined in terms of other primitive recursive functions and by recursion on the structure of the natural numbers. Natural numbers are conceptually like this:
data Nat
  = Zero
  | Succ Nat -- Succ is short for 'successor of', i.e. n + 1
This means you can recurse on them like this:
f Zero = ...
f (Succ n) = ...
We can write your example as:
power2 Zero = Succ Zero -- (Succ 0) == 1
power2 (Succ n) = 2 * power2 n -- this is allowed because (*) is primitive recursive as well
Composition of primitive recursive functions is also primitive recursive.
Another example is Fibonacci numbers:
fib Zero = Zero
fib (Succ Zero) = (Succ Zero)
fib (Succ n@(Succ n')) = fib n + fib n' -- addition is primitive recursive
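To make the building blocks explicit, here is a small runnable Haskell sketch of my own (the multiplication by 2 is spelled out so that everything reduces to primitive recursion on Nat and composition):
data Nat = Zero | Succ Nat

add :: Nat -> Nat -> Nat
add Zero     m = m                -- base case may not recurse
add (Succ n) m = Succ (add n m)   -- step case uses only the recursive result

mul :: Nat -> Nat -> Nat
mul Zero     _ = Zero
mul (Succ n) m = add m (mul n m)  -- built by composition with add

power2 :: Nat -> Nat
power2 Zero     = Succ Zero
power2 (Succ n) = mul (Succ (Succ Zero)) (power2 n)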

Primitive recursive functions are a (mathematician's) natural response to the halting problem: they strip away the power to do arbitrary unbounded self-recursion.
Consider an "evil" function
f n
  | isOddPerfect n = True    -- hypothetical predicate: n is an odd perfect number
  | otherwise      = f (n + 2)
Does f terminate? You can't know without solving the open problem of whether there are odd perfect numbers. It's the ability to create functions like these that makes the halting problem hard.
Primitive recursion as a construct doesn't let you do that; the point is to ban calls like "f (n + 2)" while still remaining as flexible as possible -- you can't primitive-recursively define f(n) in terms of f(n+1).
Note that just because a function is not primitive recursive does not mean it doesn't terminate; Ackermann's function is the canonical example.
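Here is the usual two-argument Ackermann function written in Haskell (for non-negative arguments); it terminates on all inputs, yet it grows faster than any primitive recursive function:
ackermann :: Integer -> Integer -> Integer
ackermann 0 n = n + 1
ackermann m 0 = ackermann (m - 1) 1
ackermann m n = ackermann (m - 1) (ackermann m (n - 1))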

Equivalently, the primitive recursive functions are exactly those that can be implemented using only bounded loops, i.e. loops whose number of iterations is fixed before the loop starts (no while-loops or general recursion).
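For intuition, a bounded loop can be written in Haskell as a fold over a fixed range; factorial, for example (a sketch of my own):
-- the loop runs exactly n times; the bound is fixed before it starts
factorial :: Integer -> Integer
factorial n = foldl (*) 1 [1 .. n]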

http://en.wikipedia.org/wiki/Primitive_recursive_function

How to prove that a recursive function has some value

Here is a trivial function and a lemma:
fun count_from :: "nat ⇒ nat ⇒ nat list" where
  "count_from y 0 = []"
| "count_from y (Suc x) = y # count_from (Suc y) x"
lemma "count_from 3 5 = [3,4,5,6,7]"
It's just an example. The real function is more complicated.
Could you please suggest how to prove such lemmas?
I redefined the function using tail-recursion and proved the lemma as follows:
fun count_from2 :: "nat ⇒ nat ⇒ nat list ⇒ nat list" where
  "count_from2 y 0 ys = ys"
| "count_from2 y (Suc x) ys = count_from2 (Suc y) x (ys @ [y])"
lemma "count_from2 3 5 [] = xs ⟹ xs = [3,4,5,6,7]"
apply (erule count_from2.elims)
apply simp
apply (drule_tac s="xs" in sym)
apply (erule count_from2.elims)
apply simp
apply (drule_tac s="xs" in sym)
apply (erule count_from2.elims)
apply simp
apply (drule_tac s="xs" in sym)
apply (erule count_from2.elims)
by auto
For sure it's not an adequate solution.
I have several questions:
Is it preferable to define functions using tail recursion? Does it usually simplify theorem proving?
Why can't the function's simplification rules (count_from.simps or count_from2.simps) be applied?
Should I define introduction rules to prove the first lemma?
Is it possible to apply a function induction rule to prove such lemmas?
Your question might be better phrased as ‘How do I evaluate a recursively-defined function and get that evaluation as a theorem?’
The answer is that usually the simplifier should do a decent job at evaluating it. The problem here is that numerals like 1, 2, 3 use a binary representation of the natural numbers, whereas the function is defined by pattern matching on 0 and Suc. This is the reason why your simps cannot be applied: they only match on terms of the form count_from ?y 0 or count_from ?y (Suc ?x) and count_from 3 5 is neither.
What you can do to move things along is to use the theorem collection eval_nat_numeral, which simply rewrites numerals like 1, 2, 3 into successor notation:
lemma "count_from 3 5 = [3,4,5,6,7]"
by (simp add: eval_nat_numeral)
Another possibility is to use the code_simp and eval proof methods, which try to prove a statement that is ‘executable’ in some sense by evaluating it and checking that you get True. They both work fine here:
lemma "count_from 3 5 = [3,4,5,6,7]"
by code_simp
The difference between the two is that code_simp uses the simplifier, which gives you a ‘proper’ proof that goes through the Isabelle kernel (but this can be very slow for bigger examples), whereas eval uses the code generator as a trusted oracle and is thus potentially less trustworthy (although I have never seen a problem in practice) but much faster.
As for your other questions: No, I don't see how induction applies here. I don't know what you mean by defining introduction rules (what would they look like?). And tail-recursion does not really have any advantages for proving things – in fact, if you ‘artificially’ make function definitions tail-recursive as you have done for count_from2 you actually make things slightly more difficult, since any properties you want to prove then require additional generalisation before you can prove them by induction. A classic example is normal vs tail-recursive list reversal (I think you can find that in the ‘Programming and Proving’ tutorial).
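To see the generalisation issue concretely, here is a small Haskell sketch of that classic example (the names are mine):
rev :: [a] -> [a]
rev []       = []
rev (x : xs) = rev xs ++ [x]

-- tail-recursive version with an accumulator
revApp :: [a] -> [a] -> [a]
revApp []       acc = acc
revApp (x : xs) acc = revApp xs (x : acc)

-- To prove revApp xs [] = rev xs by induction on xs, the statement must
-- first be generalised to: revApp xs acc = rev xs ++ acc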
Also note that a function very much like your count_from already exists in the library: it is called upt :: nat ⇒ nat ⇒ nat list and has custom syntax of the form [a..<b]. It gives you the list of natural numbers between a and b-1 in ascending order. It is a good idea to look at the ‘What's in Main’ document to get an idea of what is already available.

Introducing fixed representation for a quotient type in Isabelle

This question is better explained with an example. Suppose I want to prove the following lemma:
lemma int_inv: "(n::int) - (n::int) = (0::int)"
How I'd informally prove this is something along these lines:
Lemma: n - n = 0 for any integer n, where 0 = abs_int(0,0).
Proof:
Let abs_int(a,b) = n for some fixed natural numbers a and b.
--- some complex and mind blowing argument here ---
That means it suffices to prove that a+b+0 = a+b+0, which is true by reflexivity.
QED.
However, I'm having trouble with the first step "Let abs_int(a,b) = n". The let statement doesn't seem to be made for this, as it only allows one term on the left side, so I'm at a loss as to how to introduce the variables a and b in an arbitrary representation for n.
How may I introduce a fixed representation for a quotient type so I may use the variables in it?
Note: I know the statement above can be proved by auto, and the problem may be sidestepped by rewriting the lemma as lemma int_inv: "Abs_Integ (a, b) - Abs_Integ (a, b) = (0::int)". However, I'm looking specifically for a way to prove it by introducing an arbitrary representation in the proof.
You can introduce a concrete representation with the theorem int.abs_induct. However, you almost never want to do that manually.
The general method of proving statements about quotients is to first state an equivalent theorem about the underlying relation, and then use the transfer tool. It would've helped if your example wasn't automatically discharged by automation... in fact, let's create our own little int type so that it isn't:
theory Scratch
imports Main
begin
(* intrel is not shown in the question; the standard relation, under which
   the pair (a, b) represents the integer a - b, is assumed here: *)
fun intrel :: "nat × nat ⇒ nat × nat ⇒ bool" where
  "intrel (a, b) (c, d) ⟷ (a + d = b + c)"
quotient_type int = "nat × nat" / "intrel"
  morphisms Rep_Integ Abs_Integ
proof (rule equivpI)
  show "reflp intrel" by (auto simp: reflp_def)
  show "symp intrel" by (auto simp: symp_def)
  show "transp intrel" by (auto simp: transp_def)
qed
lift_definition sub :: "int ⇒ int ⇒ int"
  is "λ(x, y) (u, v). (x + v, y + u)"
  by auto
lift_definition zero :: "int" is "(0, 0)".
Now, we have
lemma int_inv: "sub n n = zero"
apply transfer
proof (prove)
goal (1 subgoal):
1. ⋀n. intrel ((case n of (x, y) ⇒ λ(u, v). (x + v, y + u)) n) (0, 0)
So, the version we want to prove is
lemma int_inv': "intrel ((case n of (x, y) ⇒ λ(u, v). (x + v, y + u)) n) (0, 0)"
by (induct n) simp
Now we can transfer it with
lemma int_inv: "sub n n = zero"
by transfer (fact int_inv')
Note that the transfer proof method is backtracking: it will try many possible transfers until one of them succeeds. Note, however, that this backtracking doesn't apply across separate apply commands. Thus you will always want to write a transfer proof as by transfer something_simple, instead of, say, proof transfer.
You can see the many possible versions with
apply transfer
back back back back back
Note also, that if your theorem mentions constants about int which weren't defined with lift_definition, you will need to prove a transfer rule for them separately. There are some examples of that here.
In general, after defining a quotient you will want to "forget" about its underlying construction as soon as possible, proving enough properties by transfer so that the rest can be proven without peeking into your type's construction.

Prove that the powerset of a finite set is finite using Coq

While trying to prove some things, I encountered an innocent looking claim that I failed to prove in Coq. The claim is that for a given Finite Ensemble, the powerset is also finite. The statement is given in the Coq code below.
I looked through the Coq documentation on finite sets and facts about finite sets and powersets, but I could not find something that deconstructs the powerset into a union of subsets (such that the Union_is_finite constructor can be used). Another approach may be to show that the cardinal number of the powerset is 2^|S| but here I certainly have no idea how to approach the proof.
From Coq Require Export Sets.Ensembles.
From Coq Require Export Sets.Powerset.
From Coq Require Export Sets.Finite_sets.
Lemma powerset_finite {T} (S : Ensemble T) :
Finite T S -> Finite (Ensemble T) (Power_set T S).
Proof.
(* I don't know how to proceed. *)
Admitted.
I did not solve it completely, because I myself struggled a lot with this proof; I merely pushed it along your line of thought. The crux of the problem is proving that the cardinality of the power set of a set of n elements is 2^n.
From Coq Require Export Sets.Ensembles.
From Coq Require Export Sets.Powerset.
From Coq Require Export Sets.Finite_sets.
From Coq Require Export Sets.Finite_sets_facts.
Fixpoint exp (n m : nat) : nat :=
  match m with
  | 0 => 1
  | S m' => n * (exp n m')
  end.
Theorem power_set_empty :
forall (U : Type), Power_set _ (Empty_set U) = Singleton _ (Empty_set _).
Proof with auto with sets.
intros U.
apply Extensionality_Ensembles.
unfold Same_set. split.
+ unfold Included. intros x Hin.
inversion Hin; subst.
apply Singleton_intro.
symmetry. apply less_than_empty; auto.
+ unfold Included. intros x Hin.
constructor. inversion Hin; subst.
unfold Included; intros; assumption.
Qed.
Lemma cardinality_power_set :
forall (U : Type) (A : Ensemble U) (n : nat),
cardinal U A n -> cardinal _ (Power_set _ A) (exp 2 n).
Proof.
intros U A n. revert A.
induction n; cbn; intros He Hc.
+ inversion Hc; subst. rewrite power_set_empty.
rewrite <- Empty_set_zero'.
constructor; repeat auto with sets.
+ inversion Hc; subst; clear Hc.
Admitted.
Lemma powerset_finite {T} (S : Ensemble T) :
Finite T S -> Finite (Ensemble T) (Power_set T S).
Proof.
intros Hf.
destruct (finite_cardinal _ S Hf) as [n Hc].
eapply cardinal_finite with (n := exp 2 n).
apply cardinality_power_set; auto.
Qed.
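The counting argument behind the admitted cardinality lemma can be seen in a small Haskell sketch (mine, purely for intuition; it is not part of the Coq proof):
-- each new element x doubles the subsets: those without x and those with x,
-- so a set of n elements has 2^n subsets
subsets :: [a] -> [[a]]
subsets []       = [[]]
subsets (x : xs) = subsets xs ++ map (x :) (subsets xs)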

(Scheme) Tail recursive modular exponentiation

I have an assignment to make a tail-recursive function that takes 3 integers (possibly very large), p, q and r, and calculates the modulo of the division (p^q)/r. I figured out how to make a function that achieves the goal, but it is not tail-recursive.
(define (mod-exp p q r)
  (if (= 0 p)
      0
      (if (= 0 q)
          1
          (if (= 0 (remainder q 2)) ; the parity test is on the exponent q
              (remainder (* (mod-exp p (quotient q 2) r)
                            (mod-exp p (quotient q 2) r))
                         r)
              (remainder (* (remainder p r)
                            (remainder (mod-exp p (- q 1) r) r))
                         r)))))
I'm having a hard time wrapping my head around making this tail-recursive; I don't see how I can "accumulate" the remainder.
I'm pretty much restricted to using the basic math operators and quotient and remainder for this task.
I see that you're implementing binary exponentiation, with the extra feature that it's reduced mod r.
What you may want to do is take a normal (tail-recursive) binary exponentiation algorithm and simply change the 2-ary functions + and * to your own user-defined 3-ary functions +/mod and */mod, which also take r and reduce the result mod r before returning it.
Now how do you do binary exponentiation in a tail-recursive way? You need the main function to call into a helper function that takes an extra accumulator parameter (initial value 1). This is kind of similar to tail-recursive REVERSE using a helper function REVAPPEND, if you're familiar with that.
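As a sketch of that idea (written in Haskell rather than Scheme, with names of my own; translate the structure back to Scheme for the assignment):
-- tail-recursive binary exponentiation, reduced mod r at every step;
-- the invariant is: result = acc * b^e (mod r)
powMod :: Integer -> Integer -> Integer -> Integer
powMod p q r = go (p `mod` r) q 1
  where
    go _ 0 acc = acc
    go b e acc
      | even e    = go ((b * b) `mod` r) (e `div` 2) acc
      | otherwise = go ((b * b) `mod` r) (e `div` 2) ((acc * b) `mod` r)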
Hope that helps and feel free to ask if you need more information.

How to show that addition is primitive recursive?

How do I show, with example numbers, that addition is primitive recursive?
I understand from the proof why it is primitive recursive, but I just can't picture how it works primitive-recursively on actual numbers.
To show that a function φ is primitive recursive, it suffices to provide a finite sequence of primitive recursive functions beginning with the constant, successor and projection functions and terminating with φ such that each function is constructed from prior functions by composition and primitive recursion. The primitive recursive addition function is defined
add(0,x) = φ(x)
add(n + 1,x) = ψ(n,x,add(n,x))
where φ = P[1/1]
ψ = S ∘ P[3/3]
where P[m/n] is the m-ary projection function returning its nth argument, for 1 <= n <= m. To demonstrate that add is primitive recursive, we must construct φ and ψ from the basic functions:
1. P[1/1] [Axiom]
2. P[3/3] [Axiom]
3. S [Axiom]
4. S ∘ P[3/3] [2,3 Composition]
5. PR(P[1/1], S ∘ P[3/3]) [1,4 Primitive Recursion]
The function φ is provided by the axioms of primitive recursive functions. The function ψ is constructed by composition from the primitive recursive functions S and P[3/3] in step (4). Finally, the function add is constructed from φ and ψ in step (5) by primitive recursion. To see how a value is computed by a primitive recursive function such as add, it suffices to systematically substitute the right-hand sides of function definitions where appropriate, then simplify. I've collapsed substitution and simplification of composition in the following example:
add(2,3) = S(P[3/3](1,3,add(1,3))) [Def. ψ]
= S(P[3/3](1,3,S(P[3/3](0,3,add(0,3))))) [Def. ψ]
= S(P[3/3](1,3,S(P[3/3](0,3,P[1/1](3))))) [Def. φ]
= S(P[3/3](1,3,S(P[3/3](0,3,3)))) [Def. P[1/1]]
= S(P[3/3](1,3,S(3))) [Def. P[3/3]]
= S(P[3/3](1,3,4)) [Def. S]
= S(4) [Def. P[3/3]]
= 5 [Def. S]
It's unclear precisely what you're asking, so I gave a general overview of the primitive recursive definition of addition, the proof that addition is primitive recursive, and provided an example computation. If you're still unclear, it might be helpful to perform computations on small values of primitive recursive functions.
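To experiment with this yourself, here is a hedged Haskell sketch (my own encoding, not standard library code) of the primitive recursion scheme and of add built from it, for natural-number arguments:
-- pr phi psi encodes the primitive recursion scheme PR(φ, ψ):
--   f(0, x)     = φ(x)
--   f(n + 1, x) = ψ(n, x, f(n, x))
pr :: (Integer -> Integer) -> (Integer -> Integer -> Integer -> Integer)
   -> (Integer -> Integer -> Integer)
pr phi _   0 x = phi x
pr phi psi n x = psi (n - 1) x (pr phi psi (n - 1) x)

-- add = PR(P[1/1], S ∘ P[3/3]): φ is the 1-ary projection (identity),
-- ψ applies the successor to its third argument
add :: Integer -> Integer -> Integer
add = pr id (\_ _ rec -> rec + 1)

-- add 2 3 evaluates to 5, matching the hand computation above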
