Composition of a function and a value in Julia

I'd like to avoid brackets when applying a composition of functions to a value. I came up with the idea of composing a function and a value:
julia> ∘(f::Function, x::Number)=f(x)
∘ (generic function with 2 methods)
julia> sqrt ∘ abs ∘ -2
1.4142135623730951
julia> sqrt ∘ abs ∘ (1-3)
1.4142135623730951
My question is how to declare the x argument to accept any "value" that is not a function, so that it does not override the existing method ∘(f::Function, g::Function).

What does composing a function and a value mean? Might it mean scaling by a constant value, e.g. sqrt ∘ abs ∘ (x -> -2x)? (Note that this gives you a function, not the result.) But it seems what you would like to do is simply apply the function to a value; in that case you can write -2 |> sqrt ∘ abs if you really, really hate brackets. I agree with you that overloading ∘ is not a good idea, because it breaks the concept of function composition.
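For example, piping gives the same result as your composition-with-a-value methods above:
julia> -2 |> sqrt ∘ abs
1.4142135623730951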

I don't know if it is a good idea, but you could probably use this:
∘(f, g) = f(g) # fallback: apply f to the value g
∘(f::S, g::T) where {S<:Function, T<:Function} = (x...) -> f(g(x...)) # but keep real composition for functions
Edit: I guess you don't want to redefine the behaviour for every subtype of Function.
Edit 2: a bigger redefinition was needed.
You don't avoid brackets though:
sqrt ∘ abs ∘ (x->2x) ∘ 1

Related

How to prove that a recursive function has some value

Here is a trivial function and a lemma:
fun count_from where
"count_from y 0 = []"
| "count_from y (Suc x) = y # count_from (Suc y) x"
lemma "count_from 3 5 = [3,4,5,6,7]"
It's just an example. The real function is more complicated.
Could you please suggest how to prove such lemmas?
I redefined the function using tail-recursion and proved the lemma as follows:
fun count_from2 where
"count_from2 y 0 ys = ys"
| "count_from2 y (Suc x) ys = count_from2 (Suc y) x (ys # [y])"
lemma "count_from2 3 5 [] = xs ⟹ xs = [3,4,5,6,7]"
apply (erule count_from2.elims)
apply simp
apply (drule_tac s="xs" in sym)
apply (erule count_from2.elims)
apply simp
apply (drule_tac s="xs" in sym)
apply (erule count_from2.elims)
apply simp
apply (drule_tac s="xs" in sym)
apply (erule count_from2.elims)
by auto
For sure it's not an adequate solution.
I have several questions:
Is it preferred to define functions using tail recursion? Does it usually simplify theorem proving?
Why can't the function simplification rules (count_from.simps or count_from2.simps) be applied?
Should I define introduction rules to prove the first lemma?
Is it possible to apply a function induction rule to prove such lemmas?
Your question might be better phrased as ‘How do I evaluate a recursively-defined function and get that evaluation as a theorem?’
The answer is that usually the simplifier should do a decent job at evaluating it. The problem here is that numerals like 1, 2, 3 use a binary representation of the natural numbers, whereas the function is defined by pattern matching on 0 and Suc. This is the reason why your simps cannot be applied: they only match on terms of the form count_from ?y 0 or count_from ?y (Suc ?x) and count_from 3 5 is neither.
What you can do to move things along is to use the theorem collection eval_nat_numeral, which simply rewrites numerals like 1, 2, 3 into successor notation:
lemma "count_from 3 5 = [3,4,5,6,7]"
by (simp add: eval_nat_numeral)
Two other possibilities are the code_simp and eval proof methods, which try to prove a statement that is ‘executable’ in some sense by evaluating it and checking that you get True. They both work fine here:
lemma "count_from 3 5 = [3,4,5,6,7]"
by code_simp
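and likewise with the code-generator-based method:
lemma "count_from 3 5 = [3,4,5,6,7]"
by eval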
The difference between the two is that code_simp uses the simplifier, which gives you a ‘proper’ proof that goes through the Isabelle kernel (but this can be very slow for bigger examples), whereas eval uses the code generator as a trusted oracle and is thus potentially less trustworthy (although I have never seen a problem in practice) but much faster.
As for your other questions: No, I don't see how induction applies here. I don't know what you mean by defining introduction rules (what would they look like?). And tail-recursion does not really have any advantages for proving things – in fact, if you ‘artificially’ make function definitions tail-recursive as you have done for count_from2 you actually make things slightly more difficult, since any properties you want to prove then require additional generalisation before you can prove them by induction. A classic example is normal vs tail-recursive list reversal (I think you can find that in the ‘Programming and Proving’ tutorial).
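A minimal sketch of that example, roughly as it appears in the tutorial: the correctness property of the tail-recursive reversal only goes through by induction after generalising over the accumulator.
fun itrev :: "'a list ⇒ 'a list ⇒ 'a list" where
"itrev [] ys = ys"
| "itrev (x # xs) ys = itrev xs (x # ys)"
lemma "itrev xs ys = rev xs @ ys"
by (induction xs arbitrary: ys) auto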
Also note that a function very much like your count_from already exists in the library: it is called upt :: nat ⇒ nat ⇒ nat list and has custom syntax of the form [a..<b]. It gives you the list of natural numbers between a and b-1 in ascending order. It is a good idea to look at the ‘What's in Main’ document to get an idea of what is already available in the library.
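For instance, the list from your first lemma is just [3..<8]:
lemma "[3..<8] = [3, 4, 5, 6, 7]"
by eval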

Isabelle: verify tautological formula

I want to create a function F in Isabelle that is given a formula
formula = pr int | neg formula | imp formula formula
and yields True if the formula is tautological and False otherwise.
For example:
F( φ ⇒ φ ) = True
F( φ ⇒ (ψ ⇒ φ) ) = True
F( ψ ⇒ φ ) = False
Can anyone help me? I find it really difficult to understand Isabelle's documentation, and I cannot find such a function (which I think should already exist).
In any case, if you want to talk about tautology of formulae (or about any semantic property of formulae), you first need to define semantics for your formulae, i.e. a function eval :: formula ⇒ (int ⇒ bool) ⇒ bool (assuming that the pr constructor represents free variables) that takes a formula and a variable assignment and returns whether the formula holds for that assignment or not.
You can define such a function by recursion over the formula using the primrec or fun command. There are many examples of those in the ‘Programming and Proving’ tutorial on the Isabelle website.
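As a rough sketch, assuming pr represents a variable and imp is implication (my reading of your grammar):
datatype formula = pr int | neg formula | imp formula formula
primrec eval :: "formula ⇒ (int ⇒ bool) ⇒ bool" where
"eval (pr v) σ = σ v"
| "eval (neg φ) σ = (¬ eval φ σ)"
| "eval (imp φ ψ) σ = (eval φ σ ⟶ eval ψ σ)"
definition taut :: "formula ⇒ bool" where
"taut φ = (∀σ. eval φ σ)" (* quantifying over all assignments is not directly executable *)
taut then expresses tautology, although proving it for a concrete formula still requires reasoning about all assignments.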

How to extract the instantiated variable in Isabelle?

I am trying to prove the following in Isabelle:
theorem map_fold: "∃h b. (map f xs) = foldr h xs b"
apply (induction xs)
apply auto
done
How can I get the instantiated value of h and b?
An approach that sometimes works for this purpose is to state a schematic lemma:
schematic_lemma "map f xs = foldr ?h xs ?b"
apply (induct xs)
apply simp
...
Methods like simp or rule can instantiate schematic variables during the proof (a result of unification). If you are able to complete the proof, then you can just look at the resulting lemma to see what the final instantiations were.
Beware that schematic variables can be a bit tricky: sometimes simp will instantiate a schematic variable in a way that makes the current goal trivially provable, but simultaneously makes other subgoals unsolvable.
In this specific case, Isabelle is able to instantiate ?b with no problem, but it can't determine ?h by unification. In general, schematic variables with function types are much trickier to handle.
In the end, I did something like what Manuel suggested: First, state a lemma with ordinary variables (lemma "map f xs = foldr h xs b"). Then see where the proof by induction gets stuck, and incrementally refine the statement until it is provable.
One way is to use SOME:
h := SOME h. ∃b. map f xs = foldr h xs b
b := SOME b. map f xs = foldr h xs b
Using your map_fold theorem and some fiddling around with someI_ex, you could prove that with these definitions, map f xs = foldr h xs b does indeed hold.
However, while this logically gives you values of h and b, I expect you will not be very satisfied with them, because you don't actually see what h and b are; and there is no way (logically) to do that either.
In some cases, you can also formulate a theorem stating “There are f, xs such that no h, b exist with map f xs = foldr h xs b” and get nitpick to find a counterexample for that statement, but this case is too complicated for nitpick, as it would have to find a function on an infinite domain that depends on another function on an infinite domain.
I do not think there is a way for you to actually get the existential witnesses h and b out of the theorem you proved as concrete values. You will just have to find them yourself by inspection of the induction cases and find that they are h = λx xs. f x # xs and b = [].
This is by far the easiest solution.
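Stated with those concrete witnesses, the lemma can also be proved directly:
lemma "map f xs = foldr (λx xs. f x # xs) xs []"
by (induction xs) auto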
Update: Proof extraction
Upon re-reading this thread today, I actually remembered that proof extraction does exist in Isabelle. It requires explicit proof terms to be computed for all theorems, so you need to start Isabelle with isabelle jedit -l HOL-Proofs. Then you can do this:
theorem map_fold: "∃h b. (map f xs) = foldr h xs b"
by (induction xs) auto
extract map_fold
This defines a constant map_fold of type ('a ⇒ 'b) ⇒ 'a list ⇒ ('a ⇒ 'b list ⇒ 'b list) × 'b list, i.e. given a mapping function and a list, it gives you the function and the initial state you have to put into foldr in order to get the same result. You can look at the definition using thm map_fold_def. Simplified a bit, it looks like this:
map_fold f xs =
rec_list (λx xa. default, []) (λx xa H. (λa b. f a # map f xa, default)) xs
This is a bit difficult to read, but you can see the [] and the f a # map f xa.
Unfortunately, proof terms get pretty big, so I doubt this will be of much use for anything more than toy examples.

How to show that addition is primitive recursive?

How do I show, with a concrete example on numbers, that addition is primitive recursive?
I understand why it is primitive recursive from the proof, but I just can't picture how it works on actual numbers.
To show that a function φ is primitive recursive, it suffices to provide a finite sequence of primitive recursive functions beginning with the constant, successor and projection functions and terminating with φ such that each function is constructed from prior functions by composition and primitive recursion. The primitive recursive addition function is defined
add(0,x) = φ(x)
add(n + 1,x) = ψ(n,x,add(n,x))
where φ = P[1/1]
ψ = S ∘ P[3/3]
where P[m/n] is the m-ary projection function returning its nth argument, for 1 <= n <= m. To demonstrate that add is primitive recursive, we must construct φ and ψ from the basic functions:
1. P[1/1] [Axiom]
2. P[3/3] [Axiom]
3. S [Axiom]
4. S ∘ P[3/3] [2,3 Composition]
5. PR(P[1/1], S ∘ P[3/3]) [1,4 Primitive Recursion]
The function φ is provided by the axioms of primitive recursive functions. The function ψ is constructed by composition from the primitive recursive functions S and P[3/3] in step (4). Finally, the function add is constructed from φ and ψ in step (5) by primitive recursion. To see how a value is computed by a primitive recursive function such as add, it suffices to systematically substitute the right-hand sides of function definitions where appropriate, then simplify. I've collapsed substitution and simplification of composition in the following example:
add(2,3) = S(P[3/3](1,3,add(1,3))) [Def. ψ]
= S(P[3/3](1,3,S(P[3/3](0,3,add(0,3))))) [Def. ψ]
= S(P[3/3](1,3,S(P[3/3](0,3,P[1/1](3))))) [Def. φ]
= S(P[3/3](1,3,S(P[3/3](0,3,3)))) [Def. P[1/1]]
= S(P[3/3](1,3,S(3))) [Def. P[3/3]]
= S(P[3/3](1,3,4)) [Def. S]
= S(4) [Def. P[3/3]]
= 5 [Def. S]
It's unclear precisely what you're asking, so I gave a general overview of the primitive recursive definition of addition, the proof that addition is primitive recursive, and provided an example computation. If you're still unclear, it might be helpful to perform computations on small values of primitive recursive functions.
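If it helps, here is a hypothetical Haskell transcription of the construction above, writing the projections as ordinary functions (the names s, p11, p33 are mine):
s :: Int -> Int
s x = x + 1 -- successor S
p11 :: Int -> Int
p11 x = x -- P[1/1]
p33 :: Int -> Int -> Int -> Int
p33 _ _ z = z -- P[3/3]
-- add = PR(P[1/1], S ∘ P[3/3]); assumes n >= 0
add :: Int -> Int -> Int
add 0 x = p11 x -- add(0,x) = phi(x)
add n x = s (p33 (n - 1) x (add (n - 1) x)) -- add(n+1,x) = psi(n,x,add(n,x))
Evaluating add 2 3 reproduces the hand computation above and yields 5.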

How does primitive recursion differ from "normal" recursion?

I am currently reading Simon Thompson's The Craft of Functional Programming and when describing recursion, he also mentions a form of recursion called Primitive Recursion.
Can you please explain how this type of recursion is different from "normal" recursive functions?
Here's an example of a primitive recursion function (in Haskell):
power2 n
| n == 0 = 1
| n > 0 = 2 * power2(n - 1)
A simplified answer is that primitive recursive functions are those which are defined in terms of other primitive recursive functions, and recursion on the structure of natural numbers. Natural numbers are conceptually like this:
data Nat
= Zero
| Succ Nat -- Succ is short for 'successor of', i.e. n+1
This means you can recurse on them like this:
f Zero = ...
f (Succ n) = ...
We can write your example as:
power2 Zero = Succ Zero -- (Succ 0) == 1
power2 (Succ n) = 2 * power2 n -- this is allowed because (*) is primitive recursive as well
Composition of primitive recursive functions is also primitive recursive.
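For instance, here is a sketch of addition on Nat, which recurses only on the structure of its first argument:
add :: Nat -> Nat -> Nat
add Zero m = m -- add(0, m) = m
add (Succ n) m = Succ (add n m) -- add(n+1, m) = S(add(n, m))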
Another example is Fibonacci numbers:
fib Zero = Zero
fib (Succ Zero) = (Succ Zero)
fib (Succ n@(Succ n')) = fib n + fib n' -- addition is primitive recursive
Primitive recursive functions are a (mathematician's) natural response to the halting problem, by stripping away the power to do arbitrary unbounded self recursion.
Consider an "evil" function
f n
| n is an odd perfect number = true
| otherwise = f n+2
Does f terminate? You can't know without solving the open problem of whether there are odd perfect numbers. It's the ability to create functions like these that makes the halting problem hard.
Primitive recursion as a construct doesn't let you do that; the point is to ban the f (n + 2) call while still remaining as flexible as possible -- you can't primitive-recursively define f(n) in terms of f(n+1).
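To make that concrete, here is a sketch of a primitive recursion combinator on the Nat type above: the only recursive call it ever makes is on the structurally smaller predecessor, so a call like f (n + 2) simply cannot be expressed through it.
primRec :: a -> (Nat -> a -> a) -> Nat -> a
primRec base _ Zero = base
primRec base step (Succ n) = step n (primRec base step n) -- recursion only on the predecessor
-- e.g. addition: plus m = primRec m (\_ acc -> Succ acc)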
Note that just because a function is not primitive recursive does not mean it doesn't terminate; Ackermann's function is the canonical example.
The recursive functions that can be implemented using only bounded loops (loops whose number of iterations is fixed before the loop starts), rather than unbounded while loops or general recursion, are exactly the primitive recursive functions.
http://en.wikipedia.org/wiki/Primitive_recursive_function
