Given A∧B, what is the equivalent using just → and ⊕ (XOR)?

Consider the set of connectives consisting of just → and ⊕, where ⊕ is an exclusive OR connective: A⊕B is true if and only if A and B have the opposite truth values (one is true and the other one false).
Given A∧B, what is the equivalent formula using only → and ⊕ (XOR)?

Assuming that → is the material conditional:
A and B is equivalent to not (A implies not B).
not C is equivalent to (C implies C) xor C,
so
not B is equivalent to (B implies B) xor B
and
A implies not B is equivalent to A implies ((B implies B) xor B).
Finally, the equivalent expression is
((A implies ((B implies B) xor B)) implies (A implies ((B implies B) xor B))) xor (A implies ((B implies B) xor B))
or, in your notation:
((A → ((B → B) ⊕ B)) → (A → ((B → B) ⊕ B))) ⊕ (A → ((B → B) ⊕ B))
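The claimed equivalence is easy to verify mechanically with a truth table; here is a quick check in Python (used purely as a calculator; `implies` and `xor` are helper names introduced for this check):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def xor(p, q):
    return p != q

for a, b in product([False, True], repeat=2):
    not_b = xor(implies(b, b), b)       # (B → B) ⊕ B  ==  ¬B
    d = implies(a, not_b)               # D = A → ¬B
    formula = xor(implies(d, d), d)     # (D → D) ⊕ D  ==  ¬D
    assert formula == (a and b)         # agrees with A ∧ B on every row
```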
With some care you can surely minimize these formulas.
Checking the final formula on Wolfram Alpha.
A general framework to answer such questions is functional completeness.
The people at MathOverflow might be helpful.
EDIT
I've made a mess of copying the long formulas; corrected now.


Need hints about proving some intuitionistic logic statements

I'm new to Agda, and I'm new to dependently typed programming and proof assistants in general. I decided to get myself started by constructing simple intuitionistic logic proofs, using the definitions I found in Programming Language Foundations in Agda, and I had some success. However, I got confused when I tried to write the following proof:
∨-identity-indirect : {A B : Set} → (¬ A) ∧ (A ∨ B) → B
Proving this on paper would be fairly simple: expanding ¬ A, we have A → ⊥. So this statement becomes equivalent to (⊥ ∨ B) → B, which is obviously true.
I was able to successfully prove the latter part, that is, (⊥ ∨ B) → B:
∨-identity : {A : Set} → (⊥ ∨ A) → A
∨-identity (∨-left ())
∨-identity (∨-right A) = A
Then, I was able to write:
∨-identity-indirect ⟨ ¬A , A∨B ⟩ = ∨-identity ?
This suggests that I need to produce ⊥ ∨ B by having ¬A and A ∨ B. I'd like to somehow replace the A in A ∨ B with ¬A A, but I don't think there's a way of doing so.
When trying to apply the ∨-identity case analysis pattern to ∨-identity-indirect, I get an error message that A should be empty, but that's not obvious to me - I assume I need to somehow make this obvious to Agda, by making use of ¬A.
Am I on the right track, or am I getting this wrong completely? How should I go about writing this ∨-identity-indirect function?
You're probably trying to pattern match on a value of type ¬ A with (), which doesn't work, because ¬ A expands to A -> ⊥, i.e. it's a function that will only return you a ⊥ after you give it some A. Here is how you do that:
replace-A : {A B : Set} → (¬ A) → (A ∨ B) → ⊥ ∨ B
replace-A f (∨-left x) = ∨-left (f x)
replace-A _ (∨-right y) = ∨-right y
Having that, ∨-identity-indirect is straightforward:
∨-identity-indirect : {A B : Set} → (¬ A) ∧ (A ∨ B) → B
∨-identity-indirect ⟨ ¬A , A∨B ⟩ = ∨-identity (replace-A ¬A A∨B)
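For intuition only, here is the computational content of that proof sketched in Python rather than Agda (`replace_a` and `or_identity_indirect` are names invented for this sketch): ∨ is modeled as a tagged pair, and ¬ A as a refutation function that can never return normally, so the Left branch is dead code.

```python
def replace_a(refute, either):
    """(¬ A) → (A ∨ B) → ⊥ ∨ B: push the refutation into the Left case."""
    tag, val = either
    if tag == "left":
        return ("left", refute(val))  # refute never returns normally
    return either

def or_identity_indirect(refute, either):
    """(¬ A) ∧ (A ∨ B) → B: only the Right branch can survive."""
    tag, val = replace_a(refute, either)
    assert tag == "right"  # a value of type ⊥ ∨ B cannot actually be a Left
    return val
```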

Lexical Binding in Lisp

(let ((a 3))
  (let ((a 4)
        (b a))
    (+ a b)))
The above code evaluates to 7, the logic being that b takes the value of the outer a. According to my understanding, in lexical binding each use of let creates a fresh location. So why does the variable b in the binding (b a) not use the value of a from (a 4)?
Because that's what LET is specified to do. Bindings are done in parallel.
CL-USER 60 > (let ((a 3))
               (let ((a 4)
                     (b a))
                 (+ a b)))
7
The version where bindings are done in a sequential fashion is called LET*.
CL-USER 61 > (let ((a 3))
               (let* ((a 4)
                      (b a))
                 (+ a b)))
8
See Special Operator LET, LET*.
(let ((a 4)
      (b a))
  (+ a b)) ; ==> 7
Is equivalent to writing:
((lambda (a b)
   (+ a b))
 4
 a) ; ==> 7
Do you see from this version why it's logical that a and b are bound only after both 4 and a have been evaluated?
Now we have:
(let* ((a 4)
       (b a))
  (+ a b)) ; ==> 8
which is equivalent to:
(let ((a 4))
  (let ((b a))
    (+ a b))) ; ==> 8
Here the second let is in the body of the first. a is 4 when the expression for b is evaluated.
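The parallel/sequential distinction has a rough analogue outside Lisp: Python's tuple assignment evaluates the whole right-hand side before rebinding anything, while ordinary sequential assignment does not (an analogy only, not a claim about Lisp semantics):

```python
# LET-style parallel binding: the right-hand side (4, a) is evaluated
# completely before anything is rebound, so b sees the *outer* a.
a = 3
a, b = 4, a
assert a + b == 7

# LET*-style sequential binding: each assignment sees the previous one.
a = 3
a = 4
b = a          # b sees the *new* a
assert a + b == 8
```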

In pure functional languages, is there an algorithm to get the inverse function?

In pure functional languages like Haskell, is there an algorithm to get the inverse of a function, (edit) when it is bijective? And is there a specific way to program your function so it is?
In some cases, yes! There's a beautiful paper called Bidirectionalization for Free! which discusses a few cases -- when your function is sufficiently polymorphic -- where it is possible, completely automatically, to derive an inverse function. (It also discusses what makes the problem hard when the functions are not polymorphic.)
What you get out in the case your function is invertible is the inverse (with a spurious input); in other cases, you get a function which tries to "merge" an old input value and a new output value.
No, it's not possible in general.
Proof: consider bijective functions of type
type F = [Bit] -> [Bit]
with
data Bit = B0 | B1
Assume we have an inverter inv :: F -> F such that inv f . f ≡ id. Say we have tested it for the function f = id, by confirming that
inv f (repeat B0) -> (B0 : ls)
Since this first B0 in the output must have come after some finite time, we have an upper bound n both on the depth to which inv had actually evaluated our test input to obtain this result and on the number of times it can have called f. Define now a family of functions
g j (B1 : B0 : ... (n+j times) ... B0 : ls)
= B0 : ... (n+j times) ... B0 : B1 : ls
g j (B0 : ... (n+j times) ... B0 : B1 : ls)
= B1 : B0 : ... (n+j times) ... B0 : ls
g j l = l
Clearly, for all 0<j≤n, g j is a bijection, in fact self-inverse. So we should be able to confirm
inv (g j) (replicate (n+j) B0 ++ B1 : repeat B0) -> (B1 : ls)
but to fulfill this, inv (g j) would have needed to either
evaluate g j (B1 : repeat B0) to a depth of n+j > n, or
evaluate head $ g j l for at least n different lists matching replicate (n+j) B0 ++ B1 : ls.
Up to that point, at least one of the g j is indistinguishable from f, and since inv f hadn't done either of these evaluations, inv could not possibly have told it apart – short of doing some runtime-measurements on its own, which is only possible in the IO Monad.
⬜
You can look it up on Wikipedia; it's called Reversible Computing.
In general you can't do it, though, and none of the functional languages have that option. For example:
f :: a -> Int
f _ = 1
This function does not have an inverse.
Not in most functional languages, but in logic programming or relational programming, most things you define are in fact not functions but "relations", and these can be used in both directions. See for example Prolog or Kanren.
Tasks like this are almost always undecidable. You can have a solution for some specific functions, but not in general.
Here, you cannot even recognize which functions have an inverse. Quoting Barendregt, H. P. The Lambda Calculus: Its Syntax and Semantics. North Holland, Amsterdam (1984):
A set of lambda-terms is nontrivial if it is neither the empty nor the full set. If A and B are two nontrivial, disjoint sets of lambda-terms closed under (beta) equality, then A and B are recursively inseparable.
Let's take A to be the set of lambda terms that represent invertible functions and B the rest. Both are non-empty and closed under beta equality. So it's not possible to decide whether a function is invertible or not.
(This applies to the untyped lambda calculus. TBH I don't know if the argument can be directly adapted to a typed lambda calculus when we know the type of a function that we want to invert. But I'm pretty sure it will be similar.)
If you can enumerate the domain of the function and can compare elements of the range for equality, you can - in a rather straightforward way. By enumerate I mean having a list of all the elements available. I'll stick to Haskell, since I don't know Ocaml (or even how to capitalise it properly ;-)
What you want to do is run through the elements of the domain and see if they're equal to the element of the range you're trying to invert, and take the first one that works:
inv :: Eq b => [a] -> (a -> b) -> (b -> a)
inv domain f b = head [ a | a <- domain, f a == b ]
Since you've stated that f is a bijection, there's bound to be one and only one such element. The trick, of course, is to ensure that your enumeration of the domain actually reaches all the elements in a finite time. If you're trying to invert a bijection from Integer to Integer, using [0,1 ..] ++ [-1,-2 ..] won't work as you'll never get to the negative numbers. Concretely, inv ([0,1 ..] ++ [-1,-2 ..]) (+1) (-3) will never yield a value.
However, 0 : concatMap (\x -> [x,-x]) [1..] will work, as this runs through the integers in the following order [0,1,-1,2,-2,3,-3, and so on]. Indeed inv (0 : concatMap (\x -> [x,-x]) [1..]) (+1) (-3) promptly returns -4!
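For comparison, the same brute-force search is easy to mirror in Python with generators (a sketch only; `inv` and `integers` are ad-hoc names mirroring the Haskell above):

```python
from itertools import chain, count

def inv(domain, f):
    """Invert f by searching an enumeration of its domain.
    domain is a zero-argument callable so it can be re-enumerated."""
    return lambda b: next(a for a in domain() if f(a) == b)

def integers():
    """All integers in the order 0, 1, -1, 2, -2, 3, -3, ..."""
    return chain([0], (s * n for n in count(1) for s in (1, -1)))

# Mirrors: inv (0 : concatMap (\x -> [x,-x]) [1..]) (+1) (-3)  ==>  -4
assert inv(integers, lambda x: x + 1)(-3) == -4
```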
The Control.Monad.Omega package can help you run through lists of tuples etcetera in a good way; I'm sure there's more packages like that - but I don't know them.
Of course, this approach is rather low-brow and brute-force, not to mention ugly and inefficient! So I'll end with a few remarks on the last part of your question, on how to 'write' bijections. The type system of Haskell isn't up to proving that a function is a bijection - you really want something like Agda for that - but it is willing to trust you.
(Warning: untested code follows)
So can you define a datatype of Bijection s between types a and b:
data Bi a b = Bi {
  apply  :: a -> b,
  invert :: b -> a
}
along with as many constants (where you can say 'I know they're bijections!') as you like, such as:
notBi :: Bi Bool Bool
notBi = Bi not not
add1Bi :: Bi Integer Integer
add1Bi = Bi (+1) (subtract 1)
and a couple of smart combinators, such as:
idBi :: Bi a a
idBi = Bi id id
invertBi :: Bi a b -> Bi b a
invertBi (Bi a i) = (Bi i a)
composeBi :: Bi a b -> Bi b c -> Bi a c
composeBi (Bi a1 i1) (Bi a2 i2) = Bi (a2 . a1) (i1 . i2)
mapBi :: Bi a b -> Bi [a] [b]
mapBi (Bi a i) = Bi (map a) (map i)
bruteForceBi :: Eq b => [a] -> (a -> b) -> Bi a b
bruteForceBi domain f = Bi f (inv domain f)
I think you could then do invert (mapBi add1Bi) [1,5,6] and get [0,4,5]. If you pick your combinators in a smart way, I think the number of times you'll have to write a Bi constant by hand could be quite limited.
After all, if you know a function is a bijection, you'll hopefully have a proof-sketch of that fact in your head, which the Curry-Howard isomorphism should be able to turn into a program :-)
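The same combinator style translates to other languages too; here is a hedged Python sketch of the idea (all names are invented for this sketch, and as in the Haskell, nothing checks that the two functions really are mutually inverse):

```python
class Bi:
    """A pair of functions asserted (not proved) to be mutually inverse."""
    def __init__(self, apply, invert):
        self.apply = apply
        self.invert = invert

def invert_bi(b):
    return Bi(b.invert, b.apply)

def compose_bi(f, g):
    # forward: f then g; backward: undo g, then undo f
    return Bi(lambda x: g.apply(f.apply(x)),
              lambda y: f.invert(g.invert(y)))

def map_bi(b):
    return Bi(lambda xs: [b.apply(x) for x in xs],
              lambda ys: [b.invert(y) for y in ys])

add1_bi = Bi(lambda x: x + 1, lambda x: x - 1)

# Mirrors: invert (mapBi add1Bi) [1,5,6]  ==>  [0,4,5]
assert map_bi(add1_bi).invert([1, 5, 6]) == [0, 4, 5]
```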
I've recently been dealing with issues like this, and no, I'd say that (a) it's not difficult in many cases, but (b) it's not efficient at all.
Basically, suppose you have f :: a -> b, and that f is indeed a bijection. You can compute the inverse f' :: b -> a in a really dumb way:
import Data.List

-- | Class for types whose values are recursively enumerable.
class Enumerable a where
  -- | Produce the list of all values of type #a#.
  enumerate :: [a]

-- | Note, this is only guaranteed to terminate if #f# is a bijection!
invert :: (Enumerable a, Eq b) => (a -> b) -> b -> Maybe a
invert f b = find (\a -> f a == b) enumerate
If f is a bijection and enumerate truly produces all values of a, then you will eventually hit an a such that f a == b.
Types that have a Bounded and an Enum instance can trivially be made Enumerable. Pairs of Enumerable types can also be made Enumerable:
instance (Enumerable a, Enumerable b) => Enumerable (a, b) where
  enumerate = crossWith (,) enumerate enumerate

crossWith :: (a -> b -> c) -> [a] -> [b] -> [c]
crossWith f _ [] = []
crossWith f [] _ = []
crossWith f (x0:xs) (y0:ys) =
  f x0 y0 : interleave (map (f x0) ys)
                       (interleave (map (flip f y0) xs)
                                   (crossWith f xs ys))

interleave :: [a] -> [a] -> [a]
interleave xs [] = xs
interleave [] ys = ys
interleave (x:xs) ys = x : interleave ys xs
Same goes for disjunctions of Enumerable types:
instance (Enumerable a, Enumerable b) => Enumerable (Either a b) where
  enumerate = enumerateEither enumerate enumerate

enumerateEither :: [a] -> [b] -> [Either a b]
enumerateEither [] ys = map Right ys
enumerateEither xs [] = map Left xs
enumerateEither (x:xs) (y:ys) = Left x : Right y : enumerateEither xs ys
The fact that we can do this both for (,) and Either probably means that we can do it for any algebraic data type.
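The fair-interleaving idea is the essential trick; here is a Python sketch of fair pair enumeration by anti-diagonals (an illustration of the same goal, not a transcription of the Haskell `crossWith`):

```python
from itertools import count, islice

def naturals():
    return count(0)

def pairs(xs_factory, ys_factory):
    """Enumerate the cross product by anti-diagonals, so that every pair
    appears at a finite position even when both inputs are infinite."""
    for n in count(0):
        xs = list(islice(xs_factory(), n + 1))
        ys = list(islice(ys_factory(), n + 1))
        for i in range(n + 1):
            if i < len(xs) and n - i < len(ys):
                yield (xs[i], ys[n - i])

first = list(islice(pairs(naturals, naturals), 6))
assert (0, 0) in first and (1, 1) in first  # no pair is postponed forever
```

Re-slicing the factories on every diagonal is quadratic; it just keeps the sketch short.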
Not every function has an inverse. If you limit the discussion to one-to-one functions, the ability to invert an arbitrary function grants the ability to crack any cryptosystem. We kind of have to hope this isn't feasible, even in theory!
In some cases, it is possible to find the inverse of a bijective function by converting it into a symbolic representation. Based on this example, I wrote this Haskell program to find inverses of some simple polynomial functions:
bijective_function x = x*2 + 1

main = do
  print $ bijective_function 3
  print $ inverse_function bijective_function (bijective_function 3)

data Expr = X | Const Double
          | Plus Expr Expr | Subtract Expr Expr | Mult Expr Expr | Div Expr Expr
          | Negate Expr | Inverse Expr
          | Exp Expr | Log Expr | Sin Expr | Atanh Expr | Sinh Expr | Acosh Expr
          | Cosh Expr | Tan Expr | Cos Expr | Asinh Expr | Atan Expr | Acos Expr
          | Asin Expr | Abs Expr | Signum Expr
          deriving (Show, Eq)
instance Num Expr where
  (+) = Plus
  (-) = Subtract
  (*) = Mult
  abs = Abs
  signum = Signum
  negate = Negate
  fromInteger a = Const $ fromIntegral a

instance Fractional Expr where
  recip = Inverse
  fromRational a = Const $ realToFrac a
  (/) = Div
instance Floating Expr where
  pi = Const pi
  exp = Exp
  log = Log
  sin = Sin
  atanh = Atanh
  sinh = Sinh
  cosh = Cosh
  acosh = Acosh
  cos = Cos
  tan = Tan
  asin = Asin
  acos = Acos
  atan = Atan
  asinh = Asinh
fromFunction f = f X
toFunction :: Expr -> (Double -> Double)
toFunction X = \x -> x
toFunction (Negate a) = \x -> negate (toFunction a x)
toFunction (Const a) = const a
toFunction (Plus a b) = \x -> (toFunction a x) + (toFunction b x)
toFunction (Subtract a b) = \x -> (toFunction a x) - (toFunction b x)
toFunction (Mult a b) = \x -> (toFunction a x) * (toFunction b x)
toFunction (Div a b) = \x -> (toFunction a x) / (toFunction b x)
with_function func x = toFunction $ func $ fromFunction x
simplify X = X
simplify (Div (Const a) (Const b)) = Const (a/b)
simplify (Mult (Const a) (Const b)) | a == 0 || b == 0 = 0 | otherwise = Const (a*b)
simplify (Negate (Negate a)) = simplify a
simplify (Subtract a b) = simplify ( Plus (simplify a) (Negate (simplify b)) )
simplify (Div a b) | a == b = Const 1.0 | otherwise = simplify (Div (simplify a) (simplify b))
simplify (Mult a b) = simplify (Mult (simplify a) (simplify b))
simplify (Const a) = Const a
simplify (Plus (Const a) (Const b)) = Const (a+b)
simplify (Plus a (Const b)) = simplify (Plus (Const b) (simplify a))
simplify (Plus (Mult (Const a) X) (Mult (Const b) X)) = (simplify (Mult (Const (a+b)) X))
simplify (Plus (Const a) b) = simplify (Plus (simplify b) (Const a))
simplify (Plus X a) = simplify (Plus (Mult 1 X) (simplify a))
simplify (Plus a X) = simplify (Plus (Mult 1 X) (simplify a))
simplify (Plus a b) = (simplify (Plus (simplify a) (simplify b)))
simplify a = a
inverse X = X
inverse (Const a) = simplify (Const a)
inverse (Mult (Const a) (Const b)) = Const (a * b)
inverse (Mult (Const a) X) = (Div X (Const a))
inverse (Plus X (Const a)) = (Subtract X (Const a))
inverse (Negate x) = Negate (inverse x)
inverse a = inverse (simplify a)
inverse_function x = with_function inverse x
This example only works with arithmetic expressions, but it could probably be generalized to work with lists as well. There are also several implementations of computer algebra systems in Haskell that may be used to find the inverse of a bijective function.
No, not all functions even have inverses. For instance, what would the inverse of this function be?
f x = 1

How does append-to-form work? (SICP's section on Logic Programming)

I am currently working through SICP's section on Logic Programming, but I got stuck in the examples regarding logical deductions, especially the append-to-form rules. How do they work? What I don't quite understand is how the second rule cdr-downs the first list. For example, given:
(rule (append-to-form () ?y ?y))
(rule (append-to-form (?u . ?v) ?y (?u . ?z))
      (append-to-form ?v ?y ?z))
a) How do we reach from:
;;; Query input:
(append-to-form (a b) (c d) ?z)
to
;;; Query results:
(append-to-form (a b) (c d) (a b c d))
b) And what about this one:
;;; Query input:
(append-to-form (a b) ?y (a b c d))
to
;;; Query results:
(append-to-form (a b) (c d) (a b c d))
c) And lastly:
;;; Query input:
(append-to-form ?x ?y (a b c d))
to
;;; Query results:
(append-to-form () (a b c d) (a b c d))
(append-to-form (a) (b c d) (a b c d))
(append-to-form (a b) (c d) (a b c d))
(append-to-form (a b c) (d) (a b c d))
(append-to-form (a b c d) () (a b c d))
I would be interested in the specific mental steps required to carry out the rule matching.
Thank you in advance.
Play interpreter by taking a piece of paper and writing down every single step. For every step you write down which rule was/can be triggered and what variable was bound to what value.
For example:
(append-to-form (a b) (c d) ?z)
triggers the rule
(rule (append-to-form (?u . ?v) ?y (?u . ?z))
      (append-to-form ?v ?y ?z))
with
?u = a, ?v = (b), ?y = (c d), ?z = (a . ?z_2)
Note: ?z in the original query is supposed to be a different variable from ?z in the rule body, therefore we rename the rule's ?z into ?z_2. A list (1 2 3), when matched to (?a . ?b), produces ?a = 1, ?b = (2 3), like car/cdr'ing a list.
These bindings are applied to the body of the rule, (append-to-form ?v ?y ?z), so we get
(append-to-form (b) (c d) ?z_2)
which again becomes
(append-to-form () (c d) ?z_3)
and triggers a different rule: (rule (append-to-form () ?y ?y)) binding ?z_3 to (c d).
Then the recursion unwinds: ?z_2 was defined as (b . ?z_3), and ?z was defined as (a . ?z_2).
The original query (append-to-form (a b) (c d) ?z) gets applied to the bindings, in which ?z = (a . (b . (c d))), and returns (append-to-form (a b) (c d) (a b c d)).
The rest of the exercises are left to the reader ;)
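For query (c) specifically, the two rules behave like nondeterministic list splitting; this Python sketch (not the SICP query evaluator; `append_to_form` is a name reused here for illustration) enumerates the same five results in the same order:

```python
def append_to_form(z):
    """Enumerate all (x, y) with x ++ y == z, in the order the query
    evaluator produces them: first the base rule (x = ()), then the
    recursive rule that peels ?u off the front and pushes it onto x."""
    yield ([], z)                      # (rule (append-to-form () ?y ?y))
    if z:                              # (?u . ?v) case: ?u = z[0], ?v = z[1:]
        for v, y in append_to_form(z[1:]):
            yield ([z[0]] + v, y)

splits = list(append_to_form(list("abcd")))
assert len(splits) == 5
assert splits[2] == (["a", "b"], ["c", "d"])
```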
The crucial concepts here are pattern matching and unification, which can be found in section 4.2.2. The whole query evaluator is really the most difficult piece in SICP, so don't be discouraged. It is well worth the effort. Try to run the code (in an R5RS Scheme) and fiddle with it, such as adding tracing.

Hilbert System - Automate Proof

I'm trying to prove the statement ~(a->~b) => a in a Hilbert style system. Unfortunately it seems like it is impossible to come up with a general algorithm to find a proof, but I'm looking for a brute force type strategy. Any ideas on how to attack this are welcome.
If you like "programming" in combinatory logic, then
you can automatically "translate" some logic problems into another field: proving the equality of combinatory logic terms.
With good functional programming practice, you can solve that,
and afterwards you can translate the answer back into a Hilbert-style proof of your original logic problem.
The possibility of this translation is ensured by the Curry-Howard correspondence.
Unfortunately, the situation is so simple only for a subset of (propositional) logic: the fragment restricted to conditionals. Negation is a complication; I know nothing about that. Thus I cannot answer this concrete question:
¬ (α ⊃ ¬β) ⊢ α
But in cases where negation is not part of the question, the mentioned automatic translation (and back-translation) can be a help, provided that you already have practice in functional programming or combinatory logic.
Of course, there are other approaches, too, where we can remain inside the realm of logic:
proving the problem in some more intuitive deductive system (e.g. natural deduction),
and afterwards using metatheorems that provide a "compiler" possibility: translating the "high-level" proof in natural deduction into the "machine code" of the Hilbert-style deduction system. I mean, for example, the metalogical theorem called the "deduction theorem".
As for theorem provers, as far as I know, the capabilities of some of them have been extended so that they can harness interactive human assistance. E.g. Coq is such a system.
Appendix
Let us see an example. How to prove α ⊃ α?
Hilbert system
Verum ex quolibetα,β is assumed as an axiom scheme, stating that sentence α ⊃ β ⊃ α is expected to be deducible, instantiated for any subsentences α,β
Chain ruleα,β,γ is assumed as an axiom scheme, stating that sentence (α ⊃ β ⊃ γ) ⊃ (α ⊃ β) ⊃ α ⊃ γ is expected to be deducible, instantiated for any subsentences α,β,γ
Modus ponens is assumed as a rule of inference: provided that α ⊃ β is deducible, and also α is deducible, then we expect to be justified to infer that also β is deducible.
Let us prove theorem: α ⊃ α is deducible for any α proposition.
Let us introduce the following notations and abbreviations, developing a "proof calculus":
Proof calculus
VEQα,β: ⊢ α ⊃ β ⊃ α
CRα,β,γ: ⊢ (α ⊃ β ⊃ γ) ⊃ (α ⊃ β) ⊃ α⊃ γ
MP: If ⊢ α ⊃ β and ⊢ α, then also ⊢ β
A tree diagram notation:
Axiom scheme — Verum ex quolibet:
━━━━━━━━━━━━━━━━━ [VEQα,β]
⊢ α ⊃ β ⊃ α
Axiom scheme — chain rule:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ [CRα,β,γ]
⊢ (α ⊃ β ⊃ γ) ⊃ (α ⊃ β) ⊃ α⊃ γ
Rule of inference — modus ponens:
⊢ α ⊃ β ⊢ α
━━━━━━━━━━━━━━━━━━━ [MP]
⊢ β
Proof tree
Let us see a tree diagram representation of the proof:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ [CRα, α⊃α, α]    ━━━━━━━━━━━━━━━━━ [VEQα, α⊃α]
⊢ [α⊃(α⊃α)⊃α] ⊃ (α⊃α⊃α) ⊃ α⊃α                   ⊢ α ⊃ (α ⊃ α) ⊃ α
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ [MP]   ━━━━━━━━━━━ [VEQα,α]
⊢ (α ⊃ α ⊃ α) ⊃ α ⊃ α                                                    ⊢ α ⊃ α ⊃ α
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ [MP]
⊢ α ⊃ α
Proof formulae
Let us see an even conciser (algebraic? calculus?) representation of the proof:
(CRα,α⊃α,α VEQα,α⊃α) VEQα,α: ⊢ α ⊃ α
so, we can represent the proof tree by a single formula:
the forking of the tree (modus ponens) is rendered by simple concatenation (parentheses),
and the leaves of the tree are rendered by the abbreviations of the corresponding axiom names.
It is worth keeping a record of the concrete instantiation, which is typeset here with subscript parameters.
As it will be seen from the series of examples below, we can develop a proof calculus, where axioms are notated as sort of base combinators, and modus ponens is notated as a mere application of its "premise" subproofs:
Example 1
VEQα,β: ⊢ α ⊃ β ⊃ α
meant as
Verum ex quolibet axiom scheme instantiated with α,β provides a proof for the statement, that α ⊃ β ⊃ α is deducible.
Example 2
VEQα,α: ⊢ α ⊃ α ⊃ α
Verum ex quolibet axiom scheme instantiated with α,α provides a proof for the statement, that α ⊃ α ⊃ α is deducible.
Example 3
VEQα, α⊃α: ⊢ α ⊃ (α ⊃ α) ⊃ α
meant as
Verum ex quolibet axiom scheme instantiated with α, α⊃α provides a proof for the statement, that α ⊃ (α ⊃ α) ⊃ α is deducible.
Example 4
CRα,β,γ: ⊢ (α ⊃ β ⊃ γ) ⊃ (α ⊃ β) ⊃ α⊃ γ
meant as
Chain rule axiom scheme instantiated with α,β,γ provides a proof for the statement, that (α ⊃ β ⊃ γ) ⊃ (α ⊃ β) ⊃ α⊃ γ is deducible.
Example 5
CRα,α⊃α,α: ⊢ [α ⊃ (α⊃α) ⊃ α] ⊃ (α ⊃ α⊃α) ⊃ α⊃ α
meant as
Chain rule axiom scheme instantiated with α,α⊃α,α provides a proof for the statement, that [α ⊃ (α⊃α) ⊃ α] ⊃ (α ⊃ α⊃α) ⊃ α⊃ α is deducible.
Example 6
CRα,α⊃α,α VEQα,α⊃α: ⊢ (α ⊃ α⊃α) ⊃ α ⊃ α
meant as
If we combine CRα,α⊃α,α and VEQα,α⊃α together via modus ponens, then we get a proof that proves the following statement: (α ⊃ α⊃α) ⊃ α ⊃ α is deducible.
Example 7
(CRα,α⊃α,α VEQα,α⊃α) VEQα,α: ⊢ α ⊃ α
If we combine the compound proof (CRα,α⊃α,α VEQα,α⊃α) together with VEQα,α (via modus ponens), then we get an even more compound proof. This proves the following statement: α ⊃ α is deducible.
Combinatory logic
Although all this has indeed provided a proof of the expected theorem, it seems very unintuitive. It is hard to see how anyone could "find out" such a proof.
Let us see another field, where similar problems are investigated.
Untyped combinatory logic
Combinatory logic can also be regarded as an extremely minimalistic functional programming language. Despite its minimalism, it is entirely Turing complete; moreover, one can write quite intuitive and complex programs even in this seemingly obfuscated language, in a modular and reusable way, with some practice gained from "normal" functional programming and some algebraic insights.
Adding typing rules
Combinatory logic also has typed variants. The syntax is augmented with types, and, in addition to the reduction rules, typing rules are added.
For base combinators:
Kα,β is selected as a basic combinator, inhabiting type α → β → α
Sα,β,γ is selected as a basic combinator, inhabiting type (α → β → γ) → (α → β) → α → γ.
Typing rule of application:
If X inhabits type α → β and Y inhabits type α, then
X Y inhabits type β.
Notations and abbreviations
Kα,β: α → β → α
Sα,β,γ: (α → β → γ) → (α → β) → α → γ.
If X: α → β and Y: α, then
X Y: β.
Curry-Howard correspondence
It can be seen that the "patterns" are isomorphic in the proof calculus and in this typed combinatory logic.
The Verum ex quolibet axiom of the proof calculus corresponds to the K base combinator of combinatory logic
The Chain rule axiom of the proof calculus corresponds to the S base combinator of combinatory logic
The Modus ponens rule of inference in the proof calculus corresponds to the operation "application" in combinatory logic.
The "conditional" connective ⊃ of logic corresponds to type constructor → of type theory (and typed combinatory logic)
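The correspondence can be made concrete: the proof term (CR VEQ) VEQ for α ⊃ α is exactly the combinator term S K K, which computes the identity function. A quick check, with Python lambdas standing in for combinators:

```python
# K x y = x          -- corresponds to Verum ex quolibet
K = lambda x: lambda y: x
# S x y z = x z (y z)  -- corresponds to the chain rule
S = lambda x: lambda y: lambda z: x(z)(y(z))

# The proof (CR VEQ) VEQ of α ⊃ α corresponds to S K K, the identity:
I = S(K)(K)
assert I(42) == 42
```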
Functional programming
But what is the gain? Why should we translate problems into combinatory logic? I personally find it sometimes useful, because functional programming has a large literature and is applied to practical problems. People get used to it when forced to use it in everyday programming tasks and practice. And some tricks and hints from functional programming practice can be exploited very well in combinatory logic reductions. And if a "transferred" practice develops in combinatory logic, then it can be harnessed also in finding proofs in Hilbert systems.
External links
Links how types in functional programming (lambda calculus, combinatory logic) can be translated into logical proofs and theorems:
Wadler, Philip (1989). Theorems for free!.
Links (or books) how to learn methods and practice to program directly in combinatory logic:
Madore, David (2003). The Unlambda Programming Language. Unlambda: Your Functional Programming Language Nightmares Come True.
Curry, Haskell B. & Feys, Robert & Craig, William (1958). Combinatory Logic. Vol. I. Amsterdam: North-Holland Publishing Company.
Tromp, John (1999). Binary Lambda Calculus and Combinatory Logic. Downloadable in PDF and Postscript from the author's Lambda Calculus and Combinatory Logic Playground.
The Hilbert system is not normally used in automated theorem proving. It is much easier to write a computer program to do proofs using natural deduction. From the material of a CS course:
Some FAQs about the Hilbert system:
Q: How does one know which axiom schemata to use, and which substitutions to make? Since there are infinitely many possibilities, it is not possible to try them all, even in principle.
A: There is no algorithm; at least no simple one. Rather, one has to be clever. In pure mathematics, this is not viewed as a problem, since one is most concerned about the existence of a proof. However, in computer science applications, one is interested in automating the deduction process, so this is a fatal flaw. The Hilbert system is not normally used in automated theorem proving.
Q: So, why do people care about the Hilbert system?
A: With modus ponens as its single deductive rule, it provides a palatable model of how humans devise mathematical proofs. As we shall see, methods which are more amenable to computer implementation produce proofs which are less "human like."
You can also approach the problem by setting ¬ α = α → ⊥. We can then adopt the Hilbert-style system shown in the appendix of one of the other answers, and make it classical by adding the following two axioms, respectively constants:
Ex Falso Quodlibet: Eα : ⊥ → α
Consequentia Mirabilis: Mα : (¬ α → α) → α
A sequent proof of ¬ (α → ¬ β) → α then reads as follows:
α ⊢ α (Identity)
⊥ ⊢ β → ⊥ (Ex Falso Quodlibet)
α → ⊥, α ⊢ β → ⊥ (Impl Intro Left 1 & 2)
α → ⊥ ⊢ α → (β → ⊥) (Impl Intro Right 3)
⊥ ⊢ α (Ex Falso Quodlibet)
(α → (β → ⊥)) → ⊥, α → ⊥ ⊢ α (Impl Intro Left 4 & 5)
(α → (β → ⊥)) → ⊥ ⊢ α (Consequentia Mirabilis 6)
⊢ ((α → (β → ⊥)) → ⊥) → α (Impl Intro Right 7)
From this sequent proof, one can extract a lambda expression. A possible lambda expression for the above sequent proof reads as follows:
λy.(M λz.(E (y λx.(E (z x)))))
This lambda expression can be converted into a SKI term. A possible
SKI term for the above lambda expression reads as follows:
(S (K M)) (L2 (L1 (K (L2 (L1 (K I))))))
where L1 = (S ((S (K S)) ((S (K K)) I)))
and L2 = (S (K (S (K E))))
This gives the following Hilbert style proofs:
Lemma 1: A weakened form of the chain rule:
1: ((A → B) → ((C → A) → (C → B))) → (((A → B) → (C → A)) → ((A → B) → (C → B))) [Chain]
2: ((A → B) → ((C → (A → B)) → ((C → A) → (C → B)))) → (((A → B) → (C → (A → B))) → ((A → B) → ((C → A) → (C → B)))) [Chain]
3: ((C → (A → B)) → ((C → A) → (C → B))) → ((A → B) → ((C → (A → B)) → ((C → A) → (C → B)))) [Verum Ex]
4: (C → (A → B)) → ((C → A) → (C → B)) [Chain]
5: (A → B) → ((C → (A → B)) → ((C → A) → (C → B))) [MP 3, 4]
6: ((A → B) → (C → (A → B))) → ((A → B) → ((C → A) → (C → B))) [MP 2, 5]
7: ((A → B) → ((A → B) → (C → (A → B)))) → (((A → B) → (A → B)) → ((A → B) → (C → (A → B)))) [Chain]
8: ((A → B) → (C → (A → B))) → ((A → B) → ((A → B) → (C → (A → B)))) [Verum Ex]
9: (A → B) → (C → (A → B)) [Verum Ex]
10: (A → B) → ((A → B) → (C → (A → B))) [MP 8, 9]
11: ((A → B) → (A → B)) → ((A → B) → (C → (A → B))) [MP 7, 10]
12: (A → B) → (A → B) [Identity]
13: (A → B) → (C → (A → B)) [MP 11, 12]
14: (A → B) → ((C → A) → (C → B)) [MP 6, 13]
15: ((A → B) → (C → A)) → ((A → B) → (C → B)) [MP 1, 14]
Lemma 2: A weakened form of Ex Falso:
1: (A → ((B → ⊥) → (B → C))) → ((A → (B → ⊥)) → (A → (B → C))) [Chain]
2: ((B → ⊥) → (B → C)) → (A → ((B → ⊥) → (B → C))) [Verum Ex]
3: (B → (⊥ → C)) → ((B → ⊥) → (B → C)) [Chain]
4: (⊥ → C) → (B → (⊥ → C)) [Verum Ex]
5: ⊥ → C [Ex Falso]
6: B → (⊥ → C) [MP 4, 5]
7: (B → ⊥) → (B → C) [MP 3, 6]
8: A → ((B → ⊥) → (B → C)) [MP 2, 7]
9: (A → (B → ⊥)) → (A → (B → C)) [MP 1, 8]
Final Proof:
1: (((A → (B → ⊥)) → ⊥) → (((A → ⊥) → A) → A)) → ((((A → (B → ⊥)) → ⊥) → ((A → ⊥) → A)) → (((A → (B → ⊥)) → ⊥) → A)) [Chain]
2: (((A → ⊥) → A) → A) → (((A → (B → ⊥)) → ⊥) → (((A → ⊥) → A) → A)) [Verum Ex]
3: ((A → ⊥) → A) → A [Mirabilis]
4: ((A → (B → ⊥)) → ⊥) → (((A → ⊥) → A) → A) [MP 2, 3]
5: (((A → (B → ⊥)) → ⊥) → ((A → ⊥) → A)) → (((A → (B → ⊥)) → ⊥) → A) [MP 1, 4]
6: (((A → (B → ⊥)) → ⊥) → ((A → ⊥) → ⊥)) → (((A → (B → ⊥)) → ⊥) → ((A → ⊥) → A)) [Lemma 2]
7: (((A → (B → ⊥)) → ⊥) → ((A → ⊥) → (A → (B → ⊥)))) → (((A → (B → ⊥)) → ⊥) → ((A → ⊥) → ⊥)) [Lemma 1]
8: ((A → ⊥) → (A → (B → ⊥))) → (((A → (B → ⊥)) → ⊥) → ((A → ⊥) → (A → (B → ⊥)))) [Verum Ex]
9: ((A → ⊥) → (A → ⊥)) → ((A → ⊥) → (A → (B → ⊥))) [Lemma 2]
10: ((A → ⊥) → (A → A)) → ((A → ⊥) → (A → ⊥)) [Lemma 1]
11: (A → A) → ((A → ⊥) → (A → A)) [Verum Ex]
12: A → A [Identity]
13: (A → ⊥) → (A → A) [MP 11, 12]
14: (A → ⊥) → (A → ⊥) [MP 10, 13]
15: (A → ⊥) → (A → (B → ⊥)) [MP 9, 14]
16: ((A → (B → ⊥)) → ⊥) → ((A → ⊥) → (A → (B → ⊥))) [MP 8, 15]
17: ((A → (B → ⊥)) → ⊥) → ((A → ⊥) → ⊥) [MP 7, 16]
18: ((A → (B → ⊥)) → ⊥) → ((A → ⊥) → A) [MP 6, 17]
19: ((A → (B → ⊥)) → ⊥) → A [MP 5, 18]
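Finally, the theorem proved, ((A → (B → ⊥)) → ⊥) → A, can also be confirmed semantically. Note that A → (B → ⊥) is just ¬(A ∧ B) and (…) → ⊥ is negation, so this is the classical "from A ∧ B infer A" in implicational form. A quick check (helper names are my own):

```python
from itertools import product

def imp(p, q):
    # Material conditional: p -> q.
    return (not p) or q

BOT = False  # bottom / falsum

def theorem(a, b):
    # ((A -> (B -> bot)) -> bot) -> A
    return imp(imp(imp(a, imp(b, BOT)), BOT), a)

assert all(theorem(a, b) for a, b in product([False, True], repeat=2))
```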
Quite a long proof!
Bye
Finding proofs in Hilbert calculus is very hard.
You could try to translate proofs in sequent calculus or natural deduction to Hilbert calculus.
Which specific Hilbert system? There are tons.
Probably the best way is to find a proof in a sequent calculus and convert it to the Hilbert system.
I use Polish notation.
Since you referenced Wikipedia, we'll suppose our basis is
1 CpCqp.
2 CCpCqrCCpqCpr.
3 CCNpNqCqp.
We want to prove
NCaNb |- a.
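For readers unused to Polish notation: C is implication, N is negation, and lowercase letters are propositional variables, read prefix-style with no parentheses. A tiny recursive-descent parser (a sketch; the function names are my own) makes the translation mechanical:

```python
def parse(s, i=0):
    # Parse the Polish-notation formula s starting at index i.
    # Returns (term, next_index); terms are nested tuples or variable names.
    c = s[i]
    if c == 'C':                       # implication: C <left> <right>
        left, i = parse(s, i + 1)
        right, i = parse(s, i)
        return ('C', left, right), i
    if c == 'N':                       # negation: N <arg>
        arg, i = parse(s, i + 1)
        return ('N', arg), i
    return c, i + 1                    # a propositional variable

def show(t):
    # Render a parsed term in infix notation.
    if isinstance(t, tuple):
        if t[0] == 'C':
            return '(' + show(t[1]) + ' -> ' + show(t[2]) + ')'
        return '~' + show(t[1])
    return t

assert show(parse('CpCqp')[0]) == '(p -> (q -> p))'
assert show(parse('NCaNb')[0]) == '~(a -> ~b)'
```

So axiom 1, CpCqp, is p → (q → p), and the premise NCaNb is ¬(a → ¬b), i.e. a ∧ b.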
I use the theorem prover Prover9, so we'll need to parenthesize everything. Also, Prover9's variables are (x, y, z, u, w, v5, v6, ..., vn); all other symbols get interpreted as functions, relations, or predicates. Every axiom also needs a predicate symbol "P" in front of it, which we can read as "it is provable that..." or more simply "provable", and every sentence in Prover9 must end with a period. Thus, axioms 1, 2, and 3 become, respectively:
1 P(C(x,C(y,x))).
2 P(C(C(x,C(y,z)),C(C(x,y),C(x,z)))).
3 P(C(C(N(x),N(y)),C(y,x))).
We can combine the rules of uniform substitution and detachment into the rule of condensed detachment. In Prover9 we can represent this as:
-P(C(x,y)) | -P(x) | P(y).
The "|" indicates logical disjunction, and "-" indicates negation. Prover9 proves by contradiction. In words, the rule can be read as saying "either it is not provable that if x, then y; or x is not provable; or y is provable." Thus, if "if x, then y" is provable, the first disjunct fails; if x is provable, the second disjunct fails; so when both are provable, the third disjunct, that y is provable, follows by the rule.
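To make condensed detachment concrete, here is a minimal Python sketch (all function names are my own, and for simplicity every lowercase string is treated as a variable): rename the two premises apart, unify the antecedent of the major premise with the minor premise, and return the consequent under the most general unifier. Applied to axiom 1 with itself, it reproduces clause 8 of the Prover9 proof below, C(x,C(y,C(z,y))).

```python
def walk(t, s):
    # Follow variable bindings in substitution s.
    while isinstance(t, str) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1:])

def unify(t1, t2, s):
    # Return an extended substitution unifying t1 and t2, or None.
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if isinstance(t1, str):
        return None if occurs(t1, t2, s) else {**s, t1: t2}
    if isinstance(t2, str):
        return unify(t2, t1, s)
    if t1[0] != t2[0] or len(t1) != len(t2):
        return None
    for a, b in zip(t1[1:], t2[1:]):
        s = unify(a, b, s)
        if s is None:
            return None
    return s

def rename(t, suffix):
    # Rename variables apart by appending a suffix.
    if isinstance(t, str):
        return t + suffix
    return (t[0],) + tuple(rename(a, suffix) for a in t[1:])

def subst(t, s):
    # Apply substitution s throughout term t.
    t = walk(t, s)
    if isinstance(t, tuple):
        return (t[0],) + tuple(subst(a, s) for a in t[1:])
    return t

def condensed_detach(major, minor):
    # From major = C(p, q) and minor unifiable with p, derive q
    # under the most general unifier; return None on failure.
    major, minor = rename(major, "1"), rename(minor, "2")
    if not (isinstance(major, tuple) and major[0] == 'C'):
        return None
    s = unify(major[1], minor, {})
    return None if s is None else subst(major[2], s)

def canon(t, m=None):
    # Canonically rename variables so results compare up to renaming.
    if m is None:
        m = {}
    if isinstance(t, str):
        m.setdefault(t, 'v%d' % len(m))
        return m[t]
    return (t[0],) + tuple(canon(a, m) for a in t[1:])

K = ('C', 'x', ('C', 'y', 'x'))  # axiom 1: CpCqp
r = condensed_detach(K, K)
assert canon(r) == canon(('C', 'x', ('C', 'y', ('C', 'z', 'y'))))
```

This mirrors what the hyper(2,...) steps in the proof output are doing, modulo Prover9's own clause bookkeeping.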
Now we can't make substitutions in NCaNb, since it's not a tautology. Nor is "a". Thus, if we put
P(N(C(a,N(b)))).
as an assumption, Prover9 will interpret "a" and "b" as nullary functions, which effectively turns them into constants. We also make P(a) our goal.
Now we can also "tune" Prover9 using various theorem-proving strategies such as weighting, resonance, subformula, pick-given ratio, level saturation (or even invent our own). I'll use the hints strategy a little, by making all of the assumptions (including the rule of inference) and the goal into hints. I'll also turn the max weight down to 40 and set the maximum number of variables to 5.
I use the version with the graphical interface, but here's the entire input:
set(ignore_option_dependencies). % GUI handles dependencies
if(Prover9). % Options for Prover9
assign(max_seconds, -1).
assign(max_weight, 40).
end_if.
if(Mace4). % Options for Mace4
assign(max_seconds, 60).
end_if.
if(Prover9). % Additional input for Prover9
formulas(hints).
-P(C(x,y))|-P(x)|P(y).
P(C(x,C(y,x))).
P(C(C(x,C(y,z)),C(C(x,y),C(x,z)))).
P(C(C(N(x),N(y)),C(y,x))).
P(N(C(a,N(b)))).
P(a).
end_of_list.
assign(max_vars,5).
end_if.
formulas(assumptions).
-P(C(x,y))|-P(x)|P(y).
P(C(x,C(y,x))).
P(C(C(x,C(y,z)),C(C(x,y),C(x,z)))).
P(C(C(N(x),N(y)),C(y,x))).
P(N(C(a,N(b)))).
end_of_list.
formulas(goals).
P(a).
end_of_list.
Here's the proof it gave me:
============================== prooftrans ============================
Prover9 (32) version Dec-2007, Dec 2007.
Process 1312 was started by Doug on Machina2,
Mon Jun 9 22:35:37 2014
The command was "/cygdrive/c/Program Files (x86)/Prover9-Mace43/bin-win32/prover9".
============================== end of head ===========================
============================== end of input ==========================
============================== PROOF =================================
% -------- Comments from original proof --------
% Proof 1 at 0.01 (+ 0.01) seconds.
% Length of proof is 23.
% Level of proof is 9.
% Maximum clause weight is 20.
% Given clauses 49.
1 P(a) # label(non_clause) # label(goal). [goal].
2 -P(C(x,y)) | -P(x) | P(y). [assumption].
3 P(C(x,C(y,x))). [assumption].
4 P(C(C(x,C(y,z)),C(C(x,y),C(x,z)))). [assumption].
5 P(C(C(N(x),N(y)),C(y,x))). [assumption].
6 P(N(C(a,N(b)))). [assumption].
7 -P(a). [deny(1)].
8 P(C(x,C(y,C(z,y)))). [hyper(2,a,3,a,b,3,a)].
9 P(C(C(C(x,C(y,z)),C(x,y)),C(C(x,C(y,z)),C(x,z)))). [hyper(2,a,4,a,b,4,a)].
12 P(C(C(C(N(x),N(y)),y),C(C(N(x),N(y)),x))). [hyper(2,a,4,a,b,5,a)].
13 P(C(x,C(C(N(y),N(z)),C(z,y)))). [hyper(2,a,3,a,b,5,a)].
14 P(C(x,N(C(a,N(b))))). [hyper(2,a,3,a,b,6,a)].
23 P(C(C(a,N(b)),x)). [hyper(2,a,5,a,b,14,a)].
28 P(C(C(x,C(C(y,x),z)),C(x,z))). [hyper(2,a,9,a,b,8,a)].
30 P(C(x,C(C(a,N(b)),y))). [hyper(2,a,3,a,b,23,a)].
33 P(C(C(x,C(a,N(b))),C(x,y))). [hyper(2,a,4,a,b,30,a)].
103 P(C(N(b),x)). [hyper(2,a,33,a,b,3,a)].
107 P(C(x,b)). [hyper(2,a,5,a,b,103,a)].
113 P(C(C(N(x),N(b)),x)). [hyper(2,a,12,a,b,107,a)].
205 P(C(N(x),C(x,y))). [hyper(2,a,28,a,b,13,a)].
209 P(C(N(a),x)). [hyper(2,a,33,a,b,205,a)].
213 P(a). [hyper(2,a,113,a,b,209,a)].
214 $F. [resolve(213,a,7,a)].
============================== end of proof ==========================

Resources