How to complete this proof of commutativity with `replace`?

In the documentation it is mentioned how replace could be used to complete the proof, but it ends up using rewrite, which seems to be syntactic sugar that writes the replace call for you. I'm interested in understanding how to use it explicitly.
If I understand correctly, it could be used to rewrite S k = S (plus k 0) as S (plus k 0) = S (plus k 0), given a proof that k = plus k 0, which would then be provable by reflexivity. But if we instantiate it as replace {P = \x => S x = S (plus k 0)} {x = k} {y = plus k 0} rec, we'll now need a proof of S k = S (plus k 0), which is what we wanted to prove to begin with. In short, I'm not sure what exactly P should be.
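For reference, the type of replace in the Idris 1 prelude is roughly:
replace : {a : _} -> {x : _} -> {y : _} -> {P : a -> Type} -> x = y -> P x -> P y
replace Refl prf = prf
So P describes the goal as a function of the rewritten position: given a rule x = y, replace turns a proof of P x into a proof of P y.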

Ah, it is fairly obvious in retrospect. If we let:
P = \x => S x = S (plus k 0)
Then, we can prove it for x = (plus k 0) (by reflexivity). Now, if we let y = k, then, by using replace, we gain a proof of S k = S (plus k 0), which is what we need. Or, in other words:
plusCommZ : (m : Nat) -> m = plus m 0
plusCommZ Z = Refl
plusCommZ (S k) = replace
  {P = \x => S x = S (plus k 0)}
  {x = plus k 0}
  {y = k}
  (sym (plusCommZ k))
  Refl
completes the proof. We could also do it the other way around, with P = \x => S x = S k, at the cost of a final sym (see the sketch below).
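A sketch of that alternative (untested); since replace then produces the equation with its sides flipped, sym is applied to the result:
plusCommZ' : (m : Nat) -> m = plus m 0
plusCommZ' Z = Refl
plusCommZ' (S k) = sym (replace
  {P = \x => S x = S k}
  {x = k}
  {y = plus k 0}
  (plusCommZ' k)
  Refl)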


How can I make use of cong and injective with indexed vectors in Idris?

cong and injective allow you to apply and unapply functions to equalities:
cong : (f : a -> b) -> x = y -> f x = f y
injective : Injective f => f x = f y -> x = y
Both of these fail for indexed vectors with different lengths, for obvious reasons.
How can I prove that two equal vectors have the same length? I.e.
sameLen : {xs : Vect n a} -> {ys : Vect m b} -> xs = ys -> n = m
I can't just do
sameLen pf = cong length pf
because length on xs has type Vect n a -> Nat and length on ys has type Vect m b -> Nat. (In fact, I'm not even sure how to prove the same thing for two regular Lists, due to the differing type arguments, never mind with the added indices).
Going the other way, how would I prove something like
data Rose a = V a | T (Vect n (Rose a))
Injective T where
  injective Refl = Refl
unwrap : {xs : Vect n (Rose a)} -> {ys : Vect m (Rose b)} -> T xs = T ys -> xs = ys
Again, I can't just do
unwrap pf = injective pf
due to the differing types of T (one with m and one with n). And even if I had a proof m=n, how could I use that to convince Idris that the two applications of T are the same?
Got the answer from the Idris Discord: if you pattern match on Refl, it unifies a with b (and n with m) automatically:
sameLen : {xs : List a} -> {ys : List b} -> xs = ys -> length xs = length ys
sameLen Refl = Refl
sameLen' : {xs : Vect n a} -> {ys : Vect m b} -> xs = ys -> n = m
sameLen' Refl = Refl
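Presumably (an untested sketch, not from the Discord answer) the same trick handles unwrap, since matching on Refl also unifies the indices and the constructor arguments:
unwrap : {xs : Vect n (Rose a)} -> {ys : Vect m (Rose b)} -> T xs = T ys -> xs = ys
unwrap Refl = Refl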

Well-founded recursion by repeated division

Suppose I have some natural numbers d ≥ 2 and n > 0; in this case, I can split off the d's from n and get n = m * d^k, where m is not divisible by d (for example, with d = 2 and n = 12 we get 12 = 3 * 2^2).
I'd like to use this repeated removal of the d-divisible parts as a recursion scheme; so I thought I'd make a datatype for the Steps leading to m:
import Data.Nat.DivMod

data Steps : (d : Nat) -> {auto dValid : d `GTE` 2} -> (n : Nat) -> Type where
  Base : (rem : Nat) -> (rem `GT` 0) -> (rem `LT` d) -> (quot : Nat) -> Steps d {dValid} (rem + quot * d)
  Step : Steps d {dValid} n -> Steps d {dValid} (n * d)
and write a recursive function that computes the Steps for a given pair of d and n:
total lemma : x * y `GT` 0 -> x `GT` 0
lemma {x = Z} LTEZero impossible
lemma {x = Z} (LTESucc _) impossible
lemma {x = (S k)} prf = LTESucc LTEZero

steps : (d : Nat) -> {auto dValid : d `GTE` 2} -> (n : Nat) -> {auto nValid : n `GT` 0} -> Steps d {dValid} n
steps Z {dValid = LTEZero} _ impossible
steps Z {dValid = (LTESucc _)} _ impossible
steps (S d) {dValid} n {nValid} with (divMod n d)
  steps (S d) (q * S d) {nValid} | MkDivMod q Z _ = Step (steps (S d) {dValid} q {nValid = lemma nValid})
  steps (S d) (S rem + q * S d) | MkDivMod q (S rem) remSmall = Base (S rem) (LTESucc LTEZero) remSmall q
However, steps is not accepted as total since there's no apparent reason why the recursive call is well-founded (there's no structural relationship between q and n).
But I also have a function
total wf : (S x) `LT` (S x) * S (S y)
with a boring proof.
Can I use wf in the definition of steps to explain to Idris that steps is total?
Here is one way of using well-founded recursion to do what you're asking. I'm sure, though, that there is a better way. In what follows I'm going to use the standard LT function, which allows us to achieve our goal, but there are some obstacles we will need to work around.
Unfortunately, LT is a function, not a type constructor or a data constructor, which means we cannot define an implementation of the WellFounded typeclass for LT. The following code is a workaround for this situation:
total
accIndLt : {P : Nat -> Type} ->
           (step : (x : Nat) -> ((y : Nat) -> LT y x -> P y) -> P x) ->
           (z : Nat) -> Accessible LT z -> P z
accIndLt {P} step z (Access f) =
  step z $ \y, lt => accIndLt {P} step y (f y lt)

total
wfIndLt : {P : Nat -> Type} ->
          (step : (x : Nat) -> ((y : Nat) -> LT y x -> P y) -> P x) ->
          (x : Nat) -> P x
wfIndLt step x = accIndLt step x (ltAccessible x)
We are going to need some helper lemmas dealing with the less-than relation; they can be found in this gist (Order module), a subset of my personal library which I recently started. I'm sure the proofs of the helper lemmas can be minimized, but that wasn't my goal here.
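For reference, the two helper lemmas used below have roughly these signatures (inferred from how they are applied, not copied from the gist):
multLtNonZeroArgumentsLeft : 0 `LT` (x * y) -> 0 `LT` x
multLtSelfRight : (y : Nat) -> 0 `LT` x -> 1 `LT` y -> x `LT` (x * y)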
After importing the Order module, we can solve the problem (I slightly modified the original code, it's not difficult to change it or write a wrapper to have the original type):
total
steps : (n : Nat) -> {auto nValid : 0 `LT` n} -> (d : Nat) -> Steps (S (S d)) n
steps n {nValid} d = wfIndLt {P = P} step n d nValid
  where
    P : (n : Nat) -> Type
    P n = (d : Nat) -> (nV : 0 `LT` n) -> Steps (S (S d)) n

    step : (n : Nat) -> (rec : (q : Nat) -> q `LT` n -> P q) -> P n
    step n rec d nV with (divMod n (S d))
      step (S r + q * S (S d)) rec d nV | (MkDivMod q (S r) prf) =
        Base (S r) (LTESucc LTEZero) prf q
      step (Z + q * S (S d)) rec d nV | (MkDivMod q Z _) =
        let qGt0 = multLtNonZeroArgumentsLeft nV in
        let lt = multLtSelfRight (S (S d)) qGt0 (LTESucc (LTESucc LTEZero)) in
        Step (rec q lt d qGt0)
I modeled steps after the divMod function from the contrib/Data/Nat/DivMod/IteratedSubtraction.idr module.
Full code is available here.
Warning: the totality checker (as of Idris 0.99, release version) does not accept steps as total. It has been recently fixed and works for our problem (I tested it with Idris 0.99-git:17f0899c).

Using an exponentiation function

This is the definition for exp in group theory:
Definition exp : Z -> U -> U.
Proof.
  intros n a.
  elim n; clear n.
  exact e.
  intro n.
  elim n; clear n.
  exact a.
  intros n valrec.
  exact (star a valrec).
  intro n; elim n; clear n.
  exact (inv a).
  intros n valrec.
  exact (star (inv a) valrec).
Defined.
I tried to define (a^n)^k this way:
Definition exp2 (n k : Z) (a : U) := fun a => exp k (exp n a).
But exp k (exp n a) is of type U->U but I want it to be of type U. How can I do it?
As Gilles pointed out, the signature of your exp2 function is equivalent to
Definition exp2 (n k : Z) (a a : U) := (* won't work in Coq: duplicate binder *)
  exp k (exp n a).
You need to remove one of the a binders, keeping either the named parameter or the fun:
Definition exp2 (n k : Z) (a : U) := exp k (exp n a).
Definition exp2_alt (n k : Z) := fun a => exp k (exp n a).
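A quick sanity check of the corrected definition (assuming U and exp are in scope as above) confirms the intended type:
Check exp2.
(* exp2 : Z -> Z -> U -> U *)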

F# lazy recursion

I am having some problems with recursion in lazy computations. I need to calculate the square root by the Newton-Raphson method, but I do not know how to apply lazy evaluation. This is my code:
let next x z = ((x + z / x) / 2.);

let rec iterate f x =
    List.Cons(x, (iterate f (f x)));

let rec within eps list =
    let a = float (List.head list);
    let b = float (List.head (List.tail list));
    let rest = (List.tail (List.tail (list)));
    if (abs(a - b) <= eps * abs(b))
    then b
    else within eps (List.tail (list));

let lazySqrt a0 eps z =
    within eps (iterate (next z) a0);

let result2 = lazySqrt 10. Eps fvalue;
printfn "lazy approach";
printfn "result: %f" result2;
Of course, stack overflow exception.
You're using F# lists, which are evaluated eagerly. In your example you need lazy evaluation and the ability to decompose lists, so F# PowerPack's LazyList is appropriate here:
let next z x = (x + z / x) / 2.

let rec iterate f x =
    LazyList.consDelayed x (fun () -> iterate f (f x))

let rec within eps list =
    match list with
    | LazyList.Cons(a, LazyList.Cons(b, rest)) when abs(a - b) <= eps * abs(b) -> b
    | LazyList.Cons(a, res) -> within eps res
    | LazyList.Nil -> failwith "Unexpected pattern"

let lazySqrt a0 eps z =
    within eps (iterate (next z) a0)

let result2 = lazySqrt 10. Eps fvalue
printfn "lazy approach"
printfn "result: %f" result2
Notice that I use pattern matching which is more idiomatic than head and tail.
If you don't mind a slightly different approach, Seq.unfold is natural here:
let next z x = (x + z / x) / 2.

let lazySqrt a0 eps z =
    a0
    |> Seq.unfold (fun a ->
        let b = next z a
        if abs(a - b) <= eps * abs(b) then None else Some(a, b))
    |> Seq.fold (fun _ x -> x) a0
If you need lazy computations, then you have to use the appropriate tools. List is not lazy; it is computed to the end. Your iterate function never ends, so the program overflows the stack inside it.
You may use Seq here.
Note: Seq.skip almost inevitably leads to O(N^2) complexity.
let next N x = ((x + N / x) / 2.);

let rec iterate f x = seq {
    yield x
    yield! iterate f (f x)
}

let rec within eps list =
    let a = Seq.head list
    let b = list |> Seq.skip 1 |> Seq.head
    if (abs(a - b) <= eps * abs(b))
    then b
    else list |> Seq.skip 1 |> within eps

let lazySqrt a0 eps z =
    within eps (iterate (next z) a0);

let result2 = lazySqrt 10. 0.0001 42.;
printfn "lazy approach";
printfn "result: %f" result2;
// 6.4807406986501
Yet another approach is to use LazyList from F# PowerPack. The code is available in this article; I'm copying it into my answer for the sake of completeness:
open Microsoft.FSharp.Collections.LazyList

let next N (x:float) = (x + N/x) / 2.0

let rec repeat f a =
    LazyList.consDelayed a (fun() -> repeat f (f a))

let rec within (eps : float) = function
    | LazyList.Cons(a, LazyList.Cons(b, rest)) when (abs (a - b)) <= eps -> b
    | x -> within eps (LazyList.tail x)

let newton_square a0 eps N = within eps (repeat (next N) a0)

printfn "%A" (newton_square 16.0 0.001 16.0)
Some minor notes:
Your next function is wrong: given the call iterate (next z) a0, its parameters are in the wrong order (compare next z x in the answers above).
The meaning of eps here is relative accuracy, while most academic books I've seen use absolute accuracy. The difference between the two is whether or not the error is measured against b, as in <= eps * abs(b). The code from FPish treats eps as an absolute accuracy.
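To make the eps remark concrete, the two stopping criteria look like this (illustrative names, not from the answers above):
// Relative accuracy: the error is measured against the current approximation b.
let convergedRelative eps a b = abs (a - b) <= eps * abs b
// Absolute accuracy: the error is compared to eps directly (as in the FPish code).
let convergedAbsolute eps a b = abs (a - b) <= eps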

In pure functional languages, is there an algorithm to get the inverse function?

In pure functional languages like Haskell, is there an algorithm to get the inverse of a function when it is bijective? And is there a specific way to program your function so that it is?
In some cases, yes! There's a beautiful paper called Bidirectionalization for Free! which discusses a few cases -- when your function is sufficiently polymorphic -- where it is possible, completely automatically, to derive an inverse function. (It also discusses what makes the problem hard when the functions are not polymorphic.)
What you get out, in the case your function is invertible, is the inverse (with a spurious input); in other cases, you get a function which tries to "merge" an old input value and a new output value.
No, it's not possible in general.
Proof: consider bijective functions of type
type F = [Bit] -> [Bit]
with
data Bit = B0 | B1
Assume we have an inverter inv :: F -> F such that inv f . f ≡ id. Say we have tested it for the function f = id, by confirming that
inv f (repeat B0) -> (B0 : ls)
Since this first B0 in the output must have come after some finite time, we have an upper bound n on both the depth to which inv had actually evaluated our test input to obtain this result, as well as the number of times it can have called f. Define now a family of functions
g j (B1 : B0 : ... (n+j times) ... B0 : ls)
= B0 : ... (n+j times) ... B0 : B1 : ls
g j (B0 : ... (n+j times) ... B0 : B1 : ls)
= B1 : B0 : ... (n+j times) ... B0 : ls
g j l = l
Clearly, for all 0<j≤n, g j is a bijection, in fact self-inverse. So we should be able to confirm
inv (g j) (replicate (n+j) B0 ++ B1 : repeat B0) -> (B1 : ls)
but to fulfill this, inv (g j) would have needed to either
evaluate g j (B1 : repeat B0) to a depth of n+j > n
evaluate head $ g j l for at least n different lists matching replicate (n+j) B0 ++ B1 : ls
Up to that point, at least one of the g j is indistinguishable from f, and since inv f hadn't done either of these evaluations, inv could not possibly have told it apart – short of doing some runtime-measurements on its own, which is only possible in the IO Monad.
⬜
You can look it up on Wikipedia; it's called Reversible Computing.
In general you can't do it, though, and none of the functional languages have that option. For example:
f :: a -> Int
f _ = 1
This function does not have an inverse.
Not in most functional languages, but in logic programming or relational programming, most functions you define are in fact not functions but "relations", and these can be used in both directions. See for example Prolog or Kanren.
Tasks like this are almost always undecidable. You can have a solution for some specific functions, but not in general.
Here, you cannot even recognize which functions have an inverse. Quoting Barendregt, H. P. The Lambda Calculus: Its Syntax and Semantics. North Holland, Amsterdam (1984):
A set of lambda-terms is nontrivial if it is neither the empty nor the full set. If A and B are two nontrivial, disjoint sets of lambda-terms closed under (beta) equality, then A and B are recursively inseparable.
Let's take A to be the set of lambda terms that represent invertible functions and B the rest. Both are non-empty and closed under beta equality. So it's not possible to decide whether a function is invertible or not.
(This applies to the untyped lambda calculus. TBH I don't know if the argument can be directly adapted to a typed lambda calculus when we know the type of a function that we want to invert. But I'm pretty sure it will be similar.)
If you can enumerate the domain of the function and can compare elements of the range for equality, you can - in a rather straightforward way. By enumerate I mean having a list of all the elements available. I'll stick to Haskell, since I don't know Ocaml (or even how to capitalise it properly ;-)
What you want to do is run through the elements of the domain and see if they're equal to the element of the range you're trying to invert, and take the first one that works:
inv :: Eq b => [a] -> (a -> b) -> (b -> a)
inv domain f b = head [ a | a <- domain, f a == b ]
Since you've stated that f is a bijection, there's bound to be one and only one such element. The trick, of course, is to ensure that your enumeration of the domain actually reaches all the elements in a finite time. If you're trying to invert a bijection from Integer to Integer, using [0,1 ..] ++ [-1,-2 ..] won't work as you'll never get to the negative numbers. Concretely, inv ([0,1 ..] ++ [-1,-2 ..]) (+1) (-3) will never yield a value.
However, 0 : concatMap (\x -> [x,-x]) [1..] will work, as this runs through the integers in the following order [0,1,-1,2,-2,3,-3, and so on]. Indeed inv (0 : concatMap (\x -> [x,-x]) [1..]) (+1) (-3) promptly returns -4!
The Control.Monad.Omega package can help you run through lists of tuples etcetera in a good way; I'm sure there's more packages like that - but I don't know them.
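For example, assuming the control-monad-omega API (each and runOmega) is as remembered, a fair enumeration of all pairs of naturals looks roughly like this (untested):
import Control.Monad.Omega (each, runOmega)
-- Every pair (i, j) is reached after finitely many steps, unlike a naive
-- nested traversal of two infinite lists.
allPairs :: [(Integer, Integer)]
allPairs = runOmega ((,) <$> each [0 ..] <*> each [0 ..])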
Of course, this approach is rather low-brow and brute-force, not to mention ugly and inefficient! So I'll end with a few remarks on the last part of your question, on how to 'write' bijections. The type system of Haskell isn't up to proving that a function is a bijection - you really want something like Agda for that - but it is willing to trust you.
(Warning: untested code follows)
So you can define a datatype of bijections between types a and b:
data Bi a b = Bi {
  apply :: a -> b,
  invert :: b -> a
}
along with as many constants (where you can say 'I know they're bijections!') as you like, such as:
notBi :: Bi Bool Bool
notBi = Bi not not
add1Bi :: Bi Integer Integer
add1Bi = Bi (+1) (subtract 1)
and a couple of smart combinators, such as:
idBi :: Bi a a
idBi = Bi id id
invertBi :: Bi a b -> Bi b a
invertBi (Bi a i) = (Bi i a)
composeBi :: Bi a b -> Bi b c -> Bi a c
composeBi (Bi a1 i1) (Bi a2 i2) = Bi (a2 . a1) (i1 . i2)
mapBi :: Bi a b -> Bi [a] [b]
mapBi (Bi a i) = Bi (map a) (map i)
bruteForceBi :: Eq b => [a] -> (a -> b) -> Bi a b
bruteForceBi domain f = Bi f (inv domain f)
I think you could then do invert (mapBi add1Bi) [1,5,6] and get [0,4,5]. If you pick your combinators in a smart way, I think the number of times you'll have to write a Bi constant by hand could be quite limited.
After all, if you know a function is a bijection, you'll hopefully have a proof-sketch of that fact in your head, which the Curry-Howard isomorphism should be able to turn into a program :-)
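A quick check of the invert (mapBi add1Bi) claim above (a sketch; it assumes the definitions compile as written):
-- Should print [0,4,5]: mapBi lifts add1Bi over lists, and invert runs it backwards.
main :: IO ()
main = print (invert (mapBi add1Bi) [1, 5, 6 :: Integer])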
I've recently been dealing with issues like this, and no, I'd say that (a) it's not difficult in many cases, but (b) it's not efficient at all.
Basically, suppose you have f :: a -> b, and that f is indeed a bijection. You can compute the inverse f' :: b -> a in a really dumb way:
import Data.List

-- | Class for types whose values are recursively enumerable.
class Enumerable a where
    -- | Produce the list of all values of type #a#.
    enumerate :: [a]

-- | Note, this is only guaranteed to terminate if #f# is a bijection!
invert :: (Enumerable a, Eq b) => (a -> b) -> b -> Maybe a
invert f b = find (\a -> f a == b) enumerate
If f is a bijection and enumerate truly produces all values of a, then you will eventually hit an a such that f a == b.
Types that have a Bounded and an Enum instance can be trivially made Enumerable (see the sketch after the Either instance below). Pairs of Enumerable types can also be made Enumerable:
instance (Enumerable a, Enumerable b) => Enumerable (a, b) where
    enumerate = crossWith (,) enumerate enumerate

crossWith :: (a -> b -> c) -> [a] -> [b] -> [c]
crossWith f _ [] = []
crossWith f [] _ = []
crossWith f (x0:xs) (y0:ys) =
    f x0 y0 : interleave (map (f x0) ys)
                         (interleave (map (flip f y0) xs)
                                     (crossWith f xs ys))
interleave :: [a] -> [a] -> [a]
interleave xs [] = xs
interleave [] ys = ys
interleave (x:xs) ys = x : interleave ys xs
Same goes for disjunctions of Enumerable types:
instance (Enumerable a, Enumerable b) => Enumerable (Either a b) where
    enumerate = enumerateEither enumerate enumerate

enumerateEither :: [a] -> [b] -> [Either a b]
enumerateEither [] ys = map Right ys
enumerateEither xs [] = map Left xs
enumerateEither (x:xs) (y:ys) = Left x : Right y : enumerateEither xs ys
The fact that we can do this both for (,) and Either probably means that we can do it for any algebraic data type.
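For the Bounded/Enum case mentioned above, an instance can simply walk the full range (an illustrative example, not from the original answer):
-- Bool has Bounded and Enum instances, so enumerating all of its values
-- is just the range from minBound to maxBound.
instance Enumerable Bool where
    enumerate = [minBound .. maxBound]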
Not every function has an inverse. If you limit the discussion to one-to-one functions, the ability to invert an arbitrary function grants the ability to crack any cryptosystem. We kind of have to hope this isn't feasible, even in theory!
In some cases, it is possible to find the inverse of a bijective function by converting it into a symbolic representation. Based on this example, I wrote this Haskell program to find inverses of some simple polynomial functions:
bijective_function x = x*2+1
main = do
  print $ bijective_function 3
  print $ inverse_function bijective_function (bijective_function 3)

data Expr = X | Const Double |
            Plus Expr Expr | Subtract Expr Expr | Mult Expr Expr | Div Expr Expr |
            Negate Expr | Inverse Expr |
            Exp Expr | Log Expr | Sin Expr | Atanh Expr | Sinh Expr | Acosh Expr |
            Cosh Expr | Tan Expr | Cos Expr | Asinh Expr | Atan Expr | Acos Expr |
            Asin Expr | Abs Expr | Signum Expr | Integer
            deriving (Show, Eq)
instance Num Expr where
    (+) = Plus
    (-) = Subtract
    (*) = Mult
    abs = Abs
    signum = Signum
    negate = Negate
    fromInteger a = Const $ fromIntegral a

instance Fractional Expr where
    recip = Inverse
    fromRational a = Const $ realToFrac a
    (/) = Div

instance Floating Expr where
    pi = Const pi
    exp = Exp
    log = Log
    sin = Sin
    atanh = Atanh
    sinh = Sinh
    cosh = Cosh
    acosh = Acosh
    cos = Cos
    tan = Tan
    asin = Asin
    acos = Acos
    atan = Atan
    asinh = Asinh
fromFunction f = f X
toFunction :: Expr -> (Double -> Double)
toFunction X = \x -> x
toFunction (Negate a) = \x -> negate (toFunction a x)
toFunction (Const a) = const a
toFunction (Plus a b) = \x -> (toFunction a x) + (toFunction b x)
toFunction (Subtract a b) = \x -> (toFunction a x) - (toFunction b x)
toFunction (Mult a b) = \x -> (toFunction a x) * (toFunction b x)
toFunction (Div a b) = \x -> (toFunction a x) / (toFunction b x)
with_function func x = toFunction $ func $ fromFunction x
simplify X = X
simplify (Div (Const a) (Const b)) = Const (a/b)
simplify (Mult (Const a) (Const b)) | a == 0 || b == 0 = 0 | otherwise = Const (a*b)
simplify (Negate (Negate a)) = simplify a
simplify (Subtract a b) = simplify ( Plus (simplify a) (Negate (simplify b)) )
simplify (Div a b) | a == b = Const 1.0 | otherwise = simplify (Div (simplify a) (simplify b))
simplify (Mult a b) = simplify (Mult (simplify a) (simplify b))
simplify (Const a) = Const a
simplify (Plus (Const a) (Const b)) = Const (a+b)
simplify (Plus a (Const b)) = simplify (Plus (Const b) (simplify a))
simplify (Plus (Mult (Const a) X) (Mult (Const b) X)) = (simplify (Mult (Const (a+b)) X))
simplify (Plus (Const a) b) = simplify (Plus (simplify b) (Const a))
simplify (Plus X a) = simplify (Plus (Mult 1 X) (simplify a))
simplify (Plus a X) = simplify (Plus (Mult 1 X) (simplify a))
simplify (Plus a b) = (simplify (Plus (simplify a) (simplify b)))
simplify a = a
inverse X = X
inverse (Const a) = simplify (Const a)
inverse (Mult (Const a) (Const b)) = Const (a * b)
inverse (Mult (Const a) X) = (Div X (Const a))
inverse (Plus X (Const a)) = (Subtract X (Const a))
inverse (Negate x) = Negate (inverse x)
inverse a = inverse (simplify a)
inverse_function x = with_function inverse x
This example only works with arithmetic expressions, but it could probably be generalized to work with lists as well. There are also several implementations of computer algebra systems in Haskell that may be used to find the inverse of a bijective function.
No, not all functions even have inverses. For instance, what would the inverse of this function be?
f x = 1
