I want to make the following definition:
variables {Ω : Type} (P : set Ω → Ω → Prop) (a : Ω)

def G : set Ω → Ω :=
begin
  intro V,
  by_cases h : ∃! u : Ω, P V u,
  -- I want to use the unique u as the return value in this case, and in the other case use a
  sorry,
  exact a,
end
I tried using exists.elim and exists_unique.elim but I can't figure out how to use them properly. Also, I can't use h.some because I'm not using the axiom of choice.
I just want to know how to construct the function and prove that it is well defined. Thanks.
You can use classical.some h to obtain a witness using the axiom of choice. Note that you are already using the axiom of choice via the by_cases tactic.
If you allow yourself to assume fintype Ω, then you can use fintype.choose h to use brute force instead of an axiom.
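For example, keeping your variables line, a definition along these lines should work (a sketch in Lean 3, untested; classical.prop_decidable supplies the decidability instance that the dependent if needs, and the definition has to be marked noncomputable):

local attribute [instance] classical.prop_decidable

noncomputable def G (V : set Ω) : Ω :=
if h : ∃! u : Ω, P V u then classical.some h else a

-- In the first branch, (classical.some_spec h).1 : P V (classical.some h),
-- and (classical.some_spec h).2 is the uniqueness clause.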
The problem I am thinking about is hash functions, although I'm mainly interested in the mathematical terms/background to describe my requested property.
Consider the case where I have a hash function taking a secret (S) and a number (X) and producing another number (Y):
Hash : S, X → Y
I then define two different hash-functions with their own secrets (a and b):
H1(X) := Hash(a, X)
H2(X) := Hash(b, X)
The property I want is that:
H1(H2(X)) = H2(H1(X))
(I think this is what it means for the functions to commute?)
Taking a step back from programming and thinking about the math, we can look at different operations. If the function consists of only one operation, then I'm quite sure this property will always be satisfied if the operation is both associative and commutative. However, there are operations which are order-sensitive, i.e. non-commutative, e.g. division.
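For instance, writing that single operation as ∘, associativity and commutativity give
H1(H2(X)) = a ∘ (b ∘ X) = (a ∘ b) ∘ X = (b ∘ a) ∘ X = b ∘ (a ∘ X) = H2(H1(X)).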
Some examples that seem to work:
Simple addition:
Hash(S, X) := S + X
Bitwise xor:
Hash(S, X) := S xor X
Modular exponentiation:
Hash(S, X) := X^S mod p
if S ∈ N and X ∈ Z
How do I know if my choice of hash function will make it commute?
Commutativity under composition is an unusual property. It's not typical unless the functions apply a commutative operation of some underlying algebraic structure, such as "multiply by x". This is the form of your three examples.
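As a quick sanity check (not a proof), you can brute-force the property over a small finite domain. A minimal Python sketch, where the helper name commutes and the constants a, b, p are purely illustrative:

def commutes(h1, h2, domain):
    # True iff h1(h2(x)) == h2(h1(x)) for every x in the domain.
    return all(h1(h2(x)) == h2(h1(x)) for x in domain)

a, b, p = 5, 9, 101

# Simple addition (mod p, to keep the domain finite).
print(commutes(lambda x: (x + a) % p, lambda x: (x + b) % p, range(p)))    # True

# Bitwise xor.
print(commutes(lambda x: x ^ a, lambda x: x ^ b, range(256)))              # True

# Modular exponentiation: (x^a)^b = x^(a*b) = (x^b)^a (mod p).
print(commutes(lambda x: pow(x, a, p), lambda x: pow(x, b, p), range(p)))  # True

Such a check only gives evidence for the tested domain; for an unbounded domain you still need an algebraic argument like the one above.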
The practical answer is "if you don't have a proof that it's commutative, assume it's not commutative". There's no general algorithm that will provide that proof for you.
Suppose I have a list of subgoals in an apply style proof. I know that something like
apply blast
will provide a proof for a number of the subgoals within this list. Is there a way I can avoid duplicating this line?
For example, suppose I have three subgoals where the first and the third are provable using the above method while the second is provable with something like
apply (metis lemma1 lemma2 ...)
A naive proof for such subgoals will look like
apply blast
apply (metis lemma1 lemma2 ...)
apply blast
What I am looking for is a way to give a proof without duplicating the apply blast portion of the proof. Observe that using the method combinator + will not achieve this; it merely applies the method repeatedly until the first failure.
Actually apply blast will only try to solve the first subgoal. If you want to solve as many subgoals as possible you could try
apply blast+
I am not sure what exactly you are trying to achieve, but an alternative to your using some_lemma might be
apply (insert some_lemma)
which inserts some_lemma as additional assumption of all of your subgoals.
Update: There are some basic proof method combinators available in Isabelle (see also Section 6.4.1: Proof method expressions, of isar-ref). So you could do for example
apply (blast | metis ...)+
which will first try to solve a subgoal by blast and only if this fails by metis .... However, its usefulness depends on the specific subgoal situation, e.g., if blast takes a long time before failing, it might not be suitable. More fine-grained control of proof methods is available through the recent Isabelle/Eisbach proof method language (see isabelle doc eisbach).
Is it impossible to know whether two functions are equivalent? For example, a compiler writer wants to determine whether two functions that the developer has written perform the same operation; what methods can he use to figure that out? Or what can we do to find out whether two TMs are identical? Is there a way to normalize the machines?
Edit: If the general case is undecidable, how much information do you need to have before you can correctly say that two functions are equivalent?
Given an arbitrary function, f, we define a function f' which returns 1 on input n if f halts on input n. Now, for some number x we define a function g which, on input n, returns 1 if n = x, and otherwise calls f'(n).
If functional equivalence were decidable, then deciding whether g is identical to f' decides whether f halts on input x. That would solve the Halting problem. Related to this discussion is Rice's theorem.
Conclusion: functional equivalence is undecidable.
There is some discussion going on below about the validity of this proof. So let me elaborate on what the proof does, and give some example code in Python.
The proof creates a function f' which on input n starts to compute f(n). When this computation finishes, f' returns 1. Thus, f'(n) = 1 iff f halts on input n, and f' doesn't halt on n iff f doesn't. Python:
def create_f_prime(f):
    def f_prime(n):
        f(n)
        return 1
    return f_prime
Then we create a function g which takes n as input, and compares it to some value x. If n = x, then g(n) = g(x) = 1, else g(n) = f'(n). Python:
def create_g(f_prime, x):
    def g(n):
        return 1 if n == x else f_prime(n)
    return g
Now the trick is, that for all n != x we have that g(n) = f'(n). Furthermore, we know that g(x) = 1. So, if g = f', then f'(x) = 1 and hence f(x) halts. Likewise, if g != f' then necessarily f'(x) != 1, which means that f(x) does not halt. So, deciding whether g = f' is equivalent to deciding whether f halts on input x. Using a slightly different notation for the above two functions, we can summarise all this as follows:
def halts(f, x):
    def f_prime(n): f(n); return 1
    def g(n): return 1 if n == x else f_prime(n)
    return equiv(f_prime, g) # If only equiv would actually exist...
I'll also toss in an illustration of the proof in Haskell (GHC performs some loop detection, and I'm not really sure whether the use of seq is foolproof in this case, but anyway):
-- Tells whether two functions f and g are equivalent.
equiv :: (Integer -> Integer) -> (Integer -> Integer) -> Bool
equiv f g = undefined -- If only this could be implemented :)
-- Tells whether f halts on input x
halts :: (Integer -> Integer) -> Integer -> Bool
halts f x = equiv f' g
  where
    f' n = f n `seq` 1
    g n = if n == x then 1 else f' n
Yes, it is undecidable. This is a form of the halting problem.
Note that I mean that it's undecidable for the general case. Just as you can determine halting for sufficiently simple programs, you can determine equivalency for sufficiently simple functions, and it's not inconceivable that this could be of some use for an application. But you cannot make a general method for determining equivalency of any two possible functions.
The general case is undecidable by Rice's Theorem, as others have already said (Rice's Theorem essentially says that any nontrivial property of the function computed by a program in a Turing-complete formalism is undecidable).
There are special cases where equivalence is decidable; the best-known example is probably equivalence of finite state automata. If I remember correctly, equivalence of pushdown automata is already undecidable, by reduction from Post's Correspondence Problem.
To prove that two given functions are equivalent you would require as input a proof of the equivalence in some formalism, which you can then check for correctness. The essential parts of this proof are the loop invariants, as these cannot be derived automatically.
In the general case it's undecidable whether two Turing machines always produce the same output for the same input. Since you can't even decide whether a TM will halt on a given input, I don't see how it should be possible to decide whether both halt AND output the same result...
It depends on what you mean by "function."
If the functions you are talking about are guaranteed to terminate -- for example, because they are written in a language in which all functions terminate -- and operate over finite domains, it's "easy" (although it might still take a very, very long time): two functions are equivalent if and only if they have the same value at every point in their shared domain.
This is called "extensional" equivalence to distinguish it from syntactic or "intensional" equivalence. Two functions are extensionally equivalent if they are intensionally equivalent, but the converse does not hold.
(All the other people above noting that it is undecidable in the general case are quite correct, of course; this is a fairly uncommon -- and usually uninteresting in practice -- special case.)
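A minimal Python sketch of that finite-domain check (the function name extensionally_equal is just illustrative):

def extensionally_equal(f, g, domain):
    # Assumes f and g both terminate on every element of the (finite) domain.
    return all(f(x) == g(x) for x in domain)

# Intensionally different, but extensionally equal on 0..255:
print(extensionally_equal(lambda x: 2 * x, lambda x: x + x, range(256)))  # True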
Note that the halting problem is decidable for linear bounded automata. Real computers are always bounded, so a program that does not halt must loop back to a previous configuration after sufficiently many steps. If you use an unbounded (imaginary) computer to keep track of the configurations, you can detect that looping and take it into account.
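A sketch of that idea in Python, assuming a deterministic step function over a finite configuration space (all names are hypothetical):

def halts_bounded(step, initial):
    # step maps a configuration to its successor, or returns None when the
    # machine halts. On a bounded machine the configuration space is finite,
    # so a run that never halts must eventually revisit a configuration.
    seen = set()
    config = initial
    while config is not None:
        if config in seen:
            return False  # repeated configuration: the machine loops forever
        seen.add(config)
        config = step(config)
    return True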
You could check in your compiler to see if they are "exactly" identical, sure, but determining whether they return identical values would be difficult and time consuming. You would basically have to call each routine on an infinite number of possible inputs and compare the values it returns with those from the other routine.
Even if you could do the above, you would have to account for which global values change within the function, and which objects are destroyed or changed in the function in ways that do not affect the outcome.
You can really only compare the compiled code. So compile the compiled code to refactor?
Imagine the run time on trying to compile the code with "that" compiler. You could spend a LOT of time on here answering questions saying: "busy compiling..." :)
I think if you allow side effects, you can show that the problem can be morphed into the Post correspondence problem, so you can't, in general, show whether two functions are even capable of having the same side effects.
Is it impossible to know if two functions are equivalent?
No. It is possible to know that two functions are equivalent. If you have f(x), you know f(x) is equivalent to f(x).
If the question is "is it possible to determine whether f(x) and g(x) are equivalent, for arbitrary functions f and g?", then the answer is no.
However, if the question is "can a compiler determine that f(x) and g(x) are equivalent when they are in fact equivalent?", then the answer is yes, provided they are equivalent in output, in side effects, and in the order of those side effects. In other words, if one is a transformation of the other that preserves behavior, then a compiler of sufficient complexity should be able to detect it. It also means that the compiler can transform a function f into a more optimal and equivalent function g, given a particular definition of equivalence. It gets even more fun if f includes undefined behavior, because then g can also include undefined (but different) behavior!
I'm learning functional programming, and have tried to solve a couple of problems in a functional style. One thing I experienced, while dividing up my problem into functions, was that I seemed to have two options: use several disparate functions with similar parameter lists, or use nested functions which, as closures, can simply refer to bindings in the parent function.
Though I ended up going with the second approach, because it made function calls smaller and it seemed to "feel" better, from my reading it seems like I may be missing one of the main points of functional programming, in that this seems "side-effecty". Now granted, these nested functions cannot modify the outer bindings, as the language I was using prevents that, but if you look at each individual inner function, you can't say "given the same parameters, this function will return the same results", because they do use variables from the parent scope... am I right?
What is the desirable way to proceed?
Thanks!
Functional programming isn't all-or-nothing. If nesting the functions makes more sense, I'd go with that approach. However, if you really want the internal functions to be purely functional, explicitly pass all the needed parameters into them.
Here's a little example in Scheme:
(define (foo a)
  (define (bar b)
    (+ a b)) ; getting a from outer scope, not purely functional
  (bar 3))

(define (foo a)
  (define (bar a b)
    (+ a b)) ; getting a from function parameters, purely functional
  (bar a 3))

(define (bar a b) ; since this is purely functional, we can remove it from its
  (+ a b))        ; environment and it still works

(define (foo a)
  (bar a 3))
Personally, I'd go with the first approach, but either will work equally well.
Nesting functions is an excellent way to divide up the labor in many functions. It's not really "side-effecty"; if it helps, think of the captured variables as implicit parameters.
One example where nested functions are useful is to replace loops. The parameters to the nested function can act as induction variables which accumulate values. A simple example:
let factorial n =
  let rec facHelper p n =
    if n = 1 then p else facHelper (p*n) (n-1)
  in
  facHelper 1 n
In this case, it wouldn't really make sense to declare a function like facHelper globally, since users shouldn't have to worry about the p parameter.
Be aware, however, that it can be difficult to test nested functions individually, since they cannot be referred to outside of their parent.
Consider the following (contrived) Haskell snippet:
putLines :: [String] -> IO ()
putLines lines = putStr string
  where string = concat lines
string is a locally bound named constant. But isn't it also a function taking no arguments that closes over lines, and is it therefore not referentially transparent? (In Haskell, constants and nullary functions are indeed indistinguishable!) Would you consider the above code "side-effecty" or non-functional because of this?