Pointer equality in Haskell?

Is there any notion of pointer equality in Haskell? == requires things to be deriving Eq, and I have something which contains a (Value -> IO Value), and neither -> nor IO derive Eq.
EDIT: I'm creating an interpreter for another language which does have pointer equality, so I'm trying to model this behavior while still being able to use Haskell functions to model closures.
EDIT: Example: I want a function special that would do this:
> let x a = a * 2
> let y = x
> special x y
True
> let z a = a * 2
> special x z
False

EDIT: Given your example, you could model this with the IO monad. Just assign your functions to IORefs and compare them.
Prelude Data.IORef> z <- newIORef (\x -> x)
Prelude Data.IORef> y <- newIORef (\x -> x)
Prelude Data.IORef> z == z
True
Prelude Data.IORef> z == y
False

Pointer equality would break referential transparency, so NO.
Perhaps surprisingly, it is actually possible to compute extensional equality of total functions on compact spaces, but in general (e.g. functions on the integers with possible non-termination) this is impossible.
EDIT: I'm creating an interpreter for another language
Can you just keep the original program AST or source location alongside the Haskell functions you've translated them into? It seems that you want "equality" based on that.

== requires things to be deriving Eq
Actually (==) requires an instance of Eq, not necessarily a derived instance.
What you probably need to do is to provide your own instance of Eq which simply ignores the (Value -> IO Value) part. E.g.,
data D = D Int Bool (Value -> IO Value)

instance Eq D where
  D x y _ == D x' y' _ = x == x' && y == y'
Does that help, maybe?

Another way to do this is to exploit StableNames.
However, special would have to return its results inside of the IO monad unless you want to abuse unsafePerformIO.
The IORef solution requires IO throughout the construction of your structure. Checking StableNames only uses it when you want to check referential equality.
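For instance, here is a rough sketch of special using StableNames. Note that makeStableName lives in IO, and that the check is conservative: it can report different names for values that are in fact the same object (for example before and after forcing), but it will not report equal names for distinct objects.
import System.Mem.StableName (makeStableName)

-- Compare two values by the stable names of their current heap objects.
special :: a -> a -> IO Bool
special x y = do
  sx <- makeStableName x
  sy <- makeStableName y
  return (sx == sy)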

IORefs have an Eq instance. I don't understand what else you would need to compare for pointer equality.
Edit: The only things that it makes sense to compare for pointer equality are mutable structures. But as I mentioned above, mutable structures, like IORefs, already have an Eq instance, which allows you to see whether two IORefs are the same structure, which is exactly pointer equality.

I'm creating an interpreter for another language which does have pointer equality, so I'm trying to model this behavior while still being able to use Haskell functions to model closures.
I am pretty sure that in order to write such an interpreter, your code should be monadic. Don't you have to maintain some sort of environment or state? If so, you could also maintain a counter for function closures. Thus, whenever you create a new closure you equip it with a unique id. Then for pointer equivalence you simply compare these identifiers.
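A minimal sketch of that idea (the Value type and its constructors here are hypothetical stand-ins, and the counter is threaded by hand rather than through a state monad):
data Value = VInt Integer | VClosure Int (Value -> IO Value)

-- Allocate a closure, tagging it with the current counter and returning
-- the incremented counter for the next allocation.
mkClosure :: Int -> (Value -> IO Value) -> (Value, Int)
mkClosure counter f = (VClosure counter f, counter + 1)

-- The interpreted language's pointer equality: compare only the identifiers.
ptrEq :: Value -> Value -> Bool
ptrEq (VClosure i _) (VClosure j _) = i == j
ptrEq _ _ = False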

Related

Math-operations (or hash functions) without order dependency

The problem I am thinking about is hash functions, although I'm mainly interested in the mathematical terms/background to describe my requested property.
Consider the case where I have a hash-function taking a secret (S) and a number (X) which creates another number (Y):
Hash : S, X → Y
I then define two different hash-functions with their own secrets (a and b):
H1(X) := Hash(a, X)
H2(X) := Hash(b, X)
The property I want is that:
H1(H2(X)) = H2(H1(X))
(I think this is what it means for the functions to commute?)
Taking a step back from programming and thinking about math, we can look at different operations. If the function consists of one operation only, then I'm quite sure that this property will always be satisfied if the operation is both associative and commutative. However, there are operations which are order insensitive but non-commutative, e.g. division: dividing first by a and then by b gives the same result as dividing first by b and then by a, even though a/b ≠ b/a. How do I know if my choice of hash function will make it commute?
Some examples that seems to work:
Simple addition:
Hash(S, X) := S + X
Bitwise xor:
Hash(S, X) := S xor X
Modular exponentiation:
Hash(S, X) := X^S mod p
if S ∈ N and X ∈ Z
How do I know if my choice of hash function will make it commute?
Commutativity under composition is an unusual property. It's not typical unless the functions are using a commutative operation of some underlying algebraic structure, such as "multiply by x". This is the form of your three examples.
The practical answer is "if you don't have a proof that it's commutative, assume it's not commutative". There's no general algorithm that will provide that proof for you.
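For example, a quick property test can at least refute commutativity under composition on random inputs; a passing test is not a proof. A small sketch, assuming the QuickCheck package and using the bitwise-xor hash from the question:
import Test.QuickCheck (quickCheck)
import Data.Bits (xor)

-- The xor example from the question, with the secret as the first argument.
hash :: Int -> Int -> Int
hash s x = s `xor` x

-- Checks H1(H2(X)) == H2(H1(X)) for secrets a and b and input x.
prop_commutes :: Int -> Int -> Int -> Bool
prop_commutes a b x = h1 (h2 x) == h2 (h1 x)
  where
    h1 = hash a
    h2 = hash b

main :: IO ()
main = quickCheck prop_commutes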

In pure functional languages, is data (strings, ints, floats.. ) also just functions?

I was thinking about pure object-oriented languages like Ruby, where everything, including numbers, ints, floats, and strings, is itself an object. Is it the same in pure functional languages? For example, in Haskell, are numbers and strings also functions?
I know Haskell is based on lambda calculus, which represents everything, including data and operations, as functions. It would seem logical to me that a "purely functional language" would model everything as a function, as well as keeping with the definition that a function must always return the same output for the same inputs and has no state.
It's okay to think about that theoretically, but...
Just like in Ruby not everything is an object (argument lists, for instance, are not objects), not everything in Haskell is a function.
For more reference, check out this neat post: http://conal.net/blog/posts/everything-is-a-function-in-haskell
#wrhall gives a good answer. However you are somewhat correct that in the pure lambda calculus it is consistent for everything to be a function, and the language is Turing-complete (capable of expressing any pure computation that Haskell, etc. is).
That gives you some very strange things, since the only thing you can do to anything is apply it to something else. When do you ever get to observe something? If you have some value f and want to know something about it, your only choice is to apply it to some value x to get f x, which is another function, and your only choice then is to apply it to another value y, to get f x y, and so on.
Often I interpret the pure lambda calculus as talking about transformations on things that are not functions, but only capable of expressing functions itself. That is, I can make a function (with a bit of Haskelly syntax sugar for recursion & let):
purePlus = \zero succ natCase ->
  let plus = \m n -> natCase m n (\m' -> succ (plus m' n))
  in plus (succ (succ zero)) (succ (succ zero))
Here I have expressed the computation 2+2 without needing to know that there are such things as non-functions. I simply took what I needed as arguments to the function I was defining, and the values of those arguments could be Church encodings or they could be "real" numbers (whatever that means) -- my definition does not care.
And you could think the same thing of Haskell. There is no particular reason to think that there are things which are not functions, nor is there a particular reason to think that everything is a function. But Haskell's type system at least prevents you from applying an argument to a number (anybody thinking about fromInteger right now needs to hold their tongue! :-). In the above interpretation, it is because numbers are not necessarily modeled as functions, so you can't necessarily apply arguments to them.
In case it isn't clear by now, this whole answer has been somewhat of a technical/philosophical digression, and the easy answer to your question is "no, not everything is a function in functional languages". Functions are the things you can apply arguments to, that's all.
The "pure" in "pure functional" refers to the "freedom from side effects" kind of purity. It has little relation to the meaning of "pure" being used when people talk about a "pure object-oriented language", which simply means that the language manipulates purely (only) in objects.
The reason is that pure-as-in-only is a reasonable distinction to use to classify object-oriented languages, because there are languages like Java and C++, which clearly have values that don't have all that much in common with objects, and there are also languages like Python and Ruby, for which it can be argued that every value is an object¹.
Whereas for functional languages, there are no practical languages which are "pure functional" in the sense that every value the language can manipulate is a function. It's certainly possible to program in such a language. The most basic versions of the lambda calculus don't have any notion of things that are not functions, but you can still do arbitrary computation with them by coming up with ways of representing the things you want to compute on as functions.²
But while the simplicity and minimalism of the lambda calculus tends to be great for proving things about programming, actually writing substantial programs in such a "raw" programming language is awkward. The function representation of basic things like numbers also tends to be very inefficient to implement on actual physical machines.
But there is a very important distinction between languages that encourage a functional style but allow untracked side effects anywhere, and ones that actually enforce that your functions are "pure" functions (similar to mathematical functions). Object-oriented programming is very strongly wed to the use of impure computations³, so there are no practical object-oriented programming languages that are pure in this sense.
So the "pure" in "pure functional language" means something very different from the "pure" in "pure object-oriented language".⁴ In each case the "pure vs not pure" distinction is one that is completely uninteresting applied to the other kind of language, so there's no very strong motive to standardise the use of the term.
¹ There are corner cases to pick at in all "pure object-oriented" languages that I know of, but that's not really very interesting. It's clear that the object metaphor goes much further in languages in which 1 is an instance of some class, and that class can be sub-classed, than it does in languages in which 1 is something other than an object.
² All computation is about representation anyway. Computers don't know anything about numbers or anything else. They just have bit-patterns that we use to represent numbers, and operations on bit-patterns that happen to correspond to operations on numbers (because we designed them so that they would).
³ This isn't fundamental either. You could design a "pure" object-oriented language that was pure in this sense. I tend to write most of my OO code to be pure anyway.
⁴ If this seems obtuse, you might reflect that the terms "functional", "object", and "language" have vastly different meanings in other contexts also.
A very different angle on this question: all sorts of data in Haskell can be represented as functions, using a technique called Church encodings. This is a form of inversion of control: instead of passing data to functions that consume it, you hide the data inside a set of closures, and to consume it you pass in callbacks describing what to do with this data.
Any program that uses lists, for example, can be translated into a program that uses functions instead of lists:
-- | A list corresponds to a function of this type:
type ChurchList a r = (a -> r -> r)  -- ^ how to handle a cons cell
                   -> r              -- ^ how to handle the empty list
                   -> r              -- ^ result of processing the list

listToCPS :: [a] -> ChurchList a r
listToCPS xs = \f z -> foldr f z xs
That function is taking a concrete list as its starting point, but that's not necessary. You can build up ChurchList functions out of just pure functions:
-- | The empty 'ChurchList'.
nil :: ChurchList a r
nil = \f z -> z

-- | Add an element at the front of a 'ChurchList'.
cons :: a -> ChurchList a r -> ChurchList a r
cons x xs = \f z -> f x (xs f z)
foldChurchList :: (a -> r -> r) -> r -> ChurchList a r -> r
foldChurchList f z xs = xs f z

mapChurchList :: (a -> b) -> ChurchList a (ChurchList b r) -> ChurchList b r
mapChurchList f = foldChurchList step nil
  where step x = cons (f x)

filterChurchList :: (a -> Bool) -> ChurchList a (ChurchList a r) -> ChurchList a r
filterChurchList pred = foldChurchList step nil
  where step x xs = if pred x then cons x xs else xs
That last function uses Bool, but of course we can replace Bool with functions as well:
-- | A Bool can be represented as a function that chooses between two
-- given alternatives.
type ChurchBool r = r -> r -> r
true, false :: ChurchBool r
true a _ = a
false _ b = b
filterChurchList' :: (a -> ChurchBool (ChurchList a r)) -> ChurchList a (ChurchList a r) -> ChurchList a r
filterChurchList' pred = foldChurchList step nil
  where step x xs = pred x (cons x xs) xs
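To make the inversion of control concrete, here is a small usage sketch with the definitions above: the list 1, 2, 3 is built as a ChurchList and then consumed by passing (+) as the cons handler and 0 as the nil handler.
-- Evaluates to 6: each cons cell is handled by (+), the empty list by 0.
example :: Int
example = cons 1 (cons 2 (cons 3 nil)) (+) 0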
This sort of transformation can be done for basically any type, so in theory, you could get rid of all "value" types in Haskell, and keep only the () type, the (->) and IO type constructors, return and >>= for IO, and a suitable set of IO primitives. This would obviously be hella impractical—and it would perform worse (try writing tailChurchList :: ChurchList a r -> ChurchList a r for a taste).
Is getChar :: IO Char a function or not? The Haskell Report doesn't provide us with a definition of "function". But it does state that getChar is a function (see here). (Well, at least we can say that it is a function.)
So I think the answer is YES.
I don't think there can be a correct definition of "function" except "everything is a function". (What is a "correct definition"? Good question...) Consider the following example:
{-# LANGUAGE NoMonomorphismRestriction #-}
import Control.Applicative
f :: Applicative f => f Int
f = pure 1
g1 :: Maybe Int
g1 = f
g2 :: Int -> Int
g2 = f
Is f a function or datatype? It depends.
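For instance, after loading those definitions, a GHCi session would look roughly like this (a sketch, prompts abbreviated):
> g1
Just 1
> g2 3
1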

What's the difference between "equal (=)" and "identical (==)" in OCaml?

In OCaml, we have two kinds of equality comparisons:
x = y and x == y,
So what's exact the difference between them?
Is that x = y in ocaml just like x.equals(y) in Java?
and x == y just like x == y (comparing the address) in Java?
I don't know exactly how x.equals(y) works in Java. If it does a "deep" comparison, then the analogy is pretty close. One thing to be careful of is that physical equality is a slippery concept in OCaml (and functional languages in general). The compiler and runtime system are going to move values around, and may merge and unmerge pure (non-mutable) values at will. So you should only use == if you really know what you're doing. At some level, it requires familiarity with the implementation (which is something to avoid unless necessary).
The specific guarantees that OCaml makes for == are weak. Mutable values compare as physically equal in the way you would expect (i.e., if mutating one of the two will actually mutate the other also). But for non-mutable values, the only guarantee is that values that compare physically equal (==) will also compare as equal (=). Note that the converse is not true, as sepp2k points out for floating values.
In essence, what the language spec is telling you for non-mutable values is that you can use == as a quick check to decide if two non-mutable values are equal (=). If they compare physically equal, they are equal value-wise. If they don't compare physically equal, you don't know if they're equal value-wise. You still have to use = to decide.
Edit: this answer delves into details of the inner workings of OCaml, based on the Obj module. That knowledge isn't meant to be used without extra care (let me emphasize that very important point once more: don't use it in your programs, but only if you wish to experiment with the OCaml runtime). That information is also available, albeit perhaps in a more understandable form, in the O'Reilly book on OCaml, available online (pretty good book, though a bit dated now).
The = operator is checking structural equality, whereas == only checks physical equality.
Equality checking is based on the way values are allocated and stored within memory. A runtime value in OCaml roughly fits into 2 different categories: either boxed or unboxed. The former means that the value is reachable in memory through an indirection, and the latter means that the value is directly accessible.
Since ints (int31 on 32-bit systems, or int63 on 64-bit systems) are unboxed values, both operators behave the same with them. A few other types or values, whose runtime implementations are actually int, will also see both operators behaving the same with them, like unit (), the empty list [], constants in algebraic datatypes and polymorphic variants, etc.
Once you start playing with more complex values involving structures, like lists, arrays, tuples, records (the C struct equivalent), the difference between these two operators emerges: values within structures will be boxed, unless they can be runtime represented as native ints (1). This necessity arises from how the runtime system must handle values, and manage memory efficiently. Structured values are allocated when constructed from other values, which may be themselves structured values, in which case references are used (since they are boxed).
Because of allocations, it is very unlikely that two values instantiated at different points of a program could be physically equal, although they'd be structurally equal. Each of the fields, or inner elements within the values could be identical, even up to physical identity, but if these two values are built dynamically, then they would end up using different spaces in memory, and thus be physically different, but structurally equal.
The runtime tries to avoid unnecessary allocations though: for instance, if you have a function always returning the same value (in other words, if the function is constant), either simple or structured, that function will always return the same physical value (i.e., the same data in memory), so that testing the results of two invocations of that function for physical equality will succeed.
One way to observe when the physical operator will actually return true is to use the Obj.is_block function on a value's runtime representation (that is to say, the result of Obj.repr on it). This function simply tells whether its parameter's runtime representation is boxed.
A more contrived way is to use the following function:
let phy x : int = Obj.magic (Obj.repr x);;
This function will return an int which is the actual value of the pointer to the value bound to x in memory, if this value is boxed. If you try it on an int literal, you will get the exact same value! That's because ints are unboxed (i.e., the value is stored directly in memory, not through a reference).
Now that we know that boxed values are actually "referenced" values, we can deduce that these values can be modified, even though the language says that they are immutable.
Consider for instance the reference type:
# type 'a ref = {mutable contents : 'a };;
We could define an immutable ref like this:
# type 'a imm = {i : 'a };;
type 'a imm = {i : 'a; }
And then use the Obj.magic function to coerce one type into the other, because structurally, these types will be reduced to the same runtime representation.
For instance:
# let x = { i = 1 };;
val x : int imm = {i = 1}
# let y : int ref = Obj.magic x;;
val y : int ref = {contents = 1}
# y := 2;;
- : unit = ()
# x;;
- : int imm = {i = 2}
There are a few exceptions to this:
if values are objects, then even seemingly structurally identical values will return false on structural comparison
# let o1 = object end;;
val o1 : < > = <obj>
# let o2 = object end;;
val o2 : < > = <obj>
# o1 = o2;;
- : bool = false
# o1 = o1;;
- : bool = true
Here we see that = reverts to physical equivalence.
If values are functions, you cannot compare them structurally, but physical comparison works as intended.
lazy values may or may not be structurally comparable, depending on whether they have been forced or not (respectively).
# let l1 = lazy (40 + 2);;
val l1 : lazy_t = <lazy>
# let l2 = lazy (40 + 2);;
val l2 : lazy_t = <lazy>
# l1 = l2;;
Exception: Invalid_argument "equal: functional value".
# Lazy.force l1;;
- : int = 42
# Lazy.force l2;;
- : int = 42
# l1 = l2;;
- : bool = true
module or record values are also comparable if they don't contain any functional value.
In general, I guess that it is safe to say that values which are related to functions, or may hold functions inside are not comparable with =, but may be compared with ==.
You should obviously be very cautious with all this: relying on the implementation details of the runtime is incorrect (note: I jokingly used the word evil in my initial version of this answer, but changed it for fear of it being taken too seriously). As you aptly pointed out in comments, the behaviour of the JavaScript implementation is different for floats (structurally equivalent in JavaScript, but not in the reference implementation, and what about the Java one?).
(1) If I recall correctly, floats are also unboxed when stored in arrays to avoid a double indirection, but they become boxed once extracted, so you shouldn't see a difference in behaviour with boxed values.
Is that x = y in ocaml just like x.equals(y) in Java?
and x == y just like x == y (comparing the address) in Java?
Yes, that's it. Except that in OCaml you can use = on every kind of value, whereas in Java you can't use equals on primitive types. Another difference is that floating point numbers in OCaml are reference types, so you shouldn't compare them using == (not that it's generally a good idea to compare floating point numbers directly for equality anyway).
So in summary, you basically should always be using = to compare any kind of values.
According to http://rigaux.org/language-study/syntax-across-languages-per-language/OCaml.html, == checks for shallow equality, and = checks for deep equality.

Why does OCaml need both "let" and "let rec"? [duplicate]

Possible Duplicate:
Why are functions in Ocaml/F# not recursive by default?
OCaml uses let to define a new function, or let rec to define a function that is recursive. Why does it need both of these - couldn't we just use let for everything?
For example, to define a non-recursive successor function and recursive factorial in OCaml (actually, in the OCaml interpreter) I might write
let succ n = n + 1;;
let rec fact n =
  if n = 0 then 1 else n * fact (n-1);;
Whereas in Haskell (GHCI) I can write
let succ n = n + 1
let fact n =
  if n == 0 then 1 else n * fact (n-1)
Why does OCaml distinguish between let and let rec? Is it a performance issue, or something more subtle?
Well, having both available instead of only one gives the programmer tighter control on the scope. With let x = e1 in e2, the binding is only present in e2's environment, while with let rec x = e1 in e2 the binding is present in both e1 and e2's environments.
(Edit: I want to emphasize that it is not a performance issue, that makes no difference at all.)
Here are two situations where having this non-recursive binding is useful:
shadowing an existing definition with a refinement that uses the old binding. Something like: let f x = (let x = sanitize x in ...), where sanitize is a function that ensures the input has some desirable property (e.g., it takes the norm of a possibly-non-normalized vector, etc.). This is very useful in some cases.
metaprogramming, for example macro writing. Imagine I want to define a macro SQUARE(foo) that desugars into let x = foo in x * x, for any expression foo. I need this binding to avoid code duplication in the output (I don't want SQUARE(factorial n) to compute factorial n twice). This is only hygienic if the let binding is not recursive, otherwise I couldn't write let x = 2 in SQUARE(x) and get a correct result.
So I claim it is very important indeed to have both the recursive and the non-recursive binding available. Now, the default behaviour of the let-binding is a matter of convention. You could say that let x = ... is recursive, and one must use let nonrec x = ... to get the non-recursive binder. Picking one default or the other is a matter of which programming style you want to favor and there are good reasons to make either choice. Haskell suffers¹ from the unavailability of this non-recursive mode, and OCaml has exactly the same defect at the type level: type foo = ... is recursive, and there is no non-recursive option available -- see this blog post.
¹: when Google Code Search was available, I used it to search Haskell code for the pattern let x' = sanitize x in .... This is the usual workaround when non-recursive binding is not available, but it's less safe because you risk writing x instead of x' by mistake later on -- in some cases you want to have both available, so picking a different name can be voluntary. A good idiom would be to use a longer variable name for the first x, such as unsanitized_x. Anyway, just looking for x' literally (no other variable name) and x1 turned up a lot of results. Erlang (and all languages that try to make variable shadowing difficult: CoffeeScript, etc.) has even worse problems of this kind.
That said, the choice of having Haskell bindings recursive by default (rather than non-recursive) certainly makes sense, as it is consistent with lazy evaluation by default, which makes it really easy to build recursive values -- while strict-by-default languages have more restrictions on which recursive definitions make sense.
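For instance, here is a recursive value that is perfectly natural under Haskell's lazy, recursive let, but which would not make sense as a strict, non-recursive binding:
-- An infinite list of ones; take 5 ones yields [1,1,1,1,1].
ones :: [Int]
ones = 1 : ones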

Is finding the equivalence of two functions undecidable?

Is it impossible to know if two functions are equivalent? For example, if a compiler writer wants to determine whether two functions that the developer has written perform the same operation, what methods can he use to figure that out? Or what can we do to find out whether two TMs are identical? Is there a way to normalize the machines?
Edit: If the general case is undecidable, how much information do you need to have before you can correctly say that two functions are equivalent?
Given an arbitrary function, f, we define a function f' which returns 1 on input n if f halts on input n. Now, for some number x we define a function g which, on input n, returns 1 if n = x, and otherwise calls f'(n).
If functional equivalence were decidable, then deciding whether g is identical to f' decides whether f halts on input x. That would solve the Halting problem. Related to this discussion is Rice's theorem.
Conclusion: functional equivalence is undecidable.
There is some discussion going on below about the validity of this proof. So let me elaborate on what the proof does, and give some example code in Python.
The proof creates a function f' which on input n starts to compute f(n). When this computation finishes, f' returns 1. Thus, f'(n) = 1 iff f halts on input n, and f' doesn't halt on n iff f doesn't. Python:
def create_f_prime(f):
    def f_prime(n):
        f(n)
        return 1
    return f_prime
Then we create a function g which takes n as input, and compares it to some value x. If n = x, then g(n) = g(x) = 1, else g(n) = f'(n). Python:
def create_g(f_prime, x):
    def g(n):
        return 1 if n == x else f_prime(n)
    return g
Now the trick is, that for all n != x we have that g(n) = f'(n). Furthermore, we know that g(x) = 1. So, if g = f', then f'(x) = 1 and hence f(x) halts. Likewise, if g != f' then necessarily f'(x) != 1, which means that f(x) does not halt. So, deciding whether g = f' is equivalent to deciding whether f halts on input x. Using a slightly different notation for the above two functions, we can summarise all this as follows:
def halts(f, x):
    def f_prime(n): f(n); return 1
    def g(n): return 1 if n == x else f_prime(n)
    return equiv(f_prime, g)  # If only equiv would actually exist...
I'll also toss in an illustration of the proof in Haskell (GHC performs some loop detection, and I'm not really sure whether the use of seq is foolproof in this case, but anyway):
-- Tells whether two functions f and g are equivalent.
equiv :: (Integer -> Integer) -> (Integer -> Integer) -> Bool
equiv f g = undefined -- If only this could be implemented :)

-- Tells whether f halts on input x
halts :: (Integer -> Integer) -> Integer -> Bool
halts f x = equiv f' g
  where
    f' n = f n `seq` 1
    g n = if n == x then 1 else f' n
Yes, it is undecidable. This is a form of the halting problem.
Note that I mean that it's undecidable for the general case. Just as you can determine halting for sufficiently simple programs, you can determine equivalency for sufficiently simple functions, and it's not inconceivable that this could be of some use for an application. But you cannot make a general method for determining equivalency of any two possible functions.
The general case is undecidable by Rice's Theorem, as others have already said (Rice's Theorem essentially says that any nontrivial property of the function computed by a program in a Turing-complete formalism is undecidable).
There are special cases where equivalence is decidable; the best-known example is probably equivalence of finite state automata. If I remember correctly, equivalence of pushdown automata is already undecidable, by reduction to Post's Correspondence Problem.
To prove that two given functions are equivalent you would require as input a proof of the equivalence in some formalism, which you can then check for correctness. The essential parts of this proof are the loop invariants, as these cannot be derived automatically.
In the general case it's undecidable whether two Turing machines always produce the same output for identical input. Since you can't even decide whether a TM will halt on a given input, I don't see how it would be possible to decide whether both halt AND output the same result...
It depends on what you mean by "function."
If the functions you are talking about are guaranteed to terminate -- for example, because they are written in a language in which all functions terminate -- and operate over finite domains, it's "easy" (although it might still take a very, very long time): two functions are equivalent if and only if they have the same value at every point in their shared domain.
This is called "extensional" equivalence to distinguish it from syntactic or "intensional" equivalence. Two functions are extensionally equivalent if they are intensionally equivalent, but the converse does not hold.
(All the other people above noting that it is undecidable in the general case are quite correct, of course; this is a fairly uncommon -- and usually uninteresting in practice -- special case.)
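As a toy illustration of the finite-domain case, here is a sketch in Haskell: two total functions out of Bool are extensionally equal exactly when they agree on both inputs.
-- Decides extensional equality for (total) functions whose domain is Bool.
equivOnBool :: Eq b => (Bool -> b) -> (Bool -> b) -> Bool
equivOnBool f g = all (\x -> f x == g x) [False, True]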
Note that the halting problem is decidable for linear bounded automata. Real computers are always bounded, and programs for them will always loop back to a previous configuration after sufficiently many steps. If you are using an unbounded (imaginary) computer to keep track of the configurations, you can detect that looping and take it into account.
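A toy sketch of that configuration-tracking idea in Haskell (the state type and step function are stand-ins, not a real machine model): iterate the machine's step function, remember every configuration seen, and report non-termination as soon as a configuration repeats.
import qualified Data.Set as Set

-- step returns Nothing when the machine halts, or Just the next configuration.
-- Over a finite configuration space this search always terminates.
haltsBounded :: Ord s => (s -> Maybe s) -> s -> Bool
haltsBounded step = go Set.empty
  where
    go seen s
      | s `Set.member` seen = False      -- configuration repeated: it loops forever
      | otherwise = case step s of
          Nothing -> True                -- no successor: the machine halts
          Just s' -> go (Set.insert s seen) s'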
You could check in your compiler to see if they are "exactly" identical, sure, but determining if they return identical values would be difficult and time consuming. You would have to basically call that method and perform its routine over an infinite number of possible calls and compare the value with that from the other routine.
Even if you could do the above, you would have to account for what global values change within the function, what objects are destroyed / changed in the function that do not affect the outcome.
You can really only compare the compiled code. So compile the compiled code to refactor?
Imagine the run time on trying to compile the code with "that" compiler. You could spend a LOT of time on here answering questions saying: "busy compiling..." :)
I think if you allow side effects, you can show that the problem can be morphed into the Post correspondence problem so you can't, in general, show if two functions are even capable of having the same side effects.
Is it impossible to know if two functions are equivalent?
No. It is possible to know that two functions are equivalent. If you have f(x), you know f(x) is equivalent to f(x).
If the question is "is it possible to determine whether f(x) and g(x) are equivalent, for arbitrary functions f and g?", then the answer is no.
However, if the question is "can a compiler determine that f(x) and g(x) are equivalent when they are in fact equivalent?", then the answer is yes, if they are equivalent in output, in side effects, and in the order of those side effects. In other words, if one is a transformation of the other that preserves behavior, then a compiler of sufficient complexity should be able to detect it. It also means that the compiler can transform a function f into a more optimal and equivalent function g given a particular definition of equivalent. It gets even more fun if f includes undefined behavior, because then g can also include undefined (but different) behavior!

Resources