What is the mechanism behind Function Application in Functional Programming?

OK, let me try to rephrase my question.
What I actually want to know is how function application is implemented in FP.
Is it done like a function call in imperative languages, where a stack frame is pushed for each call and popped on each return?
Or is it like an inline function, where the call site is replaced by the function definition?
Also, in terms of implementing function application, what is the significance of the statement that functions in FP are mappings between domains and corresponding ranges? It is obviously not possible to maintain a mapping for each domain-range pair, so what exactly does the statement imply?

This question is broad enough that I can't answer it completely, since I don't know every single functional programming language. But I can tell you how it's done in one language, F#. You asked whether function application is done like a function call in imperative languages (another stack frame added for each call) or like an inline function, and in F# the answer is both. The F# compiler is allowed to choose whether to create a stack-frame-using function call or to inline the function at the call site; generally the choice is made based on the size of the compiled function. If the function compiles down to fewer than N bytes of compiled code (I can't tell you the exact number, but knowing it doesn't actually matter), the compiler will usually inline that function call; if it takes more than N bytes, the call will use a stack frame. (Except in the case of tail-recursive calls, which are compiled to the equivalent of a goto and don't use a stack frame.)
P.S. You can force the compiler's hand by using the inline keyword, which forces that function to be inlined at the call site every time. Most F# programmers don't recommend doing that on a regular basis, because the compiler is smart enough that it's usually not a good idea to override its decisions. (Also, the inline keyword means that the types of the function's parameters must be resolvable at compile time, so there are some functions for which that changes the semantics, but that's a little off-topic for the question you asked so I won't go into it. Except to say that in F#, statically-resolved type parameters or SRTPs are a very complicated subject, and you can do some very advanced things with them if you understand them.)
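As a cross-language aside (this sketch is Haskell, not F#): GHC offers a similar knob in the form of an INLINE pragma, which instructs the compiler to inline a function at its call sites.

-- The INLINE pragma asks GHC to inline square at every call site,
-- much as F#'s inline keyword does (though the mechanisms differ).
{-# INLINE square #-}
square :: Int -> Int
square x = x * x

caller :: Int -> Int
caller n = square n + square (n + 1)  -- both calls are inlining candidates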

Related

Difference between the internal procedures and functions in Progress4gl?

Both internal procedures and functions accept parameters and produce output, so what is the benefit of using internal procedures instead of functions?
A user-defined function is used when you want to perform some calculation and return a single value. In this respect it is the same as a built-in ABL function, like the SUBSTRING or EXP functions. Putting this calculation code in a FUNCTION block instead of inline in your code allows you to put it in one place and reference it multiple times without code duplication.
An internal procedure is also an encapsulated piece of code that does some work, but it is more general-purpose. While a function must return a single value, an internal procedure may have any number of input and output parameters, or none at all.
https://docs.progress.com/category/openedge-archives
Also, function parameters and return value types (like those of methods) are checked at compile time, which removes some potential problems at run time later.
The question acknowledges that both functions and internal procedures allow OUTPUT parameters and asks "what is the use" of internal procedures instead of functions.
To me, this implies that the poster is contemplating always using functions and deprecating internal procedures and is asking: "what would I lose if I do that?"
Two things spring to mind:
Sort of the opposite of Jean-Christophe Cardot's point: you would lose some automatic type conversions and syntactic flexibility about the parameter lists. Some people see that flexibility in a negative light. Others see it as a positive.
You need to "forward declare" your functions or use dynamic invocations. With an internal procedure you can RUN it without providing a declaration earlier in the code.
If you tend to think that strict type checking is useful then these are probably not benefits that you think of as being lost. If you prefer more flexible behaviors, then you may regret choosing functions rather than internal procedures.

Julia functions: making mutable types immutable

Coming from Wolfram Mathematica, I like the idea that whenever I pass a variable to a function I am effectively creating a copy of that variable. On the other hand, I am learning that in Julia there are the notions of mutable and immutable types, with the former passed by reference and the latter passed by value. Can somebody explain the advantage of such a distinction? Why are arrays passed by reference? Naively I see this as a bad aspect, since it creates side effects and ruins the possibility of writing purely functional code. Where am I wrong in my reasoning? Is there a way to make an array immutable, such that when it is passed to a function it is effectively passed by value?
Here is an example:
# x is an Int and so is immutable: it is passed by value
x = 10
function change_value(x)
    x = 17    # rebinds the function's local x only
end
change_value(x)
println(x)    # still prints 10

# arrays are mutable: they are passed by reference
arr = [1, 2, 3]
function change_array!(A)
    A[1] = 20    # mutates the caller's array in place
end
change_array!(arr)
println(arr)    # prints [20, 2, 3]
which indeed modifies the array arr
There is a fair bit to respond to here.
First, Julia uses neither pass-by-reference nor pass-by-value. Rather, it employs a paradigm known as pass-by-sharing. Quoting the docs:
Function arguments themselves act as new variable bindings (new locations that can refer to values), but the values they refer to are identical to the passed values.
Second, you appear to be asking why Julia does not copy arrays when passing them into functions. This is a simple one to answer: Performance. Julia is a performance oriented language. Making a copy every time you pass an array into a function is bad for performance. Every copy operation takes time.
This has some interesting side-effects. For example, you'll notice that a lot of the mature Julia packages (as well as the Base code) consist of many short functions. This code structure is a direct consequence of the near-zero overhead of function calls. Languages like Mathematica and MATLAB, on the other hand, tend towards long functions. I have no desire to start a flame war here, so I'll merely state that personally I prefer the Julia style of many short functions.
Third, you are wondering about the potential negative implications of pass-by-sharing. In theory you are correct that this can result in problems when users are unsure whether a function will modify its inputs. There were long discussions about this in the early days of the language, and based on your question, you appear to have worked out that the convention is that functions that modify their arguments have a trailing ! in the function name. Interestingly, this standard is not compulsory so yes, it is in theory possible to end up with a wild-west type scenario where users live in a constant state of uncertainty. In practice this has never been a problem (to my knowledge). The convention of using ! is enforced in Base Julia, and in fact I have never encountered a package that does not adhere to this convention. In summary, yes, it is possible to run into issues when pass-by-sharing, but in practice it has never been a problem, and the performance benefits far outweigh the cost.
Fourth (and finally), you ask whether there is a way to make an array immutable. First things first, I would strongly recommend against hacks to attempt to make native arrays immutable. For example, you could attempt to disable the setindex! function for arrays... but please don't do this. It will break so many things.
As was mentioned in the comments on the question, you could use StaticArrays. However, as Simeon notes in the comments on this answer, there are performance penalties for using static arrays for really big datasets. More than 100 elements and you can run into compilation issues. The main benefit of static arrays really is the optimizations that can be implemented for smaller static arrays.
Another package-based option, suggested by phipsgabler in the comments below, is FunctionalCollections. This appears to do what you want, although it looks to be only sporadically maintained. Of course, that isn't always a bad thing.
A simpler approach is just to copy arrays in your own code whenever you want to implement pass-by-value. For example:
f!(copy(x))
Just be sure you understand the difference between copy and deepcopy, and when you may need to use the latter. If you're only working with arrays of numbers, you'll never need the latter, and in fact using it will probably drastically slow down your code.
If you wanted to do a bit of work then you could also build your own array type in the spirit of static arrays, but without all the bells and whistles that static arrays entails. For example:
struct MyImmutableArray{T,N}
    x::Array{T,N}
end

Base.getindex(y::MyImmutableArray, inds...) = getindex(y.x, inds...)
and similarly you could add any other functions you wanted to this type, while excluding functions like setindex!.

Is recursion a feature in and of itself?

...or is it just a practice?
I'm asking this because of an argument with my professor: I lost credit for calling a function recursively on the basis that we did not cover recursion in class, and my argument is that we learned it implicitly by learning return and methods.
I'm asking here because I suspect someone has a definitive answer.
For example, what is the difference between the following two methods:
public static int a() {    // int rather than void, so that "return a();" compiles
    return a();
}

public static int b() {
    return a();
}
Other than "a continues forever" (in the actual program it is used correctly to prompt a user again when provided with invalid input), is there any fundamental difference between a and b? To an un-optimized compiler, how are they handled differently?
Ultimately it comes down to whether, by learning to return a() from b, we thereby also learned to return a() from a. Did we?
To answer your specific question: No, from the standpoint of learning a language, recursion isn't a feature. If your professor really docked you marks for using a "feature" he hadn't taught yet, that was wrong.
Reading between the lines, one possibility is that by using recursion, you avoided ever using a feature that was supposed to be a learning outcome for his course. For example, maybe you didn't use iteration at all, or maybe you only used for loops instead of using both for and while. It's common that an assignment aims to test your ability to do certain things, and if you avoid doing them, your professor simply can't grant you the marks set aside for that feature. However, if that really was the cause of your lost marks, the professor should take this as a learning experience of his or her own: if demonstrating certain learning outcomes is one of the criteria for an assignment, that should be clearly explained to the students.
Having said that, I agree with most of the other comments and answers that iteration is a better choice than recursion here. There are a couple of reasons, and while other people have touched on them to some extent, I'm not sure they've fully explained the thought behind them.
Stack Overflows
The more obvious one is that you risk getting a stack overflow error. Realistically, the method you wrote is very unlikely to actually lead to one, since a user would have to give incorrect input many many times to actually trigger a stack overflow.
However, one thing to keep in mind is that not just the method itself, but other methods higher or lower in the call chain will be on the stack. Because of this, casually gobbling up available stack space is a pretty impolite thing for any method to do. Nobody wants to have to constantly worry about free stack space whenever they write code because of the risk that other code might have needlessly used a lot of it up.
This is part of a more general principle in software design called abstraction. Essentially, when you call DoThing(), all you should need to care about is that Thing is done. You shouldn't have to worry about the implementation details of how it's done. But greedy use of the stack breaks this principle, because every bit of code has to worry about how much stack it can safely assume it has left to it by code elsewhere in the call chain.
Readability
The other reason is readability. The ideal that code should aspire to is to be a human-readable document, where each line describes simply what it's doing. Take these two approaches:
private int getInput() {
    int input;
    do {
        input = promptForInput();
    } while (!inputIsValid(input));
    return input;
}
versus
private int getInput() {
    int input = promptForInput();
    if (inputIsValid(input)) {
        return input;
    }
    return getInput();
}
Yes, these both work, and yes they're both pretty easy to understand. But how might the two approaches be described in English? I think it'd be something like:
I will prompt for input until the input is valid, and then return it
versus
I will prompt for input, then if the input is valid I will return it, otherwise I get the input and return the result of that instead
Perhaps you can think of slightly less clunky wording for the latter, but I think you'll always find that the first one is going to be a more accurate description, conceptually, of what you are actually trying to do. This isn't to say recursion is always less readable. For situations where it shines, like tree traversal, you could do the same kind of side by side analysis between recursion and another approach and you'd almost certainly find recursion gives code which is more clearly self-describing, line by line.
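To make that concrete, here is a small sketch in Haskell (the language choice and the Tree type are mine, purely for illustration) where the recursive version reads almost exactly like its English description, "list the left subtree's values, then the root, then the right subtree's values":

data Tree a = Leaf | Node (Tree a) a (Tree a)

-- in-order traversal: left subtree, root, right subtree
toList :: Tree a -> [a]
toList Leaf         = []
toList (Node l x r) = toList l ++ [x] ++ toList r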
In isolation, both of these concerns (stack use and readability) are small points. It's very unlikely this would ever really lead to a stack overflow, and the gain in readability is minor. But any program is going to be a collection of many of these small decisions, so even if in isolation they don't matter much, it's important to learn the principles behind getting them right.
To answer the literal question, rather than the meta-question: recursion is a feature, in the sense that not all compilers and/or languages necessarily permit it. In practice, it is expected of all (ordinary) modern compilers - and certainly all Java compilers! - but it is not universally true.
As a contrived example of why recursion might not be supported, consider a compiler that stores the return address for a function in a static location; this might be the case, for example, for a compiler for a microprocessor that does not have a stack.
For such a compiler, when you call a function like this
a();
it is implemented as
move the address of label 1 to variable return_from_a
jump to label function_a
label 1
and the definition of a(),
function a()
{
    var1 = 5;
    return;
}
is implemented as
label function_a
move 5 to variable var1
jump to the address stored in variable return_from_a
Hopefully the problem when you try to call a() recursively in such a compiler is obvious; the compiler no longer knows how to return from the outer call, because the return address has been overwritten.
For the compiler I actually used (late 70s or early 80s, I think) with no support for recursion the problem was slightly more subtle than that: the return address would be stored on the stack, just like in modern compilers, but local variables weren't. (Theoretically this should mean that recursion was possible for functions with no non-static local variables, but I don't remember whether the compiler explicitly supported that or not. It may have needed implicit local variables for some reason.)
Looking forwards, I can imagine specialized scenarios - heavily parallel systems, perhaps - where not having to provide a stack for every thread could be advantageous, and where therefore recursion is only permitted if the compiler can refactor it into a loop. (Of course the primitive compilers I discuss above were not capable of complicated tasks like refactoring code.)
The teacher wants to know whether you have studied the material or not. Apparently you didn't solve the problem the way he taught you (the good way: iteration), and he thus considers that you didn't. I'm all for creative solutions, but in this case I have to agree with your teacher for a different reason: if the user provides invalid input too many times (e.g. by keeping Enter pressed), you'll get a stack overflow exception and your solution will crash. In addition, the iterative solution is more efficient and easier to maintain. I think that's the reason your teacher should have given you.
Deducting points because "we didn't cover recursion in class" is awful. If you learnt how to call function A which calls function B which calls function C which returns back to B which returns back to A which returns back to the caller, and the teacher didn't tell you explicitly that these must be different functions (which would be the case in old FORTRAN versions, for example), there is no reason that A, B and C cannot all be the same function.
On the other hand, we'd have to see the actual code to decide whether in your particular case using recursion is really the right thing to do. There are not many details, but it does sound wrong.
There are many points of view from which to look at the specific question you asked, but what I can say is that, from the standpoint of learning a language, recursion isn't a feature on its own. If your professor really docked you marks for using a "feature" he hadn't taught yet, that was wrong, but like I said, there are other points of view to consider here which may actually make the professor right to deduct points.
From what I can deduce from your question, using a recursive function to ask for input again on input failure is not good practice, since every recursive call pushes a new frame onto the stack. Since the recursion is driven by user input, it is possible to recurse indefinitely, resulting in a StackOverflowError.
There is no difference between the two examples you mentioned in your question in the sense of what they do (though they do differ in other ways). In both cases, a return address and all the method info are loaded onto the stack. In the recursive case, the return address is simply the line right after the method call (not exactly what you see in the code itself, but rather in the code the compiler created). In Java, C, and Python, recursion is fairly expensive compared to iteration (in general) because it requires the allocation of a new stack frame. Not to mention that you can get a stack overflow exception if the input is invalid too many times.
I believe the professor deducted points because recursion is considered a subject of its own, and it's unlikely that someone with no programming experience would think of recursion. (Of course that doesn't mean they won't, but it's unlikely.)
IMHO, I think the professor was right to deduct the points. You could easily have moved the validation part into a separate method and used it like this:
public bool foo()
{
    bool validInput = GetInput();
    while (!validInput)
    {
        MessageBox.Show("Wrong Input, please try again!");
        validInput = GetInput();
    }
    return hasWon(x, y, piece);
}
If what you did can indeed be solved in that manner then what you did was a bad practice and should be avoided.
Maybe your professor hasn't taught it yet, but it sounds like you're ready to learn the advantages and disadvantages of recursion.
The main advantage of recursion is that recursive algorithms are often much easier and quicker to write.
The main disadvantage of recursion is that recursive algorithms can cause stack overflows, since each level of recursion requires an additional stack frame to be added to the stack.
For production code, where scaling can result in many more levels of recursion in production than in the programmer's unit tests, the disadvantage usually outweighs the advantage, and recursive code is often avoided when practical.
Regarding the specific question, is recursion a feature, I'm inclined to say yes, but after re-interpreting the question. There are common design choices of languages and compilers that make recursion possible, and Turing-complete languages do exist that don't allow recursion at all. In other words, recursion is an ability that is enabled by certain choices in language/compiler design.
Supporting first-class functions makes recursion possible under very minimal assumptions; see writing loops in Unlambda for an example, or this obtuse Python 2 expression containing no self-references, loops, or assignments:
>>> map((lambda x: lambda f: x(lambda g: f(lambda v: g(g)(v))))(
... lambda c: c(c))(lambda R: lambda n: 1 if n < 2 else n * R(n - 1)),
... xrange(10))
[1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880]
Languages/compilers that use late binding, or that define forward declarations, make recursion possible. For example, while Python allows the below code, that's a design choice (late binding), not a requirement for a Turing-complete system. Mutually recursive functions often depend on support for forward declarations.
factorial = lambda n: 1 if n < 2 else n * factorial(n-1)
Statically typed languages that allow recursively defined types contribute to enabling recursion. See this implementation of the Y Combinator in Go. Without recursively-defined types, it would still be possible to use recursion in Go, but I believe the Y combinator specifically would be impossible.
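To illustrate, here is a Haskell rendering of the same idea (a sketch of mine, not the linked Go code): the recursively defined type is exactly what lets the self-application typecheck.

-- Rec is a recursively defined type: it wraps a function that
-- takes a Rec and returns an a.
newtype Rec a = In { out :: Rec a -> a }

-- The Y combinator; without the recursive type, (out x x) would not typecheck.
y :: (a -> a) -> a
y f = (\x -> f (out x x)) (In (\x -> f (out x x)))

-- Usage: factorial defined without ever naming itself.
factorial :: Integer -> Integer
factorial = y (\rec n -> if n < 2 then 1 else n * rec (n - 1))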
Recursion is a programming concept, a feature (like iteration), and a practice. As you can see from the link, there's a large domain of research dedicated to the subject. Perhaps we don't need to go that deep in the topic to understand these points.
Recursion as a feature
In plain terms, Java supports it implicitly, because it allows a method (which is basically a special function) to have "knowledge" of itself and of the other methods composing the class it belongs to. Consider a language where this is not the case: you would be able to write the body of that method a, but you wouldn't be able to include a call to a within it. The only solution would be to use iteration to obtain the same result. In such a language, you would have to make a distinction between functions aware of their own existence (by using a specific syntax token) and those which aren't! Actually, a whole group of languages do make that distinction (see the Lisp and ML families, for instance). Interestingly, Perl even allows anonymous functions (so-called lambdas) to call themselves recursively (again, with a dedicated syntax).
No recursion?
For languages which don't even support the possibility of recursion, there is often another solution, in the form of the Fixed-point combinator, but it still requires the language to support functions as so called first class objects (i.e. objects which may be manipulated within the language itself).
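For a flavour of this, Haskell exposes a fixed-point combinator as fix in Data.Function; a minimal sketch follows (note that fix itself is defined recursively below, so a language with no recursion at all would need a recursive-type trick like the one shown in the previous answer instead):

import Data.Function (fix)  -- fix f = f (fix f)

-- The recursive knot is tied by fix; the lambda never names itself.
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n < 2 then 1 else n * rec (n - 1))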
Recursion as a practice
Having that feature available in a language doesn't necessarily mean that it is idiomatic. In Java 8, lambda expressions have been included, so it might become easier to adopt a functional approach to programming. However, there are practical considerations:
the syntax is still not very recursion friendly
compilers may not be able to detect that practice and optimize it
The bottom line
Luckily (or more accurately, for ease of use), Java does let methods be aware of themselves by default, and thus supports recursion, so this isn't really a practical problem, but it still remains a theoretical one, and I suppose that your teacher wanted to address it specifically. Besides, in the light of the recent evolution of the language, it might turn into something important in the future.

How does functional programming avoid state when it seems unavoidable?

Let's say we define a function sum(a, b), functional-programming style, that returns the sum of its arguments. So far so good; all the nice things of FP without any problems.
Now let's say we run this in an environment with dynamic typing and a singleton, stateful error stream. Then let's say we pass a value of a and/or b that sum isn't designed to handle (i.e. not numbers), and it needs to indicate an error somehow.
But how? This function is supposed to be pure and side-effect-less. How does it insert an error into the global error stream without violating that?
No programming language that I know of has anything like a "singleton stateful error stream" built in, so you'd have to make one. And you simply wouldn't make such a thing if you were trying to write your program in a pure functional style.
You could, however, have a sum function that returns either the sum or an indication of an error. The type used to do this is in fact often known by the name Either. Then you could easily make a function that invokes a whole bunch of computations that could possibly return an error, and returns a list of all the errors that were encountered in the other computations. That's pretty close to what you were talking about; it's just explicitly returned rather than being global.
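As a minimal Haskell sketch of that idea (the Value type and the function names here are illustrative stand-ins for the question's dynamically typed values):

-- a toy stand-in for a dynamically typed value
data Value = Num Double | Str String

-- the sum either succeeds or reports an error, with no global state
sum' :: Value -> Value -> Either String Double
sum' (Num a) (Num b) = Right (a + b)
sum' _       _       = Left "sum: both arguments must be numbers"

-- run a batch of computations and collect every error encountered
errorsOf :: [Either String Double] -> [String]
errorsOf results = [err | Left err <- results]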
Remember, the question when you're writing a functional program is "how do I make a program that has the behavior I want?", not "how would I duplicate one particular approach taken in another programming style?". A "global stateful error stream" is a means, not an end. You can't have a global stateful error stream in pure functional style, no. But ask yourself what you're using the global stateful error stream to achieve; whatever it is, you can achieve that in functional programming, just not with the same mechanism.
Asking whether pure functional programming can implement a particular technique that depends on side effects is like asking how you use techniques from assembly in object-oriented programming. OO provides different tools for you to use to solve problems; limiting yourself to using those tools to emulate a different toolset is not going to be an effective way to work with them.
In response to comments: If what you want to achieve with your error stream is logging error messages to a terminal, then yes, at some level the code is going to have to do IO to do that.[1]
Printing to terminal is just like any other IO, there's nothing particularly special about it that makes it worthy of singling out as a case where state seems especially unavoidable. So if this turns your question into "How do pure functional programs handle IO?", then there are no doubt many duplicate questions on SO, not to mention many many blog posts and tutorials speaking precisely to that issue. It's not like it's a sudden surprise to implementors and users of pure programming languages, the question has been around for decades, and there have been some quite sophisticated thought put into the answers.
There are different approaches taken in different languages (IO monad in Haskell, unique modes in Mercury, lazy streams of requests and responses in historical versions of Haskell, and more). The basic idea is to come up with a model which can be manipulated by pure code, and hook up manipulations of the model to actual impure operations within the language implementation. This allows you to keep the benefits of purity (the proofs that apply to pure code but not to general impure code will still apply to code using the pure IO model).
The pure model has to be carefully designed so that you can't actually do anything with it that doesn't make sense in terms of actual IO. For example, Mercury does IO by having you write programs as if you're passing around the current state of the universe as an extra parameter. This pure model accurately represents the behaviour of operations that depend on and affect the universe outside the program, but only when there is exactly one state of the universe in the system at any one time, which is threaded through the entire program from start to finish. So some restrictions are put in place:
The type io is made abstract so that there's no way to construct a value of that type; the only way you can get one is to be passed one from your caller. An io value is passed into the main predicate by the language implementation to kick the whole thing off.
The mode of the io value passed in to main is declared such that it is unique. This means you can't do things that might cause it to be duplicated, such as putting it in a container or passing the same io value to multiple different invocations. The unique mode ensures that you can only pass the io value to a predicate that also uses the unique mode, and as soon as you pass it once the value is "dead" and can't be passed anywhere else.
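Here is a toy Haskell rendering of that world-threading idea (a conceptual sketch with made-up names, not Mercury's actual system; real Haskell hides this threading inside the IO monad):

-- an abstract token standing for "the state of the world"
newtype World = World ()

-- each effectful operation consumes one world and yields the next
-- (a real implementation would perform the write here)
putLine :: String -> World -> World
putLine _ w = w

program :: World -> World
program w0 =
  let w1 = putLine "hello" w0  -- the single world value is threaded
      w2 = putLine "world" w1  -- through every effectful call in order
  in w2

Note that nothing in this sketch stops you from using w0 twice; ruling that out is precisely what Mercury's unique modes add.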
[1] Note that even in imperative programs, you gain a lot of flexibility if you have your error logging system return a stream of error messages and then only actually make the decision to print them close to the outermost layer of the program. If your log calls directly write the output immediately, here are just a few things I can think of off the top of my head that become much harder to do with such a system:
Speculatively execute a computation and see whether it failed by checking whether it emitted any errors
Combine multiple high level systems into a single system, adding tags to the logs to distinguish each system
Emit debug and info log messages only if there is also an error message (so the output is clean when there are no errors to debug, and rich in detail when there are)

Do purely functional languages really guarantee immutability?

In a purely functional language, couldn't one still define an "assignment" operator, say "<-", such that the command "i <- 3", instead of directly assigning the immutable variable i, would create a copy of the entire current call stack, except replacing i with 3 in the new call stack, and execute the new call stack from that point onward? Given that no data actually changed, wouldn't that still be considered "purely functional" by definition? Of course the compiler would simply make the optimization of actually assigning 3 to i, in which case what's the difference between imperative and purely functional?
Purely functional languages, such as Haskell, have ways of modelling imperative languages, and they are not shy about admitting it either. :)
See http://www.haskell.org/tutorial/io.html, in particular 7.5:
So, in the end, has Haskell simply re-invented the imperative wheel? In some sense, yes. The I/O monad constitutes a small imperative sub-language inside Haskell, and thus the I/O component of a program may appear similar to ordinary imperative code. But there is one important difference: There is no special semantics that the user needs to deal with. In particular, equational reasoning in Haskell is not compromised. The imperative feel of the monadic code in a program does not detract from the functional aspect of Haskell. An experienced functional programmer should be able to minimize the imperative component of the program, only using the I/O monad for a minimal amount of top-level sequencing. The monad cleanly separates the functional and imperative program components. In contrast, imperative languages with functional subsets do not generally have any well-defined barrier between the purely functional and imperative worlds.
So the value of functional languages is not that they make state mutation impossible, but that they provide a way to allow you to keep the purely functional parts of your program separate from the state-mutating parts.
Of course, you can ignore this and write your entire program in the imperative style, but then you won't be taking advantage of the facilities of the language, so why use it?
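For instance, here is a tiny Haskell sketch of that separation (the function names are illustrative):

-- the pure part: no IO in the type, so the compiler enforces purity
double :: Int -> Int
double n = 2 * n

-- the imperative shell: all the IO is confined here
main :: IO ()
main = do
  line <- getLine
  print (double (read line))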
Update
Your idea is not as flawed as you assume. Firstly, if someone familiar only with imperative languages wanted to loop through a range of integers, they might wonder how this could be achieved without a way to increment a counter.
But of course instead you just write a function that acts as the body of the loop, and then make it call itself. Each invocation of the function corresponds to an "iteration step". And in the scope of each invocation the parameter has a different value, acting like an incrementing variable. Finally, the runtime can note that the recursive call appears at the end of the invocation, and so it can reuse the top of the function-call stack instead of growing it (tail call). Even this simple pattern has almost all of the flavour of your idea - including the compiler/runtime quietly stepping in and actually making mutation occur (overwriting the top of the stack). Not only is it logically equivalent to a loop with a mutating counter, but in fact it makes the CPU and memory do the same thing physically.
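Here is a sketch of that pattern in Haskell (the names are mine): the "counter" is just the parameter of a tail-recursive helper.

-- prints 1..n; each "iteration" is a tail call, so the runtime can
-- overwrite the top stack frame instead of growing the stack
countTo :: Int -> IO ()
countTo n = go 1
  where
    go i
      | i > n     = return ()
      | otherwise = do
          print i
          go (i + 1)  -- tail call: i plays the role of the mutating counter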
You mention a GetStack that would return the current stack as a data structure. That would indeed be a violation of functional purity, given that it would necessarily return something different each time it was called (with no arguments). But how about a function CallWithStack, to which you pass a function of your own, and it calls back to your function and passes it the current stack as a parameter? That would be perfectly okay. CallCC works a bit like that.
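For a flavour of that, here is a sketch using Haskell's continuation monad (CallWithStack above is this answer's hypothetical; callCC and runCont are real, from Control.Monad.Cont in the mtl package):

import Control.Monad (when)
import Control.Monad.Cont (callCC, runCont)

-- callCC hands our code a first-class "rest of the computation";
-- invoking exit abandons the current computation with the given value
clampedDouble :: Int -> Int
clampedDouble n = runCont (callCC body) id
  where
    body exit = do
      when (n < 0) (exit 0)  -- early exit for negative input
      return (n * 2)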
Haskell doesn't readily give you ways to introspect or "execute" call stacks, so I wouldn't worry too much about that particular bizarre scheme. However in general it is true that one can subvert the type system using unsafe "functions" such as unsafePerformIO :: IO a -> a. The idea is to make it difficult, not impossible, to violate purity.
Indeed, in many situations, such as when making Haskell bindings for a C library, these mechanisms are quite necessary... by using them you are removing the burden of proof of purity from the compiler and taking it upon yourself.
There is a proposal to actually guarantee safety by outlawing such subversions of the type system; I'm not too familiar with it, but you can read about it here.
Immutability is a property of the language, not of the implementation.
An operation a <- expr that copies data is still an imperative operation, if values that refer to the location a appear to have changed from the programmer's point of view.
Likewise, a purely functional language implementation may overwrite and reuse variables to its heart's content, as long as each modification is invisible to the programmer. For example, the map function can in principle overwrite a list instead of creating a new one, whenever the language implementation can deduce that the old list won't be needed anywhere.
