Is Elixir's Module.register_attribute a form of mutability? - functional-programming

Is it a way to create mutable state with modules? How can using this be a good idea? Wouldn't that kind of break the immutability idea from functional programming?

No, because it's used at compile time. It's a bit like #define in C.
You can see an example at https://gist.github.com/mprymek/8379066, where the attribute sensors is used to accumulate functions defined with the macro sensor. Once all these functions have been accumulated, you can automatically generate a function run_all which runs all of them. Of course, all of this must be done at compile time.
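Here is a minimal sketch of the accumulation idea (the module, attribute values, and function names are illustrative, not taken from the gist):

defmodule Sensors do
  # Register an attribute that accumulates every value written to it
  # instead of overwriting the previous one.
  Module.register_attribute(__MODULE__, :sensors, accumulate: true)

  @sensors :read_temperature
  def read_temperature, do: {:temperature, 21.5}

  @sensors :read_humidity
  def read_humidity, do: {:humidity, 0.4}

  # At compile time @sensors holds all the accumulated names, so
  # run_all/0 can be generated from them. No mutable state exists at runtime.
  def run_all do
    Enum.map(@sensors, fn name -> apply(__MODULE__, name, []) end)
  end
end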

Related

Julia functions: making mutable types immutable

Coming from Wolfram Mathematica, I like the idea that whenever I pass a variable to a function I am effectively creating a copy of that variable. On the other hand, I am learning that in Julia there are the notions of mutable and immutable types, with the former passed by reference and the latter passed by value. Can somebody explain to me the advantage of such a distinction? Why are arrays passed by reference? Naively I see this as a bad aspect, since it creates side effects and ruins the possibility of writing purely functional code. Where am I wrong in my reasoning? Is there a way to make an array immutable, such that when it is passed to a function it is effectively passed by value?
Here is an example:
# x is an Int and so is immutable: it is passed by value
x = 10
function change_value(x)
    x = 17
end
change_value(x)
println(x)
# arrays are mutable: they are passed by reference
arr = [1, 2, 3]
function change_array!(A)
    A[1] = 20
end
change_array!(arr)
println(arr)
which indeed modifies the array arr.
There is a fair bit to respond to here.
First, Julia is neither pass-by-reference nor pass-by-value. Rather, it employs a paradigm known as pass-by-sharing. Quoting the docs:
Function arguments themselves act as new variable bindings (new locations that can refer to values), but the values they refer to are identical to the passed values.
Second, you appear to be asking why Julia does not copy arrays when passing them into functions. This is a simple one to answer: Performance. Julia is a performance oriented language. Making a copy every time you pass an array into a function is bad for performance. Every copy operation takes time.
This has some interesting side effects. For example, you'll notice that a lot of the mature Julia packages (as well as the Base code) consist of many short functions. This code structure is a direct consequence of the near-zero overhead of function calls. Languages like Mathematica and MATLAB, on the other hand, tend towards long functions. I have no desire to start a flame war here, so I'll merely state that I personally prefer the Julia style of many short functions.
Third, you are wondering about the potential negative implications of pass-by-sharing. In theory you are correct that this can result in problems when users are unsure whether a function will modify its inputs. There were long discussions about this in the early days of the language, and based on your question, you appear to have worked out that the convention is that functions that modify their arguments have a trailing ! in the function name. Interestingly, this standard is not compulsory, so yes, it is in theory possible to end up with a wild-west scenario where users live in a constant state of uncertainty. In practice this has never been a problem (to my knowledge). The convention of using ! is enforced in Base Julia, and in fact I have never encountered a package that does not adhere to it. In summary, yes, it is possible to run into issues with pass-by-sharing, but in practice it has never been a problem, and the performance benefits far outweigh the cost.
Fourth (and finally), you ask whether there is a way to make an array immutable. First things first, I would strongly recommend against hacks to attempt to make native arrays immutable. For example, you could attempt to disable the setindex! function for arrays... but please don't do this. It will break so many things.
As was mentioned in the comments on the question, you could use StaticArrays. However, as Simeon notes in the comments on this answer, there are performance penalties for using static arrays for really big datasets. More than 100 elements and you can run into compilation issues. The main benefit of static arrays really is the optimizations that can be implemented for smaller static arrays.
Another package-based option suggested by phipsgabler in the comments below is FunctionalCollections. This appears to do what you want, although it looks to be only sporadically maintained. Of course, that isn't always a bad thing.
A simpler approach is just to copy arrays in your own code whenever you want to implement pass-by-value. For example:
f!(copy(x))
Just be sure you understand the difference between copy and deepcopy, and when you may need to use the latter. If you're only working with arrays of numbers, you'll never need the latter, and in fact using it will probably drastically slow down your code.
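For instance, here is a quick sketch of the difference using nested arrays:

# copy makes a shallow copy: a new outer array whose elements
# (here, inner arrays) are still shared with the original.
a = [[1, 2], [3, 4]]
b = copy(a)
b[1][1] = 99       # also visible as a[1][1]

# deepcopy recursively copies the nested contents as well.
c = deepcopy(a)
c[1][1] = -7       # a is unaffected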
If you wanted to do a bit of work, you could also build your own array type in the spirit of static arrays, but without all the bells and whistles that static arrays entail. For example:
struct MyImmutableArray{T,N}
    x::Array{T,N}
end

Base.getindex(y::MyImmutableArray, inds...) = getindex(y.x, inds...)
and similarly you could add any other functions you wanted to this type, while excluding functions like setindex!.
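A hypothetical usage of that type: reads are forwarded to the wrapped array, while writes fail because no setindex! method was ever defined.

a = MyImmutableArray([1, 2, 3])
a[1]        # returns 1, via the getindex forwarding above
# a[1] = 5  # would throw a MethodError, since setindex! is not defined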

Global State in Functional Programming (F#)

I want to compute some functions which are dependent on some variables (specific data on which I run the code) and global variables, which are unlikely to be changed, but I want to leave them user-tunable. Just to clarify with an example, suppose I want to declare the following function:
let multiplyByGain x =
    x * gain
Where would you declare gain, given that gain is a global constant for the whole project? In a separate module with constants? That would couple the module with this code, though. Or would you use a curried version:
let multiplyByGain x gain =
    x * gain
and then specialize it for the specific values? But suppose you have many functions like that; you will have to inject gain into all of them (in a sort of linking module)?
In my specific problem this becomes more cumbersome because both x and gain are arrays which must have the same length; suppose I have to do an Array.zip, for example. What is the best practice in terms of functional design to address a global constant, such as gain, in a general way?
P.S.: I have found this old post, but it addresses only a specific problem.
There is no single correct answer to the question and the best approach will depend on a variety of other constraints and requirements that you have. Also, it depends on whether you are asking specifically about F# or whether you are asking about functional programming more generally. I think there are three main points:
Keeping it simple.
Using a module that exposes gain as a global value, with some initialization code that reads it from configuration, seems like a good default approach in F#. If it changes only rarely (say, before you run the whole computation), then mutation is not going to cause you any trouble. You just need to be careful to avoid changing the value while some computation is still running. I think most F# programmers tend to be quite pragmatic about this, and it seems like the easiest thing to start with.
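A minimal sketch of that approach (the module name and the hard-coded value are illustrative; a real project might read the value from a file or environment variable):

module Config =
    // Initialized once when the module is loaded.
    let gain = 2.0

let multiplyByGain x = x * Config.gain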
Unit testing.
If you want to unit test your multiplyByGain function with different values of gain, then you'll need some way of passing different values of gain to the function from your unit tests. In this case, having it as an additional parameter and using currying is nice, because you can just call the function with other values of gain from your tests.
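For example, a small sketch of the curried style (names are illustrative):

// gain comes first so it can be baked in via partial application.
let multiplyByGain gain x = x * gain

// Production code specializes it once...
let multiplyByDefaultGain = multiplyByGain 2.0

// ...while a unit test can pass any gain it likes.
let result = multiplyByGain 3.0 10.0   // 30.0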
Functional programming.
Some functional language communities (especially Haskell and, sometimes, Scala) are way more strict about state. The purely functional way of keeping state would be to use monads (either the reader monad or some kind of free monad structure). This makes your code a lot more complicated (both conceptually and in terms of extra syntactic overhead), but it is a purely functional solution that eliminates state. In F#, this kind of approach is even more cumbersome, so it's not very common.

What is the mechanism behind Function Application in Functional Programming

OK, let me try to rephrase my question.
Actually, I wanted to know how function application is implemented in FP.
Is it done like a function call in imperative languages, where a stack frame is added for each call and removed on each return?
Or is it like inline functions, where the function call statement is replaced by the function definition?
Also, in terms of the implementation of function application, what is the significance of the statement that functions in FP are mappings between domains and corresponding ranges? It is obviously not possible to maintain a mapping for each domain-range entry pair, so what exactly does the statement imply...
This question is broad enough that I can't answer it completely, since I don't know every single functional programming language. But I can tell you how it's done in one language, F#.
You asked whether function application is done like a function call in imperative languages (another stack frame added for each call) or whether it's done as inline functions... and in F# the answer is both. The F# compiler is allowed to choose whether to create a stack-frame-using function call, or whether to inline the function at the call site; generally the choice is made based on the size of the compiled function. If the function compiles down to fewer than N bytes of compiled code (I can't tell you the exact number, but knowing it doesn't actually matter) then the compiler will usually inline that function call; if it takes more than N bytes then the call will use a stack frame. (Except in the case of tail-recursive calls, which are compiled to the equivalent of a goto and don't use a stack frame.)
P.S. You can force the compiler's hand by using the inline keyword, which forces that function to be inlined at the call site every time. Most F# programmers don't recommend doing that on a regular basis, because the compiler is smart enough that it's usually not a good idea to override its decisions. (Also, the inline keyword means that the types of the function's parameters must be resolvable at compile time, so there are some functions for which that changes the semantics, but that's a little off-topic for the question you asked so I won't go into it. Except to say that in F#, statically-resolved type parameters or SRTPs are a very complicated subject, and you can do some very advanced things with them if you understand them.)
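A small sketch of the inline keyword and the statically-resolved typing it enables (the function is illustrative):

// inline forces expansion at every call site, and lets the compiler
// resolve the (+) constraint per call, so the same function works
// for ints, floats, and any other type supporting +.
let inline addTwice x y = x + y + y

let a = addTwice 1 2        // int: 5
let b = addTwice 1.0 2.5    // float: 6.0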

Objects Referential transparency for functional programming in Java 8

I have been studying functional programming, and one of the requirements is that functions are pure, in the sense that they only return the computed value and do not touch anything else or throw exceptions; they also don't access shared mutable objects. This makes them inherently thread safe.
So what would be the correct approach to implementing a pure function that takes objects as arguments rather than primitive values? Would I have to deep-clone them when passing them to a function?
If the function is a pure function, i.e. it does not modify existing objects, whether they are passed as parameters or lying around somewhere else, there is no sense in copying or cloning the argument objects.
You could also see it the other way round: if cloning arguments is necessary, the invoked code is not functional, and cloning the arguments doesn't turn it into functional code; it's actually working around a design flaw.
In the best case, you would be working with immutable objects, which prevent modification intrinsically. However, using immutable objects doesn't change the way the functional code should behave; it just enforces some aspects of it. When a particular class does not offer immutable objects, you can still use it in the right way, without the need to re-implement it in an immutable way.
Generally, it is not a good idea to develop your code assuming that all other code will misbehave and that it is your code's task to solve the issues of that misbehavior.
The objects should ideally be immutable. Immutability and functional programming go hand in hand.
If having all immutable objects isn't feasible, then yes, you would ideally need to make deep copies of everything to ensure changes to them don't affect anything else outside of the function.
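A minimal sketch of such an immutable value class (the class is illustrative): all fields are final, there are no setters, and operations return new instances instead of mutating state, so a pure function can accept it without any cloning.

public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    // A pure operation: computes a new Point, leaves this one untouched.
    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}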

Do purely functional languages really guarantee immutability?

In a purely functional language, couldn't one still define an "assignment" operator, say <-, such that the command i <- 3, instead of directly assigning to the immutable variable i, would create a copy of the entire current call stack, except replacing i with 3 in the new call stack, and execute the new call stack from that point onward? Given that no data actually changed, wouldn't that still be considered "purely functional" by definition? Of course, the compiler would simply make the optimization of assigning 3 to i directly, in which case what's the difference between imperative and purely functional?
Purely functional languages, such as Haskell, have ways of modelling imperative languages, and they are not shy about admitting it either. :)
See http://www.haskell.org/tutorial/io.html, in particular 7.5:
So, in the end, has Haskell simply re-invented the imperative wheel? In some sense, yes. The I/O monad constitutes a small imperative sub-language inside Haskell, and thus the I/O component of a program may appear similar to ordinary imperative code. But there is one important difference: There is no special semantics that the user needs to deal with. In particular, equational reasoning in Haskell is not compromised. The imperative feel of the monadic code in a program does not detract from the functional aspect of Haskell. An experienced functional programmer should be able to minimize the imperative component of the program, only using the I/O monad for a minimal amount of top-level sequencing. The monad cleanly separates the functional and imperative program components. In contrast, imperative languages with functional subsets do not generally have any well-defined barrier between the purely functional and imperative worlds.
So the value of functional languages is not that they make state mutation impossible, but that they provide a way to allow you to keep the purely functional parts of your program separate from the state-mutating parts.
Of course, you can ignore this and write your entire program in the imperative style, but then you won't be taking advantage of the facilities of the language, so why use it?
Update
Your idea is not as flawed as you assume. Firstly, if someone familiar only with imperative languages wanted to loop through a range of integers, they might wonder how this could be achieved without a way to increment a counter.
But of course instead you just write a function that acts as the body of the loop, and then make it call itself. Each invocation of the function corresponds to an "iteration step". And in the scope of each invocation the parameter has a different value, acting like an incrementing variable. Finally, the runtime can note that the recursive call appears at the end of the invocation, and so it can reuse the top of the function-call stack instead of growing it (tail call). Even this simple pattern has almost all of the flavour of your idea - including the compiler/runtime quietly stepping in and actually making mutation occur (overwriting the top of the stack). Not only is it logically equivalent to a loop with a mutating counter, but in fact it makes the CPU and memory do the same thing physically.
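In Haskell, a minimal sketch of that pattern looks like this (the function is illustrative):

-- Summing 1..n with an accumulator: each recursive call rebinds
-- i and acc to new values, and the tail call lets the runtime
-- reuse the stack frame instead of growing it.
sumTo :: Int -> Int
sumTo n = go 1 0
  where
    go i acc
      | i > n     = acc
      | otherwise = go (i + 1) (acc + i)   -- tail call

Logically this is pure, but physically the runtime overwrites the same stack slots on each iteration, just like a mutating loop counter would.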
You mention a GetStack that would return the current stack as a data structure. That would indeed be a violation of functional purity, given that it would necessarily return something different each time it was called (with no arguments). But how about a function CallWithStack, to which you pass a function of your own, and it calls back to your function and passes it the current stack as a parameter? That would be perfectly okay. CallCC works a bit like that.
Haskell doesn't readily give you ways to introspect or "execute" call stacks, so I wouldn't worry too much about that particular bizarre scheme. However in general it is true that one can subvert the type system using unsafe "functions" such as unsafePerformIO :: IO a -> a. The idea is to make it difficult, not impossible, to violate purity.
Indeed, in many situations, such as when making Haskell bindings for a C library, these mechanisms are quite necessary... by using them you are removing the burden of proof of purity from the compiler and taking it upon yourself.
There is a proposal to actually guarantee safety by outlawing such subversions of the type system; I'm not too familiar with it, but you can read about it here.
Immutability is a property of the language, not of the implementation.
An operation a <- expr that copies data is still an imperative operation if values that refer to the location a appear to have changed from the programmer's point of view.
Likewise, a purely functional language implementation may overwrite and reuse variables to its heart's content, as long as each modification is invisible to the programmer. For example, the map function could in principle overwrite a list instead of creating a new one, whenever the language implementation can deduce that the old list won't be needed anywhere.
