Does a pure functional language lose its purity if global variables are allowed?
I mean, does having global variables affect the referential transparency of the language?
I suppose not, because of value semantics, but I'm not sure and would like to know what other people think.
In a pure functional language, "variable" means something different from what it usually means in imperative languages. It is not variable in the sense that it can be reassigned within a given scope, but rather in the sense that each time it comes into scope, it may have a different value. For the lifetime of that scope, however, it remains constant. So, for example, in the function
f x y = x + y
x and y are variables which become bound when f is applied to them. Once bound, they never change within the scope of that invocation, they simply go out of scope at some point. Other invocations will bind x and y to different values. That is the sense in which functional variables "vary", which is closer (some might say identical) to the original mathematical meaning of a variable.
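To make that concrete, here is a small usage sketch in Haskell (the type signature is added for clarity):

f :: Int -> Int -> Int
f x y = x + y

main :: IO ()
main = do
  print (f 1 2)    -- this invocation binds x to 1 and y to 2
  print (f 10 20)  -- a fresh invocation binds x and y anew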
So, to your question: do global variables ruin purity? No, because global variables, since they never go out of scope, are effectively constants.
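For example, here is a minimal Haskell sketch (gravity and fallDistance are illustrative names): a top-level binding is bound once and never reassigned, so substituting its value anywhere in the program changes nothing, and referential transparency is preserved.

-- A top-level "global variable" in a pure functional language
-- is simply a constant: it is bound once and never reassigned.
gravity :: Double
gravity = 9.81

-- Every call site sees the same value, so replacing 'gravity'
-- by 9.81 anywhere in the program changes nothing.
fallDistance :: Double -> Double
fallDistance t = 0.5 * gravity * t * t

main :: IO ()
main = print (fallDistance 2.0)  -- 19.62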
Mutable variables don't break referential transparency as long as reading and writing them happens in a context that allows side effects. For example, in Haskell the most basic type of mutable variable is IORef. Passing an IORef around doesn't break referential transparency, and reading or writing an IORef is only allowed within the IO monad.
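A small sketch of what that looks like, using the standard Data.IORef module:

import Data.IORef

main :: IO ()
main = do
  -- Creating, reading, and writing an IORef are all IO actions,
  -- so the mutation is confined to code that is already impure.
  counter <- newIORef (0 :: Int)
  modifyIORef counter (+1)
  modifyIORef counter (+1)
  n <- readIORef counter
  print n  -- 2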
Assignment should be avoided in functional programming, but in Clojure we often use let.
Is let just a way of being practical, or is assignment not the same as using let? Should we not avoid assignment in functional programming?
Mutable state is generally against the core concepts of functional programming.
However, let merely binds a name to a value. If that value is immutable, there's no reason for it to be inconsistent with functional programming ideals.
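To illustrate, here is a hedged sketch in Haskell, where let behaves the same way as Clojure's let for immutable values: it introduces names, it does not create mutable cells.

area :: Double -> Double
area r =
  let piApprox = 3.14159    -- a name bound to a value, not a mutable cell
      rSquared = r * r      -- each binding is introduced once...
  in  piApprox * rSquared   -- ...and never reassigned within its scope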
One cannot say that assignment in general is against the idea of functional programming (FP).
A def expression is an assignment, and so is a let expression. Giving names to things and to procedures/functions is a means of abstraction, and programming consists, to a large part, of applying abstraction to recurring problems.
The imperative style misuses assignment for mutation, and thus for creating and maintaining (global) state. Mutation is not possible without assignment.
So FP aims against that kind of mutation, not against assignment per se.
Actually, FP does not even aim against mutation per se.
Even in functional languages, mutation is sometimes required for performance reasons.
There is harmless mutation: mutation of variables that are never referred to again for the rest of the program, e.g. because they appear only within a certain scope (such as the scope of a let expression or a function definition). I tend to call these 'benign' mutations. And there is harmful mutation: mutation of variables that are referred to later, variables that go on living outside the scope they were created in, thus constituting a kind of unbounded state. I call these 'malign' mutations.
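As an illustration of a 'benign' mutation, here is a sketch using Haskell's ST monad (the standard Control.Monad.ST and Data.STRef modules): the accumulator is mutated freely, but no reference to it can escape runST, so the function as a whole stays pure.

import Control.Monad.ST
import Data.STRef

-- The mutation of acc is 'benign': it is confined to the scope
-- of runST and can never be observed from outside.
sumTo :: Int -> Int
sumTo n = runST $ do
  acc <- newSTRef 0
  mapM_ (\i -> modifySTRef' acc (+ i)) [1 .. n]
  readSTRef acc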
Actually, it is also wrong to say that FP avoids state altogether.
Closures actually constitute state in FP. Through closures, functions can refer to hidden variables which keep a "memory", a state, between different function calls. But they are applied in a very controlled manner.
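A minimal sketch of such a closure in Haskell (makeCounter is an illustrative name; it lives in IO precisely because this hidden "memory" is a controlled effect):

import Data.IORef

-- makeCounter returns a closure over a hidden IORef; each call
-- to the returned action remembers the count from previous calls.
makeCounter :: IO (IO Int)
makeCounter = do
  ref <- newIORef 0
  pure $ do
    modifyIORef' ref (+1)
    readIORef ref

main :: IO ()
main = do
  next <- makeCounter
  next >>= print  -- 1
  next >>= print  -- 2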
Probably this is why defining FP is so difficult: very quickly, one has oversimplified something, thereby causing more confusion than clarity.
I want to compute some functions that depend on some variables (the specific data on which I run the code) and on global variables that are unlikely to change, but that I want to leave user-tunable. Just to clarify with an example, suppose I want to declare the following function:
let multiplyByGain x =
    x * gain
Where would you declare gain, given that gain is a global constant for the whole project? In a separate module with constants? That would couple the module with this code, though. Or would you use a curried version:
let multiplyByGain x gain =
    x * gain
and then specialize it for the specific values? But suppose you have many functions like that; you would have to inject gain into all of them (in a sort of linking module)?
In my specific problem this becomes more cumbersome because both x and gain are arrays that must have the same length (suppose I have to do an Array.zip, for example). What is the best practice, in terms of functional design, for addressing a global constant such as gain in a general way?
P.S.: I have found this old post, but it addresses only a specific problem.
There is no single correct answer to the question and the best approach will depend on a variety of other constraints and requirements that you have. Also, it depends on whether you are asking specifically about F# or whether you are asking about functional programming more generally. I think there are three main points:
Keeping it simple.
Using a module that exposes gain as a global value, with some initialization code that reads the configuration, seems like a good default approach in F#. If the value changes only rarely (say, before you run the whole computation), then mutation is not going to cause you any trouble. You just need to be careful to avoid changing the value while some computation is still running. I think most F# programmers tend to be quite pragmatic about this, and it seems like the easiest thing to start with.
Unit testing.
If you want to unit test your multiplyByGain function with different values of gain, then you'll need some way of passing them to the function from your unit tests. In this case, having gain as an additional parameter and using currying is nice, because you can just call the function with other values of gain from your tests.
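As a hedged sketch (in Haskell syntax for brevity; the F# version with partial application is analogous, and the 2.0 default is illustrative):

-- gain is an explicit parameter, so a test can pass any value...
multiplyByGain :: Double -> Double -> Double
multiplyByGain gain x = x * gain

-- ...while production code specializes it once.
multiplyByDefaultGain :: Double -> Double
multiplyByDefaultGain = multiplyByGain 2.0

main :: IO ()
main = print (multiplyByDefaultGain 3.0)  -- 6.0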
Functional programming.
Some functional language communities (especially Haskell and, sometimes, Scala) are much more strict about state. The purely functional way of keeping such state would be to use monads (either the reader monad or some kind of free monad structure). This makes your code a lot more complicated (both conceptually and in terms of extra syntactic overhead), but it is a purely functional solution that eliminates mutable state. In F#, this kind of approach is even more cumbersome, so it's not very common.
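For illustration, a minimal reader-monad sketch in Haskell (assuming the mtl package; Config and its gain field are illustrative names):

import Control.Monad.Reader

-- The configuration is threaded implicitly by Reader instead of
-- living in a mutable module-level variable.
newtype Config = Config { gain :: Double }

multiplyByGain :: Double -> Reader Config Double
multiplyByGain x = do
  g <- asks gain
  pure (x * g)

main :: IO ()
main = print (runReader (multiplyByGain 3.0) (Config 2.0))  -- 6.0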
I'm aware that there are several definitions for functional programming. I think it's a nebulous category. My personal definition is something close to 'referential transparency'.
This question is not 'What is the definition of functional programming?'. The assumption is that what we know as functional programming is a grab-bag of a couple of different ideas with some unclear boundaries.
Now the quite amazing book Structure and Interpretation of Computer Programs contains the following reference to the term functional programming.
Programming without any use of assignments, as we did throughout the first two chapters of this book, is accordingly known as functional programming.
To me that seemed odd.
My question is: Can 'programming without assignment' be considered within the definition of functional programming?
Yes, I think it can, though Scala and LISP users would probably call it a quite narrow definition. But while the one true definition of functional programming remains controversial, we can certainly infer something about the style of programming without assignments.
I assume here that by assignment we mean mutation of a variable. Note that this is quite different from binding:
int i;
i = 1; // overwrite whatever i is with 1
versus
let i = 1 in .... -- say that i is a name for an expression, here 1
Once you have no assignment, there is no mutation. When there is no mutation, certain constructs like loops become useless: every variable is just a name for an expression that is constant in the context of the loop, so the loop would run either never or forever. The only way to have "varying" variables is the application of a function to some value, which binds the argument name to that value within, and for the lifetime of, that function invocation. And the only way to loop is recursion. This, in turn, makes functions eminently important, and as a bonus, all functions are by necessity pure, since there is no mutation.
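For example, a loop that sums a list becomes a recursive function in Haskell; each recursive call rebinds the argument names to new values instead of mutating a counter:

sumList :: [Int] -> Int
sumList []       = 0               -- the "loop" terminates
sumList (x : xs) = x + sumList xs  -- each call binds x and xs afresh

main :: IO ()
main = print (sumList [1, 2, 3, 4])  -- 10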
So, there you have it: without mutation, all that is left is programming with pure functions (if we don't count approaches to declarative programming without functions, which turn out to be less general and more specialized to certain tasks; think SQL or Prolog).
Now we can get some popcorn before we decide the question if programming (only) with pure functions is indeed functional programming. :)
According to http://java-bytes.blogspot.com/2009/10/hashcode-of-string-in-java.html: "First off, its a known fact that there is no perfect hashing algorithm, for which there are no collisions."
The author is talking practically and not theoretically, right? Because theoretically, here is a perfect hash function: "for a given object, assign it a new number". There are infinitely many numbers, so we'll always have something unique to assign to each object. In practice this isn't feasible, though, because we have a limited amount of memory.
Typically, a hash function maps from one set of objects (the universe) to a smaller set of objects (the codomain). Commonly, the universe is an infinite set, such as the set of all strings or the set of all numbers, and the codomain is a finite set, such as the set of all 512-bit strings, or the set of all numbers between 0 and some number k, etc. In Java, the hashCode function on objects has a codomain of values that can be represented by an int, which is all 32-bit integers.
I believe that what the author is talking about when they say "there is no perfect hash function" is that there is no possible way to map the infinite set of all strings into the set of all 32-bit integers without having at least one collision. In fact, if you pick 2^32 + 1 different strings, you're guaranteed to have at least one collision.
Your argument - couldn't we just assign each object a different hash code? - makes the implicit assumption that the codomain of the hash function is infinite. For example, if you were to try this approach to build a hash function for strings, the codomain of the hash function would have to be at least as large as the set of all possible natural numbers, since there are infinitely many strings. Most programming languages don't support hash codes that work this way, though you're correct that in theory this would work. Of course, someone might object and say that this doesn't count as a valid hash function, since typically hash functions have finite codomains.
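To make the pigeonhole argument concrete, here is a sketch in Haskell of the recurrence behind Java's String.hashCode (h = 31*h + c over the character codes, with 32-bit wrap-around); the well-known pair "Aa" and "BB" already collides:

import Data.Char (ord)
import Data.Int (Int32)
import Data.List (foldl')

-- h = 31*h + c, truncated to 32 bits. Because the codomain
-- (Int32) is finite, collisions are unavoidable.
javaHash :: String -> Int32
javaHash = foldl' (\h c -> 31 * h + fromIntegral (ord c)) 0

main :: IO ()
main = do
  print (javaHash "Aa")  -- 2112
  print (javaHash "BB")  -- 2112, a collision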
Hope this helps!
I'm currently trying to master Erlang. It's the first functional programming language that I've looked into, and I noticed that in Erlang, every assignment you do is a single assignment. And apparently, not just in Erlang but in many other functional programming languages, assignments are done through single assignment.
I'm really confused about why they made it like that. What exactly is the purpose of single assignment? What benefits can we get from it?
Immutability (what you call single assignment) simplifies a lot of things, because it takes the "time" variable out of your programs.
For example, in mathematics if you say
x = y
You can replace x with y, everywhere. In imperative programming languages you can't ensure that this equality holds: there is a "time" (state) associated with each line of code. This time/state also leaves the door open to undesired side effects, which are the number-one enemy of modularity and concurrency.
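A small sketch of that substitution property in Haskell (the names are illustrative):

x :: Int
x = 42

y :: Int
y = x

-- Since no assignment can ever change x or y, the expressions
-- x + y, x + x, and 42 + 42 are interchangeable everywhere.
main :: IO ()
main = print (x + y)  -- 84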
Because of single assignment, side effects are minimal. In fact, it's hard to write code with race conditions or other side effects in Erlang, because the compiler easily flags unused variables, terms that are created but never used, shadowed variables (especially inside funs), etc. Another advantage Erlang gains from this is referential transparency: a function in Erlang depends only on the variables passed to it and NOT on global variables, except macros (and macros cannot be changed at run time; they are constants). Lastly, if you have watched the Erlang Movie, the sophisticated error-detection mechanism built into Erlang depends very much on the fact that variables are assigned once.
Having variables keep their values makes it much easier to understand and debug the code. With concurrent processes you get the same kind of problems anyway, so there is enough complication already without any variable being able to change its value at any time. Think of it as encapsulating side effects by allowing them only where they are explicit.