Elixir allows assigning to a variable two times [duplicate] - functional-programming

In Dave Thomas's book Programming Elixir he states "Elixir enforces immutable data" and goes on to say:
In Elixir, once a variable references a list such as [1,2,3], you know it will always reference those same values (until you rebind the variable).
This sounds like "it won't ever change unless you change it", so I'm confused as to what the difference between mutability and rebinding is. An example highlighting the difference would be really helpful.

Don't think of "variables" in Elixir as variables in imperative languages, i.e. "spaces for values". Rather, look at them as "labels for values".
Maybe you would better understand it when you look at how variables ("labels") work in Erlang. Whenever you bind a "label" to a value, it remains bound to it forever (scope rules apply here of course).
In Erlang (where variable names must start with an uppercase letter) you cannot write this:
V = 1, % value "1" is now "labelled" "V"
% wherever you write "1", you can write "V" and vice versa
% the "label" and its value are interchangeable
V = V+1, % you cannot change the label (rebind it)
V = V*10, % you cannot change the label (rebind it)
instead you must write this:
V1 = 1, % value "1" is now labelled "V1"
V2 = V1+1, % value "2" is now labelled "V2"
V3 = V2*10, % value "20" is now labelled "V3"
As you can see, this is very inconvenient, mainly when refactoring code. If you want to insert a new line after the first line, you have to renumber all the V* variables or write something like "V1a = ...".
So in Elixir you can rebind variables (change the meaning of the "label"), mainly for your convenience:
v = 1 # value "1" is now labelled "v"
v = v+1 # label "v" is changed: now "2" is labelled "v"
v = v*10 # value "20" is now labelled "v"
Summary: In imperative languages, variables are like named suitcases: you have a suitcase named "v". First you put a sandwich in it. Then you put an apple in it (the sandwich is lost, and perhaps eaten by the garbage collector). In Erlang and Elixir, a variable is not a place to put something in; it's just a name/label for a value. In Elixir you can change the meaning of the label; in Erlang you cannot. That's the reason it doesn't make sense to "allocate memory for a variable" in either Erlang or Elixir: variables do not occupy space, values do. Now perhaps you see the difference clearly.
If you want to dig deeper:
1) Look at how "unbound" and "bound" variables work in Prolog. This is the source of Erlang's slightly strange concept of "variables which do not vary".
2) Note that "=" in Erlang really is not an assignment operator; it's just a match operator! When matching an unbound variable with a value, you bind the variable to that value. Matching a bound variable is just like matching the value it's bound to, so this will yield a match error:
V = 1,
V = 2, % in fact this is matching: 1 = 2
3) This is not the case in Elixir, so Elixir needs special syntax (the ^ pin operator) to force matching:
v = 1
v = 2 # rebinding variable to 2
^v = 3 # matching: 2 = 3 -> error
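The pin operator matters mostly inside patterns, for example in a case clause; a small sketch (compute/0 is a hypothetical function, not from the original answer):
expected = 42
case compute() do
  ^expected -> :ok              # matches only when compute() returns 42
  other -> {:unexpected, other} # without ^, the first clause would rebind expected and match anything
end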

Immutability means that data structures don't change. For example, the function HashSet.new returns an empty set, and as long as you hold on to the reference to that set, it will never become non-empty. What you can do in Elixir, though, is throw away a variable's reference to something and rebind the variable to a new reference. For example:
s = HashSet.new
s = HashSet.put(s, :element)
s # => #HashSet<[:element]>
What cannot happen is the value under that reference changing without you explicitly rebinding it:
s = HashSet.new
ImpossibleModule.impossible_function(s)
s # => #HashSet<[:element]> will never be returned; instead you always get #HashSet<[]>
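Note that HashSet has since been deprecated in Elixir; with MapSet, the current standard-library set, the same example would look roughly like this (the exact inspect output depends on your Elixir version):
s = MapSet.new()
s = MapSet.put(s, :element)
s # => MapSet.new([:element])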
Contrast this with Ruby, where you can do something like the following:
s = Set.new
s.add(:element)
s # => #<Set: {:element}>

Erlang, and obviously Elixir, which is built on top of it, embrace immutability.
They simply don't allow the value in a given memory location to change, ever, until the variable gets garbage collected or goes out of scope.
Variables aren't the immutable thing. The data they point to is the immutable thing. That's why changing a variable is referred to as rebinding.
You're pointing it at something else, not changing the thing it points to.
x = 1 followed by x = 2 doesn't change the data stored in computer memory where the 1 was to a 2. It puts a 2 in a new place and points x at it.
x is only accessible by one process at a time, so this has no impact on concurrency, and concurrency is the main reason to care whether something is immutable anyway.
Rebinding doesn't change the state of an object at all: the value is still in the same memory location, but its label (the variable) now points to another memory location, so immutability is preserved. Rebinding is not available in Erlang, but allowing it in Elixir does not break any constraint imposed by the Erlang VM, thanks to how it is implemented.
The reasons behind this choice are well explained by José Valim in this gist.
Let's say you had a list
l = [1, 2, 3]
and you had another process that takes lists and repeatedly performs "stuff" on them, where changing a list mid-process would be bad. You might send that list like this:
send(worker, {:dostuff, l})
Now, your next bit of code might want to update l with more values for further work that's unrelated to what that other process is doing.
l = l ++ [4, 5, 6]
Oh no, now that first process is going to have undefined behavior because you changed the list, right? Wrong.
That original list remains unchanged. What you really did was make a new list based on the old one and rebind l to that new list.
The separate process never has access to l. The data l originally pointed at is unchanged and the other process (presumably, unless it ignored it) has its own separate reference to that original list.
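A more self-contained sketch of that scenario (the worker process and the message shape here are illustrative, not from the original snippet):
l = [1, 2, 3]

worker =
  spawn(fn ->
    receive do
      {:dostuff, list} -> IO.inspect(list, label: "worker sees")
    end
  end)

send(worker, {:dostuff, l})
l = l ++ [4, 5, 6]                        # rebinds l in this process only
IO.inspect(l, label: "sender now has")    # [1, 2, 3, 4, 5, 6]
Process.sleep(100)                        # give the worker a moment to print
# the worker still prints: worker sees: [1, 2, 3]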
What matters is that you can't share data across processes and then change it while another process is looking at it. In a language like Java, where you have some mutable types (all primitive types plus references themselves), it would be possible to share a structure/object that contained, say, an int and change that int from one thread while another was reading it.
In fact, it's possible to change a large integer type in Java partially while it's being read by another thread. Or at least it used to be; I'm not sure if they clamped that aspect of things down with the 64-bit transition. Anyway, the point is, you can pull the rug out from under other processes/threads by changing data in a place that both are looking at simultaneously.
That's not possible in Erlang and by extension Elixir. That's what immutability means here.
To be a bit more specific: in Erlang (the original language for the VM Elixir runs on), all variables are single-assignment and immutable, and Elixir is hiding a pattern Erlang programmers developed to work around this.
In Erlang, if A = 3, then 3 is what A's value will be for the duration of that variable's existence, until it drops out of scope and is garbage collected.
This was useful at times (nothing changes after assignment or pattern match, so it is easy to reason about what a function is doing) but also a bit cumbersome if you were doing multiple things to a variable or collection over the course of executing a function.
Code would often look like this:
A=input,
A1=do_something(A),
A2=do_something_else(A1),
A3=more_of_the_same(A2)
This was a bit clunky and made refactoring more difficult than it needed to be. Elixir is doing this behind the scenes, but hiding it from the programmer via macros and code transforms performed by the compiler.
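A rough Elixir equivalent of the Erlang snippet above, with the compiler doing the renaming for you (input, do_something, and friends are placeholders from the snippet, not real functions):
a = input
a = do_something(a)
a = do_something_else(a)
a = more_of_the_same(a)
Or, written with the pipe operator:
input
|> do_something()
|> do_something_else()
|> more_of_the_same()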
Great discussion here: immutability-in-elixir

The variables really are immutable in a sense: every new rebinding (assignment) is only visible to accesses that come after it. All previous accesses still refer to the old value(s) from the time they were made.
foo = 1
call_1 = fn -> IO.puts(foo) end
foo = 2
call_2 = fn -> IO.puts(foo) end
foo = 3
foo = foo + 1
call_3 = fn -> IO.puts(foo) end
call_1.() #prints 1
call_2.() #prints 2
call_3.() #prints 4

To make it very simple:
Variables in Elixir are not like containers where you keep adding, removing, or modifying the items inside.
Instead, they are like labels attached to a container: when you reassign a variable, it is as simple as picking the label off one container and placing it on a new container with the expected data in it.

Related

bulk adding to a map, in F#

I've a simple type:
type Token =
    {
        Symbol: string
        Address: string
        Decimals: int
    }
and a memory cache (they're in a db):
let mutable private tokenCache : Map<string, Token> = Map.empty
part of the Tokens module.
Sometimes I get a few new entries to add, in the form of a Token array, and I want to update the cache.
It happens very rarely (less than once per million reads).
When I update the database with the new batch, I want to update the cache map as well and I just wrote this:
tokenCache <- tokens |> Seq.fold (fun m i -> m.Add(i.Symbol, i)) tokenCache
Since this is happening rarely, I don't really care about the performance so this question is out of curiosity:
When I do this, the map will be recreated once per entry in the tokens array: 10 new tokens, 10 map re-creations. I assumed this was the most 'F#' way to deal with this. It got me thinking: wouldn't converting the map to a list of KVPs, running distinct over it, and re-creating a map be more efficient? Or is there another method I haven't thought of?
This is not an answer to the question as stated, but a clarification to something you asked in the comments.
This premise that you have expressed is incorrect:
the map will be recreated once per entry in the tokens array
The map doesn't actually get completely recreated for every insertion. But at the same time, another hypothesis that you have expressed in the comments is also incorrect:
so the immutability is from the language's perspective, the compiler doesn't recreate the object behind the scenes?
Immutability is real. But the map also doesn't get recreated every time. Sometimes it does, but not every time.
I'm not going to describe exactly how Map works, because that's too involved. Instead, I'll illustrate the principle on a list.
F# lists are "singly linked lists", which means each list consists of two things: (1) first element (called "head") and (2) a reference (pointer) to the rest of elements (called "tail"). The crucial thing to note here is that the "rest of elements" part is also itself a list.
So if you declare a list like this:
let x = [1; 2; 3]
It would be represented in memory something like this:
x -> 1 -> 2 -> 3 -> []
The name x is a reference to the first element, and then each element has a reference to the next one, and the last one - to empty list. So far so good.
Now let's see what happens if you add a new element to this list:
let y = 42 :: x
Now the list y will be represented like this:
y -> 42 -> 1 -> 2 -> 3 -> []
But this is only half the picture. If we look at the memory in a wider scope than just y, we'll see this:
x -> 1 -> 2 -> 3 -> []
     ^
     |
y -> 42
So you see that the y list consists of two things (as all lists do): first element 42 and a reference to the rest of the elements 1->2->3. But the "rest of the elements" bit is not exclusive to y, it has its own name x.
And so it is that you have two lists x and y, of 3 and 4 elements respectively, but together they occupy just 4 cells of memory, not 7.
And another thing to note is that when I created the y list, I did not have to recreate the whole list from scratch, I did not have to copy 1, 2, and 3 from x to y. Those cells stayed right where they are, and y only got a reference to them.
And a third thing to note is that this means that prepending an element to a list is an O(1) operation. No copying of the list involved.
And a fourth (and hopefully final) thing to note is that this approach is only possible because of immutability. It is only because I know that the x list will never change that I can take a reference to it. If it was subject to change, I would be forced to copy it just in case.
This sort of arrangement, where each new version of a data structure is built "on top of" the previous one, is called a "persistent data structure" (well, to be more precise, it's one kind of persistent data structure).
The way it works is very easy to see for linked lists, but it also works for more involved data structures, including maps (which are represented as trees).

Creating copies in Julia with = operator

When I create some array A and assign it to B
A = [1:10]
B = A
I can modify A and the change reflects in B
A[1] = 42
# B[1] is now 42
But if I do that with scalar variables, the change doesn't propagate:
a = 1
b = a
a = 2
# b remains being 1
I can even mix the things up and transform the vector to a scalar, and the change doesn't propagate:
A = [1:10]
B = A
A = 0
# B remains being 1,2,...,10
What exactly does the = operator do? When I want to copy variables and modify the old ones while preserving the integrity of the new ones, when should I use b = copy(a) over just b = a?
The confusion stems from this: assignment and mutation are not the same thing.
Assignment. Assignment looks like x = ... – what's left of the = is an identifier, i.e. a variable name. Assignment changes which object the variable x refers to (this is called a variable binding). It does not mutate any objects at all.
Mutation. There are two typical ways to mutate something in Julia:
x.f = ... – what's left of the = is a field access expression;
x[i] = ... – what's left of the = is an indexing expression.
Currently, field mutation is fundamental – that syntax can only mean that you are mutating a structure by changing its field. This may change. Array mutation syntax is not fundamental – x[i] = y means setindex!(x, y, i), and you can either add methods to setindex! or locally change which generic function setindex! refers to. Actual array assignment is a builtin – a function implemented in C (and for which we know how to generate corresponding LLVM code).
Mutation changes the values of objects; it doesn't change any variable bindings. After doing either of the above, the variable x still refers to the same object it did before; that object may have different contents, however. In particular, if that object is accessible from some other scope – say the function that called one doing the mutation – then the changed value will be visible there. But no bindings have changed – all bindings in all scopes still refer to the same objects.
You'll note that in this explanation I never once talked about mutability or immutability. That's because it has nothing to do with any of this – mutable and immutable objects have exactly the same semantics when it comes to assignment, argument passing, etc. The only difference is that if you try to do x.f = ... when x is immutable, you will get an error.
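For comparison with the Elixir question at the top of this page: Elixir has rebinding but no in-place mutation of this kind, so the scenario where a change to A shows up in B cannot arise; a small sketch (List.replace_at/3 returns a new list rather than mutating):
a = [1, 2, 3]
b = a
a = List.replace_at(a, 0, 42)
a # => [42, 2, 3]
b # => [1, 2, 3]  (unchanged)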
This behavior is similar to Java. A and B are variables that can hold either a "plain" data type, such as an integer, float, etc., or a reference (aka a pointer) to a more complex data structure. In contrast to Java, Julia handles many non-abstract types as "plain" data.
You can test with isbits(A) whether your variable A holds a bits value or contains a reference to another data object. In the first case, B = A will copy every bit of A to a new memory allocation for B; otherwise, only the reference to the object will be copied.
Also play around with pointer_from_objref(A).

Replacing functions with Table Lookups

I've been watching this MSDN video with Brian Beckman and I'd like to better understand something he says:
Every imperative programmer goes through this phase of learning that
functions can be replaced with table lookups
Now, I'm a C# programmer who never went to university, so perhaps somewhere along the line I missed out on something everyone else learned to understand.
What does Brian mean by:
functions can be replaced with table lookups
Are there practical examples of this being done and does it apply to all functions? He gives the example of the sin function, which I can make sense of, but how do I make sense of this in more general terms?
Brian just showed that functions are data too. A function in general is just a mapping from one set to another: y = f(x) is a mapping of set {x} to set {y}: f: X -> Y. Tables are mappings as well: [x1, x2, ..., xn] -> [y1, y2, ..., yn].
If a function operates on a finite set (which is the case in programming), then it can be replaced with a table that represents that mapping. As Brian mentioned, every imperative programmer goes through the phase of realizing that functions can be replaced with table lookups, usually for performance reasons.
But that doesn't mean all functions can easily, or should, be replaced with tables. It only means that you could theoretically do it for every function. So the conclusion is that functions are data, because tables are (in the context of programming, of course).
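To make the idea concrete in Elixir (the language of the question at the top of this page): a function over a small finite domain can literally be replaced by a map built once up front; a minimal sketch (key order in the inspect output may vary):
square = Map.new(0..5, fn n -> {n, n * n} end)
# => %{0 => 0, 1 => 1, 2 => 4, 3 => 9, 4 => 16, 5 => 25}
square[2]  # a lookup instead of a function call
# => 4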
There is a lovely trick in Mathematica that creates a table as a side-effect of evaluating function-calls-as-rewrite-rules. Consider the classic slow Fibonacci:
fib[1] = 1
fib[2] = 1
fib[n_] := fib[n-1] + fib[n-2]
The first two lines create table entries for the inputs 1 and 2. This is exactly the same as saying
fibTable = {};
fibTable[1] = 1;
fibTable[2] = 1;
in JavaScript. The third line of Mathematica says "please install a rewrite rule that will replace any occurrence of fib[n_], after substituting the pattern variable n_ with the actual argument of the occurrence, with fib[n-1] + fib[n-2]." The rewriter will iterate this procedure, and eventually produce the value of fib[n] after an exponential number of rewrites. This is just like the recursive function-call form that we get in JavaScript with
function fib(n) {
    var result = fibTable[n] || ( fib(n-1) + fib(n-2) );
    return result;
}
Notice it checks the table first for the two values we have explicitly stored before making the recursive calls. The Mathematica evaluator does this check automatically, because the order of presentation of the rules is important -- Mathematica checks the more specific rules first and the more general rules later. That's why Mathematica has two assignment forms, = and :=: the former is for specific rules whose right-hand sides can be evaluated at the time the rule is defined; the latter is for general rules whose right-hand sides must be evaluated when the rule is applied.
Now, in Mathematica, if we say
fib[4]
it gets rewritten to
fib[3] + fib[2]
then to
fib[2] + fib[1] + 1
then to
1 + 1 + 1
and finally to 3, which does not change on the next rewrite. You can imagine that if we say fib[35], we will generate enormous expressions, fill up memory, and melt the CPU. But the trick is to replace the final rewrite rule with the following:
fib[n_] := fib[n] = fib[n-1] + fib[n-2]
This says "please replace every occurrence of fib[n_] with an expression that will install a new specific rule for the value of fib[n] and also produce the value." This one runs much faster because it expands the rule-base -- the table of values! -- at run time.
We can do likewise in JavaScript
function fib(n) {
    var result = fibTable[n] || ( fib(n-1) + fib(n-2) );
    fibTable[n] = result;
    return result;
}
This runs MUCH faster than the prior definition of fib.
This is called "automemoization" [sic -- not "memorization" but "memoization" as in creating a memo for yourself].
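In a purely functional setting you can get the same effect without a mutable table by threading the memo through the recursion; a minimal Elixir sketch (module and function names are illustrative), where the cache map plays the role of fibTable:
defmodule Fib do
  # Returns {value, cache}; the cache grows as values are computed.
  def fib(n, cache \\ %{1 => 1, 2 => 1}) do
    case cache do
      %{^n => value} ->
        {value, cache}
      _ ->
        {a, cache} = fib(n - 1, cache)
        {b, cache} = fib(n - 2, cache)
        value = a + b
        {value, Map.put(cache, n, value)}
    end
  end
end

{value, _cache} = Fib.fib(35)
value # => 9227465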
Of course, in the real world, you must manage the sizes of the tables that get created. To inspect the tables in Mathematica, do
DownValues[fib]
To inspect them in JavaScript, do just
fibTable
in a REPL such as the one supported by Node.js.
In the context of functional programming, there is the concept of referential transparency. A function that is referentially transparent can be replaced with its value for any given argument (or set of arguments), without changing the behaviour of the program.
Referential Transparency
For example, consider a function F that takes 1 argument, n. F is referentially transparent, so F(n) can be replaced with the value of F evaluated at n. It makes no difference to the program.
In C#, this would look like:
using System;

public class Square
{
    public static int apply(int n)
    {
        return n * n;
    }

    public static void Main()
    {
        // Should print 4
        Console.WriteLine(Square.apply(2));
    }
}
(I'm not very familiar with C#, coming from a Java background, so you'll have to forgive me if this example isn't quite syntactically correct).
It's obvious here that the function apply cannot have any other value than 4 when called with an argument of 2, since it's just returning the square of its argument. The value of the function only depends on its argument, n; in other words, referential transparency.
I ask you, then, what the difference is between Console.WriteLine(Square.apply(2)) and Console.WriteLine(4). The answer is, there's no difference at all, for all intents and purposes. We could go through the entire program, replacing all instances of Square.apply(n) with the value returned by Square.apply(n), and the results would be the exact same.
So what did Brian Beckman mean by his statement about replacing function calls with a table lookup? He was referring to this property of referentially transparent functions. If Square.apply(2) can be replaced with 4 with no impact on program behaviour, then why not just cache the value when the first call is made and put it in a table indexed by the function's arguments? A lookup table for values of Square.apply(n) would look somewhat like this:
n: 0 1 2 3 4 5 ...
Square.apply(n): 0 1 4 9 16 25 ...
And for any call to Square.apply(n), instead of calling the function, we can simply find the cached value for n in the table, and replace the function call with this value. It's fairly obvious that this will most likely bring about a large speed increase in the program.

modifying an element of a list in-place in J, can it be done?

I have been playing with an implementation of lookandsay (OEIS A005150) in J. I have made two versions, both very simple, using while.-style control structures. One recurses, the other loops. Because I am compulsive, I started running comparative timings on the versions.
Look and say is the sequence 1 11 21 1211 111221, that is: "one 1", "two 1s", etc.
For early elements of the list (up to around 20) the looping version wins, but only by a tiny amount. Timings around 30 cause the recursive version to win, by a large enough amount that the recursive version might be preferred if the stack space were adequate to support it. I looked at why, and I believe that it has to do with handling intermediate results. The 30th number in the sequence has 5808 digits. (32nd number, 9898 digits, 34th, 16774.)
When you are doing the problem with recursion, you can hold the intermediate results in the recursive call, and the unstacking at the end builds the results so that there is minimal handling of the results.
In the list version, you need a variable to hold the result. Every loop iteration causes you to need to add two elements to the result.
The problem, as I see it, is that I can't find any way in J to modify an extant array without completely reassigning it. So I am saying
try. o =. o,e,(0&{y) catch. o =. e,(0&{y) end.
to put an element into o where o might not have a value when we start. That may be notably slower than
o =. i.0
.
.
.
o =. (,o),e,(0&{y)
The point is that the result gets the wrong shape without the ravels, or so it seems. It is inheriting a shape from i.0 somehow.
But even functions like } (amend) don't modify a list; they return a list that has a modification made to it, and if you want to save the list you need to assign it. As the size of the assigned list increases (as you walk the number from beginning to end making the next number), the assignment seems to take more and more time. This assignment is really the only thing I can see that would make element 32 (9898 digits) take less time in the recursive version, while element 20 (408 digits) takes less time in the loopy version.
The recursive version builds the return with:
e,(0&{y),(,lookandsay e }. y)
The above line is both the return line from the function and the recursion, so the whole return vector gets built at once as the call gets to the end of the string and everything unstacks.
In APL I thought that one could say something on the order of:
a[1+rho a] <- new element
But when I try this in NARS2000 I find that it causes an index error. I don't have access to any other APL, I might be remembering this idiom from APL Plus, I doubt it worked this way in APL\360 or APL\1130. I might be misremembering it completely.
I can find no way to do that in J. It might be that there is no way to do that, but the next thought is to pre-allocate an array that could hold results, and to change individual entries. I see no way to do that either - that is, J does not seem to support the APL idiom:
a<- iota 5
a[3] <- -1
Is this one of those side effect things that is disallowed because of language purity?
Does the interpreter recognize a=. a,foo or some of its variants as a thing that it should fastpath to a[>:#a]=.foo internally?
This is the recursive version, just for the heck of it. I have tried a bunch of different versions and I believe that the longer the program, the slower, and generally, the more complex, the slower. Generally, the program can be chained so that if you want the nth number you can do lookandsay^: n ] y. I have tried a number of optimizations, but the problem I have is that I can't tell what environment I am sending my output into. If I could tell that I was sending it to the next iteration of the program I would send it as an array of digits rather than as a big number.
I also suspect that if I could figure out how to make a tacit version of the code, it would run faster, based on my finding that when I add something to the code that should make it shorter, it runs longer.
lookandsay=: 3 : 0
if. 0 = # ,y do. return. end. NB. return on empty argument
if. 1 ~: ##$ y do. NB. convert rank 0 argument to list of digits
y =. (10&#.^:_1) x: y
f =. 1
assert. 1 = ##$ y NB. the converted argument must be rank 1
else.
NB. yw =. y
f =. 0
end.
NB. e should be a count of the digits that match the leading digit.
e=.+/*./\y=0&{y
if. f do.
o=. e,(0&{y),(,lookandsay e }. y)
assert. e = 0&{ o
10&#. x: o
return.
else.
e,(0&{y),(,lookandsay e }. y)
return.
end.
)
I was interested in the characteristics of the numbers produced. I found that if you start with a 1, the numerals never get higher than 3. If you start with a numeral higher than 3, it will survive as a singleton, and you can also get a larger digit into the generated numbers by starting with something like 888888888, which will generate a number with one 9 in it and a single 8 at the end. But other than the singletons, no digit gets higher than 3.
Edit:
I did some more measuring. I had originally written the program to accept either a vector or a scalar, the idea being that internally I'd work with a vector. I had thought about passing a vector from one layer of code to the other, and I still might, using a left argument to control the code. When I pass the top level a vector, the code runs enormously faster, so my guess is that most of the CPU is being eaten by converting very long numbers to and from vectors of digits. The recursive routine always passes down a vector when it recurs, which might be why it is almost as fast as the loop.
That does not change my question.
I have an answer for this which I can't post for three hours. I will post it then; please don't do a ton of research to answer it.
Assignments like
arr=. 'z' 15} arr
are executed in place. (See JWiki article for other supported in-place operations)
The interpreter determines that only a small portion of arr is updated and does not create an entire new list to reassign.
What happens in your case is not that the array is being reassigned, but that it grows many times in small increments, causing repeated memory allocation and reallocation.
If you preallocate (by assigning it some large chunk of data), then you can modify it with } without too much penalty.
After I asked this question, to be honest, I lost track of this web site.
Yes, the answer is that the language has no form that means "update in place", but if you use one of two forms
x =: x , most anything
or
x =: most anything } x
then the interpreter recognizes those as special and does update in place unless it can't. There are a number of other specials recognized by the interpreter, like:
199(1000&|#^)199
That combined operation is modular exponentiation. It never calculates the whole exponentiation, as
199(1000&|^)199
would - that just ends as _ without the #.
So it is worth reading the article on specials. I will mark someone else's answer up.
The link that sverre provided above ( http://www.jsoftware.com/jwiki/Essays/In-Place%20Operations ) shows the various operations that support modifying an existing array rather than creating a new one. They include:
myarray=: myarray,'blah'
If you are interested in a tacit version of the lookandsay sequence see this submission to RosettaCode:
las=: ,#((# , {.);.1~ 1 , 2 ~:/\ ])&.(10x&#.inv)#]^:(1+i.#[)
5 las 1
11 21 1211 111221 312211

How does change of state work under the hood in functional languages

I would like to know how functional languages implement, "under the hood", the creation of a new state of, for example, a Vector. When I have a Vector and I add another element to it, the old one is still there unchanged, and a new Vector is created that contains the old elements plus one more.
How is this handled internally? Could someone try to explain it? Thank you.
Conceptually, a new Vector is created each time the Vector is extended or modified. However, since the original Vector is unmodified, clever techniques may be used to share structure. See ropes for example.
Also see Okasaki's Purely Functional Data Structures.
If you prepend an element to a linked list, a new linked list is created with the new element as its head and a pointer to the old list as its tail.
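In Elixir (the language of the question at the top of this page) this is exactly what list prepending does; the new list's tail is the old list, which is never touched:
old = [1, 2, 3]
new = [0 | old]   # one new cons cell pointing at the existing list
old # => [1, 2, 3] (unchanged)
new # => [0, 1, 2, 3]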
If you add an item to an array, the whole array is usually copied (making it quite inefficient to build up an immutable array incrementally).
However if you only add to the end of each array once, like so:
arr1 = emptyArray()
arr2 = add(arr1, 1)
arr3 = add(arr2, 2)
arr4 = add(arr3, 3)
The whole thing could be optimized, so that arr1, arr2, arr3 and arr4, all have pointers to the same memory, but different lengths. Of course this optimization can only be performed the first time you add to any given array. If you have arr5 = add(arr4, 4) and arr5prime = add(arr4, 42) at least one of them needs to be a copy.
Note that this isn't a common optimization, so you shouldn't expect it to be there unless explicitly stated in the documentation.
