Just discovered a nasty bug in my program based on the fact that Julia does not copy arrays when defining a closure. This makes continuation programming hard. What was the motivation for this design choice?
Any suggestions for decoupling the state of my closure from the program state?
As an example
l = [2 1; 0 0];
f = x -> l[2,2];
Then f(1) = 0 but if you change l[2,2] = 1, then f(1) = 1.
Your assumption that this is a "closure" does not hold. l is not a "closed" variable in the context of the anonymous function at that point. It is simply a reference to a variable inherited from 'external' scope (since it has not been redefined locally inside the anonymous function).
Here's an example of a true closure:
f = let l = [2 1; 0 0]
    x -> l[2,2]
end
The variable l now is local to the let block, and not present at global scope. f still has access to it, even though it has technically gone out of scope. This is what a closure means.
As a result of l having gone out of scope, it is no longer accessible except through f, which is a closure that holds it as a closed variable.
PS. I'm going to go out on a limb here and assume that what you're expecting was matlab-like behaviour. The big difference with matlab is that when you define an anonymous function handle there, it captures the current state of the workspace by copying all the variables and making them part of the function 'object'. You can confirm this by using the functions command. Matlab doesn't have references in the same way as julia. This is a strength of julia, not a weakness, as it allows the user to make use of optimizations that avoid reallocation of memory, which are harder to achieve in matlab*.
* though in fairness, matlab shines in other ways, by attempting to optimise this for you
EDIT: Liso pointed out a very important pitfall in the comments. Assume l already exists in the global workspace, and we type
let l=l
while this is perfectly valid syntax, making l a local variable to the let block, this is still initialised simply as a reference to the global l. Therefore any changes to the global l will still affect the closure, which is not what you want. In this case, you should be trying to 'mimic' matlab behaviour by making a copy (or a deep copy, depending on your use case), such that the local variable is truly independent of anything else once it goes out of scope and becomes 'closed' i.e.
let l = deepcopy(l)
Also, for completeness, when one makes a closure in julia, it is worth pointing out how this is implemented under the hood: your resulting f function is simply a callable object, containing a field for each 'closed' variable it needs to be aware of; you could even access this as f.l.
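To answer the practical part of the question (decoupling the closure's state from the program state), here is a minimal sketch that combines the above; the exact printed form of f.l will depend on your Julia version:

julia> l = [2 1; 0 0];

julia> f = let l = copy(l)   # copy (or deepcopy) decouples the captured array
               x -> l[2,2]
           end;

julia> f(1)
0

julia> l[2,2] = 1;           # mutate the "global" l

julia> f(1)                  # the closure still sees its own copy
0

julia> f.l                   # the captured array, stored as a field of the closure
2×2 Matrix{Int64}:
 2  1
 0  0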
As a novice to Julia, I, like many others, am perplexed by the fact that loops in Julia create their own local scope (but not on the REPL nor within functions). There is much discussion online about this topic, but most of the questions here are about the particulars of this behaviour, such as why doing a=1 inside the loop doesn't affect variable a outside the loop, but a[1]=1 works. I get how it works now, for the most part.
My question is why Julia was implemented with this behaviour. Is there a benefit to this from the perspective of the user? I cannot think of one. Or was it necessary for some technical reason?
I apologise if this has been asked already, but all the questions and answers I've seen so far were about how this works and how to deal with it, whereas I am curious about WHY Julia was implemented this way.
Firstly, loops in Julia only introduce a new scope of the sort that hides variables existing outside the loop (as per your complaint) if the scope outside the loop is global scope. So, for instance
function foo()
    a = 0
    # Loop does not hide existing variable `a`, will work just fine
    for i = 100
        a += i^2
    end
    return a
end
julia> foo()
10000
in other words
# Anywhere other than global scope
a = 0
for i = 100
    a += i^2
end
a == 10000 # TRUE
This is because in Julia, as in many many other languages, global scope may be considered harmful. At the very least, a for loop with global scope would encounter significant performance penalties. For instance, consider the following:
julia> a = 0
0
julia> @time for i=1:100
           # Technically this "global" keyword is superfluous since we're running this at the REPL, but it doesn't hurt to be explicit
           global a += rand()^2
       end
0.000022 seconds (200 allocations: 3.125 KiB)
julia> function bar()
           a = 0
           for i=1:100
               a += rand()^2
           end
           return a
       end
bar (generic function with 1 method)
julia> @time bar()
0.000002 seconds
33.21364180865362
Note the massive difference in allocations (the bottom version has zero) and the ~10x time difference.
Now, you may have noticed I used a special keyword global there in the global example, but since this was being run in the REPL, that doesn't actually do anything other than make it explicit what is happening.
That brings us to the other significant difference you have noticed: when run in the REPL, for loops appear not to introduce a new scope, even though the REPL is certainly global scope. This is because it turns out to be a huge pain when debugging to have to add a bunch of global qualifiers to code you have copy-pasted from somewhere deeper in your program (say within a function, where loops do not hide outside variables). So for the sake of convenience when debugging, the REPL effectively adds those global keywords for you, making the presumption that if you cared about performance you wouldn't just be pasting raw loops into the REPL, and if you are just pasting raw loops into the REPL, you're probably debugging or something.
In a script, however, it is presumed that you do care about performance, so you will get a warning (or, depending on the Julia version, an error) if you try to assign to a global variable within a loop without explicitly declaring it as such.
The details are substantially more complicated, as the other answer explains in more technically correct terms. Some of this complication, as far as I know, is due to a reversal of the decision on whether global variables should be assignable by default within a loop in the REPL, which happened around the time of Julia v0.7.
for loops in Julia introduce a so-called local (soft) scope, see https://docs.julialang.org/en/v1/manual/variables-and-scoping/#man-scope-table.
The rules for local (soft) scope are (quoting):
If x is not already a local variable and all of the scope constructs containing the assignment are soft scopes (loops, try/catch blocks, or struct blocks), the behavior depends on whether the global variable x is defined:
if global x is undefined, a new local named x is created in the scope of the assignment;
if global x is defined, the assignment is considered ambiguous:
in non-interactive contexts (files, eval), an ambiguity warning is printed and a new local is created;
in interactive contexts (REPL, notebooks), the global variable x is assigned.
So your statement:
why doing a=1 inside the loop doesn't affect variable a outside the loop
is only true in non-interactive contexts, provided the for loop is not inside a hard local scope (typically, when the for loop is at global scope) and the variable you assign to is defined in global scope. Even then, you will get a warning.
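For example, here is roughly what that rule looks like in practice (my own sketch, assuming Julia 1.5 or later):

# Contents of a script, say scope_demo.jl, run non-interactively with `julia scope_demo.jl`:
a = 0
for i in 1:3
    a = 1      # assignment to a name that also exists as a global
end
@show a
# Non-interactive run: Julia prints an ambiguity warning, treats `a` inside the
# loop as a new local, and the global `a` is still 0 afterwards.
# Pasting the same lines into the REPL instead assigns the global, so a == 1.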
Now the crucial part of your question is I think:
My question is why Julia was implemented with this behaviour. Is there a benefit to this from the perspective of the user?
The answer is that a for loop creates a new binding, on each iteration, for a variable that is defined within its scope. To see the consequence consider the following code (I assume that variable x is not defined in the enclosing scope, so that x is defined in local scope):
julia> v = []
Any[]
julia> for i in 1:2
           x = i
           push!(v, () -> x)
       end
julia> v[1]()
1
julia> v[2]()
2
We have created two anonymous functions and everything works as you probably expected.
Now let us check what would happen in Python:
>>> v = []
>>> for i in range(1, 3):
...     x = i
...     v.append(lambda: x)
...
>>> v[0]()
2
>>> v[1]()
2
The result might surprise you. Both anonymous functions return 2. This is a consequence of not creating a local variable with a new binding in each iteration of the loop.
However, if in Julia you were working in the REPL and x were defined in global scope, you would get:
julia> x = 0
0
julia> v = []
Any[]
julia> for i in 1:2
           x = i
           push!(v, () -> x)
       end
julia> v[1]()
2
julia> v[2]()
2
just like in Python.
The other consideration, as explained in the other answer is performance. But most likely performance critical code is written inside a function anyway, and the discussed performance considerations are only relevant in global scope.
EDIT
This is a design choice of Matlab, quoting from https://research.wmz.ninja/articles/2017/05/closures-in-matlab.html:
When an anonymous function is created, the immediate values of the referenced local variables will be captured. Hence if any changes to the referenced local variables made after the creation of this anonymous function will not affect this anonymous function.
So as you can see, in Matlab there is a difference between an anonymous function and a nested function, which behaves differently:
When a nested function is created, the immediate values of the referenced local variables will not be captured. When the nested function is called, it will use the current values of the referenced local variables.
In Julia there is no such difference as you can see in the examples above.
And quoting the documentation of Matlab https://www.mathworks.com/help/matlab/matlab_prog/anonymous-functions.html:
Because a, b, and c are available at the time you create parabola, the function handle includes those values. The values persist within the function handle even if you clear the variables:
(but I think it is not as explicit as the explanation I linked above)
I know about recursion, but I don't know how it's possible. I'll use the following example to further explain my question.
(def (pow (x, y))
(cond ((y = 0) 1))
(x * (pow (x , y-1))))
The program above is in the Lisp language. I'm not sure if the syntax is correct since I came up with it in my head, but it will do. In the program, I am defining the function pow, and in pow it calls itself. I don't understand how it's able to do this. From what I know the computer has to completely analyze a function before it can be defined. If this is the case, then the computer should give an undefined message when I use pow because I used it before it was defined. The principle I'm describing is the one at play when you use an x in x = x + 1, when x was not defined previously.
Compilers are much smarter than you think.
A compiler can turn the recursive call in this definition:
(defun pow (x y)
  (cond ((zerop y) 1)
        (t (* x (pow x (1- y))))))
into a goto instruction to re-start the function from scratch:
Disassembly of function POW
(CONST 0) = 1
2 required arguments
0 optional arguments
No rest parameter
No keyword parameters
12 byte-code instructions:
0 L0
0 (LOAD&PUSH 1)
1 (CALLS2&JMPIF 172 L15) ; ZEROP
4 (LOAD&PUSH 2)
5 (LOAD&PUSH 3)
6 (LOAD&DEC&PUSH 3)
8 (JSR&PUSH L0)
10 (CALLSR 2 57) ; *
13 (SKIP&RET 3)
15 L15
15 (CONST 0) ; 1
16 (SKIP&RET 3)
If this were a more complicated recursive function that a compiler cannot unroll into a loop, it would merely call the function again.
From what I know the computer has to completely analyze a function before it can be defined.
When the compiler sees that one defines a function POW, it tells itself: now we are defining the function POW. If it then sees a call to POW inside the definition, the compiler says to itself: oh, this seems to be a call to the function that I'm currently compiling, and it can then create code to make a recursive call.
A function is just a block of code. Its name is just a convenience so that you don't have to calculate the exact address it will end up at. The programming language turns the names into the addresses the program jumps to when it executes.
One function calls another by storing the address of the next instruction in the calling function on the stack, perhaps pushing arguments onto the stack as well, and then jumping to the address of the called function. The called function eventually jumps to the return address it finds there, so that control goes back to the caller. There are several calling conventions, implemented by the language, that determine which side does what. CPUs don't really have function support, so just as there is no such thing as a while loop at the CPU level, functions are emulated.
Just like functions have names, arguments have names too; however, they are mere pointers, just like the return address. When a function calls itself, it just pushes a new return address and new arguments onto the stack and jumps to itself. The top of the stack will be different, so the same variable names refer to addresses unique to that call: x and y in the previous call live somewhere else than the current x and y. In fact, no more special treatment is needed for a function calling itself than for calling anything else.
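Here is a small illustration of that point in Julia (the function name is mine); each call has its own x living in its own stack frame:

function count_down(x)
    println("entering with x = ", x)
    x > 0 && count_down(x - 1)
    println("returning with x = ", x)   # still sees the x of *this* call
end

count_down(2)
# entering with x = 2
# entering with x = 1
# entering with x = 0
# returning with x = 0
# returning with x = 1
# returning with x = 2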
Historically, the first high-level language, Fortran, did not support recursion. A function could call itself, but when it returned it went back to the original caller without doing the rest of the function after the self call. Fortran itself would have been impossible to write without recursion, so while its implementation used recursion, it did not offer it to the programmers that used it. This limitation is the reason why John McCarthy discovered Lisp.
I think to see how this can work in general, and in particular in cases where recursive calls can't be turned into loops, it's worth thinking about how a general compiled language might work, because the problems are not different.
Let's imagine how a compiler might turn this function into machine code:
(defun foo (x)
(+ x (bar x)))
And let's assume that it does not know anything about bar at the time of compilation. Well, it has two options.
It can compile foo in such a way that the call to bar is translated into a set of instructions which say, 'look up the function definition stored under the name bar, whatever it currently is, and arrange to call that function with the right arguments'.
It can compile foo in such a way that there is a machine-level function call to a function, but the address of that function is left as a placeholder of some kind. And it can then attach some metadata to foo which says: 'before this function is called you need to find the function named bar, find its address, splice it into the code in the right place, and remove this metadata'.
Both of these mechanisms allow foo to be defined before it's known what bar is. And note that instead of bar I could have written foo: these mechanisms deal with recursive calls too. They differ apart from that, however.
The first mechanism means that, every time foo is called it needs to do some kind of dynamic lookup for bar which will involve some overhead (but this overhead can be pretty small):
as a consequence of this the first mechanism will be slightly slower than it might be;
but, also as a consequence of this, if bar gets redefined, then the new definition will get picked up, which is a very desirable thing for an interactive language, which Lisp implementations usually are.
The second mechanism means that, after foo has all its references to other functions linked in to it, then the calls happen at the machine level:
this means they will be quick;
but that redefinition will be, at best, more complicated or, at worst, not possible at all.
The second of these implementations is close to how traditional compilers compile code: they compile code leaving a bunch of placeholders with associated metadata saying what names those placeholders correspond to. A linker (sometimes known as a link-loader, or loader) then grovels over all the files produced by the compiler as well as other libraries of code and resolves all these references, resulting in a bit of code which can actually be run.
A very simple-minded Lisp system might work entirely by the first mechanism (I am pretty sure that this is how Python works, for instance). A more advanced compiler will probably work by some combination of the first and second mechanism. As an example of this, CL allows the compiler to make assumptions that apparent self-calls in functions really are self-calls, and so the compiler may well compile them as direct calls (essentially it will compile the function and then link it on the fly). But when compiling code in general, it might call 'through the name' of the function.
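To see what the first mechanism feels like in practice, here is a small Julia sketch (the names foo and bar mirror the ones above; this shows the observable behaviour at the interactive prompt, not a claim about how Julia implements it internally):

julia> bar(x) = x + 1;

julia> foo(x) = x + bar(x);

julia> foo(1)
3

julia> bar(x) = x + 10;    # redefine bar

julia> foo(1)              # foo picks up the new definition
12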
There are also more-or-less heroic strategies which things could do: for instance at the first call of a function link it, on the fly, to all the things it refers to, and note in their definitions that if they change then this thing needs to be unlinked as well so it all happens again. These kind of tricks once seemed implausible, but compilers for languages like JavaScript do things at least as hairy as this all the time now.
Note that compilers and linkers for modern systems actually do something more complicated than I've described, because of shared libraries &c: what I described is more-or-less what happened pre shared-library.
Does anyone know the reasons why Julia chose a design of functions where the parameters given as inputs cannot be modified? This requires, if we want that behaviour anyway, going through a very artificial process: representing the data in the form of a ridiculous single-element array.
Ada, which had the same kind of limitation, abandoned it in its 2012 redesign, to the great satisfaction of its users. A small keyword (like out in Ada) could very well indicate that modifications to a parameter should be kept after the call returns.
From my experience in Julia it is useful to understand the difference between a value and a binding.
Values
Each value in Julia has a concrete type and a location in memory. A value can be mutable or immutable. In particular, when you define your own composite type you can decide if objects of this type should be mutable (mutable struct) or immutable (struct).
Of course Julia has built-in types, and some of them are mutable (e.g. arrays) while others are immutable (e.g. numbers, strings). There are design trade-offs between them. From my perspective the two major benefits of immutable values are:
if a compiler works with immutable values it can perform many optimizations to speed up code;
a user can be sure that passing an immutable value to a function will not change it, and such encapsulation can simplify code analysis.
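To make the mutable/immutable distinction above concrete, here is a minimal sketch (the types and names are mine, not from the original answer):

struct Point            # immutable: fields cannot be reassigned after construction
    x::Int
    y::Int
end

mutable struct MPoint   # mutable counterpart
    x::Int
    y::Int
end

p = Point(1, 2)
# p.x = 10              # would throw an error: fields of an immutable struct cannot be changed

m = MPoint(1, 2)
m.x = 10                # fine: MPoint is mutable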
In particular, if you want to wrap an immutable value in a mutable wrapper, a standard way to do it is to use Ref, like this:
julia> x = Ref(1)
Base.RefValue{Int64}(1)
julia> x[]
1
julia> x[] = 10
10
julia> x
Base.RefValue{Int64}(10)
julia> x[]
10
You can pass such values to a function and modify them inside. Of course Ref introduces a different type so method implementation has to be a bit different.
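As a minimal sketch of that (the increment! helper is hypothetical, not part of Base):

julia> increment!(r::Ref{Int}) = (r[] += 1; r);   # mutates the wrapper passed to it

julia> x = Ref(1);

julia> increment!(x);

julia> x[]
2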
Variables
A variable is a name bound to a value. In general, except for some special cases like:
rebinding a variable from module A in module B;
redefining some constants, e.g. trying to reassign a function name with a non-function value;
rebinding a variable that has a specified type of allowed values with a value that cannot be converted to this type;
you can rebind a variable to point to any value you wish. Rebinding is performed most of the time using = or some special constructs (like in for, let or catch statements).
Now - getting to the point - function is passed a value not a binding. You can modify a binding of a function parameter (in other words: you can rebind a value that a parameter is pointing to), but this parameter is a fresh variable whose scope lies inside a function.
If, for instance, we wanted a call like:
x = 10
f(x)
to change the binding of the variable x, it is impossible, because f does not even know of the existence of x. It only gets passed its value. In particular, as I have noted above, adding such functionality would break the rule that module A cannot rebind variables from module B, as f might be defined in a different module than the one where x is defined.
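A small sketch of the difference between rebinding a parameter and mutating the value it refers to (the function names are mine):

julia> f(x) = (x = 0; x);      # rebinds the local parameter only

julia> y = 10;

julia> f(y)
0

julia> y                       # the caller's binding is untouched
10

julia> g!(v) = (v[1] = 0; v);  # mutates the value both names refer to

julia> w = [10];

julia> g!(w);

julia> w[1]                    # the caller sees the mutation
0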
What to do
Actually it is easy enough to work without this feature from my experience:
What I typically do is simply return a value from a function that I assign to a variable. In Julia it is very easy because of tuple unpacking syntax like e.g. x,y,z = f(x,y,z), where f can be defined e.g. as f(x,y,z) = 2x,3y,4z;
You can use macros, which get expanded before code execution and thus can have the effect of modifying a variable's binding, e.g. macro plusone(x) return esc(:($x = $x+1)) end; now writing y=100; @plusone(y) will change the binding of y (see the sketch after this list);
Finally you can use Ref as discussed above (or any other mutable wrapper - as you have noted in your question).
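For instance, the macro from the second point above can be used like this at the REPL (a sketch; the macro expands to y = y + 1 in the caller's scope):

julia> macro plusone(x)
           return esc(:($x = $x + 1))
       end;

julia> y = 100;

julia> @plusone(y);

julia> y
101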
"Does anyone know the reasons why Julia chose a design of functions where the parameters given as inputs cannot be modified?" asked by Schemer
Your question is wrong because you assume the wrong things.
Parameters are variables
When you pass things to a function, often those things are values and not variables.
for example:
function double(x::Int64)
    2 * x
end
Now what happens when you call it using
double(4)
What would be the point of the function modifying its parameter x? It's pointless. Furthermore, the function has no idea how it was called.
Furthermore, Julia is built for speed.
A function that modifies its parameter will be hard to optimise because it causes side effects. A side effect is when a procedure/function changes objects/things outside of its scope.
If a function does not modify a variable that was passed to it as a parameter, then you can be safe in knowing that:
the variable will not have its value changed
the result of the function can be optimised to a constant
not calling the function will not break the program's behaviour
Those three factors above are what make FUNCTIONAL languages fast and NON-FUNCTIONAL languages slow.
Furthermore, when you move into parallel programming or multi-threaded programming, you absolutely DO NOT WANT a variable having its value changed without you (the programmer) knowing about it.
"How would you implement with your proposed macro, the function F(x) which returns a boolean value and modifies c by c:= c + 1. F can be used in the following piece of Ada code : c:= 0; While F(c) Loop ... End Loop;" asked by Schemer
I would write
function F(x)
    boolean_result = perform_some_logic()
    return (boolean_result, x + 1)
end

flag = true
c = 0
(flag, c) = F(c)
while flag
    do_stuff()
    (flag, c) = F(c)
end
"Unfortunately no, because, and I should have said that, c has to take again the value 0 when F return the value False (c increases as long the Loop lives and return to 0 when it dies). " said Schemer
Then I would write
function F(x)
    boolean_result = perform_some_logic()
    if boolean_result == true
        return (true, x + 1)
    else
        return (false, 0)
    end
end

flag = true
c = 0
(flag, c) = F(c)
while flag
    do_stuff()
    (flag, c) = F(c)
end
Given the following example for generating a lazy list number sequence:
type 'a lazy_list = Node of 'a * (unit -> 'a lazy_list);;
let make =
  let rec gen i =
    Node (i, fun () -> gen (i + 1))
  in gen 0
;;
I asked myself the following questions when trying to understand how the example works (obviously I could not answer myself and therefore I am asking here)
When calling let Node(_, f) = make and then f(), why does the call of gen 1 inside f() succeed although gen is a local binding only existing in make?
Shouldn't the created Node be completely unaware of the existence of gen? (Obviously not since it works.)
How is a construction like this being handled by the compiler?
First of all, the questions that you are asking have nothing to do with the concept of laziness, so we can disregard this particular issue to simplify the discussion.
As Jeffrey noted in the comment to your question, the answer is simple - it is a closure.
But let me extend it a little bit. Functional programming languages, as well as many other modern languages, including Python and C++, allow you to define functions in the scope of another function and to refer to the variables available in the scope of the enclosing function. These variables are called captured variables, and the created functional object along with the captured values is called the closure.
From the compiler's perspective, the implementation is rather simple (to understand). The closure is a normal value that contains the code to be executed, as well as pointers to the extra values that were captured from the outer scope. Since OCaml is a garbage-collected language, the captured values are preserved, as they are referenced from a live object. In C++ the story is much more complicated, as C++ doesn't have a GC, but this is a completely different story.
Shouldn't the created Node be completely unaware of the existence of gen? (Obviously not since it works.)
The created Node is an object that has two pointers: a pointer to the initial object i, and a pointer to the anonymous function fun () -> gen (i + 1). The anonymous function has a pointer to the same initial object i. In our particular case, i is an integer, so instead of being a pointer the value of i is represented inline, but these are details that are irrelevant to the question.
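If it helps, here is a rough Julia rendering of the same example (the struct and names are mine, not part of the OCaml code); the locally defined gen is captured by the stored anonymous function exactly as described above:

struct LazyList
    head::Int
    tail::Function      # plays the role of unit -> 'a lazy_list
end

make = let
    gen(i) = LazyList(i, () -> gen(i + 1))   # gen is local to this let block
    gen(0)
end

node = make.tail()   # the stored closure still refers to gen and i
node.head            # 1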
I have read a lot about continuations, and a very common definition I saw is that a continuation returns the control state.
I am taking a functional programming course taught in SML.
Our professor defined continuations to be:
"What keeps track of what we still have to do"
; "Gives us control of the call stack"
A lot of his examples revolve around trees. Before this chapter, we did tail recursion. I understand that tail recursion lets go of the stack that would hold the recursively called functions by having an additional argument that "builds" up the answer. Reversing a list, for example, would build the result in a new accumulator that we append to accordingly. Also, he said something about functions being called (but not evaluated) until we reach the end, where we replace backwards. He said an improved version of tail recursion would be using CPS (continuation-passing style).
Could someone give a simplified explanation of what continuations are and why they are favoured over other programming styles?
I found this stackoverflow link that helped me, but still did not clarify the idea for me:
I just don't get continuations!
Continuations simply treat "what happens next" as a first-class object that can be used once unconditionally, ignored in favour of something else, or used multiple times.
To address what Continuation Passing Style is, here is some expression written normally:
let h x = f (g x)
g is applied to x and f is applied to the result.
Notice that g does not have any control. Its result will be passed to f no matter what.
in CPS this is written
let h x next = (g x (fun result -> f result next))
g not only gets x as an argument, but also a continuation that takes the output of g and returns the final value. This function calls f in the same manner, and gives next as the continuation.
What happened? What changed that made this so much more useful than f (g x)? The difference is that now g is in control. It can decide whether to use what happens next or not. That is the essence of continuations.
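As a toy illustration of that control transfer, here is the same shape in Julia (my own example, since Julia is the language used elsewhere on this page):

# Each CPS function takes an explicit continuation k instead of returning a value directly.
g_cps(x, k) = k(x + 1)                    # "g": pass its result on to k
f_cps(x, k) = k(2x)                       # "f"
h_cps(x, k) = g_cps(x, r -> f_cps(r, k))  # h feeds g's result into f, then into k

h_cps(3, println)   # prints 8, i.e. f(g(3)) with g = x+1 and f = 2x

Here g_cps is free to ignore k, or to call it several times; that is exactly the control described above.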
An example of where continuations arise is imperative programming languages where you have control structures. Whiles, blocks, ordinary statements, breaks and continues are all generalized through continuations, because these control structures take what happens next and decide what to do with it. For example, we can have:
...
while (condition1) {
    statement1;
    if (condition2) break;
    statement2;
    if (condition3) continue;
    statement3;
}
return statement3;
...
The while, the block, the statement, the break and the continue can all be described in a functional model through continuations. Each construct can be considered to be a function that accepts
    the current environment, containing
        the enclosing scopes
        optional functions accepting the current environment and returning a continuation to
            break from the innermost loop
            continue from the innermost loop
            return from the current function
        all the blocks associated with it (if-blocks, while-block, etc)
    a continuation to the next statement
and returns the new environment.
In the while loop, the condition is evaluated according to the current environment. If it is evaluated to true, then the block is evaluated and returns the new environment. The result of evaluating the while loop again with the new environment is returned. If it is evaluated to false, the result of evaluating the next statement is returned.
With the break statement, we look up the break function in the environment. If no function is found then we are not inside a loop and we give an error. Otherwise we give the current environment to the function and return the evaluated continuation, which would be the statement after the while loop.
With the continue statement the same would happen, except the continuation would be the while loop.
With the return statement the continuation would be the statement following the call to the current function, but it would remove the current enclosing scope from the environment.