Converting code to pure functional form

In a sense, any imperative code can be converted to pure functional form by making every operation receive and pass on a 'state of the world' parameter.
However, suppose you have some code that is almost in pure functional form, except that, buried many layers of function calls deep, there are a handful of imperative operations that modify global (or at least widely accessed) state: calling a random number generator, updating a counter, printing some debug info. You want to convert it to pure functional form, algorithmically, with the minimum of changes.
Is there a way to do this without essentially turning the entire program inside out?

This isn't technically all that hard.
If all you want to do is avoid modifying global state, e.g., make the code re-entrant, then for any side-effecting function F that reads X, writes global variable Y, and returns Z, transform it into F' that reads X, modifies a struct containing Y passed to it, and returns Z. If A calls B, which calls ..., which transitively calls F, and F modifies global Y, then A' must build a reference to a struct containing Y and pass it down through B' to F'. Now all your functions only modify values passed to them; they have no side effect on global state. (Well, we can argue about what A' does.)
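As a minimal sketch of this F-to-F' rewrite, in Julia and with purely illustrative names (a counter standing in for the global Y):

```julia
# Before: F reads x, updates a global counter (the "Y"), and returns a value.
counter = 0
function F(x)
    global counter += 1     # side effect on global state
    return x + counter
end

# After: F' gets the state as a mutable struct and modifies that instead.
mutable struct WorldState
    counter::Int
end

function F_prime(x, st::WorldState)
    st.counter += 1         # the only thing modified is what was passed in
    return x + st.counter
end

# A' builds the struct once and threads it down the call chain (here, directly).
function A_prime(st::WorldState)
    return F_prime(10, st) + F_prime(20, st)
end

A_prime(WorldState(0))      # 33: the counter went 1, then 2
```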
[Somebody may complain that (File/Screen/Device) Output done by F can't be handled. Well, either you want the immediate side effect on the world or you don't. If you don't, add the "state" of the Output as Y, and modify that; A' can return the desired Output as a result.]
If you insist on making the program functional, then any side effect of F on global Y has to be changed: F' must take Y as a parameter, copy Y while changing it to produce Y', and return the pair (Z, Y'). Callers of F must pass in Y, and use the resulting Y' in all code reachable from the call site.
The bit about copying gets you into real trouble: where is the logic to do a deep copy of Y to be found? If you find it, you may discover that deep copying produces an enormous structure and your storage demand quickly becomes impossible. Now you need to find a way to make Y' share the parts of Y that haven't changed.
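Continuing the Julia sketch (names still illustrative): the fully functional version takes the state in, builds an updated state, and returns it paired with the ordinary result; an immutable struct lets the new state share the unchanged, potentially large parts of the old one.

```julia
struct World
    counter::Int
    log::Vector{String}    # imagine this part is enormous
end

# Pure version: no mutation; returns (Z, Y') as a tuple.
function F_pure(x, w::World)
    w2 = World(w.counter + 1, w.log)   # the big log is shared, not deep-copied
    return x + w2.counter, w2
end

# Callers must use the returned world from then on.
function A_pure(w::World)
    r1, w = F_pure(10, w)
    r2, w = F_pure(20, w)
    return r1 + r2, w
end

A_pure(World(0, String[]))   # (33, World(2, String[]))
```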
Now, if you want to do either of these tasks on a big codebase, you probably don't want to do it by hand; people are bad at handling this kind of detail consistently, and they tend to cheat or rewrite as they go. If you really want to do this, you want a source-to-source program transformation system, which can mechanically apply the necessary steps.
I'll note that standard compilation techniques convert programs to so-called "static single assignment (SSA)" form, which converts as much of the program as possible, as represented in the compiler's IR, into a functional form, because functional form is easier to transform and optimize. They still have to worry about global storage, though.
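As a rough illustration of what SSA conversion buys you, here is the transformation written out by hand in Julia (a real compiler does this on its IR, not on source):

```julia
# Imperative form: x is reassigned, so its "value" depends on program position.
function f_imperative()
    x = 1
    x = x + 1
    return x * 2
end

# SSA-style form: every name is bound exactly once,
# so the body reads like a small pure-functional program.
function f_ssa()
    x1 = 1
    x2 = x1 + 1
    return x2 * 2
end

f_imperative() == f_ssa()   # true, both return 4
```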

Related

How is it possible that a function can call itself

I know about recursion, but I don't know how it's possible. I'll use the following example to further explain my question.
(def (pow (x, y))
(cond ((y = 0) 1))
(x * (pow (x , y-1))))
The program above is in the Lisp language. I'm not sure if the syntax is correct since I came up with it in my head, but it will do. In the program, I am defining the function pow, and in pow it calls itself. I don't understand how it's able to do this. From what I know the computer has to completely analyze a function before it can be defined. If this is the case, then the computer should give an undefined message when I use pow because I used it before it was defined. The principle I'm describing is the one at play when you use an x in x = x + 1, when x was not defined previously.
Compilers are much smarter than you think.
A compiler can turn the recursive call in this definition:
(defun pow (x y)
  (cond ((zerop y) 1)
        (t (* x (pow x (1- y))))))
into a goto instruction to re-start the function from scratch:
Disassembly of function POW
(CONST 0) = 1
2 required arguments
0 optional arguments
No rest parameter
No keyword parameters
12 byte-code instructions:
0 L0
0 (LOAD&PUSH 1)
1 (CALLS2&JMPIF 172 L15) ; ZEROP
4 (LOAD&PUSH 2)
5 (LOAD&PUSH 3)
6 (LOAD&DEC&PUSH 3)
8 (JSR&PUSH L0)
10 (CALLSR 2 57) ; *
13 (SKIP&RET 3)
15 L15
15 (CONST 0) ; 1
16 (SKIP&RET 3)
If this were a more complicated recursive function that a compiler cannot unroll into a loop, it would merely call the function again.
From what I know the computer has to completely analyze a function before it can be defined.
When the compiler sees that one defines a function POW, it tells itself: now we are defining the function POW. If it then sees a call to POW inside the definition, the compiler says to itself: oh, this seems to be a call to the function that I'm currently compiling, and it can then generate code for a recursive call.
A function is just a block of code. Its name is just there to help, so you don't have to calculate the exact address it will end up at. The programming language turns names into the addresses the program jumps to when executing.
One function calls another by storing the address of the next instruction in the current function on the stack, perhaps adding arguments to the stack, and then jumping to the address of the called function. The called function jumps to the return address it finds so that control goes back to the caller. There are several calling conventions, implemented by the language, specifying which side does what. CPUs don't really have function support; just as there is nothing called a while loop in a CPU, functions are emulated.
Just like functions have names, arguments have names too; however, they are mere pointers, just like the return address. When a function calls itself, it just pushes a new return address and arguments onto the stack and jumps to itself. The top of the stack will be different, so the same variable names refer to addresses unique to that call: the x and y of the previous call live somewhere else than the current x and y. In fact, no more special treatment is needed for a function calling itself than for calling anything else.
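To make that concrete, here is the question's pow written in Julia; nothing about the self-call is special, each invocation simply gets its own x and y in a fresh stack frame:

```julia
# Each call pushes a new frame with its own x and y;
# the recursive call is compiled like any other call.
pow(x, y) = y == 0 ? 1 : x * pow(x, y - 1)

pow(2, 10)   # 1024
```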
Historically, the first high-level language, Fortran, did not support recursion. A function could call itself, but when it returned, control went back to the original caller without executing the rest of the function after the self call. The Fortran compiler itself would have been impossible to write without recursion, so while the compiler used recursion internally, it did not offer it to the programmer. This limitation is the reason John McCarthy created Lisp.
I think to see how this can work in general, and in particular in cases where recursive calls can't be turned into loops, it's worth thinking about how a general compiled language might work, because the problems are not different.
Let's imagine how a compiler might turn this function into machine code:
(defun foo (x)
  (+ x (bar x)))
And let's assume that it does not know anything about bar at the time of compilation. Well, it has two options.
It can compile foo in such a way that the call to bar is translated into a set of instructions which say, 'look up the function definition stored under the name bar, whatever it currently is, and arrange to call that function with the right arguments'.
It can compile foo in such a way that there is a machine-level function call to a function, but the address of that function is left as a placeholder of some kind. And it can then attach some metadata to foo which says: 'before this function is called you need to find the function named bar, find its address, splice it into the code in the right place, and remove this metadata'.
Both of these mechanisms allow foo to be defined before it's known what bar is. And note that instead of bar I could have written foo: these mechanisms deal with recursive calls too. Apart from that, however, they differ.
The first mechanism means that, every time foo is called, it needs to do some kind of dynamic lookup for bar, which will involve some overhead (though this overhead can be pretty small):
as a consequence of this the first mechanism will be slightly slower than it might be;
but, also as a consequence of this, if bar gets redefined, then the new definition will get picked up, which is a very desirable thing for an interactive language, which Lisp implementations usually are.
The second mechanism means that, after foo has all its references to other functions linked into it, the calls happen at the machine level:
this means they will be quick;
but that redefinition will be, at best, more complicated or, at worst, not possible at all.
The second of these implementations is close to how traditional compilers compile code: they compile code leaving a bunch of placeholders with associated metadata saying what names those placeholders correspond to. A linker (sometimes known as a link-loader, or loader) then grovels over all the files produced by the compiler, as well as other libraries of code, and resolves all these references, resulting in a bit of code which can actually be run.
A very simple-minded Lisp system might work entirely by the first mechanism (I am pretty sure that this is how Python works, for instance). A more advanced compiler will probably work by some combination of the first and second mechanism. As an example of this, CL allows the compiler to make assumptions that apparent self-calls in functions really are self-calls, and so the compiler may well compile them as direct calls (essentially it will compile the function and then link it on the fly). But when compiling code in general, it might call 'through the name' of the function.
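A toy Julia sketch of the first mechanism, calling 'through the name' via an explicit lookup table (real implementations use function cells or symbol tables, not a user-visible Dict like this):

```julia
# Names are resolved at call time, so definitions can appear in any order
# and redefinitions are picked up immediately.
const FUNCTIONS = Dict{Symbol,Function}()

FUNCTIONS[:bar] = x -> x + 1
FUNCTIONS[:foo] = x -> x + FUNCTIONS[:bar](x)   # looks bar up on every call

FUNCTIONS[:foo](10)           # 21

FUNCTIONS[:bar] = x -> x * 2  # redefine bar
FUNCTIONS[:foo](10)           # 30: the new definition is seen

# The same trick handles recursion: by the time this is first called,
# the name :fact is bound, so the lookup inside the body succeeds.
FUNCTIONS[:fact] = n -> n == 0 ? 1 : n * FUNCTIONS[:fact](n - 1)
FUNCTIONS[:fact](5)           # 120
```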
There are also more-or-less heroic strategies which things could do: for instance, at the first call of a function, link it, on the fly, to all the things it refers to, and note in their definitions that if they change then this thing needs to be unlinked as well so it all happens again. These kinds of tricks once seemed implausible, but compilers for languages like JavaScript do things at least as hairy as this all the time now.
Note that compilers and linkers for modern systems actually do something more complicated than I've described, because of shared libraries &c: what I described is more-or-less what happened pre shared-library.

Why don't Julia closures copy arrays?

Just discovered a nasty bug in my program based on the fact that Julia does not copy arrays when defining a closure. This makes continuation programming hard. What was the motivation for this design choice?
Any suggestions for decoupling the state of my closure from the program state?
As an example
l = [2 1; 0 0];
f = x -> l[2,2];
Then f(1) = 0 but if you change l[2,2] = 1, then f(1) = 1.
Your assumption that this is a "closure" does not hold. l is not a "closed" variable in the context of the anonymous function at that point. It is simply a reference to a variable inherited from 'external' scope (since it has not been redefined locally inside the anonymous function).
Here's an example of a true closure:
f = let l = [2 1; 0 0]
    x -> l[2,2]
end
The variable l now is local to the let block, and not present at global scope. f still has access to it, even though it has technically gone out of scope. This is what a closure means.
As a result of l having gone out of scope, it is no longer accessible except through f which is a closure having access to it as a closed variable.
PS. I'm going to go out on a limb here and assume that what you were expecting was Matlab-like behaviour. The big difference with Matlab is that when you define an anonymous function handle there, it captures the current state of the workspace by copying all the variables and making them part of the function 'object'. You can confirm this by using the functions command. Matlab doesn't have references in the same way as Julia. This is a strength of Julia, not a weakness, as it allows the user to make use of optimizations that avoid reallocation of memory, which are harder to achieve in Matlab*.
* though in fairness, matlab shines in other ways, by attempting to optimise this for you
EDIT: Liso pointed out a very important pitfall in the comments. Assume l already exists in the global workspace, and we type
let l=l
while this is perfectly valid syntax, making l a local variable to the let block, this is still initialised simply as a reference to the global l. Therefore any changes to the global l will still affect the closure, which is not what you want. In this case, you should be trying to 'mimic' matlab behaviour by making a copy (or a deep copy, depending on your use case), such that the local variable is truly independent of anything else once it goes out of scope and becomes 'closed' i.e.
let l = deepcopy(l)
Also, for completeness, when one makes a closure in julia, it is worth pointing out how this is implemented under the hood: your resulting f function is simply a callable object, containing a field for each 'closed' variable it needs to be aware of; you could even access this as f.l.
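Putting those pieces together, a small sketch (assuming Julia 1.x, where closures are implemented as callable structs with one field per captured variable):

```julia
l = [2 1; 0 0]

f = let l = deepcopy(l)   # copy, so the closure is decoupled from the global l
    x -> l[2, 2]
end

f(1)          # 0
l[2, 2] = 1   # mutate the global l
f(1)          # still 0: the closure holds its own copy

f.l           # the captured array, stored as a field on the closure object
```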

What's the fastest way to cast Vector{T} (T<:A) to Vector{A}?

Suppose A is an abstract type, and I have a function f{T<:A}(x::Vector{T}). So x could be of type Vector{A} or Vector{B} where B <: A. In the middle of the function I would like to cast x to Vector{A} so it can be consumed by another function that requires that signature.
What's the best way to do that? At the moment I am doing x = collect(A, x). Is there a way to avoid copying if possible?
If at all possible, I'd just change your second function definition to be parametric like f. Enforcing this kind of container structure in method signatures is a big performance bug that doesn't gain you any functionality… and just makes them much harder to use.
That said, the best way to do this kind of conversion where you don't care if the output aliases the input is with convert(Vector{A}, x). This will be a no-op if x already isa Vector{A}, but otherwise it'll be just like collect. That's as good as it gets.
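For example, with a throwaway abstract type and subtype (the names are just for illustration):

```julia
abstract type A end
struct B <: A
    val::Int
end

xs = [B(1), B(2), B(3)]      # Vector{B}
ys = convert(Vector{A}, xs)  # copies: a Vector{A} can't share memory with xs
zs = convert(Vector{A}, ys)  # no-op, since ys is already a Vector{A}

zs === ys                    # true
ys === xs                    # false
```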
Here's why: two containers of types Vector{A} and Vector{B} cannot share the same memory if A !== B since it'd be possible to corrupt the data in the Vector{B} by assigning a non-B element to the array through the Vector{A}.

Reflecting on a Type parameter

I am trying to create a function
import Language.Reflection
foo : Type -> TT
I tried it by using the reflect tactic:
foo = proof
{
intro t
reflect t
}
but this reflects on the variable t itself:
*SOQuestion> foo
\t => P Bound (UN "t") (TType (UVar 41)) : Type -> TT
Reflection in Idris is a purely syntactic, compile-time only feature. To predict how it will work, you need to know about how Idris converts your program to its core language. Importantly, you won't be able to get ahold of reflected terms at runtime and reconstruct them like you would with Lisp. Here's how your program is compiled:
Internally, Idris creates a hole that will expect something of type Type -> TT.
It runs the proof script for foo in this state. We start with no assumptions and a goal of type Type -> TT. That is, there's a term being constructed which looks like ?rhs : Type -> TT . rhs. The ?foo : ty . body syntax shows that there's a hole called foo whose eventual value will be available inside of body.
The step intro t creates a function whose argument is t : Type - this means that we now have a term like ?foo_body : TT . \t : Type => foo_body.
The reflect t step then fills the current hole by taking the term on its right-hand side and converting it to a TT. That term is in fact just a reference to the argument of the function, so you get the variable t. reflect, like all other proof script steps, only has access to the information that is available directly at compile time. Thus, the result of filling in foo_body with the reflection of the term t is P Bound (UN "t") (TType (UVar (-1))).
If you could do what you are wanting here, it would have major consequences both for understanding Idris code and for running it efficiently.
The loss in understanding would come from the inability to use parametricity to reason about the behavior of functions based on their types. All functions would effectively become potentially ad-hoc polymorphic, because they could (say) run differently on lists of strings than on lists of ints.
The loss in performance would come from representing enough type information to do the reflection. After Idris code is compiled, there is no type information left in it (unlike in a system such as the JVM or .NET or a dynamically typed system such as Python, where types have a runtime representation that code can access). In Idris, types can be very large, because they can contain arbitrary programs - this means that far more information would have to be maintained, and computation occurring at the type level would also have to be preserved and repeated at runtime.
If you're wanting to reflect on the structure of a type for further proof automation at compile time, take a look at the applyTactic tactic. Its argument should be a function that takes a reflected context and goal and gives back a new reflected tactic script. An example can be seen in the Data.Vect source.
So I suppose the summary is that Idris can't do what you want, and it probably never will be able to, but you might be able to make progress another way.

How do I detect circular logic or recursion in a custom expression evaluator?

I've written an experimental function evaluator that allows me to bind simple functions together such that when the variables change, all functions that rely on those variables (and the functions that rely on those functions, etc.) are updated simultaneously. The way I do this is that instead of evaluating a function immediately as it's entered, I store the function. Only when an output value is requested do I evaluate the function, and I evaluate it each and every time an output value is requested.
For example:
pi = 3.14159
rad = 5
area = pi * rad * rad
perim = 2 * pi * rad
I define 'pi' and 'rad' as variables (well, functions that return a constant), and 'area' and 'perim' as functions. Any time either 'pi' or 'rad' change, I expect the results of 'area' and 'perim' to change in kind. Likewise, if there were any functions depending on 'area' or 'perim', the results of those would change as well.
This is all working as expected. The problem here is when the user introduces recursion - either accidental or intentional. There is no logic in my grammar - it's simply an evaluator - so I can't provide the user with a way to 'break out' of recursion. I'd like to prevent it from happening at all, which means I need a way to detect it and declare the offending input as invalid.
For example:
a = b
b = c
c = a
Right now evaluating the last line results in a StackOverflowException (while the first two lines evaluate to '0' - an undeclared variable/function is equal to 0). What I would like to do is detect the circular logic situation and forbid the user from inputting such a statement. I want to do this regardless of how deep the circular logic is hidden, but I have no idea how to go about doing so.
Behind the scenes, by the way, input strings are converted to tokens via a simple scanner, then to an abstract syntax tree via a hand-written recursive descent parser, then the AST is evaluated. The language is C#, but I'm not looking for a code solution - logic alone will be fine.
Note: this is a personal project I'm using to learn about how parsers and compilers work, so it's not mission critical - however the knowledge I take away from this I do plan to put to work in real life at some point. Any help you guys can provide would be appreciated greatly. =)
Edit: In case anyone's curious, this post on my blog describes why I'm trying to learn this, and what I'm getting out of it.
I've had a similar problem to this in the past.
My solution was to push variable names onto a stack as I recursed through the expressions to check syntax, and pop them as I exited a recursion level.
Before I pushed each variable name onto the stack, I would check if it was already there.
If it was, then this was a circular reference.
I was even able to display the names of the variables in the circular reference chain (as they would be on the stack and could be popped off in sequence until I reached the offending name).
EDIT: Of course, this was for single formulae... For your problem, checking a graph of variable assignments for cycles would be the better way to go.
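A minimal Julia sketch of that idea, walking a graph of definitions with the current chain of names kept on a stack (the Dict-of-dependencies representation and the names are illustrative, not the asker's actual AST):

```julia
# Each name maps to the names its definition refers to.
deps = Dict("a" => ["b"], "b" => ["c"], "c" => ["a"])

# Depth-first walk; the stack holds the chain of names currently being expanded.
function find_cycle(deps, name, stack=String[])
    name in stack && return [stack; name]      # cycle: report the offending chain
    for dep in get(deps, name, String[])        # undefined names have no deps
        chain = find_cycle(deps, dep, [stack; name])
        chain === nothing || return chain
    end
    return nothing
end

find_cycle(deps, "c")                               # ["c", "a", "b", "c"]
find_cycle(Dict("area" => ["pi", "rad"]), "area")   # nothing: no cycle
```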
A solution (probably not the best) is to create a dependency graph.
Each time a function is added or changed, the dependency graph is checked for cycles.
This can be cut short: each time a function is added or changed, flag it. If evaluating it results in a call back to the flagged function, you have a cycle.
Example:
a = b
flag a
eval b (not found)
unflag a
b = c
flag b
eval c (not found)
unflag b
c = a
flag c
eval a
eval b
eval c (flagged) -> Cycle, discard change to c!
unflag c
In reply to the comment on answer two:
If you switch "flag" for "push" and "unflag" for "pop", it's pretty much the same thing :)
The only advantage of using the stack is the ease of which you can provide detailed information on the cycle, no matter what the depth. (Useful for error messages :) )
Andrew

Resources