I see a lot of functional programming related topics mention destructive updates. I understand that it is something similar to mutation, so I understand the update part. But what is the destructive part? Or am I just over-thinking it?
You're probably overthinking it a bit. Mutability is all there is to it; the only thing being "destroyed" is the previous value of whatever you mutated.
Say you're using some sort of search tree to store values, and you want to insert a new one. After finding the location where the new value goes, you have two options:
With an immutable tree, you construct new nodes along the path from the new value's location up to the root. Subtrees not along the path are reused in the new tree, and if you still have a reference to the original tree's root you can use both, with the common subtrees shared between them. This economizes on space with no extra effort if you have lots of slightly-different copies floating around, and of course you have all the usual benefits of immutable data structures.
With a mutable tree, you attach the new value where it belongs and that's that; nothing else has to be changed. This is almost always faster, and economizes on memory allocation if you only ever have one copy around, but anything that had a reference to the "old" tree now has a reference to the new one. The original has been destroyed; it's gone forever. If you need to keep the original around, you have to go to the expense of creating an entirely new copy of the whole thing before changing it.
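To make the immutable option concrete, here is a minimal Haskell sketch (the type and function names are mine, purely for illustration). Inserting rebuilds only the nodes on the path from the root to the new value; every untouched subtree is shared between the old and new trees:

-- An ordinary persistent binary search tree.
data Tree a = Leaf | Node (Tree a) a (Tree a)

insert :: Ord a => a -> Tree a -> Tree a
insert x Leaf = Node Leaf x Leaf
insert x t@(Node l v r)
  | x < v     = Node (insert x l) v r   -- rebuild this node, reuse r as-is
  | x > v     = Node l v (insert x r)   -- rebuild this node, reuse l as-is
  | otherwise = t                       -- already present: reuse everything

-- let t1 = insert 3 (insert 1 (insert 2 Leaf))
--     t2 = insert 4 t1   -- t1 is still intact; t2 shares most of its nodes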
If "destruction" seems an unnecessarily harsh way to describe a simple in-place update, then you've probably not spent as much time as I have debugging code in order to figure out where on Earth some value is being changed behind your back.
Imperative programming languages allow variables to be redefined, e.g.
x = 1
x = 2
So x first has the value 1 and then, later, the value 2. The second operation is a destructive update, because x loses its initial definition as being equal to 1.
This is not how definition is handled in common mathematics. Once defined, a variable keeps its value.
The above, seen as a system of equations, would allow us to subtract the first equation from the second, which would give
x - x = 2 - 1 <=> 0 = 1
which is a false statement. Mathematics assumes that, once introduced, x stays the same.
A familiar statement like
x = x + 1
would lead to the same conclusion.
Functional languages use variables the same way mathematics does: once defined, they cannot be reassigned. The above statement would turn into
x2 = x + 1
and we would have no for or while loops, but rather recursion or some higher-order function.
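For instance, a counting loop that would mutate x in an imperative language becomes, in Haskell, a recursive helper that passes fresh bindings instead (a sketch of my own, with names chosen for illustration):

-- Sum 1..n without ever reassigning a variable: each step binds *new*
-- values of i and acc rather than destructively updating old ones.
sumTo :: Int -> Int
sumTo n = go 1 0
  where
    go i acc
      | i > n     = acc
      | otherwise = go (i + 1) (acc + i)   -- the "x2 = x + 1" pattern

-- Or with a higher-order function instead of explicit recursion:
sumTo' :: Int -> Int
sumTo' n = foldl (+) 0 [1..n]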
Constraint Satisfaction Problems (CSPs) are basically this: you have a set of constraints over some variables, plus domains of values for those variables. Then, given some configuration of the variables (an assignment of each variable to a value in its domain), you check whether the constraints are "satisfied"; that is, you check that evaluating all of the constraints returns a Boolean "true".
What I would like to do is sort of the reverse. Instead of this Boolean "testing" if the constraints are true, I would like to instead take the constraints and enforce them on the variables. That is, set the variables to whatever values they need to be in order to satisfy the constraints. An example of this would be like in a game, you say "this box's right side is always to the left of its containing box's right side," or, box.right < container.right. Then the constraint solving engine (like Cassowary for the game example) would take the box and set its "right" property to whatever number value it resolved to. So instead of the constraint solver giving you a Boolean value "yes the variable configuration satisfies the constraints", it instead updates the variables' configuration with appropriate values, "you have updated the variables". I think Cassowary uses the Simplex Algorithm for solving its constraints.
I am a bit confused because Wikipedia says:
constraint satisfaction is the process of finding a solution to a set of constraints that impose conditions that the variables must satisfy. A solution is therefore a set of values for the variables that satisfies all constraints—that is, a point in the feasible region.
That seems different from the constraint satisfaction problem, about which it says:
An evaluation is consistent if it does not violate any of the constraints.
That's why it seems CSPs return Boolean values, while in constraint satisfaction you can set the values. The distinction isn't quite clear to me.
Anyway, I am looking for general techniques for constraint solving, in the sense of setting variables the way the simplex algorithm does. However, I would like to apply them to any situation, not just linear programming. Some standard and simple example constraints are:
All variables are different.
box.right < container.right
The sum of all variables < 10
Variable a goes before variable b in evaluation.
etc.
For the first case, seeing if the constraint is satisfied (Boolean true) is pretty easy: iterate through the pairs of variables, and if any pair is equal, return false; otherwise return true after processing all pairs.
However, doing the equivalent of setting the variables doesn't seem possible at first glance: iterate through the pairs of variables, and if a pair is equal, perhaps you change the first one to some other value. You might have to do some fixed-point thing, processing some pairs more than once. And the way I just chose which variable to change, and to what, seems arbitrary. Maybe instead you need some further (nested) constraints defining how to set the values (e.g. "change a if a > b, otherwise change b"). The possibilities are customizable.
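For reference, here is the Boolean "testing" direction for the all-different case as a small Haskell sketch of my own; the setting direction is exactly what I don't know how to standardize:

-- True iff no two values in the list are equal.
allDifferent :: Eq a => [a] -> Bool
allDifferent []     = True
allDifferent (x:xs) = all (/= x) xs && allDifferent xs

-- allDifferent [2,6,8] == True
-- allDifferent [2,6,2] == False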
In addition, even for simpler cases like box.right < container.right, it is complicated. You could say at first that if box.right >= container.right, then set box.right = container.right. But maybe you don't actually want that; instead you want some iPhone-like physics "bounce" where it overextends and then bounces back with momentum. So again, the possibilities are large, and you should probably have additional constraints.
So my question is: just as testing the constraints (for a Boolean value) is standardized as CSP, are there any references or standardizations for setting the values used by the constraints?
The only thing I have seen so far is that Cassowary simplex-algorithm example, which works well for an array of linear inequalities on real-numbered variables. I would like to see something that can handle the "all variables are different" case and the other cases listed, as well as the standard CSP example problems like scheduling, box packing, etc. I am not sure why I haven't encountered more on setting/updating constraint variables instead of the Boolean "yes, the constraints are satisfied" problem.
The only limits I have are that the constraints work on finite domains.
If it turns out there is no standardization at all and that every different constraint listed requires its own entire field of research, that would be good to know. Then I at least know what the situation is and why I haven't really seen much about it.
CSP is a research field with many publications each year. I suggest you read one of the books on the subject, like Rina Dechter's.
For standardized CSP languages, check MiniZinc on one hand, and XCSP3 on the other.
There are two main approaches to CSP solving: systematic and stochastic (also known as local search). I have worked on three different CSP solvers, one of them stochastic, but I understand systematic solvers better.
There are many different approaches to systematic solvers. It would be possible to fill a whole book covering all of them, so I will explain only the two approaches I believe in most:
(G)AC3, which propagates constraints until all global constraints (hyper-arcs) are consistent.
Reducing the problem to SAT and letting the SAT solver do the hard work. There is a great algorithm that creates the CNF lazily, on demand, while the solver is already working. In a sense, this is a hybrid SAT/CSP algorithm.
To get the AC3 approach going you need to maintain a domain for each variable. A domain is basically a set of possible assignments.
For example, consider the domains of a and b: D(a)={1,2}, D(b)={0,1} and the constraint a <= b. The algorithm checks one constraint at a time, and when it reaches a <= b, it sees that a=2 is impossible, and also b=0 is impossible, so it removes them from the domains. The new domains are D'(a)={1}, D'(b)={1}.
This process is called domain propagation. Using a queue of "dirty" constraints or "dirty" variables, the solver knows which constraint to propagate next. When the queue is empty, all constraints (hyper-arcs) are consistent (this is where the name AC3 comes from).
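To illustrate a single propagation ("revise") step, here is a minimal Haskell sketch of my own for the a <= b example above, with domains as plain sets (real solvers use the cleverer representations discussed below):

import qualified Data.Set as Set

type Domain = Set.Set Int

-- Remove unsupported values on both sides of the constraint a <= b.
reviseLeq :: Domain -> Domain -> (Domain, Domain)
reviseLeq da db = (da', db')
  where
    da' = Set.filter (\a -> a <= Set.findMax db) da  -- a needs some b >= a
    db' = Set.filter (\b -> Set.findMin da <= b) db  -- b needs some a <= b

-- reviseLeq (Set.fromList [1,2]) (Set.fromList [0,1])
--   == (fromList [1], fromList [1])    -- i.e. D'(a)={1}, D'(b)={1}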
When all arcs are consistent, the solver picks a free variable (one with more than one value in its domain) and restricts it to a single value. In SAT, this is called a decision. It adds the decision to the queue and propagates the constraints. If it reaches a conflict (a constraint that can't be satisfied), it goes back and undoes an earlier decision.
There are a lot of things going on here:
First, how the domains are represented. Some solvers only hold a pair of bounds for each domain. Others have a set of integers. My solver holds an interval set, or a bit vector.
Then, how does the solver know which constraint to propagate? Some solvers, such as SAT solvers, Minion, and HaifaCSP, use watches to avoid propagating irrelevant constraints. This has a significant performance impact on clauses.
Then there is the issue of making decisions. Usually, it is good to choose a variable that has a small domain and high connectivity. There are many papers comparing many different strategies. I prefer a dynamic strategy that resembles the VSIDS of SAT solvers. This strategy is auto-tuned according to conflicts.
Deciding on the value is also important. Many solvers simply take the smallest value in the domain. Sometimes this can be suboptimal, e.g. if there is a constraint that limits a sum from below. Another option is to choose randomly between the max and min values. I tune it further and use the last assigned value.
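As a toy illustration of both decision steps in Haskell (my own sketch, not any particular solver's code): pick the free variable with the smallest domain ("first-fail"), then take the smallest value in it:

import Data.List (minimumBy)
import Data.Ord (comparing)

type Var = String

-- Choose a (variable, value) decision, or Nothing if all variables are fixed.
decide :: [(Var, [Int])] -> Maybe (Var, Int)
decide vars =
  case filter ((> 1) . length . snd) vars of
    []   -> Nothing
    free -> let (v, dom) = minimumBy (comparing (length . snd)) free
            in Just (v, minimum dom)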
After everything else, there is the matter of backtracking. This is a whole can of worms. The problem with simple backtracking is that sometimes the cause of a conflict happened at the first decision, but it is detected only at the 100th. The best thing is to analyze the conflict and work out where its cause lies. SAT solvers have been doing this for decades. But CSP representation is not as trivial as CNF, so not many solvers can do it efficiently enough.
This is a nontrivial subject that can fill at least two university courses. Just the subject of conflict analysis can take half of a course.
I'm learning about Elm from Seven More Languages in Seven Weeks. The following example confuses me:
import Keyboard
main = lift asText (foldp (\dir presses -> presses + dir.x) 0 Keyboard.arrows)
foldp is defined as:
Signal.foldp : (a -> b -> b) -> b -> Signal a -> Signal b
It appears to me that:
the initial value of the accumulator presses is only 0 on the first evaluation of main
after the first evaluation of main, it seems that the initial value of presses is whatever the result of the function (a -> b -> b), i.e. (\dir presses -> presses + dir.x) in the example, was on the previous evaluation.
If this is indeed the case, then isn't this a violation of functional programming principles, since main now maintains internal state (or at least foldp does)?
How does this work when I use foldp in multiple places in my code? Does it keep multiple internal states, one for each time I use it?
The only other alternative I see is that foldp (in the example) starts counting from 0, so to speak, each time it's evaluated, and somehow folds up the entire history provided by Keyboard.arrows. This seems extremely wasteful and sure to cause out-of-memory exceptions for long run times.
Am I missing something here?
How it works
Yes, foldp keeps some internal state around. Saving the entire history would be wasteful and is not done.
If you use foldp multiple times in your code, doing distinct things or having distinct input signals, then each instance will keep its own local state. Example:
import Keyboard
plus = (foldp (\dir presses -> presses + dir.x) 0 Keyboard.arrows)
minus = (foldp (\dir presses -> presses - dir.x) 0 Keyboard.arrows)
showThem p m = flow down (map asText [p, m])
main = lift2 showThem plus minus
But if you use the resulting signal from a foldp twice, only one foldp instance will be in your compiled program; the resulting changes will just be used in two places:
import Keyboard
plus = (foldp (\dir presses -> presses + dir.x) 0 Keyboard.arrows)
showThem p m = flow down (map asText [p, m])
main = lift2 showThem plus plus
The main question
If this is indeed the case, then isn't this a violation of functional programming principles, since main now maintains internal state (or at least foldp does)?
Functional programming doesn't have some great canonical definition that everybody uses. There are many examples of functional programming languages that allow for the use of mutable state. Some of these languages show you in the type system that a value is mutable (you could see Haskell's State a type as such, though it really depends on your viewpoint).
But what is mutable state? What is a mutable value? It's a value inside the program that is mutable. That is, it can change: it can be different things at different times. Ah, but we know what Elm calls values that change over time! That's a Signal.
So really a Signal in Elm is a value that can change over time, and can therefore be seen as a variable, a mutable value, or mutable state. It's just that we manage this value very strictly by allowing only a few well-chosen manipulations on Signals. Such a Signal can be based on other Signals in your program, or come from a library or come from the outside world (think of inputs like Mouse.position). And who knows how the outside world came up with that signal! So allowing your own Signals to be based on the past value of Signals is actually ok.
Conclusion / TL;DR
You could see Signal as a safety wrapper around mutable state. We assume that signals that come from the outside world (as input to your program) are not predictable, but because we have this safety wrapper that only allows lift/sample/filter/foldp, the program you write is otherwise completely predictable. Side-effects are contained and managed, therefore I think it's still "functional programming".
You're confusing an implementation detail with a conceptual detail. Every functional programming language eventually gets translated down to assembly code, which is decidedly imperative. That doesn't mean you can't have purity at the language level.
Don't think of main as being repeatedly evaluated, returning different results every time. A Signal is conceptually an infinite list of values. main takes an infinite list of keyboard arrows as input and translates that into an infinite list of elements. Given the same list of arrows, it will always return the exact same list of elements, without side effects. At this level of abstraction, it is therefore a pure function.
Now, it so happens that we are only interested in the last element of the sequence. This allows for some optimizations in the implementation, one of which is storing the accumulated value. What's important is that the implementation is referentially transparent. From the language's point of view, you're getting the exact same answer as if you stored the entire sequence and recomputed it from scratch every time a value is added to the end. You get the same output given the same input. The only difference is storage space and execution time.
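You can model this conceptual picture in a few lines of Haskell (a sketch of the idea, not of Elm's actual implementation): treat the signal as a list of events, and foldp becomes scanl over that list.

-- foldp as a pure function over the whole event history.
foldpModel :: (a -> b -> b) -> b -> [a] -> [b]
foldpModel step initial = tail . scanl (flip step) initial

-- With arrow-key x values as the "signal" of events:
-- foldpModel (\dx presses -> presses + dx) 0 [1, 1, -1, 1] == [1, 2, 1, 2]
-- A real implementation only ever needs the latest output element,
-- so it can keep just the accumulated value.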
In other words, the whole idea of functional programming is not to eliminate state tracking, but to abstract it away from the purview of the programmer. Programmers get to play in the ideal world, while the compiler and runtime slave away in the sewers of mutable state to make the ideal world possible for the rest of us.
You should note that "doesn't maintain internal state" isn't really a strong definition of FP; it's much more of an implementation constraint. The definition I like more is "built from pure functions". Without diving deep, in plain English it means that all functions return the same output when given the same input. Unlike the previous one, this definition gives you huge reasoning power and a simple way to check whether some program follows it, while keeping some optimization space on current hardware.
Given this reformulated restriction, functional languages are free to use mutable state as long as it is modelled with pure functions. To answer your question: Elm programs are built out of pure functions, so it is arguably a functional language. Elm uses a special data structure, Signal, to model outside-world interactions and internal state, just as other functional languages do.
If we had an assignment:
Given a block of binary data, count the frequency of the bytes within it.
and you were supposed to do this in C, the answer would be trivial and reasonably fast even for larger binary blocks. How would one go about implementing this in a purely functional language, without side effects?
For example, if you wrote a function that accepted frequency counts for each byte and the rest of the list of bytes, and returned modified frequency counts, it would have to do an awful lot of work for a data set of 100M bytes.
Also, if you sorted the data and then somehow counted the lengths of runs of equal bytes, the sort itself would take a lot of time.
Is there a reasonable way to implement this?
The straightforward way to do it is indeed to pass in and return data structures mapping bytes to counts. This would probably be implemented as some kind of tree (since that's what you get out of the standard library containers, as far as I know). In pure functional programming when you're passed in a tree and you need to return a new tree with a difference in only one node, the returned tree ends up sharing almost all of its structure and data with the original tree.
There is some overhead in traversing the tree to get to a count, but since you're counting bytes the tree never has more than 256 elements, so the overhead is at most log(256) steps, which is a constant. It doesn't get larger for large data sets, and it doesn't change the big-O complexity of the algorithm. That's actually true even if you use the greatest possible overhead of copying around a full 256-entry array of counts with no sharing.
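In Haskell, the straightforward pure version is a few lines over a persistent map (a sketch of my own; Data.Map is the kind of standard-library tree referred to above):

import qualified Data.ByteString as BS
import qualified Data.Map.Strict as Map
import Data.Word (Word8)

-- Fold over the bytes, threading a persistent map of counts.
-- Each intermediate map shares almost all of its structure with
-- the previous one.
byteFreq :: BS.ByteString -> Map.Map Word8 Int
byteFreq = BS.foldl' bump Map.empty
  where
    bump counts b = Map.insertWith (+) b 1 counts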
If you want to optimise this, you can take advantage of the fact that the "intermediate" frequency counts are never needed except as part of the computation of the next set of counts. That means you can use various techniques to make the implementation use destructive updates even while you're still semantically writing functional code. An STRef in Haskell basically lets you do this manually.
Theoretically, the compiler could notice that you're replacing a never-needed-again value with a new one, and do the update in place for you. I don't know whether any production-ready compilers currently perform this optimisation.
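For comparison, here is a sketch of the destructive-update route (using a mutable unboxed array in ST rather than an STRef, a nearby variation on what is described above). The mutation is sealed inside runSTUArray, so the function as a whole remains pure:

import Data.Array.ST (newArray, readArray, writeArray, runSTUArray)
import Data.Array.Unboxed (UArray)
import qualified Data.ByteString as BS
import Data.Word (Word8)

-- Count byte frequencies into a 256-slot mutable array, in place.
byteFreq' :: BS.ByteString -> UArray Word8 Int
byteFreq' bytes = runSTUArray $ do
  counts <- newArray (0, 255) 0
  let bump b = readArray counts b >>= writeArray counts b . (+ 1)
  mapM_ bump (BS.unpack bytes)
  return counts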
I'm trying to integrate two Fortran 9x codes which contain data arrays with opposite array ordering. One code (I'll call it the old code) has an established library of subroutines, and I am trying to take advantage of these with the other (new) code as efficiently as possible (i.e. without having to create temporary arrays just to reorder an array, pass it to a subroutine, and then replace the old array with the new reordered result). For example,
Old code:
oldarray(1:n,1) -> variable 1 for n elements
oldarray(1:n,2) -> variable 2 for n elements
.. and so on
new code:
newarray(1,1:n) -> variable 1 for n elements
newarray(2,1:n) -> variable 2 for n elements
.. and so on
The variable indices do not necessarily relate between the two codes. If I only need one variable to pass to a procedure, I just pass newarray(1,1:n) and the procedure doesn't know the difference. However, if a procedure from the old code requires variables 1-6 of oldarray which might correspond to variables 2,6,8,1,4,3 (I just picked arbitrary numbers) of newarray, is it possible to create a pointer that I could pass to the procedure?
On a simpler note, would it be possible to just create a pointer for the transpose of the new array? As an example, pointer(1000,6) would point to newarray(6,1000).
Note: It is not possible to rewrite the new code to use the same array ordering because both codes use an array ordering that best suits its loop structures which cannot be changed.
Also, I have very little experience with pointers. I know I can create a derived datatype which consists of an array of pointers, but I don't think I would be able to pass that to a procedure in the manner required (I could be wrong, as I also have very little experience with derived datatypes). The reference book I have (Fortran 95/2003 for Scientists and Engineers) only explores advanced applications of pointers in terms of linked lists and trees, and I have found little information on Fortran pointers on the internet beyond what this book covers.
Thank you for your help.
I think the answer is no, you can't do this, and it wouldn't help anyway.
You can do all sorts of super-cool things with array pointers, with strides across arrays, etc, but I don't see on the face of it how you can change the order of the data.
So I could be wrong on this and it may be possible, but then the question is: how would it help you? Presumably you want to use pointers to re-arrange the data without copying; but when you're passing such a thing around, the compiler is allowed to do copy-in/copy-out, e.g., create a temporary array, copy the data in, pass it to the subroutine, and copy the data out upon return. And in fact that would almost certainly be the right thing to do in this case, performance-wise; that way the old code could access memory in the fast order, and the transpose-copy could be done in a fast way as well.
So I suspect the right way to treat this problem is to do the copy-in/copy-out approach yourself explicitly.
We are in the process of the optimization of a Flex AS3 Application.
One of my team members suggested making variable names shorter to improve the application's performance.
I.e.:
var IsRegionSelected:Boolean = false; //Slower
var IsRS:Boolean = false; //faster
Is this true?
No; the only gain you will obtain is in the size of the SWF.
Strings are put into a constant pool, and instructions referring to a String use an index into that pool. It can be seen as (very schematically):
constant pool:
[0] IsRegionSelected
[1] IsRS
usage:
value at 0 = false
value at 1 = false
Your code will probably be translated as (for local variables):
push false
setlocal x
push false
setlocal y
where x and y are integer registers assigned by the compiler, so there is no difference whether it's register 2 or register 4.
For more detail, read the AVM specification.
Yep, I second that. Changing the name length is not going to help you. Concentrate on item renderers, effects, states, and transitions; those may be what's killing your resources. Also check for any embedded images, embedded fonts, etc., since those will increase your final SWF file size and the initial loading time.
I don't think so; how you use your variable names matters more than their length.
Good code should be consistent. Whether that means setting rules for the names of variables and functions, adopting standard approaches, or simply making sure all of your code is indented the same way, consistency makes your code easier for others to read.
Someone reading your code later should be able to construe from the name what the variable was declared for.
var g:String;
var gang:String;
Both perform the same operation, but the second is more readable; someone going through your code will understand it at a glance.
There's a very small performance gain, but if you plan to use this application again later, it's not worth your sanity. Do absolutely any other optimization you can before this one - and if it's really slow enough to need optimizing, then there are definitely other factors that you'll need to take care of first before variable names.
Cut anything else you can before resorting to 1-2 millisecond boosts.
As Matchu says, there is a difference, but a small one.
You should consider assigning meaningful names to your variables instead of just using single characters that carry no meaning.