What persistent data structures does Raku/Rakudo include?

Raku provides many types that are immutable and thus cannot be modified after they are created. Until I started looking into this area recently, my understanding was that these types were not persistent data structures – that is, unlike the core types in Clojure or Haskell, I believed that Raku's immutable types did not take advantage of structural sharing to allow for inexpensive copies. I thought that the statement my List $new = (|$old-list, 42); literally copied the values in $old-list, without the data-sharing features of persistent data structures.
That description of my understanding is in the past tense, however, due to the following code:
my Array $a = do {
$_ = [rand xx 10_000_000];
say "Initialized an Array in $((now - ENTER now).round: .001) seconds"; $_}
my List $l = do {
$_ = |(rand xx 10_000_000);
say "Initialized the List in $((now - ENTER now).round: .001) seconds"; $_}
do { $a.push: rand;
say "Pushed the element to the Array in $((now - ENTER now).round: .000001) seconds" }
do { my $nl = (|$l, rand);
say "Appended an element to the List in $((now - ENTER now).round: .000001) seconds" }
do { my @na = |$l;
say "Copied List \$l into a new Array in $((now - ENTER now).round: .001) seconds" }
which produced this output in one run:
Initialized an Array in 5.938 seconds
Initialized the List in 5.639 seconds
Pushed the element to the Array in 0.000109 seconds
Appended an element to the List in 0.000109 seconds
Copied List $l into a new Array in 11.495 seconds
That is, creating a new List with the old values + one more is just as fast as pushing to a mutable Array, and dramatically faster than copying the List into a new Array – exactly the performance characteristics that you'd expect to see from a persistent List (copying to an Array is still slow because it can't take advantage of structural sharing without breaking the immutability of the List). The fast copying of $l into $nl is not due to either List being lazy; neither are.
All of the above leads me to believe that Lists in Rakudo actually are persistent data structures, with all the performance benefits that implies. That leaves me with several questions:
Am I right about Lists being persistent data structures?
Are all other immutable Types also persistent data structures? Or are any?
Is any of this part of Raku, or just an implementation choice Rakudo has made?
Are any of these performance characteristics documented/guaranteed anywhere?
I have to say, I am both extremely impressed and more than a bit baffled to discover evidence that at least some of Raku(do)'s types are persistent. It's the sort of feature that other languages list as a key selling point or that leads to the creation of libraries with 30k+ stars on GitHub. Have we really had it in Raku without even mentioning it?

I remember implementing these semantics, and I certainly don't recall thinking about them giving rise to a persistent data structure at the time - although it does seem fair to attach that label to the result!
I don't think you'll find anywhere that explicitly spells out this exact behavior; however, the most natural implementation of what the language requires leads to it quite naturally. Taking the ingredients:
The infix:<,> operator is the List constructor in Raku
When a List is created, it is non-committal with regards to laziness and flattening (these arise from how we use the List, which we don't - in general - know at the point of its construction)
When we write (|$x, 1), the prefix:<|> operator constructs a Slip, which is a kind of List that should melt into its surrounding List. Thus what infix:<,> sees is a Slip and an Int.
Making the Slip melt into the result List immediately would mean making a commitment about eagerness, which List construction alone should not do. Thus the Slip and everything after it is placed into the lazily evaluated ("non-reified") portion of the List.
The last of these is what gives rise to the observed persistent-data-structure-style behavior.
I expect it would be possible to have an implementation that inspects the Slip and chooses to eagerly copy things that are known not to be lazy, and still be in compliance with the specification test suite. That would change the time complexity of your example. If you want to be defensive against that, then:
do { my $nl = (|$l.lazy, rand);
say "Appended an element to the List in $((now - ENTER now).round: .000001) seconds" }
should be sufficient to force the issue even if the implementation changed.
Other cases that immediately come to mind that are related to persistent data structures, or at least tail sharing:
The MoarVM implementation of strings, which is behind str and thus Str, implements string concatenation by creating a new string that refers to the two strings being concatenated instead of copying their data (and does similar tricks for substr and repetition). This is strictly an optimization, not a language requirement, and in some delicate cases (where the last grapheme of one string and the first grapheme of the next would form a single grapheme in the resulting string), it gives up and takes the copying path.
Outside of the core, modules like Concurrent::Stack, Concurrent::Queue, and Concurrent::Trie use tail sharing as a technique to implement relatively efficient lock-free data structures.

Related

Is there a more idiomatic way of creating an index buffer in Rust

In computer graphics, one of the most basic patterns is creating several buffers for vertex attributes and an index buffer that groups these attributes together.
In Rust, this basic pattern looks like this:
struct Position { ... }
struct Uv { ... }
struct Vertex {
pos: usize,
uv: usize
}
// v_vertex[_].pos is an index into v_pos
// v_vertex[_].uv is an index into v_uv
struct Object {
v_pos: Vec<Position>, // attribute buffer
v_uv: Vec<Uv>, // attribute buffer
v_vertex: Vec<Vertex> // index buffer
}
However, this pattern leaves a lot to be desired. Any operation on the attribute buffers that modifies existing data is going to be primarily concerned with making sure the index buffer isn't invalidated. In short, it leaves most of the promises the Rust compiler is capable of making on the table.
Making a scheme like this work isn't impossible. For example, the struct could include a HashMap that keeps track of changed indices and rebuilds the vectors when it gets too large. However, all the workarounds inevitably feel like another hack that doesn't address the underlying problem: there is no compile-checked guarantee that I'm not introducing data races or that the "reference" hasn't been invalidated somewhere else by accident.
When I first approached this problem when moving from C++ to Rust, I tried to make the Vertex object hold references to the attributes. That looked something like this:
struct Position { ... }
struct Uv { ... }
struct Vertex {
pos: &Position,
uv: &Uv
}
// obj.v_vertex[_].pos is a reference to an element of obj.v_pos
// obj.v_vertex[_].uv is a reference to an element of obj.v_uv
// This causes a lot of problems; it's effectively impossible to use
struct Object {
v_pos: Vec<Position>,
v_uv: Vec<Uv>,
v_vertex: Vec<Vertex>
}
...Which threw me down deep into the rabbit holes of self-referential structs and why they cannot exist in safe Rust. After learning more about the topic, it turned out that, as I suspected, the original implementation hid a lot of unsafety pitfalls that were caught by the compiler once I started being more explicit.
I'm aware of the existence of unsafe solutions like Pin, but I feel like at that point I might as well stick to the original method.
This leads me to the core question: Is there an idiomatic way of representing this relationship? I want to be able to modify the contents of each of the Vecs in a compiler-checked manner.
Is there an idiomatic way of representing this relationship?
The usizes you started with are the idiomatic way of representing this relationship.
Any operations to the attribute buffers that modifies existing data is going to be primarily concerned with making sure the index buffer isn't invalidated. …
Yes; you should write those operations within the module that defines Object, and keep the fields private so the Object cannot become inconsistent as long as those operations are correctly defined.
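For illustration, here is a minimal sketch of that encapsulation (the concrete fields of Position and Uv and the method names are invented for this example; only the shape of the API matters):
mod mesh {
    pub struct Position { pub x: f32, pub y: f32, pub z: f32 } // placeholder fields
    pub struct Uv { pub u: f32, pub v: f32 }                   // placeholder fields

    struct Vertex { pos: usize, uv: usize } // private: outside code cannot forge indices

    pub struct Object {
        v_pos: Vec<Position>,   // attribute buffer (private)
        v_uv: Vec<Uv>,          // attribute buffer (private)
        v_vertex: Vec<Vertex>,  // index buffer (private)
    }

    impl Object {
        pub fn new() -> Self {
            Object { v_pos: Vec::new(), v_uv: Vec::new(), v_vertex: Vec::new() }
        }

        // Add an attribute pair and the vertex referring to it in one operation,
        // so the index buffer can never point at data that doesn't exist.
        pub fn push_vertex(&mut self, pos: Position, uv: Uv) -> usize {
            self.v_pos.push(pos);
            self.v_uv.push(uv);
            self.v_vertex.push(Vertex { pos: self.v_pos.len() - 1, uv: self.v_uv.len() - 1 });
            self.v_vertex.len() - 1
        }

        // Handing out read-only access is fine: callers can't break the invariant.
        pub fn position_of(&self, vertex: usize) -> Option<&Position> {
            self.v_vertex.get(vertex).map(|v| &self.v_pos[v.pos])
        }
    }
}

fn main() {
    let mut obj = mesh::Object::new();
    let v = obj.push_vertex(mesh::Position { x: 0.0, y: 0.0, z: 0.0 }, mesh::Uv { u: 0.0, v: 0.0 });
    assert!(obj.position_of(v).is_some());
}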
In short, it leaves most of the promises the rust compiler is capable of making on the table.
It doesn't — because the Rust compiler is not actually capable of making those promises. & and even &mut references are actually very limited — they work by statically enforcing “Nobody (else) is going to change this value while you have the reference”. They don't have any bigger picture than that. In your case, assuming you're planning to edit this data, you will need to do operations that modify multiple parts in a consistent fashion, like “add a Position and also a Vertex that uses it”, or maybe “simultaneously add 3 vertices making up a triangle, using these 3 existing Positions”. References cannot help you do this correctly.
The only kind of data structure of this sort that you can in fact build using references is an append-only one, using the help of, for example, typed-arena. This might be suitable for an algorithm which is building a mesh. However, given that it's append-only, there is very little benefit — the operation “append a vertex, choosing indices as you go” is easy to write correctly without references. Additionally, you won't be able to store the mesh constructed that way long-term (because it is made of vectors that borrow from the arena) unless you also throw in ouroboros to wrap up the self-reference.
Fundamentally, references are designed to be used as temporary things — as a formalization and enforcement of common patterns used in C and C++ when passing and returning pointers — hence also being called “borrows”. The rules which the compiler understands about references are rules designed to handle those temporary uses. They are almost never what you should be building a data structure out of.

Do functional language compilers optimize a "filter, then map" operation on lists into a single pass?

I mostly use TypeScript during my work day and when applying functional patterns I oftentimes see a pattern like:
const someArray = anotherArray.filter(filterFn).map(transformFn)
This code will filter through all of anotherArray's items and then go over the filtered list (which may be identical if no items are filtered) again and map things. In other words, we iterate over the array twice.
This behavior could be achieved with a "single pass" over the array with a reduce:
const someArray = anotherArray.reduce((acc, item) => {
if (filterFn(item) === false) {
return acc;
}
acc.push(transformFn(item));
return acc;
}, [])
I was wondering if such an optimization is something the transpiler (in the TS world) knows to do automatically, and whether such optimizations are automatically done in more "functional-first" languages such as Clojure or Haskell. For example, I know that functional languages usually do optimizations with tail recursion, so I was also wondering about the "filter then map" case. Is this something that compilers actually do?
First of all, you usually shouldn't obsess about getting everything into a single pass. On small containers there is not that much difference between running a single-operation loop twice and running a dual-operation loop once. Aim to write code that's easily understandable. One for loop might be more readable than two, but a reduce is not more readable than a filter then map.
What the compiler does depends on your "container." When your loop is big enough to care about execution time, it's usually also big enough to care about memory consumption. So filtering then mapping on something like an observable works on one element at a time, all the way through the pipeline, before processing the next element. This means you only need memory for one element, even though your observable could be infinite.
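To make the "one element at a time" idea concrete, here is a rough sketch (not from the answer above; filterFn and transformFn are stand-ins for whatever functions you already have) using plain generators, which push each element all the way through the pipeline before pulling the next one:
// Lazy filter: pulls one element, decides, and yields before touching the next.
function* filterLazy<T>(source: Iterable<T>, keep: (x: T) => boolean): Generator<T> {
    for (const x of source) {
        if (keep(x)) yield x;
    }
}

// Lazy map: transforms elements one at a time as they are requested.
function* mapLazy<T, U>(source: Iterable<T>, transform: (x: T) => U): Generator<U> {
    for (const x of source) {
        yield transform(x);
    }
}

const anotherArray = [1, 2, 3, 4, 5];
const filterFn = (n: number) => n % 2 === 0;
const transformFn = (n: number) => n * 10;

// Single pass over anotherArray; no intermediate filtered array is allocated.
const someArray = [...mapLazy(filterLazy(anotherArray, filterFn), transformFn)];
console.log(someArray); // [20, 40]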

Golang RWMutex on map content edit

I'm starting to use RWMutex with a map in my Go project, since I now have more than one goroutine running at the same time, and while making all of the changes for that a doubt came to my mind.
The thing is that I know we must use RLock when only reading, to allow other routines to do the same task, and Lock when writing, to fully block the map. But what are we supposed to do when editing a previously created element in the map?
For example... Let's say I have a map[int]string where I do Lock, put "hello " inside and then Unlock. What if I want to add "world" to it? Should I do Lock, or can I do RLock?
You should approach the problem from another angle.
A simple rule of thumb you seem to understand just fine is
You need to protect the map from concurrent accesses when at least one of them is a modification.
Now the real question is what constitutes a modification of a map.
To answer it properly, it helps to notice that values stored in maps are not addressable — by design. This was engineered that way simply due to the fact that maps internally have an intricate implementation which might move the values they contain around in memory, to provide (amortized) fast access time when the map's structure changes due to insertions and/or deletions of its elements.
The fact that map values are not addressable means you cannot do something like
m := make(map[int]string)
m[42] = "hello"
go mutate(&m[42]) // take a single element and go modifying it...
// ...while other parts of the program change _other_ values
m[123] = "blah blah"
The reason you are not allowed to do this is that the insertion operation m[123] = ... might trigger moving the storage of the map's elements around, and that might involve moving the storage of the element keyed by 42 to some other place in memory — pulling the rug from under the feet of the goroutine running the mutate function.
So, in Go, maps really only support three operations:
Insert — or replace — an element;
Read an element;
Delete an element.
You cannot modify an element "in place" — you can only go in three steps:
Read the element;
Modify the variable containing the (read) copy;
Replace the element by the modified copy.
As you can now see, steps (1) and (3) are mere map accesses, and so the answer to your question is (hopefully) apparent: step (1) shall be done under at least a read lock, and step (3) shall be done under a write (exclusive) lock.
In contrast, elements of other compound types — arrays (and slices) and fields of struct types — do not have the restriction maps have: provided the storage of the "enclosing" variable is not relocated, it is fine to change its different elements concurrently by different goroutines.
Since the only way to change the value associated with a key in the map is to reassign the changed value to the same key, that is a write / modification, so you have to obtain the write lock; simply using the read lock will not be sufficient.
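As a rough illustration of the read / modify / replace recipe with sync.RWMutex (the Store type and its method names are made up for this sketch):
package main

import (
    "fmt"
    "sync"
)

// Store wraps the map[int]string from the question together with its lock.
type Store struct {
    mu sync.RWMutex
    m  map[int]string
}

// Get only reads, so a read lock is enough.
func (s *Store) Get(key int) (string, bool) {
    s.mu.RLock()
    defer s.mu.RUnlock()
    v, ok := s.m[key]
    return v, ok
}

// Append "edits" an element by reading the current value, modifying the copy,
// and storing the copy back under the same key. The write lock is held across
// all three steps so that two concurrent Appends cannot lose each other's update.
func (s *Store) Append(key int, suffix string) {
    s.mu.Lock()
    defer s.mu.Unlock()
    s.m[key] = s.m[key] + suffix
}

func main() {
    s := &Store{m: make(map[int]string)}
    s.Append(1, "hello ")
    s.Append(1, "world")
    v, _ := s.Get(1)
    fmt.Println(v) // hello world
}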

Destructive place-modifying operators

The CLtL2 reference clearly distinguishes between nondestructive and destructive common-lisp operations. But, within the destructive camp, it seems a little less clear in marking the difference between those which simply return the result, and those which additionally modify a place (given as argument) to contain the result. The usual convention of annexing "f" to such place modifying operations (eg, setf, incf, alexandria:deletef) is somewhat sporadic, and also applies to many place accessors (eg, aref, getf). In an ideal functional programming style (based only on returned values) such confusion is probably not an issue, but it seems like it could lead to programming errors in some practical applications that do use place modification. Since different implementations can handle the place results differently, couldn't portability be affected? It even seems difficult to test a particular implementation's approach.
To better understand the above distinction, I've divided the destructive common-lisp sequence operations into two categories corresponding to "argument returning" and "operation returning". Could someone validate or invalidate these categories for me? I'm assuming these categories could apply to other kinds of destructive operations (for lists, hash-tables, arrays, numbers, etc) too.
Argument returning: fill, replace, map-into
Operation returning: delete, delete-if, delete-if-not, delete-duplicates, nsubstitute, nsubstitute-if, nsubstitute-if-not, nreverse, sort, stable-sort, merge
But, within the destructive camp, it seems a little less clear in marking the difference between those which simply return the result.
There is no easy syntactic marker of which operations are destructive and which are not, even though there are useful conventions like the n prefix. Remember that CL is a standard inspired by different Lisps, which does not help enforce a consistent terminology.
The usual convention of annexing "f" to such place modifying operations (eg, setf, incf, alexandria:deletef) is somewhat sporadic, and also applies to many place accessors (eg, aref, getf).
All setf expanders should end with f, but not everything that ends with f is a setf expander. For example, aref takes its name from array and reference, and isn't a macro.
... but it seems like it could lead to programming errors in some practical applications that do use place modification.
Most data is mutable (see comments); once you code in CL with that in mind, you take care not to modify data you did not create yourself. As for using a destructive operation in place of a non-destructive one inadvertently, I don't know: I guess it can happen with sort or delete, maybe the first few times you use them. In my mind delete is stronger, more destructive, than simply remove, but maybe that's because I already know the difference.
Since different implementations can handle the place results differently, couldn't portability be affected?
If you want portability, you follow the specification, which does not offer much guarantee w.r.t. which destructive operations are applied. Take for example DELETE (emphasis mine):
Sequence may be destroyed and used to construct the result; however, the result might or might not be identical to sequence.
It is wrong to assume anything about how the list is being modified, or even if it is being modified. You could actually implement delete as an alias of remove in a minimal implementation. In all cases, you use the return value of your function (both delete and remove have the same signature).
Categories
I've divided the destructive common-lisp sequence operations into two categories corresponding to "argument returning" and "operation returning".
It is not clear at all what those categories are supposed to represent. Are these the definitions you have in mind?
an argument returning operation is one which returns one of its arguments as a return value, possibly modified.
an operation returning operation is one where the result is based on one of its arguments, and might be identical to that argument, but need not be.
The definition of operation returning is quite vague and encompasses both destructive and non-destructive operations. I would classify cons as such because it does not return one of its arguments; OTOH, it is a purely functional operation.
I don't really get what those categories offer in addition to destructive or non-destructive.
Setf composition gotcha
Suppose you write a function (remote host key) which gets a value from a remote key/value datastore. Suppose also that you define (setf remote) so that it updates the remote value.
You might expect (setf (first (remote host key)) value) to:
Fetch a list from host, indexed by key,
Replace its first element by value,
Push the changes back to the remote host.
However, step 3 generally does not happen: the local list is modified in place (this is the most efficient alternative, but it makes setf expansions somewhat lazy about updates). You could, however, define a new set of macros with DEFINE-SETF-EXPANDER such that the whole round-trip is always performed.
Let me try to address your question by introducing some concepts.
I hope it helps you to consolidate your knowledge and to find your remaining answers about this subject.
The first concept is that of non-destructive versus destructive behavior.
A function that is non-destructive won't change the data passed to it.
A function that is destructive may change the data passed to it.
You can apply the (non-)destructive nature to something other than a single function. For instance, if a function stores the data passed to it somewhere, say in an object's slot, then the destructiveness depends on that object's behavior, its other operations, events, etc.
The convention for functions that immediately modify their arguments is (usually) to prefix them with n.
The convention doesn't work the other way around: there are many functions that start with n (e.g. not/null, nth, ninth, notany, notevery, numberp, etc.). There are also notable exceptions, such as delete, merge, sort and stable-sort. The only way to naturally grasp them is with time/experience. For instance, always refer to the HyperSpec whenever you see a function you don't know yet.
Moreover, you usually need to store the result of some destructive functions, such as delete and sort, because they may choose to skip the head of the list or to not be destructive at all. delete may actually return nil, the empty list, which is not possible to obtain from a modified cons.
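As a small illustration (the exact outcome is implementation-dependent, but many implementations behave roughly like this):
;; DELETE cannot splice out the very first cons the caller still holds,
;; so the variable must be updated with the return value.
(let ((xs (list 1 2 3)))
  (delete 1 xs)   ; typically returns (2 3) ...
  xs)             ; ... while XS often still evaluates to (1 2 3)

(let ((xs (list 1 2 3)))
  (setf xs (delete 1 xs))
  xs)             ; => (2 3)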
The second concept is that of generalized reference.
A generalized reference is anything that can hold data, such as a variable, the car and cdr of a cons, the element locations of an array or hash table, the slots of an object, etc.
For each container data structure, you need to know the specific modifying function. However, for some generalized references, such as a local variable, there might not be a function to modify them, in which case there are still special forms to do so.
As such, in order to modify any generalized reference, you need to know its modifying form.
Another concept closely related to generalized references is the place. A form that identifies a generalized reference is called a place. Or in other words, a place is the written way (form) that represents a generalized reference.
For each kind of place, you have a reader form and a writer form.
Some of these forms are documented, such as using the symbol of a variable to read it and setq a variable to write to it, or car/cdr to read from and rplaca/rplacd to write to a cons. Others are only documented to be accessors, such as aref to read from arrays; its writer form is not actually documented.
To get these forms, you have get-setf-expansion. You actually also get a set of variables and their initializing forms (to be bound as if through let*) that will be used by the reader form and/or the writer form, and a set of variables (to be bound to the new values) that will be used by the writer form.
If you've used Lisp before, you've probably used setf. setf is a macro that generates code that runs within the scope (environment) of its expansion.
Essentially, it behaves as if by using get-setf-expansion, generating a let* form for the variables and initializing forms, generating extra bindings for the writer variables with the result of the value(s) form and invoking the writer form within all this environment.
For instance, let's define a my-setf1 macro which takes only a single place and a single newvalue form:
(defmacro my-setf1 (place newvalue &environment env)
  (multiple-value-bind (vars vals store-vars writer-form reader-form)
      (get-setf-expansion place env)
    `(let* (,@(mapcar #'(lambda (var val)
                          `(,var ,val))
                      vars vals))
       ;; In case some vars are used only by reader-form
       (declare (ignorable ,@vars))
       (multiple-value-bind (,@store-vars)
           ,newvalue
         ,writer-form
         ;; Uncomment the next line to mitigate buggy writer-forms
         ;;(values ,@store-vars)
         ))))
You could then define my-setf as:
(defmacro my-setf (&rest pairs)
  `(progn
     ,@(loop
         for (place newvalue) on pairs by #'cddr
         collect `(my-setf1 ,place ,newvalue))))
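A quick usage sketch of the macros defined above (assuming they have been evaluated):
(let ((v (vector 1 2 3))
      (c (cons 'a 'b)))
  ;; Two place/newvalue pairs, expanded via get-setf-expansion as shown above.
  (my-setf (aref v 0) 42
           (car c) 'z)
  (values v c))
;; => #(42 2 3), (Z . B)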
There is a convention for such macros, which is to suffix with f, such as setf itself, psetf, shiftf, rotatef, incf, decf, getf and remf.
Again, the convention doesn't work the other way around: there are operators that end with f, such as aref, svref and find-if, which are functions, and if, which is a conditional execution special operator. And yet again, there are notable exceptions, such as push, pushnew, pop, ldb, mask-field, assert and check-type.
Depending on your point-of-view, many more operators are implicitly destructive, even if not effectively tagged as such.
For instance, every defining operator (e.g. the macros defun, defpackage, defclass, defgeneric, defmethod, the function load) changes either the global environment or a temporary one, such as the compilation environment.
Others, like compile-file, compile and eval, depend on the forms they'll execute. For compile-file, it also depends on how much it isolates the compilation environment from the startup environment.
Other operators, like makunbound, fmakunbound, intern, export, shadow, use-package, rename-package, adjust-array, vector-push, vector-push-extend, vector-pop, remhash, clrhash, shared-initialize, change-class, slot-makunbound, add-method and remove-method, are (more or less) clearly intended to have side-effects.
And it's this last concept that can be the widest. Usually, a side-effect is regarded as any observable variation in one environment. As such, functions that don't change data are usually considered free of side-effects.
However, this is ill-defined. You may consider that all code execution implies side-effects, depending on what you define to be your environment or on what you can measure (e.g. consumed quotas, CPU time and real time, used memory, GC overhead, resource contention, system temperature, energy consumption, battery drain).
NOTE: None of the example lists are exhaustive.

Are higher order functions on collections guaranteed to be executed sequentially?

In another question, a user suggested writing code like this:
def list = ['a', 'b', 'c', 'd']
def i = 0;
assert list.collect { i++ } == [0, 1, 2, 3]
Such code is, in other languages, considered bad practice because the closure passed to collect changes the state of its context (here it changes the value of i). In other words, the closure has side-effects.
Such higher order functions should be able to run the closure in parallel and assemble the results into a new list again. If the processing in the closure involves long, CPU-intensive operations, it may be worth executing them in separate threads. It would be easy to change collect to use an ExecutorCompletionService to achieve that, but it would break the above code.
Another example of a problem: if, for some reason, collect browsed the collection in, say, reverse order, the result would be [3, 2, 1, 0]. Note that in this case the list has not been reversed; 0 is really the result of applying the closure to 'd'!
Interestingly, these functions are documented with "Iterates through this collection" in Collection's JavaDoc, which suggests the iteration is sequential.
Does the Groovy specification explicitly define the order of execution in higher order functions like collect or each? Is the above code broken, or is it OK?
I don't like explicit external variables being relied upon in my closures for the reasons you give above.
Indeed, the less variables I have to define, the happier I am ;-)
For the possibly parallel things as well, always code with a view to wrapping it with some level of GPars loveliness should it prove too much for a single thread to handle. For this, as you say, you want as little mutability as possible and to try and completely avoid side-effects (such as the external counter pattern above)
As for the question itself, if we take collect as an example function, and examine the source code, we can see that given an Object (Collection and Map are done in a similar way with slight differences as to how the Iterator is referenced) it iterates along InvokerHelper.asIterator(self), adding the result of each closure call to the resultant list.
InvokerHelper.asIterator (again source is here) basically calls the iterator() method on the Object passed in.
So for Lists, etc it will iterate down the objects in the order defined by the iterator.
It is therefore possible to compose your own class which follows the Iterable interface design (doesn't need to implement Iterable though, thanks to duck-typing), and define how the collection will be iterated.
I think by asking about the Groovy specification, though, this answer might not be what you want, but I don't think there is an answer. Groovy has never really had a 'complete' specification (indeed this is a point about Groovy that some people dislike).
I think keeping the functions passed collect or findAll side-effect free is a good idea in general, not only for keeping the complexity low but making the code more parallel-friendly in case parallel execution is needed in the future.
But in the case of each there is not much point in keeping the function side-effect free, as it wouldn't do anything (in fact the sole purpose of this method is to act as a for-each loop). The Groovy documentation has some examples of using each (and its variants, eachWithIndex and reverseEach) that require an execution order to be defined.
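For instance, something along these lines (a rough sketch, not taken from the documentation) only gives a predictable result because eachWithIndex visits the elements in order:
// The side-effecting closure appends to a StringBuilder in iteration order
def out = new StringBuilder()
['a', 'b', 'c'].eachWithIndex { val, idx -> out << "${idx}:${val} " }
assert out.toString() == '0:a 1:b 2:c '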
Now, from a pragmatic point of view, I think it can sometimes be OK to use functions with some side effects in methods like collect. For example, to transform a list into [index, value] pairs, a transpose and a range can be used:
def list = ['a', 'b', 'c']
def enumerated = [0..<list.size(), list].transpose()
assert enumerated == [[0,'a'], [1,'b'], [2,'c']]
Or even an inject
def enumerated = list.inject([]) { acc, val -> acc << [acc.size(), val] }
But a collect and a counter do the trick too, and I think the result is the most readable:
def n = 0, enumerated = list.collect{ [n++, it] }
Now, this example wouldn't make sense if Groovy provided a collect and similar methods with an index-value-param closure (see Jira issue), but it kinda shows that sometimes practicality beats purity IMO :)
