Idiomatic graphs in APL

APL is great for array-type problems, but I'm curious how best to work with graphs in APL. I'm playing around with LeetCode questions, for example question 662, Maximum Width of Binary Tree. The exercise works with Node objects in a value/left/right pointer style, but the test case uses a flat array like [1,3,null,5,3]. The notation is compressed; uncompressed it would be [[1], [3,null], [5,3,null,null]]. Reading layer by layer gives [[1], [3], [5,3]] (so 2 is the widest layer).
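To make the layering concrete, here is a rough Python sketch of that decompression (the function name and the None-for-null convention are my own, not from the exercise): each level reserves two child slots per non-null node of the previous level, and the compressed input simply omits slots whose parent is null.

```python
def decompress(flat):
    """Split a compressed BFS array into per-level lists.

    `flat` uses None for null; children of null nodes are
    simply absent, as in LeetCode's serialization.
    """
    if not flat:
        return []
    it = iter(flat)
    levels = [[next(it)]]
    while True:
        # two child slots for every non-null node of the previous level
        slots = 2 * sum(v is not None for v in levels[-1])
        if slots == 0:
            break
        level = [next(it, None) for _ in range(slots)]  # pad when input runs out
        if all(v is None for v in level):
            break
        levels.append(level)
    return levels

print(decompress([1, 3, None, 5, 3]))  # → [[1], [3, None], [5, 3]]
```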
Another example,
[5,4,7,3,null,2,null,-1,null,9] gives the answer 2
So I'm not sure of the idiomatic way to work with trees. Do I use classes, or are arrays best? In either case, how do I convert the input?
I came up with a couple of solutions, but both feel inelegant. (Apologies for the lack of comments.)
convert←{
    prev←{(-⌈2÷⍨≢⍵)↑⍵}           ⍝ most recent level: last half of the accumulator
    nxt←{
        ⍵≡⍬:⍺                    ⍝ input exhausted: return the decompressed array
        m←2/×prev ⍺              ⍝ child-slot mask: two slots per non-null parent
        cnt←+/m                  ⍝ how many values the next level consumes
        (⍺,(m\cnt↑⍵))nxt(cnt↓⍵)  ⍝ expand values into their slots and recurse
    }
    (1↑⍵)nxt(1↓⍵)
}
Alternatively,
convert ← {
    total←+/×⍵                    ⍝ count of non-null nodes in the input
    nxt←{
        double←×1,2↓2/0,⍵         ⍝ slot mask: two child slots per non-null node
        (((+/double)↑⍺)\⍨⊢)double ⍝ expand input values into the slots
    }
    ⍵ nxt⍣{(+/×⍺)=total}1
}
Both solutions are limited in that they assume 0 is null.
Once I've decompressed the input, it's simply a matter of stratifying by its order:
⌈/(1+⌈/-⌊/)∘⍸¨×nodes⊆⍨⍸2*¯1+⍳⌈2⍟≢nodes
In Python, though, I could use other methods to traverse, e.g. keep track of the left- and right-most node on a per-depth basis.
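That per-depth leftmost/rightmost idea is easy to sketch in Python over the positionally padded levels (the helper name and the list-of-levels input shape are my own assumptions):

```python
def max_width(levels):
    """Widest level: distance between the leftmost and rightmost
    non-null slots, counting the nulls in between."""
    best = 0
    for level in levels:
        filled = [i for i, v in enumerate(level) if v is not None]
        if filled:
            best = max(best, filled[-1] - filled[0] + 1)
    return best

# LeetCode's [1,3,2,5,3,null,9] example, already padded into levels:
print(max_width([[1], [3, 2], [5, 3, None, 9]]))  # → 4
```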
NOTE: This may be two questions (one about decompressing, the other about how to traverse graphs in general), but one depends on the other.
Any ideas?

The work on the Co-dfns compiler has given lots of insight into working with tree- and graph-like data structures in APL.
Thesis: A Data Parallel Compiler Hosted on the GPU
GitHub repo: github.com/Co-dfns/Co-dfns (Many related goodies in project README file)
However, the thesis is quite lengthy, so for this particular exercise I'll give a brief explanation of how to approach it.
the exercise works with Node objects with a value/left/right pointer style, however the test-case uses a basic array like [1,3,null,5,3].
Do we really need to build the tree with Node-type objects to get an answer to the question? You could write the solution in something like Python and translate it to APL, but that would lose the whole point of writing it in APL...
Notice the input is already an array! It is a BFS traversal of the binary tree. (The Co-dfns compiler uses DFS traversal order, though.)
So what we actually need to do is just build a matrix like the one below for an input like [1,3,2,5,3,null,9] (⍬ is a placeholder value for null):
1 ⍬ ⍬ ⍬ ⍝ level 0
3 2 ⍬ ⍬ ⍝ level 1
5 3 ⍬ 9 ⍝ level 2
For this problem we don't need to know which node's parent is which.
We can even abuse the fact that the input has no negative values (the numbers themselves could be negative; we only actually care whether a node is null), and change ⍬ to ¯1 or 0 to make the answer easier to compute.
So the problem becomes: compute the matrix representation of the tree as a variable tree from the input array, then calculate the width of each level with +/0<tree; the output is then just 2*level (notice the first level is level 0). This uses the wrong definition of width; I'll show how to correct it below.
And the conversion from input to matrix is actually very easy; hint: ↑.
1 (3 2) 5
┌─┬───┬─┐
│1│3 2│5│
└─┴───┴─┘
↑1 (3 2) 5
1 0
3 2
5 0
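Monadic ↑ (mix) pads the shorter rows with the fill element; a hypothetical Python equivalent of that padding step:

```python
def mix(rows, fill=0):
    """Pad ragged rows to a common length, like APL's monadic ↑."""
    width = max(len(r) for r in rows)
    return [r + [fill] * (width - len(r)) for r in rows]

print(mix([[1], [3, 2], [5]]))  # → [[1, 0], [3, 2], [5, 0]]
```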
Thanks for pointing out that my original solution had a problem constructing the tree matrix.
This is the corrected method for constructing the tree. To distinguish null from the 0 used for padding, I add one to the input array, so 2 marks a non-null node and 1 marks a null.
buildmatrix←{
    ⎕IO←0
    in←1+(⊂⊂'null')(≢⍤1 0)⎕JSON ⍵
    ⍝ Build the matrix
    loop←{
        (n acc)←⍺
        0=≢⍵:acc
        cur←n↑⍵
        (2×+/2=cur)(acc,⊂cur)∇ n↓⍵
    }
    ↑1 ⍬ loop in
}
However, since the definition of width here is:
The width of one level is defined as the length between the end-nodes (the leftmost and rightmost non-null nodes), where the null nodes between the end-nodes are also counted into the length calculation.
We can just compute the width while reconstructing the tree (computing each level's width using \ and / with the pattern from the previous level):
If the last level is 1 0 1 1, the next level in positional form is 1 0 0 0 0 0 1 0:
1 0 1 1
1 0 0 0 0 0 1 0
(2/1 0 1 1)\1 0 0 0 1 0
1 0 0 0 0 0 1 0
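In Python terms (names mine), the expand step places each child value under its non-null parent and zero-fills the rest:

```python
def expand(mask, values):
    """APL's mask\\values: take the next value where mask is 1, else 0."""
    it = iter(values)
    return [next(it) if m else 0 for m in mask]

prev = [1, 0, 1, 1]
children = [1, 0, 0, 0, 1, 0]               # two entries per non-null parent
mask = [b for b in prev for _ in range(2)]  # like 2/prev
print(expand(mask, children))  # → [1, 0, 0, 0, 0, 0, 1, 0]
```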
So we don't need to construct the complete matrix, and the answer to the exercise is just:
width←{
    ⎕IO←0
    in←(⊂⊂'null')(≢⍤1 0)⎕JSON ⍵
    ⍝ strip leading and trailing zeros
    strip←(⌽⍳∘1↓⊢)⍣2
    ⍝ Walk the levels, tracking the maximum width seen
    loop←{
        (prev mw)←⍺
        0=≢⍵:mw
        cur←⍵↑⍨n←2×+/prev
        ((⊢,⍥⊂mw⌈≢)strip cur\⍨2/prev)∇ n↓⍵
    }
    (,1)1 loop 1↓in
}
width '[1,null,2,3,null,4,5,6]'
2
And the interesting fact is, you can probably do the same in other, non-array-based languages like Haskell. So instead of translating existing algorithms between similar-looking languages, thinking the APL way leads you to new algorithms for problems!

Related

Can SPARK be used to prove that Quicksort actually sorts?

I'm not a user of SPARK. I'm just trying to understand the capabilities of the language.
Can SPARK be used to prove, for example, that Quicksort actually sorts the array given to it?
(Would love to see an example, assuming this is simple)
Yes, it can, though I'm not particularly good at SPARK proving (yet). Here's how quicksort works:
We note that the idea behind quicksort is partitioning.
A 'pivot' is selected and used to partition the collection into three groups: equal-to, less-than, and greater-than. (This ordering impacts the procedure below; I'm using it because it's different from the in-order arrangement, to illustrate that this is primarily about grouping, not ordering.)
If the collection is 0 or 1 in length, it is already sorted; if 2, check and possibly correct the ordering; otherwise continue on.
Move the pivot to the first position.
Scan from the second position to the last; depending on the value under consideration:
Less: swap with the first item in the Greater partition.
Greater: no-op.
Equal: swap with the first item of Less, then swap with the first item of Greater.
Recursively call on the Less & Greater partitions.
If it's a function, return Less & Equal & Greater; if a procedure, rearrange the in out parameter into that ordering.
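Not SPARK, but the partitioning invariant those steps describe can be pinned down in a short Python sketch (a functional three-way partition rather than the in-place swaps above; names are mine):

```python
def quicksort3(a):
    """Three-way quicksort: partition into <, ==, > the pivot, recurse."""
    if len(a) <= 1:
        return a
    pivot = a[0]
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    # len(less) + len(equal) + len(greater) == len(a):
    # exactly the L + E + G invariant the proof steps below track
    return quicksort3(less) + equal + quicksort3(greater)

print(quicksort3([3, 1, 4, 1, 5, 9, 2, 6]))  # → [1, 1, 2, 3, 4, 5, 6, 9]
```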
Here's how you would go about doing things:
Prove/assert the 0 and 1 cases as true,
Prove your handling of 2 items,
Prove that given an input-collection and pivot there are a set of three values (L,E,G) which are the count of the elements less-than/equal-to/greater-than the pivot [this is probably a ghost-subprogram],
Prove that L+E+G equals the length of your collection,
Prove [in the post-condition] that given the pivot and (L,E,G) tuple, the output conforms to L items less-than the pivot followed by E items which are equal, and then G items that are greater.
And that should do it. [IIUC]

When determining time complexity are variables like n etc always given to an input?

The short version of this long post: when determining time complexity, are variables like n etc. always tied to an input? If not, how else can you define variables?
I'm leaving the long version of my question below in case it helps anyone.
NOTE: I'm aware the question has already been asked here, but I'm not satisfied with the answers. The accepted answer ignores the part of the question noting that the recursion essentially creates a balanced binary tree, while the second answer wrongly presumes that the author used the input as the definition of n rather than the number of levels of calls in the binary tree. (Although it may be making the correct point that the difference is the definition of n, and it's possible the author slipped up or just confused me instead.)
I'm comparing these two examples on edition 6 of Cracking the Coding Interview
Pages 44-45 (VI Big O Recursive Runtime section)
int f(int n) {
    if (n <= 1) {
        return 1;
    }
    return f(n-1) + f(n-1);
}
In this case the author defined n as the number of levels created through the recursive calls.
Pages 49-50 (VI Big O Example 9)
Assume the input is a balanced binary search tree
int sum(Node node) {
    if (node == null) {
        return 0;
    }
    return sum(node.left) + node.value + sum(node.right);
}
Here the author defines n as the number of nodes in the tree and states that the depth of the tree is therefore log n (and since 2^(log n) equals n, it's O(n)).
So here's the number of calls and the depth of the tree based on the input in the first example (the author started counting depth from 0 and used the term "levels"):
Input  Calls  Depth
1      1      0
2      3      1
3      7      2
4      15     3
etc.
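The call counts in the table follow 2^n - 1, which a quick check (helper name mine) confirms:

```python
def count_calls(n):
    """Total invocations of f for f(n) = f(n-1) + f(n-1), base case n <= 1."""
    if n <= 1:
        return 1
    return 1 + 2 * count_calls(n - 1)  # this call plus two recursive subtrees

print([count_calls(n) for n in range(1, 5)])  # → [1, 3, 7, 15]
```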
I'm actually confused why the author was able to choose the depth of the tree as n, because in the past I've always seen the input used as n. (It also seems meaningless, because the depth is just the input minus 1.) Was the second answer in the linked question actually correct, rather than the author, by using the proper definition of n as the input?
In the second example above it seems sensible that n is the number of nodes in the tree and that the tree therefore has a depth of log n.
So I guess I'm asking whether an input is always the proper criterion for defining n (or whatever variable you want to use). If not, how else can you define n? If the input is always used to define n, I get why the answers would be different. If not, I'd be confused, since the recursion in example 1 essentially does create a balanced binary tree, which therefore also has a depth of log n.
Based on googling it does seem that n (etc) is supposed to refer to the input.
Explain Time Complexity?
https://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/
https://www.interviewcake.com/article/java/big-o-notation-time-and-space-complexity

Directed Adjacency Lists

I have asked this question in a variety of ways, starting with:
When you have an adjacency list, does order matter? Say I had the adjacency list {1, 2, 5} is that
equivalent to {2, 1, 5}? Or does order signify something and therefore these two lists are not
equivalent?
I received several answers, including that it only matters if the graph is directed, and that the order signifies something about the adjacent nodes' clockwise arrangement(?). I was also given the opinion that no, it does not matter, although the answerer would prefer it be ordered with regard to weights (if used), the way the internet is ordered by a page-ranking algorithm. I don't presume to have paraphrased any of these responses accurately, although I think I conveyed the gist. Any thoughts are appreciated.
Also, I have refined my question in a way that if answered, I think will give me the exact answer I am after:
Suppose I have the adjacency matrix for a directed graph:
0 0 1 0
0 0 1 1
1 1 0 1
0 1 1 0
I am told the equivalent adjacency lists are as follows, and I presume my teacher listed them this way intentionally rather than as some arbitrary reordering, especially as seen in the last list:
{ 2 }
{ 2, 3 }
{ 0, 1, 3 }
{ 2, 1 }
The last list is { 2, 1 }! What in the equivalent adjacency matrix alerts me that it should be
{ 2, 1 } rather than { 1, 2 }?
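One way to see that nothing in the matrix forces { 2, 1 }: deriving the lists mechanically (a throwaway Python sketch) scans each row left to right, which naturally yields { 1, 2 } for the last row:

```python
matrix = [
    [0, 0, 1, 0],
    [0, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 0],
]
# collect the column index of every 1 in each row
adj = [[j for j, x in enumerate(row) if x] for row in matrix]
print(adj)  # → [[2], [2, 3], [0, 1, 3], [1, 2]]
```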
Typically, no, the order in an adjacency list doesn't matter.
... unless explicitly stated.
Implementation may have the list actually ordered for various reasons: as a consequence of how the graph is created, or because you want to process neighbors of a vertex in some order.
But conceptually, the order doesn't matter.
I believe that no is the answer in your case: {2,1} is the same as {1,2}. Perhaps your teacher wrote it wrong at first (like {2,3}) and didn't change the order after fixing it. Or he/she wanted to get you thinking about whether the order matters. You won't know for sure unless you ask the teacher.
The value of a node in an adjacency list is a set. Sets are unordered. Therefore {1,2} is the same as {2,1}.

Benefits of starting arrays at 0?

What's the purpose of array indices starting at 0 in most programming languages, in contrast to the ordinal way in which we refer to most things IRL (first, second, third, etc.)? What's the logic or utility behind that?
I'm completely used to it by now, but never stopped to think about the reason behind it.
Update: One benefit I read about from Googling is that for loops can have i < n if you want to go up to n.
Dijkstra lays out the reasoning in Why numbering should start at zero.
When dealing with a sequence of length N, the elements of which we wish to distinguish by subscript, the next vexing question is what subscript value to assign to its starting element...
when starting with subscript 1, the subscript range 1 ≤ i < N+1; starting with 0, however, gives the nicer range 0 ≤ i < N. So let us let our ordinals start at zero: an element's ordinal (subscript) equals the number of elements preceding it in the sequence. And the moral of the story is that we had better regard —after all those centuries!— zero as a most natural number.
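Dijkstra's two points (the convenient 0 ≤ i < N range, and subscript = number of preceding elements) are easy to check in a few lines of Python:

```python
N = 5
a = list(range(10, 10 + N))

# An element's zero-based subscript equals the count of elements before it.
assert all(i == len(a[:i]) for i in range(N))

# Half-open ranges [0,k) and [k,N) split and rejoin without off-by-ones.
for k in range(N + 1):
    assert a[:k] + a[k:] == a

print("ok")  # → ok
```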
When we access an item by index like a[i], the compiler converts it to *(a + i). So the index of the first element is zero, because *(a + 0) gives us a, which points to the first item in the array. This is quite literally the case for, say, C and C++, but not for more recent languages such as C#.
Dijkstra wrote a really interesting paper about this in 1982: Why numbering should start at zero.
You may Google for it; there have been a lot of discussions about it. I'd say that the fact that the first element's offset from the beginning (which is what an index is) is zero certainly makes sense.
Because Dijkstra said so.
In my old assembler days it was natural for the offset to start at zero.
dcl foo(9)
ldx0 0 'offset to index register 0
lda foo,x0 'get first element
adx0 1,du 'get 2nd
ldq foo,x0
When looking at it from the perspective of the hardware it makes more sense.

modifying an element of a list in-place in J, can it be done?

I have been playing with an implementation of lookandsay (OEIS A005150) in J. I have made two versions, both very simple, using while.-style control structures. One recurs, the other loops. Because I am compulsive, I started running comparative timings on the versions.
Look and say is the sequence 1 11 21 1211 111221..., that is, "one", "one one", "two ones", etc.
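For reference, the generating rule is just run-length reading; a quick Python sketch (not the J under discussion):

```python
from itertools import groupby

def look_and_say(s):
    """Next term: read runs aloud, e.g. '1211' -> 'one 1, one 2, two 1s'."""
    return ''.join(str(len(list(g))) + d for d, g in groupby(s))

term, terms = '1', []
for _ in range(5):
    terms.append(term)
    term = look_and_say(term)
print(terms)  # → ['1', '11', '21', '1211', '111221']
```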
For early elements of the list (up to around 20) the looping version wins, but only by a tiny amount. Timings around 30 cause the recursive version to win, by a large enough amount that the recursive version might be preferred if the stack space were adequate to support it. I looked into why, and I believe it has to do with the handling of intermediate results. The 30th number in the sequence has 5808 digits (the 32nd has 9898; the 34th, 16774).
When you are doing the problem with recursion, you can hold the intermediate results in the recursive call, and the unstacking at the end builds the results so that there is minimal handling of the results.
In the list version, you need a variable to hold the result. Every loop iteration causes you to need to add two elements to the result.
The problem, as I see it, is that I can't find any way in J to modify an existing array without completely reassigning it. So I am saying
try. o =. o,e,(0&{y) catch. o =. e,(0&{y) end.
to put an element into o where o might not have a value when we start. That may be notably slower than
o =. i.0
.
.
.
o =. (,o),e,(0&{y)
The point is that the result gets the wrong shape without the ravels, or so it seems; it somehow inherits a shape from i.0.
But even functions like } (amend) don't modify a list; they return a list that has a modification made to it, and if you want to save the list you need to assign it. As the size of the assigned list increases (as you walk the number from beginning to end building the next number), the assignment seems to take more and more time. This assignment is really the only thing I can see that would make element 32 (9898 digits) take less time in the recursive version while element 20 (408 digits) takes less time in the loopy version.
The recursive version builds the return with:
e,(0&{y),(,lookandsay e }. y)
The above line is both the return line from the function and the recursion, so the whole return vector gets built at once as the call gets to the end of the string and everything unstacks.
In APL I thought that one could say something on the order of:
a[1+rho a] <- new element
But when I try this in NARS2000 I find that it causes an index error. I don't have access to any other APL, I might be remembering this idiom from APL Plus, I doubt it worked this way in APL\360 or APL\1130. I might be misremembering it completely.
I can find no way to do that in J. It might be that there is no way to do that, but the next thought is to pre-allocate an array that could hold results, and to change individual entries. I see no way to do that either - that is, J does not seem to support the APL idiom:
a<- iota 5
a[3] <- -1
Is this one of those side effect things that is disallowed because of language purity?
Does the interpreter recognize a=. a,foo or some of its variants as a thing that it should fastpath to a[>:#a]=.foo internally?
This is the recursive version, just for the heck of it. I have tried a bunch of different versions, and I believe that the longer the program, the slower it runs, and generally, the more complex, the slower. The program can be chained so that if you want the nth number you can do lookandsay^:n ] y. I have tried a number of optimizations, but the problem I have is that I can't tell what environment I am sending my output into. If I could tell that I was sending it to the next iteration of the program, I would send it as an array of digits rather than as a big number.
I also suspect that if I could figure out how to make a tacit version of the code, it would run faster, based on my finding that when I add something to the code that should make it shorter, it runs longer.
lookandsay=: 3 : 0
if. 0 = # ,y do. return. end. NB. return on empty argument
if. 1 ~: ##$ y do. NB. convert rank 0 argument to list of digits
y =. (10&#.^:_1) x: y
f =. 1
assert. 1 = ##$ y NB. the converted argument must be rank 1
else.
NB. yw =. y
f =. 0
end.
NB. e should be a count of the digits that match the leading digit.
e=.+/*./\y=0&{y
if. f do.
o=. e,(0&{y),(,lookandsay e }. y)
assert. e = 0&{ o
10&#. x: o
return.
else.
e,(0&{y),(,lookandsay e }. y)
return.
end.
)
I was interested in the characteristics of the numbers produced. I found that if you start with a 1, the numerals never get higher than 3. If you start with a numeral higher than 3, it will survive as a singleton. You can also get a 9 into the generated numbers by starting with something like 888888888, which will generate a number with one 9 in it and a single 8 at the end. But other than the singletons, no digit gets higher than 3.
Edit:
I did some more measuring. I had originally written the program to accept either a vector or a scalar, the idea being that internally I'd work with a vector. I had thought about passing a vector from one layer of code to the other, and I still might, using a left argument for control. When I pass the top level a vector, the code runs enormously faster, so my guess is that most of the CPU is being eaten by converting very long numbers between scalar and vector-of-digits form. The recursive routine always passes down a vector when it recurs, which might be why it is almost as fast as the loop.
That does not change my question.
I have an answer for this which I can't post for three hours. I will post it then, please don't do a ton of research to answer it.
assignments like
arr=. 'z' 15} arr
are executed in place. (See JWiki article for other supported in-place operations)
Interpreter determines that only small portion of arr is updated and does not create entire new list to reassign.
What happens in your case is not that array is being reassigned, but that it grows many times in small increments, causing memory allocation and reallocation.
If you preallocate (by assigning it some large chunk of data), then you can modify it with } without too much penalty.
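The same reallocation trade-off shows up in any language; a small Python illustration of the two growth strategies (timings omitted, names mine):

```python
# Repeated concatenation copies the whole list each time: O(n^2) work total.
def build_by_concat(n):
    out = []
    for i in range(n):
        out = out + [i]          # allocates a brand-new list every iteration
    return out

# Preallocating and amending in place touches each slot once: O(n) work total.
def build_preallocated(n):
    out = [None] * n             # one allocation up front
    for i in range(n):
        out[i] = i               # in-place update, no reallocation
    return out

print(build_preallocated(5))  # → [0, 1, 2, 3, 4]
```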
After I asked this question, to be honest, I lost track of this web site.
Yes, the answer is that the language has no form that means "update in place", but if you use either of two forms
x =: x , most anything
or
x =: most anything } x
then the interpreter recognizes those as special and does update in place unless it can't. There are a number of other specials recognized by the interpreter, like:
199(1000&|@^)199
That combined operation is modular exponentiation. It never calculates the whole exponentiation, as
199(1000&|^)199
would; that one just ends up as _ (infinity) without the @.
So it is worth reading the article on specials. I will mark someone else's answer up.
The link that sverre provided above ( http://www.jsoftware.com/jwiki/Essays/In-Place%20Operations ) shows the various operations that support modifying an existing array rather than creating a new one. They include:
myarray=: myarray,'blah'
If you are interested in a tacit version of the lookandsay sequence see this submission to RosettaCode:
las=: ,#((# , {.);.1~ 1 , 2 ~:/\ ])&.(10x&#.inv)#]^:(1+i.#[)
5 las 1
11 21 1211 111221 312211
