I need to create a binary search tree in the following (strange) way:
I am given an array (A[n]). A[1] becomes the root of the tree.
Then I insert A[1]+A[2] as the left child of the root (call it subtree1, used below) and A[1]-A[2] as its right child (subtree2).
Next I insert A[1]+A[2]+A[3] as the left child of subtree1 (subtree3) and A[1]+A[2]-A[3] as its right child (subtree4).
Then I insert A[1]-A[2]+A[3] as the left child of subtree2 (subtree5) and A[1]-A[2]-A[3] as its right child (subtree6).
I repeat for subtree3, subtree4, subtree5, subtree6 until I reach the end of the array.
So, basically, the first element of the array becomes the root of the tree and then I move down: every left child has as its value the sum of its parent's value and the next element of the array, and every right child has as its value the difference of its parent's value and the next element of the array. For example, with A = [5, 3, 2], the root is 5, its children are 5+3 = 8 and 5-3 = 2, and the next level holds 8+2 = 10, 8-2 = 6, 2+2 = 4 and 2-2 = 0.
I understand I need to use the concept of recursion, but in a modified way. Typing my problem out here and trying to explain it to someone other than myself actually made me frame it in a way that gave me some ideas to try, but I suspect the problem I am dealing with is a common one, so maybe you could give me some pointers on how to use recursion to build the tree.
Looking around at other questions and discussions, I understand there is a policy against asking for whole solutions, so I want to make it clear that I am not asking for the solution but for guidance towards it. If someone would like to have a look, I can show you what I've already done.
The way to do recursion is to always assume you already have a working function in hand. So let's see [using Java syntax]...
Node buildTree(int currentSum, int[] array, int index, boolean sign);
Suppose that works. Then what do you need to do to build the tree at index i?
// the array value to incorporate at this level
int curValue = array[index];
// depending on sign, it may be negative
if (!sign) {
curValue *= -1;
}
// add it to the running total
int nodeValue = currentSum + curValue;
Node nd = new Node(nodeValue);
nd.left = buildTree(nodeValue, array, index + 1, true);
nd.right = buildTree(nodeValue, array, index + 1, false);
That's basically it. You need to take care of the edge cases: index == array.length, creation of the very first node, and the like.
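To make that concrete, here is a minimal sketch of how those edge cases could be wired up, assuming a simple Node class with value/left/right fields; the buildFromArray wrapper and its names are illustrative, not part of the answer above.

class Node {
    int value;
    Node left, right;
    Node(int value) { this.value = value; }
}

// Returns the subtree whose root holds currentSum plus or minus array[index],
// or null once the array is exhausted (the index == array.length edge case).
Node buildTree(int currentSum, int[] array, int index, boolean sign) {
    if (index == array.length) {
        return null;
    }
    int curValue = array[index];
    if (!sign) {            // right children subtract the next element
        curValue *= -1;
    }
    int nodeValue = currentSum + curValue;
    Node nd = new Node(nodeValue);
    nd.left = buildTree(nodeValue, array, index + 1, true);
    nd.right = buildTree(nodeValue, array, index + 1, false);
    return nd;
}

// The very first node is just array[0] with no sign applied; after that
// the recursion above takes over.
Node buildFromArray(int[] array) {
    if (array.length == 0) {
        return null;
    }
    Node root = new Node(array[0]);
    root.left = buildTree(array[0], array, 1, true);
    root.right = buildTree(array[0], array, 1, false);
    return root;
}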
I'm working on a practice program for doing belief propagation stereo vision. The relevant aspect here is that I have a fairly long array representing every pixel in an image, and I want to carry out an operation on every second entry in the array at each iteration of a for loop: first one half of the entries, and then at the next iteration the other half (this comes from an optimisation described by Felzenszwalb & Huttenlocher in their 2006 paper 'Efficient belief propagation for early vision'). So you could see it as having an outer for loop which runs a number of times, and for each iteration of that loop I iterate over half of the entries in the array.
I would like to parallelise the operation of iterating over the array like this, since I believe it would be thread-safe to do so, and of course potentially faster. The operation involved updates values inside the data structures representing the neighbouring pixels, which are not themselves used in a given iteration of the outer loop. Originally I just iterated over the entire array in one go, which meant that it was fairly trivial to carry this out - all I needed to do was put .Parallel between Array and .iteri. Changing to operating on every second array entry is trickier, however.
To make the change from simply iterating over every entry, I switched from Array.iteri (fun i p -> ... to using for i in startIndex..2..(ArrayLength - 1) do, where startIndex is either 1 or 0 depending on which one I used last (controlled by toggling a boolean). This means, though, that I can't simply use the really nice .Parallel to make things run in parallel.
I haven't been able to find anything specific about how to implement a parallel for loop in .NET which has a step size greater than 1. The best I could find was a paragraph in an old MSDN document on parallel programming in .NET, but that paragraph only makes a vague statement about transforming an index inside a loop body. I do not understand what is meant there.
I looked at Parallel.For and Parallel.ForEach, as well as creating a custom partitioner, but none of those seemed to include options for changing the step size.
The other option that occurred to me was to use a sequence expression such as
let getOddOrEvenArrayEntries myarray oddOrEven =
    seq {
        let startingIndex =
            if oddOrEven then
                1
            else
                0
        for i in startingIndex .. 2 .. (Array.length myarray - 1) do
            yield (i, myarray.[i])
    }
and then using PSeq.iteri from ParallelSeq, but I'm not sure whether it will work correctly with .NET Core 2.2. (Note that, currently at least, I need to know the index of the given element in the array, as it is used as the index into another array during the processing).
How can I go about iterating over every second element of an array in parallel? I.e. iterating over an array using a step size greater than 1?
You could try PSeq.mapi which provides not only a sequence item as a parameter but also the index of an item.
Here's a small example
let nums = [| 1; 2; 3; 4; 5 |]   // example input; PSeq comes from the FSharp.Collections.ParallelSeq package
let res =
    nums
    |> PSeq.mapi (fun index item -> if index % 2 = 0 then item else item + 1)
You can also have a look at this sampling snippet; just be sure to substitute Seq with PSeq.
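As a side note on the "transforming an index inside a loop body" hint the question quotes from the MSDN material: the idea is to parallelise over a dense range 0..count-1 and compute the real, strided index inside the body. A minimal sketch of that idea follows, written in Java only because the pattern is language-agnostic; the data, startIndex and process names are placeholders of mine.

import java.util.stream.IntStream;

public class StridedParallel {
    // Hypothetical per-element operation; stands in for the belief-propagation update.
    static void process(double[] data, int index) {
        data[index] *= 2.0;
    }

    // Visit every second element starting at startIndex, in parallel,
    // by iterating a dense range and transforming the loop index in the body.
    static void forEveryOther(double[] data, int startIndex) {
        int count = (data.length - startIndex + 1) / 2;   // number of strided positions
        IntStream.range(0, count)
                 .parallel()
                 .forEach(i -> {
                     int idx = startIndex + 2 * i;        // the index transformation
                     process(data, idx);
                 });
    }
}

The same transformation carries over to Parallel.For or PSeq: the loop variable always steps by one, and the stride lives entirely in the loop body.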
I'm working with a polymorphic binary search tree with the following standard type definition:
type tree =
    Empty
  | Node of int * tree * tree  (* value, left subtree, right subtree *)
;;
I want to do an in order traversal of this tree and add the values to a list, let's say. I tried this:
let rec in_order tree =
  match tree with
    Empty -> []
  | Node(v,l,r) -> let empty = [] in
      in_order r @ empty;
      v::empty;
      in_order l @ empty
;;
But it keeps returning an empty list every time. I don't see why it is doing that.
When you're working with recursion you need to always reason as follows:
How do I solve the easiest version of the problem?
Supposing I have a solution to an easier problem, how can I modify it to solve a harder problem?
You've done the first part correctly, but the second part is a mess.
Part of the problem is that you've not implemented the thing you said you want to implement. You said you want to do a traversal and add the values to a list. OK, so then the function should take a list somewhere: the list you are adding to. But it doesn't. So let's suppose it does take such a parameter and see if that helps. Such a list is traditionally called an accumulator, for reasons which will become obvious.
As always, get the signature right first:
let rec in_order tree accumulator =
OK, what's the easy solution? If the tree is empty then adding the tree contents to the accumulator is simply the identity:
match tree with
| Empty -> accumulator
Now, what's the recursive case? We suppose that we have a solution to some smaller problems. For instance, we have a solution to the problem of "add everything on one side to the accumulator with the value":
| Node (value, left, right) ->
let acc_with_right = in_order right accumulator in
let acc_with_value = value :: acc_with_right in
OK, we now have the accumulator with all the elements from one side added. We can then use that to add to it all the elements from the other side:
in_order left acc_with_value
And now we can make the whole thing implement the function you tried to write in the first place:
let in_order tree =
  let rec aux tree accumulator =
    match tree with
    | Empty -> accumulator
    | Node (value, left, right) ->
      let acc_with_right = aux right accumulator in
      let acc_with_value = value :: acc_with_right in
      aux left acc_with_value in
  aux tree []
And we're done.
Does that all make sense? You have to (1) actually implement the exact thing you say you're going to implement, (2) solve the base case, and (3) assume you can solve smaller problems and combine them into solutions to larger problems. That's the pattern you use for all recursive problem solving.
I think your problem boils down to this. The @ operator returns a new list that is the concatenation of two other lists. It doesn't modify the other lists. In fact, nothing ever modifies a list in OCaml. Lists are immutable.
So, this expression:
r @ empty
Has no effect on the value named empty. It will remain an empty list. In fact, the value empty can never be changed either. Variables in OCaml are also immutable.
You need to imagine constructing and returning your value without modifying lists or variables.
When you figure it out, it won't involve the ; operator. What this operator does is to evaluate two expressions (to the left and right), then return the value of the expression at the right. It doesn't combine values, it performs an action and discards its result. As such, it's not useful when working with lists. (It is used for imperative constructs, like printing values.)
If you thought about using @ where you're now using ;, you'd be a lot closer to a solution.
I want to understand recursion.
I understand the simple examples with math, but I'm not sure I grasp the essence of it.
Here is one example where I don't understand how it works:
TREE-ROOT-INSERT(x, z)
  if x = NIL
    return z
  if z.key < x.key
    x.left = TREE-ROOT-INSERT(x.left, z)
    return RIGHT-ROTATE(x)
  else
    x.right = TREE-ROOT-INSERT(x.right, z)
    return LEFT-ROTATE(x)
I know what this code does:
First it inserts a node into a BST and then rotates at each step so that the new node becomes the root.
But in my mind, analysing the code, I would assume that it inserts the node where it has to go and then rotates the tree JUST ONE TIME.
How is it possible that the tree is rotated every time?
You need to maintain your place in the recursive call for each level of the tree. When you hit return RIGHT-ROTATE (or left) for the first time, you're not completely done; you take the tree that is the result of the ROTATE function, and place it in the code where the recursive TREE-ROOT-INSERT call was one level higher in the stack. You then rotate again, and return the current tree one level higher up in the stack, until you've hit the original root of the tree.
What is important for understanding recursion is to think of the recursive function as an abstract black box. In other words, when reading or reasoning about a recursive function, you should focus on the current invocation, treat the recursive call as atomic (something you do not step into), assume it does what it is supposed to do, and see how its result can be used to solve the current invocation.
You already know the contract of your TREE-ROOT-INSERT(x, z):
insert z into a binary search tree rooted at x, transform the tree so that z will become the new root.
let's look at this snippet:
if z.key < x.key
x.left = TREE-ROOT-INSERT(x.left, z)
return RIGHT-ROTATE(x)
This says z is less than x, so it goes into the left sub-tree (because it is a BST). TREE-ROOT-INSERT is invoked again, but we won't follow it; instead we just assume it does what it is meant to do: it will insert z into the tree rooted at x.left and make z the new root of that sub-tree. Then you will get a tree with the structure below:
        x
       / \
      z  ...
     / \
   ...  ...
Again, you don't know how exactly calling TREE-ROOT-INSERT(x.left, z) gets you the z-rooted sub-tree. At this moment you don't care, because the really important part is what follows: how do you make this entire tree rooted at z? The answer is RIGHT-ROTATE(x).
But in my mind, analysing the code, I would assume that it inserts the node where it has to go and then rotates the tree JUST ONE TIME.
How is it possible that the tree is rotated every time?
If I understand you correctly, you are still thinking about how to solve the problem in a non-recursive way. It is true that you can insert z into the BST rooted at x using the standard BST insertion procedure. That will put z in the correct position. However, to bring z to the root from that position, you need more than one rotation.
In the recursive version, rotation is required to bring z to the root after you get a z-rooted sub-tree. But to get the z-rooted sub-tree from the original x.left rooted sub-tree, you need a rotation as well. Rotation is called many times, but on different sub-trees.
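To see the "many rotations on different sub-trees" point in actual code, here is a sketch of the pseudocode in Java, assuming a minimal Node class with key/left/right fields; the rotation helpers are the standard BST rotations, which the pseudocode only names as RIGHT-ROTATE and LEFT-ROTATE.

class Node {
    int key;
    Node left, right;
    Node(int key) { this.key = key; }
}

class RootInsert {
    // Rotate the subtree rooted at x to the right: x.left becomes the new root.
    static Node rightRotate(Node x) {
        Node y = x.left;
        x.left = y.right;
        y.right = x;
        return y;
    }

    // Rotate the subtree rooted at x to the left: x.right becomes the new root.
    static Node leftRotate(Node x) {
        Node y = x.right;
        x.right = y.left;
        y.left = x;
        return y;
    }

    // Insert z into the subtree rooted at x and rotate so z ends up as its root.
    static Node treeRootInsert(Node x, Node z) {
        if (x == null) return z;
        if (z.key < x.key) {
            x.left = treeRootInsert(x.left, z);
            return rightRotate(x);   // z is now the root of x.left; bring it up one level
        } else {
            x.right = treeRootInsert(x.right, z);
            return leftRotate(x);
        }
    }
}

Every level of the recursion performs exactly one rotation on its own sub-tree as the stack unwinds, which is how z climbs from its insertion point all the way up to the root.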
Please read my question before you report it as a duplicate. In the literature, the common approach to finding the minimum height of a tree is as follows:
int minDepth(TreeNode root) {
    if (root == null) { return 0; }
    return 1 + Math.min(minDepth(root.left), minDepth(root.right));
}
However, I think it does not distinguish between a leaf and a node with only one child, and so it returns a wrong value. For example, if our tree looks like this:
A is root
B is the left child of A
C is the right child of B
M is the left child of C
This function returns 1, while the only leaf is 3 hops away from the root, so the minimum height is 4.
Since this recursive version is generally suggested in the literature, I think I am missing something.
Could somebody clear this up for me?
Your comments indicate that the texts where you found this actually use the same definitions for the terms as I do. If that is indeed the case, then the question is not why the algorithm you have shown is correct: under those conditions it simply is not.
Just take the third simplest binary tree, the one consisting of two nodes. It has exactly one leaf, its depth is two and its minimal depth is also two. But the algorithm you quoted returns the value one. So, unless the authors use a different definition for one of the terms (e.g., “minimal height” meaning “shortest path out of the tree”/“shortest path to a null pointer”), the result is simply wrong.
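For reference, here is a sketch of a version that does distinguish a leaf from a node with a single child, under the leaf-based definition the question uses (TreeNode is assumed to have left and right fields, as in the question):

int minDepth(TreeNode root) {
    if (root == null) { return 0; }
    if (root.left == null && root.right == null) { return 1; }      // a leaf
    if (root.left == null)  { return 1 + minDepth(root.right); }    // only a right child: must go that way
    if (root.right == null) { return 1 + minDepth(root.left); }     // only a left child
    return 1 + Math.min(minDepth(root.left), minDepth(root.right)); // two children: take the shorter side
}

On the example tree A → B → C → M this returns 4, because a node with a single child is forced to descend into that child rather than treating the missing side as a path of length zero.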
I'd like to remove an(y) element from an associative array and process it.
Currently I'm using a RedBlackTree together with .removeAny(), but I don't need the data to be in any order. I could use .byKey() on the AA, but that always produces an array with all keys. I only need one at a time and will probably change the AA while processing every other element. Is there any other smart way to get exactly one key without (internally) traversing the whole data structure?
There is a workaround, which works about as well as using .byKey():
auto anyKey(K, V)(inout ref V[K] aa)
{
    foreach (K k, ref inout(V) v; aa)
        return k;
    assert(0, "Associative array hasn't any keys.");
}
For my needs, .byKey().front seems to be fast enough though. Not sure if the workaround is actually faster.