How does this Java code work?
/** Move A[A.length-1] to the first position, k, in A such that there
 *  are no smaller elements after it, moving all elements
 *  A[k .. A.length-2] over to A[k+1 .. A.length-1]. */
static void moveOver (int A[]) {
    moveOver (A, A.length-1);
}
/** Move A[U] to the first position, k<=U, in A such that there
 *  are no smaller elements after it, moving all elements
 *  A[k .. U-1] over to A[k+1 .. U]. */
static void moveOver (int A[], int U) {
    if (U > 0) {
        if (A[U-1] > A[U]) {
            /* Swap A[U], A[U-1] */
            moveOver (A, U-1);
        }
    }
}
I got this from a Berkeley CS class I am going through online, teaching myself. It is not homework (I wish it was, but I'm not that fortunate). What I don't understand is the following:
Suppose the numbers in A[] are 8, 2, 10, 5, 4, 12. When I trace the code with them, I get these iterations:
1. The uppermost subscript is U, in this case the 12; the number at U-1 is 4, so no swap is done.
2. U is now 4 (the recursive U-1) and the number above it is 5 (the other U-1); they get swapped.
3. U is now 4 again, because the 4 just moved up, and 10 is at U-1; they get swapped.
My sequence is now 8, 2, 4, 10, 5, 12.
My question is: how do I get back to the numbers I already passed? How will I get the 5, for example, to move up if I never return to that subscript to test against it?
I don't think I am tracing the program correctly and may be getting confused by the recursion. For the sake of this question, please assume the swap is done correctly.
Thank you.
I think the key to your misunderstanding of the problem is actually hidden in the title of your question:
Needing help understanding insertion sort
The algorithm indeed only sorts the current element, but the idea is that it's run every time an element is added to the array. That way, each time it's called, the rest of the array is already in order. In other words, you are only trying to sort that last element into position.
So, using your example numbers (8, 2, 10, 5, 4, 12) and adding/sorting them into the array one at a time, in order, the sequence would be as follows (the sorting at each step happens exactly as you already describe):
To be added          | old array    | after push      | Result (after moveOver())
(8, 2, 10, 5, 4, 12) | []           | [8]             | [8]
(2, 10, 5, 4, 12)    | [8]          | [8,2]           | [2,8]
(10, 5, 4, 12)       | [2,8]        | [2,8,10]        | [2,8,10]
(5, 4, 12)           | [2,8,10]     | [2,8,10,5]      | [2,5,8,10]
(4, 12)              | [2,5,8,10]   | [2,5,8,10,4]    | [2,4,5,8,10]
(12)                 | [2,4,5,8,10] | [2,4,5,8,10,12] | [2,4,5,8,10,12]
As to your trace: you are going wrong at point 2. At point 1, A[U-1] = 4 is not greater than A[U] = 12, so the if condition does not hold, no recursive call is ever made, and the method simply returns.
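To make that incremental use concrete, here is a minimal sketch (the insertionSort driver is my illustration, not part of the course code, and it assumes the swap inside moveOver is actually implemented): calling moveOver on ever-longer prefixes sorts the whole array, one inserted element at a time.

    static void insertionSort(int A[]) {
        // Invariant: before each call, A[0 .. i-1] is already sorted,
        // so moveOver only has to slide A[i] left into its place.
        for (int i = 1; i < A.length; i++) {
            moveOver(A, i);
        }
    }

Running it on {8, 2, 10, 5, 4, 12} reproduces the right-hand column of the table above, one row per iteration.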
If the recursion confuses you, it may help to rewrite into an iterative form:
static void moveOver (int A[], int U) {
    while (U > 0 && A[U-1] > A[U]) {
        /* Swap A[U], A[U-1] */
        int temp = A[U];
        A[U] = A[U-1];
        A[U-1] = temp;
        --U;
    }
}
Now it's easy to see that the loop stops as soon as it reaches an element that is not larger than the one being moved. For this to be a complete sorting algorithm, more work is needed. Either way, this is not insertion sort; it looks more like a partial bubble sort to me.
Now where did you say you got this code from?
I'm using ndarray. This playground snippet says it all: I want to element-wise multiply a view of one column of a matrix with a 1-D array, and I can't figure out a combination of views and casts and whatnot that'll make it work.
#![allow(unused)]
use ndarray::{Array1, Array2, Axis};

fn main() {
    let bob = Array1::from(vec![1.2, 3.3, 4.]);
    let ralph = Array2::from(vec![[3.3, 1.0, -2.0], [4., 5., 8.], [-9., 2., 1.]]);
    println!("{:?}", ralph.index_axis(Axis(1), 0) * bob);
}
Compiling this gives the error:
error[E0369]: cannot multiply `ArrayBase<ViewRepr<&{float}>, Dim<[usize; 1]>>` by `ArrayBase<OwnedRepr<{float}>, Dim<[usize; 1]>>`
--> src/lib.rs:8:51
|
8 | println!("{:?}", ralph.index_axis(Axis(1), 0) * bob);
| ---------------------------- ^ --- ArrayBase<OwnedRepr<{float}>, Dim<[usize; 1]>>
| |
| ArrayBase<ViewRepr<&{float}>, Dim<[usize; 1]>>
Is there a magic finger-ring combination that'll make it do what I want, or do I need to do it by hand?
Adding a & to both operands of the multiplication will prevent either value from being consumed by the multiplication:
&ralph.index_axis(Axis(1), 0) * &bob
The following link from the ndarray docs explains allocating a new array vs. consuming an array during binary operations: https://docs.rs/ndarray/0.14.0/ndarray/struct.ArrayBase.html#binary-operators-with-two-arrays
I think that because the result of index_axis() is a view into another array, it can't be consumed, hence the error.
Swapping the order of the operands also fixes it:
#![allow(unused)]
use ndarray::{Array1, Array2, Axis};

fn main() {
    let bob = Array1::from(vec![1.2, 3.3, 4.]);
    let ralph = Array2::from(vec![[3.3, 1.0, -2.0], [4., 5., 8.], [-9., 2., 1.]]);
    println!("{:?}", bob * ralph.index_axis(Axis(1), 0));
}
Below is an example of quicksort. I was wondering how the two recursive method calls inside the quicksort method work, i.e. in what sequence they execute. To check the sequence, I placed a System.out.println after each call (see the output below). My doubt is: why this sequence? Does it depend on any conditions? If so, which ones? It would be helpful if you explained the logic in detail.
Thank you in advance :)
void quicksort(int a[], int p, int r)
{
    if (p < r)
    {
        int q;
        q = partition(a, p, r);
        System.out.println("q:" + q);
        quicksort(a, p, q);
        System.out.println("1");
        quicksort(a, q + 1, r);
        System.out.println("2");
    }
}
int partition(int a[], int p, int r)
{
    System.out.println("p:" + p + " r:" + r);
    int i, j, pivot, temp;
    pivot = a[p];
    i = p;
    j = r;
    while (true)
    {
        while (a[i] < pivot && a[i] != pivot)
            i++;
        while (a[j] > pivot && a[j] != pivot)
            j--;
        if (i < j)
        {
            temp = a[i];
            a[i] = a[j];
            a[j] = temp;
        }
        else
        {
            return j;
        }
    }
}
Output
p:0 r:7
q:4
p:0 r:4
q:0
1
p:1 r:4
q:1
1
p:2 r:4
q:3
p:2 r:3
q:2
1
2
1
2
2
2
1
p:5 r:7
q:7
p:5 r:7
q:6
p:5 r:6
q:5
1
2
1
2
1
2
2
I would like to know why there is a gap between the method calls, i.e. how do the println statements (placed after the method calls) get executed without a method call being executed in between?
Yes, it depends on conditions: specifically, the values of p and r on each call. Each instance of the sort will do the two calls in order: none of the execution branches will get to the second call until the first call of that branch is completely done.
You will get a much nicer trace if you put a println at the top of the function that displays the parameter values. You might want to place one after you compute the value of q, as well. Try that, and see whether it tells you the rest of the story.
Okay, you've done the printing ... and you don't see the reason for that gap? When you get to the output line "q:2", you have five calls to quicksort on the stack, and the only progress through the printing sequence is that you've made it past the "1" print for two of them (they are already in their second call). Your current stack looks like this, in terms of p and r:
2, 3
2, 4
1, 4
0, 4
0, 7
The innermost call is working on the right half of the left half (the second quarter) of the array, four levels below the initial call. You now have to finish off those calls, which produces a "1" print for each frame that is still in its first (left-half) call and a "2" print for each frame as it finishes, until the top-level call finally moves on to its right half (p:5 r:7).
Looking at it another way, you work to partition the array down to single elements. While you're doing this, you stack up calls for smaller and smaller partitions. Any call with at least two elements has to finish off both of its partitions before it returns and its caller can print anything.
Once you get to a single-element array, you return right away, and get to print the next "1" or "2". If the other partition is also fully atomized, then you get to the next "1" or "2" without any more partitioning calls.
Halfway through, you get to the point where you've fully atomized the left half of the array; that's when you clear out all the outstanding processing, back up the stack, and do all of that printing. Then you recur down the right half, and get a similar effect.
I think you might have an easier time understanding this if you gave yourself a full trace. Either follow it in your debugger, or modify and add print statements so that (1) you have unique, easily-read output for every line, rather than 8 lines each of "1" and "2" that you can't tell apart, and (2) you can also trace the return from each routine. The objective here is to be able to recognize where you are in the process at each output line.
Yes, it's another programming problem to be able to print out things such as
1.L.R.R
1.1.2
(0,4) L
...
... or whatever format you find readable.
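For example, here is a hedged sketch of such a trace (quicksortTraced and its depth parameter are my additions, not your code; partition is the method from your question). The indentation makes it obvious which call each "1" or "2" belongs to:

    void quicksortTraced(int a[], int p, int r, int depth) {
        // Build an indent proportional to the recursion depth, purely for the trace.
        String indent = "";
        for (int d = 0; d < depth; d++) {
            indent += "  ";
        }
        System.out.println(indent + "enter (p=" + p + ", r=" + r + ")");
        if (p < r) {
            int q = partition(a, p, r);               // the partition method from the question
            System.out.println(indent + "q=" + q);
            quicksortTraced(a, p, q, depth + 1);      // left half: runs to completion first
            System.out.println(indent + "1");
            quicksortTraced(a, q + 1, r, depth + 1);  // right half: only starts afterwards
            System.out.println(indent + "2");
        }
        System.out.println(indent + "leave (p=" + p + ", r=" + r + ")");
    }

Start it with quicksortTraced(a, 0, a.length - 1, 0); every "1" and "2" then lines up under the "enter" line of the call that printed it.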
I think it's a stupid question, but answering it will clear up my confusion.
If you just look at this code:
void printInOrder() {
    printPrivateInOrder(root);
}

void printPrivateInOrder(Node* n) {
    if (root != NULL) {
        if (n->left != NULL) {
            printPrivateInOrder(n->left);
        }
        cout << n->val << " ";
        if (n->right != NULL) {
            printPrivateInOrder(n->right);
        }
    }
    else {
        cout << "Tree is Empty\n";
    }
}
In this traversal, if we go to the extreme left child, then how is this function called again? For example, see the picture:
BST Example
Say we have moved to node 4; how is this function called again? If both children are null, I am not calling the function again, yet it is called again and prints all the nodes in in-order traversal. How?
When you recurse down to the next level, that basically involves taking a snapshot of exactly where you are, then going off to do something else. Once that "something else" is complete, you return to your snapshot and carry on.
It's very similar to calling non-recursive functions. When a function calls xyzzy(), it knows exactly where to carry on from when the call returns. Recursive functions are identical except that they're all passing through the same pieces of code on the way down and back up.
So, when you come back up a level (having processed the node on the left, for example), you will then print the current node, then go down the right side of the sub-tree.
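If it helps to see where execution resumes, here is a minimal Java sketch of the same in-order traversal (your code is C++, but the mechanics are identical; the Node class here is just an illustration), with comments marking exactly where a call picks up again:

    class Node {
        int val;
        Node left, right;
        Node(int val) { this.val = val; }
    }

    static void inOrder(Node n) {
        if (n == null) {
            return;
        }
        inOrder(n.left);               // "snapshot" taken here; the whole left subtree is processed
        // ...execution resumes right here once the left subtree is finished
        System.out.print(n.val + " ");
        inOrder(n.right);              // another snapshot; the whole right subtree is processed
        // ...and resumes here, after which this call returns to its own caller
    }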
Consider the sample tree:
  2
 / \
1   4
   / \
  3   5
       \
        6
To process this tree, for each node (starting at two), you process the left node, print the current node value, then process the right node.
However, you need to understand that "process the left/right node" is the entire "process left, print current, process right" set of steps over again on one of the children. In this sense, there is no difference between processing the root node and processing any other node.
The "processing" is the printing out, in order, of all nodes under a given point (including that point). It's just a happy effect that if you start at the root node, you get the entire tree :-)
So, in terms of what's actually happening, it's basically following the recursive path:
2, has a left node 1, process it:
| 1, has no left node.
> | 1, print 1.
| 1, has no right node.
| 1, done.
> 2, print 2.
2, has a right node 4, process it.
| 4, has a left node 3, process it.
| | 3, has no left node.
> | | 3, print 3.
| | 3, has no right node.
| | 3, done.
> | 4, print 4.
| 4, has a right node 5, process it.
| | 5, has no left node.
> | | 5, print 5.
| | 5, has a right node 6, process it.
| | | 6, has no left node.
> | | | 6, print 6.
| | | 6, has no right node.
| | | 6, done.
| | 5, done.
| 4, done.
2, done.
If you examine each of the printing lines (see the > markers), you'll see they come out in the desired order.
I don't quite understand this piece of code. So if, for example, n = 5 and we have:
array[5] = {13, 27, 78, 42, 69}
would someone please explain?
All I understand is that if n = 1, that element is the lowest.
But when n = 5, would we take the element at index 4, compare it to the element at index 3, keep whichever is smaller, then compare that to the element at index 2, and so on? I am confused.
int min(int a, int b)
{
    return (a < b) ? a : b;
}

// Recursively find the minimum element in an array, n is the length of the
// array, which you assume is at least 1.
int find_min(int *array, int n)
{
    if (n == 1)
        return array[0];

    return min(array[n - 1], find_min(array, n - 1));
}
Given your array:
1. initial call: find_min(array, 5)
   n != 1, therefore the if() doesn't trigger
2.   return(min(array[4], find_min(array, 4)))
     n != 1, therefore the if() doesn't trigger
3.     return(min(array[3], find_min(array, 3)))
       n != 1, therefore the if() doesn't trigger
4.       return(min(array[2], find_min(array, 2)))
         n != 1, therefore the if() doesn't trigger
5.         return(min(array[1], find_min(array, 1)))
           n == 1, so return array[0]
5.         return(min(array[1], array[0]))
           return(min(27, 13))
           return(13)
4.       return(min(array[2], 13))
         etc...
It's quite simple. Run through the code using the example you gave.
On the first run through find_min(), it will return the minimum of the last element in the array (69) and the minimum of the rest of the array. To calculate the minimum of the rest of the array, it calls itself, i.e. it is recursive. This 2nd-level call will compare the number 42 (the new "last" element) with the minimum from the rest of the array, and so on. The final call to find_min() will have n=1 with the array "{13}", so it will return 13. The layer that called it will compare 13 with 27 and find that 13 is less so it will return it, and so on back up the chain.
Note: I assume the backward quotes in your code are not supposed to be there.
The solution uses recursion to compute the minimum of the smallest possible comparison set and then compares that result with the next element of the larger set. Each recursive call returns a result that is compared against the next element, working backward, until the minimum value bubbles up to the top. Recursion appears tricky at first, but it can be quite effective once you get familiar with it.
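To see those calls and returns explicitly, here is a hedged Java sketch of the same recursion with trace output added (findMinTraced, the depth parameter, and the printed strings are illustrative additions, not part of the original C code):

    static int findMinTraced(int[] array, int n, int depth) {
        // Indent the trace by the recursion depth so calls and returns line up.
        String indent = "";
        for (int d = 0; d < depth; d++) {
            indent += "  ";
        }
        System.out.println(indent + "findMinTraced(n=" + n + ")");
        if (n == 1) {
            System.out.println(indent + "base case, return " + array[0]);
            return array[0];
        }
        int restMin = findMinTraced(array, n - 1, depth + 1);  // minimum of the first n-1 elements
        int result = Math.min(array[n - 1], restMin);          // compare it with the last element
        System.out.println(indent + "min(" + array[n - 1] + ", " + restMin + ") = " + result);
        return result;
    }

Calling findMinTraced(new int[]{13, 27, 78, 42, 69}, 5, 0) prints the descent first and then the comparisons in reverse order, ending with 13.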
I have been practicing Java 8 streams and functional style for a while.
Sometimes I try to solve programming puzzles using only streams, and during this time I have found a class of tasks which I don't know how to solve with streams, only with the classical approach.
One example of this kind of task is:
Given an array of numbers, find the index of the element that makes the sum of the elements up to and including it go below zero.
E.g. for the array [1, 2, 3, -1, 3, -10, 9] the answer is 5.
My first idea was to use IntStream.range(0, arr.length)... but then I don't know how to accumulate values and keep track of the index at the same time.
So my questions are:
Is it possible to somehow accumulate a value over a stream and then make a conditional exit?
What about parallel execution then? It doesn't fit this problem of finding an index, where we need to be aware of the order of the elements.
I doubt your task is well suited for streams. What you are looking for is a typical scan-left operation, which is by nature sequential.
For instance, imagine the following elements in the pipeline: [1, 2, -4, 5]. A parallel execution may split it into two subparts, namely [1, 2] and [-4, 5]. Then what would you do with them? You cannot sum them independently, because that yields [3] and [1], and you lose the fact that 1 + 2 - 4 < 0 along the way.
So even if you write a collector that keeps track of the index and the sum, it won't be able to perform well in parallel (I doubt you could even benefit from parallelism here), but you can imagine such a collector for sequential use:
public static Collector<Integer, ?, Integer> indexSumLeft(int limit) {
    return Collector.of(
        () -> new int[]{-1, 0, 0},
        (arr, elem) -> {
            if (arr[2] == 0) {
                arr[1] += elem;
                arr[0]++;
            }
            if (arr[1] < limit) {
                arr[2] = 1;
            }
        },
        (arr1, arr2) -> { throw new UnsupportedOperationException("Cannot run in parallel"); },
        arr -> arr[0]
    );
}
and a simple usage:
int index = IntStream.of(arr).boxed().collect(indexSumLeft(0));
This will still traverse all the elements of the pipeline, so not very efficient.
Also you might consider using Arrays.parallelPrefix if the data-source is an array. Just compute the partial sums over it and then use a stream to find the first index where the sum is below the limit.
Arrays.parallelPrefix(arr, Integer::sum);
int index = IntStream.range(0, arr.length)
                     .filter(i -> arr[i] < limit)
                     .findFirst()
                     .orElse(-1);
Here also all the partial sums are computed (but in parallel).
In short, I would use a simple for-loop.
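For reference, a minimal sketch of that loop (the method name is mine):

    // Returns the index of the element at which the running sum first drops
    // below zero, or -1 if it never does.
    static int indexSumBelowZero(int[] arr) {
        int sum = 0;
        for (int i = 0; i < arr.length; i++) {
            sum += arr[i];
            if (sum < 0) {
                return i;   // stops immediately, unlike the stream versions above
            }
        }
        return -1;
    }

For [1, 2, 3, -1, 3, -10, 9] this returns 5, and it stops as soon as the sum goes negative.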
I can propose a solution using my StreamEx library (which provides additional functions on top of the Stream API), but I would not be very happy with such a solution:
int[] input = {1, 2, 3, -1, 3, -10, 9};
System.out.println(IntStreamEx.of(
        IntStreamEx.of(input).scanLeft(Integer::sum)).indexOf(x -> x < 0));
// prints OptionalLong[5]
It uses the IntStreamEx.scanLeft operation to compute the array of prefix sums, then searches over this array using the IntStreamEx.indexOf operation. While indexOf is short-circuiting, the scanLeft operation processes the whole input and creates an intermediate array of the same length as the input, which is completely unnecessary when solving the same problem in imperative style.
With the new headTail method in my StreamEx library, it's possible to create a lazy solution which works well for very long or infinite streams. First, we can define a new intermediate scanLeft operation:
public static <T> StreamEx<T> scanLeft(StreamEx<T> input, BinaryOperator<T> operator) {
    return input.headTail((head, tail) ->
            scanLeft(tail.mapFirst(cur -> operator.apply(head, cur)), operator)
                .prepend(head));
}
This defines a lazy scanLeft using headTail: it applies the given function to the head and the first element of the tail stream, then prepends the head. Now you can use this scanLeft:
scanLeft(StreamEx.of(1, 2, 3, -1, 3, -10, 9), Integer::sum).indexOf(x -> x < 0);
The same can be applied to an infinite stream (e.g. a stream of random numbers):
StreamEx<Integer> ints = IntStreamEx.of(new Random(), -100, 100)
                                    .peek(System.out::println)
                                    .boxed();
long idx = scanLeft(ints, Integer::sum).indexOf(x -> x < 0).getAsLong();
This will run till the cumulative sum becomes negative and returns the index of the corresponding element.