Diffie-Hellman -- Primitive root mod n -- cryptography question - vector

In the snippet below, please explain what is happening and why, starting with the first "for" loop. Why is 0 added in the first loop, and why is 1 added in the second loop? What is going on in the "if" statement under bigi? Finally, please explain the modPow method. Thank you in advance for meaningful replies.
public static boolean isPrimitive(BigInteger m, BigInteger n) {
    BigInteger bigi, vectorint;
    Vector<BigInteger> v = new Vector<BigInteger>(m.intValue());
    int i;

    for (i = 0; i < m.intValue(); i++)
        v.add(new BigInteger("0"));

    for (i = 1; i < m.intValue(); i++) {
        bigi = new BigInteger("" + i);
        if (m.gcd(bigi).intValue() == 1)
            v.setElementAt(new BigInteger("1"), n.modPow(bigi, m).intValue());
    }

    for (i = 0; i < m.intValue(); i++) {
        bigi = new BigInteger("" + i);
        if (m.gcd(bigi).intValue() == 1) {
            vectorint = v.elementAt(bigi.intValue());
            if (vectorint.intValue() == 0)
                i = m.intValue() + 1;
        }
    }

    if (i == m.intValue() + 2)
        return false;
    else
        return true;
}

Treat the vector as a list of booleans, with one boolean for each number from 0 to m-1. When you view it that way, it becomes clear that each value is first set to 0 to initialize it to false, and later set to 1 to mark it as true.
The last for loop tests all of those booleans. If any of them is 0 (indicating false), the function returns false. If all of them are true, the function returns true.
Explaining the if statement you asked about would require explaining what a primitive root mod n is, which is the whole point of the function. I think if your goal is to understand this program, you should first understand what it implements. If you read Wikipedia's article on it, you'll see this in the first paragraph:
In modular arithmetic, a branch of number theory, a primitive root modulo n is any number g with the property that any number coprime to n is congruent to a power of g (mod n). That is, if g is a primitive root (mod n), then for every integer a that has gcd(a, n) = 1, there is an integer k such that g^k ≡ a (mod n). k is called the index of a. That is, g is a generator of the multiplicative group of integers modulo n.
The function modPow implements modular exponentiation: n.modPow(bigi, m) computes n raised to the power bigi, reduced modulo m. Once you understand how to find a primitive root mod n, you'll understand why it is needed here.
Perhaps the final piece of the puzzle for you is to know that two numbers are coprime if their greatest common divisor is 1. And so you see these checks in the algorithm you pasted.
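For reference, here is a short Python sketch of the same test, written directly from that definition rather than mirroring the Java loops (this is my own illustration, not a cleaned-up version of the posted code; pow(n, k, m) is Python's built-in modular exponentiation, the counterpart of BigInteger.modPow):

from math import gcd

def is_primitive_root(n, m):
    """n is a primitive root mod m if every residue coprime to m
    shows up among the powers n^1, n^2, ..., n^(m-1) (mod m)."""
    coprime_residues = {r for r in range(1, m) if gcd(r, m) == 1}
    generated = {pow(n, k, m) for k in range(1, m)}   # pow(n, k, m) == n^k mod m
    return coprime_residues <= generated

print(is_primitive_root(3, 7))  # True:  powers of 3 mod 7 are 3,2,6,4,5,1 -- every residue coprime to 7
print(is_primitive_root(2, 7))  # False: powers of 2 mod 7 only reach {1, 2, 4}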
Bonus link: This paper has some nice background, including how to test for primitive roots near the end.

Related

Recursion and Multi-Argument Functions in z3 in C#

I'm new to z3 and trying to use it to solve logic puzzles. The puzzle type I'm working on, Skyscrapers, includes given constraints on the number of times that a new maximum value is found while reading a series of integers.
For example, if the constraint given was 3, then the series [2,3,1,5,4] would satisfy the constraint as we'd detect the maximums '2', '3', '5'.
I've implemented a recursive solution, but the rule does not apply correctly and the resulting solutions are invalid.
for (int i = 0; i < clues.Length; ++i)
{
    IntExpr clue = c.MkInt(clues[i].count);
    IntExpr[] orderedCells = GetCells(clues[i].x, clues[i].y, clues[i].direction, cells, size);
    IntExpr numCells = c.MkInt(orderedCells.Length);

    ArrayExpr localCells = c.MkArrayConst(string.Format("clue_{0}", i), c.MkIntSort(), c.MkIntSort());
    for (int j = 0; j < orderedCells.Length; ++j)
    {
        c.MkStore(localCells, c.MkInt(j), orderedCells[j]);
    }

    // numSeen counter_i(index, localMax)
    FuncDecl counter = c.MkFuncDecl(String.Format("counter_{0}", i), new Sort[] { c.MkIntSort(), c.MkIntSort() }, c.MkIntSort());
    IntExpr index = c.MkIntConst(String.Format("index_{0}", i));
    IntExpr localMax = c.MkIntConst(String.Format("localMax_{0}", i));

    s.Assert(c.MkForall(new Expr[] { index, localMax }, c.MkImplies(
        c.MkAnd(c.MkAnd(index >= 0, index < numCells), c.MkAnd(localMax >= 0, localMax <= numCells)),
        c.MkEq(c.MkApp(counter, index, localMax),
            c.MkITE(c.MkOr(c.MkGe(index, numCells), c.MkLt(index, c.MkInt(0))),
                c.MkInt(0),
                c.MkITE(c.MkOr(c.MkEq(localMax, c.MkInt(0)), (IntExpr)localCells[index] >= localMax),
                    1 + (IntExpr)c.MkApp(counter, index + 1, (IntExpr)localCells[index]),
                    c.MkApp(counter, index + 1, localMax)))))));

    s.Assert(c.MkEq(clue, c.MkApp(counter, c.MkInt(0), c.MkInt(0))));
Or as an example of how the first assertion is stored:
(forall ((index_3 Int) (localMax_3 Int))
  (let ((a!1 (ite (or (= localMax_3 0) (>= (select clue_3 index_3) localMax_3))
                  (+ 1 (counter_3 (+ index_3 1) (select clue_3 index_3)))
                  (counter_3 (+ index_3 1) localMax_3))))
    (let ((a!2 (= (counter_3 index_3 localMax_3)
                  (ite (or (>= index_3 5) (< index_3 0)) 0 a!1))))
      (=> (and (>= index_3 0) (< index_3 5) (>= localMax_3 0) (<= localMax_3 5))
          a!2))))
From reading questions here, I get the sense that defining functions via Assert should work. However, I didn't see any examples where the function had two arguments. Any ideas what is going wrong? I realize that I could define all primitive assertions and avoid recursion, but I want a general solver not dependent on the size of the puzzle.
Stack Overflow works best if you post entire code segments that can be run independently to debug. Unfortunately, posting only selected parts makes it really difficult for people to understand what the problem might be.
Having said that, I wonder why you are coding this in C/C# to start with? Programming z3 using these lower level interfaces, while certainly possible, is a terrible idea unless you've some other integration requirement. For personal projects and learning purposes, it's much better to use a higher level API. The API you are using is extremely low-level and you end up dealing with API-centric issues instead of your original problem.
In Python
Based on this, I'd strongly recommend using a higher-level API, such as from Python or Haskell. (There are bindings available in many languages; but I think Python and Haskell ones are the easiest to use. But of course, this is my personal bias.)
The "skyscraper" constraint can easily be coded in the Python API as follows:
from z3 import *

def skyscraper(clue, xs):
    # If list is empty, clue has to be 0
    if not xs:
        return clue == 0

    # Otherwise count the visible ones:
    visible = 1  # First one is always visible!
    curMax = xs[0]
    for i in xs[1:]:
        visible = visible + If(i > curMax, 1, 0)
        curMax = If(i > curMax, i, curMax)

    # Clue must equal number of visibles
    return clue == visible
To use this, let's create a row of skyscrapers. We'll make the size based on a constant you can set, which I'll call N:
s = Solver()

N = 5  # configure size
row = [Int("v%d" % i) for i in range(N)]

# Make sure row is distinct and each element is between 1-N
s.add(Distinct(row))
for i in row:
    s.add(And(1 <= i, i <= N))

# Add the clue, let's say we want 3 for this row:
s.add(skyscraper(3, row))

# solve
if s.check() == sat:
    m = s.model()
    print([m[i] for i in row])
else:
    print("Not satisfiable")
When I run this, I get:
[3, 1, 2, 4, 5]
which indeed has 3 skyscrapers visible.
To solve the entire grid, you'd create NxN variables and add all the skyscraper assertions for all rows/columns. This is a bit of coding, but you can see that it's quite high-level and a lot easier to use than the C-encoding you're attempting.
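As a rough sketch of what that full-grid setup could look like (my illustration, reusing the skyscraper function defined above; the size N and the example clue placement are arbitrary):

from z3 import *

N = 4
grid = [[Int("c_%d_%d" % (r, c)) for c in range(N)] for r in range(N)]

s = Solver()

# Every cell holds a height between 1 and N
for r in range(N):
    for c in range(N):
        s.add(And(1 <= grid[r][c], grid[r][c] <= N))

# Heights are distinct within each row and each column
for r in range(N):
    s.add(Distinct(grid[r]))
for c in range(N):
    s.add(Distinct([grid[r][c] for r in range(N)]))

# One example clue: 2 skyscrapers visible looking down the first column
s.add(skyscraper(2, [grid[r][0] for r in range(N)]))

if s.check() == sat:
    m = s.model()
    for r in range(N):
        print([m[grid[r][c]] for c in range(N)])

A real solver would add one skyscraper(...) assertion per clue, passing the row or column cells in the direction the clue looks from.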
In Haskell
For reference, here's the same problem encoded using the Haskell SBV library, which is built on top of z3:
import Data.SBV

skyscraper :: SInteger -> [SInteger] -> SBool
skyscraper clue []     = clue .== 0
skyscraper clue (x:xs) = clue .== visible xs x 1
  where visible []     _      sofar = sofar
        visible (x:xs) curMax sofar = ite (x .> curMax)
                                          (visible xs x (1+sofar))
                                          (visible xs curMax sofar)

row :: Integer -> Integer -> IO SatResult
row clue n = sat $ do xs <- mapM (const free_) [1..n]
                      constrain $ distinct xs
                      constrain $ sAll (`inRange` (1, literal n)) xs
                      constrain $ skyscraper (literal clue) xs
Note that this is even shorter than the Python encoding (about 15 lines of code, as opposed to Python's 30 or so), and if you're familiar with Haskell it reads as quite a natural description of the problem without getting lost in low-level details. When I run this, I get:
*Main> row 3 5
Satisfiable. Model:
s0 = 1 :: Integer
s1 = 4 :: Integer
s2 = 5 :: Integer
s3 = 3 :: Integer
s4 = 2 :: Integer
which tells me the heights should be 1 4 5 3 2, again giving a row with 3 visible skyscrapers.
Summary
Once you're familiar with the Python/Haskell APIs and have a good idea of how to solve your problem, you can code it in C# if you like. I'd advise against it though, unless you have a really good reason to do so. Sticking to Python or Haskell is your best bet for not getting lost in the details of the API.

Asymptotic complexity of log(n) * log(log(n))

I was working through a problem last night where I had to insert into a priority queue n times, so the asymptotic complexity was n log n. However, n could be as large as 10^16, so I had to do better. I found a solution that only requires inserting into the priority queue log n times, with everything else remaining constant time. So the complexity is log(n) * log(log(n)). Is that my asymptotic complexity, or can it be simplified further?
Here is the algorithm. I was able to reduce the complexity by using a hashmap to count the duplicate priorities that would be inserted into the priority queue and doing a single calculation based on that count.
I know it may not be intuitive from my code how the n log n complexity is reduced to log n log log n. I had to walk through examples to see that n was reduced to log n. While solvedUpTo used to increase at the same rate as n, by roughly n <= 20 it took half as many steps to reach the same value of solvedUpTo, by roughly n <= 30 a third as many steps, soon after that a quarter, and so on (all approximate, because I cannot remember the exact numbers).
The code is intentionally left ambiguous to what it is solving:
import java.util.PriorityQueue
import kotlin.math.min

fun solve(n: Long, x: Long, y: Long): Long {
    val numCount = mutableMapOf<Long, Long>()
    val minQue: PriorityQueue<Long> = PriorityQueue<Long>()
    addToQueue(numCount, minQue, x, 1L)
    addToQueue(numCount, minQue, y, 1L)
    var answer = x + y
    var solvedUpTo = 2L
    while (solvedUpTo < n) {
        val next = minQue.poll()
        val nextCount = numCount.remove(next)!!
        val quantityToSolveFor = min(nextCount, n - solvedUpTo)
        answer = (answer + ((next + x + y) * quantityToSolveFor)).rem(1000000007)
        addToQueue(numCount, minQue, next + x, quantityToSolveFor)
        addToQueue(numCount, minQue, next + y, quantityToSolveFor)
        solvedUpTo += quantityToSolveFor
    }
    return answer
}

fun <K> addToQueue(numCount: MutableMap<K, Long>, minQue: PriorityQueue<K>, num: K, incrementBy: Long) {
    if (incrementMapAndCheckIfNew(numCount, num, incrementBy)) {
        minQue.add(num)
    }
}

// Returns true if the key was just added
fun <K> incrementMapAndCheckIfNew(map: MutableMap<K, Long>, key: K, incrementBy: Long): Boolean {
    val prevKey = map.putIfAbsent(key, 0L)
    map[key] = map[key]!! + incrementBy
    return prevKey == null
}
Nope, O(log n log log n) is as simplified as that expression is going to get. You sometimes see runtimes like O(n log n log log n) popping up in number theory contexts, and there aren’t simpler common functions that quantities like these are equivalent to.
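For a sense of scale (my own back-of-the-envelope check, not part of the original answer), at the stated upper bound of n = 10^16 the expression is tiny:

import math

n = 10 ** 16
steps = math.log2(n) * math.log2(math.log2(n))
print(math.log2(n))   # about 53
print(steps)          # about 305 -- versus roughly 5 * 10^17 for n * log2(n)

So while log(n) * log(log(n)) doesn't simplify to a nicer closed form, it is effectively negligible for inputs of this size.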

Generate sequence using previous values

I'm learning functional programming with F#, and I want to write a function that will generate a sequence for me.
There is some predetermined function for transforming a value, and the function I need to write should take two inputs: the starting value and the length of the sequence. The sequence starts with the initial value, and each following item is the result of applying the transforming function to the previous value in the sequence.
In C# I would normally write something like this:
public static IEnumerable<double> GenerateSequence(double startingValue, int n)
{
    double TransformValue(double x) => x * 0.9 + 2;

    yield return startingValue;
    var returnValue = startingValue;
    for (var i = 1; i < n; i++)
    {
        returnValue = TransformValue(returnValue);
        yield return returnValue;
    }
}
When I tried to translate this function to F#, I came up with this:
let GenerateSequence startingValue n =
    let transformValue x =
        x * 0.9 + 2.0
    seq {
        let rec repeatableFunction value n =
            if n = 1 then
                transformValue value
            else
                repeatableFunction (transformValue value) (n-1)

        yield startingValue
        for i in [1..n-1] do
            yield repeatableFunction startingValue i
    }
There are two obvious problems with this implementation.
The first is that because I tried to avoid a mutable value (the analogue of the returnValue variable in the C# implementation), I don't reuse the results of earlier computations while generating the sequence. This means that for the 100th element of the sequence I have to make an additional 99 calls to the transformValue function instead of just one (as in the C# implementation). This reeks of extremely bad performance.
The second is that the whole function does not seem to be written in a functional style. I am pretty sure there is a more elegant and compact implementation. I suspect that Seq.fold or List.fold or something like that should have been used here, but I'm still not able to grasp how to use them effectively.
So the question is: how can I rewrite the GenerateSequence function in F# so that it is in a functional style and has better performance?
Any other advice would also be welcome.
The answer from rmunn shows a rather nice solution using unfold. I think there are two other options worth considering, which are just using a mutable variable and using a recursive sequence expression. The choice is probably a matter of personal preference. The two other options look like this:
let generateSequenceMutable startingValue n = seq {
    let transformValue x = x * 0.9 + 2.0
    let mutable returnValue = startingValue
    for i in 1 .. n do
        yield returnValue
        returnValue <- transformValue returnValue }

let generateSequenceRecursive startingValue n =
    let transformValue x = x * 0.9 + 2.0
    let rec loop value i = seq {
        if i < n then
            yield value
            yield! loop (transformValue value) (i + 1) }
    loop startingValue 0
I modified your logic slightly so that I do not have to yield twice - I just do one more step of the iteration and yield before updating the value. This makes the generateSequenceMutable function quite straightforward and easy to understand. The generateSequenceRecursive implements the same logic using recursion and is also fairly nice, but I find it a bit less clear.
If you wanted to use one of these versions and generate an infinite sequence from which you can then take as many elements as you need, you can just change for to while in the first case or remove the if in the second case:
let generateSequenceMutable startingValue n = seq {
    let transformValue x = x * 0.9 + 2.0
    let mutable returnValue = startingValue
    while true do
        yield returnValue
        returnValue <- transformValue returnValue }

let generateSequenceRecursive startingValue n =
    let transformValue x = x * 0.9 + 2.0
    let rec loop value i = seq {
        yield value
        yield! loop (transformValue value) (i + 1) }
    loop startingValue 0
If I was writing this, I'd probably go either with the mutable variable or with unfold. Mutation may be "generally evil" but in this case, it is a localized mutable variable that is not breaking referential transparency in any way, so I don't think it's harmful.
Your description of the problem was excellent: "Sequence starts with the initial value, and each following item is a result of applying the transforming function to the previous value in the sequence."
That is a perfect description of the Seq.unfold method. It takes two parameters: the initial state and a transformation function, and returns a sequence where each value is calculated from the previous state. There are a few subtleties involved in using Seq.unfold which the rather terse documentation may not explain very well:
Seq.unfold expects the transformation function, which I'll call f from now on, to return an option. It should return None if the sequence should end, or Some (...) if there's another value left in the sequence. You can create infinite sequences this way if you never return None; infinite sequences are perfectly fine since F# evaluates sequences lazily, but you do need to be careful not to ever loop over the entirety of an infinite sequence. :-)
Seq.unfold also expects that if f returns Some (...), it will return not just the next value, but a tuple of the next value and the next state. This is shown in the Fibonacci example in the documentation, where the state is actually a tuple of the current value and the previous value, which will be used to calculate the next value shown. The documentation example doesn't make that very clear, so here's what I think is a better example:
let infiniteFibonacci = (0,1) |> Seq.unfold (fun (a,b) ->
    // a is the value produced *two* iterations ago, b is previous value
    let c = a+b
    Some (c, (b,c))
)

infiniteFibonacci |> Seq.take 5 |> List.ofSeq  // Returns [1; 2; 3; 5; 8]

let fib = seq {
    yield 0
    yield 1
    yield! infiniteFibonacci
}

fib |> Seq.take 7 |> List.ofSeq  // Returns [0; 1; 1; 2; 3; 5; 8]
And to get back to your GenerateSequence question, I would write it like this:
let GenerateSequence startingValue n =
    let transformValue x =
        let result = x * 0.9 + 2.0
        Some (result, result)
    startingValue |> Seq.unfold transformValue |> Seq.take n
Or if you need to include the starting value in the sequence:
let GenerateSequence startingValue n =
    let transformValue x =
        let result = x * 0.9 + 2.0
        Some (result, result)
    let rest = startingValue |> Seq.unfold transformValue |> Seq.take n
    Seq.append (Seq.singleton startingValue) rest
The difference between Seq.fold and Seq.unfold
The easiest way to remember whether you want to use Seq.fold or Seq.unfold is to ask yourself which of these two statements is true:
I have a list (or array, or sequence) of items, and I want to produce a single result value by running a calculation repeatedly on pairs of items in the list. For example, I want to take the product of this whole series of numbers. This is a fold operation: I take a long list and "compress" it (so to speak) until it's a single value.
I have a single starting value and a function to produce the next value from the current value, and I want to end up with a list (or sequence, or array) of values. This is an unfold operation: I take a small starting value and "expand" it (so to speak) until it's a whole list of values.
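If it helps, here is the same contrast expressed in Python rather than F# (just an analogy of mine: functools.reduce plays the role of fold, and a hand-written generator plays the role of unfold):

from functools import reduce
from itertools import islice

# fold: collapse many values into one result
product = reduce(lambda acc, x: acc * x, [1, 2, 3, 4, 5], 1)   # 120

# unfold: expand one seed into a stream of values;
# f(state) returns None to stop, or (value, next_state) to continue
def unfold(f, state):
    while True:
        step = f(state)
        if step is None:
            return
        value, state = step
        yield value

def transform(x):
    nxt = x * 0.9 + 2.0
    return (nxt, nxt)          # next value and next state are the same here

print(list(islice(unfold(transform, 1.0), 5)))   # first five generated values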

iterative version of recursive algorithm to make a binary tree

Given this algorithm, I would like to know if there exists an iterative version. Also, I want to know if the iterative version can be faster.
This is some kind of pseudo-Python...
the algorithm returns a reference to the root of the tree
make_tree(array a)
    if len(a) == 0
        return None
    node = pick a random point from the array
    calculate distances of the point against the others
    calculate median of such distances
    node.left = make_tree(subset of the array such that the distance of the points is lower than the median of distances)
    node.right = make_tree(subset such that the distance is greater than or equal to the median)
    return node
A recursive function with only one recursive call can usually be turned into a tail-recursive function without too much effort, and then it's trivial to convert it into an iterative function. The canonical example here is factorial:
# naïve recursion
def fac(n):
    if n <= 1:
        return 1
    else:
        return n * fac(n - 1)

# tail-recursive with accumulator
def fac(n):
    def fac_helper(m, k):
        if m <= 1:
            return k
        else:
            return fac_helper(m - 1, m * k)
    return fac_helper(n, 1)

# iterative with accumulator
def fac(n):
    k = 1
    while n > 1:
        n, k = n - 1, n * k
    return k
However, your case here involves two recursive calls, and unless you significantly rework your algorithm, you need to keep a stack. Managing your own stack may be a little faster than using Python's function call stack, but the added speed and depth will probably not be worth the complexity. The canonical example here would be the Fibonacci sequence:
# naïve recursion
def fib(n):
    if n <= 1:
        return 1
    else:
        return fib(n - 1) + fib(n - 2)

# tail-recursive with accumulator and stack
def fib(n):
    def fib_helper(m, k, stack):
        if m <= 1:
            if stack:
                m = stack.pop()
                return fib_helper(m, k + 1, stack)
            else:
                return k + 1
        else:
            stack.append(m - 2)
            return fib_helper(m - 1, k, stack)
    return fib_helper(n, 0, [])

# iterative with accumulator and stack
def fib(n):
    k, stack = 0, []
    while 1:
        if n <= 1:
            k = k + 1
            if stack:
                n = stack.pop()
            else:
                break
        else:
            stack.append(n - 2)
            n = n - 1
    return k
Now, your case is a lot tougher than this: a simple accumulator will have difficulties expressing a partly-built tree with a pointer to where a subtree needs to be generated. You'll want a zipper -- not easy to implement in a not-really-functional language like Python.
Making an iterative version is simply a matter of using your own stack instead of the normal language call stack. I doubt the iterative version would be faster, as the normal call stack is optimized for this purpose.
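To make that concrete, here is a rough Python sketch of the question's pseudo-code with an explicit stack (my own illustration; the Node class, the dist argument, and how the chosen point is excluded from the subsets are all assumptions about details the pseudo-code leaves open):

import random
import statistics

class Node:
    def __init__(self):
        self.point = None
        self.left = None
        self.right = None

def make_tree_iterative(points, dist):
    # Each stack entry is a (subset, node) pair meaning "fill in this node from this subset".
    if not points:
        return None
    root = Node()
    stack = [(points, root)]
    while stack:
        subset, node = stack.pop()
        i = random.randrange(len(subset))
        node.point = subset[i]
        rest = subset[:i] + subset[i + 1:]
        if not rest:
            continue
        dists = [dist(node.point, p) for p in rest]
        med = statistics.median(dists)
        closer = [p for p, d in zip(rest, dists) if d < med]
        farther = [p for p, d in zip(rest, dists) if d >= med]
        if closer:
            node.left = Node()
            stack.append((closer, node.left))
        if farther:
            node.right = Node()
            stack.append((farther, node.right))
    return root

Each pop does the same work as one call of the recursive version; the explicit stack simply replaces the call stack, so the asymptotic cost is unchanged.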
The data you're getting is random, so the tree can be an arbitrary binary tree. For this case, you can use a threaded binary tree, which can be traversed and built without recursion and without a stack. The nodes have a flag that indicates whether a link points to another child node or to the "next node" in the traversal.
From http://en.wikipedia.org/wiki/Threaded_binary_tree
Depending on how you define "iterative", there is another solution not mentioned by the previous answers. If "iterative" just means "not subject to a stack overflow exception" (but "allowed to use 'let rec'"), then in a language that supports tail calls, you can write a version using continuations (rather than an "explicit stack"). The F# code below illustrates this. It is similar to your original problem, in that it builds a BST out of an array. If the array is shuffled randomly, the tree is relatively balanced and the recursive version does not create too deep a stack. But turn off shuffling, and the tree gets unbalanced, and the recursive version stack-overflows whereas the iterative-with-continuations version continues along happily.
#light
open System

let printResults = false
let MAX = 20000
let shuffleIt = true

// handy helper function
let rng = new Random(0)
let shuffle (arr : array<'a>) = // '
    let n = arr.Length
    for x in 1..n do
        let i = n-x
        let j = rng.Next(i+1)
        let tmp = arr.[i]
        arr.[i] <- arr.[j]
        arr.[j] <- tmp

// Same random array
let sampleArray = Array.init MAX (fun x -> x)
if shuffleIt then
    shuffle sampleArray
if printResults then
    printfn "Sample array is %A" sampleArray

// Tree type
type Tree =
    | Node of int * Tree * Tree
    | Leaf

// MakeTree1 is recursive
let rec MakeTree1 (arr : array<int>) lo hi = // [lo,hi)
    if lo = hi then
        Leaf
    else
        let pivot = arr.[lo]
        // partition
        let mutable storeIndex = lo + 1
        for i in lo + 1 .. hi - 1 do
            if arr.[i] < pivot then
                let tmp = arr.[i]
                arr.[i] <- arr.[storeIndex]
                arr.[storeIndex] <- tmp
                storeIndex <- storeIndex + 1
        Node(pivot, MakeTree1 arr (lo+1) storeIndex, MakeTree1 arr storeIndex hi)

// MakeTree2 has all tail calls (uses continuations rather than a stack, see
// http://lorgonblog.spaces.live.com/blog/cns!701679AD17B6D310!171.entry
// for more explanation)
let MakeTree2 (arr : array<int>) lo hi = // [lo,hi)
    let rec MakeTree2Helper (arr : array<int>) lo hi k =
        if lo = hi then
            k Leaf
        else
            let pivot = arr.[lo]
            // partition
            let storeIndex = ref(lo + 1)
            for i in lo + 1 .. hi - 1 do
                if arr.[i] < pivot then
                    let tmp = arr.[i]
                    arr.[i] <- arr.[!storeIndex]
                    arr.[!storeIndex] <- tmp
                    storeIndex := !storeIndex + 1
            MakeTree2Helper arr (lo+1) !storeIndex (fun lacc ->
                MakeTree2Helper arr !storeIndex hi (fun racc ->
                    k (Node(pivot,lacc,racc))))
    MakeTree2Helper arr lo hi (fun x -> x)

// MakeTree2 never stack overflows
printfn "calling MakeTree2..."
let tree2 = MakeTree2 sampleArray 0 MAX
if printResults then
    printfn "MakeTree2 yields"
    printfn "%A" tree2

// MakeTree1 might stack overflow
printfn "calling MakeTree1..."
let tree1 = MakeTree1 sampleArray 0 MAX
if printResults then
    printfn "MakeTree1 yields"
    printfn "%A" tree1

printfn "Trees are equal: %A" (tree1 = tree2)
Yes, it is possible to make any recursive algorithm iterative. Implicitly, when you create a recursive algorithm each call places the prior call onto the stack. What you want to do is turn the implicit call stack into an explicit one. The iterative version won't necessarily be faster, but you won't have to worry about a stack overflow. (Do I get a badge for using the name of the site in my answer?)
While it is true in the general sense that directly converting a recursive algorithm into an iterative one will require an explicit stack, there is a specific sub-set of algorithms which render directly in iterative form (without the need for a stack). These renderings may not have the same performance guarantees (iterating over a functional list vs recursive deconstruction), but they do often exist.
Here is a stack-based iterative solution (Java):
public static Tree builtBSTFromSortedArray(int[] inputArray){
    Stack toBeDone = new Stack("sub trees to be created under these nodes");

    //initialize start and end
    int start = 0;
    int end = inputArray.length - 1;

    //keep memory of the position (in the array) of the previously created node
    int previous_end = end;
    int previous_start = start;

    //Create the result tree
    Node root = new Node(inputArray[(start + end) / 2]);
    Tree result = new Tree(root);
    while (root != null) {
        System.out.println("Current root=" + root.data);

        //calculate last middle (last node position using the last start and last end)
        int last_mid = (previous_start + previous_end) / 2;

        //*********** add left node to the previously created node ***********
        //calculate new start and new end positions
        //end is the previous index position minus 1
        end = last_mid - 1;
        //start will not change for left nodes generation
        start = previous_start;
        //check if the index exists in the array and add the left node
        if (end >= start) {
            root.left = new Node(inputArray[((start + end) / 2)]);
            System.out.println("\tCurrent root.left=" + root.left.data);
        }
        else
            root.left = null;

        //save previous_end value (to be used in right node creation)
        int previous_end_bck = previous_end;
        //update previous end
        previous_end = end;

        //*********** add right node to the previously created node ***********
        //get the initial value (inside the current iteration) of previous end
        end = previous_end_bck;
        //start is the previous index position plus one
        start = last_mid + 1;
        //check if the index exists in the array and add the right node
        if (start <= end) {
            root.right = new Node(inputArray[((start + end) / 2)]);
            System.out.println("\tCurrent root.right=" + root.right.data);

            //save the created node and its index position (start & end) in the array to the toBeDone stack
            toBeDone.push(root.right);
            toBeDone.push(new Node(start));
            toBeDone.push(new Node(end));
        }

        //*********** update the value of root ***********
        if (root.left != null) {
            root = root.left;
        }
        else {
            if (toBeDone.top != null) previous_end = toBeDone.pop().data;
            if (toBeDone.top != null) previous_start = toBeDone.pop().data;
            root = toBeDone.pop();
        }
    }
    return result;
}

iterative version of easy recursive algorithm

I have a quite simple question, I think.
I've got this problem, which can be solved very easily with a recursive function, but which I wasn't able to solve iteratively.
Suppose you have any boolean matrix, like:
M:
111011111110
110111111100
001111111101
100111111101
110011111001
111111110011
111111100111
111110001111
I know this is not an ordinary boolean matrix, but it is useful for my example.
You can see there is a sort of path of zeros in there...
I want to make a function that receives this matrix and a point where a zero is stored, and that transforms every zero in the same area into a 2 (suppose the matrix can store any integer, even though it is initially boolean), just like when you paint a zone in Paint or any other image editor.
Suppose I call the function with this matrix M and the coordinate of the upper-right corner zero; the result would be:
111011111112
110111111122
001111111121
100111111121
110011111221
111111112211
111111122111
111112221111
well, my question is how to do this iteratively...
hope I didn't mess it up too much
Thanks in advance!
Manuel
ps: I'd appreciate if you could show the function in C, S, python, or pseudo-code, please :D
There is a standard technique for converting particular types of recursive algorithms into iterative ones. It is called tail-recursion.
The recursive version of this code would look like (pseudo code - without bounds checking):
paint(cells, i, j) {
    if(cells[i][j] == 0) {
        cells[i][j] = 2;
        paint(cells, i+1, j);
        paint(cells, i-1, j);
        paint(cells, i, j+1);
        paint(cells, i, j-1);
    }
}
This is not simply tail recursive (there is more than one recursive call), so you have to add some sort of stack structure to handle the intermediate state. One version would look like this (pseudo code, Java-esque, again with no bounds checking):
paint(cells, i, j) {
    Stack todo = new Stack();
    todo.push((i,j))

    while(!todo.isEmpty()) {
        (r, c) = todo.pop();
        if(cells[r][c] == 0) {
            cells[r][c] = 2;
            todo.push((r+1, c));
            todo.push((r-1, c));
            todo.push((r, c+1));
            todo.push((r, c-1));
        }
    }
}
Pseudo-code:
Input: Startpoint (x,y), Array[w][h], Fillcolor f

Array[x][y] = f
repeat
    hasChanged = false
    for every Array[x][y] with value f:
        check if the surrounding pixels are 0, if so:
            change them from 0 to f
            hasChanged = true
until (not hasChanged)
For this I would use a Stack or Queue object. This is my pseudo-code (Python-like):
stack.push(p0)
while stack.size() > 0:
    p = stack.pop()
    matrix[p] = 2
    for each point in Around(p):
        if matrix[point] == 0:
            stack.push(point)
The easiest way to convert a recursive function into an iterative function is to utilize the stack data structure to store the data instead of storing it on the call stack by calling recursively.
Pseudo code:
var s = new Stack();
s.Push( /*upper right point*/ );
while not s.Empty:
var p = s.Pop()
m[ p.x ][ p.y ] = 2
s.Push ( /*all surrounding 0 pixels*/ )
Not all recursive algorithms can be translated to an iterative algorithm. Normally only linear algorithms with a single branch can. This means that tree algorithms, which have two or more branches, and 2D algorithms with multiple paths, are extremely hard to turn into iterative form without using a stack (which is basically cheating).
Example:

Recursive:
listsum: N* -> N
listsum(n) ==
    if n=[] then 0
    else hd n + listsum(tl n)

Iteration:
listsum: N* -> N
listsum(n) ==
    res = 0
    forall i in n do
        res = res + i
    return res
Recursion:
treesum: Tree -> N
treesum(t) ==
    if t=nil then 0
    else let (left, node, right) = t in
         treesum(left) + node + treesum(right)

Partial iteration (try):
treesum: Tree -> N
treesum(t) ==
    res = 0
    while t<>nil
        let (left, node, right) = t in
            res = res + node + treesum(right)
            t = left
    return res
As you see, there are two paths (left and right). It is possible to turn one of these paths into iteration, but to translate the other into iteration you need to preserve the state which can be done using a stack:
Iteration (with stack):
treesum: Tree -> N
treesum(t) ==
    res = 0
    stack.push(t)
    while not stack.isempty()
        t = stack.pop()
        while t<>nil
            let (left, node, right) = t in
                stack.push(right)
                res = res + node
                t = left
    return res
This works, but a recursive algorithm is much easier to understand.
If doing it iteratively is more important than performance, I would use the following algorithm:
1. Set the initial 2.
2. Scan the matrix for a 0 next to a 2.
3. If such a 0 is found, change it to 2 and restart the scan at step 2.
This is easy to understand and needs no stack, but it is very time consuming.
A simple way to do this iteratively is using a queue:
1. Insert the starting point into the queue.
2. Get the first element from the queue.
3. Set it to 2.
4. Put all neighbors that are still 0 into the queue.
5. If the queue is not empty, jump to step 2.
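A minimal Python version of those steps might look like this (my sketch; the names are arbitrary and the matrix is assumed to be a list of lists of ints, indexed as (row, column)):

from collections import deque

def flood_fill(matrix, start, new_value=2):
    # Breadth-first fill: replace the connected region of 0s containing `start`.
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if matrix[x][y] != 0:
            continue            # already filled, or never part of the region
        matrix[x][y] = new_value
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < len(matrix) and 0 <= ny < len(matrix[nx]) and matrix[nx][ny] == 0:
                queue.append((nx, ny))

For the matrix in the question, calling flood_fill(M, (0, 11)) on the upper-right zero would produce the second matrix shown.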
