For an assignment on dynamic programming, we need to give an iterative implementation for a recursive formula.
To keep it simple: assume that in every iteration at least one of a or b holds; if neither holds, we are in the base clause.
We have a formula like:
f(x):
    if (x.a && !x.b):
        return f(option_1)
    else if (x.b && !x.a):
        return f(option_2)
    else if (x.a && x.b):
        f_1 = f(option_1)
        f_2 = f(option_2)
        return betterOption(f_1, f_2)
    else:
        return base_clause
How do I compute this iteratively? I get stuck on the option where both a & b hold, because we need to compute both options. Using recursion in any form is not allowed here.
I was looking at the first recursive solution for this problem on LeetCode. Here is the code for the proposed solution:
public boolean isSymmetric(TreeNode root) {
    return isMirror(root, root);
}

public boolean isMirror(TreeNode t1, TreeNode t2) {
    if (t1 == null && t2 == null) return true;
    if (t1 == null || t2 == null) return false;
    return (t1.val == t2.val)
        && isMirror(t1.right, t2.left)
        && isMirror(t1.left, t2.right);
}
The part of the solution that I do not understand is why the writer of the solution says the time complexity is O(n). Let's say I had the input:
    1
   / \
  2   2
This is how I trace the call stack in this case:
isMirror(1, 1)
    (t1.val == t2.val) returns true
    isMirror(2, 2) returns true
        (t1.val == t2.val) returns true
        isMirror(null, null) returns true
        isMirror(null, null) returns true
    isMirror(2, 2) returns true
        (t1.val == t2.val) returns true
        isMirror(null, null) returns true
        isMirror(null, null) returns true
In the call stack above, isMirror() is called 7 times while n is 3. For the time complexity to be O(n), should isMirror() have only been called 3 times? Or am I looking at this the wrong way? Is it the fact that the call stack only went 3 levels deep that shows that the time complexity is O(n)?
Thank you for the help.
You call isMirror() on the null nodes too, so you are really visiting 7 positions, not 3: the 3 tree nodes plus the 4 null children. Think about the next level of the binary tree and you will see.
Solutions:
Try to check for null before calling the function.
Either way it is still only a constant factor on your cost, roughly 2 times, so by big-O rules you are still O(n).
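For illustration, one way the null checks could be hoisted out of the call, so that isMirror() is never invoked on null nodes, might look like this. It is my own rewrite of the snippet above (assuming the same TreeNode class), not the original author's code:

public boolean isSymmetric(TreeNode root) {
    return root == null || isMirror(root, root);
}

public boolean isMirror(TreeNode t1, TreeNode t2) {
    // Callers guarantee t1 and t2 are non-null, so no (null, null) calls occur.
    if (t1.val != t2.val) return false;
    // The corresponding children must either both be absent,
    // or both be present and themselves mirrors of each other.
    boolean outer = (t1.right == null && t2.left == null)
            || (t1.right != null && t2.left != null && isMirror(t1.right, t2.left));
    boolean inner = (t1.left == null && t2.right == null)
            || (t1.left != null && t2.right != null && isMirror(t1.left, t2.right));
    return outer && inner;
}

With this variant isMirror() runs roughly once per node pair instead of also once per null child, but as noted above both versions are O(n).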
I need to identify the rows (or columns) that have stored values in a large sparse Boolean matrix. I want to use this to 1. slice (actually view) the matrix by those rows/columns; and 2. slice (or view) vectors and matrices that have the same dimensions as the margins of the matrix. I.e. the result should probably be a Vector of indices / Bools or (preferably) an iterator.
I've tried the obvious:
a = sprand(10000, 10000, 0.01)
cols = unique(a.colptr)
rows = unique(a.rowvals)
but each of these takes about 20 ms on my machine, probably because they allocate about 1 MB (at least they allocate cols and rows). This is inside a performance-critical function, so I'd like the code to be optimized. Base seems to have an nzrange iterator for sparse matrices, but it is not easy for me to see how to apply it to my case.
Is there a suggested way of doing this?
Second question: I'd need to also perform this operation on views of my sparse Matrix - would that be something like x = view(a,:,:); cols = unique(x.parent.colptr[x.indices[:,2]]) or is there specialized functionality for this? Views of sparse matrices appear to be tricky (cf https://discourse.julialang.org/t/slow-arithmetic-on-views-of-sparse-matrices/3644 – not a cross-post)
Thanks a lot!
Regarding getting the non-zero rows and columns of a sparse matrix, the following functions should be pretty efficient:
nzcols(a::SparseMatrixCSC) = collect(i for i in 1:a.n if a.colptr[i] < a.colptr[i+1])

function nzrows(a::SparseMatrixCSC)
    active = falses(a.m)
    for r in a.rowval
        active[r] = true
    end
    return find(active)
end
For a 10_000x10_000 matrix with 0.1 density it takes 0.2 ms and 2.9 ms for cols and rows, respectively. It should also be quicker than the method in the question (which, besides being slower, has a correctness issue).
Regarding views of sparse matrices, a quick solution would be to turn the view into a sparse matrix (e.g. using b = sparse(view(a,100:199,100:199))) and use the functions above. In code:
nzcols(b::SubArray{T,2,P}) where {T,P<:AbstractSparseArray} = nzcols(sparse(b))
nzrows(b::SubArray{T,2,P}) where {T,P<:AbstractSparseArray} = nzrows(sparse(b))
A better solution would be to customize the functions according to the type of the view. For example, when the view uses UnitRanges for both rows and columns:
# utility predicate returning true if element of sorted v in range r
inrange(v,r) = searchsortedlast(v,last(r))>=searchsortedfirst(v,first(r))
function nzcols(b::SubArray{T,2,P,Tuple{UnitRange{Int64},UnitRange{Int64}}}
                ) where {T,P<:SparseMatrixCSC}
    return collect(i+1-start(b.indexes[2])
                   for i in b.indexes[2]
                   if b.parent.colptr[i] < b.parent.colptr[i+1] &&
                      inrange(b.parent.rowval[nzrange(b.parent,i)], b.indexes[1]))
end
function nzrows(b::SubArray{T,2,P,Tuple{UnitRange{Int64},UnitRange{Int64}}}
                ) where {T,P<:SparseMatrixCSC}
    active = falses(length(b.indexes[1]))
    for c in b.indexes[2]
        for r in nzrange(b.parent, c)
            if b.parent.rowval[r] in b.indexes[1]
                active[b.parent.rowval[r]+1-start(b.indexes[1])] = true
            end
        end
    end
    return find(active)
end
which work faster than the versions for the full matrices (for a 100x100 submatrix of the 10,000x10,000 matrix above, cols and rows take 16μs and 12μs, respectively, on my machine, but these are noisy measurements).
A proper benchmark would use fixed matrices (or at least fix the random seed). I'll edit this line with such a benchmark if I do it.
In case the indices are not ranges, the fallback of converting to a sparse matrix works, but here are versions for indices which are Vectors. If the indices are mixed, yet another set of versions needs to be written. Quite repetitive, but this is the strength of Julia: once the versions are done, dispatch picks the optimized method from the types at the call site without too much effort.
function sortedintersecting(v1, v2)
    i, j = start(v1), start(v2)
    while i <= length(v1) && j <= length(v2)
        if v1[i] == v2[j] return true
        elseif v1[i] > v2[j] j += 1
        else i += 1
        end
    end
    return false
end

function nzcols(b::SubArray{T,2,P,Tuple{Vector{Int64},Vector{Int64}}}
                ) where {T,P<:SparseMatrixCSC}
    brows = sort(unique(b.indexes[1]))
    return [k
            for (k,i) in enumerate(b.indexes[2])
            if b.parent.colptr[i] < b.parent.colptr[i+1] &&
               sortedintersecting(brows, b.parent.rowval[nzrange(b.parent,i)])]
end

function nzrows(b::SubArray{T,2,P,Tuple{Vector{Int64},Vector{Int64}}}
                ) where {T,P<:SparseMatrixCSC}
    active = falses(length(b.indexes[1]))
    for c in b.indexes[2]
        active[findin(b.indexes[1], b.parent.rowval[nzrange(b.parent,c)])] = true
    end
    return find(active)
end
-- ADDENDUM --
Since it was noted that nzrows for Vector{Int} indices is a bit slow, here is an attempt to improve its speed by replacing findin with a version that exploits sortedness:
function findin2(inds, v, w)
    i, j = start(v), start(w)
    res = Vector{Int}()
    while i <= length(v) && j <= length(w)
        if v[i] == w[j]
            push!(res, inds[i])
            i += 1
        elseif (v[i] < w[j]) i += 1
        else j += 1
        end
    end
    return res
end

function nzrows(b::SubArray{T,2,P,Tuple{Vector{Int64},Vector{Int64}}}
                ) where {T,P<:SparseMatrixCSC}
    active = falses(length(b.indexes[1]))
    inds = sortperm(b.indexes[1])
    brows = (b.indexes[1])[inds]
    for c in b.indexes[2]
        active[findin2(inds, brows, b.parent.rowval[nzrange(b.parent,c)])] = true
    end
    return find(active)
end
What's the term for when a recursive algorithm doesn't always reach all functions on each level? Consider this code:
function f(value):
    if value < -10 return g(value + 1)
    if value > 10 return h(value - 1)
    else return 0

function g(value):
    if value % 2 == 0 return f(value / 2)
    else return h(value)

function h(value):
    if value % 2 == 1 return g(value - 1)
    else return h(value - 1)
Each recursive call may invoke a different function. For example, when we start with 15:
f(15)
h(14)
h(13)
g(12)
f(6)
return 0
I've never heard of a specific term for what you describe; it's simply referred to as recursion.
The fact that it executes potentially different pathways for certain levels of recursion doesn't really make it special, any more than the same feature in a loop:
for i = 1 to 10:
    if i is even:
        print i, " is even"
    else:
        print i, " is odd"
That's still called a loop (not something like "alternating loop" or "iteration dependent behavioural loop"). It's similar to your recursion example.
In fact, recursion always has different behaviour at at least one level (the last one) since, without that, it would recur forever and you would most likely overflow the stack.
In the snippet below, please explain, starting with the first "for" loop, what is happening and why. Why is 0 added in the first loop, and why is 1 added in the second loop? What is going on in the "if" statement involving bigi? Finally, please explain the modPow method. Thank you in advance for meaningful replies.
public static boolean isPrimitive(BigInteger m, BigInteger n) {
    BigInteger bigi, vectorint;
    Vector<BigInteger> v = new Vector<BigInteger>(m.intValue());
    int i;

    for (i = 0; i < m.intValue(); i++)
        v.add(new BigInteger("0"));

    for (i = 1; i < m.intValue(); i++) {
        bigi = new BigInteger("" + i);
        if (m.gcd(bigi).intValue() == 1)
            v.setElementAt(new BigInteger("1"), n.modPow(bigi, m).intValue());
    }

    for (i = 0; i < m.intValue(); i++) {
        bigi = new BigInteger("" + i);
        if (m.gcd(bigi).intValue() == 1) {
            vectorint = v.elementAt(bigi.intValue());
            if (vectorint.intValue() == 0)
                i = m.intValue() + 1;
        }
    }

    if (i == m.intValue() + 2)
        return false;
    else
        return true;
}
Treat the vector as a list of booleans, with one boolean for each number from 0 to m-1. When you view it that way, it becomes obvious that each value is set to 0 to initialize it to false, and then set to 1 later to set it to true.
The last for loop is testing all the booleans. If any of them are 0 (indicating false), then the function returns false. If all are true, then the function returns true.
Explaining the if statement you asked about would require explaining what a primitive root mod n is, which is the whole point of the function. I think if your goal is to understand this program, you should first understand what it implements. If you read Wikipedia's article on it, you'll see this in the first paragraph:
In modular arithmetic, a branch of number theory, a primitive root modulo n is any number g with the property that any number coprime to n is congruent to a power of g (mod n). That is, if g is a primitive root (mod n), then for every integer a that has gcd(a, n) = 1, there is an integer k such that g^k ≡ a (mod n). k is called the index of a. That is, g is a generator of the multiplicative group of integers modulo n.
The function modPow implements modular exponentiation. Once you understand how to find a primitive root mod n, you'll understand it.
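For intuition: BigInteger.modPow(exponent, modulus) computes base^exponent mod modulus, typically by repeated squaring. A hand-rolled sketch of that idea might look like the following; it is only for illustration and is not the library's actual implementation (the code above should keep calling modPow directly):

import java.math.BigInteger;

// Sketch of modular exponentiation by repeated squaring: computes base^exp mod m
// for a non-negative exponent.
static BigInteger modPowSketch(BigInteger base, BigInteger exp, BigInteger m) {
    BigInteger result = BigInteger.ONE;
    BigInteger b = base.mod(m);
    BigInteger e = exp;
    while (e.signum() > 0) {
        if (e.testBit(0)) {               // if the current low bit of the exponent is 1
            result = result.multiply(b).mod(m);
        }
        b = b.multiply(b).mod(m);         // square the base
        e = e.shiftRight(1);              // move on to the next bit
    }
    return result;
}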
Perhaps the final piece of the puzzle for you is to know that two numbers are coprime if their greatest common divisor is 1. And so you see these checks in the algorithm you pasted.
Bonus link: This paper has some nice background, including how to test for primitive roots near the end.
I have a quite simple question, I think.
I've got this problem, which can be solved very easily with a recursive function, but which I wasn't able to solve iteratively.
Suppose you have any boolean matrix, like:
M:
111011111110
110111111100
001111111101
100111111101
110011111001
111111110011
111111100111
111110001111
I know this is not an ordinary boolean matrix, but it is useful for my example.
You can see there are sort of zero-paths in there...
I want to make a function that receives this matrix and a point where a zero is stored, and that transforms every zero in the same area into a 2 (suppose the matrix can store any integer, even though it is initially boolean)
(just like when you paint a zone in Paint or any image editor)
Suppose I call the function with this matrix M and the coordinate of the upper-right-corner zero; the result would be:
111011111112
110111111122
001111111121
100111111121
110011111221
111111112211
111111122111
111112221111
well, my question is how to do this iteratively...
hope I didn't mess it up too much
Thanks in advance!
Manuel
ps: I'd appreciate if you could show the function in C, S, python, or pseudo-code, please :D
There is a standard technique for converting certain kinds of recursive algorithms into iterative ones; it applies when the recursion is a tail call (tail recursion).
The recursive version of this code would look like (pseudo code - without bounds checking):
paint(cells, i, j) {
    if (cells[i][j] == 0) {
        cells[i][j] = 2;
        paint(cells, i+1, j);
        paint(cells, i-1, j);
        paint(cells, i, j+1);
        paint(cells, i, j-1);
    }
}
This is not simply tail recursive (there is more than one recursive call), so you have to add some sort of stack structure to hold the intermediate state. One version would look like this (pseudo code, java-esque, again with no bounds checking):
paint(cells, i, j) {
    Stack todo = new Stack();
    todo.push((i, j));
    while (!todo.isEmpty()) {
        (r, c) = todo.pop();
        if (cells[r][c] == 0) {
            cells[r][c] = 2;
            todo.push((r+1, c));
            todo.push((r-1, c));
            todo.push((r, c+1));
            todo.push((r, c-1));
        }
    }
}
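To make the java-esque pseudocode concrete, here is a runnable Java sketch of the same stack-based fill with the bounds checks added. The int[] pair encoding and method signature are my own choices, not part of the original answer:

import java.util.ArrayDeque;
import java.util.Deque;

static void paint(int[][] cells, int startRow, int startCol) {
    int rows = cells.length, cols = cells[0].length;
    // Pending coordinates are stored as two-element {row, col} arrays.
    Deque<int[]> todo = new ArrayDeque<>();
    todo.push(new int[] { startRow, startCol });
    while (!todo.isEmpty()) {
        int[] p = todo.pop();
        int r = p[0], c = p[1];
        // Skip positions outside the matrix and cells that are not (or no longer) 0.
        if (r < 0 || r >= rows || c < 0 || c >= cols || cells[r][c] != 0) continue;
        cells[r][c] = 2;
        todo.push(new int[] { r + 1, c });
        todo.push(new int[] { r - 1, c });
        todo.push(new int[] { r, c + 1 });
        todo.push(new int[] { r, c - 1 });
    }
}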
Pseudo-code:
Input: start point (x, y), Array[w][h], fill colour f

Array[x][y] = f
repeat
    hasChanged = false
    for every Array[x][y] with value f:
        check whether the surrounding pixels are 0; if so:
            change them from 0 to f
            hasChanged = true
until (not hasChanged)
For this I would use a Stack or Queue object. This is my pseudo-code (Python-like):
stack.push(p0)
while stack.size() > 0:
    p = stack.pop()
    matrix[p] = 2
    for each point in Around(p):
        if matrix[point] == 0:
            stack.push(point)
The easiest way to convert a recursive function into an iterative one is to use an explicit stack data structure to store the pending work, instead of storing it on the call stack through recursive calls.
Pseudo code:
var s = new Stack();
s.Push( /* upper-right point */ );
while not s.Empty:
    var p = s.Pop()
    m[p.x][p.y] = 2
    s.Push( /* all surrounding 0 pixels */ )
Not every recursive algorithm can be translated into an iterative one directly. Normally only linear algorithms with a single recursive branch can. This means that tree algorithms, which have two or more branches, and 2D algorithms with more paths are extremely hard to turn into iterative form without using a stack (which is basically cheating).
Example:
Recursive:

listsum: N* -> N
listsum(n) ==
    if n = [] then 0
    else hd n + listsum(tl n)

Iteration:

listsum: N* -> N
listsum(n) ==
    res = 0
    forall i in n do
        res = res + i
    return res

Recursion:

treesum: Tree -> N
treesum(t) ==
    if t = nil then 0
    else let (left, node, right) = t in
        treesum(left) + node + treesum(right)

Partial iteration (try):

treesum: Tree -> N
treesum(t) ==
    res = 0
    while t <> nil
        let (left, node, right) = t in
            res = res + node + treesum(right)
            t = left
    return res
As you see, there are two paths (left and right). It is possible to turn one of these paths into iteration, but to translate the other into iteration you need to preserve the state, which can be done using a stack:
Iteration (with stack):

treesum: Tree -> N
treesum(t) ==
    res = 0
    stack.push(t)
    while not stack.isempty()
        t = stack.pop()
        while t <> nil
            let (left, node, right) = t in
                stack.push(right)
                res = res + node
                t = left
    return res
This works, but a recursive algorithm is much easier to understand.
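To make the stack version concrete, here is a minimal Java sketch of the same idea. The Node class (with value, left, and right fields) is assumed for illustration and is not part of the original answer:

import java.util.ArrayDeque;
import java.util.Deque;

class Node {
    int value;
    Node left, right;
    Node(int value, Node left, Node right) {
        this.value = value;
        this.left = left;
        this.right = right;
    }
}

static int treeSum(Node root) {
    int res = 0;
    Deque<Node> stack = new ArrayDeque<>();
    if (root != null) stack.push(root);
    while (!stack.isEmpty()) {
        Node t = stack.pop();
        // Walk down the left spine, summing nodes and deferring right subtrees.
        while (t != null) {
            res += t.value;
            if (t.right != null) stack.push(t.right);
            t = t.left;
        }
    }
    return res;
}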
If doing it iteratively is more important than performance, I would use the following algorithm:
1. Set the initial 2.
2. Scan the matrix for a 0 adjacent to a 2.
3. If such a 0 is found, change it to 2 and restart the scan at step 2.
This is easy to understand and needs no stack, but is very time consuming.
A simple way to do this iteratively is to use a queue, as sketched after the steps below.
1. Insert the starting point into the queue.
2. Take the first element from the queue.
3. Set it to 2.
4. Put all of its neighbours that are still 0 into the queue.
5. If the queue is not empty, jump to step 2.
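A minimal Java sketch of those steps, assuming a 0/1 int matrix and adding the bounds checks the steps leave implicit (my own illustration, not the answerer's code). It differs from the earlier stack-based sketch only in processing cells in FIFO order, i.e. breadth-first:

import java.util.ArrayDeque;
import java.util.Queue;

static void fill(int[][] m, int startRow, int startCol) {
    int rows = m.length, cols = m[0].length;
    Queue<int[]> queue = new ArrayDeque<>();
    queue.add(new int[] { startRow, startCol });              // 1. insert starting point
    while (!queue.isEmpty()) {                                // 5. repeat while non-empty
        int[] p = queue.remove();                             // 2. take the first element
        m[p[0]][p[1]] = 2;                                    // 3. set it to 2
        int[][] nbrs = { { p[0] + 1, p[1] }, { p[0] - 1, p[1] },
                         { p[0], p[1] + 1 }, { p[0], p[1] - 1 } };
        for (int[] n : nbrs) {                                // 4. enqueue 0-neighbours
            if (n[0] >= 0 && n[0] < rows && n[1] >= 0 && n[1] < cols
                    && m[n[0]][n[1]] == 0) {
                queue.add(n);
            }
        }
    }
}

Note that a cell may be enqueued more than once before it is painted; that is harmless here, since setting it to 2 a second time changes nothing.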