Recursion without executing all methods on each recursive call

What's the term for when a recursive algorithm doesn't always reach all functions on each level? Consider this code:
function f(value):
    if value < -10 return g(value + 1)
    if value > 10 return h(value - 1)
    else return 0

function g(value):
    if value % 2 == 0 return f(value / 2)
    else return h(value)

function h(value):
    if value % 2 == 1 return g(value - 1)
    else return h(value - 1)
Each recursive call may invoke a different function; for example, starting with 15:
f(15)
h(14)
h(13)
g(12)
f(6)
return 0
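For concreteness, here is a direct Python transcription of the pseudocode above (just a sketch; the integer division in g is an assumption, since the pseudocode's / is unspecified):

def f(value):
    if value < -10:
        return g(value + 1)
    if value > 10:
        return h(value - 1)
    return 0

def g(value):
    if value % 2 == 0:
        return f(value // 2)  # assuming integer division
    return h(value)

def h(value):
    if value % 2 == 1:
        return g(value - 1)
    return h(value - 1)

print(f(15))  # follows the chain f(15) -> h(14) -> h(13) -> g(12) -> f(6) -> 0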

I've never heard of a specific term for what you describe; it's simply referred to as recursion.
The fact that it executes potentially different pathways at certain levels of recursion doesn't really make it special, any more than the same behaviour does in a loop:
for i = 1 to 10:
    if i is even:
        print i, " is even"
    else:
        print i, " is odd"
That's still called a loop (not something like "alternating loop" or "iteration dependent behavioural loop"). It's similar to your recursion example.
In fact, recursion always behaves differently at the final level: without that base case, it would recur forever and you would most likely overflow the stack.

Related

How to optimize this recursion function?

This is really similar to the Fibonacci sequence problem. I understand the DP optimization of the Fibonacci function using a for loop, but I'm having a hard time connecting it to this problem.
The recursive function I want to optimize is:
def cal(n):
    if n <= 0:
        return 1
    else:
        return cal(n-25) + cal(n-26)
Something like this may help (it's inspired by a previous post):

from functools import cache

@cache
def cal(n):
    if n <= 0:
        return 1
    else:
        return cal(n-25) + cal(n-26)

print(cal(100))
The idea of a "for loop optimization" is that we calculate cal(0), cal(1), cal(2), ... consecutively. And when we want cal(n), we already have cal(n-25) and cal(n-26) stored in an array.
So, the following solution has linear complexity for non-negative n:
def cal(n):
    mem = [1]  # cal(0) is 1
    for i in range(1, n + 1):
        num = 1 if i - 25 < 0 else mem[i - 25]
        num += 1 if i - 26 < 0 else mem[i - 26]
        mem.append(num)
    return mem[-1]
One can further optimize to make all the values cal(1), cal(2), ..., cal(n) globally available after calculating the last of them.
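One way to do that (a minimal sketch, assuming a module-level list is acceptable; the name _mem is illustrative) is to keep the table outside the function and only extend it as far as needed:

_mem = [1]  # _mem[i] holds cal(i); cal(0) is 1

def cal(n):
    if n < 0:
        return 1
    # extend the shared table only up to index n, reusing earlier work
    for i in range(len(_mem), n + 1):
        num = 1 if i - 25 < 0 else _mem[i - 25]
        num += 1 if i - 26 < 0 else _mem[i - 26]
        _mem.append(num)
    return _mem[n]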

Can I use <- instead of = in Julia?

Like in R:
a <- 2
or even better
a ← 2
which should translate to
a = 2
and if possible respect method overloading.
= is overloaded (not in the multiple dispatch sense) a lot in Julia.
1. It binds a new variable. As in a = 3. You won't be able to use ← instead of = in this context, because you can't overload binding in Julia.
2. It gets lowered to setindex!. As in, a[i] = b gets lowered to setindex!(a, b, i). Unfortunately, setindex! takes 3 arguments while ← can only take 2. So you can't overload = with 3 arguments. But you can use only 2 arguments and overload a[:] = b, for example. So you can define ←(a,b) = (a[:] = b) or ←(a,b) = setindex!(a,b,:).
3. a .= b gets lowered to (Base.broadcast!)(Base.identity, a, b). You can overload this by defining ←(a,b) = (a .= b) or ←(a,b) = (Base.broadcast!)(Base.identity, a, b).
So, there are two potentially nice ways of using ←. Good luck ;)
Btw, if you really want to use ← to do binding (like in 1.), the only way to do it is using macros. But then, you will have to write a macro in front of every single assignment, which doesn't look very good.
Also, if you want to explore how operators get lowered in Julia, define f(a,b) = (a .= b), for example, and then run @code_lowered f(x,y).
No. = is not an operator in Julia, and cannot be assigned to another symbol.
Disclaimer: You are fully responsible if you try my (still beginner's) experiments below! :P
MacroHelper is a module (big thanks to @Alexander_Morley and @DanGetz for help) that I plan to play with in the future, and we could probably try it here:
julia> module MacroHelper
       # modified from the julia source ./test/parse.jl
       function parseall(str)
           pos = start(str)
           exs = []
           while !done(str, pos)
               ex, pos = parse(str, pos)  # returns next starting point as well as expr
               ex.head == :toplevel ? append!(exs, ex.args) : push!(exs, ex)
           end
           if length(exs) == 0
               throw(ParseError("end of input"))
           elseif length(exs) == 1
               return exs[1]
           else
               return Expr(:block, exs...)  # convert the array of expressions
                                            # back to a single expression
           end
       end
       end;
With the module above you could define a simple test "language":
julia> module TstLang
       export @tst_cmd
       import MacroHelper
       macro tst_cmd(a)
           b = replace("$a", "←", "=")  # just simply replacing ←
           # in real life you would probably like
           # to escape comments, strings etc
           return MacroHelper.parseall(b)
       end
       end;
And by using it you could probably get what you want:
julia> using TstLang
julia> tst```
a ← 3
println(a)
a +← a + 3 # maybe not wanted? :P
```
3
9
What about performance?
julia> function test1()
           a = 3
           a += a + 3
       end;

julia> function test2()
           tst```
           a ← 3
           a +← a + 3
           ```
       end;

julia> test1(); @time test1();
  0.000002 seconds (4 allocations: 160 bytes)

julia> test2(); @time test2();
  0.000002 seconds (4 allocations: 160 bytes)
If you'd like to see syntax highlighting (for example in the Atom editor), then you need to use it differently:
function test3()
    @tst_cmd begin
        a ← 3
        a ← a + a + 3  # the parser doesn't allow "+←" here!
    end
end;
We could hope that future Julia IDEs could syntax highlight cmd macros too. :)
What could be the problem with the "solution" above? I am not a very experienced Julian, so probably many things. :P (At the moment, "macro hygiene" and "global scope" come to mind...)
But what you want is IMHO fine for some domain-specific languages, not for redefining the basics of the language! Readability counts a lot, and if everybody redefines everything, it will end in a Tower of Babel...

Tail recursion in R

I seem to misunderstand tail recursion; according to this Stack Overflow question, R does not support tail recursion. However, let's consider the following functions to compute the nth Fibonacci number:
Iterative version:
Fibo <- function(n){
  a <- 0
  b <- 1
  for (i in 1:n){
    temp <- b
    b <- a
    a <- a + temp
  }
  return(a)
}
"Naive" recursive version:
FiboRecur <- function(n){
  if (n == 0 || n == 1){
    return(n)
  } else {
    return(FiboRecur(n-1) + FiboRecur(n-2))
  }
}
And finally an example I found that should be tail call recursive:
FiboRecurTail <- function(n){
  fib_help <- function(a, b, n){
    if(n > 0){
      return(fib_help(b, a+b, n-1))
    } else {
      return(a)
    }
  }
  return(fib_help(0, 1, n))
}
Now if we take a look at the traces when these functions are called, here is what we get:
Fibo(25)
trace: Fibo(25)
[1] 75025
trace(FiboRecur)
FiboRecur(25)
# ... thousands of calls to FiboRecur; takes a long time to run
FiboRecurTail(25)
trace: FiboRecurTail(25)
[1] 75025
In the cases of Fibo(25) and FiboRecurTail(25), the answer is displayed instantaneously and only one call is made. For FiboRecur(25), thousands of calls are made and it runs for some seconds before showing the result.
We can also take a look at the run times using the benchmark function from the package rbenchmark:
benchmark(Fibo(30), FiboRecur(30), FiboRecurTail(30), replications = 5)
               test replications elapsed relative user.self sys.self user.child sys.child
1          Fibo(30)            5    0.00       NA     0.000        0          0         0
2     FiboRecur(30)            5   13.79       NA    13.792        0          0         0
3 FiboRecurTail(30)            5    0.00       NA     0.000        0          0         0
So if R does not support tail recursion, what is happening in FiboRecurTail(25) that makes it run as fast as the iterative version while the "naive" recursive function runs like molasses? Is it rather that R supports tail recursion, but does not optimize a "naive" recursive version of a function to be tail-call recursive like other programming languages (Haskell for instance) do? This is what I understand from this post in R's mailing list.
I would greatly appreciate if someone would shed some light into this. Thanks!
The difference is that for each recursion, FiboRecur calls itself twice. Within FiboRecurTail, fib_help calls itself only once.
Thus you have a whole lot more function calls with the former. In the case of FiboRecurTail(25) you have a recursion depth of ~25 calls. FiboRecur(25) results in 242,785 function calls (including the first).
I didn't time any of the routines, but note that you show 0.00 for both of the faster routines. You should see some difference with a higher input value, but note that Fibo iterates exactly as much as FiboRecurTail recurses.
In the naive recursive approach, you repeatedly calculate a lot of values. For example, when you calculate FiboRecur(30) you will calculate FiboRecur(29) and FiboRecur(28), and these two calls are independent of each other. Within FiboRecur(29) you will calculate FiboRecur(28) again, as well as FiboRecur(27), even though FiboRecur(28) has already been calculated elsewhere, as noted above. And this happens at every stage of the recursion. Simply put, for every increase of n the calculation effort almost doubles, when in reality it should be as simple as adding the last two calculated numbers together.
A little summary of FiboRecur(4): FiboRecur(0) is calculated twice, FiboRecur(1) is calculated three times, FiboRecur(2) is calculated twice and FiboRecur(3) is calculated once. The former three should really be calculated once and stored somewhere so that you can extract the values whenever they are needed. And that's why you see so many function calls even though it's not a large number.
In the tail recursive version, however, all previously calculated values are passed to the next stage via the a and b parameters, which avoids the countless repeated calculations of the naive recursive version and is therefore more efficient.
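To see the difference in sheer call counts outside of R, here is a small illustrative Python sketch (the counters and function names are just for this illustration):

calls = {"naive": 0, "tail": 0}

def fib_naive(n):
    calls["naive"] += 1
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_acc(a, b, k):
    # accumulator version: a = fib(i), b = fib(i+1), k steps remaining
    calls["tail"] += 1
    return a if k == 0 else fib_acc(b, a + b, k - 1)

print(fib_naive(25), calls["naive"])   # 75025 after 242785 calls
print(fib_acc(0, 1, 25), calls["tail"])  # 75025 after 26 calls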
The following algorithm uses the accumulator-parameter technique to make things tail recursive, then wraps it in a memoization function.
The number of function calls doesn't necessarily differ for tail recursion. This is mostly about managing stack memory, not speed. Every call to fib(n) generates calls to fib(n - 1) and fib(n - 2), except that in tail-recursive cases the stack frame is reused rather than a new one being allocated for each call.
Memoization is what gives a speed-boost. Results are cached for future use.
library(hash)
library(purrr)     # for partial() and map()
library(magrittr)  # for the %>% pipe

# Generate Fibonacci numbers
# Tail Recursive Algorithm using Accumulator Parameter Technique
fibTR <- function(n) {
  fibLoop <- function(acc, m, k) {
    if (k == 0)
      acc
    else
      fibLoop(acc = m, m = acc + m, k = k - 1)
  }
  fibLoop(acc = 0, m = 1, k = n)
}

# A generic memoization function for a function fn taking integer input
memoize <- function(fn, inp) {
  cache <- hash::hash()
  key <- as.character(inp)
  if (hash::has.key(key = key, hash = cache))
    cache[[key]]
  else {
    cache[[key]] <- inp %>% fn
    cache[[key]]
  }
}

# Partial Application of a Function
# Memoized and Tail Recursive Fibonacci Number Generator
fib <- partial(.f = memoize, fn = fibTR)

# Get the first 10 Fibonacci numbers
map(.x = 0:9, .f = fib) %>% unlist
Running fibTR(10000) yields
Error: C stack usage 15927040 is too close to the limit
So, I doubt R does efficient tail call optimization.
Another issue is the construction of the cache or lookaside table. In functional languages such as Haskell, ML, ..., that intermediary data structure gets built when you first partially apply the function. Assuming the same effect in R, there is a further problem: memory allocation in R is very expensive, and so is growing vectors, matrices, etc. Here we are growing a dictionary, and if we wanted to pre-allocate a dictionary of appropriate size we would have to supply the n argument up front; as written, the cache gets constructed every time we call the function, which defeats the purpose.
// Here is F# code to do the same:

// Generate Fibonacci numbers: Tail Recursive Algorithm
let fibTR n =
    let rec fibLoop acc m k =
        match k with
        | 0 -> acc
        | n -> fibLoop m (acc + m) (n - 1)
    fibLoop 0 1 n

// A generic memoization function
let memoize (fn: 'T -> 'U) =
    let cache = new System.Collections.Generic.Dictionary<_, _>()
    fun inp ->
        match cache.TryGetValue inp with
        | true, res -> res
        | false, _ ->
            let res = inp |> fn
            cache.Add(inp, res)
            res

// A tail-recursive and memoized Fibonacci generator
let fib = fibTR |> memoize

// Get the first 10 Fibonacci numbers
[ 0..9 ] |> List.map fib
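The same "build the cache once inside a closure, reuse it across calls" idea, sketched in Python for comparison (illustrative only; fib_tr here uses an accumulator-style loop because Python, like R, does not eliminate tail calls):

def memoize(fn):
    cache = {}  # built once, shared by every call to the returned function
    def wrapper(n):
        if n not in cache:
            cache[n] = fn(n)
        return cache[n]
    return wrapper

def fib_tr(n, a=0, b=1):
    # accumulator-style Fibonacci: a = fib(i), b = fib(i+1)
    while n > 0:
        a, b, n = b, a + b, n - 1
    return a

fib = memoize(fib_tr)
print([fib(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]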

How to build an iterative form of multiple recursive calls?

For an assignment on dynamic programming, we need to give an iterative implementation for a recursive formula.
To keep it simple: let's say that in every iteration either a or b holds; if neither does, we're in the base clause.
We have a formula like:
f(x){
    if(x.a && !x.b):
        return f(option_1)
    else if(x.b && !x.a):
        return f(option_2)
    else if(x.a && x.b):
        f_1 = f(option_1)
        f_2 = f(option_2)
        return betterOption(f_1, f_2)
    else:
        return base_clause
}
How do I compute this iteratively? I get stuck on the case where both a and b hold, because we need to compute both options. Using recursion in any form is not allowed here.
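One common way around this (a minimal sketch, not necessarily what the assignment expects; has_a, has_b, option_1, option_2, better_option and base_clause are hypothetical stand-ins for the real problem) is to manage the pending subproblems yourself with an explicit stack plus a memo table: when both a and b hold, push both options and only combine their results once both are available. The same memo table is what a bottom-up dynamic-programming loop would fill in.

# Toy stand-ins so the sketch runs: here x is a non-negative integer.
def has_a(x):            return x >= 2 and x % 2 == 0
def has_b(x):            return x >= 3 and x % 3 == 0
def option_1(x):         return x - 2
def option_2(x):         return x - 3
def base_clause(x):      return x
def better_option(u, v): return max(u, v)

def f_iterative(x0):
    memo = {}      # results of already-solved subproblems
    stack = [x0]   # subproblems whose result is still pending
    while stack:
        x = stack[-1]
        if x in memo:
            stack.pop()
        elif has_a(x) and has_b(x):
            # both branches needed: wait until both subresults are in memo
            y1, y2 = option_1(x), option_2(x)
            missing = [y for y in (y1, y2) if y not in memo]
            if missing:
                stack.extend(missing)
            else:
                memo[x] = better_option(memo[y1], memo[y2])
                stack.pop()
        elif has_a(x) or has_b(x):
            y = option_1(x) if has_a(x) else option_2(x)
            if y in memo:
                memo[x] = memo[y]
                stack.pop()
            else:
                stack.append(y)
        else:
            memo[x] = base_clause(x)
            stack.pop()
    return memo[x0]

print(f_iterative(12))  # 0 for this toy instance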

iterative version of easy recursive algorithm

I have a quite simple question, I think.
I've got this problem, which can be solved very easily with a recursive function, but which I wasn't able to solve iteratively.
Suppose you have any boolean matrix, like:
M:
111011111110
110111111100
001111111101
100111111101
110011111001
111111110011
111111100111
111110001111
I know this is not an ordinary boolean matrix, but it is useful for my example.
You can see there are sort of zero-paths in there...
I want to make a function that receives this matrix and a point where a zero is stored, and that transforms every zero in the same area into a 2 (suppose the matrix can store any integer, even if it is initially boolean)
(just like when you paint a zone in Paint or any image editor)
Suppose I call the function with this matrix M and the coordinates of the upper-right corner zero; the result would be:
111011111112
110111111122
001111111121
100111111121
110011111221
111111112211
111111122111
111112221111
Well, my question is how to do this iteratively...
I hope I didn't mess it up too much.
Thanks in advance!
Manuel
PS: I'd appreciate it if you could show the function in C, S, Python, or pseudo-code, please :D
There is a standard technique for converting particular types of recursive algorithms into iterative ones; it relies on tail recursion, where the single recursive call can be replaced by a loop.
The recursive version of this code would look like (pseudo code - without bounds checking):
paint(cells, i, j) {
    if(cells[i][j] == 0) {
        cells[i][j] = 2;
        paint(cells, i+1, j);
        paint(cells, i-1, j);
        paint(cells, i, j+1);
        paint(cells, i, j-1);
    }
}
This is not simply tail recursive (there is more than one recursive call), so you have to add some sort of stack structure to hold the intermediate state. One version would look like this (pseudo code, Java-esque, again no bounds checking):
paint(cells, i, j) {
    Stack todo = new Stack();
    todo.push((i, j));
    while(!todo.isEmpty()) {
        (r, c) = todo.pop();
        if(cells[r][c] == 0) {
            cells[r][c] = 2;
            todo.push((r+1, c));
            todo.push((r-1, c));
            todo.push((r, c+1));
            todo.push((r, c-1));
        }
    }
}
Pseudo-code:
Input: Startpoint (x,y), Array[w][h], Fillcolor f

Array[x][y] = f
repeat
    hasChanged = false
    for every Array[x][y] with value f:
        check if the surrounding pixels are 0, if so:
            change them from 0 to f
            hasChanged = true
until (not hasChanged)
For this I would use a Stack or Queue object. This is my pseudo-code (Python-like):
stack.push(p0)
while stack.size() > 0:
    p = stack.pop()
    matrix[p] = 2
    for each point in Around(p):
        if matrix[point] == 0:
            stack.push(point)
The easiest way to convert a recursive function into an iterative one is to use an explicit stack data structure to store the data, instead of storing it on the call stack by calling recursively.
Pseudo code:
var s = new Stack();
s.Push( /* upper right point */ );
while not s.Empty:
    var p = s.Pop()
    m[p.x][p.y] = 2
    s.Push( /* all surrounding 0 pixels */ )
Not all recursive algorithms can be translated into a simple iterative algorithm. Normally only linear algorithms with a single branch can. This means that tree algorithms, which have two or more branches, and 2D algorithms with more paths are extremely hard to turn into iterative algorithms without using a stack (which is basically cheating).
Example:
Recursive:
listsum: N* -> N
listsum(n) ==
    if n = [] then 0
    else hd n + listsum(tl n)

Iteration:
listsum: N* -> N
listsum(n) ==
    res = 0
    forall i in n do
        res = res + i
    return res
Recursion:
treesum: Tree -> N
treesum(t) ==
    if t = nil then 0
    else let (left, node, right) = t in
        treesum(left) + node + treesum(right)

Partial iteration (try):
treesum: Tree -> N
treesum(t) ==
    res = 0
    while t <> nil
        let (left, node, right) = t in
            res = res + node + treesum(right)
            t = left
    return res
As you see, there are two paths (left and right). It is possible to turn one of these paths into iteration, but to translate the other into iteration you need to preserve the state which can be done using a stack:
Iteration (with stack):
treesum: Tree -> N
treesum(t) ==
    res = 0
    stack.push(t)
    while not stack.isempty()
        t = stack.pop()
        while t <> nil
            let (left, node, right) = t in
                stack.push(right)
                res = res + node
                t = left
    return res
This works, but a recursive algorithm is much easier to understand.
If doing it iteratively is more important than performance, I would use the following algorithm:
1. Set the initial 2.
2. Scan the matrix for a 0 adjacent to a 2.
3. If such a 0 is found, change it to 2 and go back to step 2.
This is easy to understand and needs no stack, but is very time consuming.
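A minimal Python sketch of that scan-until-stable approach (the function name and the 4-neighbourhood are assumptions; here a full pass is repeated until nothing changes, which is equivalent to restarting the scan):

def fill_by_scanning(m, start_r, start_c):
    # Repeatedly scan the whole matrix, turning any 0 next to a 2 into a 2.
    rows, cols = len(m), len(m[0])
    m[start_r][start_c] = 2          # step 1: set the initial 2
    changed = True
    while changed:                   # steps 2-3: rescan until nothing changes
        changed = False
        for r in range(rows):
            for c in range(cols):
                if m[r][c] == 0 and any(
                    0 <= r + dr < rows and 0 <= c + dc < cols and m[r + dr][c + dc] == 2
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                ):
                    m[r][c] = 2
                    changed = True
    return m

This reaches exactly the zeros connected to the starting point, just much more slowly than the stack- or queue-based versions.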
A simple way to do this iteratively is using a queue:
1. Insert the starting point into the queue.
2. Take the first element from the queue.
3. Set it to 2.
4. Put all neighbours that are still 0 into the queue.
5. If the queue is not empty, jump to 2.
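For reference, a minimal Python sketch of that queue-based (BFS) fill, assuming 4-connectivity, 0-indexed coordinates, and the matrix given as a list of lists of ints:

from collections import deque

def flood_fill(m, start_r, start_c, new_value=2):
    # Replace the 0-region containing (start_r, start_c) with new_value.
    rows, cols = len(m), len(m[0])
    if m[start_r][start_c] != 0:
        return m
    queue = deque([(start_r, start_c)])
    m[start_r][start_c] = new_value  # mark when enqueued to avoid duplicates
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and m[nr][nc] == 0:
                m[nr][nc] = new_value
                queue.append((nr, nc))
    return m

For the matrix in the question, flood_fill(M, 0, 11) (the upper-right zero) should produce the second matrix shown above.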
