Wolfram Cloud/Mathematica: working effectively with recursive functions (recursion)

I am working with Chebyshev polynomials at the moment: recursively defined polynomials. In the (quite likely) case you have never seen them before:
f[0,x_] := 1;
f[1,x_] := x;
f[n_,x_] := 2 * x * f[n-1, x] - f[n-2, x];
Plot[{f[9, x],f[3, x]},{x, -1, 1}]
And I found myself asking, since I usually work with Python, whether there is a way to build an array of functions in Wolfram Cloud to ease the process.
That way I would have to calculate every f[n] only once, which would improve the run-time quite a bit and also let me extend the range of n.

Use memoization.
In this case memoization is trickier than usual because we work with functions, not function values.
Clear[cheb]
cheb[0] = 1 &;
cheb[1] = # &;
cheb[n_] := cheb[n] = Evaluate@Expand[2 # cheb[n - 1][#] - cheb[n - 2][#]] &
The Evaluate makes sure that the inside of the Function gets evaluated even before supplying an argument.
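Since the question mentions Python as the usual working language, here is a rough analogue of the same memoization idea there, using functools.lru_cache; note this sketch caches values of the polynomials at given points rather than the polynomials themselves:
from functools import lru_cache

@lru_cache(maxsize=None)
def cheb(n, x):
    # Chebyshev recurrence: T_0 = 1, T_1 = x, T_n = 2x*T_{n-1} - T_{n-2}
    if n == 0:
        return 1
    if n == 1:
        return x
    return 2 * x * cheb(n - 1, x) - cheb(n - 2, x)

print(cheb(3, 0.5))  # -1.0, since T_3(x) = 4x^3 - 3x
The Wolfram version above is stronger in that it memoizes whole symbolic polynomials, so evaluating at a new x costs nothing extra; the lru_cache sketch caches per (n, x) pair.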

Change the sign of one number to match the sign of another number

I'm not really sure what to search for on this.
Say I have a variable A = 10 and another variable B. If B is negative I want to make A = -10; if B is positive I want A = 10.
Here is how I have been doing this quite often:
A = A * abs(B) / B
The obvious issue here is that if B is zero I get a divide by zero error.
Is there a better (preferably mathematical) way to accomplish this without the complexity of conditional statements?
Backstory: I am working with students in a graphical robotics programming language called Lego EV3. [Screenshots of the EV3 block diagrams for the algorithm above and for the conditional-statement version are omitted.] The conditional version is quite the waste of space, especially when you are working on 13" laptop screens, and confusing.
Just to turn @MBo's comment into an official answer: note that many languages have a function called sign(x) or signum(x) that returns -1, 0, or 1 if x is negative, zero, or positive respectively, and another function abs(x) (for absolute value) that can be used together with it to achieve your purpose:
A = abs(A) * sign(B)
will copy the sign from B to A if B ≠ 0. If B == 0 you will have to do something extra.
Many languages (C++, Java, Python) also have a straightforward copysign(x, y) function that does exactly what you want, returning x modified to have y's sign.
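For instance, in Python (one of the languages mentioned above), both approaches look like this; the hand-rolled sign is a common idiom, not a built-in:
import math

def sign(x):
    # returns -1, 0, or 1 (booleans act as ints in Python)
    return (x > 0) - (x < 0)

A, B = 10, -3
print(abs(A) * sign(B))       # -10, but becomes 0 when B == 0
print(math.copysign(A, B))    # -10.0
print(math.copysign(A, 0.0))  # 10.0: copysign treats plain zero as positive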
In many programming languages, a simple if statement would work:
A = 10;
if (B < 0) {
    A = -1*A;
}
If your language supports ternary expressions, we could reduce the above to a single line:
A = B < 0 ? -1*A : A;
Another option might be to define a helper function:
reverseSign(A, B) {
    if (B < 0) {
        return -1*A;
    }
    else {
        return A;
    }
}
C99 has the (POSIX) function copysign, which does just this; Fortran has had it for ages. It is also an IEEE 754 recommended function.

Time and space complexity of a "nested" recursive function

In one of the previous intro-to-CS exams there was a question: calculate the space and time complexity of the function f1 as a function of n, assuming that the time complexity of malloc(n) is O(1) and its space complexity is O(n).
int f1(int n) {
    if (n < 3)
        return 1;
    int* arr = (int*) malloc(sizeof(int) * n);
    f1(f1(n - 3));
    free(arr);
    return n;
}
The official solution is: time complexity: O(2^(n/3)), space complexity: O(n^2)
I tried to solve it but I didn't know how, until I saw a note in my notebook that said: since the function returns n, we can treat f1(f1(n-3)) as f1(n-3)+f1(n-3), or as 2f1(n-3). In this case the question becomes very similar to this one: Space complexity of recursive function
I tried solving it this way and I got the correct answer.
For the time complexity:
T(n)=2T(n-3)+1 , T(0)=1
T(n-3)=2T(n-3*2)+1
T(n)=2*2T(n-3*2)+2+1
T(n-3*2)=2T(n-3*3)+1
T(n)=2*2*2T(n-3*3)+2*2+2+1
...
T(n)=(2^k)T(n-3*k)+2^(k-1)+...+2^2+2+1
n-3*k=0
k=n/3
===> 2^(n/3)+...+2^2+2+1=2^(n/3)[1+(1/2)+(1/2^2)+...]=2^(n/3)*constant
Thus I got O(2^(n/3))
For the space complexity: the recursion depth is n/3, and each active call holds a malloc'd buffer of size O(n), so the peak usage is (n/3)·O(n), thus O(n^2).
My question:
Why can we treat f1(f1(n - 3)) as f1(n-3)+f1(n-3), or as 2f1(n-3)?
If the function didn't return n but changed it, for example return n/3 instead of return n, then how do we solve it? Do we treat it as 2f1((n-3)/3)?
If we can't always treat f1(f1(n - 3)) as f1(n-3)+f1(n-3) or as 2f1(n-3), then how do we draw the recursion tree, and how do we write and solve it using the recurrence T(n)?
Why can we treat f1(f1(n - 3)) as f1(n-3)+f1(n-3) or as 2f1(n-3)?
Because i) the nested f1 is evaluated first, and its return value is used to call the outer f1; so these nested calls are equivalent to:
int result = f1(n - 3);
f1(result);
... and ii) the return value of f1 is just its argument (except for the base case, but it doesn't matter asymptotically), so the above is further equivalent to:
f1(n - 3);
f1(n - 3); // result = n - 3
If the function didn't return n but changed it, for example return n/3 instead of return n, then how do we solve it? Do we treat it as 2f1((n-3)/3)?
Only the outer call is affected. Again, using the equivalent expression from before:
f1(n - 3); // = (n - 3) / 3
f1((n - 3) / 3);
i.e. just f1(n - 3) + f1((n - 3) / 3) for your example.
If we can't always treat f1(f1(n - 3)) as f1(n-3)+f1(n-3) or as 2f1(n-3), then how do we draw the recursion tree, and how do we write and solve it using the recurrence T(n)?
You can always separate them into two sequential calls as above, and again remember that only the second call is affected by the return value. If that value is something other than n - 3, you may need a recursion tree instead of a simple expansion; it depends on the specific problem, needless to say.
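To make the call-count argument concrete, here is a small Python translation of f1 with an instrumented counter (the malloc/free is irrelevant for counting); the count roughly doubles each time n grows by 3, consistent with O(2^(n/3)):
calls = 0

def f1(n):
    global calls
    calls += 1
    if n < 3:
        return 1
    f1(f1(n - 3))  # inner call runs first; its return value (n - 3) feeds the outer call
    return n

for n in (9, 12, 15, 18):
    calls = 0
    f1(n)
    print(n, calls)  # prints 9 15, 12 31, 15 63, 18 127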

Recursively wrapping up an element

Say I have an element <x>x</x> and some empty elements (<a/>, <b/>, <c/>), and I want to wrap the first inside the others one at a time, resulting in <c><b><a><x>x</x></a></b></c>. How do I go about this when I don't know the number of empty elements?
I can do
xquery version "3.0";
declare function local:wrap-up($inner-element as element(), $outer-elements as element()+) as element()+ {
  if (count($outer-elements) eq 3)
  then element{node-name($outer-elements[3])}{element{node-name($outer-elements[2])}{element{node-name($outer-elements[1])}{$inner-element}}}
  else if (count($outer-elements) eq 2)
  then element{node-name($outer-elements[2])}{element{node-name($outer-elements[1])}{$inner-element}}
  else if (count($outer-elements) eq 1)
  then element{node-name($outer-elements[1])}{$inner-element}
  else ($outer-elements, $inner-element)
};
let $inner-element := <x>x</x>
let $outer-elements := (<a/>, <b/>, <c/>)
return
local:wrap-up($inner-element, $outer-elements)
but is there a way to do this by recursion, not descending and parsing but ascending and constructing?
In functional programming, you usually try to work with the first element and the tail of a list, so the canonical solution would be to reverse the input before nesting the elements:
declare function local:recursive-wrap-up($elements as element()+) as element() {
  let $head := head($elements)
  let $tail := tail($elements)
  return
    element { name($head) } { (
      $head/@*,
      $head/node(),
      if ($tail)
      then local:recursive-wrap-up($tail)
      else ()
    ) }
};
let $inner-element := <x>x</x>
let $outer-elements := (<a/>, <b/>, <c/>)
return (
local:wrap-up($inner-element, $outer-elements),
local:recursive-wrap-up(reverse(($inner-element, $outer-elements)))
)
Whether reverse(...) actually requires reversing the sequence depends on your XQuery engine. In any case, reversing does not increase the computational complexity, and it may result not only in cleaner code but even in faster execution!
Something similar could be achieved by turning everything upside down, but there are no built-in functions for getting the last element and everything before it, and using the predicates last() and position() < last() will likely hurt performance. You could use XQuery arrays, but then you would have to pass counters in each recursive function call.
Which solution is fastest in the end will require benchmarking with the specific XQuery engine and code.
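As a language-agnostic aside, the same idea can be expressed as a fold; a left fold builds the nesting innermost-first, so the explicit reverse is an artifact of the head/tail recursion. A hedged Python sketch, with plain string tags standing in for the elements:
from functools import reduce

def wrap_up(inner, outers):
    # fold over the outer tags, wrapping the accumulator in each one in turn:
    # ['a', 'b', 'c'] gives <c><b><a>inner</a></b></c>
    return reduce(lambda acc, tag: f'<{tag}>{acc}</{tag}>', outers, inner)

print(wrap_up('<x>x</x>', ['a', 'b', 'c']))
# <c><b><a><x>x</x></a></b></c>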

How can this imperative code be rewritten to be more functional?

I found an answer on SO that explained how to write a randomly weighted drop system for a game. I would prefer to write this code in a more functional-programming style, but I couldn't figure out a way to do that. I'll inline the pseudocode here:
R = (some random int);
T = 0;
for o in os
    T = T + o.weight;
    if T > R
        return o;
How could this be written in a style that's more functional? I am using CoffeeScript and underscore.js, but I'd prefer this answer to be language-agnostic because I'm having trouble thinking about this in a functional way.
Here are two more functional versions in Clojure and JavaScript, but the ideas here should work in any language that supports closures. Basically, we use recursion instead of iteration to accomplish the same thing, and instead of breaking in the middle we just return a value and stop recursing.
Original pseudocode:
R = (some random int);
T = 0;
for o in os
    T = T + o.weight;
    if T > R
        return o;
Clojure version (objects are just treated as clojure maps):
(defn recursive-version
  [r objects]
  (loop [t 0
         others objects]
    (let [obj (first others)
          new_t (+ t (:weight obj))]
      (if (> new_t r)
        obj
        (recur new_t (rest others))))))
JavaScript version (using underscore for convenience). Be careful, because this could blow out the stack; it is conceptually the same as the Clojure version.
var js_recursive_version = function(objects, r) {
  var main_helper = function(t, others) {
    var obj = _.first(others);
    var new_t = t + obj.weight;
    if (new_t > r) {
      return obj;
    } else {
      return main_helper(new_t, _.rest(others));
    }
  };
  return main_helper(0, objects);
};
You can implement this with a fold (aka Array#reduce, or Underscore's _.reduce):
An SSCCE:
items = [
  {item: 'foo', weight: 50}
  {item: 'bar', weight: 35}
  {item: 'baz', weight: 15}
]
r = Math.random() * 100
{item} = items.reduce (memo, {item, weight}) ->
  if memo.sum > r
    memo
  else
    {item, sum: memo.sum + weight}
, {sum: 0}
console.log 'r:', r, 'item:', item
console.log 'r:', r, 'item:', item
You can run it many times at coffeescript.org and see that the results make sense :)
That being said, I find the fold a bit contrived, as you have to remember both the selected item and the accumulated weight between iterations, and it doesn't short-circuit when the item is found.
Maybe a compromise solution between pure FP and the tedium of reimplementing a find algorithm can be considered (using _.find):
total = 0
{item} = _.find items, ({weight}) ->
  total += weight
  total > r
Runnable example.
I find (no pun intended) this algorithm much more accessible than the first one (and it should perform better, as it doesn't create intermediate objects, and it short-circuits).
Update/side-note: the second algorithm is not "pure" because the function passed to _.find is not referentially transparent (it has the side effect of modifying the external total variable), but the algorithm as a whole is referentially transparent. If you were to encapsulate it in a findItem = (items, r) -> function, that function would be pure and would always return the same output for the same input. That's a very important thing, because it means you can get the benefits of FP while using some non-FP constructs (for performance, readability, or whatever reason) under the hood :D
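As a sketch of that encapsulation point (in Python for neutrality; find_item is the hypothetical wrapper): the running total mutates local state, but the function as a whole always returns the same output for the same input:
def find_item(items, r):
    total = 0  # local mutable state, invisible to callers
    for item in items:
        total += item['weight']
        if total > r:
            return item
    return None  # r was at least the total weight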
I think the underlying task is randomly selecting 'events' (objects) from array os with a frequency defined by their respective weights. The approach is to map (i.e. search) a random number (with uniform distribution) onto the stairstep cumulative probability distribution function.
With positive weights, their cumulative sum increases from 0 to 1. The code you gave us simply searches starting from the 0 end. To maximize speed with repeated calls, pre-calculate the sums, and order the events so the largest weights come first.
It really doesn't matter whether you search with iteration (looping) or recursion. Recursion is nice in a language that tries to be 'purely functional', but it doesn't aid understanding of the underlying mathematical problem, and it doesn't help you package the task into a clean function. The underscore functions are another way of packaging the iterations, but they don't change the basic functionality. Only any and all exit early when the target is found.
For a small os array this simple search is sufficient. But with a large array, a binary search will be faster. Looking in underscore I find that sortedIndex uses this strategy. From Lo-Dash (an underscore drop-in): "Uses a binary search to determine the smallest index at which the value should be inserted into array in order to maintain the sort order of the sorted array."
The basic use of sortedIndex is:
os = [{name: 'one',   weight: .7},
      {name: 'two',   weight: .25},
      {name: 'three', weight: .05}]
t = 0; cumweights = (t += o.weight for o in os)
i = _.sortedIndex(cumweights, R)
os[i]
You can hide the cumulative sum calculation with a nested function like:
osEventGen = (os) ->
  t = 0; xw = (t += y.weight for y in os)
  return (R) ->
    i = _.sortedIndex(xw, R)
    return os[i]
osEvent = osEventGen(os)
osEvent(.3)
# { name: 'one', weight: 0.7 }
osEvent(.8)
# { name: 'two', weight: 0.25 }
osEvent(.99)
# { name: 'three', weight: 0.05 }
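For reference, Python's standard bisect module plays the same role as sortedIndex; a minimal sketch of the generator idea above, reusing the same hypothetical names and weights:
import bisect
from itertools import accumulate

events = [{'name': 'one', 'weight': 0.7},
          {'name': 'two', 'weight': 0.25},
          {'name': 'three', 'weight': 0.05}]

cumweights = list(accumulate(e['weight'] for e in events))  # [0.7, 0.95, 1.0]

def os_event(r):
    # index of the first cumulative weight strictly greater than r
    return events[bisect.bisect_right(cumweights, r)]

print(os_event(0.3))   # {'name': 'one', 'weight': 0.7}
print(os_event(0.8))   # {'name': 'two', 'weight': 0.25}
print(os_event(0.99))  # {'name': 'three', 'weight': 0.05}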
In CoffeeScript, Jed Clinger's recursive search could be written like this:
foo = (x, r, t=0) ->
  [y, x...] = x
  t += y
  return [y, t] if x.length == 0 or t > r
  return foo(x, r, t)
A loop version using the same basic idea is:
foo = (x, r) ->
  t = 0
  while x.length and t <= r
    [y, x...] = x  # the [first, rest] split
    t += y
  y
Tests on jsPerf (http://jsperf.com/sortedindex) suggest that sortedIndex is faster when os.length is around 1000, but slower than the simple loop when the length is more like 30.

What are block expressions actually good for?

I just solved the first problem from Project Euler in JavaFX for the fun of it, and wondered what block expressions are actually good for. Why are they superior to functions? Is it because of the narrowed scope? Less to write? Performance?
Here's the Euler example. I used a block here, but I don't know if it actually makes sense:
// sums up all numbers from low to high exclusive which are divisible by a or b
function sumDivisibleBy(a: Integer, b: Integer, high: Integer) {
    def low = if (a <= b) a else b;
    def sum = {
        var result = 0;
        for (i in [low .. <high] where i mod a == 0 or i mod b == 0) {
            result += i
        }
        result
    }
}
Does a block make sense here?
Well, no, not really, it looks like extra complexity with no real benefit. Try removing the sum variable and the block and you will see that the code still works the same.
In general, block expressions can be useful when you want to create an anonymous scope rather than factor the code out into a function; most of the time, though, you should just create a function.
