How does one find a recurrence equation (for this pseudocode)? - recursion

I've been tasked with finding the answers for the questions below, but honestly I'm completely lost as to where to start. I'm not looking for straight answers per se (although they would be appreciated), but rather for how I can find/derive them. I understand recursion; I just don't understand how to find a recurrence equation.
a.) Give the recurrence for the expected running time of RANDOM.
b.) Give the exact recurrence equation for the expected number of recursive calls executed by a call to RANDOM(n).
c.) Give the exact recurrence equation for the expected number of times the statements on line 14 are executed, in all calls to RANDOM(n), recursive or not.
Pseudocode:
Function RANDOM(u)
    if u = 1 then
        return(1)
    else
        assign x=0 with probability 1/2, or
        assign x=1 with probability 1/3, or
        assign x=2 with probability 1/6
        if x=0 then
            return(RANDOM(u-1) + RANDOM(u-2))
        end-if
        if x=1 then
            return(RANDOM(u) + 2*RANDOM(u-1))
        end-if
        if x=2 then
            return(3*RANDOM(u) + RANDOM(u) + 3)
        end-if
    end-if
end-RANDOM

First of all it is important to note that, since the questions ask for run time / no. of calls, the coefficients in front of the recursive calls to RANDOM don't matter (because none of the answers depend on the actual return value).
Also, since the questions ask for expected quantities, you can mix the appropriate recursive calls probabilistically.
a)
Starting off quite easy. Probabilistic mixing of functions:
T(u) = [1/2] * [T(u-1) + T(u-2)] +
[1/3] * [T(u) + T(u-1)] +
[1/6] * [T(u) + T(u) ] // + constant amount of work
b)
Same as before, but remember to add one for each call:
N(u) = [1/2] * [N(u-1) + N(u-2) + 2] +
[1/3] * [N(u) + N(u-1) + 2] +
[1/6] * [N(u) + N(u) + 2] // no constants here
c)
This is trickier than the other two. It seems contradictory that the question asks about "all calls to RANDOM(n), recursive or not", but only counts the ones on line 14, which are recursive...
Anyway, ignoring this minor detail, the key thing to note is that the call to RANDOM(u) on line 11 can also produce the required recursive calls, without contributing to the total count itself. Adapting the above:
R(u) = [1/3] * [R(u) ] + // don't add 1 here
[1/6] * [R(u) + R(u) + 2] // add 2 as before
Note that the question probably expects you to rearrange all of the T(u), N(u), R(u) terms to the LHS. I'll leave that to you since it is trivial.
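As a quick sanity check, here is a small Python sketch of b) after that rearrangement, which gives N(u) = (5/2)*N(u-1) + (3/2)*N(u-2) + 6. The base cases N(0) = N(1) = 0 are my own assumption, since the pseudocode only terminates explicitly at u = 1:

from fractions import Fraction

def expected_calls(u):
    # Assumed base cases: no recursive calls for u <= 1.
    N = [Fraction(0), Fraction(0)]
    for _ in range(2, u + 1):
        # N(u) = (5/2)*N(u-1) + (3/2)*N(u-2) + 6, from recurrence b)
        N.append(Fraction(5, 2) * N[-1] + Fraction(3, 2) * N[-2] + 6)
    return N[u]

print(expected_calls(5))  # expected number of recursive calls made by RANDOM(5)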

Related

21 matchsticks # of possible games

I am sure everyone is familiar with the famous 21 matchsticks game where each person picks up 1,2 or 3 matches and the last person to pick up a match loses.
Let's simplify the game and assume that it is only possible to pick 1 or 2 matches. My question is, how many games are possible?
I know this is very easy to solve recursively, however, I am trying to come up with a combinatorial solution.
To provide an example, let's reduce 21 to just 4 matches. The number of possible games would be 5. {'MCM', 'MMMM', 'CC', 'CMM', 'MMC'}. Where C represents removing 2 matches and M represents removing a single match.
The symbolic method allows us to deduce that the generating function for this combinatorial class (the original game, where one, two or three matches may be taken) is
f(z) = 1/(1 - z - z^2 - z^3)
At this point, we can obtain the answer through a power series expansion, e.g. see here. The coefficient on z^21 will give the number of possible games in "21 matchsticks" (it comes out to 223317).
Looking back, suppose that players were allowed to take one match only. Then, there would be only one possible scenario. For each game length (power of z), there is only one game outcome:
1/(1 - z) = 1*1 + 1*z + 1*z^2 + 1*z^3 + 1*z^4 + 1*z^5 + ...
If players are allowed to take one or two matches, we have multiple scenarios:
1/(1 - z - z^2) = 1*1 + 1*z + 2*z^2 + 3*z^3 + 5*z^4 + 8*z^5 + ...
The coefficients recover the Fibonacci sequence and can be interpreted as the number of integer compositions of n using only the numbers 1 and 2.
Allowing for taking one, two or three matches leads to the following expansion,
1/(1 - z - z^2 - z^3) = 1*1 + 1*z + 2*z^2 + 4*z^3 + 7*z^4 + 13*z^5 + ...
which can be found in this OEIS sequence, cordially named the "Tribonacci numbers".
It is possible to arrive at the 223317 answer using pen, paper and a shifted generalization of Pascal's triangle, although I will leave that task to someone else.
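For anyone who wants to verify the numbers, here is a short Python sketch (the helper name count_games is my own) that expands these generating functions via their linear recurrences:

def count_games(n, parts):
    # Coefficient of z^n in 1/(1 - sum of z^p for p in parts), i.e. the
    # number of integer compositions of n into parts from `parts`.
    a = [1] + [0] * n  # a[0] = 1: the empty game
    for m in range(1, n + 1):
        a[m] = sum(a[m - p] for p in parts if p <= m)
    return a[n]

print(count_games(21, (1, 2)))     # 17711: the simplified (1-or-2) game
print(count_games(21, (1, 2, 3)))  # 223317: the original game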
As an aside, I highly recommend the book "Analytic Combinatorics" by Philippe Flajolet and Robert Sedgewick for their introduction to the symbolic method and beyond.

Asymptotic complexity of this summation?

Just started a new course (data structures) and I'm having some trouble with a question:
For what function F(n) is the summation 0·n + 1·n + 2·n + ... + n·n in Θ(F(n))?
My thinking is that it should be when the exponent of n is 1 or less, because then the summation will be bounded between C1·F(n) and C2·F(n), matching the definition of theta, but I'm not sure about that. Thanks!
Notice that
0·n + 1·n + 2·n + ... + n·n
= n(0 + 1 + 2 + ... + n)
= n · n(n+1)/2
with that last step following from Gauss's sum. Therefore, the summation is Θ(n^3).
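A quick numeric check of the closed form and the cubic growth (my own illustration, in Python):

# sum_{i=0}^{n} i*n should equal n * n*(n+1)/2, which is Θ(n^3)
for n in (10, 100, 1000):
    s = sum(i * n for i in range(n + 1))
    assert s == n * n * (n + 1) // 2
    print(n, s / n**3)  # the ratio approaches the constant 1/2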
I'm studying the same course right now, and I think that applying the following rule might work out: if
lim n->inf f(n) / g(n) = c
for some constant 0 < c < inf, then f(n) = Θ(g(n)).

The time-corrected Verlet numerical integration formula

There is a commonly used Verlet integration formula on the web, by Johnathan Dummer, called Time-Corrected Verlet. However, I've read several forum posts where people get weird or unexpected results with it in certain conditions.
Formula by Johnathan Dummer:
x1 = x + (x – x0) * dt / dt0 + a * dt^2
There is also a Stack Overflow answer which states that Dummer's time-corrected formula is broken, and the poster presents his own derivation as the correct one.
Suggested correct formula from a Stack Overflow answer:
x1 = x + (x – x0) * dt / dt0 + a * dt * (dt + dt0) / 2
Well, is Dummer's formula really broken? If yes, is the poster's derivation better?
PS: It is also weird that Dummer uses the Verlet integration formula x1 = x - x0 + a * dt^2 on his website instead of the correct x1 = 2x - x0 + a * dt^2.
The Wikipedia page Verlet integration - Non-constant time differences presents the two formulas, without references. I've not checked the derivation myself, but the reasoning for the second, improved formula looks sound.
I've downloaded Dummer's spreadsheet and modified one of the formulas to use the correction. The results are much better.
The exact results are in yellow; we see that just using the normal Verlet algorithm with a fluctuating frame-rate is bad. Dummer's time-corrected variant, in red, is pretty good, but a little off. The dark green version with the improved correction is much better.
For projectiles under gravity, which have a quadratic solution, you may find that the improved version is exact. When the degree gets a bit higher it will vary from the true path, and it might be worth testing to see if we still get a better approximation.
Doing the same calculation for a sine curve shows the improved method is considerably better. Here time-corrected Verlet drifts quite a bit; the improved version is only a little off the exact answer.
For the PS. Note that if you set dt=dt0 in the TCV formula
x1 = x + (x – x0) * dt / dt0 + a * dt^2
you get
x1 = x + x – x0 + a * dt^2
= 2 x – x0 + a * dt^2
the original Verlet formula.
The true derivation is based on the Taylor expansions
x(t-h0) = x(t) - x'(t)*h0 + 0.5*x''(t)*h0^2 + O(h0^3)
x(t+h1) = x(t) + x'(t)*h1 + 0.5*x''(t)*h1^2 + O(h1^3)
and now eliminate x'(t) from these two formulas to get a Verlet-like formula
h0*x(t+h1) + h1*x(t-h0) = (h0+h1)*x(t) + 0.5*a(t)*h0*h1*(h0+h1) +O(h^3)
which makes a propagation formula
x(t+h1) = (1+h1/h0)*x(t) - h1/h0*x(t-h0) + 0.5*a(t)*h1*(h0+h1)
= x(t) + (x(t)-x(t-h0))*h1/h0 + 0.5*a(t)*h1*(h0+h1)
so that indeed the corrected formula is the correct one.
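To see the difference numerically, here is a small Python sketch (my own construction, with arbitrary step sizes) that drives both update formulas with a fluctuating time step under constant acceleration, where the corrected formula should be exact up to round-off:

import random

a = -9.81                          # constant acceleration (free fall)
exact = lambda t: 0.5 * a * t * t  # released from rest at the origin

random.seed(1)
steps = [random.uniform(0.005, 0.02) for _ in range(1000)]  # fluctuating dt

def run(update):
    dt0 = 0.01
    t, x0, x1 = dt0, exact(0.0), exact(dt0)  # seed with two exact samples
    for dt in steps:
        x0, x1 = x1, update(x1, x0, dt, dt0)
        t, dt0 = t + dt, dt
    return abs(x1 - exact(t))

tcv       = lambda x, x0, dt, dt0: x + (x - x0) * dt / dt0 + a * dt**2
corrected = lambda x, x0, dt, dt0: x + (x - x0) * dt / dt0 + a * dt * (dt + dt0) / 2

print(run(tcv), run(corrected))  # TCV drifts; the corrected one matches to round-off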
Note that if you use velocity Verlet steps
Verlet(dt) {
    v += a * 0.5*dt        // half kick with the old acceleration
    x += v*dt              // drift
    a = acceleration(x)    // recompute the force at the new position
    v += a * 0.5*dt        // half kick with the new acceleration
}
then each step is independently symplectic, so that changing the step size between steps is absolutely unproblematic.
Notice that the main advantage of Verlet and other similar symplectic schemes over Runge-Kutta methods etc. crucially depends on using a fixed step size. In detail, the modified energy function (and other more-than-quadratic constants of motion) that is largely conserved by the numerical method is a modification of the exact energy, where the difference scales with the step size. So when the step size changes, a different modification gives a constant energy. Frequent changes of the step size thus allow, possibly, arbitrary changes of the energy level.
I decided to quit being lazy and show some kind of derivation of how the original Verlet method looks with a variable step size, because it seems this faulty adaptation by Dummer is more pervasive than I thought, which is saddening. I also noticed that, as the answer above points out, the correct version is now on Wikipedia alongside Dummer's, though it was added after my "suggested correct answer".
When I look at the Verlet method, it looks a lot like leapfrog, velocity Verlet, implicit Euler, etc., which look like second-order versions of the modified midpoint method, and some of them may be identical. In each of these, to some degree, the integration of acceleration (into velocity) and the integration of velocity (into position) are staggered so that they overlap by half a step. This brings time-reversibility and stability, which are more important for the 'realism' of a simulation than accuracy is.
And the 'realism', the believability, is what matters most for video games. We don't care if something moves to a slightly different position than its exact mass would have truly caused, so long as it looks and feels realistic. We're not calculating where to point our high-powered satellite telescopes to look at features on distant objects, or predicting future celestial events. Here, stability and efficiency take priority over mathematical accuracy. So the leapfrog method seems appropriate. When you adapt leapfrog for a variable time step, it loses some of this advantage, and some of its appeal for game physics.
Stormer-Verlet is like leapfrog, except it uses the average velocity of the previous step instead of a separately maintained velocity, and you can adapt it for a variable time step in the same way as leapfrog. To integrate velocity forward with a fixed acceleration, you use half the length of the previous step and half the length of the next step, because the two integrations are staggered. If the steps were fixed, as in true leapfrog, the two half-lengths would sum to one whole step. Below, h is the step size, a/v/p are acceleration/velocity/position, and hl/pl are the 'last' (previous-step) values. These aren't really equations, more like assignment operations.
Original leapfrog:
v = v + a*h
p = p + v*h
With variable time step:
v = v + a*hl/2 + a*h/2
p = p + v*h
Factor a/2:
v = v + a*(hl + h)/2
p = p + v*h
Use previous position (p - pl)/hl for initial velocity:
v = (p - pl)/hl + a*(hl + h)/2
p = p + v*h
Substitute, we don't need v:
p = p + ( (p - pl)/hl + a*(hl + h)/2)*h
Distribute h:
p = p + (p - pl)*h/hl + a*h*(h + hl)/2
The result is not as simple or fast as the original Stormer form of Verlet, 2p - pl + a*h^2. I hope this makes some sense. You would omit the last step (distributing h) in actual code; there is no need to multiply by h twice.
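For reference, a minimal Python version of the final update (the seeding of the first two positions is my own arbitrary choice):

def step(p, pl, a, h, hl):
    # p, pl: current and previous positions; h, hl: next and previous step sizes
    return p + (p - pl) * h / hl + a * h * (h + hl) / 2

g, hl = -9.81, 0.016
pl, p = 0.0, 0.5 * g * hl**2             # object released from rest at 0
for h in (0.016, 0.021, 0.013, 0.017):   # fluctuating frame times
    p, pl = step(p, pl, g, h, hl), p
    hl = h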

Mathematica does not calculate absolute values of a complex number with real coefficients

Using the code FullSimplify[Abs[q + I*w], Element[{q, w}, Reals]] results in
Abs[q + I w]
and not
Sqrt[q^2 + w^2]
What am I missing?
P.S. Assuming[{q \[Element] Reals, w \[Element] Reals},
Abs[q + I*w]] does not work either.
Note: Simplify[Abs[w]^2, Element[{q, w}, Reals]] and Simplify[Abs[I*q]^2, Element[{q, w}, Reals]] work.
The problem is that what you assume to be "simple" and what MMA assumes to be simple are two different things. Taking a look at ComplexityFunction indicates that MMA primarily looks at LeafCount. Applying LeafCount gives:
In[3]:= Abs[q + I w] // LeafCount
Out[3]= 8
In[4]:= Sqrt[q^2 + w^2] // LeafCount
Out[4]= 11
So, MMA considers the Abs form to be better. (One can visually explore the simplicity using either TreeForm or FullForm.) What we need to do is tell MMA to treat Abs as more expensive. To do this, we take the example from ComplexityFunction and write:
In[7]:= f[e_] := 100 Count[e, _Abs, {0, Infinity}] + LeafCount[e]
FullSimplify[Abs[q + I w], Element[{q, w}, Reals],
ComplexityFunction -> f]
Out[8]= Sqrt[q^2 + w^2]
As requested. Basically, we are telling MMA through f[e] that the count of all parts of the form Abs should count as 100 leaves.
EDIT: As mentioned by Brett, you can also make it more general, and use _Complex as the rule to look for:
In[20]:= f[e_] := 100 Count[e, _Complex, {0, Infinity}] + LeafCount[e]
FullSimplify[Abs[q + I w], Element[{q, w}, Reals],
ComplexityFunction -> f]
Out[21]= Sqrt[q^2 + w^2]
I suggest using ComplexExpand, which tells the system that all variables are real.
In[28]:= Abs[q + I*w] // ComplexExpand
Out[28]= Sqrt[q^2 + w^2]
These comments are not helpful. Mathematica is failing to evaluate complex numbers, e.g. Abs[5 + I 20] is left unchanged. The I is coded correctly. Making abstract observations about 'what is or is not simple' is unrelated and wrong. A float should result, not some algebra. N and ImportForm do not work, either.

Best way to find the Coordinates of a Point on a Line-Segment a specified Distance Away from another Point [closed]

Image of the problem at:
In my code I have 4 points: Q, R, S, T.
I know the following:
Coordinates for R, T, and S;
That RT < RQ < RS (as distances).
I need to figure out the coordinates of Q.
I already know point Q lies on the line segment TS. However, I need the coordinates of Q, and I need the calculation to be relatively efficient.
I have several solutions to this problem, but they are all so convoluted and long that I know I must be doing something wrong. I feel certain there must be a simple, elegant way to solve this. The best solution would be one that minimizes the number of intensive calculations while also not being ridiculously long.
Q is the intersection point between a circle of radius d around R and the line TS, which leads to a quadratic equation with a number of parameters in the coefficients. I don't know if the following is "the best" solution (it may even be better to use a numerical solver in between), but it is completely worked out. Because I think it's more readable, I've changed your coordinate names to put T at (T1, T2), S at (S1, S2) and, to keep the formulas shorter, R at (0, 0) – just adjust S and T and the returned values accordingly.
tmp1 = S1^2 - S2*T2 - S1*T1 + S2^2;
tmp2 = sqrt(- S1^2*T2^2 + S1^2*d^2 + 2*S1*S2*T1*T2 - 2*S1*T1*d^2 -
S2^2*T1^2 + S2^2*d^2 - 2*S2*T2*d^2 + T1^2*d^2 + T2^2*d^2);
tmp3 = S1^2 - 2*S1*T1 + S2^2 - 2*S2*T2 + T1^2 + T2^2;
t = (tmp1 + tmp2)/tmp3;
if (0 > t || t > 1) {
// pick the other solution instead
t = (tmp1 - tmp2)/tmp3;
}
Q1 = S1+t*(T1-S1);
Q2 = S2+t*(T2-S2);
Obviously, I make no warranty that there are no typos etc. :-)
EDIT: Alternatively, you could also get a good approximation by some iterative method (say, Newton's) to find a zero of dist(S+t*(T-S), R)-d, as a function of t in [0,1]. That would take seven multiplications and one division per Newton step, if I count correctly. Re-using the names from above, that would look something like this:
t = 0.5;
d2 = d^2;
S1T1 = S1 - T1;
S2T2 = S2 - T2;
do {
tS1T1 = S1 - t*S1T1;
tS2T2 = S2 - t*S2T2;
f = tS1T1*tS1T1 + tS2T2*tS2T2 - d2;
fp = 2*(S1T1*tS1T1 + S2T2*tS2T2);
t = t + f/fp;
} while (f > eps);
Set eps to control your required accuracy, but do not set it too low – computing f does involve a subtraction that will have serious cancellation problems near the solution.
Since there are two solutions Q on the (TS) line (with only one solution between T and S), any solution probably involves some choice of sign, or arccos(), etc.
Therefore, a good solution is probably to put Q on the (TS) line like so (with vectors implied):
(1) TQ(t) = t * TS
Requiring that Q be at a distance d from R gives a 2nd-degree equation in t, which is easy to solve (again, vectors are implied):
d^2 = |RQ(t)|^2 = |RT + TQ(t)|^2
The coordinates of Q can then be obtained by putting a solution t0 into equation (1), via OQ(t0) = OT + TQ(t0), where O is some origin. The solution with 0 <= t0 <= 1 must be chosen, so that Q lies between T and S.
Now, it may happen that the final formula has some simple interpretation in terms of trigonometric functions… Maybe you can tell us what value of t and what coordinates you find with this method and we can look for a simpler formula?
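For completeness, here is a small Python sketch of this parametric approach (the function name and tuple-based points are my own):

import math

def point_on_segment(T, S, R, d):
    # Solve |T + t*(S - T) - R|^2 = d^2 for t, keeping the root in [0, 1].
    dx, dy = S[0] - T[0], S[1] - T[1]  # direction TS
    rx, ry = T[0] - R[0], T[1] - R[1]  # vector RT
    a = dx * dx + dy * dy
    b = 2 * (rx * dx + ry * dy)
    c = rx * rx + ry * ry - d * d
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                    # the circle does not reach the line
    for t in ((-b - math.sqrt(disc)) / (2 * a),
              (-b + math.sqrt(disc)) / (2 * a)):
        if 0 <= t <= 1:                # Q must lie between T and S
            return (T[0] + t * dx, T[1] + t * dy)
    return None

print(point_on_segment((0, 0), (10, 0), (5, 3), 5))  # (1.0, 0.0)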
