I am not sure whether this is the right place to ask. Here N, L, H, p, and d are parameters. I need to solve this system of equations; specifically, I need to solve for b(t) and e(t).
Variables | t = 1   | t > 1
----------|---------|-------------------------------
n(t)      | N       | N(1-p)^(t-1)
s(t)      | 1       | ((1-p+dp)/(1-p))^(t-1)
b(t)      | L       | b(t-1) + p(H - b(t-1))
e(t)      | (H-L)/2 | e(t-1) + p(H - b(t-1))/2
c(t)      | (1-d)pN | (1-d)pN(1-p+dp)^(t-1)
Please help me with how I should start solving this problem.
Since you used a Wolfram-Mathematica tag, perhaps you intend to use Mathematica
RSolve[{b[1]==L, b[t]==b[t-1]+p(H-b[t-1]),
e[1]==(H-L)/2, e[t]==e[t-1]+p(H-b[t-1])/2}, {b[t],e[t]}, t]//FullSimplify
which returns
Solve::svars: Equations may not give solutions for all "solve" variables
{b[t]->H+(-H+L)(1-p)^(-1+t),
e[t]->((H-L)(-2+(1-p)^t+2 p))/(2(-1+p))}
It seems that these formulas are recurrence equations: you take the values for t = 1 from the table, then calculate the values for t = 2, then for t = 3, and so on.
b(t) = b(t-1) + p * (H - b(t-1))
t = 1: L
t = 2: b(2) = b(1) + p * (H - b(1)) or
L + p * (H - L) = L + p * H - p * L
t = 3: b(3) = b(2) + p * (H - b(2))
Example: L= 2; p = 3; H = 7;
b(1) = 2
b(2) = 2 + 3 * (7 - 2) = 17
b(3) = 17 + 3 * (7 - 17) = -13
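As a cross-check between the two approaches, here is a small Python sketch (with arbitrary illustrative parameter values, not taken from the original problem) that iterates the recurrences for b(t) and e(t) and compares them with the closed forms returned by RSolve above:

H, L, p = 7.0, 2.0, 0.3   # arbitrary illustrative values

def closed_b(t):
    # closed form from RSolve: b[t] -> H + (-H + L) (1 - p)^(t - 1)
    return H + (-H + L) * (1 - p) ** (t - 1)

def closed_e(t):
    # closed form from RSolve: e[t] -> (H - L)(-2 + (1 - p)^t + 2 p) / (2 (-1 + p))
    return (H - L) * (-2 + (1 - p) ** t + 2 * p) / (2 * (-1 + p))

b, e = L, (H - L) / 2          # values at t = 1
for t in range(1, 6):
    print(t, b, closed_b(t), e, closed_e(t))      # iterated vs. closed form
    b, e = b + p * (H - b), e + p * (H - b) / 2   # advance both recurrences to t + 1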
How would I find the maximum possible angle (a) at which a rectangle of width (W) can sit within a slot of width (w) and depth (h)? See my crude drawing below.
Considering that w = hh + WW in the picture (the two segments of the slot width, corresponding to the two terms below), we can write the equation
h * tan(a) + W / cos(a) = w
Then, using the half-angle formulas and the substitution t = tan(a/2):
h * 2 * t / (1 - t^2) + W * (1 + t^2) / (1 - t^2) = w
h * 2 * t + W * (1 + t^2) = (1 - t^2) * w
t^2 * (W + w) + t * (2*h) + (W - w) = 0
We have a quadratic equation; solve it for the unknown t, then get the critical angle as
a = 2 * atan(t)
Quick check: a Python example for the picture above gives the correct angle value of 18.3 degrees:
import math

# Slot depth h, rectangle width W, slot width w (values from the picture above)
h = 2
W = 4.12
w = 5

# Positive root of t^2*(W + w) + t*(2*h) + (W - w) = 0, then a = 2*atan(t)
t = (math.sqrt(h*h - W*W + w*w) - h) / (W + w)
a = math.degrees(2 * math.atan(t))
print(a)  # ~18.3
Just to elaborate on the above answer, as it is not necessarily obvious, this is why you can write the equation:
h * tan(a) + W / cos(a) = w
PS: I suppose that the justification for "why a is the maximum angle" is obvious
I'm trying to find the big O of T(n) = 2T(n/2) + 1. I figured out that it is O(n) with the Master Theorem but I'm trying to solve it with recursion and getting stuck.
My solving process is
T(n) = 2T(n/2) + 1 <= c * n
(we know that T(n/2) <= c * n/2)
2T(n/2) + 1 <= 2 * c * n/2 +1 <= c * n
Now I get that 1 <= 0.
I thought about saying that
T(n/2) <= c/2 * n/2
But is it right to do so? I don't know if I've seen it before so I'm pretty stuck.
I'm not sure what you mean by "solve it with recursion". What you can do is to unroll the equation.
So first you can think of n as a power of 2: n = 2^k.
Then you can rewrite your recurrence equation as T(2^k) = 2T(2^(k-1)) + 1.
Now it is easy to unroll this:
T(2^k) = 2 T(2^(k-1)) + 1
       = 2 (2 T(2^(k-2)) + 1) + 1
       = 2 (2 (2 T(2^(k-3)) + 1) + 1) + 1
       = 2 (2 (2 (2 T(2^(k-4)) + 1) + 1) + 1) + 1
= ...
This goes down to k = 0 and hits the base case T(1) = a. In most cases one uses a = 0 or a = 1 for exercises, but it could be anything.
If we now get rid of the parentheses the equation looks like this:
T(n) = 2^k * a + 2^(k-1) + 2^(k-2) + ... + 2^1 + 2^0
From the beginning we know that 2^k = n, and we know 2^(k-1) + ... + 2 + 1 = 2^k - 1. See this as a binary number consisting of only 1s, e.g. 111 = 1000 - 1.
So this simplifies to
T(n) = 2^k * a + 2^k - 1
     = n * a + n - 1
     = n * (a + 1) - 1
And now one can see that T(n) is in O(n) as long as a is a constant.
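As a quick sanity check (assuming the base case T(1) = a), here is a small Python sketch comparing the recurrence with the closed form n * (a + 1) - 1 for powers of two:

def T(n, a):
    # the recurrence T(n) = 2*T(n/2) + 1 with base case T(1) = a
    return a if n == 1 else 2 * T(n // 2, a) + 1

a = 1
for k in range(6):
    n = 2 ** k
    print(n, T(n, a), n * (a + 1) - 1)   # the two values agree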
I would like to calculate the bisectors of two 3D lines which have an intersection point. The lines are sympy lines defined by a point and a direction vector. How can I find the equations of the two lines which bisect them?
Let the lines be defined as A + t * dA and B + s * dB, where A, B are base points and dA, dB are normalized direction vectors.
If it is guaranteed that the lines intersect, the intersection can be found using a dot-product approach (adapted from the skew-line minimal-distance algorithm):
u = A - B
b = dot(dA, dB)
if abs(b) == 1:  # better check with some tolerance
    lines are parallel
d = dot(dA, u)
e = dot(dB, u)
t_intersect = (b * e - d) / (1 - b * b)
P = A + t_intersect * dA
Now about bisectors:
bis1 = P + v * normalized(dA + dB)
bis2 = P + v * normalized(dA - dB)
Quick check for 2D case
k = Sqrt(1/5)
A = (3,1), dA = (-k,2k)
B = (1,1), dB = (k,2k)
u = (2,0)
b = -k^2 + 4k^2 = 3k^2 = 3/5
d = -2k, e = 2k
t = (b * e - d) / (1 - b * b) = (6/5*k + 2*k) / (16/25) = 16/5*k * 25/16 = 5*k
Px = 3 - 5*k^2 = 2
Py = 1 + 10*k^2 = 3
normalized(dA + dB) = normalized((0,4k)) = (0,1)
normalized(dA - dB) = normalized((-2k,0)) = (-1,0)
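For reference, here is a minimal plain-Python sketch of the same recipe for the 2D case (dot and normalized are local helper names, not library calls); it reproduces the numbers of the quick check above:

import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def normalized(v):
    n = math.sqrt(dot(v, v))
    return tuple(vi / n for vi in v)

k = math.sqrt(1 / 5)
A, dA = (3, 1), (-k, 2 * k)
B, dB = (1, 1), (k, 2 * k)

u = tuple(ai - bi for ai, bi in zip(A, B))      # u = A - B
b = dot(dA, dB)
d = dot(dA, u)
e = dot(dB, u)
t = (b * e - d) / (1 - b * b)                   # parameter of the intersection on the first line
P = tuple(ai + t * dai for ai, dai in zip(A, dA))

print(P)                                                 # ~ (2.0, 3.0)
print(normalized(tuple(x + y for x, y in zip(dA, dB))))  # (0.0, 1.0)
print(normalized(tuple(x - y for x, y in zip(dA, dB))))  # (-1.0, 0.0)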
Python implementation with sympy in 3D:
from sympy.geometry import Line3D, Point3D, intersection

# Normalize direction vectors:
def normalize(vector: list):
    length = (vector[0]**2 + vector[1]**2 + vector[2]**2)**0.5
    return [i / length for i in vector]

# Example points for creating two lines which intersect at A
A = Point3D(1, 1, 1)
B = Point3D(0, 2, 1)
l1 = Line3D(A, direction_ratio=[1, 0, 0])
l2 = Line3D(A, B)

d1 = normalize(l1.direction_ratio)
d2 = normalize(l2.direction_ratio)

p = intersection(l1, l2)[0]  # Point3D of intersection between the two lines
bis1 = Line3D(p, direction_ratio=[d1[i] + d2[i] for i in range(3)])
bis2 = Line3D(p, direction_ratio=[d1[i] - d2[i] for i in range(3)])
So my problem is the following:
Given a size X and an interval from A (the first number) to B (the last number), I have to find the number of all the different non-decreasing combinations (increasing or staying equal) that I can build.
Example:
Input: "2 9 11"
X = 2 | A = 9 | B = 11
Output: 9
Possible Combinations ->
[9],[9,9],[9,10],[9,11],[10,10],[10,11],[11,11],[10],[11].
Now, if it were the same input but with a different X, like X = 4, this would change a lot...
[9],[9,9],[9,9,9],[9,9,9,9],[9,9,9,10],[9,9,9,11],[9,9,10,10]...
Your problem can be reformulated to depend on just two parameters,
X and N = B - A + 1, which gives you sequences starting at 0 instead of A.
If you wanted exactly X numbers in each item, it would be a simple combination with repetition, and the formula for that is
x_of_n = (N + X - 1)! / ((N - 1)! * X!)
so for your first example it would be
X = 2
N = 11 - 9 + 1 = 3
x_of_n = 4! / (2! * 2!) = 24 / 4 = 6
to this you need to add the same with X = 1 to get x_of_n = 3, so you get the required total 9.
I am not aware of a simple closed formula for the required output, but when you expand all the formulas into one sum, there is a nice recursive sequence, where you compute the next term (N, X) from (N, X-1) and sum all the elements:
S[0] = N
S[1] = S[0] * (N + 1) / 2
S[2] = S[1] * (N + 2) / 3
...
S[X-1] = S[X-2] * (N + X - 1) / X
so for the second example you give we have
X = 4, N = 3
S[0] = 3
S[1] = 3 * 4 / 2 = 6
S[2] = 6 * 5 / 3 = 10
S[3] = 10 * 6 / 4 = 15
output = sum(S) = 3 + 6 + 10 + 15 = 34
so you can try the code here:
function count(x, a, b) {
    var i,
        n = b - a + 1,
        s = 1,
        total = 0;
    for (i = 0; i < x; i += 1) {
        s *= (n + i) / (i + 1); // beware rounding!
        total += s;
    }
    return total;
}
console.log(count(2, 9, 11)); // 9
console.log(count(4, 9, 11)); // 34
Update: If you use a language with int types (JS has only double),
you need to use s = s * (n + i) / (i + 1) instead of the *= operator, to avoid a temporary fractional number and subsequent rounding problems.
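For example, here is the same loop as a Python sketch using exact integer arithmetic (multiplying first and dividing second keeps every intermediate value an integer, since s * (n + i) is always divisible by i + 1):

def count(x, a, b):
    # count all non-decreasing sequences of length 1..x with values in [a, b]
    n = b - a + 1
    s = 1
    total = 0
    for i in range(x):
        s = s * (n + i) // (i + 1)   # exact integer division
        total += s
    return total

print(count(2, 9, 11))  # 9
print(count(4, 9, 11))  # 34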
Update 2: For a more functional version, you can use a recursive definition
function count(x, n) {
    return n < 1 || x < 1 ? 0 : 1 + count(n - 1, x) + count(n, x - 1);
}
where n = b - a + 1
I have this recurrence formula:
P(n) = ( P(n-1) + 2^(n/2) ) % (X)
s.t. P(1) = 2;
where n/2 is integer division, i.e. floor of n/2.
Since I am taking mod X, this relation should repeat at least within X outputs,
but it can start repeating before that.
How do I find this value?
It needn't repeat within X terms; consider X = 3:
P(1) = 2
P(2) = (P(1) + 2^(2/2)) % 3 = 4 % 3 = 1
P(3) = (P(2) + 2^(3/2)) % 3 = (1 + 2) % 3 = 0
P(4) = (P(3) + 2^(4/2)) % 3 = 4 % 3 = 1
P(5) = (P(4) + 2^(5/2)) % 3 = (1 + 4) % 3 = 2
P(6) = (P(5) + 2^(6/2)) % 3 = (2 + 8) % 3 = 1
P(7) = (P(6) + 2^(7/2)) % 3 = (1 + 8) % 3 = 0
P(8) = (P(7) + 2^(8/2)) % 3 = 16 % 3 = 1
P(9) = (P(8) + 2^(9/2)) % 3 = (1 + 16) % 3 = 2
P(10) = (P(9) + 2^(10/2)) % 3 = (2 + 32) % 3 = 1
P(11) = (P(10) + 2^(11/2)) % 3 = (1 + 32) % 3 = 0
P(12) = (P(11) + 2^(12/2)) % 3 = (0 + 64) % 3 = 1
and you see that the period is 4.
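If you only need the value for a concrete X, a brute-force sketch like the following finds it directly; it tracks the triple (P(n) mod X, 2^(n/2) mod X, n mod 2), which completely determines the rest of the sequence, so the first repeated triple marks a cycle:

def p_period(X):
    seen = {}                 # triple -> first index n at which it occurred
    n, P = 1, 2               # P(1) = 2
    while True:
        state = (P % X, pow(2, n // 2, X), n % 2)
        if state in seen:
            start = seen[state]
            return start, n - start   # (start of the cycle, its length)
        seen[state] = n
        n += 1
        P = (P + pow(2, n // 2, X)) % X

print(p_period(3))   # (1, 4): the cycle starts at n = 1 and has length 4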
Generally (suppose X is odd, it's a bit more involved for even X), let k be the period of 2 modulo X, i.e. k > 0, 2^k % X = 1, and k is minimal with these properties (see below).
Consider all arithmetic modulo X. Then
P(n) = 2 + ∑_{j=2}^{n} 2^(j/2)
It is easier to see when we separately consider odd and even n:
P(2*m+1) = 2 + 2 * ∑_{i=1}^{m} 2^i = 2 * ∑_{i=0}^{m} 2^i = 2*(2^(m+1) - 1) = 2^((n+2)/2) + 2^((n+1)/2) - 2
since each 2^j appears twice, for j = 2*i and j = 2*i+1. For even n = 2*m, there's one summand 2^m missing, so
P(2*m) = 2^(m+1) + 2^m - 2 = 2^((n+2)/2) + 2^((n+1)/2) - 2
and we see that the length of the period is 2*k, since the changing parts 2^((n+1)/2) and 2^((n+2)/2) have that period. The period begins immediately; there is no pre-period part (there can be a pre-period for even X).
Now k <= φ(X) by Euler's generalisation of Fermat's theorem, so the period is at most 2 * φ(X).
(φ is Euler's totient function, i.e. φ(n) is the number of integers 1 <= k <= n with gcd(n,k) = 1.)
What makes it possible for the period to be longer than X is that P(n+1) is not completely determined by P(n); the value of n also plays a role in determining P(n+1). In this case the dependence is simple: each power of 2 being used twice in succession doubles the period of the pure powers of 2.
Consider the sequence a[k] = (2^k) % X for odd X > 1. It has the simple recurrence
a[0] = 1
a[k+1] = (2 * a[k]) % X
so each value completely determines the next, thus the entire following part of the sequence. (Since X is assumed odd, it also determines the previous value [if k > 0] and thus the entire previous part of the sequence. With H = (X+1)/2, we have a[k-1] = (H * a[k]) % X.)
Hence if the sequence assumes one value twice (and since there are only X possible values, that must happen within the first X+1 values), at indices i and j = i+p > i, say, the sequence repeats and we have a[k+p] = a[k] for all k >= i. For odd X, we can go back in the sequence, therefore a[k+p] = a[k] also holds for 0 <= k < i. Thus the first value that occurs twice in the sequence is a[0] = 1.
Let p be the smallest positive integer with a[p] = 1. Then p is the length of the smallest period of the sequence a, and a[k] = 1 if and only if k is a multiple of p, thus the set of periods of a is the set of multiples of p. Euler's theorem says that a[φ(X)] = 1, from that we can conclude that p is a divisor of φ(X), in particular p <= φ(X) < X.
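A short sketch using sympy to illustrate this for a few odd X: p is the multiplicative order of 2 modulo X, and it divides φ(X):

from sympy import n_order, totient

for X in (3, 5, 7, 9, 15, 21):
    p = n_order(2, X)                 # smallest p > 0 with 2^p % X == 1
    assert pow(2, p, X) == 1 and totient(X) % p == 0
    print(X, p, totient(X))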
Now back to the original sequence.
P(n) = 2 + a[1] + a[1] + a[2] + a[2] + ... + a[n/2]
= a[0] + a[0] + a[1] + a[1] + a[2] + a[2] + ... + a[n/2]
Since each a[k] is used twice in succession, it is natural to examine the subsequences for even and odd indices separately,
E[m] = P(2*m)
O[m] = P(2*m+1)
then the transition from one value to the next is more regular. For the even indices we find
E[m+1] = E[m] + a[m] + a[m+1] = E[m] + 3*a[m]
and for the odd indices
O[m+1] = O[m] + a[m+1] + a[m+1] = O[m] + 2*a[m+1]
Now if we ignore the modulus for the moment, both E and O are geometric sums, so there's an easy closed formula for the terms. They have been given above (in slightly different form),
E[m] = 3 * 2^m - 2 = 3 * a[m] - 2
O[m] = 2 * 2^(m+1) - 2 = 2 * a[m+1] - 2 = a[m+2] - 2
So we see that O has the same (minimal) period as a, namely p, and E also has that period. Unless X is divisible by 3, p is also the minimal (positive) period of E (if X is divisible by 3, the minimal positive period of E could be a proper divisor of p; for X = 3, e.g., E is constant).
Thus we see that 2*p is a period of the sequence P obtained by interlacing E and O.
It remains to be seen that 2*p is the minimal positive period of P. Let m be the minimal positive period. Then m is a divisor of 2*p.
Suppose m were odd, m = 2*j+1. Then
P(1) = P(m+1) = P(2*m+1)
P(2) = P(m+2) = P(2*m+2)
and consequently
P(2) - P(1) = P(m+2) - P(m+1) = P(2*m+2) - P(2*m+1)
But P(2) - P(1) = a[1] and
P(m+2) - P(m+1) = a[(m+2)/2] = a[j+1]
P(2*m+2) - P(2*m+1) = a[(2*m+2)/2] = a[m+1] = a[2*j+2]
So we must have a[1] = a[j+1], hence j is a period of a, and a[j+1] = a[2*j+2], hence j+1 is a period of a too. But that means that 1 is a period of a, which implies X = 1, a contradiction.
Therefore m is even, m = 2*j. But then j is a period of O (and of E), thus a multiple of p. On the other hand, m <= 2*p implies j <= p, and the only (positive) multiple of p satisfying that inequality is p itself, hence j = p, m = 2*p.
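As a numerical cross-check of this conclusion (assuming odd X > 1), here is a small sketch comparing a brute-force minimal period of P with 2*p, where p is the order of 2 modulo X:

from sympy import n_order

def P_prefix(X, length):
    # the first `length` values P(1), P(2), ... reduced mod X
    seq, P = [2 % X], 2
    for n in range(2, length + 1):
        P = (P + pow(2, n // 2, X)) % X
        seq.append(P)
    return seq

def minimal_period(seq):
    # smallest d with seq[i] == seq[i + d] for every valid i
    return next(d for d in range(1, len(seq))
                if all(seq[i] == seq[i + d] for i in range(len(seq) - d)))

for X in (3, 5, 7, 9, 15):
    p = n_order(2, X)
    seq = P_prefix(X, 10 * p + 20)          # comfortably longer than one period
    print(X, minimal_period(seq), 2 * p)    # the last two numbers agree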