I have this formula: A(k) + iB(k) + C(k)e^(5 * pi() * i * k/12500)
where (for example) B(k) = sin(k * pi()/20) and k = 1, 2, 3,..., 2500
How should I write this formula in R?
Thanks in advance
If a and b are numbers, you can write a + bi exactly like that in R, e.g. 1 + 2i is legal both in math and in R. To write i by itself, use 1i, so iB(k) is written 1i * B(k).
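For the whole formula, a minimal R sketch (A and C are hypothetical placeholders here, since only B is defined in the question; note that the constant in R is pi, not pi(), and exp() accepts complex arguments, so the expression vectorizes over k):

A <- function(k) k            # placeholder: substitute your actual A
B <- function(k) sin(k * pi / 20)
C <- function(k) 1            # placeholder: substitute your actual C
k <- 1:2500
z <- A(k) + 1i * B(k) + C(k) * exp(5 * pi * 1i * k / 12500)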
I have a Computer Science midterm tomorrow and I need help determining the complexity of the recursive function below, which is more complicated than the ones I've already worked on: it has two variables
T(n) = 3 + mT(n-m)
In simpler cases where m is a constant, the formula can easily be obtained by unrolling the relation; in this case, however, unrolling doesn't make life any easier, as you can see below (let's say T(0) = c):
T(n) = 3 + mT(n-m)
T(n-1) = 3 + mT(n-m-1)
T(n-2) = 3 + mT(n-m-2)
...
Obviously, there's no straightforward elimination among these equations. So I'm wondering whether I should use another technique for such cases.
Don't worry about m - it's just a constant parameter. However, you're unrolling the recursion incorrectly. Each step of unrolling involves three operations:
Taking the value of T at an argument that is m less
Multiplying it by m
Adding the constant 3
So, it will look like this:
T(n) = m * T(n - m) + 3 = (Step 1)
= m * (m * T(n - 2*m) + 3) + 3 = (Step 2)
= m * (m * (m * T(n - 3*m) + 3) + 3) + 3 = ... (Step 3)
and so on. Unrolling T(n) up to step k gives the following formula:
T(n) = m^k * T(n - k*m) + 3 * (1 + m + m^2 + m^3 + ... + m^(k-1))
Now you set n - k*m = 0 to use the initial condition T(0) and get:
k = n / m
Now you need to use the formula for the sum of a geometric progression, and you'll finally get a closed formula for T(n) (I'm leaving that final step to you).
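As a sanity check, here is a minimal Python sketch that compares the recurrence against the closed form obtained from the geometric sum (spoiler for that final step; assumes m >= 2, m divides n, and T(0) = c):

# T(n) = 3 + m*T(n - m) versus T(n) = m^k * c + 3 * (m^k - 1) / (m - 1),
# where k = n / m.
def t_rec(n, m, c):
    return c if n == 0 else 3 + m * t_rec(n - m, m, c)

def t_closed(n, m, c):
    k = n // m
    return m**k * c + 3 * (m**k - 1) // (m - 1)

for m in (2, 3, 5):
    for k in range(6):
        assert t_rec(k * m, m, 7) == t_closed(k * m, m, 7)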
According to this very elaborate answer, I would estimate the maximum relative error δ_res,max of the following computation like this:
// Pseudo code
double a, b, c; // Prefilled IEEE 754 double-precision values
res = a / b * c;
res = a * (1 + δ_a) / ( b * (1 + δ_b) ) * (1 + δ_{a/b}) * c * (1 + δ_c) * (1 + δ_{(a/b)*c})
    = a / b * c * (1 + δ_a) / (1 + δ_b) * (1 + δ_{a/b}) * (1 + δ_c) * (1 + δ_{(a/b)*c})
    = a / b * c * (1 + δ_res)
=> δ_res = (1 + δ_a) / (1 + δ_b) * (1 + δ_{a/b}) * (1 + δ_c) * (1 + δ_{(a/b)*c}) - 1
Here δ_a, δ_b, δ_c are the representation errors of the operands, and δ_{a/b}, δ_{(a/b)*c} are the rounding errors introduced by the division and the multiplication, respectively.
All δs are within the bounds of ± ε / 2, where ε is 2^-52.
=> δ_res,max = (1 + ε / 2)^4 / (1 - ε / 2) - 1 ≈ 2.5 * ε
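That last approximation can be checked with a quick Python sketch using exact rational arithmetic (plain doubles would round the tiny difference away):

from fractions import Fraction

# Evaluate (1 + ε/2)^4 / (1 - ε/2) - 1 exactly and express it in units of ε.
eps = Fraction(1, 2**52)
bound = (1 + eps / 2)**4 / (1 - eps / 2) - 1
print(float(bound / eps))   # ~2.5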
Is this a valid approach for error estimation that can be used for every combination of basic floating-point operations?
PS:
Yes, I read "What Every Computer Scientist Should Know About Floating-Point Arithmetic". ;)
Well, it's probably a valid approach. I'm not sure how you've jockeyed that last line, but your conclusion is basically correct (though note that, since the theoretical error can exceed 2.5ε, in practice the error bound is 3ε).
And yes, this is a valid approach that will work for any floating-point expression of this form. However, the results won't always be as clean. Once you have addition/subtraction in the mix, rather than just multiplication and division, you usually won't be able to cleanly separate the exact expression from an error multiplier. Instead, you'll see input terms and error terms multiplied together directly, rather than the pleasantly constant relative bound you get here.
As a useful example, try deriving the maximum relative error for (a+b)-a (assuming a and b are exact).
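A quick numerical illustration of that exercise (Python sketch; Python floats are IEEE 754 doubles, and the specific values are just picked to make the rounding visible):

# In (a + b) - a the addition rounds to a multiple of ulp(a + b), so the
# absolute error is about |a + b| * ε / 2 -- which can dwarf b itself.
a, b = 1.0e15, 0.1
print((a + b) - a)   # 0.125: a 25% relative error with respect to b
a, b = 1.0e17, 1.0
print((a + b) - a)   # 0.0: a 100% relative error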
Let's say I have a program that calculates the value of the sine wave at time t. The sine wave is of the form sin(f*t + phi). Amplitude is 1.
If I only have one sin term all is fine. I can easily calculate the value at any time t.
But, at runtime, the wave form becomes modified when it combines with other waves. sin(f1 * t + phi1) + sin(f2 * t + phi2) + sin(f3 * t + phi3) + ...
The simplest solution is to have a table with columns for phi and f, iterate over all rows, and sum the results. But I suspect that once I reach thousands of rows, the computation will become slow.
Is there a different way of doing this? Like combining all the sines into one statement/formula?
If you have a Fourier series (i.e. f_i = i f for some f) you can use the Clenshaw recurrence relation which is significantly faster than computing all the sines (but it might be slightly less accurate).
In your case you can consider the sequence:
f_k = exp( i ( k f t + phi_k) ) , where i is the imaginary unit.
Notice that Im(f_k) = sin( k f t + phi_k ), which is exactly your sequence.
Also
f_k = exp( i ( k f t + phi_k) ) = exp( i k f t ) exp( i phi_k )
Hence you have a_k = exp(i phi_k). You can precompute these values and store them in an array. For simplicity, from now on assume a_0 = 0.
Now, exp( i (k + 1) f t) = exp(i k f t) * exp(i f t), so in the Clenshaw recurrence alpha_k = exp(i f t) and beta_k = 0 (the recurrence degenerates to a Horner scheme).
You can now apply the recurrence formula; in C++ you can do something like this:
#include <complex>
#include <vector>
using std::complex; using std::vector;

// Evaluates sum_k a[k] * exp(i*k*f*t) with a Horner/Clenshaw recurrence.
complex<double> clenshaw_fourier(double f, double t, const vector<complex<double>>& a)
{
    const complex<double> I(0.0, 1.0);        // imaginary unit
    const complex<double> alpha = exp(I * (f * t));
    complex<double> b = 0;
    for (int k = (int)a.size() - 1; k > 0; --k)
        b = a[k] + alpha * b;                 // one recurrence step
    return a[0] + alpha * b;
}
Assuming that a[k] == exp( i phi_k ).
The real part of the answer is the sum of cos(k f t + phi_k), while the imaginary part is the sum of sin(k f t + phi_k).
As you can see, this only uses additions and multiplications, except for exp(i f t), which is computed only once.
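A cross-check of the idea in Python, with hypothetical random phases, comparing the Horner-style evaluation against the direct sine sum (for the speed benefit you would precompute the exp(i * phi_k) values, as in the C++ version):

import cmath, math, random

f, t = 2.0, 0.37
phis = [random.uniform(0, 2 * math.pi) for _ in range(1000)]
direct = sum(math.sin(k * f * t + phi) for k, phi in enumerate(phis))

# Same recurrence as clenshaw_fourier above, with a[k] = exp(i * phi_k).
alpha = cmath.exp(1j * f * t)
b = 0
for k in range(len(phis) - 1, 0, -1):
    b = cmath.exp(1j * phis[k]) + alpha * b
total = cmath.exp(1j * phis[0]) + alpha * b
assert abs(total.imag - direct) < 1e-8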
There are different bases (plural of basis) that can be advantageous (i.e. compact) for representing different waveforms. The most common and well-known is the one you mention, usually called the Fourier basis. Daubechies wavelets, for example, are a relatively recent addition that copes with discontinuous waveforms much better than a Fourier basis does. But this is really a math topic, and you will probably get better answers if you post on Math Overflow.
Good day. I am using a quadratic Bezier curve with the following configuration:
Start Point P1 = (1, 2)
Anchor Point P2 = (1, 8)
End Point P3 = (10, 8)
I know that, given a t, I can solve for x and y using the following equations:
t = 0.5; // given example value
x = (1 - t) * (1 - t) * P1.x + 2 * (1 - t) * t * P2.x + t * t * P3.x;
y = (1 - t) * (1 - t) * P1.y + 2 * (1 - t) * t * P2.y + t * t * P3.y;
where P1.x is the x coordinate of P1, and so on.
What I've tried so far: given an x value, I solve for t using Wolfram Alpha and then plug that t into the y equation to get my (x, y) point.
However, I want to automate finding t and then y. I have a formula to get x and y given a t, but I don't have a formula to get t based on x. I'm a bit rusty with my algebra, and expanding the first equation to isolate t doesn't look too easy.
Does anyone have a formula to get t based on x? My google search skills are failing me as of now.
I think it's also worth noting that my Bezier curve faces right.
Any help will be very much appreciated. Thanks.
The problem is that what you want to solve is not a function in general:
for any t there is just one (x, y) pair,
but for any x there can be 0, 1, 2, or infinitely many solutions for t.
I would do this iteratively (see the sketch below).
You can already get any point p(t) = Bezier(t), so iterate over t to minimize the distance |p(t).x - x|:
for (t = 0.0, dt = 0.1; t <= 1.0; t += dt)
Find all local minima of d = |p(t).x - x|:
when d starts rising again, set dt *= -0.1, and stop once |dt| < 1e-6 or any other threshold. Also stop if t leaves the interval <0, 1>. Remember each solution in a list, then restore the original t, dt and reset the local-minimum search variables.
Process all the local minima:
eliminate all that have a distance bigger than some threshold/accuracy, compute y, and do what you need with the point...
It is much slower than the algebraic approach, but you can use it for any curve, not just a quadratic one.
Usually cubic curves are used, and doing this algebraically with them is a nightmare.
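A minimal Python sketch of that search (it finds one local minimum, the one nearest the start; collecting all local minima follows the steps above):

def bezier_x(t, p0, p1, p2):
    # x-coordinate of a quadratic Bezier at parameter t
    return (1 - t)**2 * p0 + 2 * (1 - t) * t * p1 + t**2 * p2

def find_t(x_target, p0, p1, p2, tol=1e-6):
    t, dt = 0.0, 0.1
    best_d = abs(bezier_x(t, p0, p1, p2) - x_target)
    while abs(dt) > tol:
        t2 = t + dt
        if not 0.0 <= t2 <= 1.0:            # left <0, 1>: reverse and refine
            dt *= -0.1
            continue
        d = abs(bezier_x(t2, p0, p1, p2) - x_target)
        if d < best_d:
            t, best_d = t2, d               # still descending: keep going
        else:
            dt *= -0.1                      # d started rising: reverse, refine
    return t

print(find_t(3.25, 1.0, 1.0, 10.0))         # ~0.5, since here x(t) = 1 + 9*t^2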
Look at your Bernstein polynomials B[i]; you have...
x = SUM_i ( B[i](t) * P[i].x )
...where...
B[0](t) = t^2 - 2*t + 1
B[1](t) = -2*t^2 + 2*t
B[2](t) = t^2
...so you can rearrange (assuming I did this right)...
0 = (P[0].x - 2*P[1].x + P[2].x) * t^2 + (-2*P[0].x + 2*P[1].x) * t + P[0].x - x
Now you should just be able to use the quadratic formula to find if the solutions for t exist (i.e., are real, not complex), and what they are.
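A small Python sketch of that rearrangement (quadratic formula, keeping only real roots inside [0, 1]):

import math

def solve_t(x, p0x, p1x, p2x):
    a = p0x - 2 * p1x + p2x
    b = -2 * p0x + 2 * p1x
    c = p0x - x
    if a == 0:                               # degenerate: x(t) is linear in t
        return [-c / b] if b != 0 else []
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                            # no real solutions
    r = math.sqrt(disc)
    ts = [(-b - r) / (2 * a), (-b + r) / (2 * a)]
    return [t for t in ts if 0.0 <= t <= 1.0]

print(solve_t(5.5, 1, 1, 10))                # [0.7071...], since x(t) = 1 + 9*t^2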
import numpy as np
import matplotlib.pyplot as plt

# Control points
p0 = (1000, 2500); p1 = (2000, -1500); p2 = (5000, 3000)
# x-coordinates to fit
xcoord = [1750., 2750., 3950., 4760., 4900.]
# t variable with as few points as needed, considering accuracy. I found 30 is good enough.
t = np.linspace(0, 1, 30)
# calculate coordinates of the quadratic Bezier curve
x = (1 - t) * (1 - t) * p0[0] + 2 * (1 - t) * t * p1[0] + t * t * p2[0]
y = (1 - t) * (1 - t) * p0[1] + 2 * (1 - t) * t * p1[1] + t * t * p2[1]
# find the two curve samples bracketing each x-coordinate and
# linearly interpolate the y-coordinate between them
ycoord = []
for ind in xcoord:
    for jnd in range(len(x) - 1):
        if x[jnd] <= ind <= x[jnd + 1]:
            ytemp = (ind - x[jnd]) * (y[jnd + 1] - y[jnd]) / (x[jnd + 1] - x[jnd]) + y[jnd]
            ycoord.append(ytemp)
            break  # avoid duplicate matches at sample boundaries

plt.figure()
plt.xlim(0, 6000)
plt.ylim(-2000, 4000)
plt.plot(p0[0], p0[1], 'kx', p1[0], p1[1], 'kx', p2[0], p2[1], 'kx')
plt.plot((p0[0], p1[0]), (p0[1], p1[1]), 'k:', (p1[0], p2[0]), (p1[1], p2[1]), 'k:')
plt.plot(x, y, 'r', x, y, 'k:')
plt.plot(xcoord, ycoord, 'rs')
plt.show()
This is the approach John Carmack uses to calculate the determinant of a 4x4 matrix. From my investigations, I have determined that it starts out like the Laplace expansion theorem but then goes on to calculate 3x3 determinants, which doesn't seem to agree with any papers I've read.
// 2x2 sub-determinants
float det2_01_01 = mat[0][0] * mat[1][1] - mat[0][1] * mat[1][0];
float det2_01_02 = mat[0][0] * mat[1][2] - mat[0][2] * mat[1][0];
float det2_01_03 = mat[0][0] * mat[1][3] - mat[0][3] * mat[1][0];
float det2_01_12 = mat[0][1] * mat[1][2] - mat[0][2] * mat[1][1];
float det2_01_13 = mat[0][1] * mat[1][3] - mat[0][3] * mat[1][1];
float det2_01_23 = mat[0][2] * mat[1][3] - mat[0][3] * mat[1][2];
// 3x3 sub-determinants
float det3_201_012 = mat[2][0] * det2_01_12 - mat[2][1] * det2_01_02 + mat[2][2] * det2_01_01;
float det3_201_013 = mat[2][0] * det2_01_13 - mat[2][1] * det2_01_03 + mat[2][3] * det2_01_01;
float det3_201_023 = mat[2][0] * det2_01_23 - mat[2][2] * det2_01_03 + mat[2][3] * det2_01_02;
float det3_201_123 = mat[2][1] * det2_01_23 - mat[2][2] * det2_01_13 + mat[2][3] * det2_01_12;
return ( - det3_201_123 * mat[3][0] + det3_201_023 * mat[3][1] - det3_201_013 * mat[3][2] + det3_201_012 * mat[3][3] );
Could someone explain to me how this approach works or point me to a good write up which uses the same approach?
NOTE
If it matters this matrix is row major.
It seems to be the method that involves using minors. The mathematical aspect can be found on Wikipedia at
http://en.wikipedia.org/wiki/Determinant#Properties_characterizing_the_determinant
Basically, you reduce the matrix to something smaller and easier to compute and sum those results up (this involves some (-1) factors, which are described on the page I linked to).
He uses the standard formula where you can compute, in pseudocode,
det(M) = sum(M[0, i] * det(M.minor[0, i]) * (-1)^i)
Here minor[0, i] is the matrix you obtain by crossing out the 0-th row and i-th column of your original matrix, and (-1)^i stands for the i-th power of -1.
The same formula (up to an overall sign) will work if you take a different row, or if you loop over a column instead. If you think about how det is defined, it's pretty self-explanatory. Note how for a 2x2 matrix this becomes:
det(M) = M[0, 0] * M[1, 1] * (+1) + M[0, 1] * M[1, 0] * (-1)
or, by row 1 rather than 0,
-det(M) = M[1, 0] * M[0, 1] * (+1) + M[1, 1] * M[0, 0] * (-1)
In both you should recognize the standard formula for the determinant of a 2x2 matrix.
Similarly, for a 3x3 matrix N = [[a, b, c], [d, e, f], [g, h, i]] this leads to the formula
det(N) = a * det([[e, f], [h, i]]) - b * det([[d, f], [g, i]]) + c * det([[d, e], [g, h]])
which of course becomes the textbook formula
a*e*i + b*f*g + c*d*h - c*e*g - a*f*h - b*d*i
once you expand each of the 2x2 determinants.
Now if you take a 4x4 matrix X, you will see that to compute det(X) you need the determinants of 4 minors, each of which is a 3x3 matrix; you can expand those further, leaving you with the determinants of six 2x2 matrices with some coefficients. You should really try it yourself, similarly to what is done above for the 3x3 case.
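If you want to check it numerically, here is a Python sketch that re-implements the row-major expansion from the question and compares it with numpy.linalg.det:

import numpy as np

def det4(m):
    # 2x2 sub-determinants from rows 0 and 1 (six of them)
    d2 = {(i, j): m[0][i] * m[1][j] - m[0][j] * m[1][i]
          for i in range(4) for j in range(i + 1, 4)}
    # 3x3 sub-determinants over rows 0..2, one per choice of three columns
    d3 = {}
    for a, b, c in ((0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)):
        d3[(a, b, c)] = (m[2][a] * d2[(b, c)]
                         - m[2][b] * d2[(a, c)]
                         + m[2][c] * d2[(a, b)])
    # Laplace expansion along the last row
    return (-d3[(1, 2, 3)] * m[3][0] + d3[(0, 2, 3)] * m[3][1]
            - d3[(0, 1, 3)] * m[3][2] + d3[(0, 1, 2)] * m[3][3])

m = np.random.rand(4, 4)
assert abs(det4(m) - np.linalg.det(m)) < 1e-12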