Variance of angles - math

I would like to calculate the variance of angles.
The problem is that angles are cyclic.
Variance{0°,0°,0°,0°,360°,360°,360°,360°} = 32400 - should be 0.
Variance{0°,0°,0°,0°,90°,90°,90°,90°} = 2025 - correct.
You get the idea...
Is there a proper way to compute this?

The typical moments you know about (expectation, (co)variance, etc.) are defined for random variables whose support is Euclidean space (R^n). Your random variable's support is not Euclidean space, so expectation and variance are not defined for it (at least not in the usual way).
Take for example this set: { 0, π, 0, π, 0, π, ... }. These are 2N samples of the random angle variable A. What is the expectation of A, E[A]? π/2 or 3π/2?
You need to adjust your question to make sense, either by asking for something different, or by explicitly defining what you mean by variance.

You cannot do this directly, because with angles 0 = 360 and so on.
What I have done before is compute statistics on the points (x, y) = (cos φ, sin φ) and then convert back to an angle with φ = atan2(y, x).

The usual way is to convert the angles to complex numbers on the unit circle, average them, and take the argument as the average angle. This gives you the mean angle, from which you can compute the variance normally (but remember to add or subtract 360 degrees until the difference is in the range (-180 degrees, 180 degrees) before squaring).
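Both answers amount to the same "circular statistics" recipe. A minimal MATLAB sketch, assuming angles is a vector of angles in radians (circVar here is the standard circular variance, one minus the mean resultant length, rather than the squared-degrees variance from the question):

z = mean(exp(1i * angles));                    % average point on the unit circle
meanAngle = angle(z);                          % mean direction, in (-pi, pi]
circVar = 1 - abs(z);                          % 0 for {0, 360, ...}, as desired
d = mod(angles - meanAngle + pi, 2*pi) - pi;   % wrap differences into [-pi, pi)
angVar = mean(d.^2);                           % variance about the mean direction

For the question's second example this gives meanAngle = pi/4 (45 degrees) and angVar = pi^2/16, which is 2025 in squared degrees, matching the expected value.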


Calculate Normals from Heightmap

I am trying to convert a heightmap into a matrix of normals using central differencing, which will later correspond to the steepness at a given point.
I found several links with correct results but without explaining the math behind them.
  T
L O R
  B
(T, L, R and B are the height samples above, to the left of, to the right of, and below the centre sample O.)
From this link I realised I can just do:
Vec3 normal = Vec3(2*(R-L), 2*(B-T), -4).Normalize();
The thing is that I don't know where the 2* and -4 come from.
In this explanation of central differencing I see that we should divide that value by 2, but I still don't know how to connect all of this.
What I really want to know is the linear algebra definition behind this.
I have a heightmap; I want to take the central differences and obtain the normal vector, to use later to measure the steepness.
PS: the Z-axis is the height.
From vector calculus, the normal of a surface f(x, y, z) = 0 is given by the gradient operator:
n = ∇f = (∂f/∂x, ∂f/∂y, ∂f/∂z)
A height map h(x, y) is a special form of the function f:
f(x, y, z) = h(x, y) − z = 0, so n = (∂h/∂x, ∂h/∂y, −1)
For a discretized height map, assuming that the grid size is 1, the first-order central-difference approximations to the two derivative terms above are given by:
∂h/∂x ≈ (R − L) / 2,  ∂h/∂y ≈ (B − T) / 2
Since the x step from L to R is 2 (and the same for y from T to B), the above is exactly the formula you had, divided through by 4. When this vector is normalized, the factor of 4 is canceled.
(No linear algebra was harmed in the writing of this answer)
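For completeness, a small MATLAB sketch of the answer above (h is a hypothetical matrix of height samples; border cells are skipped for brevity):

function N = heightmapNormals(h)
[ny, nx] = size(h);
N = zeros(ny, nx, 3);
for y = 2:ny-1
    for x = 2:nx-1
        L = h(y, x-1); R = h(y, x+1);      % left/right neighbours
        T = h(y-1, x); B = h(y+1, x);      % top/bottom neighbours
        n = [2*(R - L), 2*(B - T), -4];    % 4 * (dh/dx, dh/dy, -1)
        N(y, x, :) = n / norm(n);          % the factor of 4 cancels here
    end
end
end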

How can I get a more precise value from a list of weighted numbers?

A is a list of increasing fixed values (frequencies). It doesn't step evenly but the values never change.
A = 5, 10, 17, 23, 30
Each value in A is weighted by the corresponding value in list B (volume).
B = 2.2, 3.5, 4.4, 3.2, 1.1
I want to calculate the loudest frequency (A). The problem is that the loudest frequency may be 14 but I can't tell from this data set. How can I calculate, based on list B, what the loudest frequency might be in list A?
Here's a rough outline of a solution: I haven't nutted out all the maths for you, but I hope it helps.
Approximate the frequency amplitude using interpolatory splines.
This will give you the function between each adjacent pair of frequency sample points as a sum of basis functions for the frequency values surrounding the pair.
This means you have a function f(x) defined on each interval.
f(x) = A phi_0(x) + B phi_1(x) + C phi_2(x) + D phi_3(x)
At the maximum
0 = f'(x) = A phi_0'(x) + B phi_1'(x) + C phi_2'(x) + D phi_3'(x)
If you're using cubic spline interpolation, the derivative will be quadratic in x, so you can obtain up to two potential extrema for each interval.
Scan through all the intervals and calculate those extrema. Check whether each one falls inside its interval; if it doesn't, it's not really a potential extremum. You now have a list of all the potential internal maxima. Add to this list the values at each node. The maximum of this list will be the maximum value of the interpolatory spline.
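A rough MATLAB sketch of this outline, using the data from the question (note that fminbnd only converges to one local maximum; for full robustness you would check each interval's candidate extrema as described above):

A = [5 10 17 23 30];
B = [2.2 3.5 4.4 3.2 1.1];
pp = spline(A, B);                              % interpolatory cubic spline
[xPeak, negB] = fminbnd(@(x) -ppval(pp, x), min(A), max(A));
loudestFreq = xPeak;                            % frequency at the spline's peak
maxVolume = -negB;                              % interpolated volume there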
You have not been terribly clear here (IMHO). I don't know what it means to "weight" values in A by B. Do we assume we are to treat B as a function of A? Therefore, we are looking for the frequency (A) such that B attains its maximum value, AS A FUNCTION OF A?
If so, this is simply a problem of interpolation, then maximizing the interpolant. Depending on what tools you have available to you, you might do a spline interpolation, as shown in this figure. Then it would be merely a problem of finding the maximum value of that spline.
This spline model suggests the maximum value is Bmax=4.4132, which occurs at A=16.341.
Alternatively, one might simply fit an interpolating polynomial through the points. Your data is not so noisy that a 4th-degree polynomial will be ill-posed. (Had you more points, a high-order polynomial would be a terrible idea; then you might use a piecewise Lagrange interpolant.) Done in MATLAB,
>> P = polyfit(A,B,4)
P =
6.6992e-05 -0.0044803 0.084249 -0.34529 2.3384
I'll plot the polynomial itself.
>> ezplot(@(x) polyval(P,x),[5,30])
We can find the maximum value by looking for a root (zero value) of the derivative function. Since the derivative is a cubic polynomial, there are three roots. Only one of them is of interest.
>> roots(polyder(P))
ans =
31.489
16.133
2.5365
The root of interest is at 16.133, which is consistent with the prediction from the interpolating spline.

efficiently determining if a polynomial has a root in the interval [0,T]

I have polynomials of nontrivial degree (4+) and need to robustly and efficiently determine whether or not they have a root in the interval [0,T]. The precise location or number of roots don't concern me, I just need to know if there is at least one.
Right now I'm using interval arithmetic as a quick check to see if I can prove that no roots can exist. If I can't, I'm using Jenkins-Traub to solve for all of the polynomial roots. This is obviously inefficient since it's checking for all real roots and finding their exact positions, information I don't end up needing.
Is there a standard algorithm I should be using? If not, are there any other efficient checks I could do before doing a full Jenkins-Traub solve for all roots?
For example, one optimization I could do is to check if my polynomial f(t) has the same sign at 0 and T. If not, there is obviously a root in the interval. If so, I can solve for the roots of f'(t) and evaluate f at all roots of f' in the interval [0,T]. f(t) has no root in that interval if and only if all of these evaluations have the same sign as f(0) and f(T). This reduces the degree of the polynomial I have to root-find by one. Not a huge optimization, but perhaps better than nothing.
Sturm's theorem lets you calculate the number of real roots in the range (a, b). Given the number of roots, you know if there is at least one. From the bottom half of page 4 of this paper:
Let f(x) be a real polynomial. Denote it by f0(x) and its derivative f′(x) by f1(x). Proceed as in Euclid's algorithm to find
f0(x) = q1(x) · f1(x) − f2(x),
f1(x) = q2(x) · f2(x) − f3(x),
⋮
fk−2(x) = qk−1(x) · fk−1(x) − fk,
where fk is a constant, and for 1 ≤ i ≤ k, fi(x) is of degree lower than that of fi−1(x). The signs of the remainders are negated from those in the Euclid algorithm.
Note that the last non-vanishing remainder fk (or fk−1 when fk = 0) is a greatest common divisor of f(x) and f′(x). The sequence f0, f1, ..., fk (or fk−1 when fk = 0) is called a Sturm sequence for the polynomial f.
Theorem 1 (Sturm's Theorem) The number of distinct real zeros of a polynomial f(x) with
real coefficients in (a, b) is equal to the excess of the number of changes of sign in the sequence f0(a), ..., fk−1(a), fk over the number of changes of sign in the sequence f0(b), ..., fk−1(b), fk.
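Here is a hedged MATLAB sketch of the construction above, with polynomials as coefficient vectors (highest power first). sturmRootCount and signChanges are hypothetical helper names, and floating-point round-off can spoil the sign tests, so an exact (rational) implementation is safer for ill-conditioned polynomials:

function n = sturmRootCount(p, a, b)
seq = {p, polyder(p)};
while numel(seq{end}) > 1
    [~, r] = deconv(seq{end-1}, seq{end});
    r = -r;                            % Sturm negates Euclid's remainders
    idx = find(r, 1);                  % strip leading zeros
    if isempty(idx), break, end        % zero remainder: reached the gcd
    seq{end+1} = r(idx:end);
end
n = signChanges(seq, a) - signChanges(seq, b);
end

function v = signChanges(seq, x)
vals = cellfun(@(c) polyval(c, x), seq);
vals = vals(vals ~= 0);                % drop exact zeros from the sequence
v = sum(diff(sign(vals)) ~= 0);
end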
You could certainly do binary search on your interval arithmetic. Start with [0,T] and substitute it into your polynomial. If the result interval does not contain 0, you're done. If it does, divide the interval in 2 and recurse on each half. This scheme will find the approximate location of each root pretty quickly.
If you eventually get 4 separate intervals with a root, you know you are done. Otherwise, I think you need to get to intervals [x,y] where f'([x,y]) does not contain zero, meaning that the function is monotonically increasing or decreasing and hence contains at most one zero. Double roots might present a problem, I'd have to think more about that.
Edit: if you suspect a multiple root, find roots of f' using the same procedure.
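A rough MATLAB sketch of that scheme, using a crude term-by-term interval evaluation (valid because the interval of interest satisfies 0 ≤ lo ≤ hi). hasRootIn and polyBounds are hypothetical names, and the final tolerance branch conservatively reports "maybe a root" as true:

function yes = hasRootIn(p, lo, hi, tol)
[pmin, pmax] = polyBounds(p, lo, hi);
if pmin > 0 || pmax < 0
    yes = false;                       % enclosure excludes zero: no root here
elseif polyval(p, lo) * polyval(p, hi) <= 0
    yes = true;                        % sign change: a root is guaranteed
elseif hi - lo < tol
    yes = true;                        % too small to split further: assume a root
else
    mid = (lo + hi) / 2;               % bisect and recurse on each half
    yes = hasRootIn(p, lo, mid, tol) || hasRootIn(p, mid, hi, tol);
end
end

function [pmin, pmax] = polyBounds(p, lo, hi)
n = numel(p) - 1;
pmin = 0; pmax = 0;
for k = 0:n                            % bound each term c * x^k over [lo, hi]
    c = p(n - k + 1);
    t1 = c * lo^k; t2 = c * hi^k;
    pmin = pmin + min(t1, t2);
    pmax = pmax + max(t1, t2);
end
end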
Use Descartes' rule of signs to glean some information. Just count the number of sign changes in the coefficients. This gives you an upper bound on the number of positive real roots. Consider the polynomial P.
P = 131.1 - 73.1*x + 52.425*x^2 - 62.875*x^3 - 69.225*x^4 + 11.225*x^5 + 9.45*x^6 + x^7
In fact, I've constructed P to have a simple list of roots. They are...
{-6, -4.75, -2, 1, 2.3, -i, +i}
Can we determine if there is a root in the interval [0,3]? Note that there is no sign change in the value of P at the endpoints.
P(0) = 131.1
P(3) = 4882.5
How many sign changes are there in the coefficients of P? There are 4 sign changes, so there may be as many as 4 positive roots.
But, now substitute x+3 for x into P. Thus
Q(x) = P(x+3) = ...
4882.5 + 14494.75*x + 15363.9*x^2 + 8054.675*x^3 + 2319.9*x^4 + 370.325*x^5 + 30.45*x^6 + x^7
See that Q(x) has NO sign changes in the coefficients. All of the coefficients are positive values. Therefore there can be no roots larger than 3.
So there MAY be 0, 2, or 4 roots in the interval [0,3].
At least this tells you whether to bother looking at all. Of course, if the function has opposite signs on each end of the interval, we know there are an odd number of roots in that interval.
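A small sketch of both steps in MATLAB (descartesUpperBound and polyshift are hypothetical helpers; polyshift expands p(x + c) by Horner's scheme). With the example above, descartesUpperBound(P) returns 4 and descartesUpperBound(polyshift(P, 3)) returns 0:

function n = descartesUpperBound(p)
s = sign(p(p ~= 0));                   % signs of the nonzero coefficients
n = sum(diff(s) ~= 0);                 % number of sign changes
end

function q = polyshift(p, c)
q = p(1);
for k = 2:numel(p)
    q = conv(q, [1 c]);                % multiply the accumulated poly by (x + c)
    q(end) = q(end) + p(k);            % Horner step: add the next coefficient
end
end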
It's not that efficient, but it is quite reliable: you can construct the polynomial's companion matrix (a sparse matrix whose eigenvalues are the polynomial's roots).
There are efficient eigenvalue algorithms that can find eigenvalues in a given interval. One of them is inverse iteration, which finds the eigenvalue closest to a given shift; just use the midpoint of the interval as that shift.
If f(0)*f(T) <= 0 then you are guaranteed to have a root. Otherwise you can start splitting the domain into two parts (bisection) and check the values at the ends, until you are confident there is no root in that segment.
If f(0)*f(T) > 0 you have either zero, two, four, ... roots (up to the polynomial's degree); if f(0)*f(T) < 0 you may have one, three, five, ... roots.

Problem minimizing function in Matlab (fmincon)

I have a function which calculates the acoustic strength of a fish depending on the incident angle of the wavefront on the fish. I also have some in situ measurements of acoustic strength. What I'm trying to do is figure out which normal distribution of angles results in the model data matching up most closely with the in situ data.
To do this, I'm trying to use the Matlab function fmincon to minimize the following function:
function f = myfun(x)
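% NB: L and TS_insitu must already exist in this scope (e.g. as globals
% or via a nested function); x(1) is the mean and x(2) the standard
% deviation of the orientation distribution being fitted.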
TS_krm = KRM(normrnd(x(1),x(2),100,1), L);
f = sum((TS_insitu - TS_krm).^2);
So what this function does is calculate the sum of squared residuals, which I want to minimize. To do this, I try using fmincon:
x = fmincon(@myfun, [65;8], [], [], [], [], [0;0], [90;20], [], options);
Thus, I'm using a starting orientation with a mean of 65 degrees and a standard deviation of 8. I'm also setting the mean angle bounds to be from 0 to 90 degrees and the standard deviation bounds to be from 0 to 20 degrees.
Yet it doesn't seem to be properly finding the mean and standard deviation angles which minimize the function. Usually it outputs something right around N(65,8), almost like it isn't really trying many other values far from the starting points.
Any ideas on what I can do to make this work? I know I can set the TolX and TolFun settings, but I'm not really sure what those do and what effect they'd have. If it helps, the typical values that I'm dealing with are usually around -45 dB.
Thanks!
You should look at the order of magnitude of the values of f for different inputs; it might influence the values you need to put in TolFun (the tolerance of the minimization algorithm to changes in f). For example, if TolFun = 1e-6 and the difference between f(45) and f(65) is 1e-7, the algorithm might stop at 65.
Also, I think the algorithm that you are using assumes that the function is differentiable (it uses derivatives to decide where to go next), and I'm not sure this is the case for your function. If it is not, you should use a simplex method (e.g. fminsearch) to find the minimum.
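As a concrete starting point (the tolerance values here are placeholders to experiment with, not recommendations), the options might be set like this:

options = optimset('TolFun', 1e-10, 'TolX', 1e-6, 'Display', 'iter');
x = fmincon(@myfun, [65;8], [], [], [], [], [0;0], [90;20], [], options);

Also note that because myfun draws fresh random angles with normrnd on every call, the objective itself is noisy; fixing the random seed inside myfun (or reusing one pre-drawn sample of angles) makes the surface much easier for a derivative-based method to minimize.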

How to determine whether a polynomial curve is monotonic on an interval [a,b]?

I have a polynomial curve, and I want to find all monotonic curve segments and the corresponding intervals programmatically.
What's the best way to do this...
I want to avoid solving equations like f'(x) = 0; some nice numerical way to do this, like bisection, is preferred.
The expression for f'(x) is available.
Thanks.
Edit: some additional details. For example, I have a curve in 2D space whose coordinates are polynomials in t:
x = f(t)
y = g(t)
t ∈ [0, 1]
So, if I want to get its monotonic curve segments, I must know the positions t where its tangent vector is (1, 0).
One direct way to resolve this is to set up an equation like f'(t) = 0.
But I want to use the most efficient way to do this.
For example, I have tried a recursive approach: divide the range [0,1] into four parts, and check whether the projections of the four tangents onto the vector (1,0) all have the same direction and the two endpoints are close enough. If not, continue to divide each range into four parts, until the tangents agree in direction along both (1,0) and (0,1) and the points are close enough.
I think you will have to find the roots of f'(x) using a numerical method (feel free to implement any root-seeking algorithm you want, Wikipedia has a list). The roots will be those points where the gradient reaches zero; say x1, x2, x3.
You then have a set of intervals (-inf, x1), (x1, x2), etc.; continuity of the polynomial's derivative ensures that the gradient will be always positive or always negative between each consecutive pair of those points.
So evaluating the sign of the gradient at a point within each interval will tell you whether that interval is monotonically increasing or not. If you don't care about a "strictly" increasing section, you could patch together adjacent intervals which have positive gradient (as a point of inflection will show up as one of the f'(x) = 0 roots).
As an alternative to computing the roots of f', you can also use Sturm Sequences.
They allow counting the number of roots (here, the roots of f') in an interval.
The monotonic curve segments are delimited by the roots of f'(x). You can find the roots by using an iterative algorithm like Newton's method.
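A short MATLAB sketch of the roots-of-the-derivative approach from both answers, assuming p is a coefficient vector (highest power first) and [a, b] is the interval of interest (tol is an arbitrary cutoff for treating a computed root as real):

pd = polyder(p);
tol = 1e-9;
r = roots(pd);
r = real(r(abs(imag(r)) < tol));       % numerically real critical points
r = sort(r(r > a & r < b));            % ... lying strictly inside (a, b)
edges = [a; r; b];
for k = 1:numel(edges) - 1
    mid = (edges(k) + edges(k+1)) / 2; % f' has constant sign on each piece
    if polyval(pd, mid) > 0
        fprintf('increasing on [%g, %g]\n', edges(k), edges(k+1));
    else
        fprintf('decreasing on [%g, %g]\n', edges(k), edges(k+1));
    end
end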
