I thought I understood the division concept, but it seems I don't. The example explains T- and F-division of 42/-10 and -42/10. My task is as follows:
You F-divided some positive number by −5 and obtained a remainder of 1. What remainder will you obtain T-dividing this number by −5?
Normally I wouldn't hesitate with division, but I fail to understand this concept.
trunc(n) will always round towards zero.
floor(n) will always round down.
If n is positive, they will behave identically. (trunc(4.7) = 4 and floor(4.7) = 4)
But if n is negative, they will round in opposite directions. (trunc(-4.7) = -4 and floor(-4.7) = -5)
Then we just expand this so that n is the result of division. For T division, we divide and then round towards zero. For F division, we divide and then round down.
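To make the difference concrete, here is a small Python sketch (illustrative only, since it goes through floating-point division; it is fine for small integers such as the example's 42/-10 and -42/10):
import math

def t_divmod(a, b):
    # T-division: quotient truncated toward zero, remainder takes the sign of a
    q = math.trunc(a / b)
    return q, a - q * b

def f_divmod(a, b):
    # F-division: quotient rounded down (floored), remainder takes the sign of b
    q = math.floor(a / b)
    return q, a - q * b

for a, b in [(42, -10), (-42, 10)]:
    print(a, b, "T:", t_divmod(a, b), "F:", f_divmod(a, b))
# 42 -10 T: (-4, 2)   F: (-5, -8)
# -42 10 T: (-4, -2)  F: (-5, 8)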
I wanted to know if there exists a scalar measure of how close a rotational matrix is to the identity matrix of the same dimensions. If not, can anyone please suggest a workaround?
I am doing an optimization study using a genetic algorithm, and rotational matrices which are close to the identity matrix are more desirable. That is why I need this measure to include in the fitness function.
A fairly simple one is
d(S,T) = sqrt( Trace( (S-T)'*(S-T) ) )
This is a metric in the mathematical sense, i.e.
d(S,T) >= 0, and d(S,T) = 0 iff S==T
d(S,T) = d(T,S)
d(S,T) <= d(S,U) + d(U,T)
Moreover it is invariant under multiplication by an orthogonal matrix, i.e.
d( U*S, U*T) = d( S*U, T*U) = d( S, T)
In the above, each of S, T, U is an orthogonal matrix.
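A minimal sketch of this distance in Python/NumPy, applied with T = I (the identity), which is the form you would drop into a fitness function; the function name is just for illustration:
import numpy as np

def dist_to_identity(R):
    # d(R, I) = sqrt(Trace((R - I)' * (R - I))), i.e. the Frobenius norm of R - I
    D = R - np.eye(R.shape[0])
    return np.sqrt(np.trace(D.T @ D))   # equivalently: np.linalg.norm(D, 'fro')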
In the case of a rotation matrix (not a homogeneous one), if the matrix is the identity then:
the magnitude (length) of each row and each column is 1.0
dot between any two rows is 0.0
dot between any two columns is 0.0
no two rows are equal
no two columns are equal
So simply check all rows or columns against some threshold. For "close to" you could construct a score, for example something like this:
#include <math.h>   // sqrt, fabs

// score how far the n x m matrix mat is from identity-like (0.0 = ideal)
double identity_score(int n, int m, double mat[n][m])
{
    double score = 0.0;
    for (int i = 0; i < n; i++)              // length of each row (should be 1)
    {
        double a = 0.0;
        for (int j = 0; j < m; j++) a += mat[i][j] * mat[i][j];
        score += fabs(1.0 - sqrt(a)) / n;
    }
    for (int i = 0; i < n; i++)              // dot product between row pairs (should be 0)
        for (int j = i + 1; j < n; j++)
        {
            double a = 0.0;
            for (int k = 0; k < m; k++) a += mat[i][k] * mat[j][k];
            score += fabs(a) / (n * n);
        }
    return score;
}
The returned score tells you how close you are: the closer it is to zero, the closer the matrix is to identity; the bigger the value, the farther from identity it is. So:
if (score<threshold) matrix_is_identity;
where threshold is some small value like 1e-3, depending on what you still consider to be "close to" identity. I constructed the example score so it should be invariant to matrix size. You can add weights between the length and dot-product parts, or add your own tests ... The first part of the score senses how far your basis vectors are from unit length, and the second part senses how close to perpendicular your basis vectors are.
In some cases it is better to take the maximum instead of accumulating (+=) into the score, like:
score = max(score,a);
instead of:
score+= a/n;
or:
score+= a/(n*n);
depends on the behavior you want ...
I came across this equation in a research paper, and can't seem to make sense of it. Let me give an argument for why it doesn't make sense, and perhaps someone can tell me where my flaw is?
P_L(d_0) is the RSSI value for distance 0
P_L(d_i) is the RSSI value for distance 1
d_0 is distance 0
d_i is distance 1
So if you put the two RSSI values on the left side of the equation, you have:
(RSSI value 1) - (RSSI value 0) = 10n * log ((Dist 1) / (Dist 0))
Let's consider the case when distance 1 is larger than distance 0:
(Dist 1) > (Dist 0)
Greater distance means less RSSI, so (RSSI 1) < (RSSI 0). So the left side of the equation is negative. The research paper states that n is normally between 2 and 4, so the "10n" part of the right side of the equation is positive, which means the log value must be negative, right?
But that leads to a contradiction. We said Dist 1 is greater, so the number inside the log is greater than 1, therefore the log value itself is positive. So intuitively, we have found an equation with the left side negative and the right side positive. What's going on??
(The opposite leads to a contradiction too: if dist 1 is less than dist 0, we get the left side positive and the right side negative)
According to the paper, P_L is the mean path loss for a given distance, not the received signal strength. The loss increases with distance, so for d_i > d_0 the left side P_L(d_i) - P_L(d_0) is positive as well, and there is no contradiction.
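For intuition, here is a small Python sketch of the log-distance path-loss model as rearranged above; the reference loss pl_d0, reference distance d0 and exponent n are made-up illustrative values (the paper only says n is normally between 2 and 4):
import math

def path_loss_db(d, d0=1.0, pl_d0=40.0, n=2.0):
    # PL(d) = PL(d0) + 10 * n * log10(d / d0); the loss grows with distance
    return pl_d0 + 10.0 * n * math.log10(d / d0)

# Both sides of the rearranged equation are positive for d_i > d_0:
print(path_loss_db(10.0) - path_loss_db(1.0))    # 20.0 dB
print(10.0 * 2.0 * math.log10(10.0 / 1.0))       # 20.0 dB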
I know that I can represent a fuzzy max via a power function (I need it in a neural network), i.e.
import scala.math.pow

// assumes a >= 0 and b >= 0
def max(p: Double)(a: Double, b: Double): Double =
  pow(pow(a, p) + pow(b, p), 1 / p)
It becomes the maximum as p -> infinity and the sum when p = 1.
I am not sure how to correctly implement a fuzzy minimum.
If you are willing to replace "sum" with "harmonic sum" for the p=1 case, you can use
1/(pow(pow(a,-p) + pow(b,-p),1/p))
This converges to min(a,b) as p goes to infinity.
For p=1 it's 1/(1/a + 1/b), which is related to the harmonic mean but without the factor of 2. Just like in your original formula, a+b is related to the arithmetic mean but without the factor of 2.
However, note that both of these formulas (yours and mine) converge to their limit much more slowly (as p goes to infinity) when a and b are close together.
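A quick numeric check of that convergence in Python (a = 2 and b = 3 are made-up sample values):
def fuzzy_min(p, a, b):
    # soft minimum: approaches min(a, b) as p -> infinity (assumes a > 0 and b > 0)
    return 1.0 / ((a ** -p + b ** -p) ** (1.0 / p))

for p in (1, 2, 8, 32, 128):
    print(p, fuzzy_min(p, 2.0, 3.0))
# 1.2, 1.66..., 1.99..., then ever closer to min(2.0, 3.0) = 2.0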
I have polynomials of nontrivial degree (4+) and need to robustly and efficiently determine whether or not they have a root in the interval [0,T]. The precise location or number of roots don't concern me, I just need to know if there is at least one.
Right now I'm using interval arithmetic as a quick check to see if I can prove that no roots can exist. If I can't, I'm using Jenkins-Traub to solve for all of the polynomial roots. This is obviously inefficient since it's checking for all real roots and finding their exact positions, information I don't end up needing.
Is there a standard algorithm I should be using? If not, are there any other efficient checks I could do before doing a full Jenkins-Traub solve for all roots?
For example, one optimization I could do is to check if my polynomial f(t) has the same sign at 0 and T. If not, there is obviously a root in the interval. If so, I can solve for the roots of f'(t) and evaluate f at all roots of f' in the interval [0,T]. f(t) has no root in that interval if and only if all of these evaluations have the same sign as f(0) and f(T). This reduces the degree of the polynomial I have to root-find by one. Not a huge optimization, but perhaps better than nothing.
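Here is a rough Python/NumPy sketch of that endpoint/critical-point check (coefficients are assumed highest-degree first, and numpy.roots stands in for whatever root-finder you actually use on f'):
import numpy as np

def has_root_in_interval(coeffs, T):
    # True iff f has a root in [0, T]; coeffs are highest-degree first
    f = np.poly1d(coeffs)
    if f(0) == 0 or f(T) == 0:
        return True
    if np.sign(f(0)) != np.sign(f(T)):
        return True                       # sign change at the endpoints
    # same sign at both endpoints: f can only reach zero at an interior
    # extremum, so evaluate f at the roots of f' inside (0, T)
    crit = np.roots(f.deriv().coeffs)
    crit = crit[np.isreal(crit)].real
    crit = crit[(crit > 0) & (crit < T)]
    return any(f(c) == 0 or np.sign(f(c)) != np.sign(f(0)) for c in crit)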
Sturm's theorem lets you calculate the number of real roots in the range (a, b). Given the number of roots, you know if there is at least one. From the bottom half of page 4 of this paper:
Let f(x) be a real polynomial. Denote it by f0(x) and its derivative f′(x) by f1(x). Proceed as in Euclid's algorithm to find
f0(x) = q1(x) · f1(x) − f2(x),
f1(x) = q2(x) · f2(x) − f3(x),
...
fk−2(x) = qk−1(x) · fk−1(x) − fk,
where fk is a constant, and for 1 ≤ i ≤ k, fi(x) is of degree lower than that of fi−1(x). The signs of the remainders are negated from those in the Euclid algorithm.
Note that the last non-vanishing remainder fk (or fk−1 when fk = 0) is a greatest common divisor of f(x) and f′(x). The sequence f0, f1, ..., fk (or fk−1 when fk = 0) is called a Sturm sequence for the polynomial f.
Theorem 1 (Sturm's Theorem): The number of distinct real zeros of a polynomial f(x) with real coefficients in (a, b) is equal to the excess of the number of changes of sign in the sequence f0(a), ..., fk−1(a), fk over the number of changes of sign in the sequence f0(b), ..., fk−1(b), fk.
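A rough Python/NumPy sketch of that counting procedure (it assumes the polynomial is squarefree, i.e. has no repeated roots, and that neither a nor b is itself a root; np.polydiv performs the polynomial division):
import numpy as np

def sturm_root_count(coeffs, a, b):
    # number of distinct real roots in (a, b); coeffs are highest-degree first
    f = np.poly1d(coeffs)
    seq = [f, f.deriv()]                          # f0, f1
    while seq[-1].order > 0:
        _, rem = np.polydiv(seq[-2].coeffs, seq[-1].coeffs)
        seq.append(np.poly1d(-rem))               # f_{i+1} = -(f_{i-1} mod f_i)
    def sign_changes(x):
        signs = [np.sign(p(x)) for p in seq if p(x) != 0]
        return sum(1 for s, t in zip(signs, signs[1:]) if s != t)
    return sign_changes(a) - sign_changes(b)

Since you only care whether there is at least one root, you can stop as soon as the count comes out nonzero.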
You could certainly do binary search on your interval arithmetic. Start with [0,T] and substitute it into your polynomial. If the result interval does not contain 0, you're done. If it does, divide the interval in 2 and recurse on each half. This scheme will find the approximate location of each root pretty quickly.
If you eventually get 4 separate intervals with a root, you know you are done. Otherwise, I think you need to get to intervals [x,y] where f'([x,y]) does not contain zero, meaning that the function is monotonically increasing or decreasing and hence contains at most one zero. Double roots might present a problem, I'd have to think more about that.
Edit: if you suspect a multiple root, find roots of f' using the same procedure.
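Here is a small Python sketch of the negative half of that scheme, using naive interval arithmetic on Horner's rule; it can prove the absence of roots in a sub-interval, while a "possibly" answer still needs the monotonicity check described above (the names and the recursion depth are just illustrative):
def poly_interval(coeffs, lo, hi):
    # interval Horner evaluation; returns an interval containing f([lo, hi])
    # coeffs are highest-degree first
    acc_lo, acc_hi = 0.0, 0.0
    for c in coeffs:
        products = (acc_lo * lo, acc_lo * hi, acc_hi * lo, acc_hi * hi)
        acc_lo, acc_hi = min(products) + c, max(products) + c
    return acc_lo, acc_hi

def may_have_root(coeffs, lo, hi, depth=16):
    # False means interval arithmetic proved there is no root in [lo, hi]
    f_lo, f_hi = poly_interval(coeffs, lo, hi)
    if f_lo > 0 or f_hi < 0:
        return False
    if depth == 0:
        return True
    mid = 0.5 * (lo + hi)
    return may_have_root(coeffs, lo, mid, depth - 1) or may_have_root(coeffs, mid, hi, depth - 1)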
Use Descartes' rule of signs to glean some information. Just count the number of sign changes in the coefficients; this gives you an upper bound on the number of positive real roots. Consider the polynomial P.
P = 131.1 - 73.1*x + 52.425*x^2 - 62.875*x^3 - 69.225*x^4 + 11.225*x^5 + 9.45*x^6 + x^7
In fact, I've constructed P to have a simple list of roots. They are...
{-6, -4.75, -2, 1, 2.3, -i, +i}
Can we determine if there is a root in the interval [0,3]? Note that there is no sign change in the value of P at the endpoints.
P(0) = 131.1
P(3) = 4882.5
How many sign changes are there in the coefficients of P? There are 4 sign changes, so there may be as many as 4 positive roots.
But, now substitute x+3 for x into P. Thus
Q(x) = P(x+3) = ...
4882.5 + 14494.75*x + 15363.9*x^2 + 8054.675*x^3 + 2319.9*x^4 + 370.325*x^5 + 30.45*x^6 + x^7
See that Q(x) has NO sign changes in the coefficients. All of the coefficients are positive values. Therefore there can be no roots larger than 3.
So there MAY be either 2 or 4 roots in the interval [0,3] (or none at all; the rule only gives an upper bound).
At least this tells you whether to bother looking at all. Of course, if the function has opposite signs on each end of the interval, we know there are an odd number of roots in that interval.
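A tiny Python check of those two counts, using the coefficient lists of P and Q(x) = P(x+3) given above (listed lowest degree first):
def sign_changes(coeffs):
    # count sign changes in a coefficient sequence, skipping zero coefficients
    signs = [1 if c > 0 else -1 for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

P = [131.1, -73.1, 52.425, -62.875, -69.225, 11.225, 9.45, 1.0]
Q = [4882.5, 14494.75, 15363.9, 8054.675, 2319.9, 370.325, 30.45, 1.0]

print(sign_changes(P))   # 4 -> at most 4 positive real roots
print(sign_changes(Q))   # 0 -> no real roots greater than 3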
It's not that efficient, but it is quite reliable: you can construct the polynomial's companion matrix (a sparse matrix whose eigenvalues are the polynomial's roots).
There are efficient eigenvalue algorithms that can find eigenvalues in a given interval. One of them is inverse iteration (it can find the eigenvalues closest to some input value; just give the midpoint of the interval as that value).
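A sketch of that construction in Python/NumPy, using one common companion-matrix layout; a general eigensolver is used here for illustration in place of the targeted inverse iteration mentioned above (SciPy also provides scipy.linalg.companion):
import numpy as np

def companion(coeffs):
    # companion matrix of the polynomial with coeffs highest-degree first;
    # its eigenvalues are the polynomial's roots
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                      # normalize to a monic polynomial
    n = len(c) - 1
    A = np.zeros((n, n))
    A[1:, :-1] = np.eye(n - 1)        # ones on the subdiagonal
    A[:, -1] = -c[:0:-1]              # last column: -a_0, -a_1, ..., -a_{n-1}
    return A

print(np.sort(np.linalg.eigvals(companion([1.0, 0.0, -2.0])).real))
# roots of x^2 - 2: approximately [-1.414, 1.414]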
If f(0)*f(T) <= 0 then you are guaranteed to have a root. Otherwise you can start splitting the domain into two parts (bisection) and check the values at the ends until you are confident there is no root in that segment.
If f(0)*f(T) > 0 you have either zero, two, four, ... roots (counted with multiplicity); your upper limit is the polynomial's degree. If f(0)*f(T) < 0 you have one, three, five, ... roots.
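A minimal Python sketch of that point-sampling approach (min_width is an arbitrary cut-off; because it only looks at signs, it can miss roots of even multiplicity, which is the "confident, not certain" caveat above):
def sign_bisect_has_root(f, lo, hi, min_width=1e-3):
    # report True as soon as a sign change (or an exact zero) is found
    if f(lo) * f(hi) <= 0:
        return True
    if hi - lo < min_width:
        return False                  # gave up: same sign on every sampled segment
    mid = 0.5 * (lo + hi)
    return sign_bisect_has_root(f, lo, mid, min_width) or \
           sign_bisect_has_root(f, mid, hi, min_width)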