Is there a scalar measure of how close a rotation matrix is to the identity matrix of the same dimensions? If not, can anyone please suggest a workaround?
I am doing an optimization study using a genetic algorithm, and rotation matrices which are close to the identity are more desirable. That is why I need this measure to include in the fitness function.
A fairly simple one is
d(S,T) = sqrt( Trace( (S-T)'*(S-T)))
This is a metric in the mathematical sense, i.e.
d(S,T) >= 0, and d(S,T) = 0 iff S == T
d(S,T) = d(T,S)
d(S,T) <= d(S,U) + d(U,T)
Moreover, it is invariant under multiplication, i.e.
d( U*S, U*T) = d( S*U, T*U) = d( S, T)
In the above, each of S, T, U is an orthogonal matrix.
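For concreteness, a minimal NumPy sketch of this distance (the example rotation is mine, not from the question); the value could be plugged into the GA fitness function as a penalty term:

import numpy as np

def d(S, T):
    # Frobenius distance: sqrt(Trace((S - T)' * (S - T)))
    D = S - T
    return np.sqrt(np.trace(D.T @ D))    # same as np.linalg.norm(S - T, 'fro')

theta = 0.1                              # small rotation about the z-axis
S = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
print(d(S, np.eye(3)))                   # small value: S is close to the identity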
For a rotation matrix (not a homogeneous one), if the matrix is the identity then:
the norm of each row and each column is 1.0
the dot product between any two rows is 0.0
the dot product between any two columns is 0.0
no two rows are equal
no two columns are equal
So simply check all rows or columns against some threshold. For "close to" you could construct a score, for example something like this:
double mat[n][m];                      // your n x m matrix (renamed so it does not clash with the size m)
double score = 0.0;                    // score accumulator
double a;
int i, j, k;
for (i = 0; i < n; i++)                // norm of each row
{
    a = 0.0;
    for (j = 0; j < m; j++) a += mat[i][j] * mat[i][j];
    score += fabs(1.0 - sqrt(a)) / n;  // deviation of the row norm from 1
}
for (i = 0; i < n; i++)                // dot product between pairs of rows
 for (j = i + 1; j < n; j++)
 {
    a = 0.0;
    for (k = 0; k < m; k++) a += mat[i][k] * mat[j][k];
    score += fabs(a) / (n * n);        // deviation from perpendicularity
 }
Now score holds a value: the closer it is to zero, the closer the matrix is to identity; the bigger the value, the farther from identity it is. So:
if (score<threshold) matrix_is_identity;
where threshold is some small value like 1e-3, depending on what you consider to still be "close to" identity. I constructed the example score so it should be invariant to matrix size. You can add weights between the size and dot-product terms or add your own tests ... The first part of the score senses how far your basis vectors are from unit size, and the second part senses how close to perpendicular your basis vectors are.
In some cases it is better to take the max of the score instead of accumulating it with +=, like:
score = max(score,a);
instead of:
score+= a/n;
or:
score+= a/(n*n);
It depends on the behavior you want ...
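If it helps, here is a hedged Python sketch of the same kind of score using the max-based variant (the function name and test matrix are placeholders, not from the answer):

import math

def identity_score(mat):
    # Smaller is better; 0.0 when rows have unit norm and are mutually perpendicular.
    n = len(mat)
    score = 0.0
    for i in range(n):                                   # row norms vs 1
        a = sum(x * x for x in mat[i])
        score = max(score, abs(1.0 - math.sqrt(a)))
    for i in range(n):                                   # pairwise row dot products vs 0
        for j in range(i + 1, n):
            a = sum(mat[i][k] * mat[j][k] for k in range(len(mat[i])))
            score = max(score, abs(a))
    return score

print(identity_score([[1.0, 0.0], [0.0, 1.0]]))          # 0.0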
I'm looking for a mixing function that, given an integer from an interval [0, n), returns a random-looking integer from the same interval. The interval size n will typically be a composite number that is not a power of 2. I need the function to be one-to-one. It can only use O(1) memory; O(1) time is strongly preferred. I'm not too concerned about the randomness of the output, but visually it should look random enough (see the next paragraph).
I want to use this function as a pixel-shuffling step in a realtime-ish renderer to select the order in which pixels are rendered (the output will be displayed after a fixed time, and if the render is not done yet this gives me a noisy but fast partial preview). The interval size n will be the number of pixels in the render (n = 1920*1080 = 2073600 would be a typical value). The function must be one-to-one so that I can be sure that every pixel is rendered exactly once when finished.
I've looked at the reversible building blocks used by hash prospector, but these are mostly specific to power of 2 ranges.
The only other method I could think of is to multiply by a large prime, but it doesn't give particularly nice random-looking outputs.
What are some other options here?
Here is one solution based on the idea of primitive roots modulo a prime:
If a is a primitive root mod p then the function g(i) = a^i % p is a permutation of the nonzero elements which are less than p. This corresponds to the Lehmer prng. If n < p, you can get a permutation of 0, ..., n-1 as follows: Given i in that range, first add 1, then repeatedly multiply by a, taking the result mod p, until you get an element which is <= n, at which point you return the result - 1.
To fill in the details, this paper contains a table which gives a series of primes (all of which are close to various powers of 2) and corresponding primitive roots which are chosen so that they yield a generator with good statistical properties. Here is a part of that table, encoded as a Python dictionary in which the keys are the primes and the primitive roots are the values:
d = {32749: 30805,
65521: 32236,
131071: 66284,
262139: 166972,
524287: 358899,
1048573: 444362,
2097143: 1372180,
4194301: 1406151,
8388593: 5169235,
16777213: 9726917,
33554393: 32544832,
67108859: 11526618,
134217689: 70391260,
268435399: 150873839,
536870909: 219118189,
1073741789: 599290962}
Given n (in a certain range -- see the paper if you need to expand that range), you can find the smallest p which works:
def find_p_a(n):
    for p in sorted(d.keys()):
        if n < p:
            return p, d[p]
Once you know n and the matching p, a, the following function is a permutation of 0, ..., n-1:
def f(i,n,p,a):
    x = a*(i+1) % p
    while x > n:
        x = a*x % p
    return x-1
For a quick test:
n = 2073600
p,a = find_p_a(n) # p = 2097143, a = 1372180
nums = [f(i,n,p,a) for i in range(n)]
print(len(set(nums)) == n) #prints True
The average number of multiplications in f() is p/n, which in this case is 1.011 and will never be more than 2 (or only very slightly more, since the p are not exact powers of 2). In practice this method is not fundamentally different from your "multiply by a large prime" approach, but here the factor is chosen more carefully, and the fact that sometimes more than one multiplication is required adds to the apparent randomness.
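As a hedged illustration of how this could drive the pixel ordering from the question (the frame size is taken from the question; render_pixel is a placeholder, not a real API):

width, height = 1920, 1080
n = width * height
p, a = find_p_a(n)                     # p = 2097143, a = 1372180

for i in range(n):
    idx = f(i, n, p, a)                # index of the i-th pixel in the shuffled order
    x, y = idx % width, idx // width
    # render_pixel(x, y)               # placeholder for the actual renderer call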
I have a list of about 100,000 event probabilities stored in a vector.
I want to know if it is possible to calculate the probability of n events occurring (e.g. what is the probability that exactly 1000 events occur).
I managed to calculate several probabilities in R :
p is the vector containing all the probabilities
probability of none : prod(1-p)
probability of at least one : 1 - prod(1-p)
I found how to calculate the probability of exactly one event:
sum(p * (prod(1-p) / (1-p)))
But I don't know how to generate a formula for n events.
I do not know R, but I know how I would solve this with programming.
This is a straightforward dynamic programming problem. We start with a vector v = [1.0], where v[k] holds the probability of exactly k events among those processed so far. Then in Python:
v = [1.0]
for p_i in probabilities:
    next_v = [(1 - p_i) * v[0]]        # probability that still no event has occurred
    v.append(0.0)                      # pad so v[j+1] exists below
    for j in range(len(v) - 1):
        next_v.append(v[j] * p_i + v[j + 1] * (1 - p_i))
    # Renormalize to counter accumulated roundoff error
    total = sum(next_v)
    for j in range(len(next_v)):
        next_v[j] /= total
    v = next_v
And now your answer can be read off the right entry of the vector: v[k] is the probability that exactly k events occur.
This approach is equivalent to calculating Pascal's triangle row by row, throwing away the old row when you're done.
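For reference, a self-contained version of the same dynamic program wrapped in a function (the function name is mine), which is easy to sanity-check on a tiny input:

def exact_count_distribution(probabilities):
    # Returns v where v[k] = P(exactly k of the independent events occur).
    v = [1.0]
    for p_i in probabilities:
        next_v = [0.0] * (len(v) + 1)
        for k, prob_k in enumerate(v):
            next_v[k] += prob_k * (1 - p_i)      # event i does not occur
            next_v[k + 1] += prob_k * p_i        # event i occurs
        v = next_v
    return v

v = exact_count_distribution([0.5, 0.1, 0.2])
print(v[0])      # 0.5 * 0.9 * 0.8 = 0.36
print(sum(v))    # 1.0 up to roundoff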
I have two vectors a = (a_1, a_2, ..., a_n) and b = (b_1, b_2, ..., b_n). I want to find a scalar "s" such that s = max{s : a + s*b >= 0}. Here the inequality is elementwise, i.e. a_i + s*b_i >= 0 for all i = 1, ..., n. How do I compute such a scalar? Also, if s = infinity is the solution, we bound s by s = 1.
Also, the vector "a" is nonnegative (i.e. each element is >= 0).
Okay so with a_i >= 0, we can see that s=0 is always a solution.
One possible way is to solve the inequality for each component and then take the intersection of the resulting domains. The upper bound of that intersection (which, if finite, is part of it) is then the number you want.
That means

    s >= -a_i / b_i   for components with b_i > 0
    s <= -a_i / b_i   for components with b_i < 0

is what you're trying to solve. Note that, because in the second case b_i is negative, the number on the right-hand side is positive. That means the intersection of all the s_i you get like this is non-empty. The maximum has a lower bound of 0, so technically you can ignore the inequalities where b_i is positive; they are true anyway. But, for completeness and illustration purposes:
Example:
a = (1, 1), b = (-0.5, 1)
1 - s*0.5 >= 0 , that means s <= 2, or s in (-inf, 2]
1 + s*1 >= 0, that means s>= -1, or s in [-1,inf)
intersection: [-1,2]
That means the maximum value such that both inequalities hold is 2.
That is the most straightforward way; of course there are probably more elegant ones.
Edit: As an algorithm: check whether b_i is positive or negative. If positive, save s_i = 1 (if you want your value to be bounded by one). If negative, save s_i = -a_i/b_i. In the end you actually want to take the minimum!
But more efficiently: you don't actually need to care when b_i is positive. Your maximum will be greater than or equal to zero anyway. So just check the cases where b_i is smaller than 0 and keep the minimum of -a_i/b_i over them, as that is the upper bound of the region.
Pseudocode:
s = 1
for i in range(len(b)):
    if b[i] < 0:
        s = min(s, -a[i] / b[i])
Why the minimum? Because that -a[i]/b[i] is the upper bound of the region.
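A vectorized NumPy sketch of the same idea (the function name is mine; the result is capped at 1 as in the pseudocode above):

import numpy as np

def max_step(a, b):
    # Largest s <= 1 with a + s*b >= 0 elementwise, assuming a >= 0.
    neg = b < 0
    if not np.any(neg):
        return 1.0                                   # no finite upper bound, return the cap
    return min(1.0, float(np.min(-a[neg] / b[neg])))

print(max_step(np.array([1.0, 1.0]), np.array([-2.0, 1.0])))   # 0.5
print(max_step(np.array([1.0, 1.0]), np.array([0.5, 1.0])))    # 1.0 (unbounded case, capped)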
Suppose you have a uniform-distribution function rnd(x) that returns 0 or 1.
How can you use this function to create a function rnd(x, n) that returns uniformly distributed numbers from 0 to n?
Everyone seems to use this, but it is not obvious to me how. For example, I can create distributions with upper bound 2^n - 1 ([0-1], [0-3], [0-7], etc.), but I can't find a way to do this for ranges like [0-2] or [0-5] without using very big numbers for reasonable precision.
Suppose that you need to create a function rnd(n) which returns a uniformly distributed random number in the range [0, n], using another function rnd1() which returns 0 or 1.
1. Find the smallest k such that 2^k >= n+1.
2. Create a number consisting of k bits and fill all its bits using rnd1(). The result is a uniformly distributed number in the range [0, 2^k - 1].
3. Compare the generated number to n. If it is smaller than or equal to n, return it. Otherwise go to step 2. (A short sketch of these steps follows below.)
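A short Python sketch of these three steps, with rnd1() standing in for the given 0/1 generator:

import random

def rnd1():
    # Stand-in for the given 0/1 source.
    return random.randint(0, 1)

def rnd(n):
    # Uniformly distributed integer in [0, n], built only from rnd1().
    k = n.bit_length()                 # smallest k with 2^k >= n + 1 (for n >= 1)
    while True:
        x = 0
        for _ in range(k):             # step 2: fill k bits
            x = (x << 1) | rnd1()
        if x <= n:                     # step 3: accept, otherwise retry
            return x

print([rnd(5) for _ in range(10)])     # ten draws from {0, ..., 5}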
In general, this is a variation of how to generate uniform numbers in a small range by using a library function which generates numbers in a large range:
/* Requires <limits.h> for UINT_MAX. */
unsigned int rnd(unsigned int n) {
    while (1) {
        unsigned int x = rnd_full_unsigned_int();   /* fill all bits using rnd1() */
        if (x < UINT_MAX / (n + 1) * (n + 1)) {     /* reject the biased tail */
            return x % (n + 1);
        }
    }
}
Explanation of the above code. If you simply return rnd_full_unsigned_int() % (n+1), this will generate a bias towards small-valued numbers. Picture all possible values from 0 to UINT_MAX wound onto a spiral, counted from the inside, where the length of a single revolution is (n+1); the final, incomplete revolution is where the bias comes from. So, in order to remove this bias, we first create a random number x in the range [0, UINT_MAX] (this is easy with bit-fill). Then, if x falls into the bias-generating region (that incomplete last revolution), we recreate it, and keep recreating it until it no longer falls into that region. At that point x is uniformly distributed over [0, m*(n+1) - 1] for some integer m, so x % (n+1) is a uniformly distributed number in [0, n].
Consider a vector V riddled with noisy elements. What would be the fastest (or any) way to find a reasonable maximum element?
For example,
V = [1 2 3 4 100 1000]
rmax = 4;
I was thinking of sorting the elements and finding the second difference, i.e. diff(diff(unique(V))).
EDIT: Sorry about the delay.
I can't post any representative data since it contains 6.15e5 elements. But here's a plot of the sorted elements.
By just looking at the plot, a piecewise linear function may work.
Anyway, regarding my previous conjecture about using differentials, here's a plot of diff(sort(V));
I hope it's clearer now.
EDIT: Just to be clear, the desired "maximum" value would be the value right before the step in the plot of the sorted elements.
NEW ANSWER:
Based on your plot of the sorted amplitudes, your diff(sort(V)) algorithm would probably work well. You would simply have to pick a threshold for what constitutes "too large" a difference between the sorted values. The first point in your diff(sort(V)) vector that exceeds that threshold is then used to get the threshold to use for V. For example:
diffThreshold = 2e5;  % Pick a threshold for what counts as "too large" a jump
sortedVector = sort(V);
index = find(diff(sortedVector) > diffThreshold,1,'first');  % First big jump
signalThreshold = sortedVector(index);  % Last "reasonable" value before the jump
Another alternative, if you're interested in toying with it, is to bin your data using HISTC. You would end up with groups of highly-populated bins at both low and high amplitudes, with sparsely-populated bins in between. It would then be a matter of deciding which bins you count as part of the low-amplitude group (such as the first group of bins that contain at least X counts). For example:
binEdges = min(V):1e7:max(V); % Create vector of bin edges
n = histc(V,binEdges); % Bin amplitude data
binThreshold = 100; % Pick threshold for number of elements in bin
index = find(n < binThreshold,1,'first'); % Find first bin whose count is low
signalThreshold = binEdges(index);
OLD ANSWER (for posterity):
Finding a "reasonable maximum element" is wholly dependent upon your definition of reasonable. There are many ways you could define a point as an outlier, such as simply picking a set of thresholds and ignoring everything outside of what you define as "reasonable". Assuming your data has a normal-ish distribution, you could probably use a simple data-driven thresholding approach for removing outliers from a vector V using the functions MEAN and STD:
nDevs = 2; % The number of standard deviations to use as a threshold
index = abs(V-mean(V)) <= nDevs*std(V); % Index of "reasonable" values
maxValue = max(V(index)); % Maximum of "reasonable" values
I would not sort and then take differences. If you have some reason to expect continuity or bounded change (e.g. the vector holds consecutive sensor readings), then sorting will destroy the time information (or whatever the vector index represents). Filtering by detecting large spikes isn't a bad idea, but you would want to compare the spike to a larger neighborhood (the 2nd difference effectively has you looking within a window of +-2).
You need to describe formally the expected information in the vector, and the type of noise.
You need to know the frequency and distribution of errors and non-errors. In the simplest model, the elements in your vector are independent and identically distributed, and errors are all or none (you randomly choose to store the true value, or an error). You should be able to figure out for each element the chance that it's accurate, vs. the chance that it's noise. This could be very easy (error data values are always in a certain range which doesn't overlap with non-error values), or very hard.
To simplify: don't make any assumptions about what kind of data an error produces (the worst case is: you can't rule out any of the error data points as ridiculous, but they're all at or above the maximum among non-error measurements). Then, if the probability of error is p and your vector has n elements, the chance that the kth highest element in the vector is less than or equal to the true maximum is given by the cumulative binomial distribution: http://en.wikipedia.org/wiki/Binomial_distribution
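As a sketch of that last statement (under the stated worst case, the kth highest element is at most the true maximum exactly when fewer than k of the n elements are errors):

from math import comb

def prob_kth_highest_ok(n, p, k):
    # P(Binomial(n, p) <= k - 1): chance that fewer than k errors occurred.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

print(prob_kth_highest_ok(100, 0.01, 3))   # about 0.92 for 100 elements at a 1% error rate
# For very large n (e.g. 6.15e5 elements) scipy.stats.binom.cdf(k - 1, n, p) is more practical.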
First, pick your favorite method for identifying outliers...
If you expect the numbers to come from a normal distribution, you can use, say, 2 standard deviations above the mean to determine your max.
Do you have access to bounds on your noise-free elements? For example, do you know that your noise-free elements are between -10 and 10?
In that case, you could remove the noise and then find the max:
max( v( find(v<=10 & v>=-10) ) )