I'm developing an application that involves getting the camera angle in a game. The angle can be anywhere from 0 to 359. 0 is North, 90 is East, 180 is South, etc. I'm using an API whose Camera class has a getAngle() method.
How would I find the average between different camera angles? The arithmetic average of 0 and 359 is 179.5. As a camera angle that would be South, but obviously 0 and 359 are both very close to North.
You can think of it in terms of vectors. Let θ1 and θ2 be your two angles expressed in radians. Then we can determine the x and y components of the unit vectors that are at these angles:
x1 = sin(θ1)
y1 = cos(θ1)
x2 = sin(θ2)
y2 = cos(θ2)
You can then add these two vectors, and determine the x and y components of the result:
x* = x1 + x2
y* = y1 + y2
Finally, you can determine the angle of this resulting vector:
θavg = tan⁻¹(x*/y*)
or, even better, use atan2 (a function supported by many languages):
θavg = atan2(x*, y*)
You will have to separately handle the case where x* = 0 and y* = 0 simultaneously, since this means the two vectors point in exactly opposite directions and cancel out (so what should the 'average' be?).
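A minimal C sketch of this (my own example; it assumes compass angles in degrees with the sin/cos assignment above, and averageAngle is a made-up name):

#include <math.h>
#include <stdio.h>

#define DEG2RAD (3.14159265358979323846 / 180.0)
#define RAD2DEG (180.0 / 3.14159265358979323846)

/* Average of two compass angles in degrees (0 = North, 90 = East). */
double averageAngle(double a1, double a2)
{
    /* x = sin, y = cos because compass angles are measured from "up" (North). */
    double x = sin(a1 * DEG2RAD) + sin(a2 * DEG2RAD);
    double y = cos(a1 * DEG2RAD) + cos(a2 * DEG2RAD);
    double avg = atan2(x, y) * RAD2DEG;   /* note the (x, y) argument order */
    return avg < 0.0 ? avg + 360.0 : avg; /* map (-180, 180] into [0, 360) */
}

int main(void)
{
    printf("%f\n", averageAngle(0.0, 359.0)); /* prints 359.500000 */
    return 0;
}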
It depends what you mean by "average", but the usual definition is the bisector of the included angle. To compute it, you must first bring the two angles within 180 degrees of each other. There are many ways to do this, but a simple one is to increment or decrement one of the angles. If the angles are a and b, then this will do it:
if (a < b)
    while (abs(a - b) > 180) a = a + 360
else
    while (abs(a - b) > 180) a = a - 360
Now you can compute the simple average:
avg = (a + b) / 2
Of course you may want to normalize one more time:
while (avg < 0) avg = avg + 360
while (avg >= 360) avg = avg - 360
In your example, you'd have a=0, b=359. The first loop would increment a to 360. The average would be 359.5. Of course you could round that to an integer if you like; if you round up to 360, the final pair of loops will bring it back down to 0.
Note that if your angles are always normalized to [0..360) none of these loops ever execute more than once. But they're probably good practice so that a wild argument doesn't cause your code to fail.
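Put together, a minimal C version of the above (a sketch; angleAverage is a made-up name):

#include <math.h>
#include <stdio.h>

/* Average two angles (in degrees) as the bisector of the included angle. */
float angleAverage(float a, float b)
{
    /* Bring the two angles within 180 degrees of each other. */
    if (a < b)
        while (fabsf(a - b) > 180.0f) a += 360.0f;
    else
        while (fabsf(a - b) > 180.0f) a -= 360.0f;

    float avg = (a + b) / 2.0f;

    /* Normalize back into [0, 360). */
    while (avg < 0.0f)    avg += 360.0f;
    while (avg >= 360.0f) avg -= 360.0f;
    return avg;
}

int main(void)
{
    printf("%f\n", angleAverage(0.0f, 359.0f)); /* prints 359.500000 */
    return 0;
}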
You want to bisect the angles, not average them. First get the distance between them, taking the shortest way around; then divide that in half and add it to one of the angles. E.g.:
A = 355
B = 5

if (abs(A - B) < 180) {
    Distance = abs(A - B)
    if (A < B) {
        Bisect = A + Distance / 2
    }
    else {
        Bisect = B + Distance / 2
    }
}
else {
    Distance = 360 - abs(A - B)
    if (A < B) {
        Bisect = A - Distance / 2
    }
    else {
        Bisect = B - Distance / 2
    }
}
Or something like that -- "Bisect" should come out to zero for the given inputs (a negative result would still need to be normalized into [0, 360)). There are probably clever ways to make the arithmetic come out with fewer if and abs operations.
In a comment, you mentioned that all angles to be averaged are within 90 degrees of each other. I am guessing that there is really only one camera, but it moves around a lot, and you are creating some sort of picture-stability mechanism for the camera POV.
In any case, the only special case is where the camera may be in both the 270-359 quadrant and the 0-89 quadrant. For all other cases, you can just take a simple average. So you just need to detect that special case, and when it happens, treat the angles in the 270-359 quadrant as -90 to -1 instead. Then, after computing the simple average, adjust it back into the 270-359 range if necessary.
In C code:
#include <assert.h>

int quadrant (int a) {
    assert(0 <= a && a < 360);
    return a/90;
}

double avg_rays (int rays[], int num) {
    int i;
    int quads[4] = { 0, 0, 0, 0 };
    double sum = 0;
    /* trivial case */
    if (num == 1) return rays[0];
    for (i = 0; i < num; ++i) ++quads[quadrant(rays[i])];
    if (quads[0] == 0 || quads[3] == 0) {
        /* simple case */
        for (i = 0; i < num; ++i) sum += rays[i];
        return sum/num;
    }
    /* special case: angles straddle North; note that rays[] is rewritten in place */
    for (i = 0; i < num; ++i) {
        if (quadrant(rays[i]) == 3) rays[i] -= 360;
        sum += rays[i];
    }
    return sum/num + (sum < 0) * 360;
}
This code can be optimized at the expense of clarity of purpose. When you detect the special case condition, you can fix up the sum after the fact. So, you can compute sum and figure out the special case and do the fix up in a single pass.
double avg_rays_opt (int rays[], int num) {
    int i;
    int quads[4] = { 0, 0, 0, 0 };
    double sum = 0;
    /* trivial case */
    if (num == 1) return rays[0];
    for (i = 0; i < num; ++i) {
        ++quads[quadrant(rays[i])];
        sum += rays[i];
    }
    if (quads[0] == 0 || quads[3] == 0) {
        /* simple case */
        return sum/num;
    }
    /* special case */
    sum -= quads[3]*360;
    return sum/num + (sum < 0) * 360;
}
I am sure it can be further optimized, but it should give you a start.
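For instance, a quick check (a hypothetical driver; it assumes avg_rays from above is in scope and that the inputs respect the 90-degree constraint):

#include <stdio.h>

int main(void)
{
    int rays[] = { 350, 10 };          /* straddles North */
    printf("%f\n", avg_rays(rays, 2)); /* prints 0.000000 */
    return 0;
}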
The code below (rewritten in C#) is used to compress a unit normal vector in Wild Magic 5.17. Could someone explain some of the math behind it, or share some related refs? I can figure out the octant bit setting, but the mantissa packing and unpacking seem complex...
code gist
some of the code here:
// ...
public static ushort CompressNormal(Vector3 normal)
{
    var x = normal.x;
    var y = normal.y;
    var z = normal.z;
    Debug.Assert(MathUtil.IsSame(x * x + y * y + z * z, 1));

    // Determine octant.
    ushort index = 0;
    if (x < 0.0)
    {
        index |= 0x8000;
        x = -x;
    }
    if (y < 0.0)
    {
        index |= 0x4000;
        y = -y;
    }
    if (z < 0.0)
    {
        index |= 0x2000;
        z = -z;
    }

    // Determine mantissa.  (gsFactor is a constant defined elsewhere in the source.)
    ushort usX = (ushort)Mathf.Floor(gsFactor * x);
    ushort usY = (ushort)Mathf.Floor(gsFactor * y);
    ushort mantissa = (ushort)(usX + ((usY * (255 - usY)) >> 1));
    index |= mantissa;
    return index;
}
// ...
The author wanted to use 13 bits. The trivial way - 6 bits for the x component + 6 bits for y - occupies only 12 bits, so he invented an approach that assigns ~90 (lsb) units to x and ~90 (msb) units to y (90*90 ≈ 2^13).
I have no idea why he uses a quadratic formula for the y component - it gives a slightly different distribution of approximated values between smaller and larger values - but why specifically for y?
I've asked Mr. Eberly (the author of Wild Magic) and he gave me the reference. In short, the code above maps (x, y) to an index into a triangular array (the index runs from 0 to N * (N + 1) / 2 - 1).
More details are in the related doc here.
Btw, another solution here uses a different compression method.
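To make the mapping concrete, here is a small C sketch of my own (an illustration of the triangular-array indexing described, not code from Wild Magic; it assumes N = 126, i.e. 127 rows of decreasing length, which matches the (255 - y) term and keeps every index within 13 bits):

#include <stdio.h>

/* Row y of the triangle holds x = 0 .. 126-y, so the index of (x, y) is
 * x + y*(255 - y)/2, which is the mantissa expression above.
 * Indices run from 0 to 126*129/2 = 8127, i.e. they fit in 13 bits. */
static unsigned triIndex(unsigned x, unsigned y)
{
    return x + ((y * (255u - y)) >> 1);
}

/* Invert the mapping: find the row whose starting offset contains the index. */
static void triDecode(unsigned index, unsigned *x, unsigned *y)
{
    unsigned row = 0;
    while ((row + 1u) * (255u - (row + 1u)) / 2u <= index)
        row++;
    *y = row;
    *x = index - row * (255u - row) / 2u;
}

int main(void)
{
    unsigned x, y;
    triDecode(triIndex(5, 3), &x, &y);
    printf("%u %u\n", x, y); /* prints 5 3 */
    return 0;
}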
So I searched the internet looking for programs implementing Cramer's rule and found a few, but apparently those examples were for fixed-size matrices only, like 2x2 or 4x4.
However, I am looking for a way to solve an NxN matrix. I got as far as asking the user for the size of the matrix and the values of the matrix, but I don't know how to move on from there.
I guess my next step is to apply Cramer's rule and get the answers, but I just don't know how. This is the step I'm missing. Can anybody help me please?
First, you need to calculate the determinant of your system's coefficient matrix - the matrix that consists of the coefficients from the left-hand side of the equations - let it be D.
Then, to calculate the value of a certain variable, take the matrix of your system (from the previous step), replace the coefficients of the corresponding column with the constant terms (from the right-hand side), calculate the determinant of the resulting matrix - let it be C - and divide C by D.
A bit more about the replacement from the previous step: say your matrix is 3x3 - so you have a system of equations where every a coefficient is multiplied by x, every b by y, every c by z, and the ds are the constant terms. So, to calculate y, you replace the coefficients that are multiplied by y - the bs in this case - with the ds.
You perform the second step for every variable and your system is solved.
You can find an example in https://rosettacode.org/wiki/Cramer%27s_rule#C
Although that specific example deals with a 4x4 matrix, the code is written to accommodate any size of square matrix.
What you need is to calculate determinants: Cramer's rule expresses each unknown as a ratio of two determinants of NxN matrices.
If N is not big, you can use Cramer's rule (see the code below), which is quite straightforward. However, this method is not efficient; if your N is big, you need to resort to other methods, such as LU decomposition.
Assuming your data is double, and the result can be held by a double:
#include <stdio.h>
#include <stdlib.h>

/* Determinant by cofactor expansion along the first row (O(n!), fine for small n). */
double det(double *matrix, int n) {
    if (1 >= n) return matrix[0];
    double *subMatrix = (double*)malloc((n - 1) * (n - 1) * sizeof(double));
    double result = 0.0;
    for (int i = 0; i < n; ++i) {
        /* Build the minor: drop row 0 and column i. */
        for (int j = 0; j < n - 1; ++j) {
            for (int k = 0; k < i; ++k)
                subMatrix[j*(n - 1) + k] = matrix[(j + 1)*n + k];
            for (int k = i + 1; k < n; ++k)
                subMatrix[j*(n - 1) + (k - 1)] = matrix[(j + 1)*n + k];
        }
        if (i % 2 == 0)
            result += matrix[0*n + i] * det(subMatrix, n - 1);
        else
            result -= matrix[0*n + i] * det(subMatrix, n - 1);
    }
    free(subMatrix);
    return result;
}

int main() {
    double matrix[] = { 1,2,3,4, 5,6,7,8, 2,6,4,8, 3,1,1,2 };
    printf("%lf\n", det(matrix, 4));
    return 0;
}
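To tie this back to the question, here is a sketch of how Cramer's rule could sit on top of the det() above (cramer_solve is a hypothetical helper; for each variable it copies the matrix, swaps in the constants column, and divides the two determinants):

#include <stdlib.h>

/* Solve A*x = b (A is n*n, row-major) with Cramer's rule, using det() above.
 * Returns 0 on success, -1 if the system is singular (det(A) == 0). */
int cramer_solve(double *A, double *b, double *x, int n)
{
    double D = det(A, n); /* determinant of the coefficient matrix */
    if (D == 0.0) return -1;
    double *tmp = (double *) malloc(n * n * sizeof(double));
    for (int col = 0; col < n; ++col) {
        /* Copy A, then replace column 'col' with the constant terms b. */
        for (int r = 0; r < n; ++r)
            for (int c = 0; c < n; ++c)
                tmp[r * n + c] = (c == col) ? b[r] : A[r * n + c];
        x[col] = det(tmp, n) / D; /* Cramer: x_col = det(A_col) / det(A) */
    }
    free(tmp);
    return 0;
}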
I have the C code below for finding large perfect numbers:
#include <stdio.h>

int main ()
{
    unsigned long long num, i, sum;
    while (scanf ("%llu", &num) != EOF && num)
    {
        sum = 1;
        for (i = 2; i*i <= num; i++)
        {
            if (num % i == 0)
            {
                if (i*i == num)
                    sum += i;
                else
                    sum += (i + num/i);
            }
        }
        if (sum == num)
            printf ("Perfect\n");
        else if (sum > num)
            printf ("Abundant\n");
        else
            printf ("Deficient\n");
    }
    return 0;
}
I tried to find whether a number is perfect, abundant or deficient. I run a loop up to the square root of num to minimize the runtime. It works fine for values <= 10^15, but for larger values it takes too long to execute.
For example, for the following input set,
8
6
18
1000000
1000000000000000
0
this code shows the following outputs,
Deficient
Perfect
Abundant
Abundant
Abundant
But for 10^16 it doesn't respond quickly.
So, is there any better way to handle such large values? Or is there any better algorithm to implement here??? :)
Yes, there is a better algorithm.
Your algorithm is basically the simple one--adding up the divisors of a number to find... the sum of the divisors of a number (excluding itself). But you can use the number-theoretic formula for finding the sum of the divisors of a number (including itself). If the prime numbers dividing n are p1, p2, ..., pk and the powers of those primes in the canonical decomposition of n are a1, a2, ..., ak, then the sum of the divisors of n is
(p1**(a1+1) - 1) / (p1 - 1) * (p2**(a2+1) - 1) / (p2 - 1) * ...
* (pk**(ak+1) - 1) / (pk - 1)
You can find the prime divisors and their exponents more quickly than you can find all the divisors of n. Subtract n from the expression above and you get the sum you want.
There are some tricks, of course, to find the p_i and a_i more efficiently: I'll leave that to you.
By the way, if your purpose is just to find the perfect numbers, as in your title, you would do better to use Euclid's formula for even perfect numbers. Find the Mersenne primes by examining 2**p-1 for prime p to see if it is prime--there are shortcuts to doing this as well--then construct a perfect number from each Mersenne prime. This would leave out any odd perfect numbers, though. If you find any, let the mathematical community know--that would make you world famous.
Of course, the fastest way of all to find perfect numbers is to use the lists already made of some of them.
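For illustration, here is a sketch of the Euclid route (my own example, not from the question: it runs the Lucas-Lehmer test on 2^p - 1 for small prime p and prints the even perfect number 2^(p-1) * (2^p - 1) for each Mersenne prime found; p stays <= 31 so the squaring step fits in 64 bits, and 6 is printed directly since Lucas-Lehmer needs odd p):

#include <stdio.h>

/* Lucas-Lehmer primality test for M = 2^p - 1, odd prime p. */
static int is_mersenne_prime(unsigned p)
{
    unsigned long long M = (1ULL << p) - 1, s = 4;
    for (unsigned i = 0; i < p - 2; ++i)
        s = (s * s - 2) % M;          /* safe: s < 2^31, so s*s < 2^62 */
    return s == 0;
}

int main(void)
{
    unsigned primes[] = { 3, 5, 7, 11, 13, 17, 19, 31 };
    printf("%llu\n", 6ULL);           /* the p = 2 case */
    for (int i = 0; i < 8; ++i) {
        unsigned p = primes[i];
        if (is_mersenne_prime(p))     /* filters out p = 11, since 2047 = 23*89 */
            printf("%llu\n", (1ULL << (p - 1)) * ((1ULL << p) - 1));
    }
    return 0;
}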
It is a matter of factorization of numbers. You can read more here: https://en.wikipedia.org/wiki/Integer_factorization
Unfortunately no good news for you - the bigger the number gets, the longer it takes.
To start with your code, avoid computing i*i on every iteration.
Instead of:
for (i=2; i*i<=num; i++)
calculate the square root of num once, before the loop, and then compare
i <= square_root_of_num in the loop.
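That might look like the following sketch (divisor_sum is a made-up helper; sqrtl on a long double keeps the bound accurate for 64-bit inputs):

#include <math.h>

/* Sum of the proper divisors of num, with the square root hoisted out of the loop. */
unsigned long long divisor_sum(unsigned long long num)
{
    unsigned long long sum = 1, i;
    unsigned long long root = (unsigned long long) sqrtl((long double) num);
    for (i = 2; i <= root; i++) {
        if (num % i == 0)
            sum += (i * i == num) ? i : i + num / i;
    }
    return sum; /* num is perfect when divisor_sum(num) == num */
}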
// Program to determine whether a number is perfect, abundant or deficient
#include <bits/stdc++.h>
using namespace std;

map<long long int, int> mp; // to store prime factors and their frequencies

void primeFactors(long long int n)
{
    // counting the number of 2s that divide n
    while (n % 2 == 0)
    {
        mp[2] = mp[2] + 1;
        n = n / 2;
    }
    long long int root = (long long int) sqrtl((long double) n);
    // n must be odd at this point, so we can skip every even number next
    for (long long int i = 3; i <= root; i = i + 2)
    {
        // while i divides n, count the frequency of the prime factor i and divide n
        while (n % i == 0)
        {
            mp[i] = mp[i] + 1;
            n = n / i;
        }
    }
    // this condition handles the case when the remaining n is a prime number
    // greater than 2
    if (n > 2)
    {
        mp[n] = mp[n] + 1;
    }
}

// integer power by squaring (named ipow to avoid clashing with std::pow)
long long int ipow(long long int base, long long int exp)
{
    long long int result = 1;
    while (exp > 0)
    {
        if (exp & 1)
            result = result * base;
        exp >>= 1;
        base = base * base;
    }
    return result;
}

int main()
{
    long long num, p, a, sum;
    while (scanf("%lld", &num) != EOF && num)
    {
        primeFactors(num);
        sum = 1; // sigma(num): product of (p^(a+1)-1)/(p-1) over prime powers
        map<long long int, int>::iterator i;
        for (i = mp.begin(); i != mp.end(); i++)
        {
            p = i->first;
            a = i->second;
            sum = sum * ((ipow(p, a + 1) - 1) / (p - 1)); // may overflow for huge num
        }
        // sigma(num) counts num itself, so compare against 2*num
        if (sum == 2 * num)
            printf("Perfect\n");
        else if (sum > 2 * num)
            printf("Abundant\n");
        else
            printf("Deficient\n");
        mp.clear();
    }
    return 0;
}
How does one, computationally and dynamically, derive the 'ths' place equivalent of a whole integer? e.g.:
187 as 0.187
16 as 0.16
900041 as 0.900041
I understand one needs to compute the exact ths place. I know one trick is to turn the integer into a string, count how many places there are (by how many individual characters there are), and then build the value to multiply by - the tenths value derived - like we would on pen and paper, such as:
char integerStr[7] = "186907";
int strLength = strlen(integerStr);   /* needs <string.h> */
double thsPlace = 1.0;                /* must start at 1, not 0, or the product stays 0 */
for (int counter = 0; counter < strLength; ++counter) {
    thsPlace = 0.1 * thsPlace;        /* ends up as 10^-strLength */
}
/* multiplying the original integer by thsPlace gives 0.186907 */
But what is a non-string, arithmetic approach to solving this?
pseudocode:
n / pow(10, floor(log10(n))+1)
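In C that might read (a sketch; toFraction is a made-up name, and it assumes n > 0 - exact powers of ten deserve a test, since this relies on log10 returning an exact integer there):

#include <math.h>
#include <stdio.h>

/* 187 -> 0.187, 16 -> 0.16, 900041 -> 0.900041 (assumes n > 0) */
double toFraction(int n)
{
    return n / pow(10.0, floor(log10((double) n)) + 1.0);
}

int main(void)
{
    printf("%f\n", toFraction(900041)); /* prints 0.900041 */
    return 0;
}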
Divide the original value by 10 repeatedly until it's less than one:
int x = 69105;
double result = (double) x;
while (result >= 1.0) result /= 10.0;
/* result = 0.69105 */
Note that this won't work for negative values; for those, you need to perform the algorithm on the absolute value and then negate the result.
I'm not sure exactly what you mean with your question, but here's what I would do:
/* Returns the number of decimal digits in n. */
int placeValue(int n)
{
    if (n < 10)
    {
        return 1;
    }
    else
    {
        return placeValue(n / 10) + 1;
    }
}
[This is a recursive method]
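Combined with a power of ten, that digit count yields the fraction (a sketch; shiftRight is a made-up name):

#include <math.h>

/* Using placeValue() from above: 187 -> 0.187 (assumes n > 0) */
double shiftRight(int n)
{
    return n / pow(10.0, placeValue(n));
}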
I don't know how performant the pow(10, x) version is, but you could do most of this with integer arithmetic. Assuming we are only dealing with positive values or 0 (use the absolute value, if necessary):
int divisor = 1;
while (divisor <= x)   /* <= (not <), so that e.g. x = 100 ends with divisor = 1000 */
    divisor *= 10;
if (divisor > 0)
    return (double)x / divisor;
Note that the above needs some safeguards, i.e. checking whether divisor has overflowed (in that case, it would be negative), whether x is positive, etc. But I assume you can do that yourself.
I have 2 tables of values and want to scale the first one so that it matches the 2nd one as well as possible. Both have the same length. If both are drawn as graphs in a diagram, they should be as close to each other as possible. But I do not want quadratic weights, just simple linear weights.
My problem is that I have no idea how to actually compute the best scaling factor, because of the Abs function.
Some pseudocode:
//given:
float[] table1 = ...;
float[] table2 = ...;
//wanted:
float factor = ???; // I have no idea how to compute this

float remainingDifference = 0;
for (int i = 0; i < length; i++)
{
    float scaledValue = table1[i] * factor;
    //Sum up the differences. I use the Abs function because negative differences are differences too.
    remainingDifference += Abs(scaledValue - table2[i]);
}
I want to compute the scaling factor so that the remainingDifference is minimal.
Simple linear weights are hard, like you said.
a_n = first sequence
b_n = second sequence
c = scaling factor
Your residual function is (sums are from i=1 to N, the number of points):
SUM( |a_i - c*b_i| )
Taking the derivative with respect to c yields:
d/dc SUM( |a_i - c*b_i| )
= SUM( -b_i * (a_i - c*b_i)/|a_i - c*b_i| )
= -SUM( b_i * sign(a_i - c*b_i) )
Setting this to 0 and solving for c is hard. I don't think there's an analytic way of doing that. You may want to try https://math.stackexchange.com/ to see if they have any bright ideas.
However if you work with quadratic weights, it becomes significantly simpler:
d/dc SUM( (a_i - c*b_i)^2 )
= SUM( -2*b_i*(a_i - c*b_i) )
= -2*( SUM(a_i*b_i) - c*SUM(b_i^2) ) = 0
=> c = SUM(a_i*b_i) / SUM(b_i^2)
I strongly suggest the latter approach if you can.
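In code, that is a single pass over the tables (a sketch in the question's terms; computeFactor is a made-up name, and the notation above maps to a = table2, b = table1):

/* Least-squares factor so that table1[i]*factor best matches table2[i]:
 * c = SUM(table1[i]*table2[i]) / SUM(table1[i]^2) */
float computeFactor(const float table1[], const float table2[], int length)
{
    float num = 0.0f, den = 0.0f;
    for (int i = 0; i < length; i++) {
        num += table1[i] * table2[i];
        den += table1[i] * table1[i];
    }
    return den != 0.0f ? num / den : 0.0f; /* guard against an all-zero table1 */
}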
I would suggest trying some sort of variant on Newton-Raphson.
Construct a function Diff(k) that measures the area between your two graphs between fixed markers A and B.
Mathematically, I guess it would be integral ( x = A to B ){ |f(x) - k * g(x)| }dx.
Realistically, you could just sum the sampled differences: if you range from x = -10 to 10 and have a data point for f(i) and g(i) on each integer i in [-10, 10] (i.e. 21 data points), then you just take sum( i = -10 to 10 ){ |f(i) - k * g(i)| }.
Basically you would expect this function to look like a parabola: there will be an optimum k, deviating slightly from it in either direction will increase the overall area difference, and the bigger the deviation, the bigger the gap. So this should be a pretty smooth function (if you have a lot of data points), and you want to minimise Diff(k).
That means finding where the derivative d/dk Diff(k) = 0, so just do Newton-Raphson on this new function D'(k). Kick it off at k=1 and it should zone in on a solution pretty fast. That's probably going to give you an optimal computation time.
If you want something simpler, just start with some k1 and k2 whose D' values are on either side of 0. Say D'(1.5) = -3 and D'(2.9) = 7: then you would pick a k about 3/10 of the way (10 = 7 - (-3)) from 1.5 to 2.9, and depending on whether that yields a positive or negative value, use it as the new k1 or k2; rinse and repeat.
In case anyone stumbles upon this in the future, here is some code (C++).
The trick is to first sort the samples by the per-sample ratio pA[i]/pB[i] - the scaling factor that would fit that one pair exactly. Then walk in from both ends toward the factor that results in the minimum absolute deviation (the L1 norm).
Everything except the sort has a linear run time, so the runtime is O(n*log n).
#include <algorithm>
#include <cmath>
#include <memory>

/*
 * Find x so that the sum over std::abs(pA[i]-pB[i]*x) from i=0 to (n-1) is minimal
 * Then return x
 */
float linearFit(const float* pA, const float* pB, int n)
{
    /*
     * An algebraic solution is not possible for the general case
     * => iterative algorithm
     */
    if (n < 0)
        throw "linearFit has invalid argument: expected n >= 0";
    if (n == 0)
        return 0; // If there is nothing to fit, any factor is a perfect fit (sum is always 0)
    if (n == 1)
        return pA[0] / pB[0]; // return x so that pA[0] = pB[0]*x

    // If you don't like this, use a std::vector :P
    std::unique_ptr<float[]> targetValues_(new float[n]);
    std::unique_ptr<int[]> indices_(new int[n]);
    // Get proper pointers:
    float* targetValues = targetValues_.get(); // The value for x that would cause pA[i] = pB[i]*x
    int* indices = indices_.get();             // Indices of useful (not nan and not infinity) target values

    // The code above guarantees n > 1, so it is safe to get these pointers:
    int m = 0; // Number of useful target values
    for (int i = 0; i < n; i++)
    {
        float a = pA[i];
        float b = pB[i];
        float targetValue = a / b;
        targetValues[i] = targetValue;
        if (std::isfinite(targetValue))
        {
            indices[m++] = i;
        }
    }
    if (m <= 0)
        return 0;
    if (m == 1)
        return targetValues[indices[0]]; // If there is only one target value, then it has to be the best one.

    // Sort the indices by target value:
    std::sort(indices, indices + m, [&](int ia, int ib) {
        return targetValues[ia] < targetValues[ib];
    });

    // Start from the extremes and meet at the optimal solution somewhere in the middle:
    int l = 0;
    int r = m - 1;
    // m >= 2 is guaranteed => l < r
    float penaltyFactorL = std::abs(pB[indices[l]]);
    float penaltyFactorR = std::abs(pB[indices[r]]);
    while (l < r)
    {
        if (l == r - 1 && penaltyFactorL == penaltyFactorR)
        {
            break;
        }
        if (penaltyFactorL < penaltyFactorR)
        {
            l++;
            if (l < r)
            {
                penaltyFactorL += std::abs(pB[indices[l]]);
            }
        }
        else
        {
            r--;
            if (l < r)
            {
                penaltyFactorR += std::abs(pB[indices[r]]);
            }
        }
    }

    // Return the best target value:
    if (l == r)
        return targetValues[indices[l]];
    else
        return (targetValues[indices[l]] + targetValues[indices[r]]) * 0.5f;
}