I need to calculate a distance vector from two GPS coordinates.
The purpose is to calculate the vector of one's change in position,
so the coordinates are not far from each other.
I would like to calculate the latitudinal and longitudinal distances in meters.
I found something here,
but this only gives the direction without distance.
Because the coordinates are very close together in my case,
I made the approximation that the center of the earth and those two points form a triangle.
Thus, I can use Al-Kashi's theorem (the law of cosines).
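For reference, Al-Kashi's theorem (the law of cosines) for a triangle with sides a, b, c, where alpha is the angle between b and c, states a^2 = b^2 + c^2 - 2bc*cos(alpha); this is exactly what the code below evaluates for x and y.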
Here is the code:
// Common values
double b = EARTH_RADIUS + destination.altitude;
double c = EARTH_RADIUS + this.altitude;
double b2 = b*b;
double c2 = c*c;
double bc2 = 2*b*c;
// Longitudinal calculations
double alpha = destination.longitude - this.longitude;
// Conversion to radian
alpha = alpha * Math.PI / 180;
// Small-angle approximation
double cos = 1 - alpha*alpha/2; //Math.cos(alpha);
// Use the law of cosines / Al Kashi theorem
double x = Math.sqrt(b2 + c2 - bc2*cos);
// Repeat for latitudinal calculations
alpha = destination.latitude - this.latitude;
alpha = alpha * Math.PI / 180;
cos = 1 - alpha*alpha/2; //Math.cos(alpha); -- reuse the variable, it's already declared above
double y = Math.sqrt(b2 + c2 - bc2*cos);
// Obtain vertical difference, too
double z = destination.altitude - this.altitude;
return new Vector3D(x, y, z);
As you can see,
I have approximated the cosine because the angles are really small.
I think adding the altitude to the earth's radius doesn't give a better approximation,
but since I have it…
I tested it against Google Maps for a distance of 38 meters, and I got a result of 37.877.
My result might be more accurate! ^^
I have a custom Doughnut chart drawer, which can draw doughnuts like this:
I'm pretty satisfied with this; however, I'd like to fine-tune it, and I have no idea of the right solution.
I'd like to have parallel gaps (paddings) between slices.
I enlarged those paddings to give a better view of my goal.
This is how I'm drawing this currently:
double cx, cy;                  //center points of circle
double r1, r2;                  //radius of outer and inner circle
double pad = M_PI / 360 * 12;   //12 degree pad
double alpha = -1 * M_PI / 2;   //starting from up (noon)
double da = 0.0;                //delta of current slice
for (ALL_SLICES) {
    calculate_da(&da);          //calculate slice's delta
    // drawing
    move_pen(cx + r2 * cos(alpha + pad/2), cy + r2 * sin(alpha + pad/2)); //STEP1
    draw_arc(cx, cy, r1, alpha + pad/2, alpha + da - pad/2);              //STEP2
    draw_negative_arc(cx, cy, r2, alpha + da - pad/2, alpha + pad/2);     //STEP3
    fill();
    //update next alpha
    alpha += da;
}
Drawing steps:
So this is fairly simple; however, to have parallel gaps between the slices, I'd have to calculate different angles for the outer start and end points:
I've drawn quite a lot of triangles already, but I was not able to solve this easily.
Again: the goal is to have parallel paddings between the slices, i.e., bringing these two red dots closer together on the circumference to get exactly parallel lines between two slices.
Update:
This is how it looks with the scaled padding from chux's answer:
Unfortunately it's not perfect, these lines are not parallel:
"Pad should be 12degree" --> If you want parallel lines, the inner padding angle and outer padding angles are different - not both 12°.
Use a scaled padding.
Given the outer padding angle is double pad = ...,
Draw first arc as before.
draw_arc(cx, cy, r1, alpha + pad/2, alpha + da - pad/2); //STEP2
The inner padding angle is proportionally larger.
double inner_pad = pad*r1/r2;
draw_negative_arc(cx, cy, r2, alpha + da - inner_pad/2, alpha + inner_pad/2); //STEP3
Tip: Less confusing to use r_inner, r_outer, than r2, r1.
Minor: Padding calculation looks off by 2x.
// double pad = M_PI / 360 * 12; //12 degree pad.
double pad = 2 * M_PI / 360 * 12; //12 degree pad.
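Pieced together with the question's pseudocode helpers, the whole loop would look roughly like this (a sketch only; as the update above shows, the resulting gaps come out only approximately parallel):

double pad = 2 * M_PI / 360 * 12;   // 12-degree outer pad, in radians
double inner_pad = pad * r1 / r2;   // proportionally larger inner pad
for (ALL_SLICES) {
    calculate_da(&da);
    // start at the inner circle, using the inner padding angle
    move_pen(cx + r2 * cos(alpha + inner_pad/2), cy + r2 * sin(alpha + inner_pad/2));
    // the outer arc uses the outer padding angle
    draw_arc(cx, cy, r1, alpha + pad/2, alpha + da - pad/2);
    // the inner arc uses the scaled (larger) padding angle
    draw_negative_arc(cx, cy, r2, alpha + da - inner_pad/2, alpha + inner_pad/2);
    fill();
    alpha += da;
}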
We'll solve this for the case of one arbitrary gap. Then, you can apply our solution to any number of gaps.
First, we can calculate the average of the angles at which the two boundaries radiate from the center. This will be the angle at which our two new segments will radiate from the inner circle to the outer circle. If the original segments radiate from the center of the inner circle at angles a and b, let c be the angle going through the middle of the smaller segment of the circle between them. If a = 60 deg and b = 90 deg, choose c = 75 deg as an example.
Now, through the points of intersection with the edge of the inner circle you've already found, place lines whose slopes have angle c (w.r.t. the positive x-axis, as usual). Then, find the corresponding points of intersection with the outer circle. These new outer points and your old inner points define the parallel segments you are looking for.
Example: inner circle radius r = 10; outer circle radius 20; angles a = 30 deg and b = 60 deg. Your current inner and outer points are p1 = (5sqrt(3), 5), p2 = (5, 5sqrt(3)), q1 = 2p1, q2 = 2p2 (assume the center of the inner circle is at the origin here). Calculate c = 45 deg. The slope of lines with this angle can be found using the tangent; m = 1. Define two lines going through p1 and p2 with slope 1; we get y = x + 5(1 - sqrt(3)) and y = x + 5(sqrt(3) - 1). Now we find the points of intersection of these lines with the outer circle. I will leave this as an exercise, but basically: take the equation of the outer circle, x^2 + y^2 = 400, replace y with the right-hand side of each line equation, and solve for x. You will get two solutions per line (a line passing through a point inside the outer circle must intersect the circle in two places); pick the one that is in the smaller segment defined by your original outer points.
This seems pretty tedious to do by hand, and it is; but once you write the code, the computer will have no issue doing this for you all day long.
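For concreteness, here is a minimal C++ sketch of that substitution for the first line in the example (the variable names are mine, not from the answer):

#include <cmath>
#include <cstdio>

int main() {
    // Intersect the line y = x + k with the outer circle x^2 + y^2 = R^2:
    // substituting gives the quadratic 2x^2 + 2kx + (k^2 - R^2) = 0.
    const double R = 20.0;                          // outer circle radius
    const double k = 5.0 * (1.0 - std::sqrt(3.0));  // intercept of the line through p1
    const double a = 2.0, b = 2.0 * k, c = k * k - R * R;
    const double disc = std::sqrt(b * b - 4.0 * a * c);
    // Two solutions; pick the one lying in the smaller segment near q1.
    std::printf("(%f, %f) or (%f, %f)\n",
                (-b + disc) / (2.0 * a), (-b + disc) / (2.0 * a) + k,
                (-b - disc) / (2.0 * a), (-b - disc) / (2.0 * a) + k);
}

This prints roughly (15.85, 12.19) or (-12.19, -15.85); the first point lies in the smaller segment between the original outer points, so it is the one to keep.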
At chux - Reinstate Monica's request, I'm posting my final solution here, which relies heavily on the link from his comment: Radius and central angle on a circular segment.
1.) Set the initial constants:
double pad = PADDING_VALUE;
double r_inner = INNER_RADIUS;
double r_outer = OUTER_RADIUS;
2.) Calculate the slice's starting point's coordinates (on the inner circle):
double alpha = CURRENT_SLICE_ANGLE;
double inner_angle = alpha + pad/2; // slice will begin with pad/2 offset
double x1 = r_inner * cos(inner_angle);
double y1 = r_inner * sin(inner_angle);
3.) Now calculate the previous slice's ending point's coordinates (also on the inner circle):
double x2 = r_inner * cos(inner_angle - pad); //easy, exactly angle-pad
double y2 = r_inner * sin(inner_angle - pad); //easy, exactly angle-pad
4.) Now I have the coords for the current slice's start and the previous slice's end. I want to keep a constant gap between the two slices, and this gap should be exactly the length of the segment between (x1,y1) and (x2,y2). The coordinate differences form the legs of a right-angled triangle, so the segment's length is easily expressed with the Pythagorean formula:
double hyp = sqrt(pow(x1-x2,2) + pow(y1-y2,2));
5.) This hyp is exactly the chord length (c) here:
6.) From the same Wikipedia page, theta is expressible with the chord length and the radius:
double theta = 2 * asin(hyp / 2 / r_outer);
7.) I have to draw the outer arc with the help of theta and pad:
double outer_angle_correction = (pad - theta) / 2;
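For instance, reusing the question's hypothetical draw_arc helper (a sketch, not the poster's exact code), the corrected outer arc would be:

// widen the outer arc by the correction on both ends, so the outer gap
// subtends theta instead of pad and both chords of the gap have length hyp
draw_arc(cx, cy, r_outer,
         alpha + pad/2 - outer_angle_correction,
         alpha + da - pad/2 + outer_angle_correction);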
Applying this calculation results in this:
This is a bit odd, as the gaps are a bit too large. But these huge gaps were only used for demonstration; after I changed them back to my initially intended values, this is the result:
Perfectly parallel gaps between all slices without using any approximation - just pure math. Sweet.
I want to make a compass with an Arduino and a QMC5883. The magnetometer outputs only X, Y and Z values, and I have to calculate the rest myself. So far, I've used this:
float azimuth = atan2(x, y) * 180.0/PI;
But it's pretty buggy and vulnerable to tilting in any direction. Is there any better algorithm that, for example, phone manufacturers use? I could use an accelerometer for help if needed.
The BBC micro:bit's device abstraction layer (DAL) includes this code to do tilt adjustment based on angles derived from accelerometer data. From https://github.com/lancaster-university/microbit-dal/blob/master/source/drivers/MicroBitCompass.cpp
/**
 * Calculates a tilt compensated bearing of the device, using the accelerometer.
 */
int MicroBitCompass::tiltCompensatedBearing()
{
    // Precompute the tilt compensation parameters to improve readability.
    float phi = accelerometer->getRollRadians();
    float theta = accelerometer->getPitchRadians();

    // Convert to floating point to reduce rounding errors
    Sample3D cs = this->getSample(NORTH_EAST_DOWN);
    float x = (float) cs.x;
    float y = (float) cs.y;
    float z = (float) cs.z;

    // Precompute cos and sin of pitch and roll angles to make the calculation a little more efficient.
    float sinPhi = sin(phi);
    float cosPhi = cos(phi);
    float sinTheta = sin(theta);
    float cosTheta = cos(theta);

    // Calculate the tilt compensated bearing, and convert to degrees.
    float bearing = (360*atan2(x*cosTheta + y*sinTheta*sinPhi + z*sinTheta*cosPhi, z*sinPhi - y*cosPhi)) / (2*PI);

    // Handle the 90 degree offset caused by the NORTH_EAST_DOWN based calculation.
    bearing = 90 - bearing;

    // Ensure the calculated bearing is in the 0..359 degree range.
    if (bearing < 0)
        bearing += 360.0f;

    return (int) (bearing);
}
I'm using Wikipedia's spherical coordinate system article to create a sphere made out of particles in Three.js. Based on this article, I created a small Polarizer class that takes in polar coordinates with setPolar(rho, theta, phi) and returns the corresponding x, y, z.
Here's the setPolar() function:
// Rho: radius
// theta θ: polar angle on Y axis
// phi φ: azimuthal angle on Z axis
Polarizer.prototype.setPolar = function(rho, theta, phi) {
    // Limit values to zero
    this.rho = Math.max(0, rho);
    this.theta = Math.max(0, theta);
    this.phi = Math.max(0, phi);
    // Calculate x,y,z
    this.x = this.rho * Math.sin(this.theta) * Math.sin(this.phi);
    this.y = this.rho * Math.cos(this.theta);
    this.z = this.rho * Math.sin(this.theta) * Math.cos(this.phi);
    return this;
};
I'm using it to position my particles as follows:
var tempPolarizer = new Polarizer();
for (var i = 0; i < geometry.vertices.length; i++) {
    tempPolarizer.setPolar(
        50,                          // Radius of 50
        Math.random() * Math.PI,     // Theta ranges from 0 - PI
        Math.random() * 2 * Math.PI  // Phi ranges from 0 - 2PI
    );
    // Set new vertex positions
    geometry.vertices[i].set(
        tempPolarizer.x,
        tempPolarizer.y,
        tempPolarizer.z
    );
}
It works wonderfully, except that I'm getting high particle densities, or "pinching" at the poles:
I'm stumped as to how to avoid this from happening. I thought of passing a weighted random number to the latitude, but I'm hoping to animate the particles without the longitude also slowing down and bunching up at the poles.
Is there a different formula to generate a sphere where the poles don't get as much weight? Should I be using quaternions instead?
For random uniform sampling
take a random point in the unit cube, treat it as a vector, reject it if it falls outside the unit sphere (otherwise the samples bunch toward the cube's corners and are not uniform), and set its length to the radius of your sphere. For example, something like this in C++:
do {                                // rejection sampling: keep only points inside the unit sphere
    x = 2.0*Random()-1.0;
    y = 2.0*Random()-1.0;
    z = 2.0*Random()-1.0;
    m = x*x + y*y + z*z;
} while ((m > 1.0) || (m < 1e-10)); // also reject points too close to the origin
m = r/sqrt(m);                      // scale the unit direction to radius r
x *= m;
y *= m;
z *= m;
where Random() returns a number in <0.0, 1.0>. For more info, see:
Procedural generation of stars with skybox
For uniform non-random sampling
see related QAs:
Sphere triangulation by mesh subdivision
Make a sphere with equidistant vertices
In order to avoid high density at the poles, I had to lower the likelihood of theta (latitude) landing close to 0 and PI. My input of
Math.random() * Math.PI for theta gives equal likelihood to all values (orange).
Math.acos((Math.random() * 2) - 1) weights the output to make values near 0 and PI less likely, producing uniform density along the sphere's surface (yellow).
Now I can't even tell where the poles are!
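For reference, the same weighting in a self-contained C++ sketch (the names and the radius parameter are illustrative, mirroring the Polarizer's axis convention above):

#include <cmath>
#include <random>

struct Point3 { double x, y, z; };

// theta = acos(2u - 1) removes the pole bias; phi stays uniform in [0, 2*pi)
Point3 randomOnSphere(double radius, std::mt19937 &rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double theta = std::acos(2.0 * uni(rng) - 1.0); // polar angle, pole-bias removed
    double phi = 2.0 * M_PI * uni(rng);             // azimuthal angle
    return { radius * std::sin(theta) * std::sin(phi),
             radius * std::cos(theta),
             radius * std::sin(theta) * std::cos(phi) };
}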
Given n images and a projection matrix for each image, how can I calculate the ray (line) emitted by each pixel of the images, which intersects one of the three planes of the real-world coordinate system? The object captured by the camera is at the same position; just the camera's position is different for each image. That's why there is a separate projection matrix for each image.
As far as my research suggests, this is the inverse of the 3D to 2D projection. Since information is lost when projecting to 2D, it's only possible to calculate the ray (line) in the real-world coordinate system, which is fine.
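In symbols (a standard formulation, added for clarity rather than taken from the post): a pixel (n, m, 1) in homogeneous coordinates satisfies x ~ P*X = K*[R t]*X, and because the depth is lost, X is only determined up to the one-parameter ray X(lambda) = C + lambda*d, where C = -R^T*t is the camera center and d is a direction vector recovered from the pixel.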
An example projection matrix P, which I calculated based on the given K, R and t components according to P = K*[R t]:
3310.400000 0.000000 316.730000
K= 0.000000 3325.500000 200.550000
0.000000 0.000000 1.000000
-0.14396457836077139000 0.96965263281337499000 0.19760617153779569000
R= -0.90366580603479685000 -0.04743335255026152200 -0.42560419233334673000
-0.40331536459778505000 -0.23984130575212276000 0.88306936201487163000
-0.010415508744
t= -0.0294278883669
0.673097816109
-604.322 3133.973 933.850 178.711
P= -3086.026 -205.840 -1238.247 37.127
-0.403 -0.240 0.883 0.673
I am using the "DinoSparseRing" data set available at http://vision.middlebury.edu/mview/data
for (int i = 0; i < 16; i++) {
    RealMatrix rotationMatrix = MatrixUtils.createRealMatrix(rotationMatrices[i]);
    RealVector translationVector = MatrixUtils.createRealVector(translationMatrices[i]);
    // construct projection matrix according to K*[R t]
    RealMatrix projMatrix = getP(kalibrationMatrices[i], rotationMatrices[i], translationMatrices[i]);
    // getM returns the first 3x3 block of the 3x4 projection matrix
    RealMatrix projMInverse = MatrixUtils.inverse(getM(projMatrix));
    // compute camera center c = -R^T * t
    RealVector c = rotationMatrix.transpose().scalarMultiply(-1.f).operate(translationVector);
    // compute all unprojected points and direction vector per projected point
    for (int m = 0; m < image_m_num_pixel; m++) {
        for (int n = 0; n < image_n_num_pixel; n++) {
            double[] projectedPoint = new double[]{n, m, 1};
            // undo perspective divide
            projectedPoint[0] *= projectedPoint[2];
            projectedPoint[1] *= projectedPoint[2];
            // undo projection by multiplying by inverse:
            RealVector projectedPointVector = MatrixUtils.createRealVector(projectedPoint);
            RealVector unprojectedPointVector = projMInverse.operate(projectedPointVector);
            // compute direction vector
            RealVector directionVector = unprojectedPointVector.subtract(c);
            // normalize direction vector
            double dist = Math.sqrt((directionVector.getEntry(0) * directionVector.getEntry(0))
                    + (directionVector.getEntry(1) * directionVector.getEntry(1))
                    + (directionVector.getEntry(2) * directionVector.getEntry(2)));
            directionVector.setEntry(0, directionVector.getEntry(0) * (1.0 / dist));
            directionVector.setEntry(1, directionVector.getEntry(1) * (1.0 / dist));
            directionVector.setEntry(2, directionVector.getEntry(2) * (1.0 / dist));
        }
    }
}
The following two plots show the outer rays for each image (16 images total). The blue end is the camera point and the cyan is a bounding box containing the object captured by the camera. One can clearly see the rays projecting back to the object in the world coordinate system.
To define the ray you need a start point (which is the camera/eye position) and a direction vector, which can be calculated using any point on the ray.
For a given pixel in the image, you have a projected X and Y (zeroed at the center of the image) but no Z depth value. However the real-world co-ordinates corresponding to all possible depth values for that pixel will all lie on the ray you are trying to calculate, so you can just choose any arbitrary non-zero Z value, since any point on the ray will do.
float projectedX = (x - imageCenterX) / (imageWidth * 0.5f);
float projectedY = (y - imageCenterY) / (imageHeight * 0.5f);
float projectedZ = 1.0f; // any arbitrary value
Now that you have a 3D projected co-ordinate you can undo the projection by applying the perspective divide in reverse by multiplying X and Y by Z, then multiplying the result by the inverse projection matrix to get the unprojected point.
// undo perspective divide (redundant if projectedZ = 1, shown for completeness)
projectedX *= projectedZ;
projectedY *= projectedZ;
Vector3 projectedPoint = new Vector3(projectedX, projectedY, projectedZ);
// undo projection by multiplying by inverse:
Matrix invProjectionMat = projectionMat.inverse();
Vector3 unprojectedPoint = invProjectionMat.multiply(projectedPoint);
Subtract the camera position from the unprojected point to get the direction vector from the camera to the point, and then normalize it. (This step assumes that the projection matrix defines both the camera position and orientation; if the position is stored separately, you don't need to do the subtraction.)
Vector3 directionVector = unprojectedPoint.subtract(cameraPosition);
directionVector.normalize();
The ray is defined by the camera position and the normalized direction vector. You can then intersect it with any of the X, Y, Z planes.
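As an illustration (a minimal sketch with made-up example values, not part of the answer above), intersecting such a ray with the plane z = 0 amounts to solving origin.z + t*dir.z = 0 for t:

#include <cstdio>

struct Vec3 { double x, y, z; };

int main() {
    Vec3 origin = {1.0, 2.0, 5.0};    // camera position (example values)
    Vec3 dir = {0.0, 0.1, -0.99};     // normalized ray direction (example values)
    if (dir.z != 0.0) {               // ray not parallel to the plane
        double t = -origin.z / dir.z; // parameter where the ray meets z = 0
        std::printf("hit: (%f, %f, 0)\n",
                    origin.x + t * dir.x, origin.y + t * dir.y);
    }
}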
I have two vectors in a game. One vector is the player; one vector is an object. I also have a vector that specifies the direction the player is facing. The direction vector has no z part; it is a unit vector (magnitude 1) placed around the origin.
I want to calculate the angle between the direction the soldier is currently facing and the object, so I can correctly pan some audio (stereo only).
The diagram below describes my problem. I want to calculate the angle between the two dashed lines. One dashed line connects the player and the object, and the other is a line representing the direction the player is facing from the point the player is at.
At the moment, I am doing this (assume player, object and direction are all vectors with three components: x, y and z):
Vector3d v1 = direction;
Vector3d v2 = object - player;
v1.normalise();
v2.normalise();
float angle = acos(dotProduct(v1, v2));
But it seems to give me incorrect results. Any advice?
Test of code:
Vector3d soldier = Vector3d(1.f, 1.f, 0.f);
Vector3d object = Vector3d(1.f, -1.f, 0.f);
Vector3d dir = Vector3d(1.f, 0.f, 0.f);
Vector3d v1 = dir;
Vector3d v2 = object - soldier;
long steps = 360;
for (long step = 0; step < steps; step++) {
    float rad = (float)step * (M_PI / 180.f);
    v1.x = cosf(rad);
    v1.y = sinf(rad);
    v1.normalise();
    float dx = dotProduct(v2, v1);
    float dy = dotProduct(v2, soldier);
    float vangle = atan2(dx, dy);
}
You should always use atan2 when computing angular deltas, and then normalize.
The reason is that, for example, acos is a function with domain -1..1; even after normalizing, if the input's absolute value gets bigger than 1 (because of floating-point approximations), the function will fail, even though in such a case you would clearly want an angle of 0 or PI instead. Also, acos cannot cover the full range -PI..PI, so you would need explicit sign tests to find the correct quadrant.
Instead, atan2's only singularity is at (0, 0) (where, of course, it doesn't make sense to compute an angle), and its codomain is the full circle -PI..PI.
Here is an example in C++
// Absolute angle 1
double a1 = atan2(object.y - player.y, object.x - player.x);
// Absolute angle 2
double a2 = atan2(direction.y, direction.x);
// Relative angle
double rel_angle = a1 - a2;
// Normalize to -PI .. +PI
rel_angle -= floor((rel_angle + PI)/(2*PI)) * (2*PI);
In the case of a general 3D orientation you need two orthogonal directions, e.g. the vector of where the nose is pointing and the vector toward your right ear.
In that case the formulas are just slightly more complex, but simpler if you have the dot product handy:
// I'm assuming that '*' is defined as the dot product
// between two vectors: x1*x2 + y1*y2 + z1*z2
double dx = (object - player) * nose_direction;
double dy = (object - player) * right_ear_direction;
double angle = atan2(dx, dy); // Already in -PI ... PI range
In 3D space, you also need to compute the rotation axis:
Vector3d axis = normalise(crossProduct(normalise(v1), normalise(v2)));
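A related, numerically robust way to get the unsigned angle itself (a sketch assuming simple dot/cross helpers, in the spirit of the atan2 advice above):

#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

double dot(const Vec3 &a, const Vec3 &b) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// atan2(|v1 x v2|, v1 . v2) avoids acos's domain problems entirely
double angleBetween(const Vec3 &v1, const Vec3 &v2) {
    Vec3 c = cross(v1, v2);
    return std::atan2(std::sqrt(dot(c, c)), dot(v1, v2));
}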