Applying a rotation matrix to a point about an origin fails - math

From this answer: https://stackoverflow.com/a/34060479/1493455 I have made some code that should rotate a point around an origin point. The origin is at 0,0,0.
Instead of yaw, pitch and roll, my rotations are xrot, yrot and zrot, given in degrees (0-360). A rotation around the X axis makes the object pitch back and forth; a rotation around the Z axis rotates it as if it were the object's yaw.
I think I messed up somewhere in translating the values, but I can't figure out why.
When rotating the point around each separate axis (keeping the other two at 0 degrees), every rotation gives the right result.
Combined, the results are good when rotating over X and Z, or Y and Z. The results are not correct when rotating around X and Y.
// angles negated and converted from degrees to radians
var cosa = cos(-degtorad(zrot));
var sina = sin(-degtorad(zrot));
var cosb = cos(-degtorad(yrot));
var sinb = sin(-degtorad(yrot));
var cosc = cos(-degtorad(xrot));
var sinc = sin(-degtorad(xrot));
// entries of the combined rotation matrix
var Axx = cosa*cosb;
var Axy = cosa*sinb*sinc - sina*cosc;
var Axz = cosa*sinb*cosc + sina*sinc;
var Ayx = sina*cosb;
var Ayy = sina*sinb*sinc + cosa*cosc;
var Ayz = sina*sinb*cosc - cosa*sinc;
var Azx = -sinb;
var Azy = cosb*sinc;
var Azz = cosb*cosc;
// apply the matrix to the point (x, y, z)
var px = Axx*x + Axy*y + Axz*z;
var py = Ayx*x + Ayy*y + Ayz*z;
var pz = Azx*x + Azy*y + Azz*z;
degtorad is basically return input * pi / 180
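For reference, the nine A terms above are exactly the entries of the combined matrix R = Rz(a) * Ry(b) * Rx(c) (a, b, c being the negated z, y and x angles in radians): the rotation about X is applied first, then Y, then Z, all about the fixed world axes. A quick numpy sketch (not part of the original code) confirms the expansion and shows that the order of the factors matters:

import numpy as np

def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

a, b, c = np.radians([30.0, 45.0, 60.0])
print(Rz(a) @ Ry(b) @ Rx(c))  # matches the Axx..Azz terms above
print(Rx(c) @ Ry(b) @ Rz(a))  # a different matrix: composition order matters

If the 3D model applies its xrot/yrot/zrot in a different order, or about the object's own (already rotated) axes, the single-axis cases would still agree while the combined rotations diverge.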
I feel like there is an issue with the order of the calculation, since the individual rotations give the proper outcome; it's just the combination of the three that doesn't.
I know this because I'm rotating a 3D model using the zrot, yrot and xrot values and drawing the point in the same 3D space; the points line up with the model in most cases, but not for rotations on X and Y, or on X, Y and Z.
VIDEO OF RESULT: https://streamable.com/481ly (there are 2 points on the corners of that cupboard which are calculated through this code)

Related

three.js rotate object along an axis direction

I want to rotate a cylinder onto a certain axis defined by two points p1 and p2.
I create the cylinder with height l equal to the distance between the two points, and I place it in the middle of that axis.
var xd = p2.x - p1.x,
    yd = p2.y - p1.y,
    zd = p2.z - p1.z,
    l = Math.sqrt(xd*xd + yd*yd + zd*zd);
var cylinder = new THREE.Mesh( new THREE.CylinderGeometry( 5, 5, l, 32 ), new THREE.MeshBasicMaterial( {color: "#ffffff"} ) );
cylinder.position.set(p1.x+xd/2, p1.y+yd/2, p1.z+zd/2);
I use setFromUnitVectors to get the rotation needed between the two points and apply it to the rotation of the cylinder:
var quaternion = new THREE.Quaternion();
quaternion.setFromUnitVectors(new THREE.Vector3(p1.x,p1.y,p1.z).normalize(),new THREE.Vector3(p2.x,p2.y,p2.z).normalize());
cylinder.rotation.setFromQuaternion(quaternion);
I don't see what is wrong; or maybe there is another way to do it?
Another way to do this is to just make the cylinder lookAt(p2), where p2 is a Vector3.
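For what it's worth, setFromUnitVectors expects two unit direction vectors, and the snippet above normalizes the two point positions instead of the direction between them; normalizing (p2 - p1), and using the cylinder's local Y axis as the 'from' vector (three.js cylinders are built along Y), may already fix it. The underlying math is small enough to write out; a minimal sketch in Python (illustrative names, quaternion given as (x, y, z, w)):

from math import sqrt

def quat_from_unit_vectors(u, v):
    # quaternion rotating unit vector u onto unit vector v:
    # vector part = cross(u, v), scalar part = 1 + dot(u, v), then normalize
    cx = u[1]*v[2] - u[2]*v[1]
    cy = u[2]*v[0] - u[0]*v[2]
    cz = u[0]*v[1] - u[1]*v[0]
    w = 1.0 + (u[0]*v[0] + u[1]*v[1] + u[2]*v[2])
    n = sqrt(cx*cx + cy*cy + cz*cz + w*w)
    if n == 0:
        # u and v are opposite; a robust version would pick an axis perpendicular to u
        return (1.0, 0.0, 0.0, 0.0)
    return (cx/n, cy/n, cz/n, w/n)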

How is this done in 3 dimensions instead of 2d?

I would like to know how to move an object from pointA towards pointB by a small distance.
I know how it's done in 2D, but 3D is definitely a little different.
Here is some code showing how I do it in 2D.
Vector pointA = ccp(1,1);
Vector pointB = ccp(10,10);
//find angle between the two points
float diffX = pointA.x - pointB.x;
float diffY = pointA.y - pointB.y;
float angle = atan2f(diffX, diffY);
angle -= CC_DEGREES_TO_RADIANS(90);
float dis = 1.5;//how far away this should extend from the last particle
float x = -cosf(angle)*dis;
float y = sinf(angle)*dis;
object.position = ccp(object.position.x+x,object.position.y+y);
With that code I can successfully move an object by a distance of 1.5 towards pointB, but I just can't figure out how to do it with three dimensions.
So my question is: how do I accomplish this in 3D?
You don't need to calculate the angle at all.
Just normalize the gap vector and move along it.
Vector pointA = Vector(1,1,1);
Vector pointB = Vector(10,10,10);
Vector gap = pointB - pointA;
// Note: this divides by zero when pointA and pointB coincide.
Vector dir = gap / sqrt(gap.x * gap.x + gap.y * gap.y + gap.z * gap.z);
float dist = 1.5;
object.position += dir * dist;
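The same idea as a minimal sketch in Python (plain tuples standing in for the Vector type above):

from math import sqrt

def move_towards(point_a, point_b, step):
    # direction = (B - A) normalized; guard against A == B
    gap = tuple(b - a for a, b in zip(point_a, point_b))
    length = sqrt(sum(g * g for g in gap))
    if length == 0:
        return point_a
    return tuple(a + g / length * step for a, g in zip(point_a, gap))

# move 1.5 units from (1, 1, 1) towards (10, 10, 10)
print(move_towards((1, 1, 1), (10, 10, 10), 1.5))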

Draw Lines Over a Circle

There's a line A-B with C at the center between A and B. Together they form a circle as in the figure: line A-B is the circle's diameter and C is its center. My problem is that I have no idea how to draw the other three lines (in blue), each 45 degrees away from AC or AB. No, this is not homework; it's part of some complex geometry in a rendering.
[figure: circle with diameter A-B, center C, and three blue lines through C at 45-degree steps]
There are a few ways to solve this problem, one of which is to find the angle of the line and then add 45 degrees to it a few times. Here's an example; it's in Python, but translating the math should be easy (and I've tried to write the Python in a simplistic way).
[output plots for a few example lines omitted]
The main function is calc_points; the rest just gives it A and B points that intersect the circle and makes the plots.
from math import atan2, sin, cos, sqrt, pi
from matplotlib import pyplot

def calc_points(A, B, C):
    dx = C[0] - A[0]
    dy = C[1] - A[1]
    line_angle = atan2(dy, dx)
    radius = sqrt(dy*dy + dx*dx)
    new_points = []
    # now go around the circle and find the points
    for i in range(3):
        angle = line_angle + (i+1)*45*(pi/180)  # new angle in radians
        x = radius*cos(angle) + C[0]
        y = radius*sin(angle) + C[1]
        new_points.append([x, y])
    return new_points

# test this with some reasonable values
pyplot.figure()
for i, a in enumerate((-20, 20, 190)):
    radius = 5
    C = [2, 2]
    # find an A and B on the circle and plot them
    angle = a*(pi/180)
    A = [radius*cos(pi+angle)+C[0], radius*sin(pi+angle)+C[1]]
    B = [radius*cos(angle)+C[0], radius*sin(angle)+C[1]]
    pyplot.subplot(1, 3, i+1)
    pyplot.plot([A[0], C[0]], [A[1], C[1]], 'r')
    pyplot.plot([B[0], C[0]], [B[1], C[1]], 'r')
    # now run these through the calc_points function and plot the new lines
    new_points = calc_points(A, B, C)
    for np in new_points:
        pyplot.plot([np[0], C[0]], [np[1], C[1]], 'b')
    pyplot.xlim(-8, 8)
    pyplot.ylim(-8, 8)
    for x, X in (("A", A), ("B", B), ("C", C)):
        pyplot.text(X[0], X[1], x)
pyplot.show()
If you want to find the coordinates of the blue lines, you may find some information about transformations (rotations) helpful:
http://en.wikipedia.org/wiki/Rotation_matrix
You need to rotate, for example, the vector AC; then you can find the coordinates of the end point of each blue line.
Start with this and add a button with the following code:
private void btnCircleLined_Click(object sender, System.EventArgs e)
{
    Graphics graph = Graphics.FromImage(DrawArea);
    int x = 100, y = 100, diameter = 50;
    myPen.Color = Color.Green;
    myPen.Width = 10;
    graph.DrawEllipse(myPen, x, y, diameter, diameter);
    myPen.Color = Color.Red;
    double radian = 45 * Math.PI / 180;
    int xOffSet = (int)(Math.Cos(radian) * diameter / 2);
    int yOffSet = (int)(Math.Sin(radian) * diameter / 2);
    graph.DrawLine(myPen, x, y + yOffSet + myPen.Width + diameter / 2, x + xOffSet + myPen.Width + diameter / 2, y);
    graph.DrawLine(myPen, x, y, x + xOffSet + myPen.Width + diameter / 2, y + yOffSet + myPen.Width + diameter / 2);
    graph.Dispose();
    this.Invalidate();
}
Edit: I could not see your picture, so I misinterpreted your question, but this should get you started.
Translate A with C at the origin (i.e. A-C), rotate CW 45°, then translate back. Repeat three more times.
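A minimal sketch of that translate-rotate-translate step in Python (standard math axes, where clockwise means a negative angle; with the y axis pointing down, as on screens, the sign flips):

from math import cos, sin, radians

def rotate_about(p, c, degrees):
    # translate so c is at the origin, rotate, then translate back
    t = radians(degrees)
    x, y = p[0] - c[0], p[1] - c[1]
    return (c[0] + x*cos(t) - y*sin(t),
            c[1] + x*sin(t) + y*cos(t))

# example: A on a radius-5 circle centred at C; end points of the three blue lines
A, C = (-3.0, 2.0), (2.0, 2.0)
print([rotate_about(A, C, -45 * (i + 1)) for i in range(3)])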
If I were doing this I'd use polar coordinates as an easy way of figuring out the coordinates of the points on the circumference that you need, and then draw lines to them from the centre of the circle.

correcting fisheye distortion programmatically

BOUNTY STATUS UPDATE:
I discovered how to map a linear lens, from destination coordinates to source coordinates.
How do you calculate the radial distance from the centre to go from fisheye to rectilinear?
1). I actually struggle to reverse it, and to map source coordinates to destination coordinates. What is the inverse, in code, in the style of the converting functions I posted?
2). I also see that my undistortion is imperfect on some lenses, presumably those that are not strictly linear. What is the equivalent to-and-from source-and-destination mapping for those lenses? Again, more code than just mathematical formulae, please...
Question as originally stated:
I have some points that describe positions in a picture taken with a fisheye lens.
I want to convert these points to rectilinear coordinates. I want to undistort the image.
I've found this description of how to generate a fisheye effect, but not how to reverse it.
There's also a blog post that describes how to use tools to do it; these pictures are from that:
(1) Input: the original image, with the fisheye distortion to fix.
(2) Output: the corrected image (technically also with perspective correction, but that's a separate step).
How do you calculate the radial distance from the centre to go from fisheye to rectilinear?
My function stub looks like this:
Point correct_fisheye(const Point& p, const Size& img) {
    // to polar
    const Point centre = {img.width/2, img.height/2};
    const Point rel = {p.x-centre.x, p.y-centre.y};
    const double theta = atan2(rel.y, rel.x);
    double R = sqrt((rel.x*rel.x) + (rel.y*rel.y));
    // fisheye undistortion in here please
    //... change R ...
    // back to rectangular
    const Point ret = Point(centre.x + R*cos(theta), centre.y + R*sin(theta));
    fprintf(stderr, "(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n",
            p.x, p.y, img.width, img.height, theta, R, ret.x, ret.y);
    return ret;
}
Alternatively, I could somehow convert the image from fisheye to rectilinear before finding the points, but I'm completely befuddled by the OpenCV documentation. Is there a straightforward way to do it in OpenCV, and does it perform well enough to do it to a live video feed?
The description you mention states that the projection by a pin-hole camera (one that does not introduce lens distortion) is modeled by
R_u = f*tan(theta)
and the projection by common fisheye lens cameras (that is, distorted) is modeled by
R_d = 2*f*sin(theta/2)
You already know R_d and theta and if you knew the camera's focal length (represented by f) then correcting the image would amount to computing R_u in terms of R_d and theta. In other words,
R_u = f*tan(2*asin(R_d/(2*f)))
is the formula you're looking for. Estimating the focal length f can be solved by calibrating the camera or other means such as letting the user provide feedback on how well the image is corrected or using knowledge from the original scene.
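In code, that correction (the "... change R ..." part of the stub above) might look like this minimal Python sketch, with f assumed to be known, in pixels:

from math import asin, tan

def undistort_radius(R_d, f):
    # R_u = f * tan(2 * asin(R_d / (2*f)))
    # only valid while R_d/(2*f) <= 1 and the resulting angle stays below 90 degrees
    return f * tan(2.0 * asin(R_d / (2.0 * f)))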
In order to solve the same problem using OpenCV, you would have to obtain the camera's intrinsic parameters and lens distortion coefficients. See, for example, Chapter 11 of Learning OpenCV (and don't forget to check the errata). Then you can use a program such as this one (written with the Python bindings for OpenCV) to reverse lens distortion:
#!/usr/bin/python

# ./undistort 0_0000.jpg 1367.451167 1367.451167 0 0 -0.246065 0.193617 -0.002004 -0.002056

import sys
import cv

def main(argv):
    if len(argv) < 10:
        print 'Usage: %s input-file fx fy cx cy k1 k2 p1 p2 output-file' % argv[0]
        sys.exit(-1)

    src = argv[1]
    fx, fy, cx, cy, k1, k2, p1, p2, output = argv[2:]

    intrinsics = cv.CreateMat(3, 3, cv.CV_64FC1)
    cv.Zero(intrinsics)
    intrinsics[0, 0] = float(fx)
    intrinsics[1, 1] = float(fy)
    intrinsics[2, 2] = 1.0
    intrinsics[0, 2] = float(cx)
    intrinsics[1, 2] = float(cy)

    dist_coeffs = cv.CreateMat(1, 4, cv.CV_64FC1)
    cv.Zero(dist_coeffs)
    dist_coeffs[0, 0] = float(k1)
    dist_coeffs[0, 1] = float(k2)
    dist_coeffs[0, 2] = float(p1)
    dist_coeffs[0, 3] = float(p2)

    src = cv.LoadImage(src)
    dst = cv.CreateImage(cv.GetSize(src), src.depth, src.nChannels)
    mapx = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    mapy = cv.CreateImage(cv.GetSize(src), cv.IPL_DEPTH_32F, 1)
    cv.InitUndistortMap(intrinsics, dist_coeffs, mapx, mapy)
    cv.Remap(src, dst, mapx, mapy, cv.CV_INTER_LINEAR + cv.CV_WARP_FILL_OUTLIERS, cv.ScalarAll(0))
    # cv.Undistort2(src, dst, intrinsics, dist_coeffs)
    cv.SaveImage(output, dst)

if __name__ == '__main__':
    main(sys.argv)
Also note that OpenCV uses a very different lens distortion model to the one in the web page you linked to.
(Original poster, providing an alternative)
The following function maps destination (rectilinear) coordinates to source (fisheye-distorted) coordinates. (I'd appreciate help reversing it.)
I got to this point through trial and error: I don't fundamentally grasp why this code works; explanations and improved accuracy are appreciated!
from math import atan, sqrt

def dist(x, y):
    return sqrt(x*x + y*y)

def correct_fisheye(src_size, dest_size, dx, dy, factor):
    """ returns a tuple of source coordinates (sx, sy)
    (note: values can be out of range) """
    # convert dx,dy to relative coordinates
    rx, ry = dx - (dest_size[0]/2), dy - (dest_size[1]/2)
    # calc theta
    r = dist(rx, ry) / (dist(src_size[0], src_size[1]) / factor)
    if r == 0:
        theta = 1.0
    else:
        theta = atan(r) / r
    # back to absolute coordinates
    sx, sy = (src_size[0]/2) + theta*rx, (src_size[1]/2) + theta*ry
    # done
    return (int(round(sx)), int(round(sy)))
When used with a factor of 3.0, it successfully undistorts the images used as examples (I made no attempt at quality interpolation). [the result image and the blog-post comparison image are dead links]
If you think your formulas are exact, you can compute an exact formula with trig, like so:
Rin = 2 f sin(w/2) -> sin(w/2)= Rin/2f
Rout= f tan(w) -> tan(w)= Rout/f
(Rin/2f)^2 = [sin(w/2)]^2 = (1 - cos(w))/2 -> cos(w) = 1 - 2(Rin/2f)^2
(Rout/f)^2 = [tan(w)]^2 = 1/[cos(w)]^2 - 1
-> (Rout/f)^2 = 1/(1-2[Rin/2f]^2)^2 - 1
However, as @jmbr says, the actual camera distortion will depend on the lens and the zoom. Rather than rely on a fixed formula, you might want to try a polynomial expansion:
Rout = Rin*(1 + A*Rin^2 + B*Rin^4 + ...)
By tweaking first A, then higher-order coefficients, you can compute any reasonable local function (the form of the expansion takes advantage of the symmetry of the problem). In particular, it should be possible to compute initial coefficients to approximate the theoretical function above.
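As a sketch of how such initial coefficients could be computed: fit A and B against the exact trig formula above by least squares (working in units of f, so s = Rin/f):

import numpy as np

# exact mapping from above: (Rout/f)^2 = 1/(1 - 2*[Rin/2f]^2)^2 - 1
s = np.linspace(0.01, 1.0, 200)
r_out = np.sqrt(1.0 / (1.0 - s**2/2)**2 - 1.0)

# least-squares fit of Rout = Rin*(1 + A*Rin^2 + B*Rin^4), i.e. Rout/Rin - 1 = A*s^2 + B*s^4
M = np.column_stack([s**2, s**4])
A, B = np.linalg.lstsq(M, r_out/s - 1.0, rcond=None)[0]
print(A, B)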
Also, for good results, you will need to use an interpolation filter to generate your corrected image. As long as the distortion is not too great, you can use the kind of filter you would use to rescale the image linearly without much problem.
Edit: as per your request, the equivalent scaling factor for the above formula:
(Rout/f)^2 = 1/(1-2[Rin/2f]^2)^2 - 1
-> Rout/f = [Rin/f] * sqrt(1-[Rin/f]^2/4)/(1-[Rin/f]^2/2)
If you plot the above formula alongside tan(Rin/f), you can see that they are very similar in shape. Basically, distortion from the tangent becomes severe before sin(w) becomes much different from w.
The inverse formula should be something like:
Rin/f = [Rout/f] / sqrt( sqrt([Rout/f]^2+1) * (sqrt([Rout/f]^2+1) + 1) / 2 )
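A quick numerical sanity check (again in units of f) that the two formulas really are inverses of each other:

from math import sqrt

def forward(s):   # Rin/f -> Rout/f
    return s * sqrt(1 - s*s/4) / (1 - s*s/2)

def inverse(t):   # Rout/f -> Rin/f
    u = sqrt(t*t + 1)
    return t / sqrt(u * (u + 1) / 2)

print(inverse(forward(0.5)))  # prints ~0.5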
I blindly implemented the formulas from here, so I cannot guarantee it would do what you need.
Use auto_zoom to get the value for the zoom parameter.
from math import asin, pi, sqrt, tan

def dist(x, y):
    return sqrt(x*x + y*y)

def fisheye_to_rectilinear(src_size, dest_size, sx, sy, crop_factor, zoom):
    """ returns a tuple of dest coordinates (dx, dy)
    (note: values can be out of range)
    crop_factor is the ratio of sphere diameter to the diagonal of the source image """
    # convert sx,sy to relative coordinates
    rx, ry = sx - (src_size[0]/2), sy - (src_size[1]/2)
    r = dist(rx, ry)   # (note: r == 0 at the exact centre would divide by zero below)
    # focal distance = radius of the sphere
    f = dist(src_size[0], src_size[1]) * crop_factor / pi
    # calc theta 1) linear mapping (older Nikon)
    theta = r / f
    # calc theta 2) nonlinear mapping
    # theta = asin(r / (2 * f)) * 2
    # calc new radius
    nr = tan(theta) * zoom
    # back to absolute coordinates
    dx, dy = (dest_size[0]/2) + rx/r*nr, (dest_size[1]/2) + ry/r*nr
    # done
    return (int(round(dx)), int(round(dy)))

def fisheye_auto_zoom(src_size, dest_size, crop_factor):
    """ calculate zoom such that the left edge of the source image
    matches the left edge of the dest image """
    # try to see what happens with zoom=1
    dx, dy = fisheye_to_rectilinear(src_size, dest_size, 0, src_size[1]/2, crop_factor, 1)
    # calculate zoom so the result is what we wanted
    obtained_r = dest_size[0]/2 - dx
    required_r = dest_size[0]/2
    zoom = required_r / obtained_r
    return zoom
I took what JMBR did and basically reversed it. He took the radius of the distorted image (Rd, that is, the distance in pixels from the center of the image) and found a formula for Ru, the radius of the undistorted image.
You want to go the other way. For each pixel in the undistorted (processed image), you want to know what the corresponding pixel is in the distorted image.
In other words, given (xu, yu) --> (xd, yd). You then replace each pixel in the undistorted image with its corresponding pixel from the distorted image.
Starting where JMBR did, I do the reverse, finding Rd as a function of Ru. I get:
Rd = f * sqrt(2) * sqrt( 1 - 1/sqrt(r^2 +1))
where f is the focal length in pixels (I'll explain later), and r = Ru/f.
The focal length for my camera was 2.5 mm. The size of each pixel on my CCD was 6 um square. f was therefore 2500/6 = 417 pixels. This can be found by trial and error.
Finding Rd allows you to find the corresponding pixel in the distorted image using polar coordinates.
The angle of each pixel from the center point is the same:
theta = arctan( (yu-yc)/(xu-xc) ) where xc, yc are the center points.
Then,
xd = Rd * cos(theta) + xc
yd = Rd * sin(theta) + yc
Make sure you know which quadrant you are in.
Here is the C# code I used:
public class Analyzer
{
    private ArrayList mFisheyeCorrect;
    private int mFELimit = 1500;
    private double mScaleFESize = 0.9;

    public Analyzer()
    {
        //A lookup table so we don't have to calculate Rdistorted over and over
        //The values will be multiplied by focal length in pixels to
        //get the Rdistorted
        mFisheyeCorrect = new ArrayList(mFELimit);
        //i corresponds to Rundist/focalLengthInPixels * 1000 (to get integers)
        for (int i = 0; i < mFELimit; i++)
        {
            double result = Math.Sqrt(1 - 1 / Math.Sqrt(1.0 + (double)i * i / 1000000.0)) * 1.4142136;
            mFisheyeCorrect.Add(result);
        }
    }

    public Bitmap RemoveFisheye(ref Bitmap aImage, double aFocalLinPixels)
    {
        Bitmap correctedImage = new Bitmap(aImage.Width, aImage.Height);
        //The center points of the image
        double xc = aImage.Width / 2.0;
        double yc = aImage.Height / 2.0;
        Boolean xpos, ypos;
        //Move through the pixels in the corrected image;
        //set to corresponding pixels in distorted image
        for (int i = 0; i < correctedImage.Width; i++)
        {
            for (int j = 0; j < correctedImage.Height; j++)
            {
                //which quadrant are we in?
                xpos = i > xc;
                ypos = j > yc;
                //Find the distance from the center
                double xdif = i - xc;
                double ydif = j - yc;
                //The distance squared
                double Rusquare = xdif * xdif + ydif * ydif;
                //the angle from the center
                double theta = Math.Atan2(ydif, xdif);
                //find index for lookup table
                int index = (int)(Math.Sqrt(Rusquare) / aFocalLinPixels * 1000);
                if (index >= mFELimit) index = mFELimit - 1;
                //calculated Rdistorted
                double Rd = aFocalLinPixels * (double)mFisheyeCorrect[index] / mScaleFESize;
                //calculate x and y distances
                double xdelta = Math.Abs(Rd * Math.Cos(theta));
                double ydelta = Math.Abs(Rd * Math.Sin(theta));
                //convert to pixel coordinates
                int xd = (int)(xc + (xpos ? xdelta : -xdelta));
                int yd = (int)(yc + (ypos ? ydelta : -ydelta));
                xd = Math.Max(0, Math.Min(xd, aImage.Width - 1));
                yd = Math.Max(0, Math.Min(yd, aImage.Height - 1));
                //set the corrected pixel value from the distorted image
                correctedImage.SetPixel(i, j, aImage.GetPixel(xd, yd));
            }
        }
        return correctedImage;
    }
}
I found this PDF file and I have proved that the maths are correct (except for the line vd = xd*fv + v0, which should say vd = yd*fv + v0).
http://perception.inrialpes.fr/CAVA_Dataset/Site/files/Calibration_OpenCV.pdf
It does not use all of the latest coefficients that OpenCV has available, but I am sure it could be adapted fairly easily.
double k1 = cameraIntrinsic.distortion[0];
double k2 = cameraIntrinsic.distortion[1];
double p1 = cameraIntrinsic.distortion[2];
double p2 = cameraIntrinsic.distortion[3];
double k3 = cameraIntrinsic.distortion[4];
double fu = cameraIntrinsic.focalLength[0];
double fv = cameraIntrinsic.focalLength[1];
double u0 = cameraIntrinsic.principalPoint[0];
double v0 = cameraIntrinsic.principalPoint[1];
double u, v;
u = thisPoint->x; // the undistorted point
v = thisPoint->y;
double x = ( u - u0 )/fu;
double y = ( v - v0 )/fv;
double r2 = (x*x) + (y*y);
double r4 = r2*r2;
double cDist = 1 + (k1*r2) + (k2*r4);
double xr = x*cDist;
double yr = y*cDist;
double a1 = 2*x*y;
double a2 = r2 + (2*(x*x));
double a3 = r2 + (2*(y*y));
double dx = (a1*p1) + (a2*p2);
double dy = (a3*p1) + (a1*p2);
double xd = xr + dx;
double yd = yr + dy;
double ud = (xd*fu) + u0;
double vd = (yd*fv) + v0;
thisPoint->x = ud; // the distorted point
thisPoint->y = vd;
This can be solved as an optimization problem. Draw curves on images along features that are supposed to be straight lines, and store the contour points for each of those curves. Then solve for the fisheye (distortion) matrix by minimizing how far those point sets deviate from straight lines, as sketched below. It works.
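A minimal sketch of that idea (all names hypothetical; a one-parameter radial model k1, with contours given as N x 2 numpy arrays of points traced along should-be-straight edges, in coordinates normalized about the image centre):

import numpy as np
from scipy.optimize import minimize

def undistort_points(pts, k1):
    # simple one-parameter radial model: p_undistorted = p_distorted * (1 + k1*r^2)
    r2 = np.sum(pts**2, axis=1, keepdims=True)
    return pts * (1.0 + k1 * r2)

def straightness_cost(params, contours):
    # total squared residual of a straight-line fit through each undistorted contour
    k1 = params[0]
    cost = 0.0
    for pts in contours:
        und = undistort_points(pts, k1)
        x, y = und[:, 0], und[:, 1]
        A = np.column_stack([x, np.ones_like(x)])  # fit y = m*x + b (assumes non-vertical curves)
        res = np.linalg.lstsq(A, y, rcond=None)[1]
        cost += res[0] if res.size else 0.0
    return cost

# synthetic test: bend a straight line with the (approximate) inverse of k1 = -0.1,
# then recover the coefficient by minimizing the straightness cost
x = np.linspace(-0.5, 0.5, 20)
line = np.column_stack([x, 0.2*x + 0.1])
r2 = np.sum(line**2, axis=1, keepdims=True)
distorted = line * (1.0 + 0.1 * r2)
result = minimize(straightness_cost, x0=[0.0], args=([distorted],))
print(result.x)  # close to -0.1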
It can also be done manually, by adjusting the fisheye matrix with trackbars in a small OpenCV GUI for manual calibration.

Calculate second point knowing the starting point and distance

Using a latitude and longitude value (point A), I am trying to calculate another point B, X meters away from A at a bearing of 0 radians, and then display point B's latitude and longitude values.
Example (Pseudo code):
PointA_Lat = x.xxxx;
PointA_Lng = x.xxxx;
Distance = 3; //Meters
bearing = 0; //radians
new_PointB = PointA-Distance;
I was able to calculate the distance between two points, but what I want to find is a second point, knowing the distance and bearing.
Preferably in PHP or Javascript.
Thank you
It seems you are measuring distance (R) in meters and bearing (theta) counterclockwise from due east. For your purposes (hundreds of meters), plane geometry should be accurate enough. In that case:
dx = R*cos(theta) ; theta measured counterclockwise from due east
dy = R*sin(theta) ; dx, dy same units as R
If theta is measured clockwise from due north (for example, compass bearings), the calculation for dx and dy is slightly different:
dx = R*sin(theta) ; theta measured clockwise from due north
dy = R*cos(theta) ; dx, dy same units as R
In either case, the change in degrees longitude and latitude is:
delta_longitude = dx/(111320*cos(latitude)) ; dx, dy in meters
delta_latitude = dy/110540 ; result in degrees long/lat
The difference between the constants 110540 and 111320 is due to the earth's oblateness (its polar and equatorial circumferences are different).
Here's a worked example, using the parameters from a later question of yours:
Given a start location at longitude -87.62788 degrees, latitude 41.88592 degrees,
find the coordinates of the point 500 meters northwest from the start location.
If we're measuring angles counterclockwise from due east, "northwest" corresponds
to theta=135 degrees. R is 500 meters.
dx = R*cos(theta)
= 500 * cos(135 deg)
= -353.55 meters
dy = R*sin(theta)
= 500 * sin(135 deg)
= +353.55 meters
delta_longitude = dx/(111320*cos(latitude))
= -353.55/(111320*cos(41.88592 deg))
= -.004266 deg (approx -15.36 arcsec)
delta_latitude = dy/110540
= 353.55/110540
= .003198 deg (approx 11.51 arcsec)
Final longitude = start_longitude + delta_longitude
= -87.62788 - .004266
= -87.632146
Final latitude = start_latitude + delta_latitude
= 41.88592 + .003198
= 41.889118
It might help to know that 3600 seconds of arc is 1 degree (lat. or long.), that there are 1852 meters in a nautical mile, and that a nautical mile is 1 minute of arc of latitude. Of course you're depending on the distances being relatively short; otherwise you'd have to use spherical trigonometry.
Here is an updated version using Swift:
let location = CLLocation(latitude: 41.88592 as CLLocationDegrees, longitude: -87.62788 as CLLocationDegrees)
let distanceInMeter : Int = 500
let directionInDegrees : Int = 135
let lat = location.coordinate.latitude
let long = location.coordinate.longitude
let radDirection : CGFloat = Double(directionInDegrees).degreesToRadians
let dx = Double(distanceInMeter) * cos(Double(radDirection))
let dy = Double(distanceInMeter) * sin(Double(radDirection))
let radLat : CGFloat = Double(lat).degreesToRadians
let deltaLongitude = dx/(111320 * Double(cos(radLat)))
let deltaLatitude = dy/110540
let endLat = lat + deltaLatitude
let endLong = long + deltaLongitude
Using this extension:
extension Double {
    var degreesToRadians: CGFloat {
        return CGFloat(self) * CGFloat(M_PI) / 180.0
    }
}
dx = sin(bearing)
dy = cos(bearing)
x = center.x + dist*dx;
y = center.y + dist*dy;
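Putting the pieces together, a minimal Python sketch of the planar approximation described above, using compass bearings (clockwise from due north):

from math import cos, radians, sin

def destination(lat, lon, distance_m, bearing_deg):
    # planar approximation; fine for distances of a few hundred meters
    t = radians(bearing_deg)   # clockwise from due north
    dx = distance_m * sin(t)   # meters east
    dy = distance_m * cos(t)   # meters north
    dlon = dx / (111320 * cos(radians(lat)))
    dlat = dy / 110540
    return lat + dlat, lon + dlon

# 500 m northwest (bearing 315) of the worked example's start point
print(destination(41.88592, -87.62788, 500, 315))  # ~ (41.889118, -87.632146)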
