Get Camera 2d position by 3 image points (1D) - math

I have an image and 3 points with the following data for each point:
x and y 2D-world coordinates
x image coordinate
How can I calculate the camera orientation (only left/right) and the 2D-world position?
Thanks.
Edit: the image is a normal photograph (so perspective projection). The world coordinates are a top view of a map (so orthographic projection).

Given a point in world space, the projection can be expressed as
proj(x, y) = ((x - cx) * cos(phi) - (y - cy) * sin(phi)) / ((x - cx) * sin(phi) + (y - cy) * cos(phi))
cx and cy are the camera position and phi is the camera rotation. The projection will result in a value in camera coordinates (not image coordinates). To transform image coordinates to camera coordinates, use
cameraX(imageX) = (2 * imageX / W - 1) * tan(fovy / 2) * ratio
W is the pixel width of the image, fovy is the vertical field of view, ratio is the image's aspect ratio.
Then you want to solve the system of equations formed by the three given points. There is an analytic solution, but it is quite complex. So you're left with numerical (probably least-squares) solvers. Pick one, plug in the formula and get your result. Since you optimize for both a position and an angle, you may want to normalize the values so that they have a similar range. I got quite good results with levmar for similar problems if you're unsure what optimizer to use.
This all assumes that the camera does not distort the image.
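For illustration, here is a rough sketch of the numerical approach in Python, using scipy's least_squares in place of levmar; the field of view, aspect ratio and sample point data are made-up values, and the function names are mine.

import numpy as np
from scipy.optimize import least_squares

# Made-up example data: world positions of the three points and their
# observed horizontal image coordinates (pixels).
world_pts = np.array([[2.0, 5.0], [-1.0, 7.0], [4.0, 9.0]])
image_x = np.array([320.0, 110.0, 560.0])
W = 640.0                      # image width in pixels
fovy = np.radians(45.0)        # assumed vertical field of view
ratio = 4.0 / 3.0              # assumed aspect ratio

def camera_x(px):
    # Image x coordinate -> camera coordinate, as in the formula above.
    return (2.0 * px / W - 1.0) * np.tan(fovy / 2.0) * ratio

def residuals(params):
    # Difference between projected and observed camera-space x for each point.
    cx, cy, phi = params
    dx = world_pts[:, 0] - cx
    dy = world_pts[:, 1] - cy
    proj = (dx * np.cos(phi) - dy * np.sin(phi)) / (dx * np.sin(phi) + dy * np.cos(phi))
    return proj - camera_x(image_x)

result = least_squares(residuals, x0=[0.0, 0.0, 0.0])   # initial guess: origin, phi = 0
cx, cy, phi = result.x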

Related

Translating Screen Coordinates [ x, y ] to Camera Pan and Tilt angles

I have an IP camera which can pan, tilt and zoom (PTZ). I am currently streaming the live feed into the browser and want to allow the user to click a point on the screen so that the camera pans and tilts until the clicked position becomes the center of the view.
My camera pans 360 degrees and tilts from -55 to 90 degrees.
Is there an algorithm that will guide me to achieve my goal?
Let's start by declaring a 3D coordinate system around the camera (the origin). I will use the following: The z-axis points upwards. The x-axis is the camera direction with pan=tilt=0 and positive pan angles will move the camera towards the positive y-axis.
Then, the transform for a given pan/tilt configuration is:
T = Ry(-tilt) * Rz(pan)
This is the transform that positions our virtual image plane in 3D space. Let's keep that in mind and go to the image plane.
If we know the vertical and horizontal field of view and assume that lens distortions are already corrected, we can set up our image plane as follows: The image plane is 1 unit away from the camera (just by declaration) in the view direction. Let the center be the plane's local origin. Then, its horizontal extents are +- tan(fovx / 2) and its vertical extents are +- tan(fovy / 2).
Now, given a pixel position (x, y) in this image (origin in the top left corner), we first need to convert this location into a 3D direction. We start by calculating the local coordinates in the image plane. This is for the image's pixel width w and pixel height h:
lx = (2 * x / w - 1) * tan(fovx / 2)
ly = (-2 * y / h + 1) * tan(fovy / 2) (local y-axis points upwards)
lz = 1 (image plane is 1 unit away)
This is the ray that contains the corresponding pixel under the assumption that there is no pan or tilt yet. But now it is time to get rid of this assumption. That's where our initial transform comes into play. We just need to transform this ray:
tx = cos(pan) * cos(tilt) * lx - cos(tilt) * sin(pan) * ly - sin(tilt) * lz
ty = sin(pan) * lx + cos(pan) * ly
tz = cos(pan) * sin(tilt) * lx - sin(pan) * sin(tilt) * ly + cos(tilt) * lz
The resulting direction now describes the ray that contains the specified pixel in the global coordinate system that we set up in the beginning. All that's left is to calculate the new pan/tilt parameters:
tilt = atan2(tz, tx)
pan = asin(ty / sqrt(tx^2 + ty^2 + tz^2))
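Putting the above together, here is a small Python sketch of those steps; the image size and fields of view in the example call are arbitrary:

import math

def click_to_pan_tilt(x, y, w, h, fovx, fovy, pan, tilt):
    # Local direction on the image plane, 1 unit in front of the camera.
    lx = (2.0 * x / w - 1.0) * math.tan(fovx / 2.0)
    ly = (-2.0 * y / h + 1.0) * math.tan(fovy / 2.0)
    lz = 1.0
    # Transform the ray by the current pan/tilt configuration.
    tx = math.cos(pan) * math.cos(tilt) * lx - math.cos(tilt) * math.sin(pan) * ly - math.sin(tilt) * lz
    ty = math.sin(pan) * lx + math.cos(pan) * ly
    tz = math.cos(pan) * math.sin(tilt) * lx - math.sin(pan) * math.sin(tilt) * ly + math.cos(tilt) * lz
    # New pan/tilt parameters for that direction.
    new_tilt = math.atan2(tz, tx)
    new_pan = math.asin(ty / math.sqrt(tx * tx + ty * ty + tz * tz))
    return new_pan, new_tilt

# Example: click near the top right of a 1280x720 frame, camera currently at pan = tilt = 0.
print(click_to_pan_tilt(1000, 100, 1280, 720, math.radians(60), math.radians(35), 0.0, 0.0))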

Interpolation of quaternion orientations and vector positions

I have a camera in 3D space that is defined by a quaternion and a position vector (q1 and p1).
I want to move the camera to another viewpoint defined by another quaternion/vector pair (q2 and p2). To achieve a smooth animation I interpolate the quaternions using spherical linear interpolation and the position vectors using linear interpolation. For small camera movements this works fine, but if the camera has to orbit the model by 180 degrees it looks ugly, because it doesn't orbit the model but goes through it.
So the question is: how do I interpolate the camera position while taking the slerp interpolation of the camera orientation into account?
I found a solution to my problem:
First I calculate the difference between the quaternions q1 and q2 and convert it to an axis-angle representation. Then I calculate the rotation center from the line p1-p2, the rotation axis and the angle:
center = (p1 + p2) * 0.5 + norm(axis X (p2 - p1)) * (0.5 * |p2 - p1| / tan(angle * 0.5))
and then I just rotate the point p1 around the center to interpolate the camera position.
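Here is a sketch of that scheme with scipy's rotation utilities; the function name is mine, q1/q2 are scipy Rotation objects and p1/p2 are numpy arrays (orientation via slerp, position by rotating p1 around the computed center).

import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_pose(q1, p1, q2, p2, t):
    # Orientation: plain slerp between the two quaternions, t in [0, 1].
    orientation = Slerp([0.0, 1.0], Rotation.concatenate([q1, q2]))(t)

    # Axis-angle representation of the relative rotation q1 -> q2.
    rotvec = (q2 * q1.inv()).as_rotvec()
    angle = np.linalg.norm(rotvec)
    if angle < 1e-8:
        return orientation, (1.0 - t) * p1 + t * p2   # no rotation: plain lerp

    axis = rotvec / angle
    chord = p2 - p1
    perp = np.cross(axis, chord)
    perp /= np.linalg.norm(perp)

    # Rotation center from the answer above.
    center = (p1 + p2) * 0.5 + perp * (0.5 * np.linalg.norm(chord) / np.tan(angle * 0.5))

    # Rotate p1 around the center by a fraction t of the full angle.
    position = center + Rotation.from_rotvec(axis * angle * t).apply(p1 - center)
    return orientation, position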

find 3d point on a circle given the angle and radius

http://i.stack.imgur.com/7InNo.png
I am trying to find the green points using the angle, radius and center of the circle.
I am using this image that was posted by another member.
I wish to find the green points, but in a 3d space instead.
I am able to get the x and y values but I am unable to get the z.
r = radius
X = r * cos(angle)
Y = r * sin(angle)
How can i get the value for z-axis?
In the case of 3 dimensions you need 2 angles. Basically what you are doing is converting from spherical coordinates to Cartesian coordinates, so the standard spherical-to-Cartesian formulas apply.
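For example, using the physics convention (theta measured from the positive z-axis, phi the azimuth in the x-y plane), the conversion looks like this in Python; the sample values are arbitrary:

import math

def spherical_to_cartesian(r, theta, phi):
    # theta: inclination from the +z axis, phi: azimuth in the x-y plane (radians).
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z

# With theta = 90 degrees the point lies in the x-y plane, which reproduces the
# 2D formulas from the question: x = r * cos(phi), y = r * sin(phi), z = 0.
print(spherical_to_cartesian(5.0, math.radians(90), math.radians(30)))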

How do I calculate x, y, z velocity given two rotation angles and a speed?

Another way of saying this question: How do I find the length, width and height of a cuboid given its diagonal length and 2 rotational angles?
This is for a 3D game where the user can change the up/down rotation (UP and DOWN arrow keys) and the left/right rotation (LEFT and RIGHT arrow keys), and the object can accelerate and reverse (Q and W). Each frame, the object's x, y, z gets updated according to its current speed and its up/down and left/right rotation.
If alpha is the left/right angle and beta is the up/down angle, then
v.x = speed * sin (alpha) * cos(beta)
v.y = speed * sin (beta)
v.z = speed * cos (alpha) * cos(beta)
This assumes that zero rotation corresponds to the direction (0, 0, 1).
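As a quick sketch (the angle values in the example are arbitrary and in radians):

import math

def velocity_from_angles(speed, alpha, beta):
    # alpha: left/right angle, beta: up/down angle; alpha = beta = 0 gives (0, 0, speed).
    vx = speed * math.sin(alpha) * math.cos(beta)
    vy = speed * math.sin(beta)
    vz = speed * math.cos(alpha) * math.cos(beta)
    return vx, vy, vz

# Per-frame update: 10 units/s, turned 30 degrees left/right, pitched 10 degrees up.
vx, vy, vz = velocity_from_angles(10.0, math.radians(30), math.radians(10))
# position.x += vx * dt, and likewise for y and z.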
I'm assuming that this cuboid is measured using a static frame of reference, where the diagonal starts at the origin and extends to some other point. If not, this question has no definitive answer, since a diagonal length alone cannot determine the width, height and length of an arbitrary cuboid: there are infinitely many cuboids with the same diagonal.
It sounds like what you're using is a spherical coordinate system: http://en.wikipedia.org/wiki/Spherical_coordinate_system#Cartesian_coordinates
From the article:
x = r sin θ cos φ
y = r sin θ sin φ
z = r cos θ
r is your diagonal length. You'll have to determine θ and φ based on your rotation angles; they may not be proper inclination and azimuth angles. See the article for details on how these angles are defined in spherical coordinates.

Processing - Set X/Y Zero Coordinates To Center of Display Window

I'm trying to use latitude and longitude coordinates to plot a map in Processing. Is there a way to set the zero coordinates of the X and Y axes to the center of the display window?
Or does anyone know how to convert spherical coordinates to Cartesian?
Thanks
I'll assume you have spherical coordinates of r, the radius; theta, the horizontal angle around the Z-axis, starting at (1,0,0) and rotating toward (0,1,0); and phi, the vertical angle measured from the positive Z-axis toward the negative Z-axis (that's how I remember it, at least). Remember that angles are in radians in most programming languages; pi radians = 180 degrees.
x = r * cos(theta) * sin(phi)
y = r * sin(theta) * sin(phi)
z = r * cos(phi)
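As a rough sketch covering both parts of the question (the window size and sample coordinates are made up; in Processing itself you can also call translate(width/2, height/2) before drawing to move the origin to the center):

import math

WIDTH, HEIGHT = 800, 600   # assumed display window size

def spherical_to_cartesian(r, theta, phi):
    # phi is measured from the positive z-axis, as in the formulas above.
    x = r * math.cos(theta) * math.sin(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(phi)
    return x, y, z

def to_screen(x, y):
    # Shift so that (0, 0) lands at the center of the display window.
    # (Processing's y-axis points down, so you may also want to negate y.)
    return x + WIDTH / 2, y + HEIGHT / 2

x, y, z = spherical_to_cartesian(100.0, math.radians(45), math.radians(60))
print(to_screen(x, y))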
