I am working on a program for tracking people from CCTV cameras. I have video input from a CCTV camera and a 2D top-view image of the building that contains all the CCTV cameras. When I detect a person in the scene I draw a bounding box around them, and this bounding box has a center point represented as (x, y) in image coordinates. What I want to do is convert this bounding box center point to a 2D point in the building image's coordinate system. Can anyone give me a hint or idea?
Here is the image from the CCTV camera and the image of the building. The red line in the CCTV image is the line that I have, and the red line in the building image is the line I want to obtain.
CCTV camera image:
Building image:
Because the camera is projecting a 3D world down to 2D and you then want to view it from another 2D perspective, there can be errors. For instance, the center point of the person's rectangle could be ~2.7 ft off the ground where they are standing, OR it could be a spot on the floor if the person weren't there. Those would be two very different places on your bird's-eye-view map.
However, since this is for tracking people, you could assume that everyone is approximately the same height and that the center of the rectangle is always roughly 2.7 ft off the ground. If you make this assumption, the problem becomes much more tractable.
With that assumption, you could add a calibration phase. Have a person stand at the end of the hallway and note their coordinates on the camera and on the map. Then have them walk up close to the camera and note both sets of coordinates again. With these two point pairs you can do a linear interpolation to figure out where in the hallway someone is based on the camera. You would need to do this calibration for each camera you have, but it should give fairly accurate results.
Let (x1, y1) be the camera coordinates at the end of the hall and (X1, Y1) the map coordinates at the end of the hall. Then let (x2, y2) be the camera coordinates when close to the camera and (X2, Y2) the map coordinates close to the camera. Now find a linear map A such that
A(x1, y1) = (X1, Y1) and A(x2, y2) = (X2, Y2)
You can solve this as a matrix equation:
A * [x1 x2; y1 y2] = [X1 X2; Y1 Y2]
A = [X1 X2; Y1 Y2] * [x1 x2; y1 y2]^(-1)
And this should give you a decent way to convert coordinates on the camera to coordinates on the map.
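Here is a minimal Java sketch of that calibration, assuming the two point pairs have already been measured. The class and method names are illustrative, not from any library, and note that, exactly as written above, A is a pure linear 2x2 map with no translation term.

// Illustrative sketch; names and conventions are assumptions, not from the original post.
public class TwoPointCalibration {

    // A stored row-major: [a00 a01; a10 a11]
    final double a00, a01, a10, a11;

    TwoPointCalibration(double x1, double y1, double X1, double Y1,
                        double x2, double y2, double X2, double Y2) {
        // Invert the 2x2 matrix [x1 x2; y1 y2]
        double det = x1 * y2 - x2 * y1;
        if (Math.abs(det) < 1e-9) {
            throw new IllegalArgumentException("calibration points are degenerate");
        }
        double i00 =  y2 / det, i01 = -x2 / det;
        double i10 = -y1 / det, i11 =  x1 / det;
        // A = [X1 X2; Y1 Y2] * inv([x1 x2; y1 y2])
        a00 = X1 * i00 + X2 * i10;
        a01 = X1 * i01 + X2 * i11;
        a10 = Y1 * i00 + Y2 * i10;
        a11 = Y1 * i01 + Y2 * i11;
    }

    // Map a camera point (x, y) to map coordinates (X, Y).
    double[] cameraToMap(double x, double y) {
        return new double[] { a00 * x + a01 * y, a10 * x + a11 * y };
    }
}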
Related
Suppose I have a sphere with unit radius, some points defined as latitudes and longitudes on that sphere, and a camera that sits at the centre of the sphere and is defined by a vertical and a horizontal field-of-view angle. How can I project these points onto that camera?
A point at direction (x, y, z) at infinity has homogeneous coordinates (x, y, z, 0). So, assuming that you use typical view and projection matrices to describe your camera model, it is as simple as calculating
P * V * ( cos(lon)*cos(lat), sin(lon)*cos(lat), sin(lat), 0 )'
and then proceeding with a perspective divide and rasterization.
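If it helps, here is a minimal Java sketch of that calculation. It assumes P and V are given as row-major 4x4 arrays (double[16]), lat/lon are in radians, and a conventional viewport mapping with a top-left pixel origin; the matrix layout and viewport convention are my assumptions, not part of the answer above.

// Illustrative sketch; matrix layout and conventions are assumptions.
static double[] mulMatVec(double[] m, double[] v) {
    // Multiply a row-major 4x4 matrix by a 4-vector.
    double[] r = new double[4];
    for (int i = 0; i < 4; i++) {
        r[i] = m[4 * i] * v[0] + m[4 * i + 1] * v[1]
             + m[4 * i + 2] * v[2] + m[4 * i + 3] * v[3];
    }
    return r;
}

static double[] projectLatLon(double[] P, double[] V, double lat, double lon,
                              int width, int height) {
    // Direction at infinity: homogeneous coordinates with w = 0
    double[] dir = {
        Math.cos(lon) * Math.cos(lat),
        Math.sin(lon) * Math.cos(lat),
        Math.sin(lat),
        0.0
    };
    double[] clip = mulMatVec(P, mulMatVec(V, dir));
    if (clip[3] <= 0) {
        return null; // behind the camera, skip
    }
    // Perspective divide to normalized device coordinates, then to pixels
    double ndcX = clip[0] / clip[3];
    double ndcY = clip[1] / clip[3];
    return new double[] {
        (ndcX * 0.5 + 0.5) * width,
        (1.0 - (ndcY * 0.5 + 0.5)) * height  // flip y for a top-left pixel origin
    };
}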
My problem is quite simple yet I struggle to solve it correctly.
I have a camera looking towards the ground and I know all the parameters of the shot. So, using some maths I was able to compute the 4 points defining the field of view of the camera (the coordinates on the ground of each image's corners).
Now, from the coordinates (x, y) of a pixel of the image, I would like to know its real coordinates projected on the ground.
I thought that homography was the way to go, but I read here and there that "homography maps a plane seen from a camera to the same plane seen from another" which is a slightly different problem.
What should I use, please?
Edit: Here is an example.
Given this image:
I know everything about the camera that took the picture (height, angles of view, orientation), so I could calculate the coordinates of the four corners forming its field of view on the ground, for example (in centimeters, relative to the camera position, clockwise from top-left): (-300, 500), (300, 500), (100, 50), (-100, 50).
Knowing that the coordinates on the image of the blade of grass are (1750, 480), how can I know its actual coordinates on the ground?
By "knowing everything" about the camera, do you mean you have the camera FOV, rotation and translation with respect to the ground plane? Then it's trivial, right?
Write the camera matrix K = [[f, 0, w/2],[0, f, h/2],[0, 0, 1]]. Let R and t be respectively the 3x3 rotation matrix and 3x1 translation from camera to ground. A point on the ray going through a given pixel p=[u, v, 1] has camera coordinates r = inv(K) * p. Express it in world coordinates as R * r + t, intersect with the ground plane and you are done.
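Here is a minimal Java sketch of that recipe, assuming the ground plane is z = 0 in world coordinates, R is a 3x3 rotation (row-major) and t a 3-vector taking camera coordinates to world coordinates, exactly as described above; the method name and parameter layout are my own.

// Illustrative sketch; parameter layout is an assumption, not a library API.
static double[] pixelToGround(double f, double w, double h,
                              double[][] R, double[] t, double u, double v) {
    // r = inv(K) * [u, v, 1]  (K is upper triangular, so this is direct)
    double[] r = { (u - w / 2) / f, (v - h / 2) / f, 1.0 };

    // Ray direction in world coordinates: d = R * r; the ray origin is t
    double[] d = new double[3];
    for (int i = 0; i < 3; i++) {
        d[i] = R[i][0] * r[0] + R[i][1] * r[1] + R[i][2] * r[2];
    }

    // Intersect { t + s * d } with the plane z = 0:  t[2] + s * d[2] = 0
    double s = -t[2] / d[2];
    return new double[] { t[0] + s * d[0], t[1] + s * d[1], 0.0 };
}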
To view my 3D environment, I use the "true" 3D isometric projection (a flat square on the XZ plane; Y is "always" 0). I used the explanation on Wikipedia (http://en.wikipedia.org/wiki/Isometric_projection) to work out how to do this transformation:
The projection matrix is an orthographic projection matrix between some minimum and maximum coordinate.
The view matrix is two rotations: one around the Y-axis (n * 45 degrees) and one around the X-axis (arctan(sin(45 degrees))).
The result looks ok, so I think I have done it correctly.
But now I want to be able to pick a coordinate with the mouse. I have successfully implemented this by rendering coordinates to an invisible framebuffer and then reading the pixel under the mouse cursor to get the coordinate. Although this works fine, I would really like to see a mathematical solution because I will need it to calculate bounding boxes, frustums of the area on the screen, and things like that.
My instincts tell me to:
- go from screen coordinates to 2D projection coordinates (or however you say this: transform the screen coordinates to coordinates between -1 and +1 on both axes, with y inverted)
- untransform the coordinate with the inverse of the view matrix
- yeah... untransform this coordinate with the inverse of the projection matrix, but as my instincts tell me, this won't work because everything will end up with the same Z-coordinate
And yet all the information is perfectly available in the isometric view (I know that the Y value is always 0), so I should be able to convert the isometric 2D (x, y) coordinate to a calculated 3D (x, 0, z) coordinate without resorting to framebuffer reads or anything like that.
My math isn't bad, but this is something I can't seem to grasp.
Edit: IMO, every distinct (x, 0, z) coordinate corresponds to a different (x2, y2) coordinate in the isometric view. So I should be able to simply calculate a way to go from (x2, y2) back to (x, 0, z). But how?
Anyone?
There is something called project and unproject to transform screen coordinates to world coordinates and vice versa...
You seem to be missing some core concepts here (it's been a while since I did this stuff, so minor errors included):
There are 3 kinds of coordinates involved here (there are more, these are the relevant ones): Scene, Projection and Window
Scene (3D) are the coordinates in your world
Projection (3D) are those coordinates after being transformed by camera position and projection
Window (2D) are the coordinates in your window. They are generated from projection by scaling x and y appropriately and discarding z (z is still used for “who’s in front?” calculations)
You cannot transform from window to scene with a matrix, as every point in window coordinates corresponds to a whole line in the scene. If you want (x, 0, z) coordinates, you can generate this line and intersect it with the y-plane.
If you want to do this by hand, generate two points in projection coordinates with the same (x, y) and different (arbitrary) z coordinates, and transform them to scene coordinates by multiplying with the inverse of your projection transformation. Now intersect the line through those two points with your y-plane and you're done.
Note that there should be a “static” solution (a single formula) to this problem – if you solve this all on paper, you should get to it.
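Here is a minimal Java sketch of that two-point approach, assuming a symmetric orthographic projection with half-extents (hw, hh) and a view transform built as Rx(alpha) * Ry(beta), with alpha = atan(sin 45°) and beta = n * 45° as described in the question; ndcX and ndcY are the mouse coordinates already mapped to [-1, +1] with y up. The helper names and conventions are my own assumptions.

// Illustrative sketch; conventions must match your actual view/projection setup.
// Right-handed rotation about the X axis.
static double[] rotateX(double[] p, double a) {
    return new double[] {
        p[0],
        Math.cos(a) * p[1] - Math.sin(a) * p[2],
        Math.sin(a) * p[1] + Math.cos(a) * p[2]
    };
}

// Right-handed rotation about the Y axis.
static double[] rotateY(double[] p, double a) {
    return new double[] {
        Math.cos(a) * p[0] + Math.sin(a) * p[2],
        p[1],
        -Math.sin(a) * p[0] + Math.cos(a) * p[2]
    };
}

static double[] unprojectToGround(double ndcX, double ndcY,
                                  double hw, double hh,
                                  double alpha, double beta) {
    // Undo the orthographic scaling; pick two arbitrary depths on the pick ray
    double[] a = { ndcX * hw, ndcY * hh, 0.0 };
    double[] b = { ndcX * hw, ndcY * hh, 1.0 };

    // Undo the view rotation: scene = Ry(-beta) * Rx(-alpha) * view
    a = rotateY(rotateX(a, -alpha), -beta);
    b = rotateY(rotateX(b, -alpha), -beta);

    // Intersect the line a + s * (b - a) with the plane y = 0
    double[] d = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
    double s = -a[1] / d[1];
    return new double[] { a[0] + s * d[0], 0.0, a[2] + s * d[2] };
}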
I'm trying to figure out some calculations using arcs in 3D space but am a bit lost. Let's say that I want to animate an arc in 3D space to connect two x, y, z coordinates (both coordinates have a z value of 0 and are just points on a plane). I'm controlling the arc by giving it a starting x, y, z position, a rotation, a velocity, and a gravity value. If I know both x, y, z coordinates that need to be connected, is there a way to calculate the rotation, velocity, and gravity values necessary to connect the starting x, y, z coordinate to the ending one?
Thanks.
EDIT: Thanks tom10. To clarify, I'm making "arcs" by creating a parabola with particles. I'm trying to figure out how, when starting a parabola formed by a series of particles with a beginning x, y, z, velocity, rotation, and gravity, to determine where it will end (the last x, y, z coordinates). So if these are the two coordinates that need to be connected:
x1=240;
y1=140;
z1=0;
x2=300;
y2=200;
z2=0;
how can the rotation, velocity, and gravity of this parabola be calculated using only the variables that start the formation of the parabola:
x1=240;
y1=140;
z1=0;
rotation;
velocity;
gravity;
I am trying to keep the angle a constant value.
This link describes the ballistic trajectory needed to "hit a target at range x and altitude y when fired from (0,0) and with initial velocity v the required angle(s) of launch θ", which is what you want, right? To get your variables into the right form, set the rotation angle (in the x-y plane) so you're pointing in the right direction, that is atan(y/x). From then on, to match the usual terminology for the 2D problem, rewrite your z as y and the horizontal distance to the target (which is sqrt(x*x + y*y)) as x, and then you can directly use the formula in the link.
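In case it helps, here is a minimal Java sketch of that recipe, assuming the launch speed v and gravity g are given and that z is the "up" axis, as in the two coordinate lists above. The method name and the handling of the unreachable case are my own; the launch-angle expression is the standard "angle required to hit coordinate (x, y)" formula.

// Illustrative sketch; names and the out-of-range handling are assumptions.
// Returns { rotation, lowLaunchAngle, highLaunchAngle } in radians,
// or null if the target cannot be reached with this velocity.
static double[] aimArc(double x1, double y1, double z1,
                       double x2, double y2, double z2,
                       double v, double g) {
    // Rotation in the horizontal plane toward the target
    double rotation = Math.atan2(y2 - y1, x2 - x1);

    // Collapse to 2D: horizontal distance -> x, height difference -> y
    double dx = Math.hypot(x2 - x1, y2 - y1);
    double dy = z2 - z1;

    // theta = atan( (v^2 +/- sqrt(v^4 - g*(g*dx^2 + 2*dy*v^2))) / (g*dx) )
    double disc = v * v * v * v - g * (g * dx * dx + 2 * dy * v * v);
    if (disc < 0) {
        return null; // target out of range for this velocity
    }
    double root = Math.sqrt(disc);
    double low  = Math.atan2(v * v - root, g * dx);
    double high = Math.atan2(v * v + root, g * dx);
    return new double[] { rotation, low, high };
}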
Do the same as you would do in 2D. You just have to put your figures into a convenient frame by rotating the axes so that one of the coordinates becomes zero; then solve, and undo the rotation.
It's been a while since my math at university, and now I've come to need it like I never thought I would.
So, this is what I want to achieve:
Having a set of 3D points (geographical points, latitude and longitude; altitude doesn't matter), I want to display them on a screen, taking into account the direction I am facing.
This is going to be used along with a camera and a compass, so when I point the camera to the North, I want to display on my computer the points that the camera should "see". It's a kind of augmented reality.
Basically, what (I think) I need is a way of transforming the 3D points viewed from above (like viewing the points on Google Maps) into a set of 3D points viewed from the side.
The conversion of latitude and longitude to 3D Cartesian (x, y, z) coordinates can be accomplished with the following (Java) code snippet. Hopefully it's easily converted to your language of choice. lat and lng are initially the latitude and longitude in degrees:
lat *= Math.PI / 180.0;  // degrees to radians
lng *= Math.PI / 180.0;
double z = Math.sin(-lat);                    // coordinates on the unit sphere
double x = Math.cos(lat) * Math.sin(-lng);
double y = Math.cos(lat) * Math.cos(-lng);
The vector (x,y,z) will always lie on a sphere of radius 1 (i.e. the Earth's radius has been scaled to 1).
From there, a 3D perspective projection is required to convert the (x,y,z) into (X,Y) screen coordinates, given a camera position and angle. See, for example, http://en.wikipedia.org/wiki/3D_projection
It really depends on the degree of precision you require. If you're working on a high-precision, close-in view of points anywhere on the globe, you will need to take the ellipsoidal shape of the Earth into account. This is usually done using an algorithm similar to the one described here, on page 38 under 'Conversion between Geographical and Cartesian Coordinates':
http://www.icsm.gov.au/gda/gdatm/gdav2.3.pdf
If you don't need high precision the techniques mentioned above work just fine.
Could anyone explain to me exactly what these parameters mean? I've tried, and the results were very weird, so I guess I'm misunderstanding some of the parameters for the perspective projection:
* a_{x,y,z} - the point in 3D space that is to be projected.
* c_{x,y,z} - the location of the camera.
* θ_{x,y,z} - the rotation of the camera. When c_{x,y,z} = <0,0,0> and θ_{x,y,z} = <0,0,0>, the 3D vector <1,2,0> is projected to the 2D vector <1,2>.
* e_{x,y,z} - the viewer's position relative to the display surface. [1]
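Here is a minimal Java sketch of how those parameters fit together: move the point into camera coordinates by subtracting c and undoing the rotation θ, then divide by depth and offset by the viewer position e. The rotation order and sign conventions below are my assumptions and must be matched to the ones on the Wikipedia page you are following.

// Illustrative sketch; rotation order and signs are assumptions.
// Right-handed rotations about the X, Y and Z axes (angles in radians).
static double[] rotX(double[] p, double a) {
    return new double[] { p[0],
        Math.cos(a) * p[1] - Math.sin(a) * p[2],
        Math.sin(a) * p[1] + Math.cos(a) * p[2] };
}
static double[] rotY(double[] p, double a) {
    return new double[] { Math.cos(a) * p[0] + Math.sin(a) * p[2],
        p[1],
        -Math.sin(a) * p[0] + Math.cos(a) * p[2] };
}
static double[] rotZ(double[] p, double a) {
    return new double[] { Math.cos(a) * p[0] - Math.sin(a) * p[1],
        Math.sin(a) * p[0] + Math.cos(a) * p[1],
        p[2] };
}

// a = point, c = camera position, theta = camera rotation,
// e = viewer position relative to the display surface; returns { bx, by }.
static double[] perspectiveProject(double[] a, double[] c, double[] theta, double[] e) {
    // Vector from the camera to the point, rotated into camera coordinates
    double[] d = { a[0] - c[0], a[1] - c[1], a[2] - c[2] };
    d = rotX(rotY(rotZ(d, -theta[2]), -theta[1]), -theta[0]);

    // Perspective divide onto the display surface at distance e_z
    double bx = (e[2] / d[2]) * d[0] + e[0];
    double by = (e[2] / d[2]) * d[1] + e[1];
    return new double[] { bx, by };
}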
Well, you'll want some 3D vector arithmetic to move your origin, and probably some quaternion-based rotation functions to rotate the vectors to match your direction. There are any number of good tutorials on using quaternions to rotate 3D vectors (since they're used a lot for rendering and such), and the 3D vector stuff is pretty simple if you can remember how vectors are represented.
Well, just a piece of advice: you can plot these points in a 3D space (you can do this easily using OpenGL).
You have to transform the lat/long into another system, for example polar or Cartesian.
So, starting from lat/long, you put the origin of your space at the center of the Earth, then you transform your data into Cartesian coordinates:
z = R * sin(lat)
x = R * cos(lat) * sin(long)
y = R * cos(lat) * cos(long)
R is the radius of the world; you can set it to 1 if you only need the direction between your point of view and the points you need "to see".
Then put the virtual camera at a point in the space you've created, and link the data from your real camera (simply a vector) to the data of the virtual one.
The next step toward what you want to do is to try to plot the images from your camera overlaid on your "virtual space"; ultimately, the real camera should act as a control that moves the virtual one around the virtual space.