From a Velodyne point, how do I get the pixel coordinates for each camera?
Using pykitti
point_cam0 = data.calib.T_cam0_velo.dot(point_velo)
We can get the projection onto the image plane, which is equation 7 of the KITTI dataset paper:
y = P_rect(i) * R_rect(0) * T_cam_velo * x
But from there, how do I get the actual pixel coordinates in each image?
Update: PyKitti version 0.2.1 exposes projection matrices for all cameras.
I recently faced the same problem. For me, the issue was that pykitti didn't expose the Prect and Rrect matrices for all cameras.
For pykitti >= 0.2.1, use Prect and Rrect from the calibration data.
For previous versions, you have two options:
Enter the matrices by hand (data is in the .xml calibration file for each sequence).
Use this fork of pykitti: https://github.com/Mi-lo/pykitti/
Then, you can use equation 7 to project a velodyne point into an image. Note that:
You will need the 3D points as a 4xN array in homogeneous coordinates. Points returned by pykitti are an Nx4 numpy array, with the reflectance in the 4th column. You can prepare the points with the prepare_velo_points function below, which keeps only points with reflectance > 0 and then replaces the reflectance values with 1 to get homogeneous coordinates.
The Velodyne scans 360°. Equation 7 will give you a result even for points that are behind the camera (they will get projected as if they were in front, but vertically mirrored). To avoid this, you should project only points that are in front of the camera. For this, you can use the project_velo_points_in_img function below. It returns 2D points in homogeneous coordinates, so you should discard the 3rd row.
Here are the functions I used:
def prepare_velo_points(pts3d_raw):
    '''Replaces the reflectance value by 1 and transposes the array,
    so the points can be directly multiplied by the camera projection
    matrix.'''
    pts3d = pts3d_raw
    # Keep only points with reflectance > 0.
    pts3d = pts3d[pts3d[:, 3] > 0, :]
    pts3d[:, 3] = 1
    return pts3d.transpose()
def project_velo_points_in_img(pts3d, T_cam_velo, Rrect, Prect):
    '''Projects 3D points into the 2D image. Expects pts3d as a 4xN
    numpy array. Returns the 2D projection of the points that are
    in front of the camera, and the corresponding 3D points.'''
    # 3D points in the camera reference frame.
    pts3d_cam = Rrect.dot(T_cam_velo.dot(pts3d))
    # Before projecting, keep only points with z >= 0
    # (points that are in front of the camera).
    idx = pts3d_cam[2, :] >= 0
    pts2d_cam = Prect.dot(pts3d_cam[:, idx])
    return pts3d[:, idx], pts2d_cam / pts2d_cam[2, :]
Hope this helps!
Related
Basically, I want to transform a point in the image of camera 2, (x_pixel_c2, y_pixel_c2), to the corresponding pixel point (x_pixel_c3, y_pixel_c3) in camera 3.
For a simple setup (no distortion parameters, no rectification), I would usually:
assign a certain distance to the point in C2
compute the 3D coordinates in C2 space from the intrinsics
compute the world coordinates from the extrinsic matrix of C2
compute the 3D coordinates in C3 from its extrinsic matrix
project into the pixel space of C3
For the KITTI dataset, I do not use this approach, mainly because of the distortion parameters. In KITTI, we have the projection matrices for the 4 rectified cameras which, from what I understand, are the transformation matrices relating the 3D coordinates in camera 0 to camera X. However, from the tests I've made, this does not work:
pt_3d_cam0 = np.dot(np.linalg.pinv(P_rect_02), pt_cam_2)  # pseudo-inverse, since P_rect_02 is 3x4
pt_cam_3 = np.dot(P_rect_03, pt_3d_cam0)
I'm not sure the projection matrices move the reference frame back to camera 0 as mentioned. For instance, the points are closer to the expected coordinates if I add a translation in X equal to the baseline.
I'm not sure if I'm missing something. If anyone has encountered a similar problem with KITTI, your help would be appreciated.
Thank you!
Hi! I have data on a seedling distribution, containing species types and X and Y coordinates in UTM. I want to create a point pattern from the X & Y coordinate locations with the help of the ppp() function in the spatstat package. I tried it in the following 2 ways:
p.patt <- ppp(mydata$X, mydata$Y)
p.patt <- ppp(mydata$X, mydata$Y, owin(c(100,131), c(100,130)))
But both of them give "Warning message: 435 points were rejected as lying outside the specified window".
I guess this is related to the ranges of the X and Y coordinates that should be specified in this code in c(…), c(…). I checked the ranges of X & Y and R gave me the following:
for X: 368615 to 368746,
for Y: 4587355 to 4587485.
When I plot the data, the shape of the plot looks like a tilted rhombus. I don't know if that helps.
Here I have just tried some randomly chosen numbers: 100, 131 and 130. I couldn't find any information online about how to set them.
So my question is: how can I use these coordinate ranges to set the observation window geometry of the point pattern in the spatstat package in R?
Thank you very much in advance!
The numbers in the owin call are not the width and height of the window; they are the X and Y coordinates of the corners of the window.
Since the range of X coordinate values of the data points is from 368615 to 368746, the window needs to contain this range, at least. Similarly the range of Y values must be contained in the window. The minimal window that will not give a warning is
p.patt <- ppp(mydata$X, mydata$Y, owin(c(368615,368746), c(4587355,4587485)))
or equivalently
p.patt <- ppp(mydata$X, mydata$Y, c(368615,368746), c(4587355,4587485))
But this is just the minimal window that is acceptable; for a proper analysis, you need information about the survey region. If it is not a rectangle then, as Ege says, you need to specify owin(poly=...) using the coordinate locations of the vertices of the polygon.
Don't you have information about the plot? E.g. the coordinates of the corners of a polygonal region delimiting the plot? If you have these coordinates, use them as input to the argument poly of owin. See the help file for owin for details. Lacking any such information, you can try ripras to estimate the boundary of the plot.
What you are doing right now is defining a point pattern in the rectangle [100,131] × [100,130] and then providing a bunch of points with coordinates outside this area (much larger coordinate values), so they are all discarded.
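For example, if you had the corner coordinates of the tilted plot, a minimal sketch would look like the following; the four corner values below are made up, so replace them with your surveyed corners:

library(spatstat)
## hypothetical corners of the tilted plot, in counter-clockwise order
corner.x <- c(368615, 368746, 368740, 368620)
corner.y <- c(4587360, 4587355, 4587485, 4587480)
W <- owin(poly = list(x = corner.x, y = corner.y))
p.patt <- ppp(mydata$X, mydata$Y, window = W)

## with no boundary information at all, estimate the window from the points
W2 <- ripras(mydata$X, mydata$Y)
p.patt2 <- ppp(mydata$X, mydata$Y, window = W2)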
Suppose we have a polygon with five vertices. The two coordinates of the vertices are:
>x=c(1,4,6,3,-2)
>y=c(1,1,5,9,4)
We define the centre of the polygon as the point (mean(x),mean(y)).
I am struggling to draw spokes from the centre of the polygon to the boundary of the polygon such that two neighbouring spokes create equal angles at the centre. I also want to have all the points on the boundary of the polygon (red circles in the following plot) in an orderly manner.
Here is a rough sample plot (convex) which I want to have:
Note: the polygon I am dealing with is not necessarily convex.
Sample plot (non-convex)
The output I want:
1) The coordinates of the intersection points of each spoke through the centre with the boundary segments of the polygon.
2) For each equispaced angle (theta in fig. 2), a spoke drawn corresponding to that theta (as in figure 2). Note that the angle lies between 0 and 360 degrees.
3) In the case of my second polygon (non-convex), where the same line goes through two boundary segments (creating three intersection points), I want all three coordinates corresponding to the same angle (theta).
Could anyone help me in doing that using R? Thanks in advance.
Here you go. You need the sp and rgeos packages:
spokey <- function(xy, n = 20) {
    xcent = mean(xy[, 1])
    ycent = mean(xy[, 2])
    cent = sp::SpatialPoints(cbind(xcent, ycent))
    pts = sp::SpatialPoints(xy)
    ## take the furthest distance from centre to vertex, times two!
    r = 2 * max(sp::spDistsN1(pts, cent))
    theta = seq(0, 2 * pi, length = n + 1)[-(n + 1)]
    ## construct a big wheel of spoke lines
    sl = sp::SpatialLines(
        lapply(seq_along(theta), function(id) {
            t = theta[id]
            sp::Lines(
                list(
                    sp::Line(
                        rbind(
                            c(xcent, ycent),
                            c(xcent + r * cos(t), ycent + r * sin(t))
                        )
                    )
                ), ID = id)
        }))
    ## construct the polygon as a SpatialPolygons object:
    pol = sp::SpatialPolygons(list(sp::Polygons(list(sp::Polygon(rbind(xy, xy[1, ]))), ID = 1)))
    ## overlay the spokes on the polygon as "SpatialLines" so we do a
    ## line-on-line intersect, which gets us points
    spokes = rgeos::gIntersection(sl, as(pol, "SpatialLines"), byid = TRUE)
    spokes
}
It takes a matrix of coordinates where the first point is not the last point:
xy1 = structure(c(4.49425847117117, 4.9161781929536, 7.95751618746858,
7.92235621065338, 9.76825499345149, 9.9616348659351, 8.04541612950659,
7.83445626861537, 6.42805719600729, 0.644241009906543, 2.40223985066665,
1.24196061576498, 2.13854002455263, 7.935927470861, 9.41043173309254,
9.33179150577352, 6.50074332228897, 7.34612576596839, 2.76533252463575,
1.07456763727692, 3.88595576393172, 1.17286792142569, 2.745672467806,
5.20317957152522, 5.81264133324759, 8.21116826647756), .Dim = c(13L,
2L))
and then:
> plot(xy1,asp=1)
> polygon(xy1)
> spokes = spokey(xy1,20) # second arg is number of spokes
> points(spokes,pch=19,col="red")
gets you:
If you don't believe it, draw the segments from the centre to the points :)
segments(mean(xy1[,1]),mean(xy1[,2]), coordinates(spokes)[,1], coordinates(spokes)[,2])
The coordinates(spokes) call will get you a two-column matrix of the spoke points; it's returned as a SpatialPoints object at present.
I modified this to handle the non-convex case illustrated.
You will have to write code that computes the intersection of a spoke from the center with each edge line segment. Not that hard, really, but I have never seen it in R. Then you will have to loop over the angles you are interested in drawing, loop over the segments, find the ones each spoke intersects, sort those values, and then draw the line to the intersection you are interested in.
You would then use the furthest, or some combination (maybe a dotted line between the closest and the furthest).
In pseudo-code:
for each spoke you want to draw
    calculate the spoke-line from the center to some point far outside
    initialize the edge intersection-point list to empty
    for each edge-segment
        calculate the intersection-point of the spoke-line and the edge-segment
        if the intersection-point exists
            add it to the intersection list
    now go through the intersections and find the furthest
    draw the spoke from the center to the furthest intersection point
    continue with the next spoke
This would probably take several hours to research and write, unless you write this kind of graphics code constantly.
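If you want to try it without sp and rgeos, here is a minimal base-R sketch of that pseudocode; seg_intersect and draw_spokes are names I made up, and the centre is taken as (mean(x), mean(y)) as in the question:

## solve p + t*r = q + u*s for t, u in [0, 1]; NULL if no intersection
seg_intersect <- function(p, r, q, s) {
    rxs <- r[1] * s[2] - r[2] * s[1]
    if (abs(rxs) < 1e-12) return(NULL)   # parallel segments
    qp <- q - p
    t <- (qp[1] * s[2] - qp[2] * s[1]) / rxs
    u <- (qp[1] * r[2] - qp[2] * r[1]) / rxs
    if (t >= 0 && t <= 1 && u >= 0 && u <= 1) p + t * r else NULL
}

## assumes the polygon is already plotted; draws n spokes, each to the
## furthest boundary intersection, as in the pseudocode above
draw_spokes <- function(x, y, n = 20) {
    cx <- mean(x); cy <- mean(y)
    far <- 2 * max(sqrt((x - cx)^2 + (y - cy)^2))   # reaches beyond any vertex
    xs <- c(x, x[1]); ys <- c(y, y[1])              # close the polygon
    for (theta in seq(0, 2 * pi, length.out = n + 1)[-(n + 1)]) {
        p <- c(cx, cy)
        r <- far * c(cos(theta), sin(theta))
        hits <- list()
        for (i in seq_len(length(xs) - 1)) {
            q <- c(xs[i], ys[i])
            s <- c(xs[i + 1] - xs[i], ys[i + 1] - ys[i])
            hit <- seg_intersect(p, r, q, s)
            if (!is.null(hit)) hits[[length(hits) + 1]] <- hit
        }
        if (length(hits) == 0) next
        d <- sapply(hits, function(h) sum((h - p)^2))
        h <- hits[[which.max(d)]]      # keep the furthest intersection
        segments(cx, cy, h[1], h[2])
        points(h[1], h[2], pch = 19, col = "red")
    }
}

With the xy1 data above, plot(xy1, asp = 1); polygon(xy1); draw_spokes(xy1[, 1], xy1[, 2], 20) should reproduce the spoke plot.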
I have generated a Voronoi tessellation for N points in 2D space using the deldir R package.
Now I want to divide each Voronoi cell into three Voronoi cells according to given azimuth described as below:
Azimuth is given as an input. E.g. azimuth = 0 means one area should be bounded by the two lines at angle = 0 and angle = 120, the next area by the lines at angle = 120 and angle = 240, and the last area is the remainder.
Azimuth is the starting angle from north for this separation, and each sector always spans 120 degrees. In more detail, from each point that generates a Voronoi cell, exactly three lines are drawn, dividing the previous Voronoi cell into three cells.
Can this be achieved using the deldir package? If not, can anyone suggest an extension for this?
I don't know of any easy/implemented way of doing this. However, you could try creating those lines manually.
I would try something along the lines of:
1. Access the coordinates of the edges of a Voronoi polygon using deldir()
2. Convert the coordinates into line objects using the sp package
3. Create line objects that reach from the "center" point to the border of the plot, calculating the end points based on your azimuth (see the sketch after this answer)
4. Find the intersections of the lines created in 3 with the lines created in 2 (check How to get the intersection point of two vectors?)
5. Create new (shorter) lines starting from your original point and ending at the intersection points retrieved in step 4
6. Plot the lines created in 5
7. Loop over every polygon
This may well be a very clumsy solution, but it is the only workaround I could come up with ;)
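As a starting point for step 3, here is a minimal sketch of computing the end points of the three dividing rays for a given azimuth; azimuth_rays is a name I made up, and I assume the azimuth is given in degrees, measured clockwise from north:

## end points of the three rays (azimuth, azimuth + 120, azimuth + 240)
## from a cell's generating point (cx, cy); len just needs to exceed
## the distance to the plot border
azimuth_rays <- function(cx, cy, azimuth, len = 1000) {
    bearings <- (azimuth + c(0, 120, 240)) %% 360
    ## convert compass bearings to counter-clockwise angles
    ## from the positive x-axis
    theta <- (90 - bearings) * pi / 180
    data.frame(x0 = cx, y0 = cy,
               x1 = cx + len * cos(theta),
               y1 = cy + len * sin(theta))
}

For example, rays <- azimuth_rays(0, 0, 30) followed by segments(rays$x0, rays$y0, rays$x1, rays$y1) draws the three rays; steps 4-6 would then clip them to the cell edges.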
As far as I know, Direct3D works with an LH coordinate system, right?
So how would I get the position and the x/y/z axes (the local orientation axes) out of an LH 4x4 (world) matrix?
Thanks.
In case you don't know: LH stands for left-handed.
If the 4x4 matrix is what I think it is (a homogeneous rigid body transformation matrix, i.e. an element of SE(3)), then it should be fairly easy to get what you want. Any rigid body transformation can be represented by a 4x4 matrix of the form
g_ab = [ R, p;
         0, 1 ]
in block matrix notation. The ab subscript denotes that the transformation takes the coordinates of a point represented in frame b and tells you its coordinates as represented in frame a. R here is a 3x3 rotation matrix, and p is a vector that, when the rotation matrix is the identity (no rotation), gives the coordinates of the origin of frame b in frame a. Usually, however, a rotation is present, so you have to do as below.
The position of the coordinate system described by the matrix is given by applying the transformation to the point (0,0,0). This will tell you what world coordinates that point is located at. The trick is that, when dealing with SE(3), you have to append a 1 to points and a 0 to vectors, which makes them vectors of length 4 instead of length 3, and hence operable on by the matrix. So, to transform the point (0,0,0) from your local coordinate frame to the world frame, you right-multiply your matrix (let's call it g_SA) by the vector (0,0,0,1). To get the world coordinates of a vector (x,y,z), you multiply the matrix by (x,y,z,0). You can think of this as vectors being differences of points, so the 1 in the last element cancels out. So, for example, to find the representation of your local x-axis in world coordinates, you multiply g_SA*(1,0,0,0). To find the y-axis you do g_SA*(0,1,0,0), and so on.
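As a small numeric sketch of this (in R, just to show the arithmetic, using the column-vector convention above):

## toy transform: rotation of 90 degrees about z, translation (2, 3, 4)
R <- matrix(c(0, -1, 0,
              1,  0, 0,
              0,  0, 1), nrow = 3, byrow = TRUE)
p <- c(2, 3, 4)
g <- rbind(cbind(R, p), c(0, 0, 0, 1))
pos <- g %*% c(0, 0, 0, 1)   # position of the local origin: (2, 3, 4, 1)
xax <- g %*% c(1, 0, 0, 0)   # local x-axis in world coords: (0, 1, 0, 0)
yax <- g %*% c(0, 1, 0, 0)   # local y-axis: (-1, 0, 0, 0)
zax <- g %*% c(0, 0, 1, 0)   # local z-axis: (0, 0, 1, 0)

In this convention the axes are simply the first three columns of the matrix and the position is the fourth column. Direct3D's D3DX matrices use the transposed (row-vector) convention, so there you would read the axes off the first three rows and the position off the fourth row instead.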
The best place I've seen this discussed (and where I learned it from) is A Mathematical Introduction to Robotic Manipulation by Murray, Li and Sastry; the section you are interested in is 2.3.1.