Quantify accuracy of a model reconstructed in Meshroom based on ground points?

I have ground points placed in the scene, with a known position P_w for each point in the world coordinate system. The corresponding P_m in the model coordinate system is computed from the reconstructed model. However, O_w and O_m are not aligned.
O_w and O_m are the world and model coordinate systems, respectively. Can someone list the steps needed to measure the difference between P_w and P_m in real-world units?
Thanks in advance.
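The usual recipe here is to estimate a 7-DoF similarity transform (scale, rotation, translation) from the P_m to P_w correspondences with the Umeyama/Procrustes method, apply it to P_m, and report the residuals, which are then in real-world units. A minimal numpy sketch; the input file names are hypothetical and stand for Nx3 arrays of the same ground points in each frame:

import numpy as np

def umeyama(src, dst):
    """Least-squares similarity transform (s, R, t) with dst ~ s * R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / ((src_c ** 2).sum() / len(src))
    t = mu_d - s * R @ mu_s
    return s, R, t

P_m = np.loadtxt("points_model.txt")   # hypothetical: points measured on the model
P_w = np.loadtxt("points_world.txt")   # hypothetical: known world coordinates
s, R, t = umeyama(P_m, P_w)
residuals = np.linalg.norm((s * (R @ P_m.T)).T + t - P_w, axis=1)
print("per-point error:", residuals, "RMSE:", np.sqrt((residuals ** 2).mean()))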

Related

How to avoid point cloud waviness of Intel D415 at 2 meter distance

I understand that stereo algorithms produce more error at greater distances, but the point cloud waviness I am experiencing at 2 m is very high: in my experiment, the minimum and maximum z of a plane at 2 m differ by 10 cm.
I am trying to fit a plane with RANSAC on my cloud, but the RANSAC result differs by around 1 cm for the same scene at different instances. How can I overcome this and provide stable depth data?
Any help is appreciated. Thanks.
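Two things usually help with this: average several consecutive depth frames before fitting (which damps the waviness), and make the RANSAC deterministic by fixing the random seed and refining the inlier set with a least-squares fit. A numpy sketch of such a plane fit; the parameter values are illustrative:

import numpy as np

def ransac_plane(pts, n_iter=500, thresh=0.005, seed=0):
    """Fit a plane to an Nx3 cloud; returns unit normal n and offset d (n.p + d = 0)."""
    rng = np.random.default_rng(seed)  # fixed seed -> repeatable result across runs
    best = None
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:   # degenerate (collinear) sample, skip
            continue
        n = n / np.linalg.norm(n)
        inliers = np.abs(pts @ n - n @ p0) < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    # Least-squares refinement on the inliers removes most run-to-run jitter.
    p = pts[best]
    c = p.mean(axis=0)
    n = np.linalg.svd(p - c)[2][-1]    # direction of smallest spread = plane normal
    return n, -n @ c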

Estimation of camera displacement

I am currently working on an experiment in which I take multiple photos of a scene on different days with a fixed camera position. The problem is that in the real world it is hard to keep the camera perfectly fixed.
What I need is to correct the small variance I get, automatically. The research I did returned methods resting on more complex assumptions, like camera pose estimation, homography estimation, etc. For me it is enough to recover just the movement in the image plane, returned as an x and y offset.
A perfect solution would be a function such as:
function [movx movy] = detectMotion(im1,im2)
The solution I already have is to compute some image features, like Harris or Hessian interest points, match them, then manually select the best matches and use the difference of their positions as a camera displacement estimate. I don't know if this is good enough, but it would be better if it were done automatically.
You can do the feature matching automatically by extracting feature descriptors around the interest points. Take a look at this OpenCV tutorial on how to perform feature matching using SURF and FLANN. Once you have the feature matches, run RANSAC or least squares to find the best fit for an x- and y-offset. This will give you a decent estimate of the camera motion.
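A rough sketch of that pipeline, using ORB instead of SURF (SURF sits in opencv-contrib and is patent-encumbered); cv2.estimateAffinePartial2D does the RANSAC fit, and for a near-pure shift its translation column is the offset you want:

import cv2
import numpy as np

def detect_motion(im1, im2):
    """Estimate the (movx, movy) image-plane shift between two grayscale images."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(im1, None)
    k2, d2 = orb.detectAndCompute(im2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # RANSAC fit of a similarity transform; the last column is the translation.
    M, _ = cv2.estimateAffinePartial2D(p1, p2, method=cv2.RANSAC)
    return M[0, 2], M[1, 2]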
Another option is to compute sparse optical flow on the detected interest points between the two frames, followed by the RANSAC or least squares procedure as above to compute the best x- and y-offset. Dense optical flow could possibly be more accurate, but at the same time could prove to be overkill.
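And a sketch of the sparse optical-flow variant: track Shi-Tomasi corners with pyramidal Lucas-Kanade, then take a robust median offset (RANSAC or least squares would slot in at the same place):

import cv2
import numpy as np

def detect_motion_flow(im1, im2):
    """Median x/y displacement of tracked corners between two grayscale images."""
    pts = cv2.goodFeaturesToTrack(im1, maxCorners=500, qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(im1, im2, pts, None)
    flow = (nxt - pts).reshape(-1, 2)[status.ravel() == 1]  # keep tracked points only
    return tuple(np.median(flow, axis=0))  # (movx, movy)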

Pinning latitude longitude on a ski map

I have a map of a mountainous landscape, http://skimap.org/data/989/60/1218033025.jpg. It contains a number of known points, the lat-longs of which can easily be found using Google Maps. I wish to be able to pin any latitude-longitude coordinate on the map, of course within the bounds of the landscape.
For this, I tried an approach that seems to be largely failing. I assumed the map to be equivalent to an aerial photograph of the Swiss landscape, without any info about the altitude or other coordinates of the camera. So, I assumed the plane perpendicular to the camera lens normal to be Ax + By + Cz - d = 0.
I attempt to find the plane constants using the known points. I fix my origin at a point, with z = 0 at sea level. I take two known points in the landscape and, using the equation for a line in 3D, I find the length of the projection of the line segment joining the two known points onto the plane. I multiply it by another constant K to account for the resizing of this length in a static 2D representation of this 3D scene. The length between the two points in the 2D representation on the screen can easily be found in pixels, and the actual length of the line joining the two points can also be found, since I can calculate the distance between the two points from their lat-longs and their heights above sea level.
So, I end up with an equation directly relating the distance between the two points in the on-screen 2D representation, let's call it L_s, and the actual length in the landscape, L. I have many other known points, so plugging them into the equation should give me the values of the 4 constants. For this, I needed 8 known points (the known parameters being their names, lat-longs, and heights above sea level), one being my origin and a second being a fixed reference point. The remaining 6 points generate a system of 6 linear equations in A^2, B^2, C^2, AB, BC, and CA. Solving the system using an online tool, I get the result that the system has a unique solution with all 6 constants being 0.
So, it seems that the assumption that the map is equivalent to an aerial photograph taken from an aircraft is faulty. Can someone please give me some pointers or any other ideas to get this to work? Does OpenStreetMap use a Mercator projection?
I would say that this is impossible to do in an automatic way. The ski map should be considered an image rather than a map: a map is a projection of the real world onto one plane, and since that doesn't fit ski maps very well, they are drawn instead.
The best way is probably to manually define a lot of points in the ski map with known or estimated coordinates and use them to interpolate the points in between. To get an acceptable result you would probably have to assign coordinates to each pixel in the ski map.
You could do something like the following: http://magazin.unic.com/en/2012/02/16/making-of-interactive-mobile-piste-map-by-laax/
I am solving the exact same issue. It is pretty hard and involves lots of maths; it is taking me a few weeks to solve. Interpolation is the key, along with lots of manual mapping. I would say that for a ski mountain it will take at least 1000-1500 points to get the very basics right. So, not a trivial task unless you can automate the collection of these points (which is what I am doing!) ;)
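To make the interpolation idea concrete, here is a small sketch with scipy: given hand-mapped control points (lat, lon) to (pixel x, pixel y), any coordinate inside their convex hull can be pinned by scattered-data interpolation. All coordinates below are made up:

import numpy as np
from scipy.interpolate import griddata

# Hand-mapped correspondences between known landmarks and map pixels (made up).
latlon = np.array([[46.85, 9.25], [46.86, 9.27], [46.84, 9.30], [46.87, 9.26]])
pixels = np.array([[120.0, 540.0], [300.0, 410.0], [620.0, 700.0], [450.0, 220.0]])

def pin(lat, lon):
    """Interpolate the pixel position of a (lat, lon) point on the ski map."""
    x = griddata(latlon, pixels[:, 0], (lat, lon), method="linear")
    y = griddata(latlon, pixels[:, 1], (lat, lon), method="linear")
    return float(x), float(y)

print(pin(46.855, 9.265))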

Point cloud: project city square to ground plane

I have a city square with people, cars, trees, and buildings in PCL format. I need to automatically determine the ground plane and project these objects onto it to get a 2D map of occupied places.
Any idea?
I think the best thing to do here would be to familiarise yourself with the following two PCL tutorials:
http://pointclouds.org/documentation/tutorials/planar_segmentation.php
http://pointclouds.org/documentation/tutorials/project_inliers.php
The first tutorial makes use of the RANSAC algorithm to find a dominant plane in a scene. I use it to find tables and floors in robotics scenarios. You would use it to find your dominant ground plane.
The second tutorial shows how to project points directly onto a plane. This is what you would use to make your 3D point cloud into a 2D one. Note that, despite the "inlier" keyword, you can pass your whole point cloud to be projected onto the plane.
Actually, if you are after "occupied" places, you might want to project all of the points that aren't in the ground plane (i.e. the outliers) and that are above it. You can use a PCL filter such as PlaneClipper3D for this, or just take the complement of the outliers from the plane-segmentation operation.
If the plane that you end up with (containing all your projected points) is not in the coordinate frame you want, you may wish to rotate the whole lot, for example, to align with the coordinate axes so that all z-coordinates are zero. See pcl::transformPointCloud for this (the transform will be obtainable from the plane coefficients returned from the plane segmentation).
I hope this is helpful and not at too basic a level, though the question was rather general so I suppose it should be okay.
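For completeness, here is a rough Python transliteration of that pipeline using Open3D (the tutorials linked above are C++/PCL); the filename and thresholds are made up:

import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("square.pcd")           # hypothetical input file
# RANSAC ground-plane segmentation, as in the first tutorial.
plane, inliers = pcd.segment_plane(distance_threshold=0.05,
                                   ransac_n=3, num_iterations=1000)
a, b, c, d = plane                                    # plane: ax + by + cz + d = 0
objects = pcd.select_by_index(inliers, invert=True)   # the non-ground points

# Project the object points onto the plane, as in the second tutorial:
# with unit normal n, the projection is p' = p - (n.p + d) * n.
n = np.array([a, b, c]); norm = np.linalg.norm(n)
n, d = n / norm, d / norm
pts = np.asarray(objects.points)
proj = pts - np.outer(pts @ n + d, n)                 # flattened "occupied" map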

Finding the bounding box (axially aligned) of a parametric range of a 3D NURBS surface

I'll apologize in advance in case this is obvious; I've been unable to find the right terms to put into Google.
What I want to do is to find a bounding volume (AABB is good enough) for an arbitrary parametric range over a trimmed NURBS surface. For instance, (u,v) between (0.1,0.2) and (0.4,0.6).
EDIT: If it helps, it would be fine for me if the method confined the parametric region entirely within a bounding region as defined in the paragraph below. I am interested in sub-dividing those regions.
I got started thinking about this after reading this paragraph from this paper ( http://www.cs.utah.edu/~shirley/papers/raynurbs.pdf ), which explains how to create a tree of bounding volumes with a depth relative to the degree of the surface:
The convex hull property of B-spline surfaces guarantees that the surface is contained in the convex hull of its control mesh. As a result, any convex objects which bound the mesh will bound the underlying surface. We can actually make a stronger claim; because we closed the knot intervals in the last section [made the multiplicity of the internal knots k - 1], each nonempty interval [u_i, u_{i+1}) × [v_j, v_{j+1}) corresponds to a surface patch which is completely contained in the convex hull of its corresponding mesh points. Thus, if we produce bounding volumes for each of these intervals, we will have completely enclosed the surface. We form the tree by sorting the volumes according to the axis direction which has greatest extent across the bounding volumes, splitting the data in half, and repeating the process.
Thanks! Sean
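For reference, the tree construction the quoted paragraph describes is just a recursive median split along the axis of greatest extent; a minimal sketch over per-patch AABBs, each box being a (min, max) pair of numpy arrays:

import numpy as np

def build_bvh(boxes):
    """Build a bounding-volume tree over a list of (min_xyz, max_xyz) AABBs."""
    lo = np.min([b[0] for b in boxes], axis=0)
    hi = np.max([b[1] for b in boxes], axis=0)
    if len(boxes) == 1:
        return {"bounds": (lo, hi), "leaf": boxes[0]}
    axis = int(np.argmax(hi - lo))                          # axis of greatest extent
    boxes = sorted(boxes, key=lambda b: float(b[0][axis]))  # sort along that axis
    mid = len(boxes) // 2                                   # split the data in half
    return {"bounds": (lo, hi),
            "children": [build_bvh(boxes[:mid]), build_bvh(boxes[mid:])]}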
You will need to slice out a smaller NURBS surface which covers only the parameter range you are interested in. Using your example, I take this to mean the region where the u parameter is between 0.1 and 0.4. Let Pu be the degree of the spline in that parameter (a cubic spline has Pu = 3). You need to perform "knot insertion" (there's your Google search term) to get knots of multiplicity Pu located at u = 0.1 and u = 0.4. Do the same thing on the v parameter to get knots of multiplicity Pv at 0.2 and 0.6. The process of knot insertion will modify (and add to) the array of control points. There is a bit of bookkeeping involved, but you can then find the control points that determine the surface in the parameter patch you just isolated between the inserted knots. The convex hull property then says the surface is bounded by these control points, so you can use them to determine your bounding volume.
The NURBS reference I like to use for operations like this is: "The NURBS Book", by Les Piegl and Wayne Tiller.
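A sketch of this route with the geomdl (NURBS-Python) package, using the (0.1, 0.2)-(0.4, 0.6) range from the question. The surface data is purely illustrative, and note the caveat on split parameterization in the comments:

import numpy as np
from geomdl import BSpline, operations, utilities

surf = BSpline.Surface()
surf.degree_u, surf.degree_v = 3, 3
# Illustrative 6x6 control grid (a shallow bowl).
surf.set_ctrlpts([[u, v, ((u - 2.5) ** 2 + (v - 2.5) ** 2) / 10.0]
                  for u in range(6) for v in range(6)], 6, 6)
surf.knotvector_u = utilities.generate_knot_vector(3, 6)
surf.knotvector_v = utilities.generate_knot_vector(3, 6)

# Split twice per direction to isolate u in [0.1, 0.4] and v in [0.2, 0.6].
# Assumption: the split pieces keep the original parameterization; if your
# geomdl version renormalizes each piece to [0, 1], rescale the second split
# parameter accordingly (check the returned knotvector_u / knotvector_v).
_, rest = operations.split_surface_u(surf, param=0.1)
mid, _ = operations.split_surface_u(rest, param=0.4)
_, rest = operations.split_surface_v(mid, param=0.2)
patch, _ = operations.split_surface_v(rest, param=0.6)

# Convex hull property: the sub-patch lies inside the hull of its control
# points, so their axis-aligned extremes give a valid AABB.
pts = np.array(patch.ctrlpts)
print("AABB min:", pts.min(axis=0), "AABB max:", pts.max(axis=0))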
