As I understand it, a stereo algorithm produces larger errors at greater ranges. What I am experiencing is that the point cloud waviness is very high at 2 m: in my experiment, the minimum and maximum Z of a plane at 2 meters vary by 10 cm.
I am trying to fit a plane with RANSAC on my cloud, but the RANSAC result always differs by around 1 cm for the same scene at different instances. How can I overcome this and provide stable depth data?
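For reference, here is a minimal sketch of the kind of plane fit I mean, assuming Open3D and a hypothetical input file; refitting the plane to the RANSAC inliers with a deterministic least-squares step is one idea I am considering to reduce the run-to-run variation:

```python
import numpy as np
import open3d as o3d

# Hypothetical input file holding one stereo frame's point cloud.
pcd = o3d.io.read_point_cloud("frame.ply")

# RANSAC plane fit; many iterations make the inlier set more repeatable.
plane, inliers = pcd.segment_plane(distance_threshold=0.01,
                                   ransac_n=3,
                                   num_iterations=5000)

# Deterministic refinement: least-squares plane through the inliers,
# so two runs with the same inlier set give the same answer.
pts = np.asarray(pcd.points)[inliers]
centroid = pts.mean(axis=0)
_, _, vh = np.linalg.svd(pts - centroid)
normal = vh[2]                     # smallest singular vector = plane normal
d = -normal.dot(centroid)
print("refined plane:", normal, d)
```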
Any help is appreciated. Thanks.
[Problem image]
I am trying to align/register (>4) 2-D point cloud segments from several laser scanners with high accuracy, producing a perimeter contour of the scanned product. The segments between lasers may look like the image above. The issue is that the calibration process may be both incorrect and not quite accurate enough (possibly with individual elevation-tilt errors, so the segments are not shape-wise identical: close but not exact), and I am trying to make the best of the situation.
Visually, the segments have a slight bias in both directions, as well as a rotational error relative to each other.
The difficulty is that the segments only partially overlap, contain low but noticeable noise that may be coherent, and the sampled point distribution is both sparse and uneven in the overlapping region, since the scanners are placed far apart (approximately 90 degrees).
My solution so far is to ignore the rotational bias, estimate the mean bias of selected correspondence points in the overlapping region, and translate each segment by that estimate, working around until I reach the opposite corner. It works somewhat OK, but it is a problem for the last set of sensors, since all the errors appear to add up there. Additionally, it fails when there is little or no overlap.
I am not a specialist, so while complicated solutions may be useful for others, a relatively robust, iterative approach that can be coded simply would be best for me. I am grateful for any advice on this simple-sounding but quite challenging problem.
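To illustrate the kind of approach I am after, here is a minimal translation-only ICP sketch in Python (my own illustration; the segments are assumed to be N-by-2 NumPy arrays, and `max_dist` is a made-up gate that keeps far-away pseudo-matches in the non-overlapping parts from polluting the estimate):

```python
import numpy as np
from scipy.spatial import cKDTree

def align_translation(fixed, moving, n_iter=50, max_dist=5.0):
    """Estimate a 2-D translation mapping `moving` onto `fixed`."""
    tree = cKDTree(fixed)
    offset = np.zeros(2)
    for _ in range(n_iter):
        # Nearest-neighbor correspondences under the current offset.
        dist, idx = tree.query(moving + offset)
        keep = dist < max_dist          # drop implausible matches
        if not np.any(keep):
            break
        # Mean residual of the kept pairs updates the translation.
        offset += (fixed[idx[keep]] - (moving[keep] + offset)).mean(axis=0)
    return offset
```

Presumably a final global step that spreads any leftover closure error evenly across all segments would address the accumulation at the last set of sensors, but I have not tried that.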
I am trying to reproduce the software algorithm of a line-laser 3D scanning device. Its original point cloud is shown in Figure 2, the processed and reconstructed point cloud is shown in Figure 1, and Figure 3 shows the result of reconstructing with the Ball-Pivoting Algorithm (BPA).
Figure 1 shows excellent point cloud uniformity and equal spacing, as well as excellent edge characteristics. Has anyone seen point cloud smoothing or reconstruction results similar to Figure 1? What algorithm was used? Thank you very much for your answer!
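For reference, the BPA step itself is available in Open3D; a minimal sketch of how a result like Figure 3 could be produced (the file names and ball radii are assumptions):

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")   # hypothetical input cloud
pcd.estimate_normals()                      # BPA needs oriented normals

# Ball radii on the order of the point spacing; these values are guesses.
radii = o3d.utility.DoubleVector([0.005, 0.01, 0.02])
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)
o3d.io.write_triangle_mesh("mesh.ply", mesh)
```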
I am currently working on an experiment in which I took multiple photos of a scene on different days with a fixed camera position. The problem is that in the real world it is hard to keep the camera perfectly fixed. What I need is to correct the small variance I got, automatically. The research I did returned methods based on more complex assumptions, like camera pose estimation, homography estimation, etc. For me it is enough to recover just the movement in the image plane, returning an x and y.

A perfect solution would be a function such as:

function [movx movy] = detectMotion(im1,im2)

The solution I already have is to compute some image features, like Harris or Hessian points, match them, manually select the best ones, and use the difference of their positions as a camera displacement estimate. I don't know if this is good enough, but it would be better if it were automatic.
You can do the feature matching automatically by extracting feature descriptors around the interest points. Take a look at this OpenCV tutorial on how to perform feature matching using SURF and FLANN. Once you have the feature matches, run RANSAC or least squares to find the best-fit x- and y-offset. This will give you a decent estimate of the camera motion.
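A minimal sketch of that pipeline in Python, using ORB in place of SURF (SURF is patented and not included in stock OpenCV builds) and a robust median over the per-match displacements instead of a full RANSAC:

```python
import cv2
import numpy as np

def detect_motion(im1, im2):
    """Estimate the (x, y) shift between two grayscale images."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(im1, None)
    kp2, des2 = orb.detectAndCompute(im2, None)

    # Cross-checked brute-force matching on the binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    # Each match votes for a displacement; the median rejects outlier
    # matches much as a RANSAC consensus would for a pure translation.
    offsets = np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                        for m in matches])
    movx, movy = np.median(offsets, axis=0)
    return movx, movy
```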
Another option is to compute sparse optical flow on the detected interest points between the two frames, followed by the same RANSAC or least-squares procedure as above to compute the best x- and y-offset. Dense optical flow could possibly be more accurate, but at the same time could prove to be overkill.
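A sketch of the optical-flow variant, assuming 8-bit grayscale inputs and OpenCV's pyramidal Lucas-Kanade tracker:

```python
import cv2
import numpy as np

def detect_motion_flow(im1, im2):
    """Estimate the (x, y) shift by tracking corners from im1 into im2."""
    pts1 = cv2.goodFeaturesToTrack(im1, maxCorners=500,
                                   qualityLevel=0.01, minDistance=7)
    pts2, status, _err = cv2.calcOpticalFlowPyrLK(im1, im2, pts1, None)

    # Keep only the points the tracker followed successfully.
    ok = status.flatten() == 1
    good1 = pts1[ok].reshape(-1, 2)
    good2 = pts2[ok].reshape(-1, 2)

    # Median of the per-point displacements as a robust offset estimate.
    movx, movy = np.median(good2 - good1, axis=0)
    return movx, movy
```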
I have a map of a mountainous landscape, http://skimap.org/data/989/60/1218033025.jpg. It contains a number of known points, the lat-longs of which can easily be found using Google Maps. I wish to be able to pin any latitude-longitude coordinate on the map, within the bounds of the landscape, of course.
For this, I tried an approach that seems to be largely failing. I assumed the map to be equivalent to an aerial photograph of the Swiss landscape, without any information about the altitude or other coordinates of the camera. So, I assumed the plane perpendicular to the camera's lens normal to be Ax + By + Cz - d = 0.
I attempt to find the plane constants using the known points. I fix my origin at a point, with z = 0 at sea level. I take two known points in the landscape and, using the equation of a line in 3D, find the length of the projection of the segment joining them onto the plane. I multiply it by another constant K to account for the resizing of this length in a static 2D representation of the 3D scene. The length between the two points in the on-screen 2D representation can easily be measured in pixels, and the actual length of the segment can easily be computed, since I can calculate the distance between the two points from their lat-longs and heights above sea level.
So, I end up with an equation directly relating the distance between the two points in the on-screen 2D representation, let's call it Ls, to the actual length in the landscape, L. I have many other known points, so plugging them into the equation should give me the values of the constants. For this I needed 8 known points (the known parameters being their names, lat-longs, and heights above sea level), one being my origin and a second being a fixed reference point. The remaining 6 points generate a system of 6 linear equations in A^2, B^2, C^2, AB, BC, and CA. Solving the system with an online tool, I get the result that it has a unique solution with all 6 constants equal to 0.
So, it seems that the assumption that the map is equivalent to an aerial photograph taken from an aircraft is faulty. Can someone please give me some pointers or other ideas to get this to work? Does OpenStreetMap use a Mercator projection?
I would say that this is impossible to do in an automatic way. The ski map should be considered an image rather than a map: a map is a projection of the real world onto one plane, and since that doesn't fit ski maps very well, they are drawn instead.
The best approach is probably to manually define a lot of points on the ski map with known or estimated coordinates and use them to interpolate the points in between. To get an acceptable result you would effectively have to assign coordinates to each pixel of the ski map.
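As a sketch of what that could look like, assuming SciPy and entirely hypothetical control points:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Hypothetical control points picked by hand on the ski map:
# (lat, lon) of known landmarks and their (x, y) pixel positions.
latlon = np.array([[46.85, 9.25], [46.86, 9.27], [46.84, 9.26], [46.87, 9.24]])
pixels = np.array([[120.0, 540.0], [860.0, 410.0], [430.0, 700.0], [980.0, 150.0]])

# Piecewise-linear mapping from lat-long to pixels; only defined inside
# the convex hull of the control points (NaN outside it).
to_pixel = LinearNDInterpolator(latlon, pixels)

print(to_pixel(46.855, 9.26))   # -> interpolated [x, y] pixel position
```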
You could do something like the following: http://magazin.unic.com/en/2012/02/16/making-of-interactive-mobile-piste-map-by-laax/
I am solving the exact same issue. It is pretty hard and involves a lot of maths; it is taking me a few weeks to solve. Interpolation is the key, along with lots of manual mapping. I would say that for a ski mountain it will take at least 1000-1500 points to get even the very basics. So, not a trivial task, unless you can automate the collection of these points (which is what I am doing!) ;)
Hey, so after reading this article I've been left with a few questions I hope to resolve here.
My understanding is that the goal of any multi-dimensional collision response is to convert it to a 1D collision by putting the bodies on some kind of shared axis. I've deduced from the article that the steps for responding to a 2D collision between two polygons are to:
1. Find the velocity vector of each body's collision point.
2. Find the relative velocity based on each collision point's velocity (see question 1).
3. Factor in how much of that velocity is along the "force transfer line" (see question 2), which is the only velocity that matters for the collision.
4. Factor in elasticity.
5. Factor in mass.
6. Find the impulse / new linear velocity based on steps 2-5.
7. Finally, figure out the new angular velocity by figuring out how much of the impulse is "rotating around" each object's CM (which is what determines angular acceleration).
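For concreteness, here is how I currently picture those steps fitting together, as a minimal Python sketch (my own illustration, not from the article; the variable names, sign conventions, and 2-D cross-product helpers are all assumptions):

```python
import numpy as np

def resolve_collision(m1, m2, I1, I2, v1, v2, w1, w2, r1, r2, n, e):
    """One-shot impulse response for a single 2D contact point.

    m1, m2: masses; I1, I2: moments of inertia; v1, v2: linear velocities;
    w1, w2: scalar angular velocities; r1, r2: offsets from each CM to the
    contact point; n: unit normal pointing from body 1 to body 2;
    e: coefficient of restitution (elasticity).
    """
    def omega_cross(w, r):  # angular velocity "cross" offset, in 2D
        return np.array([-w * r[1], w * r[0]])

    def cross2(a, b):       # z-component of the 2D cross product
        return a[0] * b[1] - a[1] * b[0]

    # Steps 1-2: velocity of each contact point, then their relative velocity.
    vp1 = v1 + omega_cross(w1, r1)
    vp2 = v2 + omega_cross(w2, r2)

    # Step 3: only the component along the contact normal matters.
    v_rel = np.dot(vp2 - vp1, n)

    # Steps 4-6: impulse magnitude from restitution and the effective mass,
    # which includes the rotational terms.
    denom = 1/m1 + 1/m2 + cross2(r1, n)**2 / I1 + cross2(r2, n)**2 / I2
    j = -(1 + e) * v_rel / denom

    # Equal and opposite impulses along the normal update linear velocity.
    v1_new = v1 - (j / m1) * n
    v2_new = v2 + (j / m2) * n

    # Step 7: the part of the impulse applied off-center changes the
    # angular velocity of each body.
    w1_new = w1 - (j / I1) * cross2(r1, n)
    w2_new = w2 + (j / I2) * cross2(r2, n)
    return v1_new, v2_new, w1_new, w2_new
```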
All these steps basically figure out how much velocity each point is coming at the other with, after each velocity is translated to a new 1D coordinate system, right?
Question 1: The article says relative velocity is meant to find an expression for the velocity with which the colliding points are approaching each other, but to me it seems as though it is simply the vector from CM 1 to CM 2, with magnitude based on each point's velocity. I don't understand the reasoning behind even including the CMs in the calculations, since it is the points that collide, not the CMs. Also, I like visualizing things, so how does relative velocity translate geometrically, and how does it work toward the goal of getting a 1D collision problem?
Question 2: The article states that the only force during the collision is in the direction perpendicular to the impacted edge, but how was this decided? Also, how can there only be force in one direction when each body is supposed to end up bouncing off in two different directions?
"All these steps basically figure out how much velocity each point is coming at the other with after each velocity is translated to a new 1D coordinate system, right?"
That seems like a pretty good description of steps 1 and 2.
"Question 1: The article says relative velocity is meant to find and expression for the velocity with which the colliding points are approaching each other, but to me it seems as though is simply the vector of CM 1 -> CM 2, with magnitude based on each point's velocity."
No, imagine both CMs almost stationary, but one rectangle rotating and striking the other. The relative velocity of the colliding points will be almost perpendicular to the displacement vector between CM1 and CM2.
"...How does relative velocity translate geometrically?"
Zoom in on the site of the collision, just before impact. If you are standing on the collision point of one body, you see the collision point on the other body approaching you with a certain velocity (in your frame, the one in which you are standing still).
"...And how does it work toward the goal of getting a 1D collision problem?"
At the site of collision, it is a 1D collision problem.
"Question 2: The article states that the only force during the collision is in the direction perpendicular to the impacted edge, but how was this decided?"
It looks like an arbitrary decision to make the surfaces slippery, in order to make the problem easier to solve.
"Also how can [there] only be force in one direction when each body is supposed to end up bouncing off in 2 different directions."
Each body experiences a force in one direction. It departs in a certain direction, rotating with a certain angular velocity. I can't parse the rest of the question.