Hough Transform for finding curve segments - math

Hough Transform can be used to extract lines from images. It can also be used to extract curves - this is a little harder, though, because higher-dimensional Hough transforms are resource consuming. I was wondering how one restricts the Hough transform to a 2D voting space for a curve of order 3, i.e. y = x^{3} + ax^{2} + bx + c?
Anyone know any good sites explaining this (can't seem to find any). Or an explanation here if there isn't one :).

The essence of the Generalised Hough Transform is that the dimensions (the "sides") of the accumulator correspond to the parameters you are looking for. If you are trying to match ellipses or arbitrary curves - in your case the a, b, c parameters - then you should build a 3D accumulator and look for the maximum there. Google "ellipse detection using hough transform" or "arbitrary shape detection using hough transform".
There are many ways to optimise your search in a multi-dimensional accumulator, so don't be afraid to build a multidimensional parameterised space for the HT - it can give you a good overview of your problem.
You may want to split your search into two stages - for example, build a classic 2D accumulator for your a and b parameters, then use a very simple 1D accumulator for finding c. This has been done in edge detection, but be aware that this split can introduce large errors if a, b, c are interdependent.
Ways to optimise multidimensional Hough Transform: (Probabilistic) Randomised Hough transform, Hybrid and Multidimensional Hough Transform.
Also, the Generalised Hough Transform and the Radon Transform are nearly synonymous, so for arbitrary shape detection "Radon transform" may give you better ideas: the Hough Transform is a discrete version of the continuous Radon Transform.
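To make the accumulator discussion above concrete, here is a minimal sketch (Python/NumPy; the parameter ranges and bin counts are arbitrary assumptions) of 3D voting for y = x^3 + ax^2 + bx + c, where each edge point loops over a discretised (a, b) grid and solves for the single c that fits:

```python
import numpy as np

def hough_cubic(edge_points, a_range, b_range, c_range, n_bins=64):
    """Vote in a 3D (a, b, c) accumulator for curves y = x^3 + a*x^2 + b*x + c.
    For every edge pixel we loop over the (a, b) grid and solve for the single
    c that puts the curve through that pixel, so each point casts
    n_bins * n_bins votes instead of filling a whole surface in the 3D space."""
    a_vals = np.linspace(a_range[0], a_range[1], n_bins)
    b_vals = np.linspace(b_range[0], b_range[1], n_bins)
    c_vals = np.linspace(c_range[0], c_range[1], n_bins)
    acc = np.zeros((n_bins, n_bins, n_bins), dtype=np.int64)

    for x, y in edge_points:
        for i, a in enumerate(a_vals):
            # c = y - x^3 - a*x^2 - b*x, evaluated for every b at once
            c = y - x**3 - a * x**2 - b_vals * x
            k = np.round((c - c_range[0]) / (c_range[1] - c_range[0]) * (n_bins - 1)).astype(int)
            valid = (k >= 0) & (k < n_bins)
            acc[i, np.flatnonzero(valid), k[valid]] += 1

    i, j, k = np.unravel_index(acc.argmax(), acc.shape)
    return a_vals[i], b_vals[j], c_vals[k]   # accumulator peak = detected parameters
```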

Try googling "Generalized Hough Transform" and you'll find a lot of stuff on this, including the original paper by Ballard, which seems quite readable. Which is the best of these for you depends on where you're starting from with this, so google is probably your best option.
scholar.google.com gives many papers, but few of them are free (though if you have access, it's probably the best start).

Do you only need to locate the curve for which you already know your parameters a, b, c? Using the GHT you can create a discrete voting space from your equation. Use it to vote in a 2D space and you will find your curve. If you are trying to determine a, b, c from the Hough Transform it will be harder :)
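One way to read this answer (a sketch under the assumption that only the curve's position in the image is unknown): treat the known cubic as a template and vote for its translation offset in a 2D accumulator, in the spirit of the Generalised Hough Transform. The names and accumulator size below are made up:

```python
import numpy as np

def locate_known_cubic(edge_points, a, b, c, img_shape, x_samples=None):
    """2D GHT-style voting for the (dx, dy) offset of a known curve
    y = x^3 + a*x^2 + b*x + c; the accumulator peak gives the translation
    that best aligns the template with the edge points."""
    h, w = img_shape
    if x_samples is None:
        # x_samples should cover the region of the image the curve may occupy
        x_samples = np.arange(0, w, dtype=float)
    template = np.stack([x_samples,
                         x_samples**3 + a * x_samples**2 + b * x_samples + c], axis=1)

    acc = np.zeros((2 * h, 2 * w), dtype=np.int64)     # offsets may be negative
    for px, py in edge_points:
        d = np.round(np.array([px, py]) - template).astype(int)  # candidate (dx, dy)
        dx, dy = d[:, 0] + w, d[:, 1] + h                        # shift into index range
        ok = (dx >= 0) & (dx < 2 * w) & (dy >= 0) & (dy < 2 * h)
        np.add.at(acc, (dy[ok], dx[ok]), 1)

    dy_best, dx_best = np.unravel_index(acc.argmax(), acc.shape)
    return dx_best - w, dy_best - h
```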


stereo vision 3d point calculation with known intrinsic and extrinsic matrix

I have successfully calculated Rotation, Translation with the intrinsic camera matrix of two cameras.
I also got rectified images from the left and right cameras. Now, I wonder how I calculate the 3D coordinate of a point, just one point in an image. Here, please see the green points. I have looked at the equation, but it requires the baseline, which I don't know how to calculate. Could you show me the process of calculating the 3D coordinate of the green point with the given information (R, T, and the intrinsic matrices)?
FYI
1. I also have a Fundamental matrix and Essential matrix, just in case we need them.
2. Original image size is 960 x 720. Rectified ones are 925 x 669
3. The green point from the left image: (562, 185), from the right image: (542, 185)
The term "baseline" usually just means translation. Since you already have your rotation, translation and intrinsics matrices (let's not them R, T and K). you can triangulate and don't need either the Fundamental or Essential matrices (they could be used to extract R, T etc but you already have them). You don't really need your images to be rectified either, since it doesn't change the triangulation process that much. There are many ways to triangulate, each with their pros and cons, and many libraries that implement them. So, all I can do here is give you and overview of the problem and potential solutions, as well as pointers to resources that you can either use as their are or as a source of inspiration to write your own code.
Formalization and solution outlines. Let's formalize what we are after here. You have a 3d point X, with two observations x_1 and x_2 respectively in the left and right images. If you backproject them, you obtain two rays:
ray_1 = K^{-1} x_1
ray_2 = R * K^{-1} x_2 + T // I'm assuming that [R|T] is the pose of the second camera expressed in the frame of the first camera
Ideally, you'd want those two rays to meet at the point X. Since in practice we always have some noise (discretization noise, rounding errors and so on), the two rays won't meet exactly at X, so the best answer is a point Q such that
Q=argmin_X {d(X,ray_1)^2+d(X,ray_2)^2}
where d(.) denotes the Euclidean distance between a line and a point. You can solve this problem as a regular least-squares problem, or you can just take the geometric approach (called the midpoint method) of considering the line segment l that is perpendicular to both ray_1 and ray_2, and taking its middle as your solution. Another quick and dirty way is to use the DLT. Basically, you rewrite the constraints (i.e. X should be as close as possible to both rays) as a linear system AX=0 and solve it with the SVD.
Usually, the geometric (midpoint) method is the least precise. The DLT-based one, while not the most stable numerically, usually produces acceptable results.
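For illustration, a minimal NumPy sketch of that DLT idea (not OpenCV's exact implementation; P1 and P2 are the 3x4 projection matrices built from K, R and T as in the usage comment):

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Triangulate one point from two pixel observations x1, x2 and the
    3x4 projection matrices P1, P2, by solving A X = 0 with the SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]              # right singular vector with the smallest singular value
    return X[:3] / X[3]     # back to inhomogeneous 3D coordinates

# Hypothetical usage with the matrices from the question:
# P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
# P2 = K2 @ np.hstack([R, T.reshape(3, 1)])
# X = triangulate_dlt(P1, P2, np.array([562, 185]), np.array([542, 185]))
```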
Resources that present the formalization in depth
Hartley-Zisserman's book, of course! Chapter 12. A simple DLT-based method, which is the one used in OpenCV (both in the calibration and sfm modules), is explained on page 312. It is very easy to implement; it shouldn't take more than 10 minutes in any language.
Szeliski's book. It has an interesting discussion on triangulation in the chapter on SFM, but it is not as straightforward or in-depth as Hartley-Zisserman's.
Code. You can use the triangulation methods from opencv, either from the calib3d module, or from the contribs/sfm module. Both use the DLT, but the code from the SFM module is more easily understandable (the calib3d code has a lot of old-school C code which is not very pleasant to read). There is also another lib, called openGV, which has a few interesting methods for triangulation.
cv::triangulatePoints
cv::sfm::triangulatePoints
OpenGV
The openGV git repo doesn't seem very active, and I'm not a big fan of the design of the library, but if I remember correctly (feel free to tell me otherwise) it offers methods other than the DLT for triangulation.
Naturally, those are all written in C++, but if you use other languages, finding wrappers or similar libraries won't be difficult (with Python you still have the OpenCV wrappers, and MATLAB has a bundle module, etc.).
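If you go the OpenCV route from Python, a minimal call looks roughly like this (K1, K2, R, T stand for the calibration results from the question; cv2.triangulatePoints takes 2xN pixel coordinates and returns homogeneous 4xN points):

```python
import numpy as np
import cv2

# Projection matrices built from the calibration (K1, K2, R, T assumed known).
P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K2 @ np.hstack([R, T.reshape(3, 1)])

# The green point from the question, one column per correspondence.
pts_left = np.array([[562.0], [185.0]])
pts_right = np.array([[542.0], [185.0]])

X_h = cv2.triangulatePoints(P1, P2, pts_left, pts_right)  # 4x1 homogeneous
X = (X_h[:3] / X_h[3]).ravel()                            # 3D point in the first camera's frame
```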

Adapt geometry on printed points

I draw a vectorial geometry with some calibration points around it.
I print this geometry and then I physically scan the printed calibration points (I can't scan the geometry, I can only scan the calibration points).
When I acquire these points, they aren't in their original positions anymore because of print errors or bad print calibration.
The question is:
Is there any algorithm that helps me adapt the original geometry based on the newly scanned points?
In practice I need to warp the geometry in order to obtain the real geometry printed on the paper with the same print error that I have on the calibration points.
The distortion is given by the physical distortion of the material (not paper but cloth) during the print process. I can't know how much the material will distort during the print.
Yes, there are algorithms to help you with that. In general you need to learn/find the transformation between the two images that you have.
Typical geometrical transformations are affine transformations (shift, scale, rotation, shear, reflections) which need at least three control points or piecewise local linear/ local weighted mean which need at least 4-6 control points. The more control points you have, the better in general.
Given a set of control points in one image and the corresponding set of control points in the other image, there are algorithms for finding the optimal transformation between them if you specify a class (affine or piecewise local linear). See for example fitgeotrans in MATLAB. I don't know exactly how it solves the problem, but I guess by some kind of optimization. It should be easy to find available implementations for other programming languages (Python, C, Java).
What remains is finding the correspondence between the control points in the two images. For a few images you may be able to do that by hand, but in the general case you might want to automate this as well. General image registration algorithms like imregister should do well for your images. They give you a good initial estimate for the transformation (which may already be sufficient), so that identifying the corresponding point pairs becomes trivial (always take the nearest) and allows refining.
So I advise you to first just try to register the images (grayscale data) with an identity transformation as the starting value. Then identify corresponding point pairs and refine the transformation using either an affine or a piecewise/local transformation. Then apply the transformation to the geometry to get the printed geometry. Depending on your choice of programming language you will find many implementations that do the job.
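As a rough illustration of the affine case (Python/NumPy; the point arrays are placeholders), fitting the transformation to the control-point pairs by least squares and then warping the geometry could look like this:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map dst ≈ [src | 1] @ M from corresponding
    control points; src and dst are (N, 2) arrays with N >= 3."""
    ones = np.ones((len(src), 1))
    M, *_ = np.linalg.lstsq(np.hstack([src, ones]), dst, rcond=None)  # (3, 2) matrix
    return M

def apply_affine(points, M):
    ones = np.ones((len(points), 1))
    return np.hstack([points, ones]) @ M

# design_pts:  calibration points in the original vector design (placeholder)
# scanned_pts: the same points measured on the printed cloth (placeholder)
# M = fit_affine(design_pts, scanned_pts)          # design -> print distortion
# warped_geometry = apply_affine(geometry_pts, M)  # geometry warped onto the print
```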

Edge detection for 3d point clouds/meshes with R

The detection of edges in 3d objects may be the first step for the automatic processing of particular characteristics and landmarks.
Thus, I'm looking for a method to identify such edges for some of my 3d-scanned objects.
However, with all my ideas (Hough transformation, angle thresholds for neighboring vertices) I didn't succeed.
Thus, I'd be quite happy if someone could point me to a solution to the edge-finding-problem for 3d point clouds which can be applied using R.
There is a nice paper from last year about this topic.
Basically, you need to compute several features for each point, based on its neighbors.
I usually prefer Python over R, so I'm not aware of any point-cloud processing package in R. But implementing that paper in R should be easy.
If you can translate Python to R, you can take a look at this library that I wrote, as it has already implemented the computation of all the features mentioned in that paper.
If that helps you, in this answer you can find example code on how to add the curvature for each point. You just have to replace the word curvature with the names of the other features.
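If it helps as a starting point for the R port, here is a small Python sketch of one such neighbour-based feature, the eigenvalue-based surface variation often used as a "curvature" proxy (k is an arbitrary choice):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k=20):
    """For each 3D point, take its k nearest neighbours, compute the covariance
    of the neighbourhood and return lambda_min / (lambda_1 + lambda_2 + lambda_3).
    High values indicate non-planar neighbourhoods, i.e. candidate edges/ridges."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.empty(len(points))
    for i, neigh in enumerate(idx):
        cov = np.cov(points[neigh].T)                # 3x3 covariance of the neighbourhood
        eigvals = np.sort(np.linalg.eigvalsh(cov))   # ascending eigenvalues
        feats[i] = eigvals[0] / eigvals.sum()
    return feats

# edge_candidates = points[surface_variation(points) > threshold]  # threshold is up to you
```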

Path finding for games

What are some path finding algorithms used in games of all types? (Of all types where characters move, anyway) Is Dijkstra's ever used? I'm not really looking to code anything; just doing some research, though if you paste pseudocode or something, that would be fine (I can understand Java and C++).
I know A* is like THE algorithm to use in 2D games. That's great and all, but what about 2D games that are not grid-based? Things like Age of Empires, or Link's Awakening. There aren't distinct square spaces to navigate to, so what do they do?
What do 3D games do? I've read this thingy http://www.ai-blog.net/archives/000152.html, which I hear is a great authority on the subject, but it doesn't really explain HOW, once the meshes are set, the path finding is done. IF A* is what they use, then how is something like that done in a 3D environment? And how exactly do the splines work for rounding corners?
Dijkstra's algorithm calculates the shortest path to all nodes in a graph that are reachable from the starting position. For your average modern game, that would be both unnecessary and incredibly expensive.
You make a distinction between 2D and 3D, but it's worth noting that for any graph-based algorithm, the number of dimensions of your search space doesn't make a difference. The web page you linked to discusses waypoint graphs and navigation meshes; both are graph-based and could in principle work in any number of dimensions. Although there are no "distinct square spaces to move to", there are discrete "slots" in the space that the AI can move to and which have been carefully laid out by the game designers.
Concluding, A* is actually THE algorithm to use in 3D games just as much as in 2D games. Let's see how A* works:
1. At the start, you know the coordinates of your current position and your target position. You make an optimistic estimate of the distance to your destination, for example the length of the straight line between the start position and the target.
2. Consider the adjacent nodes in the graph. If one of them is your target (or contains it, in the case of a navigation mesh), you're done.
3. For each adjacent node (in the case of a navigation mesh, this could be the geometric center of the polygon or some other kind of midpoint), estimate the associated cost of traveling along there as the sum of two measures: the length of the path you'd have traveled so far, and another optimistic estimate of the distance that would still have to be covered.
4. Sort your options from the previous step, together with all options that you've considered before, by their estimated cost, and pick the option with the lowest estimated cost. Repeat from step 2.
There are some details I haven't discussed here, but this should be enough to see how A* is basically independent of the number of dimensions of your space. You should also be able to see why this works for continuous spaces.
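A compact sketch of those steps in Python (the graph interface is hypothetical; with a navigation mesh, neighbors() and cost() would come from the polygon adjacency):

```python
import heapq
from itertools import count

def a_star(start, goal, neighbors, cost, heuristic):
    """neighbors(n) yields adjacent nodes, cost(a, b) is the edge length and
    heuristic(n, goal) is an optimistic (admissible) estimate of the remaining
    distance, e.g. the straight-line distance."""
    tie = count()                              # tie-breaker so the heap never compares nodes
    open_set = [(heuristic(start, goal), next(tie), start)]
    came_from = {start: None}
    g_score = {start: 0.0}
    while open_set:
        _, _, node = heapq.heappop(open_set)   # lowest estimated total cost first
        if node == goal:                       # reconstruct the path back to the start
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for nxt in neighbors(node):
            new_g = g_score[node] + cost(node, nxt)
            if new_g < g_score.get(nxt, float("inf")):
                g_score[nxt] = new_g
                came_from[nxt] = node
                heapq.heappush(open_set, (new_g + heuristic(nxt, goal), next(tie), nxt))
    return None                                # goal not reachable
```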
There are some closely related algorithms that deal with certain problems in the standard A* search. For example recursive best-first search (RBFS) and simplified memory-bounded A* (SMA*) require less memory, while learning real-time A* (LRTA*) allows the agent to move before a full path has been computed. I don't know whether these algorithms are actually used in current games.
As for the rounding of corners, this can be done either with distance lines (where corners are replaced by circular arcs), or with any kind of spline function for full-path smoothing.
In addition, algorithms are possible that rely on a gradient over the search space (where each point in space is associated with a value), rather than a graph. These are probably not applied in most games because they take more time and memory, but might be interesting to know about anyway. Examples include various hill-climbing algorithms (which are real-time by default) and potential field methods.
Methods to procedurally obtain a graph from a continuous space exist as well, for example cell decomposition, Voronoi skeletonization and probabilistic roadmap skeletonization. The former would produce something compatible with a navigation mesh (though it might be hard to make it equally efficient as a hand-crafted navigation mesh) while the latter two produce results that will be more like waypoint graphs. All of these, as well as potential field methods and A* search, are relevant to robotics.
Sources:
Artificial Intelligence: A Modern Approach, 2nd edition
Introduction to The Design and Analysis of Algorithms, 2nd edition

Generating the function of the plane/surface that a given set of coordinates lie on

This is related to Mathematics, but it is useful in computing as well.
Let's say you have 10 coordinates (x1,y1), (x2,y2), ... in 2D space (i.e. on an X-Y plane). Can we find a single smooth curve passing through each coordinate?
Expanding the question: if the space is 3D, can we find the equation of a smooth surface that passes through a given set of spatial coordinates?
Are there any libraries (in any language) or tools to perform such calculations?
What you should be looking for is some library implementing NURBS (or Non-Uniform Rational B-Splines). This will solve your problem in both 2D and 3D, since 2D is just a special case of 3D.
Roughly speaking, you are not interested in the actual equation, you are only interested in getting the points approximated with smooth curves or surfaces. This is done by finding "control points" in 2d or 3d space, which are multiplied with B-spline base functions. A NURBS library will do this for you.
Cheers !
Edit:
Have a look at this one
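If a full NURBS library is more than you need, SciPy's parametric B-spline routines already handle the 2D case; a small sketch with made-up data (s=0 forces the spline through every point):

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Ten sample coordinates (placeholder data).
x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=float)
y = np.array([0, 2, 1, 3, 2, 4, 3, 5, 4, 6], dtype=float)

# Fit a parametric cubic B-spline passing through every point (s=0).
tck, u = splprep([x, y], s=0, k=3)

# Evaluate the smooth curve at 200 positions along the parameter.
u_fine = np.linspace(0, 1, 200)
x_smooth, y_smooth = splev(u_fine, tck)
```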
You can always fit a single polynomial (degree 9 for 10 points) exactly through the points. That's not necessarily what you want to do, though - fitting a smooth curve via a series of splines will give you a better-looking result. The curve-fitting article on Wikipedia gives you a good overview of the various options.
In the 2D case you are asking for curve fitting. This actually exists in Excel, where you plot your points (I usually use an XY scatter plot if you have x and y listed) and then right-click on the curve. Select Add Trendline. There you can choose which kind of function you want to fit and ask Excel to display it on the chart (in the Options tab, check the box "Display equation on chart"). Nice and quick.
Otherwise you can use MATLAB and its lsqr function (least-squares method). If you want to find the polynomial that best describes your data, you could use the polyfit function. It also uses the least-squares method, but returns polynomial coefficients. MATLAB has a whole set of other algorithms for solving/finding "best" approximations to systems of linear equations. I mention lsqr because it is one of the simplest to implement yourself if you don't have MATLAB. On the other hand, it is for solving sets of linear equations - I don't know anything about your data.
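For reference, the NumPy equivalent of that polyfit approach, with illustrative data and degree:

```python
import numpy as np

# Placeholder data points.
x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=float)
y = np.array([0, 2, 1, 3, 2, 4, 3, 5, 4, 6], dtype=float)

coeffs = np.polyfit(x, y, deg=3)      # least-squares cubic fit, highest power first
y_fit = np.polyval(coeffs, x)         # evaluate the fitted polynomial at x
```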
Have a look at splines
in wiki
an interactive introduction
Searching for 'spline interpolation library' might give some useful hints for implementations.
