Segmentation of an RGB-D point cloud - point-cloud-library

I have a scene point cloud (I have the coordinates and RGB-D information of each point). The scene consists of the point clouds of several objects (for example obj1, obj2, ...). I want to segment this scene and output labeled point cloud data (for example, all points of object 1 labeled as 1, all points of object 2 labeled as 2, and so on). Is there an accurate method for that?

The best resource to use would probably be the Point Cloud Library (PCL). It has a vast number of functions and classes for dealing with point clouds, including various segmentation methods. I would suggest checking out this set of segmentation tutorials and finding one that is well suited to your specific use case.
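For example, if the objects are spatially separated, Euclidean cluster extraction is often a reasonable starting point. A minimal sketch using PCL (the tolerance and cluster-size values below are placeholders you would tune for your data):

    #include <vector>
    #include <pcl/point_types.h>
    #include <pcl/search/kdtree.h>
    #include <pcl/segmentation/extract_clusters.h>

    // Returns one pcl::PointIndices entry per segmented object; treating the
    // position in this vector as the label gives "object 1", "object 2", ...
    std::vector<pcl::PointIndices> segmentScene(
        const pcl::PointCloud<pcl::PointXYZRGB>::Ptr& cloud)
    {
        // k-d tree used for the neighbour searches during clustering.
        pcl::search::KdTree<pcl::PointXYZRGB>::Ptr tree(
            new pcl::search::KdTree<pcl::PointXYZRGB>);
        tree->setInputCloud(cloud);

        pcl::EuclideanClusterExtraction<pcl::PointXYZRGB> ec;
        ec.setClusterTolerance(0.02);   // 2 cm -- tune for your sensor and scene
        ec.setMinClusterSize(100);      // ignore tiny noise clusters
        ec.setMaxClusterSize(250000);
        ec.setSearchMethod(tree);
        ec.setInputCloud(cloud);

        std::vector<pcl::PointIndices> cluster_indices;
        ec.extract(cluster_indices);
        return cluster_indices;
    }

If the objects touch each other, region-growing or model-based segmentation from the same tutorial set may be a better fit than pure Euclidean clustering.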

Related

Using two or more index buffers when creating custom geometry with Qt 3D? [duplicate]

I have some vertex data. Positions, normals, texture coordinates. I probably loaded it from a .obj file or some other format. Maybe I'm drawing a cube. But each piece of vertex data has its own index. Can I render this mesh data using OpenGL/Direct3D?
In the most general sense, no. OpenGL and Direct3D only allow one index per vertex; the index fetches from each stream of vertex data. Therefore, every unique combination of components must have its own separate index.
So if you have a cube, where each face has its own normal, you will need to replicate the position and normal data a lot. You will need 24 positions and 24 normals, even though the cube will only have 8 unique positions and 6 unique normals.
Your best bet is to simply accept that your data will be larger. A great many model formats will use multiple indices; you will need to fix up this vertex data before you can render with it. Many mesh loading tools, such as Open Asset Importer, will perform this fixup for you.
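As a rough illustration of that fixup (types such as ObjIndex and Vertex are invented for the example), the idea is to give every unique combination of per-attribute indices its own vertex and a single index:

    #include <cstdint>
    #include <map>
    #include <tuple>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Vec2 { float u, v; };
    struct Vertex { Vec3 position; Vec3 normal; Vec2 uv; };
    // One face corner as loaded from e.g. an .obj file: separate indices per attribute.
    struct ObjIndex { uint32_t pos, norm, uv; };

    void deindex(const std::vector<Vec3>& positions,
                 const std::vector<Vec3>& normals,
                 const std::vector<Vec2>& uvs,
                 const std::vector<ObjIndex>& corners,      // 3 per triangle
                 std::vector<Vertex>& outVertices,
                 std::vector<uint32_t>& outIndices)
    {
        // Every unique (position, normal, uv) index triple becomes one vertex.
        std::map<std::tuple<uint32_t, uint32_t, uint32_t>, uint32_t> cache;
        for (const ObjIndex& c : corners) {
            auto key = std::make_tuple(c.pos, c.norm, c.uv);
            auto it = cache.find(key);
            if (it == cache.end()) {
                outVertices.push_back({positions[c.pos], normals[c.norm], uvs[c.uv]});
                it = cache.emplace(key, uint32_t(outVertices.size() - 1)).first;
            }
            outIndices.push_back(it->second);   // the single index GL/D3D can use
        }
    }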
It should also be noted that most meshes are not cubes. Most meshes are smooth across the vast majority of vertices, only occasionally having different normals/texture coordinates/etc. So while this often comes up for simple geometric shapes, real models rarely have substantial amounts of vertex duplication.
GL 3.x and D3D10
For D3D10/OpenGL 3.x-class hardware, it is possible to avoid performing fixup and use multiple indexed attributes directly. However, be advised that this will likely decrease rendering performance.
The following discussion will use the OpenGL terminology, but Direct3D v10 and above has equivalent functionality.
The idea is to manually access the different vertex attributes from the vertex shader. Instead of sending the vertex attributes directly, the attributes that are passed are actually the indices for that particular vertex. The vertex shader then uses the indices to access the actual attribute through one or more buffer textures.
Attributes can be stored in multiple buffer textures or all within one. If the latter is used, then the shader will need an offset to add to each index in order to find the corresponding attribute's start index in the buffer.
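As a rough sketch of the application side of this (GL 3.x-style calls; the exact packing and formats are an assumption), one attribute stream is uploaded to a buffer object and exposed as a buffer texture, and the shader fetches values by index:

    #include <vector>
    #include <GL/glew.h>   // or glad/another loader -- a GL 3.x context is assumed

    // `positions` holds 4 floats per vertex (xyz + padding); three-component
    // buffer-texture formats are not available until GL 4.0.
    GLuint createPositionBufferTexture(const std::vector<float>& positions)
    {
        GLuint buf = 0, tex = 0;
        glGenBuffers(1, &buf);
        glBindBuffer(GL_TEXTURE_BUFFER, buf);
        glBufferData(GL_TEXTURE_BUFFER, positions.size() * sizeof(float),
                     positions.data(), GL_STATIC_DRAW);

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_BUFFER, tex);
        glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, buf);
        return tex;   // bind to the texture unit the shader's samplerBuffer uses
    }

    // Vertex shader side (GLSL) -- the only "real" attribute is the index:
    //   in int positionIndex;
    //   uniform samplerBuffer positionData;
    //   vec3 pos = texelFetch(positionData, positionIndex).xyz;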
Regular vertex attributes can be compressed in many ways. Buffer textures have fewer means of compression, allowing only a relatively limited number of vertex formats (via the image formats they support).
Please note again that any of these techniques may decrease overall vertex processing performance. Therefore, it should only be used in the most memory-limited of circumstances, after all other options for compression or optimization have been exhausted.
OpenGL ES 3.2 provides buffer textures as well. Higher OpenGL versions allow you to read buffer objects more directly via SSBOs rather than buffer textures, which might have better performance characteristics.
I found a way to reduce this sort of repetition that runs a bit contrary to some of the statements made in the other answer (but doesn't specifically fit the question asked here). It does, however, address my question, which was thought to be a duplicate of this one.
I just learned about interpolation qualifiers, specifically "flat". It's my understanding that putting the flat qualifier on your vertex shader output causes only the provoking vertex to pass its value to the fragment shader.
This means for the situation described in this quote:
So if you have a cube, where each face has its own normal, you will need to replicate the position and normal data a lot. You will need 24 positions and 24 normals, even though the cube will only have 8 unique positions and 6 unique normals.
You can have 8 vertices, 6 of which carry the unique normals (the normal values on the other 2 are never used), so long as you carefully order your primitives' indices such that the "provoking vertex" of each face carries the normal you want applied to the entire face.
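For reference, a minimal sketch of what that looks like (an illustration, not the poster's exact code): the vertex shader output is declared flat, and in OpenGL 3.2+ you can pick the provoking-vertex convention explicitly so you know which vertex of each triangle supplies the value:

    #include <GL/glew.h>   // any GL 3.2+ function loader

    // GLSL side: declare the per-face output with the `flat` qualifier, e.g.
    //   vertex shader:    flat out vec3 faceNormal;
    //   fragment shader:  flat in  vec3 faceNormal;
    void useFirstVertexAsProvoking()
    {
        // Default is GL_LAST_VERTEX_CONVENTION; picking the convention explicitly
        // makes it clear which vertex of each triangle must carry the face normal.
        glProvokingVertex(GL_FIRST_VERTEX_CONVENTION);
    }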
EDIT: My understanding of how it works:

Apply projective transformation on plane in 3D

Scenario
I have a 3D environment which contains a 3D scene and a '2D' scene.
The 3D scene contains a cube and a perspective camera.
The '2D' scene contains 4 round objects and an orthographic camera. These round objects can be moved around by the user; the orthographic camera is used because otherwise the round objects could be moved 'in depth' (along the z-axis) and would change in size, and I want them to maintain their size.
Depending on how the round objects are positioned, the corners of the cube in the 3D scene should be aligned with the positions of the round objects, while maintaining perspective.
Edit:
What I am trying to accomplish is: based on an image of a room, a user uses those round objects to define the dimensions of the room. Based on those dimensions a hidden cube is positioned to act as a bounding box. The next step would be to add 3D objects to the scene while maintaining the perspective of the room.
I tried explaining this scenario in a picture:
Problems
Basically I have no clue where to start.
The round objects are in a '2D' environment because of the orthographic camera, therefore I have no depth value, which I think I need.
I think I need some perspective transformation based on camera positions/settings? There are all sorts of matrices that could be produced, but I don't know how to apply them.
Sources I studied
http://www.graphicsmill.com/docs/gm/affine-and-projective-transformations.htm
Below is a similar situation:
https://math.stackexchange.com/questions/296794/finding-the-transform-matrix-from-4-projected-points-with-javascript
Cannot post more links because of my reputation.
I hope someone can make this clear or point me in the right direction.
Counting the real degrees of freedom, I would say that you don't have enough data. Imagine the projective camera of the 3D scene as an actual pinhole camera. Then the image that camera creates on its film, sensor or whatever is described by at least 9 parameters:
3 parameters for the position of the camera in space,
2 parameters for the direction the camera is looking at and
1 parameter rotating the camera + sensor around their optical axis,
1 parameter determining the distance from pinhole to sensor and
2 parameters translating the sensor in its plane
On the other hand, knowing a projective transformation from one plane to another, e.g. using my answer to the question you already referenced, will only yield 8 geometrically meaningful parameters. So you cannot hope to reconstruct the camera position from that, and therefore you cannot find the image of the 3D scene that would fit your markers. The Wikipedia article on 3D pose estimation states that
Most implementations of POSIT only work on non-coplanar points (in other words, it won't work with flat objects or planes).[3]
That being said, you gave an example of where someone is actually doing this! So how do they do it? Honestly, I'm not sure, but they would have to make use of some additional knowledge or extra assumptions. For example, if they knew details about their camera (focal length, relative position between lens and sensor, or something like that), that could provide the required data. Since these apps tend to work on mobile devices, I think it rather likely that they might have either an API to request these things or a database where they can be looked up for the more common devices.
Judging from your question, you don't have that. Neither do you have all the vertical edges of the cube depicted vertically parallel to one another, which would have been another possible way to add more information. You have to come up with one more piece of information in order to allow for a hopefully unique solution.
Of course, without more information the system is just underspecified. It's not hard to find some transformation matrix which does what you requested. Actually, the answer I referenced is placed in a setup where a 2D-to-2D map is to be modeled using a 3D transformation matrix. You can do the same and be done with it. But your users might become frustrated, since the transformation they obtain might do completely wrong things to the out-of-plane direction, and there is no knob to tune that to the correct behavior.
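If that purely 2D route is enough, computing the map from the four marker positions is straightforward. A sketch using OpenCV (an assumption; any library that solves the 8-parameter homography from 4 point correspondences will do):

    #include <opencv2/imgproc.hpp>

    // Maps the four corners of a reference unit square onto the four user-placed
    // round objects. The 3x3 result is the 8-parameter projective transform
    // discussed above (defined only up to scale).
    cv::Mat cornerHomography(const cv::Point2f roundObjects[4])
    {
        const cv::Point2f reference[4] = {
            {0.f, 0.f}, {1.f, 0.f}, {1.f, 1.f}, {0.f, 1.f}
        };
        return cv::getPerspectiveTransform(reference, roundObjects);
    }

Applying the resulting matrix gives the on-screen positions of the cube face, but, as argued above, it says nothing reliable about the out-of-plane direction.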

How to handle missing data in structure from motion optimization/bundle adjustment

I am working on a structure from motion application and I am tracking a number of markers placed on the object to determine the rigid structure of the object.
The app is essentially using standard Levenberg-Marquardt optimization over multiple camera views and minimizing the differences between expected marker points and the marker points obtained in 2D from each view.
For each marker point and each view the following function is minimised:
double diff = calculatedXY[index] - observedXY[index];
where the calculatedXY value depends on a number of unknown parameters that need to be found via the optimization, and observedXY is the marker point's position in 2D. In total I have (marker points × views) functions like the one above that I am aiming to minimise.
I have coded up a simulation of the camera seeing all the marker points but I was wondering how to handle the cases when during running the points are not visible due to lighting, occlusion or just not being in the camera view. In the real running of the app I will be using a web cam to view the object so it is likely that not all markers will be visible at once and depending on how robust my computer vision algorithm is, I might not be able to detect a marker all the time.
I thought of setting the diff value to 0 (so the squared difference is 0) in the cases where the marker point could not be observed; could this skew the results, however?
Another thing I noticed is that the algorithm does not perform as well when presented with too many views; it is more likely to estimate a bad solution. Is this a common problem with bundle adjustment, due to the increased likelihood of hitting a local minimum?
It is common practice to just leave out the terms corresponding to missing markers, i.e. don't try to minimise calculatedXY - observedXY if there is no observedXY term. There's no need to set anything to zero; you shouldn't even be considering this term in the first place, so just skip it (or, I guess, in your code it is equivalent to setting the error to zero).
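In practice that just means the residual vector only contains entries for markers that were actually detected in a given view. A sketch (the Observation type and visibility flag are invented for illustration):

    #include <utility>
    #include <vector>

    struct Observation {
        bool   visible;        // was the marker detected in this view?
        double observedX, observedY;
    };

    // Build the residual vector fed to Levenberg-Marquardt: one x/y pair per
    // marker that was actually seen, and nothing at all for missing markers.
    std::vector<double> buildResiduals(
        const std::vector<std::vector<Observation>>& obs,                   // [view][marker]
        const std::vector<std::vector<std::pair<double, double>>>& predicted)
    {
        std::vector<double> residuals;
        for (size_t v = 0; v < obs.size(); ++v) {
            for (size_t m = 0; m < obs[v].size(); ++m) {
                if (!obs[v][m].visible)
                    continue;                    // skip the term entirely
                residuals.push_back(predicted[v][m].first  - obs[v][m].observedX);
                residuals.push_back(predicted[v][m].second - obs[v][m].observedY);
            }
        }
        return residuals;
    }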
Bundle adjustment can fail terribly if you simply throw a large number of observations at it. Build your solution up incrementally by solving with a few views first and then keep on adding.
You might want to try some kind of 'robust' approach. Instead of using plain least squares, use a loss function. These allow your optimisation to survive even if a handful of observations are incorrect. You can still do this in a Levenberg-Marquardt framework; you just need to incorporate the derivative of your loss function into the Jacobian.
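For example, a Huber-style loss (one common choice, shown purely as an illustration) stays quadratic for small residuals and grows only linearly for large ones; its derivative is what gets folded into the Jacobian:

    #include <cmath>

    // Huber loss: quadratic for small residuals, linear for large ones,
    // so a few bad marker detections cannot dominate the fit.
    double huberLoss(double r, double delta)
    {
        double a = std::fabs(r);
        return (a <= delta) ? 0.5 * r * r
                            : delta * (a - 0.5 * delta);
    }

    // d(loss)/d(residual), needed when forming the Jacobian / normal equations.
    double huberDerivative(double r, double delta)
    {
        if (std::fabs(r) <= delta) return r;
        return (r > 0.0) ? delta : -delta;
    }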

Problem with huge objects in a quad tree

Let's say I have circular objects. Each object has a diameter of 64 pixels.
The cells of my quad tree are let's say 96x96 pixels.
Everything will be fine and working well when I check collision from the cell a circle is residing in plus all its neighbor cells.
BUT what if I have one circle that has a diameter of 512 pixels? It would cover many cells and thus this would be a problem when checking only the neighbor cells. But I can't re-size my quad-tree-grid every time a much larger object is inserted into the tree...
Instead of putting objects into a single cell, put them into all cells they collide with. That way you can just test each cell individually. Use pointers to the object so you don't create copies. Also, you only need to do this with leaf nodes, so there is no need to combine data contained in higher nodes with lower ones.
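A rough sketch of that insertion strategy (the Circle and Node types are invented for illustration): recurse into every child whose bounds overlap the circle, and store a pointer in each leaf that is reached:

    #include <algorithm>
    #include <vector>

    struct Circle { float x, y, radius; };

    struct Node {
        float minX, minY, maxX, maxY;                               // cell bounds
        Node* children[4] = {nullptr, nullptr, nullptr, nullptr};   // null in leaves
        std::vector<const Circle*> objects;                         // leaves only

        bool overlaps(const Circle& c) const {
            // Clamp the circle centre to the cell and compare with the radius.
            float cx = std::max(minX, std::min(c.x, maxX));
            float cy = std::max(minY, std::min(c.y, maxY));
            float dx = c.x - cx, dy = c.y - cy;
            return dx * dx + dy * dy <= c.radius * c.radius;
        }
    };

    void insert(Node& node, const Circle& c)
    {
        if (!node.overlaps(c))
            return;
        if (node.children[0] == nullptr) {      // leaf: store a pointer, not a copy
            node.objects.push_back(&c);
            return;
        }
        for (Node* child : node.children)       // internal node: try every child
            insert(*child, c);
    }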
This is an interesting problem. Maybe you can extend the node or the cell with tree-height information? If you have an object bigger than the smallest cell, nest it according to the tree height. That's what map applications like Google or Bing Maps do.
Here is a link to a similar solution: http://www.gamedev.net/topic/588426-2d-quadtree-collision---variety-in-size. I was confusing the screen with the quadtree. You can check collision with a simple recursion.
Oversearching
During the search, and starting with the largest objects first...
Test Object.Position.X against QuadTreeNode.Centre.X, and also test Object.Position.Y against QuadTreeNode.Centre.Y;
... Then, by taking the absolute value of the difference, treat the object as intruding into a given child node whenever that absolute value is NOT more than the radius of the object...
... that is, when some portion of the object intrudes into that quad : )
The same can be done with AABB (Axis Aligned Bounding Boxes)
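A sketch of that per-axis test for circles (the same pattern works for AABBs); which halves of the node the object reaches into decides which child quadrants have to be searched:

    // Which child quadrants of a node can a circular object touch?
    // dx, dy: object centre minus node centre; radius: object radius.
    struct QuadOverlap { bool nw, ne, sw, se; };

    QuadOverlap quadrantsTouched(float dx, float dy, float radius)
    {
        // If |dx| is NOT more than the radius, the object straddles the vertical
        // split line and reaches into both the west and east halves; otherwise it
        // lies entirely on one side. The same applies in y for north/south.
        bool west  = dx <  radius;
        bool east  = dx > -radius;
        bool north = dy <  radius;   // convention here: "north" = smaller y
        bool south = dy > -radius;
        return { west && north, east && north, west && south, east && south };
    }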
The only real caveat here is that VERY large objects that cover most of the screen, will force a search of the entire tree. In these cases, a different approach may be called for.
Of course, this only takes care of the object that everything else is being tested against. To ensure that all the other large objects in the world are properly identified, you will need to alter your quadtree slightly...
Use Multiple Appearances
In this variation on the QuadTree we ONLY place objects in the leaf nodes of the QuadTree, as pointers. Larger objects may appear in multiple leaf nodes.
Since some objects have multiple appearances in the tree, we need a way to avoid them once they've already been tested against.
So...
A simple Boolean WasHit flag can avoid testing the same object multiple times in a hit-test pass... and a 'cleanup' can be run on all 'hit' objects so that they are ready for the next test.
Whilst this makes sense, it is wasteful when performing all-vs-all hit-tests.
So... Getting a little cleverer, we can avoid having any cleanup at all by using a Pointer 'ptrLastObjectTestedAgainst' inside of each object in the scene. This avoids re-testing the same objects on this run (the pointer is set after the first encounter)
It does not require resetting when testing a new object against the scene (the new object has a different pointer value than the last one). This avoids the need to reset the pointer as you would with a simple Bool flag.
I've used the latter approach in scenes with vastly different object sizes and it worked well.
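A minimal sketch of that pointer trick (illustrative types only): each scene object remembers the last query object it was tested against, so duplicate appearances coming from multiple leaves are skipped without any cleanup pass:

    #include <vector>

    struct SceneObject {
        // ... shape data ...
        const SceneObject* lastTestedAgainst = nullptr;   // the 'ptrLastObjectTestedAgainst'
    };

    // `candidates` is the concatenation of all leaf lists the query object touches;
    // large objects can appear in it more than once.
    void hitTest(const SceneObject& query, const std::vector<SceneObject*>& candidates)
    {
        for (SceneObject* other : candidates) {
            if (other->lastTestedAgainst == &query)
                continue;                        // duplicate appearance: skip
            other->lastTestedAgainst = &query;   // mark; no reset needed next pass
            // ... narrow-phase test between `query` and `*other` goes here ...
        }
    }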
Elastic QuadTrees
I've also used an 'elastic' QuadTree. Basically, you set a limit on how many items can IDEALLY fit in each QuadTreeNode - But, unlike a standard QuadTree, you allow the code to override this limit in specific cases.
The overriding rule here is that an object may NOT be placed into a Node that cannot hold it ENTIRELY... with the top node catching any objects that are larger than the screen.
Thus, small objects will continue to 'fall through' to form a regular QuadTree but large objects will not always fall all the way through to the leaf node - but will instead expand the node that last fitted them.
Think of the non-leaf nodes as 'sieving' the objects as they fall down the tree
This turns out to be a very efficient choice for many scenarios : )
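A sketch of that insertion rule (types invented for the example): keep descending only while some child can hold the object entirely; otherwise the object stays in, and 'stretches', the current node:

    #include <vector>

    struct AABB { float minX, minY, maxX, maxY; };

    inline bool contains(const AABB& outer, const AABB& inner)
    {
        return inner.minX >= outer.minX && inner.maxX <= outer.maxX &&
               inner.minY >= outer.minY && inner.maxY <= outer.maxY;
    }

    struct ElasticNode {
        AABB bounds;
        ElasticNode* children[4] = {};      // null for leaves
        std::vector<const AABB*> objects;   // may exceed the "ideal" per-node limit
    };

    void insert(ElasticNode& node, const AABB& object)
    {
        if (node.children[0] != nullptr) {
            for (ElasticNode* child : node.children) {
                if (contains(child->bounds, object)) {
                    insert(*child, object);   // small objects keep falling through
                    return;
                }
            }
        }
        // No child can hold the object ENTIRELY (or this is a leaf): it stays
        // here, even if that overrides the node's ideal item limit.
        node.objects.push_back(&object);
    }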
Conclusion
Remember that these standard algorithms are useful general tools, but they are not a substitute for thinking about your specific problem. Do not fall into the trap of using a specific algorithm or library 'just because it is well known' ... your application is unique, and it may benefit from a slightly different approach.
Therefore, don't just learn to apply algorithms ... learn from those algorithms, and apply the principles themselves in novel and fitting ways. These are NOT the only tools, nor are they necessarily the best fit for your application.
Hope some of those ideas helped.

Best solution for 2D occlusion culling

In my 2D game, I have static and dynamic objects. There can be multiple cameras. My problem: Determine objects that intersect with the current camera's view rectangle.
Currently, I simply iterate over all existing objects (not caring whether they are dynamic or static) and do an AABB check against the camera's view rect on them. This seems acceptable for very dynamic objects, but not for static objects, where there can be tens of thousands of them (static level geometry scattered over the whole scene).
I have looked into multiple data structures which could solve my problem:
Quadtree
This was the first thing I considered; however, the problem is that it would force my scenes to be of fixed size (acceptable for static, but not for dynamic objects).
Dynamic AABB tree
Seems good, but the overhead for rebalancing it seems just too great for many dynamic objects.
Spatial hash
The main problem here for me was that if you zoom out with the camera a lot, a huge number of mostly non-existent spatial hash buckets has to be queried, causing low performance.
In general, my criteria for a good solution to this problem are:
Dynamic size: The solution must not cause the scene size to be limited, or require heavy recomputation for resizing
Good query performance (for the camera)
Good support for very dynamic objects: the computations needed to handle objects with constantly changing positions should be cheap:
The maximum sane number of dynamic objects in my game at one time is probably around 5000. Assume they all change their position every frame. Is there even a data structure which can be faster, considering the frequent insertions and deletions, than comparing the AABBs of the objects with the camera every frame?
Don't try to find the silver bullet. Just split your scene into dynamic and static parts and use different algorithms for them.
Quad trees are obviously suitable for static geometry with fixed bounds.
Spatial hashes are ideal for sets of objects with similar sizes (particle systems, for example).
AFAIK dynamic AABB trees are rarely used for occlusion culling; their main purpose is the broad phase of collision detection.
And as you noticed, brute-force culling is normal for dynamic objects if their number is not really big (see the sketch at the end of this answer).
static level geometry scattered over the whole scene
If your scene is highly sparse, you can divide it into islands, i.e. create a list of scene parts with "good density".
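For the dynamic part, the brute-force culling mentioned above is just one tight loop per frame. A sketch (assuming axis-aligned boxes for both objects and camera):

    #include <cstddef>
    #include <vector>

    struct AABB { float minX, minY, maxX, maxY; };

    inline bool intersects(const AABB& a, const AABB& b)
    {
        return a.minX <= b.maxX && a.maxX >= b.minX &&
               a.minY <= b.maxY && a.maxY >= b.minY;
    }

    // Brute-force culling of the dynamic objects: ~5000 AABB tests per camera per
    // frame is usually cheaper than updating a spatial structure that has to
    // absorb 5000 moved objects every frame anyway.
    std::vector<std::size_t> visibleDynamic(const std::vector<AABB>& dynamicObjects,
                                            const AABB& cameraRect)
    {
        std::vector<std::size_t> visible;
        for (std::size_t i = 0; i < dynamicObjects.size(); ++i)
            if (intersects(dynamicObjects[i], cameraRect))
                visible.push_back(i);
        return visible;
    }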
