I want to use PCL (the Point Cloud Library) to implement detection of cubes or rectangles of any size in a scene.
Can anyone give me some direction?
You may want to take a look at this PCL tutorial or, in general, at all the techniques implemented in the pcl::recognition module.
On the PCL users mailing list archive (here), there is an old but still useful discussion about simple object recognition. For simple objects, as in your case, you may consider using Sample Consensus to segment the model inside your scene point cloud.
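For a box-shaped object, a common starting point is RANSAC plane fitting, since a cube shows up in the cloud as a set of mutually orthogonal planes. Below is a minimal sketch using pcl::SACSegmentation (the scene.pcd file name and the 0.01 threshold are placeholders to adapt to your data):

    #include <pcl/io/pcd_io.h>
    #include <pcl/point_types.h>
    #include <pcl/ModelCoefficients.h>
    #include <pcl/PointIndices.h>
    #include <pcl/segmentation/sac_segmentation.h>

    int main()
    {
      pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
      pcl::io::loadPCDFile("scene.pcd", *cloud);   // placeholder file name

      pcl::SACSegmentation<pcl::PointXYZ> seg;
      seg.setModelType(pcl::SACMODEL_PLANE);       // a box face is a plane
      seg.setMethodType(pcl::SAC_RANSAC);
      seg.setDistanceThreshold(0.01);              // tune to your sensor noise (m)
      seg.setInputCloud(cloud);

      pcl::PointIndices inliers;
      pcl::ModelCoefficients coefficients;
      seg.segment(inliers, coefficients);          // points on the dominant plane

      // coefficients.values holds {a, b, c, d} of the plane ax + by + cz + d = 0.
      // Remove the inliers and segment again to find the other faces, then test
      // that the recovered plane normals are mutually orthogonal.
      return 0;
    }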
Is there a way to create and use a simple 3D model in the Unreal Engine?
Your best bet would be to create the initial 3D asset in a third-party tool and import it into the IDE. From there you can change the texture map and manipulate the aesthetics in one way or another, but the initial 3D model should be in an external 3D format, and then placed as a prefab into your world.
Creating an object dynamically in UDK is cumbersome, requires lots of tweaking, and won't save you much in resource costs, especially if you want it to look good and be more than just 3D meshes thrown together rudimentarily. It is possible, but hardly worth it, especially if you have 3DS Max, Maya, Cinema 4D, MotionBuilder, or one of the hundred other tools available to do the grunt work for you.
Most 3D engines (e.g. Unity, UDK, Torque, CryEngine and now Havok) support many formats, especially the universal FBX. You could even use Google SketchUp and export to the DAE or FBX format to get a model into your engine. Granted, you lose a lot of the elements, but the basic 3D mesh stays relatively intact.
Does anyone have any good implementation strategies or resources for putting together a b-rep modeling system?
OpenCascade is apparently a good library for b-rep modeling (it is used by FreeCAD, and PythonOCC is very cool too), but the library is huge and complicated, and may not be a good starting point for learning about b-rep modeling 'engines'.
I've done quite a bit of research-paper reading, and while the fundamental math is useful for understanding why everything works, it's left me with some implementation questions.
The halfedge data-structure seems to be the preferred way to store information about a body in b-rep implementations.
So a handful of questions in no particular order:
Using the halfedge data-structure, how is rendering typically implemented? Triangulation based on the solid's boundaries?
How are circular faces/curved surfaces typically implemented? For instance, in one basic introduction to b-reps I read, a cylinder was internally stored as a prism, i.e. an extruded triangle, with metadata on the cap faces denoting that they were actually circular.
How are boolean operations typically implemented? I've read about generating BSP-trees along the intersection curves, then combining those trees to generate the new geometry. Are there other ways to implement boolean operations, and what sort of pros/cons do they have?
Thanks!
If you'd like to provide a code example, don't worry about the language -- the questions are more about algorithmic/data-structure implementation details.
I'm working on a B-rep modeler in C# (I'm at a very early stage: it's a huge project), so I ask myself the same questions as you. Here are my answers:
Triangulation: I haven't done this step yet, but the strategy I'm thinking about is as follows: project the face boundaries into parameter space to obtain 2D polygons (with holes), triangulate that with the ear-clipping algorithm, and then reproject the triangle vertices into 3D space. For curved surfaces, I need to split the polygons with a grid in order to follow the surface;
For a cylinder, there are three edges: two circular ones and one line segment. I have classes for each type of curve (Segment3d, Circle3d...) and each half-edge holds an instance of one of these classes. Each face holds an instance of a surface object (plane, cylinder, sphere...); see the sketch after this list;
There is an interesting project here based on BSP-trees, but it uses the CSG method, not b-rep. I'm still researching how to do this, but I don't think I will need a BSP tree. The difficulty is in computing the intersections and the topology.
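To make point 2 concrete, here is a rough sketch (in C++ rather than C#, since the language doesn't matter here) of how a half-edge structure can carry exact curve and surface geometry alongside the topology. All the class names are illustrative, not taken from any particular library:

    #include <memory>
    #include <vector>

    // Abstract geometry carried by the topology; concrete types would be
    // Segment3d, Circle3d, Plane, Cylinder, and so on.
    struct Curve3d   { virtual ~Curve3d() = default; };
    struct Surface3d { virtual ~Surface3d() = default; };

    struct Vertex; struct HalfEdge; struct Face;

    struct Vertex {
        double x = 0, y = 0, z = 0;
        HalfEdge* outgoing = nullptr;        // one half-edge leaving this vertex
    };

    struct HalfEdge {
        Vertex*   origin = nullptr;
        HalfEdge* twin   = nullptr;          // opposite half-edge, adjacent face
        HalfEdge* next   = nullptr;          // next half-edge around the face loop
        Face*     face   = nullptr;
        std::shared_ptr<Curve3d> curve;      // exact curve, shared with the twin
    };

    struct Face {
        HalfEdge* boundary = nullptr;          // one half-edge of the outer loop
        std::unique_ptr<Surface3d> surface;    // exact surface (plane, cylinder...)
    };

    // Walk the outer loop of a face -- the basic traversal behind triangulation,
    // area computation, boolean classification, etc.
    std::vector<Vertex*> boundaryVertices(const Face& f) {
        std::vector<Vertex*> loop;
        HalfEdge* he = f.boundary;
        do {
            loop.push_back(he->origin);
            he = he->next;
        } while (he != f.boundary);
        return loop;
    }

    int main() { return 0; }   // structure sketch only; no model is built here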
The best books I've found on this subject:
3D CAD - Principles and Applications (old but still relevant)
Geometric Modeling: The mathematics of shapes (more recent than the previous one, but less clear)
Hi guys,
I'm having trouble adding object picking to a JOGL project.
I know this can be done with the pick buffer, but I can't find any examples.
Anyone?
In general, as you are probably aware, JOGL code translates directly from any other OpenGL examples you might see on the web.
GL_SELECT-based picking seems to be very much out of favour these days; it is deprecated in the spec and poorly implemented by drivers.
Alternatives you can use are:
Rendering each object with a unique color (with all lighting/fog etc. disabled) so you can determine which object the mouse is over via glReadPixels, clearing the buffers after the picking stage so that you can then render your normal graphics. This approach is explained by the top-rated answer to OpenGL GL_SELECT or manual collision detection?, for example.
Ray-casting into your geometry (see the selection FAQ link below). This also means that you don't have to have an active GL context in the thread you call the code from, fwiw.
I've used both of these methods in the same application, currently having good results with the latter, but since most of the objects in that application are spheres it is a lot cheaper than it might be with arbitrary models.
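For what it's worth, the ray-sphere test at the heart of the second option is only a few lines. Here is a self-contained sketch in plain C++ (the math translates directly to Java/JOGL); producing the pick ray from the mouse position, e.g. by unprojecting with gluUnProject, is assumed to have happened already:

    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    static Vec3   sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Distance along the ray (origin o, unit direction d) to the nearest hit,
    // or a negative value if the ray misses the sphere.
    double raySphere(Vec3 o, Vec3 d, Vec3 center, double radius) {
        Vec3 oc = sub(o, center);
        double b = dot(oc, d);                        // half the quadratic's b term
        double c = dot(oc, oc) - radius * radius;
        double disc = b * b - c;
        if (disc < 0) return -1.0;                    // no intersection
        double t = -b - std::sqrt(disc);              // nearest root
        return (t >= 0) ? t : -b + std::sqrt(disc);   // ray may start inside
    }

    int main() {
        // Ray down the -z axis from the origin; unit sphere centered at z = -5.
        double t = raySphere({0, 0, 0}, {0, 0, -1}, {0, 0, -5}, 1.0);
        std::printf("hit at t = %f\n", t);            // expect 4.0
        return 0;
    }

To pick among several objects, run the test against each object's bounding sphere and keep the smallest non-negative t.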
http://www.opengl.org/resources/faq/technical/selection.htm
Currently I'm trying to develop a simple plot prototype, and I'm struggling with a kind of blank-sheet syndrome.
I'm back to Qt after two years, so I feel quite rusty.
My application should:
plot and manage custom layers of data
plot on custom canvas background
manage markers on plot
My plan is to use following design:
QGraphicsScene/View/Item as sprite-like management classes for the background, markers, pointers and other "bitmap" objects, etc.
QPainter/QPixmap or QPicture for the actual data layers, and, if possible, set them as QGraphicsItems to simplify management of the dynamic graphics (a minimal sketch follows below)
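To make the plan concrete, here is a minimal sketch of the composition I have in mind (the PlotLayer class, the placeholder sine data and background.png are made up for illustration):

    #include <QApplication>
    #include <QGraphicsScene>
    #include <QGraphicsView>
    #include <QGraphicsPixmapItem>
    #include <QPainter>
    #include <cmath>

    // One data layer drawn as a custom item, so the scene manages
    // stacking order, visibility and repaints for us.
    class PlotLayer : public QGraphicsItem {
    public:
        QRectF boundingRect() const override { return QRectF(0, 0, 400, 300); }
        void paint(QPainter* p, const QStyleOptionGraphicsItem*, QWidget*) override {
            p->setPen(Qt::red);
            for (int x = 0; x < 400; x += 4)   // placeholder curve
                p->drawPoint(QPointF(x, 150 + 100 * std::sin(x / 40.0)));
        }
    };

    int main(int argc, char** argv) {
        QApplication app(argc, argv);
        QGraphicsScene scene(0, 0, 400, 300);

        // Custom canvas background as the bottom-most item.
        auto* bg = scene.addPixmap(QPixmap("background.png"));
        bg->setZValue(-1);

        scene.addItem(new PlotLayer);                // data layer on top
        scene.addEllipse(197, 147, 6, 6,             // a simple marker
                         QPen(Qt::blue), QBrush(Qt::blue));

        QGraphicsView view(&scene);
        view.show();
        return app.exec();
    }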
I don't want to use Qwt or a similar library, unless I can plot with it on a custom background (I don't like the look of Qwt's graphic style).
Is my plan sound in terms of Qt class usage and composition?
I'd like to have at least a clear overview of the classes that should be involved in this kind of prototype.
Thanks in advance.
P.
I think you have the basic idea with QGraphicsView. Here are a few resources which might help:
Graphics View
Diagram Scene
If you want to use the new animation and state machine classes:
Stickman
Also, take a look at Gunnar's Qt Labs blog. He recently did a series on graphics performance.
All of these are strictly Qt (the animation and state machine classes are in 4.6). They are in C++, but hopefully you can translate what you need to Python.
You don't say much about your project, so it's hard to propose a more detailed answer, but have a look at the Qt demos involving Graphics View, especially Diagram Scene and 40000 Chips. I think you will find them inspiring for what you want to do.
Maybe MathGL is appropriate for you. It has a Qt widget, or you can use its RGBA image output directly to combine it with any background in your widget.
I recommend using QCustomPlot, a Qt C++ library. It focuses on making good-looking, publication-quality 2D plots, graphs and charts, and also has high performance for real-time visualization applications. You can get it here: http://www.qcustomplot.com/
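For a sense of the API, a minimal sketch of basic usage might look like this (method names per the QCustomPlot documentation; double-check them against the version you download). setBackground should also cover the custom-background requirement from the question:

    #include <QApplication>
    #include <QVector>
    #include "qcustomplot.h"   // single .h/.cpp pair shipped with the library

    int main(int argc, char** argv) {
        QApplication app(argc, argv);

        QCustomPlot plot;
        plot.setBackground(QPixmap("background.png"));  // custom canvas background

        QVector<double> x(101), y(101);
        for (int i = 0; i < 101; ++i) {                 // sample data: y = x^2
            x[i] = i / 50.0 - 1.0;
            y[i] = x[i] * x[i];
        }
        plot.addGraph();
        plot.graph(0)->setData(x, y);
        plot.xAxis->setLabel("x");
        plot.yAxis->setLabel("y");
        plot.rescaleAxes();

        plot.show();
        return app.exec();
    }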
You may want to take a look at the Core Plot framework. Core Plot is OS X specific, but it is built on the OS X Core Animation system, which has a lot of conceptual similarity to the Qt Graphics View Framework. You'll have to learn to visually parse the Objective-C (a less-than-two-day process for any competent C++ developer), but you should be able to see the general architecture relatively easily. The Core Plot wiki has some nice high-level documentation that might set you on your way without even needing to look at the code.
I read in a book that a projection in Repast Simphony can be any user implementation of the Projection interface. I would like to create a custom projection, but it looks more complicated than I expected.
Have any of you ever tried to create your own projection? If so, would it be possible for you to explain how to proceed? Thank you.
I think creating the Projection implementation should be fairly straightforward. However, it will not be integrated with the visualization architecture. So, your agents will be able to participate in the Projection but it will not be visualized.
If you implement a class with the Projection interface and the ContextListener interface, that should be enough. You can use DefaultProjection as a starting point: most, if not all, of the standard Projection hierarchies start from DefaultProjection, and their subclasses implement ContextListener. See AbstractGrid and ContextGrid, for example. The source should be useful as a guide to implementation.