Has anyone already used IFC (Industry Foundation Classes) from buildingSMART, typically adopted for BIM projects and the building domain?
I would like to know how to navigate the IFC objects to get the coordinates of an IfcWallStandardCase or of a similar object (i.e., still a wall).
I am interested in getting the coordinates of all, or at least one, of the vertices delimiting the wall.
Please indicate how to navigate the IFC objects of an IFC file to locate the coordinate information, starting from an IfcWallStandardCase or a similar object.
First, go for the Representation attribute, which is optional for IfcProduct. You want shape representations (IfcProductDefinitionShape), not material representations. If there are representations at all, you may get multiple representations, each with a context specifying dimensionality, precision, and coordinate system. Since you are hunting for coordinates, you probably want a representation of type IfcShapeRepresentation, not IfcTopologyRepresentation. Each representation then consists of multiple representation items.
There are several types of geometry representations; check the inheritance tree of IfcGeometricRepresentationItem. Here is an example for a faceted BREP: each representation item is then of type IfcFacetedBrep, which is explained nicely in the IFC2x4 specs. With the attribute Outer you get a closed shell, which consists of a set of faces (IfcFace) reachable through the attribute CfsFaces. Each face has a set of bounds (IfcFaceBound, attribute Bounds), each of which is defined by a loop (IfcLoop, attribute Bound) and an orientation. The loops again may be of different types; let's assume IfcPolyLoop. Those have a list of points (IfcCartesianPoint) under the attribute Polygon, which finally give you the coordinates (of type IfcLengthMeasure, which is a REAL) with the attribute Coordinates.
Be aware that those coordinates are relative to the coordinate system of the geometric context mentioned at the beginning. Contexts may be nested, with multiple coordinate transformations to be resolved in order to get absolute world coordinates.
The full path of attribute names is: Representation, Representations, Items, Outer, CfsFaces, Bounds, Bound, Polygon, Coordinates.
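For concreteness, here is a minimal Python sketch of that attribute path using IfcOpenShell (my choice of toolkit, not something the question requires; any STEP/IFC parser that exposes attributes by name works the same way). The file name is a placeholder, and only the faceted-BREP case is handled:

    import ifcopenshell

    model = ifcopenshell.open("model.ifc")  # placeholder path

    for wall in model.by_type("IfcWallStandardCase"):
        shape = wall.Representation                   # optional on IfcProduct
        if shape is None:
            continue
        for rep in shape.Representations:             # one per context
            if not rep.is_a("IfcShapeRepresentation"):
                continue
            for item in rep.Items:
                if not item.is_a("IfcFacetedBrep"):
                    continue                          # other geometry types skipped
                for face in item.Outer.CfsFaces:      # closed shell -> faces
                    for bound in face.Bounds:         # face -> bounds
                        loop = bound.Bound
                        if loop.is_a("IfcPolyLoop"):
                            for pt in loop.Polygon:   # cartesian points
                                # Local coordinates; the context transformations
                                # must still be applied for world coordinates.
                                print(pt.Coordinates)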
I want to write a program with 'geometry automata'. I'd like it to be a companion to a book on artistic designs. There will be different units, like the 'four petal unit' and 'six petal unit' shown below, and users can choose rulesets to draw unique patterns onto the units:
I don't know what the best data structure to use for this project is. I also don't know if similar things have been done and if so, using what packages or languages. I'm willing to learn anything.
All I know right now is 2D arrays to represent a grid of units. I'm also having trouble mathematically partitioning the 'subunits'. I can see myself just overlapping a bunch of unit-circle formulas and shrinking the x/y domains (Cartesian system). I can also see myself representing the curve from one unit to another (radians).
Any help would be appreciated.
Thanks!!
I can't guarantee that this is the most efficient solution, but it is a solution, so it should get you started.
It seems that a graph (vertices with edges) is a natural way to encode this grid. Each node has 4 or 6 neighbours (the number of neighbours matches the number of petals). Each node has 8 or 12 edges, two for each neighbour.
Each vertex has an (x,y) co-ordinate; for example, the first row in the left image, starting from the left, is at location (1,0), and the next node to its right is at (3,0). The first node on the second row is (0,1). This lets you make sure they get plotted correctly, but otherwise the co-ordinate doesn't have much to do with it.
The trouble comes from having two different edges to each neighbour, each aligned with a different circle. You could identify them with the centres of their circles, or you could just call one "upper" and the other "lower".
This structure lets you follow edges easily, and can be stored sparsely if necessary in a hash table (keyed by co-ordinate) or a linked list.
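To make that concrete, here is a minimal Python sketch of the graph encoding, with vertices keyed by co-ordinate in a dict (the sparse storage just mentioned) and the two edges per neighbour labelled "upper" and "lower"; the co-ordinates are the ones from the example above:

    # Sparse graph: vertex co-ordinate -> set of (neighbour, arc) edges.
    vertices = {}

    def add_edges(a, b):
        """Connect two neighbouring vertices with both of their arcs."""
        vertices.setdefault(a, set())
        vertices.setdefault(b, set())
        for arc in ("upper", "lower"):
            vertices[a].add((b, arc))
            vertices[b].add((a, arc))

    # First row of the left image: (1,0) and its right neighbour (3,0).
    add_edges((1, 0), (3, 0))
    print(vertices[(1, 0)])   # {((3, 0), 'upper'), ((3, 0), 'lower')}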
Data structure:
The vertices can naturally be stored as a 2-dimensional array (row, column), with the special characteristic that every second column has a horizontal offset.
Each vertex has a set of possible connections to those vertices to its right (upper-right, right, or lower right). The set of possible connections depends on the grid. Whether a connection should be displayed as a thin or a thick line can be represented as a single bit, so all possible connections for the vertex could be packed into a single byte (more compact than a boolean array). For your 4-petal variant, only 4 bits need storing; for the 6-petal variant you need to store 6 bits.
That means your data structure should be a 2-dimensional array of bytes.
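Here is a minimal Python sketch of that layout, assuming (arbitrarily) that bit i of the byte means "connection i is drawn thick"; the grid size and bit names are made up for illustration:

    import numpy as np

    ROWS, COLS = 10, 10
    grid = np.zeros((ROWS, COLS), dtype=np.uint8)   # one byte per vertex

    # Bit positions for the 4-petal variant's rightward connections.
    UPPER_RIGHT, RIGHT_UPPER, RIGHT_LOWER, LOWER_RIGHT = 0, 1, 2, 3

    def set_thick(row, col, bit, thick=True):
        """Mark one connection of a vertex as thick (set) or thin (clear)."""
        if thick:
            grid[row, col] |= 1 << bit
        else:
            grid[row, col] &= ~(1 << bit) & 0xFF

    def is_thick(row, col, bit):
        return bool(grid[row, col] & (1 << bit))

    set_thick(2, 3, UPPER_RIGHT)
    print(is_thick(2, 3, UPPER_RIGHT))   # True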
Package:
Anything you like that allows drawing and mouse/touch interaction. Drawing the connections is pretty straightforward; you could either draw arcs with SVG or you could even use a set of PNG sprites for different connection bit-patterns (the sprites having partial transparency so as not to obscure other connections).
I draw a vector geometry with some calibration points around it.
I print this geometry and then I physically scan the printed calibration points (I can't scan the geometry, I can only scan the calibration points).
When I acquire these points, they are no longer in their original positions because of print errors or bad print calibration.
The question is:
Is there any algorithm that helps me adapt the original geometry based on the newly scanned points?
In practice, I need to warp the geometry in order to obtain the real geometry printed on the paper, with the same print error that I have on the calibration points.
The distortion is caused by the physical distortion of the material (not paper but cloth) during the print process. I can't know in advance how much the material will distort during printing.
Yes, there are algorithms to help you with that. In general you need to learn/find the transformation between the two images that you have.
Typical geometric transformations are affine transformations (shift, scale, rotation, shear, reflection), which need at least three control points, or piecewise-local-linear / local-weighted-mean transformations, which need at least 4-6 control points. The more control points you have, the better in general.
Given a set of control points in one image and the corresponding set of control points in the other image, there are algorithms for finding the optimal transformation between them if you specify a class (affine or piecewise local linear). See, for example, fitgeotrans in Matlab. I don't know exactly how it solves the problem, but I guess by some kind of optimization. It should be easy to find available implementations for other programming languages (Python, C, Java).
What remains is finding the correspondence between the control points in the two images. For a few images you may be able to do that by hand, but in the general case you might want to automate this as well. General image registration algorithms like imregister should do well for your images. They give you a good initial estimate for the transformation (which may already be sufficient), so that identifying the corresponding point pairs becomes trivial (always take the nearest) and allows refining.
So I advise you to first just try to register the images (grayscale data) with an identity transformation as the starting value. Then identify corresponding point pairs and refine the transformation using either an affine or a piecewise/local transformation. Then apply the transformation to the geometry to get the printed geometry. Depending on your choice of programming language, you will find many implementations that do the job.
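As a concrete starting point, here is a minimal Python sketch using scikit-image's estimate_transform (one of many libraries that do this; the point values are made up, and the points are assumed to be already matched into corresponding pairs):

    import numpy as np
    from skimage import transform

    # Designed calibration points and their measured positions on the scan.
    src = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)
    dst = np.array([[2, 1], [103, 0], [101, 104], [-1, 102]], dtype=float)

    # Least-squares affine fit; needs at least 3 point pairs.
    # 'piecewise-affine' is also available if you have enough points.
    tform = transform.estimate_transform('affine', src, dst)

    # Apply the fitted distortion to the original geometry's vertices
    # to predict where they actually land on the printed material.
    geometry = np.array([[10, 10], [50, 80], [90, 20]], dtype=float)
    print(tform(geometry))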
I'm working on a problem to eliminate common line segments in a collection of Paths. Many of these paths share the same segment.
It seems that a 2D line should have some way to uniquely identify itself, like a key.
So a line [(A,B), (C,D)] is the same as [(C,D), (A,B)].
The only solution I could come up with is to sort the points.
This seems like it would be a common problem in Math or Graphics but the solution escapes me.
From a mathematical point of view, this looks like a matter of an undirected graph (as opposed to a directed graph).
Sorting the points is one way to handle this: it's a straightforward way to represent an unordered edge with a single, unambiguously selected value (it shouldn't matter what ordering you choose, as long as it is consistent for all possible segments). You do need to ensure that you maintain this ordering as an invariant: accidentally slipping in a mis-ordered edge could cause problems for anything that depends on the ordering.
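A minimal Python sketch of the sorted-endpoint approach (the paths and points are made up for illustration):

    def segment_key(p, q):
        """Direction-independent key: both orderings map to the same tuple."""
        return (p, q) if p <= q else (q, p)

    paths = [
        [(0, 0), (1, 0), (1, 1)],
        [(1, 1), (1, 0), (2, 0)],   # shares the (1,0)-(1,1) segment, reversed
    ]

    seen = set()
    unique_segments = []
    for path in paths:
        for p, q in zip(path, path[1:]):
            if segment_key(p, q) not in seen:
                seen.add(segment_key(p, q))
                unique_segments.append((p, q))

    print(unique_segments)   # the shared segment appears only once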
However, mathematically speaking, undirected graphs are often defined as directed graphs with a symmetry property: if (A,B) is an edge, then so is (B,A). This suggests another way: ensure that you always store both (A,B) and (B,A). Perhaps both segment orderings could have a link to any common data, and possibly a fast way to access one from the other. (As with the sorted point method, you need to maintain this symmetry as an invariant.)
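A sketch of this symmetric alternative, in the same vein: both orderings are stored as keys, and they share a single record, so a lookup in either direction finds the same data:

    segment_data = {}

    def add_segment(p, q, data):
        record = {"data": data}
        segment_data[(p, q)] = record   # one entry per direction,
        segment_data[(q, p)] = record   # both sharing the same record

    add_segment((0, 0), (1, 0), data="shared edge info")
    print(segment_data[((1, 0), (0, 0))]["data"])   # 'shared edge info'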
The best choice depends on your application. If you're using your segments as a key, the sorting method might be best. However, some applications are a better match for the symmetric method. For example, the doubly connected edge list is a data structure which represents each edge as two linked "half-edges", one in each direction.
Since you mention graphics, note that the doubly connected edge list is often used to represent the edges of 3-D polytopes.
Also, note the similarity to oriented triangles: there are good, practical reasons for computer graphics to treat triangles as "one-sided", such that drawing a triangle visible from one side is distinct from drawing the same triangle visible from the other. Like half-edges, this distinction is determined by the order of the vertices: clockwise for one side, counterclockwise for the other.
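The orientation test itself is a one-liner; a small Python sketch with made-up points:

    def winding(a, b, c):
        """2-D cross product: > 0 counterclockwise, < 0 clockwise, 0 degenerate."""
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    tri = ((0, 0), (1, 0), (0, 1))
    print(winding(*tri))            # 1  -> one side
    print(winding(*reversed(tri)))  # -1 -> the same triangle, other side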
I'm very new to PCL.
I am trying to detect the floor under an object, to check whether the object has toppled or is positioned horizontally.
I've checked the API and found the method pcl::PointCloud< T >::at.
It seems I could get the Z-value of a point using at. Is that correct?
If yes, I'm confused about how it should work. Mathematically, a point is infinitely small. On my scans I see that the point density gets smaller the more distant the points are in the Z-direction.
Will at always return a point? Is the value the mean of the nearest physical points?
As referenced in the documentation, pcl::PointCloud< T >::at returns the information of a single point (the coordinates plus other data, depending on the point format) given column and row information (roughly the X,Y in the depth image). For this reason, this method only works on organized clouds.
Unfortunately, not every point is a valid point. Unless you filter the point cloud, you may find invalid measurements (points that have NaN components). This is pretty normal; just discard those points using a filter. Your intuition is right: the point density gets smaller the further away you go from the sensor.
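To illustrate (not PCL itself, since PCL is a C++ library): a minimal Python/NumPy sketch of what at(col, row) conceptually does on an organized cloud, here faked as a (rows, cols, 3) array of x,y,z values with some NaN entries standing in for invalid measurements:

    import numpy as np

    rows, cols = 480, 640
    cloud = np.random.rand(rows, cols, 3).astype(np.float32)  # synthetic data
    cloud[100:110, 200:210] = np.nan        # simulated invalid measurements

    # at(col, row): index straight into the 2-D grid of points.
    p = cloud[240, 320]
    if np.any(np.isnan(p)):
        print("invalid measurement, discard")
    else:
        print("z-value:", p[2])

    # Filtering: keep only points with all-finite coordinates.
    valid = cloud[np.all(np.isfinite(cloud), axis=-1)]
    print(valid.shape)                      # (N, 3)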
As for what you're trying to achieve, you should take a look at the planar segmentation tutorial on the PCL website and at the Table Object Detector software by Nicolas Burrus. The latter extracts a plane, and the clusters of objects on top of it.
I have a game world with lots of irregular objects with varying coordinate systems controlling how objects on their surface work. However, the camera and these objects can move out into open, empty space, where a normal Cartesian coordinate system is used. How do I manage the mapping between the two?
One idea I had would be to wrap these objects in bounds such as a sphere or box, within which that coordinate system would be used. However, this becomes problematic if those bounding volumes overlap; since the objects are moving and could overlap at some point, I'm unsure whether the idea is fundamentally flawed or whether a solution can be found.
I think you should place all your objects in the Cartesian 'empty space' coordinate system by composing each irregular object's coordinate system with its position matrix.
It adds a level, but will make everything easier.
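A minimal sketch of that composition in Python, using 2-D homogeneous matrices for brevity (all placements are made up for illustration):

    import numpy as np

    def translation(tx, ty):
        return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

    def rotation(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

    # The irregular object's placement in the Cartesian world...
    object_to_world = translation(50, 20) @ rotation(np.pi / 4)
    # ...composed with a figure's placement on the object's surface.
    figure_to_object = translation(2, 3)
    figure_to_world = object_to_world @ figure_to_object

    local_point = np.array([0.0, 0.0, 1.0])     # figure origin, homogeneous
    print((figure_to_world @ local_point)[:2])  # world-space position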
Regarding the use of bounds, I had an idea where the object would use the coordinate system of the smallest bounds it occupied, and then transform according to the hierarchy of systems from top to bottom.
Thus, let's say, stick figures on a cylinder adjacent to a large object would follow the cylinder rather than flitting between the two objects and their coordinate systems.
Regardless of the local coordinate system around each irregular object, all points will still map to the global world coordinates at one point or another, because eventually, when you want to render your objects, they'll have to get mapped into world space and then camera space. You can use the same object-space-to-world-space transform matrices to do the mapping.
You can use the Lamé coefficients to transform between the dimensions of different coordinate systems.
You can transform any kind of coordinate system, including your own. The only condition is to have orthogonal dimensions (every dimension has to be independent of the other dimensions).
Here is some document I found: link text.
Hope it helps.