Required Data for IFC

I'm working on a project where I need to generate an IFC file, and I am given not much more information than geometry (I have access to the density and thermal conductivity of materials, and basic labels for objects).
So far I could only find what IFC can store, never what IFC needs to store.
What do I need to include in an IFC file so it is properly functional?
What does an IFC file need besides basic geometry?

Disclaimer: I have not read (or bought) the standard. My knowledge primarily stems from working with IFC files, trying different things, and reading the buildingSMART documentation. So I can't give you a hard guarantee, but I am rather confident my information is correct/usable.
As an alternative to buying the official standards file, you could look into the official documentation by buildingSMART. (Also have a look here for more general information and the availability of other/more modern releases.)
Now, assuming you are familiar with the basic STEP file layout (header and data segment), let's jump to what an IFC file absolutely has to include to be considered correct (as far as I understand the documentation; there might be parsers/loaders which can load incorrect/incomplete files, but we aren't aiming for them). Note that I am building this example for IFC 4.0. It should be correct for the current IFC 4.1 standard, but probably not for the older IFC2X3 standard (there have been some relaxations in IFC4 compared to IFC2X3). I am also skipping names and descriptions - you can set those fields while testing to recognize your structures in a viewer (it's easier than comparing GUIDs).
IfcProject
The root of all elements is the IfcProject. It also contains the most basic properties and definitions for all other elements. According to the documentation, the only required attribute of this entity is the unique id, but for a working example you usually also need a minimal unit assignment and a representation context.
#20= IFCPROJECT('344O7vICcwH8qAEnwJDjSU',$,$,$,$,$,$,(#19),#13);
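By the way, the first attribute ('344O7vICcwH8qAEnwJDjSU') is the GlobalId: a 128-bit GUID compressed into 22 characters using a base-64 alphabet defined by buildingSMART. As a minimal sketch of how to generate one (C++; the function names are my own, and the GUID bytes are simply treated as one big-endian 128-bit number):

#include <array>
#include <cstdint>
#include <random>
#include <string>

// Compress a 128-bit GUID into the 22-character IFC GlobalId.
// 21 characters carry 6 bits each; the first character carries the
// remaining 2 bits (which is why it is always in the range '0'..'3').
std::string toIfcGlobalId(const std::array<std::uint8_t, 16>& guid) {
    static const char alphabet[] =
        "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "abcdefghijklmnopqrstuvwxyz_$";
    std::string out(22, '0');
    unsigned acc = 0;                     // bit accumulator
    int bits = 0;
    int pos = 21;                         // fill from the right
    for (int i = 15; i >= 0; --i) {
        acc |= static_cast<unsigned>(guid[i]) << bits;
        bits += 8;
        while (bits >= 6 && pos > 0) {
            out[pos--] = alphabet[acc & 63];
            acc >>= 6;
            bits -= 6;
        }
    }
    out[0] = alphabet[acc & 3];           // the 2 leftover bits
    return out;
}

std::array<std::uint8_t, 16> randomGuid() {
    std::random_device rd;
    std::array<std::uint8_t, 16> guid;
    for (auto& b : guid) b = static_cast<std::uint8_t>(rd() & 0xFFu);
    guid[6] = (guid[6] & 0x0F) | 0x40;    // RFC 4122 version 4
    guid[8] = (guid[8] & 0x3F) | 0x80;    // RFC 4122 variant
    return guid;
}

toIfcGlobalId(randomGuid()) then yields ids shaped like the ones used throughout this example.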
In the unit assignment you define the required units, ranging from geometric units to monetary, thermal, etc. The minimum is length, area and angle, which are needed to meaningfully define geometric items. So for our example we include only those: metre as length unit, square metre as area unit and radian as angle unit. If you need foot, inch or degree, you can define those on top of the SI units (as conversion-based units).
#10= IFCSIUNIT(*,.LENGTHUNIT.,$,.METRE.);
#11= IFCSIUNIT(*,.AREAUNIT.,$,.SQUARE_METRE.);
#12= IFCSIUNIT(*,.PLANEANGLEUNIT.,$,.RADIAN.);
#13= IFCUNITASSIGNMENT((#10,#11,#12));
The representation context defines, for a given class of representations (=geometric/parametric descriptions), the basic coordinate system. The simple case is a 3-dimensional right-handed system at point zero. IFC works with the z-axis pointing up - this might be important if you are working with models/files originating from 3D/OpenGL applications, which usually assume the y-axis points upwards. You also need a precision value - I am using 1.0e-5 here, but you might want to test whether you can go with less or need more. The precision is usually applied when comparing points/edges while combining geometry (during constructive solid geometry steps). If you get errors, try a different precision value.
The second attribute of the representation context is the context type. This is a string identifying which representations this context should be applied to. The documentation states that values are based on "implementers agreement" - which AFAIK means "look at what the others are using". From my experience, "Model" works for 3D geometry. Using "Plan" for 2D plans and sketches should work, too.
#14= IFCDIRECTION((1.,0.,0.));
#15= IFCDIRECTION((0.,0.,1.));
#16= IFCCARTESIANPOINT((0.,0.,0.));
#17= IFCAXIS2PLACEMENT3D(#16,#15,#14);
#18= IFCDIRECTION((0.,1.));
#19= IFCGEOMETRICREPRESENTATIONCONTEXT($,'Model',3,1.0E-5,#17,#18);
Spatial container for elements
Elements can't be added to the IfcProject directly - they need to be placed into a spatial element which is contained in the project. There are three possible choices: IfcSite, IfcBuilding and IfcSpatialZone (see the section Spatial Decomposition on the IfcProject page). The IfcSpatialZone is defined as a non-hierarchical spatial element - its usage is slightly different from the other two (elements are added using a different relation).
A single site is sufficient as spatial container. Adding all elements to it might be semantically vague (mostly fences are added directly to a site; other elements usually sit inside a building), but it is not incorrect (IFC does not care if you have electrical appliances in your garden). As nearly all attributes of IfcSite are optional, we can skip those. But beware: if you give your site a representation (=some geometric shape), you will also need to include a placement for it. The site is aggregated into the project to relate the two.
#30= IFCSITE('20FpTZCqJy2vhVJYtjuIce',$,$,$,$,$,$,$,.ELEMENT.,$,$,$,$,$);
#31= IFCRELAGGREGATES('0Du7$nzQXCktKlPUTLFSAT',$,$,$,#20,(#30));
Elements
Actually, that is all that is needed as the absolute minimum structure. Now you can add your elements - entities of some type derived from IfcProduct. As all those element types have some sort of semantic meaning attached to them, you either need to select the ones closely matching the objects you have, or use IfcBuildingElementProxy, which is the most "meaningless" object type (or better: the one without specialized semantic meaning). The following code places one proxy without geometry. Out of convenience, the placement references the same coordinate system definition that was used to create the representation context, as it doesn't transform or move anything. Your geometry would be added through a product definition shape, which has shape aspects and finally some geometry items. The buildingSMART documentation has a few examples with assigned geometry.
#40= IFCLOCALPLACEMENT($,#17);
#41= IFCBUILDINGELEMENTPROXY('3W29Drc$H6CxK3FGIxjJNl',$,$,$,$,#40,$,$,.NOTDEFINED.);
#42= IFCRELCONTAINEDINSPATIALSTRUCTURE('04ldtj6cp2dME6CiP80Bzh',$,$,$,(#41),#30);
Conclusion
So there isn't much needed as bare minimum to add elements:
a project
basic unit definitions
one spatial container
The complete example file would be:
ISO-10303-21;
HEADER;
FILE_DESCRIPTION(('IFC4'),'2;1');
FILE_NAME('example.ifc','2018-08-08',(''),(''),'','','');
FILE_SCHEMA(('IFC4'));
ENDSEC;
DATA;
#10= IFCSIUNIT(*,.LENGTHUNIT.,$,.METRE.);
#11= IFCSIUNIT(*,.AREAUNIT.,$,.SQUARE_METRE.);
#12= IFCSIUNIT(*,.PLANEANGLEUNIT.,$,.RADIAN.);
#13= IFCUNITASSIGNMENT((#10,#11,#12));
#14= IFCDIRECTION((1.,0.,0.));
#15= IFCDIRECTION((0.,0.,1.));
#16= IFCCARTESIANPOINT((0.,0.,0.));
#17= IFCAXIS2PLACEMENT3D(#16,#15,#14);
#18= IFCDIRECTION((0.,1.));
#19= IFCGEOMETRICREPRESENTATIONCONTEXT($,'Model',3,1.0E-5,#17,#18);
#20= IFCPROJECT('344O7vICcwH8qAEnwJDjSU',$,$,$,$,$,$,(#19),#13);
#30= IFCSITE('20FpTZCqJy2vhVJYtjuIce',$,$,$,$,$,$,$,.ELEMENT.,$,$,$,$,$);
#31= IFCRELAGGREGATES('0Du7$nzQXCktKlPUTLFSAT',$,$,$,#20,(#30));
#40= IFCLOCALPLACEMENT($,#17);
#41= IFCBUILDINGELEMENTPROXY('3W29Drc$H6CxK3FGIxjJNl',$,$,$,$,#40,$,$,.NOTDEFINED.);
#42= IFCRELCONTAINEDINSPATIALSTRUCTURE('04ldtj6cp2dME6CiP80Bzh',$,$,$,(#41),#30);
ENDSEC;
END-ISO-10303-21;
Note that loading this file doesn't show anything, because it doesn't contain any geometry. Also please note that I have not yet verified that it is error-free - I currently don't have my IFC tools at hand. (If you would like to verify your files, have a look at stepcode, which can check whether your files are syntactically correct - it won't check semantic meaning or enforcement of the concepts mentioned in the buildingSMART documentation.)
Also good to know: the references/ids (like #20) can be arranged freely - you can reference elements that are added later in the file, and the ids only need to be unique within this one file. This means the lines of the example file can be shuffled and it is still a valid file - parsers usually use a two-step approach to create an in-memory representation (1. parse into IFC classes, 2. resolve references).
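To illustrate that two-step approach, here is a rough sketch (C++; my own simplification, not a conforming STEP parser - it ignores multi-line instances and strings containing special characters):

#include <istream>
#include <map>
#include <regex>
#include <string>
#include <vector>

// Step 1: collect every "#id= TYPE(...);" line into a map keyed by id.
// Step 2: once all ids are known, resolve the "#n" references inside the
// argument text - this is why forward references are no problem.
struct Entity {
    std::string type;             // e.g. "IFCPROJECT"
    std::string args;             // raw argument text, e.g. "(#16,#15,#14)"
    std::vector<int> refs;        // ids of referenced entities
};

std::map<int, Entity> parseStep(std::istream& in) {
    std::map<int, Entity> entities;
    const std::regex instance(R"(#(\d+)\s*=\s*(\w+)\s*(\(.*\));)");
    const std::regex ref(R"(#(\d+))");
    std::string line;
    while (std::getline(in, line)) {                       // pass 1
        std::smatch m;
        if (std::regex_search(line, m, instance))
            entities[std::stoi(m[1].str())] = Entity{m[2].str(), m[3].str(), {}};
    }
    for (auto& [id, e] : entities) {                       // pass 2
        auto it = std::sregex_iterator(e.args.begin(), e.args.end(), ref);
        for (; it != std::sregex_iterator(); ++it)
            e.refs.push_back(std::stoi((*it)[1].str()));
    }
    return entities;
}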

Related

Using two or more index buffers when creating custom geometry with Qt 3D? [duplicate]

I have some vertex data. Positions, normals, texture coordinates. I probably loaded it from a .obj file or some other format. Maybe I'm drawing a cube. But each piece of vertex data has its own index. Can I render this mesh data using OpenGL/Direct3D?
In the most general sense, no. OpenGL and Direct3D only allow one index per vertex; the index fetches from each stream of vertex data. Therefore, every unique combination of components must have its own separate index.
So if you have a cube, where each face has its own normal, you will need to replicate the position and normal data a lot. You will need 24 positions and 24 normals, even though the cube will only have 8 unique positions and 6 unique normals.
Your best bet is to simply accept that your data will be larger. A great many model formats will use multiple indices; you will need to fixup this vertex data before you can render with it. Many mesh loading tools, such as Open Asset Importer, will perform this fixup for you.
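For illustration, here is a rough sketch of that fixup (C++; types and names are my own): every unique (position index, normal index) combination becomes one output vertex, yielding the single index buffer the APIs require.

#include <cstdint>
#include <map>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
struct Corner { std::uint32_t posIdx, nrmIdx; };   // one corner of a face
struct Vertex { Vec3 position, normal; };

// De-index multi-indexed data (as in .obj files): corners referencing the
// same (position, normal) pair share an output vertex; new pairs append one.
void deindex(const std::vector<Vec3>& positions,
             const std::vector<Vec3>& normals,
             const std::vector<Corner>& corners,            // 3 per triangle
             std::vector<Vertex>& outVertices,
             std::vector<std::uint32_t>& outIndices) {
    std::map<std::pair<std::uint32_t, std::uint32_t>, std::uint32_t> seen;
    for (const Corner& c : corners) {
        auto key = std::make_pair(c.posIdx, c.nrmIdx);
        auto it = seen.find(key);
        if (it == seen.end()) {                             // new combination
            it = seen.emplace(key, static_cast<std::uint32_t>(outVertices.size())).first;
            outVertices.push_back({positions[c.posIdx], normals[c.nrmIdx]});
        }
        outIndices.push_back(it->second);
    }
}

For the cube above this produces exactly the 24 vertices mentioned; for smooth meshes most pairs repeat, so the expansion stays small.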
It should also be noted that most meshes are not cubes. Most meshes are smooth across the vast majority of vertices, only occasionally having different normals/texture coordinates/etc. So while this often comes up for simple geometric shapes, real models rarely have substantial amounts of vertex duplication.
GL 3.x and D3D10
For D3D10/OpenGL 3.x-class hardware, it is possible to avoid performing fixup and use multiple indexed attributes directly. However, be advised that this will likely decrease rendering performance.
The following discussion will use the OpenGL terminology, but Direct3D v10 and above has equivalent functionality.
The idea is to manually access the different vertex attributes from the vertex shader. Instead of sending the vertex attributes directly, the attributes that are passed are actually the indices for that particular vertex. The vertex shader then uses the indices to access the actual attribute through one or more buffer textures.
Attributes can be stored in multiple buffer textures or all within one. If the latter is used, then the shader will need an offset to add to each index in order to find the corresponding attribute's start index in the buffer.
Regular vertex attributes can be compressed in many ways. Buffer textures have fewer means of compression, allowing only a relatively limited number of vertex formats (via the image formats they support).
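As a sketch of what this looks like (GLSL embedded in a C++ string, ready for glShaderSource; the identifiers are my own): the per-vertex "attributes" are just indices, and the actual data is fetched from buffer textures. The integer attributes must be set up with glVertexAttribIPointer so they arrive unconverted.

const char* kMultiIndexVertexShader = R"(
#version 330 core
layout(location = 0) in int positionIndex;   // indices instead of data
layout(location = 1) in int normalIndex;
uniform samplerBuffer positions;             // actual data, packed as RGB32F
uniform samplerBuffer normals;
uniform mat4 mvp;
out vec3 vNormal;
void main() {
    vec3 pos = texelFetch(positions, positionIndex).xyz;
    vNormal  = texelFetch(normals, normalIndex).xyz;
    gl_Position = mvp * vec4(pos, 1.0);
}
)";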
Please note again that any of these techniques may decrease overall vertex processing performance. Therefore, it should only be used in the most memory-limited of circumstances, after all other options for compression or optimization have been exhausted.
OpenGL ES 3.2 provides buffer textures as well. Higher desktop OpenGL versions allow you to read buffer objects more directly via SSBOs rather than buffer textures, which might have better performance characteristics.
I found a way to reduce this sort of repetition that runs a bit contrary to some of the statements made in the other answer (though it doesn't specifically fit the question asked here). It does, however, address my question, which was considered a duplicate of this one.
I just learned about interpolation qualifiers, specifically flat. My understanding is that putting the flat qualifier on a vertex shader output causes only the provoking vertex to pass its value to the fragment shader.
This means for the situation described in this quote:
So if you have a cube, where each face has its own normal, you will need to replicate the position and normal data a lot. You will need 24 positions and 24 normals, even though the cube will only have 8 unique positions and 6 unique normals.
You can have 8 vertices, 6 of which carry the unique normals (the normal values of the remaining 2 are disregarded), as long as you carefully order your primitives' indices such that the "provoking vertex" of each face carries the normal you want applied to the entire face.
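As a sketch (GLSL in C++ strings; the identifiers are mine):

const char* kVertexShader = R"(
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
uniform mat4 mvp;
flat out vec3 faceNormal;   // flat: no interpolation - the fragment shader
void main() {               // sees only the provoking vertex's value
    faceNormal = normal;
    gl_Position = mvp * vec4(position, 1.0);
}
)";

const char* kFragmentShader = R"(
#version 330 core
flat in vec3 faceNormal;    // the qualifier must match the vertex output
out vec4 color;
void main() {
    color = vec4(normalize(faceNormal) * 0.5 + 0.5, 1.0);
}
)";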
EDIT: My understanding of how it works:

Why would VkImageView format differ from the underlying VkImage format?

VkImageCreateInfo has the following member:
VkFormat format;
And VkImageViewCreateInfo has the same member.
What I don't understand is why you would ever have a different format in the VkImageView from that of the VkImage used to create it.
I understand some formats are compatible with one another, but I don't know why you would use one of the alternate formats.
The canonical use case and primary original motivation (in D3D10, where this idea originated) is using a single image as either R8G8B8A8_UNORM or R8G8B8A8_SRGB -- either because it holds different content at different times, or because sometimes you want to operate in sRGB-space without linearization.
More generally, it's useful sometimes to have different "types" of content in an image object at different times -- this gives engines a limited form of memory aliasing, and was introduced to graphics APIs several years before full-featured memory aliasing was a thing.
Like a lot of Vulkan, the API is designed to expose what the hardware can do. Memory layout (image) and the interpretation of that memory as data (image view) are different concepts in the hardware, and so the API exposes that. The API exposes it simply because that's how the hardware works and Vulkan is designed to be a thin abstraction; just because the API can do it doesn't mean you need to use it ;)
As you say, in most cases it's not really that useful ...
I think there are some cases where it could be more efficient. For example, getting a compute shader to generate integer data for some types of image processing can be more energy-efficient than either float computation or manually normalizing integer data to create unorm data. Using aliasing, the compute shader can directly write e.g. uint8 integers and a fragment shader can read the same data as unorm8 data.
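As a sketch of that UNORM/sRGB aliasing (C++; my own function names, assuming a valid VkDevice, with error handling and memory binding omitted) - the key is VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT, without which views may not differ in format from the image:

#include <vulkan/vulkan.h>

VkImage createAliasableImage(VkDevice device, uint32_t width, uint32_t height) {
    VkImageCreateInfo info{};
    info.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
    info.flags = VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT;  // allow reinterpreting views
    info.imageType = VK_IMAGE_TYPE_2D;
    info.format = VK_FORMAT_R8G8B8A8_UNORM;           // the image's "own" format
    info.extent = {width, height, 1};
    info.mipLevels = 1;
    info.arrayLayers = 1;
    info.samples = VK_SAMPLE_COUNT_1_BIT;
    info.tiling = VK_IMAGE_TILING_OPTIMAL;
    info.usage = VK_IMAGE_USAGE_STORAGE_BIT            // compute writes (via a UNORM/UINT view)
               | VK_IMAGE_USAGE_SAMPLED_BIT;           // fragment reads (via the sRGB view)
    info.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
    info.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;

    VkImage image = VK_NULL_HANDLE;
    vkCreateImage(device, &info, nullptr, &image);
    return image;
}

VkImageView createSrgbView(VkDevice device, VkImage image) {
    VkImageViewCreateInfo view{};
    view.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
    view.image = image;
    view.viewType = VK_IMAGE_VIEW_TYPE_2D;
    view.format = VK_FORMAT_R8G8B8A8_SRGB;  // differs from the image's format
    view.subresourceRange = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1};

    VkImageView imageView = VK_NULL_HANDLE;
    vkCreateImageView(device, &view, nullptr, &imageView);
    return imageView;
}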

Can the __LINKEDIT segment of a Mach-O executable be moved

In a Mach-O executable, I am trying to increase the size of the __LLVM segment that precedes the __LINKEDIT segment (with a home-grown tool). I am considering two strategies: (a) move the __LLVM segment to after the __LINKEDIT segment, producing a file that is not what ld would create (now with a gap and section addresses out of order), and (b) move the __LINKEDIT segment to allow resizing of the __LLVM segment that precedes it. I need the result to be accepted for downstream processing, e.g. generating an .ipa file or sending to the App Store.
This question is about my assumptions and the viability of these approaches. Specifically, what are the potential pitfalls of each that might lead them to fail?
I implemented the first approach (a). The result is understood by segedit's -extract option, but its -replace option complains that the segments are out of order. I append a new segment to the file and update the address and length values in the corresponding load command to refer to this new segment data (both in the file and in the destination memory). This might be fine, as long as the other downstream processing will accept the result (still to check; e.g. any local signature is likely invalidated).
The second approach (b) would seem cleaner, as long as there are no references into the __LINKEDIT segment, which I guess contains linking information (symbol tables etc., rather than code). I have not tried this yet, though it seems to be a foregone conclusion that segedit will be happy with the result, which may suggest other processing might also be happier. Are there likely to be any references that are invalidated due to simply moving this segment? I am guessing that I will have to update further load commands (they seem to reference into the __LINKEDIT segment), which I have not examined, but this should be fairly straightforward.
EDIT: Replaced my confused use of "section" with "segment" (mentioned in answer).
ADDED: Context is that I have no control over generating the original executable. I need to post-process it, essentially performing a 'segedit -replace' process, wherein a section in the segment is to be replaced with a section that is larger than the space previously allocated for the segment.
RUN-ON clarifying question: It seems from the answer that moving the __LINKEDIT segment will break it. Can this be fixed by adjusting load commands only (e.g. LC_DYLD_INFO_ONLY, LC_LOAD_DYLINKER, LC_LOAD_DYLIB), not data in any segments? I am not yet familiar with these load commands, and would like to know whether to pursue this.
So basically the segments and sections describe how the physical file maps onto virtual memory.
As I mentioned in my previous iteration of the answer, there are limitations on the segment order:
The __TEXT segment must start at physical file offset 0 of the executable.
The __LINKEDIT segment must not start at physical file offset 0.
__LINKEDIT's file offset + file size should be equal to the physical executable size (this implies __LINKEDIT being the last segment); otherwise code signing won't work.
LC_DYLD_INFO_ONLY contains file offsets to the dyld loading bind opcodes for:
rebase
bind at load
weak bind
lazy bind
export
For each kind there is a file offset and size entry in LC_DYLD_INFO_ONLY describing data in the file that lies within __LINKEDIT (in a "regular" ld-linked executable). LC_DYLD_INFO_ONLY does not use any segment & section information from __LINKEDIT directly; the file offsets and sizes are enough.
EDIT: as also mentioned in kirelagin's answer here:
"Apparently, the new version of dyld from 10.12 Sierra performs a check that previous versions did not perform: it makes sure that the LC_SYMTAB symbols table is entirely within the __LINKEDIT segment."
I assume that since you want to inflate the size of the preceding __LLVM segment, you also want some extra data in the file itself. Typically the data described by __LINKEDIT (i.e. not the segment & sections themselves, but the actual data) won't use 100% of its space, so it could be modified to start "later" and occupy less space.
A tool called jtool by Jonathan Levin could probably do it for you.
I know this is an old question, but I solved this problem while solving another problem.
Define the slide amount; this must be page-aligned, so I chose 0x4000.
Add the slide amount to the relevant load commands; this includes but is not limited to:
__LINKEDIT segment (duh)
dyld_info_command
symtab_command
dysymtab_command
linkedit_data_commands
Physically move the __LINKEDIT data in the file.
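As a sketch of step 2 (C++ with the structs from <mach-o/loader.h>; 64-bit, non-fat images only, no error handling - and note that any existing code signature is invalidated by this):

#include <mach-o/loader.h>
#include <cstdint>
#include <cstring>

// Walk the load commands of an in-memory Mach-O image and add `slide`
// to every file offset that points into __LINKEDIT. (This is a pure
// file-offset slide; if the VM layout moves too, vmaddr needs the same.)
void slideLinkedit(uint8_t* base, uint64_t slide) {
    auto* header = reinterpret_cast<mach_header_64*>(base);
    uint8_t* cmd = base + sizeof(mach_header_64);
    for (uint32_t i = 0; i < header->ncmds; ++i) {
        auto* lc = reinterpret_cast<load_command*>(cmd);
        switch (lc->cmd) {
        case LC_SEGMENT_64: {
            auto* seg = reinterpret_cast<segment_command_64*>(lc);
            if (std::strcmp(seg->segname, "__LINKEDIT") == 0)
                seg->fileoff += slide;                  // the segment itself
            break;
        }
        case LC_DYLD_INFO:
        case LC_DYLD_INFO_ONLY: {
            auto* info = reinterpret_cast<dyld_info_command*>(lc);
            if (info->rebase_off)    info->rebase_off    += slide;
            if (info->bind_off)      info->bind_off      += slide;
            if (info->weak_bind_off) info->weak_bind_off += slide;
            if (info->lazy_bind_off) info->lazy_bind_off += slide;
            if (info->export_off)    info->export_off    += slide;
            break;
        }
        case LC_SYMTAB: {
            auto* sym = reinterpret_cast<symtab_command*>(lc);
            if (sym->symoff) sym->symoff += slide;
            if (sym->stroff) sym->stroff += slide;
            break;
        }
        case LC_DYSYMTAB: {
            auto* dsym = reinterpret_cast<dysymtab_command*>(lc);
            if (dsym->indirectsymoff) dsym->indirectsymoff += slide;
            // ...likewise tocoff, modtaboff, extreloff, locreloff if present
            break;
        }
        case LC_CODE_SIGNATURE:
        case LC_FUNCTION_STARTS:
        case LC_DATA_IN_CODE: {
            auto* data = reinterpret_cast<linkedit_data_command*>(lc);
            if (data->dataoff) data->dataoff += slide;
            break;
        }
        }
        cmd += lc->cmdsize;
    }
}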

DICOM reconstruction tag

I'm looking for a DICOM image reconstruction tag. Is there any tag to recognize that a DICOM image is the result of a reconstruction?
A first attempt: search for "MPR" (multiplanar reconstruction) - but is that just for Siemens?
(0008,0008);Image Type;DERIVED\PRIMARY\AXIAL\CT_SOM5 MPR
(0008,103e);Series Description;Abdomen nativ 3.0 MPR kor
Image Type (0008,0008) is the attribute you are searching for. Unfortunately, you will run into three issues:
Not all vendors stick to the defined terms for this attribute; some treat it as free text. So does Siemens - "CT_SOM5 MPR" is not a defined term for this attribute.
It depends on the type of object (SOP Class UID) which defined terms apply and from which component of Image Type they can be obtained:
DERIVED\SECONDARY\MPR (MPR is value 3 for MR objects)
DERIVED\SECONDARY\ANGIO\RESAMPLED (RESAMPLED is value 4 for Enhanced IODs)
There are several reconstruction techniques; MPR is just one of them.
There is an attribute Volume Based Calculation Technique (0008,9207) from which this could be safely determined, but so far I have never seen it included in practical datasets. Plus, it is not allowed for all IODs.
Long story short: using Image Type and sticking to the rules and defined terms applying to this attribute would be DICOM conformant and correct, but will fail in some practical cases. I do not see any other generic approach. To cover more practical cases, you will need to implement vendor-specific heuristics.
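For illustration, a sketch of such a check on the raw, backslash-separated Image Type value (C++, no DICOM toolkit; the function name is mine). It covers only the defined terms discussed above - vendor free-text values like Siemens' "CT_SOM5 MPR" would still need their own heuristics:

#include <sstream>
#include <string>
#include <vector>

// Split the multi-valued Image Type (0008,0008) on '\' and test the
// components against the defined terms for reconstructed images.
bool looksLikeReconstruction(const std::string& imageType) {
    std::vector<std::string> values;
    std::stringstream ss(imageType);
    std::string v;
    while (std::getline(ss, v, '\\')) values.push_back(v);

    if (values.empty() || values[0] != "DERIVED") return false;
    if (values.size() >= 3 && values[2] == "MPR") return true;        // MR objects
    if (values.size() >= 4 && values[3] == "RESAMPLED") return true;  // enhanced IODs
    return false;
}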

Problem with huge objects in a quad tree

Let's say I have circular objects. Each object has a diameter of 64 pixels.
The cells of my quad tree are let's say 96x96 pixels.
Everything is fine and works well when I check for collisions in the cell a circle resides in plus all its neighbor cells.
BUT what if I have one circle with a diameter of 512 pixels? It would cover many cells, and thus this would be a problem when checking only the neighbor cells. But I can't resize my quadtree grid every time a much larger object is inserted into the tree...
Instead of putting objects into a single cell, put them in all cells they collide with. That way you can just test each cell individually. Use pointers to the object so you don't create copies. Also, you only need to do this with leaf nodes, so there is no need to combine data contained in higher nodes with lower ones.
This is an interesting problem. Maybe you can extend the node or the cell with tree height information? If you have an object bigger than the smallest cell, nest it according to the tree height. That's what map applications like Google or Bing Maps do.
Here is a link to a similar solution: http://www.gamedev.net/topic/588426-2d-quadtree-collision---variety-in-size. I was confusing the screen with the quadtree. You can check collision with a simple recursion.
Oversearching
During the search, and starting with the largest objects first...
Test Object.Position.X against QuadTreeNode.Centre.X, and also
test Object.Position.Y against QuadTreeNode.Centre.Y;
... Then, by taking the Absolute value of the difference, treat the object as lying within a specific child node whenever the absolute value is NOT more than the radius of the object...
... that is, when some portion of the object intrudes into that quad : )
The same can be done with AABB (Axis Aligned Bounding Boxes)
The only real caveat here is that VERY large objects that cover most of the screen, will force a search of the entire tree. In these cases, a different approach may be called for.
Of course, this only takes care of the object that everything else is being tested against. To ensure that all the other large objects in the world are properly identified, you will need to alter your quadtree slightly...
Use Multiple Appearances
In this variation on the QuadTree we ONLY place objects in the leaf nodes of the QuadTree, as pointers. Larger objects may appear in multiple leaf nodes.
Since some objects have multiple appearances in the tree, we need a way to avoid them once they've already been tested against.
So...
A simple Boolean WasHit flag can avoid testing the same object multiple times in a hit-test pass... and a 'cleanup' can be run on all 'hit' objects so that they are ready for the next test.
Whilst this makes sense, it is wasteful when performing all-vs-all hit-tests.
So... getting a little cleverer, we can avoid having any cleanup at all by storing a pointer 'ptrLastObjectTestedAgainst' inside each object in the scene. This avoids re-testing the same object during one run (the pointer is set after the first encounter), and it does not require resetting when testing a new object against the scene, because the new object has a different pointer value than the last one - unlike the simple Boolean flag.
I've used the latter approach in scenes with vastly different object sizes and it worked well.
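A sketch of that pointer trick (C++; the type and names are invented for illustration):

struct SceneObject {
    const void* lastTestedAgainst = nullptr;  // never needs a per-pass reset
    // ... bounds, gameplay data, etc.
};

// Returns true the first time `candidate` is encountered during the query
// for `queryObject`; later encounters via other leaf nodes return false.
bool shouldTest(SceneObject& candidate, const void* queryObject) {
    if (candidate.lastTestedAgainst == queryObject)
        return false;
    candidate.lastTestedAgainst = queryObject;
    return true;
}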
Elastic QuadTrees
I've also used an 'elastic' QuadTree. Basically, you set a limit on how many items can IDEALLY fit in each QuadTreeNode - But, unlike a standard QuadTree, you allow the code to override this limit in specific cases.
The overriding rule here is that an object may NOT be placed into a Node that cannot hold it ENTIRELY... with the top node catching any objects that are larger than the screen.
Thus, small objects will continue to 'fall through' to form a regular QuadTree but large objects will not always fall all the way through to the leaf node - but will instead expand the node that last fitted them.
Think of the non-leaf nodes as 'sieving' the objects as they fall down the tree
This turns out to be a very efficient choice for many scenarios : )
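Here is a sketch of that insertion rule (C++; types and names are my own - a real implementation would also cap the depth and redistribute existing items when splitting):

#include <array>
#include <cstddef>
#include <memory>
#include <vector>

struct AABB {
    float minX, minY, maxX, maxY;
    bool contains(const AABB& o) const {
        return o.minX >= minX && o.maxX <= maxX &&
               o.minY >= minY && o.maxY <= maxY;
    }
};

struct Object { AABB bounds; };

struct QuadTreeNode {
    AABB bounds;
    std::vector<Object*> items;                    // pointers, not copies
    std::array<std::unique_ptr<QuadTreeNode>, 4> child;
    static constexpr std::size_t kIdealLimit = 8;  // may be exceeded!

    void insert(Object* obj) {
        if (!child[0] && items.size() >= kIdealLimit)
            split();
        if (child[0]) {
            for (auto& c : child) {
                if (c->bounds.contains(obj->bounds)) {
                    c->insert(obj);                // small objects fall through
                    return;
                }
            }
        }
        items.push_back(obj);                      // no child holds it ENTIRELY:
    }                                              // the node "expands"

    void split() {
        float mx = (bounds.minX + bounds.maxX) * 0.5f;
        float my = (bounds.minY + bounds.maxY) * 0.5f;
        child[0].reset(new QuadTreeNode{{bounds.minX, bounds.minY, mx, my}});
        child[1].reset(new QuadTreeNode{{mx, bounds.minY, bounds.maxX, my}});
        child[2].reset(new QuadTreeNode{{bounds.minX, my, mx, bounds.maxY}});
        child[3].reset(new QuadTreeNode{{mx, my, bounds.maxX, bounds.maxY}});
    }
};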
Conclusion
Remember that these standard algorithms are useful general tools, but they are not a substitute for thinking about your specific problem. Do not fall into the trap of using a specific algorithm or library 'just because it is well known' ... your application is unique, and it may benefit from a slightly different approach.
Therefore, don't just learn to apply algorithms ... learn from those algorithms, and apply the principles themselves in novel and fitting ways. These are NOT the only tools, nor are they necessarily the best fit for your application.
Hope some of those ideas helped.
