DICOM reconstruction tag

I'm looking for a DICOM image reconstruction tag. Is there a tag that indicates a DICOM image is the result of a reconstruction?
My first attempt was to search for "MPR" (multiplanar reconstruction), but that seems to work only for Siemens:
(0008,0008);Image Type;DERIVED\PRIMARY\AXIAL\CT_SOM5 MPR
(0008,103e);Series Description;Abdomen nativ 3.0 MPR kor

Image Type (0008, 0008) is the field you are searching for. Unfortunately, you will run into three issues:
Not all vendors stick to the defined terms for this attribute; some treat it as free text. So does Siemens: "CT_SOM5 MPR" is not a defined term for this attribute.
It depends on the type of object (SOP Class UID) which defined terms apply and in which component of Image Type they appear. For example:
DERIVED\SECONDARY\MPR (MPR is value 3 for MR objects)
DERIVED\SECONDARY\ANGIO\RESAMPLED (RESAMPLED is value 4 for Enhanced IODs)
There are several reconstruction techniques; MPR is just one of them.
There is an attribute Volume Based Calculation Technique (0008,9207) from which this could be safely determined, but so far I have never seen it included in practical datasets. Plus, it is not allowed for all IODs.
Long story short: using Image Type and sticking to the rules and defined terms applying to this attribute would be DICOM conformant and correct, but would fail in some practical cases. I do not see any other generic approach. To cover more practical cases, you will need to implement vendor-specific heuristics - a sketch of such a heuristic follows.
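For illustration, here is a minimal heuristic sketch using dcmtk. The function name is my own, and the matched strings are just the examples from above, not a complete rule set:
#include "dcmtk/dcmdata/dctk.h"
#include <iostream>

// Heuristic reconstruction check based on Image Type (0008,0008).
// The matched strings are only the examples discussed above; real-world
// data will need additional vendor-specific rules.
static bool looksLikeReconstruction(DcmDataset &dataset)
{
    OFString value;
    // Reads all components of the multi-valued attribute,
    // e.g. "DERIVED\SECONDARY\MPR".
    if (dataset.findAndGetOFStringArray(DCM_ImageType, value).good())
    {
        if (value.find("MPR") != OFString_npos) return true;       // MR defined term / Siemens free text
        if (value.find("RESAMPLED") != OFString_npos) return true; // Enhanced IODs, value 4
    }
    // The clean way - Volume Based Calculation Technique (0008,9207) -
    // is rarely present in practice.
    if (dataset.findAndGetOFStringArray(DCM_VolumeBasedCalculationTechnique, value).good())
        return !value.empty() && value != "NONE";
    return false;
}

int main(int argc, char *argv[])
{
    DcmFileFormat file;
    if (argc > 1 && file.loadFile(argv[1]).good())
        std::cout << (looksLikeReconstruction(*file.getDataset())
                      ? "reconstruction marker found" : "no reconstruction marker") << std::endl;
    return 0;
}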

Why would VkImageView format differ from the underlying VkImage format?

VkImageCreateInfo has the following member:
VkFormat format;
And VkImageViewCreateInfo has the same member.
What I don't understand is why you would ever have a different format in the VkImageView from that of the VkImage used to create it.
I understand some formats are compatible with one another, but I don't know why you would use one of the alternate formats.
The canonical use case and primary original motivation (in D3D10, where this idea originated) is using a single image as either R8G8B8A8_UNORM or R8G8B8A8_SRGB -- either because it holds different content at different times, or because sometimes you want to operate in sRGB-space without linearization.
More generally, it's useful sometimes to have different "types" of content in an image object at different times -- this gives engines a limited form of memory aliasing, and was introduced to graphics APIs several years before full-featured memory aliasing was a thing.
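For a concrete (hypothetical) sketch of that UNORM/sRGB aliasing in Vulkan: the image must be created with VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT, after which views with a different but compatible format may reinterpret the same memory. A valid VkDevice named device is assumed; memory allocation/binding and error handling are omitted.
// One image whose memory can be viewed as either UNORM or sRGB.
VkImageCreateInfo imageInfo = {};
imageInfo.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
imageInfo.flags = VK_IMAGE_CREATE_MUTABLE_FORMAT_BIT; // allow views with a different compatible format
imageInfo.imageType = VK_IMAGE_TYPE_2D;
imageInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
imageInfo.extent = {1024, 1024, 1};
imageInfo.mipLevels = 1;
imageInfo.arrayLayers = 1;
imageInfo.samples = VK_SAMPLE_COUNT_1_BIT;
imageInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
imageInfo.usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
imageInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
imageInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;

VkImage image;
vkCreateImage(device, &imageInfo, NULL, &image);
// ... allocate and bind device memory here ...

VkImageViewCreateInfo viewInfo = {};
viewInfo.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
viewInfo.image = image;
viewInfo.viewType = VK_IMAGE_VIEW_TYPE_2D;
viewInfo.subresourceRange = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1};

// R8G8B8A8_UNORM and R8G8B8A8_SRGB are in the same format
// compatibility class, so both views are legal on this image.
VkImageView unormView, srgbView;
viewInfo.format = VK_FORMAT_R8G8B8A8_UNORM; // raw linear interpretation
vkCreateImageView(device, &viewInfo, NULL, &unormView);
viewInfo.format = VK_FORMAT_R8G8B8A8_SRGB;  // same memory, sRGB decode on read
vkCreateImageView(device, &viewInfo, NULL, &srgbView);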
Like a lot of Vulkan, the API is designed to expose what the hardware can do. Memory layout (image) and the interpretation of that memory as data (image view) are different concepts in the hardware, and so the API exposes that. The API exposes it simply because that's how the hardware works and Vulkan is designed to be a thin abstraction; just because the API can do it doesn't mean you need to use it ;)
As you say, in most cases it's not really that useful ...
I think there are some cases where it could be more efficient. For example, having a compute shader generate integer data for some types of image processing can be more energy efficient than either float computation or manually normalizing integer data to create unorm data. With aliasing, the compute shader can directly write e.g. uint8 integers and a fragment shader can read the same data as unorm8 data.

Required Data for IFC

I'm working on a project where I need to generate an IFC file, and I am given not much more information than geometry (I have access to the density and heat conductivity of materials, and basic labeling for objects).
So far I could only find what IFC can store, never what IFC needs to store.
What do I need to include in an IFC file so it is properly functional?
What does an IFC file need besides basic geometry?
Disclaimer: I have not read (or bought) the standard. My knowledge primarily stems from working with IFC files, trying different things, and reading the buildingSMART documentation. So I can't give you a hard guarantee, but I am rather confident my information is correct/usable.
As an alternative to buying the official standards file, you could look into the official documentation by buildingSMART (also have a look at their site for more general information and the availability of other/more modern releases).
Now, assuming you are familiar with the basic STEP file layout (header and data segment), let's jump to what an IFC file absolutely has to include to be considered correct (as far as I understand the documentation; there might be parsers/loaders which can load incorrect/incomplete files, but we aren't aiming for them). Note that I am building this example for IFC 4.0. It should be correct for the current IFC 4.1 standard, but probably not for the older IFC2X3 standard (there have been some relaxations in IFC4 compared to IFC2X3). I am also skipping names and descriptions - you can set those fields for testing to recognize your structures in a viewer (it's easier than comparing GUIDs).
IfcProject
The root of all elements is the IfcProject. It also contains the most basic properties and definitions for all other elements. Per the documentation, the only required attribute on this entity is the unique id. But for a working example you usually also need a minimal unit assignment and a representation context.
#20= IFCPROJECT('344O7vICcwH8qAEnwJDjSU',$,$,$,$,$,$,(#19),#13);
In the unit assignment you define the required units, from geometric units to monetary, thermal, etc. The minimum is length, area and angle, needed to meaningfully define geometric items. So for our example we include only those: metre as length, square metre as area and radian as angle. If you need foot, inch or degree, you can define those as derived units.
#10= IFCSIUNIT(*,.LENGTHUNIT.,$,.METRE.);
#11= IFCSIUNIT(*,.AREAUNIT.,$,.SQUARE_METRE.);
#12= IFCSIUNIT(*,.PLANEANGLEUNIT.,$,.RADIAN.);
#13= IFCUNITASSIGNMENT((#10,#11,#12));
The representation context defines, for a given class of representations (= geometric/parametric descriptions), the basic coordinate system. The simple case is a 3-dimensional right-handed system at point zero. IFC works with the z-axis pointing up - this might be important if you are working with models/files originating from 3D/OpenGL applications, which usually assume the y-axis points up. You also need a precision value - I am using 1.0E-5 here, but you might want to test whether you can go with less or need more. The precision is usually applied when comparing points/edges while combining geometry (during constructive solid geometry steps). If you get errors, try a different precision value.
The second attribute of the representation context is the context type. This is a string identifying which representations this context applies to. The documentation states that values are based on "implementers' agreement" - which AFAIK means "look at what the others are using". From my experience, using "Model" works for 3D geometry. Using "Plan" for 2D plans and sketches should work, too.
#14= IFCDIRECTION((1.,0.,0.));
#15= IFCDIRECTION((0.,0.,1.));
#16= IFCCARTESIANPOINT((0.,0.,0.));
#17= IFCAXIS2PLACEMENT3D(#16,#15,#14);
#18= IFCDIRECTION((0.,1.));
#19= IFCGEOMETRICREPRESENTATIONCONTEXT($,'Model',3,1.0E-5,#17,#18);
Spatial container for elements
Elements can't be added to the IfcProject directly - they need to be placed into a spatial element which is contained in the project. There are three possible choices: IfcSite, IfcBuilding and IfcSpatialZone (see the section Spatial Decomposition on the IfcProject page). The IfcSpatialZone is defined as a non-hierarchical spatial element - its usage is slightly different from the other two (elements are added using a different relation).
A single site is sufficient as the spatial container. Adding all elements to it might be semantically vague (mostly fences are added directly to a site; other elements usually belong inside a building), but it is not incorrect (IFC does not care if you have electrical appliances in your garden). As nearly all attributes of IfcSite are optional, we can skip those. But beware: if you give your site a representation (= some geometric shape), you will need to include a placement for it. The site is aggregated into the project to relate the two.
#30= IFCSITE('20FpTZCqJy2vhVJYtjuIce',$,$,$,$,$,$,$,.ELEMENT.,$,$,$,$,$);
#31= IFCRELAGGREGATES('0Du7$nzQXCktKlPUTLFSAT',$,$,$,#20,(#30));
Elements
That is actually all that is needed as the absolute minimum structure. Now you can add your elements - entities of some type derived from IfcProduct. As all those elements have some sort of meaning attached to them, you either need to select the types closely matching the objects you have, or you may use IfcBuildingElementProxy, which is the most "meaningless" object type (or better: the one with no specialized semantic meaning). The following code places one proxy without geometry. Out of convenience, the placement references the same coordinate system definition (#17) that was used for the representation context, as it doesn't transform or move anything. Your geometry would be added through a product definition shape, which has shape aspects and finally some geometry items. The buildingSMART documentation has a few examples with assigned geometry.
#40= IFCLOCALPLACEMENT($,#17);
#41= IFCBUILDINGELEMENTPROXY('3W29Drc$H6CxK3FGIxjJNl',$,$,$,$,#40,$,$,.NOTDEFINED.);
#42= IFCRELCONTAINEDINSPATIALSTRUCTURE('04ldtj6cp2dME6CiP80Bzh',$,$,$,(#41),#30);
Conclusion
So there isn't much needed as bare minimum to add elements:
a project
basic unit definitions
one spatial container
The complete example file would be:
ISO-10303-21;
HEADER;
FILE_DESCRIPTION(('IFC4'),'2;1');
FILE_NAME('example.ifc','2018-08-08',(''),(''),'','','');
FILE_SCHEMA(('IFC4'));
ENDSEC;
DATA;
#10= IFCSIUNIT(*,.LENGTHUNIT.,$,.METRE.);
#11= IFCSIUNIT(*,.AREAUNIT.,$,.SQUARE_METRE.);
#12= IFCSIUNIT(*,.PLANEANGLEUNIT.,$,.RADIAN.);
#13= IFCUNITASSIGNMENT((#10,#11,#12));
#14= IFCDIRECTION((1.,0.,0.));
#15= IFCDIRECTION((0.,0.,1.));
#16= IFCCARTESIANPOINT((0.,0.,0.));
#17= IFCAXIS2PLACEMENT3D(#16,#15,#14);
#18= IFCDIRECTION((0.,1.));
#19= IFCGEOMETRICREPRESENTATIONCONTEXT($,'Model',3,1.0E-5,#17,#18);
#20= IFCPROJECT('344O7vICcwH8qAEnwJDjSU',$,$,$,$,$,$,(#19),#13);
#30= IFCSITE('20FpTZCqJy2vhVJYtjuIce',$,$,$,$,$,$,$,.ELEMENT.,$,$,$,$,$);
#31= IFCRELAGGREGATES('0Du7$nzQXCktKlPUTLFSAT',$,$,$,#20,(#30));
#40= IFCLOCALPLACEMENT($,#17);
#41= IFCBUILDINGELEMENTPROXY('3W29Drc$H6CxK3FGIxjJNl',$,$,$,$,#40,$,$,.NOTDEFINED.);
#42= IFCRELCONTAINEDINSPATIALSTRUCTURE('04ldtj6cp2dME6CiP80Bzh',$,$,$,(#41),#30);
ENDSEC;
END-ISO-10303-21;
Note that loading this file doesn't show anything, because it doesn't contain any geometry. Also please note that I have not yet verified whether it is error free - I currently don't have my IFC tools at hand. (If you would like to verify your files, have a look at stepcode, which can check whether your files are syntactically correct - it won't check semantic meaning or enforcement of the concepts mentioned in the buildingSMART documentation.)
Also good to know: the order of references/ids (like #20) can be freely arranged - you can reference elements that you add later in the file, and the references only need to be unique within this one file. This means the lines of the example file can be shuffled and it is still a valid file. Parsers usually use a two-step approach to create an in-memory representation (1. parse into IFC classes, 2. resolve references), as sketched below.
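To illustrate that two-pass idea, a deliberately naive C++ sketch (hypothetical - a real STEP parser must also handle string literals, comments and the header section):
#include <iostream>
#include <map>
#include <regex>
#include <sstream>
#include <string>

int main()
{
    // Two data lines in arbitrary order; #13 references ids that would be
    // defined elsewhere in the file.
    std::string data =
        "#13= IFCUNITASSIGNMENT((#10,#11,#12));\n"
        "#20= IFCPROJECT('344O7vICcwH8qAEnwJDjSU',$,$,$,$,$,$,(#19),#13);\n";

    // Pass 1: collect every "#id= TYPE(...);" instance; file order is irrelevant.
    std::map<int, std::string> instances;
    std::regex lineRe(R"(#(\d+)=\s*(.+);)");
    std::istringstream in(data);
    for (std::string line; std::getline(in, line);)
    {
        std::smatch m;
        if (std::regex_match(line, m, lineRe))
            instances[std::stoi(m[1])] = m[2];
    }

    // Pass 2: resolve #n references against the completed map.
    std::regex refRe(R"(#(\d+))");
    for (const auto &entry : instances)
        for (auto it = std::sregex_iterator(entry.second.begin(), entry.second.end(), refRe);
             it != std::sregex_iterator(); ++it)
            if (!instances.count(std::stoi((*it)[1])))
                std::cout << "#" << entry.first << " references missing #"
                          << (*it)[1] << "\n";
    return 0;
}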

Dicom VR OB and OW max value length

I would like to know the maximum size of the value length field for the VRs OB and OW. I know that currently it is 2^32 (a 32-bit value). Will it be 64 bits wide in a 64-bit application? I checked the DICOM standard (PS3.5 2014c - Data Structures and Encoding) but did not find a clue. Since we want to store huge non-image data (more than 4 GB), I would like to know whether that is possible.
Thanks in advance.
Although the maximum size of an attribute is 0xFFFFFFFE, larger data can be stored in the Pixel Data attribute (7FE0,0010) by using an encapsulated transfer syntax. This effectively lets you split up your image data into multiple "items" called fragments. Each fragment also has a maximum size of 0xFFFFFFFE, but there is no limit on the number of fragments in the Pixel Data attribute.
Refer to PS3.5, Annex A.4 "Transfer Syntaxes For Encapsulation of Encoded Pixel Data" of the DICOM Standard for a detailed explanation.
If you use a library, also take a look at its documentation; many libraries, for example dcmtk, support splitting an image into multiple fragments. Just look for keywords like fragment or encapsulation, or see the sketch below.
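For illustration, a sketch of storing one already-compressed frame as encapsulated fragments with dcmtk. This is hedged: check the dcmtk documentation for the exact API of your version; the image module attributes and the file meta header are omitted, and JPEG-LS Lossless is just an example transfer syntax.
#include "dcmtk/dcmdata/dctk.h"
#include "dcmtk/dcmdata/dcpixseq.h"
#include "dcmtk/dcmdata/dcpxitem.h"

// Sketch: wrap one compressed frame into encapsulated pixel data fragments.
OFCondition storeEncapsulated(DcmDataset &dataset,
                              Uint8 *compressedFrame, Uint32 frameLength)
{
    // Encapsulated Pixel Data is a sequence of pixel items ("fragments").
    DcmPixelSequence *sequence =
        new DcmPixelSequence(DcmTag(DCM_PixelData, EVR_OB));

    // Item 1 is the basic offset table (filled in after the frames).
    DcmPixelItem *offsetTable = new DcmPixelItem(DcmTag(DCM_Item, EVR_OB));
    sequence->insert(offsetTable);

    // Append the frame, split into fragments of at most 64 kB here;
    // each single fragment may hold up to 0xFFFFFFFE bytes.
    DcmOffsetList offsets;
    OFCondition status = sequence->storeCompressedFrame(offsets, compressedFrame,
                                                        frameLength, 64);
    if (status.bad()) return status;
    offsetTable->createOffsetTable(offsets);

    DcmPixelData *pixelData = new DcmPixelData(DCM_PixelData);
    pixelData->putOriginalRepresentation(EXS_JPEGLSLossless, NULL, sequence);
    return dataset.insert(pixelData, OFTrue);
}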
The maximum size of the tag is dictated by the DICOM standard, not by the CPU architecture on which the DICOM library is compiled or used.
At the moment, the maximum size (in bytes) of an OB or OW tag is represented by a 32-bit wide value; 0xFFFFFFFF is reserved for undefined length, and lengths must be even, so the effective maximum is 0xFFFFFFFE.

Does an entire DICOM file have just one transfer syntax?

Sorry if this is quite basic; I am new to DICOM.
I know a DICOM file has multiple parts, like: Patient, Study, Series and Instance (Image).
Now, to communicate with a device it needs a Transfer Syntax, which tells the mode of encoding, like Little Endian, Big Endian, JPEG Lossless, lossy, etc.
So, does each of the DICOM file parts (Patient, Study, Series and Instance (Image)) have its own transfer syntax? E.g. could Patient be encoded as Little Endian while the Study uses JPEG Lossless or MPEG-4 (if it is video), etc.?
Or does the entire DICOM file use just one transfer syntax?
A single transfer syntax is used throughout the entire DICOM file (except for the file meta group with ID 0002, which is always written with the Explicit VR Little Endian transfer syntax).
When sending DICOM messages over a network you can have a different transfer syntax for each message: you can define different Presentation Contexts during the association negotiation, and each Presentation Context can have a different Transfer Syntax.
After the association negotiation, you can transmit messages with different transfer syntaxes by selecting the proper presentation context identifier in the message header.
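As a small illustration with dcmtk (file name passed on the command line; error handling minimal), the file's single transfer syntax can be read from the group 0002 meta header, which is itself always Explicit VR Little Endian:
#include "dcmtk/dcmdata/dctk.h"
#include <iostream>

int main(int argc, char *argv[])
{
    if (argc < 2) return 1;
    DcmFileFormat file;
    if (file.loadFile(argv[1]).bad()) return 1;
    // Transfer Syntax UID (0002,0010) lives in the meta header and
    // applies to the whole data set that follows it.
    OFString uid;
    if (file.getMetaInfo()->findAndGetOFString(DCM_TransferSyntaxUID, uid).good())
        std::cout << "Data set transfer syntax: " << uid << std::endl;
    return 0;
}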
Your question does not entirely make sense given how DICOM is organized.
DICOM is composed of various SOP Classes. A SOP Class is a Service-Object Pair. Example services are the Storage Service Class (a service for network storage of objects, typically modality images) and the Media Storage Service Class (for writing files to media or just saving them to disk).
The Object portion of the SOP Class is defined in an IOD (Information Object Definition). IODs are defined by multiple Modules. The Modules in turn are composed of DICOM tags. Each Module usually groups related tags together and is typically associated with an "Entity" in the DICOM model: the module might belong to the Patient, Series or Image level. The IOD consists of all the tags defined in its various Modules. When encoding the IOD, the context of the module that a tag is defined in doesn't matter.
The DICOM Service defines how the tags within an IOD are encoded. Both a DICOM message for network transfer services (in its group 0x0000 elements) and a DICOM file for media (in its group 0x0002 elements) contain metadata that describes the encoding, plus a data set which contains the IOD tags. The group 0x0000 elements in a DICOM message are always encoded in Implicit VR Little Endian, and the group 0x0002 elements in a DICOM file are always encoded in Explicit VR Little Endian. The data set itself is always encoded in a single transfer syntax.
Hope this helps a bit.

How to decide if a DICOM series is a 3D volume or a series of images?

We are writing an importer for DICOM files.
How does one generally decide whether a series of images forms a 3D volume or is just a series of 2D images?
Is there a universal way to decide this for most vendors? I looked at the DICOM tags and could not find an apparent solution.
The DICOM standard defines UIDs for describing the hierarchy. These are from top to bottom:
Study UID - Identifier of the study or scanning session.
Series UID - The same within a series acquired in one scan.
Image UID - Should be unique for any image.
A DICOM image saved by a standard-conforming implementation should have all these IDs. If multiple images have the same Series UID, they form a volume (or a time series) as defined in the standard. Some software of course is not standard-conforming, and you'll have to look at other things like timestamps and patient position, but it is usually best to start by following the standard.
For ordering the series after identifying it, GDCM (as malat suggested) or dcmtk are pretty well-established libraries.
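A minimal grouping sketch along those lines, using dcmtk (a hypothetical helper, with error handling kept short):
#include "dcmtk/dcmdata/dctk.h"
#include <map>
#include <string>
#include <vector>

// Group DICOM file names by Series Instance UID (0020,000E). Files that
// share a UID belong to one series and are volume/time-series candidates.
std::map<std::string, std::vector<std::string> >
groupBySeries(const std::vector<std::string> &fileNames)
{
    std::map<std::string, std::vector<std::string> > series;
    for (const std::string &name : fileNames)
    {
        DcmFileFormat file;
        if (file.loadFile(name.c_str()).bad()) continue;
        OFString uid;
        if (file.getDataset()->findAndGetOFString(DCM_SeriesInstanceUID, uid).good())
            series[uid.c_str()].push_back(name);
    }
    return series;
}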
In MR, you'll want to look for:
MR Acquisition Type (0018,0023). It has two enumerated values:
2D = frequency x phase
3D = frequency x phase x phase
I'm not as sure about CT.
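A short check along those lines with dcmtk (a sketch; the function name is my own, and absence of the tag simply leaves the question open):
#include "dcmtk/dcmdata/dctk.h"

// True if MR Acquisition Type (0018,0023) marks the series as a 3D
// (frequency x phase x phase) acquisition.
bool isMr3dAcquisition(DcmDataset &dataset)
{
    OFString value;
    return dataset.findAndGetOFString(DCM_MRAcquisitionType, value).good()
        && value == "3D";
}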
Most of the time, malat's answer is what you'll want to do (i.e. organize the slices by position and orientation and treat them in a 3D fashion through multi-planar reconstruction).
I think what you are searching for is the algorithm to organize a DICOM dataset using Image Position (Patient) and Image Orientation (Patient).
A typical implementation can be found in GDCM.
Please note that my answer may be totally unrelated to your specific DICOM instances, but since you did not specify which SOP Class UID you were dealing with, I simply assumed you were dealing with old CT or MR Image Storage.
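For illustration, a sketch of the usual ordering step (GDCM's IPPSorter implements the same idea): take the cross product of the two Image Orientation (Patient) direction cosines as the slice normal and sort slices by the projection of Image Position (Patient) onto it. It assumes all slices share the same orientation; slices missing these attributes (e.g. localizers) are skipped.
#include "dcmtk/dcmdata/dctk.h"
#include <algorithm>
#include <string>
#include <vector>

struct Slice { std::string fileName; double distance; };

// Sort slices along the normal of the image plane.
std::vector<Slice> sortSlices(const std::vector<std::string> &fileNames)
{
    std::vector<Slice> slices;
    for (const std::string &name : fileNames)
    {
        DcmFileFormat file;
        if (file.loadFile(name.c_str()).bad()) continue;
        DcmDataset *ds = file.getDataset();

        double row[3], col[3], pos[3];
        bool ok = true;
        for (int i = 0; i < 3 && ok; ++i)
        {
            // Image Orientation (Patient) holds row cosines in values 0-2
            // and column cosines in values 3-5.
            ok = ds->findAndGetFloat64(DCM_ImageOrientationPatient, row[i], i).good()
              && ds->findAndGetFloat64(DCM_ImageOrientationPatient, col[i], i + 3).good()
              && ds->findAndGetFloat64(DCM_ImagePositionPatient, pos[i], i).good();
        }
        if (!ok) continue; // e.g. a localizer without these attributes

        // Slice normal = row direction x column direction.
        double normal[3] = { row[1]*col[2] - row[2]*col[1],
                             row[2]*col[0] - row[0]*col[2],
                             row[0]*col[1] - row[1]*col[0] };
        // Signed distance of this slice along the normal.
        double d = normal[0]*pos[0] + normal[1]*pos[1] + normal[2]*pos[2];
        slices.push_back({name, d});
    }
    std::sort(slices.begin(), slices.end(),
              [](const Slice &a, const Slice &b) { return a.distance < b.distance; });
    return slices;
}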
Patient Position (0018,5100) is a type 1 required attribute for both the CT and MR modalities. This attribute is VERY IMPORTANT for accurately interpreting the patient's orientation.
A projection radiograph will typically have the Patient Orientation (0020,0020) attribute, and a cross-sectional image should have Image Position (0020,0032) and Image Orientation (0020,0037) attributes, as they are type 1 required elements of the Image Plane module (see PS3.3, section C.7.6.2.1.1).
However, a localizer or scout image included with a CT study is not really a cross-sectional image but a projection image, and it may still contain Image Position and Image Orientation attributes. The same holds for an MR study, where one or more sagittal or coronal images are usually captured first, from which the axial images are prescribed. In this case different logic is needed to identify the localizer image; for example, a CT localizer may use the string "LOCALIZER" as value 3 of the Image Type attribute.
In case someone hasn't found the answer yet: I looked through the tags in the RadiAnt DICOM viewer, compared different files, and I think the Scan Options (0018,0022) tag contains the information. If the tag exists (on some files it was not there) and its value equals HELICAL MODE or HELIX, then a 3D image can be constructed from the series.
