How do I know from the DICOM header whether scout data was acquired with the tube at the top or at 90 degrees? - dicom

When I receive scout images, where can I find the X-ray tube angle information in the DICOM header? Basically I need to know whether the scout was taken with the X-ray tube at the top, at 90 degrees, or at some other angle.

The precise answer to your question depends on the type (SOP Class UID) of DICOM object that your question refers to. AFAIK, the tube position is never encoded in DICOM headers; however, what you probably want to know is the orientation of the image plane in the patient coordinate system.
Most commonly (for CT/MR) this is encoded in the attribute (0020,0037) Image Orientation (Patient), which contains 6 floating-point numbers describing the x, y, z components of the row and column vectors of the image.
Please note that this orientation only relates the slices of the same scan to each other; there is no absolute reference coordinate system.
If this attribute is missing, (0020,0020) Patient Orientation may give you a hint, but not as precisely as the vectors.
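For a quick check, here is a minimal sketch (assuming Python with pydicom and numpy; the file name is illustrative) that reads the two attributes. As an assumption on my part, a row direction close to (1,0,0) suggests a frontal (AP/PA) scout, while one close to (0,1,0) suggests a lateral (90 degree) scout:

```python
import numpy as np
import pydicom

ds = pydicom.dcmread("scout.dcm", stop_before_pixels=True)  # illustrative file name

if "ImageOrientationPatient" in ds:
    iop = np.array(ds.ImageOrientationPatient, dtype=float)
    row, col = iop[:3], iop[3:]
    # DICOM patient axes: +x = patient left, +y = posterior, +z = head.
    # row ~ (1,0,0): frontal (AP/PA) scout; row ~ (0,1,0): lateral scout.
    print("row direction:", row, "column direction:", col)
elif "PatientOrientation" in ds:
    print("Patient Orientation:", ds.PatientOrientation)
```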

Related

Using two or more index buffers when creating custom geometry with Qt 3D? [duplicate]

I have some vertex data. Positions, normals, texture coordinates. I probably loaded it from a .obj file or some other format. Maybe I'm drawing a cube. But each piece of vertex data has its own index. Can I render this mesh data using OpenGL/Direct3D?
In the most general sense, no. OpenGL and Direct3D only allow one index per vertex; the index fetches from each stream of vertex data. Therefore, every unique combination of components must have its own separate index.
So if you have a cube, where each face has its own normal, you will need to replicate the position and normal data a lot. You will need 24 positions and 24 normals, even though the cube will only have 8 unique positions and 6 unique normals.
Your best bet is to simply accept that your data will be larger. A great many model formats will use multiple indices; you will need to fixup this vertex data before you can render with it. Many mesh loading tools, such as Open Asset Importer, will perform this fixup for you.
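As a rough illustration of that fixup (a hedged sketch in Python using a hypothetical OBJ-style layout, not Open Asset Importer's actual algorithm), each unique (position index, normal index) pair becomes one output vertex:

```python
# Collapse multi-indexed corners into a single index per vertex by de-duplicating
# each unique (position_index, normal_index) combination.
def build_single_index(face_corners, positions, normals):
    """face_corners: list of (position_index, normal_index) tuples, 3 per triangle."""
    remap = {}      # (pos_idx, nrm_idx) -> new single vertex index
    vertices = []   # output vertex data: (position, normal) pairs
    indices = []    # output single index buffer
    for corner in face_corners:
        if corner not in remap:
            remap[corner] = len(vertices)
            pos_idx, nrm_idx = corner
            vertices.append((positions[pos_idx], normals[nrm_idx]))
        indices.append(remap[corner])
    return vertices, indices
```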
It should also be noted that most meshes are not cubes. Most meshes are smooth across the vast majority of vertices, only occasionally having different normals/texture coordinates/etc. So while this often comes up for simple geometric shapes, real models rarely have substantial amounts of vertex duplication.
GL 3.x and D3D10
For D3D10/OpenGL 3.x-class hardware, it is possible to avoid performing fixup and use multiple indexed attributes directly. However, be advised that this will likely decrease rendering performance.
The following discussion will use the OpenGL terminology, but Direct3D v10 and above has equivalent functionality.
The idea is to manually access the different vertex attributes from the vertex shader. Instead of sending the vertex attributes directly, the attributes that are passed are actually the indices for that particular vertex. The vertex shader then uses the indices to access the actual attribute through one or more buffer textures.
Attributes can be stored in multiple buffer textures or all within one. If the latter is used, then the shader will need an offset to add to each index in order to find the corresponding attribute's start index in the buffer.
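To make the idea concrete, here is a sketch of what the shader side might look like, using separate buffer textures and shown as a plain GLSL 3.30 source string; names and attribute locations are illustrative, and the integer index attributes would have to be uploaded with glVertexAttribIPointer:

```python
# Sketch of the vertex-pulling vertex shader described above (GLSL 3.30 source kept
# as a string). The per-vertex attributes are only indices; the actual positions and
# normals are fetched from buffer textures. Names/locations are illustrative.
VERTEX_PULLING_SHADER = """
#version 330 core
layout(location = 0) in int positionIndex;   // index into the position buffer texture
layout(location = 1) in int normalIndex;     // index into the normal buffer texture

uniform samplerBuffer positionBuffer;        // buffer texture holding vec3 positions
uniform samplerBuffer normalBuffer;          // buffer texture holding vec3 normals
uniform mat4 mvp;

out vec3 vNormal;

void main() {
    vec3 position = texelFetch(positionBuffer, positionIndex).xyz;
    vNormal       = texelFetch(normalBuffer, normalIndex).xyz;
    gl_Position   = mvp * vec4(position, 1.0);
}
"""
```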
Regular vertex attributes can be compressed in many ways. Buffer textures have fewer means of compression, allowing only a relatively limited number of vertex formats (via the image formats they support).
Please note again that any of these techniques may decrease overall vertex processing performance. Therefore, they should only be used in the most memory-limited of circumstances, after all other options for compression or optimization have been exhausted.
Buffer textures are also available in OpenGL ES, but only from ES 3.2 onward (or via the texture buffer extensions on ES 3.1). Higher OpenGL versions allow you to read buffer objects more directly via SSBOs rather than buffer textures, which might have better performance characteristics.
I found a way to reduce this sort of repetition that runs a bit contrary to some of the statements made in the other answer (but doesn't specifically fit the question asked here). It does, however, address my own question, which was marked as a duplicate of this one.
I just learned about interpolation qualifiers, specifically "flat". My understanding is that putting the flat qualifier on a vertex shader output causes only the provoking vertex's value to be passed to the fragment shader.
This means for the situation described in this quote:
So if you have a cube, where each face has its own normal, you will need to replicate the position and normal data a lot. You will need 24 positions and 24 normals, even though the cube will only have 8 unique positions and 6 unique normals.
You can have 8 vertices, 6 of which carry the unique normals (the normal values on the remaining 2 are disregarded), so long as you carefully order your primitives' indices such that the "provoking vertex" of each face carries the normal you want applied to that entire face.
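A sketch of the qualifier in use, shown as plain GLSL 3.30 source strings with illustrative names; since the value is not interpolated, a per-face normal only has to live on the provoking vertex of each face:

```python
# "flat" means no interpolation: the fragment shader sees the provoking vertex's value
# for the whole primitive.
VERTEX_SHADER = """
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;   // only meaningful on the provoking vertex
uniform mat4 mvp;
flat out vec3 faceNormal;              // flat: no interpolation across the primitive

void main() {
    faceNormal = normal;
    gl_Position = mvp * vec4(position, 1.0);
}
"""

FRAGMENT_SHADER = """
#version 330 core
flat in vec3 faceNormal;               // value taken from the provoking vertex only
out vec4 fragColor;

void main() {
    fragColor = vec4(normalize(faceNormal) * 0.5 + 0.5, 1.0);
}
"""
```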
EDIT: My understanding of how it works:

DICOM: understanding the relationship between Patient Position (0018,5100) and Image Orientation (Patient) (0020,0037)

If a patient is scanned in the axial plane and the Patient Position attribute reads 'HFS' (head first supine), then shouldn't the Image Orientation (Patient) (0020,0037) attribute be [-1,0,0,0,1,0]? It seems it should be [1,0,0,0,1,0]. My confusion might lie in an incorrect understanding of what the 'scanning viewpoint' is (is there a technical name for this?). Hopefully this image clarifies what I mean:
If the 'scanning viewpoint' is the 'neurosurgeon's' view, then should the attribute read [-1,0,0,0,1,0]? I thought this should be the case if the Patient Position attribute reads 'HFS'. If somebody can clarify I would be very grateful!
Sorry, I had to update my answer, as I oversimplified and asserted a relation between the two attributes while there really is none. The answer was accepted in the meantime, but I decided to update it anyway for future reference.
@OP: sorry, the answer is a bit more complex.
In case of HFS the patient is positioned exactly as in your image. Patient Position (0018,5100) specifies the position of the patient relative to the imaging equipment space when facing the front of the imaging equipment.
The different viewpoints mentioned in the OP are the two main viewing conventions. The viewing convention is not part of the DICOM standard. DICOM-compliant scanning devices transmit pixels in the radiological viewing convention. The scanning device orders the pixels depending on the patient orientation during the scan; hence, the imaging technologist must indicate the patient orientation before scanning starts.
//edit
A more correct but technically more complex answer: the two DICOM attributes are not related. Image Orientation (Patient) is related to the patient's body, regardless of how the patient is placed in the machine, while Patient Position specifies the position of the patient relative to the imaging equipment. The machine manufacturer has to know the Patient Position to be able to calculate Image Orientation (Patient) from the image orientation cosines in the machine's frame of reference.
Knowing this, we can now determine what Image Orientation (Patient) means. The first three values specify the direction cosines of the first row of the image with respect to the patient coordinate system; the last three specify the direction cosines of the first column. We can think of Image Orientation (Patient) as the rotation matrix P describing the image orientation in the DICOM patient coordinate system. In the default CT case you typically have the scenario shown in the image above, where HFS gives [1 0 0; 0 1 0], i.e. the identity orientation (1,0,0,0,1,0). However, MR almost never has the identity for Image Orientation (Patient); it typically has a slightly angulated set of axes. Note that both cases would still have HFS as their default Patient Position.
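A small sketch (assuming Python with numpy) of how the six values decompose into row/column direction cosines and a slice normal, using the default axial HFS orientation from above:

```python
import numpy as np

def slice_axes(image_orientation_patient):
    row, col = np.array(image_orientation_patient, dtype=float).reshape(2, 3)
    normal = np.cross(row, col)   # perpendicular to the image plane
    return row, col, normal

# Default axial slice, HFS patient: rows run toward the patient's left (+x),
# columns toward posterior (+y); the resulting normal points toward the head (+z).
row, col, normal = slice_axes([1, 0, 0, 0, 1, 0])
print(row, col, normal)           # -> [1. 0. 0.] [0. 1. 0.] [0. 0. 1.]
```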

Apply projective transformation on plane in 3D

Scenario
I have a 3D environment which contains a 3D scene and a '2D' scene.
The 3D scene contains a cube and a perspective camera.
The '2D' scene contains 4 round objects and an orthographic camera. These round objects can be moved around by the user, which is why the orthographic camera is used; otherwise the round objects could be moved 'in depth' (along the z-axis) and would change in size, and I want them to maintain their size.
Depending on where the round objects are positioned, the corners of the cube in the 3D scene should be aligned with the positions of the round objects, while maintaining perspective.
Edit:
What I am trying to accomplish is: based on an image of a room, a user uses those round objects to define the dimensions of the room. Based on those dimensions a hidden cube is positioned to act as a bounding box. The next step would be to add 3D objects to the scene while maintaining the perspective of the room.
I tried explaining this scenario in a picture:
Problems
Basically I have no clue where to start.
The round objects are in a '2D' environment because of the orthographic camera, therefore I have no depth value, which I think I need.
I think I need some perspective transformation based on camera positions/settings? There are all sorts of matrices that could be produced, but I don't know how to implement them.
Sources I studied
http://www.graphicsmill.com/docs/gm/affine-and-projective-transformations.htm
below is a similar situation
https://math.stackexchange.com/questions/296794/finding-the-transform-matrix-from-4-projected-points-with-javascript
Cannot post more links because of my reputation
I hope someone can make this clear or point me in the right direction
Counting the real degrees of freedom, I would say that you don't have enough data. Imagine the projective camera of the 3D scene as an actual pinhole camera. Then the image that camera creates on its film, sensor or whatever is described by at least 9 parameters:
3 parameters for the position of the camera in space,
2 parameters for the direction the camera is looking at and
1 parameter rotating the camera + sensor around their optical axis,
1 parameter determining the distance from pinhole to sensor and
2 parameters translating the sensor in its plane
On the other hand, knowing a projective transformation from one plane to another, e.g. using my answer to the question you already referenced, will only yield 8 geometrically meaningful parameters. So you cannot hope to reconstruct the camera position from that alone, and therefore cannot find the image of the 3D scene that would fit your markers. The Wikipedia article on 3D pose estimation writes that
Most implementations of POSIT only work on non-coplanar points (in other words, it won't work with flat objects or planes).[3]
That being said, you gave an example of where someone is actually doing this! So how do they do it? Honestly, I'm not sure, but they would have to make use of some additional knowledge or extra assumptions. For example, if they knew details about their camera (focal length, relative position between lens and sensor, or something like that), that could provide the required data. Since these apps tend to work on mobile devices, I think it rather likely that they might have either an API to request these things or a database where they can be looked up for the more common devices.
Judging from your question, you don't have that. Neither do you have all the vertical edges of the cube depicted vertically parallel to one another, which would have been another possible way to add more information. You have to come up with one more piece of information in order to allow for a hopefully unique solution.
Of course, without more information the system is just underspecified. It's not hard to find some transformation matrix which does what you requested. Actually, the answer I referenced deals with a setup where a 2D-to-2D map is modeled using a 3D transformation matrix. You can do the same and be done with it. But your users might become frustrated, since the transformation they obtain might do completely wrong things to the out-of-plane direction, and there is no knob to tune that to the correct behavior.
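For completeness, here is a sketch (assuming Python with numpy; the point coordinates are made up) of estimating that 8-parameter plane-to-plane map from 4 point correspondences via the direct linear transform. It yields only the 2D warp between the quads, not the camera pose:

```python
import numpy as np

def homography_from_points(src, dst):
    """src, dst: arrays of shape (4, 2) with corresponding 2D points."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1,  0,  0,  0, u * x, u * y, u])
        rows.append([ 0,  0,  0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)        # null-space vector holds the homography entries
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]                 # normalize so H[2,2] == 1

# Example: map the unit square onto an arbitrary convex quad (the marker positions).
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dst = np.array([[10, 12], [220, 30], [240, 200], [20, 180]], dtype=float)
H = homography_from_points(src, dst)

p = np.array([0.5, 0.5, 1.0])          # centre of the square, homogeneous coordinates
q = H @ p
print(q[:2] / q[2])                    # projected position of the centre
```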

How to decide if a DICOM series is a 3D volume or a series of images?

We are writing an importer for DICOM files.
How does one generally decide whether a series of images forms a 3D volume or is just a series of 2D images?
Is there a universal way to decide this for most vendors? I looked at the DICOM tags and could not find an apparent solution.
The DICOM standard defines UIDs for describing the hierarchy. These are from top to bottom:
Study UID - Identifier of the study or scanning session.
Series UID - The same within a series acquired in one scan.
Image UID (SOP Instance UID) - should be unique for every image.
A DICOM image saved by a standard-conforming implementation should have all these IDs. If multiple images have the same SeriesUID, they are a volume (or time-series) as defined in the standard. Some software of course is not standard-conforming and you'll have to look at other things like timestamps and patient position, but it is usually best to start by following the standard.
For ordering the series after identifying it, GDCM (as malat suggested) or DCMTK are pretty well-established libraries.
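A minimal sketch of that starting point (assuming Python with pydicom; the folder layout and file extension are illustrative):

```python
from collections import defaultdict
from pathlib import Path
import pydicom

def group_by_series(folder):
    series = defaultdict(list)
    for path in Path(folder).glob("*.dcm"):
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        series[ds.SeriesInstanceUID].append(path)
    # Every list with more than one file is a candidate volume or time-series.
    return series
```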
In MR, you'll want to look for:
MR Acquisition Type (0018,0023). It has two enumerated values:
2D = frequency x phase
3D = frequency x phase x phase
I'm not as sure about CT.
Most of the time, malat's answer is what you'll want to do (i.e. organize the slices by position and orientation and treat them in a 3D fashion through multi-planar reconstruction).
I think what you are searching for is the algorithm to organise a DICOM dataset using Image Position (Patient) and Image Orientation (Patient).
A typical implementation can be found in GDCM
Please note that my answer may be totally unrelated to your specific DICOM instances, but since you did not specify which SOP Class UID you were dealing with, I simply assumed you were dealing with old CT or MR Image Storage.
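A sketch of the core of that ordering idea (assuming Python with numpy and pydicom datasets already loaded, all sharing one orientation; GDCM's real implementation handles many more corner cases): project each slice's Image Position (Patient) onto the normal of the image orientation and sort along it.

```python
import numpy as np

def sort_slices(datasets):
    row, col = np.array(datasets[0].ImageOrientationPatient, dtype=float).reshape(2, 3)
    normal = np.cross(row, col)   # slice normal from the row/column direction cosines
    return sorted(
        datasets,
        key=lambda ds: float(np.dot(normal, np.array(ds.ImagePositionPatient, dtype=float))),
    )
```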
Patient Position (0018, 5100) is a type 1 required attribute for both the CT and MR modalities. This attribute is VERY IMPORTANT for accurately interpreting the patient's orientation.
A projection radiograph will typically have the Patient Orientation (0020,0020) attribute, while a cross-sectional image should have the Image Position (0020,0032) and Image Orientation (0020,0037) attributes, as they are type 1 required elements of the Image Plane module (see PS 3.3 section C.7.6.2.1.1).
However, a localizer or scout image included with a CT study is not really a cross-sectional image but a projection image, and it may still contain the Image Position and Image Orientation attributes. The same applies to MR studies, where one or more sagittal or coronal images are usually captured from which the axial images are prescribed. In these cases different logic is needed to identify the localizer image; for example, a CT localizer may use the string "LOCALIZER" as value 3 of the Image Type attribute.
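A small sketch of that check (assuming Python with pydicom; not every vendor follows this convention, so treat it as a heuristic):

```python
import pydicom

def is_localizer(path):
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    image_type = [str(v).upper() for v in getattr(ds, "ImageType", [])]
    return "LOCALIZER" in image_type   # value 3 of Image Type for CT localizers
```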
In case someone hasn't found the answer yet: I looked through the tags in the RadiAnt DICOM viewer, compared different files, and I think the Scan Options (0018,0022) tag contains the information. If the tag exists (on some files it was not there) and its value equals HELICAL MODE or HELIX, then a 3D volume can be constructed from that series.

How to Draw a Marker in an MRI File With Respect to a Contrast Agent

I am really confused about drawing an overlay on an MRI image: is this similar to Structured Report (SR) processing or not?
I am trying to read the MRI file with respect to the contrast agent. After a lot of searching on Google I finally found some information, such as:
"The data is extracted by injecting a contrast agent into a patient’s vein, then taking sequential snapshots of a volume of interest as the contrast agent diffuses through that area"
but I am totally new to this, so can you help me with the following?
1. A specific link for these topics.
2. How to read the contrast agent value from an MRI DICOM file.
3. How to show a shaded region where the cancer is detected, or some kind of marker at the location where the pixel intensity of the DICOM file is higher.
Well, an MRI scan is just a stack of grayscale images, pretty much like CT, except that the intensity units are of course different. So just read it as any other DICOM image and look at the pixel values for intensities, or perform segmentation.
Cancer/tumor regions and other features are stored in a separate DICOM object called an RT Structure Set (it is usually produced by a radiotherapy planning system or some contouring software).
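A minimal sketch of the "read the pixels and mark the bright region" idea (assuming Python with pydicom, numpy, and matplotlib; the file name and the percentile threshold are purely illustrative and are not a substitute for real segmentation):

```python
import numpy as np
import pydicom
import matplotlib.pyplot as plt

ds = pydicom.dcmread("slice.dcm")          # hypothetical MR slice
pixels = ds.pixel_array.astype(float)

mask = pixels > np.percentile(pixels, 99)  # mark the brightest 1% of pixels

plt.imshow(pixels, cmap="gray")
plt.imshow(np.ma.masked_where(~mask, mask), cmap="autumn", alpha=0.5)  # shaded overlay
plt.show()
```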
