Is it possible to retrieve the beam's real-world position from a DICOM RT PLAN?

I am currently working on radiotherapy plan generation, and I am trying to retrieve the beam source position from a DICOM RTPLAN file and locate it on the related 3D CT image.
From the RTPLAN, I am able to access the isocenter position of each beam, but this is given in patient coordinates and I am not quite sure how to find the corresponding coordinates in the basis used by the 3D CT image.
I have access to the ImagePosition and ImageOrientation attributes of the CT DICOM. Moreover, the CT DICOM-like file (in practice a JSON regrouping some DICOM information) and the RTPLAN share the same FrameOfReference (does that mean they share the same patient coordinate system?).
What does ImagePosition truly indicate? As far as I understand, I think it is the position of the point (0, 0, 0) of the 3D CT image in patient coordinates. I am also a bit confused about the ImageOrientation attribute.

As you can read in this answer here, the ImagePosition attribute gives you the x, y, and z coordinates of the upper left hand corner of the image, in mm, i.e. the coordinates of the center of the upper left pixel of the image.
For your convenience I copy-paste below a table from the DICOM Documentation Part 3 (page 561).
The ImageOrientation attribute, as described in the documentation, gives you the direction cosines of the first row and the first column with respect to the patient. To understand this attribute better, take a look at the very useful website DICOM is Easy, by Roni Zaharia. In one of his images, you can clearly see that when the attribute is not equal to 1\0\0\0\1\0, the coordinate system of the image is not aligned with the coordinate system of the patient. To align them, you have to use the direction cosines provided by the attribute and apply a rotation (take a look at the transformation matrix on page 562 of the aforementioned DICOM documentation).
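To make this concrete, here is a minimal pydicom/NumPy sketch of that transformation, applied in reverse to map a point given in patient coordinates (such as the isocenter from the RTPLAN) to voxel indices of the CT volume. This is only a sketch under assumptions: the file name, the slice spacing value, and the isocenter values are placeholders, the CT slices are assumed to be uniformly spaced and already sorted along the slice normal, and the plan and CT are assumed to share the same Frame of Reference.

```python
import numpy as np
import pydicom

# Sketch: map a point given in patient coordinates (e.g. the beam isocenter
# from the RTPLAN) to fractional voxel indices of the CT volume.
# Assumes uniformly spaced, already sorted slices sharing the plan's
# Frame of Reference. File name and numbers are placeholders.

ct = pydicom.dcmread("ct_slice_000.dcm")                 # first slice of the sorted stack
ipp = np.array(ct.ImagePositionPatient, dtype=float)     # patient coords of the first voxel (mm)
iop = np.array(ct.ImageOrientationPatient, dtype=float)
row_dir, col_dir = iop[:3], iop[3:]                      # direction cosines of row / column
normal = np.cross(row_dir, col_dir)                      # stacking (slice) direction
row_spacing, col_spacing = map(float, ct.PixelSpacing)   # PixelSpacing = [between rows, between columns]
slice_spacing = 2.5                                      # mm; in practice derive it from consecutive IPPs

# 4x4 affine: (i = column index, j = row index, k = slice index, 1) -> patient coordinates (mm)
affine = np.eye(4)
affine[:3, 0] = row_dir * col_spacing    # +1 column step moves along the row direction
affine[:3, 1] = col_dir * row_spacing    # +1 row step moves along the column direction
affine[:3, 2] = normal * slice_spacing   # +1 slice step moves along the normal
affine[:3, 3] = ipp

isocenter = np.array([12.3, -45.6, 78.9])                # IsocenterPosition from the RTPLAN (placeholder)
i, j, k, _ = np.linalg.inv(affine) @ np.append(isocenter, 1.0)
print(i, j, k)                                           # fractional voxel indices in the CT volume
```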

Related

What direction do the DICOM instance numbers grow along?

By direction I mean, for example, from the patient's head to feet or from feet to head. The chest CT scans I have seen so far suggest that the slice with Instance Number 1 is usually the topmost one (closest to the head), but I don't know whether this is part of the standard or whether there are other tags I should inspect to determine the direction.
There is no rule in DICOM that requires the Instance Number to be related to the slice position in a particular way. The link posted by Bartloimiej shows that there is a rule for how the slice coordinates defined by Image Position (Patient) (0020,0032) and Image Orientation (Patient) (0020,0037) are related to directions in the patient's body (head, feet, etc.).
So if you want to apply spatial ordering, these attributes are what you want to use. Slice Location (0020,1041) will not help you either:
C.7.6.2.1.2 [...] This information is relative to an unspecified implementation specific reference point.
For original (i.e. Image Type (0008,0008) is ORIGINAL\PRIMARY...) CT slices, it is quite safe to assume that some growth in the Z-Direction is always present in a volumetric dataset. But for MRI or for reconstructed CT-slices (MPR), you may find datasets in which slices are parallel to the xz or yz plane. If your application is supposed to handle such images, make sure to avoid division by zero...
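For illustration, here is a minimal pydicom/NumPy sketch of that spatial ordering (the file names are placeholders). Projecting each Image Position (Patient) onto the slice normal avoids picking a single coordinate axis, so the same code also handles sagittal, coronal, and oblique stacks without any division.

```python
import numpy as np
import pydicom

# Sketch: sort slices spatially using only Image Orientation (Patient) and
# Image Position (Patient); file names are placeholders.
slices = [pydicom.dcmread(f) for f in ["s0.dcm", "s1.dcm", "s2.dcm"]]

iop = np.array(slices[0].ImageOrientationPatient, dtype=float)
normal = np.cross(iop[:3], iop[3:])   # slice normal = row cosines x column cosines

# Sort by the signed distance of each slice origin along the normal; no
# division involved, so degenerate orientations are not a problem here.
slices.sort(key=lambda s: float(np.dot(np.asarray(s.ImagePositionPatient, dtype=float), normal)))
```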
Yes, the standard defines it. DICOM PS3.3, part C.7.6.2:
The direction of the axes is defined fully by the patient's orientation.
If Anatomical Orientation Type (0010,2210) is absent or has a value of BIPED, the x-axis is increasing to the left hand side of the patient. The y-axis is increasing to the posterior side of the patient. The z-axis is increasing toward the head of the patient.
There is also a tag (0020,0037), Image Orientation (Patient), which relates the image's row and column directions to the patient coordinate frame. In trunk CT it is almost always 1 0 0 0 1 0 (no rotation) and you don't need to deal with it. Otherwise, see the comments under the link above.
You are correct. The chest CT series are sorted from head to feet. The slice closest to the head should have the lowest Instance Number.
I don't know if this is defined by the DICOM standard or not, but I have seen a lot of DICOM images and the convention is this:
AXIAL - sorted by Z axis high to low (head to feet)
CORONAL - sorted by Y axis high to low (back to front)
SAGITTAL - sorted by X axis low to high (right to left)
Notice in all cases, the first slice in the series will be farthest from the observer.
If you need to generate Instance Numbers, you should sort the images by the dot product of Image Position (Patient) and (1,-1,-1), from low to high. In the rare degenerate case where all dot products are the same, I don't know; pick another direction to sort by, but (0,-1,-1) would probably be a good choice.
EDIT: I just discussed this with a friend who is more experienced. He said it varies. Some departments prefer back to front order, some prefer front to back. Also some DICOM viewers will give users the choice of how the slices are sorted (by Instance Number, Slice Location, IPP, Content Time, etc)

Detecting floor under an object in PCL

I'm very new to PCL.
I am trying to detect the floor under an object, to check whether the object has toppled over or is positioned horizontally.
I've checked the API and found the method pcl::PointCloud< T >::at.
It seems like I could read the Z-value of a point using at. Is that correct?
If yes, I'm confused about how it should work. Mathematically a point is infinitely small. On my scans I see that the point density gets smaller the more distant the points are in the Z-direction.
Will at always return a point? Is the value the mean of the nearest physical points?
As described in the documentation, pcl::PointCloud< T >::at returns the information of a single point (the coordinates plus other data, depending on the point format) given column and row indices (roughly the X, Y in the depth image). For this reason, this method only works on organized clouds.
Unfortunately, not every point is a valid point. Unless you filter the point cloud, you could find invalid measurements (points which have NaN components). This is pretty normal, just discard those points using a filter. Your intuition is right, the point density is smaller the further away you go from the sensor.
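For illustration, here is a tiny NumPy sketch of that filtering step, assuming the cloud has already been copied into an (H, W, 3) array of x/y/z values (the array and file name are hypothetical); within PCL itself, pcl::removeNaNFromPointCloud does the same job.

```python
import numpy as np

# Sketch: drop invalid (NaN) measurements before further processing.
# Assumes the organized cloud was exported to an (H, W, 3) array of x, y, z.
cloud = np.load("depth_cloud.npy")
points = cloud.reshape(-1, 3)
valid = points[~np.isnan(points).any(axis=1)]   # keep rows whose x, y, z are all finite
print(f"{len(valid)} of {len(points)} points are valid")
```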
As for what you're trying to achieve, you should take a look at the planar segmentation tutorial on the PCL website and at the Table Object Detector software by Nicolas Burrus. The latter extracts a plane, and the clusters of objects on top of it.

Pinning latitude longitude on a ski map

I have a map of a mountainous landscape, http://skimap.org/data/989/60/1218033025.jpg. It contains a number of known points, the lat-longs of which can be easily found out using Google maps. I wish to be able to pin any latitude longitude coordinate on the map, of course within the bounds of the landscape.
For this, I tried an approach that seems to be largely failing. I assumed the map to be equivalent to an aerial photograph of the Swiss landscape, without any info about the altitude or other coordinates of the camera. So, I assumed the plane perpendicular to the camera lens normal to be Ax+By+Cz-d=0.
I attempt to find the plane constants using the known points. I fix my origin at a point, with z=0 at sea level. I take two known points in the landscape and, using the equation of a line in 3D, I find the length of the projection of the line segment joining them onto the plane. I multiply it by another constant K to account for the rescaling of this length in the static 2D representation of the 3D scene. The distance between the two points in the 2D on-screen representation can easily be found in pixels, and the actual length of the line joining the two points can also be found, since I can compute the distance between the points from their lat-longs and their heights above sea level.
So, I end up with an equation directly relating the distance between the two points in the 2D on-screen representation, let's call it Ls, and the actual length in the landscape, L. I have many other known points, so plugging them into the equation should give me the values of the 4 constants. For this, I needed 8 known points (the known parameters being their names, lat-longs, and heights above sea level), one being my origin and a second being a fixed reference point. The remaining 6 points generate a system of 6 linear equations in A^2, B^2, C^2, AB, BC and CA. Solving the system using an online tool, I get the result that the system has a unique solution with all 6 constants being 0.
So it seems that the assumption that the map is equivalent to an aerial photograph taken from an aircraft is faulty. Can someone please give me some pointers or any other ideas to get this to work? Does OpenStreetMap use a Mercator projection?
I would say that this is impossible to do in an automatic way. The ski map should be considered an image rather than a map: a map is a projection of the real world onto a plane, and since that doesn't fit ski maps very well, they are drawn instead.
The best way is probably to manually define a lot of points in the ski map with known or estimated coordinates and use them to interpolate the points in between. To get an acceptable result you would probably have to assign coordinates to each pixel in the ski map.
You could do something like the following: http://magazin.unic.com/en/2012/02/16/making-of-interactive-mobile-piste-map-by-laax/
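As a starting point for that kind of manual calibration, here is a hedged NumPy sketch that fits a simple affine mapping from lon/lat to pixel coordinates through a handful of hand-picked control points (all numbers below are made up). Because a painted panorama is not a true projection, a single affine fit will only be a rough approximation; with many control points you would move to piecewise or local interpolation, as suggested above.

```python
import numpy as np

# Sketch: least-squares affine fit (lon, lat) -> (pixel_x, pixel_y) from
# manually picked control points. All coordinates below are placeholders.
ctrl_lonlat = np.array([[7.95, 46.55], [7.98, 46.53], [8.01, 46.56], [7.96, 46.58]])
ctrl_pixels = np.array([[310.0, 820.0], [690.0, 905.0], [1020.0, 610.0], [410.0, 330.0]])

A = np.hstack([ctrl_lonlat, np.ones((len(ctrl_lonlat), 1))])   # rows: [lon, lat, 1]
coeffs, *_ = np.linalg.lstsq(A, ctrl_pixels, rcond=None)       # 3x2 matrix of affine coefficients

def pin(lon, lat):
    """Approximate pixel position of a lon/lat pair on the ski map."""
    return np.array([lon, lat, 1.0]) @ coeffs

print(pin(7.97, 46.55))
```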
I am solving the exact same issue. It is pretty hard and involves a lot of maths; it is taking me a few weeks to solve. Interpolation is the key, along with lots of manual mapping. I would say that for a ski mountain it will take at least 1000-1500 points to get even the very basics working. So, not a trivial task unless you can automate the collection of these points (which is what I am doing!) ;)

How to calculate space between dicom slices for MPR?

I want to show an MPR view based on DICOM slices. I've made a 3D array from a series of DICOM files, and I show it from the coronal and sagittal sides.
My 3D array includes:
- z = the number of DICOM slices
- c = the column count of each slice
- r = the row count of each slice
But I have a problem. When there is some space between slices, an image made this way doesn't show a correct view, because I have no way to simulate the distance between them.
I don't know how to calculate the space between slices. I want to add extra space between slices: for example, if the space between slices is 4, I have to add 4 extra slices along z between each pair. I hope my meaning is clear.
Image Position (Patient) and Image Orientation (Patient) are the only two attributes you should ever use when computing the distance between slices. For more details see here or here. For an actual implementation see here; this implementation also takes into account the Frame of Reference UID, as well as gantry/detector tilt.
This question is the #1 question asked on comp.protocols.dicom.
Please see ImageJ bug
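For illustration, a minimal pydicom/NumPy sketch of that computation (file names are placeholders; the slices are assumed to belong to one series and one Frame of Reference):

```python
import numpy as np
import pydicom

# Sketch: derive the spacing between slices from Image Orientation (Patient)
# and Image Position (Patient) only.
slices = [pydicom.dcmread(f) for f in ["s0.dcm", "s1.dcm", "s2.dcm"]]

iop = np.array(slices[0].ImageOrientationPatient, dtype=float)
normal = np.cross(iop[:3], iop[3:])   # slice normal = row cosines x column cosines

# Signed position of every slice along the stacking direction, then the gaps.
positions = sorted(
    float(np.dot(np.asarray(s.ImagePositionPatient, dtype=float), normal)) for s in slices
)
print(np.diff(positions))   # ideally one constant value; gaps or tilt show up here
```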
I believe the answer from @Matt is erroneous; let me clarify a few things here.
No: saying 'DICOM has an attribute called Spacing Between Slices' is very wrong (technically it does not even mean anything).
DICOM defines IODs which define the set of required attributes available in an SOP Class Instance. Let's consider two very common cases: CT Image Storage (legacy) and MR Image Storage (legacy). So we need to compare the set of attributes in between:
CT Image IOD Modules
MR Image IOD Modules
Now let's say we want to check whether MR Image Storage supports Spacing Between Slices; it is easy to jump to:
MR Image Module Attributes
However it is much harder to find this attribute for CT Image Storage, simply because this attribute does not exist (per the standard). So the only time you would find such an attribute would be within an extended SOP Class (some vendors may decide that the Spacing Between Slices attribute makes sense within their extended SOP Class Instances).
Mixing in the same answer both Spacing Between Slices and Slice Thickness (0018,0050) is very confusing for new users.
I agree that Slice Thickness is perfectly defined in the standard for both CT Image Storage and MR Image Storage, since they both include the Image Plane Module Attributes; however, let's not confuse one with the other.
I found a nice summary of Slice Thickness vs Spacing Between Slices here (if you scroll to the section, you can even play the small demo) :
CT Physics: CT Reconstruction and Helical CT
In step and shoot CT the Slice Thickness and Spacing Between Slices are identical so there is no big issue here. However for helical CT those values are not the same and can vary in any direction (they are independent).
[…] Slice Thickness is determined by the detector width and pitch,
while reconstruction interval (=Spacing Between Slices) can be chosen
arbitrarily. […]
In conclusion: to compute the Spacing Between Slices (= reconstruction interval) safely, it is much better to use Image Orientation (Patient) and Image Position (Patient), since they are available in both MR Image Storage and CT Image Storage instances.
DICOM has an attribute called Spacing Between Slices (0018, 0088) that gives the distance between two adjacent slices (perpendicular to the image plane) and it also has an attribute called Slice Thickness (0018, 0050) that gives the thickness of the imaged slice (the image plane exists at the center of the slice, with half of the volume above the plane and half below). Image Position (Patient) (0020, 0032) and Image Orientation (Patient) (0020, 0037) are also useful attributes for computing spatial relationships between slices.
For a more detailed explanation, see section C.7.6.2 of part 3 of the DICOM standard. (p. 409)
WARNING: Please be aware that different vendors use the same DICOM tags for addressing different things. For instance, the attribute Spacing Between Slices (0018, 0088) means two different things depending on the vendor. See this table for guidance, and this thread for an explanation.
As discussed in the previous answers, it is not straightforward to calculate the space between DICOM slices. Let's phrase the question differently: how do we store DICOM slices in a 3D volume, i.e. a list of equally spaced slices for rendering (I guess you want to upload it into a 3D texture)?
The difficulty is that the actual position at which a CT slice is captured might not be identical to the position selected by the radiologist. A dataset might have been configured to capture 1 mm slices, but the CT returns slices at positions 0.0 mm, 0.997 mm, 2.010 mm, ...
If you use an attribute such as Spacing Between Slices to calculate the size of the 3D volume, you will obtain subtle rounding errors easily. Don't go there.
Rather, it is essential to use Image Position (Patient) (0020, 0032) and then perform an optimization to figure out how the slices can be fitted into a grid.
Typical problems in practice to consider:
Missing slices (interpolate? leave a gap?)
Out-of-step slices (hardware defect? data defect?)
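A minimal sketch of that optimization step, assuming the positions are the Image Position (Patient) values already projected onto the slice normal (the numbers below are made up): estimate the grid step robustly, snap each slice to the nearest grid index, and report indices that have no slice.

```python
import numpy as np

# Sketch: snap slices onto an equally spaced grid and flag missing ones.
# `positions` = Image Position (Patient) projected onto the slice normal
# (placeholder values; note the slice missing near 4.0 mm).
positions = np.sort(np.array([0.0, 0.997, 2.010, 3.001, 5.003]))

step = np.median(np.diff(positions))                       # robust grid spacing estimate
indices = np.rint((positions - positions[0]) / step).astype(int)

missing = sorted(set(range(indices[-1] + 1)) - set(indices.tolist()))
print(step, indices, missing)                              # decide: interpolate or leave a gap
```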

Math for a geodesic sphere

I'm trying to create a very specific geodesic tessellation, but I can't find anything online about it.
It is normal to subdivide the triangles of an icosahedron into triangle patches and project them onto the sphere. However, I noticed an animated GIF on the Wikipedia entry for Geodesic Domes that appears not to follow this scheme. Geodesic spheres generally comprise a mixture of mostly hexagonal triangle patches, with pentagonal patches forming at the vertices of the original icosahedron; in most cases, these pentagons are linked together, that is, following a straight edge from the center of one pentagon leads to the center of another pentagon. In the Wikipedia animation, however, the edge from the center of one pentagon doesn't appear to intersect the center of an adjacent pentagon; instead it intersects the side of the other pentagon.
Where can I go to learn about the math behind this particular geometry? Ideally, I'd like to know of an algorithm for generating such tessellations.
Marcelo,
The most commonly employed geodesic tessellations are either Class-I or Class-II. The image you reference is of a Class-III tessellation, more specifically 4v{3,1}. The classes can be diagrammed like so:
Class-III tessellations are chiral, and can have left-handed or right-handed twist. Here's the mirror-image of the sample you referenced:
You can find some 3D models of Class-III spheres, at Google's 3D Warehouse:
http://sketchup.google.com/3dwarehouse/cldetails?mid=b926c2713e303860a99d92cd8fe533cd
Being properly identified should get you off to a good start.
Feel free to stop by the Geodesic Help Group; http://groups.google.com/group/GeodesicHelp?hl=en
TaffGoch
Here's an image from one of Joe Clinton's NASA publications:
I believe it is actually just a matter of resolution (i.e., number of sub-divisions). The tessellation you show does seem to emanate from an icosahedron scheme: cf p.7 here, mid-page example. Check out the rest of the document for some calculation details - also its cited references, and some further code samples here.
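If you just want something to experiment with, here is a hedged NumPy sketch of the standard Class-I construction (split each icosahedron face into freq^2 small triangles and push the vertices onto the sphere). It does not produce the Class-III {m,n} tessellations discussed above, which need a skewed triangular grid on each face, but the projection step is the same.

```python
import numpy as np

# Sketch: standard Class-I geodesic subdivision. Each icosahedron face is cut
# into freq^2 small triangles and every grid vertex is projected onto the
# unit sphere. Class-III {m,n} breakdowns would use a rotated grid instead.

def icosahedron():
    p = (1.0 + 5 ** 0.5) / 2.0
    verts = np.array([[-1, p, 0], [1, p, 0], [-1, -p, 0], [1, -p, 0],
                      [0, -1, p], [0, 1, p], [0, -1, -p], [0, 1, -p],
                      [p, 0, -1], [p, 0, 1], [-p, 0, -1], [-p, 0, 1]], dtype=float)
    faces = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    return verts / np.linalg.norm(verts, axis=1, keepdims=True), faces

def class_one_vertices(freq=4):
    verts, faces = icosahedron()
    points = []
    for a, b, c in faces:
        A, B, C = verts[a], verts[b], verts[c]
        for i in range(freq + 1):
            for j in range(freq + 1 - i):
                p = A + (B - A) * (i / freq) + (C - A) * (j / freq)  # flat grid point on the face
                points.append(p / np.linalg.norm(p))                 # project onto the sphere
    return np.unique(np.round(points, 6), axis=0)                    # merge vertices shared by faces

print(len(class_one_vertices(4)))   # 10 * 4**2 + 2 = 162 vertices
```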
Marcelo,
If you want to devise algorithms to generate any class of geodesic spheres, you can do it here:
http://thomson.phy.syr.edu/thomsonapplet.htm
Start by using the "custom(m,n)" option, select your desired parameters, then hit the "pause" button. Switch to "lattice energy" and hit the "Auto" button.
If you're intimately familiar with Java, you can save the "jar" file(s) for this app and examine the contents, to reverse-engineer the algorithms.
BTW, this Java app also has a "File" menu option which can open a new window listing the "Point set" (vertex coordinates). I copy and paste them into an Excel spreadsheet, from which I can generate a "csv" file that can subsequently be imported into 3D graphics programs.
Taff
