Does anyone know if it is possible to sort dicom files if there is no SliceLocation attribute?
By "sort" I mean arrange the images longitudinally (in a craniocaudal orientation) with the most superior images first followed by the inferior images.
After reviewing the forums it appears SliceLocation is an optional parameter.
I have access to the following attributes in my dicom data set.
(0008, 0018) SOP Instance UID UI: ID_32fede0dc
(0008, 0060) Modality CS: 'CT'
(0010, 0020) Patient ID LO: 'ID_4c8ee851'
(0020, 000d) Study Instance UID UI: ID_a362744476
(0020, 000e) Series Instance UID UI: ID_1ec35eec9c
(0020, 0010) Study ID SH: ''
(0020, 0032) Image Position (Patient) DS: [-115, -1, 146.599976]
(0020, 0037) Image Orientation (Patient) DS: [1, 0, 0, 0, 1, 0]
(0028, 0002) Samples per Pixel US: 1
(0028, 0004) Photometric Interpretation CS: 'MONOCHROME2'
(0028, 0010) Rows US: 512
(0028, 0011) Columns US: 512
(0028, 0030) Pixel Spacing DS: [0.44921875, 0.44921875]
(0028, 0100) Bits Allocated US: 16
(0028, 0101) Bits Stored US: 12
(0028, 0102) High Bit US: 11
(0028, 0103) Pixel Representation US: 0
(0028, 1050) Window Center DS: [00036, 00036]
(0028, 1051) Window Width DS: [00080, 00080]
(0028, 1052) Rescale Intercept DS: "-1024.0"
(0028, 1053) Rescale Slope DS: "1.0"
(7fe0, 0010) Pixel Data OW: Array of 524288 elements
Initially I thought Image Position (Patient) might work, but this does not seem to arrange the images in sequential order.
I need a sequence because the temporal information is relevant for the detection of abnormalities.
Anyone have any bright ideas or is this simply not possible?
I am using:
pydicom 1.4.2
python 3.6.9
Sorting images by location is best done using Image Position (Patient) [(0020,0032)] if that tag is available (as it always is, for example, in CT and MR images); this is the most reliable way to do it.
As the tag contains three values representing the x, y and z coordinates in the DICOM patient coordinate system, you should sort by the main axis, i.e. the axis along which the position changes most. In most cases you probably already know that axis (in your case you want to sort cranio-caudally, so the axis should point in that direction), or you can easily check it by comparing slice positions, but you can also calculate it from Image Orientation (Patient).
Image Orientation (Patient) [(0020,0037)] contains the normalized row and column vectors. To get the main axis, take their cross product; in the resulting vector, the component with the largest absolute value indicates the main axis.
In your example the row and column vectors are (1, 0, 0) and (0, 1, 0); their cross product is (0, 0, 1), which means that the main axis is the z axis (i.e. the cranio-caudal direction in the DICOM coordinate system).
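A minimal sketch of this approach with pydicom and NumPy (the directory and glob pattern are assumptions; only Image Position (Patient) and Image Orientation (Patient) from the dump above are needed):
import glob
import numpy as np
import pydicom

# Read all slices of one series (directory and pattern are assumptions)
slices = [pydicom.dcmread(f) for f in glob.glob("series/*.dcm")]

def position_along_normal(ds):
    # Project Image Position (Patient) onto the slice normal,
    # i.e. the cross product of the row and column direction cosines
    iop = np.asarray(ds.ImageOrientationPatient, dtype=float)
    normal = np.cross(iop[:3], iop[3:])
    return float(np.dot(np.asarray(ds.ImagePositionPatient, dtype=float), normal))

# For axial slices the normal points toward the head (+z), so the most
# superior slice has the largest projection -> sort in descending order
slices.sort(key=position_along_normal, reverse=True)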
Related
By direction I mean, for example, from a patient's head to bottom or from his bottom to head. The chest CT scans I have seen so far indicate that the slice with Instance Number 1 is usually the uppermost one in the body, but I don't know whether this is part of the standard or whether there are other tags I should inspect to determine it.
There is no rule in DICOM that requires the Instance Number to be related to the slice position in a particular way. The link posted by Bartloimiej shows that there is a rule for how the slice coordinates defined by Image Position (Patient) (0020,0032) and Image Orientation (Patient) (0020,0037) are related to directions in the patient's body (head, feet, etc.).
So if you want to apply spatial ordering, these attributes are what you want to use. Slice Location (0020,1041) will not help you either:
C.7.6.2.1.2 [...] This information is relative to an unspecified implementation specific reference point.
For original (i.e. Image Type (0008,0008) is ORIGINAL\PRIMARY...) CT slices, it is quite safe to assume that some growth in the Z-Direction is always present in a volumetric dataset. But for MRI or for reconstructed CT-slices (MPR), you may find datasets in which slices are parallel to the xz or yz plane. If your application is supposed to handle such images, make sure to avoid division by zero...
Yes, the standard defines it. DICOM PS3.3, part C.7.6.2:
The direction of the axes is defined fully by the patient's orientation.
If Anatomical Orientation Type (0010,2210) is absent or has a value of BIPED, the x-axis is increasing to the left hand side of the patient. The y-axis is increasing to the posterior side of the patient. The z-axis is increasing toward the head of the patient.
There is also a tag (0020,0037), Image Orientation (Patient), which relates the orientation of the image rows and columns to that patient-based coordinate frame. In trunk CT it is almost always 1 0 0 0 1 0 (no rotation) and you don't need to deal with it. Otherwise, see the comments under the link above.
You are correct. The chest CT series are sorted from head to feet. The slice closest to the head should have the lowest Instance Number.
I don't know if this is defined by the DICOM standard or not, but I have seen a lot of DICOM images and the convention is this:
AXIAL - sorted by Z axis high to low (head to feet)
CORONAL - sorted by Y axis high to low (back to front)
SAGITTAL - sorted by X axis low to high (right to left)
Notice in all cases, the first slice in the series will be farthest from the observer.
If you need to generate Instance Number, you should sort the images by the dot product of Image Position (Patient) and (1,-1,-1), from low to high. In the rare degenerate case where all dot products are the same, I don't know of a standard rule; pick another direction to sort by, and (0,-1,-1) would probably be a good choice.
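A hedged sketch of that numbering rule with pydicom datasets (the helper name and the assumption that all datasets belong to one series are mine; the direction vector comes from the text above):
import numpy as np

def assign_instance_numbers(datasets, direction=(1.0, -1.0, -1.0)):
    # Sort by the dot product of Image Position (Patient) with the given
    # direction, low to high, and number the slices accordingly
    def key(ds):
        ipp = np.asarray(ds.ImagePositionPatient, dtype=float)
        return float(np.dot(ipp, np.asarray(direction, dtype=float)))
    for i, ds in enumerate(sorted(datasets, key=key), start=1):
        ds.InstanceNumber = i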
EDIT: I just discussed this with a friend who is more experienced. He said it varies. Some departments prefer back to front order, some prefer front to back. Also some DICOM viewers will give users the choice of how the slices are sorted (by Instance Number, Slice Location, IPP, Content Time, etc)
If the Image Orientation (Patient) tag (0020,0037) reads [1,0,0,0,1,0] and the Patient Position tag (0018, 5100) reads ‘HFS’, how do I interpret Slice Location tag (0020,1041), assuming that it exists?
I know that it represents the 'relative position of the image plane in millimeters'; I'm just having trouble relating the end points of the range to the z axis in the DICOM Reference Coordinate System (RCS).
Example: I have a sequence of Slice Location numbers in the range [-1873.382, -771.782].
Since the numbers are increasing and in the DICOM RCS, the Z axis increases in the Inferior to Superior direction, can I conclude that '-1873.382' is the position of the most Inferior slice?
Also, just to note: the z coordinate of my Image Position (Patient) (0020,0032) attribute for each slice contains the same information as my Slice Location tag.
I still advise against using the Slice Location attribute for sorting. In MR imaging, slices can have arbitrary orientation, and even in CT the gantry can be tilted, so you cannot rely on all slices being parallel to the xy-plane. So you actually do not know which axis the Slice Location refers to.
What I do is subtract the ImagePositionPatient values of two slices, which gives me the direction in which the slices in the stack are moving. Ordering can then be done by the magnitude of these difference vectors.
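A brief sketch of that difference approach (assuming ds_a and ds_b are pydicom datasets of two adjacent slices from the same series):
import numpy as np

def stack_step(ds_a, ds_b):
    # Direction and distance between two slices, from Image Position (Patient) only
    diff = (np.asarray(ds_b.ImagePositionPatient, dtype=float)
            - np.asarray(ds_a.ImagePositionPatient, dtype=float))
    distance = float(np.linalg.norm(diff))
    return diff / distance, distance  # unit stack direction, step in mm
The whole stack can then be ordered by the dot product of each slice's Image Position (Patient) with that unit direction.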
Image Position (Patient) (0020, 0032) gives the x, y and z coordinates of the upper left hand corner of the image, and Image Orientation (Patient) (0020, 0037) gives the direction of the first row and the first column with respect to the patient (further defined by the patient orientation). The x-axis increases towards the left hand side of the patient, the y-axis increases towards the posterior side, and the z-axis increases towards the head of the patient.
In your case, since the z-axis is the one that changes and it increases towards the head, I would use the z-axis values for sorting the stack; this is more reliable than Slice Location. Yes, the smallest value (-1873.382) is the most inferior slice.
Just wondering why we need the overlay and when we would need it?
I have a scout image with an overlay; what do these dots mean, and what do these numbers or fractions mean?
How are these numbers drawn on the image?
The DICOM standard allows two specific types of overlays (graphic and ROI) along with the image; overlays are stored as a 1-bit image in the Overlay Data (60xx, 3000) attribute. A dataset can have up to 16 separate overlay planes (using the repeating groups encoding).
An overlay plane that represents a region of interest (ROI) has the value "R" in the Overlay Type (60xx, 0040) attribute, and ROI Area (60xx, 1301), ROI Mean (60xx, 1302) and ROI Standard Deviation (60xx, 1303) can be used for the corresponding ROI values. All bits representing the ROI have a value of 1 and mark the pixels of the underlying image data that lie within the ROI boundaries.
A graphic overlay has the value "G" in the Overlay Type (60xx, 0040) attribute and is used for reference marks (reference lines), graphic annotations, bitmap text, etc. Again, all visible values in an overlay plane are set to 1.
Overlay Rows (60xx, 0010) and Overlay Columns (60xx, 0011) specify the width and height of the overlay plane. Overlay Bits Allocated is always 1 and Overlay Bit Position is 0 (it was used in a previous version of the standard and its usage has been retired). Overlay Origin (60xx, 0050) describes the location of the first overlay point with respect to the pixels in the image; 1\1 means the overlay origin coincides with the upper left pixel of the image.
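As a hedged illustration, here is a sketch that unpacks the first overlay plane (group 0x6000) with pydicom and NumPy; the file name is an assumption, and DICOM packs the overlay bits little-endian within each byte:
import numpy as np
import pydicom

ds = pydicom.dcmread("scout_with_overlay.dcm")  # file name is an assumption

group = 0x6000                   # first overlay plane
rows = ds[group, 0x0010].value   # Overlay Rows
cols = ds[group, 0x0011].value   # Overlay Columns
packed = ds[group, 0x3000].value # Overlay Data, 1 bit per pixel

bits = np.unpackbits(np.frombuffer(packed, dtype=np.uint8), bitorder="little")
overlay = bits[:rows * cols].reshape(rows, cols)  # 1 = visible overlay pixel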
Overlays can be used to display any data over an image; you could, for example, allow users to add annotations or graphic marks. You should not modify the original pixel data, so the annotation is stored in a separate layer.
In your case, the creator of the overlay should explain its meaning.
The meaning of the overlay numbers is:
2/16 -> series number 2 and slice number 16
I have extracted an edge using image processing, and then selected pixel coordinates on the extracted edge using xclick. Is this correct, or do I need to reverse the y-axis coordinate? (The extracted edge is white on a black background.)
I want to extract the pixel coordinates of the edge automatically, not by mouse selection. Is there any command available in Scilab for this? (I use a Canny edge detector and a morphological filter to extract the edge.)
Please give me some suggestions
Thanks
1.) Whether to reverse the y coordinate or not depends on the further processing. Any coordinate system can be used if you only need relative measurements and the true orientation of your features is not important (e.g. reversing top and bottom makes no difference if you simply want to count objects or droplets). However, if you want to indicate the features you found by plotting a dot, a line, or a rectangle (e.g. with plot2d or xrect) or a number (e.g. with xnumb) over the image, then it is necessary to match the two coordinate systems. I recommend this second option, plotting your result over the original image, since this is the easiest way to check your results.
2.) Automatic coordinate extraction can be done with the find function: it returns the indices of the matrix where the expression is true.
IM=[0,0,0,1;0,0,0,1;0,1,1,1;1,1,0,0]; //edge image, edge = 1, background = 0
disp(IM,"Edge image");
[row,col]=find(IM==1); //row & column indices where IM = 1 (= edge)
disp([row',col'],"Egde coordinates (row, col)");
If your "Egde image" marks the edges not with 1 (or 255, pure white pixel) but with a relatively high number (bright pixel), then you can modify the logical expression of the find function to detect pixels with a value above a certain threshold:
[row,col]=find(IM>0.8); //if edges > a certain threshold, e.g. 0.8
EDIT: For your specific image:
Try the following code:
imagefile="d:\Attila\PROJECTS\Scilab\Stackoverflow\MORPHOLOGICAL_FILTERING.jpg";
//you have to modify this path!
I=imread(imagefile);
IM=imcrop(I,[170,100,950,370]); //discard the thick white border of the image
scf(0); clf(0);
ShowImage(IM,'cropped image');
threshold=100; //try different values between 0-255 (black - white)
[row,col]=find(IM>threshold);
imheight=size(IM,"r"); //image height
row=imheight-row+1; //reverse y axes coordinates (0 is at top)
plot2d(col,row,style=0); //plot over the image (zoom to see the dots)
scf(1); clf(1); //plot separate graph
plot2d(col,row,style=0);
If you play with the threshold parameter, you will see how the darker or whiter pixels are found.
To show an MPR view based on DICOMs, I've made a 3D array from a series of DICOM files, and I display it from the coronal and sagittal sides.
My 3D array includes:
- z = number of DICOM slices
- c = column count of each DICOM slice
- r = row count of each DICOM slice
But I have a problem: when there is some space between slices, an image made this way does not show a correct view, because I cannot simulate the distance between them!
I don't know how to calculate the space between slices. I want to add extra space between slices; for example, if the space between slices is 4, I have to insert 4 extra slices along z.
I hope my meaning is clear.
Image Position (Patient) and Image Orientation (Patient) are the only two attributes you should ever use when computing the distance between slices. For more details see here or here. For an actual implementation see here; that implementation also takes into account the Frame of Reference UID, as well as gantry/detector tilt.
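A minimal sketch of that computation (assuming all datasets belong to one series with a common orientation; gantry tilt and Frame of Reference UID are not handled here):
import numpy as np

def slice_spacings(datasets):
    # Distances between consecutive slices, computed only from
    # Image Orientation (Patient) and Image Position (Patient)
    iop = np.asarray(datasets[0].ImageOrientationPatient, dtype=float)
    normal = np.cross(iop[:3], iop[3:])
    positions = sorted(
        float(np.dot(np.asarray(ds.ImagePositionPatient, dtype=float), normal))
        for ds in datasets)
    return np.diff(positions)  # ideally (nearly) constant within one series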
This question is the question #1 asked on comp.protocols.dicom.
Please see ImageJ bug
I believe the answer from @Matt is erroneous, let me clarify a few things here.
No, DICOM does not have an attribute called 'Spacing Between Slices'; the claim that it does is very wrong (technically it does not even mean anything).
DICOM defines IODs, which define the set of required attributes available in an SOP Class Instance. Let's consider two very common cases: CT Image Storage (legacy) and MR Image Storage (legacy). So we need to compare the sets of attributes of the two:
CT Image IOD Modules
MR Image IOD Modules
Now let's say we want to check whether MR Image Storage supports Spacing Between Slices; it is easy to jump to:
MR Image Module Attributes
However, it is much harder to find this attribute for CT Image Storage, simply because this attribute does not exist there (per the standard). So the only time you would find such an attribute would be within an extended SOP Class (some vendors may decide that the Spacing Between Slices attribute makes sense within their extended SOP Class Instance).
Mixing in the same answer both Spacing Between Slices and Slice Thickness (0018,0050) is very confusing for new users.
I agree that Slice Thickness is perfectly defined in the standard for both CT Image Storage and MR Image Storage, since they both include the Image Plane Module Attributes; however, let's not mistake one for the other.
I found a nice summary of Slice Thickness vs. Spacing Between Slices here (if you scroll to the section, you can even play the small demo):
CT Physics: CT Reconstruction and Helical CT
In step and shoot CT the Slice Thickness and Spacing Between Slices are identical so there is no big issue here. However for helical CT those values are not the same and can vary in any direction (they are independent).
[…] Slice Thickness is determined by the detector width and pitch, while reconstruction interval (= Spacing Between Slices) can be chosen arbitrarily. […]
In conclusion, to compute the Spacing Between Slices (= reconstruction interval) safely, it is much better to use Image Orientation (Patient) and Image Position (Patient), since they are available in both MR Image Storage and CT Image Storage instances.
DICOM has an attribute called Spacing Between Slices (0018, 0088) that gives the distance between two adjacent slices (perpendicular to the image plane) and it also has an attribute called Slice Thickness (0018, 0050) that gives the thickness of the imaged slice (the image plane exists at the center of the slice, with half of the volume above the plane and half below). Image Position (Patient) (0020, 0032) and Image Orientation (Patient) (0020, 0037) are also useful attributes for computing spatial relationships between slices.
For a more detailed explanation, see section C.7.6.2 of part 3 of the DICOM standard. (p. 409)
WARNING: Please be aware that different vendors use the same DICOM tags for different things. For instance, the attribute Spacing Between Slices (0018, 0088) means two different things depending on the vendor. See this table as a guide, and this thread for an explanation.
As discussed in the previous answers, it is not straightforward to calculate the space between DICOM slices. Let's phrase the question differently: how do we store DICOM slices in a 3D volume, i.e. a list of equally spaced slices for rendering (I guess you want to upload it into a 3D texture)?
This is because the actual position at which a CT slice is captured might not be identical to the position selected by the radiologist. A dataset might have been configured to capture 1 mm slices, but the CT may return slices at positions 0.0 mm, 0.997 mm, 2.010 mm, ...
If you use an attribute such as Spacing Between Slices to calculate the size of the 3D volume, you will easily run into subtle rounding errors. Don't go there.
Rather, it is essential to use Image Position (Patient) (0020, 0032) and then perform an optimization to figure out how the slices fit into a grid (see the sketch after the list below).
Typical problems in practice to consider:
Missing slices (interpolate? Gap?)
Out of step slices (hardware defect? data defect?)
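As a hedged illustration of such a consistency check (the tolerance is an arbitrary assumption; the positions are the projections of Image Position (Patient) onto the stack normal, as in the earlier sketches):
import numpy as np

def check_slice_grid(positions, rel_tolerance=0.01):
    # Estimate the nominal spacing and flag gaps or out-of-step slices;
    # positions are slice positions along the stack normal, in mm
    pos = np.sort(np.asarray(positions, dtype=float))
    diffs = np.diff(pos)
    spacing = float(np.median(diffs))  # robust nominal spacing
    irregular = np.where(np.abs(diffs - spacing) > rel_tolerance * spacing)[0]
    return spacing, irregular  # indices where slices are missing or shifted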