Which format is used for the bad pixel image (DICOM tag 0014,3080)?

I wrote a DICOM/DICONDE file library. Now I would like to add a bad pixel image to the file.
According to DICOM standard part 6, the bad pixel image is stored in tag (0014,3080) with value representation OB. But what is the exact format? I guess it is one bit per pixel, with 0 meaning the pixel is OK and 1 meaning it is bad.
Can someone confirm that?
Where can I find some sample DICOM files containing a bad pixel image?
EDIT: ASTM E 2339-15 does not specify the bad pixel image format.
Any advice appreciated.

This tag belongs to the DICONDE standard, which is used in industry for imaging related to non-destructive testing. It is not included in any of the DICOM IODs for medical imaging (see part 3 - you will not find any reference to this attribute).
I suspect you want to use your DICOM library for medical imaging - in that application domain, this attribute is not used.
If you really want to support DICONDE, the DICONDE standard should describe the syntax and semantics. I would have had a look, but unlike DICOM it is not available for free.

The bad pixel image format is described in table 5 of the ASTM E 2767-11 document. This is close to what I guessed: a byte image with the same number of rows and columns as the Pixel Data (7FE0,0010). The pixel data of this image contains a "1" for a good pixel and a "0" for a bad pixel.
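For illustration, here is a minimal pydicom sketch (my own addition, not part of the ASTM text) of how such a map could be built and attached, following the byte-per-pixel, 1 = good / 0 = bad convention described above; the file name and defective-pixel coordinates are placeholders:

import numpy as np
import pydicom
from pydicom.tag import Tag

ds = pydicom.dcmread("detector_image.dcm")  # placeholder input file

# One byte per pixel, same Rows/Columns as the Pixel Data (7FE0,0010):
# start with "all good" (1) and flag known defective detector pixels with 0.
bad_pixel_map = np.ones((ds.Rows, ds.Columns), dtype=np.uint8)
bad_pixel_map[120, 45] = 0  # example coordinates of a dead pixel

# Store the raw bytes in the Bad Pixel Image element (0014,3080) with VR OB.
raw = bad_pixel_map.tobytes()
if len(raw) % 2:  # DICOM element values must have even length
    raw += b"\x00"
ds.add_new(Tag(0x0014, 0x3080), "OB", raw)
ds.save_as("detector_image_with_bad_pixel_map.dcm")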

Related

hyperspectral pix2pix - image to image translation

I would like to convert a 3D image (a hyperspectral cube) into a 2D one with 3 channels.
I have all the source and target images, paired.
What should be changed in the code in order to support this?
(see a good implementation with explanation by Jason Brownlee here:
https://machinelearningmastery.com/how-to-develop-a-pix2pix-gan-for-image-to-image-translation)
Thanks,
Eli

DICOM pixel data lossless rendering and representation

I quote:
DICOM supports up to 65,536 (16 bits) shades of gray for monochrome image display, thus capturing the slightest nuances in medical imaging. In comparison, converting DICOM images into JPEGs or bitmaps (limited to 256 shades of gray) often renders the images unacceptable for diagnostic reading. - Digital Imaging and Communications in Medicine (DICOM): A Practical Introduction and Survival Guide by Oleg S. Pianykh
As a beginner in image processing I am used to working with color and monochrome images with 256 levels. For DICOM images, in which representation should I process the pixels without first rendering them down to 256 levels, given the loss of information that would cause?
Note: if you can think of a better title for this question, please feel free to change it; I had a hard time coming up with a good one.
First you have to put the image's pixels through the Modality LUT transform (Rescale Slope/Intercept or a LUT) in order to transform modality-dependent stored values into known units (e.g. Hounsfield or Optical Density).
Then, all your processing must be done on the entire range of values (do not convert 16-bit values to 8-bit).
The presentation (visualization) can then be performed on scaled 8-bit values, usually by passing the data through the VOI LUT transform (window center/width or a LUT).
See this for the Modality transform: rescale slope and rescale intercept
See this for Window/Width: Window width and center calculation of DICOM image
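As a rough illustration of those two steps (my own sketch, not from the answer above), the following numpy/pydicom snippet applies the linear Rescale Slope/Intercept and then a simple window; the file name and fallback window values are assumptions, multi-valued Window Center/Width is reduced to its first entry, and explicit Modality/VOI LUT sequences are not handled:

import numpy as np
import pydicom

def first(value):
    # Window Center/Width may be multi-valued; use the first entry.
    try:
        return float(value[0])
    except (TypeError, IndexError):
        return float(value)

ds = pydicom.dcmread("ct_slice.dcm")  # placeholder file name
stored = ds.pixel_array.astype(np.float32)

# 1) Modality transform: stored values -> known units (e.g. Hounsfield for CT).
slope = float(getattr(ds, "RescaleSlope", 1.0))
intercept = float(getattr(ds, "RescaleIntercept", 0.0))
values = stored * slope + intercept  # do all processing on this full-range array

# 2) Presentation only: window the full-range data down to 8 bit for display.
center = first(ds.WindowCenter) if "WindowCenter" in ds else float(values.mean())
width = first(ds.WindowWidth) if "WindowWidth" in ds else float(values.max() - values.min())
lo, hi = center - width / 2.0, center + width / 2.0
display = (np.clip((values - lo) / max(hi - lo, 1e-6), 0.0, 1.0) * 255.0).astype(np.uint8)
# 'display' is only for visualization; keep analyzing 'values'.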

Converting DICOM images into nrrd images preserving pixel spacing

I am trying to convert a series of MRI DICOM images (.dcm) into the .nrrd format. I found this guide for doing it in 3D Slicer and I managed to do it. The problem is that the new .nrrd image has lost the pixel spacing of the original DICOM image.
In the additional settings, while converting the image, I also unticked the "Compress" box, but the problem is still there. For instance, checking the two images (original .dcm and new .nrrd) in ImageJ I get this:
[Screenshot: the two images (nrrd on the left, dcm on the right) with the old and the new pixel spacing highlighted]
Does anyone know how to solve this? Any other alternative (that preserves the pixel spacing) is welcome too.
Thanks a lot in advance,
Tommaso
Your DICOM file is corrupted. Some mandatory tags are missing (e.g. (0020,0037), Image Orientation (Patient)), so Slicer cannot properly compute the spacing. It even shows you the following warning in the "DICOM Browser" window (after clicking the Examine button): "Reference image in series does not contain geometry information. Please use caution".
If you cannot fix the original images, you can apply all the required spacing elements manually. Either do it in Slicer before exporting (module Volumes -> Volume Information), or fix the nrrd files themselves. Open them in your favorite text editor:
NRRD0004
# Complete NRRD file format specification at:
# http://teem.sourceforge.net/nrrd/format.html
type: short
dimension: 3
space: left-posterior-superior
sizes: 512 512 1
space directions: (1,0,0) (0,1,0) (0,0,1)
kinds: domain domain domain
endian: little
encoding: gzip
space origin: (0,0,0)
You have to update this line:
space directions: (0.507812,0,0) (0,0.507812,0) (0,0,4)
The true spacing values are in tags (0028,0030) (X and Y) and (0018,0050) (Z).
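A small pydicom sketch (my own addition, with placeholder file names) that reads those two tags from one of the original slices and prints the corrected line to paste into the .nrrd header:

import pydicom

ds = pydicom.dcmread("original_slice.dcm")  # one slice of the source series

row_mm, col_mm = (float(v) for v in ds.PixelSpacing)  # (0028,0030): spacing between rows, between columns
slice_mm = float(ds.SliceThickness)                   # (0018,0050): slice thickness

# Paste the printed line over the "space directions" line of the NRRD header.
print(f"space directions: ({col_mm},0,0) (0,{row_mm},0) (0,0,{slice_mm})")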

How to calculate the space between DICOM slices for MPR?

I am showing an MPR view based on DICOM slices. I've built a 3D array from a series of DICOM files, and I show it from the coronal and sagittal sides.
My 3D array dimensions are:
- z = number of DICOM slices
- c = column count of each slice
- r = row count of each slice
But I have a problem. When there is some space between slices, an image built this way does not show a correct view, because I do not account for the distance between them.
I don't know how to calculate the space between slices. I want to add extra space between the slices; for example, if the space between slices is 4, I have to insert 4 extra slices along z.
I hope my meaning is clear.
Image Position (Patient) and Image Orientation (Patient) are the only two attributes you should ever use when computing the distance between slices. For more details see here or here. For an actual implementation see here; that implementation also takes the Frame of Reference UID as well as Gantry/Detector Tilt into account.
This is the #1 question asked on comp.protocols.dicom.
Please also see this ImageJ bug.
I believe the answer from @Matt is erroneous; let me clarify a few things here.
No, DICOM does not simply "have an attribute called Spacing Between Slices". That statement is very wrong (technically it does not even mean anything on its own).
DICOM defines IODs, which define the set of required attributes available in an SOP Class Instance. Let's consider two very common cases: CT Image Storage (legacy) and MR Image Storage (legacy). So we need to compare the two sets of attributes:
CT Image IOD Modules
MR Image IOD Modules
Now let's say we want to check that MR Image Storage supports Spacing Between Slices; it is easy to jump to:
MR Image Module Attributes
However, it is much harder to find this attribute for CT Image Storage, simply because it does not exist (per the standard). So the only place you would find such an attribute is within an extended SOP Class (some vendors may decide that the Spacing Between Slices attribute makes sense within their extended SOP Class Instances).
Mixing both Spacing Between Slices and Slice Thickness (0018,0050) in the same answer is very confusing for new users.
I agree that Slice Thickness is perfectly well defined in the standard for both CT Image Storage and MR Image Storage, since both include the Image Plane Module Attributes; however, let's not substitute one for the other.
I found a nice summary of Slice Thickness vs. Spacing Between Slices here (if you scroll to the section, you can even play the small demo):
CT Physics: CT Reconstruction and Helical CT
In step-and-shoot CT the Slice Thickness and Spacing Between Slices are identical, so there is no big issue there. However, for helical CT those values are not the same and can vary in any direction (they are independent).
[…] Slice Thickness is determined by the detector width and pitch, while reconstruction interval (= Spacing Between Slices) can be chosen arbitrarily. […]
In conclusion, to compute the Spacing Between Slices (= reconstruction interval) safely, it is much better to use Image Orientation (Patient) and Image Position (Patient), since they are available in both MR Image Storage and CT Image Storage instances.
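To make that concrete, here is a hedged numpy/pydicom sketch of the projection (my own addition; the glob pattern is a placeholder). The slice normal is the cross product of the direction cosines in Image Orientation (Patient), and each Image Position (Patient) is projected onto it:

import glob
import numpy as np
import pydicom

slices = [pydicom.dcmread(f) for f in glob.glob("series/*.dcm")]  # placeholder pattern

# Slice normal = cross product of the row and column direction cosines (0020,0037).
iop = np.asarray(slices[0].ImageOrientationPatient, dtype=float)
normal = np.cross(iop[:3], iop[3:])

# Project each Image Position (Patient) (0020,0032) onto the normal and sort.
positions = sorted(
    float(np.dot(normal, np.asarray(s.ImagePositionPatient, dtype=float)))
    for s in slices
)
print("spacing between adjacent slices:", np.diff(positions))

The differences between adjacent projected positions are the actual reconstruction intervals, whatever Spacing Between Slices or Slice Thickness may claim.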
DICOM has an attribute called Spacing Between Slices (0018, 0088) that gives the distance between two adjacent slices (perpendicular to the image plane) and it also has an attribute called Slice Thickness (0018, 0050) that gives the thickness of the imaged slice (the image plane exists at the center of the slice, with half of the volume above the plane and half below). Image Position (Patient) (0020, 0032) and Image Orientation (Patient) (0020, 0037) are also useful attributes for computing spatial relationships between slices.
For a more detailed explanation, see section C.7.6.2 of part 3 of the DICOM standard. (p. 409)
WARNING: Please be aware that different vendors use the same DICOM tags to mean different things. For instance, the attribute Spacing Between Slices (0018,0088) means two different things depending on the vendor. See this table as a guide, and this thread for an explanation.
As discussed in the previous answers, it is not straightforward to calculate the space between DICOM slices. Let's phrase the question differently: how do we store DICOM slices in a 3D volume, i.e. a list of equally spaced slices for rendering (I guess you want to upload it into a 3D texture)?
This is because the actual position at which a CT slice is captured might not be identical to the position selected by the radiologist. A dataset might have been configured to capture 1 mm slices, but the CT returns slices at positions 0.0 mm, 0.997 mm, 2.010 mm, ...
If you use an attribute such as Spacing Between Slices to calculate the size of the 3D volume, you will easily run into subtle rounding errors. Don't go there.
Rather, it is essential to use Image Position (Patient) (0020,0032) and then perform an optimization to figure out how the slices can be fitted into a grid.
Typical problems to consider in practice (see the sketch after this list):
Missing slices (interpolate? gap?)
Out-of-step slices (hardware defect? data defect?)
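Continuing from the projected positions above, a rough sketch (my own addition; the tolerance is an arbitrary assumption) of fitting them to a regular grid and flagging exactly those two problems:

import numpy as np

def fit_to_grid(positions, tolerance_mm=0.1):
    # Estimate the nominal slice spacing and report gaps or out-of-step slices.
    positions = np.sort(np.asarray(positions, dtype=float))
    steps = np.diff(positions)
    nominal = float(np.median(steps))  # robust estimate of the slice spacing
    for i, step in enumerate(steps):
        if abs(step - nominal) <= tolerance_mm:
            continue
        ratio = step / nominal
        if abs(ratio - round(ratio)) < 0.05 and round(ratio) > 1:
            print(f"gap after slice {i}: about {int(round(ratio)) - 1} slice(s) missing")
        else:
            print(f"slice {i + 1} is out of step: {step:.3f} mm vs nominal {nominal:.3f} mm")
    return nominal

# Example: a nominally 1 mm series with one slice missing around 3 mm.
fit_to_grid([0.0, 0.997, 2.010, 4.0, 5.0])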

Air in synthetic DICOM

I'm generating a synthetic DICOM image with the Insight Toolkit (using itk::GDCMImageIO) and I've found two problems:
VolView fails to load my DICOM (with the message: "Sorry, the file cannot be read"). ITK-SNAP opens and shows it OK.
I'm trying to use this image in a Stryker surgical navigator. The image loads OK, but the padding pixels are shown at a certain gray level, so a box (actually the bounding box of the image) becomes visible. If I load non-synthetic DICOMs this doesn't happen.
This is what gdcminfo is showing:
MediaStorage is 1.2.840.10008.5.1.4.1.1.7 [Secondary Capture Image Storage]
TransferSyntax is 1.2.840.10008.1.2.1 [Explicit VR Little Endian]
NumberOfDimensions: 2
Dimensions: (33,159,1)
Origin: (0,0,0)
Spacing: (1,1,1)
DirectionCosines: (1,0,0,0,1,0)
Rescale Intercept/Slope: (0,1)
SamplesPerPixel :1
BitsAllocated :16
BitsStored :16
HighBit :15
PixelRepresentation:0
ScalarType found :UINT16
PhotometricInterpretation: MONOCHROME2
PlanarConfiguration: 0
TransferSyntax: 1.2.840.10008.1.2.1
Orientation Label: AXIAL
I'm using unsigned short as the pixel type in the itk::Image object and I'm setting all the padding pixels to 0 (zero), as suggested by the DICOM standard for unsigned scalar images. gdcminfo does not show it, but I'm also setting the Pixel Padding Value (0028,0120) field to zero.
I would really appreciate any hint about this problem.
Thanks in advance,
Federico
After a lot of experimentation, I'll answer my own question. I've found that some DICOM readers simply assume you're using the Hounsfield scale if the DICOM file type is CT. In that case you have to use short as the pixel type and use -1024 for air (anything below -1000 is air on the Hounsfield scale), and they will then render the image OK. The readers I've been experimenting with use neither the Pixel Padding Value field nor the Rescale Intercept/Slope. But if you use ITK-SNAP/VolView/3D Slicer you won't have any problem as long as you specify those fields.
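A hedged pydicom illustration of that fix (the original question uses ITK/C++; this only shows the idea of switching to signed pixels with air at -1024, and the file names are placeholders):

import numpy as np
import pydicom

ds = pydicom.dcmread("synthetic.dcm")  # the generated secondary-capture image
pixels = ds.pixel_array.astype(np.int16)

pixels[pixels == 0] = -1024  # padding/background pixels become air in Hounsfield units
ds.PixelRepresentation = 1   # signed pixels, so -1024 is representable
ds.RescaleIntercept = 0
ds.RescaleSlope = 1
ds.PixelData = pixels.tobytes()
ds.save_as("synthetic_hu.dcm")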
DICOM is a VERY tricky file format. You will need to carefully read and understand the conventions of the visualization platform, the storage platform, and the type of medical image you are trying to synthesize.
This is very likely NOT an error in the toolkit, but an error in what is being defined in the file itself.
