I'm generating a synthetic DICOM image with the Insight Toolkit (using itk::GDCMImageIO) and I've found two problems:
1. VolView fails to load my DICOM (with the message: "Sorry, the file cannot be read"). ITK-SNAP opens and displays it fine.
2. I'm trying to use this image in a Stryker surgical navigator. The image loads fine, but the padding pixels are rendered in a certain gray level, so a box (actually the image's bounding box) becomes visible. This doesn't happen when I load non-synthetic DICOMs.
This is what gdcminfo is showing:
MediaStorage is 1.2.840.10008.5.1.4.1.1.7 [Secondary Capture Image Storage]
TransferSyntax is 1.2.840.10008.1.2.1 [Explicit VR Little Endian]
NumberOfDimensions: 2
Dimensions: (33,159,1)
Origin: (0,0,0)
Spacing: (1,1,1)
DirectionCosines: (1,0,0,0,1,0)
Rescale Intercept/Slope: (0,1)
SamplesPerPixel :1
BitsAllocated :16
BitsStored :16
HighBit :15
PixelRepresentation:0
ScalarType found :UINT16
PhotometricInterpretation: MONOCHROME2
PlanarConfiguration: 0
TransferSyntax: 1.2.840.10008.1.2.1
Orientation Label: AXIAL
I'm using unsigned short as the pixel type of the itk::Image, and I'm setting all padding pixels to 0 (zero), as the DICOM standard suggests for unsigned scalar images. gdcminfo does not show it, but I'm also setting Pixel Padding Value (0028,0120) to zero.
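For reference, this is roughly how that tag is being set through the image's MetaDataDictionary (a simplified sketch of the writing code, not the full program; names are illustrative):

#include "itkGDCMImageIO.h"
#include "itkImage.h"
#include "itkImageFileWriter.h"
#include "itkMetaDataObject.h"
#include <string>

// Simplified sketch: padding pixels are already 0 in the buffer,
// and Pixel Padding Value (0028,0120) is set via the metadata dictionary.
using ImageType = itk::Image<unsigned short, 2>;

void WriteSyntheticDicom(ImageType::Pointer image, const std::string & fileName)
{
  itk::EncapsulateMetaData<std::string>(image->GetMetaDataDictionary(), "0028|0120", "0");

  itk::GDCMImageIO::Pointer gdcmIO = itk::GDCMImageIO::New();
  using WriterType = itk::ImageFileWriter<ImageType>;
  WriterType::Pointer writer = WriterType::New();
  writer->SetImageIO(gdcmIO);
  writer->SetFileName(fileName);
  writer->SetInput(image);
  writer->Update();
}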
I would really appreciate any hint about this problem.
Thanks in advance,
Federico
After a lot of experimentation, I'll answer my own question. I've found that some DICOM readers simply assume the Hounsfield scale if the modality of the DICOM files is CT. In that case you have to use short as the pixel type and set the padding to -1024 (below about -1000 is air on the Hounsfield scale), and the image renders fine. The readers I've been experimenting with use neither the Pixel Padding Value nor the Rescale Intercept/Slope. But with ITK-SNAP/VolView/3D Slicer you won't have any problem if you specify those fields.
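A minimal sketch of the change that made it work (short pixel type, padding filled with -1024; names are illustrative):

#include "itkImage.h"

// Sketch: short pixel type with all padding set to -1024 (air in Hounsfield units)
// instead of unsigned short with padding at 0.
using CTImageType = itk::Image<short, 2>;

CTImageType::Pointer MakeSyntheticSlice(unsigned int cols, unsigned int rows)
{
  CTImageType::Pointer image = CTImageType::New();
  CTImageType::SizeType size;
  size[0] = cols;
  size[1] = rows;
  CTImageType::RegionType region;
  region.SetSize(size);
  image->SetRegions(region);
  image->Allocate();
  image->FillBuffer(-1024); // everything that is not the object is "air"
  // ... then paint the actual object values on top of the padding ...
  return image;
}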
DICOM is a VERY tricky file format. You will need to carefully read and understand the conventions for the visualization platform, the storage platform, and the type of medical image you are trying to synthesize.
This is very likely NOT an error in the toolkit, but an error in what is being defined in the file itself.
I wrote a DICOM/DICONDE file library. Now I would like to add a bad pixel image to the file.
According to DICOM standard part 06, the bad pixel image is stored in tag (0014,3080) with VR OB. But what is the exact format? I guess it is one bit per pixel: 0 means the pixel is OK, 1 means it is bad.
Can someone confirm that?
Where can I find some sample DICOM files containing a bad pixel image?
EDIT: ASTM E 2339-15 does not specify the bad pixel image format.
Any advice appreciated.
This tag belongs to the DICONDE standard, a standard used in industry for imaging related to non-destructive testing. It is not included in any of the DICOM IODs for medical imaging (see part 3 - you will not find any reference to this attribute).
I suspect you want to use your DICOM library for medical imaging - in this application domain, this attribute is not used.
If you really want to support DICONDE, the DICONDE standard should describe the syntax and semantics. I would have had a look, but unlike DICOM it is not available for free.
The bad pixel image format is described in the ASTM E 2767-11 document, table 5. It is close to what I guessed, but byte-based and inverted: a byte image with the same number of rows and columns as the Pixel Data (7FE0,0010). The pixel data of this image contains a "1" for a good pixel and a "0" for a bad pixel.
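For anyone implementing this, here is a rough sketch of how such a map could be filled (a plain byte buffer with the same Rows/Columns as the Pixel Data; the helper name is made up):

#include <cstdint>
#include <utility>
#include <vector>

// Sketch: one byte per pixel, same Rows/Columns as Pixel Data (7FE0,0010).
// 1 = good pixel, 0 = bad pixel.
std::vector<uint8_t> MakeBadPixelMap(std::size_t rows, std::size_t cols,
                                     const std::vector<std::pair<std::size_t, std::size_t>> & badPixels)
{
  std::vector<uint8_t> map(rows * cols, 1); // start with every pixel marked good
  for (const auto & rc : badPixels)
    map[rc.first * cols + rc.second] = 0;   // mark (row, col) as bad
  return map;                               // this buffer goes into (0014,3080) as OB
}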
I quote:
DICOM supports up to 65,536 (16 bits) shades of gray for monochrome image display, thus capturing the slightest nuances in medical imaging. In comparison, converting DICOM images into JPEGs or bitmaps (limited to 256 shades of gray) often renders the images unacceptable for diagnostic reading. - Digital Imaging and Communications in Medicine (DICOM): A Practical Introduction and Survival Guide by Oleg S. Pianykh
As I am a beginner in image processing, I'm used to processing color and monochrome images with 256 levels. So for DICOM images, in which representation do I have to process the pixels without rendering them down to 256 levels, given the loss of information?
Note: if you can think of a better title for this question, please feel free to change it; I had a hard time and didn't come up with a good one.
First you have to put the image's pixels through the Modality LUT transform (rescale slope/intercept, or a LUT) in order to transform modality-dependent values into known units (e.g. Hounsfield units or optical density).
Then, all your processing must be done on the entire range of values (do not convert 16-bit values to 8-bit).
The presentation (visualization) can be performed with scaled 8-bit values, usually by passing the data through the VOI LUT transform (window width/center, or a LUT).
See this for the Modality transform: rescale slope and rescale intercept
See this for Window/Width: Window width and center calculation of DICOM image
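To make the two steps concrete, here is a small sketch (hedged: the linear forms only; real files may carry LUTs instead, and the slope/intercept/window values come from the DICOM header):

#include <algorithm>
#include <cstdint>

// Step 1: Modality transform -- stored values to physical units (e.g. Hounsfield).
double ModalityValue(uint16_t stored, double rescaleSlope, double rescaleIntercept)
{
  return stored * rescaleSlope + rescaleIntercept;
}

// Step 2 (display only): linear window/level down to 8 bits for presentation.
uint8_t DisplayValue(double value, double windowCenter, double windowWidth)
{
  const double low = windowCenter - windowWidth / 2.0;
  double t = (value - low) / windowWidth;
  t = std::clamp(t, 0.0, 1.0);
  return static_cast<uint8_t>(t * 255.0 + 0.5);
}

// All processing (filtering, segmentation, ...) should use ModalityValue's output;
// only DisplayValue collapses the range to 256 levels.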
I am trying to convert a series of MRI DICOM images (.dcm) into the .nrrd format. I found this guide for doing it in 3D Slicer and I managed to do it. The problem is that the new nrrd image that is created has lost the pixel spacing of the original DICOM image.
In the additional settings, while converting the image, I also unticked the "Compress" box, but the problem is still there. For instance, checking the two images (original .dcm and new .nrrd) in ImageJ I get this:
The two images (nrrd on the left and dcm on the right), with the old and the new pixel spacing highlighted
Does anyone know how to solve this? Any other alternative (that preserves the pixel spacing) is also welcome.
Thanks a lot in advance,
Tommaso
Your DICOM file is corrupted. Some mandatory tags are missing (e.g. (0020,0037), Image Orientation (Patient)), so Slicer cannot properly compute the spacing. It even shows you the following warning in the "DICOM Browser" window (after clicking the Examine button): "Reference image in series does not contain geometry information. Please use caution".
If you cannot fix the original images, you can apply all required spacing elements manually. Either do it in Slicer before exporting (Volumes module -> Volume Information), or fix the nrrd files themselves. Open them in your favorite text editor:
NRRD0004
# Complete NRRD file format specification at:
# http://teem.sourceforge.net/nrrd/format.html
type: short
dimension: 3
space: left-posterior-superior
sizes: 512 512 1
space directions: (1,0,0) (0,1,0) (0,0,1)
kinds: domain domain domain
endian: little
encoding: gzip
space origin: (0,0,0)
You have to update this line:
space directions: (0.507812,0,0) (0,0.507812,0) (0,0,4)
The true spacing values are in tags (0028,0030) Pixel Spacing (X and Y) and (0018,0050) Slice Thickness (Z).
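If you prefer to pull those values programmatically rather than copy them by hand, something like this sketch (ITK with GDCM; tag keys use ITK's "group|element" form) prints them for one slice:

#include "itkGDCMImageIO.h"
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkMetaDataObject.h"
#include <iostream>
#include <string>

// Sketch: print Pixel Spacing (0028,0030) and Slice Thickness (0018,0050)
// so they can be copied into the nrrd "space directions" line.
int main(int argc, char * argv[])
{
  if (argc < 2)
    return 1;

  using ImageType = itk::Image<short, 2>;
  auto gdcmIO = itk::GDCMImageIO::New();
  auto reader = itk::ImageFileReader<ImageType>::New();
  reader->SetImageIO(gdcmIO);
  reader->SetFileName(argv[1]);
  reader->Update();

  std::string pixelSpacing, sliceThickness;
  itk::ExposeMetaData<std::string>(gdcmIO->GetMetaDataDictionary(), "0028|0030", pixelSpacing);
  itk::ExposeMetaData<std::string>(gdcmIO->GetMetaDataDictionary(), "0018|0050", sliceThickness);
  std::cout << "Pixel Spacing (0028,0030): " << pixelSpacing << "\n"
            << "Slice Thickness (0018,0050): " << sliceThickness << "\n";
  return 0;
}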
I am working on 3D reconstruction with Tango. Our system is quite similar to KinectFusion, which uses a voxel representation, but it uses Tango as the tracker. The left image (in the video linked below) is rendered by raycasting at the current pose (given by Tango) in real time. The raw pose is converted by GetOC2OWMat() as in the code examples; in addition, the signs of tx and rx are flipped to match our system. Everything works fine except rotation about the Z axis, which changes the angle in the rendered image. I guess the coordinate system conversion is not done properly, but depth integration works if no Z rotation is involved. I have also checked that det(R) is always 1.
Video
It sounds like you are not factoring in the intrinsics - have you accounted for the camera and device IMU frames? You need these to fully re-establish the original viewpoint, i.e. both the camera and device IMU frame matrices need to be multiplied into your stack.
Sorry, I just found the place where things go wrong. When the image is displayed with OpenGL, the rendered GL surface does not have the same aspect ratio as the raycasting image.
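For anyone hitting the same thing, this is the kind of fix (a hedged sketch, not the original code): letterbox the GL viewport to the raycast image's aspect ratio.

#include <GLES2/gl2.h> // or whatever GL header your platform uses

// Sketch: keep the viewport's aspect ratio equal to the raycast image's,
// centering it (letterbox/pillarbox) inside the on-screen surface.
void SetViewportForRaycastImage(int surfaceW, int surfaceH, int imageW, int imageH)
{
  const float surfaceAspect = static_cast<float>(surfaceW) / surfaceH;
  const float imageAspect   = static_cast<float>(imageW) / imageH;

  int vpW = surfaceW, vpH = surfaceH, x = 0, y = 0;
  if (surfaceAspect > imageAspect) {            // surface is wider than the image
    vpW = static_cast<int>(surfaceH * imageAspect);
    x   = (surfaceW - vpW) / 2;
  } else {                                      // surface is taller than the image
    vpH = static_cast<int>(surfaceW / imageAspect);
    y   = (surfaceH - vpH) / 2;
  }
  glViewport(x, y, vpW, vpH);
}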
Do you program with Java/C/Unity? I'm curious because my device has problems with the camera data and you seem to capture it without problems. I am quite sure it's a bug but I would like to make sure it really is one.
I create a big overview image stitched from many single microscope images.
Suddenly (after several months of working properly), the stitched overview images became blurry and contain strange structural artefacts like askew lines (not the rectangles; those come from imperfect stitching).
If I open any particular tile at full size, it is not blurry and the artefacts are hardly observable. (Note that the image below is already scaled 4x.)
The overview image is created manually by scaling each tile with QImage::scaled and copying each of them to the corresponding region of the big image, as sketched below. I'm not using OpenCV's stitching.
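For reference, the construction is roughly this (a simplified sketch of what I described; the real code places the tiles according to their actual positions):

#include <QImage>
#include <QList>
#include <QPainter>

// Sketch: scale every tile and paint it into its region of the overview image.
// The transformation mode argument chooses the downscaling filter
// (Qt::FastTransformation = nearest neighbour, Qt::SmoothTransformation = filtered).
QImage BuildOverview(const QList<QImage> & tiles, int tilesPerRow, const QSize & scaledTileSize)
{
  const int rows = (tiles.size() + tilesPerRow - 1) / tilesPerRow;
  QImage overview(scaledTileSize.width() * tilesPerRow,
                  scaledTileSize.height() * rows,
                  QImage::Format_RGB32);
  overview.fill(Qt::black);

  QPainter painter(&overview);
  for (int i = 0; i < tiles.size(); ++i) {
    const QImage scaled = tiles.at(i).scaled(scaledTileSize,
                                             Qt::IgnoreAspectRatio,
                                             Qt::SmoothTransformation);
    painter.drawImage((i % tilesPerRow) * scaledTileSize.width(),
                      (i / tilesPerRow) * scaledTileSize.height(),
                      scaled);
  }
  return overview;
}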
I assume this happens because of the image content, because most of the overview images are OK.
The question is: how can I prevent such barely observable artefacts from becoming clearly visible after scaling? Is there some means to do this in OpenCV or QImage?
Is there any algorithm to find out whether the image content could lead to such an effect for a given scale factor?
Many thanks in advance!
Are you sure the camera is calibrated properly? That the lighting is uniform? Is the lens clean? Do you have electrical components that interfere with the camera connection?
If you sum image frames of a uniform material (or of a non-uniform material moved randomly for a significant time), the resulting integrated image should be completely uniform.
If your produced image is not uniform, especially if you get systematic noise (like the apparent sinusoidal noise in the provided pictures), write a calibration function that transforms image -> calibrated image.
Filtering in Fourier space is another way to remove the noise, but considering that the image is rotated you will lose precision, and you'll be cutting off components of the real signal too. The following empirical method will reduce the noise in your particular case significantly:
1. ground_output: composite image with the per-pixel sum of >10 frames (more is better) over a uniform material (e.g. an excited slab of phosphorus).
2. ground_input: the average (or sqrt(sum of px^2)) of ground_output.
3. calib_image: ground_input /(per px) ground_output. Saved for the session, or persisted in a file (important: ensure no lossy compression such as JPEG!).
4. work_input: the images to work on.
5. work_output = work_input *(per px) calib_image: images calibrated for the systematic noise.
If you can't get a perfect uniform target, such as when no uniform material is on hand, do not worry too much. If you move any material uniformly (or randomly) for enough time, it will act as a uniform material for this purpose (think of a blurred photo).
This method has the added advantage of calibrating out solitary faulty pixels that CCD cameras have (e.g. NormalPixel.value(signal)).
If you want to have more fun, you can always fit the calibration function to something more complex than a zero-intercept line (steps 3 and 5); see the sketch below.
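A hedged sketch of the per-pixel part of this recipe, here with OpenCV (names are illustrative; the frames are assumed to be single-channel float images):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Steps 1-3: build the calibration image from frames of a uniform target.
cv::Mat BuildCalibImage(const std::vector<cv::Mat> & uniformFrames)
{
  cv::Mat groundOutput = cv::Mat::zeros(uniformFrames[0].size(), CV_32F);
  for (const cv::Mat & frame : uniformFrames)
    cv::accumulate(frame, groundOutput);                 // step 1: per-pixel sum

  const double groundInput = cv::mean(groundOutput)[0];  // step 2: average

  cv::Mat calibImage;
  cv::divide(groundInput, groundOutput, calibImage);     // step 3: scalar / per-pixel
  return calibImage;                                     // keep it lossless if saved
}

// Steps 4-5: apply the calibration to a working image.
cv::Mat Calibrate(const cv::Mat & workInput, const cv::Mat & calibImage)
{
  cv::Mat workOutput;
  cv::multiply(workInput, calibImage, workOutput);       // per-pixel multiply
  return workOutput;
}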
I suggest scaling the image with some other software to verify if the artifacts are in fact caused by Qt or are inherent in the image you've captured.
The askew lines look a lot like analog TV interference, or CCTV noise induced by 50 or 60 Hz power lines running alongside the signal cable, or some other electrical interference on the signal.
If the image distortion is caused by signal interference then you can try to mitigate it by moving the signal lines away from whatever could be the source of the problem, or fit something to try to filter the noise (baluns for example).