How to Draw Marker in MRI File With Respect to Contrast Agent - dicom

I am really confused about drawing an overlay
on an MRI image: is it similar to Structured Report (SR) processing or not?
I am trying to read the MRI file with respect to the contrast agent. After much searching on Google I finally found some information, such as:
"The data is extracted by injecting a contrast agent into a patient's vein, then taking sequential snapshots of a volume of interest as the contrast agent diffuses through that area"
but I am totally new to this, so can you help me with the following?
1. A specific link for these topics.
2. How to read the contrast agent value from an MRI DICOM file.
3. How to show a shaded region where the cancer is detected, or
some kind of marker at the location where the pixel intensity of
the DICOM file is highest.

Well, an MRI scan is just a stack of grayscale images, much as a CT is, except that the intensity units are of course different. So just read it as any other DICOM image and look at the pixel values for intensities, or perform segmentation.
Cancer tumor regions and other features are stored in a separate DICOM object called an RT Structure Set (it is usually produced by a radiotherapy planning system or contouring software).
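To get started on points 2 and 3, here is a minimal sketch using pydicom, NumPy, and matplotlib; the file name and the percentile threshold are placeholder choices for illustration, not a clinical method:

    import pydicom
    import numpy as np
    import matplotlib.pyplot as plt

    # Placeholder path: one slice of your MR series.
    ds = pydicom.dcmread("mr_slice.dcm")

    # (0018,0010) Contrast/Bolus Agent records which agent was administered,
    # if the scanner filled it in. It is descriptive text, not a pixel value.
    print("Contrast agent:", ds.get("ContrastBolusAgent", "not recorded"))

    # Pixel data, with the rescale transform applied when present.
    pixels = ds.pixel_array.astype(np.float64)
    pixels = pixels * float(getattr(ds, "RescaleSlope", 1)) \
                    + float(getattr(ds, "RescaleIntercept", 0))

    # Crude marker: shade everything above an arbitrary percentile.
    # Real lesion detection needs proper segmentation or an RT Structure Set.
    mask = pixels > np.percentile(pixels, 99)

    plt.imshow(pixels, cmap="gray")
    plt.imshow(np.ma.masked_where(~mask, mask), cmap="autumn", alpha=0.5)
    plt.show()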


ITK-SNAP segmentation displays same intensity value even after registration

I'm using ITK-SNAP to compare the intensities of several regions of interest between several conditions.
For some subjects, I need to realign one image to another using the Registration tool.
However, I noticed that the intensity values of a specific segmentation that I drew on the reference image don't change no matter how I register.
The value differs between the two images, but even if I manually register the second image to something completely off, it stays the same.
Is it possible to get the actual mean intensity of my segmentation depending on where it is on the registered image?
The Segmentation menu, option "Volumes and Statistics...", should show you what you are looking for.
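If you want the statistics computed on the registered (resampled) image instead, one option outside ITK-SNAP is a short SimpleITK script; the file names, and the assumption that you exported the registration as a .tfm transform file, are placeholders:

    import SimpleITK as sitk

    # Placeholder file names; the .tfm file is the saved registration transform.
    reference = sitk.ReadImage("reference.nii.gz")
    moving = sitk.ReadImage("moving.nii.gz")
    label = sitk.ReadImage("segmentation.nii.gz")       # drawn on the reference
    transform = sitk.ReadTransform("registration.tfm")

    # Resample the moving image onto the reference grid with the transform,
    # so the segmentation and the registered image share the same voxel space.
    resampled = sitk.Resample(moving, reference, transform,
                              sitk.sitkLinear, 0.0, moving.GetPixelID())

    intensities = sitk.GetArrayFromImage(resampled)
    mask = sitk.GetArrayFromImage(label) > 0
    print("Mean intensity in ROI:", intensities[mask].mean())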
Registration does not impact the intensities. Depending on how you transform your image, it affects the location and coordinates of your voxels, not their values: it may reshape, rotate, or translate the image, but the transformation matrix is applied to coordinates only. If you expect different intensities after registration, you need to apply some technique other than registration.
There are registration methods that do influence the intensities, but they are not used in ITK-SNAP, for example. You would have to look for a specialized package.
For example, this paper:
Intensity based image registration by minimizing the complexity of weighted subtraction under illumination changes
specifically plays with the intensities for fusion:
https://www.sciencedirect.com/science/article/abs/pii/S1746809415001755
Another example is MATLAB's intensity-based automatic registration: the process begins with the transform type you specify and an internally determined transformation matrix; together, they determine the specific image transformation that is applied to the moving image with bilinear interpolation.

random memory access and bank conflict

These days I am programming on a mobile GPU (Adreno).
The algorithm I use for image processing has 'randomness' in its memory access:
it reads some pixels within a 'fixed' range for filtering,
but I cannot know exactly which pixels will be read (it depends on the image).
As far as I understand, if multiple threads access different addresses in the same local memory bank,
it causes a bank conflict, so in my case bank conflicts seem unavoidable.
My question: can I eliminate bank conflicts with random memory access?
Or can I at least reduce them?
Assuming that the distances of your randomly accessed pixels are roughly normally distributed, you could think of tiling your image into subimages.
What I mean: instead of working with a (let's say) 1024x1024 image, you might have 4x4 images of size 256x256. Each of them is kept together in memory, so "near" pixel accesses stay within the same image object. Only the far-distance operations need to access different subimages.
A second option: instead of using CL image objects, try to save your data into an array. The data in the array can be stored in Z-order curve sorting. This also leads to a reduced spatial spread (compared to row-order sorting).
But of course, this depends strongly on your image size.
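The Z-order (Morton) indexing mentioned above is easy to sketch; the bit width here is arbitrary, and a real OpenCL kernel would do the same bit interleaving in C:

    def morton_index(x: int, y: int, bits: int = 10) -> int:
        """Interleave the bits of x and y into a Z-order curve index."""
        index = 0
        for i in range(bits):
            index |= ((x >> i) & 1) << (2 * i)       # even bits come from x
            index |= ((y >> i) & 1) << (2 * i + 1)   # odd bits come from y
        return index

    # Neighbouring pixels map to nearby array offsets far more often
    # than with row-major order, which helps spatially random access.
    print(morton_index(2, 3))   # -> 14
    print(morton_index(3, 3))   # -> 15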
There are a variety of ways to deal with bank conflicts: the size of the elements you are working with, the strides between lines, and shifting the coordinates around to different memory addresses (padding). It is never going to be as good as non-random, conflict-free access, though, so what you will notice is that compute times differ significantly depending on the image.
See http://cuda-programming.blogspot.com/2013/02/bank-conflicts-in-shared-memory-in-cuda.html
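The padding trick can be illustrated with simple modular arithmetic; the bank count of 32 is a common value for desktop GPUs and only an assumption for Adreno:

    BANKS = 32  # assumed bank count; check your GPU's documentation

    def bank(word_index: int) -> int:
        """Local memory bank that a given 32-bit word falls into."""
        return word_index % BANKS

    # Walking down one column of a 32-wide tile: every access hits bank 0,
    # an 8-way conflict that serializes the loads.
    print([bank(row * 32) for row in range(8)])   # [0, 0, 0, 0, 0, 0, 0, 0]

    # Pad each row by one word (stride 33) and the accesses spread out.
    print([bank(row * 33) for row in range(8)])   # [0, 1, 2, 3, 4, 5, 6, 7]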

How to decide if a DICOM series is a 3D volume or a series of images?

We are writing an importer for dicom files.
How does one generally decide whether a series of images forms a 3D volume or is just a series of 2D images?
Is there a universal way to decide this across most vendors? I looked at the DICOM tags and could not find an apparent solution.
The DICOM standard defines UIDs for describing the hierarchy. These are from top to bottom:
Study UID - Identifier of the study or scanning session.
Series UID - The same within a series acquired in one scan.
Image UID - Should be unique for any image.
A DICOM image saved by a standard-conforming implementation should have all these UIDs. If multiple images have the same Series UID, they form a volume (or time series) as defined in the standard. Some software is of course not standard-conforming and you'll have to look at other things like timestamps and patient position, but it is usually best to start by following the standard.
For ordering the series after identifying it, GDCM (as malat suggested) or DCMTK are pretty well-established libraries.
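A minimal sketch of that grouping with pydicom (the directory name is a placeholder, and sorting on the z component is a simplification of the projection-onto-slice-normal ordering that GDCM implements):

    from collections import defaultdict
    from pathlib import Path
    import pydicom

    # Group slices by Series Instance UID; a series with many slices that
    # share an orientation is a candidate volume (or time series).
    series = defaultdict(list)
    for path in Path("dicom_dir").glob("*.dcm"):   # placeholder directory
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        series[ds.SeriesInstanceUID].append(ds)

    for uid, slices in series.items():
        # Simplified ordering: sort on z of Image Position (Patient).
        slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
        print(uid, len(slices), "slices")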
In MR, you'll want to look for:
MR Acquisition Type (0018,0023). It has two enumerated values:
2D = frequency x phase
3D = frequency x phase x phase
I'm not as sure about CT.
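Checking that tag with pydicom is a one-liner (the file name is a placeholder):

    import pydicom

    ds = pydicom.dcmread("mr_slice.dcm", stop_before_pixels=True)  # placeholder
    # (0018,0023) MR Acquisition Type: "2D" or "3D" when present.
    print(ds.get("MRAcquisitionType", "tag absent"))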
Most of the time, malat's answer is what you'll want to do (i.e. organize the slices by position and orientation and treat them in a 3D fashion through multi-planar reconstruction).
I think what you are searching for is the algorithm to organise a DICOM dataset using Image Position (Patient) and Image Orientation (Patient).
A typical implementation can be found in GDCM.
Please note that my answer may be totally unrelated to your specific DICOM instances, but since you did not specify which SOP Class UID you were dealing with, I simply assumed you were dealing with old CT or MR Image Storage.
Patient Position (0018,5100) is a type 1 required attribute for both the CT and MR modalities. This attribute is VERY IMPORTANT for accurately interpreting the patient's orientation.
A projection radiograph will typically have the Patient Orientation (0020,0020) attribute, and a cross-sectional image should have the Image Position (0020,0032) and Image Orientation (0020,0037) attributes, as they are type 1 required elements of the Image Plane module (see PS 3.3 section C.7.6.2.1.1).
However, a localizer or scout image included with a CT study is not really a cross-sectional image but a projection image, and it may still contain the Image Position and Image Orientation attributes. The same goes for an MR study, where one or more sagittal or coronal images are usually captured first, from which the axial images are prescribed. In these cases different logic is needed to identify the localizer image; for example, a CT localizer may use the string "LOCALIZER" as value 3 of the Image Type attribute.
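That localizer check might look like this in pydicom (the file name is a placeholder):

    import pydicom

    ds = pydicom.dcmread("ct_slice.dcm", stop_before_pixels=True)  # placeholder
    image_type = list(getattr(ds, "ImageType", []))
    # Value 3 of (0008,0008) Image Type flags scout images in many CT studies.
    if len(image_type) >= 3 and image_type[2] == "LOCALIZER":
        print("scout/localizer: exclude it from volume reconstruction")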
If someone hasn't found the answer yet: I looked through the tags in the RadiAnt DICOM viewer, comparing different files, and I think the Scan Options (0018,0022) tag contains the information. If the tag exists (on some files it was not there) and the value is equal to HELICAL MODE or HELIX, then a 3D image can be constructed from that series.

Any theoretical limit to compression?

Imagine that you had all the supercomputers in the world at your disposal for the next 10 years. Your task is to compress 10 full-length movies losslessly as much as possible. Another criterion is that a normal computer should be able to decompress them on the fly, and should not need to spend much of its hard disk to install the decompression software.
My question is: how much more compression could you achieve than the best alternatives today? 1%, 5%, 50%? More specifically, is there a theoretical limit to compression, given a fixed dictionary size (if it is called that for video compression as well)?
The limits of compression are dictated by the randomness of the source. Welcome to the study of information theory! See data compression.
There is a theoretical limit: I suggest reading this article on information theory and the pigeonhole principle. It seems to sum up the issue in a very easy to understand way.
If you had a fixed catalogue of all the movies you were ever going to compress, you could just send an id for the movie and have the "decompression" look up the data with that index. So compression could be to a fixed size of log2(N) bits, where N is the number of movies; for 10 movies that is only 4 bits.
I suspect the practical lower bound is rather higher than this.
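That scheme fits in a few lines; the catalogue contents are made up for illustration:

    import math

    # A fixed catalogue shared by compressor and decompressor ahead of time.
    catalogue = ["movie_a.mkv", "movie_b.mkv", "movie_c.mkv", "movie_d.mkv"]
    bits_needed = math.ceil(math.log2(len(catalogue)))

    def compress(title: str) -> int:
        return catalogue.index(title)      # the "compressed" form is just an id

    def decompress(movie_id: int) -> str:
        return catalogue[movie_id]         # look the movie up by its id

    assert decompress(compress("movie_b.mkv")) == "movie_b.mkv"
    print(f"any of {len(catalogue)} movies fits in {bits_needed} bits")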
Do you really mean lossless? Most of today's video compression is lossy, I thought.
It is important to restate these limits in light of the latest developments in information theory, and to make explicit the hypotheses under which each limit is valid.
In information theory, three fundamental hypotheses are used:
1. the information is defined by the entropy function H(X);
2. the information that identifies the source is known to both the encoder and the decoder;
3. the source and its isomorphisms are considered, meaning we can decode one symbol at a time.
The first limit, the most famous, was defined by Shannon, and holds when all three hypotheses are true:
N·H(X)
with H(X) the entropy of the source X and N the number of symbols emitted.
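To make the first limit concrete, here is a small sketch that computes the empirical entropy of a byte string and the resulting bound N·H(X) in bits (the sample data is arbitrary):

    import math
    from collections import Counter

    def shannon_bound(data: bytes) -> float:
        """N * H(X) in bits: the minimum lossless size under hypotheses 1-3."""
        n = len(data)
        counts = Counter(data)
        entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
        return n * entropy

    data = b"abracadabra" * 100   # arbitrary sample source
    print(f"{len(data)} symbols, lower bound {shannon_bound(data):.0f} bits")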
The second limit: we remove the second hypothesis, so the decoder does not know the source. The limit becomes:
N·H(X) + source information
The third limit: we remove the third hypothesis. In this case Set Shaping Theory (SST) is used, a new method that is revolutionizing information theory. This theory studies the one-to-one functions f that transform a set of strings into a set of equal size made up of longer strings. With this method, we get the following limit:
N₂·H(Y) + source information ≈ N·H(X)
with f(X) = Y and N₂ > N.
In practice, we obtain a compression gain equivalent to the information needed to describe the source; that information represents the inefficiency of the entropy coding.
In this case, however, it is not possible to decode one symbol at a time (the code is not instantaneous): the message must be decoded in full before the original message is obtained.
Important progress has been made in this area. It was possible to apply this theory to a concrete case of data compression: "Practical applications of Set Shaping Theory in Huffman coding".
Another interesting aspect is that the authors shared the code for the function that performs the transform described in Set Shaping Theory. The file is shared on MATLAB File Exchange: https://www.mathworks.com/matlabcentral/fileexchange/115590-test-sst-huffman-coding?s_tid=FX_rc1_behav

Volume render DICOMDIR CT scan

I got a CD from the hospital that contains a head CT scan.
I am completely new to medical imaging. What I would like to do is perform a volume rendering of the CT scan.
It is in DICOMDIR format. How and where would I start?
From messing about with various tools I get the feeling that I need to extract each series into DICOM format. Is this correct, and if so, how would I do it?
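For the extraction step, pydicom's FileSet class can read a DICOMDIR directly; the paths here are placeholders:

    from pathlib import Path
    from pydicom.fileset import FileSet

    # Placeholder path: the DICOMDIR file at the root of the CD.
    fs = FileSet("cd/DICOMDIR")
    out = Path("extracted")
    out.mkdir(exist_ok=True)

    # Copy each instance out as a plain DICOM file, grouped by series,
    # so a renderer can load one series directory as one volume.
    for instance in fs:
        ds = instance.load()
        series_dir = out / ds.SeriesInstanceUID
        series_dir.mkdir(exist_ok=True)
        ds.save_as(series_dir / f"{ds.SOPInstanceUID}.dcm")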
Unless you were given the volume data, your rendering will be disappointing at best. Many institutions still acquire head CTs in separate "step slices" rather than as volumes, so here you will have significant 'stepping' artifact.
Even if it was acquired with volume data, unless they transferred all the data to your CD, you will still be stuck with only the processed 'slab' or 'slice' images.
The best way to do a volume rendering is to actually have the volume data. "Slice image" data has most of the information dumbed down and removed; you are just getting 20 or 30 images as 256 x 256 (8- or 16-bit grayscale) array data.
If you have a Mac, try OsiriX - it's free, open source, and will do everything you need and more. If you don't, and this is a one-time thing, you could always sign up for a free demo of a commercial-grade DICOM viewer. Medical image viewing software is insanely expensive and would be impossible to sell without demos. Just claim to be working for a clinician and you'll have no problem getting working software.
I believe ImageJ will open any of the files in the DICOMDIR for you. I'm not entirely sure it can open the entire study from the DICOMDIR, but I'm fairly certain it will handle any individual files you need to open. It should also offer the option to export the images to various other formats. If you need more info, feel free to post a comment.
You can also try MeVisLab (http://www.mevislab.de/). It is free but a bit more complex to use, and it may require two steps to get the rendering of your DICOM images.
Most probably you will have to use one of the modules they provide to convert the image, then load the converted image and render it.
I have done this with ImageJ, but ImageJ did not support compressed DICOM files at that time, so you have to write your own logic to read compressed DICOM files.
Fiji and VolumeJ are also good options for volume rendering.
Try Real3d VolViCon, which is an advanced application for reconstruction of computed tomography (CT), magnetic resonance (MR), ultrasound, and X-ray images. It offers features for exporting 3D surfaces or volumes as triangular mesh files for creating physical models with 3D printing technologies. It also provides high-quality visualization, linear and angular measurement tools, and various types of markups. It takes a single raw volume file or a sequence of 2D (i.e., DICOM) files and reconstructs 3D volume (voxel) and mesh (surface) models.
