What is the full Optical Density representation range of shades of gray in a DICOM CT image? - dicom

For normal 8-bit images the range of shades of gray is 0 to 255 (256 shades); what is it for DICOM CT monochrome images?

Technically, I agree with Amit Joshi. But especially for CT images you will usually find a grayscale range expressed in Hounsfield Units (HU). These range from about -1000 to +4000. By applying the linear transformation of the pixel data specified by the attributes Rescale Intercept (0028,1052) and Rescale Slope (0028,1053), the stored pixel values described in Amit Joshi's answer are mapped to that range.
To this range, the windowing (VOI LUT) function is applied.
[EDIT]
A little explanation on Hounsfield units (HU)...
HU are valid for CT images only. A range of HU corresponds to a particular type of tissue or depicted structures, e.g.
-1000 -> air
0 -> water
10-40 -> kidney
and so on.
So in CT images, pixel values found in the file are usually mapped to HU using Rescale Slope and Rescale Intercept. Whether or not this is the case is indicated by the attribute Rescale Type (0028,1054). HU are not only used for measuring gray values and deducing the tissue type from the median value, but also for windowing. In CT, you refer to a "bone window", "tissue window" etc.
So the relevant part of the DICOM grayscale rendering pipeline (see PS3.14, which defines the Grayscale Standard Display Function, GSDF) is:
interpret the pixel values according to their type description (signed/unsigned, bpp)
map the pixel values to HU using Rescale Slope and -Intercept
apply windowing to HU, i.e. map the range of the HU defined by the window to the display range (usually 256 grayscales)
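As an illustration (not from the original answer), here is a minimal Python sketch of those three steps using pydicom and NumPy. The file name is a placeholder, and the window center/width values are assumptions; in practice they would come from Window Center (0028,1050) and Window Width (0028,1051):

```python
import numpy as np
import pydicom

# Read the CT slice; "ct_slice.dcm" is a placeholder file name.
ds = pydicom.dcmread("ct_slice.dcm")

# 1. Interpret the stored pixel values (pydicom honours Bits Stored and
#    Pixel Representation when building the array).
stored = ds.pixel_array.astype(np.float64)

# 2. Map stored values to Hounsfield Units via Rescale Slope / Intercept.
slope = float(getattr(ds, "RescaleSlope", 1.0))
intercept = float(getattr(ds, "RescaleIntercept", 0.0))
hu = stored * slope + intercept

# 3. Apply a window (here an assumed soft-tissue window: center 40, width 400)
#    and map the windowed HU range to 256 display gray levels.
center, width = 40.0, 400.0
lo, hi = center - width / 2, center + width / 2
display = (np.clip((hu - lo) / (hi - lo), 0.0, 1.0) * 255.0).astype(np.uint8)
```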

I am not sure about Optical Density Representation.
The number of shades of gray can be calculated as 2^(Bits Stored).
For CT images, Bits Stored may be 10 to 16 depending on the manufacturer and other factors.
Considering a value of 16, that gives 65536 shades.
For an unsigned image:
Start value: 0
End value: 65535
For a signed image:
Start value: -32768
End value: 32767
CT images are generally signed. In signed images, 0 is the center and the range is split into half negative values and half positive values.
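A minimal sketch (not from the original answer) of how the representable range could be derived from the header with pydicom; the file name is a placeholder:

```python
import pydicom

ds = pydicom.dcmread("ct_slice.dcm")  # placeholder path

bits = int(ds.BitsStored)              # e.g. 12 or 16
signed = int(ds.PixelRepresentation) == 1

shades = 2 ** bits
if signed:
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
else:
    lo, hi = 0, 2 ** bits - 1

print(f"{shades} shades, stored values from {lo} to {hi}")
```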

Related

Obtain the differential spectrum of each marine floating target and its background/neighborhood water in Google Earth Engine

How can I obtain the differential spectrum of each floating target (here, an algal patch), i.e. subtract the band values of the adjacent water around each patch (e.g. the median water spectrum) from the band values of the patch?
I first extract floating algae from the sea. I can use NDVI, NDWI, etc. to extract the algae and its edges (see Fig. 1; algae are shown in the viridis palette). My goal is to get the difference between the spectra of the algae and the surrounding water. Therefore, I created a buffer around the edges of the algal patches (see Fig. 2, yellow buffer); the buffer represents the water around the algae. I have considered an object-based approach, but it is very memory intensive and has limitations on patch size. Now I want to do it based on pixels and morphology. How can I achieve this?
An alternative idea might be: fill the nodata values (masked algae) using the neighboring water in an image, then subtract the new image from the original one to obtain the difference between the spectra of the algae and the surrounding water.
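One pixel-based way to sketch this (my own illustration, not from the question) is to mask out the algae, take a neighborhood median of the remaining water, and subtract it from the algae pixels. This assumes the Earth Engine Python API, a placeholder multi-band image, a crude NDVI-style algae mask, and an arbitrary 10-pixel kernel radius:

```python
import ee

ee.Initialize()

# Assumed inputs (placeholders): a multi-band image and a 0/1 algae mask.
image = ee.Image("your/multiband/image")                        # placeholder asset ID
algae_mask = image.normalizedDifference(["B8", "B4"]).gt(0.1)   # crude algae threshold

# Water-only image: drop algae pixels.
water = image.updateMask(algae_mask.Not())

# Median spectrum of the surrounding water within an (assumed) 10-pixel radius.
water_median = water.reduceNeighborhood(
    reducer=ee.Reducer.median(),
    kernel=ee.Kernel.circle(10, "pixels"),
).rename(image.bandNames())

# Differential spectrum: algae pixel value minus neighboring-water median.
diff = image.updateMask(algae_mask).subtract(water_median)
```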

Clamp values in gnuplot splot?

I'm using splot to visualize the fitness histogram for an optimization problem. In this scenario positive Z values (say in the +2 to +15 range) representing good solutions are of particular interest, whereas negative values don't provide much insight, i.e. it doesn't matter whether a bad solution has a Z value of -50, -500 or -5000.
Using the autorange option, all the interesting bits around +/- 0 are 'scaled away' (i.e. mostly flat, to include the negative peaks in the surface), so I'm now using an explicit zrange of [-bestValue:bestValue] to focus the plot on the interesting Z values.
Now the development of the best solutions close to 0 can be traced much better; however, the surface is rendered with 'holes' for negative Z values exceeding the range:
This is very confusing to look at/interpret.
(FWIW the hidden3d option is enabled)
Can gnuplot 'fill' the holes in some way, e.g. by clamping negative values in the surface plot instead of just dropping the points from the surface?
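To illustrate the clamping idea (my own sketch, not an answer from the thread), one workaround is to clip the Z column in a preprocessing step before handing the data to splot; the file names and the clamp threshold are assumptions:

```python
import numpy as np

# Placeholder file name; data assumed to be whitespace-separated x, y, z columns.
data = np.loadtxt("fitness.dat")

# Clamp negative Z values so they cannot fall below the plotted zrange;
# -15 is an arbitrary threshold matching a zrange of roughly [-15:15].
data[:, 2] = np.clip(data[:, 2], -15.0, None)

np.savetxt("fitness_clamped.dat", data)
# Then in gnuplot: splot 'fitness_clamped.dat' with the same settings as before.
```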

Rotate Image Orientation(Patient) in DICOM

I have extracted a 3D surface from an MRI acquisition and the coordinates of the points describing this surface are (I believe) with respect to the reference system of the first image of the series (I mean that the origin corresponds to the Image Position(Patient) and the axes orientation to the Image Orientation(Patient)).
I have another set of images with a different Image Position(Patient) and a different Image Orientation(Patient); I want to rotate and translate the surface extracted from the first set in order to have it match exactly the second set of images.
I'm having trouble with finding the correct 4x4 matrix that would do the job, once I get it, I know how to apply it to my surface.
Any kind of help would be greatly appreciated, thank you.
Simon
This page explains how to form a transformation matrix from the geometry information in the DICOM headers. These transformation matrices are used to transform from the volume coordinate system (pixel-x, pixel-y, slice number) to the patient/world coordinate system (in millimeters).
The basic idea for transforming from volume 1 to volume 2 is to transform from volume 1 to patient coordinates and then from patient coordinates to the volume 2 coordinate system. Multiplying both matrices yields the matrix that transforms directly from volume 1 to volume 2.
Caution: Obviously, there can be no guarantee that every coordinate in v1 matches a coordinate in v2, i.e. the stacks may have different sizes and/or positions.
So you have:
M1 - the matrix to transform from volume 1 to the world coordinate system and
M2 - the matrix to transform from volume 2 to the world coordinate system
Then
M2^(-1) * M1 is the matrix to transform a position vector from volume 1 to volume 2 (input and output are pixel-x, pixel-y, slice number),
and
M1^(-1) * M2 is the matrix to transform a position vector from volume 2 to volume 1 (input and output are pixel-x, pixel-y, slice number). This assumes column position vectors multiplied on the left, as in the DICOM pixel-to-patient mapping.
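A rough numpy sketch of this (my own illustration; attribute names follow pydicom, the file names are placeholders, and a regular right-handed slice stack is assumed):

```python
import numpy as np
import pydicom


def volume_to_patient_matrix(first_slice, slice_spacing):
    """4x4 matrix mapping (column, row, slice, 1) to patient coordinates in mm."""
    iop = np.array(first_slice.ImageOrientationPatient, dtype=float)
    ipp = np.array(first_slice.ImagePositionPatient, dtype=float)
    row_spacing, col_spacing = map(float, first_slice.PixelSpacing)

    row_dir = iop[0:3]                       # direction in which the column index increases
    col_dir = iop[3:6]                       # direction in which the row index increases
    slice_dir = np.cross(row_dir, col_dir)   # assumes a regular, right-handed stack

    m = np.identity(4)
    m[0:3, 0] = row_dir * col_spacing
    m[0:3, 1] = col_dir * row_spacing
    m[0:3, 2] = slice_dir * slice_spacing
    m[0:3, 3] = ipp
    return m


# Placeholder file names and slice spacings.
m1 = volume_to_patient_matrix(pydicom.dcmread("series1_slice0.dcm"), slice_spacing=1.0)
m2 = volume_to_patient_matrix(pydicom.dcmread("series2_slice0.dcm"), slice_spacing=1.0)

# Volume 1 -> volume 2 (column vectors: p2 = M2^-1 * M1 * p1).
vol1_to_vol2 = np.linalg.inv(m2) @ m1
```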

Up to a scale factor

I am reading up on homographies and I have seen in some places that the homography is defined "up to a scale factor". What does this mean? Is there an upper limit for scaling the homography, or what does it mean, and why?
General Meaning
A is unique up to Variation
A is the same as B up to Variation
A is equal to B up to Variation
Statement up to Variation
Phrases of the forms above typically mean that the Statement - the part before "up to" - is true excepting some kind of Variation. It can be thought of as meaning "...up to...but no further."
Example
Two points in the plane determine a line.
One point in the plane determines a line up to rotation about the point.
Meaning with respect to Homographies
Taken from the first section of this document:
1. From 3D to 2D Coordinates
Under homography, we can write the transformation of points in 3D from camera 1 to camera 2 as:
X2 = H*X1, X1,X2 in R^3
In the image planes, using homogeneous coordinates, we have
a*x1 = X1, b*x2 = X2, therefore b*x2 = H*a*x1
This means that x2 is equal to H*x1 up to a scale (due to universal scale ambiguity).
In the next section of the same document, Homography Estimation is described, wherein the z1 variable being solved for is "without loss of generality" set to 1. There is a whole set of solution homographies (varying only by scale), so a convention is made in this case to always choose the homography that has the universal scale z1 set to 1.
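A small numpy check (my own illustration, not from the document) of what "up to scale" means in practice: a homography H and any non-zero multiple k*H map a point to the same image location once the homogeneous result is dehomogenized.

```python
import numpy as np

# An arbitrary example homography (any non-singular 3x3 matrix will do here).
H = np.array([[1.2,   0.1,   5.0],
              [0.0,   0.9,  -3.0],
              [0.001, 0.002, 1.0]])

x1 = np.array([10.0, 20.0, 1.0])   # a point in homogeneous image coordinates


def apply_homography(Hmat, x):
    y = Hmat @ x
    return y[:2] / y[2]            # dehomogenize


print(apply_homography(H, x1))        # some point (u, v)
print(apply_homography(7.5 * H, x1))  # identical (u, v): the scale factor cancels out
```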

find the mean for points of binary features

I have groups of binary strings where each bit represents a feature of a variable, e.g. I have a color variable where red, blue and green are the features; thus if I have 010, I have a blue object.
I need to get the center of these objects by calculating a weighted mean, for example: 010 weighs 0.5, 100 weighs 0.4 and 001 weighs 0.8, so the mean is [010*0.5 + 100*0.4 + 001*0.8] / 1.7.
Is there a way to get a point which represents the center of those points and which has the same properties as the other points (binary, on 3 bits)?
Thank you in advance for your help.
I guess you can use the following approach from cluster analysis: choose a metric for your object space (Euclidean, taxicab or something else) and then, for all objects in the group (or, if the cardinality of the set is small, for all possible objects), calculate the average distance to all objects in the group. Then you can take the object with the smallest average distance as the center of the group.
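A minimal Python sketch of that medoid-style idea (my own illustration; the Hamming metric and the example weights are assumptions taken from the question):

```python
from itertools import product

# Example group of 3-bit binary feature vectors with (assumed) weights.
group = {(0, 1, 0): 0.5, (1, 0, 0): 0.4, (0, 0, 1): 0.8}


def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))


def weighted_avg_distance(candidate):
    total_w = sum(group.values())
    return sum(w * hamming(candidate, obj) for obj, w in group.items()) / total_w


# Since the space of 3-bit vectors is tiny, scan all candidates and pick
# the one with the smallest weighted average distance to the group.
center = min(product((0, 1), repeat=3), key=weighted_avg_distance)
print(center)
```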
