The DICOM images have a "Slice Location" parameter recorded in addition to "Slice Thickness".
Question: "Slice Location" relative to what? I understand it is a depth within the body, but it must be referenced in the images somehow as well, right?
I think a series must (or may) have a corresponding series in which we can find a reference for the "Slice Location".
If I am right, how do I find those images? And how do I then establish the position of the slice in the corresponding images?
Also, if you know a well-written reference guide to the DICOM image structure, please share it.
Thanks a lot.
The standard specifies that the unit for Slice Location is millimeters. Usually there is a special scout image, referenced by the slices, with a grid-like overlay that shows where each slice is mapped.
And a suggestion: don't expect it to work "in theory" without trying it on real samples. DICOM is more a collection of all the standards that existed at the time it was created. Also, many modalities use their own private tags for additional info. If you only need to process the output of a certain modality then you are lucky: you just have to find its DICOM Conformance Statement. If it's for a viewer... then good luck :)
EDIT: Also, CT series usually have one image with LOCALIZER in the Image Type tag that references the rest of the images in the series, which are the slices (or is referred to by them).
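As a minimal sketch of reading this in code (assuming Python with pydicom and a folder of .dcm files from a single series; the folder name is made up), you can read Slice Location from each slice and spot the scout by LOCALIZER in its Image Type:

# Sketch: list slice locations and find the localizer/scout of one series.
import glob
import pydicom

localizers, slices = [], []
for path in glob.glob("series/*.dcm"):
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    # Image Type (0008,0008) is multi-valued; LOCALIZER marks the scout image.
    if "LOCALIZER" in ds.get("ImageType", []):
        localizers.append(path)
    else:
        loc = ds.get("SliceLocation")  # (0020,1041), in millimetres; may be absent
        if loc is not None:
            slices.append((float(loc), path))

slices.sort()
print("localizer(s):", localizers)
for loc, path in slices:
    print(f"{loc:8.2f} mm  {path}")

To actually draw a slice's position on the scout you would use Image Position (Patient) and Image Orientation (Patient) of both images, which share the same Frame of Reference UID.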
Here's a page that contains some good information on DICOM:
http://www.thefullwiki.org/DICOM
As to your question: I think you're looking for the "image number", which defines an ordering of the images. The "slice location" is a spatial coordinate giving the offset of the image slice; the "image number" is an integer indicating the order of an image relative to the other images. YMMV; DICOM is a very loosely adhered-to standard.
I suspect the slice position is for thick slab acquisitions, where the extents of each slice overlap with the next slice. The axial resolution of current scanners is high enough that this does not happen anymore. Hence the slice thickness is used more frequently now as opposed to slice position.
Please look at http://medical.nema.org/dicom/2003/03_10PU.PDF, especially Annex A.
The method I usually follow is this: I read the DICOMDIR for the .dcm file paths, which are stored under the Referenced File ID tag (0004,1500). Once I have all the paths, I iterate through each .dcm image to access whatever property I want. Slice Location may well be present in each image, but without knowing the modality it is quite difficult to figure out the tag.
dcmtk is a nice C++ library that can help you read specific tags or extract pixel data from .dcm images. You can use it to make your life easier.
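If you would rather stay out of C++, roughly the same walk can be sketched in Python with pydicom (assuming a DICOMDIR file sitting next to the image folders, run from that directory):

# Sketch: read DICOMDIR, collect the referenced file paths, then probe each image.
import os
import pydicom

dicomdir = pydicom.dcmread("DICOMDIR")
for record in dicomdir.DirectoryRecordSequence:
    if record.DirectoryRecordType != "IMAGE":
        continue
    # Referenced File ID (0004,1500) is a multi-valued path relative to DICOMDIR.
    parts = record.ReferencedFileID
    path = parts if isinstance(parts, str) else os.path.join(*parts)
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    print(path, ds.get("Modality"), ds.get("SliceLocation"))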
What is the difference between DICOM Overlay and DICOM Annotation?
The hint is in the name: an Annotation is one of the embedded objects, while an Overlay is only a chunk of pixel data.
The only term that DICOM explicitly defines is "Overlay", which refers to a binary raster image that is either stored in unused bits of the pixel data (this is retired, but you still find overlays encoded this way in practice) or in separate attributes.
"Annotation" refers to anything else, including:
header attributes used to label the images (like patient's name, study date)
softcopy presentation state objects referring to the annotated image and defining simple vector graphics to draw with the image
I'd say that the term "annotation" refers to everything but overlays.
Overlay is the common term used for the text display of header data on the images; this is configurable on the modality and usually site- or user-specific. Annotations are the user's mark-up of the image: arrows, ROIs, rulers, etc.
An Overlay can also be a DICOM object, and is often called that in CAD, as historically DICOM group 6000 was used for these objects.
Mostly, Annotations cover comment text and ROI markings, while Overlays cover patient details and DICOM details like image type, compression, etc.
An overlay is a bitmap which is "overlaid" on top of the raw pixel data. Overlays are described by the overlay module.
There are two ways of storing overlays in DICOM. The first is to squeeze it into the unused bits in the raw pixel data. This is possible if, e.g. your bits allocated is more than your bits stored. This method of storing overlays is deprecated. The second approach is using the overlay attribute.
You will notice, in Section C.9.2.1.1 of the DICOM standard, it says:
There are two specific types of overlays. The type is specified in this Attribute [referring to the Overlay Type attribute].
A Region of Interest (ROI) is a specific use of an Overlay. The overlay bits corresponding to all the pixels included in the ROI shall be set to 1. All other bits are set to 0. This is used to specify an area of the image of particular interest.
A Graphics overlay may express reference marks, graphic annotation, or bit mapped text, etc. A Graphics overlay may be used to mark the boundary of a ROI. If this is the case and the ROI statistical parameters are used, they will only refer to the pixels under the boundaries, not those in the included regions.
The overlay bits corresponding to all the pixels included in the Graphics shall be set to 1. All other bits are set to 0.
So as you can see, the concepts of an overlay and annotation are not so cleanly separated. An overlay is a way of storing an image that should be overlaid on top of the raw data. An annotation is some sort of text or comments about the image, which may be stored in an overlay, or it may be stored elsewhere.
For example, an annotation may be burned into the raw pixel data directly (see the Burned in Annotation Attribute).
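To make the distinction concrete, here is a small sketch (assuming pydicom and a file that actually carries an overlay plane in group 6000; the file name is a placeholder): the overlay comes out as a separate binary raster, while burned-in annotation is simply part of the ordinary pixel data.

# Sketch: pull an overlay plane out of group 0x6000, if one is present.
import pydicom

ds = pydicom.dcmread("image_with_overlay.dcm")  # placeholder file name

if (0x6000, 0x3000) in ds:                 # Overlay Data present?
    overlay = ds.overlay_array(0x6000)     # binary mask, one bit per image pixel
    print("overlay shape:", overlay.shape, "set pixels:", int(overlay.sum()))
else:
    print("no overlay plane in group 0x6000")

pixels = ds.pixel_array  # any burned-in annotation lives in here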
I was doing a project on DICOM, and I found that the DICOM Standard document (http://medical.nema.org/standard.html) is really a mess for programmers. For example, when I come across a tag (0008,0060), I look into 11_03pu.pdf (http://medical.nema.org/Dicom/2011/11_03pu.pdf). It is said to be a Modality with some value (GM, SM). Where can I find a more specific meaning of the tag and the value (GM, SM, ...)?
The index of Part 3 contains a mapping from Tag to page. There you can find any reference to your tag in question. Looking at the referenced pages will show you how the tag is used, what it means in what situation and so on.
Well, the DICOM standard is your friend. But normally, any DICOM parser contains what is called a dictionary that maps tag keys (two short numbers) to the tag's name and type. Without such a dictionary it is not possible to parse DICOM data encoded with implicit VR.
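As an illustration of such a dictionary lookup, a sketch using pydicom's built-in data dictionary (any DICOM toolkit has an equivalent):

# Sketch: look up name, VR and VM of tag (0008,0060), "Modality".
from pydicom.datadict import dictionary_description, dictionary_VR, dictionary_VM
from pydicom.tag import Tag

tag = Tag(0x0008, 0x0060)
print(dictionary_description(tag))  # "Modality"
print(dictionary_VR(tag))           # "CS" (Code String)
print(dictionary_VM(tag))           # "1"

The defined terms for the value itself (GM for General Microscopy, SM for Slide Microscopy, CT, MR, and so on) are listed in Part 3, where the Modality attribute is described.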
Another thing you can look at is DICOM modules (also defined in the standard). Modules are groups of tags assembled by some common meaning (e.g. patient data, study data, equipment, image). Then you know, for instance, that for a CR image one must put together the patient, study, series, exposure, image plane, etc. modules in order to obtain a valid data set.
I would recommend the book Digital Imaging and Communications in Medicine before reading the DICOM standard; it will ease your suffering.
You can also find more information in DICOM PS 3, as well as in Tag Info.
The list of DICOM tags is located in chapter 6 of DICOM Standard PS 3.6 (part 6). This list includes the DICOM element name, tag number, Value Representation (VR) and Value Multiplicity (VM) of the tag value. The tag value encoding rules are located in section 6 of PS 3.5 (part 5); see table 6.2-1.
You can download the DICOM standard from http://medical.nema.org/standard.html
I have devices moving across the entire country that report their GPS positions back to me. What I would like to do is have a system that maps these coordinates to a named area.
I see two approaches to this:
Have a database that defines areas as polygons stretching between various GPS coords.
Use some form of webservice that can provide the info for me.
Either will be fine. It doesn't have to be very accurate at all, as I only need to know the region involved so that I know which regional office to call if something goes wrong with the device.
In the first approach, how would you build an SQL table that contained the data? And what would be your approach for matching a GPS coordinate to one of the defined areas? There wouldn't be many areas to define, and they'd be quite large, so manually inputting the values defining the areas wouldn't be a problem.
In the case of the second approach, does anyone know a way of programmatically pulling this info off the web on demand? (I'd probably go for Perl WWW::Mechanize in this case.) "Close to Somecity" would be enough.
-
PS: This is not a "do the work for me" kind of question, but more of a brainstorming request. Pseudo-code is fine. General theorizing on the subject is also fine.
In the first approach, how would you build an SQL table that contained the data? And what would be your approach for matching a GPS coordinate to one of the defined areas?
Assume an area is defined as a closed polygon.
You match a GPS coordinate by calling a point-in-polygon method, for example:
boolean isInside = polygon.contains(latitude, longitude);
If you have only a few polygons you can do a brute-force search through all of them.
If you have many of them, each with (tens of) thousands of points, then you want to use a spatial index, like a quadtree or k-d tree, to reduce the search to the relevant polygons.
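A sketch of the same idea in Python, assuming the shapely package and hand-entered region outlines (the names and coordinates below are made up); each region could just as well be stored in a table as (region_name, vertex_order, lon, lat) rows and rebuilt at startup:

# Sketch: map a GPS fix to a named region with point-in-polygon tests.
from shapely.geometry import Point, Polygon

# Hypothetical regions, defined as (longitude, latitude) vertex lists.
regions = {
    "north_office": Polygon([(5.0, 58.0), (9.0, 58.0), (9.0, 63.0), (5.0, 63.0)]),
    "south_office": Polygon([(5.0, 55.0), (9.0, 55.0), (9.0, 58.0), (5.0, 58.0)]),
}

def region_for(lon, lat):
    p = Point(lon, lat)
    for name, poly in regions.items():
        if poly.contains(p):
            return name
    return None

print(region_for(6.5, 59.2))  # -> north_office

With only a handful of large regions a linear scan like this is fine; with many polygons you would put them in a spatial index (shapely ships an STRtree), exactly as described above.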
This process is called reverse geocoding. Many service providers, such as Google, Yahoo, and Esri, offer services that let you do this.
They will return the closest point of interest or address, but you can keep just the administrative level you are interested in.
Check the terms of use to see which service is compatible with your intended usage.
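For example, a hedged sketch against Google's reverse geocoding web service (the endpoint and the latlng/key parameters are the documented ones, but check the current docs and terms; the API key is a placeholder):

# Sketch: reverse-geocode a fix and keep only the administrative area (state/region).
import requests

def region_name(lat, lon, api_key):
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"latlng": f"{lat},{lon}", "key": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    for result in resp.json().get("results", []):
        for comp in result.get("address_components", []):
            if "administrative_area_level_1" in comp.get("types", []):
                return comp["long_name"]
    return None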
Is there any way to achieve autocompletion similar to http://maps.google.com, where it starts making suggestions even if I type a single letter?
I know there's the google.maps.places.Autocomplete class; however, I definitely need suggestions restricted to a single country, and this does not seem possible with this API due to the lack of settings.
Thanks
You should specify the bounds parameter in the request.
bounds is a google.maps.LatLngBounds object specifying the area in which to search for Places. The results are biased towards, but not restricted to, Places contained within these bounds.
see: http://code.google.com/intl/fr/apis/maps/documentation/javascript/places.html#places_autocomplete
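If a server-side sketch is any help, the same suggestions are reachable over HTTP via the Place Autocomplete web service (shown here in Python; parameter names as documented, the API key is a placeholder). There, components=country:xx restricts suggestions to a single country, which is closer to what you asked for than bounds biasing:

# Sketch: country-restricted place suggestions via the Place Autocomplete web service.
import requests

def suggest(text, api_key, country="fr"):
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/place/autocomplete/json",
        params={
            "input": text,                       # what the user has typed so far
            "components": f"country:{country}",  # restrict results to one country
            "key": api_key,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return [p["description"] for p in resp.json().get("predictions", [])]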
I have looked at some of the DICOM standard and the Wikipedia page (and all DICOM topics on SO), but without really digging into the file structure doc (ugh), I'm left confused about what exactly is stored inside a DICOM file for a scan comprising 3D/4D/5D data. I only want an overview.
Let's take MRI as an example. Does the DICOM file contain
a set of the raw 2D images taken from various angles
a stack of slices forming a 3D voxel dataset
a full 3D dataset
In other words, does DICOM include any post-processing on the raw images captured by the imaging machine?
As for 4D, presumably it's simply a collection of multiple 3D datasets, where each 'frame' is a separate dataset?
The DICOM "standard" has so many different options in it, along with vendor extensions, that most things are possible.
That said, most 3D DICOM datasets that I have encountered are 2D image stacks. For post-processing, it depends solely on the imaging machine.
If you're looking for a freely available DICOM library, you can try out GDCM, I've been fairly happy with it.
The original DICOM MR format holds a single 2D image. That format is still widely supported and is quite likely what you have. If so, you'll want to gather all of the images in a series to construct your volume.
A newer, enhanced MR image format is able to store an entire volume as a multi-frame image. If it does, that's your volume. I don't recall whether a single enhanced MR object has to be the entire series or not.
You can tell them apart by looking at the DICOM attribute SOP Class UID (0008,0016):
Original MR: 1.2.840.10008.5.1.4.1.1.4
Enhanced MR: 1.2.840.10008.5.1.4.1.1.4.1
Above, when I said that you might need to gather "all of the images in a series" to build a volume, that means you are using the images that share the same value for the attribute Series Instance UID (0020,000E).
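A rough sketch of that gathering step (assuming Python with pydicom and numpy, and classic single-frame files in one directory; the directory name is made up):

# Sketch: group single-frame MR images by series and stack them into volumes.
import glob
from collections import defaultdict

import numpy as np
import pydicom

MR_IMAGE = "1.2.840.10008.5.1.4.1.1.4"       # original single-frame MR
ENHANCED_MR = "1.2.840.10008.5.1.4.1.1.4.1"  # multi-frame Enhanced MR

series = defaultdict(list)
for path in glob.glob("study/*.dcm"):
    ds = pydicom.dcmread(path)
    if ds.SOPClassUID == ENHANCED_MR:
        print(path, "already a volume:", ds.pixel_array.shape)  # frames x rows x cols
    elif ds.SOPClassUID == MR_IMAGE:
        series[ds.SeriesInstanceUID].append(ds)

for uid, slices in series.items():
    # Order along the slice direction; the z of Image Position (Patient) is a
    # common proxy for axial data (projecting onto the slice normal is more robust).
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices])
    print(uid, volume.shape)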
Notes:
DICOM purists will prefer the term "Image Information Object Definition" instead of the word "format" as I've been using it.
DICOM purists will object to saying that SOP Class UID defines the image format. For your purposes, it will work.