What is the difference between DICOM Overlay and DICOM Annotation?
The hint is in the name: an Annotation is one of the embedded objects describing the image, while an Overlay is only a chunk of raster pixel data.
The only term that DICOM explicitly defines is "Overlay", which refers to a binary raster image that is either stored in unused bits of the pixel data (this encoding is retired, but you still find overlays stored this way in practice) or in separate attributes.
Annotation refers to anything else, including:
header attributes used to label the images (like patient's name, study date)
softcopy presentation state objects referring to the annotated image and defining simple vector graphics to draw with the image
I'd say that the term "annotation" refers to everything but overlays.
Overlay is also the common term for the text display of header data on top of the images; this is configurable on the modality and is usually site- or user-specific. Annotations are the user's mark-up of the image: arrows, ROIs, rulers, etc.
An Overlay can also be a DICOM object in its own right, and is often called that in CAD, since historically DICOM group 6000 was used for these objects.
Mostly, annotations cover comment text and ROI markings, while overlays cover patient details and DICOM details such as image type, compression, etc.
An overlay is a bitmap which is "overlaid" on top of the raw pixel data. Overlays are described by the overlay module.
There are two ways of storing overlays in DICOM. The first is to squeeze them into unused bits of the raw pixel data, which is possible if, for example, your Bits Allocated is greater than your Bits Stored. This method of storing overlays is retired. The second approach is to use the separate overlay attributes.
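To make the second approach concrete, here is a minimal sketch, assuming pydicom and NumPy and an overlay stored in the standard (6000,xxxx) attributes; the file name is hypothetical.

```python
import numpy as np
import pydicom

# Hypothetical file containing an overlay plane in group 6000.
ds = pydicom.dcmread("image_with_overlay.dcm")

rows = ds[0x6000, 0x0010].value    # Overlay Rows
cols = ds[0x6000, 0x0011].value    # Overlay Columns
packed = ds[0x6000, 0x3000].value  # Overlay Data: 1 bit per pixel, packed into bytes

# Overlay bits are packed least-significant-bit first; unpack and reshape
# into a 2D mask of 0s and 1s with the same dimensions as the overlay plane.
bits = np.unpackbits(np.frombuffer(packed, dtype=np.uint8), bitorder="little")
overlay = bits[: rows * cols].reshape(rows, cols)
```

If I recall correctly, recent pydicom releases also ship a helper that does this unpacking for you, but the manual version makes it clear what is actually stored.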
You will notice, in Section C.9.2.1.1 of the DICOM standard, it says:
There are two specific types of overlays. The type is specified in this Attribute [referring to the Overlay Type attribute].
A Region of Interest (ROI) is a specific use of an Overlay. The overlay bits corresponding to all the pixels included in the ROI shall be set to 1. All other bits are set to 0. This is used to specify an area of the image of particular interest.
A Graphics overlay may express reference marks, graphic annotation, or bit mapped text, etc. A Graphics overlay may be used to mark the boundary of a ROI. If this is the case and the ROI statistical parameters are used, they will only refer to the pixels under the boundaries, not those in the included regions.
The overlay bits corresponding to all the pixels included in the Graphics shall be set to 1. All other bits are set to 0.
So as you can see, the concepts of an overlay and annotation are not so cleanly separated. An overlay is a way of storing an image that should be overlaid on top of the raw data. An annotation is some sort of text or comments about the image, which may be stored in an overlay, or it may be stored elsewhere.
For example, an annotation may be burned into the raw pixel data directly (see the Burned in Annotation Attribute).
Related
My aim is to add some annotations (for example, lines, arrows, shapes, or textual annotations) to a DICOM image with an opportunity to hide them and bring back when needed.
I have found two ways to accomplish this. One is to use the Overlay Plane module, but it does not fully satisfy our needs, as our software requires the annotations to be in color. So I found another option: the various Softcopy Presentation State IODs.
As far as I understand by now, the Grayscale Softcopy Presentation State IOD is the most widely used. However, the word "grayscale" confuses me: even though it is grayscale, the Graphic Annotation module contains attributes that carry the color of annotations, for instance Text Color CIELab Value and Pattern On Color CIELab Value.
There are also other Softcopy Presentation State IODs: the Color Softcopy Presentation State IOD, the Pseudo-Color Softcopy Presentation State IOD, and others.
Using the Grayscale Softcopy Presentation State IOD I have created a presentation state file (with the .pre extension). I tested it with the Weasis DICOM viewer and it worked fine: all the needed annotations were in the right places and almost all colors were correct.
Now I am searching for information on how .pre files could be passed to a hypothetical PACS server, but that is not the topic of this post. Here is what I found:
The grayscale softcopy presentation state refers to the grayscale image transformations that are to be applied in an explicitly defined manner to convert the stored image pixel data values in a Composite Image Instance to presentation values (P-Values) when an image is displayed on a softcopy device.

The color and pseudo-color softcopy presentation states refer to the color image transformations that are to be applied in an explicitly defined manner to convert the stored image pixel data values in a Composite Image Instance to Profile Connection Space values (PCS-Values) when an image is displayed on a softcopy device.
So I am a little bit confused. What kind of softcopy presentation state should be used in this situation?
The word "Grayscale" and "Color" refer to the images handled (i.e. referenced) by the presentation state object. Apart from vector graphics that you are aiming at, the presentation state also defines (among other aspects like shutters, 2D transformations etc.) the transformation of pixel values stored in the DICOM object to so-called P-Values which can be considered as display-independent intensity values which can be displayed on a calibrated device. This obviously depends on the format of the pixel data in the source objects, i.e. Grayscale/Color.
So the choice of the presentation state object you want to use solely depends on the type of images it is supposed to handle.
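To make that concrete, here is a small sketch, assuming pydicom; the function name and file path are illustrative, the SOP Class UIDs are the standard ones for Grayscale and Color Softcopy Presentation State Storage, and the pseudo-color and blended variants are ignored for simplicity.

```python
import pydicom

# Standard SOP Class UIDs for the two storage classes discussed above.
GRAYSCALE_SPS = "1.2.840.10008.5.1.4.1.1.11.1"  # Grayscale Softcopy Presentation State Storage
COLOR_SPS = "1.2.840.10008.5.1.4.1.1.11.2"      # Color Softcopy Presentation State Storage

def presentation_state_sop_class(image_path):
    """Pick the presentation state type matching the referenced image's pixel format."""
    ds = pydicom.dcmread(image_path, stop_before_pixels=True)
    if ds.PhotometricInterpretation in ("MONOCHROME1", "MONOCHROME2"):
        return GRAYSCALE_SPS  # grayscale source pixels -> GSPS
    return COLOR_SPS          # color source pixels -> Color Softcopy Presentation State
```

Note that "grayscale" describes the pixel pipeline of the referenced image, not the annotations: the Graphic Annotation module's CIELab color attributes still apply in a GSPS.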
I was doing a project on DICOM, and I found the DICOM standard documents (http://medical.nema.org/standard.html) are really a mess for programmers. For example, when I come across the tag (0008,0060), I look into 11_03pu.pdf (http://medical.nema.org/Dicom/2011/11_03pu.pdf). It is said to be the Modality, with some values (GM, SM, ...). Where can I find a more specific meaning for the tag and the values (GM, SM, ...)?
The index of Part 3 contains a mapping from Tag to page. There you can find any reference to your tag in question. Looking at the referenced pages will show you how the tag is used, what it means in what situation and so on.
Well, the DICOM standard is your friend. But normally, any DICOM parser contains what is called a dictionary that maps tag keys (a pair of 16-bit numbers) to the tag name and type (VR). Without such a dictionary it is not possible to parse DICOM data encoded with an implicit VR transfer syntax.
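For example, pydicom ships such a dictionary and lets you query it directly; a small sketch using the tag from the question:

```python
from pydicom.datadict import dictionary_description, dictionary_VM, dictionary_VR

tag = 0x00080060  # the (0008,0060) tag from the question
print(dictionary_description(tag))  # Modality
print(dictionary_VR(tag))           # CS (Code String)
print(dictionary_VM(tag))           # 1
```

The defined terms themselves (GM for General Microscopy, SM for Slide Microscopy, and so on) are listed with the Modality attribute's description in Part 3, which is where the index mentioned above will point you.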
Another thing you can look at is DICOM modules (also defined in the standard). Modules are groups of tags assembled by some global meaning (e.g. patient data, study data, equipment, image). Then you know, for instance, that for a CR image one must put together the patient, study, series, exposure, image plane, etc. modules in order to obtain a valid data set.
I would recommend the book Digital Imaging and Communications in Medicine before reading the DICOM standard; it will ease your suffering.
You can also find more information in DICOM PS 3, as well as in Tag Info.
The list of DICOM tags is located in chapter 6 of DICOM standard PS 3.6 (part 6). This list includes each element's name, tag number, Value Representation (VR), and Value Multiplicity (VM). The encoding rules for tag values are located in section 6 of PS 3.5 (part 5); see table 6.2-1.
You can download the DICOM standard from http://medical.nema.org/standard.html
I have looked at some of the DICOM standard and the Wikipedia page (and all the DICOM topics on SO), but without really digging into the file structure doc (ugh), I'm left confused about what exactly is stored inside a DICOM file for a scan comprising 3D/4D/5D data. I only want an overview.
Let's take MRI as an example. Does the DICOM file contain
a set of the raw 2D images taken from various angles
a stack of slices forming a 3D voxel dataset
a full 3D dataset
In other words, does DICOM include any post-processing on the raw images captured by the imaging machine?
As far as 4D, presumably it's simply a collection of multiple 3D datasets, each 'frame' is a separate dataset?
The DICOM "standard" has so many different options in it, along with vendor extensions, that most things are possible.
That said, most 3D DICOM datasets that I have encountered are 2D image stacks. For post-processing, it depends solely on the imaging machine.
If you're looking for a freely available DICOM library, you can try out GDCM, I've been fairly happy with it.
The original DICOM MR format holds a single 2D image. That format is still widely supported and is quite likely what you have. If so, you'll want to gather all of the images in a series to construct your volume.
A newer, enhanced MR image format is able to store an entire volume as a multi-frame image. If it does, that's your volume. I don't recall whether a single enhanced MR object has to be the entire series or not.
You can tell them apart by looking at the DICOM attribute SOP Class UID (0008,0016):
Original MR: 1.2.840.10008.5.1.4.1.1.4
Enhanced MR: 1.2.840.10008.5.1.4.1.1.4.1
Above, when I said that you might need to gather "all of the images in a series" to build a volume, that means you are using the images that share the same value for the attribute Series Instance UID (0020,000E).
Notes:
DICOM purists will prefer the term "Image Information Object Definition" instead of the word "format" as I've been using it.
DICOM purists will object to saying that SOP Class UID defines the image format. For your purposes, it will work.
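Putting the two checks together, here is a rough sketch, assuming pydicom and NumPy and a directory of files from one study; the directory name is hypothetical, and sorting by the z component of Image Position (Patient) is a simplification.

```python
import glob
from collections import defaultdict

import numpy as np
import pydicom

MR_STORAGE = "1.2.840.10008.5.1.4.1.1.4"             # original single-frame MR
ENHANCED_MR_STORAGE = "1.2.840.10008.5.1.4.1.1.4.1"  # multi-frame enhanced MR

series = defaultdict(list)
for path in glob.glob("study_dir/*.dcm"):  # hypothetical directory of files
    ds = pydicom.dcmread(path)
    if ds.SOPClassUID == ENHANCED_MR_STORAGE:
        volume = ds.pixel_array  # a multi-frame object is already (frames, rows, cols)
    elif ds.SOPClassUID == MR_STORAGE:
        series[ds.SeriesInstanceUID].append(ds)

# Single-frame case: sort each series along the slice axis and stack into a volume.
# A robust version projects Image Position (Patient) onto the slice normal derived
# from Image Orientation (Patient) instead of just taking the z component.
for uid, slices in series.items():
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array for s in slices])
```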
The DICOM images have "Slice location" parameter recorded in addition to the "Slice Thickness".
Question: "Slice location" in where? I understand it's in the body depth - but it must be in images as well, right?
I think a series must or may have a corresponding series in which we can find a reference of the "Slice Location".
If I am right, How to find that those images? And then how to establish the point of the slice in the corresponding images?
As well, if you know a well written refference guide to dicom image structure, please share.
Thanks a lot.
The standard specifies that the unit for Slice Location is millimeters. Usually there is a special scout image, referenced by the slices, with a grid-like overlay that shows where each slice is mapped.
And a suggestion: don't expect it to work "in theory" without trying it on real samples. DICOM is rather a collection of all the standards that existed at the time it was created. Also, many modalities use their own private tags for additional info. If you need to process the output of one specific modality, you are lucky: you just have to find its DICOM Conformance Statement. If it's for a general viewer... then good luck :)
EDIT: Also, CT series usually have one image with LOCALIZER in the Image Type tag that references the rest of the images in the series that are slices (or is referred to by them).
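A quick way to spot such a localizer/scout image, assuming pydicom; the file names are hypothetical:

```python
import pydicom

for path in ["img001.dcm", "img002.dcm"]:  # hypothetical file names
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    # Image Type (0008,0008) is multi-valued; for CT the scout image
    # carries LOCALIZER as one of its values.
    if "LOCALIZER" in ds.ImageType:
        print(path, "is the scout/localizer image")
```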
Here's a page that contains some good information on DICOM:
http://www.thefullwiki.org/DICOM
As to your question: I think you're looking for the "image number" (Instance Number), which defines an order for the images. The "slice location" is a spatial coordinate giving the offset of the image slice, while the "image number" is an integer indicating the order of an image relative to the other images. YMMV; DICOM is a very loosely adhered-to standard.
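A small sketch of the difference, assuming pydicom; the file names are hypothetical placeholders for the single-frame files of one series:

```python
import pydicom

# Hypothetical inputs: the single-frame files of one series.
slices = [pydicom.dcmread(p) for p in ["slice001.dcm", "slice002.dcm"]]

# Order the images the way the scanner numbered them: Instance Number (0020,0013).
by_number = sorted(slices, key=lambda s: int(s.InstanceNumber))

# Order the images by their spatial offset in mm: Slice Location (0020,1041).
# Slice Location is optional; Image Position (Patient) is the more reliable
# spatial attribute when it is present.
by_location = sorted(slices, key=lambda s: float(s.SliceLocation))
```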
I suspect the slice position is for thick slab acquisitions, where the extents of each slice overlap with the next slice. The axial resolution of current scanners is high enough that this does not happen anymore. Hence the slice thickness is used more frequently now as opposed to slice position.
Please look at http://medical.nema.org/dicom/2003/03_10PU.PDF. Especially Annex A.
The method I usually follow is this: I access the DICOMDIR for the .dcm file paths, which can be found under tag (0004,1500). Once I have all the paths, I iterate through each .dcm image to access whatever property I want. The slice location might be a property of the .dcm image, but without knowing the modality it is quite difficult to figure out the tag.
dcmtk is a nice library written in C++ that can help you get specific tags or extract pixel data from .dcm images. You can use it to make your life easier.
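A rough Python equivalent of that workflow, assuming pydicom and a media set with a DICOMDIR at its root; the path is hypothetical:

```python
import os

import pydicom

root = "/media/cd"  # hypothetical path to the media root holding the DICOMDIR
dicomdir = pydicom.dcmread(os.path.join(root, "DICOMDIR"))

for record in dicomdir.DirectoryRecordSequence:
    if record.DirectoryRecordType != "IMAGE":
        continue
    # Referenced File ID (0004,1500) holds the path components of the .dcm file.
    parts = record.ReferencedFileID
    if isinstance(parts, str):  # a single component comes back as a plain string
        parts = [parts]
    path = os.path.join(root, *parts)
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    print(path, ds.get("SliceLocation", "no Slice Location"))
```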
Question:
On
http://www.bbc.co.uk/news/10150007
one can see a map of European countries colored according to state debt/deficit.
Now I would have already found it useful several times if I was able to do such a thing myself, for example to visualize regional sales data.
Does anybody know:
Is there any (open-source) tool with which I can color a world/continental/regional map according to colors mapped to values in a database?
Or any tool that can construct a custom map?
Or, if there is no such thing, how would one do it oneself?
Get the outlines of countries from somewhere, make everything outside the country outlines transparent, set the coordinates and z-indices to stack several images on top of one another, replace the base color with the selected color in each image, and then merge the result into a single picture?
I normally do this in R. Here are a bunch of examples of how to do this in R.
I also played a bit with QGIS, and IIRC it can take input from a Postgres/PostGIS database.
The canonical commercial tool is ArcView, but it ends up being pricey.
The standard file format for maps is the ESRI Shapefile. These are actually collections of files, with the attributes stored in dBase IV format. Googling for 'shapefile viewer' will get you lots of tools.
There is also mapserver, which allows you to generate maps directly to the web.
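For the shapefile route described above, a minimal sketch in Python using geopandas/matplotlib (as an alternative to the R examples mentioned earlier); the shapefile name, the ISO_A3 join column, and the values are all hypothetical placeholders:

```python
import geopandas as gpd
import matplotlib.pyplot as plt

# Hypothetical inputs: a country-outline shapefile and per-country values
# pulled from a database into a plain dict (placeholder numbers).
countries = gpd.read_file("countries.shp")
value_by_iso = {"DEU": 1.0, "FRA": 2.0, "GRC": 3.0}

# Join the values onto the geometry via an ISO-code column assumed to exist
# in the shapefile, then let geopandas draw the choropleth.
countries["value"] = countries["ISO_A3"].map(value_by_iso)
countries.plot(column="value", cmap="OrRd", legend=True,
               missing_kwds={"color": "lightgrey"})
plt.title("Values by country (illustrative)")
plt.show()
```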