I am new to the field of medical imaging and trying to solve this (potentially basic) problem. For a machine learning purpose, I am trying to standardize and normalize a library of DICOM images, to ensure that all images have the same rotation and are at the same scale (e.g. in mm). I have been playing around with the Mango viewer, and understand that one can create transformation matrices that might be helpful in this regard. However, I have the following basic questions:
I would have thought that a scaling of the image would have changed the pixel spacing in the image header. Does this tag not provide the distance between pixels, and should this not change as a result of scaling?
What is the easiest way to standardize a library of images (ideally in Python)? Is it possible, and should one, extract a mean pixel spacing across all images and then scale all images to match that mean? Or is there a smarter way to ensure consistency in scaling and rotation?
Many thanks in advance, W
Does this tag not provide the distance between pixels, and should this not change as a result of scaling?
Think of the image voxels as fixed units of space which are sampling your image. When you apply your transform, you are translating/rotating/scaling your image around within these fixed units of space. That is, the size and shape of the voxels don't change; they just sample different parts of your image.
You can resample your image by making your voxels bigger or smaller or changing their shape (pixel spacing), but this can be independent of the transform you are applying to the image.
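For example, here is a minimal sketch with pydicom and scipy (the file name and target spacing are placeholders, not from the original question): read the PixelSpacing tag, then resample the pixel array so that each pixel covers the desired number of millimetres. If you write the result back to DICOM, you would also update PixelSpacing accordingly.

import pydicom
import numpy as np
from scipy.ndimage import zoom

target_spacing = (1.0, 1.0)                    # desired spacing in mm (row, col)

ds = pydicom.dcmread("slice.dcm")              # placeholder file name
spacing = [float(v) for v in ds.PixelSpacing]  # tag (0028,0030): [row, col] in mm
img = ds.pixel_array.astype(np.float32)

# Zoom factors: how much to stretch the grid so that the new pixels
# cover target_spacing millimetres each.
factors = (spacing[0] / target_spacing[0], spacing[1] / target_spacing[1])
resampled = zoom(img, factors, order=1)        # linear interpolation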
What is the easiest way to standardize a library of images (ideally in Python)?
One option is FSL-FLIRT, although it only accepts data in NIFTI format, so you'd have to convert your DICOMs to NIFTI. There is also this Python interface to FSL.
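For example, using nipype's FSL wrapper (one possible Python interface; whether it is the one meant above, I'm not sure, and the file names are placeholders). FSL itself must be installed and on the PATH:

from nipype.interfaces import fsl

flt = fsl.FLIRT()
flt.inputs.in_file = "moving.nii.gz"        # image to be registered
flt.inputs.reference = "reference.nii.gz"   # fixed/reference image
flt.inputs.out_file = "registered.nii.gz"
flt.inputs.out_matrix_file = "xfm.mat"      # resulting affine transform
flt.inputs.dof = 6                          # rigid-body: rotation + translation only
res = flt.run()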
Is it possible, and should one, extract a mean pixel spacing across all images and then scale all images to match that mean? Or is there a smarter way to ensure consistency in scaling and rotation?
I think you'd just have to pick a reference image to register all your other images to. There's no right answer: picking the highest-resolution image/voxel dimensions, or an average, or resampling into some other set of dimensions all sound reasonable.
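As a rough sketch of that approach with SimpleITK (file names are placeholders; the identity transform below would be replaced by the result of an actual registration), you can resample every image onto the grid of the chosen reference so that spacing, size and orientation all match:

import SimpleITK as sitk

reference = sitk.ReadImage("reference.nii.gz")

def resample_to_reference(path):
    moving = sitk.ReadImage(path)
    return sitk.Resample(
        moving,
        reference,            # defines output spacing, size, origin, direction
        sitk.Transform(),     # identity; replace with a registration result
        sitk.sitkLinear,      # interpolator
        0.0,                  # default value for voxels outside the input
        moving.GetPixelID(),
    )

resampled = resample_to_reference("subject_01.nii.gz")
sitk.WriteImage(resampled, "subject_01_resampled.nii.gz")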
Related
I have developed a framework in R to automatically measure vegetation structure variables from whiteboard photos taken on grasslands for ecology-related studies. Until now we have preprocessed the images by hand, but now we need to automate the rotation and cropping of the images.
The idea is the following: use reference marks on the whiteboard, and detect these markings to rotate and crop the original photographs. I need help detecting the reference markings. After knowing the positions of the reference markings (centroids), we can calculate the coordinates/pixels where to crop the image. In the end, we want to get a picture like that.
We can use some special colour for the markings, but these can be obstructed by the vegetation. The bottom of the whiteboard is always obstructed; the cropped part (without the reference markings) should be 25×100 cm.
Possibly edge detection could be a solution. I'm only familiar with R programming.
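Something along these lines is what I have in mind, sketched in Python/OpenCV for illustration (I would ultimately need it in R, and the HSV bounds and file name are just placeholders): threshold on the marker colour, then take the centroids of the detected blobs.

import cv2
import numpy as np

img = cv2.imread("whiteboard.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Keep only pixels whose colour matches the reference markings.
lower = np.array([100, 120, 70])     # placeholder lower HSV bound
upper = np.array([130, 255, 255])    # placeholder upper HSV bound
mask = cv2.inRange(hsv, lower, upper)

# Centroids of the detected marker blobs (OpenCV 4.x return signature).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
centroids = []
for c in contours:
    m = cv2.moments(c)
    if m["m00"] > 0:
        centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

# From the centroids one can estimate the rotation angle and the crop rectangle.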
I have 2 medical image datasets for a given patient, each acquired at the same time, each with a different modality. The frame of reference or coordinate space is different for each dataset (and I don't know the origins). One dataset has smaller physical dimensions than the other, and the voxel sizes, as well as the number of frames, are also different. I want to resample and register the images; does it matter which I do first?
It's better if you resize the images first. This is because image registration algorithms will try to find a transformation that optimizes the match of the images. If you know the proper size of the images, it will be easier for the algorithms to find such a transformation.
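For example, a small sketch of bringing both datasets to a common voxel spacing before registering them (the spacings and array shapes below are made up for illustration):

import numpy as np
from scipy.ndimage import zoom

def resample_to_spacing(volume, current_spacing, target_spacing):
    # Resample a 3D array so its voxels are target_spacing (mm) apart.
    factors = np.asarray(current_spacing) / np.asarray(target_spacing)
    return zoom(volume, factors, order=1)

ct_volume = np.random.rand(256, 256, 100)   # placeholder data, 0.8 x 0.8 x 2.0 mm voxels
mr_volume = np.random.rand(192, 192, 160)   # placeholder data, 1.0 x 1.0 x 1.0 mm voxels

ct_iso = resample_to_spacing(ct_volume, (0.8, 0.8, 2.0), (1.0, 1.0, 1.0))
mr_iso = resample_to_spacing(mr_volume, (1.0, 1.0, 1.0), (1.0, 1.0, 1.0))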
I am trying to enlarge a point cloud data set. Suppose I have a point cloud data set consisting of 100 points and I want to enlarge it to, say, 5 times its size. I am studying a specific structure which is very small, so I want to zoom in and do some computations. I want something like imresize() in Matlab.
Is there any function to do this? What does the resize() function do in PCL? Any idea how I can do it?
Why would you need this? Points are just numbers, regardless of whether they are 1 or 100, as long as all of them are on the same scale and in the same coordinate system. Their size on the screen is just a visual representation; you can zoom in and out as you wish.
You want them to be a thousandth of their original value (e.g. a millimeters -> meters change)? Divide them by 1000.
You want them spread out in a 5-times-larger space in that particular coordinate system? Multiply their coordinates by 5. But even so, their visual representation will look exactly the same on the screen. The data remains basically the same; the points will not be resized per se, their numeric representation will just change a bit. It is the simplest affine transform, just a single multiplication.
You want to have a finer or coarser resolution for your numeric representation? Or a different range? Change your data type accordingly.
That is, if you deal with a single set.
If you deal with different sets, say, recorded with different kinds of sensors, and the numeric representations differ a bit (there are angles between the coordinate systems, mm vs cm scale, etc.), you just have to find the transformation from one coordinate system to the other and apply it to the first one.
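As a tiny illustration of "just multiply the coordinates" (with numpy; in PCL you could equally apply a scaled identity matrix via pcl::transformPointCloud):

import numpy as np

points = np.random.rand(100, 3)   # 100 points, xyz
scaled = points * 5.0             # spread out 5 times; a single multiplication
in_metres = points / 1000.0       # e.g. millimetres -> metres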
Since you want to increase the number of points while preserving shape/structure of the cloud, I think you want to do something like 'upsampling'.
Here is another SO question on this.
PCL offers a class for bilateral upsampling.
And, as always, Google gives you a lot of hints on this topic.
Besides increasing the allocated memory (what Ziker mentioned; that's not what you want, right?) or zooming in in the visualization, you could just rescale your point cloud.
This can be done by multiplying each point's coordinates by a constant factor or by applying an affine transformation. So you can, e.g., switch from mm to m.
If I understand your question correctly:
If you have defined your cloud like this:
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
then you can in fact call resize:
cloud->points.resize (cloud->width * cloud->height);
Note that resize() does nothing more than allocate more memory for the variable, so after resizing the original data remain in the cloud. If you want an empty resized cloud, don't forget to also call cloud->clear();
If you just want to zoom some PCD for visual purposes (i.e. you can't see the shape of the cloud because it's too small), why don't you use PCL Visualization and zoom by scrolling up/down?
I have a graph like:
I would like to generate a set of (x,y) pairs that correspond to points of this graph.
Maybe one for each horizontal pixel.
How would I go about doing this?
If I had the image in uncompressed bitmap format, maybe cropped to the actual graph, I could examine each vertical strip for the blackest point...
I would prefer to work in Python, but I'm interested in any technique.
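Roughly what I have in mind, sketched with Pillow and numpy (the file name is a placeholder, and this assumes the image is already cropped to the plotting area):

import numpy as np
from PIL import Image

img = np.asarray(Image.open("graph.png").convert("L"))   # grayscale, shape (rows, cols)

points = []
for x in range(img.shape[1]):
    column = img[:, x]
    y = int(column.argmin())      # darkest pixel in this vertical strip
    if column[y] < 128:           # skip columns with no dark ink at all
        points.append((x, y))

# points are in pixel coordinates; mapping them to data coordinates
# still requires knowing the axis scale.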
I answered a question like this a while back. It should be fairly easy to detect the grid; from there you can get the pixel coordinates relative to the grid. However, it wasn't clear how to extract the numbers, which you need to do in order to get the scale of the grid; it might be possible fairly easily, though, if you can match the font and font size (which might be possible via scaling). Otherwise, you'd have to enter the numbers manually.
To extract the grid, you'd start from the top right and move diagonally until you find the start of the grid. From there you can follow the vertical and horizontal lines (of the grid) until they end. This should allow you to say with fairly high probability where the outer rectangle of the grid is and what the x and y intervals of the grid are in terms of pixels. The blackest parts within the grid should do for finding the curve, but it may require some interpolation depending on how many data points you need/want.
It may also be useful to look into techniques for reversing anti-aliasing effects, although the uncompressed bitmap image may not need it.
I'm not sure this is the right place but here I go:
I have a database of 300 pictures in high resolution. I want to compute the PCA on this database, and so far here is what I do:
- reshape every image as a single column vector
- create a matrix of all my data (500x300)
- compute the average column and subtract it from my matrix, this gives me X
- compute the correlation C = X'X (300x300)
- find the eigenvectors V and eigenvalues D of C
- the PCA matrix is given by XV*D^-1/2, where each column is a principal component
This is great and gives me correct components.
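For reference, a minimal numpy sketch of the procedure above (the 500x300 shape mirrors the numbers in the question: 500 pixels per image, 300 images; random data stands in for the real images):

import numpy as np

X = np.random.rand(500, 300)               # each column is one flattened image
X = X - X.mean(axis=1, keepdims=True)      # subtract the average image

C = X.T @ X                                # 300x300 Gram/correlation matrix
eigvals, V = np.linalg.eigh(C)             # eigh, since C is symmetric
order = np.argsort(eigvals)[::-1]          # sort by decreasing eigenvalue
eigvals, V = eigvals[order], V[:, order]

keep = eigvals > 1e-10                     # drop near-zero eigenvalues (rank deficiency)
components = X @ V[:, keep] / np.sqrt(eigvals[keep])   # columns = principal components
# Note: each eigenvector is only defined up to sign, so a component and
# its negative are equally valid solutions.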
Now what I'm doing is doing the same PCA on the same database, except that the images have a lower resolution.
Here are my results, low-res on the left and high-res on the right. As you can see, most of them are similar, but SOME images are not the same (the ones I circled).
Is there any way to explain this? I need my algorithm to produce the same images, but with one set in high-res and the other one in low-res; how can I make this happen?
thanks
It is very possible that the filter you used could have done a thing or two to some of the components. After all, lower-resolution images don't contain the higher frequencies that also contribute to which components you're going to get. If the component weights (lambdas) for those images are small, there's also a good possibility of errors.
I'm guessing your component images are sorted by weight. If they are, I would try to use a different pre-downsampling filter and see if it gives different results (essentially obtain lower resolution images by different means). It is possible that the components that come out differently have lots of frequency content in the transition band of that filter. It looks like images circled with red are nearly perfect inversions of each other. Filters can cause such things.
If your images are not sorted by weight, I wouldn't be surprised if the ones you circled have very little weight, and that could simply be a computational precision error or something of that sort. In any case, we would probably need a little more information about how you downsample and how you sort the images before displaying them. Also, I wouldn't expect all images to be extremely similar, because you're essentially getting rid of quite a few frequency components. I'm pretty sure it wouldn't have anything to do with the fact that you're stretching out images into vectors to compute PCA, but try to stretch them out in a different direction (take columns instead of rows or vice versa) and see if that changes anything. If it changes the result, then perhaps you might want to perform the PCA somewhat differently; I'm not sure how.
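If it helps, a quick way to obtain the low-resolution set "by different means", as suggested above, is Pillow's resize() with different filters (the file name and target size are placeholders):

from PIL import Image

img = Image.open("face_001.png")
small_bilinear = img.resize((64, 64), resample=Image.BILINEAR)
small_lanczos = img.resize((64, 64), resample=Image.LANCZOS)
small_box = img.resize((64, 64), resample=Image.BOX)

# Running the PCA on each set separately shows how sensitive the
# components are to the choice of anti-aliasing filter.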