How to compute the volume of a single voxel of a NIfTI medical image?

I have loaded a NIfTI image file using the nibabel tool and played with some of its properties, but I have no idea how to compute the volume (in mm³) of a single voxel.

Here's the answer using NiBabel, as the OP asked:
import nibabel as nib
nii = nib.load('t1.nii.gz')
# get_zooms() returns the voxel dimensions in mm, straight from the header
sx, sy, sz = nii.header.get_zooms()
volume = sx * sy * sz
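One caveat: for a 4-D image, get_zooms() also returns the time step as a fourth value, so it is safer to keep only the first three. A minimal sketch:
import numpy as np
# keep only the spatial zooms; a 4-D NIfTI appends the time step as a 4th value
voxel_volume = float(np.prod(nii.header.get_zooms()[:3]))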

I am not a NiBabel expert, but I can recommend the SimpleITK package for Python instead. I often use it for reading NIfTI image files. It has a method GetSpacing() which returns the voxel spacing in mm.
import SimpleITK as sitk
# read image
im = sitk.ReadImage("/path/to/input/image.nii")
# get voxel spacing (for 3-D image)
spacing = im.GetSpacing()
spacing_x = spacing[0]
spacing_y = spacing[1]
spacing_z = spacing[2]
# determine volume of a single voxel
voxel_volume = spacing_x * spacing_y * spacing_z
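Once you have the voxel volume, a typical use is estimating the physical volume of a structure from a binary mask. A minimal sketch, where the > 0 threshold is purely illustrative:
import numpy as np
# hypothetical example: count foreground voxels and scale by the voxel volume
mask = sitk.GetArrayFromImage(im) > 0
total_volume_mm3 = voxel_volume * np.count_nonzero(mask)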

Related

Converting image array from RGB to HSL/HSV and back?

I read in colored JPG images using readJPEG() from the jpeg package, so I now have my images as three-dimensional arrays (width, height, channels) in R.
I want to convert these image arrays into the HSL or HSV color space, mutate the images and save them as JPGs in the RGB format again. However, as the images are quite large (5000 x 8000), it would be too time-consuming to loop through every single cell. I found the package OpenImageR to convert the image to the HSV color space quickly; however, I am confused by large negative values in the "saturation" channel. Also, the package contains no functions to convert the image back.
Is there any package to perform fast conversions from RGB to HSL or HSV (and back)? Or is there any other way to perform the conversion quickly?
These are my current attempts for converting in one direction, element-wise:
# load packages
library(jpeg)
library(plotwidgets)
# load image
img <- readJPEG(img_path)
img <- img * 255
# new empty image
img_new <- array(NA, dim = dim(img))
# this takes way too long
for (img_row in 1:dim(img)[1]) {
  for (img_col in 1:dim(img)[2]) {
    img_new[img_row, img_col, ] <- round(rgb2hsl(as.matrix(img[img_row, img_col, ])))
  }
}
# this also takes way too long
for (img_row in 1:dim(img)[1]) {
  img_new[img_row, , ] <- t(round(rgb2hsl(t(matrix(img[img_row, , ], ncol = 3)))))
}
# this also takes ages
rgb_hsl_fun <- function(x) {
  as.numeric(rgb2hsl(matrix(x)))
}
img_hsl <- apply(X = img, MARGIN = c(1, 2), FUN = rgb_hsl_fun)
The whole thing is quite simple to do.
Use the colorspace library for this.
Here is my original img.jpg file.
Here is the code.
library(jpeg)
library(colorspace)
#Reading a jpg file
img = readJPEG("img.jpg") * 255
#Row-by-row conversion
for (i in 1:dim(img)[1]) {
  #Convert to HSV format
  hsv = RGB(img[i,,1], img[i,,2], img[i,,3]) |> as("HSV")
  #Mutation of H, S, V components
  attributes(hsv)$coords[,"H"] = attributes(hsv)$coords[,"H"] / 2
  attributes(hsv)$coords[,"S"] = attributes(hsv)$coords[,"S"] * 0.998
  attributes(hsv)$coords[,"V"] = attributes(hsv)$coords[,"V"] - 1
  #Convert to RGB format and save to the current row.
  rgb = as(hsv, "RGB")
  img[i,,1] = attributes(rgb)$coords[,"R"]
  img[i,,2] = attributes(rgb)$coords[,"G"]
  img[i,,3] = attributes(rgb)$coords[,"B"]
}
#Save to JPG file
writeJPEG(img / 255, "img_hsv.jpg")
Just note that to get to the individual H, S, V (or R, G, B) components you have to use the coords attribute.
As you can see, my mutation of the components H, S, V was as follows:
H = H / 2
S = S * 0.998
V = V - 1
After this mutation, the original file looks like this.
However, if you prefer to carry out the mutation on the HLS palette, it is possible.
#Reading a jpg file
img = readJPEG("img.jpg") * 255
#Row-by-row conversion
for (i in 1:dim(img)[1]) {
  #Convert to HLS format
  hls = RGB(img[i,,1], img[i,,2], img[i,,3]) |> as("HLS")
  #Mutation of the H, L, S components
  attributes(hls)$coords[,"H"] = attributes(hls)$coords[,"H"] / 2
  attributes(hls)$coords[,"L"] = attributes(hls)$coords[,"L"] / 2
  attributes(hls)$coords[,"S"] = attributes(hls)$coords[,"S"] / 2
  #Convert to RGB format and save to the current row.
  rgb = as(hls, "RGB")
  img[i,,1] = attributes(rgb)$coords[,"R"]
  img[i,,2] = attributes(rgb)$coords[,"G"]
  img[i,,3] = attributes(rgb)$coords[,"B"]
}
#Save to JPG file
writeJPEG(img / 255, "img_hls.jpg")
Here is the image with H/2, L/2 and S/2 conversion.
Hope this is what you were looking for.
It would be wise to open an issue in the GitHub repository (in case there is a quick fix for the error in the HSV transformation). For the record, I'm the author and maintainer of the OpenImageR package.
I took another look at the code of the RGB_to_HSV function; as I mention at the top of the function in the Rcpp code, the implementation is based on the paper Analytical Study of Colour Spaces for Plant Pixel Detection, Pankaj Kumar and Stanley J. Miklavcic, 2018, Journal of Imaging (page 3 of 12, section 2.1.3).
The negative values in the saturation channel are most probably due to a mistake in the following line,
S(i) = 1.0 - (3.0 * s_val) * (R(i) + G(i) + B(i));
which actually (based on the paper) should have been:
S(i) = 1.0 - (3.0 * s_val) / (R(i) + G(i) + B(i));
(division by, rather than multiplication with, the last term)
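For context, the corrected line matches the standard saturation formula from the cited paper (assuming, as the variable name suggests, that s_val holds min(R, G, B)):
S = 1 - 3 * min(R, G, B) / (R + G + B)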
I uploaded the updated version to GitHub, and you can install it using
remotes::install_github('mlampros/OpenImageR')
and please report back whether it works, so that I can upload the new version to CRAN.
Note that the package does not include a transformation from HSV back to RGB (from what I understand, you want to modify the pixel values and then convert back to RGB).

How to check whether a given coordinate (lat, long) is in a raster image file (TIF file), and how to extract an N×N window around it?

Hi, I am doing research to complete a project.
The aim is, given a coordinate, to extract the matching image from among the raster files.
How do I check whether the coordinate I have, for example (51.3334198, 3.2973934), is in the raster image file k_01.tif? And if this coordinate is indeed in k_01.tif, how do I extract a small part of it, i.e. an N×N window?
My code:
import rasterio
src = rasterio.open('k_01.tif')
src.bounds
BoundingBox(left=145000.0, bottom=238000.0, right=162000.0, top=247000.0)
src.crs
CRS.from_epsg(31370)
Any ideas? Thanks in advance.
To check whether a location (lat, long) is within the raster image's bounds, first project it into the raster's CRS:
import pyproj
pp = pyproj.Proj(init='epsg:31370')
x,y = pp(3.2973934, 51.3334198) #long, lat sequence
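In newer versions of pyproj the init= keyword is deprecated; a minimal equivalent sketch using the Transformer API (pyproj >= 2.2):
from pyproj import Transformer
transformer = Transformer.from_crs("EPSG:4326", "EPSG:31370", always_xy=True)
x, y = transformer.transform(3.2973934, 51.3334198)  # lon, lat order with always_xy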
then you can use (x, y) to check against src.bounds:
src = rasterio.open(your_tiffile)
bnd = src.bounds
if bnd.left < x < bnd.right and bnd.bottom < y < bnd.top:
    print("inside")
else:
    print("outside")
To slice part of the array:
band1 = src.read(1)  # read band 1 as a (height, width) array
# rasters are usually stored with row 0 at the top; flip if you want row 0 at the bottom
band1_flip = band1[::-1, :]  # reverse the height (row) axis
# extract part of the image array
fr_row, to_row, fr_col, to_col = 0, 100, 0, 100  # you set the values
aslice = band1_flip[fr_row:to_row, fr_col:to_col]
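Alternatively, rasterio can do the point-to-pixel math for you. A minimal sketch of extracting an N×N window centred on the projected point (N = 100 is an arbitrary choice, and the window is not clipped at the raster edges here):
from rasterio.windows import Window
row, col = src.index(x, y)           # pixel (row, col) containing the point
n = 100                              # window size, chosen arbitrarily
window = Window(col - n // 2, row - n // 2, n, n)
patch = src.read(1, window=window)   # an N x N array around the point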

How to select a region of an image in bokeh

In a web app, I would like to let the user select a region of interest in a plotted image using the nice box/lasso selection tools of bokeh. I would then like to receive the selected pixels for further operations in Python.
For scatter plots, this is easy to do by analogy with the gallery:
import bokeh.plotting
import numpy as np

# data
X = np.linspace(0, 10, 20)

def f(x):
    return np.random.random(len(x))

# plot and add to document
fig = bokeh.plotting.figure(x_range=(0, 10), y_range=(0, 10),
                            tools="pan,wheel_zoom,box_select,lasso_select,reset")
plot = fig.scatter(X, f(X))
#plot = fig.image([np.random.random((10,10))*255], dw=[10], dh=[10])
bokeh.plotting.curdoc().add_root(fig)

# callback
def callback(attr, old, new):
    # easily access selected points:
    print(sorted(new['1d']['indices']))
    print(sorted(plot.data_source.selected['1d']['indices']))
    plot.data_source.data = {'x': X, 'y': f(X)}

plot.data_source.on_change('selected', callback)
however if I replace the scatter plot with
plot = fig.image([np.random.random((10,10))*255], dw=[10], dh=[10])
then using the selection tools on the image does not change anything in plot.data_source.selected.
I'm sure this is the intended behavior (and it makes sense too), but what if I want to select pixels of an image? I could of course put a grid of invisible scatter points on top of the image, but is there some more elegant way to accomplish this?
It sounds like the tool you're looking for is actually the BoxEditTool. Note that the BoxEditTool requires a list of glyph renderers (normally these will render Rect instances) that will draw the ROIs, and that listening for changes should be set up using:
rect_glyph_source.on_change('data', callback)
This will trigger the callback function any time you make any changes to your ROIs.
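A minimal sketch of that wiring, reusing fig and a callback with the (attr, old, new) signature from the question (the names rect_glyph_source and renderer are illustrative):
from bokeh.models import BoxEditTool, ColumnDataSource
# data source backing the Rect glyphs that render the ROIs
rect_glyph_source = ColumnDataSource(data=dict(x=[], y=[], width=[], height=[]))
renderer = fig.rect('x', 'y', 'width', 'height', source=rect_glyph_source,
                    fill_alpha=0.3)
fig.add_tools(BoxEditTool(renderers=[renderer]))
rect_glyph_source.on_change('data', callback)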
The relevant ColumnDataSource instance (rect_glyph_source in this example) will be updated so that the 'x' and 'y' keys list the center of each ROI in the image's coordinates space, and of course 'width' and 'height' describe its size. As far as I know there isn't currently a built-in method for extracting the data itself, so you will have to do something like:
rois = rect_glyph_source.data
roi_index = 0 # x, y, width and height are lists, and each ROI has its own index
x_center = rois['x'][roi_index]
width = rois['width'][roi_index]
y_center = rois['y'][roi_index]
height = rois['height'][roi_index]
x_start = int(x_center - 0.5 * width)
x_end = int(x_center + 0.5 * width)
y_start = int(y_center - 0.5 * height)
y_end = int(y_center + 0.5 * height)
roi_data = image_plot.source.data['image'][0][y_start:y_end, x_start:x_end]
IMPORTANT: In the current version of Bokeh (0.13.0) there is a problem with the synchronization of the BoxEditTool at the server and it isn't functional. This should be fixed in the next official Bokeh release. For more information and a temporary solution see this answer or this discussion.

What's a simple way of warping an image with a given set of points?

I'd like to implement image morphing, for which I need to be able to deform an image with a given set of points and their destination positions (where they will be "dragged"). I am looking for a simple and easy solution that gets the job done; it doesn't have to look great or be extremely fast.
This is an example what I need:
Let's say I have an image and a set of only one deforming point [0.5, 0.5], which will have its destination at [0.6, 0.5] (or we can say its movement vector is [0.1, 0.0]). This means I want to move the very center pixel of the image by 0.1 to the right. Neighboring pixels within some given radius r of course need to be "dragged along" a little with this pixel.
My idea was to do it like this:
I'll make a function mapping the source image positions to destination positions depending on the deformation point set provided.
I will then have to find the inverse of this function, because I have to perform the transformation by going through the destination pixels and seeing "where the point had to come from to end up at this position".
My function from step 1 looked like this:
p2 = p1 + ( 1 / ( (distance(p1,p0) / r)^2 + 1 ) ) * s
where
p0 ([x,y] vector) is the deformation point position.
p1 ([x,y] vector) is any given point in the source image.
p2 ([x,y] vector) is the position, to where p1 will be moved.
s ([x,y] vector) is movement vector of deformation point and says in which direction and how far p0 will be dragged.
r (scalar) is the radius, just some number.
I have a problem with step 2. The calculation of the inverse function seems a little too complex to me, and so I wonder:
If there is an easy solution for finding the inverse function, or
if there is a better function for which finding the inverse function is simple, or
if there is an entirely different way of doing all this that is simple?
Here's the solution in Python - I did what Yves Daoust recommended and simply tried to use the forward function as the inverse function (switching the source and destination). I also altered the function slightly; changing the exponents and other values produces different results. Here's the code:
from PIL import Image
import math

def vector_length(vector):
    return math.sqrt(vector[0] ** 2 + vector[1] ** 2)

def points_distance(point1, point2):
    return vector_length((point1[0] - point2[0], point1[1] - point2[1]))

def clamp(value, minimum, maximum):
    return max(min(value, maximum), minimum)

## Warps an image according to given points and shift vectors.
#
#  @param image  input image
#  @param points list of (x, y, dx, dy) tuples
#  @return warped image
def warp(image, points):
    result = Image.new("RGB", image.size, "black")
    image_pixels = image.load()
    result_pixels = result.load()
    for y in range(image.size[1]):
        for x in range(image.size[0]):
            offset = [0, 0]
            for point in points:
                point_position = (point[0] + point[2], point[1] + point[3])
                shift_vector = (point[2], point[3])
                helper = 1.0 / (3 * (points_distance((x, y), point_position) / vector_length(shift_vector)) ** 4 + 1)
                offset[0] -= helper * shift_vector[0]
                offset[1] -= helper * shift_vector[1]
            coords = (clamp(x + int(offset[0]), 0, image.size[0] - 1),
                      clamp(y + int(offset[1]), 0, image.size[1] - 1))
            result_pixels[x, y] = image_pixels[coords[0], coords[1]]
    return result

image = Image.open("test.png")
image = warp(image, [(210, 296, 100, 0), (101, 97, -30, -10), (77, 473, 50, -100)])
image.save("output.png", "PNG")
You don't need to construct the direct function and invert it. Directly compute the inverse function by swapping the roles of the source and destination points.
You need some form of bivariate interpolation; have a look at radial basis function interpolation. It requires solving a linear system of equations.
Inverse distance weighting (similar to your proposal) is the easiest to implement, but I am afraid it will give disappointing results.
https://en.wikipedia.org/wiki/Multivariate_interpolation#Irregular_grid_.28scattered_data.29
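As a rough illustration of the RBF route, here is a minimal sketch with SciPy (assuming scipy >= 1.7 for RBFInterpolator, a single-channel image as a 2-D array, and src_pts/dst_pts given as (n, 2) NumPy arrays with enough control points for the thin-plate-spline fit):
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def warp_rbf(image, src_pts, dst_pts):
    # interpolate the inverse displacement field: at each destination
    # point we know the offset back to its source point
    back = RBFInterpolator(dst_pts, src_pts - dst_pts, kernel='thin_plate_spline')
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    grid = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
    src = grid + back(grid)  # where each output pixel samples from
    rows = src[:, 1].reshape(h, w)
    cols = src[:, 0].reshape(h, w)
    # bilinear lookup in the source image
    return map_coordinates(image, [rows, cols], order=1, mode='nearest')
This solves the small linear system once (inside RBFInterpolator) and then evaluates the resulting smooth map over the whole pixel grid.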

Generating movement based on time t for real time ocean waves from an initial spectrum

I've spent the last week or so rendering a simple ocean using Gerstner waves, but I'm having issues with tiling, so I decided to start rendering them "properly" and dip my toes into the murky waters of rendering a heightfield using an iFFT.
There are plenty of papers explaining the basic gist:
1) calculate a frequency spectrum
2) use this to create a heightfield, using an iFFT to convert from the frequency domain to the spatial domain, animating with time t
Since the beginning of this journey I have learned about things like the complex plane, the complex exponential, the FFT in more detail, etc., but after the initial step of creating an initial spectrum (rendering a texture full of Gaussian numbers with mean 0 and sd of 1, filtered by the Phillips spectrum) I am still totally lost.
My code for creating the initial data is here (GLSL):
float PhillipsSpectrum(vec2 k){
    //kLen is the length of the vector from the centre of the tex
    float kLen = length(k);
    float kSq = kLen * kLen;
    //Amp is the wave amplitude, passed in as a uniform
    float Amp = amplitude;
    //L = velocity * velocity / gravity
    float L = (velocity * velocity) / 9.81;
    float dir = dot(normalize(waveDir), normalize(k));
    return Amp * (dir * dir) * exp(-1.0 / (kSq * L * L)) / (kSq * kSq);
}

void main(){
    vec3 sums;
    //get screen pos - center is 0.0 and ranges from -0.5 to 0.5 in both directions
    vec2 screenPos = vec2(gl_FragCoord.x, gl_FragCoord.y) / texSize - vec2(0.5, 0.5);
    //get random Gaussian number
    vec2 randomGauss = vec2(rand(screenPos), rand(screenPos.yx));
    //use the Phillips spectrum as a filter depending on position in the freq domain
    float Phil = sqrt(PhillipsSpectrum(screenPos));
    float coeff = 1.0 / sqrt(2.0);
    color = vec3(coeff * randomGauss.x * Phil, coeff * randomGauss.y * Phil, 0.0);
}
which creates a texture like this:
Now I am totally lost as to how to:
a) derive spectra in three directions from the initial texture
b) animate this according to time t, as mentioned in this paper (https://developer.nvidia.com/sites/default/files/akamai/gamedev/files/sdk/11/OceanCS_Slides.pdf) on slide 5
I might be completely stupid and overlooking something really obvious - I've looked at a bunch of papers and just get lost in the formulae even after acquainting myself with their meaning. Please help.
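For reference, the animation step on slide 5 of those slides is the standard Tessendorf update: every frame, the static initial spectrum and its conjugate mirror are each multiplied by a complex exponential in t before the inverse FFT,
h(k, t) = h0(k) * exp(i * w(k) * t) + conj(h0(-k)) * exp(-i * w(k) * t),  where  w(k) = sqrt(g * |k|)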
