Quantize grayscale images - r

I have grayscale images which I want to quantize to different gray levels.
To be more precise: in the EBImage package, there is a function equalize() with a levels argument; we can set levels to 256, 128, 64, etc. to quantize a grayscale image. (But equalize() performs a histogram equalization of the given grayscale image, which is not what I want in my current situation.)
Can somebody suggest a formula or a function which we can use to change the number of gray levels in a given grayscale image?

First convert the data to a continuous (e.g. floating-point) format. Then, in pseudocode:
int x = (int) (value / quantization_step);
new_format y = x * quantization_step;
Images can also be compressed in this lossy way.
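In R, the same two steps might look like this (a minimal sketch, assuming value holds intensities in [0, 1] and we want 8 gray levels):
step = 1/7                # quantization step for 8 levels spanning [0, 1]
x = floor(value / step)   # map each intensity to an integer bin in 0..7
y = x * step              # map the bins back into [0, 1]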

The default image data representation in EBImage is a continuous range between 0 and 1. In order to quantize an image to a given number of levels, first convert it to integers in the range of 0:(levels-1), and then back to 0:1, as in the quantize function from the following example.
library(EBImage)
## sample grayscale image
x = readImage(system.file("images", "sample.png", package="EBImage"))
## function for performing image quantization
quantize = function(img, levels) round(img * (levels-1)) / (levels-1)
## quantize the image
y = quantize(x, levels = 8)
## show the result
display(y)
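As a quick sanity check, the quantized image should contain at most 8 distinct intensity values:
length(unique(as.numeric(imageData(y))))  # expect a value <= 8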

Related

R: How to increase the pixel size (decrease the spatial resolution) of a satellite image by applying a Gaussian filter with a (large) width

The goal
I am trying to simulate coarse data as though they were measured with a coarse PSF (point spread function).
The data
I have a satellite image with a 15 m pixel size and I want to convolve it with a Gaussian kernel to reduce the spatial resolution to 460 m. To do this, I need to apply a transfer function (TF; e.g., Gaussian) with a very large width to the fine data. This produces the coarse data.
Is there any function that takes a fine-resolution image as input, applies a Gaussian TF, and produces a coarse spatial resolution image?
To make my problem even clearer, I am following the paper 'The effect of the point spread function on downscaling continua'. In short, the authors wanted to downscale a coarse satellite image using an ancillary fine spatial resolution variable. The downscaling consists of two steps:
regression
kriging on regression residuals
For the regression step, they had to upscale the fine-resolution image to match the pixel size of the coarse-resolution image. This upscaling had to be done using the PSF.
From here you can download my image.
Based on this question, the code of Qunming Wang (who invented the area-to-point regression kriging method), and the description of the down_sample_image function from the OpenImageR package, I managed to upscale the image using a Gaussian blur. In short, I had to multiply the PSF by the zoom factor.
library(raster)
library(OpenImageR)
r = raster("path/tirs.tif")
m = as.matrix(r)
psf = down_sample_image(m,
                        factor = 4.6,        # zoom factor
                        gaussian_blur = TRUE,
                        gauss_sigma = 0.5)   # sigma, in units of the pixel size
e <- extent(r)                       # keep the extent of the input raster
m2r <- raster(psf)                   # matrix -> raster
extent(m2r) <- e
raster::crs(m2r) <- "EPSG:7767"
res(m2r)                             # check the new resolution
writeRaster(m2r, "path/tirs460.tif")
So, because the scaling factor is not an integer (i.e., the target resolution is 460 m and the input raster is at 100 m, that is 460/100 = 4.6), the solution is:
Blur the image and then resample the blurred image using nearest neighbor
For example:
library(raster)
library(gridkernel)
library(gridprocess)
fr = raster("path/fine_resolution_image.tif")   # image to be blurred
cr = raster("path/coarse_resolution_image.tif") # image at the target resolution, used as the template for resampling
g = as.grid(fr)
smoothed = gaussiansmooth(g, sd = 0.3 * 460, max.r = 100) # units in pixels
r <- raster(smoothed)
resample(r, cr, method = "ngb", filename = "path/blurred_resampled.tif") # resample the blurred raster, not the original fr
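A quick check (assuming the file paths above exist): the output raster should now sit on the 460 m grid of cr.
res(raster("path/blurred_resampled.tif"))  # should match res(cr)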

R: Convert/Read 3D Matrix into a 'magick' object and vice versa

I want to work with the magick package for its fantastic image manipulations capabilities. Looking through here I can't seem to find out how to convert a 3D matrix (width x height x channels) to a magick object I can further manipulate, and vice versa.
There is no as.magick function
The as.matrix function does not work
But I would like something like:
height <- 100
width <- 80
X <- array(runif(height * width * 3, min = 0, max = 255), c(height, width, 3))
magick::as.magick(X) %>% magick::image_scale("500x400")
(Obviously I could write the matrix to disk as an image and then read it with magick::image_read, but that would be overkill.)
What did I miss?
You can use image_read() to read a matrix as well. However, note that the convention is to scale the values between 0 and 1 in the case of doubles, so you need to divide your X by 255. Try this:
img <- magick::image_read(X / 255) %>% magick::image_scale("500x400")
If you want to convert the magick object back to an array:
image_data(img, 'rgba')
Or just img[[1]] works as well.
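If you need an ordinary integer array back, here is a small sketch (image_data() returns a raw bitmap whose dimensions are channels x width x height):
bmp <- magick::image_data(img, 'rgba')
arr <- as.integer(bmp)  # raw bytes -> integers in 0..255
dim(arr) <- dim(bmp)    # restore the channels x width x height shape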

How to change rgb image into matrix using EBImage?

I want to convert an image into a matrix of numbers in R using the EBImage package. I have tried this code, but it only outputs all 1's:
library(EBImage)
img<-readImage("test.jpg")
imageData(img)[1:50,1:60,1]
This is the image: http://i.stack.imgur.com/9VTWx.png
The following example illustrates how to load a grayscale image containing an alpha channel, convert it to single-channel grayscale image, and do some post-processing: crop the border and resize.
library(EBImage)
img <- readImage("http://i.stack.imgur.com/9VTWx.png")
# grayscale images containing an alpha channel are represented in EBImage as
# RGBA images by replicating the grayscale intensities over the red, green and
# blue channels
print(img, short=TRUE)
## Image
## colorMode : Color
## storage.mode : double
## dim : 819 460 4
## frames.total : 4
## frames.render: 1
# convert to grayscale
img <- channel(img, "gray")
# collect matrix indices of non-white pixels
ind <- which(img < 1, arr.ind=TRUE)
# find min/max indices across rows/columns
ind <- apply(ind, 2L, range)
rownames(ind) <- c("min", "max")
ind
## row col
## min 17 7
## max 819 413
# crop the image
img <- img[ind["min","row"]:ind["max","row"], ind["min","col"]:ind["max","col"]]
# resize to specific width and height
img <- resize(img, w=128, h=128)
display(img)
To extract the underlying matrix use imageData(img).
First of all, that's not a JPEG image, but PNG.
Second, it's not RGB, but greyscale + alpha (according to ImageMagick at least), although the alpha channel is completely opaque, so it doesn't hold any actual data.
Third, the reason you're getting all ones is because the section of the image you are choosing is all white, i.e. maximum intensity, which is represented by the value 1.
Try something like imageData(img)[51:100,1:60,1] and see if that doesn't give a different result.
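For instance (illustrative), checking the intensity range of the originally selected region confirms it is all white:
range(imageData(img)[1:50, 1:60, 1])  # both values are 1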

How to get a pixel matrix from grayscale image in R?

When grayscale images are represented by matrices, each element of the matrix determines the intensity of the corresponding pixel. For convenience, most current digital image files use integers between 0 (indicating black, the color of minimal intensity) and 255 (indicating white, maximum intensity), giving a total of 256 = 2^8 different levels of gray.
Is there a way to get a pixel matrix of graysale images in R whose pixel values will range from 0 to 255?
It would also be helpful to know whether I can resize the images to a preferred dimension (say, $28 \times 28$) in R and then convert them into a pixel matrix whose elements range from 0 to 255.
What happens if the original image is RGB but I want the matrix for grayscale?
The R package png offers the readPNG() function which can read raster graphics (consisting of "pixel matrices") in PNG format into R. It returns either a single matrix with gray values in [0, 1] or three matrices with the RGB values in [0, 1].
For transforming between [0, 1] and {0, ..., 255}, simply multiply or divide by 255 and round, if desired.
For transforming between RGB and grayscale you can use for example the desaturate() function from the colorspace package.
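For instance (a sketch, assuming g is a matrix of gray values in [0, 1]):
g255 <- round(g * 255)  # [0, 1] -> integers in {0, ..., 255}
g01 <- g255 / 255       # and back again to [0, 1]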
As an example, let's download the image you suggested:
download.file("http://www.greenmountaindiapers.com/skin/common_files/modules/Socialize/images/twitter.png",
  destfile = "twitter.png")
Then we load the packages mentioned above:
library("png")
library("colorspace")
First, we read the PNG image into an array x with dimension 28 x 28 x 4. Thus, the image has 28 x 28 pixels and four channels: red, green, blue and alpha (for semi-transparency).
x <- readPNG("twitter.png")
dim(x)
## [1] 28 28 4
Now we can transform this into various other formats: y is a vector of hex character strings specifying colors in R, yg is the corresponding desaturated color (again as hex characters) with grayscale only, and yn is the numeric amount of gray. All three objects are arranged into 28 x 28 matrices at the end:
y <- rgb(x[,,1], x[,,2], x[,,3], alpha = x[,,4])
yg <- desaturate(y)
yn <- col2rgb(yg)[1, ]/255
dim(y) <- dim(yg) <- dim(yn) <- dim(x)[1:2]
I hope that at least one of these versions is what you are looking for. To check the pixel matrices I have written a small convenience function for visualization:
pixmatplot <- function (x, ...) {
  d <- dim(x)
  xcoord <- t(expand.grid(1:d[1], 1:d[2]))
  xcoord <- t(xcoord/d)
  par(mar = rep(1, 4))
  plot(0, 0, type = "n", xlab = "", ylab = "", axes = FALSE,
       xlim = c(0, 1), ylim = c(0, 1), ...)
  rect(xcoord[, 2L] - 1/d[2L], 1 - (xcoord[, 1L] - 1/d[1L]),
       xcoord[, 2L], 1 - xcoord[, 1L], col = x, border = "transparent")
}
For illustration let's look at:
pixmatplot(y)
pixmatplot(yg)
If you have a larger image and want to bring it to 28 x 28, I would average the gray values from the corresponding rows/columns and insert the results into a matrix of the desired dimension.
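For example, a rough sketch (the helper resize_mat is hypothetical; it block-averages a numeric gray matrix such as yn down to 28 x 28):
resize_mat <- function(m, nr = 28, nc = 28) {
  ri <- cut(seq_len(nrow(m)), nr, labels = FALSE)  # assign each row to one of nr bins
  ci <- cut(seq_len(ncol(m)), nc, labels = FALSE)  # assign each column to one of nc bins
  tapply(m, list(ri[row(m)], ci[col(m)]), mean)    # average the gray values per bin
}
small <- resize_mat(large_gray_matrix)  # large_gray_matrix is a hypothetical bigger input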
Final note: While it is certainly possible to do all this in R, it might be more convenient to use an image manipulation software instead. Depending on what you aim at, it might be easier to just use ImageMagick's mogrify for example:
mogrify -resize 28 -type grayscale twitter.png
Here is an example of converting and drawing an image from a grayscale PNG. Please make sure to install the relevant packages first.
library(png)
library(RCurl)
myurl = "https://postgis.net/docs/manual-dev/images/apple_st_grayscale.png"
my_image = readPNG(getURLContent(myurl))
img_mat=my_image[,,1] # grayscale values, already scaled to [0, 1] (i.e. divided by 255)
img_mat=t(apply(img_mat, 2, rev)) # otherwise the image will be rotated
image(img_mat, col = gray((0:255)/255)) # plot in grayscale

Grey to Binary Image using R

Is there any R function to convert a grayscale image to a binary image? There is one to convert from RGB to grey, but I want to convert grey to binary.
This is called thresholding or binarization. The most robust approach in my experience is adaptive thresholding, which is implemented in EBImage as the thresh method:
x = readImage(system.file('images', 'nuclei.tif', package='EBImage'))
if (interactive()) display(x)
y = thresh(x, w=10, h=10, offset=0.05)  # adaptive threshold over a 10x10 moving window
if (interactive()) display(y)
You didn't say what class or "typeof" your data is, so I'm going to provide an answer for a simple case. Suppose your image is an array of integers, ranging from 0 to, say, 511 for a 9-bit greyscale image. You need to decide what the cutoff point is for 0 vs. 1 in your binary image. Then
bin_image <- round(grey_image/max(grey_image),0)
should do it. If your data range from 0 to 1, do a similar operation but adjust the rounding parameters.
Edit: oops, I left out the choice of cutoff level. Replace max(grey_image) with K*max(grey_image), where K = 1 cuts at half-max, K > 1 cuts higher, and K < 1 cuts lower.
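An equivalent and often clearer route (a sketch, assuming grey_image holds values in [0, 1]) is a direct comparison against an explicit cutoff:
cutoff <- 0.4                            # the intensity separating 0 from 1
bin_image <- (grey_image > cutoff) * 1L  # TRUE/FALSE -> 1/0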
The EBImage Bioconductor package is a handy tool for performing image analysis in R.
A basic example taken from the package's vignette:
lena = readImage(system.file("images", "lena.gif", package="EBImage"))
display(lena>0.5)
