How to convert spectral image elements to RGB in R?

I'm planning to convert multispectral images to RGB-based images (RGB values of the visible spectrum).
Basically, I'm reading a PNG spectral image with the readPNG function in R, so I have a two-dimensional 512x512 matrix. Then, following the link above, I wrote a function that returns values for R, G, and B.
Now, my question is: how can I apply these RGB values to my image to convert it to RGB?
Some part of my code:
img <- readPNG("sample_img.png") # 512 x 512 matrix
# after calculating R, G, B
r <- 1
g <- 0.892
b <- 0
el1 <- img * r
el2 <- img * g
el3 <- img * b
col <- as.matrix(el1, el2, el3)
plot(1:512, type = "n")
rasterImage(col, 1, 1, 512, 512)
I'm running code like the above, but I still can't get a color image.

The rasterImage() function takes a 3D array, which you cannot create with as.matrix(); it simply ignores the extra arguments and returns the first one. Instead, use abind() from the abind package.
library(abind)
col <- abind(el1, el2, el3, along = 3)
plot(1:512, type = "n")
rasterImage(col, 1, 1, 512, 512)
That should do it!
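If you'd rather avoid the extra dependency, base R's array() can build the same structure. A minimal sketch, assuming the 512x512 channel matrices from the question:
# Stack the three channel matrices into a 512 x 512 x 3 array
col <- array(c(el1, el2, el3), dim = c(dim(el1), 3))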

Related

Adjusting the RGB values in an image

I am trying to use the magick package in R to make the same adjustment in RGB for each pixel in an image. I want to reduce the R value by 3, increase the G value by 2, and increase the B value by 27.
#colour corrections
install.packages("magick")
library(magick)
setwd("/Users/hollymcdougall/desktop")
# Load the image
image <- image_read("R10I1.heic")
# Convert the image to the RGB color space
image <- image_convert(image, "RGB")
# Define the subtraction values for each channel
red_sub_value <- 3
green_sub_value <- -2
blue_sub_value <- -27
# Subtract the values from each channel
image[[1]] <- image[[1]] - red_sub_value
image[[2]] <- image[[2]] - green_sub_value
image[[3]] <- image[[3]] - blue_sub_value
# Clipping the values to stay within 0-255 range
image[image < 0] <- 0
image[image > 255] <- 255
# Save the output image
image_write(image, "output.jpg")
The step where the values are subtracted from the channels does not work.
We don't have your image, so let's use this one:
library(magick)
image <- image_read("https://i.stack.imgur.com/A5Cn3.png")
We can convert it into an array of RGB values in the 0-255 range like this:
rgb <- as.numeric(image_data(image, "rgb")) * 255
Now, let's make the changes a bit more obvious for effect:
red_change <- 30
green_change <- -2
blue_change <- -50
We can apply the differences as follows:
rgb[,,1] <- rgb[,,1] + red_change
rgb[,,2] <- rgb[,,2] + green_change
rgb[,,3] <- rgb[,,3] + blue_change
rgb <- rgb / 255
# Clip to 0-1 range
rgb[rgb < 0] <- 0
rgb[rgb > 1] <- 1
Now we convert back to an image and save:
image2 <- image_read(rgb)
image_write(image2, "image2.png")
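To eyeball the result without writing to disk, the clipped 0-1 array can also be rendered directly. A quick sanity check, assuming the height x width x 3 layout produced above:
# as.raster() accepts arrays of 0-1 values with three planes
plot(as.raster(rgb))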

Colour contrast checker in R for more than 2 colours

I am trying to make a reproducible way to calculate the colour contrast of various colours for graphics on a webpage. I found the following function: https://rdrr.io/github/m-clark/visibly/src/R/color_contrast_checker.R, which does exactly what I want for a comparison between two colours.
I want to give this function a foreground and a background colour, each effectively a list of hex codes, and loop through them so that it calculates the colour contrast for all combinations of colours in the two lists. I have tried giving the function a data frame of colours and using a for loop to repeat the function several times, but with no luck. I think the function isn't designed to take more than one element (i.e. one foreground and one background colour), but I am a bit of a novice with functions and for loops in R. Does anyone know how I could achieve this? Thanks.
Below is example code that has not had any success:
library(gplots)
library(jsonlite)
library(dplyr)
background_col_list <- c("#6A86B8", "#DFE3EB", "#E57D3A", "#BBB332")
foreground_col_list <- c("#6A86B8", "#DFE3EB", "#E57D3A", "#BBB332")
result.df <- expand.grid(foreground_col_list, background_col_list)
result.df <- result.df %>%
  mutate_all(as.character)
color_contrast_checker <- function(foreground, background) {
  # initial checks
  if ((is.null(foreground) | rlang::is_empty(foreground)) |
      (is.null(background) | rlang::is_empty(background)))
    stop('Need both foreground and background colors')
  if (!is.character(foreground) | !is.character(background))
    stop(strwrap('Elements must be character string as a named R color or
                 hex (e.g. "#ffffff")'))
  # note: alpha returned by col2hex will be ignored
  if (foreground %in% colors()) {
    foreground <- col2hex(foreground)
  } else {
    if (!nchar(foreground) %in% c(7, 9) | !grepl('^#', foreground))
      stop(strwrap('foreground must be an R color, e.g. see colors(),
                   or a hex of the form #ff5500'))
  }
  if (background %in% colors()) {
    background <- col2hex(background)
  } else {
    if (!nchar(background) %in% c(7, 9) | !grepl('^#', background))
      stop(strwrap('background must be an R color, e.g. see colors(),
                   or a hex of the form #ff5500'))
  }
  # remove pound sign
  foreground <- substr(foreground, start = 2, stop = nchar(foreground))
  background <- substr(background, start = 2, stop = nchar(background))
  url <- paste0('https://webaim.org/resources/contrastchecker/?fcolor=',
                foreground,
                '&bcolor=',
                background,
                '&api')
  result <- suppressWarnings({readLines(url)})
  if (!requireNamespace('jsonlite', quietly = TRUE)) {
    result <- strsplit(
      gsub(result, pattern = '\\{|\\}|\"', replacement = ''),
      ',')
    return(result[[1]])
  }
  data.frame(jsonlite::fromJSON(result))
}
for (value in result.df) {
  # Var1 & Var2 are the columns of result.df that contain the hex codes
  color_contrast_checker(foreground = result.df$Var1, background = result.df$Var2)
}
# Have also tried:
for (i in 1:length(unique(background_col_list))) {
  color_contrast_checker(background_col_list, foreground_col_list)
}
The simplest option in base R would be results <- apply(result.df, 1, function(x) color_contrast_checker(x[1], x[2])), which you can then transform into more readable output (e.g. do.call(rbind, results)).
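Spelled out as a block, that looks like this (a sketch, reusing the result.df built in the question):
# Run the checker once per row, i.e. once per foreground/background pair
results <- apply(result.df, 1, function(x) color_contrast_checker(x[1], x[2]))
# Bind the per-pair results into one data frame
results.df <- do.call(rbind, results)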
However, this function is a bit slow - you can implement the check yourself pretty easily in R.
First, we check what W3C uses as contrast ratio:
contrast ratio = (L1 + 0.05) / (L2 + 0.05), where
L1 is the relative luminance of the lighter of the colors, and
L2 is the relative luminance of the darker of the colors.
Then we check how W3C defines relative luminance:
For the sRGB colorspace, the relative luminance of a color is defined as L = 0.2126 * R + 0.7152 * G + 0.0722 * B
So at this point all you need to do is calculate the relative luminance of each color, then their ratio, and check whether they pass any of the thresholds required:
WCAG 2.0 level AA requires a contrast ratio of at least 4.5:1 for normal text and 3:1 for large text. WCAG 2.1 requires a contrast ratio of at least 3:1 for graphics and user interface components (such as form input borders). WCAG Level AAA requires a contrast ratio of at least 7:1 for normal text and 4.5:1 for large text.
In code:
# Transform colors to RGB values, and RGB values to relative luminance
# (Var1 is the foreground, Var2 the background)
result.df$L_foreground <- apply(col2rgb(result.df[, 1]), 2, function(x)
  0.2126 * x[1] + 0.7152 * x[2] + 0.0722 * x[3])
result.df$L_background <- apply(col2rgb(result.df[, 2]), 2, function(x)
  0.2126 * x[1] + 0.7152 * x[2] + 0.0722 * x[3])
# Apply the contrast ratio formula (max luminance is the lighter colour, min the darker)
result.df$L_ratio <- apply(result.df[, 3:4], 1, function(x)
  (max(x) + 0.05) / (min(x) + 0.05))
# Check against standard thresholds
result.df$WCAG2_0_AA_pass <- result.df$L_ratio > 4.5
result.df$WCAG2_1_pass <- result.df$L_ratio > 3
result.df$WCAG_AAA_pass <- result.df$L_ratio > 7
This is relatively fast as long as the number of checks is not huge. There may be a vectorized solution somewhere.
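One caveat: W3C defines relative luminance on gamma-linearised sRGB channels in the 0-1 range, so applying the coefficients to raw 0-255 values (and then adding 0.05) only approximates the official ratio. A stricter helper, as a sketch:
# WCAG relative luminance: linearise each sRGB channel, then apply the weights
rel_luminance <- function(hex) {
  rgb <- col2rgb(hex) / 255
  lin <- ifelse(rgb <= 0.03928, rgb / 12.92, ((rgb + 0.055) / 1.055)^2.4)
  0.2126 * lin[1, ] + 0.7152 * lin[2, ] + 0.0722 * lin[3, ]
}
L1 <- rel_luminance(result.df$Var1)
L2 <- rel_luminance(result.df$Var2)
result.df$L_ratio_strict <- (pmax(L1, L2) + 0.05) / (pmin(L1, L2) + 0.05)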

How can I extract a certain color with its position from a JPEG?

Using the 'jpeg' library, we can extract an RGB matrix from a JPEG file. If I know a certain color in #RRGGBB format, how can I get the positions of the points with that color?
library(jpeg)
library(reshape2)  # for melt() and dcast()
library(magrittr)  # for %>%
IMGMX <- readJPEG("sample.jpg") %>% melt %>% dcast(Var1 + Var2 ~ Var3, value.var = "value")
Re-scale data to [0, 255]
IMGMX[, 3:5] <- 255 * IMGMX[, 3:5]
Convert #RRGGBB to R, G, B coordinates
clr <- '#652d90'
r <- col2rgb(clr)[1]
g <- col2rgb(clr)[2]
b <- col2rgb(clr)[3]
Print matches
IMGMX[IMGMX$`1` == r & IMGMX$`2` == g & IMGMX$`3` == b, 1:2]
This will give you a dataframe comprised of 2 columns (width and height) and of n rows, corresponding to the n pixels in the original image that have the color 'clr'.
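An alternative that skips the reshaping is to index the three channel planes of the array directly. A sketch; the round() calls guard against floating-point mismatches after rescaling:
img <- readJPEG("sample.jpg")      # height x width x 3 array of values in [0, 1]
target <- col2rgb("#652d90")[, 1]  # target colour as a 0-255 triplet
hits <- which(round(255 * img[, , 1]) == target[1] &
              round(255 * img[, , 2]) == target[2] &
              round(255 * img[, , 3]) == target[3],
              arr.ind = TRUE)      # row/column positions of matching pixels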

How to check if a point is in a polygon efficiently in R for a large data set?

I am new to R, and for my current project I have to draw a heat map related to a specific event. There are around 2 million observations of this event, each with longitude and latitude coordinates. I have also converted the map data to a data frame containing 71 districts, each defined by a set of coordinates. I need to decide which observation of the event belongs to which district. I am using the following code:
for (row in 1:nrow(data2015)) {
  point.x <- data2015[row, "Latitude"]
  point.y <- data2015[row, "Longitude"]
  for (name in names(polygonOfdis)) {
    if (point.in.polygon(point.x, point.y, polygonOfdis[[name]]$lat,
                         polygonOfdis[[name]]$long, mode.checked = FALSE)) {
      count[[name]] <- count[[name]] + 1
      break
    }
  }
}
data2015 is the data set for the event, polygonOfdis is the data set for each district.
For a small data set this algorithm works OK, but for my data set it will definitely run for more than ten hours (for a data set only 1/400 of the current size, it runs for 1 to 2 minutes). Is there a better way to find out which observation belongs to which district? My problem is that the point.in.polygon function takes too much time, and I am wondering if any other function can do this faster.
PS: The current data is actually only 1/10 of the real data I have to process, so I really, really need a faster way to do this.
This function from the SDMTools package worked well.
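That presumably refers to SDMTools::pnt.in.poly(). A minimal sketch of its interface (the sample points and square below are made up):
library(SDMTools)
# pnt.in.poly() returns the points plus a pip column: 1 = inside, 0 = outside
pts <- data.frame(x = c(0.5, 2), y = c(0.5, 1))
sq <- data.frame(x = c(0, 1, 1, 0), y = c(0, 0, 1, 1))
pnt.in.poly(pts, sq)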
A while ago, I ported over a point-in-polygon algorithm by W. Randolph Franklin that uses ray casting: if a ray cast from the point crosses the polygon boundary an odd number of times, the point is inside; if it crosses an even number of times, it is outside.
The code is considerably fast because it is written using Rcpp. It is split into two parts: 1. the PIP algorithm and 2. a wrapper function for classification.
PIP Algorithm
#include <RcppArmadillo.h>
using namespace Rcpp;
// [[Rcpp::depends(RcppArmadillo)]]
//' @param point A \code{rowvec} with x,y coordinate structure.
//' @param bp A \code{matrix} containing the boundary points of the polygon.
//' @return A \code{bool} indicating whether the point is in the polygon (TRUE) or not (FALSE).
// [[Rcpp::export]]
bool pnpoly(const arma::rowvec& point, const arma::mat& bp) {
  // Implementation of the ray-casting algorithm is based on
  // W. Randolph Franklin's pnpoly
  unsigned int i, j;
  double x = point(0), y = point(1);
  bool inside = false;
  for (i = 0, j = bp.n_rows - 1; i < bp.n_rows; j = i++) {
    double xi = bp(i, 0), yi = bp(i, 1);
    double xj = bp(j, 0), yj = bp(j, 1);
    // See if point is inside polygon
    inside ^= (((yi >= y) != (yj >= y)) && (x <= (xj - xi) * (y - yi) / (yj - yi) + xi));
  }
  // Is the cat alive or dead?
  return inside;
}
Classification Algorithm
//' PIP Classifier
//' @param points A \code{matrix} with x,y coordinate structure.
//' @param names A \code{vector} of type \code{string} that contains the location names.
//' @param bps A \code{field} of type \code{matrix} that contains the polygon coordinates to test against.
//' @return A \code{vector} of type \code{string} with location information.
// [[Rcpp::export]]
std::vector<std::string> classify_points(const arma::mat& points,
                                         std::vector<std::string> names,
                                         const arma::field<arma::mat>& bps) {
  unsigned int i, j;
  unsigned int num_points = points.n_rows;
  std::vector<std::string> classified(num_points);
  for (i = 0; i < num_points; i++) {
    arma::rowvec active_row = points.row(i);
    // One of the coordinates lacks a value
    if (!arma::is_finite(active_row(0)) || !arma::is_finite(active_row(1))) {
      classified[i] = "Missing";
      continue; // skip trying to find a location
    }
    // Try to classify the coordinate based on the supplied boundary points for area j
    for (j = 0; j < names.size(); j++) {
      if (pnpoly(active_row, bps(j))) {
        classified[i] = names[j];
        break; // Break loop
      }
    }
  }
  return classified;
}
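Assuming both snippets live in a single .cpp file (the file name below is hypothetical), they can be compiled and called from R roughly like this; RcppArmadillo converts a list of matrices to the arma::field argument:
library(Rcpp)
sourceCpp("pip.cpp") # hypothetical file holding the two C++ snippets above
# One row per event, one boundary matrix per district
pts <- as.matrix(data2015[, c("Latitude", "Longitude")])
bps <- lapply(polygonOfdis, function(p) cbind(p$lat, p$long))
districts <- classify_points(pts, names(polygonOfdis), bps)
table(districts) # event counts per district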
I just found this, which works well for me:
library(secr)
## 100 random points in unit square
xy <- matrix(runif(200, -0.5, 1.5), ncol = 2)
## triangle centred on (0.5, 0.5)
poly <- data.frame(x = c(1, 1, 0, 0, 1), y = c(1,0,0, 1, 1))
plot(xy, pch = 1 + pointsInPolygon(xy, poly))
lines(poly)
Your code is pretty straightforward; your stumbling block is using loops instead of R's vectorization power. This code should work, but without any data I can't verify it:
# create a column in the dataframe to store the results
data2015$poly <- "blank"
point.x <- data2015$Latitude
point.y <- data2015$Longitude
for (name in names(polygonOfdis)) {
  # point.in.polygon returns an array of 0 to 3 for each point's location
  inpoly <- point.in.polygon(point.x, point.y, polygonOfdis[[name]]$lat,
                             polygonOfdis[[name]]$long, mode.checked = FALSE)
  # if the element is > 0, the point is in the polygon: assign the polygon name
  data2015$poly[inpoly > 0] <- name
}
# additional processing (returns count per polygon)
tapply(data2015$poly, INDEX = data2015$poly, FUN = length)
This code also assumes that each point is in one and only one polygon. The inner loop and the tapply could most likely be improved by using the dplyr library, as sketched below.
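For instance, the final counting step could be written as (a sketch):
library(dplyr)
# Same per-polygon counts as the tapply() above
data2015 %>% count(poly)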
The other listed solution with the PIP Algorithm could provide a boost over the built-in method.
There's a package for that, namely ptinpoly.
library(ptinpoly)
# define a square
square <- rbind(
c(0,0),
c(0,1),
c(1,0),
c(1,1)
)
pinside <- rbind(c(0.5,0.5)) # point inside the square
poutside <- rbind(c(2,1)) # point outside the square
Note that you can test several points (see below), but if you test a single one you need a matrix; that's why I use rbind.
You get 0 if the point is inside the polygon, -1 otherwise:
> pip2d(square, pinside)
[1] 0
> pip2d(square, poutside)
[1] -1
As I said before, you can simultaneously test multiple points:
> pip2d(square, rbind(pinside, poutside))
[1] 0 -1
The package also allows testing for point containment in a 3D polyhedron.
Based on @conner-m's suggestion:
library(tidyverse)
library(furrr)
library(SDMTools)
plan(multiprocess)
future_map2_dfr(
  polygonOfdis,
  names(polygonOfdis),
  ~tibble(
    district = .y,
    pip = pnt.in.poly(
      data2015[, c('Latitude', 'Longitude')],
      .x
    )$pip
  )
) %>%
  group_by(district) %>%
  summarise(count = sum(pip))
I work mostly with spatial data, so I would convert everything into spatial objects and run the query that way (quicker for me to write, though it may not be the most efficient):
library(sp)
# points coords and polygon coords (placeholders for your own data)
xyDf <- data.frame(X = MyXPtsCoordsHere, Y = MyYPtsCoordsHere)
coordMat <- data.frame(X = MyXPolygonCoordsHere, Y = MyYPolygonCoordsHere)
## Filter points by bounding box (easy)
posCoord <- which(
  xyDf$X <= max(coordMat$X) & # west
  xyDf$X >= min(coordMat$X) & # east
  xyDf$Y <= max(coordMat$Y) & # north
  xyDf$Y >= min(coordMat$Y))  # south
# check how many: good for debugging
str(posCoord)
plot(coordMat[, c('X', 'Y')], type = 'b')
points(xyDf[, c('X', 'Y')], col = 2, pch = 20)
points(xyDf[posCoord, c('X', 'Y')], col = 4, pch = 2)
# Filter for real using the bbox
xySel <- xyDf[posCoord, ]
# Make the polygon spatial
spDf <- SpatialPolygonsDataFrame(
  SpatialPolygons(list(Polygons(list(Polygon(coordMat)), 1) # polygons
  )), data = data.frame(ID = 1), match.ID = FALSE)
# Make points spatial and run the query
posSel <- over(SpatialPoints(xySel[, c('X', 'Y')]), spDf)
posSel is a data.frame with 1 or NA for each point, indicating whether it is inside the polygon or not. Another (slower) option is the raster package's extract function:
system.time(ov_ap_mat <- sp::over(coord_pts, ap)) # 0.45
# system.time(ov_ap_matR <- raster::extract(ap_eco_sp, mat[, c('X_a1', 'Y_a1')])) # 1.25
You can use the cgalPolygons package (not on CRAN yet).
library(cgalPolygons)
# define a square
square <- rbind(
c(0, 0),
c(0, 1),
c(1, 0),
c(1, 1)
)
pinside <- c(0.5, 0.5) # point inside the square
poutside <- c(2, 1) # point outside the square
ponsquare <- c(1, 0.5) # point on the boundary of the square
Note that you can test several points with a single command (see below).
You get 1 if the point is inside the polygon, -1 if it is outside, and 0 if it is on the boundary:
> plg <- cgalPolygon$new(square)
> plg$whereIs(pinside)
[1] 1
> plg$whereIs(poutside)
[1] -1
> plg$whereIs(ponsquare)
[1] 0
As I said before, you can simultaneously test multiple points:
> plg$whereIs(rbind(pinside, poutside, ponsquare))
[1] 1 -1 0

Image processing loop using EBImage in R

require("EBImage")
I essentially want to loop this set of commands in R to generate masks of images:
a <- readImage('BF_Position004_time021.tif') # load rgb image
g <- channel(a, "asgreen") #take green channel from image
writeImage(g,"/Users/...path.../GFP_Position004_time021.tif") #save green channel image
r <- channel(a, "asred") #take red channel from image
writeImage(r,"/Users/...path.../RFP_Position004_time021.tif")#save red
b <- channel(r, "gray") #makes red channel into gray scale
#the following creates a white mask over pixels with high intensity
nmask2 = thresh(sqrt(b), 15, 15, .001)
# nmask2 = fillHull(nmask2)
mk3 = makeBrush(3, shape = 'diamond')
nmask3 = opening(nmask2, mk3)
nseg = bwlabel(nmask3)
nf = computeFeatures.shape(nseg)
nr = which(nf[,'s.area'] < 150)
nseg = rmObjects(nseg,nr) #resulting image called nseg
writeImage(nseg,"/Users/...path.../BF_Position004_time021.tif") #save nseg with the following name
I want to do this across positions 000 to 100 and time points 001 to 100. I have annotated the code above for some clarity.
Thanks for any and all help.
Put the commands above into a function that takes the common part of the file name and the directory you'd like to write to:
fun <- function(fl, dirpath) {
a <- readImage(sprintf("BF_%s", fl))
...
writeImage(g, file.path(dirpath, sprintf("GFP_%s", fl)))
....
}
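Filling in the body with the mask-generation steps from the question, the whole wrapper might look like this (a sketch; dirpath is wherever the outputs should go):
fun <- function(fl, dirpath) {
  a <- readImage(sprintf("BF_%s", fl))                  # load rgb image
  g <- channel(a, "asgreen")                            # green channel
  writeImage(g, file.path(dirpath, sprintf("GFP_%s", fl)))
  r <- channel(a, "asred")                              # red channel
  writeImage(r, file.path(dirpath, sprintf("RFP_%s", fl)))
  b <- channel(r, "gray")                               # red channel as gray scale
  nmask2 <- thresh(sqrt(b), 15, 15, .001)               # mask over high-intensity pixels
  mk3 <- makeBrush(3, shape = "diamond")
  nseg <- bwlabel(opening(nmask2, mk3))
  nf <- computeFeatures.shape(nseg)
  nseg <- rmObjects(nseg, which(nf[, "s.area"] < 150))  # drop small objects
  writeImage(nseg, file.path(dirpath, sprintf("BF_%s", fl)))
}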
Create a vector of file names:
fls <- sprintf("Position%03d_time%03d.tif", rep(0:100, each=100), 1:100)
and go!
for (fl in fls)
  fun(fl, "/Users/...path...")
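If the files on disk don't all follow that pattern exactly, listing what is actually there avoids read errors (a sketch):
# Derive the common name parts from the BF_ files actually present
fls <- sub("^BF_", "", list.files(pattern = "^BF_Position\\d{3}_time\\d{3}\\.tif$"))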
