I need to convert an RGB image into a binary image so that I can use the bwlabel() function to count the number of objects in the image in R. I have just started working on image processing, so I don't have any idea how to do it. I am using the EBImage package.
Can anyone help me with this?
Thank you
An example with the lenac image from the package:
lenac = readImage(system.file("images", "lena-color.png", package="EBImage"))
lena = channel(lenac, "gray")
lena5 = lena > 0.5
labels = bwlabel(lena5)
max(labels)
gives 770 objects in the lena picture. Since this is a picture of a face, dividing it into objects may not make much sense. Try different values of the threshold until you get something reasonable; the right value depends on the type of images you are working with.
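If hand-tuning the cutoff is tedious, Otsu's method picks a threshold automatically by maximizing the between-class variance of the intensity histogram (I believe recent EBImage versions ship an otsu() helper for exactly this). As a rough base-R sketch of the idea, run on a synthetic bimodal image rather than lena:

```r
# Otsu's method: choose the threshold maximizing between-class variance.
otsu_threshold <- function(img, levels = 256) {
  breaks <- seq(0, 1, length.out = levels + 1)
  p <- hist(as.vector(img), breaks = breaks, plot = FALSE)$counts
  p <- p / sum(p)                            # histogram as probabilities
  mids <- (breaks[-1] + breaks[-(levels + 1)]) / 2
  w1  <- cumsum(p)                           # background class weight
  mu1 <- cumsum(p * mids) / pmax(w1, .Machine$double.eps)
  mu  <- sum(p * mids)                       # global mean intensity
  w2  <- 1 - w1
  mu2 <- (mu - w1 * mu1) / pmax(w2, .Machine$double.eps)
  mids[which.max(w1 * w2 * (mu1 - mu2)^2)]
}

# Synthetic bimodal image: background near 0.2, objects near 0.8.
set.seed(1)
img <- matrix(c(rnorm(500, 0.2, 0.05), rnorm(500, 0.8, 0.05)), nrow = 20)
img <- pmin(pmax(img, 0), 1)  # clamp into [0, 1]
thr <- otsu_threshold(img)    # lands between the two modes, near 0.5
binary <- img > thr
```

On a real image you would pass the grayscale matrix from channel() instead; the 256-level binning is an illustrative choice.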
I am trying to do image processing on an image in Julia. I have a binary mask for a region of interest within my image. I need to find the border of pixels just outside of my mask. In python I can use skimage.segmentation.find_boundaries, and in Matlab I can use boundarymask. Does Julia have any convenient tools to do this?
I found a quick and dirty solution using PyCall which I will post for now, but if anyone finds a better solution that uses standard Julia modules I will accept that answer over mine.
using Conda
using PyCall
Conda.add("scikit-image")
find_boundaries = pyimport("skimage.segmentation").find_boundaries
mask = # load mask ...
border = find_boundaries(mask)
I'm rather new to image analysis in R and was wondering how I can count the number of individual plants within a picture such as this one:
I thought of converting the picture to black and white and then using the bwlabel function to count the objects, like this:
R <- R(image); G <- G(image); B <- B(image)
ExGreen <- 2*G - R - B
plot(ExGreen)
ExGreen <- threshold(ExGreen, thr = "auto", approx = FALSE, adjust = 1)
plot(ExGreen)
ExGreen <- clean(ExGreen, 10)
plot(ExGreen)
labels <- bwlabel(ExGreen)
max(labels)
However, I'm running into the issue that my white potato plants do not always form one contiguous region.
I was therefore wondering whether there is an option to connect white pixels that are very close to each other, or whether it is possible to draw a circle around every potato plant and then use the bwlabel function.
Or is there any other option to solve my problem?
Thanks in advance!
I was not aware that R has the imager package for image processing, which already has a good deal of built-in functions for solving this problem. Thanks for pointing me to it through this interesting question.
Here is my solution (beware that some thresholds are hard-coded and thus not scale invariant!):
library(imager)
image <- load.image("plants.png")
R<-R(image); G<-G(image); B<-B(image)
ExGreen<-2*G-R-B
plot(ExGreen)
# blur before thresholding to fill some gaps
ExGreen <- isoblur(ExGreen, 3)
ExGreen <- threshold(ExGreen, thr="auto", approx=FALSE, adjust=1)
plot(ExGreen)
# split into connected components and keep only the large ones
ccs <- split_connected(ExGreen)
largeccs <- purrr::keep(ccs, function(x) {sum(x) > 800})
plot(add(largeccs))
# count CCs
cat(sprintf("Number of large CCs: %i\n", length(largeccs)))
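If blurring alone still leaves one plant split into several components, a morphological dilation before labelling merges blobs that lie within a few pixels of each other (I believe imager's grow() does this on a pixset). Here is a self-contained base-R sketch of the idea; dilate() and count_components() are my own helper names:

```r
# Dilate a binary matrix: a pixel becomes TRUE if any pixel within r
# rows/columns of it is TRUE (square structuring element, replicated edges).
dilate <- function(m, r = 1) {
  out <- matrix(FALSE, nrow(m), ncol(m))
  for (di in -r:r) for (dj in -r:r) {
    rows <- pmin(pmax(seq_len(nrow(m)) + di, 1), nrow(m))
    cols <- pmin(pmax(seq_len(ncol(m)) + dj, 1), ncol(m))
    out <- out | m[rows, cols]
  }
  out
}

# Count 4-connected components with a simple stack-based flood fill.
count_components <- function(m) {
  seen <- matrix(FALSE, nrow(m), ncol(m))
  n <- 0
  for (i in seq_len(nrow(m))) for (j in seq_len(ncol(m))) {
    if (m[i, j] && !seen[i, j]) {
      n <- n + 1
      stack <- list(c(i, j))
      while (length(stack) > 0) {
        p <- stack[[length(stack)]]
        stack[[length(stack)]] <- NULL
        if (p[1] < 1 || p[1] > nrow(m) || p[2] < 1 || p[2] > ncol(m)) next
        if (!m[p[1], p[2]] || seen[p[1], p[2]]) next
        seen[p[1], p[2]] <- TRUE
        stack <- c(stack, list(p + c(1, 0)), list(p - c(1, 0)),
                   list(p + c(0, 1)), list(p - c(0, 1)))
      }
    }
  }
  n
}

# Two blobs one column apart: two components before dilation, one after.
m <- matrix(FALSE, 10, 10)
m[3:4, 3:4] <- TRUE
m[3:4, 6:7] <- TRUE
count_components(m)            # 2
count_components(dilate(m, 1)) # 1
```

The dilation radius plays the same role as the blur width above: too small and plants stay split, too large and neighbouring plants merge.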
I have RGB tiff files (from CellProfiler) which I want to import to R, label, and arrange as part of a high-throughput analysis. The closest I get is using:
library(tiff)
library(raster)
imageTiff <- tiff::readTIFF(imagePath[i])
rasterTiff <- raster::as.raster(imageTiff)
raster::plot(rasterTiff)
raster::plot displays the image nicely, but I can't capture the output to use it with gridExtra or to add labels.
In addition, I tried rasterVis with levelplot and multiple other ways of importing the tiff and then converting it to a grob or ggplot object.
However, I can't get anything to work and would like to ask whether R is even suited to that task at all.
Thank you very much for your help!
Okay, I think this is the most straightforward way and possibly also the most obvious one.
I import JPEG or TIFF files with jpeg::readJPEG or tiff::readTIFF respectively. Both return the image as a raster-compatible matrix that works with grid::rasterGrob and subsequently grid.arrange etc.
library(jpeg)
library(tiff)
library(grid)
library(gridExtra)
imageJPEG <- grid::rasterGrob(jpeg::readJPEG("test.jpeg"))
imageTIFF <- grid::rasterGrob(tiff::readTIFF("test.tiff"))
grid.arrange(imageJPEG, imageJPEG, imageJPEG)
grid.arrange(imageTIFF, imageTIFF, imageTIFF)
For my purpose that is perfect, since rasterGrob does not alter the raster matrix values. Labeling might be a bit tricky, but overall it is a grid/grob problem from here on.
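For the labelling part, plain grid viewports are enough: lay the panels out with grid.layout and draw each raster with a caption underneath. The label_raster() helper below is my own sketch, using random matrices as stand-ins for readTIFF output:

```r
library(grid)

# Draw one raster panel with a caption underneath, inside a layout cell.
# label_raster() is my own helper, not a grid function.
label_raster <- function(img, label, row, col) {
  pushViewport(viewport(layout.pos.row = row, layout.pos.col = col))
  grid.raster(img, y = 0.55, height = 0.85)
  grid.text(label, y = 0.05)
  popViewport()
}

# Random grayscale matrices stand in for tiff::readTIFF output.
imgs <- replicate(4, matrix(runif(100), 10, 10), simplify = FALSE)
grid.newpage()
pushViewport(viewport(layout = grid.layout(2, 2)))
for (k in seq_along(imgs))
  label_raster(imgs[[k]], paste("well", k),
               row = (k - 1) %/% 2 + 1, col = (k - 1) %% 2 + 1)
popViewport()
```

Everything here is base grid, so it composes with the rasterGrob/grid.arrange approach above.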
I have a folder of JPG images that I'm trying to classify for a Kaggle competition. I have seen some code in Python on the forums that I think will accomplish this, but was wondering: is it possible to do in R? I'm trying to convert this folder of many jpg images into csv files that have numbers showing the grayscale of each pixel, similar to the hand digit recognizer here http://www.kaggle.com/c/digit-recognizer/
So basically jpg -> .csv in R, with numbers for the grayscale of each pixel to use for classification. I'd like to fit a random forest or linear model on it.
There are some formulas for how to do this at this link. The raster package is one approach. This basically converts the RGB bands to one black-and-white band (it also makes it smaller in size, which I am guessing is what you want).
library(raster)
color.image <- brick("yourjpg.jpg")
# Luminosity method for converting to greyscale
# Find more here http://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale/
color.values <- getValues(color.image)
bw.values <- color.values[,1]*0.21 + color.values[,2]*0.72 + color.values[,3]*0.07
I think the EBImage package can also help with this problem (it is not on CRAN; install it from Bioconductor):
source("http://bioconductor.org/biocLite.R")
biocLite("EBImage")
library(EBImage)
color.image <- readImage("yourjpg.jpg")
bw.image <- channel(color.image,"gray")
writeImage(bw.image,file="bw.png")
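To get from the grayscale values to the Kaggle-style CSV (one row per image, one "pixelN" column per pixel, plus a label column), flattening with as.vector and writing with write.csv is enough. A base-R sketch with synthetic 28x28 matrices standing in for the decoded JPEGs:

```r
# One row per image, columns label, pixel0, pixel1, ... like the Kaggle file.
set.seed(1)
images <- replicate(3, matrix(runif(28 * 28), 28, 28), simplify = FALSE)
labels <- c(0, 1, 0)  # made-up class labels

rows <- t(vapply(images, as.vector, numeric(28 * 28)))
colnames(rows) <- paste0("pixel", seq_len(ncol(rows)) - 1)
out <- data.frame(label = labels, rows)

csv_path <- file.path(tempdir(), "pixels.csv")
write.csv(out, csv_path, row.names = FALSE)
back <- read.csv(csv_path)
dim(back)  # 3 rows, 785 columns (1 label + 784 pixels)
```

With real data, images would come from looping readImage/channel (or the raster approach) over the folder, and labels from your training metadata.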
I am plotting some data in R using the following commands:
jj = ts(read.table("overlap.txt"))
pdf(file = "plot.pdf")
plot(jj, ylab="", main="")
dev.off()
The result looks like this:
The problem I have is that the pdf file I get is quite big (25 MB). Is there a way to reduce the file size? JPEG is not an option because I need a vector graphic.
Take a look at tools::compactPDF. You need to have either qpdf or ghostscript installed, but it can make a huge difference to PDF file size.
If reading a PDF file from disk, there are three options for Ghostscript quality (gs_quality), as indicated in the R help file:
printer (300dpi)
ebook (150dpi)
screen (72dpi)
The default is none. For example, to convert all PDFs in the folder mypdfs/ to ebook quality, use the command
tools::compactPDF('mypdfs/', gs_quality='ebook')
You're drawing a LOT of lines or points. Vector formats such as pdf, ps, eps, and svg store logical information about every point, line, and other element, so file size and drawing time grow with the number of elements. Vector images are generally best in a number of ways: most compact, best scaling, and highest reproduction quality. But when the number of graphical elements becomes very large, it is often better to switch to a raster format such as png. When you do, it's best to decide in advance what size image you want, both in pixels and in print measurements, in order to produce the best image.
For information from the other direction, too large a raster image, see this answer.
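When switching to a raster device, fixing the pixel dimensions and resolution up front keeps text readable at the intended print size. A minimal sketch with made-up dimensions (res is in pixels per inch, and the capabilities() guard covers builds without png support):

```r
# Rasterizing a dense scatter: fix pixel size and resolution up front.
# 2000x1400 px at 300 ppi prints at roughly 6.7 x 4.7 inches.
set.seed(1)
x <- rnorm(5e4); y <- rnorm(5e4)
if (capabilities("png")) {
  png("dense-scatter.png", width = 2000, height = 1400, res = 300)
  plot(x, y, pch = ".", col = rgb(0, 0, 0, 0.1))
  dev.off()
}
```

The semi-transparent colour also conveys point density, which a solid colour loses once everything overlaps.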
One way of reducing the file size is to reduce the number of values you plot. Assuming your data is in a dataframe called df:
# take sample of data from dataframe
sampleNo = 10000
sampleData <- df[sample(nrow(df), sampleNo), ]
I think the only other alternative within R is to produce a non-vector format. Outside of R, you could use Acrobat Professional (which is not free) to optimize the pdf; this can reduce the file size enormously.
Which version of R are you using? In R 2.14.0, pdf() gained a compress argument to support compression. I'm not sure how much it will help you, but there are also other tools to compress PDF files, such as Pdftk and qpdf. I have wrappers for both in the animation package, but you may want to use the command line directly.
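The effect of compress is easy to measure directly: write the same heavily overplotted figure twice and compare file sizes. A small sketch:

```r
# Same overplotted figure written twice: once uncompressed, once compressed.
set.seed(42)
x <- rnorm(2e4); y <- rnorm(2e4)
f1 <- tempfile(fileext = ".pdf"); f2 <- tempfile(fileext = ".pdf")
pdf(f1, compress = FALSE); plot(x, y, pch = "."); dev.off()
pdf(f2, compress = TRUE);  plot(x, y, pch = "."); dev.off()
file.size(f1) / file.size(f2)  # the compressed file is noticeably smaller
```

Note that compression shrinks the encoding of the drawing commands but does not reduce their number, so thinning the data still matters for rendering speed.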
Hard to tell without seeing what the plot looks like - post a screenshot?
I suspect it's a lot of very detailed lines, and most of the information probably isn't visible: lots of overlapping elements or very, very small detail. Try thinning your data in one dimension or another; I doubt you'll lose visible information.
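Thinning can be as simple as keeping every k-th row before plotting. A sketch with random data standing in for the overlap.txt series from the question:

```r
# Keep every 10th observation before plotting; with this much overplotting
# the thinned figure is usually indistinguishable from the full one.
jj <- ts(matrix(rnorm(1e5), ncol = 2))  # stand-in for ts(read.table("overlap.txt"))
k <- 10
jj_thin <- ts(jj[seq(1, nrow(jj), by = k), ])
nrow(jj_thin)  # 5000 rows instead of 50000
```

Plotting jj_thin instead of jj cuts the number of line segments in the pdf by roughly a factor of k.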