R - counting the number of objects in an image using bwlabel

I'm rather new to image analysis in R and was wondering how I can assess the number of individual plants within a picture such as this one:
I thought of converting the picture to a black/white image and then using the bwlabel function to count the number of objects within the picture, like this:
library(imager)
image <- load.image("plants.png")   # load the field picture
# excess-green index: highlights vegetation
R <- R(image)
G <- G(image)
B <- B(image)
ExGreen <- 2*G - R - B
plot(ExGreen)
# binarise
ExGreen <- threshold(ExGreen, thr = "auto", approx = FALSE, adjust = 1)
plot(ExGreen)
# remove small specks
ExGreen <- clean(ExGreen, 10)
plot(ExGreen)
# label connected components and count them (bwlabel comes from EBImage; imager's equivalent is label)
labels <- bwlabel(ExGreen)
max(labels)
However, I'm running into the issue that my white-colored potato plants do not always form a single contiguous region.
I was therefore wondering whether there is some option to connect white pixels that are very close to each other, or whether it is possible to draw a circle around every potato plant and then use the bwlabel function...
Or is there any other option to solve my problem?
Thanks in advance!

I was not aware that R has a package, imager, for image processing that already has a good number of built-in functions for solving this problem. Thanks for pointing me to it through this interesting question.
Here is my solution (beware that some thresholds are hard coded and thus not scale invariant!):
library(imager)
image <- load.image("plants.png")
R<-R(image); G<-G(image); B<-B(image)
ExGreen<-2*G-R-B
plot(ExGreen)
# blur before thresholding to fill some gaps
ExGreen <- isoblur(ExGreen, 3)
ExGreen <- threshold(ExGreen, thr="auto", approx=FALSE, adjust=1)
plot(ExGreen)
# split into connected components and keep only the large CCs
ccs <- split_connected(ExGreen)
largeccs <- purrr::keep(ccs, function(x) {sum(x) > 800})
plot(add(largeccs))
# count CCs
cat(sprintf("Number of large CCs: %i\n", length(largeccs)))
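If blurring before thresholding still leaves a plant split into several components, another option (as the question suggests) is to connect nearby white pixels directly by dilating the thresholded pixel set with imager's grow() before labelling. A minimal sketch, reusing the same plants.png file; the dilation radius and the size cutoff are assumptions to tune:
library(imager)
image <- load.image("plants.png")
ExGreen <- 2*G(image) - R(image) - B(image)
px <- threshold(ExGreen, thr = "auto", approx = FALSE, adjust = 1)
px <- clean(px, 3)                 # drop small specks first
px <- grow(px, 7)                  # dilate the pixel set; radius 7 is a guess, tune per image
ccs <- split_connected(px)
largeccs <- purrr::keep(ccs, function(x) sum(x) > 800)
cat(sprintf("Number of large CCs after dilation: %i\n", length(largeccs)))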

Related

How to add node size as legend in Cytoscape 3?

From an R function (cnetplot) I've obtained the following image, which does not look very nice.
Therefore, I extracted the data from the R object and wrote a script to create an equivalent network data file that is readable by Cytoscape. The following equivalent plot from Cytoscape looks much better, but the problem is that I am not able to add a legend based on node size in Cytoscape the way the R function did. I tried the Legend Creator app in Cytoscape but couldn't get it to work.
The original data and R code to reproduce the plots can be found in the following link.
ftp://ftp.lrz.de/transfer/Data_For_Plot/StackOverflow/
I looked into this question, Mapping nodes sizes and adding legends in cytoscape network, but in that case the questioner was already able to load the node sizes as legends in Cytoscape, and moreover they used a Python package.
Any suggestions will be highly appreciated.
Here's a little R script that will generate a min/max node size legend. You'll need to set the first variable to the name of the Visual Style in your network. This one works with the sample session file, "Yeast Perturbation.cys" if you want to test it there first.
If you are familiar with RCy3, then it should be self-explanatory. You can customize the positioning of the nodes, labels and label font size, etc. You can even adapt it to generate intermediate values (like in your example above) if you want.
NOTE: This adds nodes to your network. If you run a layout after adding these, then they will be moved! If you rely on node counts or connectivity measures, then these will affect those counts! Etc.
If you find this useful, I might try to add it as a helper function in the RCy3 package. Let me know if you have feedback or questions.
# https://bioconductor.org/packages/release/bioc/html/RCy3.html
library(RCy3)
# Set your current style name
style.name <- "galFiltered Style"
# Extract the min and max node size from the style's continuous mapping
res <- cyrestGET(paste0("styles/", style.name, "/mappings/NODE_SIZE"))
size.col <- res$mappingColumn
min.size <- res$points[[1]]$equal                     # node size at the first mapping point
min.value <- res$points[[1]]$value                    # data value at the first mapping point
max.size <- res$points[[length(res$points)]]$equal    # node size at the last mapping point
max.value <- res$points[[length(res$points)]]$value   # data value at the last mapping point
# Prepare as a data.frame
legend.df <- data.frame(c(min.size, max.size), c(min.value, max.value))
colnames(legend.df) <- c("legend.label", size.col)
rownames(legend.df) <- c("legend.size.min", "legend.size.max")
# Add legend nodes and data
addCyNodes(c("legend.size.min", "legend.size.max"))
loadTableData(legend.df)
# Style and position
setNodeColorBypass(c("legend.size.min", "legend.size.max"), "#000000")
setNodePropertyBypass(c("legend.size.min", "legend.size.max"),
                      c("E,W,l,5,0", "E,W,l,5,0"),  # node_anchor, label_anchor, justification, x-offset, y-offset
                      "NODE_LABEL_POSITION")
setNodeLabelBypass(c("legend.size.min", "legend.size.max"), legend.df$legend.label)
setNodePropertyBypass("legend.size.max",
                      as.numeric(max.size)/2 + as.numeric(min.size)/2 + 10,  # vertical spacing
                      "NODE_Y_LOCATION")
setNodeFontSizeBypass(c("legend.size.min", "legend.size.max"), c(20, 20))
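A hedged sketch of how this might be run against a live Cytoscape session; cytoscapePing(), openSession(), fitContent() and exportImage() are standard RCy3 calls, and the style name must match whatever your own network uses:
library(RCy3)
cytoscapePing()                      # check that Cytoscape is running and CyREST is reachable
# openSession()                      # optionally load the bundled sample session to test on
style.name <- "galFiltered Style"    # replace with the Visual Style name of your own network
# ...run the legend script above, then:
fitContent()                         # bring the new legend nodes into view
# exportImage("network_with_legend", "PNG")   # optionally save the result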

Detect color in image

I would like to have R detect a given color in a section of an image.
I've been reading about RGB schemes, but I thought there might be a package, or some other way, to have R detect a cluster of pixels where, for example, the color yellow appears.
Is there a solution, or am I just trapped in RGB?
Thanks.
Here you go:
install.packages('raster')
library(raster)
# Get some data
duck.jpg <- tempfile(fileext = ".jpg")
download.file('http://www.pilgrimshospices.org/wp-content/uploads/Pilgrims-Hospice-Duck.jpg', duck.jpg, mode = "wb")
# Plug it into a stack object (one layer per band)
duck.raster <- stack(duck.jpg)
names(duck.raster) <- c('r', 'g', 'b')
# Look at it
plotRGB(duck.raster)
# Flag yellow pixels: high red, high green, low blue (thresholds are approximate; tune them for your image)
duck.yellow <- duck.raster
duck.yellow$Yellow_spots <- 0
duck.yellow$Yellow_spots[duck.yellow$r > 150 & duck.yellow$g > 150 & duck.yellow$b < 100] <- 1
plot(duck.yellow$Yellow_spots)
So, just a few teachable points here. A digital image is basically a bucket for holding pixel values. So all you need to do to subset a raster (read: digital image) is use some tool to read it into R, decide how you want to subset it, and subset it the same way you would subset any other data in R.
Another way to think about a raster in R is as a stack of same-size matrices, with the number of matrices in the stack equal to the number of bands in the image. In this manner, you can manipulate the data as you would manipulate any other matrix in R.
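To make that last point concrete, here is a small hedged illustration that treats one band of the duck raster from the code above as an ordinary matrix:
library(raster)
# assumes duck.raster from the code above is still in the workspace
m <- as.matrix(duck.raster$r)    # the red band as a plain numeric matrix
dim(m)                           # image height x width
summary(as.vector(m))            # pixel-value statistics, like any other matrix
m[1:5, 1:5]                      # the top-left corner of the image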

Extracting boundary line from Image in R

So I have some kind of cross-section picture in JPG format that I want to work with. For better understanding I drew a picture that hopefully gives a reasonable idea of what the real pictures will look like:
At the top of the picture is material A, at the bottom material B.
Goal: I want to get the Pixels of the boundary line between both materials.
My way so far:
I already know how to read pictures with a package called EBImage.
I also know that this will result in a matrix with a color value for every pixel.
I thought it would be better to convert the JPEG into a binary picture with only black and white colors.
I thought filling up the black part below (material B) and reducing the noise would be nice, so that I could use column sums (a sum of 1s) to find the row number where material A touches material B, which should be the boundary line I'm after (right?).
Problems:
I can't find filters which fill up the black parts intelligently; in the real pictures there will be much more noise, which will complicate things even further...
I am not sure whether all of this is even necessary, or whether there is a more efficient way to reach my goal of finding the boundary line.
Thank you very much for every tip in advance!
Answers will always be vague when there's no example to work with. I would normally use ImageJ for a task like this but EBImage has the commands that I would use.
From EBImage I would make the image binary and then erode, dilate, and fill holes (fillHull).
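A minimal sketch of those EBImage steps, under the assumption that material A is bright and material B is dark in a grayscale version of the picture; the file name and brush size are placeholders:
library(EBImage)
img  <- readImage("cross_section.jpg")     # hypothetical file name
gray <- channel(img, "gray")
bw   <- gray > otsu(gray)                  # binarise: material A -> white, material B -> black (assumed)
kern <- makeBrush(5, shape = "disc")
bw   <- erode(dilate(bw, kern), kern)      # close small gaps and remove speckle
bw   <- fillHull(bw)                       # fill enclosed holes
# boundary: for each image column, the lowest white row before the black region starts
boundary <- apply(bw, 1, function(col) if (any(col > 0)) max(which(col > 0)) else NA)
plot(boundary, type = "l", xlab = "column", ylab = "boundary row")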
Your picture also looks like it might be a candidate for a support vector machine. There are a couple of packages for R with svm functions; one is e1071.

Plot two large Raster Data Sets in a Scatter Plot

I have a problem with plotting two raster data sets in R.
I use two different IRS LISS III scenes (with the same extent), and what I want is to plot the pixel values of both scenes in one scatterplot (x = Layer 1 and y = Layer 2).
My problem is the handling of this large amount of data. Each scene has about 80,000,000 pixels; through reclassification and other processing I was able to scale the values down to about 12,000,000 per raster. But when I try to import these values, e.g. into a data.frame, or load them from an ASCII file, I always run into memory problems.
Is it possible to plot such an amount of data? If yes, it would be great if someone could help me; I have been trying for two days now and I'm getting desperate.
Many thanks,
Stefan
Use the raster package; there's a good chance it will work out of the box, since it has good "out-of-memory" handling. If it doesn't work with the ASCII grids, convert them to something more efficient (like an LZW-compressed and tiled GeoTIFF) with GDAL. And if they are still too big, resize them; that's all the graphics rendering process will do anyway. (You don't say how you resized originally, or give any details on how you are trying to read them.)
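A hedged sketch of that workflow, assuming the two scenes have been converted to GeoTIFFs called scene1.tif and scene2.tif and share the same extent and resolution:
library(raster)
r1 <- raster("scene1.tif")    # values stay on disk until they are needed
r2 <- raster("scene2.tif")
# Option 1: raster's built-in scatterplot method for two layers (it samples cells on large rasters)
plot(r1, r2, maxpixels = 100000)
# Option 2: draw a regular sample of cells yourself; with identical extent and resolution,
# the same cells should be sampled from both layers
s1 <- sampleRegular(r1, size = 100000)
s2 <- sampleRegular(r2, size = 100000)
plot(s1, s2, pch = ".", xlab = "Layer 1", ylab = "Layer 2")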

R package for motion capture data analysis and visualisation

I am a newbie in R, love it, but I am surprised by the complete lack of a solid package to analyse motion capture data.
The simplest motion capture file is just a massive table with 'XYZ' coordinates for each point attached to a recorded subject, for every frame captured. I know that I can find individual methods and functions in R to perform complex operations (like principal component analysis), or I can plot time series for all the points. But when I am looking for examples that could also educate me statistically about analysing human movement, and provide a nice toolbox for visual representation of the data, R turns out to be a cold desert. On the other hand, MATLAB has the Motion Capture Toolbox and the MoCap Toolbox, and especially the latter has quite good options for plotting and analysing captures. But let's be honest: MATLAB has quite an ugly visualisation engine compared to R.
Some specific requests for R motion capture package would include:
reading, editing, visualizing and transforming mocap data
kinetic and kinematic analysis
time-series and principal component analysis
animating data
Am I missing something here (in my Googling), or is there really no mocap package out there for R? Has anyone tried playing with motion capture data in R? Can you give me some directions?
UPDATE, December 2019: It seems like the mocapr package by Steen Harsted is a much more powerful tool than the one I built. Enjoy.
Have a look at my package, the mocap package:
https://github.com/gsimchoni/mocap
It is far from perfect but it's a start, currently tested only on CMU Graphics Lab Motion Capture Database ASF/AMC files.
And here is a blog post with some more details.
I used the package rgl to create an animation from a motion gesture dataset. Although it's not a package made specifically for gesture data, you can work with it.
In the example below, we have gesture data for 8 points on the upper body: hip center, spine, shoulder center, head, left shoulder, left wrist, right shoulder, and right wrist. The subject has his hands down and his right arm is making an upward movement.
I restricted the dataset to 6 time observations (seconds, if you will), because otherwise it would get too big to post here.
Each line of the original dataset corresponds to a time observation, and the coordinates of each body point are defined in sets of 4 (every four columns is one body point). So in each line we have "x", "y", "z", "br" for the hip center, then "x", "y", "z", "br" for the spine, and so on. The "br" is always 1, in order to separate the three coordinates (x, y, z) of each body point.
Here is the original (restricted) dataset:
DATA.time.obs<-rbind(c(-0.06431,0.101546,2.990067,1,-0.091378,0.165703,3.029513,1,-0.090019,0.518603,3.022399,1,-0.042211,0.687271,2.987086,1,-0.231384,0.419869,2.953286,1,-0.299824,0.173991,2.882627,1,0.063367,0.399478,3.136306,1,0.134907,0.176191,3.159998,1),
c(-0.067185,0.102249,2.990185,1,-0.095083,0.166589,3.028688,1,-0.093098,0.519146,3.019775,1,-0.043808,0.687041,2.987671,1,-0.234622,0.417481,2.94581,1,-0.300324,0.169313,2.869782,1,0.056816,0.398384,3.135578,1,0.134536,0.180875,3.162843,1),
c(-0.069282,0.102964,2.989943,1,-0.098594,0.167465,3.027638,1,-0.097184,0.52169,3.019556,1,-0.046626,0.695406,2.989244,1,-0.23478,0.417057,2.943475,1,-0.300101,0.168628,2.860515,1,0.053793,0.395444,3.143226,1,0.134175,0.182816,3.172053,1),
c(-0.070924,0.102948,2.989369,1,-0.101156,0.167554,3.026474,1,-0.100244,0.522901,3.018919,1,-0.049834,0.696996,2.987933,1,-0.235301,0.416329,2.939331,1,-0.301339,0.170203,2.85497,1,0.04762,0.390872,3.142792,1,0.14041,0.186844,3.182172,1),
c(-0.071973,0.103372,2.988788,1,-0.103215,0.16776,3.025409,1,-0.102334,0.52281,3.019341,1,-0.051298,0.697003,2.991192,1,-0.235497,0.414859,2.935161,1,-0.297678,0.15788,2.833734,1,0.045973,0.386249,3.147609,1,0.14408,0.1916,3.204443,1),
c(-0.073223,0.104598,2.988132,1,-0.106597,0.168971,3.022554,1,-0.106778,0.522688,3.015138,1,-0.051867,0.697781,2.990767,1,-0.236137,0.414773,2.931317,1,-0.297552,0.153462,2.827027,1,0.039316,0.39146,3.166831,1,0.175061,0.214336,3.207459,1))
For each time point, we can create a matrix where each row will be a body point, and the columns will be the coordinates:
# Single time point for analysis
time.point <- 1
# Number of coordinates per body point (x, y, z, br)
coordinates <- 4
# Number of body points
body.points <- dim(DATA.time.obs)[2]/coordinates
# Total time of gesture (number of observations)
total.time <- dim(DATA.time.obs)[1]
# Transform the data for a single time observation into a matrix
DATA.matrix <- matrix(DATA.time.obs[time.point,], nrow = body.points, ncol = coordinates, byrow = TRUE)
colnames(DATA.matrix)<-c("x","y","z","br")
rownames(DATA.matrix)<-c("hip_center","spine","shoulder_center","head",
"left_shoulder","left_wrist","right_shoulder",
"right_wrist")
So, we have, at each point of time, a matrix like this:
x y z br
hip_center -0.064310 0.101546 2.990067 1
spine -0.091378 0.165703 3.029513 1
shoulder_center -0.090019 0.518603 3.022399 1
head -0.042211 0.687271 2.987086 1
left_shoulder -0.231384 0.419869 2.953286 1
left_wrist -0.299824 0.173991 2.882627 1
right_shoulder 0.063367 0.399478 3.136306 1
right_wrist 0.134907 0.176191 3.159998 1
And now we use rgl to plot the data from this matrix:
#install.packages("rgl")
library(rgl)
# INITIAL PLOT
x<-unlist(DATA.matrix[,1])
y<-unlist(DATA.matrix[,2])
z<-unlist(DATA.matrix[,3])
# OPEN A BLANK 3D PLOT AND SET INITIAL NEUTRAL VIEWPOINT
open3d()
rgl.viewpoint(userMatrix=rotationMatrix(0,0,0,0))
# SET FIGURE POSITION
# This is variable. It will depend on your dataset
# I've found that for this specific dataset a rotation
# of -0.7*pi on the Y axis works
# You can also plot and select the best view with
# your mouse. This selected view will be passed on
# to the animation.
U <- par3d("userMatrix")
par3d(userMatrix = rotate3d(U, -0.7*pi, 0,1,0))
# PLOT POINTS
points3d(x=x,y=y,z=z,size=6,col="blue")
text3d(x=x,y=y,z=z,texts=1:8,adj=c(-0.1,1.5),cex=0.8)
# You can also plot each body point name.
# This might be helpful when you don't know the
# initial orientation of your plot
# text3d(x=x,y=y,z=z,texts=rownames(DATA.matrix),
# cex=0.6,adj=c(-0.1,1.5))
# Based on the plotted figure, connect the line segments
CONNECTOR<-c(1,2,2,3,3,4,3,5,3,7,5,6,7,8)
segments3d(x=x[CONNECTOR],y=y[CONNECTOR],z=z[CONNECTOR],col="red")
Then, we have this:
To create an animation, we can put all this into a function and use lapply.
movement.points<-function(DATA,time.point,CONNECTOR,body.points,coordinates){
DATA.time <- DATA[time.point,]
DATA.time <- matrix(DATA.time, nrow = body.points, ncol = coordinates, byrow = TRUE)
x<-unlist(DATA.time[,1])
y<-unlist(DATA.time[,2])
z<-unlist(DATA.time[,3])
# I used next3d instead of open3d because now I want R to plot
# several plots on top of our original, creating the animation
next3d(reuse=FALSE)
points3d(x=x,y=y,z=z,size=6,col="blue")
segments3d(x=c(x,x[CONNECTOR]),y=c(y,y[CONNECTOR]),z=c(z,z[CONNECTOR]),col="red")
# You can control the "velocity" of the animation by changing the
# parameter below. Smaller = faster
Sys.sleep(0.5)
}
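The lapply call itself is not shown above; a hedged sketch of what it could look like, reusing the objects defined earlier:
# open a fresh device, reapply the chosen viewpoint, then step through all time points
open3d()
U <- par3d("userMatrix")
par3d(userMatrix = rotate3d(U, -0.7*pi, 0, 1, 0))
invisible(lapply(1:total.time, function(t)
  movement.points(DATA.time.obs, t, CONNECTOR, body.points, coordinates)))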
I know this solution is not elegant, but it works.
Judging by a quick search on RSeek, there isn't a motion capture package available for R. It looks like you'll need to find equivalents for each function. The more general ones should be fairly easy to find (interpolation, subsetting, transformation/projection, time-series analysis, PCA, matrix analysis, etc.), and the very process of writing your own custom functions for specific things, like estimating instantaneous kinetic energy, is probably the best way to learn!
You may find plyr useful for knocking the data into shape and the animation package for visualising motion.
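As a hedged illustration of that reshaping step, here is one way plyr could knock wide mocap data (four columns per marker, as in the rgl answer above) into long format for time-series plots; the column layout is an assumption:
library(plyr)
# DATA.time.obs: one row per frame, columns grouped as (x, y, z, br) per marker
mocap.long <- ldply(1:8, function(p) {
  cols <- (p - 1) * 4 + 1:3                 # x, y, z columns of marker p (skip the br column)
  data.frame(marker = p,
             frame  = seq_len(nrow(DATA.time.obs)),
             x = DATA.time.obs[, cols[1]],
             y = DATA.time.obs[, cols[2]],
             z = DATA.time.obs[, cols[3]])
})
head(mocap.long)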
