hist2d subrange selection in R

I would like to perform some statistical analysis on a specific zone of a very large table created with the hist2d function in R. Is there an elegant way to cut out a specific zone of the 2-D histogram and put it in a table? Thanks.

I'm not entirely clear on what you mean by "cut a definite zone", but as per the documentation on hist2d, the function returns the counts for each cell in a matrix. So you can easily extract the specific cells you want by subsetting:
y <- rnorm(2000, sd=1)
x <- rnorm(2000, sd=4)
# separate scales for each axis, this looks circular
tmp <- gplots::hist2d(x, y)
str(tmp$counts)
dim(tmp$counts)
tmp$counts[1:10,1:10]
So just take the appropriate subset of tmp$counts.
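For example, a small sketch of selecting a rectangular zone by bin midpoint (assuming, per the hist2d documentation, that tmp$x and tmp$y hold the bin midpoints; the cut-offs are arbitrary):
# keep only the cells whose midpoints fall in the zone -2 < x < 2, -1 < y < 1
xi <- which(tmp$x > -2 & tmp$x < 2)
yi <- which(tmp$y > -1 & tmp$y < 1)
zone <- tmp$counts[xi, yi]   # sub-matrix of counts for that zone
sum(zone)                    # e.g. total number of points falling in the zone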

Related

Point pattern classification with spatstat: how to choose the right bandwidth?

I'm still trying to find the best way to classify bivariate point patterns:
Point pattern classification with spatstat: what am I doing wrong?
I have now analysed 110 samples of my dataset using @Adrian's suggestion with sigma=bw.diggle (as I wanted an automatic bandwidth selection). f is a "resource selection function" (RSF) which describes the relationship between the intensity of the Cancer point process and the covariate (here the kernel density of Immune):
Cancer <- split(cells)[["tumor"]]
Immune <- split(cells)[["bcell"]]
Dimmune <- density(Immune,sigma=bw.diggle)
f <- rhohat(Cancer, Dimmune)
I am in doubt about some of the results I got. A dozen rho-functions looked weird (disrupted, with a single peak). After changing to the default sigma=NULL or to sigma=bw.scott (which are smoother) the functions became "better" (see the examples below). I also experimented with the following manipulations:
cells # bivariate point pattern with marks "tumor" and "bcell"
o.marks <- cells$marks # original marks
# A) randomly re-assign original marks
a.marks <- sample(cells$marks)
# B) replace marks randomly with a 50/50 proportion
b.marks <- as.factor(sample(c("tumor", "bcell"), replace=TRUE, size=length(o.marks)))
# C) random (homogeneous?) pattern with the original number of points
randt <- runifpoint(npoints(subset(cells, marks=="tumor")), win=cells$window)
randb <- runifpoint(npoints(subset(cells, marks=="bcell")), win=cells$window)
cells <- superimpose(tumor=randt, bcell=randb)
# D) tumor points are associated with bcell points (is "clustered" the right term?)
Cancer <- rpoint(npoints(subset(cells, marks=="tumor")), Dimmune, win=cells$window)
# E) tumor points are segregated from bcell points
reversedD <- Dimmune
density.scale.v <- sort(unique(as.vector(Dimmune$v)[!is.na(as.vector(Dimmune$v))])) # density scale
density.scale.v.rev <- rev(density.scale.v) # reversed density scale
new.image.v <- Dimmune$v
# loop over the pixel matrix, replacing each value with its reverse-ranked counterpart
for (row in 1:nrow(Dimmune$v)) {
  for (col in 1:ncol(Dimmune$v)) {
    if (is.na(Dimmune$v[row, col])) next
    number <- which(density.scale.v == Dimmune$v[row, col])
    new.image.v[row, col] <- density.scale.v.rev[number]
  }
}
reversedD$v <- new.image.v # reversed density
Cancer <- rpoint(npoints(subset(cells, marks=="tumor")), reversedD, win=cells$window)
A better way to generate inverse density heatmaps is given by @Adrian in his answer below.
I could not generate rpoint patterns from the bw.diggle density because it contained negative values. I therefore replaced the negatives with Dimmune$v[which(Dimmune$v<0)] <- 0 and could then run rpoint. As @Adrian explains in his answer below, this is normal and is solved more easily with the density.ppp argument positive=TRUE.
I first used bw.diggle because hopskel.test indicated "clustering" for all my patterns. Now I'm going to use bw.scott for my analysis, but can this decision somehow be justified? Is there a better method besides "the RSF function looks weird"?
Some examples: rho-function plots for sample10, sample20 and sample110 (images omitted).
That is a lot of questions!
Please try to ask only one question per post.
But here are some answers to your technical questions about spatstat.
Negative values:
The help for density.ppp explains that small negative values can occur because of numerical effects. To force the density values to be non-negative, use the argument positive=TRUE in the call to density.ppp. For example density(Immune, bw.diggle, positive=TRUE).
Reversed image: to reverse the ordering of values in an image Z you can use the following code:
V <- Z
A <- order(Z[])
V[][A] <- Z[][rev(A)]
Then V is the order-reversed image.
Tips for your code:
To generate a random point pattern with the same number of points and in the same window as an existing point pattern X, use Y <- runifpoint(ex=X).
To extract the marks of a point pattern X, use a <- marks(X). To assign new marks to a point pattern X, use marks(X) <- b.
To randomly permute the marks attached to the points in a point pattern X, use Y <- rlabel(X).
To assign new marks to a point pattern X where the new marks are drawn randomly with replacement from a given vector of values m, use Y <- rlabel(X, m, permute=FALSE).
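A hedged illustration of these tips, assuming cells is your marked pattern with levels "tumor" and "bcell" (the object names below are placeholders):
library(spatstat)
X  <- cells                            # your marked point pattern
Yc <- runifpoint(ex = X)               # random pattern, same n and window as X
m  <- marks(X)                         # extract the marks
Yp <- rlabel(X)                        # randomly permute the existing marks
Yr <- rlabel(X, labels = factor(c("tumor", "bcell")), permute = FALSE)  # resample marks 50/50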

In R, how do I count the number of data points on a scatter plot within a cell of custom dimensions?

Let's just say I have the following scatterplot:
set.seed(665544)
n <- 100
x <- cbind(
  x = runif(10, 0, 5) + rnorm(n, sd = 0.4),
  y = runif(10, 0, 5) + rnorm(n, sd = 0.4)
)
plot(x)
I want to divide this scatterplot into square cells of a specified size and then count how many points fall into each unique cell. This will essentially give me the local density value of that cell. What is the best way of doing this? Is there an R package that can help? Perhaps a 2D histogram method like in Matlab?
Quick clarifications:
1.) I'd like the function/method to take the following 3 arguments: the dimensions of the total area, the dimensions of a cell (OR the number of cells), and the data. It would then output a matrix where each value corresponds to a cell's point count (a sketch of one such helper appears after these clarifications).
2.) Q: Why do you want to use this method to determine local density? Isn't this much easier:
library(dbscan)
pointdensity(x, eps = .1, type = "frequency")
A: This method calculates the local density around each point. Though easy, this definition of local density then makes it very difficult (optimization algorithms necessary) to assign new data in a way that it matches the local density distribution of the original data set.
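A minimal sketch of such a helper (not from the original thread), binning with cut() and tabulating with table(); the argument names, limits and cell size below are assumptions:
count_cells <- function(xy, xlim, ylim, cell.size) {
  # square cells of side cell.size; points outside the limits are dropped
  xb <- seq(xlim[1], xlim[2], by = cell.size)
  yb <- seq(ylim[1], ylim[2], by = cell.size)
  table(cut(xy[, 1], xb, include.lowest = TRUE),
        cut(xy[, 2], yb, include.lowest = TRUE))
}
counts <- count_cells(x, xlim = c(-2, 7), ylim = c(-2, 7), cell.size = 0.5)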

How to extract specific values with point coordinates from Kriging interpolations made in R?

Using R version 3.4.2 and the library "geoR", I made kriging interpolations for different variables (below I give an example of my process). I also made a matrix with the coordinates of 305 trees with distinct marks (species, DBH, height) that lie within the same space as the interpolations, as seen in the attached image (https://imgur.com/SLQBnZH). I've been looking for ways to extract the nearest value of each variable for each tree and save the corresponding values in a data.frame or matrix, but I haven't been successful and can't find specific answers to this.
One thing I've been looking at is converting the kriging result into a raster (.tif) and proceeding from there. But kriging interpolations are made from vector data, so is it even possible?
I'd be glad to receive any sort of help, thank you in advance!
P.S. I'm doing this so that I can later use the data for spatial point pattern analysis.
#Kriging####:
PG<-read.csv("PGF.csv", header=T, stringsAsFactors=FALSE)
library("geoR")
x<-(PG$x)
y<-(PG$y)
#Grid
loci<-expand.grid(x=seq(-5, 65, length=100), y=seq(-5, 85, length=100))
names(loci)<-c("x", "y")
mix<-cbind(rep(1,10000), loci$x, loci$y, loci$x*loci$y)
#Model
pH1.mod<-lm(pH1~y*x, data=PG, x=T)
pH1.kg<-cbind(pH1.mod$x[,3], pH1.mod$x[,2], pH1.mod$residuals)
#Transform to geographic data
pH1.geo<-as.geodata(pH1.kg)
#Variogram
pH1.vario<-variog(pH1.geo, max.dist=35)
pH1.vario.mod<-eyefit(pH1.vario)
#Cross validation
pH1.valcruz<-xvalid(pH1.geo, model=pH1.vario.mod)
#Kriging
pH1.krig<-krige.conv(pH1.geo, loc=loci, krige=krige.control(obj.model=pH1.vario.mod[[1]]))
#Predictive model
pH1.yhat <- mix %*% pH1.mod$coefficients + pH1.krig$predict
#Exchange kriging prediction values
pH1.krig$predict <- pH1.yhat
#Image
image(pH1.krig)
contour(pH1.krig, add=TRUE)
#Tree matrix####:
CoA<-read.csv("CoAr.csv", header=T)
#Data
xa<-(CoA$X)
ya<-(CoA$Y)
points(xa,ya, col=4)
TreeDF<-(cbind.data.frame(xa, ya, CoA$Species, CoA$DBH, CoA$Height, stringsAsFactors = TRUE))
m<-(cbind(xa, ya, 1:305))
as.matrix(m)
I tried to find the value at each tree's location (trees 1:305) through the minimum distance to a predicted value using the following code (I suggest not running it since it takes too long):
for (i in 1:2) {
  d <- as.matrix(dist(rbind(m[i, ], as.matrix(pH1.krig$predict))))[i, 2:10000]
  print(c(2:10000)[d == min(d)])
}
aldo_tapia's answer at the following link was the approach needed for this problem. Thank you to everyone! https://gis.stackexchange.com/questions/284698/how-to-extract-specific-values-with-point-coordinates-from-kriging-interpolation
The process is as follows:
Use the extract() function from the raster package:
library(raster)   # also attaches sp, which provides SpatialPointsDataFrame()
r <- SpatialPointsDataFrame(loci, data.frame(predict = pH1.krig$predict))
gridded(r) <- TRUE                # convert the prediction points to a regular grid
r <- as(r, 'RasterLayer')         # then to a raster
pts <- SpatialPointsDataFrame(CoA[, c('X','Y')], CoA)   # the tree locations
pH1.arb <- extract(r, pts)        # kriged value at each tree
To this I just added the values to the tree data frame with cbind, since they are in the same order:
COA2<-cbind(CoA, pH1val=pH1.arb)
I will repeat the process for each variable.
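A hedged sketch of how the repetition could be wrapped up (the helper name and krig.list are my own additions, not part of aldo_tapia's answer):
extract_at_trees <- function(krig, loci, trees) {
  r <- SpatialPointsDataFrame(loci, data.frame(predict = krig$predict))
  gridded(r) <- TRUE
  r <- as(r, 'RasterLayer')
  extract(r, SpatialPointsDataFrame(trees[, c('X', 'Y')], trees))
}
krig.list <- list(pH1 = pH1.krig)   # add the other kriged variables here
vals <- sapply(krig.list, extract_at_trees, loci = loci, trees = CoA)
CoA2 <- cbind(CoA, vals)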

Subset 3D matrix using polygon coordinates

I'm working on some bioacoustical analysis and got stuck on an issue that I believe can be worked out mathematically. I'll use a sound sample from the seewave package:
library(seewave)
library(tuneR)
data(tico)
By storing a spectrogram (i.e. a graphic representation of the sound wave tico) in an R object, we can now deal with the wave file computationally.
s <- spectro(tico, plot=F)
class(s)
>[1] "list"
length(s)
>[1] 3
The object s consists of two numerical vectors, x = s$time and y = s$freq, representing the X and Y axes respectively, and a matrix z = s$amp of amplitude values whose dimensions match the lengths of x and y. z is effectively the third dimension and can be plotted as a surface using persp3D (plot3D), plot_ly (plotly) or plot3d (rgl). Alternatively, the wave file can be plotted in 3D using seewave if one wishes to visualize it as an interactive rgl plot.
spectro3D(tico)
That being said, the analysis I'm conducting aims to calculate contours of relative amplitude:
con <- contourLines(x=s$time, y=s$freq, z=t(s$amp), levels=seq(-25, -25, 1))
Select the longest contour:
n.con <- numeric(length(con))
for(i in 1:length(con)) n.con[i] <- length(con[[i]]$x)
n.max <- which.max(n.con)
con.max <- con[[n.max]]
And then plot the selected contour against the spectrogram of tico:
spectro(tico, grid=F, osc=F, scale=F)
polygon(x=con.max$x, y=con.max$y, lwd=2)
Now comes the tricky part. I must find a way to "subset" the matrix of amplitude values s$amp using the coordinates of the longest contour con.max. What I aim to achieve is a new matrix containing only the amplitude values inside the polygon. The remaining parts of the spectrogram should then appear as blank spaces.
One approach I thought could work would be a loop that replaces every value outside the polygon with a given amplitude value (e.g. -25 dB). I once used a similar approach to remove the values below -30 dB and it worked out perfectly:
for (i in 1:length(s$amp)) {
  if (s$amp[i] == -Inf | s$amp[i] <= -30) s$amp[i] <- -30
}
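In the same spirit, a hedged sketch of masking everything outside the longest contour, using point.in.polygon() from the sp package (an assumption; any point-in-polygon test would do) on the grid of time/frequency cell centres:
library(sp)   # for point.in.polygon()
# rows of s$amp correspond to s$freq, columns to s$time (hence the t() used above)
grid <- expand.grid(f = seq_along(s$freq), t = seq_along(s$time))
inside <- point.in.polygon(s$time[grid$t], s$freq[grid$f], con.max$x, con.max$y) > 0
mask <- matrix(FALSE, nrow = nrow(s$amp), ncol = ncol(s$amp))
mask[cbind(grid$f, grid$t)] <- inside
amp.masked <- s$amp
amp.masked[!mask] <- -30   # blank out the cells outside the polygon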
Another thought would be to create a new matrix with the same dimensions as s$amp, subset s$amp using the coordinates of the contour, and then place the subset in the new matrix. Roughly:
mt <- matrix(-30, nrow=nrow(s$amp), ncol = ncol(s$amp))
sb <- s$amp[con.max$y, con.max$x]
new.mt <- c(mt, sb)
s$amp <- new.mt
I'll appreciate any help.

Simplifying 3D points. R

I need to work with 3D (spatial) data: very long tables with four columns:
x, y, z, Value
There are too many data points to plot with scatterplot3d or similar packages (rgl, lattice, ...).
I would like to reduce the number of data points.
One idea could be to sample.
But I'd like to know how to reduce the data, getting new points that summarize the nearby points.
Is there any package to do it and work with this kind of data?
Something like creating a predefined 3D grid and averaging the points within each cell.
But I don't know whether it's better to make the new points equidistant or to obtain their coordinates by averaging the old ones locally, or even to weight each old point's contribution by its distance to the new point.
Other issues:
The "optimal" grid could be tilted, but I don't know that beforehand.
I don't know whether the grid should be extended a little beyond the data, nor by how much.
P.S.: I don't want to create surfaces or wireframes or fit anything.
P.S.: I've checked spatial packages, but as far as I can see they are aimed at data on a surface, such as the Earth, without height.
To reduce the size of the data set, have you thought about using a clustering method such as k-means (kmeans) or hierarchical clustering (hclust)? These methods could reduce your data set down to a reasonable size. Be aware that if your data set is large enough, these methods could still be too computationally expensive.
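For example, a minimal k-means sketch (the numbers of points and centres are arbitrary assumptions):
set.seed(1)
dat <- data.frame(x = runif(1e5), y = runif(1e5), z = runif(1e5), Value = rnorm(1e5))
km <- kmeans(dat[, c("x", "y", "z")], centers = 500, iter.max = 50)
reduced <- as.data.frame(km$centers)                  # 500 representative points
reduced$Value <- tapply(dat$Value, km$cluster, mean)  # average Value per cluster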
It seems like you might benefit from fitting some sort of model to your data and then displaying the prediction at a resolution of your choice.
Here is an example of fitting with a GAM model:
library(sinkr) # https://github.com/marchtaylor/sinkr
library(mgcv)
library(rgl)
# make data ---------------------------------------------------------------
n <- 1000
x <- runif(n, min=-10, max=10)
y <- runif(n, min=-10, max=10)
z <- runif(n, min=-10, max=10)
value <- (-0.01*x^3 + -0.2*y^2 + -0.3*z^2) * rlnorm(n, 0, 0.1)
# fit model (GAM) ---------------------------------------------------------
fit <- gam(value ~ s(x) + s(y) + s(z))
plot.gam(fit, pages = 1)
This visualization is already helpful in understanding the 3d pattern of value, but you could also predict the values to a new grid. To visualize the prediction in 3d, the rgl package might be useful:
# predict to new grid -----------------------------------------------------
grd <- expand.grid(
  x = seq(min(x), max(x),, 10),
  y = seq(min(y), max(y),, 10),
  z = seq(min(z), max(z),, 10)
)
grd$value <- predict.gam(fit, newdata = grd)
# plot prediction with rgl ------------------------------------------------
# original data
plot3d(x, y, z, col=val2col(value, col=jetPal(100)))
rgl.snapshot("original.png")
# interpolated data
plot3d(grd$x, grd$y, grd$z, col=val2col(grd$value, col=jetPal(100)), alpha=0.5, size=5)
rgl.snapshot("points.png")
spheres3d(grd$x, grd$y, grd$z, col=val2col(grd$value, col=jetPal(100)), alpha=0.3, radius=1)
rgl.snapshot("spheres.png")
I've found the way to do it.
I'll post an example, just in case it's useful for others.
I use only two dimensions (and work only on the coordinates) to keep it clear, but it can be generalized to higher dimensions and to summarizing the other variables at every coordinate.
set.seed(1)
xx <- runif(30,0,100); yy <- runif(30,0,100)
datos <- data.frame(xx,yy) #sample data
plot(xx,yy,pch=20) # 2D plot to visualize it.
n <- 4 # Same number of splits on every axis. Simple example.
rango <- function(ii) (max(ii) - min(ii)) + 0.000001
renorm <- function(jj) trunc(n * (jj - min(jj)) / rango(jj)) + 1
result <- aggregate(cbind(xx, yy) ~ renorm(xx) + renorm(yy), datos, mean)
points(result$xx,result$yy,pch=20, col="red")
abline(v=( min(xx) + (rango(xx)/n)*0:n) )
abline(h=( min(yy) + (rango(yy)/n)*0:n) )
Everything could be modified with na.rm=TRUE to handle missing values.
Maybe there are simpler solutions with split, cut, dplyr, data.table, tapply...
I like this way more than fixing the new points' coordinates at the centre of every subregion, because if a subregion contains only one point it keeps its original coordinates.
The +0.000001 added in rango() is there to keep the point at the maximum from being pushed into an extra subregion.
The full solution would have been:
aggregate(cbind(xx,yy,zz, Value)~renorm(xx)+renorm(yy)+renorm(zz),datos, mean)
And it could be further improved by weighting distances.
