R sf: st_intersection on a list of polygons

I have a list of polygons (and multi-polygons) named p_1, p_2, ..., p_n, and I would like to obtain the area in which they all intersect. As st_intersection() does not accept lists as arguments, I tried the following three approaches, but none of them provides a satisfactory solution, so I am looking for alternative, more efficient techniques.
(i) I could loop through the list
for(i in P) p_1 <- st_intersection(p_1, i)
where P is a list containing polygons p_2 to p_n. But that is rather slow.
(ii) A do.call() approach, i.e.
p <- do.call(st_intersection, P)
where P is a list of polygons p_1 to p_n, only computes the intersection between the first two polygons in the list.
(iii) I could combine the polygons into one sf object and then run st_intersection():
p <- do.call(c, P) %>%
st_sf() %>%
st_intersection()
It works but is slow, presumably because it also derives a lot of other polygons apart from the common intersection of all polygons in P.
Looping through a hierarchy of pairwise intersections in a parallelized framework might be faster. However, I assume there is a simpler and more efficient solution than that.
Any comments and suggestions are welcome.

I don't think the overhead of iterating through a list is a problem here: finding the intersection of multiple polygons is just computationally expensive. However, the method of sequentially applying a function to members of a list (effectively what you were trying to do with do.call) is easily managed using purrr::accumulate:
You don't have a reproducible example for folks here to test possible solutions, and creating sf polygons from scratch involves some work, so that may have been why your previous question was closed - I don't know.
Anyway, let's create three overlapping squares in a list and draw them:
library(sf)
library(purrr)
# create square
s1 <- rbind(c(1, 1), c(10, 1), c(10, 10), c(1, 10), c(1, 1))
p <- list(s1 = s1, s2 = s1 + 4, s3 = s1 - 4)
p <- lapply(p, function(x) st_sfc(st_polygon(list(x))) )
plot(p[[1]], xlim = c(-5, 15), ylim = c(-5, 15))
plot(p[[2]], add = TRUE)
plot(p[[3]], add = TRUE)
Our goal is to find the intersection of all three squares, which of course is the tiny square in the center. Using purrr, this is as easy as:
intersection <- accumulate(p, st_intersection)$s3
So when we add our result, coloured red, we get:
plot(intersection, col = "red", add = TRUE)
In terms of performance, accumulate is only about 10% faster than the raw loop, so you may need to parallelize this if performance is a big problem. Also, if there is a possibility that the polygons share no common intersection, you can find your smallest polygon and use st_intersects to check that all the other polygons actually intersect it. That is a much quicker calculation, provided there is a fair chance of there being no unique intersection.
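A minimal sketch of that pre-check, reusing the list p from above (the smallest-polygon heuristic is my own assumption; every polygon intersecting the smallest one is a necessary, though not sufficient, condition for a common intersection):
smallest <- p[[which.min(sapply(p, function(x) as.numeric(st_area(x))))]]
# quick necessary condition: every polygon must at least touch the smallest one
all_touch <- all(sapply(p, function(x) lengths(st_intersects(x, smallest)) > 0))
if (all_touch) {
  intersection <- accumulate(p, st_intersection)[[length(p)]]
} else {
  intersection <- NULL  # no common intersection is possible
}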

Related

Calculating the percentage of overlap between polytopes (n-dimensions)

I have to figure out the percentage of overlap between polytopes in n-dimensional spaces, where my only available source of reference is a set of randomly sampled points within those polytopes.
Assume that the following two R objects are two sets of randomly sampled points from two different polytopes in 5 dimensions:
one <- matrix(runif(5000, min = 0, max = 5), ncol = 5)
two <- matrix(runif(5000, min = 0, max = 4), ncol = 5)
In this example, I selected a smaller range for the second object, so we know that there should be less than 10% overlap. Let me know if I am wrong.
EDIT:
Just to make it really clear, the question is what is the percentage of overlap between those two objects?
I need a method that generalizes to n-dimensional spaces.
This stackoverflow question is somewhat similar to what I am trying to do, but I didn't manage to get it to work.
So, the most straightforward way is to use the hypervolume package.
library(hypervolume)
one <- hypervolume(matrix(runif(5000, min = 0, max = 5), ncol = 5))
two <- hypervolume(matrix(runif(5000, min = 0, max = 4), ncol = 5))
three = hypervolume_set(one, two, check.memory=FALSE)
get_volume(three)
This will get you the volume.
hypervolume_overlap_statistics(three)
This function will output four different metrics, one of which is the Jaccard Similarity Index.
The Jaccard Similarity is the proportion of overlap between the two sample sets (the intersection divided by the union).
Alternatives
Chris suggested volesti as an alternative. Another alternative would be the geometry package.
They do not calculate the proportion straight away. Here you need to find the intersection (e.g. intersectn in geometry, VpolytopeIntersection in volesti), calculate the volumes of the two polytopes and of their intersection, and then divide the volume of the intersection by the volume of the union (the sum of the two volumes minus the intersection).
Here, they are also using a different method to calculate the volume and it might be more appropriate for you if you are trying to construct convex hulls in an n-dimensional space. For me, hypervolume is a better solution, because I am doing something more akin to Hutchinson’s n-dimensional hypervolume concept from ecology and evolutionary biology.
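For completeness, here is a minimal sketch of the geometry-package route; the $vol and $ch$vol components are the hull volumes as I recall them, so double-check against the package documentation:
library(geometry)
one <- matrix(runif(5000, min = 0, max = 5), ncol = 5)
two <- matrix(runif(5000, min = 0, max = 4), ncol = 5)
vol1 <- convhulln(one, output.options = "FA")$vol   # volume of the convex hull of set 1
vol2 <- convhulln(two, output.options = "FA")$vol   # volume of the convex hull of set 2
vol_int <- intersectn(one, two)$ch$vol              # volume of the intersection hull
vol_int / (vol1 + vol2 - vol_int)                   # Jaccard-style overlap: intersection / union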

Get summary vectors of raster cell centers in R

I want to extract summary vectors that contain the coordinates for the centers of the different cells in a raster. The following code works but I believe involves an n-squared comparison operation. Is there a more efficient method? Not seeing anything obvious in {raster}'s guidance.
require(raster)
r = raster(volcano)
pts = rasterToPoints(r)
x_centroids = unique(pts[,1])
y_centroids = unique(pts[,2])
To get the centers of the raster cells, you should use the functions xFromCol, yFromRow and friends (see also their help pages).
In this case, you get exactly the same result as follows:
require(raster)
r <- raster(volcano)
x_centers <- xFromCol(r)
y_centers <- yFromRow(r)
Note that these functions actually don't do much more than combine the extent of the raster with its resolution. From those two pieces of information, they calculate the sequences of centers as follows:
xmin(r) + (seq_len(ncol(r)) - 0.5) * xres(r)
ymax(r) - (seq_len(nrow(r)) - 0.5) * yres(r)
But you are better off using the functions mentioned above, as they do a few more safety checks.
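If you want all the cell centers in one two-column matrix rather than separate x and y vectors, a small sketch using the same raster:
centers <- xyFromCell(r, seq_len(ncell(r)))  # matrix with columns x and y, one row per cell
head(centers)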

Shortest Euclidean distance between two groups in subset

I have a largish data frame (50000 points) representing points in 2D collected from biological images. Points are categorised as either red or green and are associated with each other in groups (in the example: cells A-D). A small test data set (MSR_test.csv) can be found here.
require(ggplot2)
cells <- read.csv("MSR_test.csv")
ggplot(cells, aes(X, Y, colour = channel, shape = cell)) +
  geom_point() +
  scale_colour_manual(values = c("green", "red"), name = "channel")
I am trying to find a reasonably straightforward way (perhaps involving plyr?) to find the Euclidean distance between each green point and its nearest red point within the same ‘cell group’. While I think I have worked out how to do this for an individual grouping (using rdist from the fields package), I can’t seem to work out how to apply the method to my whole data frame.
I don't see any reason to use plyr, but maybe I'm wrong.
The following code works on your example. I did not use any heavy function to compute the Euclidean distance, mainly because you may have to compute it on a lot of points.
green <- subset(cells, channel == "Green")
red   <- subset(cells, channel == "Red")

# squared Euclidean distance from point a to every row of M
fun_dist <- function(a, M) rowSums( (M - matrix(1, nrow(M), 1) %*% as.numeric(a))**2 )

# for one green point, return the nearest red point within the same cell
foo <- function(greenrow, matred) {
  subred <- subset(matred, cell == greenrow["cell"], select = c("X", "Y"))
  minred <- subred[ which.min(fun_dist(unlist(greenrow[c("X", "Y")]), subred)), ]
  return(minred)
}

do.call("rbind", apply(green, 1, foo, red))
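If you also want the distance itself for each green point, not just the coordinates of the nearest red point, here is a small follow-up sketch along the same lines; remember that fun_dist returns squared distances, hence the sqrt (the new column name is my own):
green$dist_to_nearest_red <- apply(green, 1, function(greenrow) {
  subred <- subset(red, cell == greenrow["cell"], select = c("X", "Y"))
  sqrt(min(fun_dist(unlist(greenrow[c("X", "Y")]), subred)))
})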

spatial filtering by proximity in R

I have occurrence points for a species, and I'd like to remove potential sampling bias (where some regions might have a much greater density of points than others). One way to do this would be to take a maximal subset of points that are all at least a certain distance X from each other. Essentially, I would prevent points from being too close to each other.
Are there any existing R functions to do this? I've searched through various spatial packages, but haven't found anything, and can't figure out exactly how to implement this myself.
An example occurrence point dataset can be downloaded here.
Thanks!
I've written a new version of this function that no longer really follows rMaternII.
The input can either be a SpatialPoints, SpatialPointsDataFrame or matrix object.
Seems to work well, but suggestions welcome!
filterByProximity <- function(xy, dist, mapUnits = F) {
  # xy can be either a SpatialPoints or SPDF object, or a matrix
  # dist is in km if mapUnits=F, in mapUnits otherwise
  if (!mapUnits) {
    d <- spDists(xy, longlat = T)
  }
  if (mapUnits) {
    d <- spDists(xy, longlat = F)
  }
  diag(d) <- NA
  close <- (d <= dist)
  diag(close) <- NA
  closePts <- which(close, arr.ind = T)
  discard <- matrix(nrow = 2, ncol = 2)
  if (nrow(closePts) > 0) {
    while (nrow(closePts) > 0) {
      if ((!paste(closePts[1, 1], closePts[1, 2], sep = '_') %in% paste(discard[, 1], discard[, 2], sep = '_')) &
          (!paste(closePts[1, 2], closePts[1, 1], sep = '_') %in% paste(discard[, 1], discard[, 2], sep = '_'))) {
        discard <- rbind(discard, closePts[1, ])
        # drop = FALSE keeps closePts a matrix even if only one row remains
        closePts <- closePts[-union(which(closePts[, 1] == closePts[1, 1]),
                                    which(closePts[, 2] == closePts[1, 1])), , drop = FALSE]
      }
    }
    # drop = FALSE keeps discard a matrix even if only one pair was flagged
    discard <- discard[complete.cases(discard), , drop = FALSE]
    return(xy[-discard[, 1], ])
  }
  if (nrow(closePts) == 0) {
    return(xy)
  }
}
Let's test it:
require(rgeos)
require(sp)
pts <- readWKT("MULTIPOINT ((3.5 2), (1 1), (2 2), (4.5 3), (4.5 4.5), (5 5), (1 5))")
pts2 <- filterByProximity(pts,dist=2, mapUnits=T)
plot(pts)
axis(1)
axis(2)
apply(as.data.frame(pts),1,function(x) plot(gBuffer(SpatialPoints(coords=matrix(c(x[1],x[2]),nrow=1)),width=2),add=T))
plot(pts2,add=T,col='blue',pch=20,cex=2)
There is also an R package called spThin that performs spatial thinning on point data. It was developed to reduce the effects of sampling bias for species distribution models, and it runs multiple iterations for optimization. The function is quite easy to use; the vignette can be found here, and there is a paper in Ecography with details about the technique.
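A minimal sketch of how a spThin call might look, assuming an occurrence data frame with LAT, LONG and SPECIES columns (the file name and column names are placeholders, and the argument names are from memory, so check the vignette):
library(spThin)
occ_df <- read.csv("occurrences.csv")  # hypothetical occurrence table
thinned <- thin(loc.data = occ_df,
                lat.col = "LAT", long.col = "LONG", spec.col = "SPECIES",
                thin.par = 10,    # minimum distance between retained points, in km
                reps = 100,       # number of thinning repetitions
                locs.thinned.list.return = TRUE,
                write.files = FALSE,
                write.log.file = FALSE)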
Following Josh O'Brien's advice, I looked at spatstat's rMaternI function, and came up with the following. It seems to work pretty well.
The distance is in map units. It would be nice to incorporate one of R's distance functions that always returns distances in meters, rather than input units, but I couldn't figure that out...
require(spatstat)
require(maptools)
occ <- readShapeSpatial('occurrence_example.shp')
filterByProximity <- function(occ, dist) {
  pts <- as.ppp.SpatialPoints(occ)
  d <- nndist(pts)
  z <- which(d > dist)
  return(occ[z, ])
}
occ2 <- filterByProximity(occ,dist=0.2)
plot(occ)
plot(occ2,add=T,col='blue',pch=20)
Rather than removing data points, you might consider spatial declustering. This involves giving points in clusters a lower weight than outlying points. The two simplest ways to do this involve a polygonal segmentation, like a Voronoi diagram, or some arbitrary grid. Both methods will weight points in each region according to the area of the region.
For example, if we take the points in your test (1,1), (2,2), (4.5,4.5), (5,5), (1,5) and apply a regular 2-by-2 mesh, where each cell is three units on a side, then the five points fall into three cells. The points (1,1) and (2,2) fall into the cell [0,3]x[0,3] and would each have a weight of 1 / (number of points in the cell times total number of occupied cells) = 1 / (2 * 3). The same goes for the points (4.5,4.5) and (5,5) in the cell (3,6]x(3,6]. The "outlier" (1,5) would have a weight of 1 / (1 * 3). The nice thing about this technique is that it is a quick way to generate a density-based weighting scheme.
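A quick base-R sketch of that grid weighting, using the five example points and the 2-by-2 mesh of 3-unit cells described above:
pts <- data.frame(x = c(1, 2, 4.5, 5, 1), y = c(1, 2, 4.5, 5, 5))
cell_x <- cut(pts$x, breaks = c(0, 3, 6), include.lowest = TRUE)
cell_y <- cut(pts$y, breaks = c(0, 3, 6), include.lowest = TRUE)
cell_id <- interaction(cell_x, cell_y, drop = TRUE)        # keep occupied cells only
n_in_cell <- ave(rep(1, nrow(pts)), cell_id, FUN = sum)    # points per occupied cell
pts$weight <- 1 / (n_in_cell * nlevels(cell_id))           # 1 / (n in cell * no. of occupied cells)
pts  # weights 1/6, 1/6, 1/6, 1/6, 1/3, summing to 1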
A polygonal segmentation involves drawing a polygon around each point and using the area of that polygon to calculate the weight. Generally, the polygons completely cover the entire region, and the weights are calculated as the inverse of the area of each polygon. A Voronoi diagram is usually used for this, but polygonal segmentations may be calculated using other techniques, or may be specified by hand.

What techniques exists in R to visualize a "distance matrix"?

I wish to present a distance matrix in an article I am writing, and I am looking for good visualization for it.
So far I have come across balloon plots (I used them here, but I don't think they will work in this case), heatmaps (here is a nice example, but they don't seem to allow presenting the numbers in the table; correct me if I am wrong. Maybe colouring half the table and putting numbers in the other half would be cool) and lastly correlation ellipse plots (here is some code and an example; using a shape is cool, but I am not sure how to apply it here).
There are also various clustering methods, but they aggregate the data, which is not what I want; I want to present all of the data.
Example data:
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv")
dist(nba[1:20, -1])
I am open for ideas.
You could also use force-directed graph drawing algorithms to visualize a distance matrix, e.g.
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv")
dist_m <- as.matrix(dist(nba[1:20, -1]))
dist_mi <- 1/dist_m # one over, as qgraph takes similarity matrices as input
library(qgraph)
jpeg('example_forcedraw.jpg', width=1000, height=1000, unit='px')
qgraph(dist_mi, layout='spring', vsize=3)
dev.off()
Tal, this is a quick way to overlay text on a heatmap. Note that this relies on image rather than heatmap, as the latter offsets the plot, making it more difficult to put the text in the correct position.
To be honest, I think this graph shows too much information, making it a bit difficult to read... you may want to write only specific values.
Also, another quick option is to save your graph as a PDF, import it into Inkscape (or similar software) and manually add the text where needed.
Hope this helps
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv")
dst <- dist(nba[1:20, -1])
dst <- data.matrix(dst)
dim <- ncol(dst)
image(1:dim, 1:dim, dst, axes = FALSE, xlab="", ylab="")
axis(1, 1:dim, nba[1:20,1], cex.axis = 0.5, las=3)
axis(2, 1:dim, nba[1:20,1], cex.axis = 0.5, las=1)
text(expand.grid(1:dim, 1:dim), sprintf("%0.1f", dst), cex=0.6)
A Voronoi Diagram (a plot of a Voronoi Decomposition) is one way to visually represent a Distance Matrix (DM).
They are also simple to create and plot using R--you can do both in a single line of R code.
If you're not familiar with this aspect of computational geometry, the relationship between the two (VD & DM) is straightforward, though a brief summary might be helpful.
Distance matrices--i.e., 2D matrices showing the distance between each point and every other point--are an intermediate output during kNN computation (i.e., k-nearest neighbor, a machine learning algorithm which predicts the value of a given data point based on the weighted average value of its 'k' closest neighbors, distance-wise, where 'k' is some integer, usually between 3 and 5).
kNN is conceptually very simple--each data point in your training set is in essence a 'position' in some n-dimensional space, so the next step is to calculate the distance between each point and every other point using some distance metric (e.g., Euclidean, Manhattan, etc.). While the training step--i.e., constructing the distance matrix--is straightforward, using it to predict the value of new data points is practically encumbered by the data retrieval--finding the closest 3 or 4 points from among several thousand or several million scattered in n-dimensional space.
Two data structures are commonly used to address that problem: kd-trees and Voronoi decompositions (aka "Dirichlet tessellation").
A Voronoi decomposition (VD) is uniquely determined by a distance matrix--i.e., there's a 1:1 map; so indeed it is a visual representation of the distance matrix, although again, that's not their purpose--their primary purpose is the efficient storage of the data used for kNN-based prediction.
Beyond that, whether it's a good idea to represent a distance matrix this way probably depends most of all on your audience. To most, the relationship between a VD and the antecedent distance matrix will not be intuitive. But that doesn't make it incorrect--if someone without any statistics training wanted to know if two populations had similar probability distributions and you showed them a Q-Q plot, they would probably think you haven't engaged their question. So for those who know what they are looking at, a VD is a compact, complete, and accurate representation of a DM.
So how do you make one?
A Voronoi decomp is constructed by selecting (usually at random) a subset of points from within the training set (this number varies by circumstances, but if we had 1,000,000 points, then 100 is a reasonable number for this subset). These 100 data points are the Voronoi centers ("VC").
The basic idea behind a Voronoi decomp is that rather than having to sift through the 1,000,000 data points to find the nearest neighbors, you only have to look at these 100; once you find the closest VC, your search for the actual nearest neighbors is restricted to just the points within that Voronoi cell. Next, for each data point in the training set, calculate the VC it is closest to. Finally, for each VC and its associated points, calculate the convex hull--conceptually, just the outer boundary formed by that VC's assigned points that are farthest from the VC. This convex hull around the Voronoi center forms a "Voronoi cell." A complete VD is the result of applying those three steps to each VC in your training set, which gives you a perfect tessellation of the surface.
To calculate a VD in R, use the tripack package. The key function is 'voronoi.mosaic', to which you just pass the x and y coordinates separately--the raw data, not the DM--and then you can pass the result to 'plot'.
library(tripack)
plot(voronoi.mosaic(runif(100), runif(100), duplicate="remove"))
You may want to consider looking at a 2-d projection of your matrix (Multi Dimensional Scaling). Here is a link to how to do it in R.
Otherwise, I think you are on the right track with heatmaps. You can add in your numbers without too much difficulty. For example, building off of Learn R:
library(ggplot2)
library(plyr)
library(arm)
library(reshape2)
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv")
nba$Name <- with(nba, reorder(Name, PTS))
nba.m <- melt(nba)
nba.m <- ddply(nba.m, .(variable), transform, rescale = rescale(value))
(p <- ggplot(nba.m, aes(variable, Name)) +
   geom_tile(aes(fill = rescale), colour = "white") +
   scale_fill_gradient(low = "white", high = "steelblue") +
   geom_text(aes(label = round(rescale, 1))))
A dendrogram based on a hierarchical cluster analysis can be useful:
http://www.statmethods.net/advstats/cluster.html
A 2-D or 3-D multidimensional scaling analysis in R:
http://www.statmethods.net/advstats/mds.html
If you want to go into 3+ dimensions, you might want to explore ggobi / rggobi:
http://www.ggobi.org/rggobi/
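A minimal base-R sketch of both of those ideas on the example data (hclust for the dendrogram, cmdscale for a 2-D classical MDS):
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv")
d <- dist(nba[1:20, -1])
plot(hclust(d), labels = nba[1:20, 1], main = "Hierarchical clustering")  # dendrogram
mds <- cmdscale(d, k = 2)                                                 # classical MDS in 2-D
plot(mds, type = "n", xlab = "Dimension 1", ylab = "Dimension 2")
text(mds, labels = nba[1:20, 1], cex = 0.7)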
In the book "Numerical Ecology" by Borcard et al. (2011) they use a function called coldiss().
You can find it here: http://ichthyology.usm.edu/courses/multivariate/coldiss.R
It color-codes the distances and even orders the records by dissimilarity.
Another good package would be the seriation package (a small sketch follows the reference below).
Reference:
Borcard, D., Gillet, F. & Legendre, P. (2011) Numerical Ecology with R. Springer.
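A minimal sketch of the seriation route, assuming its dissplot() function for reordered dissimilarity images (check the package documentation for the available seriation methods):
library(seriation)
nba <- read.csv("http://datasets.flowingdata.com/ppg2008.csv")
d <- dist(nba[1:20, -1])
dissplot(d)  # image of the distance matrix with rows/columns reordered by dissimilarity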
A solution using Multidimensional Scaling
data = read.csv("http://datasets.flowingdata.com/ppg2008.csv", sep = ",")
# build the squared Euclidean distance matrix from the Gram matrix
dst = tcrossprod(as.matrix(data[, -1]))
dst = matrix(rep(diag(dst), 50L), ncol = 50L, byrow = TRUE) +
      matrix(rep(diag(dst), 50L), ncol = 50L, byrow = FALSE) - 2 * dst
library(MASS)
mds = isoMDS(dst)
#remove {type = "n"} to see dots
plot(mds$points, type = "n", pch = 20, cex = 3, col = adjustcolor("black", alpha = 0.3), xlab = "X", ylab = "Y")
text(mds$points, labels = rownames(data), cex = 0.75)
