Long-time reader, first-time poster.
I'm attempting to perform a gIntersection() on two very large SpatialPolygonsDataFrame objects. The first is all US counties; the second is a 240-row by 279-column grid, stored as a series of 66,960 polygons.
I successfully ran this by just using Pennsylvania and the piece of the grid that overlaps PA:
gIntersection(PA, grid, byid=TRUE)
I tried to run this overnight for the whole U.S., and it was still running this morning with a 10 GB(!) swap file on my hard drive and no evidence of progress. Am I doing something wrong, or is this normal behavior and I should just loop state by state?
Thanks!
A little later than I hoped, but here's the function I ended up using for my task related to this. It could probably be adapted to other applications.
@mdsumner was right that a high-level operation to discard non-intersecting features sped this up greatly. Hopefully this is useful!
library("sp")
library("rgeos")
library("plyr")
ApportionPopulation <- function(AdminBounds, poly, Admindf) {
  # I originally wrote this function to total the population that lies within
  # each polygon in a SpatialPolygons object. AdminBounds is a SpatialPolygons
  # for whatever administrative area you're working with; poly is the
  # SpatialPolygons you want to total population (or whatever variable of your
  # choice) across; and Admindf is a dataframe that has data for each polygon
  # inside the AdminBounds SpatialPolygons.
  # The AdminBounds have the administrative ID code as feature IDs; I set that
  # up using spChFID().

  # Start by trimming out areas that don't intersect
  AdminBounds.sub <- gIntersects(AdminBounds, poly, byid=TRUE) # logical matrix: which admin/grid pairs intersect
  AdminBounds.sub2 <- apply(AdminBounds.sub, 2, function(x) {sum(x)}) # for each admin polygon, count the grid polygons it touches
  AdminBounds.sub3 <- AdminBounds[AdminBounds.sub2 > 0] # keep only the ones that actually intersect

  # Perform the intersection. This takes a while since it also calculates area
  # and other things, which is why we trimmed out irrelevant areas first.
  int <- gIntersection(AdminBounds.sub3, poly, byid=TRUE) # intersect the grid and your administrative boundaries

  intdf <- data.frame(intname=names(int)) # make a data frame for the intersected SpatialPolygons, using the feature names from int
  intdf$intname <- as.character(intdf$intname) # convert the name to character
  splitid <- strsplit(intdf$intname, " ", fixed=TRUE) # split the names ("adminID donutshpid")
  splitid <- do.call("rbind", splitid) # rbind those back together
  colnames(splitid) <- c("adminID", "donutshpid") # now you have the administrative-area ID and the polygon ID as separate variables that correspond to the int SpatialPolygons
  intdf <- data.frame(intdf, splitid) # make that into a dataframe
  intdf$adminID <- as.character(intdf$adminID) # convert to character
  intdf$donutshpid <- as.character(intdf$donutshpid) # convert to character. In my application the shape I'm using is a series of half-circles

  # Now you have a dataframe corresponding to the intersected SpatialPolygons object
  intdf$polyarea <- sapply(int@polygons, function(x) {x@area}) # pull each piece's area from the polygons slot into the df
  intdf2 <- join(intdf, Admindf, by="adminID") # join the two dataframes by the administrative ID
  intdf2$popinpoly <- intdf2$pop * (intdf2$polyarea / intdf2$admin_area) # population in each piece, assuming the population is evenly distributed within the administrative area
  intpop <- ddply(intdf2, .(donutshpid), summarize, popinpoly=sum(popinpoly)) # sum population lying within each polygon
  # maybe do other final processing to get the output in the form you want
  return(intpop) # done!
}
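For reference, a hypothetical call (the object and column names here are invented for illustration, not from the original post): 'counties' would be the county SpatialPolygonsDataFrame with FIPS codes as feature IDs (set with spChFID), 'grid' the polygon grid, and 'county.df' a dataframe with columns adminID, pop and admin_area.
pop.by.cell <- ApportionPopulation(counties, grid, county.df)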
I found the sf package to be superior for this:
library(sf)
out <- st_intersection(grid, polygons)
gIntersection was locking up my computer for hours and, as a result, required trimming or cycling through individual polygons; st_intersection from the sf package runs my data in seconds.
st_intersection also automatically merges the dataframes of both inputs.
Thanks to Grant Williamson at University of Tasmania for the vignette: https://atriplex.info/blog/index.php/2017/05/24/polygon-intersection-and-summary-with-sf/
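If your data start out as sp objects like those in the question, here is a minimal sketch of the full workflow (object names are placeholders, not from the original post):
library(sf)
counties_sf <- st_as_sf(counties) # convert sp objects to sf
grid_sf <- st_as_sf(grid)
out <- st_intersection(grid_sf, counties_sf) # attributes of both inputs are merged in
out$area <- st_area(out) # area of each intersected piece, e.g. for area-weighted sums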
You could probably get your answer faster using rasterize in the raster package, with your grid as a raster. It has an argument for computing the fraction of each cell covered by the polygons.
?rasterize
getCover: logical. If ‘TRUE’, the fraction of each grid cell that is
covered by the polygons is returned (and the values of
‘field’, ‘fun’, ‘mask’, and ‘update’ are ignored). The fraction
covered is estimated by dividing each cell into 100 subcells
and determining presence/absence of the polygon in the center
of each subcell.
It doesn't look like you get to control the number of subcells, though that probably wouldn't be hard to open up.
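A rough, untested sketch of that approach ('counties' is a placeholder for your SpatialPolygons* object; the grid dimensions are the 240 x 279 from the question):
library(raster)
r <- raster(extent(counties), nrows=240, ncols=279)
cover <- rasterize(counties, r, getCover=TRUE) # per-cell fraction covered by the polygons (a percentage in older raster versions)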
There are a few related questions and answers out there, but none fit my data structure.
I have a SpatialPolygonsDataFrame which holds (multiple) polygons for multiple features (here, the features are animal foraging areas and fishing areas), with data available for multiple years. Something like this:
feature 1: P1_2002; P1_2003; P1_2004;
feature 2: P2_2002; P2_2003; P2_2004;
feature 3: P3_2002; P3_2003; P3_2004.
What I want to do is calculate the overlap of polygons between features within each year, i.e.:
% overlap of P1_2002 vs P2_2002; P1_2002 vs P3_2002; P2_2002 vs P3_2002;
% overlap of P1_2003 vs P2_2003; and so on.
Ideally, the result would be produced in the form of a matrix that can be saved as a table.
I have the following code from another thread (Percentage of overlap between SpatialPolygonsDataFrame), which I think essentially does what I need. However, the issue I have is that I start from a SPDF, whereas the code was written to start from a list of shapefiles.
I tried to produce a list of SPDF from my single SPDF that might match the list of shapefiles produced by the code; however, I get an error when I run the subsequent code and I cannot figure out what the issue is.
ss <- split(pol, pol$Unique_id) # Unique_id defines "feature_year"; in this example e.g. "P1_2002", etc.
class(ss)
[1] "list"
n <- length(ss)
overlap <- matrix(0, nrow=n, ncol=n)
diag(overlap) <- 1
for (i in 1:n) {
  ss[[i]]$area <- area(ss[[i]])
}
Error in area(ss[[i]]) : argument "b" is missing, with no default
If you could provide a hypothetical answer without requiring a reproducible example from me, that would be great, as I'm struggling to produce one :-(
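In case it helps, here is an untested sketch of one way to fill that matrix, assuming ss is the list built above. It uses rgeos::gArea for polygon area (the area() you called is evidently dispatching to a function with a second argument "b", so it is not the method you want for these objects) and gIntersection pairwise:
library(rgeos)
n <- length(ss)
overlap <- matrix(0, nrow=n, ncol=n, dimnames=list(names(ss), names(ss)))
diag(overlap) <- 1
for (i in 1:(n-1)) {
  for (j in (i+1):n) {
    int <- gIntersection(ss[[i]], ss[[j]]) # NULL when the pair doesn't overlap
    if (!is.null(int)) {
      overlap[i,j] <- gArea(int) / gArea(ss[[i]]) # fraction of i overlapped by j
      overlap[j,i] <- gArea(int) / gArea(ss[[j]]) # fraction of j overlapped by i
    }
  }
}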
Trying to get this done via mapply or something similar, without iterating: I have a spatial dataframe in R and would like to subset all the more complicated shapes, i.e. shapes with 10 or more coordinates. The shapefile is substantial (10k shapes) and the method that is fine for a small sample is very slow for a big one. The iterative method is
Street$cc <- 0
i <- 1
while (i <= nrow(Street)) {
  Street$cc[i] <- length(coordinates(Street)[[i]][[1]]) / 2
  i <- i + 1
}
How can I get the same effect in a vectorized way? My problem is accessing properties a few levels down from the top (Shapefile/lines/Lines/coords).
I tried:
Street$cc <- lapply(slot(Street, "lines"),
                    function(x) lapply(slot(x, "Lines"),
                                       function(y) length(slot(y, "coords"))/2))
(division by 2 because each coordinate is a pair of two values)
but it still returns a list with the number of items per row, not an integer telling me how many coordinates there are. How can I get the number of coordinates for each shape in a spatial dataframe? Sorry, I don't have a reproducible example, but you can check with any spatial file; it's more about accessing a low-level property than about a very specific issue.
EDIT: I resolved the issue using the function tail().
Here is a reproducible example. It is slightly different from yours, because you did not provide data, but the principle is the same. The 'principle' when drilling down into complex S4 structures is to pay attention to whether each level is a list or a slot, using [[ ]] to access lists and @ for slots.
First let's get a spatial polygon. I'll use the US state boundaries:
library(maps)
library(maptools) # provides map2SpatialPolygons()
local.map = map(database = "state", fill = TRUE, plot = FALSE)
IDs = sapply(strsplit(local.map$names, ":"), function(x) x[1])
states = map2SpatialPolygons(map = local.map, ID = IDs)
Now we can subset the polygons with fewer than 200 vertices like this:
# Note: the next lines assume that we are only interested in one Polygon per
# top-level polygon, i.e. that we have only single-part polygons.
# If you need to extend this to work with multipart polygons, you will also
# need to loop over the values of the lower-level Polygons.
lengths = sapply(1:length(states), function(i)
  NROW(states@polygons[[i]]@Polygons[[1]]@coords))
simple.states = states[which(lengths < 200)]
plot(simple.states)
I have a dataset of XY points that looks like this
x<-c(2,4,6,3,7,9,1)
y<-c(6,4,8,2,9,6,1)
id<-c("a","b","c","d","e","f","g")
dataset<-data.frame(cbind(x,y,id))
I would like to connect all combinations of points with spatial lines, with each line named for the pair of points it connects.
In the "attributes table" that results from the output, the names of the spatial lines might look like this:
a_b
a_c
a_d
a_e
a_f
a_g
b_a
b_c
b_d
b_e
b_f
b_g
c_a
etc.
I'm speculating a bit here as to what exactly you wanted, but I think you want to visualize the connections from any point to the others. If that's the case, then this might work.
But first, some assumptions:
Your x and y coordinates are starting points; consequently, id becomes id.origin.
All other points will need to become "destinations", and their own coordinates will become x_destination and so on.
<disclaimer> There should be a better, more elegant way to do this. I'd appreciate it if someone more experienced could jump in and show me any of the *ply ways to do it. </disclaimer>
Replicate the dataframe to cover all possible combinations:
dataset<-do.call(rbind, replicate(7, dataset, simplify=FALSE))
Now, create a matrix with all the same destination points, mixed:
nm <- matrix(ncol=3)
for (i in 1:7) {
  nm <- rbind(nm, do.call(rbind, replicate(7, as.matrix(dataset[i,]), simplify=FALSE)))
}
nm <- nm[-1,] # drop the initial row of NAs
Rename the columns of the matrix so they make sense, and bind the existing dataframe to the new matrix:
colnames(nm)<-c("x2","y2","id.dest")
newds<-cbind(dataset,as.data.frame(nm))
Remove duplicated trajectories:
newds<-newds[-which(newds$id==newds$id.dest),]
and plot the result using geom_segment:
library(ggplot2)
p<-ggplot(newds,aes(x=x,y=y))+geom_segment(aes(xend=x2,yend=y2))
There is a way to name the segments, but from observing the plot I wouldn't suggest doing it. Instead you might consider naming the points using geom_text (other options are available; see ?annotate for one).
p<-p + geom_text(aes(x=1.8,y=6.1,label="a"))
That will produce a plot of all the segments, with point "a" labeled (plot not shown here).
The whole solution looks like this:
plot(dataset$x,dataset$y)
Replicate the dataframe to cover all possible combinations:
dataset<-do.call(rbind, replicate(7, dataset, simplify=FALSE))
Now, create a matrix with all the same destination points, mixed:
nm <- matrix(ncol=3)
for (i in 1:7) {
  nm <- rbind(nm, do.call(rbind, replicate(7, as.matrix(dataset[i,]), simplify=FALSE)))
}
nm <- nm[-1,] # drop the initial row of NAs
Rename the columns of the matrix so they make sense, and bind the existing dataframe to the new matrix:
colnames(nm)<-c("x2","y2","id.dest")
newds<-cbind(dataset,as.data.frame(nm))
Remove duplicated trajectories:
newds1<-newds[-which(newds$id==newds$id.dest),]
library(ggplot2)
Converting destination x & y to numeric from factor
newds1$x2<-as.numeric(as.character(newds1$x2)) #converting from factor to numeric
newds1$y2<-as.numeric(as.character(newds1$y2))
Plotting the destination points, the same way as the origin points:
plot(newds1$x, newds1$y)
plot(newds1$x2, newds1$y2, col="red")
Now use code from this answer:
Convert Begin and End Coordinates into Spatial Lines in R
Raw list to store Lines objects:
l <- vector("list", nrow(newds1))
This l is now an empty list whose length equals the number of rows (nrow) of newds1.
Splitting origin and destination coordinates so I can run this script:
origins<-data.frame(cbind(newds1$x, newds1$y))
destinations<-data.frame(cbind(newds1$x2, newds1$y2))
library(sp)
for (i in seq_along(l)) {
  l[[i]] <- Lines(list(Line(rbind(origins[i, ], destinations[i,]))), as.character(i))
}
l.spatial<-SpatialLines(l)
plot(l.spatial, add=T)
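If you want the a_b style names from the question instead of a running integer, here is a small untested variation using the objects built above:
ids <- paste(newds1$id, newds1$id.dest, sep="_") # e.g. "a_b", "a_c", ...
for (i in seq_along(l)) {
  l[[i]] <- Lines(list(Line(rbind(origins[i, ], destinations[i,]))), ids[i])
}
l.spatial <- SpatialLines(l)
l.df <- SpatialLinesDataFrame(l.spatial, data.frame(name=ids, row.names=ids)) # the names end up in the attribute table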
I'm looking for a way to get the xy-coordinates of all the intersections within one SpatialLines object or SpatialLinesDataFrame. I have found the function gIntersection of rgeos, but that only looks at the intersection between two datasets. Since I am working with a dataset of over half a million lines, it would take too much time to make a separate file of every line and check whether any line intersects with another. In ArcMap there is the Intersect function, which is able to do it in a couple of seconds, and I was wondering whether there is also such a function in R. Thanks!
If you convert your SpatialLines object into a psp object from spatstat you can use the spatstat function selfcrossing.psp. However, I'm not sure how it will cope with half a million lines since the number of crossings potentially could be enormous. The code below generates a random segment pattern and finds the self crossings.
BEWARE that this code can potentially take up a lot of memory and kill R, so try progressively larger examples before processing half a million lines. The code below used quite a bit of memory on my 5-year-old laptop and took 5 seconds to run.
library(spatstat)
set.seed(42)
N <- 1e4
x <- psp(runif(N), runif(N), runif(N), runif(N), owin(), check=FALSE)
y <- selfcrossing.psp(x)
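For real data, maptools provides an as.psp coercion for sp line objects; a sketch, assuming sl is your SpatialLines object:
library(maptools)
library(spatstat)
sl.psp <- as.psp(sl) # the window defaults to the object's bounding box
crossings <- selfcrossing.psp(sl.psp) # a point pattern (ppp) of the intersection points
xy <- data.frame(x=crossings$x, y=crossings$y) # the xy-coordinates you were after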
I have one table containing 500k+ rows with coordinates x, y, grouped by shapeid (289 ids in total), each group forming a polygon.
shapeid x y
1 679400.3 6600354
1 679367.9 6600348
1 679313.3 6600340
1 679259.5 6600331
1 679087.5 6600201
0 661116.3 6606615
0 661171.5 6606604
0 661182.7 6606605
0 661198.9 6606606
0 661205.9 6606605
... ... ...
I want to find the coordinates which intersects or lies closest to each other, in essence finding the physical neighbours for each shapeid.
The results should look something like:
shapeid shapeid_neighbour1 shapeid_neighbour2
So I tried using sp and rgeos like so:
library(sp)
library(rgeos)
mydata <- read.delim('d:/temp/testfile.txt', header=T, sep=",")
sp.mydata <- mydata
coordinates(sp.mydata) <- ~x+y
When I run class, everything looks fine:
class(sp.mydata)
[1] "SpatialPointsDataFrame"
attr(,"package")
[1] "sp"
I now try calculating the distance between each pair of points:
d <- gDistance(sp.mydata, byid=T)
RStudio encounters a fatal error. Any ideas? My plan is then to use:
min.d <- apply(d, 1, function(x) order(x, decreasing=F)[2])
To find the second shortest distance, i.e. the closest point. But maybe this isn't the best approach to do what I want - finding the physical neighbours for each shapeid?
Assuming that each shapeid of your dataframe identifies the vertices of a polygon, you first need to create a SpatialPolygons object from the coordinates and then apply the function gDistance to get the distance between any pair of polygons (assuming that is what you are looking for). In order to create a SpatialPolygons you need a Polygons object, and in turn a Polygon object; you can find details in the help page of the sp package under Polygon.
You might soon run into a problem: the coordinates of each polygon must close, i.e. the last vertex must be the same as the first for each shapeid. As far as I can see from your data, that seems not to be the case. So you should "manually" add a row to each subset of your data.
You can try this (assuming that df is your starting dataframe):
require(rgeos)
#split the dataframe for each shapeid and coerce to matrix
coordlist<-lapply(split(df[,2:3],df$shapeid),as.matrix)
#apply the following command only if the polygons don't close
#coordlist<-lapply(coordlist, function(x) rbind(x,x[1,]))
#create a SpatialPolygons for each shapeid
SPList<-lapply(coordlist,function(x) SpatialPolygons(list(Polygons(list(Polygon(x)),1))))
#initialize a matrix of distances
distances<-matrix(0,ncol=length(SPList),nrow=length(SPList))
#calculate the distances
for (i in 1:(length(SPList)-1))
  for (j in (i+1):length(SPList))
    distances[i,j]<-gDistance(SPList[[i]],SPList[[j]])
This may take some time, since you are calculating 289*288/2 = 41,616 polygon distances. Eventually, you'll obtain a matrix of distances.
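From there, a short follow-up sketch to pull each shape's two nearest neighbours out of the matrix, in the output format you described (using the objects built above):
distances <- distances + t(distances) # only the upper triangle was filled; make it symmetric
diag(distances) <- NA # exclude self-distances
nearest <- t(apply(distances, 1, function(d) order(d)[1:2])) # indices of the two closest shapes
neighbours <- data.frame(shapeid=names(SPList),
                         shapeid_neighbour1=names(SPList)[nearest[,1]],
                         shapeid_neighbour2=names(SPList)[nearest[,2]])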