I have one table containing 500k+ rows with coordinates x, y grouped by shapeid (289 ids in total), each group forming a polygon.
shapeid x        y
1       679400.3 6600354
1       679367.9 6600348
1       679313.3 6600340
1       679259.5 6600331
1       679087.5 6600201
0       661116.3 6606615
0       661171.5 6606604
0       661182.7 6606605
0       661198.9 6606606
0       661205.9 6606605
...     ...      ...
I want to find the coordinates which intersect or lie closest to each other, in essence finding the physical neighbours of each shapeid.
The results should look something like:
shapeid shapeid_neighbour1 shapeid_neighbour2
So I tried using sp and rgeos like so:
library(sp)
library(rgeos)
mydata <- read.delim('d:/temp/testfile.txt', header=T, sep=",")
sp.mydata <- mydata
coordinates(sp.mydata) <- ~x+y
When I run class(), everything looks fine:
class(sp.mydata)
[1] "SpatialPointsDataFrame"
attr(,"package")
[1] "sp"
I now try calculating the distance by each point:
d <- gDistance(sp.mydata, byid=T)
RStudio encounters a fatal error. Any ideas? My plan is then to use:
min.d <- apply(d, 1, function(x) order(x, decreasing=F)[2])
To find the second shortest distance, i.e. the closest other point. But maybe this isn't the best approach to do what I want, which is finding the physical neighbours of each shapeid?
Assuming that each shapeid of your dataframe identifies the vertices of a polygon, you first need to create a SpatialPolygons object from the coordinates and then apply the function gDistance to get the distance between any pair of polygons (assuming that is what you are looking for). In order to create a SpatialPolygons you need a Polygons object, and in turn a Polygon object. You can find the details in the help page of the sp package under Polygon.
You might soon run into a problem: the coordinates of each polygon must close, i.e. the last vertex must be the same as the first for each shapeid. As far as I can see from your data, that doesn't seem to be the case, so you should "manually" add a row for each subset of your data.
You can try this (assuming that df is your starting dataframe):
require(rgeos)
#split the dataframe for each shapeid and coerce to matrix
coordlist<-lapply(split(df[,2:3],df$shapeid),as.matrix)
#apply the following command only if the polygons don't close
#coordlist<-lapply(coordlist, function(x) rbind(x,x[1,]))
#create a SpatialPolygons for each shapeid
SPList<-lapply(coordlist,function(x) SpatialPolygons(list(Polygons(list(Polygon(x)),1))))
#initialize a matrix of distances
distances<-matrix(0,ncol=length(SPList),nrow=length(SPList))
#calculate the distances
for (i in 1:(length(SPList)-1))
  for (j in (i+1):length(SPList))
    distances[i,j]<-gDistance(SPList[[i]],SPList[[j]])
This may require some time, since you are calculating 289*288/2 = 41,616 polygon distances. Eventually, you'll obtain a matrix of distances (with only the upper triangle filled, since the loop skips the symmetric pairs).
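To get from that matrix to the neighbour table sketched in the question, one option (my addition, not part of the original answer; it assumes SPList keeps the shapeid names assigned by split) is:
#symmetrize the matrix and ignore self-distances
distances<-distances+t(distances)
diag(distances)<-NA
#find the index of the nearest polygon for each shapeid
nearest<-apply(distances,1,which.min)
neighbours<-data.frame(shapeid=names(SPList),shapeid_neighbour1=names(SPList)[nearest])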
I want to find the polygons in a simple features data frame that are nearest to a set of points in another simple features data frame, using the sf package in R. I've been using st_is_within_distance in st_join statements, but this returns everything within a given distance, not simply the closest features.
Previously I used 'gDistance' from the 'rgeos' package with 'sp' features like this:
m = gDistance(a, b, byid = TRUE)
row = apply(m, 2, function(x) which(x == min(x)))
labels = unlist(b@data[row, ]$NAME)
a$NAME <- labels
I want to translate this approach of finding the nearest features for a set of points from rgeos and sp to sf. Any advice or suggestions greatly appreciated.
It looks like the solution to my question was already posted -- https://gis.stackexchange.com/questions/243994/how-to-calculate-distance-from-point-to-linestring-in-r-using-sf-library-and-g -- this approach gets just what I need given an sf point feature 'a' and sf polygon feature 'b':
closest <- list()
for (i in seq_len(nrow(a))) {
  closest[[i]] <- b[which.min(st_distance(b, a[i, ])), ]
}
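Note that newer versions of sf also provide st_nearest_feature(), which avoids the loop entirely. A minimal sketch, assuming the same a and b as above:
nearest <- st_nearest_feature(a, b) # index of the closest b feature for each row of a
a$NAME <- b$NAME[nearest]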
I'm trying to get this done via mapply or something similar, without iterating. I have a spatial data frame in R and would like to subset all the more complicated shapes, i.e. shapes with 10 or more coordinates. The shapefile is substantial (10k shapes) and the method that is fine for a small sample is very slow for a big one. The iterative method is:
Street$cc <- 0
i <- 1
while (i <= nrow(Street)) {
  Street$cc[i] <- length(coordinates(Street)[[i]][[1]])/2
  i <- i+1
}
How can I get the same effect in a vectorized way? I have a problem with accessing levels a few steps down from the top (Shapefile/lines/Lines/coords).
I tried:
Street$cc <- lapply(slot(Street, "lines"),
function(x) lapply(slot(x, "Lines"),
function(y) length(slot(y, "coords"))/2))
(division by 2 as each coordinate is a pair of two values)
but it still returns a list with the number of items per row, not an integer telling me how many items are there. How can I get the number of coordinates for each shape in a spatial data frame? Sorry, I do not have a reproducible example, but you can check on any spatial file; it is more about accessing a low-level property than a very specific issue.
EDIT:
I resolved the issue using the function tail().
Here is a reproducible example. It is slightly different to yours, because you did not provide data, but the principle is the same. The 'principle' when drilling down into complex S4 structures is to pay attention to whether each level is a list or a slot, using [[ ]] to access lists and @ to access slots.
First let's get a spatial polygon. I'll use the US state boundaries:
library(maps)
library(maptools) # map2SpatialPolygons lives here
local.map = map(database = "state", fill = TRUE, plot = FALSE)
IDs = sapply(strsplit(local.map$names, ":"), function(x) x[1])
states = map2SpatialPolygons(map = local.map, ID = IDs)
Now we can subset the polygons with fewer than 200 vertices like this:
# Note: next line assumes that only interested in one Polygon per top level polygon.
# I.e. assumes that we have only single part polygons
# If you need to extend this to work with multipart polygons, it will be
# necessary to also loop over values of lower level Polygons
lengths = sapply(1:length(states), function(i)
    NROW(states@polygons[[i]]@Polygons[[1]]@coords))
simple.states = states[which(lengths < 200)]
plot(simple.states)
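For the multipart case flagged in the comments above, a minimal sketch (my addition, using the same states object) is to sum the vertices over all lower-level Polygon parts:
lengths = sapply(states@polygons, function(p)
    sum(sapply(p@Polygons, function(pp) NROW(pp@coords))))
simple.states = states[which(lengths < 200)]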
I followed How do I extract raster values from polygon data then join into spatial data frame? (which was helpful) to create a matrix (then data frame) of mean raster values to a polygon. The problem now is that I want to know which polygon is which. My SpatialPolygonsDataFrame has an ID value in p$Block_ID. Is there a way to bring that over in the extract() code?
Alternatively, does the extract() function report output in the order it was input (that would make sense)? i.e. the order of p$Block_ID will be preserved in the output? I looked through the documentation and it was not clear one way or the other. If so it is easy enough to add an ID column to the extract() output.
Here is my generalized code for reference. NOTE: not reproducible, because I don't think it really needs to be at this point. Here r is a raster and p is the polygons:
extract(r, p, small = TRUE, fun = mean, na.rm = TRUE, df = TRUE, nl = 1)
Thoughts?
The values are returned in order, as one would expect in R, and as stated in the manual (?extract): "The order of the returned values corresponds to the order of object y."
Thus you can do (reproducible example from ?extract)
e <- extract(r, p)
ee <- data.frame(ID=p$Block_ID, e)
I could not get R. Hijmans' answer working for me. I found that this works:
e = extract(r, p, df = TRUE) # df = TRUE so that e is a data frame with an ID column
e$ID = as.factor(e$ID)
levels(e$ID) = levels(p$Block_ID)
I have a dataset of XY points that looks like this
x<-c(2,4,6,3,7,9,1)
y<-c(6,4,8,2,9,6,1)
id<-c("a","b","c","d","e","f","g")
dataset<-data.frame(cbind(x,y,id))
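# note: cbind() coerces everything to character here, so x, y and id all become factors in the data frame (the answer below converts them back to numeric)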
I would like to connect all combinations of points with spatial lines, with each line named for the pair of points it connects.
In the "attributes table" that results from the output, the names of the spatial lines might look like this:
a_b
a_c
a_d
a_e
a_f
a_g
b_a
b_c
b_d
b_e
b_f
b_g
c_a
etc.
I'm speculating a bit here as to what exactly you wanted, but I think you want to visualize the connections from any point to the others. If that's the case, then this might work.
But first, some assumptions:
Your x and y coordinates are starting points. Consequently, id becomes id.origin.
All other points will need to become "destinations", and their coordinates will become x_destination and so on.
<disclaimer>There should be a better, more elegant way to do this. I'd appreciate it if someone more experienced could jump in and show me any of the *ply ways to do it.</disclaimer>
Replicate the dataframe to cover all possible combinations:
dataset<-do.call(rbind, replicate(7, dataset, simplify=FALSE))
Now, create a matrix with all the same destination points, mixed:
nm=matrix(ncol=3)
for (i in 1:7){
nm<-rbind(nm,do.call(rbind,replicate(7,as.matrix(dataset[i,]),simplify=FALSE)))
}
nm<-nm[-1,]
Rename the columns of the matrix so they make sense, and bind the existing data frame with the new matrix:
colnames(nm)<-c("x2","y2","id.dest")
newds<-cbind(dataset,as.data.frame(nm))
Remove duplicated trajectories:
newds<-newds[-which(newds$id==newds$id.dest),]
and plot the result using geom_segment:
library(ggplot2)
p<-ggplot(newds,aes(x=x,y=y))+geom_segment(aes(xend=x2,yend=y2))
There is a way to name the segments, but from observing the plot I wouldn't suggest doing it. Instead you might consider naming the points using geom_text (other options are available, see ?annotate for one).
p<-p + geom_text(aes(x=1.8,y=6.1,label="a"))
That will produce a plot like the one here:
The whole solution looks like this:
plot(dataset$x,dataset$y)
Replicate the dataframe to cover all possible combinations:
dataset<-do.call(rbind, replicate(7, dataset, simplify=FALSE))
Now, create a matrix with all the same destination points, mixed:
nm=matrix(ncol=3)
for (i in 1:7){
nm<-rbind(nm,do.call(rbind,replicate(7,as.matrix(dataset[i,]),simplify=FALSE)))
}
nm<-nm[-1,]
Rename the columns of the matrix so they make sense, and bind the existing data frame with the new matrix:
colnames(nm)<-c("x2","y2","id.dest")
newds<-cbind(dataset,as.data.frame(nm))
Remove duplicated trajectories:
newds1<-newds[-which(newds$id==newds$id.dest),]
library(ggplot2)
Converting origin and destination x & y from factor to numeric:
newds1$x<-as.numeric(as.character(newds1$x)) #converting from factor to numeric
newds1$y<-as.numeric(as.character(newds1$y))
newds1$x2<-as.numeric(as.character(newds1$x2))
newds1$y2<-as.numeric(as.character(newds1$y2))
Plotting the destination points... same as the origin points:
plot(newds1$x, newds1$y)
plot(newds1$x2, newds1$y2, col="red")
Now use code from this answer:
Convert Begin and End Coordinates into Spatial Lines in R
Raw list to store Lines objects:
l <- vector("list", nrow(newds1)) #
This l is now an empty vector w/ number of rows defined by length (nrow) of newds1
Splitting origin and destination coordinates so I can run this script:
origins<-data.frame(cbind(newds1$x, newds1$y))
destinations<-data.frame(cbind(newds1$x2, newds1$y2))
library(sp)
for (i in seq_along(l)) {
l[[i]] <- Lines(list(Line(rbind(origins[i, ], destinations[i,]))), as.character(i))
}
l.spatial<-SpatialLines(l)
plot(l.spatial, add=T)
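The loop above names each line with a bare counter. To get names like a_b, as asked for in the question, a sketch (my addition, assuming the newds1, origins and destinations objects built above) is to use the origin/destination pair as the Lines ID and attach it as an attribute table:
ids <- paste(newds1$id, newds1$id.dest, sep="_") # e.g. "a_b"
for (i in seq_along(l)) {
  l[[i]] <- Lines(list(Line(rbind(origins[i, ], destinations[i, ]))), ids[i])
}
l.named <- SpatialLinesDataFrame(SpatialLines(l), data=data.frame(name=ids, row.names=ids))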
Long-time reader, first-time poster.
I'm attempting to perform a gIntersection() on two very large SpatialPolygonsDataFrame objects. The first is all US counties; the second is a 240-row by 279-column grid, as a series of 66,960 polygons.
I successfully ran this by just using Pennsylvania and the piece of the grid that overlaps PA:
gIntersection(PA, grid, byid=TRUE)
I tried to run this overnight for the whole U.S. and it was still running this morning with a 10 GB(!) swap file on my hard drive and no evidence of progress. Am I doing something wrong, or is this normal behavior, and I should just do a state-by-state loop?
Thanks!
A little later than I hoped, but here's the function I ended up using for my task related to this. It could probably be adapted to other applications.
@mdsumner was right that a high-level operation to discard non-intersects sped this up greatly. Hopefully this is useful!
library("sp")
library("rgeos")
library("plyr")
ApportionPopulation <- function(AdminBounds, poly, Admindf) {
# I originally wrote this function to total the population that lies within each polygon in a SpatialPolygons object.
# AdminBounds is a SpatialPolygons for whatever administrative area you're working with; poly is the SpatialPolygons you want to total population (or whatever variable of your choice) across; and Admindf is a dataframe that has data for each polygon inside the AdminBounds SpatialPolygons.
# The AdminBounds have the administrative ID code as feature IDs. I set that up using spChFID().
# start by trimming out areas that don't intersect
AdminBounds.sub <- gIntersects(AdminBounds, poly, byid=TRUE) # test for areas that don't intersect
AdminBounds.sub2 <- apply(AdminBounds.sub, 2, function(x) {sum(x)}) # test across all polygons in the SpatialPolygon whether it intersects or not
AdminBounds.sub3 <- AdminBounds[AdminBounds.sub2 > 0] # keep only the ones that actually intersect
# perform the intersection. This takes a while since it also calculates area and other things, which is why we trimmed out irrelevant areas first
int <- gIntersection(AdminBounds.sub3, poly, byid=TRUE) # intersect the polygon and your administrative boundaries
intdf <- data.frame(intname=names(int)) # make a data frame for the intersected SpatialPolygon, using names from the output list from int
intdf$intname <- as.character(intdf$intname) # convert the name to character
splitid <- strsplit(intdf$intname, " ", fixed=TRUE) # split the names
splitid <- do.call("rbind", splitid) # rbind those back together
colnames(splitid) <- c("adminID", "donutshpid") # now you have the administrative area ID and the polygonID as separate variables in a dataframe that correspond to the int SpatialPolygon.
intdf <- data.frame(intdf, splitid) # make that into a dataframe
intdf$adminID <- as.character(intdf$adminID) # convert to character
intdf$donutshpid <- as.character(intdf$donutshpid) # convert to character. In my application the shape I'm using is a series of half-circles
# now you have a dataframe corresponding to the intersected SpatialPolygon object
intdf$polyarea <- sapply(int@polygons, function(x) {x@area}) # get area from the polygon SP object and put it in the df
intdf2 <- join(intdf, Admindf, by="adminID") # join together the two dataframes by the administrative ID
intdf2$popinpoly <- intdf2$pop * (intdf2$polyarea / intdf2$admin_area) # calculate the proportion of the population in the intersected area that is within the bounds of the polygon (assuming the population is evenly distributed within the administrative area)
intpop <- ddply(intdf2, .(donutshpid), summarize, popinpoly=sum(popinpoly)) # sum population lying within each polygon
# maybe do other final processing to get the output in the form you want
return(intpop) # done!
}
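A hypothetical call, just to show the shape of the inputs (counties, grid and county.df are placeholder names; the dataframe would need adminID, pop and admin_area columns for the calculation above):
pop.by.poly <- ApportionPopulation(counties, grid, county.df)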
I found the sf package is superior for this:
out <- st_intersection(grid, polygons)
gIntersection was locking up my computer for hours trying to run, and as a result required trimming or cycling through individual polygons; st_intersection from the sf package runs my data in seconds.
st_intersection also automatically merges the dataframes of both inputs.
Thanks to Grant Williamson at University of Tasmania for the vignette: https://atriplex.info/blog/index.php/2017/05/24/polygon-intersection-and-summary-with-sf/
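If your layers are still sp objects, a minimal conversion sketch (with grid and polygons as in the snippet above) would be:
library(sf)
out <- st_intersection(st_as_sf(grid), st_as_sf(polygons))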
You could probably get your answer faster using rasterize in the raster package, with your grid as a raster. It has an argument for finding the amount of polygon overlap for each cell.
?rasterize
getCover: logical. If TRUE, the fraction of each grid cell that is
covered by the polygons is returned (and the values of
field, fun, mask, and update are ignored). The fraction
covered is estimated by dividing each cell into 100 subcells
and determining presence/absence of the polygon in the center
of each subcell.
It doesn't look like you get to control the number of subcells, though that probably wouldn't be hard to open up.
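A minimal sketch of that route (assuming p is the counties layer and the 240 x 279 grid dimensions from the question):
library(raster)
r <- raster(extent(p), nrows=240, ncols=279)
cover <- rasterize(p, r, getCover=TRUE) # fraction of each cell covered by the polygons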