R function to count coordinates

I'm trying to do this via mapply or something similar, without explicit iteration. I have a spatial data frame in R and would like to subset all of the more complicated shapes, i.e. shapes with 10 or more coordinates. The shapefile is substantial (10k shapes), and the method that is fine for a small sample is very slow for a big one. The iterative method is:
Street$cc <- 0
i <- 1
while (i <= nrow(Street)) {
  Street$cc[i] <- length(coordinates(Street)[[i]][[1]]) / 2
  i <- i + 1
}
How can I get the same effect in a vectorised way? I have a problem with accessing levels a few steps down from the top (Shapefile/lines/Lines/coords).
I tried:
Street$cc <- lapply(slot(Street, "lines"),
                    function(x) lapply(slot(x, "Lines"),
                                       function(y) length(slot(y, "coords")) / 2))
(The division by 2 is because each coordinate is a pair of two values.)
But it still returns a list with the number of items per row, not an integer telling me how many items there are. How can I get the number of coordinates for each shape in a spatial data frame? Sorry, I do not have a reproducible example, but you can check on any spatial file; this is more about accessing a low-level property than a very specific issue.
EDIT:
I resolved the issue using the function tail().
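For reference, a vectorised sketch of the counting step (assuming each feature holds a single Line; sum over the inner Lines list otherwise):
# count coordinate pairs per feature without an explicit loop
Street$cc <- sapply(slot(Street, "lines"),
                    function(x) nrow(slot(slot(x, "Lines")[[1]], "coords")))
complicated <- Street[Street$cc >= 10, ]  # subset the more complicated shapes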

Here is a reproducible example. It is slightly different from yours, because you did not provide data, but the principle is the same. The 'principle' when drilling down into complex S4 structures is to pay attention to whether each level is a list or a slot, using [[ ]] to access lists and @ to access slots.
First let's get a spatial polygon. I'll use the US state boundaries:
library(maps)
library(maptools)  # provides map2SpatialPolygons
local.map = map(database = "state", fill = TRUE, plot = FALSE)
IDs = sapply(strsplit(local.map$names, ":"), function(x) x[1])
states = map2SpatialPolygons(map = local.map, IDs = IDs)
Now we can subset the polygons with fewer than 200 vertices like this:
# Note: the next line assumes that we are only interested in one Polygon per
# top-level polygon, i.e. that we have only single-part polygons.
# If you need to extend this to work with multipart polygons, it will be
# necessary to also loop over the values of the lower-level Polygons.
lengths = sapply(1:length(states), function(i)
  NROW(states@polygons[[i]]@Polygons[[1]]@coords))
simple.states = states[which(lengths < 200)]
plot(simple.states)
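If you do have multipart polygons, a sketch of the same idea that sums the vertices over every lower-level Polygon (same states object as above):
# total vertex count per feature, summing over all Polygon parts
lengths.all = sapply(states@polygons, function(p)
  sum(sapply(p@Polygons, function(pp) NROW(pp@coords))))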

Calculate percentage overlap of multiple polygons from a SpatialPolygonsDataFrame

There are a few related questions and answers out there, but none fit my data structure.
I have a SpatialPolygonsDataFrame which holds (multiple) polygons of multiple features (here features being animal foraging areas and fishing areas), with data available for multiple years.
Something like this:
feature 1: P1_2002; P1_2003; P1_2004;
feature 2: P2_2002; P2_2003; P2_2004;
feature 3: P3_2002; P3_2003; P3_2004.
What I want to do is calculate the overlap of polygons between features within each year, i.e.
% overlap of P1_2002 vs P2_2002; P1_2002 vs P3_2002; P2_2002 vs P3_2002;
% overlap of P1_2003 vs P2_2003; etc.
Ideally, the result would be produced in the form of a matrix that can be saved as a table.
I have the following code from another thread (Percentage of overlap between SpatialPolygonsDataFrame), which I think essentially does what I need. However, the issue is that I start from an SPDF, whereas the code was written to start from a list of shapefiles.
I tried to produce a list of SPDFs from my single SPDF that might match the list of shapefiles expected by the code; however, I get an error when I run the subsequent code and I cannot figure out what the issue is.
ss <- split(pol, pol$Unique_id)  # Unique_id defines "feature_year", e.g. "P1_2002"
class(ss)
# [1] "list"
n <- length(ss)
overlap <- matrix(0, nrow = n, ncol = n)
diag(overlap) <- 1
for (i in 1:n) {
  ss[[i]]$area <- area(ss[[i]])
}
# Error in area(ss[[i]]) : argument "b" is missing, with no default
If you could provide a hypothetical answer without the requirement of a reproducible example for me, that would be great, as I struggle to produce one :-( .
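A sketch of one way to get the pairwise overlap matrix using the sf package instead (st_as_sf, st_union, st_intersection, and st_area are sf functions; pol and Unique_id are the names from the question; assumes valid geometries):
library(sf)
pol.sf <- st_as_sf(pol)                # convert the SPDF to sf
ss <- split(pol.sf, pol.sf$Unique_id)  # one element per feature_year
n <- length(ss)
overlap <- matrix(0, nrow = n, ncol = n, dimnames = list(names(ss), names(ss)))
diag(overlap) <- 1
for (i in 1:n) {
  for (j in 1:n) {
    if (i == j) next
    inter <- st_intersection(st_union(ss[[i]]), st_union(ss[[j]]))
    if (length(inter) > 0) {
      # fraction of feature i's area covered by feature j
      overlap[i, j] <- as.numeric(st_area(inter)) / as.numeric(st_area(st_union(ss[[i]])))
    }
  }
}
Restricting i and j to same-year pairs (or splitting by year first) then gives the per-year comparisons, and the matrix can be saved as a table.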

Merge rasters of different extents, sum overlapping cell values in R

I am trying to merge rasterized polylines which have differing extents, in order to create a single surface indicating the number of times cells overlap.
Due to computational constraints (given the size of my study area), I am unable to use extend and then stack for each raster (total count = 67).
I have come across the merge function in R, and this allows me to merge rasters together into one surface. It doesn't, however, seem to like me inserting a function to compute the sum of overlapping cells.
Maybe I'm missing something obvious, or this is a limitation of the merge function. Any advice on how to generate this output, avoiding extend & stack would be greatly appreciated!
Code:
# read in specific route rasters
raster_list <- list.files('Data/Raw/tracks/rasterized/', full.names = TRUE)
for (i in 1:length(raster_list)) {
  # get file name
  file_name <- raster_list[i]
  # read raster in
  road_rast_i <- raster(file_name)
  if (i == 1) {
    combined_raster <- road_rast_i
  } else {
    # merge rasters and calc overlap
    combined_raster <- merge(combined_raster, road_rast_i,
                             fun = function(x, y) { sum(x@data@values, y@data@values) })
  }
}
[Image of current output]
[Image of a single route (example)]
[Image of fix]
Solved. There's a mosaic function, which allows the following:
combined_raster <- mosaic(combined_raster, road_rast_i, fun = sum)
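The same idea extends to all 67 rasters in one call, as a sketch (reusing raster_list from the question; passing fun as a named list element before do.call is a common idiom for mosaicking a list of rasters):
rasters <- lapply(raster_list, raster)  # read all route rasters
rasters$fun <- sum                      # sum cell values where rasters overlap
combined_raster <- do.call(mosaic, rasters)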

Include polygon ID when extracting raster values to polygons in R

I followed How do I extract raster values from polygon data then join into spatial data frame? (which was helpful) to create a matrix (then data frame) of mean raster values to a polygon. The problem now is that I want to know which polygon is which. My SpatialPolygonsDataFrame has an ID value in p$Block_ID. Is there a way to bring that over in the extract() code?
Alternatively, does the extract() function report output in the order it was input (that would make sense)? I.e. will the order of p$Block_ID be preserved in the output? I looked through the documentation and it was not clear one way or the other. If so, it is easy enough to add an ID column to the extract() output.
Here is my generalized code for reference. NOTE: not reproducible, because I don't think it really needs to be at this point. Here r is a raster and p is the polygons:
extract(r, p, small = TRUE, fun = mean, na.rm = TRUE, df = TRUE, nl = 1)
Thoughts?
The values are returned in order, as one would expect in R, and as stated in the manual (?extract): 'The order of the returned values corresponds to the order of object y.'
Thus you can do (reproducible example from ?extract)
e <- extract(r, p)
ee <- data.frame(ID=p$Block_ID, e)
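A sketch applying the same idea to the df = TRUE call from the question (extract's running ID column indexes polygons in input order, so it can be mapped back onto Block_ID):
e <- extract(r, p, small = TRUE, fun = mean, na.rm = TRUE, df = TRUE)
e$Block_ID <- p$Block_ID[e$ID]  # map the running ID back to the polygon IDs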
I could not get R. Hijmans' answer working for me. I found that this works:
e <- extract(r, p, df = TRUE)  # df = TRUE so the result is a data frame with an ID column
e$ID <- as.factor(e$ID)
levels(e$ID) <- levels(p$Block_ID)

gIntersection on very large spatial objects

long-time reader, first time poster.
I'm attempting to perform a gIntersection() on two very large SpatialPolygonsDataFrame objects. The first is all US counties; the second is a 240-row x 279-column grid, as a series of 66,960 polygons.
I successfully ran this by just using Pennsylvania and the piece of the grid that overlaps PA:
gIntersection(PA, grid, byid=TRUE)
I tried to run this overnight for the whole U.S. and it was still running this morning with a 10 GB(!) swap file on my hard drive and no evidence of progress. Am I doing something wrong, or is this normal behavior, and I should just do a state-by-state loop?
Thanks!
A little later than I hoped, but here's the function I ended up using for my task related to this. It could probably be adapted to other applications.
@mdsumner was right that a high-level operation to discard non-intersects sped this up greatly. Hopefully this is useful!
library("sp")
library("rgeos")
library("plyr")
ApportionPopulation <- function(AdminBounds, poly, Admindf) { # I originally wrote this function to total the population that lies within each polygon in a SpatialPolygon object. AdminBounds is a SpatialPolygon for whatever administrative area you're working with; poly is the SpatalPolygon you want to total population (or whatever variable of your choice) across, and Admindf is a dataframe that has data for each polygon inside the AdminBounds SpatialPolygon.
# the AdminBounds have the administrative ID code as feature IDS. I set that up using spChFID()
# start by trimming out areas that don't intersect
AdminBounds.sub <- gIntersects(AdminBounds, poly, byid=TRUE) # test for areas that don't intersect
AdminBounds.sub2 <- apply(AdminBounds.sub, 2, function(x) {sum(x)}) # test across all polygons in the SpatialPolygon whether it intersects or not
AdminBounds.sub3 <- AdminBounds[AdminBounds.sub2 > 0] # keep only the ones that actually intersect
# perform the intersection. This takes a while since it also calculates area and other things, which is why we trimmed out irrelevant areas first
int <- gIntersection(AdminBounds.sub3, poly, byid=TRUE) # intersect the polygon and your administrative boundaries
intdf <- data.frame(intname=names(int)) # make a data frame for the intersected SpatialPolygon, using names from the output list from int
intdf$intname <- as.character(intdf$intname) # convert the name to character
splitid <- strsplit(intdf$intname, " ", fixed=TRUE) # split the names
splitid <- do.call("rbind", splitid) # rbind those back together
colnames(splitid) <- c("adminID", "donutshpid") # now you have the administrative area ID and the polygonID as separate variables in a dataframe that correspond to the int SpatialPolygon.
intdf <- data.frame(intdf, splitid) # make that into a dataframe
intdf$adminID <- as.character(intdf$adminID) # convert to character
intdf$donutshpid <- as.character(intdf$donutshpid) # convert to character. In my application the shape I'm using is a series of half-circles
# now you have a dataframe corresponding to the intersected SpatialPolygon object
intdf$polyarea <- sapply(int#polygons, function(x) {x#area}) # get area from the polygon SP object and put it in the df
intdf2 <- join(intdf, Admindf, by="adminID") # join together the two dataframes by the administrative ID
intdf2$popinpoly <- intdf2$pop * (intdf2$polyarea / intdf2$admin_area) # calculate the proportion of the population in the intersected area that is within the bounds of the polygon (assuming the population is evenly distributed within the administrative area)
intpop <- ddply(intdf2, .(donutshpid), summarize, popinpoly=sum(popinpoly)) # sum population lying within each polygon
# maybe do other final processing to get the output in the form you want
return(intpop) # done!
}
I found the sf package is superior for this:
out <- st_intersection(grid, polygons)
gIntersection was locking up my computer for hours trying to run and, as a result, requires trimming or cycling through individual polygons; st_intersection from the sf package runs my data in seconds.
st_intersection also automatically merges the dataframes of both inputs.
Thanks to Grant Williamson at University of Tasmania for the vignette: https://atriplex.info/blog/index.php/2017/05/24/polygon-intersection-and-summary-with-sf/
You could probably get your answer faster using rasterize in the raster package, with your grid as a raster. It has an argument for finding the fraction of each cell covered by the polygons.
?rasterize
getCover: logical. If 'TRUE', the fraction of each grid cell that is
    covered by the polygons is returned (and the values of 'field',
    'fun', 'mask', and 'update' are ignored). The fraction covered is
    estimated by dividing each cell into 100 subcells and determining
    presence/absence of the polygon in the center of each subcell.
It doesn't look like you get to control the number of subcells, though that probably wouldn't be hard to open up.
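As a sketch of that approach under the question's setup (the 240 x 279 grid becomes the raster template; counties is a hypothetical name standing in for the US county polygons):
library(raster)
template <- raster(extent(counties), nrows = 240, ncols = 279)
# fraction of each cell covered by the polygons (a percentage in older raster versions)
cover <- rasterize(counties, template, getCover = TRUE)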

performing a calculation with a `paste`d vector reference

So I have some lidar data that I want to calculate some metrics for (I'll attach a link to the data in a comment).
I also have ground plots that I have extracted the lidar points around, so that I have a couple hundred points per plot (19 plots). Each point has X, Y, Z, height above ground, and the associated plot.
I need to calculate a bunch of metrics on the plot level, so I created plotsgrouped with split(plotpts, plotpts$AssocPlot).
So now I have a data frame with a "page" for each plot, so I can calculate all my metrics by the "plot page". This works just dandy for individual plots, but I want to automate it. (Yes, I know there are only 19 plots, but it's the principle of it, darn it! :-P)
So far, I've got a for loop going that calculates the metrics and puts the results in a data frame called Results. I pulled the names of the groups into a list called groups as well.
for (i in 1:length(groups)) {
  Results$Plot[i] <- groups[i]
  Results$Mean[i] <- mean(plotsgrouped$PLT01$Z)
  Results$Std.Dev.[i] <- sd(plotsgrouped$PLT01$Z)
  Results$Max[i] <- max(plotsgrouped$PLT01$Z)
  Results$`75%Avg.`[i] <- mean(plotsgrouped$PLT01$Z[plotsgrouped$PLT01$Z <= quantile(plotsgrouped$PLT01$Z, .75)])
  Results$`50%Avg.`[i] <- mean(plotsgrouped$PLT01$Z[plotsgrouped$PLT01$Z <= quantile(plotsgrouped$PLT01$Z, .50)])
  ...
and so on.
The problem arises when I try to do something like:
Results$mean[i] <- mean(paste("plotsgrouped", groups[i], "Z", sep="$"))
mean() doesn't recognize the paste as a reference to the vector plotsgrouped$PLT27$Z, and instead fails. I've deduced that it's because it sees the quotes and thinks, "Oh, you're just some text, I can't get the mean of you." or something to that effect.
Btw, groups is a list of the 19 plot names: PLT01-PLT27 (non-consecutive sometimes) and FTWR, so I can't simply put a sequence for the numeric part of the name.
Anyone have an easier way to iterate across my test plots and get arbitrary metrics?
I feel like I have all the right pieces, but just don't know how they go together to give me what I want.
Also, if anyone can come up with a better title for the question, feel free to post it or change it or whatever.
Try with:
for (i in seq_along(groups)) {
  Results$Plot[i] <- groups[i] # character names of the groups
  tempZ <- plotsgrouped[[groups[i]]][["Z"]]
  Results$Mean[i] <- mean(tempZ)
  Results$Std.Dev.[i] <- sd(tempZ)
  Results$Max[i] <- max(tempZ)
  Results$`75%Avg.`[i] <- mean(tempZ[tempZ <= quantile(tempZ, .75)])
  Results$`50%Avg.`[i] <- mean(tempZ[tempZ <= quantile(tempZ, .50)])
}
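An alternative sketch without the explicit loop, building the table with sapply over the group names (same plotsgrouped and groups objects as above; the quantile columns are omitted for brevity):
Results <- data.frame(
  Plot     = unlist(groups),
  Mean     = sapply(groups, function(g) mean(plotsgrouped[[g]]$Z)),
  Std.Dev. = sapply(groups, function(g) sd(plotsgrouped[[g]]$Z)),
  Max      = sapply(groups, function(g) max(plotsgrouped[[g]]$Z)))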
