Extend (buffer) the boundary of a shapefile in R

I need to extend the boundary of a field (only the boundary) by x meters. I tried gBuffer from the rgeos R package, but the output of the transformation gives me only the outer boundary of the field; the polygons inside the field, along with their data, are lost.
How can I use gBuffer (or any other approach) to extend only the boundary of a spatial polygon object (shapefile) by 10 m while keeping everything else intact (interior polygons and their data)?
Code tried:
field <- raster::shapefile("test.shp")
class(field)
plot(field)
View(field@data)
field <- sp::spTransform(field,CRS("+init=epsg:32632"))
plot(field)
field10m <- rgeos::gBuffer(field , width = 10)
plot(field10m)
Test shapefile can be downloaded from here https://drive.google.com/file/d/1s4NAinDeBow95hxr6gELHHkhwiR3z6Z9/view?usp=sharing

I suggest you consider a workflow based on the {sf} package; it makes for a bit clearer code than sp and rgeos (it will use the same geometry engine, but hide the gritty parts under the hood).
The code preserves all data features (actually, only one - a column named Rx) of your shapefile.
Note that since the yellow element (Rx = 120) consists of multiple polygons, each of them is buffered separately, resulting in overlaid features. This is the expected outcome.
Should this be undesired, consider a dplyr::group_by(Rx) followed by dplyr::summarise() to dissolve the internal boundaries before applying sf::st_buffer() (a sketch follows the code below).
library(sf)
library(dplyr)
library(mapview) # needed only for the final overview
library(leafsync) # ditto
test_map <- sf::st_read("./Map/test.shp")
# find an appropriate projected metric CRS
crsuggest::suggest_crs(test_map, type = "projected")
result <- test_map %>%
  sf::st_transform(5683) %>% # transform to a metric CRS
  sf::st_buffer(10) # buffer by 10 meters
# a visual check / note how the polygons are overlaid
leafsync::latticeview(mapview::mapview(test_map),
                      mapview::mapview(result))
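A minimal sketch of the dissolve-first variant mentioned above (assuming the attribute column is named Rx, as in the test shapefile):
result_dissolved <- test_map %>%
  sf::st_transform(5683) %>%   # same metric CRS as above
  dplyr::group_by(Rx) %>%      # group features that share an Rx value
  dplyr::summarise() %>%       # summarise() unions the grouped geometries
  sf::st_buffer(10)            # buffer the dissolved features by 10 meters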


Identifying points located near to a polygon's boundary

I am trying to identify all points (postcodes in my case) that are located near to the coastline of the UK (i.e., a polygon). I am using R to process this.
I downloaded the geographical outline of United Kingdom from here as a shapefile. A list of all postcodes for the UK were accessed from the ONS here. Please note that the latter file is very large (211MB zipped).
To begin, I loaded in both files into R, and then convert them to the same coordinate reference system (OSGB1936; 27700). For the polygon of the UK, I convert this to lines that represent the boundary/coastline (note that while Northern Ireland shares a common boundary with Ireland, I will subset any postcodes erroneously matched as near the coastline by lat/long later). I then convert the points into spatial points.
# Load libraries
library(sf)
library(data.table)
library(dplyr) # provides the %>% pipe used below
# Load data
uk_shp <- read_sf("./GBR_adm/GBR_adm0.shp") # Load UK shapefile (ignore that the file name says GBR; it is the UK)
uk_shp <- st_transform(uk_shp, crs = 27700) # Convert to a coordinate reference system (CRS) that allows buffers in the correct units later (note: 4326 is the world lat/long CRS)
uk_coast <- st_cast(uk_shp, "MULTILINESTRING") # Convert polygon to a line (i.e., coastline)
# Load in postcodes
pcd <- fread("./ONSPD_FEB_2022_UK/Data/ONSPD_FEB_2022_UK.csv") # Load all postcodes for Great Britain - this is a very large file
pcd <- pcd[, c(1:3, 43:44)] # Drop unnecessary information/columns to save memory
# Convert to spatial points data frame
pcd_sp <- pcd %>% # For object of postcodes
  st_as_sf(coords = c("long", "lat")) %>% # Define as spatial object and identify which columns tell us the position of points
  st_set_crs(27700) # Set CRS
I originally thought the most efficient approach would be to define what a coastal region is (here, within 5 km of the coastline), create a buffer around the coastline to represent that, and then use a point-in-polygon operation to select all points within the buffer. However, the code below had not finished running overnight, which probably suggests it was the wrong approach, and I am unsure why it is taking so long.
uk_buf <- st_buffer(uk_coast, 5000) # Create 5km buffer
pcd_coastal <- st_intersection(uk_buf, pcd_sp) # Point-in-polygon (i.e., keep only the postcodes that are located in the buffer region)
So I changed my approach to calculate the straight-line distance of each point to the nearest coastline. However, the code below gives incorrect distances. For example, I select one postcode (AB12 4XP) that is located ~2.6 km from the coastline, yet the code returns ~82 km, which is very wrong. I had tried st_nearest_feature() but could not get it to work (it may do the job, but it was beyond my attempts).
test <- pcd_sp[pcd_sp$pcd == "AB124XP",] # Subset test postcode
dist <- st_distance(test, uk_coast, by_element = TRUE, which = "Euclidean") # Calculate distance
I am unsure how to proceed from here - I don't think it is the wrong CRS. It might be that the multilinestring conversion is causing problems. Does anyone have suggestions on what to do?
sf has an st_is_within_distance function that can test if points are within a distance of a line. My test data is 10,000 random points in the bounding box of the UK shape, and the UK shape in OSGB grid coordinates.
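A minimal sketch of how such test points might be generated (assumed, not shown in the original answer; uk_coast is the OSGB multilinestring built in the question):
library(sf)
set.seed(42)
# roughly 10,000 random points inside the bounding box of the UK shape
pts <- st_sf(geometry = st_sample(st_as_sfc(st_bbox(uk_coast)), 10000))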
> system.time({indist = st_is_within_distance(uk_coast, pts, dist=5000)})
   user  system elapsed
 30.907   0.003  30.928
But this isn't building a spatial index. The docs say that it does build a spatial index if the coordinates are "geographic" and the flag for using spherical geometry is set. I don't understand why it can't build one for Cartesian coordinates, but let's see how much faster it is...
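For reference, the spherical-geometry flag referred to here is presumably sf's s2 switch (enabled by default in sf >= 1.0.0):
sf::sf_use_s2(TRUE)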
Transform takes no time at all:
> ukLL = st_transform(uk_coast, 4326)
> ptsLL = st_transform(pts, 4326)
Then test...
> system.time({indistLL = st_is_within_distance(ukLL, ptsLL, dist=5000)})
   user  system elapsed
  1.405   0.000   1.404
Just over a second. Any difference between the two? Let's see:
> setdiff(indistLL[[1]], indist[[1]])
[1] 3123
> setdiff(indist[[1]], indistLL[[1]])
integer(0)
So point 3123 is in the set using lat-long, but not the set using OSGB. There's nothing in OSGB that isn't in the lat-long set.
Quick plot to show the selected points:
> plot(uk_coast$geometry)
> plot(pts$geometry[indistLL[[1]]], add=TRUE)
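As an aside, the st_nearest_feature() route mentioned in the question can also be made to work; a hedged sketch using the question's pcd_sp and uk_coast objects (assuming the point coordinates really are in the same CRS as the coastline):
# index of the nearest coastline feature for every point
nearest <- st_nearest_feature(pcd_sp, uk_coast)
# pairwise distance from each point to its matched nearest feature
dist_to_coast <- st_distance(pcd_sp, uk_coast[nearest, ], by_element = TRUE)
coastal_points <- pcd_sp[as.numeric(dist_to_coast) <= 5000, ]  # within 5 km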

Create Grid in R for kriging in gstat

lat long
7.16 124.21
8.6 123.35
8.43 124.28
8.15 125.08
Consider these coordinates, these coordinates correspond to weather stations that measure rainfall data.
The intro to the gstat package in R uses the meuse dataset. At some point in this tutorial, https://rpubs.com/nabilabd/118172, the author makes use of a "meuse.grid" in this line of code:
data("meuse.grid")
I do not have such a file and I do not know how to create it, can I create one using these coordinates? Or at least point me to material that discusses how to create a custom grid for a custom area (i.e not using administrative boundaries from GADM).
Probably wording this wrong, don't even know if this question makes sense to R savvy people. Still, would love to hear some direction, or at least tips. Thanks a lot!
Total noob at R and statistics.
EDIT: See the sample grid shown in the tutorial I posted; that's the thing I want to make.
EDIT 2: Would this method be viable? https://rstudio-pubs-static.s3.amazonaws.com/46259_d328295794034414944deea60552a942.html
I am going to share my approach to create a grid for kriging. There are probably more efficient or elegant ways to achieve the same task, but I hope this will be a start to facilitate some discussions.
The original poster was thinking about 1 km for every 10 pixels, but that is probably too much. I am going to create a grid with a cell size of 1 km * 1 km. In addition, the original poster did not specify an origin for the grid, so I will spend some time determining a good starting point. I also assume that the Spherical Mercator projected coordinate system is an appropriate choice; it is a common projection for Google Maps and OpenStreetMap.
1. Load Packages
I am going to use the following packages. sp, rgdal, and raster provide many useful functions for spatial analysis. leaflet and mapview are packages for quick exploratory visualization of spatial data.
# Load packages
library(sp)
library(rgdal)
library(raster)
library(leaflet)
library(mapview)
2. Exploratory Visualization of the station locations
I created an interactive map to inspect the location of the four stations. Because the original poster provided the latitude and longitude of these four stations, I can create a SpatialPointsDataFrame with Latitude/Longitude projection. Notice the EPSG code for Latitude/Longitude projection is 4326. To learn more about EPSG code, please see this tutorial (https://www.nceas.ucsb.edu/~frazier/RSpatialGuides/OverviewCoordinateReferenceSystems.pdf).
# Create a data frame showing the latitude/longitude
station <- data.frame(lat = c(7.16, 8.6, 8.43, 8.15),
                      long = c(124.21, 123.35, 124.28, 125.08),
                      station = 1:4)
# Convert to SpatialPointsDataFrame
coordinates(station) <- ~long + lat
# Set the projection. They were latitude and longitude, so use WGS84 long-lat projection
proj4string(station) <- CRS("+init=epsg:4326")
# View the station location using the mapview function
mapview(station)
The mapview function will create an interactive map. We can use this map to determine a suitable origin for the grid.
3. Determine the origin
After inspecting the map, I decided that the origin could be around longitude 123 and latitude 7. This origin will be on the lower left of the grid. Now I need to find the coordinate representing the same point under Spherical Mercator projection.
# Set the origin
ori <- SpatialPoints(cbind(123, 7), proj4string = CRS("+init=epsg:4326"))
# Convert the projection of ori
# Use EPSG: 3857 (Spherical Mercator)
ori_t <- spTransform(ori, CRSobj = CRS("+init=epsg:3857"))
I first created a SpatialPoints object based on the latitude and longitude of the origin. After that, I used spTransform to perform the projection transformation. The object ori_t is now the origin in the Spherical Mercator projection. Notice that the EPSG code for Spherical Mercator is 3857.
To see the value of coordinates, we can use the coordinates function as follows.
coordinates(ori_t)
coords.x1 coords.x2
[1,] 13692297 781182.2
4. Determine the extent of the grid
Now I need to decide the extent of the grid that can cover all the four points and the desired area for kriging, which depends on the cell size and the number of cells. The following code sets up the extent based on the information. I have decided that the cell size is 1 km * 1 km, but I need to experiment on what would be a good cell number for both x- and y-direction.
# The origin has been rounded to the nearest 100
x_ori <- round(coordinates(ori_t)[1, 1]/100) * 100
y_ori <- round(coordinates(ori_t)[1, 2]/100) * 100
# Define how many cells for x and y axis
x_cell <- 250
y_cell <- 200
# Define the resolution to be 1000 meters
cell_size <- 1000
# Create the extent
ext <- extent(x_ori, x_ori + (x_cell * cell_size), y_ori, y_ori + (y_cell * cell_size))
Based on the extent I created, I can create a raster layer with all values equal to 0. Then I can use the mapview function again to see whether the raster and the four stations match up well.
# Initialize a raster layer
ras <- raster(ext)
# Set the resolution to cell_size (1000 m)
res(ras) <- c(cell_size, cell_size)
ras[] <- 0
# Project the raster
projection(ras) <- CRS("+init=epsg:3857")
# Create interactive map
mapview(station) + mapview(ras)
I repeated this process several times. Finally I decided that the number of cells is 250 and 200 for x- and y-direction, respectively.
5. Create spatial grid
Now I have created a raster layer with proper extent. I can first save this raster as a GeoTiff for future use.
# Save the raster layer
writeRaster(ras, filename = "ras.tif", format="GTiff")
Finally, to use the kriging functions from the package gstat, I need to convert the raster to SpatialPixels.
# Convert to spatial pixel
st_grid <- rasterToPoints(ras, spatial = TRUE)
gridded(st_grid) <- TRUE
st_grid <- as(st_grid, "SpatialPixels")
st_grid is now a SpatialPixels object that can be used in kriging.
Determining a suitable grid is an iterative process. Throughout it, users can change the projection, origin, cell size, or cell number depending on the needs of their analysis.
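As a final, hedged sketch (not part of the original workflow above), st_grid could be passed to gstat's kriging functions along these lines, assuming a made-up rainfall column named rain on the four stations:
library(gstat)
# reproject the stations to the same CRS as the grid
station_t <- spTransform(station, CRS("+init=epsg:3857"))
station_t$rain <- c(120, 80, 95, 110)  # hypothetical rainfall values
# with only four stations the variogram fit is purely illustrative
vgm_emp <- variogram(rain ~ 1, station_t)
vgm_fit <- fit.variogram(vgm_emp, vgm("Sph"))
kriged <- krige(rain ~ 1, station_t, st_grid, model = vgm_fit)
spplot(kriged["var1.pred"])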
@yzw and @Edzer bring up good points for creating a regular rectangular grid, but sometimes, there is the need to create an irregular grid over a defined polygon, usually for kriging.
This is a sparsely documented topic. One good answer can be found here. I expand on it with code below:
Consider the built-in meuse dataset. meuse.grid is an irregularly shaped grid. How do we make a grid like meuse.grid for our own study area?
library(sp)
library(ggplot2) # needed for the plots below
data(meuse.grid)
ggplot(data = meuse.grid) + geom_point(aes(x, y))
Imagine an irregularly shaped SpatialPolygon or SpatialPolygonsDataFrame, called spdf. You first build a regular rectangular grid over it, then subset the points in that regular grid by the irregularly-shaped polygon.
# First, make a rectangular grid over your `SpatialPolygonsDataFrame`
grd <- makegrid(spdf, n = 100)
colnames(grd) <- c("x", "y")
# Next, convert the grid to `SpatialPoints` and subset these points by the polygon.
grd_pts <- SpatialPoints(
  coords = grd,
  proj4string = CRS(proj4string(spdf))
)
# subset all points in `grd_pts` that fall within `spdf`
grd_pts_in <- grd_pts[spdf, ]
# Then, visualize your clipped grid which can be used for kriging
ggplot(as.data.frame(coordinates(grd_pts_in))) +
  geom_point(aes(x, y))
If you have your study area as a polygon, imported as a SpatialPolygons, you could either use package raster to rasterize it, or use sp::spsample to sample it using sampling type regular.
If you don't have such a polygon, you can create points regularly spread over a rectangular long/lat area using expand.grid, using seq to generate a sequence of long and lat values.
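A small sketch of both suggestions (the polygon object name study_poly and the long/lat ranges are assumptions):
library(sp)
# 1) If a study-area polygon 'study_poly' exists, sample it regularly:
# grd_pts <- spsample(study_poly, n = 10000, type = "regular")
# 2) Otherwise, build a regular long/lat grid over a rectangle:
grd_df <- expand.grid(long = seq(123, 125.5, by = 0.01),
                      lat = seq(7, 9, by = 0.01))
coordinates(grd_df) <- ~long + lat
gridded(grd_df) <- TRUE
proj4string(grd_df) <- CRS("+init=epsg:4326")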

How to get count of non-NA raster cells within polygon

I've been running into all sorts of issues using ArcGIS ZonalStats and thought R could be a great way to do this. That said, I'm fairly new to R, but I have a coding background.
The situation is that I have several rasters and a polygon shape file with many features of different sizes (though all features are bigger than a raster cell and the polygon features are aligned to the raster).
I've figured out how to get the mean value for each polygon feature using the raster library with extract:
#load packages required
require(rgdal)
require(sp)
require(raster)
require(maptools)
# ---Set the working directory-------
datdir <- "/test_data/"
#Read in a ESRI grid of water depth
ras <- readGDAL("test_data/raster/pl_sm_rp1000/w001001.adf")
#convert it to a format recognizable by the raster package
ras <- raster(ras)
#read in polygon shape file
proxNA <- readShapePoly("test_data/proxy/PL_proxy_WD_NA_test")
#plot raster and shp
plot(ras)
plot(proxNA)
#calc mean depth per polygon feature
#unweighted - only assigns grid to district if centroid is in that district
proxNA@data$RP1000 <- extract(ras, proxNA, fun = mean, na.rm = TRUE, weights = FALSE)
#check results
head(proxNA)
#plot depth values
spplot(proxNA[,'RP1000'])
The issue I have is that I also need an area-based ratio between the area of each polygon and the non-NA cells within the same polygon. I know the cell size of the raster and I can get the area of each polygon, but the missing link is the count of all non-NA cells in each feature. I managed to get the cell numbers of all cells in each polygon with proxNA@data$Cnumb1000 <- cellFromPolygon(ras, proxNA), and I'm sure there is a way to get the actual value of each raster cell, which would then require a loop to count the non-NA cells, etc.
BUT, I'm sure there is a much better and quicker way to do that! If any of you has an idea or can point me in the right direction, I would be very grateful!
I do not have access to your files, but based on what you described, this should work:
library(raster)
# 'shapedir' and 'template_raster_dir' stand in for your own folders
mask_layer <- shapefile(paste0(shapedir, "AOI.shp"))
original_raster <- raster(paste0(template_raster_dir, "temp_raster_DecDeg250.tif"))
nonNA_raster <- !is.na(original_raster)       # TRUE (1) wherever the raster has data
masked_img <- mask(nonNA_raster, mask_layer)  # based on centroid location of cells
nonNA_count <- cellStats(masked_img, sum)     # total number of non-NA cells inside the AOI
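If a per-feature count (rather than a single total) is needed for the area ratio the question describes, a hedged sketch along these lines should work with the question's ras and proxNA objects (nonNA_ratio is a hypothetical column name):
library(rgeos)
vals <- extract(ras, proxNA)  # list of cell values for each polygon feature
proxNA@data$nonNA_count <- sapply(vals, function(x) sum(!is.na(x)))
poly_area <- gArea(proxNA, byid = TRUE)  # feature areas in squared map units
proxNA@data$nonNA_ratio <- (proxNA@data$nonNA_count * prod(res(ras))) / poly_area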

sp::over() for point in polygon analysis

I have a shapefile named "ind_adm" and a SpatialPointsDataFrame called "pnts". The "pnts" contains points generated at random, and some of the points overlap with the polygon. See picture below.
Now, I want to do a point-in-polygon analysis, i.e. I want to find out which points lie inside the gray polygon representing the boundary of India. For this I am using the over() function from the sp library.
pt.in.poly <- sp::over(ind_adm, pnts, fn = mean) #do the join
However, the output I am getting is
>pt.in.poly
values
0 6.019467
I should actually get the index of the points that are "in" the polygon.
Where am I going wrong?
Found this concise and intuitive syntax for over:
pnts[ind_adm,]
from this Intro document
You should not supply a function. You are aggregating the attribute values of your points over the geometry of the polygon (i.e. the number returned is the mean of the attribute of the points that fall within the polygon). In addition, you have your x and y the wrong way round for what you want to do. It should be:
over(pnts, ind_adm, fn = NULL)
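Called this way round, over() returns one row of ind_adm's attributes per point (NA for points outside), so the indices the question asks for can be recovered like this (a sketch, assuming ind_adm has at least one attribute column):
res <- sp::over(pnts, ind_adm)
inside_idx <- which(!is.na(res[, 1]))  # row positions of points inside the polygon
pnts_inside <- pnts[inside_idx, ]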
You can use point.in.poly from the spatialEco package. It "intersects point and polygon feature classes and adds polygon attributes to points".
library(spatialEco)
new_shape <- point.in.poly(pnts, ind_adm)
You could also use the st_intersection function from the sf package:
Load the library
library(sf)
Create a simple feature geometry (polygon) from your polygon
ind_adm <- st_as_sf(ind_adm)
Create a simple feature geometry (point) from your points of interest
(24047 is the EPSG code for India)
pnts <- st_set_crs(st_as_sf(pnts), 24047)
Keep only the points inside the polygon
kept_points <- st_intersection(ind_adm, pnts)

Intersecting Points and Polygons in R

I am working with shapefiles in R, one is point.shp the other is a polygon.shp.
Now, I would like to intersect the points with the polygon, meaning that all the values from the polygon should be attached to the table of the point.shp.
I tried overlay() and spRbind in package sp, but nothing did what I expected them to do.
Could anyone give me a hint?
With the new sf package this is now fast and easy:
library(sf)
out <- st_intersection(points, poly)
Additional options
If you do not want all fields from the polygon added to the point feature, just call dplyr::select() on the polygon feature before:
library(magrittr)
library(dplyr)
library(sf)
poly %>%
  select(column-name1, column-name2, etc.) -> poly
out <- st_intersection(points, poly)
If you encounter issues, make sure that your polygon is valid:
st_is_valid(poly)
If you see some FALSE outputs here, try to make it valid:
poly <- st_make_valid(poly)
Note that these 'valid' functions depend on a sf installation compiled with liblwgeom.
If you do overlay(pts, polys) where pts is a SpatialPointsDataFrame object and polys is a SpatialPolygonsDataFrame object then you get back a vector the same length as the points giving the row of the polygons data frame. So all you then need to do to combine the polygon data onto the points data frame is:
o = overlay(pts, polys)
pts@data = cbind(pts@data, polys[o,])
HOWEVER! If any of your points fall outside all your polygons, then overlay returns an NA, which will cause polys[o,] to fail, so either make sure all your points are inside polygons or you'll have to think of another way to assign values for points outside the polygon...
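A hedged sketch of one way to guard against that case, following the same pattern (overlay() belongs to older sp versions; over() is its modern replacement):
o <- overlay(pts, polys)
inside <- !is.na(o)  # points that fall inside some polygon
pts_in <- pts[inside, ]
pts_in@data <- cbind(pts_in@data, polys@data[o[inside], ])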
You can do this in one line with point.in.poly from the spatialEco package.
library(spatialEco)
new_shape <- point.in.poly(pts, polys)
from the documentation: point.in.poly "intersects point and polygon feature classes and adds polygon attributes to points".
