I am working with shapefiles in R that I need to convert from polygon to raster. The vectors look perfect when plotted, but when converted to raster using rasterize() they produce erroneous horizontal lines (see figure).
Here is a generic example of the code I am using (sorry that I cannot upload the data itself, as it is proprietary):
spdf.dat <- readOGR("directory here", "layer here")
# Plot polygon
plot(spdf.dat, col = 'dimgrey', border = 'black')
# Extract boundaries
ext <- extent(spdf.dat)
# Set resolution for rasterization
res <- 1
# determine number of rows and columns from extent and resolution
yrow <- round((ext@ymax - ext@ymin) / res)
xcol <- round((ext@xmax - ext@xmin) / res)
# Create base raster
rast.base <- raster(ext, yrow, xcol, crs = projection(spdf.dat))
# Rasterize substrate polygons
rast <- rasterize(spdf.dat, rast.base, field = 1, fun = 'min', progress='text')
plot(rast, col = 'dimgrey')
Does this seem to be a problem with the source data or the rasterize function? Has anyone seen this sort of error before? Thank you for any advice that you can provide.
To make it official so the question is considered answered, I'll copy my commented responses here. You can therefore accept it.
When I look at your figure, it seems to me that the problematic lines in the raster are located at the same latitudes as some islands. Try removing these islands from your dataset. If the problem disappears, you'll know that your data is the problem, and where in your data the problem lies.
Another option is to try the gdalUtils package, which has a gdal_rasterize function. Maybe GDAL is less demanding about the input data.
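A minimal sketch of that route (untested; the source path, layer name, and output file are placeholders for your data):
library(gdalUtils)
# Rasterize with GDAL: 'burn' writes a constant value into covered cells,
# 'tr' sets the target resolution (res = 1 as in the question)
rast.gdal <- gdal_rasterize(src_datasource = "directory here/layer here.shp",
                            dst_filename = "rasterized.tif",
                            burn = 1,
                            tr = c(1, 1),
                            output_Raster = TRUE)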
I had a similar problem rasterizing the TIGER areal water data for the San Juan Islands in Washington State, as well as for Maui: both spatial polygon data frames at the default resolution returned by the tigris package, rasterized onto a grid whose points are 1 arc-second of lat/lon apart. There were several horizontal stripes starting at what appeared to be sharp bends of the coastline. Various simplification algorithms helped, but not predictably, and not perfectly.
Try the velox package, which takes some getting used to as it uses Reference Classes. It probably has size limits, since it uses the Boost geometry libraries and works in memory. You don't need to understand it all; I don't. It is fast compared to raster::rasterize (especially for large and complicated spatial lines data frames); although I didn't see the hundred-fold speedups claimed, I am not going to complain about a mere factor of 10 or 20. Most importantly, velox$rasterize() doesn't leave streaks at the locations where raster::rasterize did!
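A rough sketch of the velox equivalent (untested; it assumes the spdf.dat and rast.base objects from the question, and a numeric attribute column, here called "value", to burn in):
library(velox)
values(rast.base) <- 0 # velox wants a raster that already holds values
vx <- velox(rast.base) # wrap the template raster
vx$rasterize(spdf.dat, field = "value", band = 1, background = NA) # burn polygons in place
rast.vx <- vx$as.RasterLayer(band = 1) # convert back to a RasterLayer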
I found that it leaves a lot of memory garbage; when converting large RasterLayers derived from velox$rasterize, running gc() was helpful before writing the raster in native R .grd format (as INT1S to save disk space).
Just as a follow-up to this question based on my experiences.
The horizontal lines are a result of the 'islands' described above. However, they only occur if the polygon is multi-part. If the 'islands' are distinct polygons rather than separate parts of one polygon, then raster::rasterize() works fine.
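So splitting multi-part polygons into single-part features before rasterizing may be a workaround. A minimal sketch, assuming the objects from the first post (sp::disaggregate needs rgeos installed):
library(sp)
library(rgeos)
# one feature per polygon part, so no multi-part 'islands' remain
spdf.single <- disaggregate(spdf.dat)
rast <- rasterize(spdf.single, rast.base, field = 1, fun = 'min')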
I am extremely new to working with spatial data, so most of what I'm about to say is me trying to speak a foreign language. Right now I am trying to learn how to do all of this in R (I am slightly more capable with this data in QGIS, but for this solution I am looking for R only).
My research involves ecological data in Pennsylvania (PA) and so I am playing around with cropping the US NLCD dataset to PA. I have a raster layer for the NLCD and a shapefile for the boundary of Pennsylvania. I am able to successfully crop the larger US raster down to PA as follows:
library(raster)
library(rgdal)
pabound <- readOGR(dsn="...",
layer="PAbound")
nlcdRast <- raster(".../NLCD_2016_Land_Cover_L48_20190424.img")
pabound <- spTransform(pabound, CRS(proj4string(nlcdRast)))
PAnlcd <- raster::crop(nlcdRast,pabound)
If I run the simple plot command for both nlcdRast and PAnlcd (i.e., plot(nlcdRast)), they maintain the same color scheme. But when I run the cropped data through tmap, it seems to be treated differently, and I am not exactly sure how to figure this out. Please see the plots below:
library(tmap)
tm_shape(nlcdRast) +
tm_raster()
And then when I plot the cropped version in tmap:
tm_shape(PAnlcd) +
tm_raster()
As you can see, it is not simply the color palette that is changing (I am confident I could figure that out); the real problem is that I'm losing the important information shown in the legend. Whereas the full plot shows the categorical values of the NLCD raster, the cropped version shows an unknown numerical range. Even though it looks bad at the moment, I'd like the cropped map to have the same legend/information as the full US map.
I apologize for not having a more reproducible example, but I am completely lost on what is happening here, so I can't quite replicate it. I suppose right now I'm just looking for where to look to figure out what changed. Thank you in advance.
Cropping is changing the way the pixels are represented. To maintain your values, use the stars package (also note I'm using the sf package for the shapefile):
library(stars)
library(sf)
# load in NLCD
nlcdRast <- read_stars(".../NLCD_2016_Land_Cover_L48_20190424.img")
# read in study area
pabound <- st_read(dsn="...", layer="PAbound")
# reproject pabound to match NLCD
pabound <- st_transform(pabound, st_crs(nlcdRast))
# now crop
panlcd <- st_crop(nlcdRast, pabound)
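With the crop done in stars, the original tmap calls should keep the categorical legend. A hedged follow-up (it assumes a tmap version that plots stars objects directly, i.e. tmap >= 3.0):
library(tmap)
tm_shape(panlcd) +
  tm_raster()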
My goal is to extract precipitation data from the Daymet database (https://daymet.ornl.gov/) for each of my 68 watershed polygons. I was able to use the daymetr package to download the data:
download_daymet_ncss(location = c(53.5, -116.6, 48, -98),
                     start = 1992,
                     end = 2000,
                     param = "prcp",
                     path = "./Daymet gridded data/Precip_raw") # download the data
I realize this is quite a large area and might be part of the issue.
Once the data is downloaded, the challenge begins.
I've tried two approaches. The first was to aggregate the data into annual values (using daymet_grid_agg from the daymetr package, as sketched below), but extracting the correct areas from the resulting raster is challenging (and I haven't managed to do it successfully).
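For reference, a sketch of what that aggregation call might look like (hedged: the file name is assumed to match the download above):
library(daymetr)
# aggregate daily precipitation to annual totals
prcp_annual <- daymet_grid_agg(file = "./Daymet gridded data/Precip_raw/prcp_daily_1992_ncss.nc",
                               int = "annual",
                               fun = "sum",
                               internal = TRUE) # return the raster instead of writing to disk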
I then tried to use the RavenR package to generate a grid overlay from the NetCDF (rvn_netcdf_to_gridshp):
fn <- "prcp_daily_1992_ncss.nc"
ncfile <- nc_open(fn)
outshp <- rvn_netcdf_to_gridshp(ncfile, projID = 4326)
This fails completely: either RStudio freezes, R cannot allocate 2.7 GB, or the shapefile is empty. I have tried increasing the memory limit, but then R just runs forever and nothing seems to happen.
Next, I tried this simple approach (as per https://www.researchgate.net/post/How-to-get-data-only-within-shapefile-boundary-from-a-netcdf-data-file-in-R-software):
shp <- st_read(file.choose())
data <- brick(file.choose())
crs(data) <- "+proj=lcc +lon_0=-90 +lat_1=33 +lat_2=45"
output <- raster::mask(data, shp)
The output brick raster is full of NAs...
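One thing I have not fully ruled out: raster::mask() returns all NAs when the mask and the brick do not share a coordinate system. A minimal sketch of that check (assuming the shp and data objects above; the Lambert string is the one assigned to the brick):
library(sf)
# reproject the shapefile to the CRS that was assigned to the brick
shp <- st_transform(shp, crs = "+proj=lcc +lon_0=-90 +lat_1=33 +lat_2=45")
output <- raster::mask(data, shp)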
I have made the downloaded NetCDF quite a bit smaller, but none of these approaches seems to work... And yes, my data are gridded (unlike in this case: How to extract NetCDF data frame by region using a polygon shapefile).
I have a NetCDF file of global oceanographic (OmegaA) data at relatively coarse spatial resolution, with 33 depth levels. I also have a global bathymetry raster at much finer resolution. My goal is to get the seabed OmegaA data from the NetCDF file, using the bathymetry data to determine the desired depth. My code so far:
library(raster)
library(rgdal)
library(ncdf4)
# Aragonite data. Defaults to CRS WGS84
ncin <- nc_open("C:/..../GLODAPv2.2016b.OmegaA.nc")
ncin.depth <- ncvar_get(ncin, "Depth") # 33 depth levels
omegaA.brk <- brick("C:/.../GLODAPv2.2016b.OmegaA.nc")
omegaA.brk <- rotate(omegaA.brk) # because the netCDF is in lon 0-360
# depth raster. CRS WGS84
r <- raster("C:/....GEBCO.tif")
# resample the raster brick to the resolution that matches the bathymetry raster
omegaA.brk <- resample(omegaA.brk, r, method = "bilinear")
# create blank final raster
omegaA.rast <- raster(ncol = r@ncols, nrow = r@nrows)
extent(omegaA.rast) <- extent(r)
omegaA.rast[] <- NA_real_
# create vector of indices of desired depth values
depth.values <- getValues(r)
depth.values.index <- which(!is.na(depth.values))
# loop to find appropriate raster brick layer, and extract the value at the desired index, and insert into blank raster
for (p in depth.values.index) {
dep.index <- which(abs(ncin.depth + depth.values[p]) == min(abs(ncin.depth + depth.values[p]))) ## this sometimes results in multiple levels being selected
brk.level <- omegaA.brk[[dep.index]] # can be more than one level if multiple layers were selected above
omegaA.rast[p] <- brk.level[[1]][p] ## take the first level if multiple levels were selected above
print(paste(p, "of", length(depth.values.index))) # counter to watch progress
}
The problem: the result is a raster with massive gaps (NAs) where there should be data. The gaps often take a distinctive shape, e.g., following a contour or running along a long straight line. I've pasted a cropped example.
I think this could be because either (1) for some reason the which() statement in the loop is not finding a match, or (2) a misalignment of the projections is created, which I've read can happen when using rotate().
I've tried to make sure all the extents, resolutions, number of cells, and CRS's are all the same, which they seem to be.
To speed up the process I've cropped the global brick and the bathymetry raster to my area of interest, again checking that all the spatial resolutions etc. match; I've not included those steps here for simplicity.
At a loss. Any help welcome!
Without a reproducible example, this kind of problem is hard to solve. I can't tell where your problem is, but I'll present the approach I would try. Maybe it's good, maybe it's bad, I don't know, but it may inspire you to find a way around your problem.
To my understanding, you have a brick of OmegaA (33 layers/depths) and a bathymetry raster. You want to get the OmegaA value at the bottom of the sea. Here is how I would do it:
Resample the OmegaA brick to the same resolution and extent as the bathymetry raster.
Transform the bathymetry raster into a raster brick of 33 layers of 0-1. E.g., if the sea bottom is at 200 m for a particular pixel, then that pixel is 0 on every depth layer except the 200 m layer, where it is 1. To program this I would go the long way, something like:
r_1 <- r
values(r_1) <- values(r)==10 # where 10 is the depth (it could be a range with < or >)
r_2 <- r
values(r_2) <- values(r)==20
...
r_33 <- r
values(r_33) <- values(r)==250
r_brick <- brick(r_1, r_2, ..., r_33)
Then you multiply both raster bricks. They have the same dimensions, so it should be easy. The output should be a raster brick of 33 layers with 0 everywhere that isn't the bottom of the sea and the OmegaA value where it is.
Combine all the layers of the brick obtained previously into a single raster with a sum.
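A compact sketch of those last two steps (assuming r_brick is the 0/1 brick built above and omegaA.brk has already been resampled to match it):
bottom.brk <- omegaA.brk * r_brick # zero everywhere except the sea-bottom layer
omegaA.rast <- sum(bottom.brk, na.rm = TRUE) # collapse 33 layers into one raster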
This should work. If you have problems dealing with raster bricks, you could convert the data into base R arrays; it might be simpler.
Good luck.
I've been running into all sorts of issues using ArcGIS ZonalStats and thought R could be a great way to go. That said, I'm fairly new to R, but I have a coding background.
The situation is that I have several rasters and a polygon shapefile with many features of different sizes (though all features are bigger than a raster cell, and the polygon features are aligned to the raster).
I've figured out how to get the mean value for each polygon feature using the raster library with extract:
#load packages required
require(rgdal)
require(sp)
require(raster)
require(maptools)
# ---Set the working directory-------
datdir <- "/test_data/"
#Read in a ESRI grid of water depth
ras <- readGDAL("test_data/raster/pl_sm_rp1000/w001001.adf")
#convert it to a format recognizable by the raster package
ras <- raster(ras)
#read in polygon shape file
proxNA <- readShapePoly("test_data/proxy/PL_proxy_WD_NA_test")
#plot raster and shp
plot(ras)
plot(proxNA)
#calc mean depth per polygon feature
#unweighted - only assigns grid to district if centroid is in that district
proxNA@data$RP1000 <- extract(ras, proxNA, fun = mean, na.rm = TRUE, weights = FALSE)
#check results
head(proxNA)
#plot depth values
spplot(proxNA[,'RP1000'])
The issue I have is that I also need an area-based ratio between the area of each polygon and the non-NA cells in the same polygon. I know the cell size of the raster, and I can get the area of each polygon, but the missing link is the count of non-NA cells in each feature. I managed to get the cell numbers of all the cells in each polygon with proxNA@data$Cnumb1000 <- cellFromPolygon(ras, proxNA), and I'm sure there is a way to get the actual value of each raster cell, which would then require a loop to count the non-NA cells, etc.
BUT, I'm sure there is a much better and quicker way to do that! If any of you has an idea or can point me in the right direction, I would be very grateful!
I do not have access to your files, but based on what you described, this should work:
library(raster)
mask_layer <- shapefile(paste0(shapedir, "AOI.shp"))
original_raster <- raster(paste0(template_raster_dir, "temp_raster_DecDeg250.tif"))
nonNA_raster <- !is.na(original_raster) # 1 where there is data, 0 where NA
masked_img <- mask(nonNA_raster, mask_layer) # based on centroid location of cells
nonNA_count <- cellStats(masked_img, sum)
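A hedged per-feature variant (assuming the ras and proxNA objects from the question, and a projected CRS so that areas are in map units): extract() over a 0/1 non-NA raster with fun = sum gives the non-NA cell count per polygon, from which the area ratio follows.
# count non-NA cells inside each polygon feature
proxNA@data$nonNA_cells <- raster::extract(!is.na(ras), proxNA, fun = sum)
# ratio of non-NA cell area to polygon area
cell_area <- prod(res(ras))
proxNA@data$ratio <- (proxNA@data$nonNA_cells * cell_area) / rgeos::gArea(proxNA, byid = TRUE)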
I have a shapefile with 50+ different polygonal shapes (representing 50+ different regions) and 10,000+ data points that are supposed to fall in one of the regions. The thing is, the 10,000+ points are already coded with the region they are supposed to be in, and I want to figure out how far each point is from its coded region in geospatial distance.
My current approach (code below), which involves converting the shapefiles to owin objects from the spatstat library and using distfun, gets me distances in lat/long Euclidean space. But I would like geospatial distances (eventually converted to km). Where should I go next?
#basically cribbed from http://cran.r-project.org/web/packages/spatstat/vignettes/shapefiles.pdf (page 9)
shp <- readShapeSpatial("myShapeFile.shp", proj4string=CRS("+proj=longlat +datum=WGS84"))
regions <- lapply(slot(shp, "polygons"), function(x) SpatialPolygons(list(x)))
windows <- lapply(regions, as.owin)
# need to convert this to geo distance
distance_from_region <- function(regionData, regionName) {
w <- windows[[regionName]]
regionData$dists <- distfun(w)(regionData$lat, regionData$long)
regionData
}
I'd project the data to a Euclidean (or near-Euclidean) coordinate system; unless you are spanning a large chunk of the globe, this is feasible. Use spTransform from maptools or sp or rgdal (I forget which) and convert to a UTM zone near your data.
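A minimal sketch (the spTransform methods for sp objects live in rgdal; EPSG:32610, UTM zone 10N, is only a placeholder for whatever zone covers your data):
library(rgdal)
shp_utm <- spTransform(shp, CRS("+init=epsg:32610")) # reproject to a metric CRS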
You also might do better with package rgeos and the gDistance function:
gDistance by default returns the cartesian minimum distance between the two geometries in the units of the current projection.
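A sketch of that call (assuming the projected shp_utm above and a similarly projected SpatialPoints object, here called pts_utm; byid = TRUE returns a per-feature distance matrix):
library(rgeos)
# pairwise minimum distances, in metres because the CRS is UTM
d <- gDistance(pts_utm, shp_utm, byid = TRUE)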
If your data is over a large chunk of globe then... tricky... 42...
Barry