I have a NetCDF file of global oceanographic (OmegaA) data at relatively coarse spatial resolution with 33 depth levels. I also have a global bathymetry raster at much finer resolution. My goal is to extract the seabed OmegaA data from the NetCDF file, using the bathymetry data to determine the desired depth. My code so far:
library(raster)
library(rgdal)
library(ncdf4)
# Aragonite data. Defaults to CRS WGS84
ncin <- nc_open("C:/..../GLODAPv2.2016b.OmegaA.nc")
ncin.depth <- ncvar_get(ncin, "Depth") # 33 depth levels
omegaA.brk <- brick("C:/.../GLODAPv2.2016b.OmegaA.nc")
omegaA.brk <- rotate(omegaA.brk) # because the netCDF is in lon 0-360
# depth raster. CRS WGS84
r <- raster("C:/....GEBCO.tif")
# resample the raster brick to the resolution that matches the bathymetry raster
omegaA.brk <-resample(omegaA.brk, r, method="bilinear")
# create blank final raster
omegaA.rast <- raster(ncol = r@ncols, nrow = r@nrows)
extent(omegaA.rast) <- extent(r)
omegaA.rast[] <- NA_real_
# create vector of indices of desired depth values
depth.values <- getValues(r)
depth.values.index <- which(!is.na(depth.values))
# loop to find the appropriate raster brick layer, extract the value at the desired index, and insert it into the blank raster
for (p in depth.values.index) {
  dep.index <- which(abs(ncin.depth + depth.values[p]) == min(abs(ncin.depth + depth.values[p]))) ## this sometimes results in multiple levels being selected
  brk.level <- omegaA.brk[[dep.index]] # can be more than one layer if multiple levels were selected above
  omegaA.rast[p] <- brk.level[[1]][p] ## choose the first layer if multiple levels were selected above
  print(paste(p, "of", length(depth.values.index))) # counter to track progress
}
The problem: the result is a raster with massive gaps (NAs) where there should be data. The gaps often take a distinctive shape, e.g. following a contour or running along a long straight line. I've pasted a cropped example.
I think this could be because either (1) for some reason the which() statement in the loop is not finding a match, or (2) a misalignment of the projections is created, which I've read can happen when using rotate().
I've tried to make sure all the extents, resolutions, numbers of cells, and CRSs are the same, which they seem to be.
To speed up the process I've cropped the global brick and bathy raster to my area of interest, again checking that all the spatial resolutions, etc etc match - I've not included those steps here for simplicity.
At a loss. Any help welcome!
Without a reproducible example, this kind of problem is hard to solve. I can't tell where your problem is, but I'll present the approach I would try. Maybe it's good, maybe it's bad, I don't know, but it may inspire you to find a way around your problem.
To my understanding, you have a brick of OmegaA (33 layers/depths) and a bathymetry raster, and you want to get the OmegaA value at the bottom of the sea. Here is how I would do it:
Resample the OmegaA brick to the same resolution and extent as the bathymetry raster.
Transform the bathymetry raster into a raster brick of 33 layers of 0-1 values. E.g. if the sea bottom is at 200 m for one particular pixel, then that pixel is 1 on the 200 m depth layer and 0 on all the other depth layers. To program this, I would go the long way, something like:
r_1 <- r
values(r_1) <- values(r)==10 # where 10 is the depth (it could be a range with < or >)
r_2 <- r
values(r_2) <- values(r)==20
...
r_33 <- r
values(r_33) <- values(r)==250
r_brick <- brick(r_1, r_2, ..., r_33)
Then you multiply the two raster bricks. They have the same dimensions, so it should be easy. The output should be a raster brick of 33 layers that is 0 everywhere except at the bottom of the sea, where it holds the value of OmegaA.
Combine all the layers of the brick obtained previously into a single raster with a sum.
This should work. If you have problems dealing with raster bricks, you could put the data into base R arrays, which may be simpler.
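For what it's worth, here is a more compact sketch of the same mask-multiply-sum idea, assuming omegaA.brk has already been resampled to match the bathymetry raster r and that ncin.depth holds the 33 (positive) layer depths; the helper name lev_fun is mine, not from the code above:
library(raster)
# For each bathymetry cell (negative metres), find the index of the
# closest depth level (positive metres) -- hence the "+" inside abs()
lev_fun <- function(d) {
  sapply(d, function(x) if (is.na(x)) NA else which.min(abs(ncin.depth + x)))
}
lev <- calc(r, fun = lev_fun)
# Build the 0/1 mask stack: layer i is 1 where the seabed sits at level i
mask.brk <- stack(lapply(seq_along(ncin.depth), function(i) lev == i))
# Multiply and sum across layers: only the seabed OmegaA value survives
# (cells that are NA in every layer come out as 0; mask with r if needed)
omegaA.seabed <- calc(omegaA.brk * mask.brk, fun = sum, na.rm = TRUE)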
Good luck.
lat long
7.16 124.21
8.6 123.35
8.43 124.28
8.15 125.08
Consider these coordinates; they correspond to weather stations that measure rainfall data.
The intro to the gstat package in R uses the meuse dataset. At some point in this tutorial (https://rpubs.com/nabilabd/118172), the author makes use of a "meuse.grid" in this line of code:
data("meuse.grid")
I do not have such a file and I do not know how to create it. Can I create one using these coordinates? Or at least point me to material that discusses how to create a custom grid for a custom area (i.e. not using administrative boundaries from GADM).
Probably wording this wrong; I don't even know if this question makes sense to R-savvy people. Still, I would love to hear some direction, or at least tips. Thanks a lot!
Total noob at R and statistics.
EDIT: See the sample grid in the tutorial I posted; that's the thing I want to make.
EDIT 2: Would this method be viable? https://rstudio-pubs-static.s3.amazonaws.com/46259_d328295794034414944deea60552a942.html
I am going to share my approach to creating a grid for kriging. There are probably more efficient or elegant ways to achieve the same task, but I hope this will be a start that facilitates some discussion.
The original poster was thinking about 1 km for every 10 pixels, but that is probably too much. I am going to create a grid with a cell size of 1 km * 1 km. In addition, the original poster did not specify an origin for the grid, so I will spend some time determining a good starting point. I also assume that the Spherical Mercator coordinate system is the appropriate choice of projection; it is a common projection for Google Maps or OpenStreetMap.
1. Load Packages
I am going to use the following packages. sp, rgdal, and raster provide many useful functions for spatial analysis; leaflet and mapview are packages for quick exploratory visualization of spatial data.
# Load packages
library(sp)
library(rgdal)
library(raster)
library(leaflet)
library(mapview)
2. Exploratory Visualization of the station locations
I created an interactive map to inspect the locations of the four stations. Because the original poster provided the latitude and longitude of the stations, I can create a SpatialPointsDataFrame with a latitude/longitude projection. Notice that the EPSG code for the latitude/longitude projection is 4326. To learn more about EPSG codes, see this tutorial (https://www.nceas.ucsb.edu/~frazier/RSpatialGuides/OverviewCoordinateReferenceSystems.pdf).
# Create a data frame with the latitude/longitude of the stations
station <- data.frame(lat = c(7.16, 8.6, 8.43, 8.15),
                      long = c(124.21, 123.35, 124.28, 125.08),
                      station = 1:4)
# Convert to SpatialPointsDataFrame
coordinates(station) <- ~long + lat
# Set the projection. They were latitude and longitude, so use WGS84 long-lat projection
proj4string(station) <- CRS("+init=epsg:4326")
# View the station location using the mapview function
mapview(station)
The mapview function will create an interactive map. We can use this map to determine a suitable origin for the grid.
3. Determine the origin
After inspecting the map, I decided that the origin could be around longitude 123 and latitude 7. This origin will be at the lower left of the grid. Now I need to find the coordinates representing the same point under the Spherical Mercator projection.
# Set the origin
ori <- SpatialPoints(cbind(123, 7), proj4string = CRS("+init=epsg:4326"))
# Convert the projection of ori
# Use EPSG: 3857 (Spherical Mercator)
ori_t <- spTransform(ori, CRSobj = CRS("+init=epsg:3857"))
I first created a SpatialPoints object based on the latitude and longitude of the origin. After that, I used spTransform to perform the projection transformation. The object ori_t is now the origin in the Spherical Mercator projection. Notice that the EPSG code for Spherical Mercator is 3857.
To see the value of coordinates, we can use the coordinates function as follows.
coordinates(ori_t)
     coords.x1 coords.x2
[1,]  13692297  781182.2
4. Determine the extent of the grid
Now I need to decide the extent of the grid, which must cover the four points and the desired area for kriging; this depends on the cell size and the number of cells. The following code sets up the extent based on this information. I have decided that the cell size is 1 km * 1 km, but I need to experiment to find a good cell count for both the x- and y-directions.
# The origin has been rounded to the nearest 100
x_ori <- round(coordinates(ori_t)[1, 1]/100) * 100
y_ori <- round(coordinates(ori_t)[1, 2]/100) * 100
# Define how many cells for x and y axis
x_cell <- 250
y_cell <- 200
# Define the resolution to be 1000 meters
cell_size <- 1000
# Create the extent
ext <- extent(x_ori, x_ori + (x_cell * cell_size), y_ori, y_ori + (y_cell * cell_size))
Based on the extent I created, I can create a raster layer with all values equal to 0. Then I can use the mapview function again to see whether the raster and the four stations match well.
# Initialize a raster layer
ras <- raster(ext)
# Set the resolution to cell_size (1000 m)
res(ras) <- c(cell_size, cell_size)
ras[] <- 0
# Project the raster
projection(ras) <- CRS("+init=epsg:3857")
# Create interactive map
mapview(station) + mapview(ras)
I repeated this process several times. Finally, I decided on 250 and 200 cells for the x- and y-directions, respectively.
5. Create spatial grid
Now I have created a raster layer with the proper extent. I can first save this raster as a GeoTIFF for future use.
# Save the raster layer
writeRaster(ras, filename = "ras.tif", format="GTiff")
Finally, to use the kriging functions from the package gstat, I need to convert the raster to SpatialPixels.
# Convert to spatial pixel
st_grid <- rasterToPoints(ras, spatial = TRUE)
gridded(st_grid) <- TRUE
st_grid <- as(st_grid, "SpatialPixels")
st_grid is a SpatialPixels object that can be used in kriging.
This is an iterative process for determining a suitable grid. Throughout the process, users can change the projection, origin, cell size, or cell count depending on the needs of their analysis.
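If it helps, the whole procedure above can be wrapped into a single convenience function to make that iteration quicker. This is just a sketch; make_grid and its arguments are my own names, not from any package:
# Build a SpatialPixels grid from a long/lat origin, a cell size (m),
# and cell counts; assumes the packages loaded in step 1
make_grid <- function(lon, lat, cell_size, x_cell, y_cell,
                      crs_out = "+init=epsg:3857") {
  ori <- SpatialPoints(cbind(lon, lat), proj4string = CRS("+init=epsg:4326"))
  ori_t <- spTransform(ori, CRSobj = CRS(crs_out))
  x_ori <- round(coordinates(ori_t)[1, 1] / 100) * 100
  y_ori <- round(coordinates(ori_t)[1, 2] / 100) * 100
  ext <- extent(x_ori, x_ori + x_cell * cell_size,
                y_ori, y_ori + y_cell * cell_size)
  ras <- raster(ext)
  res(ras) <- c(cell_size, cell_size)
  ras[] <- 0
  projection(ras) <- CRS(crs_out)
  st_grid <- rasterToPoints(ras, spatial = TRUE)
  gridded(st_grid) <- TRUE
  as(st_grid, "SpatialPixels")
}
# The grid built above, in one call
st_grid <- make_grid(123, 7, cell_size = 1000, x_cell = 250, y_cell = 200)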
@yzw and @Edzer bring up good points for creating a regular rectangular grid, but sometimes there is a need to create an irregular grid over a defined polygon, usually for kriging.
This is a sparsely documented topic. One good answer can be found here. I expand on it with code below:
Consider the built-in meuse dataset. meuse.grid is an irregularly shaped grid. How do we make a grid like meuse.grid for our own study area?
library(sp)
library(ggplot2)
data(meuse.grid)
ggplot(data = meuse.grid) + geom_point(aes(x, y))
Imagine an irregularly shaped SpatialPolygons or SpatialPolygonsDataFrame, called spdf. You first build a regular rectangular grid over it, then subset the points of that regular grid by the irregularly shaped polygon.
# First, make a rectangular grid over your `SpatialPolygonsDataFrame`
grd <- makegrid(spdf, n = 100)
colnames(grd) <- c("x", "y")
# Next, convert the grid to `SpatialPoints` and subset these points by the polygon.
grd_pts <- SpatialPoints(
  coords = grd,
  proj4string = CRS(proj4string(spdf))
)
# subset all points in `grd_pts` that fall within `spdf`
grd_pts_in <- grd_pts[spdf, ]
# Then, visualize your clipped grid which can be used for kriging
ggplot(as.data.frame(coordinates(grd_pts_in))) +
  geom_point(aes(x, y))
If you have your study area as a polygon, imported as a SpatialPolygons object, you could either use package raster to rasterize it, or use sp::spsample to sample it with sampling type "regular".
If you don't have such a polygon, you can create points regularly spread over a rectangular long/lat area using expand.grid, using seq to generate a sequence of long and lat values.
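A minimal sketch of these routes, assuming a SpatialPolygons object named study_area (hypothetical) in a projected CRS:
library(sp)
library(raster)
# Option 1: rasterize the polygon, then take cell centres as grid points
ras <- raster(extent(study_area), crs = proj4string(study_area))
res(ras) <- 100
ras <- rasterize(study_area, ras, field = 1)
grid1 <- rasterToPoints(ras, spatial = TRUE)
# Option 2: regular sampling directly on the polygon
grid2 <- spsample(study_area, n = 1000, type = "regular")
# Without a polygon: a rectangular long/lat grid via expand.grid
grid3 <- expand.grid(long = seq(123, 125.5, by = 0.01),
                     lat = seq(7, 9, by = 0.01))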
I am working with shapefiles in R that I need to convert from polygon to raster. While the vectors look perfect when plotted, when converted to raster using 'rasterize' they produce erroneous horizontal lines. Here is an example of the problem:
Here is a generic example of the code that I am using (sorry that I cannot upload the data itself as it is proprietary):
library(rgdal)
library(raster)
spdf.dat <- readOGR("directory here", "layer here")
# Plot polygon
plot(spdf.dat, col = 'dimgrey', border = 'black')
# Extract boundaries
ext <- extent(spdf.dat)
# Set resolution for rasterization
res <- 1
# determine no. of rows and columns from the extent and resolution
yrow <- round((ext@ymax - ext@ymin) / res)
xcol <- round((ext@xmax - ext@xmin) / res)
# Rasterize base
rast.base <- raster(ext, yrow, xcol, crs = projection(spdf.dat))
# Rasterize substrate polygons
rast <- rasterize(spdf.dat, rast.base, field = 1, fun = 'min', progress='text')
plot(rast, col = 'dimgrey')
Does this seem to be a problem with the source data or the rasterize function? Has anyone seen this sort of error before? Thank you for any advice that you can provide.
To make it official so the question is considered answered, I'll copy my commented responses here. You can therefore accept it.
When I look at your figure, it seems to me that the problematic lines in the raster are situated at the same latitudes as some islands. Try removing these islands from your dataset. If the problem disappears, you'll know that your data is the problem, and where in your data the problem lies.
Another option is to try the gdalUtils package, which has a gdal_rasterize function. Maybe GDAL is less demanding about the input data.
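A hedged sketch of that route; the file names are placeholders, and tr sets the 1-unit cell size used in the question:
library(gdalUtils)
# Rasterize with GDAL instead of raster::rasterize; burn writes the
# value 1 into every covered cell
rast.gdal <- gdal_rasterize(src_datasource = "substrate.shp",
                            dst_filename = "substrate.tif",
                            burn = 1, tr = c(1, 1),
                            output_Raster = TRUE)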
I had a similar problem rasterizing the TIGER areal water data for the San Juan Islands in Washington State, as well as for Maui; both of these spatial polygon data frames were at the default resolution returned by package tigris, rasterized onto a raster defined by points 1 arc-second of lat/lon apart. There were several horizontal stripes starting at what appeared to be sharp bends of the coastline. Various simplification algorithms helped, but not predictably, and not perfectly.
Try package velox, which takes some getting used to as it uses Reference Classes. It probably has size limits, as it uses the Boost geometry libraries and works in memory. You don't need to understand it all; I don't. It is fast compared to raster::rasterize (especially for large and complicated spatial lines data frames); although I didn't experience the hundred-fold speedups claimed, I am not going to complain about a mere factor of 10 or 20. Most importantly, velox$rasterize() didn't leave streaks at the locations where raster::rasterize did!
I found that it leaves a lot of memory garbage; when converting large RasterLayers derived from velox$rasterize, running gc() was helpful before writing the raster in native R .grd format (with datatype INT1S to save disk space).
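For reference, a minimal sketch of the velox route using the spdf.dat and rast.base objects from the question; the attribute column "value" is hypothetical:
library(velox)
library(raster)
vx <- velox(rast.base)                   # wrap the template raster
vx$rasterize(spdf.dat, field = "value")  # burn the polygons in place; "value" is a made-up column name
rast.vx <- vx$as.RasterLayer(band = 1)   # back to a RasterLayer
gc()                                     # clean up before writing, as noted above
writeRaster(rast.vx, "rast_vx.grd", datatype = "INT1S")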
Just as a follow-up to this question, based on my experiences.
The horizontal lines are a result of these 'islands', as described above. However, the problem only occurs if the polygon is multi-part. If the 'islands' are distinct polygons rather than separate parts of one polygon, then raster::rasterize() works fine, as shown in the sketch below.
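Assuming that diagnosis, a hedged sketch that splits multi-part polygons into single-part features before rasterizing, reusing spdf.dat and rast.base from the question:
library(sp)
library(raster)
# Explode multi-part polygons into one feature per part, then rasterize
spdf.single <- disaggregate(spdf.dat)
rast <- rasterize(spdf.single, rast.base, field = 1, fun = 'min')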
I would like to be able to create an elevation plot from contour lines in R. I am very new to using shapefiles.
At the moment I have downloaded data from here
which provides .shp files for all of the UK.
It also provides contour lines summarising the topography of the UK.
For the elevation plot I would like a data.frame or data.table of evenly spaced points (100 m apart from each other), giving an x, y and z value for each point, where x and y represent the latitude and longitude (or eastings and northings) and z represents the height (in metres above sea level).
I think there are probably some tools that will automatically carry out the interpolation for you, but I am unsure how they would work with geospatial data.
This is my basic start...
require(maptools)
xx <- readShapeSpatial("HP40_line.shp")
Choose "ASCII Grid and GML (Grid)" as download format for the "OS Terrain 50" product, and download the file. This will give you a zip file containing many directories of zip files, each of which contains portions of a 50 m elevation grid of the UK (the portion I looked at had 200 x 200 cells, meaning 10 km x 10 km). I went into the directory data/su, unzipped the zip file there, and did
library(raster)
r = raster("SU99.asc")
plot(r)
To aggregate this to a 100 m grid, I did
r100 = aggregate(r) # default is factor 2: 50 -> 100 m
As mentioned above, the advice is to work with the grids: contour lines are derived from grids, and working the other way around is painful and loses a great deal of information.
Getting the grid values in longitude/latitude as a data.frame can be done in two ways:
df = as.data.frame(projectRaster(r, crs = CRS("+proj=longlat")), xy = TRUE)
unprojects the grid to a new grid in longitude / latitude. As these grids cannot coincide, it minimally moves points (see ?projectRaster).
The second option is to convert the grid to points, and unproject these to longitude latitude, by
df2 = as.data.frame(spTransform(as(r, "SpatialPointsDataFrame"), CRS("+proj=longlat")))
This does not move points, and as a consequence does not result in a grid.
I've been running into all sorts of issues using ArcGIS ZonalStats and thought R could be a great alternative. That said, I'm fairly new to R, but I have a coding background.
The situation is that I have several rasters and a polygon shapefile with many features of different sizes (though all features are bigger than a raster cell, and the polygon features are aligned to the raster).
I've figured out how to get the mean value for each polygon feature using the raster library with extract:
#load packages required
require(rgdal)
require(sp)
require(raster)
require(maptools)
# ---Set the working directory-------
datdir <- "/test_data/"
#Read in a ESRI grid of water depth
ras <- readGDAL("test_data/raster/pl_sm_rp1000/w001001.adf")
#convert it to a format recognizable by the raster package
ras <- raster(ras)
#read in polygon shape file
proxNA <- readShapePoly("test_data/proxy/PL_proxy_WD_NA_test")
#plot raster and shp
plot(ras)
plot(proxNA)
#calc mean depth per polygon feature
#unweighted - only assigns grid to district if centroid is in that district
proxNA@data$RP1000 <- extract(ras, proxNA, fun = mean, na.rm = TRUE, weights = FALSE)
#check results
head(proxNA)
#plot depth values
spplot(proxNA[,'RP1000'])
The issue I have is that I also need an area-based ratio between the area of the polygon and all non-NA cells in the same polygon. I know the cell size of the raster and I can get the area of each polygon, but the missing link is the count of all non-NA cells in each feature. I managed to get the cell numbers of all the cells in each polygon with proxNA@data$Cnumb1000 <- cellFromPolygon(ras, proxNA), and I'm sure there is a way to get the actual values of those raster cells, which would then require a loop to count all the non-NA cells, etc.
BUT, I'm sure there is a much better and quicker way to do that! If any of you has an idea or can point me in the right direction, I would be very grateful!
I do not have access to your files, but based on what you described, this should work:
library(raster)
mask_layer <- shapefile(paste0(shapedir, "AOI.shp"))
original_raster <- raster(paste0(template_raster_dir, "temp_raster_DecDeg250.tif"))
nonNA_raster <- !is.na(original_raster)
masked_img <- mask(nonNA_raster, mask_layer) # based on centroid location of cells
nonNA_count <- cellStats(masked_img, sum)
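That gives one overall count for the masked area. For a per-feature count (which the ratio in the question needs), a hedged sketch using extract() on the ras and proxNA objects from the question:
# extract() without a summary function returns a list with one vector
# of cell values per polygon feature
vals <- extract(ras, proxNA)
proxNA@data$nonNA_count <- sapply(vals, function(x) sum(!is.na(x)))
# Area-based ratio: non-NA cell area over polygon area (assumes a
# projected CRS, so res() and area are in the same units)
cell_area <- prod(res(ras))
proxNA@data$ratio <- (proxNA@data$nonNA_count * cell_area) / area(proxNA)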
I have a file in this format:
ASCII format
The first rows look like this:
ncols 1440
nrows 720
xllcorner -180.0
yllcorner -90
cellsize 0.25
NODATA_value -9999
Basically I have the world with 1440 'tiles' in the x direction (longitude) and 720 'tiles' in the y direction (latitude). Each 'tile' is a square with a side length of 0.25 degrees. I think I have xllcorner and yllcorner correct. I can draw this map in R like this:
library("adehabitat")
bio1 <- import.asc("D:/ENFA/data.asc")
maps <- as.kasc(list(data = bio1))
image(maps, col = cm.colors(256), clfac = list(Aspect = cl))
The map looks fine.
I would like to perform some ecological niche factor analysis (ENFA) using the adehabitat package and am not too sure about the format of the location data. Basically I have the locations as longitudes and latitudes at the moment, but I could also generate them as a 'tile index' (e.g. the lower-left corner has latitude -90 and longitude -180, so its 'tile index' would be 0, 0 - right?). Which is the correct location data format? I would use ENFA code like this:
locs <- read.table("D:/ENFA/Locs.txt", header = TRUE, sep="\t")
dataenfa1 <- data2enfa(maps, locs)
pc <- dudi.pca(dataenfa1$tab, scannf = FALSE)
enfa1 <- enfa(pc, dataenfa1$pr,scannf = FALSE)
hist(enfa1)
I would appreciate any comments please. Thanks in advance.
The problem with leaving your coordinates in lat-long form is that, at most places on earth, a degree of longitude has a different length than a degree of latitude. This might distort your ENFA by exaggerating distances in some directions relative to those in others.
Especially if your data are from a relatively small area, I'd suggest re-expressing the coordinates in meters along a W/E x-axis and a S/N y-axis. If all of your points fall inside a single UTM zone, then you could do the conversion within R, using project() in the rgdal package:
Here's one example, taken from here:
library(rgdal)
# Make a two-column matrix, col1 = long, col2 = lat
xy <- cbind(c(118, 119), c(10, 50))
# Convert it to UTM coordinates (in units of meters)
project(xy, "+proj=utm +zone=51 +ellps=WGS84")
           [,1]    [,2]
[1,]  -48636.65 1109577
[2,]  213372.05 5546301
Much more information about how to manipulate spatial data is available in "Applied Spatial Data Analysis with R" by Bivand, Pebesma, and Gómez-Rubio. If you need more specific assistance, try the R-sig-Geo mailing list.
Hope this helps.
Maybe you want to convert the coordinates into GHAM (Global, Hierarchical, Alphanumeric, and Morton-encoded), which represents the globe by cells of arbitrary precision (as fine or coarse as you wish), so any lat/lon has a single alphanumeric address that remains sortable.
Here's the abstract from "GHAM: A compact global geocode suitable for sorting" by Duncan Agnew:
The GHAM code is a technique for labeling geographic locations based on their positions. It defines addresses for equal-area cells bounded by constant latitude and longitude, with arbitrarily fine precision. The cell codes are defined by applying Morton ordering to a recursive division into a 16 by 16 grid, with the resulting numbers encoded into letter-number pairs. A lexical sort of lists of points so labeled will bring near neighbors (usually) close together; tests on a variety of global datasets show that in most cases the actual closest point is adjacent in the list 50% of the time, and within 5 entries 80% of the time.
The source code is in the IAMG repository, but if you can't access it, I'm sure he would provide it.
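To make the Morton-ordering idea concrete, here is a toy R sketch of a single 16 by 16 subdivision level as described in the abstract; it illustrates Morton (Z-order) interleaving only and is not the actual GHAM encoding:
# Map lon/lat to integer cells 0..15, then interleave the 4 bits of
# each coordinate so nearby cells tend to get nearby codes
morton16 <- function(lon, lat) {
  ix <- pmin(floor((lon + 180) / 360 * 16), 15)
  iy <- pmin(floor((lat + 90) / 180 * 16), 15)
  code <- 0
  for (b in 3:0) { # from the most significant bit down
    code <- code * 4 +
      bitwAnd(bitwShiftR(ix, b), 1) * 2 +
      bitwAnd(bitwShiftR(iy, b), 1)
  }
  code
}
morton16(124.21, 7.16) # a single sortable cell index for a lon/lat pair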