I have a brick file of the bioclim variables. The brick was merged from four 30-arcsecond tile bricks, so it is fairly large. I would like to get a brick of just my research area by cutting it with a polygon as the boundary. What should I do? Alternatively, if this is not possible with a brick, can I do it with a raster?
Thanks in advance~
Marco
Check out extent() if you want to crop the brick to a smaller rectangle, or drawExtent() if you would rather choose the rectangle by clicking.
EDIT: Since you used the terms "cut" and "mask" I am not sure I have understood correctly, but here are two ways that might help. You could even use both.
library(raster)
library(sp)    # Polygon/Polygons/SpatialPolygons come from sp
# an example brick with dimensions: 77, 101, 3 (nrow, ncol, nlayers)
myGrid_Brick <- brick(system.file("external/rlogo.grd", package="raster"))
# a simple polygon within those dimensions
myTriangle_P <- Polygon(cbind(c(10, 80, 50, 10), c(10, 20, 65, 10)))
myTriangle_Ps <- Polygons(list(myTriangle_P), "fubar")
myTriangle_SP <- SpatialPolygons(list(myTriangle_Ps))
# rasterize the polygon onto the brick's grid so it can serve as a mask
myTriangle_Ras <- rasterize(myTriangle_SP, myGrid_Brick)
# this will crop the brick to the minimal rectangle that circumscribes the polygon
# extent(myCrop_Brick) is smaller than extent(myGrid_Brick) but no values are changed
myCrop_Brick <- crop(myGrid_Brick, myTriangle_SP)
# while this sets every cell that is NA in
# the mask to NA in the returned brick,
# leaving the brick extent unchanged
myMask_Brick <- mask(myGrid_Brick, myTriangle_Ras)
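If you only need a rectangular subset (the extent()/drawExtent() route mentioned above), a minimal sketch could be the following; the extent values are made up and just need to fall inside the brick's extent:
myExtent <- extent(20, 60, 10, 50)               # xmin, xmax, ymin, ymax (made-up values)
myCropRect_Brick <- crop(myGrid_Brick, myExtent)
# or choose the rectangle interactively by clicking twice on a plot:
# plot(myGrid_Brick[[1]]); myExtent <- drawExtent(); crop(myGrid_Brick, myExtent)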
I'm trying to crop a large multipolygon shapefile by a single, smaller polygon. It works using st_intersection; however, this takes a very long time, so I'm instead trying to convert the multipolygon to a raster and crop that raster with the smaller polygon.
## packages - sorry if I've missed any!
library(raster)
library(rgdal)
library(fasterize)
library(sf)
## load files
shp1 <- st_read("pathtoshp", crs = 27700) # a large multipolygon shapefile to crop
### image below created using ggplot - ignore the black boundaries!
shp2 <- st_read("pathtoshp", crs = 27700) # a single, smaller polygon shapefile, to crop shp1 by
plot(shp2)
## convert to raster (faster than st_intersection)
projection1 <- CRS('+init=epsg:27700')
rst_template <- raster(ncols = 1000, nrows = 1000,
crs = projection1,
ext = extent(shp1))
rst_shp1 <- fasterize(shp1, rst_template)
plot(rst_shp1)
rst_shp2 <- crop(rst_shp1, shp2)
plot(rst_shp2)
When I plot rst_shp2, the upper boundary is flat, rather than following the true boundary of the shp2 polygon.
Any help would be greatly appreciated!
Maybe try raster::mask() instead of crop(). crop() uses the second argument as an extent with which to crop a raster; i.e. it takes the bounding box (extent) of your second argument and crops that entire rectangle from your raster.
Something important to understand about raster objects is that they are always rectangular. The white space you see surrounding your shape is just NA values.
raster::mask() takes your original raster and a spatial object (raster, sf, etc.) and replaces all values in your raster that don't overlap the spatial object with NA (by default; you can supply other replacement values). Though I will say, mask() will likely also take a while to run, so you may be better off just sticking with sf objects.
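If you do go the mask() route, a rough sketch using the objects from your question (rst_shp1, shp2) might look like this; depending on your raster version you may need to coerce the sf polygon to a Spatial object first:
shp2_sp <- as(shp2, "Spatial")                 # sf -> Spatial, in case mask() needs it
rst_shp1_crop <- crop(rst_shp1, shp2_sp)       # optional: shrink the extent first
rst_shp1_mask <- mask(rst_shp1_crop, shp2_sp)  # cells outside shp2 become NA
plot(rst_shp1_mask)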
I would suggest moving to the "terra" package (faster and easier to use than "raster").
Here is an example.
library(terra)
r <- rast(system.file("ex/elev.tif", package="terra"))
v <- vect(system.file("ex/lux.shp", package="terra"))[4]
x <- crop(r, v)
plot(x); lines(v)
As edixon1 points out, a raster is always rectangular. If you want to set cells outside of the polygon to NA, you can do
x <- crop(r, v, mask=TRUE)
plot(x); lines(v)
In this example it is not necessary, but you could also rasterize first and then mask
x <- crop(r, v)
y <- rasterize(v, x)
m <- mask(x, y)
plot(m); lines(v)
I am not sure if this answers your question. But if it does not, then please edit your question to make it reproducible, for example using the example data above.
Say I have this raster with a 40 x 40 resolution.
library(raster)
library(sp)  # the meuse.grid sample data ships with sp
# get some sample data
data(meuse.grid)
gridded(meuse.grid) <- ~x+y
meuse.raster <- raster(meuse.grid)
res(meuse.raster)
#[1] 40 40
I would like to downscale this raster to a 4 x 4 resolution. If a 40 x 40 pixel has a value of 125, the same value should be used for all 4 x 4 pixels within it; i.e. just divide each 40 x 40 pixel into 4 x 4 pixels while keeping its value.
I am open to CDO solutions as well.
We can use raster::disaggregate
library(raster)
library(sp)
# get some sample data
data(meuse.grid)
gridded(meuse.grid) <- ~x+y
meuse.raster <- raster(meuse.grid)
#assign a coordinate reference system (from the coordinates and the location (Meuse) it looks like the standard projection for the Netherlands (Amersfoort, EPSG:28992))
crs(meuse.raster) <- "EPSG:28992"
#disaggregate
meuse.raster.disaggregated <- disaggregate(meuse.raster, c(10, 10))
I used c(10, 10) to disaggregate from a 40 x 40 to a 4 x 4 resolution (10 times finer in each direction).
res(meuse.raster.disaggregated)
[1] 4 4
In the comments Chris mentioned the terra package. I also recommend shifting from raster to terra; it is the newer package and will eventually replace packages like raster and stars.
terra also has a disaggregation function terra::disagg() which works in a similar way.
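For instance, a rough terra equivalent of the disaggregate example above (reusing meuse.raster from the code above) might be:
library(terra)
meuse.rast <- rast(meuse.raster)                  # convert the raster object to a SpatRaster
meuse.rast.disagg <- disagg(meuse.rast, fact = 10)
res(meuse.rast.disagg)
# [1] 4 4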
I have a NetCDF file of global oceanographic (OmegaA) data at relatively coarse spatial resolution with 33 depth levels. I also have a global bathymetry raster at much finer resolution. My goal is to get the seabed OmegaA data from the NetCDF file, using the bathymetry data to determine the desired depth. My code so far:
library(raster)
library(rgdal)
library(ncdf4)
# Aragonite data. Defaults to CRS WGS84
ncin <- nc_open("C:/..../GLODAPv2.2016b.OmegaA.nc")
ncin.depth <- ncvar_get(ncin, "Depth")# 33 depth levels
omegaA.brk <- brick("C:/.../GLODAPv2.2016b.OmegaA.nc")
omegaA.brk <- rotate(omegaA.brk) # because the netCDF longitudes run 0-360
# depth raster. CRS WGS84
r<-raster("C:/....GEBCO.tif")
# resample the raster brick to the resolution that matches the bathymetry raster
omegaA.brk <-resample(omegaA.brk, r, method="bilinear")
# create blank final raster
omegaA.rast <- raster(ncol = r@ncols, nrow = r@nrows)
extent(omegaA.rast) <- extent(r)
omegaA.rast[] <- NA_real_
# create vector of indices of desired depth values
depth.values<-getValues(r)
depth.values.index<-which(!is.na(depth.values))
# loop to find appropriate raster brick layer, and extract the value at the desired index, and insert into blank raster
for (p in depth.values.index) {
dep.index <-which(abs(ncin.depth+depth.values[p]) == min(abs(ncin.depth+depth.values[p]))) ## this sometimes results in multiple levels being selected
brk.level <- omegaA.brk[[dep.index]] # can be more than one layer if multiple levels were selected above
omegaA.rast[p] <- brk.level[[1]][p] ## here I choose the first layer if multiple levels were selected above
print(paste(p, "of", length(depth.values.index))) # counter to look at progress.
}
The problem: The result is a raster with massive gaps (NAs) in it where there should be data. The gaps often take a distinctive shape - eg, follow a contour, or along a long straight line. I've pasted a cropped example.
I think this could be because either 1) for some reason the 'which' statement in the loop is not finding a match, or 2) a misalignment of the projections is created, which I've read can happen when using rotate().
I've tried to make sure all the extents, resolutions, number of cells, and CRS's are all the same, which they seem to be.
To speed up the process I've cropped the global brick and bathy raster to my area of interest, again checking that all the spatial resolutions, etc etc match - I've not included those steps here for simplicity.
At a loss. Any help welcome!
Without a reproducible example, this kind of problem is hard to solve. I can't tell where your problem is, but I'll present the approach I would try. Maybe it's good, maybe it's bad, I don't know, but it may inspire you to find a way around your problem.
To my understanding, you have a brick of OmegaA (33 layers/depths) and a bathymetry raster. You want to get the OmegaA value at the bottom of the sea. Here is how I would do it:
Resample the OmegaA brick to the same resolution and extent as the bathymetry raster.
Transform the bathymetry raster into a raster brick of 33 layers of 0-1. E.g. if the sea bottom is at 200 m for one particular pixel, then that pixel is 0 on every depth layer other than the 200 m one, and 1 on the 200 m layer. To program this, I would go the long way, something like:
r_1 <- r
values(r_1) <- values(r)==10 # where 10 is the depth (it could be a range with < or >)
r_2 <- r
values(r_2) <- values(r)==20
...
r_33 <- r
values(r_33) <- values(r)==250
r_brick <- brick(r_1, r_2, ..., r_33)
then you multiply the two raster bricks. They have the same dimensions, so it should be easy. The output should be a raster brick of 33 layers that is 0 everywhere except at the bottom of the sea, where it holds the OmegaA value.
Combine all the layers of the brick obtained previously into a single raster with a sum.
This should work. If you have problems dealing with raster bricks, you could convert the data into base R arrays; it could be simpler.
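Putting those steps together, a rough, untested sketch of the idea, assuming omegaA.brk has already been resampled to match the bathymetry raster r and ncin.depth holds the 33 depth levels, could look like:
# build the 0/1 "sea bottom" brick programmatically instead of writing r_1 ... r_33 by hand
bottom_layers <- lapply(seq_along(ncin.depth), function(i) {
  ri <- r
  # 1 where depth level i is the closest level to the (negative) bathymetry value, 0 otherwise
  values(ri) <- sapply(values(r), function(d) {
    if (is.na(d)) NA else as.numeric(which.min(abs(ncin.depth + d)) == i)
  })
  ri
})
bottom_brick <- do.call(brick, bottom_layers)
# multiply layer-wise and sum across layers to keep only the seabed OmegaA value
seabed_omegaA <- sum(omegaA.brk * bottom_brick, na.rm = TRUE)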
Good luck.
I'm trying to crop some raster data and do some calculations (getting the mean sea surface temperature, specifically).
However, cropping the extent of the raster data before doing the calculations gives me the same result as doing the calculations first and cropping the resulting data afterwards.
The original extent of the raster data is -180, 180, -90, 90 (xmin, xmax, ymin, ymax), and I need to crop it to any desired region defined by latitude and longitude coordinates.
This is the script I'm doing tests with:
library(raster) # Crop raster data
library(stringr)
# hadsstR functions ----------------------------------------
load_hadsst <- function(file = "./HadISST_sst.nc") {
b <- brick(file)
NAvalue(b) <- -32768 # Land
return(b)
}
# Transform basin coordinates into numbers
morph_coords <- function(coords){
coords[1] = ifelse(str_extract(coords[1], "[A-Z]") == "W", - as.numeric(str_extract(coords[1], "[^A-Z]+")),
as.numeric(str_extract(coords[1], "[^A-Z]+")) )
coords[2] = ifelse(str_extract(coords[2], "[A-Z]") == "W", - as.numeric(str_extract(coords[2], "[^A-Z]+")),
as.numeric(str_extract(coords[2], "[^A-Z]+")) )
coords[3] = ifelse(str_extract(coords[3], "[A-Z]") == "S", - as.numeric(str_extract(coords[3], "[^A-Z]+")),
as.numeric(str_extract(coords[3], "[^A-Z]+")) )
coords[4] = ifelse(str_extract(coords[4], "[A-Z]") == "S", - as.numeric(str_extract(coords[4], "[^A-Z]+")),
as.numeric(str_extract(coords[4], "[^A-Z]+")) )
return(coords)
}
# Comparison test ------------------------------------------
hadsst.raster <- load_hadsst(file = "~/Hadley/HadISST_sst.nc")
x <- hadsst.raster
nms <- names(x)
months <- c("01","02","03","04","05","06","07","08","09","10","11","12")
coords <- c("85E", "90E", "5N", "10N")
coords <- morph_coords(coords)
years = 1970:1974
range = 5:12
# Crop before calculating mean
x <- crop(x, extent(as.numeric(coords[1]), as.numeric(coords[2]),
as.numeric(coords[3]), as.numeric(coords[4])))
xMeans <- vector(length = length(years), mode = 'list')
for (ix in seq_along(years)){
xMeans[[ix]] <- mean(x[[c(sapply(range,function(x) grep(paste0(years[ix],'.',months[x]),nms)))]], na.rm = T)
}
mean.brick1 <- do.call(brick,xMeans)
# Calculate mean before cropping
x <- hadsst.raster
xMeans <- vector(length = length(years), mode = 'list')
for (ix in seq_along(years)){
xMeans[[ix]] <- mean(x[[c(sapply(range,function(x) grep(paste0(years[ix],'.',months[x]),nms)))]], na.rm = T)
}
mean.brick2 <- do.call(brick,xMeans)
mean.brick2 <- crop(mean.brick2, extent(as.numeric(coords[1]), as.numeric(coords[2]),
as.numeric(coords[3]), as.numeric(coords[4])))
# Compare the two rasters
mean.brick1 - mean.brick2
This is the output of mean.brick1 - mean.brick2:
class : RasterBrick
dimensions : 5, 5, 25, 5 (nrow, ncol, ncell, nlayers)
resolution : 1, 1 (x, y)
extent : 85, 90, 5, 10 (xmin, xmax, ymin, ymax)
coord. ref. : +proj=longlat +datum=WGS84
data source : in memory
names : layer.1, layer.2, layer.3, layer.4, layer.5
min values : 0, 0, 0, 0, 0
max values : 0, 0, 0, 0, 0
As you can see, the two RasterBricks are exactly the same (all differences are zero), which I thought should be impossible for an arbitrary choice of coordinates.
Is there something I'm doing wrong? I expected that cropping the data before doing the calculations would give me different results.
Ok, I'll continue from my post in your previous question:
We start out with the full hadsst.raster brick (which, for a reproducible example, can be created with fake data using the first part of the solution in my previous answer).
So this dataset has the dimensions 180, 360, 516, meaning 180 rows, 360 columns and 516 temporal layers.
Technically, a raster being a matrix, this could be how it looks:
Just a bunch of matrix layers (516 to be precise), where each pixel is exactly aligned. Here I only show three example layers; the rest is indicated by the three dots.
So if we do temporal averaging, we basically extract all the values for a single pixel and take the mean (or any other averaging operation) of them. This is indicated here by the red squares.
This also shows why cropping does not influence the temporal averaging:
If we say the orange square is our extent of interest and we perform the cropping operation before the averaging, we basically discard all values around this square. After that we again take all the values for each pixel over all layers and perform our average.
This should make it clear why it doesn't matter when you discard the pixels around the orange square. You could also calculate the average for them and discard the values afterwards, leaving you with just the values of your orange square. It just doesn't make any real sense to do so if you're already sure you won't need them for further calculations.
Regardless, the values inside the square won't be affected.
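If you want to convince yourself of this, here is a quick check with the raster package's sample data:
library(raster)
b <- brick(system.file("external/rlogo.grd", package="raster"))  # any small multi-layer brick
e <- extent(20, 60, 10, 50)                                      # some sub-extent of interest
r1 <- mean(crop(b, e))    # crop first, then average over layers
r2 <- crop(mean(b), e)    # average over layers first, then crop
all.equal(values(r1), values(r2))
# [1] TRUE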
When we talk about spatial averaging, it generally means averaging over pixel within a single layer, in this case probably over the values inside the orange rectangle.
Two common operations for that are
focal averaging (also known as neighbourhood averaging)
aggregation
The focal averaging takes, for each pixel, the average of all values of a defined number of adjacent pixels (most common is a 3 x 3 square, where the pixel being computed is the central one).
The aggregation literally takes a number of pixels and combines them into a bigger pixel. This means not only that the values of those pixels are averaged, but also that the resulting raster has fewer individual pixels and a coarser resolution.
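A minimal sketch of the two operations on a made-up single-layer raster:
library(raster)
r <- raster(matrix(runif(100), nrow = 10, ncol = 10))
# focal (neighbourhood) mean: each cell becomes the mean of its 3 x 3 neighbourhood
r_focal <- focal(r, w = matrix(1/9, nrow = 3, ncol = 3))
# aggregation: merge 2 x 2 blocks of cells into one bigger cell, averaging their values
r_agg <- aggregate(r, fact = 2, fun = mean)
res(r); res(r_agg)   # the aggregated raster has a coarser resolution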
Alright, coming to the actual solution for you:
I assume you have an area of interest defined by an extent aoi:
aoi <- extent(xmin,xmax,ymin,ymax)
The first thing you would do is crop the initial brick to reduce the computational burden:
hadsst.raster_crp <- crop(hadsst.raster,aoi)
The next step is the temporal averaging, where we use the function I've defined in the solution from my other post:
hadsst.raster_crp_avg <- hadSSTmean(hadsst.raster_crp, 1969:2011, first.range = 11:12, second.range = 1:4)
Alright, now you have your temporal averages just for your region of interest. The next step depends on what your ultimate goal is.
As far as I understood, you just need a single average per temporal average for your region of interest.
If that is the case, it might be the right time to leave the actual raster domain and continue with base R:
# na.rm = TRUE so any NA (land) cells do not turn the mean into NA
res <- lapply(1:nlayers(hadsst.raster_crp_avg), function(ix) mean(as.matrix(hadsst.raster_crp_avg[[ix]]), na.rm = TRUE))
This will give you a list with as many elements as your brick hadsst.raster_crp_avg has.
Using lapply, we iterate through the layers, converting each layer into a matrix and then calculating the mean over all elements leaving us with a single value per averaged-timestep for the entire area of interest.
Going further, you can use unlist to convert it to a vector and then add it to a data.frame or perform any other operation you like.
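For example (the object and column names here are just placeholders):
res_vec <- unlist(res)
sst_means <- data.frame(period = seq_along(res_vec), mean_sst = res_vec)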
Hopefully that was clear and this is what you were looking for.
Best
I am trying to calculate the closest distance between locations in the ocean and points on land but not going through a coastline. Ultimately, I want to create a distance to land-features map.
This map was created using rdist.earth and is a straight-line distance, so it is not always correct because it does not take the shape of the coastline into account.
library(fields) # rdist.earth() comes from the fields package
c <- matrix(coast_lonlat[,1], 332, 316, byrow = TRUE)
image(1:316, 1:332, t(c))
min_dist2_feature<-NULL
for(q in 1:nrow(coast_lonlat)){
diff_lonlat <- rdist.earth(matrix(coast_lonlat[q,2:3],1,2),as.matrix(feature[,1:2]), miles = F)
min_dist2_feature<-c(min_dist2_feature, min(diff_lonlat,na.rm=T))
}
distmat <- matrix( min_dist2_feature, 316, 332)
image(1:316, 1:332, distmat)
Land feature data is a two-column matrix of xy coordinates, e.g.:
ant_x <- c(85, 90, 95, 100)
ant_y <- c(-68, -68, -68, -68)
feature <- cbind(ant_x, ant_y)
Does anyone have any suggestions? Thanks
Not fully error-checked yet, but it may get you started. Rather than coastlines, I think you need to start with a raster whose no-go areas are set to NA.
library(raster)
library(gdistance)
library(maptools)
library(rgdal)
# a mockup of the original features dataset (no longer available)
# as I recall it, these were just a two-column matrix of xy coordinates
# along the coast of East Antarctica, in degrees of lat/long
ant_x <- c(85, 90, 95, 100)
ant_y <- c(-68, -68, -68, -68)
feature <- cbind(ant_x, ant_y)
# a projection I found for antarctica
antcrs <- crs("+proj=stere +lat_0=-90 +lat_ts=-71 +datum=WGS84")
# set projection for your features
# function 'project' is from the rgdal package
antfeat <- project(feature, crs(antcrs, asText=TRUE))
# make a raster similar to yours
# with all land having "NA" value
# use your own shapefile or raster if you have it
# the wrld_simpl data set is from maptools package
data(wrld_simpl)
world <- wrld_simpl
ant <- world[world$LAT < -60, ]
antshp <- spTransform(ant, antcrs)
ras <- raster(nrow=300, ncol=300)
crs(ras) <- crs(antshp)
extent(ras) <- extent(antshp)
# rasterize will set the ocean to NA, so we just invert that
# and set water to "1"
# land is equal to zero because it is "NOT" NA
antmask <- rasterize(antshp, ras)
antras <- is.na(antmask)
# originally I sent land to "NA"
# but that seemed to make some of your features not visible
# so at 999, land (i.e. everything that was zero)
# becomes very expensive to cross but not "impossible"
antras[antras==0] <- 999
# each cell antras now has value of zero or 999, nothing else
# create a Transition object from the raster
# this calculation took a bit of time
tr <- transition(antras, function(x) 1/mean(x), 8)
tr <- geoCorrection(tr, scl=FALSE)
# distance matrix excluding the land
# just pick a few features to prove it works
sel_feat <- head(antfeat, 3)
A <- accCost(tr, sel_feat)
# now A still shows the expensive travel over land
# so we mask it out for sea travel only
A <- mask(A, antmask, inverse=TRUE)
plot(A)
points(sel_feat)
Seems to be working because the left side ocean has higher values than the right side ocean, and likewise as you go down into the Ross Sea.