How to downscale a raster but keep the same values? - r

I have this raster with a 40 x 40 resolution:
library(raster)
#get some sample data
data(meuse.grid)
gridded(meuse.grid) <- ~x+y
meuse.raster <- raster(meuse.grid)
res(meuse.raster)
#[1] 40 40
I would like to downscale this raster to 4 x 4. If a 40 x 40 pixel has a value of, say, 125, then every 4 x 4 pixel within it should get that same value.
In other words, just divide each 40 x 40 pixel into 4 x 4 pixels, each keeping its value.
I am open to CDO solutions as well.

We can use raster::disaggregate
library(raster)
#get some sample data
data(meuse.grid)
gridded(meuse.grid) <- ~x+y
meuse.raster <- raster(meuse.grid)
#assign a coordinate reference system (from the coordinates and the location (Meuse), this appears to be the standard projection for the Netherlands: Amersfoort / RD New, EPSG:28992)
crs(meuse.raster) <- "EPSG:28992"
#disaggregate
meuse.raster.disaggregated <- disaggregate(meuse.raster, c(10,10))
I used c(10,10) to disaggregate from a 40 x 40 to a 4 x 4 resolution (10 times more detailed).
res(meuse.raster.disaggregated)
#[1] 4 4
In the comments Chris mentioned the terra package. I also recommend shifting from raster to terra. I believe it is the newer package and will eventually replace packages like raster and stars.
terra also has a disaggregation function, terra::disagg(), which works in a similar way.
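For reference, here is a minimal sketch of the terra equivalent. It uses terra's built-in elevation example rather than the meuse grid, just to keep it self-contained:
library(terra)
# example raster shipped with terra
r <- rast(system.file("ex/elev.tif", package = "terra"))
res(r)
# split each cell into 10 x 10 sub-cells; the default method ("near")
# simply repeats the parent cell's value, which is what the question asks for
r10 <- disagg(r, fact = 10)
res(r10)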

Related

R raster::crop() - The upper boundary of my cropped raster is always horizontal - why?

I'm trying to crop a large multipolygon shapefile by a single, smaller polygon. It works using st_intersection(); however, this takes a very long time, so I'm instead trying to convert the multipolygon to a raster and crop that raster by the smaller polygon.
## packages - sorry if I've missed any!
library(raster)
library(rgdal)
library(fasterize)
library(sf)
## load files
shp1 <- st_read("pathtoshp", crs = 27700) # a large multipolygon shapefile to crop
### image below created using ggplot- ignore the black boundaries!
shp2 <- st_read("pathtoshp", crs = 27700) # a single, smaller polygon shapefile, to crop shp1 by
plot(shp2)
## convert to raster (faster than st_intersection)
projection1 <- CRS('+init=EPSG:27700')
rst_template <- raster(ncols = 1000, nrows = 1000,
                       crs = projection1,
                       ext = extent(shp1))
rst_shp1 <- fasterize(shp1, rst_template)
plot(rst_shp1)
rst_shp2 <- crop(rst_shp1, shp2)
plot(rst_shp2)
When I plot rst_shp2 (the cropped raster), its upper boundary is flat, rather than following the true boundary of the shp2 polygon.
Any help would be greatly appreciated!
Maybe try raster::mask() instead of crop(). crop() uses the second argument as an extent with which to crop a raster; i.e. it's taking the bounding box (extent) of your second argument and cropping that entire rectangle from your raster.
Something important to understand about raster objects is that they are all rectangular. The white space you see surrounding your shape are just NA values.
raster::mask() will take your original raster and a spatial object (raster, sf, etc.) and replace all values in your raster which don't overlap with your spatial object with NA (by default; you can supply other replacement values). Though I will say, mask() will likely also take a while to run, so you may be better off just sticking with sf objects.
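A rough sketch of that idea, reusing rst_shp1 and shp2 from the question (the coercion to a Spatial object is an assumption, added to stay on the safe side with older raster versions):
shp2_sp <- as(shp2, "Spatial")
# crop to the bounding box first (cheap), then mask to the polygon outline;
# cells outside shp2 become NA, but the raster itself stays rectangular
rst_crop <- crop(rst_shp1, shp2_sp)
rst_masked <- mask(rst_crop, shp2_sp)
plot(rst_masked)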
I would suggest moving to the "terra" package (faster and easier to use than "raster").
Here is an example.
library(terra)
r <- rast(system.file("ex/elev.tif", package="terra"))
v <- vect(system.file("ex/lux.shp", package="terra"))[4]
x <- crop(r, v)
plot(x); lines(v)
As edixon1 points out, a raster is always rectangular. If you want to set cells outside of the polygon to NA, you can do
x <- crop(r, v, mask=TRUE)
plot(x); lines(v)
In this example there is no real need for it, but you could also first rasterize the polygon and then mask:
x <- crop(r, v)
y <- rasterize(v, x)
m <- mask(x, y)
plot(m); lines(v)
I am not sure if this answers your question. But if it does not, then please edit your question to make it reproducible, for example using the example data above.

How to convert "im" pixel image to raster?

I am trying to convert an "im" pixel image I've produced into a raster image. The "im" was created with the following code:
library(sf)
library(spatstat)
library(rgeos)
library(raster)
# read eBird data (read_ebd() is from the auk package; %>% from dplyr)
library(auk)
library(dplyr)
ebd_species <- "ebd_hooded.txt" %>%
  read_ebd()
# extracting coordinates
latitude_species <- ebd_species$latitude
longitude_species <- ebd_species$longitude
#convert to spatial object
coordinates1 <- data.frame(x = longitude_species, y = latitude_species) %>% st_as_sf(coords = c("x", "y"))
# converting to point pattern data
coordinates <- as.ppp(coordinates1)
# density image
a <- density(coordinates,2)
plot(a)
This produces a plot of the smoothed density surface.
What I want to do is convert this into a raster. I then want to use the coordinates of the eBird data to extract the density values from the raster.
Here is a minimal, self-contained, reproducible example (based on the first example in ?im):
library(spatstat)
mat <- matrix(1:1200, nrow=30, ncol=40, byrow=TRUE)
m <- im(mat)
Solution
library(raster)
r <- raster(m)
It looks like you are using geographic coordinates (longitude, latitude) directly in spatstat. Are you sure this is OK in your context? For regions away from the equator this can be quite misleading. Consider projecting to planar coordinates with sf::st_transform() (see other answers of mine on this site for code to do this). Also, in newer versions of sf you can convert directly from sf to spatstat format with e.g. as.ppp().
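As a hedged sketch of that advice, using the coordinates1 object from the question and a placeholder UTM zone (EPSG:32617 here is only an example; pick the zone that fits your study area):
library(sf)
library(spatstat)
# declare the stored coordinates as longitude/latitude (WGS84), then project
st_crs(coordinates1) <- 4326
coordinates_proj <- st_transform(coordinates1, 32617)  # placeholder CRS
# convert to a point pattern and estimate the density on planar coordinates;
# the bandwidth is now in map units (metres), so the value is just for illustration
coordinates <- as.ppp(coordinates_proj)
a <- density(coordinates, sigma = 2000)  # e.g. a 2 km bandwidth
plot(a)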
If you want a kernel density estimate of the intensity at the data points you can use the option at = "points" in density.ppp():
a <- density(coordinates, 2, at = "points")
Then a is simply a vector with length equal to the number of points containing the intensity estimate for each data point. This uses "leave-one-out" estimation by default to minimize bias (see the help file for density.ppp).
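If you do want to go via a raster, as described in the question, a sketch of the extraction step might look like this (assuming a is the density image and coordinates1 the sf points used to build it):
library(raster)
# convert the spatstat "im" object to a RasterLayer, as in the answer above
r <- raster(a)
# extract the estimated density at each observation location
vals <- extract(r, st_coordinates(coordinates1))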

How to subset a raster based on grid cell values

My question builds on the solution proposed by @jbaums in this post: Global Raster of geographic distances
For the purpose of reproducing the example, I have a raster dataset of distances to the nearest coastline:
library(rasterVis); library(raster); library(maptools)
data(wrld_simpl)
# Create a raster template for rasterizing the polys.
r <- raster(xmn=-180, xmx=180, ymn=-90, ymx=90, res=1)
# Rasterize and set land pixels to NA
r2 <- rasterize(wrld_simpl, r, 1)
r3 <- mask(is.na(r2), r2, maskvalue=1, updatevalue=NA)
# Calculate distance to nearest non-NA pixel
d <- distance(r3) # if calculating distances on land instead of ocean: d <- distance(r3)
# Optionally set non-land pixels to NA (otherwise values are "distance to non-land")
d <- d*r2
levelplot(d/1000, margin=FALSE, at=seq(0, maxValue(d)/1000, length=100),colorkey=list(height=0.6), main='Distance to coast (km)')
The resulting data is a map of distance to the coast (in km) over land.
From here, I need to subset the distance raster (d), or create a new raster, that only contains the cells for which the distance to the coastline is less than 200 km. I have tried using getValues() to identify the cells for which the value is <= 200 (as shown below), but so far without success. Can anyone help? Am I on the right track?
#vector of desired cell numbers
my.pts <- which(getValues(d) <= 200)
# create raster the same size as d filled with NAs
bar <- raster(ncols=ncol(d), nrows=nrow(d), res=res(d))
bar[] <- NA
# replace the values with those in d
bar[my.pts] <- d[my.pts]
I think this is what you are looking for. You can index a raster like a matrix; right after your d <- d*r2 line, add:
d[d>=200000]<-NA
levelplot(d/1000, margin=FALSE, at=seq(0, maxValue(d)/1000, length=100),colorkey=list(height=0.6), main='Distance to coast (km)')
(In case you forgot: the unit is meters, so the threshold should be 200000, not 200.)
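If you would rather keep d untouched, a small variation is to build the subset as a new raster, for example with reclassify() (same 200 km threshold, expressed in meters):
# distances beyond the 200 km threshold become NA; everything else keeps its value
d200 <- reclassify(d, cbind(200000, Inf, NA))
levelplot(d200/1000, margin=FALSE, colorkey=list(height=0.6),
          main='Distance to coast < 200 km (km)')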

R - find point farthest from set of points on rasterized USA map

New to spatial analysis in R here. I have a shapefile for the USA that I downloaded from HERE. I also have a set of lat/long points (half a million) that lie within the contiguous USA.
I'd like to find the "most remote spot" -- the spot within the contiguous USA that's farthest from the set of points.
I'm using the rgdal, raster and sp packages. Here's a reproducible example with a random sample of 10 points:
library(rgdal); library(raster); library(sp)
# Set wd to the folder tl_2010_us_state_10
usa <- readOGR(dsn = ".", layer = "tl_2010_us_state10")
# Sample 10 points in USA
sample <- spsample(usa, 10, type = "random")
# Set extent for contiguous united states
ext <- extent(-124.848974, -66.885444, 24.396308, 49.384358)
# Rasterize USA
r <- raster(ext, nrow = 500, ncol = 500)
rr <- rasterize(usa, r)
# Find distance from sample points to cells of USA raster
D <- distanceFromPoints(object = rr, xy = sample)
# Plot distances and points
plot(D)
points(sample)
After the last two lines of code, I get a plot of the distance surface over the whole bounding box.
However, I'd like it to be over the rasterized map of the USA. And, I'd like it to only consider distances from cells that are in the contiguous USA, not all cells in the bounding box. How do I go about doing this?
I'd also appreciate any other tips regarding the shapefile I'm using -- is it the best one? Should I be worried about using the right projection, since my actual dataset is lat/long? Will distanceFromPoints be able to efficiently process such a large dataset, or is there a better function?
To limit raster D to the contiguous USA you could find the elements of rr assigned values of NA (i.e. raster cells within the bounding box but outside of the usa polygons), and assign these same elements of D a value of NA.
D[which(is.na(rr[]))] <- NA
plot(D)
lines(usa)
You can use 'proj4string(usa)' to find the projection info for the usa shapefile. If your coordinates of interest are based on a different projection, you can transform them to match the usa shapefile projection as follows:
my_coords_xform <- spTransform(my_coords, CRS(proj4string(usa)))
Not sure about the relative efficiency of distanceFromPoints, but it only took ~ 1 sec to run on my computer using your example with 10 points.
I think you were looking for the mask function.
library(raster)
usa <- getData('GADM', country='USA', level=1)
# exclude Alaska and Hawaii
usa <- usa[!usa$NAME_1 %in% c( "Alaska" , "Hawaii"), ]
# get the extent and create raster with preferred resolution
r <- raster(floor(extent(usa)), res=1)
# rasterize polygons
rr <- rasterize(usa, r)
set.seed(89)
sample <- spsample(usa, 10, type = "random")
# Find distance from sample points to cells of USA raster
D <- distanceFromPoints(object = rr, xy = sample)
# remove areas outside of polygons
Dm <- mask(D, rr)
# an alternative would be mask(D, usa)
# cell with highest value
mxd <- which.max(Dm)
# coordinates of that cell
pt <- xyFromCell(r, mxd)
plot(Dm)
points(pt)
The distances should be fine, even when using long/lat data. But distanceFromPoints could indeed be a bit slow with a large data set, as it uses a brute-force algorithm.
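For half a million points, a hedged sketch of the same step in terra (generally faster than raster for this) could look like the following, reusing rr and sample from the code above:
library(terra)
# convert the raster/sp objects to their terra equivalents
rr_t <- rast(rr)      # rasterized USA (NA outside the polygons)
pts  <- vect(sample)  # the sample points as a SpatVector
# distance from every cell to the nearest sample point, then mask to the USA
D_t  <- distance(rr_t, pts)
Dm_t <- mask(D_t, rr_t)
plot(Dm_t)
points(pts)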

Global Raster of geographic distances

I'm wondering if someone has built a raster of the continents of the world where each cell equals the distance from that cell to the nearest shore. This map would highlight the land areas that are most isolated inland.
I would imagine this would simply involve rasterizing a shapefile of the global boundaries and then calculating the distances.
You can do this with raster::distance, which calculates the distance from each NA cell to the closest non-NA cell. You just need to create a raster that has NA for land pixels, and some other value for non-land pixels.
Here's how:
library(raster)
library(maptools)
data(wrld_simpl)
# Create a raster template for rasterizing the polys.
# (set the desired grid resolution with res)
r <- raster(xmn=-180, xmx=180, ymn=-90, ymx=90, res=1)
# Rasterize and set land pixels to NA
r2 <- rasterize(wrld_simpl, r, 1)
r3 <- mask(is.na(r2), r2, maskvalue=1, updatevalue=NA)
# Calculate distance to nearest non-NA pixel
d <- distance(r3)
# Optionally set non-land pixels to NA (otherwise values are "distance to non-land")
d <- d*r2
To create the distance plot (I like rasterVis for plotting, but you could just use plot(d)):
library(rasterVis)
levelplot(d/1000, margin=FALSE, at=seq(0, maxValue(d)/1000, length=100),
          colorkey=list(height=0.6), main='Distance to coast')
