Basically, I computed a global distribution probability model in the form of an ASCII raster, call it gdpm. gdpm's values are all between 0 and 1.
Then I imported a local map from shape file:
library(rgdal)   # for readOGR
library(tools)   # for file_path_sans_ext
shape <- file.choose()
map <- readOGR(shape, basename(file_path_sans_ext(shape)))
Next, I rasterized gdpm and cropped it using the local map:
ldpm <- mask(gdpm, map)
Then, I reclassified this continuous model into a discrete model (I divided the model into 6 levels):
recalc <- matrix(c(0, 0.05, 0, 0.05, 0.2, 1, 0.2, 0.4, 2, 0.4, 0.6, 3, 0.6, 0.8, 4, 0.8, 1, 5), ncol = 3, byrow = TRUE)
ldpmR <- reclassify(ldpm, recalc)
I've got a cropped and reclassified raster. Now I need to summarize the land cover, that is, for each level, calculate its proportion of the area within each region of the local map (I don't know how to describe it in terminology). I found and followed an example (by RobertH):
ext <- raster::extract(ldpmR, map)
tab <- sapply(ext, function(x) tabulate(x, 10))
tab <- tab / colSums(tab)
But I'm not sure it works, since the output of tab is confusing.
So how do I compute the land cover area correctly? How can I apply the correct method within each polygon?
My original data are too large to share, so I can only provide an alternative raster (I think this example needs a different reclassification matrix):
Example raster
Or you can generate a test raster (RobertH):
library(raster)
s <- stack(system.file("external/rlogo.grd", package="raster"))
writeRaster(s, file='testtif', format='GTiff', bylayer=T, overwrite=T)
f <- list.files(pattern="testtif_..tif")
I also have a question about plotting a raster:
r <- as(r, "SpatialPixelsDataFrame")
r <- as.data.frame(r)
colnames(r) <- c("value", "x", "y")
I do this conversion to make a raster plottable with ggplot2; is there a more concise method?
loki's answer is OK, but this can be done the raster way, which is safer. It is also important to consider whether the coordinates are angular (longitude/latitude) or planar (projected).
Example data
library(raster)
r <- raster(system.file("external/test.grd", package="raster"))
r <- r / 1000
recalc <- matrix(c(0, 0.05, 0, 0.05, 0.2, 1, 0.2, 0.4, 2, 0.4, 0.6, 3, 0.6, 0.8, 4, 0.8, 2, 5), ncol = 3, byrow = TRUE)
r2 <- reclassify(r, recalc)
Approach 1. Only for planar data
f <- freq(r2, useNA='no')
apc <- prod(res(r))
f <- cbind(f, area=f[,2] * apc)
f
# value count area
#[1,] 1 78 124800
#[2,] 2 1750 2800000
#[3,] 3 819 1310400
#[4,] 4 304 486400
#[5,] 5 152 243200
Approach 2. For angular data (but also works for planar data)
a <- area(r2)
z <- zonal(a, r2, 'sum')
z
# zone sum
#[1,] 1 124800
#[2,] 2 2800000
#[3,] 3 1310400
#[4,] 4 486400
#[5,] 5 243200
If you want to summarize by polygons, you can do something like this:
# example polygons
a <- rasterToPolygons(aggregate(r, 25))
Approach 1
# extract values (slow)
ext <- extract(r2, a)
# tabulate values for each polygon
tab <- sapply(ext, function(x) tabulate(x, 5))
# adjust for area (planar data only)
tab <- tab * prod(res(r2))
# check the results, by summing over the regions
rowSums(tab)
#[1] 124800 2800000 1310400 486400 243200
Approach 2
x <- rasterize(a, r2)
z <- crosstab(x, r2)
z <- cbind(z, area = z[,3] * prod(res(r2)))
Check results:
aggregate(z[, 'area', drop=F], z[,'Var2', drop=F], sum)
Var2 area
#1 1 124800
#2 2 2800000
#3 3 1310400
#4 4 486400
#5 5 243200
Note that if you are dealing with lon/lat data you cannot use prod(res(r)) to get the cell size. In that case you will need to use the area function and loop over classes, I think.
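For example, a rough sketch of that loop, assuming the same r2 and polygons a as above but with lon/lat coordinates:
ca <- area(r2)             # per-cell area (km2 for lon/lat rasters)
zones <- rasterize(a, r2)  # raster with the polygon id of each cell
classes <- 1:5
out <- sapply(classes, function(k) {
  # keep only the cells of class k, then sum their areas within each polygon
  zonal(mask(ca, r2 == k, maskvalue=0), zones, 'sum')[, 2]
})
colnames(out) <- paste0("class_", classes)
out  # one row per polygon, one column per class, values are areas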
You also asked about plotting. There are many ways to plot a Raster* object. The basic ones are:
image(r2)
plot(r2)
spplot(r2)
library(rasterVis);
levelplot(r2)
More tricky approaches:
library(ggplot2) # using a rasterVis method
theme_set(theme_bw())
gplot(r2) + geom_tile(aes(fill = value)) +
facet_wrap(~ variable) +
scale_fill_gradient(low = 'white', high = 'blue') +
coord_equal()
library(leaflet)
leaflet() %>% addTiles() %>%
addRasterImage(r2, colors = "Spectral", opacity = 0.8)
It seems you can get the area from the number of pixels.
Let's start with a reproducible example:
r <- raster(system.file("external/test.grd", package="raster"))
plot(r)
Since the values in this raster are in a different range than your data, let's adapt them to your values:
r <- r / 1000
r[r>1] <- 1
Afterwards, we apply your reclassification:
recalc <- matrix(c(0, 0.05, 0, 0.05, 0.2, 1, 0.2, 0.4, 2, 0.4, 0.6, 3, 0.6, 0.8, 4, 0.8, 1, 5), ncol = 3, byrow = TRUE)
r2 <- reclassify(r, recalc)
plot(r2)
Now, how do we get the area?
Since you are working with a projected raster, you can simply use the number of pixels and the raster resolution. Therefore, we first need to check the resolution and the map units of the projection:
res(r)
# [1] 40 40
crs(r)
# CRS arguments:
# +init=epsg:28992
# +towgs84=565.237,50.0087,465.658,-0.406857,0.350733,-1.87035,4.0812 +proj=sterea
# +lat_0=52.15616055555555 +lon_0=5.38763888888889 +k=0.9999079 +x_0=155000
# +y_0=463000 +ellps=bessel +units=m +no_defs
Now we know that we are dealing with pixels of 40 x 40 meters, since we have a metric CRS.
Let's use this information to calculate the area of each class.
app <- res(r)[1] * res(r)[2] # area per pixel
table(r2[]) * app
# 1 2 3 4 5
# 124800 2800000 1310400 486400 243200
For the plotting of georeferenced rasters, I would like to refer you to an older question here on SO.
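As for the conversion asked about in the question: a shorter route than going through SpatialPixelsDataFrame is raster's as.data.frame() with xy = TRUE (a minimal sketch, using r2 from above):
library(ggplot2)
df <- as.data.frame(r2, xy = TRUE, na.rm = TRUE)  # columns: x, y, layer values
names(df)[3] <- "value"
ggplot(df, aes(x, y, fill = value)) +
  geom_raster() +
  coord_equal()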
Related
So I've combined those two rasters into one DEM raster that contains elevation values:
dem1 = read_stars("srtm_43_06.tif")
dem2 = read_stars("srtm_44_06.tif")
pol = st_read("israel_borders.shp")
dem = st_mosaic(dem1, dem2)
dem = dem[, 5687:6287, 2348:2948]
names(dem) = "elevation"
dem = st_warp(src = dem, crs = 32636, method = "near", cellsize = 90)
Now I need to calculate a point geometry for the peak of the mountain by finding the centroid of the pixel that has the highest elevation in the image. Does anyone know which functions I can use?
Building on Grzegorz Sapijaszko's example, here is an alternative path to the top of the mountain.
library(terra)
f <- system.file("ex/elev.tif", package="terra")
x <- rast(f)
If there is a single maximum, you can do
g <- global(x, which.max)
xyFromCell(x, g[,1])
# x y
#[1,] 6.020833 50.17917
Now, consider a situation with multiple maxima. I add three more cells with the maximum value.
x[c(1000, 2500, 5000)] <- 547
We can find the four highest peaks with:
g <- global(x, which.max)[[1]]
v <- x[g] |> unlist()
y <- ifel(x == v, v, NA)
p <- as.points(y)
crds(p)
#[1,] 6.020833 50.17917
#[2,] 6.154167 50.10417
#[3,] 5.987500 49.97083
#[4,] 6.237500 49.75417
You should not warp (project with terra) the raster data first because that changes the cell values and potentially the location of the highest peak. You should find the peaks with the original data, but then you can transform the results like this.
pp <- project(p, "EPSG:32636")
crds(pp)
# x y
#[1,] -1411008 5916157
#[2,] -1404896 5904422
#[3,] -1422145 5894509
#[4,] -1413735 5864236
With your files, you could start with something like
ff <- c("srtm_43_06.tif", "srtm_44_06.tif")
v <- vrt(ff)
g <- global(v, which.max)
And then continue as in the examples above.
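For instance, a minimal continuation for the single-maximum case, projecting the result to the CRS used in the question (EPSG:32636); a sketch only, since the actual peak location depends on your data:
peak <- xyFromCell(v, g[, 1])   # lon/lat coordinates of the highest cell
peak_utm <- project(vect(peak, crs = crs(v)), "EPSG:32636")
crds(peak_utm)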
Let's use terra; a similar approach can be applied with the raster package as well. For testing purposes we will use the raster supplied with the terra package:
library(terra)
#> terra 1.5.12
f <- system.file("ex/elev.tif", package="terra")
v <- rast(f)
plot(v)
You can check the details of your raster just by typing the raster object name and pressing Enter; you can check the min and max values with the minmax() function from terra:
minmax(v)
#> elevation
#> [1,] 141
#> [2,] 547
Let's create another raster from the original one by checking, for each cell, whether its value is the maximum elevation:
w <- v == minmax(v)[2]
plot(w)
Let's create a substitution matrix, and substitute all FALSE with NA and TRUE with 1:
mx <- matrix(c(FALSE, NA, TRUE, 1), ncol = 2, byrow = TRUE)
w <- classify(w, mx)
plot(v)
plot(as.polygons(w), add=TRUE)
Let's find centroids of those polygon(s):
pts <- centroids(as.polygons(w))
plot(pts, add=TRUE)
Let's see our coordinates:
as.data.frame(pts, geom = "WKT")
#> elevation geometry
#> 1 1 POINT (6.020833 50.179167)
Created on 2022-01-29 by the reprex package (v2.0.1)
Imagine a regular 0.5° grid across the Earth's surface, and consider a 3x3 subset of this grid. As a stylized example of what I'm working with, let's say I have three polygons (yellow, orange, and blue) that for the sake of simplicity are all 1 unit in area. These polygons have the attributes Population and Value, listed in the table below.
I want to turn these polygons into a 0.5° raster (with global extent) whose values are based on the weighted-mean Value of the polygons. The tricky part is that I want to weight the polygons' values not by their total Population, but by their included population.
I know, theoretically, what I want to do, and below I have done it for the center gridcell:
1. Multiply Population by Frac. included (the fraction of the polygon's area that falls inside the gridcell) to get Pop. included. (This assumes population is distributed evenly throughout the polygon, which is acceptable.)
2. Divide each polygon's Pop. included by the sum of all polygons' Pop. included (32) to get Weight.
3. Multiply each polygon's Value by Weight to get Result.
4. Sum all polygons' Result to get the value for the center gridcell (0.31).
        Population  Value  Frac. included  Pop. included  Weight  Result
Yellow          24    0.8            0.25              6  0.1875    0.15
Orange          16    0.4            0.5               8  0.25      0.10
Blue            18    0.1            1                18  0.5625    0.06
Sum                                                   32            0.31
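The same arithmetic in a few lines of R, just to check the numbers in the table:
population <- c(yellow = 24, orange = 16, blue = 18)
value      <- c(yellow = 0.8, orange = 0.4, blue = 0.1)
frac_incl  <- c(yellow = 0.25, orange = 0.5, blue = 1)
pop_incl <- population * frac_incl    # 6, 8, 18
weight   <- pop_incl / sum(pop_incl)  # 0.1875, 0.25, 0.5625
sum(value * weight)                   # 0.30625, i.e. ~0.31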
I have an idea of how to accomplish this in R, as described below. Where possible, I've filled in code that I think will do what I want. My questions: How do I do steps 2 and 3? Or is there a simpler way to do this? If you want to play around with this, I have uploaded old_polygons as a .rds file here.
library("sf")
library("raster")
1. Calculate the area of each polygon: old_polygons$old_area <- as.numeric(st_area(old_polygons))
2. Generate the global 0.5° grid as some kind of Spatial object.
3. Split the polygons by the grid, generating new_polygons.
4. Calculate the area of the new polygons: new_polygons$new_area <- as.numeric(st_area(new_polygons))
5. Calculate the fraction included for each new polygon: new_polygons$frac_included <- new_polygons$new_area / new_polygons$old_area
6. Calculate the "included population" in the new polygons: new_polygons$pop_included <- new_polygons$pop * new_polygons$frac_included
7. Calculate a new attribute for each polygon that is just its Value times its included population: new_polygons$tmp <- new_polygons$Value * new_polygons$pop_included
8. Set up an empty raster for the next steps: empty_raster <- raster(nrows=360, ncols=720, xmn=-180, xmx=180, ymn=-90, ymx=90)
9. Rasterize the polygons by summing this new attribute within each gridcell: tmp_raster <- rasterize(new_polygons, empty_raster, "tmp", fun = "sum")
10. Create another raster that is just the total included population in each gridcell: pop_raster <- rasterize(new_polygons, empty_raster, "pop_included", fun = "sum")
11. Divide the first raster by the second to get what I want:
output_raster <- empty_raster
values(output_raster) <- getValues(tmp_raster) / getValues(pop_raster)
Any help would be much appreciated!
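For reference, steps 2 and 3 could be sketched with sf roughly as follows (untested against the uploaded data; it assumes old_polygons is in lon/lat, and for a full global grid you would probably want to crop the grid to the polygons' extent first):
library(sf)
# step 2: global 0.5 degree grid as an sf object with a cell id
bb   <- st_as_sfc(st_bbox(c(xmin = -180, xmax = 180, ymin = -90, ymax = 90), crs = st_crs(4326)))
grid <- st_make_grid(bb, cellsize = 0.5)
grid <- st_sf(cell = seq_along(grid), geometry = grid)
# step 3: split the polygons by the grid; each piece keeps its cell id
new_polygons <- st_intersection(old_polygons, grid)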
Example data:
library(terra)
f <- system.file("ex/lux.shp", package="terra")
v <- vect(f)
values(v) <- data.frame(population=1:12, value=round(c(2:13)/14, 2))
r <- rast(ext(v)+.05, ncols=4, nrows=6, names="cell")
Illustrate the data
p <- as.polygons(r)
plot(p, lwd=2, col="gray", border="light gray")
lines(v, col=rainbow(12), lwd=2)
txt <- paste0(v$value, " (", v$population, ")")
text(v, txt, cex=.8, halo=TRUE)
Solution:
# area of the polygons
v$area1 <- expanse(v)
# intersect with raster cell boundaries
values(r) <- 1:ncell(r)
p <- as.polygons(r)
pv <- intersect(p, v)
# area of the polygon parts
pv$area2 <- expanse(pv)
pv$frac <- pv$area2 / pv$area1
Now we just use the data.frame with the attributes of the polygons to compute the polygon-cover-weighted-population-weighted values.
z <- values(pv)
a <- aggregate(z[, "frac", drop=FALSE], z[,"cell",drop=FALSE], sum)
names(a)[2] <- 'fsum'
z <- merge(z, a)
z$weight <- z$population * z$frac / z$fsum
z$wvalue <- z$value * z$weight
b <- aggregate(z[, c("wvalue", "weight")], z[, "cell", drop=FALSE], sum)
b$bingo <- b$wvalue / b$weight
Assign values back to raster cells
x <- rast(r)
x[b$cell] <- b$bingo
Inspect results
plot(x)
lines(v)
text(x, digits=2, halo=TRUE, cex=.9)
text(v, "value", cex=.8, col="red", halo=TRUE)
This may not scale very well to large data sets, but you could perhaps do it in chunks.
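A rough sketch of what such chunking could look like, reusing v, p, and the aggregation steps above (the chunk size of 1000 is arbitrary):
idx <- split(seq_len(nrow(v)), ceiling(seq_len(nrow(v)) / 1000))
zl <- lapply(idx, function(i) {
  pv <- intersect(p, v[i, ])
  pv$area2 <- expanse(pv)
  pv$frac <- pv$area2 / pv$area1
  values(pv)
})
z <- do.call(rbind, zl)
# then continue with the aggregate/merge steps shown above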
This is fast and scalable:
library(data.table)
library(terra)
# make the 3 polygons with radius = 5km
center_points <- data.frame(lon = c(0.5, 0.65, 1),
lat = c(0.75, 0.65, 1),
Population = c(16, 18, 24),
Value = c(0.4, 0.1, 0.8))
polygon <- vect(center_points, crs = "EPSG:4326")
polygon <- buffer(polygon, 5000)
# make the raster
my_raster <- rast(nrow = 3, ncol = 3, xmin = 0, xmax = 1.5, ymin = 0, ymax = 1.5, crs = "EPSG:4326")
my_raster[] <- 0 # set the value to 0 for now
# find the fractions of cells in each polygon
# "cells" gives you the cell ID and "weights" (or "exact") gives you the cell fraction in the polygon
# using "exact" instead of "weights" is more accurate
my_Table <- extract(my_raster, polygon, cells = TRUE, weights = TRUE)
setDT(my_Table) # convert to datatable
# merge the polygon attributes to "my_Table"
poly_Table <- setDT(as.data.frame(polygon))
poly_Table[, ID := 1:nrow(poly_Table)] # add the IDs which are the row numbers
merged_Table <- merge(my_Table, poly_Table, by = "ID")
# find Frac_included
merged_Table[, Frac_included := weight / sum(weight), by = ID]
# find Pop_included
merged_Table[, Pop_included := Frac_included * Population]
# find Weight, to avoid confusion with "weight" produced above, I call this "my_Weight"
merged_Table[, my_Weight := Pop_included / sum(Pop_included), by = cell]
# final results
Result <- merged_Table[, .(Result = sum(Value * my_Weight)), by = cell]
# add the values to the raster
my_raster[Result$cell] <- Result$Result
plot(my_raster)
I have two raster grids in R with different resolutions which don't line up exactly. In actual fact I have hundreds of each so any answer must be easily run many times.
I want to scale the finer resolution grid up to the coarser resolution by taking an areal weighted mean of the grid cells.
I was hoping I could use projectRaster or resample, but neither gives the desired output, and I cannot use aggregate as I need my new grids to align with the coarser resolution grid.
For my real data, the finer grid is at 0.005 deg intervals and the coarser at 0.02479172 deg intervals, and the extents/origins don't exactly match up.
I've made an extreme version as an example of why neither resample nor projectRaster works:
library(raster)
#> Warning: package 'raster' was built under R version 3.5.3
#> Loading required package: sp
testproj <- "+proj=lcc +lat_1=48 +lat_2=33 +lon_0=-100 +ellps=WGS84"
testmat <- matrix(1, nrow = 8, ncol = 8)
# testmat <- matrix(sample(1:10, 64, replace = T), nrow = 8, ncol = 8)
testmat[1,5] <- 400
testmat[8,4] <- -400
testsmallraster <- raster(testmat, xmn=0, xmx=8, ymn=0, ymx=8)
crs(testsmallraster) <- testproj
plot(testsmallraster)
testlarger <- raster(matrix(rep(NA,4), nrow = 2, ncol = 2), xmn=0.3, xmx=8.3, ymn=0, ymx=8)
crs(testlarger) <- testproj
tout_reproj <- projectRaster(testsmallraster, testlarger)
tout_resamp <- resample(testsmallraster, testlarger)
tout_resampngb <- resample(testsmallraster, testlarger, method = "ngb")
tout_agg <- aggregate(testsmallraster, fact = 4)
#reprojected values ignore all but 4 cells closest to new centre
values(tout_reproj)
#> [1] 1 1 1 1
#resample uses bilinear interpolation, which weights the grid cells furthest from the new centre less than those closest
# I need all grid cells entirely contained in the new grid cell to have equal weighting
#bilinear interpolation also weights cells which do not fall within the new cell at all, which I do not want
values(tout_resamp)
#> [1] 10.851852 15.777778 -7.911111 -12.366667
#aggregate gives values close to what I want, but they are not on the new raster origin/resolution and therefore do not split values that fall across grid boundaries
values(tout_agg)
#> [1] 1.0000 25.9375 -24.0625 1.0000
#using ngb was never really going to make any sense but thought I'd add it for completeness
values(tout_resampngb)
#> [1] 1 1 1 1
#desired output, first cell: only 0.3 of the grid cell with value 400 falls inside; the remaining cells equal 1
#desired output, second cell: 0.7 of the grid cell with value 400 falls inside; the remaining cells equal 1
#desired output, third cell: exactly 1 grid cell of -400 and 15 cells of 1
#desired output, fourth cell: only overlaps grid cells equal to 1
desiredoutput <- raster(matrix(c((15.7*1+0.3*400)/16,(15.3*1+0.7*400)/16,mean(c(-400, rep(1,15))),1),byrow = T, nrow = 2, ncol = 2), xmn=0.3, xmx=8.3, ymn=0, ymx=8)
values(desiredoutput)
#> [1] 8.48125 18.45625 -24.06250 1.00000
Created on 2020-07-02 by the reprex package (v0.3.0)
You can get closer to the desired result by using a similar spatial resolution for resample, and then aggregating the results:
library(raster)
testproj <- "+proj=lcc +lat_1=48 +lat_2=33 +lon_0=-100 +datum=WGS84"
testmat <- matrix(1, nrow = 8, ncol = 8)
testmat[1,5] <- 400
testmat[8,4] <- -400
testsmallraster <- raster(testmat, xmn=0, xmx=8, ymn=0, ymx=8, crs=testproj)
testlarger <- raster(matrix(rep(NA,4), nrow = 2, ncol = 2), xmn=0.3, xmx=8.3, ymn=0, ymx=8, crs = testproj)
y <- disaggregate(testlarger, 4)
z <- resample(testsmallraster, y)
za <- aggregate(z, 4)
values(za)
#[1] 8.48125 18.45625 -24.06250 1.00000
For much better speed, try terra:
library(terra)
a <- rast(testsmallraster)
b <- rast(testlarger)
b <- disaggregate(b, 4)
d <- resample(a, b)
da <- aggregate(d, 4)
values(da)
# layer
#[1,] 8.48125
#[2,] 18.45625
#[3,] -24.06250
#[4,] 1.00000
This probably ought to be done automatically by resample and projectRaster. raster attempts to do some of this for resample, but in this case not very satisfactorily.
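As a possible shortcut, newer versions of terra accept method = "average" in resample(), which may approximate the desired area-weighted aggregation in one step; a sketch worth checking against the desired output above:
library(terra)
a2 <- rast(testsmallraster)
b2 <- rast(testlarger)
d2 <- resample(a2, b2, method = "average")
values(d2)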
When I needed to do similar resampling, this worked for me. This example is a 4-cell destination grid at 1° x 1° spacing with centroids at half degrees (to match some satellite data), and an offset half-degree grid for source data (ECMWF weather). 'Resample' does the heavy lifting of interpolating on mismatched grids. The code below is basically a manual version of a 'weights=' option that doesn't exist for resample. We need relative, not absolute, areas to be correct for weighting, so the caveat on the precision of raster::area described in the help seems of low concern.
library(raster)
wgs84 <- "+init=epsg:4326"
polar.brick.source <- array(dim = c(5, 5, 2), rep(c(1, 2), each = 25))
dimnames(polar.brick.source)[[1]] <- seq(-1, 1, by = .5)
dimnames(polar.brick.source)[[2]] <- seq(80, 82, by = .5)
dimnames(polar.brick.source)[[3]] <- c("time.a", "time.b")
# Add some outliers to see their effects.
polar.brick.source[1, 2, ] <- c(25, 50)
polar.brick.source[3, 2, 2] <- -30
polar.brick <- brick(polar.brick.source, crs = CRS(wgs84),
xmn = min(as.numeric(dimnames(polar.brick.source)[[1]])) - .25,
xmx = max(as.numeric(dimnames(polar.brick.source)[[1]])) + .25,
ymn = min(as.numeric(dimnames(polar.brick.source)[[2]])) - .25,
ymx = max(as.numeric(dimnames(polar.brick.source)[[2]])) + .25)
fine.polar.area <- raster::area(polar.brick)
polar.one.degree.source <- data.frame(
lon = c(-.5, .5, -.5, .5),
lat = c(80.5, 80.5, 81.5, 81.5),
placeholder = rep(1, 4))
polar.one.degree.raster <- rasterFromXYZ(polar.one.degree.source, crs = CRS(wgs84))
polar.one.degree.area <- raster::area(polar.one.degree.raster)
as.data.frame(polar.one.degree.area, xy = T)
fine.clip.layer <- disaggregate(polar.one.degree.raster, 2)
clipped.fine.polar <- resample(polar.brick * fine.polar.area,
                               fine.clip.layer)
new.weighted.wx <- aggregate(clipped.fine.polar * 4, 2)
as.data.frame(new.weighted.wx, xy = T) # look at partial results.
new.weather <- new.weighted.wx / polar.one.degree.area
as.data.frame(new.weather, xy = T)
I'm looking for an R implementation of the ESRI 'Slice' tool, specifically I want to use the 'EQUAL_AREA' option.
I want to use an input raster, and reclassify raster values into 9 'bins', based on (approximately) the number of cells within each bin.
My raster has values between 0 and 50,000 and covers a very large geographical area. So, for example, values between 0 and 5,000 might become '1', values between 5,000 and 6,000 might become '2', and so on, depending on how many values/cells there are in each category.
Thanks.
There is no such package as far as I know, but you can use the classInt and raster packages to do what you are looking for. Although you would need to provide a reproducible example to get the best answer, I think the script below does the job:
# required libraries
library(classInt)
library(raster)
# sample data
data(volcano)
volcanoR <- raster(volcano)
n = 9 # this is number of classes
zClass <- classIntervals(values(volcanoR), n=n, style="jenks")
# chosen style: one of "fixed", "sd", "equal", "pretty", "quantile", "kmeans", "hclust",
# "bclust", "fisher", "jenks" or "dpih"
# classes for reclassification based on NBJ
df.rcl <- data.frame(zClass$brks[1:(length(zClass$brks)-1)],
zClass$brks[2:(length(zClass$brks))],
seq(1,length(zClass$brks)-1,1))
rec.ras <- reclassify(volcanoR, df.rcl, include.lowest=TRUE)
plot(rec.ras, col=terrain.colors(n, alpha=1, rev=T), legend=F, main="NBJ")
legend("topleft", legend = c(seq(1,length(zClass$brks)-1,1)),
fill = terrain.colors(n, alpha = 1, rev = T), cex=0.85, bty = "n")
Same approach for equal interval classes:
zClass <- classIntervals(values(volcanoR), n=n, style="equal")
# chosen style: one of "fixed", "sd", "equal", "pretty", "quantile", "kmeans", "hclust",
# "bclust", "fisher", "jenks" or "dpih"
# classes for reclassification based on EQUAL INTERVAL
df.rcl <- data.frame(zClass$brks[1:(length(zClass$brks)-1)],
zClass$brks[2:(length(zClass$brks))],
seq(1,length(zClass$brks)-1,1))
rec.ras <- reclassify(volcanoR, df.rcl, include.lowest=TRUE)
plot(rec.ras, col=terrain.colors(n, alpha=1, rev=T), legend=F, main="Equal Interval")
legend("topleft", legend = c(seq(1,length(zClass$brks)-1,1)),
fill = terrain.colors(n, alpha = 1, rev = T), cex=0.85, bty = "n")
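For ESRI's EQUAL_AREA behaviour specifically (roughly the same number of cells per class), style = "quantile" is probably the closest match; the same pattern applies:
zClass <- classIntervals(values(volcanoR), n=n, style="quantile")
df.rcl <- data.frame(zClass$brks[1:(length(zClass$brks)-1)],
                     zClass$brks[2:(length(zClass$brks))],
                     seq(1,length(zClass$brks)-1,1))
rec.ras <- reclassify(volcanoR, df.rcl, include.lowest=TRUE)
table(values(rec.ras))  # cell counts per class should be roughly equal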
library(raster)
data(volcano)
v <- raster(volcano)
Use quantile with sampleRegular (for very large rasters)
s <- seq(0, 1, 1/9)
q <- quantile(sampleRegular(v,500000), s)
x <- cut(v, q) # like reclassify
Check the results
plot(x)
table(values(x))
# 1 2 3 4 5 6 7 8 9
#597 582 547 562 613 575 592 556 632
I need to estimate the weighted average of raster values for the polygons shown as squares. I want to obtain the raster values and their weights within each square of the polygon shapefile (as shown in this post: How can I extract an area weighted sum from a raster into a polygon in R?).
But please see my code below and the output I am getting as weights. Can somebody tell me what I am doing wrong and why my output differs from the one shown in the post above? I want to obtain an output like in that post. The weights I am getting seem wrong too.
Please see the attached input data set here:
https://bft.usu.edu/w8crs
Thanks.
library(raster)
library(sp)
library(rgdal)
library(rgeos)
rlist = list.files(getwd(), pattern = "tif$", full.names = TRUE)
inshp = "Test"
rdata <- rlist[1]
r <- raster(rdata)
sdata <- readOGR(dsn=getwd(), layer=inshp)
sdata <- spTransform(sdata, crs(r))
extract(r, sdata, weights=TRUE)
Output:
[[1]]
value weight
56.75139 1
[[2]]
value weight
61.18781 1
[[3]]
value weight
56.75139 1
[[4]]
value weight
61.18781 1
Here is a reproducible example
library(raster)
packageVersion("raster")
#[1] ‘2.8.4’
r <- raster(xmn=0, xmx=1, ymn=0, ymx=1, nrow=2, ncol=2)
values(r) <- 1:4
m <- matrix(c(0.4, 0.6, 0.8, 0.6, 0.7, 0.2, 0.3, 0.2), ncol=2, byrow=TRUE)
s <- spPolygons(m)
plot(r); lines(s)
extract(r, s, weights=TRUE)
#[[1]]
# value weight
#[1,] 1 0.0625
#[2,] 2 0.1875
#[3,] 3 0.3125
#[4,] 4 0.4375
This did not work for you because your polygons are very small relative to the raster cell size. I have changed the function so that it increases the precision in those cases. I now get this with your data:
> extract(r, sdata, weights=TRUE)
[[1]]
value weight
56.75139 1
[[2]]
value weight
[1,] 61.18781 0.6592593
[2,] 56.75139 0.3407407
[[3]]
value weight
56.75139 1
[[4]]
value weight
[1,] 61.18781 0.5522388
[2,] 56.75139 0.4477612
To make it reproducible without downloads, for one of your polygons:
library(raster)
r <- raster(ncol=2, nrow=1, xmn=596959.624056728, xmx=624633.120455544, ymn=568805.230192675, ymx=582641.978392083, crs='+proj=aea +lat_1=29.5 +lat_2=45.5 +lat_0=37.5 +lon_0=-96 +x_0=0 +y_0=0 +datum=NAD83 +units=m')
values(r) <- c(61.18781, 56.75139)
g <- data.frame(matrix(c(rep(1, 18), rep(0,6), 611318.079488842,611440.751254539,610712.115334383,609842.749239201, 609703.303842618,611318.079488842,581038.816616668,579434.971927127, 579381.167042005,579315.223934334,580917.724282178,581038.816616668), ncol=6))
colnames(g) <- c('object','part','cump','hole','x','y')
p <- as(g, "SpatialPolygons")
crs(p) <- crs(r)
extract(r, p, weights=TRUE)
#[[1]]
# value weight
#[1,] 61.18781 0.6592593
#[2,] 56.75139 0.3407407
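To go from these value/weight pairs to the area-weighted average asked about in the question, you can either let extract() compute it (with fun=mean the weights are used for a weighted mean) or do it manually from the list; a small sketch using the reproducible example above:
# weighted mean per polygon, computed by extract itself
extract(r, p, weights=TRUE, fun=mean)
# or manually from the value/weight pairs
e <- extract(r, p, weights=TRUE)
sapply(e, function(m) sum(m[, 1] * m[, 2]) / sum(m[, 2]))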