I want to select raster cells that are within a certain distance (e.g., 1 km or 5 km) from the boundary of a polygon. Ultimately I want to take the average of only those raster cells that are within the specified distance from the boundary of the shapefile, inwards.
The way I thought I would approach this is to create a negative buffer inwards and subtract the buffer from the original polygon. Then mask and crop the raster using the new polygon and take the average.
Here's sample data demonstrating what I want to do.
library(raster)
# raster
r <- raster(xmn=1035792, xmx= 1116792, ymn=825303.6, ymx=937803.6, resolution = 12.5,crs = "+init=epsg:3174")
r <- setValues(r, 0)
# polygon
x <- c(1199999, 1080000, 1093067, 1090190, 1087977, 1070419, 1180419)
y <- c(957803.6,937803.6, 894366.9, 872153.9, 853703.0, 825353.6, 805353.6)
poly.lake <- SpatialPolygons(list(Polygons(list(Polygon(data.frame(x,y))), ID = 1)))
r <- mask(r, poly.lake)
r <- crop(r, poly.lake)
plot(poly.lake)
plot(r, add = T)
Instead of taking the average of the resulting raster r, I only want to average the raster cells that are within a certain specified distance from the boundary.
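For reference, here is a minimal sketch of that buffer-and-subtract idea with the sample data above (my addition; it assumes rgeos is installed and that the CRS units are meters, so 5000 means 5 km):
library(rgeos)
inner <- gBuffer(poly.lake, width = -5000)  # negative (inward) buffer
ring <- gDifference(poly.lake, inner)       # original polygon minus the buffer
rr <- mask(crop(r, ring), ring)             # keep only cells in the boundary ring
cellStats(rr, mean)                         # average of those cells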
The example data, but using "terra"
library(terra)
r <- rast(xmin=1035792, xmax= 1116792, ymin=825303.6, ymax=937803.6, resolution = 125, crs = "epsg:3174")
values(r) <- 1:ncell(r)
# polygon
x <- c(1199999, 1080000, 1093067, 1090190, 1087977, 1070419, 1180419)
y <- c(957803.6,937803.6, 894366.9, 872153.9, 853703.0, 825353.6, 805353.6)
p <- vect(cbind(x, y), "polygons", crs = "epsg:3174")
r <- mask(r, p)
r <- crop(r, p)
You can now take the internal buffer of p
b <- buffer(p, -10000)
x <- mask(r, b, inverse=TRUE)
global(x, mean, na.rm=TRUE)
#           mean
# lyr.1 296549.9
Or you can take both sides like this
bb <- buffer(as.lines(p), 10000)
y <- mask(r, bb)
global(y, mean, na.rm=TRUE)
#           mean
# lyr.1 296751.3
So there is a slight difference between these two approaches, I think because the first uses inverse=TRUE on the inward buffer while the second buffers the boundary line itself. I would go with the second approach.
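If you want to see where the two selections differ, a quick check (my addition, using x and y from above):
d <- is.na(x) != is.na(y)   # TRUE where exactly one approach keeps the cell
global(d, sum, na.rm=TRUE)  # how many cells differ
plot(d)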
Your drawing (and Chris' answer) suggest that you only want the distance to the western border. In that case, you can first find the start and end nodes you need (from 2 to 6)
plot(p)
points(p)
text(as.points(p), pos=2)
Select the segments between these nodes and create a line-type SpatVector.
g <- geom(p)
k <- vect(g[2:6,], "lines", crs=crs(p))
lines(k, col="red", lwd=2)
And now do as above.
bk <- buffer(k, 10000)
z <- mask(r, bk)
global(z, mean, na.rm=TRUE)
#         mean
# lyr.1 297747
If you wanted to get the part of buffer bk that is inside the original polygon p you can do
bki <- intersect(bk, p)
To complete the plot
polys(bk, lty=3, border=NA, col=adjustcolor("light blue", alpha.f = 0.4))
lines(bki, lty=3)
Finding which segments of a polygon to buffer was what puzzled me, and this seems a decent approach: cast_poly_to_subsegments. Taking your poly.lake as poly_sf:
library(sf)
poly_sf <- st_as_sf(poly.lake)  # the OP's poly.lake as an sf object
geom <- lapply(
  1:(length(st_coordinates(poly_sf)[, 1]) - 1),
  function(i) {
    rbind(
      as.numeric(st_coordinates(poly_sf)[i, 1:2]),
      as.numeric(st_coordinates(poly_sf)[i + 1, 1:2])
    )
  }
) |>
  st_multilinestring() |>
  st_sfc(crs = st_crs(poly_sf)) |>
  st_cast('LINESTRING')
gives us a plot which is a little surprising: the segment I assumed would be 'green' plots 'green and red'. The ring is wound clockwise, so the desired segments to buffer are 4 & 5.
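If you prefer not to eyeball the winding order, the shoelace formula gives it programmatically (my addition): a negative signed area means the ring is wound clockwise.
xy <- st_coordinates(poly_sf)[, 1:2]  # closed ring; last vertex repeats the first
sum(head(xy[, 1], -1) * xy[-1, 2] - xy[-1, 1] * head(xy[, 2], -1)) / 2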
lns_buf4 <- st_buffer(st_geometry(geom)[4], 1000, singleSide = TRUE)
lns_buf5 <- st_buffer(st_geometry(geom)[5], 1000, singleSide = TRUE)
lns_buf5_neg <- st_buffer(st_geometry(geom)[5], -1000, singleSide = TRUE)
plot(st_geometry(geom), col = c('red', 'yellow', 'blue', 'green'))
plot(lns_buf4, col = 'black', add = TRUE)
plot(lns_buf5, col = 'green', add = TRUE)
plot(lns_buf5_neg, col = 'blue', add = TRUE)
Whether +/-1000 is sufficient can be checked with a further intersection test between the buffer polygon(s) and the other boundary. If the desired sampling area is not rectangular, steps can be taken to construct a sampling polygon from the buffer and intersection.
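One way to run that test (my addition, reusing the geom segments from above): ask whether the buffer of segment 4 reaches any segment other than itself and its neighbours.
other <- st_geometry(geom)[-(3:5)]                # drop segment 4 and its neighbours
any(lengths(st_intersects(lns_buf4, other)) > 0)  # TRUE would mean the buffer is too wide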
# library(lwgeom)  # needed below for st_split()
# on poly_sf; draw() (from terra) is interactive: click points on the current plot
new_line <- draw(x = 'line', col = 'blue', lwd = 2, n = 10)
lns_buf5_10k_neg <- st_buffer(st_geometry(geom)[5], -10000, singleSide = TRUE)
new_line_sf <- st_as_sf(new_line, crs = st_crs(lns_buf5_10k_neg))
buf5_nline_split <- lwgeom::st_split(lns_buf5_10k_neg, new_line_sf$geometry)
irreg_smp_area <- st_collection_extract(buf5_nline_split)[1]
Though I'm happy to see it all done in terra.
I am looking for a general solution to create bivariate choropleth maps in R using raster files.
I have found the following code here which nearly does what I need but it is limited: it can only handle data which are between 0 and 1 on both axes. In my specific use-case one axis spans 0-1 while another spans between -1 and 1. Regardless as to my specific use-case, I think a more general function which can handle different data ranges would be useful to many people.
I have already tried updating the code within the function colmat to handle negative data but for the life of me cannot get it to work. In the interests of clarity I have avoided posting all of my failed attempts and have instead copied below the code I found at the link above, in the hope that someone may be able to offer a solution.
The current code first creates a colour matrix using colmat. The colour matrix generated is then used in bivariate.map along with your two raster files containing the data. I think the ideal solution would be to create the colour matrix based on the two rasters first (so that it can correctly bin the data based on your actual data, not the current solution which is between 0 and 1).
library(classInt)
library(raster)
library(rgdal)
library(dismo)
library(XML)
library(maps)
library(sp)
# Creates dummy rasters
rasterx<- raster(matrix(rnorm(400),5,5))
rasterx[rasterx <=0]<-1
rastery<- raster(matrix(rnorm(400),5,5))
# This function creates a colour matrix
# At present it cannot handle negative values i.e. the matrix spans from 0 to 1 along both axes
colmat<-function(nquantiles=10, upperleft=rgb(0,150,235, maxColorValue=255), upperright=rgb(130,0,80, maxColorValue=255), bottomleft="grey", bottomright=rgb(255,230,15, maxColorValue=255), xlab="x label", ylab="y label"){
my.data<-seq(0,1,.01)
my.class<-classIntervals(my.data,n=nquantiles,style="quantile")
my.pal.1<-findColours(my.class,c(upperleft,bottomleft))
my.pal.2<-findColours(my.class,c(upperright, bottomright))
col.matrix<-matrix(nrow = 101, ncol = 101, NA)
for(i in 1:101){
my.col<-c(paste(my.pal.1[i]),paste(my.pal.2[i]))
col.matrix[102-i,]<-findColours(my.class,my.col)
}
plot(c(1,1),pch=19,col=my.pal.1, cex=0.5,xlim=c(0,1),ylim=c(0,1),frame.plot=F, xlab=xlab, ylab=ylab,cex.lab=1.3)
for(i in 1:101){
col.temp<-col.matrix[i-1,]
points(my.data,rep((i-1)/100,101),pch=15,col=col.temp, cex=1)
}
seqs<-seq(0,100,(100/nquantiles))
seqs[1]<-1
col.matrix<-col.matrix[c(seqs), c(seqs)]
}
# Creates colour matrix
col.matrix<-colmat(nquantiles=2, upperleft="blue", upperright="yellow", bottomleft="green", bottomright="red", xlab="Species Richness", ylab="Change in activity hours")
# Function to create bivariate map, given the colour ramp created previously
bivariate.map<-function(rasterx, rastery, colormatrix=col.matrix, nquantiles=10){
quanmean<-getValues(rasterx)
temp<-data.frame(quanmean, quantile=rep(NA, length(quanmean)))
brks<-with(temp, quantile(temp,na.rm=TRUE, probs = c(seq(0,1,1/nquantiles))))
r1<-within(temp, quantile <- cut(quanmean, breaks = brks, labels = 2:length(brks),include.lowest = TRUE))
quantr<-data.frame(r1[,2])
quanvar<-getValues(rastery)
temp<-data.frame(quanvar, quantile=rep(NA, length(quanvar)))
brks<-with(temp, quantile(temp,na.rm=TRUE, probs = c(seq(0,1,1/nquantiles))))
r2<-within(temp, quantile <- cut(quanvar, breaks = brks, labels = 2:length(brks),include.lowest = TRUE))
quantr2<-data.frame(r2[,2])
as.numeric.factor<-function(x) {as.numeric(levels(x))[x]}
col.matrix2<-colormatrix
cn<-unique(colormatrix)
for(i in 1:length(col.matrix2)){
ifelse(is.na(col.matrix2[i]),col.matrix2[i]<-1,col.matrix2[i]<-which(col.matrix2[i]==cn)[1])
}
cols<-numeric(length(quantr[,1]))
for(i in 1:length(quantr[,1])){
a<-as.numeric.factor(quantr[i,1])
b<-as.numeric.factor(quantr2[i,1])
cols[i]<-as.numeric(col.matrix2[b,a])}
r<-rasterx
r[1:length(r)]<-cols
return(r)
}
# Creates map
bivmap<-bivariate.map(rasterx,rastery, colormatrix=col.matrix, nquantiles=2)
# Plots a map
plot(bivmap,frame.plot=F,axes=F,box=F,add=F,legend=F,col=as.vector(col.matrix))
Ideally, a more general function would take two raster files, determine the data ranges of both, and then create a bivariate choropleth map based on the number of bins/quantiles specified by the user.
Here are some ideas based on your code
Three functions
makeCM <- function(breaks=10, upperleft, upperright, lowerleft, lowerright) {
m <- matrix(ncol=breaks, nrow=breaks)
b <- breaks-1
b <- (0:b)/b
col1 <- rgb(colorRamp(c(upperleft, lowerleft))(b), max=255)
col2 <- rgb(colorRamp(c(upperright, lowerright))(b), max=255)
cm <- apply(cbind(col1, col2), 1, function(i) rgb(colorRamp(i)(b), max=255))
cm[, ncol(cm):1 ]
}
plotCM <- function(cm, xlab="", ylab="", main="") {
n <- matrix(1:length(cm), nrow=nrow(cm), byrow=TRUE)
r <- raster(n)
cm <- cm[, ncol(cm):1]
image(r, col=cm, axes=FALSE, xlab=xlab, ylab=ylab, main=main)
}
rasterCM <- function(x, y, n) {
q1 <- quantile(x, seq(0,1,1/(n)))
q2 <- quantile(y, seq(0,1,1/(n)))
r1 <- cut(x, q1, include.lowest=TRUE)
r2 <- cut(y, q2, include.lowest=TRUE)
overlay(r1, r2, fun=function(i, j) {
(j-1) * n + i
})
}
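The cell values produced by rasterCM encode both quantile classes in a single integer via (j-1) * n + i; a small helper (my addition) inverts that encoding:
classesFromCM <- function(v, n) {
  i <- ((v - 1) %% n) + 1   # class of x (column in the colour matrix)
  j <- ((v - 1) %/% n) + 1  # class of y (row)
  cbind(i = i, j = j)
}
classesFromCM(7, n = 5)  # value 7 with 5 breaks -> x class 2, y class 2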
Example data
library(raster)
set.seed(42)
r <- raster(ncol=50, nrow=50, xmn=0, xmx=10, ymn=0,ymx=10, crs="+proj=utm +zone=1")
x <- init(r, "x") * runif(ncell(r), .5, 1)
y <- init(r, "y") * runif(ncell(r), .5, 1)
And now use the functions
breaks <- 5
cmat <- makeCM(breaks, "blue", "yellow", "green", "red")
xy <- rasterCM(x, y, breaks)
par(mfrow=c(2,2), mai=c(.5,.5,.5,.5), las=1)
plot(x)
plot(y)
par(mai=c(1,1,1,1))
plotCM(cmat, "var1", "var2", "legend")
par(mai=c(.5,.5,.5,.5))
image(xy, col=cmat, las=1)
How can I find a non-linear path through raster image data, e.g., with a least-cost algorithm? The starting and ending points are known and given as:
Start point = (0,0)
End point = (12,-5)
For example, extract the approximate path of a winding river through a (greyscale) raster image.
# fake up some noisy, but reproducible, "winding river" data
set.seed(123)
df <- data.frame(x=seq(0,12,by=.01),
y=sapply(seq(0,12,by=.01), FUN = function(i) 10*sin(i)+rnorm(1)))
# convert to "pixels" of raster data
# assumption: image color is greyscale, only need one numeric value, v
img <- data.frame(table(round(df$y,0), round(df$x,1)))
names(img) <- c("y","x","v")
img$y <- as.numeric(as.character(img$y))
img$x <- as.numeric(as.character(img$x))
## take a look at the fake "winding river" raster image...
library(ggplot2)
ggplot(img) +
geom_raster(aes(x=x,y=y,fill=v))
As I was writing up my example, I stumbled upon an answer using the 'gdistance' R package... hopefully others will find this useful.
library(gdistance)
library(sp)
library(ggplot2)
# convert to something rasterFromXYZ() understands
spdf <- SpatialPixelsDataFrame(points = img[c("x","y")], data = img["v"])
# use rasterFromXYZ to make a RasterLayer
r <- rasterFromXYZ(spdf)
# make a transition layer, specifying a sensible function and the number of connection directions
tl <- transition(r, function(x) min(x), 8)
## mean(x), min(x), and max(x) produced similar results for me
# extract the shortest path as something we can plot
sPath <- shortestPath(tl, c(0,0), c(12,-5), output = "SpatialLines")
# conversion for ggplot
sldf <- fortify(SpatialLinesDataFrame(sPath, data = data.frame(ID = 1)))
# plot the original raster, truth (white), and the shortest path solution (green)
ggplot(img) +
geom_raster(aes(x=x,y=y,fill=v)) +
stat_function(data=img, aes(x=x), fun = function(x) 10*sin(x), geom="line", color="white") +
geom_path(data=sldf, aes(x=long,y=lat), color="green")
I wanted to make sure that I wasn't just giving myself too easy of a problem... so I made a noisier version of the image.
img2 <- img
img2$v <- ifelse(img2$v==0, runif(sum(img2$v==0),3,8), img2$v)
spdf2 <- SpatialPixelsDataFrame(points = img2[c("x","y")], data = img2["v"])
r2 <- rasterFromXYZ(spdf2)
# for this noisier image, I needed a different transition function.
# The one from the vignette worked well enough for this example.
tl2 <- transition(r2, function(x) 1/mean(x), 8)
sPath2 <- shortestPath(tl2, c(0,0), c(12,-5), output = "SpatialLines")
sldf2 <- fortify(SpatialLinesDataFrame(sPath2, data = data.frame(ID = 1)))
ggplot(img2) +
geom_raster(aes(x=x,y=y,fill=v)) +
stat_function(data=img2, aes(x=x), fun = function(x) 10*sin(x), geom="line", color="white") +
geom_path(data=sldf2, aes(x=long,y=lat), color="green")
UPDATE: using real raster data...
I wanted to see if the same workflow would work on an actual real-world raster image and not just fake data, so...
library(jpeg)
library(reshape2)  # for melt() and dcast() below
# grab some river image...
url <- "https://c8.alamy.com/comp/AMDPJ6/fiji-big-island-winding-river-aerial-AMDPJ6.jpg"
download.file(url, "river.jpg", mode = "wb")
jpg <- readJPEG("./river.jpg")
img3 <- melt(jpg, varnames = c("y","x","rgb"))
img3$rgb <- as.character(factor(img3$rgb, levels = c(1,2,3), labels=c("r","g","b")))
img3 <- dcast(img3, x + y ~ rgb)
# convert rgb to greyscale
img3$v <- img3$r*.21 + img3$g*.72 + img3$b*.07
For rgb to greyscale, see: https://stackoverflow.com/a/27491947/2371031
# define some start/end point coordinates
pts_df <- data.frame(x = c(920, 500),
y = c(880, 50))
# set a reference "grey" value as the mean of the start and end point "v"s
ref_val <- mean(c(subset(img3, x==pts_df[1,1] & y==pts_df[1,2])$v,
subset(img3, x==pts_df[2,1] & y==pts_df[2,2])$v))
spdf3 <- SpatialPixelsDataFrame(points = img3[c("x","y")], data = img3["v"])
r3 <- rasterFromXYZ(spdf3)
# transition layer defines "conductance" between two points
# x is the two point values, "v" = c(v1, v2)
# 0 = no conductance, >>1 = good conductance, so
# make a transition function that encourages only small changes in v compared to the reference value.
tl3 <- transition(r3, function(x) (1/max(abs((x/ref_val)-1))^2)-1, 8)
sPath3 <- shortestPath(tl3, as.numeric(pts_df[1,]), as.numeric(pts_df[2,]), output = "SpatialLines")
sldf3 <- fortify(SpatialLinesDataFrame(sPath3, data = data.frame(ID = 1)))
# plot greyscale with points and path
ggplot(img3) +
geom_raster(aes(x,y, fill=v)) +
scale_fill_continuous(high="white", low="black") +
scale_y_reverse() +
geom_point(data=pts_df, aes(x,y), color="red") +
geom_path(data=sldf3, aes(x=long,y=lat), color="green")
I played around with different transition functions before finding one that worked. This one is probably more complex than it needs to be, but it works. You can increase the power term (from 2 to 3, 4, 5, 6...) and it continues to work; it did not find a correct solution with the power term removed.
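For example, a steeper variant of the same conductance function (my tweak, not from the original run):
# power 4 instead of 2; larger powers punish deviations from ref_val harder
tl3b <- transition(r3, function(x) (1/max(abs((x/ref_val)-1))^4)-1, 8)
sPath3b <- shortestPath(tl3b, as.numeric(pts_df[1,]), as.numeric(pts_df[2,]),
                        output = "SpatialLines")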
Alternative solution using the igraph package.
Found an alternative set of answers using the 'igraph' R package. I think it is important to note that one of the big differences here is that 'igraph' supports n-dimensional graphs whereas 'gdistance' only supports 2D graphs. So, for example, extending this answer into 3D is relatively easy, as sketched below.
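For instance, a 3D lattice just needs a third entry in dimvector (a sketch of the idea, not part of the workflow below):
library(igraph)
l3d <- make_lattice(dimvector = c(10, 10, 10), directed = FALSE, circular = FALSE)
summary(l3d)  # 1000 vertices, each linked to its (up to 6) axis neighbours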
library(igraph)
# make a 2D lattice graph, with same dimensions as "img"
l <- make_lattice(dimvector = c(length(unique(img$y)),
length(unique(img$x))), directed=F, circular=F)
summary(l)
# > IGRAPH ba0963d U--- 3267 6386 -- Lattice graph
# > + attr: name (g/c), dimvector (g/n), nei (g/n), mutual (g/l), circular (g/l)
# set vertex attributes
V(l)$x = img$x
V(l)$y = img$y
V(l)$v = img$v
# "color" is a known attribute that will be used by plot.igraph()
V(l)$color = grey.colors(length(unique(img$v)))[img$v+1]
# compute edge weights as a function of attributes of the two connected vertices
el <- get.edgelist(l)
# "weight" is a known edge attribute, and is used in shortest_path()
# I was confused about weights... lower weights are better, Inf weights will be avoided.
# also note from help: "if all weights are positive, then Dijkstra's algorithm is used."
E(l)$weight <- 1/(pmax(V(l)[el[, 1]]$v, V(l)[el[, 2]]$v))
E(l)$color = grey.colors(length(unique(E(l)$weight)))[factor(E(l)$weight)]
Edge weights calculation courtesy of: https://stackoverflow.com/a/27446127/2371031 (thanks!)
# find the start/end vertices
start = V(l)[V(l)$x == 0 & V(l)$y == 0]
end = V(l)[V(l)$x == 12 & V(l)$y == -5]
# get the shortest path, returning "both" (vertices and edges)...
result <- shortest_paths(graph = l, from = start, to = end, output = "both")
# color the edges that were part of the shortest path green
V(l)$color = ifelse(V(l) %in% result$vpath[[1]], "green", V(l)$color)
E(l)$color = ifelse(E(l) %in% result$epath[[1]], "green", E(l)$color)
# color the start and end vertices red
V(l)$color = ifelse(V(l) %in% c(start,end), "red", V(l)$color)
plot(l, vertex.shape = "square", vertex.size=2, vertex.frame.color=NA, vertex.label=NA, curved=F)
Second (noisier) example requires a different formula to compute edge weights.
img2 <- img
img2$v <- ifelse(img2$v==0, runif(sum(img2$v==0),3,8), img2$v)
l <- make_lattice(dimvector = c(length(unique(img2$y)),
length(unique(img2$x))), directed=F, circular=F)
# set vertex attributes
V(l)$x = img2$x
V(l)$y = img2$y
V(l)$v = img2$v
V(l)$color = grey.colors(length(unique(img2$v)))[factor(img2$v)]
# compute edge weights
el <- get.edgelist(l)
# proper edge weight calculation is the key to a good solution...
E(l)$weight <- (pmin(V(l)[el[, 1]]$v, V(l)[el[, 2]]$v))
E(l)$color = grey.colors(length(unique(E(l)$weight)))[factor(E(l)$weight)]
start = V(l)[V(l)$x == 0 & V(l)$y == 0]
end = V(l)[V(l)$x == 12 & V(l)$y == -5]
# get the shortest path, returning "both" (vertices and edges)...
result <- shortest_paths(graph = l, from = start, to = end, output = "both")
# color the edges that were part of the shortest path green
V(l)$color = ifelse(V(l) %in% result$vpath[[1]], "green", V(l)$color)
E(l)$color = ifelse(E(l) %in% result$epath[[1]], "green", E(l)$color)
# color the start and end vertices red
V(l)$color = ifelse(V(l) %in% c(start,end), "red", V(l)$color)
plot(l, vertex.shape = "square", vertex.size=2, vertex.frame.color=NA, vertex.label=NA, curved=F)
In my example I create a raster:
require(raster); require(sp)
## Raster creation
r <- raster(nc=10, nr=10)
r <- setValues(r, round(runif(ncell(r))* 255))
Then I extract pixel values by selecting coordinates:
x <- c(-150)
y <- c(-80)
p <- data.frame(x,y)
pontos <- SpatialPoints(p)
p$cel <- cellFromXY(r, pontos)
p$col <- colFromCell(r, p$cel)
p$row <- rowFromCell(r, p$cel)
p
plot(r)
text(r)
points(pontos, pch = 4, col = 2)
But I'd like a function where I select a coordinate and it extracts entire horizontal lines of the raster every third row from the given coordinate (i.e., skipping two pixels each time). For example, I choose the coordinate xy(-150, -80), but my function below returns values only for the first row; I need rows 4, 7 and 10 too.
require(plyr)
vals2cols <- ldply(1:nrow(p),
function(ir){
getValuesBlock(r,
col = p$col[ir],
ncols = 10,
row = p$row[ir],
nrows = 1)
}# end fun
)
df <- data.frame(p, vals2cols)
df
Is this possible?
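It is. A minimal sketch (my addition, reusing r and p from above): take every third row starting at the row of the chosen coordinate and read each full row with getValuesBlock().
rows <- seq(p$row[1], nrow(r), by = 3)  # rows 1, 4, 7, 10 for this raster
vals2rows <- t(sapply(rows, function(i)
  getValuesBlock(r, row = i, nrows = 1, col = 1, ncols = ncol(r))))
data.frame(row = rows, vals2rows)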
Is there a way to generate regularly spaced (e.g., 500 meters apart) points within a polygon using R? I have been trying to use the sp package but can't seem to define a set of points that are spaced a certain distance apart from one another. My aim is to generate the points, then extract their lat/long coordinates into a new dataframe. Any help would be much appreciated! Thanks
Quite straightforward and almost out-of-the-box.
As the OP did not share data, buckle up, put your seats in an upright position, and let us fly to Paris. There we will adapt a geosphere function, and with its help divide Paris' shape into lon/lat coordinates that are 500 meters apart (vertically and horizontally).
# Load necessary libraries.
library(raster)
library(geosphere)
library(tidyverse)
library(sp)
# This is an adapted version of geosphere's destPoint() function that works with
# changing d (distance).
destPoint_v <- function (x, y, b, d, a = 6378137, f = 1/298.257223563, ...)
{
r <- list(...)$r
if (!is.null(r)) {
return(.old_destPoint(x, y, b, d, r = r))
}
b <- as.vector(b)
d <- as.vector(d)
x <- as.vector(x)
y <- as.vector(y)
p <- cbind(x, y, b, d)
r <- .Call("_geodesic", as.double(p[, 1]), as.double(p[, 2]),
as.double(p[, 3]), as.double(p[, 4]),
as.double(a), as.double(f),
PACKAGE = "geosphere")
r <- matrix(r, ncol = 3, byrow = TRUE)
colnames(r) <- c("lon", "lat", "finalbearing")
return(r[, 1:2, drop = FALSE])
}
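# Quick sanity check of the adapted function (editor's addition): three points
# 0, 500 and 1000 m due east of a spot in Paris; d varies per point, which is
# what the adaptation enables.
destPoint_v(2.3522, 48.8566, b = 90, d = c(0, 500, 1000))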
# Data can be downloaded from
# http://osm13.openstreetmap.fr/~cquest/openfla/export/communes-20190101-shp.zip
# or
# https://www.data.gouv.fr/en/datasets/decoupage-administratif-communal-francais-issu-d-openstreetmap/
# ("Export simple de janvier 2019 (225Mo)")
# Load shapefile (adjust the path to where you unpacked the download).
shp <- raster::shapefile("communes-20190101-shp/communes-20190101.shp")
# Extract Paris.
paris <- shp[shp$nom == "Paris", ]
# Set distance of points in meters.
dist <- 500
# Extract bounding box from Paris' SpatialPolygonsDataFrame.
bbox <- raster::extent(paris)
# Calculate number of points on the vertical axis.
ny <- ceiling(geosphere::distGeo(p1 = c(bbox@xmin, bbox@ymin),
                                 p2 = c(bbox@xmin, bbox@ymax)) / dist)
# Calculate maximum number of points on the horizontal axis.
# This needs to be calculated for the lowermost and uppermost horizontal lines
# as the distance between latitudinal lines varies when the longitude changes.
nx <- ceiling(max(geosphere::distGeo(p1 = c(bbox@xmin, bbox@ymin),
                                     p2 = c(bbox@xmax, bbox@ymin)) / dist,
                  geosphere::distGeo(p1 = c(bbox@xmin, bbox@ymax),
                                     p2 = c(bbox@xmax, bbox@ymax)) / dist))
# Create result data frame with number of points on vertical axis.
df <- data.frame(ny = 1:ny)
# Calculate coordinates along the vertical axis.
pts <- geosphere::destPoint(p = c(bbox@xmin, bbox@ymin),
                            b = 0, d = dist * (1:ny - 1))
df$x <- pts[, 1]
df$y <- pts[, 2]
# Add points on horizontal axis.
df <- tidyr::crossing(nx = 1:nx, df)
# Calculate coordinates.
pts <- destPoint_v(df$x, df$y, b = 90, d = dist * (df$nx - 1))
# Turn coordinates into SpatialPoints.
pts <- SpatialPoints(cbind(pts[, 1], pts[, 2]), proj4string = CRS(proj4string(paris)))
# Cut to boundaries of Paris.
result <- raster::intersect(pts, paris)
# Plot result.
plot(result)
title("Paris in Points")
Kind of looks like a fish, doesn't it?
Here is a way to do it, assuming you have a lon/lat polygon, by first transforming it to a planar CRS (not as nifty as Roman's solution with destPoint).
Packages and example data
library(raster)
library(rgdal)
p <- shapefile(system.file("external/lux.shp", package="raster"))[1,]
Transform to planar crs (pick one that matches your data!)
putm <- spTransform(p, "+proj=utm +zone=32 +datum=WGS84")
Create a raster with 500 m resolution, rasterize the polygon and transform to points
r <- raster(putm, res=500)
r <- rasterize(putm, r)
pts <- rasterToPoints(r, spatial=TRUE)
Transform the points to lon/lat and plot the results
pts_lonlat <- spTransform(pts, "+proj=longlat +datum=WGS84")
result <- coordinates(pts_lonlat)
plot(p)
points(result, pch="+", cex=.5)
(looks like an elephant)
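Since the OP also wanted the lon/lat coordinates in a data frame, that is one more line. And as an aside (my addition, not from the answers above), sp::spsample() can do the regular sampling on the projected polygon directly:
# coordinates as a data.frame
df_pts <- data.frame(lon = result[, 1], lat = result[, 2])
# alternative: regular ~500 m sampling directly on the projected polygon
pts2 <- spsample(putm, n = 1000, type = "regular", cellsize = 500)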
I am using the example here for discussion:
ggplot map with l
library(rgdal)
library(ggplot2)
library(maptools)
# Data from http://thematicmapping.org/downloads/world_borders.php.
# Direct link: http://thematicmapping.org/downloads/TM_WORLD_BORDERS_SIMPL-0.3.zip
# Unpack and put the files in a dir 'data'
gpclibPermit()
world.map <- readOGR(dsn="data", layer="TM_WORLD_BORDERS_SIMPL-0.3")
world.ggmap <- fortify(world.map, region = "NAME")
n <- length(unique(world.ggmap$id))
df <- data.frame(id = unique(world.ggmap$id),
growth = 4*runif(n),
category = factor(sample(1:5, n, replace=T)))
## noise
df[c(sample(1:100,40)),c("growth", "category")] <- NA
ggplot(df, aes(map_id = id)) +
geom_map(aes(fill = growth, color = category), map =world.ggmap) +
expand_limits(x = world.ggmap$long, y = world.ggmap$lat) +
scale_fill_gradient(low = "red", high = "blue", guide = "colorbar")
Gives the following results:
I would like to map one variable to the left "half" of a country and a different variable to the right "half" of the country. I put "half" in quotes because it's not clearly defined (or at least I'm not clearly defining it). The answer by Ian Fellows might help (which gives an easy way to get the centroid). I'm hoping for something so that I can do aes(left_half_color = growth, right_half_color = category) in the example. I'm also interested in top half and bottom half if that is different.
If possible, I would also like to map the individual centroids of the halves to something.
This is a solution without ggplot that relies on the plot function instead. It also requires the rgeos package in addition to the code in the OP:
EDIT Now with 10% less visual pain
EDIT 2 Now with centroids for east and west halves
library(rgeos)
library(RColorBrewer)
# Get centroids of countries
theCents <- coordinates(world.map)
# extract the polygons objects
pl <- slot(world.map, "polygons")
# Create square polygons that cover the west (left) half of each country's bbox
lpolys <- lapply(seq_along(pl), function(x) {
lbox <- bbox(pl[[x]])
lbox[1, 2] <- theCents[x, 1]
Polygon(expand.grid(lbox[1,], lbox[2,])[c(1,3,4,2,1),])
})
# Slightly different data handling
wmRN <- row.names(world.map)
n <- nrow(world.map@data)
world.map@data[, c("growth", "category")] <- list(growth = 4*runif(n),
category = factor(sample(1:5, n, replace=TRUE)))
# Determine the intersection of each country with the respective "left polygon"
lPolys <- lapply(seq_along(lpolys), function(x) {
curLPol <- SpatialPolygons(list(Polygons(lpolys[x], wmRN[x])),
proj4string=CRS(proj4string(world.map)))
curPl <- SpatialPolygons(pl[x], proj4string=CRS(proj4string(world.map)))
theInt <- gIntersection(curLPol, curPl, id = wmRN[x])
theInt
})
# Create a SpatialPolygonsDataFrame of the intersections
lSPDF <- SpatialPolygonsDataFrame(SpatialPolygons(unlist(lapply(lPolys,
slot, "polygons")), proj4string = CRS(proj4string(world.map))),
world.map@data)
##########
## EDIT ##
##########
# Create a slightly less harsh color set
s_growth <- scale(world.map@data$growth,
center = min(world.map@data$growth), scale = max(world.map@data$growth))
growthRGB <- colorRamp(c("red", "blue"))(s_growth)
growthCols <- apply(growthRGB, 1, function(x) rgb(x[1], x[2], x[3],
maxColorValue = 255))
catCols <- brewer.pal(nlevels(lSPDF@data$category), "Pastel2")
# and plot
plot(world.map, col = growthCols, bg = "grey90")
plot(lSPDF, col = catCols[lSPDF@data$category], add = TRUE)
Perhaps someone can come up with a good solution using ggplot2. However, based on this answer to a question about multiple fill scales for a single graph ("You can't"), a ggplot2 solution seems unlikely without faceting (which might be a good approach, as suggested in the comments above).
EDIT re: mapping centroids of the halves to something: The centroids for the west ("left") halves can be obtained by
coordinates(lSPDF)
Those for the east ("right") halves can be obtained by creating an rSPDF object in a similar way:
# Create square polygons that cover the east (right) half of each country's bbox
rpolys <- lapply(seq_along(pl), function(x) {
rbox <- bbox(pl[[x]])
rbox[1, 1] <- theCents[x, 1]
Polygon(expand.grid(rbox[1,], rbox[2,])[c(1,3,4,2,1),])
})
# Determine the intersection of each country with the respective "right polygon"
rPolys <- lapply(seq_along(rpolys), function(x) {
curRPol <- SpatialPolygons(list(Polygons(rpolys[x], wmRN[x])),
proj4string=CRS(proj4string(world.map)))
curPl <- SpatialPolygons(pl[x], proj4string=CRS(proj4string(world.map)))
theInt <- gIntersection(curRPol, curPl, id = wmRN[x])
theInt
})
# Create a SpatialPolygonsDataFrame of the eastern (right) intersections
rSPDF <- SpatialPolygonsDataFrame(SpatialPolygons(unlist(lapply(rPolys,
slot, "polygons")), proj4string = CRS(proj4string(world.map))),
world.map@data)
Then information could be plotted on the map according to the centroids of lSPDF or rSPDF:
points(coordinates(rSPDF), col = factor(rSPDF@data$REGION))
# or
text(coordinates(lSPDF), labels = lSPDF@data$FIPS, cex = .7)