Calculate and plot vector field of an arbitrary RasterLayer - r

Problem statement:
With ggquiver::geom_quiver() we can plot vector fields, provided we know x, y, xend, and yend.
How can I calculate these parameters for an arbitrary RasterLayer of elevations?
How can I size the arrows so that their lengths are proportional to the gradient (slope) at each location (e.g., as in the first plot below)?
Background:
# ggquiver example
library(tidyverse)
library(ggquiver)
expand.grid(x = seq(0, pi, pi/12), y = seq(0, pi, pi/12)) %>%
  ggplot(aes(x = x, y = y, u = cos(x), v = sin(y))) +
  geom_quiver()
A related approach uses rasterVis::vectorplot, which relies on raster::terrain (provided the field units == CRS units) to calculate and plot a vector field. Source code is here.
library(raster)
library(rasterVis)
r <- getData('alt', country='FRA', mask=TRUE)
r <- aggregate(r, 20)
vectorplot(r, par.settings=RdBuTheme())
Conclusion:
To review, I'd like to take an arbitrary RasterLayer of elevations, convert it to a data.frame, calculate the x, y, xend, and yend components of an elevation vector field, sized so that the arrows show the relative slope at each point (as in plots 1 and 2 above), and plot with ggquiver. Something like:
names(r) <- "z"
rd <- as.data.frame(r, xy=TRUE)
# calculate x, y, xend, yend for gradient vectors, add to rd, then plot
ggplot(rd) +
geom_raster(aes(x, y, fill = z)) +
geom_quiver(aes(x, y, xend, yend))

Effectively what you're asking is to convert a 2D scalar field into a vector field. There are a few different ways to do this.
The raster package contains the function terrain, which creates new raster layers that will give you both the angle of your desired vector at each point (i.e. the aspect), and its magnitude (the slope). We can use a little trigonometry to convert these into the North-South and East-West basis vectors used by ggquiver and add them to our original raster before turning the whole thing into a data frame.*
terrain_raster <- terrain(r, opt = c('slope', 'aspect'))
r$u <- terrain_raster$slope[] * sin(terrain_raster$aspect[])  # East-West component
r$v <- terrain_raster$slope[] * cos(terrain_raster$aspect[])  # North-South component
rd <- as.data.frame(r, xy = TRUE)
However, in most cases this will not make for a good plot. If you don't first aggregate the raster, you will have one gradient for each pixel on the image, which won't plot well. On the other hand, if you do aggregate, you will have a nice vector field, but your raster will look "blocky". Therefore, having a single data frame for your plot is probably not the best way to go.
The following function will take a raster and plot it with an overlaid vector field. You can adjust how much the vector field is aggregated without affecting the raster, and you can specify an arbitrary vector of colours for your raster.
raster2quiver <- function(rast, aggregate = 50, colours = terrain.colors(6))
{
  names(rast) <- "z"
  # aggregate only the copy used for the arrows, so the raster itself stays sharp
  quiv <- aggregate(rast, aggregate)
  terr <- terrain(quiv, opt = c('slope', 'aspect'))
  quiv$u <- terr$slope[] * sin(terr$aspect[])  # East-West component
  quiv$v <- terr$slope[] * cos(terr$aspect[])  # North-South component
  quiv_df <- as.data.frame(quiv, xy = TRUE)
  rast_df <- as.data.frame(rast, xy = TRUE)

  print(ggplot(mapping = aes(x = x, y = y, fill = z)) +
          geom_raster(data = rast_df, na.rm = TRUE) +
          geom_quiver(data = quiv_df, aes(u = u, v = v), vecsize = 1.5) +
          scale_fill_gradientn(colours = colours, na.value = "transparent") +
          theme_bw())

  return(quiv_df)
}
So, trying it out on your France example, after first defining a similar colour palette, we get
pal <- c("#B2182B", "#E68469", "#D9E9F1", "#ACD2E5", "#539DC8", "#3C8ABE", "#2E78B5")
raster2quiver(getData('alt', country = 'FRA', mask = TRUE), colours = pal)
Now to show that it works on an arbitrary raster (provided it has a projection assigned) let's test it out on this image, as converted to a raster. This time, we have a lower resolution so we choose a smaller aggregate value. We'll also choose a transparent colour for the lowest values to give a nicer plot:
rast <- raster::raster("https://i.stack.imgur.com/tXUXO.png")
# Add a fake arbitrary projection otherwise "terrain()" doesn't work:
projection(rast) <- "+proj=lcc +lat_1=48 +lat_2=33 +lon_0=-100 +ellps=WGS84"
raster2quiver(rast, aggregate = 20, colours = c("#FFFFFF00", "red"))
* I should point out that geom_quiver's mapping aesthetic takes arguments called u and v, which represent the basis vectors pointing East and North, respectively. The ggquiver package converts them to xend and yend values using stat_quiver. If you prefer to use xend and yend values you could just use geom_segment to plot your vector field, but this makes it more complex to control the appearance of the arrows. Hence, this solution finds the u and v components instead.
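For completeness, a rough sketch of that geom_segment route, assuming the rd data frame built above (elevation layer renamed to z, with u and v columns added) and an arbitrary scale_factor you would tune by hand:
# Hypothetical geom_segment alternative; scale_factor is an assumed tuning knob,
# not part of the answer above.
scale_factor <- 0.1
ggplot(rd) +
  geom_raster(aes(x, y, fill = z)) +
  geom_segment(aes(x = x, y = y,
                   xend = x + scale_factor * u,
                   yend = y + scale_factor * v),
               arrow = grid::arrow(length = grid::unit(0.1, "cm")))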

Related

`geom_raster` for data with missing stripes

Frequently I want to plot raster data with a lot of missing values, including entire missing rows or columns. Consider the following as a toy example:
library(ggplot2)
set.seed(50)
d = expand.grid(x = 1:100, y = 1:100)
d$v = rnorm(nrow(d))
d[d$x %in% sample(d$x, 5), "v"] = NA_real_
ggplot() + geom_raster(aes(x, y, fill = v), data = d)
This works so far, but what if I want to omit plotting the missing values at all, instead of plotting gray squares for them? If I change data = d to data = d[!is.na(d$v),], then I get the warning "Raster pixels are placed at uneven horizontal intervals and will be shifted. Consider using geom_tile() instead." I don't see a shift in this example, but I worry that if ggplot2 shifts the data, that could lead to squares being plotted at the wrong coordinates for real data. How do I avoid this shifting?
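A minimal sketch of the geom_tile() route suggested by the warning itself, reusing the toy data d from above:
# geom_tile() places each tile at its own coordinates, so dropping the NA rows
# avoids both the gray squares and the shifting warning.
ggplot() + geom_tile(aes(x, y, fill = v), data = d[!is.na(d$v), ])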

R Data points perimeter buffer

I need to create a buffer zone on the set of data points with x and y coordinates (grey points on the graph).
Unfortunately, I don’t have a perimeter border of the points, from which to create a buffer.
I was trying to calculate the perimeter using the chull function; however, it is not working properly (orange area).
I can calculate the border points using max/min functions for the data by some step (let's say 10 m, red dots), and try to calculate the buffer from those points.
Is anyone aware of a more correct and clean way to calculate the buffer zone for a set of points?
You could do a tessellation around the points. Points at the border will have much larger polygons.
library(deldir)
library(ggplot2)
# Dirichlet (Voronoi) tessellation around the points
triang <- deldir(data$x, data$y)
border <- triang$summary
# border points get much larger tile areas; 260 is the area threshold used here
border$Selected <- border$dir.area > 260
ggplot(border[order(border$Selected), ], aes(x = x, y = y, colour = Selected)) +
  geom_point()
Thanks a lot for your suggestions and comments.
Indeed, it was my fault for omitting the alphahull package.
After identifying the border with ashape I created a buffer polygon and identified the data that lie inside and outside the buffer. The challenge was to correctly extract the polygon from ashape, but the RPubs solution saved me.
You can also see the graphical example here.
Best
## load
library(ggplot2); library(alphahull);
library(igraph); library(rgeos)
## Load the data
data.df<-read.csv("Data/Cencus/Lyford_meta.csv",sep=",",header=TRUE)
#Remove the duplicates in the data to do the chull calculation
data <- data.df[!duplicated(paste(data.df$xsite, data.df$ysite, sep ="_")), c("xsite","ysite") ]
#calculate the chull with alpha 20
data.chull <- ashape(data, alpha = 20)
## Below is the code to extract polygon from the ashape chull function
## credit to: http://rpubs.com/geospacedman/alphasimple
order.chull <- graph.edgelist(cbind(as.character(data.chull$edges[, "ind1"]), as.character(data.chull$edges[,"ind2"])), directed = FALSE)
cutg <- order.chull - E(order.chull)[1]
ends <- names(which(degree(cutg) == 1))
path <- get.shortest.paths(cutg, ends[1], ends[2])[[1]]
pathX <- as.numeric(V(order.chull)[unlist(path[[1]])]$name)
pathX = c(pathX, pathX[1])
data.chull <- as.data.frame(data.chull$x[pathX, ])
## Create a spatial object from the polygon and apply a buffer to
## Then extract the data to the dataframe.
data.chull.poly <- SpatialPolygons(list(Polygons(list(Polygon(as.matrix(data.chull))),"s1")))
data.chull.poly.buff <- gBuffer(data.chull.poly, width = -10)
data.buffer <- fortify(data.chull.poly.buff)[c("long","lat")]
## Identify the data that are inside the buffer polygon
data$posit <- "Outside"
data$posit[point.in.polygon(data$xsite,data$ysite,data.buffer$long,data.buffer$lat) %in% c(1,2,3)] <- "Inside"
## Plot the results
ggplot()+
theme_bw()+xlab("X coordinates (m)")+ylab("Y coordinates (m)") +
geom_point(data = data, aes(xsite, ysite, color = posit))+
geom_polygon(data = data.chull, aes(V1, V2), color = "black", alpha = 0)+
geom_polygon(data = data.buffer, aes(long, lat), color = "blue", alpha = 0)

Plotting z as a color with R on a rGoogleMap

I have a function and I want to plot only x and y; z should be represented as a color. Is there a package that does the work for me?
f = function(a,b){
dnorm(a^2+b^2)
}
x = seq(-2, 2, 0.1)
y = seq(-2, 2, 0.1)
z = outer(x, y, f)
persp(x, y, z)
I want to plot this function on a map generated with rGoogleMaps. Maybe there is a more specific package for this use?
Something like this?
library(ggmap) # loads ggplot2 as well
library(RgoogleMaps) # for getGeoCode
london.center <- getGeoCode("London")
london <- get_map("London", zoom=12)
x <- seq(-2,2,0.1)
df <- expand.grid(x=x,y=x)
df$z <- with(df,f(x,y))
df$x <- london.center[2]+df$x/20
df$y <- london.center[1]+df$y/20
ggp <- ggmap(london)+
geom_tile(data=df,aes(x=x,y=y,fill=z), alpha=0.2)+
scale_fill_gradientn(guide="none",colours=rev(heat.colors(10)))+
stat_contour(data=df, aes(x=x, y=y, z=z, color=..level..), geom="path", size=1)+
scale_color_gradientn(colours=rev(heat.colors(10)))
plot(ggp)
This solution uses ggplot. Perhaps someone else will show you how to do this using RgoogleMaps.
Basically, we load the map, using get_map(...) (which is just a wrapper for GetMap(...) in the RgoogleMaps package).
Then we create the sample data frame df, which contains three columns, x, y, and z, and one row for every combination of x and y (this is the format required by ggplot).
Then we create the map layers. First the map itself, using ggmap(...); then a layer of tiles "filled" based on the value of z, using geom_tile(...); then a set of contour lines colored using the value of z, using stat_contour(geom="path",...). The rest of the code sets the fill and line colors and renders the map.
Purists will tell you that you can render the filled contours directly using stat_contour(geom="polygon",...), instead of using tiles, but this has the unfortunate effect of clipping any contours not completely enclosed in the plot area.
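For reference, a sketch of that filled-contour variant, reusing the london map and df built above (and accepting the clipping caveat):
# Filled contours instead of tiles; contours touching the plot edge may be clipped.
ggmap(london) +
  stat_contour(data = df, aes(x = x, y = y, z = z, fill = ..level..),
               geom = "polygon", alpha = 0.3) +
  scale_fill_gradientn(guide = "none", colours = rev(heat.colors(10)))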

plot with overlapping points

I have data in R with overlapping points.
x = c(4,4,4,7,3,7,3,8,6,8,9,1,1,1,8)
y = c(5,5,5,2,1,2,5,2,2,2,3,5,5,5,2)
plot(x,y)
How can I plot these points so that the points that are overlapped are proportionally larger than the points that are not. For example, if 3 points lie at (4,5), then the dot at position (4,5) should be three times as large as a dot with only one point.
Here's one way using ggplot2:
x = c(4,4,4,7,3,7,3,8,6,8,9,1,1,1,8)
y = c(5,5,5,2,1,2,5,2,2,2,3,5,5,5,2)
df <- data.frame(x = x,y = y)
ggplot(data = df,aes(x = x,y = y)) + stat_sum()
By default, stat_sum uses the proportion of instances. You can use raw counts instead by doing something like:
ggplot(data = df,aes(x = x,y = y)) + stat_sum(aes(size = ..n..))
Here's a simpler (I think) solution:
x <- c(4,4,4,7,3,7,3,8,6,8,9,1,1,1,8)
y <- c(5,5,5,2,1,2,5,2,2,2,3,5,5,5,2)
size <- sapply(1:length(x), function(i) { sum(x==x[i] & y==y[i]) })
plot(x,y, cex=size)
## Tabulate the number of occurrences of each cooordinate
df <- data.frame(x, y)
df2 <- cbind(unique(df), value = with(df, tapply(x, paste(x,y), length)))
## Use cex to set point size to some function of coordinate count
## (By using sqrt(value), the _area_ of each point will be proportional
## to the number of observations it represents)
plot(y ~ x, cex = sqrt(value), data = df2, pch = 16)
You didn't really ask for this approach but alpha may be another way to address this:
library(ggplot2)
ggplot(data.frame(x=x, y=y), aes(x, y)) + geom_point(alpha=.3, size = 3)
You need to add the parameter cex to your plot function. First, what I would do is use the functions as.data.frame and table to reduce your data to unique (x, y) pairs and their frequencies:
new.data = as.data.frame(table(x,y))
new.data = new.data[new.data$Freq != 0,] # Remove points with zero frequency
The only downside to this is that it converts numeric data to factors, so convert back to numeric (via as.character, so you recover the original values rather than the factor level codes), and plot!
plot(as.numeric(as.character(new.data$x)), as.numeric(as.character(new.data$y)), cex = new.data$Freq)
You may also want to try sunflowerplot.
sunflowerplot(x,y)
Let me propose alternatives to adjusting the size of the points. One of the drawbacks of using size (radius? area?) is that the reader's evaluation of spot size vs. the underlying numeric value is subjective.
So, option 1: plot each point with transparency --- ninja'd by Tyler!
option 2: use jitter to push your data around slightly so the plotted points don't overlap.
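As a minimal sketch with the x and y vectors from the question:
# jitter() adds a small random offset, so coincident points no longer sit exactly on top of each other
plot(jitter(x), jitter(y))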
A solution using lattice and table (similar to @R_User's, but there is no need to remove the zero-frequency rows since lattice handles them):
library(lattice)
dt <- as.data.frame(table(x, y))
xyplot(dt$y ~ dt$x, cex = dt$Freq^2, col = dt$Freq)

Fixing maps library data for Pacific centred (0°-360° longitude) display

I'm plotting some points on a map of the world using the R maps package, something like:
The command to draw the base map is:
map("world", fill=TRUE, col="white", bg="gray", ylim=c(-60, 90), mar=c(0,0,0,0))
But I need to display Pacific centred map. I use map("world2", etc to use the Pacific centred basemap from the maps package, and convert the coordinates of the data points in my dataframe (df) with:
df$longitude[df$longitude < 0] = df$longitude[df$longitude < 0] + 360
This works if I don't use the fill option, but with fill the polygons which cross 0° cause problems.
I guess I need to transform the polygon data from the maps library somehow to sort this out, but I have no idea how to get at this.
My ideal solution would be to draw a map with a left boundary at -20° and a right boundary at -30° (i.e. 330°). The following gets the correct points and coastlines onto the map, but the crossing-zero problem is the same:
df$longitude[df$longitude < -20] = df$longitude[df$longitude < -20] + 360
map("world", fill=TRUE, col="white", bg="gray", mar=c(0,0,0,0),
ylim=c(-60, 90), xlim=c(-20, 330))
map("world2", add=TRUE, col="white", bg="gray", fill=TRUE, xlim=c(180, 330))
Any help would be greatly appreciated.
You could use the fact that internally, a map object returned by the map() function can be recalculated and used again in map(). I'd create a list with individual polygons, check which ones have very different longitude values, and rearrange those. I gave an example of this approach in the function below*, which allows something like:
plot.map("world", center=180, col="white",bg="gray",
fill=TRUE,ylim=c(-60,90),mar=c(0,0,0,0))
to get
If I were you, I'd shift everything a bit more, like in :
plot.map("world", center=200, col="white",bg="gray",
fill=TRUE,ylim=c(-60,90),mar=c(0,0,0,0))
The function :
plot.map<- function(database,center,...){
Obj <- map(database,...,plot=F)
coord <- cbind(Obj[[1]],Obj[[2]])
# split up the coordinates
id <- rle(!is.na(coord[,1]))
id <- matrix(c(1,cumsum(id$lengths)),ncol=2,byrow=T)
polygons <- apply(id,1,function(i){coord[i[1]:i[2],]})
# split up polygons that differ too much
polygons <- lapply(polygons,function(x){
x[,1] <- x[,1] + center
x[,1] <- ifelse(x[,1]>180,x[,1]-360,x[,1])
if(sum(diff(x[,1])>300,na.rm=T) >0){
id <- x[,1] < 0
x <- rbind(x[id,],c(NA,NA),x[!id,])
}
x
})
# reconstruct the object
polygons <- do.call(rbind,polygons)
Obj[[1]] <- polygons[,1]
Obj[[2]] <- polygons[,2]
map(Obj,...)
}
*Note that this function only takes positive center values. It's easily adapted to allow for center values in both directions, but I didn't bother anymore as that's trivial.
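One possible adaptation along those lines (untested, just a sketch with a hypothetical name) is to normalise the angle before calling the original function:
# plot.map2 is a hypothetical wrapper: map any center (e.g. -160) into [0, 360)
# and reuse plot.map() unchanged.
plot.map2 <- function(database, center, ...) {
  plot.map(database, center %% 360, ...)
}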
Install the latest version of maps (3.2.0), then do this:
library(ggplot2)
library(maps)  # the wrap argument needs the recent maps version noted above
d$lon2 <- ifelse(d$lon < -25, d$lon + 360, d$lon) # where d is your df
mapWorld <- map_data('world', wrap=c(-25,335), ylim=c(-55,75))
ggplot() +
geom_polygon(data = mapWorld, aes(x=long, y = lat, group = group)) +
geom_point(data = d, aes(x = lon2, y = lat))
A bit late, but you can also create a shifted map by using a projection (requires the mapproj package):
map("world", projection="rectangular", parameter=0,
orientation=c(90,0,180), wrap=TRUE, fill=T, resolution=0,col=0)
This will shift by 180 degrees. But the difference with 'world2' is that the longitude co-ordinate will be different ([-pi,pi]). All projections of this package put 0 at the centre. And in that case, the 'wrap' option detects the jump correctly.
'resolution=0' helps to get cleaner borders.
You can easily change the centre longitude by changing the '180' value in the projection description.
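For example, to centre the map on 200° (160°W) instead of 180°, assuming the maps and mapproj packages are loaded, replace the 180 in the orientation vector:
# Same projection trick as above, centred on 200 degrees instead of 180
map("world", projection = "rectangular", parameter = 0,
    orientation = c(90, 0, 200), wrap = TRUE, fill = TRUE,
    resolution = 0, col = 0)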
What about this solution?
library(ggplot2)
library(maps)
library(mapproj)  # coord_map() needs mapproj
xlims = c(0, 359) # these are the limits you want to change
ylims = c(-55,75)
mapWorld <- map_data('world', wrap=xlims, ylim=ylims)
head(mapWorld)
g1 <- ggplot() +
geom_polygon(data = mapWorld, aes(x=long, y = lat, group = group)) +
coord_map("rectangular", lat0=0, xlim=xlims, ylim=ylims)
g1
