Calculate the bearing between more than two data points

I have some tracking data and I want to calculate the bearing over the course of the track. For two points we can use the earth.bear() function from the fossil package:
# earth.bear(long1, lat1, long2, lat2)
earth.bear(-10.54427, 52.11112, -10.55493, 52.10944)
# 255.6118
However, this won't work for more than two points. Here's some sample data:
tracks <- read.table(text =
"latitude, longitude
52.111122, -10.544271
52.10944, -10.554933
52.108898, -10.558025
52.108871, -10.560946
52.113991, -10.582005
52.157223, -10.626506
52.194977, -10.652878
52.240215, -10.678817
52.26421, -10.720366
52.264015, -10.720642", header = TRUE, sep = ",")

Try this:
library(fossil) # provides earth.bear()

sum(
  sapply(1:(nrow(tracks) - 1), function(i) {
    earth.bear(tracks$longitude[i], tracks$latitude[i],
               tracks$longitude[i + 1], tracks$latitude[i + 1])
  })
)
# 2609.871
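If you want the bearing of each leg rather than their sum, keep the sapply() output as a vector and attach it to the track; a minimal sketch (the leading NA for the first fix is just one convention):
bearings <- sapply(1:(nrow(tracks) - 1), function(i) {
  earth.bear(tracks$longitude[i], tracks$latitude[i],
             tracks$longitude[i + 1], tracks$latitude[i + 1])
})
tracks$bearing <- c(NA, bearings) # bearing of the leg arriving at each fix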

How to exactly reproduce historical ORS-Isochrones?

I am working with the gmapsdistance package in R. I have my API key, and I am familiar with the functions within the package.
However, I would like to work the problem in the reverse direction. Instead of just finding the Time, Distance, and Status between lat/longs (where the origins and destinations are vectors of lat/longs), I would like to input a single lat/long and draw the region of all points that could be driven to in 3 hours or less. Then I'd like to draw this region on a Google map.
To start, it would be great to use Marimar, FL: 25.9840, -80.2821.
Does anyone have experience with that type of problem?
As suggested in the comments, you can sign up to a service like Travel Time Platform (which I'm using in this example) and use their API to get the possible destinations given a starting point.
Then you can plot this on a Google map (using my googleway package):
appId <- "TravelTime_APP_ID"
apiKey <- "TravelTime_API_KEY"
mapKey <- "GOOGLE_MAPS_API_KEY"
library(httr)
library(googleway)
library(jsonlite)
location <- c(25.9840, -80.2821)
driveTime <- 2 * 60 * 60 ## 2 hours, in seconds
## London example
## location <- c(51.507609, -0.128315)
## sign up to http://www.traveltimeplatform.com/ and get an API key
## and use their 'Time Map' API
url <- "http://api.traveltimeapp.com/v4/time-map"
requestBody <- paste0('{
  "departure_searches" : [
    {
      "id" : "test",
      "coords" : {"lat": ', location[1], ', "lng": ', location[2], '},
      "transportation" : {"type" : "driving"},
      "travel_time" : ', driveTime, ',
      "departure_time" : "2017-05-03T08:00:00z"
    }
  ]
}')
res <- httr::POST(url = url,
                  httr::add_headers('Content-Type' = 'application/json'),
                  httr::add_headers('Accept' = 'application/json'),
                  httr::add_headers('X-Application-Id' = appId),
                  httr::add_headers('X-Api-Key' = apiKey),
                  body = requestBody,
                  encode = "json")
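## Optional sanity check (not in the original answer): stop early if the
## request failed, before handing the body to fromJSON()
stopifnot(httr::status_code(res) == 200)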
res <- jsonlite::fromJSON(as.character(res))
pl <- lapply(res$results$shapes[[1]]$shell, function(x){
  googleway::encode_pl(lat = x[['lat']], lon = x[['lng']])
})
df <- data.frame(polyline = unlist(pl))
df_marker <- data.frame(lat = location[1], lon = location[2])
google_map(key = mapKey) %>%
add_markers(data = df_marker) %>%
add_polylines(data = df, polyline = "polyline")
If you want to render the result in leaflet and use a free isochrone service, this is a pretty neat option. There is a limit of a 2 hour drive time, though.
devtools::install_github("tarakc02/rmapzen")
library(rmapzen)
Sys.setenv(MAPZEN_KEY = "") # get for free at https://mapzen.com/
marimar <- mz_geocode("Marimar, FL")
isos <- mz_isochrone(
  marimar,
  costing_model = mz_costing$auto(),
  contours = mz_contours(c(60 * 2)) # 2 hours, in minutes
)
library(leaflet)
leaflet(as_sp(isos)) %>%
  addProviderTiles("CartoDB.DarkMatter") %>%
  addPolygons(color = ~paste0("#", color), weight = 1)
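If you want more than one band, mz_contours() takes a vector of times (in minutes, as above); a hedged sketch, assuming the service accepts multiple contours per request:
isos2 <- mz_isochrone(
  marimar,
  costing_model = mz_costing$auto(),
  contours = mz_contours(c(30, 60, 120)) # 30 min, 1 h and 2 h bands
)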

Calculate the length of shared boundaries between multiple polygons

I have a shapefile and I want to know for each polygon what other polygons touch it. To that end I have this code:
require("rgdal")
require("rgeos")
download.file("https://www.dropbox.com/s/vbxx9dic34qwz63/Polygons.zip?dl=1", "Polygons.zip")
Shapefile <- readOGR(".","Polygons")
Touching_List <- gTouches(Shapefile, byid = TRUE, returnDense=FALSE)
Touching_DF <- setNames(stack(lapply(Touching_List, as.character)), c("TOUCHING", "ORIGIN"))
I now want to go further and understand the extent to which each polygon touches the others. For each row in Touching_DF, I am after the total perimeter of the ORIGIN polygon and the total length along which the TOUCHING polygon touches it; this would then let me calculate the percentage of the boundary that is shared. I imagine the output as 3 new columns in Touching_DF (e.g. for the first row it could be something like: origin perimeter 1000m, touching length 500m, shared boundary 50%). Thanks.
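Something like this, using the made-up numbers from the example above (illustrative only):
# ORIGIN TOUCHING perimeter t.length shared
#      1        2     1000m     500m    50%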
EDIT 1
I have applied @StatnMap's answer to my real dataset. It appears that gTouches returns results when polygons share just a single point as well as when they share an edge. These point-only contacts cause issues because they have no length. I have modified @StatnMap's portion of code to deal with it, but when it comes to creating the data frame at the end there is a mismatch between how many shared edges/vertices gTouches returns and how many edges have lengths.
Here is some code to demonstrate the problem using a sample of my actual dataset:
library(rgdal)
library(rgeos)
library(sp)
library(raster)
download.file("https://www.dropbox.com/s/hsnrdfthut6klqn/Sample.zip?dl=1", "Sample.zip")
unzip("Sample.zip")
Shapefile <- readOGR(".","Sample")
Touching_List <- gTouches(Shapefile, byid = TRUE, returnDense=FALSE)
# ---- Calculate perimeters of all polygons ----
perimeters <- sp::SpatialLinesLengths(as(Shapefile, "SpatialLines"))
# ---- All in a lapply loop ----
all.length.list <- lapply(1:length(Touching_List), function(from) {
  lines <- rgeos::gIntersection(Shapefile[from,], Shapefile[Touching_List[[from]],], byid = TRUE)
  if (class(lines) != "SpatialLines") {lines <- lines@lineobj}
  l_lines <- sp::SpatialLinesLengths(lines, longlat = FALSE)
  results <- data.frame(origin = from,
                        perimeter = perimeters[from],
                        touching = Touching_List[[from]],
                        t.length = l_lines,
                        t.pc = 100 * l_lines / perimeters[from])
  results
})
This specifically shows the issue for one of the polygons:
from <- 4
lines <- rgeos::gIntersection(Shapefile[from,], Shapefile[Touching_List[[from]],], byid = TRUE)
if (class(lines) != "SpatialLines") {lines <- lines@lineobj}
l_lines <- sp::SpatialLinesLengths(lines, longlat=FALSE)
plot(Shapefile[c(from, Touching_List[[from]]),])
plot(lines, add = TRUE, col = 1 + 1:length(Touching_List[[from]]), lwd = 2)
The two possible solutions I see are: 1. getting gTouches to return only shared edges with a length greater than zero, or 2. returning a length of zero (rather than an error) when a point rather than an edge is encountered. So far I can't find anything that will do either of these things.
EDIT 2
@StatnMap's revised solution works great. However, if a polygon does not share a snapped border with its neighbouring polygon (i.e. the boundary narrows to a point and then creates an island sliver polygon), then it comes up with this error after lines <- rgeos::gIntersection(Shapefile[from,], Shapefile[Touching_List[[from]],], byid = TRUE)
Error in RGEOSBinTopoFunc(spgeom1, spgeom2, byid, id, drop_lower_td, unaryUnion_if_byid_false, :
Geometry collections may not contain other geometry collections
I have been looking for a way to identify polygons with badly drawn borders, skip the calculations for them, and return NA in res (so they can still be identified later). However, I have been unable to find a command that distinguishes these problematic polygons from 'normal' ones.
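One hedged way to get that NA behaviour is to wrap the intersection in tryCatch(), so a failing polygon is flagged instead of stopping the loop (a sketch using the same variable names as @StatnMap's loop):
lines <- tryCatch(
  rgeos::gIntersection(Shapefile[from,], Shapefile[Touching_List[[from]],], byid = TRUE),
  error = function(e) NA)
if (!inherits(lines, c("SpatialLines", "SpatialCollections"))) {
  # badly drawn borders: keep the row, mark lengths as NA
  res <- data.frame(origin = from, perimeter = perimeters[from],
                    touching = Touching_List[[from]], t.length = NA, t.pc = NA)
}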
Running @StatnMap's revised solution with these 8 polygons demonstrates the issue:
download.file("https://www.dropbox.com/s/ttg2mi2nq1gbbrq/Bad_Polygon.zip?dl=1", "Bad_Polygon.zip")
unzip("Bad_Polygon.zip")
Shapefile <- readOGR(".","Bad_Polygon")
The intersection of two polygons that only touch each other is a line. Calculating a line's length is easy with the functions of R's spatial libraries.
As you started your example with library sp, you'll find a solution with that library first. However, I also give a version with the newer library sf.
Calculate polygons' shared boundary lengths with library sp
require("rgdal")
require("rgeos")
library(sp)
library(raster)
download.file("https://www.dropbox.com/s/vbxx9dic34qwz63/Polygons.zip?dl=1", "Polygons.zip")
unzip("Polygons.zip")
Shapefile <- readOGR(".","Polygons")
Touching_List <- gTouches(Shapefile, byid = TRUE, returnDense=FALSE)
# Touching_DF <- setNames(utils::stack(lapply(Touching_List, as.character)), c("TOUCHING", "ORIGIN"))
# ---- Calculate perimeters of all polygons ----
perimeters <- sp::SpatialLinesLengths(as(Shapefile, "SpatialLines"))
# ---- Example with the first object of the list and first neighbor ----
from <- 1
to <- 1
line <- rgeos::gIntersection(Shapefile[from,], Shapefile[Touching_List[[from]][to],])
l_line <- sp::SpatialLinesLengths(line)
plot(Shapefile[c(from, Touching_List[[from]][to]),])
plot(line, add = TRUE, col = "red", lwd = 2)
# ---- Example with the first object of the list and all neighbors ----
from <- 1
lines <- rgeos::gIntersection(Shapefile[from,], Shapefile[Touching_List[[from]],], byid = TRUE)
l_lines <- sp::SpatialLinesLengths(lines)
plot(Shapefile[c(from, Touching_List[[from]]),])
plot(lines, add = TRUE, col = 1 + 1:length(Touching_List[[from]]), lwd = 2)
# ---- All in a lapply loop ----
all.length.list <- lapply(1:length(Touching_List), function(from) {
  lines <- rgeos::gIntersection(Shapefile[from,], Shapefile[Touching_List[[from]],], byid = TRUE)
  l_lines <- sp::SpatialLinesLengths(lines)
  res <- data.frame(origin = from,
                    perimeter = perimeters[from],
                    touching = Touching_List[[from]],
                    t.length = l_lines,
                    t.pc = 100 * l_lines / perimeters[from])
  res
})
# ---- Retrieve as a dataframe ----
all.length.df <- do.call("rbind", all.length.list)
In the resulting data frame, t.length is the touching length and t.pc is the touching percentage with regard to the perimeter of the polygon of origin.
Edit: Some shared boundaries are points (with sp)
As commented, some boundaries may be a single point instead of a line. To account for this case, I suggest doubling the coordinates of the point to create a line of length = 0. This requires calculating intersections with the other polygons one by one whenever this case appears.
For a single polygon, we can test this:
# Example with the fourth object of the list and all neighbours
from <- 4
lines <- rgeos::gIntersection(Shapefile[from,], Shapefile[Touching_List[[from]],], byid = TRUE)
# If lines and points, need to do it one by one to find the point
if (class(lines) == "SpatialCollections") {
  list.Lines <- lapply(1:length(Touching_List[[from]]), function(to) {
    line.single <- rgeos::gIntersection(Shapefile[from,], Shapefile[Touching_List[[from]][to],])
    if (class(line.single) == "SpatialPoints") {
      # Double the point to create a line of length zero
      L1 <- rbind(line.single@coords, line.single@coords)
      rownames(L1) <- letters[1:2]
      Sl1 <- Line(L1)
      Lines.single <- Lines(list(Sl1), ID = as.character(to))
    } else if (class(line.single) == "SpatialLines") {
      Lines.single <- line.single@lines[[1]]
      Lines.single@ID <- as.character(to)
    }
    Lines.single
  })
  lines <- SpatialLines(list.Lines)
}
l_lines <- sp::SpatialLinesLengths(lines)
plot(Shapefile[c(from, Touching_List[[from]]),])
plot(lines, add = TRUE, col = 1 + 1:length(Touching_List[[from]]), lwd = 2)
For all in a lapply loop:
# Corrected for point outputs: All in a lapply loop
all.length.list <- lapply(1:length(Touching_List), function(from) {
  lines <- rgeos::gIntersection(Shapefile[from,], Shapefile[Touching_List[[from]],], byid = TRUE)
  if (class(lines) == "SpatialCollections") {
    list.Lines <- lapply(1:length(Touching_List[[from]]), function(to) {
      line.single <- rgeos::gIntersection(Shapefile[from,], Shapefile[Touching_List[[from]][to],])
      if (class(line.single) == "SpatialPoints") {
        # Double the point to create a line of length zero
        L1 <- rbind(line.single@coords, line.single@coords)
        rownames(L1) <- letters[1:2]
        Sl1 <- Line(L1)
        Lines.single <- Lines(list(Sl1), ID = as.character(to))
      } else if (class(line.single) == "SpatialLines") {
        Lines.single <- line.single@lines[[1]]
        Lines.single@ID <- as.character(to)
      }
      Lines.single
    })
    lines <- SpatialLines(list.Lines)
  }
  l_lines <- sp::SpatialLinesLengths(lines)
  res <- data.frame(origin = from,
                    perimeter = perimeters[from],
                    touching = Touching_List[[from]],
                    t.length = l_lines,
                    t.pc = 100 * l_lines / perimeters[from])
  res
})
all.length.df <- do.call("rbind", all.length.list)
This may also be applied with library sf, but as you apparently chose to work with sp, I won't update the code for this part. Maybe later...
---- End of Edit ----
Calculate polygons' shared boundary lengths with library sf
Figures and outputs are the same.
library(sf)
Shapefile.sf <- st_read(".","Polygons")
# ---- Touching list ----
Touching_List <- st_touches(Shapefile.sf)
# ---- Polygons perimeters ----
perimeters <- st_length(Shapefile.sf)
# ---- Example with the first object of the list and first neighbour ----
from <- 1
to <- 1
line <- st_intersection(Shapefile.sf[from,], Shapefile.sf[Touching_List[[from]][to],])
l_line <- st_length(line)
plot(Shapefile.sf[c(from, Touching_List[[from]][to]),])
plot(line, add = TRUE, col = "red", lwd = 2)
# ---- Example with the first object of the list and all neighbours ----
from <- 1
lines <- st_intersection(Shapefile.sf[from,], Shapefile.sf[Touching_List[[from]],])
lines <- st_cast(lines) # In case of multiple geometries (ex. from=71)
l_lines <- st_length(lines)
plot(Shapefile.sf[c(from, Touching_List[[from]]),])
plot(lines, add = TRUE, col = 1:length(Touching_List[[from]]), lwd = 2)
# ---- All in a lapply loop ----
all.length.list <- lapply(1:length(Touching_List), function(from) {
  lines <- st_intersection(Shapefile.sf[from,], Shapefile.sf[Touching_List[[from]],])
  lines <- st_cast(lines) # In case of multiple geometries
  l_lines <- st_length(lines)
  res <- data.frame(origin = from,
                    perimeter = as.vector(perimeters[from]),
                    touching = Touching_List[[from]],
                    t.length = as.vector(l_lines),
                    t.pc = as.vector(100 * l_lines / perimeters[from]))
  res
})
# ---- Retrieve as dataframe ----
all.length.df <- do.call("rbind", all.length.list)
Just to add to Sébastien Rochette's answer: I think the function st_length from the sf package does not work with polygons (see this post). Instead I suggest using the function st_perimeter from the lwgeom package.
(I wanted to comment the answer but I don't have enough reputation)
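A minimal sketch of that suggestion (assuming the Shapefile.sf object from the sf answer above):
library(lwgeom)
perimeters <- lwgeom::st_perimeter(Shapefile.sf) # perimeter of each polygon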

Time series with MODISTools

I need to get the full EVI time series, along with dates and quality information. After executing MODISSubsets() the crude data is available, but not processed in as nice a way as MODISSummaries() would provide.
MODISSummaries() however reduces the time series to summary statistics, taking into account quality information.
Is there a way to extract time series for each tile from the crude data (see data frame crude below)? It would be great if this could return a list of data frames, where each data frame represents one tile and holds data for EVI (or whatever variable), its date, and a quality flag.
Specifically, after doing the following ...
savedir <- './'
modis.subset <- data.frame(
  lat = 47.1167,
  long = 11.3175,
  end.date = "2016-09-29"
)
MODISSubsets(
LoadDat = modis.subset,
Products = "MOD13Q1",
Bands = c("250m_16_days_EVI", "250m_16_days_pixel_reliability"),
Size = c(1,1),
StartDate = FALSE,
SaveDir = savedir,
TimeSeriesLength = 3
)
crude <- read.csv("./Lat47.11670Lon11.31750Start2013-01-01End2016-09-29___MOD13Q1.asc", header = FALSE, as.is = TRUE)
... how would you get to something like
nice <- list( lonX1_latY1=data.frame( date=..., var=..., qual=... ), lonX2_latX2=... )
...?
In short, I had missed that ExtractTile() cannot be used with the return value of MODISTimeSeries(). My workaround uses ExtractTile() on the output of reading the ASCII file instead. Here is what I got working for my purpose. It returns a list containing: an array of dimension (npixels_lon, npixels_lat, n_timesteps) holding all the downloaded MODIS data, in this case EVI; an array of identical dimensions containing the pixel reliability codes; and a vector of length n_timesteps holding the centre-pixel value where its quality flag is 0, or the mean of the surrounding pixels otherwise:
read_crude_modis <- function( filn, savedir, expand_x, expand_y ){
  # arguments:
  # filn: file name of ASCII file holding MODIS "crude" data
  # savedir: directory where to look for that file
  # expand_x : number of pixels to the right and left of centre
  # expand_y : number of pixels to the top and bottom of centre

  # MODIS quality flags:
  # -1 Fill/No Data  Not Processed
  #  0 Good Data     Use with confidence
  #  1 Marginal data Useful, but look at other QA information
  #  2 Snow/Ice      Target covered with snow/ice
  #  3 Cloudy        Target not visible, covered with cloud

  library( MODISTools )
  library( plyr ) # for rename()

  ScaleFactor <- 0.0001 # applied to output variable
  ndayyear <- 365

  ## Read downloaded ASCII file
  crude <- read.csv( paste( savedir, filn, sep="" ), header = FALSE, as.is = TRUE )
  crude <- rename( crude, c( "V1"="nrows", "V2"="ncols", "V3"="modislon_ll", "V4"="modislat_ll", "V5"="dxy_m", "V6"="id", "V7"="MODISprod", "V8"="yeardoy", "V9"="coord", "V10"="MODISprocessdatetime" ) )

  ## this is just read to get length of time series and dates
  tseries <- MODISTimeSeries( savedir, Band = "250m_16_days_EVI" )
  ntsteps <- dim(tseries[[1]])[1]
  tmp <- rownames( tseries[[1]] )
  time <- data.frame( yr=as.numeric( substr( tmp, start=2, stop=5 )), doy=as.numeric( substr( tmp, start=6, stop=8 )) )
  time$dates <- as.POSIXlt( as.Date( paste( as.character(time$yr), "-01-01", sep="" ) ) + time$doy - 1 )
  time$yr_dec <- time$yr + ( time$doy - 1 ) / ndayyear

  ## get number of products for which data is in the ascii file (not used)
  nprod <- dim(crude)[1] / ntsteps
  if ( (dim(crude)[1]/nprod) != ntsteps ) { print("problem") }

  ## re-arrange data
  if ( dim(crude)[2]==11 && expand_x==0 && expand_y==0 ){
    ## only one pixel downloaded
    nice_all <- as.matrix( crude$V11[1:ntsteps], dim(1,1,ntsteps) ) * ScaleFactor ## EVI data
    nice_qual_flg <- as.matrix( crude$V11[(ntsteps+1):(2*ntsteps)], dim(1,1,ntsteps) ) ## pixel reliability data
  } else if ( dim(crude)[2]>11 ){
    ## multiple pixels downloaded
    # nice <- ExtractTile( Data = tseries, Rows = c(crude$nrows,expand_y), Cols = c(crude$ncols,expand_x), Grid = TRUE ) ## > is not working: applying ExtractTile to return of MODISTimeSeries
    nice_all <- ExtractTile( Data = crude[1:ntsteps,11:dim(crude)[2]] * ScaleFactor, Rows = c(crude$nrows[1],expand_y), Cols = c(crude$ncols[1],expand_x), Grid = TRUE )
    nice_qual_flg <- ExtractTile( Data = crude[(ntsteps+1):(2*ntsteps),11:dim(crude)[2]], Rows = c(crude$nrows[1],expand_y), Cols = c(crude$ncols[1],expand_x), Grid = TRUE )
  } else {
    print( "Not sufficient data downloaded. Adjust expand_x and expand_y.")
  }

  ## Clean data for centre pixel: where the quality flag is not 0, use the mean of all 8 surrounding pixels
  if ( expand_x==1 && expand_y==1 ){
    nice_centre <- nice_all[2,2,]
    nice_centre[ which( nice_qual_flg[2,2,]!=0 ) ] <- apply( nice_all[,,which( nice_qual_flg[2,2,]!=0 )], c(3), FUN=mean)
  }

  modis <- list( nice_all=nice_all, nice_centre=nice_centre, nice_qual_flg=nice_qual_flg, time=time )
  return( modis )
}
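A hypothetical call, matching the MODISSubsets() download above (the file name is the one read into crude earlier; expand_x = expand_y = 1 gives the 3x3 window required by the centre-pixel cleaning step):
modis <- read_crude_modis(
  filn = "Lat47.11670Lon11.31750Start2013-01-01End2016-09-29___MOD13Q1.asc",
  savedir = "./",
  expand_x = 1,
  expand_y = 1
)
str( modis$time ) # dates and decimal years for each time step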

Using R to identify nearest point to a location and calculate the distance between them along a network/road

I have a series of locations (Points_B) and would like to find the closest point to each of them from a different set of points (Points_A), plus the distance between them in km. I can do this as the crow flies, but cannot work out how to do the same along a road network (the 'Roads' object in the code). The code I have so far is as follows:
library(sp)
library(rgdal)
library(rgeos)
download.file("https://dl.dropboxusercontent.com/u/27869346/Road_Shp.zip", "Road_Shp.zip")
#2.9mb
unzip("Road_Shp.zip")
Roads <- readOGR(".", "Subset_Roads_WGS")
Points_A <- data.frame(ID = c("A","B","C","D","E","F","G","H","I","J","K","L"), ID_Lat = c(50.91487, 50.92848, 50.94560, 50.94069, 50.92275, 50.94109, 50.92288, 50.92994, 50.92076, 50.90496, 50.89203, 50.88757), ID_Lon = c(-1.405821, -1.423619, -1.383509, -1.396910, -1.441801, -1.459088, -1.466626, -1.369458, -1.340104, -1.360153, -1.344662, -1.355842))
rownames(Points_A) <- Points_A$ID
Points_B <- data.frame(Code = 1:30, Code_Lat = c(50.92658, 50.92373, 50.93785, 50.92274, 50.91056, 50.88747, 50.90940, 50.91328, 50.91887, 50.92129, 50.91326, 50.91961, 50.91653, 50.90910, 50.91432, 50.93742, 50.91848, 50.93196, 50.94209, 50.92080, 50.92127, 50.92538, 50.88418, 50.91648, 50.91224, 50.92216, 50.90526, 50.91580, 50.91203, 50.91774), Code_Lon = c(-1.417311, -1.457155, -1.400106, -1.374250, -1.335896, -1.362710, -1.360263, -1.430976, -1.461693, -1.417107, -1.426709, -1.439435, -1.429997, -1.413220, -1.415046, -1.440672, -1.392502, -1.459934, -1.432446, -1.357745, -1.374369, -1.458929, -1.365000, -1.426285, -1.403963, -1.344068, -1.340864, -1.399607, -1.407266, -1.386722))
rownames(Points_B) <- Points_B$Code
Points_A_SP <- SpatialPoints(Points_A[,2:3])
Points_B_SP <- SpatialPoints(Points_B[,2:3])
Distances <- (gDistance(Points_A_SP, Points_B_SP, byid=TRUE))*100
Points_B$Nearest_Points_A_CF <- colnames(Distances)[apply(Distances,1,which.min)]
Points_B$Distance_Points_A_CF <- apply(Distances,1,min)
The output I am after would be two additional columns in 'Points_B' with 1) having the nearest Point A object ID along the road network and 2) having the distance along the network in km. Any help would be appreciated. Thanks.
I've been working on this kind of problem all day. Try mapdist() in the ggmap package and see if this works:
library(dplyr)
library(ggmap)
#Your data
Points_A <- data.frame(ID = c("A","B","C","D","E","F","G","H","I","J","K","L"), ID_Lat = c(50.91487, 50.92848, 50.94560, 50.94069, 50.92275, 50.94109, 50.92288, 50.92994, 50.92076, 50.90496, 50.89203, 50.88757), ID_Lon = c(-1.405821, -1.423619, -1.383509, -1.396910, -1.441801, -1.459088, -1.466626, -1.369458, -1.340104, -1.360153, -1.344662, -1.355842))
Points_B <- data.frame(Code = 1:30, Code_Lat = c(50.92658, 50.92373, 50.93785, 50.92274, 50.91056, 50.88747, 50.90940, 50.91328, 50.91887, 50.92129, 50.91326, 50.91961, 50.91653, 50.90910, 50.91432, 50.93742, 50.91848, 50.93196, 50.94209, 50.92080, 50.92127, 50.92538, 50.88418, 50.91648, 50.91224, 50.92216, 50.90526, 50.91580, 50.91203, 50.91774), Code_Lon = c(-1.417311, -1.457155, -1.400106, -1.374250, -1.335896, -1.362710, -1.360263, -1.430976, -1.461693, -1.417107, -1.426709, -1.439435, -1.429997, -1.413220, -1.415046, -1.440672, -1.392502, -1.459934, -1.432446, -1.357745, -1.374369, -1.458929, -1.365000, -1.426285, -1.403963, -1.344068, -1.340864, -1.399607, -1.407266, -1.386722))
#Combine coords into one field (mapdist was doing something funny with the commas so I had to specify "%2C" here)
Points_A$COORD <- paste(Points_A$ID_Lat, Points_A$ID_Lon, sep="%2C")
Points_B$COORD <- paste(Points_B$Code_Lat, Points_B$Code_Lon, sep="%2C")
#use expand grid to generate all combos
get_directions <- expand.grid(Start = Points_A$COORD,
                              End = Points_B$COORD,
                              stringsAsFactors = F,
                              KEEP.OUT.ATTRS = F) %>%
  left_join(select(Points_A, COORD, ID), by = c("Start" = "COORD")) %>%
  left_join(select(Points_B, COORD, Code), by = c("End" = "COORD"))
#make a base dataframe
route_df <- mapdist(from = get_directions$Start[1],
                    to = get_directions$End[1],
                    mode = "driving") %>%
  mutate(Point_A = get_directions$ID[1],
         Point_B = get_directions$Code[1])
#get the rest in a for-loop
start <- Sys.time()
for(i in 2:nrow(get_directions)){
  get_route <- mapdist(from = get_directions$Start[i],
                       to = get_directions$End[i],
                       mode = "driving") %>%
    mutate(Point_A = get_directions$ID[i],
           Point_B = get_directions$Code[i])
  route_df <- rbind(route_df, get_route) # append to the running results
  Sys.sleep(time = 1) # so google doesn't get mad at you for speed
  end <- Sys.time()
  print(paste(i, "of", nrow(get_directions),
              round(i/nrow(get_directions),4)*100, "%", sep=" "))
  print(end-start)
}
#save if you want
write.csv(route_df, "route_df.csv", row.names = F)
#Route Evaluation
closest_point <- route_df %>%
  group_by(Point_A) %>% # group by Point_B instead to get the nearest A for each B
  filter(km == min(km)) %>%
  ungroup()
I'm still kind of new at this so there may be a better way to do the data wrangling. Hope this helps & good luck
The packages igraph, osrm and walkalytics all seem to provide this functionality these days. Mode-specific routing networks exist (with varying degrees of functionality).
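For example, a hedged sketch with osrm (assumptions: the package's public demo server is reachable, osrmTable() accepts src/dst data frames of id, lon, lat columns, and measure = "distance" returns a metre matrix named by point ids; check the package docs before relying on this):
library(osrm)
# road distances between all A and B points (rows = sources, cols = destinations)
dists <- osrmTable(src = Points_A[, c("ID", "ID_Lon", "ID_Lat")],
                   dst = Points_B[, c("Code", "Code_Lon", "Code_Lat")],
                   measure = "distance")$distances
Points_B$Nearest_Points_A_Rd <- rownames(dists)[apply(dists, 2, which.min)]
Points_B$Distance_Points_A_Rd <- apply(dists, 2, min) / 1000 # metres to km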

Mosaic fails when reading rasters from disc but not from memory

I ran into a weird issue when trying to make a mosaic from several hundred rasters. The satellite imagery I'm using is not perfectly aligned and does not share the exact same resolution, so I followed the steps found here to resample my rasters and then mosaic them.
I started off testing on a subset of only four images and had no problem doing this (I had to manually calculate the full extent, since unionExtent and the newer union only allow two extent arguments):
# Reading raster files
library(raster)
rst <- lapply(list.files(), FUN = stack)
# Extracting individual extents
rst_ext <- lapply(rst, FUN = extent)
# Calculating full extent
xmin_rst <- c(); xmax_rst <- c(); ymin_rst <- c(); ymax_rst <- c()
for (i in 1:length(rst_ext)) {
  xmin_rst <- c(xmin_rst, rst_ext[[i]]@xmin)
  ymin_rst <- c(ymin_rst, rst_ext[[i]]@ymin)
  xmax_rst <- c(xmax_rst, rst_ext[[i]]@xmax)
  ymax_rst <- c(ymax_rst, rst_ext[[i]]@ymax)
}
full_extent <- extent(min(xmin_rst), max(xmax_rst),
                      min(ymin_rst), max(ymax_rst))
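## A hedged aside: extent union is pairwise, so Reduce() can fold it over the
## whole list instead of the manual loop above (assumes raster's union()
## method for Extent objects)
full_extent <- Reduce(raster::union, rst_ext)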
# Creating raster from full extent and first raster's CRS and resolution
bounding_rst <- raster(full_extent,
                       crs = crs(rst[[1]]),
                       res = res(rst[[1]]))
# Resampling rasters to match attributes of the bounding raster
rst_resampled <- lapply(X = rst, FUN = function(x) {
  target_rst <- crop(bounding_rst, x)
  resample(x, target_rst, method = "bilinear")
})
# Creating mosaic
rst_mosaic <- do.call("mosaic", c(rst_resampled, fun = mean))
That worked out OK but, of course, I didn't want to keep all those rasters in memory since I'd run out of it. I decided to save them in a new folder, re-read them as a stack, and then make the mosaic.
# Function to crop, resample and write to a new GeoTIFF
resample_write <- function(x) {
  target_rst <- crop(bounding_rst, x)
  x <- resample(x, target_rst, method = "bilinear")
  save_name <- gsub("\\.1",
                    "_resampled.tif",
                    names(x)[1]) # Modifying name of 1st band
  writeRaster(x,
              filename = paste("../testing_resampling/",
                               save_name, sep = ""),
              format = "GTiff")
}
# Running the function
lapply(rst, FUN = resample_write)
# Reading resampled images
setwd("../testing_resampling/")
rst_resampled2 <- lapply(list.files(), FUN = stack)
## Making the mosaic
rst_mosaic2 <- do.call("mosaic", c(rst_resampled2, fun = mean))
This gives the following error:
> rst_mosaic2 <- do.call("mosaic", c(rst_resampled2, fun = mean))
Error in compareRaster(x, extent = FALSE, rowcol = FALSE, orig = TRUE, :
different origin
I was able to get around it by increasing the tolerance argument of mosaic to 0.4, but I still don't understand why rst_resampled and rst_resampled2 yield different mosaic results.
Comparing them both with compareRaster and cellStats tells me that they're exactly the same.
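For reference, the workaround looks like this (0.4 was found by trial and error; per raster's documentation, tolerance is the permissible difference relative to the cell resolution):
rst_mosaic2 <- do.call("mosaic", c(rst_resampled2, fun = mean, tolerance = 0.4))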
