R Highcharter map from customized shapefile

I am having trouble importing and joining a GeoJSON map to some data using the highcharter library. I am trying to use a slimmed-down version of an sf dataset that I obtained with the tidycensus package and then uploaded to https://mapshaper.org/ to reduce the file size by thinning out the polygons. After thinning, I exported it as GeoJSON and imported it back into R.
Here is an example. First I download the data using tidycensus and create two datasets, one for the geometry and one for the attribute of interest, here median family income. Then I export the geometry data so that I can feed it into mapshaper for reduction.
library(tidycensus)
library(sf)
library(dplyr)

# start with an example for one state
## pull geometry data for one state
md_data <- get_acs(geography = "tract",
                   state = "MD",
                   variables = "B19113_001",
                   geometry = TRUE,
                   key = Sys.getenv("CENSUS_API_KEY"))

# dataset of just GEOID and median family income for use in mapping
md_mfi <- as.data.frame(md_data) %>%
  mutate(median_family_income = case_when(is.na(estimate) ~ 0,
                                          TRUE ~ estimate)) %>%
  select(GEOID, median_family_income)

# slim down to just the GEOID and the geometry data
md_tracts <- md_data %>%
  select(GEOID, geometry)

st_write(md_tracts, "U:/M1JPW00/GeoSpatial/census_tracts/acs_carto_2016/md_carto_tracts.shp")
After reformatting in mapshaper, I import the file back into R:
md_map_json <- jsonlite::fromJSON(txt = "FILEPATH/md_carto_tracts.json",simplifyVector = FALSE)
md_map_json <- geojsonio::as.json(md_map_json)
I then try to build a map based on an example from the highcharter docs:
> class(md_map_json)
[1] "json" "geo_json"
> head(md_mfi)
        GEOID median_family_income
1 24001000100                54375
2 24001000200                57174
3 24001000300                48362
4 24001000400                52038
5 24001000500                46174
6 24001000600                49784
highchart(type = "map") %>%
  hc_add_series(mapData = md_map_json,
                data = list_parse(md_mfi),
                joinBy = "GEOID",
                value = "median_family_income",
                name = "Median Family Income")
The map actually renders and the census tracts are colored solid blue, but the series data doesn't seem to join successfully, whether or not I use list_parse.

I had the same problem, and asked about it here: Make a choropleth from a non-highmap-collection map. Nobody responded (I know!), so I finally got to a solution that I think should work for you too:
# Work with the map you get until this step:
md_map_json <- jsonlite::fromJSON(txt = "FILEPATH/md_carto_tracts.json", simplifyVector = FALSE)

# This part is unnecessary:
# md_map_json <- geojsonio::as.json(md_map_json)

# Then, write your map like this:
highchart() %>%
  hc_add_series_map(md_map_json, md_mfi, value = "median_family_income", joinBy = "GEOID")
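If you want graded colors and a formatted tooltip rather than the defaults, both can be chained onto the same pipe. This is an untested sketch of that refinement; hc_colorAxis and hc_tooltip are standard highcharter helpers, and the hex colors and dollar prefix are arbitrary choices, not from the original answer:

highchart() %>%
  hc_add_series_map(md_map_json, md_mfi,
                    value = "median_family_income",
                    joinBy = "GEOID",
                    name = "Median Family Income") %>%
  hc_colorAxis(minColor = "#E6F2FF", maxColor = "#084594") %>%  # arbitrary example colors
  hc_tooltip(valuePrefix = "$")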

Related

How to get rid of the "Error in CPL_write_ogr" error when using "st_write" to export a sf object?

I'm trying to intersect two sf objects for the US (one at the township level and the other at the census tract level). I'm getting both using tigris and tidycensus. My final goal is to have a single sf object with information at the township level (combining information from the original township and census tract sf objects). After the intersection, I want to export this sf object using st_write from the sf package. Here is the code I've used:
library(tigris)
library(sf)
library(purrr)
library(tidycensus)
library(tidyr)
library(dplyr)

## Data at township level
#---------------------------#
MN_Township_SHP <- county_subdivisions("Minnesota", cb = TRUE) %>% st_transform(crs = 32618)
MN_Township_SHP$County <- substr(MN_Township_SHP$NAMELSADCO, 1, nchar(MN_Township_SHP$NAMELSADCO) - 7)
Dataset <- MN_Township_SHP

## Data at census tract level
#---------------------------#
Sys.getenv("CENSUS_API_KEY")
my_vars <- c(total_pop = "B01003_001",
             race_denominator = "B02001_001", # Total
             white = "B02001_002")
mn <- unique(fips_codes$state)[24]
MN_CensusTract_SHP <- map_df(mn, function(x) {
  get_acs(geography = "tract",
          geometry = TRUE,
          variables = my_vars,
          state = x)
})
MN_CensusTract_SHP <- MN_CensusTract_SHP %>% dplyr::select(-moe)
Social_Dat <- MN_CensusTract_SHP %>%
  as.data.frame() %>%
  pivot_wider(names_from = variable,
              values_from = c(estimate)) %>%
  dplyr::mutate(year = 2021) %>%
  dplyr::rename_all(~ paste0("ACS_", .))
Social_Dat$ACS_year <- as.double(Social_Dat$ACS_year)
Social_Dat$ACS_GEOID <- as.double(Social_Dat$ACS_GEOID)
Social_Dat <- st_as_sf(Social_Dat, sf_column_name = 'ACS_geometry') %>% st_transform(crs = 32618)

## Intersection between township and census tract levels
#---------------------------#
final_df <- st_intersection(Dataset, Social_Dat, all = TRUE)

## Export sf object as shapefile
#---------------------------#
st_write(final_df, "Input_Intermediate/final_df.shp", delete_layer = TRUE)
However, when I run this final step, I get the following error:
"Error in CPL_write_ogr(obj, dsn, layer, driver, as.character(dataset_options), :
Write error"
Does anyone know how to solve this? I've tried many of the fixes I found on Google, but none of them worked for me. Many thanks in advance!
This usually happens because st_intersection can return a GEOMETRYCOLLECTION (a mix of polygons, lines, and points where geometries merely touch), and the ESRI Shapefile driver can only write a single geometry type. You can use the function st_collection_extract to keep only the polygon type from your geometry column, and then you can proceed and use st_write again.
df <- st_collection_extract(final_df, type = "POLYGON")
st_write(df, "Input_Intermediate/final_df.shp", delete_layer = TRUE)
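As a quick sanity check (my addition, not part of the original answer), you can tabulate the geometry types before and after the extraction with sf's st_geometry_type:

# Before: the intersection typically leaves a mix of types behind
table(st_geometry_type(final_df))
# After: only POLYGON / MULTIPOLYGON should remain, which the shapefile driver accepts
table(st_geometry_type(df))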

Global Leaflet Map in R - issues adding data to spatial object

I am trying to replicate this visual, but with my own data. This is the template I am working off of - https://r-graph-gallery.com/183-choropleth-map-with-leaflet.html
My intent is to highlight every country with a value in the same color. I might make it a heatmap or something, but right now adding the polygons gives an error, so I cannot try any color options at all.
# Setup
library(leaflet)
library(rgdal)
library(here)
library(tidyverse)

# Basically copy-pasted from the template, but the download did not work. I manually went
# to the website, downloaded the file, un-zipped it, and dropped it in my working directory
# download.file("http://thematicmapping.org/downloads/TM_WORLD_BORDERS_SIMPL-0.3.zip", destfile = "DATA/world_shape_file.zip")
# system("unzip DATA/world_shape_file.zip")
world_spdf <- readOGR(
  dsn = here(),
  layer = "TM_WORLD_BORDERS_SIMPL-0.3",
  verbose = FALSE
)
world_spdf@data$POP2005[which(world_spdf@data$POP2005 == 0)] = NA
world_spdf@data$POP2005 <- as.numeric(as.character(world_spdf@data$POP2005)) / 1000000 %>% round(2)

# Example of my data - I have countries and numbers associated with them,
# although not every country has a number
country <- c("Algeria", "Argentina", "Australia")
values <- c(1, 4, 4)
my_df <- data.frame(country, values)

# This is how I am trying to add MY values to the map. I have to convert the map to a
# tibble, add my data, then convert it back to a map. Perhaps this is the problem?
interactive_data_attempt <- world_spdf %>%
  as.tibble() %>%
  left_join(my_df, by = c("NAME" = "country")) %>%
  mutate(texts = replace_na(texts, 0),
         exists = texts > 1) %>%
  st_as_sf(coords = c("LON", "LAT"))

# This is the method I used to do the exact same thing in a domestic US map
bins <- c(seq(0, 1, 1), Inf)
pal <- colorBin(c("white", "#C14A36"), domain = interactive_data_attempt$exists, bins = bins, reverse = FALSE)

# This gives an error: Error in to_ring.default(x) :
#   Don't know how to get polygon data from object of class XY,POINT,sfg
leaflet(interactive_data_attempt) %>%
  addTiles() %>%
  setView(lat = 10, lng = 0, zoom = 2) %>%
  addPolygons(fillColor = ~pal(interactive_data_attempt$exists))
You use readOGR to get an sp object, but at one point you convert it to a tibble and then to sf? Not sure about sp, but in most cases you can handle sf like a regular tibble / dataframe, i.e. left_join to it. And you can read the shapefile directly into sf with st_read.
Then there's something with your variables, a mix-up from copy-paste I would guess: in my_df you have values but you never do anything with it, and in your mutate you use texts but it's unclear where that's coming from.
The binary palette is built from exists, a boolean that should indicate whether the actual value is present, though I'd assume you'd want to use the values from my_df$values instead.
I left NA values as-is, changed the bins (to just 2) and adjusted some colours.
library(leaflet)
library(sf)
library(dplyr)
library(tidyr)

# download.file("http://thematicmapping.org/downloads/TM_WORLD_BORDERS_SIMPL-0.3.zip", destfile = "world_shape_file.zip")
# unzip("world_shape_file.zip", exdir = "world_shape_file")
world_sf <- st_read("world_shape_file")
world_sf$POP2005[which(world_sf$POP2005 == 0)] <- NA
world_sf$POP2005 <- round(as.numeric(as.character(world_sf$POP2005)) / 1000000, 2)

country <- c("Algeria", "Argentina", "Australia")
values <- c(1, 4, 4)

pal <- colorBin(c("blue", "#C14A36"), domain = values, bins = 2, reverse = FALSE, na.color = "transparent")

world_sf %>%
  left_join(
    tibble(country, values),
    by = c("NAME" = "country")) %>%
  leaflet() %>%
  addTiles() %>%
  setView(lat = 10, lng = 0, zoom = 2) %>%
  addPolygons(fillColor = ~pal(values), stroke = FALSE)
Created on 2022-11-12 with reprex v2.0.2
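If a legend is wanted as well, leaflet's addLegend should chain onto the same pipe. A small untested sketch reusing the objects defined above (na.label is a standard addLegend argument; the title is arbitrary):

world_sf %>%
  left_join(tibble(country, values), by = c("NAME" = "country")) %>%
  leaflet() %>%
  addTiles() %>%
  setView(lat = 10, lng = 0, zoom = 2) %>%
  addPolygons(fillColor = ~pal(values), stroke = FALSE) %>%
  addLegend(pal = pal, values = ~values, title = "Value",
            position = "bottomright", na.label = "No data")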

Shiny and Leaflet integration is really slow - how can I speed it up?

Right now I'm almost certain that my current use of shiny and leaflet is sub-optimal.
At a high level my current approach looks like this:
1. Generate a leaflet.
2. Create a reactive dataframe on user input.
3. Create a reactive dataframe of lat/lon coordinates on user selection of their area of interest.
4. Merge a spatial dataframe (containing postcode polygon boundaries) with the reactive dataframe from step 2, then draw the map with the joined dataframe. This keeps all the data necessary for drawing polygons, adding colorBins and fillColor and labels inside the same final dataframe.
In more detail, the steps are executed as follows:
Generate a map like this:
output$leaflet_map <- renderLeaflet({
  leaflet() %>%
    addTiles()
})
Produce a reactive dataframe of marketing data to be joined onto an sf spatial dataframe containing postcode polygons via sp::merge() (the join happens a little later, I'll get to that):
reactive_map_data1 <- reactive({
  df %>%
    filter(BrandAccount_Brand %in% input$selectBrandRecruitment1) %>%
    group_by(POA_CODE, ordertype) %>%
    summarise("Number of Orders type and postcode" = n(), "AOV" = round(mean(TotalDiscount), 2)) %>%
    left_join(seifa, by = "POA_CODE") %>%
    left_join(over25bypostcode, by = "POA_CODE") %>%
    mutate(`Proportion of Population Over 25` = round(n() / `25_and_over` * 100, 4))
})
Create a reactive dataframe containing the lat and lon coordinates of the State selected by the user to be fed into the call to render the map:
reactive_state_recruitment1 <- reactive({
  australian_states %>%
    filter(States == input$selectState_recruitment1)
})
Render the final map - profvis determines that this is in fact the slow part:
observeEvent(
  input$gobutton_recruitment1, {

    ## First I load the spatial data with each call to render the map - this is almost
    ## certainly sub-optimal, however I can't think of another way to do this, as each
    ## time the data are joined I have no other way of re-setting the gdal.postcodes2
    ## spatial dataframe to its original state, which is why I reload it from .rds each time:
    gdal.postcodes_recruitment1 <- readRDS("gdal.postcodes2.rds")

    ## I then merge the marketing `reactive_map_data1()` dataframe created in Step 2 with
    ## the freshly loaded `gdal.postcodes2` spatial dataframe - `profvis` says this is
    ## pretty slow, but not as slow as the rendering of the map
    gdal.postcodes_recruitment1@data <- sp::merge(gdal.postcodes_recruitment1@data, reactive_map_data1(), by.x = "POA_CODE", all.x = TRUE)

    ## Next I generate the domain of `colorBin` with the `Number of Orders type and
    ## postcode` variable that only exists after the merge and is subject to change from
    ## user input - it resides within the `reactive_map_data1()` dataframe that gets
    ## merged onto the `gdal.postcodes2` spatial dataframe.
    pal <- colorBin("YlOrRd", domain = gdal.postcodes_recruitment1$`Number of Orders type and postcode`, bins = bins_counts)

    ## Lastly I update the leaflet with `leafletProxy()` to draw the map with polygons
    ## and fill colour based on the `reactive_map_data1()` values
    leafletProxy("leaflet_map_recruitment1", data = gdal.postcodes_recruitment1) %>%
      addPolygons(data = gdal.postcodes_recruitment1,
                  fillColor = ~pal(gdal.postcodes_recruitment1$`Number of Orders type and postcode`),
                  weight = 1,
                  opacity = 1,
                  color = "white",
                  dashArray = "2",
                  fillOpacity = .32,
                  highlight = highlightOptions(
                    weight = 3.5,
                    color = "white",
                    dashArray = "4",
                    fillOpacity = 0.35,
                    bringToFront = TRUE),
                  layerId = gdal.postcodes_recruitment1@data$POA_CODE,
                  label = sprintf(
                    "<strong>%s<br/>%s</strong><br/>%s<br/>%s<br/>%s<br/>%s",
                    paste("Postcode: ", gdal.postcodes_recruitment1$POA_CODE, sep = ""),
                    paste("% of Population Over 25: ", gdal.postcodes_recruitment1$`Proportion of Population Over 25`, "%"),
                    paste("Number of Orders: ", gdal.postcodes_recruitment1$`Number of Orders type and postcode`, sep = ""),
                    paste("Ave Order Value: $", gdal.postcodes_recruitment1$`AOV`, sep = ""),
                    paste("Advantage & Disadvantage: ", gdal.postcodes_recruitment1$`Relative Socio-Economic Advantage and Disadvantage Decile`, sep = ""),
                    paste("Education and Occupation: ", gdal.postcodes_recruitment1$`Education and Occupation Decile`, sep = "")
                  ) %>%
                    lapply(htmltools::HTML),
                  labelOptions = labelOptions(
                    style = list("font-weight" = "normal", padding = "3px 8px"),
                    textsize = "15px",
                    direction = "auto")) %>%
      addLegend("bottomright", pal = pal, values = ~bins_counts,
                title = "# of Recruits (All Time)",
                labFormat = labelFormat(suffix = ""),
                opacity = 1
      ) %>%
      setView(lng = reactive_state_recruitment1()$Lon, lat = reactive_state_recruitment1()$Lat, zoom = reactive_state_recruitment1()$States_Zoom)
  })
All up, the map takes between 7 and 20 seconds to render, as the data are quite large.
Some things to note:
The polygons have already been simplified to death; they currently display only 10% of the detail originally provided to define postcode boundaries by the Australian Bureau of Statistics. Simplifying the polygons further is not an option.
sp::merge() is not the fastest join function I have come across, but it is necessary in order to merge a spatial dataframe with a non-spatial dataframe (other joins, such as those offered by dplyr, will not accomplish this task; a look at the sp::merge() documentation suggests this has to do with S3 vs S4 data types). In any case, this part is not the slow part according to profvis.
According to profvis the actual rendering of the map in step 4 (drawing polygons) is the slow part. Ideally a solution to speed this whole process up would involve drawing the polygons on the original leaflet, and only updating the fillColor and labels applied to each polygon upon input of the 'Go' actionButton. I have not figured out a way to do this.
Can anyone think of a way to re-structure this whole procedure to optimise efficiency?
Any input is greatly appreciated.
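The comments in step 4 above already flag the per-click readRDS() as suspect, so one low-risk restructuring suggests itself: load the polygons once at app start-up and merge into a local copy. R's copy-on-modify semantics mean the globally loaded object is never altered, so there is nothing to "re-set" on the next click. A minimal sketch of that idea, assuming the same objects as above (untested, and it does not address the polygon-drawing cost itself):

## Load once at app start-up (e.g. in global.R), not inside observeEvent()
gdal.postcodes2 <- readRDS("gdal.postcodes2.rds")

observeEvent(input$gobutton_recruitment1, {
  ## Assigning to a local object and modifying its @data triggers a copy;
  ## the global gdal.postcodes2 stays pristine between clicks.
  merged <- gdal.postcodes2
  merged@data <- sp::merge(merged@data, reactive_map_data1(),
                           by.x = "POA_CODE", all.x = TRUE)
  ## ... build pal and call leafletProxy() on `merged` as before ...
})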

Using wux for ensemble climate data

I am very new to R, and I am working with New England climate data. I am currently attempting to use the wux package to find ensemble averages for each year of the mean, minimum, and maximum temperatures across all 29 climate models. In the end, for example, I want to have a raster stack of model averages, one stack for each year. The ultimate goal is to obtain a graph that shows variability. I have attempted to read through the wux PDF online, but because I am so new, and because it is such a general overview, I feel I am getting lost. I need help developing a simple framework to run the model. My script so far has a general outline of what I think I need. Correct me if I am wrong, but I think I want to be using the models2wux function. Bear in mind that my script is a bit messy at this point.
# This script will:
# Calculate ensemble mean, min, and max across models within each year.
# For example, the script will find the average temp across all models
# for the year 1980. It will do the same for all years.
# There will be a separate ensemble mean, max, and min for each scenario.

library(raster)
library(rasterVis)
library(utils)
library(wux)
library(lattice)

# wux information:
# https://cran.r-project.org/web/packages/wux/wux.pdf

path <- "/net/nfs/merrimack/raid/Northeast_US_Downscaling_cmip5/"
vars <- c("tasmin", "tasmax") #, "pr")
mods <- c("ACCESS1-0", "ACCESS1-3", "bcc-csm1-1", "bcc-csm1-1-m")
#"CanESM2", "CCSM4", "CESM1-BGC", "CESM1-CAM5", "CMCC-CM",
#"CMCC-CMS", "CNRM-CM5", "CSIRO-Mk3-6-0", "FGOALS-g2", "GFDL-CM3",
#"GFDL-ESM2G", "GFDL-ESM2M", "HadGEM2-AO", "HadGEM2-CC", "HadGEM2-ES",
#"inmcm4", "IPSL-CM5A-LR", "IPSL-CM5A-MR", "MIROC5", "MIROC-ESM-CHEM",
#"MIROC-ESM", "MPI-ESM-LR", "MPI-ESM-MR", "MRI-CGCM3", "NorESM1-M")
scns <- c("rcp45", "rcp85") #, "historical")

# A character vector containing the names of the models to be processed
climate.models <- c(mods)

# csv file of important cities we want to look at (with lat/lon)
cities.path <-
  "/net/home/cv/marina/Summer2017_Projects/Lat_Lon/NE_Important_Cities.csv"
necity.vars <- c("City", "State", "Population",
                 "Latitude", "Longitude", "Elevation(meters)")

# package = wux -- models2wux
# models2wux(userinput, modelinput)

# modelinput information
# Start 4 nested loops to assemble the netcdf file names
for (iv in 1:2) {
  for (im in 1:4) {
    for (is in 1:2) {
      for (i in 2006:2099) {
        modelinput <- paste(path, vars[iv], "_day_", mods[im], "_", scns[is], "_r1i1p1_", i, "0101-", i, "1231.16th.nc", sep = "")
        print(modelinput)
        full <- modelinput # keep the most recent full file name
      } # end of year loop
    } # end of scenario loop
  } # end of model loop
} # end of variable loop

# this line will print the full file name
print(full)
# more modelinput information necessary? List of models

# package = wux -- models2wux
# userinput information
parameter.names <- c("tasmin", "tasmax")
reference.period <- "2006-2099"
scenario.period <- "2006-2099"
# temporal.aggregation <- ... # maybe don't need this
# subregions <- ... # will identify key areas we want to look at (important cities?)
# uses projection file

# These both read the .csv file (first uses 'utils', second uses 'wux')
#1
cities.read <- read.delim(cities.path, header = TRUE, sep = ",")
#2
read.table <- read.wux.table(cities.path)
cities.read <- subset(cities.read, subreg = "City", sep = ",")

# To read only "Cities", "Latitude", and "Longitude"
cities.points <- subset(cities.read, select = c(1, 4, 5))
cities.points <- as.data.frame(cities.points)
colnames(cities.points) <- c("City", "Latitude", "Longitude")

# Set plot coordinates for .csv graph
coordinates(cities.points) <- ~ Longitude + Latitude
proj4string(cities.points) <- c("+proj=longlat +datum=WGS84 +ellps=WGS84 +towgs84=0,0,0")
subregions <- proj4string(cities.points)

# area.fraction <- ... # reread pdf on this (p. 24)
# Do we want area.fraction = T or F? (FALSE is the default behavior)
spatial.weighting <- FALSE
# spatial.weighting = TRUE enables cosine weighting of latitudes, whereas omitting it or
# setting FALSE results in an unweighted arithmetic areal mean (default). This option is
# valid only for data on a regular grid.
na.rm = FALSE # keeps NA values
# plot.subregions: refer to pdf p. 25
# save.as.data saves data to a specific file
# 1. use the brick function to read the full netCDF file.
#    note: the varname argument is not necessary, but if a file has multiple variables,
#    brick will read the first one by default.
air_t <- brick(full, varname = vars[iv])

# 2. use the calc function to get average, min, max for each year over the entire set of models
annualmod_ave_t <- calc(air_t, fun = mean, na.rm = TRUE)
annualmod_max_t <- calc(air_t, fun = max, na.rm = TRUE)
annualmod_min_t <- calc(air_t, fun = min, na.rm = TRUE)

if (i == 2006) {
  annual_ave_stack <- annualmod_ave_t
  annual_max_stack <- annualmod_max_t
  annual_min_stack <- annualmod_min_t
} else {
  annual_ave_stack <- stack(annual_ave_stack, annualmod_ave_t)
  annual_max_stack <- stack(annual_max_stack, annualmod_max_t)
  annual_min_stack <- stack(annual_min_stack, annualmod_min_t)
} # end of if/else
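As an aside, the first-year special case can be avoided entirely by starting from an empty RasterStack and appending with raster's addLayer. A sketch under the same file-naming assumptions as the loop above, with one variable, model, and scenario hard-coded for brevity:

library(raster)

annual_ave_stack <- stack() # empty stack to accumulate one mean layer per year
for (i in 2006:2099) {
  f <- paste0(path, "tasmin_day_ACCESS1-0_rcp45_r1i1p1_", i, "0101-", i, "1231.16th.nc")
  if (file.exists(f)) {
    yearly <- brick(f, varname = "tasmin")
    annual_ave_stack <- addLayer(annual_ave_stack, calc(yearly, fun = mean, na.rm = TRUE))
  }
}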

How to add polylines from one location to others separately using leaflet in shiny?

I'm trying to add polylines from one specific location to many others in shiny R using addPolylines from leaflet. But instead of linking from one location to each of the others, I am only able to link them all together in a sequence. The best example of what I'm trying to achieve is a cricket wagon-wheel diagram.
observe({
  long.path <- c(-73.993438700, (locations$Long[1:9]))
  lat.path <- c(40.750545000, (locations$Lat[1:9]))
  proxy <- leafletProxy("map", data = locations)
  if (input$paths) {
    proxy %>% addPolylines(lng = long.path, lat = lat.path, weight = 3, fillOpacity = 0.5,
                           layerId = ~locations, color = "red")
  }
})
It is in a reactive expression as I want them to be activated by a checkbox.
I'd really appreciate any help with this!
Note: I'm aware the OP asked for a leaflet answer, but this question piqued my interest enough to seek an alternative solution, so here are two.
Example - mapdeck
Mapdeck (my package) uses Deck.gl on a Mapbox map, so you need a Mapbox API key to use it. But it does let you plot 2.5D arcs.
It works on data.frames and data.tables, as well as sp and sf objects.
center <- c(144.983546, -37.820077)
df_hits$center_lon <- center[1]
df_hits$center_lat <- center[2]
df_hits$score <- sample(c(1:4, 6), size = nrow(df_hits), replace = TRUE)

library(mapdeck)
set_token("MAPBOX")

mapdeck(
  style = mapdeck_style("satellite")
) %>%
  add_arc(
    data = df_hits
    , origin = c("center_lon", "center_lat")
    , destination = c("lon", "lat")
    , stroke_from = "score"
    , stroke_to = "score"
    , stroke_width = "score"
    , palette = "magma"
  )
Example - googleway
This example uses googleway (also my package), which interfaces with the Google Maps API; it also works on data.frames and data.tables (as well as sp and sf).
The trick is in the encodeCoordinates function, which encodes coordinates (lines) into a Google polyline.
library(data.table)
library(googleway)
library(googlePolylines) ## gets installed when you install googleway

center <- c(144.983546, -37.820077)

setDT(df_hits) ## data given at the end of the post

## generate a 'hit' id
df_hits[, hit := .I]

## generate a random score for each hit
df_hits[, score := sample(c(1:4, 6), size = .N, replace = TRUE)]

df_hits[
  , polyline := encodeCoordinates(c(lon, center[1]), c(lat, center[2]))
  , by = hit
]

set_key("GOOGLE_MAP_KEY") ## you need an API key to load the map

google_map() %>%
  add_polylines(
    data = df_hits
    , polyline = "polyline"
    , stroke_colour = "score"
    , stroke_weight = "score"
    , palette = viridisLite::plasma
  )
The dplyr equivalent would be
df_hits %>%
  mutate(hit = row_number(),
         score = sample(c(1:4, 6), size = n(), replace = TRUE)) %>%
  group_by(hit, score) %>%
  mutate(
    polyline = encodeCoordinates(c(lon, center[1]), c(lat, center[2]))
  )
Data
df_hits <- structure(list(lon = c(144.982933659011, 144.983487725258,
144.982804912978, 144.982869285995, 144.982686895782, 144.983239430839,
144.983293075019, 144.983529109412, 144.98375441497, 144.984103102141,
144.984376687461, 144.984183568412, 144.984344500953, 144.984097737723,
144.984065551215, 144.984339136535, 144.984001178199, 144.984124559814,
144.984280127936, 144.983990449363, 144.984253305846, 144.983030218536,
144.982896108085, 144.984022635871, 144.983786601478, 144.983668584281,
144.983673948699, 144.983577389175, 144.983416456634, 144.983577389175,
144.983282346183, 144.983244795257, 144.98315360015, 144.982896108085,
144.982686895782, 144.982617158347, 144.982761997634, 144.982740539962,
144.982837099486, 144.984033364707, 144.984494704658, 144.984146017486,
144.984205026084), lat = c(-37.8202049841516, -37.8201201023877,
-37.8199253045246, -37.8197812267274, -37.8197727515541, -37.8195269711051,
-37.8197600387923, -37.8193828925304, -37.8196964749506, -37.8196583366193,
-37.8195820598976, -37.8198956414717, -37.8200651444706, -37.8203575362288,
-37.820196509027, -37.8201032825917, -37.8200948074554, -37.8199253045246,
-37.8197897018997, -37.8196668118057, -37.8200566693299, -37.8203829615443,
-37.8204295746001, -37.8205355132537, -37.8194761198756, -37.8194040805737,
-37.819569347103, -37.8197007125418, -37.8196752869912, -37.8195015454947,
-37.8194930702893, -37.8196286734591, -37.8197558012046, -37.8198066522414,
-37.8198151274109, -37.8199549675656, -37.8199253045246, -37.8196964749506,
-37.8195862974953, -37.8205143255351, -37.8200270063298, -37.8197430884399,
-37.8195354463066)), row.names = c(NA, -43L), class = "data.frame")
I know this was asked a year ago, but I had the same question and figured out how to do it in leaflet.
You first have to adjust your dataframe, because addPolylines just connects all the coordinates in a sequence. It seems that you know your starting location and want it to branch out to 9 separate locations. I am going to start with your ending locations. Since you have not provided them, I will make a dataframe with 4 separate ending locations for the purpose of this demonstration.
dest_df <- data.frame(lat = c(41.82, 46.88, 41.48, 39.14),
                      lon = c(-88.32, -124.10, -88.33, -114.90))
Next, I am going to create a dataframe with the central location repeated to match the number of destination locations (4 in this example), using your original coordinates. I will explain why I'm doing this shortly.
orig_df <- data.frame(lat = rep.int(40.75, nrow(dest_df)),
                      lon = rep.int(-73.99, nrow(dest_df)))
The reason I am doing this is that addPolylines connects all the coordinates in a sequence. The way to get around this and create the image you described is to start at the starting point, go out to a destination point, come back to the starting point, and then go to the next destination point. To build the dataframe that does this, we interleave the two dataframes row by row, like so:
- starting point
- destination point 1
- starting point
- destination point 2
- and so forth...
The way I will do this is to create a key for both dataframes. For the origin dataframe, the key starts at 1 and increments by 2 (e.g., 1, 3, 5, 7). For the destination dataframe, it starts at 2 and increments by 2 (e.g., 2, 4, 6, 8). I will then combine the two dataframes with a UNION ALL and sort by the key, so that every other row is the starting point. I am going to use sqldf for this because that is what I'm comfortable with; there may be a more efficient way.
orig_df$sequence <- seq(1, length.out = nrow(orig_df), by = 2)
dest_df$sequence <- seq(2, length.out = nrow(dest_df), by = 2)

library("sqldf")

q <- "
  SELECT * FROM orig_df
  UNION ALL
  SELECT * FROM dest_df
  ORDER BY sequence
"
poly_df <- sqldf(q)
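Incidentally, the same interleaving can be done without sqldf; a short base R equivalent (not from the original answer):

poly_df <- rbind(orig_df, dest_df)            # stack the two dataframes
poly_df <- poly_df[order(poly_df$sequence), ] # sort so rows alternate origin/destination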
The new dataframe looks like this (notice how the origin location is interwoven between the destinations):
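The original screenshot is not reproduced here, but given the data above, poly_df should contain roughly:

    lat     lon sequence
1 40.75  -73.99        1
2 41.82  -88.32        2
3 40.75  -73.99        3
4 46.88 -124.10        4
5 40.75  -73.99        5
6 41.48  -88.33        6
7 40.75  -73.99        7
8 39.14 -114.90        8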
And finally, you can make your map:
library("leaflet")

leaflet() %>%
  addTiles() %>%
  addPolylines(
    data = poly_df,
    lng = ~lon,
    lat = ~lat,
    weight = 3,
    opacity = 3
  )
And the resulting map links each destination back to the central starting point, like the wagon wheel described above.
I hope this helps anyone who is looking to do something like this in the future
Here is a possible approach based on the mapview package. Simply create SpatialLines connecting your start point with each of the end points (stored in locations), bind them together and display the data using mapview.
library(mapview)
library(raster)

## start point
root <- matrix(c(-73.993438700, 40.750545000), ncol = 2)
colnames(root) <- c("Long", "Lat")

## end points
locations <- data.frame(Long = (-78):(-70), Lat = c(40:44, 43:40))

## create and append spatial lines
lst <- lapply(1:nrow(locations), function(i) {
  SpatialLines(list(Lines(list(Line(rbind(root, locations[i, ]))), ID = i)),
               proj4string = CRS("+init=epsg:4326"))
})
sln <- do.call("bind", lst)

## display data
mapview(sln)
Just don't get confused by the Line-to-SpatialLines procedure (see ?Line, ?SpatialLines).
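Since sp is in maintenance mode these days, the same idea can be expressed with sf instead; a hedged sketch of an equivalent (my addition, not part of the original answer):

library(sf)
library(mapview)

root <- c(-73.993438700, 40.750545000)
locations <- data.frame(Long = (-78):(-70), Lat = c(40:44, 43:40))

## one LINESTRING per destination, each starting at the root point
lines <- lapply(1:nrow(locations), function(i) {
  st_linestring(rbind(root, as.numeric(locations[i, ])))
})
sln_sf <- st_sfc(lines, crs = 4326)
mapview(sln_sf)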
