Global Leaflet Map in R - issues adding data to spatial object

I am trying to replicate this visual, but with my own data. This is the template I am working off of - https://r-graph-gallery.com/183-choropleth-map-with-leaflet.html
My intent is to highlight every country with a value in the same color. I might make it a heatmap or something - but right now adding the polygons gives an error so I cannot try any color options at all.
# Setup
library(leaflet)
library(rgdal)
library(here)
library(tidyverse)
# Basically copy-pasted from the template, but the download did not work, so I manually downloaded the zip from the website, unzipped it, and dropped the contents in my working directory
# download.file("http://thematicmapping.org/downloads/TM_WORLD_BORDERS_SIMPL-0.3.zip" , destfile="DATA/world_shape_file.zip")
# system("unzip DATA/world_shape_file.zip")
world_spdf <- readOGR(
  dsn = here(),
  layer = "TM_WORLD_BORDERS_SIMPL-0.3",
  verbose = FALSE
)
world_spdf@data$POP2005[which(world_spdf@data$POP2005 == 0)] <- NA
world_spdf@data$POP2005 <- round(as.numeric(as.character(world_spdf@data$POP2005)) / 1000000, 2)
# Example of my data - I have countries and numbers associated with them, although not every country has a number
country <- c("Algeria", "Argentina", "Australia")
values <- c(1,4,4)
my_df <- data.frame(country, values)
# This is how I am trying to add MY values to the map. I have to convert the map to a tibble, add my data, then convert it back to a map. Perhaps this is the problem?
interactive_data_attempt <- world_spdf %>%
  as.tibble() %>%
  left_join(my_df, by = c("NAME" = "country")) %>%
  mutate(texts = replace_na(texts, 0),
         exists = texts > 1) %>%
  st_as_sf(coords = c("LON","LAT"))
# This is the method I used to do the exact same thing in a domestic US map
bins <- c(seq(0,1,1), Inf)
pal <- colorBin(c("white","#C14A36"), domain = interactive_data_attempt$exists, bins = bins, reverse = FALSE)
# This gives an error: Error in to_ring.default(x) : Don't know how to get polygon data from object of class XY,POINT,sfg
leaflet(interactive_data_attempt) %>%
  addTiles() %>%
  setView(lat = 10, lng = 0, zoom = 2) %>%
  addPolygons(fillColor = ~pal(interactive_data_attempt$exists))

You use readOGR() to get an sp object, but at one point you convert it to a tibble and then to sf? In most cases you can handle sf like a regular tibble / data frame, i.e. left_join() to it directly. And you can read the shapefile straight into sf with st_read().
Then there's something off with your variables, a copy-paste mixup I would guess: in my_df you have values but you never do anything with it, while your mutate() uses texts, which is defined nowhere.
Your binary palette is built from exists, a boolean that only indicates whether a value is present at all, though I'd assume you actually want to colour by my_df$values.
In the example below I left NA values as-is, changed the bins (to just 2) and adjusted some colours.
library(leaflet)
library(sf)
library(dplyr)
library(tidyr)
# download.file("http://thematicmapping.org/downloads/TM_WORLD_BORDERS_SIMPL-0.3.zip" , destfile="world_shape_file.zip")
# unzip("world_shape_file.zip",exdir = "world_shape_file")
world_sf <- st_read("world_shape_file")
world_sf$POP2005[which(world_sf$POP2005 == 0)] <- NA
world_sf$POP2005 <- round(as.numeric(as.character(world_sf$POP2005)) / 1000000, 2)
country <- c("Algeria", "Argentina", "Australia")
values <- c(1,4,4)
pal <- colorBin(c("blue","#C14A36"), domain = values, bins = 2, reverse = FALSE, na.color = "transparent")
world_sf %>%
  left_join(
    tibble(country, values),
    by = c("NAME" = "country")) %>%
  leaflet() %>%
  addTiles() %>%
  setView(lat = 10, lng = 0, zoom = 2) %>%
  addPolygons(fillColor = ~pal(values), stroke = FALSE)
Created on 2022-11-12 with reprex v2.0.2
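If you later want the heatmap-style colouring mentioned in the question, swapping colorBin() for colorNumeric() over the joined values is a small change. A minimal sketch under the same data assumptions ("YlOrRd" is just one palette choice):
library(leaflet)
# Continuous palette over the joined values; countries without a value
# in my_df stay transparent rather than being forced into a bin
pal_num <- colorNumeric("YlOrRd", domain = values, na.color = "transparent")
world_sf %>%
  left_join(tibble(country, values), by = c("NAME" = "country")) %>%
  leaflet() %>%
  addTiles() %>%
  addPolygons(fillColor = ~pal_num(values), stroke = FALSE)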

Related

How to get rid of the "Error in CPL_write_ogr" error when using "st_write" to export an sf object?

I'm trying to intersect two sf objects for the US (one at the township level and the other at the census tract level). I'm getting both using tigris and tidycensus. My final goal is to have a single sf object at the township level carrying information from both of the original objects. After I do this intersection, I want to export the sf object using st_write from the sf package. Here is the code I've used:
library(tigris)
library(sf)
library(purrr)
library(tidycensus)
library(tidyr)
library(dplyr)
##Data at township level
#---------------------------#
MN_Township_SHP <- county_subdivisions("Minnesota", cb = TRUE)%>% st_transform(., crs=32618)
MN_Township_SHP$County <- substr(MN_Township_SHP$NAMELSADCO,1,nchar(MN_Township_SHP$NAMELSADCO)-7)
Dataset <- MN_Township_SHP
#Data at census tract level
#---------------------------#
Sys.getenv("CENSUS_API_KEY")
my_vars <-
  c(total_pop = "B01003_001",
    race_denominator = "B02001_001", # Total
    white = "B02001_002")
mn <- unique(fips_codes$state)[24]
MN_CensusTract_SHP <- map_df(mn, function(x) {
  get_acs(geography = "tract",
          geometry = TRUE,
          variables = my_vars,
          state = x) })
MN_CensusTract_SHP <- MN_CensusTract_SHP %>% dplyr::select(-moe)
Social_Dat <-
  MN_CensusTract_SHP %>%
  as.data.frame() %>%
  pivot_wider(names_from = variable,
              values_from = c(estimate)) %>%
  dplyr::mutate(year = 2021) %>%
  dplyr::rename_all(~ paste0("ACS_", .))
Social_Dat$ACS_year <- as.double(Social_Dat$ACS_year)
Social_Dat$ACS_GEOID <- as.double(Social_Dat$ACS_GEOID)
Social_Dat <- st_as_sf(Social_Dat, sf_column_name = 'ACS_geometry')%>% st_transform(., crs=32618)
#Intersection between township and census tract levels
#---------------------------#
final_df <- st_intersection(Dataset, Social_Dat, all=TRUE)
#Export sf object as shapefile
#---------------------------#
st_write(final_df, "Input_Intermediate/final_df.shp", delete_layer = TRUE)
However, when I run this final step, I get the following error:
"Error in CPL_write_ogr(obj, dsn, layer, driver, as.character(dataset_options), :
Write error"
Does anyone know how to solve this? I've tried many fixes I found on Google, but none of them worked for me. Many thanks in advance!
st_intersection can return a mix of geometry types: where polygon borders merely touch, the result includes points and lines alongside the polygons you actually want, and the shapefile format only allows one geometry type per layer. You can use st_collection_extract to keep only the polygon type from your geometry column, and then st_write works again.
df <- st_collection_extract(final_df, type = "POLYGON")
st_write(df, "Input_Intermediate/final_df.shp", delete_layer = TRUE)
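To confirm mixed geometry types are the culprit before writing, you can tabulate them. A quick check, assuming final_df from the code above:
# Count each geometry type in the intersection result; anything beyond
# a single (MULTI)POLYGON type will not fit into one shapefile layer
table(st_geometry_type(final_df))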

Travel Time Matrix doesn't seem to be including transit travel times in r5r

I am trying to use the r5r R package to create an isochrone accessibility study involving supermarkets in the City of Cleveland. I started by getting the boundary for the city, created a grid, and generated the centroids for the grid. I then used OSM to get the street network and locations of supermarkets. Finally, I created a travel time matrix using the r5r package. My code is below:
# load required packages
library(tidycensus)
library(tidytransit)
library(tmap)
library(osmdata)
library(tidyverse)
library(osmextract)
library(tigris)
library(r5r)
library(sf)
cleveland_boundary = places("Ohio") %>% filter(NAME == "Cleveland") %>% st_transform(4326)
cleveland_grid = st_make_grid(cleveland_boundary, square = FALSE, n = c(100,100),
                              what = "polygons") %>%
  st_as_sf() %>% st_filter(cleveland_boundary) %>%
  mutate(id = row_number()) %>% st_transform(4326)
cleveland_centroids = st_centroid(cleveland_grid)
cle_file = oe_match("Cleveland, Ohio")
cle_grocery = oe_read(cle_file$url, layer = "points", quiet = TRUE) %>%
  st_transform(crs = st_crs(cleveland_boundary)) %>% st_filter(cleveland_boundary) %>%
  rename(id = osm_id) %>% st_transform(4326)
dir.create("cle_network")
cleveland_streets = oe_read(cle_file$url, layer = "lines", quiet = TRUE, download_directory = "cle_network") %>%
  filter(!is.na(highway)) %>%
  st_transform(crs = st_crs(cleveland_boundary)) %>% st_filter(cleveland_boundary)
options(java.parameters = "-Xmx2G") # set up r5r core
r5r_core <- setup_r5("cle_network", verbose = FALSE, overwrite = TRUE)
ttm_wkday = travel_time_matrix(r5r_core = r5r_core,
                               origins = cle_grocery,
                               destinations = cleveland_centroids,
                               mode = c("WALK", "TRANSIT"),
                               departure_datetime = as.POSIXct("08-12-2022 14:00:00",
                                                               format = "%d-%m-%Y %H:%M:%S"),
                               max_walk_dist = 1000,
                               max_trip_duration = 480,
                               verbose = FALSE)
I obtained GTFS data from https://www.riderta.com/sites/default/files/gtfs/latest/google_transit.zip and saved it as "CLEgtfs.zip" in the cle_network directory created in the above code.
The output of this code only gives me 532 results, with a maximum travel time of 29 minutes. This is clearly not correct, and it seems that transit travel times are not being factored in. Since I set a maximum walking distance of 1000 meters, I suspect only walking travel time is included in this travel time matrix. Is there any reason this may be happening? I would appreciate any guidance!
The departure datetime you're using is outside the valid range set in the calendar.txt of the GTFS feed. The earliest start_date in the feed is 11/12/2022 (dd/mm/yyyy), but your departure date is 08/12/2022.
Basically, none of the transit services described in the feed run on that day, which is why you get a walk-only matrix.
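A quick way to verify this is to read the feed with tidytransit (already loaded in the question) and inspect its service window before picking a departure date. A sketch, assuming the feed was saved as cle_network/CLEgtfs.zip as described, and that it ships a calendar.txt (some feeds use calendar_dates.txt instead):
library(tidytransit)
# Read the feed and check the dates its services are valid for;
# departures outside this window produce walk-only results in r5r
gtfs <- read_gtfs("cle_network/CLEgtfs.zip")
range(gtfs$calendar$start_date)
range(gtfs$calendar$end_date)
# then pick a departure_datetime inside that window, e.g. 14-12-2022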

Leaflet polygon fillColor argument always produces error

I've been trying to produce a basic leaflet map of low income benefits in Australia.
library(leaflet)
library(sf)
library(downloader)
library(dplyr)
library(RColorBrewer)
url <- "https://data.gov.au/dataset/0ed76b6e-4f6e-4cc6-9f95-0b4e3729dc69/resource/b50acd1e-5c0d-4260-8ebd-997fe97b7adb/download/asgs2017.gpkg"
download(url, dest="dataset2017.gpkg", mode="wb")
lga_data <- st_read("dataset2017.gpkg",layer = "local_government_area_2017")
unlink("dataset2017.gpkg")
dss_payments <- read.csv("https://data.gov.au/data/dataset/8dc2eaa3-e052-43a4-806b-91ca9306c346/resource/1cdccd05-80cd-46db-8d5a-933b4a4d129f/download/dss-demographics-march-2021-lga-csv-geo-au.csv") %>%
  mutate(LGA_Code_2020 = as.character(LGA_Code_2020))
df <- lga_data %>%
  select(LGA_CODE_2017, LGA_NAME_2017) %>%
  mutate(LGA_CODE_2017 = as.character(LGA_CODE_2017)) %>%
  left_join(dss_payments, by = c("LGA_CODE_2017" = "LGA_Code_2020")) %>%
  filter(!is.na(LGA_name_2020))
rm(dss_payments, lga_data)
pal <- colorBin("PuRd", domain = df$Low.Income.Card, bins = seq(from=0,to=5000,by=500))
m <- leaflet() %>%
  #addProviderTiles("CartoDB.DarkMatter") %>%
  leaflet::addPolygons(data = st_sfc(df$shape)
                       , weight = 1
                       , opacity = 1
                       , color = "white"
                       , dashArray = "3"
                       , fillColor = ~pal(Low.Income.Card)
  )
m
If I run it all except fillColor it works perfectly, but with fillColor I get a cryptic and difficult-to-interpret error message:
Error in UseMethod("metaData") : no applicable method for 'metaData' applied to an object of class "c('sfc_MULTIPOLYGON', 'sfc')"
I'm struggling to find many usable resources online. Has anyone run into this issue before, or can you work out what the issue is?
Thanks!
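One likely explanation, offered as a sketch rather than a confirmed fix: st_sfc(df$shape) hands addPolygons() a bare geometry column, so the formula ~pal(Low.Income.Card) has no attribute table to be evaluated against, which is what the metaData error is complaining about. Passing the sf object itself keeps geometry and attributes together:
# Hand leaflet the full sf object so the ~ formula can find the
# Low.Income.Card column among the attributes
m <- leaflet(df) %>%
  addPolygons(weight = 1
              , opacity = 1
              , color = "white"
              , dashArray = "3"
              , fillColor = ~pal(Low.Income.Card)
  )
m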

How to filter a large geojson file in R

I'm having issues filtering a large geojson file with R. If I just want to show the map for one country, I don't want to load the entire map of Europe, which is huge. So I want to filter the dataset, for example to Bulgaria (CNTR_CODE == "BG"), but I can't manage it.
Please find the download code below, along with an initial effort that doesn't produce the desired outcome.
link <- 'https://ec.europa.eu/eurostat/cache/GISCO/distribution/v2/nuts/download/ref-nuts-2013-01m.geojson.zip'
temp <- tempfile()
download.file(link,temp)
mapdata <- readLines(unzip(temp, "NUTS_RG_01M_2013_4326_LEVL_3.geojson"))
mapdata <- jsonlite::fromJSON(mapdata, simplifyVector = FALSE)
#glimpse(mapdata)
mapdata$features[[100]]$properties$CNTR_CODE
[1] "BG"
library(sf)
mapdata2 <- copy(mapdata)
mapdata2 %>%
filter(CNTR_CODE %in% c('BG'))
Thanks.
That's not such a large file if you use the right tools: library(geojsonsf) can read the geojson directly into an sf object, and library(mapdeck) can plot all the polygons.
link <- 'https://ec.europa.eu/eurostat/cache/GISCO/distribution/v2/nuts/download/ref-nuts-2013-01m.geojson.zip'
temp <- tempfile()
download.file(link,temp)
library(geojsonsf)
library(sf)
sf <- geojsonsf::geojson_sf(unzip(temp, "NUTS_RG_01M_2013_4326_LEVL_3.geojson"))
Then you can filter the sf object and plot
library(mapdeck)
set_token( "YOUR_MAPBOX_API_TOKEN" )
mapdeck(
  style = mapdeck_style("dark")
) %>%
  add_polygon(
    data = sf[ sf$CNTR_CODE %in% c("BG"), ]
    , fill_colour = "NUTS_NAME"
    , legend = TRUE
  )
Or plot the whole lot
mapdeck(
  style = mapdeck_style("dark")
) %>%
  add_polygon(
    data = sf
    , fill_colour = "NUTS_NAME"
    , legend = TRUE
  )
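If you don't need an interactive map, note that dplyr verbs also work on sf objects, so the filter() attempt from the question works once the data has been read with geojson_sf(). A minimal sketch:
library(dplyr)
# Subset the sf object to Bulgaria and draw just its geometry
bg <- sf %>% filter(CNTR_CODE %in% c("BG"))
plot(st_geometry(bg))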
If you want a list, then Filter could work:
path = "NUTS_RG_01M_2013_4326_LEVL_3.geojson"
x <- jsonlite::fromJSON(path, simplifyVector = FALSE)
x$features <- Filter(function(z) z$properties$CNTR_CODE == "BG", x$features)
vapply(x$features, function(x) x$properties$CNTR_CODE, "")
If you want to keep the data in geojson character format, you could use jqr
path = "NUTS_RG_01M_2013_4326_LEVL_3.geojson"
x <- paste0(readLines(path), collapse = "")
xx <- jqr::jq(x, '.features |= map(select(.properties.CNTR_CODE == "BG"))')
jqr::jq(xx, '.features[].properties.CNTR_CODE')
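The jq output is still plain geojson text, so you can write the filtered result back to disk if you need a file (the name bulgaria.geojson is just an example):
# Persist the filtered FeatureCollection
writeLines(xx, "bulgaria.geojson")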

How to read in KML file properly in R, or separate out lumped variables into columns

I read in a KML file, using the following:
clinics = st_read(dsn = "Data/clinics-kml.kml","CLINICS")
However, all my variables (except the coordinates) got lumped into one column under Description.
What's the best way to separate the variables out? Alternatively, is there a way to import KML files properly and avoid this issue? You may view a screenshot of the problem here.
The problem (or maybe not) is that the Description column holds an HTML table as a string for each observation. That's fine if you want to parse the HTML and show a pretty table, for example in an interactive web map, but it's a headache if you just want the data inside.
It's possible to do the whole process within R, following these steps:
1) Download the KML file from the internet
2) Unzip the downloaded file
3) Read the KML file as a spatial object
4) Get the attributes for each observation
5) Bind the attributes to each observation as new columns
All the code is commented, see below:
library(tidyverse)
library(sf)
library(mapview)
library(rvest)
library(httr)
# 1) Download the kml file
moh_chas_clinics <- GET("https://data.gov.sg/dataset/31e92629-980d-4672-af33-cec147c18102/download",
                        write_disk(here::here("moh_chas_clinics.zip"), overwrite = TRUE))
# 2) Unzip the downloaded zip file
unzip(here::here("moh_chas_clinics.zip"))
# 3) Read the KML file as a Spatial object
moh_chas_clinics <- read_sf(here::here("chas-clinics-kml.kml"))
# Watch data
moh_chas_clinics %>%
glimpse()
# See map
mapview(moh_chas_clinics)
# 4) Get the attributes for each observation
# Option a) Using a simple lapply
attributes <- lapply(X = 1:nrow(moh_chas_clinics),
                     FUN = function(x) {
                       moh_chas_clinics %>%
                         slice(x) %>%
                         pull(Description) %>%
                         read_html() %>%
                         html_node("table") %>%
                         html_table(header = TRUE, trim = TRUE, dec = ".", fill = TRUE) %>%
                         as_tibble(.name_repair = ~ make.names(c("Attribute", "Value"))) %>%
                         pivot_wider(names_from = Attribute, values_from = Value)
                     })
# Option b) Using a Parallel lapply (faster)
future::plan("multisession")
attributes <- future.apply::future_lapply(X = 1:nrow(moh_chas_clinics),
                                          FUN = function(x) {
                                            moh_chas_clinics %>%
                                              slice(x) %>%
                                              pull(Description) %>%
                                              read_html() %>%
                                              html_node("table") %>%
                                              html_table(header = TRUE, trim = TRUE, dec = ".", fill = TRUE) %>%
                                              as_tibble(.name_repair = ~ make.names(c("Attribute", "Value"))) %>%
                                              pivot_wider(names_from = Attribute, values_from = Value)
                                          })
# 5) Bind the attributes to each observation as new columns
moh_chas_clinics_attr <- moh_chas_clinics %>%
  bind_cols(bind_rows(attributes)) %>%
  select(-Description)
# Watch new data
moh_chas_clinics_attr %>%
glimpse()
# New map
mapview(moh_chas_clinics_attr,
        zcol = "CLINIC_PROGRAMME_CODE",
        layer.name = "Clinic Programme Code")
A final example map shows all the attributes for a point, coloured by "Clinic Programme Code".
I figured out an alternative: use QGIS to convert the KML to SHP, then read the SHP into R.
