How can I create an Italy map with mapdeck?

I've found this code, which uses mapdeck to map road traffic casualties across the UK.
I would like to create something similar for Italy, but I don't understand how to modify the code to show the Italian area.
library(mapdeck)
set_token(Sys.getenv("MAPBOX"))
crash_data = read.csv("https://git.io/geocompr-mapdeck")
crash_data = na.omit(crash_data)
ms = mapdeck_style("dark")
mapdeck(style = ms, pitch = 45, location = c(0, 52), zoom = 4) %>%
  add_grid(data = crash_data, lat = "lat", lon = "lng", cell_size = 1000,
           elevation_scale = 50, layer_id = "grid_layer",
           colour_range = viridisLite::plasma(6))
Thank you!

The linked CSV contains the UK crash data, so the add_grid layer itself will stay over the UK; changing the location value only recentres the initial view. Note also that mapdeck's location argument is c(longitude, latitude), so the centre of Italy is roughly c(12, 43):
library(mapdeck)
set_token(Sys.getenv("MAPBOX"))
crash_data = read.csv("https://git.io/geocompr-mapdeck")
crash_data = na.omit(crash_data)
ms = mapdeck_style("dark")
mapdeck(style = ms, pitch = 45, location = c(12, 43), zoom = 4) %>%
  add_grid(data = crash_data, lat = "lat", lon = "lng", cell_size = 1000,
           elevation_scale = 50, layer_id = "grid_layer",
           colour_range = viridisLite::plasma(6))
If this has answered your question, well and good. You can also adjust the other details (zoom, pitch, colour range). To get an Italian grid layer rather than just an Italian view, you need Italian point data; a sketch follows.
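A minimal sketch of the Italian version, assuming you have an Italian point dataset with "lat" and "lng" columns (italy_crashes.csv here is a placeholder, not a real file):
library(mapdeck)
set_token(Sys.getenv("MAPBOX"))
# placeholder dataset: any Italian point data with "lat" and "lng" columns
italy_data = na.omit(read.csv("italy_crashes.csv"))
mapdeck(style = mapdeck_style("dark"), pitch = 45,
        location = c(12.5, 42.5), zoom = 5) %>%  # c(longitude, latitude)
  add_grid(data = italy_data, lat = "lat", lon = "lng", cell_size = 1000,
           elevation_scale = 50, layer_id = "grid_layer",
           colour_range = viridisLite::plasma(6))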

Related

meteoForecast for precipitation from GFS: Point outside GFS-MG region

Error with the getPoint function (GFS) for location (lon = 29.7, lat = -0.9) and variable "Precipitable_water_entire_atmosphere_single_layer": Point outside GFS-MG region.
See the script below:
library(meteoForecast)
list <- grepVar('prec', service = 'gfs', complete = TRUE)
list$name
list$label
testDay <- Sys.Date() - 1
cloudGFS <- getRaster(list$name[1],
                      day = testDay,
                      box = c(-30, 30, 30, 50),
                      service = 'gfs')
EACrain <- getRaster(list$name[1],
                     day = testDay,
                     box = c(12, 32, -10, 10),
                     service = 'gfs')
#levelplot(cloudGFS, layout = c(1, 1))
plot(EACrain, layout = c(1, 1))
rainpoint <- getPoint(c(29.7, -0.9), vars = list$name[1],
                      day = testDay, service = 'gfs')

How do I map county-level data as a heatmap using FIPS codes (interactively?) in R

I am hoping to create an interactive map that will allow users to change the year and variable plotted. I've seen the tmap package used, so I'm imagining something like that, but I'd also take advice for a static map, or another approach to an interactive one. My data is much, much richer than this, but looks something like:
example <- data.frame(fips = rep(as.numeric(c("37001", "37003", "37005", "37007", "37009", "37011", "37013", "37015", "37017", "37019")), 4),
                      year = c(rep(1990, 10), rep(1991, 10), rep(1992, 10), rep(1993, 10)),
                      life = sample(1:100, 40, replace = TRUE),
                      income = sample(8000:1000000, 40, replace = TRUE),
                      pop = sample(80000:1000000, 40, replace = TRUE))
I'd like my output to be a map of ONLY the counties contained in my dataset (in my case, all the counties in North Carolina, so I don't want a map of the whole USA) that shows a heatmap of selected variables of interest (in this sample data: year, life, income, and pop). Ideally I'd like one plot with two dropdown-type menus that let you select the year and the variable to view. A static map where I (rather than the user) define year and variable would be helpful if you don't know how to do the interactive thing.
I've tried the following (taken from here), but it's static, which is not ideal, and it also appears to map the whole USA, so the part that's actually contained in my data (North Carolina) is very small.
library(maps)
library(ggmap)
library(mapproj)
data(county.fips)
colors = c("#F1EEF6", "#D4B9DA", "#C994C7", "#DF65B0", "#DD1C77",
           "#980043")
example$colorBuckets <- as.numeric(cut(example$life, c(0, 20, 40, 60, 80,
                                                       90, 100)))
colorsmatched <- example$colorBuckets[match(county.fips$fips, example$fips)]
map("county", col = colors[colorsmatched], fill = TRUE, resolution = 0,
    lty = 0, projection = "polyconic")
Here's almost the whole solution. I had hoped some package would allow mapping by FIPS code alone, but I haven't found one yet; you have to download shapefiles and merge them by FIPS code. This code does everything I wanted above except let you also filter by year. I've asked that question here, so hopefully someone will answer there.
# get shapefiles (download from http://www2.census.gov/geo/tiger/GENZ2014/shp/cb_2014_us_county_5m.zip)
library(sf)
library(dplyr)
library(leaflet)
usgeo <- st_read("~/cb_2014_us_county_5m/cb_2014_us_county_5m.shp") %>%
  mutate(fips = as.numeric(paste0(STATEFP, COUNTYFP)))
### alternatively, this code *should* allow you to download the data ###
### directly, but something's slightly wrong. I'd love to know what. ####
# temp <- tempfile()
# download.file("http://www2.census.gov/geo/tiger/GENZ2014/shp/cb_2014_us_county_5m.zip",temp)
# data <- st_read(unz(temp, "cb_2014_us_county_5m.shp"))
# unlink(temp)
########################################################
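### A sketch of a likely fix for the block above: st_read() needs the ###
### sidecar files (.shx, .dbf, .prj) next to the .shp, and unz() can  ###
### only expose a single file, so unzip the whole archive first:      ###
# temp <- tempfile(fileext = ".zip")
# download.file("http://www2.census.gov/geo/tiger/GENZ2014/shp/cb_2014_us_county_5m.zip", temp)
# exdir <- tempdir()
# unzip(temp, exdir = exdir)
# usgeo <- st_read(file.path(exdir, "cb_2014_us_county_5m.shp")) %>%
#   mutate(fips = as.numeric(paste0(STATEFP, COUNTYFP)))
# unlink(temp)
########################################################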
# create fake data
example <- data.frame(fips = rep(as.numeric(c("37001", "37003", "37005", "37007", "37009", "37011", "37013", "37015", "37017", "37019")), 4),
                      year = c(rep(1990, 10), rep(1991, 10), rep(1992, 10), rep(1993, 10)),
                      life = sample(1:100, 40, replace = TRUE),
                      income = sample(8000:1000000, 40, replace = TRUE),
                      pop = sample(80000:1000000, 40, replace = TRUE))
# join fake data with shapefiles
example <- st_as_sf(example %>%
                      left_join(usgeo, by = "fips"))
# drop Z/M dimensions (leaflet needs plain XY geometries)
example$geometry <- st_zm(example$geometry, drop = T, what = "ZM")
# filter for a single year (which I don't want to have to do)
example <- example %>% filter(year == 1993)
# change projection
example <- sf::st_transform(example, "+proj=longlat +datum=WGS84")
# create popups
incomepopup <- paste0("County: ", example$NAME, ", avg income = $", example$income)
poppopup <- paste0("County: ", example$NAME, ", avg pop = ", example$pop)
yearpopup <- paste0("County: ", example$NAME, ", avg year = ", example$year)
lifepopup <- paste0("County: ", example$NAME, ", avg life expectancy = ", example$life)
# create color palettes
yearPalette <- colorNumeric(palette = "Blues", domain=example$year)
lifePalette <- colorNumeric(palette = "Purples", domain=example$life)
incomePalette <- colorNumeric(palette = "Reds", domain=example$income)
popPalette <- colorNumeric(palette = "Oranges", domain=example$pop)
# create map
leaflet(example) %>%
  addProviderTiles("CartoDB.Positron") %>%
  addPolygons(stroke = FALSE,
              smoothFactor = 0.2,
              fillOpacity = .8,
              popup = poppopup,
              color = ~popPalette(example$pop),
              group = "pop") %>%
  addPolygons(stroke = FALSE,
              smoothFactor = 0.2,
              fillOpacity = .8,
              popup = yearpopup,
              color = ~yearPalette(example$year),
              group = "year") %>%
  addPolygons(stroke = FALSE,
              smoothFactor = 0.2,
              fillOpacity = .8,
              popup = lifepopup,
              color = ~lifePalette(example$life),
              group = "life") %>%
  addPolygons(stroke = FALSE,
              smoothFactor = 0.2,
              fillOpacity = .8,
              popup = incomepopup,
              color = ~incomePalette(example$income),
              group = "income") %>%
  addLayersControl(baseGroups = c("income", "year", "life", "pop"),
                   position = "bottomleft",
                   options = layersControlOptions(collapsed = FALSE))
I'm still looking for a way to add a "year" filter as another interactive radio-button box to filter the data by different years; one possible workaround is sketched below.
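One workaround that stays within leaflet's own API (a sketch, not a tested solution): draw one polygon layer per year and let addLayersControl's radio buttons switch between them. This assumes the joined example object from above before the year == 1993 filter, plus the lifePalette defined earlier; only one variable is shown this way, since a map allows a single set of baseGroups, so combining it with the variable switcher would mean one group per year/variable pair.
library(leaflet)
library(dplyr)
years <- sort(unique(example$year))
m <- leaflet() %>% addProviderTiles("CartoDB.Positron")
# one polygon layer per year, toggled by the radio buttons below
for (y in years) {
  m <- m %>% addPolygons(data = filter(example, year == y),
                         stroke = FALSE, smoothFactor = 0.2, fillOpacity = .8,
                         popup = ~paste0("County: ", NAME, ", life expectancy = ", life),
                         color = ~lifePalette(life),
                         group = as.character(y))
}
m %>% addLayersControl(baseGroups = as.character(years),
                       position = "bottomleft",
                       options = layersControlOptions(collapsed = FALSE))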

Why does spplot take so much time for multiple panels

I am plotting multiple shapefiles using spplot. Here's code to construct the data:
library(raster)
library(randomcoloR)
my.shp <- getData('GADM', country = 'BRA', level = 2)
my.shp$ID<- 1:nrow(my.shp)
My data consists of a variable X for 10 years, where each column is a year:
df <- matrix(sample(100:5000, 55040, replace = T), nrow = 5504, ncol = 10)
df <- data.frame(ID = 1:nrow(my.shp), df)
my.dat <- merge(my.shp, df, by = "ID")
variable.names <- paste0("X",1:10)
spplot(my.dat, rev(variable.names), col = NA, at = seq(from = 100, to = 5000, by = 500),
       col.regions = distinctColorPalette(length(seq(from = 100, to = 5000, by = 500))),
       main = list(label = "TEST"))
My problem is that this plot takes a very long time (around an hour) to render, and I was wondering whether there is something inherently wrong in the code itself that makes it so slow. My laptop has 32 GB of RAM.
Thanks
I haven't compared this plot to your spplot because I don't want to spend an hour waiting for it.
Instead I propose using library(mapdeck) to plot an interactive map, which takes a matter of seconds.
Two things to note:
You need a Mapbox Access token
You need to convert the sp object to sf
library(raster)
my.shp <- getData('GADM', country = 'BRA', level = 2)
my.shp$ID <- 1:nrow(my.shp)
df <- matrix(sample(100:5000, 55040, replace = T), nrow = 5504, ncol = 10)
df <- data.frame(ID = 1:nrow(my.shp), df)
my.dat <- merge(my.shp, df, by = "ID")
library(sf)
sf <- sf::st_as_sf( my.dat )
library(mapdeck)
set_token( "YOUR_MAPBOX_TOKEN" )
mapdeck() %>%
  add_sf(data = sf,
         fill_colour = "GID_2")
Are you willing/able to switch to sf instead of sp?
The sf plot function is considerably faster than spplot, although the layout differs a bit.
library(sf)
my.dat_sf <- st_as_sf(my.dat)
plot(my.dat_sf[rev(variable.names)], max.plot = 10,
     breaks = c(seq(from = 100, to = 5000, by = 500), 5000),
     pal = distinctColorPalette(length(seq(from = 100, to = 5000, by = 500))),
     main = "TEST", border = NA, key.pos = 4)
Additionally, you could simplify the polygons with rmapshaper::ms_simplify() for Spatial* objects or sf::st_simplify() for simple features, which reduces the object size by quite a bit, depending on the given dTolerance. Plotting the simplified polygons will then also be faster.
The original, unsimplified object:
format(object.size(my.dat_sf), units = "Kb")
# "25599.2 Kb"
and a simplified SimpleFeature:
dat_sf_simple <- st_transform(my.dat_sf, crs = 3035)
dat_sf_simple <- st_simplify(dat_sf_simple, dTolerance = 1000, preserveTopology = T)
dat_sf_simple <- st_transform(dat_sf_simple, crs = 4326)
format(object.size(dat_sf_simple), units = "Kb")
# "7864.2 Kb"
Plot the simplified SimpleFeature, which takes about 1 minute on my machine with 8GB RAM.
plot(dat_sf_simple[rev(variable.names)], max.plot = 10,
     breaks = c(seq(from = 100, to = 5000, by = 500), 5000),
     pal = distinctColorPalette(length(seq(from = 100, to = 5000, by = 500))),
     main = "TEST", border = NA, key.pos = 4)
You could also try ggplot2, but I am pretty sure the most performant solution will be the sf plot.
library(ggplot2)
library(dplyr)
library(tidyr)
dat_sf_simple_gg <- dat_sf_simple %>%
  dplyr::select(rev(variable.names), geometry) %>%
  gather(VAR, SID, -geometry)
ggplot() +
  geom_sf(data = dat_sf_simple_gg, aes(fill = SID)) +
  facet_wrap(~VAR, ncol = 2)

How do I filter countries/us/us-all-all to only show ca and nv?

I can get all US counties and I can get one state, but I can't get all the counties from two states into the same map. Here is my R code:
hcmap("countries/us/us-all-all", data = dataframe, value = "value1",
joinBy = c("name"), name = "Name",
dataLabels = list(enabled = TRUE, format = "{point.name}"),
tooltip = list(valueDecimals = 0, valueSuffix = "%"),
pointFormat = "County: {point.name}<br/>{point.value1}")
I tried downloading each state's data:
camapdata <- get_data_from_map(download_map_data("countries/us/us-ca-all"))
nvmapdata <- get_data_from_map(download_map_data("countries/us/us-nv-all"))
Then I combined them into one data set, but hcmap() errors because the combined object is not a map name/URL.
I can also download and filter the whole US map:
mapdata <- get_data_from_map(download_map_data("countries/us/us-all-all"))
canvmap <- filter(mapdata, grepl("^us-ca", `hc-key`))
but I get the same problem.
Is there a way to filter the US county map to specific states within the hcmap function?
This is the best I have so far: it gets CA and NV into one map, but it still draws the whole US...
library(highcharter)
library(plyr)        # join()
library(data.table)  # setnames()
library(sqldf)
camapdata <- get_data_from_map(download_map_data("countries/us/us-ca-all"))
nvmapdata <- get_data_from_map(download_map_data("countries/us/us-nv-all"))
states2 <- join(camapdata, nvmapdata, by = c("fips"), type = "full", match = "all")
setnames(states2, old = c("hc-a2"), new = c("STABBR"))
setnames(states2, old = c("hc-middle-y"), new = c("HCMIDDLE"))
XXX <- sqldf("select * from states2 where STABBR in ('CA','NV')")
hcmap("countries/us/us-all-all", data = XXX, value = "HCMIDDLE",
      joinBy = c("name"), name = "Name",
      dataLabels = list(enabled = TRUE, format = "{point.name}"),
      tooltip = list(valueDecimals = 0, valueSuffix = "%"),
      pointFormat = "County: {point.name}<br/>{point.value1}")
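An alternative sketch that avoids drawing the whole US: filter the downloaded map itself before building the chart. This assumes hc_add_series_map() and the GeoJSON-like list returned by download_map_data() (a list with a features element whose entries carry properties such as hc-key); the joinBy column must exist in your data.
library(highcharter)
# download the full US county map and keep only CA and NV features
mapdata <- download_map_data("countries/us/us-all-all")
mapdata$features <- Filter(
  function(f) grepl("^us-(ca|nv)-", f$properties$`hc-key`),
  mapdata$features
)
highchart(type = "map") %>%
  hc_add_series_map(map = mapdata, df = states2, value = "HCMIDDLE",
                    joinBy = "name", name = "Name")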

R googleway map cannot be loaded

I am trying to use R googleway to analyze crime records from NY Open Data. I want to add precinct polygons and crime circles to a New York City map. However, even when I reduce the total number of crime points to 19k, the created map still fails to load. Please see the code below.
map_key = "api_key
ggmap = google_map(location = c(mean(40.730610), mean(-73.935242)), zoom =
11, key = map_key)
ggmap %>% add_polygons(data = nypp_df_gg, lat = "lat", lon = "lon", id =
"ID", pathId = "pathID") %>% add_circles(lat = "Latitude", lon =
"Longitude", data = data.frame(NYPD_complaint_bf2006))
It does work if I limit the rows to 500. Is there a way to visualize more than 1MM observations? I also tried add_heatmap, but without any luck.
The code that works is
ggmap %>%
  add_polygons(data = nypp_df_gg, lat = "lat", lon = "lon",
               id = "ID", pathId = "pathID") %>%
  add_circles(lat = "Latitude", lon = "Longitude",
              data = data.frame(NYPD_complaint_bf2006[1:500, ]))
You can plot 22k circles using add_circles() if you play around with the load interval.
add_circles(lat = "Y", lon = "X", info_window = "SPECIES",
            update_map_view = FALSE, focus_layer = FALSE,
            load_interval = 25, radius = 5, ...)
I sorted my data so it would start filling in from the centre of the map, using a distance calculation to a centre lat/lon I defined; a sketch of that step is below.
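A minimal sketch of that sorting step, assuming the NYPD_complaint_bf2006 data frame from the question with Latitude/Longitude columns; squared planar distance is enough for ranking nearby points.
# centre used in the question
center_lat <- 40.730610
center_lon <- -73.935242
# order rows by squared distance to the centre, nearest first
d2 <- (NYPD_complaint_bf2006$Latitude - center_lat)^2 +
  (NYPD_complaint_bf2006$Longitude - center_lon)^2
NYPD_complaint_bf2006 <- NYPD_complaint_bf2006[order(d2), ]
# circles then load centre-outward in batches of load_interval
google_map(location = c(center_lat, center_lon), zoom = 11, key = map_key) %>%
  add_circles(lat = "Latitude", lon = "Longitude",
              data = data.frame(NYPD_complaint_bf2006),
              update_map_view = FALSE, load_interval = 25, radius = 5)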
