Adding polygon name data to point data in R

I have two sets of data: point-level data and polygon data. I am aiming to add the name of the polygon that each point falls in as an extra column on the point-level data.
I have found and used the code below utilising the sf library.
new_point_data <- point_data %>%
  mutate(
    intersection = as.integer(st_intersects(geometry, polygon_data)),
    area = if_else(is.na(intersection), NA_character_, polygon_data$Name[intersection])
  )
This works in about 90% of cases; however, when a point intersects more than one polygon the code does not bring any data back, which I assume is because st_intersects() returns two (or more) matches and cannot determine which to use. How can I update this to select just one value, e.g. the first?
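A minimal sketch of one way to do that (not an answer from the original thread): keep only the first polygon index returned by st_intersects() for each point. It assumes point_data and polygon_data are sf objects in the same CRS and that Name is a character column, as implied above.
library(sf)
library(dplyr)

# st_intersects() returns, for each point, an integer vector of matching
# polygon indices; keeping only the first element resolves points that fall
# in two or more polygons, and zero-length results become NA.
new_point_data <- point_data %>%
  mutate(
    intersection = sapply(
      st_intersects(geometry, polygon_data),
      function(idx) if (length(idx) == 0) NA_integer_ else idx[1]
    ),
    area = if_else(is.na(intersection), NA_character_, polygon_data$Name[intersection])
  )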

Related

Losing data using st_join

I am trying to perform a spatial join of two sf shapefiles.
output_inmap <- st_read("processed/ceidars_data_inmap.shp")
output_inmap <- st_transform(output_inmap, crs = 3310)
unzip("census-tract.zip")
census_tracts <- st_read("census-tract/tl_2019_06_tract.shp")
st_transform(census_tracts, crs = 3310)
st_transform(output_inmap, crs = 3310)
TC_1 <- st_join(census_tracts, output_inmap)
I am losing all information from the second data set (i.e. output_inmap). Whichever dataset is placed second returns all NA values. Does anyone know what could be happening?
Your second st_transform() of the census tracts leads nowhere: its result is never assigned, so census_tracts stays in its original CRS. Consider this code (slightly adjusted with a dplyr-style pipe) to ensure both spatial objects are in the same CRS.
You may also consider setting the left parameter of the sf::st_join() call (TRUE by default) to FALSE, which changes the behaviour from a left (preserving) join to an inner (filtering) join; sometimes this makes for more concise code (a short sketch of this variant follows the code below).
library(sf)
library(dplyr)

output_inmap <- st_read("processed/ceidars_data_inmap.shp") %>%
  st_transform(crs = 3310)

unzip("census-tract.zip")
census_tracts <- st_read("census-tract/tl_2019_06_tract.shp") %>%
  st_transform(crs = 3310)

TC_1 <- st_join(census_tracts, output_inmap)
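For completeness, a short sketch of the left = FALSE variant mentioned above (TC_1_inner is just an illustrative name): census tracts with no match in output_inmap are dropped instead of being filled with NA.
# Inner-style spatial join: keep only census tracts with at least one match
TC_1_inner <- st_join(census_tracts, output_inmap, left = FALSE)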

Choropleth: plotting polygons on a map with ggplot2 in R

I realise this has been asked about 100 times prior, but none of the answers I've read so far on SO seem to fit my problem.
I have data. I have the lat and lon values. I've read around about something called sp and made a bunch of shape objects in a dataframe. I have matched this dataframe with the variable I am interested in mapping.
I cannot for the life of me figure out how the hell to get ggplot2 to draw polygons. Sometimes it wants explicit x,y values (which are part of the shape anyway, so that seems redundant), or some other external shapefiles which I don't actually have. Short of colouring it in with highlighters, I'm at a loss.
If I take an individual sps object (built with the following function after importing, cleaning, and wrangling a shitload of data):
library(dplyr)  # for %>% and select()
library(sp)     # for Polygon(), Polygons(), SpatialPolygons()

createShape = function(sub){
  # This function takes the list of lat/lng values for a suburb and returns a
  # SpatialPolygons object which should be plottable on ggmap/ggplot
  tempData = as.data.frame(do.call(rbind,
    as.list(VICshapes[which(VICshapes$Suburb == sub), ] %>% select(coords))[[1]][[1]]))
  names(tempData) = c('lat', 'lng')
  p = Polygon(tempData)
  ps = Polygons(list(p), 1)
  sps = SpatialPolygons(list(ps))
  return(sps)
}
These shapes are then stored in the same dataframe as my data, which, as of this afternoon, I can't even look at, because trying to view it yields the following error:
head(plotdata)
Error in prettyNum(.Internal(format(x, trim, digits, nsmall, width, 3L, : first argument must be atomic
I realise I'm really annoyed at this now, but I have about 70% of a grade riding on this, and my university has nobody capable of assisting.
I have pasted the first few rows of data here - https://pastebin.com/vFqy5m5U - apparently you can't print data containing an S4 object (the shape that I'm trying to plot).
Anyway. I'm trying to plot each of those shapes onto a map. Polygons want an x,y value. I don't have ANY OTHER SHAPE FILES. I created them based on a giant list of lat and long values, and the code chunk above. I'm genuinely at a loss here and don't know what question to even ask. I have the variable of interest based on locality, and the shape for each locality. What am I missing?
Edit: I've pasted the summary data (BEFORE making them into shapes) here. It's a massive list of lat/lng values for EACH tile/area, so it's pretty big...
Answered on gis.stackexchange.com (link not provided).
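For reference, a minimal sketch of one common sp-era route (this is not the linked answer, and it assumes the sps objects produced by createShape() above): flatten each SpatialPolygons object with ggplot2's fortify() and draw it with geom_polygon(). Note that Polygon() treats the first coordinate column as x (longitude), so the lat/lng column order inside createShape() may need swapping.
library(sp)
library(ggplot2)

sps <- createShape("SomeSuburb")   # hypothetical value from VICshapes$Suburb
sps_df <- fortify(sps)             # columns long, lat, group, ... ready for ggplot2

ggplot(sps_df, aes(x = long, y = lat, group = group)) +
  geom_polygon(fill = "grey80", colour = "black") +
  coord_equal()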

R spplot labels in wrong places

I am working with the US census gazetteer data file (zcta5) which is publicly available. The version I am using has files named tl_2015_us_zcta510.shp, dbf... Plotting the file works fine.
The issue I am having seems to happen when I subset the SpatialPolygonsDataFrame with a larger number of polygons; when I use a small subset the labels work fine.
The labels I need identify the assigned grouping of postal codes that an individual 5-digit polygon area belongs to. For example, for the Ashtabula, OH postal codes I need each of them to have a label in the middle that reads "503". I have labels for all the other Ohio postal code groupings - the column is called "PostalGroupNumber" - and in table form the data all checks out to be correct.
So I load libraries and read the full spatial data frame into memory:
library(sp)
library(maps)
library(mapdata)
library(maptools)
library(foreign)
#Load in the entire census gazetteer data file
zcta5 <- readShapeSpatial("~/R/PostalCodes/USA/US Postal Codes/ZCTA5/tl_2015_us_zcta510.shp")
Next: create vector of Ashtabula, OH postal codes:
ashtab.zips <- c("44003","44004","44005","44010","44030","44032","44041","44047","44048","44068","44076","44082","44084","44085","44088","44093","44099")
Next - subset zcta5 Spatial Data Frame to include only these postal codes:
ashtab <- zcta5[which(zcta5@data$GEOID10 %in% ashtab.zips),]
Next - add labels to new ashtab spatial data frame and plot:
ashtab@data <- cbind(ashtab@data, "PostalGroupNumber" = "503")
l1 <- list("sp.text", coordinates(ashtab), as.character(ashtab@data$PostalGroupNumber),
           col = "black", cex = 0.7, font = 2)
spplot(ashtab, zcol = "GEOID10", sp.layout = list(l1),
       main = list(label = "PostalGroupNumber 503 Postal Areas", cex = 2, font = 1))
Which works and gives the following, correct plot of the postal areas of northeast Ohio with the right labels in them:
Pretty good - BUT - the scale on the right looks like it retained a huge number of GEOID10 levels, where I expected only the subset of 17 in the ashtab.zips vector. Side question (extra credit ;-): why are those levels still there?
Now on to the main problem. Ohio postal codes all start with 43... or 44... I have a csv file of just the 5-digit codes that are in Ohio, each with its assigned PostalGroupNumber, which I read into a data frame, clean up, and use to subset the main data frame as I did above:
oh <- read.csv("~/R/PostalCodes/OhioPostalGroupings/OH-PGAs-PostalCodes Only.csv",
               header = TRUE, stringsAsFactors = FALSE,
               colClasses = c("character", "character", "character"))
oh$ZIP_CODE <- trimws(oh$ZIP_CODE)
ohzcta5 <- zcta5[which(zcta5@data$GEOID10 %in% oh$ZIP_CODE),]
l1 <- list("sp.text", coordinates(ohzcta5), as.character(ohzcta5@data$GEOID10),
           col = "black", cex = 0.7, font = 2)
spplot(ohzcta5, zcol = "GEOID10", sp.layout = list(l1),
       main = list(label = "Ohio Postal Code - PostalGroupNumbers", cex = 2, font = 1))
This time I just plot with labels of the GEOID10 value to see if it plots correctly, and it does - it's hard to read here, but zooming in shows the correct postal code in each polygon (not a great image, but the shape of OH is right and the labels are correct):
Now I need to add the PostalGroupNumber labels to the spatial data frame, and make a factor to colour all the postal codes in a group the same colour. So Ashtabula should be all one colour and all its polygons should have "503" labels in them - but they do not:
ohzcta5@data <- merge(ohzcta5@data, oh, by.x = "GEOID10", by.y = "ZIP_CODE", all.x = TRUE)
ohzcta5@data <- cbind(ohzcta5@data, "TAcolor" = as.factor(ohzcta5@data$PostalGroupNumber))
l1 <- list("sp.text", coordinates(ohzcta5), as.character(ohzcta5@data$PostalGroupNumber),
           col = "black", cex = 0.7, font = 2)
spplot(ohzcta5, zcol = "GEOID10", sp.layout = list(l1),
       main = list(label = "Ohio Postal Code - PostalGroupNumber", cex = 2, font = 1))
Which now looks like this:
A closer look at Ashtabula (northeast corner) now looks like this - what happened to the labels?
The labels are all wrong - and yet when examining ohzcta5@data, the PostalGroupNumber is on the correct GEOID10 records.
Help!!!! Losing my mind.
Answers to the two issues:
1) The issue of too many levels retained in the spatial data frame, and therefore appearing on the spplot scale, is resolved by calling the base function droplevels() on each of the factors in the spatial data frame.
2) Don't use merge(), because it re-orders the data so it no longer aligns with the correct polygons. Instead use match(), as shown in this post: https://stackoverflow.com/a/3652472/4017087 (thanks Ramnath!). Both fixes are sketched below.
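A short sketch of both fixes, reusing the question's object names and assuming a freshly subset ohzcta5 (i.e. before the merge() call shown above):
# 1) Drop unused factor levels so the spplot colour key shows only the subset
ohzcta5@data$GEOID10 <- droplevels(ohzcta5@data$GEOID10)

# 2) Attach PostalGroupNumber with match() instead of merge(), preserving the
#    row order that keeps @data aligned with the polygons
ohzcta5@data$PostalGroupNumber <-
  oh$PostalGroupNumber[match(ohzcta5@data$GEOID10, oh$ZIP_CODE)]
ohzcta5@data$TAcolor <- as.factor(ohzcta5@data$PostalGroupNumber)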

Creating SpatialLinesDataFrame from SpatialLines object and basic df

Using leaflet, I'm trying to plot some lines and set their color based on a 'speed' variable. My data start at the encoded polyline (EPL) level, i.e. a series of lat/long points encoded as an alphanumeric string, with a single speed value for each EPL.
I'm able to decode the polylines to get series of lat/long points (thanks to Max, here), and I'm able to create segments from those series of points and format them as a SpatialLines object (thanks to Kyle Walker, here).
My problem: I can plot the lines properly using leaflet, but I can't join the SpatialLines object to the base data to create a SpatialLinesDataFrame, so I can't code the line color based on the speed var. I suspect the issue is that the IDs I'm assigning to the SL segments aren't matching those present in the base df.
The objects I've tried to join, with SpatialLinesDataFrame():
"sl_object", a SpatialLines object with ~140 observations, one for each segment; I'm using Kyle's code, linked above, with one key change - instead of creating an arbitrary iterative ID value for each segment, I'm pulling the associated ID from my base data. (Or at least I'm trying to.) So, I've replaced:
id <- paste0("line", as.character(p))
with
lguy <- data.frame(paths[[p]][1])
id <- unique(lguy[,1])
"speed_object", a df with ~140 observations of a single speed var and row.names set to the same id var that I thought I created in the SL object above. (The number of observations will never exceed but may be smaller than the number of segments in the SL object.)
My joining code:
splndf <- SpatialLinesDataFrame(sl = sl_object, data = speed_object)
And the result:
row.names of data and Lines IDs do not match
Thanks, all. I'm posting this in part because I've seen some similar questions - including some referring specifically to changing the ID output of Kyle's great tool - and haven't been able to find a good answer.
EDIT: Including data samples.
From sl_obj, a single segment:
print(sl_obj)
Slot "ID":
[1] "4763655"
[[151]]
An object of class "Lines"
Slot "Lines":
[[1]]
An object of class "Line"
Slot "coords":
lon lat
1955 -74.05228 40.60397
1956 -74.05021 40.60465
1957 -74.04182 40.60737
1958 -74.03997 40.60795
1959 -74.03919 40.60821
And the corresponding record from speed_obj:
row.names speed
... ...
4763657 44.74
4763655 34.8 # this one matches the ID above
4616250 57.79
... ...
To get rid of this error message, either make the row.names of data and Lines IDs match by preparing sl_object and/or speed_object, or, in case you are certain that they should be matched in the order they appear, use
splndf <- SpatialLinesDataFrame(sl = sl_object, data = speed_object, match.ID = FALSE)
This is documented in ?SpatialLinesDataFrame.
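A sketch of the first option, reusing the question's object names (sl_object, speed_object): reorder speed_object to the Lines IDs before constructing the SpatialLinesDataFrame, so the default match.ID check passes.
library(sp)

line_ids <- sapply(sl_object@lines, function(l) l@ID)   # IDs in sl_object order

# Reorder speed_object to that ID order; IDs with no speed record become NA rows
speed_aligned <- speed_object[match(line_ids, row.names(speed_object)), , drop = FALSE]
row.names(speed_aligned) <- line_ids

splndf <- SpatialLinesDataFrame(sl = sl_object, data = speed_aligned)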
All right, I figured it out. The error was complaining that my speed_obj wasn't the same length as my sl_obj, as mentioned here ("data: object of class data.frame; the number of rows in data should equal the number of Lines elements in sl").
Resolution: I used a quick loop to pull out all of the unique line IDs, then performed a left join against that list of uniques to create an exhaustive speed_obj (with NAs, which seem to be OK).
library(plyr)  # for join()

# Pull the ID slot out of each Lines element of the SpatialLines object
ids <- data.frame()
for (i in 1:length(sl_obj)) {
  id <- data.frame(sl_obj@lines[[i]]@ID)
  ids <- rbind(ids, id)
}
colnames(ids)[1] <- "linkId"

# Left join the speeds onto the full list of IDs (unmatched IDs get NA)
speed_full <- join(ids, speed_obj)

# One-column data frame of speeds, with row names set to the line IDs
speed_full_short <- data.frame(speed_full[, -1])
row.names(speed_full_short) <- speed_full$linkId
splndf <- SpatialLinesDataFrame(sl_obj, data = speed_full_short, match.ID = TRUE)
Works fine now!
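With the speeds attached, a possible follow-up sketch for the original goal of colouring the lines by speed in leaflet (this assumes the speed column in splndf ended up named speed; rename it first if the join left it with a different name):
library(leaflet)

pal <- colorNumeric(c("red", "yellow", "green"), domain = splndf$speed)

leaflet(splndf) %>%
  addTiles() %>%
  addPolylines(color = ~pal(speed), weight = 3, opacity = 0.8)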
I may have deciphered the issue.
When I pull in my spatial lines data and check the class, it reads as "SpatialLinesDataFrame" even though I know it's a simple linear shapefile. I'm using readOGR to bring the data in, and I believe this is where the conversion is occurring. With that in mind, the speed assignment is relatively easy.
sl_object$speed <- speed_object[match(sl_object$ID, row.names(speed_object)), "speed"]
This should do the trick, as I'm willing to bet your class(sl_object) is "SpatialLinesDataFrame".
EDIT: I had received the same error as OP, driving me to check class()
I am under the impression that the error you saw occurred because you were trying to coerce a data frame into a data frame, and R wasn't a fan of that.

Choropleth Maps in R - TIGER Shapefile issue

I have a question on mapping with R, specifically around choropleth maps.
I have a dataset of ZIP codes assigned to an area and some associated data (dataset is here).
My final data format is: Area ID, ZIP, Probability Value, Customer Count, Area Probability and Area Customer Total. I am attempting to present this data by plotting Area Probability and Area Customer Total on a map. I have tried to do this using the census TIGER shapefiles, but I guess R cannot handle the complete country.
I am comfortable with the statistical capabilities, and I am now moving all my mapping from third-party GIS-focused applications into R. Does anyone have pointers on how to achieve this from within R?
To be a little more detailed, here's the point where R stops working -
shapes <- readShapeSpatial("tl_2013_us_zcta510.shp")
(where the .shp file is the census/TIGER shapefile).
Edit - providing further details. I am first trying to read the TIGER shapefile, hoping to combine this spatial dataset with my data and eventually plot it. I am having an issue at the very beginning, when attempting to read the shapefile. Below is the code with the output:
require(maptools)
shapes <- readShapeSpatial("tl_2013_us_zcta510.shp")
Error: cannot allocate vector of size 317 Kb
There are several examples and tutorials on making maps using R, but most are very general and, unfortunately, most map projects have nuances that create inscrutable problems. Yours is a case in point.
The biggest issue I came across was that the US Census Bureau zip code tabulation area shapefile for the whole US is huge: ~800MB. When loaded using readOGR(...) the R SpatialPolygonsDataFrame object is about 913MB. Trying to process a file this size (e.g., converting to a data frame using fortify(...)), at least on my system, resulted in errors like the one you identified above. So the solution is to subset the file based on the zip codes that are actually in your data.
This map:
was made from your data using the following code.
library(rgdal)
library(ggplot2)
library(stringr)
library(RColorBrewer)
setwd("<directory containing shapfiles and sample data>")
data <- read.csv("Sample.csv",header=T) # your sample data, downloaded as csv
data$ZIP <- str_pad(data$ZIP,5,"left","0") # convert ZIP to char(5) w/leading zeros
zips <- readOGR(dsn=".","tl_2013_us_zcta510") # import zip code polygon shapefile
map <- zips[zips$ZCTA5CE10 %in% data$ZIP,] # extract only zips in your Sample.csv
map.df <- fortify(map) # convert to data frame suitable for plotting
# merge data from Samples.csv into map data frame
map.data <- data.frame(id=rownames(map@data),ZIP=map@data$ZCTA5CE10)
map.data <- merge(map.data,data,by="ZIP")
map.df <- merge(map.df,map.data,by="id")
# load state boundaries
states <- readOGR(dsn=".","gz_2010_us_040_00_5m")
states <- states[states$NAME %in% c("New York","New Jersey"),] # extract NY and NJ
states.df <- fortify(states) # convert to data frame suitable for plotting
ggMap <- ggplot(data = map.df, aes(long, lat, group = group))
ggMap <- ggMap + geom_polygon(aes(fill = Probability_1))
ggMap <- ggMap + geom_path(data=states.df, aes(x=long,y=lat,group=group))
ggMap <- ggMap + scale_fill_gradientn(name="Probability",colours=brewer.pal(9,"Reds"))
ggMap <- ggMap + coord_equal()
ggMap
Explanation:
The rgdal package facilitates the creation of R Spatial objects from ESRI shapefiles. In your case we are importing a polygon shapefile into a SpatialPolygonsDataFrame object in R. The latter has two main parts: a polygon section, which contains the latitude and longitude points that will be joined to create the polygons on the map, and a data section, which contains information about the polygons (one row for each polygon). If, e.g., we call the Spatial object map, then the two sections can be referenced as map@polygons and map@data. The basic challenge in making choropleth maps is to associate data from your Sample.csv file with the relevant polygons (zip codes).
So the basic workflow is as follows:
1. Load polygon shapefiles into Spatial object ( => zips)
2. Subset if appropriate ( => map).
3. Convert to data frame suitable for plotting ( => map.df).
4. Merge data from Sample.csv into map.df.
5. Draw the map.
Step 4 is the one that causes all the problems. First we have to associate zip codes with each polygon. Then we have to associate Probability_1 with each zip code. This is a three step process.
Each polygon in the Spatial data file has a unique ID, but these IDs are not the zip codes. The polygon IDs are stored as row names in map@data. The zip codes are stored in map@data, in column ZCTA5CE10. So first we must create a data frame that associates the map@data row names (id) with map@data$ZCTA5CE10 (ZIP). Then we merge your Sample.csv with the result using the ZIP field in both data frames. Then we merge the result of that into map.df. This can be done in 3 lines of code.
Drawing the map involves telling ggplot what dataset to use (map.df), which columns to use for x and y (long and lat) and how to group the data by polygon (group=group). The columns long, lat, and group in map.df are all created by the call to fortify(...). The call to geom_polygon(...) tells ggplot to draw polygons and fill using the information in map.df$Probability_1. The call to geom_path(...) tells ggplot to create a layer with state boundaries. The call to scale_fill_gradientn(...) tells ggplot to use a color scheme based on the color brewer "Reds" palette. Finally, the call to coord_equal(...) tells ggplot to use the same scale for x and y so the map is not distorted.
NB: The state boundary layer uses the US States TIGER file.
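A small check of the join key described above, using map and map.df from the code block in this answer: the id column created by fortify() corresponds to the row names of map@data, which is what makes step 4 possible.
head(unique(map.df$id))       # polygon IDs generated by fortify()
head(rownames(map@data))      # row names of the attribute table
all(unique(map.df$id) %in% rownames(map@data))   # should be TRUE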
I would advise the following.
Use readOGR from the rgdal package rather than readShapeSpatial.
Consider using ggplot2 for good-looking maps - many of the examples use this.
Refer to one of the existing examples of creating a choropleth such as this one to get an overview.
Start with a simple choropleth and gradually add your own data; don't try and get it all right at once.
If you need more help, create a reproducible example with a SMALL fake dataset and with links to the shapefiles in question. The idea is that you make it easy to help us help you rather than discourage us by not supplying code and data in your question.
