Creating a county-level adjacency matrix for different states in R

I'm attempting to create an adjacency matrix for the counties in AL, SC, and GA, and I tried one approach that seemed to work well. I used the following libraries and code to generate the adjacency matrix (Adj_mat).
library(tidyverse)
library(spdep)
library(urbnmapr)
counties_sf <- get_urbn_map("counties", sf = TRUE)
counties_south <- filter(counties_sf, state_abbv %in% c("AL", "GA", "SC" ))
south_counties_polylist <- poly2nb(counties_south)
Adj_mat <- nb2mat(south_counties_polylist, style = "B", zero.policy = T) # Adjacency matrix
The total number of adjacencies in this matrix is 1526, obtained by summing the result of
m1 <- apply(Adj_mat, 2, sum)
This was a bit concerning, since I have another adjacency matrix for AL, GA, and SC that I have been using for a while and that has 1528 total adjacencies. I don't have the code that was used to create that matrix, so I'm unsure whether my approach is wrong or the existing adjacency matrix is incorrect.
Based on the package description, the urbnmapr library uses shapefiles from the US Census Bureau. I would like to be able to create matrices for different states on my own and would appreciate any pointers to make sure I'm doing it correctly. Thank you!
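One way to check whether the two-adjacency gap comes from the contiguity rule is to compare queen contiguity (the poly2nb default, where corner-touching counties count as neighbors) with rook contiguity (shared edges only). This is only a sketch using the objects defined above, not a diagnosis of how the older matrix was built:
nb_queen <- poly2nb(counties_south, queen = TRUE)   # corner touches count as neighbors
nb_rook  <- poly2nb(counties_south, queen = FALSE)  # shared edges only
sum(card(nb_queen))  # total adjacencies under queen contiguity
sum(card(nb_rook))   # total adjacencies under rook contiguity
diffs <- diffnb(nb_queen, nb_rook)  # neighbor pairs on which the two rules disagree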

Related

How to construct a spatial adjacency matrix in R

I am new to using shapefiles in R and I was wondering if you can help me get a better understanding.
I need to create a spatial adjacency matrix W so that I can build a spatial model. W is an n x n matrix, where n is the number of area polygons. The diagonal entries are w_ii = 0, and the off-diagonal entries are w_ij = 1 if areas i and j share a common boundary and w_ij = 0 otherwise.
I know I would probably need to construct a contiguity matrix (I chose to use a queen neighborhood). But I am not sure how to further derive my spatial adjacency matrix from this.
#load relevant packages
library(sf)
library(tmap)
library(tmaptools)
library(dplyr)
#import data
mydata <- read.csv("tobago_communities.csv")
#import shapefile
mymap <-st_read("C:/Users/ndook/OneDrive/Desktop/Tobago/2011_parish_data.shp", stringsAsFactors = FALSE)
#join data and shapefile into one dataframe
map_and_data <- inner_join(mymap, mydata, by = "TGOLOC_ID")
#generate map
tm_shape(map_and_data) + tm_polygons("Unemployment")
#specify queen neighborhood
queen_tobago.nb <- poly2nb(mymap)
So I'm assuming the queen neighborhood would somehow be relevant to getting the spatial adjacency matrix but I am stuck at this point. Any further suggestions would be greatly appreciated.
The poly2nb function does generate a neighborhood list. Note that queen contiguity is the default (queen = TRUE); set queen = FALSE if you want rook contiguity instead.
Some R packages expect a list representation of the spatial matrix, others might want a matrix form. The nb2listw function turns the neighborhood list into a list of spatial weights.
With the nb2mat function you get the matrix representation you are probably looking for (https://rdrr.io/rforge/spdep/man/nb2mat.html).
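For the binary 0/1 matrix W described in the question, a minimal sketch using the queen_tobago.nb object from above would be:
library(spdep)
# style = "B" gives binary weights: 1 for neighbors, 0 otherwise, with zeros on the diagonal
W <- nb2mat(queen_tobago.nb, style = "B", zero.policy = TRUE)
dim(W)  # n x n, one row and column per polygon in mymap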

Create neighborhood list of a large dataset / speed it up

I want to create a weight matrix based on distance. My code currently looks as follows and works for a smaller sample of the data. However, with the full dataset (569,424 individuals in 24,077 locations) it does not go through. The problem arises at the nb2blocknb function. So my question is: how can I optimize my code for large datasets?
# load all survey data
DHS <- read.csv("Daten/final.csv")
attach(DHS)
# define coordinates matrix
coormat <- cbind(DHS$location, DHS$lon_s, DHS$lat_s)
coorm <- cbind(DHS$lon_s, DHS$lat_s)
colnames(coormat) <- c("location", "lon_s", "lat_s")
coo <- cbind(unique(coormat))
c <- as.data.frame(coo)
coor <- cbind(c$lon_s, c$lat_s)
# get a list of neighboring locations that are within 50 km of each other
neighbor <- dnearneigh(coor, d1 = 0, d2 = 50, row.names=c$location, longlat=TRUE, bound=c("GE", "LE"))
# get neighborhood list on individual level
nb <- nb2blocknb(neighbor, as.character(DHS$location))
# weight matrix in list format
nbweights.lw <- nb2listw(nb, style="B", zero.policy=TRUE)
Thanks a lot for your help!
You're trying to make about 1.3e10 distance calculations; the result would be gigabytes in size.
I think you'd want to limit either the maximum distance or the number of nearest neighbors you're looking for. Try nn2 from the RANN package:
library('RANN')
nearest_neighbours_w_distance <- nn2(coordinatesA, coordinatesB, k = 10)
Note that this operation is not symmetric (switching coordinatesA and coordinatesB gives different results).
Also, you would first have to convert your GPS coordinates to a coordinate reference system in which you can calculate Euclidean distances, for example UTM (code not tested):
library("sp")
gps2utm<-function(gps_coordinates_matrix,utmzone){
spdf<-SpatialPointsDataFrame(gps_coordinates_matrix[,1],gps_coordinates_matrix[,2])
proj4string(spdf) <- CRS("+proj=longlat +datum=WGS84")
return(spTransform(spdf, CRS(paste0("+proj=utm +zone=",utmzone," ellps=WGS84"))))
}
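A hypothetical usage tying this back to the coordinates above (the UTM zone is illustrative only; use the zone covering your study area):
coor_utm <- coordinates(gps2utm(coor, utmzone = 37))  # zone 37 is an assumption
nearest  <- nn2(coor_utm, coor_utm, k = 10)           # distances now in metres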

Using R intersections to create a polygons-inside-a-polygon key using two shapefile layers

The data
I have two shapefiles marking the boundaries of national and provincial electoral constituencies in Pakistan.
The objective
I am attempting to use R to create a key that will generate a list of which provincial-level constituencies are "contained within" or otherwise intersecting with which national-level constituencies, based on their coordinates in this data. For example, NA-01 corresponds with PA-01, PA-02, PA-03; NA-02 corresponds with PA-04 and PA-05, etc. (The key will ultimately be used to link separate dataframes containing electoral results at the national and provincial level; that part I've figured out.)
I have only basic/intermediate R skills learned largely through trial and error and no experience working with GIS data outside of R.
The attempted solution
The closest solution I could find for this problem comes from this guide to calculating intersection areas in R. However, I have been unable to successfully replicate any of the three proposed approaches (either the questioner's use of a general TRUE/FALSE report on intersections, or the more precise calculations of area of overlap).
The code
# import map files
NA_map <- readOGR(dsn = "./National_Constituency_Boundary", layer = "National_Constituency_Boundary")
PA_map <- readOGR(dsn = "./Provincial_Constituency_Boundary", layer = "Provincial_Constituency_Boundary")
# Both are now SpatialPolygonsDataFrame objects of 273 and 577 elements, respectively.
# If relevant, I used spdplyr to tweak some of the data attribute names (for use later when joining to electoral dataframes):
NA_map <- NA_map %>%
  rename(constituency_number = NA_Cons,
         district_name = District,
         province = Province)
PA_map <- PA_map %>%
  rename(province = PROVINCE,
         district_name = DISTRICT,
         constituency_number = PA)
# calculate intersections, take one
Results <- gIntersects(NA_map, PA_map, byid = TRUE)
# this creates a large matrix of 157,521 elements
rownames(Results) <- NA_map@data$constituency_number
colnames(Results) <- PA_map@data$constituency_number
Attempting to add the rowname/colname labels, however, gives me the error message:
Error in dimnames(x) <- dn :
length of 'dimnames' [1] not equal to array extent
Without the rowname/colname labels, I'm unable to read the overlay matrix, and I'm unsure how to filter it to produce a list of only the TRUE intersections that would help make an NA-PA key.
I also attempted to replicate the other two proposed solutions for calculating exact area of overlap:
# calculate intersections, take two
pi <- intersect(NA_map, PA_map)
# this generates a SpatialPolygons object with 273 elements
areas <- data.frame(area=sapply(pi@polygons, FUN = function(x) {slot(x, 'area')}))
# this calculates the area of intersection but has no other variables
row.names(areas) <- sapply(pi@polygons, FUN=function(x) {slot(x, 'ID')})
This generates the error message:
Error in `row.names<-.data.frame`(`*tmp*`, value = c("2", "1", "4", "5", :
duplicate 'row.names' are not allowed
In addition: Warning message:
non-unique value when setting 'row.names': ‘1’
So when I attempt to attach the areas to the attribute info with
attArea <- spCbind(pi, areas)
I get the error message
Error in spCbind(pi, areas) : row names not identical
Attempting the third proposed method:
# calculate intersections, take three
pi <- st_intersection(NA_map, PA_map)
Produces the error message:
Error in UseMethod("st_intersection") :
no applicable method for 'st_intersection' applied to an object of class "c('SpatialPolygonsDataFrame', 'SpatialPolygons', 'Spatial', 'SpatialPolygonsNULL', 'SpatialVector')"
I understand that my SPDF maps can't be used directly for this third approach, but it wasn't clear from the description what steps would be needed to transform them and attempt this method.
The plea for help
Any suggestions on corrections necessary to use any of these approaches, or pointers towards some other method of figuring this, would be greatly appreciated. Thanks!
Here is some example data
library(raster)
p <- shapefile(system.file("external/lux.shp", package="raster"))
p1 <- aggregate(p, by="NAME_1")
p2 <- p[, 'NAME_2']
So we have p1 with regions, and p2 with lower level divisions.
Now we can do
x <- intersect(p1, p2)
# or x <- union(p1, p2)
data.frame(x)
Which should be (and is) the same as the original
data.frame(p)[, c('NAME_1', 'NAME_2')]
To get the area of the polygons, you can do
x$area <- area(x) / 1000000 # divide to get km2
There are likely to be many "slivers", very small polygons because of slight variations in borders. That might not matter to you.
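If the slivers do get in the way, one hedged option is to drop intersections below a small area threshold before using the result (the 1 km2 cutoff here is purely illustrative):
x <- x[x$area > 1, ]  # keep only intersections larger than 1 km2; tune to your data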
But another approach could be matching by centroid:
y <- p2
e <- extract(p1, coordinates(p2))
y$NAME_1 <- e$NAME_1
data.frame(y)
Your code isn't self-contained, so I didn't try to replicate the errors you report.
However, getting the 'key' you want is very simple using the sf package (which is intended to supersede rgeos, rgdal and sp in the near future). See here:
library(sf)
# Download shapefiles
national.url <- 'https://data.humdata.org/dataset/5d48a142-1f92-4a65-8ee5-5d22eb85f60f/resource/d85318cb-dcc0-4a59-a0c7-cf0b7123a5fd/download/national-constituency-boundary.zip'
provincial.url <- 'https://data.humdata.org/dataset/137532ad-f4a9-471e-8b5f-d1323df42991/resource/c84c93d7-7730-4b97-8382-4a783932d126/download/provincial-constituency-boundary.zip'
download.file(national.url, destfile = file.path(tempdir(), 'national.zip'))
download.file(provincial.url, destfile = file.path(tempdir(), 'provincial.zip'))
# Unzip shapefiles
unzip(file.path(tempdir(), 'national.zip'), exdir = file.path(tempdir(), 'national'))
unzip(file.path(tempdir(), 'provincial.zip'), exdir = file.path(tempdir(), 'provincial'))
# Read map files
NA_map <- st_read(dsn = file.path(tempdir(), 'national'), layer = "National_Constituency_Boundary")
PA_map <- st_read(dsn = file.path(tempdir(), 'provincial'), layer = "Provincial_Constituency_Boundary")
# Get sparse list representation of intersections
intrs.sgpb <- st_intersects(NA_map, PA_map)
length(intrs.sgpb) # One list element per national constituency
# [1] 273
print(intrs.sgpb[[1]]) # Indices of provincial constituencies intersecting with first national constituency
# [1] 506 522 554 555 556
print(PA_map$PROVINCE[intrs.sgpb[[1]]][1]) # Province of the first provincial constituency intersecting with the first national constituency
# [1] KHYBER PAKHTUNKHWA
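To turn that sparse list into the national-to-provincial key described in the question, a minimal sketch could look like the following. The field names NA_Cons and PA are assumptions taken from the rename() calls in the question; check names(NA_map) and names(PA_map) for the actual columns:
key <- do.call(rbind, lapply(seq_along(intrs.sgpb), function(i) {
  data.frame(national   = NA_map$NA_Cons[i],
             provincial = PA_map$PA[intrs.sgpb[[i]]])
}))
head(key)  # one row per intersecting national-provincial pair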

Approaches for spatial geodesic latitude longitude clustering in R with geodesic or great circle distances

I would like to apply some basic clustering techniques to some latitude and longitude coordinates. Something along the lines of clustering (or some unsupervised learning) the coordinates into groups determined either by their great circle distance or their geodesic distance. NOTE: this could be a very poor approach, so please advise.
Ideally, I would like to tackle this in R.
I have done some searching, but perhaps I missed a solid approach? I have come across the packages flexclust and pam; however, I have not come across a clear-cut example with respect to the following:
Defining my own distance function.
Does either flexclust (via kcca or cclust) or pam take random restarts into account?
Icing on the cake = does anyone know of approaches/packages that would allow one to specify the minimum number of elements in each cluster?
Regarding your first question: Since the data is long/lat, one approach is to use earth.dist(...) in package fossil (calculates great circle dist):
library(fossil)
d = earth.dist(df) # distance object
Another approach uses distHaversine(...) in the geosphere package:
geo.dist <- function(df) {
  require(geosphere)
  d <- function(i, z) {  # z[1:2] contain long, lat
    dist <- rep(0, nrow(z))
    dist[i:nrow(z)] <- distHaversine(z[i:nrow(z), 1:2], z[i, 1:2])
    return(dist)
  }
  dm <- do.call(cbind, lapply(1:nrow(df), d, df))
  return(as.dist(dm))
}
The advantage here is that you can use any of the other distance algorithms in geosphere, or you can define your own distance function and use it in place of distHaversine(...). Then apply any of the base R clustering techniques (e.g., kmeans, hclust):
km <- kmeans(geo.dist(df),centers=3) # k-means, 3 clusters
hc <- hclust(geo.dist(df)) # hierarchical clustering, dendrogram
clust <- cutree(hc, k=3) # cut the dendrogram to generate 3 clusters
Finally, a real example:
setwd("<directory with all files...>")
cities <- read.csv("GeoLiteCity-Location.csv",header=T,skip=1)
set.seed(123)
CA <- cities[cities$country=="US" & cities$region=="CA",]
CA <- CA[sample(1:nrow(CA),100),] # 100 random cities in California
df <- data.frame(long=CA$long, lat=CA$lat, city=CA$city)
d <- geo.dist(df) # distance matrix
hc <- hclust(d) # hierarchical clustering
plot(hc) # dendrogram suggests 4 clusters
df$clust <- cutree(hc,k=4)
library(ggplot2)
library(rgdal)
map.US <- readOGR(dsn=".", layer="tl_2013_us_state")
map.CA <- map.US[map.US$NAME=="California",]
map.df <- fortify(map.CA)
ggplot(map.df)+
geom_path(aes(x=long, y=lat, group=group))+
geom_point(data=df, aes(x=long, y=lat, color=factor(clust)), size=4)+
scale_color_discrete("Cluster")+
coord_fixed()
The city data is from GeoLite. The US States shapefile is from the Census Bureau.
Edit in response to @Anony-Mousse's comment:
It may seem odd that "LA" is divided between two clusters, however, expanding the map shows that, for this random selection of cities, there is a gap between cluster 3 and cluster 4. Cluster 4 is basically Santa Monica and Burbank; cluster 3 is Pasadena, South LA, Long Beach, and everything south of that.
K-means clustering (4 clusters) does keep the area around LA/Santa Monica/Burbank/Long Beach in one cluster (see below). This just comes down to the different algorithms used by kmeans(...) and hclust(...).
km <- kmeans(d, centers=4)
df$clust <- km$cluster
It's worth noting that these methods require that all points must go into some cluster. If you just ask which points are close together, and allow that some cities don't go into any cluster, you get very different results.
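As a hedged illustration of that last point (density-based clustering is a swapped-in technique, not part of the answer above), the dbscan package leaves isolated cities unassigned instead of forcing every point into a cluster:
library(dbscan)
d <- geo.dist(df)                          # distHaversine returns metres, so eps below is in metres
db <- dbscan(d, eps = 50000, minPts = 3)   # ~50 km radius; both parameters are illustrative
df$clust <- db$cluster                     # cluster 0 marks noise points that belong to no cluster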

Export R plot to shapefile

I am fairly new to R, but not to ArcView. I am plotting some two-mode data, and want to convert the plot to a shapefile. Specifically, I would like to convert the vertices and the edges, if possible, so that I can get the same plot to display in ArcView, along with the attributes.
I've installed the package "shapefiles", and I see the convert.to.shapefile command, but the help doesn't talk about how to assign XY coords to the vertices.
Thank you,
Tim
Ok, I'm making a couple of assumptions here, but I read the question as you're looking to assign spatial coordinates to a bipartite graph and export both the vertices and edges as point shapefiles and polylines for use in ArcGIS.
This solution is a little kludgey, but will make shapefiles with coordinate limits xmin, ymin and xmax, ymax of -0.5 and +0.5. It will be up to you to decide on the graph layout algorithm (e.g. Kamada-Kawai), and project the shapefiles in the desired coordinate system once the shapefiles are in ArcGIS as per @gsk3's suggestion. Additional attributes for the vertices and edges can be added where the points.data and edge.data data frames are created.
library(igraph)
library(shapefiles)
# Create dummy incidence matrix
inc <- matrix(sample(0:1, 15, repl=TRUE), 3, 5)
colnames(inc) <- c(1:5) # Person ID
rownames(inc) <- letters[1:3] # Event
# Create bipartite graph
g.bipartite <- graph.incidence(inc, mode="in", add.names=TRUE)
# Plot figure to get xy coordinates for vertices
tk <- tkplot(g.bipartite, canvas.width=500, canvas.height=500)
tkcoords <- tkplot.getcoords(tk, norm=TRUE) # Get coordinates of nodes centered on 0 with +/-0.5 for max and min values
# Create point shapefile for nodes
n.points <- nrow(tkcoords)
points.attr <- data.frame(Id=1:n.points, X=tkcoords[,1], Y=tkcoords[,2])
points.data <- data.frame(Id=points.attr$Id, Name=paste("Vertex", 1:n.points, sep=""))
points.shp <- convert.to.shapefile(points.attr, points.data, "Id", 1)
write.shapefile(points.shp, "~/Desktop/points", arcgis=TRUE)
# Create polylines for edges in this example from incidence matrix
n.edges <- sum(inc) # number of edges based on incidence matrix
Id <- rep(1:n.edges,each=2) # Generate Id number for edges.
From.nodes <- g.bipartite[[4]]+1 # Get position of "From" vertices in incidence matrix
To.nodes <- g.bipartite[[3]]-max(From.nodes)+1 # Get position of "To" vertices in incidence matrix
# Generate index where position alternates between "From.node" to "To.node"
node.index <- matrix(t(matrix(c(From.nodes, To.nodes), ncol=2)))
edge.attr <- data.frame(Id, X=tkcoords[node.index, 1], Y=tkcoords[node.index, 2])
edge.data <- data.frame(Id=1:n.edges, Name=paste("Edge", 1:n.edges, sep=""))
edge.shp <- convert.to.shapefile(edge.attr, edge.data, "Id", 3)
write.shapefile(edge.shp, "~/Desktop/edges", arcgis=TRUE)
Hope this helps.
I'm going to take a stab at this based on a wild guess as to what your data looks like.
Basically you'll want to coerce the data into a data.frame with two columns containing the x and y coordinates (or lat/long, or whatever).
library(sp)
data(meuse.grid)
class(meuse.grid)
coordinates(meuse.grid) <- ~x+y
class(meuse.grid)
Once you have it as a SpatialPointsDataFrame, sp and maptools provide some decent functionality, including exporting shapefiles:
writePointsShape(meuse.grid,"/home/myfiles/wherever/myshape.shp")
Relevant help files from which these examples are drawn:
coordinates
SpatialPointsDataFrame
readShapePoints
At least a few years ago when I last used sp, it was great about projection and very bad about writing projection information to the shapefile. So it's best to leave the coordinates untransformed and manually tell Arc what projection it is. Or use writeOGR rather than writePointsShape.
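As a hedged sketch of that last suggestion (the EPSG code for the meuse data and the output path are assumptions), writeOGR from rgdal writes a .prj file alongside the shapefile, so Arc picks up the projection automatically:
library(rgdal)
proj4string(meuse.grid) <- CRS("+init=epsg:28992")  # meuse data are in Dutch RD New; adjust for your data
writeOGR(meuse.grid, dsn = "/home/myfiles/wherever", layer = "myshape", driver = "ESRI Shapefile")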
