I have the following boundary dataset for the United Kingdom, which shows all the counties:
library(raster)
library(sp)
library(ggplot2)
# Download the data
GB <- getData('GADM', country="gbr", level=2)
Using the subset function it is really easy to filter the shapefile polygons by an attribute in the data. For example, if I want to exclude Northern Ireland:
GB_sub <- subset(GB, NAME_1 != "Northern Ireland")
However, there are lots of small islands which distort the scale data range, as shown in the maps below:
Any thoughts on how to elegantly subset the dataset on a minimum size? It would be ideal to have something in the format consistent with the subset argument. For example:
GB_sub <- subset(GB, Area > 20) # specify minimum area in km^2
Here is another potential solution. Because your data are in an unprojected lat-long coordinate system, calculating areas directly from longitude and latitude would introduce bias; it is better to calculate the area with functions from the geosphere package.
install.packages("geosphere")
library(geosphere)
# Calculate the area
GB$poly_area <- areaPolygon(GB) / 10^6
# Filter GB based on area > 20 km2
GB_filter <- subset(GB, poly_area > 20)
poly_area contains the area in km2 for all polygons. We can filter the polygon by a threshold, such as 20 in your example. GB_filter is the final output.
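To double-check the result, a minimal sketch (assuming the GB and GB_filter objects created above) is to plot both versions side by side and confirm the small islands have dropped out:
# side-by-side comparison of the full and filtered datasets
par(mfrow = c(1, 2))
plot(GB, main = "All polygons")
plot(GB_filter, main = "Area > 20 km^2")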
This is one potential solution:
GB_sub = GB[sapply(GB@polygons, function(x) x@area > 0.04),] # select min size
map.df <- fortify(GB_sub)
ggplot(map.df, aes(x=long, y=lat, group=group)) + geom_polygon()
Check this link for how to get a slot's value of S4 objects (note that for unprojected lat-long data the @area slot is in squared degrees, not km2): Getting a slot's value of S4 objects?
I compared both as well but they don't seem to differ:
out1 = sapply(GB@polygons, function(x) x@area)
out2 = rgeos::gArea(GB, byid=TRUE)
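A quick check that they agree numerically (assuming out1 and out2 from above; gArea returns a named vector, so drop the names before comparing):
all.equal(out1, unname(out2))
# should return TRUE, consistent with the comparison above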
Related
New to spatial analysis on R here. I have a shapefile for the USA that I downloaded from HERE. I also have a set of lat/long points (half a million) that lie within the contiguous USA.
I'd like to find the "most remote spot" -- the spot within the contiguous USA that's farthest from the set of points.
I'm using the rgdal, raster and sp packages. Here's a reproducible example with a random sample of 10 points:
# Set wd to the folder tl_2010_us_state_10
usa <- readOGR(dsn = ".", layer = "tl_2010_us_state10")
# Sample 10 points in USA
sample <- spsample(usa, 10, type = "random")
# Set extent for contiguous united states
ext <- extent(-124.848974, -66.885444, 24.396308, 49.384358)
# Rasterize USA
r <- raster(ext, nrow = 500, ncol = 500)
rr <- rasterize(usa, r)
# Find distance from sample points to cells of USA raster
D <- distanceFromPoints(object = rr, xy = sample)
# Plot distances and points
plot(D)
points(sample)
After the last two lines of code, I get this plot.
However, I'd like it to be over the rasterized map of the USA. And, I'd like it to only consider distances from cells that are in the contiguous USA, not all cells in the bounding box. How do I go about doing this?
I'd also appreciate any other tips regarding the shape file I'm using -- is it the best one? Should I be worried about using the right projection, since my actual dataset is lat/long? Will distanceFromPoints be able to efficiently process such a large dataset, or is there a better function?
To limit raster D to the contiguous USA you could find the elements of rr assigned values of NA (i.e. raster cells within the bounding box but outside of the usa polygons), and assign these same elements of D a value of NA.
D[which(is.na(rr[]))] <- NA
plot(D)
lines(usa)
You can use 'proj4string(usa)' to find the projection info for the usa shapefile. If your coordinates of interest are based on a different projection, you can transform them to match the usa shapefile projection as follows:
my_coords_xform <- spTransform(my_coords, CRS(proj4string(usa)))
Not sure about the relative efficiency of distanceFromPoints, but it only took ~ 1 sec to run on my computer using your example with 10 points.
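If you want a feel for how it scales before running the full half-million points, a rough timing sketch (assuming the usa and rr objects from the question; the 10,000-point sample is just for illustration) is:
# time distanceFromPoints() on a larger, hypothetical sample
sample_big <- spsample(usa, 10000, type = "random")
system.time(D_big <- distanceFromPoints(object = rr, xy = sample_big))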
I think you were looking for the mask function.
library(raster)
usa <- getData('GADM', country='USA', level=1)
# exclude Alaska and Hawaii
usa <- usa[!usa$NAME_1 %in% c( "Alaska" , "Hawaii"), ]
# get the extent and create raster with preferred resolution
r <- raster(floor(extent(usa)), res=1)
# rasterize polygons
rr <- rasterize(usa, r)
set.seed(89)
sample <- spsample(usa, 10, type = "random")
# Find distance from sample points to cells of USA raster
D <- distanceFromPoints(object = rr, xy = sample)
# remove areas outside of polygons
Dm <- mask(D, rr)
# an alternative would be mask(D, usa)
# cell with highest value
mxd <- which.max(Dm)
# coordinates of that cell
pt <- xyFromCell(r, mxd)
plot(Dm)
points(pt)
The distances should be fine, also when using long/lat data. But distanceFromPoints could indeed be a bit slow with a large data set, as it uses a brute-force algorithm.
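If the brute-force approach turns out to be too slow for half a million points, one possible workaround (a sketch, not the method above; it assumes the rr and sample objects already defined, and trades precision for speed at the raster resolution) is to rasterize the points and use raster::distance(), which computes each empty cell's distance to the nearest occupied cell:
# rasterize the points onto the same grid; cells containing a point get the value 1
pts_r <- rasterize(sample, rr, field = 1)
# distance from every NA cell to the nearest non-NA (point-containing) cell
D2 <- distance(pts_r)
# clip to the polygons, as in the answer above
Dm2 <- mask(D2, rr)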
I have the longitude and latitude of 5449 trees in NYC, as well as a shapefile for 55 different Neighborhood Tabulation Areas (NTAs). Each NTA has a unique NTACode in the shapefile, and I need to append a third column to the long/lat table telling me which NTA (if any) each tree falls under.
I've made some progress already using other point-in-polygon threads on stackoverflow, especially this one that looks at multiple polygons, but I'm still getting errors when trying to use gContains and don't know how I could check/label each tree for different polygons (I'm guessing some sort of sapply or for loop?).
Below is my code. Data/shapefiles can be found here: http://bit.ly/1BMJubM
library(rgdal)
library(rgeos)
library(ggplot2)
#import data
setwd("< path here >")
xy <- read.csv("lonlat.csv")
#import shapefile
map <- readOGR(dsn="CPI_Zones-NTA", layer="CPI_Zones-NTA", p4s="+init=epsg:25832")
map <- spTransform(map, CRS("+proj=longlat +datum=WGS84"))
#generate the polygons, though this doesn't seem to be generating all of the NTAs
nPolys <- sapply(map@polygons, function(x)length(x@Polygons))
region <- map[which(nPolys==max(nPolys)),]
plot(region, col="lightgreen")
#setting the region and points
region.df <- fortify(region)
points <- data.frame(long=xy$INTPTLON10,
lat =xy$INTPTLAT10,
id =c(1:5449),
stringsAsFactors=F)
#drawing the points / polygon overlay; currently only the points are appearing
ggplot(region.df, aes(x=long,y=lat,group=group))+
geom_polygon(fill="lightgreen")+
geom_path(colour="grey50")+
geom_point(data=points,aes(x=long,y=lat,group=NULL, color=id), size=1)+
xlim(-74.25, -73.7)+
ylim(40.5, 40.92)+
coord_fixed()
#this should check whether each tree falls into **any** of the NTAs, but I need it to specifically return **which** NTA
sapply(1:5449,function(i)
list(id=points[i,]$id, gContains(region,SpatialPoints(points[i,1:2],proj4string=CRS(proj4string(region))))))
#this is something I tried earlier to see if writing a new column using the over() function could work, but I ended up with a column of NAs
pts = SpatialPoints(xy)
nyc <- readShapeSpatial("< path to shapefile here >")
xy$nrow=over(pts,SpatialPolygons(nyc@polygons), returnlist=TRUE)
The NTAs we're checking for are these ones (visualized in GIS): http://bit.ly/1A3jEcE
Try simply:
ShapeFile <- readShapeSpatial("Shapefile.shp")
points <- data.frame(long=xy$INTPTLON10,
lat =xy$INTPTLAT10,
stringsAsFactors=F)
dimnames(points)[[1]] <- seq(1, length(xy$INTPTLON10), 1)
points <- SpatialPoints(points)
df <- over(points, ShapeFile)
I omitted the transformation of the shapefile because it is not the main subject here.
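To get the third column you asked for, the matched attribute can then be copied back onto the point table (assuming the attribute column in the shapefile is named NTACode, as described in the question; trees outside every polygon get NA):
# append the NTA code for each tree; NA means the point fell in no polygon
xy$NTACode <- df$NTACode
head(xy)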
I'm working with one of my professors on some research aimed at improving the current methods of carbon accounting. We noticed that many of the locations for point sources were defaulted to the centroid of the county they were in (this is specific to the US at the moment, though it will be applied globally) if there was no data on the location.
So I'm using R to address the uncertainty associated with these locations. My code takes the range of longitude and latitude for a county and plots 10,000 points. It then weeds out the points that are not in the county and takes the average of the leftover points to locate the centroid. My goal is ultimately to take the difference between these points and the centroid to find the spatial uncertainty for point sources that were placed at the centroid.
However, I'm running into problems with coastal regions. My first problem is that the maps package ignores islands (the barrier islands for example) as well as other disjointed county shapes, so the centroid is not accurately reproduced when the points are averaged. My second problem is found specifically with Currituck county (North Carolina). Maps seems to recognize parts of the barrier islands contained in this county, but since it is not continuous, the entire function goes all wonky and produces a bunch of "NAs" and "Falses" that don't correspond with the actual border of the county at all.
(The data for the centroid is going to be used in other areas of the research which is why it's important we can accurately access all counties.)
Is there any way around the errors I'm running into? A different data set that could be read in, or anything of the sort? Your help would be greatly appreciated. Let me know if there are any questions about what I'm asking, and I'll be happy to clarify.
CODE:
# ggplot2 helps
# SOME TROUBLE COUNTIES: north carolina,currituck & massachusetts,dukes
library(ggplot2)
library(maps) # package has maps
library(mapproj) # projections
library(sp)
WC <- map_data('county','north carolina,currituck') #calling on county
p <- ggplot(data = WC, aes(x = long, y = lat)) #calling on latitude and longitude
p1 <- p + geom_polygon(fill = "lightgreen") + theme_bw() +
coord_map("polyconic") + coord_fixed() #+ labs(title = "Watauga County")
p1
### range for the long and lat
RLong <- range(WC$long)
RLong
RLat <- range(WC$lat)
RLat
### Add some random points
n <- 10000
RpointsLong <- sample(seq(RLong[1], RLong[2], length = 100), n, replace = TRUE)
RpointsLat <- sample(seq(RLat[1], RLat[2], length = 100), n, replace = TRUE)
DF <- data.frame(RpointsLong, RpointsLat)
head(DF)
p2<-p1 + geom_point(data = DF, aes(x = RpointsLong, y = RpointsLat))
p2
# Source:
# http://www.nceas.ucsb.edu/scicomp/usecases/GenerateConvexHullAndROIForPoints
inside <- map.where('county',RpointsLong,RpointsLat)=="north carolina,currituck"
inside[which(nchar(inside)==2)] <- FALSE
inside
g<-inside*DF
g1<-subset(g,g$RpointsLong!=0)
g1
CentLong<-mean(g1$RpointsLong)
CentLat<-mean(g1$RpointsLat)
Centroid<-data.frame(CentLong,CentLat)
Centroid
p1+geom_point(data=g1, aes(x=RpointsLong,y=RpointsLat)) #this maps all the points inside county
p1+geom_point(data=Centroid, aes(x=CentLong,y=CentLat))
First, given your description of the problem, I would probably invest a lot of effort in avoiding this business of locations defaulting to the county centroids in the first place; that is the right way to solve this problem.
Second, if this is a research project, I would not use the built in maps in R. The USGS National Atlas website has excellent county maps of the US. Below is an example using Currituck County in NC.
library(ggplot2)
library(rgdal) # for readOGR(...)
library(rgeos) # for gIntersection(...)
setwd("< directory contining shapefiles >")
map <- readOGR(dsn=".",layer="countyp010")
NC <- map[map$COUNTY=="Currituck County" & !is.na(map$COUNTY),]
NC.df <- fortify(NC)
bbox <- bbox(NC)
x <- seq(bbox[1,1],bbox[1,2],length=50) # longitude
y <- seq(bbox[2,1],bbox[2,2],length=50) # latitude
all <- SpatialPoints(expand.grid(x,y),proj4string=CRS(proj4string(NC)))
pts <- gIntersection(NC,all) # points inside the polygons
pts <- data.frame(pts@coords) # ggplot wants a data.frame
centroid <- data.frame(x=mean(pts$x),y=mean(pts$y))
ggplot(NC.df)+
geom_path(aes(x=long,y=lat, group=group), colour="grey50")+
geom_polygon(aes(x=long,y=lat, group=group), fill="lightgreen")+
geom_point(data=pts, aes(x,y), colour="blue")+
geom_point(data=centroid, aes(x,y), colour="red", size=5)+
coord_fixed()
Finally, another way to do this (which I'd actually recommend) is to just calculate the area-weighted centroid. This is equivalent to what you are approximating, but more accurate and much faster.
polys <- do.call(rbind,lapply(NC@polygons[[1]]@Polygons,
                              function(x)c(x@labpt,x@area)))
polys <- data.frame(polys)
colnames(polys) <- c("long","lat","area")
polys$area <- with(polys,area/sum(area))
centr <- with(polys,c(x=sum(long*area),y=sum(lat*area)))
centr # area weighted centroid
# x y
# -76.01378 36.40105
centroid # point weighted centroid (based on the 50 x 50 grid of points)
# x y
# 1 -76.01056 36.39671
You'll find that as you increase the number of points in the points-weighted centroid the result gets closer to the area-weighted centroid.
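If rgeos is loaded (it already is, for gIntersection above), a one-line cross-check is gCentroid(), which returns the exact geometric centroid and should closely match the area-weighted value:
# exact centroid of the Currituck County polygons, for comparison
rgeos::gCentroid(NC)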
I would like to apply some basic clustering techniques to some latitude and longitude coordinates. Something along the lines of clustering (or some unsupervised learning) the coordinates into groups determined either by their great circle distance or their geodesic distance. NOTE: this could be a very poor approach, so please advise.
Ideally, I would like to tackle this in R.
I have done some searching, but perhaps I missed a solid approach? I have come across the packages flexclust and pam; however, I have not come across clear-cut examples with respect to the following:
Defining my own distance function.
Does either flexclust (via kcca or cclust) or pam take into account random restarts?
Icing on the cake: does anyone know of approaches/packages that would allow one to specify the minimum number of elements in each cluster?
Regarding your first question: Since the data is long/lat, one approach is to use earth.dist(...) in package fossil (calculates great circle dist):
library(fossil)
d = earth.dist(df) # distance object
Another approach uses distHaversine(...) in the geosphere package:
geo.dist = function(df) {
  require(geosphere)
  d <- function(i, z) {  # z[1:2] contain long, lat
    dist <- rep(0, nrow(z))
    dist[i:nrow(z)] <- distHaversine(z[i:nrow(z), 1:2], z[i, 1:2])
    return(dist)
  }
  dm <- do.call(cbind, lapply(1:nrow(df), d, df))
  return(as.dist(dm))
}
The advantage here is that you can use any of the other distance algorithms in geosphere, or you can define your own distance function and use it in place of distHaversine(...). Then apply any of the base R clustering techniques (e.g., kmeans, hclust):
km <- kmeans(geo.dist(df),centers=3) # k-means, 3 clusters
hc <- hclust(geo.dist(df)) # hierarchical clustering, dendrogram
clust <- cutree(hc, k=3) # cut the dendrogram to generate 3 clusters
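For instance, one way to make the metric swappable is a small generalization of geo.dist() (a sketch; geo.dist2 and its dist.fun argument are my own naming, not part of the answer above):
# a hypothetical generalization of geo.dist(): the distance function becomes an argument,
# so any geosphere metric (or your own function with the same signature) can be plugged in
geo.dist2 = function(df, dist.fun = distHaversine) {
  require(geosphere)
  d <- function(i, z) {  # z[1:2] contain long, lat
    dist <- rep(0, nrow(z))
    dist[i:nrow(z)] <- dist.fun(z[i:nrow(z), 1:2], z[i, 1:2])
    return(dist)
  }
  dm <- do.call(cbind, lapply(1:nrow(df), d, df))
  return(as.dist(dm))
}
# e.g., geodesic distances on the WGS84 ellipsoid instead of the haversine sphere:
d.geo <- geo.dist2(df, dist.fun = geosphere::distGeo)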
Finally, a real example:
setwd("<directory with all files...>")
cities <- read.csv("GeoLiteCity-Location.csv",header=T,skip=1)
set.seed(123)
CA <- cities[cities$country=="US" & cities$region=="CA",]
CA <- CA[sample(1:nrow(CA),100),] # 100 random cities in California
df <- data.frame(long=CA$long, lat=CA$lat, city=CA$city)
d <- geo.dist(df) # distance matrix
hc <- hclust(d) # hierarchical clustering
plot(hc) # dendrogram suggests 4 clusters
df$clust <- cutree(hc,k=4)
library(ggplot2)
library(rgdal)
map.US <- readOGR(dsn=".", layer="tl_2013_us_state")
map.CA <- map.US[map.US$NAME=="California",]
map.df <- fortify(map.CA)
ggplot(map.df)+
geom_path(aes(x=long, y=lat, group=group))+
geom_point(data=df, aes(x=long, y=lat, color=factor(clust)), size=4)+
scale_color_discrete("Cluster")+
coord_fixed()
The city data is from GeoLite. The US States shapefile is from the Census Bureau.
Edit in response to @Anony-Mousse's comment:
It may seem odd that "LA" is divided between two clusters, however, expanding the map shows that, for this random selection of cities, there is a gap between cluster 3 and cluster 4. Cluster 4 is basically Santa Monica and Burbank; cluster 3 is Pasadena, South LA, Long Beach, and everything south of that.
K-means clustering (4 clusters) does keep the area around LA/Santa Monica/Burbank/Long Beach in one cluster (see below). This just comes down to the different algorithms used by kmeans(...) and hclust(...).
km <- kmeans(d, centers=4)
df$clust <- km$cluster
It's worth noting that these methods require every point to go into some cluster. If you just ask which points are close together, and allow that some cities don't go into any cluster, you can get very different results.
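For example, a density-based method such as DBSCAN leaves isolated cities unassigned. A sketch using the dbscan package (not part of the answer above; the eps value is purely illustrative, and d is the Haversine distance matrix in meters):
library(dbscan)                            # assumes the dbscan package is installed
db <- dbscan(d, eps = 100000, minPts = 5)  # eps = 100 km, chosen only for illustration
df$clust_db <- db$cluster                  # cluster 0 means "noise", i.e. no cluster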
I normally do not work with shapefiles, so I am a bit lost here. I have two shapefiles, each with multiple objects. The first is a set of 32 polygons (each one is a plot). The second shapefile has >10,000 objects which represent vegetation clusters of different sizes within each plot. I am trying to figure out:
1) How do I calculate the percent cover of total vegetation within each plot?
2) What percentage of the vegetation cover in each plot comes from patches less than 5 square meters in area?
This is what my data looks like in ArcGIS for a single plot.
The following code will do what you want, I think.
NB: This uses the area information stored in the shapefile polygons (as explained below). It does not use the Area column in your vegetation shapefile data section. In most cases, your Area is identical to the area stored in the shapefile, but in some cases your Area is much larger. Since I don't know where your Area data came from, it seemed safer to use the information stored with the shapefile polygons.
library(rgdal)
library(ggplot2)
setwd("<directory containing all your shapefiles>")
plt.map <- readOGR(dsn=".",layer="plots")
veg.map <- readOGR(dsn=".",layer="veg_in_plots")
# associate LocCode with polygon IDs
plt.data <- cbind(id=rownames(plt.map@data), LocCode=plt.map@data$LocCode)
veg.data <- cbind(id=rownames(veg.map@data), LocCode=veg.map@data$LocCode)
# function to extract area from polygon data
get.area <- function(polygon) {
  row <- data.frame(id=polygon@ID, area=polygon@area, stringsAsFactors=F)
  return(row)
}
# area of each plot polygon
plt.areas <- do.call(rbind,lapply(plt.map@polygons, get.area))
plt.data <- merge(plt.data,plt.areas, by="id") # append area column to plt.data
# area of each vegetation polygon
veg.areas <- do.call(rbind,lapply(veg.map@polygons, get.area))
veg.data <- merge(veg.data,veg.areas, by="id") # append area column to veg.data
# total area of vegetation polygons by LocCode
veg.smry <- aggregate(area~LocCode,data=veg.data,sum)
smry <- merge(plt.data,veg.smry,by="LocCode")
smry$coverage <- with(smry,100*area.y/area.x) # coverage percentage
# total area for vegetation object with A < 5 msq
veg.lt5 <- aggregate(area~LocCode,data=veg.data[veg.data$area<5,],sum)
smry <- merge(smry, veg.lt5, by="LocCode")
# fraction of covered area coming from veg. obj. with A < 5 msq
smry$pct.lt5 <- with(smry, 100*area/area.y)
Produces this:
# head(smry)
# LocCode id area.x area.y coverage area pct.lt5
# 1 1 3 1165.916 259.2306 22.23408 60.98971 23.52720
# 2 10 11 1242.770 366.3222 29.47626 88.21827 24.08216
# 3 11 12 1181.366 213.2105 18.04779 129.21612 60.60496
# 4 12 13 1265.352 577.6037 45.64767 236.83946 41.00380
# 5 13 14 1230.662 226.2686 18.38593 48.09509 21.25575
# 6 14 15 1274.538 252.0577 19.77640 46.94874 18.62619
Explanation:
Shapefiles can be imported into R using readOGR(...) in the rgdal package. When importing a polygon shapefile, the result is a "SpatialPolygonsDataFrame" object. These objects basically have two sections: a polygon section, which has the coordinates needed to plot each polygon, and a data section, which has data for each polygon (so, one row per polygon). If the shapefile is imported as, e.g., map,
map <- readOGR(dsn=".",layer="myShapeFile")
then the polygon and data sections can be accessed as map@polygons and map@data. It turns out that the polygon areas are stored in the polygon section. To get the areas, we define a function, get.area(...), that extracts the area and polygon ID from a polygon. Then we call that function for all polygons using lapply(...), and bind all the returned values together row-wise using rbind(...):
plt.areas <- do.call(rbind,lapply(plt.map@polygons, get.area))
veg.areas <- do.call(rbind,lapply(veg.map@polygons, get.area))
Now we need to associate vegetation areas with plot polygons. This is done through column LocCode, which is present in the data section of each shapefile. So we first associate polygon ID's with LocCode for both plots and vegetation areas:
plt.data <- cbind(id=rownames(plt.map@data), LocCode=plt.map@data$LocCode)
veg.data <- cbind(id=rownames(veg.map@data), LocCode=veg.map@data$LocCode)
Then we append the area column based on polygon ID:
plt.data <- merge(plt.data,plt.areas, by="id") # append area column to plt.data
veg.data <- merge(veg.data,veg.areas, by="id") # append area column to veg.data
Then we need to sum the vegetation areas by LocCode:
veg.smry <- aggregate(area~LocCode,data=veg.data,sum)
And finally merge this with the plot polygon areas:
smry <- merge(plt.data,veg.smry,by="LocCode")
In the smry dataframe, area.x is the area of the plot, and area.y is the total area covered by vegetation in that plot. Since, for both shapefiles, the projection is:
+proj=utm +zone=13 +datum=NAD83 +units=m +no_defs +ellps=GRS80 +towgs84=0,0,0
the units are in meters and the areas are in msq. To determine how much of the vegetation is coming from areas < 5 msq, we total the vegetation areas with area < 5 and merge that result with smry:
veg.lt5 <- aggregate(area~LocCode,data=veg.data[veg.data$area<5,],sum)
smry <- merge(smry, veg.lt5, by="LocCode")
Finally, with the data we have it's straightforward to render maps for each plot area:
cols <- c("id","LocCode")
plt.df <- fortify(plt.map)
plt.df <- merge(plt.df,plt.data[cols],by="id")
veg.df <- fortify(veg.map)
veg.df <- merge(veg.df,veg.data[cols],by="id")
ggp <- ggplot(plt.df, aes(x=long, y=lat, group=group))
ggp <- ggp + geom_path()
ggp <- ggp + geom_polygon(data=veg.df, fill="green")
ggp <- ggp + facet_wrap(~LocCode,scales="free")
ggp <- ggp + theme(axis.text=element_blank())
ggp <- ggp + labs(x="",y="")
ggp <- ggp + coord_fixed()
ggp