I have a GeoJSON file for Peru and its states (Departamentos in Spanish).
I can plot Peru's states using leaflet, but since the GeoJSON file does not contain all the data I need, I'm thinking of converting it to a data.frame, adding the columns of data I need, and then returning it to GeoJSON format for plotting.
Data: you can download the GeoJSON data for Perú from here:
This is the data I'm using, and I need to add a Sales column to it with a row for every state ("NOMBDEP", 24 in total):
library(leaflet)
library(jsonlite)
library(dplyr)
states <- geojsonio::geojson_read("https://raw.githubusercontent.com/juaneladio/peru-geojson/master/peru_departamental_simple.geojson", what = "sp")
I thought of using the "jsonlite" package to transform the GeoJSON to a data frame, but I'm getting this error:
library(jsonlite)
states <- fromJSON(states)
Error: Argument 'txt' must be a JSON string, URL or file.
I was expecting that, once I had a data frame, I would be able to do something like:
states$sales <- sales # sales is a vector with the sales for every department
states <- toJSON(states)
You can use library(geojsonsf) to go to and from GeoJSON and sf
library(geojsonsf)
library(sf) ## for sf print methods
states <- geojsonsf::geojson_sf("https://raw.githubusercontent.com/juaneladio/peru-geojson/master/peru_departamental_simple.geojson")
states$myNewValue <- 1:nrow(states)
geo <- geojsonsf::sf_geojson(states)
substr(geo, 1, 200)
# [1] "{\"type\":\"FeatureCollection\",\"features\":[{\"type\":\"Feature\",\"properties\":{\"COUNT\":84,\"FIRST_IDDP\":\"01\",\"HECTARES\":3930646.567,\"NOMBDEP\":\"AMAZONAS\",\"myNewValue\":1},\"geometry\":{\"type\":\"Polygon\",\"coordinat"
You can see myNewValue is in the GeoJSON
You don't need to convert back and forth; you can just add another column to the states SPDF:
states <- geojsonio::geojson_read("https://raw.githubusercontent.com/juaneladio/peru-geojson/master/peru_departamental_simple.geojson", what = "sp")
states$sales <- abs(rnorm(nrow(states), sd=1000))
plot(states, col=states$sales)
This yields a map of the departments coloured by the simulated sales values.
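Not part of the original answer, but since the question plots with leaflet, here is a minimal sketch of how the new sales column could drive a leaflet choropleth (the palette and labels are illustrative choices):
library(leaflet)

# Assumes `states` is the SpatialPolygonsDataFrame with the sales column added above
pal <- colorNumeric("Blues", domain = states$sales)

leaflet(states) %>%
  addPolygons(fillColor = ~pal(sales), fillOpacity = 0.7,
              color = "black", weight = 1,
              label = ~paste0(NOMBDEP, ": ", round(sales)))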
I have a shapefile of population estimates at different administrative levels in Nigeria, and I want to create a cartogram from it.
I used the cartogram package and tried the following:
library(cartogram)
admin_lvl2_cartogram <- cartogram(admin_level2_shape, "mean", itermax=5)
However, this gives me an error stating: "Error: Using an unprojected map. This function does not give correct centroids and distances for longitude/latitude data: Use 'st_transform()' to transform coordinates to another projection." I'm not sure how to resolve this.
To recreate the initial data
Download the data using the wopr package
library(wopr)
catalogue <- getCatalogue()
# Select files from the catalogue by subsetting the data frame
selection <- subset(catalogue,
                    country == 'NGA' &
                    category == 'Population' &
                    version == 'v1.2')
# Download selected files
downloadData(selection)
Manually unzip the downloaded zip file (NGA_population_v1_2_admin.zip) and read in the data
library(rgdal)
library(here)
admin_level2_shape <- readOGR(here::here("wopr/NGA/population/v1.2/NGA_population_v1_2_admin/NGA_population_v1_2_admin_level2_boundaries.shp"))
The spTransform function in the sp package is probably easiest, because the readOGR call returns a spatial polygons object defined in that package.
Here's a full example that transforms to a suitable projection for Nigeria, "+init=epsg:26331". You'll probably have to Google to find the exact one for your needs.
#devtools::install_github('wpgp/wopr')
library(wopr)
library(cartogram)
library(rgdal)
library(sp)
library(here)
catalogue <- getCatalogue()
# Select files from the catalogue by subsetting the data frame
selection <- subset(catalogue, country == 'NGA' & category == 'Population' & version == 'v1.2')
# Download selected files
downloadData(selection)
unzip(here::here("wopr/NGA/population/v1.2/NGA_population_v1_2_admin.zip"),
      overwrite = TRUE,
      exdir = here::here("wopr/NGA/population/v1.2"))
admin_level2_shape <- readOGR(here::here("wopr/NGA/population/v1.2/NGA_population_v1_2_admin/NGA_population_v1_2_admin_level2_boundaries.shp"))
transformed <- spTransform(admin_level2_shape, CRS("+init=epsg:26331"))
admin_lvl2_cartogram <- cartogram(transformed, "mean", itermax=5)
I confess I don't know anything about the specific packages so I don't know if what is produced is correct, but at least it transforms.
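Not part of the original answer: since the error message itself points at st_transform(), here is a hedged sketch of the same reprojection done with sf, assuming the same file layout and EPSG code as above (newer versions of the cartogram package expose the contiguous algorithm as cartogram_cont(), which also accepts sf objects):
library(sf)
library(cartogram)
library(here)

# Read the admin level 2 boundaries with sf instead of rgdal
admin_sf <- st_read(here::here("wopr/NGA/population/v1.2/NGA_population_v1_2_admin/NGA_population_v1_2_admin_level2_boundaries.shp"))

# Reproject to the same projected CRS used above (EPSG:26331)
admin_proj <- st_transform(admin_sf, 26331)

# Contiguous cartogram weighted by the same "mean" column
admin_lvl2_cartogram <- cartogram_cont(admin_proj, "mean", itermax = 5)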
I am using the osmdata package to extract data from OpenStreetMap (OSM) and turn it into an sf object. Unfortunately, I have not found a way to get the encoding right using the functions of the osmdata and sf packages. Currently, I am changing the encoding afterwards via the Encoding function, which is quite cumbersome because it involves a nested loop (over all data frames contained in the object returned from OpenStreetMap, and over all character columns within these data frames).
Is there a more generic, nicer way to get the encoding right?
The following code shows the problem. It extracts OSM data on pharmacies in the German city of Neumünster:
library(osmdata)
library(sf)
library(purrr)
results <- opq(bbox = "Neumünster, Germany") %>%
  add_osm_feature(key = "amenity", value = "pharmacy") %>%
  osmdata_sf()
pharmacy_points <- results$osm_points
head(pharmacy_points$addr.city)
My locale and encoding are set as follows:
My current, but unsatisfactory, solution is the following:
encode_osm <- function(list){
  # For all data frames in the query result
  for (df in (names(list)[map_lgl(list, is.data.frame)])) {
    last <- length(list[[df]])
    # For all columns except the last column ("geometry")
    for (col in names(list[[df]])[-last]){
      # Change the encoding to UTF-8
      Encoding(list[[df]][[col]]) <- "UTF-8"
    }
  }
  return(list)
}
results_encoded <- encode_osm(results)
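Purely as a hedged sketch (not from the original post): the loop can be made slightly more generic by selecting the character columns by type, instead of assuming the geometry is the last column, while keeping the same behaviour for the query result above:
encode_osm2 <- function(x) {
  # Walk over every data frame in the osmdata result
  for (nm in names(x)[vapply(x, is.data.frame, logical(1))]) {
    # Pick the character columns by type rather than dropping the last column
    chr_cols <- names(x[[nm]])[vapply(x[[nm]], is.character, logical(1))]
    for (col in chr_cols) {
      Encoding(x[[nm]][[col]]) <- "UTF-8"
    }
  }
  x
}

results_encoded <- encode_osm2(results)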
I would like to save a shapefile after a manipulation.
First, I read my object:
map <- readOGR("C:/MAPS", "33SEE250GC_SIR")
After this, I subset my shapefile:
test <- fortify(map, region = "CD_GEOCODI")
test <- subset(test, -43.41 < long & long < -43.1 & -23.05 < lat & lat < -22.79)
I get the corresponding ids of this subset:
ids <- unique(test$id)
map2 <- map[map$CD_GEOCODI %in% ids, ]
When I plot map2, everything looks right. But when I try to save this shapefile, something goes wrong:
writeOGR(map2, dsn = "C:/MAPS" , layer = "nameofmynewmap")
Error in match(driver, drvs$name) :
argument "driver" is missing, with no default
I don't know how to specify the driver. Any solution?
The problem is that your map2 object is no longer a shapefile and therefore you cannot save it as a shapefile. The fortify command converts the data slot of the shapefile (map@data) to a data.frame object to be used for mapping purposes. ggplot2 cannot handle objects of class sp (spatial polygons, i.e. shapefiles). I'm assuming you want to save this 'reduced' or 'subsetted' data. What you need to do is the following:
library(rgdal)
library(dplyr)
map <- readOGR("C:/MAPS","33SEE250GC_SIR")
map <- subset(map, LON > -43.41 & LON < -43.1 & LAT > -23.05 & LAT < -22.79)  # assumes the attribute table has LON/LAT columns
writeOGR(map, ".", "filename",
driver = "ESRI Shapefile") #also you were missing the driver argument
I am having problems opening the .shp file in R after I have joined attributes from a csv file to the dbf file. I have a lot of experience coding in R, but limited experience with GIS in R. I have experience with ArcGIS, but no longer have access to the program. I know how to create bubble plots and other maps in R using the csv file and plotting points, but I would like to be able to add the attributes to the .dbf and then use the shapefile to fill in the county areas with a Brewer palette. I can open the shapefile fine before joining the attributes to the .dbf file (the files were obtained from the US Census Bureau website).
Here is my code below:
library(gpclib)
library(maptools)
library(RColorBrewer)
library(classInt)
library(TeachingDemos)
gci<-read.csv("C:/Users/Smackbug/marketingmapexample.csv", header=TRUE) #Has Geo_ID
#read in dbf file to append data
gci2<-gci
gci2<-na.omit(gci2) #remove any empty data points
#read in dbf file to add attributes
akdbf<-read.dbf(file.choose())#downloaded from the us census bureau
#merge to join attributes
joined<-merge(akdbf,gci2, by=c("GEO_ID"))
#Save original and new dbf
write.dbf(akdbf, "C:/Users/Smackbug/Desktop/shapefiles/gz_2010_02_060_00_500koriginal.dbf")
write.dbf(joined, "C:/Users/Smackbug/Desktop/shapefiles/gz_2010_02_060_00_500k.dbf")
and I get the following error from this part of the code:
alaska <- readShapePoly(file.choose(), proj4string = CRS("+proj=longlat"))
Error in `row.names<-.data.frame`(`*tmp*`, value = value) :
  invalid 'row.names' length
and the rest of the code
#the rest of the code should look something like this
colors<-brewer.pal(5,"Reds")
brks<-classIntervals(alaska$medianIncome, n=5, style="fixed", fixedBreaks=c(0,25,50,100,250))
plot(brks, pal=colors)
brks<-brks$brks
plot(alaska, col=colors[findInterval(alaska$medianIncome, brks, all.inside=TRUE)], axes=F)
You are breaking the sp (shapefile) object in multiple ways. You cannot add data to the dbf independently of operating on the shapefile; everything is indexed in one of the binary files (shx) comprising the shapefile. You are also breaking the internal relationship of the sp object by using merge.
The most efficient way is to use rgdal to read the shapefile, join the dbf, and finally write out (or overwrite) a new shapefile.
require(rgdal)
require(sp)
require(foreign)
# Read data
shp <- readOGR(getwd(), "ShpName")
tbl <- read.dbf("infile.dbf")
# Merge data using match
shp@data <- data.frame(shp@data, tbl[match(shp@data[, "GEO_ID"], tbl[, "GEO_ID"]), ])
# Write new shapefile with added attributes. The additional flags will overwrite
# the layer if the name is the same as the original
writeOGR(shp, getwd(), "NewShp", driver = "ESRI Shapefile",
         check_exists = TRUE, overwrite_layer = TRUE)
If you need a more formal merge function, you can use this:
##########################################################################
# PROGRAM: merge.sp.df
#
# USE: JOINS A dataframe OBJECT TO A sp CLASS SpatialDataFrame OBJECT
# KEEPING INTEGRITY OF DATA
#
# REQUIRES: sp CLASS SpatialDataFrame OBJECT
# PACKAGES: sp
#
# ARGUMENTS:
# x sp SpatialDataFrame OBJECT
# y dataframe OBJECT TO MERGE
# xcol MERGE COLUMN NAME IN sp OBJECT
# ycol MERGE COLUMN NAME IN dataframe OBJECT
#
# EXAMPLE:
# # Not Run (dat.sp is an sp object and dat is a data.frame to merge)
# dat.sp <- merge.sp.df(dat.sp, dat, "dat.sp-ID", "dat-ID")
#
# VALUE:
# A NEW SpatialDataFrame OBJECT WITH MERGED DATA
##########################################################################
merge.sp.df <- function(x, y, xcol, ycol) {
  x@data$sort <- 1:nrow(as(x@data, "data.frame"))
  xdf <- as(x@data, "data.frame")
  xdf <- merge(xdf, y, by.x = xcol, by.y = ycol)
  xdf <- xdf[order(xdf$sort), ]
  row.names(xdf) <- xdf$sort
  xdf <- xdf[, -which(names(xdf) == "sort")]
  x@data <- xdf
  return(x)
}
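For the GEO_ID join in this question, the call would then look something like this (a sketch reusing the shp and tbl objects from the code above):
shp <- merge.sp.df(shp, tbl, "GEO_ID", "GEO_ID")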
Here's a working result (thanks Jeffery for your help) using some of Jeffery's code above:
library(sp)
library(rgdal)
library(foreign)
setwd("C:/Users/rhonda/Documents/R scripts/shapefiles")
gc<-read.csv("gcmarketingmapexample.csv", header=TRUE)
#read in dbf file to append data
akdbf<-read.dbf("gz_2010_02_060_00_500k.dbf")
#merge to join attributes
joined<-merge(akdbf,gc, by=c("GEO_ID")`enter code here`
#save new dbf
write.dbf(joined, "akdbf")
#Shape File and DBF file
frame<-readOGR(getwd(),"gz_2010_02_060_00_500k")
akdbf2<-read.dbf("akdbf.dbf")
frame#data=akdbf2[match(frame#data[,"GEO_ID"], akdbf2[,"GEO_ID"]),]
writeOGR(frame,getwd() , "akdbf",driver="ESRI Shapefile", check_exists=TRUE, overwrite_layer=TRUE)
#use new shapefiles to create maps
I want to convert two .shp files into one database that would allow me to draw the maps together.
Also, is there a way to convert .shp files into .csv files? I want to be able to personalize and add some data, which is easier for me to do in a .csv format. What I have in mind is to overlay yield data and precipitation data on the maps.
Here are the shapefiles for Morocco and Western Sahara.
Code to plot the two files:
# This is code for mapping of CGE_Morocco results
# Loading administrative coordinates for Morocco maps
library(sp)
library(maptools)
library(mapdata)
# Loading shape files
Mor <- readShapeSpatial("F:/Purdue University/RA_Position/PhD_ResearchandDissert/PhD_Draft/Country-CGE/MAR_adm1.shp")
Sah <- readShapeSpatial("F:/Purdue University/RA_Position/PhD_ResearchandDissert/PhD_Draft/Country-CGE/ESH_adm1.shp")
# Plotting the maps (raw)
png("Morocco.png")
Morocco <- readShapePoly("F:/Purdue University/RA_Position/PhD_ResearchandDissert/PhD_Draft/Country-CGE/MAR_adm1.shp")
plot(Morocco)
dev.off()
png("WesternSahara.png")
WesternSahara <- readShapePoly("F:/Purdue University/RA_Position/PhD_ResearchandDissert/PhD_Draft/Country-CGE/ESH_adm1.shp")
plot(WesternSahara)
dev.off()
After looking into suggestions from @AriBFriedman and @PaulHiemstra, and subsequently figuring out how to merge .shp files, I have managed to produce the following map using the code and data below (for the .shp data, see the links above).
Code:
# Merging Mor and Sah .shp files into one .shp file
MoroccoData <- rbind(Mor@data, Sah@data) # First, 'stack' the attribute list rows using rbind()
MoroccoPolys <- c(Mor@polygons, Sah@polygons) # Next, combine the two polygon lists into a single list using c()
summary(MoroccoData)
summary(MoroccoPolys)
offset <- length(MoroccoPolys) # Next, generate a new polygon ID for the new SpatialPolygonDataFrame object
browser()
for (i in 1:offset) {
  sNew <- as.character(i)
  MoroccoPolys[[i]]@ID <- sNew
}
ID <- c(as.character(1:length(MoroccoPolys))) # Create an identical ID field and append it to the merged Data component
MoroccoDataWithID <- cbind(ID,MoroccoData)
MoroccoPolysSP <- SpatialPolygons(MoroccoPolys,proj4string=CRS(proj4string(Sah))) # Promote the merged list to a SpatialPolygons data object
Morocco <- SpatialPolygonsDataFrame(MoroccoPolysSP,data = MoroccoDataWithID,match.ID = FALSE) # Combine the merged Data and Polygon components into a new SpatialPolygonsDataFrame.
Morocco@data$id <- rownames(Morocco@data)
Morocco.fort <- fortify(Morocco, region='id')
Morocco.fort <- Morocco.fort[order(Morocco.fort$order), ]
MoroccoMap <- ggplot(data = Morocco.fort, aes(long, lat, group = group)) +
  geom_polygon(colour = 'black', fill = 'white') +
  theme_bw()
Results:
New Question:
1- How do I eliminate the boundary data that cuts through the map in half?
2- How do I combine different regions within a .shp file?
Thank you all.
P.S.: The community on stackoverflow.com is wonderful and very helpful, especially toward beginners like me :) Just thought I'd emphasize it.
Once you have loaded your shapefiles into Spatial{Lines/Polygons}DataFrames (classes from the sp-package), you can use the fortify generic function to transform them to flat data.frame format. The specific functions for the fortify generic are included in the ggplot2 package, so you'll need to load that first. A code example:
library(ggplot2)
polygon_dataframe = fortify(polygon_spdf)
where polygon_spdf is a SpatialPolygonsDataFrame. A similar approach works for a SpatialLinesDataFrame.
The difference between my solution and that of @AriBFriedman is that mine includes the x and y coordinates of the polygons/lines, in addition to the data associated with those polygons/lines. I really like visualising my spatial data with the ggplot2 package.
Once you have your data in a normal data.frame you can simply use write.csv to generate a csv file on disk.
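For example (a minimal sketch; the file name is illustrative):
write.csv(polygon_dataframe, "morocco_polygons.csv", row.names = FALSE)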
I think you mean you want the associated data.frame from each?
If so, it can be accessed with the @ slot-access operator. The slot is called data:
write.csv(WesternSahara@data, file = "/home/wherever/myWesternSahara.csv")
Then when you read it back in with read.csv, you can try assigning:
myEdits <- read.csv("/home/wherever/myWesternSahara_modified.csv")
WesternSahara@data <- myEdits
You may need to do some massaging of row names and so forth to get it to accept the new data.frame as valid. I'd probably try to merge the existing data.frame with a csv you read into R, rather than making edits destructively...
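A hedged sketch of that non-destructive merge; the extra csv file name and the ID_1 join key are illustrative assumptions, and the match-based join keeps the rows aligned with the polygon order:
extra <- read.csv("/home/wherever/extra_data.csv")   # hypothetical csv keyed by ID_1
idx <- match(WesternSahara@data$ID_1, extra$ID_1)    # align the csv rows to the polygon order
new_cols <- setdiff(names(extra), "ID_1")
WesternSahara@data[, new_cols] <- extra[idx, new_cols]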