This is a newbie question. I want to plot state-level unemployment on a US map. There have been in-depth discussions here and elsewhere about how to plot county-level unemployment and the issues associated with it, but the code looks intimidating to me. Is there simple code out there that takes two columns, a state code and a factor variable indicating numeric intervals, and yields a US map colored by that factor variable? A supplementary question: if I need to go a little further and create a similar plot but with unemployment rates in major US cities, how do I modify the code?
Thank you in advance.
Here is a quick piece of code with comments explaining each step. Let me know if you have questions.
# load libraries
library(XML);
library(ggplot2);
library(maps);
library(plyr);
# read the data from the bls website with correct column formats
unemp = readHTMLTable('http://www.bls.gov/web/laus/laumstrk.htm',
colClasses = c('character', 'character', 'numeric'))[[2]];
# rename columns and convert region to lowercase
names(unemp) = c('rank', 'region', 'rate');
unemp$region = tolower(unemp$region);
# get us state map data and merge with unemp
us_state_map = map_data('state');
map_data = merge(unemp, us_state_map, by = 'region');
# keep data sorted by polygon order
map_data = arrange(map_data, order);
# plot map using ggplot2
p0 = ggplot(map_data, aes(x = long, y = lat, group = group)) +
geom_polygon(aes(fill = cut_number(rate, 5))) +
geom_path(colour = 'gray', linetype = 2) +
scale_fill_brewer('Unemployment Rate (Jan 2011)', palette = 'PuRd') +
coord_map();
# note: geom_path takes a linetype argument, and scale_fill_brewer takes palette
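For the supplementary question about major cities, one option is to keep the state map as the base layer and overlay a point layer. A minimal sketch: the coordinates come from the us.cities dataset bundled with the maps package, and the rate column here is a made-up placeholder you would replace with real city unemployment figures.
# overlay city-level points on the state map; the rates below are placeholders
data(us.cities, package = 'maps')
cities = subset(us.cities, pop > 500000 & !country.etc %in% c('AK', 'HI'))
cities$rate = runif(nrow(cities), 3, 12)  # substitute real city unemployment rates here
p0 + geom_point(data = cities, inherit.aes = FALSE,
                aes(x = long, y = lat, size = rate), colour = 'darkred')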
Ramnath nailed this one. If you're still looking for other solutions, there's a decent example using other packages at the SAS-and-R blog.
I would like to make a map in R that colours in the FAO Fishing Areas according to a data set (in my case, length data of shark species).
I would prefer to do a choropleth map in ggplot, but other types of maps are also fine. Worst-case scenario, a base map of FAO areas that I can add bubbles to would work. Even just an existing base map of FAO areas would be great. Any suggestions welcome!
I went to this page and clicked through to find this link to retrieve a GeoJSON file:
download.file("http://www.fao.org/fishery/geoserver/fifao/ows?service=WFS&request=GetFeature&version=1.0.0&typeName=fifao:FAO_AREAS_CWP&outputFormat=json", dest="FAO.json")
From here on, I was following this example from the R graph gallery, with a little help from this SO question and these notes:
library(geojsonio)
library(sp)
library(broom)
library(ggplot2)
library(dplyr) ## for joining values to map
spdf <- geojson_read("FAO.json", what = "sp")
At this point, plot(spdf) will bring up a plain (base-R) plot of the regions.
spdf_fortified <- tidy(spdf)
## make up some data to go with ...
fake_fish <- data.frame(id = as.character(1:324), value = rnorm(324))
spdf2 <- spdf_fortified %>% left_join(fake_fish, by = "id")
ggplot() +
geom_polygon(data = spdf2, aes( x = long, y = lat, group = group,
fill = value), color="grey") +
scale_fill_viridis_c() +
theme_void() +
theme(plot.background = element_rect(fill = 'lightgray', colour = NA)) +
coord_map() +
coord_sf(crs = "+proj=cea +lon_0=0 +lat_ts=45") ## Gall projection
ggsave("FAO.png")
Notes:
- Some of the steps are slow; it might be worth coarsening/lowering the resolution of the spatial polygons object (see the sketch after these notes). If you just want to show the picture, the current level of detail is probably overkill.
- To be honest, the default sequential colour scheme might be better, but all the cool kids seem to like "viridis" these days, so ...
- There are probably better ways to do a lot of these pieces (e.g. setting the map projection, filling in the background colour for land masses, ...).
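On the first note, one option for coarsening the polygons is rmapshaper::ms_simplify(); a minimal sketch, applied to spdf right after geojson_read() and before tidy(), with keep = 0.05 as an arbitrary starting point:
# optional: drop most of the vertices before tidy()/plotting to speed things up
library(rmapshaper)
spdf <- ms_simplify(spdf, keep = 0.05)  # retain roughly 5% of the points; tune to taste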
I am using an excel sheet for data. One column has FIPS numbers for GA counties and the other is labeled Count with numbers 1 - 5. I have made a map with these values using the following code:
library(usmap)
library(ggplot2)
library(rio)
carrierdata <- import("GA Info.xlsx")
plot_usmap( data = carrierdata, values = "Count", "counties", include = c("GA"), color="black") +
labs(title="Georgia")+
scale_fill_continuous(low = "#56B1F7", high = "#132B43", name="Count", label=scales::comma)+
theme(plot.background=element_rect(), legend.position="right")
I've included the picture of the map I get and a sample of the data I am using. Can anyone help me put the actual Count numbers on each county?
Thanks!
The usmap package is a good source for county maps, but the data it contains is in the format of data frames of x, y co-ordinates of county outlines, whereas you need the numbers plotted in the center of the counties. The package doesn't seem to contain the center co-ordinates for each county.
Although it's a bit of a pain, it is worth converting the map into a formal sf data frame format to give better plotting options, including the calculation of the centroid for each county. First, we'll load the necessary packages, get the Georgia data and convert it to sf format:
library(usmap)
library(sf)
library(ggplot2)
d <- us_map("counties")
d <- d[d$abbr == "GA",]
GAc <- lapply(split(d, d$county), function(x) st_polygon(list(cbind(x$x, x$y))))
GA <- st_sfc(GAc, crs = usmap_crs()@projargs)
GA <- st_sf(data.frame(fips = unique(d$fips), county = names(GAc), geometry = GA))
Now, obviously I don't have your numeric data, so I'll have to make some up, equivalent to the data you are importing from Excel. I'll assume your own carrierdata has a column named "fips" and another called "values":
set.seed(69)
carrierdata <- data.frame(fips = GA$fips, values = sample(5, nrow(GA), TRUE))
So now we left_join our imported data to the GA county data:
GA <- dplyr::left_join(GA, carrierdata, by = "fips")
And we can calculate the center point for each county:
GA$centroids <- st_centroid(GA$geometry)
All that's left now is to plot the result:
ggplot(GA) +
geom_sf(aes(fill = values)) +
geom_sf_text(aes(label = values, geometry = centroids), colour = "white")
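If the white text turns out to be hard to read on the lighter fills, geom_sf_label() is a drop-in variant that draws each number on a small background box (a sketch using the same GA object and centroids as above):
# same plot, but with boxed labels instead of plain white text
ggplot(GA) +
  geom_sf(aes(fill = values)) +
  geom_sf_label(aes(label = values, geometry = centroids))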
I have a data set of ~25,000 people that have complete postal codes. I'm trying to create a map of Canada at the FSA level but always seem to get bizarre results. I would appreciate if someone could point out where my mistakes are happening or what I'm missing.
library(rgeos)
library(maptools)
library(ggplot2)
fsas = readShapeSpatial('./Resources/FSA/gfsa000a11a_e.shp')
data = fortify(fsas, region = 'CFSAUID')
data$fsa = factor(data$id)
data$id = NULL
df$fsa = substr(df$Postal, 1, 3)
prvdr_cts = data.frame(table(df$fsa)) ; names(prvdr_cts) = c('fsa', 'ct')
plot.data = merge(data, prvdr_cts, by = 'fsa')
ggplot(plot.data, aes(x = long, y = lat, group = group, fill = ct)) +
geom_polygon() +
coord_equal()
This is my resulting plot
I got my map file from http://www12.statcan.gc.ca/census-recensement/2011/geo/bound-limit/bound-limit-2011-eng.cfm under 'Forward sortation areas'. df has two columns Person ID and FSA.
I've seen similar problems before when I forgot group = group (as @r.bot points out), but since you have that, I wonder if it's because the shapefile you're using is highly detailed.
I suggest trying the sf package to load shapefiles; it has superseded readShapeSpatial(). This has the advantages of being faster and of not needing fortify(). I've also simplified the shapefile somewhat to make plotting faster. Finally, you need the development version of ggplot2 to use the new geom_sf() (at the time of writing):
install.packages(c("rmapshaper", "sf", "devtools"))
devtools::install_github("tidyverse/ggplot2")
library("ggplot2")
fsas = sf::read_sf("gfsa000a11a_e.shp")
fsas = rmapshaper::ms_simplify(fsas, keep = 0.05)
ggplot(fsas) + geom_sf()
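From there, getting back to your choropleth is a join plus a fill aesthetic. A minimal sketch, assuming prvdr_cts is built exactly as in your question (columns fsa and ct) and that the FSA code column in the shapefile is CFSAUID:
library(dplyr)
# the question's prvdr_cts has a factor key; make it character before joining
prvdr_cts$fsa = as.character(prvdr_cts$fsa)
fsas_ct = left_join(fsas, prvdr_cts, by = c("CFSAUID" = "fsa"))
ggplot(fsas_ct) +
  geom_sf(aes(fill = ct), colour = NA) +
  scale_fill_continuous(na.value = "grey90")  # FSAs with no people in the data stay grey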
I am relatively new to ggplot, so please forgive me if some of my problems are really simple or not solvable at all.
What I am trying to do is generate a "heat map" of a country where the filling of the shape is continuous. Furthermore, I have the shape of the country as .RData. I used Hadley Wickham's script to transform my SpatialPolygon data into a data frame. The long and lat data of my data frame now look like this
head(my_df)
long lat group
6.527187 51.87055 0.1
6.531768 51.87206 0.1
6.541202 51.87656 0.1
6.553331 51.88271 0.1
This long/lat data draws the outline of Germany. The rest of the data frame is omitted here since I think it is not needed. I also have a second data frame of values for certain long/lat points. This looks like this
my_fixed_points
long lat value
12.817 48.917 0.04
8.533 52.017 0.034
8.683 50.117 0.02
7.217 49.483 0.0542
What I would like to do now is colour each point of the map according to an average value over all the fixed points that lie within a certain distance of that point. That way I would get an (almost) continuous colouring of the whole map of the country.
What I have so far is the map of the country plotted with ggplot2
ggplot(my_df,aes(long,lat)) + geom_polygon(aes(group=group), fill="white") +
geom_path(color="white",aes(group=group)) + coord_equal()
My first idea was to generate points that lie within the map that has been drawn and then calculate the value for every generated point my_generated_point like so
value_vector <- subset(my_fixed_points,
spDistsN1(cbind(my_fixed_points$long, my_fixed_points$lat),
c(my_generated_point$long, my_generated_point$lat), longlat=TRUE) < 50,
select = value)
point_value <- mean(value_vector)
I haven't found a way to generate these points, though, and I don't even know if the problem can be solved this way. My question is whether there is a way to generate these points and/or whether there is another way to arrive at a solution.
Solution
Thanks to Paul I almost got what I wanted. Here is an example with sample data for the Netherlands.
library(ggplot2)
library(sp)
library(automap)
library(rgdal)
library(scales)
#get the spatial data for the Netherlands
con <- url("http://gadm.org/data/rda/NLD_adm0.RData")
print(load(con))
close(con)
#transform them into the right format for autoKrige
gadm_t <- spTransform(gadm, CRS=CRS("+proj=merc +ellps=WGS84"))
#generate some random values that serve as fixed points
value_points <- spsample(gadm_t, type="stratified", n = 200)
values <- data.frame(value = rnorm(dim(coordinates(value_points))[1], 0 ,1))
value_df <- SpatialPointsDataFrame(value_points, values)
#generate a grid that can be estimated from the fixed points
grd = spsample(gadm_t, type = "regular", n = 4000)
kr <- autoKrige(value~1, value_df, grd)
dat = as.data.frame(kr$krige_output)
#draw the generated grid with the underlying map
ggplot(gadm_t, aes(long, lat)) +
  geom_polygon(aes(group = group), fill = "white") +
  geom_path(color = "white", aes(group = group)) +
  coord_equal() +
  geom_tile(aes(x = x1, y = x2, fill = var1.pred), data = dat) +
  scale_fill_continuous(low = "white", high = muted("orange"), name = "value")
I think what you want is something along these lines. I predict that this homebrew is going to be terribly inefficient for large datasets, but it works on a small example dataset. I would look into kernel densities and maybe the raster package. But maybe this suits you well...
The following snippet of code calculates the mean value of cadmium concentration of a grid of points overlaying the original point dataset. Only points closer than 1000 m are considered.
library(sp)
library(ggplot2)
library(scales)  # for muted()
loadMeuse()      # loads and prepares the meuse point data and meuse.grid from sp
# Generate a grid to sample on
bb = bbox(meuse)
grd = spsample(meuse, type = "regular", n = 4000)
# Come up with mean cadmium value
# of all points < 1000m.
mn_value = sapply(1:length(grd), function(pt) {
d = spDistsN1(meuse, grd[pt,])
return(mean(meuse[d < 1000,]$cadmium))
})
# Make a new object
dat = data.frame(coordinates(grd), mn_value)
ggplot(aes(x = x1, y = x2, fill = mn_value), data = dat) +
geom_tile() +
scale_fill_continuous(low = "white", high = muted("blue")) +
coord_equal()
which leads to the following image:
An alternative approach is to use an interpolation algorithm. One example is kriging. This is quite easy using the automap package (spot the self-promotion: I wrote the package):
library(automap)
kr = autoKrige(cadmium~1, meuse, meuse.grid)
dat = as.data.frame(kr$krige_output)
ggplot(aes(x = x, y = y, fill = var1.pred), data = dat) +
geom_tile() +
scale_fill_continuous(low = "white", high = muted("blue")) +
coord_equal()
which leads to the following image:
However, without knowledge as to what your goal is with this map, it is hard for me to see what you want exactly.
This slideshow offers another approach--see page 18 for a description of the approach and page 21 for a view of what the results looked like for the slide-maker.
Note however that the slide-maker used the sp package and the spplot function rather than ggplot2 and its plotting functions.
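If you want a rough idea of what that route produces without building new data, spplot() from sp can plot an attribute of a Spatial*DataFrame directly; a minimal sketch using the kriging output kr from the answer above:
library(sp)
# lattice-based heat map of the kriged predictions, no ggplot2 involved
spplot(kr$krige_output, "var1.pred")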
I'm trying to create a frequency plot of number of appearances of a graph type by year.
I have played around with ggplot2 for a while, but I think this is over my head (I'm just getting started with R).
I attached a schematic of what I would like the result to look like. One of the other issues I'm running into is that there are many years in which some graph types don't appear. Is there a way to exclude a graph type if it does not appear in a given year?
E.g. in 1940 there is no "Sociogram", and I don't want to have a bunch of lines at 0...
year <- c("1940","1940","1940","1940","1940","1940","1940","1940","1940","1940","1940","1941","1941","1941","1941","1941","1941","1941","1941","1941","1941","1941","1941","1941","1941")
type <- c("Line","Column", "Stacked Column", "Scatter with line", "Scatter with line", "Scatter with line", "Scatter with line", "Map with distribution","Line","Line","Line","Bar","Bar","Stacked bar","Column","Column","Sociogram","Sociogram","Column","Column","Column","Line","Line","Line","Line")
ytmatrix <- data.frame(year = as.Date(year, "%Y"), type = type)  # bind year and type together
Please let me know if something doesn't make sense. StackOverflow is quickly becoming one of my favorite sites!
Thanks,
Jon
Here's what I have so far...
Thank you again for all your help!
And here's how I did it (I can't share the data file yet, since it's something we're hoping to use for a publication, but the ggplot part is probably the more interesting bit, though I didn't really do anything new that wasn't discussed in the post):
AJS = read.csv(data) #read in file
Type = AJS[,17] #select and name "Type" column from csv
Year = AJS[,13] #select and name "Year" column from csv
Year = substr(Year,9,12) #get rid of junk from year column
Year = as.Date(Year, "%Y") #convert the year character to a date
Year = format(Year, "%Y") #get rid of the dummy month and day
Type = as.data.frame(Type) #create data frame
yt <- cbind(Year,Type) #bind the year and type together
library(ggplot2)
trial <- ggplot(yt, aes(Year, ..count.., group = Type)) + #plot the data: aes(x-axis, y-axis, group the lines)
geom_density(alpha = 0.25, aes(fill = Type)) +
theme(axis.text.x = element_text(angle = 90, hjust = 0)) + #rotate the x-axis tick labels
ggtitle("Trends in the Use of Visualizations in The American Journal of Sociology") + #add title
scale_y_continuous('Appearances (10 or more)') #change the y-axis label
trial
This might be a more interesting dataframe to experiment with:
df1 <- data.frame(date = as.Date(10*365*rbeta(100, .5, .1), origin = "1970-01-01"), group = "a")
df2 <- data.frame(date = as.Date(10*365*rbeta(50, .1, .5), origin = "1970-01-01"), group = "b")
df3 <- data.frame(date = as.Date(10*365*rbeta(25, 3, 3), origin = "1970-01-01"), group = "c")
dfrm <- rbind(df1,df2,df3)
I thought working with an example in the help(stat_density) page would work, but it does not:
m <- ggplot(dfrm, aes(x=date), group=group)
m+ geom_histogram(aes(y=..density..)) + geom_density(fill=NA, colour="black")
However, a search of the archives turned up a posting by @Hadley Wickham with an example that does work:
m+ geom_density(aes(fill=group), colour="black")
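For what it's worth, the help(stat_density) example above probably misbehaves because group = group sits outside aes() and is never used as a grouping; a minimal sketch with the grouping moved inside (faceting by group would be another option):
# grouping has to live inside aes() to split the histogram/density by group
ggplot(dfrm, aes(x = date, group = group)) +
  geom_histogram(aes(y = ..density..)) +
  geom_density(fill = NA, colour = "black")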