I have two layers that I would like to plot as Geom.point. From the Gadfly documentation, I believe this should be possible, though the example only covers the case where the layer Geom types are different. However, when I attempt this (Julia 0.3.0-prerelease+2584, Gadfly v0.2.8) it throws an error:
x = [83, 71, 79, 71, 73, 66, 78, 70, 69, 84, 59, 66, 73]
y = [59, 47, 33, 68, 56, 61, 51, 45, 50, 44, 60, 62, 50]
ox = 74
oy = 49
plot(layer(x=x, y=y, Geom.point),
layer(x=ox, y=oy, Geom.point))
# BoundsError()
# in eval_plot_mapping at /Users/peter/.julia/v0.3/Gadfly/src/Gadfly.jl:317
# in render at /Users/peter/.julia/v0.3/Gadfly/src/Gadfly.jl:448
# in writemime at /Users/peter/.julia/v0.3/Gadfly/src/Gadfly.jl:753
# in sprint at io.jl:460
# in display_dict at /Users/peter/.julia/v0.3/IJulia/src/execute_request.jl:35
Ultimately, I'd also like to manually specify an aesthetic for each layer (e.g. Geom.point(color="red")).
Am I missing something about the grammar of graphics, or are two layers of the same Geom type not supported? If they are, how can I go about setting different aesthetics for each layer?
Gadfly expects x and y to be vectors, so plotting scalars won't work; wrap the single values in one-element vectors (e.g. x=[ox], y=[oy]).
Besides providing a separate color aesthetic for each layer, you can also use Theme to set the color manually, for example: Theme(default_color=color("red"))
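Putting those two fixes together, a sketch of a working call might look like this (reusing the data from the question; exact placement of the Theme may vary between Gadfly versions):

```julia
using Gadfly

x = [83, 71, 79, 71, 73, 66, 78, 70, 69, 84, 59, 66, 73]
y = [59, 47, 33, 68, 56, 61, 51, 45, 50, 44, 60, 62, 50]

# wrap the scalars in one-element vectors and give the
# second layer its own Theme for a different color
plot(layer(x=x, y=y, Geom.point),
     layer(x=[74], y=[49], Geom.point,
           Theme(default_color=color("red"))))
```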
Related
I need to draw a graph of the cumulative number of COVID-19 deaths in some countries, from a file containing the cumulative number of deaths by country for the whole world.
This is the file we got: "time_series_covid19_deaths_global.csv"
I'm pretty new to R, so I don't really know even how to start. I would love to get some help.
You can make a graph by plotting certain points in a simple way:
# Here is a daily deaths graph from the past 10 days in Canada, drawn as a line graph.
daily_deaths_from_canada <- c(89, 95, 82, 56, 98, 81, 114, 82, 89, 93)
plot(daily_deaths_from_canada, type = "l")
If you want to learn more, I suggest looking at this page.
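For the cumulative counts the question actually asks about, the same approach works once the daily counts are turned into a running total with cumsum() (a sketch, reusing the Canada vector above):

```r
daily_deaths_from_canada <- c(89, 95, 82, 56, 98, 81, 114, 82, 89, 93)

# cumsum() turns daily counts into a running total
cumulative_deaths <- cumsum(daily_deaths_from_canada)
plot(cumulative_deaths, type = "l",
     xlab = "Day", ylab = "Cumulative deaths")
```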
I am very new to raster data and the use of R for spatial data analysis, so apologies if there's an obvious solution or process for this I've missed.
I have a raster file of population data from WorldPop, and a set of latitude / longitude location points that overlay onto that. I am trying to determine what portion of the population is (according to the WorldPop estimates) within a given distance of these points of interest, and also what portion is not.
I understand that using raster::extract, I should be able to get the sum of population values from (for example) a 1-kilometer buffer around each of these points. (Although my points and raster data are both in lat/lon projection, so I gather I need to first correct for this by changing the projection to utm as done here.)
However, because some number of these points will be less than 1 km apart, I am concerned that this total sum is double-counting the population of some cells where buffers overlap. Does buffering automatically correct for this, or is there an efficient way to ensure that this is not the case, and also to get the values from the inverse of the buffered point area selection?
Here is a minimal self-contained reproducible example,
library(raster)
r <- raster(system.file("external/rlogo.grd", package="raster"))
d <- matrix(c(48, 48, 48, 53, 50, 46, 54, 70, 84, 85, 74, 84, 95, 85,
66, 42, 26, 4, 19, 17, 7, 14, 26, 29, 39, 45, 51, 56, 46, 38, 31,
22, 34, 60, 70, 73, 63, 46, 43, 28), ncol=2)
p <- SpatialPoints(d, proj4string=crs(r))
A simple workflow, with points p and raster r would be
b <- buffer(p, 10)
m <- mask(r, b)
ms <- cellStats(m, "sum")
rs <- cellStats(r, "sum")
ms/rs
#[1] 0.4965083
Or you can use terra to make this go faster, like this:
library(terra)
r <- rast(system.file("ex/logo.tif", package="terra"))[[1]]
p <- vect(d, crs=crs(r))
b <- buffer(p, 10)
m <- mask(r, b)
ms <- global(m, "sum", na.rm=TRUE)
rs <- global(r, "sum")
ms/rs
By the way, with the raster package your assertion about needing to transform lon/lat data is not correct for extract or buffer. In contrast, with terra you currently do need to do that (to be fixed).
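For example, with raster you can buffer-extract directly from lon/lat data, because the buffer width is interpreted in meters when the CRS is longitude/latitude (a sketch, assuming hypothetical lon/lat inputs: a population raster pop and SpatialPoints pts):

```r
library(raster)

# `pop` and `pts` are placeholders for your own lon/lat data;
# with a longitude/latitude CRS the buffer width is in meters,
# so a 1-km buffer needs no reprojection
pop_within_1km <- extract(pop, pts, buffer = 1000,
                          fun = sum, na.rm = TRUE)
```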
Well, thanks to a suggestion via Twitter and this guide to creating SpatialPolygons around points, I've been able to find an answer. This is probably not the most efficient approach (it's proving very slow on large polygons), but it's workable for my purposes.
Here's sample code:
library(tabularaster)
library(raster)
library(tidyverse)
library(rgeos)
# -----------------------
# load point data ---
p <- read_csv("points_of_interest.csv")
p_df <- p %>% rename(x = lat, y = lon)
p_coords <- p_df[, c("y","x")]
p_spdf <- SpatialPointsDataFrame(
coords = p_coords,
data = p_df,
proj4string = CRS("+init=epsg:4326"))
# convert projection to metric units
p_mrc <- spTransform(
p_spdf,
CRS("+proj=merc +a=6378137 +b=6378137 +lat_ts=0.0 +lon_0=0.0
+x_0=0.0 +y_0=0 +k=1.0 +units=m +nadgrids=#null +no_defs")
)
# buffer to 1000 meters
p_mrc_1k_mrc <- gBuffer(
p_mrc, byid = TRUE, width = 1000)
# switch back to lat/lon
p_mrc_1k <- spTransform(p_mrc_1k_mrc, CRS("+init=epsg:4326"))
# load raster data -------
r <- raster("pop.tif")
r_tib <- tabularaster::as_tibble(r)
# get intersection of cells and polygons
cell_df_1k <- cellnumbers(r, p_mrc_1k)
# get list of cells where there is intersection
target_cell_1k <- cell_df_1k$cell_
# add cell values to df listing all cells covered by polys
target_cells_extract_1k <- cell_df_1k %>%
rename(cellindex = cell_) %>%
left_join(r_tib)
# calculate the sum of population within 1k radius for each object
# (this includes overlapping population cells shared between polys)
cell_sum_1k <- target_cells_extract_1k %>%
group_by(object_) %>%
summarize(pop_1k = sum(cellvalue, na.rm = T))
# get only unique cell values for total overlapping coverage of all polys
target_cells_unique_1k <- r_tib %>% filter(cellindex %in% target_cell_1k)
total_coverage_pop <- sum(target_cells_unique_1k$cellvalue, na.rm = T)
outside_coverage_pop <- sum(r_tib$cellvalue) - total_coverage_pop
Trying to create a thematic map to represent data on a LargeSpatialPolygonDataFrame and I'm having difficulty creating a forced scale.
I'd like to use the following scale: seq(0,4500,500), to create ten different fill categories regardless of whether the data frame has data in each range or not, like the following image.
Texas_LMA SpatialPolygonDataFrame:
> Texas_LMA
class : SpatialPolygonsDataFrame
features : 33
extent : -106.6278, -93.52764, 25.85646, 36.5004 (xmin, xmax, ymin, ymax)
coord. ref. : +proj=longlat +datum=NAD83 +no_defs +ellps=GRS80 +towgs84=0,0,0
variables : 10
names : LMA, Sol_index, Capacity, LMA.data, Technology, Water_Capacity_Value, Robust., X, Water_Capacity, Water_Capacity_String
min values : 1, 135, 21, 1, Biomass, 0.00, 1, NA, 0, 0%
max values : 33, 135, 1739, 32, Biomass | Wind, 0.84, 1, NA, 84, 84%
It has the following Capacity values:
> unique(Texas_LMA$Capacity)
[1] 892 1739 156 NA 21 495
I'm using tmap to create the thematic map with the following code:
Fixed_Capacity_Heatmap <- tm_shape(Texas_LMA)+
tm_fill("Capacity",style="fixed",breaks=seq(0,4500,500))+
tm_borders()
Here are the results of the plot when there aren't enough categories: [Capacity Plot with 5 categories]
This issue should have been fixed as of https://github.com/mtennekes/tmap/commit/3a33563a4336042307320f470dc8189fb0572477, i.e. CRAN version 1.4-1. The problem was that classInt was struggling with numeric variables with only a few unique values. Please let me know if it still doesn't work, preferably with a reproducible example.
I am trying to extract the data from two overlapping sets of rasters (one a stack of 35 rasters, all from the same source, and the second an elevation raster) to get a data.frame of the values (the mean of the values) of each pixel across all the rasters.
The description of the raster stack is the following:
> stack_pacifico
class : RasterStack
dimensions : 997, 709, 706873, 35 (nrow, ncol, ncell, nlayers)
resolution : 0.008333333, 0.008333333 (x, y)
extent : -81.62083, -75.7125, 0.3458336, 8.654167 (xmin, xmax, ymin, ymax)
coord. ref. : +proj=longlat +datum=WGS84 +no_defs +ellps=WGS84 +towgs84=0,0,0
names : F101992.v//ts.avg_vis, F101993.v//ts.avg_vis, F101994.v//ts.avg_vis, F121994.v//ts.avg_vis, F121995.v//ts.avg_vis, F121996.v//ts.avg_vis, F121997.v//ts.avg_vis, F121998.v//ts.avg_vis, F121999.v//ts.avg_vis, F141997.v//ts.avg_vis, F141998.v//ts.avg_vis, F141999.v//ts.avg_vis, F142000.v//ts.avg_vis, F142001.v//ts.avg_vis, F142002.v//ts.avg_vis, ...
min values : 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
max values : 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, 63, ...
And for the elevation raster:
> elevation_pacifico
class : RasterLayer
dimensions : 997, 709, 706873 (nrow, ncol, ncell)
resolution : 0.008333333, 0.008333333 (x, y)
extent : -81.62083, -75.7125, 0.3458336, 8.654167 (xmin, xmax, ymin, ymax)
coord. ref. : +proj=longlat +datum=WGS84 +ellps=WGS84 +towgs84=0,0,0
data source : in memory
names : COL_msk_alt
values : -16, 5164 (min, max)
It is my first time working with raster data and I want to extract the data in grids of 1 km² (or less). I know the resolution of both rasters can be coerced to fit that area requirement; also, both dimensions are equal, so the number of pixels per raster is the same.
My question is: can I simply merge all the rasters (the ones in the stack and the elevation raster) and extract the data with confidence that all the pixels overlap (i.e. are in the same place)? Or do I have to create a master SpatialGrid or SpatialPixels object and then extract the raster data to those objects?
Thanks in advance,
Data from the raster stack can be downloaded by clicking this link (if you want to download the whole stack, you can use the script at https://github.com/ivanhigueram/nightlights):
http://www.ngdc.noaa.gov/eog/data/web_data/v4composites/
Elevation:
#Download country map and filter by pacific states
colombia_departments <- getData("GADM", download=T, country="CO", level=1)
pacific_littoral <- c(11, 13, 21, 30)
pacific_littoral_map <- colombia_departments[colombia_departments#data$ID_1 %in% pacific_littoral, ]
#Download elevation data and filter it for pacific states
elevation <- getData("alt", co="COL")
elevation_pacifico <- crop(elevation, pacific_littoral_map)
elevation_pacifico <- setExtent(elevation_pacifico, rasters_extent)
If the resolutions, extents and coordinate systems of the two raster objects are identical, then the cells will overlap perfectly. You can confirm this by looking at the coordinates:
coordinates(stack_pacifico)
coordinates(elevation_pacifico)
# are they the same?
identical(coordinates(stack_pacifico), coordinates(elevation_pacifico))
You can extract all cell values for each object using one of the following:
as.data.frame(r)
values(r)
r[]
extract(r, seq_len(ncell(r)))
where r is your raster object.
(These do not all have consistent behaviour for single raster layers - as.data.frame(r) ensures the result is a data.frame, which would have a single column if r is a single raster layer; in contrast the alternatives would return a simple vector if used with a single raster layer.)
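A quick sketch of that difference in return types, using the sample grid shipped with the raster package:

```r
library(raster)

r1 <- raster(system.file("external/rlogo.grd", package="raster"))

# as.data.frame() always returns a data.frame (one column here),
# while values() returns a plain numeric vector for a single layer
class(as.data.frame(r1))
class(values(r1))
```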
The rows of as.data.frame(stack_pacifico) correspond to cells at the same coordinates as the rows of as.data.frame(elevation_pacifico) (or, equivalently, the elements of values(elevation_pacifico)).
Or do this:
s <- stack(elevation_pacifico, stack_pacifico)
d <- values(s)
I have a brick file of the bioclim variables; the brick was merged from four 30-arcsecond tile bricks, so it is a bit large. I would like to get the brick for my research area by cutting it using a polygon as the boundary. What should I do? Otherwise, if it is not possible with a brick, can I do it with a raster?
Thanks in advance~
Marco
Check out extent() if you want to crop the brick to a smaller rectangle, or drawExtent() if you would rather choose by clicking.
EDIT: Since you used the terms "cut" and "mask" I am not sure I have understood correctly, but here are two ways that might help. You could even use both.
# an example with dimensions: 77, 101, 3 (nrow, ncol, nlayers)
myGrid_Brick <- brick(system.file("external/rlogo.grd", package="raster"))
# a simple polygon within those dimensions
myTriangle_P <- Polygon(cbind(c(10, 80, 50, 10), c(10, 20, 65, 10)))
myTriangle_Ps <- Polygons(list(myTriangle_P), "fubar")
myTriangle_SP <- SpatialPolygons(list(myTriangle_Ps))
myTriangle_Ras <- rasterize(myTriangle_SP, myGrid_Brick)
# this will crop a brick to the minimal rectangle that circumscribes the polygon
# extent(myCrop_Brick) is smaller than extent(myGrid_Brick) but no values are changed
myCrop_Brick <- crop(myGrid_Brick, myTriangle_SP)
# while this converts every coordinate that is NA in
# the mask to become NA in the returned brick
# while leaving the brick extent unchanged
myMask_Brick <- mask(myGrid_Brick, myTriangle_Ras)