I'm changing my spatial workflow to use the terra package instead of the raster package. With the raster package I used to read in multiple rasters directly into a stack.
filelist_temp <- list.files(datapath("Climate/World Clim 1 yr Monthly Weather/LCC June and July/June"), full.names = TRUE)
temp_rasters <- stack(filelist_temp)
Is there a simple way to do the same operation in terra?
This is what I came up with initially, but it doesn't work: I end up with a list of 25 SpatRasters instead of a single multi-layer object.
temp_rasters <- c(lapply(filelist_temp, rast))
It turns out it's simpler than I realized:
temp_rasters <- rast(filelist_temp)
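A minimal sketch of the full round trip (the folder path and the .tif pattern are placeholders, assuming a folder of single-layer GeoTIFFs):
library(terra)
# List the single-layer rasters in the folder (hypothetical path)
filelist_temp <- list.files("path/to/June", pattern = "\\.tif$", full.names = TRUE)
# rast() accepts a vector of file names and returns one multi-layer SpatRaster
temp_rasters <- rast(filelist_temp)
nlyr(temp_rasters) # should equal length(filelist_temp)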
I am trying to download high-resolution climate data for a bunch of lat/long coordinates, and combine them into a single dataframe. I've come up with a solution (below), but it will take forever with the large list of coordinates I have. I asked a related question on the GIS StackExchange to see if anyone knew of a better approach for downloading and merging the data, but I'm wondering if I could somehow just speed up the operation of the loop? Does anyone have any suggestions on how I might do that? Here is a reproducible example:
# Download and merge 0.5 minute MAT/MAP data from WorldClim for a list of lon/lat coordinates
# This is based on https://emilypiche.github.io/BIO381/raster.html
# Make a dataframe with coordinates
coords <- data.frame(Lon = c(-83.63, 149.12), Lat=c(10.39,-35.31))
# Load package
library(raster)
# Make an empty dataframe for dumping data into
coords3 <- data.frame(Lon = numeric(), Lat = numeric(), MAT_10 = numeric(), MAP_mm = numeric())
# Get WorldClim data for all the coordinates, and dump into coords3
for(i in seq_along(coords$Lon)) {
  r <- getData("worldclim", var = "bio", res = 0.5, lon = coords[i, 1], lat = coords[i, 2]) # Download the tile containing the lon/lat
  r <- r[[c(1, 12)]] # Reduce the RasterStack to just the variables we want (MAT*10 and MAP_mm)
  names(r) <- c("MAT_10", "MAP_mm") # Rename the layers to something intelligible
  points <- SpatialPoints(na.omit(coords[i, 1:2]), proj4string = r@crs) # give lon/lat to SpatialPoints
  values <- extract(r, points)
  coords2 <- cbind.data.frame(coords[i, 1:2], values)
  coords3 <- rbind(coords3, coords2)
}
# Convert MAT*10 from WorldClim into MAT in Celsius
coords3$MAT_C <- coords3$MAT_10/10
Edit: Thanks to advice from Dave2e, I now make a list first, store the intermediate results in it, and bind the rows together once at the end. I haven't timed this yet to see how much faster it is than my original solution. If anyone has further suggestions on how to improve the speed, I'm all ears! Here is the new version:
coordsList <- list()
for(i in seq_along(coordinates$lon_stm)) {
  r <- getData("worldclim", var = "bio", res = 0.5, lon = coordinates[i, 7], lat = coordinates[i, 6]) # Download the tile containing the lon/lat
  r <- r[[c(1, 12)]] # Reduce the RasterStack to just the variables we want (MAT*10 and MAP_mm)
  names(r) <- c("MAT_10", "MAP_mm") # Rename the layers to something intelligible
  points <- SpatialPoints(na.omit(coordinates[i, 7:6]), proj4string = r@crs) # give lon/lat to SpatialPoints
  values <- extract(r, points)
  coordsList[[i]] <- cbind.data.frame(coordinates[i, 7:6], values)
}
coords_new <- dplyr::bind_rows(coordsList)
Edit2: I used system.time() to time the execution of both of the above approaches. When I did the timing, I had already downloaded all of the data, so the download time isn't included in my time estimates. My first approach took 45.01 minutes, and the revised approach took 44.15 minutes, so I'm not really seeing a substantial time savings by doing it the latter way. Still open to advice on how to revise the code so I can improve the speed of the operations!
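Since the timings suggest the per-point work inside the loop, not the rbind, is the bottleneck, one option is to group the points by WorldClim tile and call extract() once per tile, because extract() is vectorized over points. A rough sketch only (untimed; the 30-degree tile size is an assumption about how getData() tiles the 0.5-minute data):
library(raster)
coords <- data.frame(Lon = c(-83.63, 149.12), Lat = c(10.39, -35.31))
# Label each point with the tile it falls on (assumed 30 x 30 degree tiles)
tile_id <- paste(floor(coords$Lon / 30), floor(coords$Lat / 30))
coordsList <- lapply(split(coords, tile_id), function(d) {
  r <- getData("worldclim", var = "bio", res = 0.5, lon = d$Lon[1], lat = d$Lat[1]) # one load per tile
  r <- r[[c(1, 12)]]
  names(r) <- c("MAT_10", "MAP_mm")
  cbind(d, extract(r, d)) # extract all points on this tile in one call
})
coords_new <- do.call(rbind, coordsList)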
I need to get information on the extent, resolution, and cell number of my file. I'm working with raster files.
You can do
library(raster)
# r <- raster("filename") # with a real file
r <- raster() # an empty default raster, used here for illustration
extent(r)
ncell(r)
res(r)
You can read more about these methods in the raster package documentation.
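If you are moving to terra (as in the first question above), the equivalents are ext(), ncell(), and res(). A minimal sketch:
library(terra)
r <- rast() # or rast("filename")
ext(r)   # extent
ncell(r) # number of cells
res(r)   # resolution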
I have a folder with about 100 point shapefiles that are locations obtained while scat sampling an ungulate species. I would like to merge all these point shapefiles into one shapefile in R. All the point data were initially in .gpx format, which I then converted to shapefiles.
I am fairly new to R, so I am very confused about how to do this, and I could not find code that merges or combines more than a few shapefiles. Any suggestions would be much appreciated. Thanks!
Building on @M_Merciless's answer: for long lists you can use
all_schools <- do.call(rbind, shapefile_list)
Or, alternatively, the very fast:
all_schools <- sf::st_as_sf(data.table::rbindlist(shapefile_list))
library(sf)
List all the shapefiles within the folder location:
file_list <- list.files("shapefile/folder/location", pattern = "\\.shp$", full.names = TRUE)
Read all the shapefiles and store them as a list:
shapefile_list <- lapply(file_list, read_sf)
Append the separate shapefiles. In my example there are 4 shapefiles; for a longer list this can probably be improved with a for loop or an apply function.
all_schools <- rbind(shapefile_list[[1]], shapefile_list[[2]], shapefile_list[[3]], shapefile_list[[4]])
Adding a solution that I think is "tidier"
library(fs)
library(sf) # provides st_read
library(tidyverse)
# Getting all file paths
shapefiles <- 'my/data/folder' |>
  dir_ls(recurse = TRUE) |>
  str_subset('\\.shp$')
# Loading all files
sfdf <- shapefiles |>
  map(st_read) |>
  bind_rows()
Admittedly, it is more lines of code, but personally I think the code is much easier to read and comprehend this way.
In R, how can I export a khrud object from the function kernelUD in the package adehabitat to a raster file (GeoTIFF)?
I tried following this thread (R: how to create raster layer from an estUDm object) using the code here:
writeRaster(raster(as(udbis1,"SpatialPixelsDataFrame")), "udbis1.tif")
where udbis1 is a khrud object, but I get: Error in as(udbis1, "SpatialPixelsDataFrame") : no method or default for coercing “khrud” to “SpatialPixelsDataFrame”.
I think the issue may be that the old thread predates an update to the adehabitat package that changed the data format from estUD to khrud. Maybe?
You do not provide a reproducible example. The following works for me:
library(adehabitatHR)
library(raster)
data(puechabonsp)
loc <- puechabonsp$relocs
ud <- kernelUD(loc[, 1])
r <- raster(as(ud[[1]], "SpatialPixelsDataFrame"))
writeRaster(r, filename = file.path(tempdir(), "ud1.tif"))
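If you have switched to terra, the same coercion should carry over, since rast() accepts Raster* objects. A sketch under that assumption, reusing ud from the code above:
library(terra)
r2 <- rast(raster(as(ud[[1]], "SpatialPixelsDataFrame"))) # RasterLayer -> SpatRaster
writeRaster(r2, file.path(tempdir(), "ud1_terra.tif"), overwrite = TRUE)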
The adehabitatHR solutions work well when the data are in the required format or when using multiple animals, but creating a KDE from data organized differently, or for only one animal, can be frustrating. For some reason, @johaness's answer doesn't work for my case, so here is an alternative solution that avoids the headaches of digging into adehabitatHR's innards.
library(adehabitatHR)
library(raster)
# Recreating an example for only one animal
# with a basic xy dataset like one would get from tracking
data(puechabonsp)
loc <- puechabonsp$relocs
loc <- as.data.frame(loc)
loc <- loc[loc$Name == "Brock", ]
coordinates(loc) <- ~X+Y
ud <- kernelUD(loc)
# Extract the UD values and coordinates into a data frame
udval <- data.frame("value" = ud$ud, "lon" = ud@coords[, 1], "lat" = ud@coords[, 2])
coordinates(udval) <- ~lon+lat
# coerce to SpatialPixelsDataFrame
gridded(udval) <- TRUE
# coerce to raster
udr <- raster(udval)
plot(udr)
I have a shapefile with two different regions, each with 12 sub-regions. I want a separate shapefile for each of these 24 sub-regions. I have also tried using the packages maptools and rgeos but could not figure it out. Any algorithm would be very much appreciated. Thanks.
You can split your data in a loop based on the unique values in the column of interest and write out each subset. I am using rgdal in lieu of maptools, but you could easily change the code to use maptools functions for reading/writing shapefiles.
require(sp)
require(rgdal)
# READ SHAPEFILE
dat <- readOGR("C:/DATA", "dat")
# CREATE VECTOR OF UNIQUE SUBREGION VALUES
y <- unique(dat@data$SUBREGIONS)
# CREATE SHAPEFILE FOR EACH SUBREGION AND WRITE OUT
for (i in seq_along(y)) {
  temp <- dat[dat$SUBREGIONS == y[i], ]
  writeOGR(temp, dsn = getwd(), layer = y[i], driver = "ESRI Shapefile",
           overwrite_layer = TRUE)
}
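Note that rgdal (and maptools) were retired in 2023, so here is the same split sketched with sf instead; the SUBREGIONS column and the paths are carried over from the answer above:
library(sf)
dat <- st_read("C:/DATA/dat.shp")
for (s in unique(dat$SUBREGIONS)) {
  # write one shapefile per unique sub-region value
  st_write(dat[dat$SUBREGIONS == s, ], file.path(getwd(), paste0(s, ".shp")), delete_layer = TRUE)
}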