I'm trying to rasterize a large CSV file which contains X and Y coordinates of raster pixels and an ID representing different individuals that have moved through the pixel.
I want two rasters from this operation: in the first, each pixel contains the number of times individuals have crossed it; in the second, each pixel contains the number of unique individuals that have crossed it.
I currently create a raster for each ID and then mosaic the rasters. I've parallelized this operation across 16 cores on a machine with 200 GB of RAM. However, the CSV contains over a million rows, so copying it to each worker seems like a large overhead. The job also takes upwards of 4 days to complete, and I have multiple such CSV files.
Is there a better, faster way to do this?
This is what I do currently:
library(foreach)
library(doSNOW)

n.cores <- 16  # this is half of the available cores
# Register the cluster
cl <- makeCluster(n.cores)
registerDoSNOW(cl)
clusterExport(cl = cl, "fil2")  # fil2 is the CSV
# uq_id is a vector of all unique IDs
rasterList <- foreach(i = 1:length(uq_id), .combine = 'comb',
                      .init = list()) %dopar% {
  library(raster)
  # create a raster for each unique ID
  dfXYID <- subset(fil2, uqid %in% i)
  dfXYID$uqid <- as.numeric(dfXYID$uqid)
  dfXYID <- as.data.frame(dfXYID)
  x <- raster(xmn = min(dfXYID$V1), xmx = max(dfXYID$V1),
              ymn = min(dfXYID$V2), ymx = max(dfXYID$V2),
              res = 220,
              crs = "+proj=utm +zone=43 +datum=WGS84 +units=m +no_defs")
  # count the number of times the individual with ID i has moved through each pixel
  lenList <- rasterize(dfXYID[, c('V1', 'V2')], x, dfXYID[, 'uqid'],
                       fun = function(x, ...) length(na.omit(x)))
  return(lenList)
}
rasterList is then a list of rasters, each representing the movement of individual i. I then have to do another time-consuming operation (can I do this in parallel within the above foreach?): apply a mosaic operation on this large list to get the two raster outputs that I need.
I would be very grateful if anyone has any suggestions on how I can speed this up.
I understand GDAL is an option, but I am constrained to working with a CSV, and I don't want to write each raster in rasterList to disk just to do further processing with GDAL.
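A single-pass alternative might look like the following sketch (assuming fil2 holds columns V1 = x, V2 = y and uqid, and fits in memory; untested at this scale): let rasterize aggregate all points at once instead of building one raster per ID.
library(raster)

r <- raster(xmn = min(fil2$V1), xmx = max(fil2$V1),
            ymn = min(fil2$V2), ymx = max(fil2$V2),
            res = 220, crs = "+proj=utm +zone=43 +datum=WGS84 +units=m +no_defs")

# raster 1: total number of crossings per pixel
crossings <- rasterize(fil2[, c("V1", "V2")], r, field = fil2$uqid,
                       fun = function(x, ...) length(na.omit(x)))

# raster 2: number of distinct individuals per pixel
individuals <- rasterize(fil2[, c("V1", "V2")], r, field = fil2$uqid,
                         fun = function(x, ...) length(unique(na.omit(x))))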
I am trying to download high-resolution climate data for a bunch of lat/long coordinates, and combine them into a single dataframe. I've come up with a solution (below), but it will take forever with the large list of coordinates I have. I asked a related question on the GIS StackExchange to see if anyone knew of a better approach for downloading and merging the data, but I'm wondering if I could somehow just speed up the operation of the loop? Does anyone have any suggestions on how I might do that? Here is a reproducible example:
# Download and merge 0.5 minute MAT/MAP data from WorldClim for a list of lon/lat coordinates
# This is based on https://emilypiche.github.io/BIO381/raster.html
# Make a dataframe with coordinates
coords <- data.frame(Lon = c(-83.63, 149.12), Lat=c(10.39,-35.31))
# Load package
library(raster)
# Make an empty dataframe for dumping data into
coords3 <- data.frame(Lon=numeric(), Lat=numeric(), MAT_10=numeric(), MAP_mm=numeric())
# Get WorldClim data for all the coordinates, and dump into coords3
for(i in seq_along(coords$Lon)) {
r <- getData("worldclim", var="bio", res=0.5, lon=coords[i,1], lat=coords[i,2]) # Download the tile containing the lat/lon
r <- r[[c(1,12)]] # Reduce the layers in the RasterStack to just the variables we want to look at (MAT*10 and MAP_mm)
names(r) <- c("MAT_10", "MAP_mm") # Rename the columns to something intelligible
points <- SpatialPoints(na.omit(coords[i,1:2]), proj4string = r@crs) # give lon,lat to SpatialPoints
values <- extract(r,points)
coords2 <- cbind.data.frame(coords[i,1:2],values)
coords3 <- rbind(coords3, coords2)
}
# Convert MAT*10 from WorldClim into MAT in Celsius
coords3$MAT_C <- coords3$MAT_10/10
Edit: Thanks to advice from Dave2e, I now first make a list, put the intermediate results in it, and bind the rows together at the end. I haven't timed this yet to see how much faster it is than my original solution. If anyone has further suggestions on how to improve the speed, I'm all ears! Here is the new version:
library(dplyr)  # for bind_rows
coordsList <- list()
for(i in seq_along(coordinates$lon_stm)) {
r <- getData("worldclim", var="bio", res=0.5, lon=coordinates[i,7], lat=coordinates[i,6]) # Download the tile containing the lat/lon
r <- r[[c(1,12)]] # Reduce the layers in the RasterStack to just the variables we want to look at (MAT*10 and MAP_mm)
names(r) <- c("MAT_10", "MAP_mm") # Rename the columns to something intelligible
points <- SpatialPoints(na.omit(coordinates[i,7:6]), proj4string = r@crs) # give lon,lat to SpatialPoints
values <- extract(r,points)
coordsList[[i]] <- cbind.data.frame(coordinates[i,7:6],values)
}
coords_new <- bind_rows(coordsList)
Edit2: I used system.time() to time the execution of both of the above approaches. When I did the timing, I had already downloaded all of the data, so the download time isn't included in my time estimates. My first approach took 45.01 minutes, and the revised approach took 44.15 minutes, so I'm not really seeing a substantial time savings by doing it the latter way. Still open to advice on how to revise the code so I can improve the speed of the operations!
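One further idea, as a sketch only: since getData downloads/loads one WorldClim 0.5' tile per call, points could be grouped by tile so each tile is processed once rather than once per point. This assumes the 0.5' tiles are 30-degree blocks aligned to multiples of 30, which is only an approximation of what getData does internally:
library(raster)

coords$tile <- paste(floor(coords$Lon / 30), floor(coords$Lat / 30))

tileList <- lapply(split(coords, coords$tile), function(d) {
  r <- getData("worldclim", var = "bio", res = 0.5, lon = d$Lon[1], lat = d$Lat[1])
  r <- r[[c(1, 12)]]
  names(r) <- c("MAT_10", "MAP_mm")
  cbind(d[, c("Lon", "Lat")], extract(r, d[, c("Lon", "Lat")]))
})
coords_new <- do.call(rbind, tileList)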
I would like to rasterize a very large vector file to 25 m and have had some success with the 'cluster' package, adapting the questions here and here, which worked nicely for that particular data.
However, I now have a larger vector file that needs rasterizing and have access to a cluster that uses snowfall. I'm not used to cluster functions and I'm just not sure how to set up sfLapply. I am consistently getting the following sort of error when sfLapply is called on the cluster:
Error in checkForRemoteErrors(val) :
one node produced an error: 'quote(96)' is not a function, character or symbol
Calls: sfLapply ... clusterApply -> staticClusterApply -> checkForRemoteErrors
My full code:
library(snowfall)
library(rgeos)
library(maptools)
library(raster)
library(sp)
setwd("/home/dir/")
# Initialise the cluster...
hosts = as.character(read.table(Sys.getenv('PBS_NODEFILE'),header=FALSE)[,1]) # read the nodes to use
sfSetMaxCPUs(length(hosts)) # make sure the maximum allowed number of CPUs matches the number of hosts
sfInit(parallel=TRUE, type="SOCK", socketHosts=hosts, cpus=length(hosts), useRscript=TRUE) # initialise a socket cluster session with the named nodes
sfLibrary(snowfall)
# read in required data
shp <- readShapePoly("my_data.shp")
BNG <- "+proj=tmerc +lat_0=49 +lon_0=-2 +k=0.9996012717 +x_0=400000 +y_0=-100000 +ellps=airy +datum=OSGB36 +units=m +no_defs"
crs(shp) <- BNG
### rasterize the uniques to 25m and write (GB and clipped) ###
rw <- raster(res=c(25,25), xmn=0, xmx=600000, ymn=0, ymx=1000000, crs=BNG)
# Number of polygon features in the SPDF
features <- 1:nrow(shp[,])
# Split features in n parts
n <- 96
parts <- split(features, cut(features, n))
rasFunction = function(X, shape, raster, nparts){
  ras = rasterize(shape[nparts[[X]],], raster, 'CODE')
  return(ras)
}
# Export everything in the workspace onto the cluster...
sfExportAll()
# Distribute calculation across the cluster nodes...
rDis = sfLapply(n, fun=rasFunction,X=n, shape=shp, raster=rw, nparts=parts) # equivalent of sapply
rMerge <- do.call(merge, rDis)
writeRaster(rMerge, filename="my_data_25m", format="GTiff", overwrite=TRUE)
# Stop the cluster...
sfStop()
I've tried a number of things, changing both the function and the sfLapply call, but I just can't get this to run. Thanks.
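For reference, the usual sfLapply pattern iterates over the part indices themselves (like lapply over 1:n) and exports the objects the workers need; a minimal sketch, untested against this data:
sfLibrary(raster)
sfExport("shp", "rw", "parts")
rDis <- sfLapply(1:n, function(i) rasterize(shp[parts[[i]], ], rw, 'CODE'))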
Because I can't do formatting in comments:
library(maptools)
library(rgdal)  # provides spTransform for Spatial* objects
shp <- readShapePoly("my_data.shp")
BNG <- "+proj=tmerc +lat_0=49 +lon_0=-2 +k=0.9996012717 +x_0=400000 +y_0=-100000 +ellps=airy +datum=OSGB36 +units=m +no_defs"
shp.2 <- spTransform(shp, CRS(BNG))
#Continue as before
Overwriting a projection != reprojecting data.
OK, so I abandoned snowfall and looked into gdalUtils::gdal_rasterize instead, and found a lot of benefits to using it (with one downside that someone might be able to answer; see the EDIT at the end).
Context & issue: my vector data exist inside an ESRI File Geodatabase and require some processing before rasterization. No problem, rgdal::readOGR is fine. However, gdal_rasterize requires a path to the vector data, which caused trouble here: I could not write out my processed vector data, because they exceed the maximum file size for a shapefile outside a geodatabase, and gdal_rasterize will not accept R objects, paths to .gdb datasets, or .Rdata/.rds files. How do I pass an object to gdal_rasterize?
So I wrote out the large shapefile in segments, the same number as there are processors.
Originally I used raster::rasterize, since I could simply pass the in-memory vector object to it without the writing problem (though I would have liked to have it written out), rasterizing the data to 25 m. This took a long time, even in parallel.
Solution: gdal_rasterize in parallel.
# gdal_rasterize in parallel
require(gdalUtils)
require(rgdal)
require(rgeos)
require(cluster)
require(parallel)
require(raster)
# read in vector data
shape <- readOGR("./mygdb.gdb", layer="mydata",stringsAsFactors=F)
## do all the vector processing etc ##
# split vector data into n parts, the same as number of processors (minus 1)
npar <- detectCores() - 1
features <- 1:nrow(shape[,])
parts <- split(features, cut(features, npar))
# write the vector parts out
for(n in 1:npar){
writeOGR(shape[parts[[n]],], ".\\parts", paste0("mydata_p",n), driver="ESRI Shapefile")
}
# set up and write a blank raster for gdal_rasterize for EACH vector segment created above
r <- raster(res=c(25,25), xmn=234000, xmx=261000, ymn=229000, ymx=256000, crs=projection(shape))
for(n in 1:npar){
writeRaster(r, filename=paste0(".\\gdal_p",n,".tif"), format="GTiff", overwrite=TRUE)
}
# set up cluster and pass required packages and objects to cluster
cl <- makeCluster(npar)
clusterEvalQ(cl, sapply(c('raster', 'gdalUtils',"rgdal"), require, char=TRUE))
clusterExport(cl, list("r","npar"))
# parallel apply the gdal_rasterize function against the vector parts that were written,
# same number as processors, against the pre-prepared rasters
parLapply(cl = cl, X = 1:npar,
          fun = function(x) gdal_rasterize(src_datasource = paste0(".\\parts\\mydata_p", x, ".shp"),
                                           dst_filename = paste0(".\\gdal_p", x, ".tif"),
                                           b = 1, a = "code", verbose = F, output_Raster = T))
# There are now n rasters representing the n segments of the original vector file
# read in the rasters as a list, merge and write to a new tif.
s <- lapply(X = 1:npar, function(x) raster(paste0(".\\gdal_p", x, ".tif")))
s$filename <- "myras_final.tif"
do.call(merge,s)
stopCluster(cl)
The entire job with this code (roughly 60% of the time in vector reading/processing/writing and 40% in raster generation and rasterization) was around 9 times quicker than raster::rasterize in parallel.
Note: I initially tried splitting the vectors into n parts but creating only one blank raster. I then wrote to the same blank raster from all cluster nodes simultaneously, but this corrupted the raster and made it unusable in R/Arc/anything (despite the function completing without error). The approach above is more stable, but n blank rasters have to be made instead of one, which increases processing time, and merging n rasters is extra processing.
Caveat: in the parallel raster::rasterize run, writeRaster was not called inside rasterize but as a separate line, which will have increased processing time in the original run due to storage in temporary files, etc. (see the sketch below).
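For reference, rasterize can write its result straight to disk via its filename argument, which avoids the separate writeRaster step; a minimal sketch (the output file name is illustrative):
ras <- rasterize(shape[parts[[1]], ], r, 'CODE',
                 filename = "mydata_p1_25m.tif", format = "GTiff", overwrite = TRUE)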
EDIT: Why are the frequency tables from the gdal_rasterize raster not the same as those from raster::rasterize? With 100 million cells I expect a bit of difference, but for some codes it was a few thousand cells different. I thought they both rasterized by centroid?
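If it helps frame that comparison, the per-code cell counts can be tabulated with freq() and joined by value; a small sketch (ras_gdal and ras_raster are hypothetical names for the two merged outputs):
f_gdal <- as.data.frame(freq(ras_gdal, useNA = "no"))
f_ras  <- as.data.frame(freq(ras_raster, useNA = "no"))
merge(f_gdal, f_ras, by = "value", suffixes = c("_gdal", "_rasterize"))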
I have 500+ points in a SpatialPointsDataFrame object; I have a 1.7GB (200,000 rows x 200,000 cols) raster object. I want to have a tabulation of the values of the raster cells within a buffer around each of the 500+ points.
I have managed to achieve that with the code below (I got a lot of inspiration from here). However, it is slow to run and I would like to make it run faster. It actually runs OK for buffers with "small" widths, say 5 km or even 15 km (~1 million cells), but it becomes super slow when the buffer increases to, say, 100 km (~42 million cells).
I could easily improve on the loop below by using something from the apply family and/or a parallel loop. But my suspicion is that it is slow because the raster package writes 400 MB+ temporary files for each iteration of the loop.
# packages
library(rgeos)
library(raster)
library(rgdal)
myPoints = readOGR(points_path, 'myLayer')
myRaster = raster(raster_path)
myFunction = function(polygon_obj, raster_obj) {
  # this function returns a tabulation of the values of raster cells
  # inside a polygon (buffer)

  # crop to the extent of the polygon
  clip1 = crop(raster_obj, extent(polygon_obj))
  # crop to the polygon edge & convert to raster
  clip2 = rasterize(polygon_obj, clip1, mask = TRUE)
  # much faster than extract
  ext = getValues(clip2)
  # tabulate the values of the raster in the polygon
  tab = table(ext)
  return(tab)
}
# loop over the points
ids = unique(myPoints$ID)
for (id in ids) {
  # select the point
  myPoint = myPoints[myPoints$ID == id, ]
  # create the buffer
  myPolygon = gBuffer(spgeom = myPoint, byid = FALSE, width = myWidth)
  # extract the data I want (projections, etc. are fine)
  tab = myFunction(myPolygon, myRaster)
  # do stuff with tab ...
}
My questions:
Am I right to partially blame the writing operations? If I managed to avoid all those writing operations, would this code run faster? I have access to a machine with 32 GB of RAM, so I guess it is safe to assume I could load the raster into memory and would not need to write temporary files?
What else could I do to improve efficiency in this code?
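For reference, the parallel variant mentioned above might look like the sketch below; the worker count and re-creating myRaster from raster_path on each worker (so the large raster object itself is not shipped) are assumptions:
library(parallel)

cl <- makeCluster(4)
clusterExport(cl, c("myPoints", "raster_path", "myWidth", "myFunction"))
clusterEvalQ(cl, { library(raster); library(rgeos) })

tabs <- parLapply(cl, unique(myPoints$ID), function(id) {
  myRaster <- raster(raster_path)   # reads only the header, not the cell values
  myPoint <- myPoints[myPoints$ID == id, ]
  myPolygon <- gBuffer(spgeom = myPoint, byid = FALSE, width = myWidth)
  myFunction(myPolygon, myRaster)
})
stopCluster(cl)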
I think you should approach it like this
library(raster)
library(rgdal)
myPoints <- readOGR(points_path, 'myLayer')
myRaster <- raster(raster_path)
e <- extract(myRaster, myPoints, buffer=myWidth)
And then something like
etab <- sapply(e, table)
It is hard to answer your question #1, as we do not know enough about your data (we do not know how many cells are covered by a "100 km" buffer). But you can set options about when to write to file with the rasterOptions function. You suggest that getValues is faster than extract, based on the post you link to, but I think that is wrong, or at least not very important. The combination of crop, rasterize and getValues should have similar performance to extract (which does almost exactly that under the hood). If you go this route anyway, you should pass an empty RasterLayer, created with raster(myRaster), for faster cropping.
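A small sketch of those two suggestions (the rasterOptions values are illustrative, not recommendations):
library(raster)

# control when raster falls back to writing temporary files
rasterOptions(maxmemory = 1e9, chunksize = 1e8)

# an empty RasterLayer with the same geometry as myRaster (no cell values),
# as suggested above for faster cropping
template <- raster(myRaster)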
I have a long list of rasters and I need to take an average of a certain number of them and create a new image. For example, suppose I have rasters r1 r2 r3 r4 r5 r6 r7 r8.
I want to take an average of r1 and r2 to give me an image, let's say new1, then an average of r3 and r4 to give me image new2. I tried using runmean from caTools but couldn't get the output I needed. If I have 8 raster images, then using a window of two should leave me with four raster images. I know raster questions usually belong on the GIS site, but I needed help with the code, so I hope it is OK here.
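A compact alternative to a manual loop is raster::stackApply with an index vector that marks which layers belong to which output; a minimal sketch, assuming the eight rasters are already in memory as r1..r8:
library(raster)

s <- stack(r1, r2, r3, r4, r5, r6, r7, r8)
idx <- rep(1:(nlayers(s) / 2), each = 2)               # 1 1 2 2 3 3 4 4
pairMeans <- stackApply(s, indices = idx, fun = mean)  # 4 layers: new1..new4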
Supposing you have all the rasters in one folder, rasdir (and nothing else in that folder but the rasters to loop over), set up the input paths:
library(raster)
rasdir <- "myrasters/"
raspaths <- list.files(path = rasdir, full.names = TRUE)
Assuming all rasters have the same extent and resolution, they can be stacked:
rascube <- stack(raspaths)
Create a function that computes some summary (e.g. the mean) across bands, where rascube is the stack of images to loop over, win is the window size, and outdir is the output directory:
rasfun <- function(x = rascube, win = 2, outdir = getwd()){
  # Sanity check: the number of rasters must be divisible by the window size
  if(!(nlayers(x) / win) %% 1 == 0){stop("Number of rasters must be divisible by window size")}
  # Make sure the output directory exists
  if(!dir.exists(outdir)){dir.create(outdir, recursive = TRUE)}
  # Create 'mat', an index that determines how rasters in 'rascube' are aggregated:
  # indices in the same row refer to rasters to be averaged into the ith output.
  mat <- matrix(data = 1:nlayers(x), ncol = win, byrow = TRUE)
  # Loop over 'rascube', calculating the moving average as controlled by 'mat'
  for (i in 1:nrow(mat)){
    # Compute the ith moving mean; you can alter this to compute a moving "whatever you like".
    # Note the use of [[ ]] to subset raster bands (see the raster documentation),
    # and of na.rm = TRUE, in case your images have NAs you don't care about.
    res_i <- sum(x[[ mat[i, 1]:mat[i, win] ]], na.rm = TRUE) / win
    # Write the output to file; the filename is derived from the respective input rasters,
    # which makes it possible to trace each output back to its source rasters.
    writeRaster(x = res_i, filename = file.path(outdir, paste(mat[i, 1:win], collapse = "")),
                format = "GTiff", overwrite = TRUE)
  }
}
#Run newly created function on your stack of inputs with whatever args:
rasfun(x=rascube, win=2, outdir="moving_mean_rasters/")
Note: the number of rasters must be divisible by the window size; e.g. attempting to run a moving-window computation on 7 rasters with a window size of 2 will fail the sanity check. Of course, you can alter the function to behave however is best for your use case. Cheers!
I am writing code to calculate the mean amount of precipitation for different regions of the conterminous USA. My data is a 300 x 120 (lon x lat) grid in NetCDF format. I want to write a loop in R that takes the average of each 10 x 10 block of grid cells, assigns that value (the average) to all of the cells inside the region, and repeats this for the next region. At the end, instead of a 120 x 300 grid I will have a 12 x 30 grid. So this is a kind of upscaling I want to apply to my data. I could use a for-loop for each region separately, but that makes my code very long and I don't want to do that. Any ideas would be appreciated. Thanks.
P.S.: Here is the function I have written for one region (10 x 10 lat x lon).
upscaling <- function(file, variable, start.time = 1, count.time = 1)
{
  library(ncdf)                    # load ncdf library to manipulate netCDF data
  ncdata <- open.ncdf(file)        # open the netCDF file
  lon <- get.var.ncdf(ncdata, "lon")
  lat <- get.var.ncdf(ncdata, "lat")
  time <- get.var.ncdf(ncdata, "time")
  start.lon <- 1
  end.lon <- length(lon)
  start.lat <- 1
  end.lat <- length(lat)
  count.lon <- end.lon - start.lon + 1   # number of longitudes
  count.lat <- end.lat - start.lat + 1   # number of latitudes
  dat <- get.var.ncdf(ncdata, variable, start = c(start.lon, start.lat, start.time),
                      count = c(count.lon, count.lat, count.time))
  temp.data <- array(0, dim = c(10, 10))
  for (i in 1:10)
  {
    for (j in 1:10)
    {
      temp.data[i, j] <- mean(dat[i, j, ])
    }
  }
  return(temp.data)
}
There is no need to make a messy loop to spatially aggregate your data. Just use the aggregate function in the raster package:
library(raster)
a=matrix(data=c(1:100),nrow=10,ncol=10)
a=raster(a)
ra <- aggregate(a, fact=5, fun=mean) #fact=5 will aggregate using a 5x5 window
ra=as.matrix(ra)
ra
Now, for your NetCDF data, use raster's rasterFromXYZ to create the raster, which can then be aggregated with the above method. As a bonus, you have the option to define your projection as an argument in the function, so you end up with a georeferenced object at the end. This is important: if you aggregate your data without it, you will have to figure out by hand how to georeference the resulting matrix.
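A rough sketch of that route (the column construction and the CRS string are assumptions about the data; dat here is a single lon x lat slice read from the NetCDF file):
library(raster)

xyz <- expand.grid(x = lon, y = lat)       # lon/lat vectors read from the file
xyz$z <- as.vector(dat)                    # one time slice, matching expand.grid's ordering
r <- rasterFromXYZ(xyz, crs = "+proj=longlat +datum=WGS84")
ra <- aggregate(r, fact = 10, fun = mean)  # 300 x 120 -> 30 x 12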
EDIT: If you want a resulting raster with the same dimensions as the original one, disaggregate the data right after aggregating it. While this seems redundant, these raster methods are very fast.
library(raster)
a=matrix(data=c(1:100),nrow=10,ncol=10)
a=raster(a)
ra <- aggregate(a, fact=5, fun=mean) #fact=5 will aggregate using a 5x5 window
ra <- disaggregate(ra, fact=5)
ra=as.matrix(ra)
ra
If your grid definitions follow standard NetCDF conventions, then you might be able to remap using the CDO remapping functions. For first-order conservative remapping, you can try:
cdo remapcon,grid_specification_here in.nc out.nc
Note that the R answer given above is approximate and not quite correct, as grid-cell area varies with latitude. The size of the error is likely small for this particular task because the cells are fine, but the answer will nevertheless be slightly off.
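For illustration, the grid specification is a small text file describing the target grid; a hypothetical 30 x 12 lon/lat target over the conterminous US might look like this (all values are illustrative, not taken from the question's data):
gridtype = lonlat
xsize    = 30
ysize    = 12
xfirst   = -124.0
xinc     = 2.0
yfirst   = 25.0
yinc     = 2.0
Saved as, e.g., target_grid.txt, it would be used as cdo remapcon,target_grid.txt in.nc out.nc.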