Read specific raster files and create a mean raster in R

I am desperate, because my problem seems very simple, but I cannot find out how to manage it.
Aim:
I would like to read 1 to 4 raster files from a folder. The names of the ones I need are stored in a character list.
After having opened the files, I would like to create a new raster corresponding to the mean of the files.
I can manage it in QGIS, but I need to automate the process, as I have a lot of individuals!
1) It should work with list.files(pattern = ), but as the names are in a list, I do not know how to do it.
Ex: for the first individual, I have to read 2 files named 2018-12-27_sic.tif and 2018-12-27_sic_con.tif
I tried to read them with readGDAL and open.GDAL, but it didn't work.
Thanks a lot for your valuable help.

I would use the stack and calc functions from the raster package. The function stack creates a stack of rasters, all with the same resolution and extent, and makes it easy to do operations like take the mean of every cell. So:
library(raster)
# List every GeoTIFF in the working directory
fs <- list.files(pattern='tif$')
# Stack them and take the cell-wise mean
rasterstack <- stack(fs)
rastermean <- calc(rasterstack, fun=mean)
Note, if your rasters are not the same resolution, you will have to use the resample function, and if they are not the same extent, you will have to use crop. Typing in ?resample and ?crop in RStudio will show you instructions for using those functions.
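Since the question already has the required names in a character vector, you can also pass that vector to stack() directly instead of using a pattern. A minimal sketch, assuming the names are paths relative to the working directory (the two input names are the ones from the question; the output name is hypothetical):
library(raster)
# Files for the first individual, taken from the question
wanted <- c("2018-12-27_sic.tif", "2018-12-27_sic_con.tif")
# stack() accepts a character vector of file names directly
s <- stack(wanted)
m <- calc(s, fun = mean)
# Write the mean raster to disk (hypothetical output name)
writeRaster(m, "2018-12-27_sic_mean.tif", overwrite = TRUE)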

Related

How do I use a for loop to open .ncdf files and average a matrix variable that has different values over all the files? (Using R Programming)

I'm currently trying to compute an averaged matrix of the values of a specific air quality variable (ColumnAmountNO2TropCloudScreened) stored across different .nc4 files. The only way I was able to do it was listing all the files, opening them with lapply, creating a separate NO2 variable for every .nc4 file, and then applying abind to all of the variables. Even though that worked, it took a lot of time to type in different names for the NO2 variables (NO2_1, NO2_2, NO2_3, etc.) and the index of each file in the original list ([[1]], [[2]], [[3]], etc.).
I am trying to write code that's smarter and easier than just typing in a bunch of numbers. I have all the original .nc4 files listed, and am trying to loop over the files to open them and get the ColumnAmountNO2TropCloudScreened matrix from each, so that I can then average them. However, I am having no luck. Would someone know what is wrong with this code or my thinking behind it? Thanks.
The code I'm trying is as follows:
# Load libraries
library(ncdf4)
library(abind)
library(plot.matrix)
# Set working directory
setwd("~/researchdatasets/2020")
# Declare data frame
df=NULL
# List all files in one file
files1 = list.files(pattern='\\.nc4$', full.names=FALSE)
# Loop to open files, get NO2 variables
for (i in seq(along=files1)) {
  nc_data = nc_open(files1[i])
  NO2_var <- ncvar_get(nc_data, 'ColumnAmountNO2TropCloudScreened')
  nc_close(nc_data)
}
# Average variables
list_NO2 = apply(abind::abind(NO2_var, along=3), 1:2, mean, na.rm=TRUE)
NCO's ncra averages variables across all input files with, e.g.,
ncra in*.nc out.nc
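As for the R loop itself: it overwrites NO2_var on every iteration, so only the last file survives to the abind step. A corrected sketch that accumulates each file's matrix in a list before averaging (assuming every file holds a matrix of the same dimensions):
library(ncdf4)
library(abind)
files1 <- list.files(pattern = '\\.nc4$')
# Collect one matrix per file instead of overwriting a single variable
NO2_list <- vector("list", length(files1))
for (i in seq_along(files1)) {
  nc_data <- nc_open(files1[i])
  NO2_list[[i]] <- ncvar_get(nc_data, 'ColumnAmountNO2TropCloudScreened')
  nc_close(nc_data)
}
# Bind along a third dimension and take the cell-wise mean across files
NO2_mean <- apply(abind(NO2_list, along = 3), 1:2, mean, na.rm = TRUE)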

Explaining Simple Loop in R

I successfully wrote a for loop in R. That is okay and I am very happy that it works. But I also want to understand what I've done exactly because I will have to work with loops later on in my analysis as well.
I work with Raster Data (DEMs). I load them into the environment as rasters and then I use the getValues function in the loop as I want to do some calculations. Looks as follows:
list <- dir(pattern=".tif", full.names=T)
tif.files <- list()
tif.files.values <- tif.files
for (i in 1:length(list)) {
  tif.files[[i]] <- raster(list[[i]])
  tif.files.values[[i]] <- getValues(tif.files[[i]])
}
Okay, so far so good. What I don't get is why I have to create tif.files and tif.files.values before I use them in the loop, and why they have to be initialized exactly the way I did it. For the first part, the raster operation, I had a pattern to follow. Maybe someone can explain the context. I really want to understand R.
When you do:
tif.files[[i]] <- raster(list[[i]])
then tif.files[[i]] is the result of running raster(list[[i]]), so that is storing the raster object. This object contains the metadata (extent, number of rows, cols etc) and the data, although if the tiff is huge it doesn't actually read it in at the time.
tif.files.values[[i]] <- getValues(tif.files[[i]])
that line calls getValues on the raster object, which reads the values from the raster and returns a vector. The values of the grid cells are now in tif.files.values[[i]].
Experiment by printing tif.files[[1]] and tif.files.values[[1]] at the R prompt.
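As to why the lists must be created first: tif.files[[i]] <- ... assigns into an existing object, so R needs tif.files to already exist as a list before you can index into it. A quick demonstration at the prompt:
x <- list()      # must exist before indexed assignment
x[[1]] <- "a"    # works
# y[[1]] <- "a"  # fails: object 'y' not found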
Note
This is R, not RStudio, which is the interface you are using that has all the buttons and menus. The R language exists quite happily without it, and your question is just a language question. I've edited and tagged it now for you.

Extracting point data from a large shape file in R

I'm having trouble extracting point data from a large shape file (916.2 Mb, 4618197 elements - from here: https://earthdata.nasa.gov/data/near-real-time-data/firms/active-fire-data) in R. I'm using readShapeSpatial from maptools to read in the shape file, which takes a while but eventually works:
worldmap <- readShapeSpatial("shp_file_name")
I then have a data.frame of coordinates that I want to extract data for. However, R is really struggling with this and either loses connection or freezes, even with just one set of coordinates!
pt <-data.frame(lat=-64,long=-13.5)
pt<-SpatialPoints(pt)
e<-over(pt,worldmap)
Could anyone advise me on a more efficient way of doing this?
Or is it the case that I need to run this script on something more powerful (currently using a mac mini with 2.3 GHz processor)?
Many thanks!
By 'point data' do you mean the longitude and latitude coordinates? If that's the case, you can obtain the data underlying the shapefile with:
worldmap@data
You can view this in the same way you would any other data frame, for example:
View(worldmap@data)
You can also access columns in this data frame in the same way you normally would, except you don't need the @data, e.g.:
worldmap$LATITUDE
Finally, it is recommended to use readOGR from the rgdal package rather than maptools::readShapeSpatial as the former reads in the CRS/projection information.
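A minimal sketch of the same extraction via readOGR, assuming the shapefile sits in the working directory under the (hypothetical) layer name fire_data; because the CRS is read in, the query point can be given the same projection before calling over():
library(rgdal)
library(sp)
# Read the shapefile together with its CRS (layer name is hypothetical)
worldmap <- readOGR(dsn = ".", layer = "fire_data")
# Build the query point, x (longitude) first, with the polygons' CRS
pt <- SpatialPoints(data.frame(long = -13.5, lat = -64),
                    proj4string = CRS(proj4string(worldmap)))
# Attributes of the polygon containing the point
e <- over(pt, worldmap)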

Handling multiple raster files and executing unit conversions on them: R

I've dug around a lot for an answer to this and wasn't able to find anything, so here I am.
I have a whole bunch of ascii raster files corresponding to air temperature and dew point temperature of a certain area over 744 hourly time steps. (So I have 744 air temp and 744 dew point files corresponding to a 31-day month). The files are only about 45 kB each.
I want to stack them together so I can perform some analyses on them, and I also want to convert their units from K to deg F.
The file names are Tair1.txt, Tair2.txt, ... Tair744.txt and Eair1.txt, Eair2.txt, ... Eair744.txt.
Using the raster package, I can easily load all the files as rasters:
for (i in 1:744) {
  assign(paste0("Tair",i), raster(paste0("Tair",i,".txt")))
  assign(paste0("Eair",i), raster(paste0("Eair",i,".txt")))
}
I've tried using ls() with pattern or glob2rx to select just the raster object names and then run conversions on them, or to join them in a stack in a similar way, but to no avail. I also tried mget, values(mget(filename)) and the like to get at the values in a loop.
I know R doesn't handle large datasets very well, but I'm thinking these aren't really that large so there should be something pretty simple?
I would appreciate any help and advice! Thank you.
The raster package's RasterStack is for this:
library(raster)
files <- paste0("Tair",1:744,".txt")
rs <- stack(files)
Why do you have these files in text format though? Who imposed this disaster on you? I suspect your individual layers have insufficient metadata, so try one and see if it's sensible. You can use extent(rs) <- and projection(rs) <- to fix:
r <- raster(files[1])
print(r)
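If the metadata does turn out to be missing, a sketch of patching it in (the extent values and CRS string below are hypothetical placeholders):
# Hypothetical extent (xmin, xmax, ymin, ymax) and CRS
extent(rs) <- extent(-100, -90, 40, 45)
projection(rs) <- "+proj=longlat +datum=WGS84"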
Don't use assign(); that's just creating a mess.
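On the unit conversion the question also asked about: arithmetic on a RasterStack applies cell-wise to every layer, so (assuming the values really are in kelvin) a sketch would be:
library(raster)
tair <- stack(paste0("Tair", 1:744, ".txt"))
eair <- stack(paste0("Eair", 1:744, ".txt"))
# Kelvin to degrees Fahrenheit, cell-wise across all 744 layers
tair_F <- (tair - 273.15) * 9/5 + 32
eair_F <- (eair - 273.15) * 9/5 + 32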

Extract certain values out of netCDF

I have a netCDF file with 3 dimensions. The first dimension is the longitude and runs from 1 to 464. The second dimension is the latitude and runs from 1 to 201. The third dimension is time and runs from 1 to 5479.
Now I want to extract certain values out of the file. I think one can handle it with the start argument. I tried this command.
test = open.ncdf("rr_0.25deg_reg_1980-1994_v8.0.nc")
data = get.var.ncdf(test,start=c(1:464,1:201,1:365))
But somehow it doesn't work. Does anybody have a solution?
Thanks in advance...
It looks like you are using the ncdf package in R. If you can, I recommend using the updated ncdf4 package instead, which is based on Unidata's netCDF-4 library.
Back to your problem. I use the ncdf4 package, but I think the ncdf package works the same way. When you call the function get.var.ncdf, you also need to explicitly supply the name of the variable that you want to extract. I think you can get the names of the variables using names(test$var).
So you need to do something like this:
# Open the nc file
test = open.ncdf("rr_0.25deg_reg_1980-1994_v8.0.nc")
# Now get the names of the variables in the nc file
names(test$var)
# Get the data from the first variable listed above
# (May not fit in memory)
data = get.var.ncdf(test,varid=names(test$var)[1])
# If you only want a certain range of data.
# The following will probably not fit in memory either
# data = get.var.ncdf(test,varid=names(test$var)[1])[1:464,1:201,1:365]
For your problem, you would need to replace varid=names(test$var)[1] above with varid='VARIABLE_NAME', where VARIABLE_NAME is the variable you want to extract.
Hope that helps.
EDIT:
I installed the ncdf package on my system, and the above code works for me!
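If you only need a slice, both packages can also read just that part from disk through the start and count arguments, rather than subsetting after reading everything. A sketch using the ncdf4 names, assuming the variable's dimensions are ordered lon, lat, time:
library(ncdf4)
nc <- nc_open("rr_0.25deg_reg_1980-1994_v8.0.nc")
# Read all 464 x 201 cells, but only the first 365 time steps
sub <- ncvar_get(nc, names(nc$var)[1],
                 start = c(1, 1, 1),
                 count = c(464, 201, 365))
nc_close(nc)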
You could also do the extraction of timesteps/dates and locations outside of R, before reading the data into R for plotting etc., by using CDO. This has the advantage that you can work directly in coordinate space and specify timesteps or dates directly:
e.g.
cdo seldate,20100101,20121031 in.nc out.nc
cdo sellonlatbox,lon1,lon2,lat1,lat2 in.nc out.nc
