I've dug around a lot for an answer to this and wasn't able to find anything, so here I am.
I have a whole bunch of ASCII raster files corresponding to air temperature and dew point temperature of a certain area over 744 hourly time steps (so I have 744 air temp and 744 dew point files, corresponding to a 31-day month). The files are only about 45 kB each.
I want to stack them together so I can perform some analyses on them, and I also want to convert their units from K to deg F.
The file names are Tair1.txt, Tair2.txt, ..., Tair744.txt and Eair1.txt, Eair2.txt, ..., Eair744.txt.
Using the raster package, I can easily load all the files as rasters:
for (i in 1:744) {
  assign(paste0("Tair", i), raster(paste0("Tair", i, ".txt")))
  assign(paste0("Eair", i), raster(paste0("Eair", i, ".txt")))
}
I've tried using ls() with a pattern (or glob2rx) to select just the raster object names and then run the conversion on them, or to join them into a stack, but to no avail. I also tried mget, values(mget(filename)), and things like that to get at the values in a loop.
I know R doesn't handle large datasets very well, but I'm thinking these aren't really that large so there should be something pretty simple?
I would appreciate any help and advice! Thank you.
The raster package's RasterStack is for this:
library(raster)
files <- paste0("Tair",1:744,".txt")
rs <- stack(files)
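Arithmetic on a Raster* object is applied cell-wise to every layer, so the Kelvin-to-Fahrenheit conversion you mention should be a one-liner on the whole stack (the Eair files can be stacked the same way):
rs_f <- (rs - 273.15) * 9/5 + 32  # K to deg F, applied across all 744 layers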
Why do you have these files in text format though? Who imposed this disaster on you? I suspect your individual layers have insufficient metadata, so try one and see if it's sensible. You can use extent(rs) <- and projection(rs) <- to fix them:
r <- raster(files[1])
print(r)
Don't use assign(); that's just creating a mess.
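If you do want the individual layers as separate objects, a list keeps them addressable without cluttering the global environment; a minimal sketch:
tair <- lapply(paste0("Tair", 1:744, ".txt"), raster)
# tair[[i]] is the i-th layer; no Tair1 ... Tair744 objects needed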
I'm currently trying to compute an averaged matrix over all matrix values of a specific air quality variable (ColumnAmountNO2TropCloudScreened) stored in different .nc4 files. The only way I was able to do it was listing all the files, opening them using lapply, creating a separate NO2 variable for every .nc4 file, and then applying abind to all of the variables. Even though I was able to do it, it took me a lot of time to type in different names for the NO2 variables (NO2_1, NO2_2, NO2_3, etc.) and the index of each file in the original list ([[1]], [[2]], [[3]], etc.).
I am trying to write code that's smarter and easier than just typing in a bunch of numbers. I have all the original .nc4 files listed, and am trying to loop over the files to open them and get the 'ColumnAmountNO2TropCloudScreened' matrix from each, so that I can then average them. However, I am having no luck. Would someone know what is wrong with this code or my thinking behind it? Thanks.
This is the code I'm trying:
# Load libraries
library(ncdf4)
library(abind)
library(plot.matrix)

# Set working directory
setwd("~/researchdatasets/2020")

# Declare data frame
df <- NULL

# List all files in the directory
files1 <- list.files(pattern = '\\.nc4$', full.names = FALSE)

# Loop to open files and get the NO2 variable
for (i in seq_along(files1)) {
  nc_data <- nc_open(files1[i])
  NO2_var <- ncvar_get(nc_data, 'ColumnAmountNO2TropCloudScreened')
  nc_close(nc_data)
}

# Average variables
list_NO2 <- apply(abind::abind(NO2_var, along = 3), 1:2, mean, na.rm = TRUE)
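As written, the loop overwrites NO2_var on every iteration, so only the last file's matrix survives and abind has nothing to bind together. A minimal fix, keeping the same variable name and file pattern, is to collect each matrix into a list and bind once after the loop:
library(ncdf4)
library(abind)

files1 <- list.files(pattern = '\\.nc4$')
NO2_list <- vector("list", length(files1))
for (i in seq_along(files1)) {
  nc_data <- nc_open(files1[i])
  # store each matrix in its own list slot instead of overwriting one object
  NO2_list[[i]] <- ncvar_get(nc_data, 'ColumnAmountNO2TropCloudScreened')
  nc_close(nc_data)
}
# bind the matrices along a third dimension and average cell-wise
NO2_mean <- apply(abind::abind(NO2_list, along = 3), 1:2, mean, na.rm = TRUE)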
NCO's ncra averages variables across all the input files, e.g.:
ncra in*.nc out.nc
I am desperate, because my problem seems very simple, but I cannot figure out how to manage it.
Aim:
I would like to read 1 to 4 raster files from a folder. The names of the ones I need are stored in a list as character strings.
After having opened the files, I would like to create a new raster corresponding to the mean of the files.
I can manage it in QGIS, but I need to automate the process, as I have a lot of individuals!
1) It should work with list.files(pattern = ...), but as the names are in a list, I do not know how to proceed.
Ex: for the first individual, I have to read 2 files named 2018-12-27_sic.tif and 2018-12-27_sic_con.tif
I tried to read them with readGDAL and open.GDAL, but it didn't work.
Thanks a lot for your valuable help!
I would use the stack and calc functions from the raster package. The function stack creates a stack of rasters, all with the same resolution and extent, and makes it easy to do operations like taking the mean of every cell. So:
library(raster)
fs <- list.files(pattern='tif$')
rasterstack <- stack(fs)
rastermean <- calc(rasterstack, fun=mean)
Note, if your rasters are not the same resolution, you will have to use the resample function, and if they are not the same extent, you will have to use crop. Typing in ?resample and ?crop in RStudio will show you instructions for using those functions.
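For instance, a rough sketch of aligning a mismatched raster before stacking (r1 and r2 are placeholder names for two loaded rasters):
r2 <- crop(r2, extent(r1))                   # trim r2 to r1's extent
r2 <- resample(r2, r1, method = "bilinear")  # match r1's grid and resolution
rasterstack <- stack(r1, r2)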
I want to do something (apparently) simple, but haven't yet found the right way to do it:
I read a netcdf file (wind speed from the ERA5 reanalysis) on a grid.
From this, I use the wind speed to calculate a wind capacity factor (using a given power curve).
I then want to write a new netcdf file, with exactly the same structure as the input file, but just replacing the input wind speed by the new variable (wind capacity factor).
Is there a simple/fast way to do this that avoids redefining all the dims, vars, ... with ncvar_def and ncdim_def?
Thanks in advance for your replies!
Writing a netcdf file in R is not overly complicated; there is a nice example online here:
http://geog.uoregon.edu/GeogR/topics/netCDF-write-ncdf4.html
You could copy the dimensions from the input file.
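A minimal sketch of that approach with ncdf4, reusing the input variable's dimension objects (the variable name "ws" and the power curve here are placeholders, assuming a single wind speed variable in the input):
library(ncdf4)

nc_in <- nc_open("input.nc")
ws <- ncvar_get(nc_in, "ws")                  # hypothetical wind speed variable name
dims <- nc_in$var[["ws"]]$dim                 # reuse the input's dimension objects
cf <- pmin(pmax((ws - 3) / (12 - 3), 0), 1)   # placeholder ramp, not a real power curve

wcf_var <- ncvar_def("wcf", units = "1", dim = dims, missval = 1e30,
                     longname = "wind capacity factor")
nc_out <- nc_create("output.nc", wcf_var)
ncvar_put(nc_out, wcf_var, cf)
nc_close(nc_out)
nc_close(nc_in)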
However, if your wind power curve is a simple analytical expression, then you could perform this task in one line from the command line in bash/Linux using the Climate Data Operators (CDO).
For example, if you have the two variables 10u and 10v in the file (I don't recall the reanalysis names exactly), then you could make a new variable WCF = SQRT(U² + V²) in the following way:
cdo expr,'wcf=sqrt(10u**2+10v**2)' input.nc output.nc
See an example here:
https://code.mpimet.mpg.de/boards/53/topics/1622
So if your wind power function is an analytical expression, you can define it this way without using R at all or worrying about dimensions, etc.; the new file will have a variable wcf added. You should then probably use NCO to alter the metadata (units, etc.) to ensure it is appropriate.
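For instance, with NCO's ncatted (the attribute values here are assumptions):
ncatted -a units,wcf,o,c,"1" -a long_name,wcf,o,c,"wind capacity factor" output.nc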
I successfully wrote a for loop in R. That is okay and I am very happy that it works. But I also want to understand what I've done exactly because I will have to work with loops later on in my analysis as well.
I work with raster data (DEMs). I load them into the environment as rasters and then use the getValues function in the loop, as I want to do some calculations. It looks as follows:
list <- dir(pattern = ".tif", full.names = TRUE)
tif.files <- list()
tif.files.values <- tif.files
for (i in 1:length(list)) {
  tif.files[[i]] <- raster(list[[i]])
  tif.files.values[[i]] <- getValues(tif.files[[i]])
}
Okay, so far so good. I don't get why I have to define tif.files and tif.files.values before I use them in the loop, and I don't know why they have to be defined exactly the way I did it. For the first part, the raster operation, I had a pattern to follow. Maybe someone can explain the context. I really want to understand R.
When you do:
tif.files[[i]] <- raster(list[[i]])
then tif.files[[i]] is the result of running raster(list[[i]]), so that is storing the raster object. This object contains the metadata (extent, number of rows and cols, etc.) and the data, although if the TIFF is huge it doesn't actually read the values in at that time.
tif.files.values[[i]] <- getValues(tif.files[[i]])
that line calls getValues on the raster object, which reads the values from the raster and returns a vector. The values of the grid cells are now in tif.files.values[[i]].
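As for why the two lists must be created before the loop: indexed assignment like tif.files[[i]] <- ... can only modify an object that already exists, which is why the empty list() comes first. A tiny illustration:
x <- list()   # create the empty list first
x[[1]] <- 42  # indexed assignment into an existing list works
# without the first line, x[[1]] <- 42 fails with: object 'x' not found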
Experiment by printing tif.files[[1]] and tif.files.values[[1]] at the R prompt.
Note
This is R, not RStudio, which is the interface you are using that has all the buttons and menus. The R language exists quite happily without it, and your question is just a language question. I've edited and tagged it now for you.
I'm having trouble extracting point data from a large shapefile (916.2 MB, 4,618,197 elements; from here: https://earthdata.nasa.gov/data/near-real-time-data/firms/active-fire-data) in R. I'm using readShapeSpatial in maptools to read in the shapefile, which takes a while but eventually works:
worldmap <- readShapeSpatial("shp_file_name")
I then have a data.frame of coordinates that I want to extract data for. However, R is really struggling with this and either loses connection or freezes, even with just one set of coordinates!
pt <- data.frame(lat = -64, long = -13.5)
pt <- SpatialPoints(pt)
e <- over(pt, worldmap)
Could anyone advise me on a more efficient way of doing this?
Or is it the case that I need to run this script on something more powerful (currently using a Mac mini with a 2.3 GHz processor)?
Many thanks!
By 'point data' do you mean the longitude and latitude coordinates? If that's the case, you can obtain the data underlying the shapefile with:
worldmap@data
You can view this in the same way you would any other data frame, for example:
View(worldmap@data)
You can also access columns in this data frame in the same way you normally would, except you don't need the @data, e.g.:
worldmap$LATITUDE
Finally, it is recommended to use readOGR from the rgdal package rather than maptools::readShapeSpatial, as the former reads in the CRS/projection information.
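A sketch of reading the same shapefile that way (assuming the layer name matches the file name without its extension):
library(rgdal)
# dsn is the folder holding the shapefile; layer is the file name without ".shp"
worldmap <- readOGR(dsn = ".", layer = "shp_file_name")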