I'm trying to read MODIS 17 data files into R, manipulate them (cropping etc.) and then save them as GeoTIFFs. The data files come in .hdf format and there doesn't seem to be an easy way to read them into R.
Compared to other topics there isn't a lot of advice out there, and most of it is several years old. Some of it also advises using additional programs, but I want to stick with just using R.
What package/s do people use for dealing with .hdf files in R?
OK, so my MODIS HDF files were HDF4 rather than HDF5 format. It was surprisingly difficult to discover this: it isn't mentioned on the MODIS website, and there are only a few hints in various blogs and Stack Exchange posts. In the end I had to download HDFView to find out for sure.
R doesn't read HDF4 files, and pretty much all the relevant packages (like rgdal) only support HDF5. There are a few posts about downloading drivers and compiling rgdal from source, but it all seemed rather complicated, the posts were for Mac or Unix, and I'm using Windows.
Basically, gdal_translate from the gdalUtils package is the saving grace for anyone who wants to use HDF4 files in R. It converts HDF4 files into GeoTIFFs without reading them into R, which means you can't manipulate them first, e.g. by cropping them, so it's worth getting the smallest tiles you can (for MODIS data, through something like Reverb) to minimise computing time.
Here's an example of the code:
library(gdalUtils)
library(raster) # needed below to read and plot the GeoTIFF
# Provides detailed data on hdf4 files but takes ages
gdalinfo("MOD17A3H.A2000001.h21v09.006.2015141183401.hdf")
# Tells me what subdatasets are within my hdf4 MODIS files and makes them into a list
sds <- get_subdatasets("MOD17A3H.A2000001.h21v09.006.2015141183401.hdf")
sds
[1] "HDF4_EOS:EOS_GRID:MOD17A3H.A2000001.h21v09.006.2015141183401.hdf:MOD_Grid_MOD17A3H:Npp_500m"
[2] "HDF4_EOS:EOS_GRID:MOD17A3H.A2000001.h21v09.006.2015141183401.hdf:MOD_Grid_MOD17A3H:Npp_QC_500m"
# I'm only interested in the first subdataset and I can use gdal_translate to convert it to a .tif
gdal_translate(sds[1], dst_dataset = "NPP2000.tif")
# Load and plot the new .tif
rast <- raster("NPP2000.tif")
plot(rast)
# If you have lots of files then you can make a loop to do all this for you
files <- dir(pattern = "\\.hdf$")
files
[1] "MOD17A3H.A2000001.h21v09.006.2015141183401.hdf" "MOD17A3H.A2001001.h21v09.006.2015148124025.hdf"
[3] "MOD17A3H.A2002001.h21v09.006.2015153182349.hdf" "MOD17A3H.A2003001.h21v09.006.2015166203852.hdf"
[5] "MOD17A3H.A2004001.h21v09.006.2015099031743.hdf" "MOD17A3H.A2005001.h21v09.006.2015113012334.hdf"
[7] "MOD17A3H.A2006001.h21v09.006.2015125163852.hdf" "MOD17A3H.A2007001.h21v09.006.2015169164508.hdf"
[9] "MOD17A3H.A2008001.h21v09.006.2015186104744.hdf" "MOD17A3H.A2009001.h21v09.006.2015198113503.hdf"
[11] "MOD17A3H.A2010001.h21v09.006.2015216071137.hdf" "MOD17A3H.A2011001.h21v09.006.2015230092603.hdf"
[13] "MOD17A3H.A2012001.h21v09.006.2015254070417.hdf" "MOD17A3H.A2013001.h21v09.006.2015272075433.hdf"
[15] "MOD17A3H.A2014001.h21v09.006.2015295062210.hdf"
filename <- substr(files,11,14)
filename <- paste0("NPP", filename, ".tif")
filename
[1] "NPP2000.tif" "NPP2001.tif" "NPP2002.tif" "NPP2003.tif" "NPP2004.tif" "NPP2005.tif" "NPP2006.tif" "NPP2007.tif" "NPP2008.tif"
[10] "NPP2009.tif" "NPP2010.tif" "NPP2011.tif" "NPP2012.tif" "NPP2013.tif" "NPP2014.tif"
for (i in seq_along(files)) {
  sds <- get_subdatasets(files[i])
  gdal_translate(sds[1], dst_dataset = filename[i])
}
Now you can read your .tif files into R using, for example, raster() from the raster package, and work as normal. I've checked the resulting files against a few I converted manually using QGIS and they match, so I'm confident the code is doing what I think it is. Thanks to Loïc Dutrieux and this for the help!
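For instance, a minimal sketch of the cropping-and-saving part of the original question; the extent coordinates below are made up, so substitute your own region of interest:
library(raster)
# read the converted GeoTIFF
r <- raster("NPP2000.tif")
# crop to a region of interest (hypothetical coordinates, given in the raster's CRS)
e <- extent(3000000, 3500000, -1500000, -1000000)
r_crop <- crop(r, e)
# save the cropped raster as a new GeoTIFF
writeRaster(r_crop, "NPP2000_crop.tif", format = "GTiff", overwrite = TRUE)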
These days you can use the terra package with HDF files.
Either get the sub-datasets:
library(terra)
s <- sds("file.hdf")
s
Each of these can be extracted as a SpatRaster like this:
s[1]
Or create a SpatRaster of all subdatasets like this
r <- rast("file.hdf")
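From there you can manipulate and export as usual; a minimal sketch, assuming file.hdf from above and a made-up cropping extent:
library(terra)
r <- rast("file.hdf")
# crop to a hypothetical region of interest (coordinates in the raster's CRS)
r_crop <- crop(r, ext(3000000, 3500000, -1500000, -1000000))
# write the result out as a GeoTIFF
writeRaster(r_crop, "file_crop.tif", overwrite = TRUE)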
The following worked for me. It's a short program that just takes in the input folder name. Make sure you know which sub-dataset you want; I was interested in sub-dataset 1.
library(raster)
library(gdalUtils)
inpath <- "E:/aster200102/ast_200102"
setwd(inpath)
filenames <- list.files(pattern = "\\.hdf$", full.names = FALSE)
for (filename in filenames) {
  sds <- get_subdatasets(filename)
  gdal_translate(sds[1], dst_dataset = paste0(substr(filename, 1, nchar(filename) - 4), ".tif"))
}
Use the HEG toolkit provided by NASA to convert your HDF file to GeoTIFF, and then use any package ("raster", for example) to read the file. I do the same for both old and new HDF files.
Here's the link: https://newsroom.gsfc.nasa.gov/sdptoolkit/HEG/HEGHome.html
Take a look at the NASA products supported here: https://newsroom.gsfc.nasa.gov/sdptoolkit/HEG/HEGProductList.html
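Once HEG has produced the GeoTIFF, reading it into R is a one-liner; a minimal sketch (the output filename here is hypothetical):
library(raster)
r <- raster("converted_by_HEG.tif") # hypothetical HEG output name
plot(r)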
Hope this helps.
This script has been very useful and I managed to convert a batch of 36 files using it. However, my problem is that the conversion does not seem correct. When I do it using the ArcGIS 'Make NetCDF Raster Layer' tool, I get different results, and I am able to convert the values from Kelvin to Celsius using the simple formula RasterValue * 0.02 - 273.15. With the results from the R conversion I don't get sensible values after applying that formula, which leads me to believe the ArcGIS conversion is good and the R conversion goes wrong somewhere.
library(gdalUtils)
library(raster)
setwd("D:/Data/Climate/MODIS")
# Get a list of sds names
sds <- get_subdatasets('MOD11C3.A2009001.006.2016006051904.hdf')
# Isolate the name of the first sds
name <- sds[1]
filename <- 'Rasterinr.tif'
gdal_translate(sds[1], dst_dataset = filename)
# Load the Geotiff created into R
r <- raster(filename)
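# For reference, the Kelvin-to-Celsius conversion mentioned above, applied in R
# (this assumes the sub-dataset stores LST with the 0.02 scale factor):
r_celsius <- r * 0.02 - 273.15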
# Identify files to read:
rlist <- list.files(getwd(), pattern = "hdf$", full.names = FALSE)
# Extract the last 9 characters of each MODIS filename for use in the new .img filenames
substrRight <- function(x, n) {
  substr(x, nchar(x) - n + 1, nchar(x))
}
filenames0 <- substrRight(rlist, 9)
# Suffixes for MODIS files for identification:
filenamessuffix <- substr(filenames0, 1, 5)
listofnewnames <- c("2009.01.MODIS_","2009.02.MODIS_","2009.03.MODIS_","2009.04.MODIS_","2009.05.MODIS_",
"2009.06.MODIS_","2009.07.MODIS_","2009.08.MODIS_","2009.09.MODIS_","2009.10.MODIS_",
"2009.11.MODIS_","2009.12.MODIS_",
"2010.01.MODIS_","2010.02.MODIS_","2010.03.MODIS_","2010.04.MODIS_","2010.05.MODIS_",
"2010.06.MODIS_","2010.07.MODIS_","2010.08.MODIS_","2010.09.MODIS_","2010.10.MODIS_",
"2010.11.MODIS_","2010.12.MODIS_",
"2011.01.MODIS_","2011.02.MODIS_","2011.03.MODIS_","2011.04.MODIS_","2011.05.MODIS_",
"2011.06.MODIS_","2011.07.MODIS_","2011.08.MODIS_","2011.09.MODIS_","2011.10.MODIS_",
"2011.11.MODIS_","2011.12.MODIS_")
# Final new names for converted files:
newnames <- vector()
for (i in 1:length(listofnewnames)) {
  newnames[i] <- paste0(listofnewnames[i], filenamessuffix[i], ".img")
}
# Loop converting the files from HDF to raster
for (i in 1:length(rlist)) {
  sds <- get_subdatasets(rlist[i])
  gdal_translate(sds[1], dst_dataset = newnames[i])
}
Related
I have the following problem: I need to process multiple raster files using the same function in the R package landscapemetrics. Basically my raster files are parts of a country map, all of the same shape and size (i.e. quadrants). I figured out the code for one file, but I have to do the same with more than 600 rasters, so doing it manually is very irrational. The steps in my code are the following:
# 1. I load "raster" and "landscapemetrics" packages:
library(raster)
library(landscapemetrics)
# 2. I read in my quadrant:
Quadrant <- raster("C:\\Users\\customer\\Documents\\ ... \\2434-44.tif")
# 3. I process the raster to get landscape metrics tibble:
LS_metrics <- calculate_lsm(landscape = Quadrant)
# 4. Finally, I write it into a csv:
write.csv(LS_metrics, file = "2434-44.csv")
I need to keep the same file name for my csv files as I had for tif (e.g. results from processing quadrant "2434-44.tif", need to be stored in "2434-44.csv", possibly in a folder in wd).
I am new to R. I tried to use list.files() and then apply a for loop, but my code did not work.
I need your advice.
Yours faithfully,
Denis
Your question is really about iteration and character (filename) manipulation, not about landscapemetrics etc. There are many similar questions on this site and resources elsewhere that you can consult. The basic approach can be like this:
# get input filenames
inf <- list.files("/my/path", pattern="\\.tif$", full=TRUE)
# create output filenames
outf <- gsub("\\.tif$", ".csv", basename(inf))
# perhaps put output files in particular folder
dir.create("out", FALSE, FALSE)
outf <- file.path("out", outf)
# iterate
for (i in 1:length(inf)) {
# read input
input <- raster(inf[i])
# do something
output <- data.frame(id=1)
# write output
write.csv(output, outf[i])
}
It's very hard to help without further information. What was the issue with your approach of looping through all files using list.files()? In general, this should work.
Furthermore, you most likely don't want to calculate all available landscape metrics, but rather specify a subselection in the calculate_lsm() function call, as sketched below.
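For example, a minimal sketch of such a subselection (the metric names are illustrative; list_lsm() shows everything that is available):
library(landscapemetrics)
library(raster)
Quadrant <- raster("2434-44.tif")
# compute only selected metrics instead of all of them
LS_metrics <- calculate_lsm(landscape = Quadrant, what = c("lsm_l_ta", "lsm_l_shdi"))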
I need to get information on the extent, resolution, and cell number of my file. I'm working with raster files.
You can do
library(raster)
# r <- raster("filename")
r <- raster()
extent(r)
ncell(r)
res(r)
You can read more about the methods in the raster package here
I would like to read all shapefiles in a directory into the global environment. However, when I get a list of files in my working directory, the list includes both .shp and .shp.xml files, the latter of which I do not want. My draft code reads both .shp and .shp.xml files. How can I prevent it from doing so?
Draft code follows:
library(maptools)
library(rgdal) # readOGR() comes from rgdal
# get all files with the .shp extension from working directory
setwd("N:/Dropbox/_BonesFirst/139_Transit_Metros_Points_Subset_by_R")
shps <- dir(getwd(), "*.shp")
# the assign function will take the string representing shp
# and turn it into a variable which holds the spatial points data
for (shp in shps) {
cat("now loading", shp, "...", '\n\r')
assign(shp, readOGR(shp))
}
EDIT: The problem seems to be in readShapePoints. Either readOGR (from rgdal) or shapefile (from raster) works better.
Get all the files:
# replace with your folder name:
dir <- "c:/files/shpfiles"
ff <- list.files(dir, pattern="\\.shp$", full.names=TRUE)
Now read them. Easiest with raster::shapefile. Do not use readShapePoints from maptools (obsolete and incomplete).
library(raster)
# first file
shapefile(ff[1])
# all of them into a list
x <- lapply(ff, shapefile)
These days, you could use "terra" and do
library(terra)
v <- vect( lapply(ff, vect) )
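If you need the merged result back on disk, terra can also write it out; a minimal sketch (the output name is hypothetical):
writeVector(v, "combined_shapefile.shp", overwrite = TRUE)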
Going by the title of your post, I believe you want to list all shapefiles in your current directory. You may want to use the below:
list.files(pattern = "\\.shp$", full.names = TRUE)
I have about 40 spatial rasters in the .tiff format in a folder. I'm trying to generate histograms from each of the rasters in R, and save each histogram as a jpeg in a separate folder. I wrote code to loop through each of the raster, create a histogram and save it using the 'jpeg' package.
setwd("G:/Research/MODIS Albedo Oct 08-July 09/Test")
library(raster)
library(jpeg)
files <- list.files(path="G:/Research/MODIS Albedo Oct 08-July 09/Test", pattern=".tif",all.files=T, full.names=F, no.. = T) #generate a list of rasters in the folder
number<-length(files) #count the number of rasters
for(r in 1:number) #loop over each raster in the folder
{
x<-raster(files[r], header=F) #load one raster file
jpeg("G:/Research/MODIS Albedo Oct 08-July 09/test_histplots/r.jpg") #create jpeg using the name 'r' generated by loop
hist(x) #generate histogram
dev.off()
}
I want each of the generated jpeg to have a different name, ideally a subset of the original raster name. For example, if the original name of the raster is 'MODIS101_265', the jpeg's name should be 265. Here, 265 is the Julian date in the year. I'm assuming that this might involve using a format specifier like %d in C, but I'm not sure how this works in R.
When I run the above code, I get only one histogram. It seems the code is correctly looping over the original rasters, but saving all resultant histograms to a single jpeg.
Any advice will be helpful! Thanks!
Regular expressions using gsub are your friend for getting the number out of the name. Assuming that all of your files are named the way your example is ("MODIS101_265.tif"), the code below will work for you.
Also, welcome to R, where explicit for loops can usually be replaced by the more concise lapply.
library(raster) # for raster()

saveMyHist <- function(fileName) {
fileNum <- as.numeric(gsub(".*_(\\d+)\\.tif", "\\1", fileName))
x <- raster(fileName, header=F)
jpeg(sprintf("G:/Research/MODIS Albedo Oct 08-July 09/test_histplots/%03d.jpg",
fileNum))
hist(x)
dev.off()
}
files <- list.files("/Users/home/Documents/Development/Rtesting",
pattern=".tif",all.files=T, full.names=F, no.. = T)
lapply(files, saveMyHist)
Motivated by the post here, Developing Geographic Thematic Maps with R, I was thinking about constructing a choropleth map based on zip codes. I've downloaded the shapefiles for New Hampshire and Maine from http://www.census.gov/geo/www/cob/z52000.html, but I'm interested in combining or merging the .shp files from these two states.
Is there a mechanism in the maptools package for doing this kind of merge or concatenation of two .shp files after you read them in using readShapeSpatial()? Input is also welcome if, for example, using the RgoogleMaps package would be easier.
I followed up on the link posted by Roman Luštrik, and the following answer is a slight modification of http://r-sig-geo.2731867.n2.nabble.com/suggestion-to-MERGE-or-UNION-3-shapefiles-td5914413.html#a5916751.
The following code will allow you to merge the .shp files obtained from Census 2000 5-Digit ZIP Code Tabulation Areas (ZCTAs) Cartographic Boundary Files and plot them.
In this case, I downloaded the .shp files and associated .dbf and .shx files for Massachusetts, New Hampshire, and Maine.
library('maptools')
library('rgdal')
setwd('c:/location.of.shp.files')
# this location has the shapefiles for zt23_d00 (Maine), zt25_d00 (Mass.), and zt33_d00 (New Hampshire).
# columns.to.keep
# allows the subsequent spRbind to work properly
columns.to.keep <- c('AREA', 'PERIMETER', 'ZCTA', 'NAME', 'LSAD', 'LSAD_TRANS')
files <- list.files(pattern="*.shp$", recursive=TRUE, full.names=TRUE)
uid <- 1
# get polygons from first file
poly.data <- readOGR(files[1], gsub("^.*/(.*).shp$", "\\1", files[1]))
n <- length(slot(poly.data, "polygons"))
poly.data <- spChFIDs(poly.data, as.character(uid:(uid+n-1)))
uid <- uid + n
poly.data <- poly.data[columns.to.keep]
# combine remaining polygons with first polygon
for (i in 2:length(files)) {
  temp.data <- readOGR(files[i], gsub("^.*/(.*).shp$", "\\1", files[i]))
  n <- length(slot(temp.data, "polygons"))
  temp.data <- spChFIDs(temp.data, as.character(uid:(uid+n-1)))
  temp.data <- temp.data[columns.to.keep]
  uid <- uid + n
  poly.data <- spRbind(poly.data, temp.data)
}
plot(poly.data)
# save new shapefile
combined.shp <- 'combined.shp'
writeOGR(poly.data, dsn=combined.shp, layer='combined1', driver='ESRI Shapefile')
GeoMerge is a free tool for merging shapefiles; it merges the SHP and DBF parts. It seems to work OK, but I haven't pushed it too hard.