I am trying to automate the data ingestion process in an R script that pulls data from a directory that updates regularly.
The general framework follows this process:
library(sp)
library(rgdal)
library(raster)
f1.t1.cir <- stack("../raster/field1/f1_cir_t1.tif")
f1.t1.NDVI <- stack("../raster/field1/f1_ndvi_t1.tif")
f1.t1.RGB <- stack("../raster/field1/f1_rgb_t1.tif")  # assumed RGB file path
f1.dat <- list(f1.t1.cir, f1.t1.NDVI, f1.t1.RGB)
for (i in f1.dat) {
  plotRGB(i)
}
I would like to generate each f1.t1.cir-type object directly from the directory, so that when I add a new TIFF file f1_cir_t2.tif, the R script will create an object f1.cir.t2.
I am trying to use something like:
a <- list.files(path= "../raster/field1", pattern = "\\.tif$")
b <- gsub("_", "\\.", a)
for (i in a) {
  assign(get(b[which(a == i)]), stack(paste("../raster/field1/", i, sep = "")))
}
At this point I should have all of the TIFF files as stacked multiband raster objects in the R workspace. Instead, I am getting the following error:
Error in get(b[(which(a == i))]) : object 'f1_t1_DSM.tif' not found
I cannot figure out whether this is a get() problem or something else.
for reference
> a
[1] "f1_t1_DSM.tif" "f1_t1_NDVI.tif"
> b
[1] "f1.t1.DSM.tif" "f1.t1.NDVI.tif"
So that much is working, I think.
Any suggestions?
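For what it's worth, the error most likely comes from the get() call: get(b[which(a == i)]) tries to fetch the value of an object with that name, and no such object exists yet, whereas assign() expects the name as a plain character string in its first argument. A minimal sketch of that route, for illustration only (the list approach below is generally preferable):
for (i in seq_along(a)) {
  nm <- sub("\\.tif$", "", b[i])                        # e.g. "f1.t1.DSM"
  assign(nm, stack(paste0("../raster/field1/", a[i])))
}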
@joran, great suggestion...
f1.t1 <- list()
for (i in list.files(path = "../raster/field1", pattern = "\\.tif$")) {
  f1.t1[[i]] <- stack(paste("../raster/field1/", i, sep = ""))
}
Worked very well, no need to change the names.
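For example, the named list drops straight back into the original plotting loop (a small illustrative sketch, assuming each stack has the three bands plotRGB() expects):
# Illustrative only: loop over the stacks by name, as in the original framework
for (nm in names(f1.t1)) {
  plotRGB(f1.t1[[nm]])
}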
Thank you.
I am currently trying to write a function to batch-convert a number of PNGs in a folder into SVGs. I wanted to get this up and running as it is something I regularly have to do, and I find the "free" online alternatives lacking. I have written the following script to do this:
image.list <- list.files("D:/R Exports/06 Graphics")
image.select <- list()
batch.svg.convert <- function(image.list){
  for (i in 1:length(image.list)){
    image.select[i] <- paste0("D:/R Exports/06 Graphics/", image.list[i])
    image.read <- image_read(image.select[i])
    image.svg <- image_convert(image.read, format = "svg")
    image.write <- image_write(image.svg, paste0("D:/R Exports/Graphics SVG", image.list[i]))
  }
}
batch.svg.convert(image.list)
However, when doing so I get the following error in response:
Error in image_read(image.select[i]) :
path must be URL, filename or raw vector
I am unsure why this is occurring, because when I run tests such as:
image_list2 <- list.files("D:/R Exports/06 Graphics")
image_test <- paste0("D:/R Exports/06 Graphics/",image_list2[6])
image_test2 <- image_read(image_test)
The image is read as intended. What exactly am I missing here? Is this a limitation of the magick package, or am I missing something in my code to get this function working correctly?
Thanks in advance,
Jim
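A hedged note on the likely cause, since it is not resolved above: image.select is a list, so image.select[i] returns a one-element list rather than a character string, which image_read() rejects. A sketch of the function using a plain character vector instead (it also assumes that magick needs to be loaded and that a "/" is missing before the output file name):
library(magick)

batch.svg.convert <- function(image.list) {
  for (i in seq_along(image.list)) {
    in.path  <- paste0("D:/R Exports/06 Graphics/", image.list[i])   # character string, not a list
    out.path <- paste0("D:/R Exports/Graphics SVG/", image.list[i])  # note the added "/"
    img      <- image_read(in.path)
    img.svg  <- image_convert(img, format = "svg")
    image_write(img.svg, out.path)
  }
}

batch.svg.convert(list.files("D:/R Exports/06 Graphics"))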
I have a script that loops through a selection of NetCDF files. The files are opened, the data extracted, then closed again. I have used this many times before and it works with no issue. I was recently sent a new selection of files to run through the same code. I can check the files individually using the ncdf4 package and the nc_open() function; the files look fine and are not corrupt. However, when I run through the loop the function will not open the files and I get this error:
Error in R_nc4_open: NetCDF: HDF error
When I step through the loop manually to check, all is fine and the file opens. It just will not open when the loop runs; there is no issue with the code itself.
Has anyone come across this before, with non-corrupt NetCDF files giving this error only occasionally? Even outside the loop I can run the code and get the error the first time, then run it again without changing anything and the connection works.
I am not sure how to troubleshoot this one, so I am just looking for advice as to why this might be happening.
Code snippet:
targetYear <- '2005-2019'
variables <- c('CHL','SSH')
ncNam <- list.files(folderdir, '.nc', recursive = TRUE)
for (v in 1:length(variables)) {
  varNam <- unlist(unique(variables))[v]
  # Get names corresponding to variable
  varLs <- ncNam[grep(varNam, basename(ncNam))]
  varLs <- varLs[grep(targetYear, varLs)]
  varLs <- varLs[1]
  export <- paste0(exportdir, varNam, '/')
  dir.create(export, recursive = TRUE)
  if (varNam == 'Proximity1km' | varNam == 'Proximity200m' |
      varNam == 'ProximityCoast' | varNam == 'Bathymetry') {
    fileNam <- varLs
    ncfilename <- paste0(folderdir, fileNam)
    print(ncfilename)
    # Read ncfile
    ncfile <- nc_open(ncfilename)
    nc_close(ncfile)
    gc()
  } else {
    fileNam <- varLs
    ncfilename <- paste0(folderdir, fileNam)
    print(ncfilename)
    # Read ncfile
    ncfile <- nc_open(ncfilename)
    nc_close(ncfile)
    gc()
  }
}
I figured out the issue. It was to do with the error-detection filter in the .nc files.
I removed the filter and the files now work fine inside the loop. It is still a bit strange.
Perhaps the ncdf4 package is not up to date with this filtering.
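If anyone hits the same intermittent behaviour, one defensive workaround (a hedged sketch, not something from the original post) is to wrap nc_open() in tryCatch() and retry a couple of times, since the files sometimes open on a second attempt. The filter mentioned above is presumably the HDF5 Fletcher32 checksum (error-detection) filter.
library(ncdf4)

# Hypothetical helper: retry nc_open() a few times before giving up
open_nc_with_retry <- function(path, attempts = 3) {
  for (k in seq_len(attempts)) {
    ncfile <- tryCatch(nc_open(path), error = function(e) NULL)
    if (!is.null(ncfile)) return(ncfile)
    Sys.sleep(1)  # brief pause before the next attempt
  }
  stop("Could not open ", path, " after ", attempts, " attempts")
}

# ncfile <- open_nc_with_retry(ncfilename)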
I have a bunch of .gpx files in a folder and I'm trying to read them all with readOGR and get one object in memory for each .gpx file. Here's what isn't working:
myfiles <- list.files(".", pattern = "*.gpx")
for (i in 1:length(myfiles)) {
  temp.gpx <- readOGR(dsn = myfiles[i], layer = "tracks")
  temp.gpx
}
What this does is read each file in turn and overwrite temp.gpx each time. What I'd like it to do is read them into separate objects, e.g. temp1.gpx, temp2.gpx, etc.
Unfortunately, I'm pretty new to R and have no idea how to do it. I tried looking online and found some solutions, but they were specific to non-spatial files and messed up these files in one way or another.
Does anyone know how to accomplish this?
Thanks!
You can use assign() to generate variable names from other variables:
myfiles <- list.files(".",pattern = "*.gpx")
for (i in 1:length(myfiles)) {
  varName <- paste0("temp", i, ".gpx")
  assign(varName, readOGR(dsn = myfiles[i], layer = "tracks"))
}
This creates a character variable varName on each iteration of the loop, taking the values "temp1.gpx", "temp2.gpx", etc.:
## i <- 1
varName <- paste0("temp", i, ".gpx")
## [1] "temp1.gpx"
The assign() then assigns the result of readOGR() to the current temp*.gpx variable.
The use of assign() is in most cases a very poor choice. Although Stuart Allen answered your question correctly, you are most likely asking the wrong question.
What you are trying to do is a typical beginner's mistake: with this approach you end up with many named objects that are difficult to manipulate, because you need to refer to them by their names, which makes it hard to use the objects in a loop, for example.
Instead you probably should make a list with all your objects:
gpx <- lapply(myfiles,
function(f) { readOGR(dsn=f, layer="tracks") }
)
And take it from there.
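A possible follow-up to that (not part of the original answer): naming the list elements after the source files makes the individual layers easy to look up later.
# Hypothetical: name each element after its file, then access by name or loop over all
names(gpx) <- tools::file_path_sans_ext(basename(myfiles))
gpx[["my_track"]]      # one object by name (assumes a file called my_track.gpx exists)
lapply(gpx, summary)   # or operate on every object at once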
I have written an R script for binning on specific parameters of several .csv files in the same folder, using the smbinning package. When I execute the script it produces detailed results, but I do not need all of them; I want to take a specific part of the results and write it into a .csv file automatically. Can someone tell me how I can do this? My R script, the detailed results, and the parts of the result I want are as follows.
My R script is as follows:
library(smbinning)
files <- list.files(pattern = "0.csv")
cutpoint <- rep(0,length(files))
for (i in 1:length(files)) {
  data <- read.csv(files[i], header = TRUE)
  df.train <- data.frame(data)
  df.train_amp <- rbind(df.train)
  cutpoint[i] <- smbinning(df = df.train_amp, y = "cvflg", x = "dwell")
}
result <- cbind(files,cutpoint)
write.csv(result,"result_dwell.csv")
You can use View(result) to see if the variable contains exactly what you require. Otherwise there is something wrong in your logic.
There is also the sink() function in R, which writes console output to a file:
https://stat.ethz.ch/R-manual/R-devel/library/base/html/sink.html
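As a hedged sketch of the extraction itself (assuming, as in the smbinning documentation, that a successful smbinning() call returns a list with a cuts element, and a character message when no split is found), one could keep just the cutpoints per file and write those out:
library(smbinning)

files <- list.files(pattern = "0.csv")
cuts_per_file <- vector("list", length(files))
for (i in seq_along(files)) {
  df.train <- read.csv(files[i], header = TRUE)
  res <- smbinning(df = df.train, y = "cvflg", x = "dwell")
  # smbinning() returns a character message instead of a list when it cannot split
  cuts_per_file[[i]] <- if (is.list(res)) res$cuts else NA
}
result <- data.frame(file = files,
                     cuts = sapply(cuts_per_file, paste, collapse = ";"))
write.csv(result, "result_dwell.csv", row.names = FALSE)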
I have the following problem.
I need to recursively read raster images, stack them, and store them with different names (e.g. name1.tiff, name2.tiff, ...).
I tried the following:
for (i in 10) {
  fn <- system.file("external/test.grd", package = "raster")
  fn <- stack(fn)  # not sure if this idea can work
  fnSTACK[,, i] <- fn
}
Here I expect a result of the form:
dim(fnSTACK)
[1] 115  80  10
or something like that, but it didn't work.
Actually, I have around 300 images that I have to store with different names. The purpose is to extract time-series information (if you know another method, I would appreciate any suggestions).
Any suggestions are welcome. Thank you in advance for your time.
What I would do first is put all your *.tiff files in a single folder, then read all their names into a list, stack them, and write a multi-layered raster. I'm assuming all the images have the same extent and projection.
### Load necessary packages
library(tiff)
library(raster)
library(sp)
library(rgdal)  # I can't recall exactly which packages you might need,
library(grid)   # so this is probably overkill
library(car)
############ function extracts the last n characters from a string
############ without counting the last m
subs <- function(x, n=1,m=0){
substr(x, nchar(x)-n-m+1, nchar(x)-m)
}
setwd("your working directory path") # set your wd to where all your images are
filez <- list.files() # creates a list with all the files in the wd
no <- length(filez) # amount of files found
imagestack <- stack() # you initialize your raster stack
for (i in 1:no) {
  if (subs(filez[i], 4) == "tiff") {
    image <- raster(filez[i])  # fill up the raster stack with only the tiffs
    imagestack <- addLayer(imagestack, image)
  }
}
writeRaster(imagestack, filename = "output path", options = "INTERLEAVE=BAND", overwrite = TRUE)  # write the stack
I did not try this, but it should work.
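A small follow-up (not in the original answer): the multi-layer file written by writeRaster() can be read back as a single object with brick(); the file name here is just a placeholder.
# Hypothetical read-back check; replace the path with the one used in writeRaster()
b <- brick("path/to/imagestack_output.tif")
nlayers(b)  # should equal the number of tiffs that were stacked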
Your question is rather vague, and it would have helped if you had provided a full example script so that it could be more easily understood. You say you need to read several (probably not recursively?) raster images (files, presumably) and create a stack. Then you need to store them in files with different names. That sounds like copying the files to new files with different names, and there are R functions for that, but that is probably not what you intended to ask.
If you have a bunch of files (with full path names, or in the working directory), e.g. from list.files():
f <- system.file ("external/test.grd", package = "raster")
ff <- rep(f, 10)
you can do
library(raster)
s <- stack(ff)
I am assuming that you simply need this stack for operations in R (it is an object, not a file). You can extract the values in many ways (see the help files and vignette of the raster package). If you want a three-dimensional array, you can do:
a <- as.array(s)
dim(a)
[1] 115 80 10
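If the goal is time-series extraction, one common route from such a stack is extract() on point locations, which returns one column per layer. A small illustrative sketch (the coordinates are made up and assumed to fall inside the raster's extent):
library(raster)
pts <- cbind(x = c(179500, 180500), y = c(331000, 332500))  # hypothetical coordinates in the raster's CRS
ts_values <- extract(s, pts)  # matrix: one row per point, one column (time step) per layer
dim(ts_values)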
thanks "JEquihua" for your suggestion, just need to add the initial variable before addLayer ie:
imagestack <- stack()
for (i in 1:no) {
  if (subs(filez[i], 4) == "tiff") {
    image <- raster(filez[i])  # fill up the raster stack with only the tiffs
    imagestack <- addLayer(imagestack, image)
  }
}
And sorry "RobertH", I'm newbie about R. I will be ask, more sure or exact by next time.
Also, any suggestions for extracting data from time series of MODIS images stacked. Or examples of libraries: "rts ()", "ndvits ()" or "bfast ()"
Greetings to the entire community.
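On the MODIS time-series question, one hedged pointer (usage assumed from the rts package documentation, not tested here): rts() wraps a RasterStack plus one date per layer into a raster time-series object.
library(raster)
library(rts)
# Hypothetical 16-day MODIS dates, one per layer of the stack s from the answer above
dates <- seq(as.Date("2005-01-01"), by = "16 days", length.out = nlayers(s))
s.ts <- rts(s, dates)  # raster time series; see ?rts for extraction methods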
Another method for stacking:
library(raster)
ndvi_files <- list.files("/PATH/of/DATA/", pattern = "NDVI",
                         recursive = TRUE, full.names = TRUE)
data_stack <- stack(ndvi_files)
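A quick sanity check on the result (illustrative only):
nlayers(data_stack)   # number of NDVI layers that were found and stacked
names(data_stack)     # layer names, taken from the file names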