I want to programmatically source all the .R files contained in an array retrieved with the Sys.glob() function.
This is the code I wrote:
# fetch the different ETL parts
parts <- Sys.glob("scratch/*.R")
if (length(parts) > 0) {
  for (part in parts) {
    # source the ETL part
    source(part)
    # rest of code goes here
    # ...
  }
} else {
  stop("no ETL parts found (no data to process)")
}
The problem is that this doesn't work; at least, I get the following error:
simpleError in source(part): scratch/foo.bar.com-https.R:4:151: unexpected string constant
I've tried different combinations for the source() function like the following:
source(sprintf("./%s", part))
source(toString(part))
source(file = part)
source(file = sprintf("./%s", part))
source(file = toString(part))
No luck. As I'm globbing the contents of a directory, I need to tell R to source those files. As it's a custom-tailored ETL (extract, transform and load) script, I could write it manually:
source("scratch/foo.bar.com-https.R")
source("scratch/bar.bar.com-https.R")
source("scratch/baz.bar.com-https.R")
But that's dirty, and right now there are 3 different extraction patterns. There could be 8, 80 or even 2000 different patterns, so writing them by hand is not an option.
How can I do this?
Try getting the list of files with dir and then using lapply:
For example, if your files are of the form t1.R, t2.R, etc., and are inside the path "StackOverflow" do:
d = dir(pattern = "^t\\d\\.R$", path = "StackOverflow/", recursive = T, full.names = T)
m = lapply(d, source)
The option recursive = T will search all subdirectories, and full.names = T will add the path to the filenames.
If you still want to use Sys.glob(), this works too:
d = Sys.glob(paths = "StackOverflow/t*.R")
m = lapply(d, source)
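Note that the error message in the question points inside scratch/foo.bar.com-https.R itself (line 4, column 151): source() is being called correctly, but that file contains a syntax error. As a sketch (built against a throwaway demo directory, so the file names here are illustrative), a tryCatch() wrapper reports which files fail to parse while still sourcing the rest:

```r
# Demo setup: a temporary directory with one valid and one broken script.
td <- tempfile("etl_demo")
dir.create(td)
writeLines('x <- 1', file.path(td, "good.R"))
writeLines('x <- "broken', file.path(td, "bad.R"))  # unterminated string constant

# Source every globbed file; report parse errors instead of aborting.
for (part in Sys.glob(file.path(td, "*.R"))) {
  tryCatch(
    source(part),
    error = function(e) message("failed to source ", part, ": ", conditionMessage(e))
  )
}
x  # good.R was still sourced despite bad.R failing
```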
Related
eem_import_dir is supposed to "Reads Rdata and RDa files with one eemlist each. The eemlists are combined into one and returned." However, in my case, only one file is read at a time, and thus no combination happens...
I don't expect any error in the function itself, which is part of the staRdom package. I guess my limited R knowledge limits my understanding of the function and of what could be wrong.
All files have the same class (eemlist) and the same format. I tried changing the folder, the filenames, etc. Can someone please help me understand the requirements of the function? Why is only one file read at a time instead of all of them being combined?
function (dir)
{
    eem_files <- dir(dir, pattern = ".RData$|.RDa$", ignore.case = TRUE) %>%
        paste0(dir, "/", .)
    for (file in eem_files) {
        file <- load(file)
        if (get(file) %>% class() == "eemlist") {
            if (exists("eem_list"))
                eem_list <- eem_bind(eem_list, get(file))
            else eem_list <- get(file)
        }
        else {
            warning(paste0(file, " is no object of class eemlist!"))
        }
        NULL
    }
    eem_list
}
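As a hedged sketch of the mechanics the loop above relies on: load() restores a file's objects into the current environment and returns their *names*, and get() then fetches one object by name. A consequence worth checking against your files is that each .RData file should contain exactly one object (of class eemlist); the class name and object below are stand-ins, not staRdom code:

```r
# load() returns the *names* of the restored objects; get() fetches by name.
f <- tempfile(fileext = ".RData")
my_data <- structure(list(a = 1), class = "eemlist_demo")  # stand-in object
save(my_data, file = f)

rm(my_data)
loaded_name <- load(f)   # restores my_data and returns "my_data"
obj <- get(loaded_name)
class(obj)               # "eemlist_demo"
```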
I found an old thread (How do you read a password protected excel file into r?) that recommended that I use the following code to read in a password protected file:
install.packages("excel.link")
library("excel.link")
dat <- xl.read.file("TestWorkbook.xlsx", password = "pass", write.res.password="pass")
dat
However, when I try to do this my R immediately crashes. I've tried removing the write.res.password argument, and that doesn't seem to be the issue. I have a hunch that excel.link might not work with the newest version of R, so if you know of any other ways to do this I'd appreciate the advice.
EDIT: Using read.xlsx generates this error:
Error in .jcall("RJavaTools", "Ljava/lang/Object;", "newInstance", .jfindClass(class), :
org.apache.poi.poifs.filesystem.OfficeXmlFileException:
The supplied data appears to be in the Office 2007+ XML.
You are calling the part of POI that deals with OLE2 Office Documents.
You need to call a different part of POI to process this data (eg XSSF instead of HSSF)
You can remove the password from the Excel file without knowing it with the following function (an adapted version of the code available at https://www.r-bloggers.com/2018/05/remove-password-protection-from-excel-sheets-using-r/):
remove_Password_Protection_From_Excel_File <- function(dir, file, bool_XLSXM = FALSE)
{
  initial_Dir <- getwd()
  setwd(dir)
  # file name and path after removing protection
  if (bool_XLSXM == TRUE)
  {
    file_unlocked <- stringr::str_replace(basename(file), "\\.xlsm$", "_unlocked.xlsm")
  } else
  {
    file_unlocked <- stringr::str_replace(basename(file), "\\.xlsx$", "_unlocked.xlsx")
  }
  file_unlocked_path <- file.path(dir, file_unlocked)
  # create temporary directory in project folder
  # so we see what is going on
  temp_dir <- "_tmp"
  # remove and recreate _tmp folder in case it already exists
  unlink(temp_dir, recursive = TRUE)
  dir.create(temp_dir)
  # unzip Excel file into temp folder
  unzip(file, exdir = temp_dir)
  # get full paths to the XML files for all worksheets
  worksheet_paths <- list.files(paste0(temp_dir, "/xl/worksheets"), full.names = TRUE, pattern = "\\.xml$")
  # remove the XML node which contains the sheet protection
  # we might of course use e.g. xml2 to parse the XML file, but this simple approach will suffice here
  for (ws in worksheet_paths)
  {
    file_Content <- readLines(ws, encoding = "UTF-8")
    # the "sheetProtection" node contains the hashed password "<sheetProtection SOME INFO />"
    # we simply remove the whole node
    out <- stringr::str_replace(file_Content, "<sheetProtection.*?/>", "")
    writeLines(out, ws)
  }
  # do the same for the workbook-level protection node
  workbook_Protection_Path <- paste0(temp_dir, "/xl/workbook.xml")
  file_Content <- readLines(workbook_Protection_Path, encoding = "UTF-8")
  out <- stringr::str_replace(file_Content, "<workbookProtection.*?/>", "")
  writeLines(out, workbook_Protection_Path)
  # create a new zip, i.e. Excel file, containing the modified XML files
  old_wd <- setwd(temp_dir)
  files <- list.files(recursive = TRUE, full.names = FALSE, all.files = TRUE, no.. = TRUE)
  # as the Excel file is a zip file, we can directly replace the .zip extension by .xlsx
  zip::zip(file_unlocked_path, files = files) # utils::zip does not work for some reason
  setwd(old_wd)
  # clean up and remove temporary directory
  unlink(temp_dir, recursive = TRUE)
  setwd(initial_Dir)
}
Once the password is removed, you can read the Excel file. This approach works for me.
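A hypothetical usage sketch (the directory and file names below are placeholders, and openxlsx is just one possible reader; any xlsx reader works once the protection is gone):

```r
# Assumes the function above has been defined; the paths are placeholders.
remove_Password_Protection_From_Excel_File(dir = "C:/data", file = "report.xlsx")

# The unlocked copy is written next to the original with an "_unlocked" suffix.
dat <- openxlsx::read.xlsx("C:/data/report_unlocked.xlsx")
```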
I am using the command file.copy in R and it throws an error, but I can't spot the reason.
file.copy(from="Z:/Ongoing/Test", to = "C:/Users/Darius/Desktop", overwrite = TRUE, recursive = TRUE)
Warning message:
In file.copy(from = "Z:/Ongoing/Test",:
problem copying Z:/Ongoing/Test to C:/Users/Darius/Desktop/Test: No such file or directory
Can anyone see the problem? The command line doesn't work even though it only gives you a warning message.
Actually, I don't think there is any straightforward way to copy a directory. I have written a function which might help you.
This function takes input two arguments:
from: The complete path of directory to be copied
to: The location to which the directory is to be copied
Assumption: from and to are each the path of a single directory.
dir.copy <- function(from, to){
  ## check if from and to directories are valid
  if (!dir.exists(from)){
    cat('from: No such Directory\n')
    return (FALSE)
  }
  else if (!dir.exists(to)){
    cat('to: No such Directory\n')
    return (FALSE)
  }
  ## extract the directory name from 'from'
  split_ans <- unlist(strsplit(from, '/'))
  dir_name <- split_ans[length(split_ans)]
  new_to <- paste(to, dir_name, sep = '/')
  ## create the directory in 'to'
  dir.create(new_to)
  ## copy all files into the new directory
  file_inside <- list.files(from, full.names = T)
  file.copy(from = file_inside, to = new_to)
  ## copy all subdirectories recursively
  dir_inside <- list.dirs(path = from, recursive = F)
  if (length(dir_inside) > 0){
    for (dir_name in dir_inside)
      dir.copy(dir_name, new_to)
  }
  return (TRUE)
}
file.copy() doesn't create directories, so it will only work if you're copying to a folder that already exists.
I had a similar issue. This blog was helpful; I slightly modified its code by adding full.names = T and overwrite = T.
current.folder <- "E:/ProjectDirectory/Data/"
new.folder <- "E:/ProjectDirectory/NewData/"
list.of.files <- list.files(current.folder, full.names = T)
# copy the files to the new folder
file.copy(list.of.files, new.folder, overwrite = T)
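The same pattern as a self-contained sketch, using temporary folders so it can be run anywhere (the folder and file names are illustrative):

```r
# Build a throwaway source folder with one file in it.
current.folder <- file.path(tempdir(), "Data")
new.folder <- file.path(tempdir(), "NewData")
dir.create(current.folder, showWarnings = FALSE)
dir.create(new.folder, showWarnings = FALSE)
writeLines("some content", file.path(current.folder, "a.txt"))

# full.names = TRUE gives file.copy() complete paths to copy from.
list.of.files <- list.files(current.folder, full.names = TRUE)
copied <- file.copy(list.of.files, new.folder, overwrite = TRUE)
file.exists(file.path(new.folder, "a.txt"))  # TRUE
```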
I need to create a function in R that reads all the files in a folder (let's assume that all files are tables in tab-delimited format) and creates objects with the same names in the global environment. I did something similar to this (see the code below): I was able to write a function that reads all the files in the folder, makes some changes in the first column of each file, and writes it back to the folder. But I couldn't figure out how to assign the files I read to objects that stay in the global environment.
changeCol1 <- function () {
  filesInfolder <- list.files()
  for (i in 1:length(filesInfolder)){
    wrkngFile <- read.table(filesInfolder[i])
    wrkngFile[,1] <- gsub(0, 1, wrkngFile[,1])
    write.table(wrkngFile, file = filesInfolder[i], quote = F, sep = "\t")
  }
}
You are much better off assigning them all to elements of a named list (and it's pretty easy to do, too):
changeCol1 <- function () {
  filesInfolder <- list.files()
  lapply(filesInfolder, function(fname) {
    wrkngFile <- read.table(fname)
    wrkngFile[,1] <- gsub(0, 1, wrkngFile[,1])
    write.table(wrkngFile, file = fname, quote = FALSE, sep = "\t")
    wrkngFile
  }) -> data
  names(data) <- filesInfolder
  data
}
a_list_full_of_data <- changeCol1()
Also, F will come back to haunt you some day (it's not protected, whereas FALSE and TRUE are).
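A quick sketch of why: F is an ordinary binding in the base package that any code can shadow, while FALSE is a reserved word that cannot be reassigned.

```r
stopifnot(identical(F, FALSE))  # the default binding

F <- TRUE   # perfectly legal -- now every `quote = F` downstream means TRUE
identical(F, TRUE)  # TRUE

# FALSE <- TRUE  # error: FALSE is a reserved word and not a valid assignment target

rm(F)       # removing the shadowing binding restores the base one
identical(F, FALSE)  # TRUE
```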
add this to your loop after making the changes:
assign(filesInfolder[i], wrkngFile, envir=globalenv())
If you want to put them into a list, one way would be, outside your loop, declare a list:
mylist = list()
Then, within your loop, do like so:
mylist[[filesInfolder[i]]] <- wrkngFile
And then you can access each object by looking at:
mylist[[filename]]
from the global env.
I would like to scan multiple files for strings in R and know which file names have that string.
Is there a way to do this with something like grep, cat, readLines in a function maybe?
If I scan the files using:
fileNames <- Sys.glob("*.csv")
then maybe something like:
for (f in fileNames) {
stuff <- read.csv(fileName, sep = ",")
grep("string")
}
names(res) <- substr(filenames, 1, 30)
Or maybe even better, a loop like this:
for( f in filenames ){
cat("string", file=f)
}
for( f in filenames) {
cat(readLines(f), sep="\n")
}
This code doesn't work; I'm just trying to think it through. I'm certain there is a better way to do this. It sounds simple, but I can't get it right.
I want to scan files for strings and then have the output of the filenames where the string was found. I have not found an example to do this in R.
Suggestions?
Note that in your first code example you use f as the loop variable while inside the loop you use fileName instead (also, R is case sensitive, so fileNames and filenames are different objects).
If it's unlikely that your search string contains the CSV delimiter, you can indeed use readLines(..) together with grep(..). grep(..) returns the indices of the lines where the string occurs. Try the following code:
fileNames <- Sys.glob("*.csv")
for (fileName in fileNames) {
if (length(grep("string", readLines(fileName))) > 0) { print(fileName)}
}
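An equivalent sketch that collects the matching file names into a character vector instead of printing them, built against throwaway demo files so it is self-contained (the file names are illustrative):

```r
# Demo files: one contains the search string, one doesn't.
td <- tempfile("grepdemo")
dir.create(td)
writeLines(c("a,b", "here is the string"), file.path(td, "one.csv"))
writeLines(c("a,b", "nothing to see"),     file.path(td, "two.csv"))

# Keep only the files whose contents match the pattern.
fileNames <- Sys.glob(file.path(td, "*.csv"))
hits <- Filter(function(f) any(grepl("string", readLines(f))), fileNames)
basename(hits)  # "one.csv"
```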