I am developing an R package with a function that should return file names with their full paths, either by being given a path and a couple of file names, or - if running interactively under Windows - by opening a file browser with choose.files() showing only files ending in .csv, .dat or .txt.
Since choose.files() and the variable Filters only exist under Windows, this happens inside an appropriate if() clause. However, R CMD check under Linux generates a NOTE about the global variable Filters having no visible binding. Even though this is just a note, I have been asked to fix it by creating platform-specific functions such that the choose.files() branch is never even seen under Linux. How can I do this?
fileNames <- function() {
  files <- character(0)
  if (interactive() && (.Platform$OS.type == "windows")) {
    myFilt <- rbind(Filters,
                    txtCsvDat = c("Data files (*.txt, *.csv, *.dat)",
                                  "*.txt;*.csv;*.dat"))
    files <- choose.files(filters = myFilt[c("txtCsvDat", "All"), ], index = 1)
  }
  return(files)
}
Using get() should fix it:
get("Filters")
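Applied to the function from the question, the change might look like the following sketch; only the bare reference to Filters is replaced, so the symbol is looked up at run time instead of being flagged by the checker:

fileNames <- function() {
  files <- character(0)
  if (interactive() && (.Platform$OS.type == "windows")) {
    # get() defers the lookup of the Windows-only Filters object to run time,
    # so the check no longer reports a missing global binding
    myFilt <- rbind(get("Filters"),
                    txtCsvDat = c("Data files (*.txt, *.csv, *.dat)",
                                  "*.txt;*.csv;*.dat"))
    files <- choose.files(filters = myFilt[c("txtCsvDat", "All"), ], index = 1)
  }
  return(files)
}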
OS-specific code doesn't in general seem like a good idea, but you can use sub-directories to define platform-specific functions, e.g.
MyPkg/R/windows/fileName.R
If necessary (e.g., because you will not implement fileName on non-Windows), include conditional collation order in the DESCRIPTION file
Collate.unix: shared.R
Collate.windows: shared.R windows/fileName.R
and arrange for conditional exports in the NAMESPACE file (again, if necessary)
if (.Platform$OS.type == "windows")
export(fileName)
Conditional collation is documented in RShowDoc("R-exts") (Section 1.1.1, search for Collate.windows); I'm not sure that there is a good reference for allowable syntax in NAMESPACE files. The NAMESPACE approach is used in the parallel package.
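If you would rather have fileName exist everywhere but only work on Windows, one possible arrangement (a sketch, with function bodies adapted from the question for illustration) is to put a stub in shared.R and let the Windows-only file, which is collated afterwards on Windows, replace it:

# MyPkg/R/shared.R -- collated on every platform
fileName <- function() {
  stop("fileName() is only available on Windows")
}

# MyPkg/R/windows/fileName.R -- collated after shared.R on Windows only,
# so this definition overrides the stub there
fileName <- function() {
  myFilt <- rbind(Filters,
                  txtCsvDat = c("Data files (*.txt, *.csv, *.dat)",
                                "*.txt;*.csv;*.dat"))
  choose.files(filters = myFilt[c("txtCsvDat", "All"), ], index = 1)
}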
I have a function in an R script (inside an R package of my own) that is being called either by an .R (Rscript) file or an .Rmd (R Markdown) file.
I need to create an if statement inside this function to do different things depending on whether the function is called from the R script or from the R Markdown file.
In my case the following solves the issue
Getting the file path that also includes filename and extension:
file_type <- try(rstudioapi::getSourceEditorContext()$path, silent = TRUE)
Getting the file extension.
file_type <- tools::file_ext(file_type)
Then, if file_type == 'R', the functionalities are turned on; in any other case they are off.
if(file_type == 'R'){
...
}
In this specific case I only care that the extension is a '.R' and don't want to risk using the functionalities with anything else.
For example, if you are running the function from the console, then file_type == '' and the functionalities will be off.
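Putting the pieces together, a sketch of how the check could sit inside the function (the helper name running_from_r_script is made up for illustration):

running_from_r_script <- function() {
  file_type <- try(rstudioapi::getSourceEditorContext()$path, silent = TRUE)
  if (inherits(file_type, "try-error")) return(FALSE)
  identical(tools::file_ext(file_type), "R")
}

my_function <- function(...) {
  if (running_from_r_script()) {
    # extra functionalities turned on only when called from a plain .R script
  }
  # ... rest of the function
}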
I have multiple .xls (~100 MB) files from which I would like to load multiple sheets (from each) into R as data frames. I have tried various functions, such as xlsx::read.xlsx2 and XLConnect::readWorksheetFromFile, both of which run for a very long time (>15 min) and never finish, so I have to force-quit RStudio to keep working.
I also tried gdata::read.xls, which does finish, but it takes more than 3 minutes per one sheet and it cannot extract multiple sheets at once (which would be very helpful to speed up my pipeline) like XLConnect::loadWorkbook can.
The time it takes these functions to execute (and I am not even sure the first two would ever finish if I let them go longer) is way too long for my pipeline, where I need to work with many files at once. Is there a way to get these to go/finish faster?
In several places I have seen readxl::read_xls recommended for this task, and it should be faster per sheet. This one, however, gives me an error:
> # Minimal reproducible example:
> setwd("/Users/USER/Desktop")
> library(readxl)
> data <- read_xls(path="test_file.xls")
Error:
filepath: /Users/USER/Desktop/test_file.xls
libxls error: Unable to open file
I also did some elementary testing to make sure the file exists and is in the correct format:
> # Testing existence & format of the file
> file.exists("test_file.xls")
[1] TRUE
> format_from_ext("test_file.xls")
[1] "xls"
> format_from_signature("test_file.xls")
[1] "xls"
The test_file.xls used above is available here.
Any advice would be appreciated in terms of making the first functions run faster or the read_xls run at all - thank you!
UPDATE:
It seems that some users are able to open the file above using the readxl::read_xls function, while others are not, both on Mac and Windows, using the most up to date versions of R, Rstudio, and readxl. The issue has been posted on the readxl GitHub and has not been resolved yet.
I downloaded your dataset and read each excel sheet in this way (for example, for sheets "Overall" and "Area"):
install.packages("readxl")
library(readxl)
library(data.table)
dt_overall <- as.data.table(read_excel("test_file.xls", sheet = "Overall"))
area_sheet <- as.data.table(read_excel("test_file.xls", sheet = "Area"))
Finally, I get a data.table for each sheet.
Just as well, you can use the read_xls function instead of read_excel.
I checked: it also works correctly and is even a little faster, since read_excel is a wrapper over the read_xls and read_xlsx functions from the readxl package.
Also, you can use the excel_sheets function from the readxl package to list all the sheets of your Excel file and then read each of them, as sketched below.
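For instance, a sketch of reading every sheet of the workbook into a named list (the file name follows the question):

library(readxl)

path <- "test_file.xls"
sheets <- excel_sheets(path)
all_sheets <- lapply(sheets, function(s) read_excel(path, sheet = s))
names(all_sheets) <- sheets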
UPDATE
Benchmarking was done with the microbenchmark package for the following functions: gdata::read.xls, XLConnect::readWorksheetFromFile and readxl::read_excel.
Note that XLConnect is a Java-based solution, so it requires a lot of RAM.
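The benchmark results themselves are not reproduced here, but a minimal sketch of how such a comparison could be set up with microbenchmark (file and sheet names follow the examples above):

library(microbenchmark)

microbenchmark(
  gdata     = gdata::read.xls("test_file.xls", sheet = "Overall"),
  XLConnect = XLConnect::readWorksheetFromFile("test_file.xls", sheet = "Overall"),
  readxl    = readxl::read_excel("test_file.xls", sheet = "Overall"),
  times = 5
)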
I found that I was unable to open the file with readxl::read_xls immediately after downloading it, but if I opened the file in Excel, saved it, and closed it again, then read_xls was able to open it without issue.
My suggested workaround for handling hundreds of files is to build a little C# command-line utility that opens, saves, and closes an Excel file. The source code is below; the utility can be compiled with Visual Studio Community Edition.
using System.IO;
using Excel = Microsoft.Office.Interop.Excel;

namespace resaver
{
    class Program
    {
        static void Main(string[] args)
        {
            string srcFile = Path.GetFullPath(args[0]);
            Excel.Application excelApplication = new Excel.Application();
            excelApplication.Application.DisplayAlerts = false;
            Excel.Workbook srcworkBook = excelApplication.Workbooks.Open(srcFile);
            srcworkBook.Save();
            srcworkBook.Close();
            excelApplication.Quit();
        }
    }
}
Once compiled, the utility can be called from R using e.g. system2().
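For example, a hedged sketch of the R side, assuming the compiled executable is named resaver.exe and is on your PATH:

# Re-save every .xls file in a folder before reading it with readxl
xls_files <- list.files("path/to/xls", pattern = "\\.xls$", full.names = TRUE)
for (f in xls_files) {
  system2("resaver.exe", args = shQuote(f))
}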
I will propose a different workflow. If you happen to have LibreOffice installed, you can convert your Excel files to csv programmatically. I have Linux, so I do it in bash, but it should also be possible on macOS.
So open a terminal and navigate to the folder with your excel files and run in terminal:
for i in *.xls
do
  soffice --headless --convert-to csv "$i"
done
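If you prefer to stay inside R, the same conversion can be driven with system2(); this sketch assumes the soffice binary is on your PATH, and the converted .csv files land in R's current working directory:

xls_files <- list.files("path/to/xls", pattern = "\\.xls$", full.names = TRUE)
for (f in xls_files) {
  system2("soffice", args = c("--headless", "--convert-to", "csv", shQuote(f)))
}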
Now in R you can use data.table::fread to read your files with a loop:
Scenario 1: the structure of files is different
If the structure of files is different, then you wouldn't want to rbind them together. You could run in R:
files <- dir("path/to/files", pattern = "\\.csv$", full.names = TRUE)
all_files <- list()
for (i in seq_along(files)) {
  fileName <- gsub("(^.*/)(.*)(\\.csv$)", "\\2", files[i])
  all_files[[fileName]] <- fread(files[i])
}
If you want to extract your named elements within the list into the global environment, so that they can be converted into objects, you can use list2env:
list2env(all_files, envir = .GlobalEnv)
Please be aware of two things: First, in the gsub call, the direction of the slash. And second, list2env may overwrite objects in your Global Environment if they have the same name as the named elements within the list.
Scenario 2: the structure of files is the same
In that case it's likely you want to rbind them all together. You could run in R:
files <- dir("path/to/files", pattern = "\\.csv$", full.names = TRUE)
joined <- list()
for (i in seq_along(files)) {
  joined[[i]] <- fread(files[i])
}
joined <- rbindlist(joined, fill = TRUE)
On my system, I had to use path.expand().
R> file = "~/blah.xls"
R> read_xls(file)
Error:
filepath: ~/Dropbox/signal/aud/rba/balsheet/data/a03.xls
libxls error: Unable to open file
R> read_xls(path.expand(file)) # fixed
Resaving your file can solve the problem easily.
I also ran into this problem before, but I got the answer from this discussion.
I used read_excel() to open those files.
I was seeing a similar error and wanted to share a short-term solution.
library(readxl)
download.file("https://mjwebster.github.io/DataJ/spreadsheets/MLBpayrolls.xls", "MLBPayrolls.xls")
MLBpayrolls <- read_excel("MLBpayrolls.xls", sheet = "MLB Payrolls", na = "n/a")
Yields (on some systems in my classroom but not others):
Error: filepath: MLBPayrolls.xls libxls error: Unable to open file
The temporary solution was to paste the URL of the xls file into Firefox and download it via the browser. Once this was done we could run the read_excel line without error.
This was happening today on Windows 10, with R 3.6.2 and R Studio 1.2.5033.
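One thing worth checking (an assumption on my part, not something confirmed in the report above) is how the file was downloaded: on Windows, download.file() does not switch to binary mode for .xls URLs by default, and a text-mode transfer corrupts the binary file; passing mode = "wb" avoids that:

# Download in binary mode so the binary .xls file is not corrupted on Windows
download.file("https://mjwebster.github.io/DataJ/spreadsheets/MLBpayrolls.xls",
              "MLBPayrolls.xls", mode = "wb")
MLBpayrolls <- read_excel("MLBPayrolls.xls", sheet = "MLB Payrolls", na = "n/a")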
If you have downloaded the .xls data from the internet, then even when you open it in MS Excel it first shows a prompt asking you to confirm that you trust the source. I am guessing this is the reason R (read_xls) also can't open it, as it is considered unsafe. Save it as an .xlsx file and then use read_xlsx() or read_excel().
Even though this is not a code-based solution, I just changed the file type: instead of .xls I saved the data as .csv or .xlsx, and then opened it as usual.
This worked for me because, when I opened my .xls file, a message popped up: "The file format and extension of 'file.xls' don't match. The file could be corrupted or unsafe..."
I am trying to write a test for a package function in R.
Let's say we have a function that simply writes a string x to disk using writeLines():
exporting_function <- function(x, file) {
  writeLines(x, con = file)
  invisible(NULL)
}
One way of testing it would be to check if a file exists. Typically, it should not exist at first, but after the exporting function was run it should. Also, you might want to test the file size to be greater than 0:
library(testthat)

test_that("file is written to disk", {
  file = 'output.txt'
  expect_false(file.exists(file))
  exporting_function("This is a test",
                     file = file)
  expect_true(file.exists(file))
  expect_gt(file.info('output.txt')$size, 0)
})
Is this a good way to test it? In the CRAN Repository Policy it states that Packages should not write in the user’s home filespace (including clipboards), nor anywhere else on the file system apart from the R session’s temporary directory. Would this test violate this constraint?
There is an expect_output_file function. From the documentation and examples I am not sure whether this is a more appropriate expectation for testing the function. Among other arguments, it requires an object argument, which should be the object to test. What is the object to test in my case?
That looks as if it violates CRAN policy. Why not simply write to the temporary directory, using
file <- tempfile()
in place of
file = 'output.txt'
?
As to whether it is a good test: wouldn't it be better to try reading the file back in, and confirming that what was read matches what was written? That's easy in your toy example. It might be harder in the real one, but having an import function paired with your export function is always a good idea.
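For the toy example, a sketch of that round-trip test, writing to the session's temporary directory so the CRAN policy is respected:

library(testthat)

test_that("exported file round-trips", {
  file <- tempfile(fileext = ".txt")
  on.exit(unlink(file))
  exporting_function("This is a test", file = file)
  expect_true(file.exists(file))
  # reading the file back should give exactly what was written
  expect_identical(readLines(file), "This is a test")
})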
I have a flexdashboard which is used by multiple users. They read, modify and write the same (csv) file. I haven't been able to figure out how to do this with a SQL connection so in the meantime (I need a working app) I would like to use a simple .csv file as a database. This should be fine since the users aren't likely to work on it the exact same time and loading and writing the full file is almost instant.
My strategy is therefore:
1-load file,
2-edit (edits are done in rhandsontable which is backconverted to a dataframe)
3-save:
(a)-loads file again (to get the latest data),
(b)-appends the edits from the rhandsontable and keeps the latest data (indicated by a timestamp)
(c)-write.csv
I'm thinking I should add something in (1) such that it checks whether the file is not already in use/open (because another user is at (3)). So: check if open; if not, continue, else Sys.sleep(3) and try again.
Any ideas about how to do this in R? In Delphi it would be something like:
if fileinuse(filename) then sleep(3) else df<-read.csv
What's the R way?
Edit:
I'm starting with my edited answer, as it's more elegant. This uses a shell command to test whether a file is available, as discussed in this question:
How to check in command-line if a given file or directory is locked (used by any process)?
This avoids loading and saving the file, and is therefore more efficient.
# Function to test availability (uses shell(), so this is Windows-specific)
IsInUse <- function(fName) {
  shell(paste0("( type nul >> ", fName, " ) 2>nul && echo available || echo in use"),
        intern = TRUE) == "in use"
}

# Test availability
IsInUse("test.txt")
Original answer:
Interesting question! I did not find a way to check if a file is in use before trying to write to it. The solution below is far from elegant. It relies on a tryCatch function, and on reading and writing to a file to check if it is available (which can be quite slow depending on your file size).
# Function to check if the file is in use (relies on reading and writing,
# which is inefficient)
IsInUse <- function(fName) {
  rData <- read.csv(fName)
  tryCatch(
    {
      write.csv(rData, file = fName, row.names = FALSE)
      return(FALSE)
    },
    error = function(cond) {
      return(TRUE)
    }
  )
}
# Loop to check if file is in use
while (IsInUse(fName)) {
  print("Still in use")
  Sys.sleep(0.1)
}
# Your action here
I also found the answer to this question useful for making sense of the tryCatch function: How to write trycatch in R.
I'd be interested to see if anyone else has a more elegant suggestion!
Interesting question, indeed! Curious about an elegant solution too...
I have a 370MB zip file and the content is a 4.2GB csv file.
I did:
unzip("year2015.zip", exdir = "csv_folder")
And I got this message:
1: In unzip("year2015.zip", exdir = "csv_folder") :
possible truncation of >= 4GB file
Have you experienced that before? How did you solve it?
I agree with @Sixiang.Hu's answer: R's unzip() won't work reliably with files greater than 4 GB.
To get at "how did you solve it?": I've tried a few different tricks, and in my experience the result of anything using R's built-ins is (almost) invariably an incorrect identification of the end-of-file (EOF) marker before the actual end of the file.
I deal with this issue in a set of files I process on a nightly basis, and to deal with it consistently and in an automated fashion, I wrote the function below to wrap the UNIX unzip. This is basically what you're doing with system(unzip()), but gives you a bit more flexibility in its behavior, and allows you to check for errors more systematically.
decompress_file <- function(directory, file, .file_cache = FALSE) {

  if (.file_cache == TRUE) {
    print("decompression skipped")
  } else {

    # Set working directory for decompression
    # simplifies unzip directory location behavior
    wd <- getwd()
    setwd(directory)

    # Run decompression
    decompression <-
      system2("unzip",
              args = c("-o", # include override flag
                       file),
              stdout = TRUE)

    # uncomment to delete archive once decompressed
    # file.remove(file)

    # Reset working directory
    setwd(wd); rm(wd)

    # Test for success criteria
    # change the search depending on
    # your implementation
    if (grepl("Warning message", tail(decompression, 1))) {
      print(decompression)
    }
  }
}
Notes:
The function does a few things, which I like and recommend:
- uses system2 over system because the documentation says "system2 is a more portable and flexible interface than system"
- separates the directory and file arguments, and moves the working directory to the directory argument; depending on your system, unzip (or your choice of decompression tool) gets really finicky about decompressing archives outside the working directory
  - it's not pure, but resetting the working directory is a nice step toward the function having fewer side effects
  - you can technically do it without this, but in my experience it's easier to make the function more verbose than to deal with generating filepaths and remembering unzip CLI flags
- I set it to use the -o flag to automatically overwrite when rerun, but you could supply any number of arguments
- includes a .file_cache argument which allows you to skip decompression
  - this comes in handy if you're testing a process which runs on the decompressed file, since 4GB+ files tend to take some time to decompress
- commented out in this instance, but if you know you don't need the archive after decompressing, you can remove it inline
- the system2 command redirects the stdout to decompression, a character vector
  - an if + grepl check at the end looks for warnings in the stdout, and prints the stdout if it finds that expression
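For reference, a usage sketch with the archive from the question, assuming it sits in the current working directory:

decompress_file(directory = getwd(), file = "year2015.zip")

# on later runs, skip the slow decompression step
decompress_file(directory = getwd(), file = "year2015.zip", .file_cache = TRUE)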
Checking ?unzip, I found the following comment in the Note section:
It does have some support for bzip2 compression and > 2GB zip files
(but not >= 4GB files pre-compression contained in a zip file: like
many builds of unzip it may truncate these, in R's case with a warning
if possible).
You can try to unzip it outside of R (using 7-Zip for example).
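If 7-Zip is installed, it can also be driven from R rather than by hand; this sketch assumes the 7z executable is on your PATH:

# "x" extracts with full paths; -o sets the output directory (no space after -o)
system2("7z", args = c("x", "year2015.zip", "-ocsv_folder"))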
To add to the list of possible solutions: in case you have Java (JDK) available on your machine, you can wrap jar xf into an R function similar in interface to utils::unzip(). A very simple example:
unzipLarge <- function(zipfile, exdir = getwd()) {
  zipfile <- normalizePath(zipfile)  # resolve the path before changing directory
  oldWd <- getwd()
  on.exit(setwd(oldWd))
  setwd(exdir)
  system2("jar", args = c("xf", shQuote(zipfile)))
}
And then use:
unzipLarge("year2015.zip", exdir = "csv_folder")