What is the difference between require() and library()?
There's not much of one in everyday work.
However, according to the documentation for both functions (accessed by putting a ? before the function name and hitting enter), require is used inside functions, as it outputs a warning and continues if the package is not found, whereas library will throw an error.
Another benefit of require() is that it returns a logical value by default: TRUE if the package is loaded, FALSE if it isn't.
> test <- library("abc")
Error in library("abc") : there is no package called 'abc'
> test
Error: object 'test' not found
> test <- require("abc")
Loading required package: abc
Warning message:
In library(package, lib.loc = lib.loc, character.only = TRUE, logical.return = TRUE, :
there is no package called 'abc'
> test
[1] FALSE
So you can use require() in constructions like the one below, which is mainly handy if you want to distribute your code to other R installations where packages might not be installed.
if(require("lme4")){
print("lme4 is loaded correctly")
} else {
print("trying to install lme4")
install.packages("lme4")
if(require(lme4)){
print("lme4 installed and loaded")
} else {
stop("could not install lme4")
}
}
In addition to the good advice already given, I would add this:
It is probably best to avoid using require() unless you will actually use the value it returns, e.g. in some error-checking construct such as the one given by Thierry.
In most other cases it is better to use library(), because this will give an error message at package-loading time if the package is not available, whereas require() will just fail without an error if the package is not there. Loading time is the best time to find out whether the package needs to be installed (or perhaps doesn't even exist because it is spelled wrong). Getting error feedback early and at the relevant time will avoid possible headaches with tracking down why later code fails when it attempts to use library routines.
You can use require() if you want to install packages if and only if necessary, such as:
if (!require(package, character.only = TRUE, quietly = TRUE)) {
  install.packages(package)
  library(package, character.only = TRUE)
}
For multiple packages you can use
for (package in c('<package1>', '<package2>')) {
  if (!require(package, character.only = TRUE, quietly = TRUE)) {
    install.packages(package)
    library(package, character.only = TRUE)
  }
}
Pro tips:
When used inside a script, you can avoid a dialog screen by specifying the repos parameter of install.packages(), such as
install.packages(package, repos="http://cran.us.r-project.org")
You can wrap require() and library() in suppressPackageStartupMessages() to, well, suppress package startup messages, and also use the parameters require(..., quietly=T, warn.conflicts=F) if needed to keep the installs quiet.
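For example, a minimal sketch of quiet loading (the package name is just an example):
# attach the package without startup messages or conflict warnings
suppressPackageStartupMessages(
  require("lme4", quietly = TRUE, warn.conflicts = FALSE)
)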
Always use library. Never use require.
tl;dr: require breaks one of the fundamental rules of robust software systems: fail early.
In a nutshell, this is because, when using require, your code might yield different, erroneous results, without signalling an error. This is rare but not hypothetical! Consider this code, which yields different results depending on whether {dplyr} can be loaded:
require(dplyr)
x = data.frame(y = seq(100))
y = 1
filter(x, y == 1)
This can lead to subtly wrong results. Using library instead of require throws an error here, signalling clearly that something is wrong. This is good.
It also makes debugging all other failures more difficult: If you require a package at the start of your script and use its exports in line 500, you’ll get an error message “object ‘foo’ not found” in line 500, rather than an error “there is no package called ‘bla’”.
The only acceptable use case of require is when its return value is immediately checked, as some of the other answers show. This is a fairly common pattern but even in these cases it is better (and recommended, see below) to instead separate the existence check and the loading of the package. That is: use requireNamespace instead of require in these cases.
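A minimal sketch of that recommended pattern (the package name is just an example):
# check availability without attaching the package
if (!requireNamespace("dplyr", quietly = TRUE)) {
  stop("package 'dplyr' is required but not installed")
}
# then attach it explicitly, or call its functions via dplyr::
library(dplyr)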
More technically, require actually calls library internally (if the package wasn’t already attached — require thus performs a redundant check, because library also checks whether the package was already loaded). Here’s a simplified implementation of require to illustrate what it does:
require = function (package) {
  # is the package already on the search path?
  already_attached = paste0('package:', package) %in% search()
  if (already_attached) return(TRUE)
  # otherwise try to attach it, catching any error
  maybe_error = try(library(package, character.only = TRUE))
  success = ! inherits(maybe_error, 'try-error')
  if (! success) cat("Failed")
  success
}
Experienced R developers agree:
Yihui Xie, author of {knitr}, {bookdown} and many other packages says:
Ladies and gentlemen, I've said this before: require() is the wrong way to load an R package; use library() instead
Hadley Wickham, author of more popular R packages than anybody else, says
Use library(x) in data analysis scripts. […]
You never need to use require() (requireNamespace() is almost always better)
?library
and you will see:
library(package) and require(package) both load the package with name
package and put it on the search list. require is designed for use
inside other functions; it returns FALSE and gives a warning (rather
than an error as library() does by default) if the package does not
exist. Both functions check and update the list of currently loaded
packages and do not reload a package which is already loaded. (If you
want to reload such a package, call detach(unload = TRUE) or
unloadNamespace first.) If you want to load a package without putting
it on the search list, use requireNamespace.
My initial theory about the difference was that library loads the package whether it is already loaded or not, i.e. it might reload an already loaded package, while require just checks whether it is loaded, and loads it if it isn't (hence its use in functions that rely on a certain package). The documentation refutes this, however, and explicitly states that neither function will reload an already loaded package.
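If you do want to force a reload, the idiom the documentation mentions looks like this (the package name is just an example):
# detach and unload, then attach again to force a reload
detach("package:lme4", unload = TRUE)
library(lme4)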
Here is what seems to be the difference for an already loaded package.
While it is true that neither require nor library reloads an already loaded package, library does a lot of other things before it checks and exits.
I would recommend removing "require" from the beginning of a function that runs 2 million times anyway, but if, for some reason, I needed to keep it: require is technically the faster check.
microbenchmark(req = require(microbenchmark), lib = library(microbenchmark),times = 100000)
Unit: microseconds
expr min lq mean median uq max neval
req 3.676 5.181 6.596968 5.655 6.177 9456.006 1e+05
lib 17.192 19.887 27.302907 20.852 22.490 255665.881 1e+05
I'm running a new script in R and I'm trying to call an .exe file using the function system() from R.
I run :
system("C:/Program Files (x86)/OxMetrics6/ox/bin/oxl.exe I:/Code R/GarchOxModelling.ox", show.output.on.console = TRUE, wait = TRUE)
But it seems to do nothing. And when I launch the file GarchOxModelling.ox manually, it works.
Would you have any idea how to make it work from R?
Thanks in advance
Without testing, try
ret <- system(paste(shQuote("C:/Program Files (x86)/OxMetrics6/ox/bin/oxl.exe"),
shQuote("I:/Code R/GarchOxModelling.ox")),
show.output.on.console = TRUE, wait = TRUE)
A few issues with the code you provided in your question:
It is syntactically wrong R:
"system(C:/Program Fil...", show.output.on.console = TRUE, wait = TRUE)
is like typing in
"ABC", x=1, y=2)
which should error.
Even assuming that the misplaced leading quote is just a typo, you need to start the quote at the beginning of the executable name, as in
system("C:/Program File...", ...)
Further, this string is being passed verbatim to the shell. While Windows sometimes guesses correctly about embedded spaces, it's really not good practice to assume this will happen all of the time, so you should manually quote all of your arguments that either (a) include a space in them, or (b) you do not know because they are variables. In this case, I prefer shQuote, but dQuote might be sufficient.
system(paste(shQuote("C:/Program Files (x86)/OxMetrics6/ox/bin/oxl.exe"),
shQuote("I:/Code R/GarchOxModelling.ox")),
show.output.on.console = TRUE, wait = TRUE)
I suggest that you consider using intern=TRUE instead of show.output..., so that you can perhaps programmatically verify the output is what you expect.
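For example, a sketch of capturing the output instead (same paths as in the question):
out <- system(paste(shQuote("C:/Program Files (x86)/OxMetrics6/ox/bin/oxl.exe"),
                    shQuote("I:/Code R/GarchOxModelling.ox")),
              intern = TRUE)
# 'out' is a character vector of the program's console output,
# which you can inspect or test programmatically
print(out)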
Last suggestion, I find the processx package much more reliable for calls like this,
# library(processx)
ret <- processx::run("C:/Program Files (x86)/OxMetrics6/ox/bin/oxl.exe", "I:/Code R/GarchOxModelling.ox")
where quoting is handled automatically.
Is there any way to reproduce the environment which is used by devtools::check?
I have the problem that my tests work with devtools::test() but fail within devtools::check(), and I can't figure out how to track down the cause. The report from check just prints the last few lines of the error log, and I can't find the complete report of the testing.
checking tests ... ERROR
Running the tests in ‘tests/testthat.R’ failed.
Last 13 lines of output:
...
I know that check uses a different environment compared to test, but I don't know how to debug these problems since they are not reproducible at all. Especially since these tests were running a few months ago, I'm not sure where to look for the problem.
EDIT
Actually I tried to locate my problem and found a solution, but to post it I have to add some more details.
My test always failed because I was testing whether an R Markdown script runs without errors and afterwards checking whether some of the environment variables were set correctly. These were results which I calculate with the script as well as default settings which I set, so I wanted to get a warning if I forgot to change some of my settings after developing...
Anyway, since it is an R Markdown script, I had to extract the code; following the comments from the post "knitr: run all chunks in an Rmarkdown document", I used knitr::purl to get the code and sys.source to execute it.
runAllChunks <- function(rmd, envir = globalenv()) {
  # as found here https://stackoverflow.com/questions/24753969
  tempR <- tempfile(tmpdir = '.', fileext = ".R")
  on.exit(unlink(tempR))
  knitr::purl(rmd, output = tempR, quiet = TRUE)
  sys.source(tempR, envir = envir)
}
For some reason, this has been producing an error for maybe a few weeks now (I'm not sure which new packages I installed lately...). But a new comment on that post points out that I can just use knitr::knit, which also executes the code; this worked as expected and now my test no longer complains.
So in the end, I don't know where the problem exactly was, but this is now working.
I recently had a similar issue with my tests breaking (succeeding with devtools::test() but failing with devtools::check()). I don't know if this solution necessarily fixes the problem above, but it should help to track down similar problems.
In my case, the problem ultimately came down to using a function that needed a package listed in Suggests rather than in Imports/Depends. In particular, my function called httr::content(), which broke when I tried to pass it the as = "parsed" argument. It turns out that as = "parsed" uses a suggested package, readr, to read a CSV, and I needed to add it to my dependencies for devtools::check() to work.
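If it helps, a minimal sketch of declaring that dependency (assuming you manage the package with usethis):
# add readr to the Imports field of DESCRIPTION
usethis::use_package("readr")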
This is a known issue with testthat. The workaround is to add the following as the 1st line in tests/testthat.R:
Sys.setenv(R_TESTS="")
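For context, a typical tests/testthat.R would then start like this (the package name is a placeholder):
Sys.setenv(R_TESTS = "")
library(testthat)
library(yourpackage)
test_check("yourpackage")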
In case it helps someone else, this is what worked for me:
Re-install all relevant packages, e.g. install.packages(c("testthat", "dplyr", "lubridate", "stringr")) (I included all packages my package uses)
Close RStudio and reopen
Then all tests passed
I spent much too long looking into this error, so I'm hoping I can help someone out in the future. I would like to add that I was getting this error while using ggplot2::autoplot() in my function, and it required that I add @import ggfortify to the Roxygen skeleton part of my function.
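To illustrate, a sketch of what such a Roxygen block might look like (the function name and body are made up):
#' Plot a fitted model object
#'
#' @import ggfortify
#' @export
plot_fit <- function(fit) {
  ggplot2::autoplot(fit)
}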
I ran into the same issue with my tests failing under devtools::check() while not failing under testthat::test().
None of the above applied to my problem, so I decided to post my issue plus solution here as well. But first a note from my experience:
devtools::check() does, so it seems, deeper error checking than your own written tests.
Now to my code setup. I had a function that was built to retrieve values from two different files. Those files contained named profiles with a set of values per profile. But the profiles were named differently, depending on the file:
Example files:
Content of file_one:
[default]
value_A = "foo"
value_B = "bar"
value_C = "baz"
[peter]
value_A = "oof"
value_B = "rab"
value_C = "zab"
content of file_two:
[default]
value_X = "fuzzly"
value_Z = "puzzly"
[profile peter]
value_X = "fuzzly"
value_Z = "puzzly"
As you can see, the naming in file two follows another convention when it comes to the named profiles. The profiles are written in "[]" and the default profile is always '[default]' in both files. But as soon as it comes to named profiles, it's just '[name]' in one file and '[profile name]' in the other.
Now I had built the function like this (simplified):
get_value <- function(file, what, profile) {
  file_content <- readr::read_lines(file)
  all_profiles_at <- grep("\\[.*\\]", file_content)
  profile_regex <- paste0("\\[", if (file_content == "file_two" && profile != "default") "profile ", profile, "\\]")
  profile_at <- grep(profile_regex, file_content)
  profile_ends_at <- if (profile_at == max(all_profiles_at)) length(file_content) else all_profiles_at[grep(paste0("^", profile_at, "$"), all_profiles_at) + 1] - 1
  profile_content <- file_content[profile_at:profile_ends_at]
  whole_what <- stringr::str_replace_all(profile_content[grep(paste0("^", what, ".*"), profile_content)], " ", "")
  return(stringr::str_sub(whole_what, stringr::str_length(paste0(what, "=."))))
}
With this code my tests ran smoothly and even check() found no issues.
While the whole code evolved, I figured that I should read the file's content beforehand and pass only the already read-in content to the function, to avoid duplication in my code. So I changed the function like so:
get_value <- function(file_content, what, profile) {
  is_file_two <- is_file_two(file_content)
  all_profiles_at <- grep("\\[.*\\]", file_content)
  profile_regex <- paste0("\\[", if (file_content == "file_two" && profile != "default") "profile ", profile, "\\]")
  profile_at <- grep(profile_regex, file_content)
  profile_ends_at <- if (profile_at == max(all_profiles_at)) length(file_content) else all_profiles_at[grep(paste0("^", profile_at, "$"), all_profiles_at) + 1] - 1
  profile_content <- file_content[profile_at:profile_ends_at]
  whole_what <- stringr::str_replace_all(profile_content[grep(paste0("^", what, ".*"), profile_content)], " ", "")
  return(stringr::str_sub(whole_what, stringr::str_length(paste0(what, "=."))))
}
As you might notice, I only changed the first line of the function body and left the if-condition unchanged - my mistake!
But my tests didn't throw an error, as the if-condition still "worked", even though the file_content == "file_two" part now generated a logical vector. if() ... else ... normally throws a warning when the condition has length > 1, but the special construct with && doesn't, as it returns a length-1 logical:
# with warning
if(c(FALSE, FALSE, FALSE)) "Done!" else "Not done!"
# no warning:
if(c(FALSE, FALSE, FALSE) && TRUE) "Done!" else "Not done!"
That's why my tests with testthat::test() still worked.
But devtools::check() saw this flaw in my code and the tests failed!
And that part of the FAILURE_REPORT showed me my errors:
[...]
where 41: test_check("my_package_name")
--- value of length: 18 type: logical ---
[1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
[13] FALSE FALSE FALSE FALSE FALSE FALSE
--- function from context ---
[...]
Conclusion:
testthat::test() is great! It checks whether or not your code still runs. But devtools::check() goes far deeper - and when your tests pass with testthat::test() but fail with devtools::check(), you've probably got some deeper bugs and flaws in your code that you MUST attend to!
So as I briefly mentioned above, I changed some of my code to no longer use knitr::purl but knitr::knit instead, and this solved my problem.
expect_error(f <- runAllChunks('010_main_lfq_analysis.Rmd'), NA)
expect_error(f <- knitr::knit('010_main_lfq_analysis.Rmd', output='jnk.R', quiet=TRUE, envir=globalenv()), NA)
This can also happen in the following scenario: you have a library already loaded in R and you refer to a function in that library without namespace qualification. For example, suppose you use the nnzero() function from the Matrix package in a test file and happen to also have the Matrix package already loaded with library(Matrix). Then devtools::test() will pass but devtools::check() fails. Using Matrix::nnzero() should fix the problem.
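For illustration, a minimal sketch of the fix inside a test (the test itself is made up):
library(testthat)
test_that("sparse matrix starts empty", {
  m <- Matrix::Matrix(0, nrow = 2, ncol = 2, sparse = TRUE)
  # qualify the call so it also works when Matrix is not attached
  expect_equal(Matrix::nnzero(m), 0)
})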
Is there a way to programmatically find the path of an R script inside the script itself?
I am asking this because I have several scripts that use RGtk2 and load a GUI from a .glade file.
In these scripts I am obliged to put a setwd("path/to/the/script") instruction at the beginning, otherwise the .glade file (which is in the same directory) will not be found.
This is fine, but if I move the script in a different directory or to another computer I have to change the path. I know, it's not a big deal, but it would be nice to have something like:
setwd(getScriptPath())
So, does a similar function exist?
This works for me:
getSrcDirectory(function(x) {x})
This defines an anonymous function (that does nothing) inside the script, and then determines the source directory of that function, which is the directory where the script is.
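For the OP's use case, a small sketch (this only works when the script is run via source() with source references kept, which is the default interactively):
# set the working directory to the directory containing this script
this_dir <- getSrcDirectory(function(x) {x})
setwd(this_dir)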
For RStudio only:
setwd(dirname(rstudioapi::getActiveDocumentContext()$path))
This works when Running or Sourcing your file.
Use source("yourfile.R", chdir = T)
Exploit the implicit "--file" argument of Rscript
When calling the script using "Rscript" (Rscript doc) the full path of the script is given as a system parameter. The following function exploits this to extract the script directory:
getScriptPath <- function() {
  cmd.args <- commandArgs()
  m <- regexpr("(?<=^--file=).+", cmd.args, perl = TRUE)
  script.dir <- dirname(regmatches(cmd.args, m))
  if (length(script.dir) == 0) stop("can't determine script dir: please call the script with Rscript")
  if (length(script.dir) > 1) stop("can't determine script dir: more than one '--file' argument detected")
  return(script.dir)
}
If you wrap your code in a package, you can always query parts of the package directory.
Here is an example from the RGtk2 package:
> system.file("ui", "demo.ui", package="RGtk2")
[1] "C:/opt/R/library/RGtk2/ui/demo.ui"
>
You can do the same with a directory inst/glade/ in your sources which will become a directory glade/ in the installed package -- and system.file() will compute the path for you when installed, irrespective of the OS.
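A minimal sketch of that layout, assuming your package is called mypackage and the file lives at inst/glade/mygui.glade in the sources:
# after installation the file is found under glade/ in the package directory,
# wherever the package happens to be installed
glade_path <- system.file("glade", "mygui.glade", package = "mypackage")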
This answer works fine for me:
script.dir <- dirname(sys.frame(1)$ofile)
Note: script must be sourced in order to return correct path
I found it in: https://support.rstudio.com/hc/communities/public/questions/200895567-can-user-obtain-the-path-of-current-Project-s-directory-
But I still don't understand what sys.frame(1)$ofile is. I didn't find anything about it in the R documentation. Can someone explain it?
#' current script dir
#' @param
#' @return
#' @examples
#' works with source() or in RStudio Run selection
#' @export
z.csd <- function() {
  # http://stackoverflow.com/questions/1815606/rscript-determine-path-of-the-executing-script
  # must work with source()
  if (!is.null(res <- .thisfile_source())) res
  else if (!is.null(res <- .thisfile_rscript())) dirname(res)
  # http://stackoverflow.com/a/35842176/2292993
  # RStudio only, can work without source()
  else dirname(rstudioapi::getActiveDocumentContext()$path)
}

# Helper functions
.thisfile_source <- function() {
  for (i in -(1:sys.nframe())) {
    if (identical(sys.function(i), base::source))
      return(normalizePath(sys.frame(i)$ofile))
  }
  NULL
}

.thisfile_rscript <- function() {
  cmdArgs <- commandArgs(trailingOnly = FALSE)
  cmdArgsTrailing <- commandArgs(trailingOnly = TRUE)
  cmdArgs <- cmdArgs[seq.int(from = 1, length.out = length(cmdArgs) - length(cmdArgsTrailing))]
  res <- gsub("^(?:--file=(.*)|.*)$", "\\1", cmdArgs)
  # If multiple --file arguments are given, R uses the last one
  res <- tail(res[res != ""], 1)
  if (length(res) > 0)
    return(res)
  NULL
}
A lot of these solutions are several years old. While some may still work, there are good reasons against using each of them (see the linked source below). The best solution I have found (also from that source) is to use the here package.
Original example code:
library(ggplot2)
setwd("/Users/jenny/cuddly_broccoli/verbose_funicular/foofy/data")
df <- read.delim("raw_foofy_data.csv")
Revised code
library(ggplot2)
library(here)
df <- read.delim(here("data", "raw_foofy_data.csv"))
This solution is the most dynamic and robust because it works regardless of whether you are using the command line, RStudio, calling from an R script, etc. It is also extremely simple to use and is succinct.
Source: https://www.tidyverse.org/articles/2017/12/workflow-vs-script/
I have found something that works for me.
setwd(dirname(rstudioapi::getActiveDocumentContext()$path))
How about using system and shell commands? With the Windows one, I think that when you open the script in RStudio it sets the current shell directory to the directory of the script. You might have to add cd C:\ or whatever drive you want to search (e.g. shell('dir C:\\*file_name /s', intern = TRUE) - the \\ is to escape the escape character). This will only work for uniquely named files unless you further specify subdirectories (for Linux I started searching from /). In any case, if you know how to find something in the shell, this provides a layout to find it within R and return the directory. It should work whether you are sourcing or running the script, but I haven't fully explored the potential bugs.
#Get operating system
OS<-Sys.info()
win<-length(grep("Windows",OS))
lin<-length(grep("Linux",OS))
#Find path of data directory
#Linux Bash Commands
if(lin==1){
  file_path<-system("find / -name 'file_name'", intern = TRUE)
  data_directory<-gsub('/file_name',"",file_path)
}
#Windows Command Prompt Commands
if(win==1){
  file_path<-shell('dir file_name /s', intern = TRUE)
  file_path<-file_path[4]
  file_path<-gsub(" Directory of ","",file_path)
  file_path<-gsub("\\\\","/",file_path)
  data_directory<-file_path
}
#Change working directory to location of data and sources
setwd(data_directory)
Thank you for the function, though I had to adjust it a little as follows for me (Windows 10):
#Windows Command Prompt Commands
if(win==1){
  file_path<-shell('dir file_name', intern = TRUE)
  file_path<-file_path[4]
  file_path<-gsub(" Verzeichnis von ","",file_path)
  file_path<-chartr("\\","/",file_path)
  data_directory<-file_path
}
In my case, I needed a way to copy the executing file to back up the original script together with its outputs. This is relatively important in research. What worked for me while running my script on the command line was a mixture of the other solutions presented here, and it looks like this:
library(scriptName)
file_dir <- gsub("\\", "/", fileSnapshot()$path, fixed=TRUE)
file.copy(from = file.path(file_dir, scriptName::current_filename()) ,
to = file.path(new_dir, scriptName::current_filename()))
Alternatively, one can add the date and hour to the file name, to help distinguish that file from the source, like this:
file.copy(from = file.path(file_dir, current_filename()),
          to = file.path(new_dir, subDir, paste0(current_filename(), "_", Sys.time(), ".R")))
None of the solutions given so far work in all circumstances. Worse, many solutions use setwd, and thus break code that expects the working directory to be, well, the working directory, i.e. the one that the user of the code chose (I realise that the question asks about setwd(), but this doesn't change the fact that using it is generally a bad idea).
R simply has no built-in way to determine the path of the currently running piece of code.
A clean solution requires a systematic way of managing non-package code. That’s what ‘box’ does. With ‘box’, the directory relative to the currently executing code can be found trivially:
box::file()
However, that isn’t the purpose of ‘box’; it’s just a side-effect of what it actually does: it implements a proper, modern module system for R. This includes organising code in (nested) modules, and hence the ability to load code from modules relative to the currently running code.
To load code with ‘box’ you wouldn’t use e.g. source(file.path(box::file(), 'foo.r')). Instead, you’d use
box::use(./foo)
However, box::file() is still useful for locating data (i.e. the OP's use-case). So, for instance, to locate a file mygui.glade from the current module's path, you would write:
glade_path = box::file('mygui.glade')
And (as long as you’re using ‘box’ modules) this always works, doesn’t require any hacks, and doesn’t use setwd.