testInstalledPackage in custom libs folder - r

Question: How do I make tools::testInstalledPackage work when I have a custom lib path defined in Rprofile.site?
I'm not an experienced unix user, so I might be wrong about this. There's a line in tools::testInstalledPackage (shown below) which I suppose runs R in vanilla mode, i.e. my custom lib path in Rprofile.site does not get added in.
cmd <- paste(shQuote(file.path(R.home("bin"), "R")),
             "CMD BATCH --vanilla --no-timing", Ropts, shQuote(Rfile),
             shQuote(failfile))
In this case, when I try to test the package zoo, tools::testInstalledPackage returns a zoo-Ex.Rout.fail file with an error saying that there is no package called 'zoo', which makes sense as vanilla R does not know about my custom lib folder.
Is there a way to use tools::testInstalledPackage to test the packages in my custom folder? Or do I have to copy the folder into the default library path?
p.s. my current workaround is to create a new function without the --vanilla flag and attach it to the tools namespace, but I don't think it's a very elegant solution.
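A possible alternative, offered only as a sketch: --vanilla suppresses the profile and .Renviron files, but the child R process started by testInstalledPackage still inherits environment variables from the calling session, and R_LIBS is used to build .libPaths() at startup. So setting R_LIBS before the call may be enough ("~/my/custom/lib" is a placeholder for the custom library path):
# Sketch (untested): expose the custom library to the --vanilla child process
# via the inherited R_LIBS environment variable. "~/my/custom/lib" is a placeholder.
Sys.setenv(R_LIBS = "~/my/custom/lib")
tools::testInstalledPackage("zoo", lib.loc = "~/my/custom/lib")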


How to run R projects / use their relative paths from the terminal without setwd() resp. cd

I'm kinda lost on that one:
I have set up an R project, let's call it "Test Project.Rproj". The beauty of R projects is the possibility to use relative paths (relative to the .Rproj file). My project consists of a "main.R" script, which is saved on the same level as the .Rproj file.
Additionally I have a directory called 'Output', where I want my plots and exported data to be saved. My "main.R" file looks like the following:
my_df <- data.frame(A = 1:10, B = 11:20)
my_df |>
  writexl::write_xlsx(here::here("Output",
                                 paste0("my_df_",
                                        stringr::str_replace_all(as.character(Sys.time()), ":", ""),
                                        ".xlsx")))
My final goal is to automate the execution of the 'main.R' file using the Windows Task Scheduler. But in order to do so, I have to be able to run the script from the terminal. The problem here is the working directory. When opening an R project, all the paths are relative to the .Rproj file. But in the terminal the current working directory is <C:\Users\my_name>. Of course I could manually set the working directory via cd "path\to\my\project", but I would like to avoid that.
My current call for the execution of the main.R file in the terminal is the following:
"C:\Program Files\R\R-4.1.0\bin\Rscript" -e "source('C:/Users/my_name/path/to/my/project/main.R')"
My two ideas for a solution are the following, but I am happy for other suggestions as well.
In order to replicate the usual use of a project: is there a way to open the .Rproj file from the terminal, so that an environment similar to RStudio's is created and all the relative paths work when scripts from the project are executed afterwards?
There are two packages addressing the problem of relative paths: rprojroot and here, where the former is the basis for the latter. I am pretty sure that here does not provide the needed functionality. I tried adding here::i_am("main.R") to my main.R file, but the project root directory is still not found when executing in the terminal from a working directory outside the project.
For rprojroot to work, I think it is also necessary to have your current working directory somewhere within the project. But this package offers a lot of functionality, so I am not sure whether I am overlooking something.
So I would be happy about any help. Maybe it is impossible and I have to change the working directory manually - then I would be glad to know that as well.
Some links I used in my research:
https://www.tidyverse.org/blog/2017/12/workflow-vs-script/
https://malco.io/2018/11/05/why-should-i-use-the-here-package-when-i-m-already-using-projects/
http://jenrichmond.rbind.io/post/how-to-use-the-here-package/
Thanks a lot!
Edit: My current implementation is an additional R script in which I manually set the working directory via setwd() and then source the main.R file. However, it is generally suggested to avoid setwd(), which is why this whole question exists.
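For what it's worth, one pattern that avoids setwd() inside the script is to derive the project root from the script's own path. This assumes the file is invoked directly, e.g. Rscript "C:/Users/my_name/path/to/my/project/main.R", rather than via -e "source(...)", because only then does commandArgs() carry a --file= entry. A sketch of main.R along those lines:
# main.R -- sketch: resolve the project root from the script's own location,
# assuming invocation as: Rscript "C:/.../project/main.R"
args <- commandArgs(trailingOnly = FALSE)
script_path <- sub("^--file=", "", grep("^--file=", args, value = TRUE)[1])
project_root <- dirname(normalizePath(script_path))

my_df <- data.frame(A = 1:10, B = 11:20)
out_file <- file.path(project_root, "Output",
                      paste0("my_df_",
                             stringr::str_replace_all(as.character(Sys.time()), ":", ""),
                             ".xlsx"))
writexl::write_xlsx(my_df, out_file)
The Task Scheduler action would then simply be the Rscript call with the full path to main.R, with no cd or setwd() required.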

Load file implicitly from Path

I am trying to get my head around programming with multiple modules (in different files). I don't want to explicitly load the files with include in the right order.
I am using the Atom IDE as my development platform, so I don't run julia explicitly.
When I just use importall Datastructures (where Datastructures is the name of the module), julia complains:
LoadError: ArgumentError: Module Datastructures not found in current path.
Run `Pkg.add("Datastructures")` to install the Datastructures package.
while loading F:\dev\ai\Interpreter.jl, in expression starting on line 8
There are two ways to build a package or module in julia:
1) Use the tools in PkgDev. You can get them with Pkg.add("PkgDev"); using PkgDev. Now you can use PkgDev.generate("MyPackageName", "MIT") (or whatever license you prefer) to build your package folder. By default, julia will build this folder in the same directory as all your other external packages. On Linux, this would be ~/.julia/v0.6/ (or whatever version you are running). Also by default, this folder will be on the julia path, so you can just type using MyPackageName at the REPL to load it.
Note that julia essentially loads the package by looking for the file ~/.julia/v0.6/MyPackageName/src/MyPackageName.jl and then running it. If your module consists of multiple files, you should have all of them in the ~/.julia/v0.6/MyPackageName/src/ directory, and then have a line of code in the MyPackageName.jl file that says include("MyOtherFileOfCode.jl").
2) If you don't want to keep your package in ~/.julia/v0.6/ for some reason, or you don't want to build your package using PkgDev.generate(), you can of course just set the files up yourself.
Let's assume you want MyPackageName to be stored in the ~/MyCode directory. First, create the directory ~/MyCode/MyPackageName/. Within this directory, I strongly recommend using the same structure that julia and github use, i.e. store all your code in a directory called ~/MyCode/MyPackageName/src/.
At a minimum, you will need a file in this directory called ~/MyCode/MyPackageName/src/MyPackageName.jl (just like in the method above). This file should begin with module MyPackageName and finish with end. Then, put whatever you want in-between (including include calls to other files in the src directory if you wish).
The final step is to make sure that julia can find MyPackageName, which means ~/MyCode needs to be on the julia path. To do this, use push!(LOAD_PATH, "~/MyCode") or push!(LOAD_PATH, "~/MyCode/MyPackageName").
Maybe you don't want to have to run this command every time you want to access MyPackageName. No problem, you just need to add this line to your .juliarc.jl file, which is automatically run every time you start julia. On Linux, your .juliarc.jl file should be in your home directory, i.e. ~/.juliarc.jl. If it isn't there, you can just create it and put whatever code you want in there. If you're on a different OS, you'll have to google where to put your .juliarc.jl.
This answer turned out longer than I planned...

Copy .rmd file included in a Rstudio addin package to a user defined directory

I have an RStudio addin package located here.
One of the addins allows the user to define a directory and it will copy a file that is located in the package to that directory.
The file is located at:
atProjectManageAddins/inst/Docs/RMarkdownSkeleton.Rmd
And I am trying to copy it to the user defined directory with something like this:
file.copy("inst/Docs/RMarkdownSkeleton.Rmd",
paste0(Dir, FolderName, "/Reports/", FolderName, "_report.Rmd"))
Where I am trying to copy it from where it is in the package, to where the user defines it to be (Based on two separate arguments Dir and FolderName).
But this doesn't seem to work. My assumption is that I am not referring to the package directory in the correct way. I've tried ./Inst/, ~/Inst/ and maybe a couple more. My assumption now is that there is a more systematic reason for my inability to get file.copy() to work.
Any suggestions? Is this even possible?
Note that if I run the function locally via source() and runGadget(), it works just fine. Only when the package is installed and I use the RStudio addins GUI, where it references the installed package, does it fail. Thus, I'm quite certain I am not correctly defining the file path for the installed .Rmd files.
Edit: I've changed to the following, based on Carl's suggestion (as can be seen on github), but the files are still not being copied over.
file.copy(system.file("Docs", "Rmarkdownskeleton.rmd", package = "atProjectManageAddins"),
          paste0(Dir, FolderName, "/Reports/", FolderName, "_report.Rmd"))
system.file is the best function for getting a file from a package. I believe this should work for you:
file.copy(system.file("Docs", "Rmarkdownskeleton.rmd", package = "atProjectManageAddins"),
          paste0(Dir, FolderName, "/Reports/", FolderName, "_report.Rmd"))
You got the right idea putting the files in inst/.
Use this code to copy the file from the package dir to the current dir:
file.copy(from = file.path(path.package("packagename"), "path/to/file"),
          to = file.path("path/to/file"), overwrite = TRUE)
file.path creates a path by concatenating the strings passed to it (OS-specific separators are automatically added).
path.package retrieves the path of a loaded package. Files present in inst/ are copied to the root of the package directory when the package is installed, so "path/to/file" here should be the path relative to your inst/ dir.
overwrite can be used to overwrite the file if it already exists.
In your specific case, this should do the trick:
file.copy(file.path(path.package("atProjectManageAddins"), "Docs/RMarkdownSkeleton.Rmd"),
          file.path(getwd(), "Reports", paste0(reportName, "_report.Rmd")))
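For what it's worth, a compact sketch of the same approach (the package name and the Dir/FolderName arguments are taken from the question; system.file() returns "" when the requested file is not found, which surfaces the failure early):
# Sketch: locate the installed template, then copy it into the requested folder.
src <- system.file("Docs", "RMarkdownSkeleton.Rmd", package = "atProjectManageAddins")
if (!nzchar(src)) stop("RMarkdownSkeleton.Rmd not found in the installed package")
dest_dir <- file.path(Dir, FolderName, "Reports")
dir.create(dest_dir, recursive = TRUE, showWarnings = FALSE)
file.copy(src, file.path(dest_dir, paste0(FolderName, "_report.Rmd")), overwrite = TRUE)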

testthat .Rbuildignore + external file (NOTE)

Building a package using testthat for tests; those require an external file which, as recommended, lies in /tests/testthat/my-file.
However the R CMD check produces
Found the following hidden files and directories:
tests/testthat/my-file
The above is a NOTE (Status: 1 NOTE).
If I add my-file to .Rbuildignore (devtools::use_build_ignore("/tests/testthat/my-file")), then the file is, well, ignored during the check, so all tests fail and the package cannot be built.
How can I solve this issue? I understand that a NOTE is passable but I would like to get rid of it nonetheless.
The preferred way (according to Hadley) to load API credentials is via environment variables. If you are sharing the credentials with your package, you can just set them in an .onLoad function that is run when the package namespace is loaded. If you just want to be able to run tests locally using those credentials but not share them, then add them to the global Renviron.site file (or, less conveniently, to an .Renviron file in your working directory). Then you can delete this file from your package structure (or just .Rbuildignore it) and make the tests conditional on the presence of the environment variable, with something like:
if (!identical(Sys.getenv("MY_ENV_VAR"), "")) {
  test_all("package")
}
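A variation on the same idea, using testthat's skip helpers so that credential-dependent tests are reported as skipped rather than silently not running (MY_ENV_VAR is the placeholder name from above):
library(testthat)

test_that("API calls work with live credentials", {
  skip_if(identical(Sys.getenv("MY_ENV_VAR"), ""),
          "MY_ENV_VAR not set; skipping credential-dependent tests")
  expect_true(nzchar(Sys.getenv("MY_ENV_VAR")))  # stand-in for the real API expectations
})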

Can I access an external file when testing an R package?

I am using the testthat package to test an R package that is within a larger repository. I would like to test the contents of a file outside of the R package.
Can I reference a file that is located outside of an R package while testing?
What I have tried
A reproducible example can be downloaded as MyRepo.tar.gz
My repository is called "MyRepo", and it includes an R package, "MyRpkg", and a folder full of miscellaneous scripts:
~/MyRepo/
~/MyRepo/MyRpkg
~/MyRepo/Scripts
The tests in "MyRpkg" are in the /tests/ folder
~/MyRepo/MyRpkg/tests/test.myscript.R
And I want to be able to test a file in the Scripts folder:
~/MyRepo/Scripts/myscript.sh
I would like to read the script to test the contents of the first line doing something like this:
check.script <- readLines("../../../Scripts/myscript.sh")[1]
expect_true(grepl("echo", check.script))
This works fine if I start from the MyRepo directory:
cd ~/MyRepo
R CMD check MyRpkg
But if I move to another directory, it fails:
cd
R CMD check MyRepo/MyRpkg
As it says in R-exts:
The directory tests is copied to the check area, and the tests are run with the copy as the working directory and with R_LIBS set to ensure that the copy of the package installed during testing will be found by library(pkg_name).
By default, the check directory is created in the current directory. Thus when running R CMD check from ~/MyRepo, the tests directory is copied to ~/MyRepo/MyRpkg.Rcheck/tests and hence
check.script <- readLines("../../../Scripts/myscript.sh")[1]
is interpreted as
check.script <- readLines("~/MyRepo/Scripts/myscript.sh")[1]
as required. However, starting from ~/ would imply
check.script <- readLines("~/Scripts/myscript.sh")[1]
which isn't what you want. A work-around is to specify the directory in which the check directory is created, i.e.
R CMD check -o MyRepo MyRepo/MyRpkg
so that the copied tests directory has the same "grandparent" as the original tests directory.
Still, I wonder why the file must be external to the package. If you want to use the file in the package tests, it would make sense to include the file in the package. You could create an inst directory and put the Scripts directory in there, so that the Scripts directory will be copied to the package directory on installation and then
check.script <- readLines("../foo/Scripts/myscript.sh")[1]
could be used inside the test script, since the package is installed in MyRpkg.Rcheck/foo during R CMD check. Alternatively you could create an exec directory and put the script file in there, then
check.script <- readLines("../foo/exec/myscript.sh")[1]
would work. As both of these solutions only need to find the package installed during testing it wouldn't matter where you ran R CMD check from. See Package subdirectories and Non-R scripts in packages for more info.
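With the inst/Scripts layout suggested above, the installed copy can also be located with system.file() instead of hard-coding the ../foo layout of the check directory; a sketch of such a test (file names are the ones from the question):
library(testthat)

test_that("myscript.sh starts with an echo command", {
  script <- system.file("Scripts", "myscript.sh", package = "MyRpkg")
  skip_if(!nzchar(script), "Scripts/myscript.sh is not shipped with the installed package")
  check.script <- readLines(script)[1]
  expect_true(grepl("echo", check.script))
})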
