I am trying to load a package at start-up if it's already installed. If it isn't, I want to first install it and then load it. So I created the following function:
RLoadPackage <- function(packname)
{
  if((packname %in% rownames(installed.packages())) == FALSE)
  {
    install.packages(packname, dependencies = TRUE)
  }
  library(packname, character.only = TRUE)
}
This works well once RStudio is open, but it doesn't quite work at start-up. I added this function to my local .Rprofile file as:
RLoadPackage("ggplot2")
RLoadPackage <- function(packname)
{
if((packname %in% rownames(installed.packages()))==FALSE)
{
install.packages(packname,dependencies = TRUE)
}
library(packname,character.only = TRUE)
}
However, I get the following error message:
Error: could not find function "RLoadPackage"
One option is to install packages manually and then add a bunch of library("xyz") calls. However, that approach is very clunky, which is why I created the function above.
I have two questions:
1) Can someone please help me get this working?
2) Is there a more efficient way of doing this?
My post is inspired by the following two links:
1) Check for installed packages before running install.packages()
2) http://www.statmethods.net/interface/customizing.html
I'd appreciate any help.
Thanks
Ok. This piece of code works:
library("utils")
RLoadPackage <- function(packname)
{
if((packname %in% rownames(installed.packages()))==FALSE)
{
install.packages(packname,dependencies = TRUE)
}
library(packname,character.only = TRUE)
}
RLoadPackage("ggplot2")
RLoadPackage("dplyr")
RLoadPackage("lubridate")
However, is there a more efficient way of loading multiple packages, maybe a vectorized version of this? I am just curious.
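For what it's worth, here is a minimal sketch of a vectorized wrapper; the name RLoadPackages is mine and not from the original post. It installs whatever is missing in a single install.packages() call and then loads every requested package.
RLoadPackages <- function(packnames)
{
  # install only the packages that are not already present
  missing <- packnames[!(packnames %in% rownames(installed.packages()))]
  if(length(missing) > 0)
  {
    install.packages(missing, dependencies = TRUE)
  }
  # load every requested package; invisible() hides the list lapply() returns
  invisible(lapply(packnames, library, character.only = TRUE))
}
RLoadPackages(c("ggplot2", "dplyr", "lubridate"))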
I am using R and working with a package called "mco": https://cran.r-project.org/web/packages/mco/index.html
I was looking over some of the function definitions used in this package in its GitHub repository, for example: https://github.com/olafmersmann/mco/blob/master/R/nsga2.R
There, I came across the following lines of code:
  res <- .Call(do_nsga2,
               ff, cf, sys.frame(),
               as.integer(odim),
               as.integer(cdim),
               as.integer(idim),
               lower.bounds, upper.bounds,
               as.integer(popsize), as.integer(generations),
               cprob, as.integer(cdist),
               mprob, as.integer(mdist))
  if (1 == length(res)) {
    res <- res[[1]]
    names(res) <- c("par", "value", "pareto.optimal")
    class(res) <- c("nsga2", "mco")
  } else {
    for (i in 1:length(res)) {
      names(res[[i]]) <- c("par", "value", "pareto.optimal")
      class(res[[i]]) <- c("nsga2", "mco")
    }
    class(res) <- "nsga2.collection"
  }
  return (res)
}
The very first line of this code references an object called "do_nsga2", but apart from this function I can't find any other reference to "do_nsga2" in the entire package.
Does anyone know what exactly is being "called"?
Thanks
Note: I am trying to copy/paste all the functions from the GitHub repository into my R session, since I am working on an older computer on which installing packages directly from CRAN is not possible. When I copy/pasted all these functions, I got the following error:
Error in nsga2....
object 'do_nsga2' not found
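For context, and as an assumption on my part rather than something stated above: .Call() dispatches to compiled native code, and inside a package the symbol passed to it is typically a native-symbol object created by a useDynLib() directive in the NAMESPACE file, which would explain why copying only the R source files leaves the object undefined. A minimal sketch of that pattern, with purely illustrative names:
# NAMESPACE (sketch; "mypkg" and "do_work" are illustrative names):
#   useDynLib(mypkg, do_work)

# R/wrapper.R
work <- function(x)
{
  # do_work is the native-symbol object created by useDynLib above,
  # pointing at a C routine compiled from the package's src/ directory
  .Call(do_work, as.numeric(x))
}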
I was trying to get my code to run in parallel in R by using the doParallel package with the foreach package. I am also using the sf package to manipulate shapefiles. I made sure all my code worked in the foreach loop using just %do%, so that if there was an error I could track it down more easily. My code worked fine using foreach and %do%, but when I changed it to %dopar%, R kept giving me the following error:
Error in { : task 1 failed - "could not find function "st_geometry_type""
This happens even though I clearly use require(sf) at the top of the R script. To replicate the error, I made a small function that just prints out "check" if the statement is true:
require(sf)
require(doParallel)

doParallel::registerDoParallel(cores = 2)

testforeach <- function(sfObject)
{
  foreach(i = 1:10) %dopar% {
    if (st_geometry_type(sfObject[i, ]) == "LINESTRING")
    {
      print("check")
    }
  }
}
When I run this code, it throws exactly the same error:
Error in { : task 1 failed - "could not find function "st_geometry_type""
However, when I replace %dopar% with %do%, it prints out all of the expected "check" messages.
Is this a bug in R, or am I missing something? I tried reinstalling my packages, but that didn't seem to have any effect as I continued to get the same error. Any help would be greatly appreciated.
You need to include the packages you will use inside the loop in the .packages argument of the foreach() call:
foreach(i = 1:10, .packages = "sf") %dopar% {
  if (st_geometry_type(sfObject[i, ]) == "LINESTRING")
  {
    print("check")
  }
}
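The reason is that %dopar% evaluates the loop body in worker processes which do not inherit the packages attached in the master session, so .packages tells each worker to load sf first. Putting it together, a self-contained sketch of the example from the question with the fix applied (it still assumes sfObject has at least 10 features):
library(sf)
library(foreach)
library(doParallel)

registerDoParallel(cores = 2)

testforeach <- function(sfObject)
{
  # .packages = "sf" makes each parallel worker load sf before running the body
  foreach(i = 1:10, .packages = "sf") %dopar% {
    if (st_geometry_type(sfObject[i, ]) == "LINESTRING")
    {
      print("check")
    }
  }
}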
Aside from a vignette, I wish to add an additional document as a PDF to my package. I can, of course, copy it to the inst/doc directory, and it will then be included in the package documentation.
However, I would like to make it easy for the user to display this file. The authors of the edgeR package decided to do the following: the main user's guide is distributed as a PDF (it is not a regular vignette), and the authors include a function called edgeRUsersGuide() which shows the PDF by means of the following code:
edgeRUsersGuide <- function (view = TRUE) {
  f <- system.file("doc", "edgeRUsersGuide.pdf", package = "edgeR")
  if (view) {
    if (.Platform$OS.type == "windows")
      shell.exec(f)
    else system(paste(Sys.getenv("R_PDFVIEWER"), f, "&"))
  }
  return(f)
}
It appears to work. Do you think it is a reasonable approach?
Or should one use something else? Potentially, the following code would also work and be more robust:
z <- list(PDF="edgeR.pdf", Dir=system.file(package="edgeR"))
class(z) <- "vignette"
return(z)
My solution was to ape the code in utils:::print.vignette():
function(docfile) {
  ## code inspired by utils:::print.vignette
  pdfviewer <- getOption("pdfviewer")
  f <- system.file("doc", docfile, package = "tmod")

  if(identical(pdfviewer, "false"))
    stop(sprintf("Cannot display the file %s", f))

  if (.Platform$OS.type == "windows" &&
      identical(pdfviewer, file.path(R.home("bin"), "open.exe"))) {
    shell.exec(f)
  } else {
    system2(pdfviewer, shQuote(f), wait = FALSE)
  }

  return(invisible(f))
}
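Assuming the function above is assigned a name such as showDocFile() and exported from the package (both the function name and the file name below are illustrative, not part of tmod), a user could then open the bundled PDF with:
# opens inst/doc/tmodUsersGuide.pdf in the viewer configured via getOption("pdfviewer")
showDocFile("tmodUsersGuide.pdf")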
I am using tensorflow with RStudio, and I am trying to make my code as simple and as functionalized as possible. I was wondering if there is a way to load a library inside a function, without having to do this:
library(tensorflow)
myFunction(args)
Is there a way to embed the first command in the function, so that I don't have to call it each time before using the function? I tried something like this:
Lamdadou <- function(R) {
  library(tensorflow)
  sess <- tf$Session()
  K <- sess$run(R)
  print(K)
}
But an error arises when I call it:
Error: Python module tensorflow was not found.
Within functions, you should use require() rather than library() to load packages.
So your function should look more like this:
Lamdadou <- function(R) {
  if (!require(tensorflow)) {
    stop("tensorflow not installed")
  } else {
    sess <- tf$Session()
    K <- sess$run(R)
    print(K)
  }
}
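A related pattern, offered as a suggestion of mine rather than as part of the original answer, is to use requireNamespace() together with the :: operator, so that the package is loaded without being attached to the search path:
Lamdadou <- function(R) {
  # requireNamespace() loads the package quietly without attaching it
  if (!requireNamespace("tensorflow", quietly = TRUE)) {
    stop("tensorflow not installed")
  }
  # access the tf module explicitly through the package namespace
  sess <- tensorflow::tf$Session()
  K <- sess$run(R)
  print(K)
}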
I'm writing R code that I would like to run in either "non-debug" or "debug" mode. In debug mode, I would like the code to print out runtime information.
In other languages, I would typically have some sort of print function that does nothing unless a flag is turned on, either at compile time or at run time.
For example, I could use #ifdef DEBUG (at compile time), or set a debug level at run time.
What would be the equivalent way of doing this in R?
Same thing, minus the preprocessor:
Define a global variable (or use an options() value)
Insert conditional code that tests for the variable
This works with your functions (adding ..., verbose = getOption("myVerbose")), in your packages, etc.
I have also used it in R scripts (driven by littler), using the CRAN package getopt to pick up a command-line option --verbose or --debug.
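A minimal sketch of the options()-based variant; the option name "myVerbose" and the example function are illustrative, not from the original answer:
options(myVerbose = FALSE)   # default: debug output off

my_fun <- function(x, verbose = getOption("myVerbose", FALSE))
{
  if (isTRUE(verbose)) message("my_fun() called with x = ", x)
  x^2
}

my_fun(3)                    # silent
options(myVerbose = TRUE)    # turn debug output on globally
my_fun(3)                    # prints the diagnostic message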
A slightly fancier version of Dirk's answer:
is_debug_mode <- function()
{
  exists(".DEBUG", envir = globalenv()) &&
    get(".DEBUG", envir = globalenv())
}

set_debug_mode <- function(on = FALSE)
{
  old_value <- is_debug_mode()
  .DEBUG <<- on
  invisible(old_value)
}
Usage is, e.g.,
if(is_debug_mode())
{
  # do some logging or whatever
}
and
set_debug_mode(TRUE) #turn debug mode on
set_debug_mode(FALSE) #turn debug mode off
It might also be worth looking at the Verbose class in the R.utils package, which gives you very fine-grained control over printing run-time information of various sorts.
Extending Richie's code:
You can also check the DEBUG shell environment variable to initialise the flag:
isdbg <- function()
{
  if(exists(".DEBUG", envir = globalenv()))
  {
    return(get(".DEBUG", envir = globalenv()))
  } else # initialise from the shell environment variable
  {
    debugmode <- !(toupper(Sys.getenv("DEBUG")) %in% c("", "FALSE", "0"))
    assign(".DEBUG", debugmode, envir = globalenv())
    return(debugmode)
  }
}

setdbg <- function(on = FALSE)
{
  old_value <- isdbg()
  .DEBUG <<- on
  invisible(old_value)
}

ifdbg <- function(x)
{
  if(isdbg()) x
}
usage:
setdbg(TRUE)  # turn debug mode on
setdbg(FALSE) # turn debug mode off

if(isdbg())
{
  # do some logging or whatever
}
or
ifdbg(...do something here...)
Another possibility is log4r
To quote this page:
log4r: A simple logging system for R, based on log4j
log4r provides an object-oriented logging system that uses an API roughly equivalent to log4j and its related variants.
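A minimal sketch of what that looks like in practice, assuming a recent version of log4r and its logger()/debug()/info() interface (the example is mine, not taken from the page quoted above):
# a logger that prints messages at level DEBUG and above to the console
my_logger <- log4r::logger(threshold = "DEBUG")

log4r::debug(my_logger, "Detailed runtime information")
log4r::info(my_logger, "Normal progress message")

# raising the threshold silences the debug output
my_logger <- log4r::logger(threshold = "INFO")
log4r::debug(my_logger, "This message is suppressed")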