Suppose I replace a function of a package, for example knitr:::sub_ext.
(Note: I'm particularly interested in the case where it is an internal function, i.e. only accessible by ::: as opposed to ::, but the same answer may work for both.)
library(knitr)
my.sub_ext <- function (x, ext) {
  return("I'm in your package stealing your functions D:")
}
# replace knitr:::sub_ext with my.sub_ext
knitr <- asNamespace('knitr')
unlockBinding('sub_ext', knitr)
assign('sub_ext', my.sub_ext, knitr)
lockBinding('sub_ext', knitr)
Question: is there any way to retrieve the original knitr:::sub_ext after I've done this? Preferably without reloading the package?
(I know some people will want to know why I would do this, so here it is; not required reading for the question.) I've been patching some functions in packages like so (not actually the sub_ext function...):
original.sub_ext <- knitr:::sub_ext
new.sub_ext <- function (x, ext) {
  # some extra code that does something first, e.g.
  x <- do.something.with(x)
  # now call the original knitr:::sub_ext
  original.sub_ext(x, ext)
}
# now set knitr:::sub_ext to new.sub_ext like before.
I agree this is not in general a good idea (in most cases these are quick fixes until changes make their way into CRAN, or they are "feature requests" that would never be approved because they are somewhat case-specific).
The problem with the above is that if I accidentally execute it twice (e.g. it's at the top of a script that I run twice without restarting R in between), then on the second run original.sub_ext is actually the previous new.sub_ext rather than the real knitr:::sub_ext, so I get infinite recursion.
Since sub_ext is an internal function (I wouldn't call it directly, but functions from knitr like knit all call it internally), I can't hope to modify all the functions that call sub_ext to call new.sub_ext manually, hence the approach of replacing the definition in the package namespace.
When you do assign('sub_ext', my.sub_ext, knitr), you are irrevocably overwriting the value previously associated with sub_ext with the value of my.sub_ext. If you first stash the original value, though, it's not hard to reset it when you're done:
library(knitr)
knitr <- asNamespace("knitr")
## Store the original value of sub_ext
.sub_ext <- get("sub_ext", envir = knitr)
## Overwrite it with your own function
my.sub_ext <- function (x, ext) "I'm in your package stealing your functions D:"
assignInNamespace('sub_ext', my.sub_ext, knitr)
knitr:::sub_ext("eg.csv", "pdf")
# [1] "I'm in your package stealing your functions D:"
## Reset when you're done
assignInNamespace('sub_ext', .sub_ext, knitr)
knitr:::sub_ext("eg.csv", "pdf")
# [1] "eg.pdf"
Alternatively, as long as you are just adding lines of code to what's already there, you could add that code using trace(). What's nice about trace() is that, when you are done, you can use untrace() to revert the function's body to its original form:
trace(what = "mean.default",
tracer = quote({
a <- 1
b <- 2
x <- x*(a+b)
}),
at = 1)
mean(1:2)
# Tracing mean.default(1:2) step 1
# [1] 4.5
untrace("mean.default")
# Untracing function "mean.default" in package "base"
mean(1:2)
# [1] 1.5
Note that if the function you are tracing is in a namespace, you'll want to use trace()'s where argument, passing it the name of some other (exported) function that shares the to-be-traced function's namespace. So, to trace an unexported function in knitr's namespace, you could set where = knit.
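For example, something like this (a sketch; the tracer body is just an assumption, and knit is any exported function living in knitr's namespace):

trace(what = "sub_ext",
      tracer = quote(message("sub_ext called with ext = ", ext)),
      where = knit)
# ... call knit() etc.; sub_ext will now emit a message on entry ...
untrace("sub_ext", where = knit)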
I have a script with my most commonly used functions which I source at the top of most scripts. Sometimes I only want to get one of the functions in that script, but I don't know how to indicate that I only want one specific function. I'm looking for a function that is similar to the :: used to get a function inside a package. A reproducible example:
# file a.R
foo <- function() cat("Hello!\n")
bar <- function() cat("Goodbye!\n")
# End of file a.R
# file b.R
# Can't just delete all functions
fun <- function(x) print(x)
fun("It's so late!")
source("a.R")
foo()
fun("See you next time")
# End of file
I read the "source" help and it was unhelpful to me. The solution I currently have is to assign a variable at the start of the script with the functions loaded before, then set the difference with what was there after:
list_before <- lsf.str()
# content of file b.R
new_funcs <- setdiff(lsf.str(),list_before)
Then I can use rm(list=new_funcs[-1]) to keep only the function I wanted. This is, however, a very convoluted way of doing it, and I was hoping to find an easier solution.
A good way would be to write a package but it requires more knowledge (not there myself).
A good alternative I found is to use the box package, which allows you to import functions from an R script as a module.
You can import all functions or specific functions.
To set up a function as a module, you would use the roxygen2 documentation syntax as such:
#' This is a function to calculate a sum
#' @export
my_sum <- function(x, y){
  x + y
}

#' This is a function to calculate a difference
#' @export
my_diff <- function(x, y){
  x - y
}
Save the file as an R script, e.g. "my_module.R".
The #' @export tag in the documentation tells box which functions of the module to export. Then you can call box to reach a specific function in the module named "my_module".
Let's say your project directory has a script folder that contains your scripts and modules, you would import functions as such:
box::use(script/my_module)
my_module$my_sum(x, y)
box::use() creates an environment that contains all the functions found inside the module.
You can also import single functions, as follows. Let's assume your directory is a bit more complex as well, with modules inside a box folder inside script.
box::use(./script/box/my_module[my_sum])
my_sum(x, y)
You can use box to fetch functions from packages as well. In a sense, it is better than calling library(), which would import all the functions in the package.
Using box, you can organize scripts by objective, or whatever organization you have in place.
For example, I have a script of string helpers from which I fetch the functions that work with strings, a script of plot functions that I use across my projects, etc.
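As noted above, box can also fetch from installed packages; a minimal sketch (assuming dplyr is installed):

box::use(dplyr[filter, select])  # attach only these two functions
filter(mtcars, cyl == 6)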
insertSource() would help.
In your example, let's presume we need to import foo() from a.R :
# file b.R
foo <- function(){}
insertSource("a.R", functions = "foo", force=T)
foo <- foo#.Data
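A lighter-weight variant of the same idea, if you'd rather avoid insertSource(), is to source the whole file into a scratch environment and copy out only the function you need; a quick sketch:

tmp <- new.env()
sys.source("a.R", envir = tmp)
foo <- tmp$foo  # only foo() lands in your workspace
foo()
# Hello!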
In an R package, let's say we have two functions: one sets some parameters; the other uses those parameters. How can I build such a pattern in R? It is similar to event-driven applications, but I am not sure whether it is possible in R or not.
For example:
If we run set_param(a = 10), then whenever we run print_a(), it prints 10; and in the case of running set_param(a = 20), it prints 20.
I need a solution without assigning value to the global environment because CRAN checks raise notes.
I suggest adding a variable to your package, as @MrFlick suggested.
For instance, in ./R/myoptions.R:
.myoptions <- new.env(parent = emptyenv())  # package-local store, never visible in the user's workspace

getter <- function(k) {
  .myoptions[[k]]
}

setter <- function(k, v) {
  .myoptions[[k]] <- v
}

lister <- function() {
  names(.myoptions)
}
Then other package functions can use this as a key/value store:
getter("optA")
# NULL
setter("optA", 99)
getter("optA")
# [1] 99
lister()
# [1] "optA"
and all the while, nothing is in the .GlobalEnv:
ls(all.names = TRUE)
# character(0)
Values can be as complex as you want.
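Connecting this back to the question, set_param() and print_a() become thin wrappers over the store; a minimal sketch:

set_param <- function(a) setter("a", a)
print_a <- function() print(getter("a"))
set_param(10)
print_a()
# [1] 10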
Note that these are not exported, so if you want/need the user to have direct access to this, then you'll need to update NAMESPACE or, if using roxygen2, add #' @export before each function definition.
NB: a more canonical approach might be to use options() for these, so that users can preemptively control and access them programmatically.
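A sketch of that options() route, with a package-prefixed option name ("mypkg." is a placeholder) to avoid collisions:

set_param <- function(a) options(mypkg.a = a)
print_a <- function() print(getOption("mypkg.a", default = NA))
set_param(20)
print_a()
# [1] 20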
Consider this R package with two functions, one exported and the other internal
hello.R
#' @export
hello <- function() {
  internalFunctions:::hello_internal()
}
hello_internal.R
hello_internal <- function(x){
  print("hello world")
}
NAMESPACE
# Generated by roxygen2 (4.1.1): do not edit by hand
export(hello)
When this is checked (devtools::check()) it returns the NOTE
There are ::: calls to the package's namespace in its code. A package
almost never needs to use ::: for its own objects:
‘hello_internal’
Question
Given the NOTE says almost never, under what circumstances will a package need to use ::: for its own objects?
Extra
I have a very similar related question where I do require the ::: for an internal function, but I don't know why it's required. Hopefully having an answer to this one will solve that one. I suspect that unlocking the environment is doing something I'm not expecting, and that this is why I have to use ::: on an internal function.
If they are considered duplicates of each other I'll delete the other one.
You should never need this in ordinary circumstances. You may need it if you are calling the parent function in an unusual way (for example, you've manually changed its environment, or you're calling it from another process where the package isn't attached).
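In the question's example, the fix is simply to drop the :::. A package function's enclosing environment is the package's own namespace, so the internal function is found without any qualifier:

#' @export
hello <- function() {
  hello_internal()  # resolved via the namespace; no ::: needed
}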
Here is a pseudo-code example, where I think using ::: is the only viable solution:
# R-package with an internal function FInternal() that is called in a foreach loop
FInternal <- function(i) {...}
#' Exported function containing a foreach loop
#' @export
ParallelLoop <- function(is, <other-variables>) {
  foreach(i = is) %dopar% {
    # This fails, because it cannot locate FInternal, unless it is exported:
    FInternal(i)
    # This works but causes a NOTE:
    PackageName:::FInternal(i)
  }
}
I think the problem here is that the body of the foreach loop is not defined as a function of the package. Hence, when executed on a worker process, it is not treated as code belonging to the package and does not have access to the package's internal objects. I would be glad if someone could suggest an elegant solution for this specific case.
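One workaround that is often suggested (a sketch, assuming a backend such as doParallel that serializes the loop body's local variables to the workers): bind the internal function to a local name so it ships along with the closure:

ParallelLoop <- function(is) {
  f <- FInternal  # local binding travels to the workers with the loop body
  foreach(i = is) %dopar% {
    f(i)
  }
}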
Let's say I open an R file that I wrote way back when. At the top of the file I see a library being loaded, but I don't remember what it does any more. So I think to myself: hm, I wonder where in this long R file that library is used?
Is there a way to list which functions from a given package are used in a particular file?
There are certainly other ways to do this, but if you can get a list of the functions in the package, you could combine readLines (to read the script into R as characters), grepl (to detect matches), and sapply. The way I would grab the functions is using p_funs from the pacman package. (Full disclosure: I am one of the authors.)
Here is an example script that I have saved as "test.R"
library(ggplot2)
x <- rnorm(20)
y <- rnorm(20)
qplot(x, y)
summary(x)
and here is a session where I detect which functions are used
script <- readLines("test.R")
funs <- p_funs(ggplot2)
out <- sapply(funs, function(input){any(grepl(input, x = script))})
funs[out]
#[1] "ggplot" "qplot"
If you don't want to install pacman you can use any other method to get a list of the functions in the package. You could replace that with
funs <- objects("package:ggplot2")
and you would essentially get the same answer.
Note that you may get more matches than are actually in the file: the ggplot function wasn't in my script, but the string "ggplot" appears in library(ggplot2). So you may still need to do a little additional digging after the initial sweep through the file.
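One way to cut down on such false positives (a rough heuristic, not bulletproof) is to require an opening parenthesis after the function name:

out <- sapply(funs, function(f) any(grepl(paste0("\\b", f, "\\("), script)))
funs[out]
# [1] "qplot"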
I have a few convenience functions in my .Rprofile, such as this handy function for returning the size of objects in memory. Sometimes I like to clean out my workspace without restarting and I do this with rm(list=ls()) which deletes all my user created objects AND my custom functions. I'd really like to not blow up my custom functions.
One way around this seems to be creating a package with my custom functions so that my functions end up in their own namespace. That's not particularly hard, but is there an easier way to ensure custom functions don't get killed by rm()?
Combine attach and sys.source to source into an environment and attach that environment. Here I have two functions in file my_fun.R:
foo <- function(x) {
  mean(x)
}

bar <- function(x) {
  sd(x)
}
Before I load these functions, they are obviously not found:
> foo(1:10)
Error: could not find function "foo"
> bar(1:10)
Error: could not find function "bar"
Create an environment and source the file into it:
> myEnv <- new.env()
> sys.source("my_fun.R", envir = myEnv)
Still not visible as we haven't attached anything
> foo(1:10)
Error: could not find function "foo"
> bar(1:10)
Error: could not find function "bar"
and when we do so, they are visible, and because we have attached a copy of the environment to the search path the functions survive being rm()-ed:
> attach(myEnv)
> foo(1:10)
[1] 5.5
> bar(1:10)
[1] 3.027650
> rm(list = ls())
> foo(1:10)
[1] 5.5
I still think you would be better off with your own personal package, but the above might suffice in the meantime. Just remember that the copy on the search path is just that, a copy. If the functions are fairly stable and you're not editing them, the above might be useful, but it is probably more hassle than it is worth if you are developing and modifying the functions.
A second option is to just name them all .foo rather than foo as ls() will not return objects named like that unless argument all = TRUE is set:
> .foo <- function(x) mean(x)
> ls()
character(0)
> ls(all = TRUE)
[1] ".foo" ".Random.seed"
Here are two ways:
1) Have each of your function names start with a dot, e.g. .f instead of f. ls will not list such functions unless you use ls(all.names = TRUE), so they won't be passed to your rm command.
or,
2) Put this in your .Rprofile
attach(list(
f = function(x) x,
g = function(x) x*x
), name = "MyFunctions")
The functions will appear as a component named "MyFunctions" on your search list rather than in your workspace, and they will be accessible almost the same as if they were in your workspace. search() will display your search list and ls("MyFunctions") will list the names of the functions you attached. Since they are not in your workspace, the rm command you normally use won't remove them. If you do wish to remove them, use detach("MyFunctions").
Gavin's answer is wonderful, and I just upvoted it. Merely for completeness, let me toss in another one:
R> q("no")
followed by
M-x R
to create a new session---which re-reads the .Rprofile. Easy, fast, and cheap.
Other than that, private packages are the way in my book.
Another alternative: keep the functions in a separate file which is sourced from within your .Rprofile. You can re-source the contents directly from within R at your leisure.
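For instance (paths here are hypothetical):

# in ~/.Rprofile
source("~/R/my_functions.R")
# later, after rm(list = ls()), just re-source at the prompt:
source("~/R/my_functions.R")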
I find that often my R environment gets cluttered with various objects when I'm creating or debugging a function. I wanted a way to efficiently keep the environment free of these objects while retaining personal functions.
The simple function below was my solution. It does 2 things:
1) deletes all non-function objects that do not begin with a capital letter and then
2) saves the environment as an RData file
(requires the R.oo package)
cleanup <- function(filename = "C:/mymainR.RData") {
  library(R.oo)
  # create a data frame listing all personal objects
  everything <- ll(envir = 1)
  # get the objects that are not functions
  nonfunction <- as.vector(everything[everything$data.class != "function", 1])
  # non-function objects that do not begin with a capital letter should be deleted
  trash <- nonfunction[grep('^[[:lower:]]', nonfunction)]
  remove(list = trash, pos = 1)
  # save the R environment
  save.image(filename)
  print(paste("New, CLEAN R environment saved in", filename))
}
In order to use this function 3 rules must always be kept:
1) Keep all data external to R.
2) Use names that begin with a capital letter for non-function objects that I want to keep permanently available.
3) Obsolete functions must be removed manually with rm.
Obviously this isn't a general solution for everyone...and potentially disastrous if you don't live by rules #1 and #2. But it does have numerous advantages: a) fear of my data getting nuked by cleanup() keeps me disciplined about using R exclusively as a processor and not a database, b) my main R environment is so small I can backup as an email attachment, c) new functions are automatically saved (I don't have to manually manage a list of personal functions) and d) all modifications to preexisting functions are retained. Of course the best advantage is the most obvious one...I don't have to spend time doing ls() and reviewing objects to decide whether they should be rm'd.
Even if you don't care for the specifics of my system, the "ll" function in R.oo is very useful for this kind of thing. It can be used to implement just about any set of clean up rules that fit your personal programming style.
An nth, quick-and-dirty option would be to use lsf.str() when calling rm(), to get all the functions in the current workspace and keep them out of the removal. ...and it lets you name the functions as you wish.
pattern <- paste0('^', lsf.str(), '$', collapse = "|")
rm(list = ls()[-grep(pattern, ls())])
I agree, it may not be the best practice, but it gets the job done! (and I have to selectively clean after myself anyway...)
Similar to Gavin's answer, the following loads a file of functions but without leaving an extra environment object around:
if ('my_namespace' %in% search()) detach('my_namespace')
source('my_functions.R', attach(NULL, name = 'my_namespace'))
This removes the old version of the namespace if it was attached (useful for development), then attaches an empty new environment called my_namespace and sources my_functions.R into it. If you don't remove the old version you will build up multiple attached environments of the same name.
Should you wish to see which functions have been loaded, look at the output for
ls('my_namespace')
To unload, use
detach('my_namespace')
These attached functions, like a package, will not be deleted by rm(list=ls()).