I am trying to find a way to override specific functions in Meteor packages. The only way I have found to do it is to download the package and use it locally. Is there a quick way to override just one function without doing this, as one would expect to be able to do in most OOP languages?
Tl;dr
I have a complete NAMESPACE file for the package I am building. I want to execute all the importFrom(x,y) clauses in that file to get the set of necessary functions I use in the package. Is there an automated way to do that?
Full question
I'm currently working on building a package that in turn depends on a bunch of other packages. Think: both Hmisc and dplyr.
The other contributors to the package don't really like using devtools::build() every 5 minutes when they debug, which I can understand. They want to be able to toggle a dev_mode boolean which, when set to true, loads all dependencies and sources all scripts in the R/ and data-raw/ folders, so that all functions and data objects are loaded in memory. I don't necessarily think that's a perfectly clean solution, but they're technically my clients and it's a big help to them so please don't offer a frame challenge unless it's the most user-friendly solution imaginable.
The problem with this solution is that when I load two libraries whose namespaces clash, functions in the package that would otherwise work perfectly start throwing errors. I thus need to improve my import system.
Thanks to thorough documentation (and the help of devtools::check()), my NAMESPACE is complete. I guess I could split it into pieces and eval() some well-chosen parts of it, but it feels like I'm probably not the first person to encounter this difficulty. Is there an automated way to parse NAMESPACE and import the proper functions from the proper packages?
Thanks!
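For what it's worth, here is a minimal sketch of the eval()-style idea described in the question: read NAMESPACE, pick out the importFrom() directives, and assign each imported function into the session. The helper name import_from_namespace is hypothetical, and this only handles single-line importFrom() entries:

# Minimal sketch: execute the importFrom() directives of a NAMESPACE file
# by assigning each imported function into the global environment.
import_from_namespace <- function(path = "NAMESPACE") {
  lines <- readLines(path)
  imports <- grep("^importFrom\\(", lines, value = TRUE)
  for (line in imports) {
    # strip "importFrom(" and the closing ")" and split the arguments
    inner <- sub("^importFrom\\(", "", sub("\\)$", "", line))
    parts <- trimws(strsplit(inner, ",")[[1]])
    pkg   <- gsub("[\"']", "", parts[1])
    for (fun in gsub("[\"']", "", parts[-1])) {
      assign(fun, getExportedValue(pkg, fun), envir = .GlobalEnv)
    }
  }
}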
I'm working on a fork of a package that depends upon the ReporteRs library.
However, its maintainer deprecated this library a few years ago, in favour of the officer and flextable libraries.
One of the main reasons for this deprecation was to stop depending on rJava, which may cause installation problems and bugs.
In my package, how should I manage this case?
So far, my package has been processing data to return a ReporteRs object. If I change my functions to return an officer object, I would break backward compatibility.
But if I don't, and keep the old ReporteRs-returning functions as legacy backward-compatibility functions, I have to keep ReporteRs in my dependencies and my package stays rJava-dependent.
Is there a win-win solution?
Here is what I would do:
Make your best attempt to re-implement your functions with the officer library, but keep your old API. Make sure that you warn the users that these functions are deprecated (see the sketch after this list). At the same time, make new functions fully compliant with officer/flextable syntax. Note that you might change the behavior of the functions slightly (as in, not ensuring all parameters are properly evaluated), as long as they take the same parameters and return the same type of objects.
If that is really not possible, just add a compatibility warning to your old functions.
Create a transitional package version that you would keep around for a few weeks or months with both versions of these functions. If the package still needs to depend on rJava, tough luck.
Keep track of the packages that depend on your package. If there are not too many, you can contact their developers directly. Maybe the issue is not as serious as you think it is?
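As an illustration of the first point, a minimal sketch of a deprecated wrapper; the names oldReport and newReport are hypothetical:

# Hypothetical deprecated wrapper: keeps the old ReporteRs-era API,
# warns the caller, then delegates to the officer-based rewrite.
oldReport <- function(...) {
  .Deprecated("newReport")   # base R's standard deprecation warning
  newReport(...)
}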
EDIT: As discussed above, you can make your dependency on ReporteRs conditional on the availability of ReporteRs. Then you can put ReporteRs into the Suggests field of the DESCRIPTION file rather than Depends, and in the package you can use code like this:
if (requireNamespace("ReporteRs", quietly = TRUE)) {
  warning("This function is deprecated, better use MyNewFunction instead")
  ReporteRs::whatever() # ... the original ReporteRs-based code
} else {
  warning("To run this (deprecated) function, please install the ReporteRs package")
}
I am encountering an error that is driven by a C++ function that has been wrapped using Rcpp. Normally, if I need to edit a function, I just use fix(TheFunction) and edit as needed. While this works for R functions, I can't figure out how to access the underlying C++ code.
Is there a simple way to edit an underlying C++ function? One thought I had was forking the GitHub repository, making the edits I need, and then installing the package from my GitHub. I have never done that; would it work, or is there a better way?
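For reference, installing an edited fork is straightforward with the remotes package; the repository path below is a placeholder:

# Install the edited fork straight from GitHub
# ("yourname/thepackage" stands in for the real repository).
install.packages("remotes")
remotes::install_github("yourname/thepackage")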
Is it good practice to load every library a function needs inside that function itself?
For example, my file global.r contains several functions I need for a Shiny app. Currently I have all the needed packages at the top of the file. When I switch projects or copy these functions, I have to remember to load the packages / include them in the new code. If the library calls lived inside the functions instead, each function would carry its own dependencies. Of course I would have to re-check all functions in a fresh R session, but I think this could help in the long run.
When I try to load a package twice, R won't load it again; it detects that the package is already attached. My main question is whether restructuring my functions in that way would slow them down.
I have only seen that practice (library calls inside functions) once, so I'm not sure.
As one of the commenters suggests, you should avoid loading packages within a function, since:
The function now has a global effect - as a general rule, this is something to avoid.
There is a very small performance hit.
The first point is the big one. As with most optimisation, only worry about the second point if it's an issue.
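To illustrate the first point, a short sketch (stringr is just an example package):

f <- function() {
  library(stringr)      # side effect: attaches stringr for the whole session
  str_to_upper("hello")
}
f()
str_to_upper("world")   # still works: the attach "leaked" out of f()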
Now that we've established the principle, what are the possible solutions?
In small projects, I have a file called packages.R that includes all the library calls I need; a sketch follows below. This is sourced at the top of my analysis script. BTW, all my functions are in a file called func.R. This workflow was stolen/adapted from a previous SO question.
If you're only importing a single function, you could use the :: trick, e.g. package::funcA(...). That way you avoid attaching the whole package (its namespace is still loaded, but your search path stays clean).
For larger projects, I create an R package that handles all necessary imports. The benefit of creating a package is detailed in this answer on structuring large R projects.
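As a sketch of the packages.R layout from the first option (the package names are just examples):

## packages.R -- every library call in one place
library(dplyr)
library(ggplot2)

## top of the analysis script
source("packages.R")   # load all dependencies
source("func.R")       # load all project functions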
I have written personal functions in R that are not specific to one (or a few) projects.
What are the best practices (in R) to put those kind of functions?
Is the best way to do it to have one file that gets sourced at startup? Or is there a better (recommended) way to deal with this situation?
Create a package named "utilities", put your utility functions in that package, try to aim for one function per file, and store the package in a source control system (e.g., Git, SVN). It will save you time in the long run.
P.S. .Rprofile tends to get accidentally deleted.
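A minimal sketch of scaffolding such a package with the usethis helpers (the package and file names are just examples):

# Scaffold a personal "utilities" package, one function per file
usethis::create_package("~/utilities")
usethis::use_r("read_config")   # creates R/read_config.R
usethis::use_git()              # put it under version control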
If you have many, it would be good to make them into a package that you load each time you start working.
It is probably not a good idea to have one monolithic script with a bunch of functions. Instead, break the file up into several files, each of which either has only one function (my preference) or a group of functions that are logically similar. That makes it easier to find things when you need to make changes.
Most people use the .Rprofile file for this. Here are two links which talk about this file in some detail.
http://www.statmethods.net/interface/customizing.html
http://blog.revolutionanalytics.com/2013/10/sample-rprofile.html
At the top of my .Rprofile file I call library() for the various libraries which I normally use. I also have some personal handy functions which I've come to rely on. Because this file is sourced on startup, they are available to me every session.
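For illustration, a small sketch of such an .Rprofile (the package and helper function are just examples):

## ~/.Rprofile -- sourced at the start of every session
library(data.table)   # packages I always want attached

# a small personal helper, available in every session
cat0 <- function(...) cat(..., sep = "")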
In my experience, a package is the best choice for personal functions. At first I put every new function into a personal package, which I called My. When I find that some functions are similar and worth becoming an independent package, I create a new package and move them there.