Extend a map from another package at compile time

I'm trying to extend a map across packages at 'compile time'. Is this possible?
I have package A with a predefined map:
package A
var MyMap = map[string]string{"key1": "value", "key2": "value"}
I would like to extend the map at 'compile time', from another package, e.g. like so (not working code, of course):
package B
import "A"
A.MyMap["key3"] = "value" // extend the map at compile time
Is this somehow possible?

You can't do this at compile time. In fact, even the composite literal that package A uses is constructed at runtime; there are no composite-literal constants in Go.
Going further: whatever code you write in package B, if it imports package A, it will only run after package A has been initialized, including the map you posted.
If you want A.MyMap to have a different value before it can be seen by any other package, you should modify the source of package A. This could be an additional, generated file that uses a package init() function to assign a new value to MyMap, or to add new values to it.
If you can, you could also modify package A so that the initialization of MyMap is moved to a different source file, one that can be generated.

This is actually extension at runtime, but it should fit your example. Use an init function:
package B

import "A"

func init() {
	A.MyMap["key3"] = "value"
}

You can pass a string at link time and then parse it into a map at run time; JSON is an easy format to use for this. Note that the -X flag expects an importpath.name=value pair naming a package-level string variable, e.g.:
go build -ldflags '-X main.somemap={"k":"v"}'
See more on GcToolchainTricks.

Related

How to share objects across vignettes in an R package?

I'm writing documentation for my R package using vignettes (which will be embedded in a pkgdown website).
My question is: if I create an R object in a chunk within a first vignette, "aa":
myobject <- mypkg::myfct()
how can I reuse this object in a second vignette called "bb"?
verif <- myobject[myfilters, ]
I get this error: myobject not found
This doesn't seem like a good idea.
You can save this data in an internal RDA file and load it in the second vignette. See Chapter 14, External data, in R Packages.
So you'd have to:
Create a data-raw folder, and have a script there that creates this object and then saves it. This is facilitated by usethis::use_data_raw(), e.g. with "myfct_example" as the name.
Then, at the bottom of this script in data-raw, ensure that you save with internal = TRUE; this will store the object in R/sysdata.rda instead of data/myfct_example.rda (if you call the object that).
Ensure that you re-run the data-raw/myfct_example.R script every time you change myfct_example, so the new version gets stored in R/sysdata.rda.
In both vignettes, you'll then have this data object available as mypkg:::myfct_example.
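As a rough sketch, the data-raw script could look like this (mypkg, myfct, myfct_example, and myfilters follow the question's names; treat them as placeholders):
# data-raw/myfct_example.R
myfct_example <- mypkg::myfct()  # build the object once
# internal = TRUE writes to R/sysdata.rda rather than data/
usethis::use_data(myfct_example, internal = TRUE, overwrite = TRUE)

# then, in either vignette:
verif <- mypkg:::myfct_example[myfilters, ]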

Call a function from an unloaded package via get(), without using library()

I want to call a function from an unloaded package by having the function name stored in a list.
Normally I would just use:
library(shiny)
pagelist <- list("type" = "p") # object with the function name (will be loaded from .txt file)
get(pagelist$type[1])("Display this text")
but since you're not supposed to call library() when writing a package, I'd have to use something like
get(shiny::pagelist$type[1])("Display this text")
which doesn't work. Is there a way to call the function from the function name stored in the list, without having to load the library? Note that it should be possible to call many different functions like this (all from the same package), so just using e.g.
if (pagelist$type[1] == "p") {
  shiny::p("Display this text")
}
would require quite a long list of if/else statements.
Use getExportedValue:
getExportedValue("shiny", pagelist$type[1])("Display this text")
# <p>Display this text</p>
You shouldn't use getExportedValue as was done in the accepted answer, because its help page describes the functions there as "Internal functions to support reflection on namespace objects." It's bad practice to use internal functions, because they can change in subtle ways with very little notice.
The right way to do the equivalent of shiny::p when both "shiny" and "p" are character strings in variables is to use get:
get("p", envir = loadNamespace("shiny"))
The loadNamespace() function returns the package's namespace environment (which contains all of its objects, not only the exported ones); it's fairly quick to execute if the package is already loaded.
The original question asked
Is there a way to call the function from the function name stored in
the list, without having to load the library?
(where I think "library" should be "package" in R jargon). The answer to this is "no": you can't get any object from a package unless you load the package. However, loading is simpler than attaching, so this won't put shiny on the search list (making all of shiny visible to the user); it's just loaded internally in R.
A related question is why get("shiny::p") doesn't work. The answer is that shiny::p is an expression to evaluate, and get only works on names.
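For example, a quick illustration of both points (it assumes shiny is installed):
f <- get("p", envir = loadNamespace("shiny"))  # look up the function by name
f("Display this text")
# <p>Display this text</p>

get("shiny::p")  # fails: get() takes a name, not an expression
# Error: object 'shiny::p' not found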

Generate a call to a package function programmatically given a vector of package names

In my work I develop R packages that export R data objects (.RData). The name of these .RData files is always the same (e.g. files.RData). These packages also define and export a function that uploads the data to my database, say upload_data(). Inside upload_data() I first load the data using data(files, package = "PACKAGE NAME") and then push it into my database.
Let's say I have two packages, package1 and package2, which live on my file system. Given a vector of the package names (c("package1", "package2")), how would I go about calling upload_data() programmatically? Specifically, inside a script, how would I construct and evaluate a call using "::" notation, like this: package1::upload_data()? I tried call() but couldn't get it right.
You could go the route of constructing the call using :: notation and evaluating that - but it's probably just easier to directly use get and specify the package you want to grab from.
get("upload_data", envir = asNamespace("package1"))
will return the function the same as using package1::upload_data would, but is much easier to deal with programmatically.
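For instance, a short sketch of looping over the vector of package names (assuming each package exports an upload_data() that takes no arguments):
pkgs <- c("package1", "package2")
for (pkg in pkgs) {
  f <- get("upload_data", envir = asNamespace(pkg))  # the equivalent of pkg::upload_data
  f()                                                # call it
}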

How to prevent functions polluting global namespace?

My R project is getting increasingly complex, and I'm starting to look for some construct that's equivalent to classes in Java/C# or modules in Python, so that my global namespace doesn't become littered with functions that are never used outside of one particular .r file.
So, I guess my question is: to what extent is it possible to limit the scope of functions to within a specific .r file, or similar?
I think I can just make the entire .r file into one giant function, and put functions inside that, but that messes with the echoing:
myfile.r:
myfile <- function() {
  somefunction <- function(a, b, c) {}
  anotherfunction <- function(a, b, c) {}

  # do some stuff here...
  123
  456
  # ...
}
myfile()
Output:
> source("myfile.r",echo=T)
> myfile <- function() {
+ somefunction <- function(a,b,c){}
+ anotherfunction <- function(a,b,c){}
+
+ # do some stuff here...
+ # . .... [TRUNCATED]
> myfile()
>
You can see that "123" is not printed, even though we used echo=T in the source command.
I'm wondering if there is some other construct that is more standard, since putting everything inside a single function doesn't seem like standard practice (but perhaps it is?). Also, if it means that echo=T works, that is a definite bonus for me.
Firstly, as #Spacedman has said, you'll be best served by a package but there are other options.
S3 Methods
R's original "object orientation" is known as S3. The majority of R's code base uses this particular paradigm. It is what makes plot() work for all kinds of objects: plot() is a generic function, and the R Core Team and package developers can and have written their own methods for plot(). Strictly, these methods have names like plot.foo(), where foo is the class of object for which the function defines a plot() method. The beauty of S3 is that you hardly ever need to know or call plot.foo(); you just use plot(bar) and R works out which plot() method to dispatch to based on the class of object bar.
In your comments on your Question you mention that you have a function populate() that has methods (in effect) for classes "crossvalidate" and "prod" which you keep in separate .r files. The S3 way to set this up is to do:
populate <- function(x, ...) { ## add whatever args you want/need
  UseMethod("populate")
}

populate.crossvalidate <- function(x, y, z, ...) { ## add args, but must include those of the generic
  ## function code here
}

populate.prod <- function(x, y, z, ...) { ## add args, but must include those of the generic
  ## function code here
}
Then, given some object bar with class "prod", calling
populate(bar)
will result in R calling populate() (the generic); it then looks for a function named populate.prod, because that is the class of bar. It finds our populate.prod() and dispatches to it, passing on the arguments we initially specified.
So you see that you only ever refer to the methods using the name of the generic, not the full function name. R works out for you what method needs to be called.
The two populate() methods can have very different arguments, with the exception that, strictly, they should include the arguments of the generic function. So in the example above, all methods should have the arguments x and .... (There is an exception for methods that employ formula objects, but we don't need to worry about that here.)
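For example, a minimal sketch of dispatch in action (bar and its contents are placeholders):
bar <- structure(list(), class = "prod")
populate(bar)  # the generic dispatches to populate.prod()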
Package Namespaces
Since R 2.14.0, all R packages have had their own namespace, even if one was not provided by the package author, although namespaces had been around in R for a lot longer than that.
In your example, we wish to register the populate() generic and its two methods with the S3 system. We also wish to export the generic function. Usually we don't want or need to export the individual methods. So, pop your functions into .R files in the R folder of the package sources, and then in the top level of the package sources create a file named NAMESPACE and add the following statements:
export(populate) ## export generic
S3method(populate, crossvalidate) ## register methods
S3method(populate, prod)
Then, once you have installed your package, you will note that you can call populate(), but R will complain if you try to call populate.prod() etc. directly by name from the prompt or in another function. This is because the functions that are the individual methods have not been exported from the namespace and hence are not visible outside it. Any function in your package that calls populate() will be able to access the methods you have defined, but any functions or code outside your package can't see the methods at all. If you want, you can call non-exported functions using the ::: operator, i.e.
mypkg:::populate.crossvalidate(foo, bar)
will work, where mypkg is the name of your package.
To be honest, you don't even need a NAMESPACE file, as R will auto-generate one when you install the package, one that automatically exports all functions. That way your two methods will be visible as populate.xxx() (where xxx is the particular method) and will operate as S3 methods.
Read Section 1, Creating R packages, in the Writing R Extensions manual for details of what is involved, but you won't need to do half of it if you don't want to, especially if the package is for your own use. Just create the appropriate package folders (i.e. R and man), and stick your .R files in R. Write a single .Rd file in man where you add
\name{Misc Functions}
\alias{populate}
\alias{populate.crossvalidate}
\alias{populate.prod}
at the top of the file. Add \alias{} for any other functions you have. Then you'll need to build and install the package.
Alternative using sys.source()
Although I don't (can't!) really recommend what I mention below as a long-term viable option here, there is an alternative that will allow you to isolate the functions from individual .r files as you initially requested. This is achieved through the use of environments not namespaces and doesn't involve creating a package.
The sys.source() function can be used to source R code/functions from a .R file and evaluate it in an environment. As your .R file is creating/defining functions, if you source it inside another environment then those functions will be defined there, in that environment. They won't be visible on the standard search path by default, and hence a populate() function defined in crossvalidate.R will not clash with a populate() defined in prod.R, as long as you use two separate environments. When you need to use one set of functions, you can attach the environment to the search path, upon which it will then be miraculously visible to everything, and when you are done you can detach it. Then attach the other environment, use it, detach it, etc. Or you can arrange for R code to be evaluated in a specific environment using things like eval().
Like I said, this isn't a recommended solution but it will work, after a fashion, in the manner you describe. For example
## two source files that both define the same function
writeLines("populate <- function(x) 1:10", con = "crossvalidate.R")
writeLines("populate <- function(x) letters[1:10]", con = "prod.R")
## create two environments
crossvalidate <- new.env()
prod <- new.env()
## source the .R files into their respective environments
sys.source("crossvalidate.R", envir = crossvalidate)
sys.source("prod.R", envir = prod)
## show that there are no populates find-able on the search path
> ls()
[1] "crossvalidate" "prod"
> find("populate")
character(0)
Now, attach one of the environments and call populate():
> attach(crossvalidate)
> populate()
[1] 1 2 3 4 5 6 7 8 9 10
> detach(crossvalidate)
Now call the function in the other environment
> attach(prod)
> populate()
[1] "a" "b" "c" "d" "e" "f" "g" "h" "i" "j"
> detach(prod)
Clearly, each time you want to use a particular function, you need to attach() its environment and then call it, followed by a detach() call. Which is a pain.
I did say you can arrange for R code (expressions, really) to be evaluated in a stated environment. You can use eval() or with() for this, for example.
> with(crossvalidate, populate())
[1] 1 2 3 4 5 6 7 8 9 10
At least now you only need a single call to run the version of populate() of your choice. However, if calling the functions by their full name, e.g. populate.crossvalidate(), is too much effort (as per your comments), then I dare say that even the with() idea will be too much hassle? And anyway, why would you use this when you can quite easily have your own R package?
Don't worry about the complexity of 'making a package'. Stop thinking of it like that. What you are going to do is this:
in the folder where you are working on your project, make a folder called 'R'
put your R code in there, one function per file
make a DESCRIPTION file in your project directory. Check out existing examples for the exact format, but you only need a few fields.
Get devtools. install.packages("devtools")
Use devtools. library(devtools)
Now, write your functions in your R files in your R folder. To load them into R, DON'T source them. Do load_all(). Your functions will be loaded, but NOT into the global environment.
Edit one of your R files, then do load_all() again. This will load any modified files in the R folder, thus updating your function.
That's it. Edit, load_all(), rinse and repeat. You have created a package, but it's pretty lightweight and you don't have to deal with the bondage and discipline of R's package-building tools.
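For reference, a minimal sketch of such a project (all names and fields here are placeholders; DESCRIPTION only needs a few fields for this to work):
myproject/
  DESCRIPTION
  R/
    myfunctions.R

## DESCRIPTION
Package: myproject
Title: Helpers For My Project
Version: 0.0.1
Description: Personal helper functions, not intended for release.

## in R, with myproject/ as the working directory:
library(devtools)
load_all()  # loads everything under R/ without polluting the global environment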
I've seen, used, and even written code that tries to implement a lightweight packagey mechanism for loading objects, and none are as good as what devtools does.
All Hail Hadley!
You might want to consider making a package. As an alternative, you could look at environments. Finally, RStudio's projects may be closer to what would suit you.

"object not found" error when creating a new geom for a package

Full disclosure: this issue is duplicated on the ggplot2 google group
I'm developing a package that makes heavy use of ggplot2. I've created my own geom—geom_rug_alt—as a way of putting rug fringes on the top/right of the plot instead of the default locations.
My problem is that when geom_rug_alt() is defined and called within a single script, it seems to plot just fine. (Please try it yourself to verify that.) But in my package, geom_rug_alt() is defined in one file (CommonFunctions.R) and called in another (the Residuals() function of a larger function, foo.R). When I call foo.R on something, I get this error:
Error in geom_rug_alt(aes(x = NULL, y = within.group.residuals, color = factor(within.1.sd.of.the.mean.of.all.residuals)), :
object 'GeomRugAlt' not found
Now, I've done a couple of things (suggested by Hadley in this thread) to try to make sure that geom_rug_alt() works properly within the package:
I define GeomRugAlt as a proto object in a file essentially called CommonFunctions.R within my package. CommonFunctions.R contains lines 3-42 of my example script.
In CommonFunctions.R, I was sure to include the build_accessor() line for geom_rug_alt (line 42 in my example script) after the definition of GeomRugAlt
In the package DESCRIPTION file, I have a Collate: line where CommonFunctions.R appears first
In the package DESCRIPTION file, I have a LazyLoad: false line
In CommonFunctions.R, I included a require(ggplot2) call before defining GeomRugAlt as a proto object.
In foo.R, I included a require(ggplot2) call before calling geom_rug_alt() within Residuals().
I'm not sure what else I'm missing. Given that my example script runs just fine, I suspect the issue isn't that my geom doesn't work, but that I'm doing something wrong as part of the package development process.
Sorry for duplicating the issue, but I can't seem to find a thorough solution to the problem :-(
Put export(GeomRugAlt) in the NAMESPACE file.
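That is, assuming your package also exports the geom_rug_alt() constructor (an assumption; only the GeomRugAlt line is what the answer requires), the NAMESPACE would contain something like:
export(geom_rug_alt)
export(GeomRugAlt)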
