Easiest way to save an S4 class - r

Probably the most basic question on S4 classes imaginable here.
What is the simplest way to save an S4 class you have defined so that you can reuse it elsewhere? I have a project where I'm taking a number of very large datasets and compiling summary information from them into small S4 objects. Since I'll therefore be switching R sessions to create the summary object for each dataset, it'd be good to be able to load the definition of the class from a saved object (or have it load automatically) rather than having to include the long definition of the object at the top of each script (which I assume is bad practice anyway, because the code defining the object might become inconsistent).
So what's the syntax, along the lines of saveclass("myClass") and loadclass("myClass"), or am I just thinking about this in the wrong way?

setClass("track", representation(x="numeric", y="numeric"))
x <- new("track", x=1:4, y=5:8)
save as binary
fn <- tempfile()
save(x, ascii=FALSE, file=fn)
rm(x)
load(fn)
x
save as ASCII
save(x, ascii=TRUE, file=fn)
ASCII text representation from which to regenerate the data
dput(x, file=fn)
y <- dget(fn)

From the question, I think you really do want to include the class definition at the top of each script (although not literally; see below), rather than saving a binary representation of the class definition and loading that. The reason is the general one that binary representations are more fragile (subject to changes in software implementation) than simple text representations (for instance, in the not too distant past S4 objects were based on simple lists with a class attribute; more recently they have been built around an S4 'bit' set on the underlying C-level data representation).
Instead of copying and pasting the definition into each script, the best practice is really to include the class definition (and related methods) in an R package, and to load the package at the top of the script. It is not actually hard to write packages; an easy way to get started is to use RStudio to create a 'New Project' as an 'R package'. Use a version number in the package to keep track of the specific version of the class definition / methods you're using, and version control (svn or git, for instance) to make it easy to track the changes / explorations you make as your class matures. Share with your colleagues and eventually the larger R community to let others benefit from your hard work and insight!
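For example, a minimal sketch of that layout (the package name 'trackTools' and file names are hypothetical): the class definition lives in an R source file inside the package, and every analysis script just loads the package instead of repeating the definition.
# R/track-class.R inside a hypothetical package 'trackTools'
setClass("track", representation(x = "numeric", y = "numeric"))
# (the package's NAMESPACE would also need exportClasses(track))

# at the top of each analysis script:
library(trackTools)
summary_obj <- new("track", x = 1:4, y = 5:8)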

Related

Should I use S3 data frames in my package?

I have a package which uses a data.frame based S4 class:
setClass(Class = "foobar",
         slots = c(a = "character", b = "character", c = "character"),
         contains = "data.frame")
Works as intended. However, I observe weird warnings when combining with tidyverse:
df <- data.frame(ID=1:5)
df2 <- new("foobar", df)
as_tibble(df2)
The last statement produces a warning:
Warning message:
In class(x) <- c(subclass, tibble_class) :
Setting class(x) to multiple strings ("tbl_df", "tbl", ...); result will no longer be an S4 object
This is because tidyverse does not support S4 data frames. This can be circumvented in downstream code by using asS3(df). However, users of my package may be puzzled if they see these warnings. I am now faced with the following choices and I don't really know which would be the most reasonable and correct:
1. Keep the S4 model and hope that the users won't mind seeing this warning each time they pass my data frames into something else.
2. Use S3. However, I already have another S4 class defined in published versions of my package, and I am afraid that I would break someone's code.
3. Mix S3 and S4. Is it even allowed?
4. Is there another solution I might be overlooking?
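(For reference, the asS3() workaround mentioned above looks roughly like this in downstream code, reusing df2 from the example; tibble is assumed to be attached:)
library(tibble)
# dropping the S4 bit before handing the object to tidyverse avoids the warning
as_tibble(asS3(df2))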
There is no brilliant solution to this which is entirely within your control.
The tidyverse package may call class<- on any data-frame-like object given to it, and as you have seen this will destroy the S4 nature of any object. This can't be worked around by (for instance) defining a method for coerce or calling setAs, as class<- doesn't use that mechanism. (class<- isn't generic either, so you can't set a method for it.) The only way to make tidyverse support S4 is for the tidyverse authors to alter the code to use as or similar, and it doesn't look like that is at the top of their to-do list.
You are correct to be worried about dramatically altering the way your class works when you have released a version of your package already with an S4 class.
If:
your package is quite new and doesn't yet have many users;
you can do all you need to do with S3; and
you don't know of another package which has built new classes on top of yours
then it may be best to redefine it as S3, and include a message when your package is installed or loaded to say
thanks for installing myPackage v2. Code may be incompatible with v1.2 or earlier; see help(blah) for details
otherwise, stick with S4.
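A minimal sketch of how such a load-time message could be emitted via .onAttach() (the package name, version numbers and help topic here are hypothetical):
# in the package's R code
.onAttach <- function(libname, pkgname) {
  packageStartupMessage(
    "Thanks for installing myPackage v2. ",
    "Code may be incompatible with v1.2 or earlier; see help(\"myPackage-changes\") for details."
  )
}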
You can't exactly mix S3 and S4 for class definitions (you can for method definitions). The closest you can come is setOldClass, which registers an S3 class as an S4 one (whereas you wanted the opposite). Still, that may help you achieve "you can do all you need to do with S3" above.
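A sketch of what that could look like if foobar were redefined as an S3 class (the constructor and the describe generic are made up for illustration):
# S3 constructor for a hypothetical data.frame-based class "foobar"
new_foobar <- function(df, a = character(0)) {
  structure(df, a = a, class = c("foobar", "data.frame"))
}
# register the S3 class so it can appear in S4 method signatures
setOldClass(c("foobar", "data.frame"))
setGeneric("describe", function(x) standardGeneric("describe"))
setMethod("describe", "foobar", function(x) {
  cat("foobar with", nrow(x), "rows\n")
})
describe(new_foobar(data.frame(ID = 1:5)))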
One other possibility is to define your own version of class<- which checks whether an object of S4 class foobar is about to have its class set, converts it to S3 first if so, and calls the ordinary class<- otherwise. The cure is probably worse than the disease in this case; this will slow down all future S3 class conversions (since class<- is now an ordinary function call, not a primitive), but it should work in principle. Another reason it is not recommended is that you are relying on no other package higher in the search path doing something similar (what if another package author had the same issue and wanted to do the same trick? Then the results would depend on which package was higher up the search path!)
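For completeness, a rough (and, as said above, not recommended) sketch of that masking idea; note it only affects code that resolves class<- through the search path, not calls made inside other packages' namespaces:
# shadow class<- so that an S4 "foobar" is dropped to S3 before reclassing
`class<-` <- function(x, value) {
  if (isS4(x) && methods::is(x, "foobar")) {
    x <- methods::asS3(x)
  }
  base::`class<-`(x, value)
}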

How to make an R object immutable? [duplicate]

I'm working in R, and I'd like to define some variables that I (or one of my collaborators) cannot change. In C++ I'd do this:
const std::string path( "/projects/current" );
How do I do this in the R programming language?
Edit for clarity: I know that I can define strings like this in R:
path = "/projects/current"
What I really want is a language construct that guarantees that nobody can ever change the value associated with the variable named "path."
Edit to respond to comments:
It's technically true that const is a compile-time guarantee, but it would be valid in my mind for the R interpreter to stop execution with an error message. For example, look what happens when you try to assign values to a numeric constant:
> 7 = 3
Error in 7 = 3 : invalid (do_set) left-hand side to assignment
So what I really want is a language feature that allows you to assign values once and only once, and there should be some kind of error when you try to assign a new value to a variable declared as const. I don't care if the error occurs at run-time, especially if there's no compilation phase. This might not technically be const by the Wikipedia definition, but it's very close. It also looks like this is not possible in the R programming language.
See lockBinding:
a <- 1
lockBinding("a", globalenv())
a <- 2
Error: cannot change value of locked binding for 'a'
Since you are planning to distribute your code to others, you could (should?) consider creating a package. Within that package, create a NAMESPACE. There you can define variables that will have a constant value, at least as far as the functions in your package are concerned. Have a look at Tierney (2003), Name Space Management for R.
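As a rough illustration (the package and variable names are hypothetical): once a package's namespace is loaded it is sealed, so bindings defined in it cannot be changed from the outside.
# in the package's R code, e.g. R/constants.R of a hypothetical package 'myConsts'
PATH <- "/projects/current"

# after the namespace has been loaded, attempts to change it fail:
assign("PATH", "/somewhere/else", envir = asNamespace("myConsts"))
# Error: cannot change value of locked binding for 'PATH'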
I'm pretty sure that this isn't possible in R. If you're worried about accidentally re-writing the value, then the easiest thing to do would be to put all of your constants into a list structure, so that you know when you're using those values. Something like:
my.consts <- list(pi=3.14159, e=2.718, c=3e8)
Then when you need to access them you have an aide-mémoire of what not to do, and it also pushes them out of your normal namespace.
Another place to ask would be the R development mailing list. Hope this helps.
(Edited for new idea:) The bindenv functions provide an
experimental interface for adjustments to environments and bindings within environments. They allow for locking environments as well as individual bindings, and for linking a variable to a function.
This seems like the sort of thing that could give a false sense of security (like a const pointer to a non-const variable) but it might help.
(Edited for focus:) const is a compile-time guarantee, not a lock-down on bits in memory. Since R doesn't have a compile phase where it looks at all the code at once (it is built for interactive use), there's no way to check that future instructions won't violate any guarantee. If there's a right way to do this, the folks on the R-help list will know. My suggested workaround: fake your own compilation. Write a script to preprocess your R code that substitutes the corresponding literal for each appearance of your "constant" variables.
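A very rough sketch of that preprocessing idea (the file names and constants are made up for illustration):
# substitute literal values for "constant" names before running a script
consts <- c(PATH = '"/projects/current"', MAX_ITER = "100L")
src <- readLines("analysis.R")
for (nm in names(consts)) {
  src <- gsub(paste0("\\b", nm, "\\b"), consts[[nm]], src, perl = TRUE)
}
writeLines(src, "analysis_expanded.R")
source("analysis_expanded.R")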
(Original:) What benefit are you hoping to get from having a variable that acts like a C "const"?
Since R has exclusively call-by-value semantics (unless you do some munging with environments), there isn't any reason to worry about clobbering your variables by calling functions on them. Adopting some sort of naming conventions or using some OOP structure is probably the right solution if you're worried about you and your collaborators accidentally using variables with the same names.
The feature you're looking for may exist, but I doubt it given the origin of R as an interactive environment where you'd want to be able to undo your actions.
R doesn't have a language constant feature. The list idea above is good; I personally use a naming convention like ALL_CAPS.
I took the answer below from this website
The simplest sort of R expression is just a constant value, typically a numeric value (a number) or a character value (a piece of text). For example, if we need to specify a number of seconds corresponding to 10 minutes, we specify a number.
> 600
[1] 600
If we need to specify the name of a file that we want to read data from, we specify the name as a character value. Character values must be surrounded by either double-quotes or single-quotes.
> "http://www.census.gov/ipc/www/popclockworld.html"
[1] "http://www.census.gov/ipc/www/popclockworld.html"

What is Julia's best approximation to R objects' attributes?

I store important metadata in R objects as attributes. I want to migrate my workflow to Julia and I am looking for a way to represent, at least temporarily, the attributes as something accessible from Julia. Then I can start thinking about extending the RData package to fill this data structure with actual objects' attributes.
I understand that annotating with things like label or unit in a DataFrame (I think the most important use for objects' attributes) is probably going to be implemented in the DataFrames package at some point (https://github.com/JuliaData/DataFrames.jl/issues/35). But I am asking about a more general solution that doesn't depend on this specific use case.
For anyone interested, here is a related discussion in the RData package
In Julia it is idiomatic to define your own types - you'd simply make fields in the type to store the attributes. In R, the nice thing about storing things as attributes is that they don't affect how the type dispatches - e.g. adding metadata to a Vector doesn't make it stop behaving like a Vector. In Julia, that approach is a little more complicated - you'd have to define the AbstractVector interface for your type (https://docs.julialang.org/en/latest/manual/interfaces/#man-interface-array-1) to have it behave like a Vector.
In essence, this means that the workflow solutions are a little different - e.g. in R, attributes are often used to attach metadata to an object when it is returned from a function. An easy way to do something similar in Julia is to have the function return a tuple and destructure the result into separate variables:
function ex()
    res = rand(5)
    met = "uniformly distributed random numbers"
    res, met
end
result, metadata = ex()
I don't think there are plans to implement attributes like in R.

Create an alias to a slot of an object in R

I've bumped my head on the walls trying to create an alias (aka a pointer, or a new short nickname designating the same object in memory without copying that object) to a subpart of a complex object. Let's say I am working with an object of class SpatialPolygonsDataFrame (package "sp"), and I want to perform operations on a part thereof, deep down in the hierarchical representation of that object. Instead of writing repeatedly things like
myBigMap@polygons[FRA][[1]]@Polygons[[1]]
I want to be able to write simply
mypolygon
so that
myBigMap@polygons[FRA][[1]]@Polygons[[1]]@coords
can be abbreviated
mypolygon@coords
etc. I've seen that I should maybe use environments as a replacement for the defunct .Alias function, but I can't figure out how to tell R that I want to consider a subpart of a complex object as an environment. Thanks!
Assignment:
mypolygon = myBigMap@polygons[FRA][[1]]@Polygons[[1]]
doesn't create a copy until you modify something in it. So if it's just shorthand for accessing the data to make some code more readable, then that will be fine:
mypolygon@coords
mean(mypolygon@coords[,1])
neither of those will make a copy.
However, if you do modify mypolygon, e.g. by changing @coords, you need to put the modified value back in the structure, since a copy is made:
mypolygon@coords = mypolygon@coords * 1000
myBigMap@polygons[FRA][[1]]@Polygons[[1]] = mypolygon
I think that's a preferred solution, since it's just as efficient as any kind of magic aliasing scheme and it's explicit: there's no magic action-at-a-distance happening.
I don't think there's any way to alias parts of an object like the way you want to do.

FAQ markup to R data structure

I'm reading the R FAQ source in texinfo, and thinking that it would be easier to manage and extend if it was parsed as an R structure. There are several existing examples related to this:
the fortunes package
bibtex entries
Rd files
each with some desirable features.
In my opinion, FAQs are underused in the R community because they lack i) easy access from the R command-line (i.e. through an R package); ii) powerful search functions; iii) cross-references; iv) extensions for contributed packages. Drawing ideas from the packages bibtex and fortunes, we could conceive a new system where:
FAQs can be searched from R. Typical calls would resemble the fortune() interface: faq("lattice print"), or faq() #surprise me!, faq(51), faq(package="ggplot2") (see the sketch after this list).
Packages can provide their own FAQ.rda, the format of which is not clear yet (see below)
Sweave/knitr drivers are provided to output nicely formatted Markdown/LaTeX, etc.
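A rough sketch of what that faq() interface could look like, loosely modelled on fortune() (the storage format and all names here are hypothetical):
# entries stored as a plain data frame, one row per FAQ item
faq_db <- data.frame(
  question = c("How do I save an S4 class?", "How do I make an object immutable?"),
  answer   = c("Put the class definition in a package.", "See ?lockBinding."),
  package  = c("base", "base"),
  stringsAsFactors = FALSE
)

faq <- function(pattern = NULL, package = NULL, db = faq_db) {
  if (!is.null(package)) db <- db[db$package == package, ]   # faq(package="ggplot2")
  if (is.null(pattern)) return(db[sample(nrow(db), 1), ])    # faq(): surprise me
  if (is.numeric(pattern)) return(db[pattern, ])             # faq(51)
  hits <- grepl(pattern, paste(db$question, db$answer), ignore.case = TRUE)
  db[hits, ]                                                 # faq("lattice print")
}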
QUESTION
I'm not sure what the best input format is, however, either for converting the existing FAQ or for adding new entries.
It is rather cumbersome to use R syntax with a tree of nested lists (or an ad hoc S3/S4/ref class or structure), e.g.
\list(title = "Something to be \\escaped", entry = "long text with quotes, links and broken characters", category = c("windows", "mac", "test"))
Rd documentation, even though not an R structure per se (it is more a subset of LaTeX with its own parser), can perhaps provide a more appealing example of an input format. It also has a set of tools to parse the structure in R. However, its current purpose is rather specific and different, being oriented towards general documentation of R functions, not FAQ entries. Its syntax is not ideal either, I think a more modern markup, something like markdown, would be more readable.
Is there something else out there, maybe examples of parsing markdown files into R structures? An example of deviating Rd files away from their intended purpose?
To summarise
I would like to come up with:
1- a good design for an R structure (class, perhaps) that would extend the fortune package to more general entries such as FAQ items
2- a more convenient format to enter new FAQs (rather than the current texinfo format)
3- a parser, either written in R or some other language (bison?) to convert the existing FAQ into the new structure (1), and/or the new input format (2) into the R structure.
Update 2: in the last two days of the bounty period I got two answers, both interesting but completely different. Because the question is quite vast (arguably ill-posed), none of the answers provide a complete solution, thus I will not (for now anyway) accept an answer. As for the bounty, I'll attribute it to the answer most up-voted before the bounty expires, wishing there was a way to split it more equally.
(This addresses point 3.)
You can convert the texinfo file to XML
wget http://cran.r-project.org/doc/FAQ/R-FAQ.texi
makeinfo --xml R-FAQ.texi
and then read it with the XML package.
library(XML)
doc <- xmlParse("R-FAQ.xml")
r <- xpathSApply(doc, "//node", function(u) {
  list(list(
    title = xpathSApply(u, "nodename", xmlValue),
    contents = as(u, "character")
  ))
})
free(doc)
But it is much easier to convert it to text
makeinfo --plaintext R-FAQ.texi > R-FAQ.txt
and parse the result manually.
doc <- readLines("R-FAQ.txt")
# Split the document into questions
# i.e., around lines like ****** or ======.
i <- grep("[*=]{5}", doc) - 1
i <- c(1,i)
j <- rep(seq_along(i)[-length(i)], diff(i))
stopifnot(length(j) == length(doc))
faq <- split(doc, j)
# Clean the result: since the questions
# are in the subsections, we can discard the sections.
faq <- faq[ sapply(faq, function(u) length(grep("[*]", u[2])) == 0) ]
# Use the result
cat(faq[[ sample(seq_along(faq),1) ]], sep="\n")
I'm a little unclear on your goals. You seem to want all the R-related documentation converted into some format which R can manipulate, presumably so that one can write R routines to extract information from the documentation more easily.
There seem to be three assumptions here.
1) That it will be easy to convert these different document formats (texinfo, Rd files, etc.) to some standard form with (I emphasize) some implicit uniform structure and semantics.
Because if you cannot map them all to a single structure, you'll have to write separate R tools for each type and perhaps for each individual document, and then the post-conversion tool work will overwhelm the benefit.
2) That R is the right language in which to write such document processing tools; I suspect you're a little biased towards R because you work in R and don't want to contemplate "leaving" the development environment to get better information about working with R. I'm not an R expert, but I think R is mainly a numerical language, and does not offer any special help for string handling, pattern recognition, natural language parsing or inference, all of which I'd expect to play an important part in extracting information from the converted documents that largely contain natural language. I'm not suggesting a specific alternative language (Prolog??), but if you succeed with the conversion to normal form (task 1), you might be better off carefully choosing the target language for processing.
3) That you can actually extract useful information from those structures. Library science was what the 20th century tried to push; now we're all into "Information Retrieval" and "Data Fusion" methods. But in fact reasoning about informal documents has defeated most of the attempts to do it. There are no obvious systems that organize raw text and extract deep value from it (IBM's Jeopardy-winning Watson system being the apparent exception, but even there it isn't clear what Watson "knows"; would you want Watson to answer the question, "Should the surgeon open you with a knife?", no matter how much raw text you gave it?). The point is that you might succeed in converting the data, but it isn't clear what you can successfully do with it.
All that said, most markup systems on text have markup structure and raw text. One can "parse" those into tree-like structures (or graph-like structures if you assume certain things are reliable cross-references; texinfo certainly has these). XML is widely pushed as a carrier for such parsed structures, and being able to represent arbitrary trees or graphs it is ... OK ... for capturing such trees or graphs. [People then push RDF or OWL or some other knowledge encoding system that uses XML, but this isn't changing the problem; you pick a canonical target independent of R]. So what you really want is something that will read the various marked-up structures (texinfo, Rd files) and spit out XML or equivalent trees/graphs. Here I think you are doomed to building separate O(N) parsers to cover all the N markup styles; how otherwise would a tool know what the markup (and therefore the parse) was? (You can imagine a system that could read marked-up documents when given a description of the markup, but even this is O(N): somebody still has to describe the markup.) Once this parsing into a uniform notation is done, you can then use an easily built R parser to read the XML (assuming one doesn't already exist), or if R isn't the right answer, parse this with whatever the right answer is.
There are tools that help you build parsers and parse trees for arbitrary languages (and even translators from the parse trees to other forms). ANTLR is one; it is used by enough people that you might even accidentally find a texinfo parser somebody already built. Our DMS Software Reengineering Toolkit is another; DMS after parsing will export an XML document with the parse tree directly (but it won't necessarily be in that uniform representation you ideally want). These tools will likely make it relatively easy to read the markup and represent it in XML.
But I think your real problem will be deciding what you want to extract/do, and then finding a way to do that. Unless you have a clear idea of how to do the latter, doing all the up front parsers just seems like a lot of work with unclear payoff. Maybe you have a simpler goal ("manage and extend" but those words can hide a lot) that's more doable.
