I need to use an R object from another session, but I don't know how to call or load it.
Specifically, I'm using R from within Processing (Java). The session I established with the Rserve package to use R from within Processing is different from the one I'm using in RStudio, but I need to use an object (a cv.glmnet object) that was created in RStudio.
Does anyone have an idea of how to do this?
Thanks! Every thought will be much appreciated.
If you are on the same machine, one (simple) way is to
saveRDS(objA, "/tmp/objA.rds")  # adjust the file path as needed
and then do
objA <- readRDS("/tmp/objA.rds")
in the receiving session. There are better ways that do not involve files (e.g., writing to a Redis instance), but they require more setup on your side.
There is also a way to send an object over a socket connection, but that is not trivial either.
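For completeness, a minimal base-R sketch of the socket approach (the port number and localhost setup are illustrative assumptions, not part of the original answer):

# In the sending session: wait for a client, then stream the object
con <- socketConnection(port = 6011, server = TRUE, blocking = TRUE, open = "wb")
serialize(objA, con)
close(con)

# In the receiving session: connect and read the object back
con <- socketConnection(host = "localhost", port = 6011, server = FALSE,
                        blocking = TRUE, open = "rb")
objA <- unserialize(con)
close(con)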
I am trying to create an R package mypckg with a function createShinyApp. The latter function should create a directory structure ready to use as a shiny app at some location. In this newly created shiny app, I have some variables which should be accessible from within the shiny app, but not by the user directly (to prevent a user from accidentally overwriting them). The reason for these variables to exist (I know one should not use global variables) is that the shiny app treats a text corpus, and I want to avoid passing (and hence copying) it between the many functions, because this could exhaust memory. Somebody using mypckg should be able to set these variables and later use createShinyApp.
My ideas so far are:
I make mypckg::createShinyApp save the protected variables in a protectedVariables.rds file and have the shiny app load the variables from this file into a new environment. I am not very experienced with environments, so I could not get this to work properly yet: so far, creating a new environment upon running the shiny app is not working.
I make mypckg::createShinyApp save the protected variables in a protectedVariables.rds file and have the shiny app load the variables from this file into the options. Thereafter I would set and access the variables with options() and getOption().
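For idea 2, I picture something like this (the file and option names are just placeholders):

# In the generated app, load the variables once at startup and stash them in the options
options(mypckg.protectedVars = readRDS("protectedVariables.rds"))

# Anywhere in the app, read them back
protectedVars <- getOption("mypckg.protectedVars")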
What are the advantages and disadvantages of these ideas and are there yet simpler and more elegant ways of achieving my goal?
It's a little difficult to understand the situation without seeing a concrete example of the kind of variable and the context you're using it in, but I'll try to answer.
First of all: in R, it is very difficult to achieve 100% protection of a variable. Even in shiny, the authors put up many mechanisms to stop certain variables from being overwritten by users (the input variable, for example), and while that makes it much harder, you should know that it is impossible, or at least extremely difficult, to block every way of changing a variable.
With that disclaimer out of the way, I assume you'd be happy with something that prevents people from accidentally overwriting the variable; if they purposely go out of their way to do it, then so be it. In that case, you can certainly read the variables from an RDS file as you suggest (with the caveat that the user can of course overwrite that file). You can also use a package-level variable: global variables are usually frowned upon, but within a package this is a very common thing to do.
For example, you can define in a globals.R file in your package:
.mypkgenv <- new.env(parent = emptyenv())  # package-private environment
.mypkgenv$var1 <- "some variable"
.mypkgenv$var2 <- "some other variable"
And the shiny app can access these using
mypckg:::.mypkgenv$var1
This is just one way; there are other ways too.
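For example, since users of mypckg need to set these values before calling createShinyApp, you could also export small accessor functions; the function and field names here are just illustrative:

# Exported setter/getter pair so users never touch the environment directly
setCorpus <- function(value) {
  .mypkgenv$corpus <- value
  invisible(value)
}
getCorpus <- function() {
  .mypkgenv$corpus
}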
I know that exported data (accessible to users) belongs in the data/ folder and that internal data (data used internally by package functions) belongs in R/sysdata.rda. However, what about data I wish to both export to the user AND have available internally for use by package functions?
Currently, presumably due to the order in which objects/data are added to the NAMESPACE, my exported data is not available during devtools::check(), and I am getting the NOTE: no visible binding for global variable 'data_x'.
There are probably a half dozen ways to get around this issue, many of which strike me as rather hacky, so I was wondering whether there is a "correct" way to have BOTH external and internal data (and avoid the NOTE from R CMD check).
So far I see these options:
Write an internal function that retrieves the data, and use it everywhere internally (see the sketch after this list)
Use ':::' to access the data, which seems odd and invokes a different warning
Keep a copy of data_x in BOTH data/ and R/sysdata.rda (super hacky)
Get over it and ignore the NOTE
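For option 1, I imagine an internal accessor along these lines (the helper and package names are hypothetical):

# Internal accessor: load the exported dataset into a local environment
# and return it, so package code never references the bare symbol data_x
get_data_x <- function() {
  e <- new.env()
  utils::data("data_x", package = "mypackage", envir = e)
  e$data_x
}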
Any suggestions greatly appreciated.
Thanks!
Does the R language have any easy way to set up a timer function?
By a timer function, I mean a function that sits in the background of the session and executes every so often.
Cheers!
The tcltk2 package has the tclTaskSchedule function (among others) that could do what you want. Be warned that this usually violates the idea of functions not having side effects, and you can really mess things up if the scheduled function uses any of the same objects you are working with. It would be fine if the task just read data into a local variable and plotted the latest version (just make sure it plots to the correct graphics device and does not interfere with something else you are working on).
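A minimal sketch of how that could look (the interval, task id, and task body are just illustrative):

library(tcltk2)

# Run a task every 5 seconds in the background; redo = TRUE keeps it repeating
tclTaskSchedule(5000, print(Sys.time()), id = "clock", redo = TRUE)

# ...and remove the task again when you are done
tclTaskDelete("clock")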
If you just want something to update on a regular basis, you could use a repeat loop (or a while loop) with Sys.sleep to wait the given time, then do whatever you want. You would not be able to use that R session for anything else, but you can easily run multiple R sessions at the same time, so this would not stop you from working in another session.
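For instance, a simple polling loop along these lines (the file name and plotting code are placeholders):

# Re-read the latest data and replot once a minute; this blocks the session
repeat {
  dat <- read.csv("latest.csv")
  plot(dat$x, dat$y, type = "l")
  Sys.sleep(60)
}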
Check the function ?txtProgressBar.
Regards
While I am writing .R functions, I constantly need to manually run source("funcname.r") to get the changes reflected in the workspace. I am sure it must be possible to do this automatically. So what I would like is to just make changes in my function, save the file, and be able to use the new function in the R workspace without manually "sourcing" it. How can I do that?
UPDATE: I know about selecting the appropriate lines of code and pressing CTRL+R in the R Editor (RGui), or using Notepad++ and executing the lines in R. But this approach has the disadvantage of making my workspace console "muddled". I would like to stop this practice if at all possible.
You can use RStudio, which has a "Source on Save" option.
If you are prepared to put your functions into a package, you may enjoy exploring Hadley's devtools package. This provides a suite of tools to write, test, and document packages.
https://github.com/hadley/devtools
This approach offers many advantages, but mainly reloading the package with a minimum of retyping. You will still have to type load_all("yourpackage"), but I find that small amount of typing is small beer compared to the advantages of devtools.
For additional information, including how to setup devtools, have a look at https://github.com/hadley/devtools/wiki/development
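A typical edit/reload cycle might look like this (the package path is a placeholder):

library(devtools)
load_all("yourpackage")  # sources everything in R/ and reloads the package
# ...edit and save R/funcname.r, then:
load_all("yourpackage")  # picks up the changes without restarting R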
If you're using Eclipse + StatET, you can press CTRL+R+S, which saves your script and sources it. As close to automatic as I can get.
If you can get your text editor to run a system command after it saves the file, then you could use something like AutoIt (on Windows) or a script (on a UNIX derivative) to pass a call to source() off to all running copies of R. But that's a heck of a lot of work for not much gain.
Still, I think it's much more likely to work being event-driven on the text editor end vs. having R constantly scan for updates (or somehow interface with the OS's update-event-messaging-system).
This is likely not possible (automatically detecting disc changes without intervention, or without running at least one line).
R needs to read functions into memory, so a change on disc would not be reflected in the workspace without reloading your functions.
If you are into developing R functions, some amount of messiness during your development process will likely be inevitable, but perhaps I could suggest that you try writing an R package to house your functions?
This has the advantage that you can robustly document your functions and use lazy loading, so that you have access to your functions/datasets immediately without sourcing them.
Don't be afraid of making a package: it's easy with package.skeleton(), and it doesn't have to go on CRAN; it can be just for your own personal use, without distribution. Just have fun!
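For example, a one-liner gets you started (the package and function names are placeholders for your own):

# Create a package skeleton from functions already in your workspace
package.skeleton(name = "myfuns", list = c("fun1", "fun2"))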
Try to accept some messiness during development knowing you are working towards your goal and fighting the good fight of code organization and documentation!
We are only imperfect people, in an imperfect world, but we mean well!
I use Sweave and cacheSweave with pleasure.
Occasionally I have a document in which some section takes a really long time (like 20 minutes or something) and after processing I'd like to open up the objects it created and play around with them in an interactive session.
Does anyone know a method for doing that, presumably by fetching directly against the stashR database or something?
I would have preferred to put this as a comment, but I don't have an account here, so anyway.
The easiest way to do this is probably to put save.image() statements at strategic points in the .Rnw file, so that all objects created up to that point are saved. Then one can open a new instance of R and interact with the objects without altering the Sweave file. HTH.
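Concretely, a sketch of what that could look like in the .Rnw file (the chunk label, computation, and file name are illustrative):

<<slow-section>>=
fit <- run_long_computation()   # placeholder for the 20-minute step
save.image("after-slow.RData")  # snapshot every object created so far
@

Then, in a fresh interactive session, load("after-slow.RData") restores those objects for interactive exploration.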