R language - timer

Does the R language have any easy way to set up a timer function?
By a timer function, I mean a function that sits in the background of the session and executes every so often.
Cheers!

The tcltk2 package has the tclTaskSchedule function (and others) that could be used to do what you want. Be warned that this will usually violate the idea of functions not having side effects, and you can really mess things up if the scheduled function uses any of the same objects that you are working with. It would be fine if the task just read data into a local variable and plotted the latest version (just make sure that it plots to the correct graphics device and does not mess up something else you are working on).
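For instance, a minimal sketch of that approach (the task id "heartbeat" and the two-second interval are arbitrary choices):
library(tcltk2)
# Print a timestamp every 2000 ms; redo = TRUE keeps rescheduling the task.
tclTaskSchedule(2000, print(Sys.time()), id = "heartbeat", redo = TRUE)
# Stop the background task when you are done:
# tclTaskDelete("heartbeat")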
If you just want something to update on a regular basis you could use a repeat loop (or while) and Sys.sleep to wait the given time, then do whatever you want. You would not be able to use that R session for anything else, but you can easily have multiple R sessions running at the same time so that would not prevent you from working in another R session.
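A sketch of that pattern, assuming the goal is to redraw a plot from freshly read data (the ten-second interval is arbitrary):
repeat {
    dat <- rnorm(100)     # stand-in for re-reading your data
    plot(dat, type = "l")
    Sys.sleep(10)         # wait ten seconds before the next refresh
}
Remember that this blocks the session it runs in, so start it in a dedicated R process.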

Check the function ?txtProgressBar.
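For example, a minimal progress bar around a loop (the Sys.sleep call stands in for real work):
pb <- txtProgressBar(min = 0, max = 100, style = 3)
for (i in 1:100) {
    Sys.sleep(0.01)           # stand-in for real work
    setTxtProgressBar(pb, i)  # advance the bar
}
close(pb)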
Regards

Related

How do I show a message when an sql query is running in R?

I'm working on some code in R that makes heavy use of the RPostgreSQL package. Some of the queries I'm using take quite a while to download, so I would like to periodically show a message to the users (something like "Download in progress, please be patient...") so that it is clear that the program didn't crash.
Ideally, I'd like to write the code as an all-purpose wrapper for functions so that it will give feedback to the user along the way (download in progress, time elapsed, etc.).
For example, if we have an example function:
x <- runif(10)
I would like to make a wrapper of the form
some_wrapper_function(x <- runif(10))
where some_wrapper_function gives periodic updates while the wrapped code is running.
It seems like this requires parallel code, with the hitch that the parallel processes need to talk to each other at least once.
Any thoughts? Or is there an existing function that does this?
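In the absence of a parallel solution, a minimal non-parallel sketch of such a wrapper (the name with_progress is hypothetical): it prints a message before evaluating the expression and reports the elapsed time afterwards, though it cannot emit updates while the query is still running.
with_progress <- function(expr, msg = "Download in progress, please be patient...") {
    message(msg)
    t0 <- Sys.time()
    result <- eval.parent(substitute(expr))  # run the wrapped code in the caller's frame
    elapsed <- as.numeric(difftime(Sys.time(), t0, units = "secs"))
    message(sprintf("Done. Elapsed: %.1f seconds.", elapsed))
    invisible(result)
}
# Usage: with_progress(x <- runif(10))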

R: is there a way to dynamically update a function as you are building it?

I am very new to R and I am using RStudio. I am building a new "user-defined" function, which entails a huge amount of trial and error. Every time I make the slightest change to the function I need to select the entire function and press Ctrl+Enter in order to "commit" the function to the workspace.
I am hoping there is a better way of doing it, perhaps in a separate window that automatically "commits" when I save.
I am coming from Matlab and am used to just saving the function, after which it is already "committed".
Ctrl+Shift+P re-runs the previously executed region, so you won't have to highlight your function again. This will work unless you have executed something else in the interim.
If you want to run some part of your code in RStudio, you simply have to use Ctrl+Enter. If the code were run every time you saved it, it could have very bad effects. Imagine that you have a huge script that runs for a long time and uses a lot of computer resources - this would force you to kill R to stop the script every time you saved it!
What you could do is save the script in an external file and then call it from your main script using source("some_directory/myscript.R").

How to call/use R objects from a different session?

I need to use an R object from another session but I don't know how to call it or load it.
Specifically, I'm using R from within Processing (Java), and the session I established with the Rserve package to use R from within Processing is different from the one I'm using in RStudio, but I need to use an object (a cv.glmnet object) that was created in RStudio.
Does anyone have an idea of how to do this?
Thanks! Every thought will be very much appreciated.
If you are on the same machine, one (simple) way is to
saveRDS(objA, "/tmp/objA.rds")  # adjust the path as needed
and then do
objA <- readRDS("/tmp/objA.rds")
in the receiving session. There are better ways not involving files (e.g. writing to a Redis instance), but they require "more" from your side in terms of setup.
There is also a way to send an object via socket connections, but it is not trivial either.
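For completeness, a base-R sketch of the socket route (port 6011 is an arbitrary choice, and both sessions must be able to reach each other):
# In the sending session: wait for a connection, then serialize the object.
con <- socketConnection(port = 6011, server = TRUE, blocking = TRUE, open = "wb")
serialize(objA, con)
close(con)
# In the receiving session (e.g. the one behind Rserve):
con <- socketConnection("localhost", port = 6011, blocking = TRUE, open = "rb")
objA <- unserialize(con)
close(con)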

Redirect all plot output to specific file

I want to automatically redirect all plotting to a file (reason: see below). Is there a non-hacky way of accomplishing that?
Lacking that, I’m actually not afraid of overriding the built-in functions, I’m that desperate. The easiest way I can think of is to hook into the fundamental plot-window creation function and calling pdf(…) and then hooking into the plot-finalising function and calling dev.off() there.
But what are these functions? Through debugging I’ve tentatively identified dev.hold and dev.flush – but is this actually true universally? Can I hook into those functions? I cannot override them with R.utils’ reassignInNamespace because they are locked, and just putting same-name functions into the global namespace doesn’t work (they’re ignored by plot).
So, why would I want to do something so horrible?
Because I’m working on a remote server and despite my best attempts, and long debugging sessions with our systems support, I cannot get X11 forwarding to work reliably. Not being able to preview a plot is making my workflow horribly inefficient. I’ve given up on trying to get X11 to work so now I’m creating PDFs in my public_html folder and just refresh the browser.
This works quite well – except that it’s really annoying and quite time-consuming to always have to surround your plotting function calls with pdf(…) … dev.off(), especially in interactive sessions where you want to quickly create a plot while in a meeting with collaborators. My collaborators (understandably) haven’t got the patience for that.
For now I’m helping myself with the following function definition:
preview <- function(.expr, ...) {
    on.exit(dev.off())           # close the device even if .expr throws an error
    pdf(PREVIEW_FILE_NAME, ...)  # open the preview PDF before evaluating
    eval(substitute(.expr))      # evaluate the captured plotting expression
}
Which is used like this:
preview(plot(1:100, rnorm(100) * 1:100))
That works a-ok. But this workflow is a real bottleneck in meetings, and I’d like to get rid of the preview call to streamline it as much as possible.
Any chance at all?
If you set options(device=FUN) then the graphics device function FUN becomes the new default graphics device that will be opened when a plot is created and a device is not already open.
So, one option would be to write a function that calls pdf or png or another graphics device with the filename and options that you want (probably onefile=FALSE in pdf), then set this function as the default in the options. You may need to use one of dev.off, plot.new, or frame to finalize the current plot (R does not finalize the plot until you close the device or go to a new plot, in case you want to add anything to the current one).
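A sketch of that idea, assuming the files should land in ~/public_html (the path and the name preview_device are placeholders; with onefile=FALSE, pdf() substitutes the page number for %03d in the filename):
preview_device <- function(...) {
    pdf("~/public_html/preview%03d.pdf", onefile = FALSE, ...)
}
options(device = preview_device)
# The next plot() call opens the PDF automatically; finalize it with dev.off():
# plot(1:10); dev.off()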
If you will never add to a plot then you could use addTaskCallback to call dev.off automatically for you. There may be other hooks that you could use to finalize as well.
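A sketch of that callback approach; returning TRUE keeps the callback registered after each top-level call:
addTaskCallback(function(expr, value, ok, visible) {
    if (dev.cur() > 1) dev.off()  # close any open device, finalizing the file
    TRUE
})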

Automatically "sourcing" function on change

While I am writing .R functions I constantly need to manually write source("funcname.r") to get the changes reflected in the workspace. I am sure it must be possible to do this automatically. So what I would like would be just to make changes in my function, save the function and be able to use the new function in R workspace without manually "sourcing" this function. How can I do that?
UPDATE: I know about selecting appropriate lines of code and pressing CTRL+R in the R Editor (RGui) or using Notepad++ and executing the lines into R. But this approach has the disadvantage of making my console "muddled". I would like to stop this practice if at all possible.
You can use RStudio, which has a "Source on Save" option.
If you are prepared to organise your functions into a package, you may enjoy exploring Hadley's devtools package. This provides a suite of tools to write, test and document packages.
https://github.com/hadley/devtools
This approach offers many advantages, but mainly reloading the package with a minimum of retyping.
You will still have to type load_all("yourpackage"), but I find this small amount of typing is small beer compared to the advantages of devtools.
For additional information, including how to setup devtools, have a look at https://github.com/hadley/devtools/wiki/development
If you're using Eclipse + StatET, you can press CTRL+R+S, which saves your script and sources it. As close to automatic as I can get.
If you can get your text editor to run a system command after it saves the file, then you could use something like AutoIt (on Windows) or a shell script (on a UNIX derivative) to pass a source call off to all running copies of R. But that's a heck of a lot of work for not much gain.
Still, I think an event-driven approach on the text editor's end is much more likely to work than having R constantly scan for updates (or somehow interface with the OS's update-event messaging system).
This is likely not possible (automatically detecting disk changes without intervention or without running at least one line).
R needs to read functions into memory, so a change on disk wouldn't be reflected in the workspace without reloading your functions.
If you are into developing R functions, some amount of messiness during your development process will likely be inevitable, but perhaps I could suggest that you try writing an R package to house your functions?
This has the advantage of letting you robustly document your functions and of lazy loading, so that you have access to your functions/datasets immediately without sourcing them.
Don't be afraid of making a package: it's easy with package.skeleton(), and it doesn't have to go on CRAN - it could be for your own personal use without distribution! Just have fun!
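For instance, a minimal sketch (the function and package names here are made up):
# Two workspace functions to package up
add_one <- function(x) x + 1
double  <- function(x) x * 2
# Generate the package skeleton in the working directory
package.skeleton(name = "mytools", list = c("add_one", "double"))
# After editing the generated files, reload during development with:
# devtools::load_all("mytools")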
Try to accept some messiness during development knowing you are working towards your goal and fighting the good fight of code organization and documentation!
We are only imperfect people, in an imperfect world, but we mean well!
