It seems that when I call littler from the command line, it will source ~/.Rprofile. Is there a way to prevent it from sourcing ~/.Rprofile?
It goes both ways---that we are now reading ~/.Rprofile is in large part due to users who wanted this feature, as opposed to you not wanting it :)
But there is a (simple and easy) fix: use interactive(). Witness:
edd@rob:~$ r -e 'print(interactive())'
[1] FALSE
edd@rob:~$ r -i -e 'print(interactive())'
Please do not apply R like a magic answers box, because you can mislead
others and cause harm.
-- Jeff Newmiller (about how much statistical knowledge is needed
for using R)
R-help (May 2016)
[1] TRUE
edd@rob:~$
So what happened here? First, we tested interactive(). It came back FALSE. This is the default. Nothing happened.
Second, I added the -i switch to enforce interactive mode. It printed TRUE, but also produced more output. Why?
Well, my ~/.Rprofile in essence looks like this:
## header with a few constant settings, mostly to options()
## TZ setting and related
local({  # start of large block, see Rprofile.site
    if (interactive()) {
        if (requireNamespace("fortunes", quietly=TRUE)) {
            print(fortunes::fortune())
            # more stuff
        }
    }
})
and that one file governs my interactive R sessions on the console, in Emacs/ESS and in RStudio, as well as my non-interactive r calls from, say, crontab.
So in short: yes, it is always read. But yes, you can also skip parts you do not want executed.
Related
I have bits of code that I want to show in the examples of a package but neither run (when example(my_fun) is run) nor test (when R CMD check is run) because they're slow enough to annoy users who might unthinkingly run them, and definitely slow enough to annoy the CRAN maintainers.
Writing R Extensions says
You can use \dontrun{} for text that should only be shown, but not run ...
and
Finally, there is \donttest, used (at the beginning of a separate line) to mark code that should be run by example() but not by R CMD check.
Should I nest these, i.e.
\donttest{
\dontrun{first slow example ...}
\dontrun{second slow example ...}
}
? That technically seems to go against the wording in WRE (i.e. it says that \donttest code should be run by example() ...).
I could just include them in the examples in a commented-out form or using if (FALSE) { ... } if it came to it ... but that seems ugly.
\dontrun subsumes \donttest: code that is marked with the former will neither be run by example(), nor by R CMD check. I know this because my packages for talking to Azure use \dontrun liberally, for examples that assume you have an Azure account.
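For illustration, here is a minimal roxygen2-style sketch (the function name query_azure is purely hypothetical, not taken from the actual Azure packages) of how a slow, account-dependent example can be marked:
#' Query a remote service
#'
#' @examples
#' ## fast example: run by example() and by R CMD check
#' nchar("hello")
#'
#' \dontrun{
#' ## slow example requiring an Azure account: shown on the help page,
#' ## but executed neither by example() nor by R CMD check
#' query_azure("my-resource-group")
#' }
#' @export
query_azure <- function(resource_group) {
    ## placeholder body for the sketch
    invisible(resource_group)
}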
I'm trying to call a simple Python script from within R using system2(). I've read some information, which I found vague, saying that if 'too much' memory is used, the call won't work.
If I load a large dataset and use some information from it as arguments to pass to system2(), it will only work if I manually click "Restart R" in RStudio.
What I want:
df <- read.csv('some_large_file.csv')
### extracting some info called 'args_vec'
for (arg in args_vec) {
  system2('python', arg)
}
This won't work as is. The for loop is simply passed over.
What I need:
df <- read.csv('some_large_file.csv')
### extracting some info called 'args_vec'
### something that 'restarts' R
for (arg in args_vec) {
  system2('python', arg)
}
This answer doesn't quite get what I want. Namely, it doesn't work for me within RStudio, and it calls "system" (which presents the same problem as "system2" in this case). In fact, when I put the answer referenced above in my Rprofile.site file, it just immediately closed RStudio.
I tried the suggestion as a normal function (rather than using "makeActiveBinding"), and it didn't quite work either:
##restart R in r session -- doesn't work
makeActiveBinding("refresh", function() { system("R --save"); q("no") }, .GlobalEnv)
##nor did this:
refresh <- function() { system("R --save"); q("no") }
I tried a number of variations of these two options above, but this is getting long for what feels like a simple question. There's a lot I don't yet understand about the startup process and "makeActiveBinding" is a bit mysterious. Can anyone point me in the right direction?
In Rstudio, you can restart the R session by:
command/ctrl + shift + F10
You can also use:
.rs.restartR()
RStudio has this undocumented .rs.restartR(), which is supposed to do just that: restart R.
However, it does not unload the packages that were loaded, nor does it clean the environment, so I have some doubts about whether it restarts R at all.
If you use RStudio, use the menu item Session > Restart R or the associated keyboard shortcut Ctrl+Shift+F10 (Windows and Linux) or Command+Shift+F10 (Mac OS). Additional keyboard shortcuts make it easy to restart development where you left off, i.e. to say “re-run all the code up to HERE”:
In an R script, use Ctrl+Alt+B (Windows and Linux) or Command+Option+B (Mac OS)
In R markdown, use Ctrl+Alt+P (Windows and Linux) or Command+Option+P (Mac OS)
If you run R from the shell, use Ctrl+D or q() to quit, then restart R.
Have you tried embedding the function call within the apply function, rather than a for loop?
I've had some pieces of code that ran the system out of memory in a for loop run perfectly with apply. It might help?
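For instance, a minimal sketch along those lines (it assumes args_vec holds one argument string per call, as in the question; whether this sidesteps the memory problem is untested):
results <- vapply(
    args_vec,
    function(arg) system2("python", args = arg),  # each call returns the exit status
    FUN.VALUE = integer(1)
)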
For those not limited to a command and who want something that actually resets the session (no prior state, no loaded packages, no variables, etc.), you can select Terminate R from the Session menu.
It is a bit awkward (it asks you if you are sure). If anyone knows something like MATLAB's clear all, or better yet clear classes, let me know!
How can I start a new R session in knitr? I would rather start a new session rather than use something like rm(list=ls()) because it is not equivalent.
<<myname>>=
# some R code
@
<<another_chunk>>=
# start a new R session
# more R code
@
Okay, now I have something more substantial for you, inspired by an answer on the R-help list by Georg Ruß. He suggests three things to get R back to how it was at start-up; I've written this up as a six-step manual for you.
First, you save a character vector of the packages loaded at start-up (this should be done before anything else, before you run any other code),
foo <- .packages()
Second, when you want to reset R, as you also mention, you run
rm(list=ls())
to remove all objects. Then, third, you run,
bar <- .packages()
to get a character vector of the currently attached packages. Fourth, you compute the difference,
foobar <- setdiff(bar, foo)
Fifth, you remove the difference with this work-around loop,
toRemove <- paste("package:", foobar, sep='')
# or paste0("package:", foobar) in R-2.15.0 or higher
for (i in seq_along(foobar)) {
  detach(toRemove[i], character.only=TRUE)
}
Sixth, depending on your setup, you source your .Rprofile
source(".Rprofile")
This should put R into the state it was in when you started it. I could have overlooked something.
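If you do this often, the six steps can be wrapped into a single helper, roughly like this (a sketch only; it assumes foo was captured at start-up as in the first step):
resetSession <- function(startupPackages = foo) {
    force(startupPackages)                                  # evaluate before the workspace is cleared
    rm(list = ls(envir = .GlobalEnv), envir = .GlobalEnv)   # step two: remove all objects
    extra <- setdiff(.packages(), startupPackages)          # steps three and four
    for (pkg in paste0("package:", extra)) {                # step five: detach the extras
        detach(pkg, character.only = TRUE)
    }
    if (file.exists(".Rprofile")) source(".Rprofile")       # step six
    invisible(NULL)
}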
Instead of starting a new R session in knitr, I would recommend just starting a new R session in your terminal (or command window), like this:
R -e "library(knitr); knit('your_input.Rnw')"
If you are under Windows, you have to put the bin directory of R into your environment variable PATH (I'm very tired of describing how to do this, so google it by yourself if you are in the Windows world, or see the LyX Sweave manual).
However, most editors do start a new R session when calling Sweave or knitr, e.g. LyX and RStudio, etc. You can find more possible editors at http://yihui.name/knitr/demo/editors/. I do not really see the need to call R -e ... in the terminal.
I've an R script that takes command-line arguments, where the top line is:
#!/usr/bin/Rscript --slave
I wanted to interrupt execution in a function (so I can interactively use the data variables that have been loaded by that point to work out the next bit of code I need to write). I added this inside the function in question:
browser()
but it gets ignored. A bit of searching suggests it might be because the program is running in non-interactive mode. But even more searching has not tracked down how I switch the script out of non-interactive mode so that browser() will work. Something like a browser_yes_I_really_mean_it() function.
P.S. I want to avoid altering the rest of the script if at all possible. My current approach is to copy and paste the code chunks, needed to prepare the data, into an interactive session; but as the script gets more and more complex this is getting more and more unreasonable.
UPDATE: for anyone else with the same question, it appears the answer to the actual question is that it is impossible. Once you start R in a non-interactive mode the die is cast. The given answers are therefore workarounds: either you hack your code (remembering to unhack it afterwards), or you refactor to make debugging easier. (This comment is not intended as a criticism of the answers; the suggested refactoring makes the code cleaner anyway.)
Can you just fire up R and source the file instead?
R
source("script.R")
Following mdsumner's answer, I edited my script like this:
if(!exists("argv")){
argv=commandArgs(TRUE)
if(length(argv)!=4)usage_and_exit()
}else{
if(length(argv)!=4){
stop("Must set argv as a 4 element vector. E.g. argv=c(...)")
}
}
Then no other change was needed, and I was able to do:
R
> argv=c('a','b','c','d')
> source("script.R")
In addition to the previous answer, I'd create a top-level function (e.g. doStuff) which performs the analysis you want to run in batch. The function takes the command-line options as input. In the batch script you source the file that contains this function and then call it. This way you can easily run the function in interactive mode and use e.g. browser().
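A minimal sketch of that layout (the file names and the doStuff arguments are just placeholders):
## analysis.R -- sourced by both the batch script and an interactive session
doStuff <- function(input_file, n_rows) {
    dat <- read.csv(input_file)
    ## browser()  # uncomment while developing interactively
    head(dat, n_rows)
}

## batch.R -- the script run non-interactively; it forwards the command-line arguments
argv <- commandArgs(trailingOnly = TRUE)
source("analysis.R")
doStuff(argv[1], as.integer(argv[2]))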
In some cases, the suggested solution (workaround) may not work - for example, when the R code needs to be run as part of an existing bash script. For those cases, I suggest writing your R code into the bash script using a here document:
#!/bin/bash
R --interactive << EOT
# R code starts here
argv=c('a','b','c','d')
print(interactive())
# Rest of script contents
quit("no")
# R code ends here
EOT
This way, print(interactive()) above will yield TRUE.
Sidenote: Make sure to avoid the $ character in your R code, as the shell would expand it before R sees it - for example, retrieve a column from a data.frame by using df[["X1"]] instead of df$X1.
R allows us to put code that runs at the beginning and end of a session.
What code would you suggest putting there?
I know of three interesting examples (although I don't have "how to do them" under my fingers here):
Saving the session history when closing R.
Running a fortune() at the beginning of an R session.
I was thinking of having an automated saving of the workspace. But I haven't settled on how to manage the space it would take (so that only a fixed amount of space is ever used for that backup).
Any more ideas? (or how you implement the above ideas)
p.s: this is continuing a thread started on "stat.overflow"
Apart from .Rprofile, you could define .First and .Last functions. I usually put graphics.off() in .Last to get rid of any graphics devices still running, so, in this case, it would go something like this:
.Last <- function() {
    graphics.off()
    save.image()  # optionally, you can define a specific file/folder
    system(paste("cowsay", "Goodbye @", date()))  # if you're running GNU/Linux
}
and get something like this:
 ___________________________________
< Goodbye @ Wed Aug 4 22:49:46 2010 >
 -----------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
However, this isn't terribly useful. While .Rprofile manages R start-up, the .Last function can perform various operations on exit, like saving the workspace image.
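A matching .First could look like this (a sketch; it assumes the fortunes package is installed):
.First <- function() {
    if (interactive() && requireNamespace("fortunes", quietly = TRUE)) {
        print(fortunes::fortune())  # greet each interactive session with a fortune
    }
}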
I am pretty sure we had a question like this here before. See e.g. Expert R users, what's in your .Rprofile? or more generally search for "[r] startup" or other appropriate tags.