Why do some functions open in read-only mode in RStudio?

I am debugging my scripts in RStudio and came across some strange behaviour. For all my working functions, when I Command + click (Ctrl + click) on a call, RStudio takes me to the script defining that function.
However, for one of my functions it opens in read-only mode instead. Does that mean the function is buggy? How can I fix this behaviour?

It depends on how the function was loaded: when you source the whole file, RStudio associates the function with its source file and can jump to its definition. If, by contrast, you loaded the function by executing only a single code fragment, RStudio doesn't know which source file the function belongs to.
If you then try to jump to its definition, RStudio creates a temporary file containing a deparsed representation of the function, not the original source. And since that file is a temporary rather than the original source, RStudio marks it as read-only.
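You can see the difference yourself by inspecting source references (a minimal sketch; helpers.R and my_fun() are hypothetical):
# Sourcing a file with source references keeps the link to the original file,
# which is what lets RStudio navigate to it. Assumes helpers.R defines my_fun().
source("helpers.R", keep.source = TRUE)
getSrcref(my_fun)            # srcref pointing into helpers.R
# A function created without source references has no file to open:
f <- eval(parse(text = "function(x) x + 1", keep.source = FALSE))
getSrcref(f)                 # NULL
cat(deparse(f), sep = "\n")  # the deparsed text RStudio shows read-only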

Related

R Hide internal objects in package from autocomplete

I am developing a package in RStudio and am trying to save an object as internal to the package so that the user cannot see it. I create a default package project in RStudio called "testpackage" and then execute:
library(devtools)
test.hidden.object <- 1:5
use_data(test.hidden.object, internal = TRUE, overwrite = TRUE)
Then I build the package, which saves it to my library. Then I restart RStudio and execute:
library(testpackage)
test.hidden.object
It prints out: [1] 1 2 3 4 5
The environment is empty; executing:
ls()
prints out character(0).
From what I understand, it is not possible to hide an object in a package from a user who knows the object's name, and I don't want to do that. But what worries me is that the autocomplete functionality is able to find these objects.
In both RStudio and the R console, if I load the package, type "test.hid" and then press TAB, I can see the object "test.hidden.object" as an option. Should autocomplete be able to reveal internal objects? Am I building the package incorrectly?
To fix this problem I have so far updated R, RStudio, and devtools, and I have manually created the sysdata.rda file myself instead of using use_data, but each time I can still see the internal objects with autocomplete.
I think you are mistaken in your description. Autocompletion in RStudio and other R front ends will only show symbols that are visible in the current context. Your users can't make use of symbols that aren't exported, so autocompletion won't display them.
You may see hidden symbols while editing files in your own package, because your package code can see hidden symbols. But your users won't.
Edited to add: I've just followed your instructions more closely, and managed to duplicate what you saw. The problem is that by default the NAMESPACE file declares everything to be public, regardless of the setting for internal. This looks like a devtools misunderstanding or bug.
To fix it, manually edit your NAMESPACE file to make sure only public symbols are exported.
2nd edit: The docs for devtools::use_data have been updated on GitHub. They now say: "If TRUE, stores all objects in a single R/sysdata.rda file. Objects in this file follow the usual export rules. Note that this means they will be exported if you are using the common exportPattern() rule which exports all objects except for those that start with ."
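To make the fix concrete, here is a sketch of the NAMESPACE change; my_public_function is a hypothetical exported symbol:
# Before: the default catch-all rule exports everything,
# including the objects stored in R/sysdata.rda
# exportPattern("^[[:alpha:]]+")
# After: list the public symbols explicitly
export(my_public_function)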

Make R project Automatically open Specific Scripts

I work in a team that mainly uses R. I am used to working with R projects in RStudio, which I like because when I open one, all my scripts and everything else are in the right place. However, when another member of the team opens one of my projects, it loads the values and data but does not open the R scripts (you can see this by opening the project from Windows Explorer rather than through the menu at the top right of RStudio). I guess something can be done in the .Rprofile, but I did not find any command to physically open a script. I tried
file.edit("./Main.R")
but it did not open anything. It just gave me the message:
Error: could not find function "file.edit"
As always, thanks for your help!
EDIT:
I tried to use
file.show
file.edit
shell.exec(file.path(getwd(), "Main.R"))
in the .Rprofile. Nothing worked.
Romain
You can use the following code in the .Rprofile file.
setHook("rstudio.sessionInit", function(newSession) {
  if (newSession)
    rstudioapi::navigateToFile('<file name>', line = -1L, column = -1L)
}, action = "append")
The rstudioapi package has the function navigateToFile to open a file in RStudio. The problem is that the code in the .Rprofile runs before RStudio initialization. To deal with this, you can use the setHook function (from the base package) to make the code execute after RStudio has initialized.
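For the asker's case, the complete .Rprofile entry might look like this (assuming Main.R sits at the project root):
setHook("rstudio.sessionInit", function(newSession) {
  # runs once RStudio has finished initializing, so rstudioapi is available
  if (newSession)
    rstudioapi::navigateToFile("Main.R")
}, action = "append")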
file.edit requires the utils package
library(utils)
file.edit("Master.R")
However, if it opens in Notepad rather than RStudio, you have the same problem as me. I've tried setting editor= in all possible places: .Rprofile, Rprofile, Rprofile.site, with and without .First() function definitions and calls. Still, RStudio does not open the .R file in RStudio even when told to. It may be linked to the .RData file being loaded after .Rprofile. A bug? Or at least a feature RStudio should incorporate in their R project file specification.

Executing an R script in a way other than using source() in JRI

I am new to R and have been trying to use JRI. Through JRI, I have used the eval() function to get certain results. When I want to execute an R script, I have used source(). However, I am now in a situation where I need to execute a script on continuously incoming data. While I could still use source(), I don't think that would be optimal from a performance perspective.
What I did was read the entire R script into memory and then try to use eval(), passing the script, but this does not seem to work. I have ensured that the script has been correctly loaded into memory: if I write this in-memory script to a file and source that newly created file, it produces the expected results.
Is there a way for me to not keep sourcing the same file over and over again and execute it from memory? Each of my data units are independent and have to be processed independently and as soon as they become available. I cannot wait to collect a bunch of data units and then pass them on to the R script.
I have searched a lot and not found anything related to this. Any pointers which could help me in this direction would be really helpful.
The way I handled this is as below -
I enclosed the entire script into a function.
I sourced the script file (which now contains the function) at the start of the execution of my program.
Where I previously sourced the file, I now just call the function which contains the script itself, i.e.:
REXP result = rengine.eval("retVal<-" + getFunctionName() + "()");
Here, getFunctionName() gives me the name of the function which contains the script.
Since this is loaded into memory and available, I do not have to source the script file every time I want to execute it. Any arguments are passed to the script as environment variables.
This seems to be a workaround, but solves my problem. Any better options are welcome.
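On the R side, such a wrapper might look like the following sketch; processUnit() and the DATA_UNIT environment variable are hypothetical names:
# script.R: the former script body now lives inside a single function,
# sourced once at startup and called repeatedly from Java.
processUnit <- function() {
  # arguments arrive as environment variables set by the Java caller
  unit <- Sys.getenv("DATA_UNIT")
  # ... the original per-unit processing would go here ...
  retVal <- nchar(unit)  # placeholder computation
  retVal
}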

R - Execute a function in a file

There is an R file, and there is a function getInfo() in it.
I want to run only that function from the script file.
Is that possible?
I know sourcing the file and then calling the function by name will work.
But that will also run the rest of the code in the script file, which I don't want.
What is the best way out here?
When you use source on a script file, all the code in that file will be loaded into the R session currently active. Any code that is not in a function, will be executed. I see two options:
Put the function in a separate source file, or even a package if the number of functions grows.
Set a global option using options() and retrieve its value in the sourced file using getOption(), making the execution of the non-function code dependent on that option (see the sketch below). This does require you to always set the option before sourcing the file, in any project you use it in.
I would go for option 1.
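A minimal sketch of option 2, assuming a hypothetical script analysis.R and option name run.analysis.body:
# In the project that only needs the function:
options(run.analysis.body = FALSE)
source("analysis.R")  # defines getInfo() but skips the script body
getInfo()
# Inside analysis.R:
getInfo <- function() "some info"
if (isTRUE(getOption("run.analysis.body", TRUE))) {
  # non-function code, run only when the option allows it
  message("running the full script")
}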

Starting R and calling a script from a batch file

I have an R-based GUI that allows some non-technical users access to a stats model. As it stands, the users have to first load R and then type loadGui() at the command line.
While this isn't overly challenging, I don't like making non-technical people type anything at a command line. I had the idea of writing a .bat file (users are all running Windows, though multi-platform solutions are also appreciated) that starts RGui, then autoruns that command.
My first problem is opening RGui from the command line. While I can provide an explicit path, such as
"%ProgramW6432%\R\R-2.15.1\bin\i386\Rgui.exe"
it will need updating each time R is upgraded. It would be better to retrieve the location of RGui from the %path% environment variable, but I don't know an easy way to parse that.
The second, larger problem is how to call commands for R on startup from the command line. My first thought is that I could take a copy of ~/.Rprofile, append the extra command, and then replace the original copy of the file once R is loaded. This is awfully messy though, so I'd like an alternative.
Running R in batch mode isn't an option, firstly since I can't persuade GUIs to display themselves, and secondly because I would like the R console available, even if the users shouldn't need to use it.
If you want a toy GUI to test your ideas, try this:
loadGui <- function()
{
  library(gWidgetstcltk)
  win <- gwindow("test")
  rad <- gradio(letters[1:3], cont = win)
}
Problem 1: I simply do not ever install in the suggested default directory on Windows, but rather group R and a few related things in, say, c:/opt/, where I install R itself in, say, c:/opt/R-current, so that the path c:/opt/R-current/bin remains constant. On upgrade, I first rename the old version to R-previous and then install into a new R-current.
Problem 2: I think I solved that many moons ago with scripts. You can now use Rscript.exe to launch these, and there are tcltk examples for waiting for a prompt.
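A sketch of that approach; launcher.R and gui.R are hypothetical file names:
# launcher.R, started with: Rscript.exe launcher.R
# Assumes gui.R defines loadGui() as in the question.
source("gui.R")
loadGui()
# Keep the Rscript process (and with it the GUI) alive; a cruder version
# of the window-closed check shown in the last answer below.
while (TRUE) Sys.sleep(1)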
I have done something similar a couple of times. In my cases the client was using Windows, so I just installed R on their computer and created a shortcut on their desktop to run R. I then right-clicked on the shortcut and chose Properties to get the properties dialog, and changed the "Start in" folder to the one I wanted R to run from (which had the .Rdata file with the correct data, and either a .First function in the .Rdata file or an .Rprofile in the folder). There is also a "Run:" option with a "Minimized" setting to run the main R window minimized.
I had created the functions that I wanted to run (usually a specialized GUI using tcltk) and any needed data, saved them in the .Rdata file, and also created either .First or .Rprofile to run the command that showed the GUI, as sketched below. The user double-clicks the icon on the desktop and up pops my GUI, which they can work with while ignoring the other parts.
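A sketch of that startup hook; showMyGui() stands in for whatever GUI function is saved in the .Rdata file:
# .Rprofile in the "Start in" folder. .First runs after the workspace
# (.RData) has been loaded, so functions saved there are available.
.First <- function() {
  if (interactive()) showMyGui()
}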
Take a look at the ProjectTemplate library. It does what you want to do: it loads the libraries you use and runs R files automatically after loading, as well as a lot of other useful stuff.
Using the answer from https://stackoverflow.com/a/27350487/41338 and a comment from Richie Cotton above, I have arrived at the following solution for keeping a script alive until a window is closed: check whether the pointer to the window is still valid.
For a RGtk2 window created and shown using:
library(RGtk2)
mainWindow <- gtkWindow("toplevel", show = TRUE)
Create a function which checks if the pointer to it exists:
isnull <- function(pointer) {
  # temporarily strip attributes so the comparison sees only the raw pointer
  a <- attributes(pointer)
  attributes(pointer) <- NULL
  out <- identical(pointer, new("externalptr"))
  attributes(pointer) <- a
  return(out)
}
and at the end of your script:
while(!isnull(mainWindow)) Sys.sleep(1)
