RStudio very slow when editing large code

RStudio is working well in general (v 2021.09.1 Build 372).
But when editing a large file (7,000 lines, a collection of functions), even a click on the text or unfolding a section takes about 4 seconds to get a response.
I've tried to disable diagnostics, but the problem still occurs.
Is there any setup that can help?

It seems that disabling "use real-time spell-checking" solves the problem.
Before that, I had tried other solutions, like disabling "Diagnostics", without results.
Additionally, to avoid slow debugging of that big file, when debugging is needed, specific functions are copied to a temp.R file and run with debugSource(), as sketched below.
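For illustration, a minimal sketch of that workflow; slow_fun and test_input are hypothetical placeholders, and debugSource() is the RStudio-specific variant of source() that honours breakpoints set in the editor:

# copy only the function under inspection into temp.R, set a breakpoint on a
# line inside it in the editor, then source the small file instead of the big one
debugSource("temp.R")
slow_fun(test_input)  # hypothetical call; execution pauses at the breakpoint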

Related

GTM Preview mode keeps returning "Out of Memory" error

Recently, I started experiencing an "out of memory" error while previewing in GTM, and it is getting more frequent now.
It occurs every time I try submitting a form in preview mode, while the other occurrences are pretty random.
Has anyone experienced the same thing? It started two weeks ago. Any idea how to resolve it?
Thanks in advance
Roman
Try tracing the variables and overall code that fire on the form submission. It looks like something in the container causes a rapid memory leak.
There are a few approaches to debugging. I would just read the code and see if certain parts of it are poorly structured. If I can't see anything immediately, which is pretty unlikely, I would start inserting breakpoints in devtools to get closer to the bug.
A simpler but slower approach would be to disable parts of the logic in GTM and reproduce the out-of-memory error to find the exact trigger/variable/tag/template that causes the issue. After that, just debug the code in question.

Stop RStudio always opening source viewer when in debug mode when evaluating R code

I often develop functions in the following way: create a basic function, enter debug mode, and then write the function body while in debug mode. E.g.,
myfun <- function(x) print(x)
debugonce(myfun)
myfun("test")
The benefit of this is that I get real-time feedback on whether my code works with the arguments passed to it.
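For illustration, this is roughly what such a session looks like; the Browse prompt lines are an assumed transcript, not output from the question:

myfun <- function(x) print(x)
debugonce(myfun)
myfun("test")
# Browse[2]> x          # inspect the live argument
# Browse[2]> nchar(x)   # try candidate code against the real input
# Browse[2]> c          # continue execution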
However, RStudio appears to have recently changed its behaviour in a way that makes this more difficult. I recently upgraded to 2021.09.0 Build 351.
Previously, it would open the Source Editor when you called this command.
But if you closed this screen, you could return to your code, edit it, evaluate it, and so on without the viewer re-appearing.
However, now it seems to reappear every time you evaluate your code.
For instance, if I close the viewer, go back to my main script, and evaluate x, the Viewer opens back up.
This is super annoying for someone used to editing and evaluating code while in debug mode using the source script file.
Is there any way of entirely disabling the Source Editor view in RStudio? Or alternatively, is there a way to edit and evaluate code without the Source Editor constantly re-appearing (like how it used to work)?
My current fix is to downgrade to Version 1.3.1093, where you could edit and evaluate code without the Viewer constantly re-appearing. But it would be a shame to miss out on future upgrades.
This issue was principally a bug that has now been fixed.
At the time of posting, downloading a daily build fixes the issue; eventually the fix will be incorporated into the beta and standard builds.
https://github.com/rstudio/rstudio/issues/9943

How to terminate a plotting job in Rstudio

Say I enter the following commands into the console in Rstudio
x = seq(0, 1e11, by = .01)
plot(x,sin(x))
Clearly this is a very silly thing to do. But my question is how do you terminate this process? I couldn't find this answered anywhere.
Attempted solutions: pressing Ctrl+Q, pressing Esc, going to Session -> Interrupt R, going to Session -> Terminate R. Nothing seems to work. This seems to be specific to plotting; for example, if you run a stupidly large loop, most of the above options work as expected.
Ideally I'd like a solution that doesn't lose the R script I have been working on in the console (as I haven't saved it in a while).
Re-posting my comment as an answer since it seems to have solved your problem.
Save early, save often, cry less.
Try clicking the little red stop icon above the console panel (unlikely to work if you've done all that you've done already).
Try copy/pasting the script out to a text editor.
Try killing just the rsession process through your OS (might leave RStudio open). The good news is that RStudio is often pretty smart about backing up working copies of scripts, so you might find it's still there even if you have to kill the whole program.
Entering dev.off() in the console will kill the current plot, and I find it's less likely to crash RStudio than some of the options given by Ari B. Friedman.
That said, save early save often is always sound advice.
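For illustration, a minimal sketch of both the escape hatch and a safer way to draw a dense curve; the length.out value is an arbitrary choice:

dev.off()  # closes the current graphics device, abandoning the in-progress plot
# plot a representative subset rather than ~1e13 points:
x <- seq(0, 100, length.out = 1e4)
plot(x, sin(x), type = "l")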

R knitr: is it possible to use cached results across different machines?

Issue solved, see answers for details.
I would like to run some code (with knitr) on a more powerful server and then maybe make small changes on my own laptop. Even after copying across the entire folder, it seems that the cache is rebuilt when re-compiling locally. Is there a way to avoid that and actually use the results in the cache?
Update: the problem arose from different versions of knitr on different machines.
In theory, yes -- if you do not change anything, the cache will be kept. In practice, you have to check carefully what the "small changes" are. The documentation page for cache explains when the cache will be rebuilt; you need to check whether all three conditions are met.
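For illustration, a cached chunk in an .Rmd might look like the following; the chunk label and the project-relative cache.path are assumptions, chosen so the cache directory is easy to copy between machines:

```{r expensive-model, cache=TRUE, cache.path="cache/"}
fit <- lm(mpg ~ wt, data = mtcars)  # stands in for the expensive computation
summary(fit)
```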
I wonder if, in addition to @Yihui's answer, the process of copying from one machine to another changes the datetimes on the files, so that they look out of date even when nothing has changed.
Look at the dates on the files involved after copying. If you can figure out which files need to be newer than others, then touching them may prevent the rebuild; see the sketch below.
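For illustration, a minimal sketch using base R's Sys.setFileTime(); the cache directory name is an assumption:

# bump the modification times of the cached files so they look newer than
# the source document after copying between machines
for (f in list.files("cache", full.names = TRUE)) {
  Sys.setFileTime(f, Sys.time())
}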
Another option would be to just paste in the cached pieces directly so that they are not rerun (though that means you have to rerun and repaste manually if you change anything in those parts).

Subversion: "svn update" loses CSS data

Recently, I've noticed strange behavior by Subversion. Occasionally, and seemingly randomly, the "svn up" command will wreak havoc on my CSS files. 99% of the time it works fine, but when it goes bad, it's pretty damn terrible.
Instead of noting a conflict as it should, Subversion appears to be trashing all incoming conflict lines and reporting a successful merge. This results in massively inconvenient manual merges because the incoming changes effectively disappear unless they're manually placed back into the file.
I would have believed this was a case of user error, but I just watched it happen. We have two designers that frequently work on the same CSS files, but both are familiar and proficient with conflict resolution.
As near as I can figure, this happens when both designers have a large number of changes to check in and one beats the other to the punch. Is it possible that this is somehow confusing SVN's merging algorithm?
Any experience or helpful anecdotes dealing with this type of behavior from SVN are welcome.
If you can find a diff/merge program that's better at detecting the minimal changes in files of this structure, use the --diff-cmd option to svn update to invoke it.
It may be tedious but you can check the changes in the CSS file by using
svn diff -r 100:101 filename/url
for example, and stepping back from your HEAD revision. This should show what changes were made, at what revision, and by whom. It sounds like a merging issue I've had before; unfortunately, I found myself resolving it by looking at previous revisions and merging them manually too.
