I've inherited a sweave file from a different author. I'd like to pause it after it finishes running the R code to interrogate the variables and see the objects in the console before it goes to PDF generation.
Is there a way to do this conveniently in RStudio? Or even in Emacs if I must?
Thanks!
For debugging or checking Sweave documents, run the file through Stangle, e.g.
Stangle("a.rnw")
This produces a pure R file, which you can debug separately. If the tangled file runs OK but the Sweave'd document does not, the problem is almost always in some \Sexpr{} expression. These are difficult to locate, and the error messages can be highly confusing.
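For example, a minimal sketch of that workflow, assuming the document is called a.rnw and the chunks tangle to a.R:
Stangle("a.rnw")            # extract the R chunks into a.R
source("a.R", echo = TRUE)  # run the chunks in your interactive session
ls()                        # inspect the objects the chunks created
After the source() call, everything the chunks created is sitting in your workspace, which is effectively the "pause before PDF generation" you are after.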
I am using Julia 1.8.4 in Jupyter; however, I get the following error message when I start it up.
[code_prettify] Sorry, can't use kernel language julia.
Configurations are currently only defined for the following languages: python, r, javascript
See readme for more details.
If I click "Ok", I am able to run the code in the cells but Jupyter does not highlight Julia code. It continues highlighting Python code and syntax. Is there a way I can fix this?
Or could someone refer me to the documentation?
As the error message indicates, code_prettify only has built-in configurations for Python, JavaScript, and R. From the docs:
Example implementations are provided for prettifiers for ipython, ir and ijavascript kernels which should work out of the box ...
Other languages may be added as defaults in the future
Support for Julia is yet to be added, hence the error message.
That said, code_prettify is only needed if you want to reformat your code (semi-)automatically. Syntax highlighting should work regardless, without any extension, and based on my testing, it does. Loading a notebook with a Julia kernel, the error message appears in the console, but the code is syntax-highlighted as usual.
So, if the syntax highlighting problem persists across notebook restarts, the issue must be with some other part of your notebook setup.
I am writing a book using LyX and have been using it for several years. Some time ago, maybe close to a year, evidently after updating R, MiKTeX, and LyX, I started getting a dialog saying that my installation does not include a knitr-to-PDF converter, but that I can use an external converter if I authorize it, which I do, and things work. But I have to do this every time I compile a new PDF from LyX. Does someone know why I am missing the converter, and how I can add it? (I have updated MiKTeX many times, which doesn't fix it.)
A secondary question: it seems that the knitr-to-pdflatex converter controls the magnification of the PDF file upon opening (overriding Acrobat Pro DC 2017 magnification settings) and produces too small a magnification, which I have to increase every time I compile from LyX to pdflatex. Any suggestions on how to fix this very annoying problem?
Thanks a ton for any help anyone can provide.
Doug Martin
The dialog message I get upon trying to compile the LYX file to pdflatex:
LyX: An external converter requires your authorization
The requested operation requires the use of a converter from
knitr to pdflatex:
Rscript --verbose --no-save --norestore $$s/scripts/lyxknitr.R $$p$$i $$p$$o $$e $$r
This external program can execute arbitrary commands on your system,
including dangerous ones, if instructed to do so by a maliciously
crafted LyX document
Would you like to run this converter?
Only run if you trust the origin/sender of the LyX document!
[Do not run] [Run] [Always run for this document]
Have searched Stack Exchange and other resources for an answer, but have not received one. Doug
According to this message:
https://www.mail-archive.com/lyx-devel@lists.lyx.org/msg197583.html
this prompt was added to protect against malicious code. Another message later in that thread says one of the preference settings will let you disable the check.
Given the parallel package's warning against using mclapply() in GUI environments, I've been moving away from using RStudio for scripts calling that function. I think I've observed (though I can't test for it) a performance improvement.
I realize that knitting markdown documents with parallel processes works in RStudio, as does running mclapply() in RStudio, much of the time. But can I expect better performance if I knit through Terminal, instead of through RStudio? Or might RStudio's calls to knit() not actually fork the GUI? If so, could calling source() from RStudio's Console be safe as well?
Unfortunately, I don't know how to reproduce the problems that (sometimes?) occur when forking through a GUI, so I haven't been able to run any tests myself. So perhaps a better question is, can anyone think of a systematic method of testing for which types of function calls will result in these problems?
For reference: https://stat.ethz.ch/R-manual/R-devel/library/parallel/doc/parallel.pdf. (See point 2 in the introduction.)
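For context, the kind of comparison I have in mind would be a toy benchmark like the one below (the workload is made up), run once from the RStudio console and once via Rscript in Terminal:
library(parallel)
slow_square <- function(i) { Sys.sleep(0.1); i^2 }            # stand-in for real work
system.time(res <- mclapply(1:40, slow_square, mc.cores = 4))
But timing alone won't expose the GUI-forking failures the documentation warns about, which is why I'm asking for a more systematic approach.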
I usually keep the R console open all day, but sometimes I need to clear my history and my workspace so that I can test functions or load new data.
I'm wondering whether there is an easier way, perhaps a command I can put in my .Rprofile, so that I can refresh the R console without quitting or restarting my current session.
What I have usually done is q() without saving, then start R again and clear the history. I think somebody here might be able to give me some better suggestions.
Thanks in advance.
As far as the history is concerned, on UNIX-like systems (mine is Debian) this command refreshes it:
loadhistory("")
However, as noted in the comments, loadhistory seems to be platform-dependent.
Check your ?loadhistory if present on your platform. Mine says:
There are several history mechanisms available for the different
R consoles, which work in similar but not identical ways. There
are separate versions of this help file for Unix and Windows.
The functions described here work on Unix-alikes under the
readline command-line interface but may not otherwise (for
example, in batch use or in an embedded application)
Is it possible to call R scripts in a MATLAB program? How can I do that?
You can use R in batch mode. If R is on your path, then you can call it from MATLAB:
system('R CMD BATCH infile outfile');
This will run the code in infile and place the output in outfile.
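As an illustration, infile is just an ordinary R script; a minimal sketch (the file name analysis.R is made up):
# analysis.R -- runs non-interactively; printed output ends up in the outfile
x <- rnorm(1000)
cat("mean of x:", mean(x), "\n")
The MATLAB call would then be system('R CMD BATCH analysis.R analysis.Rout');.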
EDIT:
You can also give another approach a try, using the R package rscproxy and the R(D)COM Server, described here.
After using R(D)COM and the Matlab R-link for a while, I do not recommend them. The COM interface has trouble parsing many commands, and it is difficult to debug the code. I recommend using a system command from Matlab, as described in the
R Wiki.
system is almost definitely the way to go, as described in other answers. For completeness, you could also use MATLAB's capability to run Java code, and JRI or RCaller to call R from Java. Similarly, you can use MATLAB's capability for running .NET code and R.NET.
Yes. On Windows, I have done a lot of this via the Matlab R-link and then R(D)COM server on the R side.
It works beautifully for passing commands and data back and forth. Calling R via the OS is feasible, but then you have to deparse (write) and parse (load) the data passed between them. This is tedious and no fun, especially if you are passing a lot of data around. It also means that you lose state on the R side, and every invocation is just like the first time.
On Linux or another OS, or even for more general usage, I'd now try Rstudio as a server -- see http://www.rstudio.org/docs/server/getting_started for more info.
Another way, recommended on the R Wiki:
CurrentDirectory = strrep(pwd, '\', '/');  % convert backslashes so the path survives the shell call
eval(['!C:\R\R-3.0.1\bin/Rscript "' CurrentDirectory '/Commands.R"'])  % shell-escape (!) call to Rscript
You can run command-line programs from MATLAB using the unix command. The easiest way would probably be to set up an R script that writes its results to a text file, run the script from MATLAB using the unix command, and then (in MATLAB) verify that the file exists and load it, as sketched below.
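For instance, the R side of that round trip might look like this (the file names Commands.R and results.csv are just placeholders):
# Commands.R: write results somewhere MATLAB can pick them up
results <- data.frame(x = rnorm(100))          # stand-in for the real analysis
write.csv(results, "results.csv", row.names = FALSE)
Back in MATLAB, readtable('results.csv') (or csvread for purely numeric output) loads the result once the unix/system call has returned.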
You could use the system command to execute R scripts. Something like the following:
[status] = system('R CMD BATCH [options] script.R [outfile]')
where [options] are the options you send to the R interpreter, and [outfile] is your output file.