I'm wondering if there are any packages for R that help to visualize workflows/code the way Alteryx does. I find the visualization of the workflows within Alteryx quite helpful, but manually dragging and dropping the tools onto the canvas and setting the parameters just takes so much longer than writing the code in R. Also, some functionality within Alteryx is not yet sufficient and has to be implemented via the R/Python tool anyway.
During my search I found this post, which goes in the same direction, but the suggested packages don't really match what I am looking for.
Best regards
I am preparing a presentation on data analysis and have been provided with a 2-3 monitor and projector setup. I would like to use one monitor (+ projector) for code, one monitor (+ projector) for the console display, and one monitor (+ projector) for plots. The monitors are for me, the projectors for the audience.
I would also like to run the code line by line (similar to the Ctrl-Enter feature of RStudio); copy-pasting code won't work. I want to use interactive graphics and do analysis and plotting on the fly, so any pre-done analysis won't work either.
Is there any way to achieve this? Although RStudio is a fantastic tool, a rather basic (and one might say easy) feature like panel detachment has not been developed, although it is frequently requested. That would probably be the best solution to what I want.
UPDATE: Any OS (Win, Mac, Linux) will do.
You should be able to use the vanilla R GUI. Within that you have separate windows for code, console, and plots (with as many plot windows as you want by calling a new device like quartz()). You can evaluate a line of code from the script using Cmd-Enter (Mac) or Ctrl-Enter (PC), and the default settings highlight the line of interest. You could also use Emacs in the same way, which I find much more powerful and fun.
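For illustration, a minimal sketch of driving several plot windows from the console (the device function varies by OS, so dev.new() is used here as the portable choice):

dev.new()      # opens a plot window: quartz() on macOS, windows() on Windows, x11() on Linux
plot(rnorm(100))

dev.new()      # a second, independent plot window
hist(rnorm(1000))

dev.set(2)     # device 1 is the null device, so this re-activates the first window
plot(runif(50))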
Before any of you rush to the close vote, let me say that I understand this question may be subjective, and the expected answer may begin with "it depends". Nevertheless, it is a genuinely relevant problem I run into, as I am creating more and more graphs, and I don't necessarily know exactly how I am going to use them, or don't have the time to test for the final use case immediately.
So I am leveraging the experience of SO R users to get good reasons to choose one device over another, between jpeg(), bmp(), png(), tiff(), and pdf(), and possibly with which options. I don't have enough experience with R, or enough knowledge of the different formats, to choose wisely.
Potential use cases:
quick look after or during run time of algorithms
presentations (.ppt mainly)
reports (word or latex)
publication (internet)
storage (without too much loss and to transform it later for a specific use)
anything relevant I forgot
Thanks! I'm happy to make the question clearer.
To expand a little on my comment, there is no real easy answer, but my suggestions:
My first totally flexible choice would be to simply store the final raw data used in the plot(s) and a bit of R code for generating the plot(s). That way you could easily enough send the output to whatever device that suits your particular purpose. It would not be that arduous a task to set yourself up a couple of basic templates based on png()/pdf() that you could call upon.
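To make the idea concrete, here is a rough sketch (the file names and plotting function are purely illustrative) of storing the plot data plus a small rendering function that can be pointed at whichever device a later use case needs:

plot_data <- data.frame(x = 1:10, y = rnorm(10))

make_plot <- function(dat) {
  plot(y ~ x, data = dat, pch = 19, col = "steelblue")
}

saveRDS(plot_data, "figure1_data.rds")   # keep the raw data alongside the code

# later, render to whatever format is required
dat <- readRDS("figure1_data.rds")

png("figure1.png", width = 1200, height = 900, res = 150)
make_plot(dat)
dev.off()

pdf("figure1.pdf", width = 7, height = 5)
make_plot(dat)
dev.off()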
Use the svg() device. As noted by @gung, storing the output using pdf(), svg(), cairo_ps(), or cairo_pdf() is your only real option for retaining scalable, vector images. I would tend to lean towards svg() rather than pdf() due to the greater editing options available in programs like Inkscape. It is also becoming quite a widely supported format for internet publication (see http://caniuse.com/svg).
If, on the other hand, you're a LaTeX user, most headaches seem to be solved by going straight to pdf() - you can usually import and convert PDF files using Inkscape or command-line utilities like ImageMagick if you need to convert formats.
For Word/PowerPoint interaction, if you are running R on Windows, you can also export directly using win.metafile(), which will give you scalable, component-based EMF images that you can import into Word or PowerPoint directly. I have heard of people running R through Wine or using intermediary steps on Linux to get EMF files out for later use. For Mac, there are roundabout pathways as well.
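A minimal sketch of that export (Windows only; the file name is illustrative):

win.metafile("scatter.emf", width = 7, height = 5)   # EMF can be ungrouped and edited in PowerPoint
plot(mpg ~ wt, data = mtcars)
dev.off()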
So, to summarise, in order of preference.
Don't store images at all, store code to generate images
Use svg/pdf and convert formats as required.
As a backup, use win.metafile() to export directly for those cases where you can't escape Word/PowerPoint and are primarily going to be based on Windows systems.
So far the answers for this question have all recommended outputting plots in vector based formats. This will give you the best output, allowing you to resize your image as you need for whatever medium your image will end up in (whether that be a webpage, document, or presentation), but this comes at a computational cost.
For my own work, I often find it is much more convenient to save my plots in a raster format of sufficient resolution. You probably want to do this whenever your data takes a non-trivial amount of time to plot.
Some examples of where I find a raster format is more convenient:
Manhattan plots: a plot showing p-value significance for hundreds of thousands to millions of DNA markers across a genome.
Large Heatmaps: Clustering the top 5000 differentially expressed genes between two groups of people, one with a disease, and one healthy.
Network Rendering: When drawing a large number of nodes connected to each other by edges, redrawing the edges (as vectors) can slow down your computer.
Ultimately it comes down to a trade-off in your own sanity. What annoys you more: your computer grinding to a halt trying to redraw an image, or figuring out the exact dimensions at which to render a raster image so it doesn't look awful in your final publishing medium?
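As a concrete (purely illustrative) example, rendering a large scatter to a raster device at a resolution picked for the final medium looks like this:

# half a million points: a PDF would store each one as a vector object,
# but the PNG's file size depends only on its pixel dimensions
png("manhattan_like.png", width = 10, height = 4, units = "in", res = 300)
plot(runif(5e5), pch = ".", xlab = "genomic position (index)", ylab = "-log10(p)")
dev.off()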
The most basic distinction to bear in mind here is raster graphics versus vector graphics. In general, vector graphics will preserve options for you later. Of the options you listed, jpeg, bmp, tiff, and png are raster formats; only pdf will give you vector graphics. Thus, that is probably the best default of your listed options.
Is it expected that printing a large-ish ggplot to PDF will cause the R session's memory to balloon? I have a ggplot2 object that is around 72 MB. My R session grows to over 2 GB when printing to PDF. Is this expected? Are there ways to optimize performance? I find that the resulting PDFs are huge (~25 MB) and I have to use an external program to shrink them down (to 50 KB with no visual loss!). Is there a way to print to PDF with lower-quality graphics? Or perhaps some parameter to print or ggplot that I haven't considered?
For large data sets, I find it helpful to pre-process the data before putting together the ggplot (even if ggplot offers the same calculations).
ggplot has to be very general: it cannot predict what stat or geom you want to add later on, so it is very difficult to optimize things there (the split-apply-combine strategy can lead to exploding intermediate memory requirements). On the other hand, you know what you want and can pre-calculate accordingly.
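A small sketch of what that pre-calculation can look like (the data and grouping are made up): compute the summary yourself and hand ggplot the small result rather than the full data.

library(ggplot2)

big   <- data.frame(g = sample(letters[1:5], 1e6, replace = TRUE),
                    y = rnorm(1e6))
small <- aggregate(y ~ g, data = big, FUN = mean)   # 5 rows instead of 1e6

ggplot(small, aes(g, y)) + geom_point(size = 3)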
The large PDF indicates that you either have a lot of overplotting or you are producing objects that are too small to be seen. In both cases, you could gain a lot by applying appropriate summary statistics (e.g. hexbin or boxplot instead of a scatterplot).
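For the overplotting case, a hedged sketch of swapping a scatterplot for a hexbin summary (geom_hex() needs the hexbin package installed; file name is illustrative):

library(ggplot2)

d <- data.frame(x = rnorm(5e5), y = rnorm(5e5))

# geom_point() would write half a million objects into the PDF;
# geom_hex() writes at most a few thousand polygons
p <- ggplot(d, aes(x, y)) + geom_hex(bins = 50)
ggsave("density.pdf", p, width = 6, height = 5)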
I think we cannot tell you more without details of what you are doing. So please create a minimal example and/or upload the compressed plot you are producing.
Addressing the second part of your question, R makes no attempt to optimize PDFs. If you are overplotting a lot of points, this results in some ridiculous behavior. You can use qpdf to post-process the PDF.
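If you want to keep that post-processing inside your R script, a minimal sketch is to shell out to qpdf (assumes qpdf is installed and on your PATH; check the flag against your qpdf version):

# rewrite the PDF with object streams, which often shrinks heavily overplotted output
system2("qpdf", c("--object-streams=generate", "big_plot.pdf", "small_plot.pdf"))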
Addressing the first question anecdotally, it does seem that plots on medium-sized datasets take up a lot of memory, but that is merely my experience. Others may have more opinions as to why or more facts as to whether this is so.
Saving in a bitmap format like PNG can reduce the file size considerably. Note that this is only appropriate for certain uses of the final image; in particular, it can't be zoomed in as far as a PDF can. But if the final image size is known, it can be a useful method.
I have been working on a few R packages for some general tools that aren't currently available in R: blogging, report delivery, logging, and scheduling. This led me to wonder: what are the most important things that people wish existed in R that currently aren't available?
My hope is that we can use this to pinpoint some gaps, and possibly work on them collaboratively.
I'm a former Mathematica junkie, and one thing that I really miss is the notebook-style interface. When I did my research with notebooks, papers would almost write themselves as I did my analysis. But now that I'm using R, I find documenting my work to be quite tedious.
For people who are not so familiar with Mathematica: you have documents called "notebooks" that can contain code, text, equations, and the results from executed code (which can be equations, text, graphics, or interactive tools). Everything can be neatly organized into styled sections or subsections that are collapsible. You can have multiple open documents that integrate with a single shared kernel.
While I don't think a full-blown Mathematica style interface is entirely necessary, some interactive document system that would support text (for description), code, code output, and embedded image output would be a real boon to researchers.
A real-time R package would be my choice, perhaps using C streaming.
Also, I'd like a more robust web development package. Nothing as extensive as Ruby on Rails, but something a bit better than Sweave combined with R2HTML that can run on RApache. I think this needs to be a huge area of emphasis for R in general.
I realize LaTeX is better markup for certain academic uses, but in general I think HTML should be the markup language of choice. More needs to be done in terms of R web apps, so applications can be hosted remotely on machines with huge amounts of RAM and R can start being used for SaaS data applications and other graphics choices.
Interfaces to any of the new-fangled 'Web 2.0' databases that use key-value pairs rather than the standard RDBMS. A non-exhaustive list (in alphabetical order) would be
Cassandra Project
CouchDB
MongoDB
Project Voldemort
Redis
Tokyo Cabinet
and it would of course be nice if we had a DBI-like abstraction on top of these. Jeff has started with RBerkeley, but that uses the older-school Oracle BerkeleyDB backend rather than one of those newer stores.
An output device which produces Javascript code, perhaps using the protovis library.
As a programmer and writer of libraries for colleagues, I was definitely missing a logging package. I googled and asked around, here too, then wrote one myself. It is on R-Forge, here, and it's called "logging" :)
I use it and I'm obviously still developing it.
There are few libraries to interface with databases in general, and there is no ORM library.
RMySQL is useful, but you have to write the SQL queries manually and there is no way to generate them as in an ORM. Moreover, it is specific to MySQL only.
Another library that R still doesn't have, for me, is a good system for reading command-line arguments: there is getopt, but it is nothing like, for example, argparse in Python.
A natural interface to the .NET framework would be awesome, though I suspect that that might be a lot of work.
EDIT:
Syntax highlighting from within RGui would also be wonderful.
ANOTHER EDIT:
R.NET now exists to integrate R with .NET.
A FRAQ package for FRequently Asked Questions, a la fortune(). R-help would be so much fun: "Try this: library(FRAQ); faq("lattice won't print")", etc.
See also.
A wiki package that adds wiki-like documentation to R packages. You'd have an inst/wiki subdirectory with plain-text files in Markdown, AsciiDoc, or Textile, with embedded R code. With the right incantation, these files would be executed (think the brew and/or asciidoc packages), and the relevant output uploaded to a given online repository (GitHub, Google Code, etc.). Another function could take care of synchronizing the changes made online, typically via svn or git.
Suddenly you have a wiki documentation for your package with reproducible examples (could even be hooked to R CMD check).
EDIT 2012:
... and now the knitr package would make this process even easier and neater
I would like to see a way for users to embed another programming language within R in a more straightforward fashion. As an example, in some Common Lisp implementations one could write a function with embedded C code like this:
(defun sample (n1 n2)
  (ffi:c-inline (n1 n2) (:int :int) (values :int :int) "{
    int n1 = #0, n2 = #1, out1 = 0, out2 = 1;
    while (n1 <= n2) {
      out1 += n1;
      out2 *= n1;
      n1++;
    }
    #(return 0)= out1;
    #(return 1)= out2;
    }"
   :side-effects nil))
It would be good if one could write an R function with embedded C or Lisp code (I am more interested in the latter) in a similar way.
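For the C side of this wish, Rcpp's cppFunction() now comes fairly close; a minimal sketch of the same computation (the function name is mine):

library(Rcpp)

cppFunction('
List sum_and_prod(int n1, int n2) {
  int out1 = 0, out2 = 1;
  while (n1 <= n2) {
    out1 += n1;
    out2 *= n1;
    n1++;
  }
  return List::create(out1, out2);   // two return values, like the Lisp example
}')

sum_and_prod(1, 5)   # list(15, 120)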
A native .NET interface to RGui. R(D)COM is based on COM, and it only allows exchanging matrices, not more complex structures.
I would very much like a line profiler. This exists in MATLAB and Python, and is very useful for finding bits of code that take a lot of time or are executed more (or less) than expected. A lot of my code involves function optimization, and how many times something iterates may not be known in advance (though most iterations are constrained or specified).
The call stack is useful if all of your code is in R and is very simple, but, as I recently posted, it takes painstaking effort if your code is complex.
It's quite easy to develop a line profiler for a given bit of code. A naive way is to index every line (or just pre-specified sections) and insert a call that logs proc.time() for that line. In a loop, I simply enumerate sections of code and store, in a two-dimensional list, the proc.time() values for section i in iteration k. [See the update below: this isn't actually a way to build a line profiler for all kinds of code.]
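A minimal sketch of that naive approach (the sections here are stand-ins for real code):

timings <- vector("list", 100)

for (k in 1:100) {
  t0 <- proc.time()
  x  <- rnorm(1e4)                    # section 1
  t1 <- proc.time()
  fit <- lm(x ~ seq_along(x))         # section 2
  t2 <- proc.time()
  timings[[k]] <- list(section1 = t1 - t0, section2 = t2 - t1)
}

# total elapsed seconds spent in section 2 across all iterations
sum(sapply(timings, function(tt) tt$section2["elapsed"]))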
One can use such a tool to find hotspots, anomalies (e.g. code that should be O(n) but is really O(n^2)), code that may benefit from memoization (a line profiler doesn't tell you this, but it lets you know where to look), code that is mistakenly inside a loop, and more.
Update 1: Inserting a timing line between every function line is slightly erroneous: the definition of a line of code is not simply code separated by whitespace. Being able to parse the code into an AST is necessary for knowing where operations begin and end. As discussed in some of the answers to this question, there are some tools (namely, showTree and walkCode in the codetools package) for doing this. Simply applying a regular expression to source code would be a very bad thing to do.