Ways to make printing (on real, tangible paper) an Rmd prettier?

I find that looking at an Rmd as HTML on a screen is nice because you can scroll through and use all the interactive bits. But when I want to actually print out an Rmd with the code visible, even short tasks tend to take up quite a bit of space and many sheets of paper. Even the default Rmd that pops up in RStudio, the one that plots cars, takes up an entire page. Granted, you could make the plot a lot smaller and save some space. But with more complicated and heavily formatted code, you start to take up a lot of space quite quickly. Is there some better way to format these so that they take up less paper? Keeping the colored code from the digital form would be nice too.
Yes, I agree that sending people the digital version is probably better, but some people I work with insist on printouts.
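As a sketch of the "make the plots smaller" route, you can render straight to PDF with smaller default figure sizes while keeping syntax highlighting; all the values below are arbitrary illustrations, not recommendations, and "report.Rmd" is a placeholder filename:

```r
# Render an Rmd to PDF with compact figures and highlighted code
rmarkdown::render(
  "report.Rmd",
  output_format = rmarkdown::pdf_document(
    highlight  = "tango",  # keep colored code on paper
    fig_width  = 3.5,      # shrink the default figure size
    fig_height = 2.5
  )
)
```

The same settings can of course live in the document's YAML header instead of the render call.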

Related

Improving rgl HTML performance with multiple figures (mfrow3d) + rglwidget

I am using rgl to produce a panel of multiple figures through the mfrow3d command.
For the most part, the HTML produced from the call to writeWebGL is exemplary.
The one caveat is that for multiple figures (be it 6 or 16), I have noticed a bit of lag when attempting to manipulate any one of these figures (to pan/zoom/look around).
An example can be found here: http://fluxions.dydx.ie:1338/schiz.html (warning, 100 MB HTML file, haha).
I wanted to ask people here if there is anything I can do in terms of using the "reuse" argument that may speed up performance.
Additionally, I wanted to ask if there is any benefit to using rglwidget, and whether someone could provide a small example of porting a writeWebGL call produced from the following:
https://johnmuschelli.com/WebGL_Interactive_Paper/supp_1/supp_1_wrap.Rmd
to rglwidget (in hopes that the reuse argument in widgets may improve performance, given my use of mfrow3d).
I am not familiar with how to capture a multi-figure layout with multiple calls to contour3d as a scene that widgets can use.
Dr Duncan Murdoch has gotten back to me and said there probably is not a way to do this, so I guess I will close it.
He is very helpful and I thank him for his support.
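For later readers: in more recent versions of rgl, rglwidget() captures the whole current scene, including subscenes set up by mfrow3d, so a port away from writeWebGL might look like the sketch below. Whether it actually reduces the pan/zoom lag is untested here, and the shapes are stand-ins for the real contour3d panels:

```r
library(rgl)

# Build a small multi-panel scene (stand-in for the contour3d panels)
open3d()
mfrow3d(2, 2, sharedMouse = TRUE)
for (i in 1:4) {
  shade3d(icosahedron3d(), col = "steelblue")
  if (i < 4) next3d()   # advance to the next subscene
}

# Old approach:
#   writeWebGL(dir = "out", width = 700)
# Widget approach: capture the scene, subscenes included
w <- rglwidget(reuse = TRUE)  # "reuse" is the argument asked about above
htmlwidgets::saveWidget(w, "panel.html", selfcontained = TRUE)
```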

Huge file to print when using many plots

I am using knitr through LyX to create a document. In this document I use knitr to print about 20 images (through R) and 5 calls from R, along with about 20 pages of text.
I save the PDF file, and it is only 1500 KB, and I can view and recompile it easily. But as soon as I go to print, the printer reads about 200 MB of information, and it takes a very long time (2+ hours) to print.
I was wondering if you knew the solution, or even the cause. I've been trying to remedy it by copying the plots and putting them in as figures, but this obviously defeats the purpose of reproducible research. When I put the plots in as pictures, the PDF size drops to 367 KB, so I am fairly certain it is the knitr-generated plots that are causing the increase in data. With the plots as pictures, it printed in about 5 minutes (still a long time, but much shorter than hours).
I've had this issue before, and I believe it has something to do with plotting multiple chains for traceplots. Are these known to take forever to print?
Has anyone else experienced this or know the solution for it?
The default graphics device for LaTeX output is PDF. Presumably there are some effects within the PDF which are very expensive for your printer to render. I would specify an alternative graphics device such as png, either per chunk using chunk options or as the default for the whole file using opts_chunk$set. The relevant option is dev, though you may need to change dpi too.
More details are on the knitr chunk options page.
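A sketch of both routes (the dpi value is a guess; tune it to your printer):

```r
# Global: in a setup chunk, render every figure as PNG instead of PDF
knitr::opts_chunk$set(dev = "png", dpi = 300)

# Per chunk (knitr/Rnw syntax, since the document goes through LyX):
#   <<traceplots, dev='png', dpi=300>>=
#   plot(mcmc_chains)   # hypothetical plotting call
#   @
```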

What is the most useful output format for graphs? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 9 years ago.
Before any of you rush to the closing vote, let me say that I understand this question may be subjective, and the expected answer may begin with "it depends". Nevertheless, it is a genuinely relevant problem I run into: I am creating more and more graphs, and I don't necessarily know exactly how I am going to use them, or don't have the time to test for the final use case immediately.
So I am leveraging the experience of SO's R users to get good reasons to choose among jpg(), bmp(), png(), tiff(), and pdf(), and possibly with which options. I don't have the experience in R or the knowledge of the different formats to choose wisely.
Potential use cases:
quick look after or during run time of algorithms
presentations (.ppt mainly)
reports (word or latex)
publication (internet)
storage (without too much loss and to transform it later for a specific use)
anything relevant I forgot
Thanks! I'm happy to make the question clearer.
To expand a little on my comment: there is no real easy answer, but here are my suggestions.
My first, totally flexible choice would be simply to store the final raw data used in the plot(s), plus a bit of R code for generating the plot(s). That way you could easily send the output to whatever device suits your particular purpose. It would not be that arduous to set up a couple of basic templates based on png()/pdf() that you could call upon.
Use the svg() device. As noted by @gung, storing the output using pdf(), svg(), cairo_ps(), or cairo_pdf() is your only real option for retaining scalable vector images. I would lean towards svg() rather than pdf() because of the greater editing options available in programs like Inkscape. It is also becoming a quite widely supported format for internet publication (see http://caniuse.com/svg).
If, on the other hand, you're a LaTeX user, most headaches are solved by going straight to pdf(); you can usually import and convert PDF files using Inkscape or command-line utilities like ImageMagick if you have to shift formats.
For Word/PowerPoint interaction, if you are running R on Windows, you can also export directly using win.metafile(), which gives you scalable, component-editable EMF images that you can import into Word or PowerPoint directly. I have heard of people running R through Wine or using intermediary steps on Linux to get EMF files out for later use. For Mac, there are roundabout pathways as well.
So, to summarise, in order of preference:
Don't store images at all; store the code to generate them (see the sketch below).
Use svg/pdf and convert formats as required.
As a fallback, export directly with win.metafile() for those cases where you can't escape Word/PowerPoint and are primarily on Windows systems.
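A minimal sketch of the first option; every name here (the data frame, the file names, the helper function) is invented for illustration:

```r
# Store the plotting data once, render to any device on demand
results_df <- data.frame(x = 1:100, y = cumsum(rnorm(100)))
saveRDS(results_df, "fig1_data.rds")

make_fig1 <- function(path, dev = grDevices::png, ...) {
  dat <- readRDS("fig1_data.rds")
  dev(path, ...)                       # png() takes pixels, pdf() takes inches
  on.exit(grDevices::dev.off(), add = TRUE)
  plot(dat$x, dat$y, type = "l", xlab = "x", ylab = "y")
}

make_fig1("fig1.png", width = 900, height = 600)                     # raster for the web
make_fig1("fig1.pdf", dev = grDevices::pdf, width = 7, height = 5)   # vector for LaTeX
```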
So far the answers to this question have all recommended outputting plots in vector formats. This will give you the best output, allowing you to resize the image as needed for whatever medium it ends up in (a web page, document, or presentation), but it comes at a computational cost.
For my own work, I often find it is much more convenient to save my plots in a raster format of sufficient resolution. You probably want to do this whenever your data takes a non-trivial amount of time to plot.
Some examples of where I find a raster format is more convenient:
Manhattan plots: a plot showing p-value significance for hundreds of thousands to millions of DNA markers across a genome.
Large Heatmaps: Clustering the top 5000 differentially expressed genes between two groups of people, one with a disease, and one healthy.
Network Rendering: When drawing a large number of nodes connected to each other by edges, redrawing the edges (as vectors) can slow down your computer.
Ultimately it comes down to a trade-off in your own sanity. Which annoys you more: your computer grinding to a halt trying to redraw an image, or figuring out the exact dimensions to render an image in raster format so it doesn't look awful in your final publishing medium?
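For what it's worth, a "sufficient resolution" raster export looks like this; the data, dimensions, and dpi are all illustrative:

```r
# Stand-in data: 100k points, the kind of density where vector output chokes
x <- seq_len(1e5)
y <- -log10(runif(1e5))

# A 7 x 5 inch PNG at 300 dpi is crisp enough for most print uses,
# and the file size doesn't grow with the number of points
png("manhattan_sketch.png", width = 7, height = 5, units = "in", res = 300)
plot(x, y, pch = ".", xlab = "position", ylab = "-log10(p)")
dev.off()
```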
The most basic distinction to bear in mind here is raster graphics versus vector graphics. In general, vector graphics will preserve options for you later. Of the options you listed, jpeg, bmp, tiff, and png are raster formats; only pdf will give you vector graphics. Thus, that is probably the best default of your listed options.

ggplot2 - printing plot balloons memory

Is it expected that printing a large-ish ggplot to PDF will cause the R session's memory to balloon? I have a ggplot2 object that is around 72 MB, and my R session grows to over 2 GB when printing it to PDF. Is this expected? Are there ways to optimize performance? I also find that the resulting PDFs are huge (~25 MB) and I have to use an external program to shrink them (down to 50 KB with no visual loss!). Is there a way to print to PDF with lower-quality graphics? Or perhaps some parameter to print or ggplot that I haven't considered?
For large data sets, I find it helpful to pre-process the data before putting together the ggplot (even if ggplot offers the same calculations).
ggplot has to be very general: it cannot predict what stat or geom you want to add later on, so it is very difficult to optimize things there (the split-apply-combine strategy can lead to exploding intermediate memory requirements). You, on the other hand, know what you want and can pre-calculate accordingly.
The large PDF indicates that you either have a lot of overplotting or are producing objects too small to be seen. In both cases you could gain a lot by applying appropriate summary statistics (e.g. hexbin or a boxplot instead of a scatterplot).
I think we cannot tell you more without details of what you are doing. So please create a minimal example and/or upload the compressed plot you are producing.
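To illustrate the summary-statistics point (the data and bin count are invented, and geom_hex() needs the hexbin package installed):

```r
library(ggplot2)

# A million points: with geom_point(), every one is stored in the PDF
big <- data.frame(x = rnorm(1e6), y = rnorm(1e6))

# Binning first keeps both memory use and file size roughly constant
p <- ggplot(big, aes(x, y)) + geom_hex(bins = 50)
ggsave("binned.pdf", p, width = 7, height = 5)
```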
Addressing the second part of your question, R makes no attempt to optimize PDFs. If you are overplotting a lot of points, this results in some ridiculous behavior. You can use qpdf to post-process the PDF.
Addressing the first question anecdotally, it does seem that plots on medium-sized datasets take up a lot of memory, but that is merely my experience. Others may have more opinions as to why or more facts as to whether this is so.
Saving in a bitmap format like png can reduce the file size considerably. Note that this is only appropriate for certain uses of the final image; in particular, it can't be zoomed as far as a PDF can. But if the final image size is known, it can be a useful method.
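A one-line version of that suggestion (the sizes and dpi are illustrative):

```r
# Save the most recent ggplot as a PNG instead of a PDF
ggplot2::ggsave("figure.png", width = 7, height = 5, dpi = 150)
```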

What word processor do you use for technical papers?

I've been looking for some time for a word processor to use for writing technical papers, and I haven't really found one. What would be really nice is an editor that can handle mathematical expressions, code, and pseudo-code fairly well. I have yet to find one that does.
Does anyone have any recommendations?
I personally believe in LaTeX.
Benefits:
You can focus on content over form.
Use logical rather than visual formatting (e.g., \methodname vs. just italics).
Easier to assemble large documents from multiple files.
Use text-based version control (CVS/SVN/etc.)
Widely used
Much more stable even on super-weak machines
Programmable. For example, I use macros to hide things, highlight things, and obfuscate names: a macro named after the real name that expands to an obfuscated replacement (see the snippet below).
See all the tips and tricks available on SO.
Output looks the same no matter which platform you compile on. I never had that luck with Word; each version and each machine produces something slightly different.
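A tiny illustration of the logical-formatting and obfuscation points (all command names are invented for the example):

```latex
% Logical markup: one place to change how the method name renders everywhere
\newcommand{\methodname}{\textit{FastSort}}

% Obfuscation: swap this definition back after blind review
\newcommand{\ourlab}{Anonymous Lab}

We evaluate \methodname{} on hardware at \ourlab.
```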
My answer's long, so I want to say up front: I think you want OpenOffice Writer (I use v2.4, haven't tried 3.0 yet).
I've used Word with equation editor and LaTeX heavily in the past and OpenOffice Writer
more recently. I used the former two while writing my thesis.
LaTeX may still have advantages in quality of the output and in the ability to use text-based version control, but they're sharply diminished by OO Writer at this point.
Microsoft Word with equation editor, even in the most recent versions, still seems very weak.
What I like about OpenOffice is that you can use the equation formatting mechanisms
in a mode where the window is split between the document you're writing and another
area where you can type very LaTeX-like formatting instructions. One of the big
strengths of LaTeX is that you get to type up something like $x \in S$ for "x is an element of S". OO Writer lets you do this and see the result.
Back when I wrote my thesis, LaTeX was preferable to Word with Eqn. Editor because of the length of my document (over 200 pages), the quality of the results, and the ease of specifying equations. LaTeX does have a disadvantage in simplicity of use that is made more acute by OO Writer.
That said, I'm sure I'd use OO Writer for conference to journal length articles (~8-15 pages v. ~15-40 pages) and also for shorter work. For thesis-length work, I'm not sure which I'd end up using: Word never worked so well for me on longer matter; I suspect OO Writer is better behaved but I don't have enough experience of it to make a firm judgement.
I like LyX (http://www.lyx.org/) -- it's a good tradeoff between "spending all your time writing your document" and "spending all your time writing markup". The most recent versions are even useable!
Apart from that, Word 2008 is actually pretty darn good, provided you use the styles and other "advanced" features.
I fully agree that LaTeX is a good choice. I used it for papers at university, including my master's thesis. For LaTeX I've been using Kile.
But nowadays there is an interesting alternative: DocBook with the MathML extension.
LaTeX with TexMaker got me through grad school.
Depends on what you mean by "Word Processor". If you don't mind not having a WYSIWYG interface, I'd recommend LaTeX (http://www.latex-project.org/).
I wrote my final year Master's dissertation using it, which contained a lot of pseudocode, formulas, etc. Also outputs in a format fairly typical of technical papers.
I use FrameMaker.
MS Word with Mathtype. It has a number of advantages over the default Equation editor, including, but not limited to:
keyboard shortcuts
writing equations in tex mode then converting them
converting equations from "normal" to "linear" mode (the one you can use in your programs, you know a=b/c and such)
templates
No more LaTeX: I can concentrate on the material, not the typesetting.
Word with MS Equation for the mathematical sections.
I like DocBook and use FOP to create PDFs from it.
I use reStructuredText because it can be used in Trac, converted to PDF and HTML, have little markup overhead, and looks nice in its plain form too.
Microsoft Word is considered the market-standard word processor.
My suggestion is to use Authorea.
As a former postdoc (astrophysics) and Ph.D. (informatics) with 12+ years of research experience (Harvard, CERN, UCLA), I have written technical papers for a long time. I have loved and hated LaTeX. For the past 2 years, I have worked with friends and colleagues on developing the next-generation platform for writing technical/research documents collaboratively. It is called Authorea. From a technical standpoint, Authorea is built on Git and takes LaTeX, Markdown, and HTML (even JS, to include fancy d3.js visualizations in your papers). Bonus: you don't need to know LaTeX (or any other format), but you can easily add equations, tables, citations, and data to your papers. I hope you'll find it useful.
