Arrange multiple (32) .png files in a grid in R

I've been pulling my hair out for the past week trying to figure out elementary R coding, but I can't seem to get anywhere (I haven't used R since 2013, not that that's a great excuse).
All I want is a 4x8 grid made up of 32 .png files (maps I've made), and I want to do it without loading one image file at a time (http://www.statmethods.net/advgraphs/layout.html).
So I think I can load the images within the folder by writing something like this (please correct me if my beliefs are bs)
img <- list.files(path='c:/a', pattern='compo.*\\.png$', full.names=TRUE)
Then I was thinking along the lines of par(mfrow=c()), layout, grid.arrange (writing png plots into a pdf file in R), or grid.raster (How to join efficiently multiple rgl plots into one single plot?), all of which I've read up on and experimented with accordingly, without producing anything worthwhile.
The latter I employed with only the following outcome. It made me giggle.
I don't really think lattice is the way to go anyway.
Any help would be greatly appreciated!

Another approach is to read the PNG images with readPNG, then use grid and gridExtra:
library(png)
library(grid)
library(gridExtra)
plot1 <- readPNG('plot1.png')
plot2 <- readPNG('plot2.png')
grid.arrange(rasterGrob(plot1),rasterGrob(plot2),ncol=1)
Alternative: if you want to save the plot using ggsave (from ggplot2), use arrangeGrob instead of grid.arrange:
library(ggplot2)
tmp <- arrangeGrob(rasterGrob(plot1), rasterGrob(plot2), ncol=1)
ggsave('filename.png', tmp, width=12, height=5)

Not sure what your concern is about loading all the image files -- how else could you read their data to create the new image?
ETA: to load the files, I'd just use png::readPNG. One way to collect the images would be (12 images selected here):
filenames <- dir(pattern='compo')
foo <- list()
for (j in 1:12) foo[[j]] <- readPNG(filenames[j])
If you're willing to load them and use the base plot tools, then layout is the command you want (wrap each array in as.raster() so that base plot can draw it). E.g., for 12 images loaded:
layout(matrix(1:12, nrow=4, byrow=TRUE))
for (j in 1:12) plot(as.raster(foo[[j]]))
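Putting the pieces together for the original 32-map case, a minimal sketch (assuming the files are named compo1.png through compo32.png under c:/a; adjust the names to your actual maps):
library(png)
library(grid)
library(gridExtra)
# assumed file names; adjust to your actual maps
filenames <- sprintf('c:/a/compo%d.png', 1:32)
grobs <- lapply(filenames, function(f) rasterGrob(readPNG(f)))
# arrange as 4 rows x 8 columns on a single page
grid.arrange(grobs = grobs, nrow = 4, ncol = 8)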

Related

R grid arrange tiff microscopy RGB

I have RGB TIFF files (from CellProfiler) which I want to import into R, label, and arrange, as part of a high-throughput analysis. The closest I get is using:
library(tiff)
library(raster)
imageTiff <- tiff::readTIFF(imagePath[i])
rasterTiff <- raster::as.raster(imageTiff)
raster::plot(rasterTiff)
raster::plot plots the image nicely, but I can't capture the output and use it with gridExtra or add labels.
In addition, I tried rasterVis with levelplot and multiple other ways of importing the TIFFs and then converting them to grobs or ggplots.
However, I can't get anything to work, and I would like to ask whether R is even suited to that task at all?
Thank you very much for your help!
Okay, I think this is the most straightforward way, and possibly also the most obvious one.
I import JPEG or TIFF files with jpeg::readJPEG or tiff::readTIFF respectively. Both transform the images to a raster format which is compatible with rasterGrob() and, from there, grid.arrange() etc.
library(jpeg)
library(tiff)
library(grid)
library(gridExtra)
imageJPEG <- grid::rasterGrob(jpeg::readJPEG("test.jpeg"))
imageTIFF <- grid::rasterGrob(tiff::readTIFF("test.tiff"))
grid.arrange(imageJPEG, imageJPEG, imageJPEG)
grid.arrange(imageTIFF, imageTIFF, imageTIFF)
For my purpose that is perfect, since rasterGrob does not alter the raster matrix values. Labeling might be a bit tricky, but overall it is a grid/grob problem from here on; see the sketch below.
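If labeling is the sticking point, one possible sketch uses arrangeGrob's top argument with a textGrob (the file name test.tiff and the panel letters are placeholders):
library(tiff)
library(grid)
library(gridExtra)
img <- rasterGrob(tiff::readTIFF("test.tiff"))
# give each panel its own label grob, then arrange the labelled panels
panelA <- arrangeGrob(img, top = textGrob("A", x = 0.05, hjust = 0))
panelB <- arrangeGrob(img, top = textGrob("B", x = 0.05, hjust = 0))
grid.arrange(panelA, panelB, ncol = 2)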

R: importing and saving SVG graphics

I have a complex task: to merge an existing SVG image with barplots in R. I want to save the output file as vector graphics as well. So I create the layout, in one of the subplots I create the barplot, and now:
How can I load an existing SVG from the hard drive and then put it into the plot? I have tried the grImport and grImport2 libraries, but they fail to read my SVG. How should I prepare it? It is a simple sketch made in Inkscape; should I save it in any special way?
I'd prefer to use a library that is supported by Anaconda Cloud.
EDIT:
I managed to read the .ps file with grImport and convert it to a Picture object. It was crashing previously because I had a text box with a non-standard font in the SVG, and the library could not read that properly (some encoding problem).
Now I am just looking for a way to put the Picture object on the layout, just as if I were using plot(runif(10), runif(10)) to draw a scatterplot:
grid.picture(picture_object[-1], x=x_coord, y=y_coord)
With the variables x_coord and y_coord I can manipulate the position of the image.
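A minimal sketch of the whole round trip, assuming the drawing was exported from Inkscape as PostScript (drawing.ps is a placeholder name; PostScriptTrace requires Ghostscript):
library(grImport)
# trace the PostScript file; this writes drawing.ps.xml next to it
PostScriptTrace('drawing.ps')
pic <- readPicture('drawing.ps.xml')
# draw the base scatterplot, then overlay the picture via grid
plot(runif(10), runif(10))
grid.picture(pic, x = 0.8, y = 0.8)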

creating multiple file types while plotting

I would like to produce a series of plots in both high-resolution and low-resolution versions, or, stated differently, using two different file types (.png and .eps). I'd like to know the best/least repetitive way to do this. I am using the gplot function in sna, and the plot has a custom legend outside the plot area. I wrote a function something like this:
library(sna)
plotfun <- function(net){
  png("test.png", width=800)
  p <- gplot(net)
  par(xpd=TRUE)
  legend(max(p[,1])+1, max(p[,2]), legend=letters[1:10], title="custom legend")
  dev.off()
  setEPS()
  postscript("test.eps")
  # repeat all the plotting commands, which are much longer in real life
  dev.off()
}
#try it with some random data
plotfun(rgraph(10))
This is perfectly functional but seems inefficient and clumsy. The more general version of this question is: if for any reason I want to create a plot (including extra layers like my custom legend), store it as an object, and then plot it later, is there a way to do this? Incidentally, this question didn't seem sna-specific to me at first, but in trying to reproduce the problem using a similar function with plot, I couldn't get the legend to appear correctly, so this solution to the outside-the-plot-area legend doesn't seem general.
I would recommend generating graphs only in PostScript/PDF from R and then generating bitmaps (e.g. PNG) from the PostScript/PDF using e.g. ImageMagick with the -density parameter (http://www.imagemagick.org/script/command-line-options.php#density) set appropriately to get the desired resolution. For example
convert -density 100 -quality 100 picture.pdf picture.png
assuming picture.pdf is 7in-by-7in (the R default) will give you a 700x700 PNG (7 inches x 100 dpi = 700 pixels).
With this approach you will not have to worry that the picture comes out formatted differently depending on which R device (pdf() vs png()) is used.
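To drive that from R and avoid repeating the plotting commands, a minimal sketch (assuming ImageMagick's convert is on the PATH; file names are placeholders):
library(sna)
plotfun <- function(net, base = "test") {
  # draw once, to PDF only
  pdf(paste0(base, ".pdf"), width = 7, height = 7)
  p <- gplot(net)
  par(xpd = TRUE)
  legend(max(p[,1]) + 1, max(p[,2]), legend = letters[1:10], title = "custom legend")
  dev.off()
  # rasterize the PDF at 100 dpi (7in x 100 dpi = 700 px)
  system(paste0("convert -density 100 -quality 100 ", base, ".pdf ", base, ".png"))
}
plotfun(rgraph(10))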

Reduce PDF file size of plots by filtering hidden objects

While producing scatter plots of many points in R (using ggplot() for example), there may be many points that sit behind others and are not visible at all. For instance, see the plot below:
This is a scatter plot of several hundred thousand points, but most of them are hidden behind other points. The problem is that when the output is written to a vector file (a PDF file, for example), the invisible points make the file very big and increase memory and CPU usage while viewing it.
A simple solution is to export a bitmap image (TIFF or PNG for example), but that loses the vector quality and can be even larger in size. I tried some online PDF compressors, but the result was the same size as my original file.
Is there any good solution? For example, some way to filter out the points that are not visible, either while generating the plot or afterwards by editing the PDF file?
As a start you can do something like this:
set.seed(42)
x <- runif(1e6)
DF <- data.frame(x = x, y = x + rnorm(1e6, sd = 0.1))
plot(y ~ x, data = DF, pch = ".", cex = 4)
PDF size: 6334 KB
# round to 3 decimals, then drop points that land on the same spot
DF2 <- data.frame(x = round(DF$x, 3), y = round(DF$y, 3))
DF2 <- DF[!duplicated(DF2), ]
nrow(DF2)
#[1] 373429
plot(y ~ x, data = DF2, pch = ".", cex = 4)
PDF size: 2373 KB
With the rounding you can control how many values you want to remove. You only need to modify this to handle different colours; see the sketch below.
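For instance, a colour-aware version might deduplicate on the rounded coordinates and the colour together (the col column here is made up for illustration):
# hypothetical colour assignment, for illustration only
DF$col <- ifelse(DF$x > 0.5, "red", "blue")
# deduplicate on rounded coordinates *and* colour, keeping the original values
key <- data.frame(x = round(DF$x, 3), y = round(DF$y, 3), col = DF$col)
DF3 <- DF[!duplicated(key), ]
plot(y ~ x, data = DF3, pch = ".", cex = 4, col = DF3$col)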
Simply saving the plot as a high-res PNG file will drastically cut the size while keeping the quality more than good enough. At least I've never had journals complain about any of the PNGs I sent them; just make sure to use > 600 dpi.
I think it might be done with some post-processing of the PDF file. On Linux, if I have to reduce a PDF, I would do
pdf2ps input.pdf output.ps
ps2pdf output.ps output.pdf
which for some reason works quite efficiently.
You can see some discussion at https://askubuntu.com/questions/113544/how-to-reduce-pdf-filesize.
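A small sketch of the same round trip driven from R (assuming pdf2ps and ps2pdf, which ship with Ghostscript, are on the PATH):
shrink_pdf <- function(infile, outfile) {
  # round-trip through PostScript; often shrinks plot-heavy PDFs considerably
  ps <- tempfile(fileext = ".ps")
  system(paste("pdf2ps", shQuote(infile), shQuote(ps)))
  system(paste("ps2pdf", shQuote(ps), shQuote(outfile)))
}
shrink_pdf("input.pdf", "output.pdf")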

Reduce pdf file size of plot in R

I am plotting some data in R using the following commands:
jj = ts(read.table("overlap.txt"))
pdf(file = "plot.pdf")
plot(jj, ylab="", main="")
dev.off()
The result looks like this:
The problem I have is that the PDF file I get is quite big (25Mb). Is there a way to reduce the file size? JPEG is not an option because I need a vector graphic.
Take a look at tools::compactPDF; you need to have either qpdf or ghostscript installed, but it can make a huge difference to PDF file size.
When reading a PDF file from disk, there are 3 options for Ghostscript quality (gs_quality), as indicated in the R help file:
printer (300dpi)
ebook (150dpi)
screen (72dpi)
The default is none. For example, to convert all PDFs in the folder mypdfs/ to ebook quality, use the command
tools::compactPDF('mypdfs/', gs_quality='ebook')
You're drawing a LOT of lines or points. Vector image formats such as PDF, PS, EPS, and SVG maintain logical information about every point, line, and other element, so complexity (and with it file size and drawing time) grows with the number of points. Generally, vector images are best in a number of ways: they are the most compact, they scale best, and they give the highest-quality reproduction. But if the number of graphical elements becomes very large, it's often best to switch to a raster format such as PNG. When you switch to raster, it's best to have a good idea what size image you want, both in pixels and in print measurements, in order to produce the best image.
For information from the other direction, too large a raster image, see this answer.
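For example, a sketch of sizing the raster device up front, reusing the jj series from the question (units and res are standard arguments of the base png() device):
# 7in x 7in at 300 dpi gives a 2100 x 2100 pixel PNG
png("plot.png", width = 7, height = 7, units = "in", res = 300)
plot(jj, ylab = "", main = "")
dev.off()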
One way of reducing the file size is to reduce the number of values that you have. Assuming you have a dataframe called df:
# take a random sample of 10,000 rows from the dataframe
sampleNo <- 10000
sampleData <- df[sample(nrow(df), sampleNo), ]
I think the only other alternative within R is to produce a non-vector. Outside of R you could use Acrobat Professional (which is not free) to optimize the pdf. This can reduce the file size enormously.
Which version of R are you using? In R 2.14.0, pdf() gained an argument compress to support compression. I'm not sure how much it can help you, but there are also other tools to compress PDF files, such as Pdftk and qpdf. I have wrappers for both in the animation package, but you may want to use the command line directly.
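As a sketch, combining device-level compression with a post-hoc pass (tools::compactPDF needs qpdf or Ghostscript, as noted above):
pdf("plot.pdf", compress = TRUE)  # compress is the default in recent R versions
plot(jj, ylab = "", main = "")
dev.off()
tools::compactPDF("plot.pdf", gs_quality = "ebook")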
Hard to tell without seeing what the plot looks like - post a screenshot?
I suspect it's a lot of very detailed lines, and most of the information probably isn't visible: lots of overlapping elements or very, very small detail. Try thinning your data in one dimension or another. I doubt you'll lose visible information.
