How does R's tikzDevice deal with raster images?

I have read that when tikz is given a raster image, it stores it as a PNG. tikz then produces the rest of the graph around it as vector output and includes the raster image back into the final .tex file.
Now I have the following:
library(tikzDevice)
# sensors, plotpath, width, height, res and f.size are defined elsewhere in the script
pic <- TRUE
if (pic)
{
  tikz(file = paste(plotpath, "Rohdaten_S1_S2_D21.tex", sep = ""), width = width, height = height, engine = "pdftex")
  #png(filename=paste(plotpath,"Rohdaten_S1_S2_D6.png",sep=""),width=width,height=height,res=res,units="in")
  par(mfrow = c(2, 1), mar = c(1.1, 3, 2, 0), mgp = c(1.5, 0.5, 0), ps = f.size, cex = 1, xaxt = "n")
}
if (!pic) par(mfrow = c(2, 1), mar = c(1, 4, 3, 0))
for (i in 1:2)
{
  x <- sensors[[i]]$time
  y <- sensors[[i]]$depth
  z <- sensors[[i]]$velo
  image(x, y, z)
  # plot.image(x,y,z
  #   ,xlim=c(max(x)-400,max(x)),zlim=2*c(-1,1)
  #   ,xlab="",ylab="$d/\\mathrm{m}$",zlab="$v/(\\mathrm{mm/s})$"
  #   ,z.adj=c(0,0),ndz=5,z.cex=1
  #   )
  abline(v = (1:10)/0.026 + par("usr")[1], lty = 2)
  if (!pic) abline(h = (1:floor(max(y/0.02)))*0.02)
  mtext(text = paste("Sensor", i), side = 3, line = 0.1, adj = 0)
  par(mar = c(3, 3, 0.1, 0), xaxt = "s")
}
title(xlab = "t/s")
if (pic) dev.off()
Even the simple image() call produces a .tex file of about 100 MB.
No PNG is produced; everything ends up in the .tex file?!
What am I doing wrong? Is there a switch that has to be set to TRUE? What do I have to do to keep the raster image separate from the nicely typeset text?
Thank you for your help.

The solution is quite simple, but not obvious.
The image() function in R produces vector graphics in the first instance. There is a switch, image(..., useRaster = TRUE), with which one can force the image() function to produce raster graphics.
With useRaster = TRUE the image() function expects a regular grid (equally spaced pixels); otherwise an error occurs.
How do you get a regular grid?
Suppose you have an image with the coordinates x[], y[] and the scalar matrix z[,]. Then the re-sampled regular grid can be calculated:
# xlim and ylim are the desired axis ranges, dim.max the pixel dimensions of the new regular grid
x.new <- seq(min(xlim), max(xlim), length.out = dim.max[1])
y.new <- seq(min(ylim), max(ylim), length.out = dim.max[2])
# re-sample every column of z onto the regular x grid ...
z <- apply(z, 2, function(y, x, xout) return(approx(x, y, xout = xout + min(diff(x))/2, method = "constant", rule = 2)$y), x, x.new)
# ... and then every row onto the regular y grid
z <- t(apply(z, 1, function(y, x, xout) return(approx(x, y, xout = xout + min(diff(x))/2, method = "constant", rule = 2)$y), y, y.new))
tikz(file = "a.tex", width = 2, height = 2)
image(x.new, y.new, z, useRaster = TRUE)
dev.off()
The important things are the method = "constant" and the rule = 2 arguments in the approx() function. These enable the "shifting" of the data onto the regular grid.
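A tiny illustration (not from the original answer) of what those two arguments do:
x <- c(0, 1, 3, 6)       # irregular grid
y <- c(10, 20, 30, 40)   # values at x
approx(x, y, xout = 0:6, method = "constant", rule = 2)$y
# 10 20 20 30 30 30 40  -- step-wise ("constant") interpolation;
#                          rule = 2 repeats the boundary values instead of returning NA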
Applying all of this, tikz() will split the picture into an a.tex file and an a_ras1.png file.
I hope this helps somebody who is programming in R and using tikzDevice to produce pictures for TeX documents.

Related

How can I view the output of this animation code in RStudio

This R code is supposed to create an animated plot. I have run it and it did run, but I have not been able to view the result. It is said to save its output to a PDF file; I can see the file, but I am unable to open it. I got the code at How do I transfer output of animation R package on a beamer frame
because I want to learn how to include animated R plots in LaTeX, and I was given this as an example. Can you show me how to view its output, either in RStudio or wherever the code saves it? If the output can be viewed in the PDF it is originally saved to, please show me how. I am using Acrobat Reader DC.
brownianMotion <- function(n=10,xlim=c(-20,20),ylim=c(-20,20),steps=50)
{
x=rnorm(n)
y=rnorm(n)
for (i in 1:steps) {
plot(x,y,xlim = xlim,ylim = ylim)
text(x,y)
# iterate over particles
for(k in 1:n){
walk=rnorm(2); # random move of particle
x[k]=x[k]+walk[1] # new position
y[k]=y[k]+walk[2]
# simple model for preventing a particle from moving past the limits
if(x[k]<xlim[1]) x[k]=xlim[1]
if(x[k]>xlim[2]) x[k]=xlim[2]
if(y[k]<ylim[1]) y[k]=ylim[1]
if(y[k]>ylim[2]) y[k]=ylim[2]
}
}
}
pdf("frames.pdf") # output device and file name
par(xaxs="i", yaxs="i", pty="s") # square plot region
par(mai=c(0.9,0.9,0.2,0.2)) # plot margins
brownianMotion(n=20, steps=400) # 20 particles, 400 time steps
There are two things here:
you need to add dev.off() after plotting so that the current plot is saved to the output device
the loop over steps is rewriting the same file for each plot, so you end up with only the last frame in frames.pdf. Following this tutorial, you should instead write separate PDF files to an output folder and then animate them within LaTeX (e.g. with the animate package).
brownianMotion <- function(n=10,xlim=c(-20,20),ylim=c(-20,20),steps=50){
x=rnorm(n)
y=rnorm(n)
for (i in 1:steps) {
pdf(paste0("out/frames", i, ".pdf")) # save frames{i}.pdf to 'out' folder
plot(x,y,xlim = xlim,ylim = ylim)
text(x,y)
dev.off() # Adding dev.off()
...
}
}
par(xaxs="i", yaxs="i", pty="s") # square plot region
par(mai=c(0.9,0.9,0.2,0.2)) # plot margins
if (!dir.exists("out")) dir.create("out") # create 'out' folder if it doesn't exist
brownianMotion(n=20, steps=4) # 20 particles, 4 steps
The out folder will be located where your working directory is (use getwd() to see it).
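As a possible alternative (a sketch, not part of the original answer, assuming the animation package that the linked question is about, and the original plot-only brownianMotion() from the question): saveLatex() runs the plotting expression, saves the frames and writes a LaTeX snippet based on the animate package, which can then be included in a document or beamer frame.
library(animation)
ani.options(interval = 0.05)
saveLatex({
  par(xaxs = "i", yaxs = "i", pty = "s", mai = c(0.9, 0.9, 0.2, 0.2))
  brownianMotion(n = 20, steps = 40)  # the original function that only plots, without pdf()/dev.off()
})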

Is there a way to read a plot as a matrix of pixels without saving it on disk?

I'd like to have an R plot as a matrix of pixels. I can simply save it and read it back as a bitmap, but is there a faster way to do this directly, without saving the plot first?
library(readbitmap)
tmp = tempfile()
bmp(tmp)
plot(1)
dev.off()
result = read.bitmap(tmp)
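One possible in-memory alternative (a sketch, not part of the original question, assuming the magick package): image_graph() opens a graphics device that renders entirely in memory, so the plot never touches the disk.
library(magick)
img <- image_graph(width = 480, height = 480, res = 96)  # in-memory graphics device
plot(1)
dev.off()
pix <- image_data(img)  # raw array of pixel values (channels x width x height)
ras <- as.raster(img)   # or a matrix of hex colour strings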

Fast way to read in PNG, add grid & coords, and output

I have a lot of png files of a floor plan (mapping) layout that I want to:
read into R
Add grid lines
Add coordinates per cell in grid
Output
There are 1000 of these files, so I'm looking for a speedy method. What would be a fast way to accomplish this task? These don't need to be publication quality, as I'm looking for certain behavior clusters within cells and want to record the coordinates of these events for each of the 100 frames (PNGs).
Here is a MWE that produces 10 png files:
x <- y <- seq(-4*pi, 4*pi, len = 27)
r <- sqrt(outer(x^2, y^2, "+"))
dir.create("delete_me")
wd <- getwd()
setwd("delete_me")
lapply(1:10, function(x){
png(sprintf("file_%s.png", x))
image(z = z <- cos(r^2)*exp(-r/x))
dev.off()
})
setwd(wd)
The final output will look like this for each png (with all the coords filled in).
I assume grid will be the way to create the gridlines quickly but am not sure about reading the png in quickly or plotting the coordinates (assume we'll use a 10 x 10 grid on each png).
How about using ggplot() and annotation_custom() to plot the image across the entire plot area, then manually overplot the grid lines?
(In the image, I trimmed the excess whitespace and axes from the PNG file in advance.)
# pre-req libraries
require(ggplot2)
require(grid) # rasterGrob function
require(png) # to read the PNG file
width<-10
height<-10
# generate the points and labels for the grid
points<-data.frame(expand.grid(w=1:width,h=1:height))
points$labs<-paste0("(",points$w,",",points$h,")")
points$x<-points$w-0.5 # center
points$y<-points$h-0.5
# make the gridline co-ordinates
gridx<-data.frame(x=0:width,xend=0:width,y=rep(0,width+1),yend=rep(height,width+1))
gridy<-data.frame(x=rep(0,height+1),xend=rep(width,height+1),y=0:height,yend=0:height)
grids<-rbind(gridx,gridy)
# function to plot using ggplot with annotation_custom for the image
plotgrid<-function(file){
g<-ggplot(points)+theme_bw()+
annotation_custom(rasterGrob(readPNG(file),0,0,1,1,just=c("left","bottom")),0,width,0,height)+
geom_text(aes(x=x,y=y,label=labs))+
geom_segment(aes(x=x,xend=xend,y=y,yend=yend),data=grids) +
coord_cartesian(c(0,width),c(0,height))
return(g)
}
# run the function for each file in the folder
setwd("delete_me")
lapply(list.files(),function(x)plotgrid(x))
setwd(wd)
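If the annotated plots also need to be written back to disk, something like ggsave() could be wrapped around the function (a sketch, not part of the original answer; the "_grid.png" suffix is only an example):
lapply(list.files("delete_me", pattern = "\\.png$", full.names = TRUE), function(f) {
  ggsave(sub("\\.png$", "_grid.png", f), plotgrid(f), width = 6, height = 6, dpi = 100)
})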

How to deal with a lot of plots in R

I have a for loop which produces 60 plots. I would like to save all these plots in only one file.
If I set par(mfrow=c(10,6)) it says: Error in plot.new() : figure margins too large
What can I do?
My code is as follows:
pdf(file="figure.pdf")
par(mfrow=c(10,6))
for(i in 1:60){
x=rnorm(100)
y=rnorm(100)
plot(x,y)
}
dev.off()
Your default plot, as stated in the loop, does not use the space very effectively. If you look at just a single plot, you can see it has large margins, both between the axes and the edge and between the plot area and the axis text. Effectively, there is a lot of space-hogging.
Secondly, the pdf() function by default creates small pages, 7 by 7 inches. That is not a large sheet to plot on.
Trying to place a 10 x 6 or 12 x 5 grid of plots on 7 by 7 inches therefore means trying to squeeze a lot of whitespace into very little space.
For it to succeed, you must look into the margin options of par, which are mar, mai, oma and omi, and probably some more. Consult the documentation with the command
?par
In addition to this, you could consider not displaying axis text, tick marks, tick labels and titles for every one of the 60 sub-plots, as this too will save you space.
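A minimal sketch of those two adjustments, a larger PDF page plus tighter margins (the specific values are illustrative, not taken from the original answer):
pdf(file = "figure.pdf", width = 12, height = 15)  # larger page than the 7 x 7 inch default
par(mfrow = c(10, 6),
    mar = c(1.5, 1.5, 0.5, 0.5),  # small per-plot margins
    oma = c(3, 3, 1, 1),          # one shared outer margin instead
    mgp = c(1, 0.4, 0))
for (i in 1:60) {
  x <- rnorm(100)
  y <- rnorm(100)
  plot(x, y, xlab = "", ylab = "", axes = FALSE)  # drop per-plot axis text
  box()
}
dev.off()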
But somebody has already gone through some of this trouble for you. Look into the lattice package or ggplot2, which have some excellent methods for making table-like subplots.
But there is another pressing issue: What are you trying to display with 60 subplots?
Update
Seeing what you are trying to do, here is a small example of faceting in ggplot2. It uses the Tufte-theme from jrnold's ggthemes, which is copied into here and then modified slightly in the line after the function.
library(ggplot2)
library(scales)
library(grid) # for unit(), used when modifying the theme below
#### Setup the `theme` for the plot, i.e. the appearance of background, lines, margins, etc. of the plot.
## This function returns a theme-object, which ggplot2 uses to control the appearance.
theme_tufte <- function(ticks=TRUE, base_family="serif", base_size=11) {
ret <- theme_bw(base_family=base_family, base_size=base_size) +
theme(
legend.background = element_blank(),
legend.key = element_blank(),
panel.background = element_blank(),
panel.border = element_blank(),
strip.background = element_blank(),
plot.background = element_blank(),
axis.line = element_blank(),
panel.grid = element_blank())
if (!ticks) {
ret <- ret + theme(axis.ticks = element_blank())
}
ret
}
## Here I modify the theme returned from the function,
theme <- theme_tufte() + theme(panel.margin=unit(c(0,0,0,0), 'lines'), panel.border=element_rect(colour='grey', fill=NA))
## and instruct ggplot2 to use this theme as default.
theme_set(theme)
#### Some data generation.
size = 60*30
data <- data.frame(x=runif(size), y=rexp(size)+rnorm(size), mdl=sample(60,size, replace=TRUE))
#### Main plotting routine.
ggplot(data, aes(x, y, group = mdl)) +  ## base state of the plot, used by all layers: which data to use and which mappings (x uses the x variable, y the y variable)
  geom_point() +                        ## a layer that renders the data as points, i.e. the scatterplot
  stat_quantile(formula = y ~ x) +      ## another layer that adds some statistics, in this case the 25%, 50% and 75% quantile lines
  facet_wrap(~ mdl, ncol = 6)           ## without this, all groups would be displayed in one large plot; this breaks it up according to the `mdl` variable
The usual challenge in using ggplot2 is restructuring all your data into data.frames. For this task, the reshape2 and plyr-packages might be of good use.
For you, I would imagine that the function that creates your subplot both calculates the estimate and creates the plot. This means that you have to split the function so that it calculates the estimate and returns it as a data.frame, which you can then collate and pass to ggplot, as sketched below.
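A minimal sketch of that split, assuming a hypothetical estimate_one() that performs the per-model calculation and returns a data.frame of x and y values:
results <- do.call(rbind, lapply(1:60, function(i) {
  est <- estimate_one(i)  # hypothetical: does the estimation, returns data.frame(x, y)
  cbind(mdl = i, est)     # tag each result with its model number
}))
ggplot(results, aes(x, y)) + geom_point() + facet_wrap(~ mdl, ncol = 6)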
Output the plots to a pdf:
X = matrix(rnorm(60*100), ncol=60)
Y = matrix(rnorm(60*100), ncol=60)
pdf(file="fileName.pdf")
for(j in 1:60){
plot(X[,j], Y[,j])
}
dev.off()
For placing many plots on a page or document (and I have created images with literally thousands of plots in them), it is convenient to separate the work between R--which creates the plots individually--and other software which is better suited for arranging arrays of things. If this reminds you of spreadsheets or word processing tables, then we are thinking alike.
This page, which is a screenshot from a PDF file, contains over 200 statistical graphics. Although it has been greatly reduced (to 40% nominal size) in order to obscure proprietary data, the original has all the detail of the original R graphics and can be zoomed to 1600% without problem.
Two mechanisms have worked reasonably well. For up to several hundred plots, a little macro to import and re-sequence a set of bitmapped image files (.emf or .wmf) into a Word document does fine. For better control, I turn to a comparable Excel macro. It is driven by a sheet that is empty of everything except a row with column headers and a column with row headers. (You can see them at the left and top of the figure.) The macro deletes everything else on that sheet (except for formatting), then munges each possible combination of row and column header into a file name and if it finds that file, it imports it into the corresponding cell. The whole operation takes just a few seconds for several thousand images.
Obviously this communication mechanism between R and the other software is primitive, consisting of a collection of image files having a standard naming convention. But the code needed to implement it all is brief (albeit customized to each situation) and it works reliably. For example, if you encapsulate the plotting code within a function, then it will be called within a loop to create many similar plots. At the end of that function add a few lines to save the plot to a file, something like this:
path <- "W: <whatever>/" # Folder for the output files
ext <- "wmf" # or "emf" or "png" or ... # Format (and extension) of the output
...
if (save) {
outfile <- paste(path, paste(munge(well), munge(parm), sep="_"), sep="/")
outfile <- paste(outfile, ext, sep=".")
savePlot(filename=outfile, type=ext)
}
In this case each plot is identified by two loop variables, well and parm, both of which are strings (they correspond to the column and row headers). The function for creating acceptable filenames merely strips out punctuation, replacing it by an anodyne placeholder:
munge <- function(s) gsub("[[:punct:]]", "_", s)
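For example (with hypothetical well and parameter names):
munge("Well #3 (A)")   # "Well _3 _A_"
munge("pH, field")     # "pH_ field"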
Once those images have been imported into Word, Excel, or wherever you like, it's fairly easy to reorganize them, place other material around them, etc., and then print the result in PDF format.
There is an art to creating these very large "small multiples" (in Tufte's terminology). To the extent possible, it helps to follow Tufte's principle of increasing the data:ink ratio by erasing inessential material. That makes graphical patterns clear even when the tableau has been greatly reduced in size in order to comprehend all its rows and columns at once. Although the preceding figure is a poor example--the individual plots had to have axes, gridlines, labels, and so on so that they can be read in detail when zoomed--the power of this method to reveal patterns is clear even at this scale. It is crucial to make the plots comparable to one another. In this example, which consists of time series, every plot has the same range on the x-axis; within each row (which corresponds to a different type of observation), the ranges on the y-axes are the same; and all color schemes and methods of symbolization are the same throughout.
You could also use knitr. I didn't instantly get this working with base graphics (and I've got to run now), but using ggplot works easily.
\documentclass{article}
\begin{document}
<<echo = FALSE, fig.keep='high', fig.height=3, fig.width=4>>=
require(ggplot2)
for (i in 1:10) print(ggplot(mtcars, aes(x = disp, y = mpg)) + geom_point())
@
\end{document}
The above code will produce a nice multi-page pdf with all the graphs.
For a very simple solution to this type of issue, I found that opening a large windows() device makes the window big enough for many uses.
windows(50,50)
par(mfrow=c(10,6))
for(i in 1:60){
x=rnorm(100)
y=rnorm(100)
plot(x,y)
}
Or in my case,
windows(20,20)
plot(Plotting_I_Need_In_Rows_of_4, mfrow=c(4,4))

Clearing plotted points in R

I am trying to use the animation package to generate an "evolving" plot of points on a map. The map is generated from shapefiles (from the readShapeSpatial/readShapeLines functions).
The problem is that when it's plotted in a for loop, the result is additive, whereas the ideal result is to have it evolve.
Are there ways of using par() that I am missing?
My question is: is there a way to clear just the points plotted by the points() function, without clearing the entire figure and thus without having to re-draw the shapefiles?
In case someone wants to see code:
library(maptools)      # readShapeSpatial(), readShapeLines()
library(animation)     # ani.options(), saveGIF()
library(RColorBrewer)  # brewer.pal()
library(classInt)      # classIntervals(), findColours()
# plotting underlying map
newyork <- readShapeSpatial('nycpolygon.shp')
routes <- readShapeLines('nyc.shp')
par(bg = "grey25")
plot(newyork, lwd = 2, col = "lightgray")
plot(routes, add = TRUE, lwd = 0.1, col = "lightslategrey")
# plotting points and saving to GIF
ani.options(interval = 0.05)
saveGIF({
  par(bg = "grey25")
  # Begin loop
  for (i in 13:44) {
    infile <- paste("Week", i, ".csv", sep = '')
    mydata <- read.csv(file = infile, header = TRUE, sep = ",")
    plotvar <- mydata$Para
    nclr <- 4
    plotclr <- brewer.pal(nclr, "RdPu")
    class <- classIntervals(plotvar, nclr, style = "pretty")
    colcode <- findColours(class, plotclr)
    points(mydata$Lon, mydata$Lat, col = colcode)
  }
})
If you can accept a residual shadow or halo of ink, you can over-plot with col = "white" or with a colour equal to your background choice. We cannot access your shapefile, but you can try it out by adding this line:
points(mydata$Lon, mydata$Lat, col = "grey25")
It may leave gaps in other previously plotted figures or boundaries, because it's definitely not object-oriented. The lattice and ggplot2 graphics models are more object-oriented, so if you want to post a reproducible example, that might be an alternate path to "moving" forward. I seem to remember that the rgl package has animation options in its repertoire.
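A sketch of that idea inside the loop (not from the original answer; prev is a hypothetical variable holding the previous week's coordinates, and "grey25" matches the background used above):
prev <- NULL
for (i in 13:44) {
  mydata <- read.csv(paste("Week", i, ".csv", sep = ''))
  if (!is.null(prev)) points(prev$Lon, prev$Lat, col = "grey25")  # paint over last week's points
  points(mydata$Lon, mydata$Lat, col = "red")                     # draw the new points
  prev <- mydata
}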
