How to plot and save a plot inside an R function on a Mac - r

I want to save a plot from inside a big function that summarizes results. When I run the plotting command outside the big function it works great, but when I run it inside the big function, the resulting plot files cannot be opened because they are damaged. Is there any way to achieve my goal? Neither of the two options I tried below works.
Thanks guys!:)
I am using a Mac running OS X Yosemite.
Here is my code:
library(lattice)
poissonICARMCMCPost = function(overallRes, preProcessData, path){
  #### Posterior part #########################################
  ### get the posterior information from the posterior samples
  result = overallRes$result
  resultSubset = overallRes$resultSubset
  quartz()
  acfplot(resultSubset)
  dev.copy2pdf(file = paste(path, "acfplot", ".pdf", sep = ""))
  dev.off()
}
I have also tried
poissonICARMCMCPost = function(overallRes, preProcessData, path){
  #### Posterior part #########################################
  ### get the posterior information from the posterior samples
  result = overallRes$result
  resultSubset = overallRes$resultSubset
  pdf(paste(path, "acfplot", ".pdf", sep = ""))
  acfplot(resultSubset)
  dev.off()
}

Thanks to MrFlick's great comment, I found a solution that works for now:
poissonICARMCMCPost = function(overallRes, preProcessData, path){
  #### Posterior part #########################################
  ### get the posterior information from the posterior samples
  result = overallRes$result
  resultSubset = overallRes$resultSubset
  quartz()
  print(acfplot(resultSubset))
  dev.copy2pdf(file = paste(path, "acfplot", ".pdf", sep = ""))
  dev.off()
}
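For reference, the second attempt with pdf() produced a damaged file because lattice functions such as acfplot() return trellis objects, and inside a function those objects are only drawn if explicitly print()ed (see R FAQ 7.22). With that fix, the variant without quartz()/dev.copy2pdf() should also work; a minimal sketch, not tested here:
library(lattice)
poissonICARMCMCPost = function(overallRes, preProcessData, path){
  resultSubset = overallRes$resultSubset
  pdf(file = paste(path, "acfplot", ".pdf", sep = ""))
  on.exit(dev.off())             # close the device even if plotting fails
  print(acfplot(resultSubset))   # lattice plots must be printed inside functions
}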

Related

How to send the output plot from the checkresiduals() function to a PDF file

I am wondering if there is a way to send the output plot from the checkresiduals() function to a pdf file.
I have the following command:
checkresiduals(ts_regr_auto_new_objects[[1]], test = FALSE, plot = TRUE)
This generates a series of plots including ACF plot, residual density plot and the residual plot.
I can save the image as a PDF file from the RStudio console, but I would like to be able to do so from code, as this is part of a larger application.
Best regards
Deepak
I'm not familiar with checkresiduals, but generally it should be possible to save a PDF with base R:
# start PDF output
pdf(file = "plot.pdf", paper = "a4")
# some graphics
hist(rnorm(100))
# end of output, save file
dev.off()
So for your case probably this:
pdf(file = "plot.pdf", paper = "a4")
checkresiduals(ts_regr_auto_new_objects[[1]], test = FALSE, plot = TRUE)
dev.off()
See documentation... Hope this helps.
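Since this is part of a larger application, it may also be worth wrapping the save in a small helper so the device is always closed; a minimal sketch (the helper name and output file are made up, and it assumes checkresiduals() from the forecast package draws on the active device):
library(forecast)
save_residual_plot <- function(fit, file) {
  pdf(file = file, paper = "a4")
  on.exit(dev.off())   # guarantee the device is closed, even on error
  checkresiduals(fit, test = FALSE, plot = TRUE)
}
save_residual_plot(ts_regr_auto_new_objects[[1]], "checkresiduals.pdf")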

drake readd function not working for plots

I'm trying to troubleshoot why drake plots are not showing up with readd() - the rest of the pipeline seems to have worked, though.
Not sure if this is caused by minfi::densityPlot or some other reason; my thought is the latter, as it's also not working for the barplot function, which is base R.
In the RMarkdown report I have readd(dplot1) etc. in the chunks, but the output is NULL.
This is the code I have in my R/setup.R file:
library(drake)
library(tidyverse)
library(magrittr)
library(minfi)
library(DNAmArray)
library(methylumi)
library(RColorBrewer)
library(minfiData)
pkgconfig::set_config("drake::strings_in_dots" = "literals") # New file API
# Your custom code is a bunch of functions.
make_beta <- function(rgSet){
  rgSet_betas = minfi::getBeta(rgSet)
}
make_filter <- function(rgSet){
  rgSet_filtered = DNAmArray::probeFiltering(rgSet)
}
This is my R/plan.R file:
# The workflow plan data frame outlines what you are going to do
plan <- drake_plan(
  baseDir = system.file("extdata", package = "minfiData"),
  targets = read.metharray.sheet(baseDir),
  rgSet = read.metharray.exp(targets = targets),
  mSetSq = preprocessQuantile(rgSet),
  detP = detectionP(rgSet),
  dplot1 = densityPlot(rgSet, sampGroups = targets$Sample_Group, main = "Raw", legend = FALSE),
  dplot2 = densityPlot(getBeta(mSetSq), sampGroups = targets$Sample_Group, main = "Normalized", legend = FALSE),
  pal = RColorBrewer::brewer.pal(8, "Dark2"),
  dplot3 = barplot(colMeans(detP[, 1:6]), col = pal[factor(targets$Sample_Group[1:6])], las = 2, cex.names = 0.8, ylab = "Mean detection p-values"),
  report = rmarkdown::render(
    knitr_in("report.Rmd"),
    output_file = file_out("report.html"),
    quiet = TRUE
  )
)
After using make(plan) it looks like everything ran smoothly:
config <- drake_config(plan)
vis_drake_graph(config)
I am able to use loadd() to load the objects needed for one of these plots and then make the plots, like this:
loadd(rgSet)
loadd(targets)
densityPlot(rgSet, sampGroups=targets$Sample_Group,main="Raw", legend=FALSE)
But the readd() command doesn't work?
The output in the .html for dplot3 looks weird...
Fortunately, this is expected behavior. drake targets are return values of commands, and so the value of dplot3 is supposed to be the return value of barplot(). The return value of barplot() is actually not a plot. The "Value" section of the help file (?barplot) explains the return value.
A numeric vector (or matrix, when beside = TRUE), say mp, giving the coordinates of all the bar midpoints drawn, useful for adding to the graph.
If beside is true, use colMeans(mp) for the midpoints of each group of bars, see example.
So what is going on? As with most base graphics functions, the plot from barplot() is actually a side effect. barplot() sends the plot to a graphics device and then returns something else to the user.
Have you considered ggplot2? The return value of ggplot() is actually a plot object, which is more intuitive. If you want to stick with base graphics, maybe you could save the plot to an output file.
plan <- drake_plan(
  ...,
  dplot3 = {
    pdf(file_out("dplot3.pdf"))
    barplot(...)
    dev.off()
  }
)
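If you do switch to ggplot2 (already loaded via tidyverse in your setup.R), the command's return value is a genuine plot object, so readd(dplot3) will display it. A rough sketch of what that target could look like; the handling of detP and targets here is an assumption about your data, not tested code:
plan <- drake_plan(
  ...,
  # hypothetical ggplot2 version of dplot3: the command returns a plot object
  dplot3 = ggplot(
    data.frame(
      sample = colnames(detP)[1:6],
      mean_p = colMeans(detP[, 1:6]),
      group  = factor(targets$Sample_Group[1:6])
    ),
    aes(x = sample, y = mean_p, fill = group)
  ) +
    geom_col() +
    scale_fill_brewer(palette = "Dark2") +
    labs(x = NULL, y = "Mean detection p-values")
)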

Save automatically produced plots in R

I'm using a function in R that analyses my data and produces several plots.
The function is snpzip() from the adegenet package.
I would like to automatically save the three plots that the function produces as part of the output. Do you have any suggestions on how to do it?
I want to point out that I know how to save a single plot, for instance with png() or pdf() followed by dev.off(). My problem is that when I run snpzip(snps, phen, method = "centroid"), the output is three plots (which I would like to save).
I report here the same example as in the "adegenet" package:
simpop <- glSim(100, 10000, n.snp.struc = 10, grp.size = c(0.3, 0.7),
                LD = FALSE, alpha = 0.4, k = 4)
snps <- as.matrix(simpop)
phen <- simpop@pop
outcome <- snpzip(snps, phen, method = "centroid")
If you use a filename with a C integer format in it, then R will substitute the page number for that part of the name, generating multiple files. For example,
png("page%d.png")
plot(1)
plot(2)
plot(3)
dev.off()
will generate 3 files, page1.png, page2.png, and page3.png. For pdf(), you also need onefile=FALSE:
pdf("page%d.pdf", onefile = FALSE)
plot(1)
plot(2)
plot(3)
dev.off()
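Applied to the snpzip() call from the question, a minimal sketch (assuming snpzip() simply draws its three plots on the active device) would be:
# writes snpzip_plot1.pdf, snpzip_plot2.pdf and snpzip_plot3.pdf, one per plot
pdf("snpzip_plot%d.pdf", onefile = FALSE)
outcome <- snpzip(snps, phen, method = "centroid")
dev.off()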

Use of recordPlot() and replayPlot() in Parallel in R to save plot in the same PDF

I would like to plot data in parallel using foreach in R but I didn't find any way to get all my plots in the same pdf file. I thought of using recordPlot to save my plots in a list and then print them in a pdf device but it doesn't work.
I have the following error :
Error in replayPlot(x) : loading snapshot from a different session
I tried ggplot as well, but it is too slow with my large dataset.
Here is a piece of code showing my problem :
# Creating a data frame: df
df = as.data.frame(matrix(nrow = 1, ncol = 10))
df = apply(df, 2, function(x) runif(100))
# Plotting function
par.plot = function(dat){
  plot(dat)
  p = recordPlot()
  return(p)
}
#Applying the function in parallel
library("parallel")
library("foreach")
library("doParallel")
cl <- makeCluster(detectCores())
registerDoParallel(cl, cores = detectCores())
plot.lst = foreach(i = 1:nrow(df)) %dopar% {
  par.plot(df[i, ])
}
# Trying to get 1st plot
plot.lst[[1]]
Error in replayPlot(x) : loading snapshot from a different session
Replacing %dopar% with %do% works when I try to get my plots, because they seem to have been generated in the same environment.
I know I can call a pdf device inside the loop to generate a file for each iteration, but I would like to know if there is a way to get one file for all my plots at the output of my function.
Or do you know an easy way to merge my pdf files afterwards?
Thanks for your help.
Charles
In my opinion your question can be divided into two distinct parts:
1. Using the replayPlot function in the %dopar% loop without getting the weird error
2. Somehow getting one file at the end
The first question is easy to answer. The reason you get this error is that R somehow remembers where (at the OS level) the plots were generated. You can get the same effect by using RStudio Server and trying to replay recorded plots a couple of hours after closing the browser tab. In brief, the issue is that R remembers the PID of the process that generated the plot (I don't know why, though!):
# generate a plot
plot(iris[, 1:2])
# record the plot
myplot <- recordPlot()
# check the PID of the process that created it
attr(x = myplot, which = "pid")
The good thing is that you can overwrite this by assigning your current PID:
attr(x = myplot, which = "pid") <- Sys.getpid()
So you only need to change the last part of your code to the following:
pdf(file = "plot.lst.pdf"))
graphics.off()
lapply(plot.lst, function(x){
attr(x = x, which = "pid") <- Sys.getpid()
replayPlot(x)})
graphics.off()
The part above entirely solves your problem, but in case you are interested in merging PDF files, follow this discussion:
Merging existing PDF files using R
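For example, the pdftools package offers pdf_combine(), which concatenates several PDFs into one; a minimal sketch with made-up file names:
library(pdftools)
# merge one file per plot into a single document
pdf_combine(input = c("plot1.pdf", "plot2.pdf", "plot3.pdf"),
            output = "all_plots.pdf")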

Unable to generate PDF with neural network graph

I'm trying to create a hard-copy image of a neural network graph and it keeps failing. If I try to create a PNG, nothing is generated, and if I try to generate a PDF I get a small file output that refuses to open with "file may be damaged" errors. If I just let it display in a graphics window, the image comes up fine.
I'm using R 2.15.1 on OS X (10.7.4), built via MacPorts. The code I'm working with at the moment:
library(ALL)
library(neuralnet)
data(ALL)
ALL.pdat <- pData(ALL)
bt <- factor(substring(ALL.pdat$BT,1,1))
all.sds <- apply(exprs(ALL),1,sd)
top.10.sds <- rank(all.sds)>length(all.sds)-10
exprs.top.10 <- as.data.frame(t(exprs(ALL)[top.10.sds,]))
nn.data <- cbind(exprs.top.10, as.numeric(bt))
## Gene names start with a number, and that causes problems when trying to set up the
## formula for neuralnet.
col.names <- paste("g", colnames(nn.data), sep = '')
col.names[11] <- "bt"
colnames(nn.data) <- col.names
my.nn <- neuralnet(bt ~ g36108_at + g36638_at + g37006_at + g38096_f_at + g38319_at + g38355_at + g38514_at + g38585_at + g39318_at + g41214_at, nn.data, hidden = 10, threshold = 0.01)
summary(my.nn)
pdf("./nn-all.pdf")
plot.nn(my.nn)
dev.off()
png("./nn-all.png")
plot.nn(my.nn)
dev.off()
I've even rebooted the machine to make sure that all the memory is cleared up, and that didn't help any.
Simple reproducible example:
pdf("test.pdf")
set.seed(42)
plot(runif(20),rnorm(20))
png("test.png")
set.seed(42)
plot(runif(20),rnorm(20))
dev.off()
If I try to open the PDF with Adobe Reader on my German Windows 7, I get a nice informative error message telling me that the file cannot be opened because the file is in use by another application. This can be fixed easily:
pdf("test.pdf")
set.seed(42)
plot(runif(20),rnorm(20))
dev.off() #make sure to close the graphics device
png("test.png")
set.seed(42)
plot(runif(20),rnorm(20))
dev.off()
Edit:
The problem is plot.nn. Until the package gets patched, you need to redefine plot.nn manually as shown in this answer.
