PDFs produced by R have inconsistent MD5 checksums

I'm testing an R package using testthat. Writing tests for an S3 method plot.foo is a huge headache, because it simply returns NULL, so I decided to save the plot to a file and check if it has been changed since the last run.
pdf(file='plot_foo.pdf')
plot.foo(bar)
dev.off()
tools::md5sum('plot_foo.pdf')
The problem is each time I'm getting a different result with the same input. The output looks the same, though.
replicate(10, {
  pdf(file='plot.pdf')
  plot(1:10, 10:1)
  dev.off()
  Sys.sleep(1)
  tools::md5sum('plot.pdf')
})
Note that you need to wait a while between iterations, otherwise the files would be identical, which makes me suspect some time-based metadata is being changed.
plot.pdf plot.pdf
"5a0c096fe088342bc3c3d5960c5da1c9" "40d93c26b4901aef55a32b75473d05d2"
plot.pdf plot.pdf
"9815c6d9b2e94cda763a486fcd2ddf08" "a8e8db82d06b79f98416fa034b5aee46"
plot.pdf plot.pdf
"c2770250dbef3b60706559114c434851" "91c8cf124eb61ddebd3edbbb2d01677f"
plot.pdf plot.pdf
"d1594bd83b97fc890410a4c305366682" "f05197f165ec04df3dac4664494f4617"
plot.pdf plot.pdf
"64427124c6a6454e8f0e5944de20be95" "ff1abf2b31dfe688cf8f5994e409cc6d"
How do I force R to produce consistent PDFs? I'm temporarily switching to PostScript for testing purposes, but I'd prefer PDF as it's better supported (Windows doesn't seem to have a built-in PostScript viewer) and the output can then also serve as documentation.

While I think it's a little rough in a few places, vdiffr should let you do what you need.
First, I'm going to create a package; fake for now, but necessary, since vdiffr only works in a tightly-controlled environment: a package using testthat.
usethis::create_package("~/StackOverflow/nalzok")
setwd("~/StackOverflow/nalzok")
usethis::use_testthat()
Create a test file, tests/testthat/test_something.R:
context("basic plot tests")
baseplot1 <- function() hist(1:10)
vdiffr::expect_doppelganger("base 1", baseplot1)
(I'm going to assume that hist(1:10) is something relevant and interesting. Base plots need to be a function, ggplot2 objects do not; see the docs for more.)
I had thought I could call vdiffr::expect_doppelganger directly (as most testthat::expect_* functions can be), but it needs to be "managed" (set up) first.
vdiffr::manage_cases(".")
Each of the images needs to be "verified" (by a human), so this opens a shiny app that iterates through each of the expected doppelgangers.
After validation, each time you test the package, it will verify that the images have not changed:
devtools::test()
# Loading nalzok
# Testing nalzok
# v |  OK F W S | Context
# v |   1       | basic plot tests
# == Results =====================================================================
# OK: 1
# Failed: 0
# Warnings: 0
# Skipped: 0
If something changes (perhaps changing the hist(1:10) to hist(2:11)), it'll fail the next test:
devtools::test()
# Loading nalzok
# Testing nalzok
# v |  OK F W S | Context
# x |   0 1     | basic plot tests
# --------------------------------------------------------------------------------
# test_something.R:3: failure: (unknown)
# Figures don't match: base-1.svg
# --------------------------------------------------------------------------------
# == Results =====================================================================
# OK: 0
# Failed: 1
# Warnings: 0
# Skipped: 0
It does this by creating a ./tests/testthat/figs/ directory with a directory and .svg file for each expectation; while you don't need to interact with it directly, it makes sense for .../figs/ to be version-controlled (you do version-control your package, right?).
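For the example above, the layout looks roughly like this (directory and file names inferred from the context and doppelganger title; the exact slugs may vary between vdiffr versions):
tests/
  testthat/
    test_something.R
    figs/
      basic-plot-tests/
        base-1.svg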
Some caveats, I guess:
it is saving to .svg files; if your S3 plot.foo function doesn't play well with SVG (does that happen? I don't know), then I don't know (yet) how to deal with that;
since it's using the text-based SVG format, it will notice if a point or box or anything else shifts, but only beyond some basic tolerance; for example, if a meta-parameter such as an axis limit is changed sufficiently, it will trigger a failure. This is generally good, since the test should still be resilient to truly minor changes (an upstream library update, etc.):
hist(1:10) # pass
hist(1:10, xlim=c(0,10)) # pass, that's the default x-limit given the data
hist(1:10, xlim=c(0,10+1e-5)) # pass, close enough?
hist(1:10, xlim=c(0,10+1e-4)) # FAIL

Related

testthat and futile logger in R giving confusing results

I'm attempting to run some unit tests in R using a combination of futile.logger and testthat.
I'm relatively new to using these packages and am trying to run a couple of tests at once using the code below. However, I'm somewhat confused by the behaviour of the code.
I would have thought that either:
All of the warnings would be printed to the log file (due to the flog threshold being the same)
or alternatively
Only the warnings that violate the expect_equal test and which also meet the flog threshold would be printed to the log file.
However, the code only seems to print the warning "This should appear" along with the message related to the first line of testing, "Testing if A==D RIGHT should NOT APPEAR". None of the other warnings are printed regardless of the flog threshold or whether the expect_equal is satisfied or not.
Can someone with greater knowledge of these packages please help clarify whether this is the expected behaviour, or what I am misunderstanding about the functions and/or their interaction?
Ideally, I would like to log the output only when the expect_equal is violated. I have some code that achieves that (in the second code chunk), but am wondering if there is a way to achieve the same result using the syntax from the testthat package.
#devtools::install_github("r-lib/testthat")
library(testthat)
library(futile.logger)
# Set the logger threshold to only show warnings OR errors
flog.threshold(WARN) # INFO # ERROR
flog.appender(appender.file("progress.log"))
flog.info("Key table created", name="quiet")
A=1000
B=9999
C=9999
D=1000
test_that("INFO is not logged",{
expect_silent(flog.info("This should NOT appear"))
expect_silent(flog.warn("This should appear"))
expect_equal(A,D,flog.warn("Testing if A==D RIGHT should NOT APPEAR"))
expect_equal(B,C,flog.warn("Testing if B==C RIGHT should NOT APPEAR"))
expect_equal(A,B,flog.warn("Testing if A==B WRONG should APPEAR"))
})
This code essentially achieves what I want (logging the warnings when the expectation of equality is violated), but it seems kind of clunky compared to the above code. Is there a better/less clunky way to achieve this using the syntax from the testthat package?
library(testthat)
library(futile.logger)
A=1000
B=9999
C=9999
D=1000
# Set the logger threshold to only show warnings OR errors
flog.threshold(WARN) # INFO # ERROR
flog.appender(appender.file("progress.log"))
flog.info("Key table created", name="quiet")
# Function to write to log file when test is not satisfied.
log_testing = function(DF1, DF2){
  if(DF1 == DF2){
    flog.info("Testing if DF1==DF2 PASS SHOULD NOT BE FLAGGED")
  } else {
    flog.warn("WARNING in script ABCD")
    flog.warn("Testing if DF1==DF2 FAIL SHOULD BE FLAGGED")
  }
}
# Check that the right things are flagged
log_testing(A,B) # FAIL
log_testing(B,C) # PASS
log_testing(C,D) # FAIL
log_testing(B,C) # PASS
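One possible direction (a sketch added here rather than an accepted answer; it relies on a failed expectation being signalled as an error condition that tryCatch can intercept, which may vary across testthat versions) is to wrap expect_equal so the warning is logged only on failure and the failure is then re-signalled for testthat:
library(testthat)
library(futile.logger)
# Hypothetical helper: log a warning when the expectation fails,
# then re-raise the condition so testthat still records the failure.
expect_equal_logged <- function(object, expected, msg) {
  tryCatch(
    expect_equal(object, expected),
    error = function(e) {
      flog.warn(msg)
      stop(e)
    }
  )
}
test_that("equality is logged only on failure", {
  expect_equal_logged(A, D, "Testing if A==D RIGHT should NOT APPEAR")
  expect_equal_logged(A, B, "Testing if A==B WRONG should APPEAR")
})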

Is an rJava object exportable in future (package for asynchronous computing in R)?

I'm trying to speed up my R code with the future package, using the multicore plan on Linux. In the future definition I'm creating a Java object and trying to pass it to .jcall(), but I'm getting a null value for the Java object inside the future. Could anyone please help me resolve this? Below is sample code:
library("future")
plan(multicore)
library(rJava)
.jinit()
# preprocess is a user-defined function
preprocess <- function(a) {
  # some preprocessing task here
  # time-consuming statistical analysis here
  return(lreturn)  # return a list of 3 components
}
my_value <- preprocess(a = value)
obj <- .jnew("java.custom.class")
f <- future({
  .jcall(obj, "V", "CustomJavaMethod", my_value)
})
Basically I'm dealing with large streaming data. In the above code I'm sending the string of streaming data to a user-defined function for statistical analysis, which returns a list of 3 components. I then want to send this list to a custom Java class [java.custom.class] for further processing, using the custom Java method [CustomJavaMethod].
Without using future my code runs fine. But I'm receiving 12 streaming records per minute, and my code gets slow; I have observed delays in processing.
Currently I'm using Unix with 16 cores. After using the future package my processing is faster, but I have traced my code back and something goes wrong in .jcall.
Hope this clarifies my pain.
(Author of the future package here:)
Unfortunately, there are certain types of objects in R that cannot be sent to another R process for further processing. To clarify, this is a limitation of those types of objects, not of the parallel framework used (here the future framework). The simplest example of such an object may be a file connection, e.g. con <- file("my-local-file.txt", open = "wb"). I've documented some examples in Section 'Non-exportable objects' of the 'Common Issues with Solutions' vignette (https://cran.r-project.org/web/packages/future/vignettes/future-4-issues.html).
As mentioned in the vignette, you can set an option (*) such that the future framework looks for these types of objects and gives an informative error before attempting to launch the future ("early stopping"). Here is your example with this check activated:
library("future")
plan(multisession)
## Assert that global objects can be sent back and forth between
## the main R process and background R processes ("workers")
options(future.globals.onReference = "error")
library("rJava")
.jinit()
end <- .jnew("java/lang/String", " World!")
f <- future({
  start <- .jnew("java/lang/String", "Hello")
  .jcall(start, "Ljava/lang/String;", "concat", end)
})
# Error in FALSE :
# Detected a non-exportable reference ('externalptr') in one of the
# globals ('end' of class 'jobjRef') used in the future expression
So, yes, your example actually works when using plan(multicore). The reason for that is that 'multicore' uses forked processes (available on Unix and macOS but not Windows). However, I would try my best not to limit your software to parallelizing only on "forkable" systems; if you can find an alternative approach I would aim for that. That way your code will also work on, say, a huge cloud cluster.
(*) The reason these checks are not enabled by default is that (a) they're still in beta testing, and (b) they come with overhead because we basically need to scan for non-supported objects among all the globals. Whether these checks will be enabled by default in the future or not will be discussed over at https://github.com/HenrikBengtsson/future.
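One alternative along those lines (an untested sketch added here for illustration, not part of either answer) is to create the Java reference inside the future itself, so that no external pointer has to be exported to the worker:
library(future)
plan(multisession)   # plain background R processes; nothing is forked
f <- future({
  library(rJava)     # load and initialise the JVM inside the worker
  .jinit()
  start <- .jnew("java/lang/String", "Hello")   # created in the worker, so never exported
  .jcall(start, "Ljava/lang/String;", "concat", " World!")
})
value(f)             # should give "Hello World!"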
The code in the question is calling an unknown Method1 method, my_value is undefined, ... it's hard to know what you are really trying to achieve.
Take a look at the following example, maybe you can get inspiration from it:
library(future)
plan(multicore)
library(rJava)
.jinit()
end = .jnew("java/lang/String", " World!")
f <- future({
  start = .jnew("java/lang/String", "Hello")
  .jcall(start, "Ljava/lang/String;", "concat", end)
})
value(f)
[1] "Hello World!"

Externalise config file and functions in R markdown

I am having problems understanding the (practical) difference between the different ways to externalise code in R notebooks. Having referred to previous questions and to the documentation, it is still unclear to me what the difference is between sourcing external .R files and using read_chunk() on them. For practical purposes, let us consider the below:
I want to load libraries with an external config.R file: the most intuitive way, it seems to me, is to create config.R as
library(first_package)
library(second_package)
...
and, in the main R notebook (say, main.Rmd), call it like
```{r}
source('config.R')
```
```{r}
# use the libraries included above
```
However, this does not recognise the packages included, so it seems that sourcing an external config file is useless; likewise when using read_chunk() instead. Therefore the question is: how can I include libraries at the top, so that they are recognised in the main markdown script?
Say I want to define global functions externally and then include them in the main notebook: along the same lines as above, one would define them in an external foo.R file and include them in the main one.
Again, it seems that read_chunk() does not do the job, whereas source('foo.R') does, in this case; the documentation states that the former "only evaluates code, but does not execute it": when is it ever the case that one wants to only evaluate the code but not execute it? Differently posed: why would one ever use read_chunk() rather than source, for practical purposes?
This does not recognise the packages included
In your example, first_package and second_package are both available in the working environment for the second code chunk.
Try putting library(nycflights13) in the R file and head(airlines) in the second chunk of the Rmd file. Calling knit("main.Rmd") would fail if the nycflights13 package wasn't successfully loaded with source.
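For instance, a minimal main.Rmd along those lines (file names as in the question; the nycflights13 example is the one suggested above) would be:
```{r}
source('config.R')   # config.R contains library(nycflights13)
```
```{r}
head(airlines)       # works because source() attached the package
```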
read_chunk does in fact accomplish this (along with source); however, they go about it differently. With source you will have the global functions available directly after the source call (as you have found). With read_chunk, however, since it "only evaluates code, but does not execute it" (as you pointed out), you need to explicitly execute the chunk before the function becomes available. (See my example with third_config_chunk below: including the empty chunk third_config_chunk in the report allows the global some_function to be called in subsequent chunks.)
Regarding "only evaluates code, but does not execute it", this is an entire property of R programming known as lazy evaluation. The idea being that you may want to create a number of functions or template code which is read into your R environment but is not executed on-the-spot, allowing you to modify the environment/parameters prior to evaluation. This also allows you to execute the same code chunks multiple times whereas source will only run once with what is already provided.
Consider an example where you have an external R script which contains a large amount of setup code that isn't needed in your report. It is possible to format this file into many "chunks" which will be loaded into the working environment with read_chunk but won't be evaluated until explicitly told.
In order to externalise your config.R using read_chunk() you would write the R script as:
config.R
# ---- config_preamble
## setup code that is required for config.R
## to run but not for main.Rmd
# ---- first_config_chunk
library(nycflights13)
library(MASS)
# ---- second_config_chunk
y <- 1
# ---- third_config_chunk
some_function <- function(x) {
  x + y
}
# ---- fourth_config_chunk
some_function(10)
# ---- config_output
## code that is output during `source`
## and not wanted in main.Rmd
print(some_function(10))
To use this script with the externalisation methodology, you would setup main.Rmd as follows:
main.Rmd
```{r, include=FALSE}
knitr::read_chunk('config.R')
```
```{r first_config_chunk}
```
The packages are now loaded.
```{r third_config_chunk}
```
`some_function` is now available.
```{r new_chunk}
y <- 20
```
```{r fourth_config_chunk}
```
## [1] 30
```{r new_chunk_two}
y <- 100
lapply(seq(3), some_function)
```
## [[1]]
## [1] 101
##
## [[2]]
## [1] 102
##
## [[3]]
## [1] 103
```{r source_file_instead}
source("config.R")
```
## [1] 11
As you can see, if you were to source this file, there would be no way to modify the call to some_function prior to execution, and the call would print an output of "11". Now that the chunks are available in the environment, they can be re-called any number of times (after, for example, changing the value of y) or used in any other way in the current environment (e.g. new_chunk_two), which would not be possible with source if you didn't want the rest of the R script to execute.

Erroneous code diagnostics report in RStudio when sourcing functions via source

I'm working in RStudio on a simple analysis where I source some files via the source command. For example, I have this file with some simple analysis:
analysis.R
# Settings ----------------------------------------------------------------
data("mtcars")
source("Generic Functions.R")
# Some work ---------------------------------------------------------------
# Makes no sense
mtcars$mpg <- CleanPostcode(mtcars$mpg)
The generic functions file has some simple functions that I use to derive graphs and do repetitive tasks. For example, the CleanPostcode function used above looks like this:
Generic Functions.R
#' The file provides a set of generic functions
# String manipulations ----------------------------------------------------
# Create a clean Postcode for matching
CleanPostcode <- function(MessyPostcode) {
  MessyPostcode <- as.character(MessyPostcode)
  MessyPostcode <- gsub("[[:space:]]", "", MessyPostcode)
  MessyPostcode <- gsub("[[:punct:]]", "", MessyPostcode)
  MessyPostcode <- toupper(MessyPostcode)
  cln_str <- MessyPostcode
  return(cln_str)
}
When I run the first file, the objects are available in the global environment. (There are some other functions in the file, but they are not relevant to the described problem.)
Nevertheless, RStudio flags the object as not available in scope, as indicated by a yellow warning triangle next to the code.
Question
Is there a way to make RStudio stop doing that? Maybe by changing something in the source command? I tried local = TRUE and got the same thing. The code works with no problems; I just find it annoying.
The report was generated on version 0.99.491 of RStudio.

retrieve original version of package function even if over-assigned

Suppose I replace a function of a package, for example knitr:::sub_ext.
(Note: I'm particularly interested where it is an internal function, i.e. only accessible by ::: as opposed to ::, but the same answer may work for both).
library(knitr)
my.sub_ext <- function (x, ext) {
  return("I'm in your package stealing your functions D:")
}
# replace knitr:::sub_ext with my.sub_ext
knitr <- asNamespace('knitr')
unlockBinding('sub_ext', knitr)
assign('sub_ext', my.sub_ext, knitr)
lockBinding('sub_ext', knitr)
Question: is there any way to retrieve the original knitr:::sub_ext after I've done this? Preferably without reloading the package?
(I know some people will want to know why I would want to do this, so here it is; not required reading for the question.) I've been patching some functions in packages like so (not actually the sub_ext function...):
original.sub_ext <- knitr:::sub_ext
new.sub_ext <- function (x, ext) {
  # some extra code that does something first, e.g.
  x <- do.something.with(x)
  # now call the original knitr:::sub_ext
  original.sub_ext(x, ext)
}
# now set knitr:::sub_ext to new.sub_ext like before.
I agree this is not in general a good idea (in most cases these are quick fixes until changes make their way into CRAN, or they are "feature requests" that would never be approved because they are somewhat case-specific).
The problem with the above is if I accidentally execute it twice (e.g. it's at the top of a script that I run twice without restarting R in between), on the second time original.sub_ext is actually the previous new.sub_ext as opposed to the real knitr:::sub_ext, so I get infinite recursion.
Since sub_ext is an internal function (I wouldn't call it directly, but functions from knitr like knit all call it internally), I can't hope to modify all the functions that call sub_ext to call new.sub_ext manually, hence the approach of replacing the definition in the package namespace.
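As a side note on the double-execution issue mentioned above, a simple guard (sketched here, not part of the original post) is to stash the original only once, so re-running the script is safe:
# only take the snapshot if it hasn't been taken already
if (!exists("original.sub_ext", inherits = FALSE)) {
  original.sub_ext <- knitr:::sub_ext
}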
When you do assign('sub_ext', my.sub_ext, knitr), you are irrevocably overwriting the value previously associated with sub_ext with the value of my.sub_ext. If you first stash the original value, though, it's not hard to reset it when you're done:
library(knitr)
knitr <- asNamespace("knitr")
## Store the original value of sub_ext
.sub_ext <- get("sub_ext", envir = knitr)
## Overwrite it with your own function
my.sub_ext <- function (x, ext) "I'm in your package stealing your functions D:"
assignInNamespace('sub_ext', my.sub_ext, knitr)
knitr:::sub_ext("eg.csv", "pdf")
# [1] "I'm in your package stealing your functions D:"
## Reset when you're done
assignInNamespace('sub_ext', .sub_ext, knitr)
knitr:::sub_ext("eg.csv", "pdf")
# [1] "eg.pdf"
Alternatively, as long as you are just adding lines of code to what's already there, you could add that code using trace(). What's nice about trace() is that, when you are done, you can use untrace() to revert the function's body to its original form:
trace(what = "mean.default",
tracer = quote({
a <- 1
b <- 2
x <- x*(a+b)
}),
at = 1)
mean(1:2)
# Tracing mean.default(1:2) step 1
# [1] 4.5
untrace("mean.default")
# Untracing function "mean.default" in package "base"
mean(1:2)
# [1] 1.5
Note that if the function you are tracing is in a namespace, you'll want to use trace()'s where argument, passing it the name of some other (exported) function that shares the to-be-traced function's namespace. So, to trace an unexported function in knitr's namespace, you could set where = knit.
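For example, a sketch of what that might look like for knitr's unexported sub_ext (the tracer body is only illustrative):
# print a message every time knitr's unexported sub_ext runs
trace(what = "sub_ext",
      tracer = quote(message("sub_ext was called")),
      where = knit)        # knit is exported and shares knitr's namespace
# ... run something that calls sub_ext internally ...
untrace("sub_ext", where = knit)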
