When visualizing data with plotly, I want to write widgets as HTML documents without htmlwidgets::saveWidget writing the dependencies every time, assuming these are already in place, to save processing time. The widgets do not need to be self-contained, which also saves disk space.
library(plotly)
t <- Sys.time()
p <- plot_ly(ggplot2::diamonds, y = ~price, color = ~cut, type = "box")
htmlwidgets::saveWidget(as_widget(p), "test.html", selfcontained = F, libdir = NULL)
print(Sys.time() - t)
Time difference of 4.303076 secs on my machine.
This produces ~6 MB of data in dependencies alone (crosstalk-1.0.0, htmlwidgets-1.2, jquery-1.11.3, plotly-binding-4.7.1.9000, plotly-htmlwidgets-css-1.38.3, plotly-main-1.38.3, typedarray-0.1).
htmlwidgets::saveWidget writes dependencies although these files already exist. Can this be prevented?
Good question. I tried to answer inline in comments within the code. htmlwidgets dependencies come from two sources: htmlwidgets::getDependency() and the dependencies element of the widget list. Changing the src element within dependencies from file to href means those dependencies will not get copied. The dependencies from htmlwidgets::getDependency() are harder to override, but in this case they are only htmlwidgets.js and plotly-binding.js, which are fairly small in comparison with the others.
library(plotly)
p <- plot_ly(ggplot2::diamonds, y = ~price, color = ~cut, type = "box")
# let's inspect our p htmlwidget list for clues
p$dependencies
# if the src argument for htmltools::htmlDependency
# is file then the file will be copied
# but if it is href then the file will not be copied
# start by making a copy of your htmlwidget
# this is not necessary, but we'll do it to demonstrate the difference
p2 <- p
p2$dependencies <- lapply(
  p$dependencies,
  function(dep) {
    # I use "" below but guessing that is not really the location
    dep$src$href <- ""   # directory of your already saved dependency
    dep$src$file <- NULL
    return(dep)
  }
)
# note this will still copy htmlwidgets and plotly-binding;
# changing that would require a much bigger hack to htmlwidgets::getDependency()
t <- Sys.time()
htmlwidgets::saveWidget(as_widget(p), "test.html", selfcontained = F, libdir = NULL)
print(Sys.time() - t)
t <- Sys.time()
htmlwidgets::saveWidget(as_widget(p2), "test.html", selfcontained = F, libdir = NULL)
print(Sys.time() - t)
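For concreteness, here is a sketch of what filling in that href might look like if the dependency folders already sit next to the HTML file in a lib directory. The lib folder name and the name-version pattern are my assumptions, not part of the original answer; point them at wherever your previously written copies actually live.
lib_dir <- "lib"  # hypothetical: already contains crosstalk-1.0.0, plotly-main-1.38.3, ...
p2$dependencies <- lapply(p$dependencies, function(dep) {
  # reference the existing on-disk copy instead of the file shipped with the package
  dep$src <- list(href = file.path(lib_dir, paste0(dep$name, "-", dep$version)))
  dep
})
htmlwidgets::saveWidget(as_widget(p2), "test2.html", selfcontained = FALSE, libdir = lib_dir)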
I would like to run a very simple script concurrently or asynchronously, displaying an estimated progress bar.
This works well enough when using system2() like this:
path <- '../Desktop/.../My_Skript_Dir/'
system2(command = "cmd.exe",
        input = paste('"./R-4.2.1/bin/Rscript.exe"',
                      paste0(path, '/Progress_Bar.R')), wait = FALSE)
If possible I would like to avoid using system2, though, and I recently found out that callr might do the trick. It almost works, using the function from the "Progress_Bar" script:
estimated_progress <- function(df = NULL, add_time = FALSE){
  require(tcltk)
  require(callr)
  pred <- round(nrow(df) * 0.6)  # prediction
  callr::r_bg(func = function(pred){  # open background R session
    pb1 <- tcltk::tkProgressBar(title = 'PB', label = 'PB', min = 0, max = pred, initial = 0)
    for (index in seq(pred)){
      tcltk::setTkProgressBar(pb = pb1, value = index)
      Sys.sleep(1)
    }
  }, args = list(pred))
}
df <- data.frame(matrix(nrow = 200, ncol = 3)) # dummy data
estimated_progress(df = df, add_time = FALSE)
When I do this, the progress bar opens in a new window as expected.
It keeps going for the next 1-3 function calls (for example invisible(pbapply::pblapply(1:200000, function(x) x**3))), but any more than that and estimated_progress() aborts.
What am I missing here? I am sure it's quite obvious, and I have read that callr can work asynchronously (see here), but I can't make it work.
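No answer is recorded here, but one plausible explanation (my assumption, not confirmed in the thread) is that callr::r_bg() returns a processx process handle and the background session is killed once that handle is garbage-collected, which would match the progress bar surviving only a few more calls. A minimal sketch that keeps the handle in a variable so the process stays alive:
df <- data.frame(matrix(nrow = 200, ncol = 3))            # dummy data
pb_proc <- estimated_progress(df = df, add_time = FALSE)  # keep the returned handle alive
invisible(pbapply::pblapply(1:200000, function(x) x^3))   # do the real work
pb_proc$is_alive()                                        # processx method: is the background session still running?
pb_proc$kill()                                            # stop the progress bar once finished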
Every time I open a new session in RStudio, I'm greeted with the error message:
Error: C stack usage 7953936 is too close to the limit
Based on suggestions for similar issues posted here and here, I tried using the ulimit command in terminal, but get the following error.
Isabels-MacBook-Pro ~ % ulimit -s
8176
Isabels-MacBook-Pro ~ % R --slave -e 'Cstack_info()["size"]'
Error: C stack usage 7954496 is too close to the limit
Execution halted
Yet, when I run ulimit on its own, I get:
Isabels-MacBook-Pro ~ % ulimit
unlimited
Just to double-check, I try setting it to unlimited again:
Isabels-MacBook-Pro ~ % ulimit -s unlimited
but then get a new error:
Isabels-MacBook-Pro ~ % R --slave -e 'Cstack_info()["size"]'
Error: evaluation nested too deeply: infinite recursion / options(expressions=)?
Execution halted
I have no clue what this means in this context. Is Cstack_info() the bit getting stuck on infinite recursion? I'd love to get this figured out, as it's getting in the way of installing some necessary packages!
In case it's helpful, here's my session info
R version 4.1.3 (2022-03-10)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Monterey 12.2.1
And contents of .Rprofile
# REMEMBER to restart R after you modify and save this file!
# First, execute the global .Rprofile if it exists. You may configure blogdown
# options there, too, so they apply to any blogdown projects. Feel free to
# ignore this part if it sounds too complicated to you.
if (file.exists("~/.Rprofile")) {
  base::sys.source("~/.Rprofile", envir = environment())
}
# Now set options to customize the behavior of blogdown for this project. Below
# are a few sample options; for more options, see
# https://bookdown.org/yihui/blogdown/global-options.html
options(
  # to automatically serve the site on RStudio startup, set this option to TRUE
  blogdown.serve_site.startup = FALSE,
  # to disable knitting Rmd files on save, set this option to FALSE
  blogdown.knit.on_save = TRUE,
  # build .Rmd to .html (via Pandoc); to build to Markdown, set this option to 'markdown'
  blogdown.method = 'html'
)
# fix Hugo version
options(blogdown.hugo.version = "0.82.0")
Here are the contents of /Library/Frameworks/R.framework/Resources/library/base/R/Rprofile:
### This is the system Rprofile file. It is always run on startup.
### Additional commands can be placed in site or user Rprofile files
### (see ?Rprofile).
### Copyright (C) 1995-2020 The R Core Team
### Notice that it is a bad idea to use this file as a template for
### personal startup files, since things will be executed twice and in
### the wrong environment (user profiles are run in .GlobalEnv).
.GlobalEnv <- globalenv()
attach(NULL, name = "Autoloads")
.AutoloadEnv <- as.environment(2)
assign(".Autoloaded", NULL, envir = .AutoloadEnv)
T <- TRUE
F <- FALSE
R.version <- structure(R.Version(), class = "simple.list")
version <- R.version # for S compatibility
## for backwards compatibility only
R.version.string <- R.version$version.string
## NOTA BENE: options() for non-base package functionality are in places like
## --------- ../utils/R/zzz.R
options(keep.source = interactive())
options(warn = 0)
# options(repos = c(CRAN="#CRAN#"))
# options(BIOC = "http://www.bioconductor.org")
## setting from an env variable added in 4.0.2
local({to <- as.integer(Sys.getenv("R_DEFAULT_INTERNET_TIMEOUT", 60))
if (is.na(to) || to <= 0) to <- 60L
options(timeout = to)
})
options(encoding = "native.enc")
options(show.error.messages = TRUE)
## keep in sync with PrintDefaults() in ../../main/print.c :
options(scipen = 0)
options(max.print = 99999)# max. #{entries} in internal printMatrix()
options(add.smooth = TRUE)# currently only used in 'plot.lm'
if(isFALSE(as.logical(Sys.getenv("_R_OPTIONS_STRINGS_AS_FACTORS_",
"FALSE")))) {
options(stringsAsFactors = FALSE)
} else {
options(stringsAsFactors = TRUE)
}
if(!interactive() && is.null(getOption("showErrorCalls")))
options(showErrorCalls = TRUE)
local({dp <- Sys.getenv("R_DEFAULT_PACKAGES")
if(identical(dp, "")) ## it fact methods is done first
dp <- c("datasets", "utils", "grDevices", "graphics",
"stats", "methods")
else if(identical(dp, "NULL")) dp <- character(0)
else dp <- strsplit(dp, ",")[[1]]
dp <- sub("[[:blank:]]*([[:alnum:]]+)", "\\1", dp) # strip whitespace
options(defaultPackages = dp)
})
## Expand R_LIBS_* environment variables.
Sys.setenv(R_LIBS_SITE =
.expand_R_libs_env_var(Sys.getenv("R_LIBS_SITE")))
Sys.setenv(R_LIBS_USER =
.expand_R_libs_env_var(Sys.getenv("R_LIBS_USER")))
local({
if(nzchar(tl <- Sys.getenv("R_SESSION_TIME_LIMIT_CPU")))
setSessionTimeLimit(cpu = tl)
if(nzchar(tl <- Sys.getenv("R_SESSION_TIME_LIMIT_ELAPSED")))
setSessionTimeLimit(elapsed = tl)
})
.First.sys <- function()
{
for(pkg in getOption("defaultPackages")) {
res <- require(pkg, quietly = TRUE, warn.conflicts = FALSE,
character.only = TRUE)
if(!res)
warning(gettextf('package %s in options("defaultPackages") was not found', sQuote(pkg)),
        call. = FALSE, domain = NA)
}
}
## called at C level in the startup process prior to .First.sys
.OptRequireMethods <- function()
{
pkg <- "methods" # done this way to avoid R CMD check warning
if(pkg %in% getOption("defaultPackages"))
if(!require(pkg, quietly = TRUE, warn.conflicts = FALSE,
character.only = TRUE))
warning('package "methods" in options("defaultPackages") was not found',
call. = FALSE)
}
if(nzchar(Sys.getenv("R_BATCH"))) {
.Last.sys <- function()
{
cat("> proc.time()\n")
print(proc.time())
}
## avoid passing on to spawned R processes
## A system has been reported without Sys.unsetenv, so try this
try(Sys.setenv(R_BATCH=""))
}
local({
    if(nzchar(rv <- Sys.getenv("_R_RNG_VERSION_")))
        suppressWarnings(RNGversion(rv))
})
.sys.timezone <- NA_character_
.First <- NULL
.Last <- NULL
###-*- R -*- Unix Specific ----
.Library <- file.path(R.home(), "library")
.Library.site <- Sys.getenv("R_LIBS_SITE")
.Library.site <- if(!nzchar(.Library.site)) file.path(R.home(), "site-library") else unlist(strsplit(.Library.site, ":"))
.Library.site <- .Library.site[file.exists(.Library.site)]
invisible(.libPaths(c(unlist(strsplit(Sys.getenv("R_LIBS"), ":")),
unlist(strsplit(Sys.getenv("R_LIBS_USER"), ":")
))))
local({
popath <- Sys.getenv("R_TRANSLATIONS", "")
if(!nzchar(popath)) {
paths <- file.path(.libPaths(), "translations", "DESCRIPTION")
popath <- dirname(paths[file.exists(paths)][1])
}
bindtextdomain("R", popath)
bindtextdomain("R-base", popath)
assign(".popath", popath, .BaseNamespaceEnv)
})
local({
## we distinguish between R_PAPERSIZE as set by the user and by configure
papersize <- Sys.getenv("R_PAPERSIZE_USER")
if(!nchar(papersize)) {
lcpaper <- Sys.getlocale("LC_PAPER") # might be null: OK as nchar is 0
papersize <- if(nchar(lcpaper))
if(length(grep("(_US|_CA)", lcpaper))) "letter" else "a4"
else Sys.getenv("R_PAPERSIZE")
}
options(papersize = papersize,
printcmd = Sys.getenv("R_PRINTCMD"),
dvipscmd = Sys.getenv("DVIPS", "dvips"),
texi2dvi = Sys.getenv("R_TEXI2DVICMD"),
browser = Sys.getenv("R_BROWSER"),
pager = file.path(R.home(), "bin", "pager"),
pdfviewer = Sys.getenv("R_PDFVIEWER"),
useFancyQuotes = TRUE)
})
## non standard settings for the R.app GUI of the macOS port
if(.Platform$GUI == "AQUA") {
## this is set to let RAqua use both X11 device and X11/TclTk
if (Sys.getenv("DISPLAY") == "")
Sys.setenv("DISPLAY" = ":0")
## this is to allow gfortran compiler to work
Sys.setenv("PATH" = paste(Sys.getenv("PATH"),":/usr/local/bin",sep = ""))
}## end "Aqua"
## de-dupe the environment on macOS (bug in Yosemite which affects things like PATH)
if (grepl("^darwin", R.version$os)) local({
## we have to de-dupe one at a time and re-check since the bug affects how
## environment modifications propagate
while(length(dupes <- names(Sys.getenv())[table(names(Sys.getenv())) > 1])) {
env <- dupes[1]
value <- Sys.getenv(env)
Sys.unsetenv(env) ## removes the dupes, good
.Internal(Sys.setenv(env, value)) ## wrapper requries named vector, a pain, hence internal
}
})
local({
tests_startup <- Sys.getenv("R_TESTS")
if(nzchar(tests_startup)) source(tests_startup)
})
Is there anything glaring here that could be causing the issue?
Your user .Rprofile file is loading itself recursively for some reason:
if (file.exists("~/.Rprofile")) {
base::sys.source("~/.Rprofile", envir = environment())
}
From your comments it seems that these lines are inside ~/.Rprofile (~ expands to the user home directory, i.e. /Users/mycomputer in your case, assuming mycomputer is your user name).
Delete these lines (or comment them out); they don't belong here. In fact, the file looks like it's a template for a project-specific .Rprofile configuration. It would make sense inside a project directory, but not as the user-wide ~/.Rprofile.
The logic for these files is as follows:
If there is an .Rprofile file in the current directory, R attempts to load that.
Otherwise, if the environment variable R_PROFILE_USER is set to the path of a file, R attempts to load this file.
Otherwise, if the file ~/.Rprofile exists, R attempts to load that.
Now, this implies that ~/.Rprofile is not loaded automatically if a project-specific (= in the current working directory) .Rprofile exists. This is unfortunate, so many projects add lines similar to the above to their project-specific .Rprofile files to cause the user-wide ~/.Rprofile to be loaded as well. However, the above implementation ignores the R_PROFILE_USER environment variable. A better implementation would therefore look as follows:
rprofile = Sys.getenv('R_PROFILE_USER', '~/.Rprofile')
if (file.exists(rprofile)) {
  base::sys.source(rprofile, envir = environment())
}
rm(rprofile)
Success! Thank you to everyone in the comments. The issue was resolved by deleting /Library/Frameworks/R.framework/Resources/library/base/R/Rprofile and re-installing R and RStudio.
I am encountering some odd drake behaviour which I just can't figure out. I am trying to add an .Rmd to my drake plan. I am working on a remote machine AND on a network drive on that machine. If I try to add an .Rmd file to my plan like this:
> library(drake)
> library(rmarkdown)
>
> list.files()
[1] "drake_testing.Rproj" "foo.png" "report.Rmd"
>
> plan <- drake_plan(
+ png("foo.png"),
+ plot(iris$Sepal.Length ~ iris$Sepal.Width),
+ dev.off(),
+ report = render(
+ input = knitr_in("report.Rmd"),
+ output_file = "report.html",
+ quiet = TRUE
+ )
+
+ )
>
> plan
# A tibble: 4 x 2
target command
<chr> <expr>
1 drake_target_1 png("foo.png")
2 drake_target_2 plot(iris$Sepal.Length ~ iris$Sepal.Width)
3 drake_target_3 dev.off()
4 report render(input = knitr_in("report.Rmd"), output_file = "report.html", quiet = TRUE)
>
> ## Turn your plan into a set of instructions
> config <- drake_config(plan)
Error: The specified file is not readable: report.Rmd
>
> traceback()
13: stop(txt, obj, call. = FALSE)
12: .errorhandler("The specified file is not readable: ", object,
mode = errormode)
11: digest::digest(object = file, algo = config$hash_algorithm, file = TRUE,
serialize = FALSE)
10: rehash_file(file, config)
9: rehash_storage(target = target, file = file, config = config)
8: FUN(X[[i]], ...)
7: lapply(X = X, FUN = FUN, ...)
6: weak_mclapply(X = keys, FUN = FUN, mc.cores = jobs, ...)
5: lightly_parallelize_atomic(X = X, FUN = FUN, jobs = jobs, ...)
4: lightly_parallelize(X = knitr_files, FUN = storage_hash, jobs = config$jobs,
config = config)
3: cdl_get_knitr_hash(config)
2: create_drake_layout(plan = plan, envir = envir, verbose = verbose,
jobs = jobs_preprocess, console_log_file = console_log_file,
trigger = trigger, cache = cache)
1: drake_config(plan)
I have tried the following permutations to make this work:
Move the .Rmd to the local drive and call it with the full path to it
Add file.path inside and outside of knitr_in to build a full path
Try using file_in for each of the scenarios above
I have also tried debugging, but I get a little lost when drake turns the file name into a hash and then back into the basename of the file (i.e. report.Rmd). The error ultimately happens when digest::digest is called.
Does anyone have experience attempting to figure out something like this?
I think the answer depends on whether you get the same error when you call digest("report.Rmd", file = TRUE) on its own outside drake_config(plan). If it errors (which I am betting it does) there may be something strange about your file system that clashes with R. If that is the case, then there is unfortunately nothing drake can do.
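For reference, that standalone check (run from the same working directory as the plan) might look like this:
library(digest)
# If this call fails with the same "not readable" error, the problem is the
# network drive / file permissions rather than drake itself.
digest("report.Rmd", file = TRUE)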
I also suggest some changes to your plan:
plan <- drake_plan(
  plot_step = {
    png(file_out("foo.png"))
    plot(iris$Sepal.Length ~ iris$Sepal.Width)
    dev.off()
  },
  report = render(
    input = knitr_in("report.Rmd"),
    output_file = "report.html",
    quiet = TRUE
  )
)
Or better yet, compartmentalize your work in reusable functions:
plot_foo = function(filename) {
  png(filename)
  plot(iris$Sepal.Length ~ iris$Sepal.Width)
  dev.off()
}
plan <- drake_plan(
  foo = plot_foo(file_out("foo.png")),
  report = render(
    input = knitr_in("report.Rmd"),
    output_file = "report.html",
    quiet = TRUE
  )
)
A target is a skippable workflow step with a meaningful return value and/or output file(s). png() and dev.off() are part of the plotting step, and file_out() tells drake to watch foo.png for changes. Also, it is good practice to name your targets. Usually, the return values of targets are meaningful, just like variables in R.
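Once the plan is defined as above, building it is just drake's standard workflow; a minimal sketch:
library(drake)
make(plan)     # builds foo.png and report.html; up-to-date targets are skipped on later runs
readd(report)  # read a target's return value back from the cache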
I am trying to make a gif out of an R script, using a function to generate the images.
I have a function that, given some information, creates a map with dots on it.
I apply this function to a vector, obtaining a series of different images, and I would like to put them together in a gif. It looks more or less like this:
createMap <- function(my_variable){
  my_map <- a_map() + geom_point()  # some variable missing
  png(filename = paste(aDate, ".png", sep = ""), width = 3149, height = 2183, units = "px")
  plot(my_map)
  dev.off()
}
ImageMagick is installed on my PC, and so is the conversion file "converter.exe". Later I try to generate the gif using
saveGIF({
  lapply(my_vector, createMap)
}, movie.name = "MY_GIF.gif")
but I get an error message:
> convert: improper image header `Rplot1.png' #
> error/png.c/ReadPNGImage/4362. convert: no images defined `MY_GIF.gif'
> # error/convert.c/ConvertImageCommand/3254.
an error occurred in the conversion...
Does anybody know what I did wrong?
After creating the map PNG files, use the code below. You don't need ImageMagick installed on your PC.
library(magick)
# adjust to the number of files to be read
png.files <- sprintf("Rplot%02d.png", 1:10)

# function to read, animate and convert the files to a gif
GIF.convert <- function(x, output = "animation.gif") {
  image_read(x) %>%
    image_animate(fps = 1) %>%
    image_write(output)
}

GIF.convert(png.files)
For more details check this link: Link
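If the number of plot files is not fixed, an alternative (not part of the original answer) is to pick them up by pattern instead of hard-coding the count:
# every Rplot*.png in the working directory, in alphabetical order
png.files <- list.files(pattern = "^Rplot[0-9]+\\.png$", full.names = TRUE)
GIF.convert(png.files)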
I used R to create a series of images that took a long time to run. I'd like to use the animation package to make a video from them without re-running the analysis.
I can't find an example using existing images from file. The closest is Yihui Xie's demo('flowers') to create an HTML animation. I changed his code and can successfully make an mp4 of the flower images but I'm not sure how to access images already on file.
Based on his code, it should be something like this:
library(animation)
oopts = if (.Platform$OS.type == "windows") {
  ani.options(ffmpeg = "C:/Software/ffmpeg/ffmpeg-20151015-git-0418541-win64-static/bin/ffmpeg.exe")
}
#My list of images from disk
extList = list.files(myDir, pattern='.jpg', full.names=T)
saveVideo({
  for (i in 1:length(extList)) {
    # Yihui Xie's example downloads jpegs from web
    # This code works to make an mp4 but I want to use images from disk
    # extList = c('http://i.imgur.com/rJ7xF.jpg',
    #             'http://i.imgur.com/Lyr9o.jpg',
    #             'http://i.imgur.com/18Qrb.jpg')
    # download.file(url = extList[i], destfile = sprintf(ani.options('img.fmt'), i), mode = 'wb')
    someFunctionToAccessImage(extList[i])
  }
}, video.name = 'notFlowers.mp4',
   use.dev = FALSE,
   ani.type = 'jpg',
   interval = 2,
   single.opts = "'dwellMultiplier': 1")
Bonus question - Can I do this with PNGs or other image types?
I found there to be a couple of problems with this. saveVideo uses a temporary directory to process the files and make the movie. Also, the postfix it was adding to the image name wasn't working correctly. So, here is a way to do it where you copy the images from the folder you have them stored in into the temporary directory used by saveVideo. The tricky part is finding the path to that directory, which is done using sys.frame from a function defined in the expression.
Note: another possible option could be to manually copy the images to the temporary folder that you know saveVideo will use (it will call tempdir()), or redefine tempdir() to return the path to your current images, but I haven't tested that.
library(animation)
oopts = if (.Platform$OS.type == "windows")
ani.options(ffmpeg = "C:\\home\\src\\ffmpeg-20151017-git-e9299df-win64-static\\bin\\ffmpeg.exe")
## Some variables
dirPath <- normalizePath("images/") # path to folder containing images
postfix <- "%03d" # I created my files with "Rplot%03d"
## Make the animation
saveVideo({
  ## Need to retrieve some variables from environments up the call stack
  env.info <- (function() { list(wd = sys.frame(-1)$owd, fmt = sys.frame(-2)$img.fmt,
                                 e = sys.frame(-2)) })()
  postfix <- postfix
  img.fmt <- gsub("%d", postfix, env.info$fmt, fixed = TRUE)
  assign('img.fmt', img.fmt, envir = env.info$e)
  file.copy(list.files(dirPath, full.names = TRUE), to = env.info$wd, overwrite = TRUE)
}, video.name = 'heatBalls.mp4', img.name = 'Rplot', interval = .05, use.dev = FALSE)
Full example
## Random function to save some images to disk
circ <- function(x,y,r) { s <- seq(-pi,pi,len=30); data.frame(x=x+r*cos(s), y=y+r*sin(s)) }
imgFun <- function(n, ncircs, dirPath) {
  if (!require(scales)) stop("install scales package")
  rads <- runif(ncircs, 0.5, 3)
  xs <- runif(ncircs, 0.1 + rads, 19.9 - rads)
  ys <- runif(ncircs, 0.1 + rads, 19.9 - rads)
  vs <- matrix(runif(ncircs*2), 2)
  cols <- colorRampPalette(c('lightblue', 'darkblue'), alpha = 0.3)(ncircs)
  png(file.path(dirPath, 'Rplot%03d.png'))
  for (i in seq_len(n)) {
    image(x = seq(0, 20, length = 20), y = seq(0, 20, length = 20),
          z = matrix(rnorm(400), 20), col = heat.colors(20, alpha = 0.6), xlab = '', ylab = '')
    for (j in 1:ncircs) polygon(x = circ(xs[j], ys[j], rads[j]), col = alpha(cols[j], 0.7))
    condx <- (xs + rads) > 20 | (xs - rads) < 0
    condy <- (ys + rads) > 20 | (ys - rads) < 0
    vs[1, condx] <- -vs[1, condx]
    vs[2, condy] <- -vs[2, condy]
    xs <- xs + vs[1, ]
    ys <- ys + vs[2, ]
  }
  dev.off()
}
## Create some images on disk in a folder called "images"
dirPath <- normalizePath("images/")
dir.create(dirPath)
imgFun(50, 18, dirPath)
Then run the above code and a movie (heatBalls.mp4) of the bouncing circles should be produced.