rmarkdown::render() doesn't allow multiple users at the same time?

I have a Shiny app that renders an HTML report from an action button. Once the report is rendered, a download button appears on the screen so that the result can be downloaded. I had to create these two separate buttons because the download handler seems to have a timeout: since my Rmd file takes a while to render, it wouldn't work and threw an error on the server.
I am currently rendering my Rmd as follows:
rmarkdown::render(tempReport, output_file = tmp_file,
                  params = params,
                  envir = new.env(parent = globalenv()))
The problem is: if one user is rendering a report on the server and a second user clicks the action button at the same time, the second render only starts once the first one has finished.
Does anyone have any solutions to this?

The behavior you are observing is a result of the fact that R is single-threaded. The direct answer to your issue is that you need to implement asynchronous methods to allow multiple render() processes to run concurrently. More on this at: https://rstudio.github.io/promises/.
If you don't want to go down the asynchronous path and there are a reasonable number of possible report variants, you can pre-render the output and have the user simply open the selected output rather than rendering on-demand.
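If you do go asynchronous, a minimal sketch with the future and promises packages might look something like the following; the input ids (render_report, download_report), the report.Rmd path and the n parameter are placeholders, not your actual code:
library(shiny)
library(future)
library(promises)

plan(multisession)   # run each render in a separate R process

server <- function(input, output, session) {
  report_path <- reactiveVal(NULL)

  observeEvent(input$render_report, {
    params   <- list(n = input$n)           # whatever parameters the report needs
    out_file <- tempfile(fileext = ".html")

    rendered <- future({
      rmarkdown::render("report.Rmd",
                        output_file = out_file,
                        params = params,
                        envir = new.env(parent = globalenv()))
    })

    then(rendered,
         onFulfilled = function(path) report_path(path),   # render() returns the output path
         onRejected  = function(e) {
           showNotification(conditionMessage(e), type = "error", session = session)
         })
  })

  # The download button only copies a file that has already been rendered,
  # so it stays well inside the download handler's time limits.
  output$download_report <- downloadHandler(
    filename = "report.html",
    content  = function(file) file.copy(report_path(), file)
  )
}
With plan(multisession), each future() runs in its own R worker, so a second user's render no longer has to wait for the first one to finish and the main R process stays free to serve both sessions.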

Related

Programmatically close Data Viewer tabs in RStudio

I wanted to make a script that closes all Data Viewer tabs in RStudio (those invoked by clicking on a data object in the Environment pane, or by calling utils::View()) but keeps all the "usual" document tabs.
First, I found the rstudioapi::documentClose() function. I'm not sure it works for Data Viewer tabs: it requires a document id, which doesn't seem applicable here, since calling getActiveDocumentContext() on a Data Viewer tab returns #console.
Then there's the executeCommand('closeSourceDoc') option, which closes the current tab whether it is a Data Viewer or a standard document. I could probably use executeCommand('nextTab') to loop through all open tabs, but I can't find a way to determine whether the active tab is a Data Viewer or not...
Any hints?
The following code seems to do what you want.
Tabs <- c()
doc  <- rstudioapi::documentPath()

# Cycle through the open tabs until we come back to a document we've already
# seen. The loop assumes documentPath() returns NULL for Data Viewer tabs and
# a file path for regular source tabs, which is how the two are told apart.
while (is.null(doc) || !doc %in% Tabs) {
  if (is.null(doc)) {
    # The active tab is a Data Viewer: close it.
    rstudioapi::executeCommand('closeSourceDoc')
  }
  rstudioapi::executeCommand('nextTab')   # move on to the next open tab
  Tabs <- c(Tabs, doc)                    # remember documents already visited
  doc  <- rstudioapi::documentPath()
}

R Shiny - Run application in background and issue UI controls with code

I am writing a vignette for my Shiny application package. At the beginning of my vignette, I source a file called screenshots.R that produces nice screenshots of my application. I am producing them like so:
webshot::appshot(mypackage::run_datepicker_app(),
                 file = "man/figures/datepicker.png",
                 vwidth = 500, vheight = 200)
This works great and gives me a nice screenshot of what is, in this case, a couple of dateInput fields. However, I'd like to be able to get a screenshot of the dateInput in use (say, with the calendar selection exposed).
Is there a way to issue commands to the application object in a script so I can get screenshots of the application in use, rather than having to do it manually?
Have you tried using ShinyDriver from the shinytest package?
You can use shinytest to have a headless browser run the app, interact with it, and take screenshots programmatically. If you don't have PhantomJS installed, you'll need to run shinytest::installDependencies() before using ShinyDriver. All you need to do is point it to a directory containing a Shiny app (in my case, the folder is 'myApp').
install.packages("shinytest")
shinytest::installDependencies()
app <- shinytest::ShinyDriver$new("myApp")
app$takeScreenshot("screenshot1.png")
button <- app$findElement("#button")
button$click()
Sys.sleep(1)
app$takeScreenshot("screenshot2.png")
app$stop()
I am starting the app in a headless browser, taking a screenshot, finding the button with the id 'button', clicking it, taking another screenshot, and then closing the app. Navigate to specific elements using "#id", where id is the id you gave the Shiny input. You can pass a file path to a .png file in the takeScreenshot calls so that you can use the images elsewhere. Note that you may need to use Sys.sleep to stop the screenshots from being taken before the UI updates.
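Applied to the original dateInput question, a sketch along the same lines might look like this; the app folder and the input id "date" are assumptions:
app <- shinytest::ShinyDriver$new("myApp")

# dateInput("date", ...) renders a <div id="date"> wrapping a text <input>;
# clicking that inner input should open the calendar pop-up.
date_box <- app$findElement("#date input")
date_box$click()
Sys.sleep(1)                                 # give the calendar time to appear

app$takeScreenshot("man/figures/datepicker_open.png")
app$stop()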

Automatically reloading shiny app when error occurs

This is a follow-up for my previous question Automatically reloading shiny app when add changes.
The solution options(shiny.autoreload = TRUE) works perfectly when you want the browser to automatically pick up the changes you make to the code.
However, a potential problem occurs when you save an unfinished/corrupted file. For example, the following ui code is missing a , after the titlePanel() call:
fluidPage(
  titlePanel("Old Faithful Geyser Datass")
  sidebarLayout(...
When you save such a file, you will get an error in your browser:
ERROR: Error sourcing your_path/ui.R
The R console helps detect the problem with the missing ,. My impression was that if I fix my code and save the file, the browser should reload and show my app correctly. Unfortunately, it doesn't.
The interesting thing is that an error in the app does not terminate the connection with the browser. To confirm this, just reload the app manually using the browser's reload button (after fixing your code).
Therefore, I examined how this shiny.autoreload option works. As I expected, it checks the file modification time and then executes a reload function. The reload function then sends a message via sendMessage to the addMessageHandler:
addMessageHandler('reload', function(message) {
  window.location.reload();
});
So it seems that after fixing your code this function should be re-executed, but that doesn't happen.
To summarise, I think this can't be changed without major changes in shiny, but maybe I'm wrong. Thanks for any suggestions.
PS. You can manipulate the example code here to see the problem.
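For reference, the autoreload machinery discussed above is controlled by a small set of options; a minimal sketch (the app path, pattern and interval values here are only examples, not verified defaults):
options(
  shiny.autoreload = TRUE,                               # reload the browser when watched files change
  shiny.autoreload.pattern = ".*\\.(r|html?|js|css)$",   # which files to watch (example regex)
  shiny.autoreload.interval = 500                        # polling interval in milliseconds (example value)
)
shiny::runApp("path/to/app")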

Maxscript, backburner rendering renderElements

I have made a script that takes files from a directory and sends them to Backburner for network rendering. When I run the script it renders fine, but without the render elements: they don't show in the Backburner monitor, nor do they save.
If I open some of the files manually and send them to render with Backburner, it works fine, but not with the script?
The render element is VrayAlpha, but I don't think it matters.
This is the code I'm using:
on btnRender pressed do
(
    outputFilesDir = textModelsOut.text + "*.max"
    toRender = getFiles outputFilesDir
    man = NetRender.GetManager()
    man.connect #automatic "255.255.255.0"
    man.GetControl()
    for s in toRender do
    (
        renderModelPath = getFilenamePath s + filenameFromPath s
        job = man.newJob file:renderModelPath
        job.Submit()
    )
    man.Disconnect()
)
And this is a quote from the MAXScript documentation; it says that render element data will not be available, but the elements will be processed:
Jobs can not have maps included, and render element data will not be available for the submitted job but render elements will process correctly. These problems are present when submitting a job from a file, but not when submitting the current scene.
Anyway, my solution was to open each scene and submit it as the current scene with man.newJob(), rather than submitting from a file.
You should always include your code (or at least some of it) so that we can check it for issues and test it ourselves.
However, I usually use a struct called NetRenderAutomation, developed by Gravey.
You can find it here:
http://forums.cgsociety.org/showthread.php?f=98&t=1059510&page=1&pp=15
I haven't had any problems with it, it is fairly easy to use, and you are even allowed to modify it if you need some special features for yourself.
I hope you can use this answer.
Otherwise, feel free to post some code and I'll look into it.

How can you extend the default behavior of Tridion.Cme.Commands.Open.prototype._execute()?

I have written a GUI extension which adds an additional tab to many of the item views in the SDL Tridion CME (e.g. Component, Page, Schema, etc.). I have also written some JavaScript which loads that tab directly when the view is opened with a tab name specified in the URL.
The result is that if a page is loaded with the tab name added as follows:
http://localhost/WebUI/item.aspx?tcm=64#id=tcm:1-48-64&tab=InfoTab
Rather than the default of
http://localhost/WebUI/item.aspx?tcm=64#id=tcm:1-48-64
The Info Tab will be loaded on top, instead of the General Tab. This is performed with the following code snippet and works very well:
$evt.addEventHandler($display, "start", onDisplayStarted);

// This callback is called when any view has finished loading
function onDisplayStarted() {
    $evt.removeEventHandler($display, "start", onDisplayStarted);
    var tabname = $url.getHashParam("tab");
    if (tabname != '') {
        var tabControl = $controls.getControl($("#MasterTabControl"), "Tridion.Controls.TabControl");
        tabControl.selectItem(tabname);
    }
}
Now I would like to make a context menu item to open items and link to the tabs using my new functionality. My first thought was to construct the item URL myself and simply open a new window in my execute method. So I looked at the GUI's default implementation, Tridion.Cme.Commands.Open.prototype._execute, which is stored in the CME's navigation.js file. The code is a lot more complicated than I had anticipated, as it deals with shared items, permissions, etc.
Rather than just copying all of this code into my own function, I was wondering if there is a way to elegantly extend the existing Open.prototype._execute() function and append my "&tab=MyTab" to the $cme.Popups.OPEN_ITEM_OPTIONS.URL constant for my own functions.
Any advice would be greatly appreciated.
In the end, the Open command uses $config.getEditorUrl(item_type) to get the URL for the item view (item_type being $const.ItemType.COMPONENT, etc.). There are no extension points for this part of the functionality, but you could always try to overwrite it at your own risk.
