Let's say I have the following HTML viewed in the Viewer Pane:
tempDir <- tempfile()
dir.create(tempDir)
htmlFile <- file.path(tempDir, "index.html")
write('<h1> Content</h1>', htmlFile, append = TRUE)
write('<h2> Content</h2>', htmlFile, append = TRUE)
write('lorem ipsum...', htmlFile, append = TRUE)
viewer <- getOption("viewer")
viewer(htmlFile)
When I have this HTML in the Viewer Pane, I can click on the "Save as image" button and get the HTML content as a PNG.
Is there a way to do this with the command line? I know about rstudioapi::savePlotAsImage(), so I'm looking for a kind of saveViewerAsImage.
Edit: I know we can do this with the {webshot} package, but I'm looking for the RStudio function that does that.
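For reference, the {webshot} route is essentially a one-liner. A minimal sketch, assuming the htmlFile from the snippet above (PhantomJS must be installed once via webshot::install_phantomjs()):
library(webshot)
# render the local HTML file and save the result as a PNG
webshot(paste0("file://", htmlFile), file = "viewer.png")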
Here's a proposal. The strategy is the following:
let the viewer build the png
send the png from the viewer to R
Let the viewer build the png
A canvas element possesses a .toDataURL() method that returns a data URI containing a representation of the image in PNG format (a JPEG format is also available).
The html2canvas library can be used to take a screenshot: this library renders the current page as a canvas image.
So, one can combine these two functions in the viewer:
take a screenshot with html2canvas
transform this screenshot to png using .toDataURL()
However, the html2canvas library uses JavaScript Promises, which are not supported by the Windows version of the RStudio viewer: a polyfill is required.
Send the png from the viewer to R
This task can be achieved using WebSockets.
The httpuv package can be used to create a webserver. This server will serve a HTML page that will be opened in the RStudio viewer.
A WebSocket communication is established between the httpuv server and the RStudio viewer.
From the R command line, one can send a WebSocket message to the RStudio viewer: on receiving this message, the viewer takes the screenshot and sends it back to the server.
The code
I'm sorry, this code is quite long for a SO answer.
library(httpuv)
# Initialize variables
png <- NULL
websocket <- NULL
# Download JavaScript libraries
polyfill_promise <- readLines('https://cdn.jsdelivr.net/npm/es6-promise/dist/es6-promise.auto.min.js')
html2canvas <- readLines('https://html2canvas.hertzen.com/dist/html2canvas.min.js')
# Configure the httpuv server
app <- list(
  call = function(req) {
    list(
      status = 200L,
      headers = list(
        'Content-Type' = 'text/html'
      ),
      body = paste0(collapse = "\r\n",
        c("<!DOCTYPE html>",
          "<html>",
          "<head>",
          # polyfill the RStudio viewer to support JavaScript Promises
          '<script type="text/javascript">',
          polyfill_promise,
          "</script>",
          # use the html2canvas library
          '<script type="text/javascript">',
          html2canvas,
          "</script>",
          "</head>",
          "<body>",
          html_body, # defined below; call() only runs once a request comes in
          "</body>",
          '<script type="text/javascript">',
          # configure the client-side websocket connection:
          'var ws = new WebSocket("ws://" + location.host);',
          # when a websocket message is received:
          "ws.onmessage = function(event) {",
          # take a screenshot of the HTML body element
          "  html2canvas(document.body).then(function(canvas) {",
          # transform it to PNG
          "    var dataURL = canvas.toDataURL();",
          # send it back to the server
          "    ws.send(dataURL);",
          "  });",
          "};",
          "</script>",
          "</html>"
        )
      )
    )
  },
  # configure the server-side websocket connection
  onWSOpen = function(ws) {
    # store the socket so that messages can be sent from the R command line:
    websocket <<- ws
    # when a websocket message is received from the client:
    ws$onMessage(function(binary, message) {
      png <<- message
    })
  }
)
# From your question:
html_body <- c(
  '<h1> Content</h1>',
  '<h2> Content</h2>',
  'lorem ipsum...'
)
# Start the server:
server <- startDaemonizedServer("0.0.0.0", 9454, app)
# Open the RStudio viewer:
rstudioapi::viewer("http://localhost:9454")
# Wait to see the result...
# Send a websocket message from the command line:
websocket$send("go") # send any message
# Write the png image to disk:
writeBin(
  RCurl::base64Decode(
    gsub("data:image/png;base64,", "", png),
    "raw"
  ),
  "screenshot.png"
)
# Close the websocket connection
websocket$close()
# Stop the server
stopDaemonizedServer(server)
Related
I'm getting "Bad image data." when using the Batch image annotation offline feature for Image Properties Detection on Google Cloud Vision when the image extension is .webp.
Now, according to the documentation, the .webp extension is supported, and indeed, when I tried the regular single-image synchronous request for the same image, it returned results normally, so this seems to be an issue with the batch asynchronous feature. Is there a way to get around this?
Sample code:
import uuid
from google.cloud import storage, vision

vision_client = vision.ImageAnnotatorClient()
storage_client = storage.Client("your-GCP-project-here")
base_gs_url = 'path-to-store-image-on-GCS'
url = 'gs://platform-storage-prod/image/f32d82295d744974b15ecfbcba7eb707/73a2d4f550b04fc3a1b097cc3a9b9428.webp'  # doesn't work
source = {'image_uri': url}
image = {'source': source}
features = [{'type_': vision.Feature.Type.IMAGE_PROPERTIES}]
requests = [{'image': image, 'features': features}]
output_folder = base_gs_url + uuid.uuid4().hex + '/'
gcs_destination = {'uri': output_folder}
batch_size = 100
output_config = {'gcs_destination': gcs_destination, 'batch_size': batch_size}
operation = vision_client.async_batch_annotate_images(requests=requests, output_config=output_config)
response = operation.result(90)
Results will be in the .json file created at the specified path.
I would like to expose an endpoint that accepts multipart/form-data, parses the multipart content and returns a csv-file. (the multipart input contains a csv dataset and processing instructions)
I’ve done this with plumber using Rook::Multipart$parse() as suggested here. Because plumber doesn’t support parallel requests, I’d like to re-implement this with RestRserve. The following won’t work – plumber’s inputs are of class environment (which Rook::Multipart$parse() assumes), whereas RestRserve’s inputs are of the R6 class Request.
application = Application$new(content_type = "text/plain")
application$add_post("/echo", function(req, res) {
  multipart <- Rook::Multipart$parse(req$body)
  dta <- read_csv(multipart$dta$tempfile, trim_ws = FALSE)
  res$set_body(dta)
})
Any ideas on how to get multipart/form-data input to work with RestRserve?
RestRserve parses the multipart body while processing the incoming request. As a result, you have the raw body in request$body and the metadata in request$files. The Request object also provides a get_file method to extract the body content. Let me show an example of the app and a request:
# load packages
library(readr)
library(callr)
library(httr)

# run RestRserve in the background
ps <- r_bg(function() {
  library(RestRserve)
  library(readr)
  app = Application$new(content_type = "text/plain")
  app$add_post(
    path = "/echo",
    FUN = function(request, response) {
      # for debug
      str(request$body)
      str(request$files)
      # extract the multipart body field
      cnt <- request$get_file("csv") # 'csv' from the upload form field
      # parse CSV
      dt <- read_csv(cnt)
      # for debug
      str(dt)
      # do something with dt
      identity(dt)
      # write the result to a temp file
      tmp <- tempfile()
      write_csv(dt, tmp)
      # set the output body
      response$set_body(c(tmpfile = tmp))
      # or simply response$set_body(format_csv(dt))
    }
  )
  backend = BackendRserve$new()
  backend$start(app, http_port = 65080)
})

# wait for the server to come up
Sys.sleep(2L)
# check that it is alive
ps$is_alive()

# prepare a CSV to upload
tmp <- tempfile()
write_csv(head(iris, 5), tmp)

# POST request with the file
rs <- POST(
  url = "http://127.0.0.1:65080/echo",
  body = list(csv = upload_file(tmp)),
  encode = "multipart"
)

# get the response content
cat(content(rs))

# read the log from the RestRserve process
cat(ps$read_output())

# kill the background process
ps$kill()
See ?Request for more details about fields and methods in this class.
I am using the svSocket package in R to create a socket server. I have successfully created the server using startSocketServer(...). I am able to connect my application to the server and send data from the server to the application. But I am struggling with reading messages sent by the application. I couldn't find any example of that on the internet. I found only the processSocket(...) example in the documentation of svSocket (see below), which describes the function that processes a command coming from the socket. But I only want to read socket messages coming to the server in a repeat block and print them on the screen for testing.
## Not run:
# ## A simple REPL (R eval/process loop) using basic features of processSocket()
# repl <- function ()
# {
# pars <- parSocket("repl", "", bare = FALSE) # Parameterize the loop
# cat("Enter R code, hit <CTRL-C> or <ESC> to exit\n> ") # First prompt
# repeat {
# entry <- readLines(n = 1) # Read a line of entry
# if (entry == "") entry <- "<<<esc>>>" # Exit from multiline mode
# cat(processSocket(entry, "repl", "")) # Process the entry
# }
# }
# repl()
# ## End(Not run)
Thx for your input.
EDIT:
Here is a more specific example of socket server creation and message sending:
require(svSocket)

# start the server
svSocket::startSocketServer(
  port = 9999,
  server.name = "test_server",
  procfun = processSocket,
  secure = FALSE,
  local = FALSE
)

# test calls
svSocket::getSocketClients(port = 9999)      # ip and port of the connected client
svSocket::getSocketClientsNames(port = 9999) # name of the connected client
svSocket::getSocketServerName(port = 9999)   # name of the socket server given during creation
svSocket::getSocketServers()                 # server name and port

# send a message to the client
svSocket::sendSocketClients(
  text = "send this message to the client",
  sockets = svSocket::getSocketClientsNames(port = 9999),
  serverport = 9999
)
... and the output of the code above is:
> require(svSocket)
>
> #start server
> svSocket::startSocketServer(
+ port = 9999,
+ server.name = "test_server",
+ procfun = processSocket,
+ secure = FALSE,
+ local = FALSE
+ )
[1] TRUE
>
> #test calls
> svSocket::getSocketClients(port = 9999) #ip and port of client connected
sock0000000005C576B0
"192.168.2.1:55427"
> svSocket::getSocketClientsNames(port = 9999) #name of client connected
[1] "sock0000000005C576B0"
> svSocket::getSocketServerName(port = 9999) #name of socket server given during creation
[1] "test_server"
> svSocket::getSocketServers() #server name and port
test_server
9999
>
> #send message to client
> svSocket::sendSocketClients(
+ text = "send this message to the client",
+ sockets = svSocket::getSocketClientsNames(port = 9999),
+ serverport = 9999
+ )
>
What you can see is:
successful creation of the socket server
successful connection of the external client sock0000000005C576B0 (192.168.2.1:55427) to the server
successful sending of a message to the client (no explicit output appears in the console here, but the client reacts as expected)
What I am still not able to implement is fetching client messages sent to the server. Could somebody provide an example of that?
For interaction with the server from the client side, see ?evalServer.
Otherwise, it is your processSocket() function (either the default one or a custom function you provide) that is the entry point triggered when the server gets some data from a connected client. From there, you have two possibilities:
The simplest one is just to use the default processSocket() function. Besides some special code between <<<>>>, which is interpreted as special commands, the default version evaluates R code on the server side. So, just call the function you want on the server. For instance, define f <- function(txt) paste("Fake process ", txt) on the server, and call evalServer(con, "f('some text')") on the client. Your custom f() function is executed on the server. Just take care that you need to double-quote expressions that contain text here (see the first sketch after this list).
An alternate solution is to define your own processSocket() function to capture messages sent by the client to the server at an earlier stage. This is safer for a server that needs to process a limited number of message types without parsing and evaluating R code received from the client (see the second sketch below).
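To make both options concrete, here are two minimal sketches. They build on the question's setup (server on port 9999) and are assumptions rather than code from the original answer. First, the default-processSocket route:
# server side (the session that ran startSocketServer on port 9999):
f <- function(txt) paste("Fake process", txt)
# client side (a second R session):
con <- socketConnection(host = "localhost", port = 9999)
svSocket::evalServer(con, "f('some text')") # evaluated on the server
close(con)
Second, a custom procfun that only logs incoming messages; the signature (msg, socket, serverport) follows ?processSocket:
# print every incoming client message instead of evaluating it as R code
myProcFun <- function(msg, socket, serverport, ...) {
  cat("Client", socket, "on port", serverport, "sent:", msg, "\n")
  "" # the returned string is sent back to the client
}
svSocket::startSocketServer(port = 9999, procfun = myProcFun)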
Now, the server is asynchronous, meaning that you still have the prompt available on the server while it is listening to client(s) and processing their requests.
I have a shiny app that takes a JSON input file, runs it through a classifier and returns a classified JSON object.
I want the app to be able to communicate with an API. I want the API to post a file to the Shiny App which will do its work and return a classified object.
Basically I want the Shiny app to sit in the background until a file is posted and then do its work.
I know that I can use GET from the httr package to get a file from a URL. I can put this in the Shiny server file, which is fine if I know the file name for the GET command.
However, the filenames coming from the API will be different, so is there any way that I can make this dynamic according to the POST request that comes from the API?
If you do not have to use Shiny, you can use OpenCPU. OpenCPU automatically exposes each of your R packages as a REST service. I work with OpenCPU and it works fine! It is the easiest way to use R from another program.
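For illustration, a minimal sketch of calling a packaged R function through OpenCPU's HTTP API; the public demo server and the stats::rnorm endpoint are my own choices here:
library(httr)
# POST /ocpu/library/{package}/R/{function}/json evaluates the function
# on the server and returns its result as JSON
r <- POST(
  "https://cloud.opencpu.org/ocpu/library/stats/R/rnorm/json",
  body = list(n = 5),
  encode = "json"
)
content(r) # the five random numbers, parsed from the JSON response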
By now, library(plumber) needs to be mentioned as an alternative in this context; the following example, however, shows how to handle POST requests directly in shiny.
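For comparison, a minimal plumber sketch; the /classify endpoint name and the classifier placeholder are hypothetical:
# save as plumber.R and run with: plumber::pr_run(plumber::pr("plumber.R"))
#* Accept a posted JSON body and return the (classified) object
#* @post /classify
function(req) {
  input <- jsonlite::fromJSON(req$postBody)
  # ... run the classifier on 'input' here ...
  input # serialized back to JSON in the response
}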
The shiny example is based on Joe Cheng's gist here, which suggests adding an attribute "http_methods_supported" to the UI and using httpResponse to answer the requests.
The code below starts a shiny app in a background R process (this is done only to have a single-file MRE; of course, you can put the app in a separate file and remove the r_bg line). After the app is launched, the parent process sends the iris data.frame to the UI.
In the UI function, req$PATH_INFO is checked (see uiPattern = ".*"), then the numerical columns are multiplied by 10 (query_params$factor) and sent back as a JSON string.
library(shiny)
library(jsonlite)
library(callr)
library(datasets)

ui <- function(req) {
  # The `req` object is a Rook environment
  # See https://github.com/jeffreyhorner/Rook#the-environment
  if (identical(req$REQUEST_METHOD, "GET")) {
    fluidPage(
      h1("Accepting POST requests from Shiny")
    )
  } else if (identical(req$REQUEST_METHOD, "POST")) {
    # Handle the POST
    query_params <- parseQueryString(req$QUERY_STRING)
    body_bytes <- req$rook.input$read(-1)
    if (req$PATH_INFO == "/iris") {
      postedIris <- jsonlite::fromJSON(rawToChar(body_bytes))
      modifiedIris <- postedIris[sapply(postedIris, class) == "numeric"] * as.numeric(query_params$factor)
      httpResponse(
        status = 200L,
        content_type = "application/json",
        content = jsonlite::toJSON(modifiedIris, dataframe = "columns")
      )
    } else {
      httpResponse(
        status = 200L,
        content_type = "application/json",
        content = '{"status": "ok"}'
      )
    }
  }
}
attr(ui, "http_methods_supported") <- c("GET", "POST")

server <- function(input, output, session) {}

app <- shinyApp(ui, server, uiPattern = ".*")

# shiny::runApp(app, port = 80, launch.browser = FALSE, host = "0.0.0.0")
shiny_process <- r_bg(function(x) {
  shiny::runApp(x, port = 80, launch.browser = FALSE, host = "0.0.0.0")
}, args = list(x = app))

library(httr)
r <- POST(url = "http://127.0.0.1/iris?factor=10", body = iris, encode = "json", verbose())
receivedIris <- as.data.frame(fromJSON(rawToChar(r$content)))
print(receivedIris)

shiny_process$kill()
Please also check this related PR, which provides further examples (also showing how to use session$registerDataObj) and aims at a better description of the httpResponse function.
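For orientation, a hedged sketch of what session$registerDataObj usage can look like (the object name and response body are my choices; check the PR for the authoritative examples):
server <- function(input, output, session) {
  # registers 'iris' under a session-specific URL and returns that URL;
  # filterFunc answers the HTTP requests hitting it
  dataUrl <- session$registerDataObj(
    name = "iris",
    data = iris,
    filterFunc = function(data, req) {
      httpResponse(200L, "application/json", jsonlite::toJSON(data))
    }
  )
}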
I would like to use LuaSocket's HTTP module to download a large file while displaying progress in the console and later on in a GUI. The UI must never block, not even when the server is unresponsive during the transfer. Additionally, creating a worker thread to handle the download is not an option.
Here's what I got so far:
local io = io
local ltn12 = require("ltn12")
local http = require("socket.http")

local fileurl = "http://www.example.com/big_file.zip"
local fileout_path = "big_file.zip"

local file_size = 0
local file_down = 0

-- counter filter used in ltn12
function counter(chunk)
  if chunk == nil then
    return nil
  elseif chunk == "" then
    return ""
  else
    file_down = file_down + #chunk
    ui_update(file_size, file_down) -- update ui, run main ui loop etc.
    return chunk -- return the chunk unmodified
  end
end

-- first request
-- determine the file size
local r, c, h = http.request {
  method = "HEAD",
  url = fileurl
}
file_size = h["content-length"]

-- second request
-- download the file
r, c, h = http.request {
  method = "GET",
  url = fileurl,
  -- set our chain: count first, then write to file
  sink = ltn12.sink.chain(
    counter,
    ltn12.sink.file(io.open(fileout_path, "wb")) -- "wb": binary mode matters on Windows
  )
}
There are a few problems with the above, ignoring error checking and hard-coding:
It requires two HTTP requests when one would suffice (the response to a normal GET request also carries Content-Length).
If the server is unresponsive, then the UI will also be unresponsive, as the filter only gets called when there is data to process.
How could I do this making sure the UI never blocks?
There is an example of non-preemptive multithreading in Programming in Lua that uses non-blocking LuaSocket calls and coroutines to run multiple parallel downloads. It should be possible to apply the same logic to your process to avoid blocking. I can only add that you should consider calling this logic from an idle event in your GUI (if there is such a thing) to avoid getting "attempt to yield across metamethod/C-call boundary" errors.