I have a shiny app that takes a JSON input file, runs it through a classifier and returns a classified JSON object.
I want the app to be able to communicate with an API: the API should POST a file to the Shiny app, which will do its work and return a classified object.
Basically, I want the Shiny app to sit in the background until a file is posted and then do its work.
I know that I can use GET from the httr package to fetch a file from a URL. I can put this in the Shiny server file, which is fine if I know the file name for the GET command.
However, the filenames coming from the API will differ. So is there any way to make this dynamic according to the POST request that comes from the API?
If you do not have to use Shiny, you can use OpenCPU. OpenCPU exposes each of your R packages as a REST service automatically. I work with OpenCPU and it works fine! It is the easiest way to use R from another program.
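For illustration, a minimal sketch of what an OpenCPU call looks like from R, here against the public demo server with stats::rnorm standing in for your own classifier package (swap in your package and function):
library(httr)
library(jsonlite)
# every exported function of an installed package is available at
# /ocpu/library/{package}/R/{function}; the /json postfix returns the result as JSON
res <- POST(
  "https://cloud.opencpu.org/ocpu/library/stats/R/rnorm/json",
  body = list(n = 3, mean = 10),
  encode = "json"
)
fromJSON(content(res, as = "text"))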
These days library(plumber) deserves a mention as an alternative in this context (a minimal plumber sketch follows at the end of this answer); the example below, however, shows how to handle POST requests directly in shiny.
It is based on Joe Cheng's gist here, which suggests adding an attribute "http_methods_supported" to the UI and using httpResponse to answer the requests.
The code below starts a shiny app in a background R process (this is done only to have a single-file MRE; of course, you can put the app in a separate file and remove the r_bg line). After the app is launched, the parent process sends the iris data.frame to the UI.
In the UI function, req$PATH_INFO is checked (see uiPattern = ".*"), then the numeric columns are multiplied by 10 (query_params$factor) and sent back as a JSON string.
library(shiny)
library(jsonlite)
library(callr)
library(datasets)
ui <- function(req) {
  # The `req` object is a Rook environment
  # See https://github.com/jeffreyhorner/Rook#the-environment
  if (identical(req$REQUEST_METHOD, "GET")) {
    fluidPage(
      h1("Accepting POST requests from Shiny")
    )
  } else if (identical(req$REQUEST_METHOD, "POST")) {
    # Handle the POST
    query_params <- parseQueryString(req$QUERY_STRING)
    body_bytes <- req$rook.input$read(-1)
    if (req$PATH_INFO == "/iris") {
      postedIris <- jsonlite::fromJSON(rawToChar(body_bytes))
      modifiedIris <- postedIris[sapply(iris, class) == "numeric"] * as.numeric(query_params$factor)
      httpResponse(
        status = 200L,
        content_type = "application/json",
        content = jsonlite::toJSON(modifiedIris, dataframe = "columns")
      )
    } else {
      httpResponse(
        status = 200L,
        content_type = "application/json",
        content = '{"status": "ok"}'
      )
    }
  }
}
attr(ui, "http_methods_supported") <- c("GET", "POST")
server <- function(input, output, session) {}
app <- shinyApp(ui, server, uiPattern = ".*")
# shiny::runApp(app, port = 80, launch.browser = FALSE, host = "0.0.0.0")
shiny_process <- r_bg(function(x){ shiny::runApp(x, port = 80, launch.browser = FALSE, host = "0.0.0.0") }, args = list(x = app))
library(httr)
r <- POST(url = "http://127.0.0.1/iris?factor=10", body = iris, encode = "json", verbose())
receivedIris <- as.data.frame(fromJSON(rawToChar(r$content)))
print(receivedIris)
shiny_process$kill()
Please also check this related PR, which provides further examples (also showing how to use session$registerDataObj) and aims at a better description of the httpResponse function.
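As for the plumber alternative mentioned at the top, here is a minimal sketch of the same /iris endpoint (the file name plumber.R and the port are arbitrary; req$postBody is plumber's raw request body, and query parameters map to function arguments):
# plumber.R -- a sketch mirroring the /iris endpoint of the shiny example
#* Multiply the numeric columns of a posted data.frame by `factor`
#* @post /iris
function(req, factor = 1) {
  posted <- jsonlite::fromJSON(req$postBody)
  posted[sapply(posted, is.numeric)] * as.numeric(factor)
}

# in a separate session:
# plumber::plumb("plumber.R")$run(port = 8000)
# httr::POST("http://127.0.0.1:8000/iris?factor=10", body = iris, encode = "json")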
Let me start by saying that I understand how to do a POST request using the "httr" and "crul" packages. I am working on an asynchronous method for sending multiple POST requests with unique JSON bodies using the basic "curl" package. I have legitimate reasons for trying this with this package, but more importantly I'm just determined to get it to work. This may not be possible, or I may even be trying the wrong functions in "curl"... but I wanted to see if anyone had any ideas.
I am trying to send a POST request using curl_fetch_multi() with a JSON body like this...
{
  "configuration": {
    "Id": 4507
  },
  "age": 0,
  "zip": 32411,
  "Date": "2020-12-23"
}
I have succeeded in at least getting error messages back from the API indicating an invalid body input, using something along the following lines, starting with an object containing each body I need to submit:
library(curl)
library(jsonlite)
library(magrittr)

pool <- new_pool()

# results are only available through the callback function
cb <- function(req) {
  cat("done:", req$url, ": HTTP:", req$status, "\n", "content:", rawToChar(req$content), "\n")
}

# Create a request for each body
for (i in 1:nrow(df)) {
  curl_fetch_multi(
    "http://api.com/values?api_key=1234",
    done = cb,
    pool = pool,
    handle = new_handle() %>%
      handle_setopt(post = TRUE) %>%
      handle_setheaders("Content-Type" = "application/vnd.v1+json") %>%
      handle_setform(body = df$body[[i]]) ### df$body[[i]] is a JSON string
  )
}

# This actually performs the requests
out <- multi_run(pool = pool)
done: http://api.com/values?api_key=1234 : HTTP: 400
content: {"errors":[{"code":"Service.input.invalid","message":"Invalid input"}]}
done: http://api.com/values?api_key=1234 : HTTP: 400
content: {"errors":[{"code":"Service.input.invalid","message":"Invalid input"}]}
....
I'm 90% positive it has to do with how the JSON is being passed in the handle_setform() setting of the handle. This is about where I am in over my head, and documentation is scarce.
Also, I am pretty sure the JSON is structured properly, as I can use it in other packages with no problem.
Any assistance would be greatly appreciated.
Found the solution!!
I needed to use the following settings with handle_setopt():
for (i in 1:nrow(df)) {
  curl_fetch_multi(
    "http://api.com/values?api_key=1234",
    done = cb,
    pool = pool,
    handle = new_handle() %>%
      handle_setheaders("Content-Type" = "application/v1+json") %>%
      handle_setopt(customrequest = "POST") %>%
      handle_setopt(postfields = df$body[[i]]) # df$body is a list of JSON strings
  )
}
out <- multi_run(pool = pool)
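The difference matters: handle_setform() builds a multipart/form-data request, so the JSON ends up wrapped in a form field, while postfields sends the string untouched as the raw request body, which is what a JSON API expects. A single body can also be sanity-checked synchronously before queuing the whole pool (same hypothetical endpoint and df as above):
# one synchronous request for debugging; endpoint and df$body as above
h <- new_handle()
handle_setheaders(h, "Content-Type" = "application/v1+json")
handle_setopt(h, customrequest = "POST", postfields = df$body[[1]])
res <- curl_fetch_memory("http://api.com/values?api_key=1234", handle = h)
cat("HTTP:", res$status_code, "\n")
cat(rawToChar(res$content), "\n")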
I would like to expose an endpoint that accepts multipart/form-data, parses the multipart content and returns a CSV file (the multipart input contains a CSV dataset and processing instructions).
I've done this with plumber using Rook::Multipart$parse() as suggested here. Because plumber doesn't support parallel requests, I'd like to re-implement this with RestRserve. The following won't work: plumber's inputs are of class environment (which Rook::Multipart$parse() assumes), whereas RestRserve's inputs are of class Request (an R6 class).
library(RestRserve)
library(readr)

application = Application$new(content_type = "text/plain")
application$add_post("/echo", function(req, res) {
  multipart <- Rook::Multipart$parse(req$body)
  dta <- read_csv(multipart$dta$tempfile, trim_ws = FALSE)
  res$set_body(dta)
})
Any ideas on how to get multipart/form-data input to work with RestRserve?
RestRserve parses the multipart body when processing the incoming request. As a result you have the raw request$body and metadata in request$files. The Request object also provides a get_file method to extract the body content. Let me show an example with the app and a request:
# load packages
library(readr)
library(callr)
library(httr)

# run RestRserve in the background
ps <- r_bg(function() {
  library(RestRserve)
  library(readr)
  app = Application$new(content_type = "text/plain")
  app$add_post(
    path = "/echo",
    FUN = function(request, response) {
      # for debug
      str(request$body)
      str(request$files)
      # extract multipart body field
      cnt <- request$get_file("csv") # 'csv' from the upload form field
      # parse CSV
      dt <- read_csv(cnt)
      # for debug
      str(dt)
      # do something with dt
      identity(dt)
      # write result to temp file
      tmp <- tempfile()
      write_csv(dt, tmp)
      # set output body
      response$set_body(c(tmpfile = tmp))
      # or simply response$set_body(format_csv(dt))
    }
  )
  backend = BackendRserve$new()
  backend$start(app, http_port = 65080)
})

# wait for the app to come up
Sys.sleep(2L)
# check it is alive
ps$is_alive()

# prepare CSV to upload
tmp <- tempfile()
write_csv(head(iris, 5), tmp)

# POST request with file
rs <- POST(
  url = "http://127.0.0.1:65080/echo",
  body = list(csv = upload_file(tmp)),
  encode = "multipart"
)

# get response content
cat(content(rs))

# read log from the RestRserve process
cat(ps$read_output())

# kill background process
ps$kill()
See ?Request for more details about fields and methods in this class.
Brief about the application: I am currently working on a project where I have created an R Shiny application that captures details of different candidates. A user can update a candidate's details, and the moment the data is saved, the application gives a popup to send the updated status to the candidate by mail. This functionality runs perfectly fine if each user opens their own instance of the Shiny application.
Problem: We have hosted this application on a PC acting as server, since all users are on the same network. Individual users access the application via the IP address of the host machine. Whenever someone updates a status and saves it, the last-updated-by entry in the database gets registered with the details of the host machine instead of the actual user who made the update. Likewise, the mail is triggered from the host user ID instead of the actual user's ID.
Is there any option to log the individual user details of everyone running my application within my organisation via my IP address?
As mentioned, you could set cookies. You can set a user id uid that will remain constant across sessions for each user.
At each start of the app, it checks the browser cookies for a uid. If one is found, it identifies the user across sessions. If none is found, it is (probably) a new user and a uid is assigned.
library(shiny)
library(shinyjs)
library(magrittr)

if (!dir.exists('www/')) {
  dir.create('www')
}

download.file(
  url = 'https://cdn.jsdelivr.net/npm/js-cookie@2/src/js.cookie.min.js',
  destfile = 'www/js.cookie.js'
)

addResourcePath("js", "www")

jsCode <- '
shinyjs.getcookie = function(params) {
  var cookie = Cookies.get(params[0]);
  Shiny.onInputChange("jscookie", [params[0], cookie]);
}
shinyjs.setcookie = function(params) {
  Cookies.set(params[1], escape(params[0]), { expires: 0.5 });
}
'

server <- function(input, output) {
  observe({
    input$set
    js$getcookie("uid")
  })
  checkCookie <- eventReactive(input$jscookie, {
    uid <- input$jscookie[2]
    if (is.null(uid) | is.na(uid)) {
      uid <- sample(c(0:9, letters), 30, replace = TRUE) %>% paste(collapse = "")
      js$setcookie(uid, "uid")
    }
    return(uid)
  })
  output$output <- renderText({
    checkCookie()
  })
}

ui <- fluidPage(
  shiny::tags$head(
    # you must copy:
    # https://raw.githubusercontent.com/js-cookie/js-cookie/master/src/js.cookie.js
    # to www/
    shiny::tags$script(src = "js/js.cookie.js")
  ),
  useShinyjs(),
  # `functions =` is required by newer shinyjs versions
  extendShinyjs(text = jsCode, functions = c("getcookie", "setcookie")),
  sidebarLayout(
    sidebarPanel(
      actionButton('set', 'set')
    ),
    mainPanel(
      verbatimTextOutput('output')
    )
  )
)

runApp(
  appDir = shinyApp(ui = ui, server = server),
  port = 9898
)
This assumes that the users do not delete their cookies. And to be precise, the uid will be unique per browser, per user.
The code is based on https://gist.github.com/calligross/e779281b500eb93ee9e42e4d72448189.
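To actually log who saved what (the original goal), the uid can be recorded server-side at save time. A minimal sketch, assuming a hypothetical save button input$save and a local CSV log (both placeholders, not part of the app above):
# inside server(), next to the reactives above; input$save is hypothetical
observeEvent(input$save, {
  cat(paste(checkCookie(), format(Sys.time()), sep = ","),
      "\n", file = "user_log.csv", append = TRUE)
})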
js.cookie.min.js: (As asked for in the comments).
/**
* Minified by jsDelivr using Terser v3.14.1.
* Original file: /npm/js-cookie@2.2.1/src/js.cookie.js
*
* Do NOT use SRI with dynamically generated files! More information: https://www.jsdelivr.com/using-sri-with-dynamic-files
*/
!function(e){var n;if("function"==typeof define&&define.amd&&(define(e),n=!0),"object"==typeof exports&&(module.exports=e(),n=!0),!n){var t=window.Cookies,o=window.Cookies=e();o.noConflict=function(){return window.Cookies=t,o}}}(function(){function e(){for(var e=0,n={};e<arguments.length;e++){var t=arguments[e];for(var o in t)n[o]=t[o]}return n}function n(e){return e.replace(/(%[0-9A-Z]{2})+/g,decodeURIComponent)}return function t(o){function r(){}function i(n,t,i){if("undefined"!=typeof document){"number"==typeof(i=e({path:"/"},r.defaults,i)).expires&&(i.expires=new Date(1*new Date+864e5*i.expires)),i.expires=i.expires?i.expires.toUTCString():"";try{var c=JSON.stringify(t);/^[\{\[]/.test(c)&&(t=c)}catch(e){}t=o.write?o.write(t,n):encodeURIComponent(String(t)).replace(/%(23|24|26|2B|3A|3C|3E|3D|2F|3F|40|5B|5D|5E|60|7B|7D|7C)/g,decodeURIComponent),n=encodeURIComponent(String(n)).replace(/%(23|24|26|2B|5E|60|7C)/g,decodeURIComponent).replace(/[\(\)]/g,escape);var f="";for(var u in i)i[u]&&(f+="; "+u,!0!==i[u]&&(f+="="+i[u].split(";")[0]));return document.cookie=n+"="+t+f}}function c(e,t){if("undefined"!=typeof document){for(var r={},i=document.cookie?document.cookie.split("; "):[],c=0;c<i.length;c++){var f=i[c].split("="),u=f.slice(1).join("=");t||'"'!==u.charAt(0)||(u=u.slice(1,-1));try{var a=n(f[0]);if(u=(o.read||o)(u,a)||n(u),t)try{u=JSON.parse(u)}catch(e){}if(r[a]=u,e===a)break}catch(e){}}return e?r[e]:r}}return r.set=i,r.get=function(e){return c(e,!1)},r.getJSON=function(e){return c(e,!0)},r.remove=function(n,t){i(n,"",e(t,{expires:-1}))},r.defaults={},r.withConverter=t,r}(function(){})});
//# sourceMappingURL=/sm/b0ce608ffc029736e9ac80a8dd6a7db2da8e1d45d2dcfc92043deb2214aa30d8.map
I am trying to write an R script to programmatically update a Google Tag Manager container via the API, and I have hit a bit of a wall, as the call keeps returning an invalid argument error and I can't quite figure out why.
The documentation for the API call is here:
https://developers.google.com/tag-manager/api/v2/reference/accounts/containers/update
Here's the code:
library(httr)

url_base <- 'https://www.googleapis.com/tagmanager/v2'
url_path <- paste('accounts', account_id, 'containers', container_id, sep = '/')
api_url <- paste(url_base, url_path, sep = '/')

# since the instructions indicate that the request body parameters are all optional,
# let's just send a new name
call <- PUT(
  api_url,
  add_headers(Authorization = paste("Bearer", gtm_token$credentials$access_token)),
  encode = 'json',
  body = list(name = 'new name')
)
call_content <- content(call, 'parsed')
This is a pretty standard call to the GTM API; in fact, I have written a bunch of functions for other GTM API methods that work the same way, so I am a bit perplexed as to why this one keeps failing:
$error
$error$errors
$error$errors[[1]]
$error$errors[[1]]$domain
[1] "global"
$error$errors[[1]]$reason
[1] "invalidArgument"
$error$errors[[1]]$message
[1] "Bad Request"
$error$code
[1] 400
$error$message
[1] "Bad Request"
It seems like the issue is in the message body, but it's not clear whether the API expects different information or more parameters, when the documentation suggests that all of the parameters are optional.
OK, so the documentation is lacking here. This works if you include at least a name. Here's a working function:
gtm_containers_update <- function(account_id, container_id, container_name, usage_context, domain_name, notes, token) {
  require(httr)
  token$refresh()
  # create the request url
  api_url <- paste('https://www.googleapis.com/tagmanager/v2', 'accounts', account_id, 'containers', container_id, sep = '/')
  # create the list with the required components
  call_body <- list(
    name = container_name,
    usageContext = list(usage_context),
    notes = notes,
    domainName = domain_name
  )
  call <- POST(
    api_url,
    add_headers(Authorization = paste("Bearer", token$credentials$access_token)),
    encode = 'json',
    body = call_body
  )
  print(paste('Status code:', call$status_code))
}
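For completeness, a usage sketch with made-up values (gtm_token is the OAuth2 token object from your existing authentication flow; all IDs are hypothetical):
gtm_containers_update(
  account_id     = "1234567",      # hypothetical
  container_id   = "7654321",      # hypothetical
  container_name = "new name",
  usage_context  = "web",
  domain_name    = "example.com",
  notes          = "Renamed via the v2 API",
  token          = gtm_token
)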
Let's say I have the following HTML viewed in the Viewer Pane
tempDir <- tempfile()
dir.create(tempDir)
htmlFile <- file.path(tempDir, "index.html")
write('<h1> Content</h1>', htmlFile, append = TRUE)
write('<h2> Content</h2>', htmlFile, append = TRUE)
write('lorem ipsum...', htmlFile, append = TRUE)
viewer <- getOption("viewer")
viewer(htmlFile)
When I have this HTML in the Viewer Pane, I can click on the "Save as image" button, and I get the HTML content as a PNG.
Is there a way to do this from the command line? I know about rstudioapi::savePlotAsImage(), so I'm looking for a kind of saveViewerAsImage.
Edit: I know we can do this with the {webshot} package, but I'm looking for the RStudio function that does it.
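For reference, the {webshot} route would look roughly like this (a sketch; webshot needs PhantomJS via webshot::install_phantomjs(), and this is of course not the RStudio-internal function I am after):
library(webshot)
# point webshot at the local file written above
webshot(url = paste0("file://", htmlFile), file = "viewer.png")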
Here's a proposal. The strategy is the following:
1. let the viewer build the png
2. send the png from the viewer to R
Let the viewer build the png
A canvas image possesses a .toDataURL() method that returns a data URI containing a representation of the image in png format (a jpeg format is also available).
The html2canvas library can be used to take a screenshot: this library renders the current page as a canvas image.
So, one can combine these two functions in the viewer:
1. take a screenshot with html2canvas
2. transform this screenshot to png using .toDataURL()
However, the html2canvas library uses JavaScript Promises, which are not supported by the RStudio viewer on Windows: a polyfill is required.
Send the png from the viewer to R
This task can be achieved using WebSockets.
The httpuv package can be used to create a webserver. This server will serve an HTML page that will be opened in the RStudio viewer.
A WebSocket communication is established between the httpuv server and the RStudio viewer.
From the R command line, one can send a WebSocket message to the RStudio viewer: on receiving this message, the viewer takes the screenshot and sends it back to the server.
The code
I'm sorry, this code is quite long for a SO answer.
library(httpuv)

# Initialize variables
png <- NULL
websocket <- NULL

# Download JavaScript libraries
polyfill_promise <- readLines('https://cdn.jsdelivr.net/npm/es6-promise/dist/es6-promise.auto.min.js')
html2canvas <- readLines('https://html2canvas.hertzen.com/dist/html2canvas.min.js')

# Configure the httpuv server
app <- list(
  call = function(req) {
    list(
      status = 200L,
      headers = list(
        'Content-Type' = 'text/html'
      ),
      body = paste0(collapse = "\r\n",
        c("<!DOCTYPE html>",
          "<html>",
          "<head>",
          # polyfill the RStudio viewer to support JavaScript promises
          '<script type="text/javascript">',
          polyfill_promise,
          "</script>",
          # use the html2canvas library
          '<script type="text/javascript">',
          html2canvas,
          "</script>",
          "</head>",
          "<body>",
          html_body,
          "</body>",
          '<script type="text/javascript">',
          # Configure the client-side websocket connection:
          'var ws = new WebSocket("ws://" + location.host);',
          # When a websocket message is received:
          "ws.onmessage = function(event) {",
          # Take a screenshot of the HTML body element
          "  html2canvas(document.body).then(function(canvas) {",
          # Transform it to png
          "    var dataURL = canvas.toDataURL();",
          # Send it back to the server
          "    ws.send(dataURL);",
          "  });",
          "};",
          "</script>",
          "</html>"
        )
      )
    )
  },
  # Configure the server-side websocket connection
  onWSOpen = function(ws) {
    # because we need to send websocket messages from the R command line:
    websocket <<- ws
    # when a websocket message is received from the client
    ws$onMessage(function(binary, message) {
      png <<- message
    })
  }
)

# From your question:
html_body <- c(
  '<h1> Content</h1>',
  '<h2> Content</h2>',
  'lorem ipsum...'
)

# Start the server:
server <- startDaemonizedServer("0.0.0.0", 9454, app)

# Open the RStudio viewer:
rstudioapi::viewer("http://localhost:9454")
# Wait to see the result...

# Send a websocket message from the command line:
websocket$send("go") # send any message

# Write the png image to disk:
writeBin(
  RCurl::base64Decode(
    gsub("data:image/png;base64,", "", png),
    "raw"
  ),
  "screenshot.png"
)

# Close the websocket connection
websocket$close()

# Stop the server
stopDaemonizedServer(server)