R / Shiny promises and futures not working with httr

I am working on a Shiny app that connects to Comscore using their API. Any attempt at executing POST commands inside a future/promise fails with this cryptic error:
Warning: Error in curl::curl_fetch_memory: Bulk data encryption algorithm failed in selected cipher suite.
This happens with any POST attempt, not only when I call Comscore's servers. As an example of a simple, harmless and uncomplicated POST request that fails, here is one:
rubbish <- future(POST('https://appsilon.com/an-example-of-how-to-use-the-new-r-promises-package/'))
print(value(rubbish))
But everything works fine if I do not use futures/promises.
The problem I want to solve: we currently have an app that works fine in a single-user environment, but it must be upgraded to a multiuser scenario, served from a dedicated Shiny Server machine. The app makes several such calls in a row (from a few dozen to some hundreds), taking from 5 to 15 minutes.
The code runs inside an observeEvent block, triggered by a button the user clicks once they have configured the request to be submitted.
My actual code is longer; there are other lines both before and after the POST command that prepare the request and then process the received answer.
I have verified that all lines before the POST command get executed, so the problem seems to lie there: in the attempt to POST to the outside world from inside a promise.
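For completeness, here is the whole thing reduced to a self-contained sketch. Switching to plan(multisession), so that each request runs in a separate R process rather than sharing connection state with the parent, is one workaround I have seen suggested for this cipher-suite error; I have not confirmed it, so treat it as an assumption:

```r
library(httr)
library(future)

# Assumption: running the request in a fresh worker process (multisession)
# avoids sharing curl/SSL handles between processes, which is one suspected
# cause of the "Bulk data encryption algorithm failed" error.
plan(multisession)

f <- future({
  # The POST call executes entirely inside the worker process.
  POST('https://appsilon.com/an-example-of-how-to-use-the-new-r-promises-package/')
})
print(value(f))
```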
I am using RStudio Server 1.1.453 along with R 3.5.0 on a RHEL server.
Package versions are:
shiny: 1.1.0
httr: 1.3.1
future: 1.9.0
promises: 1.0.1
Thanks in advance,

Related

gmailr credentials randomly (?) need re-authentication

I'm using gmailr in an automatic R script to send out some emails. It's been working fine for about a month and a half, but recently it failed with the following error:
Error: Can't get Google credentials.
Are you running gmailr in a non-interactive session? Consider:
* Call `gm_auth()` directly with all necessary specifics.
Execution halted
My code, which hasn't changed, is
library(gmailr)
options(gargle_oauth_email = TRUE)
gm_auth_configure(path ="data/credentials.json")
gm_auth(email = TRUE, cache = ".secret")
and is run non-interactively (there is only one token in the .secret folder). When I then ran it interactively, it "did the dance" and opened up the authentication thingy in the browser, which I confirmed, and now everything is running fine again.
The problem is that I don't understand why the credentials suddenly required re-authentication or how I could prevent the script failing like this in the future.
You can try to clean the cache in the gargle folder and then create a new one.
It worked for me when I had a similar problem.
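If you want to script that cleanup, here is a minimal sketch. The clear_token_cache helper is purely illustrative (not part of gmailr or gargle); ".secret" is the cache folder used in the question:

```r
# Hypothetical helper: wipe cached OAuth tokens so the next gm_auth()
# call is forced to mint a fresh one.
clear_token_cache <- function(cache_dir) {
  tokens <- list.files(cache_dir, full.names = TRUE)
  unlink(tokens)
  invisible(tokens)  # returns the removed paths, invisibly
}

# Usage sketch against the question's setup (run the auth step
# interactively once so the browser flow can complete):
# clear_token_cache(".secret")
# gm_auth_configure(path = "data/credentials.json")
# gm_auth(email = TRUE, cache = ".secret")
```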
gm_auth function with gargle_oauth_cache stop working

ERROR: [_parse_http_data] invalid HTTP method in shiny app

When I load my docker shiny app domain name in the browser, it crashes (greys out) and I get this "ERROR: [_parse_http_data] invalid HTTP method".
I have developed a web application that consists of a shiny app (with a login feature connected to an RMySQL database), a website, and a mariadb database. I put them together in a docker-compose file and tested it on my local computer, where it works fine. I then deployed them to a Kubernetes cluster in GCE, which was also successful. I used Cloudflare to install an SSL certificate for the shiny app domain (i.e. trnddaapp.com). Now when I load the shiny app domain in the browser, it appends the https and loads the app successfully, but after about a minute it crashes (greys out). Loading the shiny app's external IP over plain http does not crash.
The closest I have come to a solution is https://github.com/rstudio/shiny-server/issues/392, but there doesn't seem to be any other answer to my problem. I would be grateful if anyone could help me resolve this.
This is the error message I get when I check with kubectl logs [app pod name]:
ERROR: [_parse_http_data] invalid HTTP method
ERROR: [_parse_http_data] invalid HTTP method
ERROR: [_parse_http_data] invalid HTTP method
I expect the app not to crash when the shiny app domain (trnddaapp.com) is appended with the https.
Let's start with an analysis of the error message. It says:
[_parse_http_data]
So we know that your app is receiving something, but it doesn't understand what it is (it may be malformed HTTP/1.0 or HTTP/1.1, or even binary data). Then we have:
invalid HTTP method
Now we are sure it is not an HTTP/1.x call but a stream of (unrecognized) data.
We also know it is not the instance, since it "deploys" and "delivers" the service; something inside it is breaking.
There are a few things that may be happening. Since the app runs on your local machine (where I assume it has access to more resources, especially memory), this may be a resource-allocation issue: once run in a container, it could exhaust its allotted resources and break (perhaps a library called at run time uses a large chunk of memory?). We won't be sure unless we can debug it inside the container, so could you add a debug library that records your requests, to see whether it parses all of them, at what point it stops, and why? Someone from RStudio created a branch of httpuv that logs every request; it can be installed like this:
devtools::install_github('rstudio/httpuv@wch-print-req')
After that, maybe share the output so we can see why the application is behaving like that and killing its own service.
Thanks in advance; hopefully with those logs we will be able to shed more light on this matter.
Thanks once again!
-JP

rpivotTable shiny large dataframe

I am working on placing a rpivotTable inside a Shiny app. With test data (a data frame with 1000 rows) I am able to run my app from the command line, and others can access the app via my ip:port as intended. However, when I increase the size of the data frame fed into rpivotTable, the app 'greys out' and I am no longer able to serve it to others.
I have also successfully tested this same app by spinning up an EC2 instance and upping the instance type, but the same thing happens. I was getting an error similar to the one in the post "ERROR: [on_request_read] connection reset by peer in R shiny" and in this GitHub issue: https://github.com/rstudio/shiny/issues/1469.
My syntax for calling and rendering the rpivotTable is pretty straightforward, but as the size of the data frame increases, my app stops working. My suspicion is that there is a timeout parameter in the JavaScript widget?
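Roughly, the rendering pattern looks like this (simplified). The cap_rows guard is an idea I am experimenting with, not something the widget provides: rpivotTable serializes the entire data frame into the page, so capping the rows keeps the payload small.

```r
library(shiny)
library(rpivotTable)

# Hypothetical guard: sample the data down before handing it to the widget,
# since the whole data frame is shipped to the browser.
cap_rows <- function(df, max_rows = 50000) {
  if (nrow(df) <= max_rows) return(df)
  df[sample.int(nrow(df), max_rows), , drop = FALSE]
}

ui <- fluidPage(rpivotTableOutput("pivot"))

server <- function(input, output, session) {
  output$pivot <- renderRpivotTable(rpivotTable(cap_rows(mtcars)))
}

# shinyApp(ui, server)
```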
I had the same problem, and had to upgrade from t3a.medium to t3a.large. That's more than I wanted, but it works now.

Meteor 0.7.2 + OplogObserveDriver not updating under certain circumstances

This is pretty cutting-edge as 0.7.2 was just released today, but I thought I'd ask in case somebody can shed some light.
I didn't report this to MDG because I can't reproduce this on my dev environment and thus I wouldn't have a recipe to give them.
I've set up oplog tailing in my production environment, which was deployed exactly as my dev environment was, except it's on a remote server.
The server runs Ubuntu + node 0.10.26, and I'm running the bundled version of the app with forever. Mongo reports that its replSet is working in order.
The problem is that some collection updates made in server code don't make it to the client. This is the workflow the code is following:
1. Server publishes the collection using a very simple user_id: this.userId selector.
2. Client subscribes.
3. Client calls a server method using Meteor.call().
4. Client starts observing a query on that collection using a specific _id: "something" selector. It will echo on "changed".
5. Server method calls .update() on the document matching that "something" _id, after doing some work.
If I run the app without oplog tailing (by not setting MONGO_OPLOG_URL), the above workflow works every time. However, if I run it with oplog tailing, the client doesn't echo any changes and if I query the collection directly from the JS console on the browser I don't see the updated version of the collection.
To add to the mystery, if I go into the mongo console and update the document manually, I see the change on the client immediately. Or if I refresh the browser after the Meteor.call() and then query the collection manually from the JS console, the changes are there, as I'd expect.
As mentioned before, if I run the app in my dev environment with oplog tailing (verified using the facts package), it all works as expected and I can't reproduce the issue. The only difference I can think of is the latency between client and server (my dev environment is on my LAN).
Maybe if somebody is running into something similar we can isolate the issue and make it reproducible.

Passing parameters to package run by sp_startjob

We have an SSIS package that is run via a SQL Agent job. We initiate the job (via sp_startjob) from within an ASP.NET web page. The user who is logged into the UI needs to be logged with the SSIS package that they initiate; hence we need the userId to be passed to the SSIS package. The issue is that we cannot pass parameters to sp_startjob.
Does anyone know how this can be achieved, or know of an alternative to the above approach?
It cannot be done through sp_startjob. You can't pass a parameter to a job step, so that option is out.
If you have no concern about concurrency, and given that you can't have the same job running twice at the same time, you could probably hack it by changing your job step from type SQL Server Integration Services to something like an OS Command. Have the OS command call a batch script that the web page creates/modifies, the net result being that you start your package like:
dtexec.exe /file MyPackage /Set \Package.Variables[User::DomainUser].Properties[Value];\"Domain\MyUser\"
At that point, the variable DomainUser in your package would have the value Domain\MyUser.
I don't know your requirements, so perhaps you can just call into the .NET framework and start your package from the web page, although you'd probably want to make that call asynchronous. Otherwise, unless your SSIS package is very fast, users might try to navigate away, spam refresh, etc. while waiting for the page to "work".
All of this, by the way, is simply pushing a value (in this case, a user name) into an SSIS package. It doesn't pass along the user's credentials, so calls to things like SYSTEM_USER would still report the SQL Agent service account (or the account configured for the job step).
