rpivotTable shiny large dataframe - r

I am working on placing an rpivotTable inside a Shiny app. When I try it on test data (a data frame with 1000 rows), I am able to run the app from the command line, and others can access it at my ip:port as expected. However, when I increase the size of the data frame being fed into rpivotTable, the app 'greys' out and I am no longer able to serve it to others.
I have also tested this same app by spinning up an EC2 instance and upping the instance type, but the same thing would happen. I was getting an error similar to the one described in the post "ERROR: [on_request_read] connection reset by peer in R shiny" and in this GitHub issue: https://github.com/rstudio/shiny/issues/1469 ("ERROR: [on_request_read] connection reset by peer").
My syntax for calling and rendering the rpivotTable is straightforward, but as the size of the data frame increases, the app stops working. My suspicion is that a timeout parameter in the JavaScript widget is responsible?
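For reference, here is a minimal sketch of the kind of app in question (big_df stands in for the real data frame; the host and port are placeholders):

library(shiny)
library(rpivotTable)

# Stand-in for the real data: the whole data frame is serialized to JSON
# and shipped to the browser, which is why its size matters here.
big_df <- data.frame(x = runif(1e5),
                     g = sample(letters, 1e5, replace = TRUE))

ui <- fluidPage(
  rpivotTableOutput("pivot")
)

server <- function(input, output, session) {
  output$pivot <- renderRpivotTable({
    rpivotTable(big_df)
  })
}

# Serve at ip:port as described above
shinyApp(ui, server, options = list(host = "0.0.0.0", port = 3838))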

I had the same problem and had to upgrade from t3a.medium to t3a.large. That's a bigger instance than I wanted, but it works now.

Related

How to solve a data source error when loading Google Analytics data in Power BI?

I would like to load data from Google Analytics into Power BI.
After transforming the data in the Query Editor, I apply the changes.
At first, I see the message 'Waiting for www.googleapis.com' and the number of rows increases.
After a while, I get the following error message:
Failed to save modifications to the server. Error returned: 'OLE DB or ODBC error: [DataSource.Error] There was an internal error..'
Rows with errors have been removed in one of the steps and I have a stable Internet connection.
Does anyone have suggestions on how to solve this?
I was also facing this kind of refresh issue. First, go to the Query Editor and verify the data types, changing them if needed. If you still face the error after that, keep app.powerbi.com open while refreshing your Power BI dashboard. I followed the above steps and my issue is now resolved.

ERROR: [_parse_http_data] invalid HTTP method in shiny app

When I load my dockerized Shiny app's domain name in the browser, it crashes (greys out) and I get this error: "ERROR: [_parse_http_data] invalid HTTP method".
I have developed a web application that consists of a Shiny app (with a login feature connected to an RMySQL database), a website, and a MariaDB database. I put them together in a docker-compose file and tested it on my local computer, where it works fine. I then deployed them to a Kubernetes cluster in GCE, which was also successful. I used Cloudflare to install an SSL certificate for the Shiny app domain (i.e. trnddaapp.com). Now when I load the Shiny app domain in the browser, it appends the https and loads the app successfully, but after about a minute it crashes (greys out). Loading the Shiny app's external IP over plain http does not crash.
The closest solution I have come to is https://github.com/rstudio/shiny-server/issues/392, but there doesn't seem to be any other solution to my problem. I would be grateful if anyone could help me resolve it.
This is the error message I get when I check with kubectl logs [app pod name]:
ERROR: [_parse_http_data] invalid HTTP method
ERROR: [_parse_http_data] invalid HTTP method
ERROR: [_parse_http_data] invalid HTTP method
I expect the app not to crash when the Shiny app domain (trnddaapp.com) is accessed over https.
Let's start with an analysis of the error message. It says:
[_parse_http_data]
So we know that your app is receiving something, but it doesn't understand what it is (it may be malformed HTTP/1.0 or HTTP/1.1, or even binary data); then we have an
invalid HTTP method
Now we are sure it is not an HTTP/1.x call but a stream of unrecognized data.
We now know it is not the instance, since it deploys and delivers the service; something inside it is just breaking.
There are a few things that may be happening. Since the app runs on your local machine (where I assume it has access to more resources, especially memory), it may be an issue of resource allocation: once run in a container, it could be exhausting its allocated resources and breaking (perhaps a library called at runtime that uses a chunk of memory?). But we won't be sure unless we can debug it inside the container. Could you add a debug library that records your requests, so we can see whether it parses all of them, at what point it stops, and why? I know someone from RStudio created a branch of httpuv that logs every request; it can be installed as follows:
devtools::install_github('rstudio/httpuv#wch-print-req')
After that, maybe share the output so we can see why the application is behaving like that and killing its own service.
Hopefully with those logs we will be able to shed more light on this matter.
Thanks in advance!
-JP

R / Shiny promises and futures not working with httr

I am working on a Shiny app that connects to Comscore using their API. Any attempt at executing POST commands inside futures/promises fails with the cryptic error:
Warning: Error in curl::curl_fetch_memory: Bulk data encryption algorithm failed in selected cipher suite.
This happens with any POST attempt, not only when I try to call Comscore's servers. As an example of a simple, harmless, and uncomplicated POST request that fails, here is one:
library(httr)
library(future)
rubbish <- future(POST('https://appsilon.com/an-example-of-how-to-use-the-new-r-promises-package/'))
print(value(rubbish))
But everything works fine if I do not use futures/promises.
The problem I want to solve is that we currently have an app that works fine in a single-user environment, but it must be upgraded for a multi-user scenario, to be served by a dedicated Shiny Server machine. The app makes several such calls in a row (from a few dozen to some hundreds), taking from 5 to 15 minutes.
The code is running inside an observeEvent block, triggered by a button clicked by the user when he has configured the request to be submitted.
My actual code is longer, and there are other lines both before and after the POST command in order to prepare the request and then process the received answer.
I have verified that all lines before the POST command get executed, so the problem seems to be there, in the attempt to do a POST connecting to the outside world from inside a promise.
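For illustration, here is a minimal sketch of the structure described above, with hypothetical input names and a placeholder URL standing in for the real Comscore endpoint:

library(shiny)
library(httr)
library(future)
library(promises)

plan(multisession)  # a multiprocess plan is assumed here

ui <- fluidPage(
  actionButton("submit", "Submit request"),
  textOutput("status")
)

server <- function(input, output, session) {
  status <- reactiveVal("")

  observeEvent(input$submit, {
    # Lines preparing the request go here; the POST then runs inside
    # a future and the answer is processed once the promise resolves.
    payload <- list(query = "example")  # placeholder payload
    future({
      POST("https://example.com/api", body = payload)  # placeholder URL
    }) %...>%
      (function(resp) status(http_status(resp)$message))
  })

  output$status <- renderText(status())
}

shinyApp(ui, server)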
I am using RStudio Server 1.1.453 along with R 3.5.0 on a RHEL server.
Package versions are:
shiny: 1.1.0
httr: 1.3.1
future: 1.9.0
promises: 1.0.1
Thanks in advance,

Network error triggering the download report(report generation) action in server.R twice

I have a Shiny application deployed on Shiny Server Pro. The main aim of the application is to process input Excel files and produce a report in the form of a Word document, which has a couple of tables and around 15 graphs rendered using ggplot.
This application works perfectly for input Excel files having fewer than approx. 3500-4500 rows for around 10 metrics.
Now I am trying to process an Excel file with around 4000-4500 rows for around 20 metrics. While processing this file, during report generation (R Markdown processing) a network error is shown in the UI only. Despite this error in the UI, the report file does get generated in the back end, but it never gets downloaded. After the error, the report generation action is triggered again automatically, resulting in two generated reports, neither of which gets downloaded.
From these observations, I came to the conclusion that, on getting the network error, the download report (report generation and downloading) action is triggered again by server.R.
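For context, here is a minimal sketch of the kind of download handler involved (file and object names are hypothetical; the real app renders a much larger R Markdown report):

output$report <- downloadHandler(
  filename = "report.docx",
  content = function(file) {
    # Long-running step: if rendering takes longer than the server's
    # keepalive timeout, the browser reports a network error even
    # though rendering continues in the back end.
    rmarkdown::render("report.Rmd",
                      output_format = "word_document",
                      output_file = file)
  }
)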
Has anyone been through such a strange situation? I am looking for guidance regarding the following problems:
What can be the reason of getting the network error sometime only?
What is there, which is triggering the download report action twice?
Is there any option to specify the max. session timeout period?
I have found answers to the above questions and I have already answered them here. Still, I would like to quickly answer the questions in the context explained above.
Reason for getting the network error: the user is presented with the network error only if the computation (in this case, report generation) doesn't complete within 45 seconds. This is because the http_keepalive_timeout parameter is not defined in the server configuration, and its default value is 45 seconds.
Why was the download report action triggered twice? Because the user's session with the server was terminated during the computations that were happening after clicking the Download action button. There is a parameter called reconnect in the Shiny Server configuration which is enabled by default. When a user's connection to the server is interrupted, Shiny Server will offer them a dialog that allows them to reconnect to their existing Shiny session for 15 seconds. This means the server keeps the Shiny session active for an extra 15 seconds after a user disconnects, in case they reconnect. After the 15 seconds, the user's session is reaped and they are notified and offered an opportunity to refresh the page. If this setting is false, the server will immediately reap the session of any user who is disconnected.
You can read about it in the shiny server documentation.
Option to specify the max. session timeout period: yes. There is a parameter called http_keepalive_timeout that lets you specify the maximum keepalive timeout period. You will need to add http_keepalive_timeout to shiny-server.conf at the top level, with the timeout period you want in seconds, as shown below.
http_keepalive_timeout 120;
Read more about http_keepalive_timeout here.

Firebase Forge is no longer allowing me to view the dataset

Has anyone seen this error message when accessing Forge:
Console Message:
failed: Error: too_big: The data requested exceeds the maximum size that can be accessed with a single request.
Displayed in Forge Viewer:
Data view could not be loaded: There is too much data in your Firebase.
This is happening across all of my dev/uat/prod datasets. These datasets, in exactly the same form, were previously fully and easily accessible across all levels. This is not a big dataset; the whole exported dataset is around 15 MB.
