RStudio Connect disconnects from server before app can load

I am running an application that takes quite a while to load: I have to load about 7 GB of data, which takes the app roughly 220 seconds when run locally. However, my deployed app disconnects from the server after roughly 120 seconds, before it can finish loading.
I don't know exactly what I can put here, since the log doesn't show anything. If there is anywhere I can grab more information from to show you all, or if this is a known issue that can be easily solved, I would love to know!

Are you using shinyapps.io? The free tier only allows you to use 1GB RAM. Loading 7GB data will definitely crash the server.
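Whatever the hosting, one thing worth trying is moving the expensive work out of app startup so the app comes up inside whatever timeout the platform imposes. A minimal sketch of the pattern, with placeholder names, assuming a good part of the 220 seconds is spent deriving data rather than just reading it off disk:

# run once, on your own machine
big_data <- build_big_data()              # stands in for the 220-second step
saveRDS(big_data, "data/big_data.rds")

# at the top of app.R in the deployed bundle
big_data <- readRDS("data/big_data.rds")  # deserializing is usually far faster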

Related

Downloads timing out after 30 seconds for users with slow connections

We have some files on our portal that aren't that big: 50-80 MB. On my home connection it takes under 10 seconds to download these files, and other users I've had test them experience the same thing.
However, in the office the connection is terrible. These files don't even finish downloading: once the download has been running for about 30-35 seconds, even though it is still transferring (just incredibly slowly), a non-descriptive error shows up in Developer Tools > Network and the download stops. Nothing in any of the logs indicates why the download is terminated.
The bigger problem is we now have a few end users with crappy internet who are also experiencing the same issue.
So I'm trying to figure out what we can do on our end. Obviously we can't tell them, "Well, just get better internet service." It seems like something can be done on our side to persist the download until it completes; what that is, I'm not quite sure, and that is what I'm looking for help on. Maybe it is a default setting in a dependency somewhere in our stack:
ReactJS FE that uses FileSaver.js for downloads
Django BE using native Django downloading
nginx-ingress for traffic ingress controller to the Kubernetes cluster
The FE uses nginx to serve the FE
The BE uses gunicorn to serve the BE
Any suggestions on what I should do to prevent this timeout on downloads?
I'm thinking the issue is somewhere with nginx-ingress, nginx, and/or FileSaver.js, so investigating those.
Per Saurabh, adjusting the timeout did the trick. I now just start the web server with the -t 300 flag and the users that were having issues no longer do.
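Assuming "the web server" here is gunicorn, the -t / --timeout flag sets the worker timeout in seconds, and its default is 30 seconds, which matches the symptom. For reference, a sketch of the two places this timeout usually lives in the stack above (values illustrative, module path a placeholder):

# gunicorn: raise the worker timeout from its 30-second default
gunicorn myproject.wsgi:application -t 300

# nginx-ingress: raise the proxy timeouts via annotations on the Ingress resource
nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
nginx.ingress.kubernetes.io/proxy-send-timeout: "300"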

Shiny app runs significantly slower on Shiny Servers than it does locally

This is the source code of my Shiny app, which plots polygons for more than 350 towns in Taiwan whenever any input changes in the UI. The towns' values change according to the inputs every time, so there's little opportunity to use leafletProxy. I am now having performance issues, especially on Shiny Server.
You may try running the app locally: the map shows up about 10 seconds after the options are changed in the UI. However, the deployed app on Google Compute Engine or on shinyapps.io takes much longer (around 30 seconds) to draw the map, not only when the app initializes but also every time the inputs change. Besides, the Shiny Server frequently disconnects during computation.
When that disconnection happens, /var/log/shiny-server.log tells me the following, which has never happened locally:

[INFO] shiny-server - Error getting worker: Error: The application exited during initialization.
It doesn't make any sense to me. How is it possible that my laptop is beating servers? My laptop is a MacBook Air (Early 2015) with just a 1.6 GHz Intel Core i5 and 8 GB of 1600 MHz DDR3, whereas the VM on Google Compute Engine performs this badly even with 4 vCPUs and 15 GB of RAM.
How can I find out the reasons for the worse performance on Shiny Server, or refactor my code?
Possibly related: Leaflet R performance issues with large map
Well, firstly: preprocessing has no place in the Shiny application. Why repeat something every time someone uses the app when it can be done once and the saved product loaded instead?
I'd have a look at the following steps:
Remove anything that can be done once, externally (e.g. lines 12-37 of your code).
Simplify the polygons to make the file smaller (faster loading; do this once and load the finished product).
Anything you generate repetitively (labels etc.), do once, save in a list (e.g. metadata.rds), and read in once for reference. A sketch of this one-off preprocessing follows.
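A rough sketch of those steps, assuming the sf and rmapshaper packages (file and column names are placeholders):

# preprocess.R - run once, outside the app
library(sf)
library(rmapshaper)
towns <- st_read("towns.shp")               # the 350-town polygons
towns <- ms_simplify(towns, keep = 0.05)    # keep ~5% of the vertices
labels <- sprintf("%s", towns$name)         # whatever labels the app rebuilds today
saveRDS(towns, "towns_simplified.rds")
saveRDS(labels, "labels.rds")

# in the app: load the finished products instead of recomputing them
towns <- readRDS("towns_simplified.rds")
labels <- readRDS("labels.rds")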
Sometimes it can appear that your app runs faster locally because you don't actually restart the R session while developing; on the server, Shiny is basically kickstarting a session for each user (kinda).
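To lean into that, load shared data at the top level of app.R or in global.R, so it is read once per R process rather than once per user session. A minimal sketch with placeholder names:

# global.R - sourced once per R process; objects here are shared
# by every session that process serves
library(shiny)
library(leaflet)
towns <- readRDS("towns_simplified.rds")

# server.R - runs per session; keep it cheap
server <- function(input, output, session) {
  output$map <- renderLeaflet({
    leaflet(towns) %>% addPolygons()
  })
}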

Overload app pool in IIS

Our application sometimes hangs, and we normally take the affected node out of load balancing when business/end users report it. In normal scenarios we take the node out of load balancing automatically, but in app-pool-hang scenarios we don't have that privilege. So I am trying to understand each stage of the IIS request queue: when I see more requests in the http.sys queue, I will try to take the node out of load balancing. That won't cover every case, so I want to test rigorously and monitor performance at each stage.
For that, I need to access URLs continuously to overload the app pool and watch how it serves requests. I tried using TinyGet, but my application uses Windows authentication, so it doesn't work; every time the error is "access denied". I tried WCAT, but wasn't able to understand much from it.
Is there any way I can access a URL continuously or otherwise simulate a high volume of app pool requests? Any suggestions from your experience are welcome.
Thanks in advance.
If you have Visual Studio, you can create a load test:
http://msdn.microsoft.com/en-us/library/ms182594.aspx
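If all you need is raw request volume against a site behind Windows authentication (where TinyGet was failing), a small console client may be enough. A sketch; the URL and burst size are placeholders:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class LoadGen
{
    static async Task Main()
    {
        // Pass the current Windows credentials so authentication succeeds.
        var handler = new HttpClientHandler { UseDefaultCredentials = true };
        using (var client = new HttpClient(handler))
        {
            const string url = "http://yourserver/yourapp/";  // placeholder
            var tasks = new Task<HttpResponseMessage>[500];   // tune the burst size
            for (int i = 0; i < tasks.Length; i++)
                tasks[i] = client.GetAsync(url);

            await Task.WhenAll(tasks);
            Console.WriteLine("Sent {0} requests.", tasks.Length);
        }
    }
}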

How to detect whether an ASP.NET script is already running?

I want to create a script that would run forever. I mean, if I start the script today, I should be able to see it still running next year.
That won't be possible as things stand, because of server errors: it is obvious that the script will stop within 2 or 3 hours due to server faults (I'm using a free web server).
So the method I'm going to use is to run two (or more) scripts simultaneously on two servers, with each script checking every 30 seconds whether the other is running, and vice versa. If one is found not to be running, the other executes it again.
That way the scripts will keep running as long as both of them are not stopped at once.
1. My question is: how do I check whether the other ASP.NET script is running?
2. At the very least, is there a way to check whether another instance of the same ASP.NET script (on the same server) is already running?
i want to create a script that would run forever
ASP.NET is not the tool for this. A web application is a request/response system. It intercepts requests, performs a finite amount of processing, and returns a response. At that point it's done. Additionally, web servers are free to allocate and de-allocate resources for a number of reasons, so at any time your web application can be shut down.
What you're looking for is something more like a Windows Service or perhaps a Console Application (backed by a scheduler or something else to ensure that it's running). Web applications by design don't "run forever" so they're not the right tool for the job.
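As a rough illustration of the console-application route (the work itself is a placeholder), paired with Task Scheduler or a service wrapper so it restarts after a crash:

using System;
using System.Threading;

class Worker
{
    static void Main()
    {
        // A long-running loop belongs in its own process, not in a web request.
        while (true)
        {
            DoWork();                               // stands in for the real job
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }

    static void DoWork()
    {
        Console.WriteLine("Heartbeat at {0:u}", DateTime.UtcNow);
    }
}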
ASP hosting is not free, but it is also not too expensive. You can run a script that works continuously on a server, but doing so can cause server load errors and will affect the other websites hosted on the same shared hosting. You could go for VPS hosting, but I think your server administrator might suspend your account for running that kind of script.

ASP.NET Application Deployment Issue

I have deployed an application written in ASP.NET 2.0 into production and it's experiencing some latency issues. Pages are taking about 4-5 seconds to load, and GridView refreshes take around the same time.
The app runs fine on the development box. I did the following investigation on the server:
Checked the available memory ... 80% used.
Checked the processor ... 1%
Checked disk I/O from perfmon ... less than 15%
The server config is:
Windows Server 2003 SP2
Dual 2.0 GHz
2 GB RAM
Running SQL Server 2005 and IIS only
Is there anything else I can troubleshoot? I also checked the event log for errors, it's clean.
EDITED ~ The only difference I just picked up is that on the DEV box I am using IE7 while the clients are using IE6 - could this be an issue?
UPDATE ~ I updated all clients to IE8 and noticed a 30% improvement in performance. I then finally found out I had left debug=true in the web.config file. Setting that to false got the app back to stable performance... I still can't believe I did that.
First thing I would do is enable tracing. (see: https://web.archive.org/web/20210324184141/http://www.4guysfromrolla.com/webtech/081501-1.shtml)
Then add tracing points to your page-generation code to give you an idea of how long each part of the page build takes:

System.Diagnostics.Trace.Write("Starting Page init", "TraceCheck");
// ... init page ...
System.Diagnostics.Trace.Write("End Page init", "TraceCheck");

System.Diagnostics.Trace.Write("Starting Data Fetch", "TraceCheck");
// ... get data ...
System.Diagnostics.Trace.Write("End Data Fetch", "TraceCheck");

// etc.
This way you can see exactly how long each stage is taking and then target that area.
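To see that output, ASP.NET tracing also has to be switched on in web.config (the values here are illustrative); the results then appear on the page itself via pageOutput, or at trace.axd:

<system.web>
  <trace enabled="true" pageOutput="true" requestLimit="40" localOnly="true" />
</system.web>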
Double-check that your application is not running in debug mode: in your web.config file, check that the debug attribute under system.web\compilation is set to false.
Besides making the application run slower and use more system memory, you will also experience slow page loading, since nothing is cached in debug mode.
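For reference, the setting in question looks like this in web.config:

<system.web>
  <compilation debug="false" />
</system.web>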
Also check your page size. A developer friend of mine once loaded an entire table into viewstate. A 12 megabyte page will slip by when developing on your local machine, but becomes immediately noticeable in production.
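If a single heavy control turns out to be the culprit, you can switch viewstate off for just that control (the control ID here is hypothetical):

<asp:GridView ID="BigGrid" runat="server" EnableViewState="false" />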
Are you running against the same SQL Server as in your tests or a different one?
In order to find out where the time's coming from you could add some trace statements to your page load, and then load the page with tracing turned on. That might help point to the problem.
Also, what are the specs of your development box? The same?
Depending on the version of Visual Studio you have, Team Developer has a Performance Wizard you might want to investigate.
Also, if you use IE 8, it has a Profiler which will let you see how long the site takes to load in the browser itself. One of the first things to determine is whether the time is being spent client side or server side.
If client side, start looking at what JavaScript you have and optimize it or get rid of it.
If server side, you need to look at all of the performance counters (perfmon). For example, we had an app that crawled on the production servers due to a tremendous amount of JIT going on.
You also need to look at the communication between the web and database server. How long are queries taking? Are the boxes thrashing the disk drives? etc.
