Downloads timing out after 30 seconds for users with slow connections - nginx

We have some files on our portal that aren't that big: 50-80 MB. On my home connection, it takes under 10 seconds to download them, and other users I've had test see the same thing.
However, in the office the connection is terrible. These files never finish downloading: once the download reaches about 30-35 seconds, even though data is still transferring (just incredibly slowly), a non-descriptive error appears in Developer Tools > Network and the download stops. Nothing in any of the logs indicates why the download was terminated.
The bigger problem is that a few end users with equally poor connections are now hitting the same issue.
So I'm trying to figure out what we can do on our end. Obviously, we can't tell them, "Well, just get better internet service." It seems like something can be done on our end to keep the download alive until it completes. What that is, I'm not quite sure, and that's what I'm looking for help with. Maybe it's a default setting in a dependency somewhere in our stack.
ReactJS FE that uses FileSaver.js for downloads
Django BE using native Django downloading
nginx-ingress as the ingress controller for the Kubernetes cluster
nginx serving the FE
gunicorn serving the BE
Any suggestions on what I should do to prevent this timeout on downloads?
I'm thinking the issue is somewhere in nginx-ingress, nginx, and/or FileSaver.js, so I'm investigating those.

Per Saurabh, adjusting the timeout did the trick. I now start the web server with the -t 300 flag, and the users who were having issues no longer do.
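For anyone landing here later, a minimal sketch of the fix, assuming the BE is launched directly with gunicorn (gunicorn kills workers that are silent for longer than its timeout, which defaults to 30 seconds, matching the ~30-second failures above; the module path below is a placeholder):

    # raise gunicorn's worker timeout from the 30s default to 300s
    gunicorn -t 300 myproject.wsgi:application

If the transfer still dies at the ingress layer, the proxy timeouts can be raised per Ingress with annotations (names as used by the kubernetes/ingress-nginx controller):

    metadata:
      annotations:
        nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
        nginx.ingress.kubernetes.io/proxy-send-timeout: "300"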

Related

My Apache2 server is slow to serve some CSS and JS files. Any idea why?

I'm not sure if this is a server or a programming problem. I searched for a similar problem, but couldn't find anything like this.
I have a server running Debian Buster, serving sites on Apache2.
This week, one of my sites turned very slow, taking more than 25 seconds to render a page that usually took between 2 and 4 seconds.
At first, I checked the PHP program, but it completes processing everything in less than 1 second, occasionally 2 seconds.
As I have to place some menus depending on the size of the page, I save everything in a PHP variable, then decide if I add extra menus or not.
In the end, I "echo" the variable to the browser.
That said, after checking a lot, I found that:
when I open a page, it takes no time to process the HTML in PHP, and after writing it to the browser, the browser sits "waiting for www.mydomainname.tld" for 20+ seconds;
running top, I see 2 or 3 Apache processes at 100% CPU on the server during that time;
one of my CSS files was missing from the server. After replacing it, one of the Apache processes at 100% CPU disappeared (it probably ran and closed);
another CSS file is on the server, but with that file referenced in the HTML page, the same 100% CPU problem appears. If I disable it in the HTML, everything runs as quickly as expected: the browser renders in less than 4 seconds tops.
I know this is not easily reproducible, and for now I have disabled that second CSS file so my site is running, but it would be great if anyone could give me an idea of where I should look for a solution, if any.
I even checked the hard disks; the SMART flags seem OK. I haven't stopped the server to check the disks yet.
I have a copy of this server in a virtualbox running the same system, and locally it is amazingly fast.
My worry is if there is any problem in my server that I should get some maintenance for?
Just to add: the server is an AMD octa-core with 32GB of RAM and 2 x 3TB disks in a RAID1 set, so the server specs are not the culprit (I think).
I ran a badblocks check on the RAID and found no errors.
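For anyone wanting to run the same checks, a rough sketch (device names here are assumptions; adjust /dev/md0 and the member disks to your layout):

    # read-only surface scan of the RAID device (-s progress, -v verbose)
    badblocks -sv /dev/md0

    # SMART health summary for each member disk (requires smartmontools)
    smartctl -H /dev/sda
    smartctl -H /dev/sdb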
Then I asked the support staff to replace the older of the two disks (going by manufacturing date), and surprisingly, since I had no proof of damage to that disk, they accepted.
After rebuilding the RAID set, the slowness disappeared somehow.
So, this is not a good answer, but may serve as a hint for anyone having this problem, too.

MySQL keeps running out of memory with Wordpress, how much memory do I need?

I have been experiencing MySQL crashing recently and really need to figure out what I need to do to get this to stop.
I have a 2GB Digital Ocean server running the following:
Ubuntu 14.04
PHP v5.5.9
Apache v20120211
MySQL v5.5.43
Wordpress v4.2
I also have 2GB of swap.
The last time MySQL crashed, this was in my error log:
http://laravel.io/bin/E304E
The important part seems (to me) to be this:
InnoDB: Fatal error: cannot allocate memory for the buffer pool
I am getting about 2000 page views per day, so I thought this should easily be enough memory to run the site.
Can anyone give me some ideas what I can do or what I definitely need to do to stop this happening?
Thanks
2000 page views per day is well within the range of what your server can handle. It's possible you're getting hit by bots and/or Apache isn't configured well for your server size.
Apache2Buddy is a quick diagnostic tool to help with your Apache configuration: $ curl -L http://apache2buddy.pl/ | perl. It prints a report with suggested configuration adjustments given your available RAM and application size. My guess is that you'll need to lower MaxRequestWorkers (in /etc/apache2/mods-available/mpm_prefork.conf).
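As a rough illustration only (the numbers below are assumptions for a 2GB box, not a tuned recommendation; let the Apache2Buddy report drive the real values):

    # /etc/apache2/mods-available/mpm_prefork.conf
    <IfModule mpm_prefork_module>
        StartServers             5
        MinSpareServers          5
        MaxSpareServers         10
        MaxRequestWorkers       40
        MaxConnectionsPerChild 500
    </IfModule>

The point is that MaxRequestWorkers multiplied by the size of one Apache process has to leave room for MySQL; otherwise the buffer pool allocation fails exactly as in the error log above.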
I'm also guessing that you have bots hitting your site, causing the huge volume of traffic that is crashing Apache. Check your access logs: $ cat /var/log/apache2/access.log.
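A quick way to see whether bots dominate, assuming the default combined log format (the user agent is the sixth quote-delimited field):

    # count requests per user agent, busiest first
    awk -F'"' '{print $6}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head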
I wrote an article on this situation if you want a deeper explanation, a method to stress test, or ideas on how to block some of the bot traffic: http://brunzino.github.io/blog/2016/05/21/solution-how-to-debug-intermittent-error-establishing-database-connection/

Not seeing changes after publishing changes

I have an ASP.NET 2.0 site which is compiled and has been deployed to the IIS virtual directory.
However, the new changes can't be seen by some users, while other users can see them on the web site. I have tried deleting the cache, cleaning internet settings, and using both Mozilla and Chrome, and still can't see the new changes.
We also tried clearing the connection pool on IIS.
Is there anything more I can possibly do?
Have you confirmed that the machines are hitting the right server? Ping the URL from the "bad" machines and see what it resolves to.
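That check is a one-liner from a "bad" machine (www.example.com stands in for the real site host):

    nslookup www.example.com
    ping www.example.com

Compare the resolved IP address against the server you actually deployed to.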
JavaScript changes may require browsers to download the newer script that was published (cached JavaScript might be older).
For session-based changes, if users retained an older session, the changes may not show up until they restart the session.
I found the answer: the site was split across two machines, and when the server admin updated the files, he only did so on one of the servers, not both. Now both are updated and running OK. Thanks, everyone, for the suggestions.
Two things to try (if Bill Gregg's suggestion doesn't work):
1) Bounce the web server services (a command sketch follows below).
2) Download a freeware app called CCleaner (formerly called Crap Cleaner... it was seriously called that). This app, when configured correctly, will clean all the junk web files off client machines.
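For (1), on IIS that is typically a single command from an elevated prompt:

    iisreset /restart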

Flash SWF On Solaris Won't Load When Also Loading Apache APR Library in JBoss

UPDATE + SOLUTION ===============================
Sorry to be posting the solution here instead of in a comment, but something about my work's filtering doesn't allow the comment functionality to work for me.
I ended up using the -b 0.0.0.0 option in JBoss to bind to all addresses, so I could try accessing machine A's server with machine B as the client, and vice versa. I found that it always failed to load when running on machine B, whether I was connecting from A or B.
I started Wireshark on a Windows machine on the same network and observed the TCP connection that was loading the webpage. I saw that the response for the .swf in the failing cases had a content length of about 2 million, but when I right-clicked the Wireshark logs and selected "view conversation" or something like that, the size of the total conversation fetching the .swf file was only 130,000. Looking at about:cache, that was about equal to what the browser ended up caching before saying "Done" on the page.
I ended up finding that there is a bug with the useSendFile property (http://community.jboss.org/thread/148651?tstart=0). It causes only part of the file to be sent if you are running low on kernel memory. Setting useSendFile="false" in our server.xml seems to have resolved the problem.
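For reference, a sketch of where that attribute lives in a JBoss 5.x profile (deploy/jbossweb.sar/server.xml; the surrounding connector attributes are the stock ones, and I'm keeping the attribute casing from the linked thread, so double-check it against your JBoss Web version):

    <!-- HTTP connector with sendfile disabled -->
    <Connector protocol="HTTP/1.1" port="8080"
               address="${jboss.bind.address}"
               connectionTimeout="20000" redirectPort="8443"
               useSendFile="false" />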
==================================================
Original Problem
I have a JBoss (5.1.0.GA) application server. I am using GraniteDS to connect between the application server and the client. The client side is flash-based.
GraniteDS requires the APR library (the Apache native library), so I am loading it. The JBoss logs say it loaded the Apache native library just fine (version 1.18, though I've also tried 1.20).
The issue is that when the APR library loads successfully, the Flash side of the application usually does not load. I have to hit refresh a bunch of times before it eventually loads; normally I see either a black webpage that says "Done" or a loading progress bar that never moves. The page will load eventually with enough refreshes, but it is not consistent, and this obviously will not work for our clients, who would have to clear their browser's cache every time.
This problem only exists on Solaris; our application works fine on Windows. We've tried multiple patch levels of Solaris and have verified with the "ldd" command that the library being loaded has all its dependencies present.
We've verified it isn't our SWF file's size by testing:
1) Our regular SWF (1660 kb).
2) A random large-ish SWF (950 kb).
3) A small SWF with one label component that says "Test" (277 kb).
All three were unable to load when JBoss was also loading the native library, and all loaded just fine without it. We need the native library for Granite to connect between the client and server, though, so not loading it isn't an option (unless there's some way to use the NIO connector with JBoss, but that appears unsupported). If there were a way to use the NIO connector, we wouldn't need the APR library.
Has anybody run into this before? Anybody have any ideas or recommendations?
Have you tried the JBoss native libraries for Solaris?
http://www.jboss.org/jbossweb/downloads/jboss-native-2-0-9.html

Wordpress on XAMPP causes long TTFB

Does anyone know why Wordpress on XAMPP causes a long time to first byte (TTFB), around 10-15 seconds? I'm running on a 100 Mbit box with the latest XAMPP and the latest Wordpress.
If we can even help you with this, I think more information is needed: if you are running it locally, network connection speed shouldn't matter. Maybe the specs of your machine, stuff like that.
On a slightly more helpful note (:-):
Try running a network sniffer such as Wireshark (or using Task Manager) to check what traffic is actually going through the network - is it full?
Is there a process taking up lots of CPU time, or thrashing the disk a lot?
Check the speed of the MySQL database
Try redownloading Wordpress and XAMPP, just to be sure
Re-install Wordpress (ie, run setup.php again)
Run the initialisation batch script for XAMPP
These are only wild suggestions, so I'm not sure how useful they will be.
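One more concrete check: time the first byte directly with curl (the localhost path is an assumption; point it at your actual Wordpress URL):

    # connect time vs. time-to-first-byte vs. total
    curl -o /dev/null -s -w "connect: %{time_connect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n" http://localhost/wordpress/

If connect time is tiny but TTFB is 10+ seconds, the delay is inside PHP/MySQL rather than the network, which narrows down where to look.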
