Apache random slow image loading - WordPress

I have a weird issue on my sites where some images load slowly. I do have caching in place (Cloudflare caching, Brotli compression enabled), but this question is about the first, "uncached" load. All of the images have already been compressed as far as they will go.
I'm wondering why some of the images have such a delay on the first load and if there's anything I can do to fix it.
Here's the network result from a site I didn't have cached.
As you can see, it doesn't seem to matter how large the images are. Some larger ones load faster, while some smaller ones are delayed.
Apache Global Configuration settings are as follows (default):
Start Servers 5
Minimum Spare Servers 5
Maximum Spare Servers 5
Server Limit 256
Max Request Workers 150
Max Connections Per Child 10000
Keep-Alive On
Keep-Alive Timeout 5
Timeout 300
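For reference, those cPanel labels map to the httpd.conf directives below (a sketch, assuming the prefork MPM, which is what these setting names suggest):
# Apache 2.4 prefork equivalents of the WHM "Global Configuration" values above
StartServers             5
MinSpareServers          5
MaxSpareServers          5
ServerLimit              256
MaxRequestWorkers        150
MaxConnectionsPerChild   10000
KeepAlive                On
KeepAliveTimeout         5
Timeout                  300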
Is there some configuration needed to allow all of these images to resolve quickly? The CPU usage when loading my sites (uncached) is minuscule; it never goes over 1%.
In total, I count 17 images (all under 5kb) loading on this particular site.
I understand that Nginx/LiteSpeed would probably speed up the loading, but this question is strictly about Apache 2.4+, without either of those installed.
DigitalOcean $20 droplet (2x CPUs - Intel E5-2650 v4, 4 GB RAM, 80 GB SSD).
Apache 2.4+/CentOS/cPanel 90.
Edit: Removing the Apache cache headers and relying only on Cloudflare solved the "random delay". But the question still remains: why does the first "uncached" load take so long for such small images?
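The "Apache cache headers" referred to here are typically set by a mod_expires/mod_headers block along these lines (a sketch only; the actual directives on the server may differ):
# .htaccess / vhost snippet (illustrative)
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png  "access plus 1 month"
    ExpiresByType image/jpeg "access plus 1 month"
</IfModule>
<IfModule mod_headers.c>
    Header set Cache-Control "public, max-age=2592000"
</IfModule>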

Related

What happens when nginx proxy_buffer_size is exceeded?

I am running a node server in AWS Elastic Beanstalk with Docker, which also uses nginx. One of my endpoints is responsible for image manipulation such as resizing etc.
My logs show a lot of ESOCKETTIMEDOUT errors, which would suggest the cause could be an invalid URL.
This is not the case, as it is fairly easy to handle that scenario, and when I open the supposedly invalid URL, it loads an image just fine.
My research has so far led me to make the following changes:
Increase the timeout of the request module to 2000
Set the container UV_THREADPOOL_SIZE environment variable to the maximum of 128
While (1) has helped improve response times somewhat, I don't see any improvement from (2). I have now come across the following warning in my server logs:
an upstream response is buffered to a temporary file /var/cache/nginx/proxy_temp/0/12/1234567890 while reading upstream, ...
This makes me think that the ESOCKETTIMEDOUT errors could be due to the proxy_buffer_size being exceeded. But, I am not sure and I'd like some opinion on this before I continue making changes based on a hunch.
So I have 2 questions:
Would the nginx proxy_buffer_size result in an error if (a) the size is exceeded when manipulating a large image, or (b) the volume of requests maxes out the buffer size?
What are the cost impacts, if any, of increasing the size (AWS memory, instance size, etc.)?
I have come across this helpful article, but wanted more opinions on whether it would even help in my scenario.
When the proxy buffers are exceeded, nginx writes the response to a temporary file as a kind of "swap", which uses your storage, and if that storage is billable your cost will increase. When you increase the proxy_buffer_size value you will use more RAM, which means you will have to pay for a larger instance, or try your luck with the current one.
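For reference, these are the nginx directives that control this buffering behaviour; the values below are purely illustrative starting points, not recommendations:
# inside the server/location block that proxies to the Node app
proxy_buffer_size        16k;     # buffer for the first part of the response (headers)
proxy_buffers            8 32k;   # in-memory buffers for the response body
proxy_busy_buffers_size  64k;
proxy_max_temp_file_size 1024m;   # 0 disables spilling to temporary files on disk
proxy_read_timeout       60s;     # raise if large image responses legitimately take longer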
There are two things you should never make the user wait on while they are processed: e-mails and images. Doing so can lead to timeouts or even make the whole application unavailable. You can always use larger timeouts, or even more robust instances for those endpoints, but once it scales you WILL have problems.
I suggest you approach this differently: return an image placeholder response and process the images asynchronously. Once they are available as versioned, resized images you can serve them normally. There is an AWS article describing something like this using Lambda.

To how many users per second can a 1 MB page be served through the 100 Mbps (12.5 MBps) uplink port of a dedicated server?

I am planning to increase the capacity of my dedicated server, as my current server is not able to manage the load of my application.
Hence, I need to understand the uplink port speeds offered by the various dedicated server providers.
In Amazon EC2 this is listed as "Network Performance", which only provisions 10 Gigabit on the largest instances.
Please advise.
Simply put, a 12.5MB/s connection is going to be able to serve a 1MB page to 12.5 users every second.
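For reference, the arithmetic behind that: 100 Mbps / 8 bits per byte = 12.5 MB/s, and 12.5 MB/s / 1 MB per page = 12.5 page loads per second, before allowing for any protocol overhead.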
That said, are you absolutely sure it's the network throughput that's causing the problem, rather than a CPU or memory limit? In my experience, the network link is very rarely the bottleneck.
Bear in mind that a 1MB page will often compress to far less than that, assuming the server's compression is configured correctly. And unless you're genuinely seeing 12.5 new users every second, they will likely have a lot of the static assets (images, scripts, etc) cached either in their browser or by an upstream proxy, so they won't be requested every time.
If you really are just serving a 1MB page to a very high number of users rather than being bound by CPU, then you might have more luck investigating a CDN (like Cloudflare or CloudFront) than simply upgrading to a faster link.

ASP.NET: high number of Requests Queued and context switching

We have a fairly popular site that has around 4 million users a month. It is hosted on a dedicated box with 16 GB of RAM and 2 processors with 24 cores.
At any given time the CPU is under 40% and memory usage is under 12 GB, but at peak traffic we see very poor performance. The site is very, very slow. We have 2 app pools, one for our main site and one for our forum. Only the main site is slow. We don't have any CPU or memory restrictions per app pool.
I have looked at the performance counters and saw something very interesting. At our peak time, for some reason, requests are being queued. Overall context-switching numbers are very high, in the 30,000-110,000 range.
As I understand it, high context switching is caused by locks. Can anyone give me an example of code that would cause a high number of context switches?
I am not too concerned with the context switching, and I don't think the numbers are huge. You have a lot of threads running in IIS (since it's a 24-core machine), so higher context-switching numbers are expected. However, I am definitely concerned about the request queuing.
I would do several things and see how they affect your performance counters:
Your server CPU is evidently under-utilized, since you run below 40% all the time. You can try setting a higher value for "Threads per processor limit" in IIS until you get to 50-60% utilization. The by-the-book optimal value is 20 threads per core, but it depends on the scenario, and you can experiment with higher or lower values; I would recommend trying a value >= 30. Low CPU utilization can also be a sign of blocking IO operations.
Adjust the "Queue Length" settings in IIS properties. If you have configured the "Threads per processor limit" to be 20, then you should configure the Queue Length to be 20 x 24 cores = 480. Again, if the requests are getting Queued, that can be a sign that all your threads are blocked serving other requests or blocked waiting for an IO response.
Don't serve your static files from IIS. Move them to a CDN, Amazon S3 or whatever else. This will significantly improve your server's performance, because thousands of requests will go somewhere else! If you MUST serve the files from IIS, then configure IIS file compression. In addition, use expires headers for your static content so it gets cached on the client, which will save a lot of bandwidth.
Use async IO wherever possible (reading/writing from disk, DB, network, etc.) in your ASP.NET controllers, handlers, etc. to make sure you are using your threads optimally. Blocking the available threads with blocking IO (which is what 95% of the ASP.NET apps I have seen in my life do) can easily cause the thread pool to be fully utilized under heavy load, and queuing will occur.
Do a general optimization pass to reduce the number of requests that hit your server and the processing time of individual requests. This can include minification and bundling of your CSS/JS files, refactoring your JavaScript to make fewer round trips to the server, refactoring your controller/handler methods to be faster, etc. I have added links below to the Google and Yahoo recommendations.
Disable ASP.NET debugging in IIS.
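For the queue length and debugging points above, the relevant settings live in applicationHost.config and web.config respectively. A minimal sketch (the app pool name is just a placeholder):
<!-- applicationHost.config: raise the request queue length for your app pool -->
<applicationPools>
  <add name="MainSiteAppPool" queueLength="480" />
</applicationPools>
<!-- web.config: make sure debug compilation is off in production -->
<system.web>
  <compilation debug="false" />
</system.web>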
Google and Yahoo recommendations:
https://developers.google.com/speed/docs/insights/rules
https://developer.yahoo.com/performance/rules.html
If you follow all this advice, I am sure you will see some improvement!

Does an increase in the number of requests from the server cause the website to slow down?

On my office website, a webpage has 3 CSS files, 2 JavaScript files, 11 images and 1 page request, for a total of 17 requests to the server. If 10,000 people visit my office site...
Could this slow the website down because of the extra requests?
And would the server have any issues due to the heavy traffic?
My tiny office server has:
Intel i3 processor
Nvidia 2 GB graphics card
Microsoft Windows Server 2008
8 GB DDR3 RAM
500 GB hard disk
The website is developed in ASP.NET.
The connection speed is 10 Mbps download and 2 Mbps upload, using a static IP address.
There are many reasons a website may be slow.
A huge spike in additional traffic.
Extremely large or non-optimized graphics.
A large number of external calls.
Server issues.
All websites should have optimized images, Flash files and videos. Large media files slow down the overall loading of each page, so optimize each image; PNGs in particular can often be optimized to give better-looking images at a smaller file size. You could also run a traceroute to your site.
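For the image optimization point, one way to do this from the command line, assuming a tool like optipng is available (it recompresses PNGs losslessly; any similar optimizer will do):
optipng -o5 image.png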
Hope this helps.
This question is impossible to answer because there are so many variables. It sounds like you're hypothesising that you will have 10,000 simultaneous users. Do you really expect there to be that many?
The only way to find out if your server and site hold up under that kind of load is to profile it.
There is a tool called Apache Bench (http://httpd.apache.org/docs/2.0/programs/ab.html) which you can run from the command line to simulate a number of requests to your server and benchmark it. The tool comes with an install of Apache; you can then simulate 10,000 requests to your server and see how the request time holds up. At the same time, you can run Performance Monitor in Windows to diagnose any bottlenecks.
Example usage, taken from Wikipedia:
ab -n 100 -c 10 http://www.yahoo.com/
This will execute 100 HTTP GET requests, processing up to 10 requests concurrently, to the specified URL, in this example "http://www.yahoo.com".
I don't think ab downloads your page dependencies (JS, CSS, images), but there are probably other tools you can use to simulate that.
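To approximate the scenario from the question, something along these lines could be run against the site itself (the host name is a placeholder; -k turns on HTTP keep-alive):
ab -n 10000 -c 100 -k http://your-office-site.example/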
I'd recommend that you make sure compression is enabled on your site and that caching is set up, as this will significantly reduce the load and the number of requests for very little effort.
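Since this is an ASP.NET site on IIS, compression and client-side caching of static content can be switched on in web.config; a minimal sketch (the max-age value is just an example):
<system.webServer>
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  <staticContent>
    <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
  </staticContent>
</system.webServer>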
Rather than hardware, you should think about your server's upload capacity. If your upload bandwidth is low, of course it would be a problem.
The most likely reason is that one session is locking all the other requests.
If you are not using session state, turn it off and check again.
Related:
Replacing ASP.Net's session entirely
jQuery Ajax calls to web service seem to be synchronous

IIS 7 - Does the number of HTTP connections matter?

I'm optimizing a very popular website and since the user base is constantly growing I'm interested in what matters when it comes to scaling.
Currently I am scaling by adding more CPU power / RAM to the server. This works nicely - even though the site is quite popular, CPU usage is currently at only 10%.
So, if possible, I'd like to keep doing that. What I am worried about is whether I could reach a point where CPU usage is low but users have problems connecting because of the number of HTTP connections. Or is it better to scale horizontally, by adding more servers to a cluster?
Thanks!
Eventually, just adding more memory won't be enough. There are concurrent connection limits at the TCP level rather than in IIS (though both factors come into play; IIS can handle about 3,000 connections without strain).
You probably won't run into the situation you describe, where CPU usage is low but the number of HTTP connections is high, unless it is a largely static site; in general, the more connections are open, the higher the CPU usage.
But regardless of this, what a popular site really needs is redundancy, which is essential for a site with a large user base. There is nothing more annoying to users than the site being down because your sole server has gone offline for some reason. If you have 2 servers behind a load balancer, you can grow the site and even take a server offline with less fear of the whole site going down.
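As an aside, if you want to see how close you actually get to those connection counts, the IIS "Current Connections" performance counter can be sampled from the command line (assuming the English counter names):
typeperf "\Web Service(_Total)\Current Connections"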
