WordPress site loads slowly - slow server response time?

My website (www.scaredycut.com) loads very slowly. According to a load test, the initial "wait" is a long bottleneck; it ranges between 600 and 1900 ms.
Google's PageSpeed Insights test says:
In our test, your server responded in 0.59 seconds. There are many factors that can slow down your server response time. Please read our recommendations to learn how you can monitor and measure where your server is spending the most time.
Google's recommendations don't tell you how to measure or get to the bottom of the problem. Does anyone have any suggestions as to where I should start? Should I contact my host?
Thanks for the help.

You need a complete PageSpeed overhaul. And are you on cheap cloud hosting? Your bare .com is taking over 2 seconds to load on its own, before any assets.
Install W3 Total Cache and compare how the site performs with it activated versus without.
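If you want to measure that initial "wait" yourself rather than relying only on PageSpeed, here is a quick sketch using curl's built-in timing variables (the URL is the one from the question; run it from a couple of different networks):

curl -o /dev/null -s -w "DNS: %{time_namelookup}s  Connect: %{time_connect}s  TTFB: %{time_starttransfer}s  Total: %{time_total}s\n" http://www.scaredycut.com/

If time_starttransfer stays high from several locations, the time is being spent on the server (PHP, database, hosting) rather than the network, which is also useful evidence when you contact your host.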

Related

Incredibly high Initial server response time

Is anyone here seeing an incredibly high initial server response time? We're seeing upwards of 1.5 s; however, when using network tools the server response time sits at under 150 ms, so unless Google is running these speed tests from a cave somewhere in the Amazon, we can't figure it out.
With the upcoming changes to ranking based on these arbitrary scores, does anyone have any insights here?
I'm experiencing very inaccurate performance measurements from PageSpeed Insights. It's disappointing that there is so little detail about the timing and network requests involved. The TTFB when running via their website is around 16 s, but when I test from nearly anywhere else, including via a proxy on the other side of the world, through Tor, etc., the real TTFB is around 190 ms.
There are problems with PageSpeed Insights that Google is not addressing. So I suppose my answer is that it's a disappointing and inaccurate tool.
All of that said, I would suggest trying the built-in Lighthouse tool inside Chrome DevTools. It will show better performance scores, but the outstanding question is how Google itself is measuring my website's performance.
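For comparing against PSI's numbers repeatedly, Lighthouse can also be run from the command line. A sketch, assuming Node.js is installed and a recent Lighthouse version (swap in your own URL):

npx lighthouse https://example.com --only-categories=performance --preset=desktop --output=html --output-path=./report.html

Running it a few times from your own machine at least gives you a stable local baseline to hold against whatever Google's hosted test reports.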

Google Cloud Compute Engine - WordPress high TTFB

I am running a LAMP stack on a customized Google Cloud Compute Engine instance, primarily to host WordPress websites running WooCommerce stores.
The server specs are:
RAM: 5 GB, Cores: 1, Disk: 30 GB, OS: CentOS 7, MariaDB version: 5.5.64, PHP version: 7.3
I am currently facing extreme TTFB values of 10-20 seconds even with very low traffic. I have done the following optimisations to improve the timing, but they don't seem to help. The site has close to 1500 products.
WordPress caching using Hummingbird and Autoptimize (minify, GZIP compression, etc.), a custom .htaccess with expires headers, APCu PHP cache, Cloudflare CDN, and compressed images.
Optimized MariaDB with sensible memory allocation, and allocated sensible memory to Apache and PHP as well.
Tried adding more cores and increasing the memory of the compute engine, in vain.
Disabling the theme and templates has little to no effect.
All of the above optimizations have had little effect on the TTFB. Is this a server/network-related issue on my Google Cloud Compute Engine instance?
Please check the TTFB values at the test link below:
TTFB Test Results
Thanks in advance!
I think you can measure the response times yourself. Try measuring the time spent waiting for the initial response by pressing F12 in your browser, opening the "Network" tab, and then loading your website in that same window.
You will see the response time of each request made to your website. If you click a specific request and then select the "Timing" tab, you will be able to see the TTFB, and from there you can try to work out where the most time is being spent.
I believe this is more related to your installation than to the server itself.
If you want to test the server connection itself, you could bypass the application side and use a traceroute or iperf to test the connection times to your server's external IP from your local computer; a traceroute will only work if you have ICMP traffic allowed.
And the last thing is the same as John mentioned above: check whether your server is swapping memory, or monitor the CPU and memory in use while you run the TTFB test. That will give you an idea of whether the problem is with the server or with the website and its configuration.
Additionally, here are some recommendations to reduce TTFB (https://wp-rocket.me/blog/how-to-reduce-ttfb-wordpress-site/). Hoping it helps somehow.
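As a rough sketch of the "watch CPU and memory while you run the TTFB test" suggestion (standard Linux tools, run over SSH on the CentOS instance; the store URL below is a placeholder):

free -m            # is swap already in use?
vmstat 1 10        # watch si/so for swapping and the cpu columns for saturation
top -o %CPU        # see whether httpd, php-fpm or mysqld spikes during the test

# in another terminal, or from your own machine, measure TTFB while the above run:
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\n" https://your-store.example/

If the box sits mostly idle while TTFB is still 10-20 s, the problem is more likely in the WordPress/WooCommerce stack (slow queries, plugins, external calls) than in the instance size.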

nginx: protecting against Screaming Frog and too-greedy crawlers (so no real DDoS, but close)

We have seen several occasions where a simple Screaming Frog run would almost take down our server (it does not go down, but it slows almost to a halt and PHP processes go crazy). We run Magento ;)
We have now applied this nginx ruleset: https://gist.github.com/denji/8359866
But I was wondering whether there is a stricter or better way to kick out too-greedy crawlers and Screaming Frog crawl episodes. Say, after 2 minutes of intense requesting we should already know that some automated system is sending too many requests (without blocking Googlebot, of course).
Help and ideas appreciated.
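For reference, nginx's own limit_req module is the usual building block for this kind of throttling. A minimal sketch (the zone name, size, and rates are made-up starting values that would need tuning for your traffic, and the answer below argues this only treats the symptom):

# in the http {} block: shared 10 MB zone keyed by client IP, ~5 requests/second per IP
limit_req_zone $binary_remote_addr zone=crawlers:10m rate=5r/s;

# in the relevant server {} or location {} block: allow short bursts, reject the rest
limit_req zone=crawlers burst=20 nodelay;
limit_req_status 429;

Verified Googlebot could be exempted separately (it publishes its IP ranges), but that is its own exercise.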
Seeing how a simple SEO utility scan can bring your server to a crawl, you should realize that blocking spiders isn't the real solution. Suppose you have managed to block every spider in the world, or created a sophisticated ruleset defining which request rate is legitimate and which is not.
It is still obvious that your server can't handle even a few visitors at the same time. A few more visitors will bring your server down whenever your store receives more traffic.
You should address the main performance bottleneck, which is PHP.
PHP is slow. With Magento it's slower. That's it.
Imagine that every request to your Magento store causes dozens and dozens of PHP files to be scanned and parsed. That hits the CPU hard.
If your PHP-FPM configuration is unoptimized, it hits your RAM hard as well.
These are the things which should be done in order of priority to ease the PHP strain:
Make use of Full Page Cache
Really, it's a must with Magento. You don't lose anything; you only gain performance. Common choices are:
Lesti FPC. This is the easiest to install and configure. It works most of the time even if your theme is badly coded. The profit: your server will no longer go down and will be able to serve more visitors. It can even store its cache in Redis if you have enough RAM and are willing to configure it. It will cache, and it will cache things fast.
Varnish. It is the fastest caching server, but it's tricky to configure if you're running Magento 1.9: you will need the Turpentine Magento plugin, which is quite picky to get working if your theme is not well coded. If you're running Magento 2, it's compatible with Varnish out of the box and quite easy to configure.
Adjust PHP-FPM pool settings
Make sure that your PHP-FPM pool is configured properly. A pm.max_children value that is too small will make for slow page requests; a value that is too high might hang your server because it will run out of RAM. Set it to 50% of total RAM divided by 128 MB for starters.
Also make sure to adjust pm.max_requests and set it to a sane number, e.g. 500. Leaving it at 0 (the default) often leads to "fat" PHP processes that eventually eat all of the RAM on the server.
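A sketch of those two settings in a pool file (the path and the example numbers are illustrative; the children count follows the 50%-of-RAM / 128 MB rule of thumb above for an 8 GB box):

; /etc/php-fpm.d/www.conf (path varies by distro)
pm = dynamic
; 8 GB RAM: 50% = 4096 MB, 4096 / 128 = 32
pm.max_children = 32
pm.start_servers = 8
pm.min_spare_servers = 4
pm.max_spare_servers = 12
; recycle workers so they don't grow without bound
pm.max_requests = 500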
PHP 7
If you're running Magento 2, you really should be using PHP 7 since it's twice as fast as PHP 5.5 or PHP 5.6.
The same suggestions, with configs, are in my blog post "Magento 1.9 performance checklist".

Why does Akamai take up to 7 minutes to purge a URL?

When I purge a URL with the Akamai Luna Control Center tool, it says it will take up to 7 minutes. In my tests it takes between 1 and 180 seconds over 95% of the time.
Why does it take so long? What is the architecture behind Akamai in this regard? Surely there are many edge servers, but you could issue a multitude of requests to purge all of them within seconds, so I don't think the reason is technical.
I am thinking maybe they add your request to a queue, and the queue gets run every N seconds and can only handle X items, so you may get queued up.
Anyway, that is just speculation. Does anyone know the real reason?
Removing the file from 180,000 servers is definitely time consuming. Akamai is currently working on "Fast Invalidate" for purging, which should bring the time down to seconds instead of minutes. It's available in Beta right now, for anyone who's interested.
Currently, the CCU V2 API is down to less than 4 minutes in most cases. I've just written a getting started guide for the CCU API:
https://community.akamai.com/community/developer/blog/2015/08/19/getting-started-with-the-v2-open-ccu-api?sr=stream
The answer is pretty simple. Your purge has to be executed on thousands and thousands of machines.

Does an increase in the number of requests to the server cause the website to be slow?

On my office website, a webpage has 3 CSS files, 2 JavaScript files, 11 images, and 1 page request, for a total of 17 requests to the server. If 10,000 people visit my office site...
Could this slow the website down because of the larger number of requests?
And would the huge traffic cause any issues for the server?
My tiny office server has:
Intel i3 processor
Nvidia 2 GB graphics card
Windows Server 2008
8 GB DDR3 RAM
500 GB hard disk
The website is developed in ASP.NET.
Network speed is 10 Mbps download and 2 Mbps upload, using a static IP address.
There are many reasons a website may be slow:
A huge spike in additional traffic.
Extremely large or non-optimized graphics.
A large number of external calls.
A server issue.
All websites should have optimized images, Flash files, and videos. Large media files slow down the overall loading of each page. Optimize each image; PNG images in particular can offer better-looking images at smaller file sizes. You could also run a traceroute to your site.
Hope this helps.
This question is impossible to answer because there are so many variables. It sounds like you're hypothesising that you will have 10,000 simultaneous users; do you really expect there to be that many?
The only way to find out if your server and site hold up under that kind of load is to profile it.
There is a tool called Apache Bench (http://httpd.apache.org/docs/2.0/programs/ab.html) which you can run from the command line to simulate a number of requests to your server and benchmark it. The tool comes with an install of Apache. You can simulate 10,000 requests to your server and see how the request time holds up, and at the same time run Performance Monitor in Windows to diagnose any bottlenecks.
Example usage, taken from Wikipedia:
ab -n 100 -c 10 http://www.yahoo.com/
This will execute 100 HTTP GET requests, processing up to 10 requests concurrently, to the specified URL, in this example "http://www.yahoo.com/".
I don't think ab downloads your page dependencies (JS, CSS, images), but there are probably other tools you can use to simulate that.
I'd also recommend that you enable compression on your site and set up caching, as this will significantly reduce the load and the number of requests for very little effort.
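Since the site is ASP.NET on Windows Server 2008, that normally means IIS; here is a minimal web.config sketch for the compression and client-side caching just mentioned (these are standard IIS 7+ elements, but check that the dynamic compression module is installed):

<configuration>
  <system.webServer>
    <!-- gzip static and dynamic responses -->
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
    <!-- let browsers cache static assets (CSS/JS/images) for 7 days -->
    <staticContent>
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>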
Rather than hardware, you should think about your server's upload capacity. If your upload bandwidth is low, that will of course be a problem.
The most likely reason is that one session locks all of the other requests.
If you do not use session state, turn it off and check again.
Related:
Replacing ASP.Net's session entirely
jQuery Ajax calls to web service seem to be synchronous
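To illustrate the session-locking point above: ASP.NET serializes requests that hold a read/write session lock, so marking pages read-only (or switching session state off where it isn't needed) lets requests run in parallel. A sketch, assuming Web Forms pages:

<%-- per page: take only a read lock on session so requests are not serialized --%>
<%@ Page Language="C#" EnableSessionState="ReadOnly" %>

<!-- or globally in web.config, if the application does not use session at all -->
<configuration>
  <system.web>
    <sessionState mode="Off" />
  </system.web>
</configuration>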
