I am running a LAMP stack on a customized Google Cloud Compute Engine instance, primarily to host WordPress websites running WooCommerce stores.
The server specs are:
RAM: 5 GB, Cores: 1, Disk: 30 GB, OS: CentOS 7, MariaDB version: 5.5.64, PHP version: 7.3
I am currently seeing extreme TTFB values of 10-20 seconds even with very low traffic. I have done the following optimisations to improve the timing, but nothing seems to help. The site has close to 1,500 products.
WordPress caching using Hummingbird and Autoptimize (minification, GZIP compression, etc.), a custom .htaccess with expires headers, APCu PHP caching, the Cloudflare CDN, and compressed images.
Optimized MariaDB with appropriate memory allocation, and allocated appropriate memory to Apache and PHP as well.
Tried adding more cores and increasing the memory of the Compute Engine instance, in vain.
Disabling the theme and templates has little to no effect.
All of the above optimizations have had little effect on the TTFB timings. Is this a server/network-related issue on my Google Cloud Compute Engine instance?
Please check the TTFB values in the test link below:
TTFB Test Results
Thanks in advance !
I think you can measure the response times. Try measuring the time spent waiting for the initial response: open your browser's developer tools with F12, go to the "Network" tab, and then load your website in the same window.
You will see the response time for each request made to your website. If you click a specific request and then select its timing, you will be able to see the TTFB, and with that try to catch where the most time is being spent.
I believe this is related more to your installation than to the server itself.
If you want to test your server connection, you could bypass the application side and use a traceroute or iperf to test the TCP connection times from your local computer to your server's external IP. This will only work if you have ICMP traffic allowed.
And the last thing is the same as John mentioned above: check whether your server is swapping memory, or monitor the CPU and memory in use while you run the TTFB test. That will give you an idea of whether the problem is with the server or with the website and its configuration.
Additionally, here are some recommendations to reduce TTFB: https://wp-rocket.me/blog/how-to-reduce-ttfb-wordpress-site/. Hoping it can help somehow.
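As a quick alternative to the browser's Network tab, curl can break down the same timings from the command line. A small helper sketch (the function name is made up; point it at your own URL):

```shell
# ttfb: print per-phase timings for a single request to the given URL.
# Uses curl's built-in write-out variables; the function name is just
# an illustrative choice.
ttfb() {
  curl -o /dev/null -s -w 'DNS: %{time_namelookup}s  Connect: %{time_connect}s  TTFB: %{time_starttransfer}s  Total: %{time_total}s\n' "$1"
}

# Usage: ttfb https://your-site.example/
```

Running it a few times in a row also shows whether the slowness is consistent or only on cold requests.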
Google provides a speed index for a given URL by running a Lighthouse measurement:
https://developers.google.com/speed/pagespeed/insights
The data differs a lot from my own measurements. My guess is that for the mobile page test they use some 4G emulation on a machine located in the US, while the web server I tested is located in Europe.
Any idea where to find information on the geographic location they perform the testing from?
Lighthouse uses "a slow 4G connection". This choice influences the relative importance of network speed versus page weight.
From the Lighthouse Github repo:
How does Lighthouse use network throttling, and how can I make it better?
Good question. Network and CPU throttling are applied by default in a Lighthouse run. The network attempts to emulate slow 4G connectivity and the CPU is slowed down 4x from your machine's default speed. If you prefer to run Lighthouse without throttling, you'll have to use the CLI and disable it with the --throttling.* flags mentioned above.
And...
Are results sent to a remote server?
Nope. Lighthouse runs locally, auditing a page using a local version of the Chrome browser installed on the machine. Report results are never processed or beaconed to a remote server.
From the web.dev website:
All tests are run using a simulated mobile device, throttled to a fast 3G network & 4x CPU slowdown.
From the web.dev Github repo:
Note: this repo contains the written content for web.dev. The client-side JS and server are not yet open source but we hope to share them soon!
Concluding, I would say that web.dev runs Lighthouse in the browser, using local JS, but Google is not very clear about this. My claim can be backed up by the fact that people expect Lighthouse to be able to audit local websites.
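To take the hosted test's location and throttling out of the equation entirely, you can run Lighthouse yourself with the CLI flags the FAQ mentions. A rough invocation (assumes Node and Chrome are installed locally; flag names per the Lighthouse CLI):

```shell
# Run Lighthouse locally with throttling disabled
npx lighthouse https://example.com --throttling-method=provided --output=html --output-path=./report.html
```

Comparing this local, unthrottled run against the hosted result makes it easier to see how much of the difference is emulation versus geography.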
Recently, I have had an issue with my website between around 6 PM GMT and 4 AM GMT.
At first I thought it was due to visitors at high-traffic times, but from midnight to 4 AM I have fewer visitors than at times when the problem doesn't occur. My server gets high CPU usage. Here's the top command screenshot.
I have a dedicated server with 8 cores and 8 GB of memory.
You should provide more info about your issue. That's just a screenshot, and it's pretty hard to deduce anything based only on that! Check your Apache server logs for that specific cPanel user (/usr/local/apache/domlogs/cpanel_user/domain.tld is the Apache access log file for that account), run SHOW PROCESSLIST in MySQL as suggested by others, or install a tool like mytop or mtop.
UPDATE: This was my mistake; see my comment below. CloudFront now works great with the new settings.
Sometimes DNS takes 600 ms and then waits another half a second, which makes a 90 KB file wait more than 1 second. Sometimes the Pingdom wait time even shows a full second. If I try another test, it sometimes drops to 90 ms in total.
I understand that the first request takes more time because CloudFront first needs to fetch the file from our server. I set the cache time to 86,400 seconds, which should mean it serves the file from cache for a whole 24 hours. But if I try Pingdom just two hours after the first test, it is again very slow.
Below are my results and settings. Am I missing something?
In most cases it's DNS that causes the delay, because Amazon itself is really scalable.
I had similar issues with my ISP and was able to resolve them quickly by changing DNS servers.
Try changing your DNS to Google Public DNS:
IPv4
8.8.8.8
8.8.4.4
IPv6
2001:4860:4860::8888
2001:4860:4860::8844
Google Public DNS documentation
Or use OpenDNS:
208.67.220.220
208.67.222.222
OpenDNS documentation
CloudFront is not only scalable; it also aims to eliminate bottlenecks and speed things up. AWS CloudFront is a service with low latency and fast transfer rates.
Here are some of the reasons requests may be slower when using CloudFront (this covers most problems):
The requesting edge location may be receiving a large number of requests.
The edge server closest to the client may be farther away than the origin web server (geographic delay).
DNS lookups can be delayed.
It is not very likely, but make sure the X-Cache response header reports a hit from CloudFront.
The object may be missing from the cache (a cache miss).
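One way to check the last two points is to inspect the response headers for a single object served through your distribution. A small sketch (the function name and URL are placeholders):

```shell
# check_edge_cache: show the cache-related headers CloudFront returns
# for the given URL. "X-Cache: Hit from cloudfront" means the edge served
# it from cache; "Miss from cloudfront" means it went back to the origin.
check_edge_cache() {
  curl -sI "$1" | grep -i -E '^(x-cache|age|cache-control):'
}

# Usage: check_edge_cache https://dxxxxxxxx.cloudfront.net/some-asset.js
```

Repeating the check a couple of minutes apart shows whether the object is actually staying in the edge cache for your configured TTL.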
Detailed troubleshooting is difficult without knowing what the test is or what the conditions are.
If logging is enabled, further troubleshooting is possible, so it is generally recommended to enable logging.
If you have any questions, please feel free to ask!
Thank you.
We have seen several incidents where a simple Screaming Frog crawl would almost take down our server (it does not actually go down, but it slows almost to a halt and PHP processes go crazy). We run Magento ;)
Now we have applied this Nginx ruleset: https://gist.github.com/denji/8359866
But I was wondering if there is a stricter or better way to kick out overly greedy crawlers and Screaming Frog crawl sessions. Say, after two minutes of intense requesting, we should already know that someone is running too many requests from some automated system (not blocking Googlebot, of course).
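For context, the core of that kind of ruleset is Nginx's limit_req machinery. A minimal sketch (the zone name, rate, and burst values here are illustrative, not recommendations):

```nginx
# Shared state keyed by client IP: 10 MB of zone memory, 5 requests/second.
limit_req_zone $binary_remote_addr zone=crawler_guard:10m rate=5r/s;

server {
    listen 80;

    location / {
        # Allow short bursts, then return 429 to over-eager clients.
        limit_req zone=crawler_guard burst=20 nodelay;
        limit_req_status 429;
    }
}
```

Keying on IP alone will also throttle legitimate crawlers behind one address, which is part of why rate limiting alone is a blunt instrument.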
Help and ideas appreciated
Seeing how a simple SEO utility scan can bring your server to a crawl, you should realize that blocking spiders isn't the real solution. Suppose you managed to block every spider in the world, or created a sophisticated ruleset defining which number of requests per second is legit and which is not.
It is still obvious that your server can't handle even a few visitors at the same time. A few more visitors will bring your server down whenever your store receives more traffic.
You should address the main performance bottleneck, which is PHP.
PHP is slow. With Magento it's slower. That's it.
Imagine that every request to your Magento store causes scanning and parsing of dozens and dozens of PHP files. This hits the CPU hard.
If you have an unoptimized PHP-FPM configuration, it hits your RAM hard as well.
These are the things which should be done in order of priority to ease the PHP strain:
Make use of Full Page Cache
Really, it's a must with Magento. You don't lose anything; you only gain performance. Common choices are:
Lesti FPC. This is the easiest to install and configure. It works most of the time even if your theme is badly coded. The profit: your server will no longer go down and will be able to serve more visitors. It can even store its cache in Redis if you have enough RAM and are willing to configure it. It will cache, and it will cache things fast.
Varnish. It is the fastest caching server, but it's tricky to configure if you're running Magento 1.9: you will need the Turpentine Magento plugin, which is quite picky to get working if your theme is not well coded. If you're running Magento 2, it's compatible with Varnish out of the box and quite easy to configure.
Adjust PHP-FPM pool settings
Make sure that your PHP-FPM pool is configured properly. A value for pm.max_children that is too small will make page requests slow; a value that is too high might hang your server because it will run out of RAM. As a starting point, set it to (50% of total RAM) divided by 128 MB.
Make sure to adjust pm.max_requests and set it to a sane number, e.g. 500. Leaving it at 0 (the default) often leads to "fat" PHP processes which will eventually eat all of the RAM on the server.
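As a sketch, for the 8 GB server mentioned here the pool section might look like this (the numbers just follow the rule of thumb above and are starting points, not prescriptions; the file path varies by distro):

```ini
; e.g. /etc/php-fpm.d/www.conf
pm = dynamic
; (50% of 8192 MB) / 128 MB = 32
pm.max_children = 32
pm.start_servers = 8
pm.min_spare_servers = 4
pm.max_spare_servers = 12
; recycle workers so long-lived processes cannot bloat RAM
pm.max_requests = 500
```

After changing the pool, watch actual per-child memory usage under load and adjust the divisor accordingly, since 128 MB per child is only an estimate.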
PHP 7
If you're running Magento 2, you really should be using PHP 7, since it's twice as fast as PHP 5.5 or 5.6.
The same suggestions, with configs, are in my blog post, Magento 1.9 performance checklist.
On my office website, a webpage has 3 CSS files, 2 JavaScript files, 11 images, and 1 page request, 17 requests to the server in total. What if 10,000 people visit my office site?
Could this slow the website down due to the number of requests?
And are there any issues for the server due to huge traffic?
My tiny office server has:
Intel i3 processor
Nvidia 2 GB graphics card
Microsoft Server 2008
8 GB DDR3 RAM, and
a 500 GB hard disk.
The website is developed in ASP.NET.
Net speed is 10 Mbps download and 2 Mbps upload, using a static IP address.
There are many reasons a website may be slow:
A huge spike in additional traffic.
Extremely large or non-optimized graphics.
A large number of external calls.
A server issue.
All websites should have optimized images, Flash files, and videos. Large media files slow down the overall loading of each page. Optimize each image; PNG images have weighted optimization that can offer better-looking images at smaller file sizes. You could also run a traceroute to your site.
Hope this helps.
This question is impossible to answer because there are so many variables. It sounds like you're hypothesising that you will have 10,000 simultaneous users; do you really expect there to be that many?
The only way to find out if your server and site hold up under that kind of load is to profile it.
There is a tool called Apache Bench (http://httpd.apache.org/docs/2.0/programs/ab.html) which you can run from the command line to simulate a number of requests to your server and benchmark it. The tool comes with an install of Apache. You can simulate 10,000 requests to your server and see how the request time holds up. At the same time, you can run Performance Monitor in Windows to diagnose any bottlenecks.
Example usage, taken from Wikipedia:
ab -n 100 -c 10 http://www.yahoo.com/
This will execute 100 HTTP GET requests, processing up to 10 requests concurrently, to the specified URL, in this example "http://www.yahoo.com".
I don't think that downloads your page's dependencies (JS, CSS, images), but there are probably other tools you can use to simulate that.
I'd recommend that you make sure compression is enabled on your site and set up caching, as this will significantly reduce the load and the number of requests for very little effort.
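Since the site is ASP.NET on IIS, both compression and client-side caching can be switched on in web.config. A minimal sketch (element names follow the IIS configuration schema; the max-age value is illustrative):

```xml
<configuration>
  <system.webServer>
    <!-- Compress static and dynamic responses -->
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
    <!-- Let browsers cache static files for 7 days -->
    <staticContent>
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
    </staticContent>
  </system.webServer>
</configuration>
```

With cached static assets, repeat visitors make only the page request instead of all 17, which directly addresses the traffic concern above.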
Rather than hardware, you should think about your server's upload capacity. If your upload bandwidth is low, it will of course be a problem.
The most likely reason is that one session locks all the other requests.
If you don't use session state, turn it off and check again.
Related:
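For pages that only read session data, `EnableSessionState="ReadOnly"` in the @Page directive avoids the write lock; if session state isn't needed at all, it can be disabled site-wide. A sketch of the web.config form:

```xml
<configuration>
  <system.web>
    <!-- Disable session state for the whole site -->
    <sessionState mode="Off" />
  </system.web>
</configuration>
```

Without the per-session lock, concurrent requests from the same user no longer serialize behind each other.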
Replacing ASP.Net's session entirely
jQuery Ajax calls to web service seem to be synchronous