Google provides a Speed Index for a given URL by running a Lighthouse measurement:
https://developers.google.com/speed/pagespeed/insights
The data differs a lot from my own measurements. My guess is that for the mobile page test they use some 4G emulation on a machine located in the US, while my tested web server is located in Europe.
Any idea where to find information on the geo location they perform the testing from?
Lighthouse uses 'a slow 4G connection'. This choice influences the importance of network speed versus page weight.
From the Lighthouse Github repo:
How does Lighthouse use network throttling, and how can I make it better?
Good question. Network and CPU throttling are applied by default in a Lighthouse run. The network attempts to emulate slow 4G connectivity and the CPU is slowed down 4x from your machine's default speed. If you prefer to run Lighthouse without throttling, you'll have to use the CLI and disable it with the --throttling.* flags mentioned above.
And...
Are results sent to a remote server?
Nope. Lighthouse runs locally, auditing a page using a local version of the Chrome browser installed on the machine. Report results are never processed or beaconed to a remote server.
From the web.dev website:
All tests are run using a simulated mobile device, throttled to a fast 3G network & 4x CPU slowdown.
From the web.dev Github repo:
Note: this repo contains the written content for web.dev. The client-side JS and server are not yet open source but we hope to share them soon!
In conclusion, I would say that web.dev runs Lighthouse in the browser, using local JS, but Google is not very clear about this. This is backed up by the fact that people expect Lighthouse to be able to audit local websites.
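If you want to rule out throttling and geography altogether, you can run Lighthouse yourself from the command line. A minimal sketch, assuming Node.js and Chrome are installed locally (flag names may vary between Lighthouse versions):

npm install -g lighthouse
# default run: simulated slow-4G network and 4x CPU slowdown
lighthouse https://example.com --output html --output-path ./throttled.html
# no simulated throttling: uses your real connection and CPU
lighthouse https://example.com --throttling-method=provided --output html --output-path ./unthrottled.html

Comparing the two reports against PageSpeed Insights should show how much of the difference comes from throttling rather than from server location.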
Related
I am running a LAMP stack on a customized Google Cloud Compute Engine instance, primarily to host WordPress websites running WooCommerce stores.
Following are server specs:
RAM: 5 GB, Cores: 1, Disk space: 30 GB, OS: CentOS 7, MariaDB version: 5.5.64, PHP version: 7.3
Currently I am facing extreme TTFB values of 10-20 seconds even with very low traffic. I have done the following optimisations to improve the timing, but they don't seem to help. The site has close to 1,500 products.
WordPress caching using Hummingbird and Autoptimize (minify, GZIP compression, etc.), a custom .htaccess with expires headers, APCu PHP caching, Cloudflare CDN, and compressed images.
Optimized MariaDB with appropriate memory allocation, and allocated appropriate memory to Apache and PHP as well.
Tried adding more cores and increasing the memory of the Compute Engine instance, in vain.
Disabling the theme and templates has little to no effect.
All the above optimizations have had little effect on the TTFB timings. Is this a server/network-related issue on my Google Cloud Compute Engine instance?
Please check the TTFB values at the test link below:
TTFB Test Results
Thanks in advance !
I think you can measure the response times. Try to measure the time spent waiting for the initial response by opening your browser's developer tools (F12), switching to the "Network" tab, and then loading your website in that same window.
You will get the response times for each request made to your website. If you click a specific request and then select the Timing tab, you will be able to see the TTFB, and with that you can try to find where the most time is being spent.
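You can also measure TTFB from the command line with curl. A quick sketch (the URL is a placeholder for your own site):

curl -o /dev/null -s -w "DNS: %{time_namelookup}s  Connect: %{time_connect}s  TTFB: %{time_starttransfer}s  Total: %{time_total}s\n" https://your-site.example/

Running this a few times gives you a baseline that is independent of the browser.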
I believe this is more related to your installation than to the server itself.
If you want to test your server connection, you could try to bypass the application side and use traceroute or iperf to test the connection times to your server from your local computer (to the external IP). Traceroute will only work if you have ICMP traffic allowed.
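For example, assuming iperf3 is installed on both ends and the relevant port is open in your firewall rules (the IP is a placeholder):

traceroute YOUR_EXTERNAL_IP
# on the server
iperf3 -s
# on your local machine
iperf3 -c YOUR_EXTERNAL_IP

iperf3 listens on TCP port 5201 by default, so you may need to add a firewall rule for it.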
The last thing is the same as John mentioned above: check whether your server is swapping memory, or monitor the CPU and memory in use while you run the TTFB test. That will give you an idea of whether the problem is with the server or with the website and its configuration.
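On CentOS you could, for example, watch memory and CPU while the TTFB test runs:

free -m          # how much memory and swap is in use
vmstat 1 10      # CPU, memory and swap activity, sampled every second for 10 seconds
top              # live view of the busiest processes (look at httpd, php-fpm and mysqld)

If swap usage keeps growing or a single process pins the CPU during the test, the bottleneck is on the server; if everything stays idle while TTFB is still high, look at the application instead.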
Additionally, here are some recommendations to reduce TTFB (https://wp-rocket.me/blog/how-to-reduce-ttfb-wordpress-site/). Hopefully it helps.
I am using a VPS for my website, so I don't believe I can access it from a local network or anything like that.
I am using DigitalOcean as my VPS provider.
So where should I install tools like ab, siege, JMeter, etc.: locally on the VPS, on my own computer (the client), or on another droplet (VPS) in the same region, connecting to the web server droplet via the private network?
From my understanding, if I use those tools on the VPS itself, they might use too much of the CPU and RAM (the same CPU and RAM the web server uses) for the test to be accurate.
On the other hand, testing remotely might produce bad values because of a network bottleneck. Is this the case if I use another VPS on the same subnet (DigitalOcean's private network feature, for example)?
I am lost; both solutions seem wrong, so what am I missing?
The best option is to install the load generator on another VPS residing in the same subnet as the application under test. This way you will get "cleaner" results, not skewed by connect times and latency.
Having both the application under test and the load generator on the same machine is not recommended, as load testing tools themselves are very resource-intensive, and you may run into a situation where both applications are "struggling" for resources: the load generator cannot send requests fast enough, and the application under test cannot handle requests properly. In general it is recommended to keep an eye on resource consumption by both the application under test and the load generators to ensure each has enough headroom; that also lets you correlate an increasing number of virtual users with increased resource consumption. You can use an APM tool, or the JMeter PerfMon Plugin if you don't have anything else in place.
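As an illustration, a JMeter run in non-GUI mode from the load-generator VPS might look like this (test-plan.jmx and the output paths are placeholders for your own files):

jmeter -n -t test-plan.jmx -l results.jtl -e -o ./report

-n runs without the GUI (which is much lighter on resources), -l writes the raw results, and -e -o generates the HTML dashboard at the end of the run.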
As a fallback you can use your local machine for testing; however, make sure that you have enough bandwidth (you can check it using e.g. the https://www.speedtest.net/ service) and that your ISP is aware of your plans and won't block you for suspicious activity (a load test might be mistaken for a DoS attack).
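For instance, assuming the speedtest-cli package is available on your machine, a rough bandwidth check from the command line:

pip install speedtest-cli
speedtest-cli --simple

Compare the upload figure with the bandwidth your test plan will actually need before deciding to generate load from home.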
We get good results using Unix machines from Amazon Web Services as load generators. You don't get results as clean as when the load generator is located in the same network, as Dimitri mentioned, but you get a realistic result, the way the end user will experience it too. In our scenario we evaluate some key values during execution, such as CPU, DB connections, and the number of changed data sets in the DB during the test. We repeat the test several times because there is always some variance in the results. A load test in the same network will deliver more stable results and can be compared to a measurement in a laboratory, but I think it is very valuable to know how your application behaves in reality.
I recently installed a Bitnami WordPress Network stack on Google Cloud Compute Engine.
I keep getting a warning saying that it is over-utilised; however, when I view the CPU and disk usage statistics, I cannot see how this is possible. Both statistics are usually very low, only spiking when I am administering websites (i.e. importing large files, backups, etc.).
For example, as I post this message right now, usage for the
Is this just a marketing ploy to get me to upgrade my instance?
What happens when we over-utilise anyway? (What are the symptoms? My WordPress network appears to be functioning flawlessly.)
Please see the images of my disk and CPU usage over the last 7 days:
[CPU utilisation statistics 7 days][1]
[disk operations 7 days][2]
[Network Packets statistics 7 days][3]
[1]: https://i.stack.imgur.com/iZa0L.png
[2]: https://i.stack.imgur.com/lUOno.png
[3]: https://i.stack.imgur.com/SnbHq.jpg
You need to install the Monitoring Agent in order to get accurate recommendations.
If the monitoring agent is installed and running on a VM instance, the
CPU and memory metrics collected by the agent are automatically used
to compute sizing recommendations. The agent metrics provided by the
monitoring agent give better insights into resource utilization of the
instance than the default Compute Engine metrics. This allows the
recommendation engine to estimate resource requirements better and
make more precise recommendations.
Read: https://cloud.google.com/compute/docs/instances/apply-sizing-recommendations-for-instances?hl=en_GB&_ga=2.217293398.-1509163014.1517671366#using_the_monitoring_agent_for_more_precise_recommendations
How to install the Monitoring Agent to get accurate sizing recommendations:
https://cloud.google.com/monitoring/agent/install-agent
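At the time of writing, the install on a Debian-based Bitnami image looks roughly like this (the script URL and package name come from the linked guide and may have changed since, so follow the current documentation):

curl -sSO https://dl.google.com/cloudagents/add-monitoring-agent-repo.sh
sudo bash add-monitoring-agent-repo.sh
sudo apt-get update
sudo apt-get install -y stackdriver-agent
sudo service stackdriver-agent start

Once the agent has been reporting for a while, the sizing recommendations are based on memory as well as CPU, which usually makes them far more realistic.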
Is there any service that can be used to load test a website served through a CDN, so that we can ensure our website can still run without problems even under high-volume traffic, for example a DDoS?
I suppose the target service should be able to generate a huge number of concurrent connections and a lot of bandwidth.
If there are any reference sites or reports, please point me to them.
Thanks all.
Check out http://loadimpact.com/ and https://www.blitz.io/
Or you can always use tools like siege and ab.
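For example (the URL is a placeholder for your own site; start with modest numbers and ramp up):

ab -n 1000 -c 50 https://your-site.example/
siege -c 25 -t 1M https://your-site.example/

ab fires 1,000 requests with 50 in flight at a time; siege runs 25 concurrent simulated users for one minute.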
On my office website, a webpage has 3 CSS files, 2 JavaScript files, 11 images, and 1 page request, for a total of 17 requests to the server. If 10,000 people visit my office site...
will this slow the website down due to the increased number of requests?
And will there be any issues on the server due to the huge traffic?
If I remember correctly, my tiny office server has:
an Intel i3 processor,
an Nvidia 2 GB graphics card,
Windows Server 2008,
8 GB DDR3 RAM, and
a 500 GB hard disk.
The website is developed in ASP.NET.
Net speed is 10 Mbps download and 2 Mbps upload, using a static IP address.
There are many reasons a website may be slow.
A huge spike in additional traffic.
Extremely large or non-optimized graphics.
A large number of external calls.
A server issue.
All websites should have optimized images, Flash files, and videos. Large media files slow down the overall loading of each page. Optimize each image: PNG images can offer better-looking images at a smaller file size. You could also run a traceroute to your site.
Hope this helps.
This question is impossible to answer precisely because there are so many variables. It sounds like you're hypothesising that you will have 10,000 simultaneous users; do you really expect there to be that many?
The only way to find out if your server and site hold up under that kind of load is to profile it.
There is a tool called Apache Bench (http://httpd.apache.org/docs/2.0/programs/ab.html) which you can run from the command line to simulate a number of requests to your server and benchmark it. The tool comes with an install of Apache; you can then simulate 10,000 requests to your server and see how the response time holds up. At the same time you can run Performance Monitor in Windows to diagnose whether there are any bottlenecks.
Example usage, taken from Wikipedia:
ab -n 100 -c 10 http://www.yahoo.com/
This will execute 100 HTTP GET requests, processing up to 10 requests
concurrently, to the specified URL, in this example,
"http://www.yahoo.com".
I don't think ab downloads your page dependencies (JS, CSS, images), but there are other tools you can use to simulate that.
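One option is siege with a URLs file, so each simulated user also fetches the assets. A sketch (urls.txt is a placeholder you would fill with the page URL plus its CSS, JS and image URLs):

siege -c 100 -t 2M -f urls.txt

This is still cruder than a real browser (it doesn't replicate caching or parallel fetching), but it exercises the full set of 17 requests per page view.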
I'd also recommend that you enable compression on your site and set up caching, as this will significantly reduce the load and the number of requests for very little effort.
Rather than hardware, you should think about your server's upload capacity. If your upload bandwidth is low, of course it would be a problem.
The most likely reason is that one session is locking all the other requests.
If you do not use session state, turn it off and check again.
Related:
Replacing ASP.Net's session entirely
jQuery Ajax calls to web service seem to be synchronous