Is Google PageSpeed Insights reliable? - pagespeed

I am working on optimizing the performance of a website.
According to PageSpeed Insights (https://developers.google.com/speed/pagespeed/insights/?), I had a server response time of 1.7s.
After some work, Lighthouse (embedded in Chrome) gives me a server response time of 200ms, and my page is noticeably faster. But PageSpeed Insights still gives me results in the 1.5s range.
Which one should I trust?

Google PageSpeed Insights uses a combination of lab and real-world (field) data, whereas Lighthouse uses lab data only to build its report. Since Lighthouse is integrated into PSI, for the sake of consistency you should trust PSI over lab data alone.
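You can see both datasets side by side in a single response from the PSI v5 API. A minimal Python sketch of pulling them apart (the field names follow the v5 response format as I understand it; the sample payload below is illustrative, not a real API response):

```python
def summarize_psi(psi_json):
    """Extract field (real-user/CrUX) and lab (Lighthouse run) metrics
    from a PageSpeed Insights v5 API response."""
    field = psi_json.get("loadingExperience", {}).get("metrics", {})
    lab = psi_json.get("lighthouseResult", {}).get("audits", {})
    return {
        # Field data: aggregated from real Chrome users over ~28 days
        "field_fcp_ms": field.get("FIRST_CONTENTFUL_PAINT_MS", {}).get("percentile"),
        # Lab data: one simulated Lighthouse run at request time
        "lab_server_response_ms": lab.get("server-response-time", {}).get("numericValue"),
    }

# Illustrative payload shaped like a v5 response
sample = {
    "loadingExperience": {"metrics": {"FIRST_CONTENTFUL_PAINT_MS": {"percentile": 1500}}},
    "lighthouseResult": {"audits": {"server-response-time": {"numericValue": 200.0}}},
}
print(summarize_psi(sample))
```

This is why the two numbers disagree: the lab figure reflects your fix immediately, while the field figure lags behind until enough real-user data accumulates.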

Related

Is there a way of seeing a breakdown of the peak memory usage in WordPress Query Monitor?

On my WordPress site I'm seeing significant differences in page speed between staging and production. The code running on both is identical and the databases are of similar size, but some pages in production load > 2s slower than staging. I'm not sure what could be causing this.
The only real difference I can see between the two sites is the peak memory usage stat exposed by the WordPress Query Monitor plugin. On most pages, staging returns ~80 MB, which seems quite high, but production returns almost double that at ~160 MB. Is there a way of breaking down this stat and seeing what contributes to it?

How to use Google's PageSpeed Insights tool on localhost

I'm using Google's PageSpeed Insights tool, and it works fine on production.
But is it possible to use it on localhost? Or is there an equivalent tool for testing local pages?
I know the Lighthouse tab is also an option, but the metrics are somewhat different! I need the same API used in that service!
You can use the Lighthouse Command Line Interface (CLI), or run it from Node.js if you are familiar with that; this is the engine that powers PageSpeed Insights.
That way you can configure CPU slowdown and network latency to closely match what you experience in PageSpeed Insights.
With regards to Lighthouse in the browser, the metrics should be the same (in terms of what is measured).
If you are getting vastly different performance numbers there may be several causes such as:
plugins (so run in incognito mode)
latency difference (if your website is in India or Australia for example then the latency will be high using Google's servers in America so you will get better scores from localhost)
settings (not running "simulated" throttling).
You can find a bit more info on the Lighthouse CLI advantages in this answer I gave.
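As a rough sketch of driving the CLI from a script, something like the following builds an invocation whose throttling matches PSI's default "simulate" mode (this assumes the Lighthouse CLI is installed globally via `npm install -g lighthouse`; the flags shown are standard Lighthouse options):

```python
import subprocess  # only needed if you actually run the command below

def lighthouse_cmd(url, output_path="report.json"):
    """Build a Lighthouse CLI command line that mirrors how
    PageSpeed Insights runs its lab test."""
    return [
        "lighthouse", url,
        "--output=json",
        f"--output-path={output_path}",
        "--throttling-method=simulate",   # same simulated throttling PSI uses
        "--chrome-flags=--headless",      # no visible browser window
        "--only-categories=performance",  # skip SEO/accessibility audits
    ]

cmd = lighthouse_cmd("http://localhost:8080")
print(" ".join(cmd))
# To actually run it once Lighthouse is installed:
# subprocess.run(cmd, check=True)
```

Pointing it at `http://localhost:8080` (or wherever your dev server listens) gives you PSI-style lab numbers without a public URL.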

Google Pagespeed Third-party usage. Who are 'Research Online'? How do I detect the software?

When I scan my Google Cloud compute website with pagespeedtest.net
e.g. https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fredbridgeramblers.org.uk%2Fulinks.php
I get an unexpected 3rd party
Research Online 41 KB 106 ms
I expect only Google Analytics.
I have tried running the test from 2 different PCs and get the same result.
Who are Research Online?
How can I detect the code?
When I open the network tab in Chrome or Firefox I cannot see Research Online.
I also ran https://www.webpagetest.org/ and looked in the waterfall view and could not see anything suspicious.
The weekly scan in Google Cloud compute does not find anything suspicious.
I get the same 3rd party on other web pages on the site.
I had the same 'Research Online' issue; your question is the only mention of it I found. But checking an hour later, it's gone. The report has changed too. Also, my desktop PageSpeed score has increased from 79 to 100 (yee hah!).
My guesses: Google PageSpeed, like much of their product line, is more random than they will admit. Or it's A/B testing.

Slow development environment (DNN/ASP.NET)?

I have a site that runs off of DotNetNuke with much customization. In production, this site runs fine and speed is relatively optimal. In development, it's PAINFULLY slow (anywhere from 10-30 seconds per action). Can anyone recommend any tools/ideas on how to diagnose this issue? The environments are very similar (the dev database is not as powerful as the production one, but the difference isn't enough to warrant this type of delay). I'm looking for something that can help determine all the points of contact for the requests, etc.
Any thoughts?
Try out the following tools:
YSlow: YSlow analyzes web pages and why they're slow based on Yahoo!'s rules for high performance web sites
PageSpeed: The PageSpeed family of tools is designed to help you optimize the performance of your website. PageSpeed Insights products will help you identify performance best practices that can be applied to your site, and PageSpeed optimization tools can help you automate the process.
Firebug and Network Monitoring: Look at detailed measurements of your site's network activity.
Fiddler
YSlow, PageSpeed, and Firebug are great tools you should definitely use, but the fact that you're only seeing the issue in the development environment suggests it's not the site that's the problem but something in the development environment itself. Generally, I find most slowness in these cases is related to disk and/or RAM issues. Use Task Manager to verify the machine has enough RAM for its current load. Make sure there's sufficient free disk space for proper caching to occur. You may need a faster hard drive.
Run the site locally in release mode and see if that changes anything.
If you can, run the live site in debug mode and see if it slows down as much as in the local environment.

Worker process taking high CPU%

All of my websites are hosted in IIS and configured with one application pool. This application pool consists of 10 running websites.
It was working fine until today, but all of a sudden I am observing erratic spikes and dips in CPU usage. I am unable to trace the problem.
Is there any way to check which website is taking the most load among all the sites in the application pool?
Performance counters, task manager and native code analysis tools only tell part of the story. To gain a deeper understanding of what is happening inside your ASP.NET application you need to use WinDBG, SOS and ADPlus.
Tess Ferrandez has a great series of articles on tracking down what is to blame here:
.NET Debugging Demos Lab 4: High CPU hang
.NET Debugging Demos Lab 4: High CPU Hang - Review
This is a real world example:
High CPU in .NET app using a static Generic.Dictionary
You will probably want to separate your sites into individual application pools so you can identify and isolate the site that is causing the high CPU (but it already looks like you have a suspect so I'd isolate that one). From then you can follow Tess's advice and guidance to track down the cause.
You should also take a look at the logs to see if you're experiencing an unexpected spike or increase in traffic. Perhaps there's a badly behaved search engine indexer hammering the site. If that's the case then maybe you need to (if you haven't already done so) create a robots.txt to prevent crawlers from indexing parts of the site that don't need to be indexed. On top of that, if certain crawlers are being overly promiscuous, then just ban them. Perhaps consider a sitemap for Google to tame and tune its activities.
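If crawler traffic does turn out to be the culprit, a robots.txt along these lines is a starting point (the bot name and paths here are placeholders, not a recommendation for your specific site):

```text
# Ban one badly behaved crawler entirely (name is an example)
User-agent: BadBot
Disallow: /

# Everyone else: skip the expensive, not-worth-indexing paths
User-agent: *
Disallow: /search/
Disallow: /admin/

Sitemap: https://www.example.com/sitemap.xml
```

Keep in mind robots.txt is advisory: well-behaved crawlers honor it, but a truly abusive bot needs to be blocked at the server or firewall level.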
If your server has reached its maximum capacity, you will see CPU go up and down erratically because the GC will start trying to recover resources (cache, etc.), which in turn causes your sites to work even harder. It's an endless cycle.
Have you been monitoring your performance counters? Do you have any idea what normal capacity is for your site? If you cannot answer these questions, I suggest you gather some perf numbers as soon as possible.
My rule of thumb is to always measure first, then make necessary changes.
Most of the time performance bottlenecks aren't where you think they would be.
There is really no performance-counter way to tell, because the CPU counters are at the process level. Your best bet would be to do a time correlation with other events in the event log and the .NET/ASP.NET counters for garbage collection, requests, etc.
If you really want to go hardcore, you could use the SysInternals toolset to take snapshots of your app pool over time and then do a post-analysis to figure out what code was executed when the spike happened. Here is a related example from Mark Russinovich's blog - http://blogs.technet.com/b/markrussinovich/archive/2008/04/07/3031251.aspx.
