Struggling to get CLS under 0.1 on mobile. Can't reproduce it in tests - PageSpeed

I'm trying to optimize the overall PageSpeed of this page, but I can't get the CLS under 0.1 on mobile. I really don't know why, as I use critical CSS, page caching and font preloading, and I can't reproduce the behaviour in tests.
https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fwww.birkengold.com%2Frezept%2Fselbstgemachte-zahnpasta
Tested with a simulated Galaxy S5 on 3G Fast.
https://www.webpagetest.org/result/210112_DiK9_256ca61d8f9383a5b927ef5f55644338/
In no scenario do I get anywhere near 0.1 CLS.

Field Data and Origin Summary
Field Data and the Origin Summary are real-world data.
That is the key difference between these metrics and the synthetic test that PageSpeed Insights runs.
For example, in the real world CLS is measured until page unload, as mentioned in this explanation of CLS from Addy Osmani, who works on Google Chrome.
For this reason your CLS can be high for pages that perform poorly at certain screen sizes (Lighthouse / PSI only tests one mobile screen size by default), or where things like lazy loading perform badly in the real world and cause layout shifts when content loads too slowly.
It could also come down to certain browsers, connection speeds and so on.
How can you find the page / root cause that is ruining your Web Vitals?
Let's assume you have a page that does well in the Lighthouse synthetic test but it performs poorly in the real world at certain screen sizes. How can you identify it?
For that you need to gather Real User Metrics (RUM) data.
RUM data is gathered in the real world as real users use your site, and is stored on your server for later analysis and problem identification.
There is an easy way to do this yourself, using the Web Vitals Library.
This allows you to gather CLS, FID, LCP, FCP and TTFB data, which is more than enough to identify pages that perform poorly.
You can pipe the data gathered to your own API, or to Google Analytics for analysis.
If you combine the gathered Web Vitals data with the User Agent string (to get the browser and OS) and the browser size (to get the effective screen size), you can narrow down whether the issue comes from a certain browser, a certain screen size, a certain connection speed (slower connections show up as high FCP / LCP figures), and so on.
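
As a rough illustration, here is a minimal RUM sketch using the web-vitals library. The /api/vitals endpoint is hypothetical (it would be a collector on your own server, or you could swap in a Google Analytics call), and the import names should be checked against the library version you install:

import { onCLS, onFID, onLCP, onFCP, onTTFB, Metric } from 'web-vitals';

function sendToAnalytics(metric: Metric): void {
  const body = JSON.stringify({
    name: metric.name,                                // e.g. 'CLS', 'LCP'
    value: metric.value,
    id: metric.id,                                    // unique per page load
    page: location.pathname,
    userAgent: navigator.userAgent,                   // browser / OS
    viewport: `${innerWidth}x${innerHeight}`,         // effective screen size
    connection: (navigator as any).connection?.effectiveType ?? 'unknown',
  });
  // sendBeacon survives page unload, which matters because CLS is only
  // final once the user leaves the page.
  if (navigator.sendBeacon) {
    navigator.sendBeacon('/api/vitals', body);
  } else {
    fetch('/api/vitals', { method: 'POST', body, keepalive: true });
  }
}

onCLS(sendToAnalytics);
onFID(sendToAnalytics);
onLCP(sendToAnalytics);
onFCP(sendToAnalytics);
onTTFB(sendToAnalytics);

With a day or two of this data you can group by page, viewport and user agent to see exactly which combinations produce the bad CLS values.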

Related

Why is there a CLS problem with real user data but not with the lab data?

When I test the page in DevTools, there is no problem with CLS.
But in PageSpeed Insights a problem with the mobile page's CLS shows up.
Not in the lab data, only in the real (field) data:
https://federhilfe.de/rotkehlchen-anlocken-im-garten-ansiedeln-fuetterung-nisthilfen/
https://pagespeed.web.dev/report?url=https%3A%2F%2Ffederhilfe.de%2Frotkehlchen-anlocken-im-garten-ansiedeln-fuetterung-nisthilfen%2F&hl=de
Do you have any idea how to solve this problem?
Thank you very much!
Alex
The guide at web.dev/lab-and-field-data-differences mentions a few reasons why CLS may be different across lab and field measurement:
CLS may depend on real-user interaction, while lab tests do not simulate any interactions (by default)
Personalized content from real users may affect CLS, which may not be represented in lab testing
Real users may have warm caches, so content may not need to load, minimizing CLS, whereas lab tests traditionally run with empty caches
To help identify why CLS is happening to real users, the first step I'd recommend is to use the website yourself with something like the Web Vitals extension enabled. Just from scrolling to the bottom of the page with the heads-up display (HUD) turned on, I can see CLS values gradually increasing and exceeding the "good" threshold of 0.1.
I'd also recommend measuring Web Vitals in the field to ensure that you have additional debug data about what might be causing CLS to your real users.
Even though this kind of interaction-induced CLS wouldn't be reproduced by lab testing, lab testing can still point to possible causes of real-user CLS. In the PageSpeed Insights link you shared, you can click "Show audits relevant to: CLS". For the desktop results, there are two audits that need attention:
Image elements do not have explicit width and height
Avoid large layout shifts
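
To tie real-user CLS back to specific elements, the attribution build of the web-vitals library can also report which element produced the largest shift. A minimal sketch (the attribution field names below follow the library's documented shape, but verify them against the version you install):

import { onCLS } from 'web-vitals/attribution';

onCLS((metric) => {
  console.log('CLS:', metric.value);
  // Which element moved, and when, during the largest single shift.
  console.log('largest shift element:', metric.attribution.largestShiftTarget);
  console.log('largest shift time (ms):', metric.attribution.largestShiftTime);
  // In production, send these fields to your RUM endpoint alongside the
  // metric value instead of logging them.
});

Combined with the audits above, that usually narrows the problem down to a handful of images or late-loading embeds.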

Which web performance metric is more effective for an SPA platform?

I plan to measure our platform's (an SPA's) performance and track one metric result every day.
There are two metrics that I can measure.
domcontentloaded
load
Which one is more effective for SPA platform?
// notes
(I already know there are many metrics (FP, LCP, ...), but for now I just want to pick a single metric: domcontentloaded or load.)
TL;DR: choose a modern metric. DCL and load are ineffective measurements for page load performance.
Both DOM Content Loaded and the plain load event are generally seen as outmoded, poor signals for page load performance. There are some technical differences between the two: DCL fires once the browser has finished parsing the HTML document, while load fires only after the page's subresources (images, stylesheets and so on) have also finished loading.
Using a more modern metric will give you a much better signal about how your page is loading. You mention Largest Contentful Paint, which is probably a good one to choose, given that LCP tends to correlate with when people perceive a site as "ready." Total Blocking Time may also be helpful for an SPA, as it measures the time between FCP and TTI during which the main thread was blocked and the page could not respond to user input.
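
As a rough illustration of what observing LCP looks like, the browser exposes it directly through a PerformanceObserver (the web-vitals library wraps the same entry type with extra edge-case handling, so prefer that in production):

// Sketch: log Largest Contentful Paint candidates as the page loads.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Each new candidate replaces the previous one; the last candidate
    // reported before user input or page hide is the final LCP.
    console.log('LCP candidate (ms):', entry.startTime);
  }
});
observer.observe({ type: 'largest-contentful-paint', buffered: true });

TBT, by contrast, is a lab-only metric; in the field you would typically look at long tasks ('longtask' performance entries) or an interaction metric such as FID instead.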

Page speed does not pass even after scoring 90+ on PageSpeed Insights

My webpage scores 90+ on the desktop version, yet its Field Data test result shows "does not pass", while the same page on mobile, with a 70+ score, is marked as "Passed".
What are the criteria here, and what else is needed to pass the test on the desktop version? Here is the page on which I'm performing the test: Blog Page
Note: this page has been scoring 90+ for about 2 months. Moreover, if anyone can offer guidance on improving mobile page speed in WordPress using the DIVI builder, that would be helpful.
Although 6 items show in "Field Data", only three of them actually count towards your Core Web Vitals assessment.
First Input Delay (FID)
Largest Contentful Paint (LCP)
Cumulative Layout Shift (CLS)
You will notice that they are denoted with a blue marker.
On mobile all 3 of them pass, despite a lower overall performance score.
However, on Desktop your LCP occurs at 3.6 seconds on average, which is not a pass (it needs to be within 2.5 seconds).
That is why you do not pass on Desktop but do on mobile.
At a glance this appears to be something with your font causing a late switch-out (sorry, not at a PC to test properly). I could be wrong; as I said, I haven't had a chance to test, so you need to investigate using DevTools etc.
Bear in mind that the score you see (95+ on Desktop, 75+ on mobile) is part of a synthetic test performed each time you run Page Speed Insights and has no bearing on your Field Data or Origin Summary.
The data in the "Field Data" (and Origin Summary) is real-world data gathered from browsers, so the two can be far apart if you have a problem at a particular screen size (for example) that is not picked up in a synthetic test.
Field Data passes or fails a website based on historical data.
Field Data: "Over the previous 28-day collection period, field data shows that this page does not pass the Core Web Vitals assessment."
So if you have made recent changes to improve your site score, you need to wait at least a month so that the Field Data reflects the newer data.
https://developers.google.com/speed/docs/insights/v5/about#distribution
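
If you want to inspect that 28-day field data programmatically rather than re-running PSI, it is exposed through the Chrome UX Report (CrUX) API, which is the source of PSI's field section. A hedged sketch (you need your own API key, and the metric key names should be double-checked against the current API docs):

// Sketch: query the CrUX API for a page's 75th-percentile field metrics.
async function getFieldData(url: string, apiKey: string): Promise<void> {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${apiKey}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ url, formFactor: 'DESKTOP' }),
    }
  );
  const { record } = await res.json();
  // The Core Web Vitals assessment is based on the 75th percentile of each metric.
  console.log('LCP p75 (ms):', record.metrics.largest_contentful_paint?.percentiles.p75);
  console.log('CLS p75:', record.metrics.cumulative_layout_shift?.percentiles.p75);
  console.log('FID p75 (ms):', record.metrics.first_input_delay?.percentiles.p75);
}

getFieldData('https://example.com/blog/', 'YOUR_API_KEY');

Checking this weekly after a fix lets you watch the 28-day window roll over without waiting for the PSI verdict to flip.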

Why would a website use artificially long response times?

I've noticed, on some websites, recurrent HTTP GET requests that each take a similarly long time to download a very small amount of data (around 5 lines of text).
Like a heartbeat, these requests are also chained in such a way that there is always something going on in the background.
It is present on multiple well-known websites; for example, Gmail and Facebook both use this technique for their heartbeats.
How can one reproduce this behaviour?
And why would someone use this technique on their website?
Edit:
My hypothesis is that they can now control the refresh times of all clients by adjusting a single value in the server application.
Most likely this is an implementation of long polling. It's arguably a hack to simulate push updates to the browser, enabling real-time updates of the page as soon as something of importance happens on the server.
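
A minimal client-side sketch of what such a long-polling loop can look like ('/updates' is a hypothetical endpoint that holds each request open until it has news or a server-side timeout elapses):

async function poll(): Promise<void> {
  while (true) {
    try {
      // The server deliberately delays its response until something happens,
      // which is why the request looks "artificially long" in the network panel.
      const res = await fetch('/updates?since=' + Date.now());
      if (res.ok) {
        handleUpdate(await res.json());
      }
    } catch {
      // Network error: back off briefly before reconnecting.
      await new Promise((resolve) => setTimeout(resolve, 5000));
    }
    // The loop immediately re-issues the request, so one request is always
    // pending; that produces the heartbeat pattern described above.
  }
}

function handleUpdate(update: unknown): void {
  console.log('server update:', update);
}

poll();

Nowadays WebSockets or Server-Sent Events are the more common way to get the same effect, but long polling still works anywhere plain HTTP works.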

Could "filling up" Google Analytics with millions of events slow down query performance / increase sampling?

I'm considering doing some relatively large-scale event tracking on my website.
I estimate this would create up to 6 million new events per month in Google Analytics.
My question is whether all of this extra data that I'm now hanging onto would:
a) Slow down GA UI performance
and
b) Increase the amount of data sampling
Notes:
I have noticed that GA seems to be taking longer to retrieve results over longer date ranges for my website lately, but I don't know whether that is related to the increased amount of event tracking I've been doing; it may just be that GA is fighting for resources as it matures and more and more people collect more and more data...
Finally, one might guess that adding events would only slow down reporting on events, but that isn't necessarily so, is it?
Drewdavid,
The amount of data being loaded will influence the speed of the GA UI, but nothing really dramatic, I would say. I am running a website/app with 15+ million events per month, and even though all the reporting is automated via the API, every now and then we need to find something specific and use the regular GA UI.
More than speed, I would be worried about sampling. That's the reason we automated the reporting in the first place, as there are some ways you can eliminate it (with some limitations). See this post, for instance, which describes using Analytics Canvas, one of my favourite tools (I am not affiliated in any way :-)).
Also, let me ask what the purpose of your events would be. Think twice about whether you would actually use them later on...
Slow down GA UI performance
Standard Reports are precompiled and will display as usual. Reports that are generated ad hoc (because you apply filters, segments etc.) will take a little longer, but not so much that it hurts.
Increase the amount of data sampling
If by "sampling" you mean throwing away raw data, Google does not do that (I actually have that in writing from a Google representative). However the reports might not be able to resolve all data points (e.g. you get Top 10 Keywords and everything else is lumped under "other").
However those events will count towards you data limit which is ten million interaction hits (pageviews, events, transactions, any single product in a transaction, user timings and possibly others). Google will not drop data or close your account without warning (again, I have that in writing from a Google Sales Manager) but they reserve to right to either force you to collect less interaction hits or to close your account some time after they issued a warning (actually they will ask you to upgrade to Premium first, but chances are you don't want to spend that much money).
Google is pretty lenient when it comes to violations of the data limit but other peoples leniency is not a good basis for a reliable service, so you want to make sure that you stay withing the limits.
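
For reference, each of the calls below generates one interaction hit that counts toward that limit (analytics.js / Universal Analytics syntax, which is what the limit applies to; the category and action names are placeholders):

declare function ga(...args: unknown[]): void; // provided by analytics.js

ga('send', 'pageview');                              // 1 hit
ga('send', 'event', 'Video', 'play', 'intro-clip');  // 1 hit
ga('send', 'event', 'Scroll', 'depth', '75%', 75);   // 1 hit

So 6 million events per month plus your normal pageviews can approach the ten-million-hit limit faster than the event count alone suggests.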

Resources