Core Web Vitals Assessment: Failed - PageSpeed

There is a new feature in PageSpeed Insights that shows you the experience of real users. I checked one of my websites with this feature on mobile and got this message:
"Core Web Vitals Assessment: Failed. Computed from the Core Web Vitals metrics over the latest 28-day collection period. Learn more"
Why is PageSpeed Insights returning this result? Everything seems fine in Google Search Console under Core Web Vitals.

The Core Web Vitals result is based on the three metrics below:
Largest Contentful Paint (LCP)
First Input Delay (FID)
Cumulative Layout Shift (CLS)
Each metric's score falls into one of three categories: Good, Needs Improvement, and Poor.
If any of these three metrics does not have a 'Good' score, the overall result is shown as 'Failed'.
Check the metric scores just below the 'Failed' text to find out which one needs to be improved.
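If you want to see these three metrics for your own real users rather than waiting on the 28-day report, a minimal sketch using Google's web-vitals library (assuming v3 of the npm package) looks like this:

```typescript
// A minimal sketch, assuming the `web-vitals` npm package (v3) is installed.
// Each callback fires with a Metric object whose `rating` field uses the same
// Good / Needs Improvement / Poor buckets that PageSpeed Insights reports.
import { onCLS, onFID, onLCP, type Metric } from 'web-vitals';

function report(metric: Metric): void {
  // e.g. "LCP: 2890 -> needs-improvement"
  console.log(`${metric.name}: ${metric.value} -> ${metric.rating}`);
}

onLCP(report); // Largest Contentful Paint
onFID(report); // First Input Delay (requires a real user interaction)
onCLS(report); // Cumulative Layout Shift (reported when the page is hidden)
```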

Related

Core Web Vitals results consistency problem, even though all URLs are good (LCP | CLS)

1. For Core Web Vitals, does Google Search Console rely on traffic to show the LCP and CLS results? If so, what is the minimum traffic required? In the live PageSpeed Insights tool, all URLs are in good condition.
LCP: under 2.5 s | CLS: under 0.1
2. Do the Core Web Vitals results in Google Search Console depend on CrUX for each device, or on the PageSpeed Insights tool?
3. All the pages have a CLS of less than 0.01, but when Google Search Console averages the URL group, it shows a CLS of 0.23. I have checked, and all the sample URLs given in the group are also good. Is there any way to check how this was calculated?
4. Sometimes in Search Console my Core Web Vitals URL count drops to 0 on desktop. Does anyone know why this happens?
It seems like you might be conflating the Chrome UX Report data from real users at the top of the page with the Lighthouse data from lab tests at the bottom. Core Web Vitals are assessed using real-user data from the Chrome UX Report.
In your first screenshot, it shows a CLS value of 0.22. This means 75% of desktop users on the page experience a CLS of 0.22 or below, which is in the "Needs Improvement" category. Search Console is saying that the average experience over all pages in the group is a CLS value of 0.23, which is very similar to this particular page's performance.
The CLS value of 0.001 is based on a Lighthouse test. Importantly, Lighthouse doesn't interact with the page as a real user would, so it doesn't encounter types of CLS that may be caused by scrolling or clicking. Think of this value more as a lower bound for CLS than your page's actual user-perceived performance. That's also why Core Web Vitals are not assessed using data from Lighthouse.
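If you want to check the real-user figures directly, you can query the Chrome UX Report API yourself. A hedged sketch, assuming you have a CrUX API key (API_KEY below is a placeholder):

```typescript
// A sketch of querying the Chrome UX Report API directly. The p75 value
// returned here is the same field figure PageSpeed Insights shows at the top
// of the page, broken down by form factor.
const API_KEY = 'YOUR_CRUX_API_KEY'; // placeholder, not a real key

async function queryCrux(url: string, formFactor: 'DESKTOP' | 'PHONE'): Promise<void> {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${API_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ url, formFactor }),
    }
  );
  const data = await res.json();
  // 75% of real users experienced a CLS at or below this value.
  const cls = data.record?.metrics?.cumulative_layout_shift?.percentiles?.p75;
  console.log(`${formFactor} p75 CLS for ${url}: ${cls}`);
}

queryCrux('https://example.com/', 'DESKTOP');
```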

Page does not pass even after scoring 90+ on PageSpeed Insights

My webpage scores 90+ in the desktop version, yet its Field Data result shows "does not pass", while the same page on mobile, with a 70+ speed score, is marked as "Passed".
What are the criteria here, and what else is needed to pass the test in the desktop version? Here is the page on which I'm performing the test: Blog Page
Note: this page has scored 90+ for about 2 months. Also, if anyone can offer guidance on improving page speed on mobile in WordPress using the Divi builder, that would be helpful.
Although six items show in "Field Data", only three of them actually count towards your Core Web Vitals assessment:
First Input Delay (FID)
Largest Contentful Paint (LCP)
Cumulative Layout Shift (CLS)
You will notice that they are denoted with a blue marker.
On mobile all 3 of them pass, despite a lower overall performance score.
However, on desktop your LCP occurs at 3.6 seconds on average, which is not a pass (it needs to be within 2.5 seconds).
That is why you do not pass on desktop but do on mobile.
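For reference, the pass/fail logic boils down to comparing each p75 field value against the documented thresholds; a small illustration:

```typescript
// An illustration of the pass/fail logic described above, using the documented
// Core Web Vitals thresholds. Only the three marked metrics count, and each
// p75 value must be in the "good" range for the page to pass.
interface Thresholds { good: number; poor: number }

const THRESHOLDS: Record<'LCP' | 'FID' | 'CLS', Thresholds> = {
  LCP: { good: 2500, poor: 4000 }, // milliseconds
  FID: { good: 100, poor: 300 },   // milliseconds
  CLS: { good: 0.1, poor: 0.25 },  // unitless score
};

function passesCoreWebVitals(p75: { LCP: number; FID: number; CLS: number }): boolean {
  return (Object.keys(THRESHOLDS) as Array<'LCP' | 'FID' | 'CLS'>)
    .every((metric) => p75[metric] <= THRESHOLDS[metric].good);
}

// Desktop example from this question: an LCP of 3.6 s fails the 2.5 s threshold.
console.log(passesCoreWebVitals({ LCP: 3600, FID: 50, CLS: 0.05 })); // false
console.log(passesCoreWebVitals({ LCP: 2100, FID: 50, CLS: 0.05 })); // true
```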
At a glance, this appears to be something with your font causing a late swap (sorry, I'm not at a PC to test properly). I could be wrong; as I said, I haven't had a chance to test, so you need to investigate using DevTools etc.
Bear in mind that the score you see (95+ on desktop, 75+ on mobile) is part of a synthetic test performed each time you run PageSpeed Insights and has no bearing on your Field Data or Origin Summary.
The data in "Field Data" (and Origin Summary) is real-world data gathered from browsers, so the two can be far apart if you have a problem at a particular screen size, for example, that is not picked up in a synthetic test.
Field Data passes or fails a website based on historical data:
"Field Data: Over the previous 28-day collection period, field data shows that this page does not pass the Core Web Vitals assessment."
So if you have made recent changes to your website to improve your score, you need to wait at least a month so that Field Data shows results based on the newer data.
https://developers.google.com/speed/docs/insights/v5/about#distribution

Largest Contentful Paint - different scores?

I'm having trouble with the Largest Contentful Paint on my desktop pages. For a few days now the LCP has been growing and growing, and I don't know why or how to stop it. Another problem is that I always get different scores.
PageSpeed Insights tells me that the featured image is the LCP and that it takes 0.7 to 0.8 seconds to load. The 28-day score shows a value of 4.6 s. That would be OK. But in Search Console, under the Core Web Vitals section, I have 215 desktop URLs with an LCP score of 7.4 seconds, and it keeps getting higher. When I run a test in the browser DevTools in incognito mode, the LCP for desktop is about 9,700 ms (too much). Why is there such a difference, and what can I do to get the LCP down? The LCP wasn't a problem a few days ago...
Greetings, Kathrin
Keep in mind that the data you see in Chrome (Lighthouse) is so-called lab data, and it can differ significantly from what you see in Search Console, which uses field data.
Field data is collected from all devices with Chrome installed that access your website. So, for example, if your servers are in the US but most of your users come from Africa, your LCP might be high because the image takes a long time to load.
Another possibility in your particular case is that, after some changes to your site, a different element became the LCP. You can check in Lighthouse which element is actually the LCP.
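To see which element Chrome considers the LCP, you can also run a quick snippet in the DevTools console using plain browser APIs (no libraries required):

```typescript
// Logs each LCP candidate Chrome records as the page loads; the last entry
// printed is the element currently counted as the Largest Contentful Paint.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Each new, larger candidate replaces the previous one during load.
    const lcp = entry as PerformanceEntry & { element?: Element };
    console.log('LCP candidate at', lcp.startTime, 'ms:', lcp.element);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });
```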

Retrieving the last 12 months of data per ga:year

I'm trying to generate data for the last 12 months in Google Sheets using the GA add-on, but ga:users does not show the same figure as the GA web interface, because my dimension is ga:year, which breaks the period down and adds the 2016 figure to the 2017 figure.
ga:month would break it down even further and make the overall figure differ even more.
Any ideas how to fix this so that I get the same users figure as the web interface?
Thanks.
Here's my configuration:
Report Name: Last year Partner
Type: core
View (Profile) ID: ga:xxxxxxxxxxx
Start Date: 01/08/2016
End Date: 31/07/2017
Metrics: ga:users, ga:sessions, ga:bounceRate, ga:pageviews, ga:pageviewsPerSession
Dimensions: ga:year
Sampling Level: HIGHER_PRECISION
Adding the users from the two distinct years will not match the users from the web interface. The web interface gives you the de-duplicated number of users.
I.e., anyone visiting the site in Nov 2016 and Feb 2017 is counted as one user in the web interface, while your yearly output double-counts them.
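A toy illustration of the double counting, assuming each visitor is identified by a stable client ID (the IDs below are made up):

```typescript
// Summing per-year unique-user counts over-counts anyone active in both years;
// de-duplicating across the whole period matches the web interface's behaviour.
const users2016 = new Set(['a', 'b', 'c']);      // users seen Aug-Dec 2016
const users2017 = new Set(['b', 'c', 'd', 'e']); // users seen Jan-Jul 2017

const summed = users2016.size + users2017.size;             // 7: what ga:year adds up
const deduped = new Set([...users2016, ...users2017]).size; // 5: de-duplicated total

console.log({ summed, deduped }); // { summed: 7, deduped: 5 }
```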

Client ID and Sampling

I have a site that is on the brink of reaching the collection limits, and I would like some information on the behaviour and effects of sampling, both when a sampling rate is implemented and when Google starts to enforce sampling.
Question 1: Does anybody know how sampling works to ensure that multiple visits by a unique visitor give a true sampled view? By this I mean: once a particular Client ID has started to be collected and included in a sample, will all future sessions for that Client ID be included, or is this not guaranteed, meaning only some visits by a particular Client ID will be collected within the sample?
Question 2: As an extension to Question 1, does sampling across devices also ensure the same Client ID is collected? I.e., if a PC browser and a tablet browser have the same Client ID set, will GA ensure both are included in the sample?
Question 3: As an extension to Questions 1 and 2, do developer-implemented and Google-enforced sampling behave differently?
Question 4: As an extension to the above, if I track website and app session activity to the same account and use the same Client ID, will sampling ensure both are included?
Thank you
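No definitive answer on Google's internals here, but the per-visitor consistency asked about in Question 1 is typically achieved with deterministic, Client-ID-based sampling. A hypothetical sketch of that technique (not Google's actual implementation):

```typescript
// Hash the Client ID into a stable bucket in [0, 100) and compare it against
// the sample rate. Because the bucket depends only on the Client ID, either
// all of a visitor's sessions are sampled or none are.
function hashToPercent(clientId: string): number {
  let h = 0;
  for (const ch of clientId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return h % 100;
}

function isSampled(clientId: string, sampleRatePercent: number): boolean {
  return hashToPercent(clientId) < sampleRatePercent;
}

// The same Client ID always gives the same answer, across sessions and devices.
console.log(isSampled('1234567890.1609459200', 10)); // stable across calls
```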
