Improve Pagespeed when using Facebook Pixel

I am building a general Facebook integration solution that uses the Facebook Pixel. However, I am in a situation where adding the pixel instantly reduces my PageSpeed score, which is also a requirement for me.
I am using PageSpeed Insights to measure the score, and with the Facebook Pixel added my Largest Contentful Paint takes a hit; the report shows an opportunity that points to the Facebook Pixel load as the problem.
The only solution I have found is wrapping the pixel script in a setTimeout(fb, 3000) so that it only loads after the page is fully rendered. However, I am concerned about this approach, since it largely postpones the Facebook load and may cause problems for some of my users.
Does anyone know another way to avoid this PageSpeed hit while still using the Facebook Pixel?

Thanks for providing the clarification.
Your best bet is to use defer on your script embed instead of an arbitrary timeout; that way your critical rendering path and interactivity are unaffected. LCP specifically should improve, as your application's rendering is no longer affected by the pixel.
via MDN:
This Boolean attribute is set to indicate to a browser that the script is meant to be executed after the document has been parsed, but before firing DOMContentLoaded.
Note: if you are not observing specific interaction issues and are purely focused on addressing LCP numbers, deferring may meet the need.
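For example, a minimal sketch of the deferred embed, assuming the standard fbq bootstrap has been moved into its own file (fb-pixel-init.js is a hypothetical name and YOUR_PIXEL_ID is a placeholder):

    <!-- index.html: load the pixel bootstrap as an external, deferred script
         instead of wrapping it in a setTimeout -->
    <script defer src="/js/fb-pixel-init.js"></script>

    // fb-pixel-init.js: paste the standard minified bootstrap (which defines
    // fbq and loads fbevents.js) at the top of this file, unchanged, then:
    fbq('init', 'YOUR_PIXEL_ID');   // placeholder pixel ID
    fbq('track', 'PageView');

Because the script tag carries defer, none of this runs until the document has been parsed, so it stays off the critical rendering path.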

Related

Lowering Largest Contentful Paint, Over A Full Second After FCP On Mobile

I've spent quite a bit of time on performance optimization for my site and I'm right at the finish line for getting all the good green scores in Search Console for mobile usability and Core Web Vitals. The last outstanding thing is getting my LCP under 2.5 seconds for my blog posts.
I'm consistently getting 2.6-2.7 and I can't quite figure out why. The LCP element is the <h1> tag on my blog posts and there are no above-the-fold images.
Example URL With Higher LCP
Let me say that I am using a Beaver Builder website and some marketing tags like GA, GTM, etc but I've spent a lot of time analyzing my script and style loads to create an optimal experience with preconnects and preloads of various resources. I know it's a big ask to try to get a site like this down in load time. I'm on Kinsta hosting with low TTFB, full-page caching and CDN. I am also using the Perfmatters plugin to control various aspects of load times.
I've done everything I can think of to get the LCP down. It seems like the <h1> tag is visible almost immediately but may be repainted later, towards the end of the page load, and I can't figure out the cause of this.
Anyone out there feeling generous that could take a look?
Page Speed Insights URL
As far as I can see, PageSpeed is showing the aggregated real-user experience of your entire domain, not of this specific page but of all pages on the domain.
The DOM content load time for your page on mobile is around 3 seconds, which means your LCP is bound to be higher than that.
You need to reduce your DOM content load time first, by prioritizing your network calls so that redundant dependencies are minimized.
Also, on the desktop page you are lazy-loading the first-fold image, which is not considered good user experience, is likely contributing to the desktop LCP, and can also impact your SEO negatively.
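For the lazy-loading point, a hedged sketch of the fix, assuming the above-the-fold image lives at /images/hero.jpg (a placeholder path):

    <!-- fetch the above-the-fold image eagerly and at high priority instead of lazy-loading it -->
    <img src="/images/hero.jpg" width="1200" height="600" alt="Hero"
         loading="eager" fetchpriority="high">

    <!-- optionally hint the browser even earlier from the <head> -->
    <link rel="preload" as="image" href="/images/hero.jpg" fetchpriority="high">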

Gtag.js increasing LCP in PageSpeed Insights

I have a problem with my LCP score in Core Web Vitals (mobile). My LCP is 3 seconds, while the required LCP on mobile is 2.5 seconds. I have tried everything: optimized images with a premium plugin (EWWW Image Optimizer), served the files through a CDN, reduced network requests to a minimum, and applied the other common fixes. The only way I can get the LCP down is by removing gtag.js, which reduces the LCP to 2 seconds, but I really need this script for analyzing traffic. Is there any other way I can reduce the LCP? I have tried all the suggestions listed in Lighthouse: removed unused CSS/JS, deferred loading of images, CSS and JS.
The PageSpeed Insights results that stand out to me are:
Avoid an excessive DOM size (1,889 elements)
Minimize main-thread work (3.5 s)
In the mobile test, PSI reports 3.5 seconds spent on the main thread, with 1.275 s on style and layout. I think some of this can be attributed to the large DOM. So try to find ways to trim excess content or markup from the page to simplify the HTML structure.
Script evaluation also accounts for 0.987 s, suggesting that there is significant JavaScript code that runs on page load. I'm not too familiar with GTM but based on the results it does seem like it's contributing a lot to main thread consumption (2.88 s). Some things to check are disabling or removing unnecessary services in GTM, as these third parties can be adding excess JavaScript, and optimizing the services you do need so that anything not absolutely necessary for the initial page load is deferred until later. See the Loading Third-Party JavaScript guide for some additional advice related to GTM.
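For the "deferred until later" part, one common pattern is to inject the tag only after the load event. This is only a sketch, and the measurement ID is a placeholder:

    // load gtag.js after the window load event so it stays off the critical path
    window.addEventListener('load', function () {
      var s = document.createElement('script');
      s.src = 'https://www.googletagmanager.com/gtag/js?id=G-XXXXXXXXXX'; // placeholder ID
      s.async = true;
      document.head.appendChild(s);

      window.dataLayer = window.dataLayer || [];
      function gtag() { dataLayer.push(arguments); }
      gtag('js', new Date());
      gtag('config', 'G-XXXXXXXXXX'); // placeholder ID
    });

Note the trade-off: anything loaded this late will miss pageviews from users who leave before the load event fires.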
On the live site, the slider image is the LCP, and this is loaded via CSS. The browser does not know to load the image until the CSS is loaded and it knows the image is needed on the page. You could speed things up with a preload on that image.
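A sketch of that preload hint, assuming the slider background lives at /wp-content/uploads/slider.jpg (a placeholder path):

    <!-- let the browser discover the CSS background image before the stylesheet is parsed -->
    <link rel="preload" as="image" href="/wp-content/uploads/slider.jpg" fetchpriority="high">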
You're also using a lot of web fonts. You would free up bandwidth for the main image by reducing the number of fonts.

domInteractive vs Time to Interactive - what's the difference?

Google offers a number of polyfill libraries for measuring and tracking First Input Delay (FID) and Time to Interactive (TTI) on analytics platforms. However, these metrics do not come standard with GA.
domInteractive, however, is a metric you can track out of the box with GA.
What's the difference? The only explanation I've found for the competing interactive metrics is a vague forum post explaining that TTI may offer a more complex look at interactive delays, but without much in the way of details.
Am I better off tracking TTI on my users if I'm concerned about input delays affecting conversion, or am I fine to stick with domInteractive?
My understanding is the following:
Time to Interactive (TTI) is when the website is visually usable and engaging. For example, when a user can click around on the UI and the website is functional. Ideally, we want all experiences to get interactive as soon as possible. Examples of websites with poor TTI are websites where a user can be actively engaged with the UI for a good amount of time before anything actually happens. Poor TTI is caused by too much main-thread JavaScript, which delays interactivity for visible UI elements. An example of this is here. This is an especially important metric to consider for mobile, since not everyone has a high-end phone (so it will take longer to parse the JavaScript needed to load a site), and because of the variance that comes with different network speeds: Wi-Fi, 3G, 4G.
domInteractive however is when a page's primary content is visible and meaningful paints have occurred. At this stage a user can visually see the webpage and the corresponding UI elements that represent the site's DOM.
First Input Delay (FID) is the measurement of how long it took to respond to a user event. For example, how long did it take for a button's event handler to take over and respond once the user clicked the button.
As far as I know FID and TTI are experimental metrics right now so they probably wouldn't be baked into Google Analytics by default. As for your question: "Am I better off tracking TTI on my users if I'm concerned about input delays affecting conversion, or am I fine to stick with domInteractive?" You actually want to track FID if you're concerned with input delays affecting conversion. TTI is still a very useful metric to track since it measures when your site as a whole is interactive and both TTI and FID will provide more value than domInteractive.
If you're still interested check out this explanation on the Cost of JavaScript by Addy Osmani. He does a beautiful job explaining the performance issues we are facing with JavaScript as well as talking about TTI and FID.
Cheers
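If you do want to report these yourself, a rough sketch using standard browser APIs (the console.log calls stand in for whatever analytics call you use):

    // First Input Delay via the Event Timing API
    new PerformanceObserver(function (list) {
      var entry = list.getEntries()[0];
      var fid = entry.processingStart - entry.startTime; // delay in milliseconds
      console.log('FID:', fid);
    }).observe({ type: 'first-input', buffered: true });

    // domInteractive via the Navigation Timing API, read once the page has loaded
    window.addEventListener('load', function () {
      var nav = performance.getEntriesByType('navigation')[0];
      console.log('domInteractive:', nav.domInteractive);
    });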
According to this link, domInteractive is "when the parser finished its work on the main document". Time to Interactive is when all page scripts (including libraries such as Angular, and your own) have finished initialization, the page is not frozen, and the user can start interacting with it.
Had to dig into the Spec but I think I found what I was looking for:
The DOMContentLoaded event fires after the transition to "interactive" but before the transition to "complete", at the point where all subresources apart from async script elements have loaded.
Basically domInteractive will not reflect async scripts that are still loading in, which is why your TTI metric can vary so widely.

Do I necessarily need to remove "Render Blocking CSS"

I put my homepage through Google's PageSpeed test and it gave me a score of 69 for Mobile and 95 for Desktop. The one and only issue flagged is render-blocking CSS.
Now, all the pages on my website are entirely above the fold, i.e. there is no scrolling involved anywhere. Given this, I personally feel I should not be doing anything special, since the CSS is required to view my page the way I designed it, right from the get-go.
If I do asynchronous loading or something, it'll end up briefly showing the content on a black-and-white, unorganised page before the intended output.
Do I ignore Google? It would mean that I'd never score 100/100, and wouldn't that affect my SEO chances?
TL;DR — No, you don't have to. But in most cases, it helps, indirectly.
Render blocking is in place to prevent FOUC (a flash of unstyled content).
Ideally you should only load the CSS responsible for rendering the "above the fold" of your page as render blocking and all the rest of your styles using async methods.
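For the "async methods" part, a commonly used sketch (non-critical.css is a placeholder; the critical above-the-fold rules would stay inlined or render blocking):

    <!-- load non-critical CSS without blocking rendering -->
    <link rel="preload" href="/css/non-critical.css" as="style"
          onload="this.onload=null; this.rel='stylesheet'">
    <noscript><link rel="stylesheet" href="/css/non-critical.css"></noscript>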
However, most sites load all their CSS as render blocking. Why? Because most websites cannot afford a CSS specialist to customize their CSS loading for their specific case. They'll sometimes pay for a theme, but that's it.
Themes are not typically optimized from this point of view because there is no way to know what elements the user will want in their above the fold area.
Is this a huge problem?
NO.
First of all, all of this only matters when the user loads the very first page of your website. All the other pages will use the cached stylesheets already loaded on that first visit (unless you load different stylesheets for different pages).
And second of all, the general idea that Google lowers your page's SEO score for having render-blocking CSS is, technically, wrong. They do penalize for a lot of other reasons (like accessibility, readability and responsiveness issues) but not for having render-blocking CSS.
However, there is an indirect correlation between the two.
Google Page Speed is a tool telling you how you can improve the loading speed of your page or to leave the impression the page loads faster.
if you fix the problems it identifies, the page will load faster, or at least it will seem to load faster
if your page is or feels faster, there is less chance users will hit the back button while waiting for it to load.
THIS user behavior is where the SEO penalty comes in. Google registers any such behavior as a general "user did not find what he was looking for on that website" and lowers the page's SEO score for whatever the user searched for.
Any method of keeping users from hitting the back button in the first 30 seconds after they leave the search results for your website (thereby keeping the bounce rate down) is a good method to fight SEO penalties.
And... it's true: one of the most efficient methods is to make your page load faster.
Others include:
make the loading process look professional (place correctly sized placeholders for images, so the page doesn't jump around when loading; see the sketch after this list);
keep FOUC as close to 0 as possible
render something, rather than nothing
if possible, give users a general idea of how much of the page has loaded (in %)
make the website load up with a basic schema (outline) of what's on longer pages. Users will read the schema, trying to figure out if they're on the right page, and they won't notice the loading time, since you've given them something to do while waiting
cut the "bla bla" and try to be honest about whatever your page has to offer
I can't emphasize this enough: it really pays off to be honest. There is a huge difference in results, SEO wise:
If your page is about A, but you want to show this to users looking for B, do not tell them you've got B and don't hide it from them. Just tell them:
"Look, this is not B, it's A, but here are a few reasons why you should consider A instead of B."
Most users will read those reasons. Especially if they're well written, they address real problems, and they don't look like they're just trying to buy time.
A very good idea is to place your strongest argument second or third in the list (second if first is rather long, third if first two are not so long).
The reasoning is: if you place it further down, many users don't read past three weak arguments - they label the entire list as unconvincing and go back.
Also, if you place the arguments in the order of their importance, the user will realize it and, as soon as they reach two arguments that are not convincing, they'll assume it gets worse further down the list and, again, they'll hit back button.
But if you place a second or third argument stronger than the previous ones, they will read through the entire list hoping to find another one.
Now, if your arguments are compelling, the user will go for A instead of B => Win.
If not, they will still go for B, but at least they'll do it later (after they read your reasons), and the penalty will be much smaller, if any (the longer time a user spends on your page, the less the penalty, should they press back) => No loss.
If you can keep the user occupied for more than 30 seconds, you're typically in the clear SEO wise. And that's the really important SEO issue at hand, not render-blocking per-se.
In the end, it is totally possible to create a page with a very low score on Page Speed while having a very high SEO score. It's unlikely, but totally possible.

How to speed up Google adsense and analytics loading time?

This might fall under the category of "you can't", but I thought it might be prudent to at least see if there is something I can do about this.
According to FireBug, the major bottleneck in my page loading times seems to be a gap between the loading of the html and the loading of Google adsense and analytics. Notice in the screenshot below that the initial GET only takes 214 ms, and that adsense + analytics loading takes roughly 130 ms combined. However, the entire page load time is 1.12 seconds due to that large pause in between the initial GET and the adsense/analytics loading.
If it makes any difference at all, the site is running off of the ASP.NET MVC RC1 stack.
(Screenshot: http://kevinwilliampang.com/pics/firebug.jpg)
Update: After removing AdSense and Analytics, I'm still seeing a slow response time. Hovering over the initial GET request, I see the following timings: 96 ms Receiving Data, 736 ms DOMContentLoaded (event), 778 ms 'load' (event). I'm guessing, then, that the performance is a result of my own jQuery JavaScript that has processing tied to the $(document).ready() event?
You should place your analytics code at the bottom of the page so that everything else loads first. Other than that, I don't think there's much you can do.
edit: Actually, I just found this interesting blog post on a way to speed up analytics by hosting your own urchin.js file. Maybe it's worth a look.
I've never seen anything like that using Firebug on Stack Overflow and we use Analytics as well.
I just ran a trace and I see the request for
http://www.google-analytics.com/__utm.gif?...
happening directly after the DOMContentLoaded event (the blue line). So I'd suspect AdSense first. Have you tried disabling that?
As it goes, I happen to have rather heavily researched this just this week. Long story short, you are screwed. As others have said the best you can do is put it at the bottom of the list of requests and make the rest of your code depend on ready rather than onload events - jQuery is really good here. Some portion of the js is static, so you could clone that locally if you keep an eye on it for maintenance purposes.
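A sketch of the ready-vs-onload point, assuming jQuery is already on the page (initPage is a hypothetical function):

    // runs as soon as the DOM is parsed; does not wait for ads/analytics to download
    $(document).ready(function () {
      initPage(); // hypothetical setup that only needs the DOM
    });

    // runs only after every resource (images, iframes, ad scripts) has loaded
    $(window).on('load', function () {
      // keep only work that genuinely needs all resources here
    });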
The google code isn't quite as helpful as it could be in this area*, but it's their ballgame and anything you do to change it is going to be both complex and risky. In theory, wrapping with a non-blocking script call in the header is possible, but would be unlikely to gain you a benefit given the additional abstraction, and ultimately with adsense your payload is an html source, not script.
* it's possible google have a good reason, but nothing I can deduce from the code they expose
Probably not anything you can do aside from putting those includes right before the closing body tag, if you haven't already. JavaScript includes block parallel HTTP requests, which is why they should be kept out of <head>.
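In practice that placement looks roughly like this (the snippet bodies are placeholders for whatever AdSense/Analytics give you):

    <body>
      <!-- ...page content renders first... -->

      <!-- third-party includes go last, just before </body> -->
      <script src="http://www.google-analytics.com/urchin.js"></script>
      <script>/* tracker/AdSense snippet would go here */</script>
    </body>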
Surely Google's servers will be the fastest part of the loading, given that your ISP and most ISPs will have a local cache of the files too?
You could inject the script into the head on page load perhaps, but I'm not sure how that affects urchin.js.
Could be that your page simply takes that long to parse? It seems nothing network-related is happening. It simply waits around a second before the adsense/analytics requests are even fired off.
I don't suppose you have a few hundred tables on the page or something? ;)
