What is the benefit of base64 encoding a favicon? - single-page-application

I've got this web app where the favicon is inlined in the HTML, e.g.,
<link rel="icon" href="data:image/x-icon;base64,A VERY VERY LONG STRING...">
However, I can definitely see that both Chrome and Firefox (latest versions as of this date) still issue a request to favicon.ico at the root of my website anyway, e.g. http://example.com/favicon.ico
In case it matters:
The base64-encoded string embedded in the href attribute is quite big.
The favicon <link> tag is managed by react-helmet
The website itself isn't particularly slow. (Consistent good Apdex score throughout.)
I can only assume that the developers at the time (all gone now) wanted to inline the favicon to avoid an HTTP request and therefore wrote some "infrastructure" to support that: namely using a Webpack plugin to automatically base64 encode all assets imported as JavaScript modules (e.g. import favicon from './assets/favicon.ico').
Clearly this isn't working as intended, but what strikes me the most is that the actual base64 string weighs more than the favicon.ico file itself (20k vs 15k). So I'm not sure where the benefit is (if any).

While I don't know any better than you why the original developers designed it that way, it makes sense for offline rendering of a simple all-in-one HTML file.
I actually just looked this up, because I am building a SUPER small all-in-one HTML file. I don't have to include an extra file if it's base64-encoded into the single HTML file.

Here's my last two days of reading condensed into a few minutes.
As of 2021, about 93% of browsers in use could display an SVG as a favicon:
https://en.wikipedia.org/wiki/Usage_share_of_web_browsers
https://caniuse.com/link-icon-svg
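For reference, the common markup pattern (file names assumed) is to offer the SVG alongside a classic .ico fallback and let the browser pick whichever it supports:
<link rel="icon" href="/favicon.ico" sizes="any">
<link rel="icon" href="/favicon.svg" type="image/svg+xml">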
.ICO is an outdated way to create favicons and requires you to make multiple small sizes of your image, whereas .PNG can scale down from any size, which makes it easily the best lazy option for a quick icon. Because favicons are displayed so small, any complex picture becomes indistinguishable, so very simple designs are optimal.
This is where .SVG shines.
https://www.iconfinder.com/
find image > inspect > open in new page > save image as
Paint 3D's Magic Select is a free tool worth mentioning.
This is by far the most informative and straightforward video on auto SVG:
https://www.youtube.com/watch?v=10m_2bPXa1s
Now we're left with 4-8 KB of data, which could be a fifth of your .PNG's size.
Next we'll want to optimize it:
https://www.youtube.com/watch?v=iVzW3XuOm7E
So we could skip an extra HTTP request by keeping all the data in the head, but that leads us here:
https://css-tricks.com/probably-dont-base64-svg/
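The short version of that article: if you do inline an SVG icon in the head, URL-encoding the markup is usually smaller and compresses better than base64-encoding it. A rough sketch with a trivial placeholder icon:
<link rel="icon" type="image/svg+xml" href="data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16'%3E%3Ccircle cx='8' cy='8' r='8' fill='%23369'/%3E%3C/svg%3E">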
Now say we're creating a Single Page Application and care about SEO. Not only do we score higher and reduce our load times, but we also offer a better experience for users with the slowest internet speeds.

Related

Is it a bad idea to let a css background-image url point to a non-image file?

I want to do something hackish in order to have responsive images generated automatically with js & php. I would like to do this with my css background-image: url(#/path/to/image.jpg), so that I can specify various image paths that javascript can grab but which won't actually result in separate HTTP requests.
The browser interprets the above code as http://example.com/cssfolder/#/path/to/image.jpg, which means it is just pointing to the CSS folder. In my brief tests, if there is an index file in the folder the Chrome console will complain "Resource interpreted as Image but transferred with MIME type text/html", but if there is no index it will give a 404 message.
So that feels kind of bad, but it doesn't seem to affect the browsing experience. I guess developers may not like this method since it pollutes the console with errors, but is there an objective reason to avoid this? Are there some browsers that would actually mess up the page, bearing in mind that a real background image would be immediately supplied with javascript?

Is it faster to precompile less?

So I am wondering whether it is better for page speed to precompile my Less style sheets instead of using less.js. After testing this via Google PageSpeed, I noticed my score actually went down a couple of points after precompiling. As far as I know it should be less demanding on the user, but I guess I am wrong? Is there something else I should take into account? Minifying the CSS is also an option, but I don't think that will make a considerable difference.
Thanks.
I think you should change the title of your question to something like "Does precompiling Less generate a better page speed score?"
It probably depends on the size of your Less code. In general, PageSpeed wants you to load something readable first and load non-critical CSS and JavaScript afterwards.
When you compile all your Less code into one big CSS file, PageSpeed will complain that you should load the critical CSS first (preferably inlined in the source, so there is no extra HTTP request).
When compiling client side with less.js, the compiled CSS code is inserted into the source, but this requires two HTTP requests (the compiler and the Less file). Less code can be smaller than the compiled CSS, but you will have to load the compiler (possibly from a CDN).
But overall you should realize that the client-side compiler has to compile your Less code again on every page request. Client-side compiling takes time and so creates a bad user experience in most situations.
Minifying the css is also an option, but I don't think that will make a considerable difference.
Minifying reduces the number of bytes that have to be sent and so always helps make your website load faster.
Some tests:
When I load a simple page with:
<link href="css/bootstrap.min.css" rel="stylesheet">
I got a PageSpeed score of 95/100 and had to fix:
Optimize CSS Delivery of the following:
http://example.com/css/bootstrap.min.css
When I load the same page with:
<link rel="stylesheet/less" type="text/css" href="less/bootstrap.less" />
<script src="//cdnjs.cloudflare.com/ajax/libs/less.js/2.0.0/less.min.js"></script>
I got a PageSpeed score of 91/100 and had to fix:
Remove render-blocking JavaScript:
http://cdnjs.cloudflare.com/ajax/libs/less.js/2.0.0/less.min.js
Although the second situation has to make many HTTP requests (to load all the Less code) and run the compiler client side, the score is not much lower (and still good enough).
If you don't optimize the CSS in the first situation, for instance by minifying the code, setting proper caching headers and so on, the PageSpeed score will go down further.
So in summary: yes, you should precompile your Less code and minify the CSS for a better user experience; PageSpeed is not always a good predictor of the user experience.

"Eliminate render-blocking CSS in above-the-fold content"

I've been using Google PageSpeed insights to try and improve my site's performance, and so far it's proven extremely successful. Things like deferring scripts worked beautifully, since I already had an in-house version of jQuery's .ready() to defer scripts until the page had loaded fully, all I had to do was inline that particular function and move the full scripts to the end of the page. That worked great.
But now I find myself glaring at the one remaining yellow dot on the checklist: "Eliminate render-blocking CSS in above-the-fold content".
The way my CSS is set up is to have one global _.css file containing styles that apply to the page structure in general, or are used in more than one or two places across the site. Most pages then have an associated CSS file (for instance, party.php has party.css) containing styles specific to that particular page. All CSS files are cached indefinitely, as I append /t=FILEMTIME to filenames (and later remove them with .htaccess) in order to guarantee that files are updated when they are changed.
So anyway, Google recommends inlining critical styles needed for above-the-fold content. Trouble is... well, take a look at this screenshot: http://prntscr.com/1qt49e
As you can see... ALL of the content is above-the-fold! People hate scrolling, especially on a game that involves loading many pages. So I designed the site to fit on one screen (assuming a good enough resolution). So that means... ALL of the styles apply to above-the-fold content! So... is there any solution? Or am I stuck with that yellow mark on an otherwise near-perfect score?
A related question has been asked before: What is “above-the-fold content” in Google Pagespeed?
Firstly you have to notice that this is all about 'mobile pages'.
So if I interpreted your question and screenshot correctly, this is not for your site!
On the contrary: doing some of the things advised by Google in their guidelines will make things worse rather than better for 'normal' websites.
And not everything that comes from Google is the "holy grail" just because it comes from Google. And they themselves are not a good role model if you have a look at their HTML markup.
The best advice I could give you is:
Set width and height on replaced elements in your CSS, so that the browser can lay out the elements and doesn't have to wait for the replaced content!
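For example (class name and sizes assumed), something like this lets the browser reserve the space before the image itself has arrived:
<style>
  /* fixed dimensions for a replaced element, so layout doesn't wait for the file */
  img.logo { width: 200px; height: 60px; }
</style>
<img class="logo" src="/images/logo.png" alt="Logo">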
Additionally, why do you use several CSS files rather than just one?
The additional request costs more than the small amount of extra data, and after the first request the CSS file is cached anyway.
The things one should always take care of are:
reduce the number of requests as much as possible
keep your overall page weight as low as possible
And don't puzzle your brain about how to get 100% of Google's PageSpeed Insights tool ...! ;-)
Addition 1: Here is the page on which Google shows what they recommend for Optimize CSS Delivery.
As said before, I don't think this is realistic, nor that it makes sense for a "normal" website! With a responsive web design you are almost certainly using media queries and other layout styles, so if you don't load your CSS first and in a blocking manner you'll get a FOUT (Flash Of Unstyled Text). I really do not believe that this is "better" than a few more milliseconds to render the page!
IMHO Google is starting a new "hype" (judging by all the questions about it here on Stack Overflow)...!
How I got a 99/100 on Google Page Speed (for mobile)
TLDR: Compress and embed your entire css script between your <style></style> tags.
I've been chasing down that elusive 100/100 score for about a week now. Like you, the last remaining item was eliminating "render-blocking CSS for above-the-fold content."
Surely there is an easy fix? Nope. I tried out Filament Group's loadCSS solution. Too much JS for my liking.
What about an async attribute for CSS (like for JS)? It doesn't exist.
I was ready to give up. Then it dawned on me. If linking the script was blocking the render, what if I instead embedded my entire css in the head instead. That way there was nothing to block.
It seemed absolutely WRONG to embed 1263 lines of CSS in my style tag. But I gave it a whirl. I compressed it (and prefixed it) first using:
postcss -u autoprefixer --autoprefixer.browsers 'last 2 versions' -u cssnano --cssnano.autoprefixer false *.css -d min/
See the NPM postcss package.
Now it was just one LONG line of space-less css. I plopped the css in <style>your;great-wall-of-china-long;css;here</style> tags on my home page. Then I re-analyzed with page speed insights.
I went from 90/100 to 99/100 on mobile!!!
This goes against everything in me (and probably you). But it SOLVED the problem. I'm just using it on my home page for now and including the compressed css programmatically via a PHP include.
YMMV (your mileage may vary) depending on the length of your CSS. Google may ding you for too much above-the-fold content. But don't assume; test!
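For the curious, the end result is just the minified output dropped straight into the document head; a minimal sketch, where the single rule stands in for the real minified bundle:
<head>
  <meta charset="utf-8">
  <title>Home</title>
  <!-- the minified CSS from the postcss step above gets pasted or included right here -->
  <style>body{margin:0;font:16px/1.5 sans-serif}.nav{background:#369;color:#fff}</style>
</head>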
Notes
I'm only doing this on my home page for now so people get a FAST render on my most important page.
Your css won't get cached. I'm not too worried though. The second they hit another page on my site, the .css will get cached (see Note 1).
A few tips that may help:
I came across this article on CSS optimization yesterday:
CSS profiling for ... optimization
A lot of useful info on CSS and what CSS causes the most performance drains.
I saw the following presentation at jQuery UK on "hidden secrets" in Google Chrome (Canary) DevTools:
DevTools Can do that.
Check out the sections on Time to First Paint, repaints and costly CSS.
Also, if you are using a loader like RequireJS you could have a look at one of the CSS loader plugins, called require-CSS, which uses CSSO, an optimizer that also does structural optimization, e.g. merging blocks with identical properties. I used it a few times and it can save quite a lot of CSS, depending on the case.
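As a rough illustration of that structural optimization (selectors made up), rules with identical declaration blocks get merged into one:
<style>
  /* before: two rules with identical declarations
       .btn-save   { color: #fff; background: #369; }
       .btn-submit { color: #fff; background: #369; }
     after structural optimization they collapse into: */
  .btn-save, .btn-submit { color: #fff; background: #369; }
</style>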
Off topic:
I second @Enzino's suggestion of creating a sprite for all the small icons you are loading. The file sizes are so small that they don't really warrant a server round trip for each icon. Also keep in mind the limit on concurrent HTTP requests a browser can make, so requests for a large number of small icons are "render-blocking" as well. Although it is an empty page compared to yours, I like how DuckDuckGo loads, for example.
Please have a look at the following page: https://varvy.com/pagespeed/render-blocking-css.html
This helped me get rid of "Render Blocking CSS". I used the following code to remove it, and now Google PageSpeed Insights no longer reports an issue related to render-blocking CSS.
<!-- loadCSS -->
<script src="https://cdn.rawgit.com/filamentgroup/loadCSS/6b637fe0/src/cssrelpreload.js"></script>
<script src="https://cdn.rawgit.com/filamentgroup/loadCSS/6b637fe0/src/loadCSS.js"></script>
<script src="https://cdn.rawgit.com/filamentgroup/loadCSS/6b637fe0/src/onloadCSS.js"></script>
<script>
/*!
  loadCSS: load a CSS file asynchronously.
*/
function loadCSS(href) {
  var ss = window.document.createElement('link'),
      ref = window.document.getElementsByTagName('head')[0];
  ss.rel = 'stylesheet';
  ss.href = href;
  // temporarily, set media to something non-matching to ensure it'll
  // fetch without blocking render
  ss.media = 'only x';
  ref.parentNode.insertBefore(ss, ref);
  setTimeout(function () {
    // set media back to `all` so that the stylesheet applies once it loads
    ss.media = 'all';
  }, 0);
}
loadCSS('styles.css');
</script>
<noscript>
<!-- Let's not assume anything -->
<link rel="stylesheet" href="styles.css">
</noscript>
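For what it's worth, the cssrelpreload.js file loaded above is the polyfill for the declarative preload pattern, so instead of calling loadCSS() by hand you could also write (stylesheet name assumed):
<link rel="preload" href="styles.css" as="style" onload="this.onload=null;this.rel='stylesheet'">
<noscript><link rel="stylesheet" href="styles.css"></noscript>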
I too have struggled with this new PageSpeed metric.
Although I have found no practical way to get my score back up to 100%, there are a few things I have found helpful.
Combining all CSS into one file helped a lot. All my sites are back up to 95%-98%.
The only other thing I could think of was to inline all the necessary CSS (which appears to be most of it, at least for my pages) on the first page to get the sweet high score. Although it may help your speed score, it will probably make your page load slower.
The 2019 optimal solution for this is HTTP/2 Server Push.
You do not need any hacky JavaScript solutions or inline styles. However, you do need a server that supports HTTP/2 (any modern server version will), which itself requires your server to run SSL. With Let's Encrypt there's no reason not to be using SSL anyway.
My site https://r.je/ has a 100/100 score for both mobile and desktop.
The reason for these errors is that the browser gets the HTML, then has to wait for the CSS to be downloaded before the page can be rendered. Using HTTP2 you can send both the HTML and the CSS at the same time.
You can use HTTP/2 push by setting the Link header.
Apache example (.htaccess):
Header add Link "</style.css>; as=style; rel=preload, </font.css>; as=style; rel=preload"
For NGINX you can add the header to your location tag in the server configuration:
location = / {
add_header Link "</style.css>; as=style; rel=preload, </font.css>; as=style; rel=preload";
}
With this header set, the browser receives the HTML and CSS at the same time which stops the CSS from blocking rendering.
You will want to tweak it so that the CSS is only sent on the first request, but the Link header is the most complete and least hacky solution to "Eliminate Render-Blocking JavaScript and CSS".
For a detailed discussion, take a look at my post here: Eliminate Render Blocking CSS using HTTP/2 Push
Consider using a package to automatically generate inline styles from your css files. A good one is Grunt Critical or Critical css for Laravel.

Flash Of Unstyled Content (FOUC) in Firefox 3.5+

We've reached the end of our tether here trying to overcome a nasty and intermittent FOUC in Firefox 3.5.x+ for a new release we're working on.
We've tried:
Disabling Javascript in FF
Using Quirks mode rendering by removing the DOCTYPE
Moving from #import for additional CSS to <link>
Switching concatenation on and off
Removing CSS files from the concat, one at a time
Switching the local cache off in Firefox
etc
Our previous release never exhibited any FOUC issues, so it's something we've done to this release. Changes we've made so far include:
Using base64-encoded images (as data URIs) for all decorative imagery, served via CSS.
Separating 'framework'-related CSS files from page-specific CSS and bundling them as two separate CSS files
To recreate the problem... use Firefox 3.5.x or 3.6.x, then:
Head on over to: http://my.publisher-subdomain.env.yola.net/
Login with username: 'stack#yola.com' and password: 'stackoverflow'
Once logged-in, you should be at http://my.publisher-subdomain.env.yola.net/sites/
Click the Account link in the main nav.
The Account page should load, and you should see a FOUC. If the FOUC does not occur, clear your cache and reload the page.
Your help would be greatly appreciated! :)
UPDATE:
The dev environment is still exhibiting the FOUC, but only if Firefox is running low on memory or has a lot of extensions installed. Latency and rendering speed definitely affect the visibility of this FOUC.
Although this is a very old question, I found it when I was searching for a solution to the same problem. So, I wanted to post the solution for future reference. I just needed to move the reference to my CSS files above the references to external JavaScript that needed to be in my header.
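In other words, the head ends up ordered roughly like this (file names assumed):
<head>
  <meta charset="utf-8">
  <!-- stylesheets first, so rendering is not held up behind script downloads -->
  <link rel="stylesheet" href="framework.css">
  <link rel="stylesheet" href="page.css">
  <!-- external scripts that must stay in the head come after the CSS -->
  <script src="app.js"></script>
</head>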
I could be wrong, but this could be a concurrent connections issue. According to Firebug's "Net" tab, the HTML page simply takes a long time to load (maybe also because it is on a development server?) and the style sheet only gets loaded after the HTML page.
I can't claim to entirely understand what's happening here, but I would try putting the style sheet onto a different domain as a first measure. That should make Firefox establish a connection straight away.
It would probably also be a good idea to go back to normal images instead of data: URIs - that would reduce the size of the style sheet, and data: URIs won't work at all in IE < 8.

What is the style.css?ver=1 tag?

I found out that some websites use a CSS reference like style.css?ver=1. What is this?
What is purpose of ?ver=1?
How do I do it in code?
To avoid caching of CSS.
If the website updates its CSS, it bumps ver to a higher number, so the browser is forced to fetch the new file instead of using the previously cached version.
Otherwise a browser may get the new HTML but the old CSS, and some elements of the website may look broken.
Adding '?ver=1' makes the HTTP request look like a GET query with parameters, and many proxies (and some older browsers) will refuse to cache parameterized URLs. Of course well-behaved browsers (and proxies) should also pay attention to the 'Cache-Control: no-cache', 'Expires', 'Last-Modified', and 'ETag' response headers (all of which were added to HTTP to specify correct caching behavior).
The '?ver=1' method is an expensive way to force behavior when the site developer doesn't know how (or is too lazy) to implement the correct response headers. In particular, it means that every page request is going to force requesting that CSS file, even though, in practice, CSS files change rarely, if at all.
My recommendation? Don't do it.
The purpose of the ?ver=1 is to parameterize the CSS file, so when they publish a new style.css file they bump the version, which forces the client to download the new file instead of pulling it from the cache.
If you are developing a web application in HTML and CSS or any other technology, and you are using some external CSS or JS files, you might notice that in some cases, if you make changes to your existing .css or .js files, the browser does not reflect the changes immediately.
What happens in that case is that the browser does not download a fresh copy of the latest .css and .js files; instead it uses the copies stored in the local cache. As a result, the changes you made recently are not visible to you.
<link rel="stylesheet" href="style.css?v=1.1">
In the above case, when you load the web page the browser treats "style.css?v=1.1" as a different file from plain "style.css". Hence the browser is forced to download a fresh copy of the stylesheet or script file.
I think that ?ver=1 is for the version number of the web app. Every time a new build is created, the app can update ver to the new version. This is so that the browser will load the new CSS file and not use the cached one (the two URLs are treated as different files).
You can refer to this site: http://www.knowlegezone.com/36/article/Technology/Software/JavaScript/CSS-Caching-Hack----javascript-as-well
IMO a better way to do this would be to include a hash generated from the file contents, or a checksum based on the file size or last-modified date. That way you don't have to update a version number by hand; the value is driven by the file's changing properties.
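A minimal sketch of what that looks like in the markup; the value would be generated by a build step or server-side code from the file's checksum or mtime (the hash here is made up):
<link rel="stylesheet" href="style.css?v=9f86d081">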
