How can I optimize all my images? - gruntjs

Diagnose
I recently came across PageSpeed Insights; it basically tests your page speed, spits out a score, and displays what causes your page to slow down.
I entered my URL and here is my result.
Issues
I clearly don't have a great score, but I'm working on improving it.
I got a lot of image optimization warnings. I've tried two things.
1. Use the ImageOptim software
I've tried using the ImageOptim Mac software to optimize all the images in my img/ folder.
2. Use the grunt imagemin plug-in
On top of that, I use a build tool to re-compress all the images in my img/ folder and store the compressed ones in dist/img/.
imagemin: {
  dynamic: {
    options: {
      optimizationLevel: 3,
      svgoPlugins: [{ removeViewBox: false }],
      use: [mozjpeg()]
    },
    files: [{
      expand: true,                    // Enable dynamic expansion
      cwd: 'img',                      // Src matches are relative to this path
      src: ['**/*.{png,jpg,gif,ico}'], // Actual patterns to match
      dest: 'dist/img'                 // Destination path prefix
    }]
  }
}
Imagemin Result
Luckily, all 104 of my images got reduced, down 4.11 MB in total.
Re-Test Result
But sadly, after re-linking my entire page to the new images directory dist/img/ and testing the site again with PageSpeed Insights, I still get the same image optimization warning.
How can I fix/improve this?
Is it because I set optimizationLevel: 3 too low?
Any approach/idea/strategy/better solution/suggestion?
Thanks a lot!

I would recommend optimizing your images beforehand using one of these tools:
Windows
FileOptimizer — it uses multiple tools to make your images as small as possible.
JPEG (All Platforms)
MozJPEG (by Mozilla)
PNG (All Platforms)
PNGQuant, especially its web frontend TinyPNG (careful, lossy!)

You could consider using a PageSpeed server module.
Those are able to automatically apply image optimization and thereby satisfy PageSpeed Insights recommendations.
See https://developers.google.com/speed/pagespeed/module and https://developers.google.com/speed/pagespeed/module/faq#other-servers for availability.
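For instance, with the Apache variant (mod_pagespeed), image optimization is turned on with a couple of directives. This is only a sketch based on the module's documented filter names; check the docs above for your platform and version:

```apache
<IfModule pagespeed_module>
    ModPagespeed on
    # Recompress, convert, and resize images automatically at serve time
    ModPagespeedEnableFilters rewrite_images,recompress_images,resize_images
</IfModule>
```

Because the module rewrites images on the fly, it tends to clear the "optimize images" warning without changing your build pipeline.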

There is nothing wrong with using imagemin.
The problem is CSS-based resizing. If the natural image size is 150px and the CSS squeezes it into a 100px box, Google wants you to resize the image to 100px.
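Since you're already on Grunt, one way to generate correctly sized variants is a resizing task such as grunt-responsive-images. This is a sketch; the size names and widths are arbitrary examples you'd match to your layout's actual display sizes:

```js
responsive_images: {
  resize: {
    options: {
      // Generate one variant per breakpoint actually used in the CSS
      sizes: [
        { name: 'small', width: 320 },
        { name: 'large', width: 1024 }
      ]
    },
    files: [{
      expand: true,
      cwd: 'img',
      src: ['**/*.{jpg,png}'],
      dest: 'dist/img'
    }]
  }
}
```

Then reference the variant whose width matches the CSS box, instead of letting the browser scale a larger file down.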

Related

Reduce loading time of website after minify / cache

I have created a website in WordPress and it's about to go live from the test server to the live server.
It's a simple website with multiple plugins.
After completing development we enabled caching and minified CSS/JS with the plugins below.
1) Better WordPress Minify
2) W3 Total Cache
I have been testing the website on my test server, which is a shared server.
I have tested loading time on:
1) Pingdom
2) GTmetrix
3) Google PageSpeed tool
4) WebPageTest
Now the website loading time varies from 6-10 s. Can you help me reduce it further? (I have applied all the .htaccess tricks and W3 Total Cache settings.)
Below are the parameters from GTmetrix and Google PageSpeed that still need fixing, which I have tried but couldn't achieve:
GTmetrix:
1) YSlow -> Add expiry headers (the list shows the minified JS/CSS only (the minified bundles only))
2) PageSpeed -> Leverage browser caching (the list shows the minified JS/CSS only (the minified bundles only))
Google PageSpeed:
1) Leverage browser caching (the list shows the minified JS/CSS only (the minified bundles only))
Can anyone guide me further on:
1) How can I apply browser caching for the already-minified JS and CSS?
2) There are multiple images from the database that take time to load on the home page. How can I reduce their loading time? (The images are optimized already.)
I have searched Google but couldn't find anything suitable for me.
Please help.
Thank you in advance.
For #2, you can implement lazy load for images. Also, make sure you are specifying width and height of the images, and loading appropriate size images (i.e. not scaling down to display the required size).
The images might be optimized but are they as small (width x height) as they can be? You didn't load a larger image than you needed did you?

Xamarin.Forms: Organize images in subfolders

Is it possible to organize my images in subfolders? Something like this example.
Android Project:
·Drawable
·navbarIcons
-user.png
-stats.png
·statsImages
-goals.png
-assists.png
iOS Project:
·Resources
·navbarIcons
-user.png
-stats.png
·statsImages
-goals.png
-assists.png
Or is it mandatory to leave them in the Drawable/Resources folder?
It's definitely possible to create subfolders on iOS. Also ensure that your casing is correct, because the resources are case-sensitive, and that your Build Actions are set up correctly.
UPDATE:
As you can see, it's possible to add them in subfolders. Don't forget to add your images in the correct sizes, such as @2x and @3x for iOS, as I did below.
UPDATE 2:
Another thing you could do is put the images in your shared PCL project and go the embedded-images route. I believe this route doesn't give you as much flexibility when it comes to DPIs, though:
https://developer.xamarin.com/guides/xamarin-forms/user-interface/images/#Embedded_Images

Is there any performance impact of using too many font-face declarations?

Question:
If there is a big list of font-faces in a CSS file (say over 2000), how efficiently will the browser pick a font from the "big list" to apply to an HTML block? Please ignore font file size, network latency, caching, and everything else.
Details:
I am working on an open-source project to create a single CSS file that contains font-faces for all fonts hosted on fonts.google.com. The goal is to make font inclusion simple and reusable across projects, i.e. just include the same single CSS file in every project and go.
I am concerned about the performance impact of so many font-face declarations. The overall CSS file size will be less than 25 KB gzipped, so I am OK with that. But there will be 1950+ font-faces, which could potentially make some browsers slow in the real world.
These font files will not be downloaded by browsers unless they are actually used in the HTML document, so that's not an issue. I am just concerned about browsers' efficiency in holding these font-face declarations in memory and referencing them when they are used in CSS.
Can anyone help?
Edit:
Here is the css file that I plan to use: https://raw.githubusercontent.com/praisedpk/Local-Google-Fonts/master/google-fonts/webfonts.css
It's from the GitHub repo: Local Google Fonts
Yes, there is a performance impact, but it is not terribly high.
To measure the impact, I used Chrome's "Inspect" option with the "Network" tab.
(Other options are available to do this too.)
I went to the link provided after checking the "Disable cache" checkbox. Doing so lets you measure the full impact of downloading the file for the first time.
This shows that the webfonts.css file is 17.9 KB in size and took my system, using Chrome, 26 ms to download. (Not too bad, but it all depends on what is good enough in your judgement.)
If you hover over the "Waterfall" time bar you can see more details about this file download to the browser.
You have to consider two perspectives: The network and the rendering of the browser. Both aspects cause performance impacts when you load a lot of fonts. Google has the same problems on https://fonts.google.com/ - and they have solved it quite well.
So let's see what they have done. Open Chrome, go to https://fonts.google.com/, press F12 to open the developer tools and press F5 to reload the page. Now switch to the Network tab to see which files have been loaded. Surprise: only about 25 fonts have been loaded.
Now scroll down the page to see more font samples - and watch the network tab. You will see that Google loads more and more fonts while you scroll down.
This is called "loading on demand". See this SO topic for more information: Load fonts on demand
var link = document.createElement("link")
link.setAttribute("rel", "stylesheet")
link.setAttribute("type", "text/css")
link.setAttribute("href", "http://fonts.googleapis.com/css?family=Indie+Flower")
document.getElementsByTagName("head")[0].appendChild(link)
So Google watches which parts of the page are currently visible and loads those fonts immediately. Of course, they also apply caching headers to make this process faster when a user revisits the website.
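The snippet above hard-codes one family; a small helper can build the stylesheet URL for any family before injecting the link. The function names here (googleFontUrl, loadFont) are just illustrative, not part of any API:

```javascript
// Build a Google Fonts stylesheet URL for a given family name.
// Google Fonts expects spaces in family names encoded as '+'.
function googleFontUrl(family) {
  return 'http://fonts.googleapis.com/css?family=' + family.trim().replace(/ /g, '+');
}

// Inject a <link> for the family (browser-only; requires a DOM).
function loadFont(family) {
  var link = document.createElement('link');
  link.setAttribute('rel', 'stylesheet');
  link.setAttribute('type', 'text/css');
  link.setAttribute('href', googleFontUrl(family));
  document.getElementsByTagName('head')[0].appendChild(link);
}
```

Calling loadFont('Indie Flower') reproduces exactly the request from the snippet above.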
To detect if a certain part of your site is in view you can use scripts like this one: https://plnkr.co/edit/91gybbZ7sYVELEYvy1IK?p=preview
$(document).ready(function() {
  var myElement = $('.someObject');
  window.onscroll = function() {
    var screenTop = $(window).scrollTop();
    var screenBottom = screenTop + $(window).height();
    var elTop = $(myElement).offset().top;
    var elBottom = elTop + $(myElement).height();
    var info = 'screenTop:' + screenTop + ' screenBottom:' + screenBottom +
               ' elTop:' + elTop + ' elBottom:' + elBottom;
    if ((elBottom <= screenBottom) && (elTop >= screenTop)) {
      $('.info').html('Element is in view <small>' + info + '</small>');
    } else {
      $('.info').html('Element is out of view <small>' + info + '</small>');
    }
  };
});
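The scroll handler above boils down to a pure interval comparison, which you can factor out and unit-test without any DOM (the function name isFullyInView is just an illustrative choice):

```javascript
// True when the element's span [elTop, elBottom] lies fully inside
// the visible screen span [screenTop, screenBottom].
function isFullyInView(elTop, elBottom, screenTop, screenBottom) {
  return elBottom <= screenBottom && elTop >= screenTop;
}
```

In the handler you would call it with the same jQuery-derived values, e.g. isFullyInView(elTop, elBottom, screenTop, screenBottom).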
If I understand your question correctly, you're not asking about network loading times but rather about the time to select the right font declaration from the loaded list.
The time to select the font from the CSS file is negligible. Almost any other operation while loading your site will take much more time. If you'd like to test this with your browser of choice, just open the CSS file in the browser and search for any font with CTRL+F. The results should be visible in less than 100 ms, even on older machines.
Anyway, you should consider that not all browsers behave the same: Internet Explorer (or at least its older versions) WILL download every font you declare, whether you use it or not, so your project won't be an option if the site has to be cross-browser compatible.
Use cache-control settings in the response header of your posted pages to inform the browser how long a cached CSS file can be reused before checking for a newer version.
For example, using the SO website and viewing a downloaded CSS file with Chrome's Inspect function, a max-age value of 604,800 seconds (1 week) is defined.
To show a display like the prior image, click on a file listed in the "Network" tab of Chrome's Inspect utility.
I believe that the max-age duration tells the browser how long it can use the currently cached file before it needs to check whether a newer file is available. In some cases it is better to make this duration much shorter.
For more information about this, visit this MDN page on Cache-Control.
There are also numerous SO entries with the cache-control topic that may be applicable to your website's environment.
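On Apache, for example, a long-lived Cache-Control/Expires policy for static assets is typically set via mod_expires in .htaccess. This is a sketch; tune the durations to your own release cycle:

```apache
<IfModule mod_expires.c>
    ExpiresActive On
    # One week (604,800 s) for stylesheets and scripts
    ExpiresByType text/css "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
</IfModule>
```

If your file names are versioned (hashed), you can safely use a much longer max-age, since a new release produces a new URL.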

Is there a way to style Google Chrome default PDF viewer

Is there a way to style Google Chrome's default PDF viewer? I'm trying to change the gray background color to white, and also to make the scrollbar a little bigger for mobile devices, if possible.
I tried to target it with CSS, with no luck:
/* pdf viewer custom style */
iframe {
html {
body {
background-color: #ffffff !important;
}
#zoom-toolbar {
display: none !important;
}
#zoom-buttons {
display: none !important;
}
}
}
It looks like Chrome creates a shadow document in the HTML, but I couldn't find any way to target it.
There is no way to directly style the Chrome default PDF viewer (PDFium). Because the plugin displays and controls content outside the scope of the current page's DOM, it can only be modified by the plugin itself. As indicated here, it is impossible to modify this sort of plugin-controlled content unless the plugin also adds a content script that allows the page to pass messages to it; the plugin must additionally be programmed to respond to those messages and update the content appropriately. In other words, the PDF viewer uses a DOM separate from the page's, which is not directly accessible. Instead you need to use an implemented API.
In this discussion Mike West (Google/Chromium dev) states, in answer to a question on DOM accessibility in Chrome's PDF viewer:
The functionality available in the PDF viewer is (intentionally) fairly limited ... The APIs you're having trouble finding simply don't exist.
Basic API functions are some of those specified by Adobe in their Parameters for Opening PDF Files and are accessed through the URL (e.g. http://example.org/doc.pdf#page=3&pagemode=thumbs). They are, as indicated above, quite limited, allowing the user to go directly to a page, set the zoom factor, show thumbnails, etc. Accessing an expanded API through content script messages can potentially be done if you know the available JavaScript messages. A complete list of available JS message names can be determined from the relevant PDFium source here, from which it can be seen that advanced styling of the viewer, such as changing colours, isn't possible. (This question gives an example of how to implement the API.) Certainly there is no access to PDFium's DOM.
This API is deliberately left undocumented; it may change with additions or removals at any time. Thus, while it's possible that in the future there will be an API to let you style some aspects of the viewer, it's very unlikely that any would go so far as to change the background colour or modify a CSS shadow. And, as stated above, without an API you can't modify content controlled by a plugin when you don't have access to its DOM.
You may, instead, wish to try PDF.js. It is an open source JavaScript library that renders PDF files using HTML5 Canvas. It is also Firefox's default PDF viewer and is quite capable.
Implementing it as a web app is beyond the scope of this question, but there are many helpful tutorials available. And as you, the developer, will have access to all constituent files, you will certainly be able to style the PDF.js viewer as much as you wish.
Just paste this into your browser console.
var cover = document.createElement("div");
let css = `
position: fixed;
pointer-events: none;
top: 0;
left: 0;
width: 100vw;
height: 100vh;
background-color: #3aa757;
mix-blend-mode: multiply;
z-index: 1;
`
cover.setAttribute("style", css);
document.body.appendChild(cover);
Update: Recent versions of Chrome seem to have moved the PDF viewer resources out of resources.pak and into the browser binary itself. It should still be possible to download the Chromium source, edit the files described below, and then recompile, but that's much more painful than simply hacking resources.pak. Thanks, Google.
As a matter of fact, there is a way, but we've got to get our hands dirty, and the process must be repeated every time we update Chrome. Still, to me, the effort is well worth it. I like to change the PDF viewer's background to white, so that when I activate the color-inverting Deluminate extension at night, I get a nice solid black background. It's so much easier on my eyes compared to the default background, which, when inverted, is blindingly bright.
The Chrome source tree contains thousands of HTML, JS, and CSS files that control the behavior and appearance of many parts of the browser, including the PDF viewer. When Chrome is built, these "resources" are bundled together into a single file, resources.pak, which the browser unpacks into memory during startup. What we need to do is unpack resources.pak on disk, edit the files that style the PDF viewer, and then repack the bundle.
The first thing we need is a tool that can unpack resources.pak. The only one that I know of is ChromePAK-V5. It's written in Go, so we need that to build it. We also need to install a build-time dependency called go-bindata. Here's how I went about it:
cd ~/code/chrome
go get -u github.com/jteeuwen/go-bindata/...
git clone https://github.com/shuax/ChromePAK-V5.git
cd ChromePAK-V5
~/go/bin/go-bindata -nomemcopy -o assets.go assets
go build
cd ..
Now that we've got the binary ChromePAK-V5/ChromePAK-V5, we can use it to unpack resources.pak. In my case, running Chromium on Linux, the file is located at /usr/lib/chromium/resources.pak, but it might be somewhere else for you. Once you've found it, copy it, make a backup, and unpack it:
cd ~/code/chrome
cp /usr/lib/chromium/resources.pak .
cp resources.pak resources.pak.bak
ChromePAK-V5/ChromePAK-V5 -c=unpack -f=resources.pak
At this point, the files we need will be located somewhere in the resources directory. Now, in the original Chrome source tree, these files all had sensible paths, such as chrome/browser/resources/pdf/pdf_viewer.js. Unfortunately, these original paths are not recorded in the resources.pak file. ChromePAK-V5 tries to be clever by using a table that maps the SHA1 hashes of resources files to their original paths, but over time, files change, along with their hashes, and ChromePAK-V5 can no longer recognize them. If a file is unrecognized, ChromePAK-V5 will unpack it to, e.g., resources/unknown/12345. And, in general, these numbers change from one Chrome release to the next. So, to find the files that we need to edit, we basically need to grep for "fingerprints" that identify them. Let's get started.
The background color of the PDF viewer is controlled by the file which, in the Chrome source tree, is named chrome/browser/resources/pdf/pdf_viewer.js. To find the file, grep inside resources/unknown for the string PDFViewer.BACKGROUND_COLOR. In my case, the file was unpacked at unknown/10282. Open this file, and change the line (at/near the end of the file) that sets PDFViewer.BACKGROUND_COLOR. I changed it to 0xFFFFFFFF, i.e., white (which becomes black under Deluminate).
Going further, we can also restyle the PDF viewer's toolbar. By default, the toolbar is dark, so it becomes obnoxiously bright under Deluminate. To fix that, we need to find chrome/browser/resources/pdf/elements/viewer-pdf-toolbar.html. I found it at unknown/10307 by grepping for shadow-elevation-2dp. What I did was to go to the #toolbar block and add filter: invert(100%);. Voila, no more blinding toolbar at night.
Finally, if we really want to go all the way, we can get rid of the brief "flash" of the original background color that occurs when loading a PDF. This color is controlled by chrome/browser/resources/pdf/index.css, which I found at unknown/10304 by grepping for viewer-page-indicator {. I changed the background-color property of body to white (i.e. black under Deluminate).
The hard part is now over. The final step is to repack the resources and overwrite the system resources.pak:
ChromePAK-V5/ChromePAK-V5 -c=repack -f=resources.json
sudo cp resources.pak /usr/lib/chromium # or wherever yours should go
Now restart the browser and enjoy!
A codeless approach is to install a tampermonkey plugin.
https://greasyfork.org/en/scripts/437073-pdf-background-color-controller
This is very useful if you are reading a PDF in a browser and just want to change the background color.

Increase Number Size in Charts

I'm building a report in Qlik Sense, but when using a projector you can barely see the numbers on the charts.
Any ideas on how to increase the number/text size overall?
A possible way to do that is to use themes in Qlik Sense. They have limited functionality, but they may be able to help in your case.
You will have to go into your QlikSense themes directory, mine is under C:\Users\user\AppData\Local\Programs\Qlik\Sense\Client\themes. Here you will find the default themes (you can also create your own ones).
Create a copy of the folder sense which is the default theme (let's name this new folder senseModif from now on). Open the file theme.json from the new directory and play around with the values. For example if you want to set text sizes in a pie chart you can modify this segment:
"pieChart": {
...
"item": {
"name": {
"fontSize": {
"default": "50px",
"large": "50px",
"medium": "50px",
"small": "50px",
"tiny": "50px"
},
...
I already changed the fontSize values here to 50px. To see the result, just go to the QS web interface, open your app, and open a sheet in it. You will have a URL for the app similar to http://localhost:4848/sense/app/filenameofapp.qvf/sheet/GPUNXF/state/analysis/. Now just append theme/senseModif to the end of the URL and it will display the objects with your new styles.
As I said, at least for me it's not working for every object type right now, but at least it's a start.
Much more on the themes and on their usage can be found here.
