I am starting to use "advanced" selectors. Even though they let me locate and style every element in my HTML without having to add classes or IDs, I am worried that this way of writing CSS (which I assume is the correct one, since most online templates, plugins, etc. use it) requires more resources and increases loading times compared to just using classes and IDs as selectors.
For example, the code below could be written with a few simple selectors, IDs, and classes, but I preferred to do it as you can see:
.impairsRight > div {
    float: right;
    width: 100%;
    margin-bottom: 15px;
}
.content-hola .impairsRight > div:nth-child(odd) > p {
    margin-left: 30%;
}
.content-hola .impairsRight > div:nth-child(even) > p {
    margin-right: 30%;
}
.impairsRight > div:nth-child(even) > h5 {
    text-align: right;
}
.impairsRight > div:nth-child(odd) > h5 {
    text-align: left;
}
Is this heavier on size/speed/resource usage than simple selectors? Is there any tool or website that can test this kind of thing (not only loading times, but also resource usage)?
I have never heard of a case where the use of CSS selectors slowed down a website to a noticeable degree.
This is really nothing you have to worry about.
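For comparison, here is a sketch of the class-based alternative the asker alludes to. The class names `.row-odd`/`.row-even` are hypothetical and would have to be written into the markup by hand (or generated by a script), which is exactly the work `:nth-child()` avoids:

```css
/* Same layout as the nth-child version, but every div needs
   a hand-assigned class in the HTML */
.impairsRight .row-odd  > p  { margin-left: 30%; }
.impairsRight .row-even > p  { margin-right: 30%; }
.impairsRight .row-odd  > h5 { text-align: left; }
.impairsRight .row-even > h5 { text-align: right; }
```

The selector matching itself is marginally cheaper, but the maintenance cost of keeping the classes in sync with the markup usually outweighs it.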
Related
I searched but couldn't find a concrete answer, so I will ask a simple question.
Is it more effective to do this:
html > body > section > div.class1,
html > body > section > table > tbody > tr > td > div.class1
{
background-color : red;
}
or this :
div.class1 {
background-color: red;
}
Browsers parse and process every piece of code you give them (HTML and CSS included); for each selector, the browser applies the declarations to the matching HTML elements. For a very small website this doesn't matter much, but for a large website like Amazon, with more than a million lines of code, it can noticeably affect performance.
I think this is a good example: https://www.html5rocks.com/en/tutorials/internals/howbrowserswork/#Main_flow_examples
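To see why deep selectors cost more, it helps to know that engines match selectors right to left. A sketch of the implication (selectors illustrative):

```css
/* Matched right to left: the engine first collects every <a> on the
   page, then walks up each one's ancestors checking td, tr, tbody,
   table, section, body. The rightmost ("key") part dominates the cost. */
body section table tbody tr td a { color: red; }

/* A single class is checked once per element, with no ancestor walk */
.cell-link { color: red; }
```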
Less specificity is faster. Every selector is another run of a loop. As a rule of thumb, performance will start to be impacted beyond 3 selectors deep.
https://csswizardry.com/2011/09/writing-efficient-css-selectors/
Also note that while performance is in question here, the real performance hit comes when you specify too deeply. And I promise you that long-term maintenance becomes the real problem. Again, 3 selectors deep is a good rule of thumb.
Lastly, if you need help with css structure/architecture try:
1- http://getbem.com/
2- https://www.xfive.co/blog/itcss-scalable-maintainable-css-architecture/
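The BEM convention from the first link keeps every selector a single class deep; a minimal sketch (names illustrative):

```css
/* Block__Element--Modifier: flat, single-class selectors,
   so specificity and depth stay constant across the stylesheet */
.menu { }
.menu__item { display: inline-block; }
.menu__item--active { background-color: red; }
```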
I'm trying to wrap my head around something about CSS. I've always thought that the order of including CSS files matters (the "cascading" part of it). I'm using Bootstrap 3, and trying to override the background color of the active top nav links.
The exact selector I have to use in order to do this is: (SCSS actually, but that shouldn't matter)
.navbar-default .navbar-nav > .active > a {
background: $sp-blue;
color: #fff;
}
And then scss-lint yells at me for having a depth of applicability greater than 3. But if I try this:
.navbar-nav > .active > a {
background: $sp-blue;
color: #fff;
}
then it stops working. This is what I don't understand. Why do I have to include .navbar-default in the selector? If .navbar-nav is within it, I shouldn't need more than that. It's annoying to have to copy the selector exactly as it's used in the previous stylesheet. Now, if I use !important, then it works, but we all know that's bad practice.
Can someone help me grasp this aspect of CSS?
That's because .navbar-default .navbar-nav > .active > a is more specific than .navbar-nav > .active > a.
Although the ordering of stylesheets affects which rule the browser considers more relevant, CSS specificity also plays a role.
Basically, the more specific your CSS selector is, the more relevant it is for the browser. Say, we have your css ordered like this:
/*this will be followed*/
.navbar-default .navbar-nav > .active > a {
background: #fff;
}
/*this will be ignored*/
.navbar-nav > .active > a {
background: #000;
}
Although the second selector comes last, it cannot override the previous one, simply because it has weaker specificity. A rule can override another only if it has equal or greater specificity. But of course, !important is an exception to that rule.
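A minimal sketch of the tie-breaking rule: when two selectors have equal specificity, source order decides.

```css
/* Both selectors score two classes each (0-2-0).
   Because specificity ties, the later rule wins: the link is blue. */
.navbar-nav .active-link { color: red; }
.navbar-nav .active-link { color: blue; }
```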
Further Reading: http://css-tricks.com/specifics-on-css-specificity/
This is a FANTASTIC question! It shows you are actually thinking through CSS, unlike the millions of novices who don't ask and just slap on !important whenever there's a problem. But you are EXACTLY right. Supposedly well-written CSS has the following constraints:
Do not use !important.
!important breaks the cascading nature of CSS and opens your design to problems with maintainability and simplicity:
See: What are the implications of using "!important" in CSS?
Avoid a large 'depth of applicability'. For the same reasons as !important, this results in a design that is not as maintainable, simple, or easily redesigned:
See: https://smacss.com/book/applicability
Don't modify your framework code Certainly you shouldn't go back and modify your framework's CSS. What if you need to update this code in the future, or some automated process overwrites it? Your code breaks.
One solution (if you believe the above) is to use the CSS rules themselves and target your elements more efficiently. Using a CSS specificity calculator such as this one:
http://specificity.keegan.st/
We can see other ways to target your element better. According to the calculator, the specificity of your selector is 0031. However, a single id
#mynav {
background: $sp-blue;
color: #fff;
}
scores 0100 and would satisfy the above constraints. One problem is that some ALSO say...
Don't use IDs For similar reasons as the above:
See: http://oli.jp/2011/ids/
We could also use 'inline style' (as you can see from the calculator), however....
Don't use inline styles What's so bad about in-line CSS?
So is there any other way to 1) increase the specificity 2) without increasing the depth, and without using 3) IDs, 4) !important, 5) inline styles, or 6) modifications to your framework code? Well, by assigning multiple classes to an element (or its parent), we can increase specificity without increasing depth of applicability. Of course this would also work:
.thislink1.thislink2.thislink3.thislink4 {
background: $sp-blue;
color: #fff;
}
Now, of course, you don't have to assign 4 different classes to your anchor just to be specific enough or to keep your depth under 3. If you assign a class to the parent
.navbar-nav.mynav > .active.myactivelink {
background: $sp-blue;
color: #fff;
}
this scores 0040 (depth of 2). You could perhaps simplify this further if you are sure that your css modifications are loaded last (since the later loaded rules take priority).
.navbar-nav > .active > a.mylink {
background: $sp-blue;
color: #fff;
}
this scores 0031 (with a depth of 3), like your bootstrap CSS, but if it's loaded after the bootstrap it will be applied.
The takeaway: CSS is not a programming language studied by experts to quantify metrics such as cyclomatic complexity and code quality; it is a style-sheet language without much rigorous study of objective measures. It is easy enough to learn that there are many inexperienced amateurs and more bad advice than hard evidence. I am certainly no expert, so take the above as 'options'. Style matters because it lets you learn from the pitfalls others have suffered. But your particular situation includes your preferences, the kind of development environment you are in, and how much time it takes to learn all this advice.
Basically I just want to know if the attached image shows valid CSS usage. I'm using a lot of nested divs lately and this is how I'm targeting them in my CSS. It works really well, but I'm just curious whether it's a good way of doing it, or is there an easier / more efficient way?
Thanks!
link to the image
First of all: your way is totally OK, and the efficiency depends on the whole page. Maybe it can get more efficient with these ideas:
If your div classes or IDs are unique
you can write just the class; you don't have to write the whole path. Instead of
#content > .inner > .content > div { }
it is possible to write for example
.content > div { }
Helpful when you are using nested divs
When using nested divs you often have to type the same code multiple times:
#content > .inner > .content { }
#content > .inner > .content > div {}
#content > .inner > .footer {}
#content > .inner > .footer > div {}
There are very helpful preprocessors called LESS and Sass (both of them work pretty much the same). They allow you to write everything just once, like
#content {
.inner {
.content {
// some stuff
div {
// some stuff
}
}
.footer {
//some stuff
div {
// some stuff
}
}
}
}
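One subtlety worth noting: plain nesting compiles to descendant (space) combinators, while the original rules used the child combinator. To reproduce the > rules from the nested form, prefix the nested selectors, e.g.:

```scss
// Nesting with "> " preserves the child combinator; without it,
// this would compile to "#content .inner .content" (descendant selectors)
#content {
    > .inner {
        > .content {
            // some stuff
        }
    }
}
```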
The direct child selector (i.e. >) is fine, but personally I don't like it because it makes it difficult to move and re-use styles. For example, if I want to use .content somewhere other than #content, I'm going to have to change a whole heap of CSS. Ideally you should be able to re-use blocks of markup without having to change the CSS.
The direct child selector is best used to limit the depth to which a style is applied. For example it would be appropriate to use .content > p if you want the style to apply only to direct children so you can have another style for deeper children. If that's not the case then you might as well just use well named class and ID selectors.
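A minimal sketch of that depth-limiting use of > (class names illustrative):

```css
/* Applies only to paragraphs that are direct children of .content... */
.content > p { margin-bottom: 1em; }

/* ...so deeper paragraphs, e.g. inside a nested aside, can differ */
.content .aside p { margin-bottom: 0; }
```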
I have a .scss file that, among other things contains this:
nav {
font-size: 0;
ul {
margin: $padding/3;
}
li {
z-index: 10;
position: relative;
display: inline-block;
font-size: $fontSize;
/**
* If we want them separated, uncomment!
margin: $padding/3;
@include border-radius(5px);
*/
&:first-child {
@include border-radius(0 5px 5px 0);
}
&:last-child {
@include border-radius(5px 0 0 5px);
}
padding: $padding/3 0;
@include background(linear-gradient(lighten($textColor, 10%), $textColor));
border: 1px solid lighten($textColor, 20%);
a {
color: $brightColor;
padding: $padding/3 $padding;
font-weight: bold;
text-decoration: none;
@include transition(.2s all);
}
//Nested menues
ul {
opacity: 0;
//display: none;
position: absolute;
margin: 0;
top: 0;
left: 0;
right: 0;
z-index: 5;
pointer-events: none;
@include transition(.2s all);
li {
@include background(linear-gradient(darken($brightColor, 10%), darken($brightColor, 30%)));
display: block;
border: 1px solid lighten($textColor, 20%);
&:first-child {
@include border-radius(0);
}
&:last-child {
@include border-radius(0 0 5px 5px);
}
a {
color: $textColor;
}
}
}
&:hover ul {
pointer-events: all;
top: 100%;
opacity: 1;
//display: block;
}
}
}
How bad/harmful it is in practice? I've heard many talks about "Don't go over 3 nested selectors!" But how harmful is it really? Does it have any visible impact on page loads? The benchmarks I've done say no, but is there anything I'm missing?
It depends on how much dynamic manipulation of the DOM and styles will go on after page load. It's not page loads (mostly) or slow selectors on initial layout that are at issue, it's repaints/reflows.
Now, Steve Souders says that on the average site it's simply not a real concern. However, on a web app or highly interactive site, poorly performing CSS rules can make your repaints slower than they have to be. If you have a lot of repaints...
Experts such as Nicole Sullivan, Paul Irish, and Steve Souders have covered the way CSS interacts with JavaScript and how to write highly performant CSS selectors. It's about more than depth (different selectors have different performance), but a good rule of thumb is to limit both depth and complexity to keep yourself out of trouble (though not so much performance trouble; read on).
However, as jankfree.org notes, it's not so much descendant or specific selectors as it is certain properties in certain contexts (html5rocks.com) that make paints expensive. I see long or complicated selectors more as a maintainability issue (Nicolas Gallagher) than a performance issue--keeping in mind that maintainability interacts with performance. Highly maintainable code can iterate faster and is easier to debug (helping you find and fix performance issues).
Now, as to Sass optimization. Yes, Sass can optimize your CSS. But it cannot optimize your selectors: a 4-level nested block will be output as a 4-level nested selector, and Sass cannot change that without possibly breaking your CSS. You, as the author, have to optimize the way you write Sass to optimize your output. I personally use nesting only in a limited way (a killer feature in Sass for me is composing styles with @extend and placeholders). However, if you really love nesting, you might be able to tweak your output to some degree using the Sass parent selector reference (or the newer @at-root).
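A minimal sketch of that parent-selector trick (the `&-` suffix syntax needs Sass 3.3+; class names are illustrative):

```scss
// Nested source, flat output: & is replaced by the parent selector,
// so these compile to .nav-item and .nav-item--active,
// not to the nested selector ".nav .nav-item"
.nav {
  &-item {
    display: inline-block;
    &--active {
      font-weight: bold;
    }
  }
}
```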
So far as I know, neither Sass nor Compass has a built-in tool to analyze selectors and warn about them. It's probably possible to create such a tool (set a max depth and have your pre-processor warn you) using an AST. More directly, Google PageSpeed has an existing feature that provides some information, SCSS-Lint has a nesting option, and there's also CSS Lint. (These can theoretically be run from your Compass config's on_stylesheet_saved hook if you're not already using something like Grunt or Gulp.)
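For example, the SCSS-Lint nesting check is configured in its `.scss-lint.yml` file; a minimal sketch, with option names as I understand them from the scss-lint docs:

```yaml
# .scss-lint.yml: warn when selectors nest deeper than 3 levels
linters:
  NestingDepth:
    enabled: true
    max_depth: 3
```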
Just think about how you would want to write the actual css selector. Don't nest everything just because it is a child of the element.
nav li ul li a {
/* over specific, confusing */
}
.sub-menu a {
/* add a class to nested menus */
}
Once you start chaining that many selectors, it becomes a pain to override and can lead to specificity issues.
Don't nest CSS. We feel comfortable nesting CSS because it closely mirrors what we do in HTML. Nesting gives us context: .some-child is inside .some-parent. It gives us some control over the cascade. But not much else.
As SMACSS suggests, I would nest in class names instead; i.e., use .child-of-parent instead of .parent .child or .parent > .child.
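A minimal sketch of nesting in the class name rather than the selector (names illustrative):

```css
/* instead of:  .menu .item { ... }   or   .menu > .item { ... } */
.menu-item { display: inline-block; }
```

The flat class keeps specificity low and lets the block be moved anywhere in the markup without rewriting selectors.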
Nesting badly in practice can lead to extremely slow pages; see how GitHub sped up their diff pages. The least you should do is follow the Inception Rule, which states that you shouldn't nest beyond 4 levels.
However, I would go one step further and say that we shouldn't nest CSS at all. I wrote a blog post with my opinions. Hope this is useful.
Just to chime in and reinforce what others have said: it's bad practice, not necessarily from a performance point of view (you'll probably get bigger paint-time wins from removing blurs/shadows and rounded corners than from optimising selectors), but from a maintainability point of view.
The more heavily nested a selector, the more specific the resultant CSS rule (which you know already). Therefore, when you want to 'trump' that rule at some point you'll have to write a rule of equal (or greater) specificity further down the cascade to overrule the first. If you have an ID in there, that's going to make it FAR more specific too (so avoid unless you need them and know you won't need to override down the line).
To follow this to its logical conclusion, don't nest unless you need to. Don't have a rule like this:
.selector .another .yeah-another {}
When this would do the same job:
.yeah-another {}
It just makes life easier for everyone (including you) down the line.
My opinion:
You tell me which is worse on your eyes
from the op
nav li ul li a {color: $textColor;}
or as has been suggested
.nav-menuitem-menu-menuitem-link {color: $textColor;}
So...
The question is "Is it bad practice to hypernest in SCSS?" (or is it SASS?) I say no. But it's an ancillary argument.
The WORSE practice lies in leaving the SASS (or is it SCSS?) output, in its machine-driven state, for production.
S*SS is only a tool in your bag of tricks, no different from Notepad++ or Git or Chrome. Its role is to make your life a little easier by bringing some very general programming concepts to the business of building CSS. Its role is NOT building your CSS. You can't expect it to do your job for you and create completely usable, readable, performant output.
Nest as much and as deep as you want, then follow Good Practice...
...which would be going through your CSS afterwards and hand-tweaking. Test, build, etc. with your hypernested output. And when S*SS creates my first example above, give that anchor a class and call it with nav .class.
Although not directly an answer to your question, you can keep highly nested sass for your own purposes but still use #at-root. Check it out here.
.parent {
@at-root {
.child1 { ... }
.child2 { ... }
}
}
// compiles to ...
.child1 { ... }
.child2 { ... }
I know about mixins and parametric mixins. What we are looking for is a way to use any general-purpose CSS / LESS selector as a mixin.
E.g. in Twitter Bootstrap, we have here
.navbar .nav > li {
float: left;
}
If I have to use it in a class say .mynavbar I want to be able to do this
INPUT->
.mynavbar {
.navbar .nav > li;
}
OUTPUT->
.mynavbar {
float:left;
}
Now I know this can't be done with the current version of LESS as the compiler flags a parser error. I wanted someone to help me out on changing the source code of less.js a little so that this is workable.
I've managed to find the source code for the mixin parser and tried changing the RegExp there, but it interferes with other parts of the parser. I know only a few changes are needed, because instead of accepting just .mixin and #mixin, we have to accept any selector as a mixin, including tag and attribute selectors like input[type=text].
It is currently needed for the development of a UI framework that uses Bootstrap. Unfortunately many places in bootstrap are littered with direct tag selectors instead of ids and classes.
This is possible since version 1.4.0 (2013-06-05) of LESS, which includes the extend feature. Based on the original example,
.navbar .nav > li {
float: left;
}
.mynavbar:extend(.navbar .nav > li) {}
compiles to
.navbar .nav > li,
.mynavbar {
float: left;
}
Documentation here and discussion & example use for the original question here
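Note that :extend matches the listed selector exactly; to also pick up Bootstrap's rules that merely contain that selector, LESS provides the all keyword. A sketch:

```less
// "all" extends every selector containing ".navbar .nav > li",
// so rules like ".navbar .nav > li > a" are extended as well
.mynavbar {
  &:extend(.navbar .nav > li all);
}
```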
EDIT: Added Code Sample
First off, I would strongly discourage doing such things. Instead, try to use the power of CSS and build your HTML so that the Bootstrap rules apply. Anyway, since you asked for it, here is the solution.
The problem is not the complexity of the selector, or the child rule, but the tag-name selector part (i.e. the li). So what we have to fix is the mixin parser only matching classes and IDs. I guess we would not want to tamper with the first class-or-id test, since that is probably needed to distinguish mixins from normal CSS rules (although the tests run fine with that check commented out; actually, parser precedence is in play, and the only things tried after mixins are comments and directives, so we could safely remove that check as well). However, we can easily allow tag names in later parts of the mixin selector by adding a question mark after [#.] in the matching regular expression. So
while (e = $(/^[#.](?:[\w-]|\\(?:[A-Fa-f0-9]{1,6} ?|[^A-Fa-f0-9]))+/)) {
– i. e. line 825 – becomes
while (e = $(/^[#.]?(?:[\w-]|\\(?:[A-Fa-f0-9]{1,6} ?|[^A-Fa-f0-9]))+/)) {
The test cases still run through fine afterwards, but subtle breakage may occur, so be cautious.
Edit: There is a GitHub issue for the same problem. Apparently the LESS folks would rather keep the mixin feature narrow and function-like, instead of allowing a more flexible … well … mixing in of rules. With regard to the CSS output, that's probably reasonable.