CSS at-rules (e.g., @import) have existed since CSS2. New rules such as @supports are slowly being added in CSS3, with varying levels of browser support. How do the major browsers handle at-rules they don't recognize? Are they just ignored, or are they treated as syntax errors?
For example, if I were to use the @supports at-rule, which is not supported by any version of IE, would IE fail with a syntax error, or would the rule be silently ignored?
@supports (pointer-events: none) {
...
}
The CSS 2.1 spec says
4.2 Rules for handling parsing errors
At-rules with unknown at-keywords. User agents must ignore an invalid at-keyword together with everything following it, up to the end of the block that contains the invalid at-keyword, or up to and including the next semicolon (;), or up to and including the next block ({...}), whichever comes first.
For example, consider the following:
@three-dee {
  @background-lighting {
    azimuth: 30deg;
    elevation: 190deg;
  }
  h1 { color: red }
}
h1 { color: blue }
The @three-dee at-rule is not part of CSS 2.1. Therefore, the whole at-rule (up to, and including, the third right curly brace) is ignored. A CSS 2.1 user agent ignores it, effectively reducing the style sheet to:
h1 { color: blue }
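So the unknown at-rule is silently dropped rather than treated as a fatal syntax error, and any declarations written before (and outside) the block still apply. A minimal sketch of the resulting fallback pattern, assuming a browser such as old IE that drops the whole @supports block (the class name and property values are illustrative):

.disabled-link {
  /* Fallback: applied by every browser, including ones that discard
     the entire @supports block below as an unknown at-rule. */
  color: gray;
  text-decoration: none;
}

@supports (pointer-events: none) {
  /* Parsed only by browsers that recognize @supports; old IE skips
     everything up to the matching closing brace. */
  .disabled-link {
    pointer-events: none;
    opacity: 0.5;
  }
}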
I have an input slider.
<input type="range" min="1" max="100" value="70" class="slider" id="volume-slider">
When I try to style the track like this for Firefox and Chrome:
input[type=range]::-moz-range-track,
input[type=range]::-webkit-slider-runnable-track {
background: red;
}
It works in Firefox but not in Chrome. But when I split this into separate rule-sets like this:
input[type=range]::-moz-range-track {
background: red;
}
input[type=range]::-webkit-slider-runnable-track {
background: red;
}
It now works on Chrome too. What is happening here?
It is specified to behave this way. An invalid selector in a selector list normally invalidates the entire rule-set. For Chrome, anything prefixed with ::-moz-<…> is unknown, so it invalidates the whole rule. The ::-webkit-<…> pseudo-elements, however, are a historical exception: they spread so widely, and WebKit/Blink's dominance entrenched them so firmly, that the spec now explicitly dictates that anything ::-webkit-<…> must be treated as valid at parse time:
https://www.w3.org/TR/selectors-4/#compat
Due to legacy Web-compat constraints, user agents expecting to parse Web documents must support the following features:
All pseudo-elements whose names begin with the string “-webkit-” (matched ASCII case-insensitively) and that are not functional notations must be treated as valid at parse time. (That is, ::-webkit-asdf is valid at parse time, but ::-webkit-jkl() is not.) If they’re not otherwise recognized and supported, they must be treated as matching nothing, and are unknown -webkit- pseudo-elements.
Side note: this exception was added to the standard around 2018, and Firefox presumably already behaved that way by then. As a rule of thumb, though, if you want to target the widest possible range of "unknown"/outdated browsers, you should always duplicate rule-sets that contain suspect selectors (or do feature detection).
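For the feature-detection route, a hedged sketch using @supports selector(), which older browsers simply skip; note that testing a ::-webkit-<…> name would be pointless here, because the compat rule quoted above makes those parse everywhere, so the sketch tests the -moz- pseudo-element instead:

/* Duplicated rule-sets remain the safest baseline. */
input[type=range]::-moz-range-track {
  background: red;
}
input[type=range]::-webkit-slider-runnable-track {
  background: red;
}

/* Only engines that actually parse ::-moz-range-track (i.e. Firefox)
   enter this block; browsers without @supports selector() skip it. */
@supports selector(::-moz-range-track) {
  input[type=range]::-moz-range-track {
    height: 4px;
  }
}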
What does the CSS standard say about unsupported expressions? How should a browser deal with them? How do actual browser implementations deal with them?
I'm implementing a CSS property optimizer (for a minifier project), and we want to leave CSS fallbacks intact. Our goal is to optimize the CSS as much as possible but in such a way that it should render exactly the same as the original.
This is why it's essential for me to understand how these things work.
Simple properties
For simple properties, it's really easy.
Let's say we have this:
div {
color: #f00;
color: rgba(1,2,3,.4);
}
In this case, if the browser doesn't support rgba(), the first declaration (#f00) wins. There is no question here.
Shorthands
However, how does it work with shorthand properties?
Here's some code:
div {
background: #f00;
background: rgba(1,2,3,.4);
}
How does the browser render this if it doesn't understand rgba()? As you know, the syntax of background is background: <color> <image> <repeat> <attachment> <position>;, and such a shorthand declaration overrides any of the five fine-grained declarations that came before it. So the difficulty lies in which of the five fine-grained properties the browser tries to assign the unknown token to. I have several possibilities in mind:
the browser decides it doesn't understand the latter declaration at all and drops it entirely
the browser thinks that rgba(...) represents a background-image and even though it doesn't know what to do with it, clears out the previous background-color as well
the browser thinks that rgba(...) represents a background-color and since it doesn't understand it, falls back to using #f00 instead
Let's make it even more interesting, say we have this:
div {
background: #fff url(...) no-repeat;
background: rgba(1,2,3,.4) linear-gradient(...) repeat-y;
}
How does a browser interpret this CSS snippet, ...
if the browser doesn't understand rgba?
if the browser doesn't understand linear-gradient?
if the browser doesn't understand repeat-y?
if the browser doesn't understand any two of the three?
if the browser doesn't understand any of the three?
The parsing rules in section 4.2 of the CSS 2.1 spec speak in terms of declarations, which are entire property-value pairs, regardless of whether the properties are shorthands or not:
Illegal values. User agents must ignore a declaration with an illegal value. For example:
img { float: left } /* correct CSS 2.1 */
img { float: left here } /* "here" is not a value of 'float' */
img { background: "red" } /* keywords cannot be quoted */
img { border-width: 3 } /* a unit must be specified for length values */
A CSS 2.1 parser would honor the first rule and ignore the rest, as if the style sheet had been:
img { float: left }
img { }
img { }
img { }
A user agent conforming to a future CSS specification may accept one or more of the other rules as well.
Notice that the third example shows the use of an illegal value for the background shorthand property resulting in the entire declaration being ignored.
Although the spec speaks of illegal values, as far as an implementation is concerned an unrecognized value and an illegal value are the same thing, since either way the implementation doesn't know what to do with such a value.
So the answer to the first part of your question is
the browser decides it doesn't understand the latter declaration at all and drops it entirely
And the answers to the second part are all the same: the latter declaration is ignored entirely.
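The practical consequence, for authors and minifiers alike, is that each fallback must be a complete, self-contained declaration: the browser either takes the whole thing or drops the whole thing. A minimal sketch of the intended pattern (the URL and colors are placeholders):

div {
  /* Browsers that reject any token in the second declaration
     (rgba(), linear-gradient(), repeat-y, ...) drop it entirely
     and keep this one. */
  background: #f00 url(fallback.png) no-repeat;

  /* Browsers that understand every token use this declaration,
     which also resets all background sub-properties set above. */
  background: rgba(255, 0, 0, 0.4) linear-gradient(to right, #fff, #000) repeat-y;
}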
As far as I know, if a browser cannot understand even part of an expression, it treats the whole declaration as syntactically invalid and ignores it entirely.
Why do these CSS rules not work in Chrome?
css:
.selector { (;property: value;); }
or
.selector { [;property: value;]; }
First off, this isn't a hack. Nor does it only apply to Chrome v30; the same behavior occurs in all other modern browsers.
As defined here in the CSS2.1 specification:
...parentheses (( )), brackets ([ ]), and braces ({ }) must always occur in matching pairs and may be nested.
When you add a (, for instance, Chrome will wait for the closing ) before attempting to apply any styles. However, no CSS declaration is wrapped in parentheses like this, so no style is applied.
Take this example:
.selector {
(color:#f00;); /* Invalid, ignored. */
font-weight:Bold; /* Valid, not ignored. */
}
Here the color declaration is in parentheses and the font-weight declaration isn't. Chrome will ignore the color declaration altogether, as it is not valid CSS, but still process the font-weight as normal:
JSFiddle demo.
Parentheses, brackets, and braces used like this are simply invalid CSS declarations and are ultimately ignored, in the same way that the following would be:
.selector {
color; /* Invalid, ignored. */
font-weight:Bold; /* Valid, not ignored. */
}
It's also worth noting that Chrome will treat anything between parentheses or brackets as a single CSS declaration. In your case, (;property: value;); is treated as one declaration, regardless of the extra semicolons.
It's also worth noting that if you fail to match the closing pair before the rule-set's closing brace (}), any rule-sets that follow will not be processed (example).
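To illustrate that last point, a minimal sketch (the selectors are arbitrary) of what happens when an opening parenthesis is never closed before the rule-set ends:

.first {
  (color: red;        /* "(" is never closed, so the parser keeps scanning for ")"... */
  font-weight: bold;  /* ...which swallows this declaration... */
}

.second {
  color: green;       /* ...and this entire rule-set as well: nothing here is applied. */
}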
That particular set of CSS oddities is actually well known: they are called bracket hacks for Safari (yes, they have a name). They also worked in Chrome until version 28. At the time of writing, they work in Safari 7.0 and older (version 8 is the current version of Safari as I write this update).
Several answers tagged css discourage the use of !important in favor of specificity. Why?
There is actual math you can use to predict, control, and reverse-engineer the impact of CSS rules. By using !important you're breaking that. Look at this JS fiddle for example, which doesn't use !important: http://jsfiddle.net/hXPk7/
If you use Firebug or Chrome dev tools to inspect the title element where it says "Richard", you should see these rules, in this order:
/**************************/
/* /hXPk7/show/ (line 20) */
/**************************/
#myExample #title .name {
color: yellow;
}
/********************************************************/
/* /hXPk7/show/ (line 14) - Inherited from div#myExample */
/********************************************************/
#myExample {
color: blue;
}
Note that this is not the order in which they appear in the CSS stylesheet - instead they are ordered in decreasing order of their specificity. The ones which take precedence are listed first, and the others (whose rules are overridden by more specific rules) probably have a property crossed out. This demonstrates that specificity makes it easy to trace (debug?) where an element is getting its CSS properties from.
Now, compare with this JS fiddle - which is effectively the same, but has a single new rule which now uses !important: http://jsfiddle.net/hXPk7/1/
Inspect the same element using Firebug or Chrome dev tools, and you'll see something like this:
/**************************/
/* /hXPk7/1/show/ (line 20) */
/**************************/
#myExample #title .name {
color: yellow;
}
/**************************/
/* /hXPk7/1/show/ (line 26) */
/**************************/
span {
color: black !important;
}
/********************************************************/
/* /hXPk7/1/show/ (line 14) - Inherited from div#myExample */
/********************************************************/
#myExample {
color: blue;
}
Again, the rules are ordered according to their specificity - but note that this time, while the most specific rule, listed first, specifies a color of yellow, the browser instead renders the text as black! This is because the !important declaration has broken the normal behavior of specificity, taking precedence in a way that can be challenging to trace. Imagine a more realistic web site with potentially hundreds of rules, where the rule controlling the color isn't obvious to find or to change.
Now, maybe this is a problem with the developer tools, but I think it reflects the fact that !important takes a normally easy-to-predict system of precedence and makes it more challenging. Maybe there are times to use it, but it should not be the first tool you reach for when writing CSS.
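To make the trade-off concrete, a minimal sketch (selectors borrowed from the fiddle, values illustrative) of winning an override with specificity rather than !important:

/* Specificity 0-1-0 (one class): loses to the rule below. */
.name {
  color: black;
}

/* Specificity 2-1-0 (two IDs, one class): wins without !important,
   so dev tools still show a predictable, traceable cascade. */
#myExample #title .name {
  color: yellow;
}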
This question is similar to the one I asked here. I am cleaning up some files and I came across this in the CSS:
.something
{
height: 33px;
-height: 34px; /* does this do anything?? */
}
and
.something
{
_width: 150px; /* does this do anything?? */
}
EDIT: Okay, so the _ (underscore) is a CSS hack for IE, which is fine; I'll just leave it. But what about the minus sign, does it do anything at all?
Also, we are not supporting anything below IE 7 anymore, so if anything is a hack for IE6 I can take it out.
Straight from the W3C CSS 2.1 Spec -
4.1.2.1 Vendor-specific extensions
In CSS, identifiers may begin with '-' (dash) or '_' (underscore). Keywords and property names beginning with '-' or '_' are reserved for vendor-specific extensions.
That said, prefixing a CSS property with an underscore is a well-known hack to apply that declaration only when rendering in IE 6 and below.
Since a CSS identifier can start with a '-' (dash) and still be valid, this can be used to quickly comment out parts of CSS during development. For example, in the CSS below, none of the properties will be set for h1, and only margin will be set for h2.
-h1 { color:blue; margin:2em; }
h2 { -color:pink; margin:2em; } /* property "-color" not valid */
I'm not sure about the minus sign, but the underscore is a hack to have a declaration applied only by IE 6 and below; all other browsers ignore it.
http://wellstyled.com/css-underscore-hack.html
These are CSS hacks, used to trick certain browsers into applying (or not applying) the declarations.
This one is the Underscore Hack
Versions 6 and below of Internet Explorer recognize properties with this prefix (after discarding the prefix). All other browsers ignore such properties as invalid. Therefore, a property that is preceded by an underscore or a hyphen is applied exclusively in Internet Explorer 6 and below.
#elem {
width: [W3C Model Width];
_width: [BorderBox Model];
}
This hack uses invalid CSS[3] and there are valid CSS directives to accomplish a similar result, so some people do not recommend using it. On the other hand, this hack does not change the specificity of a selector, making maintenance and extension of a CSS file easier.
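For context, the "valid CSS directives" mentioned above presumably refer to box-sizing, which achieves the border-box sizing the hack was emulating without any invalid syntax; a hedged sketch (the selector and width are illustrative):

#elem {
  /* Standards-based border-box sizing, supported by all modern
     browsers; no underscore-prefixed duplicate is needed. */
  box-sizing: border-box;
  width: 150px;
}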
CSS hacks are one (not so elegant) technique for achieving the same look and feel across browsers.
It means the CSS property will be applied in IE 6 and below. It is a CSS hack.
A cleaner method of applying styles to different IEs is using conditional comments.
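For completeness, conditional comments live in the HTML rather than the stylesheet; a minimal sketch (file names are illustrative):

<!-- Loaded by every browser. -->
<link rel="stylesheet" href="main.css">

<!-- Only IE 6 and below parse the contents of the conditional
     comment below; every other browser treats it as a plain comment. -->
<!--[if lte IE 6]>
  <link rel="stylesheet" href="ie6-fixes.css">
<![endif]-->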