W3C validator, CSS3 and Bootstrap - css

I have a site that uses the Twitter Bootstrap framework which renders without errors in all browsers. However, when I plug the main URL of our site into the W3C validator it spits out 1,465 errors, most of which are parsing errors.
A few examples:
Parse Error {*zoom:1;
Value Error : background-color Too many values or values are not recognized : #3f6998 \000009
I understand the * and \000009 are IE-specific, so is it important that these are failing validation?
Is there a validator that takes these into consideration?

You will always get CSS validation errors while using CSS3. Most CSS3 styles are not valid under the W3C rules as of now. You can simply ignore these errors and move ahead. Just make sure you do not have any validation issues other than those caused by CSS3 styles; if so, you are perfectly fine.
Update :
You can try something like this :
http://jigsaw.w3.org/css-validator/validator?profile=css3&uri=PATH_TO_YOUR_WEBSITE
For example:
http://jigsaw.w3.org/css-validator/validator?profile=css3&uri=http://stackoverflow.com
It will still show a lot of errors; there is no generally accepted CSS3 validator implemented yet.
[Updated] Please use the official validator: https://validator.w3.org/

Here is Bootstrap’s explanation for their validation errors:
https://getbootstrap.com/docs/getting-started/#support-validators
In order to provide the best possible experience to old and buggy browsers, Bootstrap uses CSS browser hacks in several places to target special CSS to certain browser versions in order to work around bugs in the browsers themselves. These hacks understandably cause CSS validators to complain that they are invalid. In a couple places, we also use bleeding-edge CSS features that aren't yet fully standardized, but these are used purely for progressive enhancement.
These validation warnings don't matter in practice since the non-hacky portion of our CSS does fully validate and the hacky portions don't interfere with the proper functioning of the non-hacky portion, hence why we deliberately ignore these particular warnings.
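For illustration, the kinds of hacks that trigger these messages look roughly like this (a minimal sketch of the pattern, not Bootstrap's actual source, with made-up selectors):

    /* Star hack: only IE 6/7 apply a declaration whose property is prefixed with * */
    .navbar {
      *zoom: 1;
    }

    /* Backslash-escape hack: only old IE reads the second declaration;
       other browsers drop it as invalid */
    .btn {
      background-color: #3f6998;
      background-color: #3f6998 \9;
    }

Both declarations are deliberately invalid CSS, which is exactly why the validator reports parse errors even though browsers that don't understand them simply ignore them.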

Related

What is going on in IE8?

I don't know if there is any easy answer to this, but can anyone give me any idea why my site is totally crapping out in IE8? Is there something relatively easy that I can address to make it not become a complete mess?
Or at least any area to start investigating where similar problems tend to crop up in IE8?
Thanks!
http://firewalkcreative.com/2012/
Start by viewing and fixing validation errors. The most critical errors are often structural ones, like an unclosed tag. While it's good practice to fix non-structural errors (such as the one you mentioned in your other post), browsers are forgiving if you get the basic structure right.
The bigger culprit (but don't neglect fixing validation errors) is that you are using HTML5 tags that IE8 knows nothing about, like section and header. Thus, the CSS styles aren't applied to those tags. Modernizr will easily fix this.
I noticed it was using HTML5, then got a timeout upon refresh. Without being able to access the site I can't tell you exactly, but HTML5 and IE8 are going to cause problems (most of the time) unless you use a workaround like a .js plugin.
I frequently use Modernizr.
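A minimal sketch of that workaround (the script path is an assumption): load an HTML5 shim such as html5shiv or Modernizr for old IE in the head, and declare the new elements as block-level in your CSS, since older browsers don't know their default display:

    <!-- In the <head>: teach IE8 about the new elements before they are styled -->
    <!--[if lt IE 9]>
      <script src="js/html5shiv.min.js"></script>
    <![endif]-->

    /* In the stylesheet: spell out the default display for older browsers */
    article, aside, footer, header, nav, section { display: block; }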

CSS validator where you can disable error messages on per file basis

Does there exist a CSS validator where one could hint inside a file, à la jslint, which validation rules are and are not in effect for that file or rule?
I tried
W3C validator
csslint
Neither of them offers this functionality, and both always report validation errors for markup one wants to use (IE hacks, vendor extensions).
To clarify matters further: I'd like to use this validator in a commit hook to catch CSS which does not conform to the project policy. I am not that interested in whether people think vendor prefixes are good or bad.
I don't believe there is a tool out there that's sophisticated enough to find all stylesheets that your page links to and offer customized validation rules for each stylesheet.
What I always do is isolate my IE hacks in a separate file, hidden in a conditional comment, so the validator never sees them even if I pass the URI of my page which links to all its stylesheets.
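Concretely, that looks something like this (file names are assumptions); the main stylesheet validates cleanly and only IE ever requests the hacks file:

    <link rel="stylesheet" href="css/main.css">
    <!--[if IE]>
      <link rel="stylesheet" href="css/ie-hacks.css">
    <![endif]-->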
For vendor prefixes, you can tell the W3C validator to raise warnings instead of errors, although you won't be able to get it to outright ignore them because they simply do not validate.
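If I remember correctly, that option is also exposed as a URL parameter on the Jigsaw validator (treat the parameter name as an assumption and check the validator's documentation), which makes it easier to script in a hook:
http://jigsaw.w3.org/css-validator/validator?profile=css3&vextwarning=true&uri=PATH_TO_YOUR_WEBSITE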

What type of XHTML and CSS validations errors are safe to avoid?

What types of XHTML and CSS validation errors are safe to ignore, meaning they would not be harmful today or tomorrow (if we do not touch the XHTML or CSS)?
I mean errors that will not create any problem with future upgrades of browsers or of CSS and HTML versions; they just show up as errors today.
One thing I know of is vendor extensions. Are there any other errors or warnings that will not have any bad effect for the user or the developer?
If I'm making a site and I get many errors, should I take the time to solve every one? If I try to solve all of them, I will have to use JavaScript in some instances in place of CSS.
The XHTML and CSS validators will validate against the corresponding specifications of the W3C standards. Ignoring these mean that your page(s) are deviating from those standards.
Web browsers aim to implement these standards, so ignoring a warning is likely to cause issues on at least some browsers. Therefore, you cannot ignore any warning that the validators give.
Also, having XHTML and CSS conformant web pages is not guaranteed to work on all browsers and be compatible with them as the browsers may implement something differently or incorrectly.
Having conformant pages is still a good thing, as most browsers are (for the most part) conformant, and having more conformant pages helps put the onus on the browser implementers. That is, you (as a web page author) need only concern yourself with being standards compliant. If a browser can't handle that, the issue is with the browser, not the web page author.
If you want to be compatible with a large number of browsers, start with the valid conformant page and then add the minimum needed to get it working on other non-conformant browsers. Doing it this way is a lot easier than starting with a non-conformant page and trying to make that work on most browsers.
You should try to avoid all parse errors. If in doubt, try validator.w3.org and use the HTML Tidy option to clean up the code.
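If you'd rather run Tidy locally instead of through the validator's web interface, the command-line tool can do the same clean-up (options quoted from memory, so double-check tidy's help output):

    tidy -indent -asxhtml -output clean.html input.html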
Each browser will render and parse XHTML and CSS differently. Even if it works now, it might not work tomorrow.
The only safe answer is "none". The best guarantee you have for future compatibility with all browsers is to stick to the standard and have fully validated XHTML and CSS.

Suppress "filter" error in Firefox Error Console?

I currently use the non-standard "filter" method of rendering opacity on my site in addition to the standard way, since IE still doesn't seem to support it and unless I'm mistaken, opacity is part of CSS3 which isn't final anyway.
I guess this is more of annoyance, but Firefox correctly notes an error in parsing the value for "filter", as it should since it's not in the standard. This is great, except that I use this in about 10 places in my CSS sheet, so with every refresh when I'm debugging my JavaScript I have to swim through a sea of useless "filter" warnings before I see anything relevant to what I'm doing.
Is there a way to have Firefox specifically ignore a bit of CSS code and not throw out errors about it, if I know that it's nonstandard and browser specific? That would be the cleanest solution. I can't use the traditional server side method of just not outputting the code in the CSS file to begin with because I'm doing most of this testing offline.
Thanks
I'd suggest looking into conditional comments, as documented at MSDN.
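A minimal sketch of that approach (selector and file names are assumptions): keep the standard opacity rule in your main stylesheet and move the filter rule into a stylesheet that only IE loads, so Firefox never parses it and stops logging the warning.

    /* main.css: seen by every browser */
    .overlay { opacity: 0.5; }

    /* ie-only.css: referenced only from inside the conditional comment below */
    .overlay { filter: alpha(opacity=50); }

    <!-- in the <head> of the page -->
    <!--[if IE]>
      <link rel="stylesheet" href="css/ie-only.css">
    <![endif]-->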

Is invalid XHTML acceptable?

I've noticed a lot of sites, SO included, use XHTML as their mark-up language and then fail to adhere to the spec. Just browsing the source for SO there are missing closing tags for paragraphs, invalid elements, etc.
So should tools (and developers) use the XHTML doctype if they are going to produce invalid mark up? And should browsers be more firm in their acceptance of poor mark-up?
And before anyone shouts hypocrite, my blog has one piece of invalid mark-up involving the captcha (or it did the last time I checked), which involves styling the noscript tag.
There are many reasons to use valid markup. My favorite is that it allows you to use validation as a form of regression testing, preventing the markup equivalent of "delta rot" from leading to real rendering problems once the errors reach some critical mass. And really, it's just plain sloppy to allow "lazy" errors like typos and mis-nested/unclosed tags to accumulate. Valid markup is one way to identify passionate programmers.
There's also the issue of debugging: valid markup also gives you a stable baseline from which to work on the inevitable cross-browser compatibility woes. No web developer who values his time should begin debugging browser compatibility problems without first ensuring that the markup is at least syntactically valid—and any other invalid markup should have a good reason for being there.
(Incidentally, stackoverflow.com fails both these tests, and suggestions to fix the problems were declined.)
All of that said, to answer your specific question, it's probably not worthwhile to use one of the XHTML doctypes unless you plan to produce valid (or at least well-formed) markup. XHTML's primary advantages are derived from the fact that XHTML is XML, allowing it to be processed and transformed by tools and technologies that work with XML. If you don't plan to make your XHTML well-formed XML, then there's little point in choosing that doctype. The latest HTML 4 spec will probably do everything you need, and it's much more forgiving.
We should always try to make it validate according to standards. We'll be sure that the website will display and work fine on current browsers AND future browsers.
I don't think that, if you specify a doctype, there is any reason not to adhere to this doctype.
Using XHTML makes automated error detection easy; every change can be automatically checked for invalid markup. This prevents errors, especially when using automatically generated content. It is really easy for a web developer using a templating engine (JSP, ASP.NET, StringTemplate, etcetera) to copy/paste one closing tag too few or too many. When this is your only error, it can be detected and fixed immediately. I once worked for a site that had 165 validation errors per page, of which 2 or 3 were actual bugs. These were hard to find in the clutter of other errors. Automatic validation would have prevented these errors at the source.
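For example, a well-formedness check flags a slip like this the moment it is introduced (markup invented for illustration):

    <div class="post">
      <p>Generated content</p>
      </div>
    </div>  <!-- stray closing tag pasted in from the template -->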
Needless to say, choosing a standard and sticking to it can only benefit interoperability with other systems (screen scrapers, screen readers, search engines), and I have never come across a situation where a valid semantic XHTML-with-CSS solution wasn't possible for all major browsers.
Obviously, when working with complex systems, it's not always possible to stick to your doctype, but this is mostly a result of improper communication between the different teams developing different parts of these systems, or, most likely, legacy systems. In the last case it's probably better to isolate these cases and change your doctype accordingly.
It's good to be pragmatic and not adhere to XHTML just because someone said so, regardless of costs, but with current knowledge about CSS and browsers, testing and validation tools, most of the time the benefits are much greater than the costs.
You can say that I have an OCD on XHTML validity. I find that most of the problems with code not being valid come from programmers not knowing the difference between HTML and XHTML. I've been writing 100% valid XHTML and CSS for a while now and have never had any major rendering problems with other browsers. If you keep everything valid, and don't try anything too exotic CSS-wise, you will save yourself a ton of time in fixes.
I wouldn't use XHTML at all just to save myself the philosophical stress. It's not like any browsers are treating it like XHTML anyway.
Browsers will reject poor mark-up if the page is sent as application/xhtml+xml, but they rarely are. This is fine.
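For reference, the difference is only the Content-Type header the page is served with (both values below are the standard MIME types):

    Content-Type: text/html                parsed with the browser's forgiving HTML error recovery
    Content-Type: application/xhtml+xml    parsed as XML; a single well-formedness error stops rendering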
I would be more concerned about things like inline use of CSS and JavaScript with Stack Overflow, just because they make maintenance harder.
Though I believe in striving for valid XHTML and CSS, it's often hard to do for a number of reasons.
First, some of the content could be loaded via AJAX. Sometimes, fragments are not properly inserted into the existing DOM.
The HTML that you are viewing may not have all been produced in the same document. For example, the page could be made of up components, or templates, and then thrown together right before the browser renders it. This isn't an excuse, but you can't assume that the HTML you're seeing was hand coded all at once.
What if some of the code generated by Markdown is invalid? You can't blame Stack Overflow for not producing valid code.
Lastly, the purpose of the DOCTYPE is not simply to say "Hey, I'm using valid code"; it's also to give the browser a heads-up about what you're trying to do so that it can at least come close to correctly parsing that information.
I don't think that most developers specify a DOCTYPE and then explicitly fail to adhere to it.
While I agree with the sentiment of the "if it renders fine then don't worry about it" statement, it's still good to follow a standard, even though it may not be fully supported right now. You can still use tables for layout, but they're discouraged for a reason.
No, you should not use XHTML if you can't guarantee well-formedness, and in practice you can't guarantee it if you don't use an XML serializer to generate the markup. Read about producing XML.
Well-formedness is the thing that differentiates XHTML from HTML. XHTML with "just one" markup error ceases to be XHTML. It has to be perfect every time.
If "XHTML" site appears to work with some errors, it's because browsers ignore the DOCTYPE and interpret page as HTML.
See XHTML proxy that forces interpretation of pages as XHTML. Most of the time they fail miserably. This is one of the reason why future of XHTML is uncertain and why development of HTML has been resumed.
It depends. I had that issue with my blog where a YouTube video caused invalid XHTML, but it rendered fine. On the other hand, I have a "Valid XHTML" link, and a combination of a "Valid XHTML" claim and invalid XHTML is not professional.
As SO does not claim to be valid, I think it's acceptable, but personally if I were Jeff I would be bothered and try to fix it, even if it looks good in modern browsers; some people would rather just move on and actually get things done instead of fixing non-existent bugs.
So long as it works in IE, FF, Safari (insert other browser here), you should be okay. Validation isn't as important as having it render correctly in multiple browsers. Just because it is valid doesn't mean it'll work properly in IE, for instance.
Run Google Analytics or similar on your site and see what kind of browsers your users are using and then judge which browsers you need to support the most and worry about the less important ones when you have the spare time to do so.
I say, if it renders OK, then it doesn't matter if it's pixel perfect.
It takes a while to get a site up and running the way you want it; going back and making changes is going to change the way the page renders slightly, and then you have to fix those problems.
Now, I'm not saying you should build sloppy web pages, but I see no reason to fix what ain't broke. Browsers aren't going to drop support for error correction anytime in the near future.
I don't understand why everyone gets caught up trying to make their websites fit the standard when some browsers still have problems properly rendering standard code. I've been in web design for something like 10 years and I stopped double coding (read: hacking CSS) and changing stupid stuff just so I could put a button on my site.
I believe that using a <div> will cause you to be invalid regardless, and it gets a bit harder to do any major JavaScript/AJAX without it.
There are so many standards and they are so badly "enforced" or supported that I don't think it matters. Don't get me wrong, I think there should be standards but because they are not enforced, nobody follows them and it's a massive downward spiral.
For 99.999% of the sites out there, it really won't matter. The only time I've had it matter, I ran the HTML input through HTMLTidy to XHTML-ize it, and then ran my processing on it.
Pretty much, it's the old programmer's axiom: trust no input.

Resources