I'm working on a web site which contains sections that need to be secured by SSL.
I have the site configured so that it runs fine when it's always in SSL; I see the SSL padlock in IE7/IE8/Firefox/Safari/Chrome.
To implement the SSL switching, I created a class that implements IHttpModule and wired up HttpApplication.PreRequestHandlerExecute.
I go through some custom logic to determine whether or not my request should use SSL, and then I redirect. I have to deal with two scenarios:
Currently in SSL and request doesn't require SSL
Currently not in SSL but request requires SSL
I end up doing the following (where ctx is HttpContext.Current and pathAndQuery is ctx.Request.Url.PathAndQuery):
// SSL required and current connection is not SSL
if (requestRequiresSSL && !ctx.Request.IsSecureConnection)
    ctx.Response.Redirect("https://www.myurl.com" + pathAndQuery);

// SSL not required but current connection is SSL
if (!requestRequiresSSL && ctx.Request.IsSecureConnection)
    ctx.Response.Redirect("http://www.myurl.com" + pathAndQuery);
The switching back and forth now works fine. However, when I go into SSL mode, Firefox and IE8 warn me that my request isn't entirely encrypted.
It looks like my module is short-circuiting my request somehow; I would appreciate any thoughts.
I would suspect that when you determine which resources require encryption and which do not, you are not including the images, or the headers and footers, or even the CSS files, if you use any.
Since you always drop SSL for such content, it can happen that part of the page (the main HTML) requires SSL, but the subsequent request for an image on that page does not.
The browser is warning you that some parts of the page were not delivered using SSL.
I would check whether the request is for HTML, and only then drop SSL if needed. Otherwise, keep it the way it is (most probably images and the like are referenced with relative paths rather than a full-blown URL).
I.e., if you have:
<html>
<body>
Some content...
<img src="images/someimage.jpg">
</body>
</html>
and you request this page using SSL, but your evaluation of requestRequiresSSL does not treat the image as a secured resource, the request for the image will end up as an http, not https, request, and you will see the warning.
Make sure, when a resource is requested and you evaluate requestRequiresSSL, to check the referrer and whether the content is HTML before dropping SSL:
// SSL not required but current connection is SSL
if (!requestRequiresSSL && ctx.Request.IsSecureConnection && !isHtmlContent)
    ctx.Response.Redirect("http://www.myurl.com" + pathAndQuery);
To determine isHtmlContent (assuming you serve images from a disk location rather than from a database, etc.), just check the resource's file extension (.aspx, .asmx, .ashx, .html, etc.).
That way, if the connection is encrypted but the resource itself is not HTML and is not set for "encryption", you are not going to drop the encryption.
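A minimal sketch of such an extension check (the extension list is an assumption; adjust it to whatever your site actually serves):

using System;
using System.IO;
using System.Web;

internal static class ContentTypeHelper
{
    // Hypothetical helper: treat only "page-like" extensions as HTML content.
    private static readonly string[] HtmlExtensions =
        { ".aspx", ".asmx", ".ashx", ".html", ".htm" };

    public static bool IsHtmlContent(HttpRequest request)
    {
        string extension = Path.GetExtension(request.Url.AbsolutePath);
        return Array.Exists(HtmlExtensions,
            ext => ext.Equals(extension, StringComparison.OrdinalIgnoreCase));
    }
}

The check in the redirect condition above would then be !ContentTypeHelper.IsHtmlContent(ctx.Request).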
I highly recommend using this (free / open source) component to do what you're trying to do:
http://www.codeproject.com/KB/web-security/WebPageSecurity_v2.aspx
Any content that is not normally handled by .NET (such as regular HTML and most graphics files) will not invoke the HttpModule, because it doesn't go through the .NET pipeline.
Your best bet is to just handle this at the IIS level. See the following for info on how to configure your server.
http://www.jameskovacs.com/blog/HowToAutoRedirectToASSLsecuredSiteInIIS.aspx
I highly recommend this product:
http://www.e2xpert.com/web/Http-Https-Switch.aspx
It is professional and easy to use. It comes with a powerful configuration tool with which a single click can complete the entire configuration for you.
Just use SSL throughout your site, for all pages and for all images/scripts/stylesheets. That just makes everything oh-so-simple. IE and Firefox will no longer complain, you will no longer have crazy modules trying to guess whether any given request should be redirected, etc.
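If you go this route, the only redirect you still need is the unconditional http-to-https bounce; a minimal sketch in Global.asax (the hostname is illustrative):

using System;
using System.Web;

// Global.asax.cs: unconditionally bounce any plain-http request to https.
public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        if (!Request.IsSecureConnection)
        {
            Response.Redirect("https://www.myurl.com" + Request.Url.PathAndQuery);
        }
    }
}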
For the average user it's nearly impossible to make an informed decision when the only thing Firefox vaguely tells them is, "Parts of the page you are viewing were not encrypted before being transmitted over the Internet." This is about as helpful as the "something's wrong" engine light, and in fact it appears only after their information has been transferred.
At the least, this message should be accompanied by a list giving the URL, the type of content (images, JavaScript, CSS), and what it means to the user. BTW, I get this message when using Gmail.
Until that happens, as others have stated, your code should work once you determine the unsecured elements. You can then use Firebug (http://getfirebug.com) to check the content being delivered over the connection.
I'm doing some work with this right now and I have to say, it makes no sense at all to me! Basically, I have a CDN server which provides CSS, images, etc. for a site. For whatever reason, in order for my browser to stop blocking those resources with a CORS error, I had to have that server (the CDN) add the Access-Control-Allow-Origin header. But as far as I can tell that does absolutely nothing to increase security. Shouldn't the page I request which references those cross-domain resources be telling the browser it's safe to get stuff from the other domain? If that were a malicious domain, wouldn't it just have Access-Control-Allow-Origin set to * so that sites load its malicious responses (you don't have to answer that, because obviously they would)?
So can someone explain how this mechanism/feature provides security? As far as I can tell the implementors fucked up and it actually does nothing. The header should be required from the page which references/requests cross-domain resources, rather than from the domain being requested.
To be clear: if I request a page at domain A, it would make sense for the response to include an Access-Control-Allow-Origin header whitelisting resources from domain B (Access-Control-Allow-Origin: B.com). However, it makes no sense at all for domain B to effectively whitelist itself by providing the header, which is how this is currently implemented. Can anyone clarify what the benefit of this feature is?
If I have a protected resource hosted on site A, but also control sites B, C, and D, I may want to use that resource on all of my sites but still prevent anyone else from using that resource on theirs. So I instruct my site A to send Access-Control-Allow-Origin: B, C, D along with all of its responses. It's up to the web browser itself to honor this and not serve the response to the underlying Javascript or whatever initiated the request if it didn't come from an allowed origin. Error handlers will be invoked instead. So it's really not for your security as much as it's an honor-system (all major browsers do this) access control method for servers.
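As a sketch, the server side of that on site A might look like this in classic ASP.NET (the origin list is illustrative):

using System;
using System.Web;

// Global.asax.cs on site A: answer cross-origin requests only for my own
// sites B, C and D.
public class Global : HttpApplication
{
    private static readonly string[] AllowedOrigins =
        { "https://siteB.example", "https://siteC.example", "https://siteD.example" };

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        string origin = Request.Headers["Origin"];
        if (origin != null && Array.IndexOf(AllowedOrigins, origin) >= 0)
        {
            // Echo back the specific allowed origin rather than "*"; the
            // browser then exposes the response only to pages from B, C or D.
            Response.AddHeader("Access-Control-Allow-Origin", origin);
        }
    }
}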
Primarily, Access-Control-Allow-Origin is about protecting data from leaking from one server (let's call it privateHomeServer.com) to another server (let's call it evil.com) via an unsuspecting user's web browser.
Consider this scenario:
You are on your home network browsing the web when you accidentally stumble onto evil.com. This web page contains malicious JavaScript that tries to look for web servers on your local home network and then send their content back to evil.com. It does this by trying to open XMLHttpRequests to all local IP addresses (e.g. 192.168.1.1, 192.168.1.2, ..., 192.168.1.255) until it finds a web server.
If you are using an old web browser that isn't Access-Control-Allow-Origin aware, or if you have set Access-Control-Allow-Origin: * on your privateHomeServer, then your browser will happily retrieve the data from privateHomeServer (which presumably you didn't bother password-protecting, since it was safely behind your home firewall) and hand that data to the malicious JavaScript, which can then send the information on to the evil.com server.
On the other hand, with an Access-Control-Allow-Origin-aware browser and the default web configuration on privateHomeServer (i.e. not sending Access-Control-Allow-Origin: *), your web browser will block the malicious JavaScript from seeing any data retrieved from privateHomeServer. So you are protected from such attacks unless you go out of your way to change the default configuration on your server.
Regarding the question:
Shouldn't the page I request which references those cross-domain
resources be telling the browser it's safe to get stuff from the other
domain?
The fact that your page contains code that is attempting to get resources from a particular server is implicitly telling the web browser that you believe the resources are safe to fetch. It wouldn't make sense to need to repeat this again elsewhere.
CORS only makes sense for mashup content providers, nothing more.
Example: you are the provider of an embedded maps mashup service which requires registration. Now you want to make sure that your AJAX mashup map will only work for your registered users on their domains; other domains should be excluded. Only for this reason does CORS make sense.
Another example: someone misuses CORS for a REST service. A clever developer sets up an AJAX proxy and, voilà, you can access that service from every domain.
Such an AJAX proxy would make no sense for a mashup; on the other hand, CORS makes no sense for REST services, because you could bypass the restriction with a simple HTTP client.
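To make that concrete: CORS is enforced only by the browser, so a plain HTTP client never even looks at the header. A sketch (the service URL is hypothetical):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class CorsBypassDemo
{
    // A plain HTTP client simply ignores Access-Control-Allow-Origin;
    // the header only restricts what browsers hand to page scripts.
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Hypothetical REST endpoint that sends no CORS headers at all.
            string body = await client.GetStringAsync("https://rest-service.example/api/data");
            Console.WriteLine(body); // delivered regardless of any CORS policy
        }
    }
}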
The scenario goes like this:
The main parts of the web site are on one server. All traffic goes over https. I have no control over this server.
Themes use CSS files and images from another server, also over https. I have full control over this server.
How vulnerable is the main site (how and why) if the CSS files and images were to go over http? I am asking only about CSS and images.
I don't know how relevant it is, but the server is Apache and the language is PHP.
Edit:
So far, the answer is: there is a man-in-the-middle attacker who can change the CSS and thus hide my content, introduce new images, and add more text.
But they cannot create live links or add JS...
Here is a good discussion about this topic started by symcbean.
Any unencrypted HTTP connection can potentially be intercepted and modified by a man in the middle. That means any resource you're retrieving via an HTTP connection is untrustworthy: it cannot be confirmed that it is the original resource as intended. An attacker may therefore be able to include resources in your page which you did not intend to include.
In the case of CSS files, content on your site can be altered (display: none, content: "Please go to example.com and enter your password"); in the case of images, exploits may be introduced (through buggy image decoding client-side); in the case of JavaScript, entirely arbitrary behaviour may be injected (e.g. sending all keystrokes to a 3rd-party server).
A third party could modify that CSS or those images to convey different things, either by tampering with the data on the fly or by spoofing the target server. The browser would not know whether it is getting them from a reliable source, and would probably complain about mixed content. CSS3 also has many features that can pull in pictures from another domain or include unintended content.
I'm thinking about my website architecture, which uses https. I currently have a CDN server hosting images, CSS, and other static files.
The website itself uses HTTPS to secure sensitive customer data. Will using static images loaded from, for example, 'http://cdn.example.com/images/test.jpg' on the website 'https://www.example.com' pop up a "Loading insecure data" message?
So: loading external NOT SECURED data on a SECURED website.
Will this cause a popup warning "Loading insecure data, continue?"?
Thanks!
Yes.
If a page is loaded over HTTPS then every resource it uses should also be loaded over HTTPS.
Otherwise a man-in-the-middle could replace images with misleading ones (or ones that exploit buffer overflow issues in browsers to execute code) and scripts with ones that do different things (such as leaking data to a third party).
You have to load every resource over https to get rid of that warning. You can either move the resources to your own server, which supports encryption, or link to the external resources over https.
If you really want to load http content in an https page, you can follow this method, using a backend handler in charge of downloading the required content and exposing it through self-forged links that include a hash. The security issue is then fixed, and the content becomes accessible through https.
Dealing with HTTP content in HTTPS pages
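A rough sketch of such a handler in classic ASP.NET (all names are illustrative; LinkSigner stands in for whatever hash scheme forges and verifies the links, and without a strict whitelist a handler like this becomes an open proxy):

using System;
using System.Net;
using System.Web;

// Hypothetical handler: /HttpProxy.ashx?url=...&hash=...
// Downloads an http resource server-side and re-serves it over https.
public class HttpProxyHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string url = context.Request.QueryString["url"];
        string hash = context.Request.QueryString["hash"];

        // The hash must have been computed server-side when the link was
        // forged; reject anything we did not sign ourselves.
        if (!LinkSigner.IsValid(url, hash)) // LinkSigner is an assumption
        {
            context.Response.StatusCode = 403;
            return;
        }

        using (var client = new WebClient())
        {
            byte[] data = client.DownloadData(url);
            context.Response.ContentType =
                client.ResponseHeaders[HttpResponseHeader.ContentType];
            context.Response.BinaryWrite(data);
        }
    }

    public bool IsReusable { get { return false; } }
}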
I did this recently.
I have a Raspberry Pi loaded with nginx and PHP.
I use https to handle requests from the web to the PHP code, which in turn sends http requests to my local network to assemble the page. Works well.
I have a stylesheet that loads images from an external domain, and I need it to load over https:// on secure order pages and over http:// on other pages, based on the current URL. I found that starting the URL with a double slash inherits the current protocol. Do all browsers support this technique?
HTML ex:
<img src="//cdn.domain.example/logo.png" />
CSS ex:
.class { background: url(//cdn.domain.example/logo.png); }
If the browser supports RFC 1808 Section 4, RFC 2396 Section 5.2, or RFC 3986 Section 5.2, then it will indeed use the page URL's scheme for references that begin with "//".
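You can watch that resolution algorithm outside a browser, too; .NET's Uri class implements the RFC 3986 rules (a sketch, reusing the CDN domain from the question):

using System;

class ProtocolRelativeDemo
{
    static void Main()
    {
        // RFC 3986 section 5.2: a reference beginning with "//" is a
        // network-path reference; it inherits only the base URL's scheme.
        var securePage = new Uri("https://www.example.com/order/checkout");
        var plainPage = new Uri("http://www.example.com/index.html");

        Console.WriteLine(new Uri(securePage, "//cdn.domain.example/logo.png"));
        // https://cdn.domain.example/logo.png
        Console.WriteLine(new Uri(plainPage, "//cdn.domain.example/logo.png"));
        // http://cdn.domain.example/logo.png
    }
}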
When used on a <link> or @import, IE7/IE8 will download the file twice, per http://paulirish.com/2010/the-protocol-relative-url/
Update from 2014:
Now that SSL is encouraged for everyone and doesn’t have performance concerns, this technique is now an anti-pattern. If the asset you need is available on SSL, then always use the https:// asset.
One downside occurs if your URLs are viewed outside the context of a web page. For example, an email message sitting in an email client (say, Outlook) effectively has no URL, and when you're viewing a message containing a protocol-relative URL, there is no obvious protocol context at all (the message itself is independent of the protocol used to fetch it, whether it's POP3, IMAP, Exchange, uucp or whatever) so the URL has no protocol to be relative to. I've not investigated compatibility with email clients to see what they do when presented with a missing protocol handler - I'm guessing that most will take a guess at http. Apple Mail refuses to let you enter a URL without a protocol. It's analogous to the way that relative URLs do not work in email because of a similarly missing context.
Similar problems could occur in other non-HTTP contexts such as in tweets, SMS messages, Word documents etc.
The more general explanation is that anonymous protocol URLs cannot work in isolation; there must be a relevant context. In a typical web page it's thus fine to pull in a script library that way, but any external links should always specify a protocol. I did try one simple test: //stackoverflow.com maps to file:///stackoverflow.com in all browsers I tried it in, so they really don't work by themselves.
The reason could be to provide portable web pages. If the outer page is not transported encrypted (http), why should the linked scripts be encrypted? That would seem an unnecessary performance loss. If, however, the outer page is transported encrypted (https), then the linked content should be encrypted too. If the page is encrypted but the linked content is not, IE issues a Mixed Content warning. The reason is that an attacker could manipulate the scripts in transit. See http://ie.microsoft.com/testdrive/Browser/MixedContent/Default.html?o=1 for a longer discussion.
The HTTPS Everywhere campaign from the EFF suggests using https whenever possible. These days we have the server capacity to serve web pages encrypted at all times.
Just for completeness. This was mentioned in another thread:
The "two forward slashes" are a common shorthand for "whatever protocol is being used right now"
if (plain http environment) {
    use 'http://example.com/my-resource.js'
} else {
    use 'https://example.com/my-resource.js'
}
Please check the full thread.
It seems to be a pretty common technique now. There is no downside; it only helps to unify the protocol for all assets on the page, so it should be used wherever possible.
I read about "HTTP persistent connections", but somehow I don't seem to understand what persistent means in this context.
Could you elaborate?
It means the server doesn't close the socket once it's finished pushing out the response (so the length of the response has to be otherwise indicated, via headers or chunking), so the client can make other requests on the same socket. A web page often requests several other pieces (images, CSS, scripts, ...) on the same server as the page itself, so reusing the socket for some of those further requests to the same server can reduce overall latency compared to closing the original socket and opening new ones for all the follow-on requests.
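A hand-rolled sketch that makes the reuse visible (raw HTTP/1.1 over one socket; a real client should use an HTTP library, which pools persistent connections for you):

using System;
using System.Net.Sockets;
using System.Text;

class PersistentConnectionDemo
{
    static void Main()
    {
        // One TCP connection, two HTTP requests. HTTP/1.1 connections are
        // persistent by default, so the server leaves the socket open.
        using (var client = new TcpClient("example.com", 80))
        using (var stream = client.GetStream())
        {
            SendGet(stream, "/");
            Console.WriteLine(ReadStatusLine(stream)); // e.g. "HTTP/1.1 200 OK"

            // Because the response's length was indicated (Content-Length or
            // chunking), the client knows where it ends and can reuse the
            // same socket for the next request. (Full response parsing is
            // omitted here for brevity.)
            SendGet(stream, "/favicon.ico");
        }
    }

    static void SendGet(NetworkStream stream, string path)
    {
        byte[] request = Encoding.ASCII.GetBytes(
            "GET " + path + " HTTP/1.1\r\nHost: example.com\r\n\r\n");
        stream.Write(request, 0, request.Length);
    }

    static string ReadStatusLine(NetworkStream stream)
    {
        // Read just the status line for demonstration purposes.
        var line = new StringBuilder();
        int b;
        while ((b = stream.ReadByte()) != -1 && b != '\n')
            line.Append((char)b);
        return line.ToString().TrimEnd('\r');
    }
}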
All the discussion so far has been from the browser's side of things. The browser first requests the actual page, parses it, and finds all the other resources it needs before it can render that page. It then requests these resources and their dependent resources one by one. So maintaining a persistent connection is very efficient here, as the overhead of creating and destroying connections is avoided.
From the web server's side of things, a persistent connection would be one that allows it to "push" content to the web browser. HTTP doesn't support this, so there are a few workarounds with JavaScript where the page is basically refreshed after a while.
You can see this trick being used by many web-based email providers, which continuously check in the background for new mail. This gives the feeling that when new mail arrives, the server "pushes" the notification to the web browser, but it is actually the browser that keeps polling the server for new mail.
Another point I would like to make is that we don't actually see any full page refresh; that's because of another trick which allows only specific parts of the page to be refreshed by the request. (Hint: AJAX)
I think this is about switching between http and https for the website in the browser. If you previously used https:// and are now serving http via your .htaccess file, this problem can be created by the Yoast plugin's one-page crawl; don't worry, it is not an important error. For hackers, though, this is a way to attack your website: if your SSL configuration is incomplete, they can attach their own page or domain to your connection,
e.g. you serve http://www.example.com, and when you browse https://www.example.com in the browser there are other links that open under your site's domain.
The solution is to always use the full address for your website: to protect it against hackers, use SSL and https:// pages everywhere.
Then this problem should never appear on any test site or page.