What is the point of doing this?
I want a reason why it's a good idea to send a person back to where they came from if the referrer is outside of the domain. I want to know why a handful of websites out there insist that this is good practice. It's easily exploitable, easily bypassed by anyone who's logging in with malicious intent, and just glares in my face as a useless "security" measure. I don't like to have my biased opinions on things without other input, so explain this one to me.
The request headers are only as trustworthy as your client, why would you use them as a means of validation?
There are three reasons why someone might want to do this. Checking the referer is a method of CSRF prevention. A site may not want people to deep-link to sensitive content, and so uses this check to bounce the browser back. It may also be used to keep spiders away from content the publisher wishes to restrict.
I agree it is easy to bypass this referer restriction on your own browser using something like TamperData. It should also be noted that the browser's HTTP request will not contain a referer if you're coming from an https:// page going to an http:// page.
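As a rough sketch of the CSRF-prevention case (in PHP; the domain is a placeholder, and as just noted the header can be absent or tampered with, so treat this as defence in depth at best):

    <?php
    // Minimal sketch of a referer-based CSRF check on a state-changing request.
    // "www.example.com" stands in for your own domain.
    $referer = $_SERVER['HTTP_REFERER'] ?? '';
    $host    = parse_url($referer, PHP_URL_HOST);

    if ($host !== 'www.example.com') {
        http_response_code(403);             // bounce requests coming from elsewhere
        exit('Cross-site request rejected.');
    }
    // ... perform the state-changing action ...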
Let's say that a page is just printing the value of the HTTP 'referer' header with no escaping. So the page is vulnerable to an XSS attack, i.e. an attacker can craft a GET request with a referer header containing something like <script>alert('xss');</script>.
But how can you actually use this to attack a target? How can the attacker make the target issue that specific request with that specific header?
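For concreteness, the vulnerable page might be no more than this (a hypothetical PHP sketch):

    <?php
    // Hypothetical vulnerable page: the raw Referer value is written
    // straight into the HTML response, with no escaping at all.
    echo 'You came from: ' . ($_SERVER['HTTP_REFERER'] ?? '');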
This sounds like a standard reflected XSS attack.
In reflected XSS attacks, the attacker needs the victim to visit some site which in some way is under the attacker's control. Even if this is just a forum where an attacker can post a link in the hope somebody will follow it.
In the case of a reflected XSS attack with the referer header, then the attacker could redirect the user from the forum to a page on the attacker's domain.
e.g.
http://evil.example.com/?<script>alert(123)</script>
This page in turn redirects to the following target page in a way that preserves referer.
http://victim.example.org/vulnerable_xss_page.php
Because this page shows the referer header without proper escaping, http://evil.example.com/?<script>alert(123)</script> gets output within the HTML source, executing the alert. Note this works in Internet Explorer only.
Other browsers will automatically URL-encode the payload, rendering
%3cscript%3ealert%28123%29%3c/script%3e
instead, which is safe.
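To make the redirect step concrete: a plain server-side 302 from evil.example.com would typically keep the original referer, so the attacker needs a client-side navigation to put the crafted URL itself into the Referer header. A hypothetical sketch of the attacker's page:

    <?php
    // Hypothetical attacker page, served at:
    //   http://evil.example.com/?<script>alert(123)</script>
    // A client-side navigation (unlike a server-side 302) typically makes
    // this page's full URL, payload included, the Referer of the next request.
    ?>
    <script>location = "http://victim.example.org/vulnerable_xss_page.php";</script>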
I can think of a few different attacks; maybe there are more, which others will hopefully add. :)
If your XSS is just some header value reflected unencoded in the response, I would say that's less of a risk than stored XSS. There may be factors to consider, though. For example, if it's a header that the browser adds and that can be set in the browser (like the user agent), an attacker could get access to a client computer, change the user agent, and then let a normal user use the website, now with the attacker's JavaScript injected. Another example that comes to mind: a website may display the URL that redirected you there (the referer), in which case the attacker only has to link to the vulnerable application from his carefully crafted URL. These are edge cases, though.
If it's stored, that's more straightforward. Consider an application that logs user access with all request headers, and suppose there is an internal web application that admins use to inspect the logs. If this log viewer is vulnerable, any JavaScript from any request header could run in the admin context. Obviously this is just one example; it doesn't need to be blind, of course.
Cache poisoning may also help with exploiting a header XSS.
Another thing I can think of is browser plugins. Flash is less prevalent now (thankfully), but with different versions of Flash you could set different request headers on your requests. What exactly you can and cannot set is a mess and very confusing across Flash plugin versions.
So there are several attacks, and it is necessary to treat all headers as user input and encode them accordingly.
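As a rough illustration of that encoding, a log viewer like the one described above might render stored headers along these lines ($log_entries is a made-up variable):

    <?php
    // Hypothetical admin log viewer: every stored header value is
    // HTML-encoded on output, so a <script> payload in User-Agent or
    // Referer renders as inert text rather than executing.
    foreach ($log_entries as $entry) {
        foreach ($entry['headers'] as $name => $value) {
            echo htmlspecialchars($name, ENT_QUOTES, 'UTF-8') . ': '
               . htmlspecialchars($value, ENT_QUOTES, 'UTF-8') . "<br>\n";
        }
    }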
Exploiting XSS in the referrer header is almost like a traditional reflected XSS. The one additional point is that the attacker's website redirects to the victim website, so a referrer header carrying the required JavaScript is attached to the request to the victim site.
One essential point worth discussing here: why can this be exploited only with IE, and not with other browsers?
The traditional answer is that Chrome and Firefox automatically encode URL parameters, so XSS is not possible. But here's the interesting thing: we have any number of bypasses for traditional XSS filters, so why can't we have bypasses for this scenario too?
Yes, we can, with the following payload, which works the same way as an HTML-validation bypass does for a traditional payload:
http://evil.example.com/?alert(1)//cctixn1f
Here the response could be something like this, with the referrer reflected inside the link's href:

The link on the <a href="http://evil.example.com/?alert(1)//cctixn1f">referring</a> page seems to be wrong or outdated.
If the victim clicks on the referring link, the alert will be generated.
Bottom line: not just IE; XSS can be possible even in Firefox and other browsers when the referrer is used as part of an href attribute.
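The vulnerable pattern being described would look something like this hypothetical PHP fragment. The referer lands inside an href, where the browser has already URL-encoded any angle brackets, so a bypass has to stay within URL syntax (the page text and variable name are made up):

    <?php
    // Hypothetical error page that reflects the referer inside an href.
    // HTML-encoding the value stops attribute breakouts, but the URL itself
    // still needs validating, since a payload can stay within URL grammar.
    $ref = $_SERVER['HTTP_REFERER'] ?? '';
    echo 'The link on the <a href="' . $ref . '">referring</a> page seems to be wrong or outdated.';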
We use header('Location: ...') to redirect on a lot of requests, but about 4% of requests don't follow it.
Do you have any idea why?
Our requests come from all countries.
The Location header needs to be implemented by every browser. It's part of HTTP/1.1, so if anything wants to call itself a "web browser" then it needs to implement the Location header. So, answering your question: every web browser can use Location:.
Still, there are a few things you need to consider:
The first thing to remember is that header('Location: ...') needs to use absolute URIs, as relative URLs might not be supported or might behave incorrectly in different browsers (old IE had issues with that; according to RFC 2616 the Location header needs to be absolute). So it might be worth checking that you always use absolute URLs in redirects, as in the sketch after this list.
The second thing is that your tracking system might not work correctly. If someone uses a do-not-track policy or edits their HTTP referrer, then your tracking system might be fooled into thinking the redirect did not occur. It still happens; only your tracker won't see it.
The third and final thing is web crawlers, which might ignore headers completely (they almost never do, but it might be one of those rare cases where someone has an amateur spam bot trying to crawl your site) or send incorrect responses.
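For the first point, a minimal sketch of a redirect done that way (the target URL is a placeholder):

    <?php
    // Always redirect with an absolute URI, per RFC 2616, and stop the
    // script afterwards so no stray output follows the header.
    header('Location: https://www.example.com/target-page', true, 302);
    exit;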
I agree that 4% is oddly high, but it might come even from a single long crawl of your website by some dodgy bot.
Hope it helps!
A page we have is visited by users from two domains; let's call them x.com and y.com.
I want some of the code to display only when the user visits from y.com. How do I do this in the same VBScript file? Or do I HAVE to have separate files?
I was thinking something like
if request.SOMETHING.contains("x") then etc
Try Request.ServerVariables("HTTP_REFERER").
You'll notice that REFERER is misspelled; that's because HTTP_REFERER was set in stone in RFC 1945 before anyone caught the spelling error.
In addition to checking the referer, as others have suggested, you could also put a value in the URL when calling the page, indicating where you have come from (assuming you have access to the pages you are linking from).
This is easier for a malicious or just curious user to mess with than the HTTP referrer, so in some ways it is less reliable. However, you should bear in mind that the HTTP referrer isn't a guaranteed solution anyway (a browser might not send it, security programs might strip out the header, etc.), and any user who manually edits things in the querystring that they shouldn't be playing with has no grounds for complaint if things stop working. As long as it won't be a security hole, it should be fine. And if changing the value would be a security hole, you shouldn't use the referrer either, since that can easily be modified by those with a mind to.
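A sketch of the idea, in PHP purely to show the shape (the thread itself is classic ASP, and the parameter name is made up):

    <?php
    // The linking page on y.com would append the flag itself, e.g.
    //   <a href="page.asp?from=y">...</a>
    // and the target page branches on it:
    if (($_GET['from'] ?? '') === 'y') {
        // show the y.com-specific content
    }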
It seems common in the Rails community, at least, to respond to successful POST, PUT or DELETE requests by redirecting instead of returning success. For instance, if I PUT a legal change to my user profile, the idiomatic response would be a 302 Redirect to the profile page.
Isn't this wrong? Shouldn't we be returning 200 OK from the request? Or a 201 Created, in the case of a POST request? Either of those, in the HTTP/1.1 Status Definitions are allowed to (or required to) include a response, anyway.
I guess I'm wondering, before I go and "fix" my application, whether there's a darn good reason why the community has gone the way of redirects instead of success responses.
I'll assume, your use of the PUT verb notwithstanding, that you're talking about a web app that will be accessed primarily through the browser. In that case, the usual reason for following up a POST with a redirect is the post-redirect-get pattern, which avoids duplicate requests caused by a user refreshing or using the back and forward controls of their browser. It seems that in many instances this pattern is overloaded by redirecting not to a success page, but to the next most likely place the user would visit. I don't think either way you mention is necessarily wrong, but doing the redirect may be more user-friendly at the expense of not strictly adhering to the semantics of HTTP.
It's called the POST-Redirect-GET (PRG) pattern. This pattern prevents clients from (accidentally) re-executing non-idempotent requests when, for example, navigating back and forth in the browser's history.
It's good general web development practice that doesn't only apply to RoR. I'd just keep it as is.
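A minimal PRG sketch in PHP (update_profile() is a hypothetical helper standing in for the non-idempotent work):

    <?php
    if ($_SERVER['REQUEST_METHOD'] === 'POST') {
        update_profile($_POST);   // the non-idempotent part
        // 303 See Other tells the browser to fetch the result with GET,
        // so a refresh or back/forward navigation cannot re-submit the POST.
        header('Location: /profile', true, 303);
        exit;
    }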
In a perfect world, yes, probably. However HTTP clients and servers are a mess when it comes to standardization and don't always agree on proper protocol. Redirecting after a post helps avoid things like duplicate form submissions.
I do not believe this is possible, but I figure there are people out there way smarter than me, so why not check ..
I would like to have an HTTP image that is viewable from within a page when used w/in an img tag, but NOT visible if the img src link is called directly. Does that make sense? Viewable in page, but not if called directly.
Quick edit .. an acceptable alternative is to embed the image in the page in such a way that the URL is not human-readable / can't be extracted and typed into a browser.
Update 2 ... .NET / IIS7 environment.
Note that "security" products such as Norton Internet Security and Norton Personal Firewall prevent the HTTP Referer: (TBL's spelling mistake, not mine) header being sent by default. As these products are widely used, referrer blocking will break things for an awful lot of people.
FWIW, if I was keen to get your image other than by viewing your page (although I can't imagine why I should be) I would just grab the bits as they came over the network when I viewed your page, using something like Charles or Fiddler. It's completely impossible to make content available over the web but prevent people from making a copy.
I believe that you can achieve something like this by relying on the referrer header supplied by the browser - when the referrer is a web page on your own site, you serve up the image, but not otherwise.
It's not 100% reliable (as passing the referrer isn't mandatory in the HTTP spec) but works well enough for some sites.
This is achieved through configuration of your webserver; you therefore might have more luck asking this on ServerFault.
Yes, there are lots of articles on how to set up mod_rewrite rules in Apache to try to prevent direct access to files.
http://www.cyberciti.biz/faq/apache-mod_rewrite-hot-linking-images-leeching-howto/
It depends on how it gets built. You can always check that the referrer is the page you expect the image to be embedded in, and lock down requests there.
If you have some notion of authentication, you could bury the image behind some type of PHP/Ruby script or ASP.NET HTTP handler that fetches the image from the server or database (from a place that is not publicly viewable but is reachable by your server-side code), and that handler could check your authentication status before returning it.
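A hypothetical sketch of that handler approach in PHP (the path and session key are made up; the file sits outside the web root so it has no direct URL):

    <?php
    // Stream the image only to authenticated sessions; a direct request
    // without a valid session gets a 403 instead of the image bytes.
    session_start();
    if (empty($_SESSION['user_id'])) {
        http_response_code(403);
        exit;
    }
    header('Content-Type: image/png');
    readfile('/srv/private/images/board.png');   // not reachable via any URL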
Frankly, I reworked my solution so I didn't really have to worry about it ... I know that's a cop-out, as it doesn't REALLY answer the question, but there it is. My concern was that users would be able to defraud the "game" I was creating if they could figure out the sequence being used to name the images. Quick and dirty solution: don't make image file names sequential / predictable.